
Required MetroCluster IP components and naming conventions


When planning your MetroCluster IP configuration, you must understand the required and supported hardware and software components. For convenience and clarity, you should also understand the naming conventions used for components in examples throughout the documentation.

Supported software and hardware

The hardware and software must be supported for the MetroCluster IP configuration.

When using AFF systems, all controller modules in the MetroCluster configuration must be configured as AFF systems.

Hardware redundancy requirements in a MetroCluster IP configuration

Because of the hardware redundancy in the MetroCluster IP configuration, there are two of each component at each site. The sites are arbitrarily assigned the letters A and B, and the individual components are arbitrarily assigned the numbers 1 and 2.

ONTAP cluster requirements in a MetroCluster IP configuration

MetroCluster IP configurations require two ONTAP clusters, one at each MetroCluster site.

Naming must be unique within the MetroCluster configuration.

Example names:

  • Site A: cluster_A

  • Site B: cluster_B

IP switch requirements in a MetroCluster IP configuration

MetroCluster IP configurations require four IP switches. The four switches form two storage fabrics that provide the ISLs between the two clusters in the MetroCluster IP configuration.

The IP switches also provide intracluster communication among the controller modules in each cluster.

Naming must be unique within the MetroCluster configuration.

Example names:

  • Site A: cluster_A

    • IP_switch_A_1

    • IP_switch_A_2

  • Site B: cluster_B

    • IP_switch_B_1

    • IP_switch_B_2

Controller module requirements in a MetroCluster IP configuration

MetroCluster IP configurations require four or eight controller modules.

The controller modules at each site form an HA pair. Each controller module has a DR partner at the other site.

Each controller module must be running the same ONTAP version. Supported platform models depend on the ONTAP version:

  • New MetroCluster IP installations on FAS systems are not supported in ONTAP 9.4.

    Existing MetroCluster IP configurations on FAS systems can be upgraded to ONTAP 9.4.

  • Beginning with ONTAP 9.5, new MetroCluster IP installations on FAS systems are supported.

  • Beginning with ONTAP 9.4, controller modules configured for ADP are supported.

Controller models limited to four-node configurations

The following models are supported only in four-node MetroCluster configurations; they cannot be used in an eight-node configuration.

  • AFF A220

  • AFF A250

  • FAS2750

  • FAS500f

For example, the following configurations are not supported:

  • An eight-node configuration consisting of eight AFF A250 controllers.

  • An eight-node configuration consisting of four AFF A220 controllers and four FAS500f controllers.

  • Two four-node MetroCluster IP configurations each consisting of AFF A250 controllers and sharing the same back-end switches.

  • An eight-node configuration consisting of DR Group 1 with AFF A250 controllers and DR Group 2 with FAS9000 controllers.

You can configure two separate four-node MetroCluster IP configurations with the same back-end switches if the second MetroCluster does not include any of the above models.
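The four-node-only restriction can be expressed as a simple check. The following Python sketch is purely illustrative (it is not a NetApp tool, and it does not cover the shared back-end switch case); the model names come from the list above:

```python
# Hypothetical validation sketch: reject MetroCluster IP configurations
# that include four-node-only models in an eight-node configuration.

FOUR_NODE_ONLY = {"AFF A220", "AFF A250", "FAS2750", "FAS500f"}

def config_is_supported(controller_models):
    """controller_models: one entry per controller module (4 or 8 entries)."""
    if len(controller_models) not in (4, 8):
        return False  # MetroCluster IP requires four or eight controller modules
    if len(controller_models) == 8 and FOUR_NODE_ONLY & set(controller_models):
        return False  # these models are limited to four-node configurations
    return True

print(config_is_supported(["AFF A250"] * 4))  # True
print(config_is_supported(["AFF A250"] * 8))  # False
```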

Example names

The following example names are used in the documentation:

  • Site A: cluster_A

    • controller_A_1

    • controller_A_2

  • Site B: cluster_B

    • controller_B_1

    • controller_B_2
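The naming conventions used throughout this documentation can be summarized with a small helper. This is an illustrative sketch only (the function and its argument names are assumptions; the resulting names such as cluster_A, IP_switch_A_1, and controller_B_2 come from the examples above):

```python
# Illustrative helper (not a NetApp tool): builds example component names
# following the conventions used in this documentation.

def component_name(kind: str, site: str, number: int = 1) -> str:
    """Return an example MetroCluster component name.

    kind   -- 'cluster', 'IP_switch', or 'controller'
    site   -- 'A' or 'B' (sites are arbitrarily assigned)
    number -- 1 or 2 (individual components are arbitrarily assigned)
    """
    if site not in ("A", "B"):
        raise ValueError("site must be 'A' or 'B'")
    if kind == "cluster":
        return f"cluster_{site}"
    if number not in (1, 2):
        raise ValueError("number must be 1 or 2")
    return f"{kind}_{site}_{number}"

print(component_name("controller", "A", 1))  # controller_A_1
print(component_name("IP_switch", "B", 2))   # IP_switch_B_2
```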

Gigabit Ethernet adapter requirements in a MetroCluster IP configuration

MetroCluster IP configurations use a 40/100 Gbps or 10/25 Gbps Ethernet adapter for the IP interfaces to the IP switches used for the MetroCluster IP fabric.

| Platform model | Required Gigabit Ethernet adapter | Required slot for adapter | Ports |
| --- | --- | --- | --- |
| AFF A900 and FAS9500 | | Slot 5, Slot 7 | e5b, e7b |
| AFF A700 and FAS9000 | | Slot 5 | e5a, e5b |
| AFF A800, AFF C800 | X1146A/onboard ports | Slot 1 | e0b, e1b |
| FAS8300, AFF A400, and AFF C400 | | Slot 1 | e1a, e1b |
| AFF A300 and FAS8200 | | Slot 1 | e1a, e1b |
| FAS2750, AFF A150, and AFF A220 | Onboard ports | Slot 0 | e0a, e0b |
| FAS500f, AFF A250, and AFF C250 | Onboard ports | Slot 0 | e0c, e0d |
| AFF A320 | Onboard ports | Slot 0 | e0g, e0h |
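For scripting purposes, the platform-to-port mapping in the table can be expressed as a lookup. The dictionary below is a hypothetical convenience (not a NetApp API); the model names and port pairs are taken directly from the table:

```python
# Illustrative lookup derived from the table above:
# MetroCluster IP interface ports by platform model.

MCIP_PORTS = {
    "AFF A900": ("e5b", "e7b"), "FAS9500": ("e5b", "e7b"),
    "AFF A700": ("e5a", "e5b"), "FAS9000": ("e5a", "e5b"),
    "AFF A800": ("e0b", "e1b"), "AFF C800": ("e0b", "e1b"),
    "FAS8300": ("e1a", "e1b"), "AFF A400": ("e1a", "e1b"),
    "AFF C400": ("e1a", "e1b"),
    "AFF A300": ("e1a", "e1b"), "FAS8200": ("e1a", "e1b"),
    "FAS2750": ("e0a", "e0b"), "AFF A150": ("e0a", "e0b"),
    "AFF A220": ("e0a", "e0b"),
    "FAS500f": ("e0c", "e0d"), "AFF A250": ("e0c", "e0d"),
    "AFF C250": ("e0c", "e0d"),
    "AFF A320": ("e0g", "e0h"),
}

print(MCIP_PORTS["AFF A400"])  # ('e1a', 'e1b')
```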

Pool and drive requirements (minimum supported)

Eight SAS disk shelves are recommended (four shelves at each site) to allow disk ownership on a per-shelf basis.

A four-node MetroCluster IP configuration requires the following minimum configuration at each site:

  • Each node has at least one local pool and one remote pool at the site.

  • At least seven drives in each pool.

    In a four-node MetroCluster configuration with a single mirrored data aggregate per node, the minimum configuration requires 24 disks at the site.

In a minimum supported configuration, each pool has the following drive layout:

  • Three root drives

  • Three data drives

  • One spare drive
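The layout above totals seven drives per pool, matching the "at least seven drives in each pool" requirement. A trivial tally, purely illustrative:

```python
# Minimum supported drive layout per pool, as described above.
MIN_POOL_LAYOUT = {"root": 3, "data": 3, "spare": 1}

drives_per_pool = sum(MIN_POOL_LAYOUT.values())
print(drives_per_pool)  # 7
```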

In a minimum supported configuration, at least one shelf is needed per site.

MetroCluster configurations support RAID-DP and RAID4.

Drive location considerations for partially populated shelves

For correct auto-assignment of drives when using shelves that are half populated (12 drives in a 24-drive shelf), drives should be located in slots 0-5 and 18-23.

In a configuration with a partially populated shelf, the drives must be evenly distributed in the four quadrants of the shelf.
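The recommended half-populated layout can be verified with a short check. This is an assumed helper for illustration (not NetApp software); the valid slot ranges 0-5 and 18-23 come from the guidance above:

```python
# Illustrative check: for a half-populated 24-drive shelf, auto-assignment
# expects the 12 drives to sit in slots 0-5 and 18-23.

VALID_HALF_POPULATED_SLOTS = set(range(0, 6)) | set(range(18, 24))

def half_populated_layout_ok(occupied_slots):
    """True if exactly 12 slots are occupied and all are in the valid ranges."""
    occupied = set(occupied_slots)
    return len(occupied) == 12 and occupied <= VALID_HALF_POPULATED_SLOTS

print(half_populated_layout_ok(list(range(0, 6)) + list(range(18, 24))))  # True
print(half_populated_layout_ok(list(range(0, 12))))                       # False
```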

Drive location considerations for AFF A800 internal drives

For correct implementation of the ADP feature, the AFF A800 system disk slots must be divided into quarters and the disks must be located symmetrically in the quarters.

An AFF A800 system has 48 drive bays. The bays can be divided into quarters:

  • Quarter one:

    • Bays 0 - 5

    • Bays 24 - 29

  • Quarter two:

    • Bays 6 - 11

    • Bays 30 - 35

  • Quarter three:

    • Bays 12 - 17

    • Bays 36 - 41

  • Quarter four:

    • Bays 18 - 23

    • Bays 42 - 47

If this system is populated with 16 drives, they must be symmetrically distributed among the four quarters:

  • Four drives in the first quarter: 0, 1, 2, 3

  • Four drives in the second quarter: 6, 7, 8, 9

  • Four drives in the third quarter: 12, 13, 14, 15

  • Four drives in the fourth quarter: 18, 19, 20, 21
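The quarter boundaries above follow a regular pattern (each quarter owns two runs of six bays, offset by 24), so the symmetry requirement can be sketched in code. The helper functions below are assumptions for illustration, not NetApp software; the bay ranges and the 16-drive example come from the text above:

```python
# Sketch: map an AFF A800 drive bay (0-47) to its quarter and check that
# drives are distributed symmetrically among the four quarters.

def bay_quarter(bay: int) -> int:
    """Quarter 1-4 for a bay 0-47. Quarter q covers bays 6*(q-1)..6*q-1
    and bays 24+6*(q-1)..24+6*q-1, matching the layout above."""
    if not 0 <= bay <= 47:
        raise ValueError("AFF A800 bays are numbered 0-47")
    return (bay % 24) // 6 + 1

def symmetric(bays):
    """True if every quarter holds the same number of drives."""
    counts = [0, 0, 0, 0]
    for bay in bays:
        counts[bay_quarter(bay) - 1] += 1
    return len(set(counts)) == 1

# The 16-drive example from the text: four drives per quarter.
example = [0, 1, 2, 3, 6, 7, 8, 9, 12, 13, 14, 15, 18, 19, 20, 21]
print(symmetric(example))  # True
```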

Mixing IOM12 and IOM6 modules in a stack

Your version of ONTAP must support shelf mixing. Refer to the NetApp Interoperability Matrix Tool (IMT) to confirm that it does.