FlexPod

Technical specifications for small, medium and large architectures

This section provides a sample bill of materials (BoM) for small, medium, and large storage architectures.

The FlexPod design is a flexible infrastructure that encompasses many different components and software versions. Use TR-4036: FlexPod Technical Specifications as a guide to assembling a valid FlexPod configuration. The configurations in the table below are the minimum requirements for FlexPod and are only a sample; they can be expanded within each product family as required for different environments and use cases.

For this sizing exercise, small corresponds to a Category 3 MEDITECH environment, medium to Category 5, and large to Category 6.

| | Small | Medium | Large |
| --- | --- | --- | --- |
| Platform | One NetApp AFF A220 all-flash storage system HA pair | One NetApp AFF A220 all-flash storage system HA pair | One NetApp AFF A300 all-flash storage system HA pair |
| Disk shelves | 9 x 3.8TB | 13 x 3.8TB | 19 x 3.8TB |
| MEDITECH database size | 3TB-12TB | 17TB | >30TB |
| MEDITECH IOPS | <22,000 | >25,000 | >32,000 |
| Total IOPS | 22,000 | 27,000 | 35,000 |
| Raw capacity | 34.2TB | 49.4TB | 68.4TB |
| Usable capacity | 18.53TiB | 27.96TiB | 33.82TiB |
| Effective capacity (2:1 storage efficiency) | 55.6TiB | 83.89TiB | 101.47TiB |
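
The capacity rows follow a simple pattern: raw capacity is drive count multiplied by drive size (in decimal TB), usable capacity is what remains after RAID-DP parity, spares, and reserves (reported in binary TiB), and effective capacity applies a storage-efficiency multiplier to the usable figure. The following Python sketch illustrates that arithmetic as a cross-check on the table only; the usable figures are taken from the table rather than derived, because the parity and reserve overheads are not modeled here.

```python
# Minimal sketch of the capacity arithmetic behind the table above.
# Usable capacity is taken from the table, not derived, because it depends
# on RAID-DP parity, spares, and reserves that this sketch does not model.

TB_PER_TIB = 2**40 / 10**12  # 1 TiB = ~1.0995 decimal TB

def raw_tb(drives: int, size_tb: float) -> float:
    """Raw capacity in decimal TB for a set of identical SSDs."""
    return drives * size_tb

def effective_tib(usable_tib: float, ratio: float) -> float:
    """Effective capacity after applying a storage-efficiency ratio."""
    return usable_tib * ratio

small_raw = raw_tb(9, 3.8)               # 34.2 TB, matching the small column
print(round(small_raw / TB_PER_TIB, 2))  # ~31.1 TiB expressed in binary units
print(effective_tib(18.53, 2.0))         # 37.06 TiB at a strict 2:1 ratio
```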

Note: Some customer environments might run multiple MEDITECH production workloads simultaneously or might have higher IOPS requirements. In such cases, work with the NetApp account team to size the storage systems according to the required IOPS and capacity and to determine the right platform for the workloads. For example, some customers successfully run multiple MEDITECH environments on a NetApp AFF A700 all-flash storage system HA pair.
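
As a first-pass check before that conversation, the thresholds in the table can be encoded directly. The helper below is a hypothetical illustration that uses only the example figures above; it is not a NetApp sizing tool, and the category boundaries are assumptions drawn from this single table.

```python
# Hypothetical pre-check that maps a workload onto the example configurations
# above; real sizing should be done with the NetApp account team.

def suggest_size(required_iops: int, database_tb: float) -> str:
    """Return the smallest example configuration that covers the workload."""
    if required_iops < 22_000 and database_tb <= 12:
        return "small (Category 3, AFF A220)"
    if required_iops <= 27_000 and database_tb <= 17:
        return "medium (Category 5, AFF A220)"
    if required_iops <= 35_000:
        return "large (Category 6, AFF A300)"
    return "beyond these examples; engage NetApp (for example, AFF A700)"

print(suggest_size(24_000, 15))  # -> medium (Category 5, AFF A220)
```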

The following table shows the standard software required for MEDITECH configurations.

| Software | Product family | Version or release | Details |
| --- | --- | --- | --- |
| Storage | ONTAP | ONTAP 9.4 | General availability (GA) release |
| Network | Cisco UCS fabric interconnects | Cisco UCSM 4.x | Current recommended release |
| Network | Cisco Nexus Ethernet switches | 7.0(3)I7(6) | Current recommended release |
| Network | Cisco FC: Cisco MDS 9132T | 8.3(2) | Current recommended release |
| Hypervisor | VMware vSphere | ESXi 6.7 | |
| Hypervisor | Virtual machines (VMs) | Windows 2016 | |
| Management | Hypervisor management system | VMware vCenter Server 6.7 U1 (VCSA) | |
| Management | NetApp Virtual Storage Console (VSC) | VSC 7.0P1 | |
| Management | NetApp SnapCenter | SnapCenter 4.0 | |
| Management | Cisco UCS Manager | 4.x | |
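
Teams that audit deployments against a support matrix sometimes encode it as data. The sketch below is a hypothetical example of that pattern; the component names and the audit helper are illustrative assumptions, not part of any NetApp or Cisco tool.

```python
# Hypothetical version matrix derived from the table above, plus a helper
# that flags components whose observed release differs from the matrix.

EXPECTED_RELEASES = {
    "ONTAP": "9.4",
    "Cisco UCSM": "4.x",
    "Cisco Nexus NX-OS": "7.0(3)I7(6)",
    "Cisco MDS": "8.3(2)",
    "VMware ESXi": "6.7",
    "VMware vCenter (VCSA)": "6.7 U1",
    "NetApp VSC": "7.0P1",
    "NetApp SnapCenter": "4.0",
}

def audit(observed):
    """Return the components whose observed release does not match."""
    return [name for name, release in EXPECTED_RELEASES.items()
            if name in observed and observed[name] != release]

print(audit({"ONTAP": "9.3", "VMware ESXi": "6.7"}))  # -> ['ONTAP']
```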

The following table shows a small (Category 3) example configuration – infrastructure components.

| Layer | Product family | Quantity and model | Details |
| --- | --- | --- | --- |
| Compute | Cisco UCS 5108 chassis | 1 | Supports up to eight half-width or four full-width blades. Add chassis as server requirements grow. |
| Compute | Cisco chassis I/O modules | 2 x 2208 | 8 x 10GB uplink ports |
| Compute | Cisco UCS blade servers | 4 x B200 M5 | Each with 2 x 14 cores, 2.6GHz or higher clock speed, and 384GB memory; BIOS 3.2(3#) |
| Compute | Cisco UCS virtual interface cards (VICs) | 4 x UCS 1440 | VMware ESXi fNIC FC driver: 1.6.0.47; VMware ESXi eNIC Ethernet driver: 1.0.27.0. See the interoperability matrix: https://ucshcltool.cloudapps.cisco.com/public/ |
| Compute | Cisco UCS fabric interconnects (FIs) | 2 x UCS 6454 | Fourth-generation fabric interconnects supporting 10GB/25GB/100GB Ethernet and 32GB FC |
| Network | Cisco Ethernet switches | 2 x Nexus 9336C-FX2 | 1GB, 10GB, 25GB, 40GB, and 100GB ports |
| Storage network | IP network: Nexus 9k for BLOB storage | | FI and UCS chassis |
| Storage network | FC: Cisco MDS 9132T | 2 x Cisco MDS 9132T | |
| Storage | NetApp AFF A300 all-flash storage system | 1 HA pair | 2-node cluster for all MEDITECH workloads (File Server, Image Server, SQL Server, VMware, and so on) |
| Storage | DS224C disk shelf | 1 x DS224C | |
| Storage | SSD | 9 x 3.8TB | |

The following table shows a medium (Category 5) example configuration – infrastructure components.

| Layer | Product family | Quantity and model | Details |
| --- | --- | --- | --- |
| Compute | Cisco UCS 5108 chassis | 1 | Supports up to eight half-width or four full-width blades. Add chassis as server requirements grow. |
| Compute | Cisco chassis I/O modules | 2 x 2208 | 8 x 10GB uplink ports |
| Compute | Cisco UCS blade servers | 6 x B200 M5 | Each with 2 x 16 cores, 2.5GHz or higher clock speed, and 384GB or more memory; BIOS 3.2(3#) |
| Compute | Cisco UCS virtual interface cards (VICs) | 6 x UCS 1440 | VMware ESXi fNIC FC driver: 1.6.0.47; VMware ESXi eNIC Ethernet driver: 1.0.27.0. See the interoperability matrix: https://ucshcltool.cloudapps.cisco.com/public/ |
| Compute | Cisco UCS fabric interconnects (FIs) | 2 x UCS 6454 | Fourth-generation fabric interconnects supporting 10GB/25GB/100GB Ethernet and 32GB FC |
| Network | Cisco Ethernet switches | 2 x Nexus 9336C-FX2 | 1GB, 10GB, 25GB, 40GB, and 100GB ports |
| Storage network | IP network: Nexus 9k for BLOB storage | | |
| Storage network | FC: Cisco MDS 9132T | 2 x Cisco MDS 9132T | |
| Storage | NetApp AFF A220 all-flash storage system | 1 HA pair | 2-node cluster for all MEDITECH workloads (File Server, Image Server, SQL Server, VMware, and so on) |
| Storage | DS224C disk shelf | 1 x DS224C | |
| Storage | SSD | 13 x 3.8TB | |

The following table shows a large (Category 6) example configuration – infrastructure components.

| Layer | Product family | Quantity and model | Details |
| --- | --- | --- | --- |
| Compute | Cisco UCS 5108 chassis | 1 | Supports up to eight half-width or four full-width blades. Add chassis as server requirements grow. |
| Compute | Cisco chassis I/O modules | 2 x 2208 | 8 x 10GB uplink ports |
| Compute | Cisco UCS blade servers | 8 x B200 M5 | Each with 2 x 24 cores, 2.7GHz, and 768GB memory; BIOS 3.2(3#) |
| Compute | Cisco UCS virtual interface cards (VICs) | 8 x UCS 1440 | VMware ESXi fNIC FC driver: 1.6.0.47; VMware ESXi eNIC Ethernet driver: 1.0.27.0. See the interoperability matrix: https://ucshcltool.cloudapps.cisco.com/public/ |
| Compute | Cisco UCS fabric interconnects (FIs) | 2 x UCS 6454 | Fourth-generation fabric interconnects supporting 10GB/25GB/100GB Ethernet and 32GB FC |
| Network | Cisco Ethernet switches | 2 x Nexus 9336C-FX2 | 2 x Cisco Nexus 9332PQ; 1GB, 10GB, 25GB, 40GB, and 100GB ports |
| Storage network | IP network: Nexus 9k for BLOB storage | | |
| Storage network | FC: Cisco MDS 9132T | 2 x Cisco MDS 9132T | |
| Storage | NetApp AFF A300 all-flash storage system | 1 HA pair | 2-node cluster for all MEDITECH workloads (File Server, Image Server, SQL Server, VMware, and so on) |
| Storage | DS224C disk shelf | 1 x DS224C | |
| Storage | SSD | 19 x 3.8TB | |
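
To compare the three example configurations at a glance, the compute figures from the tables above can be tallied. This snippet is illustrative only and uses just the blade counts, core counts, and memory sizes already listed.

```python
# Aggregate compute per example configuration, from the tables above.

CONFIGS = {
    "small":  {"blades": 4, "cores_per_blade": 2 * 14, "mem_gb_per_blade": 384},
    "medium": {"blades": 6, "cores_per_blade": 2 * 16, "mem_gb_per_blade": 384},
    "large":  {"blades": 8, "cores_per_blade": 2 * 24, "mem_gb_per_blade": 768},
}

for name, c in CONFIGS.items():
    cores = c["blades"] * c["cores_per_blade"]
    mem = c["blades"] * c["mem_gb_per_blade"]
    print(f"{name}: {cores} cores, {mem} GB memory")
# small: 112 cores, 1536 GB memory
# medium: 192 cores, 2304 GB memory
# large: 384 cores, 6144 GB memory
```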

Note: These configurations provide a starting point for sizing guidance. Some customer environments might run multiple MEDITECH production and non-MEDITECH workloads simultaneously, or they might have higher IOPS requirements. Work with the NetApp account team to size the storage systems based on the required IOPS, workloads, and capacity, and to determine the right platform to serve those workloads.