TR-4693: FlexPod Datacenter for Epic EHR Deployment Guide
Contributors
Brian O’Mahony, NetApp
Ganesh Kamath, NetApp
Mike Brennan, Cisco
This technical report is for customers who plan to deploy Epic on FlexPod systems. It provides a brief overview of the FlexPod architecture for Epic and covers the setup and installation of FlexPod to deploy Epic for healthcare.
FlexPod systems deployed to host Epic HyperSpace, InterSystems Caché database, Cogito Clarity analytics and reporting suite, and services servers hosting the Epic application layer provide an integrated platform for a dependable, high-performance infrastructure that can be deployed rapidly. The FlexPod integrated platform is deployed by skilled FlexPod channel partners and is supported by Cisco and NetApp technical assistance centers.
Overall solution benefits
By running an Epic environment on the FlexPod architectural foundation, healthcare organizations can expect to see an improvement in staff productivity and a decrease in capital and operating expenses. FlexPod Datacenter with Epic delivers several benefits specific to the healthcare industry:
Simplified operations and lowered costs. Eliminate the expense and complexity of legacy proprietary RISC/UNIX platforms by replacing them with a more efficient and scalable shared resource capable of supporting clinicians wherever they are. This solution delivers higher resource utilization for greater ROI.
Quicker deployment of infrastructure. Whether it’s in an existing data center or a remote location, the integrated and tested design of FlexPod Datacenter with Epic enables customers to have the new infrastructure up and running in less time with less effort.
Scale-out architecture. Scale SAN and NAS from terabytes to tens of petabytes without reconfiguring running applications.
Nondisruptive operations. Perform storage maintenance, hardware lifecycle operations, and software upgrades without interrupting the business.
Secure multitenancy. This benefit supports the increased needs of virtualized server and storage shared infrastructure, enabling secure multitenancy of facility-specific information, particularly if hosting multiple instances of databases and software.
Pooled resource optimization. This benefit can help reduce physical server and storage controller counts, load balance workload demands, and boost utilization while improving performance.
Quality of service (QoS). FlexPod offers QoS on the entire stack. Industry-leading QoS storage policies enable differentiated service levels in a shared environment. These policies enable optimal performance for workloads and help in isolating and controlling runaway applications.
Storage efficiency. Reduce storage costs with the NetApp 7:1 storage efficiency guarantee.
Agility. The industry-leading workflow automation, orchestration, and management tools offered by FlexPod systems allow IT to be far more responsive to business requests. These business requests can range from Epic backup and provisioning of additional test and training environments to analytics database replications for population health management initiatives.
Productivity. Quickly deploy and scale this solution for optimal clinician end-user experiences.
Data Fabric. The NetApp Data Fabric architecture weaves data together across sites, beyond physical boundaries, and across applications. Built for data-driven enterprises in a data-centric world, it recognizes that data is created and used in multiple locations and often must be shared with other locations, applications, and infrastructures. The Data Fabric provides a consistent, integrated way to manage data that puts IT in control and simplifies ever-increasing IT complexity.
A new approach to infrastructure for Epic EHR
Healthcare provider organizations remain under pressure to maximize the benefits of their substantial investments in industry-leading Epic electronic health records (EHRs). For mission-critical applications, when customers design their data centers for Epic solutions, they often identify the following goals for their data center architecture:
High availability of the Epic applications
Ease of implementing Epic in the data center
Agility and scalability to enable growth with new Epic releases or applications
Alignment with Epic guidance and target platforms
Manageability, stability, and ease of support
Robust data protection, backup, recovery, and business continuance
As Epic users evolve their organizations to become accountable care organizations and adjust to tightened, bundled reimbursement models, the challenge becomes delivering the required Epic infrastructure in a more efficient and agile IT delivery model.
Over the past decade, the Epic infrastructure customarily consisted of proprietary RISC processor-based servers running proprietary versions of UNIX and traditional SAN storage arrays. These server and storage platforms offer little by way of virtualization and can result in prohibitive capital and operating costs, given increasing IT budget constraints.
Epic now supports a production target platform consisting of Cisco Unified Computing System (Cisco UCS) servers with Intel Xeon processors, virtualized with VMware ESXi, and running Red Hat Enterprise Linux (RHEL). With this platform, coupled with Epic’s High Comfort Level ranking for NetApp storage running ONTAP, a new era of Epic data center optimization has begun.
Value of prevalidated converged infrastructure
Epic is prescriptive as to its customers’ hardware requirements because of an overarching requirement for delivering predictable low-latency system performance and high availability.
FlexPod, a prevalidated, rigorously tested converged infrastructure from the strategic partnership of Cisco and NetApp, is engineered and designed specifically for delivering predictable low-latency system performance and high availability. This approach results in Epic high comfort levels and ultimately the best response time for users of the Epic EHR system.
The FlexPod solution from Cisco and NetApp meets Epic system requirements with a high performing, modular, prevalidated, converged, virtualized, efficient, scalable, and cost-effective platform. It provides:
Modular architecture. FlexPod addresses the varied needs of the Epic modular architecture with purpose-configured FlexPod platforms for each specific workload. All components are connected through a clustered server and storage management fabric and a cohesive management toolset.
Accelerated application deployment. The prevalidated architecture reduces implementation integration time and risk to expedite Epic project plans. NetApp OnCommand Workflow Automation (OnCommand WFA) workflows for Epic automate Epic backup and refresh and remove the need for custom unsupported scripts. Whether the solution is used for an initial rollout of Epic, a hardware refresh, or expansion, more resources can be shifted to the business value of the project.
Industry-leading technology at each level of the converged stack. Cisco, NetApp, VMware, and Red Hat are all ranked as number 1 or number 2 by industry analysts in their respective categories of servers and networking, storage, virtualization, and enterprise Linux.
Investment protection with standardized, flexible IT. The FlexPod reference architecture anticipates new product versions and updates, with rigorous ongoing interoperability testing to accommodate future technologies as they become available.
Proven deployment across a broad range of environments. Pretested and jointly validated with popular hypervisors, operating systems, applications, and infrastructure software, FlexPod has been installed in some of Epic’s largest customer organizations.
Proven FlexPod architecture and cooperative support
FlexPod is a proven data center solution, offering a flexible, shared infrastructure that easily scales to support growing workload demands without affecting performance. By leveraging the FlexPod architecture, this solution delivers the full benefits of FlexPod, including:
Performance to meet the Epic workload requirements. Depending on the reference workload requirements (small, medium, large), different ONTAP platforms can be deployed to meet the required I/O profile.
Scalability to easily accommodate clinical data growth. Dynamically scale virtual machines (VMs), servers, and storage capacity on demand, without traditional limits.
Enhanced efficiency. Reduce both administration time and TCO with a converged virtualized infrastructure, which is easier to manage and stores data more efficiently while driving more performance from Epic software. NetApp OnCommand WFA automation simplifies operations, reducing test environment refresh time from hours or days to minutes.
Reduced risk. Minimize business disruption with a prevalidated platform built on a defined architecture that eliminates deployment guesswork and accommodates ongoing workload optimization.
FlexPod Cooperative Support. NetApp and Cisco have established Cooperative Support, a strong, scalable, and flexible support model to address the unique support requirements of the FlexPod converged infrastructure. This model uses the combined experience, resources, and technical support expertise of NetApp and Cisco to provide a streamlined process for identifying and resolving a customer’s FlexPod support issue, regardless of where the problem resides. The FlexPod Cooperative Support model helps to make sure that your FlexPod system operates efficiently and benefits from the most up-to-date technology, while providing an experienced team to help resolve integration issues.
FlexPod Cooperative Support is especially valuable to healthcare organizations running business-critical applications such as Epic on the FlexPod converged infrastructure.
The following figure illustrates the FlexPod cooperative support model.
In addition to these benefits, each component of the FlexPod Datacenter stack with Epic solution delivers specific benefits for Epic EHR workflows.
Cisco Unified Computing System
A self-integrating, self-aware system, Cisco UCS consists of a single management domain interconnected with a unified I/O infrastructure. Cisco UCS for Epic environments has been aligned with Epic infrastructure recommendations and best practices to help ensure that the infrastructure can deliver critical patient information with maximum availability.
The foundation of Epic on Cisco UCS architecture is Cisco UCS technology, with its integrated systems management, Intel Xeon processors, and server virtualization. These integrated technologies solve data center challenges and enable customers to meet their goals for data center design for Epic. Cisco UCS unifies LAN, SAN, and systems management into one simplified link for rack servers, blade servers, and VMs. Cisco UCS is an end-to-end I/O architecture that incorporates Cisco unified fabric and Cisco fabric extender (FEX) technology to connect every component in Cisco UCS with a single network fabric and a single network layer.
The system is designed as a single virtual blade chassis that incorporates and scales across multiple blade chassis, rack servers, and racks. The system implements a radically simplified architecture that eliminates the multiple redundant devices that populate traditional blade server chassis and result in layers of complexity: Ethernet and FC switches and chassis management modules. Cisco UCS consists of a redundant pair of Cisco fabric interconnects (FIs) that provide a single point of management, and a single point of control, for all I/O traffic.
Cisco UCS uses service profiles to help ensure that virtual servers in the Cisco UCS infrastructure are configured correctly. Service profiles include critical server information about the server identity such as LAN and SAN addressing, I/O configurations, firmware versions, boot order, network VLAN, physical port, and QoS policies. Service profiles can be dynamically created and associated with any physical server in the system in minutes rather than hours or days. The association of service profiles with physical servers is performed as a simple, single operation and enables migration of identities between servers in the environment without requiring any physical configuration changes. It facilitates rapid bare-metal provisioning of replacements for failed servers.
Using service profiles helps to make sure that servers are configured consistently throughout the enterprise. When using multiple Cisco UCS management domains, Cisco UCS Central can use global service profiles to synchronize configuration and policy information across domains. If maintenance needs to be performed in one domain, the virtual infrastructure can be migrated to another domain. This approach helps to ensure that even when a single domain is offline, applications continue to run with high availability.
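Conceptually, a service profile is a portable bundle of server identity that can be bound to, and moved between, physical blades. The toy sketch below illustrates only that idea; the attribute names are hypothetical and do not reflect the actual Cisco UCS Manager schema or API.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative only: these field names are hypothetical, not the
# Cisco UCS Manager object model.
@dataclass
class ServiceProfile:
    name: str
    wwpns: List[str]        # SAN identity (Fibre Channel WWPNs)
    macs: List[str]         # LAN identity (MAC addresses)
    boot_order: List[str]
    firmware: str
    vlan: int

@dataclass
class Blade:
    slot: str
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> None:
    """Bind the logical server identity to a physical blade."""
    blade.profile = profile

def migrate(profile: ServiceProfile, old: Blade, new: Blade) -> None:
    """Move the identity to a replacement blade; the physical servers
    themselves need no recabling or reconfiguration."""
    if old.profile is profile:
        old.profile = None
    new.profile = profile
```

Because the LAN and SAN identities travel with the profile rather than living in the blade, replacing a failed server reduces to a single reassociation, which is the property the text above describes.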
Cisco UCS has been extensively tested with Epic over a multi-year period to demonstrate that it meets the server configuration requirements. Cisco UCS is a supported server platform, as listed in customers’ “Epic Hardware Configuration Guide.”
Cisco Nexus switches and MDS multilayer directors provide enterprise-class connectivity and SAN consolidation. Cisco multiprotocol storage networking reduces business risk by providing flexibility and options: FC, Fibre Connection (FICON), FC over Ethernet (FCoE), SCSI over IP (iSCSI), and FC over IP (FCIP).
Cisco Nexus switches offer one of the most comprehensive data center network feature sets in a single platform. They deliver high performance and density for both data center and campus core. They also offer a full feature set for data center aggregation, end-of-row, and data center interconnect deployments in a highly resilient modular platform.
Cisco UCS integrates computing resources with Cisco Nexus switches and a unified I/O fabric that identifies and handles different types of network traffic, including storage I/O, streamed desktop traffic, management, and access to clinical and business applications:
Infrastructure scalability. Virtualization, efficient power and cooling, cloud scale with automation, high density, and performance all support efficient data center growth.
Operational continuity. The design integrates hardware, NX-OS software features, and management to support zero-downtime environments.
Transport flexibility. Incrementally adopt new networking technologies with a cost-effective solution.
Together, Cisco UCS with Cisco Nexus switches and MDS multilayer directors provide a compute, networking, and SAN connectivity solution for Epic.
NetApp storage running ONTAP software reduces overall storage costs while delivering the low-latency read and write response times and IOPS required for Epic workloads. ONTAP supports both all-flash and hybrid storage configurations to create an optimal storage platform to meet Epic requirements. NetApp flash-accelerated systems received the Epic High Comfort Level rating, providing Epic customers with the performance and responsiveness key to latency-sensitive Epic operations. NetApp can also isolate production from nonproduction by creating multiple fault domains in a single cluster. NetApp reduces performance issues by guaranteeing a minimum performance level for workloads with ONTAP minimum QoS.
The scale-out architecture of the ONTAP software can flexibly adapt to various I/O workloads. To deliver the necessary throughput and low latency required for clinical applications while providing a modular scale-out architecture, all-flash configurations are typically used in ONTAP architectures. All-flash arrays will be required by Epic by year 2020 and are required by Epic today for customers with more than 5 million global references. AFF nodes can be combined in the same scale-out cluster with hybrid (HDD and flash) storage nodes suitable for storing large datasets with high throughput. Customers can clone, replicate, and back up the Epic environment (from expensive SSD storage) to more economical HDD storage on other nodes, meeting or exceeding Epic guidelines for SAN-based cloning and backup of production disk pools. With NetApp cloud-enabled storage and Data Fabric, you can back up to object storage on the premises or in the cloud.
ONTAP offers features that are extremely useful in Epic environments, simplifying management, increasing availability and automation, and reducing the total amount of storage needed:
Outstanding performance. The NetApp AFF solution shares the same unified storage architecture, ONTAP software, management interface, rich data services, and advanced feature set as the rest of the FAS product families. This innovative combination of all-flash media with ONTAP delivers the consistent low latency and high IOPS of all-flash storage with the industry-leading ONTAP software.
Storage efficiency. Reduce total capacity requirements with deduplication, NetApp FlexClone, inline compression, inline compaction, thin replication, thin provisioning, and aggregate deduplication.
NetApp deduplication provides block-level deduplication in a FlexVol volume or data constituent. Essentially, deduplication removes duplicate blocks, storing only unique blocks in the FlexVol volume or data constituent.
Deduplication works with a high degree of granularity and operates on the active file system of the FlexVol volume or data constituent. It is application transparent, and therefore it can be used to deduplicate data originating from any application that uses the NetApp system. Volume deduplication can be run as an inline process (starting in Data ONTAP 8.3.2) and/or as a background process that can be configured to run automatically, be scheduled, or run manually through the CLI, NetApp System Manager, or NetApp OnCommand Unified Manager.
The following figure illustrates how NetApp deduplication works at the highest level.
Space-efficient cloning. The FlexClone capability allows you to almost instantly create clones to support backup and test environment refresh. These clones consume additional storage only as changes are made.
Integrated data protection. Full data protection and disaster recovery features help customers protect critical data assets and provide disaster recovery.
Nondisruptive operations. Upgrading and maintenance can be performed without taking data offline.
Epic workflow automation. NetApp has designed OnCommand WFA workflows to automate and simplify the Epic backup solution and refresh of test environments such as SUP, REL, and REL VAL. This approach eliminates the need for any custom unsupported scripts, reducing deployment time, operations hours, and disk capacity required for NetApp and Epic best practices.
QoS. Storage QoS allows you to limit potential bully workloads. More importantly, QoS can guarantee minimum performance for critical workloads such as Epic production. NetApp QoS can reduce performance-related issues by limiting contention.
OnCommand Insight Epic dashboard. The Epic Pulse tool can identify an application issue and its effect on the end user. The OnCommand Insight Epic dashboard can help identify the root cause of the issue and gives full visibility into the complete infrastructure stack.
Data Fabric. NetApp Data Fabric simplifies and integrates data management across cloud and on-premises environments to accelerate digital transformation. It delivers consistent and integrated data management services and applications for data visibility and insights, data access and control, and data protection and security. NetApp is integrated with AWS, Azure, Google Cloud, and IBM Cloud, giving customers a wide breadth of choice.
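The block-level deduplication described above can be sketched in a few lines. This is a simplified model only: ONTAP deduplicates 4KB WAFL blocks using a fingerprint catalog with byte-level verification, whereas this toy version hashes each block with SHA-256 and skips verification.

```python
import hashlib

BLOCK_SIZE = 4096  # WAFL stores data in 4 KB blocks

def deduplicate(data: bytes):
    """Split data into fixed-size blocks and keep only unique ones.

    Returns (unique_blocks, recipe): a block store keyed by fingerprint,
    plus the ordered list of fingerprints needed to rebuild the data.
    """
    unique_blocks = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()  # block fingerprint
        unique_blocks.setdefault(fp, block)     # store the first copy only
        recipe.append(fp)
    return unique_blocks, recipe

def rehydrate(unique_blocks, recipe):
    """Reassemble the original data from the block store and recipe."""
    return b"".join(unique_blocks[fp] for fp in recipe)
```

Data with many repeated blocks, such as zeroed regions or cloned VM images, collapses to far fewer stored blocks, which is where the capacity savings come from.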
The following figure illustrates FlexPod for Epic workloads.
Epic overview
Epic is a software company headquartered in Verona, Wisconsin. The following excerpt from the company’s website describes the span of functions supported by Epic software:
“Epic makes software for midsize and large medical groups, hospitals, and integrated healthcare organizations—working with customers that include community hospitals, academic facilities, children’s organizations, safety net providers, and multi-hospital systems. Our integrated software spans clinical, access, and revenue functions and extends into the home.”
It is beyond the scope of this document to cover the wide span of functions supported by Epic software. From the storage system point of view, however, for each deployment, all Epic software shares a single patient-centric database. Epic uses the InterSystems Caché database, which is available for various operating systems, including IBM AIX and Linux.
The primary focus of this document is to enable the FlexPod stack (servers and storage) to satisfy performance-driven requirements for the InterSystems Caché database used in an Epic software environment. Generally, dedicated storage resources are provided for the production database, whereas shadow database instances share secondary storage resources with other Epic software-related components, such as Clarity reporting tools. Other software environment storage, such as that used for application and system files, is also provided by the secondary storage resources.
Purpose-built for specific Epic workloads
Though Epic does not resell server, network, or storage hardware, hypervisors, or operating systems, the company has specific requirements for each component of the infrastructure stack. Therefore, Cisco and NetApp worked together to test and enable FlexPod Datacenter to be successfully configured, deployed, and supported to meet customers’ Epic production environment requirements. This testing, technical documentation, and growing number of successful mutual customers have resulted in Epic expressing an increasingly high level of comfort in FlexPod Datacenter’s ability to meet Epic customers’ needs. See the “Epic Storage Products and Technology Status” document and the “Epic Hardware Configuration Guide.”
The end-to-end Epic reference architecture is not monolithic, but modular. The figure below outlines five distinct modules, each with unique workload characteristics.
These interconnected but distinct modules have often resulted in Epic customers having to purchase and manage specialty silos of storage and servers. These might include a vendor’s platform for traditional tier 1 SAN; a different platform for NAS file services; platforms specific to protocol requirements of FC, FCoE, iSCSI, NFS, and SMB/CIFS; separate platforms for flash storage; and appliances and tools to attempt to manage these silos as virtual storage pools.
With FlexPod connected through ONTAP, you can implement purpose-built nodes optimized for each targeted workload, achieving the economies of scale and streamlined operational management of a consistent compute, network, and storage data center.
Caché production database
Caché, manufactured by InterSystems, is the database system on which Epic is built. All patient data in Epic is stored in a Caché database.
In an InterSystems Caché database, the data server is the access point for persistently stored data. The application server services database queries and makes data requests to the data server. For most Epic software environments, the use of the symmetric multiprocessor architecture in a single database server suffices to service the Epic applications’ database requests. In large deployments, using InterSystems’ Enterprise Caché Protocol can support a distributed database model.
By using failover-enabled clustered hardware, a standby data server can access the same disks (that is, storage) as the primary data server and take over the processing responsibilities in the event of a hardware failure.
InterSystems also provides technologies to satisfy shadow, disaster recovery, and high-availability (HA) requirements. InterSystems’ shadow technology can be used to asynchronously replicate a Caché database from a primary data server to one or more secondary data servers.
Cogito Clarity
Cogito Clarity is Epic’s integrated analytics and reporting suite. Starting as a copy of the production Caché database, Cogito Clarity delivers information that can help improve patient care, analyze clinical performance, manage revenue, and measure compliance. As an OLAP environment, Cogito Clarity utilizes either Microsoft SQL Server or Oracle RDBMS. Because this environment is distinct from the Caché production database environment, it is important to architect a FlexPod platform that supports the Cogito Clarity requirements following Cisco and NetApp published validated design guides for SQL Server and Oracle environments.
Epic Hyperspace Desktop Services
Hyperspace is the presentation component of the Epic suite. It reads and writes data from the Caché database and presents it to the user. Most hospital and clinic staff members interact with Epic using the Hyperspace application.
Although Hyperspace can be installed directly on client workstations, many healthcare organizations use application virtualization through a Citrix XenApp farm or a virtual desktop infrastructure (VDI) to deliver applications to users. Virtualizing XenApp server farms using ESXi is supported. See the validated designs for FlexPod for ESXi in the “References” section for configuration and implementation guidelines.
For customers interested in deploying full VDI Citrix XenDesktop or VMware Horizon View systems, careful attention must be paid to ensure an optimal clinical workflow experience. A foundational step for obtaining precise configurations is to clearly understand and document the scope of the project, including detailed mapping of user profiles. Many user profiles include access to applications beyond Epic. Variables in profiles include:
Authentication, especially Imprivata or similar tap-and-go single sign-on (SSO), for nomadic clinician users
PACS Image Viewer
Dictation software and devices such as Dragon NaturallySpeaking
Document management such as Hyland OnBase or Perceptive Software integration
Departmental applications such as health information management coding from 3M Health Care or OptumHealth
Pre-Epic legacy EMR or revenue cycle apps, which the customer might still use
Video conferencing capabilities that could require use of video acceleration cards in the servers
Your certified FlexPod reseller, with specific certifications in VMware Horizon View or Citrix XenDesktop, will work with your Cisco and NetApp Epic solutions architect and professional services provider to scope and architect the solution for your specific VDI requirements.
Disaster recovery and shadow copies
Evolving to active-active dual data centers
In Epic software environments, a single patient-centric database is deployed. Epic’s hardware requirements refer to the physical server hosting the primary Caché data server as the production database server. This server requires dedicated, high-performance storage for files belonging to the primary database instance. For HA, Epic supports the use of a failover database server that has access to the same files.
A reporting shadow database server is typically deployed to provide read-only access to production data. It hosts a Caché data server configured as a backup shadow of the production Caché data server. This database server has the same storage capacity requirements as the production database server. This storage is sized differently from a performance perspective because reporting workload characteristics are different.
A shadow database server can also be deployed to support Epic’s shadow read-only (SRO) functionality, in which access is provided to a copy of production in read-only mode. This type of database server can be switched to read-write mode for business continuity reasons.
To meet business continuity and disaster recovery (DR) objectives, a DR shadow database server is commonly deployed at a site geographically separate from the production and/or reporting shadow database servers. A DR shadow database server also hosts a Caché data server configured as a backup shadow of the production Caché data server. It can be configured to act as a shadow read-write instance if the production site is unavailable for an extended time. Like the reporting shadow database server, the storage for its database files has the same capacity requirements as the production database server. In contrast, this storage is sized the same as production from a performance perspective, for business continuity reasons.
For healthcare organizations that need continuous uptime for Epic and have multiple data centers, FlexPod can be used to build an active-active design for Epic deployment. In an active-active scenario, FlexPod hardware is installed into a second data center and is used to provide continuous availability and quick failover or disaster recovery solutions for Epic. The “Epic Hardware Configuration Guide” provided to customers should be shared with Cisco and NetApp to facilitate the design of an active-active architecture that meets Epic’s guidelines.
NetApp and Cisco are experienced in migrating legacy Epic installations to FlexPod systems following Epic’s best practices for platform migration. They can work through any details if a platform migration is required.
One consideration for new customers moving to Epic or existing customers evaluating a hardware and software refresh is the licensing of the Caché database. InterSystems Caché can be purchased with either a platform-specific license (limited to a single hardware OS architecture) or a platform-independent license. A platform-independent license allows the Caché database to be migrated from one architecture to another, but it costs more than a platform-specific license.
Note: Customers with platform-specific licensing might need to budget for additional licensing costs to switch platforms.
Epic storage considerations
RAID performance and protection
Epic recognizes the value of NetApp RAID DP, RAID-TEC, and WAFL technologies in achieving levels of data protection and performance that meet Epic-defined requirements. Furthermore, with NetApp efficiency technologies, NetApp storage systems can deliver the overall read performance required for Epic environments while using fewer disk drives.
Epic requires the use of NetApp sizing methods to properly size a NetApp storage system for use in Epic environments. For more information, see TR-3930i: NetApp Sizing Guidelines for Epic. NetApp Field Portal access is required to view this document.
Isolation of production disk groups
See the Epic All-Flash Reference Architecture Strategy Handbook for details about the storage layout on an all-flash array. In summary, disk pool 1 (production) must be stored on a separate storage fault domain from disk pool 2. An ONTAP node in the same cluster is a fault domain.
Epic recommends the use of flash for all full-size operational databases, not just the production operational databases. At present this approach is only a recommendation; however, by calendar year 2020 it will be a requirement for all customers.
For very large sites, where the production OLTP database is expected to exceed 5 million global references per second, the Cogito workloads should be placed on a third array to minimize the impact to the performance of the production OLTP database. The test bed configuration used in this document is an all-flash array.
High availability and redundancy
Epic recommends the use of HA storage systems to mitigate hardware component failure. This recommendation extends from basic hardware, such as redundant power supplies, to networking, such as multipath networking.
At the storage node level, Epic highlights the use of redundancy to enable nondisruptive upgrades and nondisruptive storage expansion.
Pool 1 storage must reside on separate disks from pool 2 storage for the performance isolation reasons previously stated; NetApp storage arrays provide this separation by default, out of the box. This separation also provides data-level redundancy for disk-level failures.
Epic recommends the use of effective monitoring tools to identify or predict any storage system bottlenecks.
NetApp OnCommand Unified Manager, bundled with ONTAP, can be used to monitor capacity, performance, and headroom. For customers with OnCommand Insight, an Insight dashboard has been developed for Epic that gives complete visibility into storage, network, and compute beyond what the Epic Pulse monitoring tool provides. Although Pulse can detect an issue, Insight can identify the issue early, before it has an impact.
Epic recognizes that storage node-based NetApp Snapshot technology can minimize performance impacts on production workloads compared to traditional file-based backups. When Snapshot backups are intended for use as a recovery source for the production database, the backup method must be implemented with database consistency in mind.
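The consistency requirement above is chiefly an ordering guarantee: quiesce the database, take the Snapshot copy, then resume, with the resume step guaranteed even on failure. The sketch below shows only that ordering; the `run_cmd` callable and command strings are placeholders, not the actual Epic, InterSystems, or ONTAP syntax.

```python
# Sketch: database-consistent Snapshot ordering. The freeze/thaw and
# snapshot commands here are placeholders for whatever mechanism the
# site's validated backup workflow uses.

def consistent_snapshot(run_cmd, volume, snap_name):
    run_cmd("db-freeze")  # quiesce database writes first
    try:
        # The Snapshot copy is taken while the database is quiesced,
        # so it is usable as a recovery source for production.
        run_cmd(f"snapshot create {volume} {snap_name}")
    finally:
        # Thaw must run even if the snapshot step fails; otherwise the
        # production database would remain frozen.
        run_cmd("db-thaw")

calls = []
consistent_snapshot(calls.append, "epic_prod", "nightly")
```

The `try`/`finally` structure is the essential part: it encodes "the database is never left frozen" regardless of how the Snapshot step behaves.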
Epic cautions against expanding storage without considering storage hotspots. For example, if storage is frequently added in small increments, storage hotspots can develop where data is not evenly spread across disks.
Comprehensive management tools and automation capabilities
Cisco Unified Computing System with Cisco UCS Manager
Cisco focuses on three key elements to deliver the best data center infrastructure: simplification, security, and scalability. The Cisco UCS Manager software combined with platform modularity provides a simplified, secure, and scalable desktop virtualization platform.
Simplified. Cisco UCS provides a radical new approach to industry-standard computing and forms the core of the data center infrastructure for all workloads. Among the many features and benefits of Cisco UCS are the reduction in the number of servers needed, the reduction in the number of cables used per server, and the capability to rapidly deploy or reprovision servers through Cisco UCS service profiles. With fewer servers and cables to manage and with streamlined server and application workload provisioning, operations are significantly simplified. Scores of blade and rack servers can be provisioned in minutes with Cisco UCS Manager service profiles. Service profiles eliminate server integration run books and configuration drift. This approach accelerates time to productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.
Cisco UCS Manager (UCSM) automates many mundane, error-prone data center operations such as configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS B-Series blade servers and C-Series rack servers with large memory footprints enable high application user density, which helps reduce server infrastructure requirements.
Simplification leads to faster, more successful Epic infrastructure deployment. Cisco and its technology partners such as VMware and Citrix and storage partners IBM, NetApp, and Pure Storage have developed integrated, validated architectures, including predefined converged architecture infrastructure packages such as FlexPod. Cisco virtualization solutions have been tested with VMware vSphere, Linux, Citrix XenDesktop, and XenApp.
Secure. Although VMs are inherently more secure than their physical predecessors, they introduce new security challenges. Mission-critical web and application servers using a common infrastructure such as virtual desktops are now at a higher risk for security threats. Inter–virtual machine traffic now poses an important security consideration that IT managers need to address, especially in dynamic environments in which VMs, using VMware vMotion, move across the server infrastructure.
Virtualization, therefore, significantly increases the need for virtual machine–level awareness of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure (Cisco UCS, Cisco MDS, and Cisco Nexus family solutions) for desktop virtualization provides strong data center, network, and desktop security, with comprehensive security from the desktop to the hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine–aware policies and administration, and network security across the LAN and WAN infrastructure.
Scalable. Growth of virtualization solutions is all but inevitable, so a solution must be able to scale, and scale predictably, with that growth. The Cisco virtualization solutions support high virtual machine density (VMs per server), and additional servers scale with near-linear performance. Cisco data center infrastructure provides a flexible platform for growth and improves business agility. Cisco UCS Manager service profiles allow on-demand host provisioning and make it just as easy to deploy dozens of hosts as it is to deploy hundreds.
Cisco UCS servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco Extended Memory Technology to offer large memory footprints with fewer sockets (up to 1 TB of memory with 2- and 4-socket servers). Using unified fabric technology as a building block, Cisco UCS server aggregate bandwidth can scale up to 80 Gbps per server, and the northbound Cisco UCS fabric interconnect can output 2 Tbps at line rate, helping prevent desktop virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution video and communications traffic. In addition, Cisco storage partner NetApp helps to maintain data availability and optimal performance during boot and login storms as part of the Cisco virtualization solutions.
Cisco UCS, Cisco MDS, and Cisco Nexus data center infrastructure designs provide an excellent platform for growth, with transparent scaling of server, network, and storage resources to support desktop virtualization, data center applications, and cloud computing.
VMware vCenter Server
VMware vCenter Server provides a centralized platform for managing Epic environments so healthcare organizations can automate and deliver a virtual infrastructure with confidence:
Simple deployment. Quickly and easily deploy vCenter Server using a virtual appliance.
Centralized control and visibility. Administer the entire vSphere infrastructure from a single location.
Proactive optimization. Allocate and optimize resources for maximum efficiency.
Management. Use powerful plug-ins and tools to simplify management and extend control.
Virtual Storage Console for VMware vSphere
Virtual Storage Console (VSC), VASA Provider, and Storage Replication Adapter (SRA) for VMware vSphere from NetApp are delivered as a single virtual appliance. The product suite includes SRA and VASA Provider as plug-ins to vCenter Server and provides end-to-end lifecycle management for VMs in VMware environments that use NetApp storage systems.
The virtual appliance for VSC, VASA Provider, and SRA integrates smoothly with the VMware vSphere Web Client and enables you to use SSO services. In an environment with multiple vCenter Server instances, each vCenter Server instance that you want to manage must have its own registered instance of VSC. The VSC dashboard page enables you to quickly check the overall status of your datastores and VMs.
By deploying the virtual appliance for VSC, VASA Provider, and SRA, you can perform the following tasks:
Using VSC to deploy and manage storage and configure the ESXi host. You can use VSC to add, remove, and assign credentials and to set up permissions for storage controllers in your VMware environment. In addition, you can manage ESXi servers that are connected to NetApp storage systems. You can set recommended best-practice values for host timeouts, NAS, and multipathing for all hosts with a couple of clicks. You can also view storage details and collect diagnostic information.
Using VASA Provider to create storage capability profiles and set alarms. VASA Provider for ONTAP is registered with VSC as soon as you enable the VASA Provider extension. You can create and use storage capability profiles and virtual datastores. You can also set alarms to alert you when the thresholds for volumes and aggregates are almost full. You can monitor the performance of virtual machine disks (VMDKs) and the VMs that are created on virtual datastores.
Using SRA for disaster recovery. You can use SRA to configure protected and recovery sites in your environment for disaster recovery during failures.
NetApp OnCommand Insight and ONTAP
NetApp OnCommand Insight integrates infrastructure management into the Epic service delivery chain. This approach provides healthcare organizations with better control, automation, and analysis of the storage, network, and compute infrastructure. IT can optimize the current infrastructure for maximum benefit while simplifying the process of determining what and when to buy. It also mitigates the risks associated with complex technology migrations. Because it requires no agents, installation is straightforward and nondisruptive. Installed storage and SAN devices are continually discovered, and detailed information is collected for full visibility of your entire storage environment. You can quickly identify misused, misaligned, underused, or orphaned assets and reclaim them to fuel future expansion:
Optimize existing resources. Identify misused, underused, or orphaned assets using established best practices to avoid problems and meet service levels.
Make better decisions. Real-time data helps resolve capacity problems more quickly to accurately plan future purchases, avoid overspending, and defer capital expenditures.
Accelerate IT initiatives. Better understand virtual environments to manage risks, minimize downtime, and speed cloud deployment.
OnCommand Insight dashboard. NetApp developed this dashboard specifically for Epic; it provides a comprehensive view of the complete infrastructure stack and goes beyond Pulse monitoring. OnCommand Insight can proactively identify contention issues in compute, network, and storage.
NetApp OnCommand Workflow Automation
OnCommand Workflow Automation (WFA) is a free software solution that helps automate storage management tasks such as provisioning, migration, decommissioning, configuring data protection, and cloning storage. You can use WFA to build workflows that complete the tasks specified by your processes.
A workflow is a repetitive and procedural task that consists of steps, including the following types of tasks:
Provisioning, migrating, or decommissioning storage for databases or file systems
Setting up a new virtualization environment, including storage switches and datastores
Setting up storage for an application as part of an end-to-end orchestration process
Workflows can be built to quickly set up and configure NetApp storage according to recommended best practices for Epic workloads. The OnCommand WFA workflows for Epic replace the unsupported custom scripting that customers previously required to automate backup and test-environment refresh.
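A WFA workflow is invoked with a set of named user inputs defined by the workflow itself. The payload builder below is a sketch only: the `userInputValues` key/value shape, the input names, and the helper name are all assumptions for illustration, and the authoritative input list would come from the installed workflow definition.

```python
# Sketch: shape user inputs for a WFA job submission. The structure
# and all names here are assumptions, not the documented WFA API.

def build_wfa_job_inputs(workflow_inputs):
    """Return user inputs as an assumed list-of-dicts job payload."""
    return {
        "userInputValues": [
            # WFA user inputs are string-typed in this sketch, so all
            # values are coerced; keys are sorted for determinism.
            {"key": name, "value": str(value)}
            for name, value in sorted(workflow_inputs.items())
        ]
    }

job = build_wfa_job_inputs({"cluster": "epic-prod", "volume_count": 8})
```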
NetApp SnapCenter
SnapCenter is a unified, scalable platform for data protection. SnapCenter provides centralized control and oversight, allowing users to manage application-consistent and database-consistent Snapshot copies. SnapCenter enables the backup, restore, clone, and backup verification of virtual machines (VMs) from both primary and secondary destinations (SnapMirror and SnapVault). With SnapCenter, database, storage, and virtualization administrators have a single tool to manage backup, restore, and clone operations for various applications, databases, and VMs.
SnapCenter enables centralized application resource management and easy data protection job execution by using resource groups and policy management (including scheduling and retention settings). SnapCenter provides unified reporting by using a dashboard, multiple reporting options, job monitoring, and log and event viewers.
SnapCenter can back up VMware, RHEL, SQL Server, Oracle, and CIFS workloads. Combined with the Epic WFA backup workflow integration, NetApp provides a backup solution for any Epic environment.
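SnapCenter policies pair a schedule with a retention setting, so older Snapshot copies are pruned automatically as new ones are taken. The sketch below mimics a count-based retention rule only; the function name and behavior are illustrative, not the SnapCenter API.

```python
# Sketch: count-based Snapshot retention, as a SnapCenter-style policy
# might apply it. Names and semantics are illustrative assumptions.

def apply_retention(snapshots, keep_last):
    """Given snapshot names ordered oldest-to-newest, return the list
    to delete so only the newest keep_last copies remain."""
    if keep_last <= 0:
        return list(snapshots)
    return list(snapshots[:-keep_last])

# With a keep-3 policy, the two oldest nightly copies are pruned.
to_delete = apply_retention(["d1", "d2", "d3", "d4", "d5"], keep_last=3)
```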