Data center colocation is a practical way to save money and resources. You can have full control over your IT infrastructure without building an on-premises data center for your organization. However, all colocation facilities are not made equal. You must analyze your infrastructure requirements before you select a colocation data center to host your servers and other IT hardware.
When choosing a colocation center, businesses tend to focus on location, proximity, security, and cost. All of these matter, but one crucial aspect that is often overlooked is the facility's redundancy level.
Here, we discuss what data center redundancy is, why it is important, and how the redundancy levels are determined. This detailed guide will help you ask the right questions to your colocation provider and enable you to make an informed decision.
What is data center redundancy?
Data center redundancy is a critical aspect of data center architecture and management. It refers to the design and implementation of multiple, independent systems to support the critical operations of a data center in case of a failure of a primary system. Redundancy can be achieved by duplicating critical infrastructural components of a data center, such as power supply, network connectivity, cooling equipment, and others. It minimizes downtime and ensures undisrupted operation.
Some examples of data center redundancy are:
Power redundancy: Power redundancy would mean using multiple power sources like backup generators and UPS to ensure a continuous power supply. Even if there is a power outage, the backup system will continue the operation uninterrupted.
Network redundancy: Data centers use multiple network paths to achieve network redundancy. If one network fails, the other one kicks in. It ensures continuous access to the internet or other networks in case of a failure of one connection.
Cooling redundancy: The temperature of a data center needs to be controlled for smooth operation. Data centers use multiple, independent cooling systems to ensure that the facility remains within its temperature and humidity specifications even if the primary cooling system malfunctions.
Equipment redundancy: Similar to other cases, equipment redundancy means keeping provision of multiple components such as switches and routers to ensure that the data center remains operational even in case of a failure of a critical component.
Data center redundancy aims to provide a highly available and resilient environment for mission-critical applications and data. In general, you can expect a certain degree of redundancy in all data centers, but the level of redundancy will vary depending on the types of business they cater to, because some businesses can tolerate more downtime than others.
The importance of data center redundancy
Uptime’s 2022 outage analysis report shows outage rates and severity are still significantly high in the digital infrastructure sector. The financial loss and operational disruptions resulting from these outages are on the rise.
The report shows that 20% of organizations in the US have faced at least one major outage in the last three years. These outages caused significant financial losses, reputational harm, and compliance issues, and in some extreme cases the failure even resulted in loss of life. 60% of these failures resulted in at least $100,000 in total losses.
The above data shows why data center uptime is critical for any business. Additionally, prolonged and frequent downtimes can significantly impact your team’s productivity. As the maximum tolerable period of disruption (MTPD) continues to decrease for most sectors, companies today need to recover quickly from an unforeseen outage. This can only be achieved by choosing a data center that implements the required redundant infrastructure.
Understanding the data center redundancy levels
To determine the redundancy configuration your organization's IT workload needs, you must first understand how data center redundancy levels are measured.
While enquiring about a data center's redundancy level, you will come across terms like N, N+1, 2N, and so on. These terminologies can seem complicated to decode without knowing the basis on which the measurement system is built.
Definition of N in the context of data center redundancy
To put it simply, N denotes the minimum infrastructure required to run the data center at full IT workload. For example, if a data center requires seven cooling units to keep the facility cool enough to be fully functional, then N = 7. Similarly, if five UPS units are required to run the data center at full workload, then N = 5.
Zero redundancy means the colocation data center has exactly the same number of components that are required for its operations. If any one component fails, the operation will be disrupted, and the clients will experience downtime. To avoid this situation, colocation providers implement N+1, N+2, or 2N redundancy in their infrastructure.
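These labels follow directly from the ratio of installed components to the baseline N. As a rough illustration (the function name and label strings below are ours, not industry-standard identifiers):

```python
def redundancy_label(installed: int, required_n: int) -> str:
    """Classify a component count against the baseline N.

    installed  -- units physically in place
    required_n -- minimum units needed to carry the full IT load (N)
    """
    spare = installed - required_n
    if spare < 0:
        return "under-provisioned"
    if spare == 0:
        return "N (zero redundancy)"
    if installed >= 2 * required_n:
        return "2N or better"
    return f"N+{spare}"

# A facility whose cooling baseline is N = 7:
print(redundancy_label(7, 7))   # N (zero redundancy)
print(redundancy_label(8, 7))   # N+1
print(redundancy_label(14, 7))  # 2N or better
```

The key quantity is the spare count: anything above N is headroom for failures, and twice N or more earns the 2N label.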
N+1 redundancy configuration: what is it?
When a colocation data center says it has an N+1 configuration in place for power supply, they mean they have one additional power source to provide backup in case of an outage. The same applies to other components, such as cooling and network connectivity.
This configuration provides a basic level of redundancy and allows for uninterrupted operation even if one component fails. However, if more than one piece of equipment malfunctions, this configuration will not be able to provide the required backup.
N+1 redundancy configuration is highly effective for scheduled maintenance of data center components. The data center can continue to operate while one component is under maintenance.
Remember that each component within a data center can have different redundancy levels. For example, the power redundancy of a data center can be N+2, and the cooling redundancy can be N+1.
2N redundancy configuration
“2N” refers to having two times the number of components required for a data center to run with a full IT workload. For example, if the data center needs three power supplies for normal operation but has six power supplies in place, then its power redundancy stands at 2N.
Because a 2N infrastructure has twice as many components as are required for normal operation, the data center can continue to function even if multiple components fail. It offers a higher level of redundancy and ensures minimal downtime.
The 2N redundancy can be applied to different components within a data center, such as cooling systems, network paths, and power supply. As you can imagine, the implementation of 2N redundancy requires significant investment. Thus, the rent of such a colocation center will also be higher. This configuration is typically used in mission-critical data centers where maximum reliability and availability are required.
As mentioned, a 2N redundancy is not always implemented across the components. A colocation center can have 2N redundancy only for the power supply, as a power outage is a leading cause of data center downtime. However, it may implement N+1 redundancy configurations for all other components.
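A mixed profile like this can be reasoned about per component: each subsystem tolerates as many simultaneous failures as it has spare units above N. A minimal sketch, with a hypothetical facility whose counts are invented for illustration:

```python
def failures_tolerated(installed: int, required_n: int) -> int:
    """Simultaneous component failures the facility can absorb while
    still having the N units needed to carry the full IT load."""
    return max(installed - required_n, 0)

# Hypothetical facility: 2N power, N+1 cooling and network.
facility = {
    "power":   {"installed": 6, "required": 3},  # 2N
    "cooling": {"installed": 8, "required": 7},  # N+1
    "network": {"installed": 3, "required": 2},  # N+1
}

for name, c in facility.items():
    spare = failures_tolerated(c["installed"], c["required"])
    print(f"{name}: tolerates {spare} simultaneous failure(s)")
```

Here the power plant rides out three simultaneous failures, while cooling or network can lose only one unit before the facility is at risk. This is why it pays to ask a colocation provider for the redundancy level of each subsystem, not just a single headline figure.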
Data center tier level and redundancy
The Uptime Institute has devised a standardized system for tier classification of data centers based on their reliability and performance. The tier rankings describe the infrastructure resource availability in a data center. As redundancy plays a critical role in ensuring maximum uptime, it is intrinsically related to tier classification.
The Uptime Institute classifies the data centers into the following tiers:
Tier 1: A Tier 1 facility has an expected uptime of 99.671% per year. It depends on a single path for power and cooling and has no redundancy. The entire facility needs to be shut down for maintenance or repair work, and operations will be impacted by capacity or distribution failures.
Tier 2: The expected uptime for a Tier 2 facility is 99.741% per year. Although it uses a single path for power and cooling, it includes partial redundancy. A site-wide shutdown is still required for maintenance and repair work. The facility's operation may be disrupted by capacity failures and will be disrupted by distribution failures.
Tier 3: A Tier 3 data center has multiple independent distribution paths and employs full N+1 redundancy for power and cooling. This redundancy allows planned maintenance and repair work without disrupting operations. The expected uptime for a Tier 3 facility is 99.982% per year, though equipment failure or operator error can still impact the site.
Tier 4: The highest tier. A Tier 4 data center is completely fault tolerant and employs redundancy for all components, with power and cooling redundancy at 2N or 2N+1. The expected uptime is 99.995% per year. Maintenance can be performed without impacting operations, and an equipment or distribution failure will not disrupt the facility.
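These uptime percentages are easier to compare once converted into the downtime they allow per year, which is simple arithmetic on the 8,760 hours in a year:

```python
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def annual_downtime_hours(uptime_pct: float) -> float:
    """Maximum downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for tier, uptime in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(uptime):.1f} hours/year")
```

This works out to roughly 28.8 hours of allowable downtime per year for Tier 1, 22.7 for Tier 2, 1.6 for Tier 3, and about 26 minutes for Tier 4, which makes the practical gap between the tiers much more concrete than the percentages alone.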
The reliability and availability of a data center largely depend on its redundancy level. To choose a colocation data center for your organization, first, you need to gauge your business’s downtime tolerance. Depending on that, the redundancy level you need will vary. Discuss with the representatives of the colocation facility of your choice to find out whether they can provide the required support.
Coloco offers colocation services in Baltimore and Washington DC. We have multiple transit providers and redundant connectivity in our data center. Get in touch with us to learn more about the configuration of our colocation facility. The most affordable price is guaranteed.