How to leverage hyperscale Data Centers for scalability
Modern Data Centers are synonymous with massive high-speed computational capability, data storage at scale, automation, virtualization, high-end security, and cloud computing capacity. They hold enormous amounts of data and provide sophisticated computing power. Earlier, a simple network of racks with storage units and a set of management tools to manage them individually was enough. The architecture was easy to understand, and only local resources were consumed in its operation.
However, as organizations became increasingly internet dependent, data volumes exploded, with social media and a fast-growing number of sensing devices adding more of it every day. Remote access to this data through the Web became the norm. The local tools used in traditional Data Centers were fragmented and could handle neither the volumes nor the complexity, which in effect demanded a much larger infrastructure. Scaling up as companies expanded was a challenge, and performance dipped whenever peak loads had to be handled. This led to the evolution of hyperscaling as a solution.
Hyperscale is based on the concept of distributed systems and on-demand provisioning of IT resources. Unlike a traditional Data Center, a hyperscale Data Center brings together a large number of servers working in concert at high speed. This gives the Data Center the capacity to scale both horizontally and vertically. Horizontal scaling means provisioning more machines from the network on demand when extra capacity is required. Vertical scaling means adding power to existing machines to increase their computing capacity. Hyperscale Data Centers typically deliver lower load times and higher uptime, even in demanding situations such as high-volume data processing.
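To make the distinction concrete, here is a minimal Python sketch of the two approaches; the Cluster and Node classes and all sizing numbers are illustrative, not part of any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    vcpus: int
    ram_gb: int

class Cluster:
    """Illustrative resource pool; real hyperscale platforms automate these steps."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def scale_out(self, count, vcpus=8, ram_gb=32):
        """Horizontal scaling: provision additional machines into the pool."""
        start = len(self.nodes)
        self.nodes += [Node(f"node-{start + i}", vcpus, ram_gb) for i in range(count)]

    def scale_up(self, name, extra_vcpus, extra_ram_gb):
        """Vertical scaling: add capacity to an existing machine."""
        for node in self.nodes:
            if node.name == name:
                node.vcpus += extra_vcpus
                node.ram_gb += extra_ram_gb

cluster = Cluster([Node("node-0", 8, 32)])
cluster.scale_out(3)               # handle more parallel load with extra machines
cluster.scale_up("node-0", 8, 32)  # handle a heavier single workload on one machine
```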
Today, there are more than 400 hyperscale Data Centers operating in the world, with the United States alone having 44% of the global Data Center sites. By 2020, the hyperscale Data Center count is expected to reach 500, as predicted by Synergy Research Group. Other leading countries with hyperscale Data Center footprints are Australia, Brazil, Canada, Germany, India and Singapore.
Hyperscale Data Centers Can Do More in Less Time and at Lower Cost
A traditional Data Center typically has a SAN (Storage Area Network), mostly provided by a single vendor. The machines within the Data Center run Windows or Linux, and multiple servers are connected through commodity switches. Each server in the network has its own local management software installed, and each piece of equipment connected to it has its own switch to activate the connection. In short, each component in a traditional Data Center works in isolation.
In contrast, a hyperscale Data Center employs a clustered structure with multiple nodes housed in a single rack space. Hyperscaling uses the storage capacity inside the servers themselves, creating a shared pool of resources that eliminates the need for a separate SAN. Hyperconvergence also makes it easier to upgrade systems and to get support through a single-vendor solution for the whole infrastructure. Instead of managing individual arrays and management interfaces, hyperscaling integrates all capacities, such as storage, management, networking and data, and manages them from a single interface.
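A rough Python sketch of that pooling idea, with the StoragePool class and capacities invented purely for illustration: node-local disks are presented as one shared pool and consumed through a single interface, which is the role a dedicated SAN used to play.

```python
class StoragePool:
    """Illustrative shared pool built from node-local disks, standing in for a SAN."""
    def __init__(self):
        self.capacity_tb = {}    # node name -> capacity contributed to the pool
        self.allocated_tb = 0.0

    def join(self, node_name, local_disk_tb):
        """Each node contributes its local storage to the shared pool."""
        self.capacity_tb[node_name] = local_disk_tb

    def total_tb(self):
        return sum(self.capacity_tb.values())

    def allocate(self, size_tb):
        """One interface allocates from the pool, regardless of which node holds the disks."""
        if self.allocated_tb + size_tb > self.total_tb():
            raise RuntimeError("pool exhausted; scale out by joining more nodes")
        self.allocated_tb += size_tb

pool = StoragePool()
for i in range(4):
    pool.join(f"node-{i}", local_disk_tb=10)
pool.allocate(25)  # consumes capacity spread across the nodes' local disks
```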
Installing, managing and maintaining a large infrastructure of huge Data Centers would have been impossible for emerging companies or startups with limited capital and resources. With hyperconvergence, however, even microenterprises, SMEs and early-stage startups can now access a large pool of resources that is cost-effective and offers high scalability as well as flexibility. These companies can use Data Center services at a much lower cost, with the additional benefit of scalability on demand.
A hyperscale Data Center would typically have more than 5,000 servers linked through a high-speed fiber-optic network. A company can start small, with only a few servers configured for use, and then, at any later point, automatically provision additional storage from any of the servers in the network as its business scales up. The demand for additional infrastructure is estimated from how the workloads are growing, so capacity can be scaled up proactively to meet the increasing need for resources. Unlike traditional Data Centers, where components work in isolation, hyperscale infrastructures make all servers work in tandem, creating a unified system of storage and computing.
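As a simple illustration of such a proactive estimate, assuming hypothetical growth and capacity figures, a few lines of Python can project when current capacity will run out and how many servers to provision ahead of demand.

```python
import math

def servers_needed(current_load, growth_rate_per_month, months_ahead, capacity_per_server):
    """Project the workload forward and return the server count required to carry it."""
    projected = current_load * (1 + growth_rate_per_month) ** months_ahead
    return math.ceil(projected / capacity_per_server)

# Hypothetical numbers: 120k requests/s today, 15% monthly growth,
# planning 6 months ahead, each server handling ~10k requests/s.
current = servers_needed(120_000, 0.15, 0, 10_000)
future = servers_needed(120_000, 0.15, 6, 10_000)
print(f"provision {future - current} additional servers before the demand arrives")
```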
When implementing a hyperscale infrastructure, the supplier can play a significant role by delivering next-generation technologies that require heavy R&D investment. According to a McKinsey report, the top five companies using hyperconverged infrastructure had over $50 billion of capital invested in 2017, and these investments are growing at a rate of 20% annually.
By leveraging hyperscale Data Centers, businesses can achieve superior performance and deliver more at a lower cost and in a fraction of the time it took before. This gives businesses the flexibility to scale up on demand and the ability to continue operations without interruption.
Sify offers state-of-the-art Data Centers to ensure the highest levels of availability, security, and connectivity for your IT infrastructure. Our Data Centers are strategically located in different seismic zones across India, with highly redundant power and cooling systems that meet and even exceed the industry's highest standards.
Ensure Lower Opex with Data Center Monitoring
Data Centers are the backbone of today's IT world. Growing businesses demand that their Data Centers operate at maximum efficiency. However, building, maintaining and running Data Centers involves considerable operational expense, so it is important for companies to look for options that lower the Opex of their Data Centers. Proper capacity planning, advanced monitoring techniques and predictive analysis can help companies achieve these goals and support business growth. Real-time monitoring helps Data Center operators improve the agility and efficiency of their facilities and achieve high performance at a lower cost.
Today's digital world requires constant connectivity, which in turn requires round-the-clock availability. Yet several things can cause outages: an overloaded circuit, an air-conditioning unit malfunction, overheating of unmonitored servers, failure of a UPS (uninterruptible power supply) or a power surge. So how do we ensure availability? Implementing DCIM (Data Center Infrastructure Management) technologies can help improve reliability. DCIM systems monitor power and environmental conditions within the Data Center; they help build and maintain asset databases, facilitate capacity planning and assist with change management. Real-time monitoring helps improve availability and lower Opex.
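A minimal sketch of what DCIM-style monitoring does; the thresholds and sensor names below are assumptions for illustration, not values from any specific product. Live readings are compared against safe limits, and anything that could cause an outage is flagged.

```python
# Hypothetical thresholds; real DCIM tools ship with vendor- and site-specific limits.
THRESHOLDS = {
    "inlet_temp_c":   (18.0, 27.0),   # acceptable inlet air temperature range
    "humidity_pct":   (20.0, 80.0),
    "ups_load_pct":   (0.0, 80.0),    # keep headroom below UPS capacity
    "circuit_load_a": (0.0, 16.0),
}

def check_readings(readings):
    """Return a list of alerts for readings outside their safe range."""
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS.get(metric, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append(f"{metric}={value} outside [{low}, {high}]")
    return alerts

print(check_readings({"inlet_temp_c": 31.5, "ups_load_pct": 92.0, "humidity_pct": 45.0}))
```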
Servers and electronic devices installed in Data Centers generate a lot of heat, and overheated devices are more likely to fail. Data Centers are therefore usually kept very cool, and a large share of their power is consumed by cooling. There are various techniques and technologies that Data Center operators can implement to save energy. Recent strategies such as free cooling and chiller-free Data Centers expand the allowable temperature and humidity ranges for Data Center equipment, and implementing them helps cut energy costs. The telecommunications giant CenturyLink, faced with an electricity bill of over $80 million in 2011, looked for ways to lower this cost and implemented a monitoring program. With it, their engineers were able to safely raise supply air temperatures without compromising availability, saving CenturyLink $2.9 million annually.
Under the new guidelines from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), strategies like free cooling and chiller-free Data Centers can offer substantial savings, and one might expect operators to make use of these seemingly straightforward adjustments. However, surveys show that many Data Center operators are not yet following these techniques, and the average server supply air temperature remains far cooler than ASHRAE recommendations.
Most Data Centers are provisioned for peak loads that may occur only a few times a year. Server utilization in most Data Centers is only 12-18%, peaking at perhaps 20%, yet the servers stay plugged in 24x7x365. In effect, even idle servers draw nearly as much power as the servers doing useful work. Power distribution and backup equipment in Data Centers also causes substantial energy waste. As with cooling, owners can employ alternative strategies to improve power efficiency, most of which apply on the compute side. Increasing the density of the IT load per rack, through server consolidation and virtualization, can offer substantial savings not only in equipment but also in electricity and space. This is an important consideration when a Data Center operates where the energy supply is constrained or real estate is expensive, as in most urban areas.
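A back-of-the-envelope Python sketch of the consolidation argument, using the utilization figures above and an assumed post-consolidation target, shows why packing workloads onto fewer, busier hosts saves equipment, electricity and space.

```python
import math

def hosts_after_consolidation(physical_hosts, avg_utilization, target_utilization):
    """Estimate how many hosts remain once workloads are packed onto fewer, busier machines."""
    useful_work = physical_hosts * avg_utilization
    return math.ceil(useful_work / target_utilization)

before = 200  # hypothetical fleet size
after = hosts_after_consolidation(before, avg_utilization=0.15, target_utilization=0.60)
print(f"{before} hosts at 15% utilization -> {after} hosts at 60% utilization")
# Fewer powered-on servers means less equipment, less electricity and less rack space.
```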
Increasing density concentrates the thermal output and changes the power requirements. The most effective way to maintain continuous availability in high-density deployments is real-time monitoring and granular control of the physical infrastructure. Power-proportional computing, that is, matching power supply to compute demand, is a recent innovation that only a few operators use to improve energy efficiency, and few use the dynamic provisioning or power-capping features already built into their servers. Raising inlet air temperatures, meanwhile, carries a risk of equipment failure. Without an in-depth understanding of the relationship between compute demand and power dynamics, power capping increases the risk that the required processing capacity will not be available when it is needed. Without real-time monitoring and management, the risk of equipment failure in a Data Center is high.
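A simplified sketch of the power-capping idea; the demand-to-power model and the numbers are assumptions for illustration only. The cap tracks compute demand but keeps a reserve margin so the required processing capacity is not starved.

```python
def power_cap_watts(demand_fraction, idle_w=120, max_w=400, reserve_fraction=0.2):
    """Set a per-server power cap that tracks demand while keeping headroom in reserve."""
    demand = min(max(demand_fraction + reserve_fraction, 0.0), 1.0)
    return idle_w + demand * (max_w - idle_w)

# Capping without understanding demand risks starving peak workloads,
# which is why the cap is always set above the measured demand.
for load in (0.1, 0.5, 0.9):
    print(f"demand {load:.0%} -> cap {power_cap_watts(load):.0f} W")
```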
Real-time monitoring gives businesses the critical information they need to manage risks in the Data Center. It improves efficiency and decreases costs, letting businesses lower Opex while still maintaining high availability.
With real-time monitoring, a small issue can be spotted before it becomes a large problem. In a smart Data Center, several thousand sensors across the facility collect information on air pressure, humidity, temperature, power usage, utilization, fan speed and much more, all in real time. This information is then aggregated, normalized and reported to operators in a specified format, allowing them to understand conditions and adjust controls in response, avoiding failures and maintaining availability.
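A toy Python sketch of that pipeline, with the record fields and units chosen only for illustration: raw readings arriving in different units are normalized to a common schema and rolled up into one summary per metric for the operator view.

```python
from collections import defaultdict
from statistics import mean

def normalize(record):
    """Convert heterogeneous sensor records to a common schema (Celsius, percent)."""
    value = record["value"]
    if record["unit"] == "F":
        value = (value - 32) * 5 / 9
    return {"metric": record["metric"], "value": round(value, 2)}

def aggregate(records):
    """Roll thousands of readings up into min/avg/max per metric for reporting."""
    grouped = defaultdict(list)
    for r in map(normalize, records):
        grouped[r["metric"]].append(r["value"])
    return {m: {"min": min(v), "avg": round(mean(v), 2), "max": max(v)}
            for m, v in grouped.items()}

raw = [
    {"metric": "inlet_temp", "unit": "C", "value": 24.1},
    {"metric": "inlet_temp", "unit": "F", "value": 78.8},
    {"metric": "fan_speed_pct", "unit": "%", "value": 63.0},
]
print(aggregate(raw))
```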
Monitoring has many benefits. Cloud and hosting providers can use monitoring data to document compliance with their service-level agreements, and operators can use it to automate and optimize control of the physical infrastructure. Real-time monitoring gives visibility at both a macro and a micro level, helping businesses improve client confidence, increase Data Center availability, energy efficiency and productivity, and at the same time reduce operational expenditure by optimizing their Data Centers with the monitoring data.