Future of the Data Center: Architectural Advances
Data is growing at an exponential rate in the modern borderless world. Over 2.5 quintillion bytes of data are generated every day across the globe. India alone is set to produce 2.3 million petabytes of digital data by the year 2020, and it is growing at a rate that's much faster than the world average. Many enterprises are also exploring online data backup in the cloud, further fueling this data explosion.
This data explosion increases the demand for storage capacities that are served by Data Centers. In just two decades, Data Centers have scaled up from the size of a room to the size of a commercial tower to accommodate this increased storage need. Besides storage, modern Data Centers are also sprucing up to handle more services. They are more connected than ever and can meet the needs of the contemporary business world. New solutions have emerged around Data Center architecture that can bring competitive advantages to users through more optimized performance. Data Centers have now become critical components of a modern IT infrastructure.
In India, we see emerging businesses growing at a fast pace, with cloud computing technologies and cloud warehouses taking the lead in storing enormous amounts of digital data. At the beginning of the 21st century, most organizations in India had captive Data Centers that were self-managed. With advances introduced in cloud technologies and specialized players adding more capabilities, the self-managed option was replaced by the outsourcing model. An increase in users, the country's economic growth, and the cost advantages of cloud-based Data Centers are some of the trends driving adoption of a cloud-based architecture. Captive Data Centers are expensive to run and challenging to scale; cloud-based Data Center architectures are more flexible.
Many new technologies, services, and facilities that were premium and rare earlier are now part of standard offerings in modern Data Centers. These services are reshaping the way businesses operate today.
Another trend to note is the emergence of Modular 4th generation Data Centers. These Data Centers comprise modular units that help in quickly scaling up the infrastructure. In addition to the components in the racks being modular, the building itself could be modular. For instance, some Data Centers are built in shipping containers. Scaling up means adding more shipping containers with Data Center units.
Resolving the Challenges
Many challenges of the past have now been resolved with architectural advances in the Data Center space. For instance, POD architecture for SaaS assigns a set of machines to a specific job or customer for all of its required tasks. To create redundancies for power and cooling in a Data Center, a lot of assembly needs to be done, which can incur a cost; you may also need to construct additional racks. However, a POD comes with free-standing frames that are compatible with most equipment, so it can be used for all needs including power, cooling, and cabling. Your need for construction within the Data Center facility is thus minimized. It can simplify infrastructure scaling to support your digital growth. It is a standardized deployment that can automate user provisioning. It allows you to use shared storage, firewall, and load balancing while customizing individual PODs as per your business needs. When scaling up users, you would not need to scale up your whole infrastructure but only add or remove specific resources user-by-user, which can help reduce overheads.
While Data Centers serve as an ideal place to run your critical applications, operating them has been a big challenge in the past. A Data Center is affected by many environmental factors that add inevitable complexities. A Data Center operator needs to take care of the cooling needs of Data Centers as well as maintain correct levels of airflow and humidity in the storage spaces. These challenges make it worthwhile for companies to try cloud-based shared storage space managed by third-party experts who could be better equipped to counter these problems. In modern facilities, a Computer Room Air Conditioning (CRAC) unit is used instead of traditional air conditioning; it can monitor as well as maintain humidity, airflow, and temperature in a Data Center.
The future is smart!
The future of the Data Center is smart: modern Data Centers are now offering converged infrastructure, and the trend is further moving towards hyper-convergence. This has brought many advantages for Data Center operations and has also solved problems that paralyzed companies earlier. Hardware failure, for instance, once left companies at risk of losing data and struggling to rebuild their infrastructure. Siloed approaches to managing servers were another challenge that made Data Center operations expensive and complicated. With converged infrastructures, the process of managing a Data Center gets organized; with a single interface used for infrastructure management, your company becomes more proactive in streamlining operational processes and in keeping your data on the cloud safe.
While consolidation of operations through convergence makes management easier, most servers are still siloed, and that is where hyper-convergence works its magic. Hyper-converged Data Centers are software-defined Data Centers that are also called smart Data Centers. They use virtualization and converge all operational layers, including computing, networking, and storage, into a single box. With hyper-convergence, everything is on the same server, which brings improved efficiencies, reduced costs, and increased control over Data Center components.
Colocation: A trend to watch
Rethink IT: replace captive servers with cloud services. You would now need much less space for storing the same amount of data than you needed in a captive Data Center. Welcome to the concept of managed colocation!
Colocation services (or Colo) are delivered by Data Center solution providers to enhance user experience. They are driven by a hybrid cloud and provide specialized services to their users. A colocation facility is a place where customers have better control over their private infrastructure, and with increased proximity to the public cloud, they can also be closer to their customers.
A colocation service relies on the principles of abstraction, software-based provisioning, automation, unified management, and microservices. Colo facilities are highly flexible, as they can reap the advantages of both private and public cloud with a hybrid infrastructure. While the private cloud gives enhanced security and control, the public cloud makes it easy to transport data over encrypted connections and gives you additional storage space.
Modern colocation services are now shifting to Data Center-as-a-Service (DCaaS), which is a much more flexible deployment than the Software as a Service, Platform as a Service, and Infrastructure as a Service models. A hybrid DCaaS colocation architecture has a public IaaS platform, a hosted or on-premises private cloud, and a Wide Area Network (WAN) to connect the two. A major advantage of DCaaS is the change in the cost equation: DCaaS providers have high economies of scale that allow them to offer you volume-based discounts, driving your costs down. The DCaaS hybrid cloud architecture not only provides hybrid storage flexibility and a cost advantage but also other benefits like increased redundancies, improved agility, and maximum security.
A hybrid cloud combines the resources available to you on the private cloud and the public cloud and gives you the flexibility to seamlessly move your data between them. With changes in your cost structures and business needs, you can flip your resources between the two clouds anytime. If you've reached the designed capacity of your current private cloud, you can always switch to a public cloud for further expansion. For instance, cloud bursting can give you on-demand storage over the public cloud so that you can shift the increased burden on your private cloud to the public cloud during peak business seasons.
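To make the bursting idea concrete, here is a minimal sketch of the placement decision, assuming a hypothetical Cloud type and an illustrative 85% utilization threshold (neither comes from a real SDK):

```python
# Minimal cloud-bursting sketch under stated assumptions: a hypothetical
# Cloud type and an illustrative 85% utilization threshold, not a real SDK.
from dataclasses import dataclass

BURST_THRESHOLD = 0.85   # burst once private utilization passes 85%

@dataclass
class Cloud:
    name: str
    total_capacity: int          # e.g., GB of storage
    used_capacity: int = 0

    def can_fit(self, demand: int) -> bool:
        return self.used_capacity + demand <= self.total_capacity

def place(demand: int, private: Cloud, public: Cloud) -> Cloud:
    """Keep demand on the private cloud until it nears capacity, then burst."""
    utilization = private.used_capacity / private.total_capacity
    target = private if (utilization < BURST_THRESHOLD
                         and private.can_fit(demand)) else public
    target.used_capacity += demand
    return target

# Peak-season demand overflows to the public cloud once private fills up.
private, public = Cloud("private", 100), Cloud("public", 10_000)
for demand in (40, 40, 40):
    print(place(demand, private, public).name)   # private, private, public
```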
Data Center technologies are still evolving, and new architectures like hybrid cloud and hyper-convergence are taking shape. In the future, more companies will realize the benefits of these architectural advances and enjoy far higher capacities and more sophisticated Data Center management capabilities.
Sify offers state-of-the-art Data Centers to ensure the highest levels of availability, security, and connectivity for your IT infra. Our Data Centers are strategically located in different seismic zones across India, with highly redundant power and cooling systems that meet and even exceed the industry's highest standards.
How to leverage hyperscale Data Centers for scalability
Modern Data Centers are synonymous with massive high-speed computational capabilities, data storage at scale, automation, virtualization, high-end security, and cloud computing capacity. They hold massive amounts of data and provide sophisticated computing capabilities. Earlier, a simple network of racks with storage units and a set of management tools to manage them individually were enough. The architecture was simple to understand, and only local resources were consumed in its operation.
However, as organizations became increasingly internet dependent, data volumes exploded, with more added by social media and by sensing devices that grew manifold. Remote access to this data through the Web emerged as the trend. The local tools used earlier in traditional Data Centers were fragmented and inefficient at handling not just the volumes but also the complexities, which in effect demanded a larger infrastructure. Scaling up was a challenge when companies expanded, and performance dipped when peak loads had to be handled. This led to the evolution of hyperscaling as a solution.
Hyperscale is based on the concept of distributed systems and on-demand provisioning of IT resources. Unlike a traditional Data Center, a hyperscale Data Center calls on a large number of servers working together at high speeds. This ability gives the Data Center the capacity to scale both horizontally and vertically. Horizontal scaling involves on-demand provisioning of more machines from the network when scaling is required. Vertical scaling is about adding power to existing machines to increase their computing capacities. Typically, hyperscale Data Centers have lower load times and higher uptimes, even in demanding situations like high-volume data processing.
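As a toy contrast of the two modes, the sketch below uses hypothetical Server and Cluster types rather than a real orchestration API: scale_out adds machines (horizontal), while scale_up adds power to the machines you already have (vertical).

```python
# Toy contrast of horizontal vs. vertical scaling, with hypothetical
# Server/Cluster types rather than a real orchestration API.
from dataclasses import dataclass, field

@dataclass
class Server:
    vcpus: int

@dataclass
class Cluster:
    servers: list = field(default_factory=list)

    def capacity(self) -> int:
        return sum(s.vcpus for s in self.servers)

    def scale_out(self, n: int, vcpus: int = 8):
        """Horizontal: provision n more machines from the network."""
        self.servers += [Server(vcpus) for _ in range(n)]

    def scale_up(self, extra_vcpus: int):
        """Vertical: add computing power to the machines we already have."""
        for s in self.servers:
            s.vcpus += extra_vcpus

cluster = Cluster([Server(8), Server(8)])
cluster.scale_out(2)        # 4 servers x 8 vCPUs -> 32 vCPUs total
cluster.scale_up(8)         # 4 servers x 16 vCPUs -> 64 vCPUs total
print(cluster.capacity())
```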
Today, there are more than 400 hyperscale Data Centers operating in the world, with the United States alone hosting 44% of the global Data Center sites. By 2020, the hyperscale Data Center count is expected to reach 500, as predicted by Synergy Research Group. Other leading countries with hyperscale Data Center footprints are Australia, Brazil, Canada, Germany, India and Singapore.
Hyperscale Data Centers Can Do More in Less Time and at Lower Cost
A traditional Data Center typically has a SAN (Storage Area Network) provided mostly by a single vendor. The machines within the Data Center would be running on Windows or Linux, and multiple servers would be connected through commodity switches. Each server in the network would have its own local management software installed, and each piece of equipment connected to it would have its own switch to activate the connection. In short, each component in a traditional Data Center would work in isolation.
In contrast, a hyperscale Data Center employs a clustered structure with multiple nodes housed in a single rack space. Hyperscaling uses storage capacities within the servers by creating a shared pool of resources, which eliminates the need to install a SAN. Hyperconvergence also makes it easier to upgrade the systems and to provide support through a single-vendor solution for the whole infrastructure. Instead of having to manage individual arrays and management interfaces, hyperscaling means integration of all capacities, such as storage, management, networks and data, managed from a single interface.
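A minimal sketch of that shared-pool idea, assuming hypothetical node names and capacities (this is not a real HCI product API): each node's local disks are aggregated into one logical datastore, so no separate SAN is needed.

```python
# Sketch of the shared storage pool behind hyperconvergence: node-local
# disks aggregate into one logical pool instead of a separate SAN.
# Node names and capacities below are illustrative assumptions.

nodes = {"node-1": 4_000, "node-2": 4_000, "node-3": 2_000}  # GB of local disk

def pooled_capacity(nodes: dict) -> int:
    """One logical datastore spanning every node's local storage."""
    return sum(nodes.values())

print(pooled_capacity(nodes))   # 10000 GB visible as a single pool, no SAN
```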
Installing, managing and maintaining a large infrastructure consisting of huge Data Centers would have been impossible for emerging companies or startups with limited capital and other resources. However, with hyperconvergence, even microenterprises and SMEs, as well as early-stage startups, can now enjoy access to a large pool of resources that are cost-effective and provide high scalability in addition to flexibility. With hyperconvergence, these companies can use Data Center services at a much lower cost, with the additional benefit of scalability on demand.
A hyperscale Data Center would typically have more than 5,000 servers linked through a high-speed fiber-optic network. A company can start small with only a few servers configured for use and then, at any point in time, automatically provision additional storage from any of the servers in the network as its business scales up. Demand for additional infrastructure is estimated based on how workloads are increasing, and proactive steps can be taken to scale up capacities to meet the increasing need for resources. Unlike traditional Data Centers that work in isolation, hyperscale infrastructures depend on the idea of making all servers work in tandem, creating a unified system of storage and computing.
When implementing a hyperscale infrastructure, the supplier can play a significant role through the delivery of next-gen technologies that need high R&D investments. According to a McKinsey report, the top five companies using hyperconverged infrastructure had over $50 billion of capital invested in 2017, and these investments are growing at a rate of 20% annually.
Leveraging hyperscale Data Centers, businesses can achieve superior performance and deliver more at a lower cost and in a fraction of the time than before. This provides businesses with the flexibility of scaling up on demand and an opportunity to continue operations without any interruptions.
Sify offers state-of-the-art Data Centers to ensure the highest levels of availability, security, and connectivity for your IT infra. Our Data Centers are strategically located in different seismic zones across India, with highly redundant power and cooling systems that meet and even exceed the industry's highest standards.
How to orchestrate workloads between public and private clouds
Imagine how an orchestra combines a multitude of instruments to create a symphony. In the same way, a hybrid cloud orchestrates, skillfully combining public and private cloud to create a seamless cloud infrastructure. As multiple applications on a public-private hybrid infrastructure can add complexity, orchestration lets a centralized structure be created to manage multiple applications from a single interface. Differences in bandwidths, workloads, and access controls can all be managed by this orchestration software.
Integration of different technologies in a hybrid infrastructure determines how effective the orchestration is. For seamless integration, the compatibility between different systems and applications must be ensured, so that orchestration of workloads between public and private clouds is seamless, providing the needed high-performance compute. In the absence of orchestration, enterprises using a hybrid cloud would be forced to manage the public and private clouds in silos, which can put pressure on their resources and demand additional overheads. In addition, orchestration provides the benefit of streamlining resources for coordination, making it easier to manage multiple workloads.
Orchestration on the Private Cloud
A private cloud may not be cheap, but it brings its own advantages. It gives greater control over assets and provides enhanced cloud security, resiliency, and flexibility to the system. With private cloud orchestration, automation of the infrastructure can be managed by establishing workflows that run without human intervention. While private cloud automation initiates processes automatically, orchestration results in a unified structure of workflows. In this arrangement, resources can be provisioned as needed to optimize the workloads on a private cloud. Thus, an organization can realize savings in engineering time and IT costs.
What does orchestration involve?
Orchestration enables a coordinated deployment of automation services on the cloud. Cloud orchestration happens at three levels: resource, workload, and service. At the resource level, IT resources are allocated; at the workload level, they are shared. At the service level, services are deployed so that shared resources are optimally utilized. While individual automation only takes care of a single task, orchestration automates end-to-end processes. It is similar to creating a process flow that automates a sequence of automated tasks. The workflows created in the process enable technologies to manage themselves. There are many orchestration tools available in the market that organizations can use based on their individual requirements. Some popular tools are Chef, Puppet, Heat, Juju and Docker. Chef is used at the OS level, while Puppet is more popular at the middleware level. Heat is an orchestration method developed within OpenStack, and it can orchestrate everything in OpenStack. Juju is used at the service level, while Docker serves both as a tool for orchestration and as a technology for virtualization.
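The distinction between automating a single task and orchestrating an end-to-end process can be sketched in a few lines of Python. The step functions below are hypothetical stand-ins for the templated tasks that tools like Heat or Juju manage for you:

```python
# Sketch of orchestration vs. automation: each step is an individually
# automated task; running them as one ordered, context-passing workflow
# is what orchestration adds. Step functions are hypothetical stand-ins.

def provision_vm(ctx):
    ctx["vm"] = "vm-101"                    # automation: create a server

def attach_storage(ctx):
    ctx["disk"] = f"disk-for-{ctx['vm']}"   # automation: add a volume

def configure_network(ctx):
    ctx["ip"] = "10.0.0.7"                  # automation: wire it up

def deploy_service(ctx):
    ctx["app"] = f"app on {ctx['vm']} at {ctx['ip']}"

WORKFLOW = [provision_vm, attach_storage, configure_network, deploy_service]

def orchestrate(workflow):
    """Run automated steps in order, sharing context: the orchestration."""
    ctx = {}
    for step in workflow:
        step(ctx)
    return ctx

print(orchestrate(WORKFLOW))
```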
Workload placement considerations
In a hybrid cloud, both public and private applications generate different workloads. To manage these workloads, and handle their seamless switching between public and private cloud infrastructure, an appropriate cloud strategy is needed. Distributing the workload between different IT assets is a business decision in which regulatory compliance requirements, trade-offs, business risks, cost, and growth priorities are taken into consideration. For instance, countries like China may have federal restrictions on the use of the internet, for which a private WAN can be deployed. Cost can be a concern for an organization looking to provide last-mile connectivity over a private cloud, but with public infrastructure used to address the service needs of remote locations, cost savings can be realized. A private or hybrid cloud may require establishing an in-house team for IT support, while a public cloud can work with little or no in-house cloud expertise.
Technical parameters such as data volume, performance, security, and integration are considered when orchestrating workloads between different cloud deployments. Based on the level of importance each of these attributes carries, workloads can be shifted between the public and private cloud, as sketched below. Public clouds could be deployed for workloads that require a higher level of security but a lower level of integration and performance, such as CRM and information systems. When a continuous demand for a higher level of integration arises, an organization may have to add a private cloud to the IT infrastructure. Workloads like file printing, networking, and systems management may work with either type of cloud. However, if data volumes grow, the public cloud would not suffice, and the organization should shift the workloads to a private cloud. Applications like enterprise resource planning, data marts, and Big Data analytics make use of high volumes of data that need a private cloud to manage.
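A hedged sketch of that reasoning: the function below rates a workload on the four technical parameters (1 = low, 5 = high) and suggests a placement. The thresholds are illustrative assumptions, not an industry standard.

```python
# Hedged sketch of workload placement; ratings run 1 (low) to 5 (high),
# and the thresholds are illustrative assumptions only.

def place_workload(data_volume: int, performance: int,
                   security: int, integration: int) -> str:
    if data_volume >= 4 or integration >= 4:
        return "private cloud"   # ERP, data marts, Big Data analytics
    if max(data_volume, performance, security, integration) <= 2:
        return "either cloud"    # file printing, systems management
    return "public cloud"        # CRM-style workloads

print(place_workload(data_volume=5, performance=4, security=3, integration=5))
print(place_workload(data_volume=2, performance=2, security=2, integration=1))
```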
A true hybrid cloud allows for easy migration of workloads between public and private clouds. It is always wise to develop a hybrid cloud strategy around workloads, as this makes workload portability possible and lets traditional applications be bridged with modern applications across the cloud infrastructure. When cloud deployments are planned based on the changing needs of the workloads using orchestration, the enterprise can make its IT infrastructure more optimized, flexible, and adaptive.
Sify's many enterprise-class cloud services deliver massive scale and geographic reach with minimum investment. We help design the right solution to fit your needs and budget, with ready-to-use compute, storage and network resources to host your applications on a public, private or hybrid multi-tenant cloud infrastructure.
Hyperconvergence and Its Growing Importance
For years, Data Centers formed the backbone of IT infrastructure, relying primarily on hardware-driven convergence to assemble the several parts of the IT function. However, in this type of infrastructure, the compute, storage, and networking components are discrete and thus difficult to manage. Hyperconvergence is a system that creates a pool of IT resources, which can be shared over a network by different users.
Going a step further, hyperconvergence brings the components together so that they appear as a single entity to the user, through the creation of clusters of nodes. In a hyperconverged environment, the components cannot be separated. The software-defined elements are implemented virtually, with seamless integration into the hypervisor environment. ComputerWorld called hyperconvergence a "Data Center in a box" as early as 2012, when the concept was only catching on. Today, hyperconvergence is a $5 billion industry and is growing fast.
Why is Hyperconvergence So Popular?
Infrastructure development used to entail huge spending, as many infrastructure units like SANs and blade servers were required. Hyperconvergence brought a pre-integrated infrastructure bundled in a single box, managed with a unified layer. Significant cost savings are realized, as a hyperconverged infrastructure can be built on low-cost hardware.
Pure software-based architectures and hybrid options make the hyperconverged infrastructure model highly flexible. With this flexibility, organizations need to deploy resources only on an as-needed basis, and investment is needed only for the required storage. In an era of unpredictable data growth, this flexibility is a real reward.
The integrated architecture of a hyperconverged infrastructure makes it robust in handling not just business database processing workloads but also mission-critical applications. High-performance computation, speed, flexibility, agility, and cost saving are a few benefits of hyperconverged infrastructure. These capabilities have been attractive enough for organizations to move to hyperconverged infrastructure in the past few years.
Need for Hyperconvergence in Business
Today, mission-critical applications have to deal with demanding workloads that fluctuate based on real-time interactions between users and machines. Workloads of a magnitude unimaginable in the past are now coming to the forefront: Big Data, IoT, Machine Learning, and Artificial Intelligence. These workloads provide business advantages such as operational efficiencies and a competitive edge by delivering actionable insights, but handling them requires an infrastructure with inherent scalability. Hyperconvergence fits this requirement exactly. WAN latency, dreaded earlier with Tier 1 workloads, is no longer a concern, thanks to hyperconvergence.
Modern businesses host their solutions on the cloud. Whether it is a private cloud, public cloud or hybrid cloud, HCI (Hyperconverged Infrastructure) can streamline the resources with finesse. Regulatory compliance and other types of compliance become easier with HCI. A hybrid setup consisting of cloud and on-premises infrastructure provides greater flexibility, data privacy, and data protection when coupled with HCI.
How does HCI make a difference to a business? If the cost of analysis consumes an unreasonable share of the organization's business cost, it becomes difficult to sustain. Hyperconvergence addresses this problem by offering high scalability, on-the-go adoption flexibility, and resource optimization to ultimately reduce overheads and deliver value.
An example of how HCI can deliver scalability and flexibility to a business is a VDI rollout. Before any company does a VDI rollout on a large scale, the first step is to test system performance with a minimum viable product. However, when this platform is configured to scale to meet growing business demands, a single error in performance prediction could prove costly, compromising the whole rollout. Hyperconvergence makes this process reliable, as nodes can be added to clusters at any time, so scaling for performance is no longer an issue.
Data Center consolidation is another important area of HCI application that brings savings for a business through resource optimization. Hyperconvergence not only brings down the amount of hardware by eliminating the SAN from the network but also reduces the variety of components used, through data compression and deduplication. Cutting down on both software and hardware components cuts the cost of physical infrastructure and reduces complexity.
How HCI Can Shape Businesses
Several applications and benefits have emerged with the wide adoption of HCI, and here are some that are shaping modern businesses today.
Digital Transformation with HCI
Digital transformation is seeing wider adoption as companies are keen to embrace evolving technologies and build sustainable competitive advantages. HCI supports this shift by giving them a single flexible platform to consolidate resources and virtualize workloads. The HCI environment can serve as the backbone of the organization's digital transformation strategy. It is robustly designed for scalability, so as the organization's workloads increase, additional components can be provisioned to meet the spurt in demand. Compared to a traditional 3-tier architecture, this flexibility delivers savings in TCO (Total Cost of Ownership).
Resource Optimization with HCI
Many businesses still use manual processes for provisioning services, severely constraining IT efficiencies. This also demands significant CAPEX, in addition to causing increased workload. HCI can save both the cost and the time that go into provisioning by providing a low-cost infrastructure and several automation features. The organization therefore needs to spend much less time provisioning, managing, operating, and maintaining IT assets.
Moreover, the organization's network, storage, and compute can all be scaled up on demand in a pay-per-use model, propelling optimization. This means that every added component gives more value at a lower cost, and the organization's assets get utilized more efficiently.
With HCI come many benefits that are acting as triggers for its wider adoption and growth. More businesses keen on enhancing their enterprise capabilities will adopt HCI for their needs, and the future looks promising for this technology.
Sify's many enterprise-class cloud services deliver massive scale and geographic reach with minimum investment. We help design the right solution to fit your needs and budget, with ready-to-use compute, storage and network resources to host your applications on a public, private or hybrid multi-tenant cloud infrastructure.
M2M is shifting to M2M: Man-to-Machine gives way to Machine-to-Machine
The final eulogy to "Man is the Master of the Machine" has been written.
The movie Terminator 3 delves into the takeover of Earth by machines until the very end, when the machine itself has a change of heart. However ominous those signs are, what is undeniable is that the age of machines is upon us.
From mere input takers, machines now make sense of mountains of data, cataloguing it, analysing it and delivering a seemingly human-like interpretation of it; they have become the new indispensable smartphone of today's Enterprise. Within this paradigm, the original input feeder, man, is now relegated to building strategies on top of the results that the machine spews out to him. The shift from Man-to-Machine to Machine-to-Machine is now here to stay.
The component that has built itself into an indispensable position in this entire equation is the Data Center. Not the legacy colocation versions, but the new-age, intelligent data player that offers compute, storage and analytics, and cohabits with the Cloud and Applications within itself. One that is intelligent and elastic enough to accommodate the growing data demands of tech-dictated enterprises.
The Data Center, named rather insipidly after the very reason for its existence, is now a chameleon in the entire IT structure. In some cases, it is the eventual residency point for the data. In others, it is the starting point of information that has been decoded and awaits a decision. And in between, it is the binding agent in an ever-expanding network.
Come to think of it: what if it were removed from the equation? Or perhaps, more benevolently, scaled down to just a rudimentary Data Center? Before we answer that question, here's something of an analogy.
Imagine if all your data were not retrievable one not-so-fine morning. Would we see a repeat of the dark ages? Perhaps so. It is therefore no exaggeration when data is referred to as the new economy.
So, what size of data is the world coming to? Here's a curtain raiser.
Shantanu Gupta, director of Connected Intelligent Solutions at Intel, introduces the next-generation prefixes for going beyond the yottabyte: the brontobyte and the gegobyte.
A brontobyte, which isn't an official SI prefix but is apparently recognized by some people in the measurement community, is a 1 followed by 27 zeros. Gupta uses it to describe the type of sensor data we'll get from the Internet of Things. From there, a gegobyte (10 to the power of 30) is just a short distance away.
Now imagine the computational strength required to make sense of this volume. Companies will hence need a future-proof strategy in place for collecting, organizing, cleansing, storing, and securing data, and for applying analytics to derive real-time insights to transform their businesses.
A story in Information Management highlights "Big Data Analytics: The Currency of the 21st Century Enterprise." Quite an interesting read. The gist of the argument: personal data has an economic value that can be bought, sold, and traded.
Emerging technologies are driving transformation within organizations. The year 2019 will see Artificial Intelligence (AI) and Machine Learning (ML) driving change in enterprises. We already see numerous use cases of these emerging technologies in industries such as BFSI, healthcare, telecom, manufacturing, and home automation. These technologies can cull data and deliver real-time insights about the business, offering timely solutions or corrective action, often without human intervention. AI-backed automation and predictive analytics will help predict challenges that may arise; they will streamline operations, save costs, enhance customer experience, and perform repetitive tasks. While the adoption of ML technologies will lead to exponential growth of enterprise data, the accuracy of the outputs is a function of the sanctity of the inputs.
That calls for a trustworthy Data Center partner, not only to store the data but also to analyze and manage it. The ideal Data Center partner should do both: cater to current requirements and adapt to the changing IT landscape.
According to a Frost & Sullivan report, from an APAC standpoint the Data Center services market will grow at a compound annual growth rate (CAGR) of 14.7% from 2015 to 2022, reaching US$31.95 billion by the end of 2022. Specifically, the India Data Center market is expected to reach values of approximately $4 billion by 2024, growing at a CAGR of around 9% during 2018-2024. Major cities such as Mumbai, Bangalore, and Hyderabad are witnessing high investments by local and international operators in the Indian market. The increasing construction of hyperscale facilities with power capacities of over 50 MW will fuel the need for innovative infrastructure in the market over the next few years.
A recent study of 500 international Data Centers threw up key insights into what constitutes a well-thought-out Data Center strategy, one that ticks the right boxes for an Enterprise when selecting a DC partner.

It is therefore evident that the Data Center should be built to solve a business problem, both current and future, should have the flexibility to adapt to changing demands, and should be agile enough to accommodate newer dynamics of the business. The paradox in the situation is that as the Data Center grows, the density of the data within it will also expand; all this on hardware that will significantly shrink. Computing power therefore becomes the differentiator and will help negate any pushbacks that volume will bring up.
It is not lost on DC players that security is the other differentiator. If this data falls into the wrong hands, it could create havoc, resulting in million-dollar losses for corporations. It would impact the credibility of trustworthy institutions entrusted with sensitive consumer data. Here are two recent incidents.
- In January 2019, the HIV-positive status of 14,200 people in Singapore was leaked online. Details including identification numbers, contact details, and addresses were available in the public domain.
- In December 2018, a cyber-attack exposed the records of 500 million guests of the hotel giant Marriott International. The attack occurred over a period of four years and was traced back to a Chinese spy agency.
The emphasis on security and compliance is even stronger now with the European Union's General Data Protection Regulation (GDPR). In fact, GDPR is hailed as one of the most critical pivots in data privacy rules in the past two decades. It is going to fundamentally change how data is handled, stored, and processed.
Given the geography-agnostic nature of such attacks, Indian IT companies too must be wary of an impending attack. The Government-steered Personal Data Protection Bill mandates stringent rules for security, consent of customers, data privacy and data localization. Indian businesses will need to realign their Data Center strategies to comply with this Bill, which could eventually become law. This law will push business leaders to rethink identity and access security, encryption, data systems and application security, cloud security, and DDoS, among other things. And that's where machine-to-machine will score higher. Little wonder that CIOs favour automating the whole, or at least a majority, of the work chain.
Machine-to-machine allows for predictable, systemic patterns, enabling hyperscale computing, deep-dive analytics, trend spotting, vulnerability recognition and elimination, risk mitigation, and even alternate computing, without the vulnerabilities of man-to-machine direction. The choice in front of the CIO, therefore, is to go with a service provider who is an SI or an IT architect who has provisioned the entire landscape and hence can implement machine-derived, predictable, automated results.
Does this mean it is the end of human thinking? Quite the contrary: it all started because of human thinking.
Sify has always taken pride in supporting technology advancements since the launch of its first Enterprise Data Center in 2001, and we invite you to download a copy of Gartner's Market Guide, which tracks the evolution of the Data Center Services Market in India and highlights the wider choice of providers, hosting locations and services.
A CFO for All Seasons: Interview with M P Vijay Kumar, CFO, on CFOThoughtLeader.com
"I have one primary agenda for the next 12 months: to ensure that the organization has enough support available across all of the functions to enable scale. I want to ensure that every part of the organization is in a position to enable scale and monetize market opportunities," explains Vijay.
Listen to the Interview
At times, it must seem to Vijay Kumar that his 12-year tenure as a CFO has been spent not at one company, but three. This must be a sense that most C-suite members at Sify Technologies likely experience in light of the companyβs appetite for continuous reinvention.
Back in 2007, when Kumar arrived at the information and communications technology company, Sify was widely known as a consumer business, and one perhaps without the will or resources to attract business customers. As CFO, Kumar was part of a management team tasked with changing that perception both inside and outside of Sify's existing world. More specifically, Kumar and his finance team were responsible for calculating and tracking the necessary capital expenditures that could provide the new business-to-business infrastructure that business customers would demand. Of course, no sooner was the infrastructure in place than Sify decided to super-size its business services menu, making it a bona fide provider of technology services. Looking forward, Kumar says that Sify's latest innovation involves not so much its customer offerings as how customers buy those offerings, using outcome-based pricing. This is an approach that Kumar believes will empower Sify to open a new chapter of growth.
https://www.cfothoughtleader.com/cfopodcasts/493-vijay-kumar-cfo-sify-technologies/
Cloud Service Models Compared: IaaS, PaaS & SaaS
Cloud computing has been dominating business discussions across the world, as it is consumed by the whole business ecosystem and serves both small and large enterprises. Companies face a choice between three predominant models of cloud deployment when adopting the technology for their business. A company may select from the SaaS, PaaS, and IaaS models based on its needs and the capabilities of the cloud service models. Each model has its own inherent advantages and characteristics.
SaaS (Software as a Service)
SaaS service models have captured the largest share of the cloud world. In SaaS, third-party service providers deliver applications, while access to them is granted to the client through a Web interface. The cloud service provider manages everything including hardware, networking, runtime, data, middleware, operating systems, and applications. Some SaaS services that are popular in the business world are Salesforce, GoToMeeting, Dropbox, Google Apps, and Cisco WebEx.
Service Delivery: A SaaS application is made available over the Web and can be installed on-premises or executed right from the browser, depending upon the application. As opposed to traditional software, SaaS-based software is delivered predominantly under subscription-based pricing. While popular end-user applications such as Dropbox and MS Office Apps offer a free trial for a limited period, their extended usage, integrations, and customer support could come at a nominal subscription cost.
How to identify if it is SaaS? If everything is being managed from a centralized location on a cloud platform by your service provider, and your application is hosted on a remote server to which you are given access through Web-based connectivity, then it is likely to be SaaS.
Benefits: The cost of licensing is lower in this model, and it also provides a mobility advantage to the workforce, as the applications can be accessed from anywhere using the Web. In this model, everything at the back end is taken care of by the service provider, while the client can use the features of the specific applications. If any technical issues arise in the infrastructure, the client can depend on the service provider to resolve them.
When to Choose? You can choose this model if you do not want to take on the burden of managing your IT infrastructure as well as the platform, and only want to focus on the respective applications and services. You can pass on the laborious work of installation, upgrading, and management to third-party companies that have expertise in public cloud management.
PaaS (Platform as a Service)
In the PaaS service model, the third-party service provider delivers the software components and framework to build applications, while clients take care of developing the application. Such a framework allows companies to develop custom applications on top of the platform that is served. In this model, the service provider manages servers, virtualization, storage, software, and networking, while developers are free to develop customized applications. The PaaS model can work with both private and public clouds.
Service Delivery: Middleware is built into the model, which developers can use. The developer does not need to hard-code from scratch, as the platform provides the libraries. This reduces development time and enhances an application developer's productivity, enabling companies to reduce time-to-market.
How to identify if it is PaaS? If you are using integrated databases, have resources available that can quickly scale, and have access to many different cloud services to help you develop, test, and deploy applications, it is PaaS.
Benefits: The processes of development and testing are both cost-effective and fast. The PaaS model delivers an operating environment and some on-demand services such as CRM, ERP, and Web conferencing. With PaaS, you can also enjoy additional microservices to enhance your run-time quality. Additional services such as directory, workflow, security, and scheduling can also be availed. Other benefits of this service model include cross-platform development, built-in components, no licensing cost, and efficient application lifecycle management.
When to Choose? PaaS is most suited if you want to create your own application but need others to maintain the platform for you. When your developers need creative freedom to build highly customized applications and require you to provide the tools for development, this would be the model to select.
IaaS (Infrastructure as a Service)
In this cloud service model, Data Center infrastructure components are provided, including servers, virtualization, storage, software, and networking. This is a pay-as-you-go model which provides access to all services that can be utilized as per your needs. IaaS is like renting space and infrastructure components from a cloud service provider under a subscription model.
Service Delivery: The infrastructure can be managed remotely by the client. On this infrastructure, companies can install their own platforms and do their development. Some popular examples of IaaS offerings are Microsoft Azure, Amazon Web Services (AWS), and Google Compute Engine (GCE).
How to identify if it is IaaS? If you have all the resources available as a service, your cost of operation is relative to your consumption, and you have complete control over your infrastructure, it is IaaS.
Benefits: A company need not invest heavily in infrastructure deployment but can use virtual Data Centers. A major advantage of this service model is that a single API (Application Programming Interface) can be used to access services from multiple cloud providers. A virtualized interface can be used over pre-configured hardware, and platforms can be installed by the client. IaaS service providers also give you security features for the management of your infrastructure through licensing agreements.
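For example, Apache Libcloud is one library that exposes such a single API across providers. A minimal sketch follows; the credentials and region are placeholders, and exact driver arguments vary by provider:

```python
# Minimal sketch using Apache Libcloud (pip install apache-libcloud), one
# library offering a single API across cloud providers. Credentials and
# region are placeholders, not real values.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

cls = get_driver(Provider.EC2)          # swap in GCE, AZURE_ARM, etc.
driver = cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

sizes = driver.list_sizes()             # what capacity can we rent?
images = driver.list_images()           # which OS images are offered?
node = driver.create_node(              # provision a server on demand
    name="web-1", size=sizes[0], image=images[0])
```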
When to Choose? The IaaS service model is most useful when you are starting a company and need hardware and software setups. You need not commit to specific hardware or software, and you enjoy the freedom of scaling up anytime with this deployment.
You can choose between the three models depending on your business needs and the availability of resources to manage things. Irrespective of the model you choose, a cloud Data Center provides you a great cost advantage and flexibility, with experts to back you up in difficult times. Your choice of cloud service model will affect the level of control you have over your infrastructure and applications. Depending on the needs of your business, select a model after a careful evaluation of the benefits of each of the cloud service models.
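As a quick side-by-side summary, the snippet below distills the descriptions above into a responsibility matrix, showing which layer the provider manages versus the client under each model:

```python
# Responsibility matrix distilled from the model descriptions above:
# who manages each layer ("provider" vs. "client") under IaaS/PaaS/SaaS.

RESPONSIBILITY = {
    #  layer:          (IaaS,       PaaS,       SaaS)
    "application":     ("client",   "client",   "provider"),
    "runtime":         ("client",   "provider", "provider"),
    "middleware":      ("client",   "provider", "provider"),
    "os":              ("client",   "provider", "provider"),
    "virtualization":  ("provider", "provider", "provider"),
    "servers":         ("provider", "provider", "provider"),
    "storage":         ("provider", "provider", "provider"),
    "networking":      ("provider", "provider", "provider"),
}

for layer, (iaas, paas, saas) in RESPONSIBILITY.items():
    print(f"{layer:<15} IaaS={iaas:<9} PaaS={paas:<9} SaaS={saas}")
```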
Sify's many enterprise-class cloud services deliver massive scale and geographic reach with minimum investment. We help design the right solution to fit your needs and budget, with ready-to-use compute, storage and network resources to host your applications on a public, private or hybrid multi-tenant cloud infrastructure.
From Legacy to the Modern-day Data Center Cooling Systems
Modern-day Data Centers provide massive computational capabilities while having a smaller footprint. This poses a significant challenge for keeping the Data Center cool, since more transistors in computer chips means more heat dissipation, which requires greater cooling. It has come to a point where traditional cooling systems are no longer adequate for modern Data Center cooling.
Legacy Cooling: Most Data Centers still use legacy cooling systems. They use raised floors to deliver cold air to Data Center servers, and this comes from Computer Room Air Conditioner (CRAC) units. These Data Centers use perforated tiles to allow cold air to leave the plenum and enter the main area near the servers. Once this air passes through the server units, the heated air is returned to the CRAC unit for cooling. CRAC units also have humidifiers that produce steam, so they maintain the required humidity conditions as well.
However, as room dimensions have increased in modern Data Centers, legacy cooling systems have become inadequate. These Data Centers need additional cooling systems besides the CRAC unit. Here is a list of techniques and methods used for modern Data Center cooling.
Server Cooling: Heat generated by the servers is absorbed and drawn away using a combination of fans, heat sinks, and pipes within ITE (Information Technology Equipment) units. Sometimes, a server immersion cooling system may also be used for enhanced heat transfer.
Space Cooling: The overall heat generated within a Data Center is transferred to the air and then into a liquid using the CRAC unit.
Heat Rejection: Heat rejection is an integral part of the overall cooling process. The heat taken from the servers is displaced using CRAC units, CRAH (Computer Room Air Handler) units, split systems, airside economization, direct evaporative cooling and indirect evaporative cooling systems. An economizing cooling system turns off the refrigerant cycle and draws outside air into the Data Center, so that the inside air mixes with the outside air to create a balance. These systems are supplemented by evaporative cooling, which uses water to absorb energy and lower the air temperature toward the wet-bulb temperature.
Containments: Hot- and cold-aisle containment uses air handlers to contain cool or hot air and let the remaining air out. A hot containment contains hot exhaust air and lets cooler air out, while a cold containment does the reverse. Many new Data Centers use hot-aisle containment, which is considered a more flexible cooling solution as it can meet the demands of increased system density.
Close-Coupled Cooling: Close-Coupled Cooling (CCC) includes above-rack, in-rack or rear-door heat-exchanger systems. It involves bringing the cooling system closer to the server racks themselves for enhanced heat exchange. This technology is very effective as well as flexible, with long-term provisions, but requires significant investment.
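To give a feel for the sizing behind these techniques, here is a back-of-the-envelope calculation of how much airflow a cooling system must move to carry away a given IT load; the 10 kW rack load and 12 K air-temperature rise are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope airflow sizing using the sensible-heat formula
# P = rho * V * cp * dT. The 10 kW rack load and 12 K temperature rise
# are illustrative assumptions.

RHO_AIR = 1.2     # kg/m^3, density of air at room conditions
CP_AIR = 1005.0   # J/(kg*K), specific heat of air

def required_airflow_m3s(it_load_watts: float, delta_t_kelvin: float) -> float:
    """Airflow in m^3/s needed to carry away it_load_watts at a dT rise."""
    return it_load_watts / (RHO_AIR * CP_AIR * delta_t_kelvin)

flow = required_airflow_m3s(10_000, 12)                  # one 10 kW rack
print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")   # ~0.69 m^3/s, ~1464 CFM
```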
Conclusion
Companies can choose a cooling system based on their cooling needs, infrastructure density, uptime needs, space factors, and cost factors. The choice of the right cooling system becomes critical when the Data Center needs high uptime and must avoid any downtime due to energy issues.
Sify offers state-of-the-art Data Centers to ensure the highest levels of availability, security, and connectivity for your IT infra. Our Data Centers are strategically located in different seismic zones across India, with highly redundant power and cooling systems that meet and even exceed the industry's highest standards.
Five ways cloud is transforming the business world
Organizations around the globe are inclining towards cloud technology and cloud platforms for enhanced data management and security, cost-efficient services, and of course, the ability to access distributed computing and storage facilities from anywhere, anytime.
In a recent study, it was found that nearly 41% of the surveyed respondents showed interest in investing in and increasing their spending on cloud technologies, with over 51% of large and medium companies planning to expand their budgets for cloud tech. As a result of this rapidly increasing interest in and demand for cloud-based services, cloud computing service providers and cloud computing companies are on the rise.
But why is the business world rapidly shifting to cloud computing?
Cloud computing can bring about major positive transitions for businesses, as it offers grounds for innovation coupled with organizational efficiency. Let's look at the five ways in which cloud computing is transforming the business world.
1. Enhanced Operations
One of the best features of cloud computing solutions is that they can scale as a company grows. Cloud computing service providers allow companies (big or small) to move a part or all of their operations from a local network to the cloud platform, thereby making it easier for them to access a host of facilities such as data storage, data processing, and much more. Usually, cloud computing service providers have a strong and dedicated support team that can assist users through real-time communication.
Another major benefit of using a third-party service is that the responsibility for data and system management, and the associated risks, falls under the purview of the service provider. So, one can take advantage of the cloud services without having to worry about the risks.
2. Cost Reduction
Cloud computing services are highly cost-effective. Companies and businesses using the services of third-party cloud computing providers need not bear extra expenses in setting up the required infrastructure or hiring additional in-house IT professionals to install, manage, and upgrade the systems. As mentioned above, all these needs are taken care of by the cloud computing providers. Also, small companies can take advantage of the same tools and resources that are used by large corporations, without having to incur additional IT overhead. This means cloud computing services can reduce IT costs and increase the operating capital that can be steered towards improving other core areas of the business. With increased productivity, efficiency, flexibility and reduced costs, businesses can become more innovative and agile in their operations.
3. Fortified Security and Storage
Unlike in the past, cloud computing service providers are now extremely cautious about the security and safety of their users. All the sensitive user data, files, and other important documents are stored across a distributed network. Since the data is never stored on one single physical device, and encrypted passwords and restricted user access are enforced, the safety and security of user data are enhanced. Steps are taken to further protect the data by incorporating firewall and anti-malware software within the cloud infrastructure.
Furthermore, cloud computing service providers allow businesses to leverage the best quality hardware for faster data access and exchange. This further boosts the operational efficiency, speed, and productivity.
4. Improved Flexibility
With cloud services, employees can access the same resources while working remotely that they could access while working from the office. Thanks to cloud technology, employees can now work from the comfort of their homes and get work done seamlessly. Mobile devices make this even more convenient by allowing employees to enjoy the flexibility of working at their own pace while also facilitating real-time communication between them and the users. Cloud companies can thus deliver better and more efficient services by investing in a band of dedicated remote employees instead of maintaining a full house of on-site employees. This fast-increasing mobile workforce delivering quality cloud solutions is a big reason why businesses today are making the transition to cloud computing providers.
5. Better Customer Support
Today, cloud computing providers have upgraded their game by offering an array of support options for businesses to choose from. Apart from conventional telephone service, businesses can now opt for AI-powered chatbots that can interact with customers like a real human being. As most cloud computing providers offer impressive bandwidth, communication improves, allowing a firm's customer support team to handle customers' requests swiftly. The speedy and prompt delivery of support services ensures that customers don't have to wait for hours for their queries and requests to be addressed. All this together leads to a richer customer support experience.
The end result? Happy customers who endorse the brand to a larger network.
Cloud computing is a versatile platform that offers an extensive variety of solutions to the common challenges and hurdles that businesses face in their day-to-day functioning. And that is precisely why cloud computing solutions and services are increasingly penetrating the business world by the minute and transforming it for the better.
Opt for dedicated private cloud infrastructure services for your mission-critical workloads.
Learn more about SIFY'S GOINFINIT PRIVATE, an enterprise-grade, fully integrated private cloud IT platform with specific controls, compliance and IT architecture, available in a flexible consumption model.
Five major challenges during Data Center migration
Data Center migration is essential for companies looking to meet the growing demands of IT and data services. However essential, this process comes with its own set of challenges. Thus, it would be wise to tread carefully and assess both the core necessities and the challenges that usually accompany Data Center migration.
Data Center migration involves moving part or all of a company's IT operation from its existing Data Center to another, either physically or virtually (shared Data Centers via the cloud). Here are the five basic challenges, or rather considerations, to address for a successful Data Center migration:
1. Service Provider Credentials
Before rushing into a collaboration with a Data Center migration provider, it is important to assess the service provider's business: what is their track record of providing Data Center services as well as maintaining colocation Data Centers? Do they operate by purchasing Data Center services from multiple providers? What are the terms and conditions? Usually, if a provider leases services instead of having its own facilities, it can be safely said that it will not offer stability or efficiency in delivering the pivotal IT needs of a digital enterprise.
Hence, doing a little research about the provider ahead of the Data Center migration helps assess the quality of the provider's services.
2. Customer Service
Excellent customer service is demanded of any service, and Data Center migration is no exception. Providers that have created a good track record in the market have done so not only by delivering seamless and actionable IT and data solutions but also by catering to the minor troubles and issues faced by their customers. The services of good providers are accompanied by on-demand expert assistance with little wait time. Also, one has to consider whether or not a prospective provider can meet all the expectations and business demands as and when necessary. Thus, companies should always try to dig a little deeper to gain knowledge about a provider's customer service before choosing one for Data Center migration.
3. Data Center Location
The location of a service provider's Data Center is a major factor in the future operations of a company. It may be that a provider promises to deliver all the core needs of a company today, but in the future, they may falter on that promise. For instance, if a provider takes over another Data Center provider's location, there are chances that the Data Center facilities will fall within close proximity of each other. As a result, the service provider may feel it's unnecessary to have multiple Data Centers situated so close together and may shut down some chosen locations. This can become a huge inconvenience if a company's equipment sits in a phased-out location. Thus, to avoid such unsavory situations, it's best to choose a provider with their own facilities located strategically.
4. Service Bundling
Customers can hugely benefit from Data Center providers that give users access to facility resources and network connectivity. However, not all providers are able to deliver this. Providers that do not own their facilities, locations, and operations often collaborate with third-party providers or platforms, which may cease to exist in the future. And when that happens, it is sure to affect a company's operations. It might end up in a situation where customers have to make adjustments with two separate providers that may no longer be able to offer seamless and efficient services.
5. Reliability
Finally, one of the most important factors to consider for Data Center migration is the reliability of the service provider. To determine this, one has to analyze the security systems, HVAC features, OPEX, availability and uptime, and other such measures. It would be wise to choose a provider with a history of minimal service outages, since an outage can cost you dearly. Also, while choosing a provider, one should check if it is a certified Data Center that offers stable, cost-efficient, and state-of-the-art services.
These are the five core areas where companies can face numerous challenges while opting for a Data Center migration. However, they can be easily overcome if addressed with a little caution and a risk-assessment approach.
Outsource complex and time-consuming Data Center migration to Sify.
Learn more about SIFY DATA CENTER MIGRATION SERVICES and how we can be your best choice to carefully plan and perfectly execute your Data Center migration project.