Next year is shaping up to be another one marked by economic uncertainty, anxiety and threat. Cloud, data center and edge infrastructure will all feel the effects of economic and geopolitical forces, and infrastructure and operations (I&O) teams are likely to confront tightening budgets, supply chain disruptions and shortages of skilled staff.
This will not be a year to realize grand ambitions. Rather, 2023 marks a moment to refocus, retool and rethink your infrastructure. Within a crisis there is opportunity — in this case, the chance to make positive changes that may be long overdue.
As we do every year, my team at Gartner has made predictions about the most important trends for infrastructure in 2023 — and, more importantly, what engineers and architects need to do about them. Here are our predictions along with our recommendations for where to focus in the year to come.
Cloud teams will optimize and refactor cloud infrastructure
Nearly every business already makes use of cloud services, but many of those implementations are inefficient or poorly architected. In 2023, when many businesses will not undertake major new cloud expansion projects, infrastructure teams will finally have the time and space to optimize their existing cloud assets and pay down the technical debt they have incurred. However, making architectural changes to running code is not something I&O teams can do on their own: They will need to collaborate closely with software developers and business units to make mutually beneficial changes.
Cost optimization will naturally be the first concern, but remember that cost control in the public cloud is a function of architecture. For example, while it might be cheaper and more efficient to run an application in a serverless container model, migrating it would require refactoring running code and realigning operations around the new architecture. Reducing cloud spending is a job for engineers, not accountants.
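To make the link between architecture and cost concrete, consider a rough back-of-the-envelope comparison like the sketch below. Every price, traffic volume and sizing figure in it is a hypothetical placeholder rather than any provider's actual rate; the point is that the savings come from changing the execution model, not from trimming line items.

```python
# Back-of-the-envelope comparison of an always-on VM sized for peak load versus
# a serverless container billed per request. Every price and traffic figure here
# is a hypothetical placeholder; substitute your provider's actual rates.

HOURS_PER_MONTH = 730

# Always-on VM (hypothetical hourly rate).
vm_hourly_rate = 0.17                   # $/hour, placeholder
vm_monthly_cost = vm_hourly_rate * HOURS_PER_MONTH

# Serverless container billed per request and per GB-second (placeholder rates).
requests_per_month = 3_000_000
avg_duration_s = 0.25                   # seconds of execution per request
memory_gb = 0.5                         # memory allocated per instance
price_per_million_requests = 0.40       # $, placeholder
price_per_gb_second = 0.000017          # $, placeholder

serverless_monthly_cost = (
    (requests_per_month / 1_000_000) * price_per_million_requests
    + requests_per_month * avg_duration_s * memory_gb * price_per_gb_second
)

print(f"Always-on VM:         ${vm_monthly_cost:,.2f}/month")
print(f"Serverless container: ${serverless_monthly_cost:,.2f}/month")
```

The comparison only favors the serverless model if the workload is genuinely bursty and the refactoring effort is affordable, which is exactly the kind of judgment an engineer, not an accountant, has to make.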
Cloud teams should also rethink how they provide resilience. Historically, this was done by building redundancies at the infrastructure layer, but cloud-native applications build resilience into the application itself. Cloud services have commoditized sophisticated resilience and redundancy capabilities that previously only the largest enterprises could afford — use them. Aim for automated and transparent backup of containerized applications with the ability to restore workloads to multiple platforms.
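As a deliberately simplified illustration of what a portable backup means, the sketch below exports the workload definitions in a single namespace to JSON so they could be re-applied on another Kubernetes platform. The namespace name is made up, and purpose-built tools such as Velero go much further by capturing persistent volumes and driving the restore itself; this only shows the underlying idea.

```python
"""Deliberately simplified illustration of a portable workload backup: export
the definitions in one namespace to JSON so they could be re-applied on a
different Kubernetes platform. Purpose-built tools such as Velero go much
further (persistent volumes, schedules, the restore itself); this only shows
the idea. Requires the 'kubernetes' Python client and a reachable cluster."""

import json
from datetime import datetime, timezone

from kubernetes import client, config


def export_namespace(namespace: str, out_path: str) -> None:
    config.load_kube_config()          # or config.load_incluster_config()
    api = client.ApiClient()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    snapshot = {
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "deployments": [
            api.sanitize_for_serialization(d)
            for d in apps.list_namespaced_deployment(namespace).items
        ],
        "services": [
            api.sanitize_for_serialization(s)
            for s in core.list_namespaced_service(namespace).items
        ],
        "configmaps": [
            api.sanitize_for_serialization(c)
            for c in core.list_namespaced_config_map(namespace).items
        ],
    }

    with open(out_path, "w") as f:
        json.dump(snapshot, f, indent=2, default=str)


if __name__ == "__main__":
    export_namespace("payments", "payments-backup.json")   # namespace name is made up
```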
Data center teams will adopt cloud principles on-premises
IT organizations increasingly demand the benefits and operating model of the public cloud, even for those workloads that remain on-premises. Cloud providers showed businesses a better way to create and use applications and data. The pressure is on data center teams to deliver similar value by making on-prem infrastructure more cloud-like: service-centric, elastic, and highly scalable with capacity on demand and consumption-based pricing.
Data center teams are especially hard-hit by the ongoing supply chain disruptions. Gartner data continues to show huge delays in the delivery of new IT equipment. Lead times for network equipment now average 200 days, and some clients have reported delays of over a year. Data center teams must “sweat” their existing assets, rather than expecting to refresh them.
To address the first challenge, the demand for a cloud operating model, build cloud-native infrastructure on premises. At a minimum, the data center should offer container infrastructure and Kubernetes as a service. Later, expand into hosting other services, such as databases or an event bus. Cloud providers already offer these services; if your data center doesn't, developers will go to the cloud to get them, whether or not that's the best architectural choice for their workloads.
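In practice, "Kubernetes as a service" amounts to a thin self-service layer on top of the cluster. The hypothetical sketch below, using the official Kubernetes Python client, provisions a namespace with a resource quota when a product team requests capacity; the team name and quota sizes are illustrative.

```python
"""Sketch of the self-service layer behind Kubernetes as a service: when a
product team requests capacity, the platform team provisions a namespace with
a resource quota rather than handing over raw hardware. Team names and quota
sizes are illustrative. Requires the 'kubernetes' Python client."""

from kubernetes import client, config


def provision_team_namespace(team: str, cpu: str = "8", memory: str = "16Gi") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()

    # Create the namespace, labeled so chargeback or showback tooling can find it.
    core.create_namespace(
        client.V1Namespace(
            metadata=client.V1ObjectMeta(name=team, labels={"owner": team})
        )
    )

    # Cap what the team can consume so one tenant cannot starve the cluster.
    core.create_namespaced_resource_quota(
        namespace=team,
        body=client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name=f"{team}-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": cpu, "requests.memory": memory, "pods": "50"}
            ),
        ),
    )


if __name__ == "__main__":
    provision_team_namespace("checkout-team")
```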
For the supply chain problem, a variety of hosted offerings bring cloud-like economic models to on-premises infrastructure. First, colocation data centers are increasingly popular, particularly platform-based ones that provide not just floor space but hardware on demand. For all but the very largest companies, it no longer makes financial sense to build and maintain their own data centers.
Second, all the major data center hardware vendors now offer consumption-based pricing models, in which you pay only for the infrastructure you consume. Both models, on-demand colocation and consumption-based pricing, place the onus on a vendor to source and provision data center hardware as needed. In a time of widespread uncertainty around supply chains, they transfer the risks of supply chain management to a vendor that is better equipped to deal with them.
New application architectures will demand new kinds of infrastructure
Despite the challenges of the year ahead, innovation will not stop entirely. In fact, new types of workloads will require new types of infrastructure.
For example, because of the sheer amount of data now being generated outside the data center, I&O teams are turning to edge infrastructure out of necessity. Edge-based streaming analytics platforms can ingest and transform data in situ, transmitting only the results to the cloud or central data centers for further processing. This allows organizations to use cloud-based artificial intelligence and machine learning services without incurring exorbitant fees for cloud data storage or bandwidth. Edge infrastructure is quickly becoming non-optional for data-intensive use cases.
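A minimal sketch of that in-situ reduction is shown below: raw readings are summarized at the edge and only compact aggregates are forwarded. The sensor source and the transport to the cloud are placeholders; the data reduction step is the point.

```python
"""Illustrative edge pre-aggregation: a high-frequency sensor stream is reduced
to compact per-window summaries before anything leaves the site, so only a
fraction of the raw data travels to cloud ML services. 'read_sensor' and
'forward_to_cloud' are placeholders for a real device driver and transport."""

import random
import statistics
import time
from typing import Iterable


def read_sensor() -> float:
    # Placeholder: stands in for a fieldbus read or device driver call.
    return 20.0 + random.gauss(0, 0.5)


def summarize(window: Iterable[float]) -> dict:
    values = list(window)
    return {
        "count": len(values),
        "mean": round(statistics.fmean(values), 3),
        "min": round(min(values), 3),
        "max": round(max(values), 3),
    }


def forward_to_cloud(summary: dict) -> None:
    # Placeholder: in practice an MQTT publish or HTTPS POST to your pipeline.
    print("forwarding", summary)


def run(windows: int = 3, window_seconds: int = 5, sample_hz: int = 10) -> None:
    interval = 1.0 / sample_hz
    for _ in range(windows):
        window = []
        for _ in range(window_seconds * sample_hz):
            window.append(read_sensor())
            time.sleep(interval)
        # 50 raw samples per window collapse into a single record sent upstream.
        forward_to_cloud(summarize(window))


if __name__ == "__main__":
    run()
```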
Furthermore, the major content delivery networks are now offering an expanded menu of services at the edge, including serverless functions-as-a-service, hosted databases and persistent storage. This “serverless edge” or “CDN developer edge” architecture makes it possible to host sophisticated applications entirely at the edge.
I&O teams can now use edge infrastructure to satisfy data sovereignty requirements, to effect complex staged software deployments or to host a static website as close as possible to end users. In some cases, cloud IaaS may no longer be needed, since the CDN becomes your infrastructure.
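To give a feel for the kind of logic that can now live entirely at the edge, here is a hypothetical edge function covering those three use cases: data residency, canary routing for a staged rollout and answering static requests without an origin. The handler signature and request fields are illustrative, since every CDN edge platform defines its own runtime and API.

```python
"""Hypothetical edge function showing the kinds of decisions that can now be
made entirely at the CDN edge: enforcing data residency, canary-routing a
staged rollout and answering static requests without an origin. The handler
signature and the request fields are illustrative, not any vendor's API."""

import hashlib

EU_COUNTRIES = {"DE", "FR", "IE", "NL", "SE"}   # abbreviated, illustrative list
CANARY_PERCENT = 5                              # share of traffic on the staged release


def handler(request: dict) -> dict:
    country = request.get("country", "US")      # typically injected by the CDN
    user_id = request.get("user_id", "anonymous")
    path = request.get("path", "/")

    if path == "/":
        # Static content can be served from the edge without touching any origin.
        return {"status": 200, "body": "<h1>Hello from the edge</h1>"}

    # Data sovereignty: keep EU traffic on EU-hosted origins.
    origin = "https://eu.origin.example" if country in EU_COUNTRIES else "https://us.origin.example"

    # Staged deployment: deterministically route a small slice of users to the canary.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    version = "canary" if bucket < CANARY_PERCENT else "stable"

    return {"status": 302, "headers": {"Location": f"{origin}/{version}{path}"}}


if __name__ == "__main__":
    print(handler({"country": "DE", "user_id": "u123", "path": "/checkout"}))
```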
Successful organizations will make skills growth their highest priority
In 2021, Gartner highlighted the “skills crisis” among I&O teams as a key concern for 2022. As we look to 2023, this crisis has not abated. If anything, it has worsened. Lack of skills remains the primary barrier to infrastructure modernization initiatives.
I&O teams must prioritize skills growth above all else. Successful teams are already doing so: They set aside dedicated work time for employees to learn new skills on the clock, and they have established centers of excellence and/or communities of practice to share best practices and new ideas.
In a striking change from last year, demand for operational skills has outpaced demand for development skills. Gartner tracks job postings, wage data and hiring scale to identify the most-sought-after skills in the IT labor market. In the most recent survey, infrastructure-as-code and Kubernetes topped that list. Of the 20 skills rated as “Critical Need,” core operations and DevOps skills made up most of the list for the first time.
The most sophisticated I&O teams are transforming into internal consultancies to which product teams and business units can turn for expert advice on optimizing and securing infrastructure. These teams treat other internal teams as customers. Consulting teams collaborate with their internal customers as peers and advisors, rather than “parachuting in” to take over and execute a project. This transformation often starts with embracing the site reliability engineering (SRE) model.
In 2023, I&O teams must pivot to support new technologies and ways of working — all while navigating a year of economic uncertainty. I&O technical professionals responsible for cloud, data center and the edge should follow these recommendations to prepare their infrastructure and the businesses it serves for disruption in the year ahead.
Paul Delory is a research vice president at Gartner, Inc. covering topics including infrastructure automation, DevOps, virtualization, and private and hybrid cloud architectures. He is the agenda manager for the Data Center Infrastructure for Technical Professionals and Cloud and Edge Computing for Technical Professionals research agendas at Gartner.