By Chris Pearson, President, 5G Americas
In the world of wireless telecommunications, there is perhaps nothing so shrouded in mystery and complexity as the introduction of cloud-native concepts. Ours is an industry undergoing a profound digital transformation, one that is massively changing our ability to achieve economies of scale and to deliver hyper-scale, yet localized, data to billions of customers around the world, in real time and wherever there is a mobile device.
The intertwining of the mobile industry with cloud computing has been one of incredible growth and change.
It’s a lot like growing up. As a father of two teenagers, I can identify with and remember the challenges I faced during those awkward years. Our wireless industry is moving from 4G to 5G and, just like an awkward teenager, has had to grapple with growing pains as it scales in reach, coverage, bandwidth, and sophistication. Some technologies, such as network function virtualization (NFV) and software-defined networking (SDN), have been excellent milestones towards an even larger digital transformation: cloud native.
In 5G Americas’ latest white paper, 5G and the Cloud, we introduce these cloud-native concepts, grapple with how they will change 5G networks, identify challenges that remain to the adoption of cloud-native, and offer some solutions in the form of reference architectures. But first, what is the cloud? And what does it offer?
Cloud Native Concepts – So What’s the Big Deal?
At its core, cloud computing is based on the idea that by pooling a large amount of computational resources in a single logical place, you can accomplish more, and more efficiently, as you scale up those resources. Let’s take a look at a few of these ideas.
The notion of containerization stems from earlier work by companies like Sun Microsystems and VMware back in the early 2000s. Briefly, it means you can group together certain hardware resources at scale to create precisely the right recipe for whatever you want to accomplish, whenever you want to accomplish it. For instance, your laptop or PC has a central processing unit (CPU), a graphics processing unit (GPU), a sound card or chip, RAM, and some kind of storage, such as a hard drive.
Now imagine you had 10,000 PCs. It would logically make sense to put “like together with like” – 10,000 sticks of RAM together, 10,000 graphics cards together, and so forth. Different tasks require different types and amounts of resources. Rendering a motion picture in full 8K, for instance, requires far more graphics-intensive computing than, say, sending out one million text messages. So I would package up just the right amount of GPUs, CPUs, RAM, and storage in some kind of container.
A ‘system container’ behaves much like a virtual machine: perhaps a handful of CPUs and GPUs, some RAM, and part of a hard drive, all managed by its own operating system. An ‘application container’, by contrast, packages these resources around a specific task and shares whatever operating system the host is already running.
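The “recipe” idea above can be pictured in a few lines of code. This is a toy sketch, not any real orchestrator’s API: the class names and numbers are invented for illustration. A pool holds the data center’s like-with-like resources, and we carve out exactly the slice each job needs.

```python
from dataclasses import dataclass


@dataclass
class ResourcePool:
    """Pooled data-center hardware: 'like together with like'."""
    cpus: int
    gpus: int
    ram_gb: int

    def carve(self, cpus: int, gpus: int, ram_gb: int) -> dict:
        """Package up just the right amount of each resource for one job."""
        if cpus > self.cpus or gpus > self.gpus or ram_gb > self.ram_gb:
            raise RuntimeError("pool exhausted")
        self.cpus -= cpus
        self.gpus -= gpus
        self.ram_gb -= ram_gb
        return {"cpus": cpus, "gpus": gpus, "ram_gb": ram_gb}


pool = ResourcePool(cpus=10_000, gpus=10_000, ram_gb=64_000)

# A GPU-heavy 8K render and a CPU-only messaging burst each get a
# different mix, carved from the same shared pool:
render_job = pool.carve(cpus=64, gpus=512, ram_gb=2_048)
sms_job = pool.carve(cpus=128, gpus=0, ram_gb=256)
```

A real scheduler would also handle placement, isolation, and reclaiming resources when a job finishes; the point here is only that the “container” is a tailored bundle, not a fixed machine.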
This is essentially what happens in a data center, where cloud computing lives. The more units of any resource available, the more raw computing power can be diverted to a task. In some cases, such as AI, thousands of compute cores working together can produce remarkable results, something that could not have existed in the world of the late 1990s. As data centers have densified their compute power, the hyper-scale capabilities of the cloud have evolved into very powerful tools.
Do you ever wonder how many features and functions an app like Uber must put together? There’s a map, a driver database, a rider database, vehicle information, billing and tipping, price quoting, drive routing, a customer service engine, a recommendation engine – all kinds of stuff under the hood! Each of these features is itself a microservice, and each microservice calls upon different resources from the cloud.
By implementing microservices in containers, you can add new features, remove or edit existing ones, or modify processes without taking down the whole system. You can also deliver just a portion of these microservices, or the entire shebang, depending on the device your user is on. Running an Airbnb application on a PC, for instance, might provide a much richer video streaming experience and VR capabilities for seeing inside a rental than you would get on a smartphone. This is why microservices are said to provide resiliency, portability, granular application scalability, and efficient resource consumption.
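The “without taking down the whole system” point can be sketched in miniature. This is a hypothetical toy, not how any real ride-hailing app is built: each service is an independently replaceable entry in a registry, so one can be shipped or retired while the rest keep running.

```python
# Toy registry: each microservice is an independently replaceable unit.
services = {
    "map": lambda req: f"route for {req}",
    "billing": lambda req: f"invoice for {req}",
}


def handle(name: str, req: str) -> str:
    """Dispatch a request; a missing service degrades gracefully."""
    if name not in services:
        return "feature unavailable"  # no crash, the rest still works
    return services[name](req)


# Ship a brand-new feature without touching the others:
services["tipping"] = lambda req: f"tip added for {req}"

# Retire one just as easily -- the app never restarted:
del services["billing"]
```

After these two changes, `handle("tipping", "ride-42")` succeeds while `handle("billing", "ride-42")` quietly reports the feature as unavailable, which is the resiliency claim in microcosm.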
With the ability to provide specific microservices in resilient, portable containers, the old way of doing business has fallen away. In the old days, you spent years writing a piece of software and building up to a major release – much like how the movie industry works with motion pictures. Lots of development up front, followed by a huge launch, and then a skeleton crew of operations afterwards. Whatever bugs you couldn’t fix in development, you threw over the fence to the operations team!
Well that world is rapidly changing. The new world is now one of continuous upgrades and enhancements, where DevOps teams scale, monitor, improve, ensure reliability, and ship code faster than ever before. You’re no longer on product launch cycles, you’re on upgrade cycles. DevOps provides organizations with a much more fluid, agile approach for providing customers what they need. If the old world was like the Army, then DevOps is like Special Forces.
Control-User Plane Separation (CUPS)
It’s been said before that software is eating the world, and to a certain degree that may be true. In 2007, Apple changed the wireless world with the introduction of the iPhone, which many scoffed at for lacking a keyboard. It was simply a glass surface with a single home button, and all of the smartphone’s major features became digital applications.
This is what is happening in the wireless industry today. We are at the start of a massive digital transformation that is creating a separation between the ‘control plane’ and the ‘user plane’ for the management of data. The functionality of a piece of equipment is being separated from the underlying hardware and replaced with software running on top. These ideas form the basis of network function virtualization (NFV) and software-defined networking (SDN).
With all these interdependent containers and microservices running together, how do you keep your world straight? What happens if one of your containers is running on one kind of machine and the others on another? How do you ensure they can talk?
That’s where the service mesh comes in. A service mesh is simply a platform or infrastructure layer that allows for the orchestration, management, and automation of all these component parts. It lets containers and services talk to each other through their APIs, regardless of where they are located or what kind of architecture they are built on. Think of it as a universal language translator that keeps things chugging along. This is why service meshes are described as abstract and cloud-agnostic.
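The translator idea can be made concrete with a deliberately tiny sketch. This is not Istio, Linkerd, or any real mesh API; the names and “locations” are invented. Callers address services by logical name, and only the mesh knows where each one actually runs.

```python
class ServiceMesh:
    """Toy mesh: callers use logical names; the mesh knows locations."""

    def __init__(self):
        # logical name -> (physical location, handler)
        self.endpoints = {}

    def register(self, name, location, handler):
        self.endpoints[name] = (location, handler)

    def call(self, name, payload):
        location, handler = self.endpoints[name]
        # A real mesh would also add retries, encryption, load
        # balancing, and telemetry here -- invisibly to the caller.
        return handler(payload)


mesh = ServiceMesh()
mesh.register("billing", "public-cloud-us-east", lambda p: p * 1.2)
mesh.register("quotes", "on-prem-dc1", lambda p: round(p, 2))

# The caller never learns, or cares, where 'billing' actually runs:
total = mesh.call("billing", 10.0)
```

Because every call goes through the same indirection layer, the “billing” service could move from the on-premises data center to a public cloud tomorrow and no caller would change a line of code.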
Continuous Everything for Toolchains
Since everything can now talk to everything else, it should be a straightforward leap from here to perfect automation, right? Not quite. Moving to cloud native increases agility, but it also increases complexity. If everything is done right, though, continuous integration of all these containers should eventually enable seamless continuous delivery, similar to what 5G Americas described in a previous white paper, Management, Orchestration, and Automation.
Put it all together and you get this very easy-to-understand chart:
What Does this Mean for 5G Networks?
In the past, our industry has been built around large, proprietary “boxes” built on top of global, consensus-driven standards. Our industry has done a tremendous job for subscribers around the world based on this model. But that box-centric model is now starting to change. From a business standpoint, wireless operators increasingly see opportunity in adopting cloud principles to drive additional automation and improve service availability for customers.
Being more agile allows for faster innovation in key areas of networks, specifically firewall (F/W), routing, deep packet inspection (DPI), and charging and datastore functions. Incremental improvement in any of these areas means a better overall experience for wireless customers, so there has been a major push across much of the industry to move from a monolithic model towards a cloud-native approach.
For wireless networks, this process presents several challenges: ensuring software code is properly versioned and configured, integrating data caches and messaging queues, organizing application processes, keeping the development and production of new services from disrupting existing ones, and properly managing network ports.
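To make the versioning challenge concrete, here is one simplistic illustration, with an invented policy and function name, of the kind of automated gate an operator’s toolchain might apply before rolling a new service image into production.

```python
def parse(version: str) -> tuple:
    """'2.3.1' -> (2, 3, 1) for easy comparison."""
    return tuple(int(part) for part in version.split("."))


def safe_to_roll_out(running: str, candidate: str) -> bool:
    """Toy upgrade policy: same major version (no breaking API change)
    and no accidental downgrade of a live service."""
    old, new = parse(running), parse(candidate)
    return new[0] == old[0] and new >= old


safe_to_roll_out("2.3.1", "2.4.0")  # minor upgrade: allowed
safe_to_roll_out("2.3.1", "3.0.0")  # major bump: needs a migration plan
```

Real operators layer far more on top (compatibility testing, canary deployments, rollback plans), but every such pipeline starts with checks of this flavor.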
To organize this, 3GPP has created a reference architecture, based on ETSI standards, that is built around the delivery of these services to users – Service Based Architecture (SBA) for short. This reference architecture is modular, open, re-usable, and extensible, and it allows for fast production of new services. It includes an interface that lets network operators easily manage and orchestrate the new system. It would look something like this:
How are we getting there?
Ultimately, getting to cloud native is going to be a comprehensive process that will need to be done in phases. Like fixing a bridge while traffic is still flowing across it, there will be a period where the complexity of the network may actually increase before we settle into a final state. There will certainly be challenges along the way.
Broadly speaking, numerous issues remain around adopting a cloud-native philosophy, which impacts an operator’s entire workflow. There are automation and orchestration issues that could confuse how, where, and when certain services get resourced. There could be platform shortcomings where data centers are not up to speed, new network issues created by introducing new services, or even increased security risks.
Ultimately, different network operators are at different points along the path to digital maturity in adopting these cloud-native principles into their networks. But all is not lost: every challenge comes with new proposed solutions.
If experience has taught me anything, it’s that there are a lot of smart people working on our wireless networks today. The arc of progress always moves in the right direction to serve customers and enterprises that rely on wireless technology. Over the next few years, it will be interesting to witness the progress of these technology trends. The road to cloud native is paved with great ideas.
Viewpoints are the expressed opinions of independent wireless industry analysts and stakeholders. They do not necessarily reflect the opinion of the 5G Americas association or its member companies.