
Powering a new era of enterprise computing

Contributed by Charlie Foo, Vice President & General Manager, Asia Pacific Enterprise Business, NVIDIA.

Artificial intelligence (AI) is redefining enterprise computing and heralding a new era in the data center. It has become mission critical for every enterprise.

Many enterprises are already deploying NVIDIA AI in the data center, highlighting the importance of AI in today’s multi-tenant data centers, hybrid-cloud environments, and accelerated infrastructure deployments.

Powering the AI platform are NVIDIA GPUs, sophisticated chips that accelerate many aspects of AI, including both training and inference. And helping securely connect GPU-powered servers to each other and to storage is the Arm-based NVIDIA BlueField data processing unit (DPU), which ignites unprecedented innovation for modern data centers. The DPU is a new type of processor that offloads, accelerates and isolates a wide range of advanced networking, security and storage services. With the DPU, enterprises get a secure and accelerated infrastructure for any workload – including AI – in any environment, from cloud to data center to edge.

The DPU-accelerated computing platform reduces the processing burden on CPUs, improving application performance and server efficiency. DPUs can also accelerate encryption, protect application integrity, and provide security isolation, helping create a zero-trust security model.

Partnership with VMware

In 2020, NVIDIA and VMware started partnering on an end-to-end enterprise platform for AI as well as a new architecture for data center, cloud and edge that uses NVIDIA GPUs and VMware virtualization to support current and next-generation applications.

The result is the NVIDIA and VMware AI-Ready Enterprise Platform accelerated by NVIDIA GPUs and optimized and delivered on VMware vSphere 8. This represents a huge moment for enterprise computing and the modern data center. The platform can drive new AI-enabled applications such as recommender systems, speech and vision analytics, and natural language processing.

Last August, VMware and NVIDIA launched an additional collaboration around NVIDIA DPUs.


VMware vSphere on DPU

VMware vSphere 8 Plus comes with ESXi and NSX support on the NVIDIA BlueField DPU. ESXi is a hypervisor installed on the physical machine while NSX is a software-defined approach to network virtualization and security.

The latest release of the VMware solution lets users improve infrastructure performance by offloading and accelerating ESXi and NSX functions with the DPU, providing more host resources to business applications.

Certain latency- and bandwidth-sensitive workloads that previously used virtualization “pass-thru” can now run fully virtualized with similar performance in this new architecture, without losing key vSphere capabilities around VM mobility, such as vMotion and DRS.

Enterprises can depend on vSphere to manage the DPU lifecycle, reducing operational overhead. They can also boost infrastructure security by isolating infrastructure domains on the DPU.

vSphere 8 fits seamlessly into today’s data center architecture while laying the groundwork for what comes next.

Vinod Joseph, Technical Leader, APJ & China, VMware, adds: “The Network Edge is the next major evolution after the advent of the Cloud, capable of delivering more revenue services and innovative use cases for consumers, businesses, and society in general. VMware is at the forefront of building technology partnerships with NVIDIA to drive the adoption of DPUs and GPUs at the network edge, in order to build and deliver newer applications and services.”

Zero-trust security

VMware vSphere 8 with BlueField DPUs is vital to bringing hybrid clouds, multi-tenant clouds and zero-trust security to enterprises.

As enterprises continue to generate and process increasing amounts of data, they need better performance and security.

Here’s where DPUs in the new infrastructure architecture accelerate performance, free up CPU cycles and provide better security.

Running the VMware vSphere 8 NSX distributed firewall on BlueField DPUs (available as tech preview in NSX 4.0) makes every node more secure at virtually every touch point. It is like having a firewall in every single computer.

Experience the solution on NVIDIA LaunchPad

NVIDIA recently announced a new data center solution with Dell Technologies that combines Dell PowerEdge servers with BlueField DPUs, NVIDIA GPUs and NVIDIA AI Enterprise software, and it is optimized for VMware vSphere 8. The potent combination brings state-of-the-art AI training, AI inference, data processing, data science and zero-trust security capabilities to enterprises.

It is something that enterprises across industries have been clamoring for. In the telecommunications sector, Japan’s NTT Communications is deploying multi-tenant services based on the platform.

Enterprises can quickly experience the power and benefits of these technologies on NVIDIA LaunchPad, a free, hands-on lab program that provides access to hardware and software for end-to-end workflows in AI, data science and more. Customers can test vSphere 8 running on Dell servers with vSphere offloads to the BlueField DPU, for example.

There’s no need to procure and stand up infrastructure before experiencing the lab – the environment for offloading, accelerating and isolating vSphere on a DPU is ready to use.

Those interested in VMware vSphere 8 with BlueField DPUs can apply to try out the solution on NVIDIA LaunchPad.

Get Started with AI and VMware Labs | NVIDIA

Energy crisis and calls for climate action – Is there an immediate solution for telecom operators?

By Mohammad Nur-A-Alam, Head of Sustainable Analytics Products, Nokia.

With urgent action needed to address climate challenges, leaders and policymakers are preparing for the United Nations’ latest Climate Change Conference, COP27, set to be held in Egypt.

Telecom operators and enterprises are playing their part in the drive to net zero by working within the ESG frameworks of the Scope 1, 2 & 3 emission categories. However, it’s clear that working towards net zero is currently an expensive prospect, requiring a step-by-step approach based on significant technology innovations alongside policy support.

A mid-size communication service provider (CSP) consumes approximately 300 – 350 GWh of electricity a year, and with rising scarcity and the increased cost of electricity per kWh, CSPs are struggling to keep their OPEX stable. In recent discussions with top CSP executives, one question keeps recurring – how can I keep my energy costs at the same level?

As CPE/UE, RAN and core software features, SoCs and architectures for mobile, fixed and data center networks evolve, they are adding energy efficiency at every step. Adopting an end-to-end energy solution – covering both AI/ML software-based sleep and hard switch-off, together with site auxiliary savings – can reduce energy consumption by up to 60%. In actual numbers, this means a cut in electricity consumption of around 200 GWh. But how do we get there?
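As a rough sanity check on the arithmetic above (a ~330 GWh midpoint reduced by up to 60% gives roughly the 200 GWh figure cited), here is a minimal illustrative helper; the function name and inputs are ours, not Nokia’s:

```python
def annual_savings_gwh(consumption_gwh: float, reduction_pct: float) -> float:
    """Estimate annual energy savings for a CSP given a percentage reduction."""
    return consumption_gwh * reduction_pct / 100.0

# A mid-size CSP consuming ~330 GWh/year with an up-to-60% reduction
savings = annual_savings_gwh(330, 60)
print(round(savings))  # 198, i.e. roughly the 200 GWh cut cited above
```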

 

AI-based energy consumption reduction

The answer lies in an AI-based approach, with the help of data science. Key features of AI-based energy reduction products should include:

  1. AI-based soft sleep of RAN equipment – Typically, RAN sites are installed with 2G/3G/4G/5G radio & system modules. AI algorithms can assess the load on the RAN and command a soft switch-off. CSPs can usually save 8-15% on top of their traditional savings programs.
  2. AI-based hard switch-off for RAN equipment – Once the savings window is identified by the AI-based soft sleep, further cuts in energy consumption are possible with a hard switch-off. Our study shows that 10% additional savings can be achieved on top of soft sleep. However, CSPs must be careful with legacy equipment in the network – usually, newer technology like 4G and 5G equipment is more robust and can achieve these extra savings safely.
  3. AI-based AC control for site auxiliary energy savings – A typical AC unit draws 2.2 kW under load, whereas standby mode uses only 100 W. AI-based air conditioning control can massively reduce the operating time and level of cooling systems throughout the day.
  4. Intelligent fresh air ventilator for natural cooling – AI can manage the exchange of hot and cold air inside and outside equipment sites, using high precision temperature control to save on the energy used in air conditioning. Since every site is different, the AI engine and automation calculates and adapts precisely to the specific site conditions.
  5. AI-based anomaly detection – Some faulty or aged equipment in the network typically consumes more power than usual and AI should be able to flag network elements for replacement, allowing extra savings and improving network performance.
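The soft-sleep and hard switch-off ideas in points 1 and 2 can be sketched as a simple load-threshold policy. A real product would use trained models and guardrails on KPIs and QoE; the thresholds and function name below are illustrative assumptions only:

```python
def sleep_decision(predicted_load_pct: float,
                   soft_threshold: float = 20.0,
                   hard_threshold: float = 5.0) -> str:
    """Decide a radio module's power state from a model-predicted load.

    Returns 'hard_off' for near-zero load, 'soft_sleep' for low load,
    and 'active' otherwise. Thresholds here are hypothetical.
    """
    if predicted_load_pct < hard_threshold:
        return "hard_off"
    if predicted_load_pct < soft_threshold:
        return "soft_sleep"
    return "active"

# Example: overnight load forecasts for a 4G cell
for load in (2.0, 12.0, 55.0):
    print(load, "->", sleep_decision(load))
```

In practice the decision would also check that switching off a module cannot degrade coverage or KPIs, per the quality constraint discussed below.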

 

Additionally, these insights provide a view of overall network level energy consumption as well as savings for each network element, giving business analysts and management a comprehensive view to plan their next steps.

For AI-based soft and hard switch-off, operators need a product that keeps all major network performance KPIs and QoE levels intact – for critical network infrastructure, quality degradation is not an option.

 

AI-based truck roll reduction   

A CSP’s network consists of 10 – 50,000 or even more sites, depending on the country’s population and geography. These sites are installed with operational equipment to run critical network infrastructure and need periodic and reactive maintenance to keep network performance optimal. Each year sees around 100,000 site visits, each requiring truck rolls that contribute significant costs and CO2 emissions. With AI-based truck roll reduction, operators can reduce the number of site visits by approximately 15 – 20%.

 

Nokia has leading products for both AI-based energy and truck roll reductions, helping CSPs and enterprises worldwide to fight the energy crisis and tackle climate change.

Delivering Automation and Agility for Telecom Networking Transformation

Contributed by Saad Sheikh, Lead Systems Architect APJ, Dell Technologies.

Digital transformation is one of the main drivers of telecom network modernisation. While the new cloud and dis-aggregated deployment models in 5G enable this transformation, they also bring challenges that need to be addressed. Applying DevOps techniques to telecom networking infrastructure enables smooth delivery and operation through two key capabilities: automation and agility in telecom infrastructure life-cycle management.

We live in a hybrid cloud world, where automation and infrastructure life-cycle management become complex. More and more changes are expected across the different layers of Open RAN and 5G cloud-native systems. To manage these changes we must apply an automated and agile delivery approach in:

  1. Software delivery
    Inventory systems, software repositories, and automation platforms that enable the delivery of software containers in the form of DevOps Continuous Integration / Continuous Deployment (CI/CD) pipelines that integrate different customer environments across development labs, testing environments, and production systems.
  2. Test platforms
    Test frameworks and tools that enable pre- and post-deployment testing and verification, supporting both open source, vendor-agnostic tools and vendor- and Network Equipment Provider (NEP)-specific tools, to give Communication Service Providers (CSPs) a unified observability and testing layer.

Defining use cases for a Telecom DevOps

There has been interest in the industry in standardising Telecom DevOps models and how they can be realised in practice by CSPs. The most important use case for driving standardisation in Telecom DevOps is delivering a Test Automation System that drives industry collaboration to create 5G revenue opportunities.

There are challenges in accelerating and simplifying the testing of an open, modern telecom ecosystem around test automation and orchestration. CSPs need not only systems that support faster deployment, but also the operations to manage and automate applications once they are put to commercial use. The definition and architecture of test platforms is evolving, as different CSPs have different preferences and partner choices. Such test automation systems need to evolve over time to cover different use cases such as voice, data, and Open RAN, along with varying platform choices like private, public, and hybrid cloud deployments.

Dell Technologies is developing more than just a test automation system with the Solution Integration Platform in our Open Telecom Ecosystem Labs (OTEL). It addresses a very real need in the Telecom industry for a secure, reliable, multivendor testing and integration environment that is committed to open standards and 5G innovation. The Solution Integration Platform defines the platforms, connectivity, and lifecycle management processes used by the OTEL lab. It brings CSPs and industry partners together to shape telecom’s future, using the latest DevOps techniques to jointly create the open, cloud-native, standards-based ecosystem needed to deliver 5G’s full potential.

Realisation of Test Automation Platform Architecture

A reference architecture to approach such a test platform can be depicted as follows.

The architecture provides a cloud-native end-to-end solution validation platform leveraging the latest DevOps techniques to accelerate the CI/CD pipeline. With a ready-to-use automated test solution to monitor and automate test activities, the Solution Integration Platform de-risks and accelerates Open RAN and 5G cloud-native network development, particularly at the integration and testing phases, as well as during delivery and deployment, so CSPs can create new service opportunities in a competitive market.

The Solution Integration Platform performs all the necessary steps of fetching, building, deploying, sanitising, testing, and analysing software deliveries to test solutions end to end, using a variety of tools and automation techniques. It includes Dell Technologies’ own Bare Metal Orchestrator for platform deployment automation as well as other state-of-the-art test automation and DevOps infrastructure.
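The stages named above can be sketched as a minimal sequential pipeline runner. This is a generic illustration of the pattern, not the Solution Integration Platform’s actual API; the stage names come from the text, everything else is an assumption:

```python
STAGES = ["fetch", "build", "deploy", "sanitise", "test", "analyse"]

def run_pipeline(handlers: dict) -> list:
    """Run the stages in order, stopping at the first failure.

    `handlers` maps stage name -> callable returning True on success;
    stages without a handler are assumed to succeed.
    Returns the list of stages that completed.
    """
    completed = []
    for stage in STAGES:
        ok = handlers.get(stage, lambda: True)()
        if not ok:
            break
        completed.append(stage)
    return completed

# Example: every stage succeeds except 'test'
handlers = {"test": lambda: False}
print(run_pipeline(handlers))  # ['fetch', 'build', 'deploy', 'sanitise']
```

Real CI/CD systems add retries, parallelism, and artifact handoff between stages, but the stop-on-failure sequencing is the core of the pattern.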

Our Bare Metal Orchestrator illustrates our commitment to enabling full-stack operational automation at the scale and complexity of telecom network operators. And the Dell Technologies OTEL (Open Telecom Ecosystem Labs) Solution Integration Platform shows our commitment to enabling products, software, tools and services that empower the Telecom industry to deploy and manage dis-aggregated systems at scale. We are partnering with CSPs to build the solutions of the future, using the latest DevOps techniques to deliver the automation and agility needed for telecom networking transformation.

 

About the Author: Saad Sheikh

Saad Sheikh is APJ Lead Systems Architect for Orchestration and NextGen Ops in the Dell Telecom Systems Business (TSB). In this role he is responsible for supporting partners, NEPs and customers in simplifying and accelerating network transformation to open and dis-aggregated infrastructures and solutions (5G, edge computing, core and cloud platforms) using Dell’s products and capabilities, which are based on multi-cloud, data-driven, ML/AI-supported and open ways of building next-generation operational capabilities. In addition, as part of the Dell CTO team, he represents Dell in the Linux Foundation, TM Forum, GSMA, ETSI, ONAP and TIP.

Unleash IoT with Intelligent Edge Devices

Contributed by Intel.

Enable rapid business intelligence at the edge with technologies for the Internet of Things (IoT)

Edge devices gather data, usually in real time, to help improve processes, create better customer experiences, and enhance the quality of products. When empowered by the right technologies, edge computing increases intelligence across the entire business, anytime and anywhere.

What Is an Edge Device?

Today, enterprises are extending analytics and business intelligence closer to the points where data is generated. Edge computing solutions place Internet of Things (IoT) devices, gateways, and computing infrastructure as close as possible to the source of data—and to the systems and people who need to make data-driven decisions. With the right technologies in place, edge computing improves processes, automates tasks, creates better customer experiences, and much more.

Types of Edge Devices

Edge devices vary widely in physical form and capability since they serve many different purposes. Intelligent edge devices offer capabilities beyond those of RFID tags, temperature detectors, and vibration sensors. With built-in processors, these smart devices can accommodate advanced capabilities like onboard analytics or AI.

For example, intelligent edge devices used in manufacturing may include vision-guided robots or industrial PCs. Digital cockpit systems built into commercial vehicles can help support driver assistance. In hospitals, devices monitoring patients can look for changes in vital signs and notify medical personnel when needed. Smart cities are deploying IoT devices to monitor weather conditions and traffic patterns and to give citizens real-time information on public transit.


 

Edge Devices and Computer Vision

By adding cameras and computer vision capabilities to intelligent devices, systems can “see” and identify objects. This type of artificial intelligence (AI) is valuable in a range of environments, making it possible to quickly identify objects, inspect manufactured products, and more.

Machine vision is a type of computer vision used in industrial IoT (IIoT). These solutions can monitor equipment or workers to help improve productivity and enhance safety.

In a factory, for example, machine vision can automate the inspection of products speeding off an assembly line to ensure each unit meets its specifications. Defective products can be flagged for further inspection by employees.
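The inspection logic described above reduces to checking each unit against its specification. A minimal sketch, assuming the vision system has already turned each unit into a numeric measurement (the names, nominal value, and tolerance are hypothetical):

```python
def flag_defects(measurements, nominal, tolerance):
    """Return indices of units whose measurement falls outside nominal ± tolerance.

    In a real machine-vision system the measurement would come from a camera
    and an image-processing model; here it is simply a number per unit.
    """
    return [i for i, m in enumerate(measurements)
            if abs(m - nominal) > tolerance]

# Units coming off the line, against a spec of 10.0 ± 0.2 mm
units = [10.1, 9.7, 10.0, 10.4]
print(flag_defects(units, nominal=10.0, tolerance=0.2))  # [1, 3]
```

The flagged indices correspond to the units routed to employees for further inspection.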

Vision solutions can also assist with corporate security, monitor the production floor for employee safety, or help identify procedural manufacturing bottlenecks.

Estimates suggest that by 2025, 55.6 percent of all data will come from IoT devices, such as security cameras, RFID readers, industrial equipment, digital signage, medical implants, and other connected things.1

 

Connecting Edge Devices to a Network

Most edge devices must share data with an edge server (as part of an edge cloud) for storage and analysis. An efficient, secure connection for the transmission and protection of IoT data is essential.

Wireless or 5G connectivity can deliver fast data exchange across the edge network. Intel® technologies for the edge and IoT help you provision and connect devices to your network and take advantage of 5G for ultrafast speeds with extremely low latency.


 

Intel Hardware and Technologies for Edge Devices

Choosing the most efficient and economical IoT solution depends on your unique business challenges. The right solution must meet requirements for workload performance, size or power constraints, and budget. Intel offers a range of compute to enable IoT solutions from the edge to the cloud and help your enterprise realize the best return on your investments.

Intel® Core™ and Intel Atom® Processors

Intel® Core™ processors deliver the power and responsiveness to support many types of IoT devices, including kiosks, industrial PCs, and more. Select Intel® Core™ processors on the Intel vPro® platform enable additional remote manageability and help security capabilities.

Built for embedded applications, Intel Atom® processors deliver a balance of performance and low power consumption for smart cameras, mobile point of sale devices, and other edge devices where power consumption may pose a challenge.

Intel® Movidius™ Vision Processing Units (VPUs)

Intel® Movidius™ Vision Processing Units (VPUs) offer fast, low-power performance to support computer vision and AI workloads. These specially designed VPUs enable parallel programmability and workload-specific hardware acceleration for deep neural networks and computer vision applications in devices like augmented or virtual reality systems and smart cameras.

Intel® Xeon® D Processors

Edge servers bring powerful data processing, analytics, and AI capabilities closer to the point of data generation. Intel® Xeon® D processors deliver the high performance and reliability needed to accelerate compute, storage, memory, and networking at the edge. Select SKUs of Intel® Xeon® D processors offer extended temperature ratings for embedded and rugged applications with integrated Ethernet and AI acceleration.

Intel Security Technologies

Edge-based devices and systems must keep proprietary, sensitive, and personal information safe. In some industries like healthcare, data sovereignty and compliance are especially critical.

Intel’s products are architected to deliver advanced security with built-in, silicon-enabled security technologies that help protect potential attack surfaces.2 Rooted in silicon, our security technologies help create a trusted foundation for computing that customers can depend on, with product lines that span edge, endpoint, data center, cloud, and network, offering a common approach to security for simplified deployment and integration.

Intel Partner Solutions

Intel® IoT Market Ready Solutions offer your business scalable, repeatable, end-to-end solutions specially designed for your industry.

In addition, the Intel® Network Builders Edge Ecosystem helps accelerate the development, adoption, and deployment of edge-centric technologies, helping improve access to tested and optimized solutions for network edge and cloud environments.

For data center modernization, Intel® Data Center Builders unites a global network of providers. The resulting solutions and technologies are optimized to meet customer needs.

With comprehensive technologies for edge computing, IoT, 5G, and AI, Intel and our partner ecosystem are helping businesses achieve intelligence everywhere. This makes it possible to deploy powerful capabilities wherever they are needed most—from edge devices to the cloud—to transform operations and experiences.

Website: www.intel.com/edge
Contact Email: apj.edge@intel.com

The Journey to Autonomous Networks: 6 Challenges to Overcome Before Putting Artificial Intelligence and Machine Learning to Work

Contributed by Yuval Stein, AVP Technologies at TEOCO

Advanced, zero-touch network operations – what it takes to create an ‘autonomous network’ – requires telecom network engineers to take the results from AI/ML algorithms and use them efficiently within a network’s operational processes. Until now, much of the industry focus has been on the challenges of developing the proper AI; yet, sometimes forgotten, there are also challenges in putting AI results to use in a way that contributes to the quality of the services and the network. That’s what this blog post explores. But before I get ahead of myself, let’s discuss the purpose of autonomous networks.

Autonomous Networks – what they are, and why we need them

Building an autonomous telecom network is somewhat akin to building a living organism.  Just like our bodies can automatically self-regulate many of the functions that keep us alive – our stomachs digest, our lungs breathe, and our hearts beat – autonomous networks function in a similar way. The goal for Communication Service Providers (CSPs) is to create a fully automated environment that is self-configuring, self-healing, self-optimizing and self-evolving. A network that will hum along with minimal to no human intervention.

Why is this so important?  Yes, there are cost savings that can be achieved through automation, but that has become almost secondary. Let’s stick with the human body analogy for a bit longer. If we suffer a small cut or cold, our bodies automatically begin the healing process.  We don’t have to tell it what to do; it just happens automatically. If everyone had to rush to the hospital to seek the advice of a specialist for every sniffle, bump, and bruise, we would find it hard to get through the day. Job productivity would plummet, and there wouldn’t be enough medical personnel to deal with all the demand. That is the same issue for telecom networks.  Systems have become so complex and fast-moving that human intervention can’t be relied upon to fix every minor network error – there are simply too many to manage, and response times need to be immediate. Automation needs to be the default, and humans should only have to step in for the bigger issues – when absolutely needed.

It Takes a Village: Solving the Autonomous Service Assurance Stack

Communication Service Providers, companies like TEOCO, and groups like TM Forum, are working to create the software, systems and tools required to enable fully autonomous networks – and we are getting closer every day. As you can see in figure 1 below, there is a stack of service assurance steps that need to be achieved – each one building upon the one prior.

 

The 3 bottom steps in the diagram below are well established, with their own operational methodologies. CSPs know how to work with network messages, events, alarms and KPIs. However, the upper layers – including Analytics and Automation – are very different. They require incorporating and acting upon things like probabilities and forecasts, which are relatively new types of information that until now have only existed in separate silos across various departments. Now, it all needs to work together. This requires new methodologies (and APIs) for incorporating these new stages before networks can become truly ‘Autonomous’.

Figure 1- the Autonomous Network Assurance Stack

Six Human and Technical Operational Challenges for Managing AI/ML Data

Understanding how to best leverage the information and data being generated by machine learning and artificial intelligence, and how to ‘operationalize’ the ongoing use of this information, is the task at hand.

As the saying goes, the devil is in the details. In integrating these new layers, which as mentioned above work very differently than the previous layers, we’ve identified a ‘language gap,’ for lack of a better term – with both human and technical hurdles that need to be overcome.

Before we can address this gap, we need to understand it and identify exactly what challenges we are facing before we can fix them. My belief is that they are both human and technical. After all, even with automation there will always be people involved at some level. I’ve outlined some of these challenges below:

Human Challenges

  1. Lack of Trust: This is the main human challenge by far, as algorithms using deep mathematics are often not easily explainable.  The use of visual cues and explanations within the user interface, along with achieving proven results over time, helps build trust.
  2. Defining What is Actionable – and How to Act Upon It: AI and ML results are rarely black and white. Like forecasting the weather, they often involve probabilities. But instead of predicting the chance of rain, the AI/ML results may show that there is an 80% probability of a network function failure in the next 12 hours. Network engineers need to decide: is this a high enough probability to require the system to automatically change a network configuration? And is there enough information to know what that change should be?
  3. Cost Benefit Analyses: Once an issue is predicted, are we able to compare the cost and impact of the expected failure to the cost and impact of the fix? To run a network in a cost-efficient manner, these types of decisions will be required on a regular basis. And what about future impacts? If a minor network error occurs you may decide to ignore it. But what if it could lead to a larger, more costly issue down the road– how do we predict and account for these?
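Challenges 2 and 3 above amount to an expected-cost comparison: act when the probability-weighted cost of the predicted failure exceeds the cost of the fix. A minimal sketch of that decision rule, with hypothetical costs and probabilities (not TEOCO’s actual logic):

```python
def should_remediate(failure_probability: float,
                     failure_impact_cost: float,
                     fix_cost: float) -> bool:
    """Act when the expected cost of doing nothing exceeds the cost of the fix."""
    return failure_probability * failure_impact_cost > fix_cost

# An 80% chance of a network-function failure in the next 12 hours:
print(should_remediate(0.80, failure_impact_cost=50_000, fix_cost=5_000))   # True
# A minor, unlikely fault that is cheaper to ignore for now:
print(should_remediate(0.10, failure_impact_cost=50_000, fix_cost=10_000))  # False
```

Accounting for the knock-on effects mentioned in challenge 3 would mean folding the expected cost of future, larger failures into `failure_impact_cost`.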

Technical challenges

  1. Optimizing Data Size: When it comes to machine learning, there is always a delicate balance between ingesting enough data to generate good AI/ML results, but not so much that it takes too long to process. Sometimes it is better to reduce the amount of data ingested so the algorithms can provide their findings closer to real time. This needs to be done carefully, though, to maintain quality results. Similarly, if the data output is too large, it becomes too complex and unwieldy for other systems to work with. Therefore, we need to reduce the results to those that are ‘operationally effective’. But how do we know which data to use and which to ignore? Sometimes these efforts are complex enough that they require their own algorithms.
  2. Lack of Standardization of APIs: Further standardization of APIs will eventually create a true ecosystem of best-of-breed systems that can work together seamlessly to create a truly autonomous network. The industry is still evolving in these efforts, with more work ahead. Currently, Automated Root Cause Analysis is the only widely adopted AI/ML API. There needs to be more.
  3. AIOps Challenges – Creating, analyzing, and working with AI and ML data is very different from traditional software. A typical software solution is ready to go upon implementation and no changes are required until the next upgrade, but that isn’t typically the case with AI and ML. These systems have shorter lifecycles and are best defined as a hybrid mix of both a product and a service. They require regular re-training and updates because they are constantly learning from new data all the time. Having the right support structure in place for the ‘care and feeding’ of these new systems will be critical to their success.

Aside from the operational challenges associated with creating an autonomous network, automation in general requires upfront investments in technology, skills, and services. These investments can be significant and are better shared across the whole enterprise. A hybrid approach, which involves selecting the most cost-effective tool for each scenario, may (in the short-term) enable more-rapid deployments. However, this approach can prompt expensive maintenance and vendor management issues in the long term.

Automation also has implications for staff and organizational change. Automation projects can be delayed or difficult to justify where redundancies or reassignment create cost implications. Automation is best achieved where there is a clear understanding of each end-to-end process, and each process is closely managed to prevent poor practices from being replicated through the automation.

Is it worth it?

Some may wonder if these challenges are worth the effort and expense. The truth is that the telecom industry is at an inflection point: for progress to continue, we must address automation in a way that delivers both a positive return on investment in the long term and immediate results and efficiencies in the short term. What worked yesterday – the systems, processes, and skillsets – won’t work tomorrow. Telecom networks – and the future services they will enable – demand a new operational playbook.

TEOCO is at the forefront of this effort. We are working to address each one of these challenges by participating in research catalysts with groups like the TM Forum and investing in our own research and design; constantly exploring ways to help our customers manage the challenges ahead.

 

To learn more and hear how we are addressing some of these challenges, sign up for our FutureNet-hosted webinar, Leveraging AIOps towards advanced zero-touch operations, on 15 September at 4pm BST. Or contact us today.

 

Welcome to the Future of Network Automation: Juniper Paragon Automation as a Service

Contributed by Kanika Atri, Senior Director, Strategic Marketing, Juniper Networks.

Service Providers (SPs) don’t invest in automation for automation’s sake. They do it to achieve business outcomes. Faster time-to-market. Reduced operational complexity and costs. More reliable, higher-quality subscriber experiences. Now, as traffic volumes explode and operators introduce new business and consumer services, including 5G, edge cloud, Internet of Things (IoT), Fixed Wireless Access (FWA) and more, automation has become a top strategic priority for SPs. There is no place this matters more than where these growth trends and new service types converge: the Cloud Metro.

Figure 1: Drivers of Network Automation – Heavy Reading

 

However, when we survey the market, we see a need to fundamentally reinvent how network automation is deployed and consumed. Market analysts suggest that more than 70% of “Do-it-Yourself” (DIY) in-house network automation projects fail, and legacy vendor automation solutions frequently fall short of delivering real business outcomes. For example, in a recent Heavy Reading survey, 40% of SPs said that using a generic automation framework is actually a barrier to adopting automation in transport networks.

At Juniper Networks, we believe there is a better way, and it must be guided by three core principles:

  • Speed matters: The automation tools we use should have a maniacal focus on “time to first business outcome,” and that time should be measured not in years, but in days. Automation should let SPs move at the pace of business, not the pace of internal operations.
  • No overhead: It shouldn’t require a huge investment in capital budget, time and personnel to deploy, operate and continuously update automation software and hardware. Automation should let SPs focus on productivity, not production.
  • Easy button: It should be super easy for staff to use network automation—without needing extensive training or software development skills. Automation software should work for SPs, instead of SPs working for the software.

How can the industry deliver these requirements? Juniper believes the future of network automation is cloud-delivered and Artificial Intelligence (AI)-enabled. It’s been proven in other domains. Now, it’s time to bring this model to the Wide-Area Network (WAN).

Juniper is already an established player in WAN automation, with many of the world’s premier SPs and enterprises using Paragon™ Automation, particularly for closed-loop automation use-cases. Now, we are taking that value proposition further and doubling down on public cloud and AI.

Today, Juniper announced the launch of Paragon Automation as a Service. This solution is more than just a reimagined network automation suite – it’s a reimagined automation experience. And it paves the way for more sustainable business operations in the Cloud Metro and across the network.

 

Reimagining the Future with Paragon Automation as a Service  

With Paragon Automation as a Service, Juniper is reimagining the automation experience in the following ways:

  • It’s cloud delivered. Just sign up, log in, connect the devices and GO in minutes. There’s no need to worry about hardware/software installation overhead, and it’s much faster than trying to implement DIY automation that might take months or years.
  • It’s AI-enabled. The Paragon Automation cloud infrastructure comes with built-in AI and Machine Learning (ML) data and training pipelines, and WAN-specific AIOps use cases. SPs can spot WAN issues that would elude human analysis using conventional tools, identify root causes and fix them before they impact the service experience. And with AI/ML, the automation framework keeps learning every day and gets better at predicting such issues over time.
  • It’s assurance native. Paragon Active Assurance test agents are now natively embedded into Junos OS Evolved in every ACX7000 platform. These built-in “Experience Sensors” generate synthetic traffic to measure service/application quality, anywhere through the network. Coupled with Paragon Automation as a Service, SPs can detect and fix experience issues proactively. It’s yet another proof point of delivering on our vision of experience-first networking.
  • It’s trust verifiable. Paragon Automation as a Service helps SPs establish network trust at three levels. At the hardware level, it guarantees they are using genuine Juniper gear by validating a unique Device ID linked to TPM 2.0 chips embedded in our routers. At the software level, it then continuously checks software integrity and finally, calculates a network-wide trust score, providing them with actionable insights about potential risks.
  • It’s use-case based. That’s the value of cloud: use only what is needed and pay for only what is used. If SPs only need to automate one use case to solve an immediate business problem – then that’s where they start with Paragon Automation as a Service. There’s no need to go “all-in,” or “boil the ocean” by deploying and training legacy automation systems as one would with most vendor and DIY solutions. With Paragon Automation as a Service, SPs can start small, go fast. SPs can choose their own adventure across the lifecycle of Plan, Orchestrate, Detect and Assure and Optimize use cases.
  • It’s intuitively simple. We combined the art and science of UX design to make Paragon Automation as a Service so simple to use, it can feel like it’s reading your mind. The modern UI is built on a layered information architecture, recommendation engines and visual guides that flow with the user’s operational journey.
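As an illustration of the trust-verification idea above (and only an illustration – the scoring below is a hypothetical sketch, not Juniper’s actual algorithm), a network-wide trust score can be derived by combining per-device hardware-identity and software-integrity checks:

```python
import hashlib

# Hypothetical device records; field names are illustrative, not Juniper's API.
KNOWN_DEVICE_IDS = {"dev-001", "dev-002", "dev-003"}   # TPM-backed identities
EXPECTED_IMAGE_SHA = hashlib.sha256(b"junos-evo-22.3").hexdigest()

def device_trust(device):
    """Score one device: hardware identity and software
    integrity each contribute half of the score."""
    hw_ok = device["device_id"] in KNOWN_DEVICE_IDS
    sw_ok = device["image_sha"] == EXPECTED_IMAGE_SHA
    return (0.5 if hw_ok else 0.0) + (0.5 if sw_ok else 0.0)

def network_trust_score(devices):
    """Network-wide score: mean of per-device scores, 0.0 to 1.0."""
    return sum(device_trust(d) for d in devices) / len(devices)

fleet = [
    {"device_id": "dev-001", "image_sha": EXPECTED_IMAGE_SHA},  # fully trusted
    {"device_id": "dev-002", "image_sha": "tampered"},          # bad software
    {"device_id": "dev-999", "image_sha": EXPECTED_IMAGE_SHA},  # unknown hardware
]
print(round(network_trust_score(fleet), 3))  # (1.0 + 0.5 + 0.5) / 3 -> 0.667
```

A real implementation would verify cryptographic attestations from the TPM rather than membership in a static set, but the aggregation idea is the same.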

 

Why is the future of automation in the cloud?

Because it just makes sense—from both a business and technical perspective. There’s a reason most of the world’s software has moved to a cloud-delivered SaaS model. It’s why 61% of SPs told Heavy Reading that they want cloud-based network automation in a new survey on transport automation.

Furthermore, Analysys Mason quantified the benefits of SaaS-based automation compared to DIY/legacy solutions, and the value of SaaS is staggering.

  • Business benefits:
    • The simplicity of a SaaS platform reduces deployment times by 50% for most projects, translating directly to faster time-to-market and increased revenues.
    • Once deployed, SaaS automation platforms allow SPs to add new use cases 70% faster than with traditional approaches.
    • SPs incur 40% lower costs with a SaaS-based automation approach compared to internal development, owing to savings in hardware, staffing and operational costs for installing and maintaining an in-house solution.
  • Technical benefits: With cloud, innovation comes fast. Think weekly releases of software improvements, compared to months with traditional solutions. When it comes to AI, it makes sense to use cloud so SPs can leverage the collective intelligence from anonymized data, ML models and knowledge from networks around the world.

Figure 2: Benefits of Network Automation as a Service (vs. DIY) – Analysys Mason

 

Compared to today’s typical automation approaches, whether it’s a “big-bang” automation system from a vendor or an SP’s own DIY system, most of these efforts fail on the three guiding principles:

  • Slow time to first outcome: Applying a generic automation solution to all network domains requires a huge investment in time and effort. Such solutions are often just too broad and complex, making them difficult to get off the ground, evolve, scale and, most importantly, use to deliver business outcomes. SPs might eventually see some positive outcomes, but they won’t get there quickly or easily.
  • Extra overhead: The costs to build, deploy and manage the infrastructure grow quickly. Often, it ends up costing much more than budgeted, making it difficult to prove ROI.
  • No easy button: Traditional solutions require extensive training and skills to use effectively. This gets even harder when skilled employees are scarce and staff churn is high.

Bottom line: a cloud-delivered, AI-enabled network automation approach offers much faster, cost-effective, simpler and outcome-focused results.

Automating the Cloud Metro

As SPs adopt a Cloud Metro model, distributing more capacity and service intelligence out closer to subscribers, they can’t rely on yesterday’s approaches to metro network operations. What they want is built-in security, assurance and AIOps.

Figure 3: SP requirements for metro network automation – Heavy Reading

 

Coupling Juniper Paragon Automation as a Service with our Cloud Metro platform is a perfect fit, providing a complete, shrink-wrapped solution. SPs get on-box elements such as embedded active assurance sensors and built-in Zero Trust security, plus off-box elements from Paragon Automation as a Service. That’s how they can build sustainable Cloud Metro operations across the lifecycle.

  • Day 0: Authenticate and onboard thousands of Cloud Metro devices in minutes, not days, and launch new services more quickly with Paragon Automation as a Service.
  • Day 1: Natively enable the quality of Cloud Metro services with built-in Experience Sensors and establish network trust by verifying hardware and software integrity at scale.
  • Day 2 and beyond: Find, fix and predict Cloud Metro problems before they impact user experience, thanks to WAN AIOps built into Paragon Automation as a Service.

Service Providers are more than ready for automation. But most automation solutions haven’t been ready to deliver the outcomes SPs need—until now.

See the power of Paragon Automation as a Service for yourself with this sneak peek into just one example of the future of automation – our AI-enabled Device Onboarding as a Service – and discover how we’re reimagining the onboarding process to make it instantaneous, virtually error-free and secure.

Cloud-first, AI-enabled automation is the future. Juniper’s initial offering will be available in early 2023—and we’re just getting started. Throughout next year, we expect to roll out additional use cases and an AI-driven conversational assistant.

Statement of Product Direction

The information on this page may contain Juniper’s development and plans for future products, features, or enhancements (“SOPD Information”). SOPD Information is subject to change at any time, without notice. Juniper provides no assurances, and assumes no responsibility, that future products, features, or enhancements will be introduced. In no event should any purchase decision be based upon reliance of timeframes or specifics outlined as part of SOPD Information, because Juniper may delay or never introduce the future products, features, or enhancements.

Any SOPD Information within, or referenced or obtained from, this website by any person does not give rise to any reliance claim, or any estoppel, against Juniper in connection with, or arising out of, any representations set forth in the SOPD Information. Juniper is not liable for any loss or damage (howsoever incurred) by any person in connection with, or arising out of, any representations set forth in the SOPD Information.

Juniper Networks, the Juniper Networks logo, Juniper, Junos, and other trademarks listed here are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners. 

Simplifying the Edge Opportunity with Automation

Contributed by Susan White, Head of Strategy and Portfolio Marketing, Netcracker.

Enterprises need edge compute for many reasons – enabling low latency for real-time IoT services, reducing backhaul costs for video processing and keeping sensitive data local – to name a few. It’s becoming a core strategy for many companies in vertical markets to boost productivity, performance and increase customer satisfaction.

However, most enterprises cannot deal with the complexity of putting all the piece parts together and running these mission-critical services with strict SLAs. The choice and location of edge cloud platform, connectivity options, security solutions, distributed 5G core and MEC applications present a hefty project that requires extensive design, testing, validation, deployment and continuous assurance.

Enterprises need simplicity.

This creates an ideal opportunity for CSPs to remove this complexity and at the same time deepen their engagement and value in enterprise and vertical markets. However, no CSP has all the piece parts that are needed to offer enterprise customers a successful edge solution. It’s a multi-partner play that requires a great deal of cooperation and integration across the ecosystem.

This is why automation is essential to build and monetize a successful edge business. Whether it’s part of a private or hybrid 5G network, automation is required at both the operational and business levels.

 

Unifying and Scaling Edge Operations

Operations automation of the edge domain is essential for three key reasons:

  1. Unification of operational processes within the edge cloud. Today, edge platform automation (on-premise or network-edge) is separated from the service lifecycle of VNFs and CNFs, which in turn is separated from high value MEC applications. This results in high complexity and manual intervention. These processes must be streamlined to commission the entire edge platform, services and even network slices with zero touch.
  2. E2E view across highly distributed edge locations, transport and core. Edge services are highly complex and may have stringent latency and performance requirements. Resources can reside on multiple edge hosts (CSP and hyperscaler) and multiple locations (on-premise, network-edge, regional, core). Edge service deployments extend beyond the edge. An E2E view is essential to automate edge services.
  3. The ability to scale to many customers. Siloed private 5G and edge networks make replication and scale impossible. This in turn makes the business case harder to justify. A unified operations environment is needed that can start small and scale to many businesses.
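To make the E2E placement idea concrete, here is a deliberately simplified Python sketch of latency-aware workload placement across the kinds of edge locations mentioned above; the site names, latencies and costs are purely illustrative, not any orchestrator’s real data model:

```python
# Hypothetical catalogue of edge hosts across CSP and hyperscaler sites.
SITES = [
    {"name": "on-prem",      "latency_ms": 2,  "cost": 10},
    {"name": "network-edge", "latency_ms": 8,  "cost": 5},
    {"name": "regional",     "latency_ms": 20, "cost": 3},
    {"name": "core",         "latency_ms": 45, "cost": 1},
]

def place_workload(max_latency_ms):
    """Pick the cheapest site that still meets the service's
    latency budget; return None if no site qualifies."""
    feasible = [s for s in SITES if s["latency_ms"] <= max_latency_ms]
    return min(feasible, key=lambda s: s["cost"])["name"] if feasible else None

print(place_workload(10))  # network-edge: cheapest site within a 10 ms budget
print(place_workload(50))  # core: everything qualifies, so cheapest wins
print(place_workload(1))   # None: no site meets a 1 ms budget
```

A production orchestrator would weigh many more constraints (QoS on the RAN, UPF breakout, host capacity), but the same feasibility-then-cost logic applies.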

Netcracker has built these capabilities into its Edge Orchestration solution. We are helping CSPs globally, including Etisalat, develop a successful edge business. It goes beyond the intelligent placement and lifecycle management of edge services to also ensure the RAN has the right QoS enabled, connectivity between all edge hosts is ready (CSP or hyperscaler) and the 5G UPF breakout function is enabled for edge hosts. With its modular architecture and multitenancy, CSPs can scale their edge services across public and hybrid 4G/5G deployments.

 

Giving Enterprises Visibility, Control and the Right Partner Solutions

Enterprises are demanding faster access to their services, with the simplicity of selecting, activating and managing those services on demand from an intuitive portal. This necessitates automation at the business layer as well. Back-end processes such as product catalogue, customer management and revenue management need to be streamlined with the front-end portal to enable automation from customer order to activation.

Serving vertical markets with the right edge services will require a dynamic ecosystem of partners to address their specific business needs. The resulting multi-party solution will require a new approach to partner management that automates the myriad processes involved, from onboarding partners to cataloguing, product management, pricing and settlements.

Netcracker’s Digital Marketplace solution brings all these capabilities together to help CSPs give their customers an easy way to purchase services with complete visibility and control. The integrated Partner Management solution, adopted by T-Mobile US for its wholesale MVNO and IoT business, helps CSPs to build a vibrant partner ecosystem that will be essential to compete in the edge market.

Edge is a complex business – but it has the potential to be a very lucrative one with the right automation in place that brings simplicity and value to enterprise customers.

 

Automation? It’s just a milestone on the route to the Zero Touch Experience

Contributed by Atul Purohit, Head of Technology, Europe, Nokia.

Automation has been near the top of everyone’s agenda for a long time – which department of a CSP doesn’t want to make better and more extensive use of automation?

But do we have a clear view of what this automation is actually for? What is its ultimate converging point?

When anyone in a CSP speaks about automation, they will most likely mean automation of operations or processes, such as tool sets, alarms and/or ticketing. But automation is a much bigger concept and goes far beyond the CSP’s operational aspects.

In a recent discussion at FutureNet World, I was part of a panel of CSPs and developers who delved into automation, its true goals and what it could ultimately mean for end users.

We hear a lot about Zero Touch Automation, where automated tools elevate people to the status of observers who can intervene if needed. This is great but it is only one step along the road – the ultimate destination is the Zero Touch Experience, where everything the end user needs happens in the background without them having to think about it. According to research by Nokia, some 60% of new revenue in 5G will come from B2B2X business models, making partner ecosystems increasingly important.

If we look at Uber or Airbnb, the starting point is not the architecture but the end customer experience – for them, it’s all about how they can offer that Zero Touch Experience.  Essentially, automation is a fundamental building block towards that seamless experience – the customer has come to buy or access a certain service and everything in your organization needs to collaborate to make that point of value exchange as painless and seamless as possible. We need to start with the Zero Touch Experience and work backwards to build the Zero Touch Automation that can make it work.

 

One service to bind them all

What CSPs have perhaps been lacking so far is a common theme around which they can build their Zero Touch Automation. History tells us that football binds nations – there are goals that an entire nation can get excited about and work seamlessly towards. Perhaps CSPs need a similar binding act, a rallying goal for all departments to converge on. With 5G offerings like network slicing, this might just happen: a concept that CSPs can actually use to tie everything together. On one hand, network slicing can lead towards an outcome like a defined use case; on the other, it could lead to multiple domain-specific automations. We need a ‘religion’ (like football) to bind all these aspects, and network slicing could be the right way forward for a CSP.

It’s clear that the services of the future will not be completely within a CSP’s own environment. Up until now, telecom services have been very much vertically integrated and completely self-contained. But with 5G, things disaggregate, and you have elements of service which will be outside a CSP’s control. Without an overarching philosophy such as network slicing, this could lead to automation being tackled in different parts of the organization with varying degrees of maturity.

In the 5G era, the concept of customers has grown to include third parties such as developers as a new revenue-generating demographic. The ability to provide network as code or APIs to developers is something which 5G brings to the table – developers often have a simple and clear ask of the network, and it is up to CSPs and vendors like Nokia to deliver on that promise.

This new era of services for developers was outlined by Jad Meouchy, CTO of Bad VR, who told us what he is looking for from CSPs: “I’m looking for an API, or access points, or information or datasets or something that I can start exploring and understanding and discover the possibilities. Use cases like streaming and gaming are very interesting and we are keen to see how we can make use of them – we’re always looking at what the system thinks it’s supposed to do and thinking about how we can use that to meet our needs.”

 

A pick and mix approach

Developers such as Bad VR clearly want a Zero Touch Experience – the ability to easily pick and choose what works for them. At an industry conference some years ago, 20 service providers presented their edge strategies to developers. At the end of all these presentations, a developer was invited to comment. She said, “Believe it or not, gents, I’m here for the booze and free food. I don’t care about your 20 different distinct ways to interact with your specific ecosystems – I don’t have the time and the patience to go through all of them.”

Developers obviously do not want to do different things when interacting with different service providers. Nokia is playing its part by using automation to open up APIs across almost all of our products. As with other vendors, we need to open up our APIs via a sandbox environment that developers can test and trial to see how their applications can work. Automation is the key to this.

As my fellow panellist Pal Gronsund of Telenor said: “Zero touch is a vision that we are all striving towards.”

Javier Garcia Jimenez from Telefonica added: “Enterprises are knocking on our doors and automation is crucial for scaling our capabilities.”

These enterprises clearly want to innovate on top of CSPs’ 5G platforms. They want to pick the right slice-based service for their needs from the CSP’s catalogue and define and adjust their own SLAs. Once their CSP confirms the order, it is automatically deployed and assured for them.
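That order-to-activation journey can be sketched as a simple zero-touch state flow. This is a toy illustration with hypothetical state names and SLA fields, not any vendor’s actual workflow:

```python
def process_slice_order(order):
    """Walk a slice order from catalogue selection to assured
    service with no manual steps (zero-touch). Infeasible SLAs
    are rejected up front; feasible ones flow straight through."""
    states = ["ordered"]
    if order["sla"]["latency_ms"] < 1:      # illustrative feasibility check
        states.append("rejected")
        return states
    states.append("confirmed")              # CSP confirms the order
    states.append("deployed")               # orchestrator instantiates the slice
    states.append("assured")                # closed-loop monitoring begins
    return states

print(process_slice_order({"sla": {"latency_ms": 10}}))
# ['ordered', 'confirmed', 'deployed', 'assured']
print(process_slice_order({"sla": {"latency_ms": 0.5}}))
# ['ordered', 'rejected']
```

The point of the sketch is the absence of manual gates between confirmation and assurance, which is what distinguishes zero-touch operations from conventional order handling.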

In terms of what we are seeing in the market, there are a number of Proofs of Concept happening and most of these are proving technology. Repeating these multiple times is a costly proposition – at Nokia, we blueprint these POCs so that there can be a fair degree of reuse across various customers.

As well as the technical side, there are also other aspects we need to consider. Javier emphasized this when he said: “It’s also about culture, organization, transformation, not just about the technology – it’s about how we implement the new operating model. Managing the lifecycle is crucial when we manage the end-to-end 5G slice, and ultimately to provide a Network as a Service type of approach.”

We need to deliver value to end customers – opening up networks via APIs and using automation to give developers and enterprises what they want using zero touch operations, is the way to achieve it.

Intelligent Operations for New 5G & Edge Ecosystems

Contributed by Vinod Joseph, Technical Leader of APJ and China, VMware

The digital era has reinvented how communication service providers (CSPs) connect with customers. CSPs that deliver excellent services and customer support are rewarded: customers consistently cite service quality as a key factor in their loyalty to a brand.

An important part of the 5G ecosystem is edge computing. It facilitates radical new use cases that extend from the data center core to the network edges. Edge computing allows compute and analytics to move closer to the data instead of exchanging ever-growing amounts of data among cloud servers. As the ecosystem evolves, 5G and edge computing will further converge to enable edge network management, collect and capitalize on massive amounts of data while maintaining integrity and even ownership, and build pervasive intelligence for enabling various latency-sensitive, enterprise, and private services.

Intelligent operations and automation for 5G and edge ecosystems are paramount to delivering excellent services that win customers and trailblaze the latest and greatest service innovations. The three keys to supporting business growth with intelligent 5G operations are efficiency, flexibility and scalability, achieved through automation and a consistent cloud infrastructure deployed across diverse assets and multiple clouds.

Building intelligent automation and operations for a 5G and edge world

The rigidity of today’s hardware-centric, siloed architectures makes them destined to fail in the current market, where quick pivots are necessary to keep up with the competition. Appliance-based upgrades can take months and disrupt service. Cloud-native technologies and software-centric models provide the agility and flexibility needed to adapt, deploy quick patches and upgrades, and expand to meet ever-changing conditions, all while avoiding downtime.

Two tasks are fundamental to addressing the complexity challenge and enabling CSPs to succeed: automation and optimization. Automation is required to cope with new scenarios in network and edge planning, operation, and management, and should ease the operation of network and compute infrastructure for rapidly growing vertical industry, transportation, and enterprise use cases that are bringing with them new infrastructure owners. Optimization should ease the extension of cloud computing and fast-growing Artificial Intelligence/Machine Learning (AI/ML) applications to the edge, as well as introduce self-optimization to best serve applications. Optimization strategies will ensure that every node in mobile networks can provide low latency, high reliability, and pervasive intelligence capabilities.

An operational framework that is backed by intelligent automation & optimization is necessary to build the 5G Core, protect the edge and virtualize the RAN. This is essentially ruled by these five pillars:

  1. Simplification
  2. Unification
  3. Intelligence
  4. Openness
  5. Culture

VMware can help guide CSPs through their digital transformation journey. VMware enables CSPs to consolidate their legacy siloed platforms and operational frameworks into a consistent multi-cloud infrastructure. This allows CSPs to deploy any application on a common cloud layer with intelligent automation and operations.

Unleash 5G potential with automation and end-to-end scalability

Intelligent, analytics-driven automation is more than just automating manual processes. It is the ability to take input from several sources, analyse that input to generate actionable insights, and then execute on them via intelligent actions. This type of automation is required in the complex end-to-end setting of a 5G architecture. The VMware Telco Cloud Platform provides the intelligent automation required at the network core, edge and RAN. Furthermore, by incorporating analytics and AI capabilities into the automation process, critical insights can be extracted from network assets and used to generate optimal, dynamic configurations, allowing rapid service deployment and providing a framework with business agility and flexibility.

Intelligent automation software is key to the proper placement of Cloud-native Network Functions/Virtual Network Functions (CNFs/VNFs) within a mobile network, enabling Mobile Network Operators (MNOs) to maximize the utilization of network resources by re-allocating unused resources to other slices. With its advanced analytics and automated orchestration features, the VMware Telco Cloud Platform enables the creation of a self-driving, self-healing, and self-optimizing 5G network with zero-touch capabilities. CSPs can now implement zero-touch service lifecycle management, which includes automating the design, creation, modification, and monitoring of network functions and services, as well as the provisioning of underlying resources for those services, as and when required. The VMware Telco Cloud Platform supports the scaling and orchestration of network resources for 5G Core, RAN and the edge.

Workload balancing, initiating tests across clouds and edge clusters, network provisioning and lifecycle management are all tasks at which the VMware automation solution excels. The VMware Telco Cloud Platform is always aware of Kubernetes cluster resources and dynamically reallocates work to optimize performance and create a self-organizing network.
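As a rough illustration of the resource-reallocation idea described above (hypothetical data structures, not the VMware Telco Cloud Platform’s API), unused slice capacity can be shifted to slices running hot:

```python
def rebalance(slices, min_headroom=0.2):
    """Shift unused capacity from under-utilised slices to slices
    running hot, keeping min_headroom spare on each donor.
    'capacity' and 'used' are in arbitrary resource units."""
    donors = [s for s in slices if s["used"] < s["capacity"] * (1 - min_headroom)]
    hot    = [s for s in slices if s["used"] > s["capacity"] * 0.9]
    for needy in hot:
        for donor in donors:
            spare = donor["capacity"] * (1 - min_headroom) - donor["used"]
            if spare > 0:
                donor["capacity"] -= spare   # release unused headroom
                needy["capacity"] += spare   # grant it to the hot slice
    return slices

slices = [
    {"name": "iot",  "capacity": 100, "used": 20},  # cold: donor
    {"name": "embb", "capacity": 100, "used": 95},  # hot: receiver
]
rebalance(slices)
print(slices[0]["capacity"], slices[1]["capacity"])  # 40.0 160.0
```

A real orchestrator would of course apply per-slice SLA floors and hysteresis to avoid thrashing, but the donate-from-cold, grant-to-hot loop captures the core mechanic.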

Adopting multi-cloud for multi-access edge computing

CSPs employ huge, distributed networks composed of various types of environments hosted by various providers. This allows them to provide a variety of services and take advantage of cost-effective compute and storage offered by various vendors. Distributing work across platforms, clouds and environments allows CSPs to optimize their investments, reduce their capital expenses, and offer new services.

VMware Telco Cloud provides multi-layer, end-to-end visibility so administrators can easily keep tabs on network and service layers through a single pane of glass. Consistent deployment and management across multi-cloud environments allows network teams to operate and manage multiple vendors, networks and resources through one control panel. Simplicity and visibility again save time for IT teams and allow them to dedicate their effort to the most important determinant of business success or failure: the customer experience.

This blog is adapted from a previous version published on the VMware Telco Blog website. To see the original blog and to learn more about VMware Telco Cloud visit our blog site here: https://blogs.vmware.com/telco/

Preparing for the Open RAN transition. What are the two key challenges to overcome?

Contributed By Elisa Polystar

The mobile industry is preparing to invest in Open RAN solutions but there are two key challenges to consider: co-existence and multi-vendor support. Both require adaptation of operational systems. What does this mean?

Open RAN – positive news from MWC 2022

It’s fair to say that Open RAN was one of the big talking points at MWC 2022. While estimates for when Open RAN deployments will begin to dominate overall RAN deployments and displace proprietary solutions vary, there’s a strong consensus that Open RAN will play an increasingly important role in supporting 5G coverage plans.

This consensus is backed by a number of forecasts. You can take your pick, but everything points to a flourishing future – for example, Dell’Oro Group reckons Open RAN will account for 15% of total RAN deployments by as early as 2026, with the Asia Pacific region leading the way.

Meanwhile, consultancy firm Tecknexus counts nearly 50 trial deployments of Open RAN systems across more than 30 countries (as of December 2021) – and these are just those that are publicly known. Many are likely to be very small scale and may relate to extending coverage beyond the current RAN footprint rather than replacing existing RAN solutions, but these are still signs of growing momentum and point to a bright future.

Indeed, announcements at MWC confirmed this growing momentum, with a number of operators talking up trials, new partnerships and various initiatives designed to support expansion of the Open RAN ecosystem. And this really is key: one of the drivers of Open RAN is to reduce dependence on a handful of suppliers and to enable a new ecosystem of smaller and typically (for now, at least) independent vendors to flourish.

Understanding the challenges that scale will bring

This raises two interesting questions as we move inevitably towards scaling up early-stage trials. First, the obvious. Open RAN is going to co-exist with traditional RAN solutions for some time to come, particularly in national networks. It may take share more rapidly in private networks. The bottom line is that we are heading for a world of hybrid macro public networks, with increasing penetration of new Open RAN solutions being deployed alongside traditional RAN offers and the currently deployed footprint.

Indeed, this phase must exist for some time to come – simply because many operators have already chosen their key RAN partners for 5G rollout. For decisions that have already been taken, it’s safe to say that Open RAN lacked the requisite maturity to be a contender. That will change, but it will clearly take time.

Second, if Open RAN is really going to emerge at commercial scale from a new generation of vendors, we must also consider the likely multi-vendor nature of the new RAN environment. Yes, traditional vendors will move towards Open RAN too, but they may retain some proprietary elements. New vendors, however, will deliver both commoditised solutions and, in all likelihood, specialised solutions to suit different use cases – from small cells to macro cells – and even portable cells for flexible coverage deployments.

OSS adaptation is essential – and must start now

This is exciting – but it does mean that MNOs will also have to adapt to support multiple RAN partners, instead of the traditional one or two. Quite apart from what that means from a commercial perspective, it has significant implications from an operational view.

A term that’s widely used in the industry is agnosticism. We often talk about how solutions are protocol agnostic, for example – which simply means they can connect on any suitable interface for the task in hand. To make the most of Open RAN, MNOs will need to become agnostic to different RAN products – ensuring not only that new Open RAN solutions can be deployed alongside traditional ones (which will be around for a long time to come), but also that they can pick and choose vendors as they like. That’s a huge task.

We’re essentially saying that MNOs need to build an operational environment into which any RAN – open or otherwise – can be seamlessly deployed and integrated with their operational systems. Achieving this will require a radical shift from current approaches and a more flexible OSS.

This problem is gaining recognition, as our recent white paper made clear. In “Open RAN automation – how to ensure compatibility with your evolving network”, we outlined the challenges of aligning new, Open RAN investments with existing and traditional RAN solutions, highlighting some key use cases that must be addressed to ensure compatibility.

At present, many trials are based on the introduction of a new, single vendor for Open RAN. From these, we’re learning how to solve the first challenge – how Open RAN and proprietary solutions can co-exist successfully. So, we already know how to include Open RAN alongside existing solutions.

Getting ready for hybrid, multi-vendor networks

In the future, we can expect multiple vendors to be integrated into the network. For that to happen, all interoperability issues need to be solved and MNOs need to be confident that their operational and assurance systems can flex to embrace these deployments.

We haven’t quite reached that point, but we are definitely moving in the right direction – and, by exposing key use cases that must be addressed, we are gradually defining how this transition can be enabled. In other words, we can already see how to move to the future, agnostic environment that we’ll need.

So, the message is: watch this space. By taking steps to adapt solutions such as our automation platform now, we’re getting ready for the coming wave of Open RAN deployments, so that MNOs can support the multi-vendor networks they need.

Interested in knowing more? Download our whitepaper, where we explore ways in which Open RAN can be integrated and aligned with automation systems in hybrid networks, to ensure compatibility with the future evolution path required to deliver full-service 5G capabilities.

 

Kalle Lehtinen, CTO, Vice President Technology & Architecture, Elisa, and Thomas Nilsson, Chief Technology Officer, Elisa Polystar, are co-presenting at FutureNet World on 10–11 May in London; register here.

 

Link: https://www.elisapolystar.com/open-ran-automation/

Making networks greener doesn’t have to hurt

Contributed by Andrew Burrell, Head of Marketing & Communications, Nokia.

One of the reasons for the world’s glacially slow response to the dangers of climate change is the sheer difficulty of taking effective action. Most people accept that dramatic reductions in greenhouse gas (GHG) emissions are needed, but the high costs and often severe lifestyle changes involved are daunting barriers.

We could cite many examples, but let’s look at just two:

  • The aviation industry must reduce carbon emissions, but that means people flying less
  • Reducing the carbon footprint of buildings is essential, but means substantial spending on new heating and insulating systems.

Fortunately, it’s a different story when it comes to mobile telecom networks. While such networks may account for just over 1% of global electricity consumption, that 1% adds up to a significant volume of GHG emissions. Unsurprisingly, customers, investors, regulators and governments are urging communications service providers (CSPs) to take action.

While CSPs may be keen to reduce their network energy consumption (and thus costs), the challenge is to do it without compromising network performance, customer satisfaction or the bottom line.

Target the radio network

So how can it be done without forcing CSPs into large-scale hardware redeployments, comprehensive network modernization or architecture redesigns? And how can network carbon footprints be reduced without degrading the user experience?

CSPs aiming to cut the largest possible chunk of energy consumption at the fastest possible pace need to focus on the radio access network (RAN). That’s because the RAN accounts for around 80% of all mobile network energy consumption.  “Waste” is an issue because only 15% of that energy is used to transmit data. The other 85% goes into secondary systems such as heating and cooling, lighting, uninterruptible and other power supplies, and running idle resources.

Modernizing network infrastructure can help but is hindered by slow upgrade cycles and requires high upfront CAPEX investments. If we want to have an immediate impact, there are two main strategies to reduce network energy use: dynamic network shutdowns and full-site power management.

Dynamically shutting down unused network elements during low-traffic periods can save much energy. Artificial intelligence (AI) maximizes the potential savings by using all sorts of data to precisely predict when to shut down infrastructure and perfectly balance energy savings, network performance and customer experience.

Using AI to control dynamic shutdowns can extend sleep times by several hours compared to statically scheduled shutdowns. AI can further boost energy savings by another 50% by eliminating the need to keep resources on standby ready to serve a sudden uplift in demand.

Managing passive equipment

Yet dynamic shutdowns only account for network elements, not power-hungry auxiliary components such as fans, cooling systems, lighting and power supplies. To ensure maximum energy efficiency, AI-powered energy consumption management must cover both active radio and passive equipment. The key is to benchmark energy trends to spot performance anomalies in historically “invisible” passive equipment that could be draining energy and might need to be reconfigured or replaced. Implementing such AI-based energy management can reduce energy costs by 20-30%.
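A simple way to picture the benchmarking step is as anomaly detection over each site’s own energy history. The sketch below flags days whose draw deviates sharply from a trailing baseline; the z-score threshold, window size and data are illustrative assumptions, not the actual product logic.

```python
# Hypothetical sketch: benchmarking daily energy readings from passive
# equipment (e.g. a site cooling unit) against its own recent history
# and flagging anomalies. Threshold and data are illustrative.
from statistics import mean, stdev

def flag_anomalies(readings, window=7, z_threshold=3.0):
    """Flag indices whose energy draw deviates strongly from the
    trailing `window`-day baseline (simple z-score test)."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# 14 days of kWh readings; a stuck fan drives day 12 well above the trend.
kwh = [40.1, 39.8, 40.3, 40.0, 39.9, 40.2, 40.1,
       40.0, 39.7, 40.2, 40.1, 39.9, 55.0, 40.0]
print(flag_anomalies(kwh))  # → [12]
```

In practice the baseline would also normalise for temperature and traffic so that genuine drift in passive equipment stands out rather than seasonal variation.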

The best news is that, because it is software, AI-based energy efficiency can be deployed in just a few weeks without major upfront investment. Software as a Service business models can also mean that CSPs pay their vendors only for the outcomes that are actually achieved. Implementing the technology over a public cloud can make it even more convenient by easing the processing and analysis of the large volume and velocity of network data.

Maintaining the customer experience

The question that now arises is how CSPs can guarantee that network performance and customer experience don’t suffer when parts of the network are powered down. How do they ensure resources are powered up again in time for traffic peaks? In other words, how can network performance requirements and energy consumption be precisely aligned?

A problem of such complexity calls for AI-based energy solutions that can predict precisely the right time to power off resources and power them on again. Just-in-time waking is hard to achieve with static or rules-based methods, usually requiring extensive wake-up windows or the use of standby mode to shorten response times.

China Mobile adopts an AI-based solution

This was the situation facing China Mobile, which wanted to cut energy consumption and control costs without affecting the customer experience or compromising network performance. The CSP realized it needed a comprehensive energy efficiency plan to reduce emissions and lower costs.

China Mobile decided to use the Nokia AVA Energy Efficiency solution for:

  • Predictive and dynamic management of passive and active components to gain much finer-grained control over energy consumption
  • Predictive closed loop actions for faster, automated responses to changing conditions instead of relying on manual interventions that cause delayed responses
  • Automated remote antenna control to adjust coverage dynamically according to shifting capacity requirements.

China Mobile was able to continuously balance energy savings and performance requirements, allowing key performance indicators (KPIs) to be pre-set, with savings calculated by the AI system.

Unlike most areas of daily life, energy savings in telecoms do not require massive lifestyle changes and do not affect the services and experiences customers are used to. That has to be good news for people and the planet.

Andrew is speaking on the ‘Energy efficiency and sustainability by design’ panel discussion at FutureNet World on the 11th of May. Register here.