
The Journey to Autonomous Networks: 6 Challenges to Overcome Before Putting Artificial Intelligence and Machine Learning to Work

Contributed by Yuval Stein, AVP Technologies at TEOCO

Advanced, zero-touch network operations – what it takes to create an ‘autonomous network’ – require telecom network engineers to take the results from AI/ML algorithms and use them efficiently within a network’s operational processes. Up until now, much of the industry focus has been on the challenges of developing the AI itself; often forgotten, there are also challenges in putting the AI results to use in a way that contributes to the quality of the services and the network. That’s what this blog post explores. But before I get ahead of myself, let’s discuss the purpose of autonomous networks.

Autonomous Networks – what they are, and why we need them

Building an autonomous telecom network is somewhat akin to building a living organism.  Just like our bodies can automatically self-regulate many of the functions that keep us alive – our stomachs digest, our lungs breathe, and our hearts beat – autonomous networks function in a similar way. The goal for Communication Service Providers (CSPs) is to create a fully automated environment that is self-configuring, self-healing, self-optimizing and self-evolving. A network that will hum along with minimal to no human intervention.

Why is this so important?  Yes, there are cost savings that can be achieved through automation, but that has become almost secondary. Let’s stick with the human body analogy for a bit longer. If we suffer a small cut or cold, our bodies automatically begin the healing process.  We don’t have to tell it what to do; it just happens automatically. If everyone had to rush to the hospital to seek the advice of a specialist for every sniffle, bump, and bruise, we would find it hard to get through the day. Job productivity would plummet, and there wouldn’t be enough medical personnel to deal with all the demand. That is the same issue for telecom networks.  Systems have become so complex and fast-moving that human intervention can’t be relied upon to fix every minor network error – there are simply too many to manage, and response times need to be immediate. Automation needs to be the default, and humans should only have to step in for the bigger issues – when absolutely needed.

It Takes a Village: Solving the Autonomous Service Assurance Stack

Communication Service Providers, companies like TEOCO, and groups like TM Forum, are working to create the software, systems and tools required to enable fully autonomous networks – and we are getting closer every day. As you can see in figure 1 below, there is a stack of service assurance steps that need to be achieved – each one building upon the one prior.

 

The 3 bottom steps in the diagram below are well established, with their own operational methodologies. CSPs know how to work with network messages, events, alarms and KPIs. However, the upper layers, including Analytics and Automation – are very different. They require incorporating and acting upon things like probabilities and forecasts, which are relatively new types of information that until now have only existed in separate silos across various departments. Now, it all needs to work together. This requires new methodologies (and APIs) for how to incorporate these new stages before networks can become truly ‘Autonomous’.

Figure 1: The Autonomous Network Assurance Stack

Six Human and Technical Operational Challenges for Managing AI/ML Data

Understanding how to best leverage the information and data being generated by machine learning and artificial intelligence, and how to ‘operationalize’ the ongoing use of this information, is the task at hand.

As the saying goes, the devil is in the details. In integrating these new layers, which as mentioned above work very differently than the previous layers, we’ve identified a ‘language gap,’ for lack of a better term – with both human and technical hurdles that need to be overcome.

Before we can address this gap, we need to understand it and identify exactly what challenges we are facing. My belief is that they are both human and technical. After all, even with automation there will always be people involved at some level. I’ve outlined some of these challenges below:

Human Challenges

  1. Lack of Trust: This is the main human challenge by far, as algorithms using deep mathematics are often not easily explainable.  The use of visual cues and explanations within the user interface, along with achieving proven results over time, helps build trust.
  2. Defining What is Actionable – and How to Act Upon It: AI and ML results are rarely black and white.  Like forecasting the weather, they often involve probabilities. But instead of predicting the chance of rain, the AI/ML results may show an 80% probability of a network function failure in the next 12 hours. Network engineers need to decide: is this a high enough probability to require the system to automatically change a network configuration? And is there enough information to know what that change should be?
  3. Cost-Benefit Analyses: Once an issue is predicted, are we able to compare the cost and impact of the expected failure to the cost and impact of the fix? To run a network in a cost-efficient manner, these types of decisions will be required on a regular basis. And what about future impacts? If a minor network error occurs, you may decide to ignore it. But what if it could lead to a larger, more costly issue down the road – how do we predict and account for these? (A minimal decision sketch follows this list.)
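To make the decision points in challenges 2 and 3 concrete, here is a minimal, hypothetical sketch of how a probabilistic prediction might be mapped to an operational decision. The thresholds, cost figures and names are illustrative assumptions only, not part of any TEOCO product.

```python
# Illustrative only: mapping a failure prediction from an AI/ML pipeline
# to an automated (or escalated) action. All names, thresholds and costs
# are hypothetical examples.
from dataclasses import dataclass

@dataclass
class FailurePrediction:
    network_function: str
    probability: float            # e.g. 0.80 = 80% chance of failure
    horizon_hours: int            # e.g. within the next 12 hours
    expected_outage_cost: float   # estimated impact of doing nothing
    remediation_cost: float       # estimated cost of the automated fix

AUTO_REMEDIATE_THRESHOLD = 0.8    # act automatically above this probability
REVIEW_THRESHOLD = 0.5            # below this, only log for trend analysis

def decide_action(p: FailurePrediction) -> str:
    """Map a probabilistic prediction to an operational decision."""
    if (p.probability >= AUTO_REMEDIATE_THRESHOLD
            and p.expected_outage_cost > p.remediation_cost):
        return "auto-remediate"   # benefit outweighs cost, act now
    if p.probability >= REVIEW_THRESHOLD:
        return "raise-ticket"     # hand to an engineer for review
    return "log-only"             # keep for model feedback / trends

prediction = FailurePrediction("UPF-cluster-3", 0.80, 12, 50_000.0, 2_000.0)
print(decide_action(prediction))  # -> "auto-remediate"
```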

Technical Challenges

  1. Optimizing Data Size: When it comes to machine learning, there is always a delicate balance between ingesting enough data to generate good AI/ML results, but not so much that it takes too long to process.  Sometimes it is better to reduce the amount of data ingested so the algorithms can provide their findings closer to real time. This needs to be done carefully, though, to maintain quality results. Similarly, if the data output is too large, it becomes too complex and unwieldy for other systems to work with. Therefore, we need to reduce the results to those that are ‘operationally effective’. But how do we know which data to use and which to ignore?  Sometimes these efforts are complex enough that they require their own algorithms. (A small illustration follows this list.)
  2. Lack of Standardization of APIs – Further standardization of APIs will eventually create a true ecosystem of best-of-breed systems that can work together seamlessly to create a truly autonomous network. The industry is still evolving in these efforts, with more work ahead. Currently, Automated Root Cause Analysis is the only widely adopted AI/ML API. There needs to be more.
  3. AIOps Challenges – Creating, analyzing, and working with AI and ML data is very different from traditional software. A typical software solution is ready to go upon implementation, and no changes are required until the next upgrade, but that isn’t typically the case with AI and ML. These systems have shorter lifecycles and are best described as a hybrid of product and service. They require regular retraining and updates because they are constantly learning from new data. Having the right support structure in place for the ‘care and feeding’ of these new systems will be critical to their success.
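As a small illustration of the data-size trade-off in challenge 1, the sketch below shows two hypothetical steps: aggregating raw KPI samples before ingestion, and filtering model output down to an ‘operationally effective’ subset. Field names, window sizes and thresholds are invented for the example.

```python
# Illustrative only: trading data volume for timeliness, and keeping only
# the model findings that downstream systems can realistically act on.
from statistics import mean

def downsample(kpi_samples: list[float], window: int = 12) -> list[float]:
    """Aggregate raw KPI samples into coarser windows (e.g. 5-min -> 1-hour)
    so the ML pipeline ingests less data and returns results sooner."""
    return [mean(kpi_samples[i:i + window])
            for i in range(0, len(kpi_samples), window)]

def operationally_effective(results: list[dict],
                            min_confidence: float = 0.7,
                            max_items: int = 20) -> list[dict]:
    """Keep only the highest-impact, highest-confidence findings,
    capped in number, so other systems are not overwhelmed."""
    actionable = [r for r in results if r["confidence"] >= min_confidence]
    actionable.sort(key=lambda r: r["impact"], reverse=True)
    return actionable[:max_items]

raw = [0.1, 0.2, 0.1, 0.9, 0.8, 0.2] * 10
print(len(downsample(raw)))           # far fewer points for the model to process

findings = [{"impact": 9, "confidence": 0.9}, {"impact": 2, "confidence": 0.4}]
print(operationally_effective(findings))   # only the high-confidence finding survives
```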

Aside from the operational challenges associated with creating an autonomous network, automation in general requires upfront investments in technology, skills, and services. These investments can be significant and are better shared across the whole enterprise. A hybrid approach, which involves selecting the most cost-effective tool for each scenario, may (in the short term) enable more rapid deployments. However, this approach can lead to expensive maintenance and vendor management issues in the long term.

Automation also has implications for staff and organizational change. Automation projects can be delayed or difficult to justify where redundancies or reassignment create cost implications. Automation is best achieved where there is a clear understanding of each end-to-end process, and each process is closely managed to prevent poor practices from being replicated through the automation.

Is it worth it?

Some may wonder if these challenges are worth the effort and expense.  The truth is that the telecom industry is at an inflection point: for progress to continue, we must address automation in a way that delivers both a positive return on investment in the long term and immediate results and efficiencies in the short term. What worked yesterday – the systems, processes, and skillsets – won’t work tomorrow. Telecom networks – and the future services they will enable – demand a new operational playbook.

TEOCO is at the forefront of this effort. We are working to address each one of these challenges by participating in research catalysts with groups like the TM Forum and investing in our own research and design; constantly exploring ways to help our customers manage the challenges ahead.

 

To learn more and hear how we are addressing some of these challenges, sign up for our FutureNet hosted webinar, Leveraging AIOps towards advanced zero-touch operations on the 15th of September at 4pm BST. Or contact us today.

 

Welcome to the Future of Network Automation: Juniper Paragon Automation as a Service

Contributed by Kanika Atri, Senior Director, Strategic Marketing, Juniper Networks.

Service Providers (SPs) don’t invest in automation for automation’s sake. They do it to achieve business outcomes. Faster time-to-market. Reduced operational complexity and costs. More reliable, higher-quality subscriber experiences. Now, as traffic volumes explode and operators introduce new business and consumer services, including 5G, edge cloud, Internet of Things (IoT), Fixed Wireless Access (FWA) and more, automation has become a top strategic priority for SPs. There is no place this matters more than where these growth trends and new service types converge: the Cloud Metro.

Figure 1: Drivers of Network Automation – Heavy Reading

 

However, when we survey the market, we see a need to fundamentally reinvent how network automation is deployed and consumed. In fact, market analysts suggest that more than 70% of “Do-it-Yourself” (DIY) in-house network automation projects fail. And when organizations rely on legacy vendor automation solutions, those solutions often fail to deliver real business outcomes. For example, in a recent Heavy Reading survey, 40% of SPs said that using a generic automation framework is actually a barrier to adopting automation in transport networks.

At Juniper Networks, we believe there is a better way, and it must be guided by three core principles:

  • Speed matters: The automation tools we use should have a maniacal focus on “time to first business outcome,” and that time should be measured not in years, but in days. Automation should let SPs move at the pace of business, not the pace of internal operations.
  • No overhead: It shouldn’t require a huge investment in capital budget, time and personnel to deploy, operate and continuously update automation software and hardware. Automation should let SPs focus on productivity, not production.
  • Easy button: It should be super easy for staff to use network automation—without needing extensive training or software development skills. Automation software should work for SPs, instead of SPs working for the software.

How can the industry deliver these requirements? Juniper believes the future of network automation is cloud-delivered and Artificial Intelligence (AI)-enabled. It’s been proven in other domains. Now, it’s time to bring this model to the Wide-Area Network (WAN).

Juniper is already an established player in WAN automation, with many of the world’s premier SPs and enterprises using Paragon™ Automation, particularly for closed-loop automation use-cases. Now, we are taking that value proposition further and doubling down on public cloud and AI.

Today, Juniper announced the launch of Paragon Automation as a Service. This solution is more than just a reimagined network automation suite – it’s a reimagined automation experience. And it paves the way for more sustainable business operations in the Cloud Metro and across the network.

 

Reimagining the Future with Paragon Automation as a Service  

With Paragon Automation as a Service, Juniper is reimagining the automation experience in the following ways:

  • It’s cloud delivered. Just sign up, log in, connect the devices and GO in minutes. There’s no need to worry about hardware/software installation overhead, and it’s much faster than trying to implement DIY automation that might take months or years.
  • It’s AI-enabled. The Paragon Automation cloud infrastructure comes with built-in AI and Machine Learning (ML) data and training pipelines, and WAN-specific AIOps use cases. SPs can spot WAN issues that would elude human analysis using conventional tools, identify root causes and fix them before they impact the service experience. And with AI/ML, the automation framework keeps learning every day and gets better at predicting such issues over time.
  • It’s assurance native. Paragon Active Assurance test agents are now natively embedded into Junos OS Evolved in every ACX7000 platform. These built-in “Experience Sensors” generate synthetic traffic to measure service/application quality, anywhere through the network. Coupled with Paragon Automation as a Service, SPs can detect and fix experience issues proactively. It’s yet another proof point of delivering on our vision of experience-first networking.
  • It’s trust verifiable. Paragon Automation as a Service helps SPs establish network trust at three levels. At the hardware level, it guarantees they are using genuine Juniper gear by validating a unique Device ID linked to TPM 2.0 chips embedded in our routers. At the software level, it continuously checks software integrity. Finally, it calculates a network-wide trust score, providing SPs with actionable insights about potential risks. (A toy scoring illustration follows this list.)
  • It’s use-case based. That’s the value of cloud: use only what is needed and pay for only what is used. If SPs only need to automate one use case to solve an immediate business problem – then that’s where they start with Paragon Automation as a Service. There’s no need to go “all-in,” or “boil the ocean” by deploying and training legacy automation systems as one would with most vendor and DIY solutions. With Paragon Automation as a Service, SPs can start small, go fast. SPs can choose their own adventure across the lifecycle of Plan, Orchestrate, Detect and Assure and Optimize use cases.
  • It’s intuitively simple. We combined the art and science of UX design to make Paragon Automation as a Service so simple to use, it can feel like it’s reading your mind. The modern UI is built on a layered information architecture, recommendation engines and visual guides that flow with the user’s operational journey.
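As a rough illustration of the trust-scoring idea in the “trust verifiable” bullet, the toy sketch below aggregates per-device hardware and software checks into a network-wide score. The weights, field names and device names are assumptions made for the example, not Juniper’s actual Paragon Automation scoring method.

```python
# Illustrative only: a toy aggregation of per-device checks into a
# network-wide "trust score". Weights and names are hypothetical.
devices = [
    # device id, hardware identity verified (TPM-backed), software integrity verified
    {"id": "acx-device-01", "hw_identity_ok": True,  "sw_integrity_ok": True},
    {"id": "acx-device-02", "hw_identity_ok": True,  "sw_integrity_ok": False},
    {"id": "acx-device-03", "hw_identity_ok": False, "sw_integrity_ok": True},
]

def device_score(d: dict) -> float:
    # Weight hardware identity slightly higher than software integrity.
    return 0.6 * d["hw_identity_ok"] + 0.4 * d["sw_integrity_ok"]

network_trust = sum(device_score(d) for d in devices) / len(devices)
at_risk = [d["id"] for d in devices if device_score(d) < 1.0]

print(f"network trust score: {network_trust:.2f}")   # e.g. 0.67
print("devices needing attention:", at_risk)
```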

 

Why is the future of automation in the cloud?

Because it just makes sense—from both a business and technical perspective. There’s a reason most of the world’s software has moved to a cloud-delivered SaaS model. It’s why 61% of SPs told Heavy Reading that they want cloud-based network automation in a new survey on transport automation.

Furthermore, Analysys Mason quantified the benefits of SaaS-based automation compared to DIY/ legacy solutions, and the value of SaaS is just staggeringly obvious.

  • Business benefits:
    • The simplicity of a SaaS platform reduces deployment times by 50% for most projects, translating directly to faster time-to-market and increased revenues.
    • Once deployed, SaaS automation platforms allow SPs to add new use cases 70% faster than with traditional approaches.
    • SPs incur 40% lower costs with a SaaS-based automation approach compared to internal development. This is due to savings in hardware, staffing and operational costs that would otherwise go into installing and maintaining your own solution.
  • Technical benefits: With cloud, innovation comes fast. Think weekly releases of software improvements, compared to months with traditional solutions. When it comes to AI, it makes sense to use cloud so SPs can leverage the collective intelligence from anonymized data, ML models and knowledge from networks around the world.

Figure 2: Benefits of Network Automation as a Service (vs. DIY) – Analysys Mason

 

Today’s typical automation approaches – whether a “big-bang” automation system from a vendor or an SP’s own DIY system – mostly fail on the three guiding principles:

  • Slow time to first outcome: Applying a generic automation solution to all network domains requires a huge investment in time and effort. Often, they are just too broad and complex, making them difficult to get off the ground, evolve, scale and, most importantly, deliver business outcomes. SPs might, eventually, see some positive outcomes, but they won’t get there quickly or easily.
  • Extra overhead: The costs to build, deploy and manage the infrastructure grow quickly. Often, it ends up costing much more than budgeted, making it difficult to prove ROI.
  • No easy button: Traditional solutions require extensive training and skills to use effectively. This gets even harder when skilled employees are scarce and staff churn is high.

Bottom line: a cloud-delivered, AI-enabled network automation approach delivers faster, more cost-effective, simpler and outcome-focused results.

Automating the Cloud Metro

As SPs adopt a Cloud Metro model, distributing more capacity and service intelligence out closer to subscribers, they can’t rely on yesterday’s approaches to metro network operations. What they want is built-in security, assurance and AIOps.

Figure 3: SP requirements for metro network automation – Heavy Reading

 

Coupling Juniper Paragon Automation as a Service with our Cloud Metro platform is a perfect fit, providing a complete, shrink-wrapped solution. SPs get on-box elements such as embedded active assurance sensors and built-in Zero Trust security, plus off-box elements from Paragon Automation as a Service. That’s how they can build sustainable Cloud Metro operations across the lifecycle.

  • Day 0: Authenticate and onboard thousands of Cloud Metro devices in minutes, not days, and launch new services more quickly with Paragon Automation as a Service.
  • Day 1: Natively enable the quality of Cloud Metro services with built-in Experience Sensors and establish network trust by verifying hardware and software integrity at scale.
  • Day 2 and beyond: Find, fix and predict Cloud Metro problems before they impact user experience, thanks to WAN AIOps built into Paragon Automation as a Service.

Service Providers are more than ready for automation. But most automation solutions haven’t been ready to deliver the outcomes SPs need—until now.

See the power of Paragon Automation as a Service for yourself with this sneak peek into just one example of the future of automation – our AI-enabled Device Onboarding as a Service – and discover how we’re reimagining the onboarding process to make it instantaneous, virtually error-free and secure.

Cloud-first, AI-enabled automation is the future. Juniper’s initial offering will be available in early 2023—and we’re just getting started. Throughout next year, we expect to roll out additional use cases and an AI-driven conversational assistant.

Statement of Product Direction

The information on this page may contain Juniper’s development and plans for future products, features, or enhancements (“SOPD Information”). SOPD Information is subject to change at any time, without notice. Juniper provides no assurances, and assumes no responsibility, that future products, features, or enhancements will be introduced. In no event should any purchase decision be based upon reliance of timeframes or specifics outlined as part of SOPD Information, because Juniper may delay or never introduce the future products, features, or enhancements.

Any SOPD Information within, or referenced or obtained from, this website by any person does not give rise to any reliance claim, or any estoppel, against Juniper in connection with, or arising out of, any representations set forth in the SOPD Information. Juniper is not liable for any loss or damage (howsoever incurred) by any person in connection with, or arising out of, any representations set forth in the SOPD Information.

Juniper Networks, the Juniper Networks logo, Juniper, Junos, and other trademarks listed here are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners. 

Simplifying the Edge Opportunity with Automation

Contributed by Susan White, Head of Strategy and Portfolio Marketing, Netcracker.

Enterprises need edge compute for many reasons – enabling low latency for real-time IoT services, reducing backhaul costs for video processing and keeping sensitive data local – to name a few. It’s becoming a core strategy for many companies in vertical markets to boost productivity, performance and increase customer satisfaction.

However, most enterprises cannot deal with the complexity of putting all the pieces together and running these mission-critical services with strict SLAs. Choosing and locating the edge cloud platform, connectivity options, security solutions, distributed 5G core and MEC applications is a hefty project that requires extensive design, testing, validation, deployment and continuous assurance.

Enterprises need simplicity.

This creates an ideal opportunity for CSPs to remove this complexity and at the same time deepen their engagement and value in enterprise and vertical markets. However, no CSP has all the piece parts that are needed to offer enterprise customers a successful edge solution. It’s a multi-partner play that requires a great deal of cooperation and integration across the ecosystem.

This is why automation is essential to build and monetize a successful edge business. Whether it’s part of a private or hybrid 5G network, automation is required at both the operational and business levels.

 

Unifying and Scaling Edge Operations

Operations automation of the edge domain is essential for three key reasons:

  1. Unification of operational processes within the edge cloud. Today, edge platform automation (on-premise or network-edge) is separated from the service lifecycle of VNFs and CNFs, which in turn is separated from high value MEC applications. This results in high complexity and manual intervention. These processes must be streamlined to commission the entire edge platform, services and even network slices with zero touch.
  2. E2E view across highly distributed edge locations, transport and core. Edge services are highly complex and may have stringent latency and performance requirements. Resources can reside on multiple edge hosts (CSP and hyperscaler) and multiple locations (on-premise, network-edge, regional, core). Edge service deployments extend beyond the edge. An E2E view is essential to automate edge services.
  3. The ability to scale to many customers. Siloed private 5G and edge networks make replication and scale impossible. This in turn makes the business case harder to justify. A unified operations environment is needed that can start small and scale to many businesses.

Netcracker has built these capabilities into its Edge Orchestration solution. We are helping CSPs globally, including Etisalat, develop a successful edge business. It goes beyond the intelligent placement and lifecycle management of edge services to also ensure the RAN has the right QoS enabled, connectivity between all edge hosts is ready (CSP or hyperscaler) and the 5G UPF breakout function is enabled for edge hosts. With its modular architecture and multitenancy, CSPs can scale their edge services across public and hybrid 4G/5G deployments.

 

Giving Enterprises Visibility, Control and the Right Partner Solutions

Enterprises are demanding faster access to their services, with the simplicity of selecting, activating and managing their services on demand from an intuitive portal. This necessitates automation at the business layer as well. Back-end processes such as the product catalogue, customer management and revenue management need to be streamlined with the front-end portal to enable automation from customer order to activation.

Serving vertical markets with the right edge service will require a dynamic ecosystem of partners to address their specific business needs. The resulting multi-party solution will require a new approach to partner management that automates the myriad processes involved, from on-boarding partners to cataloguing, product management, pricing, and settlements.

Netcracker’s Digital Marketplace solution brings all these capabilities together to help CSPs give their customers an easy way to purchase services with complete visibility and control. The integrated Partner Management solution, adopted by T-Mobile US for its wholesale MVNO and IoT business, helps CSPs to build a vibrant partner ecosystem that will be essential to compete in the edge market.

Edge is a complex business – but it has the potential to be a very lucrative one with the right automation in place that brings simplicity and value to enterprise customers.

 

Automation? It’s just a milestone on the route to the Zero Touch Experience

Contributed by Atul Purohit, Head of Technology, Europe, Nokia.

Automation has been near the top of everyone’s agenda for a long time – which department of a CSP doesn’t want to make better and more extensive use of automation?

But do we have a clear view of what this automation is actually for? What is its ultimate converging point?

When anyone in a CSP speaks about automation, they will most likely mean automation of operations or processes, such as tool sets, alarms and/or ticketing. But automation is a much bigger concept and goes far beyond the CSP’s operational aspects.

In a recent discussion at FutureNet World, I was part of a panel of CSPs and developers who delved into automation, its true goals and what it could ultimately mean for end users.

We hear a lot about Zero Touch Automation, where automated tools elevate people to the status of observers who can intervene if needed. This is great but it is only one step along the road – the ultimate destination is the Zero Touch Experience, where everything the end user needs happens in the background without them having to think about it. According to research by Nokia, some 60% of new revenue in 5G will come from B2B2X business models, making partner ecosystems increasingly important.

If we look at Uber or Airbnb, the starting point is not the architecture but the end customer experience – for them, it’s all about how they can offer that Zero Touch Experience.  Essentially, automation is a fundamental building block towards that seamless experience – the customer has come to buy or access a certain service and everything in your organization needs to collaborate to make that point of value exchange as painless and seamless as possible. We need to start with the Zero Touch Experience and work backwards to build the Zero Touch Automation that can make it work.

 

One service to bind them all

What CSPs have perhaps been lacking so far is a common theme which they can build their Zero Touch Automation around. History tells us that football binds nations – there are goals that an entire nation can get excited about and work seamlessly towards. Perhaps CSPs need a similar binding act, a rallying goal for all departments to converge on. With 5G offerings like network slicing, this might just happen – here is a concept that CSPs can actually use to tie everything together. On one hand, network slicing can lead towards an outcome like a defined use case; on the other, it could lead to multiple domain-specific automations. We need a ‘religion’ (like football) to bind all these aspects together, and network slicing could be the right way forward for a CSP.

It’s clear that the services of the future will not be completely within a CSP’s own environment. Up until now, telecom services have been very much vertically integrated and completely self-contained. But with 5G, things disaggregate, and you have elements of service which will be outside a CSP’s control. Without an overarching philosophy such as network slicing, this could lead to automation being tackled in different parts of the organization with varying degrees of maturity.

In the 5G era, the concept of customers has grown to include third parties such as developers as a new revenue generating demographic. The ability to provide network as code or APIs to developers is something which 5G brings to the table – the developers often have a simple and clear ask from the network and it is up to CSPs and vendors like Nokia to deliver on that promise.

This new era of services for developers was outlined by Jad Meouchy, CTO of Bad VR, who told us what he is looking for from CSPs: “I’m looking for an API, or access points, or information or datasets or something that I can start exploring and understanding and discover the possibilities. Use cases like streaming and gaming are very interesting and we are keen to see how we can make use of them – we’re always looking at what the system thinks it’s supposed to do and thinking about how we can use that to meet our needs.”

 

A pick and mix approach

Developers such as Bad VR clearly want a Zero Touch Experience, the ability to easily pick and choose what works for them. At an industry conference some years ago, 20 service providers were presenting their edge strategy towards developers. At the end of all these presentations, a developer was invited to comment. She said, “Believe it or not, gents, I’m here for the booze and free food. I don’t care about your 20 different distinct ways to interact with your specific ecosystems – I don’t have the time and the patience to go through all of them.”

Developers obviously do not want to do different things when interacting with different service providers. Nokia is playing its part by using automation to open up APIs with almost all of our products. As with other vendors, we need to open up our APIs via a sandbox environment, which developers can test and trial to see how applications can work. Automation is the key to this.

As my fellow panellist Pal Gronsund of Telenor said: “Zero touch is a vision that we are all striving towards.”

Javier Garcia Jimenez from Telefonica added: “Enterprises are knocking on our doors and automation is crucial for scaling our capabilities.”

These enterprises clearly want to innovate on top of CSPs’ 5G platforms. They want to pick up the right slice based service for their needs from the CSP’s catalogue and define and adjust their own SLAs. Once their CSP confirms the order, it is automatically deployed and assured for them.

In terms of what we are seeing in the market, there are a number of Proofs of Concept happening and most of these are proving technology. Repeating these multiple times is a costly proposition – at Nokia, we blueprint these POCs so that there can be a fair degree of reuse across various customers.

As well as the technical side, there are also other aspects we need to consider. Javier emphasized this when he said: “It’s also about culture, organization, transformation, not just about the technology – it’s about how we implement the new operating model. Managing the lifecycle is crucial when we manage the end-to-end 5G slice, and ultimately to provide a Network as a Service type of approach.”

We need to deliver value to end customers – opening up networks via APIs and using automation to give developers and enterprises what they want using zero touch operations, is the way to achieve it.

Intelligent Operations for New 5G & Edge Ecosystems

Contributed by Vinod Joseph, Technical Leader of APJ and China, VMware

The digital era reinvented how communication service providers (CSPs) connect with customers. CSPs that deliver excellent services and customer care are rewarded, as customers consistently cite customer service as an important factor in their loyalty to a brand.

An important part of the 5G ecosystem is edge computing. It facilitates radical new use cases that extend from the data center core to the network edges. Edge computing allows for compute and analytics to be moved closer to the data instead of exchanging ever growing amounts of data among cloud servers. As the ecosystem evolves, 5G and edge computing will further converge to enable edge network management, collect and capitalize on massive amounts of data while maintaining integrity and even ownership, and build pervasive intelligence for enabling various latency-sensitive, enterprise, and private services.

Intelligent operations and automation for 5G and edge ecosystems are paramount to delivering excellent services that win customers and trailblaze the latest and greatest service innovations. The three keys to supporting business growth with intelligent 5G operations are efficiency, flexibility and scalability, achieved through automation and a consistent cloud infrastructure deployed across diverse assets and multiple clouds.

Building intelligent automation and operations for a 5G and edge world

The rigidity of today’s hardware and siloed architectures destines them to fail in the current market, where quick pivots are necessary to keep up with the competition. Appliance-based upgrades can take months and disrupt service. Cloud-native technologies and software-centric models provide the necessary agility and flexibility to adapt, deploy quick patches and upgrades, and expand to meet ever-changing conditions, all while avoiding downtime.

Two tasks are fundamental to address the complexity challenge and for CSPs to succeed: automation and optimization. Automation is required to cope with new scenarios in network and edge planning, operation, and management. Automation should ease the operation of network and compute infrastructure for rapidly growing vertical industries, transportation, and enterprise use cases that are bringing along with them new infrastructure owners. Optimization should also ease the extension of cloud computing and fast-growing Artificial Intelligence/Machine Learning (AI/ML) applications to the edge, as well as introduce self-optimization to best serve applications. Optimization strategies will ensure that every node in mobile networks can provide low latency, high reliability, and pervasive intelligence capabilities.

An operational framework backed by intelligent automation and optimization is necessary to build the 5G Core, protect the edge and virtualize the RAN. It rests on five pillars:

  1. Simplification
  2. Unification
  3. Intelligence
  4. Openness
  5. Culture

VMware can help guide CSPs through their digital transformation journey. VMware enables CSPs to consolidate their legacy siloed platforms & operational frameworks to a consistent multi-cloud infrastructure. This allows CSPs to deploy any application on a common cloud layer with intelligent automation & operations.

Unleash 5G potential with automation and end-to-end scalability

Intelligent, analytics-driven automation is more than just automating manual processes. It is the ability to take input from several sources, analyse that input to generate actionable insights, and then execute upon them via intelligent actions. This type of automation is required in the complex end-to-end setting of 5G architecture. VMware Telco Cloud Platform provides the intelligent automation required at the network core, edge and RAN. Furthermore, by incorporating analytics and AI capability into the automation process, critical insights can be extracted from the network assets and used to generate optimal, dynamic configurations, allowing rapid service deployment and providing a framework with business agility and flexibility.
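To illustrate the collect–analyse–act pattern described above, here is a minimal, hypothetical closed-loop sketch. The data sources, thresholds and actions are placeholders invented for the example, not VMware Telco Cloud Platform APIs.

```python
# Illustrative only: the shape of a closed loop -- collect from several
# sources, analyse for actionable insights, then execute intelligent actions.
from typing import Callable

def collect() -> dict:
    """Gather inputs from several sources (metrics, events, topology)."""
    return {"cell_load": 0.92, "packet_loss": 0.03, "site": "edge-east-1"}

def analyse(data: dict) -> list[str]:
    """Turn raw inputs into actionable insights."""
    insights = []
    if data["cell_load"] > 0.85:
        insights.append("scale-out-upf")
    if data["packet_loss"] > 0.02:
        insights.append("reroute-transport")
    return insights

ACTIONS: dict[str, Callable[[dict], None]] = {
    "scale-out-upf": lambda d: print(f"scaling UPF near {d['site']}"),
    "reroute-transport": lambda d: print(f"rerouting traffic for {d['site']}"),
}

def closed_loop() -> None:
    data = collect()
    for insight in analyse(data):
        ACTIONS[insight](data)   # execute the intelligent action

closed_loop()
```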

Intelligent automation software is the key to the proper placement of Cloud-native Network Functions/Virtual Network Functions (CNFs/VNFs) within a mobile network, enabling Mobile Network Operators (MNOs) to maximize the utilization of network resources by re-allocating unused resources to other slices. With its advanced analytics and automated orchestration features, the VMware Telco Cloud Platform enables the creation of a self-driving, self-healing, and self-optimizing 5G network with zero-touch capabilities. CSPs can now implement zero-touch service lifecycle management, which includes automating the design, creation, modification, and monitoring of network functions and services, as well as the provisioning of underlying resources for those services, as and when required. The VMware Telco Cloud Platform supports the scaling and orchestration of network resources for 5G Core, RAN and the edge.

Workload balancing, initiating tests across clouds and edge clusters, network provisioning and lifecycle management are all tasks at which the VMware automation solution excels. The VMware Telco Cloud Platform is always aware of Kubernetes cluster resources and dynamically reallocates work to optimize performance and create a self-organizing network.

Adopting multi-cloud for multi-access edge computing

CSPs employ huge, distributed networks comprised of various types of environments hosted by various providers. This allows them to provide a variety of services and take advantage of cost-effective compute and storage offered by various vendors. Distributing work across platforms, clouds and environments allows CSPs to optimize their investments, reduce their capital expenses, and offer new services.

VMware Telco Cloud provides multi-layer, end-to-end visibility so administrators can easily keep tabs on network and service layers through a single pane of glass. Consistent deployment and management across multi-cloud environments allows network teams to operate and manage multiple vendors, networks and resources through one control panel. Simplicity and visibility, again, save time for IT teams and allow them to dedicate their effort to the most important determinant of business success or failure: the customer experience.

This blog is adapted from a previous version published on the VMware Telco Blog website. To see the original blog and to learn more about VMware Telco Cloud visit our blog site here: https://blogs.vmware.com/telco/

Preparing for the Open RAN transition. What are the two key challenges to overcome?

Contributed By Elisa Polystar

The mobile industry is preparing to invest in Open RAN solutions but there are two key challenges to consider: co-existence and multi-vendor support. Both require adaptation of operational systems. What does this mean?

Open RAN – positive news from MWC 2022

It’s fair to say that Open RAN was one of the big talking points at MWC 2022. While estimates for when Open RAN deployments will begin to dominate overall RAN deployments and displace proprietary solutions vary, there’s a strong consensus that Open RAN will play an increasingly important role in supporting 5G coverage plans.

This consensus is backed by a number of forecasts. You can take your pick, but everything points to a flourishing future – for example, Dell’Oro Group reckons Open RAN will account for 15% of total RAN deployments by as early as 2026, with the Asia Pacific region leading the way.

Meanwhile, consultancy firm Tecknexus counts nearly 50 trial deployments of Open RAN systems across more than 30 countries (as of December 2021) – and these are just those that are publicly known. Many of these are likely to be very small scale and may be related to extending coverage beyond the current RAN footprint, rather than being indicative of the replacement of existing RAN solutions, but they are still signs of growing momentum and point to a bright future.

Indeed, announcements at MWC confirmed this growing momentum, with a number of operators talking up trials, new partnerships and various initiatives, designed to support expansion of the Open RAN ecosystem. And this really is key: one of the drivers to Open RAN is to reduce dependence on a handful of suppliers and to enable a new ecosystem of smaller and typically (for now, at least) independent vendors to flourish.

Understanding challenges that scale will bring

This raises two interesting questions as we move inevitably towards scaling up early-stage trials. First, the obvious. Open RAN is going to co-exist with traditional RAN solutions for some time to come, particularly in national networks. It may take share more rapidly in private networks. The bottom line is that we are heading for a world of hybrid macro public networks, with increasing penetration of new Open RAN solutions being deployed alongside traditional RAN offers and the currently deployed footprint.

Indeed, this phase must exist for some time to come – simply because many operators have already chosen their key RAN partners for 5G rollout. For decisions that have already been taken, it’s safe to say that Open RAN lacked the requisite maturity to be a contender. That will change, but it will clearly take time.

Second, if Open RAN is really going to emerge at commercial scale from a new generation of vendors, we must also consider the likely multi-vendor nature of the new RAN environment. Yes, traditional vendors will move towards Open RAN too, but they may retain some proprietary elements. New vendors, however, will deliver both commoditised solutions and, in all likelihood, specialised solutions to suit different use cases – from small cells to macro cells, and even portable cells for flexible coverage deployments.

OSS adaptation is essential – and must start now

This is exciting – but it does mean that MNOs will also have to adapt to support multiple RAN partners, instead of the traditional one or two. Quite apart from what that means from a commercial perspective, it has significant implications from an operational view.

A term that’s widely used in the industry is agnosticism. We often talk about how solutions are protocol agnostic, for example – which simply means they can connect on any suitable interface for the task in hand. To make the most of Open RAN, MNOs need to ensure not only that new Open RAN solutions can be deployed alongside traditional ones (which will be around for a long time to come), but also that they can pick and choose vendors as they like. In other words, MNOs will need to become agnostic to different RAN products. That’s a huge task.

We’re essentially saying that MNOs need to build an operational environment into which any RAN – open or otherwise – can be seamlessly deployed into their networks and integrated with their operational systems. To achieve this will require a radical shift from current approaches and a more flexible OSS.

This problem is gaining recognition, as our recent white paper made clear. In “Open RAN automation – how to ensure compatibility with your evolving network”, we outlined the challenges of aligning new, Open RAN investments with existing and traditional RAN solutions, highlighting some key use cases that must be addressed to ensure compatibility.

At present, many trials are based on the introduction of a new, single vendor for Open RAN. From these, we’re learning how to solve the first challenge – how Open RAN and proprietary solutions can co-exist successfully. So, we already know how to include Open RAN alongside existing solutions.

Getting ready for hybrid, multi-vendor networks

In the future, we can expect multiple vendors to be integrated into the network. For that to happen, all interoperability issues need to be solved and MNOs need to be confident that their operational and assurance systems can flex to embrace these deployments.

We haven’t quite reached that point, but we are definitely moving in the right direction – and, by exposing key use cases that must be addressed, we are gradually defining how this transition can be enabled. In other words, we can already see how to move to the future, agnostic environment that we’ll need.

So, the message is: watch this space. By taking steps to adapt solutions such as our automation platform now, we’re getting ready for the coming wave of Open RAN deployments, so that MNOs can support the multi-vendor networks they need.

Interested in knowing more? Download our whitepaper, where we explore ways in which Open RAN can be integrated and aligned with automation systems in hybrid networks, to ensure compatibility with the future evolution path required to deliver full-service 5G capabilities.

 

Kalle Lehtinen, CTO, Vice President Technology & Architecture, Elisa, and Thomas Nilsson, Chief Technology Officer, Elisa Polystar, are co-presenting at FutureNet World on 10-11 May in London; register here.

 


Link: https://www.elisapolystar.com/open-ran-automation/

Making networks greener doesn’t have to hurt

Contributed by Andrew Burrell, Head of Marketing & Communications, Nokia.

One of the reasons for the world’s glacially slow response to the dangers of climate change is the sheer difficulty of taking effective action. Most people accept that dramatic reductions in greenhouse gas (GHG) emissions are needed, but the high costs and often severe lifestyle changes needed are daunting barriers.

We could cite many examples, but let’s look at just two:

  • The aviation industry must reduce carbon emissions, but that means people flying less
  • Reducing the carbon footprint of buildings is essential, but means substantial spending on new heating and insulating systems.

Fortunately, it’s a different story when it comes to mobile telecom networks. While such networks may account for just over 1% of global electricity consumption, that 1% adds up to a significant volume of GHG emissions. Unsurprisingly, customers, investors, regulators and governments are urging communications service providers (CSPs) to take action.

While CSPs may be keen to reduce their network energy consumption (and thus costs), the challenge is to do it without compromising network performance, customer satisfaction or the bottom line.

Target the radio network

So how can it be done without forcing CSPs into large-scale hardware redeployments, comprehensive network modernization or architecture redesigns? And how can network carbon footprints be reduced without degrading the user experience?

CSPs aiming to cut the largest possible chunk of energy consumption at the fastest possible pace need to focus on the radio access network (RAN). That’s because the RAN accounts for around 80% of all mobile network energy consumption.  “Waste” is an issue because only 15% of that energy is used to transmit data. The other 85% goes into secondary systems such as heating and cooling, lighting, uninterruptible and other power supplies, and running idle resources.

Modernizing network infrastructure can help but is hindered by slow upgrade cycles and requires high upfront CAPEX investments. If we want to have an immediate impact, there are two main strategies to reduce network energy use: dynamic network shutdowns and full-site power management.

Dynamically shutting down unused network elements during low-traffic periods can save much energy. Artificial intelligence (AI) maximizes the potential savings by using all sorts of data to precisely predict when to shut down infrastructure and perfectly balance energy savings, network performance and customer experience.

Using AI to control dynamic shutdowns can extend sleep times by several hours compared to statically scheduled shutdowns. AI can further boost energy savings by another 50% by eliminating the need to keep resources on standby ready to serve a sudden uplift in demand.
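As a simplified illustration of how a traffic forecast could drive dynamic shutdown windows, consider the sketch below. The forecast values, threshold and wake-up lead time are assumptions made for the example and do not represent Nokia AVA logic.

```python
# Illustrative only: deriving capacity-layer sleep windows from a
# predicted hourly traffic forecast. All values are invented.
HOURLY_TRAFFIC_FORECAST = {        # predicted load per hour, 0.0 - 1.0
    0: 0.08, 1: 0.05, 2: 0.04, 3: 0.04, 4: 0.06, 5: 0.12,
    6: 0.30, 7: 0.55, 8: 0.75, 22: 0.25, 23: 0.12,
}
SHUTDOWN_THRESHOLD = 0.15          # capacity layer not needed below this load
WAKE_LEAD_TIME_H = 1               # wake up one hour before demand returns

def shutdown_hours(forecast: dict[int, float]) -> list[int]:
    """Hours when a capacity layer can sleep: the forecast load is low now
    and still low one hour ahead, so it can be woken before traffic returns."""
    hours = []
    for hour, load in sorted(forecast.items()):
        next_load = forecast.get((hour + WAKE_LEAD_TIME_H) % 24, 1.0)
        if load < SHUTDOWN_THRESHOLD and next_load < SHUTDOWN_THRESHOLD:
            hours.append(hour)
    return hours

print(shutdown_hours(HOURLY_TRAFFIC_FORECAST))   # -> [0, 1, 2, 3, 4, 23]
```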

Managing passive equipment

Yet dynamic shutdowns only account for network elements, not power-hungry auxiliary components such as fans, cooling systems, lighting and power supplies. To ensure maximum energy efficiency, AI-powered energy consumption management must cover both active radio and passive equipment. The key is to benchmark energy trends to spot performance anomalies in historically “invisible” passive equipment that could be draining energy and might need to be reconfigured or replaced. Implementing such AI-based energy management can reduce energy costs by 20-30%.
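A minimal sketch of the benchmarking idea follows, assuming invented site readings and a 20% deviation tolerance: recent energy draw is compared against the site’s own historical baseline and anomalous days are flagged for investigation.

```python
# Illustrative only: flagging passive-equipment energy anomalies by
# benchmarking recent readings against a site's historical baseline.
from statistics import mean

def energy_anomalies(history_kwh: list[float],
                     recent_kwh: list[float],
                     tolerance: float = 0.20) -> list[int]:
    """Return indices of recent daily readings that exceed the historical
    baseline by more than the tolerance (e.g. a cooling unit stuck on)."""
    baseline = mean(history_kwh)
    return [i for i, kwh in enumerate(recent_kwh)
            if kwh > baseline * (1 + tolerance)]

history = [310, 305, 298, 312, 308, 300, 307]    # normal daily kWh for a site
recent = [309, 304, 390, 402, 308]               # two suspicious days
print(energy_anomalies(history, recent))         # -> [2, 3]
```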

The best news is that, because it is software, AI-based energy efficiency can be deployed in just a few weeks without major upfront investment. Software as a Service business models can also mean that CSPs pay their vendors only for the outcomes that are actually achieved. Implementing the technology over a public cloud can make it even more convenient by easing the processing and analysis of the large volume and velocity of network data.

Maintaining the customer experience

The question that now arises is how can CSPs guarantee that network performance and customer experience don’t suffer when parts of the network are powered down? How do they ensure resources are powered up again in time for traffic peaks? In other words, how to ensure network performance requirements and energy consumption are precisely aligned?

A problem of such complexity calls for AI-based energy solutions that can predict precisely the right time to power off resources and power them on again. Just-in-time waking is hard to achieve with static or rules-based methods, usually requiring extensive wake-up windows or the use of standby mode to shorten response times.

China Mobile adopts an AI-based solution

This was the situation facing China Mobile, which wanted to cut energy consumption and control costs without compromising the customer experience. The CSP realized it needed a comprehensive energy efficiency plan to reduce emissions and lower costs without affecting the customer experience or network performance.

China Mobile decided to use the Nokia AVA Energy Efficiency solution for:

  • Predictive and dynamic management of passive and active components to gain much finer-grained control over energy consumption
  • Predictive closed loop actions for faster, automated responses to changing conditions instead of relying on manual interventions that cause delayed responses
  • Automated remote antenna control to adjust coverage dynamically according to shifting capacity requirements.

China Mobile was able to permanently balance energy savings and performance requirements, allowing key performance indicators (KPIs) to be pre-set, with savings calculated by the AI system.

Contrary to most areas of daily life, energy savings in telecoms do not require massive lifestyle changes and do not have an impact on the services and the experiences customers are used to. That has to be good news for people and the planet.

Andrew is speaking on the ‘Energy efficiency and sustainability by design’ panel discussion at FutureNet World on the 11th of May. Register here.

The “March of Time” Requires Intelligent Service Assurance as part of AIOps

Contributed by Mark Geere, Product Marketing, TEOCO

I am lucky enough to own an old MGB sports car, and it has been sitting in my garage for the winter. As is typical with these older British cars, it wouldn’t start after a couple of months’ “rest.” But once I opened the bonnet and applied some simple manual human intelligence, fault finding, and diagnostics, with a little bit of coaxing it eventually started up. I thought of this as I was driving around the next day in my more modern Jaguar XF; I opened the bonnet on this car and realized I had no chance of even trying to understand what was going on in there. Even the Jaguar specialists at my local garage often comment that they must send some faults back to the franchise dealers, as they have the specialist tools, diagnostics, and processes to fix and mend difficult issues.

I have seen the same march of time impact technology within the telecommunications sector. I previously worked for a software vendor as an engineer on mobile telecoms networks, where we helped design, manage, and assure the relatively simple physical-based networks of 2G/3G. I’ve seen these networks grow in complexity over the years. Through the 4G era of internet and video streaming capabilities to the 5G era of today, virtual/cloud-based networks are becoming a reality, along with new services and the added intricacies of network slicing and MEC capabilities.

Let’s face it: all this innovation serves the ultimate goal of CSPs, which is to increase the bottom line for their shareholders. However, as part of this they need to ensure that their operational costs (OPEX) do not rise as fast as the complexity of their network. On the other side of the equation, CSPs have learnt from painful history that if they do not maintain a dedicated focus on the consumer and enterprise experience and service quality declines, it soon translates into a significant loss of revenue. At this point AI/ML and automation within operations become critical tools for balancing both sides of the equation, and it’s why CSPs are looking to AIOps for help.

To gain real benefits within operations, the CSP needs to intelligently automate many previously manual tasks and ensure their service assurance solution seamlessly integrates within their BSS and OSS architectures. As service automation becomes more of a reality (especially at the Enterprise level), Service Assurance solutions will need to “talk” to the orchestration and fulfilment solutions with far tighter, nearer real-time integration, so that they can translate intent-based requirements into measurable parameters. This will not always result in predictable outputs from specific inputs, but it will create intelligently derived performance indicators that reflect the intent of service level agreements.
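As a simplified illustration of translating intent into measurable parameters, the sketch below maps hypothetical intent keywords to concrete KPI targets that an assurance system could monitor. The vocabulary and thresholds are assumptions made for the example, not the HELIX data model.

```python
# Illustrative only: mapping intent-style SLA requirements to measurable
# assurance thresholds. Intent keywords and KPI names are invented.
INTENT_PROFILES = {
    # intent keyword -> concrete, measurable assurance targets
    "low-latency":      {"latency_ms_p99": 10, "jitter_ms": 2},
    "high-throughput":  {"throughput_mbps_min": 500},
    "high-reliability": {"availability_pct": 99.999, "packet_loss_pct": 0.01},
}

def intent_to_kpis(intents: list[str]) -> dict:
    """Merge the measurable targets implied by each stated intent."""
    kpis: dict = {}
    for intent in intents:
        kpis.update(INTENT_PROFILES.get(intent, {}))
    return kpis

# An enterprise slice ordered with intent-based requirements:
print(intent_to_kpis(["low-latency", "high-reliability"]))
```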

At TEOCO, one of our key capabilities is our ability to integrate multi-technology and multi-vendor network equipment into a single independent view across an entire network. However, even though this remains a unique selling proposition, it is not enough. We recognize the impacts of the “march of time,” and over the years we have worked with industry bodies such as TM Forum, along with our customers, to constantly upgrade our HELIX Service Assurance solution to meet the new challenges of each era. As CSPs move towards zero-touch service operations, TEOCO has focused on providing two key aspects for Service Maintenance & Operations:

  • Providing intelligence within our solution through the introduction of Machine Learning: this reduces the time required to locate faults, helps intelligently find the right metrics to manage SLAs, and drastically reduces the number of alarms across multiple technologies and vendor equipment down to those that matter.
  • Seamlessly integrating into operators’ BSS/OSS architectures by being cloud-ready and using Open APIs.

AIOps is still immature within CSP operations. It envisions a high level of AI-assisted or AI-driven automation in IT and network operations. Though zero-touch is a radical leap, it is an essential part of CSPs’ drive towards operational automation. The introduction of intelligent, integrated service assurance is key to its success.

To learn more about Helix and TEOCO’s full line of service assurance solutions, please visit our website or contact us for more information.

Mark is a panelist in the session: Applying AIOps for Zero Touch Automation through intelligent Service Assurance, at Futurenet World in May. Register today to join him.

Cloudifying the network infrastructure

Contributed by Teresa Monteiro, Director of Marketing, Infinera

In recent years network automation has evolved past SDN and NFV, with the cloud emerging as a major player. Networks that once extended from physical to virtual are now moving to cloud. Cloud-native network functions are helping service providers expand beyond connectivity, and multi-cloud and hybrid cloud architectures are core to meet the distributed computing requirements for scale and agility imposed by 5G and IoT.

In this blog I will also discuss the role of cloud in network automation – but I will do it from a different perspective: that of cloudifying the embedded network infrastructure.

Cloud-native down to the infrastructure

When we think of optical network automation, we typically think of the network management and control layer, and of centralized automation applications enabled by an optical domain controller – automation applications such as network discovery and inventory, path computation, or service restoration.

But let’s not forget that there are also important automation applications implemented in the network operating system (NOS), the operating system running on the individual network elements. The adoption of a cloud-native architecture at the NOS level facilitates agile delivery and deployment of such applications.

By cloud-native architectures and technologies, we specifically mean a NOS that is microservices-based and can be deployed in containers with the support of a container management system. This choice of software architecture has many well-known benefits; today, I will focus on the fact that it allows software modules, developed and compiled elsewhere, to run autonomously in a network element environment, deployed in what is called a guest container.

In simple terms, a guest container is an isolated component within the NOS that can host and execute a software agent. This software agent has access to open, exposed interfaces, but not to internal NOS parameters.

The deployment of software agents within guest containers enables the extension of NOS features, accelerates the introduction of innovative automation applications, and supports the development of customized functionality.

A NOS-agnostic software agent can be implemented and compiled independently, in a foreign development environment, by an operator or a third-party supplier, and, once downloaded, will run smoothly in a cloud-native NOS.

Furthermore, since a containerized architecture offers a variety of deployment options, the same agent can be deployed on the fly and run locally on a network element, on a server, or in the cloud. It can be ported easily across platforms: to the cloud when an application needs to scale, or to the network element processor when there are latency constraints.
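
To make this a little more concrete, here is a minimal sketch, in Go (the language used for the agent described later in this post), of what a guest-container agent restricted to open interfaces might look like. The endpoint URL, port, and metric path are hypothetical placeholders rather than real NOS interfaces; an actual agent would use whatever open interface (for example gNMI or a REST API) the NOS exposes.

  // Hypothetical sketch of a guest-container agent: it may only touch the
  // open interface the NOS exposes (here a placeholder REST endpoint) and
  // has no access to internal NOS state.
  package main

  import (
      "fmt"
      "io"
      "log"
      "net/http"
      "time"
  )

  // Placeholder URL for illustration only, not a real NOS API.
  const openNOSEndpoint = "http://localhost:8080/open-api/optical-power"

  func main() {
      ticker := time.NewTicker(30 * time.Second)
      defer ticker.Stop()

      for range ticker.C {
          reading, err := readOpenInterface(openNOSEndpoint)
          if err != nil {
              log.Printf("poll failed: %v", err)
              continue
          }
          // A real agent would push this value to a collector; here we log it.
          fmt.Printf("optical power reading: %s\n", reading)
      }
  }

  // readOpenInterface fetches one value from the exposed NOS interface.
  func readOpenInterface(url string) (string, error) {
      resp, err := http.Get(url)
      if err != nil {
          return "", err
      }
      defer resp.Body.Close()
      body, err := io.ReadAll(resp.Body)
      if err != nil {
          return "", err
      }
      return string(body), nil
  }

Because the agent depends only on the exposed interface, the same binary could in principle run in the guest container, on a server, or in the cloud, as described above.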

An example: adaptive streaming telemetry

Let me describe a concrete example: an automation application called adaptive streaming telemetry, which extends and improves the standard streaming telemetry mechanism and has been successfully implemented as a NOS-agnostic software agent.

Streaming telemetry is a network monitoring methodology in which an external system subscribes to a specific network element data stream, chosen from all the monitoring data the equipment is able to expose. From then on, the network element pushes all corresponding data, in an almost continuous manner, to the server that subscribed to it. Streaming telemetry ensures low-latency, high-performance data collection, enabling near real-time access to large volumes of network data.

However, standard streaming telemetry can still be improved. In a modern network there are plenty of network parameters available to be streamed; under normal operation, many are redundant, and monitoring them does not add meaningful information while imposing an unnecessary load on the system. This is where adaptive streaming telemetry, a solution that adjusts dynamically to the network status and evolves with the network’s needs, is useful (see the sketch after the list below):

  • Under normal network operations, a fixed, limited set of parameters is included in each data stream and pushed to the data collectors at a moderate frequency. These are the parameters that best summarize the network status.
  • Upon network status change, the streaming frequency and the content of the data stream are adjusted: the push frequency may be increased, or more parameters can be added to the telemetry stream, for further insight.
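
For illustration only, the following Go sketch captures the adaptive behavior described above: a small summary set of parameters is pushed at a moderate cadence, and on a network status change the cadence increases and extra parameters are added to the stream. The parameter names, intervals, and the push function are hypothetical placeholders; a real agent, such as the one described below, streams via gRPC-based telemetry rather than printing values.

  // Simplified, hypothetical sketch of adaptive streaming telemetry:
  // a small parameter set is pushed at a moderate cadence during normal
  // operation; on a network status change the cadence increases and
  // extra parameters are added to the stream.
  package main

  import (
      "fmt"
      "time"
  )

  var (
      summaryParams  = []string{"pre-fec-ber", "optical-power"} // placeholder names
      detailedParams = []string{"pre-fec-ber", "optical-power", "osnr",
          "chromatic-dispersion"} // placeholder names
  )

  // push stands in for the real streaming call (e.g., gRPC-based telemetry).
  func push(params []string) {
      fmt.Printf("%s pushing: %v\n", time.Now().Format(time.RFC3339), params)
  }

  func main() {
      statusChanged := make(chan bool)

      // Simulate a network status change (degradation) after 95 seconds.
      go func() {
          time.Sleep(95 * time.Second)
          statusChanged <- true
      }()

      interval := 30 * time.Second // moderate cadence under normal operation
      params := summaryParams
      ticker := time.NewTicker(interval)
      defer ticker.Stop()

      for {
          select {
          case <-ticker.C:
              push(params)
          case degraded := <-statusChanged:
              if degraded {
                  // Adapt: push more often and include more parameters.
                  interval = 5 * time.Second
                  params = detailedParams
              } else {
                  interval = 30 * time.Second
                  params = summaryParams
              }
              ticker.Reset(interval)
          }
      }
  }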

The power of adaptive streaming telemetry

This approach decreases the load on the data communication network compared to standard telemetry, while continuing to support fine-grained visibility when and where it is needed. It also contributes to better overall data quality, which in turn allows for better compliance with SLAs, improves characterization of a network element’s health, and unleashes the predictive power of analytics and machine learning.

Infinera has worked jointly with Oracle and Microsoft on adaptive solutions that extend standard gRPC-based streaming telemetry. We have successfully demonstrated adaptive streaming using a NOS-agnostic software agent implemented in the Go open-source programming language, running in a NOS guest container. The same agent was seamlessly deployed and tested in Infinera’s optical network operating systems as well as in SONiC, an open-source network operating system that includes strong support for routing protocols. The use of the same software across various technologies and equipment vendors ensures that the behavior of adaptive streaming telemetry is uniform and consistent.

Leveraging cloud towards autonomous networking

Automation applications like adaptive streaming telemetry, which intelligently observe the network, are key ingredients for implementing intent-based cognitive networking. Adaptive streaming telemetry is one example of a fast-growing ecosystem of embedded network automation applications that leverage cloud technologies to bring operators closer to the vision of a network that is self-adapting, self-healing, and self-optimizing.

This content has been adapted from Infinera blogs, published in 2021/2022.

The Growth of CI/CD in Network Automation

Contributed By: Morgan Stern, VP Automation Strategy, Itential

Among the more enjoyable parts of my job is the opportunity to discuss the goals and challenges that CSP network and orchestration teams face as they relate to network automation.

While these conversations typically span several different topics, trends do emerge; and while some of the ideas we discuss never materialize, others do gain traction and become the standard approach for those teams.

Last year, one trend that saw huge momentum was the intersection of network automation and CI/CD. Several CSPs I met with were developing pipelines to manage their network lifecycle activities. This made sense: the practices that changed the way software is developed, tested, and deployed could yield the same types of benefits for the processes of integrating and deploying network automation and orchestration.

While it felt like everyone wanted to discuss CI/CD, the focus of their pipelines varied significantly across different CSPs. For example, some wanted to create pipelines for specific configurations in an “Infrastructure as Code” (IaC) model. Others wanted to create pipelines for their automation artifacts – the scripts, workflows, templates, etc. – seeking a more structured process for development, testing, and deployment now that automation activities were evolving from task-centric to end-to-end process focused. And in other cases, CSPs wanted to develop pipelines for managing the automation platforms themselves, to automate the testing, integration, and deployment of the automation/orchestration engines through version upgrade and patch processes.

After much investigation, which included talking with CSPs around the world, as well as with engineers and SMEs across a few different disciplines, three common themes emerged:

  1. The desire to implement orchestration/automation CI/CD pipelines was a response to CSPs’ growing dependency on automation and the increasing complexity of their automation activities.

Automation has graduated from small scripts used by individual engineers to solve specific problems to automation architectures that address end-to-end activities. During this evolution, the limitations of some approaches became obvious: certain practices did not scale well and created an enormous amount of technical debt when tools designed to solve one problem (automating a single task) were applied to a much larger and more complex problem (automating a business or technical process).

  2. The problem set that any given CSP was trying to solve with CI/CD was a reasonably accurate indicator of where that CSP was in its automation and orchestration journey.

Infrastructure as Code (IaC) was the starting point for most organizations, but as the infrastructure required to execute automations became more complex, teams realized they needed to address the complexity of developing, testing, and deploying the automation assets themselves to ensure interoperability, versioning, and security of assets like new scripts, workflows, and templates. The next challenge was how to create pipelines that ensure the automation tools and platforms are integrated, tested, and upgraded securely, without disruption to the production environment.

  3. Pipeline development created additional opportunities for automation, particularly for testing activities.

Multiple CSPs wanted the ability to dynamically instantiate working environments to improve the quality of their testing efforts. This created the need for the capability to replicate portions of the production network and network services within the lab on demand, and then to have mechanisms for storing and managing network snapshots that could be accessed, versioned, and automatically updated to reflect architectural or vendor changes in production.
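
As one hypothetical illustration of such a pipeline stage, the short Go test below compares a rendered device-configuration artifact against a versioned “golden” snapshot captured from the lab. The file names and directory layout are placeholders, not Itential conventions; real CSP pipelines differ widely in tooling and structure.

  // Hypothetical pipeline stage: verify that a rendered configuration
  // artifact still matches the versioned snapshot captured from the lab.
  // File paths are placeholders for illustration only.
  package pipeline

  import (
      "bytes"
      "os"
      "testing"
  )

  func TestRenderedConfigMatchesSnapshot(t *testing.T) {
      rendered, err := os.ReadFile("artifacts/edge-router.rendered.cfg")
      if err != nil {
          t.Fatalf("could not read rendered config: %v", err)
      }

      golden, err := os.ReadFile("snapshots/edge-router.golden.cfg")
      if err != nil {
          t.Fatalf("could not read golden snapshot: %v", err)
      }

      if !bytes.Equal(rendered, golden) {
          // In a real pipeline, a diff would be reported and the change
          // either rejected or the snapshot re-baselined after review.
          t.Errorf("rendered config drifted from the versioned snapshot")
      }
  }

Run as an ordinary test stage (for example, go test ./pipeline), this kind of check lets the pipeline fail fast when an automation artifact drifts from the approved lab state.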

I’m excited to observe the best practices that emerge from these activities, and the new tools CSPs employ to manage their automation pipelines. As a network automation vendor, we are always looking for features to simplify pipeline development, and we expect other vendors and the Open-Source community to do the same. With these new capabilities, automation activities should accelerate dramatically, driving significant incremental business benefits for productivity and agility and opening the door for new service offerings made possible through these innovations.

Automate everything at cloud scale with Oracle’s Unified Operations portfolio

Contributed by: Anil Rao, Senior Director of Product Marketing and Strategy, Oracle Communications

Many of us in the communications industry remember a time when nearly all operations were manual. Order fulfillment was done by moving paper files from one desk to another, communications networks were made up of boxes into which people plugged cables, and network operations staff handled monitoring manually, escalating issues with phone calls or onsite fixes.

Today, Communications Service Providers (CSPs) face a rapidly changing set of technology and business drivers that are reshaping how they run their operations. Cloud and programmable networks are replacing physical networks, 5G and edge are driving new complexity and requirements, consumer and enterprise customers demand one-click self-service experiences, and the business models used to monetize networks are in flux.

As a result of these changes, service providers face many challenges. They must reduce operating expenses to improve profitability in increasingly commoditized markets. Siloed tools and fractured operations make it difficult to move beyond manual, swivel-chair operations in an era of ever-increasing network complexity. A lack of technical agility hinders the ability to exploit the benefits of cloud technology, and a lack of business agility constrains the ability to capitalize on new business models and B2B2X relationships enabled by 5G. In short, traditional telcos must transform to become “techcos” (see the recent white paper here) – modern service providers that embrace automation and cloud-native solutions to deploy innovative business models and deliver high-value experiences to their customers.

Service providers today are looking to automate everything that can be automated in order to reduce costs, tackle complexity, and exploit emerging business opportunities. This includes “north-south” automation, whereby customer intent is captured during the self-serve ordering process, then modeled and seamlessly implemented in downstream systems, from business operations to service operations to resource operations at the network level. Intent-based automation is then complemented by “east-west” automation in a closed-loop fashion, with end-to-end service automation spanning service orchestration, inventory, and assurance.
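
To illustrate the closed-loop idea in the simplest possible terms, here is a minimal Go sketch under heavily simplified assumptions: intent is reduced to a desired service state, the assurance side supplies an observed state, and the loop asks the orchestration side to act when they diverge. The types, values, and function names are illustrative placeholders, not Oracle APIs.

  // Minimal, hypothetical sketch of a closed automation loop: an intent
  // captured at order time is translated into a desired state, compared
  // against the observed state from assurance, and corrective action is
  // orchestrated when they diverge.
  package main

  import (
      "fmt"
      "time"
  )

  type ServiceState struct {
      BandwidthMbps int
      Healthy       bool
  }

  // observe would normally query the assurance layer; here it returns a stub.
  func observe() ServiceState {
      return ServiceState{BandwidthMbps: 80, Healthy: false}
  }

  // orchestrate would normally call the service orchestrator; here it logs.
  func orchestrate(action string) {
      fmt.Println("orchestration action:", action)
  }

  func main() {
      // Desired state derived from the customer's intent at ordering time.
      desired := ServiceState{BandwidthMbps: 100, Healthy: true}

      for i := 0; i < 3; i++ { // a real loop would run continuously
          current := observe()
          switch {
          case !current.Healthy:
              orchestrate("trigger service restoration")
          case current.BandwidthMbps < desired.BandwidthMbps:
              orchestrate("scale capacity toward intent")
          default:
              fmt.Println("service matches intent, nothing to do")
          }
          time.Sleep(1 * time.Second)
      }
  }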

Introducing Oracle Unified Operations

Oracle’s Unified Operations portfolio is designed to help service providers automate everything at cloud scale. It breaks down silos and abstracts complexity across multi-domain networks and services, while also unifying multi-vendor networks and tools to provide ‘single pane of glass’ visibility. Additionally, the unification of orchestration, assurance, and inventory functions across the operations environment provides automated lifecycle management. Cloud native, dynamic, and highly scalable, Unified Operations is architected for the communications needs and networks of today and the future.

The portfolio comprises four cloud-native solutions, all aligned with TM Forum Open APIs:

  1. Unified Assurance. Unifying network and service monitoring across diverse network technologies, generations, and domains to drive ML-based automated fault and event management, root cause analysis, and performance monitoring.
  2. Unified Orchestration. Bringing together domain-specific configuration systems and orchestration platforms with a multi-domain service orchestration platform to drive intent-driven automation.
  3. Unified Inventory and Topology. Unifying service, network, and resource visibility across diverse network technologies, generations, and domains, providing real-time views for automated orchestration and assurance.
  4. Unified Orchestration and Assurance. Bringing together all the components of Unified Operations, this composite solution drives closed-loop automation for mobile, fixed-line, and digital services.


Built upon a rich portfolio of Oracle’s proven OSS products, together with those from our recent acquisition of Federos, Unified Operations is already helping hundreds of service providers around the world to automate their assurance, orchestration, and inventory processes to improve profitability, deliver a positive customer experience, and capitalize on the opportunity to monetize 5G.

About Oracle Communications:

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world, from network evolution to digital business to customer experience. To learn more about Oracle Communications industry solutions, subscribe to our blog, visit Oracle Communications on LinkedIn, or join the conversation on Twitter @OracleComms.