BT-Infovista partnership leverages automation for superior customer experience

Contributed by Renata Da Silva, VP Product Marketing, Infovista.

In the ever-evolving landscape of telecommunications, the partnership between BT and Infovista has stood the test of time, over a decade of collaboration that continues to shape the future of voice quality for BT’s customers. What started as a venture around monitoring and assurance of fixed voice networks has now evolved into a cutting-edge approach, placing automation at the forefront to dramatically reduce resolution times and enhance overall customer experience.

Partnering for progress… and customer experience

In an insightful conversation with experts from BT (Gina Baikenycz, Network Specialist, and Mark Gibney, Senior Manager Mobile OSS Strategy), the depth of the relationship between the British operator and Infovista came to light.

Mark Gibney emphasized Infovista’s profound domain knowledge and how their roadmaps seamlessly align with the technological landscape that BT envisions. This collaboration goes beyond the conventional supplier-client relationship, with Infovista becoming an integral extension of BT’s own teams. The ongoing deployment of the latest Ativa™ Suite serves as a testament to the harmonious integration of Infovista’s advanced capabilities into the intricate fabric of BT’s network models.

The conversation didn’t just revolve around technological integration but also delved into the challenges faced by BT in its pursuit of excellence. At the forefront of these challenges is the unwavering commitment to customer experience. Streamlining operations and enhancing troubleshooting capabilities emerged as imperative steps, prompting BT to actively explore the realms of automation.

Unlocking the power of automation to slash Mean Time to Repair (MTTR)

Recognizing the potential benefits that automation could bring, BT swiftly identified its significance within the Ativa solution. More than just streamlining operations, automation promised faster service issue resolution, a critical factor in enhancing overall customer satisfaction.

The focus on efficiency and consistency became a priority. By automating manual and repetitive tasks, Ativa not only reduces the time spent but also ensures that troubleshooting processes are consistently and accurately executed, mitigating the risk of errors in the first place. This is particularly vital in the context of modern, complex networks, where accuracy is paramount.

Automation goes beyond mere operational efficiency; it empowers BT to achieve faster issue resolution by quickly detecting and diagnosing problems, often before they become noticeable to customers. The result is a substantial reduction in Mean Time to Repair (MTTR), a metric crucial for maintaining high customer satisfaction levels. Additionally, the operational productivity and resource optimization achieved through automation are key strategic advantages.

By reducing the need for extensive human involvement in routine tasks, operational costs are optimized, and the impact of human error is mitigated. Skilled personnel can then redirect their focus towards more complex issues and strategic initiatives, maximizing the utilization of their expertise.

With these significant benefits in mind, BT leveraged Ativa’s fixed voice assurance solution to set up an automated troubleshooting workflow. This workflow, triggered upon anomaly detection, demonstrated a potential 60-80% improvement in Mean Time to Understand (MTTU), a metric crucial for swift issue comprehension and resolution, and the stage where engineering teams currently spend the majority of their time.

BT troubleshooting workflow with Ativa assurance solution
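To make the shape of such a workflow concrete, here is a minimal sketch of an anomaly-triggered troubleshooting loop. It is illustrative only, with a hypothetical threshold and stubbed collectors standing in for real assurance APIs, not BT’s or Infovista’s actual implementation; the point is that diagnostic context is assembled automatically the moment an anomaly is detected, which is where the MTTU savings come from.

```python
from statistics import mean, stdev

Z_THRESHOLD = 3.0  # hypothetical sensitivity for anomaly detection

def is_anomalous(history: list[float], sample: float) -> bool:
    """Flag a voice-quality sample that deviates sharply from the recent baseline."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(sample - mu) / sigma > Z_THRESHOLD

# Stubbed collectors: in a real deployment these would call assurance APIs.
def fetch_alarms(circuit_id: str) -> list[str]:
    return ["LOS on access segment"]

def trace_media_path(circuit_id: str) -> list[str]:
    return ["cpe-1", "agg-switch-4", "voice-gw-2"]

def on_new_sample(history: list[float], sample: float, circuit_id: str) -> None:
    """Automated workflow: detect, then gather diagnostics in seconds, not hours."""
    if is_anomalous(history, sample):
        context = {
            "circuit": circuit_id,
            "alarms": fetch_alarms(circuit_id),
            "path": trace_media_path(circuit_id),
        }
        print(f"ticket opened with pre-built diagnosis: {context}")

on_new_sample(history=[4.2, 4.1, 4.3, 4.2, 4.2, 4.1], sample=2.4, circuit_id="ckt-42")
```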

Keep pioneering voice quality solutions with automation

The anticipated outcomes from achieving automated customer and service troubleshooting are indeed impressive. From automating anomaly detection to reducing the Mean Time to Understand, the objective is clear – proactively identify and resolve issues before customers are impacted. This proactive approach aligns seamlessly with BT’s commitment to making customer experience its top priority.

Looking ahead, BT’s plan is to expand Ativa’s capabilities further into its voice network, with a keen eye on continuous enhancements. The intention is clear – to enable the workflow to identify customer impact for each anomaly, thereby facilitating prioritized resolutions based on business value. This strategic move not only ensures efficient issue resolution but also aligns with the broader goal of delivering value to customers.

The BT-Infovista partnership stands on the brink of revolutionizing voice quality through advanced automation. And not only for fixed voice users. Recently, BT and Infovista regrouped with AWS to work on a TM Forum Catalyst project that aims to demonstrate the importance of having 360º observability and intelligence over mobile services such as VoLTE/VoNR and 5G slicing as an enabler of business-driven network management.

Whether we consider fixed and/or mobile networks, our journey with BT continues with an unwavering commitment to innovation and collaboration, ensuring that BT’s customer satisfaction remains at the forefront.

Would you like to know more about this PoC and explore how to elevate voice quality through automation? Join us in person or virtually at Mobile World Congress Barcelona 2024. Join Us at MWC | Infovista

Uses for Gen AI in the Telco Network and OSS

By Charlotte Patrick.

 

A telco network remains the most financially beneficial place to implement intelligence – offering capex and opex savings, as well as supporting new revenue streams.

However, deployment of Gen AI looks likely to be limited to a small number of use cases in the short term because:

  1. Any deployment of AI/ML into the network presents unique challenges, with the potential to harm customer experience
  2. The complex nature of telco networks necessitates specialised models – which have less applicability than some popular Gen AI areas such as chatbots
  3. General enthusiasm for Gen AI is currently declining as its limitations become clear.

In this environment, it is important to discern what is viable and in what timeframe to make the best use of investments and manage risk.  The diagram below describes a range of opportunities being discussed currently.

 

Gen AI use cases in a telco

The diagram is arranged around seven ways in which Gen AI will be used in telcos:

  1. Content creation – this area currently generates the most headlines with models such as ChatGPT.  Includes creating original content (e.g. blog posts, product descriptions for marketing) and audio, image and video content creation.  Also covers a range of other related capabilities, such as translation and proofreading of text
  2. Human-machine – centred around improvements in natural language understanding and generation (NLU/NLG), providing better human-machine communications within chatbots, IVRs and digital assistants.  Also, improved accessibility and several Gen AI improvements to sentiment and emotion analysis
  3. Human-human – the improvements in digital assistants will be most important in the contact centre, helping agents reduce time and improve customer satisfaction when engaged in complex interactions.
  4. Knowledge management – a group of Gen AI use cases relating to telco catalogues and knowledge bases – for example, in contact centres or field services.  Gen AI capabilities will bring improvements from NLU/NLG (similar to chatbot functionality) and be involved in improving the knowledge base itself (e.g. undertaking gap analysis or providing the ability to summarise material).
  5. Process improvements – A catch-all category covering Gen AI’s use in the improvement of processes, including the creation of code, its use in testing, and the management of processes (e.g. API call creation and the improvement of process documentation).
  6. Data management – This category includes managing underlying data and activities such as governance.  Today’s primary use cases appear to be around the generation of synthetic data and the augmentation of existing data sets to improve quality. However, it is likely that more uses will emerge over time.
  7. Intelligence improvements – Lastly, the use of Gen AI to improve the intelligence within a telco.  It is at its strongest, currently, in the area of anomaly detection – but it can also enhance other models for prediction and personalisation.

 

Uses for Gen AI in the telco network

 

Observations from the diagram

Creation and updates of topology and architecture  When looking at some of the most popular use cases for Gen AI today (the generation of innovative images and configurations), a question arises about its potential use for generating optimum network topologies, simulating network environments or suggesting cost-saving configuration changes on the network.  Vendor products are not yet seen – but the discussion focuses on analysis of various geographical and demographic data together with image data of the location, to recommend optimal locations for new cell towers or base stations. Also discussed is the creation and updating of coverage maps, potentially from sparse or incomplete data, where Gen AI is augmented with additional external data, providing information for network planning and also, potentially, in real time for troubleshooting and capacity management.

Other discussions are around predictive models to create network topologies by suggesting the arrangement of nodes, links, and connections.  Also, models trained on network topology and configuration data to suggest the configuration of network elements or improvements in energy savings.  Questions remain here on whether Gen AI is the most appropriate model type for prediction – and it may become one model amongst others used.

Digital twin management  There is discussion of the ability of LLMs to train digital twins on the behaviour of their physical counterparts – creating simplified twins that accurately represent the counterparts while reducing running costs.  If LLMs could be utilised in this way, it may make the use of digital twins in the network more viable – as there are many potential use cases for twins in the network but questions about the business case for deploying many of them.

Digital assistants  The use of domain-specific digital assistants to support operational and field services teams.  These could provide summaries or answer questions on large bodies of vendor or standards documentation for the NOC/SOC, or support field services teams when on site.  Summarization is the most implemented form of Gen AI currently – making this a good area for investigation.

Creation and maintenance of documentation  A related area is the knowledge management of documentation and catalogues in the network and OSS.  There are opportunities to improve the quality of documentation provided to a variety of operational teams, as well as of SLA documentation and network diagrams provided directly to customers for their new services.

Code or script creation  LLMs promise to decrease coding time across the network/OSS.  Examples discussed recently include customer-facing tasks, such as a customer’s on-demand service request triggering an LLM to create the necessary scripts and commands for auto-provisioning. Also, network-facing tasks such as the development of network functions, where a relatively unskilled person could be offered recommendations, auto-completion of code and checks against best practice. LLMs can also translate older coding languages into more modern languages to cut support costs.
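As a concrete illustration of the customer-facing case, the sketch below asks a general-purpose LLM (here via the OpenAI Python SDK, though any code-capable model would do) to draft provisioning commands from a service request. The model choice, prompt, and request are hypothetical, and any generated script would need review and testing before touching a live network.

```python
from openai import OpenAI  # one possible LLM backend; any code-capable model would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_provisioning_script(service_request: str) -> str:
    """Ask an LLM to draft the provisioning commands for a service request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You generate network provisioning scripts. "
                        "Output only the script, no commentary."},
            {"role": "user", "content": service_request},
        ],
    )
    return response.choices[0].message.content

script = generate_provisioning_script(
    "Provision a 100 Mbit/s L2 VPN between PE routers pe1 and pe2 on VLAN 210"
)
print(script)  # reviewed by an engineer before entering the automation pipeline
```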

Declarative instruction  The ingestion of written requirements from customers or internal staff in simple language and translation into instructions which can be executed by a variety of automations on the network/OSS.

Multi-agent systems   A multi-agent system is composed of multiple interacting intelligent agents using generative AI and reinforcement learning (RL) techniques.  An LLM can already break large tasks into sub-tasks and could divide a task among agents.  Multi-agent RL allows these agents to coordinate, act simultaneously on the environment, and collaborate towards achieving a joint goal and/or individual targets.  Inter-agent communication and shared learning allow the system to adapt to unforeseen challenges and evolving scenarios through collaborative learning.

Validation and testing  The strength of Gen AI in anomaly detection and the creation of synthetic data provides opportunities in this area.  Real-world testing is expensive and time-consuming; risks develop when test data sets are not kept up to date, leaving the network or other equipment under test vulnerable. Gen AI could generate test data samples more quickly and efficiently and can be used for adversarial testing to test the network, and other models deployed in the network, against attack.

Tasks around network data management: As it becomes more important to manage data sets coming from the network/OSS, there are Gen AI use cases that become relevant in the data management space.  Examples include the creation of synthetic data to add samples for a minority data class, improving a model’s ability to predict rare events accurately, and the augmentation of datasets by creating additional samples with slight variations.  LLMs could also support the real-time training of ML models used in the network.
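The minority-class example can be made concrete with SMOTE from the imbalanced-learn library, which synthesises new minority samples by interpolating between existing ones. A minimal sketch; the toy “outage precursor” KPI data below is invented purely for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(0)
# Toy network KPI dataset: 990 "normal" windows, 10 rare "outage precursor" windows
X_normal = rng.normal(loc=0.0, scale=1.0, size=(990, 4))
X_rare = rng.normal(loc=3.0, scale=0.5, size=(10, 4))
X = np.vstack([X_normal, X_rare])
y = np.array([0] * 990 + [1] * 10)

# SMOTE interpolates between existing minority samples to create synthetic ones,
# balancing the classes so a downstream model sees enough rare-event examples
X_bal, y_bal = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(f"before: {np.bincount(y)}, after: {np.bincount(y_bal)}")
```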

Support for predictive models  A number of suggestions around the ability of Gen AI to model and predict outcomes were seen.  The current understanding is that Gen AI is best used for preventing overfitting, estimating uncertainty, and augmenting data – supporting other models which are better suited to prediction.

Anomaly detection looking for unknown-unknowns   Anomaly detection has been a popular use for ML in the network for some time, and is already an important part of security and troubleshooting processes across the network. A generative model doesn’t require a labelled example of every possible anomaly and can detect previously unknown deviations.  However, it has the potential to be less accurate because it can make biased assumptions, and an LLM’s “black box” nature makes it more difficult to understand why the model has flagged an anomaly.  It is unclear which use cases it will be best suited to, but it is possible that it will be used to support other anomaly detection models.
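The “no labelled anomalies required” property comes from density estimation: fit a generative model to normal behaviour only, then flag anything the model considers improbable. A minimal sketch using scikit-learn’s Gaussian mixture model as a lightweight stand-in for heavier generative models; the data and the 1st-percentile threshold are hypothetical choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Train only on "normal" KPI vectors -- no labelled anomalies required
X_train = rng.normal(size=(5000, 3))
model = GaussianMixture(n_components=4, random_state=1).fit(X_train)

# Threshold = 1st percentile of log-likelihood on normal data (hypothetical choice)
threshold = np.percentile(model.score_samples(X_train), 1)

X_new = np.vstack([rng.normal(size=(5, 3)),            # ordinary traffic
                   rng.normal(loc=6.0, size=(2, 3))])  # previously unseen deviation
flags = model.score_samples(X_new) < threshold
print(flags)  # the unseen deviations score as improbable and are flagged
```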

 

Charlotte Patrick

About the Author

Charlotte is an independent industry analyst covering the use of artificial intelligence, automation, and analytics by telecom companies.  Her areas of interest are the uptake and efficacy of these technologies and the resulting financial benefit.   She brings 13 years of experience as an analyst at Gartner, 5 years as an independent analyst, and 10 years of practical experience in AT&T, COLT and Telefonica O2 – with a mix of strategic, marketing and financial skill sets.

Connect with Charlotte on LinkedIn

 

The SMEs’ cry for affordable assured services: the revolutionary approach of the ‘Converged Access with ODA’ Catalyst

Contributed by Alejandro Medina, CTO, Future Connections. 

Small and medium-sized enterprises (SMEs) struggle to provide their services when their broadband is impacted. Most business continuity solutions today focus on service failure rather than SLA performance degradation. In addition, current SD-WAN and Dynamic Multi-path Optimization (DMPO) solutions are proprietary and expensive, hence neither affordable nor suitable for SMEs.

It’s interesting to note that the market in question comprises 145 million SMEs and is set to grow to US$800 billion by 2027. Based on Eurostat research, SMEs are responsible for over 64% of employment in Europe and contribute over 52% of the value added of the EU economy*. Worldwide, they represent over 90% of businesses and are responsible for more than 50% of employment**. If SMEs are so crucial to the value chain of our society, why, then, does their cry for innovative and affordable assured-service solutions seem to have gone unheard?

Something is changing though, if we consider the Catalyst ‘Converged Access with ODA’, a proof-of-concept recently presented at TM Forum Digital Transformation World ’23 in Copenhagen. A team of operators and service providers put their knowledge and expertise together to provide the perfect answer to the SMEs’ market need and, in doing so, to advance the industry by addressing and solving together a specific market problem, demonstrating feasibility and potential benefits. In this Catalyst, Future Connections joined forces with the operators Claro Brazil, GCI Alaska, NTT Japan, Verizon USA and VNPT Vietnam and with the service providers Brillio, Incognito, NaaS Compass and Red Hat to respond effectively to the SMEs’ cry and capture its market growth by creating a solution that delivers affordable, guaranteed service continuity.

Illustration of Catalyst use case

During phase I of the Catalyst, presented at last year’s Digital Transformation World, champions and participants focused on investigating the consolidation of fixed and mobile service networks. The solution achieved a 30% reduction in network access costs, thanks to the consolidation and simplification of network management platforms.

During phase II of the Catalyst, a solution was developed by abstracting the access technologies into a Converged Access Domain (NaaS), using a cloud-native architecture and leveraging predictive monitoring of Service Level Agreement (SLA) performance together with closed loops. The solution, implemented in the labs of Claro in Rio de Janeiro and São Paulo in Brazil, delivered the desired service continuity at times of network congestion.

The effectiveness of this innovative approach was illustrated using the example of a hotel chain with a failing fibre connection (FTTx) that impacted the hotel’s SLA. The failure was due to congestion caused by a sports event on TV that triggered heavy TV streaming demand, increasing the traffic on the fibre. The check-in of the hotel’s customers, though, was not impacted, thanks to the smooth switching of access technology. When congestion was experienced, traffic was automatically diverted to 5G over Fixed Wireless Access (FWA), as it offered the same SLAs. When congestion on the FTTx subsided, service delivery was switched back to the fixed network.

Future Connections made available its NIx Manager Assurance and Automation platform to manage the service assurance part of the solution. The platform implemented APIs to collect the relevant network indicators, ran the data analytics to monitor the health of the service and identify degradation, and triggered the switch to 5G FWA technology for activation and configuration of the service.
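Schematically, the closed loop behaves like the sketch below. The indicator, thresholds, and switch call are invented for illustration and are not the NIx Manager API; the essential ideas are acting on degradation rather than outage, and requiring consecutive breaches so the loop does not flap between access technologies.

```python
import random
import time

LATENCY_SLA_MS = 40   # hypothetical SLA bound on the FTTx access
CONSECUTIVE = 3       # samples required before switching, to avoid flapping

def poll_fttx_latency_ms() -> float:
    """Placeholder for an API collecting live network indicators; simulated here."""
    return random.gauss(35, 15)

def switch_access(target: str) -> None:
    """Placeholder for the activation/configuration call (e.g. to 5G FWA)."""
    print(f"diverting traffic to {target}")

def closed_loop(iterations: int = 30) -> None:
    bad = good = 0
    active = "FTTx"
    for _ in range(iterations):
        latency = poll_fttx_latency_ms()
        bad, good = (bad + 1, 0) if latency > LATENCY_SLA_MS else (0, good + 1)
        if active == "FTTx" and bad >= CONSECUTIVE:       # degradation, not outage
            switch_access("5G FWA"); active = "5G FWA"
        elif active == "5G FWA" and good >= CONSECUTIVE:  # congestion subsided
            switch_access("FTTx"); active = "FTTx"
        time.sleep(1)

closed_loop()
```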

The outcome demonstrated that service continuity can indeed be offered to SMEs in a cost-effective way even at times of service degradation, not just service outage, and in a way that guarantees the SLAs in the access domain for both mobile and fixed networks. The operators that were part of the Catalyst team welcomed the results, with one of them forecasting a 62.5% increase in SME demand if using this solution. Additional positive outcomes were recorded in the form of reduced access and solution costs and guaranteed SLAs across all service impairments, as well as energy savings from using cloud-native solutions (OSS servers and VNF/CNFs) and on-demand services. The TM Forum itself recognised the potential of this proof-of-concept, selecting the Catalyst as a finalist in three of the Catalyst Program Awards ’23 categories: business impact, business growth, and sustainability & impact on society.

With such demonstrated impactful results, it is hoped that the work of the Catalyst team goes on to commercialize the solution. Certainly operators, SMEs and their customers will thank them for that.

*  Data source: Eurostat, “EU small and medium-sized enterprises: an overview” (europa.eu)

** Data source: The World Bank https://www.worldbank.org/en/topic/smefinance

 

 

Navigating the Telecom Revolution: A Realistic Approach to Network Automation and AI

Bridging the Gap Between Telecom Realities and IT Aspirations. 

Contributed by Ned Taleb, CEO and Founder, Reailize & B-Yond.

In the dynamic landscape of the telecommunications industry, the buzz around “Network Automation and AI” is ubiquitous. Almost every vendor in the telecom market incorporates these terms when presenting their solutions or products. However, amidst the hype, the actual progress in the network side of telecom providers often falls short, especially when compared to the rapid advancements seen in the broader IT sector and the TechCo segment.

This disparity is not unexpected. A first-hand account from a European Tier 1 operator reveals the challenges faced when cloud engineers attempted to implement a 5G Standalone (SA) virtualized core. The stark realization hit them when their personal and family devices were provisioned to the new virtualized core. The stringent requirements of network availability and reliability, distinct from other cloud services, became glaringly apparent. Telecom networks operate under a different set of demanding standards, requiring attention to details that might be overlooked in other realms. Hence, hiring cloud and ML engineers is not going to quickly resolve skill gaps on the telecom operator side. Such new resources will require some time to learn and adjust to the telecom’s reality. An alternative is to partner with companies that have already gained the blended experience with Telco-applied Cloud and ML Engineering solutions.

In the intricate dance of nature, cross-pollination occurs as a bee gracefully moves from one flower to another, facilitating the exchange of vital elements. Remarkably, a similar phenomenon unfolds in the vast landscape of technology, both at an individual and company level. Those individuals and organizations that traverse diverse projects globally, offering cross-domain and cross-technology solutions, stand poised to propel the AI and Automation journey for Communication Service Providers (CSPs).

A compelling illustration emerges from our own endeavours, a testament to the power of diverse exposure and collaboration. Enter the realm of Packet Capture (PCAP) analysis—a technical and engineering-intensive domain demanding considerable expertise and domain knowledge for meticulous issue identification. However, innovation often thrives on breaking traditional moulds. Through a symbiotic partnership between our Data Scientists and Subject Matter Experts (SMEs), a groundbreaking approach emerged. PCAP files were transformed into a unique “language,” and a sophisticated Large Language Model (LLM) algorithm was deployed to automatically unearth words out of context, pinpointing with precision the faults within the system.
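The underlying idea – treating a packet trace as text and hunting for tokens that do not belong – can be illustrated very crudely. The sketch below uses Scapy to render packets as one-line “sentences” and flags rare tokens by simple frequency; B-Yond’s actual approach applies an LLM to a far richer representation, and the capture file name here is hypothetical.

```python
from collections import Counter
from scapy.all import rdpcap  # pip install scapy

def pcap_to_tokens(path: str) -> list[str]:
    """Render each packet as a one-line 'sentence' and split it into words."""
    tokens = []
    for pkt in rdpcap(path):
        tokens.extend(pkt.summary().split())
    return tokens

tokens = pcap_to_tokens("call_trace.pcap")  # hypothetical capture file
counts = Counter(tokens)

# Crude stand-in for the LLM step: words that almost never occur in the trace
# (error causes, unexpected message types) are the 'out of context' candidates.
rare = [tok for tok, n in counts.items() if n <= 2]
print(rare)
```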

Allow me to illustrate it with another tangible example from our experiences. Our team undertook a significant engagement with a Tier 1 operator in North America, embarking on the establishment of a cloud infrastructure across various cloud vendors for diverse projects, including the deployment of 5G Standalone (SA). As we worked diligently on the ground delivering on the agreed scope, the meticulous documentation by our Subject Matter Experts (SMEs) and the close collaboration between our cloud and Machine Learning (ML) engineers yielded remarkable results over time. The outcome? A robust platform that genuinely transformed the deployment process from a matter of months to mere hours, and this is no exaggeration. This achievement not only underscores the power of a tailored approach but also highlights the impact of sustained collaboration and attention to detail.

In reflecting on our extensive journey, a compelling sense of responsibility drives us to deepen our contribution to AI and automation in the Telco sector. A pivotal step in this direction is our recent venture, the “LLM Discovery Workshops.” These workshops foster collaboration as we empower clients to shape their unique use cases using Generative AI and LLM technology. Leveraging open-source resources like LangChain and open LLMs, we ensure a secure integration with information systems, underpinned by robust authentication and access controls. A notable illustration from this initiative is the creation of a “Ticket Recommender System.” Here, the LLM identifies parallels with past tickets by determining a similarity score, which is then used to suggest a resolution code and a resolution team for a newly created ticket.
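The pattern behind such a recommender can be sketched with TF-IDF vectors and cosine similarity as a lightweight stand-in for LLM embeddings; the tickets, resolution codes, and teams below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical historical tickets with their known resolutions
history = [
    ("VoLTE calls dropping after 30s in cluster East", "RES-017", "Core NOC"),
    ("Subscribers cannot attach after firmware upgrade", "RES-004", "RAN Ops"),
    ("High packet loss on S1-U interface at site 4411", "RES-022", "Transport"),
]
texts = [t[0] for t in history]

vectorizer = TfidfVectorizer().fit(texts)
matrix = vectorizer.transform(texts)

def recommend(new_ticket: str):
    """Return the resolution code and team of the most similar past ticket."""
    scores = cosine_similarity(vectorizer.transform([new_ticket]), matrix)[0]
    best = scores.argmax()
    return history[best][1], history[best][2], float(scores[best])

print(recommend("VoLTE call drops reported in the East cluster"))
# -> ('RES-017', 'Core NOC', <similarity score>)
```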

In essence, the path to effective network automation and AI integration lies in recognizing the unique challenges posed by the telecom sector. It involves a blend of experience, expertise, and a pragmatic approach that addresses immediate concerns while keeping an eye on the larger strategic goals.

 

Automation needs to be Open

Building the Future of Network Operations – Contributed by Andrew McGee, Senior Director, Telco Solutions Architecture, APAC, Red Hat & Naman Gupta, Business Development, Global Ecosystem, Red Hat.

In today’s fast-paced competitive landscape, businesses and organizations are constantly seeking ways to streamline their operations, enhance efficiency, and deliver better services to their customers.

This drive for optimization has led to the emergence of transformative concepts like “Zero Touch Automation”. Zero touch automation is not only about enhancing operational efficiency; it’s also closely aligned with the principles of open source. This innovative approach to automation represents a new way of thinking about how businesses manage their processes, networks, and services, by harnessing the power of open-source technologies and collaborative development. It’s a testament to the industry’s commitment to openness and innovation, emphasizing the importance of accessible and adaptable tools in reshaping the future of network operations.

Open source and automation are a dynamic duo reshaping the landscape of modern business operations. Open source technologies provide the foundation upon which automation thrives, offering flexibility, transparency, and a collaborative ecosystem of tools and solutions. Open source not only democratizes access to automation but also fosters innovation through community-driven development. It enables organizations to tailor automation solutions to their specific needs, avoiding vendor lock-in and ensuring long-term viability. Moreover, open-source automation solutions often come with vibrant communities of users and contributors, facilitating knowledge sharing and continuous improvement. As businesses increasingly embrace automation to boost efficiency and agility, open source stands as the bedrock of this transformative journey, driving cost-effective, adaptable, and sustainable solutions that propel them toward a brighter, more automated future.

The Need To Transform Operations

In an era marked by unprecedented technological advancements, businesses find themselves in a perpetual race to keep up with evolving customer demands and industry trends. Traditional operational models, which often involve manual interventions and time-consuming processes, are increasingly becoming obsolete. This necessitates a fundamental shift in the way organizations operate, leading to the need for operational transformation.

The key drivers for operational transformation include:

  • Agility: Businesses must be nimble and responsive to rapidly changing market conditions. Operational transformation enables organizations to adapt quickly and make data-driven decisions in real-time, allowing them to seize opportunities and mitigate risks more effectively.
  • Cost-Savings: Traditional operational models are often resource-intensive and costly. Automation reduces operational costs by eliminating manual tasks, minimizing errors, and optimizing resources.
  • Customer-Centricity: Meeting customer expectations is paramount. Operational transformation allows organizations to deliver superior customer experiences by providing on-demand services, resolving issues promptly, and ensuring network reliability.
  • Competitive Advantage: Staying ahead of the competition requires innovative approaches to operations. Zero Touch Automation not only helps organizations maintain their competitive edge but also positions them as industry leaders.
  • Scale and Complexity: With the advent of technologies like 5G and IoT, networks and services are becoming increasingly complex. Operational transformation is crucial for efficiently managing and scaling these intricate systems now and into the future.
  • Security and Compliance: In a landscape where data security and compliance are paramount, automation can enhance security protocols, reduce vulnerabilities, and ensure adherence to regulatory requirements.

Understanding Zero Touch Automation

Traditional automation workflows are generally initiated by a human operator in response to a change, support or project request. While the tasks of deployment or configuration have been automated, the act of evaluating, running and checking automation work has still sat with a person.

Zero Touch Automation uses advanced automation tools and design, aiming for a state where operational processes can run autonomously with minimal or no human intervention at all. It represents a shift in how businesses and service providers manage their networks, services, and infrastructure.

Key aspects of Zero Touch Automation include:

  1. Self-Configuration: Network elements and services can automatically configure themselves based on predefined policies and business rules (see the sketch after this list).
  2. Self-Optimization: Systems continuously monitor performance and adjust parameters to optimize efficiency and resource utilization.
  3. Self-Healing: Automated processes can detect faults or anomalies and take corrective actions without human intervention, reducing downtime and service disruptions.
  4. Dynamic Scaling: Resources and capacity can scale up or down automatically in response to changing demand, ensuring cost-efficiency and performance.
  5. Data-Driven Decision Making: Automation relies on real-time data analytics and machine learning to make informed decisions and adapt to changing conditions.
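The self-configuration aspect, for example, reduces to rendering and pushing device configuration from a policy record with no human composing it. A minimal sketch using a Jinja2 template; the policy fields, template, and device push are all hypothetical.

```python
from jinja2 import Template  # pip install jinja2

# Hypothetical policy record and config template; in a real ZTA stack these
# would come from an intent store and a vendor-specific template library.
policy = {"hostname": "edge-rtr-07", "mgmt_vlan": 99, "ntp_server": "10.0.0.1"}

TEMPLATE = Template(
    "hostname {{ hostname }}\n"
    "vlan {{ mgmt_vlan }}\n"
    " name management\n"
    "ntp server {{ ntp_server }}\n"
)

def self_configure(device_policy: dict) -> str:
    """Render and (in a real system) push the element's configuration."""
    config = TEMPLATE.render(**device_policy)
    # push_to_device(config)  # placeholder for the actual provisioning call
    return config

print(self_configure(policy))
```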

Business Value

Network automation offers a wide range of benefits for all businesses and organizations. These benefits extend to both traditional network infrastructures and emerging technologies like 5G and IoT. Here are some key advantages of network automation:

  1. Enhanced Efficiency and Productivity:
    1. Automation reduces the need for manual intervention in network configuration, monitoring, and maintenance tasks.
    2. IT and network teams can focus on strategic, high-value tasks rather than routine, time-consuming activities.
  2. Reduced Human Error:
    1. Automation can mitigate the cost associated with human errors, which can lead to network outages, security vulnerabilities, and service disruptions.
    2. Consistent and error-free configurations improve network reliability.
  3. Reduced Time to Market:
    1. Automation accelerates the deployment of new services, applications, and network changes.
    2. Businesses can respond more quickly to customer demands and market opportunities.
  4. Cost Savings:
    1. By optimizing resource allocation and reducing operational overhead, automation lowers operational costs.
    2. It minimizes the need for additional staff to manage growing networks.
  5. Consistency and Compliance:
    1. Automation enforces standardized configurations and security policies across the network.
    2. It helps organizations maintain compliance with industry regulations and security standards.
  6. Improved Network Security:
    1. Automated security measures can detect and respond to threats in real-time.
    2. Security policies can be consistently enforced, reducing vulnerabilities.
  7. Optimized Resource Utilization:
    1. Automation optimizes the use of network resources, ensuring that bandwidth, computing power, and storage are allocated efficiently.
    2. This leads to better resource utilization and cost efficiency.
  8. Streamlined Troubleshooting and Root Cause Analysis:
    1. Automated tools can quickly identify and diagnose network issues, reducing downtime and improving the mean time to repair (MTTR).
    2. Root cause analysis becomes more efficient with historical data and analytics.

Considerations for your Automation Transformation

Accelerating the journey to Zero Touch Automation (ZTA) requires a strategic approach and a focus on key factors that can drive progress efficiently. Here are some critical elements that will help speed up the journey to ZTA:

Clear Vision and Strategy:

Start with a well-defined vision for what ZTA means for your organization. Understand the specific goals and outcomes you want to achieve.

Develop a comprehensive strategy that outlines how ZTA aligns with your business objectives and how it will transform your operations.

Leadership and Culture:

Engage senior leadership in advocating for and supporting the ZTA initiative. Leadership buy-in is essential for securing the necessary resources and driving cultural change.

Foster a culture of innovation, collaboration, and continuous improvement. Ensure that all team members understand the importance of ZTA and are motivated to embrace it.

Skill Development:

Invest in training and skill development for your IT and network teams. Equip them with the knowledge and expertise required to design, implement, and manage automated systems.

Encourage cross-functional training to break down silos and promote collaboration among different teams.

Open Standards and Platforms:

Embrace open standards and open-source platforms. These provide flexibility, interoperability, and the ability to avoid lock-in.

Use standardized and open APIs to ensure that different components of your network and systems can communicate seamlessly.

Data-Driven Decision-Making:

Leverage data analytics and machine learning to make informed decisions. Data-driven insights can help optimize network performance, predict issues, and automate responses.

Implement robust monitoring and reporting systems to track the impact of automation on your operations.

Security Integration:

Prioritize security in your ZTA strategy. Implement automation with robust security measures to protect against threats and vulnerabilities.

Regularly assess the security of automated systems and update security protocols as needed.

Collaboration and Ecosystem Engagement:

Collaborate with industry peers, vendors, and partners to share best practices and leverage collective expertise.

Engage with industry organizations and forums that focus on automation standards and initiatives.

Change Management:

Develop a change management plan to guide the transition to ZTA. Communicate the benefits, goals, and expectations to all stakeholders.

Provide ongoing training and support to ensure a smooth transition and address any resistance to change.

Vendor Collaboration:

Collaborate closely with technology vendors and service providers. Work with them to align their solutions with your ZTA goals and requirements.

Regulatory Compliance:

Ensure that your automation initiatives align with industry regulations and compliance standards relevant to your business.

How Ansible Creates An Automation Mindset For Business Advantage

We at Red Hat have been partnering with CSPs around the world to capitalize on the power of open technologies.

Vodafone Idea, with the help of Red Hat and Red Hat’s Ansible Automation Platform, has improved its service management platform, which offers business units IT visibility and control regardless of location or operating system. Thanks to Ansible’s open source qualities and broad applicability, Vodafone Idea Limited consolidated and integrated its automation tools into a unified platform with shared dashboards, auditing, and reporting. This solution blueprint is now being replicated with operators across the globe.

Vodafone Idea Limited needed to optimize its IT infrastructure. Manual daily operations resulted in low operational efficiency and multiple errors. Disconnects between automation tools caused security and compliance risks, inefficient use of resources, and risked financial penalties caused by breaches in service-level agreements (SLAs).

Red Hat assisted in implementing a comprehensive automation strategy and solution design to take advantage of the latest in open automation technology and practices.
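To give a flavour of how such automation is driven programmatically, the sketch below runs a playbook through the open-source ansible-runner library, which underpins Ansible Automation Platform’s execution model. The directory layout, playbook name, and variables are hypothetical.

```python
import ansible_runner  # pip install ansible-runner

# Hypothetical project layout: ./automation contains inventory/ and project/site.yml
result = ansible_runner.run(
    private_data_dir="./automation",
    playbook="site.yml",
    extravars={"target_env": "staging"},  # illustrative variable
)

print(result.status)  # "successful" or "failed"
print(result.rc)      # process return code

# Structured per-task events are what make unified dashboards and auditing possible
for event in result.events:
    if event.get("event") == "runner_on_ok":
        print(event["event_data"]["task"])
```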

Start Today!

Starting an automation project such as Zero Touch Automation is easy because the first steps are always assessment and planning. Think about the business values listed above and build goals based on your top priorities. Consider the factors for transformation and identify some initial use cases and targets in your environment.

Finally, assess the automation technology landscape for the appropriate, open tools and reach out to experts for assistance and guidance where needed.

 

The journey to cloud-native and network monetization

Contributed by Vivek K. Chadha, Senior Vice President, Global Head Telco Cloud, Rakuten Symphony.

Many businesses today – including 96% of Fortune 500s – have their data on the public cloud. As a natural evolution, in recent years, cloud computing has brought the discussion around containers and cloud-native technology to the fore.

According to Gartner, by 2027, more than 90% of global organizations will be running containerized applications in production – a leap from less than 40% in 2021. Those considering the shift to cloud-native architectures stand to benefit from a structured approach that intertwines technology adoption with operating model recalibration and ties it all to business outcomes, as this blog proposes.

 

Understanding the lexicon: cloud-native vs. cloud

Companies contemplating a cloud-native transition are often confused by the different terminologies in the cloud ecosystem.

So, let’s start with the basics.

At a high level, cloud refers to the delivery of computing services over the internet. Cloud-ready applications, on the other hand, can run on a cloud but were originally designed for on-premises deployment, and can now offer the best of both worlds through “hybrid” models.

Cloud-native is a term specifically used for applications that are designed and created for cloud environments from the start. It encompasses designing and managing applications, network functions, and services within an adaptable and open framework. Some of the key facets of a cloud-native environment are:

  • The business applications/workloads use containers and are orchestrated using Kubernetes.
  • Applications are grounded in microservices, allowing seamless integration with CI/CD pipelines for rapid deployments, lifecycle management, and intent-based/declarative auto-scaling.
  • Applications are dynamically orchestrated, manageable, observable, and responsive to demand fluctuations through AI-driven just-in-time adjustments (sketched below).
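The last facet above – dynamic orchestration that responds to demand – can be sketched with the official Kubernetes Python client: a demand signal drives a declared replica count, and Kubernetes converges the running state to it. The deployment and namespace names are hypothetical.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Declare the desired replica count; Kubernetes converges the actual state."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Hypothetical: a demand signal (queue depth, RPS forecast) drives the target
scale("upf-edge", "core-network", replicas=6)
```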

Technical and operational benefits of cloud-native are most easily visible across the following attributes:

  • High scalability: Seamlessly manages storage and network-intensive workloads by automatically scaling resources up or down
  • Operational efficiency: Reduced need for manual oversight due to automation or hyperautomation
  • High resilience: Designed for rapid failover and auto-healing, enhancing network reliability and minimizing downtime
  • Improved time to market: Enables faster deployment and safer experimentation, driving innovation and accelerating the time-to-market for new offerings
  • Unified management: Supports operations across core and edge clouds through vendor-agnostic, unified management platforms
  • Enhanced security: Modern-day security practices, such as micro-segmentation, enable granular control of network traffic, strengthening the security posture

Looking for detailed insights on the nuances of cloud vs. cloud-native? Click here.

 

Monetization as the endgame

First, let’s start with “monetization” itself. Monetization is not necessarily unidimensional growth.

Successful businesses consistently focus on:

  1. Reducing costs
  2. Accelerating existing revenue streams
  3. Increasing market share/introducing new revenue streams
Fig 1: Monetization can be multi-dimensional

Cloud-native offers a path to all three levers mentioned above.

 

This path includes cost efficiencies due to hyperautomation and a faster, simpler paradigm of building, delivering, and managing services, as well as accelerated revenue through improved time to market and, finally, new capabilities and offerings.

Businesses undergoing a cloud-native transformation can enable value-added services on top of existing offerings with minimal effort and cost. For example, Multi-access Edge Computing (MEC) can enhance real-time data processing capabilities near the user (the edge), where the data and service demands are generated. Consequently, telecom providers can offer specialized, low-latency services to existing customers, realizing incremental gains.

Organizations can also tap into entirely new revenue streams by offering novel services or use cases facilitated by a cloud-native network architecture. Take Industrial IoT and Industry 4.0. Here, the network can support real-time data analytics and machine-to-machine communications, enabling manufacturers to offer predictive maintenance as a premium service, for instance. While the costs of setting up these new services can be on the higher side, the potential gains in terms of revenue and market differentiation are considerable.

 

Fig 2: Opportunities to maximize financial returns

 

Finally, expanding the Total Addressable Market (TAM) or Serviceable Obtainable Market (SOM) is possible by delivering services to areas previously constrained by cost, technology, or operational challenges. Innovative solutions like “Network in a Box” can extend connectivity to remote or underserved regions. Though these solutions may require moderate to high initial investment, the potential for high returns exists by tapping into markets that were previously unreachable.

By carefully evaluating these monetization avenues, businesses can maximize the return on investment in cloud-native network infrastructure and position themselves as leaders in the evolving digital landscape.

Putting the “why” before the “how”

Moving to a cloud-native architecture isn’t just a technological upgrade; it’s a business decision that affects the entire organization. It requires a willingness to invest in modernizing existing applications, adopting containerized applications, acquiring, or upskilling talent, and overhauling processes. You need to accept some change to embrace a better paradigm, and reap the benefits of cloud-native.

The decision to go cloud-native needs to be grounded in quantifiable business outcomes. Whether it’s achieving better resiliency and uptime, moving to a more ‘open’ software ecosystem (limiting vendor lock-in), cost optimization, increased market responsiveness, or improved agility, the ‘why’ behind moving to cloud-native must be business-centric. Importantly, you must assess whether the cloud-native path offers substantial ROI and aligns with your business strategy. Once you have assessed and decided that a cloud-native future is in the best interest of your business, what is needed next is a structured approach to embark on this adoption or transformation.

 

The journey to cloud-native: 5 essential steps

An organization’s cloud-native journey is nothing short of transformational. Here are some essential steps to guide this transition.

  1. Build basic cloud-native competencies & understanding

An understanding of key principles such as container technologies, orchestration, microservices, and CI/CD is important, allowing your organization to engage with, stay in control of, and relate to cloud-native paradigms, toolsets, technologies, and terminology. There are now battle-tested and commercially proven platforms available that significantly lower the barrier to entry for businesses to get going with cloud-native. This brings us to the next point.

  2. Select the right platform

This is a pivotal decision. Factors such as scalability to accommodate future growth and security features that align with industry regulations are non-negotiable. The platform should offer a rich ecosystem of services, APIs, and third-party tools that can seamlessly integrate into existing workflows without having to undergo a major overhaul. The platform should also support multi-cloud or hybrid cloud orchestration for zero-touch deployment and end-to-end lifecycle management. This enables a simplified and automated approach to managing applications and resources across different cloud environments.

  3. Operational alignment and hyperautomation

This involves adopting new technologies and reshaping workflows and operations with a focus on hyperautomation. Hyperautomation can help businesses leverage advanced technologies like AI and machine learning to automate complex business processes. Cloud-native platforms allow hyperautomation tools to operate with optimal efficiency. They are inherently scalable and facilitate microservices, containerization, and orchestration – all key enablers for hyperautomation.

  4. Creating a vendor-ecosystem-aligned roadmap

A successful transition to cloud-native functions (CNFs) requires not just internal readiness but also a meticulous evaluation of the vendor ecosystem. Aligning the business roadmap to the future development plans of vendors allows businesses to capitalize on new features as they are rolled out and to stay updated.

  5. Benefits realization (and tracking)

A structured benefits realization model is essential for tracking the ROI of the cloud-native transformation journey. This model should outline KPIs that are aligned with the broader business objectives and need to be continuously measured and monitored.

Fig: The roadmap to successful cloud-native adoption

 

Conclusion

Today, cloud-native is becoming an important consideration for many businesses across industries. However, choosing between cloud-native, cloud-ready, or just cloud is not only a technical decision but also a strategic business move.

Going cloud-native brings benefits ranging from improved resilience to new operational and revenue models. This in turn allows businesses to take full advantage of cloud architectures and services.

Learn more about how Rakuten Symphony’s cloud-native Symcloud™ suite can help businesses maximize monetary benefits from their cloud-native journey.

Vivek Chadha is scheduled to appear at the “CxO Keynote Panel: The journey to Cloud Native and network monetization” on Thursday, 19 October, at FutureNet Asia 2023.

To join the session and/or schedule a meeting with him and the team in Singapore, please register here.

Enhance online experiences with intelligent inventory and assurance

Contributed by Jose Carlos Mendez, Director of Network & OSS Product Marketing, Amdocs.

In the fast-paced world of communications & media, delivering faster speeds and superior online experiences is critical for success. To support their digital lifestyles, customers demand ubiquitous, reliable connectivity regardless of the access technology, be it Fiber to the Premises (FTTP), Fixed Wireless Access (FWA) or Satellite connectivity. For CSPs, this requires substantial network investment. Yet to unlock new possibilities and ultimately ensure exceptional online experiences, this isn’t the whole story: engineering and operations must also be optimized with modern, centralized inventory and assurance systems that leverage AI/ML.

 

Streamlining network design and build

Inventory systems are often fragmented and built on a legacy of technologies, hindering scalability and efficiency. As new technologies are introduced, from access to core, these outdated inventory systems fail to provide the end-to-end, real-time visibility and control required for optimizing network resources and delivering seamless customer experiences.

Inventory modernization is key to enhancing broadband connectivity. By consolidating various technologies such as FTTP, FWA and Satellite into a unified inventory system, it becomes possible to gain a holistic view of the infrastructure. This centralized access to network inventory data enables CSPs to efficiently design and build their network with an end-to-end view of the available infrastructure, leading to reduced costs and accelerated rollouts. Such a streamlined approach also increases the efficiency and agility of network expansion, empowering CSPs to respond rapidly to customer demands and quickly deploy networks that meet increasing bandwidth requirements.

 

Assuring superior customer experience

The centrality of connectivity in daily life, along with customer expectations for always-on, reliable networks, demands a paradigm shift in ensuring quality of service and the underlying network. Specifically, assurance needs to evolve from being primarily reactive, manual and network-centric to proactive, automated and customer-centric. The solution lies in ‘integrated assurance,’ with fault, performance and service quality management capabilities working seamlessly together to provide engineering and operations teams with the capability to predict, identify and quickly resolve service-impacting network outages.

 

Fig. 1 – Integrated Assurance

 

Data changes everything

With data playing a critical role in delivering reliable, ubiquitous connectivity, modern inventory and assurance systems jointly provide valuable data required to implement AI/ML techniques that support predictive and reactive analytics. Specifically, a modern inventory system provides a comprehensive view across network domains and layers, from the physical network layer (including both outside and inside plant infrastructure) to the logical, virtual and cloud network layers, as well as the service layer. This enables the inventory data to provide valuable insights into the dependencies between network and service layers for all operational processes.

In parallel, beyond being just a collection of alarms and performance measures, modern assurance systems provide observability capabilities that enable holistic management of quality of service. Importantly, real-time data must be stored and combined with historical data to enable the AI/ML analysis that uncovers deep insights and improves operational efficiency.
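A minimal sketch of the predictive side: fit a trend to historical KPI data (here, invented link-utilisation figures) and warn operations before a service-relevant threshold is forecast to be crossed. Real deployments would use richer models and the combined inventory and assurance data described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical: 30 days of historical utilisation (%) of a link
days = np.arange(30).reshape(-1, 1)
utilisation = 40 + 1.2 * days.ravel() + np.random.default_rng(2).normal(0, 2, 30)

model = LinearRegression().fit(days, utilisation)

# Predict two weeks ahead; warn operations before the capacity limit is hit
future = np.arange(30, 44).reshape(-1, 1)
forecast = model.predict(future)
breach = future[forecast > 80]  # hypothetical 80% capacity threshold
if breach.size:
    print(f"capacity threshold forecast to be crossed on day {int(breach[0, 0])}")
```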

 

Fig. 2 – Inventory and assurance

 

Together, the data provided by these two systems means that operational processes have access to complete, end-to-end, real-time and accurate data, which is essential to increasing operational efficiency in multiple domains. For example:

  • Network design and build – accurate designs based on accurate usage information
  • Service fulfillment – zero-touch service fulfillment and reduced fallout
  • Service assurance – prediction, detection and automatic resolution of service issues.

 

Empowerment of a real-time, end-to-end view

To realize the full opportunity of the new era of online experiences, CSPs need an accurate, real-time end-to-end view of the network and services. The key to achieving this lies in breaking down existing silos and embracing modern, intelligent inventory and assurance systems that streamline operations through the power of data and AI/ML insights.

Choosing the right partner is the first step. With a wealth of capabilities, knowledge and experience, Amdocs provides truly integrated, end-to-end solutions encompassing market-leading inventory and assurance products, empowering your business to offer exceptional online experiences.

Learn more about Amdocs Helix Service Assurance here and Amdocs Network Inventory here.


Public Sector and Telecoms: Redefining Digital Communication and Collaboration with Zextras Carbonio

Contributed by Zextras.

In today’s dynamic digital landscape, businesses worldwide face a multitude of challenges and pressure to adapt to emerging industry trends. The global shift to remote work, catalyzed by the COVID-19 pandemic, has thrust cloud service providers onto center stage, with an escalating demand for secure, efficient, and user-friendly solutions that satisfy diverse business needs.

Cloud service providers face a daunting array of challenges:

  • ensuring data security,
  • adhering to compliance standards,
  • maintaining service availability,
  • managing costs

are all vital aspects that require deft handling. Any downtime can translate into significant losses for businesses heavily relying on cloud-based platforms for daily operations. Coupled with the ongoing surge in cyber threats, this situation underscores the necessity for robust security measures to safeguard sensitive data.

Simultaneously, the industry trend is decisively veering towards unified platforms capable of integrating various workplace tools and applications. Enterprises are increasingly seeking solutions that encompass collaboration, communication, file management, video meetings and more, all while having full control over their data.

Meet Zextras Carbonio, the all-inclusive digital workplace designed to tackle these challenges head-on and meet the evolving trends of the industry as the private alternative to Microsoft 365 and Google Workspace. Zextras Carbonio emerges as a beacon of hope, particularly for Public Sector organizations and Telecom companies, grappling with stringent regulations and unique requirements, respectively.

Unrivaled Security for Email and Collaboration

Carbonio’s standout feature – its built-in private email server – provides an unrivaled level of security and privacy. This unique characteristic empowers businesses, especially those in the public sector, to regain control over their data, a critical factor in today’s data-driven landscape. Furthermore, the platform revolutionizes compliance strategy, making it an ideal choice for public sector organizations that often face strict regulations such as the General Data Protection Regulation (GDPR).

Beyond secure email, Carbonio serves as a vibrant collaboration platform, integrating various workplace tools. Its features are thoughtfully designed to boost productivity, streamline workflows, and encourage teamwork, regardless of employees’ physical location.

In terms of manageability, Carbonio’s intuitive admin panel simplifies the entire platform management process. IT administrators can effortlessly navigate through user management, server settings adjustments, and service status checks, reducing the time and effort spent on administrative tasks. The simple interface ensures a smooth and stress-free experience for administrators, allowing them to focus on more strategic tasks.

Lastly, Carbonio’s support for distributed architecture ensures high service availability and scalability. This is crucial for maintaining consistent performance levels, even during peak usage times or sudden demand surges, a common occurrence in the telecom sector.

Carbonio stands out with several offerings addressing specific needs of businesses, particularly those operating in the Public Sector, Regulated Industries, and Telecommunications.

 

Public Sector: Collaboration Designed on Data Governance

Carbonio’s private email server offers an unprecedented level of data security and privacy. This native feature, unique to Carbonio, ensures businesses do not relinquish data control to third parties. Public sector organizations often must adhere to stringent data protection regulations such as

  • General Data Protection Regulation (GDPR – European Union),
  • California Consumer Privacy Act (CCPA – California, USA),
  • Personal Information Protection and Electronic Documents Act (PIPEDA – Canada).

With Zextras Carbonio, organizations can manage their compliance strategies effectively, reducing the risk of data breaches and non-compliance penalties.

To implement their data governance policies effectively, organizations need a comprehensive understanding of the different pillars of data sovereignty; this makes it far easier to manage and protect their data.

Six pillars of digital sovereignty help organizations understand the various dimensions of their digital presence and operations.

Adhering to these pillars ensures that organizations can manage and protect their data while complying with data privacy regulations.

Telecom Operators: Consolidating Retention

Carbonio also adapts to the telecom world’s rapidly changing trends, allowing operators to offer a modern platform to their customers. Carbonio comes as a fully customizable, white-label workplace for secure communication and seamless collaboration.
With an offering that stands apart from the competition, telco operators can dramatically reduce churn and total cost of ownership (TCO).

Known for their unique operational needs, telecom companies often find themselves wrestling with proprietary communication software. Carbonio’s open-source nature circumvents vendor lock-in and allows a high degree of customization, giving telecom companies the flexibility to tailor the platform to their subscribers’ unique needs.

As businesses navigate the intricacies of the digital era, platforms like Carbonio will smooth the journey, making it more secure and efficient. Carbonio is more than just a digital workplace – it sets new standards in communication and collaboration for the public sector and telecom.


Read more:

The sovereign digital workplace for Public Sector and Regulated Enterprises: https://zextras.com/carbonio-for-public-sector

Private collaboration for Telco subscribers: https://zextras.com/carbonio-for-telco

Six Ways 5G Core Automation Will Transform Testing Forever

Contributed by Clark Whitty, Principal Product Line Manager, Spirent Communications.

With every next-gen core that has been introduced, operators have typically scrambled to their corners with a testing vendor of choice to independently develop the tests needed to guide rollouts. This approach has been terribly inefficient, with the process playing out operator by operator, all around the world. But inefficient won’t cut it in the 5G era. Not as enterprise revenues wait to be unlocked, cloud competition closes in, and consumer experiences hang in the balance.

For the past year, Spirent has worked closely with tier-one operators on the industry’s earliest 5G core testing initiatives. As the 5G core remains poised for global deployment, our focus has been on uncovering the most common use cases, understanding the early testing challenges operators are facing, and fine-tuning the broad set of testing methodologies needed to get a worldwide slate of deployments off the ground. All of this work has been in the name of finally automating what has been a time-consuming, labor-intensive, and costly process.

Breaking down a successful approach to 5G core testing

Following decades of core testing work, we’ve seen where strategies have fallen short. On the other hand, we’ve also zeroed in on what makes them run smoothly. Now, in the face of complex multi-vendor networks, pressures building around time-to-market demands, and engineering teams spread increasingly thin across the latest urgent needs, we’ve drawn on our expertise to reveal the critical aspects of a successful next-gen core network automation strategy:

  1. Compliance, capacity, and performance testing take precedence. Initial 5G core testing use cases will revolve around these key needs. In fact, we’ve identified more than 200 testing use cases in total that will need to be supported and made available in lockstep with future standards releases and market developments.

Compliance, capacity, and performance testing are critical through each stage of core validation

  2. If it’s not off-the-shelf, operators will have their hands full. Professional services teams need to be at the ready to address the unique use case requirements beyond common needs. But by and large, it should be easy for any operator to get started right out of the box with a set of common 5G core testing methodologies. This represents a departure from previous testing dynamics that were built around high-budget, one-to-one custom methodology development on an operator-by-operator basis.
  3. Automation is a must. While it’s essential to perform industry-standard 3GPP testing of 5G network functions to validate compliance, capacity, and performance, 5G and multi-vendor environments will require automation in order to keep up with the deluge of releases, fixes, and patches pushed through by an expanding ecosystem of vendors. The scale, rate, and financial burden of managing these changes will force organizations to think beyond the legacy of manual testing. In addition, this automation must extend to fit operators’ domain-specific use cases. In many instances, custom call flows, specific pass/fail assertions, and additional telemetry must be included for test coverage to be complete. Operators’ custom scripts and measurements must be incorporated to extend and adapt test automation of 5GC network functions (see the sketch after this list).
  4. Portability from the lab to production. Designing an automated 5G core testing solution from the ground up to easily integrate into an operator’s lab and then port just as easily to staging and production networks delivers significant time and cost efficiencies. It also represents a major boost to network uptime and customer experiences, as testing validation can be used in change management workflows for pre- and post-upgrade validation.
  5. Community-driven feedback loops are essential. Quickly enhancing an automated solution to address the broadest set of customer needs requires open communication between operators and testing vendors. This means creating communication pathways where operators can recommend use cases and enhancements.
  6. Real-world traffic emulation is a requirement. Just addressing common testing use cases won’t be enough to ensure core networks are up to the task of real-world deployments. Being able to connect testing solutions to testing hardware and software that can pump realistic traffic and accurately replicate a range of network environments can make the difference between sinking and swimming when it comes time for rollouts.
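To make item 3 concrete, here is a hypothetical Python sketch of the kind of automated check such a framework might run: a registration-storm test extended with an operator-specific pass/fail assertion. The `nf_client` object and its `register_ue` method are invented for illustration; they are not Spirent APIs.

```python
# Hypothetical automated 5GC test: drive UE registrations against an
# AMF-under-test, then assert a standard success-rate KPI plus a custom,
# operator-defined latency KPI. `nf_client` is an assumed emulator stub.
import time

def run_registration_test(nf_client, ue_count=100,
                          min_success_rate=0.99, max_latency_ms=50.0):
    """Return measured KPIs; raise AssertionError on a pass/fail violation."""
    results = []
    for ue_id in range(ue_count):
        start = time.monotonic()
        ok = nf_client.register_ue(ue_id)               # emulated UE attach
        results.append((ok, (time.monotonic() - start) * 1000.0))

    success_rate = sum(ok for ok, _ in results) / len(results)
    worst_ms = max(ms for _, ms in results)

    # Standard compliance assertion plus an operator-specific one.
    assert success_rate >= min_success_rate, f"success rate {success_rate:.2%}"
    assert worst_ms <= max_latency_ms, f"worst-case latency {worst_ms:.1f} ms"
    return {"success_rate": success_rate, "worst_latency_ms": worst_ms}
```

In practice, checks like this run unattended against every vendor drop, which is what makes keeping pace with the release deluge feasible.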


These next-gen core network automation requirements serve as the foundation for our latest entry into the 5G core testing market. Want to dive deeper into the discussion around what comprises a cutting-edge core network testing solution? Watch our webinar, Jumpstart Your 5G Core Validation.

Precision Timing When Every Second Counts

Contributed by Chris Moezzi, Vice President and General Manager of Foundational NIC, IP, and Customer Solutions, Intel Corporation.

Have you ever seen a crew performing a pit stop in a motor race? How about when one of the pit crew members fails to keep pace with the others, causing a delay in the pit stop? In a sport where fractions of a second can mean winning or losing a race, the synchronization and accuracy of the pit crew members (not just the driver) play a critical role in the outcome of a race.

Staying at the Forefront of 5G vRAN Transformation

Time is a valuable element of 5G virtualized radio access network (vRAN) performance. Just as synchronization and accuracy are key ingredients in the performance of a motor-race pit crew, time synchronization in 5G vRAN plays an equally critical role in network performance and in end users’ everyday 5G experiences. Through the RAN, end users browse the web, use apps, and make phone calls on their cell phones. As 5G continues to evolve, new use cases continue to emerge, calling for low-latency and precision timing capabilities as part of a cost-effective solution. Intel enables communication service providers (CoSPs) to stay at the forefront of 5G vRAN transformation and deliver new or improved capabilities and services.

The new Intel® Ethernet 800 Series Network Adapters E810-CQDA2T (formerly codenamed Logan Beach) and E810-XXVDA4T (formerly codenamed Westport Channel) are optimized for deployments from the cloud to the 5G edge, offering advancements that simplify timing synchronization and deliver greater timing accuracy. Hardware-enhanced 1588 Precision Time Protocol (PTP) and Synchronous Ethernet (SyncE) support on these adapters improves timing synchronization with other parts of the 5G vRAN infrastructure, to the direct benefit of end-user applications.

Intel® Ethernet 800 Series Network Adapters E810-CQDA2T and E810-XXVDA4T
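As a rough illustration of what 1588 PTP computes, the sketch below applies the protocol’s standard delay request-response arithmetic; the timestamp values are invented for the example. Hardware timestamping – the “HW-enhanced” part – is what makes these four timestamps accurate enough for vRAN.

```python
# Minimal sketch of the IEEE 1588 PTP delay request-response computation.
# t1: master sends Sync; t2: slave receives Sync;
# t3: slave sends Delay_Req; t4: master receives Delay_Req.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (offset, mean_path_delay) in the same unit as the inputs."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0           # slave clock error vs. master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0  # one-way path delay estimate
    return offset, mean_path_delay

# Illustrative timestamps in nanoseconds:
offset_ns, delay_ns = ptp_offset_and_delay(t1=1_000, t2=1_650, t3=2_000, t4=2_350)
print(offset_ns, delay_ns)  # 150.0 ns offset, 500.0 ns mean path delay
```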

Timing Synchronization in Emergency Location Services

Take, for example, Enhanced 911 (in the United States) or eCall (built on the pan-European 112 number) services. Using trilateration, a technique based on differences in signal timing between different cell towers, emergency responders can pinpoint an emergency caller’s location to within a given radius.

In the past, 3G applications could identify an emergency caller’s location within approximately a 100-meter radius. With the improved timing hardware implemented in 4G RAN, an emergency caller’s location radius could be reduced to approximately 50 meters. Now, thanks to HW-Enhanced 1588 PTP and SyncE, the E810-CQDA2T and E810-XXVDA4T adapters can maximize timing accuracy and minimize timing variance in cell site synchronization to tighten an emergency caller’s location radius even further, to approximately 30 meters over 5G vRAN [1].

HW-Enhanced 1588 PTP and SyncE in the E810-CQDA2T and E810-XXVDA4T Intel® Ethernet Network Adapters can tighten a caller’s location radius to approximately 30 meters over 5G vRAN.
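To see why timing accuracy maps directly to location radius, consider this toy Python sketch (not Intel’s implementation) that recovers a position from time-difference-of-arrival measurements at three towers, the principle behind OTDOA-style positioning [1]. Since light travels about 0.3 meters per nanosecond, every nanosecond of cell-site timing error widens the achievable radius.

```python
# Toy TDOA position estimate from three towers (illustrative, noise-free).
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                                              # speed of light, m/s
towers = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])  # tower x, y in meters
true_pos = np.array([1200.0, 2300.0])

# Simulated measurements: arrival-time difference of towers 1..2 vs. tower 0.
dists = np.linalg.norm(towers - true_pos, axis=1)
tdoa_s = (dists[1:] - dists[0]) / C

def residuals(p):
    d = np.linalg.norm(towers - p, axis=1)
    return (d[1:] - d[0]) - C * tdoa_s

# x0 is a coarse initial guess; a real system would seed from the serving cell.
est = least_squares(residuals, x0=np.array([2500.0, 2500.0])).x
print(est)  # ~[1200. 2300.]; tower timing errors shift this estimate
```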

Timing synchronization for 5G vRAN solutions requires a substantial degree of accuracy. Should the infrastructure for timing synchronization be insufficient, end-users can experience poor services and frequent disruptions, resulting in dropped calls, poor video quality, and in the context of our example use case, inaccurate location services. When every second counts in an emergency situation, improving and accelerating the delivery of location services is crucial to increasing the success rate of emergency response. As a synchronized and highly accurate pit crew can help win a motor race, so too can accurate timing synchronization in 5G vRAN help bring a successful response to an emergency.

At this year’s Mobile World Congress (MWC) in Barcelona, come see a demonstration of this use case at the Intel booth located in Hall 3, Stand #3E31. We’ll present how vRAN timing synchronization increases the accuracy of emergency location services and discuss the key features of the E810-CQDA2T and E810-XXVA4T Intel® Ethernet Network Adapters. We’ll even have samples available at the kiosk for you to take a closer look at these adapters.

Meet Intel at FutureNet Asia, 18/19 October 2023, Singapore

Trusted Solutions for 5G vRAN Infrastructure

Intel’s unmatched portfolio demonstrates our commitment to innovate and deliver the most trusted solutions for 5G vRAN infrastructure. Set to launch at MWC Barcelona 2023, 4th Gen Intel® Xeon® Scalable Processors with Intel® vRAN Boost offer CoSPs the ability to boost networks by enabling programmable infrastructure and eliminating the need for custom vRAN accelerator cards. With forward error correction (FEC) acceleration directly integrated into these processors, CoSPs can maintain reliable service through detection and correction of transmission errors with reduced power consumption and total cost of ownership.

Combine these features with those found in E810-XXVDA4T or E810-CQDA2T Intel® Ethernet Network Adapters, and CoSPs can deploy a 5G vRAN solution with hardware-enhanced timing synchronization capabilities that meet stringent vRAN requirements while eliminating the need for expensive, dedicated timing appliances. In dense radio configurations, CoSPs can add the Intel® vRAN Accelerator ACC100 Adapter for increased capacity through its concurrent 5G and 4G FEC acceleration, integrated Intel® FlexRAN software reference architectures, and support of cloud native, VM, and bare metal.

Watch this video to see how Intel’s portfolio of solutions for vRAN come together to maximize vRAN transformation.

Intel’s solutions for vRAN enable flexible and scalable platforms that deliver high timing accuracy and FEC acceleration, avoid vendor lock-in by removing the need for expensive proprietary systems, and reduce total cost of ownership. Intel continues to provide and evolve solutions for vRAN in pursuit of delivering innovations that address the many challenges faced by CoSPs, including timing synchronization precision in 5G vRAN ecosystems. It’s why more than 90% of today’s commercial vRAN networks around the world run on Intel architecture [2].

As the driver is a key factor in the success of a racing team, the other members, like the pit crew, racing strategists, and engineers, are significant contributors to the overall outcome of a race. When the entire team comes together, in sync and focused, the team’s performance is exponentially greater. With 4th Gen Intel® Xeon® Scalable Processors in the driver’s seat and the pit crew of Intel® Ethernet 800 Series Network Adapters ready to ensure the car connects through every corner of the racetrack, Intel’s portfolio of solutions for vRAN comes together to maximize the chances of earning a place on the podium.

Discover how Intel® Solutions for vRAN enable CoSPs to build vRAN solutions that meet unique customer needs with accelerated time to market deployment by visiting intel.com/ethernet.

Notices & Disclaimers:

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Intel technologies may require enabled hardware, software or service activation.

Your costs and results may vary. Code names are used by Intel to identify products, technologies, or services that are in development and not publicly available. These are not “commercial” names and not intended to function as trademarks.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

References:

1. Qualcomm Technologies, Inc. report published August 7, 2014, “Introduction to OTDOA on LTE Networks.”

2. Dell’Oro report published January 2023, “Mobile RAN Five-Year Forecast Report 2023–2027, Vol. 22, No. 1,” with Intel analysts.

Header Photo by Gustavo Campos on Unsplash

What is Happening in the Network Edge

Contributed by Dell Technologies.

Where is the Network Edge in Mobile Networks

The notion of ‘Edge’ can take on different meanings depending on the context, so it’s important to first define what we mean by Network Edge. This term can be broadly classified into two categories: Enterprise Edge and Network Edge. The former refers to when the infrastructure is hosted by the company using the service, while the latter refers to when the infrastructure is hosted by the Mobile Network Operator (MNO) providing the service.

This article focuses on the Network Edge, which can be located anywhere from the Radio Access Network (RAN) to next to the Core Network (CN). Network Edge sites collocated with the RAN are often referred to as Far Edge.

What is in the Network Edge

In a 5G Standalone (5G SA) Network, a Network Edge site typically contains a cloud platform that hosts a User Plane Function (UPF) to enable local breakout (LBO). It may include a suite of consumer and enterprise applications, for example, those that require lower latency or more privacy. It can also benefit the transport network when large content such as Video-on-Demand is brought closer to the end users.
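As a toy illustration of what local breakout means in practice, the sketch below anchors a session at the nearest edge UPF. This is not a 3GPP-defined algorithm, and the site names and coordinates are invented; real UPF selection is driven by the SMF using configured service areas rather than raw distance.

```python
# Toy edge-UPF selection for local breakout (LBO); names/coordinates invented.
from math import dist

UPF_SITES = {
    "edge-site-a": (1.30, 103.80),  # (lat, lon), treated as planar for brevity
    "edge-site-b": (1.35, 103.95),
    "central-dc":  (1.29, 103.85),
}

def select_upf(ue_location, prefer_edge=True):
    """Return the UPF site closest to the UE, preferring edge sites for LBO."""
    candidates = {name: loc for name, loc in UPF_SITES.items()
                  if not prefer_edge or name.startswith("edge")}
    return min(candidates, key=lambda name: dist(ue_location, candidates[name]))

print(select_upf((1.33, 103.90)))  # -> edge-site-b: traffic breaks out locally
```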

Modern cloud platforms are envisioned to be open and disaggregated to enable MNOs to rapidly onboard new applications from different Independent Software Vendors (ISVs), thus accelerating technology adoption. These modern cloud platforms are typically composed of Commercial Off-the-Shelf (COTS) hardware, multi-tenant Container-as-a-Service (CaaS) platforms, and multi-cloud Management and Orchestration solutions.

Join Dell Technologies at FutureNet Asia, 18/19 October 2023, in Singapore

Similarly, modern applications are designed to be cloud-native to maximize service agility. By having microservices architectures and supporting containerized deployments, MNOs can rapidly adapt their services to meet changing market demands.

What contributes to Network Latency

The appeal of Network Edge or Multi-access Edge Computing (MEC) is commonly associated with lower latency or more privacy. While moving applications from beyond the CN to near the RAN does eliminate up to tens of milliseconds of delay, it is also important to understand that there are many other contributors to network latency which can be optimized. In fact, latency is added at every stage from the User Equipment (UE) to the application and back.

RAN is typically the biggest contributor to network latency and jitter, the latter being a measure of fluctuations in delay. Accordingly, 3GPP has introduced many enhancements in 5G New Radio (5G NR) to reduce latency and jitter in the air interface. There are three primary categories where latency can be reduced:

  • Transmission time: reduce symbol duration with higher subcarrier spacing or with mini-slots (quantified in the sketch after this list)
  • Waiting time: improve scheduling (optimize handshaking), simultaneous transmit/receive, and uplink/downlink switching with TDD
  • Processing time: reduce UE and gNB processing and queuing with enhanced coding and modulation
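The transmission-time lever is easy to quantify from the 3GPP numerology: a slot lasts 1 ms divided by 2^μ and carries 14 OFDM symbols (normal cyclic prefix), so each doubling of the subcarrier spacing halves the symbol duration. A quick back-of-the-envelope check:

```python
# 5G NR numerology: slot = 1 ms / 2**mu, 14 OFDM symbols per slot (normal CP).
for mu, scs_khz in enumerate([15, 30, 60, 120]):
    slot_ms = 1.0 / (2 ** mu)
    symbol_us = slot_ms * 1000.0 / 14
    print(f"SCS {scs_khz:>3} kHz: slot {slot_ms:.3f} ms, symbol ~{symbol_us:.1f} us")
```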

Transport latency is relatively simple to understand, as it is mainly due to light propagation in optical fiber. The industry rule of thumb is 1 millisecond of round-trip latency for every 100 kilometers. The number of hops along the path also impacts latency, as every piece of transport equipment adds a small delay.

Typically, CN adds less than 1 millisecond to the latency. The challenge for the CN is more about keeping the latency low for mobile UEs, by seamlessly changing anchors to the nearest Edge UPF through a new procedure called ‘make before break’. Also, the UPF architecture and Gi/SGi services (e.g., Deep Packet Inspection, Network Address Translation, and Content Optimization) may add a few additional milliseconds to the overall latency, depending on whether these functions are integrated or independent.
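Putting these contributors together, a simple budget model shows why moving the application anchor closer matters. The figures below are illustrative, not measurements; only the fiber rule of thumb (roughly 1 ms of round-trip latency per 100 km) comes from the discussion above.

```python
# Rough end-to-end RTT budget: RAN + transport + core (illustrative figures).
def transport_rtt_ms(fiber_km: float) -> float:
    return fiber_km / 100.0          # ~1 ms RTT per 100 km of fiber

def e2e_rtt_ms(ran_ms: float, fiber_km: float, cn_ms: float) -> float:
    return ran_ms + transport_rtt_ms(fiber_km) + cn_ms

# Same assumed RAN and CN figures; only the distance to the anchor changes.
print(e2e_rtt_ms(ran_ms=4.0, fiber_km=800, cn_ms=1.0))  # central DC: 13.0 ms
print(e2e_rtt_ms(ran_ms=4.0, fiber_km=80,  cn_ms=1.0))  # edge site:   5.8 ms
```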

Architectural and Business approaches for the Network Edge

The physical locations that host RAN and Network Edge functionalities are widely recognized to be some of the MNOs’ most valuable assets. Few other entities today have the real estate and associated infrastructure (e.g., power, fiber) to bring cloud capabilities this close to the end clients. Consequently, monetization of the Network Edge is an important component of most MNOs’ strategy for maximizing their investment in the mobile network and, specifically, in 5G. In almost all cases, the Network Edge monetization strategy includes making Network Edge available for Enterprise customers to use as an “Edge Cloud.” However, doing so involves making architectural and business model choices across several dimensions:

  • Connectivity or Cloud: should the MNO offer a cloud service, or just the connectivity to a cloud service provided by a third party (and potentially hosted at a third party’s site)?
  • aaS model: in principle, the full range of as-a-Service models is available for the MNO to offer at the network edge. This includes co-location services, Bare-Metal-as-a-Service, Infrastructure-as-a-Service (IaaS), Containers-as-a-Service (CaaS), and Platform- and Software-as-a-Service (PaaS and SaaS). Going up this value chain (up being from co-lo to SaaS) allows the MNO to capture more of the value provided to the Enterprise. However, it also requires it to take on significantly more responsibility and puts it in direct competition with well-established players in this space, e.g., the cloud hyperscale companies. The right mix of offerings – and it is invariably a mix – thus involves a complex set of technical and business case tradeoffs. The end result will be different for every MNO, and how each arrives there will also be unique.
  • Management framework: our industry’s initial approach to exposing the Network Edge to enterprises involved a management framework that is tightly coupled to how the MNO manages its network functions (e.g., the ETSI Multi-access Edge Computing (MEC) family of standards). However, this approach comes with several drawbacks from an Enterprise point of view. As a result, a loosely coupled approach, where the Enterprise manages its Edge Cloud applications using typical cloud management solutions, appears to be gaining significant traction, with solutions such as Amazon’s Wavelength as an example. This approach, of course, has its own drawbacks, and managing the interplay between the two is an important consideration in the Network Edge (and one that is intertwined with the selection of aaS model).
  • Network-as-a-Service: a unique aspect of the Network Edge is the MNO’s ability to expose network information to applications, as well as the ability to provide those applications (highly curated) means of controlling the network. Whether and how this makes sense is again both a business case issue – for the MNO and the Enterprise – and a technical/architectural one.

Certainly, the likely end state is a complex mixture of services and go-to-market models focused on the Enterprise (B2B) segment. The exposure of operational automation, together with the 5G features designed for this segment, makes this a potentially huge opportunity for MNOs. Navigating the complexities of this space requires a deep understanding of both what services Enterprises are looking for and how they are looking to consume them. It also requires an architectural approach that can handle the variable mix of what is needed in a way that is highly scalable.

As the long-time leader in Enterprise IT services, Dell is uniquely positioned to address this space – stay tuned for more details in an upcoming blog!

Building the Network Edge

There are several factors to consider when moving workloads from central sites to edge locations. Limited space and power are at the top of the list. Locations far from the main cities and generally more exposed to the elements require a new class of denser, easier-to-service, and even ruggedized form factors. Thanks to the popularity of Open RAN and Enterprise Edge, there are already solutions in the market today that can also be used for the Network Edge. Read more in the Edge blog series: Computing on the Edge | Dell Technologies Info Hub

Higher deployment and operating costs are another major factor. The sheer number of edge locations, combined with their poorer accessibility, makes them more expensive to build and maintain. The economics of the Network Edge thus necessitates automation and pre-integration. Dell’s answer is a newly engineered cloud-native solution with automated deployment and lifecycle management at its core. More on this novel approach here: Dell Telecom MultiCloud Foundation | Dell USA.

Last is the lower cost of running applications centrally. Central sites have the advantage of pooling compute and sharing facilities such as power, connectivity, and cooling. It is therefore important to reduce overhead at the edge wherever possible, for example by opting for containerized over VM-based cloud platforms. Moreover, having an open and disaggregated horizontal cloud platform not only allows for multi-tenancy at edge locations, which significantly reduces overhead, but also enables application portability across the network to maximize efficiency.

The ideal situation is where Open/Cloud RAN and the Network Edge share sites, thus splitting several of the deployment and operations costs. Due to latency requirements, the Distributed Unit (DU) must be placed within 20 kilometers of the Radio Unit (RU). Latency requirements for the mid-haul interface between the DU and Central Unit (CU) are less stringent, and the CU can be placed roughly 80-100 kilometers from the DU. In addition, the Near-Real-Time RAN Intelligent Controller (Near-RT RIC) and the related xApps must be placed within a 10 ms round trip. This makes it possible to collocate Network Edge sites with the CU sites and the Near-RT RIC.
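A quick sanity check of these constraints, as an illustrative sketch: the 10 ms RIC budget is converted to distance with the earlier rule of thumb of roughly 1 ms of round-trip latency per 100 km of fiber, so it allows on the order of 1,000 km, which a CU-collocated site meets with plenty of margin.

```python
# Feasibility check of RU-DU-CU-RIC placement against the stated limits.
def rtt_ms(fiber_km: float) -> float:
    return fiber_km / 100.0               # ~1 ms RTT per 100 km of fiber

def placement_ok(ru_du_km: float, du_cu_km: float, du_ric_km: float) -> bool:
    return (ru_du_km <= 20                # fronthaul: DU within 20 km of RU
            and du_cu_km <= 100           # mid-haul: CU within ~80-100 km
            and rtt_ms(du_ric_km) <= 10)  # Near-RT RIC within 10 ms RTT

# Collocating the Network Edge with the CU site easily meets every budget:
print(placement_ok(ru_du_km=15, du_cu_km=90, du_ric_km=90))  # True
```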

Future

Over the past few years, several MNOs have already moved away from having 2-3 national DCs for their entire CN to deploying 5-10 regional DCs where some network functions, such as the UPF, are distributed. One example of this is AT&T’s dozen “5G Edge Zones” introduced in major metropolitan areas: AT&T Launching a Dozen 5G “Edge Zones” Across the U.S. (att.com).

This approach already suffices for the majority of “low latency” use cases, and for smaller countries even the traditional 2-3 national DCs can offer sufficiently low transport latency. However, for critical use cases with more stringent requirements, where consistently very low latency is a must, moving the applications to Far Edge sites becomes a necessity, in tandem with 5G SA enhancements such as network slicing and an optimized air interface.

The challenge with consumer use cases such as cloud gaming is supporting the required service level (i.e., low latency) countrywide. And since enabling the network to support this requires a substantial initial investment, we are seeing the classic chicken-and-egg problem: independent software vendors opt not to develop these more demanding applications, while MNOs keep waiting for these “killer use cases” to justify the initial investment in the Network Edge. As a result, we expect geographically limited enterprise use cases to gain market traction first and serve as catalysts for initially limited Network Edge deployments.

For use cases where assured speeds and low latency are critical, end-to-end Network Slicing is essential. To adopt a new, more service-oriented approach, MNOs will need the Network Edge and low-latency enhancements together with Network Slicing in their toolbox. For more on this approach and Network Slicing, please check out our previous blog: To slice or not to slice | Dell Technologies Info Hub.


About the author: Tomi Varonen 

Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. Based in Finland, he works on Cloud, Core Network, and OSS&BSS customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions, with wide expertise in end-to-end mobile networks, and enjoys creating solutions for new technology areas. He is passionate about outdoor activities with family and friends, including skiing, golf, and bicycling.


About the author: Arthur Gerona

Arthur is a Principal Global Enterprise Architect at Dell Technologies, working on the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in Telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. When not working, Arthur likes to keep active and travel with his family.


About the author: Alex Reznik 

Alex Reznik is a Global Principal Architect in Dell Technologies’ Telco Solutions Business organization. In this role, he is focused on helping Dell’s Telco and Enterprise partners navigate the complexities of Edge Cloud strategy and on turning the potential of 5G Edge transformation into the reality of business outcomes. Alex is a recognized industry expert in the area of edge computing and a frequent speaker on the subject. He is a co-author of the book “Multi-Access Edge Computing in Action.” From March 2017 through February 2021, Alex served as Chair of ETSI’s Multi-Access Edge Computing (MEC) ISG – the leading international standards group focused on enabling edge computing in access networks.

Prior to joining Dell, Alex was a Distinguished Technologist in HPE’s North American Telco organization. In this role, he was involved in various aspects of helping Tier 1 CSPs deploy state-of-the-art flexible infrastructure capable of delivering on the full promise of 5G. Prior to HPE, Alex was a Senior Principal Engineer/Senior Director at InterDigital, leading the company’s research and development activities in the area of wireless internet evolution. Since joining InterDigital in 1999, he was involved in a wide range of projects, including leadership of 3G modem ASIC architecture, design of advanced wireless security systems, coordination of standards strategy in the cognitive networks space, development of advanced IP mobility and heterogeneous access technologies, and development of new content management techniques for the mobile edge.

Alex earned his B.S.E.E. summa cum laude from The Cooper Union, an S.M. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, and a Ph.D. in Electrical Engineering from Princeton University. He held a visiting faculty appointment at WINLAB, Rutgers University, where he collaborated on research in cognitive radio, wireless security, and the future mobile Internet. He served as Vice-Chair of the Services Working Group at the Small Cells Forum. Alex is an inventor on over 160 granted U.S. patents and has received numerous awards for innovation at InterDigital.