
Bring it on: leading telcos are upbeat about facing the future

Some of the speakers at next week’s FutureNet World event discuss their strategic and commercial priorities regarding the future network with Contributing Editor Annie Turner.

Mallik Rao, CTIO, Telefónica Germany

We all know that network operators are under pressure to speed up their digital transformations, but what are the main drivers? According to Mallik Rao, CTIO, Telefónica Germany, “Telcos need to manage increasingly complex network and IT architectures to handle the diversity of mobile communications standards and applications. This creates an urgent need for transformation and using new ways for deployment and maintenance.

“Open networking, automation and disaggregation enable telcos to better manage their network and IT services, meet customer expectations and become future-ready. At Telefónica Germany we see ourselves as one of the pioneers in this transformation phase, for instance regarding Open RAN, disaggregation and cloud deployments.”

Get a good start

Terje Jensen, SVP, Head of Global Network Architecture, Director 5G Readiness Strategic Program, Telenor

Terje Jensen, SVP, Head of Global Network Architecture, Director 5G Readiness Strategic Program at Telenor, has distinct ideas about how telcos can speed up their transformations. He says, “Clear ambitions and directions must be set for activities to deliver a coherent result. This [comes from] defining scopes and objectives at the start as well as following up on outcomes, while execution should be flexible. During the initial phase, all personnel need to be onboarded about the purpose [of the transformation] and the planned achievements.

“When activities have clear scopes and expected outcomes, they can run in parallel, which naturally speeds up the overall execution. A condition [for success] is that interfaces are set so each activity relates to an environment as anticipated. Hands-on management is implied to swiftly clear blockers.”

Data difficulties and getting smarter

Data management has presented a huge challenge to operators for years: they have immense volumes of data, but the question that has proved hard to answer is how to extract value from it. Jukka-Pekka Salmenkaita, VP AI and Special Projects at Elisa, says, “Many of the AIOps use cases require data across multiple operational systems. Traditionally the data has been very siloed, making use case development and deployment a very slow process. A solid data foundation across source systems is [a] key foundation for AIOps development.”

Jukka-Pekka Salmenkaita, VP AI and Special Projects, Elisa

Many telcos fail at data management because they take “a very centralised approach,” Salmenkaita says, which “tends to become too rigid, slow to evolve, and fails to engage all required stakeholders. This is [a] key reason why ‘data lake’ initiatives fail to deliver the expected value.” He recommends a more modular approach that distributes responsibilities closer to domain experts, adding that the so-called ‘data mesh’ is a promising way to operationalise data management in a more modular, distributed manner.

A data layer that can handle operational use cases, not only reporting and business intelligence (BI), is a key enabler, Salmenkaita says, and he stresses the importance of fully closed-loop automation. He states, “Our approach has been as follows. First, pilot closed-loop automation in specific domains. Second, add value by moving from heuristic and deterministic use case implementations to machine learning-based implementations. Third, set up fundamental DataOps and MLOps practices shared across the domains. Fourth, expand [the number of] domains covered and cross-domain use cases.”
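To make that progression concrete, below is a minimal, illustrative sketch of a domain-level closed loop in Python: the detector starts as a deterministic threshold rule and can later be swapped for a statistical or machine-learning model without changing the loop itself. The metric, thresholds and function names are invented for illustration and do not describe Elisa’s implementation.

    from dataclasses import dataclass
    from statistics import mean, stdev
    from typing import Callable, Iterable, Sequence

    @dataclass
    class Decision:
        remediate: bool
        reason: str

    def heuristic_detector(history: Sequence[float], latest: float) -> Decision:
        # Deterministic rule: flag when the (hypothetical) loss metric breaches a threshold.
        return Decision(latest > 0.05, "packet-loss threshold breached")

    def learned_detector(history: Sequence[float], latest: float) -> Decision:
        # Statistical/ML-style rule: flag outliers against recent history.
        if len(history) < 10:
            return Decision(False, "not enough history yet")
        z = (latest - mean(history)) / (stdev(history) or 1.0)
        return Decision(abs(z) > 3.0, f"z-score {z:.1f}")

    def closed_loop(samples: Iterable[float],
                    act: Callable[[str], None],
                    detector: Callable[[Sequence[float], float], Decision]) -> None:
        history: list[float] = []
        for latest in samples:                     # observe
            decision = detector(history, latest)   # decide
            if decision.remediate:
                act(decision.reason)               # act, closing the loop
            history.append(latest)
            del history[:-100]                     # keep a sliding window

    # Example run with the simple heuristic; swap in learned_detector later.
    closed_loop([0.01, 0.02, 0.09], print, heuristic_detector)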

Jensen notes that currently in the industry, “Smartness gained from AI and machine learning is used to various degree in processes, and gradually we will need to improve that level of smartness as we gain additional insights. End-to-end perspectives are captured by connecting different processes, while process re-design is also done to simplify the overall flow and remove inefficiencies. New technologies often allow higher degree of automation and use expert and learning algorithms. These are part of the modernisation that gradually expands the automation of more end-to-end processes. This is also key for digital interactions with customers and partners.”

Filling the skills gap, stretching the budget

Data management aside, other “blockers” referred to by Jensen can also include what look like intractable problems: a lack of the right skills and investment constraints. Jensen is nothing daunted. “Most of the activities involve partners in some ways. Therefore, leveraging each other’s capabilities and capacities is important: [it] is a proven way to gain synergies [and] lower costs as well as improving the speed. This is an option in many cases as partners and customers want to co-create for innovative and sustainable solutions. The approach further strengthens the trust in the solution as it is built to bring value to the customer and society at large.”

Juan Francisco Redondo, Senior Manager Network Strategy & Architecture, Three UK

On the subject of the common good, sustainability is embedded in almost every telco’s strategy. Juan Francisco Redondo, Senior Manager Network Strategy & Architecture at Three UK, stresses that for operators, sustainability is “not a question of cutting costs, but of actively managing the incremental traffic demand in a sustainable manner from [a] financial point of view. Therefore, the ethical imperative and the responsible cost management of the networks that support our services marry nicely.”

Sustainability by design means “that the resources utilised to deliver products and services are part of the value chain in a way [that] their value is maximised and their cost and environmental impact is minimised,” he adds. “Bringing the sustainability dimension into the decision-making process across different business areas not only addresses the sustainability objectives, but allows the operators to operate ‘at scale’, which is definitely needed in an industry with stagnant revenues and margins under pressure.”

Redondo also highlights the fact that the investment community takes into account the maturity in the environmental, social and governance (ESG) journey as a key metric when making investment decisions, “so the focus in this area for telecommunications companies is a win-win from multiple fronts, including the financial one”.

Jensen is also upbeat about matters financial, saying, “In the case of any investment crunch, several financing models [can] address this, although they have to be aligned with…ownership and accounting rules. While partner and customer collaboration may complement skills, [you can] also systematically work on internal upskilling within key areas.”

Dynamic partnerships power the future

Even so, he thinks that dynamic partnerships will become more crucial, which “makes it even more important to reduce the threshold for engaging and benefiting from collaboration.”

Rao concurs: “Telcos are the central enabler of digitisation and the driver of a connected society. A strong ecosystem is our opportunity to create more value for our business and customers alike. We drive business growth with better network infrastructure and new services, tailored to customers’ needs, without having to develop and operate all those services and features on our own.”

Telefónica Germany’s partnership ecosystem includes IT service providers, hyperscalers, consultancies and start-ups for its “own fundamental IT transformation, further network roll-out and for providing new connectivity solutions to our customers,” Rao says.

In particular, hyperscalers are becoming really important partners for telcos. For Rao, collaborating with cloud providers is all about bringing technical innovations to market faster, flexibly managing computing capacity and delivering new 5G applications to customers with more stable services and better connectivity.

He adds, cloud “provides an optimal basis for industrial 5G networks and the necessary APIs to the world’s leading development tools. Network functions can be implemented fast and seamlessly, for example to automate production and logistics processes. Overall, the cloud is a gamechanger for our industry. I am convinced that partnerships between telco operators and cloud hyperscalers will get closer in the future.”

Edging closer to customers

Joanna Newman, Global Edge Computing and 5G Principal Manager, Vodafone

Joanna Newman, Global Edge Computing and 5G Principal Manager at Vodafone, agrees. She points out that Vodafone was the first to launch Distributed Multi-access Edge Computing with AWS Wavelength in Europe – in the UK in June 2021 and in Germany in December 2021. Vodafone also offers Dedicated MEC, in partnership with Microsoft. “This technology is now changing the landscape of what is possible for businesses in these countries,” Newman says.

She says that Vodafone sees use cases across a range of areas, from healthcare to gaming, sports technology, autonomous transport, biometric security, remote VR and factory automation.

Rao expands on this saying, “Our connectivity solutions can give corporations and SMEs a huge boost in growth, innovation and efficiency. We provide the technological framework for companies to implement their processes, whether via an IoT marketplace, such as our Telefónica KITE platform, or a private 5G network.”

Redondo concludes that to meet wider ambitions, the collaboration that already exists in the industry for the creation of end-to-end solutions, with significant interdependencies between chipsets, hardware and software elements, will have to be strengthened to incorporate new sustainability angles and frameworks, even beyond energy efficiency.

He says, “All players in the value and supply chains of the mobile industry have a role to play in the design, delivery and operations of sustainable products and services.

“Initially that happens at individual company level, while working against business objectives and considering sustainability aspirations. A good example is the successful efforts by the equipment manufacturers when offering new generations of efficient hardware products that are utilised in mobile networks. This is very positive, but it will not be enough.”

He remains confident that telcos will step up to the challenges, and “Meanwhile, the mobile industry continues delivering fantastic capabilities that enable other industries in their own sustainability journeys.”

You can hear more from the executives in this article at FutureNet World 2022 on the 10-11 May. Find out more and register here.

FutureNet World Interview with Dr. Piyush Sarwal, Oracle

We recently caught up with Dr. Piyush Sarwal, VP, Product Management, Oracle to get his thoughts on Oracle’s strategy and the topic of Future Networks: Injecting intelligent automation into network and service operations.

Can you tell us about Oracle’s approach to network and service operations automation?

The traditional operations environment is still largely manual, siloed, and offline, and in many cases relies on a workforce that may soon retire. The key business processes are performed by a fractured IT estate and owned by different departments. This model is not fit for purpose for the new generation of networks and services that are expected to be inherently more complex, more diverse, and much more demanding in terms of customer expectations and performance requirements. Status quo is not an option.

We are tackling these challenges head on with our Unified Operations suite, where we are working on the principle of ‘automate everything that can be automated’. Using our solution, CSPs can achieve significantly higher levels of operations automation by (a) increasing the level of automation within assurance, inventory & topology, and orchestration, and (b) building the right level of integrations between these three key solution areas. With these approaches, CSPs can achieve both top-down intent-driven automation for digital services as well as end-to-end service lifecycle automation.

How are you making your automation solution ‘intelligent’?

We are employing a multi-pronged approach to intelligent automation across our Unified Operations portfolio and there are different facets to it depending on the solution area. For example, in our Unified Assurance solution area, we use the RCA3 model to significantly improve the efficacy of operations. It combines machine learning, supervised event correlation and unified network topology to eliminate noise and quickly triangulate the root cause of the most relevant network incidents and their service impact. This automated correlation and compression of events into actionable ones enables operations to prioritize customer experience and service-impacting incidents.
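As a rough illustration of that correlation-and-compression idea – not Oracle’s RCA3 implementation – the Python sketch below folds alarms on devices whose upstream parent is also alarming into a single root-cause incident. The topology and device names are invented.

    from collections import defaultdict

    # Hypothetical topology: child device -> upstream parent it depends on.
    TOPOLOGY = {"cell-1": "router-A", "cell-2": "router-A", "cell-3": "router-B"}

    def compress(alarms: list[str]) -> dict[str, list[str]]:
        """Map each probable root cause to the child alarms it explains."""
        alarmed = set(alarms)
        incidents: dict[str, list[str]] = defaultdict(list)
        for device in alarms:
            parent = TOPOLOGY.get(device)
            if parent in alarmed:
                incidents[parent].append(device)   # correlated symptom, suppressed
            else:
                incidents.setdefault(device, [])   # its own actionable incident
        return dict(incidents)

    # Four raw alarms are compressed into two actionable incidents.
    print(compress(["router-A", "cell-1", "cell-2", "cell-3"]))
    # {'router-A': ['cell-1', 'cell-2'], 'cell-3': []}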

Another example of intelligent automation is the use of intent in our orchestration solution, where the desired end state of the service and network is captured. An internal intent engine computes the steps to go from the current to the target state. It is akin to using a GPS, which lets the user specify the destination while the system computes the best possible route to get there. The user can also specify service and network constraints – for example, what the service performance should be, and even which data centers the resources should reside in – much like the travel conditions we would typically place on the GPS, such as the quickest route or the shortest route.
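The ‘GPS’ analogy can be sketched in a few lines of Python: the caller declares a target state and a planner derives the ordered steps from the current state. The state keys and values below are invented and this is not Oracle’s intent engine, just the general shape of the idea.

    def plan(current: dict, target: dict) -> list[str]:
        """Diff two declarative states into an ordered list of actions."""
        steps = []
        for key, wanted in target.items():
            have = current.get(key)
            if have is None:
                steps.append(f"create {key}={wanted}")
            elif have != wanted:
                steps.append(f"update {key}: {have} -> {wanted}")
        for key in current.keys() - target.keys():
            steps.append(f"delete {key}")          # no longer declared, remove
        return steps

    current = {"upf_instances": 2, "latency_budget_ms": 20}
    target = {"upf_instances": 4, "latency_budget_ms": 10, "edge_site": "ams-1"}
    print(plan(current, target))
    # ['update upf_instances: 2 -> 4', 'update latency_budget_ms: 20 -> 10',
    #  'create edge_site=ams-1']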

How are you helping customers to alleviate the perennial issue of managing legacy tools while modernizing the operations for the 5G era?

CSPs have built up many operations technologies over the years which are doing a perfectly good job of keeping the lights on for existing networks and services. But they have limitations when it comes to supporting 5G, as the technology was developed for older generations of networks and services.

Our unique proposition is that with the Unified Operations solution CSPs can bridge the legacy to the new, i.e., pre-5G operations to 5G-era operations. They can do this by introducing a unification layer on top of existing assets to automate the operational processes. When the time is right, they can standardize on our solution and retire the legacy tools. With this approach, CSPs can achieve two key goals: (a) significantly reduce business risk, sustain operational continuity, and reduce TCO and OpEx, and (b) operationalize new telco cloud and SDN-based networks and digital services.


Oracle Communications sponsored and participated in the panel below at FutureNet World on the 10/11th May 2022 in London.

Opening Keynote Panel – Future Networks: Injecting intelligent automation into network and service operations

  • Leveraging ML & AI to manage networks and services: defining the roadmap
  • Hype vs Reality: What has been achieved to date and what does success look like?
  • How can CSPs leverage existing assets to increase automation? – rip and replace is not an option
  • Opensource vs partner solutions: Assessing the options
  • How are hyperscalers supporting telcos for automation: What are the terms?

Panelists

  • Amy Cameron, Principal Analyst, STL Partners (Moderator)
  • Juan Manuel Caro, Director Operations Transformation, Telefónica
  • Jean-Benoît Besset, VP IT & Network Strategy, Orange
  • Azhar Mirza, Group Vice President, Applications GBU, Oracle Communications

Driving DevOps closer to production and nowcasting comes to the fore

Philippe Ensarguet, Group CTO at Orange Business Services, tells Contributing Editor Annie Turner how GitOps is the evolution of DevOps and why forecasting isn’t the right focus.

Philippe Ensarguet was an interesting choice as CTO of Orange Business Services when he was appointed back in September 2019. Previously he was CTO for the digital and data division of Orange Business Services, not a telco veteran with a traditional ‘network’ background. Rather, he says he is driven by software, automation, cloud and infrastructure.

His big theme right now is the importance of nowcasting as opposed to forecasting. Forecasting was all very well when the operational environment was more static, he says, but it is not sufficiently useful in today’s dynamic ecosystem, “considering we have an explosion of areas to manage. We are…trying to understand what’s happening now”.

Putting the heart rate up

He has described the in-part technology-driven acceleration as “A short sharp shock that exploded the heart rate of the production and delivery rhythm” and said the “Acceleration is all about convergence”. He explains that ‘digital’, IT, network and telco – which were seen as somewhat separate entities – are converging as they all become based on the four pillars of software, APIs, automation and disaggregation.

Ensarguet adds that businesses are pushing hard for Orange Business Services (and its counterparts in other operators) to change for reasons including: businesses constantly evolve; work is an activity, not a place; the internet is the universal standard; applications are cloud-first; and exposure to cyberattacks has increased. He says, “For a company like us it means tomorrow we will have a different operational model to run network services, digital services, or IT services. For us it’s a question of productivity and efficiency [for] our customers.”

At the moment, Ensarguet argues, the factor limiting the ability to do the best possible job is the number of working hours humans have at their disposal, but simply throwing more people into the mix cannot achieve the necessary speed and scale, and it increases complexity.

In short, Ensarguet explains, “We want more digital. We want OT [operational tech] to match the IT world but there will be an explosion of devices and we will have to manage the explosion of data. We will have to implement, release, manage and secure more and more applications and services.”

GitOps ups the ante

Typically, he appears to be delighted by the prospect and is fizzing with ideas about how this can be achieved. Prime among them is GitOps, which started out in 2017 as a way to manage Kubernetes clusters and application delivery. Alexis Richardson, founder and CEO of Weaveworks, coined the term (watch his explanatory video here). GitOps uses a reconciliation loop that picks up discrepancies between the encoded desired state held in the Git repository (also known as the source of truth) and reality. Reconcilers iron out the discrepancies according to what caused the divergence, by updating or rolling back whatever step caused the issue.
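A minimal sketch of such a reconciliation loop is shown below. The helper functions standing in for ‘read the desired state from Git’ and ‘read the live state from the platform’ are invented placeholders; the point is the principle rather than any particular GitOps tool.

    import time

    def read_desired_state() -> dict:
        # Stand-in for reading the declared manifests from the Git repository.
        return {"replicas": 3, "image": "nginx:1.25"}

    def read_live_state() -> dict:
        # Stand-in for querying the platform for what is actually running.
        return {"replicas": 2, "image": "nginx:1.25"}

    def apply(key: str, value) -> None:
        # A real reconciler would call the platform API; here we just log.
        print(f"reconciling {key} -> {value}")

    def reconcile_once() -> bool:
        desired, live = read_desired_state(), read_live_state()
        drift = {k: v for k, v in desired.items() if live.get(k) != v}
        for key, value in drift.items():
            apply(key, value)      # converge, whether the drift came from a failed
        return bool(drift)         # rollout or a manual change on the live system

    if __name__ == "__main__":
        for _ in range(3):         # a real controller loops, or reacts to events, forever
            reconcile_once()
            time.sleep(1)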

Many telcos are still in the throes of adopting the DevOps methodologies of CI/CD/CT – continuous integration, delivery or deployment, and testing. Ensarguet is focused on applying GitOps – using continuous deployment and continuous operations (CD/CO) – to move this approach further along the delivery pipeline, closer to production.

A key element here is the move from a prescriptive to a declarative model: declarative code encapsulates the desired results without explicitly listing the commands or steps needed to reach that outcome. A declarative approach can support full automation, so that as data centres, networks and storage are softwarised, the shift makes the people working in these areas “more effective, more productive,” Ensarguet says. “If you are not able to automate longer or deeper than with the CI/CD, then you have no lever to manage the scaling – and that’s critical to the whole thing.”

Ramifications ripple across the industry

Ensarguet gives the impression that it is not possible to exaggerate the importance of the developers’ experience in all of this. He says that the companies that typically comprise the telco vendor ecosystem offer much the same things, which he sees as table stakes. What will differentiate the winners, he argues, is this: “The company that can bring a global experience to the market that makes the development teams productive – very fast, in a smooth way, with an approach you can scale across the company – will have a game changer.”

Another avenue for a vendor to become a gamechanger, according to Ensarguet, is to lighten “the cognitive load of the team that implements services. A key issue for us is how fast can we achieve operations to implementation?”.

He points out that Orange Business Services is a digital company and IT integrator as well as a network operator, and all three elements come into play, for example, when deploying a private network and end-to-end services for IoT. “This is why the story around developer experience is very, very important.”

Security as code is critical

Ensarguet includes a certain level and type of inbuilt security in what he describes as vendors’ “table stakes” and thinks this too should be a differentiator and part of that all-important developers’ experience. He points to security as code, which works by mapping how changes to code and infrastructure are made to DevOps tools and workflows, thereby identifying where security checks, tests and gates should be implemented – and all without adding cost or delay.

Security as code codifies security and policy decisions so that they can be socialised [shared] with other teams and checked in the CI/CD pipeline, which automatically and continuously looks for vulnerabilities and bugs.

Policy as code is integral to security as code. Ensarguet explains that Orange Business Services is considering implementing verification control using policy as code to control who can access what, on which machine, for what purposes and so on. He says, “If you can express it by code, you get all the benefits in addition to those we talked about [with GitOps]…it drives what I call trusted application delivery.”

He continued, “If you can add the benefits of CI/CO to policy as code…you can apply very regulated or highly constrained [conditions], because you’re able to prove the security rules that are running on your infrastructure.”

Policy and security as code are built around the Open Policy Agent (OPA) standard, an open-source engine that supports writing policies as code declaratively, so the policies can themselves become part of a decision-making process.
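Real OPA policies are written in its Rego language, but the underlying idea – access decisions expressed as versioned, testable code that a pipeline or service queries with a structured input – can be sketched in a few lines of Python. The roles, actions and machine names below are invented for illustration.

    def allow(request: dict) -> bool:
        """Decide whether a user may perform an action on a machine."""
        rules = [
            lambda r: r["role"] == "sre" and r["action"] in {"read", "deploy"},
            lambda r: r["role"] == "auditor" and r["action"] == "read",
        ]
        return any(rule(request) for rule in rules)

    # The CI/CD pipeline (or an admission controller) queries the policy with a
    # structured document and gates the change on the decision it returns.
    assert allow({"role": "sre", "action": "deploy", "machine": "edge-fr-01"})
    assert not allow({"role": "auditor", "action": "deploy", "machine": "edge-fr-01"})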

Work started on OPA in 2016 with the goal of unifying policy enforcement across different technologies and systems. It was accepted as a project by the Cloud Native Computing Foundation (CNCF) in 2018, attained Graduated project status and is now in wide use, for example by Big Tech players. Netflix uses OPA to control access to its internal API resources. Cloudflare, Pinterest and others use OPA to enforce policies on platforms like Kubernetes clusters.

He calls out Styra’s OPA-based product for particular mention, as its work encouraged many other firms to move towards supporting policy as code. He says “the level of verification and checking is definitively a big step up in terms of quality and reliability at a scale that was not previously possible.”

Striving for a common stack

Referencing the recent MWC, he said many of the people there working to softwarise the core network were following what has already happened in the digital and IT world – adopting open source and other open standards.

The implications for greater interoperability and flexibility were demonstrated in an O-RAN Alliance initiative at MWC. Deutsche Telekom, Orange, Telefonica, TIM and Vodafone, working with Mavenir as well as Ericsson and Nokia, implemented one of the first telco cloud-run services.

Although five operator groups were involved, there was only a single, common stack and all their services ran on top of it. Ensarguet says this proves the value of disaggregation and recomposition being taken out of tightly integrated black boxes and instead happening through the four pillars of convergence he started this interview with – software, APIs, open standards and open source.

He cheerfully acknowledges, “It will be a tough, long journey…We want the company to be able to run 50, 60 or 70% of all our projects in this model. I’m truly confident it will happen,” even though right now he cannot say exactly when.

Rakuten Symphony orchestrated MWC’s drum beat

Telecoms guru Tareq Amin and the company he leads, Rakuten Symphony, dominated headlines at MWC with a slew of announcements, reports Contributing Editor Annie Turner.

Developments at Rakuten have come so thick and fast it’s hard to keep track, and they are driven by the vision of Tareq Amin, who shot to fame as the visionary CTO of Rakuten Mobile. At a press and analyst briefing this Valentine’s Day, he reiterated his vision as “cloud in everything we do in mobile and automation of everything” which, he has said, was inspired by watching the rapid progress of cloud hyperscalers and enterprises over the last decade, while telcos lagged more and more, focusing on the launch of one generation of network technology after another.

Proof of the pudding

In 2020, Amin predicted that Rakuten Mobile’s totally virtualised, cloudified infrastructure, with as close to ubiquitous automation as it’s possible to get, would save 40% on the overall capex of the new network, plus 30% in opex. The radio segment of a mobile network accounts for 60 to 70% of capex, hence the new operator’s use of Open RAN would contribute 60% of the overall savings.

These predictions met considerable scepticism in the industry and are yet to be proven. At the Valentine’s Day briefing, Amin said that as of early February, Rakuten Mobile’s network had achieved 96% 4G coverage in Japan in the two and a half years since the build began, four years ahead of schedule. It is one of the world’s largest Open RAN deployments, with more than 200,000 cells.

While this acceleration incurred substantial costs, by the end of 2021, Rakuten was carrying very close to 90% of its data on its own network, resulting in, as Amin said, “a very, very substantial” reduction in roaming. He added, “We save an unbelievable sum of money on roaming…it’s a motivation to do whatever it takes to fast track the foundation, and the foundation is all about coverage”.

Amin said of the network, “This is a mixture of 4G and 5G. Our 5G products are unique because [our infrastructure] is the only 100% massive MIMO product for sub-6 [GHz] deployed in Japan…for us it was a balance where we could ramp up customer acquisition…which incumbents can’t compete with because we don’t cap anything. It is just unlimited [usage]”. He added that although the service suited light data users, “our…niche we want to live on is heavy data consumers.”

While the financial side of the argument remains to be seen – note that Q4 revenue was up by almost 48% year on year to $565 million – Amin noted wryly that Tokyo has a densely packed population of 17 million, adding, “I am optimistic that this year we have cleared the hurdle of whether Open RAN works or not.”

Automation makes it possible

Rakuten Mobile activates 200 base stations each day. Amin stressed, “We don’t have…an army of people doing this. If we did not employ Open RAN, if we did not employ cloudification and if we did not employ automation, all this stuff would be absolutely impossible, but we treat our active base stations exactly like Wi-Fi: Wi-Fi is all about plug and play, [and] zero-touch provisioning. We learned quickly that to build these dreams and ideas, we need a massive, systematic approach to automation and the underlying systems.”
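To illustrate the ‘plug and play’ idea in the abstract – this is not Rakuten’s system – the short Python sketch below shows a newly powered-up radio calling home with its serial number, being matched against a planned inventory and receiving its generated configuration with no engineer in the loop. All names and fields are invented.

    # Planned inventory: what each serial number is supposed to become on site.
    PLANNED_SITES = {
        "SN-12345": {"site_id": "TKY-0042", "band": "n77", "tx_power_dbm": 40},
    }

    def onboard(serial_number: str) -> dict:
        """Build the configuration pushed to a radio that has just called home."""
        plan = PLANNED_SITES.get(serial_number)
        if plan is None:
            raise ValueError(f"{serial_number} is not in the rollout plan")
        return {
            **plan,
            "software": "latest-validated",   # pulled from a golden repository
        }

    print(onboard("SN-12345"))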

Amin commented, “The first acquisition [in May 2020] that we [did] people never understood…we acquired Innoeye because I knew that Open RAN cloud without the wrappers of automation does not achieve results.” He continued, “If you don’t have the right automation architecture, the right next-generation OSS platform…is a massive challenge. So that was my first target acquisition because I knew that this is so important.”

At MWC 2022, Rakuten continued the trend, publicising its intention to acquire the award-winning Silicon Valley start-up Robin.io, having worked with the start-up for two years. Amin said of the planned acquisition at a roundtable at MWC, “We learned the hard way not all clouds are alike. The complexity of telecom workloads, especially radio access, require predictability that public cloud hasn’t yet experienced”, and continued, “Robin.io’s cloud capability is proven to be effective for the most demanding workloads in mobile and we believe it will allow Rakuten Symphony to safely accelerate cloud-native transformation for our customers and prepare the industry for the future.”

Moving to a different key

Note his use of Rakuten Symphony: in August 2021, Rakuten Mobile announced Rakuten Symphony, which was to sell Rakuten Mobile’s technology, software and expertise to other operators around the world – with Amin as CEO, naturally. The new division simultaneously announced its first customer, 1&1, Germany’s new-entrant 5G operator, for which it is to build Europe’s first fully virtualised infrastructure on Open RAN.

In January 2022, Rakuten Symphony was spun out of the operator and Amin announced yet another part of his vision, Symworld, which was officially launched at the Valentine’s Day briefing. Amin described it as “a consolidated telco app store with all the applications you need for engineering, operation, architecture, knowledge management, learning management…no need…to go to multiple screens on multiple billing systems, and multiple subscription models.

“We are very, very excited about this and of course, the constant innovation around automation…is a never-ending process. It never has a start date or end date.” Nokia was announced as Symworld’s sole vendor for specific mobile core products.

Amin explained that, depending on size, an operator could buy and consume more than 2,000 applications and “As we looked at Rakuten Mobile, we knew we needed to stitch all of it together”, adding, “Rakuten Symphony has one massive, unique advantage…our living lab, Rakuten Mobile, is a massive, massive challenge…the ability to understand how to build systems to run at scale, how to build VNFs [virtual network functions] and CNFs [cloud network functions] that are all orchestrated and the underlying system architecture to guarantee reliability at the scale one would expect…with zero increase in headcount or operation.”

Amin said, “It proves the concept of what we call hyperscale”, adding that “the operational experience of Rakuten Mobile has given all our team the insight to build this beautifully elegant, orchestrated, automated system that you will never find in the marketplace today”.

To support this view, he explained that an orchestrator is only as intelligent as the systems beneath it and if those underlying systems are not fully autonomous and don’t support real-time APIs, “What’s the orchestrator going to do?” he asks. By comparison, “You [can] purchase an integrated solution with Rakuten, in the case of Germany, your network is live in three months. How many networks you could build in three months? That’s the beauty of integrated validated systems…I’m not even happy about three months, I want to get it to seven days…to bring up the workload.”

Not only for greenfield

Although Rakuten Symphony was only announced last August, the division already has bookings worth $3 billion. Amin batted away suggestions that Rakuten’s telecoms activities are all about greenfield, highlighting the role Enrique Blanco, Group CTIO of Telefonica, has played in working with Rakuten Symphony, “not in a lab environment, but in a commercial environment across five markets to go validate all the hypotheses we are talking about”.

He added, “The Symphony journey is not only about Open RAN. I can tell you today that Telefonica in certain markets [is] already consuming some of our OSS products…We are making progress in the brownfield game… I am really, really super-optimistic that Telefonica could be the breakthrough that everybody is looking for.”

In fact, Telefonica invested in Altiostar, which was Rakuten’s second acquisition, made last year. Altiostar is to be rebranded under a new business unit covering network functions, and “own RAN, core, edge…intelligent operations [are] all about comprehensive OSS stack – and this is [like no other].”

Then there’s AT&T. At MWC Rakuten Symphony announced that the US operator would collaborate with it to enhance network design and build solutions for operators. At a roundtable for journalists and analysts following the announcement, Amin said, “The announcement…is not just the monetary value [although] that’s important. We are learning a lot; how to manage our way through a brownfield deployment and integration point.”

He said that initial conversations with the operator’s CTO were about foundational digital workflow and planning. Amin stated, “If you don’t automate these mundane processes, then as you go upstream and say, ‘I want zero-touch provisioning’, none of that elegant stuff will work if the foundation isn’t fixed.

“The first thing we did when we engaged with him was a discovery and due diligence to find out how we integrate within the ecosystem because there is a very large complexity of integration points. The way we build these applications [in Symworld] is we have adaptors we created for legacy integration architecture.”

There was considerable emphasis at the roundtable that Symworld is not about industrialising a rigid approach – every network in the world is unique. Rather, the unique selling point is flexibility, which will be maintained by keeping everything modular and at component level so users create their own environment from a menu.

Amin concluded, “the cloudification of the app store to achieve the big premise of zero-touch provisioning, autonomous operation requires building blocks: That’s what we focus on in Symworld, these building blocks.”

Automation is about network complexity and customer experience, not cool tech

Dr Kim (Kyllesbech) Larsen has been CTIO and board member at T-Mobile Netherlands for more than three years. He talked to Annie Turner about his approach to network automation, its role in the operator’s notable success so far, and his doubts about network slicing.

Dr Kim (Kyllesbech) Larsen has worked at the Deutsche Telekom group for many years, not continuously, in a variety of technology roles and countries. He’s a physicist-mathematician with a PhD in physics, and widely known as Dr Kim. He became CTIO of T-Mobile Netherlands in January 2019, after the operator acquired Tele2 Netherlands at the end of 2018. He is also a board member and an executive advisor on technology, economics and strategy.

Since Dr Kim joined T-Mobile Netherlands, the operator has increased its mobile market share from 25% in 2017 to 42% (based on the number of SIMs) in 2020 and become market leader. It entered the fixed market in 2016 with the acquisition of Thuis, strengthening its position through a strategic partnership with Open Dutch Fiber (owned by KKR and DTCP) in April 2021.

T-Mobile Netherlands was acquired by a consortium of Apax and Warburg Pincus in September 2021. Its new owners described it as “Europe’s fastest-growing mobile network”.

Motivation for automation

Dr. Kim’s network automation journey began in 2014 when he and his counterparts across the industry started thinking about what the future network would look like, as network functions virtualization (NFV) and software-defined networking (SDN) were getting underway.


He stresses that work on network automation is not because it’s “cool technology”, but “comes from a wish to continuously improve efficiency, handle complexity and to use automatic detection or machine learning (ML) algorithms in general, to improve the network’s operational excellence and customer experience, which is part of our Magenta DNA.”

No surprise then that after T-Mobile Netherlands acquired Tele2 and “a lot of legacy”, he quickly introduced anomaly detection loops to identify modems that weren’t working and poor quality at customers’ homes. “Sometimes we could reset the modem, sometimes we asked the customer to do it, but we closed the loop,” he says.

Increasingly, anomaly detection is “part of how we catch network anomalies that otherwise would lead to a bad outcome for our customers. We are still far away from the self-detect endgame, but we have worked closely with the likes of Anodot to set down frameworks in our networks – mobile and fixed – that allow early detection of what we call emerging incidents,” Dr Kim adds.
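The sort of loop he describes can be sketched roughly as follows in Python: watch a per-modem quality metric, flag samples that deviate sharply from recent history and trigger a remediation such as a remote reset. The metric, window and threshold are illustrative; this is not T-Mobile Netherlands’ or Anodot’s implementation.

    from statistics import mean, stdev

    def detect_anomalies(samples: list[float], window: int = 24, z_max: float = 3.0):
        """Yield indices where a sample deviates sharply from its recent history."""
        for i in range(window, len(samples)):
            history = samples[i - window:i]
            sigma = stdev(history) or 1e-9        # avoid dividing by zero
            if abs(samples[i] - mean(history)) / sigma > z_max:
                yield i

    # Hourly line-quality scores for one modem, with a single bad hour.
    line_quality = [0.95] * 30 + [0.40] + [0.95] * 5
    for hour in detect_anomalies(line_quality):
        print(f"hour {hour}: anomaly detected, scheduling a remote modem reset")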

Cold comfort

Around 2015, Dr Kim and his cohort also began to realise that some aspects of cloud, already familiar in IT, would come into play in Network, and that the trend would expand and accelerate as 5G Standalone proliferates. Hence, since 2018 T-Mobile Netherlands has been through an IT transformation, going from less than 10% of its application platforms cloudified to more than 90% in the cloud, although most are not containerised.

He explains, “The lift and shift of legacy network components into the cloud gives a boost from cloudification to some extent. The next step is moving to 5G Standalone and we will be going cloud native with all the automation frameworks there.”

Dr Kim thinks the importance of AI and ML is hyped at the expense of other aspects of automation: “I’ve been a big proponent of AI and ML in network operations but I also always want to put a damper on it because there are a lot of automation frameworks in the cloud-native environment that have nothing to do with AI and ML, which I think are sort of the cherry on the cake,” he says.

“You can combine all kinds of different platforms within the different containers, and you can keep it all in the same computing centre. From an efficiency perspective, it makes a lot of sense. It’s not like the old physical hardware…now you unbundle it, you take the proprietary hardware away and you put commercial off-the-shelf hardware there instead and [it] should work.”

Dr Kim thinks NFV was an essential step. For one thing, the core network technology has a lifespan of 10 to 15 years, but also, as he points out, “In 2014, if anybody had said we would want to use a cloud-native framework for our applications and network, a lot of people would’ve said, ‘It will never happen – it’s IT stuff, you cannot trust it’. This is because IT had a completely different view of service levels and availability than we had at Network then.

“Nowadays IT systems have to be 24×7 and within the cloud environment, and we started to understand there are other ways of solving availability and reliability, instead of duplicating hardware in multiple redundancies. You can try other, better ways of getting the same result.”

IT transformation

Not everyone is happy about moving telecoms networks to public clouds. Dr Kim comments, “The FBI, NASA and US defense run a lot of mission-critical applications on public clouds. I would say the hyperscalers – AWS, Azure, Google and the rest – have far more experience in and know-how about cybersecurity: their security is far thicker and more mature than many telcos’. We’ve always cared about it, but we need to care even more.”

He adds, “Most of our IT applications are running on AWS with some mission-critical ones on Azure and one or two on private cloud, and they run very well. We have the right service levels; we feel very comfortable about the security around us. They meet our contractual agreements, but in some instances, we run things in our own environment and certainly we keep customers’ data away from the public cloud.”

As he predicts, “In the beginning, we would not have put network functionality in a public cloud domain. We’d have tried to ring fence that probably within our own copy of a public cloud in our own environment, but the time will come maybe when even that border will be broken down and we will trust hyperscalers’ public clouds even more than today.”

Autonomous layers

He concedes, “Engineers’ view is that ‘trust is good, but control is better’, so they like a top-down approach. This is why we see a lot of architectures with the ‘heavy-handed’ orchestrator on top of the network stack. Not being an engineer, I like to keep top-down control to a bare minimum and rely much more on layer-by-layer autonomy with APIs between them. Microservices can be taken up while working to keep orchestration local rather than having a ‘Big Guy’ orchestrator controlling everything from the top-down. Autonomous layers are where we’re moving to over the next 10 to 20 years.”

Dr Kim is a fan of disaggregation, explaining, “The world will be much more dynamic in terms of adding functionality and applications to the cloud landscape that will support 5G Standalone, and then you will not be tied to one or a few suppliers.” He adds, “We say we want more independence from the traditional suppliers, but I think you can never be completely independent because there is a minimum set of applications and functionalities that makes more sense to get from one supplier, not from three or four or five.”

He is hopeful that many more “dynamic new companies will come on the scene to provide applications, piggybacking on hyperscalers’ public cloud infrastructure. We need them to help us get cloud network functionalities to work through new, innovative or cost-efficient standard applications and to help us close the loop across our network layers and telco stack.”

Contradiction in terms

Cloudification – the basis of 5G Standalone – is all about scale, yet as Dr Kim points out, “In the past, when we deployed a new core network, it was a Big Bang launch and all the customers were on it. The network had to work flawlessly, from the very beginning. 5G Standalone is super scalable [but] by design it allows you to start running small workloads, learn from that and develop it as we go along, such as working with different verticals.”

One aspect of 5G SA that has had a lot of airtime is network slicing. Dr Kim has issues with both the economics and the network resource planning aspects, despite his immense respect for one of its chief architects. He notes wryly, “My blog about 5G SA network slicing seems to have resonated with a lot of people but we will still have to use it.”

Dr Kim asks, “What are we trying to solve [with network slicing]? Our mobile networks are super versatile. The impression that a modern cellular network can only deliver ‘one make of black car’ is plain wrong and misleading. For example, I can differentiate speeds and have very sophisticated, rich QoS mechanisms at my disposal (which in my opinion are much more impactful in 5G SA) to achieve what we need to do: I can set up VPN-like connections without slicing.

“There are other ways to solve this. Imagine having thousands and thousands of slices serving different needs and maybe even in real time – in my opinion, you become [more] inefficient with your network resources than if you had to optimise across the whole network.”

Dr Kim reflects that the ideas for the 3GPP standards “came out at a time when resources were scarce – when our networks could deliver a car in maybe one, two or three colours – and we believed that vertical segments would really be the huge revenue generator for 5G and ‘save our industry’. That might happen, but if you look at revenue share B2B versus revenue share B2C, I don’t think it will change that much. I truly believe we could have done this in other ways – it looks like a cool engineering thing to do.”

A note of caution

Dr Kim says, “The last thing I want to get across about automation is to ask what does it bring? In general, unless you are a big classical telco incumbent, I don’t expect automation to bring a lot of efficiencies, but it will bring cost avoidance, saving on manual effort. We want automation to manage the complexity that we are incurring more and more in a modern telco network which is increasingly heterogeneous in nature. We are operating different generations of mobile networks, home networks, campus networks, fixed broadband networks – and they are all becoming more and more integrated.

“Similarly on the transport side, architecturally, we want fixed and mobile to merge allowing us to save cost and increase efficiency. Automation itself will not bring huge cost savings according to the economic models I’ve been looking at. It improves how we deal with complexity and takes away some risk around skills because if it works, we will need fewer critical, scarce engineering skills.”

Combining 5G Standalone, IoT, AI, cloud and edge is the key to techco success

Three key speakers from our upcoming FutureNet Middle East and Africa event answered a rapid round of questions on how they see the network of the future evolving, where they are now and how they plan to get there.

In keeping with the New Year in some parts of the world, and the traditional resolutions to do better that come with it, Haithem Mohammed Alfaraj, CTO of stc, notes, “Telecoms are on a continuous improvement journey”.

He adds, “We are well aware of the rules and the new needs that are reshaping the network, and eager to assist and support our customers as they embark on their digitisation journey”. No matter where they are in the world, stc is keen to provide them with full access to its services.

He also acknowledges his company’s role in the national economy of Saudi Arabia, saying, “We aim to continue to be a key accelerator in the Kingdom’s digital transformation.”

The digital transformation of economies is a subject close to the heart of Orange Cameroon, which last year celebrated the tenth anniversary of the launch of Orange Money. During those celebrations, the then Head of Orange Money Cameroon pointed out that his company had a 70% share of the country’s mobile banking market.

Tassembedo estimated that the operator’s cumulative monthly transactions would amount to CFA800 billion ($1.37 billion) during last year, making an estimated total of CFA9.6 trillion ($16.414 billion) for the year – about twice Cameroon’s state budget for 2021.

5G Standalone needs complements


Looking forward to new business opportunities enabled by combining 5G Standalone and complementary technologies, Alfaraj identifies revenue-sharing deals and joint ventures with cloud providers as being among the key factors driving telecoms. He points out that 5G technology “has shifted fundamentally” the communication network architecture, “accelerating revenue generation through innovative services for consumers as well as corporations large and small alike”.

Consequently, stc’s focus is on expanding the 5G ecosystem and new use cases through co-creation projects, including for the private networks market, where privacy and capacity requirements will drive demand.

5G Standalone is key to this evolution, but Alfaraj says it “will be paired with other emerging technologies that stc has already adopted such as artificial intelligence, machine learning and IoT, [to] unlock and help us to transform our customers’ experiences and provide customised and innovative solutions to enterprises to offer full-fledged technology and telecom solutions on platforms.”

Ayedime Amadi, Group Senior Manager Enterprise Architecture & Customer Channels, Group Technology at MTN, agrees with Alfaraj that edge, cloud, AI, IoT and 5G are critical catalysts and forces for industry convergence and a connected world. She predicts, “These five forces will together rapidly accelerate cross-industry use cases – both consumer and business – to create new business models and lines of business, and translate to new revenue streams that futureproof telecoms”.

She explains, “The sophisticated infrastructure and connectivity, which is [a] core niche for telcos, has given them more scale and depth in capabilities that [other] industry verticals are unable to match”. Amadi references a GSMA Intelligence study from 2021 that listed manufacturing, fintech, retail, healthcare, oil and gas, agriculture and mining as having the most potential to be revenue drivers for telcos, beyond traditional connectivity, in the next five years.

The power of gestalt

Amadi agrees with Alfaraj that this will be enabled by the interworking of the five technologies, with the whole being greater than the sum of the parts. She outlines that while 5G enables high transmission speed and the transfer of large volumes of data, edge and cloud bring the power to run, deploy and manage application workloads closer to the source of data – and to run them in “auto-parallel and at scale”.

Edge computing and AI are at the heart of IoT applications: AI requires the extremely fast data processing enabled by edge computing, and in turn delivers higher performance of compute resources and intelligence at the edge. Amadi says, “Cloud has also become a key enabler for AI as it enables inexpensive, powerful intelligence at the edge without…transfer[ring] data back and forth for processing.”

She outlines a number of “very commercially viable use cases that enable rapid decision making, which is key to any business’s evolution – smart devices, analytics at the edge for rapid insights, voice recognition, smart cameras in production facilities…the opportunities are endless and are new areas of business growth for telcos to expand into as they solve and automate real-life problems.”

Scale is the key to granularity

Alfaraj stresses that “the future network should be able to offer a unique and different experience to a range of sectors such as health and education to serve a wide range of use cases with fine granularity”.

The right slice for each customer must be personalized to get the maximum benefit for both provider and customer, Amadi insists. “We must leverage analytics and insight to segment our customers’ needs based on demographic, psychographic, behavioral and geographic segmentation,” she says. “To succeed in new lines of business, it is important to be driven by insight”. Key customer inputs include usage, spend, macro environment, competitive landscape, digital maturity, and customers’ propensity and appetite.

The big question is how can telcos evolve to achieve these goals? Alfaraj says that hyperscale cloud providers have a big role to play, as their strong ecosystems complement telcos’ strong networks, as well as “relationships with governments and global reach. This is a synergistic opportunity for both”.

He continues, “Hyperscale cloud providers bring a huge value to the telecom industry [but this] is in a very early stage of development as networks transform into new ecosystems. It will open the door for enormous opportunities such as reducing costs, offering scalability, or offering interoperability.”

To leverage this opportunity, he explains that stc has adopted a multi-cloud strategy applied on a per-use case basis. The operator has a strong relationship with Alibaba, but is also “in on-going discussions with Microsoft and Google as part of our MENA [Middle East, North Africa] HUB vision.”

An independent approach to public cloud

Orange Cameroon has a different approach to cloud. Its Chief Engineering and Network Officer, Abdallah Nassar, explains that his company first looked to virtualisation as a means of reducing operational complexity and cost, because demand for its network grows 200% year on year.

Orange Cameroon’s virtualisation strategy and deployment is an evolving combination of network functions virtualisation (NFV) and cloud native. While some operators use public clouds like AWS, Google Cloud or IBM Cloud, Orange Cameroon chose a different route, Nassar explains: “We are investing in our own cloud and recently launched our first vEPC [virtual evolved packet core] on it: we built our own public cloud infrastructure using systems from different vendors.”

This first vEPC is to help Orange Cameroon handle data more efficiently on 3G and 4G infrastructure, but tellingly, Nassar adds, “This kind of an investment gives us a different position versus the competition which also has vEPCs, but they use vendors’ infrastructure or a vendor’s virtualised system.

“We want to be independent because when we talk about scaling, we mean different kinds of scaling: we can use this infrastructure for investment in our own systems and we can even offer it to the public. That’s why we call it Orange public cloud.”

Tech intensity

MTN’s Amadi quotes Microsoft’s CEO, Satya Nadella, saying “the world is becoming one giant computer,” adding that only businesses that are best positioned to take full advantage of that massive scale and connectedness will remain competitive.

“Digital transformation will help a company exist but embracing tech intensity will help a company thrive,” she states. “I measure tech intensity in bands based on the degree to which an organisation has an appetite to make technology core to its offering and the internal reengineering they are committed to in order to enable this.”

The key pillars that require re-engineering in an organisation’s ecosystem to support this new business model are culture, people, process, structure and the business model, with a key focus on injecting new skillsets, cultural transformation and the operating model, Amadi concludes.

FutureNet Middle East & Africa, Speaker Interview with Ayedime Amadi, MTN

Ayedime Amadi, Group Senior Manager Enterprise Architecture & Customer Channels, MTN is a panel speaker at FutureNet Middle East & Africa on 26th January. We recently caught up with Ayedime to get her thoughts on the panel topic: The intersection of Cloud, Edge, 5G, IoT and AI: The real growth opportunity?


Telcos have been talking about searching for new revenue streams for years – how close are we to enabling them now?

Digital transformation has accelerated along a number of dimensions, sped up especially by COVID and the global need for an all-day, every day, everywhere real-time experience driven by consumer appetite. Telcos now have an opportunity to offer new services and delve into new lines of business which are not core traditional telco but move into providing digital and lifestyle services. This is the evolutionary progression from a telecommunications service provider (telco) to a technology company (techco).
Also, with the decline of traditional service revenues such as voice, SMS and data, expanding the scope beyond just being a pipe becomes a big necessity to remain competitive and to future-proof the business. Take into cognizance the sophisticated infrastructure and connectivity, which is a core niche for telcos and has given them more scale and depth in capabilities that these industry verticals are unable to match.
GSMA Intelligence, in a study done in 2021, highlighted that the verticals of manufacturing, fintech, retail, healthcare, oil & gas, agriculture and mining have the most potential to be core revenue drivers for telco services beyond the traditional connectivity services in the next five years.
Edge, cloud, AI and IoT are key enablers which these industries require to drive their businesses; telcos are strategically positioned to provide them, and this becomes a new revenue stream.


How important is the combination of these Technologies to generating new growth?

This powerful dynamo of cloud, edge, IoT, AI and 5G is a critical catalyst and force for industry convergence and a connected world. These five forces will together rapidly accelerate cross-industry use cases across both consumer and business segments to create new business models and lines of business, and these translate to new revenue streams that future-proof the telecom business.

With 5G comes the enablement of high speed and large-volume data transfer; with edge and cloud comes the power to run, deploy and manage application workloads closer to the source of data, and the ability to run them in auto-parallel and at scale.

Edge computing and AI are at the core of internet of things applications and are driving a number of use cases: AI requires extremely fast processing of data, which edge computing enables, while AI brings higher-performance compute and intelligence to the edge. Cloud has also become a key enabler for edge AI, making powerful intelligence at the edge affordable without the need to transfer data back and forth for processing.
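To make that trade-off a little more concrete, here is a minimal Python sketch that decides where to run an AI inference workload based on its latency budget; the latency figures and use cases are hypothetical assumptions for illustration, not measurements from any operator’s network.

```python
# Illustrative sketch only: the latency figures and use cases below are
# hypothetical assumptions, not measurements from any operator's network.

EDGE_RTT_MS = 10      # assumed round trip to a nearby edge site
CLOUD_RTT_MS = 80     # assumed round trip to a distant central cloud

def choose_inference_site(latency_budget_ms: float) -> str:
    """Pick where to run an AI inference workload given its latency budget."""
    if latency_budget_ms < EDGE_RTT_MS:
        return "on-device"   # even the edge is too far away
    if latency_budget_ms < CLOUD_RTT_MS:
        return "edge"        # edge meets the budget, central cloud does not
    return "cloud"           # budget is relaxed enough for central cloud

# Example use cases with assumed latency budgets (milliseconds)
use_cases = {
    "smart camera defect detection": 20,
    "voice recognition": 150,
    "batch analytics": 5_000,
}

for name, budget in use_cases.items():
    print(f"{name}: run in {choose_inference_site(budget)}")
```

The point is simply that the tighter the latency budget, the closer to the data the intelligence has to live, which is exactly where telco edge infrastructure sits.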

There are a number of commercially viable use cases that enable the rapid decision-making that is key to any business’s evolution – smart devices, analytics at the edge for rapid insights, voice recognition, smart cameras in production facilities and so on. The opportunities are endless, as are the new areas of business growth for telcos to expand into as they solve and automate real-life problems.


How can we define the right slice of parameters for the right customers every time?

Determining the right slice for each customer needs to be personalized to get the maximum benefit for both provider and customer; there is no one-size-fits-all for these kinds of services.

We must leverage analytics and insight to segment customer needs based on demographic, psychographic, behavioral and geographic segmentation – the data needs to be analysed to identify patterns that can be used to create customer segments.

To succeed in these new lines of business, it is important that they are driven by insight – key customer inputs are usage, spend, macro-environment, competitive landscape, digital maturity index, and customer propensity and appetite.

All of these need to be fed into an analytics and intelligence engine to create nano- and micro-segments based on customer behavior and propensity to adopt, and to ensure the right mix for targeting.

Decisions around product propositions need to become data- and insight-driven because customer needs change and evolve over time. It is therefore important to leverage analytics and machine learning to create algorithms based on current behaviors and metrics to predict future use and appetite; customer value management is also key for these new-age offerings.
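As a rough sketch of the kind of analytics-driven micro-segmentation Amadi describes, the Python example below clusters customers on a handful of the inputs she lists (usage, spend and a digital maturity score); the feature set, synthetic data and number of segments are illustrative assumptions rather than MTN’s actual model.

```python
# Hypothetical sketch: clusters customers into micro-segments from usage,
# spend and digital-maturity features. The feature names, synthetic data and
# choice of k are illustrative assumptions, not an operator's production model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# columns: monthly data usage (GB), monthly spend (USD), digital maturity (0-1)
customers = rng.random((1000, 3)) * np.array([50.0, 40.0, 1.0])

features = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

for seg in range(5):
    members = customers[segments == seg]
    print(f"segment {seg}: {len(members)} customers, "
          f"avg spend ${members[:, 1].mean():.2f}")
```

In practice the cluster labels would feed a propensity model and a customer value management engine rather than a print statement.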


How do telcos’ working practices and corporate culture need to change to make the most of this phenomenal intersection of technologies?

The evolution from a telco to a techco requires a paradigm shift and mindset surgery, as the thought process and culture are quite different from the traditional telco ‘pipe’ mentality; to succeed in this area it is important to embrace tech intensity and a platform-based model of thinking.
Satya Nadella defines tech intensity as “the fusion of cultural mindset and business processes that enable the propagation of digital capabilities that create end-to-end digital feedback loops, tear down data silos and unleash information flows to trigger insights and predictions, automated workflows and intelligent services.”
Nadella says “the world is becoming one giant computer”, hence only businesses that are best positioned to take full advantage of that massive scale and connectedness will remain competitive. “Digital transformation will help a company exist but embracing tech intensity will help a company thrive.”
I measure tech intensity in bands based on the degree to which an organization has an appetite to make technology core to its offering and the internal re-engineering it is committed to in order to enable this.
The key pillars that require re-engineering in an organization’s ecosystem to support this new business model are culture, people, process, structure and business model, with a key focus on injecting new skillsets, culture transformation and operating model re-engineering.

Key areas of focus for recalibration

1. Modern platform architecture – platform-based capabilities, low-code software development, modular platforms and APIs, cloud-native applications, disruptive innovation, simplified and agile architectures.
2. New operating model – agile DevOps ways of working, new engagement models, realigned governance and operating model.
3. Talent and sourcing – insourcing/outsourcing strategy, future-proofing of skills, partner optimization.
4. Culture – experimentation and launching new services in “beta” mode with support for a “fast fail” approach; innovation indoctrinated as part of the mainstream business; a start-up incubation and venture-fund mindset; a co-creation mindset.

The DNA of a techco organization is driven by a number of characteristics, a few of which are collaborative processes, technology savviness, entrepreneurial spirit, organizational agility, data-driven customer engagement, software development and an online-anywhere way of working. To enable the sustainability of change of this magnitude, an organization-wide change and culture management programme is required to entrench it and ensure it reaches the required maturity levels.

To hear more insight from Ayedime register for FutureNet Middle East & Africa here

FutureNet Middle East & Africa, Speaker Interview with Haithem M. Alfaraj, STC

Haithem M. Alfaraj, CTO, STC, is speaking on a Panel at FutureNet Middle East & Africa on 26th January. We recently caught up with Haithem to get his thoughts on the panel topic: The Network of the Future and the Future Telco.

What do you see as the key requirements for future telecoms networks and how are they different from what we have today?

Telecoms are on a continuous improvement journey. We are well aware of the rules and the new needs that are reshaping the network, and we are eager to assist and support our customers as they embark on their digitization journey. The future requires us to engage with our customers wherever they are across the globe and ensure that they have full access to our services. And we are striving every day to offer the best possible user experience.

The industry is accelerating, and so Telecoms are moving fast to build and capture the latest technologies and innovations in the market. We are very keen to provide our customers with the best technologies to ensure their success and we aim to continue to be a key accelerator in the Kingdom’s digital transformation.

Which business and operating models do you think will deliver revenue growth in the future?

The market is trending towards revenue-sharing deals and joint ventures with cloud providers, which is one of the key factors driving the telecom industry in the next few years.

With 5G technology, the communication network architecture has shifted fundamentally, accelerating revenue generation through innovative services for consumers and for corporations large and small. Therefore, stc focuses on growing the 5G ecosystem and bringing new use cases to life through co-creation projects, expanding partnerships, and evaluating consortiums with industry players.

stc expects privacy and capacity requirements to drive demand for private networks. Therefore, stc is exploring potential investments to develop ecosystems and monetize technologies through projects and products under different business models.

What are the implications of 5G Standalone on the network of the future and when will we feel its effects?

The capabilities of 5G as it continues to evolve, paired with other emerging technologies that stc has already adopted such as artificial intelligence, machine learning and IoT, will help us transform our customers’ experiences and provide customized, innovative solutions to enterprises, offering full-fledged technology and telecom solutions on platforms.

In stc, we are well aware that the future network should be able to offer a unique and different experience to a range of sectors such as health and education to serve a wide range of use cases with fine granularity.

stc is keen to continue to develop this technology as it will have far-reaching impacts on how our customers work, study, socialize, live, and play all over the world as we have seen during the pandemic. And we are working hard every day to be able to deliver this to the community.

Will hyper-scale cloud providers become a permanent fact of life for telcos and, if so, in what roles?

With hyper-scale cloud providers bringing strong ecosystems, and telcos bringing strong networks, relationships with governments, and global reach, this is a synergistic opportunity for both. Hyper-scale cloud providers bring a huge value to the telecom industry. Luckily, the relationship between hyper-scale cloud providers and telco providers is in a very early stage of development as networks transform into new ecosystems.

It will open the door for enormous opportunities such as reducing costs, offering scalability, or offering interoperability. Therefore, in stc, we adopted a multi-cloud strategy on a per-use case basis where we have a strong and long-term partnership with Alibaba for example. As well as that, we are in ongoing discussions with Microsoft and Google as part of our MENA HUB vision.

To hear more insight from Haithem register for FutureNet Middle East & Africa here

Virtualisation and automation will bring a shared future to benefit all

Abdallah Nassar, Chief Engineering and Network Officer at Orange Cameroon, talked to Contributing Editor Annie Turner about the long but fast-moving journey from telco to techco.

Abdallah Nassar has been in his job at Orange Cameroon for four years. In that time, he has won two awards and been recognised by nPerf for running the Best Network (2019, 2020), and by Ookla for Best Mobile Coverage (2019, 2020), Fastest Mobile Network (2020) and Best Mobile Network (2020).

He stressed that virtualisation is the future of the network, so automation of the network, tools or process must be talked about in that context. He explained that his company began to pay serious attention to virtualisation when it was looking at how best to provide content to customers, how to combine content with data services, how to gain greater speed and agility – and ensure quality of services.

Local market conditions

Abdallah Nassar is a speaker at FutureNet Middle East & Africa, register today!

As is common in many markets, customers typically have more than one SIM card and switch between them depending on what they are doing. With about 12 million customers, he says, “In terms of [absolute numbers] we are not claiming we are first in the market”, but he claims to lead the market in terms of coverage, service quality and products. The population of Cameroon is a little over 27.5 million.

Most notably, the operator offers mobile banking services across its African operating companies under the Orange Money brand. In July, Emmanuel Tassembedo, the former Head of Orange Money Cameroon, announced that the company has a 70% share of the mobile banking market, on the tenth anniversary of Orange Money’s operations in Cameroon.

Tassembedo previously estimated the operator’s monthly transactions at about CFA800 billion ($1.37 billion) this year, an estimated total of CFA9.6 trillion ($16.4 billion) for the year, or about twice Cameroon’s state budget for 2021.

Meeting demands

Demand for data on Orange Cameroon’s network is growing at 200% year on year, which makes the infrastructure increasingly challenging in terms of maintenance, capacity, technology updates and more. Nassar says, “We realised this complexity cannot continue in the same way, using the same type of equipment, dimensioning, and legacy technology…we started to look at how we can deal with all this.”

He says, “So the first step was we needed to invest but also lower operational costs – how could we create a significant cost saving while we’re investing? That started the idea of virtualisation, of putting all our system in a virtual environment, because it comes with more agility, more flexibility and on a bigger scale than normal legacy equipment.”

What exactly does Nassar mean when he talks about virtualisation – network functions virtualisation (NFV) or full-blown cloud native? He says it’s an evolving mixture and explains that while some operators use public clouds like AWS, Google Cloud or IBM Cloud, Orange chose a different way. Nassar says, “We are investing in our own cloud and recently launched our first vEPC [virtual evolved packet core] on it: we built our own public cloud infrastructure using systems from different vendors.”

This first vEPC is to help Orange Cameroon handle data more efficiently on 3G and 4G infrastructure, but Nassar adds, “We need to be independent…this kind of an investment gives us a different position versus the competition which also has vEPCs, but they use vendors’ infrastructure or a vendor’s virtualised system. We want to be independent because when we talk about scaling, we mean different kinds of scaling: we can use this infrastructure for investment in our own systems and we can even offer it to the public. That’s why we call it Orange public cloud.”

Think global, act local

While building and controlling its own telco cloud is in line with group-level strategy, Nassar says that execution is “country by country. We are talking about virtual environments, but you still need a certain type of infrastructure in place and in Africa there are a lot of limitations on international transit, so it’s much better to put things in place locally and reserve international connectivity for customers’ use.

“This is why we cache content in the country – things like Meta, so Facebook, Instagram and WhatsApp – to lower the costs of international transit and give our customers better experience. Also, [we cache] the entire Google platform, from YouTube to all the others, and recently we put Netflix in place.”

How does the virtualisation and keeping-it-local strategy work for the enterprise or B2B market in Cameroon? Nassar explains there are two sectors. At the moment, the much larger one is private businesses, especially microfinance institutions and banks, as well as logistics including local transport and sea-borne and other international transport. Almost all of them have systems hosted on Orange’s own public cloud, using the operator’s equipment in its own data centres.

He elaborates, “We’re talking about big players in the market and in various domains such as transport, banking, micro-finance and breweries”. He notes, “The public or government sector is totally different,” displaying natural caution regarding issues like security and the maturity of technology, but Orange Cameroon expects the pace to pick up here too.

Enter automation

Having got that first vEPC up and running, Nassar says, “The second step is to virtualise all [the EPC] and new services, along with classic services for a mobile operator such as VAS [value-added services] and triple-play services. Step by step we are increasing the scalability of this public cloud system to virtualise our whole environment.”

Is this move to cloud and virtualisation preparing the way for 5G? He states, “It is one of many steps that we are taking to prepare the ground for 5G. It is an ambition, but a difficult ambition for this continent because there are many dependencies. We are trying to benefit from each step, although the virtualisation and automation are essential steps toward 5G.”

Nassar says, “The idea of having a virtual environment really is to simplify our operational method…but this virtual system will require a lot of frequent maintenance. To do that manually is very painful and human error could have a huge impact. When we talk about virtualisation, we cannot ignore that we need certain types of automation to be in place too.”

Fault management, predictive maintenance

What automation is Nassar talking about? “We are talking about orchestration [by which I mean] an automated system that will eventually harmonise and simplify the workflow of our systems, which will give us the ability to better analyse, anticipate and correct,” he says.

“There is also automation from robots which experience the network, products and services like a customer and test them nonstop. We are not sitting in the office waiting for the customer to inform us something is not working. Our robots run all the time to identify particular types of service interruption by simulating certain situations, for example during a high utilisation period, so we can do preventive maintenance while the orchestration tries to harmonise the functionality of different nodes to make it all work together.”
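For readers wondering what such a “robot” looks like in practice, the sketch below is a minimal synthetic-monitoring loop in Python; the endpoints, thresholds and check interval are placeholder assumptions, and a real deployment would push alerts into the orchestration and fault-management systems rather than print them.

```python
# Minimal synthetic-monitoring sketch: probes a few service endpoints in a
# loop and flags slow or failed responses for preventive maintenance.
# The URLs, thresholds and interval are illustrative assumptions only.
import time
import urllib.request

PROBES = {
    "portal": "https://example.com/health",     # placeholder endpoints
    "api": "https://example.com/api/ping",
}
LATENCY_THRESHOLD_S = 0.5
CHECK_INTERVAL_S = 60

def probe(name: str, url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200 or elapsed > LATENCY_THRESHOLD_S:
                print(f"ALERT {name}: status={resp.status} latency={elapsed:.2f}s")
    except OSError as exc:  # covers network failures and HTTP errors
        print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    while True:
        for name, url in PROBES.items():
            probe(name, url)
        time.sleep(CHECK_INTERVAL_S)
```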

Does Orange Cameroon build its own artificial intelligence (AI) models for fault management and predictive maintenance? Nassar says his company aims to develop support for ongoing maintenance and simulations of maintenance on its own cloud but is still in the process of evolving from a classic telco to a data-driven company, relying on vendors’ AI and automation solutions in the meantime.

He describes this shift as “a very long journey” but adds, “We are moving very fast. We even recently opened our Open RAN Lab, which is unique – we are asking the other operators to come and test and experiment with us. The idea is that it will help all the operators to go faster with their deployments.”

Nassar says the motivation behind Open RAN is to allow operators to act faster in the future and to lower operational costs, while using the RAN as a platform for innovation in business and operational models. He explains, “The future of the current Open RAN will not only provide interoperability and standardisation of RAN elements, but it will also allow different dimensioning of your network. You will have better control over your resources – that is, frequency, radio, equipment and power resources.”

He continues, “Open RAN will open the door to wider competition between vendors and, if we continue that way, not only will operators benefit from this interoperation between vendors’ equipment and more network flexibility at a lower cost, but it will also enable the operators to come together. The future O-RAN will have the capability to merge environments – allowing potentially many operators and vendors at the same site to act as a single operator and offer a universal platform.”

Nassar says, “I see the future where all the resources in terms of vendors, equipment, frequencies and passive infrastructure, such as towers, will be shared by many operators that differentiate themselves based on the services they run on those platforms.”

Reinventing operations through cloudification, automation and disaggregation

Thomas van Briel is SVP, Architecture & Strategy at Deutsche Telekom. He talked to contributing editor Annie Turner about DT’s end-game ambitions – creating value for customers.

Thomas van Briel has worked in architecture-related roles at Deutsche Telekom (DT) in Germany for five years. His background is as a software developer, starting with Nortel in 1998 then a long stint with Swisscom. “The denominator that always draws me is a combination of technology development, organisational development and business development. That’s the heart of what I do and it fits very nicely with the topic of network automation,” he says.

Bridging the innovation gap

Reflecting on progress since he joined DT, van Briel comments, “DT has always been innovative and at the forefront of shaping technology, especially at group level, but we had something of a chasm between innovative parts of the organisation and the operational shopfloor. We have managed to bridge that gap quite a bit, [enabling] people to work together, to build new skills and bring new stuff into operations, leveraging DT’s excellent engineering and execution capabilities.”

One example is DT’s voice operation in Germany which has been rebuilt, he says, “in a cloud-native manner and [we] brought in brutal automation to realize the ‘three-two-one-zero vision’. This sets the goal of three months to roll out a feature (compared to the typical one-to-two years in a traditional IMS environment), two days to deploy new software after it is released by a vendor, one day to implement a bug fix or security patch, and zero ‘nightshifts’.”

Another example is the transport network and IP core, “where we’ve introduced orchestration solutions for tasks such as accelerating the setup of new network links or handling failover when a link goes down,” van Briel adds.

Organisational transformation is always challenging and difficult to set up properly. He explains, “To build or acquire competences, the way we work crosses functions and [involves] partners. We built cross-functional teams with internal people in other parts of the continent like in St. Petersburg, to bring expertise in software development, and modelling and orchestration to the existing operational units. That’s a very effective way to change the spirit in an organisation, to put teams [together] and persistently work on a subject…to make it work in practice.”

Making ONAP operational

DT has been intensively involved in the Open Network Automation Platform (ONAP) community for some time and originally had lab deployments before deciding, “that if we really want to give it a chance, we need operational scope. We combined it with our O-RAN efforts, to do O-RAN automation in a pilot production via ONAP,” van Briel says.

The agile delivery structure means operations in the O-RAN trial deployment were automated from the start, and DT is now focused on building and “delivering one use case after another in an independent service management and orchestration platform that can eventually scale. We make sure those innovation islands, ONAP is just one of them, have a degree of freedom so they can adopt new practices. That is the journey of the last couple of years,” he explains.

Drive Network Automation

Is there a grand plan to join up the islands? Van Briel says, “It’s where our vision comes in. We call it DNA for Drive Network Automation…It’s about customer experience, velocity and efficiency – to become an orchestrated network of networks and services”. How will this come about?

Van Briel says at domain level, DT uses a continuous integration and continuous delivery (CI/CD) pipeline, a test automation framework, real-time inventory, controllers for applications, cloud and infrastructure, as well as data analytics and messaging, security capabilities and, last but not least, intent-driven orchestration.

“Those are pretty much the components that we need, and most of them have degrees of open source tools in them. The degree of open source utilisation varies – from vendors that integrate open source components in their automation products to complete open source approaches like ONAP. We pick and choose our approach from the complete range of options.

“Some components are reused across domains, but not individually instantiated, like a data lake, or Kafka bus or common data ingest. Some parts of the CI/CD pipelines are engineered and operated centrally, but then you still instantiate your own pipeline locally: you build your own practices, your own models, and put the things together there.”
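To illustrate what publishing onto a shared “Kafka bus” for common data ingest can look like, here is a minimal sketch using the kafka-python client; the broker address, topic name and event schema are assumptions for illustration, not details of DT’s platform.

```python
# Illustrative sketch of a domain publishing telemetry onto a shared Kafka
# bus for common data ingest. The broker address, topic name and event
# schema are placeholder assumptions, not an operator's actual setup.
import json
import time
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "domain": "transport",
    "metric": "link_utilisation_pct",
    "value": 87.5,
    "timestamp": time.time(),
}

producer.send("network-telemetry", value=event)  # placeholder topic name
producer.flush()
```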

Each DevOps team is in charge of its own automation process. “They create a bounded context within a domain that can live on its own, adding resilience, independence and also a means to deal with complexity overall,” van Briel elaborates.

Next, all the services are exposed from the domains, “So that they become usable by overarching services or by other domains – that’s where standards like ETSI’s Zero-touch network and service management (ZSM) and TM Forum’s Open APIs come in. We use [the APIs] for catalogue management, for configuration and lifecycle management, and wherever we find them useful,” he says, noting that as they are of necessity generic, it takes some effort to make them operational.
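As an indication of what using the TM Forum Open APIs for catalogue management can look like, the hedged sketch below queries a TMF633-style Service Catalog endpoint over REST; the host, authentication and exact path are assumptions that will differ per deployment.

```python
# Hedged sketch of a TM Forum Open API (TMF633 Service Catalog style) query.
# The host, API version in the path and the auth token are placeholder
# assumptions; real deployments expose their own base URLs and security.
import requests

BASE_URL = "https://api.operator.example/tmf-api/serviceCatalogManagement/v4"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

resp = requests.get(
    f"{BASE_URL}/serviceSpecification",
    params={"lifecycleStatus": "Active", "limit": 10},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()

# A collection GET returns a list of service specification records
for spec in resp.json():
    print(spec.get("id"), spec.get("name"), spec.get("version"))
```

Because the APIs are deliberately generic, fields such as lifecycleStatus still have to be mapped onto each operator’s own catalogue model, which is the effort to make them operational that van Briel refers to.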

“Having a set of common standards, guidelines, interfaces, practices etcetera applied, no matter where this automation is happening, will enable us to put the bricks together and make it work across a huge organization – a challenge in itself,” he continues.

Lessons from virtualisation

Like most operators, DT learned hard lessons about automation through virtualization efforts. Van Briel says, “When we tried to deal with [virtualized] workloads, we always ended up with specific infrastructure tightly mingled with the application at hand, and limited automation…We also tried a group-wide set of services to be operated in a cloudified fashion and that proved difficult, mainly because the available workloads and underlying technologies were immature.”

In short, often the technologies hadn’t been created for that purpose and DT is determined not to repeat the experience. Van Briel explains, “We developed standard test cases so if any vendor says it is cloud native, we go into the lab and have a look against the use cases because we were burned in the past. For me, that’s a foundation.”

Creating value

Vendors are under pressure in another major area too: disaggregation. He states, “It is very important to us as a means to accelerate innovation; to make sure that we have sufficient choice within the vendor landscape and new players. This has some consequences: you must go deeper into value creation yourself, in integration, in testing, in automation, and adopt different practices to deal with that.”

He continues, “In that sense, there are three changes in the industry: cloudification that enables it all; disaggregation for smaller components to be handled and integrated; and automation to deal with the complexity and deliver services from all of it. This hat-trick is changing the way operations run.”

Having created some value through its islands of innovation, “Now it’s about making it broader, and making sure that things can be orchestrated across networks and domains, and across layers to deliver more complex solutions to the customer,” van Briel says.

He sees 5G as foundational to creating value, pointing out that the 5G Core architecture was designed to be cloud native and to support better automation. He says, “By having a decomposed architecture, it is possible to combine pieces from different vendors to bring something like slicing to life where we combine aspects of the RAN, transport and core networks to deliver end-to-end service to a customer.”
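To make the slicing example concrete, the sketch below models an end-to-end slice request that combines a 3GPP S-NSSAI identifier (slice/service type 1 = eMBB, 2 = URLLC, 3 = mIoT, plus an optional slice differentiator) with per-domain targets for RAN, transport and core; the data structure and target values are illustrative assumptions, not DT’s orchestration model.

```python
# Illustrative data-structure sketch of an end-to-end slice request.
# The S-NSSAI fields follow 3GPP conventions (SST 1 = eMBB, 2 = URLLC,
# 3 = mIoT); everything else is an assumption for illustration only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SNssai:
    sst: int                 # Slice/Service Type, e.g. 2 for URLLC
    sd: Optional[str] = None # optional Slice Differentiator (6 hex digits)

@dataclass
class SliceRequest:
    s_nssai: SNssai
    ran_targets: dict = field(default_factory=dict)
    transport_targets: dict = field(default_factory=dict)
    core_targets: dict = field(default_factory=dict)

# Hypothetical low-latency slice for an industrial customer
request = SliceRequest(
    s_nssai=SNssai(sst=2, sd="0A1B2C"),
    ran_targets={"latency_ms": 5, "reliability": 0.99999},
    transport_targets={"bandwidth_mbps": 200},
    core_targets={"dedicated_upf": True},
)
print(request)
```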

How far are we from 5G slicing being a commercial product and a self-service one at that? Van Briel says, “This will evolve in steps. It’s a matter of how quickly the markets will adopt [them]…The technological capabilities are pretty much ready to go. The question now is how to bring it to the marketplace.”

Managing a network of networks

He adds, “I think we will first see some deployments in campuses and we are exploring other approaches like exposing APIs which you can use for slicing or more generally to control the quality of your connectivity and enable revenue streams that go beyond our own company’s through B2B2C models.”

 In other words, an operator could expose its services to another operator to run over that second operator’s infrastructure, thereby extending the originator’s footprint to serve its own customers or its customers’ customers.

Van Briel says, “That’s the end-game. We call it managing a network of networks. There are a number of flavours. One is being able to integrate a connectivity service from a partner to deliver – especially in the beginning – to B2B customers.”

The intention is to orchestrate the provisioning “by combining overlay and underlay services on- and off-footprints, making sure they deliver a connectivity solution to the B2B customer. That’s the first goal.”

He agrees the end-game is ambitious and challenging but points out, “This is not just driven out of the networking department, but a strategic imperative we set ourselves. We have great visionaries…they are thinking all the way up to global orchestration. We must be able to integrate our services, with the multi-cloud approach from the customer’s perspective, and make it work.

“It might be to the point where we enable innovation by utilising our services sitting on top of our network capabilities…we are then integrated into the value chain without having the end-customer relationship in every single instance.”

Van Briel concludes, “If we don’t build the muscle to integrate all that as quickly and as flexibly as customers expect for their multi-cloud solutions and applications…over time, we will lose relevance and be pushed down the chain in value creation. We’re in a great position to ensure that we are the ones to help our customers solve their needs…As we reach out with our capabilities – be that in security or connectivity for underlay networks or other things – we will really shine.”


FutureNet Asia, Rakuten Speaker Interview

Miro Salem, Global Head of Artificial Intelligence & Autonomous Networks, Rakuten Mobile, is speaking on a Panel at FutureNet Asia on 27th October. We recently caught up with Miro to get his thoughts on the panel topic: Intelligent operations for new 5G and Edge ecosystems: A game changer?

How do you start building intelligent operations for a 5G and edge world?

A solid foundation is key. Intelligent operations require the right tools and culture, from efficiently extracting and understanding network data to taking intelligent action. Without a stable ecosystem of tools (e.g., data platform, AI platform, use-case backlogs, etc.) and full synchronization across the organization (e.g., RAN, core, cloud, security, DevOps, etc.), intelligent operations will be siloed, duplicated, and inefficient. They require alignment and buy-in from all stakeholders. Only with this clarity can true intelligent operations across the various layers and elements of the production network flourish.

Should CSPs adopt a multi-cloud model for delivering immersive, multi-access edge computing services and use cases?

Yes. 5G brings new opportunities and challenges that we must overcome. One of those challenges is the much lower latency requirement, all the way down to the physical layer. 5G has a much shorter Transmission Time Interval (TTI) than its predecessor, 4G. This means processing and round-trip delay must be carefully considered, especially for mmWave use cases. Pixel-streaming gaming is one example of an application that demonstrates the need for edge computing services, and more and more applications will need the edge for low latency (e.g., AR and VR). The need for the multi-cloud model becomes stronger when considering inter-operator edge clouds.
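As a back-of-the-envelope illustration of the shorter transmission intervals Salem mentions, the snippet below derives 5G NR slot durations from the standard numerology relationship (slot duration of 1 ms divided by 2^μ, with subcarrier spacing of 15·2^μ kHz) and compares them with LTE’s fixed 1 ms TTI; it illustrates air-interface timing only, not a full round-trip latency model.

```python
# Back-of-the-envelope comparison of LTE's 1 ms TTI with 5G NR slot
# durations derived from the numerology formula: slot = 1 ms / 2**mu,
# where subcarrier spacing = 15 * 2**mu kHz (3GPP NR numerologies 0-3).
LTE_TTI_MS = 1.0

for mu in range(4):
    scs_khz = 15 * 2 ** mu
    slot_ms = 1.0 / 2 ** mu
    speedup = LTE_TTI_MS / slot_ms
    print(f"mu={mu}: SCS {scs_khz:>3} kHz, slot {slot_ms:.3f} ms "
          f"({speedup:.0f}x shorter than the LTE TTI)")
```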

How can CSPs embed the right capabilities and technologies for scalability and automation in their 5G propositions when the business case for growth remains unclear?

The digital transformation of telco has given operators what I believe to be the ultimate enabler: virtualization. Embracing the fact that new technologies, services, and applications can be deployed within minutes, even seconds, including in the RAN, we can tailor best-fit solutions and adapt as we go. A fully virtualized, containerized, modular end-to-end network allows operators to respond based on customer needs, not vendor requirements. Like any entity in search of a business model, agility and speed in adapting to customer needs are essential. Business models are discovered, not dictated.

How important is automation’s role in intelligent operations?

The role is fundamental. Automation is the first practical step towards autonomous networks, which is our final goal. Intelligence is a key enabling suite of technologies and mindsets to take the next step towards autonomy. That said, it is important to remember that sometimes automation is sufficient for the task at hand. As with all things in life, it depends on the problem we are trying to solve.

To hear more insight from Miro, join the event and register here