Portability of automations is required so that, as telcos' networks and processes evolve, existing automated processes are not disrupted or put at risk.
One of the main challenges telcos face when it comes to automation is that they operate complex network environments. They have to run and maintain legacy platforms while integrating new investments and technologies – and rolling out coverage.
Most are beginning to automate operations, while some are well advanced on this journey. But as your network evolves, can you retain the benefits of investments already made in automated operations and processes?
Portability is now key, so that when systems, platforms and infrastructure are replaced or upgraded, automations are unaffected.
Portability – fundamental to strategic automation
Let’s use a very specific example to explain why – and point to a solution. The transport layer is, of course, fundamental. Not only is it key to accessing the core, but it also enables the high-speed, low-latency connectivity to base stations and the disaggregation that’s essential for the 5G SA architecture (and future Fixed 5G) and the delivery of slice-based services with exceptional performance demands.
As a result, operators often upgrade their transport networks, adding new technology or even selecting different vendors as they grow their footprint and require new levels of performance (all those edge processing sites must be connected with the latest generation of optical routing engines).
But what does that mean for any automations that have been implemented to optimise transport layer performance? Take automated network congestion management as an example.
This – an example of a common transport layer automation enabled by We Are CORTEX – spans multiple components, systems and processes. It’s a cross-domain automation that covers a number of key functional requirements, spread across different platforms:
- Real-time alarm monitoring, covering traffic load indicators, QoS metrics, router status and so on
- Demand trends and expected peak / low load thresholds
- Router configuration
- Network topology and paths
- Inventory data covering router capacity, status, availability and more
The automation is a set of rules-based logic that dynamically adapts to real-time events. It uses the information available from the different systems to autonomously reroute traffic when certain conditions are met – for example, when a router approaches a threshold set to avoid congestion that could otherwise lead to bottlenecks and impaired performance.
The automation can:
- Set traffic thresholds and adjust according to predicted patterns
- Detect traffic levels
- Map alternative routes for the traffic
- Correlate available capacity with the new demands
- Reroute the traffic during the surge
- Restore routing when traffic abates
So, when an alarm indicates that capacity thresholds are close to being reached (again, rules can determine the acceptable margin), commands can be sent to configure alternative routes, creating pathways through which traffic can be diverted to ensure uninterrupted service.
The rules can also take account of context. If the traffic surge comes at an unexpected time, it may be indicative of other problems – or it may simply abate. If it’s during a peak period, then fallback capacity may already have been built into the network.
The point is that all of these eventualities can be foreseen, and the automation can trigger the appropriate commands to ensure effective traffic management. This is achieved through integration with the underlying components that actually route the traffic, and the operational systems that support them, like the inventory management platform.
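To make the idea concrete, here is a minimal sketch of what such rules-based logic might look like. All names, thresholds, and the `RouterState` structure are illustrative assumptions for this example, not part of the CORTEX platform; a real automation would assemble this state from live alarm, inventory, and topology feeds.

```python
from dataclasses import dataclass

# Illustrative thresholds; in practice these would be set (and adjusted)
# from demand trends and predicted peak / low load patterns.
CONGESTION_THRESHOLD = 0.80   # reroute when utilisation exceeds 80% of capacity
RESTORE_THRESHOLD = 0.60      # restore original routing once traffic abates

@dataclass
class RouterState:
    """Snapshot assembled from alarms, inventory, and topology data."""
    name: str
    utilisation: float          # current load as a fraction of capacity
    peak_period: bool           # does this fall in a predicted peak window?
    alternatives: list          # candidate routers with spare capacity

def congestion_action(router: RouterState) -> str:
    """Rules-based decision: reroute, restore, flag for review, or do nothing."""
    if router.utilisation >= CONGESTION_THRESHOLD:
        if not router.peak_period:
            # A surge at an unexpected time may indicate a fault elsewhere.
            return f"flag-for-review:{router.name}"
        if router.alternatives:
            # Divert traffic to the first alternative with spare capacity.
            return f"reroute:{router.name}->{router.alternatives[0]}"
        return f"alarm-no-capacity:{router.name}"
    if router.utilisation <= RESTORE_THRESHOLD:
        return f"restore:{router.name}"
    return "no-action"
```

For example, `congestion_action(RouterState("edge-7", 0.85, True, ["edge-9"]))` returns a reroute decision, while the same load at an off-peak time is flagged for review instead – the context-sensitivity described above.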
But what happens when a system, solution, process, or router (or equipment vendor), for example, is changed? Can the automation handle this without any risk or impact on performance and, ultimately, customer QoE?
This is a key question: having invested in the automation of a process or workflow, it’s essential that this investment delivers over time, so that capital is used efficiently. Central to this is the reuse of automations.
Well, let’s think about that for a moment. It is the logic behind the automation that should deliver the required solution. This logic is detached from the underlying interfaces to the necessary equipment and must remain so.
It may well use specific commands to do certain things, but the actions that trigger those commands are abstracted from them and remain part of the logical process and its flows. In other words, the automation doesn’t know anything about the equipment or software it manages; only the integration layer below needs to have any such information. So, if the interface to a new solution changes, it’s only the integration layer that needs to adapt – not the logic of the process. This dramatically simplifies network changes and ensures that automations can be protected.
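One way to picture this separation in code is the classic adapter pattern. The sketch below is a simplified illustration under assumed names (the vendors, commands, and `RouterAdapter` interface are hypothetical, and this is not how the CORTEX platform is implemented); the point is only that the automation logic calls an abstract interface, while vendor-specific commands live solely in the integration layer.

```python
from abc import ABC, abstractmethod

class RouterAdapter(ABC):
    """Integration layer: the only place vendor-specific knowledge lives."""
    @abstractmethod
    def divert(self, flow: str, target: str) -> str:
        ...

class VendorACli(RouterAdapter):
    # Hypothetical vendor A: configured via CLI-style commands.
    def divert(self, flow: str, target: str) -> str:
        return f"route set {flow} next-hop {target}"

class VendorBApi(RouterAdapter):
    # Hypothetical vendor B: configured via a REST-style call instead.
    def divert(self, flow: str, target: str) -> str:
        return f"POST /routes {{'flow': '{flow}', 'via': '{target}'}}"

def reroute(adapter: RouterAdapter, flow: str, target: str) -> str:
    """Automation logic: identical whichever adapter is plugged in."""
    return adapter.divert(flow, target)
```

Swapping `VendorACli` for `VendorBApi` changes only the adapter passed to `reroute`; the automation’s decision logic never needs to be touched.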
At We Are CORTEX we term this ‘portability’: the ability to decouple an automation – cross-domain hyperautomation or otherwise – from one solution or piece of equipment and connect it to another. In our congestion management example, it means that should a telco want to replace its router vendor, which may require different interfaces and protocols, it can.
If you want to then extend the scope of the automation by, say, adding new slicing capabilities to provide dynamic capacity management, then the automation can be extended, but the basic logic will be unaffected. Which is really what all operators need – the ability to protect investments (like those in automation) and the ability to extend those investments incrementally to add new capabilities.
Portability is essential for automation longevity
Our automation platform is agnostic to the underlying interfaces and protocols with which an automation interacts to achieve the task for which it was built. It means that components, solutions, or processes can be swapped in and out without any disruption to the overall cross-domain automation, and new extensions can be added.
It doesn’t matter whether these interfaces use protocols that are publicly specified, proprietary, vendor-specific, or other forms of input/output layers.
Our platform ensures that from the outset the logic of the automation is independent of the interfaces that provide the information about the event and tasks that need to be performed.
To find out more about our flexible, portable, reusable, and extensible automation platform, download our latest whitepaper by filling out the form below.