LLMs can help operators boost common practices, such as service assurance and RCA, thanks to their ability to process and parse data from disparate sources – providing human-intelligible output and clear inputs for ticketing systems and audit trails.
AI is starting to play a significant (and growing) role in how operators deliver service assurance, and AI-based approaches have already penetrated multiple areas within the overall service assurance domain. For example, according to GSMA Intelligence, operators are currently using AI to “enhance service assurance” across key dimensions, including:
- Customer complaint analysis
- Fault prediction
- Root cause analysis
- Customer intent predictions
- Closed-loop automation
So, good progress is being made and there’s clearly widespread enthusiasm about the potential of AI to bolster efforts to optimise service assurance. But what’s really interesting here is that no single, pervasive approach to AI is being taken. AI is part of the available toolkit, but the question of where and how to use it has many possible answers.
In fact, what we’re seeing is that the answers depend on two key variables:
- What’s the current situation in terms of the vendor solutions in place? And
- Which kind of AI might be most appropriate to deploy, given the answer to the first question?
That’s because innovations in AI give us a number of good options – agentic, generative and core AI among them – and while each offers different capabilities and advantages, all are relevant to service assurance. Each will play its role, but in different use cases and scenarios.
LLMs provide game-changing data processing capabilities. Let’s train them on telco data
One promising avenue is the adoption of LLMs to provide data processing capabilities. That’s because service assurance depends on access to data – not just live network information but also reports and outputs from different platforms that support service delivery.
The problem for service assurance – which involves reviewing different data inputs in order to pinpoint issues and make the right remedial decisions – is that the crucial data that drives operational performance exists in many different formats.
Yes, it would be great to introduce an entirely new data processing fabric that delivers harmonisation across these formats in multi-vendor networks, but this is an ambitious step that, for many, will be some years away.
LLMs, however, can give us a jump-start here, paving the way for assurance processes that can leverage all of the relevant data. That’s because LLMs excel in parsing data and extracting meaningful results and information.
Gather information from disparate sources
As readers will know, it’s not just that the data necessary to support service assurance procedures and operations exists in multiple formats, it’s also that interpreting these different data sets requires specialist knowledge.
LLMs offer a remedy. With the right training, LLMs can be taught to interpret different inputs and generate human-readable outputs that summarise the information they have ingested – and these same outputs can then be fed into systems that accept human-readable inputs.
Take Root Cause Analysis (RCA) as an example. Like other such tasks, RCA requires data to identify the underlying problem behind an issue. Many operators already benefit from RCA automation, but records of the issues and actions also need to be maintained and logged in ticketing systems for audit and tracking purposes.
A typical investigation may require data from a wide range of sources, including:
- SNMP reports
- Performance management counters
- Information from inventory systems
- Network data
- Fault and configuration management data
The raw inputs from the various systems involved constitute a barrier, because deep domain expertise is often required to understand them. LLMs provide a remedy to this problem: by collecting these inputs, interpreting them, and then presenting them in a human-readable format, the LLM can create an objective report of the incident.
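As a rough illustration, this gathering step can be sketched as a function that labels each raw input and folds everything into a single prompt for an LLM to summarise. The source names, sample payloads and instruction wording below are illustrative assumptions, not output from any specific platform:

```python
def build_rca_context(sources: dict) -> str:
    """Combine labelled raw inputs (SNMP traps, PM counters, inventory
    extracts, etc.) into one structured prompt for an LLM to summarise."""
    sections = [f"### {name}\n{payload.strip()}" for name, payload in sources.items()]
    instruction = (
        "You are a network operations assistant. Using the labelled data "
        "below, produce a plain-English incident summary and a likely root cause."
    )
    return instruction + "\n\n" + "\n\n".join(sections)

# Hypothetical example using two of the input types listed above
prompt = build_rca_context({
    "SNMP traps": "linkDown ifIndex=14 severity=major",
    "PM counters": "cell_throughput_mbps: 12.3 (baseline 98.1)",
})
```

In practice the assembled prompt would be sent to whichever model the operator has chosen; the value is that each source keeps its label, so the LLM’s summary can reference where each observation came from.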
Better still, an LLM can also suggest appropriate remedial actions (often based on deterministic flows in which alarm x requires response y, for example), alongside any automation that has been implemented.
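A deterministic “alarm x requires response y” flow can be as simple as a lookup table consulted alongside the LLM, with anything unmapped escalated for analysis. The alarm types and actions below are hypothetical examples:

```python
# Hypothetical deterministic playbook: alarm type -> recommended response.
REMEDIATION_PLAYBOOK = {
    "linkDown": "Check optics on the affected port; fail traffic over if the link stays down.",
    "highCpu": "Identify the top process; schedule a controlled restart if load persists.",
}

def suggest_action(alarm_type: str) -> str:
    """Return a known remedial action, or defer to human/LLM triage."""
    return REMEDIATION_PLAYBOOK.get(
        alarm_type, "No deterministic rule; escalate for analysis."
    )
```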
These reports can then be fed as enriched information into ticketing systems, such as ServiceNow, where they can be actioned or stored, as appropriate, to create a clear, trackable — and easily understood — auditable record of the incident and the required actions.
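To illustrate the hand-off, here is a hedged sketch of packaging an LLM-generated report for a ticketing system. The field names follow ServiceNow’s standard incident-table conventions, but the helper, values and endpoint comment are illustrative assumptions, not a specific integration:

```python
import json

def build_incident_payload(summary: str, report: str, urgency: str = "2") -> str:
    """Package an LLM-generated incident report as a JSON ticket body
    (field names follow ServiceNow incident-table conventions)."""
    return json.dumps({
        "short_description": summary,
        "description": report,
        "urgency": urgency,
        "category": "network",
    })

# The payload would then typically be POSTed to the Table API, e.g.
# POST https://<instance>.service-now.com/api/now/table/incident
payload = build_incident_payload(
    "Link down on core router",
    "LLM summary: linkDown trap on ifIndex 14; throughput dropped 87% below baseline.",
)
```

Because the report is already human-readable, the same text serves both the engineer actioning the ticket and the audit trail.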
In this case, LLMs both support any automation that’s in place and also provide fuel to support the implementation of further automations in the future as transformation proceeds.
Train your LLM on your data and boost AI adoption gradually
Operators can choose the LLM they need, according to their data governance and sovereignty goals, ensuring that the data on which the model is trained remains private and secure in their network and operations centres.
So, while operators continue to explore how AI can deliver for their networks, operations and customer interactions, and while the precise role of different forms of AI takes shape, it’s already clear that applying LLMs to telecoms operations can deliver in the service assurance domain – augmenting, but not necessarily replacing, existing approaches.
The same report from GSMA Intelligence goes on to report on areas in which operators think Agentic AI, in particular, can offer most value. These include, among others:
- Automated customer complaint resolution
- Autonomous fault resolution
- Customer experience prediction
- End-to-end incident management
- Root cause analysis across network layers
We’re not there yet – but LLMs can help you move up the adoption curve, accelerating data processing and the exposure of insights to your teams, and paving the way for future innovation and Agentic AI adoption.
Because here’s the thing about AI: it’s not an all-or-nothing event, but a gradual process of iteration and adoption – there isn’t a single right answer. Your progress towards AI adoption will be different from that of your peers; it all depends on your point of departure.
Read our new paper to find out how LLMs can boost key operational processes





