Insight

AI’s role in supporting vulnerable customers – from detection to intervention


By Molly Preleski, Richard Berkley

Vulnerable customers are a greater priority than ever for UK financial institutions. The number of vulnerable customers is rising; half of UK adults show at least one characteristic of vulnerability. At the same time the topic remains a major focus for the FCA, which has levied multi-million-pound fines for poor treatment of vulnerable customers over the last 12 months.

The FCA’s recent review of regulated firms reported improvements in the identification of vulnerable customers. However, it also found room for improvement in the quality and consistency of support provided to those customers. Key challenges for firms include integrating internal insights with data from local authorities and other third parties; building a detailed picture of every vulnerable customer’s needs; and building on insights to deliver actions that improve customer outcomes.

With AI investment gathering momentum across financial services, how can financial institutions use AI to achieve a step change in identification, understanding, and support that will take their treatment of vulnerable customers to the next level?

Embedding AI into customer-facing processes and everyday workflows enables firms to leverage existing data, identify vulnerabilities faster, and deliver proactive interventions. We see potential to deliver targeted improvements in four key areas.

1. Move from reactive detection to proactive identification

Most financial institutions already gather data about vulnerabilities, typically during key transactions or customer interactions. Firms can use AI to go further, moving beyond basic detection to proactive identification and prediction.

  • Capture data faster and better by building a full picture of financial, physical, cognitive and situational vulnerabilities. AI can use natural language processing to analyse calls, chats, and emails for language indicating vulnerability, and sentiment analysis to spot intonation signalling frustration, confusion, or stress.
  • Connect disparate information using behavioural analysis to uncover trends or patterns that indicate vulnerability. This includes financial behaviour, such as missed payments and unusual or high-risk transactions, and communication changes, like cancellations or repeated calls.
  • Categorise and prioritise different types of needs, using predictive AI models that combine data such as financial history and external events with the findings of behavioural analysis, to flag leading indicators of vulnerability.
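To make the idea of combining behavioural signals concrete, here is a minimal, purely illustrative sketch of a vulnerability-scoring function. Every field name, weight, and threshold below is a hypothetical assumption for illustration — a real system would derive signals from transaction history, transcripts, and CRM data, and would use a trained, fairness-tested model rather than hand-set weights.

```python
from dataclasses import dataclass

@dataclass
class CustomerSignals:
    """Hypothetical per-customer signals; field names are illustrative."""
    missed_payments_90d: int    # behavioural signal from payment history
    negative_sentiment: float   # 0.0-1.0, e.g. from call/chat sentiment analysis
    repeated_contact_7d: int    # communication-pattern signal

def vulnerability_score(s: CustomerSignals) -> float:
    """Blend signals into a 0-1 score. Weights are illustrative only;
    a production model would be trained and validated for bias."""
    score = (
        0.4 * min(s.missed_payments_90d / 3, 1.0)
        + 0.4 * s.negative_sentiment
        + 0.2 * min(s.repeated_contact_7d / 5, 1.0)
    )
    return round(score, 2)

def needs_review(s: CustomerSignals, threshold: float = 0.6) -> bool:
    """Flag for human review rather than automated action."""
    return vulnerability_score(s) >= threshold

at_risk = CustomerSignals(missed_payments_90d=2,
                          negative_sentiment=0.8,
                          repeated_contact_7d=4)
print(vulnerability_score(at_risk), needs_review(at_risk))  # 0.75 True
```

Note that the output is a flag for human review, not an automated decision — consistent with the role of trained staff discussed later in this piece.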

2. Move from patchy support to timely, tailored interventions

The FCA’s review found that financial institutions were often unclear, untimely, or inconsistent in their engagement with vulnerable customers. AI can help firms to make tailored, intelligence-led responses to vulnerability – providing the right support, at the right time, across every channel.

  • Initiate proactive outreach before issues accelerate. Agentic AI can act on the outputs of predictive models to send reminders, share educational resources, or trigger preventative interventions such as payment plans. It can also suggest personalised mitigations adapted to each vulnerable customer's circumstances.
  • Automatically route customers to specialists with suitable training and capabilities, including the authority to provide additional support or forbearance if required.
  • Personalise communications using AI to adjust the content, language, tone, and delivery of communications to vulnerable customers’ circumstances and levels of knowledge.
  • Embed support into design, developing products and services with vulnerability considerations built into every stage of the customer journey.

3. Move from under-supported staff to empowered teams

Many financial institutions struggle to mesh the capabilities of staff with those of technologies such as AI. Integrating agentic AI into customer-facing activities equips human agents with the tools to change a box-ticking approach into a proactive one.

  • Build a hybrid delivery model that supports staff but doesn’t replace them, helping employees to provide vulnerable customers with emotionally intelligent, empathetic interactions.
  • Provide live guidance to human agents, using agentic AI to monitor conversations and provide real-time prompts and context-specific suggestions for action, reaction, or escalation.
  • Deliver continuous, adaptive training to elevate the skills of frontline staff. Agentic AI can deliver automated, personalised coaching that helps employees to identify and carry out targeted interventions across a range of customer vulnerabilities.
  • Create varied communication tools - including voice assistants, live transcription, and chatbots - that increase accessibility, helping customer service agents to support customers with disabilities, cognitive impairments, or language difficulties.

4. Move from opaque performance to transparent outcomes

AI can play a vital role in the management and reporting of customer treatment, helping firms and leadership teams to monitor the outcomes being achieved.

  • Monitor the treatment of vulnerable customers. Agentic AI can provide a single overview of customer interactions and interventions across fragmented systems or businesses - detecting any gaps, inconsistencies, or shortcomings in the support provided.
  • Gather evidence of identification and intervention for reporting to boards and regulators, helping firms to demonstrate good outcomes and achieve compliance with required standards.

The limitations of AI mean that the judgement and empathy of trained staff remain vital to supporting vulnerable customers. Potential risks need managing, including inappropriate bias from poorly trained models, data breaches due to weak controls, and poor accountability from a lack of transparency.

Risks can be managed through a combination of governance guardrails spanning design, regulation, and continuous oversight. For example, many financial organisations use frameworks to ensure fairness, transparency, and accountability, with a particular focus on protecting vulnerable customers, who can be at greater risk of harm. However, the human element remains essential across all aspects of risk management, ensuring that automated decisions are checked and explained clearly.

AI is a potential game-changer for the treatment of vulnerable customers. The ability to transform outcomes at scale means that AI will be critical in enabling the financial industry to provide a growing vulnerable population with timely, tailored support.

About the authors

Molly Preleski, PA risk and regulation expert
Richard Berkley, PA data analytics & business intelligence expert
