Lloyds Builds AI Agent Platform as Governance Push Deepens

Envoy is designed to help Lloyds teams build, reuse and monitor AI agents across the group, as the bank expands its responsible AI capability.


Lloyds Banking Group has launched Envoy, an internal platform for building and running AI agents across the organisation with central controls, monitoring and oversight.

Built with Google Cloud, the platform gives Lloyds teams a way to create, train, use and share AI agents through a governed internal environment. It is intended to support the responsible use of agentic AI while improving customer journeys and how colleagues work.

The launch comes as Lloyds expands its responsible AI expertise and appoints new senior leadership for data and AI, putting governance, safety and accountability at the centre of its wider adoption of the technology.

Building agents inside the bank

Envoy provides ready-to-use templates so teams can build AI agents without starting from scratch for each use case. The platform is designed to reduce duplication and make it easier for teams to reuse proven tools across the group.

Ron van Kemenade, chief operating officer at Lloyds Banking Group, said: “Envoy helps our employees become more productive, improve customer journeys, and launch potentially disruptive business models.”

AI agents differ from standard chatbots because they can maintain context, complete multi-step tasks and interact with tools or systems. For banks, that creates potential uses across customer support, internal operations, fraud prevention and colleague productivity, while raising questions about oversight, accountability and data controls.

Envoy connects to Lloyds’ existing large language model platform, with built-in checks for safety and risk. Human oversight will remain in key decisions, while agent behaviour and performance can be monitored once tools are live.

Responsible AI infrastructure

The Envoy launch follows the recent announcement that Lloyds has strengthened its Responsible AI team with specialist hires across governance, ethics, assurance, AI safety, threat intelligence, guardrails and technical research.

The team is led by Dr Suzanne Brink, head of Responsible AI, and sits within the bank’s AI Centre of Excellence. Every AI use case follows a defined Responsible AI journey, with oversight from concept through deployment and ongoing monitoring.

Dr Rohit Dhawan, group head of AI at Lloyds Banking Group, said: “Responsible AI is what makes innovation at scale possible. These appointments reflect our continued investment in the skills, governance and safeguards needed to deploy AI in a way that earns trust, delivers value, and creates positive outcomes for customers and the economy.”

All AI use cases are captured in a central model and use case inventory, giving Lloyds visibility across projects. The expanded Responsible AI team will support responsible-by-design practices, AI assurance, research and automated controls.

From pilots to reusable tools

One of Envoy’s main features is its internal Agent Marketplace. Agents that are ready for wider use can be published internally, allowing other teams to find, reuse and build on existing tools.

The marketplace model gives teams a shared route for deploying AI agents, rather than creating separate tools in different parts of the organisation. Lloyds says this should support a more joined-up use of AI across the group.

Envoy also allows agents to remember relevant details during conversations while following rules on data privacy and retention. In customer journeys, this could reduce the need for customers to repeat information when returning to the same enquiry.

AI reaches senior decision-making

Lloyds’ AI work is also extending beyond operational use cases. Earlier in April, the bank was reported to be trialling an AI ‘board bot’ developed by Board Intelligence to help directors review confidential information, prepare for meetings and reduce bias in decision-making.

Nicola Putland, corporate governance director at Lloyds, told The Times: “We see real potential for AI to support decision making in boardrooms when used carefully and responsibly. We are trialling AI tools to support us to better prepare for discussions through faster analysis, and access to a broader range of perspectives.”

The boardroom trial sits alongside Lloyds’ wider push to bring AI into internal decision-making and customer support. It also helps explain the bank’s emphasis on controlled deployment, human oversight and auditability.

Leadership and customer use cases

Lloyds has recently appointed Sameer Gupta as chief data and AI officer, reporting to van Kemenade. Gupta joins from DBS Bank in Singapore, where he was chief analytics officer, and will lead the next stage of Lloyds’ AI strategy.

His remit includes scaling AI across the business, improving customer experiences, strengthening fraud prevention and supporting colleagues with better tools and insights. Lloyds has also launched an AI-powered financial assistant designed to give customers personalised, round-the-clock support.

Gupta said: “Used well, AI can help customers get the right support more quickly, protect them from fraud and make managing money simpler and more intuitive. I’m looking forward to working with colleagues across the Group to apply AI in a responsible way that delivers better outcomes for customers and communities.”

Trust and operational resilience

The focus on safety and auditability also sits within a highly regulated environment for banks’ digital systems. In an update to the Treasury Committee last month, Lloyds said a March mobile banking app incident may have affected up to 446,915 customers who logged in and viewed their transaction list during the incident window, while up to 107,937 clicked through to view transaction details.

Lloyds said account balances were not affected, passwords and login details were not visible, and customers were not able to move money from anyone else’s account. The bank’s fraud analysis found no increase in average daily fraud volumes among the potentially affected customers, and it has made around £201,000 in goodwill payments overall.

The incident was not related to AI, but it illustrates the level of scrutiny banks face when introducing or scaling digital systems. Lloyds has framed Envoy around controls, monitoring, audit trails and human oversight as it looks to expand the use of AI agents internally.