If 2023 and 2024 were the “wow years” for AI, 2025 was the year when reality quietly took the wheel. The proofs of concept did not stop. The demos stayed impressive. But in banks, fintechs and payment companies, the tone of the conversation changed.
The question shifted from “What can this model do?” to “What will this change in our P&L, in our risk profile, in our operations and in the way we treat customers over the next three years?”. That movement from dazzle to discipline may be the most important paradigm shift of the year.
In financial services and payments, AI has moved out of labs and innovation teams and into the flow of day-to-day work: AIOps, smart payment orchestrators, credit underwriting, fraud monitoring, collections, contact centres, onboarding and compliance reviews. The debate is no longer about which tools to use. It is about the shape of the operating environment we are building: how intelligence will be embedded in our infrastructure, how it will be governed and who will be accountable when things go wrong.
We find ourselves a little like Alice stepping through the looking glass. The landscape is recognisable, but the rules bend and twist in ways our old maps no longer explain. We need new frameworks not only to understand what is happening but also to decide consciously what kinds of systems we want to build.
2025: The Year the Questions Got Harder
By now, very few people in financial services still argue about whether AI matters. Surveys over the last two years show that a clear majority of financial institutions have already deployed AI models in one or more important processes, and that usage is accelerating. McKinsey, for example, estimates that generative AI alone could create between 200 and 340 billion dollars in annual value for the global banking sector if it is deployed well. Something real is on the table.
2025 did, however, expose an uncomfortable gap between ambition and readiness. A recent model risk survey found that roughly 70 percent of financial institutions have integrated AI-driven models into their operations, but many are still working out how to bring those models fully into their model risk and governance frameworks. Supervisors and standard setters are voicing similar concerns. Model risk, data quality and governance weaknesses are now explicitly flagged as AI-related vulnerabilities with potential financial stability implications.
At the same time, the economics of AI at the infrastructure level have started to look much less like a free lunch. In a recent interview, IBM CEO Arvind Krishna did some very simple arithmetic on the current AI data centre buildout. At today’s costs, he estimated that fully equipping a one gigawatt AI data centre can run to around 80 billion dollars. Scale that to the roughly 100 gigawatts of capacity implied by some of the most aggressive announcements and you arrive at about 8 trillion dollars of capital expenditure. On his numbers, that would require something like 800 billion dollars in annual profit just to cover the cost of capital. His conclusion was blunt: at current economics there is, in his view, no way that level of spending can earn an adequate return.
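To make that arithmetic concrete, here is the same back-of-envelope calculation in a few lines of Python. All of the inputs are the estimates reported in that interview rather than my own figures, and the cost-of-capital rate is simply the one implied by the 800 billion-on-8 trillion relationship.

```python
# Back-of-envelope sketch of the reported data centre arithmetic.
# All figures are the quoted estimates, not independent calculations.

cost_per_gw_usd = 80e9           # ~80 billion dollars to equip 1 GW of AI capacity
announced_capacity_gw = 100      # ~100 GW implied by the most aggressive announcements
required_return = 0.10           # rate implied by the quoted 800B-on-8T relationship

total_capex = cost_per_gw_usd * announced_capacity_gw
annual_profit_needed = total_capex * required_return

print(f"Total capex: ${total_capex / 1e12:.1f} trillion")                 # ~$8 trillion
print(f"Annual profit needed: ${annual_profit_needed / 1e9:.0f} billion") # ~$800 billion
```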
The deeper signal behind IBM’s “8 trillion dollar AI mirage” warning is not about data centres alone. It is about discipline. If the economics of AI infrastructure look stretched even at hyperscaler scale, then banks, fintechs and payment providers cannot afford to treat AI as a vanity project. It pushes us to connect every model, every feature and every workflow to clear improvements in experience, success rate and unit economics.
In many boardrooms, AI oversight has already shifted from a one-off strategy presentation to a standing agenda item. Not because the technology is new, but because the consequences of getting it wrong on conduct, customer outcomes and reputation have become very real.
At the same time, the business case for AI inside financial institutions and payment businesses has become more concrete. We now have examples of organisations reducing credit decision times from days to minutes, automating entire layers of back-office work, tuning risk checks in real time and routing transactions more intelligently across rails and acquirers. Industry surveys suggest that banks and payment providers expect AI to lift their profitability meaningfully over the next few years, even after accounting for investment and risk.
But that value is not evenly distributed. The firms that are starting to pull ahead are not necessarily those with the biggest models. They are the ones that have been choosy about use cases, disciplined about data and serious about governance. They treat AI as part of their core operating model rather than as a side project.
A good example is a leading bank in the UAE. After simplifying its IT landscape and building a bank-wide data lake, the bank focused on a handful of high-impact AI and analytics use cases co-owned with business leaders, put a federated data governance model in place and embedded the outputs into frontline tools. In its first two years, it built more than 100 models, created a core team of over 70 analytics specialists and is targeting a five to seven times return on its AI investment, as described in this McKinsey case study: “How a UAE bank transformed to lead with AI and advanced analytics”.
Others have discovered a different reality: hundreds of models scattered across the organisation, inconsistent data pipelines, conflicting rules and unclear ownership. The dazzle of rapid experimentation has, in some places, hardened into a new form of technical and regulatory debt.
So 2025 became the year when the questions got harder. Not “Can we do this?” but “Should we?” and “Who owns the risk?” and “How will this show up in our capital plan, our audit findings and our trust with customers?”.
Beyond Generic Intelligence: The Rise of Domain AI
The second big drift this year has been away from a generic fascination with intelligence and toward something more grounded: domain AI.
The early conversation about large language models in finance often treated them as magic calculators that could be dropped into any workflow. Over the last twelve to eighteen months, that story has become more nuanced. The institutions that are getting the most out of AI are not simply plugging in general-purpose models. They are training and constraining them with the specific semantics, data sets and guardrails of their own domains.
In payments, domain AI lives inside the actual grammar of the system. It knows the difference between a soft decline and a hard decline, understands how strong customer authentication exemptions work by segment and ticket size, tracks token and mandate life cycles and is aware of cut-off times, settlement calendars and intraday liquidity constraints. That lets it reason directly in terms of approval rate, cost per transaction, chargeback exposure and working capital, then suggest routing, authentication and retry patterns that fit a specific merchant, corridor or use case instead of a global rule book. In compliance around these flows, domain AI treats every alert and decision as part of an explicit lineage from regulation to policy to control to case outcome, so that an investigator or supervisor can see not only what was decided but also which pieces of law, risk appetite and historical behaviour informed that decision.
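To make that a little more tangible, here is a deliberately tiny sketch of what encoding payment semantics in software can look like, distinguishing hard from soft declines and checking a low-value exemption. The decline codes, thresholds and field names are illustrative placeholders, not any scheme's actual rules.

```python
from dataclasses import dataclass

# Illustrative only: a few payment-domain distinctions expressed in code,
# rather than treating a transaction as an opaque bundle of features.

HARD_DECLINE_CODES = {"stolen_card", "account_closed", "fraud_suspected"}

@dataclass
class Attempt:
    amount_eur: float
    decline_code: str | None
    merchant_fraud_rate: float   # rolling fraud rate for the merchant, as a fraction

def is_hard_decline(a: Attempt) -> bool:
    """Hard declines should never be retried; soft declines may be."""
    return a.decline_code in HARD_DECLINE_CODES

def low_value_exemption_candidate(a: Attempt) -> bool:
    """Rough stand-in for a low-value authentication exemption check (placeholder thresholds)."""
    return a.amount_eur <= 30 and a.merchant_fraud_rate < 0.0013

attempt = Attempt(amount_eur=18.0, decline_code="insufficient_funds", merchant_fraud_rate=0.0009)
print(is_hard_decline(attempt), low_value_exemption_candidate(attempt))  # False True
```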
Around this, an older idea has started to echo again. Isaac Asimov’s three laws of robotics were, at heart, an attempt to imagine what it would mean to embed ethics and control into powerful systems. Financial AI needs its own, simpler set of principles. One useful way to think about it is as follows:
- AI used in finance should not harm customers or the wider system.
- It should operate under meaningful human oversight, especially where decisions affect capital, conduct or inclusion.
- It should preserve its own integrity, including its data, logs and safeguards, without cutting across those first two duties.
These are not new regulations, and they are certainly not a complete policy framework. They are a reminder that in financial services, “responsible AI” is not a slogan. It is a design problem. The most promising experiments now bake these principles into the way systems are conceived, built and monitored, instead of bolting them on later in a separate ethics deck.
Payments as the First Intelligent Fabric
If you want to see domain AI in motion, payments are a good place to start.
Over the last decade, real-time payment systems, wallets and QR ecosystems have rewired how money moves. In India, UPI has grown into the primary rail for retail digital payments. It now handles the bulk of person-to-person and person-to-merchant transactions and processes billions of transfers each month. In Brazil, Pix has rapidly become the dominant way to pay, moving money instantly between people, businesses and government entities and processing trillions of reais in value each year.
On top of these rails sit card networks, local schemes, alternative payment methods, cross-border connectors and merchant acquirers. For many digital businesses, this has created both an opportunity and a headache. Customers now expect every payment to be instant, invisible and free of friction. Behind the scenes, though, each transaction is a small maze of choices. Which rail should we use? Which acquirer? Which authentication flow? How much risk should we take? Each of these choices has real consequences for conversion and cost.
What is emerging, quietly, is an intelligent payment fabric.
Instead of static routing tables and one-size-fits-all rules, we are starting to see systems that learn from every transaction and adjust over time. They nudge traffic away from failing endpoints before dashboards turn red. They shift authentication journeys based on behavioural signals and regulatory thresholds. They balance the trade-offs between approval rates, fraud losses and transaction costs in ways that would be impossible to manage manually at scale.
For a high-growth merchant in Latin America, that might mean AI-driven orchestration that can reroute traffic in real time away from an underperforming acquirer, tune authentication flows by risk profile and dynamically pick the cheapest successful rail, without a human watching the dashboard. For a wallet operator in Africa or Asia, it might mean collections strategies that adapt to local cash-flow patterns, or risk models that combine device data, behavioural signals and traditional credit information in order to extend inclusion without blowing up loss rates.
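A toy example helps illustrate the trade-off such a fabric is constantly making. In the sketch below, each candidate route carries an estimated approval rate, expected fraud loss and fee, and traffic goes wherever the expected unit economics are best. The route names and numbers are invented, and a real orchestrator would learn these quantities from live traffic and update them continuously rather than hard-code them.

```python
# Toy sketch of routing as an expected-value trade-off. Route names,
# rates, losses and fees are invented for illustration.

routes = {
    "acquirer_a": {"approval": 0.92, "fraud_loss": 0.15, "fee": 0.30},
    "acquirer_b": {"approval": 0.88, "fraud_loss": 0.08, "fee": 0.22},
    "local_rail": {"approval": 0.95, "fraud_loss": 0.05, "fee": 0.10},
}

def expected_margin(route: dict, ticket: float, take_rate: float = 0.02) -> float:
    """Expected revenue minus expected fraud loss and fee for one attempt."""
    return route["approval"] * (ticket * take_rate) - route["fraud_loss"] - route["fee"]

ticket = 60.0
best = max(routes, key=lambda name: expected_margin(routes[name], ticket))
print(best)  # the route with the best expected unit economics for this ticket
```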
Earlier this year, I described payments as “experience infrastructure”, the hidden layer that shapes how the experiential economy actually works, and argued that AI-powered orchestration would turn this layer from plumbing into a strategic asset. In 2025, that shift began to feel less theoretical. The question for 2026 is whether we treat this new fabric simply as another efficiency play or as a canvas for designing better, fairer and more resilient ways for value to move.
For emerging markets in particular, the stakes are high. As governments push more subsidies, services and small-business support through digital channels, payment infrastructure becomes part of social infrastructure. When it is intelligent and well governed, it can widen access and resilience. When it is brittle or opaque, it can amplify exclusion and fragility.
Three Bold Challenges for 2026
It is tempting, at the turn of the year, to ask only for more pilots, more success stories and more budget. If 2025 was the year when the questions got harder, 2026 should be the year when we are willing to answer them. Three challenges in particular keep coming up in conversations with leadership teams in financial services and payments.
1. Retire AI theatre and tie AI to capital and conduct.
The first challenge is to move beyond AI as a collection of impressive demos and isolated productivity wins and to connect it explicitly to the way we think about capital, risk and conduct.
That means asking which initiatives on the AI roadmap will show up not only in customer-experience metrics or cost-to-income ratios but also in risk-weighted assets, provision levels or conduct indicators by 2027. It means designing major programmes with regulators, auditors and risk functions in the conversation from the start, not only as late-stage gatekeepers. It means being ready to explain, in plain language, how AI-assisted decisions in areas such as credit, collections, fraud and customer treatment are made, monitored and escalated when things go wrong.
A simple question for any leadership team is this: Which AI initiative are we willing to defend in front of our supervisor as a net positive for both safety and fairness?
2. Build a living governance spine, not a one-time policy.
Many large institutions now have an AI policy, and a growing number have published responsible AI principles. The harder work is turning those documents into a living governance spine.
That looks less like another committee and more like a set of capabilities: inventories that actually capture all the models in use, including vendor and “shadow” models; risk tiering that reflects impact rather than only technical complexity; monitoring that can spot drift in both performance and behaviour; and red-teaming that is more than a box-ticking exercise. Supervisors have already signalled that model risk management for AI will be held to at least the same standard as for traditional models, and in some cases a higher standard.
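To ground what an inventory and impact-based tiering capability can mean in practice, here is a deliberately simplified sketch of a single inventory record: an accountable owner, a risk tier driven by impact rather than technical complexity, and a review date that monitoring can check against. The field names and tiering rules are hypothetical; real frameworks are considerably richer.

```python
from dataclasses import dataclass
from datetime import date

# Simplified, hypothetical sketch of an AI model inventory entry
# with impact-driven risk tiering.

@dataclass
class ModelRecord:
    name: str
    owner: str                 # accountable business owner, not just the builder
    vendor_supplied: bool      # third-party and "shadow" models belong in the same inventory
    affects_customers: bool    # influences credit, pricing, treatment or access?
    affects_capital: bool      # feeds provisioning, capital or liquidity decisions?
    last_reviewed: date

    def risk_tier(self) -> str:
        """Tier by potential impact, not by how sophisticated the model is."""
        return "high" if (self.affects_capital or self.affects_customers) else "standard"

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        return (today - self.last_reviewed).days > max_age_days

m = ModelRecord("collections_prioritiser", "Head of Collections",
                vendor_supplied=True, affects_customers=True, affects_capital=False,
                last_reviewed=date(2024, 11, 1))
print(m.risk_tier(), m.review_overdue(date(2026, 1, 1)))  # high True
```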
The goal is not to eliminate all failure. That is neither realistic nor desirable. The goal is to build institutions that can learn safely. Systems will misclassify. Models will drift. Bad actors will probe for weaknesses. The discipline lies in how quickly we detect, respond and adapt.
A practical question to carry into 2026 is this: If a critical AI system in our stack started to misbehave tomorrow, how confident are we that we would notice, understand why and respond before customers or regulators told us?
3. Reskill leadership and teams for human and AI collaboration, not replacement.
Much of the public debate about AI and jobs in finance has been framed as a substitution story that asks which tasks or roles will be automated away. The more interesting, and more demanding, challenge is to build organisations in which humans and AI together make better decisions than either could alone.
Executives will need a working understanding of how AI systems behave in production: what they are good at, where they are brittle, how bias creeps in and how guardrails work. Frontline teams will need to learn when to trust a suggestion from a model, when to challenge it and when to override it entirely. Risk, audit and compliance teams will need new skills in interrogating models and their outputs, not just their documentation.
Surveys in 2024 and 2025 already show that institutions reporting the biggest gains from AI tend to combine investment in technology with significant investment in skills and ways of working. The human part of the system is doing as much of the heavy lifting as the models.
One last question for leadership teams follows from this: Where, concretely, in our organisation do we expect humans and AI to work together in 2026, and what are we doing now to prepare those people?
Looking Ahead: From Infrastructure to Intelligence
When the history of this decade is written, 2025 may not be remembered for the most spectacular new model release. It may instead be seen as the year when we chose what kind of relationship we wanted with this technology.
Did we treat AI as a powerful tool in search of a purpose, deployed wherever it happened to fit a slide or a marketing campaign? Or did we do the harder work of bringing it into our core infrastructure with discipline, clear objectives, explicit guardrails and a willingness to take responsibility for its consequences?
For those of us in payments and financial services, 2026 will not only be about faster rails, richer APIs or sleeker apps. Those things matter. The deeper shift is that intelligence that is data-driven, adaptive and increasingly autonomous in narrow domains is becoming part of the fabric of our systems. The question is whether that fabric will be continuously governed, transparent enough to be trusted and flexible enough to support inclusion and resilience, not only efficiency.
As you sign off the year, one question is worth carrying forward:
If your payment and financial infrastructure could truly learn at scale, what new forms of value, inclusion and resilience would you be willing to design for, and take responsibility for, in 2026 and beyond?
Like Alice, financial leaders have stepped through the looking glass. The world on this side is full of possibility, but also full of strange new risks. To thrive here will require both the curiosity to explore the wonder and the discipline to govern it.
If 2023 and 2024 were about being dazzled by what AI could do, let 2026 be about what we choose to do with it.

