The future of AI payments solutions: Driving intelligent payment platforms with IPF and Generative AI

17 October 2025

For a Software Development Kit (SDK) such as the Icon Payments Framework (IPF), the increasing progress and application of Generative AI in payments solutions have become a focal point for many developers. Our clients are interested in understanding how IPF can further enable them in pursuit of AI-first development approaches, and why AI by itself isn’t a magic bullet for software development. These are the right questions to be asking, especially as on the surface there are often similarities between the application of Generative AI to software development and the use of code generation within IPF. Both approaches encourage developers to focus their efforts on higher-value tasks by minimising “boilerplate” development, but they bring very different contributions to the table (one brings the food, the other brings the drink!).

As the engineering team building the IPF software, we already adopt AI-based approaches internally across many parts of our development process. Like most others, we see the benefits but are also aware of the limitations and potential risks of using AI without a supporting process. We are actively focusing on how we best see AI featuring as part of our product for our clients.

For transparency, I’m one of the Software Engineers working on the IPF product, and also a big proponent of Generative AI in software development, but by no means an AI expert.

Generative AI for code generation in AI payments solutions

Generative AI is principally non-deterministic (stochastic): the output token (for simplicity, word) is constructed based on the probabilities of what “word” comes next given the input. There are also usually some configuration parameters that can be passed to the Large Language Model (LLM) that relate to this process, such as temperature. This parameter influences how the LLM selects the next word from a set of probable candidates; increasing it pushes the LLM to appear more creative with its responses.
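To make the temperature parameter concrete, here is a minimal sketch of temperature-scaled sampling over a tiny, invented set of next-token logits. Real models operate over vocabularies of tens of thousands of tokens; the numbers here are purely illustrative:

```java
import java.util.Random;

// A minimal sketch of temperature-scaled next-token sampling.
// The logits here are invented for illustration only.
public class TemperatureSampling {

    // Convert raw logits to probabilities, dividing by temperature first.
    // Lower temperature sharpens the distribution towards the top candidate;
    // higher temperature flattens it, making alternatives more likely.
    static double[] softmax(double[] logits, double temperature) {
        double[] probs = new double[logits.length];
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l / temperature);
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp(logits[i] / temperature - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    // Pick a token index by sampling from the probability distribution.
    static int sample(double[] probs, Random rng) {
        double r = rng.nextDouble(), cumulative = 0.0;
        for (int i = 0; i < probs.length; i++) {
            cumulative += probs[i];
            if (r < cumulative) return i;
        }
        return probs.length - 1;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5}; // three candidate "words"
        // At low temperature the top candidate's probability is close to 1;
        // at high temperature the alternatives become genuinely competitive.
        System.out.printf("T=0.1: p(top)=%.3f%n", softmax(logits, 0.1)[0]);
        System.out.printf("T=2.0: p(top)=%.3f%n", softmax(logits, 2.0)[0]);
    }
}
```

Even at very low temperature the model is still *sampling* from a distribution; temperature only reshapes it, which is why lowering it does not by itself guarantee identical outputs.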

The fact that these sorts of parameters are configurable by the user often gives the impression that it should be possible to make the LLM act in a deterministic manner if it is supplied with the correct configuration. By reducing all these “creativity” parameters to zero, you would be forgiven for assuming that the same action against an LLM would then yield the same output, but this is not the case. Whilst it may help move in that direction, because of the underlying architecture of an LLM – many operations executing in parallel across multiple GPUs – there is an inherent level of non-determinism, as concurrent operations complete in different orders.
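This ordering effect can be made concrete without any AI at all: floating-point addition is not associative, so when parallel operations finish in different orders, the rounded results can differ. A tiny Java illustration:

```java
// Floating-point addition is not associative: the order in which
// concurrent partial results are combined changes the rounded answer.
public class FloatOrder {
    public static void main(String[] args) {
        double a = 0.1, b = 0.2, c = 0.3;
        double leftFirst = (a + b) + c;   // one completion order
        double rightFirst = a + (b + c);  // another completion order
        System.out.println(leftFirst == rightFirst); // false
        System.out.println(leftFirst);  // 0.6000000000000001
        System.out.println(rightFirst); // 0.6
    }
}
```

Scale this up to billions of parallel multiply-accumulates across GPUs and tiny numerical differences can occasionally tip the balance between two near-equal candidate tokens.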

This is easy to demonstrate. A good example we observed recently was using an LLM to translate a document from English to Spanish: repeated translation attempts of the same document using the same LLM, with the same prompting, temperature and other controls, yielded slightly different output on consecutive runs.

The sentence:

“When a child flow is called, it will be sent two key data fields.”

Initially translated to:

“Cuando se llama a un flujo hijo, se le enviarán dos campos clave de datos.”

And on subsequent run translated to:

“Cuando se llama a un flujo hijo, se le enviarán dos campos de datos clave.”

A third run produced the first result again.

Ultimately, these two sentences are close enough semantically that an LLM can provide either in response to the requested translation input.

This characteristic of an LLM means that the use case for Generative AI for code generation is really centred around a single cycle of generation. The user describes what they want the LLM to generate, and the LLM generates the output. The user can then take that output and integrate it directly into their software project as if they had written it themselves. They take ownership of it, change it as needed, and it is subject to the existing code review processes as part of the normal software development life cycle. Many developer tools have been created both around and extending this workflow, including dedicated AI Integrated Development Environments (IDEs) such as Cursor and Windsurf, and AI integrations for existing IDEs, such as JetBrains’ AI Assistant. Most of these tools have evolved to be based around agentic workflows, which allow users to describe deeper goals; the AI agent then plans, reflects and makes changes across many files – all in line with the original goal. Ultimately, however, each operation from the agent is an interaction with an LLM, and if you repeat the individual process with the same initial conditions, you may get slightly different (albeit equally “correct”) results.

Using Generative AI in this context, with the right conditions, can be extremely powerful. I’m continually surprised every few months by the new depths that modern agentic developer tools can reach. As with anything AI-related, success depends on the level of effort involved in clearly describing your requirements and enabling the model with high-quality instructions, references and examples. Also, unsurprisingly, LLMs perform best when operating within contexts that were richly covered in their training data – so popular languages and domains (such as JavaScript and web development) will likely yield better results than some obscure framework with few references. Luckily for us as payments developers, whilst Python is the de-facto language for Machine Learning programming, Java is still the king of back-end enterprise development, and there are plenty of examples of Java and popular Java application frameworks that foundation LLMs have been trained upon.

Code generation with IPF

The use-case relevant to Generative AI, as described above, is quite different from the code generation aspect of IPF.

As a brief reminder, IPF roughly boils down to:

  1. A resilient, battle-tested, distributed runtime specifically designed for real-time payment processing
  2. An orchestration modelling tool using Domain Specific Languages (DSL) and code generation to build executable workflows in Java
  3. A set of payment-specific out-of-the-box, ready to use capabilities curated by our Payment Experts
  4. An open architecture promoting extensibility for clients

The workflows that the IPF orchestration tool generates are a direct representation of the user’s requirements, executable on the IPF engine. The purpose of a DSL is to strip away all of the unnecessary details and allow the user to focus on the domain and business logic (in our case, the orchestration of payments). When DSLs are coupled with a code generator targeting a general-purpose language such as Java, we can provide a direct representation of these requirements in the runtime platform: there is no inference, no ambiguity. It is simply a much more effective way of producing that particular part of the software. The code generation process in this case is, in fact, not a particular focus or concern of the developer; it happens transparently and continually – much like a compiler translating “source” code to “machine” code.
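As a toy illustration of the principle (this is not IPF’s actual DSL or generator, just an invented sketch of the idea), consider a generator that deterministically translates a declared workflow model into Java source – the same model always yields byte-for-byte identical output, unlike two runs of an LLM on the same prompt:

```java
import java.util.List;

// A toy, invented example of deterministic code generation from a
// declarative model. NOT IPF's real DSL; it only shows the principle:
// each declared element maps one-to-one to generated code.
public class ToyGenerator {

    // A single declared workflow step: a name and the handler to invoke.
    record Step(String name, String handler) {}

    // Translate the model into Java source. No inference, no ambiguity:
    // the output is a pure function of the input model.
    static String generate(String flowName, List<Step> steps) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(flowName).append("Flow {\n");
        src.append("    public void execute() {\n");
        for (Step step : steps) {
            src.append("        ").append(step.handler())
               .append("(); // step: ").append(step.name()).append("\n");
        }
        src.append("    }\n}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        List<Step> model = List.of(
            new Step("validate", "validatePayment"),
            new Step("route", "routeToScheme"));
        // Running the generator twice on the same model produces
        // identical source, every time.
        System.out.println(generate("CreditTransfer", model));
    }
}
```

The determinism is the point: the generated code can be trusted as a faithful translation of the model, so the developer reviews the model, not the output.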

The IPF platform is the result of many years of enabling real-time payments across distributed systems. It’s solid, proven, and there is a reason why more and more banks are coming to IPF rather than building everything in-house: this is difficult, and expensive, especially when looking for the non-functionals that support a strategic programme like a full payments hub. Generative AI by itself doesn’t solve this problem.

The act of writing code has never been the blocker for any enterprise development effort – consider why we have Solution Architects. The more real challenges are things like design, organisation, pattern re-use, guardrails, security, non-functional requirements and extensibility, to name but a few. These challenges remain regardless of us now having more AI tools at our disposal to “generate” things. I’d go even further and state that unbridled use of AI in any of these areas creates risk alongside opportunity. You really want AI included as part of your process where it brings value, but well guarded and isolated, with the correct human-in-the-loop feedback points.

IPF and Generative AI

The real winning combination that we see, supported by insights from our existing clients, is developers using Generative AI to develop their custom extensions on top of the IPF platform – for example, writing the integration code for a new service’s API that IPF will communicate with. Where previously developers would manually write this “bridge” or “glue” code, now Generative AI can help.
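As a sketch of the kind of “glue” code meant here – the service endpoint, payload shape and class names are all invented for illustration, and this is not an IPF API – a thin bridge to an external screening service might look like:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A hypothetical "bridge" to an external service's API: exactly the sort
// of repetitive integration code Generative AI is well suited to drafting.
// Endpoint path and payload fields are invented for illustration.
public class SanctionsCheckBridge {

    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl;

    public SanctionsCheckBridge(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Translate the orchestration's data into the service's request shape.
    HttpRequest buildRequest(String debtorName) {
        String body = "{\"name\":\"" + debtorName + "\"}";
        return HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/v1/screen"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    // Send the request and hand the raw response body back to the workflow.
    public String check(String debtorName) throws Exception {
        HttpResponse<String> response = client.send(
            buildRequest(debtorName), HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

Code like this is low-risk to generate and easy to review, which is exactly why it pairs so well with a human-in-the-loop workflow.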

This combination of using a platform + Generative AI really gives developers the space to focus on the more complex problems to be solved, but there will always be an element of software development required.

“You don’t write industrial software in English.” – JetBrains CEO Kirill Skrygan

Our job, through IPF, is to help reduce the effort needed for “mundane” code, and now Generative AI also helps on that journey!

[Figure: Contribution of code into a single development project]

The Future of IPF and AI

At the moment, our work on the future of IPF and AI has two main focuses:

  1. Working directly with our clients to understand how they want to build with IPF and with AI, and making IPF the best example of an “AI-native” SDK and platform. This includes simple things like ensuring the SDK itself is self-describing to an LLM: the concepts, data models, assets and file structures are natural for an LLM to extract context from. It also includes more direct enablement, such as providing capabilities through Model Context Protocol (MCP) servers to support the SDK itself.
  2. Applying AI as part of our payment-specific services and reference solutions. This includes use cases like payments repair, where we demonstrate migrating from a traditional human-based activity to a self-repairing service based on AI, and really understand the process of migrating from one to the other with IPF.

It is a very exciting time. I work with people who are super-enthusiastic about Generative AI, and those who are depressingly sceptical, but what is most common is for people to bounce between these positions. It is incredibly easy to be blown away by the latest capabilities offered up by the current flavour-of-the-week model provider; it’s also very easy to then try to do something very simple, spend more time fighting the LLM than the task itself would have taken, and come away feeling disillusioned.

I feel fortunate to be able to work in this area looking at how best to provide real value for our customers with technologies like this opening up new possibilities that didn’t exist even a few years ago. I look forward to providing updates on this as we progress in the future.


Learn more about IPF or get in touch to speak to one of our team.

Categories: AI, IPF, Payments

Tom Beadman
