Building a payments solution: Key considerations for success

As a software engineer tasked with delivering a payments solution within a bank, you must balance strict non-functional requirements with the need to remain flexible for future changes. In this article, we’ll explore how to reconcile these sometimes-competing priorities. Drawing on our experience working with payments engineering teams across various banks, we’ll also examine the challenges of building a solution entirely in-house versus relying on traditional off-the-shelf vendor products.
Let’s begin by looking at a set of non-functional requirements that are standard in most banking and payments solutions.
Standard non-functional characteristics
Performance
With the standardisation of real-time payments and the growing use of mobile banking and card transactions, there are simply more payments to process than ever—and they all need to be processed quickly. The end customer experience has never been better, and we can’t afford to disrupt that with unexpected slowdowns. As with many non-functional aspects, performance must remain a key focus of the solution design, application architecture, and supporting infrastructure.
In the past, payments modernisation programmes frequently revealed that core banking systems were the bottleneck, because they often weren’t designed for real-time processing. However, we can’t rest on our laurels. We should expect that these core services, along with their associated channels and clearing systems, will themselves be modernised. Regardless, payment orchestration must never become the limiting factor.
Elasticity
Payment orchestration solutions increasingly need to handle a diverse range of workloads, combining real-time processing with batch-based payments. In the past, payment traffic profiles—like predictable salary file runs—often allowed for simpler planning. But this is no longer sufficient. Consumer market trends can shift quickly, creating unforeseen spikes in traffic. Consequently, the software must adapt seamlessly, scaling up when demand is high and scaling back down when it subsides. This flexibility cannot be provided by infrastructure alone; the application’s architecture must also be designed with elasticity in mind.
This concept is further explored in the Reactive Manifesto, an established set of principles that underpin many modern application frameworks, most notably Akka.
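To make this concrete, here is a minimal sketch of application-level elasticity using Akka’s classic routing API, which can grow and shrink a pool of worker actors under load. The names (`PaymentWorker`, `ProcessPayment`) are illustrative, and the bounds would need tuning for a real workload:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.{DefaultResizer, RoundRobinPool}

// Illustrative message; a real payment instruction carries far more detail.
final case class ProcessPayment(paymentId: String)

class PaymentWorker extends Actor {
  def receive: Receive = {
    case ProcessPayment(id) =>
      // Validation, screening, and routing would happen here.
      println(s"Processing payment $id on ${self.path.name}")
  }
}

object ElasticPoolExample extends App {
  val system = ActorSystem("payments")

  // The resizer adds routees when mailboxes back up and removes them when
  // demand subsides, within the configured bounds.
  val resizer = DefaultResizer(lowerBound = 2, upperBound = 16)
  val pool = system.actorOf(
    RoundRobinPool(nrOfInstances = 2, resizer = Some(resizer)).props(Props[PaymentWorker]()),
    "payment-workers")

  (1 to 100).foreach(i => pool ! ProcessPayment(s"PMT-$i"))
}
```

In production, this in-process elasticity would be paired with cluster-level scaling, so capacity can grow across nodes rather than only within a single JVM.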
Resilience
Another quality highlighted in the Reactive Manifesto is resilience. While we often reduce risk by replicating systems, replication can introduce additional failure scenarios. A classic example is the “split-brain” problem: imagine two application nodes in an active-active cluster across two datacentres, and one datacentre suffers a failure. Can you confidently allow the remaining node to continue processing? The remaining node cannot “see” the node in the failed datacentre, so it doesn’t know if that datacentre is truly down or if there’s simply a network partition. Ultimately, it’s impossible to be certain.
The strategies for addressing these scenarios involve trade-offs but are well understood. One common approach is to provide a mechanism that both “sides” of the cluster can evaluate independently to decide whether to continue processing. This often involves either running an odd number of nodes to achieve a quorum or designating a central arbiter.
For deeper insight into the complexity of these issues, the Akka Split Brain Resolver documentation offers an excellent overview.
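The strategies it describes can be enabled with a few lines of configuration. Here is a minimal sketch (the system name is illustrative, and the timings would need tuning for a real deployment): the keep-majority strategy lets the side that can still see a majority of members carry on, which is why such clusters are typically run with an odd number of nodes.

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ClusterBootstrap extends App {
  val config = ConfigFactory.parseString(
    """
    akka {
      actor.provider = cluster
      cluster {
        downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
        split-brain-resolver {
          active-strategy = keep-majority
          stable-after = 20s  # wait for the membership view to settle before acting
        }
      }
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("payments-cluster", config)
}
```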
Observability
While it’s easy to say “a payment is a payment”—an instruction for moving money that requires compliance checks and often transmission to another financial institution—the reality is more complex. In practice, a single payment typically involves multiple nested orchestrations across different business areas, interacting with diverse services and protocols, often deployed on various infrastructures.
Observability is commonly defined as the ability to understand a system’s internal state by examining its external outputs, often through dashboards driven by infrastructure and service metrics. However, I’d like to introduce a functional dimension of observability focused on individual payments. By capturing and exposing payment-specific state information as part of Payment Orchestration, you not only gain deeper insight into how payments are routed and processed, but you also create a foundation for valuable business metrics. Crucially, implementing these capabilities at the application level avoids coupling your functional insights to any particular infrastructure components.
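As a minimal sketch of what this functional dimension might look like (the event names are illustrative), each orchestration step emits a payment-level domain event; any sink—a payment timeline store, a metrics pipeline, an audit log—can consume the same stream without the orchestration code knowing which infrastructure sits behind it:

```scala
import java.time.Instant

// Illustrative payment lifecycle events; a real model would be far richer.
sealed trait PaymentEvent {
  def paymentId: String
  def occurredAt: Instant
}
final case class PaymentReceived(paymentId: String, occurredAt: Instant, channel: String) extends PaymentEvent
final case class ComplianceChecked(paymentId: String, occurredAt: Instant, passed: Boolean) extends PaymentEvent
final case class RoutedToClearing(paymentId: String, occurredAt: Instant, scheme: String) extends PaymentEvent
final case class PaymentCompleted(paymentId: String, occurredAt: Instant) extends PaymentEvent

// Orchestration depends only on this abstraction, not on a metrics stack.
trait PaymentEventPublisher {
  def publish(event: PaymentEvent): Unit
}
```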
The ability to change
The ability for a system to adapt is arguably the most important factor in its long-term success. In the payments domain, we (un)fortunately face change requirements from multiple directions: regulatory and compliance bodies introduce new standards, bank-wide security mandates evolve, and businesses demand agility to expand into new markets. As a result, payment processing software must gracefully handle changes—such as uplifting, splitting, combining, or reusing existing processes—while still meeting all previously established requirements.
Designing a system to accommodate change must therefore permeate every level of software development. It’s tempting to jump directly to popular architectural strategies like event-driven design or hexagonal architecture (also known as ports-and-adapters), each of which promotes extensibility. Ultimately, though, these approaches converge on two foundational software principles: abstraction and encapsulation. By prioritising these fundamentals, payment processing solutions become enablers for the business, capable of responding efficiently to both technical and market-driven shifts.
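As a small illustration of those principles in the payments context (all types and names here are hypothetical), a clearing “port” hides which scheme or protocol sits behind it, so orchestration logic is untouched when an adapter is added or replaced:

```scala
import scala.concurrent.Future

// Hypothetical domain types.
final case class Payment(id: String, amount: BigDecimal, currency: String)
final case class ClearingAck(paymentId: String, reference: String)

// The port: orchestration code depends only on this abstraction.
trait ClearingPort {
  def submit(payment: Payment): Future[ClearingAck]
}

// Adapters encapsulate scheme-specific details behind the port.
final class SepaInstantAdapter extends ClearingPort {
  def submit(payment: Payment): Future[ClearingAck] =
    Future.successful(ClearingAck(payment.id, reference = s"INST-${payment.id}"))
}

final class FedwireAdapter extends ClearingPort {
  def submit(payment: Payment): Future[ClearingAck] =
    Future.successful(ClearingAck(payment.id, reference = s"FED-${payment.id}"))
}
```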
Vendor solutions vs in-house build
Historically, adopting large “black box” vendor solutions was often the safest—or only—way to meet the rigorous non-functional requirements we discussed earlier. However, these solutions frequently could not (and often still cannot) accommodate the level of change modern payment processing systems require. At Icon, we hear the same concerns time and again: legacy payment platforms are difficult to repurpose for new contexts, and even essential regulatory updates can become prohibitively expensive. Traditional vendors often lack either the incentive or the capability to address these challenges.
Increasingly, we’re seeing financial institutions explore the opposite approach: eliminating vendors altogether and building everything in-house. This avoids lock-in and allows a “greenfield” environment where teams can craft solutions precisely suited to their needs. It’s an exciting prospect for software engineers, yet designing a truly future-ready system that satisfies all enterprise non-functional demands—and remains adaptable—is a monumental undertaking.
Key questions to consider:
- How can you accommodate market-specific behaviours without duplicating entire components?
- How can you ensure reliable orchestration for both real-time and long-lived processes?
- How can you codify governance—beyond simply reusing services—to also reuse patterns and best practices?
- How can you leverage existing payment domain models (e.g., ISO20022) while still structuring additional data points? (See the sketch after this list.)
- How can you integrate with legacy core banking systems that haven’t yet been modernised for real-time processing?
- How can you address modern regulatory pressures—such as the need for multi-cloud environments—effectively?
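On the domain-model question above, one common shape (all types here are hypothetical) is to keep the generated ISO20022 model intact and pair it with separately structured internal data, so the standard and the bank-specific context can evolve independently:

```scala
// Hypothetical stand-in for a generated ISO20022 (pacs.008) message model.
final case class Pacs008(endToEndId: String, amount: BigDecimal, currency: String)

// Bank-internal data points kept alongside, not squeezed into, the scheme message.
final case class InternalContext(riskScore: Int, bookingBranch: String, retries: Int = 0)

// Orchestration works with the pair, leaving the standard model untouched.
final case class PaymentInstruction(message: Pacs008, context: InternalContext)
```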
These challenges are solvable, but a solution must address all of them to be truly strategic. Neglecting any of the standard non-functional requirements or the need for adaptability leads to piecemeal workarounds for each new use case—driving up both complexity and cost. The core issue is that building a comprehensive solution requires substantial investment, which must be budgeted at the programme level rather than tucked into a single project scope.
It’s understandably difficult for a bank to see the total expense of an in-house payments modernisation. Costs often get dispersed across individual projects and use cases, each limited by its immediate requirements and budget. For instance, if you’re implementing high-value payment execution and want to generalise application components for broader use, how can you ensure they’ll perform under higher-demand real-time scenarios? And how do you adequately test for those use cases without exceeding the budget or timeline of a single project?
The introduction of AI
With AI evolving so rapidly, it’s easy to see why many view software development as getting “cheaper.” Generative AI and agent-based automation can be incredibly powerful—just last week, I generated implementations for hundreds of ISO20022 Message Rules that we at Icon (and likely you as well) once coded by hand. The results were remarkable, fuelling the notion that developers should now be able to do more with less. This perspective stands in stark contrast to the traditional reality of large engineering teams in banks, struggling to support complex transformational projects. It’s unsurprising that executives begin to question why software still seems so expensive when AI can clone common website patterns in a flash.
Yet AI is just a tool—its output is only as good as the data it’s trained on and the instructions it receives. You can achieve impressive results if you’re working with well-defined, thoroughly documented requirements—for instance, converting content from one representation to another, scaffolding new projects with popular frameworks, or… generating ISO20022 Message Rules. However, these scenarios remain relatively narrow when compared to designing, building, and maintaining a full-fledged payments processing system.
Looking ahead, AI’s influence will only grow, and it will undoubtedly help engineers deliver more in less time. But as simpler tasks become automated, the true value shifts toward the relationship between business and technology—understanding nuanced requirements, orchestrating complex systems, and ensuring solutions remain strategic over the long term. Those who master this balance will effectively leverage AI’s potential and truly differentiate their organisations in the payments domain.
One way to ensure these nuanced requirements are understood across all stakeholders is through effective domain modelling.
Modelling
One of the most important aspects of designing strategic software is how its design and purpose are communicated across various stakeholders—developers, testers, domain experts, and business owners. By establishing a common language to describe functionalities, responsibilities, and component types, you ensure everyone aligns on what the software does and what it should do. This clarity underpins all development stages, from understanding how domains relate to each other and deciding where components should reside, to structuring and restructuring teams so that new features can be delivered seamlessly throughout the solution.
There are many approaches to modelling in software development. For example, Behaviour Driven Development (BDD) focuses on using consistent, descriptive language for functionality, while developer portals like Backstage help maintain and expose cohesive software catalogues.
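As a small, hypothetical illustration of the BDD style, ScalaTest’s feature specs let a scenario be written in the same language the business uses, with the steps later wired to the orchestration under test:

```scala
import org.scalatest.GivenWhenThen
import org.scalatest.featurespec.AnyFeatureSpec

class HighValuePaymentSpec extends AnyFeatureSpec with GivenWhenThen {
  Feature("High-value payment execution") {
    Scenario("A compliant payment is submitted to clearing") {
      Given("a validated payment instruction")
      When("all compliance checks pass")
      Then("the payment is routed to the clearing adapter")
      pending // illustrative only; real steps would drive the orchestration
    }
  }
}
```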
Modelling is crucial because it compels teams to think carefully about what they’re building and how they’re building it—independently of any particular technical stack. By capturing and sharing these insights, you can respond more quickly to inevitable changes, ensuring your system remains robust and adaptable over time.
Building a payments solution with the Icon Payments Framework (IPF)
At Icon, we’ve collaborated with numerous banks over the years to tackle the challenges described in this article. Through these experiences, we’ve distilled our learnings into the Icon Payments Framework (IPF). Positioned firmly between traditional vendor products and pure in-house builds, IPF provides a battle-tested foundation for creating services that can meet the most demanding non-functional requirements we’ve covered.
Because IPF is offered as a framework rather than a platform, your engineering teams retain full ownership of payments services—similar to a complete in-house approach. The difference is that the scope of what you need to design and implement shrinks dramatically, making a truly strategic payments solution both realistic and achievable. Ultimately, we want your engineering teams (and you should, too) to spend their time delivering unique business value: translating your organisation’s distinct models and processes into running software.
For more information on how IPF can support banks building a payments solution in house, download our whitepaper.
To learn more about how IPF is architected, including our partnerships with Akka and MongoDB, as well as our approach to AI in payments, explore the other articles and resources available here on our site.