RTP 3.0 – The implications of the UK’s New Payments Architecture

9 March 2018

 

An efficient payments infrastructure is essential for a country’s economy. Increasingly, innovation in payments is also important in establishing a region’s attractiveness for investment, especially for Fintech. As the UK moves to implement its New Payments Architecture (NPA), there is an opportunity to lead the world in payments and market openness. If the original Faster Payments service was real-time payments (RTP) version 1.0, and SCT Inst / TCH is RTP 2.0, then the NPA could provide RTP 3.0. What would this mean for participating financial institutions?

In this blog I look at some of the non-functional requirements banks will need to consider.

Orders of magnitude increase in scale

I’m sure we would all agree that electronic payment volumes will continue to increase, with most of the growth coming from Faster Payments as opposed to BACS batch payments. According to FPSL, some 1.7bn faster payments were processed in 2017, a 16% increase on 2016. A basic forward projection at that growth rate shows that banks should allow for volumes to roughly double over the next five years.
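
As a rough illustration of that projection (my own back-of-the-envelope calculation, not an FPSL forecast), the sketch below simply compounds the 2017 volume at the 2017 growth rate; the assumption that 16% annual growth persists is illustrative only.

    # Back-of-the-envelope projection, assuming the ~16% annual growth seen
    # in 2017 simply persists (an illustrative assumption, not a forecast).
    base_volume_bn = 1.7    # faster payments processed in 2017, in billions
    annual_growth = 0.16

    volume = base_volume_bn
    for year in range(2018, 2023):
        volume *= 1 + annual_growth
        print(f"{year}: ~{volume:.1f}bn faster payments")

    # By 2022 the projection reaches ~3.6bn, roughly double the 2017 volume.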

However, this does not take into account any migration of card volumes to faster payments arising from PSD2 and open banking. Icon’s research (conducted with Ovum) shows that PSD2 and open banking could lead to 30% of the potential cards market migrating to faster payments. Even looking at e-commerce alone, this would potentially add another 2 billion transactions per annum by 2026.

It also does not consider the fact that payment generation in many areas is still locked into a batch mindset. For example, payroll and invoice payments are frequently made on a monthly basis, as much a result of legacy technology limitations as anything else. With the rise of the gig economy and real-time, always-on, scalable infrastructures, there is no reason why people shouldn’t be paid daily.

Add other, less predictable elements (societal and technological changes over the next ten years, IoT, AI, messenger services and so on) and banks need to plan for an orders-of-magnitude increase in scale.

Peak processing demands

Many people have spoken about the rise of the GAFAs and the BATs, myself included, but it is worth reflecting again on the statistics from Singles’ Day in China in 2017. Among the many figures released by Alipay were:

  • Alibaba Cloud, which handled the infrastructure, managed 325,000 orders per second at peak
  • Alipay processed 1.5 billion payment transactions, up 41 percent on the previous year

Very few people would have predicted such demand a few years ago. Although China is an enormous market, what if the UK payments system were to experience a similar (albeit smaller) extreme peak driven by a single event? Card volumes have traditionally experienced greater demand peaks than account-to-account payments; on Black Friday, for example, Barclaycard (UK) handled 976 transactions per second. If card volumes migrate to a Request for Payment model or in-app payment push, your payment systems will need to scale dynamically to a greater extent than existing architectures support.
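
To make that scaling gap concrete, here is a toy sketch (entirely my own illustration; the per-worker throughput and headroom figures are assumptions, not NPA parameters) of the elastic capacity calculation an always-on platform has to keep making as observed throughput swings from everyday levels to an extreme peak.

    import math

    def workers_needed(observed_tps: float, tps_per_worker: float = 500.0,
                       headroom: float = 0.3) -> int:
        # Capacity needed for the observed rate plus a safety headroom;
        # the 500 tps-per-worker figure is purely illustrative.
        return max(1, math.ceil(observed_tps * (1 + headroom) / tps_per_worker))

    # Everyday load vs. the card-style and Singles' Day peaks quoted above.
    for label, tps in [("typical hour", 60),
                       ("Black Friday card peak", 976),
                       ("Singles' Day extreme", 325_000)]:
        print(f"{label}: ~{tps:,} tps -> {workers_needed(tps)} workers")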

Availability and Resiliency

Payments availability and resiliency have been front of mind recently, following high-profile outages. With growing volumes and the concentration of multiple schemes within one infrastructure, the expectation is for an always-on, 100% available payment system, regardless of volume peaks.

Systems therefore need to self-heal and stay responsive in the face of failures. When failure of a component does occur, it needs to be met with elegance rather than disaster.
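
As a sketch of what “elegance rather than disaster” can mean in practice, the snippet below shows a minimal circuit-breaker pattern (a generic resilience technique of my choosing, not something mandated by the NPA): after repeated failures of a downstream component, further calls are rejected immediately with a clean error instead of piling up behind it, and the component is retried once a cool-off period has elapsed.

    import time

    class CircuitBreaker:
        """Minimal circuit breaker: trips after repeated failures, then
        lets a trial call through once a cool-off period has elapsed."""

        def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, operation, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    # Fail fast: give the caller an immediate, clean answer
                    # rather than queuing requests behind a dead component.
                    raise RuntimeError("circuit open - downstream unavailable")
                self.opened_at = None  # cool-off elapsed: allow a trial call
            try:
                result = operation(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0
            return result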

Costs moving to zero

Despite growing volumes, the cost of processing needs to trend towards zero. How to achieve this?

Well, given that the growth in volumes will be driven by faster payments, this argues for stabilizing and containing the existing legacy payment hubs, freezing that investment and investing strategically in real-time payments instead. Hubs were the right approach for consolidating disparate silos of complex, exception-prone, non-immediate processing. However, that time has passed.

Cost pressure and rising volumes argue for simple flows: no repair, no delays, flows built from standard components, complexity removed, and exception handling pushed back to the customer through immediate feedback.
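
To illustrate that shift from back-office repair queues to immediate customer feedback, here is a small sketch (the message fields and validation rules are my own simplified assumptions, not ISO 20022 or NPA definitions): a payment that fails validation is rejected straight back to the submitting channel with actionable reasons, rather than being parked for manual repair.

    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:
        # Illustrative fields only - not an ISO 20022 / NPA message definition.
        debtor_account: str
        creditor_account: str
        amount: float
        reference: str

    def validate(payment: PaymentRequest) -> list[str]:
        """Synchronous validation: every problem is reported immediately to
        the submitting channel instead of going to a manual repair queue."""
        errors = []
        if payment.amount <= 0:
            errors.append("amount must be greater than zero")
        if len(payment.debtor_account) != 8 or not payment.debtor_account.isdigit():
            errors.append("debtor account must be an 8-digit account number")
        if len(payment.creditor_account) != 8 or not payment.creditor_account.isdigit():
            errors.append("creditor account must be an 8-digit account number")
        if not payment.reference.strip():
            errors.append("payment reference is required")
        return errors

    def submit(payment: PaymentRequest) -> str:
        errors = validate(payment)
        if errors:
            # Push exception handling back to the customer: reject instantly
            # with reasons, keeping the core flow simple and straight-through.
            return "REJECTED: " + "; ".join(errors)
        return "ACCEPTED for straight-through processing"

    print(submit(PaymentRequest("12345678", "87654321", 25.00, "INV-001")))
    print(submit(PaymentRequest("1234567", "87654321", 0.00, "")))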

Conclusion

The non-functional demands of the NPA are significant, as you would expect. Evolving your existing infrastructures may enable you to be compliant, but it will not necessarily enable you to compete with new market entrants. Therefore, we believe the NPA argues for revolution, not evolution: adopting an open solution that builds upon the same technology as the internet-scale market disrupters.

In a separate blog I’ll look at some of the functional requirements of the NPA.

 


Mady Dyson
