Data-first Pre-bind Processing: Improving Blueprint 2


Introduction

As we approach Blueprint 2 Phase 1, firms across the market are preparing to adopt new, data-first ways of working starting in July.

Anticipating Phase 2 Transformation

As we look ahead to Phase 2, we anticipate an innovative trading environment enabled by the Digital Gateway, allowing for complete digital risk placement.

This shift aligns with the Blueprint 2 vision of driving transformation towards a ‘data-first’ approach to trading, moving away from manual processes.

Industry-Wide Transformation

The industry is set to undergo a true transformation through standardised data, digital messaging, and new central services.

Let’s start with some definitions 

‘Data-first’ is the concept of extracting risk information from submissions and transposing it into standardised data entities. These can be shared between organisations and systems through APIs, enabling highly efficient, ‘data-first’ risk placement and processing.

‘Document-led’, by comparison, is today’s paradigm for most of the market: risk data is populated into documents such as MRCs, which are sent along the value chain, and the data contained within them is rekeyed into subsequent systems in the chain.
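To make the contrast concrete, here is a minimal sketch of what a standardised, shareable risk entity might look like. The field names are invented for illustration only and do not reflect the actual CDR or MRC v3 schema:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical, simplified risk entity -- field names are placeholders,
# not the real CDR or MRC v3 data model.
@dataclass
class RiskEntity:
    insured_name: str
    country: str
    total_insured_value: float
    currency: str
    inception_date: str  # ISO 8601 date string

risk = RiskEntity("Example Co Ltd", "GB", 25_000_000.0, "USD", "2026-01-01")

# As structured data, the entity can be serialised once and shared
# between systems via an API call -- no documents, no rekeying.
payload = json.dumps(asdict(risk))
print(payload)
```

Under a document-led model, the same information would be embedded in an MRC document and retyped at each step of the chain; as a data entity it travels losslessly between systems.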

Data-First Beginning with Pre-Bind Processing

The data-first approach has clear advantages, mainly in efficiency, accuracy and speed, together with enabling greater manipulation and enhancement of data for deeper risk insights and better underwriting. The Blueprint 2 vision is for market participants to move to data-first as quickly as possible and eventually decommission non-digital routes.

Data-first is a pre-bind challenge 

However, whilst the aspiration is towards data-first pre-bind processing, along with other elements of the target-state Blueprint 2 operating model, there is actually little within the Blueprint 2 architecture that helps market participants into this approach.

MRC v3 and the new Core Data Record (CDR) are certainly part of the solution and will help standardise data across different actors in the value chain. But the CDR is merely a set of defined data entities that must be captured through the pre-bind process – in fact, just the sub-set of entities needed to enable four post-bind processes through the Digital Gateway (accounting and settlement, tax, regulatory reporting and first notification of loss).

The CDR and MRC Are Only Part of the Solution

In other words, while the CDR might require a data-first pre-bind processing approach and the MRC v3 might help enable it, they are not the complete solution. The MRC and CDR contain only a partial record of the risk, so using their requirements as the specification for pre-bind data capture would not enable a fully data-first approach to submission, quote, and bind – which would seem like a missed opportunity.

Placement platforms have an important role in digitising the placement process, but they will need to evolve to support brokers and insurers who are ready to trade fully digitally – for example, by automatically creating MRCs from data entities extracted from submissions and sent via API to the placement platform.

Data-first starts at submission 

Market participants need to think about how they will implement a data-first approach – not just to be compliant with CDR requirements, but to take full advantage of the opportunity to transform pre-bind.

And there’s an easy, obvious and highly valuable place to start.  

A key capability of data-first is automated submission ingestion.

Data Standardization and API Integration

Using AI to extract targeted risk data from voluminous submission packs is the foundation of a data-first process. That data must then be standardised and converted into the format required by downstream systems, and finally pushed directly into those systems through APIs.
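The extract, standardise and push steps described above can be sketched as a simple pipeline. Everything here is illustrative: the extraction is a trivial placeholder for an AI model, and the endpoint and field names are invented, not any real vendor or market interface:

```python
import json
from urllib import request

def extract_risk_data(submission_text: str) -> dict:
    # Placeholder for AI-based extraction: a real solution would use a
    # trained model to pull targeted entities from the submission pack.
    fields = {}
    for line in submission_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def standardise(fields: dict) -> dict:
    # Convert extracted values into the structured format that
    # downstream systems expect (names here are hypothetical).
    return {
        "insured_name": fields.get("insured", ""),
        "total_insured_value": float(fields.get("tiv", "0").replace(",", "")),
    }

def push_downstream(entity: dict, endpoint: str) -> request.Request:
    # Build the API call that would push the entity into a downstream
    # system. Returned unsent here; a real integration would urlopen it.
    return request.Request(
        endpoint,
        data=json.dumps(entity).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

submission = "Insured: Example Co Ltd\nTIV: 25,000,000"
entity = standardise(extract_risk_data(submission))
print(entity)  # {'insured_name': 'Example Co Ltd', 'total_insured_value': 25000000.0}
```

The point of the sketch is the shape of the flow: once extraction produces structured entities, standardisation and API delivery are straightforward, repeatable steps rather than manual rekeying.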

Data-first without automated submission ingestion would be like using a starting handle to fire up a rocket ship! 

Automated submission ingestion is not new, but previous AI-based solutions required lengthy and expensive algorithm training, and implementation projects often killed the business case. Now a new generation of solution has arrived: led by mea Platform, it is fully pre-trained, works immediately, and is available as SaaS.

Such solutions are driving significant business value right across the market:

  1. Carriers are transforming their back office.
  2. MGAs are winning more business through faster speed to quote.
  3. Brokers are automating the creation of market submission packs.

And crucially, automated submission ingestion:

  1. Unlocks a data-first approach to pre-bind processing.
  2. Accelerates the requirements and vision of Blueprint 2.

Let’s go back to the CDR. There is currently no consensus approach or solution for how the CDR will be created. The easiest option is to extract relevant data from the MRC v3.

But there are two problems:

  1. The MRC does not contain all the data required by the CDR; the rest must be manually extracted and added from sources such as SOVs and tax schedules.
  2. It assumes the MRC data was correctly populated in the first place.

A next-generation submission ingestion platform can solve this: it can automatically extract targeted data entities from submissions, correctly populate the MRC, gather the additional data required by the CDR, and send it all through an API to the CDR, either via a placement platform or directly.
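As a sketch of that last step, a CDR payload could be assembled from MRC-derived data plus the supplementary sources mentioned above, with a completeness check before submission. All field names are illustrative placeholders, not the real CDR schema:

```python
# Hypothetical CDR assembly: combine MRC-derived data with supplementary
# sources (SOV, tax schedules) and validate before sending onward.
def build_cdr_record(mrc_data: dict, sov_data: dict, tax_data: dict) -> dict:
    record = {
        "insured_name": mrc_data.get("insured_name"),
        "total_insured_value": sov_data.get("total_insured_value"),
        "tax_jurisdictions": tax_data.get("jurisdictions"),
    }
    # Fail fast if any required entity is missing or empty, rather than
    # passing an incomplete record downstream.
    missing = [key for key, value in record.items() if not value]
    if missing:
        raise ValueError(f"CDR record incomplete: {missing}")
    return record

cdr = build_cdr_record(
    {"insured_name": "Example Co Ltd"},       # extracted into the MRC
    {"total_insured_value": 25_000_000},      # from the SOV
    {"jurisdictions": ["GB", "US"]},          # from tax schedules
)
print(cdr)
```

The validation step illustrates why automated ingestion matters: if the pipeline extracts and merges these sources itself, gaps surface immediately instead of being discovered post-bind.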

Opportunity for Early Movers

Now is the perfect time to take the crucial first step towards data-first pre-bind.

The technology is ready. The business case for submission ingestion is strong. The alignment with Blueprint 2 is clear. Early movers will reap the greatest rewards and position themselves at the forefront of this generational market modernisation initiative.