Blog post

Creating a highly flexible embedded insurance orchestration platform through cloud-native technology

From data to distribution, we’ve talked a lot about the insur(ance) part of InsurTech – now we’re tackling the ‘tech’ part. First up: laying the foundations of our insurance-as-a-service platform by being cloud-native and building our own platform-as-a-service.
Topic
General
Time to read
7 minutes
Last updated
February 2, 2023
In a nutshell
  • Using a cloud-native microservices architecture allows for flexibility, scalability and increased efficiency.
  • To support our solution, we decided to build our own platform-as-a-service: the foundation layer between the cloud, managed services we depend on and our insurance-as-a-service platform.
  • Developing our own PaaS gives us full control over our developer platform, allowing us to develop, debug and deploy new features quickly – ultimately delivering more value to Qover’s embedded insurance orchestration platform.

‘Cloud-native’ is a bit of a buzzword these days. And like many buzzwords, its true meaning can get muddled. So as Qover’s Chief Technology Officer, let me explain what we mean by being cloud-native, and how it helps fuel our tech platform.

Why we went cloud-native

Running your solution in the cloud doesn’t make you cloud-native. Rather, being cloud-native is a mindset built around a few key pillars, each with its own advantages:

  • Becoming even more agile. We use a microservices architecture, which allows for both flexibility and scalability. Each microservice is a small, independent piece of software that can be deployed, updated and scaled separately from the rest of the application. This allows us to respond quickly to market changes and program adaptations. 
  • Deploying software via containers is more scalable. The application is packaged and deployed using containers, which are lightweight, portable and easily scalable. Containers provide a consistent environment for the application to run in, regardless of the underlying infrastructure.
  • Cloud-native software services are more resilient. All services are designed to be self-healing, meaning that they can automatically detect and recover from failures – like a phoenix rising from its ashes. Monitoring and logging tools detect issues so that failing components are automatically scaled or replaced.
  • Improved DevOps leads to faster execution and innovation. By using fully automated, continuous delivery practices, we ensure that new features and updates are delivered to users quickly and consistently. We do this by automating the build, test and deployment processes, and using version control systems to manage and track changes to the application.

In order to become cloud-native, we made some early decisions about the technology we would use: Kubernetes for microservices orchestration together with the Istio service mesh; Terraform for infrastructure-as-code scripting; and ArgoCD as our GitOps tool.

But choosing technology alone only gets you so far. What we really want is a consistently managed, easy-to-use developer platform that supports our solution. We asked ourselves: should we use another tool or set of tools for this, or build our own?

Building our own platform-as-a-service

What is a platform-as-a-service (PaaS)?

A PaaS, or platform-as-a-service, is a set of tools and services that lets you build and run microservices without dealing with the complexity of the infrastructure underneath. Typical elements include a development framework, Kubernetes operators, a command-line interface (CLI), etc.

When a company like Tesla plans to build cars, they first build a factory: a production and assembly line for their cars. They streamline this as much as possible to build any Tesla model.

Similarly – although minus the massive machines and robots – we built our PaaS as an assembly line for our insurance microservices.

[Image: Qover's embedded insurance orchestration platform layers]
Qover's platform-as-a-service is the foundation layer between the cloud, the managed services we depend on and our insurance-as-a-service platform.

Why did we build our own PaaS?

Our PaaS is the foundation layer between the cloud, the managed services we depend on and our insurance-as-a-service platform.

Kubernetes is a wonderful orchestrator, but it can also be very complex to set up just right. We don’t want to burden our developers with this complexity, and we also don’t want to duplicate our specific Kubernetes setup all over our codebase.

A typical deployment also goes beyond just the microservices in Kubernetes; there’s the message bus, the database, etc. – so a lot needs to be configured and automated to get a service up and running.

A lot of companies use Helm to at least partially overcome this problem, but Helm is primarily a package manager and a poor fit for continuously deploying individual microservices. In a way, it’s the Helm templating engine that lures you in – but your templates quickly become very complicated, and hard to maintain and keep in sync everywhere.

Developing our own PaaS ensures that we can easily develop, debug and deploy our microservices to our needs. We can evolve and tweak it exactly as we want, without limitations. We can also easily apply the PaaS principles everywhere: in our tooling, in the configuration, in naming conventions across all resources and in our shared libraries. This gives us a very consistent result.

Ultimately, it allows our developers to focus on business features rather than losing time on the mechanics of getting something deployed – so they can deliver more business value through the core of Qover’s SaaS.

Specifically, our PaaS also enables us to be an elite performer across these four metrics (the DORA metrics):

  • Deployment frequency: how often we successfully release to production
  • Lead time for changes: time taken to get a commit into production
  • Change failure rate: percentage of deployments causing a failure in production
  • Time to restore service: how long it takes to recover from a failure in production
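
To make these concrete, here is a minimal sketch of how such metrics could be computed from a log of production deployments – the record shape and helper below are hypothetical, not Qover’s actual tooling:

```typescript
// Hypothetical deployment record – field names are illustrative only.
interface DeploymentRecord {
  commitCreatedAt: Date; // when the underlying change was committed
  deployedAt: Date;      // when the release reached production
  failed: boolean;       // did this deployment cause a production failure?
  restoredAt?: Date;     // when service was restored, if it failed
}

const hours = (ms: number) => ms / 3_600_000;

// Compute the four DORA metrics over a period of `periodDays` days.
function doraMetrics(deployments: DeploymentRecord[], periodDays: number) {
  const n = Math.max(deployments.length, 1);
  const failures = deployments.filter((d) => d.failed);

  return {
    // Deployment frequency: production releases per day.
    deploymentFrequency: deployments.length / periodDays,
    // Lead time for changes: average hours from commit to production.
    leadTimeHours:
      deployments.reduce((s, d) => s + hours(d.deployedAt.getTime() - d.commitCreatedAt.getTime()), 0) / n,
    // Change failure rate: share of deployments that caused a failure.
    changeFailureRate: failures.length / n,
    // Time to restore service: average hours from failure to recovery.
    timeToRestoreHours:
      failures.reduce((s, d) => s + hours((d.restoredAt ?? d.deployedAt).getTime() - d.deployedAt.getTime()), 0) /
      Math.max(failures.length, 1),
  };
}
```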

[Image: Qover's cloud infrastructure]
Qover's cloud infrastructure enables us to use microservices that are flexible, scalable and resilient.

Main principles of Qover’s PaaS

Automation, automation, automation

One thing was clear from the get-go: we automate everything – the full infrastructure and our deployments.

Still, we decided to have a human being make the final decision to deploy something to production – which is done through the simple click of a button.  

Deployments to testing environments are fully automated, as are the narrow integration and unit tests we run in the source control pipelines on every commit.
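
As a rough sketch of that flow – the stage names and approval hook below are purely illustrative, not our actual pipeline configuration – it boils down to a sequence of automated stages with a single manual gate before production:

```typescript
// Hypothetical deployment flow: everything automated except one manual gate.
interface Stage {
  name: string;
  requiresApproval?: boolean; // pause until a human clicks 'deploy'
  run: () => Promise<void>;
}

const pipeline: Stage[] = [
  { name: 'unit & narrow integration tests', run: async () => { /* on every commit */ } },
  { name: 'deploy to testing environment', run: async () => { /* fully automated */ } },
  { name: 'deploy to production', run: async () => { /* automated once approved */ }, requiresApproval: true },
];

async function execute(stages: Stage[], approve: (stage: string) => Promise<boolean>) {
  for (const stage of stages) {
    if (stage.requiresApproval && !(await approve(stage.name))) return; // stop at the gate
    await stage.run();
  }
}
```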

API & contract-first

We decided on a clear ‘source of truth’ for our APIs. We define the API contract before implementation using the standard protobuf format (which we extended by means of custom options).

This allows us to define the API endpoints, request and response messages, and other details in a formal contract. This contract serves as the blueprint for the implementation, and any changes to the contract are reflected in the implementation.

Our in-house code generator helps us leverage the contract by creating interfaces and base classes that can be used in the implementation code – making both the developer’s life easier and helping us avoid any human error.
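
To illustrate the idea, here is a minimal sketch of the kind of code a contract-first generator might emit – the service, messages and fields below are hypothetical, not the actual output of our generator:

```typescript
// Hypothetical example of what a contract-first code generator might emit;
// the service, message shapes and field names are illustrative only.
export interface CreatePolicyRequest {
  partnerId: string;
  productCode: string;
  startDate: string; // ISO 8601 date
}

export interface CreatePolicyResponse {
  policyId: string;
  status: 'ACTIVE' | 'PENDING';
}

// Generated base class: the handler signature is fixed by the contract,
// so the hand-written implementation cannot drift away from it.
export abstract class PolicyServiceBase {
  abstract createPolicy(req: CreatePolicyRequest): Promise<CreatePolicyResponse>;
}

// Hand-written code only fills in the business logic.
export class PolicyService extends PolicyServiceBase {
  async createPolicy(req: CreatePolicyRequest): Promise<CreatePolicyResponse> {
    // ...validate the request, create the policy, publish events...
    return { policyId: 'pol_123', status: 'PENDING' };
  }
}
```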

Check out our API documentation --> 

[Image: Code for Qover's API]
A contract-first approach gives us a single source of truth for our APIs and message contracts.

Spaces, modules & components

We decided to group our microservices into ‘spaces’, or logical groups that help give some structure across teams.

A ‘space’ groups ‘modules’, each of which can be considered the bounded context for one microservice – the actual runtime workloads are one or more ‘components’ within such a ‘module’. Often a ‘module’ has only one ‘component’, called api. Other examples of components are cron jobs and message handlers.

We use a private database for each ‘module’, which is inaccessible to the rest of the services.
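
A minimal sketch of how that hierarchy could be modelled – the type and field names below are illustrative, not our actual platform definitions:

```typescript
// Hypothetical model of the space / module / component hierarchy.
type ComponentKind = 'api' | 'cron' | 'message-handler';

interface Component {
  name: string;        // e.g. 'api'
  kind: ComponentKind; // which kind of runtime workload this is
}

interface Module {
  name: string;            // the bounded context of one microservice
  components: Component[]; // one or more runtime workloads
  databaseName: string;    // private database, unreachable from other modules
}

interface Space {
  name: string; // logical group shared across a team
  modules: Module[];
}
```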

Convention over configuration

We wanted to avoid ‘configuration hell’ as much as possible, so we rely on convention over configuration everywhere we can. For example, database names are automatically built using the name of the environment a module runs in combined with its own name. 

Similarly, topics and subscriptions on the event bus are automatically scoped by namespacing them.

This avoids having to set up a configuration and potentially making errors while doing so. But more importantly, it also enables us to easily set up ad-hoc custom environments for testing purposes that can run in parallel without any additional configuration whatsoever. This is a great help and a huge time saver when testing or getting UAT feedback on new features.
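
As a sketch of how convention replaces configuration here – the exact naming scheme below is illustrative and may differ from our real conventions – resource names can simply be derived from the environment and module names:

```typescript
// Hypothetical helpers deriving resource names purely from convention;
// no per-module configuration is needed.
function databaseName(environment: string, moduleName: string): string {
  return `${environment}_${moduleName}`;          // e.g. 'production_policy'
}

function topicName(environment: string, moduleName: string, event: string): string {
  return `${environment}.${moduleName}.${event}`; // e.g. 'production.policy.created'
}

// An ad-hoc test environment is just a different environment name –
// the same code runs in parallel without any extra configuration:
databaseName('uat-new-pricing', 'policy');        // 'uat-new-pricing_policy'
```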

Platform configuration

We support configuration at all three levels: ‘space’, ‘module’ and ‘component’. We keep module and component configuration as close to the source code as possible – in the same git repo (so the configuration is automatically part of any feature branches).
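
A small sketch of the idea, assuming a simple ‘most specific wins’ merge – the fields and defaults below are hypothetical:

```typescript
// Hypothetical layered configuration: component overrides module,
// which overrides space.
interface PlatformConfig {
  replicas?: number;
  memoryLimitMb?: number;
}

function resolveConfig(
  space: PlatformConfig,
  moduleCfg: PlatformConfig,
  component: PlatformConfig
): PlatformConfig {
  // platform defaults first, then space < module < component overrides
  return { replicas: 1, memoryLimitMb: 256, ...space, ...moduleCfg, ...component };
}

// A memory-hungry component only overrides what it needs:
resolveConfig({ replicas: 2 }, {}, { memoryLimitMb: 1024 });
// => { replicas: 2, memoryLimitMb: 1024 }
```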

Mono/multi-repo agnostic

We do not fall into the often ‘religious’ trap of mandating either a monorepo or a separate repo per microservice (‘module’). Our PaaS supports both setups, so diversity wins.

Language-agnostic

Even though we chose to primarily use NodeJS (with TypeScript) at Qover, the PaaS allows us to add support for any language with next to no work at all. In fact, our data team runs Python microservices on the same PaaS.

[Image: 3D illustration of cloud technology with Qover branding]
Building our own cloud-native platform-as-a-service means that we have full control over our ‘insurance factory’.

Executing Qover’s PaaS

Qover’s command-line interface (CLI)

The Qover CLI is the Swiss Army knife of our PaaS. It is built to smooth the developer’s workflow and abstract away the complexities of Kubernetes. The same CLI is also used in our continuous delivery pipelines on our build agents.

Kubernetes templating is done using actual code (we use TypeScript), which makes it much easier to structure and maintain. It is also done in one central location: the CLI code. So updates, fixes and improvements take immediate effect across our codebase!
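
As an illustration of templating manifests with typed code rather than text templates – the helper and naming below are a simplified sketch, not our actual CLI code:

```typescript
// Hypothetical sketch: building a Kubernetes Deployment manifest as a
// plain typed object instead of a text template.
interface DeploymentSpec {
  space: string;
  moduleName: string;
  component: string;
  image: string;
  replicas: number;
}

function deploymentManifest(spec: DeploymentSpec) {
  const name = `${spec.moduleName}-${spec.component}`; // naming convention applied once, centrally
  return {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: {
      name,
      namespace: spec.space,
      labels: { space: spec.space, module: spec.moduleName, component: spec.component },
    },
    spec: {
      replicas: spec.replicas,
      selector: { matchLabels: { app: name } },
      template: {
        metadata: { labels: { app: name } },
        spec: { containers: [{ name, image: spec.image }] },
      },
    },
  };
}
```

Because the manifest is just a typed object, the compiler can catch structural mistakes that a text template would only reveal at deploy time.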

Kubernetes operators

An important part of automating operations in our PaaS is handled by a number of Kubernetes operators that build upon the desired state philosophy of Kubernetes.
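
Conceptually, each operator runs a reconciliation loop: it compares the desired state declared in a resource with what actually exists and closes the gap. A minimal sketch of that pattern – the resource shape and callbacks below are hypothetical:

```typescript
// Hypothetical reconciliation loop following Kubernetes' desired-state model.
interface ModuleResource {
  name: string;
  desiredComponents: string[]; // components that should be running
}

async function reconcile(
  resource: ModuleResource,
  listRunningComponents: (moduleName: string) => Promise<string[]>,
  createComponent: (moduleName: string, component: string) => Promise<void>,
  deleteComponent: (moduleName: string, component: string) => Promise<void>
): Promise<void> {
  const running = await listRunningComponents(resource.name);

  // Create anything that is desired but missing...
  for (const c of resource.desiredComponents) {
    if (!running.includes(c)) await createComponent(resource.name, c);
  }
  // ...and remove anything running that is no longer desired.
  for (const c of running) {
    if (!resource.desiredComponents.includes(c)) await deleteComponent(resource.name, c);
  }
}
```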

Conclusion: Being cloud-native and building our own PaaS streamlines our insurance operations

Going cloud-native allows us to be more flexible and scalable, while making our services more resilient. Not only do our services automatically bounce back when there’s an issue, but we can also easily deploy new features for our partners.

Having our own PaaS on top of that means that we have full control over our ‘insurance factory’. This increased efficiency and automation ultimately allows our developers to focus on adding more value to Qover’s core business: its SaaS embedded insurance orchestration platform.

See how we can easily configure any product, in any country, for any company --> 
And if you'd like to further discuss our tech, don’t hesitate to reach out to our in-house experts or request a demo.