Recruiting Backend Engineers at ottonova

Here are some words about how the Backend Team goes about finding new team members. We want to do our part and share with the Community, as well as provide a bit more transparency into ottonova and how we are building state-of-the-art software that powers Germany’s first digital health insurance. 

This article covers what we are doing in the Backend Team, what we value, and how we ensure we hire people who share our values.

The Backend Team

Our team is responsible for many of the services that power our health insurance solutions at ottonova. This includes functionality unique to us, such as document management, appointment timelines, and guided signup, as well as the integration of specialized, industry-specific applications.

Under the hood, we manage a collection of independent microservices. Most of them are written in PHP and deliver REST APIs through Symfony, but a couple leverage Node.js or Go. We keep our stack current and upgrade frameworks and runtimes regularly.
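To give a feel for what these services look like, here is a purely illustrative sketch of a small Symfony REST endpoint; the route, controller, and fields are made up rather than taken from our actual codebase, and it assumes Symfony with PHP 8 attribute routing:

```php
<?php
// Hypothetical example endpoint, for illustration only.
namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\Routing\Annotation\Route;

final class AppointmentController extends AbstractController
{
    #[Route('/api/appointments/{id}', methods: ['GET'])]
    public function show(string $id): JsonResponse
    {
        // In a real service this data would come from the Domain layer,
        // not be hard-coded in the controller.
        return $this->json([
            'id'     => $id,
            'status' => 'confirmed',
        ]);
    }
}
```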

As a fairly young company, we spend most of our time adding new functionality to our software in cooperation with product owners. But at the same time we invest a fair amount of effort into continuously improving the technical quality of our services.

Values

Technical excellence is one of our team’s core values. To this end, we are practitioners of domain-driven design (DDD). Our services are built around clearly defined domains and enforce strict boundaries between layers.

We can do this because we created an architecture that allows it and because we have the internal support to focus on quality. We invest a lot into keeping the bar high: whenever needed, we refactor to make sure the Domain Layer stays in step with the business needs and that the Infrastructure Layer remains performant and able to scale.

Although most of our work is done using PHP, we strongly believe in using the right tool for the job. Modern PHP happens to be a pretty good tool for describing a rich Domain. But we like to be pragmatic, and where it is not good enough, for example in terms of raw performance, we are free to choose something more appropriate.
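To make this a bit more concrete, here is a minimal sketch of the kind of separation we mean, with made-up names and assuming PHP 8.1: the Domain Layer holds the model and its ports, while implementations such as a Doctrine-backed repository live in the Infrastructure Layer.

```php
<?php
// Hypothetical Domain-layer code: no Doctrine, HTTP, or framework dependencies here.
namespace App\Domain\Contract;

// A small value object; modern PHP is expressive enough to make invalid state impossible.
final class ContractNumber
{
    public function __construct(private readonly string $value)
    {
        if (!preg_match('/^[A-Z]{2}\d{8}$/', $value)) {
            throw new \InvalidArgumentException('Invalid contract number.');
        }
    }

    public function asString(): string
    {
        return $this->value;
    }
}

final class Contract
{
    public function __construct(
        public readonly ContractNumber $number,
        public readonly \DateTimeImmutable $startDate,
    ) {
    }
}

// A port defined by the Domain; a Doctrine-based implementation of this
// interface would live in the Infrastructure Layer.
interface ContractRepository
{
    public function byNumber(ContractNumber $number): ?Contract;
}
```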

Expectations from a new team member

From someone joining our team we first of all expect the right mindset for working in a company that values quality. We are looking for colleagues who are capable and eager to learn, as well as happy to share their existing knowledge with the team.

A certain set of skills or the right foundation for developing those skills is needed as well. We are particularly interested in a good mastery of programming and PHP fundamentals, Web Development, REST, OOP, and Clean Code.

As actual coding is central to our work, we require and test the ability both to write code on the spot and to come up with a clean design.

These expectations can be grouped into four main pillars that a candidate will be evaluated on:

  1. Mindset – able and willing to both acquire and transfer knowledge within a team
  2. Knowledge – possesses the core knowledge needed for the languages and tools we use
  3. Clean Design – capable of employing industry standards to come up with simple solutions that can be understood by others
  4. Coding Fluency – can easily turn requirements into code; coding comes naturally

The Recruiting Process

To get to work with us, a candidate goes through a process designed to validate our main pillars, while giving them plenty of time to get to know us and have all their questions answered.

It starts with a short call with HR, followed by a simple home coding assignment. Next there is a quick technical screening call. If everything is successful, we finish it up with an in-person meeting where we take 1-2 hours to get to know each other better.

The Coding Assignment

Our process starts with a coding assignment that we send to applicants and that counts mostly towards the Clean Design pillar. It is meant to let them show how they would normally solve a problem in their day-to-day work. It can be done at home with little time pressure: it is estimated to take a couple of hours and can be delivered within the next 10 days.

The solution to this would potentially fit into a few lines of code. But since the requirement is to treat it as a realistic assignment, we expect something a bit more elaborate. We are particularly interested in how well the design reflects the requirements, the use of clean OOP and language features, the correctness of the result (including edge cases), and tests.

We value everyone’s time and we don’t want unnecessary effort invested into this. We definitely do not care about features that were not asked for, over-engineered user interfaces or formatting, or design patterns used just for the sake of showcasing knowledge.

It will ideally be complex enough to reflect the requirements in code, but simple enough that anyone can understand the implementation without explanations.

The Tech Screening

To test the Knowledge pillar we continue with a Zoom call. This step was designed for efficiency. By timeboxing it to 30 minutes we make sure everyone has time for it, even on short notice. We don’t want to lose the interest of good candidates by getting lost in a scheduling maze.

Even though it is short, this call gives us a considerably higher match rate for the in-person interview. Over time we found that there really are just a handful of fundamental concepts that we expect a new colleague to already know. Most of the rest can quickly be learned by any competent programmer.

All topics covered in this screening are objectively answerable, so at the end of a successful round we can extend the invitation to the next step.

In-Person Interview

This is when we really get to know each other. It is ideally done at our office in central Munich, which is easier for people already close by, but equally doable for those coming from afar.

In this meeting we start by introducing ourselves to each other and sharing some information about the team and the company in general.

Next we ask about the candidate’s previous work experience. With this and the overall way our dialog progresses we want to check the Mindset pillar and ensure that the potential new colleague fits well into our team.

After that we go into a new round of “questioning” to test the Knowledge pillar more deeply. It is similar to the Tech Screening, but this time the questions are open-ended. Informed opinions are expected and valued. We definitely want to talk about REST, microservices, web security, design patterns and OOP in general, or even agile processes.

Then comes the fun part: we get to write some code. Well… mostly the candidate writes it, but we can also help. We go through a few mostly straightforward coding problems that can be solved on the spot. We are not looking for obscure PHP function knowledge, bullet-proof code, or anything ready to be released. We just want to see how a new problem is tackled and make sure that writing code comes naturally to the candidate. With this we cover the Coding Fluency pillar.

Afterwards it’s the interviewee’s turn. We take our time to answer any questions they may have. They get a chance to meet someone from another team and get a tour of the office.

What’s next?

The interviewers consult, and if there is a unanimous “hire” decision, we send an offer. In any case, we inform the candidate of the outcome as soon as possible, usually within a few days.

Interested in working with us? To get started,  apply here: https://www.ottonova.de/jobs

Continuing our Hackathon tradition

ottonova Hackathon Remote Edition

At ottonova, we have taken on the challenge of bringing a slow-moving and mostly antiquated business, health insurance in Germany, into the 21st century. For over three years already we’ve been offering our customers not only competitive insurance tariffs, but also the digital products that go along with them. All this to make their lives better and to prove that, yes, insurance can also be easy and fun to use.

It follows naturally that all of us here are innovators at heart. And innovators like to try out new things. “We are brave”, as one of our company values says. In the last years, we have found that a good way to get our creative juices flowing is by holding internal hackathons. The events are organized by our IT department and welcome everyone in the company who has an idea or simply wants to help out. In the first two editions, almost all the engineers participated, joined by many more colleagues from other departments, working in teams. The enthusiasm before and after the event was overwhelming. Everyone was happy to quickly bring a cool new thing live, to try out new tech, or just to hack something up together with colleagues they would normally not work with.

After such a positive response to the previous instances, we were eagerly awaiting a new installment. What we affectionately call a Hackotton was due to have its third edition this summer. It had already been half a year since the last one. Too long, some would say. We wanted our Hackotton 3.0.

But would a Hackotton be the same if, due to Covid-19, we were not all in the office together? We have been working remotely since spring, when the social distancing measures were introduced. We have been doing so successfully, but this time it was not just work; it was a bit more. We did not know how an event that is mostly social in nature would turn out if organized remotely. Still, we decided to give it a try.

We planned the event to take about two days. That is enough time to try out an idea, especially when you’re able to hack it together and there is no need for the rigor of our normal way of working. It all started with a kick-off. If in the past editions this was where ideas were pitched and teams were formed, this time around it was just a very short Zoom call to give the start. The idea gathering and team forming were replaced with a Confluence page with a join option for team members. As this was announced well in advance, there was plenty of time to think about topics and decide which teams to join. We ended up having a total of 10 teams and, for the first time, we were joined by external guests.

When the Hackottons were held in the office, all the planning and communication was done in person. There was a constant rustle on the floor, caused by people changing desks to be close to their team, or by heated discussions between team members. This time, while the office was still available for the few who chose to go there, most people worked remotely. So the close collaboration mode needed to be emulated somehow. Though some quick planning was done in the beginning to distribute tasks, in the end we relied mostly on instant messaging (Mattermost) and video calls (Zoom) to get things done.

As we have the good habit of developing only containerized applications, and have quite some practice with this, even “hacked” solutions built during the Hackotton are put into Docker containers. This allows for much easier collaboration and for building more complex systems even in hackathon mode. Especially now, when working remotely, it feels great when a teammate can hand you an already built container that you simply connect to the part you are implementing. As a follow-up advantage of this setup, we can quickly add these containers to AWS ECS in our pre-live environment, so that we can demo our ideas under realistic conditions and later, with the infrastructure already solved, have a smooth path towards a live release.

At the end of an intense second day of hacking, after hurrying to bring our projects to a functional state and to put together some sort of presentation, we had the chance to show our ideas to the whole company (over Zoom) and see what the other teams had managed to build. Everyone was given a 5-minute slot to demo and present their project and impress the audience. It was a hard task to fit all the passion the teams had poured into their projects into 5 short minutes, but the moderators were understanding and allowed more time when the mark was overstepped.

After all the teams got their turn in the spotlight and demoed their ideas, we had an online voting round to choose a winner. The laurel wreath for the 3rd Hackotton edition went, by a margin, to a very fruitful collaboration between the Android developers and one of our designers. They came up with a radically different approach to structuring our mobile apps. They showed how, by putting the focus on doctor appointments, interactions with our apps can be greatly simplified. Their solution takes the currently separate Timeline, Documents, and Chat sections in the apps and brings them together under Appointments. This way, related invoices, claim settlements, and doctor appointments can be reviewed in one place, and communicating with our customer support becomes easier. This will give our app users a better overview of and control over their interactions with healthcare providers and with us as an insurance company, and consequently make them happier with our service.

Apart from the great mobile app improvement idea, all the other teams brought convincing proposals. In this edition we got to see new tooling for our colleagues in the Sales department, improvements in document processing using OCR, optimizations to our website, internal applications that make our work easier, and further improvements to our customers’ experience of using our products. The response from the audience was overwhelmingly positive, and even though the event ended with the presentations and voting, discussions about the projects continued well into the following days.

As any hackathon comes with a boost in morale and team spirit, we can already conclude that organizing them is a long-term win for the participants and the company overall. For us at ottonova, the past Hackottons also came with some short-term wins. Some of the projects ended up being used, either directly or after being planned into our roadmap. From the past events we have “Agento”, a very handy internal automation tool that we use to quickly insure children attached to a parent’s account, as well as “ottoPrint”, our own PDF generation service, which is much faster and more customizable than the off-the-shelf solution we had been using until then. So we are fairly confident that some of this edition’s projects will end up being used as well. The winner has already sparked the interest of the product owners.

With this third edition, organized remotely at that, the Hackotton has become part of our company culture and has proven that it can endure. As we are coming to learn that “after the Hackotton is before the Hackotton”, we’re now eagerly awaiting the fourth one, and using the time to come up with cool new ideas.

php.barcelona

Together with the rest of the PHP engineers at ottonova, I had the pleasure of attending the PHP.Barcelona conference in November this year. It was a great experience, spread over two full days.

I’ve put together some quick and biased notes about the presentations. Here it goes…


Day 1

Opening Keynote – Rasmus Lerdorf

A really uplifting beginning of the conference, with Rasmus going down memory lane. We got to know some of the motivation behind and the process of creating PHP. All this further strengthened my opinion that it didn’t really start as a serious programming language. (Ok. I admit. It did progress since then, so don’t start throwing stones.)

A funny highlight was the explanation that “PHP is vertically consistent” – the PHP functions match the vendor functions they create an API for. Of course, this leaves it inconsistent with itself.

Also really enlightening was the explanation of why adding things like further type checking, generics, or class modifiers like “immutable” would be a serious performance hit to the language, so we should really stop hoping that any of that will come soon.

From Helpers to Middleware – Marco Pivetta

A nice, practical presentation. It showed how a design can incrementally evolve from what would basically be spaghetti code into a proper modular and scalable middleware-style architecture. While it was a good start, I would have liked a deeper dive into this style; where it stopped, it felt like it had just scratched the surface.
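To illustrate where that evolution ends up, here is a minimal sketch of the middleware style, assuming the standard PSR-7 and PSR-15 interfaces; the middleware itself is a made-up example of mine, not taken from the talk:

```php
<?php
// Hypothetical middleware: one cross-cutting concern, pulled out of the
// request handlers and into its own composable unit.
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Server\MiddlewareInterface;
use Psr\Http\Server\RequestHandlerInterface;

final class RequestIdMiddleware implements MiddlewareInterface
{
    public function process(ServerRequestInterface $request, RequestHandlerInterface $handler): ResponseInterface
    {
        // Attach a request id, delegate to the rest of the pipeline,
        // then expose the id to the client as a response header.
        $requestId = bin2hex(random_bytes(8));
        $request   = $request->withAttribute('request_id', $requestId);

        return $handler->handle($request)->withHeader('X-Request-Id', $requestId);
    }
}
```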

Microservices gone wrong – Anthony Ferrara

Although this was just a public retrospective of a specific project, which may not resonate with everyone, for me and the work we do at ottonova there were still a couple of valuable lessons to take home:

  • Messages that microservices use to synchronize can be differentiated into proper Events and RPCs, and these categories can, and maybe should, be treated differently. The latter require a response back, while the former don’t really need one. We don’t have this clear separation ourselves yet, but the need for it is definitely starting to show (a small sketch of the distinction follows after this list).
  • Each entity in your domain does not need to have its own service. Larger services are also fine, if they make sense for your use case. Our own setup uses domain-defined microservices of various sizes, so seeing that splitting everything aggressively may backfire will make us think twice when extracting a new microservice.
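Here is the small sketch referenced above: a hypothetical illustration (assuming PHP 8.1, with made-up message names) of why Events and RPC-style messages deserve different treatment.

```php
<?php
// An Event states that something already happened. No response is expected,
// and any number of services may consume it independently.
final class ContractSignedEvent
{
    public function __construct(
        public readonly string $contractId,
        public readonly \DateTimeImmutable $signedAt,
    ) {
    }
}

// An RPC-style message asks another service to do something and expects an
// answer, so the caller also has to handle timeouts, retries, and correlation.
final class CalculatePremiumRequest
{
    public function __construct(
        public readonly string $tariffId,
        public readonly int $ageInYears,
    ) {
    }
}

final class CalculatePremiumResponse
{
    public function __construct(public readonly int $monthlyPremiumInCents)
    {
    }
}
```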

He has a cute dog.

Serverless PHP applications with Bref – Matthieu Napoli

I guess it’s nice to see that there is a way to hook up PHP to Lambda, but then again, why take the effort to force it instead of just using a language supported directly? That aspect aside, it was interesting to get an intro to AWS Lambda, since I hadn’t tried it out myself yet.

JSON Web Tokens – Sam Bellen

Not that much to say here: JWTs. We already use them, and you should use them too. They’re easy to work with and really useful. At their core they are just signed information in a nice, standard JSON format. Nevertheless, it is still a very powerful concept, as it enables you to transfer claims securely from one party to another.
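As a minimal sketch of the idea, assuming the firebase/php-jwt library (my choice for the example, not something prescribed by the talk):

```php
<?php
// Hypothetical issuer and consumer sharing an HMAC secret.
require 'vendor/autoload.php';

use Firebase\JWT\JWT;
use Firebase\JWT\Key;

$secret = 'change-me';             // shared secret, for illustration only
$claims = [
    'sub' => 'customer-42',        // whom the token is about
    'iss' => 'example-issuer',     // who issued it
    'exp' => time() + 3600,        // valid for one hour
];

// The token is just the claims, base64url-encoded and signed.
$token = JWT::encode($claims, $secret, 'HS256');

// Verification checks the signature and expiry before the claims are trusted.
$decoded = JWT::decode($token, new Key($secret, 'HS256'));
echo $decoded->sub, PHP_EOL; // customer-42
```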

Developing cacheable PHP applications – Thijs Feryn

Well, this one was awkward, especially since I had the pleasure of sitting through this exact presentation some months before. Thijs is a dedicated evangelist, you have to give him that. He manages to squeeze everything out of what Varnish can do and serve it to his audience on a silver platter.

Now come the buts. The use cases considered in the presentation are outdated, focusing too much on server-side rendering and templating. And I particularly did not enjoy seeing an auditorium full of developers (maybe some more impressionable than others) being instructed to keep application logic in their caching layer.

Nothing wrong with Varnish itself, though. And since we need to keep all our data inside Germany for legal reasons, maybe we’ll need to consider an on-premise caching solution ourselves in the future.

PHP Performance Trivia – Nikita Popov

A really confident presentation from one of the core PHP contributors, containing a deeper dive into how the OPcache works and what its limitations are. Not that serious, if you ask me. With a bit of care for how you handle deployments, you should be fine.

Also interesting were some benchmarks showing that using objects instead of arrays is much more memory-efficient in PHP. Not that the opposite would have made us drop Value Objects, but it is still good to see that we are already using the memory-friendly option.
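A quick way to see the effect for yourself, as a rough sketch rather than the talk’s actual benchmark (numbers vary by PHP version; this assumes PHP 8):

```php
<?php
// Compare the memory cost of 100k associative arrays vs. 100k small objects.
final class Point
{
    public function __construct(public float $x, public float $y)
    {
    }
}

$before   = memory_get_usage();
$asArrays = [];
for ($i = 0; $i < 100_000; $i++) {
    $asArrays[] = ['x' => 1.0, 'y' => 2.0];
}
$arrayBytes = memory_get_usage() - $before;

$before    = memory_get_usage();
$asObjects = [];
for ($i = 0; $i < 100_000; $i++) {
    $asObjects[] = new Point(1.0, 2.0);
}
$objectBytes = memory_get_usage() - $before;

// On PHP 7+ the objects typically come out noticeably smaller: declared
// properties are stored as a compact table of values, while every array
// carries its own hash table.
printf("arrays: %d bytes, objects: %d bytes\n", $arrayBytes, $objectBytes);
```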

Get GOing with a new language – Kat Zień

While there was nothing spectacular about this presentation, just an intro to Go, I still enjoyed seeing that there is a clear interest from the PHP community in exploring other languages. Go is particularly relevant for us at ottonova, since we’re already using it for our messaging setup and we plan to try it in some more areas where we think it would do a better job than good old PHP.

Day 2

Advanced Web Application Architecture – Matthias Noback

A nicely structured first dive into DDD and the rationale behind it. We’re already doing most of what this presentation talks about, and much more, at ottonova, but it helps to see someone else’s take on it and double-check our approach.

It was reassuring to see that one of the first requirements of DDD is to separate your Domain from your Infrastructure, a thing that we carefully follow, along with some more advanced techniques.

I also really appreciated the general advice that not every project is the same, and that if something applies somewhere it does not automatically mean it will apply to your situation. This is, of course, common sense, but it doesn’t hurt to hear some basic common sense from time to time, in a world overflowing with strong opinions.

Working with Webhooks – Lorna Mitchell

A decent, structured presentation about webhooks, an architectural style for async communication. Nothing groundbreaking here, but since this style is not commonly used (or at least not by me), it’s nice to be reminded that it exists. Good tip on using ngrok for exposing local endpoints.
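As a rough sketch of the receiving side, with a made-up header name and signing scheme that are not from the talk: the provider POSTs a payload and signs it with a shared secret, and we verify the signature before acting on it.

```php
<?php
// Hypothetical webhook endpoint: verify an HMAC signature, then acknowledge
// quickly and defer the real work.
$secret  = getenv('WEBHOOK_SECRET') ?: 'change-me';
$payload = file_get_contents('php://input');
$given   = $_SERVER['HTTP_X_SIGNATURE'] ?? '';

$expected = hash_hmac('sha256', $payload, $secret);

if (!hash_equals($expected, $given)) {
    http_response_code(401);
    exit;
}

http_response_code(202);
$event = json_decode($payload, true);
// ... hand $event to a queue or background worker for processing ...
```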

Supercharge your apps with ReactPHP & PHP-PM – Albert Casademont

Since FPM is already showing its scaling limitations for us, it was particularly interesting to see what other options would be available.

Many of the requests we process follow a common pattern: we get some input, process it, pass it on to another service (possibly an external one), and wait for a response. Then we do some more work and respond to our own client. This means considerable time is wasted on our side just waiting for HTTP responses ourselves. This is where some concurrent PHP would come in handy: while one request is waiting, other requests can be handled. So we will definitely be looking into either PHP-PM or Swoole in the future.
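As a minimal sketch of what this buys us, assuming ReactPHP’s HTTP client (react/http ^1.x) and purely illustrative URLs: two outbound calls run concurrently instead of each blocking a worker while it waits.

```php
<?php
// Hypothetical example: fire two slow HTTP requests concurrently.
require 'vendor/autoload.php';

use React\Http\Browser;

$browser = new Browser();

$requests = [
    $browser->get('https://httpbin.org/delay/2'),
    $browser->get('https://httpbin.org/delay/2'),
];

// Both requests are in flight at the same time; total wall time is ~2s, not ~4s.
React\Promise\all($requests)->then(function (array $responses) {
    foreach ($responses as $response) {
        echo $response->getStatusCode(), PHP_EOL;
    }
});
// With ReactPHP 1.x the event loop starts automatically at the end of the script.
```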

It’s all about the goto – Derick Rethans

This was a nice, theoretical dive into how PHP parses and executes code. For someone with even a minimal Computer Science background, though, it was still fairly basic, and I don’t think there was much to take away.

Develop microservices in PHP – Enrico Zimuel

This was an interesting walk-through of some of the benefits, and some of the specific concerns, of using microservices. Nothing too new for us, since we already rely heavily on microservices, both in the PHP group and in our other teams.

One thing we noted down to improve was the standardisation of error responses. Good hint. We’ll definitely look into that one.
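For the sketch below I picked RFC 7807 (“problem details”) as one possible convention and Symfony’s JsonResponse as the carrier; this is an assumption for illustration, not the format we have settled on.

```php
<?php
// Hypothetical helper producing a consistent error shape across services.
use Symfony\Component\HttpFoundation\JsonResponse;

function problemResponse(string $title, int $status, string $detail): JsonResponse
{
    return new JsonResponse(
        [
            'type'   => 'about:blank',
            'title'  => $title,
            'status' => $status,
            'detail' => $detail,
        ],
        $status,
        ['Content-Type' => 'application/problem+json'],
    );
}

// Example usage inside a controller:
// return problemResponse('Contract not found', 404, 'No contract with id 1234 exists.');
```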

Mutation Testing – Better code by making bugs – Théo Fidry

Mutation testing seems really cool as a theoretical concept, and it’s nice to see that someone is trying it out. I am not sure how it would work out in practice. So far, I can see two major downsides. First, it simply takes a lot of time to run such a test suite; even with optimisations, this could take hours. Second, the testing itself seems only as good as your mutators, and I think writing relevant mutators is not a trivial task. Having some that just replace operators could be straightforward, but how much does that help?
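A tiny, hypothetical illustration of what an operator-replacing mutator does and why even that is useful; Infection is the usual tool for this in PHP, but the snippet below is hand-made:

```php
<?php
// Production code under test.
function isAdult(int $age): bool
{
    return $age >= 18;
}

// A mutation testing tool would generate a "mutant" where >= becomes >:
//     return $age > 18;
// and then re-run the test suite against it.

// This assertion passes for both the original and the mutant, so the mutant
// "survives" and points to a missing boundary case.
assert(isAdult(30) === true);

// Adding the boundary assertion "kills" the mutant: it fails for the mutant
// but passes for the original.
assert(isAdult(18) === true);
```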

Back to Munich

And the rest we sadly had to skip, to not miss our plane back home.

Overall, as we hoped when booking the tickets, the lineup was a solid one, and the speakers delivered. Kudos to them, as well as to the organizers. We came back with lots of new ideas, some of which we will try in the near future, and with confirmation that we are on the right track with many of our architectural decisions.

We will definitely have Barcelona on our list for next year as well. Lovely city too.