Continuing the hackathon tradition

At ottonova, we have taken on the challenge of bringing a slow-moving and mostly antiquated business, health insurance in Germany, into the 21st century. For over three years already we’ve been offering our customers not only competitive insurance tariffs, but also the digital products that go along with them. All this to make their lives better and to prove that, yes, insurance can be easy and fun to use.

It follows naturally that all of us here are innovators at heart. And innovators like to try out new things. “We are brave” – as one of our company values says. In recent years, we have found that a good way to get our creative juices flowing is by holding internal hackathons. The events are organized by our IT department and welcome everyone in the company who has an idea or simply wants to help out. In the first two editions, almost all the engineers participated, joined by many more colleagues from other departments, working in teams. The enthusiasm before and after the event was overwhelming. Everyone was happy to quickly bring a cool new thing to life, to try out new tech, or just to hack something up together with colleagues with whom they would normally not work.

After such a positive response to the previous editions, we were eagerly awaiting a new installment. What we affectionately call a Hackotton was due to have its third edition this summer. It had already been half a year since the last one. Too long, some would say. We wanted our Hackotton 3.0.

But would a Hackotton be the same if, due to Covid-19, we were not all in the office together? We have been working remotely since spring, when the social distancing measures were introduced. We have been doing so successfully, but this time it was not just work, it was a bit more. We did not know how an event that is mostly social in nature would turn out if organized remotely. Still, we decided to give it a try.

We planned the event to take about two days. That is enough time to try out an idea, especially when you’re able to hack it together without the rigor of our normal style of work. It all started with a kick-off. While in past editions this was where ideas were pitched and teams were formed, this time around it was just a very short Zoom call to mark the start. The idea gathering and team forming were replaced with a Confluence page with a join option for team members. As this was announced well in advance, there was plenty of time to think about topics and decide which teams to join. We ended up with a total of 10 teams and, for the first time, we were joined by external guests.

When the Hackottons were held in the office, all the planning and communication was done in person. There was a constant rustle on the floor, caused by people changing desks to be close to their team, or heated discussions between team members. This time, while the office was still available for the few who chose to go there, most people worked remotely. So the close collaboration mode needed to be emulated somehow. Though some quick planning was done in the beginning to distribute tasks, in the end we relied mostly on instant messaging (Mattermost) and video calls (Zoom) to get things done.

As we have the good habit of developing only containerized applications, and have considerable practice with this, even the “hacked” solutions built during the Hackotton are put into Docker containers. This makes collaboration far easier and allows building more complex systems even in hackathon mode. Especially now, when working remotely, it feels great when a teammate hands you an already built container that you can simply connect to the part you are implementing. As a follow-up advantage of this setup, we can quickly add these containers to AWS’s ECS in our pre-live environment, so that we demo our ideas under realistic conditions and later, with the infrastructure already solved, have a smooth path towards a live release.
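To give a flavor, a hackathon-grade container does not need much. The following Dockerfile is a minimal sketch; the base image, paths, and port are illustrative assumptions, not our actual setup:

```dockerfile
# Minimal sketch of a hackathon-grade image; base image, paths and port
# are illustrative assumptions, not our real setup.
FROM php:7.4-apache

# Drop the hacked-up code into the web root.
COPY src/ /var/www/html/

EXPOSE 80
```

A teammate can then build it with docker build -t hackotton/idea . , run it with docker run -p 8080:80 hackotton/idea, and wire their own part against http://localhost:8080.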

At the end of an intense second day of hacking, and after hurrying to bring our projects to a functional state and throw together some sort of presentation, we had the chance to show our ideas to the whole company (over Zoom) and see what the other teams had managed to build. Every team was given a 5-minute slot to demo their project and impress the audience. It was a hard task to fit all the passion the teams had poured into their projects into 5 short minutes, but the moderators were understanding and allowed for more time when the limit was overstepped.

After all the teams got their turn in the spotlight and demoed their ideas, we had an online voting round to choose a winner. The laurel wreath for the 3rd Hackotton edition went, by a clear margin, to a very fruitful collaboration between the Android developers and one of our designers. They came up with a radically different approach to structuring our mobile apps. They showed how, by putting the focus on doctor appointments, interactions with our apps can be greatly simplified. Their solution takes the currently separate Timeline, Documents and Chat sections in the apps and brings them together under Appointments. This way, related invoices, claim settlements and doctor appointments can be reviewed in one place, and communicating with our customer support becomes easier. This will give our app users a better overview of and control over their interactions with healthcare providers and with us as an insurance company, and consequently make them happier with our service.

Apart from the great mobile app improvement idea, all the other teams brought convincing proposals. In this edition we got to see new tooling for our colleagues in the Sales department, improvements in document processing using OCR, optimizations to our website, internal applications that make our work easier, and even more improvements to our customers’ experience with our products. The response from the audience was overwhelmingly positive, and even though the event ended with the presentations and voting, discussions about the projects continued well into the following days.

As any hackathon comes with a boost in morale and team spirit, we can already conclude that organizing them is a long-term win for the participants and the company overall. For us at ottonova, the past Hackottons happened to come with some short-term wins as well. Some of the projects ended up being used – either directly, or after being planned into our roadmap. From past events we have “Agento”, a very handy internal automation tool that we use to quickly insure children attached to a parent’s account, as well as “ottoPrint”, our own PDF generation service, which is much faster and more customizable than the off-the-shelf solution we had been using until then. So we are fairly confident that some of this edition’s projects will end up being used. The winner has already sparked the interest of the product owners.

With this third edition, organized remotely no less, the Hackotton has become part of our company culture and has proven that it can endure. As we have come to know that “after the Hackotton is before the Hackotton”, we’re now eagerly awaiting the fourth one, and using the time to come up with cool new ideas.

php.barcelona

Together with the rest of the PHP engineers at ottonova, I had the pleasure of attending the PHP.Barcelona conference in November this year. It was a great experience, spanning two full days.

I’ve put together some quick, and biased, notes about the presentations. Here goes…

Day 1

Opening Keynote – Rasmus Lerdorf

A really uplifting beginning of the conference, with Rasmus going down memory lane. We got to know some of the motivation behind, and the process of, creating PHP. All this further strengthened my opinion that it didn’t really start as a serious programming language. (OK, I admit, it has progressed since then, so don’t start throwing stones.)

A funny highlight was the explanation that “PHP is vertically consistent” – the PHP functions match the vendor functions they provide an API to. Of course, this leaves the language inconsistent with itself.

Also really enlightening was the clarification of why adding things like further type checking, generics, or class modifiers like “immutable” would be a serious performance hit to the language – so we should really stop hoping that any of that will come soon.

From Helpers to Middleware – Marco Pivetta

A nice practical presentation showing how a design can incrementally evolve from what would basically be spaghetti code into a proper modular and scalable middleware-style architecture. While a good start, I would have hoped for a deeper dive into this style, because where it stopped, it felt like it had just scratched the surface.

Microservices gone wrong – Anthony Ferrara

Although this was just a public retrospective of a specific project, which may not resonate with everyone, for me and the work we do at ottonova there were still a couple of valuable lessons to take home:

  • Messages that microservices use to synchronize can be differentiated into proper Events and RPCs, and these categories can, and maybe should, be treated differently. The latter require a response back, while the former don’t really need one. We don’t have this clear separation ourselves yet, but the need for it is definitely starting to show (a possible shape for it is sketched after this list).
  • Each entity in your domain does not need to have its own service. Larger services are also fine, if they make sense for your use case. Our own setup is using domain-defined microservices, of various sizes, so seeing that splitting everything aggressively may backfire will make us think twice when extracting a new microservice.
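To make the distinction concrete, here is one possible shape for it in PHP. This is my own sketch, not something from the talk, and the interface names are invented:

```php
<?php

// A message is just a named payload travelling between services.
interface Message
{
    public function payload(): array;
}

// An Event states a fact that already happened; the publisher
// does not expect any response from whoever consumes it.
interface Event extends Message
{
}

// An RPC-style Command expects exactly one handler to reply,
// so it carries the information needed to route the response.
interface Command extends Message
{
    public function replyTo(): string;
}
```

Treating the two differently (e.g. fan-out for Events, request/reply queues for Commands) then follows naturally from the types.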

He has a cute dog.

Serverless PHP applications with Bref – Matthieu Napoli

I guess it’s nice to see that there is a way to hook up PHP to Lambda, but then again, why go to the effort of forcing it instead of just using a directly supported language? That aspect aside, it was interesting to get an intro to AWS Lambda, since I haven’t tried it out myself yet.

JSON Web Tokens – Sam Bellen

Not that much to say here: JWTs. We already use them; you should use them too. They’re easy to work with and really useful. At their core they are just signed information in a nice, standard JSON format. Nevertheless, it’s still a very powerful concept, as it enables you to transfer claims securely from one party to another.
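As a small illustration of that “signed JSON” idea, here is a minimal round trip in PHP. It assumes the firebase/php-jwt package (version 6+); the secret and claims are made up:

```php
<?php

require 'vendor/autoload.php';

use Firebase\JWT\JWT;
use Firebase\JWT\Key;

$secret = 'change-me'; // shared secret for HMAC signing (illustrative)

// Sign a set of claims into a compact token...
$token = JWT::encode([
    'sub' => 'user-42',     // who the token is about
    'exp' => time() + 3600, // valid for one hour
], $secret, 'HS256');

// ...and verify and read them back on the other side.
$claims = JWT::decode($token, new Key($secret, 'HS256'));

echo $claims->sub; // "user-42"
```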

Developing cacheable PHP applications – Thijs Feryn

Well, this one was awkward, especially since I had the pleasure of sitting through this exact presentation some months before. Thijs is a dedicated evangelist, you have to give him that. He manages to squeeze everything out of what Varnish can do and serve it to his audience on a silver platter.

Now come the buts. The use cases considered in the presentation are outdated, focusing far too much on server-side rendering and templating. And I particularly did not enjoy seeing an auditorium full of developers (some maybe more impressionable than others) being instructed to keep application logic in their caching layer.

Nothing wrong with Varnish itself, though. And since we need to keep all our data inside Germany for legal reasons, maybe we’ll need to consider an on-premises caching solution of our own in the future.

PHP Performance Trivia – Nikita Popov

A really confident presentation from one of the core PHP contributors, containing a deeper dive into how the OPcache works and what its limitations are. Not that serious, if you ask me. With a bit of care for how you handle deployments, you should be fine.

It was also interesting to see some benchmarks showing that using objects instead of arrays is much more memory-efficient in PHP. Not that the opposite would have made us drop Value Objects, but it is still good to see that we’re already using the memory-friendly option.
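If you want to see the effect yourself, a quick and admittedly unscientific script is enough; the exact numbers will vary by PHP version:

```php
<?php

// Compare the memory cost of 100k associative arrays vs. 100k small objects.
class Point
{
    public $x;
    public $y;

    public function __construct($x, $y)
    {
        $this->x = $x;
        $this->y = $y;
    }
}

$before = memory_get_usage();

$arrays = [];
for ($i = 0; $i < 100000; $i++) {
    $arrays[] = ['x' => $i, 'y' => $i];
}
$afterArrays = memory_get_usage();

$objects = [];
for ($i = 0; $i < 100000; $i++) {
    $objects[] = new Point($i, $i);
}
$afterObjects = memory_get_usage();

printf("arrays:  %d bytes\n", $afterArrays - $before);
printf("objects: %d bytes\n", $afterObjects - $afterArrays);
```

On PHP 7 the object version comes out considerably smaller, since the property names are stored once in the class instead of once per array.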

Get GOing with a new language – Kat Zień

While there was nothing spectacular about this presentation, just an intro to Go, I still enjoyed seeing that there is a clear interest from the PHP community in exploring other languages. Go is particularly relevant for us at ottonova, since we’re already using it for our messaging setup and we plan to try it in more areas where we think it would do a better job than good old PHP.

Day 2

Advanced Web Application Architecture – Matthias Noback

A nicely structured first dive into DDD and its rationale. We’re already doing most of what this presentation talks about, and much more, at ottonova, but it helps to see someone else’s take on it and double-check our approach.

It was reassuring to see that one of the first requirements of DDD is to separate your Domain from your Infrastructure, a thing that we carefully follow, along with some more advanced techniques.

I also really appreciated the general advice that not every project is the same, and that if something applies somewhere, it does not automatically apply to your situation. This is, of course, common sense, but it doesn’t hurt to hear some basic common sense from time to time, in a world overflowing with strong opinions.

Working with Webhooks – Lorna Mitchell

A decent, structured presentation about webhooks – an architectural style for async communication. Nothing groundbreaking here, but since this style is not commonly used (or at least not by me), it’s nice to be reminded it exists. Good tip on using ngrok for exposing local endpoints.
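For illustration, a webhook receiver often boils down to “verify the signature, acknowledge fast, process later”. A minimal PHP sketch, where the header name and secret are assumptions (each provider documents its own scheme):

```php
<?php

// Webhook endpoint sketch: verify an HMAC signature over the raw body.
$secret  = getenv('WEBHOOK_SECRET') ?: 'change-me';
$payload = file_get_contents('php://input');
$given   = $_SERVER['HTTP_X_SIGNATURE'] ?? ''; // hypothetical header name

$expected = hash_hmac('sha256', $payload, $secret);

// hash_equals() avoids timing side channels when comparing signatures.
if (!hash_equals($expected, $given)) {
    http_response_code(401);
    exit;
}

// Acknowledge quickly; do the heavy lifting asynchronously (e.g. via a queue).
http_response_code(202);
$event = json_decode($payload, true);
// ... enqueue $event for processing ...
```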

Supercharge your apps with ReactPHP & PHP-PM – Albert Casademont

Since FPM is already showing its scaling limitations for us, it was particularly interesting to see what other options would be available.

Many of the requests we process follow a common pattern: we get some input, process it, pass it to another (possibly external) service, and wait for a response. Then we do some more work on it and respond to our client. This means a lot of capacity is wasted on our side, as our workers spend much of their time just waiting for HTTP responses. This is where some concurrent PHP would come in handy: while one request is waiting, other requests can be handled. So we will definitely be looking into either PHP-PM or Swoole in the future.
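As a taste of what concurrency buys here, a sketch with ReactPHP’s HTTP client (the URLs are invented): both upstream calls are in flight at the same time, and the event loop is free to do other work while they wait.

```php
<?php

require 'vendor/autoload.php';

use React\Http\Browser;
use function React\Promise\all;

$browser = new Browser();

// Fire both upstream requests concurrently instead of blocking on each.
$promises = [
    'profile' => $browser->get('https://api.example.com/profile/42'),
    'claims'  => $browser->get('https://api.example.com/claims/42'),
];

// Continue once both responses have arrived; recent ReactPHP versions
// run the event loop automatically at the end of the script.
all($promises)->then(function (array $responses) {
    foreach ($responses as $name => $response) {
        echo $name . ': ' . $response->getBody() . PHP_EOL;
    }
});
```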

It’s all about the goto – Derick Rethans

This was a nice theoretical dive into how PHP parses and executes code. For someone with even a minimal Computer Science background, though, I think it was still fairly basic, and I don’t think there was much to take away.

Develop microservices in PHP – Enrico Zimuel

This was an interesting walk-through of some of the benefits, and specific concerns, of using microservices. Nothing too new for us, since we already rely heavily on microservices, both in the PHP group and in our other teams.

One thing we noted down to improve is the standardisation of our error responses (see the sketch below for one possible format). Good hint. We’ll definitely look into that one.
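One widely used option for such standardisation is RFC 7807 (“Problem Details for HTTP APIs”). A sketch of what an error response could then look like in PHP; the values are invented:

```php
<?php

// Emit an RFC 7807 "problem details" error response.
http_response_code(422);
header('Content-Type: application/problem+json');

echo json_encode([
    'type'   => 'https://example.com/problems/validation-error',
    'title'  => 'Validation failed',
    'status' => 422,
    'detail' => 'The field "birthDate" must be a date in the past.',
]);
```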

Mutation Testing – Better code by making bugs – Théo Fidry

Mutation Testing seems really cool as a theoretical concept, and it’s nice to see that someone is trying it out. I’m not sure how it would work out in practice. So far, I can see two major downsides. First, it simply takes a lot of time to run such a test suite; even with optimisations, this could take hours. Second, the testing itself seems only as good as your mutators, and I think writing relevant mutators is not a trivial task. Having some that just replace operators is straightforward, but how much does that help?
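To make the idea concrete, here is what a mutation tester (in the PHP world, for example, the Infection tool) automates, done by hand; the example is mine, not from the talk:

```php
<?php

// Production code under test.
function isAdult(int $age): bool
{
    return $age >= 18; // a typical mutator flips ">=" to ">"
}

// A test far from the boundary does NOT kill that mutant:
assert(isAdult(30) === true); // passes for both ">=" and ">"

// A boundary test does:
assert(isAdult(18) === true); // fails once ">=" is mutated to ">"
```

A mutant surviving the first test is exactly the signal that the test suite has a gap.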

Back to Munich

The rest we sadly had to skip, so as not to miss our plane back home.

Overall, as we hoped when booking the tickets, the lineup was a solid one, and the speakers delivered. Kudos to them, as well as to the organizers. We came back with lots of new ideas, some of which we will try in the near future, and with confirmation that we are on the right track with many of our architectural decisions.

We will definitely have Barcelona on our list for next year as well. Lovely city too.

Our Android App’s Permissions Explained

Permissions

Starting with Android 6 (Marshmallow), Android has been improving and reshaping permissions, giving users more control and a better overview of what apps do with those permissions. App permissions are moving in the direction of more transparency and security.

Still, it’s not always easy to understand why some permissions are sometimes needed. We at ottonova would like to clarify why we request certain permissions and what we use them for.

Permissions overview

The purpose of a permission is to protect the privacy of an Android user. Android apps must request permission to access sensitive user data (such as contacts and SMS), as well as certain system features (such as camera and internet). Depending on the feature, the system might grant the permission automatically or might prompt the user to approve the request.

Permissions are used to request system functionality. Some permissions require user approval and some don’t; it depends on the permission’s protection level.
There are four levels of permissions: Normal, Signature, Dangerous, and Special.
Additionally, there are custom permissions that can be created to control access to app services. Basically, an app can declare these custom permissions so that access to its own services (or another app’s) can be gated behind them.

Protection levels

  • Normal – no user approval needed. Provides access to data or resources outside the app sandbox that pose no risk to private data or other apps’ operations. Examples: Wi-Fi state, Internet, Bluetooth.
  • Signature – no user approval needed. Granted at install time, but only to apps signed with the same certificate as the app that defines the permission. Examples: battery stats, carrier services.
  • Dangerous – user approval needed. Can provide access to sensitive data or resources, or could potentially affect the user’s stored data or other apps’ operations. The user must explicitly authorize these permissions, and the app can only use a functionality that depends on them after that. Examples: read contacts, camera, capture audio.
  • Special / Privileged – user approval needed. Similar to dangerous permissions, but authorization is managed by the Android operating system. Apps should try to avoid using these. Examples: write settings, system alert windows.
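For reference, an app declares the permissions it wants in its AndroidManifest.xml. The snippet below is a generic sketch, not our actual manifest:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">

    <!-- Normal: granted automatically at install time -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- Dangerous: additionally requires a runtime prompt on Android 6+ -->
    <uses-permission android:name="android.permission.CAMERA" />
</manifest>
```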

What permissions do we use?

Camera

Permission name: android.permission.CAMERA
Protection level: Dangerous

One of the core features of the ottonova app is that you can quickly scan an invoice or another document and upload it to us. We could use the native camera and not request this permission, but then users would lose the features provided by our in-app camera, such as automatic boundary/edge detection of documents and editing functions like cropping and rotating.

Permission name: android.permission.FLASHLIGHT
Protection level: Normal

Used to turn the phone’s flashlight on or off while users scan a document.

Storage

Permission names: android.permission.READ_EXTERNAL_STORAGE, android.permission.WRITE_EXTERNAL_STORAGE
Protection level: Dangerous

Besides scanning a document on the spot, users may also want to upload a document, such as an image or a PDF, from their phone’s storage. That’s why we require this permission: to read an imported file from external storage. It is not strictly necessary for uploading invoices, only for importing a file. We don’t scan the external storage. The implementation of this feature calls the phone’s default file picker, and most file picker apps that come with Android don’t actually require the calling app (ottonova in this case) to request this external storage permission, but unfortunately some do. That’s why we request it, so that your experience as a user is as smooth as possible. Android 10 introduces some changes to these permissions: an app will no longer have to request access to all of external storage and will be able to request access only to media folders within it.

Other app capabilities

Permission names: android.permission.ACCESS_NETWORK_STATE, android.permission.ACCESS_WIFI_STATE, android.permission.INTERNET
Protection level: Normal

All of these permissions are related to internet access. INTERNET is needed so we can perform operations that require the internet, and the other two simply let us know whether we’re connected to a network and whether we have internet access at all.

Permission name: android.permission.WAKE_LOCK
Protection level: Normal

This permission allows an app to keep the phone awake for a certain amount of time. In the ottonova app’s case, it is used by our tracking library (Firebase by Google) to keep the phone awake while Firebase communicates with Google services to provide helpful app usage data to the server. Users can disable app usage tracking at any time under App settings > Notifications. If you disable tracking, this permission won’t be used at all.

Permission name: android.permission.USE_FINGERPRINT
Protection level: Normal

In the ottonova app, we have a PIN screen to keep your data safe. You can either enter a defined PIN or use your fingerprint to unlock the app.

Permission names: com.google.android.c2dm.permission.RECEIVE, com.google.android.finsky.permission.BIND_GET_INSTALL_REFERRER_SERVICE
Protection level: Normal (Custom permission)

Both of these permissions are defined by Google. RECEIVE is used to receive push notifications, and BIND_GET_INSTALL_REFERRER_SERVICE is used by Firebase to recognize where the app was installed from.

Permission name: android.permission.FOREGROUND_SERVICE
Protection level: Normal

When a document is being uploaded, this permission lets users send the app to the background while we finish the upload operation. Whenever this permission is in use, a notification is always shown.

Conclusion

Permissions are getting more transparent, and users are getting more control over what apps can do. These are vital improvements that help keep user data safe.
Still, we feel that there are some improvements to be made in this field. For instance, external storage is still not a very safe place to store sensitive data, because other apps can access it without system privileges just by requesting the external storage permission (this is starting to change with Android 10). That’s one of the reasons we don’t store any sensitive user-related data locally; all sensitive data is stored remotely on our servers. At ottonova we use only the bare minimum of permissions needed to make our app and services work, always keeping in mind potential vulnerabilities that could compromise our customers’ data.

We value transparency; that’s why we made this post.

We welcome changes made to improve app permissions and overall security regarding users’ data privacy. For example, Android 10 is introducing new permission scopes for external storage access, meaning that apps will be able to simply request access to media folders (e.g. the Images or Download folder). Also, although not used by ottonova, asking for location while in the background will require user permission. There are more changes; see Android’s documentation on Android 10 privacy changes for the full list.

Recruiting Backend Engineers at ottonova

To do our part and share with the community, as well as to provide a bit more transparency into ottonova and how we are building the state-of-the-art software that powers Germany’s first digital health insurance, here are some words about how the Backend Team goes about finding new team members.

We’re going to cover what we are doing in the Backend Team, what we value, and how we ensure we hire people who share our values.

The Backend Team

Our team is responsible for many of the services that power our health insurance solutions at ottonova. This includes our own unique functionality, like document management, the appointments timeline and guided signup, as well as interconnecting industry-specific specialized applications.

Under the hood, we manage a collection of independent microservices. Most of them are written in PHP and deliver REST APIs through Symfony, but a couple leverage Node.js or Go. Of course, we keep everything we use up to date and upgrade periodically.

As a fairly young company, we spend most of our time adding new functionality to our software, all in cooperation with product owners, but at the same time we invest a fair amount of effort into continuously improving the technical quality of our services.

Values

Technical excellence is one of our team’s core values. To this end, we are practitioners of Domain-driven design (DDD). Our services are built around clearly defined domains and follow strict separation boundaries.

Because we have created an architecture that allows it, and because we have the internal support to focus on quality, we invest a lot into keeping the bar high. Whenever needed, we refactor to make sure the Domain Layer stays up to date with the business needs, and that the Infrastructure Layer is performant enough and can scale.

Although most of our work is done using PHP, we strongly believe in using the right tool for the job. Modern PHP 7+ happens to be a pretty good tool for describing a rich Domain, but we like to be pragmatic, and where it is not good enough, in terms of performance for example, we are free to choose something more appropriate.

Expectations from a new team member

From someone joining our team we expect, first of all, the right mindset for working in a company that values quality. We are looking for colleagues who are capable of and eager to learn, and happy to share their existing knowledge with the team.

A certain set of skills is needed as well, or the right foundation for developing those skills. We are particularly interested in a good mastery of programming and PHP fundamentals, Web Development, REST, OOP, and Clean Code.

As actual coding is central to our work, we require and test the ability both to write code on the spot and to come up with a clean design.

These expectations can be grouped into four main pillars that a candidate will be evaluated on:

  1. Mindset – able and willing to both acquire and transfer knowledge inside a team
  2. Knowledge – possesses the core knowledge needed for using the languages and tools we use
  3. Clean Design – able to employ industry standards to come up with simple solutions that can be understood by others
  4. Coding Fluency – can easily transfer requirements into code and coding is a natural process

The Recruiting Process

To get to work with us, a candidate goes through a process designed to validate our main pillars. All this while giving them plenty of time to get to know us and have all their questions answered.

It starts with a short call with HR, followed by a simple home coding assignment. Next there is a quick technical screening call. If all is successful so far, we finish it up with an in-person meeting where we take 1-2 hours to get to know each other better.

The Coding Assignment

Counting mostly towards the Clean Design pillar, we start our process with a coding assignment that we send to applicants. It is meant to let them show how they would normally solve a problem in their day-to-day work. It can be done at home with little time pressure, as it is estimated to take a couple of hours, and it can be delivered within the next 10 days.

The solution could potentially fit into a few lines of code. But since the requirement is to treat it as a realistic assignment, we expect something a bit more elaborate. We are particularly interested in how well the design reflects the requirements, the usage of clean OOP and language features, the correctness of the result (including edge cases), and tests.

We value everyone’s time and we don’t want unnecessary effort invested into this. We definitely do not care about features that were not asked for, overly engineered user interfaces or formatting, or design patterns used just for the sake of showcasing knowledge.

The ideal solution is complex enough to reflect the requirements in code, but simple enough that anyone can understand the implementation without explanations.

The Tech Screening

To test the Knowledge pillar, we continue with a Skype or regular phone call. This step is designed for efficiency: by timeboxing it to 30 minutes, we make sure everyone has time for it, even on short notice. We don’t want to lose the interest of good candidates by letting them get lost in a scheduling maze.

Even though it’s short, this call gives us a considerably higher match rate for the in-person interview. Over time we have found that there really are just a handful of fundamental concepts that we expect a new colleague to already know; many of the others can quickly be learned by any competent programmer.

All topics covered in this screening are objectively answerable, so at the end of a successful round we can extend the invitation to the next step.

In-Person Interview

This is when we really get to know each other, ideally at our office in central Munich – easier for people already close by, but equally doable for those coming from afar.

In this meeting we start by introducing ourselves to each other and sharing some information about the team and the company in general.

Next we ask about the candidate’s previous work experience. Through this, and the overall way our dialogue progresses, we want to check the Mindset pillar and ensure that the potential new colleague would fit well into our team.

After that, we go into a new round of “questioning” to test the Knowledge pillar more deeply. It is similar to the Tech Screening, but this time open-ended; informed opinions are expected and valued. We definitely want to talk about REST, microservices, web security, design patterns and OOP in general, or even agile processes.

Then comes the fun part: we get to write some code. Well… mostly the candidate writes it, but we can also help. We go through a few mostly straightforward coding problems that can be solved on the spot. We are not looking for obscure PHP function knowledge, bullet-proof code, or anything ready to be released. We just want to see how a new problem is tackled and make sure that writing code comes naturally to the candidate. With this we cover the Coding Fluency pillar.

Afterwards it’s the interviewee’s turn. We take our time to answer any questions they may have. They get a chance to meet someone from another team and get a tour of the office.

What’s next?

The interviewers consult, and if there is a unanimous “hire” decision, we send an offer. In any case, we inform the candidate of the outcome as soon as possible, usually within a few days.

Interested in working with us? To get started, apply here: https://www.ottonova.de/jobs

About our open QA position

We are constantly on the lookout for new employees; among others, we are also looking for QA specialists.

Franz-Xaver had some interesting questions about the position, which we would like to share with you together with our answers.

Here we go:

1. Is test automation part of the QA role? If so, what experience in which programming languages/frameworks is required?

Yes, test automation is a very important part of our work. For automating the web tests we primarily use Python 3.
The UI tests for the iOS and Android apps are implemented in Java. Experience with Appium or Selenium is very welcome; we use both. Experience with classic patterns such as the Page Object pattern and with common build tools such as Jenkins is also a big plus.

2. What kinds of tests are to be performed and developed (acceptance tests, regression tests, UI testing, backend API tests…)?

We develop and run regression and acceptance tests. Developing and running the automated UI and backend tests is also part of our daily activities.

3. Which application exactly is to be tested? Is it the ottonova app? If so, should both versions (iOS and Android) be tested?

In our QA team, we test both our various web applications and our mobile apps (iOS and Android).

4. How big is the development team and how many members does the test team have? How many releases would need to be tested per day?

There are several development teams, consisting of backend, frontend and mobile developers as well as QA. The development teams use agile methodologies such as Scrum and are organized cross-functionally, so they also include, for example, product owners or members of our insurance departments. Currently, two dedicated people are responsible for QA testing full-time, covering both manual and automated testing.

5. Is the working student position aimed exclusively at (enrolled) students, or can experienced professionals with completed vocational training also apply?

In the current job posting, we are deliberately addressing working students, because we have had good experiences with them in the past. We are, of course, always happy to pass a good deal of knowledge and experience on to students. Naturally, applicants with completed vocational training have the same chances.

6. Is this a fixed-term contract? What are the weekly working hours? What development opportunities would I have in your company in the QA field?

We are flexible on all of these points and happy to accommodate applicants’ wishes; this applies to the weekly hours as well as to the contract term. Working student contracts are limited to one year by default and come with a fixed hourly wage. Our contracts for full-time employment always include a six-month probationary period. A permanent position after a stint as a working student (or intern) is always our ultimate goal. Development opportunities are individual to each employee; we support you in achieving your goals wherever we can.

Welcome to our ottonova.tech Blog!

Hello everybody,

with this post we want to kick off the ottonova tech blog and also explain a little bit why we’re doing that.

With this blog, we not only want to keep up the tradition that “every tech-centric startup should have a blog”.
We sincerely want to give you insights into how we create the technology and platform behind ottonova – the new kid on the block, the first new private health insurance in Germany in over 17 years.

We’re assembling a strong and experienced IT team, currently over 25 people and growing. We’re focusing on everything it takes to create and run a private digital health insurance in 2018: multi-platform programming, cloud infrastructure, quality assurance, data science, data security, and, last but not least, office IT.

Our idea is to regularly publish interesting articles about our work and events; around 2 articles per month seems reasonable. Every IT member at ottonova is invited to participate, and other departments may contribute as well, as long as it is within this blog’s scope: technology.

Hope you’ll regularly visit our humble pages,

⭐️,
Andreas