Microservices Migration Is Entering the Mainstream: An Overview of Case Studies and Strategies
Migrating to microservices allows companies to scale organically. Nonetheless, most startups start with a monolithic architecture because it’s faster to deliver a product and get the quickest return on investment. This status quo has fueled the current monolith-to-microservices trend and a pressing need to reengineer existing solutions. What reengineering strategies exist, and what pitfalls can we expect from them? Let’s take a closer look at our members’ case studies to get a real-life picture of microservices migration.
The beauty of microservices
Matt Pistone, CTO at Riskalyze, a risk assessment platform, says the company has an expanded product line and several dozen engineers on staff. To manage this, most of its teams are split into squads across product lines. Typically, each squad comprises about four engineers and a QA technician, with a product manager who may oversee several squads. The main goal is to make the work of each squad autonomous. Matt says:
“[They] take projects and take design mocks and the inputs from outside their team, lay out a plan, go build it, and ship it without a lot of cross-team blockers and dependencies.”
According to Matt, Riskalyze is moving toward a microservices architecture, and this team structure works well with it:
“[One] team might spend a month building some service or some new functionality, and then that serves as input to another team’s upcoming project.”
Matt says that only the DevOps team is not split into squads. Aaron Klein, Riskalyze’s CEO, comments:
“That’s the beauty of microservices. [They involve] a lot of engineers with a lot of different backgrounds.”
The benefit here is the ability to parallelize the development of separate units and multiply the speed of growth, especially for a large team. Usually, there are three scenarios regarding microservices: an initial microservices architecture, a monolith core with microservices around it, and early-stage full reengineering (because late-stage re-engineering is not advisable due to its extremely high cost). Let’s look at what cases proved successful for each of these strategies.
Microservices from the very beginning
Martin Polasek, CTO at Evolute, a wealth management platform, says that its product architecture was initially a set of microservices, with some parts added to it as monoliths. Once the business matured, Evolute began to split the whole thing into smaller services, taking into account capability and maintenance.
Microservices allowed Evolute to use the tech stack that best matches its needs. The .NET platform is used predominantly for business-related tasks, with C++ coming into play on the optimization and number-crunching sides. Evolute has recently migrated to .NET Core 2.0, which, as per the initial plan, can run on Linux while the development environment remains on Windows. Some other components are written in Python and are mainly used for data integration.
Thus, Martin managed to benefit from both the development speed of the monolith and the scalability of microservices.
Coexisting with a monolith core
If it ain’t broke, don’t fix it. Based on this philosophy, companies often choose to leave their cores as-is and build microservices around them. Often, this strategy is preferred by companies that have their own intellectual property or innovative developments in their core, such as Advisor Software (ASI).
Rishi Srivastava, ASI’s CTO, runs a distributed enterprise platform that provides seamless single sign-on (SSO) access to all ASI products: DA, CAS, Goal-Based Planning, MCS, Rebalancer, and a RESTful API library. According to Rishi, ASI has a blend of legacy products and more modern ones, so the architecture tends to be distributed and often employs microservices. It uses multiple databases and caches, including PostgreSQL, MS-SQL, Redis, and Amazon S3. For orchestration, it uses Amazon ECS.
“[A one-size-fits-all] database approach doesn’t fit in a highly distributed application. Transitioning to microservice container-based architecture makes our platform highly scalable, with uniform response times.”
Full reengineering and migration
If the core turns out to be too difficult to maintain, there’s no option but to rebuild it.
Steve Mays, ex-CTO at Trizic, a wealth solutions automation platform, needed to scale Trizic’s initial architecture. The initial approach was to split every business process into its own set of services, each with at least two components: a producer, which reads data from the data store and writes it to a queue, and a consumer, which takes this data off the queue and acts on it. The contract between the two is the message format on the queue; where the producer gets its data and what the consumer does with it are immaterial to each other. Steve had an “aha” moment when he realized that Trizic could turn this into “Henry Ford’s assembly line concept but for software.” By having engineers agree on the format and data (the contract) and splitting work into a constellation of producers and consumers (collectively, “workers”), many engineers can work on the same project without running into merge conflicts.
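The producer/consumer contract Steve describes can be sketched in a few lines of Python. This is a minimal illustration, not Trizic's implementation: the in-process queue stands in for a real message broker, and the account data, message fields, and "rebalance" action are all invented for the example. The only thing the two sides share is the agreed-on message shape.

```python
import json
import queue

# An in-process queue standing in for a real message broker (e.g. RabbitMQ or SQS).
work_queue = queue.Queue()

def producer(accounts):
    """Reads records from a data store (here, a plain list) and publishes messages."""
    for account in accounts:
        # The "contract" is this message shape; nothing else is shared between sides.
        message = json.dumps({"account_id": account["id"], "action": "rebalance"})
        work_queue.put(message)

def consumer():
    """Takes messages off the queue and acts on them; it never sees the data store."""
    results = []
    while not work_queue.empty():
        message = json.loads(work_queue.get())
        results.append(f"processed {message['action']} for {message['account_id']}")
        work_queue.task_done()
    return results

producer([{"id": "A-1"}, {"id": "A-2"}])
print(consumer())
```

Because the sides meet only at the queue, either one can be rewritten, scaled, or assigned to a different team without touching the other, which is what makes the "assembly line" parallelism possible.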
Also, queueing-based microservices give Trizic the ability to spin up additional Docker containers on demand if a given service needs more performance or fault tolerance, at very low cost and in an entirely automated manner. Queues provide fast, RAM-based data storage and strong assurances that each message is read only once, while allowing multiple services to take work from the queue in parallel.
“Now we have an automated, cloud-deployed, test driven, secured, auditable platform.”
Trizic built an abstraction layer to allow access to multiple data partners via microservices, not just its own local data stores. When the time came to add custodians, the company quickly deployed queue-based microservices to publish accounts that need orders generated and executed and to communicate with the proper custodian, using data from the right data partners. Thus, new services can be rolled out in just a few months, not a year.
“We can implement services in any language, like GoLang, for example. We have a very small service that tells us if today is a trading day and, if so, whether it is shortened. This service doesn’t really have fast-moving data, doesn’t access any other data sources, and reads a flat file once a year and loads that into RAM, so we decided to try the service in GoLang, living alongside a Java architecture.”
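The trading-day service Steve mentions is a good example of how small such a service can be. Here is a rough sketch of its core logic in Python rather than GoLang; the holiday dates, the half-day flag, and the function name are all made up for illustration, and a real service would load them from the flat file and expose the answer over HTTP.

```python
import datetime

# Hypothetical holiday data that the real service would load from a flat file
# once a year; these dates and statuses are invented for the example.
HOLIDAYS = {"2024-07-04": "closed", "2024-11-29": "shortened"}

def trading_day_status(day: datetime.date) -> str:
    """Returns 'closed', 'shortened', or 'open' for a given date."""
    if day.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return "closed"
    return HOLIDAYS.get(day.isoformat(), "open")

print(trading_day_status(datetime.date(2024, 7, 4)))    # a market holiday
print(trading_day_status(datetime.date(2024, 11, 29)))  # a half day
```

Because the whole dataset fits in RAM and changes at most once a year, the service has no database, no dependencies on other services, and no fast-moving state, which is exactly what made it a safe candidate for trying a new language alongside the existing Java architecture.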
Beyond the strategies
Reengineering definitely has its advantages. Still, reengineering the core requires significant expenditure and highly qualified staff. For that reason, in late startup stages it’s advisable only if the existing system cannot cope with the load.
Vasyl Soloshchuk, CEO at INSART, a FinTech engineering services company, admits that being monolithic is sometimes viewed negatively, but his company’s experience is that it’s neither good nor bad in itself. It depends on the process you have.
“With a small team, you do need to be monolithic. It will be a bit hard to organize a bigger team, but you can just go faster. Microservices is basically used not for more sophisticated architecture but to split the process in parallel so you can add more and more people into it and introduce cross-functional teams.”
He says that when you do microservices, you will definitely slow down all the development because you need to support it all. But with a bigger team, you’ll be able to work in parallel.
“For us, it looks like a pure microservices approach will be beneficial for development teams of around 30 people. With a smaller team of 10, even 15 developers, let’s say engineers and QA people included, it really doesn’t make sense to go purely microservices because you need to support all of this infrastructure.”