
Our lessons learned from following micro-service architecture for two years

Starting in 2015, we began investing in micro-service architecture. Why? Because it allows each service within a complex application to scale independently based on its usage. We have used this architecture for many clients. Here are a few lessons we have learned along the way that we hope you find useful.


Our architecture

Here is an overview of our micro-service architecture. Each service is an ASP.NET Core 1.x/2.x application. The services communicate with each other through RESTful APIs and a message broker such as RabbitMQ. There is one central authentication service which grants access to users. It packs login info into a JWT access token which all other services can recognize and validate to prevent unauthorized access. Client apps can communicate with each service directly or through an API gateway (Ocelot).
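To make the token check concrete, here is a minimal sketch of how a downstream service might validate the JWT issued by the authentication service. It is written in Python purely for illustration (our services are actually ASP.NET Core), and the secret, issuer, and audience names are assumptions, not our real configuration.

```python
# Minimal sketch (Python + PyJWT) of validating the access token issued by the
# central authentication service. Secret, issuer, and audience are placeholders.
import jwt  # pip install PyJWT

SHARED_SECRET = "signing-key-shared-with-the-auth-service"  # hypothetical

def validate_access_token(token: str) -> dict:
    """Return the token claims if valid, otherwise raise an error."""
    try:
        claims = jwt.decode(
            token,
            SHARED_SECRET,
            algorithms=["HS256"],          # assumes symmetric signing; RS256 is also common
            issuer="auth-service",          # hypothetical issuer name
            audience="inventory-service",   # hypothetical audience for this service
        )
    except jwt.InvalidTokenError as err:
        raise PermissionError(f"Unauthorized: {err}")
    return claims
```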



Futurify micro-service architecture


There are a lot of benefits to using this approach, such as:

  1. Each service is independent, so if one fails, the other services keep running as normal. In practice, this fail-safe did not always hold, which I will explain more below.

  2. Multiple teams can work in parallel so that we can finish applications for our clients more quickly. For example, one team works on the booking system while another works on the payment system. We use Postman to define the RESTful APIs for each service, and Google Docs to document the messages exchanged through RabbitMQ.

  3. We could deploy one service to multiple redundant servers to scale it up while keeping other services on a single server, which saved hosting costs for our clients.

  4. We could reuse common services from past projects, such as a payment service that works with Stripe, PayPal, or other payment gateways, or an email service that works with Mandrill and SendGrid. This let us protect our clients' IP while reusing our past efforts on new projects (see the sketch below).
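As a rough illustration of point 4, a reusable payment service can hide the concrete gateway behind one interface, so the same service can be dropped into a new project regardless of which provider the client uses. The class and method names below are hypothetical, not our actual code.

```python
# Illustrative sketch: one interface, multiple payment providers behind it.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int, currency: str, token: str) -> str:
        """Charge the customer and return a provider transaction id."""

class StripeGateway(PaymentGateway):
    def charge(self, amount_cents: int, currency: str, token: str) -> str:
        # Call the Stripe API here; omitted for brevity.
        return "stripe-tx-id"

class PayPalGateway(PaymentGateway):
    def charge(self, amount_cents: int, currency: str, token: str) -> str:
        # Call the PayPal API here; omitted for brevity.
        return "paypal-tx-id"

def process_payment(gateway: PaymentGateway, amount_cents: int) -> str:
    # The rest of the payment service depends only on the interface,
    # so it can be reused across projects and providers.
    return gateway.charge(amount_cents, "usd", "client-card-token")
```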

Too many micro-services

We got too excited at the beginning and split an application into many services: a calendar service, an inventory service, an order service, and so on. We thought we might reuse the calendar service in the future, but we were wrong. Every time we needed to check whether an inventory item was available, we had to call the calendar service, which created too many unnecessary calls. The inventory service must know the availability of each item without depending on other services.


So we merged the calendar service into the inventory service and did the same for other services. The rule of thumb is that each service should be independent and not depend on other services. However, this is not always the case.


The order service calls the inventory service to check for availability before making a reservation, so this is a dependency. What happens if the inventory service goes down: can users still order? There are two ways to handle this issue.

  1. Option 1: the order service tells users that they cannot make a reservation at the moment due to a system error and asks them to check back later.

  2. Option 2: the order service keeps a read-only copy of the inventory's availability. It can still let users make reservations and sync with the inventory service later (see the sketch below).

So, depending on the requirements, we choose the option that fits and implement it.
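Here is a minimal sketch of Option 2, assuming hypothetical service URLs and response fields: the order service asks the inventory service first and falls back to its local read-only copy of the availability data if that call fails.

```python
# Sketch of Option 2: call the inventory service, fall back to local data.
import requests

INVENTORY_URL = "http://inventory-service/api/items/{item_id}/availability"  # hypothetical

def is_item_available(item_id: str, local_availability: dict) -> bool:
    try:
        resp = requests.get(INVENTORY_URL.format(item_id=item_id), timeout=2)
        resp.raise_for_status()
        return resp.json()["available"]          # assumed response shape
    except requests.RequestException:
        # Inventory service is down: fall back to the mirrored read-only data.
        # The reservation is reconciled with the inventory service later.
        return local_availability.get(item_id, False)
```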


Deployment nightmare

In some of the applications we have implemented, there are over ten services working together. Deploying each of them by hand would be a nightmare. We use Azure DevOps to deploy all of our services at once to multiple servers and different environments on AWS and Azure. We have not used Docker for each service yet; it is on our roadmap for Q1 2019, and I will write a follow-up post after we successfully switch.


We also use Lambda functions for small tasks such as sending emails or push notifications. At the moment we deploy them manually by uploading the packages to S3. As we adopt more serverless architecture, we will look into automating deployments of these functions across different environments.
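The kind of automation we have in mind looks roughly like the sketch below: upload the packaged function to S3 and point the Lambda at it with boto3. The bucket, key, and function names are placeholders, not our actual setup.

```python
# Rough sketch of automating a Lambda deployment with boto3 (names are hypothetical).
import boto3

def deploy_lambda(zip_path: str, bucket: str, key: str, function_name: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(zip_path, bucket, key)          # upload the build artifact to S3
    lambda_client = boto3.client("lambda")
    lambda_client.update_function_code(            # repoint the function at the new package
        FunctionName=function_name,
        S3Bucket=bucket,
        S3Key=key,
    )

# Example call (hypothetical names):
# deploy_lambda("send-email.zip", "futurify-deployments", "lambdas/send-email.zip", "send-email")
```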


Fragmented database

Each service has its own database. Some services also have mirrored, read-only databases for other services to access. In the earlier example, the inventory service exposes a read-only availability database for the order service to read. As a result, data is split across multiple databases, which can be a problem for reporting.


Our customers want to see the orders for an inventory item. One way to compile that report is to send one RESTful API call to the inventory service to get the inventory details and another to the order service to get all orders by item ID, then merge the results into one report.
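A sketch of this first approach, assuming hypothetical URLs and response fields:

```python
# Sketch: call both services and merge the results into one report.
import requests

def build_inventory_report(item_id: str) -> dict:
    item = requests.get(f"http://inventory-service/api/items/{item_id}", timeout=5).json()
    orders = requests.get(f"http://order-service/api/orders?itemId={item_id}", timeout=5).json()
    return {
        "item": item,            # inventory details
        "orders": orders,        # all orders for this item
        "orderCount": len(orders),
    }
```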


Another way is to use RabbitMQ to publish the details of each inventory item and order to a channel that the report service subscribes to. The report service stores that data in its own database, which we can query via its API to create the reports. Even though the report service ends up holding a lot of data from other services, each of those services can still protect its private information, which only specific users can access via its API.
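The second approach looks roughly like the following sketch using the pika client; the exchange and queue names are hypothetical. The order service publishes each order to RabbitMQ, and the report service subscribes and stores a copy in its own database.

```python
# Sketch: order service publishes order events, report service consumes and stores them.
import json
import pika

# --- order service side: publish an order event ------------------------------
def publish_order(order: dict) -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
    channel = connection.channel()
    channel.exchange_declare(exchange="orders", exchange_type="fanout")
    channel.basic_publish(exchange="orders", routing_key="", body=json.dumps(order))
    connection.close()

# --- report service side: subscribe and store --------------------------------
def consume_orders(store) -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
    channel = connection.channel()
    channel.exchange_declare(exchange="orders", exchange_type="fanout")
    channel.queue_declare(queue="report-orders", durable=True)
    channel.queue_bind(exchange="orders", queue="report-orders")

    def on_message(ch, method, properties, body):
        store(json.loads(body))  # write into the report service's own database

    channel.basic_consume(queue="report-orders", on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()
```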


Conclusion

Our architecture is not perfect, but it works for us. I hope you find this post useful. If you would like to share your lessons with us, please use the comment section below. Thank you for reading!
