Playing with microservices

Recently I was asked to build a small internal app: a dynamic dashboard for one of our projectors, which hang all over the office and display company info on the walls: customer statistics, server-to-server latency, tasks currently in development, and so on. My particular goal was to add release and build statistics: build duration, failed or unreliable tests, and anything else that could motivate us to produce more stable builds.

We store all relevant data in Google BigQuery, and the whole task is basically to extract build results from that storage and present them in a clear and simple way. Quite trivial.

Thinking process

What made this task interesting is that I had recently started paying close attention to distributed applications, and microservices in particular. Instead of having one big monolithic app, you divide it into smaller logical services that work independently from each other and communicate via some lightweight protocol, like HTTP.

The task begged to be ‘microserviced’, because:

  • Getting data directly from BigQuery is slow and extremely inconvenient, so I’d need some sort of cache-aggregator between the UI and BigQuery;
  • I suspected that some other internal projects might use the service with build data, so it was better to make it as autonomous and independent from the UI as possible;
  • the lingua franca of server languages in our company is C#, so the data service would be either a .NET or a .NET Core app;
  • the UI page should be secured with a login and password. The fastest way for me to do that was to make it a Node.js application.

So there are two distinct cooperating services making up the app: one data service and one presentation service. They have different requirements for language and runtime, and one of them is going to be used by somebody else. Sounds like a definition of microservices.


This is the final architecture I ended up with:


Nginx listens on port 80 and routes requests to either the UI or the data service, which live on ports 8080 and 5000 respectively. Configuring Nginx as a reverse proxy was a pleasure. Everything should be that simple:
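A minimal sketch of what such a reverse-proxy configuration might look like (the ports come from the setup above; the `/api/` prefix and loopback addresses are my assumptions, not the actual config):

```nginx
server {
    listen 80;

    # Everything under /api/ goes to the data service (port 5000)
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;
    }

    # Everything else goes to the UI service (port 8080)
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```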

The UI service is a simple Node.js app that serves HTML and JavaScript. After I removed authentication handling from it in favour of IP filtering in Nginx, there wasn’t a single reason left why Node.js should do the job. It could easily be done with IIS, Apache, Nginx itself, or even Python’s simple server: python -m SimpleHTTPServer. After all, this decision affects only the UI and nothing else. That’s freedom.

I wrote the data service with ASP.NET Core (the first ASP.NET application I’d written in years). It talks directly to Google BigQuery, holds an internal cache and exposes itself to the world via a RESTful API: GET /build, GET /build/master, etc.
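The cache-aggregator idea is simple: memoize query results for a while so the UI never hits BigQuery directly. The actual service is C#, but the same idea can be sketched in a few lines of JavaScript (the TTL-based `makeCache` helper below is purely illustrative, not the real implementation):

```javascript
// Illustrative sketch of a cache-aggregator: results of slow queries
// (e.g. BigQuery calls) are memoized for a TTL before being re-fetched.
function makeCache(ttlMs, now = Date.now) {
  const entries = new Map();
  return {
    async get(key, loadFn) {
      const hit = entries.get(key);
      // Fresh entry: serve from cache, never touch the slow backend.
      if (hit && now() - hit.at < ttlMs) return hit.value;
      // Missing or stale: reload from the source and remember the result.
      const value = await loadFn(key);
      entries.set(key, { value, at: now() });
      return value;
    },
  };
}
```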


Obviously, such a simple app doesn’t require a microservice architecture. But because it was a side project, I could safely use it for a small experiment, and honestly I’m happy with the results.

Firstly, the UI service doesn’t expect the data service to always be available, and because of that the UI is inevitably designed to be offline-friendly: it stays functional while the data service is temporarily down.
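That offline-friendliness boils down to keeping the last good response around and falling back to it when a request fails. A hypothetical sketch of the pattern (the `loadBuilds` function and `/build` shape are assumptions for illustration):

```javascript
// Illustrative offline-friendly fetch: remember the last successful
// payload and fall back to it when the data service is unreachable.
async function loadBuilds(fetchFn, cache) {
  try {
    const builds = await fetchFn('/build');
    cache.builds = builds; // remember the last good payload
    return { builds, stale: false };
  } catch (err) {
    // Data service is down: render stale cached data instead of breaking.
    return { builds: cache.builds || [], stale: true };
  }
}
```

The UI can then mark stale data visually instead of showing an error page.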

Secondly, I can simply take Node.js offline for maintenance, and it has absolutely no adverse effect on the data service. No downtime, no expensive cache-rebuild operation, nothing. If there were another app using the service, it wouldn’t notice anything. And that makes perfect sense: if I want to fix one part of the application, I shouldn’t have to bring everything down.

Finally, because I made a few stupid mistakes in the data service, I have to restart it from time to time. And guess what: nobody noticed. The UI is that independent.

I really enjoyed this approach. Different components use the languages and tools they need, service maintenance is trivial, and the flexibility is phenomenal. At least for small side projects.
