Building a Microservice Architecture with Spring Boot and Docker, Part IV

This is the fourth blog post in a 4-part series on building a microservice architecture with Spring Boot and Docker. If you would like to read the previous posts in the series, please see Part 1, Part 2, and Part 3.

Part IV: Additional Microservices, Updating Containers, Docker Compose, and Load Balancing

So now that we have a solid understanding of microservices and Docker, have stood up a MongoDB container and a Spring Boot microservice container, and have had them talk to each other via container linking (reference the ‘part4/start’ branch in our Git repo to catch up), let’s put together a few more quick microservices. To complete our initial use case, we’ll need two more microservices: missions and rewards. I’ll jump ahead and build out those two in exactly the same manner as the employee microservice; you can reference the ‘part4/step1’ branch to get these two extra service containers. Now, if we run “docker ps,” we’ll have (some columns removed for brevity):

CONTAINER ID IMAGE                      PORTS                         NAMES
86bd9bc19917 microservicedemo/employee  0.0.0.0:32779->8080/tcp       employee
1c694e248c0a microservicedemo/reward    0.0.0.0:32775->8080/tcp       reward
c3b5c56ff3f9 microservicedemo/mission   0.0.0.0:32774->8080/tcp       mission
48647d735188 mongo                      0.0.0.0:32771->27017/tcp      mongodb

Updating a container image

This is all very simple, but not exactly functional, because none of the microservices provide any direct value beyond simple CRUD at this point. Let’s start layering on some code changes to provide a bit more value and functionality. We’ll make some changes to one of our microservices, then deal with updating our running container to understand what goes into versioning containers. Since employees earn points by completing missions, we need to track their mission completions, point totals (earned and active), and rewards redeemed. We’ll add some additional classes to our employee model; these won’t be top-level business objects, so they won’t get their own microservices, but will instead provide context within the employee object. Once these changes are made (see the ‘part4/step2’ branch), we’ll have some structural changes that need to be synchronized throughout the stack. The steps to update our container are:

  • Recompile our code
    gradle build
  • Rebuild our container image
    docker build -t microservicedemo/employee .
    You’ll notice some messages at the end:
    Removing intermediate container 5ca297c19885
    Successfully built 088558247
  • Now we need to replace our old running container with a new one:
    docker stop employee
    docker rm employee

    docker run -P -d --name employee --link mongodb microservicedemo/employee
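
The three steps above can be bundled into one small helper script. Here’s a sketch; the script name redeploy.sh is my own invention, and it assumes you run it from the employee project directory:

```shell
# Sketch of a helper script bundling the rebuild-and-replace steps above.
# The name "redeploy.sh" is hypothetical; adjust image/container names to taste.
cat > redeploy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail                             # stop on the first failing step
gradle build                                  # 1. recompile the code
docker build -t microservicedemo/employee .   # 2. rebuild the image
docker stop employee && docker rm employee    # 3a. remove the old container
docker run -P -d --name employee \
  --link mongodb microservicedemo/employee    # 3b. start a replacement
EOF
chmod +x redeploy.sh
```

Run ./redeploy.sh whenever the employee code changes and you want the running container to pick it up.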

The important thing to note is that the code within the running container is never updated in place. Another core principle of containers and microservices is that the code and configuration within a container are immutable. To put it another way: you don’t update a container, you replace it. This can pose problems for some container use cases, such as databases or other persistent resources.

Using Docker Compose to organize the running containers

If, like me, you had other work to do between following this series of articles, remembering all of the various command-line parameters needed to link up these containers can be a little frustrating. Organizing a fleet of containers is the purpose of Docker Compose (previously known as Fig). You define your set of containers in a YAML configuration file, and it manages the runtime configuration of the containers. In many ways, think of it as an orchestrator that “runs” the containers with the correct options/configuration. We will create one for our application to do all of the things we’ve been managing via command-line parameters.

docker-compose.yml:

employee:
 build: employee
 ports:
  - "8080"
 links:
  - mongodb
reward:
 build: reward
 ports:
  - "8080"
 links:
  - mongodb
mission:
 build: mission
 ports:
  - "8080"
 links:
  - mongodb
mongodb:
 image: mongo

Then, from a command prompt, you type:
docker-compose up -d
And the entire fleet comes up. Pretty handy! Many docker commands have analogs in docker-compose. If we run “docker-compose ps,” we see:

Name           Command                        State   Ports
-------------------------------------------------------------
git_employee_1 java -Dspring.data.mongodb ... Up      0.0.0.0:32789->8080/tcp
git_mission_1  java -Dspring.data.mongodb ... Up      0.0.0.0:32785->8080/tcp
git_mongodb_1  /entrypoint.sh mongod          Up      27017/tcp
git_reward_1   java -Dspring.data.mongodb ... Up      0.0.0.0:32784->8080/tcp
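
Before we move on, one caveat worth flagging: because containers are replaced rather than updated (as we saw above), the mongodb container defined here would lose its data whenever it is recreated. The usual fix is a volume mapping that keeps the data on the host, outside the container. A sketch, with the host path /data/mongodb being my own choice:

```yaml
mongodb:
 image: mongo
 volumes:
  - /data/mongodb:/data/db   # Mongo's data lives on the host and survives container replacement
```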

Scaling containers and load balancing

But that’s not all that docker-compose can do. If you run “docker-compose scale [service name]=n,” it will create multiple instances of that container. Running “docker-compose scale employee=3” and then “docker-compose ps,” we see:

Name             Command                          State   Ports
---------------------------------------------------------------------------------
git_employee_1   java -Dspring.data.mongodb ...   Up      0.0.0.0:32789->8080/tcp
git_employee_2   java -Dspring.data.mongodb ...   Up      0.0.0.0:32791->8080/tcp
git_employee_3   java -Dspring.data.mongodb ...   Up      0.0.0.0:32790->8080/tcp
git_mission_1    java -Dspring.data.mongodb ...   Up      0.0.0.0:32785->8080/tcp
git_mongodb_1    /entrypoint.sh mongod            Up      27017/tcp
git_reward_1     java -Dspring.data.mongodb ...   Up      0.0.0.0:32784->8080/tcp

and our employee container now has three instances! Docker-compose remembers the number you set, so the next time you run “docker-compose up,” it will start three employee instances. Personally, I think this should be in the docker-compose.yml file, but it’s not.

Hopefully, you are starting to see a problem developing here. How are we supposed to build an end-user application that can actually use these microservices, when their ports change, and when, in a clustered Docker server environment (e.g. Docker Swarm), the “host” IP address could change too? There are some advanced solutions (Kubernetes and AWS’ ECS), but for now we’ll look at a (relatively) simple option: a very easy way to load balance multiple container instances. Tutum, a company building a multi-cloud container management capability, has contributed to the Docker community an extension of HAProxy that can auto-configure itself based on linked containers. Let’s add a load balancer for the multiple employee containers we now have. We’ll just add it to our docker-compose.yml file:

…
ha_employee:
 image: tutum/haproxy
 links:
   - employee
 ports:
   - "8080:80"

Then we run “docker-compose up -d” and it will download and start the missing container. Now we can hit our employee service “cluster” at a fixed address, 192.168.99.100:8080, and HAProxy will round-robin (by default) across the three running employee instances. Talk about easy street! There are a lot of additional features and functionality within this HAProxy Docker container; I suggest looking at https://github.com/tutumcloud/haproxy for more information.
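
A quick way to convince yourself the round-robin is working is to hit the proxy in a loop and compare which backend answers each time. Here’s a sketch; it assumes the stack is up, your Docker host is at 192.168.99.100, and the /employee path matches whatever your controller actually maps:

```shell
# Sketch: poll the HAProxy front end a few times. With round-robin balancing,
# consecutive requests land on different employee containers (check
# "docker-compose logs employee" to see each instance take its turn).
cat > check-balance.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# NOTE: the /employee path is an assumption; use the endpoint your
# employee controller actually exposes.
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" \
    http://192.168.99.100:8080/employee
done
EOF
bash -n check-balance.sh   # syntax check only; run it for real once the stack is up
```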

This HAProxy approach works great for load balancing multiple instances of a specific container, and it would be the ideal choice for a single-container environment. However, we don’t have that here, do we? We could set up multiple HAProxy instances to handle the container clusters, exposing each proxy on a different host port, so our employee service is at port 8080, our mission service at port 8081, and the reward service at port 8082 (reference part4/step3 in the Git repository). If we were going full production, we could leverage nginx to create a reverse proxy that masks all service requests behind a single IP/port, routing to the right container based on the URL path (/employee/ vs. /reward/). Or we could pursue a more robust service discovery route that leverages etcd and some impressive Docker metadata scripting and template engines from Jason Wilder’s docker-gen system (https://hub.docker.com/r/jwilder/docker-gen/), among a myriad of additional self-managed service discovery solutions. We’ll keep the simple HAProxy solution for now, as it gives us a solid grounding in managing container clusters.
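
The one-proxy-per-service setup described above would look something like this in docker-compose.yml (a sketch along the lines of part4/step3; the ha_mission and ha_reward service names are my own):

```yaml
ha_employee:
 image: tutum/haproxy
 links:
   - employee
 ports:
   - "8080:80"
ha_mission:
 image: tutum/haproxy
 links:
   - mission
 ports:
   - "8081:80"
ha_reward:
 image: tutum/haproxy
 links:
   - reward
 ports:
   - "8082:80"
```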

This is a good place to wrap up this series for now. There are many additional areas that I could pontificate on, including:

  • Building out a front-end in a container, or in a mobile app
  • Including batch processing of back-end data
  • Dynamically sizing the container cluster to process queue entries
  • Migrating a service from Java/Spring Boot to Scala/Akka/Play
  • Setting up continuous integration
  • Building out my own image repository or using a container repository service (Google or Docker Hub)
  • Evaluating container management systems like AWS’ ECS or Kubernetes

What other areas around Docker and microservices would you like to know more about? Let me know at dan.greene@3pillarglobal.com. I plan on making additional posts on this topic, continuing this use case scenario, and would be happy to pick the direction based on your feedback!

This blog is the fourth and final post in the series.

Dan Greene


Director of Architecture

Dan Greene is the Director of Architecture at 3Pillar Global. Dan has 18 years of software design and development experience, with software and product architecture experience in areas including eCommerce, B2B integration, Geospatial Analysis, SOA architecture, Big Data, and Cloud Computing. He is an AWS Certified Solution Architect who worked at Oracle, ChoicePoint, and Booz Allen Hamilton prior to 3Pillar. Dan is a graduate of George Washington University. He is also a father, amateur carpenter, and runs obstacle races including Tough Mudder.

22 Responses to “Building a Microservice Architecture with Spring Boot and Docker, Part IV”
  1. Tapo Ghosh on

    We are planning to deploy ~10 microservices and thinking of deploying them in their own docker containers. While I understand the development of the code ground up and building docker images, I am struggling a little to understand how these docker images can be rolled up to be deployed in ECS…

  2. Dan Greene on

    Tapo –
    AWS’s ECS service does seem a little… convoluted. For it to work, your container has to be stored in a repository that AWS can see. You can stand up your own Docker registry within AWS, or use a public repository (such as Docker’s or Google’s). Learning AWS’s terms like ‘task’, ‘service’, and ‘cluster’ is important – look at http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html for some starting info. I may look into a follow-up on AWS ECS vs Kubernetes and practical usage of each.

  3. Fatih Dirlikli on

    Hello Dan,
    Thanks for the great article series. What I would like to read more about is inter-service communication and the tools to use for it.

    Thanks,
    Fatih

  4. Matt Reynolds on

    If you’re already using Spring Boot, it’s worth taking a look at Spring Cloud (http://projects.spring.io/spring-cloud/). It includes versions of Netflix Eureka (for a service registry) and Zuul for edge services, which can auto-route to registered services, amongst other things.

  5. Frans Thamura on

    can u make this example in github?

  6. Emmett on

    how would one go about scaling the mongo container?

  7. Ram on

    Very well written and useful set of articles.

  8. Marco on

    Great article!
    Thanks for sharing it, especially because it really looks like a real case study and not just an academic example.
    Well done.

  9. Kiran Kumar on

    Fantastic article. I loved it. Looking forward to continuous integration and deployment of microservices to production, and clustering of database instances.

  10. Ammad Amjad on

    Very helpful series. Thanks for writing.

  11. Bharath on

    Thank you Dan for the detailed information and working (in reality!) examples.

    I have been picking up microservices development recently. I would like to understand how synchronized REST calls that span various microservices deployed in different containers can be made, and how to achieve a seamless REST call or be reported the exact issue, i.e. microservice down, out of connection pool, etc. I heard about RxJava but never found a good article on it.

    • Dan Greene on

      Great question – there’s an entire aspect of container-based microservice solutions that needs to work with a service registry such as Netflix’s Eureka https://github.com/Netflix/eureka. Once you have x number of instances of your container running on various hosts in a cluster of y number of machines, things obviously get very sticky.

  12. Amr Khaled on

    This is the best microservices series on the internet so far, well done.

    Thanks for sharing

  13. Tsaroga on

    That was very useful and easy to follow. I am looking forward to more of your blogs. Thanks a lot 🙂

  14. Channa on

    Great article. Keep up the good work 🙂

  15. Valerio on

    Great article!!
    There is something I can’t get:

    Can scaling a docker container to multiple replicas on the same “host” machine guarantee real reliability of the service?
    If the host goes down, all service instances become unavailable, right?
    Is there a way to replicate container on different hosts?
    I’ve read something about Docker Swarm, can it be used with Compose?

  16. Jessica on

    Thanks for a great series – I went through all the examples and was able to get most everything up and running.

    As of 5/10/17 – it looks like the GitHub repo just contains a couple of text files. Can you add back the code? I wanted to run the reward and mission services so I could complete the examples in Part 4

