November 4, 2015

Building a Microservice Architecture with Spring Boot and Docker, Part II

Part II: Getting Set Up and Started

Introduction and Tools

In this part, we’ll get to work on building out the solution discussed in Part I. I’m going to do this on a Mac, but the tooling is the same on Mac and PC, so the steps will be nearly identical on either platform. I won’t go through installing these tools; instead, we’ll move straight to getting started with them. What you will need:

  • Docker Toolbox: containing VirtualBox (for creating the VM that will run your containers), Docker Machine (a tool for provisioning and managing the VM that hosts the Docker daemon), Docker Kitematic (a GUI for managing containers running in your Docker Machine), and Docker Compose (a tool to orchestrate multiple container templates)
  • Git: you can follow along here. I’m a fan of Git Extensions on Windows and SourceTree on Mac, but anything including command line git is fine
  • Java 8 SDK: Java 8 had me at PermGen improvements; the collection streaming and lambda support are great, too
  • A build tool of choice: Let’s use Gradle. I recommend using SDKMan, formerly known as GVM, to manage Gradle versions. If you’re working on Windows, you can use Cygwin with SDKMan or SDKMan’s Powershell CLI, or Gravy as an alternative
  • IDE of choice: We’ll work with the Spring Tool Suite (STS). As of this writing, the latest version is 3.7.0 for Mac
  • A REST tool: this is very handy for any web service project. I’m a big fan of the Chrome extension Postman. If you’re good at cURL, that works too
  • uMongo or other Mongo GUI: a document database fits the self-containment model well — in a microservice architecture, child objects are stored and retrieved with their parent, and other business objects are referred to by ID, which maps nicely to a document store. Plus, MongoDB has an “official” Docker image which works very well

Our first note on source control — the overwhelming online opinion appears to be that each microservice should have its own repository. It’s a fundamental belief for microservices that no code should be shared across services. Personally, this hurts my architect heart just a little: the amount of duplicated utility code may be high, and the lack of a single, unified domain model gives me a bit of heartburn. I understand the principle, though — self-containment means self-reliance. For the purposes of this blog post, I am putting all of the code into a single repository; however, each microservice will get its own folder under the root. This allows me to use branches to demonstrate progress over time. In a real solution, you would have a distinct repository for each microservice, and perhaps a unified repository that references the others as submodules.

Overall Approach

Since we’re dealing with isolated, reusable components, we will do the following mappings:

One logical business object → One microservice  → One git repository folder  → One Mongo collection

While the business object may be made of multiple objects, any child object that we can consider as its own business object would be broken out into its own stack of components.

More information on how Docker works, and our first container

To understand how to build a full product solution based on Docker containers, we’ll need to delve into how containers run inside the host machine (or virtual machine, as the case may be). Working with Docker typically involves three phases: container building, container publishing, and container deployment.

Building a container – the world of the Dockerfile

To build a container, you write a set of instructions that take an existing container image, then apply changes and configuration to it. The official DockerHub repository contains dozens of “official” images as well as thousands of user-defined container images. If one of these images isn’t what you need it to be, you create a custom Dockerfile that appends onto the image with step-by-step additions, such as installing system packages, copying files, or exposing network ports. We will be creating a custom Dockerfile when we make our microservices, but for now, we will utilize a standard image to stand up a MongoDB instance.
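As a preview of what’s to come, a minimal Dockerfile for a Java service might look like the following. This is purely illustrative — the base image, jar path, and port are assumptions, not taken from this project:

```dockerfile
# Start from an official Java 8 base image
FROM java:8

# Copy the application jar into the image (path and name are illustrative)
COPY build/libs/my-service.jar /app/my-service.jar

# Declare the port the service listens on
EXPOSE 8080

# Command to run when the container starts
CMD ["java", "-jar", "/app/my-service.jar"]
```

Each instruction adds a layer on top of the base image, which is what makes rebuilding and distributing images efficient.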

Container networking

When you start a container, it has a private network. For outside network communications, ports from the container host get forwarded to individual container instance ports. The specific container ports that are open are dictated by the Dockerfile, and the forwarding occurs in one of two ways: you can explicitly map ports from the host machine to the container, or if not explicitly mapped, the Docker container server maps the declared container port to an available ephemeral port (typically ranging from 32768 to 61000). While we could explicitly manage port mappings for the entire solution, it is typically a much better idea to let Docker handle it, and expose port information to containers via its linking mechanism, which we will cover in more detail when we build our first microservice container.
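As a quick illustration of the two styles (the container names here are arbitrary, and the commands assume a working Docker setup):

```shell
# Explicit mapping: host port 27017 -> container port 27017
docker run -d -p 27017:27017 --name mongo-explicit mongo

# Ephemeral mapping: Docker picks a free host port for each declared port
docker run -d -P --name mongo-ephemeral mongo

# Ask Docker which host port was assigned to the container's 27017
docker port mongo-ephemeral 27017
```

Note that explicit mappings will collide if two containers ask for the same host port, which is one reason letting Docker assign ephemeral ports scales better.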

Firing up a Mongo container

Whether you’re using Kitematic or the Docker command line, it’s pretty straightforward to fire up a standard container. Starting with the command line: if everything is installed correctly, your shell environment will contain three key environment variables:
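The three variables are DOCKER_HOST, DOCKER_CERT_PATH, and DOCKER_TLS_VERIFY. On a typical Docker Toolbox install they look something like this — the IP address and path are illustrative, and you can run `docker-machine env default` to see your own values:

```shell
export DOCKER_HOST=tcp://192.168.99.100:2376
export DOCKER_CERT_PATH=/Users/you/.docker/machine/machines/default
export DOCKER_TLS_VERIFY=1
```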


These should be set for you (you may need to restart your terminal/command prompt if it was open during the installation process). These are necessary because the Docker machine isn’t running directly on my laptop, but instead in a virtual machine running on my laptop. The Docker client will effectively “proxy” Docker commands from my laptop to the virtual machine. Before we fire up the container, let’s go over a handful of Docker commands that are very helpful. It’s always good to know the command-line basics before leveraging any GUI anyway.

Docker-level commands:

docker ps
This command will list all running containers, showing information on them including their ID, name, base image name, and port forwarding.

docker build
This command is used to define a container — it processes the Dockerfile and creates a new container definition. We’ll use this to define our microservice containers.

docker pull [image name]
This command pulls the container image from the remote repository and stores the definition locally.

docker run
This command starts a container based on a local or remote (e.g. DockerHub) container definition. We’ll go into this one quite a bit.

docker push
This command publishes a built container definition to a repository, typically DockerHub.

Container-specific commands

These commands take either a container ID or container Name as a parameter:

docker stats [container name/ID] [container name/ID]
This command will show the current load on each container specified – it will show CPU%, memory usage, and network traffic.

docker logs [-f] [container name/ID]
This command shows the latest output from the container. The -f option “follows” the output, much like a console tail -f command would.

docker inspect [container name/ID]
This command dumps all of the configuration information on the container in JSON format.

docker port [container name/ID]
This command shows all of the port forwarding between the container host and the container.

docker exec [-i] [-t] [container name/ID] [command]
This command executes a command on the target container (-i runs it interactively, -t allocates a pseudo-TTY). It is very commonly used to get a container shell:
docker exec -it [container name/ID] sh

Once we understand this reference material, we can move onto standing up a Mongo container.

It’s as simple as:

docker run -P -d --name mongodb mongo

Some explanation:

  • the -P tells Docker to expose any container-declared port in the ephemeral range
  • the -d says to run the container as a daemon (i.e. in the background)
  • the --name mongodb says what name to assign to the container instance (names must be unique across all running container instances; if you don’t supply one, you will get a random semi-friendly name like modest_poitras)
  • the mongo at the end indicates which image definition to use

DockerHub image definitions take the form of [owner]/[image name][:tag]. If no owner is specified, the “official” DockerHub images are used — this namespace is reserved for the owners of Docker to “bless” images from software vendors. If the :tag part at the end is omitted, it is assumed you want the latest version of the image.
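A few concrete examples of the naming scheme (the user-owned image name below is made up for illustration):

```shell
docker pull mongo                  # official image, implicit :latest tag
docker pull mongo:3.0              # official image, specific tag
docker pull someuser/some-app:1.0  # user-owned image with explicit tag
```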

Now we try to confirm our Mongo instance is up and running by connecting to the machine:

docker exec -it mongodb sh
# mongo
MongoDB shell version: 3.0.6
connecting to: test
Server has startup warnings:
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
> use microserviceblog
switched to db microserviceblog
> db.createCollection('testCollection')
{ "ok" : 1 }

From within the container, Mongo seems to be running, but can we hit it from the outside? To do that, we’ll need to see what ephemeral port was assigned for the Mongo server port. We get that by running:
docker ps
from which we get (some columns omitted for readability):

CONTAINER ID        IMAGE               PORTS                      NAMES
87192b65de95        mongo               0.0.0.0:32777->27017/tcp   mongodb

We can see that port 32777 on the host machine is forwarded to port 27017 of the container; however, remember that the container host is itself a VM, so we must go back to our environment variables:


We should be able to access our Mongo container’s 27017 port by hitting the Docker Machine VM’s IP address (the host portion of DOCKER_HOST) on port 32777. Firing up UMongo and pointing it at that location shows the DB is accessible externally:
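If you’d rather verify from the command line than a GUI, the same check can be done with the mongo client from the host — assuming the client is installed locally, your Docker Machine is named default (the Toolbox default), and 32777 is the ephemeral port reported by docker ps above:

```shell
# Find the VM's IP address
docker-machine ip default

# Connect to the forwarded Mongo port from the host
mongo $(docker-machine ip default):32777/microserviceblog
```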


This concludes part II. In the third part of the series, we’ll continue this by actually creating a microservice or two, managing changes, and then work on applying CI and production deployment techniques.

This blog is the second of four parts.