Building a Microservice Architecture with Spring Boot and Docker, Part II

Part II: Getting Set-Up and Started

Introduction and Tools

In this part, we’ll get to work on building out the solution discussed in Part I. I’m going to do this on a Mac, but the tooling is nearly identical on Mac and PC, so the steps will be about 99% the same on both platforms. I won’t go through installing these tools; instead, we’ll move straight to getting started with them. What you will need:

  • Docker Toolbox: containing VirtualBox (for creating the VM that will run your containers), Docker Machine (provisions the VM and lets your local Docker client talk to the Docker Engine running inside it), Docker Kitematic (a GUI for managing containers running in your Docker Machine), and Docker Compose (a tool to orchestrate multiple container templates)
  • Git: you can follow along here. I’m a fan of Git Extensions on Windows and SourceTree on Mac, but anything including command line git is fine
  • Java 8 SDK: Java 8 had me at PermGen improvements; the collection streaming and lambda support are great, too
  • A build tool of choice: Let’s use Gradle. I recommend using SDKMan, formerly known as GVM, to manage Gradle versions. If you’re working on Windows, you can use Cygwin with SDKMan or SDKMan’s Powershell CLI, or Gravy as an alternative
  • IDE of choice: We’ll work with the Spring Tool Suite (STS). As of this writing, the latest version is 3.7.0 for Mac
  • A REST tool: this is very handy for any web service project. I’m a big fan of the Chrome extension Postman. If you’re good at cURL, that works too
  • uMongo or other Mongo GUI: a document database fits the model of self-containment pretty well — objects are retrieved atomically, and reference objects are referred to by ID in a microservice architecture, which maps to a document store pretty well. Plus, MongoDB has an “official” Docker image which works very well

Our first note on source control: the overwhelming online opinion appears to be that each microservice should have its own repository, since it’s a fundamental belief for microservices that no code should be shared across services. Personally, this hurts my architect heart just a little; the amount of duplicated utility code may be high, and the lack of a single, unified domain model gives me a bit of heartburn. I understand the principle, though: self-containment means self-reliance. For the purposes of this blog post, I am putting all of the code into a single repository; however, each microservice will get its own folder under the root. This allows me to apply branches to demonstrate progress over time. In a real solution, you would have a distinct repository for each microservice, and perhaps a unified repository that references the others as submodules.

Overall Approach

Since we’re dealing with isolated, reusable components, we will do the following mappings:

One logical business object → One microservice  → One git repository folder  → One Mongo collection

While the business object may be made of multiple objects, any child object that we can consider as its own business object would be broken out into its own stack of components.
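Under the single-repository approach described above, the layout might look something like this (the service names and file layout are purely illustrative; we’ll build the real services in later parts):

```
microservice-blog/               # single repository root
├── employee-service/            # one business object → one microservice
│   ├── src/
│   └── Dockerfile
├── department-service/          # a child business object, broken out
│   ├── src/
│   └── Dockerfile
└── docker-compose.yml           # orchestrates the containers
```

In a real solution, each of those top-level folders would instead be its own repository.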

More information on how Docker works, and our first container

To understand how to build a full product solution based on Docker containers, we’ll need to delve into how containers run inside of the host machine (or virtual machine, as the case may be). Using Docker typically involves three phases: container building, container publishing, and container deployment.

Building a container – the world of the Dockerfile

To build a container, you write a set of instructions that take an existing container image, then apply changes and configuration to it. The official DockerHub repository contains dozens of “official” images as well as thousands of user-defined container images. If one of these images isn’t what you need it to be, you create a custom Dockerfile that appends onto the image with step-by-step additions, such as installing system packages, copying files, or exposing network ports. We will be creating a custom Dockerfile when we make our microservices, but for now, we will utilize a standard image to stand up a MongoDB instance.
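As a preview, a minimal Dockerfile for one of our microservices might look like the following. This is a sketch: the base image tag and JAR path are assumptions for illustration, and we’ll write the real Dockerfile in a later part.

```dockerfile
# Start from an existing image that already contains a Java 8 runtime
FROM java:8

# Copy our built Spring Boot artifact into the image
# (the JAR name here is illustrative)
COPY build/libs/employee-service.jar /app/employee-service.jar

# Declare the port the service listens on
EXPOSE 8080

# Command to run when a container is started from this image
ENTRYPOINT ["java", "-jar", "/app/employee-service.jar"]
```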

Container networking

When you start a container, it has its own private network. For outside network communication, ports on the container host are forwarded to ports on individual container instances. The container ports that are open are dictated by the Dockerfile, and the forwarding occurs in one of two ways: you can explicitly map ports from the host machine to the container, or, if you don’t, the Docker daemon maps each declared container port to an available ephemeral port on the host (typically ranging from 32768 to 61000). While we could explicitly manage port mappings for the entire solution, it is typically a much better idea to let Docker handle it and expose port information to containers via its linking mechanism, which we will cover in more detail when we build our first microservice container.
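The two forwarding styles look like this on the command line (the host port shown in the last output line is just an example; Docker picks it at run time):

```shell
# Explicit mapping: host port 27017 → container port 27017
docker run -p 27017:27017 -d --name mongodb-explicit mongo

# Ephemeral mapping: -P lets Docker pick a host port
# for each port the container declares
docker run -P -d --name mongodb-ephemeral mongo

# Ask Docker what it chose
docker port mongodb-ephemeral
# 27017/tcp -> 0.0.0.0:32768   (your port will vary)
```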

Firing up a Mongo container

Whether you’re using Kitematic or the Docker command line, it’s pretty straightforward to fire up a standard container. Starting with the command line: if everything is installed correctly, your command prompt will have these key environment variables set:

DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/Users/[username]/.docker/machine/machines/default

These should be set for you (you may need to restart your terminal/command prompt if it was open during the installation process). They are necessary because the Docker machine isn’t running directly on my laptop, but inside a virtual machine running on my laptop. The Docker client effectively “proxies” Docker commands from my laptop to the virtual machine. Before we fire up the container, let’s go over a handful of Docker commands that are very helpful. It’s always good to know the command line stuff before leveraging any GUI anyway.
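If the variables come up empty even after restarting your terminal, Docker Toolbox’s docker-machine command can regenerate them for the VM (here named “default”, matching DOCKER_MACHINE_NAME above):

```shell
# Print the environment settings for the "default" machine...
docker-machine env default

# ...and apply them to the current shell session
eval "$(docker-machine env default)"
```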

Docker-level commands:

docker ps
This command will list all running containers, showing information on them including their ID, name, base image name, and port forwarding.

docker build
This command is used to define a container — it processes the Dockerfile and creates a new container definition. We’ll use this to define our microservice containers.

docker pull [image name]
This command pulls the container image from the remote repository and stores the definition locally.

docker run
This command starts a container based on a local or remote (e.g. DockerHub) container definition. We’ll go into this one quite a bit.

docker push
This command publishes a built container definition to a repository, typically DockerHub.

Container-specific commands

These commands take either a container ID or container Name as a parameter:

docker stats [container name/ID] [container name/ID]
This command will show the current load on each container specified – it will show CPU%, memory usage, and network traffic.

docker logs [-f] [container name/ID]
This command shows the latest output from the container. The -f option “follows” the output, much like a console “tail -f” command would.

docker inspect [container name/ID]
This command dumps all of the configuration information on the container in JSON format.

docker port [container name/ID]
This command shows all of the port forwarding between the container host and the container.

docker exec [-i] [-t] [container name/ID] [command]
This command executes a command on the target container (-i keeps STDIN open so you can interact with it, -t allocates a pseudo-TTY). It is very commonly used to get a container shell:
docker exec -it [container name/ID] sh

Once we understand this reference material, we can move onto standing up a Mongo container.

It’s as simple as: docker run -P -d --name mongodb mongo

Some explanation:

  • the -P tells Docker to publish every port the container declares to an ephemeral port on the host
  • the -d says to run the container as a daemon (i.e., in the background)
  • the --name mongodb assigns a name to the container instance (names must be unique across all running container instances; if you don’t supply one, you will get a random semi-friendly name like modest_poitras)
  • the mongo at the end indicates which image definition to use

DockerHub image definitions take the form [owner]/[image name][:tag]. If no owner is specified, the image is an “official” DockerHub image; that namespace is reserved for the owners of Docker to “bless” images from software vendors. If the :tag at the end is omitted, it is assumed you want the latest version of the image.
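A few concrete examples of that naming form (the user-owned image name below is made up for illustration):

```shell
docker pull mongo             # official image, latest tag
docker pull mongo:3.0         # official image, pinned to a tag
docker pull someuser/someimg  # user-owned image ("someuser/someimg" is illustrative)
```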

Now we try to confirm our Mongo instance is up and running by connecting to the container:

docker exec -it mongodb sh
# mongo
MongoDB shell version: 3.0.6
connecting to: test
Server has startup warnings:
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
> use microserviceblog
switched to db microserviceblog
> db.createCollection('testCollection')
{ "ok" : 1 }

From within the container, Mongo seems to be running, but can we hit it from the outside? To do that, we’ll need to see what ephemeral port was assigned for the Mongo server port. We get that by running:
docker ps
from which we get (some columns omitted for readability):

CONTAINER ID        IMAGE                PORTS                      NAMES
87192b65de95        mongo                0.0.0.0:32777->27017/tcp   mongodb

We can see that the port 32777 on the host machine is forwarded to port 27017 of the container; however, remember that we are running the host machine as a VM, so we must go back to our environment variables:

$ echo $DOCKER_HOST
tcp://192.168.99.100:2376

We should be able to access our Mongo container’s 27017 port by hitting: 192.168.99.100:32777. Firing up uMongo and pointing it at that location shows the DB is accessible externally:
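This host/port pair is also exactly what our Spring Boot microservices will need later. As a preview, a Spring Data MongoDB connection setting for this walkthrough would look like the following (your Docker Machine IP and, especially, the ephemeral port will almost certainly differ):

```
# application.properties — Spring Data MongoDB connection
# (values taken from this walkthrough; adjust to your own environment)
spring.data.mongodb.uri=mongodb://192.168.99.100:32777/microserviceblog
```

In practice we’ll let Docker’s linking mechanism supply these values rather than hard-coding them.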

[Screenshot: uMongo connected to the containerized MongoDB instance]

This concludes part II. In the third part of the series, we’ll continue this by actually creating a microservice or two, managing changes, and then work on applying CI and production deployment techniques.

This blog is the second of four parts.

Dan Greene

Director of Cloud Services

Dan Greene is the Director of Cloud Services at 3Pillar Global. Dan has more than 20 years of software design and development experience, with software and product architecture experience in areas including eCommerce, B2B integration, Geospatial Analysis, SOA architecture, Big Data, and has focused the last few years on Cloud Computing. He is an AWS Certified Solution Architect who worked at Oracle, ChoicePoint, and Booz Allen Hamilton prior to 3Pillar. He is also a father, amateur carpenter, and runs obstacle races including Tough Mudder.
