Having written a few posts on using the IBM Containers service in Bluemix, I thought I’d cover another option for running Docker on IBM Cloud: using Docker on VMs provisioned from IBM’s SoftLayer IaaS. This is particularly easy with Docker Machine as there is a SoftLayer driver. As the docs state, there are three required values, which I prefer to set via the environment variables SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN. The instructions to retrieve/generate an API key for your SoftLayer account are here. Don’t worry if you don’t have a domain name free – it is only used as a suffix on the machine names when they appear in the SoftLayer portal, so any valid value will do.
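For example, the environment might be set up as follows (the values here are placeholders, not real credentials):

# Placeholder values – substitute your own SoftLayer credentials
export SOFTLAYER_USER=myuser
export SOFTLAYER_API_KEY=0123456789abcdef
export SOFTLAYER_DOMAIN=example.com

With those variables exported, spinning up three VMs with Docker is as simple as: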
for i in {1..3}; do
  docker-machine create --driver softlayer node$i
done
Provisioning the VMs and installing the latest Docker engine may take some time.
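If you want to keep an eye on progress while you wait, docker-machine can list each machine along with its state and Docker version:

docker-machine ls

Thankfully, initialising swarm mode across the three VMs with a single manager and two worker nodes can then be achieved very quickly: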
docker-machine ssh node1 docker swarm init \
  --advertise-addr $(docker-machine ip node1):2377
TOKEN=$(docker-machine ssh node1 docker swarm join-token -q worker)
for i in {2..3}; do
  docker-machine ssh node$i docker swarm join \
    --token $TOKEN $(docker-machine ip node1):2377
done
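To confirm the swarm has formed as expected (one manager and two workers), you can list the nodes from the manager; node1 should show a manager status of Leader, with node2 and node3 as workers:

docker-machine ssh node1 docker node ls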
Now we can target our local client at the swarm and create a service (running the WebSphere Liberty ferret application):
eval $(docker-machine env node1)
docker service create -p 9080:9080 --name ferret wasdev/ferret
docker service ps ferret
Once docker service ps reports the task as running, the routing mesh means we can call the application via any of the nodes:
curl $(docker-machine ip node1):9080/ferret/
curl $(docker-machine ip node2):9080/ferret/
curl $(docker-machine ip node3):9080/ferret/
Scale up the number of instances and wait for all three to report as running:
docker service scale ferret=3
docker service ps ferret
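Alternatively, docker service ls gives a one-line summary per service; its REPLICAS column shows running versus desired instances (3/3 once the scale-up completes):

docker service ls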
With the default spread strategy, you should end up with a container on each node:
1 2 3 4 |
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 1tyx11i9eebhevrmhci66jftz ferret.1 wasdev/ferret node1 Running Running 6 minutes ago 37nlj036334i8r3if8ftobox4 ferret.2 wasdev/ferret node3 Running Running about a minute ago 26wpeh3se9xf8xakrst173f8n ferret.3 wasdev/ferret node2 Running Running about a minute ago |
Note that the image has a healthcheck defined which uses the default interval of 30 seconds, so expect each task to take some multiple of 30 seconds to start. If you’re curious, the healthcheck definition can be inspected from the image config on one of the nodes (the exact output will depend on the image version):
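# Prints the image's healthcheck config as JSON (output varies by image version)
docker-machine ssh node1 \
  "docker inspect --format '{{json .Config.Healthcheck}}' wasdev/ferret"

Liam’s WASdev article talks more about the healthcheck and also demonstrates how to roll out an update. Here I’m going to look at the reconciliation behaviour. Let’s stop one of the worker nodes and then watch the task state again: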
docker-machine stop node2
docker service ps ferret
You will see the swarm detect that the task is no longer running on the stopped node and move it to one of the two remaining nodes:
1 2 3 4 5 |
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR 1tyx11i9eebhevrmhci66jftz ferret.1 wasdev/ferret node1 Running Running 13 minutes ago 37nlj036334i8r3if8ftobox4 ferret.2 wasdev/ferret node3 Running Running 8 minutes ago 701m5cmvzock391vkh6zm8sda ferret.3 wasdev/ferret node1 Running Starting 5 seconds ago 26wpeh3se9xf8xakrst173f8n \_ ferret.3 wasdev/ferret node2 Shutdown Running 7 minutes ago |
(You’ll see that there is a niggle here in the reporting: the shut-down task on node2 still shows a current state of Running because, with the node stopped, there is nothing left to report that the container has actually exited.)
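Note that bringing the node back does not rebalance the service: swarm mode leaves running tasks where they are, so node2 will sit idle until a later scale or update gives it work. You can verify this by restarting the node and checking the tasks again:

docker-machine start node2
docker service ps ferret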
This article only scratches the surface of the capabilities of both swarm mode and SoftLayer. For the latter, I’d particularly recommend looking at the bare metal capabilities where you can benefit from the raw performance of containers without the overhead of a hypervisor.