Docker swarm mode on IBM SoftLayer

September 26th, 2016

Having written a few posts on using the IBM Containers service in Bluemix I thought I’d cover another option for running Docker on IBM Cloud: using Docker on VMs provisioned from IBM’s SoftLayer IaaS. This is particularly easy with Docker Machine as there is a SoftLayer driver. As the docs state, there are three required values which I prefer to set as the environment variables SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN. The instructions to retrieve/generate an API key for your SoftLayer account are here. Don’t worry if you don’t have a domain name free – it is only used as a suffix on the machine names when they appear in the SoftLayer portal so any valid value will do. With those variables exported, spinning up three VMs with Docker is as simple as:
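The original gist isn't reproduced here, but it would look something along these lines (the machine names are illustrative):

```shell
# Assumes SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN are exported
for node in swarm-1 swarm-2 swarm-3; do
  docker-machine create --driver softlayer $node
done
```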

Provisioning the VMs and installing the latest Docker engine may take some time. Thankfully, initialising swarm mode across the three VMs with a single manager and two worker nodes can then be achieved very quickly:
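A sketch of the swarm initialisation, assuming the three machines created above were named swarm-1 through swarm-3:

```shell
# Point the local client at the first VM and make it the swarm manager
eval $(docker-machine env swarm-1)
docker swarm init --advertise-addr $(docker-machine ip swarm-1)

# Join the other two VMs as workers using the worker join token
TOKEN=$(docker swarm join-token -q worker)
for node in swarm-2 swarm-3; do
  docker-machine ssh $node \
    docker swarm join --token $TOKEN $(docker-machine ip swarm-1):2377
done
```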

Now we can target our local client at the swarm and create a service (running the WebSphere Liberty ferret application):
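Something like the following, where the image name is illustrative (any image exposing an HTTP endpoint, ideally with a HEALTHCHECK defined, would do) and the machine name comes from the sketch above:

```shell
eval $(docker-machine env swarm-1)
# Publish port 9080 on the routing mesh so the app is reachable via any node
docker service create --name ferret --publish 9080:9080 example/ferret
docker service ps ferret
```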

Once service ps reports the task as running, due to the routing mesh, we can call the application via any of the nodes:
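For example (machine names and context root as assumed in the earlier sketches):

```shell
# The routing mesh forwards the request to a node running the task
for node in swarm-1 swarm-2 swarm-3; do
  curl -s http://$(docker-machine ip $node):9080/ferret/
done
```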

Scale up the number of instances and wait for all three to report as running:
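Assuming the service was named ferret:

```shell
docker service scale ferret=3
watch docker service ps ferret
```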

With the default spread strategy, you should end up with a container on each node:
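The placement can be checked with (output not reproduced here):

```shell
docker service ps ferret
```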

Note that the image has a healthcheck defined which uses the default interval of 30 seconds so expect it to take some multiple of 30 seconds for each task to start. Liam’s WASdev article talks more about the healthcheck and also demonstrates how to roll out an update. Here I’m going to look at the reconciliation behaviour. Let’s stop one of the worker nodes and then watch the task state again:
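Again using the illustrative names from the sketches above:

```shell
docker-machine stop swarm-3
watch docker service ps ferret
```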

You will see the swarm detect that the task is no longer running on the stopped node and move it to one of the two remaining nodes:

(You’ll see that there is a niggle here in the reporting of the state of the task that is shutdown.)

This article only scratches the surface of the capabilities of both swarm mode and SoftLayer. For the latter, I’d particularly recommend looking at the bare metal capabilities where you can benefit from the raw performance of containers without the overhead of a hypervisor.

Building application images with WebSphere traditional

September 25th, 2016

For a while now I’ve had a bullet point on a chart that blithely stated that you could add an application on top of our WebSphere Application Server traditional Docker image using wsadmin and, in particular, with the connection type set to NONE i.e. without the server running. Needless to say, when I actually tried to do this shortly before a customer demo it didn’t work! Thankfully, with moral support from a colleague and the excellent command assistance in the admin console, it turns out that my initial assertion was correct and the failure was just down to my rusty scripting skills. Here’s how…

First, the Dockerfile that builds an image containing our ferret sample application, taking the WAR file from Maven Central.
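The gist is along these lines; the base image tag, Maven Central coordinates and install paths are illustrative and may differ from the original:

```dockerfile
FROM ibmcom/websphere-traditional:profile
# Pull the ferret sample WAR from Maven Central (coordinates illustrative)
RUN wget -q -O /tmp/ferret.war \
  https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.2/ferret-1.2.war
# Install the application via wsadmin with the server stopped (-conntype NONE)
RUN /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh \
  -conntype NONE -lang jython \
  -c "AdminApp.install('/tmp/ferret.war', ['-appname', 'ferret', \
      '-contextroot', '/ferret', '-usedefaultbindings']); AdminConfig.save()"
```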

The following script then builds an image from the Dockerfile in the gist above, runs it, waits for the server to start, and then retrieves the ferret webpage.
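A sketch of that script, assuming the Dockerfile above and the usual WebSphere startup message:

```shell
#!/bin/bash
docker build -t websphere-ferret .
CID=$(docker run -d -p 9080:9080 websphere-ferret)
# Wait for WebSphere to report that the server has started
until docker logs $CID 2>&1 | grep -q "open for e-business"; do
  sleep 5
done
curl http://localhost:9080/ferret/
docker rm -f $CID
```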

Using Rocker to build Liberty images from Java source

September 24th, 2016

Looking for solutions to my archive extraction problem brought me to look at the Rocker project from Grammarly. I’d looked at it before but not in any great detail. In a nutshell, it aims to extend the Dockerfile syntax to overcome some of its current limitations. Although not an option for me (because I can’t depend on anything beyond the standard Docker toolset), the Rocker solution to my earlier problem would be as simple as follows:
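A minimal Rockerfile sketch (base image, user/group names and archive path all illustrative):

```dockerfile
FROM ubuntu:16.04
RUN groupadd -r bar && useradd -r -m -g bar foo
USER foo
# MOUNT exposes the build context at build time without it becoming a layer,
# so tar can unpack as foo:bar with no root-owned files and no chown copy
MOUNT .:/host
RUN tar -xzf /host/example.tar.gz -C /home/foo
```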

The extra syntax here is the MOUNT command which follows the same syntax as the --volume flag on docker run. As the Grammarly team point out, there are trade-offs here which help to explain why the Docker maintainers are reluctant to add volume mounts to docker build. Here, changes to the contents of the mounted directories do not result in the cache being busted.

Anyway, this post is meant to be about a different problem: building Docker images where the chosen language requires compilation e.g. Java. One approach (that taken by OpenShift’s source-to-image) is to add the source to an image that contains all of the necessary pieces to build, package and run it. As shown in Jamie’s WASdev post, for Liberty that might mean putting Maven and a full JDK in to the image. I’m not a fan of this approach: I prefer to end up with an image that only contains what is needed to run the application.

The following shows how this might look using Rocker:
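A sketch of the Rockerfile (the sample name, WAR path and tag are illustrative):

```dockerfile
# First build: compile and package the WAR with Maven
FROM maven:3-jdk-8
# Cache the Maven repository in a build-time volume
MOUNT ~/.m2:/root/.m2
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -B package
# Make the WAR available to subsequent build steps
EXPORT target/sample.war /sample.war

# Second build: runtime image containing only what is needed to run
FROM websphere-liberty:microProfile
IMPORT /sample.war /config/dropins/sample.war
TAG microprofile-sample
```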

Here we’re building one of the Micro Profile samples (which uses Maven) and then creating an image with the resulting WAR and the new WebSphere Liberty Micro Profile image. You’ll note that there are two FROM statements in the file. First we build on the maven image to create the WAR file. We then use the Rocker EXPORT command to make the WAR file available to subsequent build steps. The second part of the build then starts with the websphere-liberty:microProfile image, imports the WAR and tags it. Building, running and then calling the application is then as simple as follows:
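Along these lines, assuming the Rockerfile tags the image microprofile-sample and the application's context root is /sample (both illustrative):

```shell
rocker build .
docker run -d --name sample -p 9080:9080 microprofile-sample
sleep 30   # give Liberty a moment to start
curl http://localhost:9080/sample/
docker rm -f sample
```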

The other thing of note is that we’ve used the MOUNT command to create a volume for the Maven cache. The volume is tied to this Rockerfile so, if you run rocker build --no-cache . you’ll see that it rebuilds the images from scratch but Maven does not need to download the dependencies again.

The MOUNT is also a great way to overcome the long-running issue of how to provide credentials required to access resources at build time without them ending up in the final image. Other nice features include the ATTACH command which effectively allows you to set a breakpoint in your Rockerfile, and the fact that the Rockerfile is pre-processed as a golang template allowing much more advanced substitutions than with Docker’s build arguments.

Unpacking an archive as a non-root user during docker build

September 23rd, 2016

The way we build our WebSphere traditional Docker images is as a two-step process. First we install the products using Installation Manager and generate a tar file containing the installed product. Then we suck that tar file in to a clean image so that the resulting image does not contain all the cruft left lying around from the install process. (Not including Installation Manager in the final image also helps to reinforce that these images are intended to be immutable.)

The ADD Dockerfile command is very convenient for copying in a tar file from the context and unpacking it, all in one atomic operation. Unfortunately the ADD command ignores the current user and always unpacks as root. (Aside: Docker recently re-opened the proposal created by IBMer Megan Kostick to address this problem.) You could run a chown following the ADD but this results in all the files being copied in to a new layer (not cool when the contents of your tar file weighs in at 1.5GB!). Our initial starting point was to make sure that all the files already had the right ownership when they are added to the tar file. This involved creating the same user/group in the initial image and relying on getting the same uid/gid in the eventual image, something I wasn’t entirely happy with.

A related problem that we had run in to elsewhere was that the copy and unpack magic of ADD doesn’t extend to zip files, a format in which many of our install binaries are found. Where those binaries are already hosted somewhere, it’s simple enough to use wget or curl to pull the files, unpack, and perform any necessary cleanup as part of a RUN command. The obvious solution to my local tar or zip file was to host the file somehow. I decided to spin up a python container to serve up the files as follows:
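Something along these lines (container name and port illustrative):

```shell
# Serve the current directory over HTTP from a throwaway container
docker run -d --name file-server -v $(pwd):/data -w /data -p 8000:8000 \
  python:3 python -m http.server 8000
```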

That URL can then be consumed in a subsequent build. For example, if I had example.tar.gz in the directory on the host, I could unpack as the user/group foo:bar in my image with the following Dockerfile:
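A sketch of that Dockerfile; the build-argument name TAR_URL is my own choice here:

```dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
RUN groupadd -r bar && useradd -r -m -g bar foo
USER foo
ARG TAR_URL
# curl and tar run as the current (non-root) user, so the files are
# unpacked as foo:bar without a second chown layer
RUN mkdir /home/foo/unpacked && \
    curl -sSL "$TAR_URL" | tar -xz -C /home/foo/unpacked
```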

To build the image, we then just need to pass in the URL as a build argument and, when we’re done, we can clean up the python container:
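Assuming the build argument is named TAR_URL and the serving container file-server as in the sketches above (the host IP as seen from the build containers will vary by environment):

```shell
docker build --build-arg TAR_URL=http://192.168.0.2:8000/example.tar.gz -t unpacked .
docker rm -f file-server
```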

The result of all of this is that we then get the default non-root behavior of tar which is to unpack as the current user.

Containerizing background processes

September 22nd, 2016

The lifetime of a Docker container is tied to the lifetime of the PID 1 process executed when the container was started. WebSphere Liberty has a convenient server run command to run the application server in the foreground. Sadly, that’s not the case with the traditional WebSphere’s startServer.sh script which simply starts the server process in the background and then exits. To ensure that the container didn’t exit as well, we started out with a script something along the following lines:
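A sketch of that first attempt (profile path and server name illustrative):

```shell
#!/bin/bash
PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
$PROFILE/bin/startServer.sh server1
# The PID file is not written immediately, hence the sleep
sleep 10
# Keep PID 1 alive for as long as the server process exists
while kill -0 $(cat $PROFILE/logs/server1/server1.pid) 2>/dev/null; do
  sleep 1
done
```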

where server1.pid is a file created by the server process (but not immediately, hence the initial sleep). That successfully kept the container alive but failed to allow it to shutdown cleanly! A docker stop, for example, would wait for the default timeout period and then kill the process. Not great for any in-flight transactions! The solution was simple enough: add a trap to catch any interrupt and issue the command to stop the server:
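Along these lines (again, paths and server name are illustrative):

```shell
#!/bin/bash
PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
# Stop the server cleanly when docker stop sends SIGTERM
trap "$PROFILE/bin/stopServer.sh server1" SIGTERM SIGINT
$PROFILE/bin/startServer.sh server1
sleep 10
while kill -0 $(cat $PROFILE/logs/server1/server1.pid) 2>/dev/null; do
  sleep 1
done
```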

All was well with the world until we then enabled server security by default. Unfortunately with security enabled the stopServer.sh script requires credentials to be provided and there is no way to get those credentials to the script. The solution was to switch to sending the interrupt signal to the server process. I also disliked that initial sleep so I decided to retrieve the process ID via ps (something that’s safer in a container given the limited process tree) and then wait whilst the process’s directory exists in /proc. The resulting code looked along the following lines:
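A sketch of that version:

```shell
#!/bin/bash
PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
# A function so that $PID is evaluated when the trap fires,
# not at the point the trap is set up
stop_server() {
  kill -SIGINT $PID
}
trap stop_server SIGTERM SIGINT
$PROFILE/bin/startServer.sh server1
# Safe enough in a container with its limited process tree
PID=$(ps -C java -o pid= | tr -d ' ')
# Wait whilst the process's directory exists in /proc
while [ -d /proc/$PID ]; do
  sleep 1
done
```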

Note the use of a function so that $PID is not evaluated at the point the trap is set up.
Another disadvantage with having the server process in the background is the lack of output in the container logs. I decided to rectify that whilst I was at it by adding calls to tail the server log files:
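Something along these lines, with $PID being the server process ID retrieved earlier (log paths illustrative):

```shell
LOGS=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1
# -F retries the open so we keep following when the logs roll over;
# -n +0 starts at the top of the file; --pid makes tail exit with the server
tail -F -n +0 --pid=$PID $LOGS/SystemOut.log &
tail -F -n +0 --pid=$PID $LOGS/SystemErr.log 1>&2 &
```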

The significance of the tail parameters is as follows. The capital F indicates that the attempts to follow the log file should be retried. This ensures that we continue to follow the latest file when the logs roll over. The --pid parameter ensures that the background tail processes exit along with the server process. The -n +0 indicates that the output should start at the beginning of the file so that entries output whilst the startServer.sh script is running are not lost. As previously noted, Docker preserves stderr across the remote API so we make sure to direct the output from SystemErr.log there.

Start of the season

September 18th, 2016

[Photo: Emma and Duncan at the finish]

The orienteering season got under way for us yesterday with a SOC event on an area called Salisbury Trench (east of Janesmoor Pond). Christine was saving herself for the Hursley 10K on Sunday so just took the children round the yellow course. (Actually, the children plus the pictured bear which Duncan has on loan from school as ‘Star of the Week’. It’s slightly concerning that he’s been awarded this in only the second week of the school year as normally it seems to go to those in need of some encouragement at school! Anyway, I digress…)

I went round the blue course which was quite fun, particularly given that the paths had been (deliberately) left off the map. The bracken needs a little longer to die off properly but the area was still pretty runnable although, as Pete Bray demonstrated later in the day, it was actually runnable at a much faster pace than I was managing! The controls had been hung fairly high for which I was grateful as my compass-work was a little rusty. My ankle was also playing up a bit – a reminder not to get too carried away. The one event a month that SOC puts on is about the right frequency without having to stray further afield.

Container Camp III

September 10th, 2016

[Photo: Container Camp]

On Friday I attended my third Container Camp UK. The venue had changed once more, this time taking up residence in the Picturehouse Central cinema by Piccadilly Circus. As with last year, this meant comfy seats which, contrary to what you might think, actually makes it easier to stay awake! In another repeat from 2015, we started late and connecting to the projector proved problematic throughout the day. This time we had a selection of shiny MacBooks with their new-fangled USB-C connectors to thank!

The great thing about this conference is its independence which means that the sessions during the day covered the complete gamut of container technologies. Here’s a quick run down of the day:

  • Craig Box from Google kicked the day off. His session was billed as covering Kubernetes 1.3 but, as he pointed out, that was old hat with 1.4 due to release within a week. As such, he spent much of the pitch talking about what was coming up. To me it was a reminder to have a play with deploying various WebSphere topologies with Pet Sets.
  • Next up was Ben Firshman, repeating his serverless app talk from DockerCon. I keep meaning to ask whether he knows that OpenWhisk supports Docker containers as actions.
  • Michael Hausenblas from Mesosphere came after the break. He was talking about DRAX, his chaos testing tool for DC/OS.
  • Michael was followed by Nishant Totla, giving his first conference presentation. He’s an engineer at Docker working on swarmkit, with orchestration in Docker 1.12 being the subject of his presentation.
  • Mark Shuttleworth of Canonical fame had the last session of the morning. He was talking about snaps for application packaging, particularly in the context of IoT devices.
  • Once the long lunch queue had finally subsided there was a series of lightning talks but, standing about three metres from the speakers, I still couldn’t hear half of what was said over the background noise. I’ll have to wait for the replays.
  • After lunch, Jonathan Boulle from CoreOS talked about the rkt container runtime and, in particular, the work that has been done to integrate it in to Kubernetes. Undoubtedly factoring the Docker-specifics out of Kubernetes has been beneficial to the project. It remains to be seen whether rkt overtakes Docker as the runtime of choice.
  • George Lestaris (now working on Garden for Pivotal) was talking about the project to use the CernVM File System as the backing for a container layered file system. Imagine if the large proportion of the content of many images that is never touched by the running process never had to be pulled at all.
  • Liz Rice had borrowed Julz Friedman’s pitch on building containers from scratch with Go. It was interesting to compare Liz’s style of “oh look – what would happen if I tried this?” versus Julz’s “let me show you my skills”!
  • Gareth Robertson then took to the stage briefly to plug RC1 for Label Schema which seeks to standardise a base set of Docker image labels.
  • After another break, Ed Robinson from Reevoo gave an entertaining pitch on the Træfik reverse proxy. He talked about cheese a little bit too much though as this was the point at which a mouse started to repeatedly traverse the flooring in front of me!
  • Chris Van Tuin from Red Hat gave an OpenShift pitch, lightly disguised as a presentation on container security.
  • Dustin Kirkland, another Canonical employee was talking about LXD and HPC. My attention started to drift at this point as watching the activities of the mouse proved more entertaining!
  • Docker Captain Alex Ellis rounded off the day with a Swarm/Raspberry Pi/IoT demo. You can’t beat a few flashing lights to please the audience!

Everything was being recorded so keep checking back on the conference YouTube channel for any sessions that pique your interest.

Summer Holidays: Act Three

September 5th, 2016

This final instalment is mostly taken up with our actual summer holiday. Taking a holiday in Britain at the end of August can sometimes test the definition of summer and, as we set off for Pembrokeshire, we were heading in to gale force winds. To be fair, this kept the roads fairly empty and, when we arrived at Broad Haven, meant there were some impressive waves breaking against the sea wall. We were staying in a ‘lodge’ at the same place we’d stayed six years earlier when Duncan would have been about 9 months old.

[Photos: St David's Cathedral; surf]

By the following day, the wind had died down enough that Christine and the children could test out their new wetsuits body boarding in the sea. Unfortunately the rain returned before too long and we had to test out the selection of board games in the lodge. Things weren’t much better the following day and we tested out the swimming pool in Haverfordwest before taking a trip to St David’s for a look round the cathedral.

[Photos: Pembroke Castle; Emma body boarding]

The sun finally made itself felt after that and we spent two pleasant days at the beach. Duncan has thankfully learnt not to eat sand in the intervening years! On another day we visited the privately owned Pembroke Castle which was a trip down memory lane for me having been there during a junior school trip to Tenby. We fell in with a guided tour where there was a good selection of gruesome stories to entertain the children. It was also, slightly randomly, circus skills day, and the children greatly enjoyed the Punch and Judy show.

[Photos: Marloes Peninsula; seals]

We also revisited Martin’s Haven where the martins are still in residence in the toilets! We debated a trip to Skomer Island but it was too late in the year to see puffins. Instead we just wandered the cliff top path, looking down on the seals and their newborn pups below.

[Photo: Llyn Idwal]

As with our last trip to Pembrokeshire, it was followed by a drive up to north Wales. In an unfortunate recurrence, Emma was once again car sick on that journey. We stayed a couple of nights in Caernarfon to be close to Christine’s cousin and extended family who were staying on Anglesey. We took the children for a walk round Llyn Idwal which was unfortunately shrouded in damp mist. Christine and her cousin did a run/walk up to the Glyders and such was the visibility that they managed to descend on the wrong side of Tryfan!

[Photos: Dave in the sea; Newborough]

In contrast, we had glorious sunshine for the following day’s visit to Newborough Sands, scene of the British Orienteering Champs in 1995. While the others set off along the beach to the island (at least it’s an island at high tides) I had a run round the 10K+ Commonwealth Trail Champs route which is signposted.

We relocated to Bryn Gwynant Youth Hostel for the next couple of nights but met up with Cath and family again at Pen-y-Pass for an assault on Snowdon. Thankfully, unlike our last Snowdon trip along the Miner’s Track, no running buggies or baby carriers were required and this time the children made it all the way to the summit of Snowdon. Unfortunately the cloud never lifted as forecast and it was pretty miserable on top, not helped by the café being closed; Emma was heartbroken that she wouldn’t be able to spend any money in the shop! We descended back down the tourist track in to Llanberis for the traditional refuelling at Pete’s Eats.

[Photo: Wilderhope Manor]

Christine had a grant interview in Swindon on the Thursday so we departed Wales and spent a night in the rather grand YHA Wilderhope Manor on Wenlock Edge. The stay was even more grand for the fact that our ‘en-suite room’ turned out to be the bridal suite! The mere presence of a bridal suite is a good indication of why we have never been able to book a room here at the weekend when orienteering in Shropshire.

[Photo: STEAM]

Whilst Christine attended her interview, we amused ourselves at the nearby STEAM Museum of the Great Western Railway. It was billed as being an excellent way to pass a few hours and so it proved to be. There were a relatively small number of locomotives on display but this meant there was plenty of space to stand back and appreciate them. There were also lots of diversions for the children which meant that I could actually read some of the material on display. I hadn’t appreciated the extent to which Swindon owed its existence, or at least size, to the presence of the railway.

[Photo: Paultons Park]

That brought us back home but, with Christine working a weekend open day at the University, I still had some child-minding to do and we decided to tick one more item off the children’s bucket list for the summer: a return trip to Paultons Park. The answer to the question I posed at the end of my last blog post on this subject was 5 years, although Emma has managed a trip there with school in the interim. The children’s tastes have certainly matured and we only had one ride in Peppa Pig World (although this was possibly my worst with Duncan attempting to spin our cabin as fast as he could!). Thankfully the queues are somewhat shorter in other parts of the park, including the new rides in the Lost Kingdom. Emma demurred at some of the rides but this only spurred Duncan on and sadly he was the one who still needs to be accompanied by an adult on many of them! In the end, Emma caved in and joined us on everything. The only ride we didn’t do (although Duncan was definitely eyeing it up) was the Edge.