Archive for the ‘Work’ Category

WebSphere Liberty admin center in Docker

Tuesday, September 27th, 2016

The content of the WebSphere Liberty Docker images currently matches the runtime install zips that we make available for download from WASdev.net. One consequence of this is that none of them contain the admin center. Adding it is very simple though, as the following Dockerfile shows:
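
Something like the following should do the trick (the user name, password and snippet file name here are just examples to replace):

    FROM websphere-liberty
    # Add a snippet of server configuration under configDropins that enables the
    # admin center, defines a user to log in with and permits remote config updates
    RUN mkdir -p /config/configDropins/overrides \
     && echo '<server> \
          <featureManager><feature>adminCenter-1.0</feature></featureManager> \
          <quickStartSecurity userName="wsadmin" userPassword="wsadminPassword"/> \
          <remoteFileAccess><writeDir>${server.config.dir}</writeDir></remoteFileAccess> \
        </server>' > /config/configDropins/overrides/admin-center.xml \
     && /opt/ibm/wlp/bin/installUtility install --acceptLicense adminCenter-1.0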

This Dockerfile adds a snippet of server XML under the configDropins directory that adds the adminCenter-1.0 feature. It then uses installUtility to install that feature. The admin center requires a user registry to be defined for authentication and here we use the quickStartSecurity stanza to define a wsadmin user. We’ll come back to remoteFileAccess in a moment.

We can then build and run this image as follows:
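
For example (the image and container names are arbitrary):

    docker build -t liberty-admin .
    docker run -d --name admin -p 9443 liberty-admin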

Once the server has started you should then be able to access /adminCenter on the HTTPS port returned by docker port admin 9443 using the credentials defined in the Dockerfile.

Liberty Admin Center

If you then click on the Explore icon in the toolbox you’ll find information about any applications that are (or are not) deployed to the server, the server configuration, and server-level metrics. The last of these may be of particular interest when trying to determine suitable resource constraints for a container.

Liberty Admin Center Monitoring

In a single-server setup, it’s not currently possible to deploy an application via the admin center. For a simple application you could just place it in the dropins directory but, for argument’s sake, let’s say that we need to provide some extra configuration. I’m going to assume that you have ferret-1.2.war in the current directory. We then copy the file into the container:
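
Assuming the container is still named admin (and remembering that /config maps to the server configuration directory in the official image):

    docker cp ferret-1.2.war admin:/config/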

In the admin center, we then navigate to Configure > server.xml, click Add child under the Server element, select Application and click Add. Fill in the location as ferret-1.2.war and the context root as ferret then click Save. It is the remoteFileAccess stanza that we added to the server configuration that allows us to edit the server configuration on the fly.

Add Application

If you return to the applications tab you should see the application deployed and you can now access the ferret application at /ferret!

Ferret application installed

Obviously modifying the server configuration in a running container is at odds with the idea of an immutable server but it may still be of use at development time or for making non-functional updates, e.g. to the trace settings enabled for a server.

Docker swarm mode on IBM SoftLayer

Monday, September 26th, 2016

Having written a few posts on using the IBM Containers service in Bluemix I thought I’d cover another option for running Docker on IBM Cloud: using Docker on VMs provisioned from IBM’s SoftLayer IaaS. This is particularly easy with Docker Machine as there is a SoftLayer driver. As the docs state, there are three required values which I prefer to set as the environment variables SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN. The instructions to retrieve/generate an API key for your SoftLayer account are here. Don’t worry if you don’t have a domain name free – it is only used as a suffix on the machine names when they appear in the SoftLayer portal so any valid value will do. With those variables exported, spinning up three VMs with Docker is as simple as:
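
For example (the machine names are arbitrary):

    for node in sl1 sl2 sl3; do
      docker-machine create --driver softlayer $node
    done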

Provisioning the VMs and installing the latest Docker engine may take some time. Thankfully, initialising swarm mode across the three VMs with a single manager and two worker nodes can then be achieved very quickly:
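
Using sl1 as the manager:

    # Initialise the swarm on the first node, advertising its public IP
    docker-machine ssh sl1 "docker swarm init --advertise-addr $(docker-machine ip sl1)"
    # Retrieve the worker join token and join the other two nodes to the swarm
    TOKEN=$(docker-machine ssh sl1 "docker swarm join-token -q worker")
    docker-machine ssh sl2 "docker swarm join --token $TOKEN $(docker-machine ip sl1):2377"
    docker-machine ssh sl3 "docker swarm join --token $TOKEN $(docker-machine ip sl1):2377"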

Now we can target our local client at the swarm and create a service (running the WebSphere Liberty ferret application):
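
The image name below is just a placeholder for an image containing the Liberty ferret application:

    eval $(docker-machine env sl1)
    docker service create --name ferret --replicas 1 -p 9080:9080 myorg/ferret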

Once service ps reports the task as running, due to the routing mesh, we can call the application via any of the nodes:
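
For example, using the IP address of one of the worker nodes:

    docker service ps ferret
    curl http://$(docker-machine ip sl2):9080/ferret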

Scale up the number of instances and wait for all three to report as running:
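
For example:

    docker service scale ferret=3
    watch docker service ps ferret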

With the default spread strategy, you should end up with a container on each node:

Note that the image has a healthcheck defined which uses the default interval of 30 seconds so expect it to take some multiple of 30 seconds for each task to start. Liam’s WASdev article talks more about the healthcheck and also demonstrates how to roll out an update. Here I’m going to look at the reconciliation behaviour. Let’s stop one of the worker nodes and then watch the task state again:
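
For example, stopping the node named sl3:

    docker-machine stop sl3
    watch docker service ps ferret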

You will see the swarm detect that the task is no longer running on the node that has been stopped and move it to one of the two remaining nodes:

(You’ll see that there is a niggle here in the reporting of the state of the task that is shut down.)

This article only scratches the surface of the capabilities of both swarm mode and SoftLayer. For the latter, I’d particularly recommend looking at the bare metal capabilities where you can benefit from the raw performance of containers without the overhead of a hypervisor.

Building application images with WebSphere traditional

Sunday, September 25th, 2016

For a while now I’ve had a bullet point on a chart that blithely stated that you could add an application on top of our WebSphere Application Server traditional Docker image using wsadmin and, in particular, with the connection type set to NONE i.e. without the server running. Needless to say, when I actually tried to do this shortly before a customer demo it didn’t work! Thankfully, with moral support from a colleague and the excellent command assistance in the admin console, it turns out that my initial assertion was correct and the failure was just down to my rusty scripting skills. Here’s how…

First, the Dockerfile that builds an image containing our ferret sample application, taking the WAR file from Maven Central.
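
A sketch of that Dockerfile, assuming curl is available in the base image and using the usual default profile and server names (the base image tag and paths may need adjusting):

    FROM ibmcom/websphere-traditional:profile
    # Pull the ferret sample WAR (net.wasdev.wlp.sample:ferret:1.2) from Maven Central
    RUN curl -sSL -o /tmp/ferret-1.2.war \
          https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.2/ferret-1.2.war
    # Install the application using wsadmin with the server stopped (-conntype NONE)
    RUN /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh -conntype NONE -lang jython \
          -c "AdminApp.install('/tmp/ferret-1.2.war', ['-appname', 'ferret', '-contextroot', 'ferret', '-usedefaultbindings'])" \
          -c "AdminConfig.save()" \
     && rm /tmp/ferret-1.2.war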

The following script then builds an image from the Dockerfile in the gist above, runs it, waits for the server to start, and then retrieves the ferret webpage.
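
A rough equivalent of that script (the image name, published port and the startup message being waited on are assumptions):

    #!/bin/bash
    docker build -t was-ferret .
    docker run -d --name was-ferret -p 9080:9080 was-ferret
    # Wait for the server to report that it is open for e-business
    until docker logs was-ferret 2>&1 | grep -q "open for e-business"; do
      sleep 5
    done
    curl http://localhost:9080/ferret/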

Using Rocker to build Liberty images from Java source

Saturday, September 24th, 2016

Looking for solutions to my archive extraction problem brought me to look at the Rocker project from Grammarly. I’d looked at it before but not in any great detail. In a nutshell, it aims to extend the Dockerfile syntax to overcome some of its current limitations. Although not an option for me (because I can’t depend on anything beyond the standard Docker toolset), the Rocker solution to my earlier problem would be as simple as follows:
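
Something like the following sketch (the base image, user/group and archive name are illustrative):

    FROM ubuntu:16.04
    RUN groupadd -r bar && useradd -r -m -g bar foo
    USER foo
    # Mount the directory containing the archive from the host at build time and
    # unpack it as the current (non-root) user
    MOUNT .:/tmp/context
    RUN tar -xzf /tmp/context/example.tar.gz -C /home/foo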

The extra syntax here is the MOUNT command which follows the same syntax as the --volume flag on docker run. As the Grammarly team point out, there are trade-offs here which help to explain why the Docker maintainers are reluctant to add volume mounts to docker build. Here, changes to the contents of the mounted directories do not result in the cache being busted.

Anyway, this post is meant to be about a different problem: building Docker images where the chosen language requires compilation e.g. Java. One approach (that taken by OpenShift’s source-to-image) is to add the source to an image that contains all of the necessary pieces to build, package and run it. As shown in Jamie’s WASdev post, for Liberty that might mean putting Maven and a full JDK into the image. I’m not a fan of this approach: I prefer to end up with an image that only contains what is needed to run the application.

The following shows how this might look using Rocker:
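
A sketch of such a Rockerfile (the Maven image tag, WAR name and final tag are illustrative):

    # Build stage: compile and package the application with Maven
    FROM maven:3-jdk-8
    # A build-time volume, tied to this Rockerfile, to cache the Maven repository
    MOUNT /root/.m2
    ADD . /usr/src/app
    WORKDIR /usr/src/app
    RUN mvn -B package
    # Make the WAR available to subsequent build steps
    EXPORT target/myapp.war

    # Runtime stage: an image containing only Liberty and the application
    FROM websphere-liberty:microProfile
    IMPORT myapp.war /config/dropins/myapp.war
    TAG microprofile-sample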

Here we’re building one of the Micro Profile samples (which uses Maven) and then creating an image with the resulting WAR and the new WebSphere Liberty Micro Profile image. You’ll note that there are two FROM statements in the file. First we build on the maven image to create the WAR file. We then use the Rocker EXPORT command to make the WAR file available to subsequent build steps. The second part of the build then starts with the websphere-liberty:microProfile image, imports the WAR and tags it. Building, running and then calling the application is then as simple as follows:
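
For example (the context root will depend on the sample being built):

    rocker build
    docker run -d --name microprofile -p 9080:9080 microprofile-sample
    curl http://localhost:9080/myapp/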

The other thing of note is that we’ve used the MOUNT command to create a volume for the Maven cache. The volume is tied to this Rockerfile so, if you run rocker build --no-cache . you’ll see that it rebuilds the images from scratch but Maven does not need to download the dependencies again.

The MOUNT is also a great way to overcome the long-running issue of how to provide credentials required to access resources at build time without them ending up in the final image. Other nice features include the ATTACH command which effectively allows you to set a breakpoint in your Rockerfile, and the fact that the Rockerfile is pre-processed as a golang template allowing much more advanced substitutions than with Docker’s build arguments.

Unpacking an archive as a non-root user during docker build

Friday, September 23rd, 2016

The way we build our WebSphere traditional Docker images is as a two-step process. First we install the products using Installation Manager and generate a tar file containing the installed product. Then we suck that tar file into a clean image so that the resulting image does not contain all the cruft left lying around from the install process. (Not including Installation Manager in the final image also helps to reinforce that these images are intended to be immutable.)

The ADD Dockerfile command is very convenient for copying in a tar file from the context and unpacking it, all in one atomic operation. Unfortunately the ADD command ignores the current user and always unpacks as root. (Aside: Docker recently re-opened the proposal created by IBMer Megan Kostick to address this problem.) You could run a chown following the ADD but this results in all the files being copied into a new layer (not cool when the contents of your tar file weigh in at 1.5GB!). Our initial starting point was to make sure that all the files already had the right ownership when they were added to the tar file. This involved creating the same user/group in the initial image and relying on getting the same uid/gid in the eventual image, something I wasn’t entirely happy with.

A related problem that we had run into elsewhere was that the copy and unpack magic of ADD doesn’t extend to zip files, a format in which many of our install binaries are found. Where those binaries are already hosted somewhere, it’s simple enough to use wget or curl to pull the files, unpack, and perform any necessary cleanup as part of a RUN command. The obvious solution for my local tar or zip file was to host the file somehow. I decided to spin up a python container to serve up the files as follows:
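
For example, serving the current directory with Python’s built-in HTTP server and capturing the container’s IP address for later use:

    # Serve the current directory over HTTP from a throwaway container
    docker run -d --name file-server -v $(pwd):/files -w /files python:3 \
      python -m http.server 8000
    # The URL used in the build is based on the container's IP address
    FILE_SERVER=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' file-server)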

That URL can then be consumed in a subsequent build. For example, if I had example.tar.gz in the directory on the host, I could unpack as the user/group foo:bar in my image with the following Dockerfile:
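
A sketch of such a Dockerfile (installing curl first is just one way to pull the archive):

    FROM ubuntu:16.04
    ARG URL
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl ca-certificates \
     && rm -rf /var/lib/apt/lists/* \
     && groupadd -r bar && useradd -r -m -g bar foo
    USER foo
    WORKDIR /home/foo
    # curl and tar run as foo, so the extracted files are owned by foo:bar
    RUN curl -sSL $URL | tar -xz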

To build the image, we then just need to pass in the URL as a build argument and, when we’re done, we can clean up the python container:
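
Using the FILE_SERVER address captured earlier:

    docker build --build-arg URL=http://$FILE_SERVER:8000/example.tar.gz -t example .
    docker rm -f file-server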

The result of all of this is that we then get the default non-root behaviour of tar which is to unpack as the current user.

Containerizing background processes

Thursday, September 22nd, 2016

The lifetime of a Docker container is tied to the lifetime of the PID 1 process executed when the container was started. WebSphere Liberty has a convenient server run command to run the application server in the foreground. Sadly, that’s not the case for WebSphere traditional’s startServer.sh script, which simply starts the server process in the background and then exits. To ensure that the container didn’t exit as well, we started out with a script something along the following lines:
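
A sketch of that first attempt (the profile and server names are the usual defaults):

    #!/bin/bash
    PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
    $PROFILE/bin/startServer.sh server1
    # server1.pid is not created immediately, hence the initial sleep
    sleep 10
    PID=$(cat $PROFILE/logs/server1/server1.pid)
    # Keep the container alive for as long as the server process is running
    while [ -e /proc/$PID ]; do
      sleep 5
    done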

where server1.pid is a file created by the server process (but not immediately, hence the initial sleep). That successfully kept the container alive but failed to allow it to shut down cleanly! A docker stop, for example, would wait for the default timeout period and then kill the process. Not great for any in-flight transactions! The solution was simple enough: add a trap to catch any interrupt and issue the command to stop the server:
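
Something along these lines:

    #!/bin/bash
    PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
    # Stop the server cleanly if the container is asked to stop
    trap "$PROFILE/bin/stopServer.sh server1" INT TERM
    $PROFILE/bin/startServer.sh server1
    sleep 10
    PID=$(cat $PROFILE/logs/server1/server1.pid)
    while [ -e /proc/$PID ]; do
      sleep 5
    done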

All was well with the world until we then enabled server security by default. Unfortunately, with security enabled, the stopServer.sh script requires credentials to be provided and there is no way to get those credentials to the script. The solution was to switch to sending the interrupt signal to the server process directly. I also disliked that initial sleep so I decided to retrieve the process ID via ps (something that’s safer in a container given the limited process tree) and then wait whilst the process’s directory exists in /proc. The resulting code looked something along the following lines:
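
Something like this (the ps invocation assumes the server is the only Java process in the container):

    #!/bin/bash
    stop_server() {
      kill -s INT $PID
    }
    # Use a function so that $PID is evaluated when the signal arrives,
    # not when the trap is set up
    trap stop_server INT TERM
    /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startServer.sh server1
    PID=$(ps -C java -o pid= | tr -d ' ')
    # Wait whilst the server process's directory exists in /proc
    while [ -d /proc/$PID ]; do
      sleep 1
    done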

Note the use of a function so that $PID is not evaluated at the point the trap is set up.
Another disadvantage with having the server process in the background is the lack of output in the container logs. I decided to rectify that whilst I was at it by adding calls to tail the server log files:
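
Something like the following, added once the process ID has been retrieved (the log directory path is the usual default):

    LOG_DIR=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1
    # Follow the logs, retrying across rollovers (-F), from the start of the file
    # (-n +0), and exit when the server process does (--pid)
    tail -F -n +0 --pid=$PID $LOG_DIR/SystemOut.log &
    tail -F -n +0 --pid=$PID $LOG_DIR/SystemErr.log 1>&2 &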

The significance of the tail parameters is as follows. The capital F indicates that the attempts to follow the log file should be retried. This ensures that we continue to follow the latest file when the logs roll over. The pid parameter ensures that the background tail processes exit along with the server process. The -n +0 indicates that the output should start at the beginning of the file so that entries output whilst the startServer.sh script is running are not lost. As previously noted, Docker preserves stderr across the remote API so we make sure to direct the output from SystemErr.log there.

Container Camp III

Saturday, September 10th, 2016

On Friday I attended my third Container Camp UK. The venue had changed once more, this time taking up residence in the Picturehouse Central cinema by Piccadilly Circus. As with last year, this meant comfy seats which, contrary to what you might think, actually make it easier to stay awake! In another repeat from 2015, we started late and connecting to the projector proved problematic throughout the day. This time we had a selection of shiny MacBooks with their new-fangled USB-C connectors to thank!

The great thing about this conference is its independence which means that the sessions during the day covered the complete gamut of container technologies. Here’s a quick run down of the day:

  • Craig Box from Google kicked the day off. His session was billed as covering Kubernetes 1.3 but, as he pointed out, that was old hat with 1.4 due for release within a week. As such, he spent much of the pitch talking about what was coming up. To me it was a reminder to have a play with deploying various WebSphere topologies with Pet Sets.
  • Next up was Ben Firshman, repeating his serverless app talk from DockerCon. I keep meaning to ask whether he knows that OpenWhisk supports Docker containers as actions.
  • Michael Hausenblas from Mesosphere came after the break. He was talking about DRAX, his chaos testing tool for DC/OS.
  • Michael was followed by Nishant Totla, giving his first conference presentation. He’s an engineer at Docker working on swarmkit, with orchestration in Docker 1.12 being the subject of his presentation.
  • Mark Shuttleworth of Canonical fame had the last session of the morning. He was talking about snaps for application packaging, particularly in the context of IoT devices.
  • Once the long lunch queue had finally subsided there was a series of lightning talks but, standing about three metres from the speakers, I still couldn’t hear half of what was said against the background noise. I’ll have to wait for the replays.
  • After lunch, Jonathan Boulle from CoreOS talked about the rkt container runtime and, in particular, the work that has been done to integrate it in to Kubernetes. Undoubtedly factoring the Docker-specifics out of Kubernetes has been beneficial to the project. It remains to be seen whether rkt overtakes Docker as the runtime of choice.
  • George Lestaris (now working on Garden for Pivotal) was talking about the project to use the CernVM File System as the backing for a container layered file system. Consider: what if the large proportion of the content of many images that is never touched by the running process was never pulled at all?
  • Liz Rice had borrowed Julz Friedman’s pitch on building containers from scratch with Go. It was interesting to compare Liz’s style of “oh look – what would happen if I tried this?” versus Julz’s “let me show you my skills”!
  • Gareth Robertson then took to the stage briefly to plug RC1 for Label Schema which seeks to standardise a base set of Docker image labels.
  • After another break, Ed Robinson from Reevoo gave an entertaining pitch on the Træfik reverse proxy. He talked about cheese a little bit too much though, as this was the point at which a mouse started to repeatedly traverse the flooring in front of me!
  • Chris Van Tuin from Red Hat gave an OpenShift pitch, lightly disguised as a presentation on container security.
  • Dustin Kirkland, another Canonical employee, was talking about LXD and HPC. My attention started to drift at this point as watching the activities of the mouse proved more entertaining!
  • Docker Captain Alex Ellis rounded off the day with a Swarm/Raspberry Pi/IoT demo. You can’t beat a few flashing lights to please the audience!

Everything was being recorded so keep checking back on the conference YouTube channel for any sessions that pique your interest.

Book Review: Mastering Docker and Monitoring Docker

Sunday, June 19th, 2016

The films on my flight to the US this week weren’t much to write home about so I ploughed through some of my Safari Queue backlog. Two of the Docker-related books on my queue were from Packt Publishing.

The first was Mastering Docker by Scott Gallagher which, I’ll confess, I only skim-read. Parts of the book were already showing their age even though it was only published in December 2015, but that’s inevitable in such a fast-moving area. More problematic was that I found myself disagreeing with the author so much in just the first couple of chapters that I couldn’t believe anything I read after that. By way of just one example: the author asserts that commands are chained together in Dockerfiles to speed build times and seems to suggest that items added in one layer may be removed in another to keep the image size down. The structure of the book is also poor with material repeated throughout. It certainly doesn’t contain the depth to provide mastery in anything!

The second book from the same stable was Monitoring Docker by Russ McKendrick. Maybe it’s just because this is an area that I know less about but I got on better with this book. It covers use of the Docker API (top, stats and logs), cAdvisor and aggregation with Prometheus. The author covers Zabbix in some detail (a tool that, I must confess, I’d never even heard of) before a gushing endorsement of Sysdig. The book provides an overview of SaaS options (Sysdig Cloud, Datadog and New Relic) before closing with a detailed walk-through of setting up a containerized ELK stack for log aggregation. There are certainly options that the book didn’t cover but I felt it provided good coverage of the considerations to allow the reader to make informed decisions about what they should look for in a monitoring solution.

I also took a look at the early release of Kelsey Hightower’s Kubernetes: Up and Running from O’Reilly. It shows promise but, as it’s not due for release until October, there wasn’t enough content there yet to give a review.