Archive for the ‘WebSphere’ Category

How many processors?

Tuesday, January 31st, 2017

Reading Daniel Bryant’s O’Reilly publication Containerizing Continuous Delivery in Java reminded me of the challenge of determining how many processors are available to you when running in a container. In the case of Java, a call to Runtime.getRuntime().availableProcessors() should show this all-important information. A quick check reveals that, when called in an unconstrained container, this correctly returns the number of cores on my physical hardware (Docker on Linux) or assigned to the VM containing the Docker Engine (Docker Toolbox or Docker for Windows/Mac). If I use the --cpuset-cpus option on docker run to constrain the cores available to the container then this is also correctly reflected in the value returned. The difficulty arises when access to those CPUs is constrained in other ways.
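
As a quick way of checking this for yourself (the class name and image tag below are just illustrative), you could drop the following into AvailableProcessors.java:

    public class AvailableProcessors {
        public static void main(String[] args) {
            // Print the number of processors the JVM believes it can use
            System.out.println(Runtime.getRuntime().availableProcessors());
        }
    }

and then compile and run it in a container, with and without a cpuset constraint:

    # Unconstrained: reports all of the cores on the host/VM
    docker run --rm -v "$PWD":/src -w /src openjdk:8 \
        sh -c 'javac AvailableProcessors.java && java AvailableProcessors'

    # Constrained to the first two cores with --cpuset-cpus: reports 2
    docker run --rm --cpuset-cpus 0-1 -v "$PWD":/src -w /src openjdk:8 \
        sh -c 'javac AvailableProcessors.java && java AvailableProcessors'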

Take, for example, the new --cpus option in Docker 1.13. Setting this to two on a four-way box, I still get four back from a call to availableProcessors() and rightly so: there are four processors and I may get simultaneous access to all four of them even if the cgroup is then going to make sure that I don’t get that access for more than half of the time. Another potential constraint is a highly multi-tenant environment. If I deploy my test application to Bluemix it tells me that there are 48 processors. That’s great but I’m pretty sure I’m not going to get exclusive access to all of those!
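
Reusing the AvailableProcessors class from the sketch above, the quota makes no difference to what the JVM reports:

    # A CFS quota of two CPUs: on a four-core box the JVM still reports 4
    docker run --rm --cpus 2 -v "$PWD":/src -w /src openjdk:8 \
        sh -c 'javac AvailableProcessors.java && java AvailableProcessors'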

One example we’ve seen where this becomes a real problem is in native memory usage. By default, WebSphere Liberty uses the number of available processors to decide how many parallel threads its executor service should support, and each of those threads takes up space in native memory. In a containerized environment where total memory is typically constrained (Bluemix containers are sold by the GB/hour) and some generic heuristic is often used to determine the heap size to allocate to the JVM, that can lead to memory exhaustion. That’s why you’ll see a GitHub issue from my colleague Erin that, among other things, proposes hard-coding a maximum on the number of threads for the executor service in our Docker images.
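
Purely by way of illustration (this is not the exact change proposed in the issue), a configDropins snippet along the following lines caps the executor regardless of how many processors are reported:

    <server>
        <!-- Cap the default executor rather than sizing it from the processor count -->
        <executor maxThreads="16"/>
    </server>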

Docker 1.13 is out

Sunday, January 22nd, 2017

Docker 1.13 finally made it out the door earlier this week and I found some time to play around with it this weekend. I shan’t enumerate all of the new features here as the introductory post from Docker does a good job of that (or you can see the release notes for the gory details). Instead I’ll talk a little bit about some of the features that are of particular interest to me.

Top of the list has to be CLI backwards compatibility. It has been a frustration for some time that you’ve had to set the DOCKER_API_VERSION environment variable in order to have a client talk to an engine using an older version of the API. I almost always hit this following an upgrade or when accessing remote engines. It also made it difficult to have an image containing a Docker client, for example to talk to the engine it was running on. You ended up either having to create an image for each API version or trying to work out the engine’s version so you could set the variable appropriately. It’s a shame that compatibility only goes back as far as 1.12 but it’s a step in the right direction.
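
For example, to talk to an engine still running Docker 1.12 from a newer client you previously had to pin the API version by hand:

    # Docker 1.12 speaks API version 1.24
    export DOCKER_API_VERSION=1.24
    docker version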

Another feature that I’ve long been holding out for is the --squash option on docker build. The way it has been implemented, this will squash all of the layers from the current build down into one, preserving the image history in the process. This means that you no longer have to jump through hoops to make sure temporary files introduced in the build are created, used and deleted all in the same Dockerfile command.
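
A contrived sketch of the sort of Dockerfile this helps with (the archive name is made up):

    FROM ubuntu:16.04
    # Without --squash the archive copied here still occupies space in its own layer...
    COPY big-archive.tar.gz /tmp/
    # ...even though a later layer deletes it again
    RUN tar -xzf /tmp/big-archive.tar.gz -C /opt && rm /tmp/big-archive.tar.gz

Built with the new flag (which, at 1.13, requires the daemon to have experimental features enabled), the deleted archive no longer counts towards the final image size:

    docker build --squash -t example .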

I tested the option out on the build for our websphere-liberty images and was initially surprised that it didn’t reduce the size at all. I know that some files get overwritten in subsequent layers but unfortunately that happens across different images e.g. the javaee7 image overwrites some files in the webProfile7 image. Likewise, for our websphere-traditional images we currently have a two-step build process to avoid getting Installation Manager (IM) in the final image. I had hoped that we’d just be able to uninstall IM and then squash the layers but this would only work if we didn’t install IM in a separate base image. Hopefully the squash flag will gain some options in future to control just how many layers are squashed.

Another space-saving feature is the docker system prune command. Yes, pretty much every Docker user probably already had a script to do this using a host of nested commands but, as with the corresponding docker system df command, it’s good to see Docker making this that bit easier for everyone.
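
In its simplest form:

    # Show how much space images, containers and volumes are consuming
    docker system df

    # Remove stopped containers, unused networks and dangling images
    docker system prune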

The area of restricting CPU usage for containers has also been something of a black art involving shares, cpusets, quotas and periods. (I should know as we’ve given quite some consideration as to what this means for IBM’s PVU and vCPU pricing models.) It’s therefore great to see the --cpus option being added to docker run to radically simplify this area.
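
The new flag is effectively shorthand for the quota and period pairing, so the two commands below should be equivalent (the image is just a placeholder):

    # Old style: a 150ms quota every 100ms period, i.e. one and a half CPUs
    docker run -d --cpu-period=100000 --cpu-quota=150000 nginx

    # Docker 1.13: the same constraint expressed directly
    docker run -d --cpus 1.5 nginx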

Perhaps the biggest feature in Docker 1.13 has to be the introduction of the Docker Compose V3 file format and the ability to deploy these Compose files directly to a swarm using docker stack deploy. This was a glaring hole when swarm mode was introduced in 1.12. It still sits a little uneasily with me though. Docker Compose started out as a tool for the developer. Although a classic Swarm exposed much the same API as a single Docker engine, a few holes started to creep in when trying to use the same YAML against it. For example, you really had to be using images from a repository for each node to be able to access them, and the inability to specify any sort of scaling in the file meant it wasn’t really of use for actual deployment. The latter problem is, at least, resolved with V3 and swarm mode but only at the expense of moving away from something that feels like it is also of use to the developer. Perhaps experience will show that a combination of Compose file extensibility and Distributed Application Bundles will enable reuse of artifacts between development and deployment.
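
A minimal sketch of what that looks like (the service and image names are illustrative). The Compose file gains a deploy section:

    version: "3"
    services:
      web:
        image: nginx
        ports:
          - "80:80"
        deploy:
          replicas: 3

and is then deployed to the swarm with:

    docker stack deploy -c docker-compose.yml web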

I don’t wish to end on a negative note though as, all in all, there’s a lot of good stuff in this release. Roll on Docker 1.14!

Using the Docker remote API to retrieve container CPU usage

Monday, November 28th, 2016

For reasons that I won’t go in to here, I’ve been interested in the CPU accounting aspect of cgroups for a while and I recently found some time to have a poke at what information is available in the Docker remote API. I was interested in getting hold of the actual CPU time used by a container versus the elapsed time that the container has been running for (where the former would be smaller if the container is not CPU intensive and would potentially be much larger if it’s chewing through multiple cores).

The CLI doesn’t expose the information that I was looking for so my first pass was to define an image with curl and jq:
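
Something along these lines (the base image and package choices are my own; the script it packages as the entrypoint is shown below):

    FROM ubuntu:16.04
    # curl (new enough to support --unix-socket) to call the remote API and jq to parse the JSON
    RUN apt-get update && apt-get install -y curl jq && rm -rf /var/lib/apt/lists/*
    COPY cpu.sh /usr/local/bin/cpu.sh
    ENTRYPOINT ["/usr/local/bin/cpu.sh"]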

Build it:
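
Assuming a tag of docker-cpu (the name is arbitrary):

    chmod +x cpu.sh
    docker build -t docker-cpu .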

And then run it with a script as follows:
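
A sketch of the sort of script involved (the JSON paths are those of the remote API; the rest is my own reconstruction): it needs the Docker socket mounted into the container, reads the start time from the inspect endpoint and the cumulative CPU usage from the stats endpoint, and prints the two side by side.

    #!/bin/sh
    # Usage: cpu.sh <container>
    # Requires the Docker socket to be mounted at /var/run/docker.sock
    CONTAINER=$1
    SOCK=/var/run/docker.sock

    # When the container started, from the inspect endpoint
    STARTED=$(curl -s --unix-socket $SOCK \
        http://localhost/containers/$CONTAINER/json | jq -r .State.StartedAt)
    ELAPSED=$(( $(date +%s) - $(date -d "$STARTED" +%s) ))

    # Total CPU time consumed, reported by the stats endpoint in nanoseconds
    USAGE=$(curl -s --unix-socket $SOCK \
        "http://localhost/containers/$CONTAINER/stats?stream=false" \
        | jq .cpu_stats.cpu_usage.total_usage)

    echo "Elapsed time:    ${ELAPSED}s"
    echo "Total CPU usage: $((USAGE / 1000000000))s"

It is then run with something like:

    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker-cpu <container>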

I started out with an Alpine-based image but the version of date it comes with wasn’t capable of parsing the ISO format dates returned by the API. This was an interesting exercise in the use of curl with Unix sockets and jq for parsing JSON on the command line but I thought I could do better.

Next step was a rendering of the script above into golang which you can find over on GitHub. You’ll have to forgive my poor golang – I wouldn’t claim to know the language; this is just a cut-and-shut from numerous sources around the internet. Perhaps the only part worth mentioning is that I explicitly pass an empty version string to the golang Docker library so that you don’t get client-server version mismatch errors.

Having compiled this up into a static binary I could then build a small image from scratch. I then wanted to build this using Docker Hub automated builds and a binary release on GitHub. This raises the thorny issue of how you make the binary executable once you’ve used ADD to download it into the image. There is one solution here that adds a very small C binary that can be used to perform the chmod. Having initially employed this method, I was reminded of another issue that I’d hit: I’d inadvertently doubled the size of our websphere-traditional images to over 3GB with a recursive chmod (the files get copied into a new layer with the modified permissions). So, in the end I caved in and checked the binary in to GitHub so I could use a COPY and pick up the correct permissions.

The resulting image, weighing in at just over 4MB, is on Docker Hub. As the instructions say, it can be run with the command:
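
The actual image name is on the Docker Hub page; with a placeholder in its place, the shape of the command is:

    # <image> is the image from Docker Hub; <container> is the container to measure
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock <image> <container>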

To test out the image, let’s spin up a container that should burn up the two cores allocated to my Docker for Mac VM:
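
For example (any image with a shell will do; the two busy loops below chew through two cores):

    # One busy loop in the background and one in the foreground to keep the container alive
    docker run -d --name burn alpine sh -c 'yes > /dev/null & yes > /dev/null'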

If we leave it for a few minutes we see an output along the following lines:

The total CPU usage is, as we’d expect, twice the elapsed time. Let’s try again but this time run two containers and use cpuset to constrain them both to a single core:
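
For example:

    # Both containers pinned to core 0, each running a single busy loop
    docker run -d --name spin1 --cpuset-cpus 0 alpine sh -c 'yes > /dev/null'
    docker run -d --name spin2 --cpuset-cpus 0 alpine sh -c 'yes > /dev/null'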

This time, the results show that each container is getting half of the CPU time:

(Actually, you can see that the one that has been running longer has slightly more than half as it got the CPU to itself for a couple of seconds before the other container started!) Finally, and just for interest, let’s spin up an unconstrained WebSphere Liberty server:
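
For example, using the official image from Docker Hub:

    docker run -d --name liberty websphere-liberty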

After a minute, we see that it’s used just over 20 seconds of CPU time to start up:

And if we check again after half an hour, we see that without any load, the server has consumed very little extra CPU:

Prometheus and WebSphere Liberty

Monday, October 3rd, 2016

It’s been on my to-do list for some time to try setting up Prometheus to monitor WebSphere Liberty. There is a JMX Exporter which makes the job pretty simple even if there ended up being more steps than I had originally hoped.

My first pass was to try to configure the exporter as a Java agent but sadly the current Java client attempts to use some com.sun packages that don’t work with an IBM JRE. I started down the path of rebuilding our Liberty image on OpenJDK but, when I discovered that the Java agent actually uses Jetty to expose its HTTP endpoint, I decided that I really didn’t want that bolted on to the side of my Liberty process! Ideally I’d get the Java client fixed and then create a Liberty feature to expose the HTTP endpoint but that will have to wait for another day… This time round I decided to configure the exporter as an HTTP server in a side-car container.

The first step was to create a Liberty image with monitoring enabled using the following Dockerfile:
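
Something along the following lines; monitor-1.0 is the feature that provides the MBeans, while the choice of connector used to expose JMX to the exporter is an assumption on my part:

    FROM websphere-liberty:webProfile7
    # Enable the monitoring MBeans and a JMX connector for the exporter to attach to
    COPY monitoring.xml /config/configDropins/overrides/
    RUN installUtility install --acceptLicense defaultServer

where monitoring.xml contains:

    <server>
        <featureManager>
            <feature>monitor-1.0</feature>
            <feature>localConnector-1.0</feature>
        </featureManager>
    </server>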

And then build and run the image and extract the JMX URL:
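
Again a sketch; the location of the file containing the JMX service URL is the part I am least certain of:

    docker build -t liberty-monitored .
    docker run -d --name liberty -p 9080:9080 -p 9443:9443 -p 5556:5556 liberty-monitored

    # The local connector writes its JMX service URL to a file in the server workarea
    JMX_URL=$(docker exec liberty cat \
        /opt/ibm/wlp/output/defaultServer/workarea/com.ibm.ws.jmx.local.address)
    echo $JMX_URL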

Note that, in addition to the normal HTTP and HTTPS ports, we’ve exposed a port (5556) that the exporter container is going to use.

Next we need to build the JMX exporter JAR file using maven:
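
For example:

    git clone https://github.com/prometheus/jmx_exporter.git
    cd jmx_exporter
    mvn package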

And we also need a config file for the exporter that uses the JMX_URL that we extracted from the Liberty image earlier:
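
A minimal config.yaml might look as follows, with the extracted URL substituted in; the single wildcard rule is the important part:

    ---
    jmxUrl: <the JMX_URL value extracted above>
    rules:
      - pattern: ".*"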

The pattern here is subscribing us to all the available MBeans. The following Dockerfile constructs an image with these two artifacts based on the openjdk image from Docker Hub:
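
Something like the following (the exact name of the shaded JAR produced by the Maven build may vary between versions):

    FROM openjdk:8-jre
    COPY jmx_prometheus_httpserver/target/jmx_prometheus_httpserver-*-jar-with-dependencies.jar /jmx_exporter.jar
    COPY config.yaml /config.yaml
    # Serve the metrics over HTTP on the port exposed from the Liberty container
    CMD ["java", "-jar", "/jmx_exporter.jar", "5556", "/config.yaml"]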

Note that we tell the exporter to run on the same port that we exposed from the Liberty container earlier. Now we build and run the image. We use the network from our Liberty container so that the exporter can connect to it on localhost. The curl should retrieve the metrics being exposed by the exporter.
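
For example, assuming the Dockerfile above sits alongside the build output and config file:

    docker build -t jmx-exporter .
    docker run -d --name exporter --network container:liberty jmx-exporter
    curl http://localhost:5556/metrics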

The last step is to run Prometheus. Create a prometheus.yml file to provide the scrape configuration:
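
Something along these lines, where the target is the host on which port 5556 was published (the IP below is a placeholder):

    scrape_configs:
      - job_name: 'liberty'
        scrape_interval: 15s
        static_configs:
          - targets: ['192.168.99.100:5556']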

We can then run the standard Prometheus image from Docker Hub:
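
For example:

    docker run -d --name prometheus -p 9090:9090 \
        -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus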

You can then access the Prometheus UI in your browser on port 9090 of the host where your Docker engine is running. If you’re new to Prometheus, try switching to the Graph tab, entering the name of a metric (e.g. WebSphere_JvmStats_ProcessCPU) and then hit Execute. If all is well, you should see something along the following lines:

Prometheus UI

If the metrics don’t look all that exciting then try applying a bit of load to the server, such as using the siege tool:
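
For example, assuming siege is installed locally and the HTTP port was published at 9080:

    # Ten concurrent users hammering the Liberty welcome page for two minutes
    siege -c 10 -t 2M http://localhost:9080/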

WebSphere Liberty admin center in Docker

Tuesday, September 27th, 2016

The content of the WebSphere Liberty Docker images currently matches the runtime install zips that we make available for download from WASdev.net. One consequence of this is that none of them contain the admin center. Adding it is very simple though as the following Dockerfile shows:
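
A sketch of what that Dockerfile might look like (here I’ve placed the credentials in the config snippet that the Dockerfile copies in, and the password is obviously just an example):

    FROM websphere-liberty:webProfile7
    COPY admincenter.xml /config/configDropins/overrides/
    RUN installUtility install --acceptLicense adminCenter-1.0

with admincenter.xml containing:

    <server>
        <featureManager>
            <feature>adminCenter-1.0</feature>
        </featureManager>
        <quickStartSecurity userName="wsadmin" userPassword="wsadminpwd"/>
        <remoteFileAccess>
            <writeDir>${server.config.dir}</writeDir>
        </remoteFileAccess>
    </server>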

This Dockerfile adds a snippet of server XML under the configDropins directory that adds the adminCenter-1.0 feature. It then uses installUtility to install that feature. The admin center requires a user registry to be defined for authentication and here we use the quickStartSecurity stanza to define a wsadmin user. We’ll come back to remoteFileAccess in a moment.

We can then build and run this image as follows:
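
For example:

    docker build -t admin-center .
    docker run -d --name admin -P admin-center
    docker port admin 9443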

Once the server has started you should then be able to access /adminCenter on the HTTPS port returned by docker port admin 9443 using the credentials defined in the Dockerfile.

Liberty Admin Center

If you then click on the Explore icon in the toolbox you’ll find information about any applications that are (or are not) deployed to the server, the server configuration, and server-level metrics. The last of these may be of particular interest when trying to determine suitable resource constraints for a container.

Liberty Admin Center Monitoring

In a single-server environment, it’s not currently possible to deploy an application via the admin center. For a simple application you could just place it in the dropins directory but, for argument’s sake, let’s say that we need to provide some extra configuration. I’m going to assume that you have ferret-1.2.war in the current directory. We then copy the file into the container:
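
The target directory here is my assumption; a relative application location in server.xml is resolved against the server’s apps directory:

    # Create the apps directory if it isn't already there and copy the WAR into it
    docker exec admin mkdir -p /config/apps
    docker cp ferret-1.2.war admin:/config/apps/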

In the admin center, we then navigate to Configure > server.xml, click Add child under the Server element, select Application and click Add. Fill in the location as ferret-1.2.war and the context root as ferret then click Save. It is the remoteFileAccess stanza that we added to the server configuration that allows us to edit the server configuration on the fly.

Add Application

If you return to the applications tab you should see the application deployed and you can now access the ferret application at /ferret!

Ferret application installed

Obviously modifying the server configuration in a running container is at odds with the idea of an immutable server but it may still be of use at development time or for making non-functional updates e.g. to the trace enabled for a server.

Docker swarm mode on IBM SoftLayer

Monday, September 26th, 2016

Having written a few posts on using the IBM Containers service in Bluemix I thought I’d cover another option for running Docker on IBM Cloud: using Docker on VMs provisioned from IBM’s SoftLayer IaaS. This is particularly easy with Docker Machine as there is a SoftLayer driver. As the docs state, there are three required values which I prefer to set as the environment variables SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN. The instructions to retrieve/generate an API key for your SoftLayer account are here. Don’t worry if you don’t have a domain name free – it is only used as a suffix on the machine names when they appear in the SoftLayer portal so any valid value will do. With those variables exported, spinning up three VMs with Docker is as simple as:
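
The driver picks the exported variables up automatically, so (with arbitrary machine names):

    for vm in swarm-manager swarm-worker-1 swarm-worker-2; do
        docker-machine create --driver softlayer $vm
    done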

Provisioning the VMs and installing the latest Docker engine may take some time. Thankfully, initialising swarm mode across the three VMs with a single manager and two worker nodes can then be achieved very quickly:
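
For example:

    # Initialise the swarm on the manager...
    eval $(docker-machine env swarm-manager)
    docker swarm init --advertise-addr $(docker-machine ip swarm-manager)

    # ...then join the two workers using the worker token
    TOKEN=$(docker swarm join-token -q worker)
    for vm in swarm-worker-1 swarm-worker-2; do
        eval $(docker-machine env $vm)
        docker swarm join --token $TOKEN $(docker-machine ip swarm-manager):2377
    done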

Now we can target our local client at the swarm and create a service (running the WebSphere Liberty ferret application):
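
For example (the image name below is a stand-in for an image containing the ferret application):

    eval $(docker-machine env swarm-manager)
    docker service create --name ferret -p 9080:9080 <ferret-image>
    docker service ps ferret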

Once service ps reports the task as running we can, thanks to the routing mesh, call the application via any of the nodes:
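
Assuming the application’s context root is /ferret:

    for vm in swarm-manager swarm-worker-1 swarm-worker-2; do
        curl -s -o /dev/null -w "$vm: %{http_code}\n" http://$(docker-machine ip $vm):9080/ferret/
    done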

Scale up the number of instances and wait for all three to report as running:
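
For example:

    docker service scale ferret=3
    watch docker service ps ferret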

With the default spread strategy, you should end up with a container on each node:

Note that the image has a healthcheck defined which uses the default interval of 30 seconds so expect it to take some multiple of 30 seconds for each task to start. Liam’s WASdev article talks more about the healthcheck and also demonstrates how to roll out an update. Here I’m going to look at the reconciliation behaviour. Let’s stop one of the worker nodes and then watch the task state again:
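
For example:

    docker-machine stop swarm-worker-2
    watch docker service ps ferret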

You will see the swarm detect that the task is no longer running on the node that has been stopped and reschedule it on one of the two remaining nodes:

(You’ll see that there is a niggle here in the reporting of the state of the task that is shutdown.)

This article only scratches the surface of the capabilities of both swarm mode and SoftLayer. For the latter, I’d particularly recommend looking at the bare metal capabilities where you can benefit from the raw performance of containers without the overhead of a hypervisor.

Building application images with WebSphere traditional

Sunday, September 25th, 2016

For a while now I’ve had a bullet point on a chart that blithely stated that you could add an application on top of our WebSphere Application Server traditional Docker image using wsadmin and, in particular, with the connection type set to NONE i.e. without the server running. Needless to say, when I actually tried to do this shortly before a customer demo it didn’t work! Thankfully, with moral support from a colleague and the excellent command assistance in the admin console, it turns out that my initial assertion was correct and the failure was just down to my rusty scripting skills. Here’s how…

First, the Dockerfile that builds an image containing our ferret sample application, taking the WAR file from Maven Central.

The following script then builds an image from the Dockerfile in the gist above, runs it, waits for the server to start, and then retrieves the ferret webpage.
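
A sketch of the sort of script involved, assuming the image is tagged was-ferret and the application’s context root is /ferret:

    #!/bin/bash
    docker build -t was-ferret .
    docker run -d --name was-ferret -P was-ferret

    # Wait for the traditional WAS startup message
    until docker logs was-ferret 2>&1 | grep -q "open for e-business"; do
        sleep 10
    done

    # Retrieve the ferret page via the mapped HTTP port
    PORT=$(docker port was-ferret 9080 | cut -d: -f2)
    curl -s http://localhost:$PORT/ferret/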

Using Rocker to build Liberty images from Java source

Saturday, September 24th, 2016

Looking for solutions to my archive extraction problem brought me to look at the Rocker project from Grammarly. I’d looked at it before but not in any great detail. In a nutshell, it aims to extend the Dockerfile syntax to overcome some of its current limitations. Although not an option for me (because I can’t depend on anything beyond the standard Docker toolset) the Rocker solution to my earlier problem would be as simple as follows:

The extra syntax here is the MOUNT command which follows the same syntax as the --volume flag on docker run. As the Grammarly team point out, there are trade-offs here which help to explain why the Docker maintainers are reluctant to add volume mounts to docker build. Here, changes to the contents of the mounted directories do not result in the cache being busted.

Anyway, this post is meant to be about a different problem: building Docker images where the chosen language requires compilation e.g. Java. One approach (that taken by OpenShift’s source-to-image) is to add the source to an image that contains all of the necessary pieces to build, package and run it. As shown in Jamie’s WASdev post, for Liberty that might mean putting Maven and a full JDK into the image. I’m not a fan of this approach: I prefer to end up with an image that only contains what is needed to run the application.

The following shows how this might look using Rocker:
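
A sketch of the shape of such a Rockerfile; the sample, paths and tags are all assumptions on my part:

    FROM maven:3-jdk-8
    # Cache the local Maven repository in a volume tied to this Rockerfile
    MOUNT /root/.m2
    COPY . /usr/src/app
    WORKDIR /usr/src/app
    RUN mvn -B clean package && cp target/*.war /tmp/app.war
    # Make the built WAR available to the later build steps
    EXPORT /tmp/app.war app.war

    FROM websphere-liberty:microProfile
    IMPORT app.war /config/dropins/app.war
    TAG microprofile-app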

Here we’re building one of the Micro Profile samples (which uses Maven) and then creating an image with the resulting WAR and the new WebSphere Liberty Micro Profile image. You’ll note that there are two FROM statements in the file. First we build on the maven image to create the WAR file. We then use the Rocker EXPORT command to make the WAR file available to subsequent build steps. The second part of the build then starts with the websphere-liberty:microProfile image, imports the WAR and tags it. Building, running and then calling the application is then as simple as follows:
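
With the tag from the sketch above (and a placeholder for the application’s context root):

    rocker build .
    docker run -d --name microprofile -p 9080:9080 microprofile-app
    curl http://localhost:9080/<context-root>/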

The other thing of note is that we’ve used the MOUNT command to create a volume for the Maven cache. The volume is tied to this Rockerfile so, if you run rocker build --no-cache . you’ll see that it rebuilds the images from scratch but Maven does not need to download the dependencies again.

The MOUNT is also a great way to overcome the long-standing issue of how to provide credentials required to access resources at build time without them ending up in the final image. Other nice features include the ATTACH command, which effectively allows you to set a breakpoint in your Rockerfile, and the fact that the Rockerfile is pre-processed as a golang template, allowing much more advanced substitutions than with Docker’s build arguments.