Archive for the ‘WebSphere’ Category

WebSphere Liberty admin center in Docker

Tuesday, September 27th, 2016

The content of the WebSphere Liberty Docker images currently matches the runtime install zips that we make available for download from the WASdev site. One consequence of this is that none of them contains the admin center. Adding it is very simple though, as the following Dockerfile shows:
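A minimal sketch of such a Dockerfile (the file name admin-center.xml and the password are placeholders of my choosing):

FROM websphere-liberty
COPY admin-center.xml /config/configDropins/overrides/
# Install the features referenced by the server configuration
RUN installUtility install --acceptLicense defaultServer

where admin-center.xml contains:

<server>
    <featureManager>
        <feature>adminCenter-1.0</feature>
    </featureManager>
    <quickStartSecurity userName="wsadmin" userPassword="wsadminpwd"/>
    <remoteFileAccess>
        <writeDir>${server.config.dir}</writeDir>
    </remoteFileAccess>
</server>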

This Dockerfile adds a snippet of server XML under the configDropins directory that adds the adminCenter-1.0 feature. It then uses installUtility to install that feature. The admin center requires a user registry to be defined for authentication and here we use the quickStartSecurity stanza to define a wsadmin user. We’ll come back to remoteFileAccess in a moment.

We can then build and run this image as follows:
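For example, assuming the Dockerfile above is in the current directory:

$ docker build -t admin .
$ docker run -d -P --name admin admin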

Once the server has started you should be able to access /adminCenter on the HTTPS port returned by docker port admin 9443, using the credentials defined in the Dockerfile.

Liberty Admin Center

If you then click on the Explore icon in the toolbox you’ll find information about any applications that are (or are not) deployed to the server, the server configuration, and server-level metrics. The last of these may be of particular interest when trying to determine suitable resource constraints for a container.

Liberty Admin Center Monitoring

In a single-server environment, it's not currently possible to deploy an application via the admin center. For a simple application you could just place it in the dropins directory but, for argument's sake, let's say that we need to provide some extra configuration. I'm going to assume that you have ferret-1.2.war in the current directory. We then copy the file into the container:
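For example, with docker cp (copying into /config, the server configuration directory in the official image, against which relative application locations are resolved):

$ docker cp ferret-1.2.war admin:/config/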

In the admin center, we then navigate to Configure > server.xml, click Add child under the Server element, select Application and click Add. Fill in the location as ferret-1.2.war and the context root as ferret, then click Save. It is the remoteFileAccess stanza that we added to the server configuration that allows us to edit the configuration on the fly.

Add Application

If you return to the applications tab you should see the application deployed and you can now access the ferret application at /ferret!

Ferret application installed

Obviously modifying the server configuration in a running container is at odds with the idea of an immutable server, but it may still be of use at development time or for making non-functional updates, e.g. changing the trace specification for a server.

Docker swarm mode on IBM SoftLayer

Monday, September 26th, 2016

Having written a few posts on using the IBM Containers service in Bluemix I thought I’d cover another option for running Docker on IBM Cloud: using Docker on VMs provisioned from IBM’s SoftLayer IaaS. This is particularly easy with Docker Machine as there is a SoftLayer driver. As the docs state, there are three required values which I prefer to set as the environment variables SOFTLAYER_USER, SOFTLAYER_API_KEY and SOFTLAYER_DOMAIN. The instructions to retrieve/generate an API key for your SoftLayer account are here. Don’t worry if you don’t have a domain name free – it is only used as a suffix on the machine names when they appear in the SoftLayer portal so any valid value will do. With those variables exported, spinning up three VMs with Docker is as simple as:
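For example (the machine names are arbitrary):

$ for node in sl1 sl2 sl3; do docker-machine create --driver softlayer $node; done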

Provisioning the VMs and installing the latest Docker engine may take some time. Thankfully, initialising swarm mode across the three VMs with a single manager and two worker nodes can then be achieved very quickly:
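For example, with the machines created above and sl1 as the manager:

$ docker $(docker-machine config sl1) swarm init --advertise-addr $(docker-machine ip sl1)
$ token=$(docker $(docker-machine config sl1) swarm join-token -q worker)
$ for node in sl2 sl3; do docker $(docker-machine config $node) swarm join --token $token $(docker-machine ip sl1):2377; done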

Now we can target our local client at the swarm and create a service (running the WebSphere Liberty ferret application):
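For example (substitute an image containing the ferret application; the one I used also defined a HEALTHCHECK, of which more below):

$ eval $(docker-machine env sl1)
$ docker service create --name ferret -p 9080:9080 <your-ferret-image>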

Once service ps reports the task as running we can, thanks to the routing mesh, call the application via any of the nodes:
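For example:

$ for node in sl1 sl2 sl3; do curl http://$(docker-machine ip $node):9080/ferret/; done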

Scale up the number of instances and wait for all three to report as running:
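For example:

$ docker service scale ferret=3
$ watch docker service ps ferret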

With the default spread strategy, you should end up with a container on each node:

Note that the image has a healthcheck defined which uses the default interval of 30 seconds, so expect it to take some multiple of 30 seconds for each task to start. Liam's WASdev article talks more about the healthcheck and also demonstrates how to roll out an update. Here I'm going to look at the reconciliation behaviour. Let's stop one of the worker nodes and then watch the task state again:
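For example, stopping the VM for one of the workers:

$ docker-machine stop sl3
$ watch docker service ps ferret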

You will see the swarm detect that the task is no longer running on the node that has been stopped and reschedule it on one of the two remaining nodes:

(You'll see that there is a niggle here in the reporting of the state of the task that has been shut down.)

This article only scratches the surface of the capabilities of both swarm mode and SoftLayer. For the latter, I’d particularly recommend looking at the bare metal capabilities where you can benefit from the raw performance of containers without the overhead of a hypervisor.

Building application images with WebSphere traditional

Sunday, September 25th, 2016

For a while now I've had a bullet point on a chart that blithely stated that you could add an application on top of our WebSphere Application Server traditional Docker image using wsadmin and, in particular, with the connection type set to NONE, i.e. without the server running. Needless to say, when I actually tried to do this shortly before a customer demo it didn't work! Thankfully, with moral support from a colleague and the excellent command assistance in the admin console, it turns out that my initial assertion was correct and the failure was just down to my rusty scripting skills. Here's how…

First, the Dockerfile that builds an image containing our ferret sample application, taking the WAR file from Maven Central.
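A sketch follows; the base image tag, the profile paths, and the Maven Central coordinates for the ferret sample are assumptions on my part:

FROM ibmcom/websphere-traditional:profile
# Fetch the ferret sample WAR from Maven Central (coordinates assumed)
ADD https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.2/ferret-1.2.war /tmp/ferret-1.2.war
# Install the application using wsadmin in local mode, i.e. with the
# connection type set to NONE so no running server is required
RUN /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh -lang jython -conntype NONE \
      -c "AdminApp.install('/tmp/ferret-1.2.war', ['-appname', 'ferret', '-contextroot', '/ferret', '-usedefaultbindings'])" \
      -c "AdminConfig.save()"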

The following script then builds an image from the Dockerfile in the gist above, runs it, waits for the server to start, and then retrieves the ferret webpage.
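Something along these lines, assuming the image streams the server logs (as described in Containerizing background processes below) and the default HTTP port of 9080:

#!/bin/bash
# Build the image and run a container from it
docker build -t was-ferret .
docker run -d -p 9080:9080 --name was-ferret was-ferret
# Wait for the server to report that it has started
until docker logs was-ferret 2>&1 | grep -q 'open for e-business'; do
  sleep 10
done
# Retrieve the ferret page
curl http://localhost:9080/ferret/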

Using Rocker to build Liberty images from Java source

Saturday, September 24th, 2016

Looking for solutions to my archive extraction problem brought me to look at the Rocker project from Grammarly. I'd looked at it before but not in any great detail. In a nutshell, it aims to extend the Dockerfile syntax to overcome some of its current limitations. Although not an option for me (because I can't depend on anything beyond the standard Docker toolset), the Rocker solution to my earlier problem would be as simple as follows:
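Something like the following, reusing the foo:bar user and group from the original problem (the base image and paths are illustrative):

FROM ubuntu:16.04
# Create the non-root user that should own the unpacked files
RUN groupadd -r bar && useradd -r -m -g bar foo
USER foo
# Mount the build context rather than copying the archive in
MOUNT .:/input
# tar unpacks as the current (non-root) user
RUN tar -xzf /input/example.tar.gz -C /home/foo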

The extra syntax here is the MOUNT command which follows the same syntax as the --volume flag on docker run. As the Grammarly team point out, there are trade-offs here which help to explain why the Docker maintainers are reluctant to add volume mounts to docker build. Here, changes to the contents of the mounted directories do not result in the cache being busted.

Anyway, this post is meant to be about a different problem: building Docker images where the chosen language requires compilation, e.g. Java. One approach (that taken by OpenShift's source-to-image) is to add the source to an image that contains all of the necessary pieces to build, package and run it. As shown in Jamie's WASdev post, for Liberty that might mean putting Maven and a full JDK into the image. I'm not a fan of this approach: I prefer to end up with an image that only contains what is needed to run the application.

The following shows how this might look using Rocker:
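A sketch of the Rockerfile (the sample directory, WAR name and tag are illustrative):

FROM maven:3-jdk-8
# A volume, tied to this Rockerfile, to cache the Maven repository
MOUNT /root/.m2
ADD . /usr/src/sample
WORKDIR /usr/src/sample
RUN mvn clean package
# Make the WAR available to subsequent build steps
EXPORT target/sample.war sample.war

FROM websphere-liberty:microProfile
IMPORT sample.war /config/dropins/
TAG mp-sample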

Here we're building one of the MicroProfile samples (which uses Maven) and then creating an image with the resulting WAR and the new WebSphere Liberty MicroProfile image. You'll note that there are two FROM statements in the file. First we build on the maven image to create the WAR file. We then use the Rocker EXPORT command to make the WAR file available to subsequent build steps. The second part of the build then starts with the websphere-liberty:microProfile image, imports the WAR and tags it. Building, running and then calling the application is as simple as follows:
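For example (the context root depends on the sample chosen):

$ rocker build .
$ docker run -d -p 9080:9080 --name sample mp-sample
$ curl http://localhost:9080/<context-root>/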

The other thing of note is that we’ve used the MOUNT command to create a volume for the Maven cache. The volume is tied to this Rockerfile so, if you run rocker build --no-cache . you’ll see that it rebuilds the images from scratch but Maven does not need to download the dependencies again.

The MOUNT is also a great way to overcome the long-running issue of how to provide credentials required to access resources at build time without them ending up in the final image. Other nice features include the ATTACH command which effectively allows you to set a breakpoint in your Rockerfile, and the fact that the Rockerfile is pre-processed as a golang template allowing much more advanced substitutions than with Docker’s build arguments.

Unpacking an archive as a non-root user during docker build

Friday, September 23rd, 2016

The way we build our WebSphere traditional Docker images is as a two-step process. First we install the products using Installation Manager and generate a tar file containing the installed product. Then we suck that tar file into a clean image so that the resulting image does not contain all the cruft left lying around from the install process. (Not including Installation Manager in the final image also helps to reinforce that these images are intended to be immutable.)

The ADD Dockerfile command is very convenient for copying in a tar file from the context and unpacking it, all in one atomic operation. Unfortunately the ADD command ignores the current user and always unpacks as root. (Aside: Docker recently re-opened the proposal created by IBMer Megan Kostick to address this problem.) You could run a chown following the ADD but this results in all the files being copied into a new layer (not cool when the contents of your tar file weigh in at 1.5GB!). Our initial starting point was to make sure that all the files already had the right ownership when they were added to the tar file. This involved creating the same user/group in the initial image and relying on getting the same uid/gid in the eventual image, something I wasn't entirely happy with.

A related problem that we had run into elsewhere was that the copy-and-unpack magic of ADD doesn't extend to zip files, a format in which many of our install binaries are found. Where those binaries are already hosted somewhere, it's simple enough to use wget or curl to pull the files, unpack, and perform any necessary cleanup as part of a RUN command. The obvious solution for my local tar or zip file was to host the file somehow. I decided to spin up a python container to serve up the files as follows:
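For example (the container name is arbitrary; python 2's SimpleHTTPServer listens on port 8000 by default):

$ docker run -d --name file-server -v $(pwd):/data -w /data \
    python:2 python -m SimpleHTTPServer
$ url=http://$(docker inspect --format '{{.NetworkSettings.IPAddress}}' file-server):8000/example.tar.gz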

That URL can then be consumed in a subsequent build. For example, if I had example.tar.gz in the directory on the host, I could unpack as the user/group foo:bar in my image with the following Dockerfile:
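A sketch (the base image and target directory are illustrative):

FROM ubuntu:16.04
# curl isn't in the base image
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Create the target user and group
RUN groupadd -r bar && useradd -r -m -g bar foo
ARG URL
USER foo
# tar runs as foo so the unpacked files are owned by foo:bar
RUN curl -s $URL | tar -xzf - -C /home/foo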

To build the image, we then just need to pass in the URL as a build argument and, when we’re done, we can clean up the python container:
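For example, continuing with the names used above:

$ docker build --build-arg URL=$url -t example .
$ docker rm -f file-server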

The result of all of this is that we get the default non-root behaviour of tar, which is to unpack as the current user.

Containerizing background processes

Thursday, September 22nd, 2016

The lifetime of a Docker container is tied to the lifetime of the PID 1 process executed when the container was started. WebSphere Liberty has a convenient server run command to run the application server in the foreground. Sadly, that's not the case with traditional WebSphere's startServer.sh script, which simply starts the server process in the background and then exits. To ensure that the container didn't exit as well, we started out with a script something along the following lines:
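Something like this (the paths assume the default profile and server names):

#!/bin/bash
WAS_HOME=/opt/IBM/WebSphere/AppServer
# Start the server in the background
$WAS_HOME/profiles/AppSrv01/bin/startServer.sh server1
# The PID file is not created immediately
sleep 10
# Keep the container alive whilst the server is running
while [ -f $WAS_HOME/profiles/AppSrv01/logs/server1/server1.pid ]; do
  sleep 1
done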

where the file being tested for is the server's PID file, created by the server process (but not immediately, hence the initial sleep). That successfully kept the container alive but failed to allow it to shut down cleanly! A docker stop, for example, would wait for the default timeout period and then kill the process. Not great for any in-flight transactions! The solution was simple enough: add a trap to catch any interrupt and issue the command to stop the server:
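For example, adding the following line after the server has been started:

# Stop the server cleanly on SIGINT/SIGTERM (e.g. docker stop)
trap "$WAS_HOME/profiles/AppSrv01/bin/stopServer.sh server1" SIGINT SIGTERM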

All was well with the world until we then enabled server security by default. Unfortunately, with security enabled, the stopServer.sh script requires credentials to be provided and there is no way to get those credentials to the script. The solution was to switch to sending the interrupt signal to the server process. I also disliked that initial sleep so I decided to retrieve the process ID via ps (something that's safer in a container given the limited process tree) and then wait whilst the process's directory exists in /proc. The resulting code looked something along the following lines:
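A sketch of the result (paths as before):

#!/bin/bash
WAS_HOME=/opt/IBM/WebSphere/AppServer

# A function so that $PID is only evaluated when the trap fires
stop_server() {
  kill -s INT $PID
}

$WAS_HOME/profiles/AppSrv01/bin/startServer.sh server1
# Safe enough in a container where ours is the only java process
PID=$(ps -C java -o pid= | tr -d ' ')
trap stop_server SIGINT SIGTERM
# Wait whilst the process's directory exists in /proc
while [ -d /proc/$PID ]; do
  sleep 1
done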

Note the use of a function so that $PID is not evaluated at the point the trap is set up.
Another disadvantage with having the server process in the background is the lack of output in the container logs. I decided to rectify that whilst I was at it by adding calls to tail the server log files:
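For example, adding the following once the PID has been captured:

LOG_DIR=$WAS_HOME/profiles/AppSrv01/logs/server1
tail -F -n +0 --pid=$PID $LOG_DIR/SystemOut.log &
tail -F -n +0 --pid=$PID $LOG_DIR/SystemErr.log >&2 &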

The significance of the tail parameters is as follows. The capital F indicates that the attempts to follow the log file should be retried. This ensures that we continue to follow the latest file when the logs roll over. The --pid parameter ensures that the background tail processes exit along with the server process. The -n +0 indicates that the output should start at the beginning of the file so that entries output whilst the script is running are not lost. As previously noted, Docker preserves stderr across the remote API so we make sure to direct the output from SystemErr.log there.

WebSphere Liberty and IBM Containers: Part 3

Saturday, June 18th, 2016

Scaling up

In the first two posts in this series we covered the basics of starting WebSphere Liberty on IBM Containers via the browser and then using the command line to deploy an application.

We’ve already seen some of the value-add that comes out of the box when running under IBM Containers. For example, at no point have we needed to be concerned with the underlying infrastructure on which the containers are running (beyond selecting a region). When we created an image it was scanned automatically for vulnerabilities. Each container was allocated its own private IP address accessible from any other container running in the same space – no need to set up and configure overlay networking here. We had control over whether we also wanted to assign a public IP and, if so, what ports should be exposed there. We also had easy access to metrics and standard out/error from the container.

So far we’ve only deployed a single container though. What happens when we hit the big time and need to scale up our application to meet demand? When we created our first container via the UI, you may remember that the Single option was selected at the top. Let’s go back and try out the Scalable alternative. From the catalog, navigate through Compute and Containers (remember that these instructions are based on the new Bluemix console). Select our existing demo image. Next, select the Scalable button at the top and give the container group a name. By default you’ll see that our group will contain two instances.

Rather than having a single IP associated with a container, this time we are asked to give a host name (on the default Bluemix domain) for the group. Requests arriving at this host name will be load-balanced across the container instances in the group (reusing the gorouter from Cloud Foundry). One nice bonus of this approach is that it doesn't eat into our quota of public IPs! As the host name needs to be unique within the domain, I tend to include my initials, as you'll see in the screenshot below. Select 9080 as the HTTP port and then click Create.

Container group creation

Once the containers have started, the dashboard should show that you have two instances running:

Running instances

Right-click on the route shown in the Group details section and open it in a new tab. This should take you to a Liberty welcome page and, if you add the myLibertyApp context root, you should be able to see the application again. If you hit refresh a few times, although you won’t be able to tell with this application, your requests will be load-balanced across the two instances. If you return to the dashboard and switch to the Monitoring and Logs tab you can switch between the output for the instances and should, for example, be able to see the spike in network usage on the two containers when you made the requests.

If you return to the Overview tab you will see that there are plus and minus symbols either side of the current number of instances. These can be used to manually scale the number of instances. Click the + icon, click Save, and watch the creation of a new container in the group.

Manual scaling is all very well but it would be better if the number of instances scaled automatically up and down as required. If you're deploying your containers in the London region then you'll notice an extra tab at the top of the dashboard labelled Auto-Scaling. It's only available in the London region at the moment because the service is still in beta (and so things may change a bit from what I'm describing here). Having selected this tab, click the plus icon labelled Create policy. Give the policy the name default and set the minimum and maximum instance values to 1 and 3. Add two CPU usage rules to scale up and down the number of instances as shown in the following diagram and then hit Create. Finally, select Attach to activate the policy for this scaling group.


If you click the Auto-Scaling History tab you should see that a scaling action has taken place. We originally scaled up manually to 3 instances but, as the CPU usage is below our 60% limit, the number gets scaled down by one. If you wait another 5 minutes (the cool down period we specified), then you’ll see it get scaled down again to our minimum of 1.

Scaling history

And that concludes our tour of the scaling options in IBM Containers!

WebSphere Liberty on IBM Containers: Part 2

Monday, May 30th, 2016

Deploying an Application

In the first part of this series we looked at how to get started running a WebSphere Liberty image in IBM Containers using the Bluemix console. The container was just running an empty Liberty server. In this post we'll look at building and deploying an image that adds an application. I was originally intending to stick to using the browser for this post but I think it's actually easier to do this from the command line. I'm going to assume that you already have Docker installed locally, either natively on Linux, via Docker Machine, or via the Docker for Mac/Windows beta.

First off we need an application to deploy and, just for novelty, I'm going to use the Liberty app accelerator to generate one. Select Servlet as the technology type and then, contrary though it may seem, select Deploy to Local and not Deploy to Bluemix. The latter option currently only supports deploying to the Instant Runtimes (Cloud Foundry) side of Bluemix. Finally, give your project a name and click Download Now.

Liberty App Accelerator

Unpack the zip file you downloaded and change to the top directory of the project. The app is built using Maven. Perhaps you already have Maven installed but this is a Docker blog post, so we're going to use the maven image from Docker Hub to build the app as follows:

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -w /usr/src/mymaven/myProject-application maven mvn clean package

This mounts the project onto a container running the maven image and runs the command mvn clean package in the myProject-application directory. (Note: if you were doing this repeatedly you'd probably want to mount a Maven cache into the container as well so as not to download everything each time, as shown below.)
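For example:

$ docker run --rm -v $(pwd):/usr/src/mymaven -v $HOME/.m2:/root/.m2 \
    -w /usr/src/mymaven/myProject-application maven mvn clean package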

In the myProject-application/target directory you should now find that you have a file myArtifactId-application-1.0-SNAPSHOT.war. Copy this into a new, empty directory so that when we execute a Docker build we don't end up uploading lots of other cruft to the Docker engine. Using your favourite editor, add the following Dockerfile to the same directory:

FROM websphere-liberty:webProfile7
COPY myArtifactId-application-1.0-SNAPSHOT.war /config/dropins/

We have two choices now: we can either build a Docker image locally and then push that up to the IBM Containers registry, or we can build the image in IBM Containers. We'll go for the latter option here as it involves pushing fewer bytes over the network.

There's one niggle today: to access your IBM Containers registry, you need to log in first using the Cloud Foundry CLI and IBM Containers plugin. We're going to play the containerisation trick again here. Run the following commands to build an image with the CLI and plugin:
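A sketch of a suitable Dockerfile (the download URLs are those documented at the time of writing and may since have changed):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Install the Cloud Foundry CLI
RUN curl -sL "https://cli.run.pivotal.io/stable?release=linux64-binary" \
      | tar -xz -C /usr/local/bin cf
# Install the IBM Containers plugin
RUN curl -sL -o /tmp/ibm-containers \
      https://static-ice.ng.bluemix.net/ibm-containers-linux_x64 \
 && cf install-plugin -f /tmp/ibm-containers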

$ docker build -t cf .

Ideally I'd run this image as a stateless container but getting the right state written out to the host in the .cf, .ice and .docker directories is a bit finicky. Instead, we're going to mount our current directory onto an instance of the image and perform the build inside:

$ docker run -it --rm -v $(pwd):/root/build cf
$ cd /root/build
$ cf login -a <bluemix-api-endpoint>
$ cf ic login
$ cf ic build -t demo .

(Substitute the API endpoint for your Bluemix region.)

Now we're ready to run an instance of your newly built image. At this point you could switch back to the UI but let's keep going with the command line. We'll need to refer to the built image using the full repository name, including your namespace:

$ ns=$(cf ic namespace get)
$ cf ic run --name demo -P $ns/demo

By default, containers are only assigned a private IP address. In order to access our new container we'll need to request and bind a public IP. The cf ic ip command unfortunately returns a human-friendly message, not a computer-friendly one, hence the need for the grep/sed to retrieve the actual IP:

$ ip=$(cf ic ip request | grep -o '".*"' | sed 's/"//g')
$ cf ic ip bind $ip demo

Lastly, we can list the port and IP to point our browser at:

$ cf ic port demo 9080

Adding the context root myLibertyApp should give us the welcome page for the starter app.

Starter Welcome Page

Congratulations, you’ve successfully deployed an application to IBM Containers! In the next post in this series we’ll look at some of the additional features that the service provides, such as scaling groups and logging.