Archive for the ‘Technology’ Category

WebSphere Liberty and IBM Containers: Part 3

Saturday, June 18th, 2016

Scaling up

In the first two posts in this series we covered the basics of starting a WebSphere Liberty container on IBM Containers via the browser and then using the command line to deploy an application.

We’ve already seen some of the value-add that comes out of the box when running under IBM Containers. For example, at no point have we needed to be concerned with the underlying infrastructure on which the containers are running (beyond selecting a region). When we created an image it was scanned automatically for vulnerabilities. Each container was allocated its own private IP address accessible from any other container running in the same space – no need to set up and configure overlay networking here. We had control over whether we also wanted to assign a public IP and, if so, what ports should be exposed there. We also had easy access to metrics and standard out/error from the container.

So far we’ve only deployed a single container though. What happens when we hit the big time and need to scale up our application to meet demand? When we created our first container via the UI, you may remember that the Single option was selected at the top. Let’s go back and try out the Scalable alternative. From the catalog, navigate through Compute and Containers (remember that these instructions are based on the new Bluemix console). Select our existing demo image. Next, select the Scalable button at the top and give the container group a name. By default you’ll see that our group will contain two instances.

Rather than having a single IP associated with a container, this time we are asked to give a host name (on the mybluemix.net domain by default) for the group. Requests arriving at this host name will be load-balanced across the container instances in the group (reusing the gorouter from Cloud Foundry). One nice bonus of this approach is that it doesn't eat into our quota of public IPs! As the host name needs to be unique within the domain, I tend to include my initials as you'll see in the screenshot below. Select 9080 as the HTTP port and then click Create.

Container group creation
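
As an aside, the same kind of group can also be created with the cf ic plugin that features in part 2 of this series. A sketch from memory of the syntax at the time (the exact flags, host name and namespace here are illustrative rather than definitive, so check the plugin's help):

$ cf ic group create --name demo-group --min 2 --max 2 --desired 2 \
    -p 9080 registry.ng.bluemix.net/$(cf ic namespace get)/demo
$ cf ic route map --hostname demo-dc --domain mybluemix.net demo-group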

Once the containers have started, the dashboard should show that you have two instances running:

Running instances

Right-click on the route shown in the Group details section and open it in a new tab. This should take you to a Liberty welcome page and, if you add the myLibertyApp context root, you should be able to see the application again. If you hit refresh a few times, although you won’t be able to tell with this application, your requests will be load-balanced across the two instances. If you return to the dashboard and switch to the Monitoring and Logs tab you can switch between the output for the instances and should, for example, be able to see the spike in network usage on the two containers when you made the requests.

If you return to the Overview tab you will see that there are plus and minus symbols either side of the current number of instances. These can be used to manually scale the number of instances. Click the + icon, click Save, and watch the creation of a new container in the group.

Manual scaling is all very well but it would be better if the number of instances scaled automatically up and down as required. If you're deploying your containers in the London region then you'll notice an extra tab at the top of the dashboard labelled Auto-Scaling. It's only available in the London region at the moment because the service is still in beta (and so things may change a bit from what I'm describing here). Having selected this tab, click the plus icon labelled Create policy. Give the policy the name default and set the minimum and maximum instance values to 1 and 3. Add two CPU usage rules to scale the number of instances up and down as shown in the following diagram and then hit Create. Finally, select Attach to activate the policy for this scaling group.

Auto-Scaling

If you click the Auto-Scaling History tab you should see that a scaling action has taken place. We originally scaled up manually to 3 instances but, as the CPU usage is below our 60% limit, the number gets scaled down by one. If you wait another 5 minutes (the cool-down period we specified), then you'll see it get scaled down again to our minimum of 1.

Scaling history

And that concludes our tour of the scaling options in IBM Containers!

WebSphere Liberty on IBM Containers: Part 2

Monday, May 30th, 2016

Deploying an Application

In the first part of this series we looked at how to get started running a WebSphere Liberty image in IBM Containers using the Bluemix console.  The container was just running an empty Liberty server. In this post we’ll look at building and deploying an image that adds an application. I was originally intending to stick to using the browser for this post but I think it’s actually easier to do this from the command line. I’m going to assume that you already have Docker installed locally, either natively on Linux, via Docker Machine, or via the Docker for Mac/Windows beta.

First off we need an application to deploy and, just for novelty, I’m going to use the Liberty app accelerator to generate one. Select Servlet as the technology type and then, contrary as it may seem, select Deploy to Local and not Deploy to Bluemix. The latter option currently only supports deploying to the Instant Runtimes (Cloud Foundry) side of Bluemix. Finally, give your project a name and click Download Now.

Liberty App Accelerator

Unpack the zip file you downloaded and change to the top directory of the project. The app is built using maven. Perhaps you already have maven installed but this is a Docker blog post so we’re going to use the maven image from Docker Hub to build the app as follows:

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -w /usr/src/mymaven/myProject-application maven mvn clean package

This mounts the project on to a container running the maven image and runs the command mvn clean package in the myProject-application directory. (Note: if you were doing this repeatedly you'd probably want to mount a Maven cache into the container as well rather than downloading everything each time; see the sketch below.)
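
For example, something along these lines would reuse a local repository cache between builds (the $HOME/.m2 mount follows the maven image's documented pattern):

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -v $HOME/.m2:/root/.m2 \
    -w /usr/src/mymaven/myProject-application maven mvn clean package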

In the myProject-application/target directory you should now find that you have a file myArtifactId-application-1.0-SNAPSHOT.war. Copy this into a new empty directory so that, when we execute a Docker build, we don't end up uploading lots of other cruft to the Docker engine. Using your favourite editor, add the following Dockerfile to the same directory:

FROM websphere-liberty:webProfile7
COPY myArtifactId-application-1.0-SNAPSHOT.war /config/dropins/
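
If you want to sanity-check the image before going anywhere near Bluemix, the standard Docker commands work as you'd expect (the demo name is just for illustration):

$ docker build -t demo .
$ docker run -d -p 9080:9080 --name demo demo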

We have two choices now: we can either build a Docker image locally and then push that up to the IBM Containers registry, or we can build the image in IBM Containers itself. We'll go for the latter option here as it involves pushing fewer bytes over the network.
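
For completeness, the former would look something like the following once you're logged in to the registry (as described below), with <namespace> standing in for your registry namespace:

$ docker build -t registry.ng.bluemix.net/<namespace>/demo .
$ docker push registry.ng.bluemix.net/<namespace>/demo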

There's one niggle today: to access your IBM Containers registry, you need to log in first using the Cloud Foundry CLI and IBM Containers plugin. We're going to play the containerisation trick again here. Run the following command to build an image with the CLI and plugin:

$ docker build -t cf https://git.io/vr7pl

Ideally I'd run this image as a stateless container but getting the right state written out to the host in the .cf, .ice and .docker directories is a bit finicky. Instead, we're going to mount our current directory on to an instance of the image and perform the build inside:

$ docker run -it --rm -v $(pwd):/root/build cf
$ cd /root/build
$ cf login -a api.ng.bluemix.net

$ cf ic login
$ cf ic build -t demo .

Now we're ready to run an instance of our newly built image. At this point you could switch back to the UI, but let's keep going with the command line. We'll need to refer to the built image using the full repository name, including your namespace:

$ ns=$(cf ic namespace get)
$ cf ic run --name demo -P registry.ng.bluemix.net/$ns/demo

By default, containers are only assigned a private IP address. In order to access our new container we'll need to request and assign a public IP. The cf ic ip command unfortunately returns a human-friendly message, not a computer-friendly one, hence the need for the grep/sed to retrieve the actual IP:

$ ip=$(cf ic ip request | grep -o '".*"' | sed 's/"//g')
$ cf ic ip bind $ip demo

Lastly, we can list the port and IP to point our browser at:

$ cf ic port demo 9080

Adding the context root myLibertyApp should give us the welcome page for the starter app.

Starter Welcome Page

Congratulations, you’ve successfully deployed an application to IBM Containers! In the next post in this series we’ll look at some of the additional features that the service provides, such as scaling groups and logging.

WebSphere Liberty on IBM Containers: Part 1

Monday, May 16th, 2016

Getting Started

I wanted to write a few blog posts as I explored deploying our WebSphere Liberty Docker image on to third-party cloud platforms, but this didn't really feel fair given that I haven't written anything here about the options for using IBM's own cloud platform! So, to kick things off, here's a series of posts on using Liberty with the IBM Containers service in Bluemix. In this post I'm going to focus on using the web UI to get a simple Liberty container up and running. In subsequent posts I'll cover topics such as deploying an application, scaling applications and using the command line.

I’m also going to use the new console for the instructions/screenshots in this post. If you don’t already have an account, use the sign-up button at the top right of the console to get a free 30-day trial.

You can use WebSphere in a number of different ways in Bluemix: web applications deployed as Cloud Foundry applications will use the Liberty runtime, and it's also possible to stand up WAS ND topologies for both Liberty and the traditional application server in VMs. Here we're going to focus on containers.

Once logged in, select the Compute icon and then the Containers tab. Then click the + icon top-right to create a new container. You should find that the private registry associated with your account is already populated with a number of IBM-provided images as shown in the following screen shot. Select the ibmliberty image. This image adds a few layers on top of the websphere-liberty image from Docker Hub to make it run more cleanly in IBM Containers (see the Dockerfile in the docs for full details).

Select container image

You will now see a drop-down on the left which allows you to select the image tag to use. By default the latest tag will give you the full Java EE 7 profile image. Select the webProfile7 tag to get a lighter-weight image. Beneath the tag you'll see an area entitled Vulnerability Assessment which, if we've been doing our job properly, should say Safe to Deploy. Assessments are updated regularly as new CVEs are announced and we therefore regularly refresh the image to make sure that it stays clean.

Vulnerability Advisor

At the top of the page, you'll see that the Single option is selected to create just a single container instance; we'll return later to see how the Scalable option works. Give the container a name. Leave the container size at Micro but select the option to Request and Bind Public IP. All containers get an IP address on a private network and are accessible by other containers in the same space, but to be accessible externally they must be assigned a public IP.

Create a container

Now click the Create button at the bottom of the page. After a minute or two the status of the container should change to Running.

Container running

Switch to the Monitoring and Logs tab. The container level metrics around CPU/memory/network usually take some time to appear but if you select the Logging tab you should be able to see the messages output by Liberty as the container started. Tip: you can use the ADVANCED VIEW button on these tabs to open up Grafana and Kibana views on to these metrics and logs respectively if you want to do more detailed analysis.

Container logs

Finally, switch back to the Overview tab. If you scroll down to the Container details section, you’ll find a list of ports. If you click on 9080, this will open a new browser tab using that port and the public IP address assigned to the container. With the default image, that should show the Liberty welcome page.

Liberty welcome page

Congratulations – you have run WebSphere Liberty on IBM Containers! The astute among you will, however, have noticed that it’s not running any application. That’ll be the subject of the next blog post…

ES2015 in Production

Thursday, April 21st, 2016

Bård Hovde gave tonight’s Developer South Coast presentation on the subject of “ES2015 in Production” (or “ES6 in Production” if you must). You can find the slides here with the source for the presentation over on Bård’s GitHub account. He did a great job of making the subject matter entertaining. Beyond being able to say goodbye to all of that boilerplate, my main takeaway was the use of Babel for transpiling ES2015 into ES5, so no excuses about waiting for browser compatibility! The Babel site also has a nice overview of ECMAScript 2015 features.

docker logs and stderr

Tuesday, April 12th, 2016

As part of a script that I was writing today I was attempting to count the number of times that a containerised Liberty server had started. I started with a command along the following lines (the container name liberty is illustrative):
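
$ docker logs liberty | grep CWWKF0011I | wc -l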

(where CWWKF0011I is the code for Liberty's "ready to run a smarter planet" message) but it was giving me an answer one greater than I anticipated. Stripping off the line count at the end quickly showed why: in addition to the messages that I was expecting, grep also appeared to be returning an error message from the logs. It took me a little while longer to get my head around what was going on. grep wasn't passing the error through at all. Indeed, the error wasn't even getting as far as grep. Instead, docker logs was trying to be helpful and recreating the stdout and stderr streams from the container: stdout was getting piped to grep, and stderr, where Liberty was outputting its error message, was just going straight to the console. This was also going to cause me a headache elsewhere when I was actually trying to grep for errors.

When I first learnt bash, 2>&1 was the way to redirect stderr to stdout but, just to prove that you can teach an old dog new tricks, here's the working version of my original command using the Bash 4 syntactic sugar |& to merge stderr into a pipe:
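
$ docker logs liberty |& grep CWWKF0011I | wc -l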

As a further aside (which certainly added to my confusion when trying to debug this), if you allocate a TTY to a container via the -t flag on the run command, stderr will be merged into stdout before it hits the logs so you won't see this behaviour at all!

Book Review: Docker Containers: Build and Deploy with Kubernetes, Flannel, Cockpit, and Atomic

Monday, April 11th, 2016

I’m slowly working my way through the list of Docker publications that I stacked my tablet with when IBM restarted its subscription to Safari Books Online. One of these was Docker Containers: Build and Deploy with Kubernetes, Flannel, Cockpit, and Atomic by Christopher Negus. The last two projects in the title are a clue to the underlying theme of the book. Cockpit and Atomic being Red Hat projects, this is really a guide to doing containers the Red Hat way. This I was expecting – they do employ the author after all. What really disappointed me was that the four technologies cited in the title occupied so little of the book’s content. Of the 18 chapters, there was one on Super Privileged Containers (an Atomic concept), one on Cockpit, two on Kubernetes, and one paragraph on Flannel. Hardly comprehensive coverage!

The first part of the book covers the basic concepts, setting up an OS and a private registry. This reminded me of one key fact that I’d forgotten: that Red Hat ships its own Docker distribution. One of the Red Hat specific features is the ability to specify multiple default registries (with Red Hat placing their own registry ahead of Docker Hub in the default search order). This is at odds with Docker’s view that the image name (including registry host) should be a unique identifier. Personally, I would side with Red Hat on this one. I suspect many customers will be using their own private registries and would prefer to be able to specify ‘myimage’ and have it resolve against the correct image in the local registry for the environment.

The bulk of the content is in the second part that covers building, running and working with individual containers. There were a few errors that crept into this section. For example, the author suggests that setting the environment variable HOST on a container somehow magically mounts the host filesystem (it's actually used to tell Atomic where the host filesystem is mounted). He also states incorrectly more than once that removing files introduced in one layer in a subsequent layer will reduce the size of the image. In general though, it provides a good coverage of working with containers. I picked up a few interesting command options that I wasn't aware of: for example, '-a' on a 'pull' to retrieve all of the images for a repository, the fact that you can use 'inspect' on images as well as containers, and a couple of commands that had previously escaped me completely: 'rename' and 'wait'. There was also some useful information on the use of Docker with SELinux.
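
For the record, those look as follows (the image and container names are illustrative):

$ docker pull -a busybox          # pull every tagged image in the repository
$ docker inspect busybox:latest   # inspect works on images as well as containers
$ docker rename olddemo newdemo   # rename an existing container
$ docker wait newdemo             # block until the container exits, then print its exit code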

The third part covers Super Privileged Containers in Atomic (the way in which Atomic extends the basic capability of the OS via containerised tools) and management of Docker hosts and containers through the Cockpit browser-based administration tool. The fourth part then covers the basic concepts of Kubernetes and the steps for setting up an 'all-in-one' environment and a cluster. These steps seem destined to be out of date before the ink is dry, and the space would have been better spent covering the concepts in more depth and talking about usage scenarios.

The final part seems a little out of place. One chapter covers best practices for developing containers. The cynic in me suspects this may have just been an opportunity to introduce some OpenShift content. It certainly glosses over the entirety of Machine, Compose and Swarm in just a single section. Then there is a closing chapter looking at some example Dockerfiles.

All-in-all, the book offers a good introduction to the topic of Docker, particularly if you are looking to deploy on Fedora, RHEL or CentOS. Look elsewhere though if you really want to get to grips with Kubernetes.

Docker for Mac Beta

Sunday, April 10th, 2016

I was excited to see Docker announce a beta for ‘native’ support for Docker on Windows and Mac where ‘native’ means that Docker appears as a native application utilising built-in virtualisation technology (Hyper-V on Windows and a project called xhyve on Mac) rather than requiring Virtual Box. Sadly this isn’t much use to me at work where I run the corporate standard Windows 7 on my laptop and Linux on my desktop. (The Register had an article indicating that, although Windows 7 is declining in the enterprise, its market share is still 45%+ so I hope Docker don’t do anything rash like ceasing to develop Docker Toolbox.) I do have a Mac at home though so I signed up for an invite.

The install went very smoothly although the promised migration of images and containers from my existing default Docker Toolbox VM failed to happen. My best guess is that this was because the VM was back-level compared with the version of the client that the native app had installed. Docker Machine and the native app sit happily alongside one another, although obviously I then needed to upgrade the VM to match the newer client version.

Needless to say, the first thing I tried to run was the websphere-liberty image. This started flawlessly and, having mapped port 9080 to 9080, I was then able to access the Liberty welcome page at docker.local:9080. So far so good.

WebSphere Liberty under Docker for Mac Beta

Having been out on vacation at the time, I went back and listened to the online meetup covering the beta. Given that we have websphere-liberty images for PPC and z/Linux, I was particularly intrigued to see that it promised the ability to run images for multiple architectures. The example given in the meetup worked like a charm.
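
It was along the following lines, with qemu and binfmt_misc providing the emulation (I no longer have the exact command to hand, so the armhf/ubuntu image here is illustrative):

$ docker run armhf/ubuntu uname -a
# reports an armv7l architecture despite running on an x86-64 Mac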

Unfortunately trying to run anything against other images such as ppc64le/ubuntu resulted in a ‘command not found’ from Docker so I need to do some more digging to see what’s going on here.

Whilst browsing the beta forums, a common complaint was the speed of the file system mounts, which has also been a problem with the Virtual Box approach. I decided to test this out by trying to use the maven image to compile our DayTrader sample. To keep things 'fair', my maven cache was pre-populated and, when running the image, I mounted the cache.

Natively on the host a 'mvn compile' takes around 12 seconds. With Docker running in a Docker Machine VM, using a command along the lines of the following (the mounts shown are illustrative), the time was surprisingly close, typically of the order of 14 seconds.
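
$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -v $HOME/.m2:/root/.m2 \
    -w /usr/src/mymaven maven mvn compile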

Running the same command against the beta unfortunately took over 30 seconds. Whether that’s down to the file system driver I can’t say. It’s certainly an appreciable difference but, hey, this is a beta so there’s still plenty of hope for the future!

One tip that I picked up on the way: 'docker-machine env' has an '--unset' option which means that, if you want to switch back to the native Docker install after using Machine, then you can use the following command:
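
$ eval $(docker-machine env --unset)   # clears DOCKER_HOST and friends from the shell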

Docker London December

Friday, December 4th, 2015

Last night I headed up to London for the December meetup of Docker London. The evening didn’t get off to a great start as I managed to cycle over a screw on the way to the station. Despite this, and the subsequent efforts of the Jubilee Line, I did just make the start in time.

The evening kicked off with Chad Metcalf from Docker demoing Tutum. It was just a slight variant of one of the demos from DockerCon so nothing really new for me here, although he did talk a little about the extensions to the Compose syntax that Tutum uses, the HIGH_AVAILABILITY strategy being something that's obviously missing from Compose/Swarm today.

Next up was Alois Mayr, a Developer Advocate at Ruxit, who did a nice job of not explicitly pushing his company's offering but instead talked more generally about some issues experienced by a Brazilian customer of theirs that has a large deployment of Docker running on Mesos. The underlying theme was undoubtedly that, in a large microservices-based architecture, you need to have a good understanding of the relationships between your services and their dependencies in order to be able to track problems back to the root cause.

Last up was an entertaining pitch by Chris Urwin, an engineering lead at HSCIC (part of the NHS) and consultant Ed Marshall. They talked about a project to move from a Microsoft VMM (and Excel spreadsheet) based setup to one using Docker and Rancher for container management. They were undoubtedly pleased with the outcomes in terms of developer productivity and the manageability of the deployed environment, not to mention reduction in cost and complexity. Although the system is not live in production yet, it is live in an environment that they share with partners that is subject to SLAs etc. Particularly striking for me was the reduction in the amount of disk space and memory that the new solution entailed.