Archive for the ‘Technology’ Category

WebSphere Liberty on IBM Containers: Part 1

Monday, May 16th, 2016

Getting Started

I wanted to write a few blog posts as I explored deploying our WebSphere Liberty Docker image on to third-party cloud platforms, but that didn’t really feel fair given that I haven’t written anything here about the options on IBM’s own cloud platform! So, to kick things off, here’s a series of posts on using Liberty with the IBM Containers service in Bluemix. In this post I’m going to focus on using the web UI to get a simple Liberty container up and running. In subsequent posts I’ll cover topics such as deploying an application, scaling applications and using the command line.

I’m also going to use the new console for the instructions/screenshots in this post. If you don’t already have an account, use the sign-up button at the top right of the console to get a free 30-day trial.

You can use WebSphere in a number of different ways in Bluemix: web applications deployed as Cloud Foundry applications will use the Liberty runtime, and it’s also possible to stand up WAS ND topologies for both Liberty and the traditional application server in VMs. Here we’re going to focus on containers.

Once logged in, select the Compute icon and then the Containers tab. Then click the + icon at the top right to create a new container. You should find that the private registry associated with your account is already populated with a number of IBM-provided images, as shown in the following screenshot. Select the ibmliberty image. This image adds a few layers on top of the websphere-liberty image from Docker Hub to make it run more cleanly in IBM Containers (see the Dockerfile in the docs for full details).

Select container image

You will now see a drop-down on the left which allows you to select the image tag to use. By default the latest tag will give you the full Java EE 7 profile image. Select the webProfile7 tag to get a lighter-weight image. Beneath the tag you’ll see an area entitled Vulnerability Assessment which, if we’ve been doing our job properly, should say Safe to Deploy. Assessments are updated regularly as new CVEs are announced and we therefore regularly refresh the image to make sure that it stays clean.

Vulnerability Advisor

At the top of the page, you’ll see that the Single option is selected to create just a single container instance; we’ll return later to see how the Scalable option works. Give the container a name. Leave the container size at Micro but select the option to Request and Bind Public IP. All containers get an IP address on a private network and are accessible by other containers in the same space, but to be accessible externally they must be assigned a public IP.

Create a container

Now click the Create button at the bottom of the page. After a minute or two the status of the container should change to Running.

Container running

Switch to the Monitoring and Logs tab. The container-level metrics for CPU/memory/network usually take some time to appear, but if you select the Logging tab you should be able to see the messages output by Liberty as the container started. Tip: you can use the ADVANCED VIEW button on these tabs to open up Grafana and Kibana views onto these metrics and logs respectively if you want to do more detailed analysis.

Container logs

Finally, switch back to the Overview tab. If you scroll down to the Container details section, you’ll find a list of ports. If you click on 9080, this will open a new browser tab using that port and the public IP address assigned to the container. With the default image, that should show the Liberty welcome page.

Liberty welcome page
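If you’d rather check from a terminal, curl works just as well; substitute the public IP that was bound to your container for the placeholder here:

$ curl http://<public-ip>:9080/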

Congratulations – you have run WebSphere Liberty on IBM Containers! The astute among you will, however, have noticed that it’s not running any application. That’ll be the subject of the next blog post…

ES2015 in Production

Thursday, April 21st, 2016

Bård Hovde gave tonight’s Developer South Coast presentation on the subject of “ES2015 in Production” (or “ES6 in Production” if you must). You can find the slides here with the source for the presentation over on Bård’s GitHub account. He did a great job of making the subject matter entertaining. Beyond being able to say goodbye to all of that boilerplate, my main takeaway was the use of Babel for transpiling ES2015 into ES5, so no excuses about waiting for browser compatibility! The Babel site also has a nice overview of ECMAScript 2015 features.
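For anyone who wants to try the transpiling step, here’s a minimal sketch using the Babel 6 CLI of the time; the src and lib directory names are just placeholders:

$ npm install --save-dev babel-cli babel-preset-es2015
$ ./node_modules/.bin/babel src --out-dir lib --presets es2015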

docker logs and stderr

Tuesday, April 12th, 2016

As part of a script that I was writing today I was attempting to count the number of times that a containerised Liberty server had started. I started with the following command:

docker logs test | grep "CWWKF0011I" | wc -l

(where CWWKF0011I is the code for Liberty’s “ready to run a smarter planet” message) but it was giving me an answer one greater than I anticipated. Stripping off the line count at the end quickly showed why: in addition to the messages that I was expecting, grep also appeared to be returning an error message from the logs. It took me a little while longer to get my head around what was going on. grep wasn’t letting the error through at all. Indeed, the error wasn’t even getting as far as grep. Instead, docker logs was trying to be helpful and recreating the stdout and stderr streams from the container; stdout was getting piped to grep, and stderr, where Liberty was outputting its error message, was just going straight to the console. This was also going to cause me a headache elsewhere when I was actually trying to grep for errors.
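A quick way to see the split for yourself is to throw stdout away entirely; the Liberty error message still appears on the console because it’s arriving on stderr:

docker logs test > /dev/null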

When I first learnt bash, 2>&1 was the way to redirect stderr to stdout but, just to prove that you can teach an old dog new tricks, here’s the working version of my original command using the Bash 4 syntactic sugar to merge stderr into a pipe:

docker logs test |& grep "CWWKF0011I" | wc -l
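For the record, the traditional redirect achieves exactly the same thing:

docker logs test 2>&1 | grep "CWWKF0011I" | wc -l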

As a further aside (which certainly added to my confusion when trying to debug this), if you allocate a TTY to a container via the -t flag on the run command, stderr will be merged into stdout before it hits the logs so you won’t see this behaviour at all!
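If you want to convince yourself of that, something along these lines should do it (the container name here is just an example): start a container with a TTY allocated and then confirm that discarding stderr from docker logs loses nothing.

docker run -d -t --name test-tty websphere-liberty
docker logs test-tty 2>/dev/null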

Book Review: Docker Containers: Build and Deploy with Kubernetes, Flannel, Cockpit, and Atomic

Monday, April 11th, 2016

I’m slowly working my way through the list of Docker publications that I stacked my tablet with when IBM restarted its subscription to Safari Books Online. One of these was Docker Containers: Build and Deploy with Kubernetes, Flannel, Cockpit, and Atomic by Christopher Negus. The last two projects in the title are a clue to the underlying theme of the book. Cockpit and Atomic being Red Hat projects, this is really a guide to doing containers the Red Hat way. This I was expecting – they do employ the author after all. What really disappointed me was that the four technologies cited in the title occupied so little of the book’s content. Of the 18 chapters, there was one on Super Privileged Containers (an Atomic concept), one on Cockpit, two on Kubernetes, and one paragraph on Flannel. Hardly comprehensive coverage!

The first part of the book covers the basic concepts, setting up an OS and a private registry. This reminded me of one key fact that I’d forgotten: that Red Hat ships its own Docker distribution. One of the Red Hat specific features is the ability to specify multiple default registries (with Red Hat placing their own registry ahead of Docker Hub in the default search order). This is at odds with Docker’s view that the image name (including registry host) should be a unique identifier. Personally, I would side with Red Hat on this one. I suspect many customers will be using their own private registries and would prefer to be able to specify ‘myimage’ and have it resolve against the correct image in the local registry for the environment.
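If memory serves (and this is an assumption worth checking against the Red Hat documentation rather than a statement of fact), the additional registries are configured through the daemon options in /etc/sysconfig/docker, something along these lines:

# /etc/sysconfig/docker on Red Hat's Docker distribution (illustrative only)
ADD_REGISTRY='--add-registry registry.access.redhat.com'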

The bulk of the content is in the second part, which covers building, running and working with individual containers. There were a few errors that crept into this section. For example, the author suggests that setting the environment variable HOST on a container somehow magically mounts the host filesystem (it’s actually used to tell Atomic where the host filesystem is mounted). He also states incorrectly, more than once, that removing files in a later layer that were introduced in an earlier layer will reduce the size of the image. In general though, it provides good coverage of working with containers. I picked up a few interesting command options that I wasn’t aware of. For example, ‘-a’ on a ‘pull’ to retrieve all of the images for a repository, the fact that you can use ‘inspect’ on images as well as containers, and a couple of commands that had previously escaped me completely: ‘rename’ and ‘wait’. There was also some useful information on the use of Docker with SELinux.
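For reference, those commands look like this in practice (the image and container names here are just placeholders):

docker pull -a busybox           # pull every tagged image in the repository
docker inspect websphere-liberty # inspect works on images as well as containers
docker rename old_name new_name  # give an existing container a friendlier name
docker wait new_name             # block until the container exits and print its exit code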

The third part covers Super Privileged Containers in Atomic (the way in which Atomic extends the basic capability of the OS via containerized tools) and management of Docker hosts and containers through the Cockpit browser-based administration tool. The fourth part then covers the basic concepts of Kubernetes and the steps for setting up an ‘all-in-one’ environment and a cluster. These steps seem destined to be out-of-date before the ink is dry and the space would have been better spent covering the concepts in more depth and talking about usage scenarios.

The final part seems a little out of place. One chapter covers best practices for developing containers. The cynic in me suspects this may have just been an opportunity to introduce some OpenShift content. It certainly glosses over the entirety of Machine, Compose and Swarm in just a single section. Then there is a closing chapter looking at some example Dockerfiles.

All-in-all, the book offers a good introduction to the topic of Docker, particularly if you are looking to deploy on Fedora, RHEL or CentOS. Look elsewhere though if you really want to get to grips with Kubernetes.

Docker for Mac Beta

Sunday, April 10th, 2016

I was excited to see Docker announce a beta for ‘native’ support for Docker on Windows and Mac, where ‘native’ means that Docker appears as a native application utilising built-in virtualisation technology (Hyper-V on Windows and a project called xhyve on Mac) rather than requiring VirtualBox. Sadly this isn’t much use to me at work where I run the corporate standard Windows 7 on my laptop and Linux on my desktop. (The Register had an article indicating that, although Windows 7 is declining in the enterprise, its market share is still 45%+, so I hope Docker don’t do anything rash like ceasing to develop Docker Toolbox.) I do have a Mac at home though, so I signed up for an invite.

The install went very smoothly, although the promised migration of images and containers from my existing default Docker Toolbox VM failed to happen. My best guess is that this was because the VM was back-level compared with the version of the client that the native app had installed. Docker Machine and the native app sit happily alongside one another, although obviously I then needed to upgrade the VM to match the newer client version.

Needless to say, the first thing I tried to run was the websphere-liberty image. This started flawlessly and, having mapped port 9080 to 9080, I was then able to access the Liberty welcome page at docker.local:9080. So far so good.

WebSphere Liberty under Docker for Mac Beta
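For anyone following along at home, the run command would have been something like the following (the container name is just one I picked):

$ docker run -d -p 9080:9080 --name wlp websphere-liberty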

Having been out on vacation at the time, I went back and listened to the online meetup covering the beta. Given that we have websphere-liberty images for PPC and z/Linux, I was particularly intrigued to see that it promised the ability to run images for multiple architectures. The example given in the meetup worked like a charm:

$ docker run justincormack/ppc64le-debian uname -a
Linux 9a41dded6970 4.4.6 #1 SMP Mon Apr 4 15:12:22 UTC 2016 ppc64le GNU/Linux

Unfortunately trying to run anything against other images such as ppc64le/ubuntu resulted in a ‘command not found’ from Docker so I need to do some more digging to see what’s going on here.

Whilst browsing the beta forums, I noticed that a common complaint was the speed of the file system mounts, which has also been a problem with the VirtualBox approach. I decided to test this out by trying to use the maven image to compile our DayTrader sample. To keep things ‘fair’, my Maven cache was pre-populated and, when running the image, I mounted the cache.

Natively on the host a ‘mvn compile’ takes around 12 seconds. With Docker running in a Docker Machine VM using the following command, the time was surprisingly close, typically of the order of 14 seconds.

$ docker run -v $HOME/.m2:/root/.m2 -v $(pwd):/usr/src -w /usr/src \
    maven mvn compile

Running the same command against the beta unfortunately took over 30 seconds. Whether that’s down to the file system driver I can’t say. It’s certainly an appreciable difference but, hey, this is a beta so there’s still plenty of hope for the future!

One tip that I picked up on the way: ‘docker-machine env’ has an ‘--unset’ option which means that, if you want to switch back to the native Docker install after using Machine, then you can use the following command:

$ eval $(docker-machine env -u)
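And pointing the client back at a Machine-managed VM is just the reverse (assuming the machine is called default):

$ eval $(docker-machine env default)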

Docker London December

Friday, December 4th, 2015

Last night I headed up to London for the December meetup of Docker London. The evening didn’t get off to a great start as I managed to cycle over a screw on the way to the station. Despite this, and the subsequent efforts of the Jubilee Line, I did just make it in time for the start.

The evening kicked off with Chad Metcalf from Docker demoing Tutum. It was just a slight variant of one of the demos from DockerCon so nothing really new for me here, although he did talk a little about the extensions to the Compose syntax that Tutum uses. The HIGH_AVAILABILITY strategy is something that’s obviously missing from Compose/Swarm today.

Next up was Alois Mayr, a Developer Advocate at Ruxit, who did a nice job of not explicitly pushing his company’s offering but instead talked more generally about some issues experienced by a Brazilian customer of theirs that has a large deployment of Docker running on Mesos. The underlying theme was undoubtedly that, in a large microservices-based architecture, you need to have a good understanding of the relationships between your services and their dependencies in order to be able to track problems back to the root cause.

Last up was an entertaining pitch by Chris Urwin, an engineering lead at HSCIC (part of the NHS), and consultant Ed Marshall. They talked about a project to move from a Microsoft VMM (and Excel spreadsheet) based setup to one using Docker and Rancher for container management. They were undoubtedly pleased with the outcomes in terms of developer productivity and the manageability of the deployed environment, not to mention the reduction in cost and complexity. Although the system is not live in production yet, it is live in an environment shared with partners that is subject to SLAs and the like. Particularly striking for me was the reduction in the amount of disk space and memory that the new solution entailed.

DockerCon Europe 2015: Day 2

Thursday, November 26th, 2015

DockerCon logo

It was another early start on Day 2 of the conference. It’s not often I leave the hotel before breakfast starts, but fortunately breakfast was being served in the expo hall so I could refuel whilst on duty.

The morning’s general session focussed on the solutions part of the stack that Solomon had introduced the previous day. VP for Engineering, Marianna Tessel, introduced Project Nautilus which, like the vulnerability scanner in IBM’s Bluemix offering, aims to identify issues with image content held in the registry. This was of interest to me as they have been scanning the official repository images for several months now, presumably including the websphere-liberty image for which I am a maintainer. There was also a demo of the enhancements to auto-builds in Docker Hub and the use of Tutum, Docker’s recent Docker hosting acquisition.

Particularly interesting was Docker’s announcement of the beta of Docker Universal Control Plane. This product offers on-premise management of local and/or cloud-based Docker deployments with enterprise features such as secret management and LDAP support for authentication. Although Docker were at pains to point out that there will still be integrations for monitoring vendors and plugins for alternative volume and network drivers, this announcement, combined with the acquisition of Tutum, puts Docker in competition with a significant portion of its ecosystem.

CodeRally @ DockerCon

After lunch I went to sessions on Docker monitoring (didn’t learn much) and on Official Repos. In the latter, Krish Garimella expanded on Project Nautilus and described how the hope is that this will allow them to dramatically scale out the number of official repositories whilst still ensuring the quality of the content. We also handed out the Raspberry Pis to our Code Rally winners. I was pleased that they all went to attendees who’d spent significant time perfecting their cars.

The closing session was also well worth staying for. Of particular note was the hack to manage unikernels using the Docker APIs. If Docker can do for unikernels what it did for containers, this is certainly a project to watch!

DockerCon Europe 2015: Day 1

Wednesday, November 25th, 2015

Moby Dock

I was lucky enough to be a part of the IBM contingent attending last week’s DockerCon Europe in Barcelona. I had to earn my keep by manning the Code Rally game on the IBM booth (not to mention lugging a suitcase full of laptops to the event and porting the server-side of the game to run on IBM Containers). I did get to attend the sessions though and soak up the atmosphere.

The conference opened with a moving remembrance, led by Docker CTO and former Parisian Solomon Hykes, for those who had died in the Paris attacks the preceding week. He chose to play Carl Sagan reading from Pale Blue Dot, which is a thought-provoking listen in its own right.

After a somewhat flat opening demo, Solomon returned to the stage to introduce the Docker stack: Standards, Infrastructure, Dev Tools and Solutions. He then went on to talk about the themes of quality, usability and security. The last of these was accompanied by a great demo of the Yubikey 4 for creating (and revoking) certificates for Docker Content Trust. This was given by Aanand Prasad acting as the hapless developer, with Diogo Monica in the role of ops. In a nice touch, everyone in the audience found a Yubikey taped to the side of their seat (although perhaps less interesting for my children than the Lego Moby Dock!). There was also a tip of the hat to the work that my colleague Phil Estes has been leading in the community around user namespace support. The session concluded with a powerful demo of using Docker Swarm to provision 50,000 containers across 10,000 nodes running in AWS.

DockerCon Party @ Maritime Museum

After racing back to the expo hall to cover the next break, I went to an “Introduction to the Docker Project” session which covered how to get involved with contributing (I submitted my first PR the week before, if only to the docs). It finished early so I could also catch a glimpse of the inimitable Jessie Frazelle doing what she does best: running random stuff under Docker (a Tor relay this time). After lunch Jessie was on again, this time with Arnaud Porterie, to provide a round-up of the latest updates to the Docker engine.

I spent the remainder of the day watching the lightning talk sessions before heading back to the booth for Happy Hour followed by the IBM sponsored conference party at the impressive maritime museum.