Archive for the ‘Work’ Category

WebSphere Liberty and IBM Containers: Part 3

Saturday, June 18th, 2016

Scaling up

In the first two posts in this series we covered the basics of starting WebSphere Liberty on IBM Containers via the browser and then using the command line to deploy an application.

We’ve already seen some of the value-add that comes out of the box when running under IBM Containers. For example, at no point have we needed to be concerned with the underlying infrastructure on which the containers are running (beyond selecting a region). When we created an image it was scanned automatically for vulnerabilities. Each container was allocated its own private IP address accessible from any other container running in the same space – no need to set up and configure overlay networking here. We had control over whether we also wanted to assign a public IP and, if so, what ports should be exposed there. We also had easy access to metrics and standard out/error from the container.

So far we’ve only deployed a single container though. What happens when we hit the big time and need to scale up our application to meet demand? When we created our first container via the UI, you may remember that the Single option was selected at the top. Let’s go back and try out the Scalable alternative. From the catalog, navigate through Compute and Containers (remember that these instructions are based on the new Bluemix console). Select our existing demo image. Next, select the Scalable button at the top and give the container group a name. By default you’ll see that our group will contain two instances.

Rather than having a single IP associated with a container, this time we are asked to give a host name (on the mybluemix.net domain by default) for the group. Requests arriving at this host name will be load-balanced across the container instances in the group (reusing the gorouter from Cloud Foundry). One nice bonus of this approach is that it doesn’t eat into our quota of public IPs! As the host name needs to be unique within the domain, I tend to include my initials as you’ll see in the screenshot below. Select 9080 as the HTTP port and then click Create.

Container group creation
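The same group can also be created from the command line with the IBM Containers plugin. A sketch only, assuming the demo image built earlier in this series and that the plugin’s group options (--desired for the instance count, -n for the host name, -d for the domain) behave as I describe; the group and host names here are just examples:

```shell
# Create a scalable group of two instances behind a load-balanced route
ns=$(cf ic namespace get)
cf ic group create --name demo-group --desired 2 \
    -p 9080 -n demo-dw -d mybluemix.net \
    registry.ng.bluemix.net/$ns/demo
```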

Once the containers have started, the dashboard should show that you have two instances running:

Running instances

Right-click on the route shown in the Group details section and open it in a new tab. This should take you to a Liberty welcome page and, if you add the myLibertyApp context root, you should be able to see the application again. If you hit refresh a few times, although you won’t be able to tell with this application, your requests will be load-balanced across the two instances. If you return to the dashboard and switch to the Monitoring and Logs tab you can switch between the output for the instances and should, for example, be able to see the spike in network usage on the two containers when you made the requests.

If you return to the Overview tab you will see that there are plus and minus symbols either side of the current number of instances. These can be used to manually scale the number of instances. Click the + icon, click Save, and watch the creation of a new container in the group.
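The plus and minus buttons simply change the group’s desired instance count, which can also be done from the command line. A sketch, assuming a group named demo-group and that the plugin’s group update command takes a --desired option:

```shell
# Bump the desired instance count from two to three
cf ic group update --desired 3 demo-group
# List the group's instances to watch the new container appear
cf ic group instances demo-group
```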

Manual scaling is all very well but it would be better if the number of instances scaled automatically up and down as required. If you’re deploying your containers in the London region then you’ll notice an extra tab at the top of the dashboard labelled Auto-Scaling. It’s only available in the London region at the moment because the service is still in beta (and so things may change a bit from what I’m describing here). Having selected this tab, click the plus icon labelled Create policy. Give the policy the name default and set the minimum and maximum instance values to 1 and 3. Add two CPU usage rules to scale up and down the number of instances as shown in the following diagram and then hit Create. Finally, select Attach to activate the policy for this scaling group.

Auto-Scaling

If you click the Auto-Scaling History tab you should see that a scaling action has taken place. We originally scaled up manually to 3 instances but, as the CPU usage is below our 60% limit, the number gets scaled down by one. If you wait another 5 minutes (the cool down period we specified), then you’ll see it get scaled down again to our minimum of 1.

Scaling history

And that concludes our tour of the scaling options in IBM Containers!

WebSphere Liberty on IBM Containers: Part 2

Monday, May 30th, 2016

Deploying an Application

In the first part of this series we looked at how to get started running a WebSphere Liberty image in IBM Containers using the Bluemix console.  The container was just running an empty Liberty server. In this post we’ll look at building and deploying an image that adds an application. I was originally intending to stick to using the browser for this post but I think it’s actually easier to do this from the command line. I’m going to assume that you already have Docker installed locally, either natively on Linux, via Docker Machine, or via the Docker for Mac/Windows beta.

First off we need an application to deploy and, just for novelty, I’m going to use the Liberty app accelerator to generate one. Select Servlet as the technology type and then, contrary as it may seem, select Deploy to Local and not Deploy to Bluemix. The latter option currently only supports deploying to the Instant Runtimes (Cloud Foundry) side of Bluemix. Finally, give your project a name and click Download Now.

Liberty App Accelerator

Unpack the zip file you downloaded and change to the top directory of the project. The app is built using Maven. Perhaps you already have Maven installed, but this is a Docker blog post so we’re going to use the maven image from Docker Hub to build the app as follows:

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -w /usr/src/mymaven/myProject-application maven mvn clean package

This mounts the project onto a container running the maven image and runs the command mvn clean package in the myProject-application directory. (Note: if you were doing this repeatedly you’d probably want to mount a Maven cache into the container as well rather than downloading everything each time.)
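For repeated builds, a variant along those lines might mount the host’s local Maven repository as a cache. The ~/.m2 path is Maven’s usual default location, not something from this post:

```shell
# Same build, but reusing the host's Maven repository so dependencies
# aren't downloaded on every run
docker run --rm -v "$(pwd)":/usr/src/mymaven \
    -v "$HOME/.m2":/root/.m2 \
    -w /usr/src/mymaven/myProject-application maven mvn clean package
```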

In the myProject-application/target directory you should now find that you have a file myArtifactId-application-1.0-SNAPSHOT.war. Copy this into a new empty directory so that when we execute a Docker build we don’t end up uploading lots of other cruft to the Docker engine. Using your favourite editor, add the following Dockerfile to the same directory:

FROM websphere-liberty:webProfile7
COPY myArtifactId-application-1.0-SNAPSHOT.war /config/dropins

We have two choices now: we can either build a Docker image locally and then push it up to the IBM Containers registry, or we can build the image in IBM Containers. We’ll go for the latter option here as it involves pushing fewer bytes over the network.
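For completeness, the local-build route would look roughly like this. It assumes cf ic login has already been run, which configures the Docker CLI to authenticate against the Bluemix registry:

```shell
# Build the image locally, then push it to the IBM Containers registry
ns=$(cf ic namespace get)
docker build -t registry.ng.bluemix.net/$ns/demo .
docker push registry.ng.bluemix.net/$ns/demo
```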

There’s one niggle today: to access your IBM Containers registry, you need to log in first using the Cloud Foundry CLI and IBM Containers plugin. We’re going to play the containerisation trick again here. Run the following command to build an image with the CLI and plugin:

$ docker build -t cf https://git.io/vr7pl

Ideally I’d run this image as a stateless container but getting the right state written out to the host into the .cf, .ice and .docker directories is a bit finicky. Instead, we’re going to mount our current directory onto an instance of the image and perform the build inside:

$ docker run -it --rm -v $(pwd):/root/build cf
$ cd /root/build
$ cf login -a api.ng.bluemix.net

$ cf ic login
$ cf ic build -t demo .

Now we’re ready to run an instance of your newly built image. At this point you could switch back to the UI, but let’s keep going with the command line. We’ll need to refer to the built image using the full repository name, including your namespace:

$ ns=$(cf ic namespace get)
$ cf ic run --name demo -P registry.ng.bluemix.net/$ns/demo

By default, containers are only assigned a private IP address. In order to access our new container we’ll need to request and assign a public IP. The cf ic ip command unfortunately returns a human friendly message, not a computer friendly one, hence the need for the grep/sed to retrieve the actual IP:

$ ip=$(cf ic ip request | grep -o '".*"' | sed 's/"//g')
$ cf ic ip bind $ip demo
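To see what that pipeline is doing, here it is run against a mocked-up response. The exact wording of the real cf ic ip request message is an assumption on my part; the quoted IP is the part the pipeline relies on:

```shell
# Simulated output of `cf ic ip request` -- the message text is a guess
msg='The IP address "169.44.1.2" was obtained.'
# grep -o pulls out the quoted string; sed strips the quotes
ip=$(echo "$msg" | grep -o '".*"' | sed 's/"//g')
echo "$ip"   # → 169.44.1.2
```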

Lastly, we can list the port and IP to point our browser at:

$ cf ic port demo 9080

Adding the context root myLibertyApp should give us the welcome page for the starter app.

Starter Welcome Page

Congratulations, you’ve successfully deployed an application to IBM Containers! In the next post in this series we’ll look at some of the additional features that the service provides, such as scaling groups and logging.

Two Days with Uncle Bob

Friday, May 13th, 2016

I’ve been lucky enough to spend the past two days in the presence of Robert C. Martin a.k.a. Uncle Bob for his Clean Code workshop at Skills Matter in London. I arrived at the very last minute having decided to try out a Santander Cycle for the first time, only to discover that the promised docking station next to Code Node had disappeared under road works.

The workshop got off to a slow start as we spent over an hour going round the room indicating how long we’d been programming and what languages we’d used. This seemed to be just so that we could observe that, as well as a fair smattering of those in the first 5-10 years, there were many of us beyond our first flush of youth! That, and so that Bob could provide a potted history of various programming languages. This was a general theme for the course with Bob prefixing each module with an interlude on some aspect or other of astro or nuclear physics!

As a consequence of the above, if attendees were expecting to learn all of the content of the Clean Code book during the workshop then they’d leave disappointed. (Although a copy of the book was included in the exorbitant course fee.) What you did get was a good understanding of the values that underlie the practices covered by the book, the opportunity to watch the master at work refactoring and plenty of opportunity to ask questions first-hand.

I shan’t attempt to repeat the material but there were two main a-ha moments for me. The first was that, as a result of not doing test-first development, we’ve ended up with unit tests focused on each individual method. This has made them extremely fragile to refactoring which, in turn, means that either the tests quickly fall by the wayside or that refactoring just doesn’t happen. And that leads to the second learning point: to refactor mercilessly, extracting functions until every function does just one thing. I had fallen into the trap of seeing the myriad of small methods as adding to the complexity. Bob’s argument is that, well-named, the methods allow you to quickly gain an understanding of what the code is doing at every level of abstraction. Based on the one exercise that we did during the workshop, I’d say that pairing is immensely helpful in determining what makes a good name. Anyway, lots to share back with my colleagues!

As an aside: I can heartily recommend the Hawksmoor Seven Dials if you are a fan of steak (although it has to be said that the corporate expense limit would have just about covered the meat on my plate!).

DockerCon Europe 2015: Day 2

Thursday, November 26th, 2015

DockerCon logoIt was another early start on Day 2 of the conference. It’s not often I leave the hotel before breakfast starts, but fortunately breakfast was being served in the expo hall so I could refuel whilst on duty.

The morning’s general session focussed on the solutions part of the stack that Solomon had introduced the previous day. VP for Engineering, Marianna Tessel, introduced Project Nautilus which, as with the vulnerability scanner in IBM’s Bluemix offering, aims to identify issues with image content held in the registry. This was of interest to me as they have been scanning the official repository images for several months now, presumably including the websphere-liberty image for which I am a maintainer. There was also a demo of the enhancements to auto-builds in Docker Hub and the use of Tutum, Docker’s recent Docker hosting acquisition.

Particularly interesting was Docker’s announcement of the beta of Docker Universal Control Plane. This product offers on-premise management of local and/or cloud-based Docker deployments with enterprise features such as secret management and LDAP support for authentication. Although Docker were at pains to point out that there will still be integrations for monitoring vendors and plugins for alternative volume and network drivers, this announcement, combined with the acquisition of Tutum, puts Docker in competition with a significant portion of its ecosystem.

CodeRally @ DockerConAfter lunch I went to sessions on Docker monitoring (didn’t learn much) and on Official Repos. In the latter, Krish Garimella expanded on Project Nautilus and described how the hope is that this will allow them to dramatically scale-out the number of official repositories whilst still ensuring the quality of the content. We also handed out the Raspberry Pis to our Code Rally winners. I was pleased that they all went to attendees who’d spent significant time perfecting their cars.

The closing session was also well worth staying for. Of particular note was the hack to manage unikernels using the Docker APIs. If Docker can do for unikernels what it did for containers, this is certainly a project to watch!

DockerCon Europe 2015: Day 1

Wednesday, November 25th, 2015

Moby DockI was lucky enough to be a part of the IBM contingent attending last week’s DockerCon Europe in Barcelona. I had to earn my keep by manning the Code Rally game on the IBM booth (not to mention lugging a suitcase full of laptops to the event and porting the server-side of the game to run on IBM Containers). I did get to attend the sessions though and soak up the atmosphere.

The conference opened with a moving remembrance for those who had died in the Paris attacks the preceding week, led by Docker CTO and former Parisian Solomon Hykes. He chose to play Carl Sagan reading from Pale Blue Dot which is a thought-provoking listen in its own right.

After a somewhat flat opening demo, Solomon returned to the stage to introduce the Docker stack: Standards, Infrastructure, Dev Tools and Solutions. He then went on to talk about the themes of quality, usability and security. The last of these was accompanied by a great demo of the Yubikey 4 for creating (and revoking) certificates for Docker Content Trust. This was given by Aanand Prasad acting as hapless developer, with Diogo Monica in the role of ops. In a nice touch, everyone in the audience found a Yubikey taped to the side of their seat (although perhaps less interesting for my children than the Lego Moby Dock!). There was also a tip of the hat to the work that my colleague Phil Estes has been leading in the community around user namespace support. The session concluded with a powerful demo of using Docker Swarm to provision 50,000 containers to 10,000 nodes running in AWS.

DockerCon Party @ Maritime MuseumAfter racing back to the expo hall to cover the next break, I went to an “Introduction to the Docker Project” which covered how to get involved with contributing (I submitted my first PR the week before, if only to the docs). It finished early so I could also catch a glimpse of the inimitable Jessie Frazelle doing what she does best: running random stuff under Docker (a Tor relay this time). After lunch Jessie was on again, this time with Arnaud Porterie, to provide a round-up of the latest updates to the Docker engine.

I spent the remainder of the day watching the lightning talk sessions before heading back to the booth for Happy Hour followed by the IBM sponsored conference party at the impressive maritime museum.

Rome Retreat

Saturday, July 11th, 2015

ColosseumWork took me to Rome at the end of this week, facilitating a code retreat for some of my colleagues at the local IBM lab. The retreat itself followed a format we’ve used numerous times before with a focus on pairing and TDD although for the first time we also introduced a session on BDD. Starting with a plain English (or Italian) description really did seem to help the participants avoid starting with a focus on the details of the implementation. The experience also made me realise how much you are dependent on being able to understand the communication between a pair when trying to coach them!

I had a few hours to spare in the evening in which I seemed to manage to cover most of Rome on foot and, with the hotel being based near the Colosseum (I could see it from my room window), I managed to get a quick trip round the inside before it was time to depart for the airport. Unfortunately Fiumicino was in disarray following a fire two months ago which meant we spent around an hour sat on the tarmac. I’d certainly like to return to Rome when I have more time to explore but the trip did remind me that I should do so at a time of year when it’s a little cooler!

Meetup Happy

Saturday, July 19th, 2014

I’ve gone a bit meetup happy in the past two weeks. Last week I headed along to the Pivotal offices in London for the first London Cloud Foundry User Group meetup organised by one-time colleague Duncan Winn. First to speak was another ex-Hursley employee, Glyn Normington. He gave a fascinating presentation on the work that he and his colleagues are doing to replace the backend of Cloud Foundry’s Warden container with libcontainer (now split out from Docker). More on this over on Glyn’s blog.

Next up was London based Tammer Saleh, Director of Products at Pivotal Cloud Foundry Services. You can see the recording of this session from the Cloud Foundry Summit where they talk about the different models for stacking server instances. Finally, James Watters (Vice President of Product, Marketing and Ecosystem for Cloud Foundry at Pivotal) talked about the roadmap for Cloud Foundry in 2014 (including what’s out of scope). See James Bayer’s session from the summit for similar information.

The next meetup was my first at Agile South Coast. If nothing else, this gave me an excuse to have a nose at the new(ish) Ordnance Survey offices! I can’t claim to have been welcomed with open arms to the group (no-one even commented on the fact that they hadn’t seen me there before) but that’s fine by me. Most notable to me though was the fact that I was the only one there who wasn’t a scrum master by profession. Have developers lost interest in agile?

As one would expect with this audience, it wasn’t long before the post-it notes were out and we were collaborating on choosing subjects to discuss. My heart sank when topics such as “should spikes be given points?” were selected but I was glad when the resounding response from the group seemed to be “it doesn’t really matter – whatever works for you”. Oh, and apparently PSM is more thorough than CSM but the latter gets more CV points! As I’m part way through reading Kanban in Action, the discussion on Scrum vs Agile in a BAU environment was interesting. I may yet make it to another of these meetups.

The American style pizza and good selection of beer certainly helped make the trip into town worthwhile although I’ll not mistakenly pick up the 7.2% Sierra Nevada Torpedo Extra IPA in future!

Lastly, I returned to Developer South Coast for a session entitled “NoSQL vs SQL… Fight!”. Actually, there wasn’t much of a fight to be had as the speaker (Tony Rogerson) is an SQL Server DBA. He gave a thorough, if halting, coverage of the theory behind relational and NoSQL databases, which sadly meant he ran out of time before reaching the potentially more interesting topic of NewSQL databases.

Logsearch & Decker

Friday, April 4th, 2014

Yesterday evening I headed up to the London PaaS User Group meeting as there were two Cloud Foundry related sessions on the agenda. First up was David Laing talking about the open source Logsearch project, a BOSH deployment of an Elasticsearch ELK log-analysis cluster. His employer (City Index) has this hooked up to Cloud Foundry system logs and, in some cases, they’re also using it for analysis of application logs with additional parsers. They’re looking for people to get involved in the project and help with the next phase: anomaly detection. One major hole in the solution as it currently stands: it’s only suitable for private PaaS as there is no access control over the logged data.

Up second was an entertaining pitch by Colin Humphreys, Founder and CEO of our hosts CloudCredo, on how to sell hats to monkeys. That was the back story anyway; it was actually about how there is space in the stack for something that gives you the flexibility of IaaS over what you run but the simplicity of management, scaling and load balancing of PaaS. That something is Container as a Service. Specifically, the ability to push Dockerfiles to Cloud Foundry using a custom stack for the DEA. Something that Colin is referring to as Decker.

Colin gave a nice demo but it is obviously still early days. Currently you can only push Dockerfiles, not images. There is also no staging at the moment – the image is created when each instance starts – consequently it is not taking any advantage of intermediate images. There is obviously lots of scope for improvement and it’s definitely one to watch. It was also interesting that Colin is currently focussing on the Docker side with the DEA interactions set to change with the introduction of Diego. The project is open source but Colin recommended waiting until he writes some docs before you try picking it up!