Author Archive

Introducing Microservice Builder

Monday, March 27th, 2017

When the frequency of blog posts drops on this site it generally has one of two causes: I’m busy and/or I’m working on something that’s IBM Confidential. Both have been true over the past six months or so whilst I’ve been working on something we’re calling Microservice Builder. A public beta was announced in the run-up to InterConnect and went live on the 24th, which means that I can now come up for air and say a little about the work we’ve done so far.

Although not limited to Java deployments, Microservice Builder pulls together multiple strands of work that we’ve been doing in the WebSphere space. First, there is the work that is being done in the MicroProfile community to define a set of standard APIs for building microservices in Java. Initially, this took a set of existing Java EE technologies (JAX-RS, CDI and JSON-P) but now additional APIs are being defined. You can start to see the results of this work in the Liberty March beta where there are new features for injecting environmental configuration and utilizing fault tolerance patterns such as timeout, bulkhead and circuit breaker.

Another area where we’ve sought to improve the developer experience is by providing a fast-path to creating new projects. The Liberty App Accelerator has been around for some time now, allowing you to generate Java projects quickly through a web UI. We’ve taken this idea and extended it to cover Swift and Node.js. This can be achieved either through a web UI or through a new plugin to the Bluemix CLI. (Note that generated projects do not need to be deployed to Bluemix.) The plugin goes beyond just generating projects and allows you to build and run them locally using containers. This means that the developer no longer needs to have the prerequisites (e.g. Java, Maven and Liberty) installed locally.
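
To give a flavour of the developer flow, the steps look roughly like the following. The exact command names here are from memory of the beta and the project name is a placeholder, so treat this as an assumption rather than a reference:

    bx plugin install dev -r Bluemix   # install the developer plugin for the Bluemix CLI
    bx dev create                      # answer a few prompts to generate a Java, Node.js or Swift project
    cd myproject
    bx dev build                       # build inside a container - no local Java, Maven or Liberty required
    bx dev run                         # run the application locally in a container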

For a runtime environment, we believe containers are a good fit for microservices and in the first instance we’re focusing on Kubernetes. That could be the newly announced Kubernetes in IBM Containers or it could be on-premises with IBM Spectrum Conductor for Containers. On top of Kubernetes, Microservice Builder adds a lightweight fabric, installed as a Helm chart, that simplifies deployment of Liberty-based services. Specifically, in this first release it generates key and trust stores to facilitate inter-service communication. It also configures an ELK (Elasticsearch-Logstash-Kibana) stack to receive and display information including trace, FFDC, garbage collection and HTTP access logs from the Liberty logstashCollector-1.0 feature.
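
Installation of the fabric is a standard Helm workflow. The repository URL and chart name below are placeholders rather than the real ones, but the shape is roughly as follows:

    helm init                                      # set up Tiller in the cluster (Helm 2)
    helm repo add mb https://example.com/charts    # placeholder repository URL
    helm install mb/fabric                         # placeholder chart name for the fabric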

The final strand of Microservice Builder ties together the development and runtime environments via a Jenkins-based pipeline. Once again, this is installed as a Helm chart and is configured to automatically pick up projects from a GitHub or GitHub Enterprise organization. For a Java application, the pipeline will build and test using Maven before creating a Docker image and pushing it to a registry. The Docker image is then deployed to a Kubernetes cluster using either the same or a separate pipeline. The stages correspond roughly to the manual steps sketched below.
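
A simplified sketch of those stages as manual commands (registry, tag and manifest paths are placeholders):

    mvn package                                        # build and run the tests
    docker build -t registry.example.com/myapp:1.0 .   # package the application as an image
    docker push registry.example.com/myapp:1.0         # push the image to the registry
    kubectl apply -f manifests/                        # roll the new image out to the Kubernetes cluster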

To show all of this in action, we have taken the sample conference application from the MicroProfile community and broken it apart into separate projects to deploy using Microservice Builder. Just follow the docs to recreate it, either in your local minikube environment or with Spectrum Conductor for Containers.

Presentations from IBM InterConnect 2017

Sunday, March 26th, 2017

I’m finally back home after what feels like a very long week in Las Vegas at IBM’s InterConnect conference. I promised that I’d post my presentations on SlideShare and I’ll add a few comments here on how each session went.

After an Inner Circle session on Sunday, my first public session of the week was an introduction to containers with WebSphere traditional. This played to a full room, which suggests that there is significant interest in the use of containers for existing workloads. Indeed, that was the point of the second half of the session: to describe scenarios where it may make sense to use containers with traditional WebSphere. That’s not to say that it always does and, during one-to-one sessions during the week, I found myself repeatedly cautioning customers against rushing into the use of containers, particularly with ND, just for the sake of it.

https://www.slideshare.net/davidcurrie/how-to-containerize-websphere-application-server-traditional-and-why-you-might-want-to

My next session covered our new announcement around Microservice Builder. I’ll not say more here as I’ll cover this in a separate post.

https://www.slideshare.net/davidcurrie/microservice-builder-a-microservice-devops-pipeline-for-rapid-delivery-and-promotion

Unfortunately, I didn’t get to deliver this session on Liberty and IBM Containers as it clashed with another that I was presenting. As touched on briefly in this presentation, one of the other announcements at the conference was for support for Kubernetes in IBM Containers. There was lots of excitement around this and I urge you to go and check it out for yourself.

https://www.slideshare.net/davidcurrie/websphere-liberty-and-ibm-containers-the-perfect-combination-for-java-microservices

On Wednesday I had a joint presentation with Brian Paskin looking at options for scalability with Liberty and containers. This was very much Brian’s presentation though so I shan’t post it here. There was an accompanying lab in the afternoon that looked at Liberty collectives and at IBM Containers.

My last session of the week looked at some of the options when choosing a container orchestration platform: from Liberty collectives, through Swarm and Docker Datacenter, to Kubernetes with IBM Spectrum Conductor for Containers and IBM Containers. Many customers I spoke to this week were looking for a single definitive answer here but my response for now is still very much “it depends”.

https://www.slideshare.net/davidcurrie/choosing-a-container-platform-for-your-websphere-applications

Find me at IBM InterConnect 2017

Saturday, March 18th, 2017

I’m going to be at IBM’s InterConnect conference this coming week. If you’re going to be there too, there’s a quick run-down of the sessions I’ll be presenting below. The astute will notice that, due to a scheduling snafu, I’m supposed to be presenting two sessions at the same time on Tuesday. If you go to the Liberty and IBM Containers session then I’m afraid you’ll have to make do with Tom – be kind to him!

If you want to chat about any combination of microservices, containers and WebSphere, you can find me on the microservices ped in the WebSphere area of the expo hall from 5-7:30pm on Tuesday and again from 3-5pm on Wednesday. I’ll be kicking off the latter with a live demo of Microservice Builder, of which more in another post. For Inner Circle customers, I’ll also be talking about this topic at 11am on Sunday.

HAJ-5451 : How to Containerize WebSphere Application Server Traditional, and Why You Might Want To
Date/Time : Mon, 20-Mar, 11:15 AM-12:00 PM
Location : Mandalay Bay South, Level 2 – Surf D
Presenter(s) : David Currie, IBM

BMC-7014 : Roundtable Discussion on Building Java Microservices with WebSphere Liberty
Date/Time : Mon, 20-Mar, 02:00 PM-02:45 PM
Location : Mandalay Bay North, Level 0 – Tropics A
Presenter(s) : Alasdair Nottingham, IBM; David Currie, IBM

BMC-7085 : Meet the Expert on IBM WebSphere Application Server Liberty on Docker
Date/Time : Tue, 21-Mar, 02:30 PM-03:15 PM
Location : Concourse, Bayside B, Level 1 – Meet the Experts Forum # 1
Presenter(s) : David Currie, IBM; Tom Banks, IBM

HAM-5526 : IBM Microservice Builder: A Microservice DevOps Pipeline for Rapid Delivery and Promotion
Date/Time : Tue, 21-Mar, 03:45 PM-04:30 PM
Location : Mandalay Bay North, Level 0 – Islander F
Presenter(s) : David Currie, IBM; Jeremy Hughes, IBM

BMC-5983 : WebSphere Liberty and IBM Containers: The Perfect Combination for Java Microservices
Date/Time : Tue, 21-Mar, 03:45 PM-04:30 PM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : David Currie, IBM; Tom Banks, IBM

BMC-7014 : Roundtable Discussion on Building Java Microservices with WebSphere Liberty
Date/Time : Wed, 22-Mar, 08:00 AM-08:45 AM
Location : Mandalay Bay North, Level 0 – Tropics A
Presenter(s) : Alasdair Nottingham, IBM; David Currie, IBM

BMC-2714 : Utilizing WebSphere Application Server Liberty in Docker Containers for Scalability
Date/Time : Wed, 22-Mar, 10:15 AM-11:00 AM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : Brian S. Paskin, IBM; David Currie, IBM

HAJ-2718 : Utilizing IBM WebSphere Liberty in Docker Containers for Scalability (Lab)
Date/Time : Wed, 22-Mar, 01:00 PM-02:45 PM
Location : Mandalay Bay South, Level 3 – South Seas H
Presenter(s) : Brian S. Paskin, IBM; David Currie, IBM

BAS-5901 : Choosing a Container Platform for Your WebSphere Applications
Date/Time : Thu, 23-Mar, 10:30 AM-11:15 AM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : David Currie, IBM; Tom Banks, IBM

How many processors?

Tuesday, January 31st, 2017

Reading Daniel Bryant’s O’Reilly publication Containerizing Continuous Delivery in Java reminded me of the challenge of determining how many processors are available to you when running in a container. In the case of Java, a call to Runtime.getRuntime().availableProcessors() should return this all-important information. A quick check reveals that, when called in an unconstrained container, this correctly returns the number of cores on my physical hardware (Docker on Linux) or assigned to the VM containing the Docker Engine (Docker Toolbox or Docker for Windows/Mac). If I use the --cpuset-cpus option on docker run to constrain the cores available to the container then this is also correctly reflected in the value returned. The difficulty arises when access to those CPUs is constrained in other ways.

Take, for example, the new --cpus option in Docker 1.13. Setting this to two on a four-way box, I still get four back from a call to availableProcessors() and rightly so: there are four processors and I may get simultaneous access to all four of them even if the cgroup is then going to make sure that I don’t get that access for more than half of the time. Another potential constraint is a highly multi-tenant environment. If I deploy my test application to Bluemix it tells me that there are 48 processors. That’s great but I’m pretty sure I’m not going to get exclusive access to all of those!
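
If you want to reproduce these observations, here is a quick sketch using the official openjdk image (any JDK image would do; the --cpus flag needs Docker 1.13 or later):

    # A one-class program that prints what the JVM thinks is available
    PROG='public class Cpus { public static void main(String[] a) { System.out.println(Runtime.getRuntime().availableProcessors()); } }'

    check() {
      docker run --rm "$@" openjdk:8-jdk \
        bash -c "echo '$PROG' > Cpus.java && javac Cpus.java && java Cpus"
    }

    check                     # unconstrained: all cores visible to the Docker Engine
    check --cpuset-cpus 0,1   # pinned to two cores: reports 2
    check --cpus 2            # CFS quota only: still reports the full core count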

One example where we’ve seen this become a real problem is native memory usage. By default, WebSphere Liberty uses the number of available processors to decide how many parallel threads its executor should support. Each of those threads takes up space in native memory. In a containerized environment, where total memory is typically constrained (Bluemix containers are sold by the GB/hour) and some generic heuristic is often used to determine the heap size to allocate to the JVM, that can lead to memory exhaustion. That’s why you’ll see a GitHub issue from my colleague Erin that, among other things, proposes hard-coding a maximum on the number of threads for the executor service in our Docker images.

Docker 1.13 is out

Sunday, January 22nd, 2017

Docker 1.13 finally made it out the door earlier this week and I found some time to play around with it this weekend. I shan’t enumerate all of the new features here as the introductory post from Docker does a good job of that (or you can see the release notes for the gory details). Instead I’ll talk a little bit about some of the features that are of particular interest to me.

Top of the list has to be CLI backwards compatibility. It has been a frustration for some time that you’ve had to set the DOCKER_API_VERSION environment variable in order to have a newer client talk to an engine using an older version of the API. I almost always hit this following an upgrade or when accessing remote engines. It also made it difficult to have an image containing a Docker client, for example to talk to the engine it was running on: you ended up either having to create an image for each API version or trying to work out the engine’s version so you could set the variable appropriately. It’s a shame that compatibility only goes back as far as 1.12 but it’s a step in the right direction.
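
For anyone who hasn’t hit this, the pre-1.13 workaround looked something like the following (the host name and version number are purely illustrative):

    # Pin the client to the API version that the older engine understands
    export DOCKER_API_VERSION=1.23
    docker -H tcp://old-engine:2375 version

    # From 1.13 onwards the client negotiates the version itself, so the variable is no longer needed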

Another feature that I’ve long been holding out for is the --squash option on docker build. As implemented, this squashes all of the layers from the current build down into one, preserving the image history in the process. This means that you no longer have to jump through hoops to make sure temporary files introduced during the build are created, used and deleted all in the same Dockerfile command.
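
A minimal illustration (--squash was experimental in 1.13, so the daemon needs experimental features enabled):

    # Dockerfile - the temporary file is created and removed in separate steps
    #   FROM ubuntu
    #   RUN dd if=/dev/zero of=/tmp/bigfile bs=1M count=100
    #   RUN rm /tmp/bigfile

    docker build -t squash-test .            # the 100MB still lurks in an intermediate layer
    docker build --squash -t squash-test .   # the layers are squashed and the file really is gone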

I tested the option out on the build for our websphere-liberty images and was initially surprised that it didn’t reduce the size at all. I know that some files get overwritten in subsequent layers but unfortunately that happens across different images e.g. the javaee7 image overwrites some files in the webProfile7 image. Likewise, for our websphere-traditional images we currently have a two-step build process to avoid getting Installation Manager (IM) in the final image. I had hoped that we’d just be able to uninstall IM and then squash the layers but this would only work if we didn’t install IM in a separate base image. Hopefully the squash flag will gain some options in future to control just how many layers are squashed.

Another space-saving feature is the docker system prune command. Yes, pretty much every Docker user probably already had a script to do this using a host of nested commands but, as with the corresponding docker system df command, it’s good to see Docker making this that bit easier for everyone.
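
For reference, the new commands look like this:

    docker system df         # show disk usage by images, containers and volumes
    docker system prune      # interactively remove stopped containers, dangling images and other unused data
    docker system prune -a   # be more aggressive: also remove unused (not just dangling) images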

The area of restricting CPU usage for containers has also been something of a black art involving shares, cpusets, quotas and periods. (I should know as we’ve given quite some consideration as to what this means for IBM’s PVU and vCPU pricing models.) It’s therefore great to see the --cpus option being added to docker run to radically simplify this area.
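
The new flag is effectively shorthand for the existing quota and period settings, e.g. (values purely for illustration; the image name is a placeholder):

    # Before 1.13: limit a container to one and a half CPUs' worth of time
    docker run -d --cpu-period=100000 --cpu-quota=150000 myimage

    # From 1.13: the same constraint expressed directly
    docker run -d --cpus=1.5 myimage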

Perhaps the biggest feature in Docker 1.13 has to be the introduction of the Docker Compose V3 file format and the ability to deploy these Compose files directly to a swarm using docker stack deploy. This was a glaring hole when swarm mode was introduced in 1.12. It still sits a little uneasily with me though. Docker Compose started out as a tool for the developer. Despite exposing much the same Docker API, there were a few holes that started to creep in when trying to use the same YAML with a classic Swarm. For example, you really had to be using images from a repository for each node to be able to access them and the inability to specify any sort of scaling in the file meant it wasn’t really of use for actual deployment. The latter problem is, at least, resolved with V3 and swarm mode but only at the expense of moving away from something that feels like it is also of use to the developer. Perhaps experience will show that a combination of Compose file extensibility and Distributed Application Bundles will enable reuse of artifacts between development and deployment.
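
By way of example, a minimal V3 file can now carry the desired scale and be deployed straight to a swarm (the image name is a placeholder):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: registry.example.com/myapp:1.0
        deploy:
          replicas: 3
    EOF

    docker stack deploy -c docker-compose.yml myapp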

I don’t wish to end on a negative note though as, all in all, there’s a lot of good stuff in this release. Roll on Docker 1.14!

Using the Docker remote API to retrieve container CPU usage

Monday, November 28th, 2016

For reasons that I won’t go into here, I’ve been interested in the CPU accounting aspect of cgroups for a while and I recently found some time to have a poke at what information is available in the Docker remote API. I was interested in getting hold of the actual CPU time used by a container versus the elapsed time that the container has been running for (where the former would be smaller if the container is not CPU intensive and potentially much larger if it’s chewing through multiple cores).

The CLI doesn’t expose the information that I was looking for so my first pass was to define an image with curl and jq:
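
The original Dockerfile isn’t reproduced in this post, but it amounted to little more than the following sketch (Debian rather than Alpine for the date-parsing reason mentioned below; the script name is hypothetical):

    FROM debian:jessie
    RUN apt-get update && apt-get install -y curl jq && rm -rf /var/lib/apt/lists/*
    COPY cpu.sh /cpu.sh
    ENTRYPOINT ["/cpu.sh"]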

Build it:
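
(The original command isn’t shown here; it would have been essentially the following, with an arbitrary tag.)

    docker build -t cpu-usage .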

And then run it with a script as follows:
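
The script itself also isn’t reproduced here, but the gist of the curl and jq logic was as follows. The endpoints are from the Docker remote API, while the exact jq filters are a reconstruction and should be treated as approximate:

    #!/bin/bash
    # Report total CPU time consumed versus elapsed time for the named container.
    CONTAINER=$1
    SOCK=/var/run/docker.sock

    # Total CPU time (in nanoseconds) from a single stats sample
    CPU_NS=$(curl -s --unix-socket $SOCK \
      "http://localhost/containers/$CONTAINER/stats?stream=false" \
      | jq '.cpu_stats.cpu_usage.total_usage')

    # When the container started, from the inspect endpoint
    STARTED=$(curl -s --unix-socket $SOCK \
      "http://localhost/containers/$CONTAINER/json" \
      | jq -r '.State.StartedAt')

    ELAPSED=$(( $(date +%s) - $(date -d "$STARTED" +%s) ))
    echo "Elapsed: ${ELAPSED}s, CPU: $(( CPU_NS / 1000000000 ))s"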

I started out with an Alpine-based image but the version of date it comes with wasn’t capable of parsing the ISO-format dates returned by the API. This was an interesting exercise in the use of curl with Unix sockets and jq for parsing JSON on the command line, but I thought I could do better.

The next step was a rendering of the script above into golang, which you can find over on GitHub. You’ll have to forgive my poor golang – I wouldn’t claim to know the language; this is just a cut-and-shut from numerous sources around the internet. Perhaps the only part worth mentioning is that I explicitly pass an empty version string to the golang Docker library so that you don’t get client-server version mismatch errors.

Having compiled this up into a static binary, I could then build a small image from scratch. I then wanted to build this using Docker Hub automated builds and a binary release on GitHub. This raises the thorny issue of how you make the binary executable once you’ve used ADD to download it into the image. There is one solution here that adds a very small C binary that can be used to perform the chmod. Having initially employed this method, I was reminded of another issue that I’d hit: I’d inadvertently doubled the size of our websphere-traditional images to over 3GB with a recursive chmod (the files get copied into a new layer with the modified permissions). So, in the end, I caved in and checked the binary into GitHub so that I could use a COPY and pick up the correct permissions.
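
The resulting Dockerfile is therefore about as simple as they come (the binary name is illustrative):

    FROM scratch
    COPY docker-cpu-usage /docker-cpu-usage
    ENTRYPOINT ["/docker-cpu-usage"]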

The resulting image, weighing in at just over 4MB, is on Docker Hub. As the instructions say, it can be run with the command:
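
The image name below is a placeholder (see Docker Hub for the real one); the important part is mounting the Docker socket so that the container can query the engine it is running on:

    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock example/docker-cpu-usage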

To test out the image, let’s spin up a container that should burn up the two cores allocated to my Docker for Mac VM:
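
For example, two busy loops inside an alpine container will do the job (any CPU-intensive image would work equally well):

    docker run -d --name burn alpine sh -c 'yes > /dev/null & yes > /dev/null & wait'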

If we leave it for a few minutes we see an output along the following lines:

The total CPU usage is, as we’d expect, twice the elapsed time. Let’s try again but this time run two containers and use cpuset to constrain them both to a single core:
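
Along these lines:

    docker run -d --name burn1 --cpuset-cpus 0 alpine sh -c 'yes > /dev/null'
    docker run -d --name burn2 --cpuset-cpus 0 alpine sh -c 'yes > /dev/null'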

This time, the results show that each container is getting half of the CPU time:

(Actually, you can see that the one that has been running longer has slightly more than half as it got the CPU to itself for a couple of seconds before the other container started!) Finally, and just for interest, let’s spin up an unconstrained WebSphere Liberty server:
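
For example, using the official image from Docker Hub:

    docker run -d --name liberty websphere-liberty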

After a minute, we see that it’s used just over 20 seconds of CPU time to start up:

And if we check again after half an hour, we see that without any load, the server has consumed very little extra CPU:

Barcelona Break

Monday, October 31st, 2016

In general, we’re not very good at combining business trips with pleasure but at half term I was due to be at a conference in Madrid for the latter part of the week and Christine was about to start a new collaboration based in Barcelona, so we decided to take the children over to Spain for a few days. Things didn’t get off to a great start with a three-hour delay on our Easyjet flight to Barcelona. To be fair, they did let us know of the delay before we left home and thankfully we’d already made arrangements for late arrival at our apartment.

On Sunday we took the metro to the Sagrada Familia, only to discover that it was sold out for the day. We therefore slowly made our way to Park Güell where we had booked in advance for a late afternoon entrance. Christine went off to the University on Monday whilst the children and I headed to the beach. Unfortunately you could barely see the beach for the mist, let alone the cable car across the harbour that we were intending to take. Luckily, as we waited to board the cable car the mist started to clear and by the time we arrived at Montjuïc the sun was out in force.

We spent some time in the Fort which became quite atmospheric when the mist rolled in again off the sea. Our walk down Plaza d’Espanya was cut short when Duncan failed to clear the large muddy puddle at the bottom of a very steep slide!

Christine was working again on Tuesday. Sadly the mist had turned to drizzle and I headed to the Museu Blau with the children (located dangerously close to the OpenStack summit that was kicking off that day!). For a very modern natural history museum, it seemed to specialise in glass cases with large numbers of exhibits in them, which wasn’t particularly child-friendly. The visit was saved by the temporary National Geographic Spinosaurus exhibition.

In the afternoon, we headed back to the Sagrada Familia having booked our tickets in advance this time. The cathedral has gained a very impressive ceiling since I last entered the building about 10 years ago. Although the rain had stopped by this point, unfortunately the damp conditions meant that we weren’t permitted to ascend the towers.

Having handed the children over to Christine on a metro platform, I took the fast train to Madrid, arriving just in time for the speaker dinner. The rest of the family flew back to the UK the following morning.

Running Weekend

Sunday, October 16th, 2016

It’s been a weekend for running. On Saturday Christine ran at the first of this year’s Hampshire Cross-Country races at Farley Mount. I didn’t feel 100% when I woke up so decided to save myself for Sunday. Although I felt much better by the time the races came round, it was probably still a wise decision (not least to reserve some energy for a barn dance in the evening!).

On Sunday it was Totton RC’s Stinger, which meant a return to Ocknell. It had been raining heavily during the night and was still going as we drove to the event. The sun had come out by the start so, although wet underfoot, it was actually quite warm.

I was slightly alarmed to be in the lead for the first couple of miles but around the three-mile mark three runners made a move (although I’m puzzled because the results that were posted suggest four). Most of the next four miles were spent racing around the gravel tracks in the Inclosure. The first two runners started to pull away and I had to work hard to stay in contact with the third-placed runner (or was it fourth?!). I started to make some ground as we left the tracks and worked our way back along the edge of the Inclosure but didn’t have the energy left to haul him in on the final climb up towards the finish (the sting).

Christine, meanwhile, had taken the children for a walk through a marsh, which meant they were covered in almost as much mud as me!
