Archive for the ‘WebSphere Application Server’ Category

Kubernetes arrives in Docker for Mac

Monday, January 8th, 2018

My focus for the last 18 months having been on deployment to Kubernetes, I was excited to hear the news back at DockerCon that Docker Inc were recognising the dominance of Kubernetes. This included adding support to Docker Enterprise Edition (alongside Swarm) and to Docker for Mac/Windows. The latter has now hit beta in the edge channel of Docker for Mac and the following are my first impressions.

Having not had any particular need for the latest and greatest Docker for some time, my first step was to switch from the stable channel back to edge. That’s a pretty painless process. You do lose any of your current containers and images, but you’ve got the ones you care about stored away in a registry somewhere, haven’t you?! Then open up the preferences, switch to the shiny new Kubernetes tab, check the box to Enable Kubernetes and hit Apply, followed by Install. As promised, it took a couple of minutes for the cluster to be created.

The UI leaves you a bit in the dark at this point but thankfully the email that arrived touting the new capability gave you a pointer as to where to go next: the install creates a kubectl context called docker-for-desktop. With this information I could access my new cluster from the command line:
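Roughly as follows (the node name is as documented; the version and status output will obviously vary):

    kubectl config use-context docker-for-desktop
    kubectl get nodes
    # should report a single node named docker-for-desktop in the Ready state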

Now to take the cluster for a quick spin. Let’s deploy Open Liberty via the Helm chart:
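Something like the following did the trick (the repository URL and chart name here are from memory, so treat them as illustrative):

    helm init
    helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
    helm install --name liberty ibm-charts/ibm-open-liberty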

And, due to the magic of Docker for Mac networking, after a short wait we are treated to the exposed NodePort running on localhost:

Open Liberty on Kubernetes on Docker for Mac
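
If you want to find the assigned port for yourself, a quick query does the job (the service name will depend on the release name used when installing the chart):

    kubectl get services
    # look for the NodePort mapping, e.g. 9080:3xxxx/TCP, then browse to http://localhost:<nodePort>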

Undoubtedly there will be issues but, at least at first glance, this support seems to go a long way towards answering those who see minikube as a barrier to making Kubernetes part of the developer’s workflow.

Multi-Arch Docker Images

Sunday, October 8th, 2017

Things have been a little hectic recently and a whole month has slipped past without a blog post. The following is old news now but it’s still something that I wanted to call out. As of mid-September, all the official images on Docker Hub have been multi-arch enabled and many of them, including the websphere-liberty image, are now available for multiple platforms.

So what does that mean, practically speaking? It means that, whether you are on x, p or z hardware, you can now use the same docker run websphere-liberty command and have it pull the appropriate image for your architecture. That may seem like a trivial thing: what was so hard about docker run ppc64le/websphere-liberty after all? What it really means, though, is that I can also use the same Dockerfiles, Compose files and Kubernetes configuration, regardless of what platform I’m on.

If you consider WebSphere Liberty, for example, we don’t have any architecture-specific code but we are obviously dependent upon Java, which both contains native code and depends on platform-specific libraries. That meant that, to build the ppc64le/websphere-liberty image, we had to change the Dockerfile to build from ppc64le/ibmjava. That’s no longer the case.
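
As a sketch (the application WAR is just a placeholder), a Dockerfile like the following now builds unchanged whatever the architecture, because the websphere-liberty image it is based on resolves to the right platform automatically:

    FROM websphere-liberty:webProfile7
    COPY app.war /config/dropins/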

To take another example, with my Microservice Builder hat on we are building Helm charts that we want to run on multiple platforms. Previously we’d had to rely on multiple charts or overrides to select the correct image for the platform. That’s also no longer the case.
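
To illustrate with a hypothetical fragment of a chart’s values.yaml: a single image reference now works everywhere, where previously a ppc64le deployment would have needed an override such as repository: ppc64le/websphere-liberty.

    image:
      repository: websphere-liberty
      tag: webProfile7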

Now just to be 100% clear, this is not about having a single image that can run on multiple architectures: that would require a hypervisor or emulator. The magic here is purely that websphere-liberty is no longer an image, it is a manifest that points to the layers that make up the images for each architecture, and, when the image is pulled, that indirection is resolved by the client to select the image for the current platform.
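
Depending on your Docker version, you can see this for yourself: with a recent CLI (and experimental features enabled) something along these lines lists one entry per architecture:

    docker manifest inspect websphere-liberty
    # the "manifests" array contains one image per platform, e.g. amd64, ppc64le, s390x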

If you want to know more of the technical detail and the journey to get this far, I suggest reading the blog posts by my colleagues Phil Estes and Utz Bacher. I also need to call out Tianon Gravi whose tireless work on the official images meant that enabling this support for WebSphere was entirely painless.

Microservice Builder GA Update

Wednesday, July 12th, 2017

Having posted here on the Microservice Builder beta, I thought it only fair to offer an update now that it is Generally Available. There is already the official announcement, various coverage in the press including ZDNet and ADT, a post from my new General Manager Denis Kennelly, and, indeed, my own post on the official blog, so I thought I’d focus on what has changed from a technical standpoint since the beta.

If I start with the developer CLI, the most significant change here is that you no longer need a Bluemix login. Indeed, if you aren’t logged in, you’ll no longer be prompted for potentially irrelevant information such as the sub-domain on Bluemix where you want the application to run. Note, however, that the CLI is still using back-end services out in the cloud to generate the projects so you’ll still need internet connectivity when performing a bx dev create.
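
By way of illustration (the plugin installation syntax here is from memory, so double-check against the current documentation):

    bx plugin install dev -r Bluemix
    bx dev create
    # answer the prompts for language, project name and so on; no Bluemix login required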

Moving on to the next part of the end-to-end flow, the Jenkins-based CI/CD pipeline: the Helm chart for this has been modified extensively. It is now based on the community chart which, most significantly, means that it uses the Kubernetes plugin for Jenkins. As a result, each of the build steps runs in its own container (Maven for the application build, Docker for the image build, and kubectl for the deploy) and those containers are spun up dynamically, as part of a Kubernetes pod representing the Jenkins slave, when required.

The Jenkinsfile has also been refactored to make extensive use of a Jenkins library. As you’ll see in the sample projects, this means that the generated Jenkinsfile is now very sparse:
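Little more than a call into that shared library, something along these lines (the library and parameter names are illustrative rather than definitive):

    @Library('MicroserviceBuilder') _
    microserviceBuilderPipeline {
      image = 'my-service'
    }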

I could say much more about the work we’ve done with the pipeline but to do so would be stealing the thunder from one of my colleagues who I know is penning an article on this subject.

Looking at the runtime portion, what we deploy for the Microservice Builder fabric has changed significantly. We had a fair amount of heartache as we considered security concerns in the inter-component communication. This led us to move the ELK stack and the configuration for the Liberty logstash feature out into a sample. This capability will return, although likely in a slightly different form. The fabric did gain a Zipkin server for the collation and display of OpenTracing data. Again, the security concerns hit home here: for now, the server is not persisting data and the dashboard is only accessible via kubectl port-forward.
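
Viewing traces therefore currently means something along the following lines (substitute the actual pod name from kubectl get pods):

    kubectl port-forward <zipkin-pod-name> 9411:9411
    # then browse to http://localhost:9411 to reach the Zipkin UI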

Another significant change, and one of the reasons I didn’t post this immediately, was that a week after we GA’d, IBM Spectrum Conductor for Containers morphed into IBM Cloud private. In the 1.2 release, this is largely a rebranding exercise but there’s certainly a lot more to come in this space. Most immediately for Microservice Builder, it means that you no longer need to add our Helm repository as it will be there in the App Center out of the box. It also meant a lot of search and replace for me in our Knowledge Center!

You may be wondering where we are heading next with Microservice Builder. As always, unfortunately I can’t disclose future product plans. What I can do is highlight existing activity that is happening externally. For example, if you look at the Google Group for the MicroProfile community, you will see activity ramping up there and proposals for a number of new components. Several of the Microservice Builder announcements also refer to the Istio service mesh project on which IBM is collaborating with Google. It’s still early days there but the project is moving fast and you can take a look at some of the exciting features on the roadmap.

Multi-Stage Docker Build

Friday, May 5th, 2017

Docker 17.05 introduced the ability to perform multiple build stages in a single Dockerfile, copying files between them. This brings to regular Docker build a capability that I’ve previously talked about in the context of Rocker, and something that’s of particular use in a compiled language like Java. Let’s see what it would look like in the context of the WebSphere Liberty ferret sample.

The original Dockerfile looks as follows:
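In outline it was simply this (the image tag and download URL are illustrative rather than exact):

    FROM websphere-liberty:webProfile7
    ADD https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.0/ferret-1.0.war /config/dropins/ferret.war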

We can see that it assumes that the application has already been built and just pulls in the WAR file, in this case from Maven Central. With a multi-stage build we can perform the build of the application and the build of the image in a single Dockerfile:
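Something like the following (tags and paths are illustrative; the key points are the named build stage and the COPY --from):

    FROM maven:onbuild AS build

    FROM websphere-liberty:webProfile7
    COPY --from=build /usr/src/app/target/ferret-1.0.war /config/dropins/ferret.war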

The first line uses the Maven on-build image to build the application using the source in the same directory as the Dockerfile. Although the stages are given an index by default, naming them using the AS keyword makes the file much more readable. Further down in the Dockerfile we can see that the COPY command takes the built WAR file from the first stage and copies it into the Liberty dropins directory as before. The important thing about all of this is that the final image doesn’t end up with the application source in it, or Maven, or an SDK – just the artifacts that are needed at runtime – thereby keeping down the final image size.

Introducing Microservice Builder

Monday, March 27th, 2017

When the frequency of blog posts drops on this site it generally has one of two causes: I’m busy and/or I’m working on something that’s IBM Confidential. Both have been true over the past six months or so whilst I’ve been working on something we’re calling Microservice Builder. A public beta was announced in the run-up to InterConnect and went live on the 24th, which means that I can now come up for air and say a little about the work we’ve done so far.

Although not limited to Java deployments, Microservice Builder pulls together multiple strands of work that we’ve been doing in the WebSphere space. First, there is the work that is being done in the MicroProfile community to define a set of standard APIs for building microservices in Java. Initially, this took a set of existing Java EE technologies (JAX-RS, CDI and JSON-P) but now additional APIs are being defined. You can start to see the results of this work in the Liberty March beta where there are new features for injecting environmental configuration and utilizing fault tolerance patterns such as timeout, bulkhead and circuit breaker.
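
To give a flavour, the sort of code this enables looks something like the following (these are the MicroProfile Config and Fault Tolerance annotations as they later stabilised, so the exact packages and attributes available in the beta may differ, and the class itself is hypothetical):

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    import org.eclipse.microprofile.config.inject.ConfigProperty;
    import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
    import org.eclipse.microprofile.faulttolerance.Timeout;

    @ApplicationScoped
    public class InventoryClient {

        // injected from an environment variable, system property or other config source
        @Inject
        @ConfigProperty(name = "inventory.url", defaultValue = "http://localhost:9080")
        String inventoryUrl;

        // fail fast if the downstream service is slow or repeatedly failing
        @Timeout(2000)
        @CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5)
        public String getInventory() {
            // call the downstream service here, e.g. with a JAX-RS client, using inventoryUrl
            return inventoryUrl;
        }
    }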

Another area where we’ve sought to improve the developer experience is by providing a fast-path to creating new projects. The Liberty App Accelerator has been around for some time now, allowing you to generate Java projects quickly through a web UI. We’ve taken this idea and extended it to cover Swift and Node.js. This can be achieved either through a web UI or through a new plugin to the Bluemix CLI. (Note that generated projects do not need to be deployed to Bluemix.) The plugin goes beyond just generating projects and allows you to build and run them locally using containers. This means that the developer no longer needs to have the prerequisites (e.g. Java, Maven and Liberty) installed locally.
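
In practice that flow looks something like this (command names as I remember them from the dev plugin, so check bx dev help for the definitive list):

    bx dev create   # generate a Java, Node.js or Swift project
    bx dev build    # build the project inside a tools container
    bx dev run      # run the application locally in a container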

For a runtime environment, we believe containers are a good fit for microservices and in the first instance we’re focusing on Kubernetes. That could be the newly announced Kubernetes in IBM Containers or it could be on-premises with IBM Spectrum Conductor for Containers. On top of Kubernetes, Microservice Builder adds a lightweight fabric, installed as a Helm chart, that simplifies deployment of Liberty-based services. Specifically, in this first release it generates key and trust stores to facilitate inter-service communication. It also configures an ELK (Elasticsearch-Logstash-Kibana) stack to receive and display information including trace, FFDC, garbage collection and HTTP access logs from the Liberty logstashCollector-1.0 feature.
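
For reference, enabling that on the Liberty side amounts to server.xml configuration along these lines (the host name, port and exact set of sources shown are illustrative):

    <featureManager>
        <feature>logstashCollector-1.0</feature>
    </featureManager>

    <logstashCollector
        source="message,trace,ffdc,garbageCollection,accessLog"
        hostName="logstash"
        port="5044"/>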

The final strand of Microservice Builder ties together the development and runtime environments via a Jenkins based pipeline. Once again, this is installed as a Helm chart, and is configured to automatically pick up projects from a GitHub or GitHub Enterprise organization. For a Java application, the pipeline will build and test using Maven, before creating a Docker image and pushing it to a registry. The Docker image is then deployed to a Kubernetes cluster using either the same or a separate pipeline.

To show all of this in action, we have taken the sample conference application from the MicroProfile community and broken it apart in to separate projects to deploy using Microservice Builder. Just follow the docs to recreate it in either your local minikube environment or with Spectrum Conductor for Containers.

Presentations from IBM InterConnect 2017

Sunday, March 26th, 2017

I’m finally back home after what feels like a very long week in Las Vegas at IBM’s InterConnect conference. I promised that I’d post my presentations on SlideShare and I’ll add a few comments here on how each session went.

After an Inner Circle session on Sunday, my first public session of the week was an introduction to containers with WebSphere traditional. This played to a full room which suggests that there is significant interest in the use of containers for existing workloads. Indeed, that was the point of the second half of the session, to describe scenarios where it may make sense to use containers with traditional WebSphere. That’s not to say that it always does and, during one-to-one sessions during the week, I found myself repeatedly cautioning customers against rushing in to the use of containers, particularly with ND, just for the sake of it.

https://www.slideshare.net/davidcurrie/how-to-containerize-websphere-application-server-traditional-and-why-you-might-want-to

My next session covered our new announcement around Microservice Builder. I’ll not say more here as I’ll cover this in a separate post.

https://www.slideshare.net/davidcurrie/microservice-builder-a-microservice-devops-pipeline-for-rapid-delivery-and-promotion

Unfortunately, I didn’t get to deliver this session on Liberty and IBM Containers as it clashed with another that I was presenting. As touched on briefly in this presentation, one of the other announcements at the conference was for support for Kubernetes in IBM Containers. There was lots of excitement around this and I urge you to go and check it out for yourself.

https://www.slideshare.net/davidcurrie/websphere-liberty-and-ibm-containers-the-perfect-combination-for-java-microservices

On Wednesday I had a joint presentation with Brian Paskin looking at options for scalability with Liberty and containers. This was very much Brian’s presentation though so I shan’t post it here. There was an accompanying lab in the afternoon that looked at Liberty collectives and at IBM Containers.

My last session of the week looked at some of the options when choosing a container orchestration platform: from Liberty collectives, through Swarm and Docker Datacenter, to Kubernetes with IBM Spectrum Conductor for Containers and IBM Containers. Many customers I spoke to this week were looking for a single definitive answer here but my response for now is still very much “it depends”.

https://www.slideshare.net/davidcurrie/choosing-a-container-platform-for-your-websphere-applications

Find me at IBM InterConnect 2017

Saturday, March 18th, 2017

I’m going to be at IBM’s InterConnect conference this coming week. If you’re going to be there too, there’s a quick run-down of the sessions I’ll be presenting below. The astute will notice that, due to a scheduling snafu, I’m supposed to be presenting two sessions at the same time on Tuesday. If you go to the Liberty/IBM Containers session then I’m afraid you’ll have to make do with Tom – be kind to him!

If you want to chat about any combination of microservices, containers and WebSphere, you can find me on the microservices ped in the WebSphere area of the expo hall from 5-7:30pm on Tuesday and again from 3-5pm on Wednesday. I’ll be kicking off the latter with a live demo of Microservice Builder, of which more in another post. For Inner Circle customers, I’ll also be talking about this topic at 11am on Sunday.

HAJ-5451 : How to Containerize WebSphere Application Server Traditional, and Why You Might Want To
Date/Time : Mon, 20-Mar, 11:15 AM-12:00 PM
Location : Mandalay Bay South, Level 2 – Surf D
Presenter(s) : David Currie, IBM

BMC-7014 : Roundtable Discussion on Building Java Microservices with WebSphere Liberty
Date/Time : Mon, 20-Mar, 02:00 PM-02:45 PM
Location : Mandalay Bay North, Level 0 – Tropics A
Presenter(s) : Alasdair Nottingham, IBM; David Currie, IBM

BMC-7085 : Meet the Expert on IBM WebSphere Application Server Liberty on Docker
Date/Time : Tue, 21-Mar, 02:30 PM-03:15 PM
Location : Concourse, Bayside B, Level 1 – Meet the Experts Forum # 1
Presenter(s) : David Currie, IBM; Tom Banks, IBM

HAM-5526 : IBM Microservice Builder: A Microservice DevOps Pipeline for Rapid Delivery and Promotion
Date/Time : Tue, 21-Mar, 03:45 PM-04:30 PM
Location : Mandalay Bay North, Level 0 – Islander F
Presenter(s) : David Currie, IBM; Jeremy Hughes, IBM

BMC-5983 : WebSphere Liberty and IBM Containers: The Perfect Combination for Java Microservices
Date/Time : Tue, 21-Mar, 03:45 PM-04:30 PM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : David Currie, IBM; Tom Banks, IBM

BMC-7014 : Roundtable Discussion on Building Java Microservices with WebSphere Liberty
Date/Time : Wed, 22-Mar, 08:00 AM-08:45 AM
Location : Mandalay Bay North, Level 0 – Tropics A
Presenter(s) : Alasdair Nottingham, IBM; David Currie, IBM

BMC-2714 : Utilizing WebSphere Application Server Liberty in Docker Containers for Scalability
Date/Time : Wed, 22-Mar, 10:15 AM-11:00 AM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : Brian S. Paskin, IBM; David Currie, IBM

HAJ-2718 : Utilizing IBM WebSphere Liberty in Docker Containers for Scalability (Lab)
Date/Time : Wed, 22-Mar, 01:00 PM-02:45 PM
Location : Mandalay Bay South, Level 3 – South Seas H
Presenter(s) : Brian S. Paskin, IBM; David Currie, IBM

BAS-5901 : Choosing a Container Platform for Your WebSphere Applications
Date/Time : Thu, 23-Mar, 10:30 AM-11:15 AM
Location : Mandalay Bay North, Level 0 – South Pacific A
Presenter(s) : David Currie, IBM; Tom Banks, IBM

How many processors?

Tuesday, January 31st, 2017

Reading Daniel Bryant’s O’Reilly publication Containerizing Continuous Delivery in Java reminded me of the challenge of determining how many processors are available to you when running in a container. In the case of Java, a call to Runtime.getRuntime().availableProcessors() should show this all-important information. A quick check reveals that, when called in an unconstrained container, this correctly returns the number of cores on my physical hardware (Docker on Linux) or assigned to the VM containing the Docker Engine (Docker Toolbox or Docker for Windows/Mac). If I use the --cpuset-cpus option on docker run to constrain the cores available to the container then this is also correctly reflected in the value returned. The difficulty arises when access to those CPUs is constrained in other ways.
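
The quick check amounts to little more than this (a hypothetical test class):

    public class Cpus {
        public static void main(String[] args) {
            // print the number of processors the JVM believes it has available
            System.out.println(Runtime.getRuntime().availableProcessors());
        }
    }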

Take, for example, the new --cpus option in Docker 1.13. Setting this to two on a four-way box, I still get four back from a call to availableProcessors() and rightly so: there are four processors and I may get simultaneous access to all four of them even if the cgroup is then going to make sure that I don’t get that access for more than half of the time. Another potential constraint is a highly multi-tenant environment. If I deploy my test application to Bluemix it tells me that there are 48 processors. That’s great but I’m pretty sure I’m not going to get exclusive access to all of those!
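
To make that concrete, on a four-core machine the results described above look something like this (the cpus-test image name is hypothetical):

    docker run --rm cpus-test                      # 4: unconstrained
    docker run --rm --cpuset-cpus 0,1 cpus-test    # 2: the core restriction is reflected
    docker run --rm --cpus 2 cpus-test             # 4: CPU time is capped but all four cores remain visible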

One example we’ve seen where this becomes a real problem is in native memory usage. By default, WebSphere Liberty uses the number of available processors to decide how many parallel threads its executor should support, and each of those threads takes up space in native memory. In a containerized environment where total memory is typically constrained (Bluemix containers are sold by the GB/hour) and some generic heuristic is often used to determine the heap size to allocate to the JVM, that can lead to memory exhaustion. That’s why you’ll see a GitHub issue from my colleague Erin that, among other things, proposes hard-coding a maximum on the number of threads for the executor service in our Docker images.
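
In the meantime, if you hit this yourself, one mitigation is to cap the default executor explicitly in server.xml, along these lines (the numbers are purely illustrative):

    <executor coreThreads="8" maxThreads="16"/>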