Archive for the ‘Work’ Category

Farewell IBM

Thursday, August 2nd, 2018

On 2nd August, I handed back my IBM badge, just shy of twenty years after I first joined the company. I’ll come back to the ‘why?’ and ‘where next?’ questions and start with a recap of those intervening years (with apologies for the consequent length of this post!).

I started at IBM Hursley on 6th October 1998, fresh out of university with a degree in Engineering and Computer Science. I was a month late for the beginning of the graduate programme having taken some time out to travel across Canada by Greyhound coach! I began working on IBM’s C++ CORBA offering (Component Broker) with a brief spell in test before switching to development in the transactions team. (Remember when ‘test’ and ‘development’ were two different teams?) Many of my colleagues in that team (too many to name but they should know who they are) formed the basis of a network that would define the shape of my future career. (My Component Broker mug is still going strong but I’m afraid I ditched the set of foils describing the product that I found when clearing out my desk!)

At university I’d used the then-nascent Java in a couple of projects, and those skills were to prove useful as we started to add a Java client. Before long, the focus switched to the newly-defined J2EE specifications and WebSphere Application Server was born. After working on the JTA and Activity Session implementations, I joined a team looking at integration with MQ. When the time came to implement an embedded JMS provider in WebSphere Application Server V6, it was natural that I should move to work on that.

Six years in, I was starting to make architectural decisions but wanted a better understanding of how customers were actually using our products. When the opportunity came up to work as a software consultant in IBM Software Services for WebSphere (aka Lab Services), I jumped at the chance. The next few years were spent travelling across Europe, doing everything from performance bake-offs and critical-situation resolution to first-of-a-kind projects. I particularly enjoyed this time, learning to survive on my wits on those occasions when it wasn’t possible to draw on that all-important network. This was also the period during which this blog began.

On returning from a short-term assignment in Norway, a by-now one-year-old daughter meant it was time to get my feet back under a development desk. Having worked with customers on WebSphere ESB, it was natural to join that team. From there, I had the pleasure of building and leading a new development team to take over what was to become WebSphere Appliance Management Center. We had great fun rewriting the offering to build on the new WebSphere Liberty Profile with a shiny new JavaScript front-end (thankfully IBM later moved on from Dojo though) in what I still think was one of the most passable efforts at agile I’ve seen in IBM.

Eventually, the team were moved to work on IBM API Management. The eight-hour time zone difference to the half of the team in California didn’t work for me and, after a nine-year break, I rejoined the WebSphere Application Server family. Initially, I was working on the open source Cloud Foundry buildpack. A side project relating to Netflix OSS was the start of an interest in microservices. From there, I led efforts relating to containerization, including the publication of official images on Docker Hub.

This, in turn, led to Microservice Builder: a platform for developing, building, and deploying cloud-native applications on Kubernetes. This was then rolled into an offering called Microclimate, which added a greater emphasis on the developer experience, and that brings us to the present day.

So why, after so many years working with such great people on such a variety of interesting projects, am I now set to leave? Sure, there have been frustrations in working for IBM, but I’m sure many of those are common to all large, shareholder-owned, multi-national companies. As an example, take the laying down of corporate instructions that mandate that all 380,000 employees be treated in some particular way that cannot possibly be equally applicable to all. Thankfully I’ve been blessed with managers who have all excelled in the flexible interpretation of those rules. Many of those same managers are helping to revive Hursley as the vibrant technical community that I first joined.

Really, my departure just boils down to wanting to experience working for a different company. I’ve often said that IBM is the best employer within a two-mile radius of my house and I’ve set a lot of store by that convenience. My LinkedIn profile has been ‘open to offers’ for a few years now but I’ve been resistant to the lure of London money/startups or the peripatetic life of the solution architect. In this case though, I was offered the opportunity to work from home, not as the lone outcast, but for a company that is almost entirely distributed. It was also an opportunity that would utilize the skills around the cloud and DevOps (in particular Kubernetes and Jenkins) that I’ve garnered over the past few years. Such is the overlap that I even credited one of my technical interviewers in a presentation I gave earlier this year when citing their work!

So, without further ado, from 28 August I will be a Senior Software Engineer at CloudBees, where I’ll be joining the architecture team for their core (Jenkins) offering. At eight years old, the company is very much a late-stage venture but, with the distribution list for my leaving email at IBM having more people on it than there are in the entire company, it will be quite a different prospect to working at IBM. Much more than that, I can’t tell you because, quite frankly, I don’t know, but I’m looking forward to new colleagues and challenges. Stay tuned to this blog to find out what happens next!


Scratching at Work

Saturday, May 12th, 2018

Not satisfied with a four-day Bank Holiday week, I was back in work today for a Scratch Day organised by the inimitable Dale Lane, supported by an all-star cast of IBMers, past and present. The day got off to an ‘exciting’ start with Duncan and me cycling there along Hursley Road. Emma joined us by car, just as the day got going, hotfoot from her swimming lesson.

There was a good turnout from IBM and other local families. On offer was a selection of projects from Code Club and Dale’s own Machine Learning for Kids. Emma and Duncan worked separately and I probably spent most of my time helping Duncan (although both are familiar with Scratch from school and home). Typically, Duncan picked two of the ‘advanced’ options but, having heard Dale talk about them at a lunchtime session, I was more than happy to try out a couple of the ML exercises.

We started with Judge a book which performs image classification on book covers to try and identify genre. I was a bit slow to realise that Duncan was logged in to my Amazon account whilst performing his searches but thankfully we switched to an incognito session before getting to the flesh-covered books under Romance! He’d picked Horror and Fantasy as two of his other genres and it wasn’t surprising that the classifier occasionally got those confused.

I had to help out a fair amount with the Headlines exercise as there was a lot of typing to enter the training set from different newspapers. We didn’t manage to finish before the end of the day but we still had an interesting discussion about the differences between tabloid and broadsheet headlines.

The event closed with an opportunity for the children to show what they had done to the others. Although some were a little reticent, this was a great opportunity for them to build a little confidence and soak up the applause that each invariably got.

All-in-all, we had a great day and my thanks go to all those that gave up a day (and more) to help out. We’ll certainly be checking out a few of the other projects and hope that Scratch Day makes a return to Hursley next year.

Index Developer Conference

Sunday, February 25th, 2018

IBM launched a new conference in San Francisco under the name Index and I was lucky enough to attend. This wasn’t your usual IBM conference focused on brands and products. Although the tracks were aligned with IBM’s strategic areas (Cloud, Blockchain and AI talks were much in evidence, for example) it really was a developer conference, with keynotes and sessions from renowned figures across the industry.

A recording of my session, covering deploying Jenkins on Kubernetes with Helm and deploying to Kubernetes from Jenkins with Helm, is available on the conference playlist. You can find the deck on SlideShare and the demo material on GitHub. For those who know what I work on, it will be no surprise that this is based on our discoveries when developing Microservice Builder. I highly recommend you also check out some of the other sessions on the playlist and watch out for Index 2019!

The timing of the conference meant I had Friday to be a tourist with some colleagues. We headed over to SF MoMA and then made the most of the sunshine with a stroll along the waterfront to see the sea lions, before having lunch overlooking the bay.

Optional Kubernetes resources and PodPresets

Thursday, July 27th, 2017

The sample for Microservice Builder is intended to run on top of the Microservice Builder fabric and also to utilize the ELK sample. As such, the Kubernetes configurations for the microservice pods all bind to a set of resources (secrets and config-maps) created by the Helm charts for the fabric and ELK sample. The slightly annoying thing is that the sample would work perfectly well without these (you just wouldn’t get any logging to the ELK stack) except that, as we shall see in a moment, deployment fails if the fabric and ELK sample have not already been deployed. In this post we’ll explore a few possibilities as to how these resources could be made optional.

I’m going to assume a minikube environment here and we’re going to try to deploy just one of the microservices as follows:
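
Something along these lines, where the image name and the names of the resources created by the fabric and ELK sample charts are illustrative rather than exact:

    apiVersion: extensions/v1beta1       # Deployment API group of the Kubernetes 1.7 era
    kind: Deployment
    metadata:
      name: vote
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: vote
        spec:
          containers:
          - name: vote
            image: microservice-vote:latest    # illustrative image name
            env:
            - name: LOGSTASH_ENDPOINT
              valueFrom:
                configMapKeyRef:
                  name: sample-elk-config      # created by the ELK sample chart
                  key: logstash-endpoint
            volumeMounts:
            - name: keystores
              mountPath: /etc/wlp/config/keystore
          volumes:
          - name: keystores
            secret:
              secretName: mb-keystore          # created by the fabric chart

This would be deployed with a kubectl apply -f vote-deployment.yaml.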

If you then perform a kubectl describe for the pod that is created, you’ll see that it fails to start as it can’t bind the volume mounts:
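
The events reported for the pod include warnings along these lines (the volume and resource names will match whatever the deployment references):

    Events:
      Warning  FailedMount  MountVolume.SetUp failed for volume "keystores" :
                            secrets "mb-keystore" not found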

Elsewhere in the output, though, you’ll see a clue to our first plan of attack:
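
Each volume is listed in the describe output with an Optional field, something like:

    Volumes:
      keystores:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  mb-keystore
        Optional:    false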

Doesn’t that optional flag look promising?! As of Kubernetes 1.7 (and thanks to my one-time colleague Michael Fraenkel) we can mark our usage of secrets and config-maps as optional. Our revised pod spec would now look as follows:
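
With the same illustrative names as before, the relevant fragments become:

    env:
    - name: LOGSTASH_ENDPOINT
      valueFrom:
        configMapKeyRef:
          name: sample-elk-config
          key: logstash-endpoint
          optional: true          # tolerate the ELK sample being absent
    volumes:
    - name: keystores
      secret:
        secretName: mb-keystore
        optional: true            # tolerate the fabric being absent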

And lo and behold, with that liberal sprinkling of optional attributes, we can now successfully deploy the service without either the fabric or the ELK sample. Success! But why stop there? All of this is boilerplate that is repeated across all our microservices. Wouldn’t it be better if it simply wasn’t there in the pod spec and we just added it when it was needed? Another new resource type in Kubernetes 1.7 comes to our rescue: the PodPreset. A pod preset allows us to inject just this kind of configuration at deployment time into pods that match a given selector.

We can now slim our deployment down to the bare minimum that we want to have in our basic environment:
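
Stripped of the logging and keystore configuration, the deployment reduces to something like this:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: vote
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: vote
            runtime: liberty      # the label the pod preset will match on
        spec:
          containers:
          - name: vote
            image: microservice-vote:latest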

Note that we have also added that runtime: liberty label to the pod, which is what we’re going to use to match on. In our advanced environment, we don’t want to be adding the resources to every pod in the environment; in particular, we don’t want to add them to those that aren’t even running Liberty. This slimmed-down deployment works just fine, in the same way that the optional version did.

Now, what do we have to do to get all of that configuration back in an environment where we do have the fabric and ELK sample deployed? Well, we define it in a pod preset as follows:
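
A sketch of such a pod preset, reinstating the same illustrative environment variable and volume as earlier:

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: liberty-logging
    spec:
      selector:
        matchLabels:
          runtime: liberty        # only applied to pods carrying this label
      env:
      - name: LOGSTASH_ENDPOINT
        valueFrom:
          configMapKeyRef:
            name: sample-elk-config
            key: logstash-endpoint
      volumeMounts:
      - name: keystores
        mountPath: /etc/wlp/config/keystore
      volumes:
      - name: keystores
        secret:
          secretName: mb-keystore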

Note that the selector is matching on the label that we defined in the pod spec earlier. Now, pod presets are currently applied by something in Kubernetes called admission control and, because they are still alpha, minikube doesn’t enable the admission controller for PodPresets by default. We can enable it as follows:
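
On minikube v0.21.0 or later, that means passing the admission controller list to the API server at start-up, with PodPreset appended to the defaults (the exact default set varies by Kubernetes version):

    minikube start --extra-config=apiserver.Admission.PluginNames=\
    NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,PodPreset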

(Note that, prior to minikube v0.21.0 this property was called apiserver.GenericServerRunOptions.AdmissionControl, a change that cost me half an hour of my life I’ll never get back!)

With the fabric, ELK sample and pod preset deployed, we now find that our pod regains its volume mounts when deployed courtesy of the admission controller:
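
With the illustrative names used throughout, the mounts section of the describe output regains an entry of this shape, and the pod is annotated to record the preset that was applied:

    Mounts:
      /etc/wlp/config/keystore from keystores (rw)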

Pod presets are tailor-made for this sort of scenario where we want to inject secrets and config maps, but even they don’t go far enough for something like Istio where we want to inject a whole new container into the pod (the Envoy proxy) at deployment time. Admission controllers in general also have their limitations in that they have to be compiled into the API server and, as we’ve seen, they have to be specified when the API server starts up. If you need something a whole lot more dynamic then take a look at the newly introduced initializers.

One last option for those who aren’t yet on Kubernetes 1.7: we’re in the process of moving our generated microservices to use Helm, and in a Helm chart template you can make configuration optional. For example, we might define a logging option in our values.yaml with a default value of disabled, and then we can define constructs along the following lines in our pod spec:
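
A sketch of what that could look like, guarding the same illustrative volume as earlier with the logging value:

    # values.yaml
    logging: disabled

    # deployment template fragment
    {{- if eq .Values.logging "enabled" }}
    volumes:
    - name: keystores
      secret:
        secretName: mb-keystore
    {{- end }}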

Then all we’ve got to do when we’re deploying to our environment with the fabric and ELK sample in place is to specify an extra --set logging=enabled on our helm install. Unlike the pod preset, this does mean that the logic is repeated in the Helm chart for every microservice, but it certainly wins in the portability stakes.

Private repositories on Docker Hub

Sunday, July 16th, 2017

Sometimes Docker Hub really is just the quickest and easiest way to share an image from one place to another, particularly when the place I’m trying to share to is expecting to just do a docker pull. It’s not always the case that I want to share those images with the rest of the world though. Docker Hub’s answer to this is the private repository but, on a free plan, you only get one private repository. What you have to remember though is that a repository can contain multiple images: they all share the same name but each has a different tag.

So, a while back I created a repository in my personal namespace called private and made it private using the button on the settings page.

When I then want to push an image up I use the local name as the tag. For example:
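
Assuming a local image called myimage and a Docker Hub user called username:

    docker tag myimage username/private:myimage
    docker push username/private:myimage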

Simple as that. There are obviously limitations here, in that I lose the ability to have multiple versions of my image with different tags, but so far, for my limited use cases, I’ve been able to live with that. In fairness to Docker Inc, I should say that having multiple private repositories is not the only reason to pay for an account on Docker Hub: you also get the ability to run parallel builds.

Microservice Builder GA Update

Wednesday, July 12th, 2017

Having posted here on the Microservice Builder beta, I thought it only fair to offer an update now that it is Generally Available. There is already the official announcement, various coverage in the press including ZDNet and ADT, a post from my new General Manager Denis Kennelly, and, indeed, my own post on the official blog, so I thought I’d focus on what has changed from a technical standpoint since the beta.

If I start with the developer CLI, the most significant change here is that you no longer need a Bluemix login. Indeed, if you aren’t logged in, you’ll no longer be prompted for potentially irrelevant information such as the sub-domain on Bluemix where you want the application to run. Note, however, that the CLI is still using back-end services out in the cloud to generate the projects so you’ll still need internet connectivity when performing a bx dev create.

Moving on to the next part of the end-to-end flow, the Jenkins-based CI/CD pipeline: the Helm chart for this has been modified extensively. It is now based on the community chart which, most significantly, means that it is using the Kubernetes plugin for Jenkins. This results in the use of separate containers for each of the build steps (with Maven for the app build, Docker for the image build, and kubectl for the deploy) and those containers are spun up dynamically, as part of a Kubernetes pod representing the Jenkins slave, when required.

The Jenkinsfile has also been refactored to make extensive use of a Jenkins library. As you’ll see in the sample projects, this means that the generated Jenkinsfile is now very sparse:
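
From memory of the sample projects, the whole file amounts to little more than a library import and a single pipeline step, along these lines (the library and parameter names are indicative rather than exact):

    @Library('MicroserviceBuilder') _
    microserviceBuilderPipeline {
      image = 'microservice-vote'
    }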

I could say much more about the work we’ve done with the pipeline but to do so would be stealing the thunder from one of my colleagues who I know is penning an article on this subject.

Looking at the runtime portion, what we deploy for the Microservice Builder fabric has changed significantly. We had a fair amount of heartache as we considered security concerns in the inter-component communication. This led us to move the ELK stack and configuration for the Liberty logstash feature out into a sample. This capability will return, although likely in a slightly different form. The fabric did gain a Zipkin server for the collation and display of OpenTracing data. Again, the security concerns hit home here and, for now, the server is not persisting data and the dashboard is only accessible via kubectl port-forward.

Another significant change, and one of the reasons I didn’t post this immediately, was that a week after we GA’d, IBM Spectrum Conductor for Containers morphed into IBM Cloud private. In the 1.2 release, this is largely a rebranding exercise but there’s certainly a lot more to come in this space. Most immediately for Microservice Builder, it means that you no longer need to add our Helm repository as it will be there in the App Center out of the box. It also meant a lot of search and replace for me in our Knowledge Center!

You may be wondering where we are heading next with Microservice Builder. As always, unfortunately I can’t disclose future product plans. What I can do is highlight existing activity that is happening externally. For example, if you look at the Google Group for the MicroProfile community, you will see activity ramping up there and proposals for a number of new components. Several of the Microservice Builder announcements also refer to the Istio service mesh project on which IBM is collaborating with Google. It’s still early days there but the project is moving fast and you can take a look at some of the exciting features on the roadmap.

Multi-Stage Docker Build

Friday, May 5th, 2017

Docker 17.05 introduced the ability to perform multiple build stages in a single Dockerfile, copying files between them. This brings to a regular Docker build a capability that I’ve previously talked about in the context of Rocker, and something that’s of particular use in a compiled language like Java. Let’s see what it would look like in the context of the WebSphere Liberty ferret sample.

The original Dockerfile looks as follows:
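
It was, give or take the exact tags and version numbers, along these lines:

    FROM websphere-liberty:webProfile7
    ADD https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.2/ferret-1.2.war /config/dropins/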

We can see that it assumes that the application has already been built and just pulls in the WAR file, in this case from Maven Central. With a multi-stage build we can perform the build of the application and the build of the image in a single Dockerfile:
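
A sketch of the multi-stage equivalent, assuming the Maven onbuild image’s conventional /usr/src/app working directory and carrying over the illustrative WAR name:

    FROM maven:3.5-jdk-8-onbuild AS build

    # Copy only the built WAR from the first stage into the runtime image
    FROM websphere-liberty:webProfile7
    COPY --from=build /usr/src/app/target/ferret-1.2.war /config/dropins/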

The first line uses the Maven onbuild image to build the application using the source in the same directory as the Dockerfile. Although the stages are given an index by default, naming them using the AS keyword makes the file much more readable. Further down in the Dockerfile we can see that the COPY command takes the built WAR file from the first stage and copies it into the Liberty dropins directory as before. The important thing about all of this is that the final image doesn’t end up with the application source in it, or Maven, or an SDK – just the artifacts that are needed at runtime – thereby keeping down the final image size.

Introducing Microservice Builder

Monday, March 27th, 2017

When the frequency of blog posts drops on this site it generally has two causes: I’m busy and/or I’m working on something that’s IBM Confidential. Both of these have been true over the past six months or so whilst I’ve been working on something we’re calling Microservice Builder. A public beta, announced in the run-up to InterConnect, went live on the 24th, which means that I can now come up for air and say a little about the work we’ve done so far.

Although not limited to Java deployments, Microservice Builder pulls together multiple strands of work that we’ve been doing in the WebSphere space. First, there is the work that is being done in the MicroProfile community to define a set of standard APIs for building microservices in Java. Initially, this took a set of existing Java EE technologies (JAX-RS, CDI and JSON-P) but now additional APIs are being defined. You can start to see the results of this work in the Liberty March beta where there are new features for injecting environmental configuration and utilizing fault tolerance patterns such as timeout, bulkhead and circuit breaker.

Another area where we’ve sought to improve the developer experience is by providing a fast-path to creating new projects. The Liberty App Accelerator has been around for some time now, allowing you to generate Java projects quickly through a web UI. We’ve taken this idea and extended it to cover Swift and Node.js. This can be achieved either through a web UI or through a new plugin to the Bluemix CLI. (Note that generated projects do not need to be deployed to Bluemix.) The plugin goes beyond just generating projects and allows you to build and run them locally using containers. This means that the developer no longer needs to have the prerequisites (e.g. Java, Maven and Liberty) installed locally.

For a runtime environment, we believe containers are a good fit for microservices and in the first instance we’re focusing on Kubernetes. That could be the newly announced Kubernetes in IBM Containers or it could be on-premises with IBM Spectrum Conductor for Containers. On top of Kubernetes, Microservice Builder adds a lightweight fabric, installed as a Helm chart, that simplifies deployment of Liberty-based services. Specifically, in this first release it generates key and trust stores to facilitate inter-service communication. It also configures an ELK (Elasticsearch-Logstash-Kibana) stack to receive and display information including trace, FFDC, garbage collection and HTTP access logs from the Liberty logstashCollector-1.0 feature.

The final strand of Microservice Builder ties together the development and runtime environments via a Jenkins-based pipeline. Once again, this is installed as a Helm chart, and is configured to automatically pick up projects from a GitHub or GitHub Enterprise organization. For a Java application, the pipeline will build and test using Maven, before creating a Docker image and pushing it to a registry. The Docker image is then deployed to a Kubernetes cluster using either the same or a separate pipeline.

To show all of this in action, we have taken the sample conference application from the MicroProfile community and broken it apart into separate projects to deploy using Microservice Builder. Just follow the docs to recreate it in either your local minikube environment or with Spectrum Conductor for Containers.