Orfű Orienteering

August 25th, 2017

The next leg of our Hungarian adventure took us to Orfű, a small tourist village half an hour north of the city of Pécs (where we stopped briefly to stock up at Tescos!). Orfű sits beside a string of lakes on the edge of the Mecsek hills (the highest mountain in Hungary is only just over 1000m). We checked in to our shady apartment and then walked down to the village hall where registration for the Hungaria Cup was taking place. There was a bit of a queue for the ‘foreign clubs’ and we didn’t help! We had underpaid and, to make the sums harder, I’d paid in Hungarian Forints but the transfer had arrived in Euros, and I wanted to pay the balance in Forints!

The next day, we walked to the assembly area for the orienteering and then promptly had to retrace half our steps to the start which was high on the hill above the apartment. Thankfully they had decided to let the open courses start whenever they liked so Emma and Duncan went to the start with Christine and set off together. They were on the taped course, a great idea which allowed them to either follow the tape the whole way (as they did on Day 1) or make the course significantly shorter by following the obvious shortcuts on the map (as they did on subsequent days). I struggled on the steep climbs in the heat but, at least travelling slowly, I didn’t waste much time on navigational errors. Afterwards, we cooled off in the aqua park by the lake which was free to competitors.

Day 2 had the same assembly area but we drove this time as, with starts after twelve, we didn’t want to walk there in the midday sun. Thankfully my course was 2km shorter but I still didn’t manage to break 10 min/k. Emma failed to punch one of the controls despite having been with Duncan but the organisers were sympathetic and reinstated her. After the day had cooled a little, we climbed up the lookout for views over the surrounding hills.

The assembly area moved for Day 3 and the courses got shorter again for a blast around an area filled with massive sink holes. The terrain obviously suited Christine as she won her course, bringing her up into third place overall. We didn’t discover this until after the prizegiving though (which took place every night at the event campsite, followed by a disco until midnight which we could hear across the valley from our apartment). We headed into Pécs to take in the Turkish architecture and an ice cream. With temperatures still in the high 30s we didn’t last long though.

The assembly moved again for the last two days to the neighbouring village of Abaliget. Christine was off early and took the children with her. She improved her position again, finishing second. The children made it back before her though and even had an interview with the commentator. I had a late start and, after some early blunders, was caught by the leader of my course, who had started four minutes behind me. I was pleased to be able to hang on to him for the middle section. We made a return trip to the aqua park afterwards.

The final day of the orienteering was a chasing start, or at least it was for Christine. My cumulative time was more than 40 minutes behind the leader, which meant just starting off at minute intervals. We were back with the sink holes again and I had a pretty clean run, finishing second on the day which brought me up to seventh overall. Christine was also second which meant she retained her third place overall and secured a place on the podium. Thankfully she didn’t win the 12 (screw fit) light bulbs the men got but we did have a bottle of wine and 3 litres of apple juice to drink before leaving the country! We took the cave tour afterwards which was an interesting experience given it was all in Hungarian. If nothing else, it was nice and cool.

After one last night in the apartment, it was time to say goodbye to Orfű and head north for the final chapter of our holiday…

For those who are particularly interested, these are my routes from the five days (although my GPS failed to get a lock at the start of Day 1).

Boiling Budapest

August 20th, 2017

The Hungaria Cup came up when we were searching for possible summer holiday orienteering and, neither of us having ever been to the country before, we decided to give it a try. Part of our decision process also included checking the average seasonal temperature: a very reasonable mid-20s °C. However, when we landed in Budapest there was a heatwave in force and the temperature was a hot but dry 39°C! The top-floor apartment we were staying in downtown Pest did have a standalone air-con unit but it simply couldn’t compete and the temperature inside didn’t drop below 30°C whilst we were there, which made for some uncomfortable nights.

On our first full day, we crossed the chain bridge to Buda and walked up the hill (the shady path on the hillside looking preferable to the long queue for the funicular). To escape the heat, we descended into the Labyrinth under Buda Castle, containing a bizarre mixture of waxwork figures dressed in mouldy opera outfits and Dracula-themed displays. The latter certainly had Emma spooked!

We returned via the picturesque Matthias Church and Fisherman’s Bastion. Stopping in a playground on the way down, Emma was unfortunately stung by a wasp but thankfully didn’t react too badly.

The next day, we headed past the Parliament Building to Margaret Island, which was taking a break between being the venue for the FINA World Champs and the World Masters, and joined what seemed like most of the local population at Palatinus Strand. There was no need to sample the thermal waters with the outdoor pools being quite warm enough. Sadly Duncan wasn’t quite tall enough for the water slides but Emma certainly enjoyed dragging me down them!

City Park was the venue for our last full day in Budapest. We had booked in for the circus mid-afternoon and, not being entirely sure what to do with ourselves in the interim, we ended up in the zoo. Given the temperature, part of the attraction of the circus was that the performance took place on ice. It was a superb show and the acrobatics would have been breathtaking even without the addition of ice skates!

Whilst we were in the big top the weather broke and we emerged into a torrential downpour. The rain continued the next day but it was time to pick up the hire car and move on to the next part of the holiday anyway…


Optional Kubernetes resources and PodPresets

July 27th, 2017

The sample for Microservice Builder is intended to run on top of the Microservice Builder fabric and also to utilize the ELK sample. As such, the Kubernetes configurations for the microservice pods all bind to a set of resources (secrets and config-maps) created by the Helm charts for the fabric and ELK sample. The slightly annoying thing is that the sample would work perfectly well without these (you just wouldn’t get any logging to the ELK stack) except that, as we shall see in a moment, deployment fails if the fabric and ELK sample have not already been deployed. In this post we’ll explore a few possibilities for how these resources could be made optional.

I’m going to assume a minikube environment here and we’re going to try to deploy just one of the microservices as follows:
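Something along the following lines, where the manifest name is illustrative and stands in for whichever of the generated microservices you pick:

    # deploy a single microservice from the sample (manifest name illustrative)
    kubectl apply -f catalog-deployment.yaml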

If you then perform a kubectl describe for the pod that is created, you’ll see that it fails to start as it can’t bind the volume mounts:
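The exact wording varies by Kubernetes version, but the failure surfaces as FailedMount events of roughly this shape (the secret name here is illustrative of those created by the fabric chart):

    Events:
      Warning  FailedMount  MountVolume.SetUp failed for volume "keystore" :
                            secrets "mb-keystore" not found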

Elsewhere in the output though you’ll see a clue to our first plan of attack:
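Alongside each volume, kubectl describe reports whether it is optional (again, names illustrative):

    Volumes:
      keystore:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  mb-keystore
        Optional:    false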

Doesn’t that optional flag look promising?! As of Kubernetes 1.7 (and thanks to my one-time colleague Michael Fraenkel) we can mark our usage of secrets and config-maps as optional. Our revised pod spec would now look as follows:
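The following fragment is a sketch rather than the exact original, with the secret and config-map names standing in for those created by the fabric and ELK charts:

    spec:
      containers:
      - name: catalog
        image: example/catalog:1.0
        env:
        - name: LOGSTASH_HOST
          valueFrom:
            configMapKeyRef:
              name: logstash-config   # created by the ELK sample chart (name illustrative)
              key: host
              optional: true          # don't fail if the config-map is absent
        volumeMounts:
        - name: keystore
          mountPath: /etc/wlp/config/keystore
      volumes:
      - name: keystore
        secret:
          secretName: mb-keystore     # created by the fabric chart (name illustrative)
          optional: true              # don't fail if the secret is absent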

And lo and behold, with that liberal sprinkling of optional attributes we can now successfully deploy the service without either the fabric or ELK sample. Success! But why stop there? All of this is boilerplate that is repeated across all our microservices. Wouldn’t it be better if it simply wasn’t there in the pod spec and we just added it when it was needed? Another new resource type in Kubernetes 1.7 comes to our rescue: the PodPreset. A pod preset allows us to inject just this kind of configuration at deployment time to pods that match a given selector.

We can now slim our deployment down to the bare minimum that we want to have in our basic environment:
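A sketch of the slimmed-down deployment (names again illustrative):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: catalog
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: catalog
            runtime: liberty          # the label the pod preset will select on
        spec:
          containers:
          - name: catalog
            image: example/catalog:1.0
            ports:
            - containerPort: 9080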

Note that we have also added that runtime: liberty label to the pod, which is what we’re going to use to match on. In our advanced environment, we don’t want to be adding the resources to every pod in the environment; in particular, we don’t want to add them to those that aren’t even running Liberty. This slimmed-down deployment works just fine, in the same way that the optional version did.

Now, what do we have to do to get all of that configuration back in an environment where we do have the fabric and ELK sample deployed? Well, we define it in a pod preset as follows:
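A pod preset carrying the same configuration would look something like this (resource names as in the earlier sketches):

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: liberty-logging
    spec:
      selector:
        matchLabels:
          runtime: liberty            # only inject into pods carrying this label
      env:
      - name: LOGSTASH_HOST
        valueFrom:
          configMapKeyRef:
            name: logstash-config
            key: host
      volumeMounts:
      - name: keystore
        mountPath: /etc/wlp/config/keystore
      volumes:
      - name: keystore
        secret:
          secretName: mb-keystore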

Note that the selector is matching on the label that we defined in the pod spec earlier. Now, pod presets are currently applied by something in Kubernetes called admission control and, because they are still alpha, minikube doesn’t enable the admission controller for PodPresets by default. We can enable it as follows:
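With minikube v0.21.0 or later, that means passing the list of admission control plugins at start-up; the set of default plugins to keep alongside PodPreset may vary with the Kubernetes version, so treat this list as indicative:

    minikube start \
      --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,PodPreset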

(Note that, prior to minikube v0.21.0 this property was called apiserver.GenericServerRunOptions.AdmissionControl, a change that cost me half an hour of my life I’ll never get back!)

With the fabric, ELK sample and pod preset deployed, we now find that our pod regains its volume mounts when deployed courtesy of the admission controller:
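For example, kubectl describe shows the injected mounts again, along with an annotation recording which preset was applied (names and resource version illustrative):

    Annotations:    podpreset.admission.kubernetes.io/podpreset-liberty-logging=1234
    ...
        Mounts:
          /etc/wlp/config/keystore from keystore (rw)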

Pod presets are tailor-made for this sort of scenario where we want to inject secrets and config maps but even they don’t go far enough for something like Istio, where we want to inject a whole new container into the pod (the Envoy proxy) at deployment time. Admission controllers in general also have their limitations in that they have to be compiled into the API server and, as we’ve seen, they have to be specified when the API server starts up. If you need something a whole lot more dynamic, take a look at the newly introduced initializers.

One last option for those who aren’t yet on Kubernetes 1.7: we’re in the process of moving our generated microservices to use Helm and, in a Helm chart template, you can make configuration optional. For example, we might define a logging option in our values.yaml with a default value of disabled, and then we can define constructs along the following lines in our pod spec:
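A sketch of that construct, with the same illustrative names as before; the guarded block sits alongside the rest of the container definition in the deployment template:

    {{- if eq .Values.logging "enabled" }}
            env:
            - name: LOGSTASH_HOST
              valueFrom:
                configMapKeyRef:
                  name: logstash-config   # name illustrative
                  key: host
    {{- end }}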

Then all we’ve got to do when we’re deploying to our environment with the fabric and ELK sample in place is to specify an extra --set logging=enabled on our helm install. Unlike the pod preset, this does mean that the logic is repeated in the Helm chart for every microservice but it certainly wins on the portability stakes.
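For example, assuming the chart lives in a local directory:

    helm install --set logging=enabled ./catalog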

Private repositories on Docker Hub

July 16th, 2017

Sometimes Docker Hub really is just the quickest and easiest way to share an image from one place to another, particularly when the place I’m trying to share to is expecting to just do a docker pull. It’s not always the case that I want to share those images with the rest of the world though. Docker Hub’s answer to this is the private repository but, on a free plan, you only get one private repository. What you have to remember though is that a repository can contain multiple images: they all share the same name but each has a different tag.

So, a while back I created a repository in my personal namespace called private and made it private using the button on the settings page.

When I then want to push an image up I use the local name as the tag. For example:
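Assuming a Docker Hub user name of user and a local image called myimage (both illustrative):

    # re-tag the local image into the single private repository, then push
    docker tag myimage user/private:myimage
    docker push user/private:myimage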

Simple as that. There are obviously limitations here in that I lose the ability to have multiple versions of my image with different tags but so far, for my limited use cases, I’ve been able to live with that. In fairness to Docker Inc, I should say that having multiple private repositories is not the only reason to pay for an account on Docker Hub: you also get the ability to run parallel builds, for example.

Microservice Builder GA Update

July 12th, 2017

As I posted here on the Microservice Builder beta, I thought it only fair that I should offer an update now that it is Generally Available. There is already the official announcement, various coverage in the press including ZDNet and ADT, a post from my new General Manager Denis Kennelly, and, indeed, my own post on the official blog, so I thought I’d focus on what has changed from a technical standpoint since the beta.

If I start with the developer CLI, the most significant change here is that you no longer need a Bluemix login. Indeed, if you aren’t logged in, you’ll no longer be prompted for potentially irrelevant information such as the sub-domain on Bluemix where you want the application to run. Note, however, that the CLI is still using back-end services out in the cloud to generate the projects so you’ll still need internet connectivity when performing a bx dev create.

Moving on to the next part of the end-to-end flow, the Jenkins-based CI/CD pipeline: the Helm chart for this has been modified extensively. It is now based on the community chart which, most significantly, means that it is using the Kubernetes plugin for Jenkins. This results in the use of separate containers for each of the build steps (Maven for the app build, Docker for the image build, and kubectl for the deploy) and those containers are spun up dynamically, as part of a Kubernetes pod representing the Jenkins slave, when required.

The Jenkinsfile has also been refactored to make extensive use of a Jenkins library. As you’ll see in the sample projects, this means that the generated Jenkinsfile is now very sparse:
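From memory, it amounts to little more than a call into that shared library, along these lines (library and image names illustrative):

    @Library('MicroserviceBuilder') _
    microserviceBuilderPipeline {
      image = 'catalog'   // the name of the image to build and deploy
    }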

I could say much more about the work we’ve done with the pipeline but to do so would be stealing the thunder from one of my colleagues who I know is penning an article on this subject.

Looking at the runtime portion, what we deploy for the Microservice Builder fabric has changed significantly. We had a fair amount of heartache as we considered security concerns in the inter-component communication. This led us to move the ELK stack and configuration for the Liberty logstash feature out into a sample. This capability will return although likely in a slightly different form. The fabric did gain a Zipkin server for collation and display of OpenTracing data. Again, the security concerns hit home here and, for now, the server is not persisting data and the dashboard is only accessible via kubectl port-forward.

Another significant change, and one of the reasons I didn’t post this immediately, was that a week after we GA’d, IBM Spectrum Conductor for Containers morphed into IBM Cloud private. In the 1.2 release, this is largely a rebranding exercise but there’s certainly a lot more to come in this space. Most immediately for Microservice Builder, it means that you no longer need to add our Helm repository as it will be there in the App Center out of the box. It also meant a lot of search and replace for me in our Knowledge Center!

You may be wondering where we are heading next with Microservice Builder. As always, unfortunately I can’t disclose future product plans. What I can do is highlight existing activity that is happening externally. For example, if you look at the Google Group for the MicroProfile community, you will see activity ramping up there and proposals for a number of new components. Several of the Microservice Builder announcements also refer to the Istio service mesh project on which IBM is collaborating with Google. It’s still early days there but the project is moving fast and you can take a look at some of the exciting features on the roadmap.

Emma Goes Ape

July 9th, 2017

Having passed the minimum age limit, Emma was keen to try out the adult Go Ape at Itchen Valley Country Park. There was a certain amount of expectation setting that had to be done before we left home as she needed shoes on to be taller than the limit of 140cm, but they seemed fine with that when we arrived at check-in. Quite apart from any qualms over heights, my back gave out last Monday, and so the job of trailing Emma round fell to Christine, with Duncan and myself watching from ground level.

Emma had a big cheery smile on her face the whole way round although Christine says she was a bit nervous at times. (I think she meant Emma, not herself.) It certainly didn’t hold her up though as she flew along many of the obstacles. Some of the attachments were a bit of a stretch for her so she certainly needed to have Christine there to help her clip on. Christine didn’t give her any chance to forget about clipping on either! Emma’s certainly keen to return so perhaps my back pain will have to become a recurring problem…

More photos over on Flickr.

Donutting

June 4th, 2017

Emma reached double-figures last week and, conveniently, the school had scheduled an Inset day so that she and her friends (and Duncan!) could celebrate by going donutting at the dry ski-slope in Southampton. They all had a whale of a time although, with the requirement to wear helmets and full body cover, frequent refreshment stops were required in the midday sun. (Also, drag lifts require a little more effort when you’re not actually wearing skis!)

Thankfully the chalet where party tea was held offered plenty of shade. The activity also made a good theme for the cake although I think I put about as much effort into constructing a sloping stand as actually decorating the cake itself! Once we’d seen everyone back home we packed up the car and joined the Bank Holiday traffic for a weekend of camping near Corfe Castle.

Multi-Stage Docker Build

May 5th, 2017

Docker 17.05 introduced the ability to perform multiple build stages in a single Dockerfile, copying files between them. This brings to regular Docker build a capability that I’ve previously talked about in the context of Rocker, and something that’s of particular use in a compiled language like Java. Let’s see what it would look like in the context of the WebSphere Liberty ferret sample.

The original Dockerfile looks as follows:
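It was something along these lines (the WAR version on Maven Central is illustrative):

    FROM websphere-liberty:webProfile7
    # pull the pre-built application WAR from Maven Central into dropins
    ADD https://repo1.maven.org/maven2/net/wasdev/wlp/sample/ferret/1.2/ferret-1.2.war /config/dropins/ferret.war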

We can see that it assumes that the application has already been built and just pulls in the WAR file, in this case from Maven Central. With a multi-stage build we can perform the build of the application and the build of the image in a single Dockerfile:
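A sketch of the multi-stage version (paths and version illustrative):

    # stage one: the Maven on-build image copies in the source from the build
    # context and runs 'mvn install' via its ONBUILD triggers
    FROM maven:onbuild AS build

    # stage two: copy just the built WAR from the first stage into Liberty's
    # dropins directory, leaving source, Maven and the SDK behind
    FROM websphere-liberty:webProfile7
    COPY --from=build /usr/src/app/target/ferret-1.2.war /config/dropins/ferret.war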

The first line uses the Maven on-build image to build the application using the source in the same directory as the Dockerfile. Although the stages are given an index by default, naming them using the AS keyword makes the file much more readable. Further down in the Dockerfile we can see that the COPY command takes the built WAR file from the first stage and copies it into the Liberty dropins directory as before. The important thing about all of this is that the final image doesn’t end up with the application source in it, or Maven, or an SDK – just the artifacts that are needed at runtime – thereby keeping down the final image size.