
Optional Kubernetes resources and PodPresets

Thursday, July 27th, 2017

The sample for Microservice Builder is intended to run on top of the Microservice Builder fabric and also to utilize the ELK sample. As such, the Kubernetes configuration for each of the microservice pods binds to a set of resources (secrets and config-maps) created by the Helm charts for the fabric and the ELK sample. The slightly annoying thing is that the sample would work perfectly well without these (you just wouldn’t get any logging to the ELK stack), except that, as we shall see in a moment, deployment fails if the fabric and ELK sample have not already been deployed. In this post we’ll explore a few possibilities as to how these resources could be made optional.

I’m going to assume a minikube environment here and we’re going to try to deploy just one of the microservices as follows:
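By way of illustration only, the deployment boils down to something like the following; the manifest path and file name here are hypothetical placeholders rather than the actual names from the sample:

    # Deploy a single microservice from its generated Kubernetes manifests
    # (the directory and file name are placeholders)
    kubectl apply -f manifests/deploy.yaml

    # Watch the pod come up (or, in this case, fail to do so)
    kubectl get pods --watch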

If you then perform a kubectl describe for the pod that is created, you’ll see that it fails to start because it can’t bind the volume mounts: the events report that the secrets and config-maps it references cannot be found.

Elsewhere in the output, though, you’ll see a clue to our first plan of attack: each of the secret and config-map volumes is described with an Optional field, currently set to false.

Doesn’t that optional flag look promising?! As of Kubernetes 1.7 (and thanks to my one-time colleague Michael Fraenkel) we can mark our usage of secrets and config-maps as optional. Our revised pod spec would now look as follows:
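Here is a minimal sketch of what that looks like; the container, secret, and config-map names are placeholders rather than the actual names created by the fabric and ELK Helm charts:

    apiVersion: v1
    kind: Pod
    metadata:
      name: vote-service
    spec:
      containers:
      - name: vote-service
        image: vote-service:latest
        env:
        - name: LOGSTASH_HOST
          valueFrom:
            configMapKeyRef:
              name: logging-config
              key: logstash-host
              optional: true          # don't fail if the config-map is absent
        volumeMounts:
        - name: keystore
          mountPath: /etc/keystore
      volumes:
      - name: keystore
        secret:
          secretName: mb-keystore
          optional: true              # don't fail if the secret is absent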

And lo and behold, with that liberal sprinkling of optional attributes, we can now successfully deploy the service without either the fabric or the ELK sample. Success! But why stop there? All of this is boilerplate that is repeated across all our microservices. Wouldn’t it be better if it simply wasn’t there in the pod spec and we just added it when it was needed? Another new resource type in Kubernetes 1.7 comes to our rescue: the PodPreset. A pod preset allows us to inject just this kind of configuration at deployment time into pods that match a given selector.

We can now slim our deployment down to the bare minimum that we want to have in our basic environment:
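The following sketch gives the general shape of such a deployment; the names and image are placeholders:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: vote-service
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: vote-service
            runtime: liberty
        spec:
          containers:
          - name: vote-service
            image: vote-service:latest
            ports:
            - containerPort: 9080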

Note that we have also added that runtime: liberty label to the pod; this is what we’re going to use to match on. In our advanced environment, we don’t want to be adding the resources to every pod in the environment; in particular, we don’t want to add them to pods that aren’t even running Liberty. This slimmed-down deployment works just fine, in the same way that the optional version did.

Now, what do we have to do to get all of that configuration back in an environment where we do have the fabric and ELK sample deployed? Well, we define it in a pod preset as follows:
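A sketch of the sort of PodPreset involved, reusing the same placeholder names for the secret and config-map as above:

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: liberty-logging
    spec:
      selector:
        matchLabels:
          runtime: liberty          # only applied to pods carrying this label
      env:
      - name: LOGSTASH_HOST
        valueFrom:
          configMapKeyRef:
            name: logging-config
            key: logstash-host
      volumeMounts:
      - name: keystore
        mountPath: /etc/keystore
      volumes:
      - name: keystore
        secret:
          secretName: mb-keystore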

Note that the selector is matching on the label that we defined in the pod spec earlier. Now, pod presets are currently applied by something in Kubernetes called admission control and, because they are still alpha, minikube doesn’t enable the admission controller for PodPresets by default. We can enable it as follows:
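Something along these lines works with minikube v0.21.0 or later; the exact list of admission plugins given here is illustrative and may need adjusting for your cluster:

    # Enable the PodPreset admission plugin on the API server at startup
    minikube start --kubernetes-version=v1.7.0 \
      --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,PodPreset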

(Note that, prior to minikube v0.21.0, this property was called apiserver.GenericServerRunOptions.AdmissionControl, a change that cost me half an hour of my life I’ll never get back!)

With the fabric, ELK sample, and pod preset deployed, we now find that, courtesy of the admission controller, our pod regains its volume mounts when it is deployed.

Pod presets are tailor-made for this sort of scenario where we want to inject secrets and config maps, but even they don’t go far enough for something like Istio, where we want to inject a whole new container (the Envoy proxy) into the pod at deployment time. Admission controllers in general also have their limitations: they have to be compiled into the API server and, as we’ve seen, they have to be specified when the API server starts up. If you need something a whole lot more dynamic then take a look at the newly introduced initializers.

One last option for those who aren’t yet on Kubernetes 1.7: we’re in the process of moving our generated microservices to use Helm and, in a Helm chart template, you can make configuration optional. For example, we might define a logging option in our values.yaml with a default value of disabled, and then we can define constructs along the following lines in our pod spec:
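A sketch of the sort of thing this looks like; the default in values.yaml and the template fragment below use placeholder names for the secret and volume:

    # values.yaml: logging is disabled unless overridden at install time
    logging: disabled

    # deployment.yaml template fragment: only emit the ELK-related volume
    # (and, similarly, any mounts and environment) when logging is enabled
    {{- if eq .Values.logging "enabled" }}
          volumes:
          - name: keystore
            secret:
              secretName: mb-keystore
    {{- end }}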

Then all we’ve got to do when we’re deploying to our environment with the fabric and ELK sample in place is to specify an extra --set logging=enabled on our helm install. Unlike the pod preset approach, this does mean that the logic is repeated in the Helm chart for every microservice, but it certainly wins in the portability stakes.
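A usage sketch, with a placeholder chart directory:

    # Install with the ELK logging configuration included
    helm install --set logging=enabled ./vote-service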

Microservice Builder GA Update

Wednesday, July 12th, 2017

As I posted here on the Microservice Builder beta, I thought it only fair that I should offer an update now that it is Generally Available. There is already the official announcement, various coverage in the press including ZDNet and ADT, a post from my new General Manager Denis Kennelly, and, indeed, my own post on the official blog, so I thought I’d focus on what has changed from a technical standpoint since the beta.

If I start with the developer CLI, the most significant change here is that you no longer need a Bluemix login. Indeed, if you aren’t logged in, you’ll no longer be prompted for potentially irrelevant information such as the sub-domain on Bluemix where you want the application to run. Note, however, that the CLI still uses back-end services out in the cloud to generate the projects, so you’ll still need internet connectivity when performing a bx dev create.

Moving on to the next part of the end-to-end flow, the Jenkins-based CI/CD pipeline: the Helm chart for this has been modified extensively. It is now based on the community chart which, most significantly, means that it is using the Kubernetes plugin for Jenkins. This results in separate containers for each of the build steps (Maven for the application build, Docker for the image build, and kubectl for the deploy), and those containers are spun up dynamically, as part of a Kubernetes pod representing the Jenkins slave, when required.

The Jenkinsfile has also been refactored to make extensive use of a Jenkins library. As you’ll see in the sample projects, this means that the generated Jenkinsfile is now very sparse:
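It looks something along these lines (a sketch only: the shared library and step names here are illustrative rather than the exact ones generated):

    // The pipeline logic lives in a shared Jenkins library; the generated
    // Jenkinsfile just loads it and names the image to build and deploy.
    @Library('MicroserviceBuilder') _
    microserviceBuilderPipeline {
      image = 'microservice-vote'
    }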

I could say much more about the work we’ve done with the pipeline but to do so would be stealing the thunder from one of my colleagues who I know is penning an article on this subject.

Looking at the runtime portion, what we deploy for the Microservice Builder fabric has changed significantly. We had a fair amount of heartache as we considered security concerns in the inter-component communication. This led us to move the ELK stack, and the configuration for the Liberty logstash feature, out into a sample. This capability will return, although likely in a slightly different form. The fabric did gain a Zipkin server for the collation and display of OpenTracing data. Again, the security concerns hit home here and, for now, the server is not persisting data and the dashboard is only accessible via kubectl port-forward.
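For example, something like the following makes the dashboard available on localhost (Zipkin’s default port is 9411; the pod name is a placeholder for whatever the fabric chart actually creates):

    # Forward the Zipkin UI from the fabric's Zipkin pod to localhost:9411
    kubectl port-forward zipkin-pod-name 9411:9411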

Another significant change, and one of the reasons I didn’t post this immediately, was that a week after we GA’d, IBM Spectrum Conductor for Containers morphed into IBM Cloud private. In the 1.2 release, this is largely a rebranding exercise but there’s certainly a lot more to come in this space. Most immediately for Microservice Builder, it means that you no longer need to add our Helm repository as it will be there in the App Center out of the box. It also meant a lot of search and replace for me in our Knowledge Center!

You may be wondering where we are heading next with Microservice Builder. As always, unfortunately I can’t disclose future product plans. What I can do is highlight existing activity that is happening externally. For example, if you look at the Google Group for the MicroProfile community, you will see activity ramping up there and proposals for a number of new components. Several of the Microservice Builder announcements also refer to the Istio service mesh project on which IBM is collaborating with Google. It’s still early days there but the project is moving fast and you can take a look at some of the exciting features on the roadmap.