Archive for the ‘WebSphere Application Server’ Category

Containerizing background processes

Thursday, September 22nd, 2016

The lifetime of a Docker container is tied to the lifetime of the PID 1 process executed when the container was started. WebSphere Liberty has a convenient server run command to run the application server in the foreground. Sadly, that’s not the case with traditional WebSphere’s startServer.sh script, which simply starts the server process in the background and then exits. To ensure that the container didn’t exit as well, we started out with a script along the following lines:
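A minimal sketch (the profile path, server name server1 and PID file location are assumptions here; adjust them for your image):

#!/bin/sh
WAS_PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01

# Start the server; startServer.sh returns once the JVM is launched in the background.
$WAS_PROFILE/bin/startServer.sh server1

# Give the server time to write out its PID file.
sleep 10

# Keep the container alive for as long as the server is running.
while [ -f $WAS_PROFILE/logs/server1/server1.pid ]; do
  sleep 5
done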

where server1.pid is a file created by the server process (but not immediately, hence the initial sleep). That successfully kept the container alive but failed to allow it to shut down cleanly! A docker stop, for example, would wait for the default timeout period and then kill the process. Not great for any in-flight transactions! The solution was simple enough: add a trap to catch any interrupt and issue the command to stop the server:
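Sketching the same script with the trap added (paths and server name are again assumptions):

#!/bin/sh
WAS_PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01

# Stop the server cleanly when the container is interrupted or stopped.
trap "$WAS_PROFILE/bin/stopServer.sh server1" INT TERM

$WAS_PROFILE/bin/startServer.sh server1
sleep 10

while [ -f $WAS_PROFILE/logs/server1/server1.pid ]; do
  sleep 5
done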

All was well with the world until we then enabled server security by default. Unfortunately, with security enabled the stopServer.sh script requires credentials to be provided and there is no way to pass those credentials to the script. The solution was to switch to sending the interrupt signal to the server process. I also disliked that initial sleep so I decided to retrieve the process ID via ps (something that’s safer in a container given the limited process tree) and then wait whilst the process’s directory exists in /proc. The resulting code looked along the following lines:
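Roughly as follows (the ps invocation and the paths are assumptions and will depend on what is in your image):

#!/bin/sh
WAS_PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01

# Send an interrupt to the server JVM; unlike stopServer.sh this needs no credentials.
stop_server() {
  kill -INT $PID
}
trap stop_server INT TERM

$WAS_PROFILE/bin/startServer.sh server1

# Retrieve the server process ID (safe enough given the container's limited process tree).
PID=$(ps -C java -o pid= | head -n 1)

# Wait whilst the server process's directory exists in /proc.
while [ -d /proc/$PID ]; do
  sleep 5
done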

Note the use of a function so that $PID is not evaluated at the point the trap is set up.
Another disadvantage with having the server process in the background is the lack of output in the container logs. I decided to rectify that whilst I was at it by adding calls to tail the server log files:
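For example, slotting in after the PID has been retrieved (the log directory is again an assumption):

LOG_DIR=$WAS_PROFILE/logs/server1

# Follow the logs from the beginning, retrying across rollover, and exit when the server does.
tail -F -n +0 --pid=$PID $LOG_DIR/SystemOut.log &
tail -F -n +0 --pid=$PID $LOG_DIR/SystemErr.log >&2 &

while [ -d /proc/$PID ]; do
  sleep 5
done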

The significance of the tail parameters is as follows. The capital F indicates that attempts to follow the log file should be retried. This ensures that we continue to follow the latest file when the logs roll over. The --pid parameter ensures that the background tail processes exit along with the server process. The -n +0 indicates that the output should start at the beginning of the file so that entries output whilst the startServer.sh script is running are not lost. As previously noted, Docker preserves stderr across the remote API, so we make sure to direct the output from SystemErr.log there.

WebSphere Liberty and IBM Containers: Part 3

Saturday, June 18th, 2016

Scaling up

In the first two posts in this series we covered the basics of starting WebSphere Liberty on IBM Containers via the browser and then using the command line to deploy an application.

We’ve already seen some of the value-add that comes out of the box when running under IBM Containers. For example, at no point have we needed to be concerned with the underlying infrastructure on which the containers are running (beyond selecting a region). When we created an image it was scanned automatically for vulnerabilities. Each container was allocated its own private IP address accessible from any other container running in the same space – no need to set up and configure overlay networking here. We had control over whether we also wanted to assign a public IP and, if so, what ports should be exposed there. We also had easy access to metrics and standard out/error from the container.

So far we’ve only deployed a single container though. What happens when we hit the big time and need to scale up our application to meet demand? When we created our first container via the UI, you may remember that the Single option was selected at the top. Let’s go back and try out the Scalable alternative. From the catalog, navigate through Compute and Containers (remember that these instructions are based on the new Bluemix console). Select our existing demo image. Next, select the Scalable button at the top and give the container group a name. By default you’ll see that our group will contain two instances.

Rather than having a single IP associated with a container, this time we are asked to give a host name (on the mybluemix.net domain by default) for the group. Requests arriving at this host name will be load-balanced across the container instances in the group (reusing the gorouter from Cloud Foundry). One nice bonus of this approach is that it doesn’t eat into our quota of public IPs! As the host name needs to be unique within the domain, I tend to include my initials, as you’ll see in the screenshot below. Select 9080 as the HTTP port and then click Create.

Container group creation

Once the containers have started, the dashboard should show that you have two instances running:

Running instances

Right-click on the route shown in the Group details section and open it in a new tab. This should take you to a Liberty welcome page and, if you add the myLibertyApp context root, you should be able to see the application again. If you hit refresh a few times, although you won’t be able to tell with this application, your requests will be load-balanced across the two instances. If you return to the dashboard and switch to the Monitoring and Logs tab you can switch between the output for the instances and should, for example, be able to see the spike in network usage on the two containers when you made the requests.

If you return to the Overview tab you will see that there are plus and minus symbols either side of the current number of instances. These can be used to manually scale the number of instances. Click the + icon, click Save, and watch the creation of a new container in the group.

Manual scaling is all very well but it would be better if the number of instances scaled automatically up and down as required. If you’re deploying your containers in the London region then you’ll notice an extra tab at the top of the dashboard labelled Auto-Scaling. It’s only available in the London region at the moment because the service is still in beta (and so things may change a bit from what I’m describing here). Having selected this tab, click the plus icon labelled Create policy. Give the policy the name default and set the minimum and maximum instance values to 1 and 3. Add two CPU usage rules to scale the number of instances up and down as shown in the following diagram and then hit Create. Finally, select Attach to activate the policy for this scaling group.

Auto-Scaling

If you click the Auto-Scaling History tab you should see that a scaling action has taken place. We originally scaled up manually to 3 instances but, as the CPU usage is below our 60% limit, the number gets scaled down by one. If you wait another 5 minutes (the cool-down period we specified), then you’ll see it get scaled down again to our minimum of 1.

Scaling history

And that concludes our tour of the scaling options in IBM Containers!

WebSphere Liberty on IBM Containers: Part 2

Monday, May 30th, 2016

Deploying an Application

In the first part of this series we looked at how to get started running a WebSphere Liberty image in IBM Containers using the Bluemix console.  The container was just running an empty Liberty server. In this post we’ll look at building and deploying an image that adds an application. I was originally intending to stick to using the browser for this post but I think it’s actually easier to do this from the command line. I’m going to assume that you already have Docker installed locally, either natively on Linux, via Docker Machine, or via the Docker for Mac/Windows beta.

First off we need an application to deploy and, just for novelty, I’m going to use the Liberty app accelerator to generate one. Select Servlet as the technology type and then, contrary as it may seem, select Deploy to Local and not Deploy to Bluemix. The latter option currently only supports deploying to the Instant Runtimes (Cloud Foundry) side of Bluemix. Finally, give your project a name and click Download Now.

Liberty App Accelerator

Unpack the zip file you downloaded and change to the top directory of the project. The app is built using Maven. Perhaps you already have Maven installed, but this is a Docker blog post, so we’re going to use the maven image from Docker Hub to build the app as follows:

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -w /usr/src/mymaven/myProject-application maven mvn clean package

This mounts the project into a container running the maven image and runs the command mvn clean package in the myProject-application directory. (Note: if you were doing this repeatedly you’d probably want to mount a Maven cache into the container as well, as shown below, so that the dependencies aren’t downloaded each time.)
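For example, a sketch that also mounts the local repository (assuming it lives under ~/.m2 on the host and that the maven image resolves dependencies into /root/.m2):

$ docker run --rm -v $(pwd):/usr/src/mymaven \
    -v $HOME/.m2:/root/.m2 \
    -w /usr/src/mymaven/myProject-application maven mvn clean package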

In the myProject-application/target directory you should now find that you have a file myArtifactId-application-1.0-SNAPSHOT.war. Copy this into a new, empty directory so that when we execute a Docker build we don’t end up uploading lots of other cruft to the Docker engine. Using your favourite editor, add the following Dockerfile to the same directory:

FROM websphere-liberty:webProfile7
COPY myArtifactId-application-1.0-SNAPSHOT.war /config/dropins

We have two choices now: we can either build a Docker image locally and then push that up to the IBM Containers registry, or we can build the image in IBM Containers. We’ll go for the latter option here as it involves pushing fewer bytes over the network.
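For reference, the local route would look roughly like this, assuming you have already authenticated the Docker CLI against the registry with cf ic login (covered below):

$ ns=$(cf ic namespace get)
$ docker build -t registry.ng.bluemix.net/$ns/demo .
$ docker push registry.ng.bluemix.net/$ns/demo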

There’s one niggle today: to access your IBM Containers registry, you need to log in first using the Cloud Foundry CLI and the IBM Containers plugin. We’re going to play the containerisation trick again here. Run the following command to build an image containing the CLI and plugin:

$ docker build -t cf https://git.io/vr7pl

Ideally I’d run this image as a stateless container but getting the right state written out to the host into the .cf, .ice and .docker directories is a bit finicky. Instead, we’re going to mount our current directory on to an instance of the image and perform the build inside:

$ docker run -it --rm -v $(pwd):/root/build cf
$ cd /root/build
$ cf login -a api.ng.bluemix.net

$ cf ic login
$ cf ic build -t demo .

Now we’re ready to run an instance of our newly built image. At this point you could switch back to the UI but let’s keep going with the command line. We’ll need to refer to the built image using the full repository name, including your namespace:

$ ns=$(cf ic namespace get)
$ cf ic run --name demo -P registry.ng.bluemix.net/$ns/demo

By default, containers are only assigned a private IP address. In order to access our new container we’ll need to request and assign a public IP. The cf ic ip command unfortunately returns a human friendly message, not a computer friendly one, hence the need for the grep/sed to retrieve the actual IP:

$ ip=$(cf ic ip request | grep -o '".*"' | sed 's/"//g')
$ cf ic ip bind $ip demo

Lastly, we can list the port and IP to point our browser at:

$ cf ic port demo 9080

Adding the context root myLibertyApp should give us the welcome page for the starter app.

Starter Welcome Page

Congratulations, you’ve successfully deployed an application to IBM Containers! In the next post in this series we’ll look at some of the additional features that the service provides, such as scaling groups and logging.

WebSphere Liberty on IBM Containers: Part 1

Monday, May 16th, 2016

Getting Started

I wanted to write a few blog posts as I explored deploying our WebSphere Liberty Docker image on to third-party cloud platforms, but that didn’t really feel fair given that I haven’t written anything here about the options on IBM’s own cloud platform! So, to kick things off, here’s a series of posts on using Liberty with the IBM Containers service in Bluemix. In this post I’m going to focus on using the web UI to get a simple Liberty container up and running. In subsequent posts I’ll cover topics such as deploying an application, scaling applications and using the command line.

I’m also going to use the new console for the instructions/screenshots in this post. If you don’t already have an account, use the sign-up button at the top right of the console to get a free 30-day trial.

You can use WebSphere in a number of different ways in Bluemix: web applications deployed as Cloud Foundry applications will use the Liberty runtime, and it’s also possible to stand up WAS ND topologies for both Liberty and the traditional application server in VMs. Here we’re going to focus on containers.

Once logged in, select the Compute icon and then the Containers tab. Then click the + icon top-right to create a new container. You should find that the private registry associated with your account is already populated with a number of IBM-provided images as shown in the following screen shot. Select the ibmliberty image. This image adds a few layers on top of the websphere-liberty image from Docker Hub to make it run more cleanly in IBM Containers (see the Dockerfile in the docs for full details).

Select container image

You will now see a drop-down on the left which allows you to select the image tag to use. By default the latest tag will give you the full Java EE 7 profile image. Select the webProfile7 tag to get a lighter-weight image. Beneath the tag you’ll see an area entitled Vulnerability Assessment which, if we’ve been doing our job properly, should say Safe to Deploy. Assessments are updated regularly as new CVEs are announced and we therefore regularly refresh the image to make sure that it stays clean.

Vulnerability Advisor

At the top of the page, you’ll see that the Single option is selected to create just a single container instance; we’ll return later to see how the Scalable option works. Give the container a name. Leave the container size at Micro but select the option to Request and Bind Public IP. All containers get an IP address on a private network and are accessible by other containers in the same space, but to be accessible externally they must be assigned a public IP.

Create a container

Now click the Create button at the bottom of the page. After a minute or two the status of the container should change to Running.

Container running

Switch to the Monitoring and Logs tab. The container-level metrics around CPU/memory/network usually take some time to appear, but if you select the Logging tab you should be able to see the messages output by Liberty as the container started. Tip: you can use the ADVANCED VIEW button on these tabs to open up Grafana and Kibana views on to these metrics and logs respectively if you want to do more detailed analysis.

Container logs

Finally, switch back to the Overview tab. If you scroll down to the Container details section, you’ll find a list of ports. If you click on 9080, this will open a new browser tab using that port and the public IP address assigned to the container. With the default image, that should show the Liberty welcome page.

Liberty welcome page

Congratulations – you have run WebSphere Liberty on IBM Containers! The astute among you will, however, have noticed that it’s not running any application. That’ll be the subject of the next blog post…

Messaging Administration Guide

Wednesday, July 15th, 2009

I’ve largely given up posting links to interesting content on this site – see my delicious feed for that. However, many of my original posts related to messaging and WebSphere Application Server and hence I suspect a reasonable proportion of those who stumble across this site are interested in that subject. Consequently, I feel it’s appropriate to advertise the new WebSphere Application Server V7 Messaging Administration Guide. This document covers both the default messaging provider and WebSphere MQ support. Don’t be misled by the title – although it does provide detailed information on the administration of resources, the background information on concepts and topologies is equally relevant to developers and architects. The document also has a good section on securing the default messaging provider.

Free WebSphere Application Server

Saturday, June 20th, 2009

This one’s doing the rounds of various IBM-related blogs but I think it’s sufficiently momentous for me to give it a mention in case you haven’t seen it elsewhere. WebSphere Application Server for Developers provides a free development runtime environment using the full WebSphere Application Server V7 product. What you don’t get is support, but feel free to ask questions on the developerWorks forum.

Service Integration Bus Destination Handler

Wednesday, May 6th, 2009

I’ve previously plugged the Service Integration Bus Explorer and IBM Client Application for JMS as useful tools to have in your WebSphere messaging kitbag. Thanks go once again to Dave Screen, this time for bringing the Service Integration Bus Destination Handler to my attention. This provides a very configurable mechanism for carrying out actions on a set of messages either on a one-off basis (via client or web application) or on a scheduled basis. Particularly useful operations include dumping messages, moving messages from one destination to another, and resurrecting messages from the exception destination. The readme file available in the download provides lots of detailed instructions and examples.

Security Bulletin for WebSphere Application Server

Tuesday, February 10th, 2009

I now try to avoid just re-posting material from developerWorks and other IBM sources but this one is worth highlighting. IBM is now publishing a list of risk-assessed security vulnerabilities for WebSphere Application Server.