Archive for the ‘Technology’ Category

WordPress is broken by PHP in Jammy update

Saturday, August 3rd, 2024

This blog has been a bit neglected for the last few years. I miss the opportunity to reflect on something I’ve done and write up those thoughts. We’ll see whether this is a one-off or the start of something new!

The first task was to make sure everything on the site was up-to-date. WordPress does a pretty good job of automatically applying security fixes but the Ubuntu VPS needed an upgrade. The upgrade to Jammy went smoothly enough but attempting to access the site just served up the raw WordPress PHP source. The enabled Apache modules included a couple of broken symlinks pointing at PHP 7. After enabling the equivalents for PHP 8.1, I instead saw a WordPress error page: There has been a critical error on this website.

The WordPress PHP compatibility matrix indicates that there are still exceptions with PHP 8 versions. Time to get PHP 7 back…
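
Jammy no longer ships PHP 7, so the most likely route is the well-known ondrej/php PPA – something along these lines, where 7.4 and the extension packages are assumptions that will depend on what the WordPress install actually needs:

    sudo add-apt-repository ppa:ondrej/php
    sudo apt update
    sudo apt install php7.4 libapache2-mod-php7.4 php7.4-mysql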

Then re-enable the PHP 7 Apache modules:
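
Assuming 8.1 and 7.4 are the versions in play, that's roughly:

    sudo a2dismod php8.1
    sudo a2enmod php7.4
    sudo systemctl restart apache2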

With the site rendering again, I thought I was done, but on posting this entry the dreaded critical error reappeared. Looking again at the Apache error log, /var/log/apache2/error.log, revealed errors in lightbox-plus and crayon-syntax-highlighter of the form Compilation failed: invalid range in character class. From PHP 7.3, which switched to PCRE2, a literal hyphen inside a regular expression character class needs to be escaped. I could have rolled the PHP version back further but decided to patch the offending files instead. (I probably need to review the plugins in use on the site and remove those that are no longer supported.)
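
Purely as an illustration of the kind of change involved (the grep and the pattern below are made up, not the actual plugin code), the fix is to escape the literal hyphen so PCRE2 no longer tries to treat it as a range:

    # Find candidate regular expressions in the offending plugins (run from the WordPress root)
    grep -rn "preg_" wp-content/plugins/lightbox-plus wp-content/plugins/crayon-syntax-highlighter
    # then change a character class such as '/[\w-*]+/' to '/[\w\-*]+/'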

And, finally, we’re back in business!

Update time

Thursday, September 5th, 2019

WordPress has been nagging me for some time that I needed to update the version of PHP this blog is running on and, in particular, Jetpack had finally given up on me. The sticking point has been that the Digital Ocean droplet it’s running on has been stuck back on Trusty Tahr and a previous attempt to upgrade it had gone awry.

I finally took the plunge and set up a new droplet running Bionic Beaver and eventually found the right combination of PHP modules to get everything running again. Whilst I was at it, I ticked another item off my todo list and enabled TLS (trivial with the aid of Certbot and Let’s Encrypt). A late evening but nothing too painful.

On the downside, when I first set the blog up (back in 2005) I used Gallery to manage images. The WordPress plugin died a while back but the Gallery install itself failed to play nicely with the new PHP version. As a consequence, the item to write a script to locate all those <wpg2id> tags and replace them with the appropriate images still remains very much on my todo list. Oh, and then there’s all those GPX files that were being displayed with Google Maps…

Knative Intro @ Devoxx UK

Thursday, May 30th, 2019

I presented an introduction to Knative at Devoxx UK, the recording for which can be found below. I’m afraid I deviated somewhat from the abstract given the changes to the project in the five months since I submitted it. With only half an hour, I probably shouldn’t have tried to cover Tekton as well but I wanted to have an excuse to at least touch on Jenkins X, however briefly! The demo gods largely favoured me except when hey, the load generator, failed to return (not the part of the demo I was expecting to fail!). The script and source for the demo are on GitHub although I’m afraid I haven’t attempted to abstract them away from the Docker Hub/GCP accounts.

Debugging with Telepresence

Monday, February 11th, 2019

I’ve spent the last few days trying to debug an issue on Kubernetes with an external plugin that I’ve been writing in Go for Prow. Prow’s hook component is forwarding on a GitHub webhook and the plugin mounts in various pieces of configuration from the cluster (the Prow config, GitHub OAuth token and the webhook HMAC secret). As a consequence, running the plugin standalone in my dev environment is tricky, but just the sort of scenario that Telepresence is designed for.

The following command is all that is needed to perform a whole host of magic:
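
Something along these lines – the port and the plugin's own flag for the config location are illustrative, so substitute whatever your binary actually expects:

    telepresence --swap-deployment my-plugin-deployment \
                 --mount /tmp/tp \
                 --expose 8888 \
                 --run ./my-plugin --config-path /tmp/tp/etc/config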

  • It locates the my-plugin-deployment deployment already running in the cluster and scales down the number of replicas to zero.
  • It executes the my-plugin binary locally and creates a replacement deployment in the cluster that routes traffic to the local process on the exposed port.
  • It finds the volumes defined in the deployment and syncs their contents to /tmp/tp using the mount paths also specified in the deployment.
  • Although not needed in this scenario, it also sets up the normal Kubernetes environment variables around the process and routes network traffic back to the cluster.

Now, it was convenient in this case that the binary already exposed command-line arguments for the configuration files so that I could point them at the alternative path. Failing that, you could always use Telepresence in its --docker-run mode and then mount the files onto the container at the expected location.
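
A sketch of that alternative, assuming an image exists for the plugin and that the code expects its configuration under /etc/config (the image name, port and paths are all assumptions):

    telepresence --swap-deployment my-plugin-deployment \
                 --mount /tmp/tp \
                 --expose 8888 \
                 --docker-run --rm \
                 -v /tmp/tp/etc/config:/etc/config \
                 my-plugin-image:latest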

And the issue I was trying to debug? I had used the refresh plugin as my starting point and this comment turned out to be very misleading. The call to configAgent.Start() does actually set the logrus log level based on the prow configuration (to info by default). As a consequence, everything was actually working as it should and my debug statements just weren’t outputting anything!

Website backup to pCloud

Wednesday, January 30th, 2019

Another SOC website-related posting – this time on the subject of backup. The website is backed up by the club’s current hosting provider (Krystal – who, a year in, I can highly recommend) but I was informed that the club had bought a large quantity of cloud storage for the purpose of storing its map archive and, for belt and braces, it made sense to also include backups of the website there.

As it turned out, the cloud storage was courtesy of pCloud who are best described as a Dropbox clone i.e. the expected interaction patterns are via the web UI, mobile, or sync from the desktop app. A quick search turned up rclone which describes itself as “rsync for cloud storage” and, amongst the list of supported backends, includes pCloud.

Installation on the hosting provider was straightforward. The configuration process is interactive (opening a browser to log in to pCloud) but the docs also cover how to create the configuration on one machine and copy it across to another. A copy is then as simple as:
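
For example, where 'pcloud' is whatever you named the remote during rclone config and the paths are placeholders:

    rclone copy /home/soc/backups pcloud:website-backups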

I started out looking to use drush arb to create a backup but, as the same hosting is used for a WordPress site, it was easiest in the end just to write a script using tar and mysqldump to create the archive of the file system and database tables. This is then triggered nightly on a cron job. Each backup is around 0.5GB so I wasn’t too concerned about incremental backup and, with 2 TB of storage to play with, it will be a while before the question of cleaning up old backups comes back to haunt me!
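
A minimal sketch of such a script – the database name, paths and remote are all illustrative, and the MySQL credentials are assumed to live in ~/.my.cnf rather than on the command line:

    #!/bin/bash
    # Nightly site backup: dump the database, archive it with the document root,
    # then push the result to pCloud with rclone.
    set -euo pipefail

    STAMP=$(date +%Y%m%d)
    BACKUP_DIR=/home/soc/backups

    mysqldump --single-transaction soc_db > "$BACKUP_DIR/soc_db-$STAMP.sql"
    tar -czf "$BACKUP_DIR/site-$STAMP.tar.gz" /home/soc/public_html "$BACKUP_DIR/soc_db-$STAMP.sql"
    rclone copy "$BACKUP_DIR/site-$STAMP.tar.gz" pcloud:website-backups

    # Triggered nightly from cron, e.g.: 0 2 * * * /home/soc/bin/backup.sh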

Drupal 8 Migration

Monday, January 28th, 2019

For my sins, I have now been involved in the management of our orienteering club’s current website for over 10 years. Back then, we wanted to make it as easy as possible for club officials and members to contribute content and, after evaluating WordPress, Joomla! and Drupal, we went with Drupal as our Content Management System. The extensibility of Drupal makes it immensely powerful but, as with many open source projects, the rich ecosystem of contributed modules can be both a blessing and a curse.

Although the details have been long forgotten, I do remember that the move from Drupal 6 to 7 was a painful one and so, despite it being over three years since Drupal 8 was released, I was in no rush to migrate. In the end, it was a security vulnerability in one of the modules that wasn’t going to be addressed in v7 that precipitated the move.

The major changes in core Drupal have seemingly proved too much for many module contributors to make the move. An initial assessment wasn’t particularly promising: of the fifty-five non-core modules the current site had installed, five were no longer needed in Drupal 8, six had GA v8 versions and a further fourteen had beta versions available. A migration estimate site put the effort involved at several weeks’ worth and, in the end, it probably wasn’t far off!

My first task was to slim down the number of modules installed. Many weren’t actively in use any more (e.g. content_access and views_data_export) and others had simple replacements which had easier migration paths (e.g. swapping out timefield for a simple text field). Ironically, the module with the security flaw was one of those that I disabled but, having started down this path, I was determined to complete a migration.

It was then time to start the actual migration. Thankfully, the process now involves setting up a parallel site, as it would still be weeks before I had anything approaching usable. One of the issues was that no private file path was set up during the migration. Another was that the migrated text formats were using a handler that no longer existed; opening and resaving them fixed that problem. Yet another of the random error messages required manually modifying the database to remove the upload field from entity.definitions.bundle_field_map in the drup_key_value table (go figure).

The site makes extensive use of custom content types and views which are finally a part of core Drupal. Views are not part of the default migration though, and, in the end, I just recreated them manually. The same was true of all the patterns for pathauto.

At this point, with the styling also re-introduced, the site was ready to go live again but there were still problems waiting to be found. One was that what used to appear as a date field now appeared as a datetime field in forms. In the end, I decided to test out the new REST capabilities to export the contents of the field and reimport it into a new field with the correct type. The only catch here was that there is no querying capability in the REST API, so it was necessary to create a JSON-rendered view that listed the required nodes in order to retrieve their IDs so that they could then be processed one by one. The rest was just a short bash script using curl and jq.
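
The script boiled down to something like the following sketch – the site URL, credentials, content type, view path and field names are all placeholders, and it assumes Drupal's basic_auth module is enabled:

    #!/bin/bash
    # Copy values from the old (date) field into a new, correctly-typed field via the REST API.
    set -euo pipefail
    BASE=https://www.example.org
    AUTH=admin:secret
    TOKEN=$(curl -s "$BASE/session/token")

    # A JSON-rendered view at /nodes-to-fix lists the affected nodes (the REST API itself has no querying)
    for NID in $(curl -s "$BASE/nodes-to-fix?_format=json" | jq -r '.[].nid'); do
      VALUE=$(curl -s "$BASE/node/$NID?_format=json" | jq -r '.field_old_date[0].value')
      curl -s -X PATCH --user "$AUTH" \
           -H "Content-Type: application/json" \
           -H "X-CSRF-Token: $TOKEN" \
           -d "{\"type\":[{\"target_id\":\"event\"}],\"field_new_date\":[{\"value\":\"$VALUE\"}]}" \
           "$BASE/node/$NID?_format=json"
    done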

Hopefully, the migration can now be considered complete. The site now uses relatively few custom modules which is, undoubtedly, a good thing for future stability. If the move to Drupal 9 looks anywhere near as painful though, I now know how to extract the entire site content, so maybe it will be time to revisit the CMS landscape. I would hate to think that I’ll still be debugging PHP errors in another ten years’ time!

Oracle Code One: Continuous Delivery to Kubernetes with Jenkins and Helm

Wednesday, October 31st, 2018

Last week I was out in San Francisco at Oracle Code One (previously known as JavaOne). I had to wait until Thursday morning to give my session on “Continuous Delivery to Kubernetes with Jenkins and Helm”. This was the same title I presented in almost exactly the same spot back in February at IBM’s Index Conference but there were some significant differences in the content.

https://www.slideshare.net/davidcurrie/continuous-delivery-to-kubernetes-with-jenkins-and-helm-120590081

The first half was much the same. As you can see from the material on SlideShare and GitHub, it covers deploying Jenkins on Kubernetes via Helm and then setting up a pipeline with the Kubernetes plugin to build and deploy an application, again, using Helm. This time, I’d built a custom Jenkins image with the default set of plugins used by the Helm chart pre-installed which improved start-up times in the demo.
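
For reference, in the Helm 2 era that first step was roughly the following, with the values file pointing the chart at the custom Jenkins image (the release name and values file are illustrative):

    helm install stable/jenkins --name jenkins -f jenkins-values.yaml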

I had previously mounted in the Docker socket to perform the build but removed that and used kaniko instead. This highlighted one annoyance with the current approach used by the Kubernetes plugin: it uses exec on long-running containers to execute a shell script with the commands defined in the pipeline. The default kaniko image is a scratch image containing just the executor binary – nothing there to keep it alive, nor a shell to execute the script. In his example, Carlos uses the kaniko:debug image which adds a busybox shell but that requires other hoops to be jumped through because the shell is not in the normal location. Instead, I built a kaniko image based on alpine.
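
A minimal sketch of building such an image – the base tags and target repository are illustrative, and a real image may also need the certificate and Docker config paths setting up:

    # Write a two-stage Dockerfile that copies the kaniko executor onto an alpine base
    cat > Dockerfile <<'EOF'
    FROM gcr.io/kaniko-project/executor:latest AS kaniko
    FROM alpine:3.8
    COPY --from=kaniko /kaniko /kaniko
    ENV PATH $PATH:/kaniko
    EOF
    docker build -t mydockerid/kaniko-alpine .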

The biggest difference from earlier in the year was, perhaps unsurprisingly, the inclusion of Jenkins X. I hadn’t really left myself enough time to do it justice. Given the usual terrible conference wifi and the GitHub outage earlier in the week, I had recorded a demo showing initial project creation, promotion, and update. I’ve added a voiceover so you can watch it for yourself below (although you probably want to go full-screen unless you have very good eyesight!).

Introduce poetry to your Kube config with ksonnet

Monday, October 15th, 2018

Returning to the 101 ways to create Kubernetes configuration theme, next up is ksonnet from the folks at Heptio. (I have no doubt that there are 101 ways to create Kubernetes configuration but I’m afraid I don’t really intend to cover all of them on this blog!) ksonnet has a different take yet again from Helm and kustomize. In many ways, it is more powerful than either of them but that power comes at the cost of a fairly steep learning curve.

The name is derived from Jsonnet, a data templating language that came out of Google back in 2014. Jsonnet essentially extends JSON with a scripting syntax that supports the definition of programming constructs such as variables, functions, and objects. The ‘Aha!’ moment for me with ksonnet was in realizing that it could be used as a simple template structure in much the same way as Helm. You start with some Kubernetes configuration in JSON format (and yq is your friend if you need to convert from YAML to JSON first) and from there you can extract parameters. I say ‘it could’ because you’d typically only take this approach if you were actually converting existing configuration but realizing this helped me get beyond some of the slightly strange syntax you see in generated files.

As usual, Homebrew is your starting point: brew install ksonnet/tap/ks. ksonnet has an understanding of the different environments to which an application is deployed and, when you issue ks init myapp, it takes the cluster that your current kube config is pointing at as the default environment (although you can override this with --context).

ksonnet then has the concept of ‘prototypes’ which are templates for generating particular types of application component when supplied with suitable parameters. These are provided by ‘packages’ which, in turn, come from a ‘registry’ stored on GitHub. Stealing from the tutorial, we can generate code for a simple deployment and service with the deployed-service prototype giving the image name and service type as parameters e.g.
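
Borrowing from that tutorial, the command looks something like this:

    ks generate deployed-service guestbook-ui \
      --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
      --type ClusterIP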

At this point, we can use ks show default to return the YAML that would be generated or ks apply default to actually apply it to the default environment. I highly recommend doing the tutorial first rather than the web-based tour as it shows you that you can get a long way with ksonnet without actually editing, or even looking at, any of the generated files. For example, you can use ks env add to create another environment and then ks param set to override the values of parameters for a particular environment, much as you might with Helm or kustomize.
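
For example (the environment name, context and component name are illustrative):

    # Add a second environment pointing at a different cluster...
    ks env add staging --context my-staging-cluster
    # ...and override a parameter just for that environment
    ks param set guestbook-ui replicas 3 --env staging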

Of course, the real power comes when you drop into the code and make use of ksonnet features like parts and modules to enable greater reuse of configuration in your application. At that point though, you really should take the time to learn jsonnet properly!