Archive for the ‘Technology’ Category

Creating a Membership List in Drupal 11 with Aggregating Views

Wednesday, July 9th, 2025

I’ve written before about our use of Drupal for the Southampton Orienteering Club website. We’re now on Drupal 11, and my opinions haven’t really changed. Upgrades are still painful, particularly because of the community modules that we have to leave behind each time. The user experience for creating content also lags behind newer alternatives. We have a significant amount of historical content on the site (not all of it publicly visible), making a move a daunting proposition. In the meantime, as this post demonstrates, we continue to utilise the powerful features that Drupal and its ecosystem offer.

We had a requirement to provide a membership list for use by the club’s members, which would provide names, approximate home location (to facilitate lift sharing), and a contact mechanism. Previously, it fell to the membership secretary to create this list manually; however, given that nearly all members have an account on the website, it felt like there was a better way.

We already had a permission role that was granted to club members (allowing them access to the members’ area), so it was trivial to create a page that listed all of the website users in that role (and limit access to the list to those in that role). Drupal lets you add custom fields to the user profile. We already have fields for forename and surname, to which I added a location field, which we populated from the old membership list.

User profile

Drupal also has a built-in mechanism for users to contact one another. Users can select the user they wish to contact and provide a message, which is then emailed to the recipient with the originating user as the sender. This has the benefit that users see messages where they are most likely to notice them (in their inbox rather than in some additional system), but without having to expose everyone’s email address to everyone else, which was an area of concern. Better still, users can indicate in their profile whether or not they wish to be contactable.

Contact form

So far, so good. We had a list that showed members’ names, locations, and a link to their contact form if they hadn’t disabled it. The last thing we wanted to add to the list was some additional data for each member, highlighting honorary members, any qualifications (e.g., first aider or coach), and any posts they might hold (e.g., secretary or chair).

We already had a Drupal node type to represent a post, which is then linked to multiple users. This was being used to generate the committee page. I decided to extend this to cover the other scenarios. Drupal views allow you to specify reverse relationships, so for each member, it would retrieve all of the ‘posts’ the member held. Unfortunately, it then renders this as if it were an outer join in SQL, with multiple rows in the table for a member, one for each post.

This is where the Views Aggregator Plus module came to the rescue. Once installed, I could select the “Table with aggregation options” format for my Drupal view. Getting the correct settings was then a bit finicky. I had to add a hidden field with the user’s UUID. I then configured the view to group the post holder relationship using the “Enumerate (sort, no dupl.)” function and group the UUID using “Group and compress” as shown in the following screenshot.

Table with aggregation options settings

The module is significantly more powerful than this. It will, for example, allow you to perform operations such as COUNT, MIN, and MAX on the aggregated rows. That’s maybe for another day!

One further tweak was then needed. The table was styled differently from all of the other tables on the site. Rather than try to replicate that styling, I changed the class in modules/contrib/views_aggregator/templates/views-aggregator-results-table.html.twig from table to views-table.
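That edit can also be scripted. The snippet below runs against a stand-in copy of the template, since the exact markup varies between versions of the module; note too that changes under modules/contrib are lost on update, so the more durable fix is to copy the template into your theme and edit it there.

```shell
# Stand-in for views-aggregator-results-table.html.twig; the real markup
# may differ between versions of the Views Aggregator Plus module
cat > /tmp/views-aggregator-results-table.html.twig <<'EOF'
<table{{ attributes.addClass('table') }}>
EOF

# Swap the table class for the one the site's other tables use
sed -i "s/addClass('table')/addClass('views-table')/" \
    /tmp/views-aggregator-results-table.html.twig

cat /tmp/views-aggregator-results-table.html.twig
```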

The final list (or at least the important section of it!) then looks something like the following:

Membership list

Stopping the Git CredentialHelperSelector from popping up

Tuesday, June 24th, 2025

Recently, I was plagued by the “CredentialHelperSelector” dialogue popping up multiple times when attempting to pull from a remote Git repository. This was despite repeatedly selecting manager and ticking the option to remember that choice, and various attempts to set the credential helper explicitly via the command line.

In the end, the following command was my saviour:

git config -l --show-origin

It showed that the offending credential.helper=helper-selector was specified in the gitconfig file under the Git install (this being Windows). What you then need to know is that credential.helper is a multi-valued list, so any changes I was making in my user-level .gitconfig were additive. This explains why, once an alternative was specified, I could cancel the numerous selector dialogues and the operation would still complete successfully.

So, how to avoid those annoying pop-ups? Well, if you can edit that system-level gitconfig, just remove the offending entry. Unfortunately, on my locked-down system, that wasn’t an option. The answer, then, is this change, available from Git 2.9 onwards: it allows you to specify an empty helper to clear any existing entries in the list. My .gitconfig now contains the following, and the selector is no more!

[credential]
        helper =
        helper = manager
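The additive behaviour is easy to reproduce against a scratch config file (the file path here is arbitrary, and helper-selector stands in for the system-level entry):

```shell
cfg=/tmp/demo-gitconfig
rm -f "$cfg"

# Stand-in for the entry in the system-level gitconfig
git config -f "$cfg" --add credential.helper helper-selector

# A user-level 'fix' that just sets another helper is additive...
git config -f "$cfg" --add credential.helper manager
git config -f "$cfg" --get-all credential.helper   # both values are listed

# ...whereas an empty entry clears the list when the helpers are invoked,
# so only what follows it takes effect
git config -f "$cfg" --add credential.helper ""
git config -f "$cfg" --add credential.helper manager
```

Listing the file still shows all of the accumulated values; it’s the credential machinery that discards everything before the empty entry when the list is actually used.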

Updating the symbol set and magnetic north with OpenOrienteering Mapper

Sunday, June 15th, 2025

I spend a couple of hours a week hanging around the leisure centre at Fleming Park while Emma swims. For the past month or so, I’ve been using that time to update the orienteering map of the area, ready for the SOC Summer Series event there in August. The fairways of the old golf course are becoming increasingly overgrown, aided by the planting of lots of new trees. I therefore wanted to update the map to the latest sprint specification, ISSprOM 2019-2, so that I could make use of the ‘rough open with scattered bushes’ symbol. Although it hasn’t shifted much since 2016, I thought it was also time to update magnetic north.

The following directions for OpenOrienteering Mapper (OOM) are based on those I received from the club’s mapping officer, Mark Light.

Updating the symbol set

  1. Download and unzip the latest symbol set from the British Orienteering website.
  2. To make your life easier in step 4, delete any unused symbols from the map.
    1. Right-click on the symbol palette and click Select Symbols > Select Unused.
    2. Right-click on any unused symbol in the palette and select Delete.
  3. Select Symbols > Replace symbol set… and select the appropriate scale set of icons from the download in step 1. If, as in the case of this map, the scale doesn’t match, you’ll get a warning.
  4. Provide a mapping for each symbol in the old set to the new.
    1. You can use the Symbol mapping dropdown at the bottom of the dialogue to determine whether it matches by textual name or ID number by default.
    2. Work your way down the list, checking where there is no mapping specified. If the old symbol is something custom that you want to carry across, for example, text for a legend, leave the selection as -None-. Similarly, if you’re not sure what it should translate to, just take a note of the number and leave it as -None-.
    3. Click OK.
  5. Map any symbols you were unsure about.
    1. Right-click on each symbol in the symbol window and click Select all objects with this symbol.
    2. If you can now work out what they should be mapped to:
      1. Select the new symbol in the symbol window.
      2. Click the Switch symbol icon in the toolbar.
      3. Right-click on the old symbol and select Delete.
  6. Particularly for any custom symbols you’ve carried across, check that they are still visible on the map. It may be that, as with this map, they have been given a colour that is now lower down the colour table than some symbol that appears above them. Either double-click on the symbol and edit it to use the correct colour from the specification, or select View > Color window and re-order the colours so that the symbols reappear.
  7. If, in step 3, you received a warning about the symbol and map scales not matching, now is the time to fix that.
    1. Select Symbols > Scale all symbols….
    2. Enter the scale percentage. For example, when using 1:4,000 symbols on a 1:5,000 map, enter 80%.
    3. Click OK.

Updating magnetic north

  1. Determine the magnetic declination applicable to your map.
    1. Open this website in a browser.
    2. Drag the marker to the location of your map and note the current magnetic declination. OOM will only accept two decimal places, so don’t worry too much about the exact position of the marker.
  2. Ensure that the map is correctly georeferenced with the correct projection, in our case the Ordnance Survey British National Grid (EPSG:27700). These settings can be found under Map > Georeferencing….
  3. If it doesn’t already exist, create a new ‘part’ in OOM for the map furniture (borders, legend, north lines, and anything that shouldn’t change with magnetic north).
    1. Select Map > Add new part….
    2. Name the part Furniture.
    3. Click OK.
    4. A new dropdown appears in the toolbar showing the currently selected part. Select the default part.
    5. Select the items that make up the furniture, either on the map or via their symbols. Select Map > Move selected objects to > Furniture. Repeat until all of the furniture is in the new part.
    6. Under Map > Georeferencing… enter the declination you retrieved in step 1 and click OK. This will rotate all parts of the map to account for the current position of magnetic north.
    7. Now you need to rotate the furniture back.
      1. Select the Furniture part.
      2. If you don’t already have a grid displayed, select View > Show grid.
      3. Select Tools > Rotate objects and rotate the furniture part to align with the grid.

Helm: for better or worse?

Monday, June 9th, 2025

A few weeks ago, one of my colleagues at JUXT gave a presentation on Helm, and this started me thinking back over my own experiences with the tool. It appears I already had a lot to say on the subject back in 2018! Since then, I’ve made extensive use of Helm at CloudBees where we had an umbrella chart to deploy the entire SaaS platform, and at R3. It’s that latter experience that I’m going to talk about in this post.

Helm and Corda

The main Helm chart in question is the one for R3’s Corda DLT, which you can find on GitHub. The corda.net website has, unfortunately, been sunset, but my blog post describing the rationale for using Helm is still available on the Internet Archive. Another article explains how the chart can be used, along with those for Kafka and Postgres, to spin up a complete Corda deployment quickly.

As an aside, it was a conscious decision not to provide a chart that packaged Corda along with those Kafka and PostgreSQL prereqs. The concern was that customers would take this and deploy it to production without thinking about what a production deployment of Kafka or Postgres entails. Not to mention wanting to make it clear that these were not components that we, as a company, were providing support for.

As a cautionary tale: despite its name, the corda-dev-prereqs chart referenced in that last article (which creates a decidedly non-HA deployment of Kafka and PostgreSQL) found itself being deployed in places it shouldn’t have been…

More Go than YAML

Whilst the consumer experience with the Helm chart was pretty good, things weren’t so rosy on the authoring side. The combined novelty of Kubernetes configuration and Go templating was just too much for many developers. While some did engage, ownership of the chart definitely remained with the DevOps team that authored the initial version, rather than the application developers.

The complexity of the chart also ramped up rapidly. With multiple services requiring almost identical configuration, we soon moved from YAML with embedded Go to Go with embedded YAML! That problem is not unique to Helm; I remember having the same issue with JSPs many moons ago.

The lack of typing, combined with the fact that all functions return strings, started to make the chart fragile, particularly without any good testing of the output with different override values.

Two charts are not better than one

If you look at the GitHub repository, you might wonder why most of the logic for the chart sits in a separate library chart (corda-lib) on which the main corda chart depends. What you can’t see is that we had a separate Helm chart for use by paying customers. This was largely identical to the open-source chart, but included some additional configuration overrides. The library chart was an attempt to share as much logic as possible between the two.

What we couldn’t share was the values.yaml itself and the corresponding JSON schema, and as a consequence, there was always a certain amount of double fixing that went on. What we really needed was a first-class mechanism for extending a chart.

Helm hooks

Although there were other niggles, the last issue I’m going to talk about is the use of Helm hooks. Corda has two mechanisms for bootstrapping PostgreSQL and Kafka: an administrator can use the CLI to generate the required SQL and topic definitions, or the chart can automatically perform the setup when the chart is installed. We expected customers to use the former mechanism, at least in production, but the latter was used in most of our development and testing, and by the services team in pilot projects. The automated approach used a pre-install hook to drive a containerised version of the CLI to perform the setup.

So far, so good. We then started to look at using ArgoCD to deploy the chart. ArgoCD doesn’t install Helm charts directly; instead, it renders the template and then applies the resulting Kubernetes configuration. It does have some understanding of Helm hooks, converting them into ArgoCD sync waves, but it doesn’t distinguish between install and upgrade hooks. This would lead ArgoCD to try to rerun the setup during an upgrade.

Now, here some responsibility must lie with the Corda team, as those setup commands should have been idempotent, but they weren’t. The answer, for us, was to use an alternative to ArgoCD (worth a separate post), but our customers might not have the luxury of that choice.

Summary

Does all of the above mean that I think Helm is a bad choice? As always, it depends. For ‘packaged’ Kubernetes configuration, I still believe it’s a better choice than requiring consumers to understand your YAML well enough to apply suitable modifications with Kustomize. In particular, pushing Kustomize opens your support organisation up to dealing with customers who are deploying your solution with essentially arbitrary YAML.

In the case of Corda, we underinvested in building the skills to make the best of Helm. Fundamentally, though, I’d suggest that we simply outgrew it. If I were still working on its evolution, the next step would undoubtedly have been to implement an operator and write all of that complicated logic in a language that properly supports testing and reuse.

WordPress is broken by PHP in Jammy update

Saturday, August 3rd, 2024

This blog has been a bit neglected for the last few years. I miss the opportunity to reflect on something I’ve done and write up those thoughts. We’ll see whether this is a one-off or the start of something new!

The first task was to make sure everything on the site was up-to-date. WordPress does a pretty good job of automatically applying security fixes, but the Ubuntu VPS needed an upgrade. The update to Jammy went smoothly enough, but attempting to access the site showed the raw WordPress PHP source. The enabled modules for Apache included a couple of broken symlinks to PHP 7. After enabling those for PHP 8.1, I saw a WordPress error page: There has been a critical error on this website.

The WordPress PHP compatibility matrix indicates that there are still exceptions with PHP 8 versions. Time to get PHP 7 back…
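Jammy’s own repositories only carry PHP 8.1, so reinstating PHP 7 means pulling it from elsewhere. A sketch using Ondřej Surý’s widely used PPA (the 7.4 version and the extension list are assumptions, to be matched to what the site actually uses):

```shell
# The ppa:ondrej/php archive still provides PHP 7.4 builds for Jammy
sudo add-apt-repository ppa:ondrej/php
sudo apt update
sudo apt install php7.4 libapache2-mod-php7.4 \
     php7.4-mysql php7.4-xml php7.4-mbstring php7.4-curl php7.4-gd
```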

Then re-enable the PHP 7 Apache modules:
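With a PHP 7 runtime back on the box, the module swap looks something like this (module names assume 8.1 and 7.4):

```shell
sudo a2dismod php8.1            # disable the PHP 8 Apache module
sudo a2enmod php7.4             # re-enable PHP 7
sudo systemctl restart apache2  # pick up the change
```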

With the site rendering again, I thought I was done, but on posting this entry the dreaded critical error reappeared. The Apache error log, /var/log/apache2/error.log, revealed errors in lightbox-plus and crayon-syntax-highlighter of the form Compilation failed: invalid range in character class. From PHP 7.3, a hyphen in the middle of a character class needs to be escaped in regular expressions. I could have rolled the PHP version back further but decided to patch the offending files. (I probably need to review the plugins in use on the site and remove those that are no longer supported.)
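The fix itself is mechanical. Here’s a sketch of the kind of change involved, applied to a stand-in file; the real patterns in the two plugins differ, but the shape is the same:

```shell
# Stand-in for the kind of pattern that tripped PCRE in the plugins:
# the '-' between \w and _ is parsed as an invalid range from PHP 7.3
printf '%s\n' 'preg_match("/[\w-_]+/", $str);' > /tmp/plugin-snippet.php

# Escape the mid-class hyphen so PCRE treats it as a literal
sed -i 's/\[\\w-_\]/[\\w\\-_]/' /tmp/plugin-snippet.php

cat /tmp/plugin-snippet.php
```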

And, finally, we’re back in business!

Update time

Thursday, September 5th, 2019

WordPress has been nagging me for some time that I needed to update the version of PHP this blog is running on and, in particular, Jetpack had finally given up on me. The sticking point has been that the DigitalOcean droplet it’s running on has been stuck back on Trusty Tahr, and a previous attempt to upgrade it had gone awry.

I finally took the plunge and set up a new droplet running Bionic Beaver and eventually found the right combination of PHP modules to get everything running again. Whilst I was at it, I ticked another item off my todo list and enabled TLS (trivial with the aid of Certbot and Let’s Encrypt). A late evening but nothing too painful.

On the downside, when I first set the blog up (back in 2005) I used Gallery to manage images. The WordPress plugin died a while back but the Gallery install itself failed to play nicely with the new PHP version. As a consequence, the item to write a script to locate all those <wpg2id> tags and replace them with the appropriate images still remains very much on my todo list. Oh, and then there’s all those GPX files that were being displayed with Google Maps…

Knative Intro @ Devoxx UK

Thursday, May 30th, 2019

I presented an introduction to Knative at Devoxx UK, the recording for which can be found below. I’m afraid I deviated somewhat from the abstract, given the changes to the project in the five months since I submitted it. With only half an hour, I probably shouldn’t have tried to cover Tekton as well, but I wanted an excuse to at least touch on Jenkins X, however briefly! The demo gods largely favoured me, except when hey failed to return (not the part of the demo I was expecting to fail!). The script and source for the demo are on GitHub, although I’m afraid I haven’t attempted to abstract them away from the Docker Hub/GCP accounts.

Debugging with Telepresence

Monday, February 11th, 2019

I’ve spent the last few days trying to debug an issue on Kubernetes with an external plugin that I’ve been writing in Go for Prow. Prow’s hook component is forwarding on a GitHub webhook and the plugin mounts in various pieces of configuration from the cluster (the Prow config, GitHub OAuth token and the webhook HMAC secret). As a consequence, running the plugin standalone in my dev environment is tricky, but just the sort of scenario that Telepresence is designed for.

The following command is all that is needed to perform a whole host of magic:
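Reconstructed here with Telepresence 1.x’s swap-deployment syntax; the deployment name and mount path match those described below, while the exposed port and the plugin’s flags are illustrative:

```shell
# Swap out the in-cluster deployment and run the plugin binary locally,
# with the deployment's volumes synced under /tmp/tp
telepresence --swap-deployment my-plugin-deployment \
             --mount /tmp/tp \
             --expose 8888 \
             --run ./my-plugin \
                 --config-path /tmp/tp/prow-config/config.yaml \
                 --hmac-secret-file /tmp/tp/hmac/hmac \
                 --github-token-path /tmp/tp/github/oauth
```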

  • It locates the my-plugin-deployment deployment already running in the cluster and scales down the number of replicas to zero.
  • It executes the my-plugin binary locally and creates a replacement deployment in the cluster that routes traffic to the local process on the exposed port.
  • It finds the volumes defined in the deployment and syncs their contents to /tmp/tp using the mount paths also specified in the deployment.
  • Although not needed in this scenario, it also sets up the normal Kubernetes environment variables around the process and routes network traffic back to the cluster.

Now, it was convenient in this case that the binary already exposed command-line arguments for the configuration files, so that I could point them at the alternative path. Failing that, you could always use Telepresence in its --docker-run mode and mount the files onto the container at the expected location.

And the issue I was trying to debug? I had used the refresh plugin as my starting point, and this comment turned out to be very misleading. The call to configAgent.Start() does actually set the logrus log level based on the Prow configuration (info by default). As a consequence, everything was working as it should; my debug statements just weren’t outputting anything!