Hardening with ssh-audit

My comment the other day about how I didn’t understand SSH encryption or key types very well got me thinking that maybe it’s something that I should understand a bit more.

Sadly I still don’t understand it, but thankfully you don’t have to because of the amazing ssh-audit tool, developed by Joe Testa (which is an excellent name for a penetration tester).

It tests both hosts and clients. You either give it a host:port to scan, or run it as a server; when a client connects it will print information about the encryption schemes that client supports. It is not particularly reassuring when you see this printed in your terminal:

-- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency

There are some hardening guides for host and client configs for various distros—to be honest I would rather have just looked at an example config than run a huge command that uses sed to edit what it expects to be in there. The huge commands did work though, and the client guide even translated over to macOS.
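For reference, all that sed-ing boils down to a handful of crypto settings in sshd_config. The result is shaped something like this (an illustrative sample only; the exact algorithm lists depend on your OpenSSH version, so follow the guide for your distro):

# Sample hardened crypto settings (illustrative, not copied from any guide)
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
HostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com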

After a quick test connecting from various devices, I don’t seem to have cut off access for anything. I was able to:

  • Connect to my Synology DS420j (which has SSH security set to “High”)1
  • Connect to Ubuntu Server 22.04 from:
  • Push to GitHub

Of course the best bit is that ssh-audit is written in Python—so I was expecting to go through pip hell—BUT it has a Docker image that you can run instead:

$ podman run -it -p 2222:2222 docker.io/positronsecurity/ssh-audit host-to-check:port
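The -p 2222:2222 mapping is there for client-audit mode: pass -c instead of a host and ssh-audit listens on port 2222 (the default), then you connect to it with the SSH client you want to check:

$ podman run -it -p 2222:2222 docker.io/positronsecurity/ssh-audit -c
$ ssh -p 2222 localhost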

So there’s basically no excuse not to just give it a quick check and make sure you’re up to snuff.

  1. The “High” setting scored pretty decently on the audit, enough that I didn’t bother trying to alter the config further. 

  2. Both of these apps generate their own RSA keys, but they’re obviously generating them with whatever the current recommendations are.


Setting up podman-remote

podman-remote is a way of running Podman commands on a remote machine (or as a different user) transparently. It adds an SSH layer between the podman CLI and where the actual podding happens.

For the uninitiated, Podman is basically more-open-source Docker that can be used (mostly) as a drop-in replacement. The biggest difference is that it can run rootless—which means that even if a process escapes the container, it’s still limited by standard Unix permissions from doing anything too nasty to the host.

There don’t seem to be any foolproof instructions on setting up podman-remote from zero, so I thought I’d write down what I did here. I’ll refer to the computer that will run the containers as the “host machine” and the computer that you’ll be running podman commands on as the “local machine”.

The first thing to do is log in to the host machine and enable the podman.socket systemd user service:

$ systemctl --user enable --now podman.socket

On one of my machines I got an error:

Failed to connect to bus: $DBUS_SESSION_BUS_ADDRESS and $XDG_RUNTIME_DIR not defined (consider using --machine=<user>@.host --user to connect to bus of other user)

I don’t know why these environment variables are not getting set, but I managed to find the values they should have, set them, and re-ran the command:

export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$UID/bus
export XDG_RUNTIME_DIR=/run/user/$UID

We should be able to see that it’s running:

$ systemctl --user status podman.socket
● podman.socket - Podman API Socket
     Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled)
     Active: active (listening) since Fri 2023-03-24 10:38:41 UTC; 1 month 11 days ago
   Triggers: ● podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1000/podman/podman.sock (Stream)
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/podman.socket

Mar 24 10:38:41 tycho systemd[917]: Listening on Podman API Socket.

Make note of the Listen: path, as we’ll need that later. If we’re running rootless, it should have the UID of the user that will run the containers in it—in my case that’s 1000.

Next we need to configure how to get to the host machine from your local machine. You can pass this config to the CLI every time, or set it via environment variables, but plopping it all in a config file was the most appealing approach for me. There’s even a Podman command that will add the config for you:

$ podman system connection add bruce ssh://will@hostmachine:22/run/user/1000/podman/podman.sock

The first argument is the name of the connection (that we’ll use to refer to this particular config). The second argument is a URI that specifies the transport (you can use a unix socket or unencrypted TCP, but SSH is probably best). The URI is in this form (with $SOCKET_PATH being the path we noted down earlier):

ssh://$USER@$HOST_MACHINE:$PORT/$SOCKET_PATH

We should see that ~/.config/containers/containers.conf has been updated with the new config:

[engine]
  [engine.service_destinations]
    [engine.service_destinations.bruce]
      uri = "ssh://will@bruce:22/run/user/1000/podman/podman.sock"
      identity = "/home/will/.ssh/id_ed25519"

Podman won’t use your SSH key by default; you need to add the identity entry yourself.
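Alternatively, podman system connection add accepts an --identity flag, which writes that entry for you:

$ podman system connection add bruce --identity ~/.ssh/id_ed25519 \
    ssh://will@hostmachine:22/run/user/1000/podman/podman.sock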

So I don’t know enough about SSH encryption schemes to explain this next bit. Basically if you use your “normal” RSA key, Podman won’t be able to make an SSH connection—seemingly even if manually ssh-ing to the host machine works. The solution is to generate a more secure ed25519 key on the local machine, and use that instead.

$ ssh-keygen -t ed25519

Then append the contents of id_ed25519.pub to authorized_keys on the host and you’re good to go.
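ssh-copy-id will do the appending for you (assuming the same user and host machine as before):

$ ssh-copy-id -i ~/.ssh/id_ed25519.pub will@hostmachine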

Let’s test that it works (replace bruce with the name you gave your connection):

$ podman --remote --connection=bruce ps
CONTAINER ID  IMAGE                                 COMMAND               CREATED       STATUS           PORTS                   NAMES
4dd7d2e41348  docker.io/library/registry:2          /etc/docker/regis...  4 days ago    Up 4 days ago    0.0.0.0:5000->5000/tcp  registry
0c986c819fc2  localhost/pixelfed-piper:prod-latest  https://pixelfed....  27 hours ago  Up 27 hours ago                          pixelfed-piper

Success! We can now chuck --remote --connection bruce (or -r -c bruce) into any podman command and have it run on another machine. Possibilities include:

  • Lazy deployments
  • Transparent builds on more powerful hardware (note that COPY and such read from the filesystem on the local machine!)
  • Quick debugging of remotely-deployed containers

And probably some more things, I’m quite new to this whole container lifestyle.
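As a concrete sketch of the “lazy deployments” idea (my-app and the port are hypothetical; the pattern is what matters):

$ podman -r -c bruce build -t my-app .
$ podman -r -c bruce run -d --name my-app -p 8080:8080 localhost/my-app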


@willhbr: Origins

At some point my memory will fail me and I won’t be able to recall some of these details. So I’ve written it down for future-me to enjoy, and while it’s here you might as well read it.

Zero: Computers

My first interaction with a computer was after showing my dad my cool Lego minifig with a little chainsaw1 from my “Arctic Explorers” set:

The LEGO arctic explorers set with lots of cool lego action happening

My dad then showed me “stress.exe” where you could use a variety of tools (including a chainsaw) to destroy whatever was displaying on the computer. 6-year-old me found this amazing.

The family computer we had was a classic beige monster probably running Windows 98, with those cool little speakers that would clip onto the sides of the huge CRT display.

One: Programming

Fast forward a few years and I was playing around with stop motion animation, and the MonkeyJam splash-screen mentioned that it was written in Delphi. My dad made some comment about how I could probably write something in Delphi—not knowing what programming involved, I found this idea quite daunting.

My memory is a bit hazy on the next bit: I think it may have been hard to get a Delphi license or something (free and open source tooling is the only way to go!) so I ended up tinkering around in Visual Basic for a while.

I don’t remember doing any “programming” in VB—instead I just used the UI editor to drag buttons in and then make them show other screens when you clicked them. I definitely made some mazes where all the walls were buttons and you had to carefully manoeuvre the cursor through the gaps.

Eventually the rate of questions that I was asking my dad hit a critical threshold, and I suggested that if he just helped me get started with Java, I could learn it all from a book and wouldn’t have to ask him anything. This was, of course, not true.

Nevertheless I got a copy of Java 2: A Beginner’s Guide, a fresh install of Eclipse, and I was on my way.

You know how most 11-year-olds would want to get home from school and play computer games for hours? Well instead I would slave away writing Java Swing applications.

With no concept of version control, backups, tests, code structure, or libraries I learnt how things worked by reading the Eclipse autocomplete and posts on the Sun Java forums. It’s amazing what you can do with unlimited time, a lot of patience, and no concept of the “wrong” way of doing something.

Two: The USB Years

At some point while I was off in Java land, my dad installed Ubuntu2 on the family computer (why? unclear). This started my dive into the world of ultra-small Linux distros. I played around with running Damn Small Linux from a USB drive in QEMU in Windows (with limited success), I then graduated to Puppy Linux which was more usable but still left a lot of space on my 1GB USB drive.

The exact USB drive that I used to have (and probably have in a drawer somewhere)

These tiny distros were great to pull apart and see how things worked, and the community was full of people who were doing the same kind of tinkering—rather than doing real work. This gave plenty of practice in booting from live CDs, installing Linux distros, and debugging the inevitable problems.

Eventually I scraped together enough money to buy my own computer—a $20 Windows ME-era desktop from a recycling centre3:

My first computer (well, one that looks just like it)

Side note: I’ve had amazing luck while writing this being able to find pictures of things. I searched for “Kingston 1gb usb drive” and “hp pavilion windows me desktop” and got images of exactly what I was looking for immediately.

I’m not sure of the exact specs, but it probably had a 600MHz clock, 128MB or 256MB of RAM, and a 15GB HDD. Even running a 100MB OS was challenging for this machine. I disassembled and reassembled this computer countless times as I would scrounge parts from other broken PCs. In the end I think it had 768MB of RAM and a 40GB HDD.

The next few years were spent trying to get my hands on newer hardware, and assembling a Franken-PC from the best bits. I think the peak of my power was a Pentium 4, and a 17” LCD monitor—a huge upgrade from the ~14” CRT that I had started with.

Three: Netbook

It’s 2010. The netbook craze is at its peak.

My parents helped me buy an Acer Aspire One netbook. This thing was the absolute bomb; it had more storage than all my Franken-PCs combined, and built-in wifi that gave me reliable internet anywhere in the house4.

By this point I was riding the Ubuntu train full-time, and so installed Ubuntu Netbook Edition 9.10 (Karmic Koala)5 on it immediately—replacing Windows 7 Starter Edition. The tinkering didn’t stop now that I had reliable hardware: I remember installing the Compiz settings manager to play around with things like wobbly windows, deciding that I wanted to remove them to save valuable system resources, and ending up uninstalling Compiz itself. Since Compiz is actually the window manager and not some layer that just wobbles your windows, I had a very fun time trying to fix my system while all my windows were missing their chrome.

Having a (somewhat) reliable computer meant that I got back to programming more, and I ditched Java Swing in favour of Ruby on Rails. The web is unparalleled in being able to make something and share it with others. Before, the things that I had made were mostly hypothetical to others, but once it’s on the web anyone can go and use it.

My most successful project was a fake stock market game, where you could invest in (and create) companies whose stock price would go up and down semi-randomly. Most of the fun was in giving your company a wacky name and description and then being amused when the stock price soared or tanked. This captured the zeitgeist of my school for a few weeks.

Free web hosting (specifically the old free Heroku tier) for real applications did so much to motivate me as I was learning web development. Being able to easily deploy your application without paying for dedicated hosting, setting up a domain, or learning how to set up a real server was so valuable in letting me show my work to other people. I was sad to see Heroku stop this tier, but glad to see things like Glitch filling the niche.

I also made my first Android app on my netbook. Running Eclipse and an Android emulator with 1GB of RAM to share between them seems horrifying now, but when that’s your only option you just make do. I do remember being massively relieved when I accidentally found out that I could plug my phone in and develop the app directly on that, instead of using the emulator.

Eventually I replaced the netbook with a MacBook (one of the white ones that were actually just called “MacBook”)—but still ran Ubuntu6 until I realised that Minecraft ran way better on OS X.

Despite having made applications on a variety of different platforms, I hadn’t strayed very far into different languages. I’d had to use Python for school and uni, but it’s not much of a change from Ruby. It wasn’t until 2014 and the introduction of Swift that I actually learnt about a statically-typed language with more features than Java. The idea that you could have a variable that could never be null or could not be changed was completely foreign to me.

Four: Peak Computing

The MacBook was eventually replaced with a MacBook Pro and then a MacBook Air when I was in university. At this point I felt like my computer could do anything. Unlike my netbook I could run external displays7, install any IDE, SDK, or programming language. There would be countless different tools and packages installed on my machines. I’d usually have both MySQL and PostgreSQL running in the background (until I realised they were wasting resources while not in use, and wrote a script to stop them).

Academically I know that my current computer(s) are much, much faster than any of these machines were. However I’m still nostalgic for the sense of freedom that the MacBook gave me. It was the only tool I had, and so it was the best tool for any job.

My MacBook, hard at work making an app that simulated circuits

Five: Down the Rabbit Hole

It must’ve been 2012 when I read “I swapped my MacBook for an iPad+Linode”. The idea was so enticing—having all the hard work happen somewhere else, being able to leave your computer working and come back to it being as though you never left.

However it seemed completely out of reach—it relied on you being able to use a terminal-based text editor, and while I could stumble around Vim it was nowhere near as effective as using TextMate. It was like my dream of having an electric longboard—hampered by the cold reality of me not being able to stand on a skateboard.

Much like my learning to skateboard (which was spurred by a friend helping me stand on their longboard, showing me that it wasn’t impossible)—my journey down the Vim rabbit hole started with a friend pointing out that “you can just use the Vim keybindings in Atom” (which allows you to do some Vim stuff, but also use it like a normal text editor).

At university my interests veered into more and more abstract projects. Instead of making apps and websites for people to use, I was learning Haskell to implement random algorithms and writing Lisp compilers.


So if you ever wondered “why is Will like this”, maybe this will help answer.

  1. Actually in terms of Lego scale, it was about half this guy’s height so not really a little chainsaw. 

  2. Just going from vibes on my memory of the Ubuntu versions, I think this would have been 7.04 (Feisty Fawn). 

  3. Credit to www.3sfmedia.com, who seem to restore and sell really old PCs? Or they’re just an out-of-date computer shop. 

  4. On my desktops I had to use a USB wifi dongle, which would interfere with my cheap speakers and make a buzzing noise whenever it was sending data. 

  5. This particular version of Ubuntu seemed to work really well on the netbook, and so even more than a decade later I am still nostalgic for this one version. 

  6. Installing Linux on Apple hardware is one of the most painful things you can do, I would not recommend it to anyone. Clearly The USB Years had made me foolhardy enough to try this. 

  7. I think the netbook could actually do this, but it would be a really bad time for all involved. 


Making a Website for the Open Web

The open web is complicated1. Over the last few months I’ve been trying to make my website work with as many standards as I could find. I couldn’t find a guide on all the things you should do, so I wrote down all the things I’ve done to make my website integrate better with the open web:

Accessibility

Running an accessibility checker on your website is a good start. I used accessibilitychecker.org, which can automatically detect some low-hanging improvements.

The easiest thing was setting the language on the root tag on the website, so that browsers and crawlers know what language the content will be in:

<!DOCTYPE html>
<html lang="en">

Since I’m still young1 and don’t have much trouble reading low-contrast text, I didn’t notice how close in brightness some of the colours I had picked were. I initially used coolors.co’s tool, but then ended up using the accessibility checking devtools in Firefox to ensure that everything was within the recommended range.

This was the colour that I had been using for the titles and links in dark mode. It’s not very readable on the dark background.

This is the new colour that I changed it to, which has much better contrast.

Add Feeds

I already had an RSS feed and a JSON Feed set up, but I double checked that they were giving the correct format using the W3C feed validator and the JSON Feed validator.

What I had missed adding was the metadata that allows browsers and feed readers to get the URL of the feed from any page. If someone wants to subscribe to your feed, they don’t have to find the URL themselves and add that; they can just type in your homepage and any good reader will work it out for them. This is just a single HTML tag in your <head>:

<link rel="alternate" type="application/rss+xml"
  title="Will Richardson"
  href="https://willhbr.net/feed.xml" />

JSON Feed didn’t take the world by storm and replace XML-based RSS, but it is nice having it there so I can get the contents of my posts programmatically without dealing with XML. For example I’ve got a Shortcut that will toot my latest post, which fetches the JSON feed instead of the RSS.
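The JSON Feed version is the same idea with a different type (this assumes your feed is served at /feed.json, as the JSON Feed spec suggests):

<link rel="alternate" type="application/feed+json"
  title="Will Richardson"
  href="https://willhbr.net/feed.json" />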

OpenGraph Previews

I wrote about this when I added them, since I was so stoked to have proper previews when sharing links.

My assumption had always been that services just fetched the URL and guessed at what image should be included in the preview (I guess this is what Facebook did before publishing the standard?).

Sites like GitHub really take advantage of this and generate a preview image with stats for the state of the repo that’s being shared:

An OpenGraph image from GitHub, showing the stats for this repo

Dark Mode

Your website should respect the system dark-mode setting, which you can get in CSS using an @media query:

@media(prefers-color-scheme: dark) {
  /* some dark-mode-specific styling */
}

This is fairly easy to support - just override the colours with dark-mode variants - but it gets more complicated if you want to allow visitors to toggle light/dark mode (some may want to have their OS in one mode but read your site in the other).

I won’t go into the full details of how I implemented this, but it boils down to having the @media query, a class on the <body> tag, and using CSS variables to define colours. Look at main.js and darkmode.sass for how I did it.
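The rough shape of it is something like this (the class name here is made up; see the linked files for the real implementation):

:root { --background: #fff; --text: #222; }
/* Follow the OS setting by default */
@media(prefers-color-scheme: dark) {
  :root { --background: #112; --text: #eee; }
}
/* main.js flips a class like this to override the OS setting */
body.force-light { --background: #fff; --text: #222; }
body { background: var(--background); color: var(--text); }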

Alias Your Domain to Your ActivityPub Profile

Something else I wrote about earlier. Not something that I think everyone needs to do, but if you like the idea of a computer being able to find your toots from your own domain, it’s probably worth doing. Especially because it’s quite straightforward.

Good Favicons

Back in my day you’d just put favicon.ico at the root of your website and that would be it. Things are a little more advanced now, and support a few more pixels. I used this website to turn my high-res image into all the correct formats. It also conveniently gives you the right HTML to include too.
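The generated markup ends up looking something like this (a typical set; yours will be whatever the generator hands you):

<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">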

Add robots.txt

I added a robots.txt file that just tells crawlers that they’re allowed to go anywhere on my site. The site is entirely static, and nothing is intended to be hidden from search engines.
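An allow-everything robots.txt is about as simple as web standards get:

User-agent: *
Allow: /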


If I’ve missed a standard that belongs on my website, please toot me!

  1. Citation needed.


Installing rsnapshot on Synology DS420j

I’ve got a shiny new Synology DS420j, and I’m in the process of re-implementing the wheels of computer suffering that I call my backup system on it. Part of this is setting up rsnapshot to create point-in-time snapshots of my backups, so I can rewind back to an old version of my data.

A marketing image of a Synology DS420j NAS

There are plenty of instructions on how to setup rsnapshot using Docker on the higher-end Synology models, but when you’re shopping at the bottom of the barrel you don’t have that option. We’ve got to install rsnapshot directly on the Synology - the most cursed Linux environment ever1.

All the instructions I could find were quite old, and the landscape seems to have changed. Synology have changed their package format, so old packages cannot be installed on newer DSM versions (according to this German forum post), which means anything that tells you to install packages from cphub.net probably doesn’t work any more. ipkg is also no longer maintained and has been replaced by Entware. Once I knew that, the process was relatively straightforward.

Preamble

You probably need to enable the rsync service, enable SSH, etc for this to work. I’m assuming that you’ve got a Synology already set up that you can SSH into and sync files to from your other computers. If you don’t, then you should sort that out and then come back. I’ll wait.

Install Entware

Entware have detailed installation instructions on GitHub. Remember that the DS420j is armv8; check yours with cat /proc/cpuinfo.

Once you’ve yolo-run the various install scripts and added enough backdoors into the device that holds all your most valuable information, you just need to add the scheduled tasks to keep Entware from being removed (take note of how this is done, as we’ll use the same mechanism later for rsnapshot).

Setup rsnapshot

Now we can use opkg to install rsnapshot:

$ sudo opkg install rsnapshot

And then edit /opt/etc/rsnapshot.conf to taste (note that rsnapshot requires tab characters between fields, not spaces). For example here’s mine:

config_version  1.2

snapshot_root  /volume1/rsnapshot

# Commands
cmd_cp  /bin/cp
cmd_rm  /bin/rm
cmd_rsync  /usr/bin/rsync
cmd_ssh  /usr/bin/ssh
cmd_logger  /usr/bin/logger
cmd_du  /usr/bin/du
cmd_rsnapshot_diff  /opt/bin/rsnapshot-diff

# Backups
retain  daily  28
retain  weekly  26

# Opts
verbose  2
loglevel  3
logfile  /var/log/rsnapshot.log
lockfile  /var/run/rsnapshot.pid

backup  /volume1/Backups  Backups/

Now check that the config is valid:

$ sudo rsnapshot configtest
Syntax OK
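At this point it’s worth running the first snapshot by hand and checking that it lands under snapshot_root (the first run creates daily.0):

$ sudo /opt/bin/rsnapshot daily
$ ls /volume1/rsnapshot
daily.0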

Schedule rsnapshot

We could use cron to schedule our rsnapshot job, but since Synology isn’t quite normal Linux, I think it’s best to use the built-in GUI rather than install more custom packages.

The task is fairly simple, just click “Create” > “Scheduled Task” > “User-defined script” and setup:

  • Task: “rsnapshot daily”
  • User: root
  • Schedule: every day, probably sometime when you’re not likely to be using the Synology. I set mine to run at 2am.
  • Task Settings: “run command” should be something like /opt/bin/rsnapshot daily (I used the full path Just In Case™ there is $PATH weirdness). If you keep the weekly retain line, you’ll also want a second task that runs /opt/bin/rsnapshot weekly.

Save the task and accept the dialog that tells you you’ve voided your warranty.

The task will be skipped if an instance is already running, you can use the Synology UI to start it ad-hoc, and you can easily see the status in the “View Result” screen. Which is a bit more user-friendly than cron.

This setup has successfully run on my Synology once (so far). Whether it will continue to work after reboots and software updates remains to be seen.

  1. Citation needed.


Geode: More Functions for Crystal

The only language I use for personal projects is Crystal. It is as easy to write as a dynamically typed language - so I don’t have to waste time telling the type system what I mean, but it has full type checking - so I also don’t waste time on easily preventable mistakes.

Crystal being the best programming language for side projects is a topic for another blog post. There’s a lot to say about how perfect it is.

Naturally this means that over the last 6 or so years that I’ve been using Crystal, I’ve assembled a messy collection of utility functions and classes that I share between projects.

I’ve collected some of the best bits into a library: Geode.

More bits will be added as I need them.

If for some reason you do use this, be aware that I am changing the behaviour of the stdlib, which might cause other things to break in weird and unexpected ways.
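For the unfamiliar, “changing the behaviour of the stdlib” is possible because Crystal lets you reopen any type, stdlib included. A made-up example of the kind of thing I mean (clamp_to_percent is not an actual Geode feature):

# Reopening a stdlib type adds the method to every Int32 in the
# program, including ones used by other libraries.
struct Int32
  def clamp_to_percent : Int32
    self.clamp(0, 100)
  end
end

puts 150.clamp_to_percent # => 100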


HOT NEW MARKDOWN EDITOR FOR MACOS

I just came across MarkEdit on Mastodon, and then instantly installed it. It’s advertised as being “TextEdit, but for Markdown” and it totally delivers on that.

A screenshot of this post, written in MarkEdit

Since I’ve started using my Mac more for general tinkering (instead of mostly just photos), I have not been able to find a decent no-nonsense Markdown editor. The options that go around seem to be:

  • MacDown, an open source split-screen editor which used to work quite well but has significant issues open for multiple years
  • IA Writer, Ulysses, and other “professional” apps. These are no doubt the best and most feature-complete, but for writing stuff here it seems like overkill.
  • TextMate/BBEdit/Atom/VS Code/etc. Having declared text editor bankruptcy by moving to Vim, I don’t really want to think about these. Plus their main focus isn’t going to be Markdown, and they’ll come with many more bells/whistles than I need.
  • Obsidian. This is what I had ended up using (I even wrote a plugin to write the file out to my GitHub Pages repo).

MarkEdit is really impressive for a project with an initial commit only two days ago (do people really scrub their version control history before publishing‽).

Since I’m writing this entirely to spread the news of a good app, I thought I’d ask ChatGPT to write a one-paragraph review:

Me: Write a one-paragraph review of MarkEdit, the new markdown editor for macOS. This review should be SEO-optimised for maximum engagement and clicks.


ChatGPT: MarkEdit is a cutting-edge markdown editor for macOS users. Its sleek and intuitive design makes it easy to create and edit documents in markdown format, with features like live preview, robust syntax highlighting, and powerful editing tools. Whether you’re a seasoned writer or a beginner, MarkEdit offers a seamless experience for anyone looking to create high-quality, professional-looking documents in markdown. Its advanced feature set and user-friendly interface make it a top choice for anyone looking to take their writing to the next level. So, if you’re looking for the best markdown editor for your Mac, be sure to check out MarkEdit!

Couldn’t have said it better myself.


Dismissable Popup Shell in tmux

Ok so this is mostly a solution in search of a problem. Or a solution to the problem of “Will read the tmux man page too many times and wants to use all the features”. However there’s like a 5% chance this is actually useful, and it’s something that I’ve wanted to get working in tmux for a while. It turned out to be much simpler than I thought.

What I want is a persistent shell that I can quickly activate or dismiss to run simple commands - like checking the git status, making a commit, or finding a path to a file. What I usually do is open a split, run the command, and immediately close the split - I must open hundreds of new tmux panes each day. Gnome people might use Guake which does this inside the window manager.

So here’s the same thing working in tmux:

a screencast showing tmux with two panes, then an overlay window appears on top and a command is run before the overlay is dismissed

Anywhere in tmux I can press M-A (meta/alt-shift-a) and get a terminal window over the top of whatever I was doing. If I press M-A again, it will disappear - but any commands running in it will persist and can be brought back with the same keystroke.

This is based on the (somewhat recent) tmux display-popup feature, which runs a command in a window that floats over the top of your existing windows and panes. This is useful for utilities like fzf, which can show the search pane inside a popup instead of in the shell itself. The popups have a limitation though: they are not treated like a tmux pane, and a popup will only disappear when the command within it exits. So this makes my plan for a persistent shell in a dismissible popup seem difficult.

And it would be, if I wasn’t a massive tmux nerd.

How this works is that when you open the popup, it will create a super secret background tmux session. This session has the status bar hidden and all the keyboard shortcuts disabled, so it appears like it’s just a normal terminal. The popup then attaches to the background session using a new client (yep, that’s tmux within tmux). This gives you persistence between popups.

The background session actually has one key binding - M-A will detach from the session, exiting the client, and closing the popup.

The implementation turned out to be a lot simpler than I expected when I started:

# in tmux.conf
bind -n M-A display-popup -E show-tmux-popup.sh
bind -T popup M-A detach
# This lets us do scrollback and search within the popup
bind -T popup C-[ copy-mode

# in show-tmux-popup.sh, somewhere in $PATH
#!/bin/bash

session="_popup_$(tmux display -p '#S')"

if ! tmux has -t "$session" 2> /dev/null; then
  session_id="$(tmux new-session -dP -s "$session" -F '#{session_id}')"
  tmux set-option -s -t "$session_id" key-table popup
  tmux set-option -s -t "$session_id" status off
  tmux set-option -s -t "$session_id" prefix None
  session="$session_id"
fi

exec tmux attach -t "$session" > /dev/null

The key-table popup is what turns off all the keyboard shortcuts. It’s not actually turning anything off, it’s just enabling a collection of key bindings that doesn’t have any of the standard shortcuts in it - just the two we’ve added ourselves: one to detach and one for copy-mode.

You may be thinking “Will, won’t you end up with a bunch of weird secret sessions littered all over the place?” - if you were you’d be absolutely right. This is less than ideal, but where tmux closes a pane it opens a window. Or something. We can use the filter (-f) option to hide these secret sessions from anywhere that we see them, for example in choose-tree or list-sessions:

# in tmux.conf
bind -n M-s choose-tree -Zs -f '#{?#{m:_popup_*,#{session_name}},0,1}'

This will hide any sessions beginning with _popup_. The #{? starts a conditional, the #{m:_popup_*,#{session_name}} does a match on the session name, and rows where the result is 0 are hidden. You get the idea.
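The same filter works anywhere else you can see sessions, for example on the command line (with a recent tmux):

$ tmux list-sessions -f '#{?#{m:_popup_*,#{session_name}},0,1}'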

The next step is to have some way of promoting a popup shell into a window in the parent session - in a similar way to how break-pane moves a pane into its own window. That’s a challenge for another day. UPDATE: I did this almost immediately, it was not very hard.

Have a look at my dotfiles repo on GitHub to see this config in context: tmux.conf and show-tmux-popup.sh.


Adding OpenGraph previews to Jekyll

I’m on a tear adding support for open web standards to my website(s) - if I’ve missed one, let me know. I’ve just added RSS meta tags, which allow feed-reader plugins to suggest the RSS feed - rather than people having to find the RSS link (it’s at the bottom of the page) and paste that into their feed reader. There must be dozens of people who haven’t been reading my amazing content because it was too hard to add my site into their feed reader.

The other standard is OpenGraph which tells other services how to build a rich preview of your site. The obvious examples are links from social media sites (like Facebook, the author of the standard), or messaging apps:

a screenshot of a message showing a link to one of my blog posts, with a title and preview image

This is fairly simple to do: you just need to add some <meta> tags to the <head> of your site, for example my Jekyll template:


<meta property="og:url" content="{{ page.url | absolute_url }}">
<meta property="og:type" content="{% if page.type != null %}{{ page.type }}{% elsif page.layout == "post" %}article{% else %}website{% endif %}">
<meta property="og:title" content="{{ page.title }}">
<meta property="og:description" content="{{ site.description }}">
<meta property="og:image" content="{% if page.image != null %}{{ page.image }}{% else %}/images/me.jpg{% endif %}">

This will populate the correct URL (the absolute URL for the current page), guess the type either from a field on the page or whether it’s a post, and show a default image of me or a custom one specified by the page. This lets me customise posts with frontmatter:

---
title: "A custom OpenGraph post"
type: article
layout: post
image: /images/some-image.jpeg
---
# ... content of the post

I then checked using opengraph.xyz that my formatting was correct. As with most standards, different implementations will have varying quirks and may display things differently. Since I’m just adding a preview image, I’m not too fussed about 100% correctness.

This has also been done to my photos website so if a particular post is shared from there, it will get the preview thumbnail.


Create a Mastodon alias using GitHub Pages

If (like me) you’ve moved to Mastodon recently and are looking for a good way to show off some nerd cred, this is a great way to do it.

Mastodon (and the rest of the ActivityPub fediverse) uses WebFinger to discover user accounts on other servers and across different services. It’s used as the way to get the user’s ActivityPub endpoint (which can potentially be on a different domain or non-standard path). We’re going to make use of the “different domain” feature here.

The idea is to plop your WebFinger information on your GitHub Pages-powered site, so that Mastodon can dereference something like @willhbr@willhbr.net to an actual Mastodon instance (in my case, ruby.social). There are some plugins to do this, but they don’t work with GitHub’s “no plugins unless you build the site yourself and upload the result” policy. So we’re going pluginless.

Firstly, get the WebFinger info for your Mastodon account:

$ curl 'https://$MASTODON_INSTANCE/.well-known/webfinger?resource=$USERNAME@$MASTODON_INSTANCE'

It should look something like:

{
  "subject": "acct:willhbr@ruby.social",
  "aliases": [
    "https://ruby.social/@willhbr",
    "https://ruby.social/users/willhbr"
  ],
  "links": [
    {
      "rel": "http://webfinger.net/rel/profile-page",
      "type": "text/html",
      "href": "https://ruby.social/@willhbr"
    },
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://ruby.social/users/willhbr"
    },
    {
      "rel": "http://ostatus.org/schema/1.0/subscribe",
      "template": "https://ruby.social/authorize_interaction?uri={uri}"
    }
  ]
}

Save that into /.well-known/webfinger in the root of your Jekyll site, and tell Jekyll to include the hidden directory when it builds:

# in _config.yml
include:
  - .well-known

Then just commit and push your site to GitHub. Once it’s deployed, you should be able to search for yourself: @anyrandomusername@$MY_WEBSITE_URL.

Since the server is static, it can’t change the response based on the resource query parameter, but since (I assume) you’re doing this for yourself on your personal website, that shouldn’t matter too much.

Needless Polishing

That’s cool, but we can do better. Replace the .well-known/webfinger file with a template:

---
---
{{ site.webfinger | jsonify }}

Then you can keep the WebFinger info in _config.yml so it’s more accessible (not in a hidden directory) and have it checked for syntax errors when your site builds. I’ve translated mine to YAML:

# in _config.yml
include:
  - .well-known

webfinger:
  subject: "acct:willhbr@ruby.social"
  aliases": ["https://ruby.social/@willhbr", "https://ruby.social/users/willhbr"]
  links:
    - rel: "http://webfinger.net/rel/profile-page"
      type: "text/html"
      href: "https://ruby.social/@willhbr"
    - rel: "self"
      type: "application/activity+json"
      href: "https://ruby.social/users/willhbr"
    - rel: "http://ostatus.org/schema/1.0/subscribe"
      template: "https://ruby.social/authorize_interaction?uri={uri}"

Nerd cred achieved, without a plugin. Toot me.