Apple Watch Running Apps

This year I’m running the Sydney Marathon, and so I’ve got a lot of running on the cards for the next few months. I’ve been using an Apple Watch to track my runs since early 2019, first a Series 3 and now a Series 8. Here’s what I’m using to keep track of training and races.

WorkOutDoors is a kitchen-sink-included Apple Watch activity tracker, mostly focussed on running, cycling, and hiking in areas where a map is required. Some Garmin watches have maps built in, which is super useful to make sure you take the right turns on a trail run.

Often I’ll do trail runs with my friend Max1, who is always organised enough that I don’t need to know where we’re going. I bought WorkOutDoors as a backup but then never got around to using it until a few months ago. It’s a bit intimidating, but it’s super useful for trail runs where you’re not sure exactly which track you need to be on.

Last month I ran UTA 22 in the Blue Mountains just to the west of Sydney. I used WorkOutDoors to not just show a map, but also show the route with elevation and key points of interest (like the aid station) so I knew what to expect as I was running. Perhaps it takes some of the adventure out of the run, but it was useful to know how much of an ascent or descent was coming up. Without it, I would have been trying to guess where I was based on a patchy recollection of the elevation profile.

You need a GPX file to show the route in WorkOutDoors. Some races (like UTA) supply an official one, but if I’m doing a training run the best tool I’ve found to make one is the Garmin Connect course planning tool. You don’t need a Garmin watch to use it, but you do need to make an account. Once you’ve made a route by clicking waypoints and adjusting the path-finding, you can download the GPX file.

The “Routes” section of WorkOutDoors has an “Import” button where you can load the GPX file from the iOS Files app. You can then give it a name, send it to the watch, and tell the watch to download the surrounding map tiles. The trick that confused me initially was that you need to go into the settings for WorkOutDoors on the watch and select an active route to have it appear on the map when you start an activity. If you don’t do this you’ll still get a map, which can be useful, but not as useful as having the real route.

If I’m just doing a run from home—where I don’t need a map—I’ll track the run using the standard Workouts app on the Apple Watch. It’s not perfect: I’d really like the ability to increase the size of certain metrics and make better use of the screen real estate. Currently about 30% of the screen is just empty, and I can’t use that space to make the existing metrics more readable, only to add more clutter. It does work reliably though, and I’m used to reading the tiny numbers.

I’m a bit miffed that the ability to have completely custom metric sets was removed a few years ago. You can’t put any metric on any screen; some metrics are restricted to particular pre-defined screens. For example, you can’t have an altitude graph on the same screen as your pace. That being said, the only screen I really care about is the main one. This is configured to show time, distance, rolling pace, and average pace. The second screen I use exclusively during recovery runs to show the heart rate zone indicator.

The reason I want to see the heart rate indicator is another frustrating design decision. You can have pre-defined runs with a target, either time-based, distance-based, or “custom”. I would like to be able to set up a “recovery run” with a time goal and a heart rate alert to catch me if I’m putting in too much effort. However, the alerts (including both heart rate and pace) are defined globally for all runs, not for a particular flavour, so if I set a heart rate goal I risk forgetting about it and being spammed with alerts when I do my next run. So instead I just scroll to the second screen and glance at the heart rate zone every so often.

“Custom” runs are a welcome addition, as they make interval training much easier. You can set work and rest periods defined either by distance, time, or “open” (you double-tap the screen to advance to the next interval). They have a custom screen with some interval-specific info—but it seems like you can’t customise that view at all.

Custom runs are definitely only designed for intervals, however. I wanted to set up a 5K run that included a warmup as part of the workout. This would skip having to fiddle around and swap to a new workout after the warmup; instead I could just go from my warmup straight into the main course. This didn’t end up working because the “distance” selector for an interval only goes in 5-metre increments, so I would have had to scroll 1000 times to put in my 5K goal. I just put up with stopping and starting a new workout.

The iOS Fitness (previously “Activity”) app is reasonable for viewing information about runs, but the last few updates have sacrificed usability in this area to put more focus on Fitness+—which I have no interest in. For example, it now only shows your latest activity on the main screen (previously it would show multiple) in order to make room for a weekly “trainer tip” video.

Instead I use HealthFit2, which reads the same HealthKit data, but surfaces much more information. The main view lists all activities with nice big maps. Each activity has graphs for pace, elevation, and heart rate, and a whole host of other stats. Some of this is available in the Fitness app, but not easily accessible.

What the Fitness app doesn’t have is nice graphs for keeping track of activity per week, month, or year. I’m currently using the weekly “kilometres run” graph to keep up with my marathon training. This is much more actionable than the trends shown in the Fitness app, which work on a fairly long time frame (previous 3 months compared to the last year) and offer frustratingly obtuse advice—if I start running significantly further, I get told off because my average pace is dropping.

If you’re at all serious about running and use an Apple Watch to track your exercise, buying both WorkOutDoors and HealthFit (about $10 each, HealthFit has an optional subscription for minor features) dramatically improves the experience of using the watch while trail running and visualising the data afterwards.

  1. Website pending. 

  2. Weirdly they don’t have a website listed anywhere, only a Facebook page. 


Repairing My Roborock S6

About three weeks ago my previously-trusty Roborock S6 (named Henry) stopped halfway through a clean. Usually this means that he found a tasty looking cable or shoelace and got tangled up, but when I got home he was just sitting in the hallway unobstructed. I popped him on his base, and didn’t think much of it. The next time he was scheduled to clean up I got a notification saying that the laser distance sensor had malfunctioned, and that I should remove any obstructions and retry. That was cause for more concern.

I sat him in the middle of the floor and turned him on, and sure enough the LiDAR sensor (housed in the knob on top of the vacuum) didn’t spin. After trying and failing to start three times, I got the error notification again.

After a look online, this seems to be a reasonably common failure, and the official advice is to contact customer support. So I dutifully contacted Roborock and explained the failure, and eventually they quoted me $70 for an “assessment” of whether it was repairable, and then estimated that a repair could cost $60-200. I’d also have to pay for shipping either way, which I’d conservatively estimate at $35 each way.

Customer service also pointed out that the S6 had been “phased out”, spare parts included. I was not particularly thrilled at the prospect of paying $140 to send Henry off to someone who didn’t have the parts needed to do the repair.

Naturally, I disassembled Henry to see if I could notice anything obviously wrong—my concern was that something had just got lodged and was preventing the LiDAR from spinning. Disassembly was straightforward, the only trick being that you have to pry the front cover off with a little bit more force than I’d be comfortable with if I didn’t know that was the correct procedure.

Henry with his top off

The LiDAR “laser distance sensor” module is the black and orange unit in the centre.

This didn’t reveal any obvious failures, but it did give me confidence in replacing the LiDAR unit myself. It’s a self-contained slide-in component—you just undo some screws and it disconnects from a single port that connects it to the rest of the vacuum.

Now that I knew what the part looked like, and that doing the replacement would be easy, I found a replacement part on AliExpress for $70 (with shipping included). I don’t think I would’ve trusted the compatibility advertised in the description if I didn’t know the shape of the component I was looking for. The shape and screw locations matched, and it would be weird for Roborock to make two seemingly self-contained parts that are physically identical but incompatible in software.

The part arrived after about a week, and it had some subtle differences in the design but not in the overall shape. It turned out the new part was coded LDS01RR but the broken one was LDS02RR, so perhaps this was made for the S5 originally. I slotted it in, booted the vacuum up (sans top) and it worked perfectly. After putting the top back on, Henry was able to catch up on all the vacuuming he’d missed in the last three weeks.

I’m glad the repair worked so I didn’t have to spend money buying a new part and shipping Henry off to Roborock. It’s not great that this part can seemingly fail spontaneously; I’ve looked for any blown-out components but haven’t seen anything. Naturally, I will hoard the old broken part in case the new one fails and I have to Frankenstein them together to get Henry going again.


Using JJ for the Version Control Operation Audit

So I just wrote about the version control operations that I use day-to-day. My new favourite thing is JJ—a git-compatible version control system that I’ve also written about before—so I thought I would explain how each of these operations is done with JJ.

View what’s about to be committed

So we’re already at a “well, actually” moment, because all changes in JJ are automatically committed, but basically jj diff will do what you want.

Making a commit

You do jj new to start a new commit, jj describe to set the commit message, or jj commit to set the commit message and start a new commit in one go.

Uploading a change to be reviewed

This depends on your workflow, but jj git push --all will upload every branch to your remote. I also use jj git push -c @- to create a new auto-named branch, and jj git push -b 'glob:willhbr/push-*' to upload every auto-named branch. These all sit behind convenient aliases that I’ve mentioned before.
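
For reference, those aliases (shown in full in an earlier post) look like this in my ~/.jjconfig.toml:

[aliases]
cl = ['git', 'push', '-c', '@-']
push = ['git', 'push', '-b', 'glob:willhbr/push-*']
upload = ['git', 'push', '--all']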

If you’re submitting a change to someone else’s repo via your own fork, it works really well to set the upstream remote to be theirs, and origin to be yours, then edit the repo config to pull from upstream and push to origin:

[git]
push = "origin"
fetch = "upstream"

Altering a change based on code review feedback

You can do this a few ways:

jj edit $change to swap your working copy to point to the change you want to alter, but this makes it a little trickier to see what your alterations are since you’re editing the change directly (jj diff will show the diff for the whole change). There are ways around this using jj obslog but that’s more work.

jj new $change will create a new change on top of the target you want to alter. You can make changes, view the diff compared to the target (the parent) with jj diff, and then do jj amend to move the changes into the target.

Of course you could just do jj new anywhere, make your edits, and then do jj squash --into $change to move the changes. This works from anywhere to anywhere1. This does run an increased risk of creating conflicts, but you should live dangerously every once in a while.
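
A minimal sketch of the jj new approach, using jj squash to fold the edits back into the parent (the change ID here is made up):

$ jj new pqrst    # start a fix-up change on top of the change under review
# ...make the edits the reviewer asked for...
$ jj diff         # shows only the fix-up edits
$ jj squash       # fold them back into pqrst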

Revert a file back to the original state

Either jj restore --from=$change <paths>, or jj diffedit (I haven’t used that one).

Alternatively I just do jj split and then jj abandon on the commit that has the changes I don’t want.
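
For the restore option, that looks something like this (the file path is made up):

$ jj restore src/parser.cr                   # drop working-copy edits to one file
$ jj restore --from 'trunk()' src/parser.cr  # or reset it to how it looks on main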

Splitting a change in two

It’s just jj split. No tricks.

Merging two changes into one

I’d like to be able to do this with a mercurial-style histedit-and-fold, but JJ doesn’t have histedit yet so the next best thing is jj squash --from $a --into $b, and then jj abandon the empty commit.
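
With made-up change IDs, that looks something like:

$ jj squash --from qk --into wm   # move qk's changes into wm
$ jj abandon qk                   # drop the now-empty change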

Writing dependent changes

This depends on the review system you’re using, but in the common branch-based ones (GitHub/GitLab) you just use jj git push -c $change to create a branch that can be uploaded for review. This won’t move as you add more commits, so you don’t have to remember to branch before you continue working.

Reordering dependent changes

JJ doesn’t have a mercurial histedit command (yet), so I’d do this with multiple rebase -s X -d Y invocations. This is less than ideal, but gets the job done.
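
For example, to swap two stacked changes so the second one can be submitted first (change IDs made up, with the chain sitting on main):

# before: main -> qq -> ww
$ jj rebase -s ww -d main   # pull ww off qq and onto main
$ jj rebase -s qq -d ww     # re-stack qq on top of ww
# after: main -> ww -> qq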

Make a dependent change independent

jj rebase -r @ -d main will pop the working copy change off its parent and put it on top of main.

Context switch between changes

Either jj edit or jj new, depending if you want to be editing the change directly, or a new change on top of it. Since your working copy is always recorded in a commit, there’s no need to have any stash mechanism.

Jump back to main

jj new main, and you can do this at any point because you don’t have to worry about stashing.

Test someone else’s change

This is another one that depends on your workflow. If the changes have been pushed to a remote you’ve already got set up, you just need to jj git fetch --all-remotes and then jj new $branchname. If the change is in someone else’s fork, you’ll need to jump through a couple of hoops to add the remote first, fetch from it, then start a new change on top of their branch.
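
For the fork case, those hoops look roughly like this (the remote name, URL, and branch are all made up):

$ jj git remote add max https://github.com/max/project.git
$ jj git fetch --remote max
$ jj new their-feature@max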

Build off someone else’s change

This looks just the same as testing someone’s change, you just start writing some code. You might need to fetch and rebase if they update their code.

Rolling back a change

jj backout -r $change will create a new commit that reverses everything done in $change. This may have some conflicts, depending on how old $change is.

Update your work based on newly-merged code

jj git fetch --all-remotes is my go-to; I have this aliased as jj sync. It only fetches though; it doesn’t actually alter any of your pending changes. I then run jj rebase --skip-empty -d 'trunk()' (aliased to jj evolve) to put my current changes back on top of main. If I was working on top of someone else’s change, I would have to replace trunk() with their branch name.

Show what changes are pending

I think the default jj log query shows too much stuff, so I’ve got a custom query that will typically show less than half a screen of output:

[revsets]
log =  '@ | ancestors(trunk()..(visible_heads() & mine()), 2) | trunk()'

How I worked this out is a topic for another day, but this basically just shows my changes that haven’t been submitted yet, and ignores other people’s unmerged branches.

I’m not using JJ for my day-to-day work, just for my personal projects (like this website!) and so I’m not actually doing most of these operations that often.

  1. I’m pretty sure? I haven’t checked though. 


The Version Control Operation Audit

Often I see people dismiss complaints about a particular version control system’s usability1 because you “just need to learn like six commands”2, and so it doesn’t matter that some things are complicated, because you won’t use them day-to-day. If you do use them, it’ll be infrequent enough that looking up an example is not a big deal.

So with that in mind, here’s all the operations—not commands—that I use a version control system for. I’d say these are things I’d do with enough regularity that it’s not at all noteworthy that I did them.

View what’s about to be committed

Before I commit I always want to do a quick check to make sure I’m committing what I expect. This gives me an opportunity to undo any changes I’ve made just for debugging, or spot any issues that I need to resolve before sending the change off for review.

Similarly, I will fairly often want to check the contents of an existing commit in a diff format compared to its parent.

Making a commit

Making a commit is obviously table stakes, but something I’ll do is commit only some of my changes—either just certain files, or certain chunks in certain files.

Uploading a change to be reviewed

Initially I forgot this one because it’s basically a reflex, but it should definitely be on the list!

Altering a change based on code review feedback

So you fix a bug or implement a feature or whatever, commit those changes, and upload them to be reviewed3. Your reviewer leaves some comments and you need to make some changes. Assuming that you want a clean change history, you should be able to easily make the alterations the reviewer suggested, alter your commit, and re-upload that back to be reviewed again. This is almost certainly the thing I do the most often.

The optimal granularity of changes is a well-discussed topic, which I won’t go into here, but in general I would prefer to not have “fix test” and “oops wrong value” in the commit history if I could avoid it. My ideal is that the project should compile and the tests should pass at every commit in the main branch.

Revert a file back to the original state

You make some changes, then you realise that they were rubbish, or maybe you added a bunch of debugging code to a file that you don’t need any more. Whatever the case is, it should be easy to blast away any changes to a file and get it back to the latest version on the main branch.

This includes both discarding uncommitted changes from your working copy, as well as dropping the changes in a file from a commit. Just today I was working on a change and added a bunch of debugging code to a particular file. When it came time to send the change for review, I needed to get rid of all the changes in that file.4

Splitting a change in two

Often I’ll make a variety of changes across the codebase and then realise that what I’ve done is actually better thought of as two separate changes—it just happens that I did them at the same time.

You should be able to take your change—committed or uncommitted—and turn it into two changes that can be reviewed independently.

This is useful to get feedback from different people (without having to explain the unrelated changes they should ignore), to keep the history logical, or to make it easier to roll back one of the changes if it breaks something.

Merging two changes into one

Sometimes you thought a change could be made in parts, but for whatever reason you’re going to need to land everything at once. Maybe you thought you could adjust an API and then migrate call sites over later, but it turned out to be impossible. Whatever the case is, your two changes need to become one.

Writing dependent changes

You make one amazing feature, and send the code to review, but then have an idea for a second amazing feature that builds on top of the first feature. You should be able to continue building on top of your existing work while you wait for a review on the first feature.

A bit of a git-gotcha—at least for branch-based review tools—is that it’s easy to just continue committing on the same branch, push it, and then have the commits for the second feature be included in the review for the first. You’d then have to manually point the branch back to the right commit. You have to remember to proactively create a new branch when you start working on what will be a new change.

Reordering dependent changes

A bit of a less common operation, but if you’ve made a series of changes in a dependent chain and they’re not actually dependent on one another, it is really convenient to be able to re-order them to get something submitted before the others.

The most obvious example is a refactor that has to touch every call site for a method. You don’t want to send it all as one change, you don’t want to swap back to main and lose track of which call sites you’ve updated, and it doesn’t matter which part of the refactor is submitted first.

Make a dependent change independent

Instead of rearranging the order of changes, I’ll often just move a change into a separate series of changes before sending it for review. If my log looked like this:

@  q Will Richardson 1 second ago
│  Some useful bug fix
◉  u Will Richardson 3 hours ago
│  A second, equally useful feature (also huge)
◉  m Will Richardson 5 hours ago
│  Implement a huge feature
~

Then I would move that top commit to be separate from the feature work:

◉  u Will Richardson 3 hours ago
│  A second, equally useful feature (also huge)
◉  m Will Richardson 5 hours ago
│  Implement a huge feature
│ @  q Will Richardson 25 seconds ago 0b
├─╯  Some useful bug fix
◉  z root() 00

Then I can continue working on the useful features, and send the bug fix for review.

Context switch between changes

Chances are you’ve got multiple changes on the go at any given time, and so you want it to be super easy to swap from working on one change to another. Usually what happens is you send something for review, get started with a new task, and then when the review comes back you need to swap back to editing the first change to make some fix-ups and get the code submitted.

Part of this is being able to switch while you’ve got some uncommitted changes in your working copy. You need to be able to record these somewhere, swap to the other change, and not lose them when you need to swap back.

Jump back to main

Similar to the previous one, but something that I find I’ll do while working on a single change, as I’ll want to verify some behaviour without my in-progress changes, and then jump back to whatever I was doing.

This means being able to store your working copy changes, so your working copy is empty when you move back to main.

Test someone else’s change

Sometimes you just need to run someone else’s code locally, either to check out the feature they’ve implemented, or to do some debugging into a problem that they’re having.

It should be easy to get the version of the code that they’ve sent for review, and start making changes.

Build off someone else’s change

Similarly, you might need to start working on a change that requires an API or fix that someone else hasn’t merged into the main branch yet. Once you’ve got their change locally, you need to be able to commit your own changes, and re-update them based on any alterations they make after you started.

Rolling back a change

It doesn’t take long doing operations work to appreciate a simple rollback. Having a way to say “make a change that reverses everything done in that change” is invaluable. Of course you might have to resolve some conflicts if there have been other changes in the interim, or maybe make some manual changes if you don’t want a 100% pure rollback.

Update your work based on newly-merged code

An active codebase is a moving target, and you’re going to need to keep your code up-to-date with the latest code in the repo. This makes the review simpler, the submission faster, and reduces the chance of you making a change that interacts poorly with someone else’s work.

Show what changes are pending

I don’t want to lose track of work that I’ve made locally, so having some way of listing all the changes that exist on my machine that haven’t yet made it into the main branch is very useful. Usually this is necessary when I’ve swapped between tasks and need a reminder to upload a change and get it reviewed.


I’m fairly sure that’s the lot. I’ll leave it as an exercise for the reader which commands those actions would map to in your favourite version control system. If you’re one of these mythical 6-command people, what do you think of my operations? Do they fit in your memorised commands, or are these just things that you never need to do? Do you use a graphical interface for your favourite VCS that abstracts these things away? Send me a toot, I’d be fascinated to know!

What I haven’t included here is operations that involve reading the whole history of a project. My main interaction with version control for making changes is via command-line interfaces, and CLIs are not very good for reading changes. Instead I’ll do these through a browser-based code review and code browsing tool (eg the GitHub/GitLab UI). For what it’s worth, the things I view in that UI are:

  • Change history for a file
  • State of a file at a particular point in time
  • Blame for a file
  • Blame for a file at a particular point in time
  • Diff for a particular already-submitted change
  • Cross-references, interface implementations, and other non-VCS information
  1. It’s git, obviously. 

  2. Replace six with your favourite number that the average person would consider “small”. 

  3. You are doing code review, right? 

  4. Reading this back I realise just how many of my examples are about separating debugging code from real code. This is probably because I’m a serial printf debugger. If you’re a real debugger person, I guess you never have to do this? 


Some Hot JJ Tips

I spent a bunch of time learning how to use JJ properly after I gave up on git. Up until this point, I had been dumping commits directly onto main and just pushing the branch occasionally. I had avoided learning the pull/merge request flow because it’s not something I use on personal projects, but it turns out to work pretty well. With a few tactically-deployed aliases I’ve got a pretty simple flow going.

We start a new change with jj new, and make some edits to some files. We’ll end up with something like:

@  lw Will Richardson now 2
│  (no description set)
◉  w Will Richardson 1 month ago main main@origin HEAD@git 03
│  Bump version number to 0.8.1
~

Once we’ve made some changes and got stuff working, we’ll give it a commit message with jj commit -m 'do some stuff'. With that super meaningful commit message, I’m ready to send this change for review. The easiest way to do this is to use jj git push -c lw (lw is the change ID we’re pushing):

$ jj git push -c lw
Creating branch willhbr/push-lwwlpunxnpnu for revision @-
Branch changes to push to origin:
  Add branch willhbr/push-lwwlpunxnpnu to af2e2412e623
remote:
remote: To create a merge request for willhbr/push-lwwlpunxnpnu, visit:
remote:   https://gitlab.com/willhbr/.../-/merge_requests/new?merge_request?...

JJ auto-creates a branch for us based on the change ID. I’ve customised this with the git.push-branch-prefix option to include willhbr/ at the front so I know it’s mine.
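
In ~/.jjconfig.toml that looks something like this (willhbr/push- being my choice of prefix):

[git]
push-branch-prefix = "willhbr/push-"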

The change has been pushed, and the remote—GitLab—has given us a handy link to create a merge request. This command is a bit wordy, so I’ve got an alias that will push the change automatically:

[aliases]
cl = ['git', 'push', '-c', '@-']

A little side note: @- refers to the parent of the current change, since when I’m running this I will have just created a new commit, and my log will look like:

@  x Will Richardson 4 minutes ago 6
│  (empty) (no description set)
◉  lw Will Richardson 4 minutes ago HEAD@git a
│  do some stuff
◉  w Will Richardson 1 month ago main main@origin 03
│  Bump version number to 0.8.1
~

So to push that first non-empty, non-working-copy commit I use @- as the change ID.

Now I just need to wait for someone to review and approve the merge on GitHub or GitLab or whatever, and do the merge via the web UI. Once that’s done, I can fetch changes from the remote, and my changes will disappear from the default log view as they’re now just part of main@origin.

Depending on how the remote is set up, we might have to do one more step. If the changes were merged into the main branch, the commit hashes remain the same and everything works normally—JJ knows the commits now in main are the ones you authored. This is the default behaviour in GitHub and GitLab. However, if a GitHub project is set up to rebase or squash into main, you’ll end up seeing duplicate changes. This is because the commit hashes get updated when they’re rebased, so JJ can’t reconcile them when it fetches new changes. If you rebase your existing changes on top of main, your local changes will become empty—since their content is already present in the other commits. Instead when you rebase, pass --skip-empty, and these empty commits will be dropped.

I’ve got two more aliases to make this easier:

[aliases]
sync = ['git', 'fetch', '--all-remotes']
evolve = ['rebase', '--skip-empty', '-d', 'main']

So I just jj sync to get all the changes from the internet, and then jj evolve to put my changes back on the new location of main.

If you use a web UI to accept some changes based on reviewer feedback, the next time you jj sync, the changes will be added to your local branch. You could then make further edits, or squash the suggested changes back into the original commit to have a cleaner history.

If you make any alterations locally the branch name in the log will have an asterisk after it to indicate that it has changes that need to be pushed. Update all branches with jj git push --all (I have this aliased to jj upload).

Something of note is that if you have two changes in succession (one is the parent of the other) and you make two pull requests from them, the child pull request will contain all the content from both changes. Unless your code review tool has some way to change the base of the diff1, you’ll want to get them reviewed in sequence. Alternatively, if the child change doesn’t actually depend on the parent—perhaps it’s just an unrelated bug fix you made while working on a feature—you can just rebase it to be a sibling of its parent.

If you end up in this situation, and now want to get that bug fix submitted ASAP, but it’s currently sitting on top of a huge feature that’ll take ages to get reviewed:

$ jj log
@  q Will Richardson HEAD@git a
│  Fix how the bugs are created
◉  u Will Richardson 9
│  Implement a huge feature
~
$ jj rebase -s @ -d @--
@  q Will Richardson 0c
│  Fix how the bugs are created
│ ◉  u Will Richardson u
├─╯  Implement a huge feature
~

That little rebase trick takes the current change and moves it to be a sibling of its parent. I’ve used the hg equivalent of this for years to get code merged that I had written in the opposite order I should have.


I’ve got some aliases to make it easier to quickly get going with a JJ repo. I’ve only been using colocated JJ/git repos, which means there’s both a .git as well as a .jj directory, so any git tool or command also works with no modification. In my ~/.gitconfig I have:

[alias]
jj = "!jj git init --git-repo=."
setup = "!git init && git jj"

This allows me to run git jj in an existing repo, or git setup to get from no version control immediately to good version control, with no intermediate steps.

In my ~/.jjconfig.toml I have a bunch of aliases; I’m not fully settled on these, but here they are anyway:

[aliases]
# Old init alias, before I added the aliases in git
ig = ['git', 'init', '--git-repo=.']

# If I want to just push directly to main
# This just sets it to be the second-latest commit
setmain = ["branch", "set", "main", "-r", "@-"]
# Sync everything, mentioned above
sync = ['git', 'fetch', '--all-remotes']
# Put stuff back on top of main
evolve = ['rebase', '--skip-empty', '-d', 'main']

# Do a full log, rather than just the interesting stuff
# Basically the same behaviour as the default git log
xl = ['log', '-r', 'all()']
# Progression log? Shows how the current change has evolved
# A bit more on this later
pl = ['obslog', '-p']

# Pushing changes and auto-creating branches
cl = ['git', 'push', '-c', '@-']
push = ['git', 'push', '-b', 'glob:willhbr/push-*']
upload = ['git', 'push', '--all']

# This might be useful, opens an editor to set per-repo settings.
configure = ['config', 'edit', '--repo']

Ok, about that jj pl alias. jj obslog will show the progression of a commit, so you can view or revert to an intermediate state without having to actually make intermediate commits. So if you do a bunch of work, and get stuff working, and then decide to make everything better but actually make a huge mess of it, you can get back to the middle state even if you forgot to commit at that point. Here’s the progression for my website while I’ve been working on this post:

$ jj obslog
@  z Will Richardson 13 seconds ago a
│  (no description set)
◉  z hidden Will Richardson 1 hour ago a6f
│  (no description set)
◉  z hidden Will Richardson 3 hours ago b2a3
│  (no description set)
◉  z hidden Will Richardson 3 hours ago 80f
│  (no description set)
◉  z hidden Will Richardson 3 hours ago 2e
   (empty) (no description set)

The -p option shows the patch diff between each version, so I can quickly see what I had changed.

The big caveat is that this only works at points in time where you ran a jj command. If you haven’t run jj status or jj log or whatever, it won’t have picked up your changes.2

This is a side-effect of the working-copy-as-commit model: every time you modify a file and run a JJ command, it amends the changes into the working copy commit. However this just creates a new commit (that’s how git works), so you’re leaving a trail of commits as you work. jj obslog just exposes that trail to you. I don’t think I’d rely on this—I’d rather create a bunch of commits and then squash them later—but having this as a backup just gives me more confidence that I can find something I’ve lost in a pinch.

Most of my learning was done reading the JJ docs on working with GitHub and GitLab, and perusing the CLI Reference. I also read jj init by Chris Krycho the other day and enjoyed his detailed look at things.

  1. If you know, you know. 

  2. It does have some filesystem watcher, which I assume will keep this fully up-to-date, but I assume you’re not running that. 


Happy Birthday to Website

My website is now ten years old!

Screengrab of willhbr.net today

How willhbr.net appeared as this was posted.

Ten years ago today I made the first commit to this website, consisting of just an index.html with the contents:

<p>Sup.</p>

Thankfully it didn’t stay like that for long (just over three hours) and soon after I committed a simple “about me”-style homepage:

Screenshot of my 2015 website

Hilariously this included adding the entirety of Bootstrap just to get that div centred, and loading jQuery just to expand a section when you click the button.

It was a few months later, in late 2014, that I added Jekyll and wrote my first post. This started out hiding behind a /blog URL, but later moved onto the main page. I refuse to link to any old posts directly because that’s just embarrassing.

In 2015 I spruced up the design and added this photo of my shiny new OnePlus One:

Screenshot of my 2015 website, background is a photo of a OnePlus One in a bright orange case

Yep that’s me using an Oculus DK 2.

Big images are super cool. For ages I have wanted to have that layout where the images extend to the full width of the page but the text remains in a narrower centred block. I don’t have enough CSS enthusiasm to implement this, and I don’t post enough images to make it worthwhile. The main reason I’ve ended up back with an image-less layout is that even on a reliable, high-speed connection it can take the best part of a second to load the image. That’s not too bad for a small part of the page, but when it’s taking up the whole page it’s super jarring to have it pop in after the rest of the page has loaded.

I did have big fancy images for a while, but ended up giving up on it. It’s also a lot of effort for something that is basically invisible in a mobile layout.

My post about my OnePlus One with a big, wide image

Who needs any kind of readability or contrast!

It’s amusing that the basic layout and style of my site were locked in not long after I set up Jekyll. Big title of my name at the top, subtitle (since removed), and a few links. Posts had a big coloured title, with some basic metadata below.

basic website design circa 2015

A substantial part of this design is that I have limited enthusiasm for complicated layouts that change substantially with screen size. Most of my web development experience was just before everyone started looking at everything on their phones. My site layout doesn’t require any @media queries to change based on viewport width; it’s max-width and margin: auto doing all the heavy lifting.

Website design January 2023

The website in early 2023, after I settled on purple but before I started endless tinkering.

There’s definitely an aesthetic that I’m following of “technical personal website/blog” that’s typically shades of grey with a single highlight colour, default sans serif font, minimal layout complexity. I’m definitely influenced by the sites that I follow, as well as the ease of implementation.

The goal is to make all the information a reader might want as available and obvious as possible. The text of the post is right there—that’s the main thing. If someone decided to use a reader mode on my site I would consider that a design failure. Post metadata (most importantly the date, but now also tags) is clear right under the post title. I’ve definitely come across posts with no obvious date and been unsure if they’re still relevant, or if they just need to be read in a different mindset. This is especially true for technical writing, where things change somewhat frequently. If your writing is timeless, I would forgive you for omitting the date. But I do think you should still include it for completeness.

Last year I added some more links at the bottom of the post: next and previous, links to the archive, RSS Feed, and my Mastodon account. When I come across an interesting post I want to see what other things the author has written, so I made this easily accessible at the end of every post.

I would quite like to have links to related posts based on keywords or tags, but I don’t really have a big enough collection of posts for this to work super well, and I don’t do Jekyll plugins at the moment anyway. This might change as I play around with tags more.

In a bunch of places I have a link to the archive, which is a chronological list of every post on the site. Some sites have similar pages with just lists of months, each linking to the full content of the posts published during that month. I find this annoying. It’s hard to search for a particular topic or keyword as the post titles aren’t in the list, and if you don’t know the exact month of a post you have to click through each page manually. It just pushes people into leaving the site and using a search engine with a site: filter.

I’m such a fan of the archive page that I try and slip links to it in as many places as possible. It’s in the site header1, the date on every post goes to the archive, there’s a link after every post, it’s next to the pagination links, and if you go to a page other than the first one there will be pagination at the top and bottom of the page (both with links to the archive).

Hopefully if you’re looking for something I’ve written, skimming the archive will find it.

Picking an accent colour is tricky: if you look through the history of the site I’ve dabbled in blue, teal, green, Ubuntu-flavoured orange/brown, and now purple. It’s hard to not just fall back to using blue for everything, and I’m really happy with the purple accent—both in light and dark mode. I think of the dark mode as being the canonical colour scheme, since I’m almost always writing something late at night.2

It’s hard not to use web fonts, since there’s basically no guarantee as to which fonts will be available on any particular system. Previously I did use Raleway, but removed it after it was pointed out how the whole page “pops” when the font loads. Using Helvetica or the system sans-serif font is good enough; most systems have decent-looking options3—and it’s probably a font that the reader is used to seeing.

Of course, I put all this effort in and then the best-case scenario in my view is that someone subscribes via RSS and never sees the site. Although there are some affordances that make the feeds4 friendlier. First is putting the whole post content in them—my posts aren’t too long so it’s not like the feed becomes unwieldy. I copied the idea of a feed-only footer after seeing it on Pixel Envy; it’s a really simple way to tell the reader “this is the end, there’s nothing more to read, you can leave now”. As a reader I never know if someone’s feed contains the whole post or if it’s just a snippet—it’s possible to get to the bottom and wonder if there’s something you’re missing. Writing well is another option, but I find that more difficult.

If you are going to just include stubs in the feed, then you should do a similar thing—put a “continue reading” link at the bottom of the entry. This makes it clear that you’re not just posting a one-paragraph quick thought, and it’s probably easier to click that link than rely on the feed reader UI to make opening the post in a browser obvious.

It’s especially annoying when a feed includes a few paragraphs, so you read that, wonder if you’ve got to the end, scroll back up to the top, open the post in the browser, scroll down again to find the point you’d read to, and then continue reading from that point. I don’t want to put anyone in that position.

Now for some indulgent behind-the-scenes details.

When I first made the site I was using TextMate for editing code and MacDown for markdown editing. Later, when I was doing things on my iPad I used Bear and then iA Writer, probably with some others in the middle. On the iPad I could just use Working Copy to push the changes directly to GitHub, but for some reason I absolutely must see the post as it will look on the site before I push it publicly. Seeing the post in a different context—not a syntax-highlighted, fixed-width markdown editor—helps spot mistakes, and gives me confidence I haven’t colossally messed up some markdown syntax or post metadata. I do the same thing when I’m sending code to be reviewed—I’ll look at the diffs in the terminal but then upload to a code review tool and immediately spot mistakes.

Running the website “locally” is actually running it on one of my home servers, so I’ve come up with plenty of mechanisms for getting posts from my iPad or Mac onto the server. Despite all my efforts, the easiest is still just to copy-paste into Vim.

Now that I’m doing everything on my Mac, I use MarkEdit after a brief dabble with Obsidian. I just keep drafts in my documents folder like the good old days. There’s a little helper script that deals with the front matter and file naming convention for Jekyll. Turning a human-readable title into a URL slug manually is just not a good use of energy.
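
The script isn’t anything special; a sketch of what it does (not the actual script) looks something like this:

#!/bin/sh
# Hypothetical sketch: turn "My Post Title" into _posts/YYYY-MM-DD-my-post-title.md
title="$1"
slug=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
file="_posts/$(date +%Y-%m-%d)-${slug}.md"
{
  echo '---'
  echo "title: \"$title\""
  echo 'layout: post'
  echo '---'
  echo
} > "$file"
echo "$file"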

What will the site look like in another ten years? Have I reached the optimal design, or has drudging through ten years of website screenshots inspired a proper redesign? Wait until 2034 to find out!

  1. Only on the main page, if you’re on a different page then it just links back to the main page. 

  2. Written at 10:22 PM. Side note, I don’t know how anyone uses MacOS in light mode, it’s so bright! 

  3. Apart from Ubuntu Sans, I don’t know what it is but I always recognise it and that makes it stick out as “Hey I’m written like the ubuntu logo!”. Back in my day the Ubuntu logo was 100% curves. 

  4. Don’t forget about the JSON Feed


Glory Is an Indeterminate Amount of Bandwidth Away

On Mastodon I saw this toot showing a tangle of interconnected AWS services used to host a Wordpress site. I don’t speak AWS so it looks confusing.1 One of the replies linked to this post, which I’d come across last week. Seeing it twice was clearly a sign to share my thoughts.

Overall I agree with the message—you can get a really long way with very little computing power. In university I ran a Rails application that collected entries for sports events, and it was hosted on a $5/month VPS that barely broke a sweat. However the post glosses over some aspects of web hosting that I think are worth considering.

The base assumption is that to be solidly in the top 1000 websites, you’ll need to be serving about 30TB of compressed HTML per month, which averages out to 11MB/s. That’s a lot but not an unfathomable amount. It’s only 88Mb/s, which is much less than my home internet download speed. Although sadly it is much greater than my meagre 20Mb/s upload speed—so we can’t host this on a server in my apartment.

The assumption baked into the 11MB/s number is that all your visitors will come and look at your website in a perfectly uniform distribution throughout the day and throughout the month. There can be no variation due to timezones, days of the week, or days of the month. This is almost certainly not the case for this hypothetical website. If it’s an English-first website then the traffic will grow as the US, UK, and Canada are awake, and then drop off as they go to sleep. If you do the maths on a cosine wave, you can see that it’s easy to hit a peak of 22MB/s.
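
As a rough sketch of that maths (assuming daily traffic follows a full-depth cosine around the 11MB/s average, dipping to roughly zero at the quietest time of day):

r(t) = \bar{r}\left(1 + \cos\frac{2\pi t}{24\,\mathrm{h}}\right), \qquad \bar{r} \approx 11\,\mathrm{MB/s} \;\Rightarrow\; \max_t r(t) = 2\bar{r} \approx 22\,\mathrm{MB/s}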

Fluctuations between weekdays and weekends will vary much more based on the type of website you’re running, but we can pretty safely say that you want to be able to exceed that 22MB/s number.

Of course you could be a victim of your own success—you want to be able to handle traffic spikes. Whether it’s a global news event or just a part of people’s daily routine, your traffic is driven by your visitors. Is there something that could cause all your visitors for one day to come in the space of an hour? That’s almost 300MB/s of traffic you’d have to handle.

We’re also making the assumption that visitors will come to the website, download their content promptly, and leave. There are all sorts of things that can go wrong when downloading a file, which might affect the performance of your website. Will a bunch of clients downloading really slowly impact your memory usage? Are there some users with so much data that your database gets locked up executing a huge query when they’re around?

SQLite is amazing. I think the main reason why I wouldn’t pick it for a web server is not performance, but strictness. It does have a strict mode but I’m not sure how much that encompasses. Way back in the day I used to use MySQL for everything (it’s what my dad did) and happily learnt how to write MySQL-flavoured queries. For some reason (maybe an internship?) I started using Postgres and got really confused that my queries were failing. They had been working fine in MySQL.

The reason is that Postgres is much more strict than MySQL2, and it would just throw out my nonsense queries. If we have this query:

SELECT first_name, AVG(age), last_name
FROM people
GROUP BY first_name

Postgres will fail because last_name isn’t aggregated and isn’t part of the GROUP BY, so you can’t select it. MySQL will just give you one of the last names in the group.
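
A version of the query that both databases are happy with (assuming you just want some last name per group) would be:

SELECT first_name, AVG(age), MIN(last_name)
FROM people
GROUP BY first_name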

SQLite takes this to a whole new level: you don’t have to specify the types of columns, and you can basically coerce any type into any other type. This makes it great for doing weird, messed-up things with mostly-structured data3, but I wouldn’t recommend it for running on a server where it’s not too much additional effort to run Postgres alongside your web server.
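
For example, this is perfectly fine in (non-strict) SQLite, whereas Postgres would reject the insert outright:

CREATE TABLE ages (age INTEGER);
INSERT INTO ages VALUES ('not a number'); -- stored as text, despite the column type
SELECT age + 1 FROM ages;                 -- the text coerces to 0, so this returns 1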

The post also argues against using “the edge”4, basically stating that the added complexity of getting content to the edge isn’t worth it for the reduced latency. Obviously this again depends on the type of website you’re building: if it’s mostly static then keeping an edge cache up-to-date isn’t too hard; if it’s changing constantly then it’s a huge pain.

It’s definitely worth squeezing as much out of your monolith before you decide to split it up, even if that means migrating to a faster language (like the author’s preferred Elixir). You might even be able to make some interesting product decisions where you defer some work that could be real-time but is cheaper to do in a batch off-peak.

A quick aside about English-speaking countries. The post says:

Plop [a server] in Virginia and you can get the English speaking world in under 100ms of latency.

I know we’ve got different accents down here in the south, but we do mostly speak English in Australia. And we’re not within 100ms of network latency from the US east coast, although I understand the generalisation of it being the centre of the English-speaking world.

If you’re not sharding your application around the world for latency, you might want to shard it for maintainability. You don’t want your 11MB/s of traffic to be dropped on the floor because of a failed OS update on your server. Turning your application into a distributed system increases the complexity dramatically, but also adds so many more ways to operate on it while it’s still running. Things like having two redundant servers so that you can slowly move traffic between them when running a deployment, or take one offline to do maintenance or an upgrade. There are plenty of benefits outside of the increased compute capacity of adding more hardware.

One final thing, I know I’m a huge container nerd but there are plenty of reasons to containerise your application other than horizontal scaling. I keep everything in containers because I like being able to blow away my entire dev setup and start again from scratch. It’s probably easier to deploy your containerised application on a new server than if you have to go in and manually install the right version of Ruby and all your dependencies. If you’ve got a bug that only appears in production it’s really convenient to be able to run “production” on your local machine, and containers are a great way to do that.

I think it’s important to consider your decisions based on the problem you’re trying to solve. You can prematurely optimise your infrastructure just like you can prematurely optimise your code. You can also choose the wrong architecture that makes scaling impossible, just like you can choose the wrong algorithm that is impossible to optimise. The difficult bit is knowing when it’s the right time for optimisation or rearchitecture.

  1. I did just set up a website on Wordpress.com and that seemed to work pretty well. 

  2. Well, it was in the configuration that I was using over ten years ago. Maybe I could have set different options, or maybe the defaults are different now. I don’t know, I haven’t used either in like seven years. 

  3. I got like halfway through making a MacOS app that lets you use a spreadsheet like an SQLite database and made heavy use of the weakly-typed data to deal with arbitrary input. 

  4. It’s an all-time great heading. You know the one. 


Blogroll: Early 2024

Blogroll, incomplete, early 2024, presented without comment, in no particular order.


It's Not Me, It's Git

tl;dr: I’ve been using jj for version control in my personal projects and it makes me much happier than using git. Continue reading for lukewarm takes on the git CLI.

Firstly I’ll just get some disclaimers out of the way: I only use git (now JJ) for personal projects, I don’t use git at work. Also I work at Google, who currently fund development of JJ since the founder and main contributor is a Google employee—read this section of the readme for more info. This post (along with the rest of this website) is solely my own opinions, and my enthusiasm for JJ (and lack of enthusiasm for git) is just my personal view.


One of my pet peeves is hearing people say that git isn’t hard: you just remember or alias a handful of commands and ignore the rest. While this seems to work for a lot of people, they’re not able to take advantage of having their code in version control.

Imagine that you were learning to program and everything was done in C. You get confused about pointer arithmetic, manual memory management, and void*. Eventually you learn a portion of the language1 that makes sense and meticulously check your entire program after making any change. After doing this for years someone tells you about Python2. You realise that all that time you spent avoiding buffer overruns and segmentation faults could have been avoided altogether.

This is how I feel about git. It’s a tool that gets the job done, but doesn’t empower me to use it confidently to make my work easier. The bar for “oh I’ll just search for the command to do this” is so incredibly low.

The best feature of JJ is it has global undo. You commit something you didn’t mean to? jj undo. Abandoned the wrong change? jj undo. There’s a reason that UX people prefer undo over confirmation dialogs3, it empowers users to move faster doing the thing they think is correct instead of second-guessing and double-checking destructive operations.

I had an almost magical experience with JJ: I tried to rebase some commits, I think I must’ve passed an incorrect option and I ended up with the completely wrong result. If this was git, I would be preparing myself to re-clone the repo or maybe delving into the reflog if that wasn’t possible. Instead I just did jj undo, re-checked the documentation, realised the option I should have used, and ran rebase again.

Knowing that the worst thing I can reasonably do4 is waste some time and have to undo my changes frees me up to actually use version control for more things.

Part of what makes my interaction with git messy is that it differentiates between untracked files, unstaged changes, and staged changes. The result of this is that you’ve got to remember how to move a file (or god forbid, parts of a file) between these states. A foot-gun here is that you could easily think that you’re running a command to unstage a file so that it won’t be added to a commit, but accidentally run a command that reverts the working copy changes for that file.

git restore --staged will unstage a file, but git restore will drop working copy changes to a file—losing your work. Of course you would never make this mistake, but I as a mere mortal am susceptible to these mistakes.

JJ does away with staged/tracked/untracked files, and instead all files are automatically added to the working copy. I’m sure some people that like being able to manicure their staged files will find this a deal-breaker, but as someone that compulsively runs git add --all so I don’t accidentally leave a file un-committed, this is exactly what I want.

The working copy in JJ is actually just a commit. It starts out with no changes and an empty commit message, but any edits you make are added to it automatically, and you can adjust the commit message as you go.

Initially this seemed like an “ok whatever” implementation detail, but then you realise that all the operations in JJ just have to work on commits. You don’t need to stage and unstage because you just move changes between commits. You don’t need stash because you just leave your changes in a different commit. Basically all the operations you do boil down to the same small set of operations on a commit.

If you forgot to include a change into a commit, in git you would use git commit --amend but in JJ you’re just squashing two commits together (the previous commit and the working copy commit). If you want to amend a commit that’s not the most recent one, you just move your working copy to be that commit using jj edit. Now any changes you make will be added into that commit.

In git you would need to like, make a new commit, then rebase the intermediate commits to re-order it to be next to the target commit, and then squash it into the target. This is basically what JJ is doing under the hood, but to me I’m just swapping over to a commit and making some changes.
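
A minimal sketch of the JJ version (the change IDs are made up):

$ jj edit rlv   # make the older commit the working copy
# ...edit files; the changes are amended into rlv automatically...
$ jj edit tqs   # jump back to the change you were working on before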

This also makes it easier to inspect the state of the repo. In git I constantly get caught out by the fact I can diff staged files, or diff unstaged files, but can’t diff untracked files5. I need to run git diff --cached, but what’s cached? Am I diffing what’s in a cache, or am I showing the cached diff? The answer is that --cached is actually just referring to staged files, and you can also say --staged and get the same result but like, why is --cached there?

We’re really getting into the weeds here, but to use git diff to show the change made in a commit, you need to do git diff $commit~ $commit to tell git to compare between the parent of the commit and the commit itself. From an implementation standpoint this makes sense—you can diff any two commits, so there’s no point to limit the command to just showing a single commit.

I think of a commit as being a diff moving the repository from the old state to the new state. If someone says “oh yeah it changed in this commit” I would expect to be able to look at the diff of that commit and see the line they were talking about highlighted in red or green. This makes the default behaviour of jj diff great: it shows the diff of the commit compared to its parent.

Since there’s no staged/unstaged/tracked/untracked files, jj diff with no arguments just works on the current change—which is likely your “working copy” change—so it shows the diff of what you’re planning to commit.6
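
The JJ side of the comparison is pleasantly short (again, the change ID is a placeholder):

jj diff              # what the current change does
jj diff -r yqosqzyt  # what any other change does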

I wrote in my post comparing the DJI Mini 2 to the Mini 3 Pro:

The end result is that editing the Mini 2 photos feels like trying to sculpt almost-dry clay. You can’t really make substantial changes, and if you try too hard you’ll end up breaking something. On the other end of the spectrum is raw files from the a6500, which can be edited like modelling clay.

In a similar way, working with git feels like I’m building with some incredibly fragile material that could shatter if I’m not careful. Working with JJ feels like I can mould or re-shape as much as I need.

I had a lecturer in uni that was adamant that students should work on their assignment throughout the semester instead of cramming it in a weekend. They were so paranoid that they created periodic copies of all our repos throughout the semester to check that the history in the final repo hadn’t been tampered with. If you’re on that level, you might be reading this thinking “oh no, if editing history is that easy, you’ll just mess up your whole repo!” This is understandable, but JJ (by default) only allows editing history that hasn’t made it into main, so you can only rewrite commits before they’re merged into the main branch.

It’s these kinds of sensible defaults that make JJ more approachable. I do like that in git it’s possible to edit the history—for example when you’re helping students work out why their repository is so slow to work with after they’ve committed multiple gigabytes of test data and then created another commit deleting it, rewriting the history is the only real fix. I’ve been using JJ co-located with git. The repository still has a .git folder and I can run any git command I want, but I do most operations through JJ.7
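
Setting that up is a single command; a hedged sketch, since the exact flags have shifted between jj versions:

jj git init --colocate                                 # use jj in an existing git repository, keeping the .git folder
jj git clone --colocate https://example.com/repo.git   # or clone straight into a co-located repo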

Perhaps it’s just that I’m more invested in using JJ, but after skimming the reference and using it for a few months, I’m able to do more than I can with git, in no small part because I can just repeat the same few commands that operate on commits8 (sketched in use just after this list):

  • edit swaps to a different commit, allowing you to edit the contents at that commit
  • new adds a new empty commit on top of the current commit
  • abandon deletes the commit
  • split allows you to turn one commit into two (or more) commits
  • rebase lets you prune and splice commit chains
  • diff shows the changes in a commit
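
Here’s that handful of commands in practice; the change IDs are placeholders, not real ones:

jj new                          # start an empty change on top of the current one
jj split                        # carve the current change into two
jj edit qpvuntsm                # jump back to an existing change to keep working on it
jj abandon xlstuzwv             # throw a change away
jj rebase -s qpvuntsm -d main   # move a change and its descendants onto main
jj diff -r qpvuntsm             # show what a change did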

Something that’s amazing is that I honestly couldn’t tell you if there’s a way to remove changes from a commit. Since it’s so easy to split and abandon changes, I just do that instead of looking for a command that can do it in one step.

I didn’t think I’d really care about how conflicts are handled, but not having your repo get “locked out” because you’re in the middle of a merge or rebase is just really nice. I almost never get conflicts because my personal projects are basically always just authored by me on one machine, but the few times I’ve run into them it’s freeing to have the option to just go off and do something else in the repo.

Many people think that the main part of software engineering is writing code, but any software engineer will correct you and point out all the talking to people that’s often overlooked. However, even just focussing on the time when you’re writing code, a large part of that is reading other code to work out what to write. If you’ve spent enough time in large codebases you’ll know how important it is to investigate the history of the code—looking at a single snapshot only tells a fraction of the story. Tools that make working with the history easier are incredibly valuable.

You’re going to spend a lot of time using your development tools, you should make sure you’re able to make them work for you.


Ok so I’m sure I’ve linked to Git Koans by Steve Losh before, but I have only now realised that he’s also made a Twitter bot that generates and tweets vaguely-plausible git commands. It’s seemingly broken or intentionally stopped by changes to Twitter—no updates in over six months—but you can still read the code or look at the old tweets. Some good ones:

git delete [-x FILE] [-E]
Delete and push all little-endian binary blobs to your home directory

git clone --record=<tip> [-j] -N
Read 9792 bytes from /dev/random and clone them

git remove [-T] [-U] [--changeset=<changeset>]
Rebase a file onto .git/objects after removing it

  1. An actual true story from when I learnt C in university was that we had a practical exam where we had to write solutions to basic programming and algorithms problems in C. Instead of learning the various functions in libc and their caveats, I just doubled down on pointer arithmetic. You don’t need memcpy when you can just write a one-line while loop that does it manually. I wouldn’t necessarily recommend this as a serious approach to programming C, but it worked for this one exam. 

  2. Or Ruby or Rust or Java or JavaScript or Haskell or Crystal or Kotlin or Go or OCaml or C# or erlang or 

  3. I’m probably butchering this by remembering vibes without context, but you get the idea. 

  4. You can totally reconfigure some things in JJ to allow you to mess up the history of a repo and then force-push that to origin, but you’d have to try really hard. 

  5. I know this one doesn’t really make that much sense, but it’s nice to see a diff of “this is the added contents of this file” rather than just having it be omitted, and then just blindly running git add --all. You can actually use git add -N to pseudo-track a file to make it appear in the diff. This has a side-effect of stopping you from stashing, which I’m sure makes sense from an implementation perspective but as a user this just seems weird. 

  6. “Well actually” it’s already committed, since JJ automatically adds changes in the working copy to the current change. But you know what I mean; conceptually it’s not committed until you’ve chucked a commit message in there and moved on to a new change. 

  7. This is super convenient, because I can still push to GitLab or GitHub or whatever, I can still use basically any tool that relies on a git repo, my history is still just normal git history so it can be inspected or analysed by any tool that works on git repos, and I can still work with anyone that is using plain-old git. 

  8. Oh yeah I think technically they’re changes or revisions, not commits. 


Further Adventures in tmux Code Evaluation

In my previous post I wrote a compiler that turns Python code into a tmux config file. It makes tmux evaluate a program by performing actions while switching between windows. My implementation relies on a feature in tmux called “hooks”, which runs a command whenever a certain action happens in tmux. The action that I was using was when a pane received focus. This worked great, except I had to do some trickery to avoid tmux’s cycle detection in hooks—it won’t run a hook on an action that is triggered by a hook, which is a sensible thing to do.

I don’t want things to be sensible, and I managed to work around this by running every tmux action as a shell command using the tmux run command. I’ve now worked out an even sillier way that this could work by using two tmux sessions1, each attached back to the other, then using bind-key and send-keys to trigger actions.

You start a tmux session with two windows. The first window just runs any command, a shell or whatever. The second window runs a second instance of tmux (you’d have to unset $TMUX for this to work). That second instance of tmux is attached to a second session, also with two windows. The first window also just runs any command, and the second window attaches back to the original session. Here’s a diagram to make this a bit clearer:

[diagram of tmux sessions used to run code using key bindings]

Session A (blue) has two windows: the first, A:1, is just running a shell; the second, A:2, is attached to session B (red), which is showing the first window in session B, B:1. Session B also has two windows; the second (B:2) is attached to session A, and is showing window A:1 from session A.
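
If you want to recreate this, the setup is roughly the following (the session and window names are my own, and it’s run from outside tmux):

tmux new-session -d -s A -n shell                                  # A:1, just a shell
tmux new-session -d -s B -n shell                                  # B:1, just a shell
tmux new-window -d -t A: -n inner 'env -u TMUX tmux attach -t B'   # A:2 shows session B
tmux new-window -d -t B: -n inner 'env -u TMUX tmux attach -t A'   # B:2 shows session A
tmux attach -t A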

What this cursed setup allows us to do is use send-keys to trigger keybindings that are interpreted by tmux itself, rather than the program running inside tmux—because tmux is the program running inside tmux.

If you have a tmux pane that’s running a program like Vim and you run send-keys a, the character “a” will be typed into Vim. The key is not interpreted at all by the surrounding tmux pane; even if you send a key sequence that would normally do something in tmux, it goes directly to the program in the pane. For example, if your prefix key is C-z, then send-keys C-z c will not create a new window; it’ll probably suspend the running program and type a literal character “c”.

However, if the program that’s running in tmux is tmux, then the inner tmux instance will interpret the keys just like any other program.

So if we go back to our diagram, session A uses send-keys to trigger an action in session B. Session B can use send-keys to trigger an action in session A, by virtue of it also having a client attached to session A in one of its panes. The program would be evaluated by each session responding to a key binding, doing an action, and then sending a key binding to the other session to trigger the next instruction. For example, using some of the tricks I described in my previous post:

# pressed in session A: push "1" onto the buffer stack, then poke session B
bind-key -n g {
  set-buffer "1"
  send-keys -t :=2 q
}

# pressed in session B: push "2", then poke session A
bind-key -n q {
  set-buffer "2"
  send-keys -t :=2 w
}

# pressed in session A: pop the top two buffers, add them, push the result, then poke session B
bind-key -n w {
  run 'tmux rename-window "#{buffer_sample}"'
  run 'tmux delete-buffer'
  run 'tmux rename-window "#{e|+:#{buffer_sample},#{window_name}}"'
  run 'tmux delete-buffer'
  run 'tmux set-buffer "#{window_name}"'
  send-keys -t :=2 e
}

# ... program continues with successive bindings

The program starts with the user pressing “g” in session A, which pushes a value onto the stack and sends the key “q” to the second window, which triggers the next action in session B. That next action pushes another value and sends “w” to the second window in session B, which triggers an action back in session A. This action does some juggling of the buffer stack and adds the two values together, putting the result on the stack. It then sends “e” to the second window in session A, triggering whatever the next action would be in session B.

This should also allow the compiler to get rid of the global-expansion trick. In the last post I wrote:

Wrapping everything in a call to run gives us another feature: global variable expansion. Only certain arguments to tmux commands have variable expansion on them, but the whole string passed to run is expanded, which means we can use variables anywhere in any tmux command.

Since we’re no longer using windows as instructions, it’s much easier to use them as variable storage. This should remove the need for storing variables as custom options, and for using buffers as a stack.

The stack would just be a separate, specifically-named session where each window contains a value on the stack. To add a value, you write the desired contents to that window’s pane using either paste-buffer to dump from a buffer, or send-keys to dump a literal value. You can get that value back with capture-pane and put it into a specific buffer with the -b flag.
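
Something like this, assuming a storage session called vars with one window per variable (all of the names here are made up):

tmux send-keys -t vars:x '42'           # write a literal value into the window for x
tmux capture-pane -t vars:x -b x_val    # read the window contents back into the buffer "x_val"
tmux paste-buffer -b x_val -t vars:y    # dump that value into the window for y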

Options can be set to expand formats with the -F flag, so you can put the contents of a window-based variable into a custom option with a command like set -F @my_option '#{buffer_sample}'. This would allow for some more juggling without having to use the window and session name, like I did before.

Ideally you would have a different variable-storage session for each stack frame, and somehow read values from it corresponding to the active function call. This might not be possible without global expansion of the command, but if you allowed that then you’d avoid the problems that my current implementation has with having a single global set of variables.

The astute among you might be thinking “wait Will, what happens when you want to have more than 26 or 52 actions, you’ll run out of letters!” Well, tmux has a feature called “key tables” which allows for swapping the set of active key bindings. So all you need to do is have each letter swap to a unique key table, and then the next letter actually does an action, which gives you enough space for 2,704 (52 × 52) actions if you only use upper and lower-case letters. But you can have as many key tables as you want, so you can just keep increasing the length of the sequence of keys required to trigger an action, allowing for more and more actions for larger programs.
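
In config terms the chaining looks something like this (the table name and keys are arbitrary):

bind-key -n a switch-client -T table_a   # first key: just switch the client to another key table
bind-key -T table_a b {                  # second key: actually perform an action
  set-buffer "1"
  send-keys -t :=2 q
}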

I don’t think I’ve really worked around the “no global expansion” limitation that I imposed, but I think this shows there are enough different avenues to solve this that you can probably assemble something without the trade-offs that I made originally.

  1. Actually you can probably do this with one session connected back to itself, but I only realised this after I’d written up my explanation of how this would work.