Shortcuts is a Cursed Minefield

This all starts with me wanting to host my photos outside of Instagram. I ended up using GitHub pages and wrote a simple iOS Shortcut to resize and recompress the image to make it suitable for serving on the web.

The shortcut is fairly simple - it takes an image, creates two versions (a main image and a thumbnail for the homepage), crops the thumbnail into a square to fit on the homepage grid, then saves the images to Working Copy ready to be published to GitHub Pages.

The problem arises when cropping the thumbnail. What I need is:

image showing cropping/resizing

The image needs to be resized so that the shortest side is 600px long, and then the centre of that image cropped to a square. Shortcuts has a built-in action for resizing the longest side to a certain length, but not the shortest side.
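For clarity, here's the same geometry written out as ordinary code - a Python sketch of what the shortcut needs to compute, not anything Shortcuts itself runs:

```python
def thumbnail_geometry(width, height, target=600):
    """Scale so the *shortest* side is `target` px, then centre-crop a square."""
    scale = target / min(width, height)
    resized = (round(width * scale), round(height * scale))
    # Centre-crop box as (left, top, right, bottom)
    left = (resized[0] - target) // 2
    top = (resized[1] - target) // 2
    return resized, (left, top, left + target, top + target)

# A 3:2 landscape photo: scaled to 900x600, then cropped to the middle 600x600
print(thumbnail_geometry(6000, 4000))  # → ((900, 600), (150, 0, 750, 600))
```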

Instead you have to compare the width and height of the image and resize accordingly:

image showing if statement in shortcuts with resizing

However, this gives me stretched-out images for landscape photos:

squished image after resize

After much head-scratching at what was going on, I figured it out. I was accessing the width and height of the image like this:

accessing height/width with photo media type

While debugging I could print this value and see the correct dimensions for the images I was selecting. I tried assigning the width and height to separate variables just in case there was some weird type-casting happening when they were passed to the If action, but it turns out you can’t do less-than or greater-than comparisons on arbitrary Shortcuts variables:

image showing no less/greater than

In a moment of desperation I swapped the Type to “Image” rather than “Photo media”, and it worked exactly as it should.

correctly resized and cropped image

Being thorough, I tried changing the Type back to “Photo media”… and it also worked. I made a new shortcut and left it at the default of “Photo media”… and it failed again.

The shortcut would only work if I either left the Type as “Image”, or set it to “Image” and then set it back - evidently flipping some invisible internal value [1] of the shortcut so that it correctly compared the width and height.

No programming language, API, library, or framework has made me as frustrated as writing even the most trivial Shortcuts.

  1. I briefly tried inspecting the internals of the .shortcut file as I have done before, but the signing that is now included made that more difficult than I had enthusiasm for. 

The Good & Bad of Photos for macOS

I’ve previously written about how iOS/iPadOS is poorly suited to anything more than casual photography.

Unsurprisingly there hasn’t been a change in iPadOS’s capabilities since 2019. However through a fortunate series of events I’ve found myself in possession of an M1 MacBook Air, and have been using that for storing/editing my photos for the last year or so.

As well as ending up on Instagram, my photos also end up on a real website.

The Good

Reviewing and flagging photos on macOS is much quicker and easier than on iPadOS. The arrow keys move instantly between photos (instead of having to swipe and sit through an animation), which makes a direct comparison between two shots much easier. Pressing “.” quickly favourites a photo, which I use to mark a photo as a possible editing opportunity. Deleting photos or removing them from an album can be done with one shortcut – no annoying confirmation required every time. These few things alone make it much easier for me to scan through my photos, find the best ones, and clear out any obvious rubbish.

On iPadOS you still have to manually unfavourite every single photo, which makes using favourites as a staging area impractical and frustrating. (Darkroom has implemented its own “flag/reject” system as a workaround, but I was already using macOS at this point.)

Photos on macOS has a much more robust and mature album system – the best feature being smart albums. This lets me have an “Edited” smart album that just shows edited photos. Some apps on iPadOS will add edited photos to their own album, but that’s limited per-app and depends on how the app integrates with the photos library. Smart albums are also much more flexible – you can filter by camera model, shutter speed, aperture, date, and so on – so it’s completely trivial to do something like find all the long-exposure shots from my new camera that have been edited.

Of course the real advantage to macOS is that the photos library lives in the file system, and I can just go and look at it. The real, original bytes that came from my camera are fully within my grasp – I don’t have to worry about transparent re-encoding as I move them around – and I can actually back them up to somewhere that isn’t iCloud or Google Photos. I can include them in Time Machine, I can copy them to an external drive, or I can create my own janky system that uses rsync to copy them to a Raspberry Pi sitting on my network.

I also now have access to more advanced and flexible software. As much as I like Pixelmator Photo, it is limited to making colour adjustments to a single layer. Pixelmator Pro, on the other hand, has all the same features (even with the same UI) but adds layers (and layer masks!), effects (blurring and suchlike), and painting tools. This makes it possible to do things like merge multiple fireworks shots into one picture, or fake the motion of many ferries at once.

fireworks photo with implausible number of fireworks

In addition to Pixelmator Pro, I’ve also found use in single-purpose utilities like Starry Landscape Stacker and Panorama Stitcher – which has come in useful with my still-quite-new DJI Mini 2.

While none of this is impossible on iPadOS, these tools are made with the assumption of more serious use – for example, I can stitch together 20 raw images from my a6500 and get a 1GB TIFF out the other end. Not a single pixel of data is lost, and the software on macOS can scale to handle it without hitting any limits.

The Bad

While much of the software around Photos on macOS is built for serious users, Photos itself is still lacking. For example, when you export a JPEG, you get a few pre-defined choices for quality instead of a slider. I’d like to export at 95% quality, but instead I’m stuck with whatever “high” or “maximum” mean (probably 80% and 100%). I could probably work around this with Shortcuts, but that’s a whole other can of worms.

Apps can integrate their editing UI directly into Photos – but there isn’t a quick way to jump straight to editing in a specific app; you’ve got to open the default Photos editor and then hand off to the app you actually want. There is a way to open a photo directly in another app, but it doesn’t preserve the adjustments you’ve made – you just get a new image with the edits baked in. So if you’re doing a complex edit that you might come back to tweak later, this is no good.

The editing UI in Photos also limits you to using one window, since Photos itself only supports a single window. This is the kind of limitation I’d expect on iPadOS, not on macOS.

Backing up with Time Machine is a terrible experience. This isn’t specific to Photos, but it’s what pushed me to make a custom solution using rsync. Backing up to a NAS appears to be almost impossible (without buying a Synology or something else that has worked out how to trick macOS into using it as a backup target), and backing up to an external disk is horrendously slow and temperamental. macOS also fights you with permissions issues when reading the contents of the photo library – the permissions changes added in macOS Catalina do not mesh well with any kind of custom script (even Apple’s own launchd doesn’t work with them), which makes custom backups harder to implement and less reliable.

I opted for the 1TB SSD in my MacBook, since I knew I wanted enough storage to keep me going for a few years. There doesn’t seem to be any particularly well-supported way of splitting a Photos library in two so that part of it can be offloaded to archival storage. You can do it by duplicating the photo library itself, archiving one copy, then deleting all the old photos from the unarchived copy. My photo library currently takes up 500GB and I’m not looking forward to having to split it. This is an obvious “strategy tax” that Photos pays: the recommended thing to do when you run out of local storage is to pay for iCloud storage. But that only goes up to 2TB – what do you do then?

The experience of using macOS to manage and edit photos is far better than iPadOS. I do still miss being able to twiddle my Apple Pencil while thinking about how I want to apply my edits, without a keyboard in the way. Additionally, for a similar price to a MacBook Air, an iPad gets you a much better screen.

Now that I’ve made the shift to macOS, the iPad would have to do something amazing for me to switch back.

OH! A side note: did you know there’s no way to import a photo library from an iOS device to macOS? You can import the photos (either directly from the Photos app, or using Image Capture) but this doesn’t bring across any albums or edit history. I’m fairly sure I’ve lost some originals in the process of importing pictures from my iPad – and the only ways I can see to do it are to upload everything to iCloud and re-download it on macOS, or to write your own app that copies all the photos (along with custom metadata) to external storage connected to the iPad, plus a macOS app that imports them back. You just have to ignore the fact that external storage is still terribly limited and unreliable on iPadOS.

Configure Nebula VPN on iOS/Android

Nebula is now available on iOS and Android, which is very exciting. What is less exciting is that I couldn’t find any documentation on how to set it up. You’re in luck though, because that’s just what I’ve got here: how to set up Nebula on a server, and how to connect a mobile device to it.

Setup Nebula on a server

Installing Nebula is a fairly straightforward (but manual) process. It’s not available in any PPAs (that I know of), so you have to install the binaries yourself.

Do the following on the server that will be your lighthouse (ie a server that has a public static IP, and can open a port to the outside world):

  1. Download the latest release for your platform from GitHub.
  2. Extract the archive and put nebula and nebula-cert somewhere in your $PATH - like /usr/local/bin. Make sure they’re executable: sudo chmod +x /usr/local/bin/nebula*.
  3. Download the example config to /etc/nebula/config.yml.

Now let’s generate some certificates! Generate a CA cert:

$ nebula-cert ca -name "My Mesh Network"

You should now have ca.key and ca.crt. Keep ca.key super secret - anyone with access to it can add new nodes to your network!

Generate a cert for the lighthouse node:

$ nebula-cert sign -name "lighthouse" -ip ""

The IP address can be anything in private network address space. The easiest thing to do is use 10.X.Y.Z - but choose a range that isn’t already common on private networks! Many routers hand out 10.1.1.X, so your VPN could clash with devices on your local network.
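If you want to sanity-check a candidate range against subnets you know are common on home networks, Python’s ipaddress module makes that trivial (the ranges below are hypothetical examples):

```python
import ipaddress

# Hypothetical VPN range, picked to avoid subnets home routers commonly hand out
vpn = ipaddress.ip_network("10.99.0.0/24")

common_home_subnets = [
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("192.168.0.0/16"),
]

clashes = [str(net) for net in common_home_subnets if vpn.overlaps(net)]
print(clashes)  # → []
```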

You should now also have lighthouse.crt and lighthouse.key. You can repeat the nebula-cert sign command for each node in the network - giving them each their own IP.

Now update the config.yml with the VPN IP and external IP/port of your lighthouse. Find the section like this:

  # "<Nebula VPN IP>": ["<external IP or address>:<port>"]
  # eg:
  "": [""]

This allows new nodes to make their initial connection. The external address can be a URL (I actually use a dynamic DNS provider to point to my home computer). The port must be open to the outside world, and listed in the listen section:

  port: 4242

For lighthouses, you need to set am_lighthouse: true. For all other nodes you need to set lighthouse.hosts to a list of the Nebula IPs of the lighthouses. See the example config file for more info, and all the other options you can set.
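Putting that together, the relevant fragments of config.yml look roughly like this (the Nebula IP below is a placeholder - see the example config for the authoritative structure):

```yaml
# On the lighthouse itself:
lighthouse:
  am_lighthouse: true

# On every other node, point at the lighthouse's Nebula IP instead:
lighthouse:
  am_lighthouse: false
  hosts:
    - "10.99.0.1"
```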

You can now start nebula:

$ nebula -config /etc/nebula/config.yml

If you want to run it in the background and have it run at boot - look at the service scripts.

Setup on iOS/Android

Get the app (iOS/Android). Tap “+” to add a new config, and copy the public key for the device (from the “Certificate” screen) onto the machine that has ca.key on it.

Sign the key using ca.key:

$ nebula-cert sign -ca-crt ./ca.crt \
  -ca-key ./ca.key -in-pub <mobile key file> \
  -name <device name> -ip

This should produce <device name>.crt. Copy that and ca.crt back to your phone.

Paste the contents of <device name>.crt onto the “certificate” screen, and the contents of ca.crt onto the “CA” screen. Click “Load certificate”/”Load CA” after pasting each cert.

In the “hosts” screen, set the IP of your lighthouse, as well as its public IP and port. Flip the “lighthouse” toggle on.

Once you’ve entered that, you can save the config. This should prompt a system dialog to enter your passcode to add the new VPN config. You can then use the Nebula app or VPN settings screen to enable Nebula. It will take a second to connect, then you should be able to access all the devices on your VPN.


Impracticalities of iOS Photo Management for Photographers

In the last six months or so I have become an enthusiastic hobbyist photographer. This started with paying more attention to how I used the camera on my phone, then quickly progressed into buying a Sony a6000 mirrorless camera. Before I knew it I had two tripods, a camera bag, an ND filter, and was starting to browse through lenses on Amazon.

These photos usually end up on my Instagram. Here’s one from the other day:

The Sydney Harbour Bridge at sunset

All the photos that I take on my camera are edited on my iPad using Pixelmator Photo. It’s an amazingly powerful piece of software - especially considering that it’s less than $10.

Importing and editing photos from a camera is easy on iOS. You just need the appropriate dongle - in this case a USB-C SD card reader - then tap “Import” in the Photos app. It removes duplicates and can even clear the photos off the card once the import is complete. What more could you want?

Much, much more.

The problem first started when a friend of mine suggested that I start shooting raw photos to make use of the extra dynamic range. If you’re not aware, raw photos contain very little processing from the camera and are not compressed - unlike jpegs. On the a6000 a typical jpeg is less than 10MB, whereas a raw photo is 24MB. This meant that whenever I took my camera anywhere, I could easily produce 20GB of photos instead of the more reasonable 1-2GB.

There are two big problems here:

  1. my iPad only has 256GB of storage
  2. my home internet is too slow to upload this many photos to Google Photos [1]

My first thought was to just dump my unused photos onto an external drive (now that iOS 13 adds support for them natively) and delete them from my iPad. This works OK for video - you have a handful of files, each of which are a few gigabytes. Background processing isn’t great in iOS, but I can deal with having to keep the app open in the foreground while the copy is happening.

This is not the case with photos. Instead you have hundreds and hundreds of small files, so if the copy fails you have no idea where to pick up from. That being said, if you were shooting jpegs, you could do something like:

  1. import into the Photos app
  2. swap the SD card out for an external drive
  3. select all the photos you just imported (hoping that you can just select in a date range)
  4. drag the photos into a folder on the external drive
  5. babysit the Files app while it copies

Less than ideal, but workable. Ok, now why won’t that work for raw files?

iOS kinda only pretends that it supports raw files. If you import them into the Photos app there is no indication of the file type, but third party apps can access the raw data. What’s frustrating is that when you export the photos, only the jpeg is exported. So if you try and drag raw photos onto an external drive, you lose the raw data.

Basically, Photos on iOS is a one-way street for raw images. They’ll be imported, but can’t be exported anywhere without silent data loss.

Alright so using the Photos app is a bit of a loss. Can we just keep all the images as files instead? The short answer is “not very well”.

Importing from an SD card to the Files app can only be done as a copy - no duplicate detection, no progress indication, etc. You just have to sit there hoping that nothing is being lost, and if it fails in the middle you probably just have to start again.

Once the photos are in the Files app, you have to do the same dance again to export them to your external drive. Cross your fingers and hope that nothing gets lost.

Also remember that once the photos have been exported, the Files app will be unable to show the contents of the folder - it’ll just sit on “Loading” presumably trying to read every file and generate hundreds of thumbnails. And there’s no way to unmount the drive to ensure that the data has been correctly flushed to disk, you just have to pull the cable out and hope for the best.

Thankfully Pixelmator Photo does support opening images from the Files app, so you can import them from there without much trouble. But there is no way to quickly delete photos that weren’t in focus or get rid of the bad shots out of a burst. (This isn’t great in the Photos app either, but at least the previewing is acceptable).

So you’re left with a bunch of files on your iPad that you have to manage yourself, and a backup of photos on your external drive that you can’t look at unless you read the whole lot back. Not good.

“Why don’t you just use a Mac” - the software that I know how to use is on iOS. My iPad is faster than my Mac [2]. The screen is better. I can use the Pencil to make fine adjustments.

“Why not use Lightroom and backup to Adobe CC” - I don’t want to have to pay an increasing recurring cost for something that is a hobby. Also I like using Pixelmator Photo (or Darkroom, or regular Pixelmator, or maybe Affinity Photo when I get around to learning how it works).

“Just get an iPad with more storage” - I’d still have the same problem, just in about a year.

This is the point in the blog post that I would like to be able to say that I have a neat and tidy solution. But I don’t. This is a constant frustration that stops me from being able to enjoy photography, because for every photo I take I know that I’m just making everything harder and harder to manage.

One solution could be to connect my external drive to my home server, and find an app that allows me to backup to that. This seems ridiculous as my iPad can write to the drive directly.

I think the only thing that can make this practical - outside of iOS getting much better at managing external drives - is an app that reads the full image data from Photos and writes it to a directory on an external drive. It could also keep a record of what it has exported, and allow for automatically cleaning up any photos that didn’t get edited. The workflow would look something like:

  1. attach SD card and import into Photos app
  2. attach external drive and use photo exporting app to copy new photos onto the drive
  3. edit/delete photos using any app that takes your fancy
  4. use the photo export app to remove, in one go, any photo that wasn’t edited or marked in some way (eg, favourited)

This would patch over the poor experience of copying to external drives, making iOS-only photography more practical for people who don’t want to pay for cloud storage for hundreds of mediocre shots.

Here’s hoping that iOS will continue to improve and become more flexible for different workflows.

If you’re an iOS developer who wants to make an app with probably the most limited appeal ever, get in touch.

  1. I will sometimes take my iPad to work and let it upload on the much better connection there, but this is inconvenient. 

  2. Probably. 

Compiling for Shortcuts

In this video I show a program written in Cub being compiled and run inside of the iOS Shortcuts app. The program is a simple recursive factorial function, and I can use it to calculate the factorial of 5 to be 120. Super useful stuff.

The idea was first floated by Andrew - who knows that I’m easily nerd-sniped by programming language problems - and it stuck even after I pointed out that Shortcuts doesn’t support functions (or any kind of jump other than conditionals), along with a whole host of other features you’d expect in the most basic set of machine instructions. The bar had been set.

Initially I started writing a parser for a whole new language, but quickly discarded that idea because writing parsers takes time and patience. Why not just steal someone else’s work? I’d seen Louis D’hauwe tinkering on Cub, a programming language that he wrote for use in OpenTerm - which is written entirely in Swift. After a quick look into the code I realised it would be simple to use the lexer and parser from Cub and ignore the actual runtime, leaving me to do just the code generation. All I had to do was traverse the syntax tree and generate a Shortcuts file. In terms of code this is fairly straightforward - just add an extension to each AST node that generates the corresponding code.

Over a few evenings I pulled together the basic functionality - after reverse-engineering the Shortcuts plist format by making a lot of shortcuts and airdropping them to my Mac. (Josh Farrant ended up doing a similar thing, and he’s written about the internals a bit on Medium.)

The main problem was how to invent functions in an environment that has no concept of them. Andrew suggested making each function a separate shortcut - and just having a consistent naming convention that includes the function name somewhere - which would work but would make installing them a complete pain. However if you assume you know the name of the current shortcut, you can put every function in one shortcut and just have it call itself with an argument that tells it which function to run. An incredibly hacky and slow solution (as you can see in the video) but it works - even for functions with multiple arguments!
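To sketch the trick outside of Shortcuts: imagine the whole compiled program as one entry point that receives a “which function am I?” argument and calls itself for nested calls. In Python the shape looks something like this (the payload format is made up for illustration):

```python
# One "shortcut" that contains every function and dispatches on an argument,
# mirroring how the generated shortcut runs itself via Run Shortcut.
def run_shortcut(payload):
    fn, args = payload["fn"], payload["args"]
    if fn == "factorial":
        n = args[0]
        if n <= 1:
            return 1
        # A nested call becomes the shortcut invoking itself with a new payload
        return n * run_shortcut({"fn": "factorial", "args": [n - 1]})
    raise ValueError(f"unknown function: {fn}")

print(run_shortcut({"fn": "factorial", "args": [5]}))  # → 120
```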

A lot of debugging and crashing Shortcuts later, I had a working compiler that could do mathematical operations, conditionals, loops, and function calls - all being turned into drag-and-droppable blocks in the Shortcuts app. Like using CAD to design your Duplo house.

The main shortcoming of this hack is that every action has a non-obvious (and undocumented) identifier and argument scheme that you have to reverse engineer, action by action. If this were going to be a general-purpose solution, you’d have to deconstruct all the options for every action and map them to an equivalent API in Cub.
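For a flavour of what that looks like: each action in the shortcut’s plist is a dictionary pairing a reverse-DNS identifier with a free-form parameter dictionary. This fragment is reconstructed from memory, so treat the exact keys as illustrative:

```xml
<dict>
    <key>WFWorkflowActionIdentifier</key>
    <string></string>
    <key>WFWorkflowActionParameters</key>
    <dict>
        <key>WFCommentActionText</key>
        <string>Generated from Cub source</string>
    </dict>
</dict>
```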

If you’re intrigued you can run the compiler yourself (be warned; it is janky). All you need is Swift 4.2 and the code from GitHub.

Learning Software Engineering

It’s an often-repeated meme that school doesn’t teach you the things you need to know to get by in the working world. “I don’t know how to do taxes but at least I can solve differential equations, har har” is usually how they go. There is a sub-meme in Software Engineering that the things you learn at university are generally not applicable to what is needed in the industry - most jobs don’t require you to be able to implement an in-place quicksort or balance a binary tree.

I generally agree with this sentiment - I now have a job and have yet to implement any kind of sorting algorithm or tree structure as part of my day-to-day responsibilities. Although I think the main problem here is comparing the Computer Science curriculum with the needs of the Software Engineering industry - like expecting physics students to graduate and work as mechanical engineers [1].

The thing that I find frustrating is the disdain towards group projects - whether it’s because of cramming all the work in at the last minute, group members not pulling their weight, etc. No one likes group projects, and can you blame them? They’re nothing like working in the real world. This blog post by James Hood gives a good comparison of what it’s like doing work at school vs in industry. It’s not aimed specifically at group projects, but this summary shows how James perceives them:

“Sure, we may get the occasional group project, however that’s generally the exception. In these group projects, it’s not uncommon for one or two individuals to end up doing most of the work while the rest of the team coasts.”

The core difference is that in industry, deadlines often aren’t concrete - and when they are and you’re not on track to meet them, a good team will work out how to get there together, or the scope can be changed in collaboration with the client or product owner. And unless you work in the games industry, where a death march is the accepted project management practice, you should have some kind of work/life balance where you are not spending every moment of your day pushing to complete the project.

What I think is the problem with group projects is that they need to go big or not bother - especially projects done in smaller groups or in pairs. If you give a small project to a pair of students, it’s quite likely that one of them can just get on a roll and churn through the majority of the project. I have done this and it’s not good for either party.

This is exacerbated when the project doesn’t have N obvious parts that can be allocated one to each team member. If the parts have to interact then the team member that does theirs first or makes the most used part will become the go-to person in working out how all the other parts should integrate - doing more work than anyone else.

This basically distils down to “you research it and I’ll write the report” - but then the person that does the research has to write a report anyway, to transfer the knowledge of what they researched. Just replace “research” and “write report” with the components of the project.

What makes a worthwhile group project for software engineers? In the third year of my degree (which was four years in total) we had a group project that lasted the whole year. I think almost everyone in the class learnt a lot about working successfully in a team, as well as learning software engineering skills that you don’t pick up working by yourself.

The course outline looks like this:

The Software Engineering group project gives students in-depth experience in developing software applications in groups. Participants work in groups to develop a complex real application. At the end of this course you will have practiced the skills required to be a Software Engineer in the real world, including gaining the required skills to be able to develop complex applications, dealing with vague (and often conflicting) customer requirements, working under pressure and being a valuable member of a software development team.

There are a lot of small things that make this course work, but I’m just going to mention a few of the most significant:

Firstly, the project is massive - too big for one person to do, and too big to cram in at the start of the year. It is always big enough that you can’t see the whole scope and how things fit together until you’ve finished the initial parts. This forces the team to work sustainably, discourages leaving everything until the last minute or trying to get it all out of the way early, and should guide the team into a consistent rhythm.

The team should reflect the size of an industry software engineering team - about six to ten people. I think seven is a good size - it is small enough that everyone can keep up-to-date with the rest of the team, but big enough to produce a sizeable amount of work throughout the year.

Instead of having a single deadline at the end of the project where the team dumps a load of work and promptly forgets about it, the team should at least be getting regular feedback on how their development process is improving. Ideally, teachers would be able to do significant analysis of how the team is working - incorporating data gathered from the codebase and the development process, as well as subjective feedback from students.

This analysis is a massive amount of work, and is hard to get right - my final year project was to improve the analysis of code. I didn’t improve it an awful lot but learnt a lot about syntax trees, Gumtree and Git.

The team should be marked on their ability to work as a team, and improve their process over the year - as well as the quality of the actual project work that they complete. This gives them the ability to improve their development process - perhaps at the expense of number of features - but hopefully becoming more sustainable and improving other “soft skills”.

This kind of work also has the added benefit of teaching students how to deal with typical industry tools, like getting into a Git merge hell, opening a bunch of bugs in an issue tracker and ignoring them for months, dutifully splitting all their work into stories and tasks and never updating them as the work gets done, receiving panicked Slack messages late at night then working out whether you can get away with ignoring them, and realising that time tracking is a punishment that no one deserves before fudging their timesheet at the end of the day. Proper valuable skills that software engineers use every day.

Of course this means that if you’re planning a project that is only a few weeks long and doesn’t make up much of the marks in the course, it probably shouldn’t be given out as a group project.

Just as computer science is more than just writing programs - you learn about algorithms, data structures, complexity analysis, etc - software engineering is not just computer science. Learning to be a software engineer also includes the ever-dreaded soft skills, and learning how to actually put together a piece of software that will be used by other people. So just as computer science is far more than learning how to program, software engineering must be far more than learning computer science with other people.

  1. This may be a bad analogy, I have not studied physics since my first year of university and don’t really know what mechanical engineers do day-to-day. But I’m sure you get the gist. 

iOS Should Not Have a Command Line

On the latest episode of Upgrade [1], Jason and Myke briefly discuss the idea of Apple adding a command line to iOS. They quickly dismiss it because of the constraints of sandboxing and security - instead suggesting that the command line will be a feature that keeps people at the Mac. I find the idea of a new command line really interesting, so I wanted to expand on some more reasons why there shouldn’t be a command line on iOS - unless it comes with massive changes to how you think about a command line.

I think I know the terminal fairly well; I’m 160 commits into customising how I use it, and whenever I’m at a computer (whether that’s my Mac or my iPad) I have a terminal open.

The advantage of a command line is not the fact that it is textual. Being able to enter rm * is not much faster than selecting everything and hitting cmd-delete in the Finder. The real advantage is that everything uses the exact same set of easily understandable interface concepts, and they can all interact with each other.

All command-line programs have an input stream, an output stream, and an error stream. These can be displayed anywhere, hidden, parsed, redirected, or reinterpreted trivially. The concepts in the command line are the core concepts of the operating system, and everything respects these concepts. They are building blocks that you can put together to do your work.
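That composability is easy to demonstrate: because everything is a stream, even calling one program from another is trivial. A small Python example of the same plumbing the shell uses when you pipe commands together:

```python
import subprocess

# Feed one program's input stream and capture its output stream - no GUI,
# no app-specific integration, just the universal stream interface.
result = subprocess.run(["sort"], input="pear\napple\nbanana\n",
                        capture_output=True, text=True)
first = result.stdout.splitlines()[0]
print(first)  # → apple
```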

In macOS, the building blocks are windows 2 - everything you interact with is contained in a window, and you can interact and manipulate these in a consistent and predictable way. iOS has a less predictable model - apps are the only unit that a user can interact with, which in some situations - particularly dealing with documents - is too coarse and gets in the way.

The other advantage of the command line is that interacting with it (at least at a basic level, before you get into ncurses) is trivial, so tools like compilers or video encoders like ffmpeg don't have to implement a full interface - they can concentrate on their own specific functionality.

Previously on iOS, if you wanted to implement something like a compiler, you'd have to implement the whole IDE or editor as well as the actual compiler - a lot of extra work. People are also quite picky about their editors [citation needed]. iOS 11 improved this significantly with the Files app - your compiler can just read source files, written in someone's editor of choice, from a file provider.

For example, Cub can only really be used in OpenTerm - and there’s no way to add another language to it. OpenTerm is also limited in that it can’t accept commands from other apps - the commands must be baked in, entirely hard-coded.

It would be possible to create a sandboxed shell that ensures commands can only see the files they are entitled to access. You would most likely have to throw out almost all existing scripts from macOS, and the semantics of the shell language would change - popular shells (zsh, Bash, fish, etc) don't have strong types, so you don't know whether the parameters passed are files or just look like files. Maybe they're parts of a web URL (and does this command have permission to access that URL?). Sandboxing an existing shell would either end up limited, or frustrating and unproductive to use.

This all ignores the fact that iOS apps are not built to be used from the command line - they don’t expect command line arguments, they don’t print a result - they can’t share anything about themselves in the way that a command line expects. Even macOS apps don’t really do this.

For me to support a command line on iOS, I would have to see significant changes to how the core of the operating system behaves. The command line needs to be able to tell apps to do things on behalf of the user - when it is allowed - and receive results back from those apps.

iOS can already do this: Workflow (aka Shortcuts) can chain actions from different apps together. Siri Shortcuts allows apps to expose actions that can be consumed by the system and used as part of a workflow. They don't allow for passing input or output (as far as I know), which makes them less versatile than command-line programs.

The other aspect of the terminal that is often overlooked is the features that sit outside of the fairly simple concept of a process with input and output streams - VT100 emulation and the use of ANSI escape codes. These allow any process to control the interface far more than just writing a sequence of characters onto the screen. My terminal would not be complete without tmux. It allows me to easily manage applications in a uniform environment, without having to reach for features higher up the “stack” like windows which are not as well suited to managing command line applications.

There is however - as is tradition when talking about iOS releases - always next year. Shortcuts could gain the ability to accept input from other apps and pass their output to other shortcuts. iOS could get UI improvements that make it easier to juggle parts of applications like I can with tmux.

What I don’t want to see is a literal port of the command line to iOS, because that would be a significant step back in imagining a new way of handling inter-app communication, and would most likely be so constrained that it couldn’t fill the use cases of today’s terminals. A bad terminal on iOS would only serve to further the argument that doing work on iOS is a dead end.

But hey, I’m just some guy that uses tools that are decades older than him.

  1. A great show, definitely listen to it if you’re into this kind of thing. 

  2. Also tabs, I suppose. 

Writing Macros in Crystal

The existing documentation for macros in Crystal leaves a wee bit to be desired, especially if you want to do anything that’s a bit off-the-rails. So here we go, some top tips for macros in Crystal.

There are two different types of interpolation - {{ foo }} will put the result of foo into the code, whereas {% foo %} will evaluate it and ignore the result. Much like <%= foo %> and <% foo %> in embedded Ruby. So if you want to print something to debug the macro, then use {%. This is obvious if you notice that conditionals and loops in macros always use {% because they shouldn’t actually output anything themselves.
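For comparison, the embedded Ruby analogy can be made concrete with a small ERB sketch of my own (not from the Crystal docs) - `<% %>` only evaluates, while `<%= %>` inserts the result:

```ruby
require "erb"

# <% %> evaluates without producing output (assignments, loops,
# conditionals); <%= %> interpolates its result into the rendered text.
template = ERB.new("<% x = 2 + 2 %>result: <%= x %>")
puts template.result(binding)  # prints "result: 4"
```

The `{% %}` / `{{ }}` pair in Crystal macros splits the same two jobs in the same way, just at compile time.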

Something that I didn’t realise initially was that you can assign variables in non-expanding interpolation (the {% kind). This makes your code a lot tidier.

When writing a macro it is super useful to be able to see the generated code - to do this you can use {% debug %}! It will output the current “buffer” for the macro, so you can just put it at the bottom of your macro definition to see what is being generated when your code compiles.

@type is definitely not given the attention it needs. It is essential for writing macros that change aspects of the current class or struct. For example:

macro auto_to_string
  def to_s(io)
    io << {{ @type.stringify }}
  end
end

.stringify basically returns the syntax tree wrapped in quotes, so 44.stringify gives "44" at compile time.

When we call this method in some class, a new method will be generated:

class SomeNeatClass
  auto_to_string # Calling the macro will expand the code here

  # This is what will be generated:
  def to_s(io)
    io << "SomeNeatClass"
  end
end

The class name is turned into a string at compile time. @type will be some kind of TypeNode - checking what kind it is using .is_a? and the methods in the imaginary macros module lets you do different things based on what it is - like if it has generic types, what its superclasses are, etc. Although do remember that this information is limited to what is known by the compiler when the macro is invoked - so if you use @type.methods in a macro that is expanded before any methods are defined, there won’t be any there:

macro print_instance_methods
  {% puts @type.methods.map(&.name) %}
end

class Foo
  print_instance_methods

  def generate_random_number
  end

  print_instance_methods
end

# This will print:
# []
# [generate_random_number]

Depending on what you want to do, you could either move the code into a macro method - macro methods are resolved when the first code that uses them is compiled - or use the method_added and finished macro hooks.

The difficult thing about writing macros (especially if someone else has to use them) is handling input that isn't quite what you expect. The error messages are often incomprehensible - just as you'd expect from an interpreted templating language that is used to generate code for the language it is built on top of.

Pretty much everything you run into in macros is some kind of *Literal class. Arrays are ArrayLiteral, booleans are BoolLiteral, nil is NilLiteral, etc.

Scrutinising a Scalable Programming Language

In my opinion, most developers should know three different programming languages, three different tools that can be used for most of the projects you come across. These should include a scripting language - for automating a task or doing something quickly; an application language - for writing user-facing or client applications, which have to be run on systems that you don’t control; and an “infrastructure” language for building a system that has many components and runs on your own platform.

I don’t really do any systems programming - which I see as basically being dominated by C anyway - so I would lump it in with your application language, as the situation is very similar. If this is not the case then the target hardware and tooling probably dictates using C anyway.

A scripting language is usually dynamically typed and interpreted. It starts up fast and lets you cut as many corners as needed until the problem is solved. It should be good at dealing with files, text, and simple HTTP requests. For this I will usually use Ruby, as it is very quick to write and has useful shortcuts for common things scripts do - like running shell commands. On occasion I will use Python, but I can’t think of any reason that I would use it over Ruby - except for Ruby not being installed.
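The shortcuts I mean look something like this throwaway sketch (my own illustration, not from any particular script): backticks shell out and capture output, and chained Enumerable calls keep the text munging terse:

```ruby
# Backticks run a command through the shell and return its stdout
# as a String - no subprocess boilerplate needed.
words = `echo one two three`.split

# Quick transformations chain naturally, one call each.
upcased = words.map(&:upcase)
puts upcased.join(", ")  # prints "ONE, TWO, THREE"
```

It's this kind of zero-ceremony glue that makes Ruby my default for one-off tasks.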

The application language I use most often is Java (or Kotlin, more recently). This is partly due to having to use it for university, but also because if you want to write something that can be run on pretty much anyone's computer with minimal effort, the JVM is probably your best bet. It has its problems, of course - but it's easier to get someone to install a JRE and run a fat JAR than it would be to ensure they're running the correct version of Python with the dependencies installed.

The “infrastructure” language is probably closest to what I find most interesting - I first thought of this after finishing the hackathon that resulted in Herbert. Herbert is a Slack bot that does some timesheeting. It includes the proper Slack app integration hooks and such, so it needs to be able to join many different teams at once, and dynamically join teams as they add the integration. It’s written entirely in Ruby, because it was easy to get a working prototype (see above) - but because Ruby doesn’t have real threads, running a load of things at the same time plays against its strengths. In subsequent hackathons, Logan and I chose to use Elixir, which is better suited to piecing a selection of bits together and having them communicate (thanks, BEAM!), so it would probably be my choice here.

I expect that most people - like me - would choose three fairly different languages. What if we had a language that could be used easily for all of these situations? What would it take, and what (in my opinion) is stopping existing languages from being used everywhere?

Most of these languages can be used in all situations, but there are some things that get in the way of them being a great choice. If you are an expert in one particular language then the familiarity will likely win out over the suitability of another.

Go

Go excels at building highly concurrent and performant applications. What makes it even more useful is the ability to easily cross compile to pretty much every major platform. Not having exceptions makes it easy to cut corners in error handling - which makes building an initial prototype or experiment easier. For me the verbose syntax and lack of (seemingly simple) language features pushes me away from Go - but I’ll happily reap the benefits of others using it by making use of their applications.

The other aspect of Go that I find frustrating is the enforced package structure and (lack of) dependency management. There is a fair bit of work from just using go run on a single file to setting up a proper Go environment - especially if you have an existing way of organising your projects.

Java & Other JVM Languages

The main pitfall of using the JVM for quick tasks is that it usually takes longer for the JVM to start than it does for your program to run. In the time that it takes to start a Clojure REPL, I can typically open a few other terminals and launch a Ruby, Python, and Elixir REPL.

Plain Java also lacks a build system - who in their right mind would use javac to compile multiple files? No simple script should start by writing a pom.xml file. Kotlin is a lot closer to being used for scripting, but still has the problem of slow JVM startup.

Python & Ruby

Both are great scripting languages, and often get paired with native extensions that increase performance without you having to step down to a lower-level language yourself. However I find the native extensions and the systems that manage the running of your code (Passenger, etc) hard to understand and deploy. I like the idea of being able to run the application completely in production mode locally, which is often not really the case for these types of systems. For example, Rails is usually run in production through a Rack-based server that lets your application handle concurrent requests easily, yet in development it just uses a simple single-threaded server.

This makes deployment more difficult, and Python's versioning fiasco doesn't help. I once wrote a simple script to run on my Raspberry Pi, and because I couldn't get the correct version of Pip installed to load the dependencies onto the Pi, I just reimplemented the same script in Go and cross-compiled it.

JavaScript & Node

I don’t have a lot of experience with JavaScript, but it is one of the few languages that is probably being used in almost every scenario. Quick scripts using Node, applications either in Electron or even just as a web page, and potentially back to Node for your infrastructure. However the ecosystem and build tools are not really to my liking (to say the least), so I will rarely use Node for anything unless I really have to.

Swift

Swift has the potential to be a great option for solving all kinds of problems, but the focus on iOS and macOS development hinders it in the area I am interested in - backend development on Linux. This fragments libraries and frameworks: some depend on being built by Xcode and use Foundation or Cocoa, others use the Swift Package Manager and third-party implementations of what you would expect from a standard library. For example, I don't know how to open a file in Swift. I know how to do it using Foundation, but not on Linux - or maybe that's changed in the latest version?

String handling makes Swift awkward to use for scripting - indexing can only be done using an Index rather than an Int. This makes sense - indexing a UTF-8 string is an O(n) operation, as characters can be almost any number of bytes. It does mean that you end up with a spaghetti of string.startIndex and string.index(after: idx).

Elixir

I find Elixir great for making systems with many different components - which isn’t a surprise given the concurrent focus of Erlang and the BEAM VM. The tooling for Elixir is also great, which makes it a pleasure to develop with. Creating a new project, pulling in some dependencies, building a system, and deploying it is well supported for a new language.

The tooling is necessary - working with multiple files isn't really possible outside of a Mix (the Elixir build system) project, as Mix generates a load of bits and bobs that allow BEAM to put all the code together.

I would never imagine building something designed to run on a computer that’s not my own with Elixir - having someone install Erlang or bundling all of it into a platform-specific binary seems a bit flaky. Also the type of code you write for a client-side application usually isn’t suited for Elixir.

Crystal

Crystal is an interesting new language - it’s still in alpha. What makes it interesting is it has global type inference; instead of just inferring the type for variable assignments based on the return type of functions, it takes into account the usage of every function to determine what types the arguments and result are.

What this means is that Crystal code can be written almost like a dynamically typed programming language, but any type errors will get caught at compile time - basically eliminating typos (ha! type-os!). Obviously this makes Crystal a great language for quick development.

Static typing should make Crystal quite well suited to building larger systems. Of course this is dependent on how it continues to evolve and its ecosystem continues expanding and maturing.

So what would make an ideal scalable language? Something that can be used in any scenario, to build any kind of application. This is a stretch - especially if you want it to be any good at any one of those problems. For me it basically boils down to readable, concise syntax that lets me write code that fits in with the standard library; often this includes some solid compile-time checks, catching problems before the code is run, but without adding too much overhead to writing the code.

The tools in the language need to make it easy to go from a single script, to a few files, to a full on project with dependencies, tests, etc. Of course as soon as dependencies are involved you need a good ecosystem of libraries and frameworks ready to be used on all the platforms you need them on.

On balance the language should probably compile to native code, but what would be neat is the option to run in a lightweight VM that can be bundled into a native binary so that the application can be more easily deployed without installing a runtime environment. Developing in Elixir is very easy because you can recompile and interact with the application from the REPL (this is obviously not new to LISP developers - I have only used Clojure, and find that IEx in Elixir is far easier to use than the Leiningen REPL).

Due to my habit of jumping between languages I don't imagine I will settle on one that I would use for everything any time soon. Although I always keep an eye on Swift to see how that's progressing - just as next year is going to be the year of Desktop Linux, next year will be the year of server-side Swift. Always next year.

A Downgrade

I have decided to downgrade my phone. I don't feel the need for a large HD screen, a massive battery, and 3GB of RAM. The extra resolution of the 13MP camera is pretty much wasted on me. I have used this phone - the OnePlus One - for the last two and a half years, and have decided that when I get a new phone, I no longer need something of this calibre.

When the One came out, it boasted top-of-the-line specs - an incredible 401ppi 5.5” screen, massive 3100 mAh battery, a quad-core 2.4GHz processor, and a 13MP camera.

Almost all new phones in the last few years put the specs of the One to shame - boasting 8-core CPUs, over 6GB of RAM, screen densities upwards of 550ppi, and 16MP cameras.

So, what to do?

Thankfully, I found a device that suits my needs a bit better. The screen is a respectable 326ppi, the battery just 1800 mAh, only 2GB of RAM, and a 12MP camera.

This device does incur the cost of running a less common operating system that is proprietary to this particular manufacturer, rather than using Android, the de-facto industry standard. I have used Android for a long time, so this did weigh heavily in my decision to move, but in the end I decided that the more modest, lower-specced device was the right choice.

The phone arrived last week, and even though the specs are behind even my One from 2014, thankfully I haven't noticed any lack of performance or responsiveness in day-to-day use. Even with a battery less than two thirds the size of the One's, I still haven't dropped below 30% at the end of the day.

So, long story short - I got an iPhone 8 and it’s amazing.