iOS Should Not Have a Command Line

On the latest episode of Upgrade 1, Jason and Myke briefly discuss the idea of Apple adding a command line to iOS. They quickly dismiss it because of the constraints of sandboxing and security - instead suggesting that the command line will be a feature that keeps people at the Mac. I find the idea of a new command line really interesting, so I wanted to expand on some more reasons why there shouldn’t be a command line on iOS - unless it comes with massive changes to how you think about a command line.

I think I know the terminal fairly well; I’m 160 commits into customising how I use it - whenever I’m at a computer (whether that’s my Mac or my iPad) I have a terminal open.

The advantage of a command line is not the fact that it is textual. Being able to enter rm * is not much faster than selecting everything and hitting cmd-delete in the Finder. The real advantage is that everything uses the exact same set of easily understandable interface concepts, and they can all interact with each other.

All command-line programs have an input stream, an output stream, and an error stream. These can be displayed anywhere, hidden, parsed, redirected, or reinterpreted trivially. The concepts in the command line are the core concepts of the operating system, and everything respects these concepts. They are building blocks that you can put together to do your work.
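As a generic illustration (mine, not from the episode), that composability looks like this in any POSIX shell - three tools that know nothing about each other cooperate purely through their streams:

```shell
# printf writes lines to stdout, sort reads them from stdin and
# orders them, head keeps the first one. Each program only deals
# with its own streams; the shell wires them together.
printf 'pear\napple\nbanana\n' | sort | head -n 1
# prints: apple
```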

In macOS, the building blocks are windows 2 - everything you interact with is contained in a window, and you can interact and manipulate these in a consistent and predictable way. iOS has a less predictable model - apps are the only unit that a user can interact with, which in some situations - particularly dealing with documents - is too coarse and gets in the way.

The other advantage of the command line is that interacting with it (at least at the basic level, before you get into ncurses) is trivial, so tools like compilers or video encoders like ffmpeg don’t have to implement a full interface - they concentrate on their own specific functionality.

Previously on iOS, if you wanted to implement something like a compiler, you’d have to implement the whole IDE or editor as well as the actual compiler - which is a lot of extra work. People are also quite picky about their editors [citation needed]. iOS 11 improved this significantly with the Files app - your compiler could just read the source files from a file provider, written with someone’s editor of choice.

For example, Cub can only really be used in OpenTerm - and there’s no way to add another language to it. OpenTerm is also limited in that it can’t accept commands from other apps - the commands must be baked in, entirely hard-coded.

It would be possible to create a sandboxed shell that ensures commands can only see the files they are entitled to access. You would most likely have to throw out almost all existing scripts from macOS, and the semantics of the shell language would change - popular shells (zsh, Bash, fish, etc) don’t have strong types, so you can’t tell whether a parameter is a file path or just looks like one. Maybe it’s part of a web URL? (And does this command have permission to access that URL?) Sandboxing an existing shell would either end up limited, or frustrating and unproductive to use.

This all ignores the fact that iOS apps are not built to be used from the command line - they don’t expect command line arguments, they don’t print a result - they can’t share anything about themselves in the way that a command line expects. Even macOS apps don’t really do this.

For me to support a command line on iOS, I would have to see significant changes to how the core of the operating system behaves. The command line needs to be able to tell apps to do things on behalf of the user - when it is allowed - and receive results back from those apps.

iOS can already do this: Workflow (aka Shortcuts) can chain actions from different apps together. Siri Shortcuts allow apps to expose actions that can be consumed by the system and used as part of a workflow. They don’t allow for passing input or output (as far as I know), which makes them less versatile than command-line programs.

The other aspect of the terminal that is often overlooked is the features that sit outside of the fairly simple concept of a process with input and output streams - VT100 emulation and the use of ANSI escape codes. These allow any process to control the interface far more than just writing a sequence of characters onto the screen. My terminal would not be complete without tmux. It allows me to easily manage applications in a uniform environment, without having to reach for features higher up the “stack” like windows which are not as well suited to managing command line applications.
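As a minimal sketch of what an escape code looks like (a generic example, not tied to tmux):

```shell
# \033 is the ESC byte; '[31m' switches the foreground colour to red
# and '[0m' resets it. The terminal interprets the sequence instead of
# printing it - this is how programs draw colours, move the cursor, etc.
printf '\033[31mError:\033[0m something went wrong\n'
```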

There is however - as is tradition when talking about iOS releases - always next year. Shortcuts could gain the ability to accept input from other apps and pass their output to other shortcuts. iOS could get UI improvements that make it easier to juggle parts of applications like I can with tmux.

What I don’t want to see is a literal port of the command line to iOS, because that would be a significant step back in imagining a new way of handling inter-app communication and would most likely be so constrained that it wouldn’t be able to fill the use cases of today’s terminals. A bad terminal on iOS would only serve to further the argument that doing work on iOS is a dead end.

But hey, I’m just some guy that uses tools that are decades older than I am.

  1. A great show, definitely listen to it if you’re into this kind of thing. 

  2. Also tabs, I suppose. 


Writing Macros in Crystal

The existing documentation for macros in Crystal leaves a wee bit to be desired, especially if you want to do anything that’s a bit off-the-rails. So here we go, some top tips for macros in Crystal.

There are two different types of interpolation - {{ foo }} will put the result of foo into the code, whereas {% foo %} will evaluate it and ignore the result. Much like <%= foo %> and <% foo %> in embedded Ruby. So if you want to print something to debug the macro, then use {%. This is obvious if you notice that conditionals and loops in macros always use {% because they shouldn’t actually output anything themselves.
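For comparison, here’s a small ERB sketch of the same distinction (the template string is just an example of mine):

```ruby
require 'erb'

# <% %> evaluates its contents and discards the result;
# <%= %> inserts the result into the output - the same split
# as {% %} and {{ }} in Crystal macros.
template = ERB.new("<% x = 2 + 2 %>The answer is <%= x * 10 + 2 %>.")
puts template.result(binding)
# prints: The answer is 42.
```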

Something that I didn’t realise initially was that you can assign variables in non-expanding interpolation (the {% kind). This makes your code a lot tidier.

When writing a macro it is super useful to be able to see the generated code - to do this you can use {% debug %}! It will output the current “buffer” for the macro, so you can just put it at the bottom of your macro definition to see what is being generated when your code compiles.

@type is definitely not given the attention it needs. It is essential for writing macros that change aspects of the current class or struct. For example:

macro auto_to_string
  def to_s(io)
    io << {{ @type.stringify }}
  end
end

.stringify basically returns the syntax tree wrapped in quotes, so 44.stringify gives "44" at compile time.

When we call this method in some class, a new method will be generated:

class SomeNeatClass
  auto_to_string # Calling the macro will expand the code here

  # This is what will be generated:
  def to_s(io)
    io << "SomeNeatClass"
  end
end

The class name is turned into a string at compile time. @type will be some kind of TypeNode - checking what kind it is using .is_a? and the methods in the imaginary macros module lets you do different things based on what it is - like if it has generic types, what its superclasses are, etc. Although do remember that this information is limited to what is known by the compiler when the macro is invoked - so if you use @type.methods in a macro that is expanded before any methods are defined, there won’t be any there:

macro print_instance_methods
  {% puts @type.methods.map &.name %}
end

class Foo
  print_instance_methods

  def generate_random_number
    4
  end

  print_instance_methods
end
# This will print:
# []
# [generate_random_number]

Depending on what you want to do, you could either move the code into a macro method - they get resolved when the first bit of code that uses them is compiled - or use the method_added and finished macro hooks.

The difficult thing about writing macros (especially if someone else has to use them) is handling input that isn’t quite what you expect. The error messages are often incomprehensible - just as you’d expect from an interpreted templating language that generates code for another language, which it is itself built on top of.

Pretty much everything you run into in macros is some kind of *Literal class. Arrays are ArrayLiteral, booleans are BoolLiteral, nil is NilLiteral, etc.


Scrutinising a Scalable Programming Language

In my opinion, most developers should know three different programming languages, three different tools that can be used for most of the projects you come across. These should include a scripting language - for automating a task or doing something quickly; an application language - for writing user-facing or client applications, which have to be run on systems that you don’t control; and an “infrastructure” language for building a system that has many components and runs on your own platform.

I don’t really do any systems programming - which I see as basically being dominated by C anyway - so I would lump it in with your application language, as the situation is very similar. If this is not the case then the target hardware and tooling probably dictates using C anyway.

A scripting language is usually dynamically typed and interpreted. It starts up fast and lets you cut as many corners as needed until the problem is solved. It should be good at dealing with files, text, and simple HTTP requests. For this I will usually use Ruby, as it is very quick to write and has useful shortcuts for common things scripts do - like running shell commands. On occasion I will use Python, but I can’t think of any reason that I would use it over Ruby - except for Ruby not being installed.
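To show what I mean by shortcuts, a hypothetical three-line Ruby script (the file name and command are made up for illustration):

```ruby
# Backticks run a shell command and capture its stdout - no
# subprocess boilerplate - and File.write/File.read make one-off
# file handling a single call each.
name = `echo my-machine`.strip
File.write('greeting.txt', "Hello from #{name}\n")
puts File.read('greeting.txt')
```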

The application language I use most often is Java (or Kotlin, more recently). This is partly due to having to use it for university, but also if you want to write something that can be run on pretty much anyone’s computer with minimal effort, the JVM is probably your best bet. It has its problems, of course - but it’s easier to get someone to install a JRE and run a fat JAR than it would be to ensure they’re running the correct version of Python and the dependencies are installed.

The “infrastructure” language is probably closest to what I find most interesting - I first thought of this after finishing the hackathon that resulted in Herbert. Herbert is a Slackbot that does some timesheeting. It includes the proper Slack app integration hooks and stuff, so it needs to be able to join many different teams at once, and dynamically join teams as they add the integration. It’s written entirely in Ruby, because it was easy to get a working prototype (see above) - but because Ruby doesn’t have real threads, running a load of things at the same time isn’t exactly one of its strengths. In subsequent hackathons, Logan and I chose to use Elixir, which is better suited to piecing a selection of bits together and having them communicate (thanks, BEAM!) so it would probably be my choice here.


I expect that most people - like me - would choose three fairly different languages. What if we had a language that could be used easily for all of these situations? What would it take, and what (in my opinion) is stopping existing languages from being used everywhere?

Most of these languages can be used in all situations, but there are some things that get in the way of them being a great choice. If you are an expert in one particular language then the familiarity will likely win out over the suitability of another.

Go

Go excels at building highly concurrent and performant applications. What makes it even more useful is the ability to easily cross compile to pretty much every major platform. Not having exceptions makes it easy to cut corners in error handling - which makes building an initial prototype or experiment easier. For me the verbose syntax and lack of (seemingly simple) language features pushes me away from Go - but I’ll happily reap the benefits of others using it by making use of their applications.

The other aspect of Go that I find frustrating is the enforced package structure and (lack of) dependency management. There is a fair bit of work to go from just using go run on a single file to setting up a proper Go environment - especially if you have an existing way of organising your projects.

Java/ Other JVM Languages

The main pitfall of using the JVM for quick tasks is that it usually takes longer for the JVM to start than it does for your program to run. In the time that it takes to start a Clojure REPL, I can typically open a few other terminals and launch a Ruby, Python, and Elixir REPL.

Plain Java also lacks a build system - who in their right mind would use javac to compile multiple files? No simple script should start by writing a pom.xml file. Kotlin is a lot closer to being usable for scripting, but still has the problem of slow JVM startup.

Python & Ruby

Both are great scripting languages, and often get paired with native extensions that increase performance without you having to step down to a lower-level language yourself. However I find the native extensions, and the systems that manage the running of your code (Passenger, etc), hard to understand and deploy. I like the idea of being able to run the application completely in production mode locally, which is often not really the case for these types of systems. For example, Rails is usually run in production through a Rack-based server that lets your application handle concurrent requests easily, however in development it just uses a simple single-threaded server.

This makes deployment more difficult, and Python’s versioning fiasco doesn’t help. I once wrote a simple script to run on my Raspberry Pi, and because I couldn’t get the correct version of pip installed to load the dependencies onto the Pi, I just reimplemented the same script in Go and cross-compiled it.

JavaScript & Node

I don’t have a lot of experience with JavaScript, but it is one of the few languages that is probably being used in almost every scenario. Quick scripts using Node, applications either in Electron or even just as a web page, and potentially back to Node for your infrastructure. However the ecosystem and build tools are not really to my liking (to say the least), so I will rarely use Node for anything unless I really have to.

Swift

Swift has the potential to be a great option for solving all kinds of problems, but the focus on iOS and macOS development hinders it in the areas that I am interested in - backend development on Linux. This fragments libraries and frameworks: some depend on being built with Xcode and use Foundation or Cocoa, others use the Swift Package Manager and third-party implementations of what you would expect from a standard library. For example, I don’t know how to open a file in Swift. I know how to do it using Foundation, but not on Linux - or maybe that’s changed in the latest version?

String handling makes Swift awkward to use for scripting - indexing can only be done using an Index rather than an Int. This makes sense - indexing a UTF-8 string is an O(n) operation, as characters can be pretty much any number of bytes. It does mean that you end up with a spaghetti of string.startIndex and string.index(after: idx).

Elixir

I find Elixir great for making systems with many different components - which isn’t a surprise given the concurrent focus of Erlang and the BEAM VM. The tooling for Elixir is also great, which makes it a pleasure to develop with. Creating a new project, pulling in some dependencies, building a system, and deploying it is well supported for a new language.

The tooling is necessary - working with multiple files isn’t really possible outside of a Mix (the Elixir build system) project, as Mix makes a load of bits and bobs that allows BEAM to put all the code together.

I would never imagine building something designed to run on a computer that’s not my own with Elixir - having someone install Erlang or bundling all of it into a platform-specific binary seems a bit flaky. Also the type of code you write for a client-side application usually isn’t suited for Elixir.

Crystal

Crystal is an interesting new language - it’s still in alpha. What makes it interesting is it has global type inference; instead of just inferring the type for variable assignments based on the return type of functions, it takes into account the usage of every function to determine what types the arguments and result are.

What this means is that Crystal code can be written almost like a dynamically typed programming language, but any type errors will get caught at compile time - basically eliminating typos (ha! type-os!). Obviously this makes Crystal a great language for quick development.

Static typing should make Crystal quite well suited to building larger systems. Of course this is dependent on how it continues to evolve and its ecosystem continues expanding and maturing.


So what would make an ideal scalable language? Something that can be used in any scenario, to build any kind of application. This is a stretch - especially if you want it to be any good at any of one of the problems. For me this basically boils down to readable, concise syntax that allows me to write code that fits in with the standard library; often this includes some solid compile-time checks, catching problems before the code is run, but without adding too much extra overhead to writing the code.

The tools in the language need to make it easy to go from a single script, to a few files, to a full on project with dependencies, tests, etc. Of course as soon as dependencies are involved you need a good ecosystem of libraries and frameworks ready to be used on all the platforms you need them on.

On balance the language should probably compile to native code, but what would be neat is the option to run in a lightweight VM that can be bundled into a native binary so that the application can be more easily deployed without installing a runtime environment. Developing Elixir is very easy because you can recompile and interact with the application from the REPL (this is obviously not new to LISP developers - I have only used Clojure and find that IEx in Elixir is far easier to use than the Leiningen REPL).

Due to my habit of jumping between languages I don’t imagine I will settle on one that I would use for everything any time soon. Although I always keep an eye on Swift to see how that’s progressing - just as next year is going to be the year of desktop Linux, next year will be the year of server-side Swift. Always next year.


A Downgrade

I have decided to downgrade my phone. I don’t feel the need for a large HD screen, a massive battery, and 3GB of RAM. The extra resolution of the 13MP camera is pretty much wasted on me. I have used this phone - the OnePlus One - for the last two and a half years and have decided that when I get a new phone, I no longer need something of this calibre.

When the One came out, it boasted top-of-the-line specs - an incredible 401ppi 5.5” screen, massive 3100 mAh battery, a quad-core 2.4GHz processor, and a 13MP camera.

Almost all new phones in the last few years put the specs of the One to shame - boasting 8-core CPUs, over 6GB of RAM, screen densities upwards of 550ppi, and 16MP cameras.

So, what to do?

Thankfully, I found a device that suits my needs a bit better. The screen is a respectable 326ppi, the battery just 1800 mAh, only 2GB of RAM, and a 12MP camera.

This device does incur the cost of running a less common operating system that is proprietary to this particular manufacturer, rather than using Android, the de-facto industry standard. I have used Android for a long time, so this did weigh heavily in my decision to move, but in the end I decided that the more modest, lower-specced device was the right choice.

The phone arrived last week and even though the specs are behind even my One from 2014, thankfully I haven’t noticed any lack of performance or responsiveness in day-to-day use. Even with its battery that is less than two thirds the size of the One’s, I still haven’t dropped below 30% at the end of the day.

So, long story short - I got an iPhone 8 and it’s amazing.


Top Tips for iOS 11

Some top tips from Federico Viticci’s iOS 11 review:

  1. Press CMD-Option-D to show the dock. No more awkwardly trying to reach for the edge of the screen when your keyboard is in the way.
  2. Grab the nubbin at the top of a pane to re-arrange the windows in split view.

And some things that I wish were top tips but aren’t:

  1. Drag and drop apps in the multitasking view to create a split view.
  2. Use CMD & arrow keys to move apps from spotlight into split view.

And some things that aren’t mentioned that I would like to see:

  1. Some way to enter 50/50 split view immediately, without going into 70/30 first.
  2. Open-in-place support in all the apps.

Pug: An Abomination of Shell Scripting

Pug started out a few months ago as a slightly silly idea of writing my own plugin manager for Vim and ZSH plugins. There is no shortage of ways of managing Vim plugins - Pathogen or Vundle seem to be the most common. For ZSH everyone swears by Oh My Zsh which includes every bell and whistle you could imagine.

However each of these only works for one tool. What if (for some reason) I wanted a tmux plugin? I’d have to install some tmux package manager - if there is one. Pug is the one tool to rule all my package managing needs.

Pug can be used to manage packages for any utility - out of the box it has installers for Vim and ZSH, but other installers can be added by writing a simple shell script. I’ll probably write some more builtin ones myself.

To get started with Pug, head over to the Pug repo. My favourite ZSH plugins - syntax highlighting and auto suggestions - can be installed with Pug:

Install Pug:

curl https://raw.githubusercontent.com/willhbr/pug/master/install.sh | bash

Create a deps.pug file somewhere:

vim deps.pug

Add the dependencies (zsh-autosuggestions and zsh-syntax-highlighting) to deps.pug:

#!/usr/local/bin/pug load

zsh github: zsh-users/zsh-autosuggestions
zsh github: zsh-users/zsh-syntax-highlighting

Load the dependencies:

pug load deps.pug

You’ll be prompted to add this to your .zshrc file:

source "$HOME/.pug/source/zsh/pug"

Done. No more submodules.


Learn Enhancer Reaches Version 2.0

When I first started Learn Enhancer at the start of my degree - sometime in early 2014 - it was just a single JavaScript file that would redirect to a certain URL if it found a certain element. Eventually I coughed up the $5 required to submit it to the Chrome Web Store, and it started its slow creep onto students’ browsers.

This initial version didn’t catch every case, and so just over a year later I grew frustrated enough to dig right into the HTTP requests and alter them to ensure that the file could be previewed - which I outlined on this blog. Since then I branched out into re-styling every page on Learn with gentler colours (with design supervision from Sarang) - which had the delightful side-effect of letting me see whenever someone has Learn Enhancer installed. It is nice to see my work being used by people that have no clue who I am.

One of the features that I wanted to include but never quite worked out was auto-login, so that I didn’t have to click two buttons before accessing content on Learn. That remained the case until this morning, when I saw Patrick working on a script to do it. Thankfully he was receptive to the idea of incorporating this into Learn Enhancer, and soon he had it working in the extension.

So here we go, Learn Enhancer has hit 2.0. Since this is my last year at UC, I have put Learn Enhancer on GitHub so that (in theory) current students can contribute fixes when Learn inevitably changes after I have graduated.

If you are a current UC student, download Learn Enhancer - you will almost definitely appreciate it.


Using tmux in the Real World

Every now and again I happen across a post outlining how to use tmux. Since I first happened upon tmux in 2015, my use of it has grown from “occasional”, to “frequent”, almost to “continual”. What I find frustrating with these posts is that they don’t describe how to actually use tmux in the real world. The post in question that prompted this post tells you how to start a session, create new windows, then how to switch between and resize windows.

The thing that they fail to explain is that the default tmux commands and shortcuts are terrible. Common operations require far too many fiddly keystrokes to be done quickly. Moving between panes is by default Prefix followed by an arrow key, so to move to the right you would enter C-b → - but you can’t press the arrow key while you have control held down, because that will resize the window. So if you have four panes, you have to press C-b, release ctrl, press the arrow key, press C-b, release ctrl, press the arrow key, press C-b, release ctrl, press the arrow key once more and you’re there - unless of course you pressed the arrow key before the ctrl key was released, which means you will have a slightly resized pane instead 1.

This lack of usability is repeated - splits are created with Prefix % and Prefix " but which one does horizontal and which one is vertical? I have no idea, plus having keys that require the shift key just makes them harder to press. The tmux command also leaves a lot to be desired - it’s not simple to connect back to an existing session given its name.

What irks me the most is that a beginner will read one of these posts and think that they have to remember all these arcane commands and be able to enter them at lightning speed. This is far from the reality of using tmux (or most other command-line tools) - everyone that I know that uses tmux has a config that makes tmux fit to the way they think of things. Each one of them had to learn the defaults and then find out if there was a better way - which is a significant barrier for most people.

The aspect of tmux that redeems these oddities is its extensive set of customisation options. Every command can be bound to a new shortcut, and shortcuts can be entered without needing to press the prefix key first. So what I’m going to do is build a set of reasonable defaults, so you can jump ahead and use tmux like a sane person.


This post isn’t a one-stop-shop for all your tmux needs, instead it’s just a quick walkthrough of the basic ways that I make tmux more appropriate for daily use 2. All the snippets should be added to your tmux config file, which lives in ~/.tmux.conf by default.

The first thing that most tutorials tell you to do is remap the prefix to something other than C-b, because C-b is a bit too much of a stretch for most people. I use C-z, many people use C-a. Whatever you use is up to you. To remap the prefix, add this to your .tmux.conf:

unbind C-b
set -g prefix C-z
bind C-z send-prefix

This deactivates C-b, sets C-z as the prefix and makes a shortcut C-z C-z that will send C-z to the program inside tmux (so you can still use the shortcut). Replace C-z with another shortcut that tickles your fancy if you so desire. (I’ll use C-z when I’m talking about the prefix in examples, just remember to use yours if it is different).

The next thing is splitting panes. This will depend on how you visualise the panes, but I think of a horizontal split as two panes with a divider that is horizontal, and a vertical split has a vertical divider. This is the opposite to how tmux thinks of it, so depending on how you think, you may want to skip this.

Since tmux 1.9, new windows and panes open in the directory that tmux started in. I prefer the old method where they would open in the same directory as the previous window or pane. I frequently run some command, and if it takes a while I will open a split and continue working in the same location while waiting for the command to complete. I find this behaviour useful, and I think you will too. So:

# Open new windows with the same path (C-z c)
bind c new-window -c "#{pane_current_path}"
# Create a 'vertical' split (divide vertically) using C-z v
bind v split-window -h -c "#{pane_current_path}"
# And a horizontal split (divide horizontally) using C-z h
bind h split-window -v -c "#{pane_current_path}"

OK, so on to the main event, the thing that makes tmux actually usable - faster pane switching. I use vim, so I’m used to h/j/k/l for left/down/up/right movement; you may prefer the arrow keys. Up to you. The key is to make these shortcuts not require the prefix before them, so you can smush some buttons repeatedly instead of repeating an exact sequence.

# For h/j/k/l movement
bind -n C-h select-pane -L
bind -n C-j select-pane -D
bind -n C-k select-pane -U
bind -n C-l select-pane -R
# For arrow key movement
bind -n C-Left select-pane -L
bind -n C-Down select-pane -D
bind -n C-Up select-pane -U
bind -n C-Right select-pane -R

This lowers the barrier to moving between your panes, which should hopefully encourage you to get crazy and open as many panes as you can fit on your screen. Wait, what if I don’t want to have everything in exact halves? Then you’ll have to resize a pane!

The post that inspired this one instructs you to resize panes by opening the command mode Prefix : and entering resize-pane -L, to move the split to the left. Now that is just super tedious. You can give it a number for the amount to resize by, but that devolves into guesstimating pretty quickly. Instead I like to leverage the meta (alt/option) key, so M-l (alt + L) will resize the pane to the left. Again you could make this M-Left if arrow keys are your forte.

# h/j/k/l
bind -n M-h resize-pane -L
bind -n M-j resize-pane -D
bind -n M-k resize-pane -U
bind -n M-l resize-pane -R
# Arrow keys
bind -n M-Left resize-pane -L
bind -n M-Down resize-pane -D
bind -n M-Up resize-pane -U
bind -n M-Right resize-pane -R

And boom, you can resize panes super quickly. One last shortcut that isn’t quite essential, but still useful: quick window switching. I like M-n and M-p to replace C-z n and C-z p - especially if you’re flicking through a lot of windows.

bind -n M-n next-window
bind -n M-p previous-window

Two more useful things: set the default terminal to 256-color so that your editor looks good, and set the starting index of windows to 1 rather than 0 so it follows the order of the keyboard:

set -g base-index 1
set -g default-terminal "screen-256color"

So what to do about managing your sessions? Almost everyone I’ve talked to has made a little wrapper script that basically does this: if no arguments are given, list all the sessions. If an argument is given, connect to that session if it exists, otherwise create a session with that name. This avoids having unnamed sessions and means you don’t have to remember to run tmux ls every time. I’ve made a version of this with more bells and whistles but this is the basic idea:

mux() {
  local name="$1"
  if [ -z "$name" ]; then
    tmux ls
    return
  fi
  tmux attach -t "$name" || tmux new -s "$name"
}

Chuck that in your .bash_profile, .zshrc or whatever, then run mux to view your sessions, or mux my-session to create or connect to a session.

These are the changes that I have made to make tmux usable, but don’t forget that there are a whole load of things that I just do the default way. This post isn’t an exhaustive tutorial on using tmux, but rather an outline of how to make it more useful if you share my sensibilities.

  1. I know that if C-h/j/k/l were the default this would prevent those keys from being used for other things, but I think the productivity gain is far greater than the loss of some keys (this is probably just because I don’t use anything that needs those shortcuts). 

  2. Other things that irk me are poor window indicators in the status bar - mine has more color to show the current window. The status bar also does a poor job of showing the status info - especially the current host. I change the color of part of the status bar depending on the host I’m on (mostly for aesthetics). And the default green highlight is super gross. 


Metaprogramming and Macros for Server-Side Swift

I have been a fan of the Swift programming language since it was first announced, and especially since it was open sourced. The place I thought Swift could be most interesting for me was server applications - I’m not much of an iOS/macOS developer. However, the progress of Swift-on-Linux is slow for someone who doesn’t like digging around in Makefiles and linking to C libraries.

However, there are some things about web applications that aren’t currently served by the design of Swift. This can basically be boiled down to one thing - compile-time macros. Having a macro system allows for a lot of really cool syntactic sugar, as well as removing work that would otherwise need to be done on the first request, or at startup. Many of these are taken from my brief time learning Phoenix, a web framework written in Elixir - if I’ve misinterpreted something or ruled out some approach that is actually possible, let me know.

The main use of macros in your typical web framework is the routing configuration. Phoenix and Rails both support a DSL (implemented using the syntax of the language, Elixir or Ruby). Both of these look quite similar, basically allowing you to do this:

# In Phoenix
get "/", MyController, :index
# In Rails
get "/", to: 'my_controller#index'

The DSL gets more complicated when you include resourceful routes and other goodies. But at its core the purpose of the DSL is to allow the developer to use the same tools (i.e. the same editor and highlighting) to define their routes in a succinct manner. Phoenix can go one step further, because Elixir supports macros: the routes are checked when the project is compiled, and can be turned into arbitrary code that responds to web requests following the rules defined.

For example, the get macro can check that the path is valid, that it doesn’t clash with any other routes, and make helper functions for linking to that page (e.g. a my_controller_index_path() function). This is done at compile time, so when the code is run it is no different to running the “hand written” equivalent.

This is not a problem in Ruby - because it is a dynamic language, these methods can simply be created at runtime. And there is basically no relative loss in performance, because partly to support this level of metaprogramming (and because it is interpreted) Ruby is already super slow compared to compiled languages.
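To make that concrete, here is a minimal sketch of the kind of runtime metaprogramming Rails relies on, using Ruby’s define_method. The RouteHelpers class and its method names are hypothetical, invented purely for illustration:

```ruby
# Hypothetical sketch: a Rails-like router defining URL helper
# methods at runtime. `RouteHelpers` and `register` are made up.
class RouteHelpers
  # Defines a `<name>_path` instance method for a registered route
  def self.register(name, path)
    define_method("#{name}_path") { path }
  end
end

RouteHelpers.register(:my_controller_index, "/")

# The helper now exists, even though it was never written by hand:
RouteHelpers.new.my_controller_index_path  # => "/"
```

Because the method is created while the program runs, there is no compile step that could have done this work ahead of time - which is exactly the trade-off being described here.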

When it comes to compiled languages without macros (like Swift, Go, Java, etc) you can’t pre-calculate information while the code is being compiled. Go lacks the features 1 to implement any kind of usable DSL. Revel (the #1 result when googling “golang web framework”) has a separate routes file, written in a Revel-specific syntax that is parsed at runtime. This creates complexity in the packaging and distribution of the application - it can no longer be built as a single binary, as it relies on this config file.

Swift does allow for creating concise DSLs. Vapor and Perfect, two Swift web frameworks, both offer routing DSLs that look something like:

app.get("/:page_id") { request in
  return Response(.text, request.params["page_id"])
}

But this is processed at runtime, and doesn’t allow for creating helper methods for building URLs, or for grouping methods together into a class-based controller like Rails does. The latter could just be a necessary limitation of Swift: instead of making classes, you could create a “controller factory” DSL, which you might use like:

controller("MyController") { app in
  app.get("/stuff") { request in
    // do something with stuff
  }
  // etc
}

Although this doesn’t get around the fact that much of your logic is defined in string literals that aren’t looked at until the application is running, or the fact that the routes must be generated every time the application starts - if you wanted to build a super-efficient trie or some other data structure for processing requests faster, you would sacrifice startup time in both development and production, even though the structure never changes until a new version is deployed.
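As a rough illustration of the structure being paid for at startup, here is a tiny route trie keyed on path segments, sketched in Ruby (the class and names are invented for this example):

```ruby
# Hypothetical sketch of a route trie: each path segment is a node,
# and a handler is stored at the end of the path.
class RouteTrie
  def initialize
    @root = {}
  end

  # Walk/create one node per path segment, store the handler at the end
  def add(path, handler)
    node = path.split("/").reject(&:empty?).reduce(@root) do |n, segment|
      n[segment] ||= {}
    end
    node[:handler] = handler
  end

  # Walk the segments; bail out with nil if any segment is missing
  def match(path)
    node = path.split("/").reject(&:empty?).reduce(@root) do |n, segment|
      return nil unless n
      n[segment]
    end
    node && node[:handler]
  end
end

router = RouteTrie.new
router.add("/stuff", :stuff_handler)
router.match("/stuff")    # => :stuff_handler
router.match("/missing")  # => nil
```

Without macros, a structure like this has to be rebuilt from the route definitions on every startup; with them, it could be baked in at compile time.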

Moving code-level information out of strings allows static analysis to perform more useful checks when validating code. For example, regular expressions are often written as string literals (e.g. in Java) and so don’t get checked for validity until the program reaches them. Other languages have builtin regex literals (JavaScript, Ruby) to fix this problem. Elixir goes one step further thanks to (you guessed it) macros, specifically “sigils” - macros that wrap around a special “literal creation” syntax. This is used not only for regexes (written like ~r/abc\w{5}/) but also for the “make a list of strings” helpers commonly found in other languages: ~w(foo bar) is equivalent to ["foo", "bar"]. So if you made a cool new type of regex that adds some awesome new feature, you can implement a macro that lets you write it easily and have all the same advantages as the builtin version.
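Ruby shows the difference nicely: a regex built from a string only fails when that line actually runs, whereas a regex literal with the same mistake would be rejected before the program starts at all. A small sketch:

```ruby
# A string-based regex is only validated when Regexp.new executes:
begin
  Regexp.new("abc(")  # unbalanced parenthesis
rescue RegexpError => e
  puts "only caught at runtime: #{e.class}"
end

# The literal form /abc(/ would instead be a syntax error at parse
# time, so a program containing it would never run at all.
```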

View templates (think ERB, Liquid, Handlebars, etc) can also be parsed and optimised at compile time using macros - Phoenix does this so that when running the application all that needs to be done is string concatenation, no parsing needed.
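Ruby’s standard library demonstrates a half-way point here: ERB parses a template once and compiles it into plain Ruby source (which you can inspect via ERB#src), though this still happens at runtime rather than at compile time:

```ruby
require "erb"

template = ERB.new("Hello, <%= name %>!")
# ERB#src shows the generated Ruby code - essentially just string
# concatenation onto a buffer, with the parsing already done.
puts template.src

name = "world"
puts template.result(binding)  # => "Hello, world!"
```

A macro system would let that same compilation step happen before the program ever runs.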

So where does that leave us with Swift-based web development? It doesn’t seem any worse off than Go in terms of the ability to dynamically create methods, and Go appears to be used for web development a wee bit. The other option is code generation - but that’s always going to be a second-class way of doing this, as it relies on external tools that have to parse the rest of the codebase to get the advantages you would get from a macro system.

There might of course come a time when Swift gets a macro system, which would create a huge opportunity for new syntax and more concise, expressive code. However, given the complexity of Swift and the design decisions made so far, I would not hold my breath.

  1. lol no generics. 


Needless complexity: Generalising a Scheme for Aikido Training

It is perhaps a little-known fact that I have practiced Aikido for about 13 years now.

I’m bad at writing introductions, so let’s jump straight to the problem. When training with more than one other person, you have to have some way of deciding who attacks who - you can’t just alternate. Normally when practicing tachi waza the most senior student goes first and does the technique four times to the uke before the roles are swapped. So what to do when someone else joins your pair?

You could just make a directed triangle - person A is attacked by B, B by C, and C by A, before the cycle repeats. This is easy to describe and can easily be extended to any number of people, but person A will never be attacked by person C - they miss out on any feedback that person C may have for them. You want a method that allows everyone to train with everyone else, while still letting each member do the technique enough times to improve.

At the moment, this is the recommended way of training:

A - B
A - C
B - C
B - A
C - A
C - B

Now this is fine - apart from the fact that it only applies to exactly three people. The programmer in me wants a method that applies to any number of people. How about:

func train(members: [Person]) {
  // Most senior member takes the first turn as nage
  // (assuming a higher rank value means more senior)
  let byRank = members.sorted { $0.rank > $1.rank }
  for nage in byRank {
    for uke in byRank where uke != nage {
      uke.attack(nage)
    }
  }
}

Basically, starting from the highest-ranked member, each member has a turn as nage, being attacked by every other member in order of rank. This is how training in a pair works, and it works just the same way if the whole class is training together.

This gets slightly more confusing when you are doing weapons practice - there is a less clear distinction between the uke and the nage; the uke is often not thrown by the nage, and the uke still has to learn the attack, as it is not just a single strike or grab.

It’s common with weapons practice for a pair to train with one role, then swap and train before moving on to the next member of the group. This reduces the distraction of changing partners, letting you focus on the technique. This can be generalised in a similar way - this time each member of the group in descending rank order is the ‘key’ member, who practices both sides of the technique with each other member, then the ‘key’ member is changed to the next member in rank.

func train(members: [Person]) {
  let byRank = members.sorted { $0.rank > $1.rank }
  for key in byRank {
    // The 'key' member practices both roles with each other member
    for other in byRank where other != key {
      other.attack(key)
      key.attack(other)
    }
  }
}

Basically I think too much about the efficiency of how I am training, rather than focussing on the training itself. I guess that’s what happens when you spend all day learning about Software Engineering and stuff.