
Impracticalities of iOS Photo Management for Photographers

In the last six months or so I have become an enthusiastic hobbyist photographer. This started with paying more attention to how I used the camera on my phone, then quickly progressed into buying a Sony a6000 mirrorless camera. Before I knew it I had two tripods, a camera bag, an ND filter, and was starting to browse through lenses on Amazon.

These photos usually end up on my Instagram. Here’s one from the other day:

[Photo: The Sydney Harbour Bridge at sunset]

All the photos that I take on my camera are edited on my iPad using Pixelmator Photo. It’s an amazingly powerful piece of software - especially considering that it’s less than $10.

Importing and editing photos from a camera is easy on iOS. You just need the appropriate dongle - in this case a USB-C SD card reader - and to tap “Import” in the Photos app. It removes duplicates and can even clear the photos off the card once the import is complete. What more could you want?

Much, much more.

The problem first started when a friend of mine suggested that I start shooting raw photos to make use of the extra dynamic range. If you’re not aware, raw photos contain very little processing from the camera and are not compressed - unlike jpegs. On the a6000 a typical jpeg is less than 10MB, whereas a raw photo is 24MB. This meant that whenever I took my camera anywhere, I could easily produce 20GB of photos instead of the more reasonable 1-2GB.

There are two big problems here:

  1. my iPad only has 250GB of storage
  2. my home internet is too slow to upload this many photos to Google Photos1

My first thought was to just dump my unused photos onto an external drive (now that iOS 13 adds support for them natively) and delete them from my iPad. This works OK for video - you have a handful of files, each of which are a few gigabytes. Background processing isn’t great in iOS, but I can deal with having to keep the app open in the foreground while the copy is happening.

This is not the case with photos. You instead have hundreds and hundreds of small files, so if the copy fails you have no idea how far it got or what still needs copying. That being said, if you were shooting jpegs, you could do something like:

  1. import into the Photos app
  2. swap the SD card out for an external drive
  3. select all the photos you just imported (hoping that you can just select in a date range)
  4. drag the photos into a folder on the external drive
  5. babysit the Files app while it copies

Less than ideal, but workable. Ok, now why won’t that work for raw files?

iOS kinda only pretends that it supports raw files. If you import them into the Photos app there is no indication of the file type, but third party apps can access the raw data. What’s frustrating is that when you export the photos, only the jpeg is exported. So if you try and drag raw photos onto an external drive, you lose the raw data.

Basically, Photos on iOS is a one-way street for raw images. They’ll be imported, but can’t be exported anywhere without silent data loss.

Alright so using the Photos app is a bit of a loss. Can we just keep all the images as files instead? The short answer is “not very well”.

Importing from an SD card to the Files app can only be done as a copy - no duplicate detection, no progress indication, etc. You just have to sit there hoping that nothing is being lost, and if it fails in the middle you probably just have to start again.

Once the photos are in the Files app, you have to do the same dance again to export them to your external drive. Cross your fingers and hope that nothing gets lost.

Also remember that once the photos have been exported, the Files app will be unable to show the contents of the folder - it’ll just sit on “Loading” presumably trying to read every file and generate hundreds of thumbnails. And there’s no way to unmount the drive to ensure that the data has been correctly flushed to disk, you just have to pull the cable out and hope for the best.

Thankfully Pixelmator Photo does support opening images from the Files app, so you can import them from there without much trouble. But there is no way to quickly delete photos that weren’t in focus or get rid of the bad shots out of a burst. (This isn’t great in the Photos app either, but at least the previewing is acceptable).

So you’re left with a bunch of files on your iPad that you have to manage yourself, and a backup of photos on your external drive that you can’t look at unless you read the whole lot back. Not good.

“Why don’t you just use a Mac” - the software that I know how to use is on iOS. My iPad is faster than my Mac2. The screen is better. I can use the Pencil to make fine adjustments.

“Why not use Lightroom and backup to Adobe CC” - I don’t want to have to pay an increasing recurring cost for something that is a hobby. Also I like using Pixelmator Photo (or Darkroom, or regular Pixelmator, or maybe Affinity Photo when I get around to learning how it works).

“Just get an iPad with more storage” - I’d still have the same problem, just in about a year.

This is the point in the blog post that I would like to be able to say that I have a neat and tidy solution. But I don’t. This is a constant frustration that stops me from being able to enjoy photography, because for every photo I take I know that I’m just making everything harder and harder to manage.

One solution could be to connect my external drive to my home server, and find an app that allows me to backup to that. This seems ridiculous as my iPad can write to the drive directly.

I think the only thing that can make this practical - outside of iOS getting much better at managing external drives - is to make an app that reads the full image data from Photos, and writes to a directory on an external drive. It could also keep a record of what it has exported, and allow for cleaning up any photos that didn’t get edited automatically. The workflow would look something like:

  1. attach SD card and import into Photos app
  2. attach external drive and use photo exporting app to copy new photos onto the drive
  3. edit/delete photos using any app that takes your fancy
  4. use photo export app to remove any photo that wasn’t edited or marked in some way (e.g. favourited) in one go

This would patch over the poor experience of copying to external drives, making iOS-only photography more practical for people who don’t want to pay for cloud storage for hundreds of mediocre shots.
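For what it’s worth, the core of step 2 doesn’t look impossible. Here’s a minimal PhotoKit sketch of “read the full image data from Photos and write it to a drive” - the function is made up for illustration, and it assumes photo library access is already authorised, that filtering by creation date is a good enough proxy for “the last import”, and that the raw file shows up as either the primary or the alternate photo resource. All of those assumptions would need testing against real imports.

import Photos

// Copy the original resources (including raw files) of recently imported
// assets to a folder on an external drive. A rough sketch, not a real app.
func exportOriginals(to destination: URL, importedAfter cutoff: Date) {
    let options = PHFetchOptions()
    // Assumption: creation date is close enough to "what I just imported".
    options.predicate = NSPredicate(format: "creationDate > %@", cutoff as NSDate)
    let assets = PHAsset.fetchAssets(with: .image, options: options)

    assets.enumerateObjects { asset, _, _ in
        for resource in PHAssetResource.assetResources(for: asset) {
            // .photo is the primary image; a raw paired with a jpeg typically
            // appears as .alternatePhoto - this varies by camera and import path.
            guard resource.type == .photo || resource.type == .alternatePhoto else { continue }
            let target = destination.appendingPathComponent(resource.originalFilename)
            PHAssetResourceManager.default().writeData(for: resource,
                                                       toFile: target,
                                                       options: nil) { error in
                if let error = error {
                    print("Failed to export \(resource.originalFilename): \(error)")
                }
            }
        }
    }
}

The record-keeping and “delete everything I didn’t favourite” part would sit on top of this, presumably keyed off each asset’s localIdentifier.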

Here’s hoping that iOS will continue to improve and become more flexible for different workflows.

If you’re an iOS developer who wants to make an app with probably the most limited appeal ever, get in touch.

  1. I will sometimes take my iPad to work and let it upload on the much better connection there, but this is inconvenient. 

  2. Probably. 

Compiling for Shortcuts

In this video I show a program written in Cub being compiled and run inside of the iOS Shortcuts app. The program is a simple recursive factorial function, and I can use it to calculate the factorial of 5 to be 120. Super useful stuff.

The idea was first floated by @a2c2b - who knows that I am easily nerd-sniped by programming language problems. It stuck even after I pointed out that Shortcuts doesn’t support functions (or any kind of jump other than conditionals), along with a whole host of other features that you’d expect from even the most basic set of machine instructions. The bar had been set.

Initially I started writing a parser for a whole new language, but quickly discarded that idea because writing parsers takes time and patience. Why not just steal someone else’s work? I’d seen Louis D’hauwe tinkering on Cub, a programming language that he wrote for use in OpenTerm - all written entirely in Swift. After a quick look into the code I realised that it would be simple to use the lexer and parser from Cub and ignore the actual runtime, leaving me to do just the code generation. All I had to do was traverse the syntax tree and generate a Shortcuts file. In terms of code this is fairly straightforward - just add an extension to each AST node that generates the corresponding code.
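To make that concrete, here’s roughly the shape of that code generation. This is an illustrative sketch only: the node types below are stand-ins rather than Cub’s real AST classes, and the action identifiers and parameter keys are recalled from the reverse-engineered plist format, so treat them as approximate.

// Illustrative only - not Cub's real node types, and the identifiers and
// parameter keys here are approximate.

/// One Shortcuts action: an identifier plus its parameters, ready to be
/// serialised into the WFWorkflowActions array of the plist.
typealias ShortcutAction = [String: Any]

protocol ShortcutsGenerating {
    /// Append the actions that implement this node to the output buffer.
    func generate(into actions: inout [ShortcutAction])
}

struct NumberNode { let value: Double }
struct BinaryOpNode { let op: String; let lhs: ShortcutsGenerating; let rhs: ShortcutsGenerating }

extension NumberNode: ShortcutsGenerating {
    func generate(into actions: inout [ShortcutAction]) {
        actions.append([
            "WFWorkflowActionIdentifier": "is.workflow.actions.number",
            "WFWorkflowActionParameters": ["WFNumberActionNumber": value]
        ])
    }
}

extension BinaryOpNode: ShortcutsGenerating {
    func generate(into actions: inout [ShortcutAction]) {
        // Children first, then the operation - a post-order walk of the tree.
        lhs.generate(into: &actions)
        rhs.generate(into: &actions)
        // Simplified: the real math action also needs its second operand
        // wired up as a variable reference to the previous action's output.
        actions.append([
            "WFWorkflowActionIdentifier": "is.workflow.actions.math",
            "WFWorkflowActionParameters": ["WFMathOperation": op]
        ])
    }
}

// Generating 2 + 3 produces three actions in the plist's action array.
var actions: [ShortcutAction] = []
BinaryOpNode(op: "+", lhs: NumberNode(value: 2), rhs: NumberNode(value: 3))
    .generate(into: &actions)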

Over a few evenings I pulled together the basic functionality - after reverse-engineering the Shortcuts plist format by making a lot of shortcuts and AirDropping them to my Mac (Josh Farrant ended up doing a similar thing for Shortcuts.fun, and he’s written about the internals a bit on Medium).

The main problem was how to invent functions in an environment that has no concept of them. Andrew suggested making each function a separate shortcut - and just having a consistent naming convention that includes the function name somewhere - which would work but would make installing them a complete pain. However if you assume you know the name of the current shortcut, you can put every function in one shortcut and just have it call itself with an argument that tells it which function to run. An incredibly hacky and slow solution (as you can see in the video) but it works - even for functions with multiple arguments!
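In ordinary Swift - purely to illustrate the idea, with made-up names rather than anything the compiler actually emits - the trick looks like this: one entry point (the single shortcut) dispatches on an argument saying which “function” to run, and recursion becomes the entry point invoking itself with new arguments.

// The whole program lives in one entry point; the "function" argument says
// which branch to run, and recursion is the entry point calling itself.
func runShortcut(_ input: [String: Any]) -> Int {
    let function = input["function"] as? String ?? "main"
    let args = input["args"] as? [Int] ?? []

    switch function {
    case "factorial":
        let n = args[0]
        if n <= 1 { return 1 }
        // In Shortcuts this is a "Run Shortcut" action pointed at this same shortcut.
        return n * runShortcut(["function": "factorial", "args": [n - 1]])
    default:
        // The "main" body of the program.
        return runShortcut(["function": "factorial", "args": [5]])
    }
}

print(runShortcut([:]))   // 120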

A lot of debugging and crashing Shortcuts later, I had a working compiler that could do mathematical operations, conditionals, loops, and function calls - all being turned into drag-and-droppable blocks in the Shortcuts app. Like using CAD to design your Duplo house.

The main shortcoming with this hack is that every action has a non-obvious (and non-documented) identifier and argument scheme that you have to reverse engineer for every action. If this was going to be a general-purpose solution, you’d have to deconstruct all the options for every action and map this to an equivalent API in Cub.

If you’re intrigued you can run the compiler yourself (be warned; it is janky). All you need is Swift 4.2 and the code from GitHub.

Learning Software Engineering

It’s an often-repeated meme that school doesn’t teach you the things you need to know to get by in the working world. “I don’t know how to do taxes but at least I can solve differential equations, har har” is usually how they go. There is a sub-meme in Software Engineering that the things you learn at university are generally not applicable to what is needed in the industry - most jobs don’t require you to be able to implement an in-place quicksort or balance a binary tree.

I generally agree with this sentiment - I now have a job and am yet to implement any kind of sorting algorithm or tree structure as part of my day-to-day responsibilities. Although I think the main problem here is comparing the Computer Science curriculum with the needs of the Software Engineering industry - like if you were to expect physics students to graduate and work as mechanical engineers1.

The thing that I find frustrating is the disdain towards group projects - the complaints about cramming all the work in at the last minute, group members not pulling their weight, and so on. No one likes group projects, and can you blame them? They’re nothing like working in the real world. This blog post by James Hood gives a good comparison of what it’s like doing work at school vs in industry. It’s not aimed specifically at group projects, but this summary shows how James perceives them:

“Sure, we may get the occasional group project, however that’s generally the exception. In these group projects, it’s not uncommon for one or two individuals to end up doing most of the work while the rest of the team coasts.”

The core difference is that in industry, deadlines usually aren’t concrete - and when they are and you’re not on track to meet them, a good team will work out how to pull together and reach them, or change the scope in collaboration with the client or project owner if the deadline can’t be met. And unless you work in the games industry, where the death march is accepted project management practice, you should have some kind of work/life balance where you are not spending every moment of your day pushing to complete the project.

What I think is the problem with group projects is that they need to go big or not bother - especially projects done in smaller groups or in pairs. If you give a small project to a pair of students, it’s quite likely that one of them can just get on a roll and churn through the majority of the project. I have done this and it’s not good for either party.

This is exacerbated when the project doesn’t have N obvious parts that can be allocated one to each team member. If the parts have to interact then the team member that does theirs first or makes the most used part will become the go-to person in working out how all the other parts should integrate - doing more work than anyone else.

This basically distils down to “you research it and I’ll write the report” - except the person that does the research then has to write up what they found anyway, just to transfer the knowledge. Replace “research” and “write report” with whichever components make up the project.

What makes a worthwhile group project for software engineers? In the third year of my degree (which was four years in total) we had a group project that lasted the whole year. I think almost everyone in the class learnt a lot about working successfully in a team, as well as learning software engineering skills that you don’t pick up working by yourself.

The course outline looks like this:

The Software Engineering group project gives students in-depth experience in developing software applications in groups. Participants work in groups to develop a complex real application. At the end of this course you will have practiced the skills required to be a Software Engineer in the real world, including gaining the required skills to be able to develop complex applications, dealing with vague (and often conflicting) customer requirements, working under pressure and being a valuable member of a software development team.

There are a lot of small things that make this course work, but I’m just going to mention a few of the most significant:

Firstly, the project is massive - too big for one person to do, and too big to cram in at the start of the year. It is always big enough that you can’t see the whole scope and how things fit together until you’ve finished the initial parts, which forces the team to work sustainably and discourages leaving everything to the last minute or trying to get it all out of the way early. This should guide the team into a consistent rhythm.

The team should reflect the size of an industry software engineering team - about six to ten people. I think seven is a good size - it is small enough that everyone can keep up-to-date with the rest of the team, but big enough to produce a sizeable amount of work throughout the year.

Instead of having a single deadline at the end of the project where the team dumps a load of work and promptly forgets about it, the team should at least be getting feedback on how their development process is improving. Ideally, teachers would be able to do significant analysis of how the team is working - incorporating data gathered from the codebase and the development process, as well as subjective feedback from students.

This analysis is a massive amount of work, and is hard to get right - my final year project was to improve the analysis of code. I didn’t improve it an awful lot but learnt a lot about syntax trees, Gumtree and Git.

The team should be marked on their ability to work as a team, and improve their process over the year - as well as the quality of the actual project work that they complete. This gives them the ability to improve their development process - perhaps at the expense of number of features - but hopefully becoming more sustainable and improving other “soft skills”.

This kind of work also has the added benefit of teaching students how to deal with typical industry tools, like getting into a Git merge hell, opening a bunch of bugs in an issue tracker and ignoring them for months, dutifully splitting all their work into stories and tasks and never updating them as the work gets done, receiving panicked Slack messages late at night then working out whether you can get away with ignoring them, and realising that time tracking is a punishment that no one deserves before fudging their timesheet at the end of the day. Proper valuable skills that software engineers use every day.

Of course this means that if you’re planning a project that is only a few weeks long and doesn’t make up much of the marks in the course, it probably shouldn’t be given out as a group project.

Just like computer science is more than just writing programs - you learn about algorithms, data structures, complexity analysis, etc - software engineering is not just computer science. Learning to be a software engineer also includes the ever-dreaded soft skills and learning how to actually put together a piece of software that will be used by other people. Just as computer science is far more than learning how to program, software engineering must be far more than just doing computer science with other people.

  1. This may be a bad analogy, I have not studied physics since my first year of university and don’t really know what mechanical engineers do day-to-day. But I’m sure you get the gist. 

iOS Should Not Have a Command Line

On the latest episode of Upgrade 1 Jason and Myke briefly discuss the idea of Apple adding a command line to iOS. They quickly dismiss it because of the constraints of sandboxing and security - instead suggesting that the command line will be a feature that keeps people at the Mac. I find the idea of a new command line really interesting, so I wanted to expand on some more reasons why there shouldn’t be a command line on iOS - unless it comes with massive changes to how you think about a command line.

I like to think I know the terminal fairly well; I’m 160 commits into customising how I use it - whenever I’m at a computer (whether that’s my Mac or my iPad) I have a terminal open.

The advantage of a command line is not the fact that it is textual. Being able to enter rm * is not much faster than selecting everything and hitting cmd-delete in the Finder. The real advantage is that everything uses the exact same set of easily understandable interface concepts, and they can all interact with each other.

All command-line programs have an input stream, an output stream, and an error stream. These can be displayed anywhere, hidden, parsed, redirected, or reinterpreted trivially. The concepts in the command line are the core concepts of the operating system, and everything respects these concepts. They are building blocks that you can put together to do your work.
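To make that concrete: even from Swift, another program’s streams are just data that can be redirected and reinterpreted. A small Foundation sketch - the programmatic equivalent of ls -l | wc -l:

import Foundation

// Run `ls -l`, capture its output stream, and reinterpret it.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/bin/ls")
process.arguments = ["-l"]

let stdout = Pipe()
process.standardOutput = stdout                 // redirect the output stream
process.standardError = FileHandle.nullDevice   // hide the error stream

try process.run()
process.waitUntilExit()

let output = String(decoding: stdout.fileHandleForReading.readDataToEndOfFile(),
                    as: UTF8.self)
print("\(output.split(separator: "\n").count) lines of output")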

In macOS, the building blocks are windows 2 - everything you interact with is contained in a window, and you can interact and manipulate these in a consistent and predictable way. iOS has a less predictable model - apps are the only unit that a user can interact with, which in some situations - particularly dealing with documents - is too coarse and gets in the way.

The other advantage of the command line is that interacting with it (at least at the basic level, before you get into ncurses) is trivial, so tools like compilers or video encoders like ffmpeg don’t have to implement a full interface - they concentrate on their own specific functionality.

Previously on iOS, if you wanted to implement something like a compiler, you’d have to implement the whole IDE or editor as well as the actual compiler - which is a lot of extra work. People are also quite picky about their editors [citation needed]. iOS 11 improved this significantly with the Files app - your compiler could just read the source files, written in someone’s editor of choice, from a file provider.

For example, Cub can only really be used in OpenTerm - and there’s no way to add another language to it. OpenTerm is also limited in that it can’t accept commands from other apps - the commands must be baked in, entirely hard-coded.

It would be possible to create a sandboxed shell that ensures commands can only see the files they are entitled to access. You would most likely have to throw out almost all existing scripts from macOS, and the semantics of the shell language would change - popular shells (zsh, Bash, Fish, etc) don’t have strong types, so you can’t tell whether a parameter is a file or just looks like one. Maybe it’s part of a web URL? (And does this command have permission to access that URL?) Sandboxing an existing shell would either end up limited, or frustrating and unproductive to use.

This all ignores the fact that iOS apps are not built to be used from the command line - they don’t expect command line arguments, they don’t print a result - they can’t share anything about themselves in the way that a command line expects. Even macOS apps don’t really do this.

For me to support a command line on iOS, I would have to see significant changes to how the core of the operating system behaves. The command line needs to be able to tell apps to do things on behalf of the user - when it is allowed - and receive results back from those apps.

iOS can already do this: Workflow (aka Shortcuts) can chain actions from different apps together. Siri Shortcuts allow apps to expose actions that can be consumed by the system and used as part of a workflow. They don’t allow for passing input or output (as far as I know), which makes them less versatile than command line programs.

The other aspect of the terminal that is often overlooked is the features that sit outside of the fairly simple concept of a process with input and output streams - VT100 emulation and the use of ANSI escape codes. These allow any process to control the interface far more than just writing a sequence of characters onto the screen. My terminal would not be complete without tmux. It allows me to easily manage applications in a uniform environment, without having to reach for features higher up the “stack” like windows which are not as well suited to managing command line applications.

There is however - as is tradition when talking about iOS releases - always next year. Shortcuts could gain the ability to accept input from other apps and pass their output to other shortcuts. iOS could get UI improvements that make it easier to juggle parts of applications like I can with tmux.

What I don’t want to see is a literal port of the command line to iOS, because that would be a significant step back in imagining a new way of handling inter-app communication and would most likely be so constrained that it wouldn’t be able to fill the use cases of today’s terminals. A bad terminal on iOS would only serve to further the argument that doing work on iOS is a dead end.

But hey, I’m just some guy that uses tools that are decades older than him.

  1. A great show, definitely listen to it if you’re into this kind of thing. 

  2. Also tabs, I suppose. 

Writing Macros in Crystal

The existing documentation for macros in Crystal leaves a wee bit to be desired, especially if you want to do anything that’s a bit off-the-rails. So here we go, some top tips for macros in Crystal.

There are two different types of interpolation - {{ foo }} will put the result of foo into the code, whereas {% foo %} will evaluate it and ignore the result. Much like <%= foo %> and <% foo %> in embedded Ruby. So if you want to print something to debug the macro, then use {%. This is obvious if you notice that conditionals and loops in macros always use {% because they shouldn’t actually output anything themselves.

Something that I didn’t realise initially was that you can assign variables in non-expanding interpolation (the {% kind). This makes your code a lot tidier.

When writing a macro it is super useful to be able to see the generated code - to do this you can use {% debug %}! It will output the current “buffer” for the macro, so you can just put it at the bottom of your macro definition to see what is being generated when your code compiles.

@type is definitely not given the attention it needs. It is essential for writing macros that change aspects of the current class or struct. For example:

macro auto_to_string
  def to_s(io)
    io << {{ @type.stringify }}
  end
end

.stringify basically returns the syntax tree wrapped in quotes, so 44.stringify gives "44" at compile time.

When we call this method in some class, a new method will be generated:

class SomeNeatClass
  auto_to_string # Calling the macro will expand the code here

  # This is what will be generated:
  def to_s(io)
    io << "SomeNeatClass"
  end
end

The class name is turned into a string at compile time. @type will be some kind of TypeNode - checking what kind it is using .is_a? and the methods in the imaginary macros module lets you do different things based on what it is - like if it has generic types, what its superclasses are, etc. Although do remember that this information is limited to what is known by the compiler when the macro is invoked - so if you use @type.methods in a macro that is expanded before any methods are defined, there won’t be any there:

macro print_instance_methods
  {% puts @type.methods.map &.name %}
end

class Foo
  print_instance_methods

  def generate_random_number
    4
  end

  print_instance_methods
end
# This will print:
# []
# [generate_random_number]

Depending on what you want to do, you could either move the code into a macro method - they get resolved when the first bit of code that uses them is compiled - or use the method_added and finished macro hooks.

The difficult thing about writing macros (especially if someone else has to use them) is that they do unexpected things when they don’t get quite the input you expect. The error messages are often incomprehensible - just what you’d expect from an interpreted templating language that is used to generate code for the very language it is built on top of.

Pretty much everything you run into in macros is some kind of *Literal class. Arrays are ArrayLiteral, booleans are BoolLiteral, nil is NilLiteral, etc.

Scrutinising a Scalable Programming Language

In my opinion, most developers should know three different programming languages, three different tools that can be used for most of the projects you come across. These should include a scripting language - for automating a task or doing something quickly; an application language - for writing user-facing or client applications, which have to be run on systems that you don’t control; and an “infrastructure” language for building a system that has many components and runs on your own platform.

I don’t really do any systems programming - which I see as basically being dominated by C anyway - so I would lump it in with your application language, as the situation is very similar. If this is not the case then the target hardware and tooling probably dictates using C anyway.

A scripting language is usually dynamically typed and interpreted. It starts up fast and lets you cut as many corners as needed until the problem is solved. It should be good at dealing with files, text, and simple HTTP requests. For this I will usually use Ruby, as it is very quick to write and has useful shortcuts for common things scripts do - like running shell commands. On occasion I will use Python, but I can’t think of any reason that I would use it over Ruby - except for Ruby not being installed.

The application language I use most often is Java (or Kotlin, more recently). This is partly due to having to use it for university, but also if you want to write something that can be run on pretty much anyone’s computer with minimal effort, the JVM is probably your best bet. It has its problems, of course - but it’s easier to get someone to install a JRE and run a fat JAR than it would be to ensure they’re running the correct version of Python and the dependencies are installed.

The “infrastructure” language is probably closest to what I find most interesting - I first thought of this after finishing the hackathon that resulted in Herbert. Herbert is a Slackbot that does some timesheeting. It includes the proper Slack app integration hooks and stuff, so it needs to be able to join many different teams at once, and dynamically join teams as they add the integration. It’s written entirely in Ruby, because it was easy to get a working prototype (see above) - but because Ruby doesn’t have real threads, running a load of things at the same time is a bit orthogonal to its strengths. In subsequent hackathons, Logan and I chose to use Elixir, which is better suited to piecing a selection of bits together and having them communicate (thanks, BEAM!) so it would probably be my choice here.


I expect that most people - like me - would choose three fairly different languages. What if we had a language that could be used easily for all of these situations? What would it take, and what (in my opinion) is stopping existing languages from being used everywhere?

Most of these languages can be used in all situations, but there are some things that get in the way of them being a great choice. If you are an expert in one particular language then the familiarity will likely win out over the suitability of another.

Go

Go excels at building highly concurrent and performant applications. What makes it even more useful is the ability to easily cross compile to pretty much every major platform. Not having exceptions makes it easy to cut corners in error handling - which makes building an initial prototype or experiment easier. For me the verbose syntax and lack of (seemingly simple) language features pushes me away from Go - but I’ll happily reap the benefits of others using it by making use of their applications.

The other aspect of Go that I find frustrating is the enforced package structure and (lack of) dependency management. There is a fair bit of work to go from just using go run on a single file to setting up a proper Go environment - especially if you have an existing way of organising your projects.

Java / Other JVM Languages

The main pitfall of using the JVM for quick tasks is that it usually takes longer for the JVM to start than it does for your program to run. In the time that it takes to start a Clojure REPL, I can typically open a few other terminals and launch a Ruby, Python, and Elixir REPL.

Plain Java also lacks a build system - who in their right mind would use javac to compile multiple files? No simple script should start by writing a pom.xml file. Kotlin is a lot closer to being used for scripting, but still has the problem of slow JVM startup.

Python & Ruby

Both are great scripting languages, and often get paired with native extensions that increase performance without you having to step down to a lower-level language yourself. However, I find the native extensions and the systems that manage the running of your code (Passenger, etc) hard to understand and deploy. I like the idea of being able to run the application completely in production mode locally, which is often not really the case for these types of systems. For example, Rails is usually run in production through a Rack-based server that lets your application handle concurrent requests easily, but in development it just uses a simple single-threaded server.

This makes deployment more difficult, and Python’s versioning fiasco doesn’t help. I once wrote a simple script to run on my Raspberry Pi, and because I couldn’t get the correct version of pip installed to load the dependencies onto the Pi, I just reimplemented the same script in Go and cross-compiled it.

JavaScript & Node

I don’t have a lot of experience with JavaScript, but it is one of the few languages that is probably being used in almost every scenario. Quick scripts using Node, applications either in Electron or even just as a web page, and potentially back to Node for your infrastructure. However the ecosystem and build tools are not really to my liking (to say the least), so I will rarely use Node for anything unless I really have to.

Swift

Swift has the potential to be a great option for solving all kinds of problems, but the focus on iOS and macOS development hinders it in the areas that I am interested in - backend development on Linux. This fragments libraries and frameworks: some depend on being built by Xcode and use Foundation or Cocoa, others use the Swift Package Manager and third-party implementations of what you would expect from a standard library. For example, I don’t know how to open a file in Swift. I know how to do it using Foundation, but not on Linux - or maybe that’s changed in the latest version?

String handling makes Swift awkward to use for scripting - indexing can only be done using a String.Index rather than an Int. This makes sense - indexing a UTF-8 string is an O(n) operation, as characters can be pretty much any number of bytes. It does mean that you end up with a spaghetti of string.startIndex and string.index(after: idx).
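For example, just getting the fourth character of a string means walking an index to it first:

let s = "scripting"
let idx = s.index(s.startIndex, offsetBy: 3)
print(s[idx])                 // "i"
print(s[s.startIndex..<idx])  // slicing "scr" off the front is similarly verbose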

Elixir

I find Elixir great for making systems with many different components - which isn’t a surprise given the concurrent focus of Erlang and the BEAM VM. The tooling for Elixir is also great, which makes it a pleasure to develop with. Creating a new project, pulling in some dependencies, building a system, and deploying it is well supported for a new language.

The tooling is necessary - working with multiple files isn’t really possible outside of a Mix (the Elixir build system) project, as Mix generates a load of bits and bobs that allow BEAM to put all the code together.

I would never imagine building something designed to run on a computer that’s not my own with Elixir - having someone install Erlang or bundling all of it into a platform-specific binary seems a bit flaky. Also the type of code you write for a client-side application usually isn’t suited for Elixir.

Crystal

Crystal is an interesting new language - it’s still in alpha. What makes it interesting is it has global type inference; instead of just inferring the type for variable assignments based on the return type of functions, it takes into account the usage of every function to determine what types the arguments and result are.

What this means is that Crystal code can be written almost like a dynamically typed programming language, but any type errors will get caught at compile time - basically eliminating typos (ha! type-os!). Obviously this makes Crystal a great language for quick development.

Static typing should make Crystal quite well suited to building larger systems. Of course this is dependent on how it continues to evolve and its ecosystem continues expanding and maturing.


So what would make an ideal scalable language? Something that can be used in any scenario, to build any kind of application. This is a stretch - especially if you want it to be any good at any one of those problems. For me this basically boils down to readable, concise syntax that allows me to write code that fits in with the standard library; often this includes some solid compile-time checks, catching problems before the code is run, but without adding too much extra overhead to writing the code.

The tools in the language need to make it easy to go from a single script, to a few files, to a full on project with dependencies, tests, etc. Of course as soon as dependencies are involved you need a good ecosystem of libraries and frameworks ready to be used on all the platforms you need them on.

On balance the language should probably compile to native code, but what would be neat is the option to run in a lightweight VM that can be bundled into a native binary, so that the application can be deployed without installing a runtime environment. Developing in Elixir is very easy because you can recompile and interact with the application from the REPL (this is obviously not new to LISP developers - I have only used Clojure, and find that IEx in Elixir is far easier to use than the Leiningen REPL).

Due to my habit of jumping between languages, I don’t imagine I will settle on one that I would use for everything any time soon. Although I always keep an eye on Swift to see how that’s progressing - just as next year is going to be the year of desktop Linux, next year will be the year of server-side Swift. Always next year.

A Downgrade

I have decided to downgrade my phone. I don’t feel the need for a large HD screen, a massive battery, and 3GB of RAM. The extra resolution of the 13MP camera is pretty much wasted on me. I have used this phone - the OnePlus One - for the last two and a half years and have decided that when I get a new phone, I no longer need something of this calibre.

When the One came out, it boasted top-of-the-line specs - an incredible 401ppi 5.5” screen, massive 3100 mAh battery, a quad-core 2.4GHz processor, and a 13MP camera.

Almost all new phones in the last few years put the specs of the One to shame - boasting 8-core CPUs, over 6GB of RAM, screen densities upwards of 550ppi, and 16MP cameras.

So, what to do?

Thankfully, I found a device that suits my needs a bit better. The screen is a respectable 326ppi, the battery just 1800 mAh, only 2GB of RAM, and a 12MP camera.

This device does incur the cost of running a less common operating system that is proprietary to this particular manufacturer, rather than using Android, the de-facto industry standard. I have used Android for a long time, so this did weigh heavily in my decision to move, but in the end I decided that the more modest, lower-specced device was the right choice.

The phone arrived last week and even though the specs are behind even my One from 2014, thankfully I haven’t noticed any lack of performance or responsiveness in day-to-day use. Even with its battery that is less than two thirds the size of the One’s, I still haven’t dropped below 30% at the end of the day.

So, long story short - I got an iPhone 8 and it’s amazing.

Top Tips for iOS 11

Some top tips from Federico Viticci’s iOS 11 review:

  1. Press CMD-Option-D to show the dock. No more awkwardly trying to reach for the edge of the screen when your keyboard is in the way.
  2. Grab the nubbin at the top of an app to re-arrange the windows while in split view.

And some things that I wish were top tips but aren’t:

  1. Drag and drop apps in the multitasking view to create a split view.
  2. Use CMD & arrow keys to move apps from spotlight into split view.

And some things that aren’t mentioned that I would like to see:

  1. Some way to enter 50/50 split view immediately, without going into 70/30 first.
  2. Open-in-place support in all the apps.

Pug: An Abomination of Shell Scripting

Pug started out a few months ago as a slightly silly idea of writing my own plugin manager for Vim and ZSH plugins. There is no shortage of ways of managing Vim plugins - Pathogen or Vundle seem to be the most common. For ZSH everyone swears by Oh My Zsh which includes every bell and whistle you could imagine.

However, each of these only works for one tool. What if (for some reason) I wanted a tmux plugin? I’d have to install some tmux package manager - if there is one. Pug is the one tool to rule all my package managing needs.

Pug can be used to manage packages for any utility - out of the box it has installers for Vim and ZSH, but other installers can be added by writing a simple shell script. I’ll probably write some more builtin ones myself.

To get started with Pug, head over to the Pug repo. My favourite ZSH plugins - syntax highlighting and auto suggestions - can be installed with Pug:

Install Pug:

curl https://raw.githubusercontent.com/willhbr/pug/master/install.sh | bash

Create a deps.pug file somewhere:

vim deps.pug

Add the dependencies (zsh-autosuggestions and zsh-syntax-highlighting) to deps.pug:

#!/usr/local/bin/pug load

zsh github: zsh-users/zsh-autosuggestions
zsh github: zsh-users/zsh-syntax-highlighting

Load the dependencies:

pug load deps.pug

You’ll be prompted to add this to your .zshrc file:

source "$HOME/.pug/source/zsh/pug"

Done. No more submodules.

Learn Enhancer Reaches Version 2.0

When I first started Learn Enhancer at the start of my degree - sometime in early 2014 - it was just a single JavaScript file that would redirect to a certain URL if it found a certain element. Eventually I coughed up the $5 required to submit it to the Chrome Web Store, and it started its slow creep onto students’ browsers.

This initial version didn’t catch every case, and so just over a year later I grew frustrated enough to dig right into the HTTP requests and alter them to ensure that the file could be previewed - which I outlined on this blog. Since then I branched out into re-styling every page on Learn with gentler colours (with design supervision from Sarang) - which had the delightful side-effect of letting me see whenever someone has Learn Enhancer installed. It is nice to see my work being used by people that have no clue who I am.

One of the features that I wanted to include but never quite worked out was auto-login, so that I didn’t have to click through two buttons before accessing content on Learn. That never happened - until this morning, when I saw Patrick working on a script to do exactly that. Thankfully he was receptive to the idea of incorporating it into Learn Enhancer, and soon he had it working in the extension.

So here we go, Learn Enhancer has hit 2.0. Since this is my last year at UC, I have put Learn Enhancer on GitHub so that (in theory) current students can contribute fixes when Learn inevitably changes after I have graduated.

If you are a current UC student, download Learn Enhancer, you will almost definitely appreciate it.