Dvorak, two years after

In November 2016, I switched from QWERTY to Dvorak. Last year the change wasn’t noticeable yet, so I was hoping this year it would be.

This is the completely unscientific test I run: I visit TypeRacer and record 20 races – a number I’m going to settle on from now on. The texts aren’t the same from year to year; I just trust the random nature of TypeRacer to serve a varied mix: short, long, with tons of punctuation, with very few breaks, and so on.

It looks like I’ve already surpassed my previous baseline after two years of using Dvorak – both in speed and accuracy. I had been using QWERTY for about 15 years before switching, so that’s impressive. I guess I’m able to introduce bugs in my code faster!

On a more qualitative note, 2018 has been a year of consolidation. Unlike in 2017, I haven’t changed my input devices, and my keyboard configuration has remained the same – that has probably helped my muscle memory develop faster. I still don’t have a steady rhythm, and there are some characters I struggle to type. The accuracy results are more meaningful to me than the speed, as they speak to my long-term finger health, which was the main reason I was interested in giving Dvorak a try.

Function keys in ThinkPads

I usually work with an external ThinkPad keyboard, which matches my laptop’s configuration. Lately, though, I’ve been using my laptop’s keyboard more and more. At some point, Lenovo slimmed down the design of the 6th keyboard row and made the special functions (volume, brightness, etc.) the default behavior of the standard F1-F12 keys. This is very inconvenient if your work involves a lot of typing, as editors tend to offer handy shortcuts on the F1-F12 keys.

This is how you change the default configuration: FN + ESC.

On JavaScript modules

Next week I’m going to participate in a session about package managers for different languages – we’ll present pip (Python), Composer (PHP), and npm (JavaScript). While sharing ideas with my colleagues, I realized how many of the struggles of modern JavaScript tooling and workflows are rooted in the fact that the language didn’t have a built-in module system until recently, so I’ve decided to write down my thoughts as preparation for the talk.

A brief history of modularity

For a more in-depth explanation, please check this evolution of modularity in JavaScript – it has many details that are hard to find elsewhere.

JavaScript was created in 1995 without modules, and every <script> shared the same global scope. We learned to work with functions and namespacing to gain a minimal ability to divide our code into pieces, but as the importance of JavaScript grew, so did the pressure for better alternatives.
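For illustration, this is roughly what the namespace pattern looked like – myApp is a hypothetical name, but the structure was everywhere:

// A single global object acts as the namespace;
// an immediately-invoked function keeps the internals private.
var myApp = myApp || {};
myApp.greeter = ( function() {
    var greeting = 'HELLO WORLD'; // not visible outside this closure
    return {
        greet: function() {
            console.log( greeting );
        },
    };
} )();

myApp.greeter.greet(); // HELLO WORLD

Only one variable leaks into the global scope, but nothing prevents another <script> from overwriting it – the pattern was a convention, not a guarantee.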

On the server, Node.js became the de facto standard, and its module choice, CommonJS, became the format used by the incipient node package manager, npm. That was 2010, 15 years after JavaScript’s release. On the browser side, nobody could afford or was interested in a unified module system, so alternatives bloomed and the market became more fragmented. With time, the use of npm skyrocketed, and so did the importance of CommonJS, even for browsers.

How to make CommonJS available in the browser

It may be worth pausing to remember that at that point we still didn’t have a module system for browsers. What we had was simple enough to work with and hack. The core of the CommonJS API is quite simple and has two pieces:

  • require: a function that imports code living somewhere else.
  • module.exports: a variable that holds the code to be exported.

Let’s say that I have an input.js file in CommonJS format:

var constants = require( './constants' );
console.log( constants.HELLO + ' ' + constants.WORLD );

And the corresponding constants.js contains:

module.exports = {
    HELLO: 'HELLO',
    WORLD: 'WORLD',
};

I can’t add those files to the browser through the <script> tag and expect them to work: browsers don’t define require or module, so the code would throw a ReferenceError as soon as it runs.

How do we make it work? Well, something we can do is copy the modules into the same file, wrap each of them in a function (so their internal variables don’t collide), and expose the necessary keywords through the function arguments:

// Scope the first module
( function( require, module ) {
    var constants = require( './constants' );
    console.log( constants.HELLO + ' ' + constants.WORLD );
} );

// Scope the second module
( function( require, module ) {
    module.exports = {
        HELLO: 'HELLO',
        WORLD: 'WORLD',
    };
} );

This can be included in the browser! It won’t fail, but it will also do nothing.

OK, next step: let’s implement require. It’s a function that takes a module identifier and returns that module’s module.exports object. We can do that:

// Implement require
var modules = {};
var require = function( moduleId ) {
    var tmpModule = {};
    modules[ moduleId ]( require, tmpModule );
    return tmpModule.exports;
};

// Scope and register the first module
var input = function( require, module ) {
    var constants = require( './constants' );
    console.log( constants.HELLO + ' ' + constants.WORLD );
};
modules[ './input' ] = input;

// Scope and register the second module
var constants = function( require, module ) {
    module.exports = {
        HELLO: 'HELLO',
        WORLD: 'WORLD',
    };
};
modules[ './constants' ] = constants;

It looks a bit better, but it still does nothing, and we’ve ended up adding a lot of variables to the global scope.

Let’s fix this by scoping the code within an IIFE (so it doesn’t pollute the global scope) and executing the main module, the entry point of our program (./input in our example):

(function() {
  // Implement require
  var modules = {};
  var require = function( moduleId ) {
    var tmpModule = {};
    modules[ moduleId ]( require, tmpModule );
    return tmpModule.exports;
  };

  // Scope and register the first module
  var input = function( require, module ) {
    var constants = require( './constants' );
    console.log( constants.HELLO + ' ' + constants.WORLD );
  };
  modules[ './input' ] = input;

  // Scope and register the second module
  var constants = function( require, module ) {
    module.exports = {
      HELLO: 'HELLO',
      WORLD: 'WORLD',
    };
  };
  modules[ './constants' ] = constants;

  // Execute the main module
  var module = {};
  modules[ './input' ]( require, module );
})();

That’s it! We’ve transformed our initial CommonJS modules into something that’s executable in today’s browsers.

This exercise would need quite a bit of work to be production-ready, but the fundamental steps are there, and it’s not difficult to see how they could be automated by tools. This is mostly what Webpack does when it transpiles CommonJS to an IIFE. Rollup seems to be a bit more elegant, but its strategy is similar. If you’re curious, check the runnable code they generate.
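To give an idea of what’s missing, one piece every real bundler adds is a module cache, so each module body executes only once no matter how often it’s required – this also helps with circular dependencies. A minimal sketch on top of our require (the names mirror the example above, not any bundler’s actual internals):

// Cache each module's exports so repeated require() calls
// return the same object instead of re-executing the module.
var cache = {};
var require = function( moduleId ) {
    if ( cache[ moduleId ] ) {
        return cache[ moduleId ].exports;
    }
    var tmpModule = { exports: {} };
    // Register in the cache before executing, so circular
    // requires receive the partially-built exports instead of looping.
    cache[ moduleId ] = tmpModule;
    modules[ moduleId ]( require, tmpModule );
    return tmpModule.exports;
};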

The transition to ESModules

The success of npm and CommonJS taught the browser ecosystem a valuable lesson: a better workflow was possible. Some attempts were made to replicate the registry-plus-format formula but, eventually, npm plus CommonJS won.

Flash forward to 2015, and the ES6 standard introduces modules into the JavaScript language. Three years after that, browser adoption and Node.js support are still not universal or complete, but everybody agrees they will be. The whole ecosystem seems to be on board, and execution environments, tools, libraries, and authors are preparing for what that means.
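To give a taste of the syntax, this is how the earlier example would look as ESModules – import and export are now keywords of the language itself:

// constants.js
export const HELLO = 'HELLO';
export const WORLD = 'WORLD';

// input.js
import { HELLO, WORLD } from './constants.js';
console.log( HELLO + ' ' + WORLD );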

If the ecosystem does converge on them, npm will continue to be the central registry for distributing JavaScript for the foreseeable future, but CommonJS will no longer be the default module format. In this moment of transition, not everything is clear or necessarily better. The ashes of the module wars are still warm, but they won’t be forever.

JavaScript in use 2011-2017

According to the HTTP Archive, the top 1,000 websites download 5 times more JavaScript today than seven years ago – HTML grew 2x and CSS 3x. Combine that with the fact that the mobile web is more present than ever, and the result is that the main bottleneck for the websites we create and consume is the CPU.

Agile according to Basecamp

Running in Circles is Basecamp’s view of agile product management. They acknowledge the value of working in cycles, but add three pieces: having the time to focus, being able to modify the original plan, and tackling the core unknowns of a feature first.

The first two are enablers that management provides to the makers. The last is how makers make the most of those powers. Together, they form a process that is nicely captured by the uphill/downhill metaphor: uphill, you are discovering the unknowns and deciding what goes in; downhill, everything is clear and you are implementing it at warp factor 10.

Software architecture failing

Software architecture failing: tech writing is biased towards what the big players do, which usually doesn’t fit most other contexts – but who ever got fired for choosing IBM, right? Although I feel connected to this rant at an emotional level, I do think it’s necessary to elaborate and make a positive contribution: to help create and spread that alternate history of software development. How do you do that? Hat tip: Fran.

Touch typing in Dvorak

In November 2016 I had a free month between jobs. Apart from some rest, some reading, and general preparation for my new adventure, I still had quite a bit of free time to try new things and build good habits. It was while cleaning my office that I found a keyboard I had bought a couple of years back.

Its layout was a beautiful matrix – which is good for your fingers – and it came with Dvorak by default. So it struck me: how about improving my typing during the coming weeks?

As a programmer, I consider typing an essential skill. I had been doing it for more than 15 years in a learn-by-doing way, and I plan to keep typing for years to come. I thought it would be fun to spend a couple of hours a day training in touch typing and give Dvorak a second try. And so I did.

How it felt

Before I switched, I recorded about 15 typing sessions at TypeRacer, which logs typing speed (words per minute) and accuracy (% of characters typed correctly), using the QWERTY layout. I was at 67 wpm and about 95% accuracy at the time.

Progress was very humbling at the beginning; it felt like learning to walk again, and I swear that, sometimes, I could even hear my brain circuits being reconfigured! After a few weeks, though, I was at 40 wpm and, by the end of the month, at 50 wpm. I stopped measuring myself by then: as I started working, I had a lot of typing to do anyway.

During the first months, real-time communication – chat, Slack – was the only time I struggled and felt like perhaps the switch wasn’t a good idea. I don’t know what people thought of me, but my writing at the time was typing-bound – I was certainly a very slow touch typist by my own standards. But time passed and I improved.

Spáñish Dvorak and symbols

Throughout the process I changed my setup quite a bit:

  1. I started by using the Programmer Dvorak layout with a TypeMatrix keyboard.
  2. After a few months, I switched back to my good old ThinkPad keyboard, because having to use a mouse again after years without one was painful.
  3. A few months later, I switched to the Dvorak international layout, because Programmer Dvorak didn’t quite suit me.
  4. Then, I tweaked the common symbols I use for programming so they were more ergonomic for my daily tasks.
  5. Although the bulk of my typing is in English, I still need to write decent Spáñish, which basically means using tildes on vowels and ñ, so I switched to the Spanish Dvorak.
  6. Finally, the Spanish Dvorak wasn’t the improvement I was looking for, so I’ve ended up accommodating tildes, ñ, and other symbols into the Dvorak international layout as I see fit.

This is how my layout looks today.

All these changes throughout the year have affected my ability to build muscle memory – sometimes I still need to look at the keyboard for a specific symbol. However, the current version has been stable for months, so I only need a bit more time for things to stick.

Performance to date

Given that I was a QWERTY user for 15 years, I thought I’d give the new layout a year before comparing any numbers. The fair thing would be to compare after 15 years, but I’m a bit too impatient for that. So I went to TypeRacer and noted down the results of about 20 races.

In terms of speed, it looks like I’m mostly there. My median speed is now 65 words per minute, 2 wpm less than before. I had a higher peak (83 vs 79), but I was under 60 wpm in more sessions.

In terms of accuracy, I’ve improved a bit. My median accuracy has increased by 1.5 points, and I had only 2 sessions below 95%.

Coda

My accuracy has improved, and having fewer mistakes to correct will help me become a faster typist over time. By learning to touch type, I have also built more endurance.

This experiment was very humbling. I believe it increased my brain plasticity by an order of magnitude. Although I hope to improve my numbers, what’s more important to me is to promote a healthy use of the tools I heavily depend upon.
