commit d95a2849d28593758c03c0cde74175cb807db857
Author: James Halliday
Date: Sun Dec 8 16:14:47 2013 -0800

In node I use simple test libraries like tap or tape that let you run the test files directly. For code that needs to run in both the browser and node I use tape because tap doesn't run in the browser very well and the APIs are mostly interchangeable.

The simplest kind of test I might write in test/ looks like:

var test = require('tape');
var someModule = require('../');

test('fibwibblers and xyrscawlers', function (t) {
    t.plan(2);

    var x = someModule();
    t.equal(x.foo(2), 22);

    x.beep(function (err, res) {
        t.equal(res, 'boop');
    });
})

To run a single test file in node I just do:

node test/fibwibbler.js

And if I have multiple tests I want to run I do:

tape test/*.js

or I can just use the tap command even if I'm just using tape because tap only looks at stdout for tap output:

tap test/*.js

The best part is that since tape just uses console.log() to print its tap-formatted assertions, all I need to do is browserify my test files.

To compile a single test in the browser I can just do:

browserify test/fibwibbler.js > bundle.js

or to compile a directory full of tests I just do:

browserify test/*.js > bundle.js

Now to run the tests in a browser I can just write an index.html:

<script src="bundle.js"></script>

and xdg-open that index.html in a local browser. To shortcut that process, I can use the testling command (npm install -g testling):

browserify test/*.js | testling

which launches a browser locally and prints the console.log() statements that executed browser-side to my terminal directly. It even sets the process exit code based on whether the TAP output had any errors:

substack : defined $ browserify test/*.js | testling

TAP version 13
# defined-or
ok 1 empty arguments
ok 2 1 undefined
ok 3 2 undefined
ok 4 4 undefineds
ok 5 false[0]
ok 6 false[1]
ok 7 zero[0]
ok 8 zero[1]
ok 9 first arg
ok 10 second arg
ok 11 third arg
not ok 12 (unnamed assert)
  ---
    operator: ok
    expected: true
    actual:   false
    at: Test.ok.Test.true.Test.assert (http://localhost:47079/__testling?show=true:7772:10)
  ...
# (anonymous)
ok 13 should be equal

1..13
# tests 13
# pass  12
# fail  1
substack : defined $ echo $?
1
substack : defined $

coverage

bonus content: if I want code coverage, I can just sneak that into the pipeline using coverify. This is still experimental but here's how it looks:

$ browserify -t coverify test.js | testling | coverify

TAP version 13
# beep boop
ok 1 should be equal

1..1
# tests 1
# pass  1

# ok

# /tmp/example/test.js: line 7, column 16-28

  if (err) deadCode();
           ^^^^^^^^^^^

# /tmp/example/foo.js: line 3, column 35-48

  if (i++ === 10 || (false && neverFires())) {
                              ^^^^^^^^^^^^

or to run the tests in node, just swap testling for node:

$ browserify -t coverify test.js | node | coverify
TAP version 13
# beep boop
ok 1 should be equal

1..1
# tests 1
# pass  1

# ok

# /tmp/example/test.js: line 7, column 16-28

  if (err) deadCode();
           ^^^^^^^^^^^

# /tmp/example/foo.js: line 3, column 35-48

  if (i++ === 10 || (false && neverFires())) {
                              ^^^^^^^^^^^^

Update (2013-12-21): check out the covert package on npm, which gives you a covert command that runs browserify and coverify for you.

why write tests this way?

The node-tap API is pretty great because it feels asynchronous by default. Since you plan out the number of assertions ahead of time, it's much easier to catch false positives where asynchronous handlers with assertions inside didn't fire at all.
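For example, a planned assertion count catches an async handler that never fires instead of letting the test pass silently (a contrived sketch; brokenAsync is a made-up stand-in for buggy async code):

var test = require('tape');

test('handler never fires', function (t) {
    t.plan(1);

    // made-up async function whose bug is that it never calls its callback
    function brokenAsync(cb) {}

    brokenAsync(function (err) {
        t.error(err); // never runs, so tape reports "plan != count" and the test fails
    });
});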

By using simple text-based interfaces like stdout and console.log() it's easy to get tests to run in node and the browser and you can just pipe the output around to simple command-line tools. If you stick to tools that just do one thing but expose their functionality in a hackable way, it's easy to recombine the pieces however you want and swap out components to better suit your specific needs.
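For instance, the same TAP stream can be fed into a prettifier; here's one possibility using faucet (which would need to be installed separately):

$ browserify test/*.js | testling | faucet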

commit 742794366c89622b0a7ce2ee848d1edd92d94651
Author: James Halliday
Date: Sat Dec 7 23:39:34 2013 -0800

A new browserify version is upon us, just in time for the FESTIVE SEASON during which we in the northern hemisphere at mid to high latitudes huddle for warmth around oxidizing hydrocarbons!

There are 2 big changes in v3 but most code should be relatively unaffected.

shiny new Buffer

feross forked the buffer-browserify package to create native-buffer-browserify, a Buffer implementation that uses Uint8Array to get buf[i] notation and parity with the node core Buffer api, without the performance hit of the previous implementation and with a much smaller file size. The downside is that Buffer now only works in browsers with Uint8Array and DataView support. All the other modules should maintain existing browser support.

Update: a shim was added in 3.1 for Uint8Array and DataView support. Now you can use Buffer in every browser.
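For example, after browserifying, code like this behaves the same in the browser as it does in node (a trivial sketch):

var b = new Buffer('abc');
console.log(b[0]);              // 97
console.log(b.toString('hex')); // 616263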

direct builtin dependencies

In v3, browserify no longer depends on browser-builtins, in favor of depending on packages directly. Instead of having some separate packages and some files in a builtin/ directory like browser-builtins, browserify now uses only external packages for the shims it uses. By only using external packages we can keep browserify core focused purely on the static analysis and bundling machinery while letting the individual packages worry about things like browser compatibility and parity with the node core API as it evolves.

Individual, tiny packages should also be much easier for newcomers to contribute to because they won't need to get up to speed with all the other pieces going on, and the packages can have their own tests and documentation. Additionally, each package may find uses in other projects besides browserify more easily, and if people want variations on the versions of shims that ship with browserify core, this is easier to do when everything is separate.

Back when we were using browser-builtins there was a large latency between pushing out fixes to the individual packages and getting them into browserify core because we had to wait on browser-builtins to upgrade the semvers in its package.json. With direct dependencies we get much lower latency for package upgrades and much more granular control over upgrading packages.

Here is the list of packages we now directly depend on in v3:

That's it! If you're bold enough to give v3 a spin, just do:

npm install -g browserify
commit c13668f2b5f4f515e97723fa3322aa009181629c
Author: James Halliday
Date: Mon Nov 18 01:52:10 2013 +0800

There are some fancy tools for doing build automation on javascript projects that I've never felt the appeal of because the lesser-known npm run command has been perfectly adequate for everything I've needed to do while maintaining a very tiny configuration footprint.

Here are some tricks I use to get the most out of npm run and the package.json "scripts" field.

the scripts field

If you haven't seen it before, npm looks at a field called scripts in the package.json of a project in order to make things like npm test from the scripts.test field and npm start from the scripts.start field work.

npm test and npm start are just shortcuts for npm run test and npm run start and you can use npm run to run whichever entries in the scripts field you want!

Another thing that makes npm run really great is that npm will automatically set up $PATH to look in node_modules/.bin, so you can just run commands supplied by dependencies and devDependencies directly without doing a global install. Packages from npm that you might want to incorporate into your task workflow only need to expose a simple command-line interface and you can always write a simple little program yourself!
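For example, with tape installed as a devDependency, a test script can call the tape command directly (a minimal sketch):

"scripts": {
  "test": "tape test/*.js"
}

npm puts node_modules/.bin at the front of $PATH before running the script, so this finds node_modules/.bin/tape without a global install.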

building javascript

I write my browser code with node-style commonjs module.exports and require() to organize my code and to use packages published to npm. browserify can resolve all the require() calls statically as a build step to create a single concatenated bundle file you can load with a single script tag. To use browserify I can just have a scripts['build-js'] entry in package.json that looks like:

"build-js": "browserify browser/main.js > static/bundle.js"

If I want my javascript build step for production to also do minification, I can just add uglify-js as a devDependency and insert it straight into the pipeline:

"build-js": "browserify browser/main.js | uglifyjs -mc > static/bundle.js"

watching javascript

To recompile my browser javascript automatically whenever I change a file, I can just substitute watchify for the browserify command and add -d and -v for debugging and more verbose output:

"watch-js": "watchify browser/main.js -o static/bundle.js -dv"

building css

I find that cat is usually adequate so I just have a script that looks something like:

"build-css": "cat static/pages/*.css tabs/*/*.css > static/bundle.css"

watching css

Similarly to my watchify build, I can recompile css as it changes by substituting cat with catw:

"watch-css": "catw static/pages/*.css tabs/*/*.css -o static/bundle.css -v"

sequential sub-tasks

If you have 2 tasks you want to run in series, you can just npm run each task separated by a &&:

"build": "npm run build-js && npm run build-css"

parallel sub-tasks

If you want to run some tasks in parallel, just use & as the separator!

"watch": "npm run watch-js & npm run watch-css"

the complete package.json

Altogether, the package.json I've just described might look like:

{
  "name": "my-silly-app",
  "version": "1.2.3",
  "private": true,
  "dependencies": {
    "browserify": "~2.35.2",
    "uglifyjs": "~2.3.6"
  },
  "devDependencies": {
    "watchify": "~0.1.0",
    "catw": "~0.0.1",
    "tap": "~0.4.4"
  },
  "scripts": {
    "build-js": "browserify browser/main.js | uglifyjs -mc > static/bundle.js",
    "build-css": "cat static/pages/*.css tabs/*/*.css",
    "build": "npm run build-js && npm run build-css",
    "watch-js": "watchify browser/main.js -o static/bundle.js -dv",
    "watch-css": "catw static/pages/*.css tabs/*/*.css -o static/bundle.css -v",
    "watch": "npm run watch-js & npm run watch-css",
    "start": "node server.js",
    "test": "tap test/*.js"
  }
}

If I want to build for production I can just do npm run build and for local development I can just do npm run watch!

You can extend this basic approach however you like! For instance you might want to run the build step before running start, so you could just do:

"start": "npm run build && node server.js"

or perhaps you want an npm run start-dev command that also starts the watchers:

"start-dev": "npm run watch & npm start"

You can reorganize the pieces however you want!

when things get really complicated...

If you find yourself stuffing a lot of commands into a single scripts field entry, consider factoring some of those commands out into someplace like bin/.

You can write those scripts in bash or node or perl or whatever. Just put the proper #! line at the top of the file, chmod +x, and you're good to go:

#!/bin/bash
(cd site/main; browserify browser/main.js | uglifyjs -mc > static/bundle.js)
(cd site/xyz; browserify browser.js > static/bundle.js)
"build-js": "bin/build.sh"

windows

A surprising number of bash-isms work on windows, but we still need to get ; and & working to get to "good enough".

I have some experiments in the works for windows compatibility that should fold in very well with this npm-centric approach but in the meantime, win-bash is a super handy little bash implementation for windows.

conclusion

I hope that this npm run approach I've documented here will appeal to some of you who may be unimpressed with the current state of frontend task automation tooling, particularly those of you who, like me, just don't "get" the appeal of some of these things. I tend to prefer tools that are more steeped in the unix heritage, like git, or in this case npm, which just provides a rather minimal interface on top of bash. Some things really don't require a lot of ceremony or coordination and you can often get a lot of mileage out of very simple tools that do very ordinary things.

If you don't like the npm run style I've elaborated upon here you might also consider Makefiles as a solid and simple alternative to some of the more baroque approaches to task automation making the rounds these days.

commit 730e80407f36a44890da1357d59b02cae5a0ab0e
Author: James Halliday
Date: Tue Oct 1 13:37:54 2013 +0100

wireless on the command line

Connecting to wireless access points completely from the command line in linux using the built-in tools is not actually very complicated. The hardest part about it is turning off whatever "friendly" wireless/network managers your system is already running.

why the command line?

Graphical tools like nm-applet are handy but what they're doing is very opaque. Sometimes you will tell them to connect to an access point and they will ignore you and keep connecting to some other access point you didn't ask for. If you prefer to tell the computer exactly what to do, managing wireless on the command line is not actually that hard, and you gain a lot of transparency into what your computer is doing, which avoids frustrating sessions tinkering with opaque graphical tools.

Also, if you like minimal or tiling window managers, using a wireless applet by way of something like stalonetray feels really awkward and strange.

turning things off

debian/ubuntu

$ sudo update-rc.d network-manager remove
$ pkill nm-applet
$ sudo service network-manager stop

or if sudo service network-manager stop didn't work, try:

$ sudo /etc/init.d/network-manager stop

If you're using a graphical environment with a panel that automatically spins up something like nm-applet, you'll also need to figure out how to disable that although it won't do anything if network-manager isn't running.

figuring out the interface name

Type iwconfig. You will see a list of interfaces. Ignore all the interfaces that say "no wireless extensions".

The interface name will be wlan0, wlan2 or ath0 or something like that.

This document uses the name wlan0, but you should substitute whichever interface name your system reports.

adding passwords

$ sudo su
# wpa_passphrase SSID PASSPHRASE >> /etc/wpa_supplicant.conf
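wpa_passphrase prints a network block for the access point, so the config file ends up with entries roughly like this (the ssid and psk shown are placeholders; the real psk is a hex hash derived from the passphrase):

network={
    ssid="SSID"
    #psk="PASSPHRASE"
    psk=<64 hex characters>
}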

Make sure to use >> and not > or else you will delete all your wireless passwords! It's a good idea to make a backup occasionally:

sudo cp /etc/wpa_supplicant.conf{,.backup}

run wpa_supplicant
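A typical invocation, assuming the wlan0 interface name and the config file from above, looks something like:

$ sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf

-B runs it in the background, -i names the interface, and -c points it at the file the passwords were appended to.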

scanning for access points

$ sudo iw dev wlan0 scan | grep SSID
    SSID: MEO-876078
    SSID: Thomson249040
    SSID: MEO-089464
    SSID: Solmar - Guests
    SSID: SINDICADO-NACIONAL
    SSID: Solmar

connecting to an access point

To connect to an access point called SSID, do:

$ sudo iw dev wlan0 connect -w SSID

see if you're connected to an access point

Use iwconfig:

$ iwconfig wlan0

When you're connected, you will see something like:

wlan0     IEEE 802.11abgn  ESSID:"Thomson249040"  
          Mode:Managed  Frequency:2.412 GHz  Access Point: 00:24:17:44:35:28   
          Bit Rate=48 Mb/s   Tx-Power=19 dBm   
          Retry limit:231   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=46/70  Signal level=-64 dBm  
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:170  Invalid misc:134   Missed beacon:0

getting an IP address

Most of the time you'll just need to do:

sudo dhclient wlan0

but sometimes you will get the message:

RTNETLINK answers: File exists

In that case, release the dhcp lease first with -r and then get a lease:

$ sudo dhclient -r wlan0
$ sudo dhclient wlan0

Once dhclient finishes, you're online!
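To double-check that you actually got an address, something like this works:

$ ip addr show wlan0

and look for an inet line in the output.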

disconnecting

sudo iw dev wlan0 disconnect

see also

The manual setup section of the archlinux wiki is very good but somewhat specific to arch in places.

commit 371294f9d675d4dd2b9b59212d323e69422284b9
Author: James Halliday
Date: Sun Jul 7 04:00:23 2013 -0700

So you're using the same language on the browser and the server. Great!

You can get the benefits of fast initial page loads server-side that are easily indexed by search engines while simultaneously rendering realtime and on-demand content browser-side for a rich, responsive user experience!

This sounds really great, but now you might be thinking:

  • How do I load the shared code in both environments without making a mess?
  • How can I load files like html or templates in a way that works in both node and the browser?
  • How do I render the data?
  • How should I route the data where it needs to go?

These questions are not obvious and there are many ways to answer them! The rest of this article is some answers that I've discovered or built that work well together.

import shared code

Node already has a really good built-in way of importing and exporting code with require() and module.exports.

browserify makes require() and module.exports work in the browser pretty much exactly the same as they work in node.

By using node-style modules, our shared rendering logic can just use module.exports to expose a function and we can use require() to load other project files or even npm modules.

Let's stub out the shared render file, render.js:

module.exports = function () {
    // shared logic goes here
};

load files

The next thing we'll need to figure out is how our shared render logic should load the non-js files it will need into memory.

In node, to load something into memory at process start-up, it's common to use fs.readFileSync(filename) to synchronously return the file contents at filename.

brfs can make fs.readFileSync() work for browser code too! Instead of performing synchronous IO when the program runs, the file contents are inlined into the bundle at compile time.

For example if we have some code:

var fs = require('fs');
var src = fs.readFileSync(__dirname + '/file.txt');

and if file.txt is just the string "beep boop\n", after running browserify with brfs, our file will be turned into:

var fs = require('fs');
var src = "beep boop\n";

To run browserify with brfs, just use the -t switch:

$ browserify -t brfs main.js > bundle.js

render the data

I'm not a big fan of html templates since you've got to nest a pseudo-language into your html and I would rather just write ordinary html that I can update procedurally from my rendering code.

In browsers it's easy to update the html on the page with the DOM. I can just do a quick .querySelector() to fetch an element and then I can easily update attributes or inner content.

If I've got a lot of data to insert into the DOM, calling .querySelector(), .setAttribute(), and assigning .innerHTML or .textContent all the time can get verbose, but the approach is not so unpleasant.
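For example, filling in a couple of fields by hand might look something like this (the selectors and doc fields here are just placeholders):

var title = document.querySelector('.title a');
title.setAttribute('href', '#' + name);
title.textContent = doc.title;
document.querySelector('.body').innerHTML = doc.body;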

In node, there's not a fast, reliable DOM library for doing updates with the intent of producing html strings that will be sent to the browser in the initial payload. Luckily, the full DOM isn't strictly necessary to serve html strings to the browser in this "dom style".

hyperglue

With hyperglue, we can solve both the verbosity of updating the DOM at query selectors and node compatibility at once.

hyperglue takes an html element or string and an object that maps query selectors to attributes and content. With hyperglue and brfs together you can write some rendering logic that looks like:

var hyperglue = require('hyperglue');
var fs = require('fs');
var html = fs.readFileSync(__dirname + '/article.html');

module.exports = function (doc) {
    var name = doc.title.replace(/[^A-Za-z0-9]+/g,'_');
    return hyperglue(html, {
        '.title a': {
            name: name,
            href: '#' + name,
            _text: doc.title
        },
        '.commit': doc.commit,
        '.author': doc.author,
        '.date': doc.date,
        '.body': { _html: doc.body }
    });
}

which will work in both node and the browser! The keys are css query selector strings and the values are the attributes and content to update at the nodes matching the selector. The values are objects mapping element attribute names to values, but with special keys: _html to set inner html and _text to set entity-encoded inner text. If the value is a string, it's treated as the _text parameter.
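For reference, an article.html that matches those selectors might look something like this (the exact markup is an assumption; only the class names matter to hyperglue):

<div class="article">
  <h1 class="title"><a></a></h1>
  <div class="commit"></div>
  <div class="author"></div>
  <div class="date"></div>
  <div class="body"></div>
</div>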

The only odd part is that in the browser you will get a full dom element, but in node you just get an object with an outerHTML string property, which is adequate for writing html content to the http server response.

hyperspace

hyperspace puts a stream on top of hyperglue and adds some browser-specific functionality so that you can write shared rendering logic that starts on the server and seamlessly picks up where the server left off on the browser.

With hyperspace we can write a shared render.js that looks like:

var hyperspace = require('hyperspace');
var fs = require('fs');
var html = fs.readFileSync(__dirname + '/article.html');

module.exports = function () {
    return hyperspace(html, function (doc) {
        var name = doc.title.replace(/[^A-Za-z0-9]+/g,'_');
        return {
            '.title a': {
                name: name,
                href: '#' + name,
                _text: doc.title
            },
            '.commit': doc.commit,
            '.author': doc.author,
            '.date': doc.date,
            '.body': { _html: doc.body }
        };
    });
};

Now when we require('./render.js') in either the browser or in node, we get a function that returns a through stream. The stream expects json data to be written to it (either naked objects or newline-separated json text) and outputs a stream of html that can be piped directly to an http server response:

var http = require('http');
var fs = require('fs');
var render = require('./render.js');

var server = http.createServer(function (req, res) {
    res.setHeader('content-type', 'text/html');
    fs.createReadStream(__dirname + '/data')
        .pipe(render())
        .pipe(res)
    ;
});
server.listen(5000);

Here we're just piping newline-separated json from a file, but it should be very simple to swap that part out for a real database when you need one.
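The data file is just one json document per line with the fields render.js expects, something like this (the values here are made up):

{"title":"example post","commit":"abc1234","author":"James Halliday","date":"Sun Jul 7 2013","body":"<p>beep boop</p>"}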

In the browser, hyperspace()'s return value is still a stream, but that stream has some more browser-appropriate functions on it:

var render = require('./render.js')();
render.on('element', function (elem) {
    elem.addEventListener('click', function onclick () {
        elem.classList.remove('summary');
        elem.removeEventListener('click', onclick);
    });
});
render.appendTo('#articles');

var shoe = require('shoe');
shoe('/article-stream').pipe(render);

Here we're using the 'element' event to bind a click listener on every article element. The 'element' listener also fires for elements that were pre-rendered server-side once the page loads.

Any new data that comes down the pipe from shoe (which hasn't yet been wired up server-side in this example) will get added to the rendering automatically.

Instead of .pipe(), which the render stream still has in the browser, we're using .appendTo() to put content into the #articles element. Using .appendTo() here will insert rendered elements as they are written to the render stream and fire the 'element' events for any elements that were rendered server-side.

It's also possible to give a sorting function to hyperspace for browser code, which is really useful when you've got a realtime feed in conjunction with on-demand loading so that your server code can be dumb and simply write realtime updates and requested on-demand data directly to the same data stream without adding any extra transformations.

trumpet

Going back to the server code example, we'll just have article elements on the page and no containing <html> or <body> elements to wrap the article content.

We can use trumpet to pipe the stream that hyperspace returns into some existing html file:

var http = require('http');
var fs = require('fs');
var trumpet = require('trumpet');
var render = require('./render.js');

var server = http.createServer(function (req, res) {
    res.setHeader('content-type', 'text/html');

    var tr = trumpet();
    fs.createReadStream(__dirname + '/data')
        .pipe(render())
        .pipe(tr.select('#content').createWriteStream())
    ;
    fs.createReadStream(__dirname + '/index.html').pipe(tr).pipe(res);
});
server.listen(5000);

Here we're streaming the rendered html into the element matching #content and piping the index html file, with the rendered data inserted at #content, to the response.

stream everything

By using streams with hyperspace and trumpet, we get the benefit of APIs that compose well together, and streams are easy to serialize, which makes it easy to route data over a different transport or to a different destination.

Some new database APIs even have streaming realtime feeds built-in to listen for live updates.

LevelDB is particularly fascinating because you can browserify most of the modules and use the same database interfaces backed by IndexedDB when you're in a browser.

links

For an example of using hyperspace, trumpet, and brfs in a real application, check out the code for this very blog.

more
git clone http://substack.net/blog.git