JavaScript Tooling 2015
Here’s my list of must-have JavaScript tools and modules, updated for 2015. These are the tools I use on every project. They are:
Universal. These tools make sense for nearly every JavaScript project.
Valuable. You’ll get noticeable, ongoing benefits from using them.
Mature. They’ve stood the test of time. You won’t have to spend a lot of time keeping up with changes.
See JavaScript Workflow 2015 for a video describing how to set up a front-end project using these tools. To get started quickly, see my automatopia seed project on Github.
tl;dr
- Build automation: Jake
- Dependency versioning: check ’em in
- Continuous integration: test before merging (see below)
- Linting: JSHint
- Node.js tests: Mocha and Chai
- Front-end tests: Karma, Mocha, and Chai (Expect.js instead of Chai on IE 8)
- Smoke tests: Selenium WebdriverJS
- Front-end modules: Browserify and karma-commonjs
Changes since the 2014 edition:
- Smoke tests: Replaced CasperJS with Selenium WebdriverJS. CasperJS uses PhantomJS, which seems to be going through some growing pains, including an annoying slow-down on Mac OS.
Build Automation: Jake
(Build automation is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)
Build automation is the first thing I put into place on any new project. It’s essential for fast, repeatable workflow. I constantly run the build as I work. A good build automation tool supports my work by being fast, powerful, flexible, and staying out of the way.
My preferred tool for build automation is Jake. It’s mature, has a nice combination of simplicity and robustness, and it’s code-based rather than configuration-based.
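To give a flavor of the code-based approach, here's a minimal sketch of a Jakefile. The task names and bodies are illustrative, not the screencast's actual build:

// Jakefile.js -- a minimal, illustrative Jakefile. "desc", "task", "complete",
// and "fail" are globals that Jake provides when it loads this file.
"use strict";

desc("Default build: lint, then test");
task("default", ["lint", "test"]);

desc("Lint the code");
task("lint", function() {
    console.log("Linting would run here (for example, by calling JSHint's API).");
});

desc("Run the tests");
task("test", { async: true }, function() {
    // Asynchronous tasks call complete() (or fail()) when they finish.
    setTimeout(complete, 0);
});

Running jake with no arguments executes the default task, and jake -T lists every task that has a description.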
That said, Grunt is the current king of the hill and it has a much better plugin ecosystem than Jake. Grunt’s emphasis on configuring plugins rather than writing code tends to get messy over time, though, and it lacks classic build automation features such as dirty file checking. I think Jake is a better tool overall, but Grunt’s plugins make it easier to get started. If you’re interested in Grunt, I review it in The Lab #1, “The Great Grunt Shootout.”
Another popular build tool is Gulp. It uses an asynchronous, stream-based approach that’s fast and avoids the need for temporary files. But that stream-based approach can also make debugging difficult. Gulp’s pretty minimalistic, too, lacking useful features such as task documentation and command-line parameters. You can read my review of gulp here.
We cover installing Jake and creating a Jakefile in the second half of episode 1, “WeeWikiPaint.” I also have a pre-configured example on GitHub in the automatopia repository. For examples of Grunt and Gulp builds, see the code for Lab #1.
Dependency Versioning: Check ’em in
(Dependency management is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)
I’m a big proponent of keeping everything you need to build your code in a single, versioned repository. It’s the simplest, most reliable way to share changes with your team and ensure you can build old versions when you need to.
As a result, unless you’re actually creating an npm module, I prefer to install npm modules locally (in other words, don’t use the -g option, even for tools) and check them into source control. This will isolate you from undesired upstream changes and hiccups.
To do this, you need to ensure that you don’t check in build artifacts. Here’s how to do it with git:
npm install <package> --ignore-scripts --save # Install without building
git add . && git commit -a # Check in the module
npm rebuild # Build it
git status # Display files created by the build
### If there are any build files, add them to .gitignore and check it in.
In the Live channel, we install our tools locally, use scripts to run them, and check them into git. You can see an example of this in the second half of episode 1 when we set up Jake. The automatopia repository also demonstrates this approach. My essay, “The Reliable Build,” goes into more detail.
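If you're curious what "use scripts to run them" can look like, here's a minimal sketch of a runner that calls the locally-installed Mocha rather than a global one. It assumes Node 0.12 or later (for spawnSync) and an illustrative file layout:

// run_tests.js -- run the locally-installed Mocha instead of a global one.
var path = require("path");
var child_process = require("child_process");

// Locally-installed modules put their command-line entry points in node_modules/.bin
var mocha = path.join(__dirname, "node_modules", ".bin", "mocha");

var result = child_process.spawnSync(mocha, ["test/"], { stdio: "inherit" });
process.exit(result.status);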
Continuous Integration: Test before merging
(Continuous integration is introduced in Chapter 1, “Continuous Integration,” and LL1, “Continuous Integration with Git.” It’s also discussed in LL16, “JavaScript Workflow 2015”.)
I’m known for saying, “Continuous integration is an attitude, not a tool.” Continuous integration isn’t about having a build server—it’s about making sure your code is ready to ship at any time. The key ingredients are:
- Integrate every few hours.
- Ensure the integrated code works.
The most effective way to do this is to use a synchronous integration process that prevents integration build failures.
“Synchronous integration” means that you don’t start a new task until you’ve confirmed that the integration succeeded. This ensures that problems are fixed right away, not left to fester.
Preventing integration build failures is a simple matter of testing your integration before you share it with the rest of the team. This prevents bad builds from disrupting other people’s work. Surprisingly, most CI tools don’t support this approach.
I use git branches to ensure good builds. I set up an integration machine with an integration branch and one dev branch for each development workstation. Development on each workstation is done on that workstation’s dedicated branch.
### Develop on development workstation
git checkout <dev> # Work on this machine's dev branch
# work work work
<build> # optional # Validate your code before integrating
### Integrate on development workstation
git pull origin integration # Integrate latest known-good code
<build> # optional # Only fails if there are integration conflicts
### Push to integration machine for testing
git push origin <dev>
### Validate on integration machine
git checkout <dev> # Get the integrated code
git merge integration --ff-only # Confirm changes have been integrated
<build> # mandatory # Make sure it really works
git checkout integration
git merge <dev> --no-ff # Make it available to everyone else
You can do this with a manual process or an automated tool. I prefer a lightly-scripted manual approach, as seen in the automatopia repository, because it’s lower maintenance than using a tool.
If you use an automated tool, be careful: most CI tools default to asynchronous integration, not synchronous, and most test the code after publishing it to the integration branch, not before. These flaws tend to result in slower builds and more time wasted on integration errors.
I demonstrate how to set up a basic CI process starting in the second half of episode 3, “Preparing for Continuous Integration.” I show how to automate that process and make it work with a team of developers in Lessons Learned #1, “Continuous Integration with Git.” The automatopia repository also includes an up-to-date version of that CI script. See the “Continuous Integration” section of the README for details.
The process I describe above is for Git, but it should also translate to other distributed version control systems. If you’re using a centralized version control system, such as Subversion, you can use a rubber chicken instead. (Really! It works great.)
Linting: JSHint
(Linting is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)
Static code analysis, or “linting,” is crucial for JavaScript. It’s right up there with putting "use strict"; at the top of your modules. It’s a simple, smart way to make sure that you don’t have any obvious mistakes in your code.
I prefer JSHint. It’s based on Douglas Crockford’s original JSLint but offers more flexibility in configuration.
Another tool that’s been attracting attention lately is ESLint. Its main benefit seems to be a pluggable architecture. I haven’t tried it, and I’ve been happy enough with JSHint’s built-in options, but you might want to check ESLint out if you’re looking for more flexibility than JSHint provides.
Episode 2, “Build Automation & Lint,” shows how to install and configure JSHint with Jake. I’ve since packaged that code up into a module called simplebuild-jshint. You can use that module for any of your JSHint automation needs. See the module for details.
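If you'd rather see the moving parts, here's a sketch that calls JSHint programmatically. This uses the raw jshint module's API, not simplebuild-jshint's interface, and the file name and options are illustrative:

// check_lint.js -- a bare-bones JSHint run over a single file.
var fs = require("fs");
var JSHINT = require("jshint").JSHINT;

var source = fs.readFileSync("src/example.js", "utf8");
var options = { node: true, undef: true, strict: true };

if (!JSHINT(source, options, {})) {
    JSHINT.errors.forEach(function(error) {
        // JSHint pads the error array with a null when it gives up early.
        if (error) console.log("line " + error.line + ": " + error.reason);
    });
    process.exit(1);
}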
Node.js Testing: Mocha and Chai
(Node.js testing tools are introduced in Chapter 2, “Test Frameworks,” and Lessons Learned #2, “Test-Driven Development with NodeUnit”.)
When I started the screencast, Mocha was my first choice of testing tools, but I had some concerns about its long-term viability. We spent some time in episode 7 discussing those concerns and considering how to future-proof it, but eventually, we decided to go with NodeUnit instead.
It turns out that those concerns were unfounded. Mocha’s stood the test of time, and it’s a better tool than NodeUnit. NodeUnit isn’t bad, but it’s no longer my first choice. Its test syntax is clunky and limited, and even its “minimal” reporter setting is too verbose for big projects.
I recommend combining Mocha with Chai. Mocha does an excellent job of running tests, handling asynchronous code, and reporting results. Chai is an assertion library that you use inside your tests. It’s mature with support for both BDD and TDD assertion styles.
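Here's a minimal sketch of what a Mocha test with Chai's BDD-style expect assertions looks like (the code under test is made up):

var expect = require("chai").expect;

describe("Array indexOf()", function() {
    it("returns -1 when the value isn't present", function() {
        expect([1, 2, 3].indexOf(4)).to.equal(-1);
    });

    it("handles asynchronous code via the done callback", function(done) {
        setTimeout(function() {
            expect(2 + 2).to.equal(4);
            done();
        }, 0);
    });
});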
See episode 34, “Cross-Browser and Cross-Platform,” (starting around the eight-minute mark) for an example of using Mocha and Chai. That example is for front-end code, not Node.js, but it works the same way. The only difference is how you run the tests. To run Mocha from Jake, you can use mocha_runner.js from the automatopia repository.
For a step-by-step guide to server-side testing, start with episode 7, “Our First Test.” It covers NodeUnit rather than Mocha, but the concepts are transferable. The automatopia repository shows how to use Mocha instead. If you need help figuring out how to use Mocha, leave a comment here or on episode 7 and I’ll be happy to help out.
Cross-Browser Testing: Karma, Mocha, and Chai
(Cross-browser testing is introduced in Chapter 7, “Cross-Browser Testing,” and Lessons Learned #6, “Cross-Browser Testing with Karma.” It’s also discussed in LL16, “JavaScript Workflow 2015”.)
Even today, there are subtle differences in JavaScript behavior across browsers, especially where the DOM is concerned. It’s important to test your code inside real browsers. That’s the only way to be sure your code will really work in production.
I use Karma for automated cross-browser testing. It’s fast and reliable. In the screencast, we use it to test against Safari, Chrome, Firefox, multiple flavors of IE, and Mobile Safari running in the iOS simulator. I’ve also used it to test real devices, such as my iPad.
Karma’s biggest flaw is its results reporting. If a test fails while you’re testing a lot of browsers, it can be hard to figure out what went wrong.
An alternative tool that does a much better job of reporting is Test’em Scripts. It’s superior to Karma in nearly every way, in fact, except the most important one: it doesn’t play well with build automation. As a result, I can’t recommend it. For details, see The Lab #4, “Test Them Test’em.”
I combine Karma with Mocha and Chai. Chai doesn’t work with IE 8, so if you need IE 8 support, try Expect.js. Expect.js has a lot of flaws—most notably, its failure messages are weak and can’t be customized—but it’s the best assertion library I’ve found that works well with IE 8.
We cover Karma in depth in Chapter 7, “Cross-Browser Testing,” and Lessons Learned #6, “Cross-Browser Testing with Karma.” For details about the new config file format that was added in Karma 0.10, see episode 133, “More Karma.” The automatopia repository is also set up with a recent version of Karma.
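For reference, a minimal karma.conf.js in the 0.10+ format looks something like the sketch below. The file patterns and browsers are illustrative, and it assumes the karma-mocha and karma-chai plugins are installed alongside Karma:

// karma.conf.js -- minimal sketch of the Karma 0.10+ configuration format.
module.exports = function(config) {
    config.set({
        frameworks: ["mocha", "chai"],
        files: ["src/**/*.js", "test/**/*.js"],
        browsers: ["Firefox", "Chrome"],
        reporters: ["dots"],
        autoWatch: true
    });
};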
Smoke Testing: Selenium WebdriverJS
(Smoke testing is introduced in Chapter 5, “Smoke Test,” and Lessons Learned #4, “Smoke Testing a Node.js Web Server.” Front-end smoke testing is covered in Chapter 15, “Front-End Smoke Tests,” and Lessons Learned #13, “PhantomJS and Front-End Smoke Testing.”)
Even if you do a great job of test-driven development at the unit and integration testing levels, it’s worth having a few end-to-end tests that make sure everything works properly in production. These are called “smoke tests.” You’re turning on the app and seeing if smoke comes out.
I used to recommend CasperJS for smoke testing, but it uses PhantomJS under the covers, and PhantomJS has been going through some growing pains lately. Now I’m using Selenium WebdriverJS instead. It’s slower but more reliable.
(In fairness, PhantomJS just came out with a new version 2, which may have fixed its problems. I haven’t had a chance to try it yet.)
We cover Selenium WebdriverJS in Chapter 39, “Selenium.” PhantomJS is covered starting with episode 95, “PhantomJS,” and also in Lessons Learned #13. We investigate and review CasperJS in The Lab #5.
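To give a sense of what a WebdriverJS smoke test looks like, here's a minimal sketch. The URL is a placeholder, and it assumes the selenium-webdriver npm module and a local Firefox installation:

// smoke_test.js -- load the home page and log its title.
var webdriver = require("selenium-webdriver");

var driver = new webdriver.Builder()
    .withCapabilities(webdriver.Capabilities.firefox())
    .build();

driver.get("http://localhost:8080");    // wherever the app is running
driver.getTitle().then(function(title) {
    console.log("Home page title: " + title);
});
driver.quit();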
Front-End Modules: Browserify and karma-commonjs
(Front-end modules are introduced in Chapter 16, “Modularity,” and Lessons Learned #14, “Front-End Modules.” They’re also discussed in LL16, “JavaScript Workflow 2015”.)
Any non-trivial program needs to be broken up into modules, but JavaScript doesn’t have a built-in way of doing that. Node.js provides a standard approach based on the CommonJS Modules specification, but no equivalent standard has been built into browsers. You need to use a third-party tool.
I prefer Browserify for front-end modules. It brings the Node.js module approach to the browser. It’s simple, straightforward, and if you’re using Node, consistent with what you’re using on the server.
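Here's a sketch of what that looks like in practice (the file names are illustrative):

// circle.js -- an ordinary CommonJS module, exactly as you'd write it for Node.
exports.area = function(radius) {
    return Math.PI * radius * radius;
};

// main.js -- the front-end entry point; require() works once Browserify bundles it.
var circle = require("./circle.js");
console.log("Area of a unit circle: " + circle.area(1));

// Bundling is a single command: browserify main.js -o bundle.js
// Load bundle.js from a script tag and the modules behave just as they do in Node.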
Another popular tool is RequireJS, which uses the Asynchronous Module Definition (AMD) approach. I prefer Browserify because it’s simpler, but some people like the flexibility and power AMD provides. I discuss the trade-offs in Lessons Learned #14.
A disadvantage of Browserify is that the CommonJS format is not valid JavaScript on its own. You can’t load a single module into a browser, or into Karma, and have it work. Instead, you must run Browserify and load the entire bundle. That can be slow and it changes your stack traces, which is particularly annoying when doing test-driven development.
In Chapter 17, “The Karma-CommonJS Bridge,” we create a tool to solve these problems. It enables Karma to load CommonJS modules without running Browserify first. That tool has since been turned into karma-commonjs, a Karma plugin.
One limitation of karma-commonjs is that it only supports the CommonJS specification. Browserify does much more, including allowing you to use a subset of the Node API in your front-end code. If that’s what you need, the karma-browserify plugin might be a better choice than karma-commonjs. It’s slower and has uglier stack traces, but it runs the real version of Browserify.
We show how to use Browserify starting with episode 103, “Browserify.” We demonstrate karma-commonjs in episode 134, “CommonJS in Karma 0.10.” There’s a nice summary of Karma, Browserify, and the Karma-CommonJS bridge at the end of Lessons Learned #15. You can find sample code in the automatopia repository.
Notably Missing
These aren’t all the tools you’ll use in your JavaScript projects, just the ones I consider most essential. There are a few categories that I’ve intentionally left out.
Spies, Mocks, and other Test Doubles
I prefer to avoid test doubles in my code. They’re often convenient, and I’ll turn to them when I have no other choice, but I find that my designs are better when I work to eliminate them. So I don’t use any tools for test doubles. I have so few that it’s easy to just create them by hand. It only takes a few minutes.
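For example, here's a hand-rolled spy. It's a sketch, not the screencast's exact code, and the code under test is hypothetical:

// createSpy() returns a function that records how it was called, so a test
// can make assertions about the call afterwards.
function createSpy() {
    var spy = function() {
        spy.called = true;
        spy.args = Array.prototype.slice.call(arguments);
    };
    spy.called = false;
    spy.args = null;
    return spy;
}

// Hypothetical code under test: it calls its callback when it's done.
function saveDocument(contents, callback) {
    callback(null, contents.length);
}

// In a test, pass the spy in place of a real callback, then assert on it.
var onSave = createSpy();
saveDocument("my document", onSave);
// assert: onSave.called === true and onSave.args[1] === 11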
I explain test doubles and talk about their trade-offs in Lessons Learned #9, “Unit Test Strategies, Mock Objects, and Raphaël.” We create a spy by hand in chapter 21, “Cross-Browser Incompatibility,” then figure out how to get rid of it later in the same chapter. A simpler example of creating a spy appears in episode 185, “The Nuclear Option.”
If you need a tool for creating test doubles, I’ve heard good things about Sinon.JS.
Front-End Frameworks
One of the most active areas of JavaScript development is client-side application frameworks and libraries. Examples include React, Ember and AngularJS.
This topic is still changing too rapidly to make a solid long-term recommendation. There seems to be a new “must use” framework every year. My suggestion is to delay the decision as long as you can. To use Lean terminology, wait until the last responsible moment. (That doesn’t mean “wait forever!” That wouldn’t be responsible.) The longer you wait, the more information you’ll have, and the more likely that a stable and mature tool will float to the top.
If you need a framework now, my current favorite is React. I have a review of it here and an in-depth video in The Lab.
When you’re ready to choose a framework, TodoMVC is a great resource. Remember that “no framework” can also be the right answer, especially if your needs are simple and you understand the design principles involved.
We demonstrate “not using a framework” throughout the screencast. Okay, okay, that’s not hard—the important thing is that we also demonstrate how to structure your application and create a clean design without a framework. This is an ongoing topic, but here are some notable chapters that focus on it:
- Chapter 13, “Design, Objects, & Abstraction”
- Chapter 18, “Drag and Drop” (starting with episode 123, “A Question of Design.”)
- Chapter 22, “Fixing Bad Code”
- Chapter 26, “Refactoring”
We’re also investigating front-end frameworks in The Lab. At the time of this writing, React has a review and a video series and so does AngularJS (review, video series). Ember is coming next.
Promises
Promises are a technique for making asynchronous JavaScript code easier to work with. They flatten the “pyramid of doom” of nested callbacks you tend to get in Node.js code.
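For example, here's the shape of the problem and the promise-based alternative. This is a sketch; it assumes an environment with native or polyfilled ES6 promises, and readFilePromise is a hand-rolled wrapper, not a library function:

var fs = require("fs");

function handleError(err) { console.error(err); }

// Nested callbacks -- the "pyramid of doom":
fs.readFile("a.txt", "utf8", function(err, first) {
    if (err) return handleError(err);
    fs.readFile("b.txt", "utf8", function(err, second) {
        if (err) return handleError(err);
        console.log(first + second);
    });
});

// The same work with promises: the chain stays flat and all errors funnel
// into a single catch() handler.
function readFilePromise(filename) {
    return new Promise(function(resolve, reject) {
        fs.readFile(filename, "utf8", function(err, contents) {
            if (err) reject(err);
            else resolve(contents);
        });
    });
}

var firstContents;
readFilePromise("a.txt")
    .then(function(contents) {
        firstContents = contents;
        return readFilePromise("b.txt");
    })
    .then(function(secondContents) {
        console.log(firstContents + secondContents);
    })
    .catch(handleError);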
Promises can be very helpful, but I’ve held off on embracing them fully because upcoming changes in JavaScript may make their current patterns obsolete. The co and task libraries use ES6 generators for some beautiful results, and there’s talk of an await/async syntax in ES7, which should solve the problem once and for all.
The newer libraries use promises under the covers, so promises look like they’re a safe bet, but the newer ES6 and ES7 approaches have a different syntax than promises do. If you switch existing code to use promises, you’ll probably want to switch it again for ES6, and again for ES7.
As a result, I’m in the “adopt cautiously” camp on promises. I’ll consider them when dealing with complex asynchronous code. For existing callback code that’s not causing problems, I’ll probably just keep using callbacks. There’s no point in doing a big refactoring to promises when that code will just need to be refactored again to one of the newer styles.
ES6 is supposed to have native support for promises. If you need a promise library in the meantime, I’ve heard that Bluebird is good. For compatibility, be sure to stick to the ES6 API.
There’s Room for More
Is there a particular tool or category that I should have included? Add your suggestions in the comments! Remember, we’re looking for tools that are universal, valuable, and mature, so be sure to explain why your suggestion fits those categories.