One of the most exciting features coming to JavaScript (and therefore Node.js) is the async/await syntax being introduced in ES7. Although it's essentially just syntactic sugar on top of Promises, these two keywords alone should make writing asynchronous code in Node much more bearable. They all but eliminate the problem of callback hell, and even let us use familiar control-flow structures around our asynchronous code.
Throughout this article we'll take a look at what's wrong with Promises, how the new async/await syntax can help, and how you can start using it right now.
The Problem with Promises
The concept of a "promise" has been around in JavaScript for a while, and it's been usable for years thanks to third-party libraries like Bluebird and Q, not to mention the recently added native support in ES6.
Promises have been a great solution to the problem of callback hell, but unfortunately they don't solve every asynchronous problem. While a big improvement, they leave us wanting even more simplification.
Let's say you want to use GitHub's REST API to find the number of stars a project has. In this case, you'd likely use the great request-promise library. Using the Promise-based approach, you have to make the request and get the result back within the callback you pass to .then(), like this:
var request = require('request-promise');

var options = {
    url: 'https://api.github.com/repos/scottwrobinson/camo',
    headers: {
        'User-Agent': 'YOUR-GITHUB-USERNAME'
    }
};

request.get(options).then(function(body) {
    var json = JSON.parse(body);
    console.log('Camo has', json.stargazers_count, 'stars!');
});
This will print out something like:
$ node index.js
Camo has 1,000,000 stars!
Okay, maybe that number is a slight exaggeration, but you get the point ;)
Making just one request like this isn't too hard with Promises, but what if we want to make the same request for lots of different repositories on GitHub? And what happens if we need to add control flow (like conditionals or loops) around the requests? As your requirements become more complicated, Promises become harder to work with and still end up complicating your code. They're still better than the normal callbacks since you don't have unbounded nesting, but they don't solve all of your problems.
For more complicated scenarios like the one in the following code, you need to get good at chaining Promises together and understanding when and where your asynchronous code gets executed.
"use strict";

var request = require('request-promise');

var headers = {
    'User-Agent': 'YOUR-GITHUB-USERNAME'
};

var repos = [
    'scottwrobinson/camo',
    'facebook/react',
    'scottwrobinson/twentyjs',
    'moment/moment',
    'nodejs/node',
    'lodash/lodash'
];

var issueTitles = [];

var reqs = Promise.resolve();
repos.forEach(function(r) {
    var options = { url: 'https://api.github.com/repos/' + r, headers: headers };

    reqs = reqs.then(function() {
        return request.get(options);
    }).then(function(body) {
        var json = JSON.parse(body);

        var p = Promise.resolve();

        // Only make the second request if the repo has open issues
        if (json.has_issues) {
            var issuesOptions = { url: 'https://api.github.com/repos/' + r + '/issues', headers: headers };

            p = request.get(issuesOptions).then(function(ibody) {
                var issuesJson = JSON.parse(ibody);
                if (issuesJson[0]) {
                    issueTitles.push(issuesJson[0].title);
                }
            });
        }

        return p;
    });
});

reqs.then(function() {
    console.log('Issue titles:');
    issueTitles.forEach(function(t) {
        console.log(t);
    });
});
Note: GitHub aggressively rate-limits unauthenticated requests, so don't be surprised if you get cut off after running the above code only a few times. You can increase this limit by passing a client ID/secret.
At the time of this writing, executing this code would yield the following:
$ node index.js
Issue titles:
feature request: bulk create/save support
Made renderIntoDocument tests asynchronous.
moment issue template
test: robust handling of env for npm-test-install
Just adding a for loop and an if statement to our asynchronous code makes it much harder to read and understand. This kind of complexity can only be sustained for so long before it becomes too difficult to work with.
Looking at the code, can you immediately tell me where the requests are actually getting executed, or in what order each code block runs? Probably not without reading through it carefully.
Simplifying with Async/Await
The new async/await syntax allows you to still use Promises, but it eliminates the need for providing a callback to the chained then() methods. The value that would have been sent to the then() callback is instead returned directly from the asynchronous call, just as if it were a synchronous blocking function:
let value = await myPromisifiedFunction();
While seemingly simple, this is a huge simplification to the design of asynchronous JavaScript code. The only extra syntax needed to achieve this is the await keyword. So if you understand how Promises work, it won't be too hard to understand these new keywords, since they build on top of the concept of Promises. All you really have to know is that any Promise can be await-ed. Plain values can also be await-ed, just as a Promise can resolve with an integer or a string.
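For instance, here's a minimal sketch (the demo function name is just for illustration) showing that awaiting a plain value behaves the same as awaiting a Promise that resolves to it:

```javascript
// Sketch: await accepts plain values as well as Promises.
async function demo() {
    var fromValue = await 42;                          // a plain value passes straight through
    var fromPromise = await Promise.resolve('hello');  // a Promise yields its resolved value
    return [fromValue, fromPromise];
}

demo().then(function(result) {
    console.log(result); // [ 42, 'hello' ]
});
```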
Let's compare the Promise-based method with the await keyword:
Promises
var request = require('request-promise');

request.get('https://api.github.com/repos/scottwrobinson/camo').then(function(body) {
    console.log('Body:', body);
});
await
var request = require('request-promise');

async function main() {
    var body = await request.get('https://api.github.com/repos/scottwrobinson/camo');
    console.log('Body:', body);
}

main();
As you can see, await indicates that you want to resolve the Promise rather than receive the Promise object itself, as you normally would. When this line is executed, the request call is kicked off and execution of the async function is suspended, yielding to any other asynchronous code that is ready to run until the Promise settles.
The async keyword is used when you're defining a function that contains asynchronous code. This is an indicator that the function returns a Promise and should therefore be treated as asynchronous.
Here is a simple example of its usage (notice the change in the function definition):

async function getCamoJson() {
    var options = {
        url: 'https://api.github.com/repos/scottwrobinson/camo',
        headers: {
            'User-Agent': 'YOUR-GITHUB-USERNAME'
        }
    };
    return await request.get(options);
}

// Note: await itself is only valid inside an async function
var body = await getCamoJson();
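Because an async function always returns a Promise, code that isn't inside an async function can still consume its result with .then(). A minimal sketch (getAnswer is a made-up example function):

```javascript
// Sketch: the return value of an async function is automatically wrapped in a Promise.
async function getAnswer() {
    return 42; // behaves like returning Promise.resolve(42)
}

getAnswer().then(function(answer) {
    console.log('Answer:', answer); // Answer: 42
});
```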
Now that we know how to use async and await together, let's see what the more complex Promise-based code from earlier looks like now:
"use strict";

var request = require('request-promise');

var headers = {
    'User-Agent': 'scottwrobinson'
};

var repos = [
    'scottwrobinson/camo',
    'facebook/react',
    'scottwrobinson/twentyjs',
    'moment/moment',
    'nodejs/node',
    'lodash/lodash'
];

var issueTitles = [];

async function main() {
    for (let i = 0; i < repos.length; i++) {
        let options = { url: 'https://api.github.com/repos/' + repos[i], headers: headers };

        let body = await request.get(options);
        let json = JSON.parse(body);

        if (json.has_issues) {
            let issuesOptions = { url: 'https://api.github.com/repos/' + repos[i] + '/issues', headers: headers };

            let ibody = await request.get(issuesOptions);
            let issuesJson = JSON.parse(ibody);

            if (issuesJson[0]) {
                issueTitles.push(issuesJson[0].title);
            }
        }
    }

    console.log('Issue titles:');
    issueTitles.forEach(function(t) {
        console.log(t);
    });
}

main();
It is certainly more readable now that it can be written like many other linearly-executed languages.
Now the only problem is that each request.get() call is executed in series (meaning each call has to wait until the previous one has finished), so we have to wait longer for the code to complete before getting our results. The better option would be to run the HTTP GET requests in parallel. This can still be done by utilizing Promise.all() like we would have before. Just replace the for loop with a .map() call and pass the resulting array of Promises to Promise.all(), like this:
// Init code omitted...

async function main() {
    let reqs = repos.map(async function(r) {
        let options = { url: 'https://api.github.com/repos/' + r, headers: headers };

        let body = await request.get(options);
        let json = JSON.parse(body);

        if (json.has_issues) {
            let issuesOptions = { url: 'https://api.github.com/repos/' + r + '/issues', headers: headers };

            let ibody = await request.get(issuesOptions);
            let issuesJson = JSON.parse(ibody);

            if (issuesJson[0]) {
                issueTitles.push(issuesJson[0].title);
            }
        }
    });

    await Promise.all(reqs);
}

main();
This way you can take advantage of the speed of parallel execution and the simplicity of await.
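One nice property of the .map() + Promise.all() pattern is that the resolved array preserves the input order, even when the underlying work finishes out of order. Here's a small sketch using a made-up delay() helper in place of real HTTP requests:

```javascript
// Sketch: Promise.all() resolves with results in input order,
// even though the work runs concurrently and finishes out of order.
function delay(ms, value) {
    return new Promise(function(resolve) {
        setTimeout(function() { resolve(value); }, ms);
    });
}

async function main() {
    var results = await Promise.all([3, 2, 1].map(async function(n) {
        return await delay(n * 10, n * n); // the slowest item still lands in the first slot
    }));
    console.log(results); // [ 9, 4, 1 ]
    return results;
}

main();
```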
There are more benefits than just being able to use traditional control flow like loops and conditionals. This linear approach lets us get back to using the try...catch statement for handling errors. With Promises you had to use the .catch() method, which worked, but could cause confusion about which Promises it caught exceptions for.
So now this...
request.get('https://api.github.com/repos/scottwrobinson/camo').then(function(body) {
    console.log(body);
}).catch(function(err) {
    console.log('Got an error:', err.message);
});

// Got an error: 403 - "Request forbidden by administrative rules. Please make sure your request has a User-Agent header..."
can be expressed like this:
try {
    var body = await request.get('https://api.github.com/repos/scottwrobinson/camo');
    console.log(body);
} catch(err) {
    console.log('Got an error:', err.message);
}

// Got an error: 403 - "Request forbidden by administrative rules. Please make sure your request has a User-Agent header..."
While it's about the same amount of code, it is much easier to read and understand for someone transitioning to JavaScript from another language.
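The same mechanics apply to any rejected Promise, not just failed HTTP requests: awaiting it throws a catchable exception. A small self-contained sketch (mightFail is a made-up function):

```javascript
// Sketch: a throwing (or rejecting) async function surfaces as a catchable exception.
async function mightFail() {
    throw new Error('boom'); // same effect as returning a rejected Promise
}

async function main() {
    try {
        await mightFail();
        return 'no error';
    } catch (err) {
        return 'caught: ' + err.message;
    }
}

main().then(function(msg) {
    console.log(msg); // caught: boom
});
```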
Using Async Right Now
The async feature is still in the proposal stage, but don't worry: there are a few ways you can use it in your code right now.
V8
While it hasn't quite made its way into Node yet, the V8 team has publicly stated its intention to implement the async/await feature. They've even already committed a prototype runtime implementation, which means harmony support shouldn't be too far behind.
Babel
Arguably the most popular option is to transpile your code using Babel and its various plugins. Babel is extremely popular thanks to its ability to mix and match ES6 and ES7 features using its plugin system. While a bit more complicated to set up, it also gives the developer a lot more control.
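As a sketch of what this looked like with Babel 6 (preset and plugin names varied between Babel versions, so treat these as an assumption), a .babelrc could enable async/await compilation like this:

```json
{
    "presets": ["es2015"],
    "plugins": ["transform-async-to-generator"]
}
```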
Regenerator
The regenerator project by Facebook doesn't have as many features as Babel, but it is a simpler way to get async transpiling working.
The biggest problem I've had with it is that its errors aren't very descriptive. So if there is a syntax error in your code, you won't get much assistance from the regenerator in finding it. Other than that, I've been happy with it.
Traceur
I don't have any experience with this one personally, but Traceur (by Google) seems to be another popular option with a lot of available features. See its documentation for details on which ES6 and ES7 features can be transpiled.
asyncawait
Most of the options available to you involve either transpiling or using a nightly build of V8 to get async working. Another option is to use the asyncawait package, which provides a function to resolve Promises much like the await feature. It's a nice vanilla ES5 way of getting similar-looking syntax.
Conclusion
And that's it! Personally, I'm most excited about this feature in ES7, but there are some other great features in ES7 that you should check out, like class decorators and properties.
Do you use transpiled ES7 code? If so, which feature has been the most beneficial to your work? Let us know in the comments!