Demo App Functionality
Preparing a Test Plan
I recommend designing a test plan at the same time you design your application’s functionality. Think about the different components of the app and how they can break. For example, if the app depends on an external API service, assume that the service can go offline or return invalid data. If the app accepts user input, assume that the input can be invalid and possibly dangerous. In my opinion, the purpose of a test plan is to be an outline for developers as they write tests. The test plan should not serve as documentation for the tests, since it can easily become out of date; instead, your tests should be self-describing. For example, here is a simple test plan for this photo searching app:
- Form input – if the user enters valid search terms, the app should work.
- Form input – if the user enters invalid search terms, the app should return an error.
- API data – if the data returned from the Flickr API is valid, the app should work.
- API data – if the data returned from the Flickr API is invalid, the app should return an error.
- API data – if the Flickr API returns zero results, an alert should be shown to notify the user.
- Route – if the app is working correctly, a 200 HTTP status code should be returned.
- Route – if there is a server error, a 500 HTTP status code should be returned.
There are different types of tests that can be written for web applications:
- Unit tests – isolate a small block of code and test its intended behavior
- Integration tests – test how different components of an application work together
- Service tests – isolate a service to see if it’s working correctly (e.g., an API endpoint or a database)
My test app includes unit tests, integration tests, and API tests. I’m using the should.js assertion library for clean and readable tests.
Form validation methods are a good example for unit testing. Typically, a form validation method takes a single input parameter and uses a regular expression to determine whether the input value is valid.
Here’s a form validation method with a couple of Mocha unit tests:
Integration Tests for Routes
Node web application routes can be tested using the Supertest library. Supertest can start up a Node app, make HTTP requests to its routes, and run assertions on the HTTP responses. It can pass parameters to routes, assert on the response’s HTTP status code and content type, and execute regular expressions against the response body.
If your routes depend on a database or an external API, I recommend mocking that data in your route tests. That way, the database or API can’t influence your route test results. I’m using the nock node module in my project to mock the data returned by the Flickr public feed API.
Here are some route tests:
It can be helpful to have tests for the external services used by your app. If you rely on external services and have tests for them, then when your app breaks it’s easier to determine whether the problem is in your own code base or in the external service.
Mocha makes it easy to test an API endpoint asynchronously. Here’s a test I have for Flickr’s public feed API:
Code coverage tools analyze an application’s source code and test suite and then identify code that’s missing tests. Code coverage gives you a concrete way to measure how thoroughly your test suite exercises your application.
Here’s a screenshot of the Istanbul HTML reporter for a module with zero tests:
After adding tests for all methods in this module, the HTML report looks like this:
Istanbul can be installed as a global node module and used from the command line. Or you can use Istanbul as a local module with Grunt, which is the approach I took with my test app. I describe configuring Istanbul with Grunt in the “Grunt Workflow” section below.
Sonar stores code coverage statistics about your application in a database, which makes it possible to compare different versions of your application and see whether your code coverage is getting better or worse.
Here’s a screenshot of this test project in SonarQube:
Grunt Workflow
Grunt can be used to configure Mocha and Istanbul so you don’t need to install them as global modules. Here are the Grunt plugins I’m using for Mocha and Istanbul:
The grunt-istanbul plugin is easy to use with Mocha or any other testing framework. To configure the grunt-istanbul plugin, set up a task that runs the following steps:
- Instrument your source code
- Run your test suite against your instrumented source code
- Store your coverage results
- Make the report
I follow these steps in my test project’s Gruntfile here: https://github.com/gregjopa/express-app-testing-demo/blob/master/Gruntfile.js
When I normally run my test suite, I want it to point directly at my source code. But when I run the Istanbul code coverage task, I want the test suite to point at the instrumented source code instead. To accomplish this I created a require_helper module, which uses an environment variable to determine which path to load the source code from. When I execute my code coverage Grunt task, I use the grunt-env plugin to set that environment variable to point to the instrumented source code. Here’s some example code to demonstrate how this requireHelper module works: