Agenda
The Why (~10 mins)
Bugs and Regressions
Safety Net
Goal: Continuous Integration
The How (~45 mins)
Overview of Different Testing Approaches:
Unit Testing (~20 mins)
UI Testing (~20 mins)
Automating Testing with Continuous Integration
Free-roam workshop time: hack, code, get help, show off demos
Ohai! I'm Fil
Ohai! who r u?
Designer? Developer? QA?
what kind of a group do we have here?
Why Test?
Audience Poll: Who does it?
Dedicated QA department?
Developers?
Unclear?
presumably we're all involved in shipping some kind of product
do you test, and if so, who does it?
Why Test?
Audience Poll: What kind of testing?
Manual?
Unit?
Integration / End-to-End / Selenium?
what sorts of testing are you/your team doing? something not here?
Quality is everyone's responsibility.
if there's one thing to leave this workshop with, it's this message
Bugs
They suck. We have to fix them.
There's something worse than just a bug, though...
when i think of poor-quality software, bugs are what come to mind
Regressions
A bug that, at one point, was identified and fixed, but at a later date, crept back into the product.
One of the biggest embarrassments for a team delivering a product.
by sticking to a few principles and following some of the techniques i hope y'all will learn in this workshop,
you can avoid regressions altogether.
Preventing Regressions
You release version 1 of your product. Yay!
You realize there's a bug in it. Boo...
You fix the bug, and you also write a little program that verifies the bug is gone.
This little program is a test!
You release version 2 of your product, which includes your bug fix. Yay!
Now every time you make a change, you run your test, effectively preventing you from ever releasing that same bug again.
LMAO, RIP, Bug
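to make this concrete, here's a sketch of what such a regression test could look like in Jasmine (the test library we'll use later in the workshop) - the function and its bug are invented for illustration:

```js
// a hypothetical regression test - formatDate and its off-by-one bug are invented
var MONTHS = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
function formatDate(d) {
  // the fixed implementation; the v1 bug rendered Dec 31 as "Jan 0" of the next year
  return MONTHS[d.getMonth()] + ' ' + d.getDate() + ', ' + d.getFullYear();
}

describe('formatDate', function () {
  it('does not regress into the December 31st off-by-one bug', function () {
    expect(formatDate(new Date(2016, 11, 31))).toBe('Dec 31, 2016');
  });
});
```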
The Power of Testing
Testing is a safety net.
You are future-proofing your product.
It is also a powerful design tool - it lets you reason about and formalize your architecture.
we've talked about how testing can help us incrementally improve the quality of our software, and prevent regressions.
but what about testing as a way to shape software in the first place? who has heard of Test Driven Development?
consider how your test suite would evolve differently, and how that shapes your program, if every test added was
from a bug to prevent a regression, vs. a meaningful test describing a high-level feature.
"it should not *due to specific bug behaviour*"
"it should interact with module X in this particular way"
Continuous Integration
Next-Level Testing Power Up: The Holy Grail
Your tests being automated - fire and forget
Your automated test suite running on every commit.
You get feedback very quickly about your changes - did they break the tests?
Workshop Goal: instrument a PhoneGap app a) with tests and then b) automate them
How to Test
So many ways to approach testing your product - as many as there are to build it.
Testing Approaches
Different ways of experiencing and slicing your product up:
Manual. Slow, error-prone, human required.
Unit. Testing code at the granular unit/module/class level.
Visual. Does it look like what it's supposed to look like?
Integration. Once you put all the pieces together, does it work?
Today we will be focusing on unit testing and integration/end-to-end testing, specifically for PhoneGap apps.
Unit Testing
Testing the building blocks that make up your app, one block (or unit) at a time.
The driving assumption is that the majority of quality problems in your application come from your code/logic.
Therefore, we focus unit tests on your application's logic.
should be focused on JUST the logic of your units. so, really, we're disassembling your app into smaller parts we can test individually.
it's expected that the programming languages / frameworks / programs you leverage as dependencies (PhoneGap, Cordova, the browser) are well tested and work. the assumption is that the main contributor to errors/failures in your app is YOUR code. so focus on testing YOUR code first and foremost.
Unit Testing, cont.
As the first line of defense against bugs and regressions, we want unit tests to:
Be easy to run, since we will run them all the time during development.
Be fast. Slow unit tests make me less likely to run them.
the second part (fast unit tests) actually enforces the principle of isolation: any kind of i/o will slow your test.
How Not to Unit Test
the previous two principles around unit tests lead to certain constraints that are helpful to hold yourself to
no i/o (file/network)!!!!! if your unit test makes network calls, it's not a unit test, because it violates the principle of isolation.
Unit Testing a PhoneGap Application
A PhoneGap app is composed of HTML, JavaScript and CSS.
The logic in your code mostly exists in JavaScript.
So for your PhoneGap application, we will be unit testing JavaScript.
Quick Look at Star Track JavaScript
https://github.com/phonegap/phonegap-app-star-track/blob/master/www/js/my-app.js
this is a single-page app, and all of the app's logic is contained within a single JS file. if your app's source is split among multiple files, the same approach I will be detailing applies to you, too. the single-js-file scenario is generally trickier to test properly than a more modular approach.
So How Do We Unit Test This?
Three requirements:
We need an environment where we can load our application's JavaScript and execute it.
We need an environment where we can load our tests and execute them.
We need the ability to report the test results.
Karma
A versatile JavaScript test runner that meets our three requirements, and works with any testing library.
karma-runner.github.io
Refer to Karma's site for installation, or just jump straight to my fork of the Star Track app.
Requirement 1
Environment for our JavaScript
To start, follow the installation guide to get all the Karma pieces in place - let's run through it quickly.
Let me walk through the Karma config to explain how Karma loads our app's JS.
cover basePath, files, browsers, autoWatch and singleRun. we will turn autoWatch on and singleRun off so that we can focus on/optimize the app-development experience. that is, as an app developer making changes to my code, how do i streamline the test-running experience to help my development efforts? i will show that in a sec.
remember that the order of the files we include matters. in this case, our app relies on Framework7, so we need to include Framework7 FIRST - just like in the browser! similarly, our tests rely on our own app code, so our tests get included last.
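for reference, the relevant slice of the karma config might look something like this - the exact file paths are assumptions, not necessarily the app's actual layout:

```js
// karma.conf.js (excerpt) - file paths here are assumptions
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    // order matters: dependencies first, then app code, then the tests
    files: [
      'www/lib/framework7/js/framework7.js',
      'www/js/my-app.js',
      'test/unit/*.spec.js'
    ],
    browsers: ['Chrome'],
    autoWatch: true,  // re-run the tests on every file change
    singleRun: false  // keep karma alive between runs during development
  });
};
```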
another common gotcha is coupling the logic (JS) to the view (HTML/document). some simple tweaks to the basic structure of our app let us decouple the JavaScript from the document almost entirely. let me show you how that's done in the star track app's case - there is a single init() call that kicks off finding DOM elements, binding to their events, that kinda stuff.
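the shape of that decoupling, roughly - the element id and function names here are invented for illustration:

```js
// sketch of decoupling logic from the document; names are invented
function onFindTracks() {
  // app logic lives out here, loadable and testable without any DOM
}

function init() {
  // every DOM lookup and event binding is funneled through this one function
  document.getElementById('find-tracks')
          .addEventListener('click', onFindTracks);
}

// the app calls init() once at startup; unit tests simply never call it,
// so the rest of the file can be exercised in isolation
```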
Requirement 2
Environment for our Tests
Now let's take a look at some tests!
Let me walk through the Karma config to explain how Karma loads our tests.
open up the test file: a series of describe blocks group relevant tests together, and it blocks encapsulate individual tests. i like to organize describe blocks by the module or function i am testing, and have each test exercise one aspect of my code.
open up karma config again. show frameworks. we're using jasmine, so we tell karma "expect to run jasmine tests now", k?
Requirement 3
Reporting Test Results
Node.js is our main runtime, so let's create easy npm scripts to run our tests.
show package.json with dependencies for all the karma pieces, and the one npm script we added: `npm run unit`. all it does is trigger karma.
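the relevant package.json pieces might look roughly like this - the versions and config path are assumptions:

```json
{
  "scripts": {
    "unit": "karma start karma.conf.js"
  },
  "devDependencies": {
    "jasmine-core": "^2.5.2",
    "karma": "^1.3.0",
    "karma-chrome-launcher": "^2.0.0",
    "karma-jasmine": "^1.0.2"
  }
}
```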
Let's give it a shot!
sweet! it worked. we ran 7 tests and got feedback that they passed - that's nice. also note that if i change the tests or the source code, karma immediately re-runs the tests in the background for you. this is a nice addition to one's development flow. let's zoom in on the pad2 function and its tests to show this off.
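assuming pad2 zero-pads a number to two digits (a guess based on the name), its tests might look like:

```js
// a sketch - assuming pad2 zero-pads a number to two digits
describe('pad2', function () {
  it('pads single-digit numbers with a leading zero', function () {
    expect(pad2(7)).toBe('07');
  });

  it('leaves two-digit numbers alone', function () {
    expect(pad2(42)).toBe('42');
  });
});
```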
Essential Unit Testing Tool: Mocking
Recall that unit testing relies on isolating our logic to small units.
This can make executing code that is highly coupled to other units difficult.
Mocking is a technique where we use software dummies to stand in for other units, dependencies or even function parameters in order to keep tests focused.
also called stubbing. let's look at the second set of tests, which depict this technique in increasingly complex unit tests. they exercise the JavaScript function responsible for doing the search: it validates the input, makes an API call to Spotify to search for music, delegates rendering of the results to another function, and finally handles any errors during the Spotify call. that's at least four tests, if not more.
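a sketch of that style of test using jasmine spies - searchTracks, spotifyApi and renderResults are hypothetical stand-ins for the app's real names:

```js
// hypothetical names throughout; the point is the mocking technique
describe('searchTracks', function () {
  it('delegates rendering of the results', function () {
    var fakeResults = [{ title: 'Space Oddity' }];

    // dummy stands in for the real spotify client - no network I/O happens
    spyOn(spotifyApi, 'search').and.callFake(function (query, done) {
      done(null, fakeResults);
    });
    spyOn(window, 'renderResults');

    searchTracks('bowie');

    expect(window.renderResults).toHaveBeenCalledWith(fakeResults);
  });
});
```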
End-to-End Testing
Also known as integration testing...
... and UI testing...
... and other terms - but the idea is always the same:
Treat the software-under-test as a single, whole package, and interact with it as your customers do.
remember the assumption that failures are caused by your code, first and foremost? while that may be true 90% of the time, there are times when that assumption does not hold - when the interactions between your code, its dependencies, and anything else cause weird behaviour.
End-to-End Testing, cont.
This kind of testing used to be done manually.
End-to-End Testing, cont.
What we want is an automated way to replace UI interactions.
In that sense, it is the opposite of unit testing with mocking:
No disassembling our code into small units we test individually - instead we test the entire application.
No use of mocks - bring on the I/O.
integration testing treats your entire application as one giant package, with everything included: UI, integrations with other APIs/services. in other words, _nothing_ is mocked - completely the opposite of what we did during unit testing. that introduces its own challenges, but the benefit is that we are testing and playing with a version of the software that is identical, or very close, to what our users will be using.
HEAVY! expensive, slow. if you don't need to do integration testing, don't do it. generally, the larger the project and the longer it has been around, the more likely you need integration testing.
Selenium
(aka WebDriver)
Selenium is a tool that enables automation. It tells other software to do things for you.
who has heard of selenium? who uses it?
for web-based software projects, there is a web standard called WebDriver that allows us to automate UIs. it provides a way to load URLs, tap/click elements, and validate that the web app is in a particular state. not that different from our JS unit tests, except that we’re no longer isolating the JS
What is WebDriver?
It is commonly referred to as Selenium (for maximum confusion)
It is a specification for an HTTP API.
This HTTP API serves as an abstraction over proprietary, closed and/or differing UI automation APIs.
Free and open-source.
where phonegap provides a single JS API over various native device functionalities, selenium works in the _exact_ same way over differing UI automation APIs.
How Selenium Works
very briefly cover how this works, and we will segue into how we will use it.
four components: the thing you want to automate; the driver (an API to automate the thing); the selenium server, which is an abstraction over the driver; and your test, which speaks to the server.
What is Appium?
An implementation (+extension) of the WebDriver specification...
... designed specifically for mobile (Android, iOS) platforms...
... allowing automation of native, hybrid and web applications.
Back to our App...
Similar to Karma for unit testing, Nightwatch is a test runner purpose-built for integration testing in a browser.
We will use Nightwatch to glue our integration testing setup together.
first, let's install it, along with selenium-server and chromedriver: npm install nightwatch selenium-server chromedriver --save-dev.
our first integration-testing environment will be a browser (chrome, in this case). just like in our development workflow, where we frequently test using `phonegap serve` and loading up the browser, we will leverage this same approach to run integration tests in chrome. this will be a stepping stone to running these integration tests on mobile.
let's go over the nightwatch config to see what's in there: the basic setup, the selenium server and chromedriver bits, and the default test environment.
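the interesting parts of that config, sketched out - the test folder name is an assumption:

```js
// nightwatch.conf.js (excerpt) - a sketch; src_folders is an assumption
const seleniumServer = require('selenium-server');
const chromedriver = require('chromedriver');

module.exports = {
  src_folders: ['test/integration'],
  selenium: {
    start_process: true,               // nightwatch manages the selenium server
    server_path: seleniumServer.path,  // the jar shipped by the npm package
    port: 4444,
    cli_args: {
      'webdriver.chrome.driver': chromedriver.path
    }
  },
  test_settings: {
    default: {
      desiredCapabilities: { browserName: 'chrome' }
    }
  }
};
```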
We need one more bit of glue: a runner.js script.
It will handle running phonegap serve for us, and kicking off Nightwatch. Let's take a look.
we also need to write what i'm calling a 'runner' script - something that gives us the flexibility to set up the testing environment in such a way that it works for our phonegap app. in particular, the runner script will run `phonegap serve` for us, and only once it's running will it kick off nightwatch.
it will also help run some extra processes to test inside mobile emulators, but i will show that off in a bit.
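a rough sketch of such a runner script - the naive sleep and the binary path are assumptions; a real version would poll the server until it's up:

```js
// runner.js - a sketch; a real version would poll the server instead of sleeping
const { spawn } = require('child_process');

// 1. start the phonegap dev server
const server = spawn('phonegap', ['serve'], { stdio: 'inherit' });

// 2. (naively) give it a few seconds to come up, then kick off nightwatch
setTimeout(() => {
  const nightwatch = spawn('./node_modules/.bin/nightwatch', [], { stdio: 'inherit' });

  // 3. when the tests finish, tear down the server and propagate the exit code
  nightwatch.on('exit', (code) => {
    server.kill();
    process.exit(code);
  });
}, 5000);
```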
Web Integration Test
Let's take a look at what integration tests in this case look like.
writing our first test. simple: load the url, make sure the find tracks button is present.
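which might look something like this - the port is phonegap serve's default, and the button selector is a guess at the app's markup:

```js
// test/integration/find-tracks.js - the selector is a hypothetical id
module.exports = {
  'find tracks button is present': function (browser) {
    browser
      .url('http://localhost:3000')        // phonegap serve's default port
      .waitForElementVisible('body', 5000)
      .assert.visible('#find-tracks')      // hypothetical button id
      .end();
  }
};
```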
What about Mobile?
That Appium thing we talked about earlier - how do we get it to run our tests in a mobile environment?
Appium
But! The browser is not the same as a PhoneGap (hybrid) application:
Don't need to manage URLs.
Differentiating between "web" vs. "native" contexts.
even though the automation API is identical, an app running inside a browser and an app running inside a native container are two vastly different contexts. two main differences:
1. there is no concept of URL loading in a cordova app. so explicitly loading the URL is not needed, and in fact would probably fail.
2. phonegap apps are native apps, but your app lives inside what is called a webview within the native application. so when automating the UI of a phonegap app, we need to have the tests find this webview and _change the context of the UI automation to the webview_.
Appium, cont.
We can easily factor out this difference in environments using a before clause in our tests. Let's take a look.
so we need to tweak the tests such that both the browser context and the native app context are handled. we can do so by factoring out these differences into hooks that run before/after all the tests, or before/after each one.
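sketched out, the idea looks something like this - note that `contexts` and `setContext` are hypothetical custom commands wrapping Appium's context-switching endpoints, not nightwatch built-ins:

```js
// a sketch; contexts/setContext are hypothetical helpers around Appium's API
module.exports = {
  before: function (browser) {
    if (browser.options.desiredCapabilities.browserName) {
      // plain browser environment: load the served app by URL
      browser.url('http://localhost:3000');
    } else {
      // native (Appium) environment: find the webview and switch into it
      browser.contexts(function (result) {
        browser.setContext(result.value[1]); // assume the webview is listed second
      });
    }
  }

  // ... the tests themselves stay environment-agnostic ...
};
```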
let's also look at the different mobile environments we test in the nightwatch config, and at how our runner script will start appium for us. note how we point our test runner at the final built native binaries - we are literally testing what we eventually ship to our customers.
worth noting that we probably want to build a new mobile binary before running the tests - to package up any changes and test those, too.
Continuous Integration
Classic software development lifecycle:
Develop - manual
Test - manual
Release - manual
Continuous Integration, cont.
Continuous integration-powered development lifecycle:
Develop - manual
Test - automated - run while you develop
Release - manual
ideally the people involved in app development are running all these awesome testing tools we’ve talked about locally. but in a big team, sometimes we forget to do that, or people’s environments are slightly different than our own or our customers’.
the idea is to automate the _running_ of all of these things we’ve talked about today on every change we make to our app.
so investing a little bit of time in automating the running of tests and checks on every commit can go a long way to saving time.
the karma unit tests we ran earlier, with their autowatch/autorun features, are a great example of tightening up the development and testing phases of the software development lifecycle, speeding up delivery.
you can take it even one step further and automate the releasing of your application to achieve Continuous Deployment - where every phase of the SDLC other than development itself is automated. quite common these days with web applications and services; less so with mobile apps, mostly due to typical app store review processes.
CI Details
Suggestion: automate running your tests and checks on every commit, at a couple of typical version-control locations:
On the master branch (or whatever branch is used to reflect production or release candidates)
On Pull Request (or ready-to-review) branches
i'd suggest running the tests on two different code-change events/locations:
1. master, or whatever branch you use with your team to denote "what is in production".
2. pull requests. a PR is a medium in which someone signals intent to change the software - the perfect opportunity to run tests and report the results.
it might seem like overkill, but in git, merge commits can sometimes include unintentional changes, so having one extra run on master over and above the PR run can help catch those. PLUS: this is automated. a computer will do this work for us; it is mindless for me and you. all it takes is some compute time in the cloud. so why not?
Travis
A simple, free and beautiful service for enhancing software projects with automation - perfect for CI!
Let's walk through setting up Travis for our app.
who is familiar with Travis?
travis is free for public repos. let me show you how to set it up: log into travis with your github account, enable travis runs for your repo of choice, tweak some settings (go over them), and then write up a .travis.yml file (go over the basics). push those changes up, and travis will start executing the tests for you!
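a minimal .travis.yml for the unit tests might look like this - the node version is an assumption:

```yaml
# .travis.yml - a minimal sketch; adjust the node version to your project
language: node_js
node_js:
  - "6"
script:
  - npm run unit   # karma in single-run mode (see below)
```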
if we want to run karma on every change, we have to make sure not to run it in 'watch' mode, like we did in the earlier unit testing section - on CI we want it to run once and report back.
it's super useful when hooked into PRs, too: you get that feedback at code-review and code-sharing time.
so let's put together a pull request that _breaks_ the tests, and have travis report it back to us.
Code Coverage
A measure of how much of your code is being executed by your tests.
Don't waste your time forcing this to 100%! It's just another data point. Good goal: continuous improvement, make it generally go up, not down.
Let me show you how to hook it up to Karma and see it in action.
You can also easily hook it into your PR process, just like Travis, using http://codecov.io .
another useful little extra you can hook into your dev cycle is code coverage reporting. it's a metric that tells you what % of your code your tests are exercising.
it's a rough metric, and doesn't tell you much other than how much of your code you are testing. don't chase this metric too much! code coverage does not tell you about the quality of your tests, just how much of your code is being exercised - human judgment is still needed. i find it useful to know how code coverage _changes_ as code changes. does a pull request drop code coverage, or increase it? if coverage drops, perhaps that's one thing to look at in the PR: does it have sufficient test coverage? were tests written at all?
let me show you how to get up and running with one such tool, kinda already built into karma:
--> npm install karma-coverage codecov --save-dev. edit the karma config to show a summary on every run (via a reporter); it also needs to preprocess your js files (to compute the coverage metrics). run `npm run unit` and check out the report at the bottom.
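the karma config additions look roughly like this - the source glob is an assumption:

```js
// karma.conf.js additions for coverage - the source glob is an assumption
module.exports = function (config) {
  config.set({
    // ...existing settings from the unit testing section...
    preprocessors: {
      'www/js/*.js': ['coverage']   // instrument app code, but not the tests
    },
    reporters: ['progress', 'coverage'],
    coverageReporter: {
      type: 'text-summary'          // print a coverage summary after each run
    }
  });
};
```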
you can also have travis run code coverage and report it back to us - useful, again, in PRs! show .travis.yml.
let's demo.
Let's Get Hacking!
Suraj and I will be wandering around, answering questions, helping with set up or issues, and generally available for the last bit of the workshop.