Here at ZenPayroll, writing software that pays tens of thousands of employees across the country inspires us as engineers to write bullet-proof code. As a team, we are committed to putting in the extra time to make our features as robust as possible and ensuring that the code we write today will be resilient to the changes and refactors of tomorrow. Our greatest ally in this fight is testing, and it's a core tenet of how we write software at ZenPayroll.

Fully testing our features has always been an important part of our development process, and as we have grown as a team this has become even more critical to our workflow. To ensure our code quality remains up to par, every proposed feature, refactor, and bug fix is submitted with a full test suite, and we all hold each other accountable for this in the code review process. This has been a saving grace for us in a variety of situations, ensuring that payroll is always delivered where and when it's supposed to and preventing bugs from sneaking their way into production.

Writing Specs

We write tests every day at ZenPayroll, and seeing them go green definitely gets us excited. While we don't always TDD our code, we utilize it when it's useful and always aim for 100% test coverage. Our attitude towards testing stems from thinking about what may change in the future, and enumerating how the code should work now and why, through our specs. It's an excellent reminder mechanism for us and helps to "future proof" our code as we move on to new features and occasionally forget precisely why something was done.

We've hit a stride with our testing that has allowed our development process to flow smoothly, and we'd like to share the tools and approaches we've taken to get there. Hopefully, these tools can be added to your arsenal on the quest towards 100% code coverage!

Unit Tests

We start by writing unit tests to ensure our models are storing and acting on data the way we expect them to. RSpec is our weapon of choice for clean Ruby testing, and we focus on writing specs that stress specific components, rather than flows, to keep our unit tests clear, fast, and descriptive of how the code is intended to behave. As new engineers join, this is a great place to understand how a particular piece of code is used in practice, and that is only possible with readable tests - we strive to maintain easy-to-understand specs, even in complex situations.
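As a minimal sketch of that behavior-focused unit style, here is a spec for a hypothetical model (names invented for illustration). It uses Ruby's stdlib Minitest spec syntax so the snippet is self-contained; our real suite is written in RSpec, which reads very similarly:

```ruby
require "minitest/autorun"

# Hypothetical model, purely for illustration.
class Employee
  attr_reader :first_name, :last_name

  def initialize(first_name:, last_name:)
    @first_name = first_name
    @last_name = last_name
  end

  # A small, focused unit of behavior - exactly what a unit spec should pin down.
  def full_name
    "#{first_name} #{last_name}"
  end
end

describe Employee do
  it "joins first and last name with a space" do
    employee = Employee.new(first_name: "Ada", last_name: "Lovelace")
    _(employee.full_name).must_equal "Ada Lovelace"
  end
end
```

The spec reads as documentation: a new engineer can see what `full_name` is meant to return without opening the model.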

When dealing with changing tax laws, often very specific actions need to be taken to ensure that we remain compliant with the US government. We shoulder this responsibility for thousands of employers across the country and take their trust very seriously. Tests help us know that we are continuing to calculate and file for them correctly. When developing payroll features, we write specs that capture tax calculations and verify their validity. This makes sure that we are alerted whenever rates change or our tax calculations return unexpected results - nothing gets an engineer's attention like failing tests!
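Many payroll taxes share the shape of "a flat rate up to an annual wage base," and our specs pin down both the rate and the cap behavior. Here is an illustrative sketch - the class name is invented, and while the numbers resemble the 2014 Social Security employee rate and wage base, treat them as an example, not our production values:

```ruby
# Illustrative: a flat-rate tax with an annual wage base cap.
class CappedTax
  def initialize(rate:, wage_base:)
    @rate = rate
    @wage_base = wage_base
  end

  # Withholding for this paycheck, given wages already taxed this year.
  # Only the portion of gross pay below the remaining wage base is taxable.
  def withholding(gross:, ytd_wages:)
    taxable = [[@wage_base - ytd_wages, 0].max, gross].min
    (taxable * @rate).round(2)
  end
end

social_security = CappedTax.new(rate: 0.062, wage_base: 117_000)
social_security.withholding(gross: 5_000, ytd_wages: 0)        # => 310.0
social_security.withholding(gross: 5_000, ytd_wages: 115_000)  # => 124.0 (cap kicks in)
```

A spec that asserts both cases above would fail loudly the moment a rate or wage base changes out from under us, which is precisely the alert we want.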

JavaScript and UI Testing

We have a thick single-page app that is designed to give our customers peace of mind that their information is correctly and safely stored, and that their tax filings are in good hands. Ensuring data is correctly displayed and passed to the server from the frontend is just as important to us as keeping our Rails models well tested.

We serve our test assets using Konacha and use the Poltergeist PhantomJS driver, which raises exceptions if any JavaScript errors are thrown - an excellent extra check when testing our JS. Mocha then powers all of our JavaScript testing, which includes our routers, models, collections, and view logic. With a full suite of frontend specs, we can feel confident when pushing new changes that our UI responds properly.

But how do we test views with data that resembles production?

In order to keep our test data in tune with our server-side updates, we have a fixture generator controller spec, which hits GET controller actions and generates JSON fixtures from their responses. We then feed these JSON fixtures into our Backbone models when setting up our test views. This runs once before our test suite; it helps us test our RABL layer and ensures that our frontend models are populated with the same data they would receive if they synced directly with our server.
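At its core the generator just serializes a controller response to disk. A hypothetical sketch of that step (the helper name, paths, and payload are invented for illustration; in a real controller spec the payload would come from `JSON.parse(response.body)`):

```ruby
require "json"
require "fileutils"

# Persist a controller-style JSON response as a fixture that
# frontend (Backbone) specs can load instead of hitting the server.
def write_fixture(name, payload, dir: "spec/javascripts/fixtures")
  FileUtils.mkdir_p(dir)
  path = File.join(dir, "#{name}.json")
  File.write(path, JSON.pretty_generate(payload))
  path
end

# Invented payload standing in for a real serialized response.
path = write_fixture("employee", { "id" => 1, "first_name" => "Ada" }, dir: "tmp/fixtures")
JSON.parse(File.read(path)) # => {"id"=>1, "first_name"=>"Ada"}
```

Because the fixtures are regenerated from the live serialization layer before each suite run, a change to the JSON shape breaks the frontend specs immediately rather than silently drifting.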

Integration Tests

Finally, we write integration specs as a general overview of long flows in our application. We rely on this type of test the least, but they are helpful in making sure that our app is loading correctly and checking from a high level that everything is in place.

We use Capybara driven by PhantomJS, as we have found it provides the most consistency and speed when run in CI. We also have Capybara Screenshot set up to capture what the screen looked like at the time of error to help us track down the failures.

Integration specs have also allowed us to automate checking that our setup guidelines are accurate. The tax setup required for every state is unique, so ensuring that our tooltips are up to date with changing requirements can be a pain to do manually. We set up custom request specs that capture images of all our guidance messages for every state during tax setup and send them to our product and compliance teams. This helps them make sure our application's guidelines are always accurate without wasting time going to check all of these flows manually. Automated testing, for the win!

Continuous Integration

Continuous integration is a crucial piece of our development process. Every commit that is pushed to a code review, and later merged into our development branch, triggers a build on our CI server. It's the first thing we look at when reviewing code, and it links directly to every commit that touches the codebase. This can lead to a lot of tests running, so it is critical that our builds run quickly and don't get backed up.

The Way it Used to Be

A year ago, all of our tests ran on a beefy Jenkins machine in our closet. We'd have a few code reviews a day, the full test suite would finish in 15 minutes, and life was good. Reviews would get merged into the development branch, the suite would be run, and we were free to proceed with deploys!

However, as we started to grow as a team and as a product, so did our build time. Before too long the test suite was taking an hour, builds were getting backed up, and our responses from Jenkins became fewer and farther between. Sometimes it would take half a day to hear back on the status of a review. Our builds on the development branch started to fall behind, flaky tests started to emerge, and sometimes deploys were going out without a response from CI. Gasp! Something had to change.

Going for Gold

We knew we had to get back to our roots and find a build system that was fast, reliable, and supported enough concurrent builds to keep our development cycles running smoothly. We tried a variety of services but almost always found something that wasn't well suited to our needs, whether it was due to setup difficulty, poor parallelization, frustrating UI, or limited support.

After evaluating a variety of options, we ultimately decided on Solano CI and have seen massive productivity boosts since our switch. Solano has impressive performance and parallelization options directly out of the box, which piqued our interest. Combined with a clear UI, comprehensive dependency support, easy setup, total customizability, and a command line interface, it was a perfect fit for us. They also offer unique advanced features such as build profiles, which allowed us to break out more expensive tests into a separate, periodic suite. To top it off, their support is incredibly knowledgeable and responsive; they have worked closely with us on fine-tuning our build setup for performance and stability, reducing our build time from over an hour to under 10 minutes. Back to testing heaven!

Wrap Up

Today, our commands to merge a code review into development will automatically reject a change that hasn't yet passed CI. Of course, our deploys are all dependent on successful CI status linked to the commits being pushed to production. We've left the days of uneasy deploys behind and are confident that we're covered on any change we make - a feeling that we are committed to preserving.

Testing will always be integral to how we build new features at ZenPayroll and will continue to help us build resilient software at a fast pace, but it is just one of the many layers that help us produce high quality code. We use testing as a core piece of a development process that includes code reviews, occasional pairing, keeping our data clean, and smoke testing features on staging, in order to produce technology that we can be proud of.

If testing automation and developing software in this manner is important to you too, we would love to hear from you!

Keep on testing, folks.