Penny the scientist pig, holding a bubbling test tube

There is an old joke:

A QA engineer walks into a bar. They order a beer. They order zero beers. They order 999999999999 beers. They order -20 beers. They order a lizard. They order an asiogjwer.

Security engineers might do similarly absurd things: shout orders through a megaphone so no other orders can be heard, try to convince the bar staff that they own the bar and therefore don’t need to pay for drinks, or impersonate other patrons to place orders on their tabs.

QA engineers are adept at identifying and testing assumptions in software. They find edge cases, unexpected interactions between components, and search for undefined or underspecified behavior in systems. Security engineers are similarly inclined, though they may focus on more specific angles of attack.

As a software engineer, knowing how to test the code you maintain is now a standard and expected part of your job — many companies even rigorously test this skill during their technical interview process. This was not always the case, but the industry shifted in the 2000s to expect this core competency. Here, I will argue that the same transition is coming for security matters, and that this is a good thing for your code and your career.

Specialization is inevitable

Security engineers often lament the inability to get their fellow software engineers to take security matters in their code seriously, or even show any interest in learning about the most common mistakes and how to avoid them. Other engineers retort that it is the responsibility of security engineers to package up our best practices into tools and libraries that prevent bad behavior or catch mistakes, while providing actionable output that explains the issue and resolution. They are not wrong, but I believe we have to meet in the middle.

This specialization of software engineers should not be surprising, given the growing complexity and scope of the problems we solve. There are enough things to learn within our chosen sub-disciplines for a lifetime, such that even the idea of diving into the complexity of other disciplines can be exhausting. For this reason, backend engineers are often resistant to learning about frontend matters, users of static languages won’t even consider the merits of dynamic languages, and so on. We often pick our specialty at some point early in our careers, dedicate ourselves to becoming experts in that area, and declare everything else a problem for some other poor soul.

Some hiring committees still resist this, adamant in their search for “full stack” engineers who can be dropped into any project and be immediately effective — the commandos of software. These people do exist to some extent, though they are rare, expensive, and cannot possibly know everything — the space is simply too big now for one person to have a pervasive, deep knowledge.

But testing is fundamental

How can I hold the opinion that specialization is natural and expected, while simultaneously believing that all engineers should have at least some knowledge of security matters? Why is security special?

Some skills and associated knowledge in software engineering transcend the boundaries of our sub-disciplines. Testing is one such skill. Code is very abstract and difficult to visualize, so we must develop analytic coping strategies to identify and test edge cases. We articulate assumptions, adopt hypotheses, and formulate experiments to test those hypotheses — in other words, the scientific method. We may not often think about our work in this way, but I believe this is an essential aspect of building dependable code.

This approach is most obvious to us when we are debugging unexpected behavior: we must become increasingly precise about our expectations and about how the reality of the system differs from them. Along the way, we may need to challenge some deeply held assumptions about how our systems work (have you ever found yourself questioning basic arithmetic?). We often end the process with a eureka moment and a statement like “well, of course that was broken, because of …”, and implement the fix with some chagrin. We may even blame ourselves for not recognizing, back when we originally wrote the code, that the issue would arise. Really, though, it is a happy accident of evolution that we are able to do any of this at all. We have no natural, physical intuition for what we do.

What many of us miss when testing new code is looking beyond our ideal circumstances. We focus on building and testing the “happy path”, and we are often comfortable saying that the behavior of the code is undefined if some precondition is not met. Undefined behavior will page you at 3am and ruin your day.

By becoming better at testing and exploring the undefined, we write code that behaves more predictably in the future. It becomes routine to specify strong expectations and guarantees, both at the level of individual blocks of code and at the level of service contracts, and we learn the utility of programming defensively.
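As a minimal sketch of what that looks like in practice (the function, its limits, and the use of Python with pytest are my own choices for illustration), we can channel the QA engineer from the joke and order zero beers, a lizard, and an asiogjwer:

```python
# A minimal sketch: a function that states exactly what it accepts,
# and a test that probes everything it should reject.
# The names and limits here are illustrative, not from any real system.
import pytest


def parse_order_quantity(raw):
    """Parse a drink order quantity, accepting only whole numbers from 1 to 100."""
    try:
        quantity = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if not 1 <= quantity <= 100:
        raise ValueError(f"quantity out of range: {quantity}")
    return quantity


def test_rejects_everything_but_the_happy_path():
    assert parse_order_quantity("2") == 2  # the happy path
    for bad_order in ["0", "-20", "999999999999", "lizard", "asiogjwer", "", None]:
        with pytest.raises(ValueError):
            parse_order_quantity(bad_order)
```

The point is not this particular function but the habit: every input the test exercises is one fewer piece of undefined behavior waiting to page you at 3am.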

Test it, or someone else will

Many engineers find security matters intimidating, in part because they are adversarial. As engineers we work together to build something and, generally speaking, assume the best of our colleagues and their intentions. We incorrectly assume that malfunction is accidental rather than intentional. Paranoia is healthy, because they really are out to get you.

The most common security vulnerabilities exploit code that is overly permissive, that assumes good behavior. Buffer overflows, content injection, cross-site request forgery, and broken authorization all exploit the failure to specify and enforce constraints on input. So, we can improve the security of our code by taking more care to specify what it should do with invalid input. This is no different from what we would do to improve the robustness and reliability of our code.
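To make that concrete, here is a sketch using Python's standard sqlite3 module (the table and the queries are invented for illustration); the contrast between the two functions is the familiar one between splicing input into a query and binding it as a parameter:

```python
# A sketch of constraining input: treat it as data, never as part of the program.
# The table, rows, and function names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('penny', 'penny@example.com')")


def find_user_unsafe(name):
    # String interpolation lets the caller rewrite the query itself:
    # passing "' OR '1'='1" returns every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_safe(name):
    # A bound parameter is only ever a value; the shape of the query cannot change.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

The same discipline generalizes: templating engines that escape by default, ORMs that bind parameters, and forms that reject anything outside an allow-list are all ways of refusing to let input become instructions.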

It’s difficult to hold an accurate mental model of an entire system in your head, so we compensate by modeling and controlling our inputs in smaller, composable chunks. When we view the system composed from these small, defensible components, the checks may feel redundant. However, this defense-in-depth strategy helps prevent undefined behavior and security vulnerabilities in the aggregate system, including ones we may not even have thought possible.
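As a sketch of that redundancy (all of the names and data here are invented), imagine a request handler and a data-access function that each enforce the same ownership rule; either check alone would be enough on a good day, and together they mean a mistake in one layer does not become a vulnerability:

```python
# A sketch of defense in depth: two layers independently enforce access rules.
# All names and data here are invented for illustration.
from typing import Optional


class Document:
    def __init__(self, doc_id, owner_id, body):
        self.doc_id, self.owner_id, self.body = doc_id, owner_id, body


DOCUMENTS = {1: Document(doc_id=1, owner_id=42, body="quarterly numbers")}


def load_document(doc_id: int, requesting_user_id: int) -> Document:
    """Data layer: refuses to return a document the caller does not own."""
    doc = DOCUMENTS[doc_id]
    if doc.owner_id != requesting_user_id:
        raise PermissionError("not the owner")
    return doc


def handle_get_document(session_user_id: Optional[int], doc_id: int) -> str:
    """Request layer: checks authentication before touching the data layer."""
    if session_user_id is None:
        raise PermissionError("not logged in")
    # Even if a future handler forgets an ownership check, load_document still has one.
    return load_document(doc_id, session_user_id).body
```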

When testing was someone else’s problem

Back in the 1990s, it was still standard practice in many organizations to have separate programming and quality assurance (QA) teams. The classic waterfall model of software development separated design, programming, and testing into distinct phases, often conducted by different groups of people. Programmers would write code to fulfill some specification, and QA would test the code to find bugs relative to the spec.

The relationship between programmers and testers was often adversarial — the incentives provided to each team were quite different, and in direct conflict. The effectiveness of a QA team was often measured by how many bugs they found, while the effectiveness of a programming team was measured by how many features they could implement, ideally without bugs. This produced a tension where programmers would view QA as sadistic beings. QA would gleefully drop reams of paper on managers' desks, filled with bug reports for the latest release candidate. Fixing those bugs took up time that should have been spent working on new features, or pushed out release timelines, so naturally programmers would respond with “working as intended” or “will not fix” as much as possible.

Making matters worse, in the course of implementing new features and fixing bugs, regressions would often be introduced in seemingly unrelated parts of the system. So, every test cycle, QA would have to re-test everything, as they could not assume that things that worked before would still work.

Testing became (mostly) your problem

Test automation was essential to counter the growing labor requirements of QA teams. The automation tools available to QA, working from outside the black box of compiled code, typically operated at the “reproduce user behavior” level — recording scripts of mouse and keyboard interactions with a user interface. These tests were very brittle — simply moving UI elements around was often enough to break them.

The only way to produce less brittle tests was to write tests as code that interacted with the underlying APIs and data structures. This rapidly blurred the distinction between programming and QA. The different types of testing became clearer: unit tests, integration tests, and end-to-end tests, with unit tests often significantly cheaper to write, run, and maintain than end-to-end tests. Programmers were best placed to write unit tests, ideally alongside (or before!) the code being tested. Testing became the primary responsibility of software engineers, with support from QA specialists, “software engineers in test” (SETs).

SETs provide tools to make testing easier, particularly in novel contexts where such tools are initially lacking, e.g. testing in browsers, on mobile devices, and in data processing pipelines. They may also build static analysis tools or “safe by default” libraries to detect or eliminate common bug patterns: non-exhaustive conditional logic, mishandled time, deadlocks, memory leaks, and so on. Crucially, these tools are provided to allow generalist engineers to be effective in testing the software they write — something that is still primarily their responsibility, even if they have SETs to help.

Security will become (mostly) your problem

In the companies I have worked for over the last decade, responsibility for securing the software we write is at a similar point to where testing was in the early 1990s. Engineers are expected to take some basic training courses on security matters, and pinky-promise to be mindful of security, but the majority of the practical burden for ensuring the security of the product still falls on a dedicated security team, like the QA team before them.

Security teams still spend a significant amount of their time performing audits and speculative testing of the product for security vulnerabilities, or hire external vendors to perform yearly audits and penetration tests of the system. Vulnerability reports are still figuratively dumped on the desks of software engineers to fix. Because engineering teams have not yet accepted security as a core competency, fixing vulnerabilities and upgrading dependencies are often seen as distractions from their core duties, allowing known issues to fester and eventually be exploited.

Security and privacy are increasingly hot-button issues — consumers are becoming less forgiving of failures in these areas, and the exploitation of critical vulnerabilities is often front-page news. An exploited security vulnerability can destroy a company’s long-term prospects overnight. All too often, these vulnerabilities could have been avoided if security awareness and practice were a more integral part of the skill set of every software engineer.

Much like the QA teams of the past, security teams cannot adequately defend our ever-growing systems on their own, and they are chronically underfunded for such a broad mandate. At many organizations, the ratio of software engineers to security engineers is 20-to-1, and often much worse. We don’t enjoy sending you vulnerability reports with deadlines attached any more than you enjoy receiving them when your backlog is already overloaded. Something has to change.

Security engineering is already evolving into a supporting role — providing tools, libraries, and education to the broader engineering organization to prevent and detect vulnerabilities. Tools like Dependabot can be life-savers for quickly mitigating vulnerabilities that arise through our dependencies — your security team should certainly be advocating for such tools, if not also taking responsibility for configuring them for you across all your source repositories.

Your security team should be advocating for and configuring static analysis tools like Brakeman (see the OWASP-maintained list for other options), application firewalls like Cloudflare, and “safe by default” cryptography libraries like libsodium and Google Tink. Learning how to use, interpret and respond to the output of such tools should be the responsibility of engineers, with occasional consults from security specialists.
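As an illustration of what “safe by default” buys you, here is a minimal sketch using PyNaCl, a Python binding for libsodium (the message is invented and the key handling is deliberately simplified); the library chooses the cipher, generates the nonce, and authenticates the ciphertext, so there are very few ways to hold it wrong:

```python
# A minimal sketch of a "safe by default" cryptography API (PyNaCl / libsodium).
# Real key management (storage, rotation) is out of scope here.
from nacl.secret import SecretBox
from nacl.utils import random

key = random(SecretBox.KEY_SIZE)              # 32 random bytes
box = SecretBox(key)

ciphertext = box.encrypt(b"penny's bar tab")  # a fresh nonce is generated and prepended for you
plaintext = box.decrypt(ciphertext)           # tampering raises an exception rather than returning garbage
assert plaintext == b"penny's bar tab"
```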

We’re all in this together

The bad guys are out there, looking for a way to profit from your mistakes. This is just the reality of modern software engineering — we have a responsibility to each other and our customers to be as prepared as we can be in the face of increasingly aggressive and organized adversaries.

If nothing else, ensure your organization isn’t the slowest runner in the pack — keep your dependencies up to date, subscribe to a security newsletter (like tl;dr sec) to stay on top of developments in the field, and use static analysis tools to find the most common issues. Read the OWASP Top Ten. Set up a bug bounty program. Encourage your engineers to proactively look for security issues.

If you are an engineer and your organization has a security team, work with them proactively — do what research you can on likely vulnerabilities in the code you own, and reach out for consults when you are unsure. Good security teams will help you think about risk, and help focus attention on the most security critical interactions in your services. You’ll likely learn some stuff along the way that will make you a better engineer, and maybe, just maybe, you’ll enjoy it! I can but dream.