Thinking about it, the team I work in has quite a lot of 'security checkpoints' for our code. We try to put as much code as possible under unit test. However, we're often hindered by an old framework that can best be described as a sort of 'CSLA meets Active Record'. Where unit tests are impossible, we write fit tests instead (using FitSharp).
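For those who haven't seen a fit test before, here's a minimal sketch of what a FitSharp column fixture can look like. The DiscountCalculator class and its business rule are made up for illustration; they're not our actual code:

```csharp
using fit;

// Hypothetical domain class, standing in for the kind of code
// our old framework makes hard to unit test directly.
public class DiscountCalculator
{
    public decimal GetDiscount(decimal orderTotal)
    {
        // Assumed business rule: 5% discount on orders of 1000 or more.
        return orderTotal >= 1000m ? orderTotal * 0.05m : 0m;
    }
}

// FitSharp column fixture: for each row of the Fit table, the framework
// assigns the public field OrderTotal, then checks the value returned
// by the public method Discount().
public class CalculateDiscount : ColumnFixture
{
    public decimal OrderTotal;

    public decimal Discount()
    {
        return new DiscountCalculator().GetDiscount(OrderTotal);
    }
}
```

The matching test is just an HTML table: a header row naming the fixture (CalculateDiscount), a row with the column names (OrderTotal and Discount()), and one row per test case. FitSharp fills in the field, calls the method, and colors each result cell green or red, which makes these tests readable for non-developers too.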
We develop using Scrum, and after you've finished a task, you can stick its post-it in the '2nd opinion' column. This isn't required, but if you're not sure about something, you can have a team member look at the code you've written.
For every story, there's a paper hanging on our wall with a description of the story, the Definition of Done, and the list of acceptance criteria. When the story is finished (no tasks remain), any developer can take this paper and test the entire story: they check all acceptance criteria, but also do some exploratory testing, play with the new features, act as a normal user, try some exceptional cases, and so on.
At the end of each sprint, all stories are demonstrated to the product owner and the entire team in the sprint review. Here again, the acceptance criteria are checked.
Before we do a release, we deploy our application (a .NET web application) to a local server, and all stories are tested again, plus some default test scenarios: critical functionality that always has to work for our users. This manual testing is done by the product owner and by one or two developers.
This happens two weeks before the release to production. One week before release, we deploy to our customers' beta environments (for those customers who have one). This way, they can explore the new functionality and report bugs.
Finally, we release to production. If a bug is found anywhere along the way, it is fixed, of course. So you'd think this is a fairly tight strategy, wouldn't you? And yet, bugs still get through. Granted, it's been a while since we've had to deploy a critical bugfix to production, and we usually catch bugs in our week of testing, but even then you'd think they would have been noticed earlier.
This strategy seems like a lot of work, but it has saved us a lot of time. In the past, we've had weeks in which we had to deploy up to eight critical bugfixes, some of which were themselves incomplete and had to be patched by the next bugfix. That creates a fair amount of stress, looks unprofessional to users, and wastes your time as a developer. So catching and fixing bugs beforehand with this extensive strategy actually saves us time and lets us concentrate on developing new, requested features.
To quote the Google Testing Blog: “if it ain’t broke, you’re not testing hard enough.”