I was in a discussion the other day about automation, which in itself is an extremely broad term, but we were talking about quality assurance and the importance of having some kind of automated process once the “product” was ready, whether that be spreadsheets, reports, commercial products, applications, system builds, or anything else. At some level you should have automation to verify, at least at a high level, that the product is consumable.
Test-driven development and agile development have played a huge role in the quality of software over the past ten years, and it shocks me that some companies are simply not doing this today.
My side of the discussion was about the work I did when I joined Iris Associates – which became Lotus, then ultimately IBM Collaboration Solutions. While my experience was with software quality, the story under discussion was data integrity in spreadsheets. We had several layers of automation following a build. Once the build completed (on five different platforms), a series of automated tests was run for build verification. If those passed, some manual tests were run. We then had what we called “automated system tests”, also known as DNA (Domino Notes Automation). These tests were much more functional in nature, not at the single-API level, so you can imagine such a test exercising many (if not hundreds or thousands of) APIs to complete what would normally be done by a human. I remember talking to a QA person about automating Event Admin (a Domino Server feature); he said the hundred or so tests run by DNA would take three months to do manually – and DNA did it in about 20 minutes, every day, on every build, on every platform. At the end of the day, and I am not sure of the exact number, there were literally tens of thousands of tests run every single day on almost every supported platform, for both client and server. The results of the tests were of course published to a Lotus Notes database, and notification emails were generated.
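The flow described above – run a suite of named checks against a fresh build, treat a crash as a failure, and report an overall verdict – can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual DNA harness; the check names and the `BuildVerification` class are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Hypothetical sketch of a DNA-style verification run: each named check
// stands in for a functional test that would otherwise be done by hand.
public class BuildVerification {
    public static Map<String, Boolean> run(Map<String, BooleanSupplier> checks) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (Map.Entry<String, BooleanSupplier> e : checks.entrySet()) {
            boolean passed;
            try {
                passed = e.getValue().getAsBoolean();
            } catch (RuntimeException ex) {
                passed = false; // a crashing check counts as a failure
            }
            results.put(e.getKey(), passed);
        }
        return results;
    }

    public static void main(String[] args) {
        // Placeholder checks; a real suite would start servers, route mail, etc.
        Map<String, BooleanSupplier> checks = new LinkedHashMap<>();
        checks.put("server starts", () -> true);
        checks.put("client connects", () -> true);
        checks.put("mail routes", () -> 1 + 1 == 2);
        Map<String, Boolean> results = run(checks);
        long failures = results.values().stream().filter(p -> !p).count();
        System.out.println(failures == 0 ? "BUILD VERIFIED" : failures + " FAILURES");
        // prints "BUILD VERIFIED"
    }
}
```

In the real system the results map would be written to a Notes database and failures would trigger notification mail, rather than just printing to the console.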
Why is this important?
So on any given day you could see exactly what, and who, broke the build. This was a huge leap forward in quality because you no longer introduced a problem into the build only to find out two months later during feature testing, or even worse, after the code shipped. Resolution was often fast because we could isolate the exact submitted code in the ClearCase source control system.
Having a “clean” build meant any developer could synchronize the latest binaries and start work on “the next thing”. This freed over 1,200 developers from being dependent on a single-build model. A single-build model means, say, we do builds on Fridays: if you wanted to start working on the latest code you would have to wait until Friday or use last Friday’s build. Over time this can kill a development cycle, especially if Friday’s build is bad. And remember, if you do builds once a week, that is 1,200 developers checking code in every day – unchecked. By the time Friday rolls around you may have merge conflicts, and even worse, run-time conflicts.
Today, much of this is done in the form of unit tests – using JUnit, or our custom internal unit test suite for C/C++ called DLLTest. Many members of my team held patents around this process and some of the small utilities we wrote to accomplish it. See below for the two patents I received for my automation work.
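The unit-test idea can be sketched without the JUnit library itself: each assertion is one small, repeatable check, and any failure marks the build as not clean. Everything here is invented for illustration – `StringUtilsTest` and its `reverse` method are a made-up function under test, not anything from the Notes/Domino codebase or from DLLTest.

```java
// Minimal hand-rolled unit test in the spirit of JUnit/DLLTest.
// StringUtilsTest.reverse is a hypothetical function under test.
public class StringUtilsTest {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    static void check(String name, boolean passed) {
        // A single failing case fails the whole run, so a broken
        // submission is caught on the next build, not months later.
        if (!passed) throw new AssertionError("FAILED: " + name);
    }

    public static void main(String[] args) {
        check("reverse of empty string", reverse("").equals(""));
        check("reverse of abc", reverse("abc").equals("cba"));
        check("reverse twice is identity", reverse(reverse("domino")).equals("domino"));
        System.out.println("all tests passed");
        // prints "all tests passed"
    }
}
```

A framework like JUnit adds discovery, reporting, and fixtures on top of this, but the core contract – fast, automatic, run on every build – is the same one described above.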
Patent References:

2006/0070,034 (2006) – System and method for creating and restoring a test environment
2005/0289,517 (2005) – System and methods for client and template validation