About 18 years ago I was hired to lead an Automation Development team for Lotus Notes and Domino. Near the end of the 1990s, Lotus Notes and Domino had become so complex that a single code change in an obscure place could negatively impact any of numerous APIs or applications. The directive of the automation development team was simple – provide an automation framework for API and regression testing. At the time, “test-driven development” was not yet widespread and developers hated writing unit tests. So the automation team had a single mandate: automate testing of the 20 million lines of code across 7 platforms after each build, every single day. The task was daunting, but we saw an immediate improvement in quality in the first year alone, winning us an IBM Corporate Award.
You may think having a dedicated team to automate testing is a bit ridiculous these days, with unit testing so pervasive in build frameworks, IDEs, and languages. The number of test frameworks is daunting, and you have to do a lot of studying to do it right. A primary reason the automation development team worked so well is that it had a mission. It was as if we were actually working against the developers in many cases – trying to prove their code wouldn’t work. So what did this create? A high level of quality, that’s what. The AD team was reviewed on how many test cases it wrote and ran and, most of all, on how many bugs the automation framework found. There were other measurements too, like how fast the tests ran and how many platforms they ran on.

The AD team not only automated the running of tests on 7 platforms, it also fully integrated into the build system for each platform. So when the build ran each and every night at midnight, the tests automatically kicked off against that build and the results were posted to an internal web dashboard for management. Any manager could see how the tests for their area ran daily. If you checked code into the main branch the previous day, you had to be available at 7 a.m. to possibly fix a failed test or, even worse, a failed build. The latter resulted in a call to the build room over the loudspeaker – and you could hear chuckles and laughter from the background cubicles and offices as you strolled down to the build room to fix your code. You were sure to be greeted by Kar Chung or John Paganetti as you entered the build room, and it was even worse if Eisen or Holden were there.
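To make the dashboard idea concrete, here is a minimal sketch of the kind of per-area results roll-up such a nightly run might feed to management. The area names, owners, and data shape are hypothetical illustrations, not the actual Lotus system:

```python
# Minimal sketch of a nightly test-results roll-up for a management dashboard.
# Area names, owners, and the data shape are hypothetical illustrations.

def summarize(results):
    """Render per-area nightly results as plain-text dashboard lines.

    `results` maps an area name to (passed, failed, owner).
    Areas with failures are flagged so the owner knows to be in by 7 a.m.
    """
    lines = []
    for area, (passed, failed, owner) in sorted(results.items()):
        status = "PASS" if failed == 0 else f"FAIL ({failed} failing) -> page {owner}"
        lines.append(f"{area:<12} {passed + failed:>4} tests  {status}")
    return "\n".join(lines)


# Example usage with made-up areas and owners:
nightly = {"mail": (412, 0, "alice"), "replication": (198, 3, "bob")}
print(summarize(nightly))
```

The point is less the formatting than the accountability: every area gets a daily, visible pass/fail line tied to a person.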
So where am I going with this?
I have come to realize how brilliant the idea of a centralized automation team actually is. With the complexity of build frameworks and now containerization, you need experts in this space. You need teams dedicated to making this better, faster, and cheaper – continuously. If you leave it up to the developers, they will never create tests or focus on enhancing the automation as much as a team being evaluated for it. You will get half-assed testing and half-assed automation. That’s not a dig at the developers; they are simply focused on product features and functions, and many actually skip the quality or scalability of their code in the first cut, pushing that off to a future sprint – if they aren’t bogged down with other deliverables. I have seen many teams step back and use one or more sprints to focus only on automation and testing – that is another way of handling this, but I still don’t think the innovation and quality are as good as those of someone who lives and breathes this daily and gets rated on it.
In today’s world of containerization, there really is no reason why you can’t have code checked in, built, deployed to a test environment, and then have a full regression suite run against that code every single day. The automation could even deploy fully configured containers for third parties and partner developers every single day, posted to a shared, authenticated area just like open source projects do. Meaning, partners get access to early “unstable” builds for their own innovation or product add-ons.
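As a rough sketch, the daily check-in-to-regression loop could be driven by something like the following. The image names, stage commands, and `run_regression_suite.sh` script are assumptions for illustration, not any particular CI product’s API:

```python
import subprocess

# Hypothetical stage commands for a daily containerized pipeline; in a real
# setup these would come from your CI system's configuration.
STAGES = [
    ("build",  "docker build -t product:nightly ."),
    ("deploy", "docker run -d --rm --name test-env product:nightly"),
    ("test",   "docker exec test-env ./run_regression_suite.sh"),
]

def run_pipeline(runner=None):
    """Run each stage in order, stopping at the first failure.

    `runner` takes a command string and returns an exit code (0 = success);
    injecting a stub lets the sequencing logic be tested without Docker.
    """
    if runner is None:
        runner = lambda cmd: subprocess.call(cmd, shell=True)
    results = {}
    for name, cmd in STAGES:
        results[name] = (runner(cmd) == 0)
        if not results[name]:
            break  # a failed build or deploy makes later stages meaningless
    return results
```

A real pipeline adds artifact publishing (the shared, authenticated area mentioned above) as a final stage gated on the regression run passing.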
You may be catching on that what I am describing is actually “DevOps”, and it is.
There are many frameworks out there that bring your development team up to par with the industry in the DevOps space – like the IBM Cloud, where you can define your entire pipeline from development to QA to production, all fully automated. The problem I see is that you really have to be an expert in DevOps (and the platform tools) to get it right; again, the need for someone to “own automation”. The book “The DevOps Adoption Playbook”, by Sanjeev Sharma, is an excellent resource for defining your developer operations plan. It has concrete examples for everything from developing a business case to sell the DevOps idea, to adopting DevOps, to driving innovation with DevOps. It also goes pretty deep into test service virtualization, where you can deploy test environments right from the main pipeline, as I described above.
I have seen so many development models come and go over my years as a developer, and one thing is consistent: you have to be accountable for your work. If your job is to produce cool new products, then most likely development operations, toolchains, and pipelines are secondary. That only leads to poorer-quality products and ultimately more resources spent to ship a quality product. Automate everything.