TDD has been around for quite a while. I have practiced it myself, and have also heard several stories from different projects – ranging from complex backend telco products to front-end mobile applications and portals.
This post examines two divergent views on TDD.
Before reading further, it is important to keep this thought at the back of our minds –
“Thumb Rules, Textbooks and Rulebooks don’t lead to mature software”
There are two sections for discussion –
A. The “TDD Devil’s Advocate”.
B. TDD – a boon for developers!
PART – A: The TDD Devil’s Advocate
Most bugs in complex industrial software are due to unclear or incomplete requirements. On many occasions, requirements also change rapidly with changing business needs – which demands more “agility” (pun intended!) in the way we approach our development projects.
Course correction may be required without “fragmenting” the software design and without degrading the performance of software.
Hence, when we employ TDD cycles and raise a “bug”, the fix to this bug may actually digress from the original business goal if the requirements are hazy. More often than not, bug fixes introduce unwanted functionality in addition to the patch, which may have performance or logical correctness implications.
Our destiny is always in the hands of the junior-most coder on the team who is “fixing the issue”.
Therefore, there is always a need for continuous “technical housekeeping” to ensure that developers stay on the right path – and do not go down the long road to oblivion by repeating the TDD cycle of “Test-Fix-Repeat”!
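For readers unfamiliar with the mechanics, the “Test-Fix-Repeat” cycle is simple to sketch. A minimal red-green example in Python (the `apply_discount` function and the test are hypothetical, purely for illustration):

```python
# Step 1 ("red"): write the test first, against code that does not
# yet exist or is only a stub, and watch it fail.
def test_ten_percent_off():
    assert abs(apply_discount(200.0, 10.0) - 180.0) < 1e-9

# Step 2 ("green"): write the minimal code that makes the test pass.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Step 3 ("repeat"): run the test, refactor, add the next failing test.
test_ten_percent_off()
```

The cycle itself is harmless; the pitfalls described above arise when step 2 is done against hazy requirements.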
This is one practical challenge in practicing TDD, especially in large, multi-faceted software projects involving many teams – and the egos of multiple team leads!
Most developers are averse to testing, so we need to teach our developers to think in terms of TDD and help them design their unit tests. This takes a lot of time, as not every developer has the mindset for writing “complete” unit tests – tests that cannot pass without meaningful assertions against the code.
As a result, good test cases “on paper” seldom translate into software unit tests of the same quality.
Unit test case code reviews hence become a “hidden task” which takes up a lot of time.
As requirements change (which happens often), so does the overall “test strategy” and the nature of TDD unit tests.
This requires constant change and course correction in the unit test suite, and a portion of the effort already spent in TDD cycles is rendered redundant, as many of the bugs that were raised may no longer be valid.
Over a period of time, and many TDD iterations later, projects find themselves in a situation where the developer energy and effort spent writing xUnit tests in their CI environment exceeds the time spent on the actual software product. This effort multiplies as more features pour in and the product evolves.
To add to this, Agile practitioners who execute TDD in an organisation may not be domain experts and may not understand the original system design requirements.
For example, if we have to build a complex telco product, the designers and developers are experts in the 3GPP standards text (the requirements) and the related telco protocols, and know how to scale the system to meet the real-time, latency-sensitive needs of carrier-grade systems.
These aspects are critical to deliver a product that would be competitive. In such a scenario, TDD adds very little value, as most of the requirements are pre-known due to being part of the standards, and the nature of “bugs” and the “correct fixes” can only be understood by experts.
Another example may be of a complex IT system consisting of multiple COTS products requiring systems integration to meet our business goals. This would require COTS systems knowledge as well as business process design. TDD can add some value here in assuring the correctness of business processes to provide the intended results.
Having said that, in order to “fix the business process correctly” – the jeopardy management capabilities of the COTS systems need to be understood in depth. Otherwise a “wrong fix” may be more expensive than not fixing it at all!
Another major pitfall with TDD is the fragmentation of the software design and the mutilation of “good code”.
Many developers who work neck deep in the enthusiasm of “fixing-testing-fixing again” – may actually hurt the overall quality of the product.
The addition of “band aid” code, lots of boilerplate, “quick fixes here and there” and bad programming practices creep into the code over multiple TDD cycles.
Code reviews also become a mundane and “routine” task, as most developers adopt a mindset of “iterative development” and think that they can keep fixing bugs later once the test cycle is over.
The seriousness of “getting it right the first time” gets diluted as an outcome.
This harms the overall code quality in the long run, and may need a major “undo” effort once the software does not perform when scaled!
From a systems testing perspective, the formal boundaries between functional testing and regression testing “blur” to a large extent. Test architects need to decide the correct “release” to use for a full cycle functional test, and a regression test. As the code base keeps changing rapidly – systems testing becomes less effective and more time consuming (despite automation, as the tool automation scripts also need to evolve with rapid changes due to TDD).
Even as TDD may give a sense of security by “fixing” bugs early and delivering “code that works”, the final outcome may not be in line with the original design goals, and the intricate details of software performance may get compromised – as TDD does not pay much heed to load testing and HA failover/failback testing (non-functional hardening).
Much has been discussed about employing TDD for performance testing, however it is easier said than done.
Performance testing of software is an art and not just “automation”.
The test strategy and design of performance tests need to be as close to the production scenarios as possible.
There is little or no value in executing performance tests on “little pieces” of functionality – as in the real world – the whole product has to function at scale together and provide 100% of the functionality.
If you happen to run into bottlenecks, it becomes almost impossible to trace back which bunch of “little changes” of the TDD process caused the degradation. Detailed profiling methods need to be employed as a result.
Almost every code optimisation and performance fix in the code also has to go through an updated execution to the TDD unit test suite to keep them “current” – which can be a lengthy process.
In conclusion – TDD should be practiced in moderation and in the right way, just like all good things! Some of the key aspects we should keep in mind are as follows –
a. There is no substitute for good design. No matter which agile methodology we follow, software design has to be excellent and all-encompassing, with carefully thought-out use cases. Just as we cannot build a 120-floor building without a solid foundation – no matter what construction “processes”, “architects” and raw materials we use!
b. Good design “on paper” has little value if it cannot be converted into practice. Hence, the rigour of software implementation and “getting it right” should never get diluted due to “processes”, “iterative development” and “agility”.
c. Test Driven Development should be used as a means to audit and assure these objectives, and should not be confused as a “sole means” of developing products. TDD should not be allowed to compromise on performance and human discipline and it should not introduce “band aid code”.
d. It is always desirable to fix exceptions, but exceptions should not become the “rule”!
Having said that, this brings us to Part-B.
PART – B : TDD – A Boon for Developers !
As Donald Trump would put it – “TDD is great, it’s fantastic… it really is!!”.
There are projects where TDD can be valuable.
Take the example of an Android and iOS application.
User interface design and user experience design is a creative process.
In many projects, software developers do not fully understand and appreciate the essence of UI/UX design objectives.
There is a constant need to “bridge the gap” between the UI/UX objectives and the outcome in the form of a mobile application.
This holds equally true for client-side responsive web development using HTML5, AngularJS and other technologies.
The iterations in correcting the code to meet design objectives can be taxing for developers and UX designers alike.
This is where Test Driven Development can be a great tool.
UI/UX designers can work closely with developers and use TDD to ensure that the interaction design, navigation, wireframe layouts, animation effects and responsive design are adhered to during the course of app development.
Changes in UI/UX can also go through TDD cycles to ensure that the creative objectives are not diluted.
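As a concrete sketch of what such a test might look like, consider pinning a wireframe’s responsive breakpoints in a unit test. Everything here is hypothetical – the breakpoint values and the `layout_for_width` function stand in for whatever the UX wireframes actually specify:

```python
# Hypothetical responsive-layout rule derived from the UX wireframes:
# below 600px use a single-column phone layout, below 1024px a
# two-column tablet layout, and a full desktop grid otherwise.
def layout_for_width(width_px: int) -> str:
    if width_px < 600:
        return "phone-single-column"
    if width_px < 1024:
        return "tablet-two-column"
    return "desktop-grid"

def test_breakpoints_match_wireframes():
    # The designer's breakpoints written down as assertions: if a
    # developer later nudges a threshold, this test fails immediately.
    assert layout_for_width(599) == "phone-single-column"
    assert layout_for_width(600) == "tablet-two-column"
    assert layout_for_width(1024) == "desktop-grid"

test_breakpoints_match_wireframes()
```

The test becomes a shared artifact: the designer owns the expected values, the developer owns the implementation, and a change on either side shows up as a red test rather than a surprise in the app store build.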
TDD can play a crucial role in this respect, and I have seen many projects employ it successfully for web development and app development through “Tangible Collaboration”.
I have come to understand that using the right “processes” for the right job is such an important aspect.
We should not come under pressure to adopt “cool stuff” without fully thinking through the “context” of our projects and the skill set of the organisation.
I hope to hear rich experiences of TDD and Agile from everyone.