Wednesday, April 1, 2015

Can TDD be not so good for MES Customization?

Test Driven Development (TDD) has attracted a lot of support and affection. I don't need to write to show how good it is. It has so many fans, and sometimes the fans sound a bit like religious people. To them, TDD is the best for every project type, any problem in automated testing goes back to "bad execution" of TDD, and nothing can ever be wrong with TDD itself. If you cast any doubt on the holiness of TDD, they call you an "unbeliever" and label you as someone who does not care about quality at all. Also like a religion, they have unquestionable assumptions and promises that sound too good to be true (the eternal heaven of no bugs).
Before starting, I'd like to make it clear that I am an advocate of smart automated testing. This writing focuses only on blind automated testing in .Net, using test, mocking, expectation and continuous-build frameworks to chase 100% coverage in small teams.
The criticisms here are not universal and do not apply to every project. In MES projects we do small, once-off coding projects with moderate mission criticality, and these criticisms apply mostly to that type of project. I am not talking about a transaction processing system for a bank or stock exchange, or a health records system.
Now let's start:

A. Automated Tests Are Untested Code

Automated tests can get quite complex, and just like any other code they need testing themselves before being trusted. Should we write unit tests that test unit tests? I've heard some say "a code without an automated unit test is like a code that is not written yet". Does that mean the unit tests themselves are considered not written? Going back to the religion analogy: if everything needs to be "created", who created the "creator"? TDD fans tend to imply that tests are bug-free and that their bug-freeness flows down into the code. Or are we just sweeping the problem somewhere else?
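To make this concrete, here is a hypothetical sketch (invented names, NUnit shown) of a test that is itself buggy: it stays green no matter how wrong the code under test is.

using NUnit.Framework;

public class PricingService
{
    // Intentionally wrong: a 10% discount should multiply by 0.9, not divide.
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price / (1 - rate);
    }
}

[TestFixture]
public class PricingServiceTests
{
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPrice()
    {
        var sut = new PricingService();

        // The bug is in the test: "expected" is computed by calling the very
        // code under test, so the assertion is a tautology. It stays green
        // even though ApplyDiscount is broken.
        var expected = sut.ApplyDiscount(100m, 0.10m);
        Assert.AreEqual(expected, sut.ApplyDiscount(100m, 0.10m));
    }
}

Coverage tools count this as a tested line; only a human reviewing the test itself can tell it proves nothing.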

B. Automated Tests Take Time to Write:

Hello-World examples of test/mock/validation frameworks show how easy testing is on over-simplified code. In reality it is often very difficult to write a true test, and even more difficult if you insist on using these frameworks. Unless you are already in heaven, you have limited project time, and the time you spend on automated tests competes with the time you could spend on manual tests. The first defence is that the automated test takes zero time to execute, so over time you actually save time. But that assumes the test is going to run many times, while in reality a change in requirements or design, or a dozen other things, can kill a test before it runs even once against a release version.
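As an illustration, here is a sketch of what a "real" test often looks like once mocking enters; all of the MES-flavoured types below are invented for the example (Moq and NUnit shown).

using Moq;
using NUnit.Framework;

// Hypothetical types, compacted for the sketch.
public enum OrderStatus { Created, Released }
public class Order { public int Id { get; set; } public OrderStatus Status { get; set; } }
public interface IOrderRepository { Order GetById(int id); void Save(Order order); }
public interface IInventoryService { bool IsAvailable(Order order); }
public interface IAuditLogger { void Log(string message); }

public class OrderReleaseService
{
    private readonly IOrderRepository _repo;
    private readonly IInventoryService _inventory;
    private readonly IAuditLogger _audit;

    public OrderReleaseService(IOrderRepository repo, IInventoryService inventory, IAuditLogger audit)
    {
        _repo = repo; _inventory = inventory; _audit = audit;
    }

    public void Release(int id)
    {
        var order = _repo.GetById(id);
        if (!_inventory.IsAvailable(order)) return;
        order.Status = OrderStatus.Released;
        _repo.Save(order);
        _audit.Log("Released order " + id);
    }
}

[TestFixture]
public class OrderReleaseServiceTests
{
    [Test]
    public void Release_WhenMaterialAvailable_SetsStatusToReleased()
    {
        // Three mocks and two setups just to get a few lines of logic under test.
        var repo = new Mock<IOrderRepository>();
        var inventory = new Mock<IInventoryService>();
        var audit = new Mock<IAuditLogger>();

        var order = new Order { Id = 42, Status = OrderStatus.Created };
        repo.Setup(r => r.GetById(42)).Returns(order);
        inventory.Setup(i => i.IsAvailable(order)).Returns(true);

        var sut = new OrderReleaseService(repo.Object, inventory.Object, audit.Object);
        sut.Release(42);

        Assert.AreEqual(OrderStatus.Released, order.Status);
        repo.Verify(r => r.Save(order), Times.Once());
    }
}

The test scaffolding is longer than the logic it exercises, and every Setup and Verify line dies with the first change to these interfaces.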

C. Automated Tests Worsen Encapsulation:

The design in TDD usually has to serve a double purpose: functionality and maximum testability. You can end up losing encapsulation to make code testable. There are ways to avoid this, but they don't come free. Unless .Net and its languages one day change to support TDD so that you can test a method without making it visible to normal code, this problem will exist. A symptom of this is having interfaces on objects for no reason other than making tests easier to write.
Again, I know there are ways, techniques, platforms, etc. to alleviate this, but in reality many devs sacrifice encapsulation to finish tasks on time. Unmeasurable encapsulation is sold in favour of measurable coverage.
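A typical symptom, sketched below with invented names: an interface with a single implementation whose only consumer is the test project, plus the usual InternalsVisibleTo escape hatch.

using System;

// This interface has exactly one implementation and no consumer other than
// the test project; it exists purely so a mock can be substituted.
public interface IBatchNumberGenerator
{
    string Next();
}

public class BatchNumberGenerator : IBatchNumberGenerator
{
    // Was a private helper; promoted to a public interface member
    // only for testability.
    public string Next()
    {
        return DateTime.UtcNow.ToString("yyyyMMddHHmmss");
    }
}

// The other common workaround: open your internals to the test assembly.
// [assembly: InternalsVisibleTo("MesCustomization.Tests")]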

D. Automated Tests Need DI:

Do you think DI (Dependency Injection) or IoC (Inversion of Control) is always a good thing, regardless of automated testing? Not always. DI has long been frowned upon by some, because its use can often be summarized as a bucket (container) of objects, or of object makers (factories), that have to abide by certain rules so that the DI framework creates them correctly. More often than you think, this "bucket" just becomes a badly implemented Context object and acts as an anti-pattern. The DI framework adds complexity, becomes the core of your solution, and affects (ruins?) your object-creation design patterns. Bugs in object creation, an important aspect of an application, become more difficult to trace and debug, for a very simple reason: the code you write for object creation is no longer actually creating the object. You get the exception in a misleading location.
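A minimal sketch of that misleading location, with a stripped-down hypothetical service and Microsoft.Extensions.DependencyInjection used purely for illustration (any container behaves similarly):

using Microsoft.Extensions.DependencyInjection;

public interface IInventoryService { }

public class OrderReleaseService
{
    public OrderReleaseService(IInventoryService inventory) { }
}

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<OrderReleaseService>();
        // Oops: IInventoryService, a constructor dependency, was never registered.

        var provider = services.BuildServiceProvider();

        // Throws InvalidOperationException here, at resolve time, inside the
        // container, far from any line that looks like
        // "new OrderReleaseService(...)".
        var service = provider.GetRequiredService<OrderReleaseService>();
    }
}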

E. Automated Tests Kill Agility:

Writing tests that go red first, then making them green by implementing, means one thing: you are investing (by writing test code) in a design that may need to change once you try to implement or showcase it. Implementing a design casts light on issues in the design. Showcasing an implementation casts light on issues in the understanding of the requirements, and subsequently on the design and implementation. The idea behind being Agile is to shorten the lifecycle so that we discover these issues earlier. Writing a full test often requires the definition of all interfaces, which locks down the design: any change in design means changing all the interfaces and automated tests.
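For example (all names hypothetical), a red-first test pins down an interface before the implementation has had a chance to push back on the design:

using NUnit.Framework;

public class Recipe { }

public interface IRecipeDownloader
{
    Recipe Download(string equipmentId);
}

public class RecipeDownloader : IRecipeDownloader
{
    // First stub: just enough to compile; the test below is red.
    public Recipe Download(string equipmentId) { return null; }
}

[TestFixture]
public class RecipeDownloaderTests
{
    [Test]
    public void Download_KnownEquipment_ReturnsRecipe()
    {
        IRecipeDownloader sut = new RecipeDownloader();
        Assert.IsNotNull(sut.Download("MIXER-01"));
    }
}

// Implementation later reveals the download must be asynchronous and needs
// a recipe revision number. The locked-down signature has to become
//
//     Task<Recipe> DownloadAsync(string equipmentId, int revision);
//
// and every test that touched IRecipeDownloader changes with it.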

F. Automated Tests Mean More Moving Parts:

Signing up for TDD usually requires using more libraries, which exposes the project to bugs in those libraries. In enterprise environments, their use might not actually be allowed if their support cannot be guaranteed. And once we limit ourselves to the white list, we start having problems writing tests quickly, easily, or without breaking the design principles.

G. It Gets Too Much Attention Because It Is Quantitative

TDD fanatics don't claim that the number of automated tests and the percentage of code coverage are the only important code quality merits, but in reality some of them put far too much emphasis on them. Measurable things get more attention than qualitative merits. This is a common human error. As an example, unless we are pros, when buying a camera we tend to put too much weight on its resolution, because it is a number, while the quality of the lens is hard to quantify and gets less attention.
At the end of the day your code is used by a user, who is a human, or by another developer if it is a library. Can we write a test that measures how human-readable an exception is? Too much TDD focus can cause a lack of attention to:
  1. Graceful error handling and display
  2. UI layout and user input validation
  3. Code design (lower-level design), code comments and readability
  4. Variable, method and class naming and organization

In some cases, TDD advocates end up with hundreds of tests checking the business logic while, due to a layout issue, the button that invokes it is not even visible to the user.
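Sketched with invented names: the logic below is fully covered and green, while nothing notices that the user can never reach it.

using NUnit.Framework;

public class BatchConfirmationViewModel
{
    public bool IsConfirmed { get; private set; }

    public void ConfirmBatch(string batchId)
    {
        // Business rule fully covered by the test below.
        if (!string.IsNullOrEmpty(batchId)) IsConfirmed = true;
    }
}

[TestFixture]
public class BatchConfirmationViewModelTests
{
    [Test]
    public void ConfirmBatch_ValidBatchId_MarksBatchConfirmed()
    {
        var vm = new BatchConfirmationViewModel();
        vm.ConfirmBatch("B-1001");
        Assert.IsTrue(vm.IsConfirmed);   // green, along with its 99 siblings
    }
}

// Meanwhile, in the form designer, left over from a layout change:
//     confirmButton.Visible = false;
// No unit test notices that the user can never trigger ConfirmBatch.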

H. Time/Resource Management Hazard

With test development put first, devs, seeing the deadline far away, spend too much time writing automated tests at the beginning of their tasks. Apart from delaying the first working version, this causes other issues. Development tasks or subprojects can roughly be divided into two time periods, or, as the joke goes, the first 90% and the second 90%. We as developers simply suffer from under-estimation: in the first 90% we try to do things the right way, and in the second we rush to finish the work.
I call the first half the precious half, where you fix something just because you want your work to be perfect. In the second half you fix only if your tester complains, the code does not compile, or your automated test is failing. You may even comment out an automated test in the second half!

The quality of code in the first half is much higher than in the second. Too much TDD ends up with beautifully crafted tests testing terrible code, and until the last day the developer never puts himself in the user's shoes to deliver something that he, as a user, would enjoy.

Final Words

Again, this is not about all project types, team sizes, and development environments. It is specific to MES customization.

