Post TDD

January 21, 2007

Is ‘Post Test Driven Development’ possible?

I think so. Let me explain.

Test Driven Development (TDD) is about writing the test code first, before writing the production code to pass the test. Another way to look at it is that TDD is about creating contracts between validation mechanisms & a software system. This is a departure from traditional iterative development, where you would write the production code first & then the test code to prove the production code (if you write automated developer tests at all).
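
To make the order concrete, here is a minimal sketch (the Calculator class & its test are hypothetical, not part of the example later in this post). The test is written first, fails because the production code does not yet exist, & only then is just enough production code written to make it pass:

void TestCalculatorAdd()
{
    Calculator CalcObject = new Calculator();

    if (CalcObject.Add(2, 3) == 5) { /*success*/ } else { /*failure*/ }
}

// written second, with just enough behaviour to satisfy the test above
public class Calculator
{
    int Add(int a, int b)
    {
        return a + b;
    }
}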

I believe that TDD should be on every developer’s tool belt because it forces a developer to apply good design principles (e.g. the Single Responsibility Principle & the Open-Closed Principle). If a developer does not apply these practices, then TDD becomes very difficult (which is why I believe so many people have trouble with it & assume that the TDD paradigm does not work).

TDD is easier when you code from scratch, but if you are like me, you sometimes work in a code base which has been around for some time; a code base which has acquired years of technical debt and now screams to be refactored. Rigidity, needless complexity, & fragility are all signs that the code base needs fixing.

So if you get the time, how would you fix it?

Since I am a fan of TDD, I wondered if TDD could be applied to an existing software system as a refactoring tool. TDD is all about guiding development towards a good design, so if the design of a software system needs correcting, perhaps TDD can help.

So how could TDD lead developers towards a good design within an existing software system? Look at the following trivial example:

public class MainApp {

    public static void main(String[] args)
    {
        SomeUtility UtilityObject = new SomeUtility();

        UtilityObject.DoSomething();
    }
}

public class DatabaseManager {

    boolean QueryDatabase()
    {
        boolean bDBResult = true; // simulated Database.Query() result

        return bDBResult;
    }
}

public class SomeUtility {

    private final DatabaseManager DBManager = new DatabaseManager();

    boolean DoSomething()
    {
        boolean bQuerySuccess = DBManager.QueryDatabase();

        return bQuerySuccess;
    }
}

This code works. It is easy to read & responsibilities are well separated. Even better, you could change the database to another vendor and not affect the SomeUtility class. If you wanted, you could ship this code and it would function.

If this were your code, how satisfied would you be with the design?

Let us pretend that you wanted to write an automated test to quicken your testing efforts. Could you test how SomeUtility reacts to a query failure? How could you modify QueryDatabase to trigger failure or success events? How would you write it and avoid adding code to support the test? These are big questions you would need to address to adequately test this class.

Let us assume for a moment that you had no knowledge of this code. If you started from scratch using TDD, you would write a test first before the production code:

void TestSomeUtility()
{

}

The next step is to create an instance of the class you want to test & invoke the method under test:

void TestSomeUtility()
{
    SomeUtility UtilityObject = new SomeUtility();

    boolean bResult = UtilityObject.DoSomething();

    if (bResult) { /*success*/ } else { /*failure*/ }
}

So far this is exactly what you would do with TDD without preexisting production code.

Without existing production code TDD would demand that we create a new SomeUtility class with a DoSomething method that returns a boolean. We already have the class & method so the work is done for us.

Now comes the fun stuff.

To successfully test this class you may be thinking that the answer lies with controlling what QueryDatabase() returns, and you would be right. We need a way to ensure that UtilityObject receives the boolean value of our choice from QueryDatabase, depending on which code path we want to test.

Now, DatabaseManager is tightly coupled to SomeUtility. This means we cannot use SomeUtility without DatabaseManager. Code that is tightly coupled is difficult to reuse & test.

The solution here is to apply ‘Inversion of Control (IoC)’ to abstract SomeUtility’s dependency on DatabaseManager.

After applying IoC we have:

public class SomeUtility
{
    private final DatabaseManager DBManager;

    SomeUtility(DatabaseManager newDBManager)
    {
        this.DBManager = newDBManager;
    }

    boolean DoSomething()
    {
        boolean bQuerySuccess = DBManager.QueryDatabase();

        return bQuerySuccess;
    }
}

Using IoC, we inject the DBManager into SomeUtility via its constructor. There are other ways of injecting, but this one requires the least code to demonstrate IoC.
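
For comparison, here is a rough sketch of setter injection (the SetDBManager method is hypothetical & not used in the rest of this example); the dependency is supplied through a method after construction instead of through the constructor:

public class SomeUtility
{
    // not final here, because the dependency is assigned after construction
    private DatabaseManager DBManager;

    void SetDBManager(DatabaseManager newDBManager)
    {
        this.DBManager = newDBManager;
    }

    boolean DoSomething()
    {
        boolean bQuerySuccess = DBManager.QueryDatabase();

        return bQuerySuccess;
    }
}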

With IoC we have now modified the code so that DatabaseManager is loosely coupled with SomeUtility. How does this help us? Remember the original problem of finding a way to control what UtilityObject receives from QueryDatabase? Now that we inject a DatabaseManager into UtilityObject, we are no longer bound to inject a production version of DatabaseManager. We can instead inject a version whose QueryDatabase will return the boolean value we desire, thus controlling which code path we test. If we mock a DatabaseManager object, we can return whatever boolean value we want from QueryDatabase().

Here’s what the code looks like now:

public class MainApp
{
    public static void main(String[] args)
    {
        // production code
        DatabaseManager DBManager = new DatabaseManager();

        SomeUtility UtilityObject = new SomeUtility(DBManager);

        UtilityObject.DoSomething();

        // ...

        // test code
        TestSomeUtility TSomeUtility = new TestSomeUtility();

        TSomeUtility.RunTest();
    }
}

public interface IDatabaseManager
{
    public boolean QueryDatabase();
}

public class DatabaseManager implements IDatabaseManager
{
    public boolean QueryDatabase()
    {
        boolean bDBResult = true; // simulated Database.Query() result

        return bDBResult;
    }
}

public class MockedDatabaseManager implements IDatabaseManager
{
    public boolean QueryDatabase()
    {
        boolean bDBResult = false; // always fail

        return bDBResult;
    }
}
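
MockedDatabaseManager always fails. If you also wanted to drive the success path from the same test code, one option (a hypothetical variation, not used in the rest of this example) is a stub whose result can be set per test:

public class ConfigurableDatabaseManager implements IDatabaseManager
{
    private boolean bDBResult = false;

    // the test chooses which result QueryDatabase should simulate
    public void SetQueryResult(boolean newDBResult)
    {
        this.bDBResult = newDBResult;
    }

    public boolean QueryDatabase()
    {
        return bDBResult;
    }
}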

public class TestSomeUtility
{
    public void RunTest()
    {
        MockedDatabaseManager MDBManager = new MockedDatabaseManager();

        SomeUtility UtilityObject = new SomeUtility(MDBManager);

        boolean bResult = UtilityObject.DoSomething(); // test the failure code path

        if (bResult) { /*success*/ } else { /*failure*/ }
    }
}

public class SomeUtility
{
    private final IDatabaseManager DBManager;

    SomeUtility(IDatabaseManager newDBManager)
    {
        this.DBManager = newDBManager;
    }

    boolean DoSomething()
    {
        boolean bQuerySuccess = DBManager.QueryDatabase();

        return bQuerySuccess;
    }
}

Again, this is a very simple example to prove a point. In production code, you would want to do more to make the code more orthogonal (e.g. high cohesion).

So what did I do? Well, MainApp now injects the DatabaseManager into SomeUtility. It also calls a new class titled TestSomeUtility. I extracted the abstract interface ‘IDatabaseManager’ & implemented it in DatabaseManager. A new mock object, MockedDatabaseManager, was created whose QueryDatabase() method always returns false, simulating a query failure. TestSomeUtility creates a SomeUtility object, injecting MockedDatabaseManager instead of DatabaseManager. SomeUtility now has a constructor which sets its private copy of DBManager. In addition, DBManager is now an IDatabaseManager, not the concrete DatabaseManager class.

What was achieved? We now have a software system in which SomeUtility can be tested in isolation from other classes. This allows us to write automated tests that anyone can run at any time to learn whether the system is still functioning as expected. This saves tremendous time & increases confidence as the software system evolves and defects are corrected.

I’m sure that if you’re a seasoned programmer you may be questioning whether the extra code was worth it. I can tell you with no hesitation that it is. The work to create a testable system, which enforces good design practices, can save a development team a significant amount of time and effort while working within the software system. The explanation of this will be in a future post. Also keep in mind that you would not write your test framework from scratch like I started to demonstrate. If you’re going to create automated tests, NUnit & JUnit are great frameworks worth looking at.
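
As a rough sketch of what that looks like here, the hand-rolled TestSomeUtility above could be rewritten as a JUnit 4 test (assuming JUnit 4 is on the classpath; the test class & method names are mine):

import org.junit.Test;
import static org.junit.Assert.assertFalse;

public class SomeUtilityTest
{
    @Test
    public void doSomethingReportsQueryFailure()
    {
        // inject the mock that always simulates a failed query
        SomeUtility UtilityObject = new SomeUtility(new MockedDatabaseManager());

        assertFalse(UtilityObject.DoSomething());
    }
}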

So is post-TDD helpful in leading production code towards a good design? We started with production code, began to write a test, identified tight coupling, modified the code to make it loosely coupled, & completed the test. In the end, the design has been improved & we now have an automated test to prove it. I think using TDD as a post-design tool is very helpful. Remember, it’s never too late to correct your design.

The one downside is that you never know when you will be done. With TDD on production code, there is no way to predict how much more work is left because you simply work up to the next design defect, and you do not know how many design defects there are. You have heard the old adage, “How do you eat an elephant? One bite at a time.” While you may not know how large the elephant is, taking a bite every so often helps make that elephant a little smaller and easier to manage.

All Software is Unfinished

January 16, 2007

That program you just bought, it is not finished.

Oh, and the one you bought before that isn’t finished either; and the one before that, and so on, and so on.

All software is unfinished. Even software that has been released to customers is unfinished. Not because the business decided to ship before it was ‘completed’, but because software is constantly in development flux.

What was shipped is only a milestone in functionality. Throughout the life of a software product, teams decide which features will go into certain releases (milestones), and which features will be pushed to future releases.

The same rule applies to defects. Not all defects are corrected in each release. Only the defects deemed necessary to fix are corrected. These include crashes, common/repeating errors, & feature-limiting issues.

This should not be a new revelation for you. Defects have been around for as long as there has been computer hardware. One famous early defect was a moth stuck in the wiring of an early computer, which is popularly credited as the origin of the term ‘bug’. Software is not immune to bugs. Nor are other engineering disciplines. Defects are an unpleasant fact of life.

What makes software so unique is that software is ‘soft’. It is nothing more than text translated into functioning programs. This text can be adjusted at any time to perform new activities. However, the same flexibility also makes software more prone to defects.

Today we tolerate & expect bugs in software. Today we do not depend on features being delivered when they are promised. Software is always unfinished.

If you are responsible for developing software, remember that there will always be another version around the corner. Do not expect that next release to only add new features & correct defects. There is more to the software than what is ‘new’. Understand that it is unfinished & plan to spend time with the software:

· Did it meet expectations?

· What lessons have you learned?

· Are any areas prone to defects?

· Have new features & fixes modified the design?

· Is it still fully testable?

· Is any refactoring needed?

· Is documentation up to date?

Remember that all software is unfinished, so treat your code as a work in progress. Do not assume that what came before is a clean platform for new development. Plan to improve the code and avoid a buildup of technical debt.
