Test-Driven Development (TDD) is a great idea. You write some tests that say, basically, "this is what should happen." Then you write the code to make it so. It's rather like waving your magic wand (or keyboard).
Repeat this often enough, the idea goes, and you have yourself a pretty good system and a pretty good amount of code coverage, so you can refactor safely.
All great things.
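As a concrete sketch of that cycle: the test below was (notionally) written first, stating what should happen, and then just enough code was written to make it pass. The `slugify` helper here is hypothetical, invented purely for illustration, not taken from any real library.

```python
def slugify(title):
    # Written second, with only enough logic to satisfy the test below.
    return title.strip().lower().replace(" ", "-")

def test_slugify():
    # Written first: "this is what should happen."
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

test_slugify()
print("tests pass")
```

In real TDD you'd watch the test fail before writing `slugify` at all; the point is that the test pins down the behavior before the implementation exists.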
TDD is hard. It's really difficult to sustain once you get beyond the first little bit. After all, as you keep adding and modifying tests, every change starts to take a little longer. Not much longer, but enough that you notice and start to lose focus. And if it takes more than 8 seconds for something to happen - say, for your tests to finish - you're likely to lose focus. (See David Cornish and Dianne Dukette: The Essential 20.)
The benefits of TDD, however, accrue as long as the tests exist when the feature exists - whether the feature is a single method or an entire new component. The goal of having tests that enable safe refactoring and guide the implementation is still met. We're just not dogmatic about product code versus test code: both are required; get there in whatever way works for you as an engineer.
So on our team we practice what I've taken to calling "Test Near Development". We don't force people to write their tests first; instead, we ensure that tests exist when the feature is complete. In practice, some engineers write a little code, then write some tests, then refactor a little code, then refactor some tests, then write some more code. The last thing written is usually the acceptance tests. We end up in the same place, but we've allowed the tests to come before, during, and after the development effort.
And you know what? It works for us.
Give it a shot sometime, if you're having trouble with TDD. It might be a gateway to help get you to full TDD, or it might just give you better test coverage that your team can sustain.