after a couple of years of getting into TDD, I started paying attention to my personal programming loop
this is what I do when I work on a new project
1. think about the overall picture, annotating my thoughts in code comments
2. think about what components/classes I will need, putting my needs into code comments
3. try to use an available class
4. if not available, write a class with a minimal interface
5. write a unit test
6. implement the method in an attempt to pass the unit test (see the sketch after this list)
7. debug, if necessary (evil, evil, evil)
8. goto 2. if I still need more components/functionality
9. if I have enough pieces to work together and do a bigger job, I write an integration test
10. work to pass the integration test
11. run valgrind and drd on the new code
12. if they pass, commit into subversion
13. goto 1.
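to make steps 4-6 concrete, here's roughly what they might look like on a toy example (the Stack class and its test are invented for illustration):

// step 4: a class with a minimal interface
#include <cassert>
#include <vector>

class Stack {
public:
    void push(int x) { data_.push_back(x); }
    int pop() { int x = data_.back(); data_.pop_back(); return x; }
    bool empty() const { return data_.empty(); }
private:
    std::vector<int> data_;
};

// step 5: a unit test that pins down the expected behaviour
void test_push_then_pop_returns_last_value()
{
    Stack s;
    s.push(1);
    s.push(2);
    assert(s.pop() == 2);  // step 6: make the implementation pass this
    assert(s.pop() == 1);
    assert(s.empty());
}

int main()
{
    test_push_then_pop_returns_last_value();
    return 0;  // silence means the test passed
}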
notes:
- the main goal is to minimize the time spent in 7. (the most evil stage) and to write reliable code that fits the requirements
- I don't do enough of 9. (programming sin)
- I will sometimes go on an excursion and learn new APIs and/or languages (programming sin)
after adopting this loop, I find myself spending less than 5% of my time debugging code - probably more like 2-3%
what does your personal programming loop look like?
How much time do you spend on steps 5, 9, and 10? I do none of those and I don't think I spend a lot more time than you in step 7, even while trying out new libraries.
1. think about what and how I want to write
2. try writing it
3. debug
4. come to a realization that the structure I decided on is terribly flawed
5. either go to 1 or leave the project to rot for a few months until I get a new bright idea.
notes:
- the main goal is probably to minimize the time I spend typing and maximize the time I spend walking in circles
- I somehow don't have many problems debugging. Probably because my programs are so small.
- my main problem is either an inability to come up with a solid structure or just naive perfectionism - it's hard to say which...
I'm sure that my ability to find a good structure is improving. Sadly, the complexity of my projects tends to increase faster...
1. Create directory structure, Makefile, write a little bit of the README file (this is where I brainstorm what command-line options it needs to support)
2. Think about what the program needs to do and how I can split that into "chunks"
3. Start writing a part of the program
4. Check that that section works (is this unit testing?)
5. goto 3 until the "bare bones" of the program are done
6. Check that it all works together, run Valgrind, etc. (what's drd? can't find it on Google)
7. If it needs more features (remember, at this stage, it is usually just the absolute bare-bones), add more features and things like that.
8. Read through and tweak everything to make it more elegant, neater, faster and generally better (change an algorithm or even fix visibly broken code; this is usually when I have epiphanies like "that function doesn't do anything" or "these loops can be condensed into one" - see the sketch after this list). I often end up undoing step 7 here, as the goal of step 8 is to remove complexity.
9. Debug
10. goto 7
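as an example of the step 8 epiphany, here's the kind of "these loops can be condensed into one" change I mean (the data and function are made up):

#include <cstddef>
#include <vector>

// before: two passes over the same vector, one for the sum and one for the max
// after: a single pass does both jobs
void sum_and_max(const std::vector<int>& v, long& sum, int& max)
{
    sum = v[0];  // assumes v is non-empty
    max = v[0];
    for (std::size_t i = 1; i < v.size(); ++i) {
        sum += v[i];
        if (v[i] > max) max = v[i];
    }
}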
I tend to reach step 5 and then do a long jump to hamsterman::4
@helios - you are probably a smarter programmer than I (and have a better memory)
without unit-testing, I don't think I can get past about 10k lines of code without breaking things... the bigger the application gets, the more useful unit-testing becomes - it's also a great source of documentation on how to use a class that I haven't looked at in a year or two
@chrisname - drd is part of valgrind: it's great for threading (checking for race conditions and such); I find it more useful than helgrind
edit: @chrisname - 4. is unit testing if your check involves writing code and committing it into your code base so that you can run the check as often as you like; having unit-tests is almost as good as having a QA department behind you
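edit 2: @chrisname - for reference, drd is invoked like any other valgrind tool (./myprogram below stands in for whatever binary you built):

valgrind --tool=memcheck ./myprogram   # the default memory checker
valgrind --tool=drd ./myprogram        # data-race detection
valgrind --tool=helgrind ./myprogram   # the other thread checker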
1. Decide what the behavior of my program needs to be.
2. Write tests for that behavior.
3. Run tests... they fail because I haven't written any code yet.
4. Write some code to make my tests pass; debugging fits in here somewhere.
5. When all tests pass, return to step 1.
I *never* write code without there being a test for it first.
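to make that concrete, here's roughly what one pass through the loop looks like for me (fib() is just a toy example):

#include <cassert>

// step 2: the test comes first
int fib(int n);  // declared but, at step 3, not yet written - so the build fails

void test_fib()
{
    assert(fib(0) == 0);
    assert(fib(1) == 1);
    assert(fib(6) == 8);
}

// step 4: the simplest code that makes the test pass
int fib(int n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main()
{
    test_fib();
    return 0;  // step 5: all tests pass, back to step 1
}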
0. Pick a mathematical problem I need to solve.
1. Come up with the algorithm that needs to be implemented.
2. Write down the formulas carefully and check them mathematically.
3. Write down the interface of the classes, set up the format of the data structures.
4. Implement.
5. Write a unit test.
6. Debug if needed.
7. Celebrate a job well done.
In reality it works like this:
0. Pick a mathematical problem I need to solve.
1. Come up with the key step in the algorithm.
2. Write down the most critical section of the algorithm.
3. Write an interface to suit the way the implementation turned out.
4. Hook it up to the remainder of the program.
5. Forget to write a unit test.
6. Debug Debug Debug and no sleep.
7. Design the mathematical printouts in human-readable form.
8. Looking at the printouts, find out that the problem in Step 0 does not have a solution in the form in which the algorithm is looking for. Scrap everything.
With me being a visual person, I can see the program as one big diagram (and no, I'm not going nuts). Using that same mental image, I'm able to plan out a rough flow diagram. Using that diagram, I build the base of the program. And then, I build on top of that base.
Being able to visually plan things tends to be an awesome ability. ^_^
@kfmfe04,
Ok then, no, I'm not unit testing, since I usually write the testing code as a small program that just uses the relevant section of code. Usually I delete it when I'm done, occasionally I put it in a "tests" directory, but it never gets compiled into the program/library proper.
try
{
1) Identify the problem (interrogate the prisoner... er, client)
2) *RESEARCH* the problem <-- IMPORTANT
3) Start thinking about all of the different "is a" possibilities (mentally picture the class hierarchy)
4) Start thinking about any potential uses of template methods and classes
5) Start thinking of how many lines of code the project could potentially take
6) If applicable, evaluate the estimated cost for the client
7) If the project is given the green flag, go to 8, else go to the bar... er, work on another "project".
>*Note - Still haven't typed a line of code*<
8) Start an exception code enumeration & std::runtime_error extension to be built upon later (see the sketch after this list).
9) Code the header(s) for the ultimate base classes to be used.
10) Think about all of the ways an end-user could possibly throw a monkey-wrench into the mix and add appropriate documentation (doxygen comments).
11) Implement the base classes (which are most likely still abstract classes)
12) For each base extension:
-A) Write the header
-B) Document the header
-C) Write the implementation
-D) Edit the header documentation as needed
-E) Strenuously test the new class (try to break it!)
-F) Edit, compile, run, repeat until satisfactory.
-G) Make the final documentation modifications
13) Go to the...next project.
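for the curious, here's a minimal sketch of the step 8 scaffolding (the names are invented for illustration):

#include <stdexcept>
#include <string>

// the exception code enumeration, to be built upon later
enum ErrorCode {
    ERR_BAD_INPUT,
    ERR_OUT_OF_RANGE,
    ERR_INTERNAL
};

// the std::runtime_error extension
class ProjectError : public std::runtime_error {
public:
    ProjectError(ErrorCode code, const std::string& msg)
        : std::runtime_error(msg), code_(code) {}
    ErrorCode code() const { return code_; }
private:
    ErrorCode code_;
};

// later, from anywhere in the project:
// throw ProjectError(ERR_BAD_INPUT, "the end-user found a monkey-wrench");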
I'm guessing that only roughly 25-30% of each project's duration is actually typing code, 5-10% debugging, 10-15% documenting, and the rest on dealing with clients, researching the problem, and thinking up the perfect way to attack the project.
@chrisname - you did the hard work already by writing the tests - don't throw them away! I recommend modifying them a little to fit something like googletest and saving that code
the bigger your code gets, the more useful unit-testing becomes
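for example, an ad-hoc throwaway check can be promoted to a googletest case like this (Parse() is a made-up function under test):

#include <gtest/gtest.h>
#include <string>

int Parse(const std::string& s);  // hypothetical function being tested

TEST(ParserTest, ParsesSimpleInteger)
{
    EXPECT_EQ(42, Parse("42"));
}

// no main() needed if you link against gtest_main;
// now the check can be re-run on every commit instead of being deleted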
edit: kudos to firedraco - if I were a project manager, I wouldn't hire anyone unless s/he already does unit-testing or is totally open to it (it's too easy to break code, especially other people's code!) - the more I write code, the more I feel programming is a fight against entropy and complexity
edit: @Lieber - agreed: getting good/appropriate specs is more than half the battle
One problem that I have with "unit testing" is that the tester is attempting to make their code pass the test... it seems backwards to me. Shouldn't the point be to make the code fail in an unexpected way and then fix it? Even if the code passes the test, it may still be breakable.
Conversely, code that cannot be broken must certainly pass all unit tests it's subjected to.
It's all semantics in the end, but IMO the focus should be on breaking the code, not making it pass a test.
unit testing is a minimum (it's a beginning, not an end) - it is, by no means, a guarantee that the code works in all cases
developers should ensure that at the very least, the code is working according to minimal expectations (think of unit-tests as specs that have been rendered in code)
QA, in simulating the end-user, usually focuses on breaking code
I don't know how to advocate it except to suggest that people try it - the combination of unit-testing and SCM allows one to modify code quickly without fear of breaking things... (again, no guarantee - it depends on your code coverage, but it definitely helps)
edit: unit-testing is less critical for small/tiny projects, but when a project reaches a certain critical size or has multiple developers working on it, testing could save your derriere