And I structure them all pretty much the same way. Here's what I put in and why:
- Overview. This section describes the purpose of the document and its intended audience. This way I don't get comments about skipped implementation details in a document intended as a first overview for an external party.
- Results Summary. What do we see? What changed or what benefits are there? If the audience never makes it past this section, at least they'll get the big picture and the end conclusion. (Depending on the audience, I don't necessarily expect the whole thing to be read in great detail.)
- How We Did It. This is the section that describes the setup of whatever we did (whether that's implementing a product, installing something, or testing something). Maybe it's an architecture diagram. Maybe it's a test setup and configuration. Maybe it's a list of prerequisites. Either way, this is the basis on which our work was done. I write this section in great detail because I know that if I (or someone else) am going to do this again, then recreating that same underlying basis is going to be important for consistency of results.
- Details. This is the guts of the thing, usually. These are the tables of test results, the API, the product features and benefits, etc.
- Discussion. Based on the details presented, there are almost always things to discuss. Maybe it's future extensions to the product. Maybe it's further tests to run. Maybe it's an odd pattern or a reason behind some of the details. These are the questions that I expect people will ask. Often when I'm writing this one, I'll start out with the actual list of questions I anticipate (or that reviewers have asked already). Then I just fill in the blanks.
The length varies, and the contents vary, but pretty much every report-type thing I write has these sections in it. I find that this kind of consistent outline lets me produce these reports more quickly and more reliably.
What tricks do you use when you're writing a report?