Rule 1:
When showing error bars, always describe in the figure legends what they are.
Rule 2:
The sample size/number of independently performed experiments (i.e. n) must be stated in the figure legend.
Rule 3:
Error bars and statistics should only be shown for independently repeated experiments, and never for replicates.
Rule 4:
For very small values of n (e.g. 3), it is better to simply plot the individual data points rather than showing error bars and statistics.
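One way to do that: with only three values per group there is nothing to hide, so a strip-style plot of the raw points says more than a bar topped with error bars. A minimal matplotlib sketch, with invented data and group names, might look like this:

```python
# Sketch: with n = 3 per group, plot the raw points instead of mean +/- error bars.
# Data and group labels are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

groups = {"control": [4.1, 4.8, 5.2], "treated": [6.0, 6.9, 6.4]}  # n = 3 each

fig, ax = plt.subplots()
for i, (name, values) in enumerate(groups.items()):
    x = i + np.linspace(-0.05, 0.05, len(values))   # slight horizontal offset so points don't overlap
    ax.plot(x, values, "o", label=f"{name} (n = {len(values)})")

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("measurement (arbitrary units)")
ax.legend()
plt.show()
```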
Rule 5:
95% confidence intervals capture the true mean on 95% of occasions, so you can be 95% confident that your interval includes the true mean.
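That coverage claim is easy to check empirically. The sketch below, which assumes normally distributed samples and uses invented parameter values, repeatedly computes a t-based 95% CI and counts how often it captures the true mean:

```python
# Sketch: repeatedly sample from a known distribution, compute a 95% CI for
# the mean each time, and count how often the interval captures the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sd, n, trials = 10.0, 2.0, 10, 10_000

hits = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sd, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value
    lo, hi = sample.mean() - t_crit * se, sample.mean() + t_crit * se
    hits += lo <= true_mean <= hi

print(f"coverage: {hits / trials:.3f}")            # expect a value close to 0.95
```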
Rule 6:
This rule relates standard error bars to 95% confidence intervals: when n = 3, and double the SE bars don't overlap, P < 0.05.
Rule 7:
With 95% CIs and n = 3, overlap of one full arm indicates P ≈ 0.05, and overlap of half an arm indicates P ≈ 0.01.
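The correspondence in rules 6 and 7 can be sanity-checked on toy data. The sketch below, with two invented n = 3 samples, computes the SE, the 95% CI half-width (one "arm"), and the exact two-sample t-test P value for comparison:

```python
# Sketch: for two invented n = 3 samples, compute the SE, the 95% CI half-width,
# and the two-sample t-test P value, so the overlap heuristics in rules 6-7 can
# be compared against the exact P value.
import numpy as np
from scipy import stats

a = np.array([4.1, 4.8, 5.2])
b = np.array([6.0, 6.9, 6.4])

def se_and_ci_arm(x, level=0.95):
    se = x.std(ddof=1) / np.sqrt(len(x))
    t_crit = stats.t.ppf(0.5 + level / 2, df=len(x) - 1)   # ~4.30 when n = 3
    return se, t_crit * se                                  # SE and CI half-width (one "arm")

for name, x in (("a", a), ("b", b)):
    se, arm = se_and_ci_arm(x)
    print(f"{name}: mean = {x.mean():.2f}, SE = {se:.2f}, 95% CI arm = {arm:.2f}")

t, p = stats.ttest_ind(a, b)                                # ordinary two-sample t-test
print(f"two-sample t-test: P = {p:.3f}")
```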
Rule 8:
In the case of repeated measurements on the same group (animals, individuals, cultures, or reactions, for example), CIs or SE bars are irrelevant to comparisons within the same group.
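For repeated-measures data the comparison lives in the per-subject differences, so one option is to summarise and test those differences directly (for example with a paired t-test). A minimal sketch with invented before/after values:

```python
# Sketch: for repeated measurements on the same subjects, the quantity of
# interest is the per-subject difference, not the separate group error bars.
# Values are invented for illustration.
import numpy as np
from scipy import stats

before = np.array([5.1, 4.7, 5.5, 4.9, 5.2])
after  = np.array([5.9, 5.0, 6.1, 5.4, 5.8])

diff = after - before
se = diff.std(ddof=1) / np.sqrt(len(diff))
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
print(f"mean difference = {diff.mean():.2f}, 95% CI half-width = {t_crit * se:.2f}")

t, p = stats.ttest_rel(after, before)                       # paired t-test
print(f"paired t-test: P = {p:.3f}")
```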
They really are quite basic, but it's useful to be reminded of rules 1-5 occasionally. Another part of the paper I quite liked was the single-sentence summary of P values (the results of t-tests, for example) and, more particularly, of how to interpret them.
If you carry out a statistical significance test, the result is a P value, where P is the probability that, if there really is no difference, you would get, by chance, a difference as large as the one you observed, or even larger.
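That definition can be made concrete with a simple permutation test, which estimates P as the fraction of random relabellings of the data that produce a difference at least as large as the one observed. A small sketch with invented values:

```python
# Sketch: estimate P by permutation - the fraction of random relabellings that
# give a difference in means at least as large as the observed difference.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([4.1, 4.8, 5.2, 4.5])
b = np.array([6.0, 6.9, 6.4, 5.8])

observed = abs(a.mean() - b.mean())
pooled = np.concatenate([a, b])

count = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)                              # relabel the data at random
    diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
    count += diff >= observed

print(f"permutation estimate of P: {count / n_perm:.3f}")
```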
Cumming G, Fidler F, & Vaux DL (2007). Error bars in experimental biology. Journal of Cell Biology, 177(1), 7-11. PMID: 17420288