I use a simple algorithm to come up with my hour estimates on software projects: (Effort * Risk) = Hours.
- Effort is the amount of time you think a task will take to complete.
- I like to use hours or days for the effort scale.
- I consider "one day" to be six hours.
- I try really hard not to have tasks larger than a single day. Large tasks can usually be broken into smaller ones, although sometimes they can't.
The 3 levels of feature risk has more detail, but the tl;dr is:
- 1 risk if you can do the task with your eyes closed.
- 2 risk if you feel any uncertainty at all.
- 3 risk if you need to Google-fu before you can even start estimating.
- Multiply the effort by the risk to get hours.
- e.g. A 1 risk task that you estimate at 2 hours of effort means you should budget 2 hours.
- e.g. A 3 risk task that you estimate at 2 hours of effort means you should budget 6 hours.
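The rule above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original method; the function name and example tasks are mine.

```python
HOURS_PER_DAY = 6  # "one day" of effort is six hours

def budget_hours(effort_hours: float, risk: int) -> float:
    """Multiply effort by a risk level of 1, 2, or 3 to get budgeted hours."""
    if risk not in (1, 2, 3):
        raise ValueError("risk must be 1, 2, or 3")
    return effort_hours * risk

print(budget_hours(2, 1))  # eyes-closed task: budget exactly the effort, 2
print(budget_hours(2, 3))  # needs research first: budget triple, 6
```

The validation on risk just enforces the three-level scale; there's no risk 1.5 in this system.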
Why should I use (E*R)=H?
- Both concepts, Effort and Risk, are easy to explain.
- The math to get to Hours is simple.
- It transparently communicates how you got to a certain hours estimate.
Why don't you use points for estimates?
I can't wrap my head around points: I always end up converting points to hours. The same thing happens when I give the numbers to my clients, except they then turn the hours into cost based on the hourly rate. You could adapt this to points if that's how you roll, I think.
How does (E*R)=H work with Priority?
- The 3 levels of feature prioritization can be combined with Hours to help create a budget in a Feature Sheet.
- Priority and Risk are components of (Q*S)=(R*T) decisions and can identify less risky alternatives that meet the same outcome.
- A high hours estimate for a feature may lower its priority, because the cost/benefit isn't there anymore.
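To show how Hours and Priority combine into a budget, here is a hypothetical Feature Sheet sketched in Python. The feature names, priorities, and estimates are invented for illustration; the sort order (priority first, then cheapest) is one reasonable choice, not prescribed by the original method.

```python
# Each row: (feature, priority 1-3, effort in hours, risk 1-3).
features = [
    ("login form",    1, 4, 1),
    ("report export", 2, 6, 3),
    ("settings page", 3, 2, 2),
]

# Apply (Effort * Risk) = Hours to each row.
rows = [(name, prio, effort * risk) for name, prio, effort, risk in features]
rows.sort(key=lambda r: (r[1], r[2]))  # high-priority, cheap work first

total = sum(hours for _, _, hours in rows)
for name, prio, hours in rows:
    print(f"P{prio} {name}: {hours}h")
print(f"Total budget: {total}h")
```

Laying the sheet out this way makes the cost/benefit trade-off visible: the high-risk "report export" swells from 6 effort hours to an 18-hour budget, which is exactly the kind of number that might push it down the priority list.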
Doesn't it read better if it's written Hours = Effort * Risk?
Yeah, except that I put these values in a spreadsheet as columns: Effort, then Risk, then Hours. The formula reads left to right properly in that context, and it's adapted from there.