Estimation
Frontend estimation is notoriously inaccurate — and in consulting, inaccurate estimates erode client trust fast. This page covers practical techniques, complexity multipliers, and the pitfalls that consistently catch frontend teams off guard.
Estimation Techniques
Three-Point (PERT)
The team's default method for client-facing estimates. Instead of guessing a single number, you provide three:
- a — optimistic (everything goes right)
- m — most likely (realistic scenario)
- b — pessimistic (things go wrong)
PERT expected value: E = (a + 4m + b) / 6
Standard deviation: SD = (b - a) / 6
For project-level estimates, sum the individual expected values and aggregate uncertainty as the square root of the summed variances: E_total = Σ E_i, SD_total = √(Σ SD_i²)
Worked example — a form feature with validation and error handling:
| | a | m | b |
|---|---|---|---|
| Hours | 2 | 4 | 10 |

E = (2 + 4×4 + 10) / 6 ≈ 4.7 hours, SD = (10 - 2) / 6 ≈ 1.3 hours → present as 3.4–6.0 hours (±1 SD).
This works best when you decompose into 20+ tasks — the statistics smooth out individual mis-estimates. Always present estimates as ranges to clients.
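The three-point method translates directly into a small calculator. A minimal sketch in TypeScript (function and type names are illustrative, not from any library):

```typescript
// Three-point (PERT) inputs for a single task, in hours.
interface ThreePoint {
  a: number; // optimistic
  m: number; // most likely
  b: number; // pessimistic
}

// Per-task PERT estimate: E = (a + 4m + b) / 6, SD = (b - a) / 6.
function pert({ a, m, b }: ThreePoint): { expected: number; sd: number } {
  return { expected: (a + 4 * m + b) / 6, sd: (b - a) / 6 };
}

// Project level: sum expected values; combine uncertainty as the
// square root of the summed variances (SD_total = sqrt(sum of SD_i^2)).
function projectEstimate(tasks: ThreePoint[]): { expected: number; sd: number } {
  const expected = tasks.reduce((sum, t) => sum + pert(t).expected, 0);
  const variance = tasks.reduce((sum, t) => sum + pert(t).sd ** 2, 0);
  return { expected, sd: Math.sqrt(variance) };
}

// Worked example from the table: a = 2, m = 4, b = 10
const form = pert({ a: 2, m: 4, b: 10 });
// form.expected ≈ 4.67h, form.sd ≈ 1.33h → present as 3.4–6.0 hours (±1 SD)
```

Note that aggregating variances (rather than summing SDs directly) is what makes the project-level range tighter than the sum of the individual ranges.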
T-Shirt Sizing
| Size | Rough Effort | Typical Use |
|---|---|---|
| XS | < 2 hours | Copy change, config tweak |
| S | 2–4 hours | Simple component, minor fix |
| M | 1–2 days | Standard feature, form with validation |
| L | 3–5 days | Complex feature, multi-step flow |
| XL | 1–2 weeks | Large feature, new page with complex interactions |
Good for early-stage backlog grooming and prioritization. Not precise enough for client-facing estimates.
Story Points vs. Time
| Aspect | Story Points | Time-Based |
|---|---|---|
| Measures | Relative complexity | Absolute duration |
| Good for | Sprint velocity, internal planning | Client budgets, fixed-scope contracts |
| Risk | Can devolve into "points = hours" | Optimism bias, varies by developer |
Recommendation: time-based (three-point) for client deliverables; story points for internal sprint planning.
Other Techniques
- Planning Poker — avoids anchoring bias by having team members reveal estimates simultaneously.
- Affinity Mapping — sort backlog items by relative size without assigning numbers.
- Reference-based — compare new work to completed tasks with known effort.
- Bottom-up decomposition — break every task into sub-tasks. Most accurate, most time-consuming.
Three-point estimation forces you to think about the pessimistic case. That alone makes it better than gut-feeling time estimates.
Complexity Factors
The biggest source of estimation error is misjudging how complex a feature actually is. Common high-impact multipliers include:
- Complex forms (multi-step, conditional, async validation) — 2–5×
- Real-time features (WebSocket, sync) — 3–10×
- Third-party integrations (maps, payments, auth) — 2–4× each
- Accessibility & i18n — +15–30% and +10–20% respectively; dramatically more if retrofitted
- Offline support — 5–10× for offline-first with sync
For the complete catalog of multipliers organized by category — UI, technical, integration, non-functional, infrastructure, and process factors — see the dedicated reference:
👉 Complexity Factors Reference
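One simplistic model for combining these factors is to multiply them onto a base estimate. A sketch, with illustrative values picked from the ranges above (in practice the factors overlap, so apply judgment rather than stacking every multiplier mechanically):

```typescript
// Apply a list of complexity multipliers to a base estimate in hours.
// Multiplier values are assumptions drawn from the ranges listed above.
function applyMultipliers(baseHours: number, multipliers: number[]): number {
  return multipliers.reduce((hours, m) => hours * m, baseHours);
}

// A 4-hour form that is multi-step with async validation (×2.5)
// and needs a payments integration (×2) becomes a 20-hour task.
const estimate = applyMultipliers(4, [2.5, 2]); // → 20
```

Percentage-style factors (e.g. accessibility at +15–30%) fit the same model as multipliers of 1.15–1.30.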
Simplifiers
Not everything adds complexity. These factors reduce effort:
- Repetitive UI patterns — list views, CRUD screens. Estimate the first, discount the rest.
- Static / read-only content — minimal interactivity = fast.
- Well-defined design — complete Figma specs with all states (hover, focus, error, empty, loading) reduce back-and-forth.
- Existing component library — cuts UI implementation time by 30–50%. See Design Systems.
- Established patterns — if the team has built similar features before, leverage that experience.
- Mature CI/CD — less time on deployment issues. See Deploy.
Common Pitfalls
- Optimism bias — developers consistently underestimate. The planning fallacy is real. Apply a reality factor.
- Missing testing time — unit tests, integration tests, E2E, QA, bug fixes. Add 20–40% of development time.
- Cross-browser gaps — "works in Chrome" is not done.
- Responsive underestimation — mobile layouts are not free. Fluid design between breakpoints creates edge cases.
- Ignoring review cycles — client feedback, design revisions, approval gates. Every round adds time.
- Forgetting DevOps — CI/CD setup, environment config, DNS, SSL. Someone has to do it.
- Scope creep — "can we just add..." is never quick. Small additions cascade.
- Learning curve — new framework or library → budget 2–3× for the first implementation.
- Non-coding work — meetings, documentation, onboarding, handoff, demos. Easily 20–30% of a sprint.
- Anchoring bias — the first number mentioned becomes the anchor. Estimate independently before discussing.
The #1 estimation mistake: forgetting that coding is only ~50% of the work. Testing, reviews, meetings, documentation, and deployment fill the rest. Budget for the whole picture.
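These overheads compound. As a sketch, grossing up a pure-coding estimate with overhead percentages from the lists above (the specific values here are assumptions within the stated ranges):

```typescript
// Gross up a pure-coding estimate with non-coding overheads,
// expressed as fractions of development time (illustrative values).
function grossUp(codingHours: number, overheads: Record<string, number>): number {
  const total = Object.values(overheads).reduce((sum, pct) => sum + pct, 0);
  return codingHours * (1 + total);
}

// 40h of coding → 62h once testing (+30%) and
// non-coding work (+25%) are budgeted.
const hours = grossUp(40, { testing: 0.3, nonCoding: 0.25 }); // → 62
```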
Velocity & Capacity Planning
- Velocity — rolling average of last 3–5 sprints. Don't use sprint 1 as your baseline.
- Capacity — developers × available hours × focus factor (0.6–0.8). Focus factor accounts for meetings, context switching, Slack.
- Buffer — 15–25% contingency for unknowns. Commit at ~80% of calculated velocity.
- New team members — first sprint at ~50% capacity for onboarding ramp-up.
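The capacity arithmetic above is simple enough to sketch directly (team figures in the example are made up for illustration):

```typescript
// Sprint capacity inputs.
interface Team {
  developers: number;
  hoursPerDev: number; // available hours per developer in the sprint
  focusFactor: number; // 0.6–0.8: meetings, context switching, Slack
}

// Capacity = developers × available hours × focus factor.
function sprintCapacity({ developers, hoursPerDev, focusFactor }: Team): number {
  return developers * hoursPerDev * focusFactor;
}

// Commit at ~80% of capacity, leaving ~20% contingency for unknowns.
function commitment(team: Team, commitRatio = 0.8): number {
  return sprintCapacity(team) * commitRatio;
}

// Example: 4 devs × 70h × 0.7 focus = 196h capacity → commit ~157h
const team: Team = { developers: 4, hoursPerDev: 70, focusFactor: 0.7 };
```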
Project Type Considerations
- Greenfield — higher upfront cost (architecture, CI/CD, design system bootstrapping). First 1–2 sprints are slower as patterns are established. Velocity stabilizes by sprint 3–4.
- Brownfield — add discovery time. Existing tech debt slows new features — budget 10–20% overhead. Understand the architecture before committing to estimates.
- Migration (e.g., Angular → React, CRA → Next.js) — apply 1.5–2× multiplier. "Rewrite" is not "copy-paste in new syntax." Prefer strangler fig pattern (incremental migration). Account for running two systems in parallel.
For brownfield projects, spend the first few days on a technical discovery before committing to estimates. The existing codebase's quality determines everything.
Cross-reference: Rendering for architecture patterns.