Estimation

Frontend estimation is notoriously inaccurate — and in consulting, inaccurate estimates erode client trust fast. This page covers practical techniques, complexity multipliers, and the pitfalls that consistently catch frontend teams off guard.

Estimation Techniques

Three-Point (PERT)

The team's default method for client-facing estimates. Instead of guessing a single number, you provide three:

  • a — optimistic (everything goes right)
  • m — most likely (realistic scenario)
  • b — pessimistic (things go wrong)

PERT expected value:

E = \frac{a + 4m + b}{6}

Standard deviation:

SD = \frac{b - a}{6}

For project-level estimates, sum the individual expected values and aggregate uncertainty:

SD_{project} = \sqrt{\sum SD_{task}^2}

Worked example — a form feature with validation and error handling:

|       | a | m | b  |
| ----- | - | - | -- |
| Hours | 2 | 4 | 10 |

E = \frac{2 + 16 + 10}{6} \approx 4.7h · SD = \frac{10 - 2}{6} \approx 1.3h → present as 3.4–6.0 hours (±1 SD).

This works best when you decompose into 20+ tasks — the statistics smooth out individual mis-estimates. Always present estimates as ranges to clients.
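A minimal sketch of the three-point math in TypeScript (the function and type names here are illustrative, not part of any team tooling):

```typescript
// Three-point (PERT) estimate for a single task, in hours.
interface Estimate {
  expected: number; // E = (a + 4m + b) / 6
  sd: number;       // SD = (b - a) / 6
}

function pert(a: number, m: number, b: number): Estimate {
  return { expected: (a + 4 * m + b) / 6, sd: (b - a) / 6 };
}

// Project-level roll-up: sum expected values, combine SDs in quadrature.
function projectEstimate(tasks: Estimate[]): Estimate {
  const expected = tasks.reduce((sum, t) => sum + t.expected, 0);
  const sd = Math.sqrt(tasks.reduce((sum, t) => sum + t.sd ** 2, 0));
  return { expected, sd };
}

// Worked example from above: form feature with a = 2h, m = 4h, b = 10h.
const form = pert(2, 4, 10);
console.log(form.expected.toFixed(1)); // "4.7"
console.log(form.sd.toFixed(1));       // "1.3"
```

The quadrature roll-up is why decomposition helps: independent over- and under-estimates partially cancel, so the project-level SD grows slower than the sum of task SDs.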

T-Shirt Sizing

| Size | Rough Effort | Typical Use |
| ---- | ------------ | ----------- |
| XS | < 2 hours | Copy change, config tweak |
| S  | 2–4 hours | Simple component, minor fix |
| M  | 1–2 days  | Standard feature, form with validation |
| L  | 3–5 days  | Complex feature, multi-step flow |
| XL | 1–2 weeks | Large feature, new page with complex interactions |

Good for early-stage backlog grooming and prioritization. Not precise enough for client-facing estimates.

Story Points vs. Time

| Aspect | Story Points | Time-Based |
| ------ | ------------ | ---------- |
| Measures | Relative complexity | Absolute duration |
| Good for | Sprint velocity, internal planning | Client budgets, fixed-scope contracts |
| Risk | Can devolve into "points = hours" | Optimism bias, varies by developer |

Recommendation: time-based (three-point) for client deliverables; story points for internal sprint planning.

Other Techniques

  • Planning Poker — avoids anchor bias by having team members reveal estimates simultaneously.
  • Affinity Mapping — sort backlog items by relative size without assigning numbers.
  • Reference-based — compare new work to completed tasks with known effort.
  • Bottom-up decomposition — break every task into sub-tasks. Most accurate, most time-consuming.
tip

Three-point estimation forces you to think about the pessimistic case. That alone makes it better than gut-feeling time estimates.

Complexity Factors

The biggest source of estimation error is misjudging how complex a feature actually is. Common high-impact multipliers include:

  • Complex forms (multi-step, conditional, async validation) — 2–5×
  • Real-time features (WebSocket, sync) — 3–10×
  • Third-party integrations (maps, payments, auth) — 2–4× each
  • Accessibility & i18n — +15–30% and +10–20% respectively; dramatically more if retrofitted
  • Offline support — 5–10× for offline-first with sync
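One way to make these adjustments explicit rather than mental: apply multiplicative factors first, then additive percentage overheads. A sketch with illustrative numbers (the midpoint values chosen below are examples, not prescribed defaults):

```typescript
// Adjust a base estimate: multiplicative complexity factors compound,
// percentage overheads (a11y, i18n) are added on top of the result.
function adjustedHours(
  baseHours: number,
  multipliers: number[],
  overheadPcts: number[]
): number {
  const multiplied = multipliers.reduce((h, f) => h * f, baseHours);
  const overhead = overheadPcts.reduce((sum, pct) => sum + pct, 0);
  return multiplied * (1 + overhead);
}

// 8h base form, multi-step with async validation (×3),
// plus accessibility (+20%) and i18n (+15%).
console.log(adjustedHours(8, [3], [0.2, 0.15]).toFixed(1)); // "32.4"
```

Writing the multipliers down per feature also gives the client a transparent, line-item explanation of why the number is not 8 hours.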

For the complete catalog of multipliers organized by category — UI, technical, integration, non-functional, infrastructure, and process factors — see the dedicated reference:

👉 Complexity Factors Reference

Simplifiers

Not everything adds complexity. These factors reduce effort:

  • Repetitive UI patterns — list views, CRUD screens. Estimate the first, discount the rest.
  • Static / read-only content — minimal interactivity = fast.
  • Well-defined design — complete Figma specs with all states (hover, focus, error, empty, loading) reduce back-and-forth.
  • Existing component library — cuts UI implementation time by 30–50%. See Design Systems.
  • Established patterns — if the team has built similar features before, leverage that experience.
  • Mature CI/CD — less time on deployment issues. See Deploy.

Common Pitfalls

  • Optimism bias — developers consistently underestimate. The planning fallacy is real. Apply a reality factor.
  • Missing testing time — unit tests, integration tests, E2E, QA, bug fixes. Add 20–40% of development time.
  • Cross-browser gaps — "works in Chrome" is not done.
  • Responsive underestimation — mobile layouts are not free. Fluid design between breakpoints creates edge cases.
  • Ignoring review cycles — client feedback, design revisions, approval gates. Every round adds time.
  • Forgetting DevOps — CI/CD setup, environment config, DNS, SSL. Someone has to do it.
  • Scope creep — "can we just add..." is never quick. Small additions cascade.
  • Learning curve — new framework or library → budget 2–3× for the first implementation.
  • Non-coding work — meetings, documentation, onboarding, handoff, demos. Easily 20–30% of a sprint.
  • Anchor bias — the first number mentioned becomes the anchor. Estimate independently before discussing.
danger

The #1 estimation mistake: forgetting that coding is only ~50% of the work. Testing, reviews, meetings, documentation, and deployment fill the rest. Budget for the whole picture.

Velocity & Capacity Planning

  • Velocity — rolling average of last 3–5 sprints. Don't use sprint 1 as your baseline.
  • Capacity — developers × available hours × focus factor (0.6–0.8). Focus factor accounts for meetings, context switching, Slack.
  • Buffer — 15–25% contingency for unknowns. Commit at ~80% of calculated velocity.
  • New team members — first sprint at ~50% capacity for onboarding ramp-up.
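The bullets above reduce to two small formulas. A sketch, with the focus factor and commit ratio as assumed illustrative defaults:

```typescript
// Rolling velocity: mean of the last n completed sprints (default 3).
function rollingVelocity(history: number[], n = 3): number {
  const recent = history.slice(-n);
  return recent.reduce((sum, v) => sum + v, 0) / recent.length;
}

// Sprint capacity: devs × available hours × focus factor,
// committed at a ratio below 100% to leave a contingency buffer.
function sprintCapacity(
  devs: number,
  hoursPerDev: number,
  focusFactor = 0.7,
  commitRatio = 0.8
): number {
  return devs * hoursPerDev * focusFactor * commitRatio;
}

// 4 devs, ~72h each over a two-week sprint.
console.log(sprintCapacity(4, 72).toFixed(1)); // "161.3"
```

Note how quickly the nominal 288 hours shrinks once focus factor and buffer are applied; committing to the raw number is the classic over-commitment mistake.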

Project Type Considerations

  • Greenfield — higher upfront cost (architecture, CI/CD, design system bootstrapping). First 1–2 sprints are slower as patterns are established. Velocity stabilizes by sprint 3–4.
  • Brownfield — add discovery time. Existing tech debt slows new features — budget 10–20% overhead. Understand the architecture before committing to estimates.
  • Migration (e.g., Angular → React, CRA → Next.js) — apply 1.5–2× multiplier. "Rewrite" is not "copy-paste in new syntax." Prefer strangler fig pattern (incremental migration). Account for running two systems in parallel.
tip

For brownfield projects, spend the first few days on a technical discovery before committing to estimates. The existing codebase's quality determines everything.

Cross-reference: Rendering for architecture patterns.