Estimation

Frontend estimation is notoriously inaccurate, and in consulting, inaccurate estimates erode client trust fast. This page covers practical techniques, complexity multipliers, and the pitfalls that consistently catch frontend teams off guard.

Estimation Techniques

Three-Point (PERT)

The team's default method for client-facing estimates. Instead of guessing a single number, you provide three:

  • a – optimistic (everything goes right)
  • m – most likely (realistic scenario)
  • b – pessimistic (things go wrong)

PERT expected value:

E = \frac{a + 4m + b}{6}

Standard deviation:

SD = \frac{b - a}{6}

For project-level estimates, sum the individual expected values and aggregate uncertainty:

SD_{project} = \sqrt{\sum SD_{task}^2}

Worked example: a form feature with validation and error handling:

|       | a | m | b  |
|-------|---|---|----|
| Hours | 2 | 4 | 10 |

E = \frac{2 + 16 + 10}{6} \approx 4.7h · SD = \frac{10 - 2}{6} \approx 1.3h → present as 3.4–6.0 hours (±1 SD).

This works best when you decompose the work into 20+ tasks; the statistics smooth out individual mis-estimates. Always present estimates as ranges to clients.
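The formulas above are easy to script. A minimal sketch in TypeScript (function names are illustrative, not from this page):

```typescript
// Three-point (PERT) estimate for a single task.
interface Estimate {
  expected: number; // E = (a + 4m + b) / 6
  sd: number;       // SD = (b - a) / 6
}

function pert(a: number, m: number, b: number): Estimate {
  return { expected: (a + 4 * m + b) / 6, sd: (b - a) / 6 };
}

// Project-level roll-up: sum expected values, combine SDs in quadrature.
function aggregate(tasks: Estimate[]): Estimate {
  const expected = tasks.reduce((sum, t) => sum + t.expected, 0);
  const sd = Math.sqrt(tasks.reduce((sum, t) => sum + t.sd ** 2, 0));
  return { expected, sd };
}

// Worked example from above: a = 2h, m = 4h, b = 10h.
const form = pert(2, 4, 10);
// form.expected ≈ 4.7h, form.sd ≈ 1.3h → present as 3.4–6.0h (±1 SD)
```

Summing expected values and combining standard deviations in quadrature is why decomposition helps: independent errors partially cancel instead of stacking.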

T-Shirt Sizing

| Size | Rough Effort | Typical Use |
|------|--------------|-------------|
| XS   | < 2 hours    | Copy change, config tweak |
| S    | 2–4 hours    | Simple component, minor fix |
| M    | 1–2 days     | Standard feature, form with validation |
| L    | 3–5 days     | Complex feature, multi-step flow |
| XL   | 1–2 weeks    | Large feature, new page with complex interactions |

Good for early-stage backlog grooming and prioritization. Not precise enough for client-facing estimates.

Story Points vs. Time

| Aspect   | Story Points | Time-Based |
|----------|--------------|------------|
| Measures | Relative complexity | Absolute duration |
| Good for | Sprint velocity, internal planning | Client budgets, fixed-scope contracts |
| Risk     | Can devolve into "points = hours" | Optimism bias, varies by developer |

Recommendation: time-based (three-point) for client deliverables; story points for internal sprint planning.

Other Techniques

  • Planning Poker – avoids anchor bias by having team members reveal estimates simultaneously.
  • Affinity Mapping – sort backlog items by relative size without assigning numbers.
  • Reference-based – compare new work to completed tasks with known effort.
  • Bottom-up decomposition – break every task into sub-tasks. Most accurate, most time-consuming.
tip

Three-point estimation forces you to think about the pessimistic case. That alone makes it better than gut-feeling time estimates.

Complexity Factors

The biggest source of estimation error is misjudging how complex a feature actually is. Common high-impact multipliers include:

  • Complex forms (multi-step, conditional, async validation) – 2–5×
  • Real-time features (WebSocket, sync) – 3–10×
  • Third-party integrations (maps, payments, auth) – 2–4× each
  • Accessibility & i18n – +15–30% and +10–20% respectively; dramatically more if retrofitted
  • Offline support – 5–10× for offline-first with sync
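Multipliers compound while percentage overheads add, which is why a few factors together can blow up an estimate. A hypothetical helper to make that arithmetic explicit (the function name and the low-end factors used below are illustrative):

```typescript
// Scale a base estimate by complexity multipliers (compounding)
// and additive percentage overheads.
function applyComplexity(
  baseHours: number,
  multipliers: number[],  // e.g. 2 for one third-party integration
  overheadPcts: number[], // e.g. 0.15 for accessibility
): number {
  const multiplied = multipliers.reduce((h, m) => h * m, baseHours);
  const overhead = overheadPcts.reduce((sum, p) => sum + p, 0);
  return multiplied * (1 + overhead);
}

// An 8h baseline form that is multi-step (×2, low end of the range)
// with accessibility requirements (+15%):
applyComplexity(8, [2], [0.15]); // → 18.4h
```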

For the complete catalog of multipliers organized by category (UI, technical, integration, non-functional, infrastructure, and process factors), see the dedicated reference:

👉 Complexity Factors Reference

Simplifiers

Not everything adds complexity. These factors reduce effort:

  • Repetitive UI patterns – list views, CRUD screens. Estimate the first, discount the rest.
  • Static / read-only content – minimal interactivity = fast.
  • Well-defined design – complete Figma specs with all states (hover, focus, error, empty, loading) reduce back-and-forth.
  • Existing component library – cuts UI implementation time by 30–50%. See Design Systems.
  • Established patterns – if the team has built similar features before, leverage that experience.
  • Mature CI/CD – less time on deployment issues. See Deploy.

Common Pitfalls

  • Optimism bias – developers consistently underestimate. The planning fallacy is real. Apply a reality factor.
  • Missing testing time – unit tests, integration tests, E2E, QA, bug fixes. Add 20–40% of development time.
  • Cross-browser gaps – "works in Chrome" is not done.
  • Responsive underestimation – mobile layouts are not free. Fluid design between breakpoints creates edge cases.
  • Ignoring review cycles – client feedback, design revisions, approval gates. Every round adds time.
  • Forgetting DevOps – CI/CD setup, environment config, DNS, SSL. Someone has to do it.
  • Scope creep – "can we just add..." is never quick. Small additions cascade.
  • Learning curve – new framework or library → budget 2–3× for the first implementation.
  • Non-coding work – meetings, documentation, onboarding, handoff, demos. Easily 20–30% of a sprint.
  • Anchor bias – the first number mentioned becomes the anchor. Estimate independently before discussing.
danger

The #1 estimation mistake: forgetting that coding is only ~50% of the work. Testing, reviews, meetings, documentation, and deployment fill the rest. Budget for the whole picture.
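Budgeting the whole picture can be made mechanical. A sketch with illustrative percentages taken from the pitfalls above (the review overhead is an assumption, not a figure from this page):

```typescript
// Coding is only part of the work; budget the rest explicitly.
function totalEffort(codingHours: number): number {
  const testing = 0.3;    // unit/integration/E2E, QA, bug fixes: +20–40%
  const nonCoding = 0.25; // meetings, docs, onboarding, demos: 20–30%
  const reviews = 0.15;   // client feedback and revision rounds (assumed)
  return codingHours * (1 + testing + nonCoding + reviews);
}

totalEffort(40); // 40h of coding becomes 68h of budget
```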

Velocity & Capacity Planning

  • Velocity – rolling average of last 3–5 sprints. Don't use sprint 1 as your baseline.
  • Capacity – developers × available hours × focus factor (0.6–0.8). The focus factor accounts for meetings, context switching, Slack.
  • Buffer – 15–25% contingency for unknowns. Commit at ~80% of calculated velocity.
  • New team members – first sprint at ~50% capacity for onboarding ramp-up.
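These rules of thumb translate directly into code. A sketch under assumed helper names (the focus factor default and sprint numbers are illustrative):

```typescript
// Capacity: developers × available hours × focus factor.
function sprintCapacity(
  developers: number,
  hoursPerDev: number,
  focusFactor = 0.7, // 0.6–0.8: meetings, context switching, Slack
): number {
  return developers * hoursPerDev * focusFactor;
}

// Velocity: rolling average of the last 3–5 sprints.
function rollingVelocity(pointsPerSprint: number[]): number {
  const recent = pointsPerSprint.slice(-5);
  return recent.reduce((sum, v) => sum + v, 0) / recent.length;
}

// Commit at ~80% of calculated velocity to keep a buffer for unknowns.
const commitment = 0.8 * rollingVelocity([34, 30, 38, 36]); // 27.6 points
```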

Project Type Considerations

  • Greenfield – higher upfront cost (architecture, CI/CD, design system bootstrapping). The first 1–2 sprints are slower as patterns are established; velocity stabilizes by sprint 3–4.
  • Brownfield – add discovery time. Existing tech debt slows new features; budget 10–20% overhead. Understand the architecture before committing to estimates.
  • Migration (e.g., Angular → React, CRA → Next.js) – apply a 1.5–2× multiplier. "Rewrite" is not "copy-paste in new syntax." Prefer the strangler fig pattern (incremental migration). Account for running two systems in parallel.
tip

For brownfield projects, spend the first few days on a technical discovery before committing to estimates. The existing codebase's quality determines everything.

Cross-reference: Rendering for architecture patterns.

Project Cookbooks

For real-world examples of these estimation techniques applied to complete projects, see: