Common Mistakes
The most frequent product analytics errors and how to avoid them.
These are the mistakes we see repeatedly across B2B product analytics implementations. Each one has cost teams weeks of lost data, broken dashboards, or wasted budget. Learn from others' errors.
Planning mistakes
1. No tracking plan
The mistake: Adding tracking ad-hoc without documentation.
Why it is a problem: Without a plan, different developers invent different conventions. The same action ends up tracked under multiple names (UserSignedUp, user_signed_up, signup). Missing events are discovered months later when someone needs data that was never collected. Institutional knowledge about what events mean lives in people's heads, not in documentation.
The fix: Always start with a written tracking plan. Define every event, its naming convention, its category, and the business question it answers before writing any instrumentation code.
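A plan can live in a spreadsheet, a doc, or the repo itself. As a sketch, a single entry kept in code might look like this (the field names are illustrative):
// One tracking-plan entry -- a sketch; field names are illustrative
{
  event: "Report_Created",
  category: "Core workflow",
  trigger: "Server confirms the report was persisted",
  question: "Which accounts reach the core value moment, and how often?",
  properties: { report_type: "string" },
}
Whatever the format, the point is that the definition exists before the instrumentation does.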
2. Tracking everything
The mistake: The "we might need this someday" approach.
Why it is a problem: Massive data volume and cost. On volume-billed platforms, every speculative event is a recurring charge. Signal is buried in noise. Nobody trusts the data because there is too much of it. Client performance degrades under the weight of tracking calls.
The fix: Track what answers real questions. Add more when needed. The cost of adding an event later is trivial; the cost of paying for a useless event for months is not. See Cost Optimization.
3. Misaligned tracking and business goals
The mistake: Tracking features instead of tracking what matters to the business.
Why it is a problem: You end up with extensive data about low-value interactions and no data about the events that drive acquisition, engagement, and retention. Dashboards are full but no one can answer the questions leadership asks.
The fix: Before choosing what to track, identify the key metrics that align with your business goals. Then work backward to the events that feed those metrics. See Telemetry Principles.
4. No event ownership
The mistake: Analytics is "everyone's job" -- which means it is no one's job.
Why it is a problem: No one reviews tracking changes. Quality degrades silently over time. Institutional knowledge is lost when people leave. New events are added without checking for conflicts or redundancy.
The fix: Assign ownership to a person or team. Make tracking plan review part of the product development process. Someone must be responsible for tracking quality.
B2B-specific mistakes
5. Missing account context
The mistake: Only tracking user-level events without account association.
Why it is a problem: You cannot analyze account health. You cannot segment by account traits (plan, MRR, industry). B2B analytics tools like Accoil cannot function. You have lost the most important dimension in B2B analytics.
The fix: Always call group(). Always ensure every event is attributable to an account. This is not optional in B2B. See B2B Tracking Patterns.
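A minimal sketch, assuming a Segment-style API where group() associates the current user with an account (IDs and traits are illustrative):
// Once per session: tie the user to their account
identify("usr_123", { email: "jane@example.com" });
group("acct_456", { plan: "growth", mrr: 499 });

// Every subsequent event is now attributable to acct_456
track("Report_Created", { report_type: "quarterly" });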
6. User-centric analysis only
The mistake: Counting users instead of counting accounts and users per account.
Why it is a problem: An account with 50 active users out of 200 seats tells a completely different story than an account with 2 active users out of 3 seats. User counts alone obscure account health. In B2B, the account is the revenue unit -- user counts are a supporting metric.
The fix: Structure your analytics around accounts first, users second. Use group traits to capture account-level metrics. Use snapshot metrics to track user counts per account.
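As a sketch (trait names are illustrative), seat utilization becomes an analyzable dimension once both numbers live on the account:
// 50 of 200 seats active reads very differently from 2 of 3
group("acct_456", {
  total_seats: 200,
  active_users_30d: 50,
});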
Implementation mistakes
7. Scattered event definitions
The mistake: Defining events inline throughout the codebase.
// File A
track("user performed search");
// File B
track("Search Performed");
// File C
track("SEARCH_EXECUTED");Why it is a problem: The same event gets tracked differently in different files. There is no way to audit all events. Typos create phantom events. Consistency depends on developer memory, which does not scale.
The fix: Centralize event definitions in dedicated files. Every tracking call references the central definition. The correct name becomes the easy path; the wrong name requires going out of your way.
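One sketch of this pattern: a single module exports the canonical names, and every call site imports from it (file and constant names are illustrative):
// events.js -- the single source of truth for event names
export const EVENTS = {
  SEARCH_PERFORMED: "Search_Performed",
  REPORT_CREATED: "Report_Created",
};

// elsewhere in the codebase
import { EVENTS } from "./events";
track(EVENTS.SEARCH_PERFORMED, { query_length: 12 });
With TypeScript or a linter in place, a typo in the constant fails at build time instead of silently creating a phantom event.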
8. Inconsistent naming
The mistake: Different naming conventions across the codebase.
track("UserCreatedReport"); // PascalCase
track("report_created"); // snake_case
track("Report Created"); // Title Case with spaces
track("created-report"); // kebab-case, verb firstWhy it is a problem: Events are split across different names. Queries miss data. Dashboards undercount. New developers do not know which convention to follow. The cost of fixing naming retroactively is enormous -- you either migrate historical data or live with a permanent seam.
The fix: Pick a convention and enforce it structurally. Accoil uses Object_Action in Title_Case with underscores. The convention matters less than consistency. Enforce through centralized definitions, not developer discipline. See Naming Conventions.
9. Tracking mechanisms instead of outcomes
The mistake: Tracking UI interactions instead of business outcomes.
// Tracks the mechanism
track("Submit_Button_Clicked");
track("Create_Report_Button_Clicked");Why it is a problem: Breaks when the UI changes. Does not capture business meaning. The button click is implied by the outcome -- if a report was created, someone triggered the creation. Coupling analytics to UI elements means every UI refactor breaks your tracking.
The fix: Track outcomes: Report_Created, not Create_Report_Button_Clicked. Focus on what the user accomplished, not the UI gesture they used.
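A sketch of the outcome-oriented version, tracked server-side at the moment the outcome is confirmed (the handler shape is illustrative):
// The outcome is tracked where it actually happens, not in the UI
async function createReport(input) {
  const report = await db.reports.create(input);
  track("Report_Created", { report_id: report.id }); // survives any UI redesign
  return report;
}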
Exception: When A/B testing different UI paths to the same outcome, the mechanism temporarily matters. But this is the exception.
10. No error handling on analytics calls
The mistake: Letting analytics failures break the user experience.
// If analytics is down, the user's action fails
await analytics.track("Report_Created");
doNextThing();
Why it is a problem: If the analytics provider is down or slow, the user's action fails or hangs. Analytics should be invisible to users. A tracking outage should never become a product outage.
The fix: Non-blocking calls with error handling. Never await analytics on the critical path:
trackReportCreated(context).catch(console.error);
doNextThing();
See Implementation Architecture for the full error resilience pattern.
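A slightly fuller sketch of the same idea -- a wrapper that logs failures and never throws (the wrapper is illustrative, not a library API):
// Fire-and-forget: analytics failures are logged, never surfaced to the user
function safeTrack(event, properties) {
  try {
    analytics.track(event, properties).catch((err) => {
      console.error("analytics failed:", err);
    });
  } catch (err) {
    console.error("analytics failed:", err); // guard synchronous throws too
  }
}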
11. Duplicate events
The mistake: The same event fires multiple times for one user action.
Why it is a problem: Inflated metrics. Conversion rates look wrong. Hard to trust data. Common causes: event in a component that re-renders, event in both frontend and backend for the same action, event in multiple code paths that can both execute.
The fix: Track at the canonical moment, in one place. For important events, prefer server-side tracking where you have more control over execution.
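For the re-render cause in particular, a simple guard ensures the event fires once per page load (a sketch; names are illustrative):
// Fires at most once, no matter how often the component re-renders
let reportViewTracked = false;
function onReportViewed(reportId) {
  if (reportViewTracked) return;
  reportViewTracked = true;
  track("Report_Viewed", { report_id: reportId });
}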
Maintenance mistakes
12. Not testing tracking
The mistake: No QA process for analytics.
Why it is a problem: Broken tracking goes unnoticed for weeks. Dashboards show wrong data. Decisions are made on bad data. A tracking regression is invisible until someone notices a dashboard looks off, by which time weeks of data may be compromised.
The fix: Use debug mode during development. Monitor event volume in production -- sudden drops signal broken tracking. Write automated tests for critical events.
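As a sketch of an automated check (Jest-style, assuming a variant of the createReport handler that accepts an injectable analytics client so it can be mocked):
// Asserts the critical event fires with the canonical name
test("creating a report fires Report_Created", async () => {
  const analytics = { track: jest.fn() };
  await createReport({ name: "Q3 Review" }, analytics);
  expect(analytics.track).toHaveBeenCalledWith("Report_Created", expect.any(Object));
});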
13. Orphaned events
The mistake: Old events are never cleaned up after features change or are removed.
Why it is a problem: Confusion about what is real. Queries return stale data. New developers do not know which events are active. The tracking plan becomes cluttered and untrustworthy.
The fix: Deprecate events with a timeline, then remove them. Keep a changelog. Version your tracking plan like you version your schema. See Telemetry Principles.
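In the centralized definitions file from mistake 7, deprecation can be an annotation with a removal date (the dates here are illustrative):
export const EVENTS = {
  REPORT_CREATED: "Report_Created",
  // DEPRECATED 2024-06-01: feature retired; remove after 2024-09-01
  LEGACY_EXPORT_STARTED: "Legacy_Export_Started",
};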
14. Deriving counts from events instead of snapshots
The mistake: Calculating current totals by counting creation events minus deletion events.
total_todos = todos_created.count() - todos_deleted.count()
Why it is a problem: Events can be dropped or duplicated. Historical events predate your tracking. Migrations and imports bypass the event pipeline. The number drifts from reality over time and never quite matches your database.
The fix: Use snapshot metrics. Query your database directly on a schedule and send current-state data as group traits:
// Run this on a schedule, e.g. a daily job
async function syncAccountSnapshot(accountId) {
  group(accountId, {
    total_todos: await db.todos.count({ accountId }),
    last_daily_sync: new Date().toISOString(),
  });
}
15. PII in event data
The mistake: Including personal data in event properties or traits without consideration.
identify("usr_123", {
email: "john@example.com", // Necessary for identity resolution
credit_card: "4242...", // Never do this
shipping_address: "123 Main St", // Not needed in analytics
});
Why it is a problem: Privacy and compliance risk (GDPR, CCPA). Data cannot be easily anonymized once sent. Limits where data can be forwarded. Creates liability.
The fix: Send only what is needed. Email and name are often necessary for identity resolution and should go in user traits via identify(). Sensitive data (payment information, addresses, SSNs) must never enter the analytics pipeline. Use IDs to reference entities rather than embedding full values.
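The compliant version keeps identity traits minimal and references sensitive records by ID (values are illustrative):
identify("usr_123", {
  email: "john@example.com", // needed for identity resolution
  name: "John Doe",
});
track("Payment_Method_Added", {
  payment_method_id: "pm_abc123", // an ID reference, never the card number
});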
Self-assessment checklist
Review your implementation against these questions:
- Do you have a written tracking plan?
- Is account context on every B2B event?
- Is naming consistent across all events?
- Is PII limited to necessary user traits only?
- Are you tracking outcomes, not UI clicks?
- Is there error handling on all tracking calls?
- Do you test tracking before deploy?
- Is someone responsible for tracking quality?
- Are you validating data accuracy on a regular cadence?
- Do you review and clean orphaned events periodically?
- Is your tracking plan documented and versioned?
- Are current-state counts using snapshots, not event math?