Customer retention shapes the success of every business. When we know how many customers stick around, we can make smarter decisions about what we offer.

A cohort table lets us track groups of customers and see when they leave. We can spot patterns in the numbers and find out exactly when people drop off.
This info helps us figure out where to focus so more customers stay happy and engaged.
TL;DR
Cohort retention tracks how each sign-up group returns over time. We care about who sticks, not just today’s numbers.
We build a clean cohort table that shows cohort-by-week data. We pick one value-based activity to measure and then plot the retention curve.
The curve breaks down into three zones:
- Onboarding drop (W0→W1) – initial user loss
- Habit formation (W1→W4) – building user patterns
- Long tail (W5+) – sustained engagement
We use hazard analysis to find exact drop-off points. Quick expected-value math helps us measure business impact.
We stick to guardrails: privacy by design, accessibility, and no manipulative retention tactics.
Key definitions (one line each)
Cohort: A group of users who started using a service during the same period, like the week of January 6.
Retention rate: The percentage of users from a cohort who are still active in a specific week after starting.
Activity definition: The specific behavior that counts as “active” usage—something that shows real value, not just a visit.
Monthly churn: One minus the retention rate for a cohort, showing what percentage of users we lost.
Hazard rate: The percentage of active users who don’t come back in the next time period.
Resurrection: Users who went inactive for a while but then became active again.
The 7-step playbook
1) Decide the activity definition (before any math)
We start by defining a value action that matters to our business. We skip easy metrics like “opened app” and focus on actions that show real engagement.
For B2B, maybe it’s “Ran ≥1 successful sync” for a data tool. For B2C, it could be “Completed ≥1 workout” for a fitness app. We keep this definition steady for at least a quarter and document any changes.
2) Pick cohort cadence and window
We pick weekly cohorts for fast-moving products or monthly for slower cycles. Our analysis starts at W0=sign-up week, then W1, W2, and so on with no overlap.
We right-censor new cohorts since they won’t have W8 or W12 data yet.
3) Pull the minimal data
We grab three things: user_id, signup_date (or first_seen), and event_date with active_flag. The active flag is 1 when users did our value action that day or week.
We count distinct active users by cohort_week × activity_week for our analysis.
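A minimal sketch of that pull, assuming a single events table shaped like events(user_id, event_name, event_time) with event_time stored as a DATE (wrap it in DATE() if yours is a timestamp); the fuller template at the end of this post assumes the same shape:
WITH signups AS (
  SELECT user_id, DATE_TRUNC(MIN(event_time), WEEK) AS signup_week
  FROM events
  WHERE event_name = 'signup'
  GROUP BY user_id
)
SELECT DISTINCT
  s.user_id,
  s.signup_week,
  DATE_TRUNC(e.event_time, WEEK) AS activity_week,
  1 AS active_flag                      -- one row per user per week with >=1 value action
FROM signups s
JOIN events e USING (user_id)
WHERE e.event_name = 'value_action';    -- swap in your own value action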
4) Build the cohort table (normalize to %)
We create our retention curve by dividing each week’s active count by the total cohort size. This gives us percentage retention rates for each period.
We show this as a heatmap where darker cells mean higher retention. W0 is always 100%.
5) Plot the retention curve and compute hazard
We plot a line chart from W0 to W12. Hazard at week t is h(t) = [R(t) − R(t+1)] / R(t).
Hazard spikes tell us where users bail out.
6) Segment to explain differences
We split curves by source, device, country, use case, and activation status. Divergence points highlight issues with onboarding, pricing, or features.
7) Prioritize fixes with expected value
We estimate how many extra retained users each proposed change could bring, then rank fixes by (impact × confidence) ÷ effort so cheap, high-confidence wins rise to the top.
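A toy ranking to show the scoring shape; the backlog items and scores below are made up for illustration:
WITH backlog AS (
  SELECT 'Shorter onboarding' AS fix, 60 AS impact_users, 0.7 AS confidence, 2 AS effort_weeks UNION ALL
  SELECT 'Weekly progress digest', 25, 0.5, 1 UNION ALL
  SELECT 'Template gallery', 40, 0.6, 3
)
SELECT fix, impact_users * confidence / effort_weeks AS priority_score
FROM backlog
ORDER BY priority_score DESC;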
Worked cohort example (numbers you can copy)
Suppose we track 1,000 users who signed up the week of January 6. Activity is completing a key action once per week.
Week | Active users | Retention % |
---|---|---|
W0 | 1,000 | 100% |
W1 | 450 | 45% |
W2 | 350 | 35% |
W3 | 300 | 30% |
W4 | 275 | 27.5% |
W5 | 260 | 26% |
W8 | 230 | 23% |
W12 | 210 | 21% |
We can break this into three zones:
Onboarding drop (W0→W1): We lost 55 percentage points. This means we need to fix the first-run experience, reduce friction, and improve value messaging.
Habit formation (W1→W4): The decline slows down. We should help users build routines, send reminders, and deliver value more often.
Long tail (W5+): Users drop off more slowly. We need to make things stickier with integrations, team features, and by meeting core needs.
Hazard rates look like this:
- h(W0) = 55%
- h(W1) = 22.2%
- h(W2) = 14.3%
The biggest spike is in week zero, so onboarding needs attention before any big marketing push.
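To sanity-check those hazards, we can recompute them from the worked table with a window function; the literal counts below are copied straight from the example above.
WITH cohort AS (
  SELECT 0 AS week_index, 1000 AS active UNION ALL
  SELECT 1, 450 UNION ALL
  SELECT 2, 350 UNION ALL
  SELECT 3, 300
)
SELECT
  week_index,
  active / 1000 AS retention,                                          -- cohort size is 1,000
  SAFE_DIVIDE(active - LEAD(active) OVER (ORDER BY week_index), active) AS hazard
FROM cohort
ORDER BY week_index;
Running it returns hazards of 0.55, 0.222, and 0.143, matching the bullets above (W3 stays NULL because the snippet has no W4 row to compare against).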
Quick calculation: will this change matter?
We can estimate the impact on churn metrics with some quick math. Say we boost retention from 45% to 51% at week one.
Added retained users (W1) = Cohort × ΔRetention × Confidence
= 1,000 × 0.06 × 0.7 = 42 extra retained at W1
If 75% of week-one users make it to week four:
Added retained at W4 ≈ 42 × 0.75 = 32
Over three months, with 12 cohorts of 1,000 users each, that’s about 384 more week-four retained users.
Each retained user brings $15 per month in gross contribution. So that’s 384 × $15 = $5,760 monthly lift after the quarter.
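The same arithmetic as a copy-paste query; every input (cohort size, lift, confidence, W1→W4 carry-through, and the $15 contribution) is just the assumption from this example, so swap in your own numbers:
SELECT
  1000 * 0.06 * 0.7                         AS added_retained_w1,      -- cohort × Δretention × confidence = 42
  ROUND(1000 * 0.06 * 0.7 * 0.75)           AS added_retained_w4,      -- 75% of the W1 adds reach W4 ≈ 32
  ROUND(1000 * 0.06 * 0.7 * 0.75) * 12      AS added_w4_over_quarter,  -- 12 weekly cohorts ≈ 384
  ROUND(1000 * 0.06 * 0.7 * 0.75) * 12 * 15 AS monthly_gross_lift_usd; -- × $15 per retained user = 5,760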
How to find drop-off moments (practical checks)
We track onboarding by measuring each step in the first-run funnel. We follow users from sign-up to verification, first action, and second session.
Step-by-step analysis shows where the biggest cliff is. We look at daily data from day 0 to day 7 to spot post-signup fatigue.
Feature comparison tells us what helps or hurts activation. We compare week 1 retention for users who tried certain features versus those who didn’t.
Activation plotting gives us the clearest view of onboarding upside. We plot two curves: users activated in week 0 and those not yet activated. The gap shows our churn prevention opportunity.
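One way to get that activated-versus-not split, assuming a hypothetical user_weeks table with one row per user per week (user_id, week_index, active_flag) plus an activated_w0 flag derived from whatever activation event you use:
SELECT
  activated_w0,
  SAFE_DIVIDE(
    COUNT(DISTINCT IF(week_index = 1 AND active_flag = 1, user_id, NULL)),
    COUNT(DISTINCT user_id)
  ) AS w1_retention            -- W1 actives over everyone in the group
FROM user_weeks
GROUP BY activated_w0;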
Trade-offs at a glance
How we slice cohorts and define activity affects how quickly we can read the signal and how much we can trust it.
Decision | Benefits | Drawbacks | Best for |
---|---|---|---|
Weekly cohorts | Fast signal | Noisier data | PLG/B2C; quick loops |
Monthly cohorts | Smooth results | Slower read | Enterprise/B2B |
Strict activity | Quality read | Lower percentages | When we know value moment |
Lenient activity | Higher percentages | Vanity risk | Early exploration |
Classic retention | Simple, comparable | Misses intra-week cycles | Standard dashboards |
Rolling 28-day active | Smooths seasonality | Harder to tie cohorts | Executive summaries |
Retention from activated base | Shows habit health after value | Hides onboarding issues | Post-onboarding tuning |
B2B example (≤5 lines): data integration SaaS
We saw CSV import users had 38% week-one retention versus 52% for warehouse connector users. Permission prompts showed up before users saw value, and admins often blocked access.
We added a “Try sample sync” option that runs on sample data first, moved permission requests until after the demo, and sent admins pre-filled approval links.
CSV cohort retention jumped +9 percentage points at week one and +6 points at week four. Support tickets dropped 18%. Now we track user resurrections through scheduled email reports.
B2C example (≤5 lines): meditation app
We saw a huge 62 percentage point drop in week-one retention on Android. Users hit a long questionnaire and a notification prompt right when they opened the app.
Our fix? We started with a quick 60-second meditation session. Profile setup moved to after the second session.
We delayed the push notification request until after the third session.
- Week 1 retention jumped up by 7 percentage points
- User streaks got longer
- Complaint rates didn’t budge
Guardrails & ethics (must-pass gates)
We protect user privacy by making data anonymous and honoring consent. Users can delete their info anytime.
Our tools should work for everyone, so we follow WCAG 2.1 AA for charts and heatmaps. We include keyboard navigation and alt text for screen readers.
Non-manipulative practices matter. We skip fake urgency and hidden sign-ups that trick people.
We treat everyone fairly—even if they turn off tracking. Our retention reports show honest numbers to finance and ops teams.
Clear metric definitions help other teams make sense of our work and double-check the numbers.
Pitfalls & better alternatives
Lots of teams track “opened app” as their main activity, but this creates false signals about real engagement. We should focus on value actions like create, complete, or share instead.
Mixing calendar MAU with cohort retention is another classic mistake. They do different jobs. Cohorts show who sticks around, while calendar MAU tells us how many people are active right now.
Fresh cohorts cause censoring problems if we fill missing data with zeros. Just mark W8 and W12 as N/A for new user groups.
Small cohorts can lead to bad decisions from limited data. Set a minimum size like 300 users, or combine weekly cohorts. Add confidence bands to show how reliable the data is.
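One rough way to draw those bands is a normal-approximation 95% interval on each retention proportion; the cohort_retention name below is just the saved output of the SQL template at the end of this post:
SELECT
  signup_week,
  activity_week,
  retention,
  retention - 1.96 * SQRT(retention * (1 - retention) / cohort_size) AS retention_low,
  retention + 1.96 * SQRT(retention * (1 - retention) / cohort_size) AS retention_high
FROM cohort_retention;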
Averaging cohorts together hides important trends. Show recent cohorts separately—rolling averages can cover up regressions.
Optimizing only for W1 retention is risky. Some tricks boost week one but flop later. Keep an eye on W4 and W8 metrics too.
Mini FAQ
What’s a good W1 retention target? First, track your baseline. Then shoot for 3-10 percentage points higher through better onboarding and more frequent value moments.
Should we exclude unactivated users? Look at both. All sign-ups shows how onboarding performs. Activated-base tells you about habit health after users get value.
How do we measure resurrection? Call it churn if there’s no activity for 28 days. Track users who come back after that as resurrected and report them separately from normal retention.
What if seasonality affects our data? Compare year-over-year cohorts like January 2025 vs January 2024. Try rolling 4-week overlays for smoother trends and fewer weekly bumps.
Which tool should we use? Anything that groups by cohort and week works. SQL plus BI tools, or product analytics platforms, all get the job done. The main thing is consistent definitions and clean event tracking across the system.
Copy-ready templates
A) Spreadsheet recipe (cohort table)
Start with a raw data sheet with four columns: user_id, signup_week, activity_week, and active_flag.
Format weeks like 2025-W02. Set the active flag to 1 or 0.
Create a pivot table. Put signup_week in rows and activity_week in columns.
For values, use COUNTD(user_id) where active_flag=1. This counts unique active users.
Normalize next. Divide each cell by its row’s W0 COUNTD value. Show these as percentages.
Build two charts: a heatmap for the grid and a line chart for each cohort.
Add a hazard column to spot drop-off points. Calculate (R(t) − R(t+1)) / R(t) for each period and plot it to catch drop-off spikes.
Duplicate your sheet to create segments. Break it down by source, device, country, and an “activated in W0?” flag (Y/N).
B) SQL sketch (BigQuery style, adapt as needed)
WITH
signups AS (
SELECT user_id, DATE_TRUNC(MIN(event_time), WEEK) AS signup_week
FROM events
WHERE event_name = 'signup'
GROUP BY 1
),
activity AS (
SELECT user_id, DATE_TRUNC(event_time, WEEK) AS activity_week
FROM events
WHERE event_name = 'value_action' -- your value action
GROUP BY 1, 2
),
cohort_activity AS (
SELECT
s.signup_week,
a.activity_week,
COUNT(DISTINCT a.user_id) AS active_users
FROM signups s
JOIN activity a USING (user_id)
GROUP BY 1, 2
),
cohort_size AS (
SELECT signup_week, COUNT(DISTINCT user_id) AS cohort_size
FROM signups GROUP BY 1
)
SELECT
ca.signup_week,
ca.activity_week,
active_users,
cohort_size,
SAFE_DIVIDE(active_users, cohort_size) AS retention
FROM cohort_activity ca
JOIN cohort_size cs USING (signup_week)
ORDER BY signup_week, activity_week;
This query matches each signup cohort with its later activity and computes retention by cohort and activity week.
Pivot signup_week against activity_week in your BI tool to get the heatmap from step 4.
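To line the output up with the W0/W1 framing used above and to surface hazard spikes, here is one optional extension, assuming you save the query's output as cohort_retention:
SELECT
  signup_week,
  DATE_DIFF(activity_week, signup_week, WEEK) AS week_index,
  retention,
  SAFE_DIVIDE(
    retention - LEAD(retention) OVER (PARTITION BY signup_week ORDER BY activity_week),
    retention
  ) AS hazard
FROM cohort_retention
ORDER BY signup_week, week_index;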