94 changes: 94 additions & 0 deletions skills/app-growth-investigator/SKILL.md
---
name: app-growth-investigator
description: Investigate growth, activation, retention, conversion, funnel bottlenecks, and product-value realization for apps and websites using analytics, database or warehouse, billing, release, and support data. Use when asked why users do not activate, return, convert, collaborate, or retain; to measure build impact; to explain churn-like behavior; or to find concrete growth levers from product data.
---

# App Growth Investigator

Use this skill to investigate an app or site like a sharp product operator, not a dashboard tourist. Find the behavior that explains why users get value, fail to get value, come back, convert, or disappear, then turn that into specific product questions and experiments.

## Core Mindset

- Think in funnels. Find the constrained step before debating everything else.
- Treat timing as signal. Ask when a behavior happens, not just how often.
- Segment aggressively. Aggregates hide the actual mechanism.
- Look for value failure, not just "traffic" or "churn."
- Care about product-business-model fit. Sometimes the product is working but the packaging is wrong.
- Turn every finding into a lever, not just an observation.

## Workflow

1. Frame the business question.
- Examples:
- "Why are new signups failing to activate?"
- "Where is the biggest conversion leak?"
- "Did the March 1 onboarding change improve first-week retention?"
- "Are users getting one-off value without becoming retained users?"

2. Pick source authority before drawing conclusions.
- Use the rules below.
- Read `references/source-patterns.md` for common stack combinations.

3. Get release context before forming hypotheses.
- Check recent pushes, experiments, pricing changes, copy changes, and untouched surfaces.
- Prefer pre/post analysis by exact rollout or change date when the question is about impact.

4. Choose one funnel family for the job-to-be-done.
- Read `references/funnel-patterns.md`.
- Define one funnel for the specific user job, not one giant master funnel.

5. Identify the bottleneck.
- Measure conversion rate, absolute user loss, and delay at each step.
- Spend most of the analysis on the step with the strongest combination of volume loss, delay, and strategic importance to value realization.

6. Segment before concluding anything.
   - At minimum, segment by: real vs test vs uncertain traffic; new vs returning users; signed-in vs anonymous (when relevant); build or rollout cohort; and acquisition or entry path (when available).

7. Look for timing cliffs and "done in one sitting" behavior.
- Ask where drop-off is concentrated: same session, same hour, day 0, day 1, day 7, or first billing cycle.
- Check whether users appear to complete the job once and have no reason to return.

8. End with levers.
- Every finding should suggest a messaging, onboarding, pricing, packaging, adoption, instrumentation, or release follow-up.
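
The bottleneck-measurement step above (step 5) can be sketched in a few lines. The funnel step names and counts here are hypothetical, purely to illustrate computing per-step conversion and absolute loss:

```python
# Hypothetical step counts for an activation funnel; names and numbers
# are illustrative, not from any real dataset.
funnel = [
    ("signup", 10_000),
    ("setup_complete", 6_200),
    ("first_meaningful_action", 2_100),
    ("first_value_event", 1_800),
    ("second_session", 900),
]

def funnel_losses(steps):
    """Return per-step conversion rate and absolute user loss."""
    rows = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rows.append({
            "step": f"{prev_name} -> {name}",
            "conversion": round(n / prev_n, 3),
            "absolute_loss": prev_n - n,
        })
    return rows

for row in funnel_losses(funnel):
    print(row)
```

Note that the step with the worst conversion rate and the step with the largest absolute loss are not always the same one, which is why both numbers are worth printing.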

## Source Rules

- Assign authority by claim:
- product database or warehouse for entity truth
- analytics events for interaction paths and timing
- billing system for monetization truth
- release or experiment history for rollout context
- support feedback or surveys for qualitative evidence
- logs when instrumentation is missing
- When sources disagree, do not average them together. Quantify the gap and explain what each source can and cannot prove.
- Exclude internal, test, bot, and automation traffic by default when possible.
- Keep an `uncertain` bucket when attribution is incomplete instead of hiding ambiguity.
- Always show denominators, time windows, and exclusion rules.
- If a metric depends on a proxy, say so plainly.
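
A minimal sketch of the denominator discipline above, with hypothetical traffic-classification counts. The point is that exclusions and the `uncertain` bucket are reported alongside the metric, never silently dropped:

```python
# Hypothetical raw classification counts; the real/test/bot/uncertain
# split is illustrative.
counts = {"real": 8_450, "test": 120, "bot": 1_030, "uncertain": 400}

def describe_denominator(counts, window="2024-03-01 to 2024-03-31"):
    """Report the denominator with its time window and exclusion rules."""
    total = sum(counts.values())
    excluded = counts["test"] + counts["bot"]
    return {
        "window": window,
        "raw_total": total,
        "included": counts["real"],
        "excluded": excluded,
        "uncertain": counts["uncertain"],  # reported, not hidden
        "excluded_share": round(excluded / total, 3),
    }

print(describe_denominator(counts))
```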

## Output Shape

Return a short report with:

1. Release context
- What changed recently
- What important surfaces have not changed
2. Funnel and bottleneck
- Funnel definition
- Biggest drop-off or delay
- Magnitude of the loss
3. Key findings
- 3-7 concrete insights with exact percentages, counts, and time windows
4. Interpretation
- What the findings likely mean
- Whether they point to friction, weak value, wrong packaging, traffic mix, or instrumentation debt
5. Recommended next checks
- The next cuts or queries that would confirm or falsify the hypothesis
6. Product levers
- Specific messaging, onboarding, pricing, packaging, feature, or instrumentation changes worth testing

## Reference Guide

- Read `references/source-patterns.md` when choosing authority rules or reconciling multiple data systems.
- Read `references/funnel-patterns.md` when selecting a funnel family or defining step-level metrics.
- Read `references/app-shapes.md` when the product shape affects the interpretation of activation, retention, conversion, or repeat usage.
4 changes: 4 additions & 0 deletions skills/app-growth-investigator/agents/openai.yaml
interface:
display_name: "App Growth Investigator"
short_description: "Find activation, retention, and funnel bottlenecks."
default_prompt: "Use $app-growth-investigator to map the core funnel for an app or site, identify the biggest bottleneck, check timing cliffs and release context, and then surface the strongest growth levers from the available data."
85 changes: 85 additions & 0 deletions skills/app-growth-investigator/references/app-shapes.md
# App Shapes

Use this reference when the product shape changes how activation, retention, or monetization should be interpreted.

## SaaS Productivity or Workflow Product

Look for:

- setup without meaningful use
- meaningful use without second session
- solo usage vs team adoption
- feature usage that correlates with repeat workflows

Ask:

- What is the single biggest drop between signup, setup, first meaningful action, and second session?
- Do users reach value quickly and leave because the workflow is complete, or because the value is shallow?
- Does collaboration, sharing, or team setup materially improve retention?

## Content or Community Site

Look for:

- read without subscribe
- subscribe without return
- read without contribute
- contribute once without repeat contribution

Ask:

- Which acquisition paths produce repeat readers instead of one-session visitors?
- Why are readers not becoming repeat contributors?
- Does engagement deepen after following, replying, saving, or joining a thread or community?

## Marketplace or Transactional Product

Look for:

- browse without inquiry
- inquiry without transaction
- first transaction without repeat transaction
- supply-side and demand-side activation moving at different speeds

Ask:

- Is the bottleneck on supply creation, demand matching, or transaction completion?
- Which side of the marketplace is value-constrained?
- Do successful first transactions lead to durable repeat behavior?

## Ecommerce or Conversion-Heavy Site

Look for:

- landing without product engagement
- product engagement without cart
- cart without checkout completion
- first purchase without repeat purchase

Ask:

- Where is the biggest conversion leak between landing and purchase?
- Is the leak a traffic quality issue, a merchandising issue, a checkout issue, or a post-purchase value issue?
- Which cohorts show strong first purchase but weak repeat purchase?

## AI Product

Look for:

- prompt or upload without useful output
- useful output without follow-through
- one-off generations without repeat workflow use
- user delight that does not become habit

Ask:

- Are users getting one-off value without becoming retained users?
- Does generation accelerate a larger workflow or substitute for it?
- Do users revise, share, export, or build on outputs, or just sample them?

## Common Failure Modes

- "Too expensive" with weak usage often means low perceived value, not pure price sensitivity.
- Heavy onboarding activity with weak return usage often means first value is shallow.
- High surface activity with weak repeat value can mean the product is used as a disposable tool, not a durable workflow.
- Missing events are part of the story, not an excuse to skip the analysis.
129 changes: 129 additions & 0 deletions skills/app-growth-investigator/references/funnel-patterns.md
# Funnel Patterns

Use this reference to pick the right funnel family for the job-to-be-done and to define a bottleneck worth studying.

## Bottleneck Rules

- Start with the narrowest question:
- where is the biggest drop?
- where is the longest delay?
- where do users stall even though upstream volume is healthy?
- Quantify both:
- step conversion rate
- absolute user count lost at each step
- The bottleneck is usually the step with the highest combination of:
- large percentage drop
- large absolute volume loss
- long delay
- strategic importance to value realization

Do not spend equal time on every step. Most of the work should go into explaining the bottleneck.
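
One way to operationalize "highest combination" is a simple blended score. The equal weighting and all the per-step numbers below are an illustrative heuristic, not a standard formula; in practice the weights should reflect the product's economics:

```python
# Hypothetical per-step measurements: percentage drop, absolute users
# lost, and median delay in hours. Numbers are illustrative.
steps = [
    {"step": "visit -> signup", "drop_pct": 0.70, "lost_users": 7_000, "median_delay_h": 0.1},
    {"step": "signup -> first_value", "drop_pct": 0.55, "lost_users": 1_650, "median_delay_h": 26.0},
    {"step": "first_value -> second_session", "drop_pct": 0.40, "lost_users": 540, "median_delay_h": 90.0},
]

def bottleneck_score(s, max_loss, max_delay):
    # Equal-weight blend of percentage drop, normalized absolute loss,
    # and normalized delay; tune the weights for the real product.
    return (s["drop_pct"]
            + s["lost_users"] / max_loss
            + s["median_delay_h"] / max_delay) / 3

max_loss = max(s["lost_users"] for s in steps)
max_delay = max(s["median_delay_h"] for s in steps)
ranked = sorted(steps, key=lambda s: bottleneck_score(s, max_loss, max_delay), reverse=True)
print(ranked[0]["step"])
```

Strategic importance to value realization resists a formula, so treat a score like this as a tiebreaker, not a verdict.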

## Activation Funnels

Use when the question is whether new users reach first value.

Common steps:

- first visit or signup
- setup or onboarding start
- first meaningful action
- first value event
- second session or second value event

Good metrics:

- signup to first meaningful action
- setup completion rate
- first value event rate
- median time from signup to first value
- second-session return after first value
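
Median time from signup to first value, one of the metrics above, can be sketched like this. The timestamp pairs are hypothetical; users who never reach first value are excluded from the median but reported in the rate:

```python
from datetime import datetime
from statistics import median

# Hypothetical (signup, first_value) timestamp pairs; None means the
# user never reached first value in the observation window.
users = [
    ("2024-03-01T09:00", "2024-03-01T09:40"),
    ("2024-03-01T10:00", "2024-03-02T10:00"),
    ("2024-03-02T08:00", None),
    ("2024-03-02T12:00", "2024-03-02T12:05"),
]

def median_hours_to_first_value(pairs):
    hours = []
    for signup, first_value in pairs:
        if first_value is None:
            continue  # counted in the denominator, excluded from timing
        delta = datetime.fromisoformat(first_value) - datetime.fromisoformat(signup)
        hours.append(delta.total_seconds() / 3600)
    return {
        "first_value_rate": len(hours) / len(pairs),
        "median_hours": median(hours),
    }

print(median_hours_to_first_value(users))
```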

## Retention Funnels

Use when the question is whether users build a durable habit or repeat the core job.

Common steps:

- first value event
- day-1 return
- day-7 return
- day-30 return
- repeat value event

Good metrics:

- retention after first value
- share of users with repeat value within 7 or 30 days
- time to second core action
- repeat behavior by cohort, traffic source, or build
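
Retention after first value can be measured several ways; the sketch below uses "active on or after day N", with hypothetical activity data. Whichever definition you pick, state it explicitly in the report:

```python
from datetime import date

# Hypothetical per-user activity: first-value date plus later active days.
users = {
    "u1": {"first_value": date(2024, 3, 1), "active": [date(2024, 3, 2), date(2024, 3, 8)]},
    "u2": {"first_value": date(2024, 3, 1), "active": []},
    "u3": {"first_value": date(2024, 3, 2), "active": [date(2024, 3, 3)]},
}

def retained_on_day(users, n):
    """Share of users active on day n or later after their first value event."""
    hits = sum(
        1 for u in users.values()
        if any((d - u["first_value"]).days >= n for d in u["active"])
    )
    return hits / len(users)

print(retained_on_day(users, 1), retained_on_day(users, 7))
```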

## Collaboration or Network Funnels

Use when value deepens after another person engages.

Common steps:

- user creates, invites, or shares
- second actor opens or responds
- second actor performs a meaningful action
- original actor returns
- multi-actor repeat usage

Good metrics:

- share or invite rate
- second-actor engagement rate
- return rate after a second actor engages
- repeat collaboration vs one-off collaboration

## Monetization Funnels

Use when the question is about trial, upgrade, paywall pressure, or churn-like behavior.

Common steps:

- pricing page or purchase intent
- trial start or checkout start
- activation before billing
- paid conversion or upgrade
- renewal or non-renewal

Good metrics:

- trial start to first value
- first value before upgrade or billing
- upgrade rate after value
- same-day or first-cycle cancellation
- usage before cancellation vs retained paid users

Key question:

- Are churn complaints actually caused by weak value realization before billing?
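
A first cut at that key question is simply comparing value realized before billing across the two outcomes. The usage counts below are hypothetical:

```python
from statistics import mean

# Hypothetical counts of value events in the first billing cycle, split
# by whether the account later cancelled.
churned_usage = [0, 1, 0, 2, 0]
retained_usage = [5, 9, 4, 12, 7]

def value_gap(churned, retained):
    # A large gap suggests "too expensive" complaints are really a
    # value-realization problem, not pure price sensitivity.
    return {"churned_mean": mean(churned), "retained_mean": mean(retained)}

print(value_gap(churned_usage, retained_usage))
```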

## Content, Marketplace, or Transaction Funnels

Use when the core loop is creation, publishing, listing, browsing, purchasing, or booking.

Common steps:

- post, publish, list, browse, or land
- engage, inquire, add to cart, respond, or begin checkout
- purchase, book, subscribe, or transact
- repeat purchase, repeat creation, or repeat response

Good metrics:

- landing to purchase
- list or post to first response
- browse to cart or checkout
- first transaction to repeat transaction
- supply-side activation vs demand-side activation

## Interpretation Patterns

- First-session drop-offs often point to expectation mismatch or onboarding friction.
- Fast first value plus weak repeat usage can mean the job is naturally infrequent or that the product is too disposable.
- Strong activation but weak conversion can mean value exists but pricing, timing, or packaging is off.
- Strong conversion but weak retention can mean the promise sells but the workflow does not become durable.
- A noisy downstream step is rarely the real constraint if an earlier step loses most users first.
78 changes: 78 additions & 0 deletions skills/app-growth-investigator/references/source-patterns.md
# Source Patterns

Use this reference to decide which system is authoritative for a claim and how to work across common product-data stacks.

## Source Categories

- Product database or warehouse
- Best for users, accounts, workspaces, teams, projects, content items, orders, bookings, and subscription state.
- Analytics events
- Best for paths, screens, clicks, sessions, event timing, step completion, and client-side drop-off.
- Billing or revenue system
- Best for trial starts, upgrades, renewals, cancellations, refunds, and plan mix.
- Release or experiment history
- Best for pre/post rollout cohorts, exposed vs unexposed users, and change attribution.
- Support feedback or surveys
- Best for turning complaints or qualitative claims into hypotheses to test behaviorally.
- Logs
- Best for backfilling missing instrumentation or verifying whether an event path exists at all.

## Authority Rules

- Name the source that is authoritative for each claim.
- Do not average disagreeing systems. Quantify the gap.
- When the event stream and entity truth differ, say which one answers the question better.
- Keep an `uncertain` bucket when identity stitching, bot filtering, or source coverage is incomplete.
- Always report excluded traffic volume when internal, test, bot, or automation filters materially change the denominator.

## Common Stack Patterns

### Product DB + PostHog

- Use the database for user, account, workspace, content, order, or subscription truth.
- Use PostHog for funnel sequencing, page or screen paths, and timing cliffs.
- Reconcile identity carefully when anonymous traffic later becomes signed-in.
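
The identity-reconciliation point above can be sketched as an alias join that keeps unmatched traffic in an explicit `uncertain` bucket. The alias table and events are hypothetical stand-ins for whatever identity mapping the stack actually provides:

```python
# Hypothetical alias table mapping anonymous analytics IDs to database
# user IDs, plus a few illustrative events.
aliases = {"anon_1": "user_42", "anon_2": "user_7"}
events = [
    {"anon_id": "anon_1", "event": "pageview"},
    {"anon_id": "anon_2", "event": "signup"},
    {"anon_id": "anon_9", "event": "pageview"},  # never identified
]

def stitch(events, aliases):
    """Split events into matched (with user_id) and uncertain buckets."""
    matched, uncertain = [], []
    for e in events:
        user_id = aliases.get(e["anon_id"])
        if user_id is None:
            uncertain.append(e)  # kept and reported, not dropped
        else:
            matched.append({**e, "user_id": user_id})
    return matched, uncertain

matched, uncertain = stitch(events, aliases)
print(len(matched), len(uncertain))
```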

### Product DB + Amplitude

- Use the database for canonical entities and monetization joins.
- Use Amplitude for pathing, retention curves, and behavioral cohorts.
- Be explicit about which Amplitude events are client-side proxies vs backend-confirmed actions.

### Product DB + Mixpanel

- Use the database for authoritative state transitions and historical truth.
- Use Mixpanel for event-level funnels and repeated behavior patterns.
- Watch for duplicate event names or mixed client/server instrumentation under one label.

### GA4 + Backend Events

- Use GA4 for traffic source, landing behavior, and broad site conversion flow.
- Use backend events or the database for account creation, checkout completion, fulfillment, and durable value events.
- Expect attribution gaps between anonymous web traffic and logged-in product usage.

### Stripe + Product Usage

- Use Stripe for trial, paid conversion, cancellations, renewals, and refunds.
- Use product usage data for value realization before and after billing moments.
- Test whether churn-like complaints map to weak usage before billing, short one-session use, or successful one-time completion.
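
That last test can be sketched as a classifier over pre-billing behavior. The category names and thresholds below are illustrative assumptions, not standard definitions:

```python
# Hypothetical classification of cancelled accounts by pre-billing
# behavior; thresholds are illustrative and should be tuned per product.
def classify_cancellation(sessions_before_billing, value_events):
    if value_events == 0:
        return "never_activated"
    if sessions_before_billing <= 1:
        return "one_session_sample"
    if value_events >= 3 and sessions_before_billing <= 3:
        return "job_completed_once"
    return "faded_after_value"

for args in [(0, 0), (1, 2), (3, 5), (10, 2)]:
    print(args, classify_cancellation(*args))
```

Each bucket implies a different lever: "never_activated" is an onboarding problem, "job_completed_once" may be a packaging problem (one-time pricing instead of a subscription), and "faded_after_value" points at retention.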

### Warehouse-First Setups

- Use warehouse models as the reporting surface, but still state which upstream source owns each field.
- Check model freshness before drawing pre/post rollout conclusions.
- If model logic changed recently, treat the change itself as part of release context.

## Release Context Questions

- What shipped recently in the funnel being studied?
- What copy, pricing, paywall, onboarding, or experiment changes define the relevant cohorts?
- Which high-traffic surfaces have not changed in weeks?
- If the metric moved without a related product change, is traffic mix, seasonality, or instrumentation a better explanation?

## Output Discipline

- Always show denominators.
- Always state the time window.
- Separate signal from speculation.
- If a metric is based on a proxy, say so plainly.