TL;DR

Most product-led growth teams have solved the first half of the adoption problem. They have an activation event, a behavior-triggered onboarding flow, and cohort data showing that trial-to-paid conversion is moving. What they have not built is the second half: the adoption programs that take retained users and guide them toward the behavioral milestones that predict upgrade intent and seat expansion. That gap is structural, not tactical, and it is why PLG motions stall after initial conversion and why expansion revenue keeps depending on sales plays the product should be running on its own. This article gives VPs of PLG the framework, the execution model, and the attribution methodology to close that gap. You will walk away with a two-motion flywheel that maps the specific behavioral signals predicting upgrade intent in your product, a three-component in-product program for turning those signals into expansion revenue without sales involvement, and a three-layer attribution model that connects PLG adoption programs to NRR in a format leadership trusts and funds.


Your PLG dashboard looks strong. Signups are up. Activation rates have improved for three consecutive quarters. Trial-to-paid conversion is moving in the right direction. And NRR is sitting at 104%.

That number is the problem. Not because 104% is a failure. Because 104% means your product is growing users and barely growing revenue from them. Expansion is not compounding through the product. It is trickling in through sales conversations that should not need to exist, or it is not happening at all.

The instinct is to look at the activation program. Run another A/B test on the onboarding flow. Tighten the day-seven trigger logic. Improve the checklist completion rate. These are reasonable responses to a retention problem. They are the wrong responses to an expansion problem, and confusing the two is exactly how PLG teams spend four quarters improving activation metrics while NRR stays flat.

The activation-to-retention motion and the retention-to-expansion motion are not the same program. They target different user behaviors, fire on different signals, require different intervention types, and produce different business outcomes. Most PLG teams have a documented strategy for the first motion. Almost none have one for the second. This article builds it. It does not cover the organizational strategy layer that connects product, CS, and growth around a shared adoption operating model. That ground is covered in product adoption strategy. It does not cover activation measurement fundamentals, which live in how to measure product adoption. What follows is specifically the expansion problem: why the PLG flywheel stalls after activation, and how to build the adoption programs that make it compound.

Why activation-focused adoption strategies stall PLG growth

There is a version of this problem every VP of PLG recognizes. The activation program is working. The onboarding flow is tighter than it was six months ago. Day-seven retention has improved. The weekly review shows green. And the quarterly NRR number arrives and it has not moved.

The explanation is not that the activation program failed. It is that activation and expansion are governed by completely different user behaviors, and a program built to drive one will not automatically produce the other.

They measure individual users, not account-level adoption depth

Activation measurement is almost always user-level. A user reaches the activation event and counts as a win. That measurement is correct for the activation stage. At the expansion stage, the unit of analysis is the account, not the individual. An account where one user has activated deeply and seven colleagues have never logged in looks identical to an expansion-ready account in user-level activation data. Those two accounts have completely different renewal and expansion trajectories, and a user-level adoption program cannot tell them apart.

They stop at first value, not at expansion-predictive behavior

The activation rate metric captures whether a user reached their first value moment. It says nothing about whether that user is moving toward the behaviors that predict upgrade intent: higher-tier feature usage, multi-workflow adoption, collaborative patterns that raise switching costs across the account. These are separate behavioral milestones. They require a separate strategy. A PLG team that optimizes for activation and then waits for expansion to follow is optimizing for the wrong outcome at the wrong stage.

They have no replicable model for what drives expansion

Most PLG teams can identify in retrospect which accounts expanded. They cannot reliably predict which accounts will expand next, or which behavioral sequence consistently produces that outcome. Without a documented expansion behavior model, every expansion conversation depends on a CS rep's intuition or a sales play that the product should be running on its own. That dependency is precisely the board credibility problem the VP of PLG cannot resolve with an activity report full of activation metrics.

📖 Your activation program is working. Your NRR isn't moving. These 19 tactics cover the adoption programs that bridge that gap — mapped to the exact drop-off point each one addresses → Get started with our free playbook.

The PLG adoption flywheel — two motions, one compounding system

The PLG Adoption Flywheel has two motions. Motion one is the activation-to-retention loop. Motion two is the retention-to-expansion loop. Both are required for the flywheel to compound. A team running only motion one generates user growth and flat NRR. A team running both generates user growth that converts into expansion revenue the product drives on its own.

Motion One: Activation to Retention

The entry point is signup. The goal is a user reaching the activation event and returning to the product habitually. The primary lever is behavior-triggered in-product onboarding guidance. Success is measured at the user level through activation rate at day seven and cohort retention at day 30.

Motion Two: Retention to Expansion

The entry point is a retained, habitually active user. The goal is that user reaching expansion-predictive behaviors: higher-tier feature access attempts, multi-seat collaborative patterns, usage ceiling behaviors that signal the product has become load-bearing in their workflow. The primary lever is contextual in-product expansion nudges triggered by product-qualified lead signals. Success is measured at the account level through seat adoption rate, PQL-to-upgrade conversion, and NRR by PLG cohort.

The motion does not start when sales decides to reach out. It starts when the product identifies the behavioral signal that precedes upgrade intent and acts on it. For this to run at scale without a sales conversation for every account, three things are required: a documented expansion behavior model, a behavioral trigger definition, and an in-product execution layer that responds to the trigger without an engineering sprint.

Why the flywheel only spins when both motions run

A PLG team running motion one and not motion two is collecting retained users and then handing them to sales to convert. That is not a product-led expansion motion. That is an acquisition motion with a sales-dependent monetization layer bolted on. The flywheel spins when motion two is operational: when retained users encounter in-product experiences that guide them toward expansion milestones, and when the product generates the commercial signal that tells CS exactly which accounts are ready for a conversation and which ones the product is still handling on its own.

The compounding effect is what separates a PLG motion with 104% NRR from one with 125%. Both teams have activation programs. Only one has built the retention-to-expansion loop that turns habitual product use into durable expansion revenue.

How to build an expansion behavior model

Most PLG teams skip this step. They move from "we need an expansion program" directly to "let's build an in-product nudge" without first documenting which user behaviors actually predict that an account will upgrade or expand seats. The nudge gets built, it fires, and conversion does not move — because it was targeting the wrong moment in the wrong behavioral sequence.

The expansion behavior model is the foundation that makes everything else in motion two reliable. It is the expansion equivalent of the activation event definition: a short, validated list of specific user and account behaviors that historically correlate with upgrade intent and seat expansion in your product. Without it, the expansion program is a guess. With it, every nudge, every trigger condition, and every CS handoff is grounded in evidence.

The three categories of expansion-predictive behavior

For most B2B SaaS products, the behaviors worth analyzing fall into three categories. These are the search space for the analysis, not a guaranteed list. The specific signals that matter in your product will emerge from the data.

Tier feature usage. A user who navigates to a feature gated to a higher plan has demonstrated upgrade intent through product behavior. This signal is more reliable than any survey response because it is self-reported through action rather than stated preference. Users who reached a paywall and continued trying to access the feature in subsequent sessions are a particularly high-confidence cohort.

Multi-seat collaborative patterns. Users who regularly share work with colleagues, assign tasks to teammates, or trigger product-generated notifications that reach non-users are creating account-level expansion surface. When this pattern crosses a threshold — three or more non-users receiving product-generated activity in a 30-day window, for example — the account has expansion readiness that a targeted in-product experience can activate. The specific threshold will vary by product and usage cadence, and should be validated against your own expansion cohort data rather than assumed.

Usage ceiling behavior. Users who repeatedly hit plan limits and continue trying to operate past them have made the product genuinely load-bearing in their workflow. Any constraint the user hits more than once in a billing period, whether an export limit, seat limit, API call limit, or storage limit, is a high-confidence expansion signal. Most PLG teams respond to this signal only through outbound sales. The expansion adoption program responds to it the moment it fires, inside the product, without waiting for a sales rep to notice the account in a usage report.

How to identify your specific expansion signals

Pull cohort data for accounts that expanded in the last 12 months. For each expanding account, reconstruct the behavioral sequence from the 30 to 60 days before the upgrade or seat expansion event:

  • Which features did they access in the weeks before expanding?

  • Which plan limits did they hit, and how repeatedly?

  • What collaborative patterns appeared at the account level before the commercial event?

The behaviors appearing consistently across expanding accounts, and significantly less frequently in accounts that did not expand, are your expansion signal candidates. Validate each one by calculating the upgrade conversion rate for accounts that exhibited the behavior versus those that did not:

  • A signal producing 30 or more percentage points of lift in upgrade conversion is worth building an in-product program around.

  • A signal producing less than 10 points of lift is noise and should be dropped regardless of how intuitive it feels.

Two to four validated signals is a normal output from this process. Teams that emerge with ten have not filtered rigorously enough. A narrowly targeted expansion program firing on two high-confidence signals will consistently outperform a broad program firing on eight weak ones.
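The validation step above reduces to a simple lift calculation. The sketch below is a minimal illustration, not a prescription: the account fields, the signal name, and the data shape are all hypothetical placeholders for whatever your own product analytics export looks like.

```python
# Minimal sketch: validate an expansion signal candidate by its
# percentage-point lift in upgrade conversion.
# `accounts` is a hypothetical list of dicts with per-account flags.

def conversion_rate(accounts):
    """Share of accounts in the group that upgraded."""
    return sum(a["upgraded"] for a in accounts) / len(accounts)

def signal_lift(accounts, signal):
    """Percentage-point lift in upgrade conversion for accounts
    that exhibited `signal` versus accounts that did not."""
    with_signal = [a for a in accounts if a[signal]]
    without = [a for a in accounts if not a[signal]]
    return 100 * (conversion_rate(with_signal) - conversion_rate(without))

def classify(lift_pp):
    """Apply the filtering thresholds from the methodology above."""
    if lift_pp >= 30:
        return "build"    # strong enough to build a program around
    if lift_pp < 10:
        return "drop"     # noise, regardless of how intuitive it feels
    return "monitor"      # in between: keep collecting data
```

A signal where 60% of exhibiting accounts upgraded against 20% of non-exhibiting accounts produces a 40-point lift and lands in the "build" bucket; a 5-point lift gets dropped no matter how plausible the behavior sounds.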

What the model produces

Two concrete artifacts come out of this analysis:

  • The expansion signal list. The validated behavioral triggers that will power the in-product expansion program. Each signal has a confirmed lift threshold and a defined trigger condition.

  • The PQL threshold definition. The specific combination of signals, or the single signal strong enough on its own, that qualifies an account as expansion-ready. This threshold feeds the attribution model discussed later in this article and gives CS a data-backed trigger for timing conversations correctly, replacing gut feel and account age as the basis for outreach.
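A PQL threshold definition can be as simple as a documented rule over the validated signal list. The sketch below assumes two illustrative signal names and a "one strong signal, or two in combination" rule; your own validated signals and qualifying rule replace both.

```python
# Minimal sketch: a PQL threshold as an explicit rule over validated signals.
# Signal names and the qualifying rule are illustrative assumptions.

STRONG_SIGNALS = {"repeat_usage_ceiling"}                # qualifies on its own
COMBO_SIGNALS = {"paywall_retry", "multi_seat_collab"}   # qualifies in pairs

def is_pql(account_signals):
    """An account is expansion-ready when one strong signal fires,
    or when two or more combination signals fire together."""
    if account_signals & STRONG_SIGNALS:
        return True
    return len(account_signals & COMBO_SIGNALS) >= 2
```

The value of writing the rule down this explicitly is that CS, product, and finance all read the same definition, which is what makes the attribution model later in this article defensible.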

For the metric instrumentation that connects behavioral events to expansion outcomes in a measurable way, product adoption metrics covers the framework in full.

How to turn expansion signals into in-product adoption programs

You have a validated expansion behavior model. You know which signals predict upgrade intent in your product. The gap most PLG teams hit at this stage is not the strategy. It is execution. The signal fires, and the only way to respond to it is a CS email sent two days later or an engineering ticket sitting in a sprint queue for three weeks. By the time the intervention reaches the user, the intent moment has passed.

The three components below are what make an expansion adoption program operationally self-sufficient: running on product signals, without sales involvement, at the exact moment the intent is demonstrated.

Component 1: Behavior-triggered contextual nudges at the expansion moment

The highest-leverage moment to surface an upgrade experience is not a scheduled email. It is the moment a user hits an expansion signal inside your product: accesses a locked feature, reaches a usage ceiling, or completes a collaborative action that draws a non-user into the product's orbit. At that exact moment, the user has already told your product what they want through their behavior. The nudge does not need to sell. It confirms the intent they just demonstrated and shows them the clearest path forward.

There is a subtler reason this matters at the expansion stage specifically. Users who have made deliberate choices inside your product — who configured something, built something, completed a workflow that reflects their specific context — are materially harder to move to a competitor than users who were fully automated through setup. Harvard researchers call this the IKEA Effect: effort creates attachment, and attachment creates switching cost. The expansion adoption program is not just guiding users toward an upgrade. It is guiding them toward investment in your product itself.

AB Tasty deployed behavior-triggered in-product experiences using Jimo and reached 2,000 users in week one of a new feature release, with a 6x faster launch cycle than their previous approach. The speed came from removing engineering dependency from the execution layer entirely. The PLG team deployed, iterated, and measured without waiting on a sprint.

Component 2: Account-level seat adoption nudges for non-adopting team members

When your power user is deeply adopted and their colleagues are not, the account carries structural expansion risk and renewal risk at the same time. Your individual-level onboarding programs do not reach this problem because they are built to serve the user in front of them, not the account around them.

The in-product program for this scenario targets non-adopting members of an account directly, with feature discovery experiences triggered by account-level behavioral data. When three of eight licensed users in an account have activated and five have not, those five should receive guidance calibrated to where they are in the journey, not the same flow your power user saw on day one. The goal is raising seat adoption depth across the account to the threshold that predicts renewal and expansion before your CS team needs to step in.

Component 3: Suppression logic that protects already-expanded accounts

Expansion nudges shown to users who have already upgraded create friction and erode trust in your in-product communication across the board. Every irrelevant prompt that reaches an already-paying customer at a higher tier trains that user to dismiss future in-product messages, including the ones that matter.

The suppression rule is operationally simple:

  • When an account crosses the upgrade threshold, all expansion-stage nudges stop firing for every user in that account immediately.

  • When a user's individual behavior already reflects higher-tier usage patterns, suppress expansion prompts targeting behaviors they have already demonstrated.

  • Review suppression logic whenever a pricing tier changes or a new feature moves between plan levels. Stale suppression rules are one of the most common sources of irrelevant in-product messaging at scale.
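The first two rules above can be expressed as a single gate that every expansion nudge passes through before firing. This is a sketch under assumed field names (`plan_tier`, `target_tier`, `promotes_behavior`), not a reference implementation.

```python
# Minimal sketch of the suppression rules above. The nudge, user,
# and account shapes are hypothetical.

def should_suppress(nudge, user, account):
    """Return True when an expansion nudge must NOT fire."""
    # Rule 1: the account already crossed the tier this nudge targets,
    # so every user in the account is suppressed.
    if account["plan_tier"] >= nudge["target_tier"]:
        return True
    # Rule 2: the user already demonstrates the behavior the nudge
    # promotes, so the prompt would be irrelevant noise.
    if nudge["promotes_behavior"] in user["demonstrated_behaviors"]:
        return True
    return False
```

The third rule, reviewing the logic whenever a tier or feature gate changes, is a process obligation rather than code: stale values of `target_tier` in rules like this are exactly how irrelevant messaging reaches paying customers at scale.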

This component is consistently underbuilt. Its absence is what makes your expansion program look like it is spamming your best customers with prompts to do something those customers already did.

How to prove PLG adoption ROI to leadership

Your expansion program is running. Signals are firing. Accounts are moving through the upgrade flow. And your board asks: how much of that NRR movement is actually attributable to the PLG adoption program versus natural expansion that would have happened anyway?

That question does not have a defensible answer without a causal attribution model. Activity reports showing nudge impression counts and click-through rates do not answer it. Correlation between your program launch date and an NRR trend line does not answer it. What answers it is a three-layer measurement model that draws a direct line from specific in-product programs to specific commercial outcomes, in a format that survives your finance team's scrutiny.

Layer 1: Program-to-PQL conversion

For each in-product expansion program you run, measure the rate at which users who encountered it went on to exhibit a product-qualified lead signal within 30 days, compared to users in the same behavioral cohort who did not encounter it.

This layer proves your program moved behavior. It is the first link in the causal chain and the one that validates whether your expansion signal targeting logic is working. If users who saw the nudge are not reaching PQL status at a materially higher rate than the control group, your program is either targeting the wrong signal or firing at the wrong moment in the behavioral sequence.

Layer 2: PQL-to-upgrade conversion

Track the upgrade conversion rate for accounts that reached PQL status through product behavior, segmented by which adoption program they encountered on the path to that status.

This layer connects your program to commercial intent. It answers the question that sits between "the program changed behavior" and "the program produced revenue": did the accounts your program moved to PQL status actually convert at a higher rate than PQL accounts the program did not touch? When the answer is yes, the causal chain from program to revenue is established across two measurement points.

Layer 3: PLG cohort NRR

Compare the NRR of cohorts that entered your expansion adoption program against cohorts that did not, holding acquisition month and initial plan constant. This is the number your board is actually asking about. Does investing in PLG adoption programs produce more revenue from your existing user base over time, or does it produce activity metrics that never show up in the financials?
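The layer 3 comparison is mechanically straightforward once accounts are grouped into matched cells. The sketch below assumes per-account MRR at the start and end of the measurement window and illustrative field names; it is a minimal illustration of the matched-cohort comparison, not a full attribution system.

```python
# Minimal sketch: Layer 3 cohort NRR, holding acquisition month and
# initial plan constant. Account fields are illustrative assumptions;
# start_mrr/end_mrr are recurring revenue at the window boundaries.

from collections import defaultdict

def cohort_nrr(accounts):
    """NRR = ending recurring revenue / starting recurring revenue over
    the same account set (churned accounts count as 0 ending MRR)."""
    start = sum(a["start_mrr"] for a in accounts)
    end = sum(a["end_mrr"] for a in accounts)
    return end / start

def nrr_by_program(accounts):
    """Compare program vs control NRR within matched
    (acquisition_month, initial_plan) cells."""
    cells = defaultdict(lambda: {"program": [], "control": []})
    for a in accounts:
        key = (a["acquisition_month"], a["initial_plan"])
        group = "program" if a["in_program"] else "control"
        cells[key][group].append(a)
    return {
        key: {g: cohort_nrr(accs) for g, accs in groups.items() if accs}
        for key, groups in cells.items()
    }
```

Reporting the comparison cell by cell, rather than pooling all accounts, is what protects the number from the "those accounts would have expanded anyway" objection: within each cell, the program and control cohorts were acquired in the same month on the same plan.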

When all three layers are instrumented and producing consistent results across multiple cohorts, you walk into your next planning cycle with a causal model rather than an activity report. The conversation shifts from defending adoption program spend to presenting a documented revenue multiplier on your existing user base.

The PLG adoption program maturity model

Before building anything new, it helps to know precisely where your program stands today. The maturity model below is not a benchmark against other companies. It is a diagnostic for your own motion: what you have, what is missing, and where to focus next.

Level 1: Activation-focused

Your team has a documented activation event, behavior-triggered onboarding guidance for new trial users, and time-to-value measurement by cohort. Trial-to-paid conversion is moving. Day-30 retention is tracked and improving.

What is missing:

  • No expansion behavior model. You do not yet know which specific user behaviors predict upgrade intent in your product.

  • No in-product programs for retained users. The adoption motion stops at activation.

  • No PQL threshold definition. Expansion readiness is assessed by CS intuition, not product signal.

  • No PLG cohort NRR attribution. You cannot connect adoption program activity to revenue outcomes in a format leadership trusts.

Your next priority is the expansion behavior model. Pull the cohort data for accounts that expanded in the last 12 months and run the signal validation methodology from the previous section. That analysis is the foundation everything else in motion two is built on. Building in-product nudges before completing it means targeting behavioral moments you have not yet validated as predictive.

Level 2: Expansion-aware

You have an expansion behavior model. At least one in-product expansion nudge is live and measured. Your team has a working PQL threshold definition and some visibility into PQL-to-upgrade conversion rates.

What is missing:

  • Account-level seat adoption programs are not yet running. Non-adopting team members within expansion-ready accounts are not receiving targeted in-product guidance.

  • Suppression logic is incomplete or absent. Some users are receiving expansion nudges after their account has already upgraded.

  • PLG cohort NRR is not yet connected to specific program attribution. You can show NRR movement but cannot prove causation to leadership.

Your next priority is account-level seat adoption programs, followed immediately by suppression logic. Running expansion nudges without suppression logic actively damages trust in your in-product communication channel, and that damage compounds over time as your user base grows.

Level 3: Compounding

Both motions of the PLG Adoption Flywheel are running. Expansion programs fire on validated behavioral signals. Your three-layer attribution model connects specific programs to PQL conversion, upgrade conversion, and PLG cohort NRR. Leadership treats your PLG adoption investment as a documented revenue multiplier, not a cost center with activity metrics.

At this level, the constraint is no longer strategy or measurement. It is execution velocity. The question your team is asking is: how quickly can we move from identifying a new expansion signal to having a live in-product program responding to it? Every week that gap exists is a week of intent moments passing without a product response.

For PLG teams at Level 3, removing engineering dependency from the adoption execution layer is the single highest-leverage operational investment available. When your team can build, deploy, and iterate on expansion programs without filing tickets, the flywheel accelerates on the same user base you already have.

Your PLG motion is only half built

You have the acquisition. You have the activation program. You have cohort data showing the first motion of the flywheel is working. What you do not yet have is the system that takes that retained user base and compounds it into expansion revenue the product drives on its own.

That is not a gap in your PLG philosophy. It is a gap in your adoption program. And it is a gap with a specific build path: validate the behavioral signals that predict upgrade intent in your product, build in-product programs that respond to those signals at the moment they fire, and instrument the three-layer attribution model that connects every program decision to a number your board recognizes as revenue.

The VP of PLG who walks into a planning cycle with that attribution model does not spend the meeting defending adoption program spend. They spend it presenting a documented case for why the existing user base is the highest-return growth surface available, and exactly what the product is doing to convert it.

Jimo is built for the execution layer this motion requires: behavior-triggered in-product experiences that respond to expansion signals the moment they fire, deployed without engineering involvement, and measured in a way that connects guidance interactions to NRR outcomes. Try Jimo today and build your first expansion-stage adoption program this week. 

If you are running a team at a scale where self-serve validation is not sufficient, see Jimo in action and explore what the compounding motion looks like for your product.

FAQs

What is the difference between user adoption strategies and product-led growth strategies?

Product-led growth is a go-to-market motion: the product is the primary driver of acquisition, conversion, and expansion. User adoption strategies are the programs that make that motion work at each stage of the user journey. PLG defines how you grow. User adoption strategies define how you guide users through the behaviors that make that growth compound. A PLG motion without documented adoption strategies is a distribution model without an execution layer.

Why does improving activation rates not always improve NRR?

Activation and expansion are governed by different user behaviors at different stages of the journey. Activation measures whether an individual user reached their first value moment. NRR measures whether your existing user base is generating more revenue over time through upgrades and seat expansion. A user can activate reliably and still churn before an expansion conversation is possible. NRR moves when the retention-to-expansion motion is running, not just the activation-to-retention motion. Improving activation is necessary. It is not sufficient.

How do you identify which user behaviors reliably predict upgrade intent?

Pull cohort data for accounts that expanded in the last 12 months and reconstruct the behavioral sequence from the 30 to 60 days before each commercial event. The behaviors appearing consistently across expanding accounts, and significantly less frequently in accounts that did not expand, are your expansion signal candidates. Validate each candidate by calculating the upgrade conversion rate for accounts that exhibited the behavior versus those that did not. Signals producing 30 or more percentage points of lift in upgrade conversion are worth building programs around. Signals producing less than 10 points of lift should be dropped.

What is a product-qualified lead and how does it connect to expansion adoption programs?

A product-qualified lead is an account that has demonstrated expansion readiness through product behavior rather than through sales engagement or marketing activity. The PQL threshold is defined by the specific combination of behavioral signals your expansion behavior model identifies as predictive of upgrade intent. Your expansion adoption program is the mechanism that moves accounts toward that threshold through in-product experiences. The PQL signal is what tells CS when the product has done its job and a conversation will land on receptive ground.

How do you prove to a board that PLG adoption programs are driving expansion revenue rather than just activity?

The three-layer attribution model in this article is the methodology. Layer one connects program exposure to PQL conversion, proving the program moved behavior. Layer two connects PQL status to upgrade conversion, proving the program produced commercial intent. Layer three compares PLG cohort NRR for program participants against non-participants, holding acquisition month and initial plan constant, proving the program produced revenue. When all three layers produce consistent results across multiple cohorts, the causal chain from program investment to NRR is defensible in a board conversation.

How long does it take for an expansion-stage adoption program to produce measurable NRR impact?

Layer one signal, program-to-PQL conversion, is typically visible within 30 days of launch for a program targeting an active user base. Layer two signal, PQL-to-upgrade conversion, requires 60 to 90 days to accumulate enough conversion events to be statistically meaningful. Layer three signal, PLG cohort NRR, requires at least two full billing cycles to show movement and three to four cycles to establish a trend. Plan for a 90-day window before presenting the full attribution model to leadership, and use layer one and layer two data to maintain confidence in the program during that period.

When should CS get involved in an account your in-product expansion program has already flagged?

CS involvement should be triggered by the PQL threshold, not by account age or a calendar schedule. When an account reaches PQL status and the in-product expansion program has not produced an upgrade within a defined window, typically 14 to 21 days depending on your sales cycle, that is the CS handoff trigger. The product has demonstrated intent. CS closes the gap. Involving CS before the PQL threshold is reached pulls human resources into accounts the product is still capable of converting on its own, which is what makes CS time expensive and PLG economics harder to defend.

Is fully automated onboarding better for expansion, or does user effort play a role in retention?

User effort plays a significant role, particularly at the expansion stage. Harvard researchers found that people who assemble something themselves value the result materially more than people who receive the same thing pre-built. In a product context, users who made deliberate choices inside your product, who configured workflows, built reports, or completed sequences that reflect their specific context, carry meaningfully higher switching costs than users who were fully automated through setup. The best expansion adoption programs guide users to invest in the product, not just use it. Speed to value matters. So does the ownership that comes from doing something yourself.

Author

Thomas Moussafer

Co-Founder @ Jimo

Level-up your onboarding in 30 mins

Discover how you can transform your product with experts from Jimo in 30 mins
