Calculations

Plain-language explanation of how Deal Pulse scores are calculated

Deal Pulse calculates a 0-100 score by combining five categories. This page explains the calculation in plain language without technical jargon.

The Big Picture

Your Deal Pulse score is like a weighted average of five categories:

text
Overall Score =
  Engagement (50%) +
  Collaboration (25%) +
  Diversity (10%) +
  Organization (10%) +
  Communication (5%)

Translation: Engagement matters most (half your score), followed by Collaboration (a quarter of your score). The other three categories matter, but each has far less impact.


How Category Scores Work

Each category gets its own 0-100 score, then those scores are combined based on their weight.

Example:

text
Engagement: 70 (weight: 50%)
Collaboration: 60 (weight: 25%)
Diversity: 50 (weight: 10%)
Organization: 40 (weight: 10%)
Communication: 30 (weight: 5%)

Overall = (70 × 0.50) + (60 × 0.25) + (50 × 0.10) + (40 × 0.10) + (30 × 0.05)
        = 35 + 15 + 5 + 4 + 1.5
        = 60.5 → rounds to 61

Key insight: A solid Engagement score (70) carries your overall score even when the other categories are weak.
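
In code, the roll-up looks roughly like this. This is a minimal sketch: the weights come from this page, while the dictionary shape and the half-up rounding (inferred from the 60.5 → 61 example) are assumptions, not the production implementation.

python
import math

# Category weights as documented above.
WEIGHTS = {
    "engagement": 0.50,
    "collaboration": 0.25,
    "diversity": 0.10,
    "organization": 0.10,
    "communication": 0.05,
}

def overall_score(category_scores: dict) -> int:
    """Combine per-category 0-100 scores into one rounded 0-100 score."""
    weighted = sum(category_scores[name] * w for name, w in WEIGHTS.items())
    return math.floor(weighted + 0.5)  # round half up, matching 60.5 -> 61

print(overall_score({
    "engagement": 70, "collaboration": 60,
    "diversity": 50, "organization": 40, "communication": 30,
}))  # 61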


Recent Activity Matters More

The system gives more credit to recent activity than older activity.

Time windows:

  • Last 7 days: Full credit (100%)
  • Last 30 days: 85% credit
  • Last 90 days: 70% credit

What this means:

  • Activity from last week counts more than activity from last month
  • Activity from 2 months ago counts less than activity from 2 weeks ago
  • Activity older than 90 days doesn't count at all

Example - Comments:

text
3 comments last week:      3 × 100% = 3.0
2 comments 3 weeks ago:    2 × 85%  = 1.7
5 comments 2 months ago:   5 × 70%  = 3.5
Total weighted: 8.2 comments (not 10)

Why? Recent activity better reflects current deal health than old activity.
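
A minimal sketch of the window-based weighting, using the credits above. The bucketing function and the list-of-ages input are illustrative; the real system aggregates activity in the database rather than per event.

python
def decay_credit(age_days: int) -> float:
    """Credit multiplier for an activity of a given age, per the windows above."""
    if age_days <= 7:
        return 1.00
    if age_days <= 30:
        return 0.85
    if age_days <= 90:
        return 0.70
    return 0.0  # older than 90 days: not counted

def weighted_count(activity_ages_in_days) -> float:
    return sum(decay_credit(age) for age in activity_ages_in_days)

# The comment example above: 3 last week, 2 three weeks ago, 5 two months ago.
print(round(weighted_count([3, 3, 3, 21, 21, 60, 60, 60, 60, 60]), 1))  # 8.2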


Buyer Actions Count More

Buyer activity gets 1.5x the credit of seller activity.

What this applies to:

  • Buyer comments vs seller comments
  • Buyer logins vs general activity
  • First-time buyer actions

Example:

text
Seller adds 10 comments:  10 × 1.0 = 10 points
Buyer adds 10 comments:   10 × 1.5 = 15 points

Why? Buyer engagement is harder to get and more indicative of real deal health.
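
A minimal sketch of that multiplier (the 1.5x factor is from this page; the role labels are illustrative):

python
BUYER_MULTIPLIER = 1.5

def comment_points(count: int, role: str) -> float:
    """Weight an activity count by role, as described above."""
    return count * (BUYER_MULTIPLIER if role == "buyer" else 1.0)

print(comment_points(10, "seller"))  # 10.0
print(comment_points(10, "buyer"))   # 15.0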


How Each Category Calculates

Engagement (50% of overall score)

What it combines:

  1. Meeting activity (70% of Engagement) - Past and upcoming meetings
  2. Buyer logins (15% of Engagement) - How often buyers access Decision Site
  3. Recent activity (10% of Engagement) - Days since last activity
  4. Buyer milestones (5% of Engagement) - First buyer view, first contact add

How meetings work:

  • Either past OR future meetings can drive the score (whichever is higher)
  • Future meetings count for 80% of what past meetings count for
  • More recent meetings count more than older ones

How recency works:

  • Activity today: 100 points
  • Activity 15 days ago: 50 points
  • Activity 30+ days ago: 0 points
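
The three anchor points above are consistent with a linear ramp from 100 at day 0 down to 0 at day 30. Treat the interpolation as an assumption; only the three documented points are confirmed.

python
def recency_points(days_since_last_activity: float) -> float:
    """Linear ramp matching the documented points: day 0 = 100, day 15 = 50, day 30+ = 0."""
    if days_since_last_activity >= 30:
        return 0.0
    return 100.0 * (1.0 - days_since_last_activity / 30.0)

print(recency_points(0))   # 100.0
print(recency_points(15))  # 50.0
print(recency_points(45))  # 0.0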

Example of a strong Engagement score:

  • 2 meetings last month, 1 meeting scheduled next week
  • Buyers logging in weekly
  • Last activity yesterday
  • Buyer viewed Decision Site and added a contact

Collaboration (25% of overall score)

What it combines:

  1. Milestone activity (40% of Collaboration) - Adding, updating, completing milestones
  2. Action item activity (40% of Collaboration) - Adding, completing tasks
  3. Template usage (10% of Collaboration) - Applying mutual plan templates
  4. Mutual plan started (10% of Collaboration) - One-time bonus for starting a plan

How it works:

  • All milestone actions count (add, update, complete)
  • Action items count when added or completed (not just updates)
  • Using a template gives bonus points
  • Starting a mutual plan gives a one-time boost

Example of a strong Collaboration score:

  • Active mutual plan with 5+ milestones
  • Milestones being updated regularly
  • Action items being created and completed
  • Used a template to structure the plan

Diversity (10% of overall score)

What it combines:

  1. Number of buyers (30% of Diversity) - Contacts with buyer roles
  2. Email domains (25% of Diversity) - Different companies represented
  3. Total contacts (20% of Diversity) - Overall stakeholder count
  4. Departments (12.5% of Diversity) - Different departments involved
  5. Job titles (12.5% of Diversity) - Different roles represented

How it works:

  • More contacts = higher score (full credit at 5 or more)
  • Multiple buyers better than single buyer
  • Multiple companies (domains) shows broader involvement
  • Cross-departmental engagement boosts score

Example of a strong Diversity score:

  • 5+ contacts from buyer side
  • 2-3 different companies involved
  • IT, Finance, and Operations departments represented
  • Mix of roles (decision maker, technical evaluator, executive sponsor)

Organization (10% of overall score)

What it combines:

  1. Task organization (60% of Organization) - % of action items with assignee or due date
  2. Artifact uploads (20% of Organization) - One-time bonus for uploading content
  3. Balanced completion (20% of Organization) - Both buyer and seller completing tasks

How it works:

  • System checks: does each action item have an assignee OR a due date?
  • Percentage of organized tasks becomes the score
  • Uploading artifacts shows preparation (one-time boost)
  • Both sides completing tasks shows joint effort
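
A minimal sketch of the assignee-or-due-date check. The rule and the percentage-to-score mapping come from this page; the task shape and field names are illustrative.

python
def task_organization_score(tasks: list) -> float:
    """Percentage of action items that have an assignee or a due date."""
    if not tasks:
        return 0.0
    organized = sum(1 for t in tasks if t.get("assignee") or t.get("due_date"))
    return 100.0 * organized / len(tasks)

tasks = [
    {"assignee": "maria"},        # organized
    {"due_date": "2025-02-01"},   # organized
    {},                           # neither: not organized
]
print(round(task_organization_score(tasks)))  # 67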

Example of a strong Organization score:

  • 90% of action items have assignees or due dates
  • Shared proposals, case studies, technical docs
  • Buyers and sellers both completing their assigned tasks

Communication (5% of overall score)

What it combines:

  1. Buyer comments (40% of Communication) - Comments from buyers (1.5x weight)
  2. Seller comments (40% of Communication) - Comments from sellers
  3. Dialogue balance (20% of Communication) - Both sides participating equally

How it works:

  • Buyer comments worth more than seller comments
  • Balance bonus rewards back-and-forth dialogue
  • One-sided conversations (only seller or only buyer) get lower scores

Example of a strong Communication score:

  • Buyers commenting on milestones and artifacts
  • Sellers responding to questions
  • Regular back-and-forth dialogue
  • Both sides roughly equal in participation

Normalization (Preventing Outliers)

The system caps activity counts to prevent extreme values from distorting scores.

Caps:

  • Sessions per buyer: capped at 5
  • Comments or tasks: capped at 20
  • Contacts: capped at 5
  • Meetings: capped at 3

What this means:

  • Having 10 meetings doesn't give you 3x the score of 3 meetings
  • System normalizes to expected ranges
  • Prevents gaming with excessive activity

Example:

text
You have 7 meetings → normalized to 3 for scoring
You have 30 comments → normalized to 20 for scoring

Why? Quality matters more than quantity. A normal deal with 3 meetings shouldn't be penalized against an outlier with 10.
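
A minimal sketch of the capping step. The cap values are from this page; the metric names and the clamp helper are illustrative.

python
CAPS = {
    "sessions_per_buyer": 5,
    "comments": 20,
    "tasks": 20,
    "contacts": 5,
    "meetings": 3,
}

def normalize(metric: str, raw_count: float) -> float:
    """Clamp a raw activity count to its documented cap before scoring."""
    return min(raw_count, CAPS[metric])

print(normalize("meetings", 7))   # 3
print(normalize("comments", 30))  # 20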


Special Rules and Bonuses

One-Time Bonuses

Certain actions give you a one-time score boost when they first occur:

Buyer engagement indicators:

  • First time buyer views Decision Site: +25 points (×1.5 = 37.5)
  • First time buyer adds contact: +25 points (×1.5 = 37.5)

Mutual plan initiation:

  • Starting a mutual plan: +20 points (10% of Collaboration)

Artifact upload:

  • First artifact upload: +20 points (20% of Organization)

These only count once - repeating the action doesn't give more bonus.

Meetings - Past vs Future

Special treatment for meetings:

  • Past meetings get full credit
  • Future meetings get 80% credit
  • System takes whichever is higher (past OR future)

Why? Meetings are less frequent than other activities. Softer time decay prevents over-penalizing normal meeting cadences.

Example:

text
Scenario 1: Had 3 meetings last month → score based on past
Scenario 2: Have 3 meetings scheduled next month → score based on future (×0.8)

Buyer vs Seller Activity

Where buyer weight (1.5x) applies:

  • Comments (Communication category)
  • First view/contact milestones (Engagement category)

Where it doesn't apply:

  • Meetings (everyone counts equally)
  • Milestones and tasks (everyone counts equally)

Score States

The overall score maps to states:

Score | State | Meaning
50-100 | ON_TRACK | Healthy engagement
25-49 | AT_RISK | Low activity, needs attention
5-24 | OFF_TRACK | Very low activity, likely stalled
0-4 | INACTIVE | Essentially no activity

These thresholds don't affect the calculation - they just provide labels for interpreting scores.
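
The mapping follows directly from the table above; only the function name below is illustrative.

python
def score_state(score: int) -> str:
    """Map an overall 0-100 score to its label, per the table above."""
    if score >= 50:
        return "ON_TRACK"
    if score >= 25:
        return "AT_RISK"
    if score >= 5:
        return "OFF_TRACK"
    return "INACTIVE"

print(score_state(61))  # ON_TRACK
print(score_state(12))  # OFF_TRACK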


When Scores Update

Timing:

  1. Your activity happens → recorded immediately in database
  2. Overnight (~1 AM) → system aggregates all activity
  3. Overnight (~2-3 AM) → Deal Pulse calculates scores
  4. Next morning → new score appears in UI

Important: Today's activity won't affect today's score. It appears in tomorrow's score.


What's NOT Considered

Deal Pulse doesn't see:

  • Email exchanges outside platform
  • Phone calls
  • Slack/Teams messages
  • In-person meetings (unless logged)
  • CRM activities
  • Message content or sentiment
  • Deal stage or value
  • Competitive situation

Why? System only tracks activity inside Decision Site to ensure measurable, consistent scoring.


Common Questions

"Why did my score drop even though I've been active?"

Time decay. Old activity is losing value faster than new activity is adding value.

Example:

  • Had 10 meetings 2 months ago (now worth 70% credit each)
  • Had 1 meeting last week (worth 100% credit)
  • Net effect: overall meeting score decreased

"I scheduled a meeting but score didn't change"

Scores update overnight. Today's actions appear in tomorrow's score.

"We had 5 meetings but only see score of 60 in Engagement"

Meetings are 70% of Engagement, not 100%. Plus time decay reduces older meetings' impact.

"Buyer is engaged via email but score is low"

External activity doesn't count. Get buyer to engage in platform (login, comment, complete tasks) for score to reflect their engagement.

"Score is 75 but deal feels cold"

Score measures engagement, not outcome. High score means healthy activity in platform. Doesn't predict whether deal will close - combine with CRM, pipeline value, and your judgment.



Algorithm Details

This page explains the behavioral details of the Deal Pulse scoring system - how it actually works in practice, what it prioritizes, and why it makes certain decisions.

Algorithm Identity

Name: vibe-clso4
Version: 1.1.0
Full identifier: vibe-clso4-v1.1.0

What this means:

  • The algorithm has a name and version for tracking
  • If the algorithm changes, the version number updates
  • Your historical scores show which version calculated them
  • Helps explain score changes over time (if algorithm updated)

Core Philosophy

What the Algorithm Values

1. Engagement over everything

The algorithm gives Engagement 50% of the overall weight because active deals have meetings and buyer participation. No amount of collaboration or communication can make up for a lack of actual meetings and buyer interaction.

2. Recent activity over old activity

The algorithm penalizes inactivity over time through time decay. A deal with lots of activity 2 months ago but nothing recent will have a lower score than a deal with moderate recent activity. This reflects reality: current engagement predicts current deal health.

3. Buyer activity over seller activity

The algorithm gives buyer actions 1.5x weight because buyer engagement is the real signal. Sellers can always be active; buyer activity shows genuine interest and commitment.

4. Multi-threading reduces risk

The algorithm rewards stakeholder diversity (Diversity category) because single-threaded deals are risky. Losing your one champion kills the deal. Multiple stakeholders across departments provides resilience.

5. Structure shows maturity

The algorithm rewards organization (tasks with assignees/dates, artifacts uploaded) because structured processes tend to close. Ad-hoc deals are less predictable than well-organized ones.

What the Algorithm Doesn't Care About

External factors:

  • Deal size or value
  • Company size or industry
  • CRM stage or forecast
  • Economic conditions
  • Competitive situation

Qualitative factors:

  • Message sentiment or tone
  • Relationship strength
  • Champion power or influence
  • Budget approval status

Content details:

  • What documents contain
  • What comments say
  • Meeting agendas or outcomes

Why? The algorithm focuses on measurable engagement behavior it can observe in the platform. It doesn't try to interpret or predict; it just measures activity patterns.


How Time Decay Works

The Decay Model

Time windows:

  • 7 days: Recent (100% credit)
  • 30 days: Medium (85% credit)
  • 90 days: Distant (70% credit)
  • 90+ days: Not counted (0% credit)

What this means in practice:

Day 1-7:

text
Activity gets full credit
Your score reflects this week's engagement

Day 8-30:

text
Activity starts losing value (85% credit)
Score gradually declines if no new activity

Day 31-90:

text
Activity worth even less (70% credit)
Old deals with no recent activity score low

Day 91+:

text
Activity falls out of calculation entirely
Ancient activity doesn't help your score

Why Meetings Decay Slower

Meetings get special treatment with softer time decay:

Regular decay:

  • Recent: 100%
  • Medium: 85%
  • Distant: 70%

Meeting decay:

  • Recent: 100%
  • Medium: 92.5%
  • Distant: 85%

Why? Meetings happen less frequently than platform activity. A meeting 45 days ago still shows engagement, while a comment 45 days ago is pretty stale.

Practical impact:

  • Monthly meeting cadence doesn't get over-penalized
  • Enterprise deals with longer cycles score fairly
  • Quarterly business reviews still count
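
Side by side, the two schedules look like this (credits are from this page; the dictionary names are illustrative):

python
REGULAR_DECAY = {"recent": 1.00, "medium": 0.85, "distant": 0.70}
MEETING_DECAY = {"recent": 1.00, "medium": 0.925, "distant": 0.85}

# A meeting 45 days ago (distant window) keeps 85% of its value,
# while a comment of the same age keeps only 70%.
print(MEETING_DECAY["distant"], REGULAR_DECAY["distant"])  # 0.85 0.7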

Time Decay Example

Scenario: You had 10 comments 60 days ago, nothing since.

Calculation:

text
60 days ago = in "distant" window (70% credit)
10 comments × 70% = 7 weighted comments

As each day passes:
- Day 61: still 70% = 7 weighted
- Day 70: still 70% = 7 weighted
- Day 90: still 70% = 7 weighted
- Day 91: 0% = 0 weighted (falls out of window)

Result: The weighted comment contribution drops to zero on day 91 unless new activity occurs.


How Buyer Activity Multiplier Works

Where It Applies

Buyer activity gets 1.5x weight in:

  1. Comments (Communication category)

    • One buyer comment is worth 1.5 seller comments
    • Encourages sellers to get buyers commenting
  2. First-time engagement (Engagement category)

    • First buyer view: 25 points × 1.5 = 37.5
    • First buyer contact add: 25 points × 1.5 = 37.5

Where It Doesn't Apply

No buyer multiplier for:

  • Meetings (everyone equal)
  • Milestones (everyone equal)
  • Action items (everyone equal)
  • Contact counts (raw numbers)

Why? These activities are collaborative by nature. Milestones should be a joint effort, not just buyer-driven.

Practical Impact

Example - Communication score:

Scenario 1: Seller-heavy

text
Seller: 20 comments × 1.0 = 20 points
Buyer: 5 comments × 1.5 = 7.5 points
Total: 27.5 points

Scenario 2: Balanced

text
Seller: 10 comments × 1.0 = 10 points
Buyer: 10 comments × 1.5 = 15 points
Total: 25 points (plus balance bonus)

Even with fewer total comments, balanced participation scores nearly as well thanks to the buyer multiplier, and the balance bonus narrows the gap further.


How Normalization Works

Why Normalization

Problem: Some deals might have extreme activity (50 meetings, 100 comments) that distorts scoring.

Solution: Cap activity counts at expected maximums to normalize scores.

Caps

text
Sessions per buyer: 5
Comments/tasks: 20
Contacts: 5
Unique entities (domains, departments): 5
Meetings: 3

What this means:

  • Having 3 meetings gets you full meeting credit
  • Having 10 meetings doesn't give you 3x the score
  • Prevents gaming with excessive activity

Example

High-activity deal:

text
Meetings: 8
Contacts: 12
Comments: 50

Normalized for scoring:

text
Meetings: capped at 3 (full credit)
Contacts: capped at 5 (full credit)
Comments: capped at 20 (full credit)

Result: Deal gets full credit across categories. Extra activity doesn't inflate score artificially.

Why this is good:

  • Prevents outliers from skewing what "normal" looks like
  • Quality over quantity
  • 3 meaningful meetings > 10 quick check-ins

How Balance Bonuses Work

Dialogue Balance (Communication)

Checks: Are both buyers and sellers commenting?

Calculation:

text
If only sellers comment: 0 bonus
If only buyers comment: 0 bonus
If both comment (roughly equal): 30 points maximum
If both comment (one heavier): partial bonus

Example:

text
Buyer score: 60
Seller score: 80

Balance = (60 / 80) × 30 = 22.5 points

Why? One-sided conversations (seller monologuing or buyer questioning with no response) aren't healthy dialogue.
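
A minimal sketch of the balance bonus. The 30-point maximum and the worked example are from this page; expressing the partial bonus as min/max × 30 is inferred from that example, not confirmed.

python
def dialogue_balance_bonus(buyer_score: float, seller_score: float) -> float:
    """Up to 30 points when both sides comment; zero when either side is silent."""
    if buyer_score == 0 or seller_score == 0:
        return 0.0
    return min(buyer_score, seller_score) / max(buyer_score, seller_score) * 30.0

print(dialogue_balance_bonus(60, 80))  # 22.5
print(dialogue_balance_bonus(0, 80))   # 0.0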

Task Completion Balance (Organization)

Checks: Are both buyers and sellers completing action items?

Calculation:

text
If only sellers complete: 0 bonus
If only buyers complete: 0 bonus
If both complete: 15 points

All or nothing: even one completion from each side earns the full bonus.

Why? A mutual plan only works if both sides execute. One-sided completion suggests a lack of true collaboration.
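
A minimal sketch of the all-or-nothing check (the 15-point value is from this page; the parameter names are illustrative):

python
def completion_balance_bonus(buyer_completions: int, seller_completions: int) -> float:
    """All-or-nothing 15-point bonus: both sides must have completed at least one task."""
    return 15.0 if buyer_completions > 0 and seller_completions > 0 else 0.0

print(completion_balance_bonus(1, 4))  # 15.0
print(completion_balance_bonus(0, 4))  # 0.0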


How One-Time Bonuses Work

Buyer Engagement Indicators

First buyer view:

  • Triggers: First time a buyer-role contact views Decision Site
  • Bonus: 25 points × 1.5 = 37.5 (5% of Engagement)
  • Happens: Only once, ever

First buyer contact add:

  • Triggers: First time a buyer-role contact adds another contact
  • Bonus: 25 points × 1.5 = 37.5 (5% of Engagement)
  • Happens: Only once, ever

Why these matter:

  • Shows buyer taking ownership
  • Buyer inviting others = expanding stakeholders
  • First actions are hardest to get

Mutual Plan Initiation

Trigger: Any mutual plan activity (milestone, action item, template)

Bonus: 20 points (10% of Collaboration)

Happens: Only once, ever

Why? Starting a mutual plan is a commitment signal. First plan activity deserves recognition.

Artifact Upload

Trigger: First time anyone uploads an artifact

Bonus: 20 points (20% of Organization)

Happens: Only once, ever

Why? Uploading content shows preparation and value delivery. First upload is the important one.

Important Notes

One-time means one-time:

  • Repeating the action doesn't give more bonus
  • Once triggered, stays in score calculation permanently
  • Can't "lose" these bonuses

Not subject to time decay:

  • Unlike activity counts, bonuses don't decay
  • First buyer view from 6 months ago still counts
  • Provides a score "floor" for deals that have accomplished these milestones
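
A minimal sketch of the one-time bookkeeping. The point values and the 1.5x multiplier are from this page; the flag-based model is illustrative. Because a flag never resets, the bonus neither repeats nor decays.

python
BUYER_MULTIPLIER = 1.5

def buyer_milestone_bonus(flags: dict) -> float:
    """One-time Engagement bonuses; flags never reset, so the bonus never decays."""
    bonus = 0.0
    if flags.get("first_buyer_view"):
        bonus += 25 * BUYER_MULTIPLIER        # 37.5
    if flags.get("first_buyer_contact_add"):
        bonus += 25 * BUYER_MULTIPLIER        # 37.5
    return bonus

print(buyer_milestone_bonus({"first_buyer_view": True}))                                   # 37.5
print(buyer_milestone_bonus({"first_buyer_view": True, "first_buyer_contact_add": True}))  # 75.0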

How Meeting Scoring Works

Past vs Future Meetings

The algorithm can use EITHER:

  • Past meetings (meetings that occurred)
  • Future meetings (meetings scheduled)

Whichever is higher drives the score.

Past meetings:

text
Full credit
Shows real engagement happened

Future meetings:

text
80% credit (discounted)
Shows forward momentum but not yet executed

Why This Design

Scenario 1: Active deal with regular past meetings

text
Had 3 meetings last 60 days → score based on past
System uses past meeting score

Scenario 2: Deal just starting, no past meetings yet

text
Have 3 meetings scheduled next 30 days → score based on future (×0.8)
System uses future meeting score (discounted)
System doesn't penalize for being new

Scenario 3: Paused deal coming back

text
Had 2 meetings 45 days ago (decaying)
Have 3 meetings scheduled next 2 weeks
System uses future (×0.8) because it's higher
Score reflects upcoming momentum

Surprising Behavior

Your score can INCREASE from scheduling meetings even if no meetings have occurred yet.

Example:

text
Current: 2 past meetings (60 days ago, heavily decayed)
Action: Schedule 3 future meetings (next 14 days)
Result: Score increases because future (×0.8) > decayed past
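
A minimal sketch of the either-or rule. The 0.8 discount and the "higher side wins" choice are from this page; the input scores are illustrative.

python
def meeting_score(past_score: float, future_score: float) -> float:
    """Past meetings at full credit, future meetings at 80%; the higher side drives the score."""
    return max(past_score, 0.8 * future_score)

# The example above: heavily decayed past meetings vs freshly scheduled future meetings.
print(meeting_score(past_score=20, future_score=80))  # 64.0 (driven by the future side)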

Algorithm Updates and Versioning

Current Version: 1.1.0

Changes from 1.0.0:

  • Refined time decay for meetings (softer decay)
  • Added meeting past/future either/or logic

Why version numbers matter:

  • Historical scores show which version calculated them
  • Score changes might be due to algorithm updates, not your activity
  • Helps troubleshoot unexpected changes

If Algorithm Updates

What happens:

text
Old scores keep their version: vibe-clso4-v1.0.0
New scores use new version: vibe-clso4-v1.1.0
Historical scores don't change
Future scores use new logic

Your scores won't be recalculated retroactively - only new daily scores use the updated algorithm.


Score Calculation Flow

Every night:

Step 1: Aggregate data (1 AM)

text
dbt runs queries on all activity
Counts meetings, comments, milestones, etc.
Separates by time window (7d, 30d, 90d)
Separates by role (buyer, seller)

Step 2: Calculate scores (2-3 AM)

text
Load aggregated data
Calculate each category (0-100)
Apply weights and sum
Round to integer
Determine state (ON_TRACK, AT_RISK, etc.)

Step 3: Store results

text
Save score, category breakdown, metadata
Mark as latest for this Decision Site
Previous scores kept for history

Step 4: Display (next morning)

text
New score appears in UI
Old score archived
Trend graph updates

What Happens When...

No Activity for 30 Days

Days 1-30:

text
Existing activity decays from 100% → 85% → 70%
Score gradually drops

Day 31:

text
Most recent activity now 30+ days old
Enters "distant" window (70% credit)
Score drops noticeably

Day 91:

text
Activity falls out of 90-day window entirely
Score drops to near-zero (only one-time bonuses remain)

Deal Room Just Created

Day 1:

text
No activity yet
Score: 0-4 (INACTIVE)

After first meeting scheduled:

text
Future meeting counts (×0.8)
Score: 20-40 (depending on other activity)

After mutual plan started:

text
Mutual plan bonus (+20)
Milestone activity starts counting
Score: 30-50 range

Buyer Goes Dark

Week 1 - no buyer activity:

text
Recent buyer sessions drop to 0
Buyer comments stop accruing
Score drops 5-10 points (depending on previous buyer activity)

Week 2-4 - continued silence:

text
Previous buyer activity decays (100% → 85%)
Time since last activity grows
Score drops 10-20 more points

Month 2+:

text
Buyer activity fully decayed (70% then 0%)
Communication and Engagement categories very low
Score in AT_RISK or OFF_TRACK range

You Complete 10 Milestones in One Day

Today:

text
Activity logged
Appears in database

Tomorrow:

text
10 completions in "7-day" window
Count normalized (capped if needed)
Collaboration category score increases
Overall score increases (Collaboration is 25% of total)

Next month:

text
10 completions now in "30-day" window (85% credit)
Score drops slightly due to time decay

3 months later:

text
10 completions now in "90-day" window (70% credit)
Score drops more

4 months later:

text
10 completions fall out of window
Score drops significantly unless new milestone activity occurs

Lesson: Sustained activity beats one-time bursts.

