Lead Scoring That Actually Makes Sense
Feb 18, 2026
Mahdin M Zahere
The middle school version: Imagine you're a teacher grading homework, but instead of papers, you're grading how interested someone is in your product. Did they visit your pricing page? That's an A+. Did they only look at the blog once? That's a C. Did they come from a big company that matches your dream customer? Bonus points! Surface adds up all these "grades" automatically so our sales team knows exactly who to call first.
Why traditional lead scoring is broken
We used HubSpot's lead scoring for over a year. We built a model — points for page visits, points for email opens, points for content downloads, minus points for inactivity. We set an MQL threshold at 50 points. Leads above 50 got passed to sales.
After 6 months, we ran the analysis every team should run but rarely does: do our MQLs actually convert at a higher rate than non-MQLs?
The answer: barely. MQLs converted to meetings at 16%. Non-MQLs that reps cherry-picked from the CRM converted at 13%. Our scoring model, the product of months of configuration and tuning, was generating a 3-percentage-point lift that reps could nearly match on gut instinct.
The problem wasn't HubSpot. The problem was the inputs. Behavioral scoring — email opens, page visits, content downloads — measures attention, not intent. A marketing intern who reads every blog post scores higher than a VP who visits the pricing page once. The model rewards engagement, not buying behavior.
The three signal layers that actually work
We rebuilt our lead scoring in Surface around three layers that predict conversion — not just engagement.
Layer 1: Page activity — but only the pages that matter.
Not all page views are equal. We tracked which pages correlate with actual conversion and built scoring around those — not around total activity.
| Page | Correlation with conversion | Score weight |
|---|---|---|
| Pricing page | Highest. 4x more likely to book a meeting if they visited pricing. | Heavy |
| Case studies | High. Indicates they're building a business case. | Medium-heavy |
| Product/features pages (3+ min time on page) | High. Deep engagement, not scanning. | Medium-heavy |
| Comparison pages | High. Actively evaluating alternatives. | Medium |
| Blog posts | Low. Informational intent, not buying intent. | Minimal |
| Careers page | Negative signal. They might be job hunting, not buying. | Slight negative |
This was counterintuitive. Our old model gave points for every page view. The new model essentially ignores blog visits and heavily weights pricing + case studies. The data supports it — those two pages are the strongest predictors of a lead becoming a meeting.
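To make the weighting concrete, here's a minimal sketch of what a page-activity layer like this could look like in code. The paths, point values, and dwell-time threshold are illustrative assumptions, not Surface's actual configuration.

```typescript
// Hypothetical page-activity layer: weights by page type, plus a
// dwell-time rule for product/feature pages. All values are illustrative.
type PageVisit = { path: string; secondsOnPage: number };

const PAGE_RULES: Array<{ match: RegExp; weight: number }> = [
  { match: /^\/careers/, weight: -5 },      // likely job hunting, slight negative
  { match: /^\/pricing/, weight: 30 },      // strongest single predictor
  { match: /^\/case-studies/, weight: 20 }, // building a business case
  { match: /^\/compare/, weight: 15 },      // actively evaluating alternatives
  { match: /^\/blog/, weight: 2 },          // attention, not intent
];

function pageActivityScore(visits: PageVisit[]): number {
  return visits.reduce((total, visit) => {
    // Product/feature pages only score when dwell time suggests real reading.
    if (/^\/(product|features)/.test(visit.path)) {
      return total + (visit.secondsOnPage >= 180 ? 20 : 0);
    }
    const rule = PAGE_RULES.find((r) => r.match.test(visit.path));
    return total + (rule ? rule.weight : 0);
  }, 0);
}
```

Under this sketch, ten blog visits are worth less than a single pricing-page view, which is the whole point of the reweighting.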
Layer 2: Enriched company data.
Does this lead's company match our ICP? This is the "bonus points" layer — and it's often more predictive than any behavioral signal.
We score on: company size (sweet spot: 50–500 employees), industry (B2B SaaS, tech, professional services), funding stage (Series A–C), tech stack (HubSpot or Salesforce users), and geography (US, UK, Canada, Australia).
A lead from a 200-person B2B SaaS company that uses HubSpot starts with a high base score even before they visit a single page. That company fit signal is more predictive than 10 page visits.
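Here's a minimal sketch of that company-fit layer, assuming the lead has already been enriched with firmographic data. The field names, thresholds, and point values are hypothetical.

```typescript
// Hypothetical company-fit layer scored from enriched firmographic data.
// Points and thresholds are illustrative, not Surface's real weights.
type Company = {
  employees: number;
  industry: string;
  fundingStage?: string;
  techStack: string[];
  country: string;
};

function companyFitScore(c: Company): number {
  let score = 0;
  if (c.employees >= 50 && c.employees <= 500) score += 25; // size sweet spot
  if (["B2B SaaS", "Tech", "Professional services"].includes(c.industry)) score += 15;
  if (["Series A", "Series B", "Series C"].includes(c.fundingStage ?? "")) score += 10;
  if (c.techStack.includes("HubSpot") || c.techStack.includes("Salesforce")) score += 10;
  if (["US", "UK", "Canada", "Australia"].includes(c.country)) score += 5;
  return score;
}

// In this sketch, a 200-person US B2B SaaS company on HubSpot starts at
// 55 points before it has visited a single page.
```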
Layer 3: First-party stated signals.
What the lead tells us directly — on the form. Timeline, budget, use case, and company size captured at the moment of form submission. These are the most reliable signals because they come straight from the lead's mouth, not from inferred behavior.
A lead who selects "evaluating this quarter" and "budget approved" on the form gets a higher score than someone who's visited the pricing page 5 times but hasn't told us anything about their readiness.
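And a sketch of the stated-signal layer, plus how the three layers could be summed into a single score. The form options and weights are assumptions; they stand in for whatever qualifying fields your forms actually capture.

```typescript
// Hypothetical stated-signal layer scored from the form submission itself.
// Field values and weights are illustrative.
type FormSubmission = {
  timeline?: "evaluating this quarter" | "this year" | "just researching";
  budget?: "approved" | "in planning" | "none";
};

function statedSignalScore(form: FormSubmission | null): number {
  if (!form) return 0; // no form yet: page activity and company fit carry the score
  let score = 0;
  if (form.timeline === "evaluating this quarter") score += 25;
  else if (form.timeline === "this year") score += 10;
  if (form.budget === "approved") score += 20;
  else if (form.budget === "in planning") score += 10;
  return score;
}

// The overall lead score is simply the sum of the three layers.
function leadScore(pageActivity: number, companyFit: number, statedSignals: number): number {
  return pageActivity + companyFit + statedSignals;
}
```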
How our SDR team uses lead scores
Every morning, our SDRs open their queue sorted by lead score. The top leads — high company fit, high-intent page activity, and qualifying form data — get immediate outreach. The middle leads get same-day follow-up. The low leads go to automated nurture.
This replaced the old approach of "work the list top to bottom in order of arrival." Now reps spend their first hour on the leads most likely to convert, not the leads that happened to submit first.
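As a sketch of what that queue logic might look like (the 70/40 thresholds are made up, not our actual cutoffs):

```typescript
// Hypothetical routing by lead score; thresholds are illustrative.
type ScoredLead = { email: string; score: number };
type Route = "immediate outreach" | "same-day follow-up" | "automated nurture";

function routeLead(lead: ScoredLead): Route {
  if (lead.score >= 70) return "immediate outreach";
  if (lead.score >= 40) return "same-day follow-up";
  return "automated nurture";
}

// Morning queue: highest score first, instead of first-come, first-served.
function morningQueue(leads: ScoredLead[]): ScoredLead[] {
  return [...leads].sort((a, b) => b.score - a.score);
}
```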
The impact: our SDRs book 40% more meetings per rep per month — not because they're working harder, but because they're working on the right leads first.
Real examples
Lead that scored high (and converted): VP of Marketing at a 300-person SaaS company. Visited pricing page twice. Read two case studies in one session. Submitted a demo request form with "evaluating this quarter, budget $25K–$50K." Lead score: 92. Booked a meeting the same day. Closed in 3 weeks.
Lead that looked good but wasn't: Marketing coordinator at a 5,000-person enterprise. Downloaded 4 whitepapers, attended a webinar, and opened every email for 6 weeks. Old model: MQL at 85 points. New model: score of 34. Why? No pricing page visit, no form submission, a researcher role rather than a buyer, and a company well outside our sweet spot. An SDR called and confirmed they were doing research for a report, with no buying intent.
The old model would have prioritized the second lead over the first. The new model gets it right.
Let your sales team focus on leads that matter. Try Surface's lead scoring.


