Every AI hiring tool stops at the offer letter. Nodes starts there.
One calibrated model runs every stage — sourcing, screening, interviewing, ramping, retaining, promoting. Deployed inside your VPC. Scored against four years of your own production data, not a vendor benchmark.
Three surfaces. One calibrated model underneath.
Fit Scoring writes back to your ATS. AI Interviews run mid-funnel. Persona Sourcing finds passive candidates against the same pattern. Every score, transcript, and match shares weights — and compounds with every cohort.
No separate models per product. No re-training per module. The scoring substrate is the product.
A 0–100 score, native in your ATS.
Calibrated per role and per location against your validated top-performer pattern. Twenty-eight behavioral, skill, and cultural dimensions. Every score ships with a plain-English rationale and a full decision trace.
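As a concrete illustration, the write-back described above might look like the following sketch. This is a hypothetical record shape, not the actual Nodes schema; the field names, the trace URL, and the validation rule are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Fit Score write-back record, assuming a JSON-style
# payload pushed to the ATS as a native field. Field names are illustrative.
@dataclass
class FitScore:
    candidate_id: str
    role: str
    location: str
    score: int                    # 0-100, calibrated per role and location
    rationale: str                # plain-English rationale
    dimensions: dict[str, float]  # subset of the 28 scored dimensions
    decision_trace_url: str       # link to the full decision trace

    def __post_init__(self):
        if not 0 <= self.score <= 100:
            raise ValueError("score must be in 0-100")

example = FitScore(
    candidate_id="cand-1042",
    role="Carrier Sales Rep",
    location="Chicago",
    score=81,
    rationale="Strong prospecting-drive and resilience signals.",
    dimensions={"resilience": 0.86, "prospecting_drive": 0.91},
    decision_trace_url="https://nodes.example/trace/cand-1042",
)
```

Because the score lands as a structured field rather than free text, it can drive ATS-native filters and reports like any other column.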
Structured interviews, scored against production.
Conducted mid-funnel, async or live. Scores update against real 18-month outcomes, not interviewer gut feel. Hiring managers receive the transcript, the timestamped signal evidence, and a signed audit trail.
Find passive candidates that match what actually predicts production.
The sourcing engine queries against the patterns the model learned from your internal wins — not inferred keyword boilerplate. LinkedIn, GitHub, industry networks, public portfolios. Results ship with a provisional Fit Score.
where fit_score > 72 and signal("resilience", "prospecting_drive")
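A query like the one above could evaluate roughly as in this sketch. The candidate record shape and the `signal(...)` semantics (each named dimension clearing a threshold) are assumptions for illustration, not the actual Nodes DSL.

```python
# Minimal sketch of how the sourcing filter might evaluate against candidate
# records. SIGNAL_THRESHOLD and the record layout are assumed, not documented.
SIGNAL_THRESHOLD = 0.7

def has_signal(candidate: dict, *names: str) -> bool:
    """True if every named dimension clears the threshold."""
    return all(candidate["signals"].get(n, 0.0) >= SIGNAL_THRESHOLD for n in names)

def matches(candidate: dict) -> bool:
    # where fit_score > 72 and signal("resilience", "prospecting_drive")
    return candidate["fit_score"] > 72 and has_signal(
        candidate, "resilience", "prospecting_drive"
    )

candidates = [
    {"id": "a", "fit_score": 81, "signals": {"resilience": 0.86, "prospecting_drive": 0.91}},
    {"id": "b", "fit_score": 74, "signals": {"resilience": 0.55, "prospecting_drive": 0.92}},
    {"id": "c", "fit_score": 69, "signals": {"resilience": 0.88, "prospecting_drive": 0.90}},
]

shortlist = [c["id"] for c in candidates if matches(c)]
# shortlist -> ["a"]
```

Only candidate "a" clears both the score floor and both signal thresholds; "b" fails on resilience and "c" fails on fit score.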
Eleven agents. One model. A compounding scoring substrate.
The same model that sources, screens, and interviews your pipeline also runs ramp, retention, attrition, internal mobility, succession, manager intelligence, career pathing, and the AI mentor co-pilot. Weights live in your VPC. Every agent reads the same decision trace.
carrier-hire-2025.11 — the one model under everything.
A fine-tuned open-source foundation model, calibrated inside the customer boundary. Weights owned by the customer. No third-party AI in the chain. This is the scoring substrate every product reads from.
What the model did after it shipped.
- 5.37× RPA rate gap between sales-keyword + high-scored hires and no-sales + low-scored hires (n=507 Jan–Mar 2025 cohort · p=0.0007)
- 0 keywords that predict production after Bonferroni correction, across 8,181 tested (30 anti-predictive · median OR 0.749)
- $54 production lift per day of ramp acceleration, every day faster to first milestone ($5.11M/yr at 2,000-hire volume · 47-day median acceleration)
- 98% of RPA-winning hires would have been eliminated by applying all 6 standard ATS keyword filters (80% eliminated by the insurance-experience filter alone)
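The ramp-acceleration figures above are mutually consistent to rounding, which a quick back-of-envelope check shows. The assumption here is only that the $54/day value is a rounded form of the per-day lift implied by the other two numbers.

```python
# Back-of-envelope check of the ramp-acceleration stat. $5.11M/yr at
# 2,000 hires and 47 days of acceleration implies roughly $54.4/day,
# which rounds to the stated $54.
lift_per_day = 5_110_000 / (2_000 * 47)  # implied per-day lift per hire
annual = 54 * 47 * 2_000                 # annual total using the rounded $54

print(round(lift_per_day, 2))  # 54.36
print(annual)                  # 5076000, i.e. ~$5.1M, consistent to rounding
```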
SSO · SCIM · audit stream · Terraform
Why legacy hiring tools miss what Nodes catches.
Keyword ATSes score resumes against strings. Gen-AI chatbots score candidates against a generic corpus. Nodes scores candidates against your own production data — and writes that score back to your ATS as a native field.
See the platform with your data.
We'll backtest Nodes against a sample of your production history in your environment. You'll see — numerically — which of your filters worked, which didn't, and what a calibrated Fit Score would have scored differently.