Product family · release 2026.4

Every AI hiring tool stops at the offer letter. Nodes starts there.

One calibrated model runs every stage — sourcing, screening, interviewing, ramping, retaining, promoting. Deployed inside your VPC. Scored against four years of your own production data, not a vendor benchmark.

Products
11 agents · one model
Deployments
VPC · single-tenant
Model
carrier-hire-2025.11
Core products

Three surfaces. One calibrated model underneath.

Fit Scoring writes back to your ATS. AI Interviews run mid-funnel. Persona Sourcing finds passive candidates against the same pattern. Every score, transcript, and match shares weights — and compounds with every cohort.

No separate models per product. No re-training per module. The scoring substrate is the product.

01 · Fit Scoring

A 0–100 score, native in your ATS.

Calibrated per role and per location against your validated top-performer pattern. Twenty-eight behavioral, skill, and cultural dimensions. Every score ships with a plain-English rationale and a full decision trace.

ATS-native field · Per-role calibration · 28 dimensions · Rationale included
fit_score · score_card v0472 · carrier-hire-2025.11
87
M. Rojas — Producer, Commercial Lines
req · austin-tx · screened 2026-04-14
p-74 vs. cohort
conscientiousness · 82
coachability · 79
resilience · 74
prospecting drive · 88
goal orientation · 85
+ 23 dims · avg 76
trace_id · dc_8871_2026-04-14 signed · reproducible
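The signed, reproducible trace above can be checked mechanically: re-serialize the decision payload, recompute the signature, compare. A minimal sketch of that idea, assuming a hypothetical HMAC-over-canonical-JSON scheme (the actual Nodes signature format and key handling are not specified here):

```python
import hashlib
import hmac
import json

def sign_trace(trace: dict, key: bytes) -> str:
    """Sign a canonical JSON serialization of a decision trace (hypothetical scheme)."""
    payload = json.dumps(trace, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_trace(trace: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_trace(trace, key), signature)

# Illustrative trace shaped like the score card above
trace = {
    "trace_id": "dc_8871_2026-04-14",
    "model": "carrier-hire-2025.11",
    "fit_score": 87,
    "dimensions": {"conscientiousness": 82, "coachability": 79},
}
key = b"customer-held-signing-key"
sig = sign_trace(trace, key)

assert verify_trace(trace, sig, key)                           # untampered trace verifies
assert not verify_trace({**trace, "fit_score": 99}, sig, key)  # any edit breaks the signature
```

Canonical serialization (sorted keys, fixed separators) is what makes the signature reproducible: the same trace always yields the same bytes, so any party holding the key can re-derive and confirm it.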
02 · AI Interviews

Structured interviews, scored against production.

Conducted mid-funnel, async or live. Scores update against real 18-month outcomes, not interviewer gut feel. Hiring managers receive the transcript, the timestamped signal evidence, and a signed audit trail.

Async or live · 12-min median · Signed transcript · 6 canonical signals
interview · structured · req rv-2412-08831 rec · 12:04
Nodes IV
Walk me through a week you had where your pipeline fell short of target. What did you change?
M. Rojas
The previous quarter I lost two referral accounts back-to-back. I rebuilt the outbound list from our CRM — split by renewal date — and doubled the follow-up cadence on anything past 45 days…
Nodes IV
What did you learn about your own assumptions from that cycle?
signal 01 · resilience · present
signal 02 · process reset · present
signal 03 · self-audit · partial
4 of 6 signals · score 87 · advance · sig hash 0a91f…d72e
03 · Persona Sourcing

Find passive candidates that match what actually predicts production.

The sourcing engine queries against the patterns the model learned from your internal wins — not inferred keyword boilerplate. LinkedIn, GitHub, industry networks, public portfolios. Results ship with a provisional Fit Score.

LinkedIn · GitHub · portfolios · Learned patterns · No boilerplate · Provisional score
sourcing · persona_query_0334 n=4 of 218
match persona("top_producer_commercial_lines_austin")
where fit_score > 72 and signal("resilience", "prospecting_drive")
JL · J. Liang · regional bank · austin · 4 yr tenure · linkedin · 81
PK · P. Kwan · fintech · san antonio · adjacent portfolio · 78
RN · R. Ndidi · retail advisory · dallas · 3 yr · linkedin · 76
SA · S. Amin · sme lending · austin · 2 yr · industry net · 74
learned from 747 jan–mar 2025 scored cohort · p=0.006 · export → sequence
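The query above reads as a filter over provisionally scored profiles: a score floor plus a set of required signals. A minimal sketch of those semantics in plain Python, with hypothetical candidate records (not the actual sourcing engine or its API):

```python
# Hypothetical provisionally scored profiles, shaped like the results above
candidates = [
    {"name": "J. Liang", "fit_score": 81, "signals": {"resilience", "prospecting_drive"}},
    {"name": "P. Kwan",  "fit_score": 78, "signals": {"resilience", "prospecting_drive"}},
    {"name": "T. Ortiz", "fit_score": 69, "signals": {"resilience"}},  # fails the score cut
]

def persona_match(pool, min_score, required_signals):
    """fit_score > min_score AND every required signal present."""
    return [c for c in pool
            if c["fit_score"] > min_score
            and required_signals <= c["signals"]]

matches = persona_match(candidates, 72, {"resilience", "prospecting_drive"})
# Only the first two candidates clear both conditions
assert [c["name"] for c in matches] == ["J. Liang", "P. Kwan"]
```

The set-subset check (`required_signals <= c["signals"]`) mirrors the query's `signal(…)` clause: all named signals must be present, not just any one.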
The agent family

Eleven agents. One model. A compounding scoring substrate.

The same model that sources, screens, and interviews your pipeline also runs ramp, retention, attrition, internal mobility, succession, manager intelligence, career pathing, and the AI mentor co-pilot. Weights live in your VPC. Every agent reads the same decision trace.

Agent · 01 · pre-hire
Persona-based Sourcing
Finds passive candidates matching the patterns your production data validates — not keyword boilerplate.
status · live
Agent · 02 · pre-hire
Fit Scoring / Screening
0–100 ATS-native score across 28+ dimensions. Top-scored hires reach sustained production at 2.47× the rate.
lift · 2.47× · p=0.006
Agent · 03 · pre-hire
AI Interviews
Structured mid-funnel interviews scored against real 18-month production outcomes. Signed transcript.
time-to-hire · 127d → 38d
Agent · 04 · post-hire
Ramp Acceleration
Role-calibrated onboarding signal. Scored hires ramp 47 days faster to first milestone.
median days to SNA · 62 vs 109
Agent · 05 · post-hire
Retention Risk
Early-warning signal stream over behavioral and production trace data. Weekly refresh.
status · live
Agent · 06 · post-hire
Attrition Modeling
Cohort-level forecasting calibrated monthly against outcomes in your HRIS.
status · live
Agent · 07 · post-hire
Internal Mobility
Cross-role pattern matching on the same dimensional fingerprint the model uses at hire.
status · live
Agent · 08 · post-hire
Succession Planning
Key-role readiness scoring, persona-calibrated against validated top-performer traces.
status · live
Agent · 09 · post-hire
Manager Intelligence
Team-level production trace — surfaces what actually correlates with manager-driven lift.
status · live
Agent · 10 · post-hire
Career Pathing
Next-role fit and readiness mapped to your internal posting taxonomy.
status · live
Agent · 11 · post-hire
AI Mentor Co-pilot
Personalized coaching for managers and agents, grounded in the same decision trace.
status · beta
Model spec

carrier-hire-2025.11 — the one model under everything.

A fine-tuned open-source foundation model, calibrated inside the customer boundary. Weights owned by the customer. No third-party AI in the chain. This is the scoring substrate every product reads from.

model.id · carrier-hire-2025.11
carrier-hire-2025.11
Fine-tuned on 10,765 hires across 215+ locations, validated on 747 real-time-scored agents with 9+ months tenure. Retrained quarterly against post-hire production.
Calibration corpus
10,765 agents
4-yr retrospective · p75+ sustained cohort
Dimensions
28
behavioral · skill · cultural
AUC vs. baseline
0.618
RPA · vs. 0.548 keyword ATS · p=0.006
Inference p50
74ms
in-cluster · customer-vpc · gpu
Fine-tune cadence
Quarterly
Calibrated on rolling 18-mo production
Top-perf lift
2.47×
p=0.006 · Bonferroni-corrected
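The 2.47× lift is a rate ratio: the sustained-production rate of top-scored hires divided by the rate of the comparison group. A worked sketch of the arithmetic with hypothetical cohort counts (the underlying cohort rates are not published here):

```python
def lift(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Rate ratio between two cohorts: (a-rate) / (b-rate)."""
    return (success_a / n_a) / (success_b / n_b)

# Hypothetical counts chosen only to illustrate the arithmetic
top_scored = (89, 200)   # 44.5% of top-scored hires reach sustained production
comparison = (36, 200)   # 18.0% of the comparison group do

assert round(lift(*top_scored, *comparison), 2) == 2.47
```

Note a rate ratio is scale-free: doubling both cohorts' sizes and successes leaves the lift unchanged, which is what makes it comparable across hiring volumes.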
Evidence · post-deployment

What the model did after it shipped.

  • 5.37×
    RPA rate gap between sales-keyword + high-scored hires and no-sales + low-scored hires. n=507 Jan–Mar 2025 cohort · p=0.0007
  • 0
    Keywords that predict production after Bonferroni correction — across 8,181 tested. 30 anti-predictive · median OR 0.749
  • $54
    Production lift per day of ramp acceleration to first milestone. $5.11M/yr at 2,000-hire volume · 47-day median acceleration
  • 98%
    Of RPA-winning hires who would have been eliminated by applying all 6 standard ATS keyword filters. 80% eliminated by insurance-experience filter alone
Reads from the systems you already run.
ATS · HRIS · CRM · assessment · identity
SSO · SCIM · audit stream · terraform
ATS: Workday · Greenhouse · Lever · iCIMS · SmartRecruiters
HRIS: UKG · ADP · Workday HCM · Rippling
CRM: Salesforce · Microsoft Dynamics · HubSpot
Assessments: Assessio · Criteria · SHL · Pymetrics
Identity: Okta · Azure AD · Ping · Google Workspace
Data: Snowflake · Databricks · BigQuery · Redshift
Telemetry: Datadog · Splunk · audit stream · SIEM export
Deploy: Terraform · Helm · AWS · Azure · GCP
Comparisons

Why legacy hiring tools miss what Nodes catches.

Keyword ATSes score resumes against strings. Gen-AI chatbots score candidates against a generic corpus. Nodes scores candidates against your own production data — and writes that score back to your ATS as a native field.

Dimension
Keyword ATS
Gen-AI chatbot
Nodes
Calibrated against
Recruiter-authored filters
Public internet corpus
Your 4-yr production data · 10,765 agents · p75+ cohort
Predictive lift (AUC · RPA)
0.548
unreported
0.618 · p=0.006 · n=747 scored cohort
Decision evidence
String-match log
LLM rationale, non-auditable
Signed decision trace · reproducible · customer-owned
Data residency
Vendor cloud
Third-party AI in the chain
Customer VPC · zero egress · single-tenant · SOC 2 Type II
Post-hire coverage
None
None
10 agents · same model · ramp → succession · compounding
Legal review median
Varies
6+ months at regulated enterprises
17 days · at the reference customer · 6 prior rejections
30 minutes · Your data · No pitch deck

See the platform with your data.

We'll backtest Nodes against a sample of your production history, in your environment. You'll see — numerically — which of your filters worked, which didn't, and which candidates a calibrated Fit Score would have scored differently.

01
Scope a de-identified sample of your production data
02
We backtest against it — in your environment, not ours
03
You see which filters worked, which didn't, and the counterfactual
04
If the math works, we scope a VPC pilot (median: 34 days to prod)
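The backtest in steps 01–03 reduces to a counterfactual question per filter: of the hires who went on to win on production, what share would this screen have rejected? A minimal sketch over a hypothetical de-identified sample (illustrative records, not the actual backtest harness):

```python
def filter_cost(hires, passes_filter, is_winner) -> float:
    """Share of eventual winners a screening filter would have rejected."""
    winners = [h for h in hires if is_winner(h)]
    rejected = [h for h in winners if not passes_filter(h)]
    return len(rejected) / len(winners)

# Hypothetical sample: (years of insurance experience, won on production?)
sample = [(0, True), (0, True), (3, True), (0, False), (5, False), (1, False)]
hires = [{"ins_yrs": yrs, "winner": won} for yrs, won in sample]

cost = filter_cost(
    hires,
    passes_filter=lambda h: h["ins_yrs"] >= 2,   # an "insurance experience" screen
    is_winner=lambda h: h["winner"],
)
# 2 of the 3 eventual winners had under 2 yrs and would have been screened out
assert round(cost, 2) == 0.67
```

Running this per filter over the historical sample is what surfaces which screens quietly discarded winners, and retention of winners under each filter is the counterfactual in step 03.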