Your VPC. Your weights. Zero egress.
Nodes deploys single-tenant inside your cloud account. Model training, inference, scoring, and every decision trace run on infrastructure you own. No external API calls. No OpenAI or Anthropic in your supply chain. No data leaves the boundary — because there's nowhere for it to go.
One Fortune 500 carrier rejected six AI vendors in eighteen months. They approved Nodes in seventeen days.
Most AI vendor reviews collapse at data residency, model supply chain, or egress controls. Nodes answers those three objections structurally — there is nothing to approve because there is nothing that leaves your cloud account.
The side-by-side below is a redacted composite of one actual deployment timeline versus the prior vendor stack that procurement, security, and legal had already rejected.
Everything runs inside your VPC. Nothing crosses the line.
Four lanes, one account, one tenant: control plane, model plane, customer data, audit. No shared infrastructure with another customer. No Nodes-operated inference. No path to the public internet from anything that touches your data.
No third-party AI in the chain.
Nodes ships a fine-tuned open-source foundation model and calibrates it inside your boundary on your four-year production trace. There is no call out to a hosted LLM provider at training time, at inference time, or in the audit stream.
Customer-calibrated.
Customer-owned.
Every weight under every inference is a weight the customer owns. Retrains happen in-cluster, on a rolling 18-month window of production data. Weight artifacts never leave the VPC.
- ✕ OpenAI, Anthropic, Google, or any hosted LLM provider · not a subprocessor · not called at train or inference time
- ✕ Third-party embedding APIs · embeddings produced in-cluster · no round-trip
- ✕ Nodes-operated inference endpoints · Nodes ships the binary · the customer runs it
- ✕ Training data shipped back to Nodes · no telemetry with PII · aggregate ops only · opt-in
- ✕ Closed-weight foundation models · we do not pull weights you cannot inspect
Every asset. Where it lives. Who can read it.
The ledger your counsel will screenshot. Each row is enforced by network policy, IAM, or KMS — not just documented. Retention is customer-configurable; defaults shown.
The attestations your reviewer already has a checklist for.
Controls are implemented in the deployment, not bolted on in policy. Every certification below is backed by the same single-tenant-VPC architecture — the architecture is the control.
terraform module · helm chart
BYOK · mTLS · SSO / SCIM
Air-gapped variant on request
```hcl
# One module. Your account. Your KMS. Your subnets.
module "nodes" {
  source  = "nodes-inc/nodes/aws"
  version = "2026.4"

  # deployment boundary
  vpc_id          = var.vpc_id
  private_subnets = var.private_subnets
  allow_egress    = false # enforced · default-deny

  # customer-held keys
  kms_key_arn    = aws_kms_key.nodes.arn
  weights_bucket = aws_s3_bucket.weights.id
  object_lock    = "compliance"

  # identity + audit
  sso_provider = "okta"
  siem_sink    = var.splunk_hec_endpoint
  audit_stream = true

  # model
  model_artifact  = "carrier-hire-2025.11"
  retrain_cadence = "quarterly"
  inference_gpu   = "g5.2xlarge"
}
```
The seven questions every security reviewer has asked.
Answered once, here, with the spec IDs your counsel will want to cite. For anything not covered, the architecture review (Day 01) is a working session with our security engineer — not a sales call.
01 Does any candidate or employee data ever leave our VPC? +
No. The deployment enforces a default-deny egress policy at the subnet level. Candidate and employee PII is read through from your ATS and HRIS and never copied out of your cloud account. Inference request bodies, scored responses, and decision traces all stay in-cluster. The only opt-in traffic that ever leaves is aggregate operational telemetry with every PII field scrubbed server-side before it's emitted — and that opt-in can be turned off in Terraform.
ref · terraform allow_egress=false
ref · spec · net-policy-01
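As a rough sketch of what a subnet-level default-deny posture can look like in plain AWS terms (this is illustrative Terraform, not the module's internals; the resource and variable names are hypothetical):

```hcl
# Hypothetical sketch of a default-deny egress posture. In AWS, a security
# group with no egress rules denies all outbound traffic, so workloads in
# this group have no path to the public internet.
data "aws_vpc" "this" {
  id = var.vpc_id
}

resource "aws_security_group" "nodes_inference" {
  name   = "nodes-inference-default-deny"
  vpc_id = var.vpc_id

  # In-VPC traffic only: ATS/HRIS read-through and cluster-internal calls.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [data.aws_vpc.this.cidr_block]
  }

  # Deliberately no egress block: Terraform removes AWS's default
  # allow-all egress rule, leaving outbound traffic fully denied.
}
```

A reviewer can verify the posture directly in the AWS console or CLI: the group shows zero outbound rules.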
02 Who has access to our model weights? +
You do. The weights artifact is delivered into a bucket you own, encrypted with a KMS key you hold. The Nodes service role has read access, scoped to the cluster; the artifact cannot be exported. Revoke the KMS key and inference stops — which is what every customer tests on Day 07 of deployment.
ref · spec · weights-01
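The "revoke the key and inference stops" test can be pictured with a customer-held CMK whose policy grants the service role decrypt access. This is an illustrative sketch, not the shipped key policy; the variable names are assumptions:

```hcl
# Hypothetical: a customer-held KMS key for the weights bucket. The customer
# owns the key policy; removing the service-role statement (or scheduling
# key deletion) revokes decrypt access and halts inference.
resource "aws_kms_key" "nodes" {
  description             = "Nodes weights encryption key (customer-held)"
  deletion_window_in_days = 7

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "CustomerRoot"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${var.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Delete this statement to revoke the service role's access.
        Sid       = "NodesServiceDecryptOnly"
        Effect    = "Allow"
        Principal = { AWS = var.nodes_service_role_arn }
        Action    = ["kms:Decrypt", "kms:DescribeKey"]
        Resource  = "*"
      }
    ]
  })
}
```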
03 Is OpenAI or Anthropic in the supply chain? +
No. Nodes fine-tunes an open-source foundation model with a pinned digest; there is no call out to a hosted LLM provider at training, fine-tune, or inference time. The DPA lists zero external subprocessors for model operations. This is the question that stalls most vendor reviews; here, it's answered by the deployment topology, not by policy.
ref · DPA · subprocessor schedule
04 How are model retrains handled on a per-customer basis? +
In-cluster, on your data, on a quarterly cadence, producing a new signed weights artifact that lives only in your VPC. Retrain jobs run under the Nodes service role with scoped IAM. Nodes does not pull your data back to retrain a shared model — there is no shared model. Your weights are yours.
ref · prov_chain · fine-tune stage
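One way an in-cluster quarterly retrain could be expressed, assuming the cluster is Kubernetes and jobs are managed through Terraform's kubernetes provider. The image name, namespace, service account, and arguments below are all hypothetical:

```hcl
# Sketch: quarterly in-cluster retrain as a Kubernetes CronJob.
# Runs entirely inside the VPC; output goes to the customer-owned bucket.
resource "kubernetes_cron_job_v1" "retrain" {
  metadata {
    name      = "nodes-retrain"
    namespace = "nodes"
  }

  spec {
    schedule = "0 2 1 */3 *" # 02:00 on the 1st, every third month

    job_template {
      metadata {}
      spec {
        template {
          metadata {}
          spec {
            service_account_name = "nodes-retrain" # scoped IAM via IRSA
            restart_policy       = "Never"

            container {
              name  = "retrain"
              image = "nodes/retrain:2026.4" # pinned by digest in practice
              args = [
                "--window", "18m", # rolling 18-month production trace
                "--output", "s3://${aws_s3_bucket.weights.id}/artifacts/",
              ]
            }
          }
        }
      }
    }
  }
}
```

The new signed weights artifact lands in the same customer-owned, KMS-encrypted bucket as the original; nothing in the job has a route out of the VPC.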
05 What happens to our deployment if Nodes is acquired or shuts down? +
The deployment keeps running. The weights are in your bucket under your KMS key. The inference binary is in your cluster. The Terraform module and Helm chart are pinned in your CI. A source-escrow clause in the master agreement gives you the right to rebuild the binary from source after a triggering event. "Lights-on without Nodes" is a stated design goal.
ref · spec · lights-on-01
06 How do GDPR / CCPA data subject requests flow? +
To your HRIS or ATS, never to Nodes. Because candidate and employee records live in your source-of-truth systems and are read through into Nodes only at scoring time, a delete or export request fulfilled in your HRIS is automatically reflected in Nodes on the next read. Decision traces bound to deleted subjects are tombstoned on the same schedule.
ref · DPA · controller/processor
07 Can you support air-gapped or GovCloud deployments? +
Yes. The Terraform module runs in AWS GovCloud, Azure Government, and GCP Assured Workloads out of the box. For air-gapped environments we ship weights, binaries, and the Helm chart as signed offline artifacts; retrains are triggered by a scheduled job inside the air-gapped cluster, with no return path. Talk to the architecture review team for the offline delivery process.
ref · spec · airgap-01
One hour with our security engineer. Not a sales call.
We walk through the deployment diagram against your cloud account topology, share the SOC 2 report, and leave you with a DPA template and a Terraform plan your team can review offline. Most reviewers come out with enough to open an internal architecture ticket the same day.