STACK SELECTION AND TRUST

Stack chosen per project, not per vendor

Cloud, model, and orchestration framework are picked against the workflow and the regulatory bar, not picked by default. Whichever combination ships, it is documented and audit-ready.

How we choose the stack

Most of the quality in an AI engagement is decided before the first line of code. The cloud, the model, and the orchestration framework all get matched to the workflow, not assumed from the last project. Where we put the effort:

Cloud

GCP, Azure, or AWS, depending on the client's existing tenant and regulatory bar. Application hosting, vector storage, and model inference can each route through a different provider when the workflow requires it. Where US CLOUD Act exposure has to be eliminated, we route through European data-sovereign cloud (Cleura, Nebul) instead.

Model

We will not deploy Claude where Gemini fits the task better, or ChatGPT where Claude does. The pick is made against the workflow's actual demands: reasoning depth, latency, cost per inference, multimodal needs, and jurisdiction of the inference endpoint. Where the regulatory bar requires it, we deploy a local open-weights model in the client's own infrastructure.

Agentic orchestration

LangGraph or Pydantic AI for stateful multi-agent flows, with human-in-the-loop gates on consequential outputs. Selected against the failure mode the agent needs to survive (hallucination, prompt injection, runaway autonomy), not against framework popularity.
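The human-in-the-loop gate described above can be sketched in a few lines. This is a minimal illustration in plain Python, independent of any specific framework; the risk threshold, field names, and return labels are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    """One proposed agent action, with a rule- or model-assigned risk score."""
    action: str
    risk: float  # assumed to be in [0, 1]

@dataclass
class Gate:
    """Human-in-the-loop gate: auto-approve low-risk steps, queue the rest."""
    threshold: float = 0.5
    pending: list = field(default_factory=list)

    def submit(self, step: AgentStep) -> str:
        if step.risk < self.threshold:
            return "auto-approved"
        self.pending.append(step)  # held until a human reviewer signs off
        return "queued-for-review"

gate = Gate(threshold=0.5)
print(gate.submit(AgentStep("summarize document", risk=0.1)))  # auto-approved
print(gate.submit(AgentStep("initiate payment", risk=0.9)))    # queued-for-review
```

In a real deployment the queue would persist across restarts and the risk score would come from the failure-mode analysis, not a hardcoded number.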

Workflow automation, OCR, machine vision, RAG

Different problems, different tools. Document understanding picks an OCR engine that matches the document family. Computer vision picks a model that matches the visual class. RAG picks a retrieval shape that matches how the user actually queries. We use the tools that fit, not the tools we used last project.
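To make "a retrieval shape that matches how the user actually queries" concrete, here is a hedged sketch of query routing. The heuristics are invented for illustration; real routing would be tuned against the client's actual query logs:

```python
def pick_retrieval_shape(query: str) -> str:
    """Route a user query to a retrieval strategy (illustrative heuristics only)."""
    q = query.strip()
    if q.lower().startswith(("why", "how", "compare")):
        return "semantic"  # open-ended question: embedding similarity
    if q.isupper() or any(ch.isdigit() for ch in q):
        return "keyword"   # IDs, codes, references: exact/BM25 match
    return "hybrid"        # default: combine both, then rerank

print(pick_retrieval_shape("why did churn rise last quarter"))  # semantic
print(pick_retrieval_shape("INV-2024-0137"))                    # keyword
print(pick_retrieval_shape("refund policy travel insurance"))   # hybrid
```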

The careful analysis and selection that happens before the build is where most of the quality lives. The choice we make at the start determines what the audit looks like, what compliance you can prove, and what the operating cost ends up being.

Data residency and sovereignty

European clients, European servers. Application hosting, vector storage, and model inference route through European regions by default. Specific providers depend on the client's existing infrastructure and the workflow: typical setups use Railway (Amsterdam) or the client's own Azure / GCP / AWS tenant for hosting, Neon Tech (Frankfurt) or the client's existing managed Postgres for vector storage, and Vertex AI on GCP europe-west1 / europe-west4 or Bedrock on AWS EU regions for inference.

This is a deliberate choice, not an accidental result. We minimize the personal data processed and keep data within the EU/EEA wherever possible.
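The default routing above can be captured as a small configuration sketch. Provider and region names are taken from the defaults listed; the structure itself is illustrative, and a real deployment would load this per client:

```python
# Default EU routing per stack component; overridden by the client's own tenant.
EU_DEFAULTS = {
    "hosting":        {"provider": "Railway",   "region": "Amsterdam (EU-West)"},
    "vector_storage": {"provider": "Neon Tech", "region": "Frankfurt (AWS eu-central-1)"},
    "inference":      {"provider": "Vertex AI", "region": "europe-west1"},
}

def region_for(component: str) -> str:
    """Look up the default EU region for a stack component."""
    return EU_DEFAULTS[component]["region"]
```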

For critical infrastructure that must remain on European ground, including environments where US CLOUD Act exposure has to be eliminated, we partner with Cleura and Nebul on European data-sovereign cloud. Both are EU-owned and EU-operated, designed to keep data under European jurisdiction end-to-end. This tier is activated for engagements where data sovereignty is a contractual or regulatory requirement, not as the default for every project.

Sub-processors and DPAs

These providers may process data in client projects. Which ones are active depends on the stack chosen for the workflow. Clients on Azure tenancy run a different set than clients on GCP, and inference may route through Vertex AI, Bedrock or a hosted Anthropic endpoint depending on the model and the regulatory bar.

Anthropic

Usage
Claude API for client products, Claude Team for internal development and operations. Per Anthropic's ToS, data is not used to train the models in either case.
Default region
EU/US per setup
DPA status
Incorporated into Anthropic's ToS. For Own projects, the client contracts directly with Anthropic.

OpenAI

Usage
Used only in the AI Pitch Analyzer (Whisper transcription). Anthropic is the primary choice elsewhere but offers no speech-to-text model.
Default region
EU when possible
DPA status
Signed Nordic AI DPA

Railway

Usage
Hosting and PaaS for client projects.
Default region
Amsterdam (EU-West) by default
DPA status
Signed Nordic AI DPA

Neon Tech

Usage
Postgres and pgvector for vector databases and application data.
Default region
Frankfurt (AWS eu-central-1)
DPA status
Regulated under Databricks DPA, ref Neon §3.4

Google Cloud Platform

Usage
Project- and client-dependent. Active only when chosen for the solution.
Default region
europe-west1 (Belgium) by default; other EU regions per project
DPA status
Per client setup

Microsoft (M365 / SharePoint / Azure)

Usage
Project- and client-dependent. Common when integrating with existing client setups.
Default region
Configurable, European regions as first choice
DPA status
Per client setup

We hold signed copies of several DPAs, available on request via email.

Own vs Lease: technical differences

The two models build on the same components, but ownership, operation, and DPA relationships differ fundamentally.

Infrastructure ownership
Own: Client
Lease: Nordic AI

Tenant model
Own: Single-tenant in client environment
Lease: Per-vertical, isolated per client

DPAs with third parties
Own: Client signs directly (Nordic AI guides)
Lease: Nordic AI holds the primary contracts; third parties act as sub-processors

Upgrades
Own: On the client's timing
Lease: Nordic AI rolls them out

Customization
Own: Full
Lease: Within the vertical framework

Data location
Own: Client's choice
Lease: EU by default, can be tailored

Heightened security requirements

Our standard setup meets the bar for most projects. For especially sensitive personal or business data, we offer two escalation tiers that adapt the Own delivery.

Anthropic via the client's own cloud tenant

Anthropic's Claude models are available as a managed service on AWS Bedrock, Google Vertex AI, and Azure. We route model calls through the client's own tenant, matched to the existing tech stack, so processing and audit trails stay inside the client's perimeter.
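The tenant-to-endpoint routing can be sketched as a small lookup. The endpoint labels below are illustrative shorthand, not SDK identifiers, and the mapping is an assumption for the example:

```python
# Map a client's cloud tenant to the managed Claude endpoint family.
# Labels are illustrative, not actual SDK or service identifiers.
TENANT_ENDPOINTS = {
    "aws":   "bedrock",
    "gcp":   "vertex",
    "azure": "azure-ai",
}

def inference_endpoint(tenant: str) -> str:
    """Pick the managed endpoint that keeps inference inside the client's
    own cloud perimeter; fail loudly if the tenant is unknown."""
    try:
        return TENANT_ENDPOINTS[tenant.lower()]
    except KeyError:
        raise ValueError(f"no managed Claude endpoint mapped for {tenant!r}")

print(inference_endpoint("GCP"))  # vertex
```

Failing loudly on an unknown tenant matters here: silently falling back to a public endpoint would defeat the perimeter guarantee.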

Local on-prem models

For exceptionally high requirements, or explicit on-prem requests, we deploy local open models in the client's infrastructure. This is a premium delivery on the Own model, priced as an extended Own project.

DPA strategy for Own projects

In Own projects, Nordic AI helps the client set up direct data processing agreements with relevant third parties. We deliver an overview of which providers will process personal data in the client's setup, point to the right DPA template, and coordinate with the client's legal resource.

The legal work itself (evaluation and signing) is done by the client's own counsel. We act as technical coordinator, not legal advisor.

We typically help set up DPAs with:

  • Anthropic (Claude API)
  • Railway (hosting)
  • Neon Tech (database)
  • Google Cloud Platform
  • Microsoft (Azure, M365, SharePoint)
  • Other providers relevant to the project

We are not lawyers and do not provide legal advice. The client's own legal resource evaluates and signs the agreements.

SLA: Own vs Lease

The SLA frames are clearly different. Concrete numbers are set in contract by scope.

Own

Uptime is governed by the client's own infrastructure: Railway, GCP, Microsoft, and others publish their own SLAs. On top of that, Nordic AI delivers a support SLA covering response time and incident handling, set by scope.

Lease

Nordic AI provides a single combined SLA covering platform uptime and support response time. Concrete numbers will be published when the lease model launches in Q3 2026.

Security measures

Measures are tailored to the project's risk level and the client's requirements, but the fundamentals are consistent.

  • Role-based access control on all systems
  • Data encryption at rest and in transit
  • Logging and monitoring of data access
  • Secure software development practices
  • Partnerships with ISO-certified cloud and infrastructure providers
  • Strict data minimization: we collect only what is necessary

Questions about technical setup or compliance?

We answer technical and compliance questions in detail, including questions about signed DPAs, data location, and provider setup.

Send us an email