On-premise.
For everything the cloud
is not allowed to touch.

Local anonymization. Local model. Only anonymized data leaves – or nothing at all.

anymize's anonymization and our small model anymize Spring run in your own infrastructure. Personal data never leaves your site – the cloud-specific compliance requirements (BSI C5, KRITIS cloud rules, DORA third-party clauses) no longer apply to the sensitive part, because in this setup anymize simply isn't a cloud provider for personal data. For banks, hospitals, public authorities and any organization with KRITIS status or a cloud prohibition: the maximum compliance standard we offer.

Why on-premise

Three reasons
to leave the cloud.

Cloud hosting with a European provider like Hetzner (Germany) covers 95 % of AI applications. But there are three situations where that path is not enough:

01 — Legal

Legal cloud prohibitions

Certain kinds of data must not leave your own infrastructure by law or professional rules – not even with a GDPR-compliant provider. Examples: certain patient data in hospitals with KRITIS status, classified government data, financial data under strict DORA readings.

02 — Trust

Confidential industries with client expectations

Law firms with especially sensitive mandates (white-collar, M&A) or consultancies close to government increasingly face clients who demand "no cloud, not even a European one." On-premise here is not a technical necessity but a trust argument.

03 — Economics

Compliance economics

Every cloud use for personal data drags audit chains with it: DPA, TOM review, C5 questionnaire, DORA third-party assessment. At larger organizations that costs weeks of internal work per audit cycle. An on-premise solution removes these requirements – not because they are waived, but because their scope no longer applies.

BSI C5 and the cloud frame

The regulatory
elegance of on-premise.

The core

Regulations like BSI C5 (Cloud Computing Compliance Criteria Catalogue) apply to cloud providers that process personal data. The scope is sharply defined: whoever is not a cloud provider, or processes no personal data, falls outside the scope.

How anymize resolves this on-premise

01

Anonymization runs with you.

In your data center, your own hardware, your network.

02

The Spring model runs with you.

Our small language model sits next to anonymization and works on the anonymized content.

03

Personal data never leaves your site.

No transfer, no export, no transmission. The data stays where it belongs.

04

Only anonymized content goes out.

Optionally to external models – with full context thanks to smart placeholders. Or nothing at all, in pure compliance mode.

The consequence is regulatorily precise: in this setup, anymize is not an external cloud provider for personal data. The corresponding cloud-specific requirements (BSI C5, KRITIS cloud rules, DORA third-party clauses) do not apply, because the scope is simply not met.

Compliance gains

What this practically
changes for you.

Five points every compliance lead grasps immediately:

  • No BSI C5 attestation required to deploy anymize (when on-premise).

  • No KRITIS notification obligation for external cloud use of the AI component.

  • Simplified DORA third-party assessment – anymize is not a critical IT third party, because it's not externally hosted.

  • No Schrems II transfer question – no transmission, no question.

  • No cloud impact assessment as foreseen by the DSK orientation guide for cloud use.

Anyone facing the compliance work of an AI project today knows how much that saves.

Transparency note

"Not a cloud provider" applies to the component running with you. Support access by anymize (on request only, documented, auditable) is contractually governed and does not count as ongoing cloud processing.

The three variants

Three sizes.
One principle.

Not every organization needs its own data center. We offer three deployment variants, scalable from a single practice server to a large government cluster:

Variant 1

Compact deployment

For smaller units: individual practices, small law firms, mid-sized companies with specific requirements.

A single server with sufficient GPU power for the anonymization pipeline and optionally the Spring model.

Scope

Anonymization + Spring model, low concurrency (typically up to ~20 simultaneous users), local web UI or API access for internal applications.

Variant 2

Enterprise deployment

For hospitals, public authorities, banks, large law firms: dedicated server infrastructure in your own data center environment.

High concurrency, high availability with failover nodes, integration into existing identity management systems (Active Directory, LDAP, SAML/OIDC).

Scope

The full anymize stack on-premise (anonymization, Spring, chat interface, knowledge bases, projects, artifact management, audit logs), scalable, with SLA-backed support.

Especially recommended

Variant 3

Hybrid deployment

The most interesting variant for many mid-sized organizations: anonymization and Spring model local – but optional access to external frontier models through the anymize cloud with anonymized content.

Sensitive documents are anonymized locally. Spring handles simple tasks on site. For complex analyses that demand absolute frontier quality, only the anonymized placeholder versions go to GPT/Claude/Gemini – the responses are then de-anonymized back in your local environment, restoring the original data.
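The placeholder round-trip can be pictured with a minimal sketch. This is an illustration only: the entity patterns, function names and the single regex pass are assumptions for demonstration, not the actual anymize pipeline (which is multi-stage).

```python
import re

# Illustrative sketch of a placeholder round-trip. The patterns and names
# are invented for demonstration; the real pipeline is multi-stage.
def anonymize(text: str, patterns: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace matches with numbered placeholders; keep the mapping locally."""
    mapping: dict[str, str] = {}
    counters: dict[str, int] = {}

    def repl(label: str):
        def _sub(m: re.Match) -> str:
            counters[label] = counters.get(label, 0) + 1
            placeholder = f"[{label}_{counters[label]}]"
            mapping[placeholder] = m.group(0)
            return placeholder
        return _sub

    for label, pattern in patterns.items():
        text = re.sub(pattern, repl(label), text)
    return text, mapping

def deanonymize(text: str, mapping: dict[str, str]) -> str:
    """Restore originals from the locally stored mapping."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

patterns = {"PERSON": r"Anna Schmidt", "IBAN": r"DE\d{20}"}
doc = "Anna Schmidt transferred funds from DE12345678901234567890."
anon, mapping = anonymize(doc, patterns)
# Only `anon` would leave the site; `mapping` never does.
answer = f"Regarding {list(mapping)[0]}: the account looks fine."
restored = deanonymize(answer, mapping)
```

The key property: the external model sees only `[PERSON_1]` and `[IBAN_1]` but keeps full conversational context, while the mapping needed to restore the originals never leaves your infrastructure.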

Why this is powerful

You get the quality of international frontier models, without a single piece of personal data leaving your site. The cloud provider status still doesn't apply, because the anonymized pipeline no longer falls under "processing of personal data."

Component boundary

Honest
component boundary.

Available on-premise

  • Anonymization pipeline

    All three stages: algorithmic, fine-tuning model, prompt verification.

  • anymize Spring

    Our small language model, explicitly designed to run on-premise.

  • Web interface

    For chat, knowledge bases, projects and artifacts.

  • REST API

    For integration into internal applications and workflows.

  • Audit logs and configuration

    Admin panel with full access to protocols.
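To illustrate what an internal integration against the REST API might look like, here is a hedged sketch. The host, the `/v1/anonymize` path and the field names are hypothetical assumptions, not the documented anymize contract; consult the actual API reference before integrating.

```python
import json
import urllib.request

# Hypothetical on-premise host and endpoint -- illustrative only.
BASE_URL = "https://anymize.internal.example"

def build_anonymize_request(text: str, api_key: str) -> urllib.request.Request:
    """Prepare (but do not send) a POST to a hypothetical /v1/anonymize."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/anonymize",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_anonymize_request("Patient: Anna Schmidt, born 1984.", "internal-key")
# Sending would be: urllib.request.urlopen(req) -- inside your network only.
```

Since the deployment is on-premise, such a call never crosses your network boundary: client, API and anonymization all run in the same infrastructure.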

Not available on-premise (and why)

  • anymize Waterfall and Fountain

Our larger models run exclusively on our own European inference infrastructure (EU, at Hetzner in Germany); you can use them in the hybrid setup with anonymized content.

  • International frontier models

    GPT, Claude, Gemini, Mistral, Perplexity, Kimi – run by their vendors and by nature not installable on-premise. Access happens in the hybrid setup via anonymized requests.

What this means in practice

Pure compliance setup

"Nothing leaves the site": anonymization + Spring model local, no external requests.

Hybrid setup

"Compliance up front, quality behind": anonymization local, anonymized requests optionally to Waterfall/Fountain with us or to international frontier models.

Both paths are possible under the same contract – you configure per workspace which models are enabled.
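The per-workspace gating can be pictured like this; the workspace names, model identifiers and policy shape are invented for illustration, not the actual configuration format.

```python
# Illustrative per-workspace model policy; all names are invented.
WORKSPACE_MODELS = {
    "compliance":  {"spring"},                           # pure compliance mode
    "research":    {"spring", "waterfall", "fountain"},  # anymize cloud, anonymized
    "engineering": {"spring", "gpt", "claude"},          # frontier via hybrid setup
}

def model_allowed(workspace: str, model: str) -> bool:
    """A request only goes out if the workspace explicitly enables the model."""
    return model in WORKSPACE_MODELS.get(workspace, set())
```

An unknown workspace or model simply resolves to "not allowed" – a deny-by-default posture, so nothing leaves the site unless explicitly configured.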

Requirements

What sits
in your rack.

The exact hardware requirements vary with variant and expected load; here are the typical dimensions as orientation:

Compact deployment

  • CPU

    Modern server processor (e.g. current-generation AMD EPYC, 32+ cores)

  • RAM

    From 128 GB

  • GPU

    1× current enterprise GPU for the anonymization models and Spring (concrete recommendations in the sales conversation)

  • Storage

    2–4 TB NVMe SSD for models and data

  • Network

    Gigabit internal, outbound only if hybrid setup is desired

Enterprise deployment

Individual sizing

Sizing depends on concurrent users, document volume and high-availability requirements. We support capacity planning during the sales cycle.

Operating model

  • Container-based

    Docker / Kubernetes – integrates into your existing orchestration landscape.

  • Identity integration

    Via SAML, OIDC, LDAP, Active Directory.

  • Updates

    Rolled out regularly and in a controlled fashion – no auto-update without your consent.

  • Support access

    On request only, documented, auditable – no remote access active by default.

Who needs it

Six industries
that truly need on-premise.

Hospitals & clinics
Patient data with maximum confidentiality, KRITIS status from a certain size, KHZG requirements, § 203 StGB
Banks
DORA third-party clauses, BaFin supervision, banking secrecy, internal IT principles on cloud avoidance
Insurers
VAG requirements, insurance secrecy, portfolio data with extremely long retention periods
Public administration
State and federal guidelines on cloud use, NIS2 classification, citizen data in clear text, classification levels
Large law firms with M&A / white-collar mandates
Clients with their own cloud policy, absolute confidentiality demands, international parties
Industry with trade secrets
Technology protection against industrial espionage and foreign access rights, internal IP protection policies

Also: individual medical practices with especially sensitive clientele (addiction medicine, psychiatry, HIV specialists) increasingly choose compact deployments – not because they are KRITIS, but because their patients expect maximum discretion.

Sales contact

On-premise
is enterprise.

On-premise deployments are individual projects – there is no "buy online" button. Every installation is sized, planned and accompanied. It begins with a conversation in which we clarify the three decisive questions:

01

What data volume and user count do you expect?

02

What infrastructure is already in place – and what needs to be added?

03

Is compact enough, do you need enterprise, or is hybrid the right path?

After that, we prepare a binding offer with hardware recommendations, license model, support package and timeline. Typical project duration from kick-off to go-live: 4–12 weeks, depending on variant and your internal pace.

Already in the first conversation, you receive:

  • TOM documentation

  • Completed security questionnaire

  • Sub-processor list for the on-premise frame (significantly shorter than in the cloud setup)

What you should know about on-premise.

Frequently asked questions

Which components run on-premise?

The anonymization pipeline (three-stage), anymize Spring (our small language model), the web interface for chat, knowledge bases, projects and artifacts, plus the REST API for your own integrations. Our larger models (Waterfall, Fountain) do not run on-premise – they remain reachable through anonymized requests in the hybrid setup.

Start now.
14 days free trial.

All models. All features. No credit card.

We stand behind anymize. And we know – when an AI tool touches client, patient or employee data, a demo video isn't enough. That's why we give you 14 days of full access – all models, all features, no credit card. Enough time to be certain, before you trust us.

Your AI workplace awaits.