EU servers are not a firewall: Why anonymization is the only protection when using AI

Raitschin Raitschew, Founder

Server location is not decisive – what matters is whether personal data is sent to AI models at all

The lawyer who did everything right

A Hamburg-based lawyer has been using a German AI platform for contract analysis for three months. German servers. ISO certified. Privacy policy reviewed. Data processing agreement signed.

He uploads an employment contract. Client's name: Maria Schneider. Date of birth. Tax ID. Salary. Everything in it.

The lawyer is satisfied. GDPR-compliant. German platform. What could possibly happen?

What he hasn't considered: The data is sent to an AI model for processing. This model is operated by a US company – OpenAI, Anthropic, or Google. And these companies are subject to the CLOUD Act.

In June 2025, Anton Carniaux, director of public and legal affairs at Microsoft France, confirmed before a French Senate inquiry committee: given a formally valid US request, Microsoft cannot guarantee that it will not hand over data. Despite the EU Data Boundary. Despite encryption. Despite all contracts.

Maria Schneider's client data was sent to an AI model controlled by a US company. The German server location of the platform doesn't change that.

Was that really GDPR-compliant?

The thesis no one wants to hear

Server location is not the decisive factor.

What matters is a different question: Is personal data sent to an AI model operated by a US company?

If yes, then complex legal questions apply. Third-country transfers. CLOUD Act. FISA 702. A compliance situation that depends on political developments in Washington.

If no – if you anonymize data before transmission – then the GDPR is not applicable to this data. Recital 26 states it clearly: Anonymized information does not fall under data protection.

No personal data sent = no data protection obligations for this transfer.

That is the structurally safest solution.

The fundamental problem: Where is the data processed?

The German AI landscape has developed rapidly in recent years. A number of platforms now offer AI services with a focus on data protection and EU hosting:

DeutschlandGPT advertises BSI C5 certification, ISO 27001, and hosting in the Telekom data center in Frankfurt. Langdock from Berlin has won over 1,500 customers and is SOC 2 Type II certified. Logicc from Hamburg uses Azure Frankfurt and AWS Bedrock EU. MeinGPT by SelectCode hosts the platform with Hetzner in Germany.

These are serious providers with well-thought-out security concepts. The German server locations and certifications are real advantages over direct use of ChatGPT or Claude.

But on closer examination, a structural problem emerges that all these solutions share – and it's not the fault of the providers themselves:

The leading AI models – ChatGPT, Claude, Gemini – are developed and operated by US companies. OpenAI, Anthropic, Google. Even if these models are provided via European cloud regions (Azure EU, AWS EU, GCP EU), control remains with US companies.

And US companies are subject to US law.

The difference between storage and processing

A common misunderstanding lies here. Many platforms advertise that data is stored on German or European servers. That is correct and important.

But: Storage is not the same as processing.

If you upload a document and have it analyzed by an AI, the data is sent to an AI model for processing. This model runs on infrastructure controlled by US companies – even if the servers are physically located in Frankfurt or Dublin.

This is the critical point that many overlook.

The legal dilemma: Why EU servers don't automatically protect

The CLOUD Act – The extraterritorial law

In 2013, the US Department of Justice demanded that Microsoft hand over emails stored on servers in Dublin. Microsoft refused. The case went through the courts. The Second Circuit ruled in favor of Microsoft.

The reaction of the US Congress? The Clarifying Lawful Overseas Use of Data Act. Passed in 2018. Retroactive. Unambiguous.

The core statement is in Section 2713: US authorities can compel US companies to hand over data, “regardless of whether such data is located within or outside the United States.”

The physical server location is not legally decisive. What matters is control over the data. And this control lies with the companies that operate the AI models.

The BMI-commissioned report by the University of Cologne (December 2025) puts it unmistakably: “The ability of US authorities to secure data in this way cannot be reliably excluded by technical or organizational measures alone.”

FISA Section 702 – Mass surveillance

The Foreign Intelligence Surveillance Act allows US intelligence services to surveil non-US persons outside the USA. Without individual judicial authorization. Without specific suspicion.

Affected are “Electronic Communication Service Providers” – this also includes cloud providers and AI services.

The 2024 expansion further extends the circle of companies required to cooperate.

This was exactly the reason why the ECJ declared the Privacy Shield invalid in the Schrems II judgment: The US surveillance programs are not proportionate by EU standards. EU citizens have no judicially enforceable legal protection in the USA.

Structurally, nothing has changed.

Article 48 GDPR – The unresolved conflict

Article 48 GDPR provides that judgments and decisions of third-country courts and authorities may only be recognized or enforced if they are based on an international agreement, such as a mutual legal assistance treaty. No such agreement covering the CLOUD Act exists between the EU and the USA.

US companies are caught in a dilemma:

  • If they comply with the CLOUD Act: Potential GDPR violation
  • If they refuse: US sanctions

Microsoft has clarified its position. Anton Carniaux confirmed it before the French Senate: given a formally valid request, Microsoft cannot guarantee that the data will not be handed over.

The Cologne report

In December 2025, a legal opinion from the University of Cologne, commissioned by the BMI, became public through an FOI request.

The core statement: for sensitive data, public administration, and critical infrastructure, the use of cloud services under US control is “fundamentally incompatible” with digital sovereignty and full GDPR compliance.

Fundamentally incompatible. Not “difficult”. Not “possible with additional measures”. Incompatible.

The EU-US Data Privacy Framework – An interim solution

Since July 10, 2023, the EU-US Data Privacy Framework enables data transfers to certified US companies without additional safeguards. The major AI providers are certified.

The DPF provides:

  • Executive Order 14086 with “proportionality” requirements for US intelligence services
  • A two-stage complaint mechanism
  • Annual reviews

That is progress compared to the period after Schrems II without an adequacy decision.

The uncertainty factors

FISA 702 has not been reformed. The law continues unchanged.

The word “proportionate” is interpreted differently in the USA than in the EU. What is considered proportionate in Washington could fail before the ECJ.

The Data Protection Review Court is not an independent judiciary in the classical sense. The judges are appointed by the US Attorney General. The proceedings are not public.

The Privacy and Civil Liberties Oversight Board (PCLOB), which was supposed to control surveillance practices, was weakened by the dismissal of members. The BfDI expressed “serious concerns”.

Schrems III on the horizon

Max Schrems and noyb have called the DPF a “copy of the failed Privacy Shield”. Judicial review is likely.

Any company that relies exclusively on the DPF for its AI use bears a residual risk. Not today. But possibly tomorrow.

The German supervisory authority landscape: Different assessments

The critical voices

The Data Protection Conference (DSK) stated in November 2022, by a narrow majority (9:8): Proof that Microsoft 365 can be operated in compliance with data protection law “cannot be provided on the basis of the Microsoft data protection addendum.”

The European Data Protection Supervisor (EDPS) found in March 2024: The European Commission itself violates data protection regulations with its use of Microsoft 365.

The pragmatic voices

In November 2025, the Hessian data protection commissioner Prof. Dr. Alexander Roßnagel declared Microsoft 365 to be usable in a data-protection-compliant manner – referring to the DPF and Microsoft's EU data boundary.

IT security researchers criticized that no technical review had been carried out.

The core problem

Supervisory authorities assess differently. They review contracts and documentation, but not the technical reality of data flows.

In the end, the using company bears the risk. Not the AI provider. Not the supervisory authority. You.

The structural solution: Anonymization before transmission

What does NOT provide sufficient protection

EU server location alone: The server location determines where data is stored. But if this data is sent to an AI model under US control for processing, the storage location is no longer decisive.

Encryption with provider keys: If the AI provider has the key, they can decrypt. And if they can decrypt, they can hand over.

Standard contractual clauses (SCCs): After Schrems II, SCCs are only effective with “Supplementary Measures”. The EDPB recommendations show how complex that is in practice.

Contractual assurances: “We will contest requests” is not the same as “We will not hand over”. In the end, US law applies to US companies.

What structurally protects

Anonymization before transmission: If no personal data is sent to the AI model, there is no personal data that could be handed over.

This is not risk minimization. This is risk elimination for this specific aspect.

The legal foundation of anonymization

Recital 26 of the GDPR:

“The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.”

That is not an interpretation. That is the legal text.

Anonymized data = no personal data = GDPR not applicable to this data.

The EDPB confirms: Real anonymization lifts data protection obligations.

Why anonymization trumps all other measures

Independent of server location: It doesn't matter whether the server is in Frankfurt, Dublin, Singapore, or Virginia. If no personal data is sent, there is nothing to protect.

Independent of AI provider: It doesn't matter whether OpenAI, Anthropic, Google, or Mistral operates the model. No access to personal data is possible if none is transmitted.

Independent of the Data Privacy Framework: If Schrems III comes and the DPF is declared invalid, your processes remain unaffected. No dependence on political developments.

Anonymized data + any AI model = no GDPR relevance for the transmitted data

This is a structural certainty that no other measure can offer.

Practical example: The difference in everyday life

Scenario: Law firm analyzes employment contract

Without anonymization:

Input to AI: "Analyze this employment contract between Müller GmbH and Mr. Max Mustermann, born on 15.03.1985, residing at Bahnhofstraße 12, 80331 Munich, Tax ID 12 345 678 901..."

  • → Complete personal data is sent to the AI model
  • → This data reaches systems under US control
  • → CLOUD Act: US access theoretically possible
  • → GDPR: Art. 44ff. applicable, complex legal basis required
  • → With Schrems III: Legal basis may fall away

With anonymization:

Input to AI: "Analyze this employment contract between [[ORG-1]] and [[PER-1]], born on [[DAT-1]], residing at [[ADR-1]], Tax ID [[ID-1]]..."

  • → Only placeholders reach the AI model
  • → No personal data leaves your system
  • → CLOUD Act: Access only delivers anonymous placeholders
  • → GDPR: Not applicable to the transmitted data
  • → With Schrems III: No changes required

After processing

The AI responds with placeholders: “[[PER-1]] is entitled to 30 vacation days according to the contract...”

The back-transformation inserts the original data: “Max Mustermann is entitled to 30 vacation days according to the contract...”

The lawyer receives a fully usable analysis. With client names. With concrete references. Without any personal data ever having reached an external system.
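The round trip described above can be sketched in a few lines of Python. This is a simplified illustration, not anymize.ai's actual implementation: the entity list is supplied by hand here, whereas a real system detects names, dates, and IDs automatically before anything leaves the company.

```python
from collections import defaultdict

def anonymize(text, entities):
    """Replace each detected entity with a typed placeholder and keep the mapping.
    `entities` is a list of (value, kind) pairs; a production system would
    generate this list with an entity detector rather than a hand-written list."""
    counters = defaultdict(int)
    mapping = {}
    for value, kind in entities:
        counters[kind] += 1                     # per-type counter: PER-1, ORG-1, ...
        placeholder = f"[[{kind}-{counters[kind]}]]"
        mapping[placeholder] = value            # kept locally, never transmitted
        text = text.replace(value, placeholder)
    return text, mapping

def deanonymize(text, mapping):
    """Back-transformation: restore the original values in the AI's answer."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

contract = "Employment contract between Müller GmbH and Max Mustermann."
entities = [("Müller GmbH", "ORG"), ("Max Mustermann", "PER")]

safe_text, mapping = anonymize(contract, entities)
# safe_text: "Employment contract between [[ORG-1]] and [[PER-1]]."

ai_answer = "[[PER-1]] is entitled to 30 vacation days according to the contract."
restored = deanonymize(ai_answer, mapping)
# restored: "Max Mustermann is entitled to 30 vacation days according to the contract."
```

Only `safe_text` is ever sent to the AI model; the mapping stays on your own systems, which is what makes the back-transformation possible without any third-country transfer of personal data.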

The decision matrix: When to use which approach

Not all data is equally sensitive. Not every processing operation requires the same protective measures. But the decision should be made consciously.

| Data type | Standard AI | Anonymization |
| --- | --- | --- |
| Publicly available information | ✓ Sufficient | Optional |
| Business data without personal reference | ✓ Sufficient | Optional |
| Business data with personal reference | ⚠️ Risk assessment | Recommended |
| Customer data | ⚠️ Risk assessment | Strongly recommended |
| Client data (lawyers) | ❌ Professional secrecy | Mandatory |
| Patient data | ❌ Art. 9 GDPR | Mandatory |
| Employee data | ⚠️ Works council | Recommended |
| Financial data with personal reference | ⚠️ Regulatory | Strongly recommended |
| Government data | ❌ BMI report | Mandatory |

The special categories

Article 9 GDPR defines “special categories of personal data”: health data, biometric data, genetic data, political opinions, religious beliefs, trade union membership, sexual orientation.

Stricter requirements apply to this data. Processing is generally prohibited, with narrowly defined exceptions.

Sending a patient report to an AI system – regardless of where hosted – is a high-risk operation. Anonymization is not “recommended” here. It is effectively mandatory.

Professional secrecy holders

Lawyers, tax advisors, auditors, doctors – they are all subject to special confidentiality obligations. Criminally sanctioned. Professionally relevant.

§ 203 StGB (German Criminal Code) protects professional secrecy. The disclosure of confided secrets is punishable.

If a lawyer enters client data into an AI system controlled by a US company – are they then disclosing a protected secret?

The legal debate is still ongoing. But the practical consequence is clear: Those who want to be on the safe side anonymize.

Recommendations

For companies that want to use AI

Immediately: Take inventory. Which AI tools are being used? Which personal data is being transmitted? Who approved this?

Short term: Transfer Impact Assessment for every service. This is not an optional exercise. After Schrems II, it is mandatory. Document the risks.

Medium term: Evaluate an anonymization solution. Not needed for all use cases. But essential for sensitive data.

Long term: Exit strategy in case the DPF falls. What happens with Schrems III? Which processes are affected? How quickly can you switch?

For regulated industries

Lawyers: The duty of confidentiality under § 43a BRAO is non-negotiable. Client data in AI systems? Only anonymized.

Tax advisors: § 57 StBerG defines the duty of confidentiality. Tax returns contain highly sensitive data. Anonymization is the safe path.

Doctors: Patient data is in special categories under Art. 9 GDPR. Medical confidentiality under § 203 StGB is protected under criminal law. No compromises possible.

Financial service providers: BaFin has clear requirements for outsourcing. AI services from US providers? Check MaRisk compliance. Anonymization significantly reduces regulatory risk.

The checklist

  • Which personal data do we send to AI services?
  • Who operates the AI models we use?
  • What legal basis do we have for this transmission?
  • Have we conducted a Transfer Impact Assessment?
  • What happens in the case of a Schrems III ruling?
  • Have we evaluated an anonymization solution?
  • Is our data protection officer involved?
  • Does the works council know (if one exists)?

The new way of thinking

The old question

“Is my AI provider GDPR-compliant?”

This question is misleading. It suggests that GDPR compliance is a property of the provider. Something you can buy. A checkmark on a list.

GDPR compliance is the result of an interplay of technology, organization, and law. The provider can contribute. But the responsibility lies with you.

The right question

“Am I sending personal data to AI systems controlled by US companies?”

This question forces analysis of the actual data flow. Not the marketing promises. Not the contracts. Reality.

And it opens up a solution: If you don't send personal data, you have solved this specific problem. No third-country transfer discussion for this data. No Schrems III concerns. No CLOUD Act risks.

The insight

EU servers are better than US servers. That is undisputed.

But: If the data is sent to AI models under US control for processing, the storage location is no longer the decisive factor.

The only structurally safe solution: Send no personal data.

Anonymization is not “one option among many”. For sensitive data, it is the only solution that works independently of political and legal developments.

All other measures – EU servers, encryption, contracts, certifications – are important and right. But they are risk minimizing. Not risk eliminating.

The difference is fundamental.

The conclusion

The AI revolution is real. The productivity gains are enormous. No company can afford to ignore this technology.

But use must be responsible.

The question is not: Which AI platform has the prettiest data protection badges?

The question is: Which data leaves my company? And how can I control that?

For public information and general business data without personal reference, the answer is simple: use AI tools at your discretion.

For sensitive data – client data, patient data, employee data, customer data – the answer is equally clear: Anonymize before you send.

Not because the AI providers are unreliable. But because the legal situation is complex, can change, and you bear the responsibility.

Anonymization gives you back control. Regardless of what is decided in Washington, Brussels, or Luxembourg.

This is not fear. This is precaution.

This is not a compromise solution. This is the best solution.

About anymize.ai

anymize.ai is the firewall for personal data. We enable secure use of AI services through automatic document anonymization – with over 95% detection rate and a unique bidirectional function.

Our technology detects and replaces personal data before it leaves your company. After AI processing, the original data is restored. You receive personalized results without transmitting personal data.

Our approach: GDPR compliance through technology, not through trust.

Frequently asked questions (FAQ)

Are EU servers sufficient for GDPR-compliant AI usage?

The server location alone is not decisive. If personal data is sent to AI models operated by US companies for processing, the CLOUD Act applies – regardless of where the data is stored. The safest solution: Anonymize personal data before transmission.

What is the safest way to use AI with sensitive data?

The safest approach is anonymizing personal data before transmission to AI systems. If no personal data is sent, the GDPR is not applicable to this transmission and there is no data that could be handed over.

Why is the CLOUD Act relevant for German companies?

The US CLOUD Act of 2018 obliges US companies to hand over data “regardless of whether such data is located within or outside the United States”. If AI models are operated by US providers such as OpenAI, Anthropic, or Google, those providers can be compelled to hand over data – even if the servers are in the EU.

What happens to my AI usage with a Schrems III ruling?

If the ECJ declares the EU-US Data Privacy Framework invalid, the legal basis for data transfers to US companies falls away. Companies that rely exclusively on the DPF would have to restructure their AI usage. Anonymized data would not be affected – it does not fall under the GDPR and requires no transfer basis.

Can lawyers use AI tools without concerns?

For general research and publicly available information: yes. For client data: only with anonymization. The attorney-client privilege under § 43a BRAO and professional secrecy under § 203 StGB prohibit uncontrolled disclosure of client information. Anonymization before transmission is the safe path.

How does bidirectional anonymization work?

In bidirectional anonymization, personal data is replaced by placeholders before AI transmission (e.g., “Max Müller” → “[[PER-1]]”). The AI processes only anonymized data. After receiving the AI response, the placeholders are replaced with the original data. The result: Personalized, usable answers – without personal data having left the company.

Take action now: GDPR-compliant AI usage with anymize.ai

Do you want to use the benefits of ChatGPT, Claude, and other AI models – without data protection risks?

anymize.ai offers:

  • Automatic detection of personal data (>95% accuracy)
  • Bidirectional anonymization with restoration of original data
  • Processing of documents, not just prompts
  • Integration into existing workflows (API, Zapier, Make.com, n8n)
  • German development, GDPR-compliant

Sources

  • BMI/University of Cologne: Legal opinion on cloud usage (December 2025)
  • EDPB Guidelines 05/2021 on the interplay of Art. 3 and Chapter V GDPR
  • DSK decision on Microsoft 365 (November 2022)
  • EDPS decision on the European Commission's Microsoft 365 usage (March 2024)
  • HBDI Hesse: Press release on Microsoft 365 (November 2025)
  • Anton Carniaux, Microsoft France: Testimony before the French Senate inquiry committee (June 2025)
  • Max Schrems/noyb: Statements on the EU-US Data Privacy Framework
  • GDPR Recital 26
  • US CLOUD Act, Section 2713
  • FISA Section 702

This article is for informational purposes and does not constitute legal advice. For specific questions about GDPR compliance of your AI usage, please contact your data protection officer or a specialized lawyer.

Start now.
14 days free trial.

All models. All features. No credit card.

We stand behind anymize. And we know – when an AI tool touches client, patient, or employee data, a demo video isn't enough. That's why we give you 14 days of full access – all models, all features, no credit card. Enough time to be certain before you trust us.

Your AI workplace awaits.