In nearly every finance-related organization I’ve encountered, a common pattern emerges:
Official policy states: “No ChatGPT, no external AI tools - compliance forbids it.”
The reality is that employees use these tools anyway, usually quietly, because their workloads leave them little choice.
This isn’t merely about “productivity hacks.” Employees are sharing parts of contracts, project documents, internal emails, and even client data with third-party tools operated by companies outside their own infrastructure and sometimes outside the EU.
From a compliance perspective, this poses significant risks:
- Data leaves the company’s network and jurisdiction.
- There is typically no data processing agreement in place.
- There is often no clear picture of where this data is stored, how it is used, or who may access it.
On paper, many organizations prohibit this behavior. In practice, they turn a blind eye and hope for the best.
Whether we like it or not, employees will continue to use these tools. In certain roles, those who don’t simply cannot keep pace with colleagues who do.
Thus, the real question for companies isn’t: “AI or no AI?”
It’s about choosing between:
- Unmanaged shadow use of public tools with sensitive data
- A clear, compliant, internal way to use AI in the workplace
Both scenarios stem from the same desire for assistance and automation, but the latter involves the following (sketched in code after this list):
- Operating on infrastructure controlled by the company
- Defining rules for what data can be used
- Implementing logging, access control, and auditability
- Integrating with real workflows (invoices, contracts, tickets, etc.)
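To make that concrete, here is a minimal, illustrative sketch of what such an internal gateway could look like: a role check, a data rule that blocks prompts containing patterns that should never leave their system of record, one audit log entry per request, and forwarding to a model endpoint hosted inside your own network. The endpoint URL, role names, and patterns are placeholder assumptions, not a reference to any specific product or setup.

```python
# Minimal sketch of an internal AI gateway. Every request is role-checked,
# screened against data rules, and written to an audit log before being
# forwarded to a model hosted inside the company network.
# Endpoint, roles, and patterns below are illustrative assumptions.
import json
import re
import time
import urllib.request

MODEL_ENDPOINT = "http://llm.internal.example:8080/v1/completions"  # assumed self-hosted endpoint
AUDIT_LOG = "ai_gateway_audit.jsonl"

# Access control: which roles may use the gateway at all.
ALLOWED_ROLES = {"finance_analyst", "legal_counsel", "ops"}

# Data rules: block prompts containing patterns that must stay in their source systems.
BLOCKED_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
    re.compile(r"\bACCT-\d{6,}\b"),                    # internal account IDs (hypothetical format)
]


def audit(user: str, role: str, decision: str, prompt_chars: int) -> None:
    """Append one audit record per request so usage stays reviewable."""
    record = {"ts": time.time(), "user": user, "role": role,
              "decision": decision, "prompt_chars": prompt_chars}
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def handle_prompt(user: str, role: str, prompt: str) -> str:
    """Gatekeep a single prompt: role check, data-rule check, then forward internally."""
    if role not in ALLOWED_ROLES:
        audit(user, role, "denied_role", len(prompt))
        return "Access denied: your role is not cleared for AI usage."

    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        audit(user, role, "denied_data_rule", len(prompt))
        return "Blocked: the prompt contains data that must not leave its system of record."

    audit(user, role, "forwarded", len(prompt))
    payload = json.dumps({"prompt": prompt, "max_tokens": 512}).encode("utf-8")
    req = urllib.request.Request(MODEL_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    print(handle_prompt("a.meier", "finance_analyst", "Summarize the attached invoice terms."))
```

The point is not this exact code but the shape: the rules, the log, and the model all sit inside your own boundary, so compliance can actually see and audit what happens instead of guessing.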
That is what I mean by engineering AI systems behind the firewall.
If you do not want people pasting sensitive data into random web UIs, you have to give them a fast, useful and compliant way to use AI inside your own boundary.
