With respect to my private data, it seems all roads eventually lead to California.
Notable: Added "Microsoft Azure, which provides cloud infrastructure for all Anthropic products (Worldwide)."
That is significant for us. We have already accepted the risk of using Microsoft Azure so we use GitHub Copilot for that reason.
We have Claude disabled at the moment but if Anthropic has moved over to Azure then we can consider to start using it.
"Accepted the risk", just in case people don't know, is a compliance term. I don't mean that Azure is risky.
> I don't mean that Azure is risky.
Depending on the company and their sector, people will nod in approval, or start laughing.
My company also accepted the risk of using Microsoft. We have a "data sharing agreement" together, with very powerful magical words. Compliance people are happy and sleep well.
Microsoft 365 Copilot has enabled Claude models, and I imagine they want that running on Azure?
Likely. Internally, MS doesn't like using models that aren't hosted by them (see VS Code Copilot)
Hopefully it goes better for them than it has for GitHub.
hope in one hand and do something in the other to see which one fills up faster. Hoping is a strained idea at the best of times, but hoping on Azure really strains credulity
If you hope for a hand full of do, you win(doze?)
But increases credibility?
Ahh now it is clear why so many outages lately. Solid choice.
When you host a solid model on terrible infrastructure, the infrastructure wins
As God intended.
I fear the day it becomes the other way around.
There you go. So when Azure has an outage, so will Anthropic (and Github).
Now expect both of them to have unstable uptime and outages every week.
To be clear, for those reading these comments and thinking “oh no Azure”, this is an addition to the list of cloud companies that provide “cloud infrastructure worldwide” for “all products”. Alongside GCP and AWS. This is not a GitHub style announcement that they’ve moved all operations to Azure.
Coincidently, Claude Code appears to be down right now (in Europe West at least).
Every time I hear a "something X Azure" announcement, that something just seems to break right away.
I know correlation is not causation, but my opinion of Azure is already too damn low to not link those two events.
It's also down for me here in Brazil. Getting overloaded errors for about one hour now. It's been happening a lot this week. Is this normal for Anthropic?
Working fine for me right now, from Brazil. Claude via Github Copilot at least.
I'm using Claude Code on the terminal. Not sure if it matters.
The promotional double usage period is just about to end too. Sucks.
I don’t know what I am looking at there. What is a subprocessor?
It’s a legal term for handling data. It’s when Anthropic uses an external party to handle their data / systems, but Anthropic remains the legal entity responsible for data privacy, since the customer (you) has a contract with Anthropic.
So I thought there were multiple FedRAMP service providers offering hosted Claude models. Not sure why they're linking to one in particular.
Where are they linking to just one? The chart shows three: Palantir, AWS GovCloud, and GCP w/FR-High Assured Workload.
The chart should show ITAR also IMO. Only Palantir and AWS GovCloud would have checkboxes and that’s extremely relevant to defense contractors. (Vertex AI is available within an FR-High assured workload but not ITAR, the only conceivable reason for which would be foreign person access to the US sovereign production environment.)
Worth noting the distinction between subprocessors that handle customer data vs. those that handle operational/business data. The ones in the "Customer Data" category are where the compliance implications are most significant for enterprise customers under GDPR, HIPAA, or similar frameworks.
For anyone evaluating this for a procurement decision: the relevant questions are (1) which subprocessors have access to content you send in API requests, (2) what data processing agreements are in place with each, and (3) what is the notification window for new subprocessor additions. The 30-day notice for customer data subprocessors is fairly standard for enterprise SaaS at this point.
Publishing this list proactively rather than only on request is a positive signal, even if the list itself is fairly short.
Title: Welcome to the Anthropic Trust Center
.. was this a deep link? You might want to repeat in the comments
> Anthropic Subprocessor Changes
> General
> Published March 26, 2026
> We've updated our subprocessor list with three additions
Works for me, gotta scroll down a bit
That's an h3, not a title. Looks like they probably meant https://trust.anthropic.com/updates. It's still an entry in an h3 (with "Welcome to the Anthropic Trust Center" as the title), but it is at least the most recent update. (A canonical URL would stop this being directly linked.)
[flagged]
I hear the slot machine thing a lot but I don’t get it.
I use Claude Code every day for coding because it makes me way more productive. But I don’t resonate with the slot machine effect. Can you expand on what mechanism you see that give it a slot machine effect? Is it for all users or just a subset?
For people who want to ask a model for an app, or a website, or something at a level of “hey you make apps right, I have had this idea for years…” the experience is akin to a slot machine — sometimes they get what they imagined their description would create and it works, and sometimes they get a hollow chocolate approximation.
I think it is just a strawman extrapolation of the nondeterministic nature of LLMs.
[flagged]
[flagged]
The more they share, the easier it is to exploit the system.
"I'm trying to do something illegal and Anthropic are aware. Why do they keep banning me??"
Unless you're not.
Look, if you make an LLM and you don't want people using it in a particular way, then communicate with them. And if you can detect what you think is such behavior, then communicate. In real life you don't threaten to end the relationship over every issue that comes up.
It's such childish business to always pull out and threaten the ban hammer any time there's any possible issue with how they want their system used.
The last time I used Claude, I was completely locked out of a long chat (including not being able to view it) for sending something innocent that was written in another language, where there was apparently some confusion with the translation. I’m sure it will get worse over time until Chinese models start to proliferate more and challenge the monopoly on regulatory policy.
WTF is a "subprocessor"?
They should just be honest and say "data loophole".
It’s basically another party that is used as infrastructure by the company whose services you’re using, and that has access to your data, but the subprocessor doesn’t need to extend its terms down into the EULA. So, for example, if you host databases on AWS, AWS is your subprocessor.
It is an important legal concept under the GDPR and other data governance frameworks.