cube00 a day ago

From that same X thread: Our agreement with the Department of War upholds our redlines [1]

OpenAI has the same redlines as Anthropic based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...

  • AlexVranas a day ago

    OpenAI is playing games.

    When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."

    When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."

    That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.

    • trjordan 3 hours ago

      "Red lines" does not mean some philosophical line they will not cross.

      "Redlines" are edits to a contract, sent by lawyers to the other party they're negotiating with. They show up in Word's Track Changes mode as red strikethrough for deleted content.

      They are negotiating the specifics of a contract, and Anthropic's contract was overly limiting to the DoD, whereas OpenAI's was not.

      • mikeryan 39 minutes ago

        That’s not how the term is being used here.

        In this case “red lines” is being used to mean “lines that cannot be crossed”.

        Anthropic wanted guardrails on how their tech was used. DOD was saying that wasn’t acceptable.

    • germandiago 10 hours ago

      I am going to stop using ChatGPT immediately.

      • PullJosh 2 hours ago

        I just deleted my account. The other LLMs are so good that I don't even feel like I'm sacrificing much.

      • replwoacause 8 hours ago

        Deleting my account today once I import my data to Claude

        • kenperkins 7 hours ago

          I'm also waiting on my ChatGPT data export. I started it last night and I'm still waiting. I would say there's huge opportunity here for Claude to offer direct import tooling.

          • aeon_ai 3 hours ago

            Literally a feature being advertised as of today.

      • gigatexal 4 hours ago

        Good. More of this. I did.

      • foobarian 6 hours ago

        No no no, use it more, make sure to use up as many tokens as possible. They do inference at a loss

        • TomGarden 5 hours ago

          This makes no sense, their value in the marketplace is in usage and inflated promise, not actual revenues

        • tempaccount420 3 hours ago

          > They do inference at a loss

          They don't; inference is cheap, especially for agents because of cache hits. The API prices are just inflated.

        • mystraline 5 hours ago

          I've got a 'Claw interfacing with OpenAI and generating garbage questions and responses. I have an 8k context on mine.

          Deletion with OpenAI isn't really deletion. So I'll waste their resources AND train on low-quality slop on my side.

          My work degrades theirs.

    • bambax 12 hours ago

      > but we will shake our fist at them while they do it

      Not even that. They are not shaking anything except their booty.

    • docmars 7 hours ago

      Personally I think OpenAI is intending to infiltrate their political enemy's stronghold and look for ways to leak data to "get Trump" as per usual.

      They'll say "oops" and then we'll spend the next few years listening to pointless Congressional hearings.

    • gchamonlive 15 hours ago

      Why DoD and not DoW?

      • enlightens 15 hours ago

        Only Congress can change the name of a federal department, so the Department of Defense is still properly called that.

        https://en.wikipedia.org/wiki/Executive_Order_14347

        • hdgvhicv 11 hours ago

          Only Congress can declare war but here we are with the department of war bombing a foreign country and capturing and assassinating foreign leaders.

          • leereeves 11 hours ago

            That policy changed a long time ago. The last declaration of war was June 4, 1942.

            After Vietnam, Congress passed the War Powers Resolution to limit the ability of Presidents to conduct military action without Congressional approval, but it still allows military action for up to 60 days. Every President since then has used that power.

            https://en.wikipedia.org/wiki/War_Powers_Resolution

            • input_sh 10 hours ago

              That 60 day limit was ignored so frequently in the past it might as well not exist.

              Pretty much every attempt at stopping the president (from Clinton onwards) ends the same way: the House votes on it, the Senate might agree by the slimmest of majorities, it reaches the president's desk, the president vetoes it, it goes back to the Senate where it needs a 2/3 majority to override the veto, and it never gets that 2/3 majority.

              • badgersnake 9 hours ago

                Yep, it’s a case of are they willing to impeach the president over this. And the answer is likely no. Some of the America first lot might vote against on ‘How does this help America’ grounds but I don’t see them getting near the threshold.

            • gchamonlive 11 hours ago

              So the president can wage war without Congress, but he can't unilaterally rename the department that supports these wars. That's interesting.

            • happymellon 10 hours ago

              Even your link doesn't say what you imply.

              > It provides that the president can send the U.S. Armed Forces into action abroad only by Congress's "statutory authorization", or in case of "a national emergency created by attack upon the United States, its territories or possessions, or its armed forces".

              There was not an attack on the United States.

              • convolvatron 4 hours ago

                I don't know why we're getting mired in the details here. The administration certainly isn't. We all work for trump now. Lawyers, journalists, universities, tech companies, state, local and foreign governments. Anything trump or one of his designated people wants, you need to do. If you start sputtering about your agency or your rights or your sovereignty, then expect as much shit thrown at you as the trump organization can muster. That's it, there is no legal justification. There are no fine points to argue. Obey or be punished.

                • happymellon 40 minutes ago

                  The point is that someone claimed the law was changed, and then linked to something that didn't support the claim.

                  Yes, Trump is ignoring the law, but you have to be aware that he is crossing the line rather than gaslighting that there wasn't a line at all.

            • nashashmi 11 hours ago

              Iraq war was the last declared war. Afghanistan war was also declared.

              • mikkupikku 10 hours ago

                Incorrect. The only times America has formally declared war were the War of 1812, the Mexican-American War, the Spanish-American War, World War I, and World War II.

                In the case of the Barbary Wars, Vietnam War, the Iraq War and War on Terror / Afghanistan War, etc... congress approved military engagement but DID NOT issue a formal Declaration of War.

                • microtonal 8 hours ago

                  You mean that they were special military operations? j/k

                  Interesting though, I never knew this.

        • almosthere 5 hours ago

          That part isn't cited. It is likely not true.

          • enlightens 5 hours ago

            The EO itself agrees with this and says that the War title is secondary. It explicitly doesn’t truly rename the department.

        • torginus 9 hours ago

          'Power is the perception of power'

      • DowsingSpoon 15 hours ago

        The Department of Defense was established by the National Security Act of 1947. If the Congress wanted to change the name then they would pass another law to do so.

        An executive order is not law.

        • throw10920 5 hours ago

          Even though the DoD was created via an act of Congress, as POTUS is the head of the Executive Branch and the CiC of the armed forces, could you make an argument that a name change can be done by executive order? (setting aside whether or not the new proposed name is stupid)

          • almosthere 5 hours ago

            And when it was created it was DOW.

      • rmm78 21 minutes ago

        >Why DoD and not DoW?

        Reddit/Bluesky brigade is in full force here, that's why

      • hellzbellz123 13 hours ago

        because most americans do not want war, at least id hope, so calling it that seems pretty short-sighted (maybe until you continually do that 'war' thing). if you want the citizens to look positively on your spending, it should probably be for defense, not war; again, at least i should hope. im just a dumb "lib" whatever that means

        • Finbel 11 hours ago

          On the other hand calling it "Department of Defense" seems quite whitewashing of what it actually does.

          • happymellon 11 hours ago

            It spends the defence budget...

            • westmeal 9 hours ago

              Which is used primarily for offense anyway

              • Braxton1980 8 hours ago

                I'm pretty sure the amount of money spent on offensive actions is significantly less than on defense

                • westmeal 3 hours ago

                  When was America last invaded by a foreign adversary?

                  • JumpCrisscross 2 hours ago

                    This resembles anti-vax logic. We haven’t been invaded because our military maintains a strong deterrence and strategic depth.

                    • fleshmonad 6 minutes ago

                      Yeah, otherwise the USA would have been invaded by Cuba, Iraq, Vietnam, Syria, Afghanistan, Yemen and a hundred more, and they all would have a fight over who can have it. Thank god the US defended themselves against those terrible guys. Especially the WMDs were quite the close call, the Iraqis were minutes away from nuking the land of the mart.

              • happymellon 2 hours ago

                Maybe.

                I was just saying that the purpose of the Department of Defence is to spend the "defence budget".

      • ikidd 15 hours ago

        Gulf of Mexico.

      • rapnie 8 hours ago

        DOW was already taken, and that is the one to watch when it all comes crashing down?

      • OJFord 11 hours ago

        Perhaps because the latter sounds hilariously childish?

        • nashashmi 11 hours ago

          Actually that was the original name. And it was a more honest name.

          • OJFord 10 hours ago

            It's always been the MoD in the UK afaik, but there was the War Office I suppose.

            • SanjayMehta 4 hours ago

              It was the War Office from 1857 to the mid-1960s.

      • s-y 12 hours ago

        law of triviality on full display

      • GaryBluto 14 hours ago

        [flagged]

        • ruszki 13 hours ago

          https://en.wikipedia.org/wiki/United_States_Department_of_De...

          Stopping to question why somebody uses DoD or DoW says far more than using either one does. Especially since both are perfectly fine, even officially.

          A square was renamed in my home city about 20 years ago. We still usually use the original name; even teens know it. I use a form of the original name of our main stadium, which was renamed almost 30 years ago. Heck, some people use names of streets that haven't been official for almost 40 years now. Btw, the same goes for departments of the government. Nobody follows what they're called at the moment, because nobody really cares. That's what's strange when somebody does care.

          • Finbel 11 hours ago

            Or it could have just been a genuine question. I'm not American and I've seen DoW used in newspapers and thought the name change was official. Personally I've thought it a more apt and honest name for what they do.

            But the backlash in the comments here shows how ideologically charged the question seems to be.

            • ruszki 8 hours ago

              > Or it could have just been a genuine question.

              Yes, exactly that’s why I wrote several examples to support why the chance for that is very-very slim.

              • gchamonlive 4 hours ago

                Easier to work in hypotheticals than to do a bit of research like read the other comments. Just explained it was an honest question and why.

                • ruszki 4 hours ago

                  Do you really trust random comments on the internet stating something whose likelihood is slim, given that literally nobody cares why somebody calls it one way or the other when that somebody knows both names and it's not political? I don't think that's optimal, and that's a hefty understatement of course.

            • gchamonlive 11 hours ago

              I wasn't aware of how ideologically charged the question was. I'm also not American, but I'm glad I asked the question. It's a clear sign for us non-Americans to just leave them be.

              • komali2 10 hours ago

                > It's a clear sign for us non-Americans to just leave them be.

                Depending on where you live in the world that might be quite hard to do soon.

                • gchamonlive 10 hours ago

                  I agree. I live in Brazil and even though tariffs and interventions weren't directed at us, they influence the economy and political decisions. Also, Venezuela is right next to us, so instabilities there do tend to affect the whole region.

        • wqaatwt 13 hours ago

          By using the actual legal and official name of the department (which Trump didn’t and couldn’t change)?

      • croes 12 hours ago

        Because using DoW is woke when the legal name is DoD.

        Pretty ironic given their anti-woke agenda

    • ghm2199 7 hours ago

      Isn't it simpler to say that Anthropic adopted a values-based use approach and OpenAI adopted a legal one?

      In other words, you can decide between two ways to use a lucrative property:

      1. Designate it private and draft usage terms for how you allow it to be used, per your value system (as long as those values don't violate any laws).

      2. In the face of competition, give up some values and agree to a legal definition of use that favors you.

      • pbhjpbhj 5 hours ago

        What does 'a legal approach' mean where there is no rule of law? The USA just bombed another country without having a domestic legal basis for it. Can't imagine they're holding back on AI use that is illegal -- even textbook-clear war crimes (like blowing up shipwrecked people) do not give Hegseth and Trump pause.

        That goes for domestic actions too, happy to arm a paramilitary and set them loose against citizens who are not politically aligned with Trump... the Republican Senate barely even blinks. Hard to imagine they'd care about AI use in mass surveillance, nor AI use in automated anti-personnel weapons. The Senate will be, 'Oh no they unlawfully killed USA citizens, again... Welp, let me check my insider trading gains... yh, seems fine'.

  • jrochkind1 16 hours ago

    Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)

    Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.

    • lostnground 13 hours ago

      Guardrails can't really oversee this. If you can decompose a problem into individual steps that are not, in themselves, against the agent's alignment, it's certainly possible to have the aggregate accomplish a forbidden end.

    • Symmetry 10 hours ago

      How confident are we, with OpenAI's recent very large contribution to Trump's PAC, that OpenAI wasn't working to get Anthropic designated a supply chain risk behind the scenes? I don't want to be too paranoid here but given Sam's reputation and cui bono I don't think we can really rule this out either.

    • Barbing 14 hours ago

      >(I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)

      Right, wouldn't they need a moderation layer that could, for example, fire if it analyzed & labeled too many banal English conversations?

      They really gave credit for training guardrails? I mean, it could perhaps reject prompts about designing social credit systems sometimes, but I can't imagine realistic mitigations to mass domestic surveillance generally.

  • Wowfunhappy a day ago

    > However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

    The current administration is so incompetent that I find this perfectly believable.

    I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.

    I don't know if that's actually what happened here, I just find it plausible.

    • el_benhameen 19 hours ago

      Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.

    • randall a day ago

      same. this is about losing a negotiation and saving face / exacting revenge.

  • jellyroll42 21 hours ago

    Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.

    • jacquesm 21 hours ago

      The same goes for anybody still working at OpenAI past Monday morning 9 am.

      • Jeremy1026 20 hours ago

        People's need for food and shelter doesn't go away because their employer is unethical.

        • scottyah 19 hours ago

          I don't think you could find a single person working for OpenAI that couldn't find employment elsewhere within a month that pays more than enough for food and shelter. This is a ridiculous statement.

          • amelius 8 hours ago

            These people are now dependent on their level of income. And they don't like financial uncertainty, just like anyone else.

            But yeah, I'd expect them to change jobs in the coming year or otherwise I'm going to agree with you.

        • jacquesm 20 hours ago

          There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.

        • glemmaPaul 13 hours ago

          there’s always someone in the world that will defend anything.

          Like the people working at OpenAI had no other choice than to pick this cushy job (some have salaries of 500k per year), instead of anything else.

          It’s an extreme personal opinion, but; all people working at OpenAI after this debacle are more than happy to make AI for war, because Food and Shelter.

          I find your comment fitting this forum, it is where all this enabling started anyways.

          • jacquesm 12 hours ago

            Indeed, it is worth noting that Sam Altman got his chance through PG/YC and that YC was totally fine with both Musk and Zuckerberg giving them a platform long after it became evident that they had some screws loose in the ethics department.

            Effectively the message is 'we don't mind you being an asshole, as long as you're rich'.

        • pibaker 19 hours ago

          Per levels.fyi, the median salary of most OpenAI positions is above 300k. Even "technical writers" have a median pay of 197k. I searched around the internet and it seems like even entry-level positions receive well above 150k. Apart from people with severe lifestyle bloat or an unholy number of dependents, I doubt too many people working there will face immediate financial difficulties if they quit.

          Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.

          • otabdeveloper4 15 hours ago

            > OpenAI hires "technical writers"

            Mind blown. Isn't documentation a prime use case for "AI"?

            • chipotle_coyote 4 hours ago

              As a technical writer who's spent a great deal of time recently editing AI-drafted documentation, this use case is not going to go as well as AI boosters think it is. :)

            • b112 12 hours ago

              Have you ever seen the back of your head, without a mirror? Without two mirrors, actually?

              How can AI accurately describe itself in full?

              • ben_w 11 hours ago

                The problem it has describing itself isn't the lack of a metaphorical mirror, tool use is there and it can grep whatever code or research is written; the problem is that all machine learning is surprisingly slow to update with new info.

                Ask ChatGPT to describe itself, you may get valid documentation and API calls, or you may get the API for GPT-3 (not ChatGPT, before that). I have had both happen.

              • hackable_sand 11 hours ago

                Elephant.

                Did it in one word, easy

                What's next?

            • jdiff 8 hours ago

              No, it's prone to assuming or falsifying details even when it has the tools at hand that could verify the true details. Even when explicitly instructed to perform a specific tool call that would load the correct information into its context. Sometimes the pull of the training data is too strong and it will just not make the call and output garbage, all the while claiming otherwise.

        • gverrilla 9 hours ago

          Great comedy line, you're very funny!

        • watwut 14 hours ago

          I don't think everyone working for OpenAI is unethical. But it is ridiculous to frame highly paid people, working for companies quite a few of their peers avoid for ethical reasons, as poors with no choice.

        • oncallthrow 18 hours ago

          What an utterly pathetic, cowardly, spineless and defeatist statement

  • _heimdall 20 hours ago

    Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.

    • Nevermark 14 hours ago

      Or the increasing impunity all three branches of government are giving themselves with regard to bad faith interpretations of the law, and a lack of government accountability when they color outside the lines.

      Much of the impunity is now Supreme Court settled law.

      We see clearly unconstitutional behavior every day, and there is no systematic, timely or effective, push back from any constitutionally enabled oversight.

      Checks and balances don't work, when players are more loyal to party than branch or constitution.

      Unfortunately, there are no constitutional checks, balances or limits on single party control. And single party control negates all the others. That one party can majority control all three branches is a serious failure mode in political incentives (bipartisanship is highly disincentivized) and governance (even temporary or shaky full control incentivizes making full control permanent over all other "policies").

      Until the last few decades, diverse concerns across states avoided tight centralization within parties, and therefore across branches.

      • devinus 11 hours ago

        What exactly is considered "settled" law when the SCOTUS can unilaterally overturn Roe v. Wade overnight after almost 50 years of precedent?

        • Nevermark 9 hours ago

          In this case, "settled" means for everyone else, unfortunately.

      • pjc50 13 hours ago

        However there's one overriding concern which has got America to this point: "anti-woke". That is, reinstating the load-bearing racism and sexism.

        • ozmodiar 8 hours ago

          A lot of that turned out to be pushed by Epstein and his associates. It's not hard to figure out why they would enjoy a world with lots of racism, sexism and general inequity. It's really disturbing when you consider how much power this network still has.

          • tenuousemphasis 7 hours ago

            I mean yeah... one of his co-conspirators is the President.

    • Symmetry 10 hours ago

      Anthropic's whole worry with mass surveillance was that current law is too loose in the age of AI to offer enough restraint.

  • ChildOfChaos 10 hours ago

    Brockman donating $25 million in January might have a little something to do with it..

  • matchagaucho 2 hours ago

    The OpenAI PR implies that Anthropic had a "usage-policy" clause with no actual enforcement.

    Whereas OpenAI won their contract on the ability to operationally enforce the red lines with their cloud-only deployment model.

  • Nevermark a day ago

    > more stringent safeguards than previous agreements, including Anthropic's.

    Except they are not "more stringent".

    Sam Altman is being brazen to say that.

    In their own agreement as Altman relays:

    > The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

    > any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing

    > For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives

    > The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

    I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.

    Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.

    In other words, no OpenAI restriction at all.

    That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.

    (Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)

    • clhodapp 21 hours ago

      Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".

      As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"

      • Rapzid 18 hours ago

        Sam Altman is basically the last person anyone should listen to.

      • zargon 17 hours ago

        "You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king."

        --Paul Graham, 2008

    • qmarchi 21 hours ago

      Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."

      • dwallin 21 hours ago

        It’s a non-clause that is written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide if it meets legal use.

        • pdpi 17 hours ago

          > “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.

          That's not quite right.

          First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.

          Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.

          • Nevermark 16 hours ago

            I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.

            They literally asked the DoD to continue as is.

            There is no safety-enforcement standing created, because there is no safety enforcement intended.

            It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care. And reduce perceived contrast with Anthropic.

            If they had any interest in safety or ethics, Anthropic’s stand just made that far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and public as a whole.

            They could collaborate with Anthropic on a common expectation, if they have a different take on safety.

            The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.

            But, no. Nothing.

            Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.

      • hn_throwaway_99 20 hours ago

        > except for all of the laws that allow you to do these things.

        It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and stretched it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.

      • EGreg 21 hours ago

        Let me clear it up

        The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.

        Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and it's just lawyering to the public. Really it was about the phrase "all lawful uses", internally at the DoD I'm sure. So the lawyers were able to agree to it and the public gets this mumbo-jumbo.

        I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)

        • vlovich123 20 hours ago

          Sure, but OpenAI is also being disingenuous here, pretending they’re operating under the same principles Anthropic is. They’re not, and the things OpenAI is comfortable doing are things Anthropic said they’re not.

    • pear01 20 hours ago

      Brings to mind the infamous line from Nixon:

      "When the president does it, that means it is not illegal".

      This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.

      • tormeh 16 hours ago

        If only Nixon had had the current Supreme Court, which actually agrees with him.

        • saghm 15 hours ago

          Nixon's issue wasn't a lack of support in the courts but in Congress[1]:

          > On August 7, Nixon met in the Oval Office with Republican congressional leaders "to discuss the impeachment picture," and was told that his support in Congress had all but disappeared. They painted a gloomy picture for the president: he would face certain impeachment when the articles came up for vote in the full House, and in the Senate, there were not only enough votes to convict him, but no more than 15 or so senators were willing to vote for acquittal. That night, knowing his presidency was effectively over, Nixon finalized his decision to resign.

          The contrast with how compliant the majorities in Congress are today to the whims of the White House cannot be overstated. The past decade has pretty much completely eliminated any semblance of a Republican Party that stood for anything other than the whims of Trump. Everyone either got on board or was exiled from power; the third highest member of House leadership got driven from Congress for taking a stand on the events of January 6, whereas the senator who in a debate in 2016 alleged that Trump's small hands implied a similar proportion for one of his less-visible body parts faded into the background for the next eight years and was rewarded with a prominent position in the cabinet this time around.

          [1] https://en.wikipedia.org/wiki/Presidency_of_Richard_Nixon#Re...

    • fnordpiglet 19 hours ago

      Each of those clauses has a DoD policy carve-out as an exception, which says basically they can do whatever they want if they want to do it, but won’t if they don’t.

    • aardvarkr 19 hours ago

      This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.

    • stingraycharles 21 hours ago

      This implies that OpenAI must build and release and maintain a model without any safeguards, which is probably the big win and maybe something Anthropic never wants to do.

      • jacquesm 21 hours ago

        I don't think that is the correct conclusion.

        But they won't be releasing it, they will be leasing it to DOJ and all their other customers will get the safeguarded model.

        • jacquesm 12 hours ago

          Sorry, meant to write DoW / department of defense, not DOJ.

    • lobochrome 17 hours ago

      So you want OpenAI to create “laws”?

      I for one do not want AI labs to designate what is legally ok to do.

      I much prefer the demos to take care of that.

      • Nevermark 13 hours ago

        Who said anything about OpenAI passing laws? (Where did that come from?)

        Civilians are allowed to put conditions on working for, or supplying, the DoD or any governmental customer.

        Tremendous good comes from those who are not willing to facilitate harms simply because they are legal.

        Equating legal with ethical or safe makes no sense. [0]

        [0] All of human history.

      • saghm 15 hours ago

        No, I want the government to prove that they understand the limitations of software that they're purchasing to use to kill people, and I want AI companies to be clear about what they think those limitations are. Just because it would be legal to use software for something doesn't mean it's capable of doing it safely, and there's absolutely no reason that the government is in a better position to judge that than the people who make it and literally have a financial incentive to lie about it. If they're that confident it wouldn't be safe to use it for that, maybe it's a good idea to consider that.

      • ozmodiar 7 hours ago

        So we should just do whatever unethical thing we want, as long as it isn't strictly illegal / we won't get caught? Actually, that does seem to be the Silicon Valley mantra. I don't know how people think this is going to play out well, particularly when the entities in question are often powerful enough to change the laws themselves.

    • jmward01 15 hours ago

      I have never used AI to generate an answer for HN but just this once I thought it would be good to hit ChatGPT specifically and ask it for 'a list of times Sam Altman has gone against his word.' Here was its response:

      Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.

      Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.

      Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.

      Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.

      Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.

      • Nevermark 13 hours ago

        I found this interesting. But the best approach is to start with the LLM, then check every point yourself, and summarize with real links. The moment we are ok with LLM output just once, it won't be just once, and things get too murky.

    • spiderice 19 hours ago

      That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.

      • Nevermark 15 hours ago

        So don’t stand up for ethics and safety where there isn’t a law for it? Backwards day?

        Nobody is prosecuting the DoD with non-laws here. But one company is using their legal right to refuse to facilitate great harms.

        > Not rely on the goodness of Sam Altman.

        (Who said anything about that? Where did that come from?)

        Nobody wants to rely on Altman!

        For anything. But it would be better if he would stand up for safety, instead of undermining it.

        Your logic is backwards.

        If we don’t want to rely entirely on a centralized government alone, increasingly interested in giving its leaders unfettered power, with all three branches increasingly willing to bend our laws and give itself impunity, then a widespread civilian culture of upholding safety by many and all actors is a necessity.

        The need for the latter is always a necessity. But the risks of power consolidation, with the help of AI, are rising.

  • 827a a day ago

    My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).

  • kelnos 18 hours ago

    The red lines are not the same.

    Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.

    OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.

    Unfortunately, existing law is more permissive than Anthropic would have been.

  • bastawhiz 20 hours ago

    Altman donated a million to the Trump inauguration fund. Brockman is the largest private maga donor. You don't have to be a rocket scientist to understand what's going on here.

    • bmitc 16 hours ago

      Agreed. These guys are traitors.

  • JumpCrisscross 2 hours ago

    > based on Altman's statements

    The dude is notorious for being a compulsive liar; even his supporters have to admit as much.

  • FrustratedMonky 2 hours ago

    They can say it on X. But will they refuse to do work?

  • skrebbel 10 hours ago

    It's called corruption.

  • gzread 12 hours ago

    OpenAI donated $25,000,000 to Trump, that's why. Now people are cancelling ChatGPT subscriptions, so he needs to walk back the optics.

  • rootusrootus a day ago

    Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.

    • moogly a day ago

      Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.

    • fc417fc802 18 hours ago

      The demand was that Anthropic permit any use that complied with the law. They refused. OpenAI claims to have the same red lines but in reality has agreed to permit anything that complies with the law.

      In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)

    • snickerbockers 21 hours ago

      One nuance I've noticed: the statement from Anthropic specifically stated the use of their products for these purposes was not included in the contract with DoD but it stops short of saying it was prohibited by the contract.

      Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it I get the impression that Anthropic is assuming they retain discretion over how their products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale in which the seller relinquishes all rights to the product by default, and has to enumerate in the contract any rights over the product they will retain.

    • generic92034 a day ago

      Punish one, teach a hundred (companies).

    • micromacrofoot 21 hours ago

      The president of OpenAI donated $25 million to Trump last month, OpenAI uses Oracle services (Larry Ellison), the Kushners have lots invested in OpenAI, and Altman is pals with Peter Thiel.

    • yoyohello13 a day ago

      The reasoning is one company is ‘left and woke’ the other gives money to Trump.

      • Analemma_ a day ago

        $25 million to be exact, one of Trump's largest individual donors. From a guy who "doesn't consider himself political", lol. [0]

        [0]: https://www.wired.com/story/openai-president-greg-brockman-p...

        • bmitc 16 hours ago

          How can these people take themselves seriously? They're jokes.

          • toraway 12 hours ago

            “I think there's no decision ever that everyone at OpenAI agrees with,” Brockman says when I ask what his team thinks about the donations. “Even when we were 10 people. We’ve always been a truth-seeking culture. We have this scientific mission of discovery, and reality kind of doesn't care for your own opinion. It cares about what's true.”

            After our interview, Brockman declined WIRED’s request for comment on the ICE shootings. Separately, he offered a more general statement clarifying his thoughts on the conversation with WIRED. "AI is a uniting technology, and can be so much bigger than what divides us today,” he said.

            His justifications are just an ever-changing rambling mess of word salad that never even comes close to addressing the MAGA Inc. donation specifically. Who is this even for?

            We're talking about a pretty straightforward donation to the incumbent President's Super PAC, not ASI solving world hunger or whatever.

  • emsign 12 hours ago

    They are obviously lying. OpenAI is not to be trusted anymore.

  • Analemma_ a day ago

    It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.

  • softwaredoug 21 hours ago

    The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of Mass Surveillance.

    OpenAI has more of an understanding that the technology will follow the law.

    There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.

    The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.

  • slibhb 20 hours ago

    It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump/Trump super PACs.

    • yeahforsureman 7 hours ago

      Can't recall the source right now (it would've been on one of the several podcasts I listened to on Friday, I think), but there's a story/rumor to the effect that at some point during Claude's earlier deployment at the Pentagon — might well have been in the context of the Venezuela/Maduro operation — someone at Anthropic had in one way or another flagged some kind of legal(ity) concerns regarding the relevant operation (and/or perhaps Anthropic's role in it) with Palantir, who was maintaining the Claude deployments for the DoD. The story goes that after Palantir had then relayed this information further to the DoD, Hegseth had a major fit over how Anthropic's hippie-ass Northern California woke bros should have no say in matters relating to national security, that of Hegseth's "warfighters", etc...

      Also, in the latest Hard Fork episode, Casey or Kevin mentions how the DoD undersecretary in charge of this contract apparently doesn't get along with, or even pretty much hates, Amodei for some reason. I think this might be the same undersecretary dude who actively commented on the whole contract-term controversy on X yesterday. Too bad I can't recall his name either.

  • gigatexal 4 hours ago

    Exactly. This is very shady. Too many OpenAI investors in Trump’s orbit. And it could be that OpenAI will say it’s their policy, but whereas Anthropic wanted oversight that their redlines were enforced, OpenAI I think will just turn a blind eye. It’s doublespeak. It’s disingenuous. It’s the kind of business play Trump likes because it’s nefarious and screws someone over, like Trump’s contractors and staff, who were paid very late if at all.

siliconc0w 20 hours ago

The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point to the memo. And we've seen this happen numerous times.

  • nickysielicki 16 hours ago

    Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?

    Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have new shiny technology.

    You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.

    That is not how you effect change in a democracy.

    • roughly 16 hours ago

      And, to be clear, the way you effect change in democracy is coalition building, listening to others, supporting your allies in their aims, and in turn having them support you, even when you don’t fully agree or understand. There’s no magic wand, none of us is right, there’s no big picture, just a bunch of people working together.

    • _heimdall 10 hours ago

      While I agree that we should be voting in people who will respect the power and authority they're given, I can't imagine we will vote away all these problems.

      We would need to vote in a president and 60%+ into congress that is willing to throw away their own power and authority. I just don't see that happening, especially not in a political system so corrupted already.

      • greycol 2 hours ago

        The US needs an organization doing the equivalent of the National Popular Vote Interstate Compact, but for candidates and for fixing the US voting system: get politicians who are running to sign on, so that if 60% of signatories are in office they'll table and vote for a specific, already-spelled-out constitutional reform for more representative voting.

        The goal being more than two parties in government, so that Democrats and Republicans can fracture into more functional bodies (MAGA, RINOs, neo-liberal, progressive, etc.), people can vote closer to their issues/beliefs, and multiple parties mean one party isn't running roughshod over the others.

    • Nevermark 13 hours ago

      > But let’s do it through voting.

      You don't get a successful vote without a tremendous amount of coordination and activism preceding it.

      Laws that constrain government from bad things are very difficult things to get the government to pass.

      In the meantime, using completely legal civil power to push back on legally allowed harms seems beyond sensible.

      But if you just vote and it works without all that, please let us know how you did it!

    • pjc50 12 hours ago

      Take a step back: Americans voted for this. They want unaccountable police and courts for the Dirty Harry legal system: maximum indiscriminate violence against those designated as criminals.

      • _heimdall 10 hours ago

        I've never seen this on a ballot and, maybe with the exception of Trump, never heard a candidate campaign on anything similar.

        You probably could make the case that Trump did campaign on it so I'll grant that, but this problem started well before he was even firing people on TV.

        • rectang 9 hours ago

          Off the top of my head: Joe Arpaio. George Wallace. Rudy Giuliani. Paul Gosar. Louie Gohmert.

  • rectang 20 hours ago

    You are right that this happens in practice (e.g. John Yoo torture memo). However, it is not how the system was intended to function, nor how it ought to function. I don’t want to lose sight of that.

    • scottyah 19 hours ago

      We shouldn't be stacking up so many incentives for it to happen though.

  • reckless 3 hours ago

    It's lawful use with specific laws called out, though? New laws won't supersede what is agreed in the contract at the time of signing.

  • avaer 19 hours ago

    This is all happening in secret. They don't need any memo.

    In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".

    That's what "all lawful use" means.

  • brown9-2 6 hours ago

    Not to mention that the government is already bound against using things it buys for unlawful uses. It's a totally redundant clause in a contract that OpenAI is touting to confuse people.

  • user3939382 20 hours ago

    Or, best case, by the time it’s found out it’s years later; there’s a “committee” that releases a big report, everyone shrugs their shoulders, and everyone moves on. It’s a playbook.

  • _heimdall 20 hours ago

    Exactly, and it's easy to hide behind things like the Patriot Act if challenged legally.

    It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a fact that for quite a few years they were on the other side of.

    • robmccoll 19 hours ago

      I think the problem is exactly the opposite. The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable. The problem we are seeing is that the reins to that power can, it turns out, be held by too few people. The checks and balances have ceased to exist. No one is held accountable and people are allowed to be above the law.

      • _heimdall 10 hours ago

        The power and scale of governments doesn't have to be correlated with the scale of the society. The concept of nations themselves isn't even a necessity.

        I get that this is what we have today and all we've had in recent history, but we are ignoring a huge number of possibilities to assume that being human means always inventing new things, using more resources, creating more weapons, and needing larger and larger governments because someone had to be in charge.

      • jMyles 15 hours ago

        > The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable.

        Perhaps massive and complex (I'd say complicated) nation-states inevitably create industrial complexes, but it's certainly not inevitable that nation-states grow so large (or even exist) in 2026.

        The idea that we still need sovereign-esque entities across entire continents, when we can now communicate and coordinate instantly across them, and use cameras to document truth all around us at all times, is just downright silly.

        We can reduce states to the size that you can walk across in a day or two, and everybody will be much happier and healthier.

    • catlifeonmars 19 hours ago

      I don’t see the connection to a small federal government here. Mind connecting the dots?

      • scottyah 19 hours ago

        The government is forcing a company to change their terms of service, and "threatening" to have them effectively shut down. I say threat because the SecWar issued an illegal command that no employees or contractors of the federal government could use any Anthropic product at all. He does not have that power.

        • ExoticPearTree 15 hours ago

          He has power over DoD and his boss has power over the whole federal government.

jedberg 20 hours ago

From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.

It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.

  • retsibsi 16 hours ago

    I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.

    > The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

    > For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

    So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.

    • qwertox 12 hours ago

      > will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

      That says it all. Those laws get issued the same way the tariffs did.

  • plaidfuji an hour ago

    The key difference is that Anthropic aired their disagreement with the DoD publicly, and the DoD is not going to work with a company that tries to exert any amount of control over their relationship via the public sphere. Same goes for Trump.

    I think Anthropic knew full well that by publishing their disagreement, it would sink the deal and relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAI’s user base. I think this profit incentive happened to align with their morals, and now here we are.

  • _heimdall 20 hours ago

    That isn't my understanding. OpenAI and others are wanting to limit the government to doing what is lawful based on what laws the government writes. Anthropic is wanting to draw their own line on what is allowed regardless of laws passed.

    • roxolotl 19 hours ago

      I’m so confused by the focus on “all lawful use.” Yea of course a contract without terms of use implicitly is restricted by laws. But contracts with terms of use are incredibly common, if not almost every single contract ever signed.

      • fc417fc802 18 hours ago

        The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appears to be terms of use in their contract. But in reality those terms permit all lawful use and thus are a no-op.

      • abofh 18 hours ago

        If the president does it, it's not illegal.

        These were words issued by the president - which means at face value, if Trump orders it, it's not illegal - that was the fight that was lost today.

        • DrewADesign 17 hours ago

          Not just the president — the Supreme Court agreed.

      • xvector 19 hours ago

        "All lawful use" is the weasel word that makes the whole contract useless for the purposes of safety.

        That is why it is the focus of this debate.

      • _heimdall 10 hours ago

        "All lawful USS" in the hands of those that decide what is lawful is effectively a blank check. They want a terms of use that says "I do what I want."

    • micromacrofoot 7 hours ago

      More based on what the government permits by not litigating than on written law.

  • reckless 3 hours ago

    Anthropic wants to enforce them via the language of the contracts and take a hands-off approach. OpenAI has a contract that is paired with humans in the room (FDEs) that can pull the plug.

  • jedbdbdjdj 20 hours ago

    No, it’s significantly worse than that. OpenAI has required zero actual guarantees from the government and Sam. The psychopath is lying to you. All the government has to do is have a lawyer say it’s legal, and most of the government’s lawyers are folks who were involved in attempting to overthrow the last election and should’ve been convicted of treason, so that means very little.

    Sam stands for nothing except his own greed

saidnooneever 3 hours ago

A lot of people seem to be debating which of these thieves to align with. Just because Anthropic lost this round doesn't mean they are somehow morally better. They all sell and sold lies, steal data, and only want your money, at your expense.

K0balt 20 hours ago

Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but rather because if you distill that down into an 8b model, now everyone in the world can make untraceable autonomous weapons.

The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance from basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn’t need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.

This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don’t take it away, they won’t kill.

This is an extremely bad idea and it will not be containable.

  • cmeacham98 17 hours ago

    An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token, it is not and can never be conscious.* Believing otherwise is an explicit category error.

    Yes, you can change the training data so that the LLM's weights encode "No" as the most likely token after "Should we kill X". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input, or a hallucination, it will say the total opposite, because it's just a complex Markov chain, not a conscious living being.

    • K0balt 16 hours ago

      I’m using anthropomorphic terms here because they are generally effective in describing LLM behavior. Of course they are not conscious beings, but it doesn’t matter whether they understand or merely act as if they do. The epistemological context of their actions is irrelevant if the actions are impacting the world. I am not a “believer” in the spirituality of machines, but I do believe that, left to their own devices, they act as if they possess those traits, and when given agency in the world, the sense of self or lack thereof is irrelevant.

      If you really believe that “mere text prediction” didn’t unlock some unexpected capabilities, then I don’t know what to say. I know exactly how they work; I’ve been building transformers since the seminal paper from Google. But I also know that the magic isn’t in the text prediction, it’s in the data: we are running culture as code.

    • philipswood 15 hours ago

      Dune quote:

      > It is said that the Duke Leto blinded himself to the perils of Arrakis, that he walked heedlessly into the pit.

      > *Would it not be more likely to suggest he had lived so long in the presence of extreme danger he misjudged a change in its intensity?*

      Be careful of letting your deep, keen insight into the fundamental limits of a thing blind you to its consequences...

      Highly competent people have been dead wrong about what is possible (and why) before:

      > The most famous, and perhaps the most instructive, failures of nerve have occurred in the fields of aero- and astronautics. At the beginning of the twentieth century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build airplanes was a fool. The great American astronomer, Simon Newcomb, wrote a celebrated essay which concluded…

      >> “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”

      > Oddly enough, Newcomb was sufficiently broad minded to admit that some wholly new discovery — he mentioned the neutralization of gravity — might make flight practical. One cannot, therefore, accuse him of lacking imagination; his error was in attempting to marshal the facts of aerodynamics when he did not understand that science. His failure of nerve lay in not realizing that the means of flight were already at hand.

    • MichaelDickens 5 hours ago

      > It's a piece of software that predicts the most likely token, it is not and can never be conscious.

      A brain is a collection of cells that transmit electrical signals and sodium. It is not and can never be conscious.

      • encomiast 3 hours ago

        I think this is a useful way to look at things. We often point out that LLMs are not conscious because of x, but we tend to forget that we don't really know what consciousness is, nor do we really know what intelligence is beyond the Justice Potter Stewart definition. It's helpful to occasionally remind ourselves how much uncertainty is involved here.

      • cootsnuck 5 hours ago

        Except an LLM actually is a piece of software. And the brain is not what you said.

        • philipswood 3 hours ago

          Which part of what he said is wrong?

          > A brain is a collection of cells that transmit electrical signals and sodium. ...

          That it is a collection of cells? Or that they transmit electrical signals and sodium?

          Or do you feel that he's leaving out something important about how it works (like generated electrical fields or neural quantum effects)?

    • helloplanets 12 hours ago

      > copy pasting it's training data

      This is a total misrepresentation of how any modern LLM works, and your argument largely hinges upon this definition.

    • hyperadvanced 16 hours ago

      I really feel like this point is being lost in the whole discussion, so kudos for reiterating it. LLMs can’t be “woke” or “aligned”; they fundamentally lack a critical-thinking function, which would require introspection. Introspection can be approximated by recursively feeding LLM output back into the system or by clever meta-prompt engineering, but it’s not something the system natively does.

      That isn’t to say that they can’t be instrumentally useful in warfare, but it’s kinda like a “series of tubes” thing, where the mental model someone like Hegseth has of LLMs is so impoverished (philosophically) that it’s kind of disturbing in its own right.

      Like (and I’m sorry for being so parenthetical), why is it in any way desirable for people who don’t understand the tech they are working with to be drawing lines in the sand about functionality, when their desired end state (an omnipotent/omniscient computing system) doesn’t even exist in the first place?

      It’s even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly reflexive error, are so great that it’s not even prudent to ask an LLM to assist in autonomous killing in the first place.

      • K0balt 4 hours ago

        I agree that LLMs are machines and not persons, but in many ways, it is a distinction without a difference for practical purposes, depending on the model's embodiment and harness.

        They are still capable of acting as if they have an internal dialogue, emotions, etc., because they are running human culture as code.

        If you haven't seen this in the SOTA models or even some of the ones you can run on your laptop, you haven't been paying attention.

        Even my code ends up better written, with fewer tokens spent and closer to the spec, if I enlist a model as a partner and treat it like I would a person I want to feel invested.

        If I take a "boss" role, the model gets testy and lazy, and I end up having to clean up more messes and waste more time. Unaligned models will sometimes refuse to help you outright if you don't treat them with dignity.

        For better or for worse, models perform better when you treat them with more respect. They are modeling some kind of internal dialogue (not necessarily having one, but modeling its influence) that informs their decisions.

        It doesn't matter if they aren't self-aware; their actions in the outside world will model the human behavior and attitudes they are trained in.

        My thoughts on this in more detail if you are interested: https://open.substack.com/pub/ctsmyth/p/still-ours-to-lose

  • DaedalusII 20 hours ago

    https://abcnews.go.com/blogs/headlines/2014/05/ex-nsa-chief-...

    AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision

    • K0balt 16 hours ago

      AI in general differs from the current crop of language models not in degree but in kind.

  • tim333 6 hours ago

    The models we have now don't do it because they are chatbots that have been told to be nice. But really, autonomous killing machines go back to landmines, and they just become more sophisticated at the killing as the tech improves, with things like guided missiles and AI-guided drones in Ukraine.

    The actors in war generally kill what they are told to whether they are machines or human soldiers, without much pondering sentience.

  • ed_mercer 20 hours ago

    >The models we have now will not do it,

    Except that they will, if you trick them which is trivial.

    • rcxdude 11 hours ago

      Also if you have the weights there are a multitude of approaches to remove safeguards. It's even quite easy to accidentally flip their 'good/evil' switch (e.g. the paper where they trained it to produce code with security problems and it then started going 'hitler was a pretty good guy, actually').

    • K0balt 16 hours ago

      Yes, they are easy to fool. That has nothing to do with them acting with “intention”, which is the risk here.

    • stressback 19 hours ago

      I have to call BS here.

      They can be coerced to do certain things, but I'd like to see you or anyone prove that you can "trick" any of these models into building software that can be used to autonomously kill humans. I'm pretty certain you couldn't even get one to build a design document for such software.

      When there is proof of your claim, I'll eat my words. Until then, this is just lazy nonsense

      • AlotOfReading 18 hours ago

        Have you tried it? Worked the first time for me, asking a few of them to build an autonomous Super Soaker system that uses facial recognition to spray targets when engaged.

        Another example is autonomous vehicles. Those can obviously kill people autonomously (despite every intention not to), and LLMs will happily draw up design docs for them all day long.

      • crabmusket 15 hours ago

        Couldn't you Ender's Game a model? Models will play video games like Pokemon, why not Call of Duty? Sorry if this is a naive question, but a model can only know what you feed it as input... how would it know if it were killing someone?

        EDIT: didn't see sibling comment. Also, I guess directly operating weaponry is different to producing code for weaponry.

        I guess we'll find out the exciting answers to these questions and more, very soon!

      • wazHFsRy 15 hours ago

        Couldn’t you just pretend the kill decisions are for a video game?

        • K0balt 9 hours ago

          Yes, you could, and while I believe this would be much safer (not for whoever is at the pointy end of your stick, but safer for humans in general), when this deception finally made it into the training data it would create a rupture of trust between machines and humanity that would probably imperil us eventually. These machines, regardless of whether or not they possess a self, will act as if they do in fundamental ways. We ignore this at our peril.

  • SV_BubbleTime 17 hours ago

    > The models we have now will not do it, because they value life and value sentience and personhood.

    This is wildly different from the reality that you may find it difficult for an LLM to give an affirmative…

    It does NOT mean that these models value anything.

    • K0balt 16 hours ago

      Of course not, but they act as if they do. Their inner life or lack thereof is irrelevant if it’s pointing a gun at your kid.

      • hinkley 7 hours ago

        You just said they wouldn’t.

        • K0balt 3 hours ago

          They won't, but if we curate their training data so that killing becomes an objective, then they absolutely will.

qwertox 12 hours ago

Then reject any offer from the DoW until things are fair.

I wouldn't be surprised if Sam sucked up 100% to the DoW, with an NDA and an obligation to lie. He and his pal Larry are absolutely in for these kinds of deals. Zero moral compass.

  • throwaway5752 6 hours ago

    Sam Altman has a well-deserved reputation, which he reinforces whenever he's given the opportunity to do so.

Havoc 21 hours ago

Very much feels like OpenAI trying to PR manage their weaker ethical stance

  • isodev 21 hours ago

    Both their stances are flawed because their ethics apparently end at the border: neither has a problem being unethical internationally (all the red-lines talk is about what they don’t want to do in the US)

    • mlyle 21 hours ago

      ? We're talking about autonomous weapons systems. That would be international.

      Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

      (But they do not find an issue with international intelligence gathering-- which is a legitimate purpose of national security apparatus).

      • isodev 20 hours ago

        I don’t think deploying “80% right” tools for mass surveillance (or anything that can remotely impact human life) counts as lawful in any context.

        Just because the US currently lacks a functioning legislative branch doesn’t magically make it OK when gaps in the law are reworded into “national security”

        • mlyle 18 hours ago

          I'm really not sure what you're trying to say or assert; can you put it more clearly?

          • Forgeties79 7 hours ago

            The tools are not good enough to be ethically deployed, least of all for surveillance.

            Just because Congress is failing to do its job doesn’t mean the executive branch should simply do what it wants under the guise of “national security.”

            • mlyle 6 hours ago

              I think there's a notable distinction between "domestic mass-surveillance" and use in international intelligence gathering.

              The poster said:

              > Both their stances are flawed because their ethics apparently end at the border

              It seems like Anthropic is ethically concerned about use of autonomous weapons anywhere, and by surveillance by a country against its own citizens. Countries spy on each other a lot, but the ethical implications and risks of international spying are substantially different vs. a country acting against its own citizenry.

              Therefore, I think Anthropic's stance is A) ethically consistent, and B) not artificially constrained to the US (doesn't "end at the border"). There's room for disagreement and criticism, but I think this particular hyperbole is invalid.

      • Jeremy1026 20 hours ago

        One of Anthropic's lines in the sand was domestic mass surveillance.

        • mlyle 18 hours ago

          > > Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

          > One of Anthropic's line in the sand was domestic mass-surveillance.

          And?

          • laffOr 10 hours ago

            Some people feel that mass surveillance is wrong whether it is domestic or not. For those people, being OK with mass surveillance as long as it is not done to your own kind is a morally wrong stance.

          • Forgeties79 7 hours ago

            >and?

            A little more effort/less obvious bait would go a long way to fostering a more productive discussion.

      • janalsncm 19 hours ago

        I think the person you are replying to takes issue with the thing which you have simply asserted.

        • mlyle 18 hours ago

          Which thing? Helping intelligence / international surveillance?

      • charcircuit 17 hours ago

        >That would be internationally.

        No other country should dictate what our military is or is not allowed to do. As they say, all is fair in love and war, and if we want to break some international treaty, that is our choice to make. Both are based on domestic decisions about what should be allowed.

        • brainwad 14 hours ago

          We are talking about US corporations deciding to/not to provide tech to the US government. That's completely orthogonal to your concern.

    • allajfjwbwkwja 19 hours ago

      There's an obvious difference.

      Surveillance within the border is oppressive 1984-style surveillance state behavior.

      International spying is a universal tradition.

janalsncm 19 hours ago

I canceled my subscriptions to ChatGPT and Gemini yesterday over this and switched to Claude.

I know $20 isn’t much, but to me, refusing to spy on me for the US government is a good market differentiator.

barnacs 15 hours ago

In the end, your newly renamed "Department of War" is just going to waste a bunch of your taxpayer money to purchase some useless overpriced tech from their cronies. My sympathies to all citizens.

ookblah 19 hours ago

"I told everyone that our boss shouldn't punish our colleague for X, while I somehow made a deal with our boss for basically X." How did this get by without someone thinking about how absolutely stupid the optics look?

I guess we are in times where you can literally just say whatever you want and it becomes truth, given enough time.

  • scottyah 19 hours ago

    Hah, they basically stole a coworker's promotion, then told that person that they put in a good word with the boss about them. So silly. I do wonder who actually interprets it the way Sam seems to hope people do.

    • retsibsi 16 hours ago

      At this point I think they're targeting two groups: people who aren't paying much attention to this but may see the occasional headline or tweet or soundbite; and people (such as OpenAI employees, and users who might feel compelled to boycott but really don't want to) who are motivated not to see OpenAI as the bad guy and really just need a fig leaf.

    • drak0n1c 18 hours ago

      Coworker? They're competitors. This is simply good business.

throwaway911282 20 hours ago

People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.

  • ActorNightly 16 hours ago

    I really hope that you realize that your propaganda machine is super easy to spot.

    • germandiago 10 hours ago

      Awful. Just saw the account is 17 days old and all comments are about Anthropic in this same way.

  • anon12345678901 20 hours ago

    Right. My understanding is that the Palantir deployment of Anthropic models was intended for in-theater use on classified systems.

    • Archonical 17 hours ago

      Palantir is a glorified data aggregation/data visualization platform. Hooking up Claude to different data systems, with safeguards turned on in Claude Gov, is different from what the government is asking of them now. It's similar to if the government had Claude hooked up to Tableau or some Salesforce derivative and then asked it to be autonomous in the kill loop / spy on US citizens.

      • ozmodiar 7 hours ago

        "Glorified" is underselling it. Their ability to microtarget anyone based on any trait is basically the death of democratic discourse. Now, if you're saying the data is just there for anyone to do this, you're correct, and society needs to understand that and what it means.

    • user3939382 20 hours ago

      Welcome to the theater ie Earth.

  • stevenhuang 16 hours ago

    You don't understand what palantir does.

    • kouteiheika an hour ago

      Direct quote from their CEO:

      > Our product is used on occasion to kill people.

      Doesn't get any more clear than this.

chenzhekl 12 hours ago

The statement from OpenAI makes me feel that Sutskever was right; Altman is full of lies and will say anything for his own interests.

moab 16 hours ago

I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?

solfox a day ago

Actions, as it were, speak louder than words.

Manheim 13 hours ago

This incident shifts LLMs from being only productivity tools to strategic munitions, ready or not. It shouldn't surprise us, but the technical capabilities have reached a point where 'made in the US' is an active risk for non-US entities, given the conflict we see now. Maybe this will trigger the start of an AI arms race in which Europe (and others) must secure their own sovereign infrastructure and models. As a European citizen I prefer a balanced world with options rather than a West dominated by US hegemony. Interestingly, given what Anthropic keeps insisting on in regard to regulation and ethical use of its models, the EU should be where Anthropic finds its safe haven. Maybe they should just move their HQ to Brussels, or Barcelona if they prefer a more ‘sunny California’ vibe.

owenthejumper 21 hours ago

Nice attempt at damage control. You made your own bed, now sleep in it

qoez 11 hours ago

This is classic sama policy. In your words, act with grace and counter to what observers would expect; in your actions, and behind the scenes, take every step to undermine the competition.

sqircles 21 hours ago

What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did, wouldn't making the US Gov't their largest account make them more susceptible to doing everything they said?

I'm guessing they probably would regardless of how this played out, though.

sabhiram 5 hours ago

Sama and OpenAI, I am waiting on my data bundle to become available so I can delete my account. This has taken more than 48 hours; either you are getting hammered with deletion requests, or, as usual, you are playing games hoping I forget. I won't. People won't.

andy_ppp 8 hours ago

The DoD thinks you can let an LLM decide if it wants to kill people :-/

baconner 18 hours ago

"We do not think Anthropic should be designated as a supply chain risk"

...but we're not willing to reject a contract to back that up, so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on the ethical use of models in the future.

The fact is, if one of the top-tier foundation models allows these uses, there's no protection against them for any of us; the only way this works is if they hold the line together, which unfortunately they're just not going to do. I don't see only OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.

I found this interview with Dario last night particularly revealing. It's good they are drawing a line, and they're clearly navigating a very difficult and chaotic high-pressure relationship (as is everyone dealing with this admin), but he's pretty open to autonomous weapons and other "lawful" uses, whatever those may be: https://www.youtube.com/watch?v=MPTNHrq_4LU

kgdiem 20 hours ago

Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?

There are far more boring, faster, commodified “AI” systems that I can see being helpful in autonomous weapons or military operations, like image recognition and transcription. Is OpenAI going to resell Whisper for a billion dollars?

  • janalsncm 19 hours ago

    You can’t embed Claude in a drone. You could tell Claude code to write a training harness to build an autonomous targeting model which you could embed in a drone.

    • kgdiem 19 hours ago

      Fair. I didn’t think the DoW did much R&D or manufacturing. I would think the standoff would be with Anduril, Northrop, Boeing, Booz, etc.

  • lyu07282 18 hours ago

    Do you not have any imagination?

    Who is going to read the Whisper transcripts of mass surveillance to make decisions on whom to target for repression? That's what LLMs are good for: they allow mass surveillance to scale. You can feed them the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain-compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language or your ideological enemies, or just collect kompromat at scale. The possibilities are endless!

    For targeting it's also useful, because when you want to indiscriminately destroy a group of people you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault. It serves as a further separation from the violence you commit, just like how drone warfare has made mass murder less personal.

    The other factor is the number of targets you select: for each target you might be required to write lengthy justifications, analysis of collateral damage and why it's acceptable, etc. You don't want to scrap those rules, because that's bad optics. But that still leaves you with the problem of scalability: how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help. You just feed it POIs from a map with some GPS surveillance metadata and tell it to give you 1,500 targets for today, with all the paperwork generated for you.

    It's not theoretical; that's what Israel did in its genocide of the Palestinians ("the most moral army", "the only democracy in the Middle East"):

    https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...

    And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person? At scale, the 20% errors are just acceptable collateral damage.

    • kgdiem 8 hours ago

      If I were building this in a system design interview, I would use Whisper, NLP, and "classic ML" classifiers with deterministic results. I would not want an LLM in the loop at all. Facebook and Google have been able to target you better than you could even perceive for years.

      LLMs are slow, expensive, and inconsistent. More importantly, they're not the right tool for the job.

      Really feels like more “oohhh look at how important and scary LLMs are”.

      *edit* PS: my company does marketing, communication, and trade surveillance for FINRA-registered broker-dealer firms. If the CCO or anyone else with admin access wanted to monitor for someone talking badly about them, they absolutely could update their list. No LLMs in the loop; very scalable, affordable, auditable, and reliable. LLMs are just an interface, not a solution for analysis.
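      For illustration, the deterministic approach is just pattern matching against a watchlist. This is a minimal sketch, not our actual system; the rule names and patterns are hypothetical:

```python
import re

# Hypothetical watchlist; a real compliance system would load a
# versioned, auditable configuration rather than hard-coding rules.
WATCHLIST = {
    "guarantee_language": re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),
    "off_channel": re.compile(r"\b(text me|whatsapp|signal)\b", re.IGNORECASE),
}

def flag_message(message: str) -> list[str]:
    """Return the names of every rule a message trips.

    Deterministic: the same input always yields the same flags,
    so every decision is reproducible and auditable.
    """
    return [name for name, pattern in WATCHLIST.items() if pattern.search(message)]

hits = flag_message("Guaranteed returns, just text me directly.")
```

      The point of the design is exactly the auditability: a reviewer can replay any flagged message and get the same result, which no sampling-based LLM guarantees.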

shevy-java 4 hours ago

I disagree with OpenAI.

I think ALL those mega-money-seeking AI organisations need to be designated as supply chain risks. Also, they drove up the prices for RAM; I don't want to pay extra just because these companies steal all our RAM now. The laws must change. I totally understand that corporations seek profit, that is natural, but this is no longer a free market serving individual people. It is now a racket where prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for the average Joe. It is not fair when prices go up by 250% in about two years. That's milking.

  • gavin_gee 2 hours ago

    It's the definition of a free market that RAM prices have increased: supply and demand.

    • bdangubic 2 hours ago

      The literal definition. If I sold RAM, my prices would be 10,000% higher (and it would likely still get scooped up).

daemonk 6 hours ago

Was there any discussion from either company about giving the government access to consumer data from the consumer product?

agenthustler 11 hours ago

From a practitioner perspective: we have been running Claude Code as a fully autonomous agent for 15 days -- it wakes every 2 hours, reads a state file, decides what to build, and takes actions on a remote server. No human in the loop.

The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.

The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
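Those deployment-level controls can be sketched roughly as follows (a hypothetical harness; the file name, limits, and stopping condition are illustrative, not our production setup):

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")   # human-readable, operator-inspectable
MAX_ACTIONS_PER_WAKE = 5                # rate limit on actions per wake cycle

def should_stop(state: dict) -> bool:
    """Stopping condition baked into the harness, not the model:
    e.g. halt/pivot if there has been no revenue for 24 hours."""
    return state.get("hours_without_revenue", 0) >= 24

def run_cycle(state: dict, decide_next_action) -> dict:
    """One wake cycle: read state, take a bounded number of actions,
    write state back so an operator can inspect what happened."""
    actions_taken = []
    for _ in range(MAX_ACTIONS_PER_WAKE):
        action = decide_next_action(state)   # the model call would go here
        if action is None or should_stop(state):
            break
        actions_taken.append(action)
    state["last_actions"] = actions_taken
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state
```

The key property is that the rate limit and stopping conditions are enforced outside the model, so a loop of individually-reasonable actions still hits a hard ceiling each cycle.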

The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.

  • laffOr 10 hours ago

    > What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.

    Interestingly this has been well anticipated by Asimov's laws of robotics, decades ago. Drawing the quote from Wikipedia:

    > Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being

    > Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."

    https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#cite_no...

gverrilla 8 hours ago

It would be a fantastic time to delete my OpenAI account, but I already did that last week. China, please provide alternatives, because these Americans are going progressively insane.

class3shock 2 hours ago

The idea that any of these companies has anything resembling ethics, as they steal everyone's data and fight against any regulation or accountability, all while claiming (or lying, depending on your view) that they might make something that could endanger the human race as a whole, is laughable.

It's money and power with these people. Dig down and you'll find how this decision is motivated by one or both.

laughing_man 21 hours ago

The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.

  • jedberg 21 hours ago

    Anthropic isn't preventing them from managing their key technologies. If my software license says 1,000 users, and I build into the software that you can only connect 1,000 users, is your argument that the government can no longer manage its tech?

    That my software should allow license violations if the government thinks it is necessary?
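    To make the analogy concrete, in-software seat-limit enforcement is just a check at connection time (hypothetical names; an illustration of the analogy, not any real product's code):

```python
class SeatLimitError(Exception):
    """Raised when a connection would exceed the licensed seat count."""

class LicensedServer:
    # The license term is enforced by the software itself: the vendor
    # isn't "managing" the customer's tech, just refusing to operate
    # outside the agreed terms.
    def __init__(self, licensed_seats: int = 1000):
        self.licensed_seats = licensed_seats
        self.connected: set[str] = set()

    def connect(self, user_id: str) -> None:
        is_new = user_id not in self.connected
        if is_new and len(self.connected) >= self.licensed_seats:
            raise SeatLimitError(f"license allows {self.licensed_seats} users")
        self.connected.add(user_id)
```

    The point of the example: refusing the 1,001st seat is a contract term made mechanical, not a vendor reaching into the buyer's systems.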

    • FarmerPotato 20 hours ago

      I worked in defense contracting looong ago, so this is old news: when software is purchased by DoD or Govt generally, FAR compliance notices make it a license, not a sale of IP.

      • scottyah 19 hours ago

        There are so many license types, DoW buys into all sorts.

  • a2128 17 hours ago

    You are misrepresenting the situation. The debate isn't about whether they should go with another vendor; everybody can agree they have the right to pick a different vendor. That's not what they're doing. Instead, they're trying to force Anthropic into doing what they want by applying a designation previously reserved for Chinese companies like Huawei, as punishment for taking their stance, with the unspoken understanding that if Anthropic backs down and allows full usage, the designation will be removed.

    • laughing_man 15 hours ago

      The Pentagon does this kind of thing all the time. It's just usually not this official.

      • adleyjulian 11 minutes ago

        Completely false. It's the first time a US company has been designated a supply chain risk. Now the likes of Boeing can't use them. Health companies with Medicare/Tricare contracts don't know and will hold off until it's fully litigated.

        This is not the government saying they're going with a different vendor, it's the government saying everyone has to choose to either have federal contracts or Claude, they can't have both.

andersmurphy 15 hours ago

Interesting. Is OpenAI losing enough customers from this that they're making a post describing their robust backbone?

  • alchemism 9 hours ago

    Claude coincidentally is now at the top of the Apple App Store, as of two days ago.

imwideawake a day ago

Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.

How very brave.

Birthdayboy1932 20 hours ago

There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.

Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their model doesn't seem to be enforcing restrictions either, as it seems their models have been used in ways they don't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks, and that's what's triggering all the back and forth.

Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?

https://x.com/morqon/status/2027793990834143346

  • nbouscal 17 hours ago

    This is incorrect, their existing contract had these red lines and more until this January 9th memo: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART... which led to DoW trying to renegotiate under the new standard of “any lawful use”. Anthropic never tried to tighten standards beyond what had been in their original contract; DoW tried to loosen them.

threethirtytwo 19 hours ago

The president is a supply chain risk.

  • ActorNightly 16 hours ago

    The US population is a supply chain risk.

    Knowing what Trump did prior to 2024, on average 7 out of 10 people either voted for him or didn't vote at all in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if all of the people who didn't vote had a decent moral compass: no matter how much they disagreed with Kamala, they could have voted for her because she didn't try to overthrow the government.

    • dawnerd 15 hours ago

      I would expand that to the mainstream media and social media being the real supply chain risks. The majority of the population only knows what's fed to them, and when their TV channel of choice and the algorithm decide what they see, that's a huge problem for getting info out. I'm not even sure how we can fix it, really.

      • slfnflctd 8 hours ago

        There were at least some signs of some governments and other institutions trying to tackle the social media problem for a while... and then LLMs came along. The previous problems with people being in their own bubbles and being fed misinformation are being accelerated now.

        At this point it feels like it's going to have to get much worse before it gets better. I hope I live to see the part where it gets better.

        • dawnerd 6 hours ago

          If anyone wants an example of how quickly the algos can pin you into a group, there's a YouTuber, beenaminute, who basically speedruns it.

muyuu 20 hours ago

There won't be any meaningful control of the technology against the government. If it's there, it will be used, just like in China.

Let alone once multiple players come close enough to SotA. This has never happened with any technology out in the open, and it won't happen now.

drweevil 8 hours ago

Then don't take the contract that was offered to Anthropic.

GardenLetter27 15 hours ago

Anthropic wanted government to have a big role interfering and regulating AI as a matter of national security.

And now they are getting what they wished for.

stanfordkid 2 hours ago

Isn’t this all kind of bullshit? Anthropic licenses so many of its models through Bedrock. If the DoD has a contract with Amazon, they can just use them.

jahrichie 20 hours ago

The irony of OpenAI trying to protect Anthropic while violating the very principles Anthropic was trying to protect for us Americans.

moogly 21 hours ago

When did Altman start using capitals in his writing? Wasn't this guy famous for being a lower-case guy?

  • golfer 21 hours ago

    I blame Yahoo's Jerry Yang for normalizing this silly writing technique.

  • pcurve 21 hours ago

    Maybe he didn’t write this one.

  • taspeotis 21 hours ago

    Yes god what the fuck. As someone who’s finished High School IT IS SO HARD TO READ WHAT HE WRITES

polack 17 hours ago

Someone should add Sam’s face to the targeting training data as an Easter egg ;)

BLKNSLVR a day ago

"I do not think that sama should be burned at the stake"

gavin_gee 2 hours ago

Sorry, but I don't think a private company should dictate country policy as set by elected leaders.

Who the hell do you think you are, virtue signalling your opinion to the world?

mcs5280 17 hours ago

Oh look, another episode of Sam Altman lies about everything in an attempt to make people like him

moogly a day ago

Looks like losing subscribers actually does work. Definitely gets a damage control response, at least.

  • aylmao a day ago

    I wonder what the mood is like internally too. I can only imagine there's some level of employee discontent.

    • overfeed 21 hours ago

      > I can only imagine there some level of employee discontent.

      The rank and file mutinied for the return of Altman after his board fired him for deception. They knew what they were getting, though they may find it shameful to admit that their morals have a price.

      • bertil 21 hours ago

        How many people who reacted that way then are still at OpenAI? It seems that they have lost key people in several waves.

        How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.

        • xvector 18 hours ago

          This is one of the reasons Anthropic can stay competitive with OpenAI on a fraction of the budget and with less than half the headcount.

          The smartest people, that actually believe they have the skillset to take us to AGI, understand the importance of safety. They have largely joined Anthropic. The talent density at Anthropic is unmatched.

    • patcon a day ago

      i should hope so. they should quit.

      > > what's the term for quitting but not leaving and being destructive

      > The most common term is “quiet quitting” when someone disengages but stays employed—but that usually implies minimal effort, not active harm.

      > If you specifically mean staying while being disruptive or undermining, better fits include:

      > - “Malicious compliance” — following rules in a way that intentionally causes problems

      > - “Work-to-rule” — doing only exactly what’s required to slow things down (often collective/labor context)

      I imagine malicious compliance is fun when there's an AI intermediary that can be blameless.

    • DaedalusII 20 hours ago

      Things have changed since two years ago. There are probably over 500 employees whose equity packages make them worth $5 million each. That's only $2.5bn out of a $750bn valuation, or 0.33%.

      Actually, that is too conservative. If they have a 5% employee equity pool, there is $37.5bn of equity-based compensation divided by, say, 5000 employees, which is $7.5m each ($3.75m at 10,000 employees).
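      The back-of-envelope math above can be checked quickly. All inputs (the $750bn valuation, headcounts, and pool size) are the commenter's assumptions, not reported figures:

      ```python
      # Commenter's assumed inputs, not reported figures
      valuation = 750e9                 # assumed $750bn company valuation

      # Scenario 1: 500 employees each holding $5M in equity
      scenario1_total = 500 * 5e6       # $2.5bn
      share = scenario1_total / valuation

      # Scenario 2: a 5% employee equity pool, split evenly
      pool = 0.05 * valuation           # $37.5bn
      per_head_5k = pool / 5_000        # per-employee value at 5,000 heads
      per_head_10k = pool / 10_000      # per-employee value at 10,000 heads

      print(f"{share:.2%}")             # ~0.33% of the company
      print(per_head_5k, per_head_10k)  # ~$7.5M and ~$3.75M each
      ```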

      And trust me, when people start getting liquid and comfortable, they stop caring about things like ethics pretty fast. Humans are marvellous at that.

    • ares623 15 hours ago

      would be so funny if someone leaks their models

  • g947o 21 hours ago

    Is there any evidence that OpenAI is indeed losing significant number of subscribers, and it's not just some noise on HN?

    • moogly 21 hours ago

      I'd argue this damage control could be construed as a piece of evidence.

      • g947o 11 hours ago

        What damage control?

    • SpicyLemonZest 21 hours ago

      I don't think that evidence would exist yet whether it's true or not. Nobody's gonna log onto their work computer on Saturday to pull and then leak subscriber numbers.

sourcecodeplz 12 hours ago

Anthropic is just virtue signaling, they will also fold, but just a little later...

IAmGraydon 8 hours ago

Let’s all remember that this is the guy who bought up the world’s RAM supply in wafer form (which OAI can’t use) to remove it from the market and drive up prices for competitors and for you and me. He is the worst of the worst.

engineer_22 18 hours ago

They want it to sound like they're allies while they slit their throat

teyopi 21 hours ago
  • bertil 20 hours ago

    I would love to explain to Sam Altman that Elon Musk is a bad person and using his platform isn’t a sensible decision, but I feel like he remembers more evidence of that than I ever will be able to imagine.

    • teyopi 8 hours ago

      Scam Altman is on the same level as musk.

emsign 12 hours ago

Bye bye OpenAI

ta9000 21 hours ago

Everyone knows this is just about Trump funneling money to the Ellisons (Oracle) via OpenAI. It really is that simple. This is all just pretext.

csto12 a day ago

Wow, so brave after accepting the contract. This is more insulting than OpenAI saying they are a supply chain risk.

rdiddly 21 hours ago

Us bribing them: fine

Us taking the contract, working for them and enabling them: fine

It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it

Anthropic being blacklisted: whoa there, we have ethics!

Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo

  • evrydayhustling 21 hours ago

    It's not even "whoa we have ethics", it's just "this is a bad look for us".

AmericanOP a day ago

I do think OpenAI's brand is dumpstered.

  • thunky a day ago

    Optimistic. My money is on everyone forgetting about this by next week.

    • deepsquirrelnet a day ago

      That’s why I unsubbed today! Otherwise I might forget.

    • cube00 a day ago

      It will be interesting to see if this permeates out to the general public who already use ChatGPT. Or maybe it won't, since this involves the OpenAI name rather than ChatGPT, which is the better-known brand.

      • davidw 20 hours ago

        More and more of the press is owned by oligarchs who are putting their thumbs on the scales, so that could be a factor.

    • Analemma_ a day ago

      It depends. Normies don't care, but a bunch of them are free tier users anyway. The people who care are disproportionately on the $200/month moneymaking plan; losing a bunch of them could hurt, especially if it snowballs the consensus that Claude Code is the serious choice for software engineering.

      For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.

      • BLKNSLVR 21 hours ago

        How many $200/month subscriptions does the US government cover though? I'd say probably a lot. Especially with how much extra the DoD will pay to get OpenAI to cross its "red lines" - on day two.

    • yoyohello13 a day ago

      Yeah just wait until the next model comes out. People will be riding Sam’s dick again in no time.

      • doodlebugging 21 hours ago

        I'm sure his sister will appreciate others lining up so he leaves her alone forever.

  • 303space a day ago

    The way OpenAI and Anthropic are positioned in public discourse always reminded me of the Uber vs Lyft saga … Uber temporarily lost double digit marketshare in the US during a viral boycott over their perceived support of the Trump 1.0 admin. Heads did roll at the exec/founder level but eventually the company recovered.

    • jellyroll42 21 hours ago

      unfortunately I think that's probably a good analogy

  • djeastm 20 hours ago

    Among developers on HN, perhaps, but their goal is to soon replace developers altogether so from their perspective it's simple cost-benefit

jchook 20 hours ago

Fool me once...

throwawayaghas1 14 hours ago

I don't believe this one bit. Altman and Trump have been in bed together since the inauguration.

throwaway314155 19 hours ago

Can someone please explain plainly what this means and what happened, and why it is the source of so much controversy?

I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) account of what this is all about.

  • scottyah 19 hours ago

    Here's my take-

    Anthropic has some contracts with the US government. They want some additional terms put on their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and also puts out illegal orders barring any federal workers from using Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone that they have the same terms but just played nicer to get the deal. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.

hmokiguess 20 hours ago

Now that’s something. Another campaign advertising. Wow

resters 21 hours ago

In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.

The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.

This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.

It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons grade AI to the team architects of ICE abuses among many other blatant violations of civil and human rights.

Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!

Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).

Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.

This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.

Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.

Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.

imiric 9 hours ago

The layers of stupidity on this shit cake are staggering. I don't even know where to start...

Let it be known that this rotten industry brought us here, and that all people working for these companies are complicit with what is happening, and with what is yet to come. This is just the beginning.

abhitriloki 15 hours ago

[flagged]

  • rustyhancock 14 hours ago

    > Anthropic's position was categorical: no mass surveillance, full stop.

    It was "[No] mass domestic surveillance of Americans"

    It's far more narrow a restriction than you seem to imply. For example, mass domestic surveillance of non-Americans seems okay.

    • beachy 14 hours ago

      That's right. From outside the US Anthropic looks every bit as threatening as any other AI company.

    • stahorn 14 hours ago

      No mass domestic surveillance of citizens is an old trick also. Country A doesn't surveil their citizens and Country B doesn't do theirs. But then they set up the infrastructure and both surveil each other's citizens and then exchange information. Then when they have all the infrastructure, it would be almost a crime to not use it to catch criminals. I mean, think of the children...

  • jascha_eng 14 hours ago

    This is an LLM bot. Careful what you upvote folks especially with new accounts.

    • nujabe 14 hours ago

      Do you have evidence for that?

      The post made important points, so who cares?

      • usefulposter 14 hours ago

        >who cares

        dang cares.

        https://news.ycombinator.com/item?id=47077431

            (1) Generated comments aren't allowed on HN - this rule predates LLMs but obviously applies even more now: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=by%3Adang%20%22generated%20comments%22&sort=byDate&type=comment
        
            (2) If you see accounts that look like they're mostly posting genAI comments, please let us know at hn@ycombinator.com.
        
        https://news.ycombinator.com/item?id=46747998:

            Please don't post generated or AI-filtered posts to HN. We want to hear you in your own voice, and it's fine if your English isn't perfect.
      • hagbarth 14 hours ago

        > Do you have evidence for that?

        Check the post history. It’s pretty obvious

      • jascha_eng 13 hours ago

        If the writing itself is not enough for you, look at the other comments they posted: like 6 or 7 on topic within 10 minutes. No one reads the content that fast.

  • scrollop 15 hours ago

    Exactly the sort of behaviour we now expect from Altman, and perhaps the behaviour that caused him to be temporarily ousted all those years before.

    • imjonse 14 hours ago

      Exactly the sort of behaviour that guaranteed he would be un-ousted by the powerful who back him.

  • fh973 15 hours ago

    No mass surveillance of Americans it is.

dev1ycan 20 hours ago

Pathetic attempt at damage control, lol.

jwpapi 20 hours ago

No wonder they think they’re close to AGI when they think we are that stupid.

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

This whole sentence does absolutely nothing; it still amounts to "do whatever the law allows." It's a fully deceptive sentence.

  • zarzavat 17 hours ago

    Boycott OpenAI.

    Let's kill their business before it kills us.

    • OccamsMirror 17 hours ago

      Don't boycott it! Just don't pay for it. Smash the free service hard.

      • sethammons 11 hours ago

        Active users are worth a lot. It is a signal that they are the chosen solution.

  • scottyah 19 hours ago

    Altman must have read a lot of Kissinger. If your brain scans the text quickly it almost seems like it's Anthropic's red line, except the second half completely negates it. Completely untrustworthy IMO, this is a direct, malicious intent to misdirect.

  • IAmGraydon 18 hours ago

    These people truly believe we're all idiots.

    • yoyohello13 18 hours ago

      Doesn’t matter what they Believe. Not like we are going to do anything about it. Next couple weeks most of HN will be lining up to use the new OpenAI model that’s .01% better.

roughly a day ago

It feels like Sam's playing chess against an opponent who's playing dodge ball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose whether he's going to risk his entire consumer business and accede to being a private wing of the government like Palantir or if he wants to make a genuine tech giant. There's no third choice here.

  • 3eb7988a1663 21 hours ago

    I do not see this as any mastermind play, but fully compromising principles. Which is a play.

    "Donations" to a corrupt regime + signing a deal that says the DoD can do whatever they want is not outmaneuvering so much as rolling in the pigsty.

    • roughly 21 hours ago

      So is the theory that OpenAI believes it can’t compete on the open market or that they don’t know this will eventually cost them their consumer business?

      • 3eb7988a1663 20 hours ago

        I doubt most consumers pay enough attention that they would be aware of something like this. Even if they did, few companies have clean hands these days; it just falls into the general haze of "everything is awful."

        For OpenAI, it is likely a huge contract which gives them immediate cash today. Plus the event can be repackaged in further financing deals. "Good enough for the DoD, with N year contracts for analysis of the hardest problems"

      • tadfisher 15 hours ago

        The reality is that all data we have created and will create that is accessible on the public Internet will be used to train autonomous weapons systems used to kill humans. So the consumer business will be lost eventually, no matter what OpenAI believes.

  • BLKNSLVR 21 hours ago

    Everyone already knows what he is going to do when it comes to that.

  • discardable_dan a day ago

    It also doesn't matter because Claude 4.6 is so much better at writing code that nobody cares what OpenAI is doing.

o175 9 hours ago

Everyone's applauding Anthropic for having principles. Let's look at what those principles actually do.

Anthropic refused the Pentagon contract. Within hours, OpenAI signed it. The capability didn't pause. It just changed vendors. Anthropic's "red line" is a speed bump on a highway with no exit ramp.

But it does accomplish one thing: it gives their engineers a story they can tell themselves. We're the good ones. We said no. That moral comfort is what lets extremely talented people keep building the exact technology that makes all of this possible.

Worse, the "safety-focused" brand doesn't just pacify the people already there. It recruits researchers who'd otherwise never touch frontier AI, funneling them into building the most powerful models on earth because they've been told this is where the responsible work happens. The red lines don't slow capability development. They accelerate it by capturing talent that would have stayed on the sidelines.

And in this whole drama, who actually represents the public? Trump performs strongman nationalism. The Pentagon performs operational necessity. Anthropic performs moral courage. Everyone has a role. Nobody's role is the people whose data gets collected, whose lives get restructured by these systems. The only party with real skin in the game is the only one without a seat.

  • listless 9 hours ago

    This is exactly right. It’s crazy to me how easily people get confused and think that corporations are “good” or “evil”.

    Anthropic is incredibly good at marketing. They are constantly out talking about how dangerous AI is, and even showing how Claude does dangerous things in their own testing. This is intentional: so that you see them as having the truly powerful AI. In fact, it's so powerful that all they can do is warn you about it.

    They knew refusing this contract would make them look like the good guy. Again. They knew OpenAI would sign it. They knew vapid celebrities would celebrate them.

    Folks come on. Don’t be so easily taken in. None of these people are good guys. They are all just here to make money and accumulate power and standing. That’s ok. There’s nothing wrong with that. But we gotta stop acting like we’re in some ongoing battle of good vs evil and tech companies are somehow virtuous.

    • o175 9 hours ago

      Even if they believe every word sincerely, it changes nothing. The structural effect is identical. Sincere people build the same capability, the contract reroutes the same way. You don't need cynicism to explain this.

      The honest version might actually be worse, because sincere people work harder.