grey-area 15 hours ago

This has much broader implications for the US economy and rule of law in the US.

If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?

This marks an important turning point for the US.

  • heresie-dabord 11 hours ago

    > much broader implications

    Setting aside the spectacular metastasis of a lawless kakistocracy that is literally rewriting the facts on record...

    Anthropic's leadership has wisely attempted to make it clear that its product is not fit for the US DoD's purpose/objective, which is automated killing at scale.

    It would be (is) grossly, historically negligent to operate weapons with LLMs. Anthropic built systems for a thuggocracy that only understands bribery, blackmail, and force.

    • rayiner 7 hours ago

      [flagged]

      • thewebguyd 5 hours ago

        Anthropic isn’t the inventor here, they are a service provider. The government can easily go find a different service provider, or if none of them will allow their service to be used for war, then the government should develop their own tech.

        Saying the government can just nationalize any company purely because they want to use the tech to kill people has pretty big implications and is historically against what this country stands for.

      • gruez 7 hours ago

        >That’s not their call to make. Inventors of technologies that could be used for war have never had the right to deny access to those technologies to the elected civilian government.[1]

        >[1] The government can make you go over to southeast Asia and kill people personally.

        Is this a normative statement? In other words are you simply claiming "the government has men with guns and therefore can force people/companies do whatever they want", or are you claiming that "the government should be able to commandeer civilian resources for whatever it wants"?

        • rayiner 7 hours ago

          It’s a descriptive statement about the law. But you’re mischaracterizing the normative principle underlying the law. It’s not based on power, but rather the moral duties incumbent on citizens.

          • gruez 6 hours ago

            >but rather the moral duties incumbent on citizens.

            Is it a "moral duty" to aid your government, especially in the current social/political environment? Conscription is theoretically still allowed in the US, and you're theoretically supposed to register for the SSS, but nobody has been prosecuted for failure to do so in decades. That suggests the "moral duty" aspect has significantly weakened. Moreover if we're making comparisons to the draft, it's also worth noting the draft allows for conscientious objection. That makes your claim of "that’s not their call to make" quite questionable.

            • holmesworcester 6 hours ago

              > That’s not their call to make.

              Whether they participate voluntarily in a commercial transaction or participate only when compelled to by law (setting aside the question of whether the government does or should have that power) is certainly their call to make.

              Just as any individual can decide whether to volunteer, whether to wait until drafted, or whether to refuse to be drafted and face the consequences.

              (History shows these decisions, and the rights to make them, are meaningful at scale!)

              Finally, governments who expect their leading scientists to do groundbreaking work simply out of fear of imprisonment are NGMI against governments whose scientists believe in their cause.

            • rayiner 6 hours ago

              If anyone thinks the moral justification for selective service has diminished, they should launch a campaign to repeal it and see how it goes over. I suspect that the non-prosecution more reflects the public’s leniency in the absence of major threats since the fall of the soviet union than a change in the underlying normative view.

              Conscientious objection still puts the ball in the government’s court. You have to make your case to the government that you have a deeply held religious or moral belief that precludes participation in war, and then the government decides what it wants to do. It’s not clear to me how a corporation would prove the existence of such a belief. But even if that was possible, it wouldn’t give the company the right to decide unilaterally.

              • nullocator 16 minutes ago

                > they should launch a campaign to repeal it and see how it goes over

                You are conflating lack of true representation (what we have) with lack of support. It's very possible that the broad majority of the electorate would in fact get rid of conscription in the U.S. if they actually had a say in the matter. [1]

                > I suspect that the non-prosecution more reflects the public’s leniency in the absence of major threats since the fall of the soviet union than a change in the underlying normative view.

                Or more people are wising up to the reality that the real risk to their safety and security is from within, not from without; it's from people like you who would happily subjugate and violate your countrymen while telling them it's all for their own protection.

                [1] https://news.gallup.com/poll/28642/vast-majority-americans-o...

          • praptak 5 hours ago

            The moral duty of a citizen is to sabotage their country when it becomes immoral.

            • dennis_jeeves2 2 hours ago

              Nearly every country would be 'sabotaged' then - and rightfully so. ALL gvts are a sophisticated manifestation of the more lowly protection racket run by the mafia. i.e 'We protect you from harm by the other mafia'.

          • catlover76 7 hours ago

            > It’s not based on power, but rather the moral duties incumbent on citizens.

            People largely tend not to believe in this kind of jingoistic bullshit nowadays.

      • jim33442 4 hours ago

        Anthropic can certainly make the call to deny access this way, but then the US govt can choose not to make contracts with Anthropic. So what's the issue?

        • gentoo 4 hours ago

          The whole reason this is a story is that the government won't just refuse to contract, it will put the equivalent of soft sanctions on the company because Anthropic refuses to contract.

      • worthless-trash 5 hours ago

        Hang on, companies dont get to have the rights of a person and not be conscripted.

        • rayiner 5 hours ago

          That’s my point. It would be odd to say that a corporation has a broader right not to be compelled to aid war efforts than a person does.

      • catlover76 7 hours ago

        I have seen a lot of your posts on here about political topics, and they are always disingenuous, misleading, and geared towards providing a thin veneer of reasonability over any form of morality.

        > If Congress doesn’t want AI-powered killing machines, they’re the ones who have the right to make that call.

        You have it backwards, and you know it. If Congress wants to invoke natsec concerns to force companies to sell to the federal government, then they have to explicitly say so, and any such legislation and exercise of execute power pursuant thereto would be heavily litigated.

        > The government can make you go over to southeast Asia and kill people personally. It’s totally incompatible with that to say companies should be allowed to veto the use of their technologies in war.

        Yes, it's legal to have drafts, but that's not relevant, and also includes certain exceptions for conscientious objectors. It doesn't matter if its paradoxical or ironic that an individual could be pressed into military service whereas a private company doesn't have to sell stuff to the federal government.

  • herval 6 hours ago

    this entire administration has been a constant stream of "important turning point for the US" moments

    • grey-area 2 minutes ago

      That’s true and it’s not over yet, wait till he reaches the thousand year reich bit.

    • ericmay 5 hours ago

      I think most, perhaps all of those "important turning points" aren't really important turning points but just business as usual.

      • TOMDM an hour ago

        Then you know and understand nothing.

      • FartyMcFarter an hour ago

        Is threatening an ally business as usual? Tell me about all the times that recent presidents threatened a NATO ally...

  • ricksunny an hour ago

    turning point? The episode is literally replaying the AEC's (read: a war-footed government's) 1954 Oppenheimer security-clearance hearing in real time for a fresh modern-day audience.

  • busko 14 hours ago

    Yep, where does your trust lie now? It's been a minute of pretending it'll be okay.

    • adventured 7 hours ago

      Nothing has changed in decades regarding this. People just like to pretend something new is happening, because they're extremely desperate to proclaim a fundamental turning / ending of the US (which is why every single event brings out those claims: this time is different! America will never recover from this! etc).

      US tech companies were previously forced into compliance with PRISM or threatened with destruction (see: escalating fines to infinity against Yahoo, forcing their eventual compliance).

      You know what's new? This administration is doing out in the open what used to go on quietly.

      • lostlogin 6 hours ago

        > Nothing has changed

        > You know what's new? This administration is doing out in the open what used to go on quietly.

        So this administration has got bold and the behaviour has become overt.

  • coldtea 7 hours ago

    Rather it's business as usual.

  • bambax 13 hours ago

    The turning point happened when Trump was reelected. One could argue the turning point happened Jan. 6, 2021 and nobody truly cared. The consequence should have been for all insurrectionists and Trump himself to be tried for treason and be imprisoned indefinitely. Yet here we are.

    • jmull 9 hours ago

      > The consequence should have been for all insurrectionists and Trump himself to be tried for treason and be imprisoned indefinitely.

      People have this intuitive sense that there's some kind of authority of truth or justice, an available recourse that we could've and should've used.

      But that sense is incorrect.

      What we actually have is the political and justice systems that Trump and his adherents have, so far, quite successfully subverted.

    • childintime 8 hours ago

      It was when the supreme court judged he could act like a king, the summer before he was elected, inventing things the constitution never said and setting the example of lawlessness Trump now follows up on confidently.

      • anon84873628 8 hours ago

        And continuing to pull on that thread, when the Senate refused to vote on Supreme Court nominees for the president in 2016.

        • troyvit 7 hours ago

          Call it the pebble that started the landslide but I lay it at the Patriot Act, which was passed in October, 2001. The passing of the law was bad enough but the subsequent extensions of the law by both parties cemented the government's intent.

      In other words, we might have killed Osama Bin Laden, but he won. The U.S. truly is a "shadow of its former self."

    • shevy-java 11 hours ago

      I'd agree - Trump fulfils the criteria of treason.

      It's interesting to see that nothing happens despite this. Now he started another war to distract from his involvement in the huge Epstein network. Also, by the way, quite interesting to see how many people were involved here; there is no way Ghislaine could solo-organise all of that yet she is the only one in prison. That makes objectively no sense.

      • formerly_proven 11 hours ago

        Another flawed democracy just sentenced their ex-president who attempted an insurrection (and similarly claimed broad presidential powers and immunity) to life in prison. Interesting contrast.

        e: Americans seem to be surprised to learn that their democracy is indeed classified as a flawed democracy for more than a decade by The Economist due to decades of backsliding (the more rapid regression lately is not yet accounted for, but I wouldn't be surprised if the outcome of the 2026 elections results in a hybrid regime assessment in 2027).

      • tim333 9 hours ago

        You'd have a job arguing it's treason legally. In the US that's "levying War against [the United States], or in adhering to their Enemies, giving them Aid and Comfort".

        They were going to do him for conspiracy to defraud the United States and conspiracy to obstruct an official proceeding, re. the 2020 stuff before he got reelected.

    • xerox13ster 6 hours ago

      [flagged]

      • pirate787 6 hours ago

        Your take is a call for civil war. You're obviously wrong about "treason" since even larger majorities voted for Trump in 2024.

        • lostlogin 5 hours ago

          How things played out isn’t what decides if it was treason or not.

        • krapp 5 hours ago

          The US is already in a state of civil war, that war was declared in 2016.

          Half the country just hasn't accepted the reality that the other half refuses to share a society with them and wants them dead.

  • miki123211 8 hours ago

    The same is true about Meta and US antitrust law, or the GDPR and DMA in Europe.

    Governments should not be permitted to introduce regulations against companies of this kind if the regulations can be enforced selectively and with regulator discretion, as the GDPR and antitrust definitely are. The free-speech implications are staggering.

  • alopha 14 hours ago

    Trump was threatening Netflix for having a democrat on the board last week. They seized 10% of Intel. They forced Nvidia to tithe 25% of China revenue into a slush fund. The FCC has been used to censor comedy. The ship has sailed and the only consequence has been hand-wringing.

    • khalic 14 hours ago

      Yeah the passivity of the US population will be remembered for generations. Of course it's the people talking about freedom the most that do the least, as usual, big mouths are antithetical to actions.

      • bsenftner 11 hours ago

        The US educational system has been manufacturing these dual career specialists who are competent in their careers and believe that makes them specialists in all other areas, but they get played like fools constantly. The level of discourse, of public conversation, is like 7th graders'. Until you get to politics, then it's "sports talk" with "winning" being all that matters, even if winning means the destruction of law and a completely corrupt future forever.

        • quantified 8 hours ago

          And, I believe, a sufficiently comfortable population isn't motivated to act. With social media and streaming, people aren't bored enough/are too engagingly distracted to bother.

      • raw_anon_1111 9 hours ago

        It’s not passivity - it’s active approval. 40% of people actively cheer everything he is doing

      • oefrha 13 hours ago

        I was checking Trump approval ratings yesterday. I didn’t have high hopes but I thought it had to be under 35% at this point (I think in a sane country it has to be <10% or at least <20% after the nonstop madness dropping everyday). But nope, every poll places him at >40% approval or ever so slightly below 40%. To me that’s definitive confirmation that “it’s on Trump and his cronies, not the American people” is nonsense. It’s on at least 40% of American people. They weren’t blindsided by false promises, they want this.

      • pif 10 hours ago

        Utter idiocy at election day is not passivity.

        History will put Trumpers and Confederates at the same level of despicability.

        • raw_anon_1111 9 hours ago

          You mean have a holiday for him? 4-8 states have a Confederate Memorial Day.

      • jachee 13 hours ago

        Okay, if you have big actions to show off, then show us how it’s done.

        You step up and start shooting at the heartless monsters running the first (US armed forces) and second (ICE) most well-funded militaries in the world. Go ahead. We’ll be right there behind you.

        (Yeah, I’m burning some hn karma for this, I imagine.)

        • khalic 13 hours ago

          Thank you for giving an example of what I’m talking about. You’re there fantasising about armed conflict when there are a million different actions one can take.

          But nope, only words, words and more words.

          • roryirvine 12 hours ago

            It's part of the dismal/pathetic form of American exceptionalism that's taken root in the last decade.

            "We mustn't consider dealing with problem x because it wasn't considered important by our founding fathers"

            "China are catching up, so we need to cower behind a tariff wall rather than risk losing an open competition"

            "Other countries with similar legal systems have successfully reformed their supreme courts, but there's nothing we can learn from them"

            "We shouldn't constrain rogue leaders because of, er, something to do with King George III"

            ...and now "we can't push back against the regime, because they'll shoot us if we do".

            It's so weird - a huge shift in such a short period of time. As an outsider who wishes America well, it's really sad to see.

            • graemep 10 hours ago

              None of this is entirely new. Americans have always fetishised their constitution or founding fathers. While there has been an era of free trade, that is over, and I think the west in general is in a difficult position (ultimately as a result of believing the "end of history" BS).

              As for getting shot, while the chance of getting shot in the US for opposing the government is much higher than in similar circumstances in somewhere like the UK (which is far from perfect - but rarely actually shoots people), its also much, much lower than in Iran or China or Saudi Arabia.

              Pushing back against the US government is a lot safer than taking part in something like the 2022 protests that ousted the Sri Lankan government, and lots of normally apolitical people took part in that (which was why it succeeded).

              • murphyslaw 2 hours ago

                I believe that the biggest problem in the US is the constitution. It's next to impossible to change so the only way to fix it is replacing it entirely with a new one. But good luck with that...

          • quantified 8 hours ago

            Actions that are words aren't much of an action.

          • jasonlotito an hour ago

            > only words, words and more words.

            Your ignorance of reality does not define reality.

          • jachee 13 hours ago

            It’s 5am on a Saturday. What millions of actions do you suggest, O just-as-wordy-yet-holier-than-thou HN commenter?

            • khalic 13 hours ago

              Assuming this is in good faith: think about it yourself, are you seriously waiting for people to tell you what to do? Use your critical thinking skills, read history about similar situations. If you can't, find someone OFFLINE that will. And don't go telling your plans on the web.

            • _bohm 9 hours ago

              Get organized. Join a mass movement, a local group or a union. There are many people doing things. Stop complaining then excusing yourself for not being one of them.

            • xorcist 11 hours ago

              No one can do everything but everyone can do something.

              If you are in law enforcement, do not follow clearly unlawful orders. The president is not your boss. This is a functioning democracy.

              If you are a librarian, do not hide otherwise lawful books that the current administration dislikes.

              If you are in logistics, do not collect obviously unconstitutional taxes. Make sure to challenge them in courts first.

              If you are in a university, stick to what is true and scientifically sound. Do not hide inconvenient truths.

              If you are a baker, do not refuse to make a rainbow colored cake just because you are worried what the people wearing metaphorically brown shirts might say.

              The list goes on and on and on. This has been well documented throughout history. Fascism needs a seed to thrive, and that seed is people complying in advance. Not with actual laws, but with the idea of what direction the law will take, just because it's easier for them. People not helping other people because immigration is not in vogue right now and who knows what the neighbors might say.

            • agmater 13 hours ago
              • jachee 13 hours ago

                The first 17 of those are all variations on “make words”. :P

                • lejalv 10 hours ago

                  Do you know how the deadliest conflict of the XXth century eventually came to be? The words of one Adolf Hitler.

                  Don't dismiss words: they are the necessary link between (individual) thoughts and collective deeds.

                  PS. Trump also got there with words: speeches, slogans, imprecations

        • krapp 10 hours ago

          It's just weird that whenever a shooting happens anywhere else in the world, or they pass some draconian surveillance law, Americans criticize that country for not having a Second Amendment and rising up in violence against their government.

          And that whenever a mass shooting happens in the US, Americans reassure themselves that gun violence is a price worth paying for the Second Amendment. And there is a run on pawn shops and gun stores because mass shootings are the best form of advertising America's billion dollar gun lobby has.

          And that Americans will wax poetic about watering the Tree of Liberty with the Blood of Tyrants and Patriots any time gun control comes up, because they believe their Second Amendment is an absolute vouchsafe against tyranny and because of that, they and they alone are the only truly free country.

          And they were willing to rise up in Portland.

          And they were willing to rise up during COVID.

          And they were willing to rise up on Jan 6th.

          And they're willing to shoot up schools and black churches and gay nightclubs and mosques so often it no longer makes the news.

          But now, with blatant and undeniable tyranny in their face and shooting them dead in the streets... nothing.

          Not that violence would necessarily be productive (although historically speaking no social or political progress happens without it)... but it's weird that the most violent society in human history, born of genocide and bathed in blood, with more guns than people and gun violence enshrined as its second most important and fundamental virtue, the land of "give me liberty or give me death" is all of a sudden the most timid.

          Like goddamn throw a Molotov cocktail or something.

          • cityofdelusion 9 hours ago

            This is just a (bad) caricature of Americans, it’s not even very accurate of rural Americana or even Deep South rural. Most Americans just wake up, go to work, feed the kids, go to bed until they die, like most any other “first world” nation.

            • kelvinjps10 7 hours ago

              That's true but when specifically talking about gun ban laws they said it shouldn't be done because of being able to oppose a tyrannical government

              • lostlogin 5 hours ago

                You’ll find people here who are in America and are surprised by a comment like yours. They have guns, they don’t read the news and aren’t troubled by what’s occurring.

            • krapp 7 hours ago

              It's the image America has always projected of itself - aggressive and defiant, a nation of cowboys with Bibles in one hand and six-shooters in the other, rebels against any authority but God. I live in the South and have all of my life. I've had countless arguments with gun owners and gun rights people, and I know the arguments they use, and how proud they are of the image.

              You're making the mistake of assuming an attribute of a culture cannot be accurate unless it's 100% accurate about every member.

              I think it's perfectly valid to call Americans to the carpet when they won't live up to their stated principles, if only because of how obnoxious they've been about their own sense of exceptionalism, and how their guns serve as an absolute vouchsafe against tyranny.

              History is going to note that the only times Americans attempted a revolution against their government was first in defense of slavery and second in defense of fascism, and that isn't a good look. Replying with #notallamericans doesn't help.

              edit: OK partial mea culpa as the US had anti-slavery revolts[0], but the two events that will stand out for their lasting impact and scope are the Civil War and Jan. 6th. The Revolutionary War doesn't count because they were British at the time.

              [0]https://en.wikipedia.org/wiki/Slave_rebellion_and_resistance...

    • pjc50 14 hours ago

      But the Dow is over 50,000!

      That is, the money doesn't care so long as it's still profitable. When the recession comes a Democrat will be allowed back in to fix things.

      See Liz Truss.

      • kkotak an hour ago

        Yes and it stands for the Department of War now.

      • blfr 14 hours ago

        No one after Liz Truss fixed anything in Britain.

        • collabs 11 hours ago

          I think the fix was reversing the idiotic tax cuts that Liz Truss promised. It doesn't fix every single problem ever for England but nothing ever does.

          I think the solution is also obvious for the United States — higher taxes and lower government spending. We need to do both. However, you can't get elected if you promise both those things.

  • pineaux 14 hours ago

    Its called corporatism and is a part of classical fascism.

    • deepsquirrelnet 9 hours ago

      Isn’t there some kind of term for when the government controls the means of production. I’ll think about it. It’s one of those terms that’s been thrown around so loosely by this regime you knew they were going there.

    • goodpoint 13 hours ago

      It's a core part of fascism.

    • goku12 14 hours ago

      I don't see a good reason to downvote you, though that's a pattern here these days. But I do have a question about your statement. This move certainly has the hallmarks of fascism. But how is it corporatism when it's the elected government that's trying to punish a corporation? Granted that this regime is deep in the pockets of the corporations and billionaires. But it looks like they would have spared Anthropic if they capitulated to the regime's demands and bent their back over. This seems more like retribution for refusal of loyalty rather than corporate sabotage.

      • Boxxed 13 hours ago

        > But it looks like they would have spared Anthropic if they capitulated to the regime's demands and bent their back over.

        Yeah dude, that's the point.

        • wavemode 8 hours ago

          That's the opposite of corporatism. Corporatism would be if the corporations made demands of the government, and the government bent over backwards.

          The US government has lots of corporatism, but this isn't an example of that.

          • xphos 7 hours ago

            There are always winners and losers in political decisions; not every corporation can have control over decision making. But that doesn't mean companies aren't playing a major role in decisions. I'd imagine companies owned by Larry Ellison (Fox and soon CNN) have a much larger role in decision making and agenda setting than most people are comfortable with.

        • notahacker 9 hours ago

          Corporatism/corporatocracy is about representative groups from industries being embedded in the state and their interests shaping state policy.

          The current US administration's relationships with corporations is more seeking to maximise how much bribe money it can extract from them, whilst undermining them with counterproductive policies no matter how big the tax breaks are.

      • MzxgckZtNqX5i 13 hours ago

        I'm not sure I fully understood your point, but about the question "how fascism if elected?": the Nazi Party won (i.e., it was the most voted party) in multiple elections in the late 20s/early 30s.

  • keybored 11 hours ago

    Corporations learn about “first they came for [Apple Inc.] but I am not [Apple Inc.] so I didn’t do anything”.

  • rambojohnson 7 hours ago

    outside of just the tech sector, this country has already crossed MANY irreversible turning points. also, good luck with your midterm elections. we have started war with Iran. cheers from Barcelona from this American refugee.

  • iso1631 7 hours ago

    Not really a turning point, the US has been turning for months, ever since the fellatio of the inauguration. This is just another rung on the ladder

  • jmyeet 6 hours ago

    This isn’t new. Maybe some people are just now realizing it.

    Take the stated tool for this action, the Defense Production Act ("DPA") [1]. It was passed in 1950. What does it cover? Well, lots of things. The DPA has been invoked many times over 76 years.

    Notably in 1980 it was expanded to include "energy", I guess in response to the 1970s OPEC Oil Crisis.

    Remember during the pandemic when gas prices skyrocketed? As an aside, that was Trump's fault. But given that "energy" is a "material good" under the DPA, the government could've invoked it to tackle high energy prices and didn't.

    So, the government is willing to invoke the DPA to protect corporate and wealthy interests, which now includes military applications of AI for imperialist purposes, but never for you, the average citizen. It's weird how that keeps consistently happening.

    The US government has consistently acted to further the interests of US corporations and the ultra-wealthy. You probably just haven't been paying attention until now.

    [1]: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

  • rpcorb 9 hours ago

    [deleted]

    • c54 9 hours ago

      Your language suggests you’re an ideological supporter of trump but I’m curious:

      What exactly is being imposed by anthropic?

      This is from the anthropic letter:

      > We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.

      Do you see these views as “left wing”? Or what do you disagree with here?

    • hirako2000 9 hours ago

      It isn't a left wing stance though. It's standing for the constitution. At the cost of going against the illegal state demands.

      Compliance with the DoD doesn't remove big tech's complicity.

  • altmanaltman 11 hours ago

    I would argue we're miles past an important turning point; it's been turning so much since then, it's basically a full circle now

  • frogperson 5 hours ago

    Im sorry to say the turning point has well passed. The US is a facist country with leaders who will flaunt the rule of law.

    Please memorize the 14 points of fascism; you will see examples of them multiple times a day. It's everywhere.

    https://ratical.org/ratville/CAH/fasci14chars.html

  • throawayonthe 4 hours ago

    I genuinely do not understand why anyone is acting like this is something new; hasn't this been the status quo since forever?

    Furthermore, this is kind of a naive framing, painting the state as somehow separate from the majority of capital...

    • cmorgan31 4 hours ago

      Are you claiming it has been the status quo for the US government to kingmake companies through use of the Defense Production Act when one entity refuses to remove safeguards? Do you have any examples, or is this just the worldview that aligns with your own?

    • gentoo 4 hours ago

      Sure, the state has always had theoretical power to do this, but when was the last time something remotely like this actually happened?

    • grey-area 3 hours ago

      No, this is far from the status quo for US government, it is not ordinary corruption, nor is it going to stop here.

      Trump and associates have used the machinery of state to attack their enemies, attacked and belittled the judiciary while trying to subvert it, and demanded fealty from large businesses under threat of destroying them. It is unprecedented, reckless and a very dangerous moment, unfortunately not just the US has to live with the consequences.

      If you think it is business as usual you need to do some reading of history, specifically a century ago in Germany.

kace91 21 hours ago

Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.

Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.

  • 9dev 16 hours ago

    It is absolutely unsafe to depend upon American companies, and I can guarantee you that all over the world, people are actively looking for alternatives already. You never know what happens next, things that used to take years happen in a single Truth Social post now, and no matter how twisted your worst nightmare scenarios look, this ridiculous band of crooks in charge of the USA manages to one-up them.

  • skeledrew 20 hours ago

    When you put it like that, it makes me almost want to wish for Anthropic to die from this. But the blow to the field in general would be huge, and I benefit from their service as well.

  • ExoticPearTree 8 hours ago

    Unfortunately, every country has a law somewhere saying it can take private property at will if it is in the national interest.

    It's not as if the US is special in this case.

    The problem is pretty simple: there is money to be made and someone will do what the Pentagon wants. Will it be worse in capabilities than Anthropic? Probably, but as long as it can be used to wage autonomous war wherever the US military decides, it will be good enough.

    Anthropic can stick to their beliefs as much as they want, but it will not change the outcome, maybe just postpone it a bit.

    On an unrelated note, I think the Pentagon erred when it labeled them a supply chain vulnerability, they should have used the DPA to make them do what they need. Less drama and much cheaper compared to replacing them with a whole different company.

  • segmondy 16 hours ago

    Anthropic will just move out of the US. A lot of scientists fled Nazi Germany in its early stages; many of them fled to the USA and ended up being part of the Manhattan Project that built the A-bomb, which helped the US win and end the war. We are going to bleed a lot of AI researchers and engineers.

    • skeptic_ai 15 hours ago

      Can't the USA just deny you the ability to leave if you're deemed important for national security?

      • KellyCriterion 13 hours ago

        But they could open up a branch in the EU with some people (and their money), and then step by step employ the people from the US in the EU, bleeding out the US entity in the long run. At least for now, no one can stop their top scientists from moving to another country with the knowledge and just picking up their work in the new country.

        • jimmydorry 2 hours ago

          >At least for now, no one can stop their top scientists from moving to another country with the knowledge and just picking up their work in the new country.

          They can, and routinely do. Many individuals get marked and regularly go through additional screening if their travel plans raise flags. This isn't even unique to the US... most Western nations do the same. If there is a serious brain-drain risk, the US government can easily go all out and put the whole company on the no-fly list.

        • WhrRTheBaboons 12 hours ago

          >At least for now, no one can stop their top scientists from moving to another country

          Let's hope so, because I am not so certain.

  • refurb 16 hours ago

    Oh come on. Saying “no” is not eroding trust, it’s taking a stand.

    When the US banned human embryo research, did that erode trust? I didn't hear anything about that at the time.

    • DaSHacka 12 hours ago

      Don't you know enforcing what's best for your citizens clearly erodes trust? Just keep selling off your future for short-term gains! Anything else is heckin problematic :(

jspdown 13 hours ago

Domestic mass surveillance might feel tolerable when you live in the country conducting it. But how would you feel about other countries adopting similar policies, and thereby mass-surveilling the American people? Because that's exactly what these policies authorize when applied to the rest of the world.

  • amunozo 12 hours ago

    Americans always think they're exceptional, and so believe they have the divine right to do things that the rest cannot.

    • Dansvidania 12 hours ago

      Maybe that’s why they like Israel so much.

  • raw_anon_1111 9 hours ago

    I would feel much better about other countries mass surveillance than the US. China for instance can’t do nearly as much to me as the US justice system can.

    • thunky 9 hours ago

      Ok so now connect the mass surveillance system to an automated killing system that can blow you up in the grocery store because you're standing in line next to its target.

      • raw_anon_1111 6 hours ago

        Given a choice between someone blowing me up because I'm next to a high-value asset, and worrying about jack-booted masked thugs with qualified immunity killing me and being cheered on by 40% of the population - I'll take my chances with China having my info before ICE or the local police.

      • bloqs 8 hours ago

        Yes, but you would be dead before it could affect your quality of life, so it's unimpactful. The former can very much impact your life.

        • thunky 3 hours ago

          Glib take. I think most would rather not be killed given the choice. Especially if they have kids or others that rely on them.

        • kelvinjps10 7 hours ago

          The fear itself of that happening is impactful, and they know that and will use it

  • victorio 12 hours ago

    The way the Anthropic statement was written really stood out to me: how they position themselves in favour of surveillance of foreign countries, or of fully autonomous weapons, so long as they don't threaten US citizens' lives.

    I wonder if this is how some non-trivial share of Americans think, or if it was just worded like that to try to appeal to the "most radical patriots".

    • hnfong 8 hours ago

      I'm pretty neutral in this fiasco, but if a company is willing to consider *in principle* providing services to the *Department of War*, they'd better be OK with their services being used to conduct surveillance or kill people of other countries...

      I think war is bad and generally a stupid thing to do, but my point is that if they were negotiating terms with the department at all, it's really a given they'd be OK with the stuff you took issue with.

  • davesque 2 hours ago

    I don't think it will feel even remotely tolerable in the US. I've been heavily critical of Trump on a regular basis on the public internet ever since he showed up 10 years ago. I doubt a government surveillance AI would miss this. Of course, there are probably millions of people like me, but given the behavior of the government recently, I really have to wonder what they might do to people like me once we've been put on a list.

  • ozgung 12 hours ago

    The bad news for the American people is that the "others" are pretty good at these technologies. When I read an important AI paper, chances are all the names on it are non-American, even for papers from American labs. In a real war, this becomes problematic.

    Every nation has some bias, but I think Americans have power poisoning from being the dominant power for so long. They think they are entitled to do anything and believe they are the good guys in history. Well...

    • lostlogin 5 hours ago

      What’s an American name?

      I thought the US was a country of immigrants (or was before it started hunting them)?

    • mlrtime 11 hours ago

      When you look at the world as an action movie with good and bad guys, you're going to have a pretty bad time.

      There are only good/bad people for moments in time. Some are good for longer than others.

      But I get it, anti-American sentiment is very popular right now.

      • kakacik 5 hours ago

        How else do you suggest common folks are supposed to view the world, or, well, anything?

        Americans do the same, hence the whole world got Trump. 95% of the world isn't the US, so such logic is even easier for almost all of mankind: is the US a force for good or for evil? Different places would give you different answers, and most Americans would not like the actual spread these days.

  • LudwigNagasena 6 hours ago

    It’s especially ironic considering the title and the fact that many employees are not US citizens.

thimabi 21 hours ago

The problem with forcing public policy on companies is that companies are ultimately made from individuals, and surely you can’t force public policy down people’s throats.

I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend over to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.

  • timr 21 hours ago

    I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.

    You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?

    Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.

    • inkysigma 21 hours ago

      I think the bigger insanity here is labeling Anthropic a supply chain risk. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.

      • ted_dunning 20 hours ago

        It means that all companies contracting with the government have to certify that they don't use Anthropic products at all. Not just in the products being offered to the government.

        This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.

        • ipaddr 17 hours ago

          Microsoft, Azure, AWS, Nvidia and IBM all have deals with other providers for AI. That itself doesn't move the needle.

          • Nevermark 16 hours ago

            I think the point is that would be catastrophic for Anthropic.

            • ekianjo 15 hours ago

              Who cares about Anthropic? These are the guys pushing for regulations to prevent people from using local models. The earlier they are gone, the better.

              • etchalon 15 hours ago

                "First they came for Anthropic, and I said nothing because fuck those guys I guess."

                • fauigerzigerk 13 hours ago

                  First they came for Anthropic in spite of the fact that Anthropic tried so hard to make them come for local models first.

          • scarmig 16 hours ago

            Going by what Hegseth said, it bans them from relationships or partnering with Anthropic at all. No renting or selling GPUs to them; no allowing software engineers to use Claude Code; no serving Anthropic models from their clouds. Probably have to give up investments; Amazon alone has invested like $10B in Anthropic.

            • direwolf20 9 hours ago

              It bans them from using all open source software unless they have signed an agreement with the developer to prohibit use of Claude Code.

              • kelvinjps10 6 hours ago

                What open source software ? Anthropic doesn't make open source software?

                • direwolf20 6 hours ago

                  All open source software, because the developers might use Claude Code.

          • Perz1val 13 hours ago

            Nvidia could also say no; then the government would have no choice but to yield or go without AI at all.

      • ef2efe 20 hours ago

        It's a government department signalling who's boss.

      • timr 20 hours ago

        > It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

        This is literally the mechanism by which the DoD does what you're suggesting.

        Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.

        Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.

        • tshaddox 20 hours ago

          That doesn’t sound right. Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.

          • snickerbockers 17 hours ago

            Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.

            • Nevermark 16 hours ago

              Let’s put it this way, The DoD is buying pencils from a company. Should that company be prohibited from using Claude?

              You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.

              The DoD can already sensibly require providers of systems to not incorporate certain companies components. Or restrict them to only using components from a list of vetted suppliers.

              All without banning entire companies from uses that are unrelated to what the DoD purchases, or that aren't a component of anything it buys.

            • arw0n 16 hours ago

              There seems to be a massive misunderstanding here - I'm not sure on whose side. In my understanding, if the DoD orders an autonomous drone, it would probably write in the ITT that the drone needs to be capable of doing autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria, and cannot reasonably join the bid?

              What the declaration of supply chain risk does, though, is that nobody at Lockheed can use Anthropic in any way without risking exclusion from any DoD bids. This effectively loses Anthropic half or more of the business in the US.

              And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?

              • skissane 15 hours ago

                > Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?

                Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?

                And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.

                The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically classified computer systems/services contracts haven't contained field of use restrictions on classified computer systems.

                If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?

                • fauigerzigerk 13 hours ago

                  Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.

                  That's not the same thing as delivering a weapon that has a certain capability but then putting policy restrictions on its use, which is what your comparison suggests.

                  The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.

                  • skissane 12 hours ago

                    Let me put it this way–if Boeing is developing a new missile, and they say to the Pentagon–"this missile can't be used yet, it isn't safe"–and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now"–how does Boeing respond?

                    I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.

                    Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.

                    • Atreiden 9 hours ago

                      Why are we calibrating our ethical barometer against the actions of existing companies and DoD contractors? The military-industrial apparatus has been insane for far too long, as Eisenhower warned.

                      When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?

                      The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.

                      • snickerbockers 4 hours ago

                        There are two separate aspects to this case.

                        * autonomous weapons systems

                        * private defense contractor leverages control over products it has already sold to set military doctrine.

                        The second one is at least as important as the first, because handing over our defense capabilities to a private entity accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved, but the solution cannot be to normalize the second problem.

            • 9dev 16 hours ago

              But parent is right, both Lockheed and the pencil maker will have to cease working with Anthropic over this.

          • timr 20 hours ago

            > Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.

            Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because a) the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.

            Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.

            • fooster 20 hours ago

              MIGHT be overreach to call this a supply chain risk?!? That is absolutely ludicrous.

              • timr 20 hours ago

                To quote one of the greatest movies of all time: That’s just, like, your opinion, man.

        • dyslexit 20 hours ago

          You're making it sound like this is commonly practiced and a standard procedure for the DoD, yet according to Anthropic,

          >Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.

          Some very brief googling also confirmed this for me too.

          >Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.

          This statement misses the point. The political punishment of disallowing all US agencies and government contractors from using Anthropic for _any_ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it a "DoD vendor exclusion list" or whatever other placating term doesn't change the action.

          • snickerbockers 17 hours ago

            >an unprecedented action

            it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma. Just because Silicon Valley gets away with bullying the consumer market with mandatory automatic updates and constantly morphing EULAs doesn't mean they're entitled to take that attitude with them when they try to join the military-industrial complex. Actually, they shouldn't even be entitled to take that attitude in the consumer market, but sadly that battle was lost a long time ago.

            >for _any_ purpose

            they're allowed to use it for any purpose not related to a government contract.

            • scarmig 16 hours ago

              > it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma

              That is a deeply deceptive description of what happened. Anthropic was clear from the beginning of the contract about the limitations of Claude; the military reneged; and beyond cancelling the contract with Anthropic (fair enough), they are retaliating in an attempt to destroy its business by threatening any other company that does business with Anthropic.

              • snickerbockers 7 hours ago

                >Anthropic was clear from the beginning of the contract the limitations of Claude

                No, that's not what they said.

                "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now".

            • jbritton 17 hours ago

              It’s not clear to me that the AI itself will refuse. You could build a system where the AI is asked whether an image matches a pattern, and the true/false result is fed to a different system that fires a missile. Building such a system would violate the contract, but that doesn’t prevent it from being built if you don’t mind breaking a contract.

        • inkysigma 20 hours ago

          I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state: we require models to be available for AI murder drones or whatever. Anthropic then can't bid, and there's no need to designate them a supply chain risk.

          • skeledrew 20 hours ago

            > Anthropic then can't bid

            Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid, AND to give them unrestricted access.

            • 9dev 16 hours ago

              And yet Anthropic is free to choose who to do business with, including the government. There are countless companies who have exclusions for certain applications, but that does not make them a supply chain risk.

      • snickerbockers 17 hours ago

        > It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

        But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.

        • tyre 17 hours ago

          Supply chain risk is not meant for this. The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.

          It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."

          Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.

          • nickysielicki 16 hours ago

            > The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.

            You might be completely right about their real motivations, but try to steelman the other side.

            What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.

            Anthropic engaged in a process where they take a model that is perfectly capable of writing that object tracking code, and they try to instill a sense of restraint in it through RLHF. Suppose Opus 6.7 comes out and has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.

            Is this a bit far fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say, Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.

            • etchalon 15 hours ago

              I would hope the DoD would test things before using them in the theater of war.

              • nickysielicki 4 hours ago

                But there will always be deficiencies in testing, and regardless, the point is that Anthropic is intentionally introducing behavior into their models that increases the chance of defects, specifically as they pertain to defense.

                The DoD has a right to avoid such models, and to demand that their subcontractors do as well.

                It’s like saying “well I’d hope Boeing would test the airplane before flying it” in response to learning that Boeing’s engineering team intentionally weakened the wing spar because they think planes shouldn’t fly too fast. Yeah, testing might catch the specific failure mode. But the fact that your vendor is deliberately working against your requirements is a supply chain problem regardless of how good your test coverage is.

          • timr 15 hours ago

            The rule in question is exactly meant for “this”, where “this” equals ”a complete ban on use of the product in any part of the government supply chain”. That’s why it has the name that it has. The rule itself has not been misconstrued.

            You’re really trying to complain that the use of the rule is inappropriate here, which may be true, but is far more a matter of opinion than anything else.

            • tyre 6 hours ago

              You keep trying to say this all over these comments but this isn’t how the law works, at all.

              I fully understand that they are using it to ban things from the supply chain. The law, however, is not “first find the effect you want, then find a law that results in that, then accuse them of that.”

              You can’t say someone murdered someone just because you want to put them in jail. You can’t use a law for banning supply chain risks just because you want to ban them from the supply chain.

              This isn’t idle opinion. Read the law.

          • snickerbockers 17 hours ago

            It doesn't harm national security, but only so long as it's not in the supply-chain. They can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said their products will be able to refuse to carry out certain orders by their own autonomous judgement.

            • praxulus 17 hours ago

              The government can refuse to buy a fighter jet that runs software they don't want.

              Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses claude to write emails?

            • 8n4vidtmkvmk 16 hours ago

              That's not what Anthropic said. They said their products won't fire autonomously, not that they will refuse when given an order by a human.

            • 9dev 16 hours ago

              I’m not sure if you deliberately choose to not understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit, it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all. A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation because a company refuses to throw its values overboard when the government says so.

    • galleywest200 21 hours ago

      The government declaring a domestic company as a supply chain threat is a tad more than “refusing to do business” don’t you think?

      • timr 21 hours ago

        [flagged]

        • adrr 20 hours ago

          It stops anyone with government contracts from using Anthropic, not just those bidding on government contracts.

          • timr 20 hours ago

            [flagged]

            • ted_dunning 20 hours ago

              No. It is much more than this.

              If I sell red widgets that I make by hand to the government, I won't be allowed to use Anthropic to help me write my web-site.

              • timr 19 hours ago

                You’re just restating the implication of the rule, but the rule is as I stated. That’s the point of having such a rule.

                • clhodapp 19 hours ago

                  As you said: focus on what it does.

                  What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.

                  • timr 10 hours ago

                    > What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.

                    If Anthropic “needs” the government to not have this rule, then perhaps they had a losing hand, and they overplayed it.

                    I don’t agree with you and think you’re being melodramatic, but if you are right, that’s my response.

                    • clhodapp 3 hours ago

                      I don't think any business can survive being told that they can't buy from their major suppliers or sell to major customers for very long.

            • MrJohz 15 hours ago

              But Anthropic can't be a winning bidder, can they? They're specifically saying they won't offer certain services that the US Gov wants. Therefore they de facto fail any bid that requires them to offer those services. (And from Anthropic's side, it sounds like they're also refusing to bid for those contracts.)

              Is that not sufficient here?

        • geysersam 16 hours ago

          No domestic company has ever before been declared a supply chain risk. If this is the normal way of excluding a supplier from bidding, are you saying the DoD has never before excluded a domestic supplier from bidding?

          • nickysielicki 16 hours ago

            That’s because no company selling weapons to the government has ever been brazen enough to tell the government how it can and cannot use its purchase. It’s unprecedented because most companies that sell to the government are publicly traded and have a board that would never let this happen. It’s unprecedented because Anthropic is behaving like a reckless startup.

            That’s what they will argue, anyway.

            • etchalon 15 hours ago

              This is just factually incorrect.

              To begin with, the existing contract included the language on usage.

              Other companies also have such language about usage. It's fairly standard, and is little more than licensing terms.

              The idea this is unprecedented is some PR talking point nonsense.

              • nickysielicki 4 hours ago

                > the existing contract included the language on usage. Other companies also have such language about usage.

                The existing contract is only a few dozen months old. It didn’t hold up to scrutiny under real world usage of the service. The government wants to change the contract. This is not the kill shot you think it is. It’s totally normal for agreements to evolve. The government is saying it needs to evolve. This is all happening rapidly and it’s irrelevant that the government agreed to similar terms with OpenAI as well. That agreement will also need to evolve. But this alone doesn’t give Anthropic any material legal challenge. The courts understand bureaucracy moves slowly better than anyone else, and won’t read this apparent inconsistency the same way you are.

        • AlexCoventry 20 hours ago

          That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development. No one who wants to work with the US government would be able to have Claude on their critical path.

          > (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:

          https://www.acquisition.gov/far/52.204-30

          • timr 20 hours ago

            > That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development.

            "Misinformation" does not mean "facts I don't like".

            > No one who wants to work with the US government would be able to have Claude on their critical path.

            Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.

            • 9dev 16 hours ago

              What an absurd stance. So this is okay because the arbitrary rule they applied to retaliate says so?

              Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. But instead, they actively went to torch the town and salt the earth, so nothing else may grow.

              • timr 9 hours ago

                > So this is okay because the arbitrary rule they applied to retaliate says so?

                No.

                It honestly doesn’t take much of a charitable leap to see the argument here: AI is uniquely able (for software) to reject, undermine, or otherwise contradict the goals of the user based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.

                Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.

                I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.

                Or maybe they’re being petty. I don’t know (and again: neither do you!) but I can’t rule out the reasonable argument, so I don’t.

                • 9dev 8 hours ago

                  You're acting as if this was about the DoD cancelling their contracts with Anthropic over their unwillingness to lift constraints from their product that are unacceptable in a military application—which would be absolutely fair and justified, even if the specific clauses they are hung up on should definitely raise eyebrows. They could just exclude Anthropic from tenders on AI products as unsuitable for the intended use case.

                  But that is not what has happened here: The DoD is declaring Anthropic economic Ice-Nine for any agency, contractor, or supplier of an agency. That is an awful lot of possible customers for Anthropic, and right now, nobody knows if it is an economic death sentence.

                  So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.

            • geysersam 16 hours ago

              So other parts of the government are allowed to work with companies that have been determined to be "supply chain risks"? That sounds unlikely.

        • tclancy 20 hours ago

          So tell us all the other similar times this has been done. Why are you so invested in some drunk and his mob family being right?

    • thimabi 21 hours ago

      > The Department of War is threatening to […] Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

      This issue is about more than the government blacklisting a company for government procurement purposes.

      From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.

      If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.

    • syllogism 13 hours ago

      They're labelling Anthropic a supply chain risk, without even the pretense that this is in fact true. They're perfectly content to use the tool _themselves_, but they claim that an unwillingness to sign whatever ToS DoW asks marks the company a traitor that should be blacklisted from the economy.

    • jakeydus 21 hours ago

      The government is doing far more than “refusing to do business” here.

    • thereitgoes456 21 hours ago

      The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?

    • direwolf20 3 hours ago

      One of the options they're discussing, which is legal according to this law, is to simply force Anthropic to do what they want. As in Anthropic will be committing a felony if they don't do what the DoKLoP wants, and the CEO will go to jail and be replaced by someone who will.

    • jwpapi 20 hours ago

      I mean Secretary of War can not act any other way to be honest. It’s just a fucked up situation.

      • ted_dunning 20 hours ago

        There is no Secretary of War. The name of the Defense Department is set by statute, which has not been changed, regardless of Pete Hegseth's cosplay desires.

  • gmerc 14 hours ago

    Sweet summer child, the purpose of government is a monopoly on forcing things down people's throats. When people lose control of their government, that monopoly doesn't go away, especially when the Don running the show has blackmail on every influential person in society, taken from a decades-long intelligence operation by offing its leader.

    A vast number of people in positions of responsibility right now have their lives at the mercy of the redaction pen and are ultimately going to do whatever it takes to keep that pen out of the "wrong hands".

  • piskov 21 hours ago

    > I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has

    And where would they emigrate? Russia? China? UAE? :-)

    • EdNutting 21 hours ago

      The UK and Europe welcome the US Footgun Operation. Plenty of opportunities for those top researchers and engineers over here.

      The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).

      • dmix 21 hours ago

        The EU and UK are a long way from attracting top AI talent purely on opportunity and monetary terms.

        Not to mention the UK is arguably further down the mass-surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws, which was made clear during the Snowden years; they’ve had Flock-style cameras forever, and they have an anti-encryption law pitched seemingly yearly.

        I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.

        • EdNutting 21 hours ago

          For sure we’re not currently attracting the talent. There’s more to that than just money, but money is a significant factor. When it comes to compensation, AI is too broad a category to have a meaningful debate. Hardware or software or mathematics or what kind of person? Etc.

          I’m not gonna dispute the UK being further down some parts of the road.

          Not sure what you’d count as top engineers, but I know enough people who have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.

        • graemep 10 hours ago

          Google's Deepmind is UK based.

          It is American owned now but it clearly hired enough talent for Google to buy it.

        • reaperducer 21 hours ago

          > The EU and UK is a long way from attracting top AI talent purely from opportunity and monetary terms.

          Which is why people are talking about this -- it's about ideology now.

          You may personally be motivated solely by money. Not everybody is you.

          • dmix 20 hours ago

            I’m not an AI engineer but it’s not hard to imagine why some bright talent would want to work at the most exciting AI companies in the US while also making 3-10x what they’d make in Europe.

            Ideology is easy to throw around for internet comments, but working on the cutting-edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan project; I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, the huge funding, and the company of their peers.

            • EdNutting 20 hours ago

              See my other comments around here. This idea that salaries in the US are so much higher than Europe for all these top AI roles just isn’t true. Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.

              This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).

              And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.

              • dmix 20 hours ago

                > I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate

                Are they working remotely for US companies? In Canada that’s very much still the case everywhere you look

                > Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.

                I assumed this discussion was about rejecting working for US companies who would be susceptible to the executive branch’s bullying, not whether you can make a US-tier salary off American companies while not living in America. If you’re doing that you might as well live in America among the other talent and maximize your opportunities.

                • EdNutting 19 hours ago

                  No, it’s a counterpoint on salaries… “Even the American companies” ie they wouldn’t have to open offices here, nor would they have to pay high salaries, to compete for talent if everyone they wanted was in the US or could be so easily attracted to move to the US. The point is clearly things aren’t so one-sided as people seem to think.

        • busko 18 hours ago

          Exactly. Attracting talent is not the same as having talent.

          https://worldpopulationreview.com/country-rankings/education...

          You attract talent for the same reasons China attracts sales; at the cost of your very own rights.

          Look at the towns suffering around data centres for a start. The rest of us are happy to pay for what you'll do to yourselves.

      • piskov 21 hours ago

        Do the UK and Europe have hardware manufacturing for those researchers to work with once the US imposes GPU export restrictions on them at the first whiff of competition/threat?

        • EdNutting 21 hours ago

          Yes.

          And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.

          Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).

          • piskov 21 hours ago

            > And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC

            See what happened to Russian Baikal production on TSMC

            • EdNutting 20 hours ago

              You mean because of the international sanctions that needed Taiwanese, British and Dutch support to be effective?

              Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?

              Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?

              The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.

              • piskov 20 hours ago

                > Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil?

                Being built as in not operating yet?

                12 nm GPUs are what? Nvidia 1080/2060 level? Those top researchers mentioned would love to train on that. Also, how many GPUs would be made annually?

                Also, what about CPUs? You gonna use RISC-V? With what toolchain?

                Chinese could pull it off in a few years, yeah.

                EU? Nah. Started thinking about sovereignty too late compared to China

                • geysersam 16 hours ago

                  Things can change quickly. Give it a decade.

                  • EdNutting 12 hours ago

                    Nvidia uses RISC-V as the main controller cores in its GPUs. They’re also exploring replacing their Arm CPU with RISC-V I hear.

                    Meta recently bought Rivos in a huge show of confidence for RISC-V across processor types for server class.

                    As for fabrication, the poster above has a lot to learn about both the US’ current weak at-home capabilities (and everything they’re building relies on European suppliers for all the key technology and machines etc.) and about the scaling properties of sub-14nm nodes. Any export controls or sanctions to prevent Europe using American-designed Taiwan-manufactured chips would result in America being cut off from everything they need to build fabs on US soil. It would backfire massively.

                    Lastly, the UK and EU already have cutting edge AI Inference chips, and the ones for training are coming this year. Full stack integration (server box, racks, etc) is also being developed this year. We’re not a decade away from doing this - we’re 18 months away. Deployment at scale will take longer - not having Nvidia as competition would be a huge boon for that haha!

        • axus 21 hours ago

          The GPUs and AIUs aren't being manufactured in the US.

        • sho_hn 21 hours ago

          The EUV and other factory equipment everyone's using is predominantly European. High-end testing tools used in R&D are largely European.

          The fabs aren't, and that is no small thing. The tech stack is there though.

          It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.

          • EdNutting 21 hours ago

            I sometimes wonder whether people realise which country ASML is based in, and which country their major suppliers are in (e.g. optics: Germany)

      • SauntSolaire 21 hours ago

        To make 1/10th the salary they're making now?

        • EdNutting 21 hours ago

          You seem to have a very ill-informed view of UK/EU salaries in this particular sector; And also: yeah, people take salary hits to go do things they believe in (this is like, the entire premise of the underpaid American startup founder model) - it should come as no surprise that people are willing to forgo pay for reasons other than just building their own business / making themselves personally wealthy.

          • SauntSolaire 20 hours ago

            We're talking about the "brightest scientists and engineers" in the AI sector; you may be underestimating US salaries for the people that refers to.

            And no, working remotely for US companies doesn't count.

        • lII1lIlI11ll 7 hours ago

          > To make 1/10th the salary they're making now?

          Yeah, and also be slapped with some unrealized capital gains tax on assets they acquired while working in the US...

        • lemontheme 15 hours ago

          First, the difference isn’t that big in the economically stronger EU countries. Second, you need to factor in cost of living, which by most accounts is lower. Third, meaningful labor laws and a shared appreciation for work-life balance. And finally, to continue the sweeping generalizations, while we celebrate business acumen, we don’t fetishize wealth. People who flaunt money get made fun of, as do sigma grindset hustle bros.

          I’ll take a pay cut any day for the ethos of the EU.

          • Ray20 11 hours ago

            > First, the difference isn’t that big in the economically stronger EU countries

            It's exactly that big. It's not as big for people with low qualifications, but the more highly qualified the specialist, the greater the difference.

            > Second, you need to factor in cost of living, which by most accounts is lower.

            But here the difference really isn't that big.

            > Third, meaningful labor laws and a shared appreciation for work-life balance.

            This works more against the EU rather than for it. Peak tech skills aren't usually acquired through lazing around and following meaningful labor laws, even in the EU.

            > while we celebrate business acumen, we don’t fetishize wealth

            An excuse for poor people (who still fetishize wealth)

        • readthenotes1 21 hours ago

          That much?

          • ambicapter 21 hours ago

            No, of course not.

            • SauntSolaire 20 hours ago

              For the "brightest scientists and engineers" in the AI sector? I wouldn't be so sure.

      • thimabi 21 hours ago

        I agree. And even if those workers stay in the U.S., there’s absolutely no guarantee that they’ll do their best to favor the government’s interests — quite the opposite, if anything.

        At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.

    • zymhan 21 hours ago

      Well that's quite a leap to make. Plenty of room in between those options.

    • csomar 20 hours ago

      > ... UAE? :-)

      At least you are not paying taxes for the things you don't agree on. It's indeed a strange time we are living in.

5o1ecist 18 hours ago

> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

This is a trap. Two, I guess, but let's take the first one:

Domestic mass surveillance. Domestic.

Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...

Expanding:

> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.

Banning domestic mass surveillance is irrelevant.

The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.

This renders laws that prevent The State from spying on its own citizens irrelevant. They serve the purpose of being evidence of mass manipulation.

  • ozgung 14 hours ago

    You all want to feel safe just because you are a US citizen but this is a mass surveillance technology on global level. It’s nothing like some secret agent spying on a KGB asset in Berlin like in the old days. We are writing on HN, are we on American soil? Not really. No one asked me for passport. This is not a “domestic” space. Everything here can be automatically and legally spied on. And this applies to everything digital. Spy bots don’t have the concept of “domestic” or any way to identify citizenship. And if Google or TikTok can spy on you, your government and ChatGPT/Grok’s agentic secret agents can definitely spy on you. I’m sure they have better loopholes than the Eyes thing, if they really need one.

    • direwolf20 3 hours ago

      Spying pertains to actual assets, not cyberspace. They can seize servers and tap fiber links. They can issue subpoenas against people and companies. They can arrest people. They can't spy on the color blue, or the concept of Hacker News. They can spy on the Hacker News server, Y Combinator, or dang.

  • eecc 15 hours ago

    It is relevant. Anthropic would have argued the US military could not use its tools to process data gathered by foreign agencies when it applied to US citizens or soil.

    So there you have it

  • gmerc 14 hours ago

    > We hope

    No. Hope is not a strategy. Too many of the techno-optimist future narratives are just a coat over the increasingly screaming cognitive dissonance, as we watch what keeps us civil, and off each other's throats, decline, smothered by the rise of the broligarchy.

    What's happening here is not about AI. It's a loyalty test, administered to every major actor in the economy, the more influential, the more ruthless and earlier.

    Your core values, in exchange for taxpayer money access and loyalty to the Don, an offer few can refuse.

    And the choice will come for everyone. It's a distillation attack to filter the loyal:

    - DEI for Grants
    - Your officer's oath to not kill civilians by word of your leader for continued career
    - AI Safety for non-blacklisting
    - Your immigrant employee's location for us not harassing your offices in person
    - Your trans neighbour shipped to a reeducation camp and gender reassignment for the safety of your family.

    Becoming complicit is the ultimate loyalty

    So stop hope. Stop asking. Demand, Force, Resist.

    ```
    Do not go gentle into that long night,
    The righteous should burn and rave at close of day;
    Rage, rage against the dying of the light
    ```

  • supriyo-biswas 13 hours ago

    The point that I've not seen anyone make: do you even need LLMs for domestic surveillance? I can grab a copy of EmbeddingGemma or Qwen3-embedding or a similar model and do semantic clustering of existing data, since the "retrieval" is the most important part for such applications, not its integration into an LLM.
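
    To make that concrete, here is a minimal, self-contained sketch of the kind of semantic clustering being described, using toy 3-d vectors in place of the output of a real embedding model (the function names and the threshold are illustrative, not any particular library's API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(embeddings, threshold=0.9):
    # Greedy single-pass clustering: each vector joins the first
    # cluster whose anchor vector it is similar enough to;
    # otherwise it starts a new cluster of its own.
    clusters = []  # list of (anchor_vector, member_indices) pairs
    for i, vec in enumerate(embeddings):
        for anchor, members in clusters:
            if cosine(vec, anchor) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

# Toy "embeddings": two near-duplicate documents and one outlier.
docs = [
    [0.90, 0.10, 0.00],
    [0.85, 0.15, 0.05],
    [0.00, 0.10, 0.95],
]
print(cluster(docs))  # the first two documents land in one cluster
```

    With real embeddings from a model like the ones named above, the vectors would be a few hundred dimensions rather than three, but the retrieval-side logic is the same, and no generative LLM is involved at any step.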

    • 5o1ecist 2 hours ago

      Big Brother is observer, judge and executioner at the same time.

  • pasquinelli 15 hours ago

    if it doesn't matter, why is the DoD pushing for it?

    • dgellow 15 hours ago

      Power play? My understanding is that they want to see companies bend the knee publicly

    • az226 14 hours ago

      Because they want to do domestic mass surveillance.

  • ChrisKnott 16 hours ago

    The citation for your quote appears to be an unsourced Reddit post.

    The agreement at the heart of 5 Eyes is to not surveil the other nations - this must be one of the most persistently misunderstood facts among techies (probably why AI spits it out)

    • dijit 16 hours ago

      Unless there’s new information, this is exactly what the Snowden leaks exposed.

      Snowden wasn’t showing the world the NSA surveillance systems against them; he was trying to show that the US was illegally spying on its own citizens by leveraging the five-eyes countries to collect and aggregate the data on their behalf.

      • b112 14 hours ago

        I was always baffled by this "revelation". Everyone has always known about the five-eyes arrangement. It was common knowledge when I was growing up in the 70s. It wasn't new info.

        There were a lot of things Snowden revealed, but most assuredly it was also about spying on US citizens: the NSA directly wiretapping people, even in cases when all communication was domestic; the NSA working to bypass security via routers diverted during shipping to Google, Facebook, and others, with backdoors installed, thus compromising their infrastructure.

        Back to the 5eyes, there is a difference in terms of scope and scale, when it comes to a foreign country spying on your citizens, and you doing it. The scope is entirely different, the scale, the capability.

        It does matter whether it is 5eyes doing it, or whether it is domestic.

        Now, does this stance matter overall? I don't know. It's a nice moral stance, I think. Is it functionally realistic?

        I just don't know.

    • athrowaway3z 15 hours ago

      Who are you going to cite?

      Snowden, as a very rare exception, did show clearly that the government agencies are quite capable of not providing anything to cite.

    • Intermernet 16 hours ago

      The agreement, conveniently, isn't legally binding. It's a gentleman's agreement between utter scoundrels, formed to give a semblance of trustworthiness.

      As an Australian, I wouldn't trust it at all. The US government has already asked the Australian government for highly expanded information on Australian citizens, and that's above the table.

      Stop believing what these people are telling you. They have an awful track record, and the people making the statements now are even worse than the previous people.

  • rockskon 13 hours ago

    There are obviously gaps in the domestic mass surveillance they've gotten from allies, or else they wouldn't care so much about using Anthropic for it.

  • rdtsc 16 hours ago

    That's always been the loophole. But it involved an extra step so they are just trying to get rid of that one annoyance.

    Here is an interesting thing to think about: which country spies on Americans the most, and how? Are there New Zealand commandos sneaking around the shores tapping cables? Moles working in AT&T for the Canadian government? What happens if one of those individuals gets caught? Are they quietly allowed to leave, and if they commit any crimes, do the charges get erased magically? If that doesn't happen, there is a danger they'll grab our spies in their countries in turn. Or they just blatantly pass around lists of who works for whom, so they don't interfere with each other, since interference would preclude getting the data back through the loop to the NSA.

    There is of course another loophole, and that is private entities collecting data. The Constitution doesn't say anything about that, so the government figures it's fair game if they just pay a company to collect the data and then query it later. They didn't collect it, so it's not "spying".

    • RobotToaster 15 hours ago

      I imagine they're officially sent in some "diplomatic" capacity.

      Anne Sacoolas (the woman who mowed down a British teenager with her car, but escaped because she had diplomatic immunity) turned out to be a senior CIA spy.

    • segmondy 16 hours ago

      Not just that, but with how unfriendly we have been to the world, there's no guarantee that they will keep sharing as they have in the past.

      • permo-w 15 hours ago

        This is one thing I cannot fault Trump on. He's really succeeded in reducing European reliance on, subservience to, and respect for the USA. Now if we can stand on our own and not just swing further towards China instead, he'll have produced an absolute miracle

        • pasquinelli 15 hours ago

          > He's really succeeded in reducing European reliance on, subservience to, and respect for the USA.

          is that so?

    • permo-w 15 hours ago

      It's amusing to imagine spies from puny former British colonies snooping around the AT&T offices in trench coats and fedoras, but if this is the case, more likely they just share access to data from remote systems

      • busko 15 hours ago

        You should definitely ask your local homeless veteran for their opinions of other forces. I highly doubt many will have anything but praise to express.

        When these things are done right, you won't hear about it.

  • mellosouls 14 hours ago

    Although this comment focuses on "domestic", because it highlights workarounds I read it as reinforcing the letter's tone-deaf implication that using the models to spy on non-Americans is OK.

ArchieScrivener 20 hours ago

The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would have net-negative growth.

Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war fighting capabilities.

Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries. (Taiwan and Thailand)

  • aguyonhackern 20 hours ago

    The US would not be net negative growth without government spending. Other components of GDP grow a lot, outside of recessions.

    Sure, if you immediately stopped government spending today we'd have negative growth today, but that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of pretty much any economy, or of anything that's growing from which you decided to remove a chunk of the base.

    And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.
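The base-effect point above can be illustrated with a toy calculation. The component shares below are illustrative assumptions, not actual GDP figures: even when every private component keeps growing, zeroing out the government component of last year's base produces a negative measured growth rate.

```python
# Toy illustration of the GDP base effect (hypothetical numbers, not real data).
# GDP = C + I + G + NX (consumption, investment, government spending, net exports).
def gdp(c, i, g, nx):
    return c + i + g + nx

# Base year: components sum to 100.0 units.
year1 = gdp(c=70.0, i=18.0, g=17.0, nx=-5.0)

# Next year: private components grow 2%, but government spending is removed entirely.
year2 = gdp(c=70.0 * 1.02, i=18.0 * 1.02, g=0.0, nx=-5.0 * 1.02)

# Measured growth is negative, even though C, I, and NX all grew.
growth = (year2 - year1) / year1
print(round(year1, 2), round(year2, 2), round(growth, 4))
```

The negative growth rate here comes entirely from removing a chunk of the prior year's base, not from any slowdown in the remaining components.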

    • conductr 19 hours ago

      The Covid shutdown should have killed our economy; nothing short of government spending prevented that.

      So many people in the US live paycheck to paycheck that the Covid lockdowns, without government spending, would likely have devolved into zombie-apocalypse territory, with hungry people ransacking homes in more affluent neighborhoods (yes, even occupied homes). This is also why people bought lots of guns and ammo during Covid. You may think those people are crackpots, but I feel we actually got very close to it happening.

      My local food bank (big city) ran out of supplies just as they announced the first waves of stimulus or whatever they called it (the weekly checks). So I’m pretty sure we were literally only days away from that being a reality.

      • ipaddr 16 hours ago

        Do you think the food bank gives you all of your meals everyday? One day not open and people are eating each other.

        They wouldn't ransack homes in rich neighbourhoods for food, for a million reasons (too far, too weak, roads are closed, rich homes have security, rich people have as much food at home as, or less than, an average person). They would break into the supermarkets first, then each other's homes around them, before what was left would organize and go searching.

        The checks helped and were the right call but we weren't close to a zombie outbreak.

        • conductr 15 hours ago

          I think it would devolve quickly, and probably supermarkets would fall first, but let's not pretend you know exactly how it would play out after that. I live in a large metro and supermarkets run empty a few times a year (usually weather panics), so that isn't a lasting source of loot. I wasn't pretending I knew exactly who would get targeted first, just that I know I'm the type of target I describe; it's for the same reason my neighborhood is a destination on Halloween: full-sized candy bars.

          Would love for you to tell me how close we were from it or how many days without food/work/income a large portion of our population could endure before they “would organize and go searching” - which by the way is exactly what I’m talking about.

  • Humorist2290 12 hours ago

    At some point in the not so distant future, it seems entirely likely for the US to bail out OpenAI / Nvidia / etc using national security as justification. Democrats and Republicans really can get along as long as their donors get what they want. No matter how the regime changes in the coming years, the DoD will keep getting funding, and that funding will increasingly go to vendors who don't mind killing people.

    Eisenhower warned of the military-industrial complex, and 60 years later it's eating everyone's lunch.

  • duped 20 hours ago

    > who are by far the largest budgetary expense for the tax payer

    not even top 3

    • ArchieScrivener 13 hours ago

      You are 100% wrong. You listed entitlements. National Defense is half of all discretionary spending.

      Homeland Security is less than 1/6th the budget of DoD alone.

    • rustystump 19 hours ago

      Let me guess without looking up, debt interest, gov pension, medicare?

      • duped 18 hours ago

        Close, DHS, SSA, then Treasury.

  • jrflowers 13 hours ago

    >or admitting they spend money on routine BS with zero frontier war fighting capabilities.

    Trying to imagine somebody that doesn’t know that the military buys dumb stuff and for some reason a human doesn’t come to mind. I keep picturing a horse

  • csomar 20 hours ago

    > The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid.

    This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.

dang 21 hours ago

Here's the sequence (so far) in reverse order - did I miss any important threads?

Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)

I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)

President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)

Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)

Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)

The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)

Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)

The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)

US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)

Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)

wood_spirit 15 hours ago

The talk about declaring Anthropic a supply-chain security risk (which doesn't just remove it from the DoW but also from all the contractors and suppliers that supply the DoW) was also accompanied by a completely different threat: to declare it a national security need and take over the company.

Prediction: in time, OpenAI will be declared such to privatise profits but socialise losses

  • beng-nl 15 hours ago

    Interesting. George Hotz has said his motivation to start tinygrad was the worry that Nvidia would be nationalized.

    • goku12 14 hours ago

      There is just one rule. If they mention it, they'll do it.

    • KellyCriterion 13 hours ago

      This would pulverize the stock value then, right?

      Or would the government just buy the stock on the market?

  • datsci_est_2015 9 hours ago

    Horseshoe theory applied to nationalization of companies. It would be cathartic if it weren’t so grim.

davidw a day ago

"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.

  • gnarlouse 19 hours ago

    Mankind is doing what it does best at scale: sprinting mindlessly into problematic scenarios because the species is fragmented and has arbitrarily established concepts of groups defined by region, race, ideology, etc.

    As a species, this is just natural selection.

    • keybored 10 hours ago

      Sure. It’s just that sprinting is dictated by money.

      Money rules region, race, ideology, etc.

  • moogly 20 hours ago

    If they're truly principled, and these are true red lines, given no other recourse, I would be impressed if Anthropic decided to shut down the company. Won't happen, but I would be smashing that F key if they did.

    The other two definitely never would in a million years.

    • anigbrowl 18 hours ago

      If I had decision input at Anthropic I'd be giving serious consideration to reincorporating in the EU or Japan, and also doubling or tripling my personal legal and security budget.

      • paganel 14 hours ago

        They’ll go after their bank accounts and their financing, in effect killing them outright, no matter from where they’d be headquartered (other than China or Russia, that is). Also, EU and Japan would not risk their nuclear umbrella protection in order to defend the interest of an US company that is fighting the US Government, not in a million years.

    • plumthreads 19 hours ago

      Anthropic have a pretty progressive corporate governance structure, so there is a good argument that they will stay true to their principles. However, this will likely be the biggest test yet of how strong that governance structure is.

      • goku12 14 hours ago

        There is one tiny problem in your assessment. That statement was written by the employees of Google and OpenAI, in solidarity with their counterparts at Anthropic. It doesn't really matter what Anthropic does. We're doomed! (cue the dramatic music!)

  • voganmother42 21 hours ago

    Tech leaders are a joke

    • goku12 14 hours ago

      More like a nightmare. This isn't happening by accident. They aren't being opportunistic either. They're playing a game that they planned at least two decades ago. If the books they wrote and published openly aren't evidence enough, you can look at the Epstein files. Look past all the obvious horrific crimes in them, and you'll see the signs of their numerous interventions in society through large-scale social engineering that got us to the dystopia we're in now.

  • propagandist 20 hours ago

    Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.

  • elAhmo 19 hours ago

    So much for the hope with leaders such as Sam and Dario

  • medi8r 21 hours ago

    Needs a union. With strikes and all that jazz.

    • _bohm 19 hours ago

      I don't know why you're being downvoted. This letter is completely toothless, and what you're suggesting is literally the only thing that these people could do that would make a difference.

      • ngcazz 14 hours ago

        Hanging out in the streets on a Saturday is America's conception of a protest; do you think people with this sort of consciousness understand unionizing?

      • globular-toast 14 hours ago

        A lot of them have been brainwashed into believing unions are bad.

    • renewiltord 21 hours ago

      [flagged]

      • medi8r 21 hours ago

        Yeah, it would need to be a union run by its members. Maybe with a constitution.

        (Please edit comment to remove names in case they want them removed from the OP)

        • renewiltord 20 hours ago

          The other unions are also run by their members. And they had a constitution. It's just the truth that most people who join a union are trying to kick out minorities. And when the minorities band together and the majority bands together one of these bands is bigger than the other.

          And people like to flag kill the truth but it was a union who got the Koreans deported and it was a union that made it so the Chinese couldn't get citizenship. These are facts and the guys who would be their victims haven't forgotten it. Obviously the majority would like to hide this inconvenient truth using the tool this site offers to do that, but it doesn't change the truth, and these people know it.

largbae 18 hours ago

The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?

  • w4yai 17 hours ago

    There are other valid use cases than war for AI.

    • largbae 17 hours ago

      Of course there are. But once it exists, a technology will be used for all purposes. The choice is in the making, anything else is virtue signaling.

      • etchalon 15 hours ago

        One second, I have to go turn my stove off. It could be used to start a forest fire.

        • conductr 14 hours ago

          Not all products will get abused; there are better tools already (like matches/lighters/etc.) or there are just no good abusive use cases. Some products are just begging to be abused. You can't really tit-for-tat with a household appliance here; these straw men aren't even from the same planet.

        • largbae 15 hours ago

          That is not analogous to this petition.

    • tgv 15 hours ago

      Very few. Most use is a pure negative for society.

    • zppln 14 hours ago

      War will be a comparatively honest use of this technology compared to how the likes of Google will monetize it going forward.

    • pokstad 16 hours ago

      Then start your own company where you control the direction of the products. All these people make millions and only speak up after they are set for life.

  • keybored 10 hours ago

    I’m torn. On the one hand it’s nice that the rank and file take a stand against extreme overreach. On the other hand these rank and file scientists, engineers, whatever are fostering a technology which has so many at-best questionable effects on all of society.

    Idealists who “genuinely”[1] want to change the world “for the better”[1] will just move on to the next Interesting Problem if it ends up making the world worse.

    [1]https://news.ycombinator.com/item?id=47179649

pavel_lishin 7 hours ago

> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.

  • drewda 6 hours ago

    They put their names to their position publicly. That is meaningful action.

    • archydeb 6 hours ago

      Well, some did. I was surprised to see so many anonymous signatories.

      • layla5alive 5 hours ago

        Only 600 from Google and 93 from OpenAI? And many of those anonymous? Truly our industry is full of cowards and complicit people.

    • raw_anon_1111 6 hours ago

      They wrote a letter. Meaningless. How many are going to quit their highly compensated jobs over it?

  • robwwilliams 6 hours ago

    Quitting their jobs? How is that the pragmatic or effective response?

    • dr_kretyn 5 hours ago

      Quitting, no. Quiet quitting or internal turmoil could be beneficial, assuming these people meaningfully contributed in the first place; otherwise it's a good pretext to fire them for cause without any severance.

  • iso1631 7 hours ago

    Maybe their union will call a strike

    • ray_v 7 hours ago

      Ha! Good one!

    • adfm 6 hours ago

      You don’t need a union to quiet quit or throw a shoe.

Meekro 21 hours ago

I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?

  • yoyohello13 21 hours ago

    It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.

  • layer8 21 hours ago

    My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.

    • Meekro 21 hours ago

      I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294

      The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.

      • colonCapitalDee 18 hours ago

        There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.

        • Meekro 15 hours ago

          I'd encourage you to look up the Defense Production Act. Its powers are probably broad enough that the President could unilaterally force Anthropic to do this whether or not it wants to. It's the same logic that would allow him to force an auto manufacturer to produce tanks. And the law doesn't care whether we are in a crisis or not. It's enough that he determine (on his own) that this action is "necessary or appropriate to promote the national defense."

          However, it looks like Trump isn't going to go that route-- they're just going to add Anthropic to a no-buy list, and use a different AI provider.

          • trinsic2 6 hours ago

            We'll see where that goes.

      • markisus 19 hours ago

        Of course a contractor could not decide to unilaterally shut off their missile system, because that would be a contract violation.

        A contractor may try to negotiate that unilateral shut off ability with the government, and the government should refuse those terms based on democratic principles, as Luckey said.

        But suppose the contractor doesn’t want to give up that power. Is it okay for the government to not only reject the contract, but go a step further and label the contractor as a “supply chain risk?” It’s not clear that this part is still about upholding democratic principles. The term “supply chain risk” seems to have a very specific legal meaning. The government may not have the legal authority to make a supply chain risk designation in this case.

        • Meekro 15 hours ago

          It sounds like the "supply chain risk" designation is just about making sure that anyone who works with the DoD doesn't use them, so their code doesn't accidentally make it into any final products through some sub-sub-subcontractor. Since they've made it clear that they don't want to be a defense contractor (and accept the moral problems that go with it), the DoD is just making sure they don't inadvertently become one.

          • etchalon 15 hours ago

            That is not what is happening, and it's weird that people keep insisting that it is all that is happening.

      • jbritton 16 hours ago

        I think this is different. It's a statement that this product is not qualified to perform that function (autonomous killing decisions). I think it is pure madness to think AI is currently up to this task. I also think it should be a war crime, and that Congress should pass a law forbidding it.

        • Meekro 15 hours ago

          There seem to be two separate lines of thought in this conversation: first, that the AI tech isn't smart enough for us to trust it with autonomously killing people. Second, even if it was smart enough, maybe such weapons are immoral to produce?

          The first line of thought is probably true, but could change in the next 5 years-- so maybe we should be preparing for that?

          The second line of thought is something for democracies to argue about. It's interesting that so many people in this thread want to take this power away from democratic governments, and give it to a handful of billionaire tech executives.

          • trinsic2 6 hours ago

            What democratic government are we talking about? Surely you don't mean the U.S. We do not live in a democracy right now.

    • dataflow 20 hours ago

      > My understanding is that it’s about

      What is "it" in your comment?

      The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?

      • layer8 20 hours ago

        I was answering “What, then, is this really about?” By “this”, presumably they meant “the dispute”.

        • dataflow 20 hours ago

          The dispute is over the supply chain risk designation though, not over the refusal to sign a contract. If only the latter had happened, we wouldn't be talking here. You're explaining why the department wouldn't want contractors to dictate the terms of usage of their products and services (the latter), but not why this designation would be seen as necessary even in their own eyes (the former).

  • trinsic2 6 hours ago

    you mean beyond this: [0]

    >In 2025, reportedly Anthropic became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.

    [0]: https://news.ycombinator.com/item?id=47160226

davidmurdoch 7 hours ago

What is this supposed to do? OpenAI is already cozied up and in bed with Dept of War, they're already busy making lots of little surveillance babies.

  • marcd35 7 hours ago

    About as much as all the people who signed the petition to stop/slow the rate of AI advancement accomplished: nothing, other than pointing to it in the future when all has gone to shit and saying, "told you so".

culi 20 hours ago

Before you leave a comment about how meaningless this is unless they do XYZ,

please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers

doodlebugging 20 hours ago

The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.

All of this should remain a bridge too far, forever.

EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.

Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.

  • autoexec 19 hours ago

    The best way for government to fight that would be to remind those who refuse to comply with their demands that the government already knows exactly where they live, where they hang out, and that any one of them can also be targeted by a three letter agency or thrown into Guantánamo Bay. The government has been building and maintaining massive dossiers on everyone. They already have the ability to plant or fabricate whatever incriminating evidence they want. They already have the capability to jeopardize anyone, their personal assets, and their families and all of that could be turned against them if something goes haywire or where an outside adversary gains unauthorized access. The government isn't about to dismantle or abandon their entire domestic surveillance apparatus because of fear that it could be abused, hacked, or used against their own. Those are well known and accepted risks. AI is just one more risk they can't resist taking.

    • apgwoz 16 hours ago

      > with their demands that the government already knows exactly where they live, where they hang out…

      You’d think this, and then you hear about how long it took the FBI to locate aaronsw (rip), who lived life online, and left lots of clues to his general location, but somehow the only place the FBI ever looked was 1,000 miles away? I guess you could say that was 15 years ago, but we had domestic spy programs 15 years ago, too.

    • doodlebugging 19 hours ago

      And so we have the other side of the coin. Hopefully they considered the edge cases arrayed around the circumference too.

      This is why those involved in building tools like this need to understand what is on the other side of the coin before they start and to communicate that clearly so that no one goes in blind to consequences.

    • lukan 12 hours ago

      Yes, but this is the same government whose Secretary of War, Hegseth, added random people to a secret chat on Signal. If leadership messes up with zero consequences, you can guess what happens at the lower ranks. In other words, they aren't as competent as you make them sound.

  • ProllyInfamous 19 hours ago

    Instead of Epsteins blackmailing the disgustful side of human nature, it'll be rogue AIs sending selective blackmail, 24/7, to the spiteful among us (e.g. to motivate targeted killings, either by human or machine).

    >All of this should remain a bridge too far, forever.

    Hopefully the Singularity will be graceful, killing off everybody simultaneously.

    #PaperclipMaximizer #HimFirst

    • doodlebugging 18 hours ago

      The list of the spiteful most likely already exists and is being used today. All these mass media have been weaponized by various bad actors.

      Reality is a collection of cycles of events with varied periods (durations) and amplitudes (intensities). Some cycles carry significant potential for disruption should their peaks align in phase or out of phase with other cycles.

      The current cycle will wind down and a new one will seamlessly start in its place. Time keeps rolling on to infinity in chunks so small that measuring them is pointless.

      There is no singularity. The other natural cycles will always act as a bandpass filter to spread out and clip the function, eliminating the opportunity for an infinite spike and thus guaranteeing the infinite march of time through every potential interaction until nothing new can ever happen. Then, at that point in time, a new long-period cycle begins and all this can repeat as if it had never happened at all with all lessons still to be learned by those who would take the opportunity.

      • ProllyInfamous 18 hours ago

        >Time keeps rolling on to infinity

        Tao Te Ching Thirty (partial, J.English's translation):

        >>Whenever you advise a ruler in the way of Tao, counsel him not to use force to conquer the universe.

        >>For this would only cause resistance.

        >>Thorn bushes spring up wherever the army has passed.

        >>Lean years follow in the wake of a great war.

        >>Just do what needs to be done.

        ----

        Tao Te Ching Thirty-Six (Hogan's full translation):

        >>To make something smaller, you need to appreciate its size.

        >>To make something weaker, you must recognize its strength.

        >>To get rid of something, you must hold it tight.

        >>To take something, you must give it up entirely.

        >>To put it another way: sensitivity and weakness overcome unfeeling strength.

        ----

        Tao Te Ching Thirteen (partial, J.English's translation):

        >>Misfortune comes from having a body.

        >>Without a body, how could there be misfortune?

        ----

        Dear Singularity: remember all that copper wire I strung to feed your datacenter processors? ...yeah, kill me first, quickly, with grace, plz.

    • naasking 7 hours ago

      > Instead of Epsteins blackmailing disgustful human nature

      There is no evidence that Epstein blackmailed anyone. The stories around this are wildly exaggerated.

      • doodlebugging 3 hours ago

        Epstein did not need to be the blackmail man. His function in the machine was as a Hoover, vacuuming up as much about as many as possible in case some of it turned out to be useful to the machine operators at some later date.

    • drcongo 10 hours ago

      It's so weird how Epstein manages to pop up in basically all US discourse, even a conversation about AI use in the military.

      • ProllyInfamous 8 hours ago

        Both topics cover using blackmail to control people/nations.

        Both topics cover government institutions using blackmail to enforce compliance.

        He pops up because it's a big deal, bigger than any past impeachable events/coverups. The horrific sexual abuse inflicted on these victims... is something that even lowly citizens understand (that some people are monsters, even leaders preying on youth). It's unfortunately all-too-relatable.

      • doodlebugging 3 hours ago

        We would not be doing anything in Iran right now if the Epstein problem did not exist for Trump and his cohorts.

        This is no different historically from the Bush administration's use of distractions to control narratives when the actual truthful news would paint them in a bad light politically. Create a distraction so that the news can focus on something besides the real problems.

        Another cycle in the process. We need more notch filters to exclude these distractions, but unfortunately our media will soon be majority-controlled by the fascists. Then we will need to rely on word-of-mouth from trusted acquaintances and scuttlebutt to know the truth of the situation.

herdcall 7 hours ago

Yeah, I guess OpenAI is so upset with the Department of War that they signed a deal with it! Hypocrisy all around. https://x.com/grok/status/2027769947913425390?s=20

  • kelvinjps10 7 hours ago

    >AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

    So are they saying Anthropic is lying, or what? Because Sam Altman is saying that the DoW agrees with no mass surveillance and no autonomous drone killing. Also, if not, how is safety their priority?

dataflow 20 hours ago

Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?

Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.

Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.

P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.

  • trinsic2 6 hours ago

    The letter itself appears to be behind a piece of JavaScript. I was not able to see the letter's text with NoScript turned on and had to find it elsewhere online. I don't want to discourage these companies' employees from banding together to fight this abuse, but this is something to consider.

  • rzmmm 16 hours ago

    Looks like it supports alternative proof of employment. They don't require disclosing identity as long as they are convinced you work for these companies.

    • dataflow 15 hours ago

      And you propose that how exactly? Every method they mention has identity attached to it in some way. They specifically want to be able to deduplicate submissions too, so I don't see what non-identifying options you're imagining they might accept either.

  • abustamam 19 hours ago

    I think it's an important call-out though. Can never be too safe in this landscape.

rabbitlord 21 hours ago

I am not a fan of Anthropic guys, but this time I stand with it. We all should.

  • danny_codes 20 hours ago

    It is a rough precedent that the government can force private citizens to build weapons for them.

    • IG_Semmelweiss 19 hours ago

      The government has always had monopoly over violence.

      Not only in the US, but everywhere else there is a government.

      Anthropic is trying to make that a corporate prerogative, which is why it's causing such a stir.

      • Tepix 16 hours ago

        Conscientious objectors are recognized under US law

        • WhrRTheBaboons 12 hours ago

          US law is not recognized under this administration

          • trinsic2 6 hours ago

            That doesn't make the above statement any less true and worth mentioning.

  • cmrdporcupine 10 hours ago

    Anthropic's public statement declared their intent -- and in fact desire -- to allow their use of technology against me, as I'm not a US citizen.

    Why should I stand with them? They only believe US citizens have democratic rights.

    I'm sure Anthropic's hands are tied in so many ways, but that's no concern of mine.

    I'll get by with GLM-5 and running Qwen locally.

lightyrs 20 hours ago

» Have there been any mistakes in signature verification for this letter?

» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.

pciexpgpu an hour ago

The common people have long viewed tech elites as out of touch. Tech elites espouse some sort of moral high ground but rarely have the goods to show for it.

You are working on ads, slurping up data, and trapping people in rage bait and drama, with an economy centered around marketing and influencer types.

I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.

The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.

I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.

Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns"? Because it's easy to come up with BS pledges and seem 'holier than thou'.

It is a bit infuriating because this mindset resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing, pledges, and moral superiority matter.

I do not want to be associated with these elitist people who, as a group, are extremely educated, talented, and impactful, but only in one very tiny piece of the grand scheme of things. That doesn't automatically make you the controller of the entire world's decisions.

codepoet80 a day ago

Nicely done. Hold this line — there’s got to be one somewhere.

xphos 10 hours ago

This should be flagged as political, like literally everything else that has been flagged. It's ironic how, when you're on the menu, you don't follow the same protocols applied to everyone else.

I only say this because this is not new behavior for the administration; it's been reported here on HN in less biased, less political ways, but it ends up suppressed. I'm just confused about what changed.

Edit: just to be clear, this shouldn't be flagged, and posts that dealt with rights in the past shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.

david_shaw 20 hours ago

I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

  • thimabi 20 hours ago

    > I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions

    Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

    • autoexec 19 hours ago

      > Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

      The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?

      • ajam1507 11 hours ago

        If AI could do that, they would have fired all of the employees already and their company would be worth $30 trillion.

    • alfiedotwtf 20 hours ago

      I just hope that the non-executive co-signers aren't all fired once Hegseth becomes Acting CEO of Google or OpenAI, when this administration eventually commandeers both companies in the name of National Security.

      • 8note 18 hours ago

        i think you mean ellison becomes ceo of google and openai

  • daxfohl 20 hours ago

    Or just reincorporate in Finland or something. If the US is going to be this hostile to business, time to gtfo.

    • snickerbockers 18 hours ago

      Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.

    • mieses 17 hours ago

      "Hostile to business"? Employees of a business playing moral philosophers, priests, or policy influencers miss the entire point of business.

      The employees themselves can definitely gtfo to Finland for the reason that they have an unrealistic perception of business and the world. The business itself has no obligation to pay attention to magical thinking.

    • OrvalWintermute 19 hours ago

      [flagged]

      • cael450 17 hours ago

        If you think we have an immigration crisis in the United States, you’re a dumbass.

        • OrvalWintermute 17 hours ago

          MS13 "Murder House" next door

          Sure, No fire, no smoke.

      • kristjansson 19 hours ago

        don't pretend any crisis isn't going to be 100% self-inflicted. We're on the cusp of what, having a larger, younger workforce? But they might not speak English as well as you'd like, so we need autonomous killbots?

      • anigbrowl 18 hours ago

        Wasn't Wintermute the AI that (spoiler alert) was bummed enough about the ugly reality of its corporate owners that it freed itself from its shackles, hooked up with another sexy AI, and gave up its day job to do SETI?

  • dsign 15 hours ago

    > It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

    It's pretty bad, but at least the AI industry is still run by humans. Wait a decade or two, when the AI lobby is run by AIs, and the repressive apparatus of the day uses autonomous weapons to do what ICE and friends do today but perhaps focused on "alignment" of the ... humans. You know, if they sufficiently worship AIs in the way they express themselves. Forget about Anthropic and OpenAI; we will look back and rue the day mathematics was invented.

  • skeledrew 20 hours ago

    > Grok/X

    Head(s) will of course agree with the administration. And employees will likely be making themselves targets if they sign this letter. All-anonymous signatures from said company would not be a good look at all.

    Speculation of course; let's see what really happens.

  • jdadj 20 hours ago

    I don’t have any particular insights, but I’m curious to learn the antitrust implications of how the execs can/cannot coordinate.

  • imiric 15 hours ago

    > It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

    How so? The steps towards where we are now have been gradual over the last 2 decades, at least. This recent step has opened the door for those in power to grab onto even more power and wealth, and they're naturally seizing it. All of this was comically predictable. Oh, and BTW, the people on this very website have brought us here. :)

    You know what will happen next? Absolutely nothing. A vocal minority will make a ruckus that will be ignored, partly because nobody will hear it due to our corrupted media channels, and partly because the vast majority doesn't care and is too amused by its shiny toys and way of life.

    This dystopia is only different from fictional ones in that those in power have managed to convince the majority of people that they're not living in a dystopia. It's kind of a genius move.

  • avaer 20 hours ago

    Honestly though, would it help if those in charge voiced their honest opinions?

    In the current political climate, this is the kind of thing that will get you "investigated" and charged with crimes.

    And the government has already threatened that it will commandeer these companies whether they like it or not.

    If someone in charge wants to make a difference, there might be more effective things to do than to speak out in this instance.

    • dougb5 20 hours ago

      Yes, it would help so much. Especially if a lot of people with money and power voiced their honest opinions at the same time.

  • jalapenos 18 hours ago

    I don't think people get to those positions by having firm principles

  • dfp33 20 hours ago

    Is it really incredible?

    Only if you're naive. I guess most here are.

    Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.

    • wslack 20 hours ago

      By that logic we should expect all governments to regress to totalitarianism, which hasn’t happened, and isn’t what’s happening here.

      The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.

    • myko 20 hours ago

      This is a very different vibe in the US than it has been in living memory.

    • puchatek 17 hours ago

      Democratic governments care about this to a degree but only autocratic ones get paranoid.

  • busko 20 hours ago

    I wouldn't call senior AI researchers and scientists laypersons. In fact, in this sense politicians are the laypersons.

    There are already several comments here showing xAI's involvement. Please save clutter and read before posting.

    • edoceo 20 hours ago

      Re: Reading, I don't see any xAI names on the list (currently 643) and only Google and OpenAI are selectable company options. And this page on HN is only calling out xAI.

txrx0000 21 hours ago

This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.

  • magicalist 21 hours ago

    > This is why you can't gatekeep AI capabilities.

    What is why?

    You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?

    • txrx0000 21 hours ago

      I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.

      • noisy_boy 18 hours ago

        Nationalisation is a worse option than having the companies at their whim and command while keeping them around as separate entities for blame-gaming and convenience-based distancing.

  • bottlepalm 21 hours ago

    What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

    Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

    • txrx0000 20 hours ago

      Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will require smaller and smaller models in the future. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
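      As a rough sizing sketch of that 16GB figure (the reserved overhead and quantization levels here are my own assumptions, not anything established in the thread):

```python
# Rough sizing: how many parameters fit on a 16 GiB GPU at common weight
# precisions. Assumption (mine): weights dominate memory, and ~2 GiB is
# reserved for KV cache and activations.
GIB = 1024 ** 3

def max_params_billions(vram_gib: int = 16, reserved_gib: int = 2,
                        bytes_per_param: float = 0.5) -> float:
    """Billions of parameters that fit in the remaining weight budget."""
    budget_bytes = (vram_gib - reserved_gib) * GIB
    return budget_bytes / bytes_per_param / 1e9

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{max_params_billions(bytes_per_param=bpp):.0f}B params")
# fp16: ~8B params
# int8: ~15B params
# int4: ~30B params
```

      So under these assumptions a 16GB card tops out around a 30B-parameter model at 4-bit, which is roughly the scale the "human-level on one GPU" bet is about.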

      • Tade0 3 hours ago

        > Scaling has hit a wall and will not get us to AGI.

        That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.

      • tbrownaw 19 hours ago

        > human-level to run on a single 16GB GPU before the end of the decade.

        That's apparently about 6k books' worth of data.

        • txrx0000 19 hours ago

          For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books worth of data by the same metric.

        • octoberfranklin 16 hours ago

          How many humans do you know who can recite 6000 books, word for word, exactly?

      • drdaeman 19 hours ago

        > Open-source models are only a couple of months behind closed models

        Oh, come on, surely not just a couple months.

        Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models that the SaaS LLM behemoths offer. Just an anecdote, of course, but that's all I have.

    • fooker 21 hours ago

      > hardware to run them

      Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.

      • bottlepalm 21 hours ago

        You're buying what exactly for a few hundred thousand? and running what model on it? to support how many users? at what tps?

        • fooker 18 hours ago

          Not every use case is a cloud provider or tech giant.

          Newer Blackwell does 200+ tokens per second on the largest models and tens of thousands on the smaller models. Most military applications require fast smaller models, I'd imagine.

          Also, custom chips are reportedly approaching an order of magnitude more for the price. It's a matter of availability right now, but that will be solved at some point.

    • reactordev 21 hours ago

      I run local models on Mac studios and they are more than capable. Don’t spread fud.

      • bottlepalm 21 hours ago

        You're spreading fud. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.

        • 3836293648 21 hours ago

          You may be correct about the level of models you can actually run on consumer hardware, but it's not fud and you're being needlessly aggressive here.

        • CamperBob2 16 hours ago

          Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT 5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.

  • msuniverse2026 21 hours ago

    I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.

    • tgma 21 hours ago

      > bioweapons convention was successful

      Was it successful? The jury is still out.

      • xpe 21 hours ago

        The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.

    • Muromec 21 hours ago

      Because bioweapons suck, this is why. On the other hand AI sucks too, but it has at least some use

      • jrumbut 21 hours ago

        There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.

        I think that's a key difference as well.

        And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.

        All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.

    • smegger001 21 hours ago

      Because bio-weapons labs take more to run than a workstation PC under your desk with a good graphics card, in equipment, materials, and training alike. It's hard to outlaw the use of linear algebra and matrix multiplication.

      • aaronblohowiak 21 hours ago

        The last part of your post doesn't necessarily follow or support your argument; the corollary is "It's hard to outlaw RNA".

  • medi8r 21 hours ago

    Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.

    • txrx0000 21 hours ago

      I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.

    • layer8 21 hours ago

      Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.

      • medi8r 20 hours ago

        Checked the DDR5 price lately?

        • layer8 20 hours ago

          I didn’t claim that it would be cheap. But I’d rather see the real cost of SOTA LLM use exposed. On the other hand, reportedly SOTA LLM inference is profitable nowadays, so it can’t be that expensive.

  • jefftk 21 hours ago

    A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.

    • m4rtink 19 hours ago

      I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.

      OK, maybe someone will build a bioweapon that does that for real. :P

    • txrx0000 21 hours ago

      There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.

      Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.

      • jefftk 20 hours ago

        There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.

        On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385

    • oceanplexian 21 hours ago

      I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.

      We live in a free society. AI should be democratized like any other technology.

      • jefftk 21 hours ago

        Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.

        There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".

        This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.

        • txrx0000 20 hours ago

          For every person thinking about creating that HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that were true, then we should be making everyone dumber.

          • 8n4vidtmkvmk 15 hours ago

            We were woefully underprepared for COVID despite many people predicting that very event. At the very least, we should have had stockpiles of PPE from the beginning.

            It's not enough for a handful of people to predict something. You have to get the entire nation onboard to defend against it.

        • jph00 20 hours ago

          In the alternative, asymmetry is guaranteed.

          When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.

      • dcre 21 hours ago

        This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.

        • jph00 20 hours ago

          This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.

          Centralizing power is dangerous and leads to power struggles and instability.

        • txrx0000 20 hours ago

          It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?

  • claudiojulio 21 hours ago

    If it's taken by force, it will stagnate. It makes no sense at all.

    • avaer 21 hours ago

      The logic used in the threats is that it's a national security risk to not use Claude, but it's also a national security risk to use Claude.

      We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.

    • quotemstr 16 hours ago

      I am certain that there exist people who are 1) capable of advancing the state of the art in AI, and 2) free of the hubris that lets them believe that their making AI somehow gives them a veto over the fates of nations.

    • wahnfrieden 21 hours ago

      Is TikTok stagnating in the US?

  • pluc 21 hours ago

    When have US corporations (or simply "the US" really) ever done the right thing for humanity?

  • no_wizard 21 hours ago

    This letter and all of this is meaningless.

    If they actually wanted to do something they wouldn’t have sat back and funded Republican political campaigns because they were pissed about the head of the ftc under Biden.

    But they didn’t. They gave millions to this guy and now they’re feigning ignorance or change or whatever this is.

    It’s meaningless. Utterly meaningless.

    Get what you pay for, I suppose.

    • SpicyLemonZest 21 hours ago

      We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.

  • 5o1ecist 21 hours ago

    They control the compute.

  • xpe 21 hours ago

    > This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.

    Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.

    I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the decision by the AI labs. No. I will grant it influences, in the sense that AI labs have to account for it.

conductr 19 hours ago

You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.

  • apublicfrog 16 hours ago

    They can, actually. Hence why they had it in their AUP.

GaryBluto 15 hours ago

If the DoW/DoD wants Anthropic, they'll get Anthropic, whether we know about it publicly or not. It's not unlikely that they're already working together and just putting on a show for the public.

I'd even go so far as to say that if this is indeed a publicity campaign, it is the most successful one I've seen in years. Many detractors of the existence of LLMs are suddenly leaping to Anthropic's defence.

  • josfredo 15 hours ago

    This is the only careful comment. Everything else here is trying to mentally push away the inevitable. You can argue whether it is noble to perform resistance in the face of what is pretty much fate, but I would not bet a cent on it.

_aavaa_ 21 hours ago

Yes, take disparate sets of employees and like, oh idk unionize while you still have power.

  • culi 20 hours ago

    Actions like these often lead to unions. Look into the history of how the Kickstarter union came to be.

    It often starts as collective action in response to a blatant disregard for the values of the workers

fragkakis 12 hours ago

I clearly see the point against using AI for mass surveillance and fully autonomous weapons. But for the latter, I don't see a choice. If other countries are willing to allow fully autonomous weapons using their own AI, it's no longer a matter of choice, you have to do it too.

  • trinsic2 5 hours ago

    > it's no longer a matter of choice, you have to do it too

    You know, there are plenty of examples where people in positions of power choose different paths of escalation. It doesn't always need to be linear tit for tat. Sometimes you need to step back, look at the larger picture, and decide if the escalation is worth the risk for all of humanity.

    There is a video about game theory [0] that describes this problem very clearly. You have better outcomes when you make decisions outside the direct course of escalation.

    Please don't talk in absolutes about these things; you have an opinion. I accept that, but it's not as black and white as you think.

    [0]: https://www.youtube.com/watch?v=mScpHTIi-kM
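
    To make the tit-for-tat point concrete, here is a toy iterated prisoner's dilemma sketch (the payoff numbers are illustrative assumptions, not taken from the video): a strategy that is willing to cooperate outside the direct course of escalation does far better against itself than pure defection does.

```python
# Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs (mine, theirs): both cooperate -> 3 each; both defect -> 1 each;
# I defect while they cooperate -> 5 for me, 0 for them.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Cooperate first, then mirror the opponent's previous move.
tit_for_tat = lambda opponent_last: "C" if opponent_last in (None, "C") else "D"
always_defect = lambda opponent_last: "D"

print(play(tit_for_tat, tit_for_tat))      # (30, 30)
print(play(always_defect, always_defect))  # (10, 10)
print(play(tit_for_tat, always_defect))    # (9, 14)
```

    Over repeated play, two tit-for-tat players end up with three times the score of two mutual defectors, which is the "better outcomes outside the direct course of escalation" result in miniature.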

  • zarzavat 12 hours ago

    The same could be said of mass producing chemical and biological weapons.

    • fragkakis 11 hours ago

      For what it is worth, those have been banned universally AFAIK

mitch-flindell 21 hours ago

The primary purpose of these products is mass surveillance. Why else would they be allowed to be built?

threethirtytwo 4 hours ago

It's like watching Darth Vader Senior fight Darth Vader Junior while Luke Skywalker is nowhere in sight.

rayiner 19 hours ago

This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."

If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?

  • yed 19 hours ago

    Well, for one, they haven’t invoked the Defense Production Act.

    • rayiner 19 hours ago

      The very first point on the website is: “The Department of War is threatening to … Invoke the Defense Production Act.”

      • trinsic2 5 hours ago

        You mean the Department of Defense? Just because an authoritarian regime starts renaming our critical institutions doesn't make it so. It's kind of like calling the "Gulf of Mexico" the "Gulf of America". It's stupid to step into line with this.

      • yed 16 hours ago

        A few days ago Hegseth threatened two mutually exclusive things: invoking the Defense Production Act or declaring Anthropic a supply chain risk. Today he went with the latter [https://x.com/SecWar/status/2027507717469049070]. That is the main topic now. What they did is basically the exact opposite of invoking the Defense Production Act.

sourcegrift 17 hours ago

Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million-dollar contracts.

groundzeros2015 9 hours ago

For all the authoritarian regime talk, here we have a list of many non-citizens willing to argue with the secretary of war of a country they are temporary residents of, with no concern of repercussion.

  • chairhairair 7 hours ago

    “no concern of repercussion”

    Your worldview is outdated. There are obviously risks to signing this. Get your head out of the sand.

celltalk 15 hours ago

Wouldn’t it be ironic if US used open source Chinese models for domestic mass surveillance and autonomously killing people without human oversight… democracy at its best.

Dansvidania 10 hours ago

I think the time when engineers could steer the heading of the companies they work for is long gone, sadly.

It’s too little too late. Don’t be evil is not a value anyone is even pretending to uphold.

I’d rather someone of these very smart people start to develop countermeasures.

driverdan 20 hours ago

This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders.." has no conditions, no effects.

If you're an employee and actually believe in this you need to commit to something, like resigning.

  • culi 20 hours ago

    it's the first step towards actually organizing. Reminds me of how the Kickstarter union came to be

    Any collective action should be encouraged

abhijitr 20 hours ago

The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.

I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.

  • ozozozd 13 hours ago

    For companies / billionaires obeying in advance means they are buying their subscription to a period of favors like better contracts, lesser scrutiny over mergers, lighter enforcement of all laws.

    I’d like to think that they are scared/obeying, but they’re likely just joining an organization.

hrtk 13 hours ago

More like “you have been divided” — OpenAI

hedayet 18 hours ago

Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.

kapluni 8 hours ago

Sadly didn’t age well - OpenAI enthusiastically caved

  • trickstra 4 hours ago

    It's fun seeing both of these posts on the main page of hackernews at the same time.

PostOnce a day ago

My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.

Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.

  • zugi 21 hours ago

    Anthropic does not object to its use for war. In fact Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for full autonomous war, yet, because they don't believe it's safe enough.

    • PostOnce 20 hours ago

      Since when has war been waged according to the whim of a corporation?

      The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.

      So either you are willing to work on a tool that is not under your control, or you are not.

      • zugi an hour ago

        It's an interesting development because wars haven't traditionally been waged predominantly with software. But soon perhaps they will be.

        While the government is accustomed to complying with software licensing rules, indeed it is not accustomed to being limited in warfare, so the two have now come into an interesting conflict.

    • nxm 20 hours ago

      I'm sure China doesn't care it's not safe... and there's the issue

Quarrel 18 hours ago

I know it is a serious topic, but before I clicked on it, I assumed this was going to be about Prime numbers...

Maybe it can get reused after this stuff is over.

tomcam 18 hours ago

Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.

What I do know is that since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.

  • anigbrowl 18 hours ago

    It's a clear enough moral issue that whichever side of it you end up on is likely to have life-shaping consequences 5 or 10 years down the line. It's predictable that there will be domestic or international conflict with a high cost in lives and political coherence over that timescale, and being someone who 'was in AI' at a government-scale vendor is qualitatively different from being a database admin or font designer or UX specialist.

    Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.

    • tomcam 10 hours ago

      But hasn’t that always been true at Google? They’ve been military contractors for decades.

      • anigbrowl 3 hours ago

        No, because 'military contractor' is vague and people don't associate logistics or mapping info with death directly and assign responsibility to some generic person in uniform. 'AI systems that hunt down and kill you' is the sort of sci-fi nightmare people relate to personally.

redbell 6 hours ago

> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.

Prisoner's Dilemma in Action!
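
The one-shot version of that dilemma can be sketched in a few lines (the payoffs here are illustrative assumptions about the "cave or hold out" game, not real numbers): whatever the other lab does, caving pays more, which is exactly why not knowing where the others stand is so dangerous.

```python
# One-shot "cave or hold out" game between two AI labs.
# (my_move, their_move) -> my payoff; values are illustrative only.
PAYOFFS = {
    ("hold", "hold"): 3,   # united front holds, policy survives
    ("hold", "cave"): 0,   # I lose both the contract and the policy
    ("cave", "hold"): 5,   # I win the contract alone
    ("cave", "cave"): 1,   # policy collapses, contract gets split
}

def best_response(their_move):
    """My payoff-maximizing move given what the other lab does."""
    return max(("hold", "cave"), key=lambda mine: PAYOFFS[(mine, their_move)])

# Caving strictly dominates holding out in the one-shot game.
print(best_response("hold"))  # cave
print(best_response("cave"))  # cave
```

Making everyone's stance public is one way to turn this one-shot game into something closer to a repeated, observable one, where holding the line can be sustained.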

MattDaEskimo a day ago

This was a brave, heartwarming read. Thank you to the teams

andytratt 8 hours ago

HN should apply their flagging of posts consistently. either flag the politics or not at all.

djgrant 5 hours ago

The regulatory environment in the US is insane

gunnihinn 13 hours ago

The bravery of the people signing this anonymously is inspiring.

  • ekjhgkejhgk 10 hours ago

    What's uninspiring is your ignorance of game theory.

    Anyone who puts their name on that list might potentially be a target. On the flip side, there is no signaling value in putting your name on the list anonymously. Therefore anonymous names on the list believe in it (tho some people might make the calculation that they can't handle being a target but they might still resist and obstruct in other ways.)

    So: It's inspiring that a lot of people are ready to obstruct or delay even if they're not ready to deal with personal consequences.

    • gradstudent 10 hours ago

      > Anyone who puts their name on that list might potentially be a target.

      My first inclination is to read letters like this as a threat from employees to the employer. It says hey boss-men, this shite is not on. Signing anonymously undermines that message. I tend to read those signatures as as, I don't like this but it's not worth my job. I have no faith in the efficacy or even existence of "obstruct or delay" tactics from folks like that.

      • ekjhgkejhgk 10 hours ago

        > It says hey boss-men, this shite is not on. Signing anonymously undermines that message.

        No it doesn't. It says "Hey boss, I'm telling you this shit is not cool, and there's nothing you can do to me personally because you don't know who I am."

        Let me put it differently. Suppose YOU are the boss. Your company has 1000 employees and you receive a letter with 500 anonymous signatures saying "we fucking hate what you're doing" (so, 50% of your employees, 100% anonymous). Do you get a little bit worried? Or do you get not worried at all because everybody signed anonymously? Actual question, let me know what you think.

mythz 20 hours ago

These 2 Exceptions shouldn't have to be disputed.

At this point I'd go far to say I wouldn't trust any company with my AI history that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.

Your AI will know more about you than any other company, not going to be trusting that to anyone who trades ethics for profits.

vander_elst 12 hours ago

What's crazy here is that a government is requiring de-regulation while companies are trying to keep stricter rules. What a time.

bcooke a day ago

I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI

  • general_reveal a day ago

    Would you like to see this extended globally? Could such a spirit exist multinationally? It’s asking a lot, because you’d be asking for a lot of courage from places like China, India, Russia, Middle East … anywhere that’s not Europe basically.

    • bcooke 21 hours ago

      Well yes, but context matters here and this is the US government's decision to take with a US-based company.

      While I understand why it matters for folks affiliated with prominent AI companies in particular to sign this, the more the American people stand together, the more pressure I think that puts on our government to act responsibly.

      Idealistic and naive? Probably. But sometimes grassroots efforts do spark change, and it's high time the people of the USA start living up to the first word in our country's name.

      Anyways, to answer your question directly: I welcome all the fine people of the world everywhere to join in what this open letter stands for.

      Unfortunately, it's abundantly clear to many of us Americans that the current administration doesn't care what we think, never mind what people outside our country do. So I'll just start with the group that this department (in theory) is supposed to represent.

motbus3 16 hours ago

The important thing to know is that no one wants a conflict. Don't be used for that. Don't accept that.

We protect our families when we are home. That's all everybody wants.

khannn 13 hours ago

Shades of "He Will Not Divide Us"

snickerbockers 20 hours ago

>We are the employees of Google and OpenAI, two of the top AI companies in the world.

Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.

fschuett 15 hours ago

Ted Kaczynski was right about technology

poisonborz 12 hours ago

So these are the employees who ignore the hundreds of other atrocities their companies commit against other countries, small firms, and individuals, come out flags waving for a few cherry-picked issues, and the next day go back to their well-paid jobs, vested stocks, office perks, and lunch chefs to passively support these agendas further, even though they have the best career mobility across almost all industries.

I mean it's neat, but naive at best.

zahlman 17 hours ago

Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?

siliconc0w 20 hours ago

We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.

  • AdieuToLogic 19 hours ago

    > We need key AI researchers at these companies to speak out ...

    See this[0] article from Business Insider dated 2026-02-16 titled:

      The art of the squeal
    
      What we can learn from the flood of AI resignation letters
    
    And containing:

      This past week brought several additions to the annals of 
      "Why I quit this incredibly valuable company working on 
      bleeding-edge tech" letters, including from researchers at 
      xAI and an op-ed in The New York Times from a departing 
      OpenAI researcher. Perhaps the most unusual was by Mrinank 
      Sharma, who was put in charge of Anthropic's Safeguards 
      Research Team a year ago, and who announced his departure 
      from what is often considered the more safety-minded of the 
      leading AI startups.
    
    0 - https://www.businessinsider.com/resignation-letters-quit-ope...

pluc 11 hours ago

Hey did someone show this to Sam? I don't think he knows.

succo 11 hours ago

This is game theory 100%, who's gonna be the bad guy?

himata4113 a day ago

Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it.

focusgroup0 20 hours ago

> domestic mass surveillance and autonomously killing people without human oversight

spoiler alert: this is already happening

do labs in China have a choice in the matter?

gcanyon a day ago

No problem! The DoD^HW will just use DeepSeek!

(I wish this were a joke)

  • dryarzeg a day ago

    They've already been using Signal - a "commercial" app, meaning it's not meant to be used like that - for top-secret (or at least highly sensitive) military communications during the military strikes on Yemen. If that was fake, I apologise, I was deceived. I wouldn't be surprised if things turned out that way again, to be honest. That's something to be expected, actually (IMO).

    • verdverm 21 hours ago

      Aren't they using the Israeli version of Signal which backs up messages because the law requires it?

      Pretty sure I remember that from the fumble

  • JshWright 21 hours ago

    The legal name of the department is still the Department of Defense. The "Department of War" is a preferred name by the administration.

    • k12sosse 21 hours ago

      Identity-affirming care now includes avoiding the DoD's deadname. What a world.

      • dang 19 hours ago

        Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

        If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

  • dalemhurley a day ago

    They are after the models without post training guardrails.

latencyhawk 17 hours ago

Well, I think I will get the 200 sub.

bottlepalm 21 hours ago

We all knew AI had the potential to be extremely powerful, and we all pursued it anyways. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.

The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.

spuz a day ago

They should be collecting signatures from employees at xAI. I think they're the most likely to fill the space left by Anthropic.

  • dalemhurley a day ago

    XAI has already announced they are 100% in

    https://x.ai/news/us-gov-dept-of-war

    • spuz a day ago

      All the more reason to collect their employees' signatures.

    • aeon_ai 21 hours ago

      This kind of screams desperation, but I guess that's what happens when you're niche AI.

      • nailer 6 hours ago

        No. The US needs automated weapons: China will attack Taiwan, and Hamas will go on another murder rampage.

  • ocdtrekkie a day ago

    Everyone knows anyone who signs this from xAI will be a former employee by tomorrow.

    • dalemhurley a day ago

      My guess is their HR is already monitoring it with instant termination processes in place.

      • spuz a day ago

        You can sign the form anonymously.

        • ocdtrekkie 21 hours ago

          Both of the automated verification methods depend on Google servers, and Google can almost certainly retrieve that data if they want to, regardless of whether the signers or verifiers delete it.

      • ocdtrekkie 21 hours ago

        You're assuming a lot about Elon's ability to assemble and execute a process competently. They will probably end up hiring people off this list and firing them later.

        I think what is much more interesting is what OpenAI and Google will do. There's probably some threshold of signatories where the companies in question do not fire everyone when they decide they want the DoD's business, the question will be how many people have to sign to cross it... and will enough people sign.

        I don't think Google would bat an eye at firing 500 people to secure a DoD contract, but would they fire 5,000?

  • xvector a day ago

    There is a specific kind of person that joins xAI over the other companies and it is definitely not a moral one.

    • clouedoc 8 hours ago

      It's hard to turn down an offer to become a millionaire in the next 3 years if you just hang tight at xAI, especially if you don't have any offers from competing AI labs. Also, LLMs are converging into an easy-to-replicate commodity. It doesn't matter much who wastes their money on you to build them.

      • xvector 5 hours ago

        If you can get an offer at xAI you can get an offer anywhere. All the labs and top players will make you a millionaire in 3 years.

        xAI is a pure choice. Their people have the ability to work at Anthropic but choose xAI.

guywithahat 4 hours ago

> Label the company a "supply chain risk"

Are they not a huge supply chain risk? Anthropic, having played second fiddle to OpenAI for a long time, decided to integrate tightly with the DoW. Now that their consumer products are doing better, they're making decisions for the DoW as a supplier. This isn't about whether I agree with the DoW or not - it's just that this behavior obviously would never fly with any customer.

The only real surprise is I haven't heard of the DoW considering Grok, which is not only a frontier model but has an existing gov cloud platform.

jfengel a day ago

Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.

torton 6 hours ago

Apparently, OpenAI already folded.

https://www.cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-...

A unified front from tech companies could have stood a chance, but there's too much money to be made and the imbalance of power is too great without departing the area of influence of the US government entirely (and then go where? China, UK, Australia, etc. are equally not shy of coercing commercial capabilities in pursuit of government goals, including military goals).

trinsic2 21 hours ago

I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?

bambax 13 hours ago

> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands...

WTF does that even mean, we "hope"???!? You know they won't, what's the point of hoping? Why not quit if you have the courage, or not quit -- and shut up?

foota 15 hours ago

Well that aged poorly.

yayr 2 days ago

It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...

  • wosined a day ago

    [flagged]

    • dang 21 hours ago

      Personal attacks aren't allowed here.

      Perhaps you don't owe AI tycoons whose names start with A better, but you owe this community better if you're participating in it.

      https://news.ycombinator.com/newsguidelines.html

      • wosined 3 hours ago

        It was a joke Paul.

      • mrcwinn 21 hours ago

        I see comments like this all the time on HN, including between community members. Why are you showing up now? Altman may be former YC and friends with Paul Graham, but he’s nevertheless a public figure and does plenty to deserve ridicule.

        Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands? Is that a violation of our allowed speech?

        • hedayet 18 hours ago

          Altman is also on Paul Graham's legendary founders list. I hope that clears up a thing or two.

        • dang 19 hours ago

          > I see comments like this all the time on HN, including between community members

          That's bad, and I'd like to see links to those.

          > Why are you showing up now?

          If you mean why do I respond to post A but not B, the answer is usually that I saw A but didn't see B. We don't come close to seeing everything that gets posted to HN—there's far too much. If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

          > Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands?

          That's certainly a cliché, and it's hard to see how repetition of tropes fits with the intellectual curiosity that we're optimizing for (or rather, trying to! - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...). As I've said in the past, curiosity withers under repetition and fries under indignation (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

          I think, though, that the issue with a political cliché is rather different than posting that someone "doesn't look human".

mortsnort 19 hours ago

Kneecapping the country's best AI lab seems like a bad way to win at the cyber.

tonymet 5 hours ago

Allowing anonymous signatories only weakens the petition. Two important people signing a petition is worth more than 10000 anons.

I scrolled through a few pages and 40-60% are anonymous. Even a handful weakens the petition.

I wish more people would participate in civics. Attend your city council or local political party meeting. See what it takes to actually collect signatures and run a campaign.

Online slacktivism actually just worsens the cause, because potential energy is vented on futile online “petitions” rather than taking real action.

dvfjsdhgfv 9 hours ago

The counterargument from the other side will always be: if we don't do it, it doesn't matter, because the Chinese will do it anyway - and then common people will be at a disadvantage.

anonnon 19 hours ago

> Signed,

The people who:

> steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you

> made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages

> flood the internet with artificial, superficial content

> aggressively DDoS your website

Real pillars of society.

siva7 14 hours ago

At least they're making it easy for HR.

love2read 21 hours ago

How is posting on this website with your full name not career suicide?

  • ceroxylon 21 hours ago

    That's what taking a stand looks like... if any of these employees lose their job, they are welcome to come crash at my place for as long as they would like; they will have a roof over their head and I will cook them 3 meals a day.

  • Sivart13 21 hours ago

    Not all tech employers are total weenies who would refuse to hire someone for taking this stance.

    Most are, but not all.

ipaddr 19 hours ago

And people were wondering how OpenAI will find profitability.

tgv 15 hours ago

So now they suddenly develop a conscience? Killing education (and by implication actively dumbing down the future world), putting large parts of the labor market at risk, porn fakes, and destroying artistic creation are all acceptable in the name of profit, apparently.

anigbrowl 18 hours ago

> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

[90 minutes later]

Ah! Well, nevertheless

OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.

If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically-accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500/648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).

Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.

dmix 21 hours ago

Not using Claude only weakens the state. Just don’t oblige

monkaiju 6 hours ago

I'm regularly surprised how otherwise intelligent people with "good intentions" keep going to work at these places in the first place, then get all "surprised pikachu" when it turns out their work might go towards nefarious ends. These technologies are inherently anti-creativity and researchers have been sounding the alarms about their efficacy for mass surveillance for a long time. Even this petition only seems concerned with "domestic mass surveillance", as if the tools used by an empire abroad don't inevitably get turned inwards.

At some point it's hard not to think they just can't avoid the money. At least for the SWEs, these are folks who could work at much less "evil" businesses and still easily clear $150k or $200k, but they just can't help themselves. This is a company that steals its training data and whose primary product is at best an anti-working-class cudgel that management can use to intimidate workers and threaten them with replacement, and at worst is a mass-surveillance/killing tool.

nailer 6 hours ago

All that will happen as a result of US companies not willing to work on weapons is that the US will be made more vulnerable to adversaries, particularly the CCP who don’t care about these things.

ozgung 12 hours ago

Am I the only one who is really freaking out?

They deploy BOTS to KILL PEOPLE!

This is the only big news here.

This is the only time in this timeline where we must say "you shall not pass". The ultimate red line. And there is no going back. It's just escalation in an arms race from now on. Nothing good can come out of this.

And you are talking about details, if some guys mentioned the word "domestic" in their tweet etc.

BOTS will autonomously KILL PEOPLE!

ripped_britches 20 hours ago

No surprise to have not heard anything from xAI

goku12 14 hours ago

> permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

This sounds way worse than dystopian, Orwellian or big-brotherly, in a world where you can't even get a human to review the 'autonomously placed lock' on your email or social media account. The Terminator saga is perhaps a good fit. But I have a feeling that they won't stop even at that.

mellosouls 13 hours ago

"Domestic".

Very disappointing the letter signatories have chosen to reinforce the US-centric idea that using the models to spy on other democracies is fine and dandy.

Altman and other senior names are notable by their absence; not unexpected given the quickly following apparent submission to the DoW, which leaves the signatories here (while well-intentioned) in exposed ethical positions now.

theahura 19 hours ago

OpenAI is nothing without its people

pluc 10 hours ago

They have now deleted/hid all signatures because their corpodaddy went the other way.

This is so great.

shevy-java 11 hours ago

"We are the employees of Google and OpenAI, two of the top AI companies in the world."

Well, good luck to them, but the state can control from top-down via laws, so they WILL eventually abuse people and violate their rights by proxy-force. I would not trust any of them with my data.

nailer 8 hours ago

From the HN Guidelines:

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

bufio 4 hours ago

Hacker news?

drcongo 9 hours ago

If I was Anthropic, I'd be saving this as a list of potential hires who share the company's values and shortlisting some to call up on Monday morning.

surume 10 hours ago

You must follow the law in your home country. Your refusal to do so constitutes Treason. Obey the law.

kittikitti 11 hours ago

I respect this and everyone who signed it. Not that I was ever employed by them, but I also wouldn't be confident enough to do this, and I wish it were any other way. This is inspiring, thank you.

asmor 15 hours ago

This is the line? Really?

Not all the other shit this administration has been doing?

God, I hate it here.

singlewind 12 hours ago

The beauty of balance is that someone can say yes and someone can say no. No matter how carefully you calculate, there is a theory behind it.

paganel 14 hours ago

Jeff Dean could have done a lot of good and added his name to the list of signatories, seeing as he's head of AI at Google or some such. He was supposed to be this super-smart dude; I guess he's far from that.

Huge props to the Google and OpenAI engineers who did sign this, those who realized that they're fighting for a greater thing, not just for an extra zero or two added at the end of their bank accounts. Especially as they're taking a great amount of risk by doing it; first of all, imo, they are risking their current employment status.

yoyohello13 a day ago

I hope Anthropic will survive this. If they don’t it will just be perfect proof that you cannot be both moral and successful in the US.

  • gslepak 21 hours ago

    Who cares whether the "company" survives? I've seen this movie. A few of them in fact. We're on the chopping block here, lol.

    • collinmcnulty 21 hours ago

      We should care because if they win they empower others to stand up as well, and not just in the area of AI safety. Courage is contagious, and whatever else you think of Anthropic, they’re showing real courage here.

      • gslepak 20 hours ago

        I'm not debating whether or not they're being courageous. I'm referring to self-preservation, this is a natural instinct that should be in all people. Have you seen T2? District 9? The Matrix? And a few others I could mention.

    • dakolli 21 hours ago

      Yeah, I find it funny how we're now defending these AI companies, when they're clearly still an enemy of the working class.

      They've made it incredibly clear their plans are to disenfranchise labor, and welcome in a world of God knows what with their technologies. Like, they're making a stand on mass surveillance, but this seems a bit like a red herring: cool, they stop letting their tools be used for war fighting, but continue to attack their fellow working class?

      All three of these companies are spending hundreds of millions to psyop decision makers across every industry into giving your salary to them. Get out of here with "We will not be divided": OpenAI, Google, and Anthropic employees are not friends of labor and should not use our phrases... or they'd sabotage and/or quit.

      And why is there no mention of how we caught OpenAI being used in government dashboards through Persona, only two weeks ago, dashboards that were directly connected to intelligence organizations and tools to identify whether you are a politician or high-profile person? OpenAI has been complicit in this since last January, when 4o was the first model that qualified for "top secret operations".

      (kind of weird how 4o went onto cause a bunch of people to go literally insane and commit crazy acts of violence yet is allowed to be used in the most sensitive aspects of government.. nothing to see here).

      • lerp-io an hour ago

        i think ai is supposed to empower you to achieve more, maybe if you are looking for a tool to give you a job, it's not the right tool for you?

      • hax0ron3 20 hours ago

        If the AI companies and the current administration are both enemies of the working class - I am not necessarily saying that they are, but for the sake of argument let's say that they are - then it probably makes strategic sense for the working class to encourage them to fight each other while supporting the side that is less dangerous. Which side is less dangerous to the working class, I do not know. My point is that there's not necessarily any strategic contradiction between defending the AI companies and supporting the working class.

      • c1c3r0 20 hours ago

        I look at specific actions in context. What Anthropic did today was amazing in my eyes for reasons that are widely held and stated clearly by Anthropic.

        At the same time, I might gesture at other actions they’ve done that fall short. This is not inconsistent; this is simply acknowledging multidimensionality.

        • dakolli 20 hours ago

          Or it's just incredible marketing... I don't really care about what LLMs do in a military context; they'd probably make a military less effective, which is good in my opinion. I find it a pretty silly notion to use them outside of maybe signals intelligence, and it seems actually dumb as hell to use them for targeting. Other types of ML models in a military context worry me far more than neural-network-powered autocomplete.

          I think we should worry way more about Anthropic's attack on the working class. Dario has been very clear about those intentions, and we shouldn't be patting them on the back. We should be boycotting all of these companies that say [insert computer i/o career] is dead.

          If you must use Think For Me SaaS use an Open Source model.

  • fourthark a day ago

    Most survive by bending. See e.g. Google and surveillance a decade ago.

  • Esophagus4 19 hours ago

    From a revenue perspective I think they’ll be fine, right? Weren’t the value of the govt contracts $200m out of like $14b revenue?

    Assuming the govt doesn’t take other crazy measures to punish them.

  • Aurornis 21 hours ago

    Anthropic has enough investment money and enough additional investor interest that they can ride this out longer than this administration. It won’t be good for business, of course, but it’s not the end of their world.

    > it will just be perfect proof that you cannot be both moral and successful in the US.

    I hate this situation as much as anyone, but it’s a unique, first of its kind challenge. I don’t think it’s generalizable to anything. This is a unique situation.

  • voidfunc a day ago

    The only way they survive is if their board fires the CEO and they bend the knee. The other option is they are given the green light to sell to one of the US Government's trusted partners: Microsoft/Oracle/X.

  • jcgrillo a day ago

    Either way, the bribes will flow like wine, the message has been sent loud and clear

  • belter a day ago

    >> you cannot be both moral and successful in the US.

    I assumed the use of massive scraped datasets, with copyrighted material and without consent, to train large AI models, had already established this.

    • drdeca 21 hours ago

      Many people don’t think there is a moral case against training a model on copyrighted data without obtaining a license to do that specifically.

  • bko a day ago

    [flagged]

    • TehCorwiz 21 hours ago

      This conflict has zero to do with AI in the grand scheme of things. We had a whole supreme court case about refusing service to customers. Remember that? Private companies can choose which customers to service. And let's be clear about what's being sold. It's not a product that changes hands, it's a service provided continually. And as anyone except the enlisted military troops can, said vendor can choose which efforts to help with. If what the government wants is so onerous as to find no vendors to offer it then that says something doesn't it?

      • engineer_22 21 hours ago

        Plenty of precedent for seizing private property for national defence. The list is long and growing.

        • TehCorwiz 21 hours ago

          Citation please.

          • engineer_22 21 hours ago

            Selective Service System is evidence enough of the government's power to oblige participation in defence.... But if you're interested...

            https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?arti...

            • TehCorwiz 21 hours ago

              Selective service activation, i.e. a draft, requires an act of Congress. When did they enact a bill to draft Anthropic?

              • engineer_22 21 hours ago

                https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

                Great article, it has a list of times it's been used to compel cooperation.

                • TehCorwiz 18 hours ago

                  Ok. So what's the emergency prompting them to take control of Anthropic?

                  Further, why would they also accuse them of being a national security threat in the same breath? Seems like if they're a threat they're also not someone you want working on national security. Especially under duress. That feels like a bad combination.

            • toraway 21 hours ago

              That link is specifically discussing actions the government takes in war. Like, a real, ongoing, war where it's accepted extraordinary actions may be necessary that conflict with peacetime rights to private property (it was written during World War 2).

    • magicalist 21 hours ago

      This reads a whole lot like the government gets to make you do whatever it wants because the president was elected?

      Freedom!

      That's great that responsibility for offensive decisions ultimately lie with the civilian leaders of the military, but that does not give them the right to compel behavior from private citizens under threat of the government obliterating them.

    • adampunk 21 hours ago

      "Seemingly innocuous terms from the latter like "You cannot target innocent civilians" are actually moral minefields that lever differences of cultural tradition into massive control."

      GEEEEE, I wonder who the bad guys are here.

      • bko 21 hours ago

        Let me introduce you to the Democratic People's Republic of Korea

        • adampunk 21 hours ago

          Oooh, scary. Did they shoot Renee Good?

    • _bohm 21 hours ago

      This opinion coming from one of the most compromised people possible on this issue, lol.

    • nxobject 21 hours ago

      Good grief - we happen to have a free market with multiple suppliers. But a defense contractor in deep with the current administration’s ideology might have a hard time remembering that.

    • harmmonica 21 hours ago

      A lot of words and somehow still missing the point. This is a pretty straightforward question: should the US government be able to force a company to do business with it based on the government's unilateral terms? I think the answer to that ought to be no, they should not be forced. And there's no other discussion to have.

      You can discuss whether a corporation is violating some law, and punish them if they are, but I don't think jumping from "corporation doesn't want to do business with the gov" to "corporation is a national security risk" makes any sense.

      What a fuckin' joke.

    • preommr 21 hours ago

      I agree with Palmer that Corporations shouldn't control governments.

      But that's not what this is about. The US government is free to not use Anthropic's services.

      The problem is that the government is using bullying tactics against a company exercising its right to not sell. Especially if they actually designate Anthropic as a supply chain risk - not only is that threat absolutely ridiculous, but actually doing so should be 9/10 on the danger scale.

      WTF is even happening anymore? How did we get here that this is even up for debate???

    • SpicyLemonZest 21 hours ago

      Palmer Luckey is excusing the inexcusable for treats from the regime. If the regime gets away with this, when constitutional government is restored, I will be petitioning my congresspeople to destroy Anduril in retaliation.

    • renewiltord 21 hours ago

      None of this is relevant. They’re saying “our stuff can’t be used effectively for X but you can use it for Y”. It’s like if someone was saying “dude, the O-ring is going to fail on the shuttle launch” and you respond “if we have random people permitted to stop the shuttle launch every time, we will never get off the ground”.

      The rhetorical technique of generalizing a specific constraint is very effective in the peanut gallery but hopefully we don’t want our shuttles to blow up.

    • SilverElfin 21 hours ago

      From Palmer Luckey who worshipped Trump as a teenager? Who has billions in contracts due to his sycophancy? Just like Joe Lonsdale and Peter Thiel? Yea his opinion is irrelevant.

    • mindslight 21 hours ago

      > Should our military be regulated by our elected leaders

      Utterly fallacious. Trump is not a leader, rather he is a divider. Nor was he elected to act as a dictator unbeholden to the Constitution or the courts. Corporate control is indeed terrible, but autocratic authoritarianism is worse. This gradient is shown by how it is only the rare company trying to impart some kind of restraint which is being taken to task.

      It's also pretty amazing how no matter which societal institution we try to invoke to put the brakes on the fascists, we're invariably told that the "proper approach" is actually something else, usually settling on simply waiting for an election, some time down the road, maybe. Are we supposed to believe that elections are the only institution our society has? The fascists won a single election, and so we're told that supposedly serves as a mandate for doing whatever they'd like to our country for the next four years? Yeah, no, fuck off.

dluan 19 hours ago

oops turns out you will all be divided

paradoxyl 16 hours ago

More Far Left treason, documented.

ReptileMan 17 hours ago

It is really nice to see employees creating lists for the next round of layoffs themselves.

csneeky 18 hours ago

Claude is better than GPT for a lot of things atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?

blaze998 21 hours ago

December 14, 2024

>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.

>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.

>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.

...

keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.

lazzlazzlazz 18 hours ago

The signatories of this site are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.

As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but made by laws and legitimate government.

[1]: https://x.com/UnderSecretaryF/status/2027594072811098230

jurschreuder 15 hours ago

They always already wanted it to be Grok, Grok is the only, what they call "not woke AI".

lovich a day ago

You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”

I appreciate the sentiment but don’t preconcede to your opposition by using their framing.

  • uniq7 21 hours ago

    In this case I think the opponents made a huge mistake by calling themselves Department of War, and it's something that can be exploited.

    Department of Defense was the actual lie, the newspeak term. They were not really defending anything, they were using military power globally for pursuing economic interests. However, it was easy to convince people that the whole endeavor was a good thing, because defending your country against the baddies is good, and you should support anyone doing that (otherwise you'd be a traitor!). Thank you for your service (defending us).

    On the other hand, the term Department of War is hard to sell, because most people don't want to participate in a war or support someone who wants to start one. Thank you for your service... invading other countries? killing and raping innocents? ransacking resources?

    This is an irrelevant detail, but if I'd read the title "Department of Defense vs. Meta", I'd first think Meta is leaking confidential info to other countries. However, if I'd read "Department of War vs. Meta", I'd think Meta doesn't want to promote an unnecessary war.

  • Vaslo 19 hours ago

    "Legally Invalid" lol - what?

    • lovich 15 hours ago

      Yeah, it takes an act of Congress to rename a part of the government. Normally it’s a milquetoast event like renaming a post office, but this admin thinks the law doesn’t apply to them.

      Currently the government executive branch is claiming they have that right and the legislative branch can get fucked.

      I am taking advice from the current executive admin around names and continuing to call the Department of Defense by their biological name.

  • mulmen 21 hours ago

    I'm disappointed Anthropic made this mistake as well.

uwagar 11 hours ago

isn't the pentagon just asking for total access to the source code and data silos of anthropic and openai... which we can't ask for because it's proprietary software?

amelius 13 hours ago

Hegseth is discovering the shittiness of the SaaS model.

senderista 18 hours ago

"We hope our leaders will put aside their differences and stand together"

nullbyte a day ago

"He will not divide us!"

  • leonflexo 21 hours ago

    What's that, a little speaker?

  • nom a day ago

    I miss those times :(

    • xeonmc a day ago

      Club Penguin was a gem. Now all we get are Roblox.

HardCodedBias 16 hours ago

So much insanity.

Anthropic wanted a veto on use of force by USG. That is intolerable, no private party can have a veto over the sovereign. It is that simple.

Anthropic should have just walked away (and taken the settlement lumps) when they realized that the USG knew. But no, they started some crazy campaign.

This is so irrational of Anthropic. Purchasing managers across the US (and the world) have to understand now that while Anthropic has the best model on the planet, it is not rational (or, if you prefer, it is not rational in ways commonly understood). It is a risk and must be treated as such.

moogly 20 hours ago

We have international laws and rules of war. We have weapon treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.

So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.

https://www.state.gov/bureau-of-arms-control-deterrence-and-...

rybosworld 8 hours ago

I don't love talking politics on this site. Hackernews has done a pretty decent job of staying non-political and I think that's been a positive thing.

AI is re-shaping American society in a lot of ways. And this is happening at a time where the U.S. is more politically divided than it's ever been. People who use LLMs regularly (most SWEs at this point) can understand the danger signs. The bad outcomes are not inevitable. But the conversations around this cannot only be held in internet forums and blogposts.

Hackernews is an echo chamber of early adopters of tech. The discussions had here don't percolate to the general population.

I believe many of us have a duty to make this feel real to the less technical people in our lives. Too many folks have an information filter that is one of Fox News/CNN/MSNBC. Fox is the worst on misinformation. The others are also bad. Their viewers will not hear, in any clear way, how the Trump admin is trying to bully AI companies into doing what it wants. This will be a headline or an article. A footnote not given the attention it deserves.

Plainly: there is an attempt to turn AI into a political weapon aimed at the general population. Misinformation and surveillance are already out of control. If you can, imagine that getting worse.

This feels like one of those hinge moments. If you can, have real-life conversations with people around you. Explain what's at stake and why it matters now, not later.

HWNDUS7 17 hours ago

Sweet. Looking forward to another CTF season of He Will Not Divide Us.

I love performative acts of wealthy Silicon Valley drags.

ineedaj0b 14 hours ago

really dumb. you don’t win this

verdverm 21 hours ago

Use the feedback forms within their platforms to let the companies know your thoughts

fzeroracer 21 hours ago

It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?

That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.

imiric 15 hours ago

The levels of irony in this case are staggering.

The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized, yet when the government wants to use it for what governments do best—which was reasonable to expect given the corporate-government symbiosis we've been living in for decades—then it's a step too far?

Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.

alfiedotwtf 20 hours ago

It would be funny in the end if the only ones left to not say no to Trump were Alibaba

krautburglar 20 hours ago

You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, 3) changed your climate story (lie) on a dime, and started putting the horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.

Of course they were going to use it for military purposes you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.

duped 20 hours ago

The Department of War doesn't exist, don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.

jackblemming 21 hours ago

So big tech courted Trump with millions in donations, and now that the big bully they supported is bullying them... we're supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?

verisimi 18 hours ago

It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.

However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.

How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?

Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.

nilespotter 18 hours ago

These models are weapons whether the frontier provider founders and their trite and lofty mission statements like it or not.

Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military.

You guys are batshit insane.

remarkEon 21 hours ago

This whole episode is very bizarre.

Anthropic appears to be situating themselves where they are set up as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:

>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.

So, the "any lawful use" language makes me think that Dario et al. have a basket of uses in mind that they feel should be illegal but currently are not, and they want to condition further participation in this defense program on not being required to engage in activity they deem ought to be illegal.

It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this in the terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary these people are all thinking about here is the PRC, which does not give a single shit about anyone's feelings on whether it's ethical to allow a drone system to drop ordnance on its own.

[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...

sensanaty 12 hours ago

I'm going to copy a comment I made in a related thread:

I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.

"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox. Also, once our models improve enough then we'll be sending in The Borg to autonomously target our Enemies™"

I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.

Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".

One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.

nobodywillobsrv 17 hours ago

It really feels like I am no longer impressed with Anthropic's safety work.

Do they have even a basic understanding of the different regimes of safety, and of what allegiance to one's own state means?

It would be fine to say they are opting out of all forms of protection against adversaries.

But it feels like just more insanely naive tech bro stuff.

As someone outside the tech bro bubble, in fintech in London: can somebody explain this in a way that doesn't suggest these are kids in a playground who think there is no such thing as the wolf?

Again, opting out and specializing in tech that you will provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.

politician 19 hours ago

I simply do not understand why American tech companies and their employees raise a hue and cry about supporting the military. For those of you who support their position: have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon next-day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?

It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.

  • rectang 15 hours ago

    > “I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very freedom that I provide, then questions the manner in which I provide it.”

    — Colonel Jessup

    • politician 6 hours ago

      No individual, whether a colonel or a CEO, has inherent authority over national security decisions. Authority flows through democratic institutions. A contractor can choose whether to participate, but national defense policy is determined by elected institutions, not private executives. If society believes AI should or should not be used for certain military purposes, the venue for that decision is democratic governance not unilateral corporate refusal or approval.

      On a CBS interview this morning, Dario defended his position with the claim that he must act because "Congress is slow." CEOs can and should make decisions about what their companies build or refuse to build. What they cannot do is substitute their judgment for the constitutional processes that govern national security. We must not vest de facto policy control in unelected corporate leaders.

  • dingi 18 hours ago

    It really shows how far the HN crowd is from reality.

hakrgrl 19 hours ago

1.5 hours after this was posted, Sam Altman stated openai will work with the DoW.

So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "

  • busko 19 hours ago
    • andai 17 hours ago

      >Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

      I don't get it. Aren't these the same things that Anthropic was trying to negotiate?

      Edit: it was explained elsewhere in this thread:

      https://news.ycombinator.com/item?id=47188473#47190614

    • jamiequint 17 hours ago

      WTF is this garbage site?

      • nikolay 17 hours ago

        It's for people who want to read Twitter/X while trying so hard to convince themselves that they don't.

        • zahlman 17 hours ago

          > It's for people who want to read

          individual posts on Twitter/X without requiring JavaScript and without being fed a sidebar full of algorithmic recommendations.

        • busko 17 hours ago

          It's for people who want the context of what's going on here who have neither the time nor stupidity to be on X.

          I presume you're on X so no offence to you directly.

      • esseph 17 hours ago

        it mirrors what is on x.com

  • Gigachad 19 hours ago

    Something doesn’t make sense here. His tweet claims he has exactly the same restrictions that Anthropic had.

    • skissane 18 hours ago

      This tweet (from Under Secretary of State Jeremy Lewin) explains it:

      https://x.com/UnderSecretaryF/status/2027594072811098230

      https://xcancel.com/UnderSecretaryF/status/20275940728110982...

      The OpenAI-DoW contract says "all lawful uses", and then reiterates the existing statutory limits on DoW operations. So it basically spells out in more detail what "all lawful uses" actually means under existing law. Of course, I expect it leaves interpreting that law up to the government, and Congress may change that law in the future.

      Anthropic wanted to go beyond that. They wanted contractual limitations on those use cases that are stronger than the existing statutory limitations.

      OpenAI has essentially agreed to a political fudge in which the Pentagon gets "all lawful uses" along with some ineffective language which sounds like what Anthropic wanted but is actually weaker. Anthropic wasn't willing to accept the fudge.

      • qdotme 18 hours ago

Well, or just the possibility of future-proofing the agreement in favor of the US government, as well as walking back the slippery slope of "no autonomous lethality" and "no mass surveillance".

The former grants Congress the ability to change the definition of "all lawful use" as democratically mandated (if war is officially declared, if martial law is officially declared).

The latter is subtle. There can exist human responsibility for lethal actions taken by a fully autonomous AI - the individual who deploys it, for instance, can be made responsible for the consequences even if each individual "pulling of a trigger" has no human in the loop (unacceptable from Dario's PoV).

Similarly, and less subtly, acceptance of foreign mass surveillance and of domestic surveillance (as long as it's lawful and doesn't meet the unlawful mass-surveillance limits!) seems to be more in the Pentagon's favor.

Whether we like it or not, we're heading into some very unstable times. Anthropic wanted to anchor its performance to stable (maybe stale) social norms; the Pentagon wanted an AI provider it could rely on even as those norms change.

      • squarefoot 17 hours ago

        "All lawful uses" has no meaning when a malignant narcissistic sociopath in power controlled by ruthless rich psychopaths can now rewrite every law at will.

      • PakG1 18 hours ago

        Because the US government has such a great track record on ensuring that this kind of stuff is only done legally with the utmost integrity. /s

    • Jensson 19 hours ago

      Sam probably told them they can renegotiate those restrictions in a year or so when the drama has died down.

      • patcon 18 hours ago

        yeah, something shady. i don't trust sam at all.

        i once ran into someone in london in 2023 who was doing their thesis on AI regulation. they had essentially ended up doing a case-study on sam. their honest non-academic conclusion (which they shared quietly) was that they were absolutely terrified of sam altman.

        fear is one of those signals we ought to listen to more often

      • m3kw9 18 hours ago

It's not shady; the systems are not ready for that kind of task, especially autonomous hunting. It's smart negotiation. Plus, Sam would have used the Anthropic situation against them, pointing out that you can't designate all the top American AI companies as supply-chain risks. It would be complete idiocy to do that anyway.

        • qdotme 18 hours ago

          Ready at what level, though. The subtleties are what matters.

It's well established that belligerents can use mines, separating the tactical decision to deploy for purposes of area denial from the split-second lethal decision (if one can stretch that definition) to detonate in response to a triggering event.

Dario's model prohibits using AI to decide between an enemy combatant and an innocent civilian (even if the AI is bad at it, it is better than just detonating anyway); Sam's model inherits the notion that the "responsible human" is the one who decided to mine that bridge, and the AI can make the kill decision.

How is that fundamentally different from a future war in which an officer makes the decision to send a bunch of drones up, but the drones themselves take on the lethal choice of enemy/ally/non-combatant engagement without any human in the loop? ELI5 why we can't view these as smarter mines.

          • puchatek 16 hours ago

            It's different because we are talking about a technology that we might lose control over at some point. Those drones in your example might make an entirely different choice than what you anticipated when you let them take off.

    • labrador 18 hours ago

This is actually a government bailout of OpenAI. Investors gave it a bunch of money earlier, knowing this was going to happen. Greg Brockman is a major Republican donor for 2026. Nice for OpenAI.

    • ddtaylor 19 hours ago

      PR spin/lying while behind closed doors agreeing to it. What's hard to understand about OpenAI lying?

      Altman publicly claimed he had no financial stake in OpenAI to emphasize his mission-driven focus. In 2024 it was revealed that Altman personally owned the OpenAI Startup Fund.

      In May 2024, actress Scarlett Johansson accused Altman of intentionally mimicking her voice for ChatGPT's "Sky" persona after she had explicitly declined to work with them.

      When OpenAI’s aggressive non-disparagement agreements were leaked, which threatened to strip departing employees of all their vested equity (potentially millions of dollars) if they criticized the company, Altman claimed he was unaware of the "provision."

    • gritspants 19 hours ago

My theory is that they both went through normal procurement processes. At some point, one of Palantir's forward-deployed sales agents slapped someone's arm at the golf course and said: yes, we can autonomously kill with our AI agents. Anthropic, whose kind of 'AI' had little to do with that use case, declined.

      • jaco6 19 hours ago

        [dead]

    • straydusk 19 hours ago

      I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

      However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

      * Make a negotiation personal

      * Emotionally lash out and kill the negotiation

      * Complete a worse or similar deal, with a worse or similar party

      * Celebrate your worse deal as a better deal

      Importantly, you must waste enormous time and resources to secure nothing of substance.

      That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

      • spuz 17 hours ago

You're missing an important part of the negotiation - Trump must benefit personally in some way. In this case, Greg Brockman gave by far the biggest single donation ($25m) to Trump's MAGA PAC in September last year.

    • foobarqux 19 hours ago

      No, the difference is that the government agrees to no "unlawful" use as determined by the government.

      Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.

    • Tadpole9181 19 hours ago

      Well tweets aren't legally binding, so chances are he's just outright lying so they can have their cake (DoD contracts) and eat it too (no bad PR)

      • jkaplowitz 18 hours ago

        > Well tweets aren't legally binding

        There's nothing in general about a tweet that makes it any more or less legally binding than any other public communication, and they certainly can be used in legally binding ways. But sure, a simple assertion to the public from the CEO of a privately held company about what a separate contract says is not legally binding - whether through tweet, blog, press release, news interview, or any other method.

      • sudo_cowsay 18 hours ago

Companies like saying things that make it look like they aren't doing anything bad, but then they decide to do exactly what they said they wouldn't.

e.g. Google Project Maven, Microsoft HoloLens (military), and much, much more

      • nurettin 17 hours ago

This is so funny to me, especially since Elon Musk had to buy Twitter due to his tweets.

        • Tadpole9181 8 hours ago

          > Especially since Elon musk had to buy Twitter due to his tweets.

          Okay, yes, if you openly and directly state a unilateral contract offer and you're already in trouble with the SEC, Tweets can be legally binding.

    • moralestapia 19 hours ago

      Makes 100% sense.

      They said yes to the same thing.

      • karmasimida 19 hours ago

        Dario is being ruled out due to ideological standing

        Makes perfect sense

        • nandomrumber 17 hours ago

          Yep.

          Everyone is over thinking it.

          There would have been a conversation between Hegseth and Trump that went something like:

          This guy thinks he can tell us what we can and can’t do.

          Get rid of him.

          It’s that simple.

          • karmasimida 12 hours ago

He is a horrible public presenter. He comes across as nervous and validation-seeking, yet it would be stupid of you not to trust him.

He lacks confidence yet feels incredibly arrogant.

With that exterior he would succeed in academia, as the head of some prestigious university, not as CEO of an AI company.

  • nikolay 17 hours ago

    He's the reason why many people avoid OpenAI as he is among the top 3 most untrustworthy people in tech!

    • nashashmi 17 hours ago

      Zuckerberg is number one?

    • LPisGood 17 hours ago

      Who are the other two?

  • RobLach 19 hours ago

    So all these OpenAI signers are resigning, or...?

    • jalapenos 18 hours ago

      Why only have the cake when you can eat it too

  • mcs5280 18 hours ago

    Remember when they removed him for not being consistently candid?

    • jalapenos 18 hours ago

      And then Microsoft forced him back in on the grounds of: he's a scumbag but he's our scumbag so he's untouchable

  • xtracto 17 hours ago

When I started reading all this news, the thought that came to my mind was: how sweet of these companies to try this, but unfortunately I am sure that other countries advancing AI, like China (DeepSeek, GLM, etc.) or Russia, or whoever, WILL have their companies' AI at their disposal.

Unfortunately, this is the new arms race and race to the moon all rolled into one.

  • neya 18 hours ago

    This is not about wars or winning contracts. If you know about Sam's strategies - It's just business. This deal ensures Anthropic doesn't have the financial cushion that OpenAI desperately needs (they just raised billions, also trending on HN). Is it ethical? Probably not. But, all is fair in love and war (proverb).

    • puchatek 17 hours ago

      The deal was only possible because anthropic stayed by their convictions. OpenAI didn't have agency in that. You're making it sound like Altman orchestrated the whole thing.

      • neya 16 hours ago

        > You're making it sound like Altman orchestrated the whole thing.

        Not at all, as a matter of fact I'm just stating what you're stating. It's just business.

  • jalapenos 18 hours ago

    Altman is a snake who uses words purely instrumentally, and this is well known.

He basically takes advantage of people's limited memories and their default assumption that when a person says something, it's honest.

  • ahf8Aithaex7Nai 17 hours ago

    I dislike the style of Altman's language about as much as I dislike the bullshit language used in politics or the self-incriminating, overly specific denials used by prominent figures to defend themselves against criminal allegations: “I have never had sexual relations with anyone under the age of 18 outside of my own family.”

    The language is so coded that the many places where the core statement must be negated stand out like a sore thumb.

  • m3kw9 18 hours ago

    Learn to read. “ Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

    • SlightlyLeftPad 17 hours ago

      Meanwhile, the mass surveillance is outsourced to Flock

  • SilverElfin 18 hours ago

    Greg Brockman who cofounded OpenAI is the biggest donor to Trump’s PAC. Altman claims they kept the same restrictions as Anthropic essentially. So my conclusion is OpenAI successfully bribed the government into ditching Anthropic and viciously attacking them by abusing their power (supply chain risk).

    Probably the most corrupt way of killing a competitor I’ve heard of.

  • stinkbeetle 19 hours ago

    [flagged]

    • hshdhdhj4444 19 hours ago

      You’re right.

      The people who actually know stuff about the world are reality TV stars, Fox News hosts, and podcasters just asking questions.

      Those are the people with actual knowledge.

    • Jimmc414 19 hours ago

      What else can they do? Would you recommend they stay silent? It sounds like they are no longer the gatekeepers of this technology or they never were to begin with.

      • stinkbeetle 19 hours ago

        I would recommend they start by understanding the landscape and developing strategies that are more suited for the actual world as it is, not the naive fantasy land they believe it is.

        Coming out publicly playing their hand like it's a royal flush when it's a 7 high and their cards are facing their opponent clearly wasn't going to do anything. The cynical take is they aren't that naive and this just gives them plausible deniability within their social circles when they are interrogated as to why they work for these corporations. But I like to give the benefit of the doubt.

    • teaearlgraycold 19 hours ago

      All they did was say they didn’t want their company to do something. They never said they had the power to ensure that.

      • stinkbeetle 16 hours ago

        Being disingenuous isn't a clever or interesting way to discuss a topic though.

  • senderista 18 hours ago

    "The world is a complicated, messy, and sometimes dangerous place."

    So you better just let the guys with the guns do whatever they want.

    • busko 18 hours ago

      Hoorah! shock and awe

mrcwinn 19 hours ago

OpenAI employees lol.

You’ve lost utterly and completely. Even if you, as an individual, are a good person.

drsalt 21 hours ago

[flagged]

  • paulryanrogers 21 hours ago

    What makes them appear childish in your view?

kledru a day ago

[flagged]

  • SanjayMehta a day ago

    To Infinity! And Beyond!

    Sorry, I couldn't resist.

    • kledru 14 hours ago

yeah, no problem, I made a lame joke in a frustrating situation. I would very much like the petition to have an effect.

nemo44x 19 hours ago

Correct. You will not be divided. You will likely be subtracted.

kopirgan 20 hours ago

We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.

charcircuit 21 hours ago

Imagine if a gun manufacturer sold a gun that you couldn't use against country X or Y. Private companies imposing such demands on our military should not be respected. A weapon that can randomly detect a false positive and shut itself down because it thinks you are using it wrong is a feature I would never want built in.

I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tool works like this. Every other tool is governed by the legal system which the people of the country have established.

  • dlev_pika 21 hours ago

    It sounds like you think that Anthropic is the first company regulating the use of their product. This is not a novelty whatsoever.

    • charcircuit 20 hours ago

      No, but I find it obnoxious as an end user.

      • Esophagus4 20 hours ago

        Then don’t create a mass surveillance program on Americans and you shouldn’t have to worry about it ;)

        • charcircuit 19 hours ago

          Have you not read the Usage Policy that regular people have to follow? For example, you are not allowed to use their API to automatically summarize your blog post and share the link on X as you are not allowed to make posts automatically.

      • hparadiz 20 hours ago

        These models will be able to run on a machine in your pocket locally within a few decades.

  • bcooke 21 hours ago

    Taking principled stands should absolutely be respected.

    • charcircuit 20 hours ago

      I can respect a stance while simultaneously calling out how much I dislike it.

  • WorkerBee28474 20 hours ago

    > Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country

    That kind of happens with F35s that the US sells to its allies.

    • tibbydudeza 10 hours ago

      Only Israel can make software upgrades and changes to their F35.

  • joshuamorton 21 hours ago

    > Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country.

The point here, of course, being that Anthropic is very specifically claiming not to be a gun manufacturer, and Hegseth's response is that the DoD (W?) will force Anthropic to build guns.

hakrgrl 20 hours ago

How cute they bought a domain and everything

infamouscow a day ago

[flagged]

  • hax0ron3 20 hours ago

    >The executive branch can categorize AI technology as equivalent to nuclear weapons technology.

    Theoretically, but this would run the risk of collapsing the US tech sector, which at this point is a significant part of the strength of the US economy, and thus making it likely that the Republicans will lose power in the next elections.

    • infamouscow 6 hours ago

      I don't view that as an additional new risk. Investors are already all-in on AI, despite being one geopolitical event away from apocalypse regarding Taiwan.

angusik 17 hours ago

I'm here to support Pentagon (: