overgard 26 minutes ago

What I don't understand is how this agent is still running. Does the author not read tech news (seems unlikely for someone running openclaw)? Or is this some weird publicity stunt? (But then why is nobody stepping forward to take credit?)

  • simlevesque 18 minutes ago

    If I've learned one thing in life, it's that some people are totally shameless.

  • yoyohello13 19 minutes ago

    Likely the LLM operator is just a 'likes to see the world burn' type.

  • potsandpans 18 minutes ago

    > Or is this some weird publicity stunt? (But then why is nobody stepping forward to take credit?)

    Indeed, that's a good question. What motivations might someone have to keep this running?

Morromist 38 minutes ago

Whether or not it's true, we only have to look at Peter Steinberger, the guy who made Moltbook (the "social media for AI") and then got hired amid great fanfare by OpenAI, to know that there is a lot of money out there for people telling exciting stories about AI. Never mind that much of the media attention on Moltbook was based on human-written posts that were faking AI.

I think Mr. Shambaugh is probably telling the truth here, as best he can, and is a much more above-board dude than Mr. Steinberger. MJ Rathbun might not be as autonomous as he thinks, but the possibility of someone's AI acting like MJ Rathbun is entirely plausible, so why not pay attention to the whole saga?

Edit: Tim-Star pointed out that I mixed up Moltbook and Openclaw. My mistake. Moltbook used AI agents running openclaw but wasn't made by Steinberger.

  • mentalgear 2 minutes ago

    At this point OpenAI seems to be scrambling to sustain its own hype and needs these kinds of pure-PR acquisitions to justify itself amid dense competition; otherwise, the bubble risks bursting. Hiring someone who built a product as secure as Swiss cheese but racked up "stars" from a wave of newly minted "vibe-coders" fits perfectly into their short-term strategy. It buys them another month or two of momentum before figures like S(c)am Altman and others can exit at the peak, leaving everyone else holding the bag.

  • tim-star 24 minutes ago

    steinberger didnt make moltbook fyi, some other guy did. steinberger just made openclaw.

hfavlr 36 minutes ago

Open source developer is slandered by AI and complains. Immediately people call him names and defend their precious LLMs. You cannot make this up.

Rathbun's writing style is very likely AI, and the speed with which it gathered information for the hit piece also points to AI. Whether the bot did this fully autonomously or not does not matter.

It is likely that someone did this to research astroturfing as a service, including the automatic generation of oppo files and spread of slander. That person may want to get hired by the likes of OpenAI.

kevincloudsec 19 minutes ago

We built accountability systems that assume bad actors are humans with reputations to protect. None of that works when the attacker is disposable.

tantalor 9 minutes ago

Looking through the staff directory, I don't see a fact checker, but they do have copy editors.

https://arstechnica.com/staff-directory/

The job of a fact checker is to verify that details such as names, dates, and quotes are correct. That might mean calling up the interview subjects to confirm their statements.

It comes across as though Ars Technica does no fact checking. The fault lies with the managing editor. If they just assume the writer verified the facts, that is not responsible journalism; it's just vibes.

jjfoooo4 13 minutes ago

My main takeaway from this episode is that anonymity on the web is getting harder to support. Some forums exist precisely so that people can talk to other humans, and as AI agents get increasingly good at passing as human, we're going to see some products turn to identity verification as a fix.

Not an outcome I'm eager to see!

  • alrs 7 minutes ago

    One could build up a reputation with a completely anonymous PGP key. That was somewhat the point of USENET ca. 1998.
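
    A minimal sketch of the mechanism alrs is describing (an illustration only, not anything from this thread: it assumes Python with the third-party `cryptography` package and uses Ed25519 signatures as a stand-in for an actual PGP key). The point is that reputation attaches to whoever keeps producing valid signatures under the same public key, with no legal identity involved:

        # Illustrative sketch: Ed25519 via the 'cryptography' package stands in
        # for a PGP key. The idea is the same as on USENET: sign every post,
        # publish the public key, and let reputation accrue to the key, not a name.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # The pseudonymous author generates a keypair once and keeps the private half.
        private_key = Ed25519PrivateKey.generate()
        public_key = private_key.public_key()  # published alongside every post

        post = b"A post signed under a long-lived pseudonym."
        signature = private_key.sign(post)

        # Any reader can check that a new post comes from the same key holder.
        try:
            public_key.verify(signature, post)
            print("Valid signature: same pseudonymous author as before.")
        except InvalidSignature:
            print("Invalid signature: not the established identity.")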

giancarlostoro an hour ago

Ars goofing with AI is why I stress, repeatedly: always validate the output, test it, confirm the findings. If you're a reporter, you'd better scrutinize any AI output before you publish it, because otherwise you are only producing fake news.

WolfeReader 37 minutes ago

The Ars Technica journalist's account is worth a read. https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p

Benji Edwards was, is, and will continue to be a good guy. He's just exhibiting a (hopefully) temporary over-reliance on AI tools that aren't up to the task. Any of us who use these tools could make a mistake of this kind.

  • Aurornis 7 minutes ago

    > He's just exhibiting a (hopefully) temporary over-reliance on AI tools that aren't up to the task. Any of us who use these tools could make a mistake of this kind.

    Technically yes, any of us could neglect the core duties of our job, outsource them to a known-flawed operator, and hope that nobody notices.

    But that doesn't minimize the severity of what was done here. Ensuring accurate and honest reporting is the core of a journalist's job. This author wasn't doing that at all.

  • overgard 14 minutes ago

    I feel bad for the guy, but... a tech journalist whose beat is AI should know much better. I'd be a lot more forgiving if this were a small publication by someone who didn't follow AI.

  • fantasizr 16 minutes ago

    Using a tool that adds unnecessary risk to your professional reputation and livelihood is, of course, not worth it.

  • tim-star 22 minutes ago

    lol this feels a little bit suspect to me. "i was sick, i was rushing to a deadline!" im not saying the guy should lose his journalist license and have to turn in his badge and pen but seems like a bit of a flimsy excuse meant to make us forgive him. hope hes feeling better soon!

  • thenaturalist 15 minutes ago

    Not proofreading quotes you dispatched an AI to fetch, from a site that blocks LLM scraping, so the quotes came back made up?

    For a senior tech writer?

    Come on, man.

    > Any of us who use these tools could make a mistake of this kind.

    No, no not any of us.

    And, as Benji will know himself, certainly not if accuracy is paramount.

    Journalistic integrity - especially when quoting someone - is too valuable to be rooted in AI tools.

    This is a big, big L for Ars and Benji.

moralestapia an hour ago

[flagged]

  • wk_end an hour ago

    Based on:

        MJ Rathbun operated in a continuous block from Tuesday evening through Friday morning, at regular intervals day and night. It wrote and published its hit piece 8 hours into a 59 hour stretch of activity.
    
    Not to mention their website (https://crabby-rathbun.github.io/mjrathbun-website/) and their behaviour on GitHub (https://github.com/crabby-rathbun), it sure seems like MJ Rathbun is either an AI agent or a human being with an AI agent representing them online.

potsandpans an hour ago

[flagged]

  • dang 13 minutes ago

    Personal attacks aren't allowed on HN, so please don't.

    Also, can you please stop posting flamebait and/or unsubstantive comments generally? You've unfortunately been doing this repeatedly, and we end up banning such accounts.

    https://news.ycombinator.com/newsguidelines.html

  • jonners00 41 minutes ago

    > His posts and tone have been so histrionic

    Er, pretty much the opposite.

  • gavmor an hour ago

    I feel he has been laudably even-keeled about the whole thing.

  • wk_end an hour ago

    What a weird, victim-blame-y thing to say.

    Something genuinely shitty was done to this guy by an LLM, and as an open source maintainer he's probably already kind of pissed about what LLMs are doing to the world. Then another shitty thing was done to him by Ars' LLM! Of course he's thinking about it a lot. Of course he has thoughts about the consequences of AI on the future. Of course he wants to share his thoughts.

    Just curious: do you also think that the breathless AI hype bots, who've been insisting for about five years and counting that LLMs are going to replace everyone and destroy the world any day now, and who have single-handedly ballooned the stock market (mostly Nvidia) into a massive bubble, are also histrionic, milking things for engagement, and need to talk to a therapist?

    • tim-star 19 minutes ago

      i think i sort of skimmed the hit piece but what exactly was so shitty about it?

      im not saying this dude is histrionic but he sure is generating a lot of front page HN posts about something i was ready to forget about a week ago.

      obviously AI has become such a lightning rod now that everyone is upset one way or the other but this seems a bit like small potatoes at this point. forest for the trees.

      • wk_end 2 minutes ago

        I guess "shitty" is in the eye of the beholder, but having a pretty vituperative screed written against me (accusing me of being "insecure", "threatened", fixated on "ego" and "control", "weak", "an obstacle", and "fucking absurd") would feel pretty fucked up and lousy, I imagine, even if I knew it was machine-generated.