supermdguy 5 hours ago

Bizarre reading this thread; it feels like it's the OP's Claude responding to the other posters’ Claudes

  • phreack 4 hours ago

    That was my immediate impression too! It feels like it's all AI maximalists who feel the need to filter every interaction through an LLM. And the result looks and reads just like Moltbook.

TheTaytay 2 hours ago

This should be the top comment. The OP misunderstands the change and has their LLM write an exposé. The company responds with a well-reasoned explanation that it would actually cost MORE money if there were a global 1h default for ALL prompts. It gets downvoted and the pitchforks stay out because… I presume phrases like “cache read likelihood” sound like made-up fluff to the audience rather than an actual explanation?
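
For the skeptics, here's a rough back-of-envelope sketch of the argument (a minimal sketch, not their actual billing model: the 2x 1h-write and 0.1x read multipliers follow Anthropic's published cache pricing; the reads-per-write figure is the unknown the whole dispute hinges on):

    # Relative multipliers vs. the base input-token price. The 1h cache
    # write (2.0x) and cache read (0.1x) figures match Anthropic's
    # published cache pricing; everything else is illustrative.
    BASE = 1.0       # uncached input tokens
    WRITE_1H = 2.0   # writing a prompt into the 1h cache
    READ = 0.1       # reading a cached prompt back

    def relative_cost(reads_per_write: float) -> float:
        """Cost of one cache write plus r cache reads under a global
        1h default, relative to the same traffic with no caching."""
        cached = WRITE_1H + reads_per_write * READ
        uncached = BASE + reads_per_write * BASE
        return cached / uncached

    for r in (0.0, 0.5, 1.2, 3.0):
        print(f"{r:.1f} reads/write -> {relative_cost(r):.2f}x uncached")
    # 0.0 reads/write -> 2.00x uncached
    # 0.5 reads/write -> 1.37x uncached
    # 1.2 reads/write -> 0.96x uncached
    # 3.0 reads/write -> 0.57x uncached

Break-even lands around 1.1 reads per write, so if most prompts never get a follow-up within the hour (low “cache read likelihood”), a blanket 1h default really does cost more.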

  • glenngillen 52 minutes ago

    Because it is made-up fluff for this audience. There is a wall of data, evidence, and anecdotes from many people pointing to the exact problem here and giving concrete examples of how this absolutely does cost more.

    And an admittedly uncharitable TL;DR of the response is: "yeah... but most users just ask one thing and barely use the product, so they never need the cache. Also trust me bro".

    Which, sure, fine, I'm willing to bet that's technically true. I'd also bet those users never previously came close to hitting their session limits, because their usage is so low. But now people who were previously considered low-to-moderate users are hitting limits within minutes.

    They may as well have just said "we've looked at the data and we're happy with this change because it's a performance improvement for people we make the most margin on. Sucks to be you".

dnw 5 hours ago

Interesting that they actually acknowledge there was a change on March 6th. Kudos to the prompt analysis work that uncovered it!