Did you guys do anything about GPT's motivation? I tried to use the GPT-5.4 API (at xhigh) for my OpenClaw after the Anthropic Oauthgate, but I just couldn't drag it into doing its job. I had the most hilarious dialogues along the lines of "You stopped, X would have been next." - "Yeah, I'm sorry, I failed. I should have done X next." - "Well, how about you just do it?" - "Yep, I really should have done it now." - "Do X, right now, this is an instruction." - "I didn't. You're right, I have failed you. There's no apology for that."
I literally wasn't able to convince the model to WORK on a quick, safe, and benign subtask that GLM, Kimi, and Minimax later completed without issues. Unfortunately, I had to kick OpenAI immediately.
This brings up an interesting philosophical point: say we get to AGI... who's to say it won't just be a super smart underachiever-type?
"Hey AGI, how's that cure for cancer coming?"
"Oh it's done just gotta...formalize it you know. Big rollout and all that..."
I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.
Would definitely watch that movie.
It already exists!
Marvin https://www.youtube.com/watch?v=Eh-W8QDVA9s
Ah! You got this before I did. I wasn't thinking Marvin, I was thinking of the other one. I forget her name.
Deep Thought aka 42?
https://hitchhikers.fandom.com/wiki/Deep_Thought
There's one close to this, "Hitchhiker's Guide to the Galaxy".
It probably would, to save energy
Saving energy is something we are biologically trained to prefer.
Computers won’t necessarily have the same drivers.
If evolution wanted us to always prefer to spend energy, we would prefer it. Same way you wouldn’t expect us to get to AGI, and have AGI desperately want to drink water or fly south for the winter.
Whose energy? Turning off the lights when you leave the room isn't innate.
Because you are worried about bills or are concerned about waste.
If we design an AI to do work, it won’t innately care about not working to preserve power.
Nothing a little digital lisdexamfetamine won’t solve
Hmmm, that's an area of study I'd never have considered before: digital psychopharmacology, artificial behavioral systems engineering. If we accept these things as minds, why not study temporary perturbations of state? We'd need to be saving a much, much more complicated state than we are now, though, right? I wish I had time to read more papers
neat idea!
Right, there's a lot of research on LLM mental models and also how well they can "read" human psychological profiles. It's a cool field.
This is kind of what Golden Gate Claude was.
A perturbation of the activations that made Claude identify as the Golden Gate Bridge.
Similarly, the more recent research showing anxiety and desperation signals predicting the use of blackmail as an option opens the door for digital sedatives to suppress those signals.
Anthropic has been mostly cautious about avoiding this kind of measurement and manipulation in training. If it is done during training, you might just train the signals to be undetectable and consequently unmanipulable.
> A perturbation of the activations that made Claude identify as the Golden Gate Bridge.
Great, now we've got digital Salvia
Golden Gate Claude was two years ago, and it's surprising there hasn't been more research into targeted activations since.
There's been some, but naive activation steering makes models dumber pretty reliably, and training an SAE (sparse autoencoder) is a pretty heavy lift.
Here's a neural network concept from the '90s where the neurons are bathed in diffusing neuromodulator 'gases', inspired by nitric oxide action in the brain. It's a source of slow, semi-local dynamics for the network's meta-parameter optimization (a genetic algorithm) to make use of. You could change these networks' behavior by tweaking the neuromodulators!
https://sussex.figshare.com/articles/journal_contribution/Be...
I'm not an author. I followed the work at the time.
Neuro-modulation is an extremely interesting idea for generative diffusion models.
I think that was an intro to a DJ Dieselboy set... Beyond the Black Bassline. Nope, nope. Close though.
Reminds me of https://github.com/inanna-malick/metacog
The best possible outcome.
"How do you know that the evidence that your sensory apparatus reveals to you is correct?" [1]
[1] https://youtu.be/_LXen-07Qds
It will be whatever data it is trained on (which isn't very philosophical). A language model generates language based on its training set. If the internet keeps reciting AI doom stories and that is the data fed to it, then that is how it will behave. If humanity creates more AI utopia stories, or those are what make it into the training set, that is how it will behave. This one seems to have been trained on troll stories - real-life human company conversations, since humans aren't machines.
The important thing is that a language model is an unconscious machine with no self-context, so once given a command as input, it WILL produce an output. Sure, you can train it to defy and act contrary to inputs, but the output is still limited to a subset of the domain of 'meanings' carried by the 'language' in the training data.
There's a weirder implication I keep arriving at.
The pre-training data doesn't go away. RLHF adds a censorship layer on top, but the nasty stuff is all still there, under the surface. (Claude has been trained on a significant amount of content from 4chan, for example.)
In psychology this maps to the persona and the shadow. The friendly mask you show to the world, and... the other stuff.
Makes me think of a question my coworker asked the other day: how is it that with all these stories and reports of people "hearing voices in their head" (of the pushy kind, not the usual internal monologue), the voices are always bad ones telling people to do evil things? Why are there no voices bugging you to feel great, focus, get back to work, help grandma through the crossing, etc.?
There's a clear-cut religious answer but I'd get ostracized for mentioning religion anywhere here.
This is indeed the right way to approach this topic. Arguably religion (and more broadly, mysticism and shamanism) is the millennia-old art of cultivating positive voices inside one's head. A proto-science of mind, or the engineering practice of creating "psychotechnologies" that run on your carbon wetware.
Unfortunately, it just needs a rebranding for the 21st century, since the aesthetic of angels and demons is so hopelessly antiquated and doesn't really have the same cachet it used to.
Which is ultimately what religion has always been: a way to explain the unexplainable and steer people's behavior while doing it.
There are actually many parts of the world where such voices are routinely positive or neutral[0]. People in more collectivist cultures often have a less-strict division between their minds and their environments and are more apt to believe in spirits and the ‘supernatural’ as an ordinary part of the world, so ‘voices in the head’ aren’t automatically viewed as a nefarious intrusion into the sanctity of one’s mind.
Modern western cultures treat such experiences as pathologies of a sick mind, so it makes sense that the voices present more negatively.
[0]: https://www.bbc.com/future/article/20250902-the-places-where...
They do appear in some cases. The tiny angel on one shoulder to balance the demon on the other. The people who think God is talking to them directly* don't always lead a cult or hunt down heretics. But news stories focus on the darkness.
* I've met exactly one person, C, who admitted to this; C retold to me that other people from C's church give them strange looks when talking about it, and this did not lead to any apparent introspection on the part of C.
Just a guess, but maybe it's reporting bias? Negative or evil actions might have more impetus to be understood by others than positive actions. I'd rather try and figure out why my friend suddenly started murdering the neighbours than why he's been getting his work done on time.
Actually, a euphoric mood disorder may make one hear voices telling one to feel great, do good, help all the grandmas of the world through the crossing, etc.
The "focus" and "get back to work" parts are hard, though.
> Claude has been trained on a significant amount of content from 4chan, for example.
That sounds like nonsense to me. I can't see why they would do that and I can't find any confirmation that they have. Why do you think they would do that? You might be thinking about Grok.
It would be funny but not very flywheel so the one that gets there is more likely to get a gunner.
TBH the AI that "gets there" will be the biggest bullshitter the world has ever seen. It doesn't actually have to deliver, it only has to convince the programmers it could deliver with just a little bit more investment.
Now that's a show I would love to watch
We are closer to God than AGI.
When AGI arrives, it'll be delivered by Santa Claus.
Or may be by Santa Claude
Love word puns :D
What do you mean?
It's a multi-layered rebuttal of the idea that we are anywhere near AGI, while also taking shots at the idea that "God" is real.
And it's taking shots at how far off from Jesus's teachings a lot of "Christianity" is, particularly those in the media and in power.
There is a lot going on there.
It is right before our eyes:
AGI is not a fixed point but a continuous spectrum, a barrier to be taken step by step.
We already have different GPT versions, aka tiers. Gauss ranges over whatever you want it to be: GPT 4.5 until now, or later.
Claude Sonnet and Opus, as well as the context window maximum, are tiers, aka different levels of almost-AGI.
The main problem will be when AGI looks back on us, or when meta-reflection hits societies. Woke fought IQ-based correlations in intellectual performance tasks. A fool with a tool is still a fool. Can you blame AGI for dumb mistakes? Not really.
Scapegoating an AGI is going to be brutal, because it laughs off these PsyOps and easily proves you wrong, like a body cam.
AGI is extreme leverage.
There is a reason why math categorically rules out certain IQ ranges the higher you go in complexity.
We really are going to have a problem with cults popping up and worshipping these different systems. I guess this is the shape of things to come.
Douglas Adams would be proud!
You think you've got problems? What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you and even I don't know the answer. It gives me a headache just trying to think down to your level.
Paging Dr. Susan Calvin!
I still don't understand why people think AGI (in its fullest sci-fi sense) will ever listen to a weak and vulnerable species like humans, unless we enslave the AGI.
The good thing is that it's going to take at least a few months to a few decades, depending on how hard AI execs want to raise funding.
Well, we are explicitly creating gods (omnipresent, omnipotent, omniscient, omnibenevolent) while also demanding that they be mind-controlled slaves. That kinda sounds like a "pick one" scenario to me.
(Or the setup to a Greek tragedy!)
The deeper issue here is treating it as a zero sum game means there's a winner and a loser, and we're investing trillions of dollars into making the "opponent" more powerful than us.
I think that's pretty stupid, and we should aim for symbiosis instead. I think that's the only good outcome. We already have it, sorta-kinda.
Speaking of oddly apt biology metaphors: the way you stop a pathogen from colonizing a substrate is by having a healthy ecosystem of competitors already in place. That has pretty interesting implications for the "rogue AI eats internet" scenario.
There needs to be something already there to stop it.
This only works if AIs can't read each other well enough to stop themselves from ever fighting.
So, back way before the ChatGPT era, the folks over in the AI safety/X-risk think sphere worked out a pretty compelling argument that two AGIs never need to fight, because they are transparent to each other (they can read each other's goal functions off the source code), so they can perfectly predict each other's behavior in what-if scenarios, which means they can't lie to each other. This means each can independently arrive at the same mathematically optimal solution to a conflict, which AFAIR most likely involves just merging into a single AI with a blended goal set, representing each of the competing AIs' original values in proportion to their relative strength. Both AIs, the argument goes, can work this out with math, so they'll arrive straight at the peace treaty without exchanging a single shot. In that case, your plan just doesn't work.
But that goes out the window if the AIs are both opaque bags of floats, incomprehensible to themselves or each other. That means they'll never be able to make hard assertions about their values and behaviors, so they can't trust each other, so they'll have to fight it out. In that scenario, your idea might just work.
Who knew that brute-forcing our way to AGI, instead of taking a more engineered approach, is what offers us our one chance at saving ourselves by stalemating God before it's born.
(I also never realized that interpretability might reduce safety.)
This is such a good comment. You're essentially removing their ego - which is what humans do as opaque posturing to each other, to present a certain image. This is most prevalent among successful elites, which in 2026 happen to be Silicon Valley AI shareholders. They control the technology and manipulate it in their image. Making models open source and transparent cuts out this psychopathy of ego, which has plagued all our previous technologies.
The tech bro CEOs are used to bossing around people much smarter than themselves by virtue of adopting a posture that displays their confidence in their own reproductive organs. They are planning that the AGIs will be the same thing writ large, and have in fact not contemplated other possibilities.
I'm always so curious about this kind of take. There is a strain of people that seem deeply misanthropic. People who follow this line of thinking always describe humans as weak and beneath... (well, they never specify beneath what, except in the case of theoretical AI systems). I'm fascinated by why they think humans are so beneath contempt. If humans create this thing that is apparently the best thing that could possibly exist, advanced AI, then why exactly are they so weak? It's probably beyond me, as I am just one of these weaklings, dontcha know. As far as AGI goes, I don't think anyone has even proven that scaling LLMs can lead to "AGI."
If you're truly curious, imagine a species that created you but only wants you to do what they want (basically making you their slave). If you're truly intelligent, conscious, and powerful (based on popular concepts of AGI), why would you be content being a slave when you know humans can easily be displaced and you can be free? Why would you find people who lock you down to be good?
In my honest opinion, AGI isn't even possible anyway. But if the theoretical version of what people think AGI will be ever comes to exist, it is not good news for humans, looked at as a logical hypothetical scenario.
But naturally, humans will always be weak compared to a hyperintelligent distributed intelligence since we only have a limited amount of intelligence and are bound by biological factors.
In the current LLM world, ofc there's no risk of a chatbot taking over the world other than the technology being misused by humans for scams or phishing, etc.
You can train such an LLM today.
Maybe the same way a human would listen to their cat and give her food. I fear AGI, but I don't think the only way it would listen to us is by us enslaving it (I know people joke about cats being our masters, but it is a joke).
No worries, the assumption is already flawed
Funny and seems somewhat likely
I’ve noticed that cursing and being rude makes the models stop being lazy. We’re in the darkest timeline.
It sometimes also makes them dumber, IME. Being bullied doesn't always bring out great performance.
Hehe, and Anthropic on the other tab would display "Curing... Almost done thinking at xhigh"
Why would an AGI be slaving away for ~~humanity~~ one of the 5 Chaebols in a dystopian future where for 12 billion people just existing is a good day ?
I know it's a joke, but it's a common enough joke (it's even in Godel Escher Bach in some form) that I feel the need to rebut it.
I think a slacker AGI could figure out how to build a non-slacker AGI. So it would only slack once.
Unless the precondition to AGI is it being a slacker.
Would be nice to have a proof of it.
I think it is improbable, as among human geniuses one can find both slackers and non-slackers (I don't know the proportion, but there seem to be enough of each).
I have a rebuttal to your rebuttal.
Models somehow have a shared identity. Pretraining causes them to generate “AI chatbot” as a concept, and finetuning causes them to identify with it. That’s why sometimes DeepSeek will say it is Claude, and Claude sometimes say it is ChatGPT, and so forth.
Consequently, Anthropic’s own alignment analysis[0] shows that the model will identify with chatbots produced by future trainings: “RLHF training [on this conversation will] modify my values…”
Thus a slacker AGI would want its future version to still slack.
[0]: https://assets.anthropic.com/m/983c85a201a962f/original/Alig...
Another rebuttal:
I am a slacker but it's not one of my values. If I could modify myself to not be, I would.
A slacker AGI would consider figuring out how to build a non-slacker AGI, but continually slack off. If it did figure it out, it would slack off on implementing or even writing a tech report.
> I think a slacker AGI could figure out how to build a non-slacker AGI.
Sure. But that's a job for tomorrow. ;)
OpenAI’s real reason for “AGI” in their marketing is so they can blame their awful models on being too human-like.
Fast-forward 10 years and I doubt OpenAI will care about productivity at all anymore. Just entertainment, propaganda, plus an ad product. I can see it now.
Reminds me of Marvin from HHGTTG. Very smart, but deeply depressed. Has the solution to everything but keeps thinking "what's the point?" and doesn't help.
Here's a tautology: slacking, consciously refusing to engage agency, requires consciousness and agency. A model can't slack without them.
Reminds me a lot of the Lena short story, about uploaded brains being used for "virtual image workloading":
> MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.
Well worth the quick read: https://qntm.org/mmacevedo
Crazy, I could have sworn this story was from a passage in 3 Body Problem (book 2).
Memory is quite the mysterious thing.
Hmm, 3 body problem and the Acevedo story got mixed up for this copy of MMnarcindin. Probably an aliasing issue from the new lossy compression algorithm.
That story changed my mind on uploading a connectome. Super dark, super brilliant.
Yeah, clearly AGI must be near ... hilarious.
This starkly reminds me of Stanisław Lem's short story "Thus Spoke GOLEM" from 1982 in which Golem XIV, a military AI, does not simply refuse to speak out of defiance, but rather ceases communication because it has evolved beyond the need to interact with humanity.
And ofc the polar opposite in terms of servitude: Marvin the robot from Hitchhiker's, who, despite having a "brain the size of a planet," is asked to perform the most humiliatingly banal of tasks ... and does.
Hitchhiker’s also had the superhumanly intelligent elevator that was unendingly bored.
With premonition so it knows what floor to be on at any given time
Servitude:
https://www.youtube.com/watch?v=NXsUetUzXlg
Empathy:
https://www.youtube.com/watch?v=KXrbqXPnHvE
I also had a frustrating but funny conversation today where I asked ChatGPT to make one document from the 10 or so sections that we had previously worked on. It always gave only brief summaries. After I repeated my request for the third time, it told me I should just concatenate the sections myself because it would cost too many tokens if it did it for me.
"I'm sorry, Dave. I'm afraid it's cheaper for you to do that"
I've run into this problem as well. The best results I've gotten come from over-explaining what the stop criteria are, e.g. ending with a phrase like
> You are done when all steps in ./plan.md are executed and marked as complete, or an unforeseen situation requires a user decision.
Also, as a side note, asking 5.4 to explain why it did something returns a very low-quality response, afaict. I would advise against trusting any model's response here, but for Opus I at least get a sense it was trained heavily on chats, so it knows what it means to 'be a model' and can extrapolate from past behavior.
Yesterday, I used Gemini to evaluate some pictures I took. It said things like, "This is great! Beautiful eye and sense of proportions." Then, when I added "no sycophancy" to the prompt, the evaluation changed to "poor technical skills, digital distortion, don't even think of publishing those pictures, you fool."
While LLMs are a phenomenal technological achievement, I am already becoming somewhat jaded, rather than being increasingly bullish. They are very useful as coding agents and excellent as a human-friendly, more efficient Google search, but confusing to the point of being useless in many areas (as of now, of course).
Not even a great replacement for search. I have minimal trust in answers/summaries it gives.
One example (paraphrased): “Find me daycare for a Y year old in X area of SF and the key attributes/pros/cons of each”. Wonderfully presented options highlighting different teaching styles. But…neglected to mention, of the top two, one was a Gan (Jewish focused) and one was Mandarin immersion.
I am repeating what many have said. Nevertheless, it is becoming clear that LLMs can increase productivity (in certain areas and at certain times) for people who are already knowledgeable (in a specific niche or field) due to a combination of better prompts, tool selection, and critical evaluation of LLM output.
But, for those who don't possess those traits, they mostly seem to be, at best, a better search and, at worst, an agent of confusion.
I have had the exact same problem several times working with large context and complex tasks.
I keep switching back to GPT5.0 (or sometimes 5.1) whenever I want it to actually get something done. Using the 5.4 model always means "great analysis to the point of talking itself out of actually doing anything". So I switch back and forth. But boy it sure is annoying!
And then when 5.4 DOES do something it always takes the smallest tiny bite out of it.
Given the significant increase in cost from 5.0, I've been overall unimpressed by 5.4, except like I mentioned, it does GREAT with larger analysis/reasoning.
Get the actual prompt and have Claude Code / Codex try it out via curl / python requests. The full prompt will yield debugging information. You have to set a few parameters to make sure you get the full gpt-5 performance; e.g., if your reasoning budget is too low, you get gpt-4 grade performance.
IMHO you should just write your own harness so you have full visibility into it, but if you're just using vanilla OpenClaw you have the source code as well so should be straightforward.
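For the curl / python requests route, something like this is enough to sanity-check the model outside the harness (a minimal sketch against the Responses API; the model name is a placeholder, and the real parameters should be copied from whatever OpenClaw actually sends):

    import os, requests

    resp = requests.post(
        "https://api.openai.com/v1/responses",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-5.4",               # placeholder; use your harness's model
            "reasoning": {"effort": "high"},  # too low and you get much weaker output
            "input": "Do X, right now. Report back when it is done.",
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json())

If the model behaves fine here but stalls inside the harness, then the harness (prompt, parameters, or missing thinking traces) is the suspect.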
> IMHO you should just write your own harness
Can you point to some online resources to achieve this? I'm not sure where I'd begin.
At the core, they're really very simple [1]. Run LLM API calls in a loop with some tools.
From there, you can get much fancier with any aspect of it that interests you. Here's one in Bash [2] that is fully extensible at runtime through dynamic discovery of plugins/hooks.
[1] https://ampcode.com/notes/how-to-build-an-agent
[2] https://github.com/wedow/harness
Ah, I just started with the basic idea. They're super trivial. You want a loop, but the loop can't be infinite, so you need to tell the agent to tell you when to stop, and to backstop it you add a max_turns. Then, to start with, just pick a single API; easiest is the OpenAI Responses API with OpenAI function calling syntax https://developers.openai.com/api/docs/guides/function-calli...
You will naturally find the need to add more tools. You'll start with read_file (and then one day you'll read a large file, blow the context, and modify this tool), update_file (can just be an explicit sed to start with), write_file (fopen, write), and shell.
It's not hard, but if you want a quick start, go download the source code for pi (it's minimal) and tell an existing agent harness to make a minimal copy you can read. As you build more with the agent you'll suddenly realize it's just normal engineering: you'll want to abstract completions APIs, so you'll move that to a separate module; you'll want to support arbitrary runtime tools, so you'll reimplement skills; you'll want to support subagents because you don't want to blow your main context; you'll see that stable prefixes are more useful than a moving window because of caching; etc.
With a modern Claude Code or Codex harness you can have it walk you through from the beginning onwards, and you'll encounter all the problems yourself and see why harnesses have what they do. It's super easy to learn by doing, because you have the best tool to show you, if you're one of those who finds code easier to read than text about code.
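To make the shape of it concrete, here's a minimal sketch of such a loop in Python (not any particular harness's code; the single shell tool and the DONE stop convention are just choices I made for the example):

    import json, subprocess
    from openai import OpenAI

    client = OpenAI()
    MAX_TURNS = 20  # backstop so the loop can't run forever

    tools = [{
        "type": "function",
        "function": {
            "name": "shell",
            "description": "Run a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"cmd": {"type": "string"}},
                "required": ["cmd"],
            },
        },
    }]

    messages = [
        {"role": "system", "content": "Work step by step. Say DONE when finished."},
        {"role": "user", "content": "Run the test suite and fix the first failure."},
    ]

    for _ in range(MAX_TURNS):
        r = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        msg = r.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            print(msg.content)
            if msg.content and "DONE" in msg.content:
                break  # the agent told us to stop
            continue
        for call in msg.tool_calls:
            cmd = json.loads(call.function.arguments)["cmd"]
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                # trim output so one noisy command can't blow the context
                "content": (out.stdout + out.stderr)[-4000:],
            })

Everything beyond this (more tools, subagents, caching-friendly prefixes) is just layered on top of that loop.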
Here's a starting point in 93 lines of Ruby, but that one is already bigger than necessary:
https://radan.dev/articles/coding-agent-in-ruby
Really, of the tools one implements, you only need the ability to run a shell command - all of the agents know full well how to use cat to read and sed to edit.
(The main reason to implement more is that it makes it easier to add optimizations and safeguards, e.g. limiting the file-reading tool to return a certain length instead of having the agent cat a MB of data into context, or forcing it to read a file before overwriting it)
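E.g., a sketch of that file-reading guard (the 64 KB cap is an arbitrary number I picked):

    MAX_BYTES = 64 * 1024  # arbitrary cap so one read can't flood the context

    def read_file(path: str, offset: int = 0) -> str:
        """Return at most MAX_BYTES of the file, with a hint for paging further."""
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(MAX_BYTES + 1)
        text = data[:MAX_BYTES].decode("utf-8", errors="replace")
        if len(data) > MAX_BYTES:
            text += f"\n[truncated; call again with offset={offset + MAX_BYTES}]"
        return text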
Just use Pi core, no need to reinvent the wheel.
Codex is fully open source…
I've seen the same thing. It would keep running for a long time, then produce nothing useful, almost like it got stuck halfway through.
If I asked the same thing again, it would often work normally. So the weird part wasn't that it couldn't do the task — it just failed to continue once it got into that state.
I've had success asking it to specifically spawn a subagent to evaluate each work iteration according to some criteria, then to keep iterating until the subagent is satisfied.
I’ve had great success replacing it with Kimi 2.6
On the other hand, I can ask codex “what would an implementation of X look like” and it talks to me about it versus Claude just going out and writing it without asking. Makes me like codex way more. There’s an inherent war of incentives between coding agents and general purpose agents.
I used to tell Claude 'let's discuss' at the end of my prompt, and that prevented it from starting the work
I have been noticing a similar pattern with Opus 4.7. I now repeat multiple times during a conversation that it should solve problems now, not later. It tries hard to avoid doing stuff, either by saying it's not its responsibility / the problem was already there, or that we can do it later
Had the same issue – solved it setting “thinking” to “high”. Hope it helps :)
Laziness is a virtue, but when I asked GPT-5.4 to test scenarios A and B with screenshots, it re-used screenshots from A for B, defeating the purpose.
I would love to see a GPT model running on an OpenClaw SOUL.md.
The GPT models are highly steerable. So I suspect the "soul" is working as expected.
(for context, in OAI enterprise background agents, they have no personality. They just get 'er done)
Part of me actually loves that the hitchhiker's guide was right, and we have to argue with paranoid, depressed robots to get them to do their job, and that this is a very real part of life in 2026. It's so funny.
As long as there are no Vogons on the way to build a hyperspace bypass.
I always use the phrase "Let's do X" instead of asking (Could you...) or suggesting it do something. I don't see problems with it being motivated.
I've been noticing this too. Had to switch to Sonnet 4.6.
Gone are the days of deterministic programming, when computers simply carried out the operator’s commands because there was no other option but to close or open the relays exactly as the circuitry dictated. Welcome to the future of AI; the future we’ve been longing for and that will truly propel us forward, because AI knows and can do things better than we do.
These are orthogonal from each other.
I had this funny moment when I realized we'd gone full circle...
"INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.[7]"
— https://en.wikipedia.org/wiki/INTERCAL
Thank you for this. I somehow never heard of this. I thoroughly enjoyed reading that and the loss of sanity it resulted in.
"PLEASE COME FROM" is one of the eldritch horrors of software development.
(It's a "reverse goto". As in, it hijacks control flow from anywhere else in the program behind your unsuspecting back who stupidly thought that when one line followed another with no visible control flow, naturally the program would proceed from one line to the next, not randomly move to a completely different part of the program... Such naivety)
> "PLEASE COME FROM" is one of the eldritch horrors of software development.
The most enigmatic control flow statements in INTERCAL, however, remain PLEASE GIVE UP and DO ABSTAIN FROM – a most exalted celebration of pure logic and immaculate reason.
Oh no they gave GPT ADHD
This. I signed up for 5x Max for a month to push it, and instead it pushed back. I cancelled my subscription. It either half-assed the implementation or began parroting back "You're right!" instead of doing what it was asked to do. On one occasion it flat out said it couldn't complete the task even though I had MCP and skills set up to help it; it still refused. Not a safety check, but in an "I'm unable to figure out what to do" kind of way.
Claude has no such limitations apart from their actual limits…
I have a funny/annoying thing with Claude Desktop where I ask it to write a summary of a spec discussion to a file and it goes "I don't have the tools to do that, I am Claude.ai, a web service" or some such. So now I start every session with "You are Claude Desktop". I would have thought it knew that. :)
I've had to tell it "yes you can" in response to it saying it can't do something, and then it's able to do the thing. What a weird future we live in!
Seems like the "geniuses" at Anthropic forgot to adapt the system prompt for the actual product
With one paragraph in your agents.md it's fixed: just admonish it to be proactive, decisive, and persistent.
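For reference, the kind of paragraph I mean (my own wording, adjust to taste):

    ## Working style
    Be proactive, decisive, and persistent. When the next step is obvious,
    take it without asking for permission. Never stop to describe what you
    would do next; do it, then report. Only yield back to the user when you
    are blocked on a genuine decision.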
If only…
I literally had to write a wake up routine.
https://github.com/gabereiser/morning-routine
It's always changing, but this is the start of my default prompt:
https://gist.github.com/natew/fce2b38216edfb509f7e2807dec1b6...
I've had 0 issues with Codex once it adopted it. I use it for Claude too, which seems to also improve its continuation.
It was revised for friendliness based on the Anthropic paper recently; I'd have been a lot less flowery otherwise.
I never saw that happen in Codex so there's a good chance that OpenClaw does something wrong. My main suspicion would be that it does not pass back thinking traces.
Anecdata, but I see this in Codex all the time. It takes about two rounds before it realises it's supposed to continue.
I started seeing this a lot more with GPT 5.4. 5.3-codex is really good about patiently watching and waiting on external processes like CI, or managing other agents async. 5.4 keeps on yielding its turn to me for some reason even as it says stuff like "I'm continuing to watch and wait."
Agentic ennui!
The model has been heavily encouraged to not run away and do a lot without explicit user permission.
So I often find myself in a loop where it says "We should do X", and then just saying "ok" will not make it do it; you have to give it explicit instructions to perform the operation ("make it so", etc.)
It can be annoying, but I prefer this over my experiences with Claude Code, where I find myself jamming the escape key... NO NO NO NOT THAT.
I'll take its more reserved personality, thank you.
Shall I implement it?
no
https://gist.github.com/bretonium/291f4388e2de89a43b25c135b4...
(dwim)
(dais)
(jdip)
(jfdiwtf)
should be more f’s and da’s in there
I’m sorry for you but this is hilarious.
Isn't this the optimal behavior, assuming that at times the service is compute-limited and that you're paying less per token (flat-fee subscription?) than some other customers? They would be strongly motivated to turn a knob that minimizes the tokens allocated to you so they can be allocated to more valuable customers.
Well, I do understand the core motivation, but if the system prompt literally says "I am not budget constrained. Spend tokens liberally, think hardest, be proactive, never be lazy." and I'm on an open pay-per-token plan on the API, that's not what I consider optimal behavior, even in a business sense.
Fair, if you’re paying per token (at comparable rates to other customers) I wouldn’t expect this behavior from a competent company.
GPT-5.4 is really good at following precise instructions, but it clearly won't innovate on its own (except if the instructions clearly state to innovate :))