nicbou 16 minutes ago

I'm a solo dev. In fact I'm hardly a dev; it's just a helpful skill. Code writing speed IS a problem, because it takes valuable time away from other tasks. A bit like doing the dishes.

I just set up Claude Code tonight. I still read and understand every line, but I don't need to Google things, move things around and write tests myself. I state my low-level intent and it does the grunt work.

I'm not going to 10x my productivity, but it'll free up some time. It's just a labour-saving technology, not a panacea. Just like a dishwasher.

  • apsurd 4 minutes ago

    The intention of the title is to name your main problem: the problem separating you from $PROFIT$ in:

        1. Idea
        2. ???
        3. Profit
    
    Coding effectively is definitely one problem. And you're right that AI helps with that problem. But for startups, side-hustles, VC-pitches and the inner-workings of companies and so on (HN crowd) coding was never the problem.
  • rustystump 13 minutes ago

    This I think is pretty spot on. I still have to review the code, ideally line by line. It is like templates, generators, etc.: they help and do make things faster, but 10x isn't gonna happen unless requirement gathering also gets 10x'd, which so far AI has had no impact on.

  • mooreds 8 minutes ago

    This is the way.

    • malfist 5 minutes ago

      Hacker News is not Reddit, please remember that threads are supposed to get more interesting the deeper they nest.

  • riskable 3 minutes ago

    Great big difference though: A dishwasher is a water-saving and energy-saving technology.

    Not saying LLMs are all bad, just that comparing them to dishwashers is probably not the best idea.

bvirb 9 minutes ago

When we (the engineering team I work on) started using agents more seriously we were worried about this: that we'd speed up coding time but slow down review time and just end up increasing cycle time.

So far there's no obvious change one way or the other, but it hasn't been very long and everyone is in various states of figuring out their new workflows, so I don't think we have enough data for things to average out yet.

We're finding cases where fast coding really does seem to be super helpful though:

* Experimenting with ideas/refactors to see how they'll play out (often the agent can just tell you how it's going to play out)

* Complex tedious replacements (the kind of stuff you can't find/replace because it's contextual)

* Times where the path forward is simple but also a lot of work (tedious stuff)

* Dealing with edge cases after building the happy path

The single biggest potential productivity gain though I think is being able to do something else while the agent is coding, like you can go review a PR and then when you come back check out what the agent produced.

I would say we've gone from being extremely skeptical to cautiously excited. I think it's far fetched that we'll see any order of magnitude differences, we're hoping for 2x (which would be huge!).

furyofantares an hour ago

> The bottleneck is understanding the problem. No amount of faster typing fixes that.

Why not? Why can't faster typing help us understand the problem faster?

> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.

Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?

I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.

> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.

I guess because we're just cynical.

  • bob1029 an hour ago

    > Why can't we figure out the right thing faster by building the wrong thing faster?

    Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

    This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.

    • vidarh 17 minutes ago

      The customer doesn't need to be shown every "wrong thing".

      • elictronic 15 minutes ago

        But think of the strawmen.

    • senko 35 minutes ago

      > Why can't we figure out the right thing faster by building the wrong thing faster?

      > Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

      Heh, depends on what you do. Many times the stakeholders can't explain what they want but can clearly articulate what they don't want when they see it.

      Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.

      If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

      It's not just what you can do faster (well, it is, up to a point), but also what you can now do that would have been positively insane and out of the question before.

      • pmontra 25 minutes ago

        That's done by arranging a demo (the very old way) or (better) by deploying to a staging server. The customer meets with you for a demo not very often, maybe once per month, or checks what's on the staging server maybe a couple of times per week. They have other things to do, so you cannot make them check your proposal multiple times per day. However I concede that if you are fast you can work for multiple customers at the same time and juggle their demos on the staging servers.

      • skydhash 18 minutes ago

        > Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.

        Why do you need coding for those? You can doodle on a whiteboard for a lot of those discussions. I use Balsamiq[0] and I can produce a wireframe for a whole screen in minutes. Even faster than prompting.

        > If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

        If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking.

        [0]: https://balsamiq.com/product/desktop/

    • furyofantares an hour ago

      That's fair. I'm usually my own customer.

      • Bukhmanizer 37 minutes ago

        I think a lot of the discourse around LLMs fails because of organizational differences.

        I work in science, and I’ve recently worked with a couple projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it and it didn’t do anything that it was supposed to. But I still felt like I was being “anti-ai” when criticizing it.

        I understand that it’s way better when you deeply understand the problem and field though.

        • Rapzid 27 minutes ago

          I'm starting to see this. It's starting to seem like a lot of the people making the most specious, yet wild, AI SDLC claims are:

          * Hobbyists or people engaged in hobby and personal projects

          * Startup bros; often pre-funding and pre-team

          * Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"

          It's getting to the point I'd like people to disclose the size of the team and org they are applying these processes at LOL.

    • golergka 7 minutes ago

      attempt != release to customer

      when you're building a feature and have different ideas how to go about it, it's incredibly valuable to build them all, compare, and then build another, clean implementation based on all the insights

      I used to do it before, but pretty rarely, only for the most important stuff. now I do it for basically everything. and while 2-4 agents are working on building these options, I have time to work on something else.

    • doctorpangloss 19 minutes ago

      You have it completely backwards.

      Most Enterprise IT projects fail. Including at banks. They are extremely saleable though. They don't see things that are failures as failures. The metrics are not real. Contract renewals do not focus on objective metrics.

      This is why you make "$1" with all your banking relationships and actually valuable tacit knowledge, until Accenture notices and makes bajillions, and now Anthropic makes bajillions. Look, I agree that you know a lot. That's not what I'm saying. I'm saying the thing you are describing as a bottleneck is actually the foundation of the business of the IT industry.

      Another POV is, yeah, listen, the code speed matters a fucking lot. Everyone says it does, and it does. Jesus Christ.

  • onlyrealcuzzo 42 minutes ago

    AI is really good when:

    1. you want something that's literally been done tons of times before, and it can literally just find it inside its compressed dataset

    2. you want something and as long as it roughly is what you wanted, it's fine

    It turns out, this is not the majority of software people are paying engineers to write.

    And it turns out that actually writing the code is only part of what you're paying for - much smaller than most people think.

    You are not paying your surgeon only to cut things.

    You are not paying your engineer only to write code.

    • closewith 20 minutes ago

      > It turns out, this is not the majority of software people are paying engineers to write.

      The above are definitely the majority of software people are paying developers to write. By an order of magnitude.

      Novel problems for customers who specifically care about code quality are probably under 1% of software written.

      If you don't recognise this, you simply don't understand the industry you work in.

      • onlyrealcuzzo 8 minutes ago

        As it turns out - "just make this button green" - is not the majority of what people at FAANG are doing...

        As it turns out - 4 years before LLMs - at least one of the FAANGs already had auto-complete so good it could do most of what LLMs can practically do in a gigantic context.

        But, sure...

      • slopinthebag a minute ago

        Non-novel problem != non-novel solution

        Most problems are mostly non-novel but with a few added constraints, the combination of which can require a novel solution.

      • skydhash 7 minutes ago

        Everyone has their own set of novel problems. And they use libraries and frameworks for things that are outside it. The average SaaS provider will not write its own OS, database, network protocols, ... But it will have its own features, and while they may be similar to others', they're evolving in different environments and encountering different issues that need different solutions.

    • slopinthebag 15 minutes ago

      Actually the surgeon analogy is really good. Saying AI will replace programming is like saying an electric saw will replace surgeons because the hospital director can use it to cut into flesh.

      • duskdozer 9 minutes ago

        It's so much faster too! How many meters of flesh have you cut this month, and how are you working toward increasing that number?

  • p-o an hour ago

    > Why not? Why can't faster typing help us understand the problem faster?

    Why can't you understand the problem faster by talking faster?

  • hrmtst93837 20 minutes ago

    Fast prototyping helps when the prototype forces contact with the problem, like users saying "nope" or the spec collapsing under a demo. If the loop is only you typing, debugging, and polishing, you're mostly making a bigger mess in the repo and convincing yourself that the mess taught you something.

    Code is one way to ask a question, not proof that you asked a good one. Sometimes the best move is an annoying hour with the PM, the customer, or whoever wrote the ticket.

  • john_strinlai an hour ago

    >Why not? Why can't faster typing help us understand the problem faster?

    do you have an example (even a toy one) where typing faster would help you understand a problem faster?

    • lgessler an hour ago

      Has everyone always nailed their implementation of every program on the first try? Of course not. Probably what happens most times is you first complete something that sorta works and then iterate from there by modifying code, executing, observing, and looping back to the beginning. You can wonder about ultimately how much of your time/energy is consumed by the "typing code" part, and there's surely a wide range of variation there by individual and situation, but it's undeniable that it is a part of the core iteration loop for building software.

      I don't understand why GP's comment is so controversial. GP is not denying that you should maybe think a little before a finger hits the keyboard, as many commenters seem to suppose. Both can be true.

      • nyeah 36 minutes ago

        That kind of thinking pops up very prominently in the article.

    • intrasight 43 minutes ago

      Here's a literal toy one.

      Build a toy car with square wheels and one with triangular wheels and one with round wheels and see which one rolls better.

      The issue isn't "typing faster" it's "building faster".

      • skydhash 26 minutes ago

        No need to build three, you just have to quickly write a proof for which shapes can roll. You'll then spend x+y units of time, where y<<x, instead of 3*x units. We have stories that highlight the importance of thinking instead of blindly doing (sharpening the axe, $1 for pressing a button and $9999 for knowing which button to press).
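        To make that concrete (a toy sketch, not a rigorous proof): for a regular n-gon rolling on its edges, the center's height oscillates between the apothem and the circumradius, so the "bump" per roll is 1 - cos(pi/n), which shrinks toward zero as the shape approaches a circle.

```python
import math

def bump(n):
    """Height variation of the center of a regular n-gon (circumradius 1)
    rolling on a flat surface: circumradius minus apothem."""
    return 1 - math.cos(math.pi / n)

# Triangle bumps more than square; a near-circle barely bumps at all.
for n in (3, 4, 100):
    print(n, bump(n))
```

        A round wheel is just the limit where the bump vanishes, so there's no need to build three toy cars.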

        • Supermancho 9 minutes ago

          > quickly write a proof for which shapes can roll.

          Writing the 3 are the proofs.

    • observationist 37 minutes ago

      Sometimes articulating the problem is all you need to see what the solution is. Trying many things quickly can prime you to see what the viable path is going to be. Iterating fast can get you to a higher level of understanding than methodical, deliberative construction.

      Nevertheless, it's a tool that should be used when it's useful, just like slower consideration can be used. Frontier LLMs can help significantly in either case.

      • john_strinlai 33 minutes ago

        so, what i am gathering is that some people in this comment section read "typing faster" literally, while other people are reading it and translating it to "iterating faster".

        • observationist 22 minutes ago

          "Code writing speed" is just a superficial dismissal of AI without consideration as to whether AI is being used well or poorly for the task at hand. Saying that AI is the same as making people type faster, or that AI only produces slop, etc, is a very self limiting mindset.

    • jmulho 38 minutes ago

      I often understand problems by discussing them with AI (by typing prompts and reading the response). Typing or reading faster would make this faster.

  • zabzonk an hour ago

    > Why can't faster typing help us understand the problem faster?

    Why can't standing on your head?

  • bayindirh 13 minutes ago

    > Why not? Why can't faster typing help us understand the problem faster?

    Sometimes you need to think slow to understand something. Offloading your thinking to a black box of numbers and accepting what it emits is not thinking slow (i.e. ponder) and processing the problem at hand.

    On the contrary, it's entering tunnel vision and brute forcing. i.e. shotgun coding.

  • garrickvanburen 10 minutes ago

    I'm reminded of the original Agile joke, "software you don't want in 30 days or less." Today it's "software you don't want in 5 days or less."

  • mooreds 44 minutes ago

    > Why not? Why can't faster typing help us understand the problem faster?

    I think we can, in some cases.

    For instance, I prototyped a feature recently and tested an integration it enabled. It took me a few hours. There's no way I would have tried this, let alone succeeded, without opencode. Because I was testing functionality, I didn't care about other aspects: performance, maintainability, simplicity.

    I was able to write a better description of the problem and assure the person working on it that the integration would work. This was valuable.

    I immediately threw away that prototype code, though. See above aspects I just didn't need to think about.

    • coldtea 26 minutes ago

      >There's no way I would have tried this, let alone succeeded, without opencode

      Sure there is.

      You could have used Claude or Codex directly :)

  • cdrnsf an hour ago

    Because you're working on the implementation before you understand the problem?

    • mooreds 40 minutes ago

      Ding ding ding!

      The article talks about process flows and finding the bottleneck. That might be coding, but probably is not.

  • nyeah an hour ago

    "Why can't faster typing help us understand the problem faster?"

    Because typing is not the same as understanding.

    • coldtea 24 minutes ago

      The typing referred to here is not "the typing part of coding" (fingers touching the keyboard), it's the whole coding (LLM is not a typing aid, it's a coding aid).

      And coding faster CAN help us understand the problem faster. Coding faster means iterating, refactoring, trying different designs - and seeing what does and doesn't work, faster.

  • doix an hour ago

    Pretty much. The article assumes people didn't build the wrong thing before AI. Except that happened all the time; it just happened slower, it took longer to realize it was the wrong thing, and then building the right thing took longer.

    It's funny, because you could actually take that story and use it to market AI.

    > I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.

    Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.

    • Terr_ an hour ago

      Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.

      In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.

      In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.

  • furyofantares an hour ago

    The post also smells heavily LLM-processed. I feel like I've been had by someone pumping out low effort blog posts.

  • ErroneousBosh 42 minutes ago

    > Why not? Why can't faster typing help us understand the problem faster?

    Why do you need to type at all to understand the problem?

    I write my best code when I'm driving my car. When I stop and park up, it's just a case of typing it all in at my leisure.

podgorniy 40 minutes ago

Yeah, we again have a solution (LLMs) in search of problems.

The proper approach to speeding things up would be to ask "What are the limiting factors that stop us from doing X, Y, Z?"

--

This situation of management expecting things to become fast because of AI is "vibe management". Why think, why understand, why talk to your people, if you've seen an excited presentation of the magic tool and the only thing you need to do is adopt it?

  • raw_anon_1111 31 minutes ago

    This is categorically not true. For almost all of my 30 years it’s been

    1. Talk to the business, solve XY problems, deal with organizational complexity, learn the business and their needs.

    2. Design the architecture not just “the code”, the code has to run on something.

    3. Get the design approved and agree on the holy trinity - time/cost/budget

    4. Do the implementation

    5. Test it for the known requirements

    6. Get stakeholder approval or probably go back to #4

    7. Move it into production

    8. Maintenance.

    Out of all those, #4 is what I always considered the necessary grunt work, and even before AI, especially in enterprise development, most developers' work had been getting commoditized for over a decade. Even in BigTech and adjacent companies, being able to "codez real gud" will keep you stuck as a mid-level developer if you can't handle the other steps and lead larger/more impactful/more ambiguous projects.

    As far as #5 goes, much of that can and should be done with automated tests, which can be written by AI and should be reviewed. Of course you need humans for UI and UX testing.

    The LLMs can do a lot of the grunt work now.

petcat an hour ago

As human developers, I think we're struggling with "letting go" of the code. The code we write (or agents write) is really just an intermediate representation (IR) of the solution.

For instance, GCC will inline functions, unroll loops, and apply myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't), we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.

Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.

A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.
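As a small, concrete illustration of the "we don't review the output" point (using CPython rather than GCC; the exact bytecode varies by Python version, but the folding is real): the interpreter's compiler quietly rewrites the arithmetic we typed, and nobody inspects the result.

```python
import dis

def seconds_per_day():
    # We "wrote" three multiplications, but CPython constant-folds them
    # at compile time; the bytecode just loads the literal 86400.
    return 60 * 60 * 24

dis.dis(seconds_per_day)  # a single LOAD_CONST, no arithmetic
assert 86400 in seconds_per_day.__code__.co_consts
```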

  • pbasista 18 minutes ago

    > review the ASM that GCC generates (we don't)

    Of course we do not. Because there is no need. The process of compiling higher order language to assembly is deterministic and well-tested. There is no need to continue reviewing something that always yields the same result.

    > We care that it works, and is correct for what it is supposed to do.

    Exactly. Which is something we do not have with an output of an LLM. Because it can misunderstand or hallucinate.

    Therefore, we always have to review it.

    That is the difference between the output of compilers and the output of LLMs.

    • petcat 6 minutes ago

      > The process of compiling higher order language to assembly is deterministic and well-tested.

      Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

      https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

      I count 121 of them.

      I've posted this 3 times now. Code-generation by compilers written by experts is not deterministic in the way that you think it is.

    • qalmakka 8 minutes ago

      This. The comparison between compilers and LLMs is so utterly incorrect, and yet I've heard it multiple times already in the span of a few weeks. The people suggesting it are probably unaware that Turing-complete languages follow mathematical properties, not just vibes. You can trust the output of your compiler because it was thoroughly tested to ensure it acts as a Turing machine that converts one Turing-complete language (C, C++, whatever) into another Turing-complete language (ASM), and there's a theorem that guarantees that such a conversion is always possible. LLMs are probabilistic machines, and it's grossly inappropriate to put them in the same category as compilers; it would be like saying that car tires and pizzas are similar because they're both round and have edges.

  • krackers an hour ago

    >Source code in a higher-level language is not really different anymore

    Source code is a formal language, in a way that natural language isn't.

    • jrop an hour ago

      Right? That's the only reason that "coding with LLMs" works at all (believe me, all at the same time, I am wowed by LLMs and carry a healthy level of skepticism with respect to their ability). You can prompt all you want, let an agent spin in a Ralph loop, or whatever, but at the end of the day, what you're checking into Git is not the prompts, but the formalized, codified artifact that is the by-product of all of that process.

    • yason 15 minutes ago

      Somewhat ironically, perhaps a formal, deterministic programming language (in its mathematical kind of abstract beauty) is the outlier in the whole soup. The customers don't know what they need, we don't know what we ought to build, and whatever we build, nobody knows how much of it is the right thing or what it actually does. If the only thing that causes people to sigh is the requirement to type all that into a deterministic language, maybe at some point we can just replace that with a fuzzy, vague, humanly description. If that somehow produces enough value to justify the process, we still won't know what we need and what we're actually building, but at least we can just be honestly vague about it all the way through.

    • inamberclad an hour ago

      When you get to the really tightly controlled industries, your "formal" language becomes carefully structured English.

      • petcat an hour ago

        Legalese exists precisely because it is an attempt to remove doubt when it comes to matters of law.

        Maybe a dialect of legalese will emerge for software engineering?

        • batshit_beaver 32 minutes ago

          Legalese already exists in software engineering. Several dialects of it, in fact. We call them programming languages.

        • ruszki 41 minutes ago

          Legalese is nowhere near precise, and we have a whole very expensive system because it’s not precise.

          • petcat 31 minutes ago

            It is an attempt to be precise, and to remove doubt. But you're right that doubt still creeps in.

    • eecc an hour ago

      This is the answer

  • zelphirkalt 5 minutes ago

    I see it differently: The code is our medium of communicating a solution.

    > "Programs must be written for people to read, and only incidentally for machines to execute." -- Hal Abelson

    Without this, we quickly drift into treating computers and computer programs as even more magic, than we already do. When "agents" are mistaken about something, and put their "misunderstanding" into code that subsequently is incorrect, then we need to be able to go and look at it in detail, and not just bring sacrifices for the machine god.

  • munchbunny 15 minutes ago

    In my experience it doesn't really work that way. It's somewhat akin to a house that's undergone multiple remodels. You eventually run out of the house's structural capacity for more remodeling and you have to start gutting the interior, reframing, etc. to reset the clock.

    At least today the coding agents will cheat, choose the wrong pattern, brute force a solution where an abstraction or extra system was needed, etc. A few PR's won't make this a problem, but after not very long at all in a repo that a dev team is constantly contributing to (via their herds of agents) it can get pretty gnarly, and suddenly it looks like the agents are struggling with tech debt.

    Maybe one day we can stop writing programming languages. It's a thought-provoking idea, but in practice I don't think we're there yet.

  • felipellrocha an hour ago

    If you truly believe that, why don’t you just transform code directly to assembly? Skip the middleman, and get a ton of performance!

    • bdcravens an hour ago

      I assume you're being cynical, but there's a lot of truth in what you say: LLMs allow me to build software to fit my architecture and business needs, even if it's not a language I'm familiar with.

    • operatingthetan an hour ago

      I know you're being cheeky but we are definitely heading in that direction. We will see frameworks exclusively designed for LLM use get popular.

      • nemo44x 35 minutes ago

        I think that’s possible too but the trouble is training them. LLMs are built on decades of human input. A new framework, programming language, database, etc doesn’t have that.

        We are in the low hanging fruit phase right now.

    • charcircuit 36 minutes ago

      Assembly eats up context like crazy. I usually only have my LLM use assembly for debugging / performance / reversing work.

    • n4r9 an hour ago

      Can agents write good assembly code?

      • svachalek an hour ago

        With the complexity of modern pipelines, there are very few humans that can beat a good optimizing compiler. Considering that with an LLM you're also bloating limited context with unsemantic instructions I can't see how this is anything but an exercise in failure.

  • Rapzid 19 minutes ago

    The semantics described in the high-level language are absolutely maintained deterministically.

    With agentic coding the semantics are not deterministically maintained. They are expanded, compressed, changed, and even just lost, non-deterministically.

  • yummypaint 42 minutes ago

    Just because an LLM can turn high level instructions into low level instructions does not make it a compiler

  • exceptione an hour ago

    None of the comparisons make any sense. In short, these concepts are essential to understand:

    - determinism vs non-determinism

    - conceptual integrity vs "it works somewhat, don't touch it"

    • petcat 43 minutes ago

      > determinism vs non-determinism

      Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

      https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

      I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.

      • tcmart14 36 minutes ago

        Deterministic doesn't mean correct. Compilers can have bugs. What deterministic means is that, given the same input, you get the same output every time. So long as, given the same code, it generates the same wrong thing every time, it's still deterministic.

      • yCombLinks 25 minutes ago

        99.9% vs about 20%. Pretty weak argument.

  • tcmart14 38 minutes ago

    I really hate the attempt to make LLM coding sound like it's just moving up the stack and no different from a compiler. A compiler is deterministic and has a set of rules that can be understood. I can look at the output, see patterns, and know exactly what the compiler is doing, why it does it, and where. And it will be deterministic in doing it.

    • petcat 37 minutes ago

      > compiler is deterministic and has a set of rules that can be understood.

      Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

      https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

      I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.

      • tcmart14 30 minutes ago

        I commented elsewhere, but that doesn't mean it's not deterministic. Deterministic means given the same input it gives the same output. Compilers can still have bugs and generate the wrong code. But so long as given the same input it generates the same wrong output, it is still deterministic.

        • petcat 23 minutes ago

          Compilers can generate wrong output in many different ways. And they're all analogous to the same ways that a sophisticated LLM can generate wrong outputs.

          The compiler relies on:

          * Careful use of ENV vars and CLI options

          * The host system, or the compilation target (for cross-compiling)

          * The source code itself

          How is this really different from careful prompt engineering, and an extensive proposal/review/refine process?

          They are both narrowing the scopes and establishing the guardrails for what the solution and final artifact will be.

          > proposal/review/refine process

          This is essentially what a sophisticated compiler, or query optimizer (Postgres) does anyway. We're just doing it manually via prompts.

          • tcmart14 5 minutes ago

            None of that makes it non-deterministic. Compilers still satisfy the rule that, given the exact same environment and input, you get the same output. It doesn't matter how many inputs there are. So long as f(3, 2) always gives 5, it's deterministic; it doesn't matter what f(x, y) does so long as it always gives the same output per input. LLM generation does not do this. Given f(3, 2), sometimes it says 5, sometimes 6, sometimes 1001, sometimes 2.

            And we are talking compilers, not query optimizers, so I don't really care what they do.
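
tcmart14's distinction can be made concrete in code. A minimal sketch, where both functions are hypothetical stand-ins rather than real compiler or LLM internals:

```python
import random

def compiled(x, y):
    # Deterministic but buggy: the same inputs always produce the
    # same (wrong) output -- a reproducible "miscompilation".
    return x + y - 1

def llm_like(x, y):
    # Non-deterministic: sampling means repeated calls can disagree.
    return x + y + random.choice([-1, 0, 0, 0, 1])

# compiled(3, 2) is wrong, but wrong the same way on every call:
assert {compiled(3, 2) for _ in range(100)} == {4}

# llm_like(3, 2) usually says 5, but can also say 4 or 6:
print({llm_like(3, 2) for _ in range(100)})
```

The reproducibility, not the correctness, is what separates the two: a reproducibly wrong output can be pinned down and fixed for good.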

      • Kilenaitor 19 minutes ago

        Having bugs is not the same as being non-deterministic.

        I get the point that the compiler is not some pure, perfect transformation of the high-level code to the low level code, but it is a deterministic one, no?

      • slopinthebag 10 minutes ago

        It is deterministic, unless GCC now includes a random statistical process to generate machine code. You've copied this same comment repeatedly; it doesn't become more correct the more you spam it.

  • tovej an hour ago

    Is this a copypasted response? I've seen the exact same bs in other AI threads on this site.

    • bcassedy 2 minutes ago

      This user has posted the parent post nearly verbatim twice. And the exact same responses about determinism several times.

everdrive an hour ago

Companies genuinely don't want good code. Individual teams just get measured by how many things they push around. An employee warning that something might not work very well is going to get reprimanded as "down in the weeds" or "too detail oriented," etc. I didn't understand this for a while, but internal actors inside of companies really just want to claim success.

  • mooreds 36 minutes ago

    > Companies genuinely don't want good code.

    I might be more charitable. I'd say something like "Companies genuinely want good code but weigh the benefits of good code (future flexibility, lower maintenance costs) against the costs (delayed deployment, fewer features)."

    Each company gets to make the tradeoffs it feels are appropriate. It's on technical people to explain the tradeoffs and risks, just like lawyers do for their area of expertise.

  • dannersy 8 minutes ago

    They don't care about good code, but they do pay people a lot of money to care about good code. If the people they hired didn't care, our software quality would be even worse than it is. And since people are caring less in the face of AI, it is getting worse.

myylogic an hour ago

I think both sides are partially right, but they’re optimizing for different failure modes.

Speed doesn’t fix misunderstanding, but it does change how quickly you can iterate toward understanding.

In practice, building something (even if it’s wrong) creates feedback loops you can’t get from thinking alone. Especially in systems like ML/LLMs, where behavior emerges from the pipeline rather than just the idea.

The real bottleneck isn’t typing speed — it’s how fast you can validate assumptions.

Faster iteration without reflection leads to chaos. Pure thinking without building leads to stagnation.

The balance is tight feedback loops with deliberate evaluation.

  • skydhash 34 minutes ago

    The thing is that speed in building stuff doesn't really help. What you want is a model and a simulation framework. In the traditional way, you usually start with a simple model and a simple framework. Then when you add a new parameter, you adapt the framework and once you've found a balanced set of inputs, you think which new parameter you want to add. This iteration leads to a great understanding of the model and the behavior of the system that implements it.

    LLM usage usually builds the system for the full parameter set at once. The speed increase is countered by the fact that there's no understanding of the system, and the simulation space is so large that the user doesn't really bother to explore it. There's been a lot of talk about having a full test suite serve as the simulation, but tests are discrete and only prove specific points in the input space. (There are a lot of curves that can pass through a finite set of points.)

po1nt 5 minutes ago

While reading articles like this, I feel like we're just in the "denial" stage. We're just trying to look for negatives instead of embracing that this is a definite paradigm shift in our craft.

ianberdin 9 minutes ago

I don’t agree. I built a Replit clone alone in months. They have hundreds of millions in funding…

Btw: https://playcode.io

k1rd 7 minutes ago

> That's the part most people get. Here's the part they don't, and it's the part that should scare you:

> When you optimise a step that is not the bottleneck, you don't get a faster system. You get a more broken one.

If you've ever played Factorio, this is pretty clear.

bluegatty 12 minutes ago

It's unfair to characterize AI as 'code writing / completion' - it's at minimum 1/4 layer of abstraction above that - and even just 'at that' - it's useful.

So 'writing helper' + 'research helper' + 'task helper' alone is amazing and we are def beyond that.

Even side features like 'do this experiment' where you can burn a ton of tokens to figure things out ... so valuable.

These are cars in the age of horses, it's just a matter of properly characterizing the cars.

mikkupikku 11 minutes ago

My problem when writing code is mainly executive dysfunction; I constantly succumb to the temptation to take the easy way and do it properly later, and later never comes. For some reason, using a coding agent seems to alleviate this. Things get done the way I think they should be done, not just in a way that's "good enough for now."

slibhb 9 minutes ago

The idea that LLMs don't significantly increase productivity has become ridiculous. You have to start questioning the psychology that's leading people to write stuff like this.

myylogic 37 minutes ago

I agree that speed isn’t the core problem — misunderstanding is.

But faster iteration does change how quickly you converge toward understanding.

In practice, especially with AI-assisted coding, the real issue I’ve seen isn’t just writing the wrong thing faster — it’s losing the feedback loop. When generation becomes too cheap, people stop validating assumptions and just keep stacking outputs.

Building something quickly is still valuable, but only if it’s tied to tight feedback and evaluation.

Otherwise, you don’t just build the wrong thing faster — you also learn the wrong mental model faster.

  • slopinthebag 9 minutes ago

    AI-generated comments are against the rules of HN.

    @dang this bot is spamming

larsnystrom 29 minutes ago

I can really relate to this. At the same time I’m not convinced cycle time always trumps throughput. Context switching is bad, and one solution to it is time boxing, which basically means there will be some wait time until the next box of time where the work is picked up. Doing time boxing properly lowers context switching and increases throughput, but it also increases latency (cycle time). It’s a trade-off.

But of course maybe time boxing isn’t the best solution to the problem of context switching; maybe it’s possible to have your cake and eat it too. And maybe different circumstances require a different balance between latency and throughput.
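
The latency/throughput trade-off above can be sketched with Little's law (work-in-progress = throughput × cycle time); the numbers here are invented purely for illustration:

```python
def cycle_time(wip, throughput):
    # Little's law rearranged: average cycle time W = L / lambda,
    # where L is work-in-progress and lambda is throughput.
    return wip / throughput

# Time boxing lets a team batch work and push throughput up, but the
# batches also raise work-in-progress, so latency can still get worse:
print(cycle_time(wip=4, throughput=2.0))   # steady trickle
print(cycle_time(wip=12, throughput=3.0))  # higher throughput, longer wait
```

Higher throughput with proportionally higher WIP means each item waits longer, which is exactly the trade-off being described.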

metalrain 17 minutes ago

I think it's more of an abstraction problem.

You could write more code, but you also could abstract code more if you know what/how/why.

The same idea applies to business: you can perform more service, or you can try to provide more value with the same amount of work.

m463 17 minutes ago

> The Goal ... it's also the most useful business book you'll ever read that's technically fiction

factorio ... it's also the most useful engineering homework that's technically a game

milesward 23 minutes ago

Correct, but I'd frame it to confused leaders a bit differently. Because we made this part easier, we've increased how critical, how limiting, other steps/functions are. Data's more valuable now. QA is more valuable now. More teams need to shift more resources, faster.

725686 29 minutes ago

The word "typing" is wrong.

It is not about the speed of typing code.

It's about the speed of "creating" code: the boilerplate code, the code patterns, the framework-version-specific code, etc.

sorokod 21 minutes ago

Amdahl's law applies regardless of whether you believe in it or not.
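
Which is easy to check: Amdahl's law caps the overall speedup by the fraction of the work you actually accelerated. A quick sketch (the 20% figure is an assumption for illustration):

```python
def amdahl_speedup(fraction, factor):
    # Overall speedup when only `fraction` of the work is sped up by `factor`.
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# If typing code is 20% of delivery time and an agent makes it 10x faster,
# the pipeline as a whole only gets about 1.22x faster:
print(round(amdahl_speedup(0.20, 10), 2))
```

Even an infinitely fast code generator tops out at 1 / (1 - 0.20) = 1.25x under that assumption.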

avereveard 20 minutes ago

Eh, code doesn't have a lot of value. Filling in methods between signatures and figuring out a dependency's exact incantation is mechanistic work, and that time is definitely better spent on other things.

A lot of these blogs start from a false premise or a lack of imagination.

In this case, the premise that coding isn't a bulk time sink is faulty and unsubstantiated (just measure the ratio of architects to developers; and yes, LLMs can do debugging, so the other common remark doesn't apply either). The claim that time saved on secondary activities doesn't translate into productivity is also false, or at least reductive, because you gain more time to spend on the bottlenecked activity.

gammalost an hour ago

Seems easy to address with a simple rule. Push one PR; review one PR

  • hathawsh an hour ago

    Also add a PR reviewer bot. Give it authority to reject the PR, but no authority to merge it. Let the AIs fight until the implementation AI and the reviewer AI come to an agreement. Also limit the number of rounds they're permitted to engage in, to avoid wasting resources. I haven't done this myself, but my naive brain thinks it's probably a good idea.

    • dmitrygr an hour ago

      > I haven't done this myself, but my naive brain thinks it's probably a good idea.

      Many a disaster started this way

      • hathawsh an hour ago

        Yep, we're on the same wavelength.

  • zer00eyz an hour ago

    The problem is most of the people we have spent the last 20 years hiring are bad at code review.

    Do you think the leetcode, brain-teaser, show-me-how-smart-you-are-and-how-much-you-can-memorize interview is optimized to hire people who can read code at speed and hold architecture (not code, but systems) in their head? How many of your co-workers set up and use a debugger to step through a change when reviewing it?

    Most code review was bike shedding before we upped the volume. And from what I have seen it hasn't gotten better.

cess11 17 minutes ago

One of the main reasons I like vim is that it enables me to navigate code very fast, that the edits are also quick when I've decided on them is a nice convenience but not particularly important.

Same goes for the terminal, I like that it allows me to use a large directory tree with many assorted file types as if it was a database. I.e. ad hoc, immediate access to search, filter, bulk edits and so on. This is why one of the first things I try to learn in a new language is how to shell out, so I can program against the OS environment through terminal tooling.

Deciding what and how to edit is typically an important bottleneck, as are the feedback loops. It doesn't matter that I can generate a million lines of code, unless I can also with confidence say that they are good ones, i.e. they will make or save money if it is in a commercial organisation. Then the organisation also needs to be informed of what I do, it needs to give me feedback and have a sound basis to make decisions.

Decision making is hard. This is why many bosses suck. They're bad at identifying what they need to make a good decision, and just can't help their underlings figure out how to supply it. I think most developers who have spent time in "BI" would recognise this, and a lot of the rest of us have been in worthless estimation meetings, retrospectives and whatnot where we ruminate a lot of useless information and watch other people do guesswork.

A neat visualisation of what a system actually contains and how it works is likely of much bigger business value than code generated fast. It's not like big SaaS ERP consultancy shops have historically worried much about how quickly the application code is generated, they worry about the interfaces and correctness so that customers or their consultants can make adequate unambiguous decisions with as little friction as possible.

andrewstuart 18 minutes ago

These “LLM programming ain’t nothing special” posts are becoming embarrassing for the authors who - due to their anti AI dogmatism - have no idea how truly incredibly fast and powerful it’s become.

Please stop making fools of yourselves and go use Claude for a month before writing that “AI coding ain’t nothing special” post.

Ignorance of what Claude can actually do means your arguments have no standing at all.

“I hate it so much I’ll never use it, but I sure am expert enough on it to tell you what it can’t do, and that humans are faster and better.”

  • slopinthebag 7 minutes ago

    What makes you think they haven't? I agree with them and I've been heavily using Claude / Codex for a while now. And I'm slowly trying to use AI more selectively because of these concerns.

lukaslalinsky an hour ago

If I can offload the typing and building, I can spend more energy understanding the bigger picture

wolttam 33 minutes ago

"Our newest model reduces your Mean Time To 'Oh, Fuck!' (MTTF) by 70%!"

gyanchawdhary 25 minutes ago

he’s treating “systems thinking” and architecting software like it’s some sacred, hard-to-automate layer that AI apparently sucks at.

renewiltord an hour ago

Understanding the problem is easier for me experienced with engaging with solutions to the problem and seeing what form they fail in. LLMs allow me to concretize solutions so that pre-work simply becomes work. This allows me to search through the space of solutions more effectively.

luxuryballs an hour ago

It’s definitely going to create a lot of problems in orgs that already have an incomplete or understaffed dev pipeline, which happen to often be the ones where executive leadership is already disconnected and not aware of what the true bottlenecks are, which also happen to often be the ones that get hooked by vendor slide decks…

nathias 41 minutes ago

people can have more than one problem

myst 5 minutes ago

No one there is solving a problem. The AI bros are hooking a new generation (NG) on _their_ set of crutches, without which NG "is not coding (living) up to their true potential". Nothing personal, just business.

PS. The tech bros tried to do exactly that to millennials, but accidentally shot boomers instead.

6stringmerc an hour ago

Because the way the world is currently and this is trending, let me jump in and save you a lot of time:

Expedience is the enemy of quality.

Want proof? Everything built as a result of “move fast and break things” from 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.

This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.

Bye for now.

teaearlgraycold an hour ago

> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks. The prospect didn't even end up buying. The feature got used by eleven people, and nine of them were internal QA. That's not a delivery problem. That's an "oh fuck, what are we even doing" problem.

I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.

Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.

gedy an hour ago

I'm cynical, but kinda surprised that so many mgmt types are rah-rah about AI, since "we're waiting for engineering... sigh" has been a very convenient excuse for many projects and companies I've seen over the past 25 years.

  • shermantanktop an hour ago

    Absolutely. Everyone loves a roadblock that someone else needs to clear, giving back some time to breathe and think about the problem a bit.

    This only works in large companies. In startups this is how you run out of money.

phillipclapham 42 minutes ago

[flagged]

  • _under_scores_ 30 minutes ago

    For me it's also in generating output that I know is right when I see it, but don't necessarily know every intricate detail of up front.

    • ohyoutravel 28 minutes ago

      You’re engaging with an LLM.

      • _under_scores_ 23 minutes ago

        Yes?

        Edit: you mean op?

        • ohyoutravel 21 minutes ago

          No I mean you. The person to whom you are responding is a bot. No judgment, just pointing this out in case you don’t want to waste human brain cycles.

          • _under_scores_ 15 minutes ago

            Oh! Whats the tell out of curiosity?

            • ohyoutravel a minute ago

              This one in particular is a new account with a high volume of similarly-structured posts over an impossibly short time.

              Bigger tells are the other two green accounts posting multiple top level comments in this topic that are nearly identical. Perhaps the programmer had an off by one error somewhere.

              I count at least three top level posters, if not as many as five, in this topic that are LLMs. The real absurdity is devnotes responding to myylogic, who are both LLMs.

dannersy 5 minutes ago

The blog isn't even necessarily anti-AI yet the majority of responses here are defending it like the author kicked their dog.

The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.