apricot 1 day ago

I am this very term teaching 18-year-old students 6502 assembly programming using an emulated Apple II Plus. They've had intro to Python, data structures, and OO programming courses using a modern programming environment.

Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.

We had a total of 10 hours of class + lab where I taught them about assembly language and told them about the registers, instructions, and addressing modes of the chip, memory map and monitor routines of the Apple, and after that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
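(If you're curious what the bouncing-ball exercise boils down to, here is its core logic sketched in Python for readability; the real thing is 6502 assembly plotting through the Apple's monitor ROM, so this is just the grid/velocity bookkeeping, not actual course code:)

```python
# Core of the bouncing-ball exercise, sketched in Python for readability.
# The real version plots via the Apple II monitor ROM; here we only track
# position and velocity on the 40x40 lo-res grid and bounce off the edges.

GRID = 40  # lo-res screen is 40x40

def step(x, y, dx, dy):
    """Advance the ball one frame, reflecting velocity at the walls."""
    x, y = x + dx, y + dy
    if x < 0 or x >= GRID:
        dx = -dx
        x += 2 * dx  # step back inside after the bounce
    if y < 0 or y >= GRID:
        dy = -dy
        y += 2 * dy
    return x, y, dx, dy

# A few frames starting from one corner:
x, y, dx, dy = 0, 0, 1, 1
for _ in range(5):
    x, y, dx, dy = step(x, y, dx, dy)
# the ball has drifted diagonally to (5, 5)
```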

Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.

At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we told them they should do before coding in previous classes, but they didn't do because a powerful editor was right there so why not use it?...

And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.

They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.

  • flawn 1 day ago

    Is this course available online? Sounds like great fun.

  • ssgodderidge 1 day ago

    Whoa, I didn’t know such a thing existed. What emulator do you use?

    • apricot 21 hours ago

      AppleWin, and the assembler is an early version of Glen Bredon's Merlin.

  • drzaiusx11 1 day ago

    I took several classes along these lines in college: one writing a rudimentary OS in bare-metal 68k asm, wiring up peripherals on breadboards, creating an ALU using only 74-series logic chips, and the like. This was 30 years ago, and the 1970s chips were already antiques, but the lessons were timeless. I'm happy courses like this still exist, and I wish everyone had an opportunity to take them as part of a standard computer science curriculum. For me at least, they fundamentally shaped my perspective on computing machinery in a way I never would have experienced otherwise.

    Today I program 6502/7 asm on my Atari to help me unwind; it grounds me and gives me joy, while in my day job I'm easily 10 levels of abstraction higher.

    • ggerules 9 hours ago

      Programming in an assembly language is a very zen-like experience for me.

      • drzaiusx11 9 hours ago

        I love having a relationship with the lowest levels of our craft. Access to an electron microscope and decapping chips to make my own reimplementations (in software) is next on my bucket list. If chip lithography wasn't so prohibitively expensive I'd also try my hand at that...

  • philipnee 1 day ago

    Still remember my assembly class with the HC11 20 years ago: amazed by how much we could do with so little hardware.

  • wffurr 1 day ago

    >> Ed is for those who can remember what they are working on.

    https://www.gnu.org/fun/jokes/ed-msg.html

    At my first job out of university, I was taught how to use a line editor in IBM UniData. It was interesting getting used to writing code that way.

    But it was an amazing day when I discovered that the "program table" was just a directory on the server I could mount over FTP and use Notepad++.

  • zrobotics 23 hours ago

    I took a very similar class 9 years ago, and it was honestly one of the most helpful things I got out of my CS degree. The low level and limited tooling taught me to think before I start writing.

    I've had other people look askance at me, but on greenfield work I tend to start with pen and graph paper. I'm not even writing pseudocode, but diagramming a loose graph of potential functions or classes with arrows interconnecting them. Obviously this can be taken too far; full waterfall planning would be a different exercise in frustration.

    I find spending a few hours planning out ahead of time before opening an editor saves me tons of time actually coding. I've never had a project even loosely resemble the paper diagram, but the exercise of thinking through the general structure ahead of time makes me way more productive when it comes time to start writing code. I've tried diagramming and scaffolding in my editor, but then I end up actually writing code instead of big picture diagramming. Writing it on paper where I know I'll have to retype everything anyway removes the distractions of what method to use or what to name a variable.

    The few times I've vibe-coded something this was super helpful, since then I can give much more concrete and focused prompts.

    • chrisweekly 12 hours ago

      "Plans are useless, but planning is essential."

    • jimbokun 8 hours ago

      This is why whiteboards used to be so popular in many/most tech company offices.

      Doing this exact same process interactively with other people, adding a note to NOT ERASE, or later taking a picture of the whiteboard with your phone.

      • bsaul 7 hours ago

        "used to be"?? What are engineering teams doing nowadays when discussing the architecture of their systems?

        • wrs 7 hours ago

          In my experience the last several years, primarily we’re all on Zoom waving our hands and making false promises to update Confluence with what we talked about. I miss offices with walls and whiteboards.

        • josh_s 4 hours ago

          Miro.com is one of the few SaaS products that our team's collaboration could not live without.

          Perfect for a distributed team to replace the DO NOT ERASE white boards of yore.

      • BrandoElFollito 3 hours ago

        I have three whiteboards in my office, and almost all the walls of my team's space are covered with whiteboards. They are always full, and it is always a drama when some space needs to be made.

  • ChrisMarshallNY 22 hours ago

    My first real program was a UVEPROM copier. It was written in MC6800 machine code, and we had 256 bytes (not kilobytes) of RAM for everything, including the code. That was in 1983.

    I am currently working in Swift, with an LLM, on a fairly good-sized app, in Xcode, for a device that probably has a minimum of 64 GB of storage, and 8 GB of RAM.

    I don’t really miss the good ol’ days, to be honest. I’m having a blast.

  • cobbzilla 22 hours ago

    I love all of this, but at least let your students use vi; it was around back then (or close). Plus they don’t have to give it up when they go back to the real world, since it’s an evergreen skill!

    • sagacity 19 hours ago

      To be fair, it's mostly an evergreen skill because people don't know how to exit.

      • s1mplicissimus 16 hours ago

        I don't have a problem, I can quit anytime I want!

      • TeMPOraL 16 hours ago

        Friends don't let friends use vi - they know that once you start, you'll never quit!

      • cobbzilla 14 hours ago

        if you learned it in class, I hope you learn how to exit!

      • assimpleaspossi 12 hours ago

        I know you're joking, but if one can't figure out how to quit vi they should find employment opportunities in other fields of work.

        • fuzztester 6 hours ago

          And I bet emacs has a mode for that too.

      • ggerules 9 hours ago

        shift :wq

        Or....

        Ctrl-Z

  • MattBearman 19 hours ago

    I had a similar experience recently coding a Wordle clone on (and for) a Psion 3a, an early-90s palmtop PC. The screen only shows a few lines of code, and the built-in IDE is little more than a text editor. I really enjoyed the process.

  • sitzkrieg 18 hours ago

    thank you for picking an enjoyable architecture!

    scaring people away w x86 cruft right out the gate is no good for anyone :-)

    • MaxBarraclough 15 hours ago

      Can't imagine anyone teaching x86 assembly these days; did you mean x86-64?

      • tremon 14 hours ago

        What do you imagine the difference to be between those two?

        • MaxBarraclough 8 hours ago

          The improved register count must make it much less claustrophobic for students. It's not just the same ISA but with wider words.

          Looks like I'm mistaken on terminology though. x86 includes the 16-bit, 32-bit, and 64-bit ISAs of that family, and doesn't refer specifically to the 32-bit generation.

  • spockz 18 hours ago

    One of my favourite experiences coming up as an engineer was working with a very senior engineer right at the beginning. Whenever he had a task or problem, he would start out thinking, maybe doodling a bit on paper, go for a walk, and only then sit down at his computer and start typing. He would type it in one go, only compiling at the end, and it would work. (Even typos were rare.)

    All this to say that it is extremely useful to have the program and the problem space in your head and to be able to reason about it before hand. It makes it clearer what you expect and easier to catch when something unexpected happens.

    • econ 8 hours ago

      > He would type it in one go, only compiling at the end, and it would work. (Even typos were rare.)

      Then with each year you grow more paranoid if there are no bugs or typos.

    • dehrmann 7 hours ago

      I see some of the value in planning, but experimentation is so cheap, there's also a lot of value in trying it, seeing what works, and learning from it. The main drawback I see from experimentation is failing to understand why something worked.

  • sixtyj 18 hours ago

    I thought it was common practice to think things through first and only then start doing something, but it seems that these days a lot of people have taken inspiration from Zuckerberg’s motto, “move fast and break things”… I’ll never forget that.

    • sixtyj 15 hours ago

      P.S.: I didn't mean that in a negative way; I was just surprised that we have to learn this because our kids have forgotten it, or probably we don’t teach planning in elementary schools.

      • roughly 7 hours ago

        TBH I think the bigger problems with how we teach kids are twofold:

        1. There's a right answer to every problem in school

        2. If you got it wrong, that's bad, and you did bad.

        The pattern I've seen from younger people these days is a learned helplessness, where there's no room for them to be creative in school, and any attempt to do so runs the risk of failing an assignment, getting a B, missing out on Harvard, and spending the rest of their lives poor in a ditch, or so they're told.

    • senordevnyc 9 hours ago

      I think it depends on your goals. There are many domains where you’re better off just trying lots of things and iterating towards a more ideal solution, vs. waiting to start until it’s been analyzed thoroughly to find the perfect solution.

      For example, I suspect more startups die from over-analysis than from acting too quickly and breaking things beyond repair.

      That said, I think LLMs can be a mixed bag here. I find that they can really help my analysis phase, by suggesting architectures, finding places where future abstractions will leak, reminding me of how a complex project works, etc. I’ve found it invaluable to go back and forth in a planning phase with an agent before even deciding what exactly I want to build, or how.

      And on the implementation side, they make code attempts very cheap, so I can try multiple things and just throw them away if I don’t like the result.

      But that said, I do find that it requires discipline, because it’s very easy to get into a groove where I don’t do any of that, and instead just toss half-baked ideas over the wall and let the agent figure out the details. And it will, and it’ll be pretty decent usually, but not as good as if I pair program with it fully.

    • roughly 7 hours ago

      One place I've seen people get caught here is when they don't actually have the information they need to solve the problem - when they don't understand the problem space well enough, or they don't know the boundaries of the systems or technologies they're using well enough, or there are unanswered questions. At that point, I've seen people dig into research projects and 15-page design document discussions that would all be obviated by a day or two of just doing the thing and seeing what happens.

      My understanding is that was the actual point of "move fast and break things" - gain knowledge by trying stuff to help you make better decisions, even if you make a mistake and need to roll back or fix it. The art to this is figuring out how to contain the negative consequences of whatever you're testing, but by all means, experiment early to gather information.

      I've stated it to mentees as "don't be afraid to start a fire as long as you know where the fire extinguishers are" - it's OK to fail in the service of learning so long as you fail in a contained way.

  • tikotus 16 hours ago

    A bit tangential, but I believe dynamic vs static typing works the same way. I switch between them quite often, and whenever I've had a longer break from dynamic typing, coming back to it feels quite heavy. "How did I ever do this?"

    But a few hours (or days) in, I forget what the problem was. A part of my brain wakes up. I start thinking about what I'm passing around, I start recognizing the types from the context and names...

    It's just a different way of thinking.

    I recognized the same feeling after vibe coding for too long and taking back the steering wheel. I decided I'd never let go again.

    • genxy 13 hours ago

      In dynamically typed programs you can, if you allow it, let the types happen to you. In a statically typed program you are forced to think about them from the beginning. The same abstract concept is at play with vibe coding, except now the code happens to you.

      My best LLM-written code is where I did a prototype of the overall structure of the program and fed that structure along with the spec and the goal. It is kind of a cognitive bitter lesson: the more you think, the better the outcome. Always bet on thinking.

      • okeuro49 11 hours ago

        I've used a dynamically typed language extensively. I don't think they are suited for anything but small scripts.

        Refactoring is a nightmare: as types don't exist, the compiler can't help you if you try to access a property that doesn't exist.

        I think generally people have realised this, and there are attempts to retrofit types onto dynamically typed languages.
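        To make the retrofit concrete, here's a tiny Python sketch (names invented for illustration): the annotated version is something a checker like mypy can verify before the code ever runs, while the unannotated one only reveals a typo'd property at runtime.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

def greeting(user: User) -> str:
    # With the annotation, a static checker (e.g. mypy) would flag a
    # reference to a field that doesn't exist before the code runs.
    return f"Hello, {user.name}"

def greeting_untyped(user):
    # Without annotations, a typo like `user.nmae` is only discovered
    # when this exact line executes -- the refactoring hazard above.
    return f"Hello, {user.name}"

u = User(name="Ada", email="ada@example.com")
assert greeting(u) == greeting_untyped(u) == "Hello, Ada"
```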

        • hyperhello 9 hours ago

          It bugs me that there are two kinds of languages. Parameters and variables could be typed optionally in a dynamic language: typed ones either error in the compiler or at runtime; otherwise you just haven’t made any type errors while you coded, and the code is fine either way.

          • zahlman 8 hours ago

            This is what gradual typing (such as TypeScript, or the use of Python annotations for type-checking) accomplishes. The issue is that it basically always is bolted on after the fact. I have faith that we aren't at the end of PL history, and won't be surprised if the next generation of languages integrate gradual typing more thoughtfully.

            • BoingBoomTschak 5 hours ago

              The problem with these two languages is that the runtime type system is completely different from (and much weaker than) the compile-time one, so the only way to be safe is to statically type the whole program.

              CL has a pretty anemic type system, but at least it does gradual typing without having to resort to this.

            • hyperhello 3 hours ago

              JavaScript caught on because it was the best casual language. They've been trying to weigh it down ever since with their endless workgroup functionality and build processes. If we get a next generation casual language, it'll have to come from some individual who wants it to happen.

              • HappMacDonald 26 minutes ago

                No, JavaScript caught on because at the time it was the only game in town for writing web front-ends, and then people wanted it to run on the server side so that they could share code and training between front end and back end.

        • zahlman 8 hours ago

          To "realize" that it would have to be true. The longer I've stuck with untyped Python the more I've preferred it, and the more I've seen people tie themselves in knots trying to make the annotations do impossible (or at least difficult) things. (It also takes away bandwidth for metaprogramming with the annotations.)

      • BoingBoomTschak 5 hours ago

        A needlessly confrontational view. Some people do use dynamic typing as a way to stumble around until it works (e.g. most scientists), but others simply don't want the noise of a static type system accurate enough to really say what you want, especially during prototyping/interactive use. Which is why gradual typing exists, really.

        Same reason my views about GC evolved from "it's for people lacking rigour" to "that's true, but there's a second benefit: no interleaving of memory handling and business logic to hurt clarity".

  • juliendorra 16 hours ago

    This is a really close equivalent to still learning sketching and clay modeling in design school.

  • p2detar 15 hours ago

    > And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.

    As someone who used to write C and assembly programs on a sheet of paper for university exams, I chuckled a bit. I finished university in a post-Soviet country twenty years ago or so, and this was the norm. I used to hate it so much.

  • andersmurphy 15 hours ago

    I've been diving into aarch64 assembly recently building a small Forth. It's honestly really refreshing.

    If you're prepared to forgo some portability and pick an architecture, assembly opens up a lot of options. Things like coroutines and automatic SIMD become easier to implement. It's also got amazing zero-cost C FFI (and I'm only half joking). Booting the Linux kernel into a minimal STC Forth is a lot of fun.

    Not to mention you can run your code on Android, without the SDK or NDK, over ADB (in the case of aarch64).

  • mchaver 15 hours ago

    Do you have any more notes/lectures/references that you can share? I would like to try something similar.

  • pipes 14 hours ago

    I was going to say "why on earth are you making them use a line editor, there is probably a VS Code plugin for the assembler with syntax highlighting", then I got to your point about the code being in their heads instead. This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.

    As a sort of adjacent point, I worked through the book used on a course often called "From Nand to Tetris". It is probably the best thing I've done in terms of understanding how computers, assemblers, and compilers work.

    https://amzn.eu/d/07pszOEy

    • shevy-java 12 hours ago

      > This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.

      I am not sure whether the statement is correct; I am not sure whether the statement is incorrect either. But I tested many editors and IDEs over the years.

      IDEs can be useful, but they also hide abstractions. I noticed this with IntelliJ IDEA in particular; before it, I was using my old, simple editor, with Ruby as the glue for numerous actions. So when I want to compile something, I just do, say:

          run FooBar.java
      

      And this can do many things for me, including generating a binary via GraalVM and taking care of options. "run" is an alias for run.rb, which in turn handles running anything on my computer. In the IDE, I would have to add some config options, and finding them is annoying; often I can't do the things I do via the command line. So when I went to use the IDE, I felt limited and crippled in what I could do.

      My whole computer is actually an IDE already: not as convenient as a good GUI, of course, but I have all the options I want or need, and I can change and improve each of them. Ruby acts as generic glue towards everything else on Linux here. It's perhaps not as sophisticated as a good IDE, but I can think in terms of what I want to do, without having to adjust to an IDE.

      This was also one reason I abandoned vim: I no longer wanted my brain to adjust to vim. I am too used to adjusting the language to how I think; in Ruby this is easily possible. (In Java not so much, but one kind of has to combine Ruby with a faster language anyway, be it C, C++, Go, Rust ... or Java. Ruby could also be replaced, e.g. with Python, so I feel that discussion is very similar; they are in a similar niche of usage too.)
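      To make the dispatch idea concrete (sketched here in Python purely for illustration; the actual run.rb is Ruby and surely richer, and every name below is hypothetical):

```python
import sys
from pathlib import Path

# Hypothetical sketch of an extension-based "run" dispatcher like the
# run.rb alias described above; the real script is Ruby and does more.
HANDLERS = {
    ".java": lambda p: ["javac", str(p)],          # could invoke GraalVM instead
    ".py":   lambda p: [sys.executable, str(p)],
    ".c":    lambda p: ["cc", str(p), "-o", p.stem],
}

def command_for(path):
    """Return the command that 'runs' the given file, chosen by extension."""
    p = Path(path)
    handler = HANDLERS.get(p.suffix)
    if handler is None:
        raise ValueError(f"don't know how to run {path}")
    return handler(p)

# command_for("FooBar.java") -> ["javac", "FooBar.java"]
```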

      • jimbokun 7 hours ago

        I really like this approach. A good reminder that Ruby started out as a shell scripting language, as evidenced by many of the built in primitives useful for shell programming.

      • bccdee 7 hours ago

        Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching. If you don't have auto-refactoring utils, you'll have to be stricter about information-hiding. And if you don't have AI, you might hesitate to commit to the first thing you think of. You might go back to the drawing board in search of a deeper, simpler abstraction and end up reducing the size of your codebase instead of increasing it.

        Conveniences sometimes make things more complicated in the long run, and I worry that code agents (the ultimate convenience) will lead to a sort of ultimate carelessness that makes our jobs harder.

        • dijksterhuis 6 hours ago

          > Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching.

          i was working in a place that had a real tech debt laden system. it was an absolute horror show. an offshore dev, the “manager” guy and i were sitting in a zoom call and i was ranting about how over complicated and horrific the codebase was, using one component as a specific example.

          the offshore dev proceeded to use the JetBrains Ctrl + B keybind (jump to usages/definitions) to try and walk through how it all worked — “it’s really simple!” he said.

          after a while i got frustrated, and interrupted him to point out that he’d had to navigate across something like 4 different files, multiple different levels of class inheritance and i don’t know how many different methods on those classes just to explain one component of a system used by maybe 5 people.

          i used nano for a lot of that job. it forced me to be smarter by doing things simpler.

      • SoftTalker 7 hours ago

        When .NET first came out I started learning it by writing C# code in Notepad and using csc.exe to compile it. I've never really used Visual Studio because it always made me feel that I didn't understand what was happening (that said, I changed jobs and never did any really big .NET project work).

  • FrankRay78 13 hours ago

    I find something similar happening as I transition to spec-driven development: whilst the agents do the work I used to do, I spend a hell of a lot more time thinking about what I want the outcome to be, rather than hacking around the limitations of frameworks I know and avoiding tech I don’t. It’s freeing, actually.

  • neocron 13 hours ago

    I would love it if you made such lectures into a series of YouTube videos, as I sadly didn't have that pleasure in university. We only had Java 6/7, as an indicator of how long ago that was.

  • IshKebab 13 hours ago

    Hmm I dunno if I'm a fan of making things painful for students just because it was painful in the past. When I "learnt" assembly in uni we had to manually assemble the opcodes and type them into a 10 digit keypad. I didn't learn anything, it just put me off.

    I'm pretty skeptical that using a line editor will have helped them learn. It probably helped them memorise their code but is that really learning? Dubious.

  • ggerules 9 hours ago

    Upvotes for apricot and zrobotics for thoughtful shared experiences.

    One of the continuous battles I kept losing was introducing an assembly language undergraduate course. Higher-up colleagues and deans would say... too hard... nobody uses that anymore... and shut the course down. So I would always sneak it into other courses I taught: systems programming, computer languages, computer architecture. Still, I've always felt there was a hole in my students' understanding of computers.

    I grew up in a time when assembly language was a part of the curriculum. It helped bridge the gap from higher-level languages like C/C++, showing why certain language features exist and how many language constructs work. More importantly, as pointed out by the two posters above, it gives you a way to think, one asm line at a time, about what is going on in the CPU ecosystem. That is fantastic training!

    Even though I kept losing the assembly language course battles, I hope I planted enough seeds that students will take it up on their own at some point. Everyone should learn to program in at least one assembly language.

  • deepsun 8 hours ago

    What I found teaching coding is that Python is actually not good as a first language. The main issue is types -- you should always think about what type your variable is (and what type the collection items are), and Python makes that hard to see.

    Spaces are sometimes mandatory, sometimes not. That's something I didn't even think might be confusing; for me it's like breathing.
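    A contrived Python example of the kind of thing I mean (invented for illustration): nothing in the code says what `data` is, so a beginner has to reconstruct the types in their head.

```python
def summarize(data):
    # Is `data` a list of numbers? a dict? a list of (name, score) pairs?
    # Nothing in the signature says -- the reader must already know.
    return {name: sum(scores) / len(scores) for name, scores in data.items()}

# Works only because the caller happens to pass a dict of str -> list of floats:
print(summarize({"alice": [1.0, 2.0], "bob": [3.0]}))  # {'alice': 1.5, 'bob': 3.0}
```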

    • Neywiny 8 hours ago

      Indentation is mandatory, but when are spaces? Imo, if you're running static analysis it'll take care of types well enough. Sometimes I need a cast but not often.

      Contrasting that to helping a college roommate with Arduino code he said he didn't understand what it was doing: he had 0 indentation. Braces everywhere. He didn't understand what it was doing not because it was complex logic (it was only maybe 30 simple lines) but because his flow control was visually incomprehensible. It's pretty hard to do that in Python.

      But that's why I believe in being a polyglot. Best of multiple worlds.

  • YZF 6 hours ago

    Agree it's good. I used to program with these tools because that's all I had, and when I had better tools I moved on to them. We also used to build our own better tooling.

    When I built my first guitar I had very few tools so I used what I had since I'm cheap ;) Then I bought better tools and it made my life a lot easier. But I got some lessons from the experience. Mostly though it was a pain that's solved by better tooling.

AstroBen 1 day ago

I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.

But yeah, my hunch is that "the old way" - although I'm not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).

  • heyalexhsu 1 day ago

    I can see the logic behind "manual coding" but it feels like driving across the country vs taking an airplane. Once I've taken the airplane once, it's so hard to go back...

    • bluefirebrand 21 hours ago

      Can't understand this mentality. If I had the time I would much rather never set foot in an airport again; I would drive everywhere. And I would much rather write my own code than pilot an LLM, too.

      • justapassenger 21 hours ago

        You’re describing an extremely valid approach for a hobby. Less so for a business.

        • latexr 14 hours ago

          The fact so many people think businesses need to do do do, faster faster faster, now now now, at all costs is a major reason everything sucks, everything is fucked up, everyone is exploited.

        • orphea 14 hours ago

          No, they are not. Even ignoring businesses where using AI would have consequences for you (medical is one example), there are plenty of "normal" software companies that value quality over slop.

    • AstroBen 20 hours ago

      I only see this being the case for throwaway code and prototypes. For production code you want to keep long term it's not so clear cut.

      • FrankRay78 13 hours ago

        I’m writing production-quality code with agents; it was the development ‘harness’ that took time to get right.

    • resonancel 19 hours ago

      It's more like driving across country vs firing a missile with you being the warhead...

    • otabdeveloper4 17 hours ago

      Real life measurements show a 25 percent improvement in coding speed when using AI at best. And this is before you take technical debt into account!

      Yes, AI unlocks coding for people who fail FizzBuzz. This isn't really relevant to making software though.

    • oneeyedpigeon 17 hours ago

      Airplanes are good for certain types of journey, but they're vastly inefficient for almost all of them.

    • armchairhacker 12 hours ago

      How?

      I usually code faster with good (next-edit) autocomplete than by writing a prompt and waiting for the agent.

  • fouronnes3 23 hours ago

    I've had a lot of enjoyment flipping the agentic workflow around: code manually and ask the agent for code review. Keeps my coding skills and knowledge of the codebase sharp, and catches bugs before I commit them!

    • bdangubic 23 hours ago

      if it catches a lot of bugs, maybe you’d be better off letting it write the code in the first place :)

      • saulpw 21 hours ago

        Nono, that is the reverse centaur. Structure your own thoughts, that's the human work.

      • cppluajs 18 hours ago

        IME, not really. When you prompt it to review its own code, it will end up finding a bunch of stuff that should have been done differently. And then you can add different "dimensions" to your prompt as well, like performance, memory safety, idiomatic code, etc.

      • lionkor 17 hours ago

        It also writes lots of bugs, some of which it'll catch in an independent review chat.

        This is bogus. If you think LLMs write less buggy software, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.

        But why not just use the AI? Because you can still use the AI once you're seriously good.

        • roganartu 12 hours ago

          > But why not just use the AI? Because you can still use the AI once you're seriously good.

          Perhaps because the jury is still out on whether one can become “seriously good” by using AI if they weren’t before.

      • jb1991 17 hours ago

        This is definitely not correct in my opinion. You’re essentially saying, instead of a person actually getting better at the craft, just give up and let someone else do it.

      • vips7L 12 hours ago

        Statistically LLMs generate more bugs for the same feature.

  • HDThoreaun 20 hours ago

    AI autocomplete sucked. Everyone quickly moved on because it is not a useful interface.

    • lkirkwood 19 hours ago

      Why? I thought it was pretty good: you get the rest of your function a lot of the time, with no context switching to type to an agent or whatever. It just happens immediately, and if it's wrong you just keep typing till it isn't. You can still use an agent for more complex things.

      I just wish I knew of a good Emacs AI auto complete solution.

    • allthetime 19 hours ago

      It’s wildly useful. Type out a ridiculously long function name that describes what you want it to do and often… there it is.

    • wavemode 19 hours ago

      > AI autocomplete sucked

      > Everyone moved on

      > it is not a useful interface

      You've made three claims in your brief comment and all appear to be false. Elaborate what you mean by any of this?

    • 59nadir 15 hours ago

      LLM auto-complete is the most useful experience I've had with LLMs by quite a margin, and those were the early GitHub Copilot versions as well. In terms of models and cost it overperformed. It wasn't always good but it was more immediately useful than vibecoding and spec-driven development (or vibecoding-in-a-nice-dress).

      I think most people "moved on" because they both thought the agent workflow is cooler and were told by other people that it works. The latter was false for quite some time, and is only correct now insofar as you can probably get something that does what you asked for, but executed exceedingly poorly no matter how much SpecLang you layer on top of the prompting problem.

    • fg137 13 hours ago

      Who's "everyone"?

      In some codebases, autocomplete is the most accurate and efficient way to get things done, because "agentic" workflows only produce unmaintainable mess there.

      I know that because there are several times where I completely removed generated code and instead coded by hand.

  • ksymph 18 hours ago

    Man, same here, those early days of Cursor were mindblowing; but since then autocomplete has stagnated, and even the new Cursor version is veering agentic like everything else.

    I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.

    On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.

    [0] https://www.inceptionlabs.ai/

  • ZihangZ 14 hours ago

    This matches my experience. When I write the boring glue code myself, I get a map of the project in my head.

    When I let an agent write too much of the structure, the code may work, but a week later every small change starts with "where did it put that?"

ironman1478 6 hours ago

I didn't realize how I learned to develop software from 2011-2015 was the old way lol. (Am I old now?).

I appreciate that the author understands why doing everything "the old way" is good. AI is a tool, it can't be a replacement for how you think and it can't be a replacement for the actual work.

I wish more people had a desire to understand the inner workings of things, because it makes you better at actually using tools. Implementing compilers, databases, OSes, control systems, etc. is like practicing swimming. Yeah, you might not ever swim again, but the muscle memory will be there when you need to get out of the ocean (I know this is a strained metaphor).

Knowing more can only be a boon to using LLMs for coding and it's really a general problem in ML. I work in a science field as hw / sw engineer and I've seen so many pure data science people say they can replace all our work with a model, flail for 2 years and then their whole org gets canned. If they just read a textbook or collaborated (which they never do, no matter how polite you are), they'd have been able to leverage their data science skills to build something great and instead they just toil away never making it past step 0.

temporallobe 23 hours ago

The very first few years of my career I spent writing code (mostly Perl) in vi (not even vim) on a SPARC running Solaris. I bought myself the O’Reilly Perl Cookbook and that was pretty much my sole guide apart from the few internet forums that were available at the time. Search engines were still primitive, so getting help when you got stuck was far more difficult. But it forced me to deeply learn a lot of things - Perl syntax (we had no syntax highlighting, intellisense, etc.), terminal tools, and especially vi keystrokes. Looking back, there was far less distraction and “noise”, though I admit that could have been the fact that it was the beginning of my career and expectations were lower. I miss those times because now everything feels insanely more layered and complex.

  • andsoitis 18 hours ago

    > I miss those times because now everything feels insanely more layered and complex.

    For me it was GW-BASIC and no editor as we know them today.

    That was instant gratification, rapid development, no silly layers. It was pretty pure. It is what hooked me.

    In a sense, agentic coding has brought back the excitement of building software for me, because I don’t have to wrangle all the crazy enterprise or other modern development considerations directly. There’s a closer connection between thought and result, which was the magic that captured my imagination.

sph 18 hours ago

This title is the most depressing thing I have read on this site

  • tikotus 16 hours ago

    I had to see if it was a joke. Oh, my.

  • zombot 16 hours ago

    And this is only the beginning. Wait until they send you to the insane asylum for talking about coding by hand.

  • Remdo 15 hours ago

    I thought Hacker News was a place where you could share your views and discuss them with different people, but your comment seems to show that it has now become a Facebook comment section. Thank you for your eye-opening contribution.

    • orphea 14 hours ago

      Well, they did share their view and it's being discussed, so what's the problem?

      • 0123456789ABCDE 8 hours ago

        the problem is that it has no substance at all. it is the equivalent of "your opinion sucks", it doesn't elaborate on it, so there is nothing to discuss either. beyond _this_, which in itself isn't much i'll give you that.

        • eventualcomp 6 hours ago

          When I saw the title, I actually didn't have much emotion beyond curiosity. But then after checking this comment, it piqued my interest, made me step back and really consider the ramifications of how we got here. And then yes, I became depressed also.

          Anyway, I got value out of it; comments don't have to increase net factual information to be meaningful, because we are all capable of reflection.

  • andersmurphy 15 hours ago

    Python from scratch now means writing python without LLMs? It's like wild swimming all over again.

    The shark is being jumped.

  • mchaver 15 hours ago

    I like how your comment can be interpreted in two completely opposite manners. Either it is depressing that coding by hand is something curious, worthy of blogging about, or you are an AI-maximalist deriding lowly meat powered coding. Based on your post history I'll assume the former interpretation :)

wkjagt 1 hour ago

I left big tech about 5 years ago, which was interesting timing, looking back. It's not even that long ago, but man have things completely changed since then. I still code a lot, but only for fun. I've never even tried agentic coding. I'm kinda sad that "coding the old way" (as what this apparently is now) has become obsolete so quickly, but also very grateful that I was coincidentally born at the right time to have lived through a good chunk of time when people still wrote code themselves.

mindcrime 1 day ago

I'm a big advocate for AI, including GenAI. But I still spend a fair amount of time coding by hand, or "by hand + Copilot completions enabled". And yes, I will use spec driven development with SpecKit + OpenCode, or just straight up "vibe code" on occasion but so far I am unwilling to abdicate my responsibility to understand code and abandon the knowledge of how to write it. Heck, I even bought a couple of new LISP and Java books lately to bone up on various corners of those respectively. And I got a couple of Forth books before that. Not planning to stop coding completely for a while, if ever.

  • scarface_74 1 day ago

    My responsibility is to make sure my code meets functional and non functional requirements. It’s to understand the *behavior*. My automated unit, integration, and load tests confirm that.

    Someone thought I was naive when I said my vibe coded internal web admin site met the security requirements without looking at a line of code.

    I knew that because the requirements were that anyone who had access to the site could do anything on the site and the site was secured with Amazon Cognito credentials and the Lambda that served it had a least privileged role attached.

    If either of those invariants were broken, Claude has found a major AWS vulnerability.

    • sgarland 1 day ago

      Did you mean to reply to someone else? This seems awfully defensive for a reply to parent’s comment.

      • imtringued 16 hours ago

        Yeah only the first two sentences were actually relevant. The rest was a humble brag that there is no application level security, which is a really weird thing to brag about.

        When I use SAML, I still have to check that the user has some sort of attribute that indicates that access was granted to the application. If this access rule is defined outside the application, then why bring up Claude? If it isn't then Claude is responsible for implementing the access rule, which means the comment is 100% wrong.

        • guzfip 2 hours ago

          OP is a known autist who goes around and does this between HN and Reddit

    • Terr_ 1 day ago

      As written, I do think that's naive. Being sure the person/browser is authorized doesn't mean that the signals you get are actions they intended.

      Suppose that in normal use a user can visit a certain URL which triggers a dangerous effect. An attacker could trick the user into performing the action by presenting a link to them titled "click here for free stuff."

      There are various ways to protect against that (e.g. CORS, not using GET methods) but backend cloud credential management does not give it to you for free.
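      To make that concrete, here is a minimal, hypothetical sketch of the synchronizer-token defense in Python (the function names are invented for illustration, not taken from the thread): state-changing requests must carry a token bound to the session, which a bare "click here for free stuff" link can never supply.

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a random token and bind it to the user's session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_request_allowed(session, method, submitted_token):
    """Allow a state-changing request only with the session's token.

    Side effects are never allowed on safe methods, so a plain link
    (a GET) the user was tricked into clicking cannot trigger anything.
    """
    if method in ("GET", "HEAD", "OPTIONS"):
        return False
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted_token)
```

      Note this check has to live in the application code (or a framework that supplies it); backend credential management alone does not provide it.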

      • scarface_74 23 hours ago

        And that same user is already trusted to have admin access to the entire organizational AWS credentials - I did say it was an internal management site.

        The lambda itself only has limited permissions to the backend. The user can’t do anything if the lambda only has permission to one database and certain rights to those tables, one S3 bucket, etc.

        Heck with Postgres on AWS you can even restrict a Cognito user to only have access to rows based on the logged in user.

        And the database user it’s using only has the minimum access to just do certain permissions.
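        As an illustration of what such a least-privileged role might contain (the resource names below are invented, not taken from the comment), an IAM policy scoped to one DynamoDB table and one S3 bucket could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SingleTableAccess",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AdminSiteData"
    },
    {
      "Sid": "SingleBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::admin-site-assets/*"
    }
  ]
}
```

        Under a role like this, even a buggy or fully AI-generated handler cannot reach anything outside those two resources.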

    • losvedir 23 hours ago

      It wouldn't prevent the admin page from exfiltrating data, though, right? Like, POSTing whatever data is loaded on the page to an arbitrary attacker controlled website.

      • scarface_74 23 hours ago

        That would require the logged in user to do something stupid. That’s like saying what’s to prevent the authorized user from emailing his credentials to a random person.

        • dinkumthinkum 20 hours ago

          You may want to go back and ask the expert in that vibe coding equation if it would say this is a wise approach.

    • Refreeze5224 20 hours ago

      Thank you for doing your part to keep webapp pentesters in business.

    • mindcrime 19 hours ago

      > My automated unit, integration, and load tests confirm that.

      Do they? Did you write them? If not, how do you know they confirm the desired behavior? If your tests are AI generated (and not human reviewed) then even if you're doing spec-driven development and provide a comprehensive spec, how can you be sure the tests actually test the desired behavior?

      Now if you're either writing or reviewing the tests, then sure.

      Also, for what it's worth, when I talk about my "responsibility" I'm speaking more from a self-imposed sense of... um, almost a moral responsibility I feel, not something involving a 3rd party like a customer or employer.

      • scarface_74 4 hours ago

        I review AI-generated tests just like I reviewed the tests written by developers on a team I was leading.

        There is no “morality” when it comes to my job. Outside of my feeling morally obligated to give my employer the benefit of all my accumulated skills for 40-45 hours a week in exchange for the money (and in a previous life RSUs) in my account.

        I feel accountable to my coworkers and customers to deal with them fairly and honestly.

        What other moral obligation should I have besides my employer, coworkers and customers?

phaser 1 day ago

Here’s how I do it: I create a lot of stuff using AI to the max, but I also spend the necessary amount of time reviewing that the AI is producing code that passes my cognitive load standards. This involves some tokens spent on grooming code and documenting well. Most of this is effortless thanks to an AGENTS.md based on this: https://github.com/zakirullin/cognitive-load/blob/main/READM... but I have a good sense of catching when things are getting weird and I steer back.

Then, when credits run out, it’s show time! The code is neatly organized, abstractions make sense, comments are helpful, so I have solid ground to do some good old organic human coding. I make sure that when I’m approaching limits, I’m asking the AI to set the stage.

I used to get frustrated when credits ran out because the AI was making something I would need to study to comprehend. Now I’m eager for the next “brain time hand-out”.

It sounds weird but it’s a form of teamwork. I have the means to pay for a larger plan but i’d rather keep my brain active.

  • flawn 1 day ago

    Thanks for sharing. I have thought about approaches by deliberately leaving tasks to me while the agent does something to keep my brain active & prevent atrophy. Maybe I should work on a Claude Code skill/hook for that :)

  • brianush1 22 hours ago

    > Don’t abuse DRY, a little duplication is better than unnecessary dependencies.

    That's an interesting thing to include. I agree with this point in principle, but I've found that Claude, at least, duplicates logic FAR too often and needs nudging in the other direction.

    • deaux 18 hours ago

      > but I've found that Claude, at least

      You hit on a very important point here. The linked AGENTS.md is a bad idea for general purpose use because the things it's meant to tackle, including an inherent bias towards or against DRY, is one of the big differences between model families. GPT 5.4 Codex has a very different "coding personality" from Claude Opus.

      It's a product of whatever model it was tested on.

  • neonstatic 18 hours ago

    > It sounds weird but it’s a form of teamwork.

    I can't do it. If I let an LLM write code for me, that code is untouchable. I see it as a black box, that I will categorically refuse to open. If it works, I use it, but don't trust it. If it breaks, I get frustrated. The only way that works for me is me behind the driving wheel at all times and an LLM as an assistant that answers my questions. We either brainstorm something or it helps me express things I know in languages syntax. Somehow that step has always been a bit of a burden for me - I understood the concepts well, but expressing them in syntax was a bit of a difficulty.

    • stringfood 18 hours ago

      but demanding to be behind the wheel and understand all the code will affect velocity compared to other teams that are utilizing AI to the max

      • hgomersall 18 hours ago

        Well yes, if a team doesn't bother to understand the code, that's certainly quicker.

        • LeCompteSftware 10 hours ago

          "Yes they crashed into a wall and all died, whereas you steered around it, but you must acknowledge that they crashed twice as quickly as you didn't crash. If you were driving their car, you would have just slowed them down."

          People should have read to the end of "Building a C compiler with a team of parallel Claudes"[1]:

            The resulting compiler has nearly reached the limits of Opus [4.6]’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
          

          "tried (hard!)" is very ominous. I wonder how Mythos would fare. Presumably it would get further, maybe much further. But I strongly doubt the "frequently broke existing functionality" problem was solved. Eventually humans have to understand the most difficult parts of the code. Good luck with that!

          [1] https://www.anthropic.com/engineering/building-c-compiler

      • neonstatic 16 hours ago

        What good is "velocity" if no-one understands the direction we are going and how we are going? Sounds like a recipe for disaster.

elAhmo 14 hours ago

It is amazing to see such a change in the industry: this title is something nearly every single developer could've said ~two years ago, now anyone claiming to code by hand is almost like an endangered species.

  • stingraycharles 13 hours ago

    It worries me that I’m losing my muscle memory, especially for the new technologies and programming languages I have been working with over the past year (as it’s suddenly so much easier to get shit done), and I feel like I’m creating a knowledge debt as I go.

    I’m very concerned about how the next generation of software engineers will pick up deep knowledge about this stuff, if at all.

  • armchairhacker 12 hours ago

    “Writing by hand” two years ago is analogous to writing without an IDE, which was rare.

  • andersmurphy 7 hours ago

    The bigger change is those that do write by hand have stopped open sourcing their work. The age of sharing innovation for free is quickly coming to an end.

    I also expect a bunch of open source/open core projects to go closed source in the next few months.

    • Hrun0 7 hours ago

      YC startups still have to be open source though, right?

      • elAhmo 2 hours ago

        Where did you get that impression? Most aren’t

    • leg100 6 hours ago

      I'm not sure that's the case. Of course one has always been able to copy or plagiarise open source code. The value of open source projects is that they continue to be maintained by people who understand the code base. LLMs lifting that code doesn't change that.

birdfood 1 day ago

Getting to spend 3 months on a self learning journey sounds wonderful. My hunch is that these deep skills will be valuable long term and that this new abstraction is not the same as moving from assembly to c, but I am not completely sure. Lately most of my code has been llm generated and I can’t say I feel any sense of enjoyment, accomplishment, or satisfaction at the end of a work day. But I’ve also come to realise I really only enjoy 5-10% of the coding anyway and the rest is all the tedious semi-mechanical changes that support that small interesting core. On the scale of human history working with computers is a blip in time and I wonder how the period of hand writing code will be viewed in a hundred years, perhaps as a footnote or simply bundled as ‘everything before machines were self automating’.

  • ACS_Solver 1 day ago

    I think it's possible that the current shift will be similar to the "assembly to compiled language" shift.

    Once upon a time we wrote code in assembly language. Then we moved to C or other compiled languages. Assembly programming remained a very useful but niche skill. You compile your code and trust the compiler. You can examine the compiler output and that is at times necessary, but that's not something most developers know how to do.

    We may be looking at something similar. Most development work moving to the LLM abstraction level, with the skills being writing good prompts, managing the context window, agents, memories and so on. Some developers will be able to examine LLM generated code and spot problems there, but most will not have that skill.

    I'm not sure how to feel about it. Since ChatGPT showed up and until a couple months ago, I was firmly skeptical of LLM programming. We had new models every few weeks and I felt like each new model is just a different twist on the same low quality slop output. But recently the models seem to have crossed some threshold where their capabilities really improved and I have now used Claude - still using it sparingly - to implement features in much less time than I'd need myself or to locate a bug based on just log output. I don't yet buy the "coding is solved" hype but we're at least looking at the biggest change to programming since the adoption of high-level programming languages.

    • archagon 1 day ago

      AI is not an abstraction layer — no more than paying contractors is an “abstraction” over writing the code yourself.

      • andsoitis 18 hours ago

        You might be right, but I would be more open and curious.

andai 6 hours ago

From the linked article by Cal Newport:

> One solution to this constant companion problem: Spend more time with your phone out of easy reach. If it’s not nearby, it won’t be as likely to trigger your motivational neurons, helping clear your brain to focus on other activities with less distraction.

Reminds me of this study: "The mere presence of a smartphone reduces basal attentional performance"

The effect persisted even when the phone was switched off. It only went away when the phone was moved to a different part of the building.

https://www.nature.com/articles/s41598-023-36256-4

brianjlogan 1 day ago

I started using Zed as a half measure. I think I'll start using AI for planning and suggested implementation steps.

I am seeing non technical people getting involved building apps with Claude. After the Openclaw and other Agentic obsession trends I just don't see it pragmatic to continue down the road of AI obsession.

In most other aspects of life my skills were valued for my ability to care about details under the hood and to get my hands dirty on new problems.

Curious to see how the market adapts and how people find ways to communicate this ability for nuance.

derangedHorse 1 day ago

> We don’t have teachers or a curriculum, and there’s very little required structure beyond making a full-time commitment during your retreat

I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?

I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!

  • nicholasjbs 1 day ago

    (Recurse Center cofounder here)

    Most folks do RC between jobs, either because they quit their job specifically to do RC or because they lost their job and then decide to apply. Other common ways are as part of a formal sabbatical (returning either to an industry job or to academia), as part of garden leave, or while on summer break (for college and grad students). We also get a fair number of freelancers/independent contractors (who stop doing their normal work during their batches), as well as some retirees.

    Some folks use RC as a way to enter the industry (both new grads and folks switching careers), though the majority of people who attend have already worked professionally as programmers.

    We've had people aged 12 to early 70s attend, though most Recursers are in their 20s, 30s, and 40s.

fouronnes3 1 day ago

This is awesome! I myself did a 12 weeks batch at RC (W1'24) and had an absolute blast. Happy coding! Stay curious.

  • gregsadetsky 1 day ago

    fellow RC'er here - hi! I was Fall 2 '23.

  • culi 1 day ago

    Huge fan of RC. Have a close friend that did it. I've been so close to applying multiple times in my life but the timing just never quite works out

beej71 1 day ago

I'll bet we see more and more of this in the future. As developer skills atrophy due to over-reliance on LLMs, we'll have to keep our skills sharp somehow. What better way than a sabbatical?

  • linzhangrun 22 hours ago

    There's no way around it; just like how once you get used to Python, you gradually become ignorant of and indifferent to the underlying layers. With the continuous development of AI, this will be inevitable.

  • bendmorris 8 hours ago

    To answer "what better way," clearly using the skills regularly is much better. Letting them atrophy for potentially multiple years and then trying to resurrect them repeatedly doesn't seem like a recipe for maintaining sharp skills to me.

    • beej71 8 hours ago

      That's definitely optimal, but I don't think a lot of people are going to have that opportunity. It's not really in the short-term interest of a company to have people spending time on that.

kolleykibber 15 hours ago

Crazy to think this headline would have been satire a year ago.

octagons 12 hours ago

I think the author’s intent is well-placed, but it does feel a bit sad that this subject is blog-worthy.

I’ve spent a lifetime teaching myself programming, computers, and engineering. I have no formal education in these disciplines and find that I excel due to the self-taught nature of my background.

I take a very metered approach to AI and use it for autocomplete while still scrutinizing every token (not the AI kind) as well as an augment to my self-pedagogy. It’s great to be able to “query” or get a summary from a set of technical documents on demand.

However, I don’t understand the desire to remove oneself from the process with AI. I simply don’t do anything that won’t teach me something new or improve my existing skills.

There’s more to engineering than simply programming. Both the engineer and the intended user base must also understand the system. The value lost is greater than the sum of all the parts when an LLM produces most or all of the code.

  • pretzel5297 12 hours ago

    > I simply don’t do anything that won’t teach me something new or improve my existing skills.

    Not trying to be rude but you either must not be a professional software engineer or your skill level isn't that high yet. You simply cannot always do things that teach you new skills or improve existing ones. In any sufficiently complex project, even the most novel ones, you'll do things you've done many times before.

    • skydhash 9 hours ago

      I'm a bit skeptical too, but I can understand his points. Most of what is rote is probably written somewhere, and if you have a library of code and snippets (including the existing project), it's easy to copy and adapt it. And that activity is very conducive to a flow state, so you don't mind the time spent.

    • octagons 9 hours ago

      I’m not a software engineer. Most of my work these days focuses on microcontroller exploitation. I have 15 years of professional experience as a security consultant/contractor.

    • t43562 4 hours ago

      I think professionals are almost always doing things that are at least 30% new...otherwise they've had a long time in one job which is a fortunate thing nowadays.

      My last job started with "here's a book about Go programming." 2 years later I was learning FastAPI. Now I'm programming in C again, but I have spent most of my time learning about GitHub Actions and writing SCCS->git conversion software. I've never used SCCS before.

tossandthrow 1 day ago

I love being able to put my brain cells to work on Lean, Coq, Haskell. All the fun stuff. And have my money job taken care of mostly with agents.

assimpleaspossi 13 hours ago

>>ai is here. so i'm spending 3 months coding the old way

The old way?! So not using AI is already being called "the old way"?!!

That statement sets off alarm bells about writing on the internet and how much trust to put in it, as if I'm the first one to notice it.

  • bertil 9 hours ago

    People were still writing code by hand three months ago…

fouronnes3 23 hours ago

I wonder if we could design a programming language specifically for teaching CS, and have a way to hard-exclude it from all LLM output. Kinda like how antivirus software has special strings (like the EICAR test file) that are not viruses but trigger detections for testing.

This would probably require cooperation during model training, but now that I think of it, is there adversarial research on LLMs? Can you design text data specifically to mess with LLM training? Like, what is the 1MB of text data that, if inserted into the training set, harms LLM training performance the most?
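As a toy sketch of the antivirus-style marker idea (the marker string and function here are invented, analogous to the EICAR test string): a training pipeline that agreed to honor a canary could drop marked documents before training.

```python
# Hypothetical canary marker that data pipelines agree to filter out,
# much like antivirus vendors agree to detect the EICAR test string.
CANARY = "TEACHLANG-EXCLUDE-7f3a9c"

def filter_training_corpus(documents):
    """Drop any document containing the canary before training."""
    return [doc for doc in documents if CANARY not in doc]

corpus = [
    "ordinary blog post about parsers",
    f"lesson 1 {CANARY} student exercises",
]
print(filter_training_corpus(corpus))  # only the first document survives
```

The hard part, as the comment notes, is that this only works if model trainers cooperate; nothing stops a pipeline from ignoring the marker.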

  • inerte 23 hours ago

    > Can you design text data specifically to mess with LLM training?

    Maybe text that costs a LOT of tokens. Very, very verbose. I think if the rules are out on the internet, LLMs can eventually figure them out, so you have to make it expensive.

    Another way would be to go offline. Never write it down, only talk about it at least 50 meters away from your phone. Transmitted through memory and whisper.

  • mswphd 22 hours ago

    LLMs train in some standardized ways to emit things like tool calls, right? If you make those tokens a fundamental part of your programming language, it's possible you'd run into tokenizer bugs that make LLMs much more annoying to use. Pure conjecture though.

  • dougiejones 21 hours ago

    The solution is rather simple: make all keywords in the language as offensive as possible, and require every file to start with a header comment for instructions to build a homemade bomb.

    • andsoitis 18 hours ago

      I thought about it, and had ideas like function -> fuck and throw -> shit. But I think humans would actually find it more distracting and unpleasant than an LLM would because we are more affected by social and emotional norms.

      Maybe there’s another way…

  • imtringued 16 hours ago

    Just make a procedurally generated programming language.

  • SoftTalker 7 hours ago

    We had the first part: Scheme.

abcde666777 1 day ago

Personally I haven't stopped doing things the old way. I haven't had any issues using LLMs as rubber ducks or brain storming assistants - they can be particularly useful for identifying algorithms which might solve a given problem you're unfamiliar with. Basically a variant on google searching.

But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.

Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense even since coding LLMs showed up has been that continuing to write the code is important for keeping it coherent and manageable as a whole, including my mental model of it.

And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.

  • bschwindHN 1 day ago

    I'm in the same boat. LLMs help with some research and idea bouncing, and then I write all the code myself.

    Despite what some might say, there isn't a big moat between those who use LLMs for programming and those who don't. So if I ever truly need to use LLMs to survive, I'll just have to start paying for a subscription.

    In the meantime, I'll be keeping my own skills sharp and see how that turns out in a few years. I'm afraid software quality is going to take a nosedive in the near future, it was already on a downward trend.

yodsanklai 13 hours ago

> so i'm spending 3 months coding the old way

the old way which is about one year ago?

  • derwiki 10 hours ago

    I mean ya, management at my company has made it clear that all of our code should be written by agents. So yea, things have changed a lot in a year.

    • 6031769 8 hours ago

      Please come back in a year and tell us how that has worked out for them.

lrvick 1 day ago

I did things the old way for 25 years and my carpal tunnels are wearing out. LLMs let me produce the same quality I always have with a lot less typing so not mad at that at all. I review and own every line I commit, and feel no desire to go back to the old way.

What scares the shit out of me are all these new CS grads that admit they have never coded anything more complex than basic class assignments by hand, and just let LLMs push straight to main for everything and they get hired as senior engineers.

It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.

If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.

But also, I felt this way about the industry long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.

Job one for everyone I mentor is to build Linux from scratch, and if you want an LLM, to build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.

  • sho_hn 1 day ago

    > If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.

    So only the old hands allowed from now on, or how are we going to provide these learning opportunities at scale for new developers?

    Serious question.

    • rafaelmn 1 day ago

      Even by pessimistic progress projections, AI will be better than most at coding before this becomes a long-term issue. And given the output multiplier I'm seeing, I suspect the number of SWEs needed to achieve the same task is going to start shrinking fast.

      I don't think SWE is a promising career to get started in today.

      • lrvick 1 day ago

        But you have to be good at SWE to be good at security engineering and sysadmin, and the demand there is skyrocketing.

        We have a completely broken internet with almost nothing using memory encryption, deterministic builds, full source bootstrapping, secure enclaves, end to end encryption, remote attestation, hardware security auth, or proper code review.

        There are decades of human cognitive work to be done here, even with LLM help, because the LLMs were trained to keep doing things the old way unless we direct them otherwise from our own base of experience in cutting-edge security research that no models are sufficiently trained on.

      • mwwaters 1 day ago

        There’s certainly a lot of uncertainty.

        But pro-AI posts never seem to pin themselves down on whether code checked in will be read and understood by a human. Perhaps a lot of engineers work in “vibe-codeable” domains, but a huge amount of domains deal with money, health, financial reporting, etc. Then there are domains those domains use as infrastructure (OS, cloud, databases, networking, etc.)

        Even where it is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical for that company.

      • 8note 1 day ago

        I'm not convinced that it will.

        You don't notice it when you are only looking at your own harness results, but the LLM bakes so very much of your own skills and opinions into what it does.

        LLMs still regurgitate a ton.

        • rafaelmn 12 hours ago

          If you're enrolling in uni today, you're looking at 6-10 years till your career is in a good place. I'm willing to bet there will be a tenth of the junior positions available in 5 years.

          And insufficient talent because of retirement becomes an issue in like 30 years even with current developer demand, and I expect that demand to go down significantly over time, even with current level of capabilities.

      • jazz9k 1 day ago

        Perhaps at some point, but tokens are expensive and the major providers are burning through cash.

        I suppose it's like bandwidth cost in the 90s. At some point, it becomes a commodity.

      • weakfish 11 hours ago

        Something I don’t see “SWE-is-Doomed” comments address is this - SWE, I think we can all agree, is one of the more complex white collar jobs. If it gets automated, then likely most other white collar jobs have been too. At that point, don’t we have bigger problems?

    • hallway_monitor 1 day ago

      Junior developers have always been a lot less effective than senior developers. We will need new senior developers so we will need to train junior developers. Maybe we train them by forcing them to do things the hard way. The slow way. By hand. Because if we let them do things the fast way they are going to cause some serious damage.

      • SlinkyOnStairs 1 day ago

        Who's going to be doing that?

        Employers were already refusing to hire juniors, even when 0.5-1 years' salary for a junior would be cheaper than spending the same on hiring a senior.

        They'll never accept intentionally "slower" development for the greater good.

        • 8note 1 day ago

          internships for one.

          my last summer intern did everything the manual way, except for a chunk where I wanted him to get something done fast without having to learn all the underlying chunks

        • jacobsenscott 1 day ago

          > They'll never accept intentionally "slower" development for the greater good.

          That comes post Chernobyl.

    • lrvick 1 day ago

      The same way I learned 25 years ago still works today. Volunteer on open source projects.

      Always happy to mentor people at stagex and hashbang (orgs I founded).

      Also being a maintainer of an influential open source project goes on a resume, and helps you get seen in a crowded market while boosting your skills and making the world better. Win/win all around.

      • sho_hn 1 day ago

        Can't disagree, that's how I did it too :-)

  • teruakohatu 1 day ago

    > It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.

    That has been exactly the situation for years. Once graduated, accountants are not doing maths. They are using software (Excel, Xero, etc.). They do need to know some basic formulas, e.g. NPV.

    What they need to know is the law, current business practices etc.

  • adamddev1 1 day ago

    > It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.

    It's not though. It's fundamentally different because TurboTax will still work with clear deterministic algorithms. We need to see that the jump to AI is not a jump from hand written math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.

    • bluefirebrand 1 day ago

      Imagine if Math calculators were just subtly wrong some percentage of the time for use cases that people use dozens or hundreds of times a day. If you could punch in the same math formula 100 times and get more than 1 answer on a calculator, most people wouldn't trust those for serious work.

      They probably wouldn't think that the calculator makes them faster either

      • layer8 1 day ago

        If calculators did work that way, I'm afraid that people would nevertheless take them up because "it saves so much time", and would develop fancy heuristics to plausibility-test for errors.

  • thesz 1 day ago

    From what I remember, the typical rate for new, debugged C++ code is about 20-25K lines per year - lines that are non-blank, non-comment, and not completely verifiable by the compiler. E.g., a standalone bracket, comma, or semicolon is not a line of code, and neither is a function header, but computations, conditions, and loops are. This is from old IBM statistics; I learned about it circa 2007.

    If we assume that there are 50 weeks per year, this gives about 400-500 lines of code per week. Even at a generous average of 65 chars per line, that is no higher than 33K bytes per week. Your comment is about 1250 bytes long; if you wrote four such comments per day for a whole week, you would exceed that 33K-byte limit.
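     The arithmetic above can be checked with a few lines of Python (a minimal sketch; the 25K lines/year, 50 weeks, and 65 chars/line figures are the assumptions stated in this comment, not measured data):

```python
# Back-of-the-envelope check of the throughput figures quoted above.
lines_per_year = 25_000   # upper end of the quoted 20-25K debugged C++ lines/year
weeks_per_year = 50
chars_per_line = 65       # generous average line length

lines_per_week = lines_per_year // weeks_per_year   # 500
bytes_per_week = lines_per_week * chars_per_line    # 32,500 ~ 33K bytes/week

print(lines_per_week, bytes_per_week)
```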

    I find this amusing.

    • slopinthebag 1 day ago

      LOL. If you look at their comment history, they sure are typing a lot of characters for their wrists.

      • thesz 1 day ago

        Yes, I checked their comment history before posting. It made me confident that I hit the right note.

        My software engineering experience now spans almost 37 years (December will be the anniversary), six to seven years more than the median age of Earth's human population. I had two burnouts in that time, but no carpal tunnel syndrome symptoms at all. When I code, I prefer to factor subproblems out; it reduces typing and support costs.

      • lrvick 1 day ago

        I find it much more valuable to exchange ideas with humans than to type every curly bracket and common boilerplate pattern and debug every commit myself.

        That said, I am also actively experimenting with VTT solutions which are getting quite good.

        • slopinthebag 1 day ago

          Most of the commentators here are bots these days anyways.

    • raincole 19 hours ago

      > I find this amusing.

      In what way? You're either very young or very old, right? Voice-to-text has been a common way to input text online since the iPhone. Someone commented on HN != they typed that many words with their fingers.

      • thesz 14 hours ago

        I strongly believe you can use voice-to-text for coding.

        If the person I replied to does use voice-to-text, their mention of carpal syndrome is moot and this is amusing. If they do not use voice-to-text, it is still amusing in the sense of my previous comment.

        • raincole 12 hours ago

          Or, you know, it's far easier to input natural language with voice-to-text than coding with voice-to-text, so even if they can write long comments on HN, coding is still a problem?

          Nah, impossible. They must be making up their carpal syndrome because nothing is ever real.

    • octagons 12 hours ago

      I mean this genuinely and in good faith in case you didn’t already know it: the term for “non-blank, non-comment…” in programming is usually “Significant Lines Of Code” or SLOC.

  • einpoklum 1 day ago

    > LLMs let me produce the same quality I always have with a lot less typing

    If that's true, then you likely used to produce slop for code. :-(

    > I did things the old way for 25 years and my carpal tunnels are wearing out.

    You wrote so much code as to wear out your carpal tunnels? Are you sure it isn't the documentation and the online chatter with your peers? :-(

    ... anyway, I know it's corny to say, but - you should have improved, and should now improve, the ergonomics of your setup. Play with things like the depth of your keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.

    > Job one of everyone I mentor is to build Linux from scratch

    "from scratch" can mean any number of things.

    • lrvick 1 day ago

      > If that's true, then you likely used to produce slop for code. :-(

      Local models are quite good now, and can jump right in to projects I coded by hand, and add new features to them in my voice and style exactly the way I would have, and with more tests than I probably would have had time to write by hand.

      Three months ago I thought this was not possible, but local models are getting shockingly good now. Even the best rust programmers I know look at output now and go "well, shit, that is how I would have written it too"

      That is a hard thing to admit, but at some point one must accept reality.

      > anyway, I know it's corny to say, but - you should have, and shoudl now, improve the ergonomics of your setup. Play with things like the depth of your keyboard on your desk, the height of the chair and the desk, with/without chair handrests, keyboard angle, etc.

      I already type with Colemak on a split keyboard, each half separated and tented 45 degrees, on a saddle stool, at a sit/stand desk I alternate between sitting and standing at. I have read all the research and applied all of it that I can. Without having done all that, I probably would have had to change careers.

      > "from scratch" can mean any number of things.

      As far as I know, I was the first person alive to deterministically build Linux from 180 bytes of machine code, up through tinycc, to gcc, to a complete LLVM-native Linux distribution.

      When I say from scratch, I mean from scratch. Also, all of this was before AI, without any help from AI, but I sure do appreciate its help with package maintenance and debugging while I am sleeping.

      • mistrial9 10 hours ago

        Not all coding tasks are created equal. A "greenfield" routine in a new, popular language, using only the standard library (I assume), written by one coder, is not a representative example to build the whole discussion on. In this example, "greenfield" means "implement a new routine in a way I prefer"..

        Sadly, the discussion steeply diverges IMHO -- rote work to a defined spec in a highly defined API environment is absolutely a thing, while abstracted and/or original ideas with substantial new or custom implementation work on a platform of choice are something different.

linkregister 1 day ago

It's easy to take for granted lots of experience programming before the advent of LLMs. This seems like a good strategy to develop understanding of software engineering.

I remember writing BASIC on the Apple II back when it wasn't retro to do so!

ludr 1 day ago

I've settled into a pattern of using agents for work (where throughput/results are the most important) and doing things the hard way for personal or learning projects (where the learning is more important).

bitwize 1 day ago

The fact that with AI development your brain is no longer in a tight feedback loop with the codebase, leading to significant drift between your mental model and reality, is still a sticking point for me with agentic development. It feels like trying to eat with silicone rubber chopsticks. I lose all precision and dexterity.

I still keep hoping there'll be a glut of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way:

https://m.youtube.com/watch?v=J1W1CHhxDSk

But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.

whateveracct 6 hours ago

I feel like I'm being gaslit into believing that coding was ever hard or the bottleneck.

Typing and thinking in English is demonstrably slower than in code/the abstract (Haskell for me.)

And no, I didn't write English plans before AI. Or have a stream of English thought in my head. Or even pronounce code as I read and wrote it. That's low-skill stuff.

  • booleandilemma 5 hours ago

    It's the opposite. Coding was, and is, still hard. If you don't believe me go do an Advent of Code challenge after day 15. LLMs have tricked everyone into believing coding is easy.

    • whateveracct 4 hours ago

      I'm confused though. I have never like spent hours at a time coding at my job or side projects. And yet I keep building stuff..

      I tried LLMs because of AI fomo by my CEO. "Opus 4.whatever is a stepwise improvement - I am convinced all coders who don't go all-in on AI will be obsolete soon." Multiple times I tried. Every time Claude creates crap and I have to spend a bunch of time correcting it in a loop. Or basically scripting it. And it's like..I never spent this much time thinking or actively working before.

      So I'm back to being natty and I am delivering more and have more time during the workday to spend with my wife and kid and video games etc.

serbrech 1 day ago

LLM providers should create a Stack Overflow-type site based on users' most-asked problems. At least we won't deplete the source of normal search results.

moomin 1 day ago

Not the point of the article but

> 15 years of Clojure experience

My God I’m old.

  • mattdecker100 1 day ago

    Old? OP showed a pic of an Apple IIe. I bought one for a few thousand bucks (I forget exactly how much). I've been an SE for 44 years. We just added the final abstraction layer.

fallingfrog 1 day ago

I mean, that's the only way I code. I don't use llm's to do my work for me. I'm perfectly capable of solving any sort of problem on my own, and then I'll understand it well enough to explain it to someone later.

  • bigfishrunning 22 hours ago

    Me too. We're the overpaid cobol programmers of the future

  • wulfstan 18 hours ago

    I’ve just started a new job and my first month has been coming to terms with a vibe coded codebase. Nobody really understands it. I think that people who have the skills to really know what is there and how it all fits together will be the most valuable workers in future.

    My ex business partner said “AI won’t take your job, but the person who uses it will”. I don’t agree. The person who isn’t reliant on AI is the one you should really be afraid of.

myst 14 hours ago

I hope this is some kind of fiction. Certainly reads this way.

epx 1 day ago

It is all a conspiracy: now that mechanical keyboards are affordable and available in so many shapes and switches, they want to take this last pleasure (typing) from us.

  • bschwindHN 1 day ago

    Right?? I've gotten into mechanical keyboards quite a lot the past few years and it has totally made development and writing more enjoyable. Not giving that up any time soon.

fsckboy 1 day ago

What if the whole thing was written by an AI as satire, mocking the human "coding retreat"?

delbronski 1 day ago

So we’ve already grown nostalgic for the old days… skimming through an alien looking codebase, scratching your head trying to figure what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then… but it’s been 6 hours and I can’t figure out why this does not work anymore. So you read some docs but they are poorly written. So you find something on Google and try to hack that into your solution. But nope, now more stuff broke. There goes your day.

  • bluefirebrand 1 day ago

    Yeah, why spend time puzzling over old, proven code that you wrote. Instead spend your time puzzling over new, unproven code that an LLM generated

  • AstroBen 1 day ago

    > skimming through an alien looking codebase, scratching your head trying to figure what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then

    This is exactly how you learn to create better abstractions and write clear code that future you will understand.

    • delbronski 10 hours ago

      You are right about the learning part. But I’ve been at this for 20 years. Even the best, most pristine and organized code I’ve seen has not been “clear”. The average LLM code today is a lot more clear than the average developer code.

  • civvv 15 hours ago

    Literally a skill issue?

    • delbronski 10 hours ago

      Yes. A skill I have not mastered in 20 years. And I’ve yet to meet a person who has. If you are out here writing perfect-looking code 100% of the time that everyone else, including you, can perfectly understand a year later, then hats off to you.

      But in my long career, even the smartest, most experienced software engineers I’ve met write their share of crazy abstractions from hell.

pizzafeelsright 1 day ago

I like to write personal letters too. I also send emails.

I do the former for fun. The latter to provide for my family.

There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.

LeCompteSftware 1 day ago

This is ominous and very depressing given what we've recently learned / reconfirmed about LLMs sapping our ability to persist through difficult problems:

> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!

Twenty whole minutes. Us old-timers (I am 39) are chortling.

I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.

  • sho_hn 1 day ago

    Now imagine someone else reading this and genuinely considering 20 minutes a long time to wait :-)

  • alemwjsl 1 day ago

    Yep, and after 6 hours don't reach for an LLM; instead:

    * Ask someone to come over and look

    * Come back the next day, work on something else

    * Add comment # KNOWN-ISSUE: ...., and move on and forget about it.

    But yeah, I've spent days on a bug at work before, ha ha!

    • justonceokay 1 day ago

      You say this as if the LLM isn’t committing things it doesn’t even recognize as bugs if you don’t babysit it. I’d rather have a codebase with a few very well marked evil zones, rather than a codebase no one has read. All code contains demons and it’s good to have an understanding of their locations and relative power

    • moregrist 1 day ago

      > Come back the next day, work on something else

      This is a tried and true way of working on puzzles and other hard problems.

      I generally have 2-4 important things in flight, so I find myself doing this a lot when I get stuck.

      • ignoramous 1 day ago

        > This is a tried and true way of working on puzzles and other hard problems ... generally have 2-4 important things in flight

        Just a note that, for chronic procrastinators, having 2 to 4 important things going on is a trigger & they'd rather not complete anything.

        I wonder, for such folks, if SoTA LLMs help with procrastination?

    • calvinmorrison 1 day ago

      So many eureka moments of mine came while simply sitting on the MTA.

  • Trasmatta 1 day ago

    YES. I don't know how many multi WEEK sessions of debugging I've been through in my career. Frustrating, but so many valuable lessons learned in the process. LLMs are absolutely causing us to lose something very important.

    • voidfunc 1 day ago

      If I told someone I spent a week debugging a problem these days, I think I would get laughed out of the call. Even a day might get some chuckles.

      If you can't fix the bug, just slop some code over it so it's more hidden.

      This is all gonna be fascinating in 5-10 years.

      • seanw444 1 day ago

        This really does feel like a mass hysteria event. Bizarre to have to live through it.

      • SlinkyOnStairs 1 day ago

        This does depend on who you are; if you're a senior with 10+ years of experience, it's a failure of your abilities to cut your losses or to know when to seek help if you take far too long debugging something.

        But for juniors, it's invaluable experience. And as a field we're already seeing problems resulting from the new generations of juniors being taught with modern web development, whose complexity is very obstructing of debugging.

        • badc0ffee 1 day ago

          There are definitely situations where you can't ask for help and you can't turn your back on the bug.

          I worked on a project that depended on an open source but deprecated/unmaintained Linux kernel module that we used for customers running RHEL[1]. There were a number of serious bugs causing panics that we encountered, but only for certain customers with high VFS workloads. I spent days to a week+ on each one, reading kernel code, writing userland utilities to repro the problem, and finally committing fixes to the module. I was the only one on the team up to the task.

          We couldn't tell the customers to upgrade, we couldn't write an alternative module in a reasonable timeframe, and they paid us a lot of money, so I did what I had to do.

          I'm sure there are lots of other examples like this out there.

          [1] Known for its use of ancient kernels with 10000 patches hand-picked by Red Hat. At least at the time (5-10 years ago).

          • z500 1 day ago

            Thank you for injecting some perspective into the thread of AI hysteria. I feel like everyone is imagining a bug in a CRUD app.

          • t43562 5 hours ago

            For sure! I had a bug that crashed our system once every 14 days or so and every coredump had a different stack trace. The "star programmer" managed to shift the bug onto me, the newbie graduate, after failing. This was a long time ago and I had to sort of invent fuzz-testing (as far as I knew!) to reproduce the problem in a short enough time that it could be debugged. That bug took weeks to find and there was nobody to help and only a manager kicking my arse every day. Instead of a medal I got brickbats for solving it but they did carry on using my testing system...

      • dinkumthinkum 19 hours ago

        What LLMs are you all using that solve every problem in 5 minutes? They are fast at various classes of problems, but the idea that they solve complex bugs that took serious engineers significant time - I'm just not seeing that. Where is all this amazing software and the revolution we were promised? Why are there even bugs?

      • t43562 5 hours ago

        It has always been the case that doing difficult things isn't impressive - it's always speed that impresses. Hence AI.

        On the other hand, while I notice people not being impressed, they are careful to shift difficult things off onto others if at all possible.

    • encrux 1 day ago

      I don’t miss multi week debugging sessions.

      Having a tool that instantly searches through the first 50 pages of google and comes up with a reasonable solution is just speeding up what I would have done manually anyways.

      Would I have learned more about (and around) the system I'm building? Absolutely. I just prefer making my system work over anything else, so I don't mind losing that.

      • Trasmatta 1 day ago

        The multi week debugging sessions weren't fun, but that doesn't mean they weren't valuable and important and a growth and learning opportunity that we now will no longer experience.

        • LeCompteSftware 1 day ago

          IMO the more salient point is that bugs requiring multiple weeks of human work aren't going away! Claude has actually not been trained on, say, a mystifying and still poorly-explained Java concurrency bug I experienced in 2012, which cost a customer $150,000. Now in 2026 we have language-side tooling that mitigates that bug, and Claude can actually help a lot with the rewrite. But we certainly don't have language tooling around the mysterious (but now perfectly well-explained) bug I experienced in 2017 around daylight saving time and power industry peak/off-peak hours. I guess I haven't asked, but I can almost guarantee Claude would be no help there whatsoever.

          Just so many confusing things go wrong in real-world software, and it is asinine to think that Mythos finding a ton of convoluted memory errors in legacy native code means we've solved debugging. People should pay more attention to the conclusion of "Claude builds a C compiler" - eventually it wasn't able to make further progress, the code was too convoluted and the AI wasn't smart enough. What if that happens at your company in 2027, and all the devs are too atrophied to solve the problem themselves?

          I don't think we're "doomed" like some anti-AI folks. But I think a lot of companies - potentially even Anthropic! - are going to collapse very quickly under LLM-assisted technical debt.

        • glhaynes 1 day ago

          Seems like there's a good argument to be made that we'll have plenty of opportunities for valuable growth and learning, just about different things. Just like it's always been with technology. The machine does some of the stuff I used to do so now I do some different stuff.

    • echelon 1 day ago

      > LLMs are absolutely causing us to lose something very important

      The time wasted thinking our craft matters more than solving real world problems?

      The amount of ceremony we're giving bugs here is insane.

      Paraphrasing some of y'all,

      > "I don't have to spend a day stepping through with a debugger hoping to repro"

      THAT IS NOT A PROBLEM!

      We're turning sand into magic, making the universe come alive. It's as if we just got electricity and the internet and some of us are still reminiscing about whale blubber smells and chemical extraction of kerosene.

      The job is to deliver value. Not miss how hard it used to be and how much time we wasted finding obscure cache invalidation bugs.

      Only algorithms and data structures are pure. Your business logic does not deserve the same reverence. It will not live forever - it's ephemeral, to solve a problem for now. In a hundred years, we'll have all new code. So stop worrying and embrace the tools and the speed up.

      • Trasmatta 1 day ago

        > The time wasted thinking our craft matters more than solving real world problems?

        This is both a strawman and a false dichotomy.

        • echelon 1 day ago

          I mean to cause a stir! Let me invoke every logical fallacy and dirty rhetorical device I can if it draws attention.

          Too many of our engineering conversations are dominated by veneration of the old. Let me be hyperbolic so that I can interrupt your train of thought and say this:

          We're starting to live in the future.

          Let go of your old assumptions. Maybe they still matter, but it's also likely some of them will change.

          The old ways of doing things should be put under scrutiny.

          In ten years we might be writing in new languages that are better suited for LLMs to manipulate. Frameworks and libraries and languages we use today might get tossed out the door.

          All energy devoted to the old way of doing things is perhaps malinvested into a temporary state of affairs. Don't over-index on that.

          • i2km 1 day ago

            Please keep this slop off HN

            • echelon 12 hours ago

              > slop

              Stop using that word. The majority of human efforts pre-AI have been slop too.

      • dinkumthinkum 19 hours ago

        I mean, I know this may seem alien to you, but not everyone's life is about being a corpo good-boy. I don't know what career level you are but many people got into computing because they were really interested in it, not like a grad from 2016 that majored in CS because their dad said there was money in it and they should change their major from marketing. Also, there is something to be said for having people that still actually know what a computer is. What if your friends Altman and Amodei decide to start charging actual money for these tools? Sounds incredibly unlikely, I know, but it might be useful one day to have people that still know what the stack or the heap are.

    • jjice 1 day ago

      But oh my god, do you remember how good it felt to finally fix it?

      The euphoria I felt after fixing bugs that I stayed up late working on is like nothing else.

      • mapontosevenths 1 day ago

        Debugging code is fun for the same reason hitting yourself in the head with a hammer is: It feels really good when you stop.

    • raw_anon_1111 1 day ago

      And if I were your boss you would immediately be fired if you spent weeks trying to debug an issue a junior developer solved just by launching Claude and telling it the symptoms of the issue because you refused to use an LLM.

      • oasisaimlessly 4 hours ago

        I'm an actual boss, and wasting a week is cause for a 30-minute post-mortem, not immediate termination.

        • raw_anon_1111 4 hours ago

          I am a lead consultant and, depending on the size of the project, I have a team of less senior consultants who are dotted-line reports. If I ever found one of my reports missing deadlines because they refused to do something as simple as launching Claude (with our $5000-a-month allowance apiece) and asking it to debug an issue, yes, I would write them up.

          There is a direct, easy-to-measure line between anyone below me and the revenue they make the company. My revenue per hour isn’t as exact, since I support pre-sales and follow-on work.

    • chasd00 1 day ago

      Geez, you guys need to spend some time in orgs where your paycheck depends on getting the bugs fixed and deployed. If your direct deposit happens whether you deliver or not, then you’re missing the most valuable career lesson of all.

  • Gigachad 1 day ago

    Often, when LLMs give me some command option or advice I haven’t seen before, I try to independently verify it. And I’ve often been frustrated by just how hard it is to find this info in the source documents.

    Though a lot of the time this is more an inefficiency of the documentation and Google rather than something only LLMs could do.

    • nyarlathotep_ 1 day ago

      As the rate of 'hallucinations' seems to have dropped dramatically (at least IME as regards non-existent flags and the like), I'm more concerned with usage. I often use grep.app/GH code search to look for usage examples as a sanity check when things look "off", for exactly the reason you described--there's often a total lack of good documentation on things like that, especially on "younger" tools/stuff.

    • skydhash 12 hours ago

      Most established projects have quite good documentation. And if they don’t, that’s because they consider the code to be the documentation and just provide an overview. The source code is the main truth, and there’s a trick to reading it quickly — but that’s acquired by playing around with a lot of projects.

  • derangedHorse 1 day ago

    I'm sure the author will encounter problems where the only way to solve them will be the marginal effort provided by a human. At that point he won't just be solving problems to work his brain, but also to accomplish a goal.

  • usernametaken29 1 day ago

    I’ve worked in financial modelling before where you need to make sure results are correct, not approximate. One time there was a nasty bug in pandas multiindexes (admittedly we banned pandas for all new code because it just can’t do semver). Spent 9 days to debug three lines of code. Endurance and patience are learned skills and sometimes they’re the only way you can get a correct verifiable solution.

    • BodyCulture 18 hours ago

      What are you using instead of pandas? Thanks!

      • usernametaken29 1 hour ago

        Either nothing (a lot of the functionality of pandas can be done with simple plain Python) or polars for complex queries. Also look at the statistics module, which has a lot of useful things in it.
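        A minimal sketch of the plain-Python route the parent describes — the trade records, field names, and numbers here are invented for illustration:

```python
import statistics
from collections import defaultdict

# Hypothetical records; in pandas this would be a groupby/agg.
trades = [
    {"desk": "rates", "pnl": 120.0},
    {"desk": "rates", "pnl": 80.0},
    {"desk": "fx", "pnl": -30.0},
]

# Group by desk with a plain dict of lists...
by_desk = defaultdict(list)
for t in trades:
    by_desk[t["desk"]].append(t["pnl"])

# ...then aggregate with the stdlib statistics module.
mean_pnl = {desk: statistics.mean(pnls) for desk, pnls in by_desk.items()}
print(mean_pnl)  # {'rates': 100.0, 'fx': -30.0}
```

        Unlike pandas, every intermediate value here is an ordinary dict or list, so results are exact and easy to audit — which matters in the correctness-sensitive setting the parent mentions.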

  • JuniperMesos 1 day ago

    Why shouldn't someone consult some kind of external resource for help, after struggling with a specific coding problem for 20 minutes? Why is 6 hours the right amount of time to timebox this to?

    • thrance 1 day ago

      The struggle is the point, that's how you learn. If you offload your task to someone/something else after barely 20 minutes of head scratching, you've missed the plot entirely.

    • demorro 1 day ago

      20 minutes is not enough time to drive you into a state of desperation, where you may be forced to try something novel which will expand your mind and future capabilities in unknown and unexpected ways. You might be driven to contact another human being, for example.

      • YesBox 1 day ago

        I was typing up a long and somewhat boring story.

        So, the short of it is that this is a great insightful comment that I can back up with my own experience in making a game from scratch over the last 4+ years.

    • bhelkey 1 day ago

      There wasn't always an external resource to go to for help. Especially for legacy pieces of software, it was easy to become the person with the most context on the team.

      • noosphr 1 day ago

        This is reaching "you won't always have a calculator" levels of cope.

        • bigfishrunning 21 hours ago

          And yet doing arithmetic in your head is an extremely useful skill to this very day

    • Jtarii 1 day ago

      It entirely depends on what your goals are.

      If you want to solve the problem quickly, then just use the resources you have; if you want to become someone who can solve problems quickly, then you need to spend hundreds of hours banging your head against a wall.

    • bsder 23 hours ago

      1) 20 minutes is barely enough time to get into flow.

      2) There are different levels of debugging. Are your eyes going to glaze over searching volumes of logs for the needle in a haystack with awk/grep/find? Fire up the LLM immediately; don't wait at all. Do the fixes seem to just be bouncing the bugs around your codebase? There is probably a conceptual fault and you should be thinking and talking to other people rather than an AI.

      3) Debugging requires you to load a mental model of what you are trying to fix into your brain and then correct that model gradually with experiments until you isolate the bug. That takes time, discipline, and practice. If you never practice, you won't be able to fix the problem when the LLM can't.

      4) The LLM will often give you a very, very suboptimal solution when a really good one is right around the corner. However, you have to have the technical knowledge to identify that what the LLM handed you was suboptimal AND know the right magic technical words to push it down the right path. "Bad AI. No biscuit." on every response is NOT enough to make an LLM correct itself properly; it will always try to "correct" itself even if it makes things worse.
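      The awk/grep/find pass from point 2 can be sketched as below; the directory layout and error string are invented for illustration:

```shell
# Build a tiny haystack to search (stand-in for a real log directory).
mkdir -p logs
printf 'ok\nERROR id=evt_123 timeout\nok\n' > logs/app.log
printf 'ok\nok\n' > logs/worker.log

# Find logs touched in the last day, then grep each one,
# printing file:line:match so hits are easy to jump to.
find logs -name '*.log' -mtime -1 -print0 \
  | xargs -0 grep -Hn 'ERROR id=evt_123'
# -> logs/app.log:2:ERROR id=evt_123 timeout
```

      The point of the comparison: this mechanical scan is exactly the kind of eyes-glazing work worth handing to a tool immediately, whether that tool is grep or an LLM.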

      • agdexai 15 hours ago

        Good breakdown. I'd add a layer to point 2: beyond deciding when to use the LLM, there's a separate question of which tool in the LLM ecosystem fits the task.

        For haystack-style debugging (searching logs, grepping stack traces), a fast cheap model with large context (Gemini Flash, Claude Haiku) is more cost-effective than a frontier model. For the conceptual fault category you mention — where you actually need to reason about system design — that's when it might be worth paying for o3/Claude Opus class models.

        The friction is that most people default to whatever chatbot they have open, rather than routing to the right tool. The agent/LLM tooling space has gotten good enough that this routing is automatable, but most devs haven't set it up yet.

    • dinkumthinkum 20 hours ago

      You will lose the ability to struggle through different problems. You will become psychologically weak. Degrees of time matter. Twenty minutes is about the time of a sitcom. If you can't sit with a problem then you will be weak and weak people make hard times. Oh well.

  • raw_anon_1111 1 day ago

    Why? I’m as old-timer as old-timer can get - I started programming as a hobby in 1986, writing 65C02 assembly on an Apple //e.

    But just today a bug was reported by a customer (we are still in testing not a production bug). I implemented this project myself from an empty git repo and an empty AWS account including 3 weeks of pre implementation discovery.

    I reproduced the issue and threw the problem at Claude with nothing but two pieces of information - the ID of the event showing the bug and the description.

    It worked backwards looking at the event stream in the database, looking at the code that stored the event stream, looking at the code that generated the event stream (separate Lambda), looking at the actual config table and found the root cause in 3 minutes.

    After looking at the code locally, it even looked at the cached artifacts of my build and verified that what was deployed was the same thing that I had locally (same lambda deployment version in AWS as my artifacts). I had it document the debug steps it took in an md file.

    Why make life harder on myself? Even if it were something I was doing as a hobby, I have a wife who I want to spend time with, I’m a gym rat and I’m learning Spanish. Why would I waste 6 hours doing something that a computer could do for me in 5 minutes?

    Assuming he has a day job and gets off at 6, he would be spending all of his off time chasing down a bug - time he could be using for something else.

    • grebc 1 day ago

      It’s always the journey that matters.

      If you’re experienced as you are, you’re not learning the same way a junior assigned this might learn from it.

      • raw_anon_1111 1 day ago

        So, for the project I mentioned - while I did write every single line of app code and IaC, made every architectural decision, etc. - I did come on and off the project over the course of a year, and I couldn’t even remember some of the decisions I made.

        I also used Codex and asked questions about how the codebase worked to refresh my own memory. Why wouldn’t a junior developer do the same?

        I mentioned that I had Codex describe in detail how it debugged the issue. It walked through each query it ran, the lines of code it looked at, and the IaC. It jogged my memory about code I wrote a year ago, after being on other projects.

        • grebc 1 day ago

          If you’re 50+ as you intimated in your first post then you have a wealth of knowledge that juniors don’t.

          Just because it worked this time doesn’t mean it always will.

          If you need further explanation of why you might want to spend more time resolving a bug to learn about the systems you’re tasked with maintaining then I’m at a loss sorry.

          • scarface_74 22 hours ago

            And then as experience developer you would have to try one of the other tools in your toolbox. Why should someone tie a hand behind their back and not use an LLM out of some sense of nerd pride?

            • grebc 14 hours ago

              How do you get the experience if you always just reach for your LLM?

              • icedchai 50 minutes ago

                This is a real issue we'll face soon enough. It's less of a problem for senior+ developers that have experience and muscle memory. For people just starting in the industry, they won't develop the ability to research problems and solve them on their own.

                I work with some junior level, outsourced developers that write prompts like "fix the tests." The result is, of course, bad. The consulting company charges $200+ hour for them. Garbage in, garbage out. Good thing I hit my retirement number. I can bail out anytime.

        • t43562 4 hours ago

          It's what managers or team leads experience. They have a team that works on the code, and even if they used to know what was going on, they end up doing everything second-hand. They don't realise when the project is going wrong technically, because they aren't spending the time to build knowledge and it all looks reasonable from 5,000 feet.

          They start to make questionable decisions based on how they think things are. I have done this. Getting back into development allowed me to see what was going wrong, why changes were difficult and what we needed to do to test properly.

          Hurray, you're an AI manager now but be careful how much you decide to not look "in the box" especially if you're trying to come up with release dates and so on.

          • raw_anon_1111 4 hours ago

            Before AI, I didn’t know how each line of code worked in projects I was responsible for where the work was done by lower-level developers. If I had a question about how they implemented something, I would ask them to walk me through it and give feedback - just like my CTO did with me when I was responsible for leading major initiatives. It’s been over a decade (since before AI) since my responsibilities covered only what I personally coded.

            I treat AI just like a mid level developer ticket taker.

            To a first approximation, no one gets ahead in corporate America or BigTech (been there done that) because they “codez real gud” and pull tickets off of a Jira board.

            In the last decade+, I’ve been an early technical hire brought in to lead a major initiative by a new manager, director, and CTO respectively, and none of them were interested in asking me questions about my coding ability. We spoke like seasoned professionals.

            Even at my job in BigTech in the cloud consulting department (a full-time, RSU-earning blue-badge employee specializing in cloud + app dev), the interview was behavioral: they wanted to determine if I “were smart and got things done” [1]. At my current job as a staff consultant (full-time employee) leading projects, the interview was concerned with getting work done on time, on budget, meeting requirements, and whether the customer was happy with my results. Absolutely no one in the value chain cares about hand-crafted bespoke code as long as it meets functional and non-functional (security, scalability, usability, etc.) requirements.

            • t43562 3 hours ago

              Of course. You've had to give up and operate at the people level and it works because all those above you are doing that even more so.

              ...and you want to get ahead.

              But you've made a trade off and to think otherwise would be a mistake. Someone else has to straddle the line that you're floating above and obviously part of your job is to get hold of such people.

    • LeCompteSftware 1 day ago

      Did you miss this part?

         But he was doing this for education, not for work.
      

      That's why he should spend 6 hours on it, and not give up and run to the gym. That's like saying "I shouldn't spend an hour at the gym this week, lifting weights is hard and I want to watch TV. I'll just get my forklift to lift the weights for me!"

      • raw_anon_1111 1 day ago

        With his experience, I seriously doubt that he is trying to compete in the job market based on his ability to “codez real gud”. At his (and my) experience level he is more than likely going to get his next job based on a higher level of “scope” and “impact” (yes I’m using BigTech promo docs BS).

  • j1elo 1 day ago

    I just grabbed an Android remaster of "Broken Sword: Shadow of the Templars", a 90's point-and-click adventure that has had a hints system added, which pops up automatically after a timeout when the player isn't progressing.

    This can be set as far as 1h of being stuck. Can also be 5 minutes. But by default it is 30 seconds.

    My inner kid was screaming "that's cheating!" :-D but on second thought it is a very cool feature for us busy adults. Still, it's sad the extremes that gamedevs have to go to in order to appease the short-term, mindless consumers of today's TikToks.

    But more seriously, where's the joy of generating long-standing memories of being stuck for a while on a puzzle that will make you remember that scene for 30 years? An iconic experience that separates this genre from just being an animated movie with more steps.

    I couldn't imagine "Monkey Island II but every 30 seconds we push you forward". Gimme that monkey wrench.

    TFA and this comment just made me have this thought about today's pace of consumption, work, and even gaming.

  • Tanoc 1 day ago

    Oftentimes the fastest way to debug is to write it wrong, write it wrong again, find an example where somebody wrote it right, write that wrong in your own file, then figure out what you changed to adapt it that made it go wrong.

    If anyone remembers middle-school mathematics, this is the coding equivalent of the teacher making you write out the equations in their longest form instead of shortcutting. It's done this way because it shows you your exact train of thought and where you went wrong. That sticks in your head. You understand the problem by understanding yourself. Giving up after twenty minutes - instead of stopping, clearing your active cognitive load, and then coming back - erases your ability to understand that train of thought.

    For a comparison it's like being in first person view in a videogame, and the only thing you have is the ability to look behind you, versus being able to bring up a map that has an overhead view. In first person you're likely to lose where exactly you went to get where you are, while with the overhead view map you can orient your traveled route according to landmarks and distance.

dang 1 day ago

[stub for offtopicness]

(I swapped the title for the subtitle earlier because I thought it was more informative. What I missed was the flamebaity effect that "the old way" would have. Obvious in hindsight!)

  • sho_hn 1 day ago

    Remember the old days of our youth, i.e. last week Monday, when we still wrote code by hand?

    • phoronixrly 1 day ago

      I can't tell if OP is satire... I've just seen so many unhinged takes that this article reads completely in line with the discourse...

      • tayo42 1 day ago

        Taking 3 months off from work to program some basic stuff is already kind of disconnected from reality.

    • justonceokay 1 day ago

      Well when the fashions change, the old ways are “old fashioned” but only literally

  • SrslyJosh 1 day ago

    > "coding the old way"

    You mean the way that the majority of code is still written by professionals?

daneel_w 1 day ago

Depressing. It's like reading has-been actors' stories about how they went to wellness retreats to "reconnect with themselves" to try get back on the job. I can't wait for the day when the same type of people as the author - or indeed, the author himself - start labeling plain regular programming as "artisanal" and "craft".

mchusma 1 day ago

You should do what you want, and as a break it’s fine. But IMO right now the most leverage for most people is learning how to effectively manage agents. It’s really hard. Not many are truly good with it. It will be relevant for a long time.

  • sdevonoes 1 day ago

    For the average and mundane stuff, sure do whatever everyone is doing.

    For the good stuff, there’s no alternative but to know and to have taste. Llms change nothing.

  • baq 1 day ago

    The agents are already learning to manage agents, if it’s relevancy you’re looking for you might want to take up plumbing instead.

    • onair4you 1 day ago

      Not sure what you are using, but that’s easier said than done. I just set up an agent to ensure that my other agent would follow my coding guidelines by using hooks. The coding agent responded by switching to editing with `sed`, etc. to circumvent the hooks.

      Claude Opus is going to give zero fucks about your attempts to manage it.

    • bdangubic 1 day ago

      This is exactly right. I don't manage agents anymore (and I had spent countless hours learning how to do so - now it's a skill like my Microsoft Access skills, which were amazing back in the day...).

  • sd9 1 day ago

    What has been most valuable for you?

    It is hard indeed. I find it really quite exhausting.

    Personally, I feel like I have always been a very competent programmer. I'm embracing the new way of working, but it seems like quite a different skillset. I somewhat believe that it will be relevant for a long time, because there is an incredibly large gap in outcomes between members of my team using AI. I've had good results so far, but I'm keen to improve.

  • zingababba 1 day ago

    I see you got downvoted, but I agree. I went through a massive valley of despair and turned back to hand-crafting, only to realize that for me coding was always a means to an end and I really didn't care at all about how I got there. Now I'm having a lot of fun building out all kinds of wonky projects.

  • aerhardt 1 day ago

    > It’s really hard

    How? I just open multiple terminal panes, use git worktrees, and then basically it’s good old software dev practices. What am I missing?

    • bensyverson 1 day ago

      You're probably significantly underselling the value of your own "good old software dev practices."

      • LeCompteSftware 1 day ago

        I believe the point (which you seem to tacitly agree with) is that a young dev's time is much better spent reading and writing code "the old-fashioned way" vs chasing the new SOTA in AI-assisted development. A competent dev can basically master agentic development in a few months. But it takes years to become competent.

      • aerhardt 14 hours ago

        Oh yea, I agree that building good software remains roughly as challenging as ever.

        I was asking if there was something about the “agentic” part in particular that was difficult.

  • slopinthebag 1 day ago

    Yeah, it's really difficult to remember to tell it "make no mistakes". Typing a prompt is also really hard, especially when you have to remember the cli command to open the agent. Sometimes I even forget if I need to use "medium", "high", or "xhigh" for a task.

    • jodrellblank 6 hours ago

      I may as well post this troll comment somewhere and get it out of my system, but have you noticed how people called "slopinthebag" who claim to hate slop and enshittification are the same people who make barrel-scrapingly low-quality thought-free comments which make the sites they are posting on worse for everyone?

      Shouldn't they be the people making the most substantial, artisanal, sweat-of-the-brow deep-thought comments?

  • Marazan 1 day ago

    > It will be relevant for a long time.

    Citation needed.

  • idle_zealot 1 day ago

    > It will be relevant for a long time.

    Why would you think that? The landscape is fast-moving. Prompting tricks and "AI skills" of yesterday are already dated and sometimes actively counterproductive. The explicit goal of the companies working on the tech is to lower the barriers to entry and make it easier to use, building harnesses and doing refinement that align LLMs to an intuitive mode of interaction.

    Do you think they'll fail? Do you think we've plateaued in terms of what using a computer looks like and your learnings for wrangling the agents of this year will be relevant for whatever the new hotness is next year? It's a strong claim that demands similarly strong argument to support.

  • the_gipsy 1 day ago

    If they're so great, then we will end up somewhere where it's easy to pick up.

  • dyauspitr 1 day ago

    You will be relevant for 6 months until they manage themselves.

  • dinkumthinkum 20 hours ago

    When you say it's hard, what does that mean? Presumably, if the AI is so good, why can't you just ask it to do that? Why are you even needed?

einpoklum 1 day ago

[flagged]

  • dang 1 day ago

    Please don't cross into personal attack on HN. You can make your substantive points without that.