points by Jarred 5 days ago

cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.

gobdovan 5 days ago

If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.

From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?

[0] https://ngspice.sourceforge.io/

  • therealpygon 4 days ago

    I hear your suggestion without feeling the need to give the far too common Linux/developer response of “but if you just do all this other stuff and run it this special way and install 15 dependencies and compile XYZ lib from source then clearly it works fine and you’re mistaken”.

    That’s exactly the type of thing that is needed: optimizing projects for modern compatibility, portability, and safety when other modernization efforts or forks don’t exist.

    That said, I suspect this rewrite went so quickly and so optimally because it had the benefit of (effectively) 100% test coverage already in place in a really well defined system. Most open source projects spawn from the efforts of a single developer who frequently never wastes time writing tests for a little side project. Later, as it grows, they rarely stop and go back to implement testing. So if you’re truly working with an old dead project, there is a really good chance there are zero tests to be found. It is far more difficult to reach the same completeness there, unless the goal is simply to port all of those same problems to a new language and hope type safety fixes them.

    (Not specific to ngspice, just mean generally.)

    • eternal_braid 4 days ago

      You can instruct an LLM to improve the test coverage.

      • pryelluw 4 days ago

        You are absolutely correct!

  • tracker1 2 days ago

    I've found Rust to be pretty enjoyable to work with in terms of Agent assisted development. Easier still if you have something you're trying to port or recreate in Rust for various reasons. There are definitely some rougher edges around a few things as you get more general purpose in terms of app targets. Some of the DB engines can use some work or may be missing interfaces you use in other supported languages/platforms... There's a somewhat limited set of UI options, and no clear winner.

    Lifetimes can get pretty hard in very complex code bases... even if other aspects of borrow checking may be more common, this is where I've had and seen the biggest gaps in understanding in practice. That said, you can usually do inefficient things to work around these issues, with the opportunity to come back later. Often inefficient Rust with lots of clone operations is still faster, smaller, and lighter than the same services in Java or C#, as an example.
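
    A tiny illustration of the clone workaround mentioned above (a hypothetical snippet, not from any real project): holding a reference into a Vec across a push is rejected because the push may reallocate, but cloning the element first compiles fine, at the cost of an allocation.

    ```rust
    fn main() {
        let mut names = vec!["bun".to_string(), "deno".to_string()];
        // `let first = &names[0];` would not compile here, because the
        // later `push` needs a mutable borrow while `first` is still live.
        // Cloning releases the borrow immediately.
        let first = names[0].clone();
        names.push(format!("{first}-rs"));
        assert_eq!(names[2], "bun-rs");
    }
    ```

    The clone can often be removed later by restructuring, which is exactly the "come back later" opportunity described above.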

sheepscreek 4 days ago

UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).

This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending 10s of billions of dollars on training alone for the year. I am equally excited and terrified at the pace of progress!

inglor 5 days ago

Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.

  • ignoramous 5 days ago
    > how long does it take to compile?

    > @jarredsumner: It's basically the same as in Zig using our faster Zig compiler. If we were using the upstream Zig compiler, the Rust port would compile faster.

    https://x.com/jarredsumner/status/2053050239423312035

    • jorams 4 days ago

      This is at least partially disingenuous. Zig is working on, and has already shipped for some situations, a faster compiler. Bun runs on an outdated version of Zig that doesn't include it.

  • laurencerowe 5 days ago

    In my experience Bun in Zig compiles more slowly than Deno in Rust.

    • hiccuphippo 5 days ago

      Single compiles for sure. Where Zig is optimizing compilation is in the incremental compiler, which I've seen compile the compiler itself in an instant after a single line change. Of course, that kind of speed is probably not interesting to some people if the AI is writing tons of lines of code before they go to the compilation step.

      • laurencerowe 5 days ago

        I found making single line changes in Bun’s zig code led to very long compiles compared to doing the same in Rust code. It was a while ago though and maybe I was doing something wrong.

        • cdud3 5 days ago

          Probably a very long time ago then. Try again with Zig 0.16. It's amazing how fast recompiles can be.

          • lukaslalinsky 5 days ago

            They can't, because Bun is tied to a fork of Zig 0.14 which is not compatible with the regular Zig compiler.

            • Jarred 4 days ago

              Bun’s patched Zig is on Zig 0.15.1

cpeterso 5 days ago

What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?

folderquestion 5 days ago

Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?

Perhaps 16,000 mostly measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.

Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore compiler messages are highly contextual and lifetime patterns are sparse in the training set.
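
To make the cascade concrete, here is a minimal sketch (a hypothetical parser, not Bun code) of how one lifetime parameter threads through a struct, its methods, and every caller; getting any one of these signatures wrong produces errors at every use site, not just one.

```rust
// A parser holding a reference into its input: the lifetime 'a must be
// threaded through every struct and function that touches the slice,
// which is why a single mismatch can cascade into many errors.
struct Parser<'a> {
    input: &'a str,
    pos: usize,
}

impl<'a> Parser<'a> {
    fn new(input: &'a str) -> Self {
        Parser { input, pos: 0 }
    }

    // The returned slice borrows from `input`, not from `self`, so the
    // signature must say `-> &'a str`; writing `-> &str` here would tie
    // the result to the `&mut self` borrow and break callers that keep
    // the word after the Parser is gone.
    fn take_word(&mut self) -> &'a str {
        let rest = &self.input[self.pos..];
        let end = rest.find(' ').unwrap_or(rest.len());
        self.pos += end + 1;
        &rest[..end]
    }
}

fn main() {
    let text = String::from("rewrite bun in rust");
    let word = {
        let mut p = Parser::new(&text);
        p.take_word() // outlives the Parser because it borrows `text`
    };
    assert_eq!(word, "rewrite");
}
```

Every caller of `take_word` depends on that one annotation, which is how a single wrong lifetime can account for a large slice of an error count.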

nhatcher 5 days ago

That's a post I am eagerly waiting to read.

Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take into account this law.

I am a Rust developer myself, but I really love Zig and Bun. I am just really curious about all this.

  • nextaccountic 5 days ago

    > Basically we are seeing now an "inverse Hofstadter's Law" where doing something with an LLM takes less time than expected even when you take into account this law.

    Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)

    • yen223 4 days ago

      LLMs have no conception of time, unless you explicitly feed in timestamps to the context

      • 0x457 4 days ago

        That doesn't stop LLMs from providing estimates like "this feature set will require 4 months to finish" (and then finishing it in one hour).

        • yen223 4 days ago

          Sorry yeah, I meant to say LLMs have no concept of time, so time estimates they give are almost always hallucinations

        • bw86 4 days ago

          Scotty from Star Trek does approve!

Aeolun 5 days ago

This does not surprise me in the least. Several Claudes are very good at splitting up the errors and working through them all.

Eufrat 5 days ago

I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.

  • minimaxir 5 days ago

    Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."

    • dakj12iH 5 days ago

      "I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."

      These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

      • preommr 5 days ago

        Those are not assertions of anything meaningful. We have no idea what his expectations were. Maybe he expected it to be absolute crap, and it was only kind of crap. None of it means that it's actually viable. My fat uncle trying to beat Bolt's time could exceed my expectations by improving from 30s to 20s; that doesn't mean it's ever going to be a reality.

        > In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

        In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people who have an axe to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.

        • thrwaway55 5 days ago

          I disagree. This is the same sort of marketing strategy as Mythos: "Wow, it outperformed so much we have to tell you in the future." If he weren't financially aligned with the outcome I'd agree, but he is.

          • perching_aix 5 days ago

            So do you picture them locking up the Rust port behind closed doors as well, or what's the game gonna be? Cause it reads like it's kinda all public already.

            • thrwaway55 5 days ago

              Absolutely not, I think they prioritized it because it's internal. I do expect to see a stronger marketing push on its ability to do language translations, because there is honestly value in that. The question is when they have the compute, but it's less crisis marketing than their security stuff, so I'd see it at a lower priority. I just don't think it's as honest as the parent post posits.

              • refulgentis 5 days ago

                The Mythos-truther community is absolutely batshit, sorry. You wrote fanfic and now you're writing more fanfic. The company is faking for marketing so therefore they're faking for marketing. The only things in common between the two situations are you and the word Anthropic, the rest of us are just confused and worried. I'm worried, that's why I'm speaking to you plainly.

sysguest 5 days ago

> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

haven't used zig...(only used rust)

but zig doesn't solve those problems?

  • josephg 5 days ago

    Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.

    It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.

    • dnautics 5 days ago

      It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.

      https://github.com/ityonemo/clr

      • josephg 5 days ago

        It'd take more than that to match rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in zig comments. Then you'd have a language that's a lot like rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (rust knows when it can Drop). And rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the rust compiler emits.

        Its an interesting idea. But if you want static memory safety in a low level systems language, its probably much easier to just use rust.

        • dnautics 5 days ago

          > I suppose you could work around that by adding lifetime annotations in zig comments.

          you can make a no-op function that gets compiled out but survives AIR

          > rust knows when it can Drop.

          and it's possible to cause problems if you aren't aware of where rust picks to drop.

          > And rust can put noalias everywhere in emitted code.

          zig has noalias and it should be possible to do alias tracking as a refinement.

          > But if you want static memory safety in a low level systems language, its probably much easier to just use rust.

          don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because its the only choice" keeps you in a local minimum.

          • josephg 5 days ago

            > and its possible to cause problems if you aren't aware where rust picks to drop.

            Can you give some examples? I've never run into problems due to this.

            > don't use that attitude to suck oxygen out of the air. rust comes with its own baggage

            Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile time memory safety. Rust has gone down one particular road in that forest. But there's probably loads of other options that nobody has tried yet. Some of them will probably be better than rust - but nobody has thought them through yet.

            I wish you luck in your project! If you land somewhere interesting, I hope you write it up.

            • dnautics 5 days ago

              > Can you give some examples? I've never run into problems due to this.

              If it's doing a drop in the hot loop that may be an unexpected performance regression that could be carefully lifted.

              Thank you. Unfortunately in the last few weeks I've been too busy with my startup to put as much work into it. We'll see =D

              • josephg 5 days ago

                > If it's doing a drop in the hot loop that may be an unexpected performance regression that could be carefully lifted.

                Yeah, I've heard of people who make massive collections of Box'ed entries and then get surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.

                The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.
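
                A sketch of the cost difference (with a hypothetical `Tracked` type standing in for a real payload): a collection of boxed values runs one destructor per element when dropped, while a flat Vec of primitives is freed in a single deallocation, no matter its length.

                ```rust
                use std::sync::atomic::{AtomicUsize, Ordering};

                static DROPS: AtomicUsize = AtomicUsize::new(0);

                struct Tracked(u64);
                impl Drop for Tracked {
                    fn drop(&mut self) {
                        DROPS.fetch_add(1, Ordering::Relaxed);
                    }
                }

                fn main() {
                    // A thousand boxed entries mean a thousand individual
                    // destructor calls (and frees) when the collection drops...
                    let boxed: Vec<Box<Tracked>> =
                        (0..1_000).map(|n| Box::new(Tracked(n))).collect();
                    drop(boxed);
                    assert_eq!(DROPS.load(Ordering::Relaxed), 1_000);

                    // ...while a Vec of plain integers is freed with a single
                    // deallocation of the backing buffer.
                    let flat: Vec<u64> = (0..1_000).collect();
                    drop(flat); // one free for the whole buffer
                }
                ```

                This is the same reasoning that makes arenas attractive: many logical objects, one deallocation.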

                • brabel 5 days ago

                  In zig the solution is to use an arena allocator. That’s about as easy as it gets. Maybe Rust also allows doing that, I don’t know.

                  • staticassertion 5 days ago

                    You can use arenas in Rust, it's just not as trivial to swap allocators generally. But there are plenty of crates for it.

                • dnautics 4 days ago

                  no, in zig it's never unexpected, because if you're freeing memory the free site is known: it's a function call.

                  • josephg 3 days ago

                    Right; because in zig the default behaviour is to leak memory. Rust adds an invisible free() call. Leaking is something you have to do explicitly.

                    I understand zig's philosophy here. But I prefer rust's default behaviour.
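
                    A minimal sketch of that default: in Rust, freeing is the implicit behaviour, and leaking is the thing you opt into explicitly, e.g. via `Box::leak` or `std::mem::forget`.

                    ```rust
                    fn main() {
                        // Dropping a Box normally frees its allocation; leaking
                        // must be requested explicitly and yields a 'static
                        // reference in exchange.
                        let leaked: &'static mut u32 = Box::leak(Box::new(42));
                        *leaked += 1;
                        assert_eq!(*leaked, 43);

                        // `std::mem::forget` is the other explicit escape hatch:
                        // the value's destructor simply never runs.
                        let s = String::from("never freed");
                        std::mem::forget(s);
                    }
                    ```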

                    • dnautics 3 days ago

                      yeah, IMO generally explicit is better. It's hard to take something implicit and increase the visibility (I'm aware there are tools to show you lifetimes in rust). But another option is to statically analyze the code (or the IR) and have something else check that you aren't leaking.

    • pjmlp 4 days ago

      Those tools exist in C tooling as well; that many ignore them is another matter.

      MSVC has had a debug allocator since at least Visual Studio 5.

  • efficax 5 days ago

    Zig is unmanaged memory. But Rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.

    • X0Refraction 5 days ago

      What language doesn't allow memory leaks?

      • dmytrish 5 days ago

        There are two kinds of memory leaks: forgotten manual freeing (all references are gone, but the allocation is not) and forgetting to get rid of references that keep an allocation alive. Both are kinds of logical error, but the first is mostly possible only in languages with manual memory management. The second is a universal logical error (only the programmer knows which live references are really needed).

        • tardedmeme 4 days ago

          Rust allows reference-counting cycles, right?

        • ethanpailes 4 days ago

          In the Haskell community I’ve seen the second kind called “space leaks.” I don’t see it used much outside that community but I like the term and use it when talking about other languages as well.

      • efficax 4 days ago

        I suppose all languages allow them, depending on how you define a memory leak. Garbage-collected languages generally prevent them, since you never have to explicitly free memory, but if there are reference cycles, that memory can never be freed automatically. Rust has the same problem, but since Rust uses lifetimes to understand when to drop things, many people expect that this means there can be no memory leaks. Leaks, however, are not considered a correctness or safety issue (OOM is a panic, and panic is safe!). They are not only explicitly possible (through Box::leak) but also possible by mistake (again, usually through reference cycles).
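
        The reference-cycle case can be sketched with `Rc` (a hypothetical `Node` type for illustration): a strong child-to-parent edge plus a strong parent-to-child edge would never be freed, which is why the back edge is conventionally a `Weak`.

        ```rust
        use std::cell::RefCell;
        use std::rc::{Rc, Weak};

        struct Node {
            // A strong `Rc<Node>` back-reference here would form a cycle
            // that reference counting can never collect; `Weak` breaks it.
            parent: RefCell<Weak<Node>>,
            children: RefCell<Vec<Rc<Node>>>,
        }

        fn main() {
            let parent = Rc::new(Node {
                parent: RefCell::new(Weak::new()),
                children: RefCell::new(Vec::new()),
            });
            let child = Rc::new(Node {
                parent: RefCell::new(Rc::downgrade(&parent)),
                children: RefCell::new(Vec::new()),
            });
            parent.children.borrow_mut().push(Rc::clone(&child));

            // The weak back-edge does not keep `parent` alive...
            assert_eq!(Rc::strong_count(&parent), 1);
            // ...while the parent's strong edge keeps `child` alive.
            assert_eq!(Rc::strong_count(&child), 2);
        }
        ```

        Swap the `Weak` for an `Rc` and the counts never reach zero: a leak the compiler is perfectly happy with.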

        • xigoi 4 days ago

          > but if there are reference cycles, that memory can never be freed automatically.

          Many garbage collection algorithms can deal with cycles.

  • nyrikki 5 days ago

    Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine substructural typing that gives Rust its superpowers.

    I am of the opinion that it is horses for courses and not a universal better proposition.

    Because my needs don’t fit in with Rust’s decisions very well I will use zig for personal projects when needed. I just need linked lists, graphs etc…

    While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case.

    1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.

    2) defer [0] allows you to colocate the freeing of resources with the code that acquires them.

    That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.

    I was working on some eBPF code in C and did really miss zig.

    For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.

    [0] https://zig.guide/language-basics/defer/

    • IshKebab 5 days ago

      Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.

      I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.

      • awesome_dude 5 days ago

        Show code

        • IshKebab 5 days ago

          Err https://github.com/petgraph/petgraph

          What are you asking for exactly?

          • awesome_dude 5 days ago

            I don't think it's unreasonable, even though I am getting marked down for daring to ask, to expect people who are making assertions (even ones well understood *within their own community*, that is, not necessarily universally known) to show examples of what they are talking about.

            You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.

            • rascul 5 days ago

              It doesn't seem clear what code you're asking for.

          • zipy124 5 days ago

            Forgive me if I've misunderstood this thread, but there are unsafe declarations in that crate. Is there really any difference between using unsafe in your own code versus wrapping it inside some crate?

            I guess you are making the point that the user does not have to concern themselves with the unsafe declarations?

            • simonkagedal 4 days ago

              I would say yes, there’s a difference, in general. I would much rather leave the unsafe code to crates used and tested by many other applications, than have them in the application code itself.

            • IshKebab 4 days ago

              > Is there really any difference between using unsafe in your own code, versus wrapping it inside some crate?

              Yes, in the same way that there's a difference between using `std::Vec` (which uses `unsafe`), and writing an unsafe Vec class yourself.

              Or even the difference between using Python (which wraps an unsafe CPython implementation), and doing everything in unsafe Python code.

              The difference is that widely used code like CPython and `std::Vec` are much much better tested and audited than anything I would write myself, because so many people use them. This is a continuum so something like petgraph is going to be not as well tested as std::Vec but still way better tested than anything I've written.

        • program_whiz 5 days ago

          I think he meant "show me a true linked list / node graph in Rust that isn't unsafe". The reason being that it's not possible using C-style pointer following (or without just putting everything in smart pointers). What you've shown is exactly the tradeoff they were referring to. In Rust, the answer is: make sure the lifetime of all memory is explicitly managed, then use integers for the 'links' between nodes.

          His point was that for his programming, he wants to be able to make real pointers and real linked lists, memory-unsafely, which Rust makes difficult or opaque. For example, with a linked list, you could simulate one (to avoid unsafe) by either boxing everything (so all refs are actually smart pointers), or using a container with a scoped memory lifetime and integers in an array as the "next" pointers. In addition to the extra complexity, "integers as edges" doesn't actually remove the complexity; it just means you can't get a bad memory error (you can still have 'pointers' that point to the wrong index if you're rolling your own).

          Same with your graph code. Using a COO representation for a graph does in theory make it "memory safe" (albeit more clumsy to use if you are doing pointer-following logic), and it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node).

          I think the point (which I agree with for things like linked lists, graphs, and compilers) is that depending on your use case, the "safety" guarantees of Rust are just making it harder to write the simplest, most understandable code. Now instead of `Node* next` I have lifetimes, integer references, two collections (nodes and edges) to keep in sync, smart pointers, etc. Previously my complexity was making sure `next != null`; now it's a ton of boilerplate and abstractions, performance hits, or more subtle bugs (like 'next' indices getting out of sync with the array of 'nodes').

          If there was a way to explicitly track the lifetime of an arbitrary graph/tree of pointers at compile time, we wouldn't need garbage collection. It's not solvable at compile time, and the complexity has to live somewhere.
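
          For what it's worth, the "integers as edges" approach can be sketched in safe Rust in a few lines (a toy list, with the stale-index caveat above still applying):

          ```rust
          // Index-based singly linked list: nodes live in one Vec and the
          // "pointers" are indices into it, so no `unsafe` and no per-node
          // Box is needed. The price is exactly the tradeoff discussed:
          // indices can go stale if nodes are ever removed.
          struct List {
              nodes: Vec<(i32, Option<usize>)>, // (value, index of next node)
              head: Option<usize>,
          }

          impl List {
              fn new() -> Self {
                  List { nodes: Vec::new(), head: None }
              }

              fn push_front(&mut self, value: i32) {
                  self.nodes.push((value, self.head));
                  self.head = Some(self.nodes.len() - 1);
              }

              fn to_vec(&self) -> Vec<i32> {
                  let mut out = Vec::new();
                  let mut cur = self.head;
                  while let Some(i) = cur {
                      out.push(self.nodes[i].0);
                      cur = self.nodes[i].1;
                  }
                  out
              }
          }

          fn main() {
              let mut list = List::new();
              list.push_front(3);
              list.push_front(2);
              list.push_front(1);
              assert_eq!(list.to_vec(), vec![1, 2, 3]);
          }
          ```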

          • IshKebab 4 days ago

            > it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node

            This is not actually a different kind of bug; it's just use-after-free, which you can of course get when using pointers instead of indices.

            Actually it's slightly safer than pointer use-after-free because it is type safe and there's no UB.

            Also some of the Rust arenas give you keys (equivalent to pointers) which can check for this. There's a good list here (see "ABA mitigation"):

            https://donsz.nl/blog/arenas/

  • SuperV1234 5 days ago

    Zig doesn't even have RAII...

    • reactordev 5 days ago

      which is a good thing. C++'s RAII is magic-sauce that does a lot for you when you can simply use `defer` in zig. A constructor is just a function call. A destructor is just a function call.

      • shakow 5 days ago

        And a function call is just a fancy JMP, still it's generally acknowledged to be better to have all the bookkeeping automated.

      • fooker 5 days ago

        How is defer not magic sauce?

        • zephen 5 days ago

          Whether you consider it magic is up to you, but, unlike a destructor in RAII, there is nothing automatic going on. If you don't explicitly invoke a destructor, you won't get a destructor.

          The fact that you can explicitly invoke the destructor to happen later is simply syntactic sugar, just like if/else/while, or any other control construct more powerful than a conditional jump instruction.

          • drysine 5 days ago

            > If you don't explicitly invoke a destructor, you won't get a destructor.

            When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

            >The fact that you can explicitly invoke the destructor to happen later

            You don't specify where the `defer`-red "destructor" will be invoked.

            • zephen 4 days ago

              > When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

              Unless, of course, you do it inside a defer block.

              > You don't specify where the `defer`-red "destructor" will be invoked.

              Yes, actually, you do. It is patently obvious, by code inspection, where the destructor, or anything else specified in a deferred block, will be invoked. defer is a perfectly cromulent part of structured control flow, allowing for easy reasoning about when things occur without having to calculate an insane number of permutations of conditional branch instructions.

          • smj-edison 5 days ago

            And more importantly, you can choose what destructor to call. This is perhaps what's most underrated about defer: it can select among many different possible destructors, at multiple different levels (group free with arenas, individual free, etc.).

            • zephen 4 days ago

              Or even whether you need a destructor, or something simpler, like nulling out a pointer or two to break a reference loop.

              defer is a perfectly general structured flow concept; it only cares about when you do something, and is completely orthogonal to what you need to accomplish.

              • reactordev 4 days ago

                I'm not sure the folks responding can tell the difference.

      • nly 5 days ago

        Constructors and destructors are also just function calls in C++

        And you can't forget to type defer

      • rcxdude 5 days ago

        Does defer in zig track the objects lifetime directly, or is it like the various other 'context' features in other languages where it only really works for lifetimes of function-local variables and leaves you on your own when things get more complicated? (which, IMO, is precisely when RAII becomes most useful. It does seem like most of these languages only consider the 'forgetting to cleanup on an early return from a function' case)

      • SuperV1234 3 days ago

        It's not a good thing. The reasoning is extremely simple and I don't understand how anyone can oppose it: there are some operations that you don't want to forget BY DEFAULT.

        If I open a file, eventually I want to close it. If I allocate some memory, eventually I want to deallocate it.

        Any programming language design that intentionally puts the onus BY DEFAULT on the user to *not forget to manually do something* is honestly asinine.

        Defer has a place (I do use defer in C++, in fact you can implement it with RAII, proving that RAII is strictly more powerful/more flexible), but the default should be the safest and most straightforward option.

        Also "magic-sauce that does a lot for you" is just false. It's literally a function call injected at the end of a scope.
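
        As a sketch of that "you can implement defer with RAII" point in Rust terms (a hypothetical `ScopeGuard`, not a standard library type): a defer-like construct falls out of RAII in a few lines, because Drop runs the deferred closure at end of scope automatically.

        ```rust
        use std::cell::RefCell;

        // A minimal defer-style scope guard built on Drop: the stored
        // closure runs when the guard leaves scope.
        struct ScopeGuard<F: FnMut()>(F);

        impl<F: FnMut()> Drop for ScopeGuard<F> {
            fn drop(&mut self) {
                (self.0)();
            }
        }

        fn main() {
            let log = RefCell::new(Vec::new());
            {
                log.borrow_mut().push("open");
                let _close = ScopeGuard(|| log.borrow_mut().push("close"));
                log.borrow_mut().push("work");
                // `_close` drops here, running the deferred closure last.
            }
            assert_eq!(*log.borrow(), vec!["open", "work", "close"]);
        }
        ```

        The reverse direction (implementing RAII on top of defer) is what you cannot do, which is the "strictly more powerful" claim above.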

  • baranul 5 days ago

    It is quite obvious that Zig is pre-1.0, with thousands of stranded unsolved issues (per their GitHub repo). A review of Zig hype gives the strong impression that the language was relentlessly and suspiciously pushed on HN, beyond logic or its language rankings (per TIOBE or GitHub stats), so that many were under the illusion that the language was something more or other than what it really is.

    Zig is still under development and beta. Stability problems, crashes, and leaks should not be surprising; they should even be expected. To stick with a beta language, companies and developers usually need to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.

    Team Bun appears to have "alignment and relationship issues" with Zig, to the point that they have decided to extensively explore their options. Now Bun is rewritten in Rust, and they are seeing if Rust solves their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.

    • smj-edison 5 days ago

      You might want to check their Codeberg then, because they've moved all their development over there...

      • baranul 5 days ago

        Zig very much could have moved all of their GitHub issues over to Codeberg, to be resolved, but chose not to do so. They thus left thousands of issues unsolved and stranded.

        This maneuver was arguably obfuscated by the anti-LLM stance and finger-pointing at Microsoft, but many have noticed nevertheless. Zig had, for a long time, been falling behind and doing poorly on its open-to-close ratio for resolving issues. It should be embarrassing to leave so many issues open.

        Even if they are not accepting new GitHub issues, they have demonstrated an inability to resolve existing issues except at an extremely slow pace. Considering there are just about no new issues coming in on their GitHub repo, it is understandable if some find the closing pace and the number of open issues unacceptable or questionable, in addition to the clearly bad open-to-close ratio.

        • smj-edison 5 days ago

          Did you read their migration post? They are treating it as COW (copy-on-write): they're using both issue trackers right now, but as soon as they update an issue it jumps straight to the Codeberg issue tracker. It's an unconventional way of doing it, but it's no conspiracy.

lelanthran 5 days ago

Peter Naur: Programming as Theory Building

Bun: Hold my beer