If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
My thesis work (back when EcmaScript 5 was new) was an AOT JS compiler. It worked, but there were limitations with regard to input data that made me abandon it, since JS developers overall didn't seem to be aware of how to restrict themselves properly (the result of JSON.parse is inherently unknown; today, with TypeScript, it's probably more feasible).
The limitations are clear, too: general lambda calculus points to limits in the type-inference system (there are plenty of good papers on the subject from e.g. Matt Might, as well as from the Shed Skin Python people).
eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code? And how is untyped parsing done (i.e. JSON ingestion)?
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code
This depends on the individual writing code. Some use it more than others.
I can only give my use case.
.send() I use a lot. I feel that it is simple to understand: you simply invoke a specific method here. Of course people can just use .method_name() instead (usually without the () in Ruby), but sometimes you may autogenerate methods and then need to call something dynamically.
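A minimal sketch of that kind of dynamic dispatch (the class and method names here are invented for illustration):

```ruby
class Report
  def row_count
    42
  end

  def col_count
    7
  end
end

report = Report.new

# The method name is only known at runtime, so dispatch via send.
%w[row_count col_count].each do |name|
  puts report.send(name)
end

# The equivalent static call, for comparison:
puts report.row_count
```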
.define_method() I use sometimes, when I batch-create methods. For instance, I use the HTML colour names, steelblue, darkgreen and so forth, and I often batch-generate the methods for these, e.g. from the correct RGB code. And similar use cases. But of my roughly 50 main Ruby projects, at best only 20 or so use it, whereas about 40 may use .send (or both a bit lower than that).
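A hedged sketch of that batch-generation pattern with HTML colour names (the two-entry colour table is made-up sample data):

```ruby
# Map a few HTML colour names to their RGB codes (sample data only).
COLOUR_TABLE = {
  "steelblue" => "#4682B4",
  "darkgreen" => "#006400"
}

class Palette
  # Batch-generate one reader method per colour name at load time.
  COLOUR_TABLE.each do |name, rgb|
    define_method(name) { rgb }
  end
end

puts Palette.new.steelblue   # => "#4682B4"
puts Palette.new.darkgreen   # => "#006400"
```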
eval() I try to avoid; in a few cases I use it, or its variants. For instance, in a simple but stupid calculator, I use eval() to calculate the expression (I sanitize it first). It's not ideal, but it's simple. I use instance_eval and class_eval more often, usually for aliases (my brain is bad so I need aliases to remember, and sometimes it helps to think properly about a problem).
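For the alias use case, a small sketch (class and method names invented):

```ruby
class Wordlist
  def colours
    ["steelblue", "darkgreen"]
  end
end

# Add an alias after the fact via class_eval, e.g. to support both spellings.
Wordlist.class_eval do
  alias_method :colors, :colours
end

p Wordlist.new.colors  # => ["steelblue", "darkgreen"]
```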
method_missing I almost never use anymore. There are a few use cases when it is nice to have, but I found that whenever I would use it, the code became more complex and harder to understand, and I kind of got tired of that. So I try to avoid it. It is not always possible to avoid it, but I try to avoid it when possible.
So, to answer your second question, to me personally I would only think of .send() as very important; the others are sometimes but not that important to me. Real-world code may differ, the rails ecosystem is super-weird to me. They even came up with HashWithIndifferentAccess, and while I understand why they came up with it, it also shows a lack of UNDERSTANDING. This is a really big problem with the rails ecosystem - many rails people really did not or do not know ruby. It is strange.
"untyped parsing" I don't understand why that would ever be a problem. I guess only people whose brain is tied to types think about this as a problem. Types are not a problem to me. I know others disagree but it really is not a problem anywhere. It's interesting to see that some people can only operate when there is a type system in place. Usually in ruby you check for behaviour and capabilities, or, if you are lazy, like me, you use .is_a?() which I also do since it is so simple. I actually often prefer it over .respond_to?() as it is shorter to type. And often the checks I use are simple, e. g. "object, are you a string, hash or array" - that covers perhaps 95% of my use cases already. I would not know why types are needed here or fit in anywhere. They may give additional security (perhaps) but they are not necessary IMO.
Why do you say HashWithIndifferentAccess shows a lack of understanding? Like many Rails features, it's a convenience that abstracts away details that some find unpleasant to work with. Rails sometimes takes "magic" to the extreme through meta-programming. However, looking at the source [1], HashWithIndifferentAccess doesn't use eval, send, method_missing, or define_method. So I'm not sure how it seems weird to someone who works more with plain Ruby.
1. https://github.com/rails/rails/blob/fa8f0812160665bff083a089...
I think you could work around send(). Not a Ruby person, but in most languages you could store functions in a hashmap, and write an implementation of send that does a lookup and invokes the method (passing the instance pointer through if need be).
Won’t work with actual class methods, but if you know ahead of time all the functions it will call are dynamic then it’s not a big deal.
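A sketch of that workaround in Ruby itself (all names invented): build a lookup table of unbound Method objects ahead of time and dispatch through it, which keeps the set of dynamically callable methods statically knowable.

```ruby
class Greeter
  def hello(name)
    "hello, #{name}"
  end

  def bye(name)
    "bye, #{name}"
  end

  # A static dispatch table: only methods registered here can be
  # called dynamically, and the set is known ahead of time.
  DISPATCH = {
    hello: instance_method(:hello),
    bye:   instance_method(:bye)
  }

  def my_send(sym, *args)
    # Look up the unbound method, bind it to this instance, and call it.
    DISPATCH.fetch(sym).bind(self).call(*args)
  end
end

puts Greeter.new.my_send(:hello, "world")  # => "hello, world"
```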
Seeing the performance improvement numbers I'm pretty sure there's a type-inference system below it to realize types in all paths (same as the AOT JS compiler I created).
It's not to be beholden to types per-se, but rather that fixed types are way faster to execute since they map to basic CPU instructions rather than operations having to first determine the type and then branch depending on the type used.
The problem with dynamic types is that they either need to somehow join into fixed types (like with TypeScript specifying a type-specification of the parsed object) or remain dynamic through execution (thus costing performance).
> If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
It's a very pragmatic design: it uses Prism (parsing Ruby is almost harder than the actual translation) and generates C. Basic Ruby semantics are not all that hard to implement.
On the other extreme, I have a long-languishing, buggy, pure-Ruby AOT compiler for Ruby, and I made things massively harder for myself (on purpose) by insisting on it being written to be self-hosting, and using its own parser. It'll get there one day (maybe...).
But one of the things I learned early on from that is that you can half-ass the first 80% and a lot of Ruby code will run. The "second 80%" are largely in things Matz has omitted from this (and from Mruby), like encodings, and all kinds of fringe features (I wish Ruby would deprecate some of them - there are quite a few things in Ruby I've never, ever seen in the wild).
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code? And how is untyped parsing done (ie JSON ingestion?).
They are pervasive. The limitations are similar to those of mruby, though, which has its uses.
Supporting send, method_missing, and define_method is pretty easy.
Supporting eval() is a massive, massive pain, but with the giant caveat that a huge proportion of eval() use in Ruby can be statically reduced to the block version of instance_eval, which can be AOT compiled relatively easily: e.g. when you can statically determine the string eval() is called with, or can split it up, since a lot of the uses are unnecessary, or are workarounds for relatively simple introspection that you can check for and handle statically. For my own compiler, if/when I get to the point where that is a blocking issue, that's my intended first step.
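A sketch of that reduction (the Config class is invented): the string form is opaque to an AOT compiler, while the block form carries the same code in a shape the compiler can see and compile ahead of time.

```ruby
class Config
  attr_reader :settings

  def initialize
    @settings = {}
  end

  def set(key, value)
    @settings[key] = value
  end
end

cfg = Config.new

# String form: the compiler cannot know what this code does.
cfg.instance_eval("set(:port, 8080)")

# Block form: same effect, but the body is ordinary compiled code.
cfg.instance_eval { set(:host, "localhost") }

p cfg.settings
```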
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code?
Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around. I'm not 100% sure, but probably the untyped JSON ingestion example uses those.
Remove that, and you have a very compact and readable language that is less strongly typed than Crystal but less metaprogrammable than official Ruby. So I think it has quite a lot of potential but time will tell.
> Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around
True, but I'd point out that frameworks/DSLs etc. are the main place you see those things, and most of the code people write in their own projects doesn't use them.
In my experience (YMMV), eval and send are rare outside of things like slightly cowboy unit tests (send basically lets you call private methods that you shouldn't be able to call, so it's considered terrible form to use it 'IRL'; there is also public_send, a non-boundary-violating version).
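For concreteness, a small sketch of that visibility boundary (the class and method names are invented):

```ruby
class Account
  def balance
    secret_total
  end

  private

  def secret_total
    100
  end
end

a = Account.new

# send happily crosses the private boundary...
p a.send(:secret_total)        # => 100

# ...while public_send respects it and raises NoMethodError.
begin
  a.public_send(:secret_total)
rescue NoMethodError => e
  puts "refused: #{e.class}"
end
```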
Also in my opinion, unless you're developing a framework or something, metaprogramming (things like define_method etc.) is Considered Harmful 95% of the time (at least in Ruby), as I think only about 5% of Ruby developers even grok it enough to work in a codebase with that going on. So while it might seem clever to a Staff Eng with 15 years of Ruby experience, the less experienced Rubyist who has to maintain the application later is going to be in pain the whole time, unable to find any of the method definitions that appear to be being called.
I disagree, I use metaprogramming in application code quite regularly, although I tend to limit myself to a single construct (instance_eval) because I find that makes things more manageable.
In my opinion the main draw of Ruby is that it's kind of Lisp-y in the way you can quickly build a metalanguage tailored to your specific problem domain. For problems where I don't need metaprogramming, I'd rather use a language that is statically typed.
The two are not mutually exclusive. On many occasions I've used C# to define domain-specific environments in which snippets of code, typically expressions, are compiled and evaluated at runtime, "extending the language" by evaluating expressions in the scope of domain-specific objects and/or defining extension methods on simple types (e.g., defining "Cabinet" and "Title" properties on the object and a "Matches" extension method on System.String so I can write 'Cabinet.EndsWith("_P") || Title.Matches("pay(roll|check)", IgnoreCase)').
Or even just a compiler to C piggybacking off <objc/runtime/objc.h>; I think Apple still spends a lot of time making even dynamic class definition work fast. I haven't touched Cocoa/Foundation in a while, but I think (emphasis on think) a lot of proxy patterns in Apple frameworks still need this functionality.
>eval, send, method_missing, define_method, as a non-rubyist how common are these in real-world code?
The interesting bunch (to me, based on experience) is `eval`, `exec`, and `define_method` (as well as creating new classes with `Class.new` and `Struct.new`). My sense is that the majority of their use happens at application boot, while requiring files. In some ways, it is nearly a compilation step already.
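For instance, classes built at require time look like this (a hedged, invented example): both classes below are created by running ordinary Ruby at load ("boot") time, much like a compilation step that repeats on every startup.

```ruby
# A value class generated dynamically with Struct.new.
Point = Struct.new(:x, :y) do
  def norm
    Math.sqrt(x**2 + y**2)
  end
end

# An anonymous class assigned to a constant via Class.new.
Greeting = Class.new do
  def call
    "hi"
  end
end

puts Point.new(3, 4).norm  # => 5.0
puts Greeting.new.call     # => "hi"
```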
For some context, just presented by Matz at RubyKaigi 2026. It’s experimental but he built it with help from Claude in about a month. Successful live demo.
It’s named after his new cat, which is named after a cat in Card Captor Sakura, which is the partner to another character named Ruby.
> It’s experimental but he built it with help from Claude in about a month.
We talk a lot about AI building programs from soup to nuts. But I think people overlook the more likely scenario. AI will turn 10x programmers into 100x programmers. Or in Matz’s case maybe 100x programmers into 500x programmers.
It actually helps me write better code. I am pretty lazy, so I kind of don't do much refactoring unless I have to; I care mostly about happy paths, and I kind of avoid treating edge cases unless I can't avoid it. I usually don't optimize much.
But since writing code is very easy now with AI I write better code because it doesn't take me more time and effort.
That cat story seems more than a little suspicious given the Ruby Central drama and its relation to the founders of Spinel.coop. This project feels like it may have been vindictively named.
While obviously super-impressive, it is clearly not maintainable without an AI agent. Its spinel_codegen.rb is 21k lines of code, with up to 15 levels of nesting in some methods.
Compiler code was never pretty, but even by those standards, I feel this is very, very hard code for humans to maintain.
spinel_codegen.rb is an eldritch horror. I always get spaghetti code like this when using Claude, and I've been wondering if I'm doing something wrong. Now I see an application that looks genuinely interesting (not trivial slop) written by someone I consider to be a top notch programmer, and the code quality is still pretty garbage in some places.
For example, infer_comparison_type() [1]. This is far from the worst offender (it's not that hard to read), but what's striking here is that there is a better implementation that's so simple and obvious, and Claude still fails to get there. Why not replace it with
require "set"

COMPARISON_TYPES = Set.new(["<", ">", "<=", ">=", "==", "!=", "!"])

def infer_comparison_type(mname)
  if COMPARISON_TYPES.include?(mname)
    "bool"
  else
    ""
  end
  # Or even better, strip the else clause entirely
  # (which would return nil for anything not in the set).
end
This would be shorter, faster, more readable, and more easily maintainable, but Claude always defaults to an if-return, if-return, if-return pattern. (Even if-else seems to be somewhat alien to Claude.) My own Claude codebases are full of that if-return crap, and now I know I'm not alone.
Other files have much better code quality though. For example, most of the lib directory, which seems to correspond to the ext directory in the mainline Ruby repo. The API is clearly inspired by MRI ruby, even though the implementation differs substantially. I would guess that Matz prompted Claude to mirror parts of the original API and this had a bit of a regularizing effect on the output.
It's true that it's shorter, but I suspect that the if-return, if-return pattern compiles down to much faster code. Separately, this code was originally written in C then ported. There are reasonable explanations for why Matz has the code written this way besides the typical AI slop.
I'm skeptical of that reasoning because the original C wasn't too clean or performant either. For example, emit.c from an earlier commit [1] writes a separate call to emit_raw for each line, even though there are many successive calls to emit_raw before it runs into any branching or other dynamic logic. What if you change this
emit_raw(ctx, "#include <stdio.h>\n");
emit_raw(ctx, "#include <stdlib.h>\n");
emit_raw(ctx, "#include <string.h>\n");
emit_raw(ctx, "#include <math.h>\n");
// And on for dozens more lines
to this
emit_raw(ctx,
    "#include <stdio.h>\n"
    "#include <stdlib.h>\n"
    "#include <string.h>\n"
    "#include <math.h>\n"
    // And on for dozens more lines
);
That would leave you with code that is just as readable, but only calls the emit function once, leading to a smaller and faster binary. Again, this is a trivial change to the code, but Claude struggles to get there.
Compiler code can be pretty if you have the time to maintain it. Compilers are some of the most modular applications you can build with hard boundaries between subsystems and clear handoffs at each level.
The problem is that people often do not have the time to refactor once they have gotten the thing to work. And the mess keeps growing.
Management problem more than anything else, I feel.
Compilers should not have so much churn. You decide on a set of language features, stick to it and implement. After that, it should only be bugfixes for the foreseeable future till someone can make a solid case for that shiny new feature.
Obviously it doesn't matter much now whether it's maintainable by hand or not. If the code is passing tests and benchmarks, I am happy.
But I am not sure that huge files are easy for the AI to work with. I try to restrict the files to 300 lines. My thinking is that if it's easy for a human to understand the code, it will be easy for coding agents, too.
- No metaprogramming: send, method_missing, define_method (dynamic)
- No threads: Thread, Mutex (Fiber is supported)
- No encoding: assumes UTF-8/ASCII
- No general lambda calculus: deeply nested -> x { } with [] calls
Assuming UTF-8/ASCII isn’t, IMO, a huge limitation, but some of the others likely are, for quite a few programs. Removing them also will require serious work, I think.
This is really cool; I've been looking for an AOT compiler for Ruby for a long time.
The lack of eval/meta-programming fallbacks is a shame though, but I guess they kept the focus on a small, performant subset.
It would be nice to have gems compiled with this AOT compiler that can interact well with MRI.
When it comes to packaging/bundling more standard ruby (including gems) we'll still need tebako, kompo, ocran – and then there's a bunch of older projects that did similar things too like ruby-packer, traveling ruby, jruby warbler etc.
It's nice to have another option, but still, I'm hoping for a more definitive solution with better developer UX.
> No metaprogramming: send, method_missing, define_method (dynamic)
> No threads: Thread, Mutex (Fiber is supported)
Speaking as someone who has written a lot of Ruby code over the years, utilizing every single one of these features of Ruby, I have to say this is the version of Ruby I've evolved to want: simpler and easier to understand but with the aesthetic beauty of Ruby intact.
IMO this more limited variant of Ruby is more practical now that we have extremely productive code generation tools in the form of LLMs. A lot of meta-programming ostensibly exists to make developers more productive by reducing the amount of boilerplate code that has to be written, but that seems no longer necessary now that developers aren't writing code.
What about Crystal? If it's just the aesthetic beauty you want then it might be a good fit as it's similar syntax yet statically typed which leads into more efficient compiled code.
Requiring static type annotations is likely a dealbreaker for many. I wish Matz had gone the route of python3 and allowed _optional_ inline type annotations instead of the mess that is RBS.
Crystal is great, and I think it nailed static typing in a Ruby-like language, but I've always been wary of using it in a day job because it doesn't have a significant user base.
If this stabilizes and gets networking support, there are definitely projects that I'll be able to use this for, and the buy-in will be easier than proposing Crystal.
I'd agree that lack of eval is "for the best", but lacking threads and mutexes isn't. Lack of define_method makes a lot of sense as well given the use case.
However, send/method_missing are in common use in preexisting libraries, and they shouldn't be particularly difficult to implement (via in-memory lookup tables at "compile" (to C) time, etc.), so either they're omitted for the reasons you say, or he just hasn't gotten around to them yet. I'm hoping the latter, but only for compatibility's sake, as otherwise I won't be able to use it for any real work, at least in the short term.
Curious why "no threads" when the ruby scheduler and underlying pthread implementation should work fine in C land. I guess to be "zero dependency"? Seems an odd trade-off to me, unless optional "extensions" are planned / omitted for later implementation etc.
I don’t see anywhere that it’s something they specifically decided not to support. Probably they just haven’t gotten around to it yet? Multithreading is notoriously difficult to get right.
It says it isn't supported right in the readme. Just isn't clear on the "why" yet. Not getting to it yet is my hope. I maintain 14+ highly threaded ruby services atm, for context.
"plansturbation" is a real industry, there are tons of successful YouTubers that sell millions of dollars in tutorials, courses, books, etc on how to setup your productivity harness
I see this being useful in infrastructure tools. Imagine a statically compiled bundler that can also do the job of RVM and friends (installing Ruby) but it is still written in Ruby.
The classic Ruby buildpack is written in Ruby but we have to bootstrap it with bash and it's annoying and has edge cases. The CNB is written in rust to not have that problem and the idea that you can ship a single binary with no dependencies is really powerful.
Crystal has an explicit static type system and is actually optimized at the language level for AOT compilation. These features are pretty much required for compiling and maintaining large programs.
This is for a limited subset of Ruby - almost no popular Ruby gems would run under it. It's more like PreScheme [1] (ie. a subset of a language oriented at C compilation).
I don't think these compete in the same niches right now. Full Ruby almost certainly requires a JIT.
It's a similar subset to mruby, and it might well end up influencing mruby, which does have its users. But it's almost a different language in some ways.
This is what I've been wondering after only a cursory glance ("It...generates optimized C code" from the OP). Interesting that mruby itself got a major version update around the same time (in just the past few days) https://github.com/mruby/mruby/blob/master/doc/mruby4.0.md
This should be seen from another perspective: we will eventually reach the point where LLMs can vomit out a formal specification in whatever language we feel like.
The revenge of Rational Unified Process, Enterprise Architect and many other tools.
As someone who spent a few years working on a compiler in this space: it's tough, but the results of instant startup and immediately decent performance without warm-up are satisfying to use in practice. I really hope this takes off and yet another language can break free from the dominant interpreter + JIT compiler monoculture that we currently have for higher-level programming languages.
Wow, I wanted to have this for a long time. I looked at Crystal, but it never sat right with me. I think some of the limitations can still be implemented (definitely Threads and Mutex), and I'd prefer it to compile to LLVM IR or something, not C, but overall I think it is great to see Matz playing around with AOT compiling.
I will eat the downvotes to say what I actually think.
Unless this gets back eval, metaprogramming and threads this isn't all that interesting as an actual language. There are plenty of compiled languages out there. Metaprogramming is what makes Ruby interesting and expressive.
I know that this is just an experiment, but I've seen plenty of cases where stuff exactly like this gets forced into use in production because someone established in the company thinks some new experimental tool is "cool" and "the future".
---
Also, I would like to direct people to take a look at Factor programming language that both compiles into fairly efficient binaries and has amazing metaprogramming features inspired by Lisp and Smalltalk. It doesn't have real threads either, though, which is extremely unfortunate.
One of the cool things about Factor (and part of why I brought it up) is that it basically does something similar out of the box. There is a full-featured optimizing compiler and a simpler, faster non-optimizing compiler for eval-like functionality. They work seamlessly together in the interactive Factor environment:
Right now the cost of C interop in Ruby is too high. It's actually more performant in the general case to rewrite any C lib wrappers in pure Ruby these days and let the JIT do the work.
I find the current documentation difficult to understand. This is a problem I see with many Ruby projects. How would I reword this?
Well, first thing, after stating what Spinel is, I would show a simple example. Ideally a standalone .rb file or something like that, that can be downloaded (or whatever other format). Yes, the README shows this, but believe it or not, I have realised that I am usually below average when trying to understand something that is new. I even manage to make copy/paste mistakes. This is why I think one or two standalone, as-is examples would be best. And then I would explain use cases.
The current structure of the document is strange. Has it been written with AI? If AI replaces the human individual, why is it then expected that real people should read it? So many questions here ...
Also, I would really like the Ruby ecosystem not to be split up into different entities. I understand that mruby does not have the same goals as MRI Ruby, but still, there is fragmentation. Now there is Spinel: how does it relate to other parts of Ruby? Why are TruffleRuby and JRuby separate? (I know why, so I am not objecting to the rationale; I am pointing out that for a USER it would be better if things were more unified here in general.)
Ruby really needs to focus on its inner core. The base should be solid. Even more so when it is harder to attract genuinely new developers.
Antis: "If AI is so useful, where are the AI shovelwares? Where are the AI open source contributions? It's all hype"
6 months later: Matz used Claude and now Ruby runs 86 times faster after 1 month of work
At this point it's impossible to take antis seriously at all. Every claim is disproven simply by the passage of time. History will remember them just like the dot-com antis (that is, it won't)
This doesn't implement all of Ruby. It's easy to make a language that looks like Ruby run fast. It's hard to make a CRuby compatible Ruby fast (all the dynamic features add a ton of overhead).
Didn’t Ruby already embed GCC at some point with similar ideas in mind?
Not embedding them, but mjit generated C and used a C compiler to compile it:
https://www.heroku.com/blog/ruby-mjit/#mjit
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code?
Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around. I'm not 100% sure, but probably the untyped JSON ingestion example uses those.
Remove that, and you have a very compact and readable language that is less strongly typed than Crystal but less metaprogrammable than official Ruby. So I think it has quite a lot of potential but time will tell.
> Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around
True, but I'd point out that frameworks/DSLs etc. are the main place you see those things, and most of the code people write in their own projects doesn't use them.
In my experience (YMMV), eval and send are rare outside of things like slightly cowboy unit tests (send basically lets you call private methods that you shouldn't be able to call, so it's considered terrible form to use it 'IRL'; though there is also public_send, a non-boundary-violating version).
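The distinction mentioned above is easy to demonstrate (the `Account` class is invented for illustration): `send` bypasses method visibility, while `public_send` respects it.

```ruby
class Account
  def balance
    100
  end

  private

  def wipe!
    @balance = 0
  end
end

a = Account.new
a.send(:wipe!)            # works, even though wipe! is private
a.public_send(:balance)   # => 100

begin
  a.public_send(:wipe!)   # raises: private method called
rescue NoMethodError => e
  puts e.class            # public_send enforces the boundary
end
```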
Also in my opinion, unless you're developing a framework or something, metaprogramming (things like define_method etc.) is Considered Harmful 95% of the time (at least in Ruby), as I think only about 5% of Ruby developers even grok it enough to work in a codebase with that going on. So while it might seem clever to a Staff Eng with 15 years of Ruby experience, the less experienced Rubyist who ends up maintaining the application later is going to be in pain the whole time, unable to find any of the method definitions that appear to be being called.
I disagree, I use metaprogramming in application code quite regularly, although I tend to limit myself to a single construct (instance_eval) because I find that makes things more manageable.
In my opinion the main draw of Ruby is that it's kind of Lisp-y in the way you can quickly build a metalanguage tailored to your specific problem domain. For problems where I don't need metaprogramming, I'd rather use a language that is statically typed.
The two are not mutually exclusive. On many occasions I've used C# to define domain-specific environments in which snippets of code, typically expressions, are compiled and evaluated at runtime, "extending the language" by evaluating expressions in the scope of domain-specific objects and/or defining extension methods on simple types (e.g., defining "Cabinet" and "Title" properties on the object and a "Matches" extension method on System.String so I can write 'Cabinet.EndsWith("_P") || Title.Matches("pay(roll|check)", IgnoreCase)').
It seems like a compiler from Ruby to Objective C could support all the Ruby features while still being more performant than interpreted Ruby.
There was MacRuby[0], which I seem to remember had an AOT compiler and was built on the ObjC foundation, but it was later abandoned.
[0] https://en.wikipedia.org/wiki/MacRuby
It was basically forked into RubyMotion, which is closed source but actively developed.
Or even just a compiler to C piggybacking off <objc/runtime/objc.h>; I think Apple still spends a lot of time making even dynamic class definition work fast. I haven't touched Cocoa/Foundation in a while, but I think (emphasis on think) a lot of proxy patterns in Apple frameworks still need this functionality.
I'm one of those who use eval often. Could I avoid it, possibly, but it seems more ergonomic for me.
>eval, send, method_missing, define_method, as a non-rubyist how common are these in real-world code?
The interesting bunch (to me, based on experience) is `eval`, `exec`, and `define_method` (as well as creating new classes with `Class.new` and `Struct.new`). My sense is that the majority of their use happens at application boot, while requiring files. In some ways, it is nearly a compilation step already.
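A minimal sketch of the boot-time pattern described above (class names invented for illustration): classes built dynamically when a file is required, then used as if they were statically declared.

```ruby
# Struct.new creates a class at load time; the block adds methods to it.
Point = Struct.new(:x, :y) do
  def norm
    Math.sqrt(x**2 + y**2)
  end
end

# Class.new builds an anonymous class; assigning it to a constant names it.
Event = Class.new do
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

Point.new(3, 4).norm    # => 5.0
Event.new("boot").name  # => "boot"
```

Since all of this runs once at require time, it is the kind of dynamism that could in principle be "executed" at compile time, which is presumably the point being made.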
Given how common Rails is, and how much it uses those: everywhere!
For some context, just presented by Matz at RubyKaigi 2026. It’s experimental but he built it with help from Claude in about a month. Successful live demo.
It’s named after his new cat, which is named after a cat in Card Captor Sakura, which is the partner to another character named Ruby.
> It’s experimental but he built it with help from Claude in about a month.
We talk a lot about AI building programs from soup to nuts. But I think people overlook the more likely scenario. AI will turn 10x programmers into 100x programmers. Or in Matz’s case maybe 100x programmers into 500x programmers.
AI is the function f(x) = x • |x|. It turns 10x into 100x, 1x into 1x, and -10x into -100x.
Something there, but this sounds too optimistic for x ∈ [-1, 0] and too pessimistic for x ∈ [0, 1].
I should specify the domain to be ℤ
It actually helps me write better code. I am pretty lazy, so I don't do much refactoring unless I have to; I care mostly about happy paths and tend to skip edge cases unless I can't avoid them. I usually don't optimize much.
But since writing code is very easy now with AI I write better code because it doesn't take me more time and effort.
Thanks! Video doesn't seem to be live yet but they seem to be dripping them out here:
https://www.youtube.com/@rubykaigi4884/videos
The most recent cartoon Spinel in my mind is from Steven Universe, so I hadn't noticed the Spinel/Ruby (Moon) pun, that made my day.
I never expected SU to come up in HN! Unfortunately, it wouldn't be the best reference...
Did something happen with SU?
Oh, no – I meant Spinel and her tragic past.
oh, I thought it was about the mineral which is ruby-ish :)
https://en.wikipedia.org/wiki/Spinel
That cat story seems more than a little suspicious given the Ruby Central drama / its relation to the founders of Spinel.coop. This project feels likely vindictively named.
While obviously super-impressive, it is clearly not maintainable without an AI agent. spinel_codegen.rb is 21k lines of code, with up to 15 levels of nesting in some methods.
Compiler code was never pretty, but even by that standard, I feel this would be very, very hard for humans to maintain.
spinel_codegen.rb is an eldritch horror. I always get spaghetti code like this when using Claude, and I've been wondering if I'm doing something wrong. Now I see an application that looks genuinely interesting (not trivial slop) written by someone I consider to be a top notch programmer, and the code quality is still pretty garbage in some places.
For example, infer_comparison_type() [1]. This is far from the worst offender - it's not that hard to read - but what's striking here is that there is a better implementation that's so simple and obvious, and Claude still fails to get there. Why not replace this with
This would be shorter, faster, more readable, and more easily maintainable, but Claude always defaults to an if-return, if-return, if-return pattern. (Even if-else seems to be somewhat alien to Claude.) My own Claude codebases are full of that if-return crap, and now I know I'm not alone.
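To illustrate the two styles being contrasted (a hypothetical sketch with invented names, not the actual Spinel code):

```ruby
# The if-return, if-return style LLMs tend to emit:
def comparison_type_if(op)
  if op == :==
    return :equality
  end
  if op == :!=
    return :equality
  end
  if [:<, :<=, :>, :>=].include?(op)
    return :ordering
  end
  :unknown
end

# An equivalent lookup-table form: shorter and easier to extend.
COMPARISON_TYPES = {
  :== => :equality, :!= => :equality,
  :<  => :ordering, :<= => :ordering,
  :>  => :ordering, :>= => :ordering
}.freeze

def comparison_type(op)
  COMPARISON_TYPES.fetch(op, :unknown)
end
```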
Other files have much better code quality though. For example, most of the lib directory, which seems to correspond to the ext directory in the mainline Ruby repo. The API is clearly inspired by MRI ruby, even though the implementation differs substantially. I would guess that Matz prompted Claude to mirror parts of the original API and this had a bit of a regularizing effect on the output.
[1] https://github.com/matz/spinel/blob/98d1179670e4d6486bbd1547...
It's true that it's shorter, but I suspect that the if-return, if-return pattern compiles down to much faster code. Separately, this code was originally written in C and then ported. There are reasonable explanations for why Matz has the code written this way besides the typical AI slop.
I'm skeptical of that reasoning because the original C wasn't too clean or performant either. For example emit.c from an earlier commit [1]
It writes a separate call to emit_raw for each line, even though there are many successive calls to emit_raw before it runs into any branching or other dynamic logic. What if you change this
to this
That would leave you with code that is just as readable, but only calls the emit function once, leading to a smaller and faster binary. Again, this is a trivial change to the code, but Claude struggles to get there.
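The suggested change can be sketched like this in the Ruby port (a hypothetical illustration, not Spinel's actual API or output):

```ruby
# Per-line emission: one call per line of generated C.
def emit_prelude_per_line(out)
  out << "#include <stdint.h>\n"
  out << "#include <stdlib.h>\n"
  out << "#include <string.h>\n"
end

# Collapsed into a single emission with a squiggly heredoc,
# which strips the leading indentation:
def emit_prelude(out)
  out << <<~C
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
  C
end
```

Both produce byte-identical output; the second makes one call instead of three and keeps the generated C readable as a block.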
[1] https://github.com/matz/spinel/blob/aba17d8266d72fae3555ec91...
I agree with the overall sentiment, but I personally have grown to love the if/return style.
I find it easier to reason about, and as code ages it stays maintainable, versus ever more elsif branches with multiple conditions each.
Compiler code can be pretty if you have the time to maintain it. Compilers are some of the most modular applications you can build with hard boundaries between subsystems and clear handoffs at each level.
The problem is that people often do not have the time to refactor once they have gotten the thing to work. And the mess keeps growing.
And the migrations. Or rather all the half-started migrations that never get through meaning you have to deal with api v1,2,3 all the times.
Those are pervasive in any old and large project but in my experience especially so in compilers.
Management problem more than anything else, I feel.
Compilers should not have so much churn. You decide on a set of language features, stick to it and implement. After that, it should only be bugfixes for the foreseeable future till someone can make a solid case for that shiny new feature.
Scope creep is bane of most projects.
Obviously it doesn't matter much now whether it's maintainable by hand or not. If the code is passing tests and benchmarks, I am happy.
But I am not sure that huge files are easy for the AI to work with. I try to restrict the files to 300 lines. My thinking is that if it's easy for a human to understand the code, it will be easy for coding agents, too.
Limitations
- No eval: eval, instance_eval, class_eval
- No metaprogramming: send, method_missing, define_method (dynamic)
- No threads: Thread, Mutex (Fiber is supported)
- No encoding: assumes UTF-8/ASCII
- No general lambda calculus: deeply nested -> x { } with [] calls
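The last bullet presumably refers to the curried-lambda style, along these lines:

```ruby
# Deeply nested lambdas called with [] -- the pattern the README
# says is not supported:
add = ->(x) { ->(y) { ->(z) { x + y + z } } }
add[1][2][3] # => 6
```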
Assuming UTF-8/ASCII isn’t, IMO, a huge limitation, but some of the others likely are, for quite a few programs. Removing them also will require serious work, I think.
This removes a large portion of the magic of Ruby.
I do use class_eval quite a bit but I could pivot to precomputed script generators for those use cases going forward
This is really cool, I've been looking for an AOT compiler for ruby for a long time.
The lack of eval/meta-programming fallbacks is a shame though, but I guess they kept the focus on a small, performant subset.
It would be nice to have gems compiled with this AOT compiler that can interact well with MRI.
When it comes to packaging/bundling more standard ruby (including gems) we'll still need tebako, kompo, ocran – and then there's a bunch of older projects that did similar things too like ruby-packer, traveling ruby, jruby warbler etc.
It's nice to have another option, but still, I'm hoping for a more definitive solution with better developer UX.
Yeah I had to fork warbler recently since it hasn't been updated in forever
> No eval: eval, instance_eval, class_eval
> No metaprogramming: send, method_missing, define_method (dynamic)
> No threads: Thread, Mutex (Fiber is supported)
Speaking as someone who has written a lot of Ruby code over the years, utilizing every single one of these features of Ruby, I have to say this is the version of Ruby I've evolved to want: simpler and easier to understand but with the aesthetic beauty of Ruby intact.
IMO this more limited variant of Ruby is more practical now that we have extremely productive code generation tools in the form of LLMs. A lot of meta-programming ostensibly exists to make developers more productive by reducing the amount of boilerplate code that has to be written, but that seems no longer necessary now that developers aren't writing code.
What about Crystal? If it's just the aesthetic beauty you want then it might be a good fit as it's similar syntax yet statically typed which leads into more efficient compiled code.
Requiring static type annotations is likely a dealbreaker for many. I wish Matz had gone the route of python3 and allowed _optional_ inline type annotations instead of the mess that is RBS.
Crystal is great, and I think it nailed static typing in a Ruby-like language, but I've always been wary of day-job use of the language because it doesn't have a significant user base.
I agree, it's a network effect problem and I wouldn't use it for professional use, only personal.
This subset of Ruby doesn’t either.
If this stabilizes and gets networking support, there are definitely projects that I'll be able to use this for, and the buy-in will be easier than proposing Crystal.
I'd agree that lack of eval is "for the best", but lacking threads and mutexes isn't. Lack of define_method makes a lot of sense as well given the use case.
However, send/method_missing are in common use in preexisting libraries, and they shouldn't be particularly difficult to implement (via in-memory lookup tables at "compile" (to C) time etc.), so either they're omitted for the reasons you say, or he just hasn't gotten around to them yet. I'm hoping the latter, if only for compatibility's sake, as otherwise I won't be able to use it for any real work, at least in the short term.
The benefit of meta programming was never having less code to write.
It was having less code to read.
Curious why "no threads" when the ruby scheduler and underlying pthread implementation should work fine in C land. I guess to be "zero dependency"? Seems an odd trade-off to me, unless optional "extensions" are planned / omitted for later implementation etc.
I don’t see anywhere that it’s something they specifically decided not to support. Probably they just haven’t gotten around to it yet? Multithreading is notoriously difficult to get right.
It says it isn't supported right in the readme. Just isn't clear on the "why" yet. Not getting to it yet is my hope. I maintain 14+ highly threaded ruby services atm, for context.
Wow, written in just over a month. Say what you will about AI, but it has enabled serious speedups in the hands of a talented coder.
Rest of industry: OK we need to set up our agent harness, write our SOUL.md, config permissions, skills, mcps, hooks, env...
Matz: gem env|info and find should do
"plansturbation" is a real industry, there are tons of successful YouTubers that sell millions of dollars in tutorials, courses, books, etc on how to setup your productivity harness
I see this being useful in infrastructure tools. Imagine a statically compiled bundler that can also do the job of RVM and friends (installing Ruby) but it is still written in Ruby.
The classic Ruby buildpack is written in Ruby but we have to bootstrap it with bash and it's annoying and has edge cases. The CNB is written in rust to not have that problem and the idea that you can ship a single binary with no dependencies is really powerful.
Given it's built by Matz, how realistic is it that this becomes a core part of Ruby? And if so, how threatening is that for Crystal?
Crystal has an explicit static type system and is actually optimized at the language level for AOT compilation. These features are pretty much required for compiling and maintaining large programs.
This is for a limited subset of Ruby - almost no popular Ruby gems would run under it. It's more like PreScheme [1] (i.e. a subset of a language oriented toward C compilation).
I don't think these compete in the same niches right now. Full Ruby almost certainly requires a JIT.
[1]: https://prescheme.org/
It's a similar subset to mruby, and it might well end up influencing mruby, which does have its users. But it's almost a different language in some ways.
> It's a similar subset to mruby...
This is what I've been wondering after only a cursory glance ("It...generates optimized C code" from the OP). Interesting that mruby itself got a major version update around the same time (in just the past few days) https://github.com/mruby/mruby/blob/master/doc/mruby4.0.md
Matz is behind both projects, unclear to me what this project is intended to accomplish over mruby.
This should be seen in another perspective, we will eventually reach the point where LLMs can vomit the formal specification in whatever language we feel like.
The revenge of Rational Unified Process, Enterprise Architect and many other tools.
Instead of UML diagrams it is markdown files.
As someone who spent a few years working on a compiler in this space: it's tough, but the results, instant startup and immediately decent performance without warm-up, are satisfying to use in practice. I really hope this takes off, and that yet another language can break free from the dominant interpreter + JIT compiler monoculture that we currently have for higher-level programming languages.
wow, I wanted to have this for a long time. I looked at Crystal, but it never sat right with me.
I think some of the limitations can still be implemented (definitely Threads and Mutex), and I'd prefer it to compile to LLVM-IR or something, not C, but overall I think it is great to see Matz playing around with AOT compiling.
Would love to see Guido do the same with Python w/ Claude to see what the end result is.
As the son of a geologist, I find the name a great choice.
I will eat the downvotes to say what I actually think.
Unless this gets back eval, metaprogramming and threads this isn't all that interesting as an actual language. There are plenty of compiled languages out there. Metaprogramming is what makes Ruby interesting and expressive.
I know that this is just an experiment, but I've seen plenty of cases where stuff exactly like this gets forced into use in production because someone established in the company thinks some new experimental tool is "cool" and "the future".
---
Also, I would like to direct people to take a look at Factor programming language that both compiles into fairly efficient binaries and has amazing metaprogramming features inspired by Lisp and Smalltalk. It doesn't have real threads either, though, which is extremely unfortunate.
https://factorcode.org/
I've never even heard of Factor. thanks for sharing.
Spinel would be more interesting if this compiled subset could run side by side with interpreted Ruby, like Pallene does for (slightly modified) Lua.
One of the cool things about Factor (and part of why I brought it up) is that it basically does something similar out of the box. There is a full-featured optimizing compiler and a simpler, faster non-optimizing compiler for eval-like functionality. They work seamlessly together in the interactive Factor environment:
https://docs.factorcode.org/content/article-compiler.html
Right now the cost of C interop in Ruby is too high. It's actually more performant in the general case to rewrite any C lib wrappers in pure Ruby these days and let the JIT do the work.
I have wanted to be able to compile Ruby to a binary for some time, and have dreamed of poking at this problem with Claude, so this is pretty cool.
If you can get a Rack-compatible web server to build… I'd waste some serious time playing with this.
Even Matz now uses claude code...
He has been using it for quite a long time for mruby already.
For Ruby and c development it’s the best llm we got right now, the others lag behind by a lot sadly.
With openAI it’s so bad for my usecase it’s basically unusable.
is this done by matz and claude? :)
I find the current documentation difficult to understand.
This is a problem I see with many ruby projects. How would I reword this?
Well, first thing: after stating what Spinel is, I would show a simple example. Ideally a standalone .rb file or something like that, that can be downloaded (or whatever other format). Yes, the README shows this, but believe it or not, I have realised that I am usually below average when trying to understand something that is new. I even manage to make copy/paste mistakes. This is why I think one or two standalone as-is examples would be best.
And then I would explain use cases.
The current structure of the document is strange. Has that been written with AI? If AI replaces the human individual, why is it then expected that real people should read that? So many questions here ...
Also, I would really like for the ruby ecosystem to not be split up into different entities. I understand that mruby does not have the same goals as MRI ruby, but still, there is fragmentation. Now there is spinel - how does it relate to other parts of ruby? Why are truffleruby and jruby separate? (I know why, so I am not objecting to the rationale; I am pointing out that for a USER it would be better if things would be more unified here in general.)
Ruby really needs to focus on its inner core. The base should be solid. Even more so when it is harder to attract genuinely new developers.
Maybe it's AI-generated? I mostly suffer when reading AI-generated documentation.
if spinel gets to where it can compile 100% of mruby there could be some nice synergies there.
Antis: "If AI is so useful, where are the AI shovelwares? Where are the AI open source contributions? It's all hype" 6 months later: Matz used Claude and now Ruby runs 86 times faster after 1 month of work
At this point it's impossible to take antis seriously at all. Every claim is disproven simply by the passage of time. History will remember them just like the dot-com antis (that is, it won't)
This doesn't implement all of Ruby. It's easy to make a language that looks like Ruby run fast. It's hard to make a CRuby compatible Ruby fast (all the dynamic features add a ton of overhead).