This was originally posted here a decade ago. I’m happy to see it’s still alive.
I’ve been using some generated assets for a game with voxelized art. I intend to take a deeper look at this and see if it can simplify parts of my workflow.
https://news.ycombinator.com/item?id=12612246
"Wave function collapse" - such a fancy name for a relatively simple algorithm without any connection to actual wave functions.
And one of my favorite examples of how fancy names impact popularity. Also see Mersenne Twister.
This is fascinating. I see it’s powered by weights and probabilities. Would this be a very simple ancestor of things like Stable Diffusion that we have now, or would it be on a completely different branch (a different approach)?
It’s procedural generation but that’s pretty much where the similarities end. People today might use a big generative NN model to do this, using maybe a thousand times as much energy to get essentially the same result. Gen AI is definitely a big step forward in our relentless drive to make software more inefficient in order to compensate for any efficiency gains that the hardware guys come up with.
It's like simple n-gram Markov chain algorithms vs. modern LLMs for text.
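To make the analogy concrete, here's a toy bigram Markov chain text generator; a minimal sketch with a made-up corpus, not anything from the linked project:

    import random
    from collections import defaultdict

    def build_bigram_model(text):
        # Count, for each word, how often each successor follows it.
        model = defaultdict(lambda: defaultdict(int))
        words = text.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
        return model

    def generate(model, start, length=10):
        # Walk the chain, sampling each next word in proportion to its count.
        out = [start]
        for _ in range(length):
            successors = model.get(out[-1])
            if not successors:
                break
            words, counts = zip(*successors.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran after the dog"
    print(generate(build_bigram_model(corpus), "the"))

Like WFC, it only ever reproduces local patterns seen in the sample; there's no learned global structure, which is roughly the gap an LLM (or a diffusion model, for images) fills.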
> WFC is a console application that depends only on the standard library. Get .NET Core for Windows, Linux or macOS...
Not very familiar with dotnet: does the above sentence mean it's an SDK that can produce svelte binaries that depend only on the C standard library? I thought the final executable required a whole runtime?
Read that as "only depends on the base dotnet runtime." I think the C# compiler at least can emit native code these days, but I'm not primarily a dotnet dev either, so I'm not too familiar with that.
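For what it's worth, I believe newer .NET (7 and later) supports Native AOT publishing, which compiles ahead of time to a self-contained native executable; something along these lines, assuming the project opts in:

    dotnet publish -c Release -r linux-x64 -p:PublishAot=true

Without AOT, the default output is IL that runs on the .NET runtime, either installed system-wide or bundled with the app as a self-contained deployment.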
Cool! Does anyone know if there are generalizations to video? Let's say the input is not a Bitmap but a sequence of bitmaps?
An explanation of how this works here: https://robertheaton.com/2018/12/17/wavefunction-collapse-al...
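The gist of the core loop, as I understand it: keep a set of still-possible tiles per cell, repeatedly collapse the most constrained cell to one weighted-random tile, and propagate the adjacency constraints outward. A toy "simple tiled" sketch (tiles, weights, and adjacency rules here are all made up; the real project is considerably richer):

    import random

    # Made-up tileset: weights, and which tiles may sit next to which.
    TILES = {"sea": 1, "coast": 1, "land": 2}
    COMPATIBLE = {
        "sea":   {"sea", "coast"},
        "coast": {"sea", "coast", "land"},
        "land":  {"coast", "land"},
    }
    W, H = 8, 4
    # Every cell starts with every tile still possible.
    grid = [[set(TILES) for _ in range(W)] for _ in range(H)]

    def neighbors(x, y):
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < W and 0 <= ny < H:
                yield nx, ny

    def propagate(x, y):
        # Remove neighbor options that no longer fit next to anything here.
        stack = [(x, y)]
        while stack:
            cx, cy = stack.pop()
            allowed = set().union(*(COMPATIBLE[t] for t in grid[cy][cx]))
            for nx, ny in neighbors(cx, cy):
                reduced = grid[ny][nx] & allowed
                if reduced != grid[ny][nx]:
                    grid[ny][nx] = reduced
                    stack.append((nx, ny))

    while True:
        # "Observe" the most constrained undecided cell (lowest entropy).
        open_cells = [(x, y) for y in range(H) for x in range(W)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            break
        x, y = min(open_cells, key=lambda c: len(grid[c[1]][c[0]]))
        options = sorted(grid[y][x])
        weights = [TILES[t] for t in options]
        grid[y][x] = {random.choices(options, weights=weights)[0]}  # collapse
        propagate(x, y)

    for row in grid:
        print(" ".join(next(iter(cell)) for cell in row))

If propagation ever empties a cell's option set, you've hit a contradiction; real implementations restart or backtrack, which this toy tileset conveniently never needs because "coast" is compatible with everything.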
It’s interesting that this article uses the phrase “you feed it the vibe you’re going for” about 5 years before “vibe coding” became a common term.
The idea of a "vibe" was around long before the term "vibe coding". It's not that much more surprising to see "vibe" used before "vibe coding" than it would be to see "coding" used before "vibe coding".
I’ve always wondered how this compares to the 1999 algorithm Texture Synthesis by Non-parametric Sampling [1]. The results look very similar to my eye. Implementation here [2]. Has anyone tried both?
[1] https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/p...
[2] https://github.com/goldbema/TextureSynthesis
That's pretty satisfying to watch.