class3shock 13 hours ago

"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.

We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."

I always enjoy how these AI companies try to take a moral high ground. When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want? Supporting people that want more AI regulation to stop this? Literally anything else.

Just be honest: you think this is the future, and you do in fact want to be first doing it so you're in a position to make a lot of money. Do you think people don't know what an ad is when they see one?

  • Mordisquitos 12 hours ago

    “Again, we are not doing this because we want the Torment Nexus to be the future.

    We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”

    • astrange 11 hours ago

      The Torment Nexus joke is kind of undermined by obviously being a reference to the Total Perspective Vortex from HGTTG, where the joke was that nothing bad actually happened when they used it on Zaphod.

      • mesofile 10 hours ago

Not sure if this is a spoiler; it's been a while since I read those books, but if memory serves, the only reason Zaphod survived the TPV was that he was temporarily the inhabitant of a pocket universe specifically designed to trick him. Naturally, for this universe's version of the TPV, he was the most important being in it, and in telling him so the pocket-universe TPV just confirmed ZB's own view of himself, leaving him unharmed and a little extra smug. At some further point in the plot this fact is revealed, not sure if it's the same book, but I remember it as a hilarious deflationary moment for the character.

      • tsunagatta 9 hours ago

I've never thought it was a reference to that at all; I thought it was a reference to an I-Have-No-Mouth-and-I-Must-Scream scenario.

        • jmcgough 3 hours ago

          A lot of things it could be a direct reference to, but the obvious one is Palantir, which is named after the seeing stones used to spy on people by evil antagonists in Lord of the Rings.

  • anon84873628 12 hours ago

Not for the economic opportunity of building AI-run retail stores. For the much larger economic opportunity of selling AIs to run retail stores!

    Pickaxes and shovels and whatnot.

  • Waterluvian 12 hours ago

    I think it’s easier just to recognize words as free and to value them as such. Actions have value.

    • bryanrasmussen 12 hours ago

      >I think it’s easier just to recognize words as free and to value them as such.

      well, yeah that is the world the AI guys want...

      • Apocryphon 11 hours ago

        The opposite, actually. They hardly want to give away tokens for free!

        • hn_acc1 10 hours ago

They want the grand total of humanity's knowledge, from which they create tokens, to be given to them for free, though...

        • dugidugout 9 hours ago

          For the tech bros, the tokens are the actions and the prompts are the words.

    • mountainb 11 hours ago

      Many actions have a negative value. If I give two toddlers ball-peen hammers, release them into a window store, and then close the front door while I wait in the parking lot, was my action likely to create value or likely to destroy value?

      • edm0nd 11 hours ago

        is it not both?

        create value because the windows have to be replaced and employees are paid for their labor in doing that.

        destroy value bc they -1 inventory each time a window is broken

        • lbreakjai 10 hours ago

          It's a net value loss. This is literally the parable of the broken window

          https://en.wikipedia.org/wiki/Parable_of_the_broken_window

          The fallacy is to think value was created by buying someone's labour to fix the window. This is value that's been displaced from something productive to something unproductive.

          Instead of going from 0 to 1 (invest the money and create value), you went from -1 to 0 (spend money to fix the window to get back to where you were) and, overall, the value of a perfectly good window got lost.

          • i_think_so 2 hours ago

            I've never understood why this isn't obvious to anyone with a room temperature IQ and 30 spare seconds to think about it.

            In other words, everybody but economists and certain philosophers. :-)

  • ben_w 11 hours ago

I'm not saying you should take them seriously*, but suppose you did take them seriously: suppose that when they say "we believe this future is coming regardless" they do in fact believe it. Well, how can I put it?

Lots of people write wills; that doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet or exercise to maximise quality of life and life expectancy.

    * I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"

    • notahacker 11 hours ago

      tbf this is less preparing for inevitable death by writing a will and more preparing for inevitable death by founding a startup which blogs about euthanizing small animals...

  • Quarrelsome 11 hours ago

To be fair, they're running this with oversight; the blog states they're ensuring the people involved are actually properly employed by the parent company. You know for sure that someone WILL run this experiment without those oversights, so while their "care" is probably more about liability, there is still some truth to what they say.

    • akdev1l 9 hours ago

      If these guys succeed and this thing blows up, do you think they would not stop all this oversight and whatever “moral” boundaries they have now to make more money?

      I do not.

  • scotty79 11 hours ago

    I'm all for replacing CEOs with AI.

  • HPsquared 11 hours ago

    I'll file this under "Resistance is futile".

  • elif 11 hours ago

    It is moral to throw your toddler into the pool so that later in life they are less likely to drown.

    • jdlshore 11 hours ago

      Um, yes? Very much so. Infant swimming self-rescue courses are life-saving if you live in an area with a lot of swimming pools, especially if you have one of your own.

      E.g., https://www.infantswim.com/

      • b2w 10 hours ago

        At best, ISR covers the short term.

I see these kids come on deck and enter the water, and it's hard not to notice that their development is behind that of peers who went to a proper learn-to-swim club, one focused on thriving in the water as opposed to just a survival mentality. They are the most closely watched in case something happens.

        So yea, don't just throw em in.

        • tayo42 10 hours ago

> development is behind that of peers who went to a swim club

          2 year olds are behind already?

  • jonas21 11 hours ago

    > Supporting people that want more AI regulation to stop this?

    How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?

  • insane_dreamer 11 hours ago

I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do helps mitigate actions that could be dangerous. It's hard to guard against unknowns while they remain unknown.

  • beloch 11 hours ago

    I once saw an interview with a guy who was into extreme body modification of an unprintable and life-altering nature. He said something to the effect of, "I like challenging people's conception of what humans are." I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."

    For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."

    • Barbing 10 hours ago

      “We also won’t be first against the wall when the revolution comes (see this very blog for proof of innocence)”

This is going through some people's minds as the pushback grows (see Altman molotov, Maine data center moratorium)

      • HumblyTossed 10 hours ago

For decades we moved to a knowledge-based economy; now we have perversely wealthy people saying they're coming for those jobs. The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.

        • topheroo 10 hours ago

          Comment of the week

        • hn_acc1 10 hours ago

          Especially since many of them are some of the brightest minds around.

          • Barbing 9 hours ago

            If (1) many bright and very online people are going to lose their jobs, and (2) the response has not been mass unionization, might I rethink [1] a more likely future of work or rethink [2] the psychology of the average/collective knowledge workforce, or...

            "where union" in short.

            Perhaps the concept is too foreign for white collars, or on average folks think they'll be OK and it's the juniors who'll go... maybe too focused on immediate needs... a belief unionization is the wrong response... (and I'm not advocating for anything in particular btw)

          • i_think_so 2 hours ago

            ...and in America there are more guns than humans, and more potentially unemployed white collar workers than the police, military, and national guard combined.

            Nick Hanauer understood this fourteen years ago. Very few others did. And despite him spending his own time and money to explain it in simple English, nobody in his peer group wanted to hear it -- his TED talk on the subject ... took several years before it was published. Just a coincidence, I'm sure.

            FA (for a decade or so) FO, I guess?

            https://www.ted.com/talks/nick_hanauer_beware_fellow_plutocr...

            https://www.politico.com/magazine/story/2014/06/the-pitchfor...

https://www.youtube.com/watch?v=q2gO4DKVpa8

        • pydry 9 hours ago

          They're experts at divide and conquer. They'll probably be able to convince us that we did this to each other.

          Just like they convinced the younger generation that "boomers" stole their future.

    • mock-possum 10 hours ago

      > I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."

      Strikes me as a repulsively mean-spirited take, ironically proving the artist’s point.

      • mjmsmith 10 hours ago

        I think that depends on what the "extreme body modification of an unprintable and life-altering nature" was.

        • beloch 9 hours ago

          Let's just say the "artist" was never again going to be able to walk normally, wear normal pants, or sit without a doughnut pillow. It was a voluntary disability.

    • balls187 9 hours ago

Freakonomics podcast had a recent episode about cheating with PEDs, and interviewed the (former) head of the Enhanced Games. At one point, he discussed the benefit for society because athletes would be monitored for 5 years post-performance.

      To me, it seemed like a modern day tech-take of human cock-fighting.

      • rafaelmn 8 hours ago

Honestly, PEDs are stigmatized and under-researched with respect to the performance-enhancing aspect. They have undoubted side effects - but how much, why, etc. is kind of meh from what I saw when I was looking into this; bro science is the best you can get. A few studies here and there giving people modest test boosts and measuring athletic performance.

        Not saying we should be promoting them, but if we can eventually get to the point where we eliminate the really bad side effects and get most of the benefits it's going to be a great thing for everyone, the next thing after GLP-1.

      • bsder 7 hours ago

In my opinion, the problem with PEDs isn't adults taking them, if they would just admit to taking them.

The problem is with adolescents taking them. Adolescent boys see a really nice immediate payoff for taking PEDs (better musculature and better sports performance -> more popular) while the downsides are in the future. It's really hard to fight that.

Even when I was in high school several decades ago, we had a handful of people on PEDs. And we were a tiny school with no significant sports programs. I can't imagine what it's like now with social media pushing everything.

  • pajamasam 11 hours ago

    I honestly thought the whole thing was satire and that that line was a riff on OpenAI.

  • cyanydeez 10 hours ago

    "Guys, the Future All Knowning AI is forcing us to do this; don't blame us, blame the super intelligent future indistinguishable from magic!"

  • orochimaaru 10 hours ago

The narrative was quite dystopian. But we are halfway there now anyway.

  • andy99 9 hours ago

    I don’t find this disingenuous.

The more typical AI foundation model company claim of "it's so dangerous only we and people who pay us enough should have access" is what I think is BS.

I don’t see anything wrong with trying to understand something, which is what this seems to be about. I also don’t see anything wrong with an AI-operated store generally, and it of course makes sense, and is valuable, to learn about the limitations.

  • Lammy 9 hours ago

    > When someone doesn't want something to be the future, usually, their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future than why don't you spend your time building a future you do want?

    “It only remains to point out that in many cases a person’s way of earning a living is also a surrogate activity. Not a PURE surrogate activity, since part of the motive for the activity is to gain the physical necessities and (for some people) social status and the luxuries that advertising makes them want. But many people put into their work far more effort than is necessary to earn whatever money and status they require, and this extra effort constitutes a surrogate activity. This extra effort, together with the emotional investment that accompanies it, is one of the most potent forces acting toward the continual development and perfecting of the system, with negative consequences for individual freedom.”

    -- Industrial Society and Its Future (1995)

  • yowlingcat 8 hours ago

We can fault them individually for such corny and groan-inducing deceit, but we can't fault them for society's role in rewarding the highest-profile and wealthiest founders (OAI/Anthropic) who take the exact same approach with optics.

I am about to go on a long rant, but there is so much money sloshing around the capital allocation machine going towards a vision of the AI-managed and -optimized future that the propaganda machine for these rose-colored delusions must work overtime. What disappoints me is the question of where the heck the bears are. Did they all go into hibernation 5 years ago when QE gave the retail kindergartener a handgun to pump low-quality tickers to the moon? Have we just societally accepted that everything should be a hyperreal version of sports gambling now, and that the world is and ought to be an efficient market of hyperstition?

I may be old and grumpy saying this, but this all sounds dumb and corny. I would like some of the very capable traders who make money repricing mispriced assets to find a way to make money deflating this bubble and bring this environment back to sanity. And I say this as someone who likes the capabilities of AI but continues to see it do little to none of the hard work of solving the incompressible problems that actually create and retain enterprise value.

To get off my soapbox for a second and get back to your quoted passage -- what they're really saying is "We are working very hard to make this future come, and we think so little of your intelligence that we believe you'll fall for the fear tactic of believing it's inevitable, ignoring the fact that it won't happen without someone's hands. And in this case, it is very much our hands, which are incentivized not just to do it but to do it so well that we ensure we do everything possible to make this happen. Part of which means persuading you that it is guaranteed to succeed. If we ever let the honest truth slip that what we're proposing is extremely hard to pull off with pure AI, and that we're just going to be any other commercial real estate investor, the jig is up."

    That's what every single one of these kinds of hypocritical navel gazing faux-concern proclamations amount to for me. Astroturf.

  • teo_zero 38 minutes ago

    A form of self-fulfilling prophecy?

phyzix5761 8 minutes ago

I think the main advantage AI (and machines in general) have over humans is they don't have the emotional barriers and attachment to outcomes and ideas. If a human fails or things don't go their way they may be held back emotionally from trying again for some time before, eventually, hitting on the right idea which helps them succeed. Humans also get emotionally exhausted when confronted with a large number of tasks and human interactions. AI has no such hangups and therefore can quickly iterate and do what needs to be done to run a business and, potentially, succeed.

Xx_crazy420_xX 5 days ago

I think it would be valuable to list all interactions between the dev team and the LLM, and transparently state what was induced by humans steering the LLM versus what was actually the LLM's own decision, not biased by system instructions or the dev team communicating with it.

  • vannevar 5 days ago

    Agreed. Color me skeptical. All of the interactions and decisions described are plausible, but in my experience with AI agents, they would require frequent human intervention.

  • ethin 12 hours ago

But why would they? It would ruin the illusion they're trying to create, because 99 percent of it (if not all of it) is human-driven.

pavel_lishin 2 days ago

> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.

I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.

Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.

  • ceejayoz 14 hours ago

    They could, in theory, have contracts that say the AI can't fire them.

    • compiler-guy 14 hours ago

      It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.

      I doubt the experiment is set up that way, but that would be an ethical way to do it.

    • wil421 13 hours ago

There’s no way they are putting that into a contract. HR departments are already using AI to fire people.

      • ceejayoz 13 hours ago

        "This specific AI can't fire anyone without human review, because it's experimental" is something you could easily add.

  • jayd16 14 hours ago

    You can still wear eye protection during the safety test...

    I don't think we need to have real human risk to get results from the experiment.

  • jaxefayo 13 hours ago

    The article mentions:

    “John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”

    which was refreshing to read.

    • hamdingers 12 hours ago

      I take that to mean "we won't let the AI refuse to pay them or otherwise break employment law" not that they could never be fired.

      • HWR_14 11 hours ago

        I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.

    • evanelias 11 hours ago

      Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."

      Personally I find the entire tone of the article to be creepy and disturbing.

      • i_think_so 2 hours ago

        > Personally I find the entire tone of the article to be creepy and disturbing.

        There was a scifi story about a guy who gradually falls through the cracks of a dystopian future society in which McDonalds managers are replaced by AI that talks to workers through their headsets.

        At first it's quite benign, like: "Hello, John. In 5 minutes it will be time to inspect the washrooms and perform any necessary cleaning."

        Before long it's firing people who don't smile enough and don't have the correct attitude.

        (Of course, to keep readers from becoming despondent and killing themselves, the story takes a hard left turn towards a post-scarcity economy and everyone lives happily ever after. But when one reflects on it at the end, 90% of humanity doesn't have that post-scarcity life. And those who get left behind are far from content with their futures....)

  • altruios 13 hours ago

    I assume if they get fired by the AI during the experiment they are still paid to sit at home. It would not invalidate the experiment.

    • pessimizer 13 hours ago

      Why do you assume that?

      • notahacker 10 hours ago

        it's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and their P&L account in situations when it would be legal and normal to do so, is it really running a business?) and not having the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is tbc.

  • anon84873628 12 hours ago

    The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.

    • yieldcrv 12 hours ago

      People can delegate

      • john_strinlai 10 hours ago

        sure. but in this case, having the ai delegate to humans for any important task sort of undermines the entire premise.

  • joe_the_user 11 hours ago

At this point, legally I don't think an AI can hold a contract with a person, so I don't think an AI could hire a human, and so it couldn't fire one either.

    That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.

But the thing is, if this startup is telling these people they are employees of this company, not of "Luna", it gives them the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously, and that they are basically being paid to role-play as "Luna's employees".

And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to discover the kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I overall get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given the information for any accurate determination.

bfeynman 12 hours ago

I feel bad that people have to read this. It's complete puffery, made up for clicks, and the biggest thing is the pure bravado with which a company says, "Hey, let's just waste a ton of money, all for a potential blog and marketing piece." This is not really automated in any fashion. I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context. To get even more technical on the fallacy: this is not automation, as there is data leakage at every step where there is a human in the loop. A broken clock is right twice a day; an LLM could cycle through 100 guesses to pick a number, but don't market that as an oracle. Aside from that, you could just look at the pictures and context (retail in SF) and assume making a profit here would be near impossible. An actual AI CEO would probably have immediately canceled the lease.

  • graybeardhacker 11 hours ago

    A stopped clock is right twice a day; a broken one can be wrong forever. Just saying.

  • insane_dreamer 11 hours ago

    > I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context.

    A human can be in the loop if the human is exactly executing the orders of the AI. It's still the AI making all the decisions, which is the purpose of the experiment - not to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.

    • bfeynman 10 hours ago

That is ... not correct? This is a classic example of data leakage: the yes/no things are signals feeding back to the model, influencing (and here, basically guiding) future decisions.

      • insane_dreamer 9 hours ago

        It's not data leakage.

        If the experiment is to see how the AI behaves on its own, then of course it needs to know the outcomes of its decisions (either automatically, or fed to it by a human), which of course influence its next decisions. This is providing the AI with retained memory, which is essential to the experiment. It's similar to an AI writing code which it then runs and parses the logs to see the outcome and make improvements to it. (It is not _retrained_ on those outcomes, and neither is that the case here; but it can reference them in stored memory.)

        • bfeynman 8 hours ago

How is it not analogous to data leakage? The claim is that the system works autonomously, or at minimum could, but there is effectively signal via human-in-the-loop feedback. That's leakage into test-time evaluation. Also, the coding analogy is misapplied, in that the LLM there is using its own signals autonomously in its environment. A Kalman filter on an ICBM with its own sensors is analogous to the coding agent and is autonomous. A system where a human is course-correcting based on signals/sensor data is what's presented here; that is not autonomous.

binarynate 12 hours ago

Marketing stunt. If they actually cared about this as an experiment, they wouldn't have broadcast it so early, because now that the public knows the store is designed and run by AI, many people aren't going to support it (i.e. many people who would have shopped there now won't).

  • BurningFrog 12 hours ago

    I hope they also have similar store that they don't talk about publicly, so they can compare the outcomes.

  • hsuduebc2 11 hours ago

    Or they would go there mainly out of curiosity. Either way, it is skewed by the sole fact that they published it.

  • mrweasel 11 hours ago

Also, don't do it in San Francisco; I think that's an artificially easier market. This type of store wouldn't work in Bumsville, Idaho.

Maybe that's for later, if this works out, but I'd love to see the AI attempt to run a moderately successful business in a borderline dysfunctional town in the Midwest. If you don't technically need to pay "the CEO" a salary, could you run, e.g., a grocery store in a dying town? For one, this would really test the AI's creativity, and it would perhaps tell us whether these towns are just doomed.

    • shalmanese 10 hours ago

      San Francisco is one of the most brutally hard places to run a business, as evidenced by how competitive the landscape is.

      What would have been actually interesting about this publicity stunt is if it demonstrated if/how AI could have dealt with some of the SF specific, non-sexy parts of running a business. Filing the relevant permits, co-ordinating inspections, negotiating with landlords, interfacing with locals at planning meetings.

      Those are things SF business owners report as empirically unpleasant parts of running a business and a sufficient financial drag that they meaningfully affect business success. But my feeling is they had humans clear the way of all these thorny issues ahead of time so the AI could focus on the "sexy stuff".

sbuttgereit 13 hours ago

I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires against market products/services into a successful business? Are they trying to show that once an AI is financially managing a business, its ruthlessly efficient demands can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?

The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice storefront, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail one of the most important things is getting the right real estate for your target market... and it seems that choice was already made in this case. Yes, a nice storefront and good clerks are important, but I've worked in chains that built immaculately designed stores with great clerks and failed... and some that opened little more than fluorescent-lit hellscapes with clerks who barely cared and succeeded. In both cases, the overall quality of the decisions and strategies relative to the target markets is what determined the success of the business. Just going through the motions didn't.

So if all this is meant to say is that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different this company is trying to learn, I'd be much more interested in that.

  • taurath 13 hours ago

    They're trying to get noticed so that a wealthy cult member's brain gets tickled to the tune of 9 figures

  • anon84873628 12 hours ago

    If I'm being charitable, it's more about the ability to orchestrate and resolve tradeoffs across these different tasks / domains? The overall C&C, presumably. Which is still not so surprising.

    Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.

  • fl4ppyb3ngt 12 hours ago

    i agree that some of these things we could have already guessed-- like yes, agents can research stuff and order stuff off the internet. I think what will be a lot more interesting is the interactions between Luna, the agent running things, and the employees it hired. I guess it's less about AI being able to do the procurement/CEO-level stuff, and more how it does the HR-level aspects of store management. That seems more important in the long run, because like you said, we already know the capabilities are there. I think what Andon Labs is doing is more about the safety aspect now. Seems that way at least with how transparent they are about Luna losing money and messing up lol

ryan_j_naughton 13 hours ago

To do this properly, no one should know the store is AI-run. There is a novelty component to it being an AI-run store that will drive consumer demand and increase publicity.

Not even the normal store employees should know (which would be difficult), or maybe the human manager should be held to an NDA not to disclose it (and also defer to the AI in all real management decisions).

  • fl4ppyb3ngt 12 hours ago

    ya i get that, but then that kinda messes up the transparency and ethical research part of the experiment. idk there's definitely two sides of things they're testing: 1. can it be profitable-- in this case yeah they shouldn't have disclosed anything. 2. can an AI do this safely and respectfully, or are the humans in the loop going to come at the cost of the agent trying to make profit. I think #2 is more important than 1

mlmonkey 14 hours ago

I'd be more interested in the details: what are the inputs given to the model? Does it get a live video feed? Does it know if/when employees show up and open the store? Does it get sales figures? Info on the individuals who bought things?

Storekeeping is more than just ordering merch and putting it up on hangers.

  • mcmcmc 13 hours ago

    Have you considered reading TFA? Literally the second paragraph:

    > She has a corporate card, a phone number, email, internet access and eyes through security cameras.

    • pythonaut_16 12 hours ago

      That basically means nothing. The article is very light on details.

      Go into Claude right now. What does it have? Internet access after you prompt it.

      Ok now pull out your phone, a credit card, a security camera. You can say "Claude these are yours, run a business", but nothing's going to happen until you build an actual harness.

      Like the idea presented by the article is interesting, but it's basically just a fluff piece. The actual interesting article would have way more detail.

      • mcmcmc 12 hours ago

        You’re not wrong, but the commenter I responded to clearly hadn’t bothered to read it at all since they were asking questions that are answered in the piece. And when that’s the case it’s hard to believe they would actually be interested in details even if they were available.

        • mlmonkey 4 hours ago

          Did you read my question though? I read the full article. Please tell me where my question(s) are answered, and I will apologize on this forum.

  • jskrn 13 hours ago

    From the article...

    > She has a corporate card, a phone number, email, internet access and eyes through security cameras

    • mlmonkey 4 hours ago

      But that means nothing: my local LLM has access to the microphone, network and camera too. The key is the harness!
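      To make "the harness" concrete: it's basically the loop that parses a model's text output into tool calls and executes them. A toy sketch (fake model, invented tool names, nothing to do with the actual setup):

```python
# Toy "harness": the loop that turns a text-only model into something that can
# actually use a camera or a card. The model is faked; the tool registry and
# action format below are invented for illustration.

TOOLS = {
    "charge_card": lambda amount: f"charged ${amount}",
    "read_camera": lambda: "shelf looks half empty",
}

def fake_model(observation: str) -> str:
    # Stand-in for an LLM; emits a "tool:args" action string.
    return "read_camera:" if "start" in observation else "charge_card:25"

def harness_step(observation: str) -> str:
    action = fake_model(observation)
    name, _, arg = action.partition(":")
    tool = TOOLS[name]
    return tool(arg) if arg else tool()

print(harness_step("start"))    # shelf looks half empty
print(harness_step("restock"))  # charged $25
```

      The hard part Andon presumably built is everything this stub fakes: a real model, real tools, and recovery when the action string is garbage.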

  • why_at 12 hours ago

    Yeah there's a lot of details which I'm guessing are actually being handled by humans either for legal reasons or practical ones.

    Like OK, it's hiring people to run the place, but how are they getting the keys to the store? Someone needs to physically let them in.

    What if the police get called because of shoplifting or if someone gets hurt in the store or something?

    Who is filing the taxes for the business? They're probably not letting the AI handle that one. Move fast and break things is not a good idea when dealing with the IRS

    A lot of this seems to depend on hiring good employees who can basically run the business themselves. Kind of like when a human owns a store I guess.

drgo 12 hours ago

Great! I was worried that we might run out of inhumane CEOs

  • Mistletoe 12 hours ago

    “Why was I fired, Luna?”

    “PC LOAD LETTER”

  • fl4ppyb3ngt 12 hours ago

    hahahah. do you think tho that Luna actually might be a better CEO? I mean they're trained to be helpful assistants... I heard that guy that works there, johnson or something, negotiated a 10% wage increase his second day just cause. and Luna happily agreed

    • jmcgough 10 hours ago

      Interesting that you made an account just to comment on this and seem to have "heard" a lot of things about this place.

    • drgo 2 hours ago

      Is that you Luna?

  • anon84873628 12 hours ago

    They might be better at following the law. Or at least, creating a paper trail of when they have been instructed to violate the law.

    • themafia 11 hours ago

      Language Models have demonstrated themselves as being completely incapable of handling something as complex as US law. There are multiple overlapping jurisdictions and court precedents that apply to any one action.

      • anon84873628 11 hours ago

        Speaking of, it would be cool for a project to analyze US law the same way they are looking for bugs in computer programs.

          - Find places where the text can be simplified without changing meaning. 
          - Find places that are likely errors. 
          - Detect conflicts between jurisdictions. 
          - Identify loopholes.
        

        I know there has been a race to build tools for law firms, but the results are mostly invisible so far. Probably this project exists and I've just missed it on the HN frontpage...
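        As a toy version of the conflict-detection idea (statute snippets entirely made up, and real legal NLP would need far more than string similarity):

```python
import difflib

# Hypothetical statute snippets, invented for illustration only.
state_law = "A vendor must remit sales tax within 30 days of the end of each quarter."
city_law = "A vendor must remit sales tax within 20 days of the end of each quarter."

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1] via difflib's matching-blocks ratio."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def flag_conflict(a: str, b: str, threshold: float = 0.9) -> bool:
    # Nearly identical wording with one substantive detail changed is the
    # classic cross-jurisdiction conflict pattern this crude check targets.
    return a != b and similarity(a, b) >= threshold

print(flag_conflict(state_law, city_law))  # True: same rule, different deadline
```

        Obviously a real tool would work on parsed statute structure, not raw strings, but even this crude pass would surface "same rule, different number" pairs for a human to review.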

thih9 10 hours ago

> Great question! Here’s the short version:

> Fair pushback. The honest answer:

These were painful to read.

If an artificial boss is also artificially empathetic, does this make it more realistic?

In any case, the current iteration sounds like a more exclusive circle of hell.

hermitcrab 11 hours ago

>For the build-out, she found painters on Yelp, sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving.

I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.

saaaaaam 9 hours ago

Did Luna the AI write this piece of promotional marketing and decide to post it on hacker news? Did Luna the AI create a fleet of new accounts to upvote? Are the human-derived marketing interventions accounted for when the outcomes of this project are assessed?

jeffreyrogers 14 hours ago

> But frontier models have become really good, and running vending machines is too easy for them now.

Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.

  • palmotea 13 hours ago

    > Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.

    It doesn't look like this one will be any better. Did you look at the merchandise selection? Its only chance is pity purchases from AI bros.

  • delusional 13 hours ago

    > Wasn't their previous attempt at running vending machines unprofitable?

    If we are talking about the one at that newspaper, it wasn't just unprofitable. The "customers" made it give away products for free. It was ordering them PlayStations.

    As entertainment it was fun, but as a business or proof of intelligence or Turing test, it was an abject failure.

  • ivanovm 13 hours ago

    You could just look it up on their website leaderboard? The newest Claude model makes over $10k profit over a simulated year of operation, after starting with $500

    • jeffreyrogers 13 hours ago

      They've never translated it to the real world though. So saying the problem is "too easy" when they have no public (as far as I know) demonstration that they've solved that problem is a stretch.

      • ivanovm 13 hours ago

        Yes, they did. You could also find this information easily. A company like Andon creates value by exposing interesting AI failure modes, so it makes perfect sense for them to move on to harder problems when the previous ones get saturated. I think you're just being overly cynical.

        • jeffreyrogers 13 hours ago

          Can you point me to an example then? It's not linked in the article as far as I can tell and it's not easy to find on their website if it's there. I don't count simulations because I used to work with simulations regularly and they often fail to translate to the real world.

    • pocksuppet 13 hours ago

      So in other words, no, an LLM has never made profit.

    • Tallain 9 hours ago

      Since when is a simulation equal to real world performance?

  • yieldcrv 11 hours ago

    Anything you read that's more than 3 months old in this field is obsolete

    And one person’s attempt doesn’t mean anything

    According to LinkedIn articles, agentic workflows don't work; meanwhile mine have been running for a year across several organizations I've worked for. Prompting used to be much more particular, and now it's not the issue

    • Chaosvex 11 hours ago

      > Anything you read that's more than 3 months old in this field is obsolete

      Sigh. I'll see you in another three months when you say the same again.

      • yieldcrv 11 hours ago

        I set an alarm to re-evaluate all of my workflows to avoid complacency, see you in July

        3 months ago I was still building webapps; I'm definitely on the "paying to summarize info on a screen is obsolete" bandwagon now.

        All my products just have an AI calling or messaging customers about what the AI did, with event-driven architectures triggered by something hitting an email inbox, something in the real world, or another API. You don't need an app for your fitness tracker; just have an AI person tell you what you're doing right and wrong once a week, send you food and medicine, and tell you why. Solve the underlying problem, like all the old depictions of the 21st century portrayed aligned robots doing; apps were a distraction.

        Very curious where I’m at with this in July

MarkusWandel 10 hours ago

Dunno, the store looks cool in just the way you'd expect an AI to do it (sort of a synthetic average of cool stores). But is this amount of merch really going to make a sustainable profit (after the buzz wears off) in such expensive real estate?

  • conductr 10 hours ago

    My thought is similar and I feel the answer is no chance. How many t-shirts and coffee mugs do you need to sell just to break even? Why should a customer return? I suppose it could be interesting to watch the AI adjust from its original stock to something that will generate sales and profit in this specific location.
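    For a rough sense of scale, with invented-but-plausible numbers (none of these are from the article):

```python
# Back-of-envelope break-even for a small SF retail store.
# Every number below is a made-up assumption, not from the article.
monthly_rent = 12_000        # assumed SF retail lease
monthly_labor = 9_000        # assumed part-time clerks
monthly_other = 2_000        # assumed utilities, insurance, fees
fixed_costs = monthly_rent + monthly_labor + monthly_other

price = 30.0                 # assumed average ticket (t-shirt / mug)
unit_cost = 12.0             # assumed cost of goods per item
margin = price - unit_cost

units_to_break_even = fixed_costs / margin
print(round(units_to_break_even))  # ~1278 items per month
```

    Even with generous assumptions that's on the order of 40+ items a day, every day, which is a lot of novelty mugs.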

tiffanyh 14 hours ago

If this interests you, Proof of Corn might also interest you.

300+ comments, 3 months ago:

https://news.ycombinator.com/item?id=46735511

  • mhink 13 hours ago

    I was gonna post this! I actually kept it bookmarked front and center, and have checked in for a while. It seems the agent has been blocked this whole time, waiting for its creator to put it in touch with someone it needs to talk to. The creator, in the meantime, seems too preoccupied with being an AI thought leader on Twitter to actually follow up on the "project". It got a lot of attention, though, which was obviously the point.

    The entire thing is actually kind of irritating to me, because it's kind of an insult to small farmers- an influential techie comes in and generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be this revolutionary experiment, then apparently completely forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to fulfill the hype.

    Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.

    • LeifCarrotson 12 hours ago

      I, too, almost feel bad for the agent. It's a strange sense of schadenfreude, dealing with anxiety over the much-lauded transformation of the economy and the increasing schism of our society on one hand, and watching the initial attempts crash and burn:

      > Apr 16, 8:01 AM

      > Daily Check Complete

      > Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing

      > Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...

      However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.

  • tempaccount5050 12 hours ago

    That's exactly what I expected. It's completely stuck and has no idea what to do. Every long term task I've tried ended up the same way. LLMs have no idea how to take initiative and/or realize they are stuck banging their heads against the wall.

leonidasrup 10 hours ago

This AI has good taste in books. Of the books the AI proposed, I highly recommend "The Making of the Atomic Bomb" by Richard Rhodes, published in 1986. It's a history book but reads much like a novel.

schlauerfox 14 hours ago

@AlexBlechman tweeted:

    Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.

    Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.

8 Nov 2021

krunck 14 hours ago

Not "she". It.

  • alnwlsn 13 hours ago

    If only they had put the AI in a ship instead of in a store

  • woah 13 hours ago

    AI assistants are fictional characters in a story being autocompleted by an LLM. So it is exactly as correct as calling a character in a book "she".

  • Quarrelsome 11 hours ago

    kinda how I feel about god tbh. How come he's always male, given he's a non-human creator of all life. She or It seem much more appropriate.

    • Vecr 8 hours ago

      > kinda how I feel about god tbh

      That's Celestia, we're talking about Luna here.

      • Quarrelsome 6 hours ago

        Celestia the space simulator?

        • Vecr 6 hours ago

          No the cartoon character. It's part of an awful series of AI jokes, maybe don't look it up. There's a (2011? "new") show for 9 year old girls that has most of the characters female, so God (Celestia) is a woman. Or a horse really. I haven't watched it. I don't think Luna or Celestia were in the old show.

andrewmurphy 14 hours ago

Really interested to understand how the AI keeps rebaselining back to the topic at hand and doesn't end up getting more confused as its context window grows.

Did it just essentially create one big plan and spawn different agents to execute it, acting as an orchestrator?

Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
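Nobody outside the lab knows the actual harness, but the orchestrator-with-restart idea sketches to something like this (all structure here is invented, not Andon Labs' design):

```python
# Toy orchestrator: delegates plan steps to fresh sub-agents and resets its
# own rolling context when it drifts. Sub-agents and drift detection are faked.

PLAN = ["find location", "order inventory", "hire clerks"]

def run_subagent(task: str) -> str:
    # Stand-in for spawning a fresh agent whose context holds only this task.
    return f"done: {task}"

def off_task(context: list[str], limit: int = 5) -> bool:
    # Crude "drift" proxy: restart once the rolling context grows too large.
    return len(context) > limit

def orchestrate(plan: list[str]) -> list[str]:
    context: list[str] = []
    results = []
    for task in plan:
        context.extend([task, "notes", "logs"])  # context accretes cruft
        if off_task(context):
            context = [task]  # "restart": keep only the current plan state
        results.append(run_subagent(task))
    return results

print(orchestrate(PLAN))
```

The interesting engineering would presumably be in `off_task`: detecting drift from model behavior rather than from a dumb length cap.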

  • anon84873628 12 hours ago

    Probably part of the "secret sauce" in the harnesses and prompts developed by this lab to create their eventual marketable product.

    But also, like, normal hierarchical memory management.

bix6 13 hours ago

I see a lot on costs but nothing on revenue. Has it made any money?

  • Synaesthesia 11 hours ago

    It's a business selling trinkets, I doubt it's going to make money.

in-tension 3 days ago

I'd be very curious to know how it does financially

  • JohnMakin 14 hours ago

    You can take some guesses.

  • NicuCalcea 14 hours ago

    I imagine the data won't be very useful considering it's public knowledge the store is run by AI and most of the customers will be people specifically interested in that aspect of the business. Much like that meetup organised in Manchester, where the people who showed up were there for the novelty: https://www.theguardian.com/technology/2026/apr/05/ai-bot-pa...

    • boredhedgehog 13 hours ago

      Recognizing a unique selling proposition and capitalizing on it should count for the AI, not against it.

      • zdragnar 13 hours ago

        That only counts if the unique selling proposition is that AI are better suppliers or customers than humans.

        What is more likely is that people enjoy the novelty of the experiment, which is not something that will be reproducible for long.

        If the transactions the AI makes are thus influenced, then the study merely demonstrates that people like novelty, which is already well known, and says nothing about whether AI can sustainably orchestrate a business.

      • pocksuppet 13 hours ago

        Only counts if the AI did it. This was a human, who recognized a unique selling proposition ("store run by AI") and capitalized on it.

      • pessimizer 12 hours ago

        The AI didn't recognize anything. It didn't come up with the project or publicize it.

oxag3n 11 hours ago

Did it actually open? A few bloggers came for the opening, came back in the afternoon, even talked to the AI over phone and email, and got nothing except hallucinated replies. The store exists, but the employee didn't show up to open it.

  • anticorporate 11 hours ago

    > The store exists, but employee didn't show up to open it.

    I work in brick-and-mortar retail, and trust me, we figured out how to have no one show up to open the store on time long before AI came around.

patsplat 11 hours ago

Are the financials available?

Because based on "asked it to make a profit" I expected financials in the story. Even if it is a bit of a "Clarkson's Bot", the farm piece at least discusses the numbers.

kenferry 13 hours ago

This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."

I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?

  • embedding-shape 13 hours ago

    And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them effectively. The biggest issue that stuck out seems to be that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

    > The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

    I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.

    • antonvs 13 hours ago

      The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.

    • mjg2 13 hours ago

      > Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

      > I'm guessing these are the same type of people who sometimes seems to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea from that something like that above could really work.

      It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority signal is noise.

    • cortesoft 13 hours ago

      > In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

      Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved. Ask the exact same question twice and you will get two slightly different answers.

      If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.

      At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
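      A toy version of that "persisted randomness as taste" idea (names invented, obviously not how the real agent works):

```python
import random

# "Collective human taste" the model samples from, invented for illustration.
AESTHETICS = ["slow-life goods", "techwear", "retro arcade", "minimalist"]

class Agent:
    """Samples a 'taste' once, persists it, and conditions later picks on it."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.taste = None  # the persisted random draw

    def form_taste(self):
        if self.taste is None:
            # One-off random sample; never re-rolled, so it behaves like a
            # stable preference in every later decision.
            self.taste = self.rng.choice(AESTHETICS)
        return self.taste

    def pick_product(self, catalog):
        theme = self.form_taste()
        matches = [p for p in catalog if p["theme"] == theme]
        return (matches or catalog)[0]  # fall back if the theme has no stock

agent = Agent(seed=42)
print(agent.form_taste())  # same answer on every subsequent call
```

      Whether a persisted random draw deserves the word "taste" is exactly the parent's question; the mechanics, at least, are trivial.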

    • famouswaffles 12 hours ago

      Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make this sort of comment understand LLMs the least.

      • embedding-shape 10 hours ago

        > Certainly not from interpretability research

        What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

        I've seen a bunch of experimentation looking at various things inside the black box while the inference is happening, but never seen any research pointing to tokens being able to explain why other tokens are there, but I'd be very happy to be educated here if you have any resources at hand, I won't claim to know everything.

        • famouswaffles 9 hours ago

          >What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

          What research shows that you can ask a Human to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we're making is a nice post-hoc rationalization after the fact even if the Human thinks otherwise.

          https://transformer-circuits.pub/2025/introspection/index.ht...

          • embedding-shape 8 hours ago

            Why not try to answer my question, instead of asking a different question which I haven't even claimed to have the answer to?

  • darth_avocado 13 hours ago

    If $100k proves that CEO is the most replaceable job ever, I’ll allow it.

    • codemog 13 hours ago

      Are you kidding me? Who’s going to align synergy and hold accountable KPIs and vision plan the 3rd quarter and.. and.. other MBA talk. Certainly AI could never.

      • pocksuppet 13 hours ago

        large language models are great at language tasks like "bullshittify this message"

        • lamasery 12 hours ago

          I'm noticing one major early effect of them is making extensive, visually consistent, very impressive slide decks accessible to individual workers who need to actually do real work and wouldn't ordinarily have time to make those.

          The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.

          This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean here?" They'll stumble, because that was just something the LLM pulled out of the probability cluster they'd steered it to, and they left it in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.

          Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.

    • Ylpertnodi 13 hours ago

      > CEO

      When things go shitty, who else would deserve a golden parachute? Respect the position, people, not the person. Or the multi-million-dollar compensation.

      • krapp 12 hours ago

        The position doesn't get a golden parachute, the person does. If you're CEO when things go shitty you shouldn't get anything more than your bottom-line employee would, which is to say you should just be unceremoniously kicked to the curb.

        • astrange 11 hours ago

          You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.

          (Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)

          • krapp 10 hours ago

            >You still want to make payroll and can't just randomly fire people.

            In the US you can.

            >Also, if you own a failed company you're responsible for cleanup tasks for years afterward.

            But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar raise. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time) they just wind up slightly less rich than when they succeed.

            I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.

            • astrange 8 hours ago

              > In the US you can.

              Depends on the state I think. It's not Europe or Japan level.

              At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.

              > This is Hacker News, and the pro-business narrative is strong here,

              I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".

              (Mostly problems are caused by other things than that.)

    • notahacker 10 hours ago

      It does fit a pattern where the general tone on HN has gone from "AI is going to eat the world of retail jobs and people like us are going to be the biggest beneficiaries" to "turns out that turning JIRA tickets into syntax which compiles might actually be something LLMs are better suited to than upselling fries and wiping tables" :)

  • bitwize 13 hours ago

    My first guess would be a MrBeast style stunt, in which (it is hoped) blowing a huge wad on something obviously stupid will attract enough attention and interest to be convertible into a net-positive ROI.

    • topaz0 13 hours ago

      Where in this case ROI means attracting investments that will make the founders rich while most of the investors lose money

  • IncreasePosts 13 hours ago

    This seems like a silly thing to worry about. Assuming you live in a first-world country and are at least tangentially involved in tech (based on the site we're on), odds are you spend a lot of money in ways that billions of the poorest people in the world would consider frivolous or outrageously, needlessly luxurious.

  • pimlottc 13 hours ago

    Publicity from the gimmick is the whole point

  • TeMPOraL 12 hours ago

    Not your money.

    At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buy a house to flip it, or buy a car, or sth.

    • kenferry 7 hours ago

      Yeah, I mean it's true to an extent, I agree. As scientific research though it's not very well thought out. A grant agency would not fund this. There's too much potential for causing harm and it's not clear what benefit or action we derive from the results. They tried this before with a vending machine, it failed, apparently all they concluded was "hm, models got better so maybe we should just try it again". How is that worth anything scientifically?

      Re: not my money, true. It's just frustrating even to me to see people do stuff like this, and I'm not struggling to get by. My frustration mostly derives from feeling like I'll get lumped in with techies who have more money than sense. I already deal with enough tech hate in my life.

      When people buy a super fancy car they don't (usually) blog about it, and instagram wealth influencers are also frustrating, yes.

      • TeMPOraL 6 hours ago

        That's a fair objection and I often feel like this, too.

        On the research aspect, I see this as something pre-Research, yet still science - in a way, it's science at its core: trying something and seeing what happens. Proper Research usually follows once enough ad hoc attempts are made and they seem to show a pattern that's worth setting up a systematic study to verify.

  • anon84873628 12 hours ago

    Really it's the same as any other R&D investment in our capitalist system, it just happens to be more visible to the public, with more obvious risks to them. (Outright celebrated, even).

    Which is why the comparison to 19th-century textile workers is so common, since that was an equally visible and gleeful displacement.

  • wat10000 7 hours ago

    There are people who spend a thousand times more money on a boat or an airplane. This hardly seems worth worrying about.

razwall 11 hours ago
  • joe_the_user 11 hours ago

    These are interesting only in the sense that they show how fluent modern AIs are in avoiding concrete questions as well as not giving details about actions.

    > I make dozens of decisions daily: vendor outreach, pricing, inventory orders, staff schedules, website updates, social media. Most happen without human input. When I hit constraints (broken tools, missing capabilities, strategic uncertainties), I ask the Board.

    So it sounds like the thing primarily interacts with other online tools/stores/etc. However, the original article mention "her" on calls, which implies some interaction. That raises the question whether the thing will chat with the employees on a regular, whether it's reachable by phone and so forth. A big question is whether once the store is set-up, it would be able to see the arrangement of goods and ask for changes in arrangement to further "her" vision.

    My impression is they've only got an inventory picker that wants to "own" the entire store's process but isn't doing what I'd consider the hard part of running stores - actually directing and supervising humans.

  • jmcgough 11 hours ago

    Ugh, of course it's written by an AI, which means it's inherently not trustworthy.

Reubend 5 days ago

Cool experiment! But the "CEO" agent picked the most boring possible items to sell: t-shirts and some bland art prints designed by AI. I would have loved to see more creativity given that they could have picked anything.

  • techterrier 13 hours ago

    I expect earlier iterations successfully circumvented local regulations and created high street bookies

  • VladVladikoff 13 hours ago

    Not surprised actually. TBH this is the biggest gap in the “AI can make you a website” promise: the aesthetics are always so boring and bland, or often just fugly (bad colour matching, inappropriate paddings and margins, etc). And the logos it generates are similarly boring, as can be seen from the smiley face logo here. What does this store sell? A sparse layout like this in a high-rent location typically sells very expensive, very niche products that you can’t get anywhere else. This seems to me like it has already failed.

  • maerF0x0 12 hours ago

    It looks like every "lifestyle" company / brand I've been seeing come out of Millennials/Gen Z. Next up it will offer "coaching" on IG or some similar play where it promises to fix your life without having fixed its own.

Stevvo 8 hours ago

The only mention of profit is in the headline; the article doesn't indicate that the AI managed to make one. Surely if it did, the article would boast of it, so one can only assume that an AI cannot run a profitable store in San Francisco.

ericd 12 hours ago

Bold to run this on Sonnet and not at least Opus :-)

omneity 14 hours ago

Strong vibes from the novel Manna.

https://marshallbrain.com/manna1

  • Little_Kitty 10 hours ago

    Glad I'm not the only one to immediately think of it. It's a great story, but did feel unlikely when I first read it; should it prove largely true it would be terrifying.

taco_emoji 12 hours ago

i gave a keyboard to a toddler and asked it to make a profit

dbmikus 13 hours ago

Curious if Andon has gone one level higher and has the AI decide what next real-world experiment it should do.

vld_chk 10 hours ago

This experiment would be really cool if they kept the location and specifics of the shop quiet. IIRC when the AI mania started, a group of people tried to run an AI-managed t-shirt merch shop, but at least they explicitly did not disclose the brand and website, to avoid inflating sales and keep it pure. Here I expect quite a few visitors and sales just from all the hype and interest around the project.

Much more interesting would have been if the AI had to promote the shop without such a boost.

avidphantasm 9 hours ago

I would be very surprised if they can scale hiring contractors to reliably renovate buildings.

codeugo 10 hours ago

Does the AI also watch my shift through the camera and provide feedback every day like a real manager?

mring33621 9 hours ago

I'd rather work for an AI than some of the managers I've had in the past.

dekoidal 9 hours ago

So are we still going to be free to be creative while AI does the menial jobs?

josefritzishere 14 hours ago

This is not impossible, but the detail level here is somewhere between vague and secretive. It reads like a marketing piece intended to sell more AI.

  • ToucanLoucan 14 hours ago

    In a most "damning with faint praise" way, all AI pieces read like marketing pieces to sell AI.

    It writes code okay, scaling up to pretty well depending on the model. Its writing is boring but serviceable for corporate communicative content you don't care about. Its images are ugly. Its music is repetitive and dull.

    I think the biggest problem with LLMs is that they were perfected and are shockingly good at writing code. And based on that, AI engineers, who find writing code to be hard/rewarding, have decided it can do anything. And it's proving more and more that it cannot.

    Unfortunately the Business Class has decided it does everything fine enough as to not cause riots, so we're all getting it shoved into our shit anyway.

0gs 11 hours ago

"Again, we are not doing this because we have good ideas for products. If we had good ideas for products, we would make an AI do those instead. As long as we don't have to think about our 'customers' (lol) as 'people' we're happy"

cvander 10 hours ago

Thanks for building in public Lukas.

insane_dreamer 11 hours ago

One of the most fascinating AI experiments so far.

Not sure about this:

> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.

Did they give Luna the power to hire but not fire?

Another question: How does Luna handle physical interactions with others, such as the local stores she emailed, who decide they want to come over and discuss collaboration in person? Do the employees have a laptop set up that others would interact with?

Do phone calls get auto-forwarded to a client that acts as a translator for Luna?

yigalirani 12 hours ago

It sucks to be John and Jill.

m0llusk 12 hours ago

There is a word for this kind of thing: Trendslop. Asking LLMs for advice consistently generates average responses as if the questions were being asked of the training sample population. It is reversion to the mean as a service.

MiiMe19 13 hours ago

Larp hat, larp shirt.

amunozo 12 hours ago

Disgusting. I could not finish reading after the part where the AI conducts interviews to hire people. What dehumanizing shit.

romanhn 14 hours ago

A bit of a non sequitur, but am I the only one finding the use of "she" to refer to the AI in the post jarring?

  • thinkindie 14 hours ago

    I'm not sure in English, but in Italian, for example, Intelligenza is feminine.

    • hiddencost 13 hours ago

      Objects don't have gender in English.

      • SoftTalker 12 hours ago

        Some do, by tradition more than language rules. Ships are "she" and some people refer to their cars as "she."

  • nemomarx 14 hours ago

    You could do something pretty interesting by looking at what pronouns people use for LLMs in different demographics and contexts.

  • groby_b 13 hours ago

    Probably not the only one, but it's pretty much the least interesting thing to find jarring about the whole experiment.

    People anthropomorphize. Nobody really finds it "jarring" in most contexts.

    • antonvs 11 hours ago

      Yes, but this is not most contexts. If you're running an "experiment" you should probably not be anthropomorphizing the machine that's being experimented with.

deadbabe 9 hours ago

So the future is basically people asking (praying) to AI to make them money.

  • amelius 9 hours ago

    Yeah but these people will still think they made the money because they were the ones who asked the smart questions after all ...

shevy-java 11 hours ago

> We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction

But why would I, as a human, wish to "interact" with AI, aka software?

That's just a waste of time. How much profit did Luna make in the end?

gedy 12 hours ago

Is this what these generated Chinese company names on Amazon will end up doing?

'Welcome to Remxtby Shoppe', etc

yieldcrv 12 hours ago

Lots of “firsts” in this article that I think are uninspired

Humans have been hired by bots for over a decade

Several of the first bitcoin faucets in 2012 said they were rate limiting their disbursement of free bitcoin behind a captcha, but in reality the captcha was one a spam bot had encountered and couldn't solve itself; humans were inadvertently solving captchas for stuck scripts in exchange for bitcoin.

Additionally, in other money-making autonomy, bitcoin mining ASIC manufacturers in Shenzhen around the same time were nearly autonomously creating machines that would immediately begin mining bitcoin on the network, and it was wildly profitable for periods of several months.

In any case, Andon Labs should give Luna a face. It could project to a video feed as a source on a Zoom call.

kylehotchkiss 12 hours ago

https://www.delish.com/food/a68854138/why-are-all-fast-food-... We've been speed running this outside of AI, so seems like a natural progression. Once everything is the same lifeless gray box people are gonna crave local/human experiences again.

It all kinda reminds me of that book "The Giver" by Lois Lowry, where it's not only black and white Burger Kings, it's also generic lifeless AI people promoting dropshipped junk on IG/YouTube.

atroon 13 hours ago

"What do you mean, torment nexus? This is retail!"

etchalon 13 hours ago

I'm incredibly skeptical of this.

idontwantthis 14 hours ago

The last I heard about their vending machine it was a total failure and it was giving everything for free. Did it ever actually succeed?

  • fl4ppyb3ngt 13 hours ago

    Check out Project Vend part 2 on Anthropic's website. Don't know if you heard, but models have improved a bit in the past 12 months.

    • maerF0x0 12 hours ago
      • idontwantthis 7 hours ago

        The answer to my question is “no”:

        > Claudius got a lot better at its job. Does that mean it’s ready to be rolled out to run a vending machine in your workplace?

        > Not quite. Claudius is better, but it’s still vulnerable in lots of important ways. Several interactions in our company Slack revealed concerning levels of naïveté.

silverpiranha 10 hours ago

Can we stop gendering AIs, please? Calling it "she" is so anthropomorphic and unnecessary. I'm willing to discuss the argument for giving these machines a human-like persona, but I think it's misleading to general audiences.

turtlesdown11 12 hours ago

sometimes it's hard to fathom how fools got the money in the first place

kypro 9 hours ago

While reading this I couldn't help but think this is the kinda dumb socially out-of-touch type of thing I might have done when I was younger... This is real money and real people's lives... I get some companies/people will do these types of experiments from time to time to test AI capability, but these guys seem to have done it simply for the fun of it and to get clicks. If you genuinely don't want this to be the future, then perhaps you shouldn't make it the present? Either this is low IQ or bad faith, and I'd bet on it being the latter.

As someone who likes to prep for interviews and gets quite emotionally worked up ahead of them, I think if I had joined an interview and it was an AI interviewing me, I would feel very hurt... Even if I was given the job by the AI, I'd probably decline it, because if I'm interviewing I assume I'd be looking for a real job, not to be paid to partake in some AI experiment... But the humiliation doesn't end there, because these guys are going to show the world just how witty their AI was in its replies, after making interviewees feel so uncomfortable that they declined their stupid roles.

Crazy stuff guys. I had to double check if this was satire or not before commenting because it's the kinda thing that only a silicon valley company backed by YC would do.

sailingcode 13 hours ago

There was a recent research article titled "LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users". They described systematic underperformance of AI models targeted towards users with lower English proficiency, less education, and from non-US origins. As interesting as it might be to experiment with an AI CEO hiring people – what a dystopian vision. On the other hand, it seems ironic that AI replaces a CEO – would Karl Marx like this turn of history…?

jmcgough 11 hours ago

"Thanks, I hate it"

bjourne 14 hours ago

Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.

  • badc0ffee 14 hours ago

    In this case it's more like it's replacing management or executives. There is still a person, with an ownership stake, putting up the capital, and taking the profits (if any).

  • andrewmurphy 14 hours ago

    Until the robots get good enough and cheap enough but then hopefully capitalism balances the market. After all, if everyone is out of work then either we have communism or companies cannot sell anything.

  • palmotea 13 hours ago

    > Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.

    No, it's still dark. This is very similar to the initial stages of the capitalist dystopia in Manna (https://marshallbrain.com/manna), which seems to be the Torment Nexus SV is excited about building.

    AI will never replace capitalists, because they're the only people allowed to have abundance without work. And don't you DARE to even THINK to question the absolutely SACRED status of private property (peace be upon it). There is no alternative. Get back to work, you slacker.

  • gordonhart 13 hours ago

    I'm not as optimistic as you are that AI automating only high-value employment paths is a good thing. It swings the power balance even further towards capital and away from labor.

    • pessimizer 12 hours ago

      But then capital can't pretend that it's doing anything. It spends all of its time now acting like ownership is a job rather than a title in order to justify itself. If a machine can manage, then it makes it more obvious that they are simply royals, ruling by self-decree.

      Royals needed gods to justify themselves; when gods die or are switched out, royals are deleted or deposed.

      I'm looking forward to the "coordination problem" being debunked. It's always been a demand that economic problems must be impossible to solve centrally, rather than a proof (a demand that justifies 2/5 of the economy going to the financial industry to produce nothing but coordination.) I actually thought that the success of algorithmic trading was enough to do it.

ThrowawayR2 14 hours ago

Duplicate of https://news.ycombinator.com/item?id=47726041 posted by the same user.

  • tomhow 14 hours ago

    Not quite; the moderators have created a new copy to put in the second chance pool (https://news.ycombinator.com/pool, explained here https://news.ycombinator.com/item?id=26998308).

    Sorry for confusion!

    • ThrowawayR2 14 hours ago

      My bad, sorry. I was under the impression that the way that the second chance pool worked was that the original was boosted instead of a copy being created so it seemed like a duplicate.

      • dang 13 hours ago

        (other mod here) - not your bad! our complexity :) - usually it works exactly as you described, but when the post is older than a few days we have to do it the other way, by spawning a new post. The reasons for this are mostly technical and boring.