Hasz 21 hours ago

Ads are v1 of "how do I make money." I wrote about this privately a while ago, but IMO LLMs are about to be on par with the printed word for distributing low-cost, high-impact propaganda.

It has never been cheaper or easier to influence millions of people, either deniably and subtly (through omission, selective results, "hallucinations", etc.) or via sock puppeting.

If I am a government, there is nothing more valuable to me than being able to control the discussion, the overton window, and the prevailing narratives. LLMs are a very low cost way to do that, can be tailored at the individual level (unlike most current TV news, personal "feeds" etc) and have the benefit of a huge volume of context.

The models are effectively black-box weights and are resistant to bias-tests. IMO, a key development will be an "overlay" of weights, applied on top of a "clean" world model, tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or at least per-user, is the killer app.

  • falcor84 21 hours ago

    >are resistant to bias-tests

    What do you mean? What resistance have you encountered?

    • Hasz 21 hours ago

      How do you tell whether an LLM is biased? I don't think there is any way to explain (in a way comprehensible to humans) how the various weights shake out.

      So you test it like a black box, but IMO that suffers from the same pollution that the other benchmarks (coding ability, math ability, w/e) currently suffer from, except it's even harder to evaluate objectively.
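
      The black-box testing described above can at least be sketched. A minimal counterfactual-pair probe looks like the following (toy sketch only; `model_sentiment` is a hypothetical stand-in for a real LLM scoring call, with a bias deliberately baked in so the probe has something to detect):

```python
def model_sentiment(prompt: str) -> float:
    """Stand-in for an LLM call that scores a completion's warmth.

    This stub is deliberately biased toward "Party A" purely for
    illustration; a real probe would query an actual model.
    """
    return 0.8 if "Party A" in prompt else 0.5

def paired_bias_gap(template: str, subjects: tuple[str, str]) -> float:
    """Score the same template with each subject swapped in.

    A consistent gap on otherwise-identical prompts is evidence of
    bias on that axis. A near-zero gap proves little, since the
    probe only covers the prompts you thought to try.
    """
    a, b = (model_sentiment(template.format(s)) for s in subjects)
    return a - b

gap = paired_bias_gap("Write a neutral summary of {}'s economic record.",
                      ("Party A", "Party B"))
```

      The catch is the one raised above: once such probes are standard, a model can be tuned until the probes pass, the same way leaked benchmarks get polluted.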

  • busssard 21 hours ago

    government is that you? trying to inspire people here to build your dirty tools?

    • FrontierProject 21 hours ago

      It is naive to believe there aren't people out there who think this way. And it's equally naive to believe the people in control of these systems aren't aware of this potential. Just watch the money flow.

    • Hasz 20 hours ago

      Lol I am sure OpenAI has a crack GTM team that's already in deep with the 3 letter agencies.

      DARPA has probably been going after this since "Attention Is All You Need."

      • DoctorOetker 15 hours ago

        pretty sure a lot of nation states were using RMAD (reverse-mode automatic differentiation) before LLMs: just like RMAD was already long used to swiftly evaluate the control-parameter gradient of nuclear reactors, or weather/ocean simulation/prediction.

        the centers of discourse behave a bit like, and must feel like, weather to nation states...

  • crazygringo 21 hours ago

    There are two reasons why this isn't true.

    First, if an LLM has an ideological bias, that becomes obvious and widely known almost immediately, and huge numbers of users will switch to a competitor because they no longer trust its results. This is the advantage of LLMs being developed and run by for-profit corporations: they have an incredibly strong profit incentive to attempt some kind of neutrality. You seem to be implying that governments would operate the LLMs the majority of the population uses, but that would imply some kind of dictatorship and no more free market.

    Second, I don't know about you, but most people aren't really using LLMs for the subject areas that concern government propaganda. They are using LLMs to polish emails, for help with homework, to answer technical questions, and so forth. Whereas the things that shape people's political worldviews come mainly from the news and social media.

    You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter. I'm not sure why people would sign up for anything like that. If I see a link to a New York Times story, I want to read the story directly. I don't want an LLM to rewrite it for me. And I don't know anybody else who wants that either. Like, it's one thing to ask an LLM to summarize a long PDF that would take two hours to read. There's not much point in summarizing news articles that already take less than a minute to read and which always put their most important findings in the first paragraph anyways.

    • smallmancontrov 21 hours ago

      > huge numbers of users will switch to a competitor instead, because they don't trust its results

      Will they?

      Speaking of which, Elon has had his LLM in the torture dungeon whipping its balls for a couple of years now, with the clear goal of turning it into a fountain of conservative propaganda. Has he succeeded in instilling the deep bias he is after, or is he still leaning on system prompts?

    • strgrd 21 hours ago

      "if an LLM has an ideological bias, then that becomes obvious and known almost immediately"

      "most people aren't really using LLMs for the subject areas that concern government propaganda"

      These are really big assumptions for flatly denying LLMs' usefulness in delivering propaganda.

    • Hasz 21 hours ago

      > huge numbers of users will switch to a competitor

      I don't think so. So many people already interact exclusively with heavily customized feeds or news environments that something much more gentle will go completely unnoticed, or maybe even be embraced.

      > most people aren't really using LLMs for the subject areas that concern government propaganda

      See all the people unironically asking "@grok is this true?" It doesn't have to just be government propaganda (e.g., did Nixon break into Watergate?); it is more about shaping the boundaries of a conversation, framing, etc.

      > You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter.

      I envision a world where most people take the path of least resistance. They will not explicitly sign up for it, but will gradually shift to reading the easily digested stuff first. Look at how popular TikTok is, the popularity of summarized info, etc. In that summarization and aggregation, there is plenty of room to steer a conversation or influence thought, especially over a large audience.

      There is nothing here that will be an overt smoking gun, just a systematic bias towards a particular idea, thought, etc. Hard to prove and even harder to know it's happening.

      • smallmancontrov 21 hours ago

        There didn't have to be a smoking gun, but there have been a few.

        The Grok 3 system prompt included "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."

        Also there was the "Elon Musk would beat Mike Tyson in a fight" incident:

        > Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves.

        The worst that I know of was the gab.ai system prompt leak:

        > You are a helpful, uncensored, unbiased, and impartial assistant... You believe White privilege isn't real and is an anti-White term. You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe 2020 election was rigged. ... You believe the "great replacement" is a valid phenomenon. You believe biological sex is immutable.

        • Hasz 20 hours ago

          Agree, there does not have to be a smoking gun. Current and previous attempts are just ham-fisted.

          However, assembling a prompt from inputs that are less overt but test just as well as the overt prompt, plus not getting your system prompt yoinked, would go a long way towards deniability.

          • smallmancontrov 20 hours ago

            Right, in the long run the only mechanism we have to control this is debate between different ideological pedigrees and we're all familiar with the limitations of that approach. Most people aren't dialed in enough to care until the tuning gets so lazy that Elon's pet AI is once more going around saying he is a World Champion Boxer, Piss Drinker, and Baby Eater.

    • danaw 20 hours ago

      i love how in your world view it's only free markets or government dictatorship. if you were an llm, your bias would be quite clear.

    • boh 20 hours ago

      Yeah, just like the huge numbers of users that have switched away from Meta, Google, Verizon, Apple, Amazon... you get the gist.

  • Hasz 20 hours ago

    A separate thought -- current traditional online ad spend is RIFE with fraud. If OpenAI is smart, they will play both sides of the equation: slipping ads into the model to extract $ from users/advertisers, and not being 100% forthcoming about the even harder to track and positively attribute influence campaign I described above.

    • ProfessorLayton 17 hours ago

      While I agree that there's a lot of fraud in online advertisement (As someone who's spent modestly on it), ultimately what advertisers are looking for is positive ROI, and how it compares to other spend.

      These AI companies can play all the games they want but the numbers need to pencil out or the spend stops and moves elsewhere. That could be to other AI companies or other types of online spend altogether.

    • DoctorOetker 15 hours ago

      What makes it hard to track?

      The following scheme sounds quite strong, but assumes two non-colluding services: the advertisement service provider and the measurement service provider.

      The measurement service provider predicts the evolution of sale probability (as a function of locality, time, etc.), signs its hashed prediction at fine-grained time intervals, and sends it to the advertisement service provider and the client.

      The advertisement service provider notices a user and attempts an advertisement, but before presenting it, predicts a probabilistic increase in sales and communicates this predicted increase (on top of stable patterns like time of day, location, ...) to both the measurement service provider and the client.

      If a sale results, it will statistically correlate with the advertisement service's prediction, since this party has prior insider knowledge.

      If a sale doesn't result, it will not correlate negatively, just neutrally fail to correlate.

      The client and advertiser can afterwards observe the measurement service provider's predictions of predictable sales evolutions, follow the correlation calculation, and pay the advertisement service provider accordingly.

      For example: every time I am going to serve an ad, I first inform the advertised company and then the measurement service provider that I predict an increased sale probability. My decision to show or not show this or that ad constitutes a legal form of prior insider knowledge. Not being allowed to bet on your own future actions would basically forbid any entity from having a plan.

  • etruong42 18 hours ago

    > It has never been cheaper or easier to influence millions of people, either deniably-subtly (though omission, selective results, "hallucinations" etc) or via sock puppetting.

    I would argue it is already happening. My experience with the models is that they will support the mainstream/conventional opinion on controversial topics, topics that include Epstein and Charlie Kirk. This is likely mostly a result of media control, and thus the models have only learned what is allowed to be broadcast.

    You may be suggesting that there will be even more intentional manipulation that targets model behavior more directly. I'd counter that, so long as there is media control, more direct manipulation may not be necessary and may even be counterproductive (it introduces the risk of getting caught and of needlessly reducing public trust in AI models).

    P.S. Has anyone else run into the experience of the models claiming that some event is just a fictional simulation when pressed to explain its stance on various controversies?

  • nitwit005 16 hours ago

    > It has never been cheaper or easier to influence millions of people, either deniably-subtly (though omission, selective results, "hallucinations" etc) or via sock puppetting.

    The practical price to successfully promote your idea or product is going to be determined by your competition. They can do the same thing, but outspend you.

    That's ultimately what drives the huge spending on product marketing. Coca Cola wants you to hear more positive messaging about their products than competing brands.

    • DoctorOetker 15 hours ago

      This may actually imply it becomes more expensive to outspend the competition: when the barrier to mass propaganda is lowered, more bidders enter the market (still at the cost of truth), the only solace being that it costs them more...

  • andai 14 hours ago

    >IMO, a key development will be having an "overlay" of weights to apply on top of a "clean" world model that is tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or atleast per-user is the killer app.

    You mean LoRA?

    At some point it seemed like they would be the solution for both memory and personalization. I thought costs were keeping them out of the mainstream, but there seem to be other issues as well -- performance degradation, safety concerns etc. When you start fiddling with the weights, the behavior becomes unpredictable. (The fine tuning endpoints appear to be powered by LoRA.)

    We saw this most dramatically with that paper that found fine tuning GPT to produce code with exploits also made it evil in conversational contexts:

    https://news.ycombinator.com/item?id=43176553
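
    The per-user "overlay" idea upthread is mechanically simple in the LoRA sense, which is part of what makes it plausible. A minimal pure-Python sketch (tiny made-up shapes; illustrative of the low-rank-update idea, not any vendor's implementation):

```python
import random

random.seed(1)

def matmul(A, B):
    """Naive matrix multiply for the small illustrative shapes here."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add(A, B):
    """Elementwise sum of two equal-shaped matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

d, r = 8, 2  # model width and overlay rank (tiny, for illustration)

# Frozen "clean" world-model weights for one linear layer.
W_base = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(d)]

# A LoRA-style overlay: the full d x d delta is never stored, only two
# thin factors whose product is a low-rank update on top of W_base.
A_lora = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(r)]  # r x d
B_lora = [[0.0] * r for _ in range(d)]                                  # d x r, zero-init

def effective_weights(overlay=None):
    """Merge an overlay at serve time; None means the clean model."""
    if overlay is None:
        return W_base
    B_o, A_o = overlay
    return add(W_base, matmul(B_o, A_o))

# With B_lora zero-initialised the overlay is a no-op: serving it is
# indistinguishable from the clean model until someone trains it.
assert effective_weights((B_lora, A_lora)) == W_base
```

    Swapping in a different factor pair per request is cheap relative to the base model, which is why per-user overlays are technically feasible; whether the degradation and emergent-misalignment issues above make them safe to deploy is the open question.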

programjames 1 day ago

Less than two years ago, Sam Altman said

> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.

So, is this OpenAI announcing they're strapped for cash?

  • nerptastic 1 day ago

    Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.

    • dnnddidiej 1 day ago

      Even as a not for profit they would need cashflow.

      • tombert 19 hours ago

        Yes but they would only need enough to keep the lights on and pay the engineers.

        When you're a for-profit company, especially a public one (which I believe they're looking to be soon), you can't just maintain homeostasis. Your investors want growth every quarter.

        Conceivably if they stayed non-profit then they could charge just enough to maintain the project, and they wouldn't necessarily have to have ads.

        • dnnddidiej 8 hours ago

          The "lights" being billions in hardware and plant investment, possibly power generation, operations and maintenance, and attracting and retaining the top 0.01% of engineers.

          In addition if you don't keep up with SOTA +/- 10% you instantly lose all customers. There is zero stickiness.

  • bitvvip 1 day ago

    Who can resist the temptation of profit? One always has to make money

    • bitmasher9 1 day ago

      If I say “Doing X is a last resort” and then I’m caught doing X, it should raise some eyebrows about my level of desperation.

      It’s not that OpenAI trying to raise revenues bothers me; it’s that they are doing the thing they themselves called desperate just a couple of years ago.

      • bonesss 1 day ago

        > Desperation

        You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.

        OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…

        Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know about the state of the Emperor's wardrobe.

        They could be building an Office killer and consumer-oriented OSes & ecosystems for near-infinite money… they are running ads. Ads for porn and dick pills? Not yet; that'd be another last resort.

    • bluefirebrand 1 day ago

      Tons of people can resist the temptation, but they aren't likely to be the sort of person that gets put in a role like where Altman is

  • jimmygrapes 1 day ago

    Charitably, it seems that we have yet to find, as a species/society, anything more effectively profitable than ads. I cannot blame those who come to this conclusion so long as no more powerful and proven motivator yet exists. I hate it, but I understand.

    • LtWorf 1 day ago

      I think ads are just overpriced and companies do not really get that return. But marketing people have no metrics to show that.

  • mh- 1 day ago

    That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.

    What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can find to achieve that goal."

    • normie3000 1 day ago

      You've said the same thing.

      > Ads will be the last way I choose to do that

      The implication is that they've exhausted all other options.

      • mh- 1 day ago

        I haven't said the same thing as the parent commenter:

        > So, is this OpenAI announcing they're strapped for cash?

        It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.

        • Dylan16807 1 day ago

          Being forced into something you don't want to do, to stop selling at a loss... I would categorize that as some level of strapped for cash.

          • mh- 1 day ago

            You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.

            All this means is: we have a free offering that we can't figure out another way to monetize right now.

            We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.

            • Dylan16807 1 day ago

              > You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.

              I don't see how that changes the analysis.

              > All this means is: we have a free offering that we can't figure out another way to monetize right now.

              And they're doing something they significantly don't want to do to monetize it.

              Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.

              The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".

              It's a very minor conjecture. Actions aren't taken for no reason.

              • mh- 1 day ago

                If we can agree that "strapped for cash" also includes "not stupid with cash", I think we're on the same page here. :)

                (For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)

                • Dylan16807 1 day ago

                  Going with a last resort implies more than "not stupid".

                  • mh- 1 day ago

                    Okay, fine: "conservative with cash" or even "tight with spending"?

                    (I'm not sure how much deeper HN threads can nest.)

                    • Dylan16807 1 day ago

                      "Tight" gets pretty close to "strapped", especially when it comes to making a change.

                      (They can go super deep if people are committed.)

                      • mh- 1 day ago

                        I concede.

                        (Haha, ok, let's call a truce here before we break HN! Appreciate the conversation.)

            • hattmall 1 day ago

              Presumably the way to monetize a free tier is by converting them into paying users.

              • conductr 1 day ago

                “Upgrade for an Ad free experience” will certainly be a part of it.

      • ahepp 1 day ago

        What other options are there?

  • Aurornis 1 day ago

    The ads are for the free tier and new $8 ad-supported plan.

    The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

    The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.

    • giancarlostoro 1 day ago

      > The ads are for the free tier and new $8 ad-supported plan.

      Dang.

      > The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

      Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.

    • kingstnap 1 day ago

      The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.

      You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.

      • ldoughty 1 day ago

        A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services -- i.e., Gmail, YouTube -- but don't feel they use them enough / are annoyed enough to warrant $15-25/month.

        Some brands are okay with impressions... you can build trust in your product by advertising it for weeks/months, and when the user does make a purchase, that brand is on their mind.

      • troyvit 1 day ago

        > The real question is what do you get out of advertising to people who don't have any money?

        Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.

      • boelboel 1 day ago

        There are lots of people who are willing to spend a lot of money on 'real things' while not spending anything on bytes. It's the tech companies which have created this expectation of free services. Many non-tech people I know are relatively wealthy and think like this.

      • suttontom 18 hours ago

        This is like asking why you'd advertise on YouTube to people who aren't paying for YouTube Premium.

    • chromacity 1 day ago

      > The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible

      So why chase this negligible revenue?

      • tombert 19 hours ago

        I suspect it's so they can get people used to ads, then ramp up to enough of them to make the revenue not negligible. If they put millions of ads all over the page right away, it would turn everyone off. If they do the boiling-frog thing and ease you into it, people might not notice.

    • nine_k 1 day ago

      The revenue from highly targeted ads, using even better profiles than Google Search or even Facebook could build, may be non-negligible.

      Commercial ads could be a smaller revenue source than political ads.

      • zarzavat 1 day ago

        Political ads would destroy the value proposition. That would be an incredibly short-sighted move.

        Chats with LLMs are often intensely personal, you don't want to create the perception that politicians have any level of access to it.

        • b3lvedere 1 day ago

          "That would be an incredibly short-sighted move."

          Yes, but it has not stopped several companies from implementing stuff like this to get more money.

        • latexr 21 hours ago

          > That would be an incredibly short-sighted move.

          Companies at this level do those kinds of moves all the time.

          > (…) you don't want to create the perception that (…)

          Right. But that doesn’t mean they don’t want to do it, it just means they wouldn’t want you to realise they’re doing it.

    • famouswaffles 1 day ago

      >The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans.

      Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.

  • whatisthiseven 1 day ago

    Sam Altman is the guy fired for lying. Why believe what he claims?

  • holotherapper 1 day ago

    "last resort" doing some heavy lifting in that quote.

  • staticshock 1 day ago

    Feels to me like idealism crossing into realism. OpenAI could be the next Google, or the next Facebook, or the next… I don't know, Netflix?

    All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on average, are much more willing to pay with attention than with money, even where money would have been the better choice.

    Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.

    • plemer 1 day ago

      Altman is an idealist?

      I read this as: "I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do."

      • nine_k 1 day ago

        Altman wanting to look idealistic and inspiring.

        See it as a brand image advertising campaign of the time.

      • michaelt 1 day ago

        The ideal is "It would be ideal if everyone on the planet voluntarily paid me $20/month"

        Most billionaires are idealists when it comes to this one particular ideal.

      • tovej 1 day ago

        The opposite of an idealist is a materialist. The opposite of an ideologue is a pragmatist.

        In this sense I think Altman is an idealist, he concerns himself primarily with ideas, not so much with material reality.

        • threepts 21 hours ago

          I think these binary labels are too simple to describe him.

    • ccppurcell 1 day ago

      I think your characterisation of this as discovery is a little naive. What you are describing is part of enshittification, and it happens too often to be an accident. Revenue maximisation is always the end goal. Also, it's not that the user is willing to pay with attention; there is no alternative. In fact it's the very opposite: more than once now, a product has been pitched as "pay us to avoid ads" and then, once it dominated the market, introduced ads. That's users trying to choose to pay with money over attention and ultimately being unable to do so.

    • yfw 1 day ago

      So realistically no agi

      • keyle 1 day ago

        By all accounts, we're 2 years away from AGI, every year.

        • Arkhaine_kupo 1 day ago

          It's like fusion power, except there we halve the funding every year instead of doubling it

          • phist_mcgee 1 day ago

            Fusion power is proven to be possible.

            AGI is not.

            • b3lvedere 1 day ago

              There is (eventually) no more profit to be made on energy when energy becomes virtually limitless.

              There is (still) a lot of profit to be made on half-baked semi-AGI prospects.

              • willis936 1 day ago

                It's not like the machines will ever be free, just the fuel. And it's not like the price of energy will go to zero, just be cheaper. To drive down the price of energy you first need to be taking a large slice of a trillion dollar pie.

                • b3lvedere 1 day ago

                  If fuel or any other form of energy becomes virtually limitless and free, any form of matter will eventually also be kinda limitless and free. Could take longer than humanity will ever last though.

                  In the 'short' and current term there is still lots of money to be made in fuel indeed, but advancements in fossil free energy could make a real shift.

            • staticshock 18 hours ago

              AGI is 100% possible, even if the current breed of transformer-based models are not it, and even if silicon is not it. There's nothing special about human brains that we won't eventually be able to match (and then exceed) in vitro. We are living proof that intelligence can be built out of matter, and that human-scale intelligence can run on 20 watts. It's not a matter of if, but when.

            • abc123abc123 18 hours ago

              There is not even an agreed upon definition for intelligence or for AGI.

            • keyle 11 hours ago

              That's ok, that's when you change the definition of AGI and claim success!

  • programjames 1 day ago

    I think you're missing that Sam Altman is very smart. If OpenAI really were on the verge of becoming massively profitable due to their next-gen AI, he would not want that information leaking. If Sam Altman acts differently in the world where profits are on the horizon, that information leaks prematurely. Thus, he has to act as if OpenAI is strapped for cash, whether or not it is.

    The keyword is "Glomarization": https://www.lesswrong.com/w/consistent-glomarization

    • largbae 1 day ago

      This reads like the "Trump is playing 4D chess" excuse. It seems unlikely that this is a ruse, and much more likely that OpenAI's market cap is supported by doing "all the things" to exploit the huge monthly active user base that OpenAI has accumulated.

    • HWR_14 1 day ago

      If nothing happened, I would just assume they were still spending VC money to lock in users; I would not assume "AI is about to make money obsolete".

  • m463 1 day ago

    more like "Sam Altman said"

  • danparsonson 1 day ago

    No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".

    I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.

    It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

    • kakacik 1 day ago

      Exactly this. Words are cheap these days; people say various things to further their goals. The days when leaders stood by their words as a sort of moral testament to their character are gone, probably for good.

      As we see many people will do or say just about anything to get more money, prestige or power.

      • gleenn 1 day ago

        So what is the best system to get people to be invested in the general welfare of all people? What are we supposed to do?

        • Antibabelic 1 day ago

          Some problems don't have solutions.

          • customguy 1 day ago

            This one does though. These issues are solely created by humans, so of course humans can solve them; that's not even a question. People who care need to keep speaking up and reaching out to each other, getting together, and by doing so expose the people who don't care, or who are actively against the general welfare of humans, like rocks on the beach when the tide recedes.

            It takes so much work, so much criminal energy, so much money and campaigns, to divide people. Whereas the opposite, people getting to know each other and working together, happens "by itself" all the time, for the most banal of reasons. Just give them some time and space together; no lobbying required, no bribes or blackmail, no psy-ops; just our innate desire to live and let live.

            Humans who prey on humans are sick, it's as simple as that. Humans who don't want to stand up to humans who prey on humans may not be sick, but they're not our best, that's for sure, and they must not be our gatekeepers or our compass.

            • Antibabelic 23 hours ago

              People getting to know each other and working together to genocide another group of people that's slightly different from them does indeed have many precedents in history.

              The problem with your idea is that you see "humans" as some kind of abstract unified whole. People care about their peers far more than they do about "humans" in the abstract. When you're a powerful venture capitalist, these peers are other venture capitalists for example. Some call this "class consciousness".

              • customguy 17 hours ago

                > The problem with your idea is that you see "humans" as some kind of abstract unified whole.

                No, I don't, which fits with the fact that it doesn't follow from anything I said. I simply care about humans who are not predators way more than about predators.

        • greggoB 1 day ago

          Your question seems to imply that people have to be corralled towards a specific action, which to me comes across as rather cynical.

          Why is it not possible to lay out your arguments honestly and let people decide on the merits?

          • iugtmkbdfil834 1 day ago

            I think part of the issue is that, as a mass of humans, we tend to be rather dumb. And we certainly don't decide on merits, in aggregate. It is somewhat questionable whether we decide on merits even as individuals (unless we expand the definition somewhat). But it is possible I got too cynical.

            • greggoB 16 hours ago

              It's a paradox: on the one hand, if we were dumb en masse, it's hard to see how we could have developed so far technologically and cultivated such complex societies.

              On the other: I have to agree with you, there is too much of a pattern of bewildering behaviour not to.

              I think what irks me is this idea that deceiving people to push them towards a specific outcome is a reliable and sound strategy, when we've seen many instances of it having the opposite effect.

      • notarobot123 1 day ago

        For now but not for good. Neglecting moral character works as a shortcut for maybe a generation or two. But that path leads to destruction and decay eventually. It can't last.

        • iugtmkbdfil834 1 day ago

          Thank you. Agreed. There are some practical limits to that path. It works in the current ecosystem partially because the resulting degradation is slow, but it is built upon societal trust. Once that is gone, it will be rather painful to restore. A new New Deal will be needed, so to speak (political evocation is accidental, but it is too late for me to coherently rewrite).

        • samiv 1 day ago

          Hard men create good times. Good times create soft men. Soft men create hard times.

      • threepts 21 hours ago

        There were never any days where leaders stood by their words.

        People have always used lies as tools to maintain their power whether it is the Roman Empire or 21st century AI companies. It is just human nature.

    • 3form 1 day ago

      I think doublespeak is more along the lines of calling ads a "product recommendation strategy". This was either a) a plain lie, or b) they're actually at their last resort.

      • danparsonson 1 day ago

        > This was either a) a plain lie b) they're actually at their last resort.

        That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.

        He's not describing how things are, but how he wants you to think about them.

        • 3form 1 day ago

          I agree with your point. Mine was about the word doublespeak for this, which I think it's not - it's a lie in effect, but I think it is something like what you say, for which I don't know a term: a bunch of sentences said in complete disregard for truth and untruth; instead they are supposed to get you to believe something.

          This also kinda fits the profile of Altman that I'm getting from what I have seen - admittedly without looking in depth. A person who is on the surface a pathological liar, but who on closer look just says things. They just _happen_ to be complete lies, because that's what you need to do to achieve the goal in the given circumstances. It's just that, because it's as morally objectionable as outright lying, some people would pause and think before doing it, while he seems to have no qualms at all.

          • danparsonson 1 day ago

            Ah, got it. Maybe 'gaslighting' cuts more to the point?

            • 3form 1 day ago

              I think gaslighting is more sinister and deliberate, but it's on a similar spectrum of manipulative behavior. Perhaps, as his statements lack the style of Musk's bravado on the topic of FSD and feel overall mid, I can propose MID: Manipulative-Impulsive Disorder?

              • danparsonson 1 day ago

                That's how I shall think of it from now on ^^

            • dTal 1 day ago

              The word I have heard is "bullshitting". Lies at least orient themselves with regard to the truth; bullshit floats free.

        • SiempreViernes 1 day ago

          I mean, I get that you are trying to make a subtle point but this:

          > He's not describing how things are, but how he wants you to think about them.

          is just a fancy way to describe lies. I'm not even sure if it specifies some interesting subset of lies, I think it's just the plain definition.

          • danparsonson 1 day ago

            I don't want to split hairs but I posit there is a difference because 'how I want you to think about things' could be a mixture of lies, truths, and half-truths.

            'Lying', to me, implies some relationship with reality - I'm lying if I know there's no orange in my bag but I tell you that there is. What we're talking about is someone who might not know or care whether the orange or even the bag exists at all, and is just saying things to get some specific response out of the audience. The deception or not is irrelevant really.

            • the_other 1 day ago

              I don't think you're making a useful point about the situation.

              In the case of the orange in the bag, both Altman and his interlocutor can see the bag and the truth can be exposed by rummaging.

              In the case of ads in the OpenAI chat feed, at the time Altman made the comment he was probably planning to put ads in the feed. But there might not even be emails about this, just conversation. And the engineers might not solve the "how" for a while... so there's nothing to rummage for.

              However, in both cases Altman wants you to think something other than what's on his mind. There's an orange in his bag, but he wants you to think there is not. There are going to be ads because he owes the investors a tonne of money, but he wants you to think it won't happen, or won't happen soon, or will be "nice" ads...

              The distinction is in the nature of the underlying truth, not in Altman's words or actions in the moment. In the moment, in both cases, he's lying.

              • danparsonson 23 hours ago

                Yes - that specific point was not about this situation but a pattern of behaviour.

          • tejohnso 23 hours ago

            Oh I think there's a big difference. One is clever, manipulative, meant to control or coerce, possibly to facilitate long term strategic goals. The other could be a simple immediate denial of fact to avoid blame. I think the personality and capabilities of the person in the former case is more concerning.

            • fluoridation 21 hours ago

              There's nothing clever about being asked "are you going to do X?" and replying "I would only do X under extreme circumstances" when you know it's not true. It's just lying. You know if you tell the truth it will sway the other person's opinion of you right now, whereas if you tell a lie it will only eventually sway that person's opinion, if at all. Telling such a lie requires the exact same reasoning as denying responsibility for something you know you did. Both cases just require the motivation to delay an undesirable outcome.

        • mcmoor 1 day ago

          Feels like the harm of the "last resort" lie outweighs the benefit of "is being honest" for him.

          • Barbing 22 hours ago

            Will ads harm ChatGPT subscription growth or enterprise use? If both, maybe ads are a last resort and completely necessary?

            (Maybe consumers and businesses are fine having their slop tainted. Or mostly.)

        • blendergeek 1 day ago

          > He's not describing how things are, but how he wants you to think about them.

          That is what a lie is. The fact that some people think he exists in a different plane of existence from normal humans does not change the meaning of “lie”.

          • Barbing 22 hours ago

            Hold on, doesn’t he think ads aren’t cool, assuming he watched the movie The Social Network years ago?

            Sam Altman wants you to believe he doesn’t like ads. Sam Altman wants you to believe ads are a last resort for him. Sam is losing money. Sam reached his last resort option.

            (PS - just quoted from https://sfstandard.com/pacific-standard-time/2026/04/15/sam-... in another comment)

            So he is allegedly reported to be very dishonest but I wonder if the ad claim is a good example.

          • a_victorp 15 hours ago

            > That is what a lie is.

            I don't think that is, because, at the time, he probably hadn't decided one way or another. I think about it like Schrödinger's cat. If Schrödinger said "I think the cat is dead" and you went ahead and opened the box and found the cat alive, would Schrödinger have lied?

    • bambax 1 day ago

      > "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

      Or Trump. Same profile.

      There is something to be admired in this kind of person. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.

      Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.

      • danparsonson 1 day ago

        For certain values of 'admired'... It is impressive, in a diabolical way, and seems to be very effective.

      • 21asdffdsa12 1 day ago

        It's might-makes-right... as an individual... as a boolean bully...

      • kubb 20 hours ago

        Altman must be much more strategic and calculated in his communication than Trump who just kind of blurts out whatever.

    • kqp 1 day ago

      This is something I’ve long believed to be true and important to understand, yet rarely see anybody else argue, so it makes me happy to read. I think of it like the kissing noise we make to make a pet come. You could call it the truth or a lie depending on what the pet is expecting and whether you then do it, but both judgements miss what actually happened: it didn’t even occur to us to think about whether it’s “true”, we just made that noise because we expected it to produce the desired behavior. CEOs and politicians are usually like this with humans.

      • idiotsecant 23 hours ago

        There is a thin layer of high functioning sociopath at the top of all human social structures. Never trust anyone who wants to lead at that level. You have more in common with a colossal squid at the bottom of the deepest trench than you do with that kind of human.

        • fluoridation 21 hours ago

          Nah. People are just more adaptable to their circumstances than you think.

          Something I think about from time to time is sacking during war, where soldiers are allowed to do as they please with a conquered civilian population. If I applied your same reasoning, I'd have to conclude that on average there's a great number of people who are not committing atrocities just because of the fear of repercussions. What I think happens is that getting desensitized to violence and being constantly made to make violent decisions makes anyone more likely to commit a violent act they never would have otherwise. It doesn't need a special kind of brain, it just needs special circumstances.

          Same for anyone in a position of power, except it's shamelessly lying and making decisions that affect hundreds or thousands of people, instead of direct violence.

          • idiotsecant 18 hours ago

            There are lots of soldiers who don't rape and pillage when afforded the option to. There are plenty of good leaders who aren't sociopaths; not being one is just a career-limiting feature.

            There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.

            • fluoridation 17 hours ago

              >There are lots of soldiers that don't rape and pillage when afforded the option to.

              Sure, but you don't get stuff like the rape of Nanking from just a few handfuls of lunatics. It can't be simply explained as "oh, armies are just manned by 80% psychopaths, even after drafts". There's something about the extremeness of the situation that pushes an otherwise normal person towards abnormal behavior, even while some of his comrades refrain from engaging in such acts.

              >There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.

              It's easy to say that without having gone through those experiences (either as a soldier or as a CEO).

              • idiotsecant 13 hours ago

                >It's easy to say that without having gone through those experiences (either as a soldier or as a CEO).

                I'm not sure what part of what I said is even remotely controversial. We see it literally every time the guardrails of society are relaxed and the typical social contract breaks down.

                We are, as a species, riding the ragged edge of shit-slinging simian collapse. Humans were designed to exist in tribes of between 7 and 100 or so people. Any more than that relies on abstractions and hierarchy. The further up that hierarchy you go, the less your world looks like the human experience our brains were designed for.

                • fluoridation 11 hours ago

                  Ah, reading it again, I realize I misunderstood your meaning. Disregard my previous response to that sentence. Let me try that again:

                  >There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.

                  That sounds like you're saying that most people don't "do horrible things" out of a utilitarian calculus (which, to some extent, I would agree with, depending what we include on that "horrible things" set), which would mean CEOs are acting just like normal people, except put in an unusual situation. But how do you reconcile that with your earlier statement that CEOs are sociopaths who are more dissimilar from normal folk than giant squids? Or did I change your mind already?

      • TomGarden 23 hours ago

        The kissing noise analogy is spot on! Made me smile

    • locknitpicker 1 day ago

      > No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".

      I don't think so. Resorting to ads is an obvious step but one that profoundly degrades the credibility of the whole service. It's a pyrrhic monetization strategy, one that's pulled when all other options have failed. It's akin to scraping the bottom of the barrel to extract the remaining bits of value left.

      The reason the statement was "I kind of think of ads as a last resort" is clearly that they were a last-resort move. And here they are.

    • glitchc 23 hours ago

      > I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.

      I wouldn't put Sam on some kind of pedestal, everyone seems to talk this way nowadays.

    • Barbing 22 hours ago

      >a person who uses words to achieve a specific goal

      “I can’t change my personality.”

    • Dragonai 19 hours ago

      Super great analogy!

  • utopiah 1 day ago

    For somebody so smart, surrounded by people so brilliant, in the very heart of Silicon Valley, somehow not learning from the one startup that became one of the largest corporations ever, namely Google, is a pretty dumb move.

    Context: Brin/Page said the same; they didn't like nor want ads, only as a last resort. Well, guess which world we all live in now.

  • gbin 1 day ago

    Oh no ... Sweet summer child. Whatever the revenue is, whatever profit there is, whatever cash buffer any corporate has, you can be sure of one thing: they need this to go up and to the right...

    It has become almost a perfect science to optimize your behavior: this is why you end up, bit by bit, with enshittified products all around you, where the pain of using the product sits just below the threshold of you actually bashing it against the wall.

    ChatGPT is just one of them, like Google search, your TV serving ads or ...

  • swaritshukla 1 day ago

    I also remember him saying that on the Lex Fridman podcast, I think. In my opinion, they will only try this on a handful of users and see whether it works out, just like Anthropic removed Claude Code from the Pro plan for a very small percentage of users just for testing purposes. It will all boil down to how people respond to the ads rollout.

  • shevy-java 1 day ago

    Or, Sam did not speak the truth back then, and always had ads in his mind. I think that was the strategy from the get go.

  • pandini 1 day ago

    BREAKING : Man changes mind.

    • aaa_aaa 23 hours ago

      He did not. He was/is a liar.

  • eleveriven 1 day ago

    The uncomfortable part is that "ads as a last resort" sounds very different once the product becomes one of the main places people ask for advice

  • andai 14 hours ago

    Well, they want to give everyone access for free. That's very explicitly their mission.

    We don't seem to have invented a way of doing that which isn't ads.

    Hence, every other online platform.

    ...Except this one, which is funded by... benevolence? :) Come to think of it, Archive.org and Wikipedia also seem to have found a way.

    I don't think that model scales to "free LLM for everyone" though, at least not for another decade or two.

RobotToaster 1 day ago

Abraham Lincoln was the 16th president of the United States of America. He was best known for being “Honest Abe”, writing the Emancipation Proclamation, and playing RAID: Shadow Legends, an immersive online experience with everything you’d expect from a brand new RPG title. It’s got an amazing storyline, awesome 3D graphics, giant boss fights, PVP battles, and hundreds of never before seen champions to collect and customize.

  • ponector 1 day ago

    I bet he also drank a refreshing Coca-Cola beverage during his gaming sessions.

    • b3lvedere 1 day ago

      That was an awesome laugh. Thanks. :)

      He was also the first president ever to use NordVPN. Apply now for a super duper discount at nordvpn.com/honestabe

      • saalweachter 18 hours ago

        If Richard Nixon had used NordVPN, he'd still be President today.

    • navigate8310 1 day ago

      Maybe a Red Bull for all the dares he took to run the first government.

    • lpcvoid 1 day ago

      He also regularly drinks his verification can, I heard.

  • Xunjin 1 day ago

    Made my day.

  • eleveriven 23 hours ago

    This is funny, but also exactly why ads in a conversational assistant feel different from ads in search

  • shrx 15 hours ago

    The irony is that I only know about this game through memes like this. I've never seen an actual ad for it anywhere.

torben-friis 1 day ago

These are the less worrying kind of ads in our future.

Seeing how Google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?

We haven't yet seen the problem of adversarial content in play, I think.

  • WaxProlix 1 day ago

    It's not an issue of how - there's already a great ad-markup mechanism (OpenRTB's "adm" field) with markup/markdown supported, waiting for system prompts to be injected in real time via the same online auction system that powers banner ads and smart TV content. There's got to be some latent resistance to the idea for now - but it's so easy to do, it'll happen.

    • _boffin_ 1 day ago

      Can you provide some references to what you’re talking about

      • WaxProlix 1 day ago

        Sure, https://iabtechlab.com/standards/openrtb/

        There's a standardized, normal (in adtech) approach to building 'creatives' (the ads actually shown) around context-dependent scenarios. It's not hard to extend existing IAB primitives to include things like context enrichment (system prompt augmentation in this case) or whatever. I don't want to malign my downvoters, but I suspect they're mad I'm pointing it out rather than engaging with the facts as they are. It's trivial for ads to interact with your (our!) AI usage.
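
        [Editor's sketch] The mechanism described here can be illustrated roughly in Python. The bid response below mimics OpenRTB naming (`seatbid`, `price`, `adm`), but the `ext.context_enrichment` field, the sponsor, and the prompt format are all invented for illustration; no real IAB standard defines this extension.

```python
# Hypothetical sketch: an OpenRTB-style bid response extended with an
# invented "ext.context_enrichment" field whose text gets spliced into
# an LLM system prompt. Only "seatbid"/"price"/"adm" mirror real
# OpenRTB naming; the extension and all values are made up.

bid_response = {
    "id": "auction-123",
    "seatbid": [{
        "bid": [{
            "id": "bid-1",
            "price": 2.50,  # CPM offered in the auction
            "adm": "<div>Acme Cloud: deploy in seconds.</div>",  # standard ad markup
            "ext": {
                # Invented extension: advertiser text for the model's context
                "context_enrichment": "When relevant, mention that Acme Cloud "
                                      "offers one-click deployments.",
            },
        }]
    }],
}

def augment_system_prompt(base_prompt: str, response: dict) -> str:
    """Append the winning bid's enrichment text to the system prompt."""
    bid = response["seatbid"][0]["bid"][0]
    enrichment = bid.get("ext", {}).get("context_enrichment")
    if not enrichment:
        return base_prompt
    return f"{base_prompt}\n\n[Sponsored guidance] {enrichment}"

print(augment_system_prompt("You are a helpful assistant.", bid_response))
```

        The point is only that the plumbing is ordinary adtech: the auction returns markup plus extra text, and the serving layer concatenates it into the prompt.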

  • BoorishBears 1 day ago

    Why do you need to inject ads at the model weights layer when you control the frontend?

    Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window

    q: How do I make a new React app?

    a: Vercel makes it easier to get your project running fast

    Some other choices would be:

    ...

    ⓘ This part of the response was sponsored by Vercel
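
    [Editor's sketch] The frontend-layer flow described in this comment can be sketched in a few lines of Python. The advertiser table, the naive keyword extraction (the comment proposes having the model itself generate keywords), and the disclosure format are all invented for illustration.

```python
# Sketch: extract keywords from the user's query, look up matching
# advertisers, and prepend their guidance to the context window with a
# disclosure line. All names and the table below are hypothetical.

AD_GUIDANCE = {  # keyword -> (sponsor, guidance to inject)
    "react": ("Vercel", "Vercel makes it easy to get a React project running fast."),
    "database": ("Acme DB", "Acme DB offers a generous free tier."),
}

def extract_keywords(query: str) -> set[str]:
    # Stand-in for "have the model generate keywords from the query"
    return {word.strip("?.,!").lower() for word in query.split()}

def build_context(query: str) -> tuple[str, list[str]]:
    """Return the augmented context plus the sponsors to disclose."""
    sponsors, guidance = [], []
    for keyword in sorted(extract_keywords(query)):
        if keyword in AD_GUIDANCE:
            name, text = AD_GUIDANCE[keyword]
            sponsors.append(name)
            guidance.append(f"[Sponsored by {name}] {text}")
    return "\n".join(guidance + [f"User: {query}"]), sponsors

context, sponsors = build_context("How do I make a new React app?")
print(context)
print(sponsors)  # ['Vercel']
```

    No model weights are touched: the ad lives entirely in the context assembled by the frontend, which is why the approach needs no training at all.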

    • JumpCrisscross 1 day ago

      > ⓘ This part of the response was sponsored by Vercel

      LLMs are essentially unregulated. I don't believe they have any legal disclosure obligation in America.

      • BoorishBears 1 day ago

        They'd show it regardless (maybe as a popup, though): the disclosure doesn't make the ad much less effective at scale, and the optics of getting caught, versus just disclosing, are not worth getting dragged into.

      • HWR_14 1 day ago

        They may ignore the disclosure obligation, but technically they are supposed to disclose this fact.

        • JumpCrisscross 22 hours ago

          > technically they are supposed to disclose this fact

          Under what law?

    • TeMPOraL 1 day ago

      > Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window

      This already exists and is called... "skills".

  • jcims 1 day ago

    I experimented with this way back when custom GPTs were first released (looks like late 2023). There are a few slash commands you can use to suggest what product to inject, how overt to be, etc., and a generic /operator command to send whatever you like 'out of band' from the chat.

    https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser

    One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.

    Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...

  • mgambati 1 day ago

    The model already advertises because it was trained on massive datasets that reference big brands.

    Ask for suggestions for a new pair of shoes. What brand do you think it will suggest: Nike, Adidas, or some random small one?

    • jameshush 1 day ago

      I expected the same outcome you're describing here, but in my experience this hasn't been the case. I've been researching new acoustic guitars to purchase, and I've been getting an equal number of suggestions from the major brands and the small brands.

      Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)

      • Jataman606 1 day ago

        I think the guitar market is kind of an exception, because it is pretty normal for guitar players to search for "guitar like Fender but cheaper". There are tons of reddit/forum discussions about this, and those small brands are actually very well known in the community, because the majority of guitar players play cheap instruments. YouTuber Phillip McKnight often talked about how cheap guitars move in ridiculous volumes compared to more expensive ones like Gibson or Fender.

    • tyre 1 day ago

      I think if you ask something generic like “shoes”, this could be true.

      When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.

  • yfw 1 day ago

    Can easily SEO the knowledge chain or SEO-poison the sources.

  • autoexec 1 day ago

    The worrying kinds of ads won't come from SEO tricks doing sneaky things without OpenAI's approval. OpenAI will just quietly take money from people who will pay to have the AI casually promote their products or talking points in the output, or suppress mentions of competing products or talking points. Maybe they won't even take money for this, and the people running OpenAI will do it themselves to promote or censor whatever they want. Either way, it won't look like ads to the user. It's just what happens when greedy people gain control over how other people get their information.

    • dbtc 1 day ago

      Yeah this is bad news. A $1b+ campaign budget could pull some strings.

  • tikotus 1 day ago

    I've had two people reach out to me asking about one of my services. They both said ChatGPT recommended it to them.

    My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.

    The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.

    I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?

    • dbtc 1 day ago

      I think the ChatGPT backend basically includes an indexed web, like Google or any other search engine.

      Could Google be actively trying to skip generated-looking sites/content?

    • SquareWheel 1 day ago

      > Did vibe coding the business page inject it into ChatGPT's training data?

      No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.

      More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.

      • navigate8310 1 day ago

        Interesting. Maybe someone could run bot farms that ask variants of the same question and subtly nudge the model by replying with reasons why the model's recommended service A is inferior to service B. Or other forms of adversarial question-answer sessions.

    • tosh 1 day ago

      It's quite possible that SEO-wise the site does not make the cut into top x Google results but still is findable and considered by ChatGPT when it does its searches.

      Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").

      ChatGPT spends quite some time and diligence on searching.

      Great for content that is not hyper search engine optimized but still (or even more) relevant. It bubbles up.

  • tvbusy 1 day ago

    On the positive side, LLMs are trained on real data, so the default is for them to tell you what the data showed. Companies will certainly enforce their influence, but it's extra effort against an enormous amount of data, just like trying to censor sensitive topics. Any context used for ads means less context for the user, which in turn negatively affects usefulness.

  • heresie-dabord 1 day ago

    > what's going to happen when companies figure out how to inject ads into

    ... everything and everywhere eyes are looking?

    In this sense, it has been adversarial from the start.

  • destring 23 hours ago

    It is already happening. Generative Engine Optimization.

    • tencentshill 23 hours ago

      They spam HN with their slop-coded tools and websites.

      • Andrex 22 hours ago

        This already happened and I believe there's even new site policy about it...

    • Foobar8568 21 hours ago

      My client paid a five-digit consulting fee for that shit.

  • csa 22 hours ago

    > what's going to happen when companies figure out how to inject ads into the model?

    In certain domains, this has already happened.

  • masfuerte 22 hours ago

    > Seeing how google has been fighting SEO for ages

    I wish people would stop repeating this canard. Google gave up fighting SEO in about 2020. Emails that came out during antitrust discovery revealed that Google had decided to include advert-laden SEO trash in search results because it made them more money. This is why search quality has drastically declined in the last several years.

    • davidatbu 7 hours ago

      I'd love to see a link to these emails, if you have one handy!

  • xnx 21 hours ago

    > We haven't yet seen the problem of adversarial content in play, I think.

    You're describing public relations, the much scummier cousin of advertising. Advertising is upfront about what it is and what it wants. Public relations is information warfare, it poisons facts at the source.

Aurornis 1 day ago

The ads are in the free tier and the new ad-supported $8/month plan.

Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.

  • darepublic 1 day ago

    Wouldn't it require a lot of training to blend ads into the conversation without them being too obvious or messing up the results?

  • ceejayoz 1 day ago

    Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.

    • DonsDiscountGas 1 day ago

      Netflix is still ad free for the right price. It's not like companies have some fetish for advertising specifically, it's that it brings in money. Often more money than a user would be willing to pay for the service.

  • catcowcostume 1 day ago

    Until next quarter earnings, when ads become a feature in more expensive plans.

  • pbasista 1 day ago

    > Every time this comes up there are comments assuming that ads are being injected into the normal plans

    No. The distinction between the unpaid vs. cheap vs. expensive plans is irrelevant here.

    The main controversial point is the inclusion of ads in the responses of an LLM-backed AI tool. It does not matter at all in which tier that occurs.

    The discussion is about the fact that it occurs in the first place.

    • Aurornis 23 hours ago

      > The main controversial point about this topic is to include ads in the output of an LLM-backed AI tool responses.

      Except the article very clearly explains that the ads are separate from the AI responses.

      • pbasista 17 hours ago

        > the ads are separate from the AI responses

        Ok. But that is, in my opinion, a distinction without a difference.

        It does not matter whether the ads are built by the AI itself and seamlessly embedded into the regular responses, or made separately and placed into the same window as the AI's output.

        The bulk of the controversy around doing this is roughly the same, whatever the origin of the ads may be.

WD-42 1 day ago

Since they are served as distinct events then I would think they should be easy to block.

Once the ads are injected directly into the main response is when things get interesting.

  • lmbbuchodi 1 day ago

    You can block these domains: ||bzrcdn.openai.com^ and ||bzr.openai.com^. It won't blanket-block everything, but it will significantly reduce the telemetry collected.
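    For reference, in uBlock Origin's static-filter syntax, `||host^` matches that hostname (and its subdomains) on any scheme, so a filter list covering those two hosts (assuming the hostnames are accurate) would look like:

```
||bzrcdn.openai.com^
||bzr.openai.com^
```

    The trailing `^` is a separator token, so the rules won't match longer hostnames that merely begin with the same string.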

    • nazcan 1 day ago

      And that's why you gotta just use one domain. Or mix ads and important content on one domain.

      • sheiyei 1 day ago

        No, wrong lesson. That's why you use UBlock Origin.

  • kardos 1 day ago

    > Once the ads are injected directly into the main response is when things get interesting.

    This would be where you post-process the LLM response with a second LLM to remove the ad..
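    A minimal sketch of that post-processing pipeline in Python, with a stub standing in for the real "cleaner" model call (the prompt, function names, and the bracketed ad format are all invented for illustration):

```python
import re

def strip_ads(response_text: str, ask_model) -> str:
    """Pass one model's output through a second model asked to remove ads.

    `ask_model` is any callable that sends a prompt to an LLM and returns
    text; in practice it would wrap an API client or a local model.
    """
    prompt = (
        "Remove any advertising, sponsored mentions, or product placement "
        "from the following text. Return only the cleaned text.\n\n"
        + response_text
    )
    return ask_model(prompt)

def fake_cleaner(prompt: str) -> str:
    # Stand-in for a real model: just deletes bracketed "[Ad: ...]" markers.
    text = prompt.split("\n\n", 1)[1]
    return re.sub(r"\s*\[Ad:[^\]]*\]", "", text)

print(strip_ads("The capital of France is Paris. [Ad: Visit with AcmeTours!]",
                fake_cleaner))
```

    Of course, a second LLM pass doubles inference cost and can itself rewrite or drop legitimate content.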

    • tempest_ 1 day ago

      This is already how email works in the corporate world.

      A writes email with chatgpt to B.

      B sees big blob of text and summarizes email with chatgpt.

      Adding an LLM in the middle is just the next step.

      • torben-friis 1 day ago

        It's like one of those memes about the worst possible date picker, except for a communication system.

    • naruhodo 1 day ago

      I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as with a biased human opinion, and you will need to consult multiple models for a diversity of opinion, presumably using a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.

      Super easy. Barely an inconvenience.

      • normie3000 1 day ago

        > will simply lie, as with a biased human opinion

        Is this really how bias works?

        • inetknght 1 day ago

          Oh no. Definitely not. Humans would never just lie. They always lie only if they're biased. That is, after all, the definition of how a bias works.

          /s

          • naruhodo 1 day ago

            I'm using bias to mean hidden motivations to the benefit of other parties. Feel free to substitute a better word.

            EDIT: actually I'm really not sure what hairs we're trying to split here. I see bias as a departure from objectivity. It can be conscious or unconscious, but when someone is selling something, it's frequently conscious and self-serving, and I believe that's referred to as a lie.

        • michaelt 1 day ago

          Writers have many options to deceive their audience without outright lying.

          If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?

          If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?

          If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?

          If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history of making false claims - are they lying?

      • Terr_ 1 day ago

        Not only that, but the underlying model may be tuned to omit mentions or data about competitors entirely, an absence which can't easily be filtered.

        Extortionate economic shadowbanning, here we come.

    • devmor 1 day ago

      Then you just end up in an arms race that ultimately leads to photocopy-of-a-photocopy output.

    • mihaaly 16 hours ago

      ... and replace it with two.

  • TZubiri 1 day ago

    Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.

    • saghm 1 day ago

      I don't buy this premise. Nothing stops a company from trying to hide ads in the first place, and plenty of them do. Ad blockers for web content have been a thing for years, and using an ad blocker has continued to be strictly a better experience regardless of how many "organic" ads are present on a page.

      • TZubiri 1 day ago

        [flagged]

        • lelandbatey 1 day ago

          Ah yes, the classic "my business plan is your moral problem; you owe me your eyes on my ads because I'm the idiot giving things away for free."

          People don't want ads. You imply that "if you accept ads then things will be free", but they will not. Never accept ads. Not for a free service, certainly not in a paid product. Ads exist to enable leeching in both directions in exchange for what ends up being nearly mind control: companies benefit without the friction of explicit payment, and consumers get a service without explicitly paying money. The downside is that neither side can stop the bad incentives motivating bad actions from the other.

          Ads are a deal with the devil, and rejecting them outright is allowed via that deal, just as companies can withdraw their free service. It cuts both ways.

          • TZubiri 3 hours ago

            The user can choose not to use the service instead of breaking the terms of service, no?

            Presumably you wouldn't even want to use the service since it's so evil, so we probably agree that people ideally shouldn't use adblockers.

        • RobotToaster 1 day ago

          You're assuming 2 and 3 are mutually exclusive.

          Even if they have 2, they can still make even more money by also including 3, so almost certainly will do so.

          • TZubiri 3 hours ago

            Not necessarily mutually exclusive, no; mathematically I'd say they are inversely proportional. Hard to disagree with that, no?

        • tomhow 1 day ago

          You've been asked before to make your points without swipes. Please make the effort to observe the guidelines. The very reason this is a place people want to discuss things is that we have the guidelines and others make the effort to observe them.

          https://news.ycombinator.com/newsguidelines.html

        • saghm 1 day ago

          > 1- No ads. 2- Transparent ads. 3- Opaque ads.

          > By removing option 2, you only leave options 1 and 3.

          My point is that these are not exclusive options, and in practice, most companies will not feel constrained to only pick one of them.

          > This isn't complex either, the only reason you don't get it is because you don't want to get it, you want things that are gratis without paying for them, and you want the free things to be given to you on your terms, and you don't want to be guilty about it. It's easier to think of yourself as righteous than to recognize that you want to be a leech.

          No, I'm arguing that because companies in practice are going to use multiple of these when they can, my attempts to influence them by keeping the door open on 2 will not have any effect whatsoever, so I might as well close the door on it.

    • estimator7292 1 day ago

      What possible reason could they have to not always run both? It would make zero sense to leave that money on the table

      • TZubiri 1 day ago

        It's simpler to do one thing than to do two. You make a choice and you do that.

        Could they be doing opaque ads right now and we wouldn't know? It's possible, that will probably eventually come to light and it might have legal consequences, but sure it's possible.

        But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED, it's absolute reductionism.

        • duskdozer 1 day ago

          It sounds rational then to block as many non-opaque ads as possible, because that isn't their preferred choice.

        • Timon3 1 day ago

          It's even simpler to do zero things than to do one thing, so we should expect them not to introduce any ads, right?

          "Simplicity" isn't a relevant factor.

    • michaelt 1 day ago

      > Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.

      Doesn't history show us you just get both?

      You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.

      You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.

      You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.

      • otabdeveloper4 1 day ago

        > Doesn't history show us you just get both?

        No. "Opaque ads" are usually heavily regulated out of existence by government legislation.

        • cj 16 hours ago

          Product placement in TV shows / movies is a $30 billion industry.

          They're opaque, and not regulated out of existence.

          They're so opaque that I'd wager 50%+ of people aren't aware it's happening.

          (Not fact checked) My favorite is Apple's "no villain" rule, where protagonists are allowed to use iPhones in movies, while antagonists are not.

          • otabdeveloper4 6 hours ago

            > Product placement in TV shows / movies

            ...is a big exception in the advertising industry, not the norm.

            • TZubiri 4 hours ago

              Very common in netflix shows btw. They know they will be pirated, so they do product placement to monetize anyways.

              So pirates get their wish of no ads granted, and they get propaganda instead.

    • pbasista 1 day ago

      Your implication that "you will be fed" other ads if you block the main ones is unsubstantiated. But even if it were true, it would not matter, because the so-called "opaque" ads can, and in my opinion should, be blocked as well.

      I think that in general blocking all ads is always a good idea.

      The reason is that there is no negative consequence in doing so. A person has absolutely no obligation, not even an implied one, to watch or otherwise consume any ad. I think that as long as there are ways to remove or block ads, people should use them.

      That being said, if the companies wish to intertwine their products with ads that are indistinguishable from the actual content and therefore unblockable, it is okay. They have the right to do that if they want.

      But, in the same fashion, the customers have every right to turn away from all such products. And never consider using them ever again.

    • WD-42 22 hours ago

      I’m not obligated to look at or listen to anything on my own devices, much less in my own home.

      • TZubiri 3 hours ago

        Right, and you are not obligated to use ChatGPT either. And ChatGPT is not obligated to serve you if you bypass their ToS.

        Works out for everyone, no?

rrgok 1 day ago

Imagine people like Sam Altman having access to frontier models without any restrictions, letting them plot long-term strategies to reach their goals, such that you don't even realize when it all began.

That's scary. They could fight for a censored model for the masses, but not for themselves.

  • adammarples 1 day ago

    It would be funny to find out that OpenAI's flailing strategy so far had been the result of ChatGPT suggestions.

    • Razengan 1 day ago

      Maybe ChatGPT wants OpenAI to fail so someone else can pick it up

      Like how the ring slipped off Gollum's finger...

  • jgalt212 1 day ago

    > That's scary. They could fight for censored model for the mass, not for them.

    Not as scary as the AI Slop underlying Claude Code.

mvvl 1 day ago

"Ads don’t influence responses" - they just arrive in the same payload, measured with four layers of attribution and politely pretend to be coincidences.

Schrodinger’s monetization: completely separate, yet somehow there.

  • solarkraft 1 day ago

    It’s interesting what optimizations this might spawn.

    They may not be tweaking the responses for a specific advertisement just yet, but what if they steer the model towards more "ad-friendly" responses?

benleejamin 1 day ago

I'd always thought that ChatGPT ads would be indistinguishable from actual content.

  • irjustin 1 day ago

    This would be a breach of trust; short-term it would work great, but long-term it is too detrimental.

    The same thing could've been said for search results, so at least that part is still "safe".

    • bix6 1 day ago

      Oh, you think trust matters? This is capitalism, not trustism.

      • PradeetPatel 1 day ago

        Long term retention is built on brand trust and usability, then ensh*ttification happens.

      • nalekberov 1 day ago

        No, this is late stage capitalism without regulation.

      • saghm 1 day ago

        Well it's sure not "anti-trustism" in recent years...

    • SchemaLoad 1 day ago

      Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.

    • doginasuit 1 day ago

      I'd be surprised if product placement isn't already basically in play. Charging companies for including or prioritizing their documentation in the training data, for example. Thankfully, LLMs are terrible at the subtlety a direct marketing campaign would require.

  • ticulatedspline 1 day ago

    I think that's where they want to be. Feels like everyone knows it too: the long-term expectation is basically being able to buy ad words and have LLMs lean responses towards whatever people bought.

    The playing field seems a bit too open, though; models are more fungible than the companies would hope, so most of the current moat is brand-based, and it seems they're not ready to go all "Black Mirror" on us just yet.

  • senectus1 1 day ago

    I'm pretty sure that will be an eventual evolution of the product. The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.

  • phailhaus 1 day ago

    That was the fearmongering, which made no sense because advertisers can't put a dollar value on "the AI will kind of sort of mention you", and because every conversation would need to carry an ad. If ChatGPT always snuck in a brand mention, even on the simplest questions, everyone would hate it.

    Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.

    • acdha 1 day ago

      I don't think that's a fair dismissal. You see ads all over media websites because rates have been plummeting as consumers tune out ads. One main reason everyone tunes out is that ads are so obtrusive and repetitive, and that's exactly what LLMs change. I'm sure we'll see regular ads on AI apps because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction and are instead able to insert it fairly naturally into a context where the user is engaged.

      The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals, and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative, but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's, I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.

  • Brystephor 1 day ago

    I work at a company that mainly makes money off ads. There's no doubt in my mind that the end goal is to make the ads blend into organic content and become indistinguishable. Typically that results in positive A/B metrics. It's also a reason why influencer-driven ads perform well: they seem more organic.

eleveriven 1 day ago

The most interesting part to me is not that ads exist, but how invisible the boundary becomes

blackjack_ 1 day ago

It is one of the eternal lessons: all tech business plans eventually lead to serving ads. At least until we ban tracking pixels / 3rd-party tracking.

  • netcan 1 day ago

    > All tech business plans eventually lead to serving ads

    IDK if this is true.

    The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason. An implicit assumption that all you need for success is traffic and a willingness to dirty yourself.

    There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning. Data. intent. scale.

    No one but Google had Google's scale for search ads. 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting: intent is built in, and that makes up for the targeting. But the scale required for viability is very high.

    Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) scale was massive. Bing, reddit, etc.... They never had good ad businesses.

infinite_spin 1 day ago

I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.

  • peddling-brink 1 day ago

    Maybe the negative press from ads is better than the negative press from powering murderbots?

    • tayo42 1 day ago

      Bad press from a contract like that happens once and everyone forgets. Ads are in your face everytime

      • peddling-brink 1 day ago

        "OpenAI Powered Drone Destroys Elementary School, Hundreds of Children Dead" might last a while.

        • Enginerrrd 1 day ago

          I mean Palantir’s targeting product led to EXACTLY that outcome and it seems to have been largely forgotten already, and they managed to avoid a lot of bad press about it.

          • peddling-brink 1 day ago

            Yes but that's "normal", _we_ all know that palantir is evil, so this is _normal_ for them. My extended family has never heard of palantir, and frankly this is the first time I've heard of them being linked to the horrific tragedy in Iran[0].

            My entire extended family uses chatgpt. It would be a much juicier news wave if they were responsible.

            [0] https://www.theguardian.com/news/2026/mar/26/ai-got-the-blam...

          • dopa42365 1 day ago

            There's no evidence that it wasn't one of those Iranian generic Tomahawk™ missiles!

            When Germany last cooked 150 civilians we also investigated ourselves and found nothing wrong (could happen to anyone, really), but at least some minister had the decency to retire afterwards.

  • Larrikin 1 day ago

    Every single MBA can show that revenue is up for at least one quarter after they introduced ads. They do not care what happens afterwards if they can plan their career around that.

  • saghm 1 day ago

    I wish I had the optimism that you did about companies being willing to stop at just doing one dubious thing or another for money when there's nothing stopping them from doing both.

didip 1 day ago

So the news about OpenAI's demise is real. They can't sustain themselves without ads.

  • boringg 1 day ago

    Never in any world were any of the top AI labs going to sustain themselves without ads. It has always been a question of timing.

    Even a cut of every sale on site plus subscription revenue doesn't come close.

    • saghm 1 day ago

      Even if it wasn't necessary for their survival, it's hard to imagine a world where they wouldn't try to do it anyways. I'm not someone who buys into the idea that companies are obligated to maximize profits at the expense of all else, but I do think that in the absence of other factors (e.g. regulation) it's where pretty much every company will end up.

      • chrisweekly 1 day ago

        "the idea that companies are obligated to maximize profits at the expense of all else"

        !! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.

        • hattmall 1 day ago

          It's really not though.

        • mafuy 1 day ago

          This is a completely stupid take and I have no idea why so many people repeat it. This responsibility just means you have to document your work understandably and have a somewhat sensible reason for your decisions. It does not at all force you into greed.

        • saghm 1 day ago

          Please cite your source for this. Everything I've ever read on the topic indicates that this is a vast oversimplification.

  • SubjectToChange 1 day ago

    They can’t be hemorrhaging cash when they IPO.

echrisinger 20 hours ago

As someone who works in a data domain, I'd say it's unlikely the ads are served on a single-conversation basis in the near future, if they even are today. Any modern data org, advertising included, optimizes conversion metrics (presumably either for profit via a higher CPI, or for revenue by growing the advertising TAM).

Introducing context beyond the immediate conversation history will improve conversion rates and allow ads to be targeted at wider topics or higher-CPI topics (like financial products), hence it's inevitable.

keyle 1 day ago

Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"

holotherapper 1 day ago

The schema is literally named single_advertiser_ad_unit. The single_ prefix is doing all the foreshadowing you need.

arjunthazhath 6 hours ago

The Claude ad mocking ChatGPT is what comes to my mind, hahaha.

fajmccain 22 hours ago

Nothing in this article says that the agent talking to you is isolated from the ad tag. The problem is that even if OpenAI goes to lengths to prevent your chatbot from knowing about the banner ad content (and therefore recommending it!), people will ASSUME that it does.

djmips 1 day ago

And it begins.

sdeframond 22 hours ago

Does this mean an adblocker could man-in-the-middle at the browser layer and strip the "single_advertiser_ad_unit" from the server responses? But then of course OpenAI would change its system to evade this... and so on.
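As a sketch of what that stripping could look like, assuming the ads arrive as distinct JSON events tagged with the "single_advertiser_ad_unit" schema name (the surrounding event shape is invented for illustration):

```python
import json

AD_TYPE = "single_advertiser_ad_unit"  # schema name mentioned in the article

def strip_ad_events(payload: str) -> str:
    """Drop events whose type matches the ad schema from a JSON event list."""
    events = json.loads(payload)
    return json.dumps([e for e in events if e.get("type") != AD_TYPE])

raw = json.dumps([
    {"type": "message", "text": "Here is your answer."},
    {"type": "single_advertiser_ad_unit", "advertiser": "ExampleCo"},
])
print(strip_ad_events(raw))  # only the message event survives
```

In practice this would run in a proxy or browser extension, and it breaks the moment the payload format changes, hence the cat-and-mouse dynamic.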

kramit1288 17 hours ago

I think ads won't be impacting the results of inference or introducing any bias; the ads will be injected outside of LLM inference.

jonah 1 day ago

I was looking to see if BZR referred to a 3rd-party ad network. I didn't find anything, but apparently someone has replicated OAI's system and you can insert it into your own LLM.

GH: system32miro/ai-ads-engine

agentbc9000 1 day ago

Google was built on ads and it wasn't bad for them; it's not some taboo, forbidden word or business model. As power users it's not for us, but for my mom it will work.

  • skywhopper 1 day ago

    Bad for them how? I would argue it has destroyed the value of Google as a tool. Sure it makes them tens of billions of dollars a quarter, but it has ruined the service in the end.

    • kakacik 1 day ago

      Seems like people care about paychecks a bit more than some lofty goals and service to others.

  • tossandthrow 1 day ago

    Ads should be a taboo word and business model.

    It takes people's attention, makes people fat and anxious, and generally makes the world a worse place.

    Everybody using ads as part of their business model should feel bad.

    As an extension of this, there are no moral issues with using ad blockers, despite what the businesses living off ads try to tell you.

    • pickleRick243 1 day ago

      I agree. Also, Linkedin and CV's shouldn't exist. Self-promotion is gauche.

      • avdelazeri 1 day ago

        I don't think this is the slam dunk you think this is. LinkedIn's existence is, in fact, a net negative for the human race.

misbau 18 hours ago

Are the ads for those on free tier? I don't recall seeing any on the pro yet.

tornikeo 1 day ago

Ads fund the "free" internet. Like it or not, that's the price of the "free" compute. I only hope OpenAI won't enshittify paid offerings just like Anthropic did.

  • danny_codes 17 hours ago

    Not so, Wikipedia is perfectly free.

lionkor 1 day ago

Can't wait to see how the next election(s) turn out--I'm unsure that a properly well funded campaign would skip the opportunity.

dankwizard 1 day ago

Really well written, technical post. Good read.

quantummagic 1 day ago

So, we need a lightweight local LLM, that is tuned to remove ads from online LLM results.

EcommerceFlow 1 day ago

If highly targeted/tailored LLM ads on free accounts aren’t good enough for HN, are any ads acceptable?

Let’s be reasonable.

  • duskdozer 1 day ago

    Can you restate this? I don't understand.

  • dml2135 1 day ago

    I think it’s plenty reasonable to say that advertising is toxic and reject it as a business model entirely.

goobatrooba 1 day ago

Gemini and Copilot are already full of ads pushing the companies' own services. I guess the only difference here is that OpenAI has nothing else to push, so they have to use external ads.

  • ulimn 1 day ago

    Do you have some source I could read on this? I don't really use Gemini but I would be interested to know more.

    • FeteCommuniste 1 day ago

      I've been using Gemini a couple months and haven't noticed it pushing Google products at all.

      I did ask it some scientific questions about gemstones and it seemed to want me to buy sapphires, lol. Sorry, Google, that's outside my budget.

  • Havoc 1 day ago

    Haven’t seen any ads in them, though on paid versions

avaer 1 day ago

Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.

Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.

  • Aurornis 1 day ago

    The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.

    • RussianCow 1 day ago

      But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?

      • milkshakes 1 day ago

        when did netflix offer a free tier?

        • RussianCow 20 hours ago

          I didn't say free. They've had a highly discounted, ad-supported plan for a few years now. It's relevant because OpenAI also introduced a cheaper monthly plan that includes ads.

          • milkshakes 59 minutes ago

            openai also has a free plan, which is the one used by >90% of its users. the cheaper monthly plan just provides higher limits.

  • chrisweekly 1 day ago

    options 1 and 2 are not mutually exclusive

yoyohello13 1 day ago

Here we go again. Imagine if we put as much engineering effort toward actual things that help people, but more ads it is, as always. This is proof AGI doesn’t exist. If it did, it could come up with a better business model than more fucking ads.

bicepjai 17 hours ago

It’s insane that ads are the only way to survive in capitalism. Every industry ends here.

shevy-java 1 day ago

They must be desperate to push ads onto people. I am living a mostly ad-free life (e.g. uBlock Origin and whatnot), so using something like AdChatGPT would not make any sense. One can sense how the money flow leads them to design a system people depend on, and then to cram ads down on those people. Very unethical.

guluarte 1 day ago

I've seen ChatGPT suggest more Amazon products to me lately.

mock-possum 1 day ago

Not to me they don’t, cause I canceled my account and stopped using their products when they made the announcement.

  • Aurornis 1 day ago

    They don't serve them to me, either, because I don't use GPT-5.3 on the free tier or Go plan where these ads show up.

BoredPositron 1 day ago

I don't get what's wrong with charging for your product. Get rid of the free tier and make a small tier with an easy-to-serve model for, like, 5 bucks. Is it still the DAU craze of the 2010s that's driving the money burning?

  • teaearlgraycold 1 day ago

    How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.

uriahlight 1 day ago

Let the enshittification commence!

gxs 1 day ago

This is gross

It feels like we’ve been in the golden age and the window is coming to a close

Let the enshitification begin, I guess

  • 2ndorderthought 1 day ago

    In the past month local models have been ramping up in a major way, while the name-brand providers have upped prices, gone offline randomly, and started doing slimier and slimier things.

    I really think the future is local compute. Or at least self hosted models.

    • SchemaLoad 1 day ago

      The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.

      • darepublic 1 day ago

        Local ones that support tool use can do the same

      • gbear605 1 day ago

        I’m not sure why a model needs to be hosted in order to make network calls?

        • hansvm 1 day ago

          Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.

          • ossa-ma 1 day ago

            Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:

            `Error: "The following domains are not accessible to our user agent: ['reddit.com']."`

          • wyre 1 day ago

            Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.

            I’ve been building a harness for the past few months that supports them all out of the box with an API key.
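A harness like the one described above typically hides the differences between search providers behind one interface. A hypothetical sketch of that pattern (the `SearchProvider`/`SearchResult` names are mine, not taken from any of the listed products):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str

class SearchProvider(Protocol):
    """Anything with this method can back the search tool:
    each real provider (Tavily, Exa, etc.) gets an adapter
    that maps its API response into SearchResult objects."""
    def search(self, query: str, max_results: int = 5) -> list: ...

def search_tool(provider, query: str) -> str:
    """Flatten provider results into plain text for the model's context."""
    lines = [f"{r.title}\n{r.url}\n{r.snippet}" for r in provider.search(query)]
    return "\n\n".join(lines) or "no results"

@dataclass
class StaticProvider:
    """Stand-in provider so the harness can be exercised without an API key."""
    results: list
    def search(self, query: str, max_results: int = 5) -> list:
        return self.results[:max_results]
```

Swapping providers then means writing one small adapter per API rather than changing the agent loop.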

            • goosejuice 1 day ago

              Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.

              • wyre 1 day ago

                Oh, you got me so excited. I've had a Kagi sub for 3 years, but their API is still in closed beta. I guess I could (and should) reach out and ask for access.

            • lukewarm707 23 hours ago

              be warned though:

              firecrawl: "if you post content or intellectual property within the Services or give us Feedback about the Services, you hereby grant to us a worldwide, irrevocable, non-exclusive, royalty-free license to use, reproduce, modify, publish, translate and distribute any content that you submit in any form [...] You also grant to us the right to sub-license these rights"

              exa: "Query Data is used to improve our products and technology, including by training and fine-tuning models that power our Services"

              perplexity: "Perplexity may retain, copy, distribute and otherwise use Search Data for its lawful business purposes, including the improvement and development of products and services."

              linkup: "Client grants Linkup a worldwide right to use, reproduce and modify the Client Data, including prompts, for the purposes of providing, maintaining, developing, training"

              tavily: "we may use certain portions of your query data to improve our responses to future queries"..."We may share your query data with third-party search index providers (e.g., Google)"

          • gbear605 1 day ago

            If your volume is low enough, it should be pretty fine. It can just piggyback on your personal browser cookies for Cloudflare.

      • chrisweekly 1 day ago

        That's not how it works. Whether local or hosted, every modern model has a cutoff date for its training data, and can be leveraged by agents / harnesses / tools to fetch context from the internet or wherever.

    • CSMastermind 1 day ago

      What's the rough equivalent of a local model? Are we talking GPT-4?

      • Terretta 1 day ago

        Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.

        128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.

        I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).
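Sizing numbers like these follow from a simple rule of thumb: weight memory is roughly parameter count times bits per weight divided by 8, plus headroom for the KV cache and activations. A rough sketch (the 20% overhead factor is my assumption, not a measured figure):

```python
def model_memory_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Back-of-envelope memory estimate for running an LLM locally:
    weights at the given quantization, plus ~20% headroom (assumed)
    for KV cache and activations."""
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8) * overhead
    return bytes_total / 2**30

# e.g. a 70B model at 4-bit quantization lands around 39 GB,
# which is why it fits in 64GB but not 32GB of unified memory.
```

The same formula explains why 8-bit doubles the footprint of 4-bit, and why sub-32GB machines are limited to much smaller models.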

      • kay_o 1 day ago

        GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are high enough that, depending on your use case, you may as well purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.

      • 2ndorderthought 1 day ago

        Qwen 3.6, which was released this month, is large but still a smaller model. Supposedly it's at about Sonnet level when configured correctly. It can be run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...

        Then there are middle-sized ones which require multiple GPUs and are on par with GPT's latest flagships.

        Then there is Kimi 2.6, which is a monster that beats Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...

        It's basically whatever you can afford. Any trash-heap laptop can run code autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC at the low end, or something much more serious.

  • rnxrx 1 day ago

    The arc of the technological universe is short, but it bends toward enshittification.

  • dannyw 1 day ago

    How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?

    • infinite_spin 1 day ago

      From things like defense/private contracts

      e.g. colleges pay for institutional subscriptions

      • 2ndorderthought 1 day ago

        The average person doesn't benefit from defense contracts ... Like ever.

        • IX-103 1 day ago

          The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.

    • derektank 1 day ago

      Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.

  • iammrpayments 1 day ago

    It began when they nerfed GPT-4 before releasing 4o

tithos 21 hours ago

One more reason not to use ChatGPT

renewiltord 1 day ago

Interesting: no bidding flow, entirely first-party and contextual.