sharadov 1 hour ago

Strange, but I smelled fraud as soon as I finished reading the NYT article.

This is the second time in my life that this has happened to me.

The first time was when I interviewed at a social media startup that had raised $100 million during the pandemic and promptly collapsed; all the numbers there were cooked.

datadrivenangel 3 hours ago

The NYT article says that they saw financial statements, and apparently the owner has already cleared $60-70m. The article also said they're expanding into selling ED treatments.

They also had at least 10 direct contractors, and OpenLoop is a whole platform in itself.

rossdavidh 2 hours ago

So, "BS-as-a-service" lets you make more money off BS, including deepfaked before-and-after photos and falsified doctor profiles at scale. Fraud is definitely one application where "hallucinations" are not much of a problem, that is a valid point.

m_ke 2 hours ago

So just a typical dropshipper

parsimo2010 2 hours ago

So he basically just set up a middleman operation connecting customers to physicians to sell a popular drug.

On the one hand, it's easy to blame them for not being more careful with customer data or not being more honest with their advertising or being more compliant with any/all laws. And it's easy to point the finger at AI because they used it to code the product.

On the other hand, a whole lot of non-AI companies have also had embarrassing data breaches, misleading ads, or other misconduct. Uber basically just coded an app, and the majority of their workforce are actually gig contractors. Theranos basically lied their ass off about their blood tests. I can't count how many companies have lost my personal information.

So I don't really think this is the AI's fault. This is a founder in "move fast and break things" mode, and it's their fault for not taking the rules seriously. The AI just means they aren't abusing some young coder eager for a good payday when the company exits. That young coder would never have pushed back on the founder or warned them about the FDA and potential class action lawsuits, so the AI can't really be blamed for not doing that either. The founder has to take responsibility for the crap their company did, whether the work was done by an AI or a human employee.

  • notatoad 2 hours ago

    i don't think the article is saying the problems are AI's fault, just that the success is not the result of AI.

    this is being promoted as an AI success story, but it's actually a fraud success story.

    • bradleyankrom 1 hour ago

Yeah, it sounds like they didn't even try to do things "the right way," whatever that is. You don't accidentally create hundreds of fake Facebook profiles, and you don't accidentally create deepfakes for your marketing materials. The most charitable read I can give is that they simply have faulty scruples. But it's hard to find even the seed of a good idea here that merely went off the rails.