jpkt 3 days ago

I have been using llms.txt for that

  • joao_sobhie 2 days ago

    I really don't know why he did this; his website has an llms.txt

    • selvmvde 2 days ago

      Fair point on llms.txt, but I think we're talking about two different layers here: llms.txt helps agents find content, while an API helps agents collaborate with a human

      • joao_sobhie 2 days ago

        Got you. You made me realize something. I've been developing a scraping API, and after reading your post I tried the same experiment you did. First I asked Claude about the company, its services, and prices. Then I used the API to test whether the response is the same, and it's not. Basically, using the API I got things that just asking Claude didn't show. The difference is that the API turns the website into markdown and doesn't see the llms.txt. Through the API, it looks like the AI understands the company as a whole. Thanks for sharing that experiment; you gave me a good idea for creating a brain for my scenario. I'm thinking of using Obsidian as the brain, and using my scraping API to collect the data from my company's website, LinkedIn, and Instagram. With all that knowledge, use paperclip to orchestrate marketing, projects, fixes. I will build that.
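The "scraping API turns the website into markdown" step described in the thread above can be sketched as follows. This is an illustrative stand-in, not the commenter's actual service: a real pipeline would use a proper HTML-to-markdown converter, and the example page is made up.

```python
# Minimal sketch of an HTML -> markdown extraction step, stdlib only.
# Keeps only headings, paragraphs, and list items; everything else is dropped.
from html.parser import HTMLParser


class MarkdownExtractor(HTMLParser):
    """Rough HTML -> markdown: headings become '#' lines, text becomes paragraphs."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        self._tag = tag

    def handle_endtag(self, tag):
        self._tag = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._tag in ("h1", "h2", "h3"):
            # One '#' per heading level, markdown-style.
            self.parts.append("#" * int(self._tag[1]) + " " + text)
        elif self._tag in ("p", "li"):
            self.parts.append(text)

    def to_markdown(self):
        return "\n\n".join(self.parts)


def html_to_markdown(html: str) -> str:
    parser = MarkdownExtractor()
    parser.feed(html)
    return parser.to_markdown()


if __name__ == "__main__":
    page = "<h1>Acme</h1><p>We sell widgets.</p>"
    print(html_to_markdown(page))
```

The resulting markdown files could then be dropped into an Obsidian vault as plain notes, since Obsidian stores notes as markdown on disk.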

dormento 3 days ago

> Why your website is invisible to the agents visiting it.

I only _wish_ I could make a website invisible to agents. Instead now we're hostages to Cloudflare /shrug

Edit: so this isn't as drive-by-ish: the tl;dr seems to be that if you want your site crawlable, don't go overboard on the JS. That was good wisdom back then, and it still holds in the current situation.
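To make the JS point concrete: a crawler that doesn't execute JavaScript only ever sees the initial HTML response, so a client-rendered SPA shell looks empty to it. A toy comparison (both snippets are made-up examples, and the text extraction is deliberately crude):

```python
import re


def visible_text(html: str) -> str:
    """Crude approximation of what a non-JS-executing agent can read."""
    no_scripts = re.sub(r"<script.*?</script>", "", html, flags=re.S)
    no_tags = re.sub(r"<[^>]+>", " ", no_scripts)
    return " ".join(no_tags.split())


# Client-rendered SPA: all content arrives later via JS the agent never runs.
spa_shell = (
    '<html><body><div id="root"></div>'
    '<script src="/bundle.js"></script></body></html>'
)

# Server-rendered page: the content is in the HTML itself.
server_rendered = (
    "<html><body><h1>Acme Widgets</h1>"
    "<p>Pricing: $10/mo.</p></body></html>"
)

print(repr(visible_text(spa_shell)))   # empty string: nothing for the agent
print(visible_text(server_rendered))   # the actual page content
```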

devansh0718 3 days ago

I think it's better to use llms.txt for this
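For anyone unfamiliar: llms.txt (as proposed at llmstxt.org) is a markdown file served at `/llms.txt` with a title, a short blockquote summary, and curated link lists for agents. The site and links below are a made-up example, not a real file:

```markdown
# Acme Widgets

> Acme sells configurable widgets for small teams. Key documents are linked below.

## Docs

- [Pricing](https://example.com/pricing.md): plans and per-seat pricing
- [API reference](https://example.com/api.md): REST endpoints for ordering

## Optional

- [Company history](https://example.com/about.md): background material
```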