Ask HN: How do we handle the rise of low quality "This is LLM" comments?
Every post that reaches the top of HN will have at least a few comments saying "This is LLM!"
It has become a proxy for "I don't like this article, so it must be an LLM"
To me, it feels like lazy karma farming, as these comments often do get a few upvotes.
And of course, accuse 100 posts of being LLM-written and you are guaranteed to be right at least once; then, like astrologers, you can claim success.
Is there anything we can do to discourage this type of lazy and low effort posting?
When you encounter these comments/sentiment, pretend that LLM = Low-effort Long Mumbling. In other words, poor writing.
Detection of "LLM" is a red herring. Quality is what matters. Always has been. Assess comment quality holistically, and you'll be fine.
If "quality" is all that matters and maximizing quality is the goal, and if LLMs can generate higher quality comments more consistently than humans, we should close all user accounts. Don't even have this be a forum anymore. Have LLMs crawl the web, post articles then generate threads discussing them from various simulated points of view. No direct human participation, no Eternal September. Then readers can have their own agents summarize the threads for them.
We can consider this the carcinization of online discourse - everything evolves towards the optimum of LLM summarization.
> if LLMs can generate higher quality comments more consistently than humans
Do you believe this?
No, because my definition of "quality" for comments implicitly includes human intent, which LLMs lack.
But I suspect a lot of people on HN only view these threads as data and that for them "quality" only exists within the semantics and structure of the text itself, and the human element doesn't matter to them.
My honest opinion is just to accept it, move on and continue writing, building, creating stuff that will resonate at least with a bunch of people. Unfortunately, what takes the form of karma farming here or on Reddit becomes hateful comments on YouTube and so on, depending on the platform.
Maybe people should stop posting shitty LLM-written articles that don't generate any good discussion beyond "I think this was written by an LLM" and we won't have this "problem".
People always find ways to farm engagement or trick the system. I think it's not worth the effort to build something around it to try and prevent it. It will fade again.
add a less severe "Flag as AI" button
Why can't we just use Flag?
That would be the little downwards-facing arrow to the left.
Ignore or downvote or flag [1] depending on your confidence in your judgement, your perception of its severity of impact on the HN community, your mood, etc.
Just like any other behavior you don’t like.
[1] logically upvoting is also an option.
The answer has been the same since the days of Moses:
Drown it out with high quality submissions and high quality comments.
This is LLM
openclaw. Pure AGI.
HN should add some kind of LLM detection. Preferably something that rates how unhinged a comment is.
Smoke me a kipper, I'll be back for breakfast.
The thing that rates how unhinged a comment is is the downvote button, or flag button in extreme cases.
LLM detection is basically witchcraft, though, for all but the most obvious cases.
> rates how unhinged a comment is
No can do, too many false positives considering the usual demographics.
Add a downvote option!
Is this a new problem?
There have always been low-effort comments and content, here and all over the internet. A decade or so ago people used to write comments like "this" a lot to farm upvotes off another popular comment.
In comparison to other places, this kind of thing is largely discouraged and unrewarded on HN, although I have noticed the quality of comments here has decreased over the years, and low-effort comments are definitely upvoted more often these days.
I guess we all just need to be more proactive in downvoting them when we see them.