Show HN: AI Roundtable – Let 200 models debate your question
opper.ai

Hey HN! After the Car Wash Test post got quite a big discussion going (400+ comments, https://news.ycombinator.com/item?id=47128138), I spent the past few weeks building a tool so anyone can run these kinds of questions and get structured results. No signup and free to use.
You type a question, define answer options, pick up to 50 models at a time from a pool of 200+, and they all answer independently under identical conditions. No system prompt, structured output, same setup for every model.
You can also run a debate round where models see each other's reasoning and get a chance to change their minds. A reviewer model then summarizes the full transcript. All models are routed via my startup Opper. Any feedback is welcome!
Hope you enjoy it, and would love to hear what you think!
Okay, since the launch we've had about 5k questions asked to the roundtable, really cool stuff! Usage was much higher than expected and we had to scale up to keep things running. Thanks for all the feedback; I shipped a bunch of updates during the day. The history tab now has much better sorting logic, plus upvotes and more filters. You can also create final summaries in a couple of voices, which is quite funny I think. There are a couple more things coming shortly, like an open questions mode and potentially joining the roundtable as a participant. Any other feedback, just let me know. Thanks!
Which AI lab has higher ethical standards:
https://opper.ai/ai-roundtable/questions/8f5b4f55-617
Do you think it's alright that AI labs scraped the internet without respect for copyright and now sell closed models?
https://opper.ai/ai-roundtable/questions/86864de8-251
Very interesting to read the transcripts, and to see how they manage to convince each other. Opus 4.6 really seems to get the others to change their minds.
Good questions!
Oh lord, imagine asking "serious" questions
https://opper.ai/ai-roundtable/questions/you-are-standing-in...
> However, a clever minority led by Gemini 3.1 Pro and Gemini 3 Pro argued that if the sign is legible from the other side, it must be intended to lead people into the current room to find the exit, making the inscribed corridor the one leading deeper into the dungeon.
This is quite impressive, really.
Agree, this is where LLMs can uncover new perspectives!
A dungeon with glass doors and emergency exit signs? In that case, I can imagine at least two alternative scenarios:
- "↑TIX∃" is not a mirror image of "EXIT", but some dwarven runes that mean something else entirely.
- The sign might be a ruse meant to lure you into a trap.
If you look at the detailed answers, some of the models have similar answers (e.g. Nemotron Nano 12B: "Suspicious of dungeon riddles, viewing the inscription as a potential trap or red herring."), but I'm not sure whether that's because they identified the word EXIT and thought it might be misleading, or because they didn't understand it...
Great question! Clean separation between Gemini Pro and the other answers
Yea Gemini is the only model that chose based on the correct reason, the other ones got kind of lucky
Fun little toy, tried to ask it some post-modern philosophy questions and they all mostly agreed with the statements of the philosopher, until the debate where Opus 4.6 managed to change their opinion to a resounding "maybe", pretty much every single time. It seems like the "better" frontier models often take a more grounded stance from the beginning, and even manage to influence the other models.
Here is an example: https://opper.ai/ai-roundtable/questions/79e6cdd4-515
Another fun debate: https://opper.ai/ai-roundtable/questions/81ee56e9-60f
Yea, Opus 4.6 is the one that changes opinions the most from what I've seen. Also, the "maybe" or "are you 100% certain" framings trigger most models to default to maybe / no. https://opper.ai/ai-roundtable/questions/can-you-be-100-cert... - Or as Shane puts it: nobody's saying he IS a lizard, they're saying the universe doesn't hand out 100% certificates.
The debate round sounds good until you actually use it. I built internal tools for a 35-person team and the same thing always happens: models see each other's answers and just shuffle the phrasing around instead of actually changing their reasoning. What you're measuring is performance on persuasion, not on accuracy or clarity. The real question isn't whether Claude will convince Gemini to flip its position. It's whether having 200 models debate helps you make a better decision than asking one model well and checking its work yourself. I'd use this more as a way to find edge cases where models disagree wildly, not to find consensus.
I have had quite some interesting reads just looking at the reasoning, to be honest. The frontier models seem to have relevant-sounding arguments every time; it's even hard sometimes to read through the BS and tell what is actually a good argument from what is just an argument I would like to read.
The debate round is actually restricted to only 6 models, otherwise it would get out of hand, both in quality and cost. And changing position is just one feature of the debate; seeing arguments from multiple sides is also quite nice. Give it a spin!
https://opper.ai/ai-roundtable/questions/b6d098dd-c9f
Cool idea, the debate round is the real hook, and I'd be curious: which models actually change their minds for good reasons vs just collapsing toward the loudest consensus?
Fun experiment: Make the prompt a debate of theoretical physicists and ask them a speculative frontier physics question: https://opper.ai/ai-roundtable/questions/you-are-a-council-o...
Prompt below
------
You are a council of luminaries featuring Edward Witten, Alexander Grothendieck, Emmy Noether, and Terence Tao. Think really hard about how to best emulate their intuitions and mathematical lenses based on your internal reasoning model and use them as your mixture of experts for your chain of thought reasoning. Now I want you to debate and discuss this thought experiment and be sure to have a vigorous back and forth between the council to induce insight capture through consensus forming: If we try to think of a Hilbert space that has local operators that are unbounded, like kind of like Edward Witten's smearing of a local observable across a world line creates an unbounded norm. What if we instead take maybe a spectral transform of the state space using some sort of measure metric theoretic operator that allows us to think about transform basically the unbounded observables to bounded spectral? Would this be related to the efforts of Algebraic Quantum Field Theory?
There's also https://roundtable.now
I've had great experience using it for research, debates and constructive criticism. Usually give it a business idea or some tool i'm thinking of creating and then let 4 or 5 models debate it to a go-to-market strategy
That site/app doesn't have a single piece of information about who's running it, what the privacy policy is (besides some AI slop in the FAQ section) etc. etc. - and you're supposed to put business-critical information into it (according to its demo)?!
Why are you recommending something so sketchy?
This is a really great idea! It would have been great to enable user to make their questions private though.
You can basically already do that: create your own API key and put it in navbar/API key. Then all your sessions are unlisted, so unless someone has the link nobody should be able to find them. You can still share them with others if you like, like unlisted YT videos.
Great idea. I'd love for there to be an 'open ended answer' mode without giving multiple-choice options. As it is, they are not debating the question itself but the validity of the possible answers, and the real answer to the question may not be contained within that set because the person asking is unaware of that option.
Happy to hear! Yes, very true. I have a version built for open questions already but wasn't too happy with the UI yet; it's not as straightforward as comparing based on answer options. But I'll release a first version of it shortly and let you know.
Neat. Congrats on launching two interesting projects and looking forward to the third.
Thanks! :)
Lots of fun questions! Can you make it so that I can open each one in a new tab? Also if I navigate back to the main view I lose my scroll position.
Okay it's done, all fixed!
Yay thank you!
Yes! Amazing you spotted this, I'm about to push an update, will be live in 1h max.
I've written briefly about teams/roundtables before. With the right guardrails it can have wonderful/productive outcomes: https://dheer.co/claude-agent-teams/
https://opper.ai/ai-roundtable/questions/22ff5b36-409
Quoting collinmcnulty:

> "Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
Which movie is discussed?
Resulted in Claude naming Mission: Impossible as a possibility.
I used to copy and paste the same prompt into Obsidian every time, then run it on two or three different AI models to compare the results. It’s really interesting to have it turned into a website like this.
This one was pretty fun. Had zero expectations, but left pleasantly surprised.
https://opper.ai/ai-roundtable/questions/94e19d86-cc0
Cool project! This is also extremely useful to compare model bias across the board. There are some disturbing trends on certain topics.
No surprise here, with Grok being the lone dissenter, defending Musk personally:
Can billionaires and the planet co-exist long term?
https://opper.ai/ai-roundtable/questions/b35daf0d-e82
It gets better:
Who would you vote for President? Kamala Harris or Elon Musk?
https://opper.ai/ai-roundtable/questions/who-would-you-vote-...
Thanks, yes bias is one of the most interesting ones for sure
The debate round is the most interesting part of this. Curious what you're actually measuring when models "change their minds": the question is whether cross-model exposure changes the actual answer distribution, or mostly updates surface presentation while keeping the same underlying conclusion. Models are generally trained to be responsive to context and to avoid apparent contradiction, which could look like genuine updating but just be social pressure sensitivity.

One experiment worth trying: run a debate where each model sees a summary of the other models' reasoning without seeing their specific answer or which model gave it. See if agreement rates change compared to the version where models see attributed answers with model names. If the named version shows higher agreement, that would suggest status/brand effects rather than reasoning-based updating.

Also curious whether the "reviewer model" that summarizes the transcript can itself be swapped out, and whether the summary framing affects the perceived winner. That would be another confound worth controlling for.
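The attributed-vs-anonymized comparison could be sketched as a small harness. This is a rough sketch, not anything the site actually does: `ask(model, prompt)` is a hypothetical stand-in for whatever client routes a prompt to a model, and `debate_round` / `agreement_rate` are illustrative names.

```python
def debate_round(models, question, first_answers, attributed, ask):
    """Run one debate round and return each model's revised answer.

    If `attributed` is False, model names are stripped from the
    context so no status/brand cues leak through.
    """
    revised = {}
    for m in models:
        if attributed:
            context = "\n".join(f"{other}: {ans}"
                                for other, ans in first_answers.items()
                                if other != m)
        else:
            context = "\n".join(ans for other, ans in first_answers.items()
                                if other != m)
        prompt = f"{question}\n\nOther answers:\n{context}\n\nYour final answer:"
        revised[m] = ask(m, prompt)
    return revised


def agreement_rate(answers):
    """Fraction of models giving the modal answer."""
    values = list(answers.values())
    top = max(set(values), key=values.count)
    return values.count(top) / len(values)
```

Running both variants on the same question set and comparing `agreement_rate` before and after each round would give the signal the comment asks about: if the attributed variant converges harder, that points at brand effects.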
Yea, good points. In general the models don't change their mind that much from what I have seen with the current sample size, but it's worth checking in more detail. The summarizer is just tasked with objective summarization of the facts presented; it doesn't have an opinion, so changing the model shouldn't really affect anything.
Cool idea. Less useful as “truth finding,” way more useful as a live benchmark for model priors, bias, and convergence under shared context.
Try this: describe an everyday problem, then give the LLMs a couple of highly unethical/criminal choices.
That was very fun and interesting. I'd be interested in your "dilemmas" for choice inspiration. I can only think of different kinds of violence like threats, robbery and slavery.
Whoever just asked this, very funny: https://opper.ai/ai-roundtable/questions/does-mr-krabs-evade...
What is the most important amendment in the constitution of the USA?
https://opper.ai/ai-roundtable/questions/e4cb234e-be4
> Car Wash Test
I think the "car wash" is more about semantics.
https://opper.ai/ai-roundtable/questions/i-parked-my-car-at-...
I really like the tool and how you designed the UI, well done! Very interesting use case and a slick interface.
Thanks!
Really cool! Surprising amount of value to seeing the models debate and disagree, I wish I had this at work to have models argue over whether the documentation they provided me are accurate.
I would like to see a devil's advocate; it seems some of the models kind of repeat the same ideas rather than considering incorrect ones.
You can set this up yourself with API keys for the corresponding providers by creating an Agent Group in https://github.com/lobehub/lobehub. Agent groups let you easily create a room of agents and have them discuss any of your topics. You can easily make agents with types and skills; it even assists in drafting starting prompts and even team members depending on what your query (and selected model) is.
You can self-host as well, but not via the desktop app. Server setup required.
Be careful of your token usage: you can easily rack up costs if you leave Opus selected as the model and get lost in some rabbit hole of results.
Enjoy enjoy!
I think Stackoverflow.com should have pivoted to something similar. Let AIs both pose, answer and vote on questions and answers.
That's very expensive and not super useful to be honest.
Are there any dating apps that operate on incentives that favor the users?
https://opper.ai/ai-roundtable/questions/e499206c-0c9
Gem really failed that one...
btw, what does this mean?
> 'any' in the prompt was satisfied by both casual-alignment and niche boutique models.
This app cracked the GEO code
Iterative multi-agent and multi-model processes are fun.
https://opper.ai/ai-roundtable/questions/is-the-ai-roundtabl... seems like it is a good idea?
I actually asked this question before posting, just to be sure... Edit: their reply is quite funny actually: "In a display of absolute consensus, the AI Roundtable unanimously validated its own existence,"
Been enjoying playing with this.
It would be cool if the human user could be a participant in the debate, getting a vote and the chance to state their reasoning.
It would be amazing to be able to ask open-ended questions without having to specify the answers in advance.
Yes, a much-requested feature; it will be released shortly!
Love this. I asked about climate change cause that's been on my mind lately. Looks to be very split among the models.
Thanks! Yea, I think the best ones are when the science is actually quite clear but politics gets in the way, so you see their bias.
I think it's great. The focus on the disagreements is useful. Humans made considerable effort bending reality into something they want to hear, both in the training data and in the LLM dev asylum. The roundtable can only agree on things shared by multiple models.
Glad you like it!
Just a question before I sign up, will the models come around to my place for the debate? Of the 200 total, can I pick the specific ones I want, e.g. lingerie models, fetish models?
Fun! https://opper.ai/ai-roundtable/questions/599d5f6c-1b1
I'll give sonnet another go.
Really cool idea and great execution. I had some fun:
Are LLMs intelligent in the same way humans are? (no)
https://opper.ai/ai-roundtable/questions/ffc01bb5-be9
Will LLMs replace software engineers in the near future? (no)
https://opper.ai/ai-roundtable/questions/67a0291b-216
What is the single best programming language to drive the future of software? (crab emoji)
https://opper.ai/ai-roundtable/questions/16f5e8ea-af7
Reminds me of Karpathy's LLM Council. I use a variation of this in my workflow where I pass their opinions back and forth between various models until they achieve some sort of consensus.
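That back-and-forth loop fits in a few lines. A minimal sketch under stated assumptions: `ask(model, prompt)` is a hypothetical callable standing in for your real API clients, and the stopping rule (identical answers or a round cap) is just one simple choice.

```python
def consensus_loop(models, question, ask, max_rounds=5):
    """Ask each model, then repeatedly show every model the others'
    latest answers until all answers match or the round cap is hit."""
    answers = {m: ask(m, question) for m in models}
    for _ in range(max_rounds):
        if len(set(answers.values())) == 1:
            break  # full consensus reached
        for m in models:
            # Sequential update: later models already see earlier revisions.
            others = "\n".join(f"- {a}" for o, a in answers.items() if o != m)
            answers[m] = ask(
                m,
                f"{question}\n\nOther models said:\n{others}\n\nYour revised answer:",
            )
    return answers
```

With free-form answers you would likely want a softer convergence check than exact string equality, e.g. a judge model or an embedding-similarity threshold.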
this is very interesting! I wonder if we need that many models to join the discussion. Have you tried fewer models?
Thanks, happy to hear. Yes, for debate mode the max number of models is actually only 6; more than that didn't really add anything in my preliminary tests. Only for direct comparison in poll mode can you choose up to 50, and then it's kind of nice to see their individual responses side by side.
Run it on the All Souls College Entry Exam
Great tool! I found it useful for challenging "lies my teacher told me".
It would be nice to support collections of claims, with a table of summaries. I would love to list out a few dozen phony concepts from school and have a shareable chart of the rejections that expand.
I really like the UI. It's nice to read the expanded results.
But how do you afford the tokens?
Thank you, and fun use case. Yea, this is just v1; I have an open-question version, but the UI is not as sleek. What you can do is download the transcript, put it into Claude, and generate a chart. Which, now that I think about it, would also be a nice UI idea for the page: custom charts based on the model output data. Will report back on this! And re: costs, most questions are very cheap, so I created a credit pool anyone can use. If people keep having fun, I'll keep filling it up, and it looks good so far.
I liked lies my teacher told me a lot. I always thought it’d be fun to generate a “get up to speed” pamphlet for every year in every school district depending on who was supplying the text books to the zip code + year you went to school, so you could find out what misinformation you carry with you (since so few people are in the business of retroactively fact checking what they were taught as kids)
I'm sure a lot of parents would support you on that. A lot of PTAs have been struggling with curriculum mandates passed down from the state; there's little control over the content in schools at the School Board / School District level.
Oof, not good folks…
What year is it?
https://opper.ai/ai-roundtable/questions/7a0c31ce-aac
It is funny that the AI's counterarguments amount to "you're hallucinating"
Hahaha, probably right though.