It's weird to me that this is "suddenly" an issue.
It has been known for decades that Red Hat Inc's largest customer is the U.S. Army[1]. It's a very large part of the reason why Red Hat took over development of SELinux and enabled it by default in their distros.
And the Army isn't exactly known for handing out cupcakes...
[1] https://unixdigest.com/includes/files/Army-RedHat-Whitepaper...
[1] "Red Hat’s partnership with the U.S. Army spans 10 years starting with the deployment of Red Hat Enterprise Linux in 2002 and, to this day, the U.S. Army remains one of Red Hat’s largest customers by volume."
Optics. One can argue that Red Hat was working with DoD on just security. But after this white paper on how to better kill people, that facade has fallen over.
Before it was a maybe, now it's certainty.
Ambiguity is quite comforting.
Suddenly war and associated killings stopped being theoretical. Bunch of people that used to be as dangerous (in practice) to civilization as paintballers started actually using real weapons on real people.
Military in peacetime is cosplaying (larping?) war. So there's little resistance to aiding them in their silliness. When they actually start to bomb people, it's another story.
The deal was: we aid you in your pretend-wars, but you don't start actual ones. That deal has been violated, and people won't abide by it.
haha, larping, https://en.wikipedia.org/wiki/List_of_wars_involving_the_Uni...
I'm sure all the people who died in Iraq and Afghanistan would be glad to know it was all just pretend.
https://web.archive.org/web/20260402155236/https://www.redha...
Archive URL to original paper
It’s low latency for realtime. Discussion of these things has been public for twenty years, since Red Hat released MRG, and this is a conspiracy theory website.
>The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).
Carving out the particular military engagements your company deems less than justified sounds nice but isn't workable in practice. You have to swallow the whole pill if you want to sell to the DoD.
Better to have smart bombs than dumb ones. Or rather, better to have 1 smart bomb than 1000 dumb ones spread across an entire city in order to pick off the particular building, vehicle, or person you want.
Especially AI-hallucination bombs that hit a park named "Police Park" because the system thinks it's killing policemen[1], or a children's school with "Shahed" in the name[2] because it thinks the name has something to do with drones.
[1] https://x.com/MarioNawfal/status/2029575052535173364
[2] https://www.aljazeera.com/news/2026/3/6/elementary-school-in...
There's also a chasm of (non-)accountability.
You or your subordinates target an elementary school: that's a war crime.
Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.
This isn't even that new. Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators. But AI does bring it to a new level.
> Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators.
Details please. Because I can see the reality being most likely an attempt to avoid conflict by solidifying MAD, by trying to prevent a human from vetoing a second-strike.
At least it is the plot of a lot of movies of this era: Dr. stragelove, wargames, colossus.
The software is never accountable, so the human running it is always accountable.
that is how it should be, not how it is.
"War Crimes" only apply to the loser of the war and are prosecuted by the victor.
Meaning whatever horrors are done on either side, only the horrors committed by the loser will be "crimes". The inclusion of AI doesn't change that.
sadly that's also true within Ukraine. like, I know that Russians are handling Ukrainian prisoners of war very brutally (no sources; why: [0]), but if not for [0], AND if I wouldn't be killed by my co-citizens for it, I would point out a good chunk of misconduct on the Ukrainian side as well.
I also recall the history lessons. I can't remember anyone who committed a war crime against Nazi Germany ever being internationally prosecuted. yep, the West did prosecute domestically, and there were some loud cases with German POWs, but I can't recall any, any Soviet soldier being charged for e.g. rape.
[0]: there is nothing public to link to that remained up, and I'm long out from private Telegram channels where such videos are posted; plus, even if I could, you and mods wouldn't want to see the video of someone getting beheaded
Your links talk about the places that were bombed, but I don't see anything apart from conjecture that this was the product of AI targeting.
Also, this is a vast underestimate of the ability of organizations that were able to locate most of the Iranian leadership in their hiding places throughout the war, but suddenly their Farsi is so bad they need a twitter account to tell them this is a park.
It's a popular conspiracy theory, without evidence, and without any perspective on any information that intelligence had. Using civilians as shields is well documented/known for Iranian military and groups they sponsor. For example, hospitals [1].
Shitty, but possibly a valid military target.
[1] https://www.gatestoneinstitute.org/8666/yemen-human-shields
> Gatestone Institute is an American far-right think tank known for publishing anti-Muslim articles.
> The organization has attracted attention for publishing false or inaccurate articles, some of which were shared widely.
> The Gatestone Institute has been frequently described as anti-Muslim, regularly publishes false reports to stoke anti-Muslim fears, and has published false stories pertaining to Muslims and Islam.
- https://en.wikipedia.org/wiki/Gatestone_Institute
The US and Israel have repeatedly claimed that schools and hospitals are legitimate military targets with no evidence. A highly partisan think tank which is known for putting out misinformation is not a valid source.
If you're going to destroy hospitals and target civilian infrastructure and kill children, you should be accountable on a world stage and provide evidence. Or would you accept Iran bombing elementary schools in the US because they claim to have intel that there are terrorists hiding under them?
It's more complex than that, you have direct evidence of Iran recruiting 12 year old child soldiers in this war (https://www.bbc.com/news/articles/c9wqgjn7x89o).
Using stadiums as security forces hiding places (https://www.wsj.com/world/middle-east/israel-iran-leadership...)
Misusing hospitals for military purposes (https://www.iranintl.com/en/202602215486) and schools (https://x.com/IranIntl_En/status/2032846253189411146 https://www.instagram.com/p/DV8NUQPFJJv). While the last ones are weak visual evidence backed up by rumors, Iran has been under an internet blackout for over a month, as the regime is interested in controlling the information flowing out of the country.
There are MANY examples of Iranian backed terrorist organizations doing this (which I thought might be too indirect), but here's something more recent [1].
Regardless, left-leaning news reports things that make them look good and the opposition look bad. Right-leaning news reports things that make them look good and the opposition look bad. Both are needed to find the truth, because they're all biased, for-profit corporate entities owned by 6 different billionaires that will only report what's convenient for that bias.
And no, I have no trust in the claims of the Iranian government. Do you? Who do you believe does?
[1] https://www.msn.com/en-in/politics/international-relations/i...
This has nothing to do with AI, the school got hit because it was directly next door to a military base.
You're mistaking it for Shajareh Tayyebeh Elementary School[1], double-tapped with Tomahawks in the opening salvo of the war. That was another school, hit later. There were multiple schools attacked.
[1] https://en.wikipedia.org/wiki/2026_Minab_school_attack
There are no mainstream sources about the police park story when I looked. I’m pretty sure it’s a hoax.
What's the running rumor right now of which AI was involved? I heard Claude awhile back, but this makes me wonder how much Redhat could have been involved?
It is completely unfounded speculation that AI was involved.
That is target selection and has nothing to do with dumb vs smart bombs.
You might be right, but that's terrible
Channeling my inner Socrates:
You want consensus from non-experts for a plan to use 20 smart bombs.
Your opponent wants consensus for a plan to live-stream a demo of 1 smart bomb, and then use 19 dumb ones.
Your team has more expertise.
Your opponent's plan saves enough money to buy a better PR team than yours, and is still more cost effective than your plan.
Who wins?
That “smart” vs “dumb” distinction doesn’t apply here though. What is discussed has nothing to do with the ability to physically land a bomb in a precise location, that problem seems to be solved reasonably well already. “Smart” in this case has more to do with using ML/LLM to select a target.
Smart bombs are no good if they are directed by a dumb AI targeting system, a dumb alcoholic accelerationist religious fanatic Secretary of War, or a dumb narcissistic genocidal pedophile President.
There is one more layer - America voted for this.
In fact, it didn’t. Trump continued to make “no new wars” a plank of his platform.
Some of his base will follow wherever he goes, but he would not have been elected without those who supported him on the basis of this (broken) promise.
Trump said this wasn’t a war.
Americans voted for this confrontational, disrespectful, and chaotic way of governing.
And the 36% that sat out for reasons also contributed.
Use vote.gov to find out if you're registered.
Use 5calls.org to find out about top issues to call your reps about.
Use govtrack.us to find out about what's going on.
If there were a github.com-style view of the govt agenda, it'd be easy for more people to comment on things, at all scales of governance.
I had a thought about that. Using weapons that don't give the opponent an opportunity to surrender feels like a war crime.
You can rationalize anything by only considering the upside relative to alternatives' downsides.
The reality looks more like the worst of both worlds to me.
If you genuinely needed only a handful of "surgical strikes", there would be no need to "compress the kill cycle".
What we see in Gaza, Lebanon and Iran looks more like "smart carpet bombing": Some AI system generates a continuous stream of "targets" from sensor and intelligence data, according to whatever criteria political leadership defines and according to a given level of allowed "collateral damage", then those targets are immediately fed to drones or warplanes to destroy - essentially a continuous "pipeline" that probably "ideally" (in the dreams of those people) should become fully automated.
For THAT kind of vision, "efficiency" in destroying any particular target and checking all legally required boxes as quickly as possible is probably paramount.
(And in addition to that, there are probably still enough "dumb bombs" if no one is looking)
It's the plot of Captain America 2, with those three aircraft carriers at the end.
As someone who works for the DoD: the so-called "disturbing" language in the paper is very commonplace in this industry. Idk if or why Red Hat is trying to redact the paper, but I'm sure it's not because they're embarrassed their software is killing people. That's par for the course for defense contractors.
We all have to face the reality that people are going to use our software in ways we don't like.
Any productivity improvement software in the wrong hands could make doing bad things more efficient.
They don’t like it so much they wrote a gushing press release bragging about it!
who let the Streisand effect out of its cage!?
This post looks artificially buried on page 3 now, and the topic is one of the most important things that tech company workers should be thinking about right now.
"I give permission to IBM, its customers, partners, and minions, to use JSLint for evil."
In evil mode it indents by mixing tabs and spaces.
I chuckled. It is, in fact, an actual quote; see [1] for the explanation.
[1] https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38...
> With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people.
It appears IBM learned no lessons after WWII: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
That book will need a sequel soon.
Ah, now I see where they got the name: https://en.wikipedia.org/wiki/Redcap
IBM suffered no consequences for any of that so there were no lessons to learn. IBM dominated the computer industry from the 1960s-1980s ("Nobody ever got fired for buying IBM") and was a more brutal monopolist than any of the FANGAM corporations.
I dunno that 'removes from their website' is sufficient for 'trying to erase from the Internet'
Can we rename this "RedHat removes paper from website on using their software to 'shrink the kill-chain'"
They still might pull an Anthropic move and send a C&D or DMCA to archive.org.
> With things like the genocide in Gaza ...
Population: ~2,050,000
Density: 15,455.8/sq mi
Words have meaning, and their emotional force derives from that meaning. The knowing misuse of a term like “genocide” for its emotional force is manipulative sophistry.
Nice genocide denial you got there.
The Gaza Genocide is a verified fact. It has been recognized by a United Nations special committee and a commission of inquiry, the International Association of Genocide Scholars, Amnesty International, Doctors Without Borders, B'Tselem, Physicians for Human Rights–Israel, International Federation for Human Rights, and almost all genocide scholars.
What has happened? Mass killings, deliberate starvation, prevention of births, blockading, destroying healthcare facilities and killing healthcare and aid workers, blocking medical evacuations, systematically killing journalists, destroying civilian infrastructure, intentionally causing mass displacement, mass torture and death camps, the use of crying drones to inflict psychological anguish, the use of mass surveillance that far exceeds what any other population experiences, tracking militants to their home to intentionally bomb their entire family, sexual violence, destruction of agriculture, ecocide, and the intentional destruction of educational, religious, and cultural sites. It's likely that more than 150,000 people have been killed by the IDF in Gaza. All of it has been confirmed countless times, and the perpetrators have admitted to it countless times.
Besides external PR, does anyone know how this affects internal morale?
Some of the earlier Red Hat people I knew would not be OK with working on weapons systems even under the most legitimate circumstances. And they'd be much more opposed to collaborating with fascist regimes. And I think horrified by the idea of shoveling AI slop and grifter hype into life&death decisions.
Of course the tech industry's makeup has changed (the overall culture transitioning from hacker idealists to finance bros), and some IBM-ification of Red Hat has also happened. But I'd like to think Red Hat still attracts a more principled pool of talent than FAANG.
Former Red Hatter here.
People who use terms like ‘fascist regime’ don’t get consideration. That’s like someone on the other side referring to ‘unprincipled savages’.
Name calling just doesn’t win. Maybe it makes the name caller feel better, but it loses the audience.
I was characterizing the objections I think some people would've had, from their perspective. Not trying to make a persuasive argument to people not already onboard with those perspectives.
But to one of your points: in some cases, it's not name-calling, but an objective assessment. And "fascist regime" and "collaboration" have historical meaning. I suggest that people of integrity would do well to consider the connotations. Especially at IBM, which infamously was a collaborator with one of the worst fascist regimes. "Never again" should still be in the minds of every executive and board member.
> People who use terms like ‘fascist regime’ don’t get consideration.
Are they referring to IBM, or?
So the hat is red because of all that blood?
Was this written by an Iranian propaganda machine?
How could it be? The US has won the war against them many times over, to the point that they no longer exist.
> I don’t think there’s something inherently wrong with working together with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense
The core purpose of a military is to destroy things and kill people, and the world is controlled by the people who can do that better than others. You can put all the "defense" and "disaster aid" lipstick on that you like but that doesn't change what they train for and what their real purpose is.
> and the world is controlled by the people who can do that better than others
Yes, welcome to Earth.
There's absolutely no morality in deciding to be weaker than you have to be. If you are eaten by a predator when you had the option not to be eaten, you're not some high-minded righteous peace-lover, you're simply dead.
This directly contradicts most religions, which the people with the biggest weapons somehow claim to follow.
I must be missing your point. You're talking about a thought / belief framework. I'm talking about how the world actually works. In any fight between theory and practice, practice always wins.
The world largely moves forward through cooperation, then regularly regresses through violence.
I have no desire whatsoever to undermine and bludgeon my neighbors in order to come out on top. Anyone who feels this way should probably be institutionalized for the safety of society at large.