
The Dune reference earlier in the thread is spot on. We often think of the Butlerian Jihad as a fight against sentient robots, but Herbert's core warning was actually about humans delegating their thinking and agency to machines. We are seeing that play out now not through some sci-fi uprising, but through the quiet erosion of accountability. When we let an algorithm decide who gets a loan or who is targeted in a conflict, we are not just using a tool. We are essentially offloading our moral responsibility to a black box that cannot be held accountable.



We don't need to look to science fiction for lessons about where this leads. Reality has already caught up.

The IDF used tools like Lavender and "Where's Daddy?" to analyze communications, identify suspected members of Hamas, and learn when they were home so they could be killed with bomb strikes.

This has long been possible for humans to do, but it's a laborious process. In the past, only people high up in chains of command received such bespoke targeting. AI tools let the IDF extend the same treatment to raw recruits who had been given the sum total of a pep talk and a pistol.

The result was widespread destruction and indiscriminate killing of civilians. The IDF didn't spend much time scrutinizing AI recommendations and was willing to act on false positives. Every bomb strike, by design ("Where's Daddy?"'s purpose was to determine when targets were in their family homes), took out civilians. Taking a pizza order from a Hamas member years before the war might have been enough to get entire families and their neighbours killed.

If humans hate another group of humans enough and an AI says "Kill", they'll kill. Without thought or remorse. We don't merely need to be worried about murderous robots on battlefields, we also need to worry about humans implementing the recommendations of AI without thinking for themselves.


>We are essentially offloading our moral responsibility to a black box that cannot be held accountable.

We already did it with companies, buddy!


Except that companies are not a black box: at every step there is a human making a comprehensible decision (probably with a paper trail). Yes, they dilute accountability to nearly nothing in some cases, but LLMs are sufficiently opaque that one can claim (disingenuously) that "nobody is responsible".

In principle yes but in practice even executives who are supposed to have the final responsibility for malfeasance don't actually get prosecuted.

So you're aware of accountability dilution AND that the opacity of LLMs leaves nobody responsible for anything; therefore you agree with the point that was made.

I guess your point could be: LLMs are just another capitalistic layer for maximizing opacity and diluting accountability.


“A computer can never be held accountable, therefore a computer must never make a management decision.”

>We are essentially offloading our moral responsibility to a black box that cannot be held accountable.

Do you have the same sentiment about self-driving cars?


To be fair, AI will likely be more moral than the Hegseth, Vance and Trump combo. At worst, it will be as bad as them.

Whether someone agrees with your politics or not, this comment doesn't really help the discussion.

That's the point. AI doesn't do politics or religion.

A computer that values the life of an Israeli the same as that of a Palestinian... Ah, a man can dream.



