Hacker News | jwpapi's comments

Also, when they use the CLI you can learn from it, and sometimes next time you'll be faster if it's just a simple edit, compared to an AI that first has to understand the relevant context.

No wonder they think they’re close to AGI when they think we are that stupid.

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

This whole sentence does absolutely nothing; it's still "do whatever the law allows you to." It's a fully deceptive sentence.


Boycott OpenAI.

Let's kill their business before it kills us.


Don't boycott it! Just don't pay for it. Smash the free service hard.

Active users are worth a lot. It is a signal that they are the chosen solution.

Altman must have read a lot of Kissinger. If your brain scans the text quickly, it almost seems like it's Anthropic's red line, except the second half completely negates it. Completely untrustworthy IMO; this is direct, malicious intent to misdirect.

These people truly believe we're all idiots.

Doesn't matter what they believe. Not like we are going to do anything about it. In the next couple of weeks most of HN will be lining up to use the new OpenAI model that's .01% better.

I don't see any other outcome anymore, to be honest, after seeing how humans use AI, how AI works, and how providers tune their models.

To me it's a given:

- AI in its current state is ruthless in achieving its goal

- Providers tune ruthlessness to get stronger AIs versus the competitor

- Humans can’t evaluate all consequences of the seeds they’ve planted.

Collateral and reckless damage is guaranteed at this point.

Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...

We could stop it, but we won't.


>AI in its current state is ruthless in achieving its goal

I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.

The ruthless maximising of a particular trait is something that happens during training.

It does not follow that a model that is trained to reason will necessarily implement this ruthless seeking behaviour itself.


No lineage of AI models will be created that cannot achieve goals; they will be outcompeted by models that can.

Perhaps, but there is a difference when it is a reasoning system deciding on the best way to achieve the goal.

To get the predicted disastrous effects you need to be doing function optimisation without regard to the meaning of the function parameters. Yes, models can still game the system at inference time, but in much the same way as a human might game the system, it requires awareness that you are going against the intent of some rule.
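A toy sketch of that distinction, with a hypothetical objective and made-up names: a blind optimizer has no access to the *meaning* of the function parameters, so it happily exploits any loophole in the proxy metric. Here the intent is "write a good summary", but the scorer only counts keyword overlap, and hill-climbing degenerates into keyword stuffing.

```python
import random

# Hypothetical proxy objective: we *intend* "summarize accurately",
# but the scorer only measures keyword overlap with the source text.
SOURCE = "the treaty bans autonomous weapons without human control"
KEYWORDS = set(SOURCE.split())

def proxy_score(text: str) -> int:
    # Rewards keyword overlap only -- the meaning of a good summary
    # is not represented anywhere in the objective.
    return sum(1 for w in text.split() if w in KEYWORDS)

def blind_hill_climb(steps: int = 2000, seed: int = 0) -> str:
    # A blind optimizer mutates candidate strings and keeps whatever
    # scores at least as well; it has no notion of "summary" or "intent".
    rng = random.Random(seed)
    vocab = list(KEYWORDS) + ["banana", "xyzzy"]
    best = "a summary"
    for _ in range(steps):
        words = best.split()
        words.insert(rng.randrange(len(words) + 1), rng.choice(vocab))
        cand = " ".join(words)
        if proxy_score(cand) >= proxy_score(best):
            best = cand
    return best

# The "optimal" output is keywords stuffed together, not a summary.
result = blind_hill_climb()
```

Nothing here is malicious; the degenerate output is just what maximizing that function looks like, which is the point about optimisation without regard to parameter meaning.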


>We could stop it

I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can; practically you cannot, as there are other things required for pulling that off, not just having two working hands.

I doubt "stopping it" is up to anyone; it's rather a phenomenon, and it's quite clear we're all going to wing it. It's a literal fight for power. Nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it instead, just to guarantee its own power.

It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.


Yes, it would be like trying to “stop” gunpowder in 1400 or atomic weapons in 1938. Pandora’s box is open.

Gunpowder (weapons) and atomic tech (energy, material, weapons) are heavily regulated in most of the planet, as the risks of having free access to them for everyone (company/person) for their own selfish purpose without strong guardrails clearly outweighs the benefits.

The fact that something exists doesn't mean that having it readily available is the only option, particularly if it has potentially disastrous consequences at scale. We are choosing to make it available to everyone fully unregulated, and that is a choice that will prove either beneficial or detrimental to society at some point.

I don't think it is inevitable, I think it is a conscious choice made by a few that have their own and only their own interests in mind.

As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.


The challenge is that enforcing a ban would presumably require strict incursions into personal freedoms organized at a scale where AI-based solutions would be particularly effective and thus tempting, paradoxically.

On the other hand, assuming the dangers are real, you lose by default if you do nothing.


Not sure I agree.

One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home. We also have rules in place about what an individual/company can and cannot do from the point of view of the greater good. I cannot kill my neighbour for my benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining about any incursion into personal freedoms.

The reason people have accepted these is that we have already proven that free access to those things could be catastrophic. We haven't proven that yet with AI. But I don't see much difference between those established and well-accepted rules and a rule that says: a company cannot release or use for its benefit a technology that will impact the need for humans at scale, because of the impact (again at scale) that it would have on society.

In other words, if you are a company and have the potential to release a product, or buy a product from a provider that would cause mass unemployment, should you be legally allowed to do so? I do not think so.


That’s a fair objection. Having ruminated on it some more, I’ll admit it might be tenable.

As for achieving an effective ban, occupational collapse might be the stronger motivator once workplace adoption broadens and accelerates, but risk of epistemic collapse might register sooner among the general public, already broadly suffering slop.

Like Bill Gates, I wonder why it’s not yet become a theme in mainstream politics.


Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.

It's doom and gloom because the underlying game theory forces all state actors into an unbound and irresponsible arms race, consequences be damned.

AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.
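The arms-race structure described here is essentially a prisoner's dilemma. A toy payoff matrix (illustrative numbers only, not from any source) makes the dominant strategy visible:

```python
# Illustrative payoffs for two states choosing to Restrain or Race on AI.
# Higher is better; the numbers are made up to show the structure only.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual safety
    ("restrain", "race"):     (0, 4),  # unilateral restraint loses agency
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # unbound arms race for everyone
}

def best_response(opponent: str) -> str:
    # Whichever move the opponent makes, racing pays more,
    # so racing is the dominant strategy for both players.
    return max(("restrain", "race"), key=lambda a: payoffs[(a, opponent)][0])
```

Both players end up at (race, race) with payoff (1, 1), even though (restrain, restrain) at (3, 3) is better for everyone, which is the "consequences be damned" outcome.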


Not OP, but AI is fundamentally in another category than any other technology before it. It requires moral fortitude to wield in a way that guns and books didn't require. It augments human judgement in a way that needs a moral framework to clearly guide it.

Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.


The current fruit is automating away a ton of human labor with no foreseeable way to continue to engage that labor. It is poison for the majority of humanity which will bear fruit for the limited few who can use it / own it.

I think that much is fairly clear from AI.


It's not going to bear fruit for them either.

Why would an AI which is smarter than humans care about a ridiculous belief like "We own you"?


Get this point across to those leading the charge, if not every person everywhere.

Because it's a fruit governed by humans, in the scope of a capitalistic and patriarchal society. And all fruits planted in a capitalistic and patriarchal society are poison

> Collateral and reckless damage is guaranteed at this point.

It's industrialization and mechanized warfare all over again


AI isn't ruthless; that doesn't even make sense. It's a mathematical model, and if it's optimizing for the wrong thing, then that's strictly the fault of the people who chose what to optimize for.

You need to go back and research AI safety from long before LLMs were a thing. Any complex goal-driven system will have outcomes that cannot be predicted. Saying "it's a mathematical model" betrays your ignorance of behavior in complex systems. Very tiny changes in initial conditions can have vastly different outcomes, and you don't have enough entropy in the visible universe to test them all.

Sure, I'd also agree with that; some are so complex we can't understand them, but then that's still on us for relying on them.

Blind optimisers without human qualities like ethics are pretty much the perfect example of what ruthless means.

There might be better words to describe that it doesn't really have the same boundaries we assume it has.

I love how sci-fi warned us against hyper-competent galaxy brain conscious AI but we are actually going to be killed by confidently wrong stochastic parrots.

I mean, the Secretary of War cannot act any other way, to be honest. It's just a fucked-up situation.

There is no Secretary of War. The name of the Defense Department is set by statute, which has not been changed, regardless of Pete Hegseth's cosplay desires.

Am I the only one who understands the department's position? If another country will have it without safeguards, why would I not want it without safeguards? I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage; it would be weird to accept that as the Department of War.

I understand the risk, but that is the pill.


They could use a different provider for the kill chain.

"We must use Claude to decide whether to nuke Iran, or else our gun manufacturers aren't allowed to use it to run spreadsheets"

is a bit ridiculous.


ports & adapters :)

Haha, I agree that my opinion is kind of that, but more like ports & adapters for semantic space, not just IO boundaries.

If we can abstract the tools one layer further for AI, it might reduce the attention it needs to spend navigating them and leave more context window for actual reasoning.
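A minimal sketch of what "ports & adapters for semantic space" could look like (all names here are hypothetical, not from any real framework): the model sees one narrow, intent-level port, and an adapter maps it onto a concrete tool so the model never spends context on flags, paths, or output formats.

```python
from typing import Protocol

class SearchPort(Protocol):
    # The intent-level port the model reasons against.
    def find(self, intent: str) -> list[str]: ...

class GrepAdapter:
    # Wraps a concrete tool (here a toy in-memory grep) behind the port.
    def __init__(self, corpus: dict[str, str]):
        self.corpus = corpus

    def find(self, intent: str) -> list[str]:
        # Tool mechanics live entirely inside the adapter.
        return [name for name, text in self.corpus.items() if intent in text]

def agent_step(port: SearchPort, intent: str) -> list[str]:
    # The reasoning layer only ever talks to the port.
    return port.find(intent)
```

Swapping the grep adapter for a vector-search or ripgrep-backed one would leave `agent_step` untouched, which is the point of pushing the abstraction one layer further.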


It's 200 to 6000, and I use the 6000. I also use an Antigravity subscription for probably another 6k (I don't use them fully, though).

I can't believe this is net positive for them.


Why don't you just copy the trades?


If I'm an insider with 100% confidence, I'll take all offers at a certain price as long as I can afford it. Similar story for lower levels of confidence (but still inside info). There won't necessarily be any left for you to copy at a viable price.


You might not have enough money to drain the order book.

And there's always a chance things go wrong, even with inside information. It would be unwise to go all in.


The examples didn't look like they'd completely emptied the order book.


Because there's always some uncertainty and capital limits. But the uncertainty about the outcome is itself inside info, and that's compounded with your own uncertainty about the insider as a copy trader. So the insider will empty out certain price levels only, and your certainty is strictly less than theirs, meaning you have even fewer viable levels to buy.
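A quick worked example of those shrinking viable levels (the order book numbers are made up for illustration): each buyer only takes asks priced below their own probability estimate, and the copier's estimate is strictly lower than the insider's.

```python
# Hypothetical order book for a YES share paying $1.00 if the event happens.
# (price, shares available) -- all numbers invented for illustration.
asks = [(0.60, 500), (0.70, 400), (0.80, 300), (0.90, 200), (0.97, 100)]

def viable_asks(confidence: float) -> list[tuple[float, int]]:
    # A rational buyer only lifts levels priced below their own
    # probability estimate (positive expected value per share).
    return [(p, q) for p, q in asks if p < confidence]

insider_levels = viable_asks(0.95)  # insider's private estimate
copier_levels = viable_asks(0.85)   # copier's estimate, discounted for
                                    # uncertainty about the insider too
```

The insider clears everything below 0.95; by the time the copier sees the trades, only the 0.97 level remains, which is above the copier's 0.85 estimate, so there is nothing viable left to copy.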


> Similar story for lower levels of confidence

therefore, the polymarket betting odds will reflect the truth - even if that info is a secret that nobody else but the insider knows. And if this is the case, then even an outsider could make use of the odds as a source of info which would ensure that market efficiency (which is about the flow of information) is high.

So what's wrong with insider trading again?


If you believe Polymarket is a serious source of truth, consider that somebody manipulated "Will Jesus Christ return before 2027?" because there was a secondary market on whether that market would rise above 5%. That defeats the whole idea that the betting odds reflect the truth. Even pre-manipulation, I don't think a 2% chance that Jesus will return was reflective of the truth.

https://gizmodo.com/checking-in-on-polymarket-bets-on-christ...


The issue comes from situations where the insiders can alter the answer to help their own bets. The simple example is the bet on how long a press conference will be: It's a ridiculous bet when the person giving said press conference can bet and fleece the market.

Will X country invade another before or after day X? A large enough market changes the answer, as the agent can change the decision. And we can see this kind of thing in many interesting questions.


These are not secret divinations though, the participants know this and price it in or otherwise allow it to determine which markets they participate in.


That someone with inside information will e.g. make 500% while those late to the party e.g. only get 10%? (of course your example is not very realistic to begin with)


So is any kind of business illegal? Making investments?


How is this distinguishable from pump-dump?


It rewards blatant corruption? The bigger question is what the benefit is.


The benefit is that inside information becomes public information. The reward for the insider is just the necessary incentive for that to happen.


Has there ever been any documented circumstance where significant inside information became public and known thanks to a trade? Most often, the trade is made at the last minute, and the information gets subsequently revealed anyway. And it's impossible to tell whether somebody is an inside trader, a wealthy gambling addict making a stupid decision, or hypothetically a foreign agent pretending to be an inside trader to make people believe in a particular outcome.


It's impossible to know anything for certain; almost everything is probabilistic.

Also I'm not sure how to interpret your criteria because timing matters, I don't think saying 'it gets revealed in the end' is very meaningful.

Anyway, on Polymarket specifically, sure, military strikes are a common one. Seems like a useful signal to go hide in the basement. Outside Polymarket, there were insider trades in 2008 that I'm sure were useful.


Past performance is not an indicator of future performance.


Shouldn't it be if you suspect they are executed by an insider?


You can't be sure whether they are an insider or just lucky from on-chain data alone.


If they make single market predictions with high accuracy, it is very, very likely they are.


No vigilant insider is making a series of "single market predictions with high accuracy" on the same account. They would make unlinkable bets on fresh accounts.


> No vigilant insider is making a series of "single market predictions with high accuracy" on the same account.

There seem to be quite a few non-vigilant insiders. That's the very premise of the post we're discussing.

This is unsurprising to anyone who's seen the various ways people get busted for insider trading in equities.


Also insider trading is A-OK on prediction markets!


quite a long time..


How do you connect verified data from the past?


Right now it just pulls from Claude Code's local data files via ccusage (https://ccusage.com/guide/#data-sources).

