
> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.

The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.

The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.



Those of us working from the bottom, looking up, do tend to take the clinical, progressive approach. Our focus is on the next ticket.

My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry-shaking phenomenon could mean death. If that FOMO is justified, then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless, of course, they bet too hard on a fad, in which case the company may go down in flames or be eclipsed by competitors.

Ideally there is a healthy tension between future-looking bets and the on-the-ground performance of new tools, techniques, etc.


> must be so focused on the future

They're focused on the short-term future, not the long-term future. So if everyone else adopts AI and you don't, and the stock price suffers because of that (merely because the perception that your company has fallen behind hurts market value), then that is an issue. There's no true long-term planning at play; otherwise you wouldn't see obvious copycat behavior amongst CEOs, such as pandemic overhiring.


Every company should have hired during the pandemic because hiring had a higher expected value (EV) than not hiring. It's as if someone offered you the chance to pay $1000 for a 50% shot at making $8000, with the catch that everyone who takes the offer shares the same outcome. If you are maximizing for the long term, everyone should take the offer, even though it can result in a reality where everyone loses $1000.
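
To make the arithmetic concrete, here's a quick sketch of that bet (the $1000 cost, $8000 payout, and 50% odds are the hypothetical numbers from the comment above, not real data):

    # Hypothetical numbers from the comment above, not real data.
    cost = 1000    # price of taking the bet (the hiring spend)
    payout = 8000  # gross winnings if the bet pays off
    p_win = 0.5    # assumed chance the bet pays off

    ev = p_win * payout - cost   # 0.5 * 8000 - 1000 = 3000
    print(ev)  # 3000.0: positive EV, so a long-term maximizer takes the bet

The catch is the correlation: everyone shares the same coin flip, so half the time every company loses $1000 at once, which is the shared-loss reality the comment describes.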


Where did they get the notion that the EV of overhiring was high by any measure?


There was a possible future in which the boost tech companies got from COVID would persist after COVID was over. The small chance of that future raised the EV.


To be fair, that's what I have done. I try to use AI every now and then for small, easy things. It isn't yet reliable for those things, and always makes mistakes I have to clean up. Therefore I'm not going to trust it with anything more complicated yet.


We should separate doing science from adopting science.

Testing medical drugs is doing science. They test on mice because it's dangerous to test on humans, not to restrict scope to small increments. In doing science, you don't always want to be extremely cautious and incremental.

Trying to build a browser with 100 parallel agents is, in my view, doing science, more than adopting science. If they figure out that it can be done, then people will adopt it.

Trying to become a more productive engineer is adopting science, and your advice seems pretty solid here.


> The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.

This is fair, and it's what I've been doing. I still mostly code the way I've always coded. The AI stuff is mostly for fun. I haven't seen it transformatively speed me up or improve things.

So I make that assessment, cool. But then my CEO lightly insists every engineer should be doing AI coding because it's the future and manual coding is a dead end toward obsolescence. Uh oh, now I gotta AI-signal for the big guy up top!


> Test on mice, test in clinical trials, then go to market.

You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.


There is also opportunity cost. Most people ignore most things because there are simply not enough hours in a day.



