
It is sometimes a lie machine because it lacks grounding in verification. Humans get more grounding than language models, but even we are not 100% there: remember the antivax hysteria. The most grounded field is science, yet even in scientific papers many findings don't replicate. Verification is hard at every level and requires extensive work. In particle physics, almost all scientists clump together around the CERN accelerator because it is nearly the only source of verification they have (I exaggerate a bit).

It's going to be important to develop AI methods to test and verify; I think unverified model outputs are worthless verbiage. Verification can be based on references, code execution, physical simulations, lab experiments, and even language-based simulations.
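The code-execution flavor of verification can be sketched in a few lines: run the model-proposed code in an isolated namespace and accept it only if it passes known test cases. Everything here is illustrative — `candidate_source`, the function name `solve`, and the test cases are assumptions, not any particular system's API.

```python
# Hypothetical sketch: verify a model-proposed function by executing it
# against known test cases instead of trusting the model's claim.

candidate_source = """
def solve(n):
    return sum(range(1, n + 1))
"""

def verify(source, test_cases, func_name="solve"):
    """Execute candidate code in a fresh namespace and check every test case.

    Any exception (syntax error, missing function, runtime crash) counts
    as a failed verification. Note: exec() is NOT a real sandbox; a
    production verifier would need process isolation and resource limits.
    """
    namespace = {}
    try:
        exec(source, namespace)
        func = namespace[func_name]
        return all(func(arg) == expected for arg, expected in test_cases)
    except Exception:
        return False

cases = [(1, 1), (4, 10), (100, 5050)]
print(verify(candidate_source, cases))          # correct candidate passes
print(verify("def solve(n): return n", cases))  # wrong candidate fails
```

The point is that the verdict comes from execution, not from the model: a plausible-sounding but wrong answer fails the same way an obviously wrong one does.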

In a few years the situation is going to flip: AI is going to become more reliable than humans. Tested on millions of cases, it will be more trustworthy than us; no human can be tested to that extent. It's going to be interesting to see how we react to super-valid AI. Our guiding role is going to shrink more and more; we will be the children.


