Hacker News

LLMs got good at search last year. You need to use the right ones though - ChatGPT Thinking mode and Google AI mode (that's https://www.google.com/ai - which is NOT the same as regular Google's "AI overviews" which are still mostly trash) are both excellent.

I've been tracking advances in AI assisted search here - https://simonwillison.net/tags/ai-assisted-search/ - in particular:

- https://simonwillison.net/2025/Apr/21/ai-assisted-search/ - April is when they started getting good, with o3 and the various deep research tools

- https://simonwillison.net/2025/Sep/6/research-goblin/ - GPT-5 got excellent. This post includes several detailed examples, including "Starbucks in the UK don’t sell cake pops! Do a deep investigative dive".

- https://simonwillison.net/2025/Sep/7/ai-mode/ - AI mode from Google



> LLMs got good at search last year. You need to use the right ones though - ChatGPT Thinking mode and Google AI mode (that's https://www.google.com/ai - which is NOT the same as regular Google's "AI overviews" which are still mostly trash) are both excellent.

I disagree. You might have seen some improvements in the results, but all LLMs still hallucinate badly on simple queries when you prompt them to cite their sources. You'll see ChatGPT insist that the source of an assertion is a 404 link, while claiming the link works.


This is just completely the opposite of what I've experienced with Claude and Gemini. Sources are identified, and if they're inaccessible they aren't included in the citations. I recently tried a quite specific search aimed at finding information about specific memos and essays cited in a 90s memo by Bill Gates, and it was successful at finding the vast majority of them, something Google Search failed at.

I don't want to say that it's a skill issue, but you may just be using the wrong tools for the job.


Oh boy, someone's claiming that ChatGPT is actually great now; time to ask it some questions.

I asked ChatGPT's thinking mode whether the ADM formalism is strictly equivalent to general relativity, and it made several strongly incorrect statements.

This is my favourite:

>3. Boundary terms matter

>To be fully equivalent:

>One must add the correct Gibbons–Hawking–York boundary term

>And handle asymptotic conditions carefully (e.g. ADM energy)

>Otherwise, the variational principle is not well-defined.

Which is borderline gibberish

>The theory still has 2 propagating DOF per spacetime point

This is pretty good too

>(lapse and shift act as Lagrange multipliers, not dynamical fields).

This is also, as far as I'm aware, just wrong, as the gauge conditions are nonphysical. In practice, lapse and shift are generally treated as dynamical fields.
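For anyone following along: the lapse and shift are the N and N^i in the standard ADM split of the metric (sketched here from memory):

```latex
ds^2 = -N^2\,dt^2 + \gamma_{ij}\,\bigl(dx^i + N^i\,dt\bigr)\bigl(dx^j + N^j\,dt\bigr)
```

where \gamma_{ij} is the spatial metric. In numerical relativity, lapse and shift are typically evolved with their own evolution equations via gauge conditions (e.g. 1+log slicing for N and a Gamma-driver condition for N^i), which is why treating them as mere multipliers doesn't match how they're actually handled in practice.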

Its full answer reads like someone with minimal understanding of physics trying to bullshit you. Then I asked it if the BSSN formalism is strictly equivalent to the ADM formalism (it isn't, because it isn't covariant)

This answer is actually more wrong, surprisingly

>Yes — classically, the BSSN formalism is equivalent to ADM, but only under specific conditions. In practice, it is a reparameterization plus gauge fixing and constraint handling, not a new theory. The equivalence is more delicate than ADM ↔ GR.

The ONE thing that doesn't change in the BSSN formalism is the gauge conditions

>Rewriting the evolution equations, adding terms proportional to constraints.

This is also pretty inadequate

>Precise equivalence statement

>BSSN is strictly equivalent to ADM at the classical level if:

...

>Gauge choices are compatible

>(e.g. lapse and shift not over-constraining the system)

This is complete gibberish

It also states:

>No extra degrees of freedom are introduced

I don't think ChatGPT knows what a degree of freedom is

>Why the equivalence is more subtle than ADM ↔ GR

>1. BSSN is not a canonical transformation

>Unlike ADM ↔ GR:

>BSSN is not manifestly Hamiltonian

>The Poisson structure is not preserved automatically

>One must reconstruct ADM variables to see equivalence

This is all absolute bollocks. "Manifestly Hamiltonian" is literally gibberish. Neither of these formalisms has a "Poisson structure", whatever that means, and sure, yes, you can construct the ADM variables from the BSSN variables, whoopee.

>When equivalence can fail

>Discretized (numerical) system -> Equivalence only approximate

Nobody explain to ChatGPT that the ADM formalism is also a discretisable system of PDEs!

>BSSN and ADM describe the same classical solutions of Einstein’s equations, but BSSN reshapes the phase space and constraint handling to make the evolution well-behaved, sacrificing manifest Hamiltonian structure off-shell.

We're starting to hit timecube levels of nonsense

It also gets the original question completely wrong: the BSSN formalism isn't covariant or coordinate-free. There's an alternative BSSN-like formalism called cBSSN (covariant BSSN), which is similar to CCZ4 and Z4cc (both covariant). It's an important property that the regular BSSN formalism lacks, and one of the ways you can identify it as not strictly equivalent to the ADM formalism on mathematical grounds. So in the ADM formalism you can express your equations in polar coordinates, but if you make that transformation in the BSSN formalism, it's no longer the same theory.
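To make the covariance point concrete (again sketching from memory): BSSN splits off a conformal factor from the spatial metric so that the conformal metric has unit determinant,

```latex
\tilde\gamma_{ij} = e^{-4\phi}\,\gamma_{ij},
\qquad
\phi = \tfrac{1}{12}\ln\det\gamma_{ij},
\qquad
\det\tilde\gamma_{ij} = 1
```

and \det\gamma_{ij} is a tensor density, not a scalar, so the unit-determinant condition only holds in the coordinates where it was imposed. Transform to polar coordinates and it breaks, which is exactly the non-covariance being described here.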

This has actually gotten significantly worse than the last time I asked ChatGPT about this kind of thing; it's more confidently incorrect now.


Perhaps try asking it a question that other people on HN could also answer, lol...


How did it do when you posed these arguments to it?



