It seems like many people focus on reasoning capabilities of the GPT models.
To me, the real value is in its industrial-scale pattern recognition. I can point it at something I vaguely know, or ask it to expand on a concept for further research.
Within the last few hours I have used it to kick-start my research on AT1 bonds and why Credit Suisse let them default, and it helped me recall that the GenServer pattern was what I was looking for in Elixir when you have a facade that calls out to an independent process.
Yep, it's saved me a lot of time on data transformation tasks. For instance, I wanted to convert Tailwind's colors to CSS variables. I had the JSON listing all of the names and hex colors; I just needed to rewrite the names and convert the hex values to base 10. A rather straightforward mapping, but I'd still have had to write the function for it. Instead I asked ChatGPT for the function, read it over, and it looked good. Boom, done in under a minute. What's funny is that ChatGPT then started spitting out the expected output of the function, and it was right! Perhaps surprising on the face of it, but really it's just simple pattern mapping.
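For what it's worth, a transformation like that is only a few lines. Here's a minimal sketch in Python, assuming the JSON maps Tailwind color names to `#rrggbb` strings and that "base 10" means decimal `R G B` channel values (the variable naming is my own, not from the original):

```python
import json

def hex_to_rgb(hex_color):
    """Convert '#rrggbb' to a 'R G B' string of decimal channel values."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return f"{r} {g} {b}"

def to_css_variables(colors):
    """Turn {'blue-500': '#3b82f6', ...} into a :root block of CSS variables."""
    lines = [":root {"]
    for name, hex_color in colors.items():
        lines.append(f"  --color-{name}: {hex_to_rgb(hex_color)};")
    lines.append("}")
    return "\n".join(lines)

# Example input in the shape described above (hypothetical sample values)
colors = json.loads('{"blue-500": "#3b82f6", "red-500": "#ef4444"}')
print(to_css_variables(colors))
```

Emitting the channels as bare decimal triplets is what lets you later compose them with `rgb(var(--color-blue-500) / 0.5)` for opacity, which is presumably why the conversion was wanted in the first place.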
For most things, verification is far easier than finding the answer. The same goes for Stack Overflow, where I'd say at least half the answers don't actually address my query; but once I have a potential solution, it's easy to look up the documentation for the key function call, or simply run it if it's short and doesn't seem dangerous.