
I don't think claude code is like 3D printing.

The difference is that 3D printing still requires someone, somewhere to do the mechanical design work. It democratises printing but it doesn't democratise invention. I can't use words to ask a 3D printer to make something. You can't really do that with claude code yet either. But every few months it gets better at this.

The question is: How good will claude get at turning open-ended problem statements into useful software? Right now a skilled human + computer combo is the most efficient way to write a lot of software. Left on its own, claude will make mistakes and suffer from a slow accumulation of bad architectural decisions. But, will that remain the case indefinitely? I'm not convinced.

This pattern has already played out in chess and go. For a few years, a skilled Go player working in collaboration with a go AI could outcompete both computers and humans at go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.

There are already some companies using fine tuned AI models for "red team" infosec audits. Apparently they're already pretty good at finding a lot of creative bugs that humans miss. (And apparently they find an extraordinary number of security bugs in code written by AI models). It seems like a pretty obvious leap to imagine claude code implementing something similar before long. Then claude will be able to do security audits on its own output. Throw that in a reinforcement learning loop, and claude will probably become better at producing secure code than I am.
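To make the loop concrete, here is a minimal sketch of the audit-and-fix cycle described above. Everything here is an invented placeholder for illustration - `audit` and `patch` stand in for a security-audit model and a code-generation model; neither is a real API.

```python
# Hypothetical audit-and-fix loop: keep patching until the auditor is silent
# or we run out of rounds. audit() and patch() are invented stand-ins for a
# security-audit model and a code-generation model.
def audit_loop(code, audit, patch, max_rounds=5):
    for _ in range(max_rounds):
        findings = audit(code)
        if not findings:                  # no issues left: accept the code
            return code, True
        code = patch(code, findings)      # feed findings back into the generator
    return code, False                    # gave up after max_rounds

# Toy stand-ins: the "auditor" flags a marker call, the "patcher" removes it.
def toy_audit(code):
    return ["calls insecure()"] if "insecure()" in code else []

def toy_patch(code, findings):
    return code.replace("insecure()", "secure()")

fixed, ok = audit_loop("x = insecure()", toy_audit, toy_patch)
```

In a real reinforcement-learning setup the auditor's findings would become the reward signal rather than a patch instruction, but the shape of the loop is the same.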



> This pattern has already played out in chess and go. For a few years, a skilled Go player working in collaboration with a go AI could outcompete both computers and humans at go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.

Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.

Though geeks absolutely like raving about go and especially chess.


> Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.

Yeah but, does that actually matter? Is that actually a reason to think LLMs won't be able to outpace humans at software development?

LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.


This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they'll go up or down, or how long they'll stay at certain levels.

I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months". Maybe they will progress as fast, maybe they won't.


I’m not claiming I know the exact timing. I’m just seeing a trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am. Their skill at programming doesn’t seem to be levelling out yet - at least not as far as I can see.

If this trend continues, the models will be better than me in less than a decade. Unless progress stops, but I don’t see any reason to think that would happen.


> I can't use words to ask a 3d printer to make something

Setting aside any implications for your analogy: this is now possible.


Meshy?


That's one. You can also do it just with Gemini: https://www.youtube.com/watch?v=9dMCEUuAVbM

Workflow can be text-to-model, image-to-model, or text-to-image-to-model.


The design work remains.

I’m not a fan of analogies, but here goes: Apple don’t make iPhones. But they employ an enormous number of people working on iPhone hardware, which they do not make.

If you think AI can replace everyone at Apple, then I think you’re arguing for AGI/superintelligence, and that’s the end of capitalism. So far we don’t have that.


There is verification and validation.

The first part is making sure you built to your specification; the second is making sure the specification you built to was correct.

The second part is going to be the hard part for complex software and systems.


I think validation is already much easier using LLMs. Arguably this is one of the best use cases for coding LLMs right now: you can get claude to throw together a working demo of whatever wild idea you have without needing to write any code or write a spec. You don't even need to be a developer.

I don't know about you, but I'd much rather be shown a demo made by our end users (with claude) than get sent a 100 page spec. Especially since most specs - if you build to them - don't solve anyone's real problems.

Demo, don't memo.


Hm, how much real life experience do you have in delivering production SW systems?

Demo for the main flow is easy. The hard part is thinking through all the corner cases and their interactions, so your system robustly works in real world, interacting with the everyday chaos in a non-brittle fashion.


Well he said - anyone can (or will soon) vibe-program their own MS Word - there is no way he is a programmer, sorry. The complexity of these systems is crazy. Unless he meant an HTML text area with a "save" button - then sure, why not.


> there is no way he is a programmer, sorry

Lol I've been programming for 30 years.

> The complexity of these systems is crazy. Unless he meant ah HTML text area with "save" button - then sure, why not.

What do you see as the difference between an LLM making an HTML text area and a save button, and an LLM making MS Word? It just sounds like a scaling problem to me. We've been scaling computers since long before I was born. My first computer was a 386 with 4MB of RAM. You needed a special add-in chip to enable floating point calculations. Now look at what we have.

As far as I can tell, the only difference between opus 4.6 and some future AI model that could code up MS word is a difference in scale. Are you betting against the entire computing (software and hardware) industry being unable to scale LLMs past their current point? That seems like a really bad bet to me. Especially seeing how far they've come in the last few years. Claude code can already do some quite complex tasks. I got it to write a simple web based email client for me yesterday. It took about an hour in total. It has some bugs, but the email client works.

We scaled hard drives. We scaled down silicon chips. We scaled digital camera sensors. And display resolutions. And networking bandwidth. We went from the palm pilot to the first iphone to modern phones. Do you really think we'll be unable to scale AI models?


>> industry being unable to scale LLMs past their current point

100% bet - no way any "AI" will be able to generate you anything close to a complex piece of software like MS Word within reasonable time and budget. Given infinite time and money - sure, anything is possible, just like a trillion monkeys randomly printing "War and Peace" once in a trillion years in some remote galaxy. I don't even understand your confidence given how much guidance and hand-holding LLMs need at the moment to produce anything useful.


Looks like a failure of imagination?

There are clearly two camps - one points to existing deficiencies, another - to trends, and getting wildly different predictions.


Yep. Claude today? No way can it achieve this. It can barely write a working C compiler.

I'm looking at the trend line. A few years ago it couldn't make a simple webpage. Now it can make a bad C compiler with thousands of dollars' worth of tokens. What does it look like in another few years? Or another 2 decades?


Hard disagree, clients/users often don't know what the best/right solution is, simply because they don't know what's possible or they haven't seen any prior art.

I'd much rather have a conversation with them to discuss their current problems and workflow, then offer my ideas and solutions.


I don’t think you are using validation in the same sense as PC


> The second part is going to be the hard part for complex software and systems.

Not going to. Is. Actually, it always has been; it isn’t that coding solutions wasn’t hard before, but verification and validation cannot be made arbitrarily cheap. This is the new moat: if your solutions require QA (in the widest sense) that is time-consuming and expensive in dollar terms, it becomes the single barrier to entry.


There was a recent discussion about how having AI write the validation for the code is a good approach. If you have formal proofs for your code, your QA needs go down.
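Short of full formal proofs, even machine-checkable properties shrink the QA burden. A minimal sketch of the idea in Python - a randomized property check, the lightweight cousin of a proof (the merge function and its properties are invented here for illustration):

```python
import random

# Code under validation: merge two sorted lists into one sorted list.
def merge_sorted(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

# Properties: for any sorted inputs, the output is sorted and is a
# permutation of the inputs. Checked on random cases rather than proved.
random.seed(0)
for _ in range(1000):
    a = sorted(random.choices(range(50), k=random.randrange(10)))
    b = sorted(random.choices(range(50), k=random.randrange(10)))
    m = merge_sorted(a, b)
    assert m == sorted(m)               # output is sorted
    assert sorted(m) == sorted(a + b)   # no elements lost or invented
```

A proof assistant would establish these properties for all inputs, not just sampled ones, but the writing-the-spec step is the same - and that is the part the comments above argue stays hard.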


Amazon Kiro starts with making the detailed specification based on human input in natural language.


> I can't use words to ask a 3d printer to make something.

You can: the words are in the G-code language.

I mean: you learned foreign languages in school, so you are already used to formulating your request in a different language to make yourself understood. In this case, that language is G-code.


This is a strange take; no one is hand-writing the G-code for their 3D print. There are ways to model objects using code (e.g. OpenSCAD), but that still doesn't replace the actual mechanical design work involved in studying a problem and figuring out what sort of part is required to solve it.


Funny you should mention that.

I spent years writing a geometry and gcode generator in grasshopper. I wasn’t generating every line of gcode (my typical programs are about 500k lines), but I wrote the entire generator to go from curves to movements and extrusions.

I used opus to rewrite the entire thing, more cleanly, with fewer bugs and more features, in an afternoon. Admittedly it would have taken a lot longer without the domain expertise from years of staring at geometry and gcode side by side.
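For readers unfamiliar with what "curves to movements and extrusions" means, here is a toy version of such a generator, reduced to its core: turning a polyline into G1 moves with extrusion proportional to distance travelled. This is a sketch of the idea only, not the commenter's Grasshopper pipeline - real generators also handle layers, retraction, speeds, and much more.

```python
import math

# Toy polyline-to-G-code generator: emit a travel move to the start point,
# then extruding G1 moves, accumulating extrusion at a fixed rate per mm.
def polyline_to_gcode(points, extrusion_per_mm=0.05, feedrate=1200):
    lines = [f"G1 F{feedrate}"]                 # set feedrate
    e = 0.0                                     # cumulative extrusion (E axis)
    (px, py), rest = points[0], points[1:]
    lines.append(f"G0 X{px:.3f} Y{py:.3f}")     # travel to start, no extrusion
    for x, y in rest:
        e += math.dist((px, py), (x, y)) * extrusion_per_mm
        lines.append(f"G1 X{x:.3f} Y{y:.3f} E{e:.4f}")
        px, py = x, y
    return lines

# A 20 mm square perimeter, closed back to the origin.
square = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]
gcode = polyline_to_gcode(square)
```

Multiply this by layer heights, infill patterns, and material quirks and you get the 500k-line programs mentioned above - which is why the years of domain expertise still mattered when rewriting it with the model.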


Produce the G-code needed to 3D print the object of the attached illustrations from various angles.

Produce the 3D images of xxx from various angles. xxx should be able to do yyy.


Re: Produce the 3D images of xxx from various angles. xxx should be able to do yyy.

This is the tricky part. Do you know anything about mechanical engineering?



