Hacker News

I am always puzzled when biologists make analogies between living systems (or sub-systems) and human-designed devices (even complex devices). Human-designed devices have an essential property that biological systems do not -- they were designed. While radios may seem complex, they are built from a relatively small number of component types with very well understood behaviors and typically limited numbers of interactions. Biological systems have no such constraints -- they were not designed, there can be many different ways to do the same thing (in the same cell), and there are at least thousands, if not hundreds of thousands, of different components, many of which have lots and lots of interactions. Considering how messy the systems are, it is quite remarkable that genetics and biochemistry discovered the basis of heredity (or at least some of it), the genetic code, hundreds of signaling pathways, etc. But there is no missing language, because there was no design that used that language.


As a CS graduate, I've always had a related inferiority complex towards "real scientists" like biologists, chemists, physicists who confront the real mess of reality, instead of our cozy self-made thought-castles as computer people. We live in a world where everything is intelligible in principle, if we take the time to dig into how it works; they live in a world of true mystery. However, I take consolation in the fact that seemingly CS can touch the esoteric and mysterious regarding computability and computational complexity.


Agreed on the whole inferiority complex thing. Though in addition, honestly, this is why I've always liked computing and electronics - exactly as you said, everything is intelligible in principle, everything is intentionally designed and runs on simple rules, it's just a matter of how much you dig into it.

It's interesting how you'll often, in culture and media, see computers described as cold and inhuman. Like, I get it, of course. But in a way, there's also something very human about working with them, because every part of the stack was designed, meticulously and painstakingly (or maybe haphazardly) by other humans. The analog EE might worry mainly about the laws of physics, sure, and as you said, the CS theorist might fiddle with pure mathematics. But it seems the majority of working software engineers, and even a good chunk of electronics designers, will spend their time dealing mainly with things designed by other people.

I get the impression it's not the same as, like you said, the biologist dealing with the squishy, incomprehensible products of evolution, or the chemist and physicist who have to try to understand stuff like quantum mechanics. Those are the "cold" fields to me; ours is positively warm and cozy.


I think this contributes to collaboration problems across these types of experts. Experts of the squishy have to think in a holistic way that seems jumbled, irrational and arbitrary to the more structured, reductionist, modular, step-by-step input/output thinking of computer experts. In a squishy field you have to be constantly vigilant about unknown factors and have to integrate intuitions into your work. This can make the experience of collaborating on e.g. medicine+machine learning projects very frustrating for both sides. In my experience, biologists/medical people can't think in a layers-of-abstraction manner and kind of talk about every level at the same time. Even when they model something mathematically, they blend their sentences about the concrete use case with the abstract reasoning that is really independent of it and is just math.

Of course, at a high enough level, software development relies on judgment calls as well: what architecture will be most maintainable or future-proof, when to go with which principle (DRY, YAGNI, KISS, etc.), what approach to use, and so on will be similarly squishy in a way.


The idea that biologists have difficulty with abstractions probably reflects the “solidity” of modern molecular biology. Biologists were doing genetics, working out the rules of heredity, mapping genes onto chromosomes, understanding meiotic cross-over, for decades before the physical nature of the gene was understood. Likewise immunology is full of terms and factors that today have a biochemical explanation, but again, for years only existed as abstractions. Biology is much easier to understand today, now that we have molecules for almost everything. But biologists did decades of very insightful and productive work that was completely based on abstractions.


It's a bit like dropping a kid into programming without foundational courses on anything. They'll learn from the "outside inward". As I did, before doing a CS degree. Twelve-year-old me knew Excel, so I understood that function calls with parentheses were a thing, but I had no real idea what programming meant; still, I built HTML+JavaScript pages through trial and error. I didn't know what compilation was, and didn't understand whether lines of code executed sequentially or in parallel. I could configure port redirects in the router to set up multiplayer games without understanding what "protocol" or "port" meant.

Then at university I learned it all from the ground up, clearing up many misconceptions. But my misconceptions were also helpful. When we learned about OpenMP, I remembered that I had assumed "for" loops ran in parallel, and it turned out it was indeed possible to run them in parallel. I also had misconceptions about pass-by-value and pass-by-reference, but at least I had a prepared mental framework when we formally learned about it.

Biologists arrived at the scene similarly, without manuals or foundational courses handed over by God. So it had to start with this "competent ignorance" at first. You don't quite know what you're doing but it works. And then you figure out the building blocks.


This comment and the one above hold more than a handful of nuggets of gold, mark my words!


>But it seems the majority of working software engineers, and even a good chunk of electronics designers, will spend their time dealing mainly with things designed by other people.

Often this results in building things for 'customers' (internal or external, or even oneself) which leads to the complaint that you built the thing they asked for, not the thing they need.

There is occasional grateful acknowledgement that the only reason they now know what they need is that they got what they asked for, but usually it's somehow your fault that you got it wrong...

"Computer Science" as a term has always felt like a misnomer to me. It's a mash-up of building on what's already known (or assumed) and exploration by trying things out. It lacks the rigor of the 'hard' sciences -- hardly anyone writes mathematical proofs showing their computation is correct, let alone optimal.

But I agree with you in general. Because you're continually 'building'/creating stuff with a reasonably quick feedback loop, rather than measuring/proving whether your idea is valid (which can take a whole career or more), computing does seem warm/cozy.


There are some interesting parallels here.

If you go deep enough into electronics, you'll run into physics again. The engineers working at the chip fabs (and chip design) work very hard to shield us mortals from the messy details - the idealized transistors and gates we work with in the digital world are a useful abstraction. (I hope never to need to learn about quantum tunneling!)

In the same way, if you go deep enough into software design (whether "user-facing" or for other developers), you'll run into the messy vagaries of humans and our wetbrains.

Whether you're dealing with the subcultural expectations of your audience for a drop-down vs radio-button, or writing a tutorial on how to use your library, or thinking about what features your fancy new programming language needs, we rely on the abstractions and rules-of-thumb that we've learned. But those rules come from deep places, using results from neurology, sociology, psychology, etc!

Everything is deep, in every direction and all the way down. :)


Biological systems also have designs that emerged through evolution. Although the complexity may be at a different scale, the main difference is in the measurements you can make. Both biological and electrical circuits are dynamic systems whose designs give them emergent functional properties.

As the article describes, imagine having only the list of radio components instead of their topology (wiring diagram). If you know little about their design, the problem of figuring out how a radio works from this information becomes quite similar to figuring out how a biological system works.

The absence of a design diagram, and our inability to measure components at the molecular level without disturbing the state of the system, are the main reasons biological systems are so challenging to understand.


I am a bit more comfortable saying "Biological systems also have 'solutions' that emerged through evolution." That is certainly true. But unlike designers, evolution is perfectly happy to re-invent the wheel (even if it is a less functional wheel). So different, evolutionarily independent, processes may provide the same solution, and of course solutions are constantly re-used to provide slightly different solutions. So I'm not sure that "complexity" is the hardest problem, though it certainly doesn't make things easy. The diversity of solutions for the same problem makes generalization/abstraction even more difficult.


Sure, but... how often do we object when fellow techies make analogies between just about anything and their field of expertise? We feel that our knowledge makes us quite qualified to explain the economy, to chime in on microbiology, and so on.

It's really pretty universal.


Luca Cardelli has a very famous article elaborating on this topic, which I recommend for a slightly more hopeful perspective:

https://worrydream.com/refs/Cardelli_2005_-_Abstract_Machine...


I'm sorry, but I could not get beyond the first sentence in the abstract: "Living cells are extremely well-organized autonomous systems, consisting of discrete interacting components." Living cells (and most other biological systems) are extremely poorly-organized. They work. Often very inefficiently, with lots of duplication. Any "organization" is purely accidental (in some sense, by definition, since it arose through evolution).

Reading a bit farther (RNA is lists, DNA doubly-linked lists) is embarrassing. DNA is double-stranded, which does not make it a doubly-linked list. And somehow we fail to recognize the difference between template-driven molecules (DNA, RNA, proteins), which have a genetic history, and lipids and carbohydrates, which do not.


Living systems are not hyper-efficient, because efficiency leads to fragility, and they have evolved to survive in many scenarios.


As far as I remember, this paper draws an analogy not to system structures, but to the method of approach in their analysis. If a purely statistical approach to a designed system is presented as flawed, it is even more problematic when applied to more complex living systems. So the paper makes a good point here.


The two are not entirely at odds: modern bioengineering and synthetic biology are getting pretty good at designing living systems that actually work as designed, e.g. a cell factory that produces a useful molecule.

Modularity and simplicity do evolve naturally when selection pressures make those properties beneficial, and in such cases engineering becomes possible, and engineering analogies make sense.

A few examples that come to mind: DNA, modular assembly-line proteins, etc. In such cases there seems to have been selection pressure for rapid reconfiguration, which favors composable modular systems where one small change can lead to a new functional system, often in a way that follows simple, predictable rules. In some cases the systems are not messy at all, and rival the most carefully planned human-designed systems.


It doesn't matter whether something was evolved or designed; if there is a penalty on increased complexity, the system will converge toward low-Kolmogorov-complexity implementations. The information-theoretic complexity of the human genome is low enough that we have a structural argument that it is scrutable with a reasonable amount of effort.


The right column on page 181 addresses these arguments.


> Human designed devices have an essential property that biological systems do not -- they were designed.

... assuming you don't believe in creationism or some branches of pre-astronautics (aliens used genetic engineering to create/modify a lot of life on earth). ;-)


One is trying to understand things made by people; the other is basically trying to understand things made by adversarial networks over some billions of years.

Oh, and there were uncountable networks at the same time that were just lucky, or not.



