
And when you track fine-grained dependencies that way, upgrading an insecure library still leaves a dozen other copies of the old version in memory, so you remain vulnerable.

Dependency hell isn't just about dealing with incompatible dependencies. It has another side: forcing upgrades when they are needed for very, very good reasons.



The old version of the function could get security patches.

There could be a dialog that warns the user and offers to disable functionality.

We will have to eventually do an accurate permission system anyway.


Who's going to write the patches?

Out-of-support branches aren't suddenly going to start being supported again, just because you've changed how dependencies are managed.

Let's say there are 10 versions of my function. A critical vulnerability is discovered, and I publish a fix in version 11. I hire someone to backport the fix to all 10 historic versions of the function.

Now I have 21 distinct versions of the same function.

Oh no, another critical vulnerability has been discovered!

I fix it in version 22, and hire a team to backport the fix to all 21 other versions.

Now I have 43 versions of the same function!

Oh no...

Sure, you could be a bit more pragmatic about which history nodes you backport to, but if every fix spawns a backport of every existing version, the version count roughly doubles with each vulnerability (10 → 21 → 43 → ...), which is exponential growth. It's hard enough to manage backports at a whole-project level, let alone per-function.
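The growth in the scenario above can be sketched as a toy calculation. This is only a model of the worst case described in the comment (assumption: every fix is published as one new release plus one new backported version of each existing version):

```python
def versions_after(initial: int, fixes: int) -> int:
    """Count distinct versions after `fixes` vulnerabilities,
    if each fix adds one new release plus a backport of every
    existing version (so the count goes n -> 2n + 1)."""
    count = initial
    for _ in range(fixes):
        count = count * 2 + 1
    return count

print(versions_after(10, 1))  # 21, as in the example above
print(versions_after(10, 2))  # 43
print(versions_after(10, 5))  # 351 -- grows like 11 * 2**k - 1
```

Even trimming the backport set only changes the constant, not the shape of the curve.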


The old version of the function could get security patches.

But then it wouldn't be the old version of the function any more; it would be a brand new function. If you allow arbitrarily changing 'old' functions, then you've lost all the guarantees of stability and reproducibility that this approach was supposed to offer.
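One way to see why a patched "old" version is really a new function: assuming (as these per-function versioning schemes typically do, e.g. content-addressed code) that a function's identity is a hash of its definition, any patch necessarily produces a new identity. A minimal sketch under that assumption:

```python
import hashlib

def identity(source: str) -> str:
    """Identify a function by the hash of its source text
    (the content-addressing assumption)."""
    return hashlib.sha256(source.encode()).hexdigest()[:12]

old = "def f(x): return x + 1"
patched = "def f(x): return x + 2  # security fix"

# The 'patched old version' hashes differently, so callers pinned
# to the old identity still get the vulnerable code.
print(identity(old) == identity(patched))  # False
```

So "patching the old version in place" and "every version is immutable and reproducible" are mutually exclusive properties.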



