1GB RAM per 1TB storage is really only required if you enable deduplication, which rarely makes sense.
Otherwise, the only benefit more RAM gets you is better performance. But it's not like ZFS performs terribly with little RAM. It's just going to more closely reflect raw disk speed, similar to other filesystems that don't do much caching.
I've run ZFS on almost all my machines for years, some with only 512MiB of RAM. It's always been rock-solid. Is more RAM better? Sure. But it's absolutely not required. Don't choose a different file system just because you think it'll perform better with little RAM. It probably won't, except under very extreme circumstances.
ZFS doesn't really need huge amounts of RAM. Most of the memory usage people see is the Adaptive Replacement Cache (ARC), which will happily use as much memory as you throw at it, but will also shrink very quickly under memory pressure. ZFS really works fine with very little RAM (even less than the recommended 2GB), just with a smaller cache and thus lower performance. The only exception is if you enable deduplication, which will try to keep the entire Deduplication Table (DDT) in memory. But for most workloads, it doesn't make sense to enable that feature anyways.
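If you do want to bound the cache on a low-memory box, OpenZFS on Linux exposes the ARC ceiling as a module parameter. A quick sketch (the 512 MiB value is just an example):

    # cap the ARC at 512 MiB at runtime
    echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max

    # or persistently, in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=536870912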
In my experience, writing a few lines to handle errors is really not as big of a deal as a lot of people make it out to be. However, I've seen numerous times how error handling can become burdensome in poorly structured codebases that make failure states hard to manage.
Many developers, especially those in a rush, or juniors, or those coming from exception-based languages, tend to want to bubble errors up the call stack without much thought. But I think that's rarely the best approach. Errors should be handled deliberately, and those handlers should be tested. When a function has many ways in which it can fail, I take it as a sign to rethink the design. In almost every case, it's possible to simplify the logic to reduce potential failure modes, minimizing the burden of writing and testing error handling code and thus making the program more robust.
To summarize, in my experience, well-written code handles errors thoughtfully in a few distinct places. Explicit error handling does not have to be a burden. Special language features are not strictly necessary. But of course, it takes a lot of experience to know how to structure code in a way that makes error handling easy.
SerenityOS serves as a cool side project for those who like to tinker with OS dev. I don't think it was "born" with any other goals in mind. Neither was their browser project; it just happened to turn into something a lot more serious.
This is big news. Defer can simplify control flow a lot, especially in early-return cases like handling errors. No longer do we have to write deeply nested ifs or the madness that is goto. Resource acquisition and cleanup can now be right next to each other.
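A minimal sketch of what that looks like (syntax follows the current proposal as I understand it and may still change before the standard lands; today you'd emulate this with goto or compiler-specific cleanup attributes):

    #include <stdio.h>
    #include <stdlib.h>

    int process(const char *path) {
        FILE *f = fopen(path, "r");
        if (!f) return -1;
        defer { fclose(f); }        // cleanup declared right next to acquisition

        char *buf = malloc(4096);
        if (!buf) return -1;        // fclose(f) still runs on this early return
        defer { free(buf); }

        // ... read and process ...
        return 0;                   // both deferred blocks run here too
    }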
Now we just have to hope that standardization goes well. The C standard moves very slowly, and that's probably a good thing. But defer is such a simple yet powerful feature that the cost/benefit ratio should easily justify its inclusion.
In fact, the standard reflects what is happening in the field. So, use `defer` where it is already possible, and ping your compiler vendor if yours doesn't have it yet. If there is enough demand, `defer` may even happen for the next standard release.
> maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.
In my experience, that's not quite accurate. I'm working on a GUI program that targets Windows NT 4.0, built using a Win11 toolchain. With a few tweaks here and there, it works flawlessly. Microsoft goes to great lengths to keep system DLLs and the CRT forward- and backward-compatible. It's even possible to get libc++ working: https://building.enlyze.com/posts/targeting-25-years-of-wind...
What does "a Win11 toolchain" mean here? In the article you link, the guy is filling missing functions, rewriting the runtime, and overall doing even more work than what I need to do to build binaries on a Linux system from 2026 that would work on a Linux from the 90s : a simple chroot. Even building gcc is a walk in the park compared to reimplementing OS threading functions...
Strongly agree with this article. It highlights really well why overcommit is so harmful.
Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.
But it feels like a lost cause these days...
So much software breaks once you turn off overcommit, even in situations where you're nowhere close to running out of physical memory.
What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory. Large virtual memory buffers that aren't fully committed can be very useful in certain situations. But it should be something a program has to ask for, not the default behavior.
>terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software
Assuming your process will never crash is not safe. There will always be freak things like CPUs taking the wrong branch or bits randomly flipping. Part of designing a robust system is being tolerant of things like this.
Another point, also mentioned in this thread, is that by the time you run out of memory the system is already going to be in a bad state, and now you probably don't have enough memory to even get out of it. Memory should have been freed earlier, by telling programs to lighten up on their memory usage or by killing them and reclaiming the resources.
It's not harmful. It's necessary for modern systems that are not "an ECU in a car"
> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.
Big software is not written that way. In fact, writing software that way means you have to sacrifice performance, memory usage, or both, because you either
* need to allocate exactly what you always need and free it when it gets smaller (if you want to keep the memory footprint similar), and that will add latency
* over-allocate, and waste RAM
And you'd end up with MORE memory-related issues, not fewer. Writing an app where every allocation can fail is just a nightmarish waste of time for 99% of apps that are not the onboard computer of a spaceship or plane.
> What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory.
mmap with PROT_NONE is such a reservation and doesn't count towards the commit limit. A later mmap with MAP_FIXED and PROT_READ | PROT_WRITE can commit parts of the reserved region, and mmap calls with PROT_NONE and MAP_FIXED will decommit.
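Something like this rough sketch (Linux, sizes arbitrary, error handling trimmed; flags per my understanding of the commit accounting):

    #include <sys/mman.h>

    #define RESERVE_SIZE (1UL << 30)   /* reserve 1 GiB of address space */
    #define COMMIT_SIZE  (1UL << 20)   /* commit the first 1 MiB */

    int main(void) {
        /* Reserve: address space only, nothing counts toward the commit limit. */
        void *base = mmap(NULL, RESERVE_SIZE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED) return 1;

        /* Commit: make the first chunk usable; this now counts as committed memory. */
        void *p = mmap(base, COMMIT_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (p == MAP_FAILED) return 1;

        /* Decommit: map the range back to PROT_NONE to give the pages back. */
        mmap(base, COMMIT_SIZE, PROT_NONE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE, -1, 0);

        munmap(base, RESERVE_SIZE);
        return 0;
    }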
That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues. There are a lot of other reasons a failure could happen, but running out of memory is just one of them. Sure, I can force the system to use swap, but that would degrade performance for everything else, so it's better to let it die, log the result, and check your dead-letter queue after.
There's still plenty of mandatory reading. It's not unusual for high schoolers to have to read at least two books per semester.
Here's the problem though: It's just too easy to... you know... not do it. Teachers have no way of reliably telling the difference between students who complete their reading assignments honestly and those who make do with summaries and AI assistance. Don't ask me how I know ;-)
It's the closest thing to a Unix successor we ever got, taking the "everything is a file" philosophy to another level and making it easy to share those files over the network to build distributed systems. Accessing remote resources is easy and robust on Plan9, while on other systems we need to install specialized software with bad interoperability for each individual use case.
Plan9 also had some innovative UI features, such as mouse chording to edit text, nested window managers, the Plumber to run user-configurable commands on known text patterns system-wide, etc.
Its distributed nature should have made it perfect for today's world, with mobile, desktop, cloud, and IoT devices all connected to each other. Instead, we're stuck with operating systems that were never designed for that.
There are still active forks of Plan9 such as 9front, but the original from Bell Labs is dead. The reasons it died are likely:
- Legal challenges (Plan9 license, pointless lawsuits, etc.) meant it wasn't adopted by major players in the industry.
- Plan9 was a distributed OS during a time when having a local computer became popular and affordable, while using a terminal to access a centrally managed computer fell out of fashion (though the latter sort of came back in a worse fashion with cloud computing).
- Bad marketing and positioning itself as merely a research OS meant they couldn't capitalize on the .com boom.
- AT&T lost its near endless source of telephone revenue. Bell Labs was sold multiple times over the coming years, a lot of the Unix/Plan9 guys went to other companies like Google.
The reason Plan 9 died a swift death was that, unlike Unix – which hardware manufacturers could license for a song and adapt to their own hardware (and be guaranteed compatibility with lots of Unix software) – Bell Labs tried to sell Plan 9, as commercial software, for $350 a box.
Version 1 was never licensed to anyone. Version 2 was only licensed to universities, for an undisclosed price. Version 3 was sold as a book; I think this is the version you are referring to. Note, however, that this version came with a license that only allowed non-commercial use of the source code. It also came with no support, no community, and no planned updates (the project was shelved half a year later in favor of Inferno).
More than the price tag, the problem is that Plan 9 wasn't really released until 2004.
Had UNIX been priced like other OSes, instead of going for a song as you say, it would never even have taken off. It was more about the openness and being crazy cheap compared to the alternatives than anything else.
The team moved on to work on Inferno, which Plan 9 aficionados tend to forget about. It was also a much better idea as a UNIX evolution, Plan 9 combined with a managed userspace, but that didn't go down well either.
Probably the fact that it's a pretty terrible idea. It means you take a normal properly typed API and smush it down into some poorly specified text format that you now have to write probably-broken parsers for. I often find bugs in programs that interact with `/proc` on Linux because they don't expect some output (e.g. spaces in paths, or optional entries).
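A classic example is /proc/<pid>/stat: the process name is wrapped in parentheses and may itself contain spaces or even ')', so the obvious whitespace-splitting parser is wrong. A small sketch of the usual workaround (scan from the last ')'):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[4096];
        FILE *f = fopen("/proc/self/stat", "r");
        if (!f) return 1;
        size_t n = fread(buf, 1, sizeof buf - 1, f);
        fclose(f);
        buf[n] = '\0';

        char *end = strrchr(buf, ')');      /* last ')', not the first one */
        if (!end) return 1;

        char state;
        int ppid;
        /* The fields after the comm are state, ppid, pgrp, ... */
        if (sscanf(end + 1, " %c %d", &state, &ppid) == 2)
            printf("state=%c ppid=%d\n", state, ppid);
        return 0;
    }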
The only reasons people think it's a good idea in the first place is a) every programming language can read files so it sort of gives you an API that works with any language (but a really bad one), and b) it's easy to poke around in from the command line.
Essentially it's a hacky cop-out for a proper language-neutral API system. In fairness it's not like Linux actually came up with a better alternative. I think the closest is probably DBus which isn't exactly the same.
I think you have to standardize a basic object system and then allow people to build opt-in interfaces on top, because any single-level abstraction will quickly be pulled in countless directions for as many users.
Probably that not everything can be cleanly abstracted as a file.
One might want to, e.g., have fine control over how a network connection is handled. You can abstract that as a file, but it becomes increasingly complicated and can make API design painful.
> Probably that not everything can be cleanly abstracted as a file.
I would say almost nothing can be cleanly abstracted as a file. That’s why we got ioctl (https://en.wikipedia.org/wiki/Ioctl), which is a bad API (calls mean “do something with this file descriptor” with only conventions introducing some consistency)
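For example, here are two everyday Linux ioctls that do completely unrelated things with a file descriptor (a small sketch; the request codes and argument types are per-driver convention, which is exactly the problem):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        struct winsize ws;
        if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)    /* terminal size */
            printf("%d rows x %d cols\n", ws.ws_row, ws.ws_col);

        int pending = 0;
        if (ioctl(STDIN_FILENO, FIONREAD, &pending) == 0)  /* bytes waiting to be read */
            printf("%d bytes pending on stdin\n", pending);
        return 0;
    }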
If everything can be represented as a Foo or as a Bar, then this actually clears up the discussion, allowing the relative merits of each representation to be discussed. If something is a universal paradigm, all the better to compare it to alternatives, because one will likely be settled on (and then mottled with hacks over time; organic abstraction sprawl FTW).
The fact that everything is not a file. No OS actually implements that idea including Plan9. For example, directories are not files. Plan9 re-uses a few of the APIs for them, but you can't use write() on a directory, you can only read them.
Pretending everything is a file was never a good idea and is based on an untrue understanding of computing. The everything-is-an-object phase the industry went through was much closer to reality.
Consider how you represent a GUI window as a file. A file is just a flat byte array at heart, so:
1. What's the data format inside the file? Is it a raw bitmap? Series of rendering instructions? How do you communicate that to the window server, or vice-versa? What about ancillary data like window border styles?
2. Is the file a real file on a real filesystem, or is it an entry in a virtual file system? If the latter, then you often lose a lot of the basic features that make "everything is a file" attractive, like the ability to move files around or arrange them in a user-controlled directory hierarchy. VFS like procfs are pretty limited. You can't even add your own entries like adding symlinks to procfs directories.
3. How do you receive callbacks about your window? At this point you start to conclude that you can't use one file to represent a useful object like a window, you'd need at least a data and a control file where the latter is some sort of socket speaking some sort of RPC protocol. But now you have an atomicity problem.
4. What exactly is the benefit again? You won't be able to use the shell to do much with these window files.
And so on. For this reason Plan9's GUI API looked similar to that of any other OS: a C library that wrapped the underlying file "protocol". Developers didn't interact with the system using the file metaphor, because it didn't deliver value.
All the post-UNIX operating system designs ignored this idea because it was just a bad one. Microsoft invested heavily in COM and NeXT invested in the idea of typed, IDL-defined Mach ports.
Sure, why would they? COM was rendered irrelevant by the move to the web. Microsoft lost out on the app serving side, and when they dropped the ball on ActiveX by not having proper UI design or sandboxing they lost out on the client too. Probably the primary use case outside of legacy OPC is IT departments writing PowerShell scripts or Office plugins (though those are JS based now too).
COM has been legacy tech for decades now. Even Microsoft's own security teams publish blog posts enthusiastically explaining how they found this strange ancient tech from some Windows archaeological dig site, lol. Maybe one day I'll be able to mint money by doing maintenance consulting for some old DCOM based systems, the sort of thing where knowing what an OXID resolver is can help and AI can't do it well because there's not enough example code on GitHub.
Because since Windows Vista all new APIs are COM based; the Win32 C API is basically stuck in a Windows XP view of the universe, with minor exceptions here and there.
Anyone that has to deal with Windows programming quickly discovers that COM is not the legacy people talk about on the Internet.
Sure I mean, obviously the Windows API is COM based and has been for a long time. My point is, why seriously invest in the Windows API at all? A lot of APIs are only really being used by the Chrome team at this point anyway, so the quality of the API hardly matters.
Game development for one, and there are still plenty of native applications on Windows to choose from, like most stuff in graphics, video editing, DAWs, life sciences, and control automation. Thankfully we don't need Chrome in a box for everything.
Your remark kind of proves the point that the Web is now the ChromeOS platform; you could have said "browser" instead.
They have to an extent. The /proc file system on Linux is directly inspired by Plan 9, IIRC. Other things like network sockets never got that far and are more related to their BSD kin.
Abstractions are inherently a tradeoff, and too much abstraction hurts you when the assumptions break.
For a major example, treating a network resource like a file is neat, elegant, and simple while the network works well. However, once you have unreliable, slow, or intermittent connectivity, the abstraction breaks: you have to handle the fact that it's not really like a local file, and your elegant abstraction ends up mangled with all kinds of things so that your apps can cope.
UNIX is for dorks. We needed a Smalltalk-style "everything is an object and you can talk to all objects", but thankfully we got Java and "object oriented" C++. The Alto operating system was leaps and bounds ahead of the Mac and Windows 3.1, and it took Steve Jobs a decade to realize "oh shit, we could have just made everything an object." Then we get WebObjects and the lousy iPod and everything is fascist history.
I had a UNIX zealot phase back in the 1990's, until the university library opened my eyes to the Xerox PARC world: tucked away at the back were all the manuals and books about Smalltalk from Xerox. Eventually I also did some assignments with Smalltalk/V, and found a way to learn about Interlisp and Mesa/Cedar as well.
My graduation project was porting a visualisation framework from Objective-C/NeXTSTEP to Windows.
At the time, my X setup was a mix of AfterStep or windowmaker, depending on the system I was at.
Dichotomizing the world as either UNIX based or Windows is pretty myopic. I want the computer architecture Douglas Engelbart dreamed of. I want a realization of the ideas of Seymour Papert and Brenda Laurel.
We're still, after 50 years, fundamentally using Xerox Alto clones. What would a "modern Alto" play like? What if we spent X,XXX,XXX's of dollars to create an "into the future time machine" like they did at PARC? What if the project had the high ethics of Bush and Engelbart as an operational paradigm?
...and yes Lisp is the best programming language. Suck it.