
I am CTO of a large global data center provider posting with throwaway account.

As a technologist, I really appreciate what they have done. Impressive work of high quality; however, I don't understand who this is for.

The meaningful market for data center hardware is pretty well defined in two clusters: people that build/make custom gear (such as Hyperscalers) and people that buy HP/Cisco/IBM/Dell... (blades or hyper-converged). To scale, you obviously want your DCs as standardized as possible.

Until this company has a certain size and scale, no one serious will trust their black boxes at any type of scale.

Beyond the tech, how would support services really work? We can have a technician from any of the large vendors on-site in less than 2 hours. In some of our DC clusters we actually have vendor support personnel 24x7 on-site with vendor paid spare parts inventory. How would they provide that level of service?

Maybe I am not the target audience for this offering.



I think they're mostly targeting customers who want an AWS- or GCP-like experience from a developer perspective (compute is abstracted and you can provision it with an API, etc.), but want to own their own compute infrastructure and have it on-prem. That market has mostly had to cobble together consumer-inspired HP/Cisco/whatever stuff historically (like, one of the early talks about the Oxide value proposition was complaining about why every server in the rack needs a CD drive, which was the norm from Dell), because the kinds of stripped-down, super-efficient hardware designs the hyperscalers were building weren't available to the general public. So this is that: hyperscaler-like technology for people who want to own it themselves.
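Concretely, the "provision compute with an API" experience looks something like this. A minimal sketch; the endpoint path and field names here are my own illustrative guesses, not the actual Oxide (or AWS/GCP) API:

```python
# Hypothetical sketch of provisioning a VM through a cloud-style REST API.
# The endpoint path and field names are illustrative guesses, not a real API.
import json

def instance_create_request(project: str, name: str, ncpus: int, memory_gib: int) -> dict:
    """Describe a hypothetical POST /v1/instances call for a new VM."""
    return {
        "method": "POST",
        "path": f"/v1/instances?project={project}",
        "body": json.dumps({
            "name": name,
            "ncpus": ncpus,
            "memory": memory_gib * 2**30,  # bytes
        }),
    }

req = instance_create_request("demo", "web-01", ncpus=4, memory_gib=16)
print(req["path"])  # /v1/instances?project=demo
```

The point is that racking the hardware is the vendor's problem; the operator only ever sees this kind of declarative request.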

I think the motivations for why people would want to own their own are probably a mix of financial (at a certain scale there's a tipping point and it gets cheaper), and regulatory/compliance/whatever, like if it's healthcare data, or defense, etc.


Thank you for the response. The problem you described has been solved by the large vendors with hyper-converged offerings for many years, so it sounds like Oxide might be a bit late to the party.

I do understand well the rationale of running your own servers vs hyperscalers, as well as the repatriation trend, but I see Oxide at best as a niche player.


> The problem you described has been solved by the large vendors with Hyper-converged offerings for many years

All Oxide has to do to win that market is ship software and firmware that doesn't suck, because the incumbents are clearly incapable of doing that.


It’s a bit like Apple entering the Bluetooth headphone market. Tons of players, but they all suck for a variety of reasons. Apple announced the AirPods, which pair extremely well with your phone, and they really don’t suck.


The bar really is sooooo low


Oxide has customers who have been waiting for real integration and innovation from HP, Dell, and Cisco and are ready to take a risk on something new.

I set up some Cisco server hardware a few years ago, and by the time I'd managed to order it I was already wishing I had a better choice. When it arrived and the remote serial was unusable for fixing the BIOS ("American Megatrends copyright 1984" at 9600 baud? No thanks.), I was ready to give up and go back to AWS.

This is a market ready for a kick in the ass, which Oxide plans to deliver.


This might boil down to "why does anyone need Stripe when there's Visa"?


Stripe and Visa are two different parts of the same ecosystem though. You can't really replicate Stripe with entirely Visa services.


Genuine question: you mention hyper-converged; can you point to anything that comes even close to the experience you presumably get from the Oxide offering?


Part of the problem here is that the people who make the purchasing decisions (like this CTO guy) don't care about "the experience" because they're not the ones unpacking boxes and plugging in cables.

They pay other people to do that and they don't really care if it's a miserable time. And if it takes days instead of hours, who cares? Rarely is someone setting up a data center under the gun (unless you're Elmo and we all saw how that went).

Factors like scalability and ongoing support are much more top of mind.

Not saying that Oxide can't address this, and I love Oxide's focus on the experience, but I think this bottom-up approach to convincing customers is going to be a steep climb.

But they seem to be up for steep climbs, so I wish them all the best!


Yeah, I think this is the interesting question. My (naive, uninformed) interpretation is that Oxide is betting there is some area in the overlap of the Venn diagram between "people who have reasons not to run their businesses on the public clouds" and "people who are not suits but technologists and care about the user experience of setting up and maintaining their infrastructure because they want to do it themselves / without hiring lots of people and service contracts".

I'm not sure how big that space is, or whether it is likely to grow or shrink over time, but I'm intrigued by the proposition they're testing here!


I wonder if we'll see this space you describe grow as AI startups and scaleups - or even internal departments in large enterprises - start to increase in number...

I know there's already plenty of concerns about giving cloud-based AI services access to corporate data, so maybe there's a growing market there.


> we all saw how that went

I mean... it went quite well, all things considered? There was some site instability for a brief period of time, and now it's back to working normally. The initial hypothesis was that the data center was vital and so couldn't be shut down quickly. Turns out the hypothesis was incorrect. So I'm not quite sure that makes the point you're trying to make.

Of course Twitter is definitely not all use cases, so trying to generalize from one data point isn't a great idea in general.


I was making a bit of a joke there, sorry. It was more about the rather rare scenario of setting up a data centre in a hurry than about the long-term quality of the result.

Elmo's experiment came to mind so I tossed it in there.


Who is Elmo?


Elon Musk, Twitter/X CEO, went to the data center and started unplugging things. You can’t make this stuff up! https://www.techdirt.com/2023/09/12/the-batshit-crazy-story-...

Why “Elmo”? https://www.businessinsider.com/twitter-insiders-users-calli...


I got curious about the book mentioned, and found an EPUB:

data:text/plain;base64,aHR0cHM6Ly92ay5jb20vd2FsbC00ODQxMzg3Ml82NjAyNg==


Elon Musk


Dell's VxRail has been very popular and successful. Not perfect, but pretty good, and I think the market leader.

HPE Simplivity has done well.

Also, Nutanix.


These seem to solve the ‘host your own cloud’ problem, but are still standard server blades requiring a ton of surrounding hardware and maintenance. Oxide is entirely integrated.


"A ton of surrounding hardware" is vague, generally speaking it's bog standard network switches that are needed.

Oxide is entirely integrated with their own custom (though based on OSS?) networking via Tofino-based switches and software. Could be good, but a lot of subtle bugs can occur at this layer. It's a risk.


VxRail's the market leader? Last I touched it, it was a mountain of integration faults and vendors pointing fingers at each other. Maybe it's easier in larger or more traditional environments, but I'd be apprehensive of hyperconverged if VxRail's the best available.


Evo:RAIL from VMware had those issues, but Dell VxRail has been relatively smooth at scale IME.


Although it might be true that the existing vendors don't provide a good experience, I'm still a little miffed that the OP does not even mention the existence of the existing vendors.


This I don't get. It's marketing copy, from a small upstart. They don't need to link to their established competitors...


OK, then at least I'm glad someone in the comments pointed out their competitors.


Agreed, it's why I read stuff like HN comments! Marketing copy is not a good source for comparative information.


Lack of local support does make them a niche player, but everything starts from a niche, and those who believe they don't start from a niche disperse their efforts. So, with the hypothesis that Oxide is smart, the question therefore is: what niche is Oxide focusing on?


I mean, I don't know anything about this space like you do, I'm just an intrigued observer, but my impression is just: They are going to try to compete with those large vendors. Maybe they won't be able to! Such is life, and indeed is the most common outcome for new ventures.

But precious few new ventures have ever entered a market where nobody said some form of "I don't get it, there are already established products in this space, how will this new thing ever get traction?". And yet, lots of new ventures turn out to be competitive with the established players in their market, for one reason or another.


>I think the motivations for why people would want to own their own are probably a mix of financial (at a certain scale there's a tipping point and it gets cheaper), and regulatory/compliance/whatever, like if it's healthcare data, or defense, etc.

Yeah, there's definitely a market in defense here. Because even though Azure/AWS offer GovCloud, it's inadequate for non-civilian connected infrastructure. This offers the benefits of writing "modern software" and deploying it in a similarly modern fashion while keeping it running completely isolated. Imagine being able to make your command and control operations actually decentralized and not vulnerable to a missile strike on a single datacenter.


Essentially this same sentiment, applies to any number of things:

- Why would anyone buy the Framework laptop, they don't have nearly the support/pedigree that Dell, HP, etc. has?

- Why would anyone use iPhones in the enterprise/IT world, they don't have nearly the support/pedigree that Blackberry, Microsoft, etc. has?

- Why would anyone use Google Fiber, they don't have nearly the network or support that AT&T, Spectrum, etc. has?

- Why would anyone ever use Linux (in enterprise, let's say), compared to the support and adoption that Microsoft/Windows offers?

- ...

I'm purposely picking different examples with varying degrees of success or adoption. I am not claiming that Oxide will be an instant category-dominating success. I don't think Oxide expects to replace HP/Cisco/Dell/etc. overnight, and I don't think a business has to launch with that ambition from the start, to prove that it's worth launching.

But this take is so repetitive as to be bordering on cliche -- I don't know if you're self-aware enough to realize, you are literally just a living embodiment of the "Innovator's Dilemma" right now...


Your examples are strange.

Framework is niche.

iPhones do have the pedigree.

Google Fiber is barely used.

Most folks do use a supported Linux distribution, they don’t roll their own.


> iPhones do have the pedigree

Not in 2007-2008 which is equivalent to Oxide today.


Apple was an incredibly well established company and the initial iPhone was not used in enterprise… it didn’t really start to take off there until iPhone 4 (2010-2011).

Not to mention the comparison is inane to begin with. Using an iPhone for your enterprise and moving your tech infra to a relatively unknown company are not equivalent at all.


You must have missed the "beleaguered" era.


it's so funny to me that you're harping on this analogy when it's clear you are so stubborn you won't accept anyone else's answer anyways.


iPhones became popular through the bring your own device movement. You aren’t going to see that with racks in a data center


I think it was the other way around. The success of the iPhone (and Android) was a large factor behind the BYOD movement.


iPhone was truly transformative.

This just seems like a twist on hyper-converged infrastructure + open source.

I mean, I love their design. I just don’t think it’s special enough to warrant mass adoption. And I certainly wouldn’t be deploying at scale in an enterprise environment with zero widespread adoption.

Small entities will just use cloud same as always. Large companies have a multitude of unique needs that won’t all be catered for by a single box. The big vendors will clean up on that front as usual.


They were not immediately adopted by enterprise customers, in fact. It took years for that.


> Framework is niche.

"Niche" is not a dirty word. Framework is niche, but if they can make their cost structure work (which I have no idea whether or not they can), that's ok. New products don't need to dominate their market in order to be succesful.

(But I do agree that Google Fiber is a bad example / a good example of a new product in an established market not working.)


Bryan Cantrill is famously against vendor lock-in. He wrote a[n in]famous blog about the "FYO point" while at Sun. Oxide may be going for customers that also have the same aversion to vendor lock-in.

One thing that Bryan understands is that you can "lock" the customer in with great products and services, as well as continuing development, while also making the customer feel secure in having a way out should you turn into a company that treats locked-in customers as cash cows. The open source strategy (it is a strategy for Bryan and Oxide) is there precisely to do this: make the customer feel they can leave you, and then they never do.

For your deeply technical staff, having source code access is a big deal too, since it enables them to better understand the products they use.

How big is the market of sufficiently-vendor-lock-in-averse customers? I don't know -- that's not my remit. But there's the size of that market right now, and whether Oxide (and any other companies with similar visions) can grow that market by sheer willpower. I make no predictions.

What if Oxide can get the next Netflix to use their stuff instead of a public cloud?


Oxide is the definition of vendor lock in. All of their hardware is unique... even down to the choice of fans. Fan burns out? Now you've got to buy another one... from them.

One of the amazing shifts in the last 20 years was realizing that commodity hardware, when deployed correctly, could do the job.


Not sure if we're talking metaphorical fans or literal ones but assuming the latter: replacement is covered under warranty. And for me personally, the "amazing shift" that you describe was in fact a decade-long experiment that left me with an inescapable conclusion: commodity hardware cannot, in fact, do the job -- and not for lack of trying!


Hi Brian, thanks for the response.

Metaphorical fans, but also fans, and everything else. It was an example from the blog post. I looked at that super innovative and cool backplane. Sure, warranty covers it for the first N years, but then what? What happens when things turn into a Tesla situation and you get horror stories of delays and poor results?

I think 'commodity' is generalized at this point. Given the choice of Supermicro and Oxide, I'm going to pick Supermicro. They can deliver me a clean rack of machines too. Why would I go with Supermicro? Big company, lots of products, lots of choices; I can work with them to get what I want.

Oxide is too singular. It is one rack, one design, one set of specifications. Don't get me wrong, there is some value in that... and I'm sure you'll sell plenty of hardware, but I'm not finding the value in it for myself. That's the part that I mean by 'commodity'.


You are ignoring the software stack, which is where a huge portion of the value-add is found.


I'm not ignoring the software stack, I just don't see any value in the additional vendor lock in on it. I'd rather use open source stuff developed by a large community of people and not a single small vendor tied to a very specific and limited hardware stack.


I don't think they're planning on forcing you to buy their products :)

As far as I can tell, there exists no "open source stuff developed by a large community of people" to run integrated server rack hardware. But if you're using such a thing, then I'd like to know what it is!


If their SW and FW source code is MPL 2.0, that's good enough to limit the extent of vendor lock-in. Sure, it would take time to take over maintenance of that code and then add support for different HW and so on, but there can be a cottage industry of consultancies that can help if ever Oxide vendor lock-in or bankruptcy becomes a problem.


No, it isn't good enough. This is a hardware play: you could theoretically take that software and run it on whatever hardware you want, but you're not going with this business because of their open source software. You're going with it because they are making innovative hardware.

If you're buying millions of dollars of this stuff, what says you're going to get support for it in 5 years? Who knows... maybe Cisco wakes up and gives them an offer they can't refuse and then shuts down the company.

By the way, people endlessly gripe about Google deprecating things and that's just software...


Their hardware isn't really innovative though... Even the hardware integration isn't really innovative, as others here have pointed out.

My two cents is that this isn't really an innovation business model; it's an execution business model. Their proposition seems to be that they can execute a server rack with integrated hardware and open-source software so well that customers will love their product.

Maybe they'll just end up with tons of nerds out there thinking, "I really wish I could justify investing in Oxide racks, but it just doesn't make sense for my business / I just can't sell it to my CIO", but hey, maybe not!


Solution looking for a problem.


You are doing the low-brow dismissal thing.

It's better to just think to yourself: "this doesn't seem like a useful product to me, I don't understand why people find it interesting, oh well", and move on.


No, I'm taking all of your words and distilling it down to a common saying.

I actually do think what they are doing is innovative across the board. They are taking all of the common feedback from anyone who has run large-scale data centers (which I have) and applying it to a brand new product. Unfortunately, they are doing it in a way that is extremely vendor specific.

  `import { Api } from "@oxide/api"`
No, thanks.

  `Understand and debug issues faster`
How is this any different than throwing Netdata onto a server?

They've got 231 repos on GitHub...


> How big is the market of sufficiently-vendor-lock-in-averse customers?

Very. Just look at the USA defense spending budget. If you’ve ever worked on AWS-govcloud or secret, you know there’s a market here.

This has huge use for military too. Imagine having a black site or off-grid location but still needing a rack of things. What if you could spin up an entire enterprise infrastructure by just loading up this rack?

If this team manages to get this thing government certified, there’s a lot of profit to be had.


> What if you could spin up an entire enterprise infrastructure by just loading up this rack?

This is already being done quite successfully with existing offerings. Flexpod, VxRack, Nutanix, there are a ton of options.


And now there's another competitor with a different take on it!

"Other products already exist" is never a strong counterargument to the existence of new products.


You are strongly misinterpreting both how real-world customers perceive vendor lock in and how DoD procurement works. Everything here is so far off from reality I don't even know where to begin.


Unfortunately your comment is of very limited value until you actually begin explaining.


> "FYO point"

https://web.archive.org/web/20080705140230/http://blogs.sun....

And "FYO" stands for Fuck You Oracle.


You may not be aware of the pain that many large, non-software companies currently have on AWS. Gigantic monthly bills (hundreds of thousands per month) coming from subdivisions that aren't capable or motivated to reduce their AWS budget or usage. To the office of the CTO, Oxide's value proposition (buy instead of rent) could be very motivating.

"Hey subdivision A, could we buy a few Oxide racks and move your workload there from AWS? It looks like they would have all the storage and compute you need. Yes? Ok, in 36 months we'll pay your current IT department employees a bonus of 50% of whatever it has saved us vs your current AWS budget."


If those subdivisions aren't capable of reducing their AWS usage how do you imagine that they are capable of migrating to an Oxide rack?

Or in other words, is migrating to Oxide somehow assumed to be easier than migrating to some other non-locked-in cloud infrastructure?


Yeah, my example communication was a bit contrived.

But the point still stands. There's a lot of AWS spend happening (even after being optimized) that is frustrating when you look at the raw numbers and consider how much server capability you could outright buy for the same amount. And Oxide would make it so much easier to run a bunch of VMs (and infrastructure-as-code) than standard racked x86 servers.

Oxide appears to be a complete shoo-in for companies that used to run a bunch of VMs on racked Dell/HP servers, migrated their VMs/storage to AWS, hate their monthly AWS bills, and still have the old server rooms available.


Having worked somewhere with an AWS spend of $30k/mo on _virtually nothing_, I can attest to this. I think most of it was sales demos that never got cleaned up.


Converting your OpEx to CapEx is not a remedy for sloppy bookkeeping.


But if it significantly reduces total cost, it could still be a useful bandaid.


Most people who can do this (who aren't as entrenched in AWS) end up moving to a cheap VPS provider, so they don't have to pay for all the internetworking and facilities (they throw redundancy out the window) and don't have to shoulder the IT burden of heavy-lifting all their workloads onto this whole new "Oxide" system.


> end up moving to a cheap VPS provider

Like which?


Hetzner, Linode, Rackspace, take your pick.


You're writing like the status quo is a law of nature. At best it's been that way for a decade or two.

How many times has computing hardware changed in response to the economics of the parts and the economics of the businesses buying hardware?

There are downsides to new models, but money solves a lot of problems

So I don’t know about Oxide in particular, but it seems short sighted to bet on stagnation

Also Oxide is doing what Google did 20 years ago, and Facebook open sourced ~10 years ago, so it’s not exactly unproven


I guess there must be a largish market for this, since AWS introduced Outposts to provide the "cloud" to on-prem industries. I feel like this is competing with that market.

Since many of those use cases probably already run extensive on-prem infrastructure, this could appeal to them. AWS Outposts talks about industries like healthcare, telecom, media and entertainment, manufacturing, or highly regulated spaces like financial services. I've heard of media companies that process footage from things like IMAX cameras, which can produce tons of TBs of data sometimes for just 5 minutes' worth of footage. That would simply be too cost prohibitive, in bandwidth alone, to try to move around in the cloud, and you don't want to have to wait for things like AWS Snowball or whatever.
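A back-of-envelope check on that bandwidth point. The data volume and link speed here are illustrative assumptions, not real production numbers:

```python
# Back-of-envelope transfer time for bulk data over a network link.
# Numbers are illustrative assumptions, not real production figures.
def transfer_hours(terabytes, gbps):
    """Hours to move `terabytes` of data over a `gbps` link,
    assuming the link is fully saturated with no protocol overhead."""
    bits = terabytes * 8 * 1e12  # decimal terabytes -> bits
    return bits / (gbps * 1e9) / 3600

# Moving 100 TB of raw footage over a saturated 10 Gbps uplink:
print(round(transfer_hours(100, 10), 1))  # 22.2 (hours)
```

Nearly a full day per 100 TB, best case, is why "keep the compute next to the data" keeps winning for these workloads.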

While I think the space is "niche" those niche spaces are not small. Big companies with big budgets.


I think they are a competitor to the 'HP/Cisco/IBM/Dell... (blades or hyper-converged)' part of this. They are basically saying 'we will do it better'.

Their marketing and story is supposed to convince you that you could save money running their things rather than Dell's. And instead of paying for VMware, you get open source software for most of it.

> Until this company has a certain/size and scale, no one serious will trust their black boxes at any type of scale.

I guess that's a risk they are willing to take. Some customers might wait for a few years until they see Oxide being big enough.

Other customers might be sick of HP/Dell and might take a risk on a smaller company.

Since they seem to have some customers, some organizations are willing to take the risk to get away from Dell and friends.

So I think you are the target audience, but you are not willing to risk it until they are larger, less likely to fail, and have a good story with regard to support. I assume they have a support story of some kind; no idea what it is beyond 'Contact Sales'...

In terms of 'trusting they will continue to exist', all they can do is survive for a few years until they are pretty established; then more people will be willing to buy their product. And hopefully in that time their existing customers rave about how amazing the product is.

Let's hope they don't go bust because all potential customers are just waiting. Then again, you can't blame anybody for not buying from a startup.


> Their marketing and story is supposed to convince you that you could save money running their things rather then Dell. And instead of paying for VMWare you get Open Source Software for most of it.

As someone who has dealt with mostly Debian and Ubuntu in recent years, every time I had to deal with even small numbers of RHEL licenses I often asked myself "Why do we put up with this?" (I know why, but still… such overhead.)


I agree, it seems the same to me. They have the allure of "we do it different" along with the promise of "we do it well".

Unrelated, but a "costumer" makes costumes. I think you were looking for "customer".


I remember the first time I heard of someone being fired for buying IBM, a thing that many people thought would never happen.


I think that ship sailed in the 90s. From the 80s to the 90s, IBM dropped like 50% of the people that worked there, and lots of companies were moving away from mainframes in droves. I'm sure some people got fired for sticking to mainframes too long and wasting money. And likely some companies went bust in the late 90s for having an IT infrastructure based on IBM mainframes.

That's mostly speculation but it seems like it has to be true.


I think you actually said it yourself without realizing it.

The current state of the art is a fucking train wreck. Choose any layer of abstraction and start picking at it and you'll find mostly gaffer tape. Scaling up sucks. Rewriting old monoliths to microservices wasn't a panacea, except maybe for cloud vendor profits. You said the hardware market is well defined, which is another way of saying ossified, particularly when you start comparing it to the expected pace of software. How's Open19 going? How much liquid cooling do you have in customer racks?

In my opinion this is ultimately a software offering that happens to come on vertically integrated hardware. It offers a complete, highly polished API to a minuscule-scale DC. If they can find market fit and make a little bit of money, the next step is to start making deep improvements to things behind the abstraction. From where you sit, you are well aware that you could make huge improvements inside the DC demarc if you could do it without disrupting customers. But you’re probably limited by the terrible, terrible APIs you would be forced to use, that don’t offer the capabilities you need, from vendors who would be happy to chat but would provide timeframes in the 6-18 month range for even a modest improvement.

So this, IMO, is about defining that interface to a DC on a single hardware platform with vertical integration, and then scaling up from there. The current hyperscalers will adopt the suite of APIs and capabilities directly, make lower quality copies, or die.

Of course all assuming they’re successful enough to get the revenue machine going in the first place which seems likely to me, given the absolute dogshit state of the cloud world today, the trend towards multi cloud, and the business case for moving certain loads back on prem or to a bare metal colo++ offering.


> you obviously want your DCs as standardized as possible. Until this company has a certain/size and scale, no one serious will trust their black boxes at any type of scale.

This gets to the crux of my first thoughts when I read the marketing copy: can they deliver (reliability).

They do admit very clearly that what they're doing is hard and that at many points during development they were reluctant to be too ambitious (for obvious reasons), but at each stage they did just that: proceeded with the most ambitious option. That takes a huge amount of self-belief that might be warranted or might be hubris. As you point out, ultimately the success of Oxide won't only hinge on the ability to deliver on that self-belief, but more on their ability to convince prospective customers that they have the competency to do what no one else can.

I fully support every part of their approach in theory but my wizened traveled self thinks it smells a little too good to be true.


Maybe you are not. This offering definitely sounds like something for on-prem, not a large data center. Basically, if your core competence is hosting stuff, you don't need the extra value they provide. But if your core competence is basically anything else and you just need more than a single server under the IT guy's desk, then this begins to look very exciting.


You might be right, but if a customer doesn't have the size/scale, they won't value the unique proposition from Oxide. I hope I am wrong, because it would be great to see a new player with a fresh perspective in the hardware market.


Bryan Cantrill claims the cost of running these could be worth it for many small companies, and if you compare it to EC2 costs for running, let's say, CI, I find that easy to believe. Maybe it won't be cheaper than Hetzner and co., but that's not what they are competing against on a product level.


If you could drop-ship a rack of gear to the colo before, with the puny compute and bandwidth potential in that number of rack units, didn't it just become massively more appealing?


To be completely honest, this is for the idealists out there. Those of us who are itching to replace our vSphere with oVirt because 1) we have the time and skill to do it, 2) we believe in open source and 3) we believe we can make huge savings by using open source.

I expect the Oxide supporters to have a hard few years ahead of them of finding bugs in high-throughput environments. But at the end of the day it will be worth it just to have another competitor in a pretty boring playing field.


They literally had CTOs of F100 companies that want to buy this gear as part of the VC pitch. Because, as you can imagine, your question was the first question VCs asked.


Purchasing a ton of hardware from a startup seems extremely risky for a F100. It's one thing to be left holding the bag when a SaaS startup goes under, but when you just spent millions on gear that is now completely unsupported... eek.

I'd be curious to see what companies are interested, Oxide doesn't have any logos on their website which is a little odd given the space.


The fact that they're willing to absorb that risk is a strong signal they're solving a real problem. It's been a while since I've worked somewhere with on-prem hardware, but I remember long build-outs and unhelpful vendors: a RAID card firmware bug bricked our SAN. Our extremely expensive support contract gave us front-row seats to finger-pointing between the card manufacturer and Dell, but ultimately no solution was provided to us. Our IT director, who was absolutely furious, basically had to twist Dell's arm to get them to send us replacement hardware. The whole thing was a giant fiasco.

This is the secret none of those existing vendors (Dell, Lenovo, HP, etc) are willing to tell you: They have very limited technical expertise on what they sell you and outside of some specialized troubleshooting they can do, they'll defer to their vendors. The understanding is that you've got the intellectual horsepower on staff to cope with their various shortcomings.


> Purchasing a ton of hardware from a startup seems extremely risky for a F100.

If you're working for a megacorp nothing tends to happen quickly, so there will be a slow roll-out over a multi-year period as old hardware gets phased out and new hardware is brought in for a refresh.

If there's a hiccup at any point they'll simply keep the previously purchased stuff running and start a new roll-out with another vendor next fiscal.

> I'd be curious to see what companies are interested, Oxide doesn't have any logos on their website which is a little odd given the space.

1. They're just starting out. 2. Some of their customers want to be discreet (or at least start out that way):

> Oxide customers include the Idaho National Laboratory as well as a global financial services organization. Additional installments at Fortune 1000 enterprises will be completed in the coming months.

* https://www.bloomberg.com/press-releases/2023-10-26/oxide-un...


> completely unsupported

That equation changes when the entire software stack is open source.


What about the hardware?


In terms of physical hardware repairs or replacement, sure, that could be a risk when the supplier is an early-stage startup. Though I wonder if the open nature would also make it easier to create, e.g., third-party sleds.


I doubt any F100 would go all-in on a new vendor like Oxide for anything at first. I bet they could spend millions on a few racks as a trial and have some groups work with it. A couple of years down the line, maybe they start expanding usage if they like it.

Of course that framing itself is bad - F100 companies aren't usually quite that monolithic. By the time they get that big there's a heterogeneous set of processes, equipment, systems, etc. Some parts of the company may use Oxide right away because they see it as a solution, others may keep using the IBM mainframes, and others still will keep using racks/blade servers from Dell for eternity.


F100s already waste so many millions on projects that never see the light of day; why couldn't they throw money at Oxide and see if it works better than their AWS contract?


Megacorp Boondoggles sounds like a lucrative market.


I wonder how to break into it.

I bet the trick is to keep the perfect balance of high-profile wins and losses. You want to be defensible as an expert to the non-technical folks while obviously being a fall guy to the technical ones, I guess.

I think these guys aren’t that, though, they seem to be selling a real, cool product.


Yes, an industry where FUD and being SOTA are coveted, like cybersecurity, can really generate enterprise interest with the right connections. A few contracts of sorts to trade for logos on a slide deck, plus high-profile references, can get you further. Does your product actually do anything that is useful? Maybe, not yet, maybe never, but you were convincing enough, and hey, 1/10 startups fail.


Maybe something like Dell VXBlock didn't exist when they pitched their idea?

Any hardware contracts are very long term and you'll have a hard time getting me to switch to a different vendor, especially when they also want to come in with an unknown operating system which I have to run.


Dell's hyperconverged solutions did exist when the company started. We believe we can compete.


When my company looked into it, VXBlock looked less like fully integrated hardware and software and more like a smattering of pre-selected components wired into a box, ready to go with VMware, with a support contract. If you're already in deep with VMware, they're probably great. But the software side made my head spin. It looks like Oxide is a better fit for orgs that are more IaC.


I guess time will tell. VXBlock just looks like an amalgamation of SKUs Dell already has. Oxide was built as a "clean slate" from the firmware up, with every minute detail designed to offer a compelling product for companies that want hyperscaler-style systems for their on-prem workloads.


Is it now bad to have a few offerings that you can tailor to your needs?

With compute one size doesn't fit all, maybe you need more disk space or maybe you need GPUs... I'm sure Oxide will come out with different spec modules over time.

The idea is similar: It's a rack which runs virtualized workloads and you don't have to think much about individual machines.
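The "don't think much about individual machines" model is, loosely, the hyperscaler API pattern: you declare the instance shape you need and the rack's control plane picks the hardware. A minimal sketch of that declarative style, assuming a generic JSON-ish request format (the field names here are illustrative, not Oxide's actual API):

```python
from dataclasses import dataclass, asdict

@dataclass
class InstanceSpec:
    """A declarative instance request: you specify a shape, not a machine."""
    name: str
    vcpus: int
    memory_gib: int
    disk_gib: int

def to_api_request(spec: InstanceSpec) -> dict:
    # The rack's control plane, not the operator, decides which
    # physical host ends up running this instance.
    return {"kind": "instance", "spec": asdict(spec)}

req = to_api_request(InstanceSpec("web-1", vcpus=4, memory_gib=16, disk_gib=100))
print(req["spec"]["name"], req["spec"]["vcpus"])  # → web-1 4
```

The point of the shape-only request is that placement, firmware, and failure handling stay below the API line, which is exactly the abstraction on-prem buyers haven't had.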


> Beyond the tech, how would support services really work? We can have a technician from any of the large vendors on-site in less than 2 hours. In some of our DC clusters we actually have vendor support personnel 24x7 on-site with vendor paid spare parts inventory. How would they provide that level of service?

Forgive me for being naive, but that sounds like a great way to differentiate their offering.

E.g. the famous Maytag Repairman, who sits at his desk doing nothing because Maytag washers are so reliable that he has nothing to do.

> In a time in which the laundry appliances of major manufacturers had reached maturity, differing mostly in minor details, the campaign was designed to remind consumers of the perceived added value in Maytag products derived from the brand's reputation for dependability.

https://en.wikipedia.org/wiki/Maytag_Repairman#Ol'_Lonely


> however I don't understand who this is for.

I can't think of a lot of examples right now, but I can already imagine one type of customer for such a product: universities wanting high-performance computing. At my alma mater, the HPC cluster/server was in a slightly distant location. Using something like AWS wouldn't sit well with almost any uni admin, and running a server on premises isn't a great idea in a place that gets the occasional (but rare) power or internet cut. Outsourcing some of these responsibilities might have been nice for our admin.


Every single one of the companies you listed started off with unverified hardware, some of which failed in the field and still does today. Why wouldn't you try something else, considering the status quo? This is a nothing-to-lose situation as long as you don't put all your chips into the bet. Every single datacenter has capacity and a need to diversify. It's not even a heroic feat of risk taking.


Having worked in a variety of related fields, my take on this is that hypervisors like VMware have largely eliminated the need for proprietary hardware with “support” contracts. Just buy 3x the capacity in white boxes and turn off anything that breaks.

The problem is that the industry hasn’t noticed this yet. The hyperscalers have, and are printing money as a consequence.
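The 3x-white-box argument above can be made concrete with back-of-the-envelope arithmetic. A rough sketch, where every price is an invented placeholder rather than a real vendor quote:

```python
def cheaper_to_overprovision(
    needed_units: int,
    white_box_price: float,
    branded_price: float,
    annual_support_per_unit: float,
    years: int,
    overprovision: float = 3.0,
) -> bool:
    """Compare buying `overprovision`x commodity capacity (no support
    contract; dead boxes are simply switched off) against branded gear
    plus a per-unit support contract over the hardware's lifetime."""
    white_total = needed_units * overprovision * white_box_price
    branded_total = needed_units * (branded_price + annual_support_per_unit * years)
    return white_total < branded_total

# Placeholder prices: 3x $3k white boxes vs $8k branded + $1k/yr support, 5 years.
print(cheaper_to_overprovision(10, 3_000, 8_000, 1_000, 5))  # → True ($90k vs $130k)
```

Whether the inequality actually holds depends on real quotes, power and space costs, and how often the white boxes fail, so treat this as the shape of the argument, not its conclusion.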


It's basically Sun Microsystems but RIR (rewrite in Rust). The incumbents will eventually release similar products and will eat their lunch.


I might not be some brilliant CTO, but this product obviously seems to fit _very well_ between two very large product offerings: fully cloud being one and fully on-prem being the other. I’ve often thought it would be cool if you could _buy_ a physical space in a server farm, like literally an area of square feet where the racks are all serviced and provided, but meets all of the needs a company who doesn’t want fully cloud has. This is actually way more clever than that.

A lot of people didn't understand the need for the iPad mini or Mac mini or myriad other products without an obviously existing market. I think it's extremely keen of Oxide to find the borders between these offerings and fit very snugly between them.


Re: who is this for.

My guess, someone who buys hyper-converged infrastructure today (e.g. Nutanix, VCE, etc) ... but that market is getting smaller and smaller by the day.


I wonder if they aim to target small operations and startups initially.


What about from an Integrator or VAR? Would you buy it then?


It's an acquihire play. It might be possible to beat the hyperscalers with $100 billion and some really good engineers. But building custom racks ain't the angle to do it.


I really wonder what their mental image of product-market fit is. They're undeniably doing some cool stuff, but I, and it seems like everybody else, have to do some serious mental gymnastics to figure out who they sell this to and what needs it fulfills or niches it can fit into.



