This should have been no struggle for Sega. They basically invented the modern 3D game and dominated in the arcade with very advanced 3D games at the time. Did they not leverage Yu Suzuki and the AM division when creating the Saturn?
Then again, rumor has it they were still stuck on 2D for the home market, then saw the PlayStation specs, freaked out, and ordered two of everything for the Saturn.
> This should have been no struggle for Sega. They basically invented the modern 3D game and dominated in the arcade with very advanced 3D games at the time
Way different challenges!
The Model 2 arcade hardware cost over $15,000 when new in 1993. Look at those Model 1 and Model 2 boards: that's some serious silicon, multiple layers of PCB stacked with chips. The texture mapping chips came from partnerships with Lockheed Martin and GE. There was no home market for 3D accelerators yet; the only companies doing it were building graphics chips for military training use and high-end CAD work.
Contrast that with the Saturn. Instead of a $15,000 price target, they had to design something they could sell for $399 that wouldn't consume a kilowatt of power.
Although, in the end, I think the main hurdle was a failure to predict the 3D revolution that Playstation ushered in.
> The Model 2 arcade hardware cost over $15,000 when new in 1993. Look at those Model 1 and Model 2, that's some serious silicon.
That's an even bigger miss on Sega's part then.
Having such kit out in the field should have given Sega good insight into "what's hot, and what's not" for (near-future) gaming needs.
Which features are essential, what's low hanging fruit, what's nice to have but (too) expensive, performance <-> quality <-> complexity tradeoffs, etc.
Besides having hardware & existing titles to test-run along the lines of "what if we cut this down to... how would it look?"
Not saying Sega should have built a cut-down version of their arcade systems! But those could have provided good guidance & inspiration.
But they had the insight. And the insight they got was that 3D was not there yet for the home market: it was unrealistic to have good 3D for cheap (e.g. no wobbly textures, etc.), as it was still really challenging to have good 3D even on expensive dedicated hardware.
Yeah. The 3D revolution was obvious in hindsight, but not so obvious in the mid 1990s. I was a PC gamer as well at the time so even with the benefit of seeing things like DOOM it wasn't necessarily obvious that 2.5D/3D games were going to be popular with the mainstream any time soon.
A lot of casual gamers found early home 3D games kind of confusing and offputting. (Honestly, many still kind of do)
We went from highly evolved, colorful, detailed 2D sprites to 3D graphics that were frankly rather ugly most of the time, with controllers and virtual in-game cameras that tended to be rather janky. Analog controllers weren't even a prevalent thing for consoles at this point.
Obviously in hindsight the Saturn made a lot of bad bets and the Playstation made a lot of winning ones.
The secret to high 3D performance (particularly in those simpler days before advanced shaders and such) wasn't exactly a secret. You needed lots of computing horsepower and lots of memory to draw and texture as many polys as possible.
The arcade hardware was so ridiculous in terms of the number of chips involved, I don't even know how many lessons could be directly carried over. Especially when they didn't design the majority of those chips.
Shrinking that down into a hyper cost-optimized consumer device relative to a $15K arcade machine came down to design priorities and engineering chops, and Sega just didn't hit the mark.
In interviews, IIRC, ex-Sega staff have stated that they thought they had one more console generation before a 3D-first console was viable for the home market. Sure, they could do it right then and there, but it would be kind of janky. Consumers would rather have solid arcade-quality 2D games than glitchy home ports of 3D ones. Then Sony decided that the wow factor was worth kind of janky graphics (affine texture mapping, egregious pop-in, only 16-bit color, aliasing out the wazoo, etc.) and the rest is history.
Nintendo managed largely not-janky graphics with the N64, but it did come out 2-3 years after the Saturn and Playstation.
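For anyone unfamiliar with the "affine texture mapping" jab above: the PS1 interpolated texture coordinates linearly in screen space, ignoring depth, which is what made textures warp and swim on receding walls and floors. A minimal sketch of the difference (Python, illustrative names, not any console's actual code):

```python
def affine_u(u0, u1, t):
    # PS1-style: interpolate the texture coordinate u linearly in
    # screen space, ignoring depth entirely
    return u0 + t * (u1 - u0)

def perspective_u(u0, z0, u1, z1, t):
    # Perspective-correct: interpolate u/z and 1/z across the span,
    # then divide to recover u at each pixel
    uoz = u0 / z0 + t * (u1 / z1 - u0 / z0)
    ooz = 1 / z0 + t * (1 / z1 - 1 / z0)
    return uoz / ooz

# A span whose left edge is near (z=1) and right edge far (z=4):
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  affine u={affine_u(0.0, 1.0, t):.3f}"
          f"  correct u={perspective_u(0.0, 1.0, 1.0, 4.0, t):.3f}")
```

At the middle of the span the affine version lands at u=0.5 while the correct answer is u=0.2; that gap is exactly the "swimming" you see on PS1 walls.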
It will always be easy to make 3D games that look bad, but on the N64 games tend to look more stable than PS1 or Saturn games. Less polygon jittering[0], aliasing isn't as bad, no texture warping, higher polygon counts overall, etc.
If you took the same animated scene and rendered it on the PS1 and the N64 side by side, the N64 would look better hands down just because it has an FPU and perspective texture mapping.
[0] Polygon jittering is caused by the PS1's rasterizer only being capable of integer math, so there is no subpixel rendering and vertices effectively snap to a pixel grid.
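The snapping effect is easy to see in a toy example (Python, illustrative, not hardware-accurate): a slowly moving vertex stays glued to one pixel for several frames, then pops to the next, while a rasterizer with a few fractional bits advances smoothly.

```python
def snap_to_pixel(x):
    # PS1-style rasterizer: vertex positions truncated to integer
    # pixel coordinates, so sub-pixel motion is lost entirely
    return int(x)

def subpixel(x, bits=4):
    # Subpixel-capable rasterizer: keep a few fractional bits
    # (here 4 bits, i.e. 1/16-pixel precision)
    return round(x * (1 << bits)) / (1 << bits)

# A vertex drifting slowly across the screen, 0.1 px per frame:
positions = [10.0 + 0.1 * frame for frame in range(12)]
print([snap_to_pixel(p) for p in positions])  # sits at 10, then pops to 11
print([subpixel(p) for p in positions])       # advances in 1/16-px steps
```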
I thought the problem was that it only had 12 or 16-bit precision for vertex coords, which is not enough no matter whether you encode it as fixed-point or floating-point. Floats aren't magic.
Compare it to the Playstation, which could not manage proper texture projection and also had such poor precision in rasterization that you could watch polygons shimmer as you moved around.
The N64, in comparison, had an accurate and essentially modern (well, "modern" before shaders) graphics pipeline. The deficiencies in its graphics were not having nearly enough graphics-specific RAM (you only had 4 KB total as a texture cache, half that if you were using some features! Though crazy people figured out you could swap in more graphics from the CARTRIDGE if you were careful) and a god-awful bilinear filtering on all output.
Interestingly, the N64 actually had some sort of precursor in form of the RSP "microcode". Unfortunately there was initially no documentation, so most developers just used the code provided by Nintendo, which wasn't very optimized and didn't include advanced features. Only in the last years did homebrew people really push the limits here with "F3DEX3".
> and a god awful bilinear filtering on all output.
I think that's a frequent misconception. The texture filtering was fine, it arguably looks significantly worse when you disable it in an emulator or a recompilation project. The only problem was the small texture cache. The filtering had nothing to do with it. Hardware accelerated PC games at the time also supported texture filtering, but I don't think anyone considered disabling it, as it was an obvious improvement.
But aside from its small texture cache, the N64 also had a different problem related to its main memory bus. This was apparently a major bottleneck for most games, and it wasn't easy to debug at the time, so many games were not properly optimized to avoid the issue, and wasted a large part of the frame time with waiting for the memory bus. There is a way to debug it on a modern microcode though. This video goes into more detail toward the end: https://youtube.com/watch?v=SHXf8DoitGc
Fun trivia for readers, it isn't even normal 4-tap bilinear filtering, it's 3-tap, resulting in a characteristic triangular blurring that some N64 emulators recreate and some don't. (A PC GPU won't do this without special shaders)
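A sketch of the difference (Python, based on the common description of the RDP's 3-point filter, simplified): the 2x2 texel quad is split along its diagonal, and only the three texels of the triangle containing the sample point contribute, which is what produces the triangular/rhombus-shaped blur.

```python
def n64_3tap(t00, t10, t01, t11, fs, ft):
    # N64-style 3-point filter: pick the triangle of the 2x2 quad
    # containing the sample (fs, ft are fractional texel coords)
    if fs + ft <= 1.0:          # lower-left triangle: t00, t10, t01
        return t00 + fs * (t10 - t00) + ft * (t01 - t00)
    else:                       # upper-right triangle: t11, t01, t10
        return t11 + (1 - fs) * (t01 - t11) + (1 - ft) * (t10 - t11)

def bilinear_4tap(t00, t10, t01, t11, fs, ft):
    # Standard PC-GPU bilinear: all four texels always contribute
    top = t00 + fs * (t10 - t00)
    bot = t01 + fs * (t11 - t01)
    return top + ft * (bot - top)

# Sample the center of a quad with one bright texel in the corner:
print(n64_3tap(0, 0, 0, 255, 0.5, 0.5))       # 0.0: t11 is outside the triangle
print(bilinear_4tap(0, 0, 0, 255, 0.5, 0.5))  # 63.75: all four texels blend
```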
Wiggling is down to lack of precision and lack of subpixel rendering, unrelated to Z buffering. Z buffers are for hidden surface removal, if you see wiggling on a single triangle floating in a void, it's not a Z buffer problem.
When you see models clipping through themselves because the triangles can't hide each other, that's the lack of Z buffer.
Thanks for clarifying. I knew I was getting something wrong, but can never remember all the details. IIRC PS1 also suffered from render order issues that required some workarounds, problems the N64 and later consoles didn't have.
The lack of media storage was the thing that kind of solidified a lot of those issues. Many that worked on the N64 have said that the texture cache on the system was fine enough for the time. Not great, but not terrible. The issue was that you were working in 8MB or 16MB of space for the entire game. 32MB carts were rare, and fewer than a dozen games ever used 64MB carts.
Yeah. I'm not what one would call a graphics snob, but I found the N64 essentially unplayable even at the time of its release. With few exceptions, nearly every game looked like a pile of blurry triangles running at 15fps.
I always felt like N64 games were doing way too much to look good on the crappy CRTs they were usually hooked up to. The other consoles of the era may have had more primitive GPUs, but for the time I think worse may have actually been better, because developers on other platforms were limited by the hardware in how illegible they could make their games. Pixel artists of the time had learned to lean into and exploit the deficiencies of CRTs, but the same tricks can't really be applied when your texture is going to be scaled and distorted by some arbitrary amount before making it to the screen.
A part of this was due to Nintendo's TRC. It also didn't help that, due to the complexity of the graphics hardware, most developers were railroaded into using things like the Nintendo-provided microcode just to run the thing decently.
No, it's due to limited precision in the vertices. If you had 64 bit integers you could have 32.32 fixed-point and it would look as good as floating-point.
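To make the precision point concrete, here's a toy sketch (Python, illustrative; real rasterizers did this in C or assembly): with 16 fractional bits, stepping an edge down an entire screen's worth of scanlines using only integer adds accumulates error far below a pixel, and 32 fractional bits would push it below anything visible.

```python
FRAC = 16                    # 16.16 fixed point; 32.32 works the same way

def to_fixed(x):
    return int(round(x * (1 << FRAC)))  # one-time float -> fixed conversion

def to_float(a):
    return a / (1 << FRAC)              # only for inspecting the result

# Step an edge with slope 1/3 down 300 scanlines using integer adds only:
slope = to_fixed(1 / 3)
x = 0
for _ in range(300):
    x += slope
print(to_float(x))           # ~99.9985; the true answer is 100.0
```

The rounding error of the initial conversion (at most half a fixed-point step) is all you ever pay per scanline, so the total drift after 300 steps is a couple thousandths of a pixel.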
Quake did not use floating point in its rasterization math, and it exhibited none of the jittery polygon issues that the PS1 did. It's largely a lack of subpixel-accurate rasterization causing it (not even sure if the PS1 is pixel accurate, let alone subpixel :)).
Sure, but lack of perspective correct texturing is a separate issue, with a separate visual artifact.
Jittery polygons refers to the artifacts you get when polygon vertices are snapped to integer pixel coordinates, rather than taking into account subpixel positions. Quake did not have this issue, despite not using floating-point calculation in its rasterization. It did use floating point when texturing spans, after rasterization, but this was more of an optimization than a fundamental requirement for accurate texturing :)
A heroic attempt to consumerize exotic hardware, and ultimately an unnecessary one, considering the mundane reasons that slowed the N64 down.
The hardware was actually pretty great in the end. The unreleased N64 version of Dinosaur Planet holds up well considering how much more powerful the GameCube was.
Nintendo were largely the architects of their own misery. First, they set expectations sky high with their “Ultra 64” arcade games, then were actively hostile to developers in multiple ways.
I'm not 100% sure of the specifics, but Nintendo took a pretty different approach from Sony or Sega at this time. Sony and Sega both rolled their own graphics chips, and both of them made some compromises and strange choices in order to get to market more quickly.
Nintendo instead approached SGI, the most advanced graphics workstation and 3D modeling company in the world at the time, and formed a partnership to scale back their professional graphics hardware to a consumer price point.
Might be one of those instances where just getting something that works from scratch is relatively easy, but taking an existing solution and modifying it to fit a new use case is more difficult.
The cartridge ended up being a huge sore spot too.
Nintendo wanted it because of the instant access time. That’s what gamers were used to and they didn’t want people to have to wait on slow CDs.
Turns out that was the wrong bet. Cartridges just cost too much and if I remember correctly there were supply issues at various points during the N64 era pushing prices up and volumes down.
In comparison, CDs were absolutely dirt cheap to manufacture. And people quickly fell in love with all the extra stuff that could fit on a disc compared to a small cartridge. There was simply no way anything like Final Fantasy 7 could have ever been done on the N64. Games with FMV sequences, real recorded music, just large numbers of assets.
Even if everything else about the hardware was the same, Nintendo bet on the wrong horse for the storage medium. It turned out the thing they prioritized (access time) was not nearly as important as the things they opted out of (price, storage space).
Tangentially related, but if you haven't already, you should read DF Retro's writeup of the absolutely incredible effort to port the 2 CD game Resident Evil 2 to a single 64MB N64 cartridge: https://www.eurogamer.net/digitalfoundry-2018-retro-why-resi...
Not just dirt cheap, the turnaround time to manufacture was significantly lower. Sony had an existing CD manufacturing business and could produce runs of discs in the span of a week or so, whereas cartridges typically took months. That was already a huge plus to publishers since it meant they could respond more quickly if a game happened to be a runaway success. With cartridges they could end up undershooting, and losing sales, or overshooting and ending up with expensive excess inventory.
Then to top it all off, Sony had much lower licensing fees! So publishers got “free” margin to boot. The Playstation was a sweet deal for publishers.
>There was simply no way anything like Final Fantasy 7 could have ever been done on the N64.
Yes, but I don't see how a game like Ocarina of Time, with its streaming data in at high speed, would have been possible without a cartridge. Each format enabled unique gaming experiences that the other typically couldn't replicate exactly.
Naughty Dog found a solution - constantly streaming data from the disk, without regard for the hardware's endurance rating:
> Andy had given Kelly a rough idea of how we were getting so much detail through the system: spooling. Kelly asked Andy if he understood correctly that any move forward or backward in a level entailed loading in new data, a CD “hit.” Andy proudly stated that indeed it did. Kelly asked how many of these CD hits Andy thought a gamer that finished Crash would have. Andy did some thinking and off the top of his head said “Roughly 120,000.” Kelly became very silent for a moment and then quietly mumbled “the PlayStation CD drive is ‘rated’ for 70,000.”
> Kelly thought some more and said “let’s not mention that to anyone” and went back to get Sony on board with Crash.
Crash Bandicoot is a VERY different game from Ocarina of Time. They are not comparable at all. They literally had to limit the field of view in order to get anything close to what they were targeting. Have you played the two games? The point still stands, Zelda with its vast open worlds is not feasible on a CD based console that has a max transfer rate of 300KB/s and the latency of an iceberg.
What ND did with Crash Bandicoot was really cool to see in action (page in/out data in 64KB chunks based on location) but you are right - this relied on a very strict control of visuals. OoT didn't have this limitation.
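The paging idea described above can be sketched in a few lines (Python, a toy illustration with hypothetical names, not ND's actual code): keep only the chunks near the player resident, evict everything else, and each newly loaded chunk is one "CD hit".

```python
CHUNK = 64 * 1024  # 64 KB pages, as described above

class ChunkStreamer:
    """Toy location-based paging: only chunks near the player stay
    resident; loading a missing chunk models one CD 'hit'."""
    def __init__(self):
        self.resident = {}                  # chunk_id -> data
        self.cd_hits = 0                    # how often we touch the drive

    def needed(self, player_x, chunk_span=100.0):
        # Chunks covering the player's position plus one on each side
        c = int(player_x // chunk_span)
        return {c - 1, c, c + 1}

    def update(self, player_x):
        want = self.needed(player_x)
        for cid in list(self.resident):
            if cid not in want:             # evict what's out of range
                del self.resident[cid]
        for cid in want:
            if cid not in self.resident:    # CD hit: load from disc
                self.resident[cid] = bytes(CHUNK)
                self.cd_hits += 1
        return sorted(self.resident)

s = ChunkStreamer()
print(s.update(150.0))   # [0, 1, 2]
print(s.update(450.0))   # [3, 4, 5]  (three more CD hits)
```

Multiply that hit counter by a whole playthrough of back-and-forth movement and you land in the six-figure territory from the anecdote above.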
Nintendo did not approach SGI. SGI was rejected by Sega for the Saturn: Sega felt their offering was too expensive to produce, too buggy at the time (despite Sega spending man-hours helping fix hardware issues), and had no chance to make it to market in time for their plans.
For all we know, Nintendo had no plans past the SNES, except for the Virtual Boy. But then again, the Virtual Boy was another case of Nintendo being approached by a company rejected by Sega…
It's been years since I read the book "Console Wars", but if memory serves me correctly, SGI shopped their tech to SEGA first before Nintendo secured it for the N64.
Yep, Sega had a look at SGI's offering and rejected it. One of the many reasons they did so was because they thought the cost would be too high due to the die size of the chips.
Kind of funny considering the monstrosity the Saturn ended up becoming.