During production of Finding Nemo, we started using Linux boxes in addition to SGIs.
Why? 3D painting software we wrote for laying out coral was written in C++ using templates, and the debug info was too large for IRIX, but was debuggable on Linux.
Was this a 32 bit vs. 64 bit issue?
No.
IRIX reserved half the address space for the kernel, while Linux only did a quarter.
So on Linux, we had 3GB, and the symbols fit.
It was a 32 bit show, both machines had 4GB max.
Plenty for Finding Nemo.
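A quick way to see the split the post describes is a 32-bit C program that grabs anonymous mappings until mmap fails: on a 2 GiB / 2 GiB kernel like IRIX's it tops out near 2 GiB, while on 32-bit Linux's larger user half it gets close to 3 GiB. (A minimal sketch, not from the production tools; the name probe_vas.c and the 64 MiB chunk size are arbitrary choices.)

    /* probe_vas.c - rough estimate of usable user address space.
     * Build 32-bit: gcc -m32 probe_vas.c -o probe32
     */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        const size_t chunk = 64u << 20;   /* reserve 64 MiB at a time */
        size_t total = 0;

        printf("probing...\n");  /* let stdio allocate its buffer now,
                                    before we exhaust the address space */

        /* PROT_NONE + MAP_NORESERVE reserves address space without
         * committing memory, so the loop measures VA space, not RAM. */
        for (;;) {
            void *p = mmap(NULL, chunk, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (p == MAP_FAILED)
                break;
            total += chunk;
        }
        printf("mapped ~%zu MiB of user address space\n", total >> 20);
        return 0;
    }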
-
How do I know this?
I was the project lead, although the best parts of it were written by my smarter collaborator Michael O’Brien (eventually SVP of R&D at Technicolor).
This story is *not* the 32 to 64 bit transition.
This is just us trying to get another GB of address space, where we leveraged the ongoing Linux port work w/ a lot of our own.
Now that I think about it, all the impactful work I’ve done in my career happened in 32 bits of address space.
4GB always seemed like a lot to me.
@Drwave I started my career working on a split Harvard-architecture machine (yay, PDP-11!) and I have often wondered if we wouldn't have been better off leaving instruction-space pointers at 32 bits with the data-space pointers at 64. (Separate from how I suspect it might give chip designers room for interesting optimizations, since caching is very different for data vs. instructions.)
-
@Drwave So, where was Nemo found, in the end?
(sorry)
-
@Bluedonkey totally agree.
But I tell this particular story because I expect folks today look at Finding Nemo (and the Pixar movies that preceded it) and don't realize those were done on computers with less memory than pretty much every PC you can buy today.
@Drwave @Bluedonkey Thanks for sharing this, reminds me of my Summer 2025 project trying to find viable Linux distros & configs to make an old machine relevant: a Core 2 Duo Asus UL30vt w/ SU7300, 4GB RAM & switchable iGPU/G 210M. It was a lot easier than I expected; any solid distro w/ XFCE makes it fully usable today. Though an SSD replacing the HDD also helps.
-
@Drwave I don't know if it's a certain vintage of worker or shared sensibilities, but this reminds me of a recent piece from my buddy Simo:
https://simo.virokannas.fi/2025/11/memories/
Also, these stories never get old. Keep 'em coming.
@isaact @Drwave Though not practical everywhere, command-line text-based browsers are really fast when you only care about text. I use them for reading ebooks and some Reddit/forum browsing. They load pages so quickly I often think they failed to load, even though I know better now. Both Links (my preferred) and Lynx are good. https://links.twibright.com/
-
@Drwave IIRC Windows (NT line) had a boot flag to switch between 2GB and 3GB
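If memory serves, that was the /3GB switch in boot.ini, and an executable also had to be marked large-address-aware to see the third gigabyte. Roughly (the path and OS name here are just an example):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP" /fastdetect /3GB

The opt-in on the application side was the /LARGEADDRESSAWARE linker flag (or editbin /LARGEADDRESSAWARE app.exe after the fact); without it a process still got only 2 GB.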
-
This was a while ago, so forgive my foggy memory (I wrote a chapter of a popular Linux book that described this at the time):
I believe Linux started with the 2 GiB / 2 GiB split and also a direct map of all physical memory (all physical memory was mapped into the kernel’s VA space). This had a few advantages:
- You can tell apart user and kernel addresses by looking at the top bit.
- You can always pin a page and then use the result of the current translation to get a VA you can use for access anywhere in the kernel, not just in threads associated with the current process (both tricks are sketched below).
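Both tricks fall out of the arithmetic. A user-space toy that mimics the layout (PAGE_OFFSET, __pa, __va are the usual Linux names; the constants assume the 2 GiB / 2 GiB split described above, and the pointer is made up, so this only prints the arithmetic):

    /* split22.c - the 2 GiB / 2 GiB direct-map arithmetic, as a toy. */
    #include <stdio.h>

    #define PAGE_OFFSET 0x80000000UL  /* kernel half begins here */

    /* Trick 1: the top bit alone says user vs. kernel. */
    #define is_kernel_addr(va) (((unsigned long)(va)) & 0x80000000UL)

    /* Trick 2: with all physical memory direct-mapped at PAGE_OFFSET,
     * phys<->virt conversion is a subtraction, so a pinned page is
     * reachable from any kernel thread with no extra mapping step. */
    #define __pa(va) ((unsigned long)(va) - PAGE_OFFSET)
    #define __va(pa) ((void *)((unsigned long)(pa) + PAGE_OFFSET))

    int main(void)
    {
        void *kva = (void *)0x80042000UL;  /* a made-up kernel address */
        printf("kernel? %d  phys=%#lx  back=%p\n",
               !!is_kernel_addr(kva), __pa(kva), __va(__pa(kva)));
        return 0;
    }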
By the early 2000s, machines with more than 2 GiB of physical memory were affordable for places running Linux, and so the full direct map had to go away. I think this work started with PAE. PAE meant you could have more physical memory than virtual address space, which made things much worse: a 32-bit kernel with PAE couldn't direct-map all of physical memory, and had to create mappings into the kernel's address space on demand. Gradually this became necessary for everyone (I think PAE was a compile-time option for the kernel for a while).
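The on-demand mechanism this grew into is the highmem/kmap interface: pages beyond the direct-mapped region have no permanent kernel address, so the kernel maps them just long enough to touch them. A kernel-side sketch (not standalone-runnable; kmap/kunmap are the era's real API, the function around them is invented):

    #include <linux/mm.h>
    #include <linux/highmem.h>
    #include <linux/string.h>

    static void zero_one_high_page(void)
    {
        struct page *page = alloc_page(GFP_HIGHUSER); /* may be highmem */
        void *va;

        if (!page)
            return;
        va = kmap(page);          /* create a temporary kernel mapping */
        memset(va, 0, PAGE_SIZE);
        kunmap(page);             /* and tear it down again */
        __free_page(page);
    }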
User processes grew memory requirements faster than the kernel, so there was a configuration option to choose where this split was. I'm not sure when Linux went to 3 GiB, but by the time you have enough physical memory for this to make sense, you're past the point where a full direct map is feasible, so there are no downsides. I think it was possible to arrange the split the other way, which was useful for file servers and the like, where you wanted a lot of memory for the buffer cache and very little for userspace.
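Later mainline kernels exposed that choice directly in Kconfig; a 32-bit x86 .config of the era would contain something like this (names from memory, so treat the details as approximate):

    # Where to put the user/kernel split on 32-bit x86
    CONFIG_VMSPLIT_3G=y            # 3 GiB user / 1 GiB kernel (the usual default)
    # CONFIG_VMSPLIT_2G is not set # 2 GiB / 2 GiB
    # CONFIG_VMSPLIT_1G is not set # 1 GiB user / 3 GiB kernel: the
    #                              # "file server" arrangement above
    CONFIG_PAGE_OFFSET=0xC0000000  # kernel VA starts here under 3G/1G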
Red Hat went further: they had kernel builds with a 4 GiB / 4 GiB split. Userspace got almost the full 32-bit address space, and the syscall code switched to an entirely different page table on the transition (Meltdown mitigation later required a similar mechanism; sensible ISAs have separate page-table base registers for user and kernel space and make this easy. Sadly, x86 is not one of them). This was slow, because every system call that took a pointer required the kernel to look up the address translation in the userspace page tables, map the relevant pages into its own address space, and then copy. Even with caching, this was painful. Oh, and the trick of self-mapped page tables can't be used in this configuration (and was covered by a Microsoft patent at the time).
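If I remember right, this shipped as Red Hat's separate "hugemem" kernel variant (Ingo Molnar's 4g/4g patch) rather than a runtime flag, so choosing it was a package and boot-menu decision. Package and version details below are approximate:

    # RHEL 3/4 era, from memory
    rpm -ivh kernel-hugemem-*.rpm
    # then pick the *hugemem entry in the GRUB menu on the next boot,
    # e.g. "Red Hat Enterprise Linux AS (2.4.21-XX.ELhugemem)"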
64-bit kernels are much nicer for running 32-bit userspace, because they can give the userspace process the entire low 4 GiB and still map all of it into the kernel's address space for use by top-half threads. This leads to the slightly odd situation that some 32-bit userspace programs will run successfully only on 64-bit kernels.
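The address-space probe sketched further up the thread (the hypothetical probe_vas.c) shows this directly: the same 32-bit binary reports more usable address space under a 64-bit kernel.

    $ gcc -m32 probe_vas.c -o probe32
    $ ./probe32    # on a 32-bit 3G/1G kernel: a bit under 3 GiB
    $ ./probe32    # same binary on a 64-bit kernel: close to 4 GiB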
-
@Drwave I first browsed the web with 48 MiB of RAM. Nowadays there are plenty of websites whose resources would not fit in that amount of memory, never mind actually parsing & rendering them.
-
@dgriffinjones @androcat @Drwave Unbelievable, next you gonna tell that he was swimming as well… /s /j
-
@Drwave @Bluedonkey I was a kid when the movie came out (and saw it), and got to know a bit about computers of the day and older... so if you asked me about the memory, I'd be at least on the fence. But I recall that my personal laptop in 2011 had 1 gigabyte, and it was kind of reasonable.
-
@Drwave @Bluedonkey I would have had no idea about this if I didn't come across it. Reminds me of the statement someone made that most car key fobs have more computing power than the Apollo moon missions (no clue about that statement's validity).