
Piero Bosio's personal Fediverse social site

A social forum federated with the rest of the world. Instances don't matter; people do.

During production of Finding Nemo, we started using Linux boxes in addition to SGIs. Why?

  • During production of Finding Nemo, we started using Linux boxes in addition to SGIs.
    Why?

    3D painting software we wrote for laying out coral was written in C++ using templates, and the debug info was too large for IRIX, but was debuggable on Linux.

    Was this a 32 bit vs. 64 bit issue?

    No.

    IRIX reserved half the address space for the kernel, while Linux only did a quarter.

    So on Linux, we had 3GB, and the symbols fit.

    It was a 32 bit show, both machines had 4GB max.

    Plenty for Finding Nemo.

    @Drwave

    This was a while ago, so forgive my foggy memory (I wrote a chapter of a popular Linux book that described this at the time):

    I believe Linux started with the 2 GiB / 2 GiB split and also a direct map of all physical memory (all physical memory was mapped into the kernel’s VA space). This had a few advantages:

    • You can tell apart user and kernel addresses by looking at the top bit.
    • You can always pin a page and then use the result of the current translation to get a VA you can use for access anywhere in the kernel, not just threads associated with the current process.

By the early 2000s, machines with more than 2 GiB of physical memory were affordable for the sorts of places running Linux, so the direct map had to go away. I think this work started with PAE. PAE meant you could have more physical memory than virtual address space, which made things much worse: 32-bit kernels with PAE couldn't use a direct map at all and had to create mappings on demand to bring things into the kernel's address space. Gradually this became necessary for everyone (I think PAE was a compile-time option for the kernel for a while).

User processes' memory requirements grew faster than the kernel's, so there was a configuration option to choose where this split was. I'm not sure when Linux went to 3 GiB, but by the time you have enough physical memory for that to make sense you're past the point where a direct map is feasible, so there are no downsides. I think it was also possible to arrange the split the other way, which was useful for file servers and similar machines where you wanted a lot of memory for the buffer cache and very little for userspace.

Red Hat went further. They had kernel builds with a 4 GiB / 4 GiB split. Userspace got almost the full 32-bit address space, and the syscall code switched to an entirely different page table on transition (the Meltdown mitigations later required a similar mechanism; sensible ISAs have separate page-table base registers for user and kernel space and make this easy, but sadly x86 is not one of them). This was slow, because every system call that took a pointer required the kernel to look up the address translation in the userspace page tables, map the relevant pages into its own address space, and then copy. Even with caching, this was painful. Oh, and the trick of self-mapped page tables can't be used in this configuration (and was covered by a Microsoft patent at the time).

64-bit kernels are much nicer for running 32-bit userspace, because they can give the userspace process the entire low 4 GiB and still map it into the kernel's address space for use by top-half threads. This leads to the slightly odd situation that some 32-bit userspace programs run successfully only on 64-bit kernels.

  • How do I know this?

    I was the project lead, although the best parts of it were written by my smarter collaborator Michael O’Brien (eventually SVP of R&D at Technicolor).

    This story is *not* the 32 to 64 bit transition.

This is just us trying to get another GB of address space, where we leveraged the ongoing Linux port work along with a lot of our own.

    Now that I think about it, all the impactful work I’ve done in my career happened in 32 bits of address space.

    4GB always seemed like a lot to me.

    @Drwave I first browsed the web with 48 MiB of RAM. Nowadays there are plenty of websites whose resources would not fit in that amount of memory. Nevermind actually parsing & rendering them.

  • @androcat @Drwave Hiding underwater, if you can believe it

    @dgriffinjones @androcat @Drwave Unbelievable, next you're gonna tell me he was swimming as well… /s /j

  • @Bluedonkey totally agree.

    But I tell this particular story because I expect folks today look at Finding Nemo (and the Pixar movies that preceded it) and don’t expect those were done on computers with less memory than pretty much every PC you can buy today.

    @Drwave @Bluedonkey I was a kid when the movie came out (and saw it), and got to know a bit about computers of the day and older... so if you asked me about the memory, I'd be at least on the fence. But I do recall that my personal laptop in 2011 had 1 gigabyte, and that was considered reasonable.

  • @Drwave @Bluedonkey I would have had no idea about this if I didn't come across it. Reminds me of the statement someone made that most car key fobs have more computing power than the Apollo moon missions (no clue whether that's true).

  • oblomov@sociale.network shared this topic
