uhm, did you know that waypipe with ssh is fast enough to use blender remotely over wi-fi?
-
@uecker @tauon @mntmn Unfortunately that's really not how the GPU works at all these days. That model made more sense back in OpenGL 1.1, when there was a fairly straightforward set of "commands" and a limited amount of data passing between the GPU and the CPU. Nowadays, with things like bindless textures, GPU-driven rendering, and compute, practically every draw call can access practically all the data on the GPU, and the CPU can write arbitrary data directly into VRAM at any time.
-
@uecker @tauon @mntmn For very simple GPU programs you can make it work, but more advanced programs simply don't fit a model with such restricted bandwidth between the GPU and the CPU. Plus, as was mentioned up-thread, you still need to compress and decompress those textures on the fly, which is itself a complex task, and you still need the GPU power on the thin client to render everything. It's much easier to render on the host and then compress and transfer the whole framebuffer.
-
@uecker @tauon @mntmn we keep gigabytes of constantly changing data in GPU memory. so yes, in principle, but unless you want to stream 10GB of data before you render your first frame, no. (obviously Blender is less extreme here, but CAD applications still deal with tremendous amounts of geometry, to say nothing of interactive path tracing and whatnot)
-
@uecker @tauon @mntmn The PCIe bus lets us move hundreds of megabytes of data between VRAM and RAM every frame, so we do exactly that. Our engine also relies on CPU read-back of the downsampled depth buffer from the previous frame, which makes remoting the command stream a non-starter, though that's probably not something you'd run into outside of games.
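(For illustration only, not code from the engine discussed above: a minimal sketch of what previous-frame depth read-back can look like, assuming OpenGL 4.x, a 256x144 downsampled depth target already rendered into an fbo, and an assumed use for CPU-side occlusion culling.)

    /* Asynchronous CPU read-back of a small depth target via a pixel buffer
     * object. The copy for frame N is queued on the GPU and mapped at the
     * start of frame N+1, so the CPU rarely stalls waiting for it. */
    #include <GL/glew.h>
    #include <string.h>

    #define W 256
    #define H 144

    static GLuint pbo;

    void init_readback(void)
    {
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        /* Room for one frame of 32-bit float depth values. */
        glBufferData(GL_PIXEL_PACK_BUFFER, W * H * sizeof(float), NULL, GL_STREAM_READ);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /* Call after rendering the downsampled depth pass for frame N. */
    void queue_depth_readback(GLuint depth_fbo)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, depth_fbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        /* With a pack PBO bound, glReadPixels returns immediately;
         * the last argument is an offset into the PBO, not a CPU pointer. */
        glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, (void *)0);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /* Call at the start of frame N+1; the CPU can then use the depth data,
     * e.g. to cull objects before issuing draw calls. */
    void fetch_depth_readback(float *out)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        const void *ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (ptr)
            memcpy(out, ptr, W * H * sizeof(float));
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }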
-
@dotstdy @tauon @mntmn We found it critically important to treat the GPU as "remote" in the sense that we keep all hot data on the GPU, keep the GPU processing pipelines full, and hide the latency of data transfers to the GPU. I am sure it is similar for you. I can see that in gaming you may want to render closer to the CPU than to the screen, but that does not seem to change the fact that the GPU is "remote", does it?
-
@uecker @tauon @mntmn But like I hinted at before, there are also issues like applications that simply map all of GPU memory into the CPU address space and write to it whenever they like (with their own internal synchronization, of course). That's *really* hard to deal with, even for tools that trace GPU commands straight to disk. Doing it transparently over the internet is really, really hard.
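(A minimal sketch of the kind of persistent mapping being described, assuming OpenGL 4.4's ARB_buffer_storage; the buffer target, names and sizes are illustrative.)

    /* The application maps a GPU buffer once, persistently and coherently,
     * and from then on writes into it from the CPU whenever it likes.
     * No API call ever sees these writes, which is what makes remoting
     * the command stream transparently so painful. */
    #include <GL/glew.h>
    #include <stddef.h>

    static GLuint buf;
    static float *cpu_view;   /* CPU-visible pointer into GPU-accessible memory */

    void create_persistent_buffer(size_t bytes)
    {
        const GLbitfield flags = GL_MAP_WRITE_BIT
                               | GL_MAP_PERSISTENT_BIT
                               | GL_MAP_COHERENT_BIT;

        glGenBuffers(1, &buf);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf);
        /* Immutable storage that stays mapped for the buffer's whole lifetime. */
        glBufferStorage(GL_SHADER_STORAGE_BUFFER, bytes, NULL, flags);
        cpu_view = glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, bytes, flags);
    }

    /* Later, on any frame, from any point in the application's own logic: */
    void poke_some_data(size_t index, float value)
    {
        /* No GL call at all -- just a memory write. The app only has to
         * ensure (e.g. with fences) that the GPU isn't reading this region. */
        cpu_view[index] = value;
    }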
-
@uecker @tauon @mntmn Similar, but likely with a much tighter latency tolerance. The issue is simply bandwidth vs. the size of the working set. The GPU is remote (well, unless it's integrated), but PCIe 4 bandwidth is roughly 300 times what you get from a dedicated gigabit link, and vaguely 15000 times what you might use to stream compressed video.
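(A quick back-of-the-envelope check of those ratios, using assumed round numbers: ~32 GB/s usable for PCIe 4.0 x16, 125 MB/s for gigabit Ethernet, ~2 MB/s, i.e. ~16 Mbit/s, for a compressed video stream.)

    #include <stdio.h>

    int main(void)
    {
        const double pcie4_x16 = 32e9;   /* bytes per second, assumed */
        const double gigabit   = 125e6;  /* bytes per second */
        const double video     = 2e6;    /* bytes per second, assumed */

        printf("PCIe 4 vs gigabit link: ~%.0fx\n", pcie4_x16 / gigabit); /* ~256x   */
        printf("PCIe 4 vs video stream: ~%.0fx\n", pcie4_x16 / video);   /* ~16000x */
        return 0;
    }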
-
@dotstdy @tauon @mntmn Yes, this makes sense and I am not disagreeing with any of it. My point is merely that a display protocol that treats the GPU as remote is not fundamentally flawed, as some people claim, because the GPU *is* remote even when local. And I could imagine that for some applications, such as CAD, remote rendering might still make sense. We use a remote GPU for real-time processing of imaging data, and the network adds negligible latency.
-
@uecker @tauon @mntmn The reason it's flawed imo is that while it works fine in restricted situations, it won't work in many others. By comparison, streaming the output always works (modulo latency and quality), and you get a nice dial for how bandwidth- and CPU-heavy you want to be (and thus latency and quality). If you stream the command stream you *must* stream all the data before rendering a frame, and you likely need to stream some of it without any lossy compression at all.
-
@dotstdy @tauon @mntmn The command stream is streamed anyway (in some sense). I do not understand your comment about the data: you also want it to be in GPU memory at the time it is accessed. Of course you do not want to serialize your data through a network protocol, but with X, when rendering locally, this is not done either. The point is that you need a protocol for manipulating remote buffers without involving the CPU. This works with X and Wayland, and it is also what we do (manually) in compute.
-
@uecker@mastodon.social @dotstdy@mastodon.social @mntmn@mastodon.social
> because the GPU is remote even when local
this is a good point & why i find it so cromulent that plan 9 treats all devices network-transparently
-
@uecker @tauon @mntmn I'm not sure I even understand what you're saying in that case; waypipe does just stream the rendered buffer from the host to a remote client. It just doesn't serialize and send the GPU commands required to render that buffer on the remote side. The latter is very hard; the former is very practical.
-
@dotstdy@mastodon.social @uecker@mastodon.social @mntmn@mastodon.social the latter is only hard because each protocol has to do it itself, instead of there being one dedicated protocol for it like 9p
-
@tauon @mntmn @uecker Let me put it this way: when I render frame 1 of a GPU program, in order to execute the GPU commands that produce that frame, I need the *entire up-to-date contents of VRAM* in the 'client' GPU's VRAM (or at least in memory accessible to the 'client' GPU). That's really hard, and the cost is bounded only by however much GPU memory the application wants to use. But using the 'server' GPU to render the frame and sending it to the remote is a bounded, much smaller amount of work.
-
@tauon @mntmn @uecker Take a tool like Blender, which might need, say, 10GB of GPU memory for a particular scene. To remote that very first frame, I need to send the entire 10GB to the client! And then the GPU working set changes every frame, and all of that changed data needs to be sent as well.
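(Rough, assumed numbers for what that first frame costs in transfer time alone, ignoring protocol overhead and any compression.)

    #include <stdio.h>

    int main(void)
    {
        const double scene_bytes = 10e9;    /* the 10GB scene */
        const double gigabit     = 125e6;   /* 1 Gbit/s in bytes/s */
        const double wifi        = 50e6;    /* optimistic ~400 Mbit/s Wi-Fi, assumed */
        const double pcie4_x16   = 32e9;    /* local GPU, for comparison */

        printf("over gigabit ethernet: ~%.0f s\n", scene_bytes / gigabit);   /* ~80 s  */
        printf("over fast wi-fi:       ~%.0f s\n", scene_bytes / wifi);      /* ~200 s */
        printf("over PCIe 4 x16:       ~%.1f s\n", scene_bytes / pcie4_x16); /* ~0.3 s */
        return 0;
    }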
-
@dotstdy@mastodon.social @mntmn@mastodon.social @uecker@mastodon.social in 9p there is literally no distinction between the "client gpu" and the "server gpu". it's literally just a gpu file somewhere