uhm, did you know that waypipe with ssh is fast enough to use blender remotely over wi-fi?
-
@dotstdy @tauon @mntmn Yes, this makes sense and I am not disagreeing with any of it. But my point is merely that a display protocol that treats the GPU as remote is not fundamentally flawed, as some people claim, because the GPU *is* remote even when local. And I could imagine that for some applications, such as CAD, remote rendering might still make sense. We use a remote GPU for real-time processing of imaging data, and the network adds negligible latency.
@uecker @tauon @mntmn The reason it's flawed imo is that while it will work fine in restricted situations, it won't work in many others. Comparatively, streaming the output always works (modulo latency and quality), and you have a nice dial to adjust how bandwidth- and CPU-heavy you want to be (and thus latency and quality). If you stream the command stream, you *must* stream all the data before rendering a frame, and you likely need to stream some of it without any lossy compression at all.
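A rough back-of-the-envelope sketch of that trade-off, with assumed numbers rather than measurements: the output stream has a compression dial, while the command-stream approach has to ship its working set losslessly.

```python
# Rough bandwidth sketch: streaming the rendered output vs. streaming the
# command stream plus the data it references. All numbers are assumptions
# for illustration, not measurements.

FRAME_W, FRAME_H, BYTES_PER_PIXEL = 1920, 1080, 4
FPS = 60

def output_stream_mb_per_s(compression_ratio):
    """Streaming finished frames: quality vs. bandwidth is a dial."""
    raw = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS   # bytes per second, uncompressed
    return raw / compression_ratio / 1e6

def command_stream_cost(working_set_gb, changed_mb_per_frame):
    """Streaming commands: every buffer they reference must arrive intact first."""
    upfront_mb = working_set_gb * 1000                # one-time upload of the working set
    update_mb_per_s = changed_mb_per_frame * FPS      # per-frame changes, sent losslessly
    return upfront_mb, update_mb_per_s

print("output, lossless    :", round(output_stream_mb_per_s(1)), "MB/s")
print("output, lossy ~100x :", round(output_stream_mb_per_s(100)), "MB/s")
up, upd = command_stream_cost(10, 50)
print("command stream      :", up, "MB up front, then", upd, "MB/s of lossless updates")
```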
-
@dotstdy @tauon @mntmn The command stream is streamed anyway (in some sense). I do not understand your comment about the data. You also want this to be in GPU memory at the time it is accessed. Of course, you do not want to serialize your data through a network protocol, but in X when rendering locally, this is also not done. The point is that you need a protocol for manipulating remote buffers without involving the CPU. This works with X and Wayland and is also what we do (manually) in compute
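A toy sketch of what such a remote-buffer protocol looks like: the client only holds opaque handles and asks the side that owns the memory to operate on them, so the pixel data never travels back through the protocol. This is purely illustrative and not the actual X or Wayland wire format; all names are made up.

```python
# Toy model of a remote-buffer protocol: the client never touches the pixel
# data, it only holds handles and asks the device that owns the memory to
# operate on them. Purely illustrative, not the X or Wayland wire format.

import itertools

class RemoteDevice:
    """Stands in for the display server / GPU that owns the buffer memory."""

    def __init__(self):
        self._buffers = {}
        self._ids = itertools.count(1)

    def create_buffer(self, width, height):
        handle = next(self._ids)
        self._buffers[handle] = bytearray(width * height * 4)  # lives on the "remote" side
        return handle                                          # only a small id crosses over

    def fill(self, handle, value):
        buf = self._buffers[handle]
        for i in range(len(buf)):
            buf[i] = value

    def present(self, handle):
        size = len(self._buffers[handle])
        print(f"presenting buffer {handle} ({size} bytes) without copying it to the client")

# Client side: only small messages that reference handles.
dev = RemoteDevice()
buf = dev.create_buffer(640, 480)
dev.fill(buf, 0xFF)
dev.present(buf)
```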
-
@uecker@mastodon.social @dotstdy@mastodon.social @mntmn@mastodon.social
"because the GPU is remote even when local"
this is a good point & why i find it so cromulent that plan 9 treats all devices network transparently
-
@uecker @tauon @mntmn I'm not sure I even understand what you're saying in that case, waypipe does just stream the rendered buffer from the host to a remote client. It just doesn't serialize and send the GPU commands required to render that buffer on the remote client. The latter is very hard, the former is very practical.
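A heavily simplified sketch of that buffer-forwarding model: render where the data already lives, then ship only the finished frame. This is not waypipe's actual implementation; the frame size, compression choice, and socket address are assumptions for illustration.

```python
# Heavily simplified sketch of the "forward the finished buffer" model: the
# GPU commands run where the data already is, and only the rendered frame
# crosses the network. Not waypipe's actual implementation.

import socket
import zlib

def render_frame(width=1280, height=720):
    """Stand-in for the application rendering a frame on the host GPU."""
    return bytes(width * height * 4)                # one finished RGBA frame

def forward_frames(sock, n_frames=3):
    for _ in range(n_frames):
        frame = render_frame()                      # bounded: one frame's worth of pixels
        payload = zlib.compress(frame)              # optionally compressed before sending
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

# Usage sketch (assumes something is listening on localhost:9000):
# with socket.create_connection(("localhost", 9000)) as s:
#     forward_frames(s)
```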
-
@dotstdy@mastodon.social @uecker@mastodon.social @mntmn@mastodon.social the latter is only hard because each protocol has to do it instead of having 1 specific protocol for it like 9p
-
@tauon @mntmn @uecker Let me put it this way, when I render frame 1 of a GPU program, in order to execute the GPU commands to produce that frame, I need to have the *entire up to date contents of vram* in the 'client' gpu's vram (or at least memory accessible to the 'client' gpu). That's really hard, and is bounded only by the GPU memory that the application wants to use. But using the 'server' gpu to render a frame, and sending it to the remote, is a bounded, and much smaller amount of work.
-
@tauon @mntmn @uecker If I have a tool like blender which might have GPU memory requirements of, say, 10GB for a particular scene, then in order to remote that very first frame, I need to send that entire 10GB to the client! And then every frame the data in the GPU working set changes, and all of that changed data needs to be sent as well.
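The arithmetic behind that, with assumed link throughputs:

```python
# The arithmetic behind the 10GB example, with assumed link throughputs.

SCENE_GB = 10
LINKS_GBPS = {                      # rough throughputs in gigabits per second
    "wi-fi (realistic)": 0.3,
    "gigabit ethernet":  1.0,
    "10GbE":             10.0,
    "PCIe 4.0 x16":      256.0,
}

for name, gbps in LINKS_GBPS.items():
    seconds = SCENE_GB * 8 / gbps   # GB -> gigabits, divided by the link rate
    print(f"{name:>18}: {seconds:7.1f} s before frame 1 can even start")
```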
-
@dotstdy@mastodon.social @mntmn@mastodon.social @uecker@mastodon.social in 9p there is literally no distinction between the "client gpu" and the "server gpu". it's literally just a gpu file somewhere
-
@dotstdy @tauon @mntmn I think we may be talking past each other. I am not arguing for remote rendering (although I do something like this, but it is more compute than rendering); my point is that GPU programming *is* programming a remote device, because the PCI bus - although less of a bottleneck than a network - is still a bottleneck, and this is why both X and Wayland are remote buffer management protocols and are not fundamentally different in this respect.
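Some ballpark figures for that hierarchy, all assumptions rather than measurements: there is always a link between the application and its pixels, it just varies in how wide and how slow it is.

```python
# Ballpark numbers (assumptions, not measurements) for the "every GPU is a
# remote device" view: there is always a link in the way, it just differs
# in how wide and how slow it is.

LINKS = [
    # name,                 bandwidth GB/s,  rough one-way latency
    ("CPU -> local RAM",          50.0,      "~100 ns"),
    ("CPU -> GPU over PCIe x16",  32.0,      "~1 us"),
    ("host -> host over 10GbE",    1.25,     "~50 us"),
    ("host -> host over wi-fi",    0.04,     "~1 ms and up"),
]

for name, gb_per_s, latency in LINKS:
    print(f"{name:<26} {gb_per_s:6.2f} GB/s   latency {latency}")
```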
-
@uecker @tauon @mntmn Right, I think I see your meaning now. I was just confused because X (as well as some other products) did of course allow you to send actual rendering commands to the remote machine, for them to be executed on the remote rather than the host. And tauon was talking specifically about forwarding gl commands. But if you just consider the direct rendering paths, then they're pretty similar between something like X and waypipe: it's just copying and forwarding the output frames.
-
@uecker @tauon @mntmn nitpick aside: The reason that these are remote buffer management protocols is actually subtly different from bandwidth concerns; it's more about queuing and throughput. The latency between the GPU and the CPU is actually pretty low, and the bandwidth quite high; however, we deliberately introduce extra latency in order to allow the GPU to run out of lockstep with the CPU and increase throughput. That prevents us from starving the GPU by ensuring a continuous stream of work.
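A toy simulation of that queuing idea, with made-up timings: allowing the CPU to record a frame or two ahead of the GPU raises throughput, and the price is exactly the extra latency described above.

```python
# Toy model of frames-in-flight: letting the CPU record a frame or two ahead
# of the GPU trades latency for throughput. All timings are made up.

CPU_MS = 4      # assumed time to record one frame's commands on the CPU
GPU_MS = 10     # assumed time to execute those commands on the GPU
FRAMES = 200

def simulate(max_in_flight):
    slot_done = [0.0] * max_in_flight    # completion time of each in-flight slot
    cpu_time = last_done = record_start = 0.0
    for i in range(FRAMES):
        slot = i % max_in_flight
        # The CPU may not reuse this slot's resources until its previous frame
        # has finished on the GPU: a shallow queue means the CPU stalls.
        record_start = max(cpu_time, slot_done[slot])
        submit = record_start + CPU_MS
        exec_start = max(submit, last_done)          # the GPU runs frames in order
        last_done = exec_start + GPU_MS
        slot_done[slot] = last_done
        cpu_time = submit
    fps = FRAMES * 1000.0 / last_done
    latency_ms = last_done - record_start            # record-to-complete, last frame
    return fps, latency_ms

for depth in (1, 2, 3):
    fps, lat = simulate(depth)
    print(f"{depth} frame(s) in flight: ~{fps:5.1f} fps at ~{lat:4.1f} ms latency")
```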
-
@uecker @tauon @mntmn So when you communicate, the reason that's bad is partially due to the bandwidth, but mostly due to the huge delays you incur in order to wait for execution to complete on the GPU, and then to start back up again once you produce more GPU commands. Readback at a fixed latency which exceeds that queuing delay is totally fine, and bandwidth-wise you can yeet something like 20 4k uncompressed images across the pcie link per 60Hz frame, if you really really wanted to.
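A quick sanity check of that figure, assuming 4 bytes per pixel: it lands at roughly the raw capacity of a modern x16 link.

```python
# Sanity check of the "20 uncompressed 4K images per 60Hz frame" figure,
# assuming 4 bytes per pixel.

FRAME_BYTES = 3840 * 2160 * 4                    # one uncompressed 4K RGBA image
SECONDS_PER_60HZ_FRAME = 1 / 60

needed_gb_per_s = 20 * FRAME_BYTES / SECONDS_PER_60HZ_FRAME / 1e9
print(f"20 x 4K per 60Hz frame needs ~{needed_gb_per_s:.1f} GB/s")
print("for comparison, PCIe 4.0 x16 is ~32 GB/s and PCIe 5.0 x16 is ~64 GB/s")
```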