uhm, did you know that waypipe with ssh is fast enough to use blender remotely over wi-fi? what? this works much better/faster than x11 forwarding ever did
@mntmn Well damn, now you have my attention.
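[For anyone wanting to try the setup above: waypipe wraps ssh directly, so the invocation is a one-liner. A sketch, assuming waypipe is installed on both machines, the remote side runs a Wayland session, and `user@remote-host` is a placeholder:]

```shell
# launch blender on the remote host, with its window proxied
# back to the local Wayland compositor over the ssh connection
waypipe ssh user@remote-host blender
```

waypipe forks itself into a client on the local end and a server on the remote end; the application only ever talks to the remote waypipe instance, which relays the Wayland protocol over the ssh tunnel.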
-
@mntmn Waypipe is awesome, been using it to run stuff on my desktop from my #postmarketOS phone and it's the most seamless and frustration-free remote GUI system I've ever used.
-
@mntmn I did _not_ know this and now I need to play around with it. Thanks for the encouragement!
-
@mntmn@mastodon.social I tried using it to run Blender remotely over Wireguard from my home computer once. However, the lag was a bit too much to use. Tried experimenting with compression but that didn't seem to change anything, so maybe the problem wasn't bandwidth?
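[For reference, waypipe does expose compression knobs; a sketch of tuning them, with flag names as documented in the waypipe man page and the host name hypothetical:]

```shell
# zstd at a higher level, with multiple (de)compression threads
waypipe --compress zstd=7 --threads 4 ssh user@home-box blender
```

Worth noting: if the bottleneck is round-trip latency rather than bandwidth — plausible over Wireguard — then no amount of compression will help, which would be consistent with the experiment described above.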
-
@dotstdy@mastodon.social @uecker@mastodon.social @mntmn@mastodon.social can't you just forward the opengl instructions? presumably the computer has a gpu too
-
@dotstdy@mastodon.social @uecker@mastodon.social @mntmn@mastodon.social actually then you'd run into the same problem sending the textures over the network too wouldn't you
-
@tauon @mntmn @dotstdy Both could work just fine with X in theory. The GLX extension could, long ago, do remote 3D rendering over the wire, but pixel shuffling over X could also work fine; X is a very generic and flexible remote buffer-handling protocol. The issues with ssh -X are mostly latency-related, because toolkits (and Blender, which uses its own builtin toolkit rather than a standard one) use the protocol synchronously instead of asynchronously.
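[Remote GLX can still be poked at today. A hedged sketch — indirect GLX is capped at old OpenGL versions, modern X servers usually ship with it disabled, and `glxgears` is assumed to be installed on the remote box:]

```shell
# classic X11 forwarding; GL commands travel over the wire only
# when Mesa is forced into indirect rendering mode
ssh -X user@remote-host
LIBGL_ALWAYS_INDIRECT=1 glxgears
```

If the remote X server rejects indirect contexts, it likely needs to be started with `+iglx`, which is exactly the "a long time in the past" caveat above.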
-
@dotstdy @tauon @mntmn But I think even for many 3D applications that render locally, a remote rendering protocol is actually the right thing, because for all intents and purposes a discrete GPU is *not* local to the CPU, and whether you stream the commands via PCIe or the network is not so different. In fact, Wayland is also designed for remote rendering in this sense, just in a much more limited way.
-
@uecker @tauon @mntmn Unfortunately that's really not how the GPU works at all in the present day; it made more sense back in OpenGL 1.1, when there were pretty straightforward sets of "commands" and limited amounts of data passing between the GPU and the CPU. Nowadays, with things like bindless textures, GPU-driven rendering, and compute, practically every draw call can access practically all the data on the GPU, and the CPU can write arbitrary data directly to GPU VRAM at any time.
-
@uecker @tauon @mntmn For very simple GPU programs you can make it work, but more advanced programs just do not work under a model with such restricted bandwidth between the GPU and the CPU. Plus, as was mentioned up-thread, you still need to somehow compress and decompress those textures online, which is itself a complex task. Plus you still need the GPU power on the thin client to render it. It's very much easier to render on the host, and then compress and transfer the whole framebuffer.
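[This "render on the host, compress and transfer the framebuffer" approach is roughly what waypipe's lossy video mode does. A sketch, assuming a waypipe build with video support and a hypothetical host name:]

```shell
# encode DMABUF window contents as a lossy H.264 stream
# instead of shipping (losslessly compressed) raw pixels
waypipe --video=h264 ssh user@remote-host blender
```

This trades image fidelity for much lower bandwidth, which is usually the right trade for a 3D viewport that changes every frame.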
-
@uecker @tauon @mntmn we keep gigabytes of constantly changing data in GPU memory. so yes, but unless you want to stream 10GB of data before you render your first frame, then no. (obviously blender is less extreme here, but cad applications still deal with tremendous amounts of geometry, to say nothing of the online interactive path tracing and whatnot)
-
@uecker @tauon @mntmn The PCIe bus lets us move hundreds of megabytes of data between VRAM and RAM every frame. And so we do that. Our engine also relies on CPU read-back of the downsampled depth buffer from the previous frame, so that's a non-starter; though that's probably not something you'd run into outside of games.