Any D3D experts know why ByteAddressBuffer exists?
-
Any D3D experts know why ByteAddressBuffer exists? It seems like it's literally just a regular Uint32 buffer, except I have to multiply the index by 4. I don't understand why it was added to the spec.
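As a minimal HLSL sketch of the equivalence the question describes (buffer names are hypothetical; only the indexing differs):

Buffer<uint>      TypedBuf;   // typed buffer, indexed by element
ByteAddressBuffer RawBuf;     // raw buffer, addressed in bytes

uint ReadElement(uint i)
{
    uint a = TypedBuf[i];         // element index
    uint b = RawBuf.Load(i * 4);  // same DWORD, but the index must be scaled to a byte offset
    return a + b;                 // a and b read the same value if both views alias the same data
}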
-
@TomF
My understanding is that the original buffers in D3D (Buffer<uint>) are "texture" buffers, meaning the reads go through the texture unit. ByteAddressBuffer allows the shader writer to execute raw memory loads and interpret the data as they wish. There are apparently performance implications.
I think this is mentioned in the "modern data" section here:
https://www.sebastianaaltonen.com/blog/no-graphics-api
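As a rough illustration of the "interpret the data as they wish" point, a ByteAddressBuffer lets one raw load be reinterpreted into mixed types; the 16-byte record layout below (a float3 position plus a packed uint) is an assumption made up for this sketch:

ByteAddressBuffer Particles;

struct Particle
{
    float3 pos;
    uint   color;
};

Particle LoadParticle(uint index)
{
    uint4 raw = Particles.Load4(index * 16); // one raw 16-byte load
    Particle p;
    p.pos   = asfloat(raw.xyz);              // reinterpret the first 12 bytes as floats
    p.color = raw.w;                         // keep the last DWORD as a packed uint
    return p;
}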
-
@jristic He is mixing together a bunch of different arguments there. Yes, using SOA data structures can be a speed hit, but so... don't do that. Use a DWORD buffer and pack the data into SOA yourself. It's the same whether you use a ByteAddressBuffer or a DWORD buffer.
Slight side note, but I do not believe any modern hardware has to use the texture sampler to do the format conversion. For obvious reasons, anything supported as a vertex shader input format should not need it.
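To make the "it's the same" argument concrete, here is a small sketch under an assumed packed layout (four DWORDs per record, the first three holding a float3); both paths fetch identical data, one through a typed DWORD buffer and one through a raw buffer:

Buffer<uint>      DwordBuf;  // typed uint buffer, element-indexed
ByteAddressBuffer RawBuf;    // raw buffer, byte-addressed

float3 LoadPositionTyped(uint index)
{
    uint base = index * 4;   // 4 DWORDs per record
    return asfloat(uint3(DwordBuf[base + 0],
                         DwordBuf[base + 1],
                         DwordBuf[base + 2]));
}

float3 LoadPositionRaw(uint index)
{
    return asfloat(RawBuf.Load3(index * 16)); // same data, byte offset instead of element index
}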
@TomF
I'm uncertain on the topic as I've never come across a concrete reference.
How does the claim, "raw buffer load instructions nowadays have up to 2x higher throughput and up to 3x lower latency than texel buffers" fit into that? Is he referring to a different thing when he says "texel buffers"?
-
@TomF I'm curious to know about the answer if you find it. I know that some older hardware was unable to address individual bytes (hence the cl_khr_byte_addressable_store extension in OpenCL, for example), so maybe it's related to that?