Current design decisions for my game engine:
-
We have pixels! It's only earlier than anticipated because I hardcoded a graphics pipeline after all, instead of doing the abstraction first. And it was good: now I have a much better idea of how I want the abstraction to work.
Btw ImGui's docking functionality is great :D
-
So typically, tutorials will tell you to create one Renderer class that holds everything - VkInstance, VkDevice, the graphics queue, frames-in-flight data like command buffers, semaphores, fences, swapchains etc. This is a good starting point.
Now I'm thinking about how to split all this up to support multiple windows with multiple viewports that can be rendered to. The game won't need multiple windows, but the editor does.
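A rough shape of the split I have in mind - just a sketch, with Vulkan handles replaced by opaque placeholders so it compiles without the SDK; all the names here are mine:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for Vulkan handles, only to show the ownership split.
using Handle = std::uint64_t;

// Shared once per application: instance, device, queues.
struct RenderContext {
    Handle instance{};       // VkInstance
    Handle device{};         // VkDevice
    Handle graphicsQueue{};  // VkQueue
};

// Owned per window: everything tied to a surface.
struct WindowTarget {
    Handle surface{};            // VkSurfaceKHR
    Handle swapchain{};          // VkSwapchainKHR
    std::vector<Handle> images;  // swapchain images
};

// Owned per frame-in-flight: recording and sync state.
struct FrameData {
    Handle commandPool{};
    Handle commandBuffer{};
    Handle imageAvailable{};  // semaphore
    Handle renderFinished{};  // semaphore
    Handle inFlightFence{};
};
```

The point is that nothing in `RenderContext` knows about windows, so adding a second `WindowTarget` shouldn't touch it.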
-
So, I want to support having multiple windows. #ImGui has a "multiviewport" feature for that where the ImGui context can be shared, but it's not supported under Linux with Wayland. So my only option is initializing multiple ImGui contexts and rendering them separately.
That's all fine. I'm just wondering if doing so will prevent me from doing drag-and-drop operations with ImGui across windows 🤔
-
Guess I'll just use multiple contexts for now and not worry too much about drag and drop. I don't need it that badly for moving panels from one window to another, and simpler stuff like dragging a resource over _should_ be possible to handle with a lower-level API.
The refactor is interesting - there are different swapchains but still only one thread, so using the fences correctly will be fun!
-
I guess there isn't actually a reason to give each window its own command buffer. And it's not like there's any multithreading involved or that windows would need to run at different framerates. So the window-specific data will be the surface, the swapchain, and the related release semaphore. Frames-in-flight will have one command pool/buffer each, and they'll record commands for all windows. Then each frame there's a single vkQueuePresent, handling all swapchains. Yeah.
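What one frame could look like under that design - a sketch only, with the actual Vulkan calls left as comments (`FRAMES_IN_FLIGHT` and the function names are my assumptions):

```cpp
#include <cstddef>

constexpr std::size_t FRAMES_IN_FLIGHT = 2;

// Advance the frame-in-flight index; the same pool/buffer records all windows.
std::size_t nextFrame(std::size_t current) {
    return (current + 1) % FRAMES_IN_FLIGHT;
}

// One frame: wait on this frame's fence, reset its pool, record every
// window into the single command buffer, then submit and present once.
std::size_t renderAllWindows(std::size_t frame, std::size_t windowCount) {
    // vkWaitForFences(frames[frame].inFlightFence, ...)  -- CPU throttle
    // vkResetCommandPool(frames[frame].commandPool, 0)
    std::size_t recorded = 0;
    for (std::size_t w = 0; w < windowCount; ++w) {
        // vkAcquireNextImageKHR(windows[w].swapchain, ...)
        // record draws for window w
        ++recorded;
    }
    // one vkQueueSubmit, then one vkQueuePresentKHR whose VkPresentInfoKHR
    // lists all the swapchains and image indices at once
    (void)frame;
    return recorded;
}
```

The single-present part is real API: `VkPresentInfoKHR` takes arrays of swapchains and image indices, so one `vkQueuePresentKHR` can cover every window.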
-
Wait! But one of my monitors is 60Hz while the other is 144Hz! I _do_ want windows to run at different framerates or else windows on my main monitor will be bottlenecked by the other one! Ahhhhh
-
Ok after doing some research my preliminary conclusion is that the only way to reliably have multiple windows with independent refresh rates is to use multithreading, one render thread per window. Threads are a blind spot of mine that I've been meaning to overcome anyway, guess the time has come 🥴
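The minimal shape of one-render-thread-per-window, with the actual rendering stubbed out as a counter (names are mine; the real loop would block inside present at each monitor's refresh rate):

```cpp
#include <atomic>

// Per-window state for a one-render-thread-per-window setup. With FIFO
// present mode, each thread would naturally pace itself to its monitor.
struct Window {
    std::atomic<bool> running{true};
    std::atomic<int> framesRendered{0};
};

// Body of one render thread: it owns this window's swapchain and frame
// data, so no fences are shared across threads. Note the graphics queue
// itself would still need external synchronization (e.g. a mutex around
// vkQueueSubmit), since Vulkan queues aren't thread-safe.
void renderLoop(Window& w) {
    while (w.running.load()) {
        // acquire image, record commands, submit, present -- this window only
        w.framesRendered.fetch_add(1);
    }
}
```

Each thread stays trivially independent as long as all per-window and per-frame resources live with the thread; only the shared queue needs a lock.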
-
Once I have my game engine where every panel in the editor is detachable I'll get a third monitor to make it worth the effort
-
Hmm I wonder if it's worth checking out Qt as well as an option for editor GUI... it seems like a heavy dependency but it might be more suited for what I want to do. ImGui is great for in-game debug UI but is not really meant to be used for a full application
-
Should... should I write my own UI library? I'm gonna need one for ingame UI anyway, could do it like Godot did and make it so it's usable for an editor as well.
But I was kind of hoping my ingame UI tooling wouldn't need to be constrained by having to support regular desktop widgets and interactions, so I really should just pick a third party solution for the editor... nyeh
-
Ok I think I'll stop bike-shedding for now regarding editor GUI and multi-window support. Just thinking about it was valuable and clarified some future requirements, but they can be implemented later. Next steps:
- Hardcode a 3D pipeline (like a cube)
- Load a gltf and render it bindlessly
- Dehardcode the pipeline
-
Last time the internal mesh data format I used was huge - it was just the vertex and index buffers dumped into a file so they could be memcopied into the GPU quickly without processing. This time I'll also add a (de)compression step, which should still be pretty fast. And use meshoptimizer during the import step.
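One cheap pre-pass that shrinks the buffers before any general-purpose compressor even runs is quantizing attributes like normals to 16 bits. A sketch - the helper names and scale factor are mine (meshoptimizer ships similar quantization helpers, if I end up using those instead):

```cpp
#include <cmath>
#include <cstdint>

// Quantize a normalized float in [-1, 1] to a signed 16-bit integer.
// Halves the attribute size and makes the data far more compressible.
std::int16_t quantizeSnorm16(float v) {
    if (v > 1.0f) v = 1.0f;   // clamp out-of-range input
    if (v < -1.0f) v = -1.0f;
    return static_cast<std::int16_t>(std::lround(v * 32767.0f));
}

// Inverse mapping used at load time (or directly in the vertex shader
// via a SNORM vertex format, skipping the CPU-side decode entirely).
float dequantizeSnorm16(std::int16_t q) {
    return static_cast<float>(q) / 32767.0f;
}
```

The maximum round-trip error is half a quantization step, about 1.5e-5, which is far below anything visible for normals.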
-
ok forget hardcoding a cube mesh in the shader, going straight for glTF model loading. I can copy paste parts of that from the previous iteration of the engine, while improving it at the same time.
-
glTF model loading and 3D rendering with depth work! There's some strange bug when I try to load in larger models, and there are a bunch of validation errors about synchronization issues, but those are for another day
-
Progress of the evening: fixed the problem that prevented some models from loading (it was just a wrong variable), loading all meshes / surfaces in a glTF now, loading normals. Textures will be next!
-
biblically accurate bindless textures
-
Engine rewrite progress update after an extended weekend:
- Bindless textures and materials
- Normal mapping
- Scene tree with transform hierarchy
- Camera and scene data in per-frame uniform buffer instead of push constants
- MSAA
- Lots of refactoring and new abstractions
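The transform-hierarchy item from the list above, reduced to translations so the idea fits in a few lines - assuming nodes are stored with parents before children, which is my own convention here, not necessarily the engine's:

```cpp
#include <vector>

// Minimal scene-tree transform propagation. Real code would use full
// 4x4 matrices; translations keep the parent-before-child idea visible.
struct Vec3 { float x, y, z; };

struct Node {
    int parent;    // -1 for a root; parents are stored before children
    Vec3 local;    // transform relative to the parent
    Vec3 world{};  // computed absolute transform
};

// Single linear pass: because a parent always precedes its children,
// nodes[n.parent].world is already up to date when we reach a child.
void updateWorldTransforms(std::vector<Node>& nodes) {
    for (auto& n : nodes) {
        if (n.parent < 0) {
            n.world = n.local;
        } else {
            const Vec3& p = nodes[n.parent].world;
            n.world = { p.x + n.local.x, p.y + n.local.y, p.z + n.local.z };
        }
    }
}
```

Keeping the tree flattened in topological order turns the whole update into one cache-friendly loop instead of a recursive walk.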
-
Some stuff was mostly copy-pasted from my previous project. The really new thing is the bindless paradigm. This finally helped me understand descriptor sets - I was really confused about them last time when I followed vkguide.dev.
Not sure if I'm doing bindless "right" or if I'm bound for troubles down the road but so far it works pretty well! I only do a single vkCmdBindDescriptorSets per frame, binding global resources, and push material IDs per object in push constants.
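Simulated on the CPU, the shader-side lookup amounts to this (struct and function names are mine, not actual engine code):

```cpp
#include <cstdint>
#include <vector>

// CPU-side mirror of the bindless idea: one global material table is
// bound once per frame; each draw only pushes an index into it.
struct Material {
    std::uint32_t albedoTexture;  // index into the global texture array
    std::uint32_t normalTexture;
};

struct PushConstants {
    std::uint32_t materialID;
};

// In the shader this is just materials[pc.materialID], reading through
// the descriptor set that was bound once at the start of the frame.
const Material& lookupMaterial(const std::vector<Material>& table,
                               const PushConstants& pc) {
    return table[pc.materialID];
}
```

This is why a single vkCmdBindDescriptorSets suffices: per-draw variation lives entirely in the pushed index, not in descriptor bindings.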
-
I don't know yet how this will change once I add more shaders / material types though. I only have a single graphics pipeline right now, and figuring out a system to expand on that will be the next big task.
Also, Slang has some features dedicated to "bindless", but I didn't really understand them, and everything seems to work without them? Might refactor the shader once I figure out the advantage of DescriptorHandle and how to use it.
-
Anyway, possible next steps:
- Mipmaps
- Basic lighting
- Finally start cleaning up resources correctly
- Maybe look into batching the uploads of assets to the GPU
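For the mipmap item: the length of a full mip chain follows directly from the image size, and it's the value `VkImageCreateInfo::mipLevels` expects:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Number of mip levels for a full chain: floor(log2(max(w, h))) + 1.
// Each level halves the larger dimension until it reaches 1x1.
std::uint32_t mipLevelCount(std::uint32_t width, std::uint32_t height) {
    std::uint32_t largest = std::max(width, height);
    return static_cast<std::uint32_t>(std::floor(std::log2(largest))) + 1;
}
```

So a 1024x512 texture gets 11 levels; the same count is then needed when recording the chain of blits that generates the mips.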
-
With the combined power of albedo, normal maps, and a simple dot product I present: basic lighting
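The "simple dot product" being Lambert's cosine term - a sketch with my own minimal vector type, normal and light direction assumed normalized:

```cpp
#include <algorithm>

// Lambertian diffuse: reflected light proportional to max(0, dot(N, L)).
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// albedo scaled by the cosine term; the max() clamps surfaces
// facing away from the light to black instead of going negative.
Vec3 lambert(const Vec3& albedo, const Vec3& normal, const Vec3& lightDir) {
    float ndotl = std::max(0.0f, dot(normal, lightDir));
    return { albedo.x * ndotl, albedo.y * ndotl, albedo.z * ndotl };
}
```

In the fragment shader the normal comes from the normal map (transformed to world space), which is what makes the flat geometry look lit per-texel.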