
Renderer Architecture

I’ve recently been trying to nail down some of the foundations of a hobby engine that I’m developing. One of the big tasks has been deciding where to draw the lines that separate the responsibilities of the different components of the overall Renderer. I wanted to take some time to talk about my approach here, in the hope that it might offer some inspiration for others who, like me, are still very much at the beginning of their engine programming journey.

A Matter of Perspective

Look at any beginner’s resource on engine architecture and you’ll very quickly encounter the concept of a Scene. In its purest form, the Scene defines what really exists in our game world, and it’s the job of the Renderer to inspect all of that data and produce some view into that world. This is the first core concept of my approach: the Renderer’s main job is to give me a picture of the Scene. While that might sound obvious, note that this definition doesn’t enforce any notion of what that picture should look like, nor how the Renderer should interpret the scene data.

The way I’ve chosen to implement this is to define an abstract SceneRenderer base class, which currently only requires a single method to be implemented. RenderToFramebuffer takes in a Scene and Framebuffer target, and paints the scene onto the Framebuffer (we’ll come to the RenderQueue later). Any user can then inherit from this base and implement the RenderToFramebuffer method however they see fit.

/* "graphics/scenerenderer.hpp" */
class SceneRenderer {
	public:
		SceneRenderer() = default;
		virtual ~SceneRenderer() = default;
		// Pure virtual: derived classes decide how to interpret the scene.
		virtual void RenderToFramebuffer(
			std::shared_ptr<Scene> scene,
			std::shared_ptr<Framebuffer> framebuffer) = 0;

	protected:
		std::shared_ptr<Renderer2D> m_Renderer2D;
		std::shared_ptr<Renderer3D> m_Renderer3D;
};

class BasicSceneRenderer : public SceneRenderer {
	public:
		BasicSceneRenderer();
		~BasicSceneRenderer() override;
		void RenderToFramebuffer(
			std::shared_ptr<Scene> scene,
			std::shared_ptr<Framebuffer> framebuffer) override;

	private:
		FramebufferConfig m_MainTargetConfig;
		std::shared_ptr<Framebuffer> m_MainTarget;
};

class EditorSceneRenderer : public SceneRenderer {
	public:
		EditorSceneRenderer(std::shared_ptr<EditorState> editorState);
		~EditorSceneRenderer() override;
		void RenderToFramebuffer(
			std::shared_ptr<Scene> scene,
			std::shared_ptr<Framebuffer> framebuffer) override;

	private:
		std::shared_ptr<EditorState> m_EditorState;
		FramebufferConfig m_MainTargetConfig;
		std::shared_ptr<Framebuffer> m_MainTarget;
};

An example of how we could use the scene renderer system to define two ways of rendering the same scene.

As an example, one might want to offer multiple views into the scene within an editor tool. Take Unity, where we can tab between the Scene and Game views. In our implementation we would render the scene differently depending on which tab is displayed, by passing the framebuffer to one of two SceneRenderers (see the sketch below). The first would likely render the scene in the traditional way, from the perspective of the Scene’s currently active camera. The second could ignore the Scene’s camera and instead take a camera defined and controlled by the Editor itself. It could then do a bunch of other useful things, like drawing grids, toggling wireframe mode, ignoring shadows, ignoring lighting, toggling the skybox, and so on. The other advantage here is that we can easily pass the same scene and framebuffer through multiple scene renderers in sequence, allowing us to apply overlays. We can also have scene renderers that hold their own scene renderers, such as a series of simple postprocessing subpasses or a custom antialiasing solution.
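To make that concrete, here’s a minimal sketch of how an editor might dispatch between the two. Note that Editor, Tab, m_ActiveTab, m_GameRenderer, m_EditorRenderer, and m_GizmoRenderer are all hypothetical names invented for this illustration:

/* Hypothetical editor-side dispatch; these names don't exist in the engine. */
void Editor::RenderActiveView(
	std::shared_ptr<Scene> scene,
	std::shared_ptr<Framebuffer> target) {
	if (m_ActiveTab == Tab::Game) {
		m_GameRenderer->RenderToFramebuffer(scene, target);   // BasicSceneRenderer
	} else {
		m_EditorRenderer->RenderToFramebuffer(scene, target); // EditorSceneRenderer
	}
	// Both renderers share the same interface, so an overlay is just
	// another pass over the same scene and framebuffer.
	m_GizmoRenderer->RenderToFramebuffer(scene, target);
}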

The Unity editor allows for multiple simultaneous views into the same scene.

As well as choosing what scene data to consume or ignore, the scene renderer will also be responsible for managing all of the various passes required to obtain the final image. It will hold all of the resources, such as intermediate Framebuffers and texture maps, that it needs. It’s a persistent object, so it can cache state between frames, and it contains references to a pair of configurable 2D and 3D renderers. I haven’t mentioned the renderer objects yet, because I haven’t settled precisely on where the scene renderer’s responsibilities should end and where theirs should begin. For now, the renderers will simply be responsible for taking requests to render certain geometry and submitting the relevant RenderCommands to the RenderQueue.
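To give a feel for how those pieces might fit together, here’s a rough sketch of what BasicSceneRenderer::RenderToFramebuffer could end up looking like. GetActiveCamera, GetMeshes, BeginScene, Submit, EndScene, and BlitTo are placeholder names rather than settled API:

/* A rough sketch only; the Scene and Renderer3D methods used here are
   placeholders, not settled API. */
void BasicSceneRenderer::RenderToFramebuffer(
	std::shared_ptr<Scene> scene,
	std::shared_ptr<Framebuffer> framebuffer) {
	m_MainTarget->Bind();
	m_Renderer3D->BeginScene(scene->GetActiveCamera());
	for (auto& mesh : scene->GetMeshes()) {
		m_Renderer3D->Submit(mesh); // queues RenderCommands internally
	}
	m_Renderer3D->EndScene();
	// Resolve the intermediate target into the framebuffer we were given.
	m_MainTarget->BlitTo(framebuffer);
}

Speaking of which, it’s time we turned our attention to the other key aspect of the renderer.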

Commands, Queues, and Submission

The idea of a render command queue is common, and is even a requirement in some graphics APIs (see Vulkan’s VkCommandBuffer). Instead of the program making direct draw calls all over the place, it submits commands to some queue, which are then executed sequentially by the driver at some known point in time. This provides a number of advantages. Firstly, it eases debugging, allowing us to just slap in a breakpoint and inspect the queue at any given time. Can’t see that triangle? Maybe you just got your draw order wrong; this will make it easier to confirm or deny that hunch. Second, in the future we may wish to move all of those expensive driver calls over to a separate thread. Having a command pattern allows us to capture all the necessary state those commands need in order to execute, and will help us more easily ensure thread safety in the future.

A RenderCommand is essentially a wrapper around some driver draw call, alongside the necessary state and state-changing commands. For example, we can implement a RenderVertexArray command which captures weak pointers to a vertex array and a shader. At the point of execution the command can then check whether the vertex array and shader are still alive, and if so, bind them and issue a glDrawElements call. We then also define a RenderQueue, which is responsible for accruing RenderCommands and executing them all in order when flushed.

/* "graphics/rendercommands.hpp" */

class RenderCommand {
	public:
		virtual void Execute() = 0;
		virtual ~RenderCommand() {}
};

class RenderVertexArray : public RenderCommand {
	public:
		RenderVertexArray(std::weak_ptr<VertexArray> vertexArray, std::weak_ptr<Shader> shader) 
			: m_VertexArray(vertexArray), m_Shader(shader) {}
		~RenderVertexArray() {}
		void Execute() override;
	private:
		std::weak_ptr<VertexArray> m_VertexArray;
		std::weak_ptr<Shader> m_Shader;
};

/* "graphics/renderqueue.hpp" */

class RenderQueue {
	public:
		RenderQueue() {}
		~RenderQueue() {}
		void Init();
		void Shutdown();
		void Submit(std::unique_ptr<RenderCommand> cmd);
		void Flush();

	private:		
		std::queue<std::unique_ptr<RenderCommand>> m_Commands;
};
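The headers above only declare these methods, so here’s roughly what the interesting bodies could look like on the OpenGL path. Bind() and GetIndexCount() are assumed interfaces on Shader and VertexArray, so treat this as a sketch rather than the engine’s actual implementation:

/* Sketch implementations; Bind() and GetIndexCount() are assumed
   interfaces rather than confirmed engine API. */
void RenderVertexArray::Execute() {
	auto vertexArray = m_VertexArray.lock();
	auto shader = m_Shader.lock();
	if (!vertexArray || !shader) return; // resources died; skip the draw
	shader->Bind();
	vertexArray->Bind();
	glDrawElements(GL_TRIANGLES, vertexArray->GetIndexCount(), GL_UNSIGNED_INT, nullptr);
}

void RenderQueue::Submit(std::unique_ptr<RenderCommand> cmd) {
	m_Commands.push(std::move(cmd));
}

void RenderQueue::Flush() {
	// Execute everything in submission order, leaving the queue empty.
	while (!m_Commands.empty()) {
		m_Commands.front()->Execute();
		m_Commands.pop();
	}
}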

I’m quite pleased with this solution, but there’s a fair bit of room for improvement. Firstly, everything is still a bit loosey-goosey at the moment. I need to build some more complex scenes and implement things like shadow mapping and postprocessing before I can begin to understand how I want the final scene renderer API to function. There are almost certainly some safety issues too; right now there’s nothing stopping me from submitting a framebuffer that is completely incompatible with what the scene renderer expects. As for the RenderQueue and commands, I think the current API is a little clunky. It’s not the most intuitive thing to use, requiring the user to manually construct a unique pointer to a RenderCommand and then std::move it in the call to RenderQueue::Submit(). I could easily write a bunch of helper functions to simplify the process, but then would the RenderCommand objects still be needed?
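For what it’s worth, a helper along these lines would hide the std::make_unique/std::move dance; this is just one possible shape for the API, not something I’ve committed to:

/* A possible ergonomic wrapper; purely a sketch of the idea. */
template <typename Cmd, typename... Args>
void Submit(RenderQueue& queue, Args&&... args) {
	queue.Submit(std::make_unique<Cmd>(std::forward<Args>(args)...));
}

// Usage: Submit<RenderVertexArray>(queue, vertexArray, shader);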

Anyway, that’s enough for now. Hopefully you’ve found this useful. If you have any questions or comments, feel free to email me at samhaskell@proton.me