some API updates
cosmonaut 2022-03-02 00:20:53 -08:00
parent b1aef3198d
commit feb57b5729
9 changed files with 69 additions and 243 deletions
View File
@@ -6,7 +6,7 @@ weight: 4
Now that we have our rendering state set up, we can talk about issuing commands.
There are four kinds of overarching operations that we perform using the command buffer: applying render state, uploading data, binding resources, and performing draws. Let's go through each of these and then tie it all together.
First, we need to acquire a command buffer.
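For reference, acquiring one uses the same `GraphicsDevice.AcquireCommandBuffer` call shown in the GraphicsDevice section:

```cs
// Acquire a command buffer; all of the commands below are recorded into it.
var myCommandBuffer = GraphicsDevice.AcquireCommandBuffer();
```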
@@ -25,24 +25,37 @@ myCommandBuffer.SetTextureData(myTexture, myPixels);
myCommandBuffer.SetBufferData(myBuffer, myData);
```
## Acquiring a swapchain texture
The swapchain is used for presenting an image to the operating system's window. Swapchain textures are acquired per-window.
```cs
var swapchainTexture = myCommandBuffer.AcquireSwapchainTexture(Window);
```
Swapchain textures can be used like any other texture for copy operations or as render pass attachments.
Once a swapchain texture is acquired, it will be presented to the window when the command buffer is submitted.
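As a rough end-to-end sketch of how this fits together over a single frame, using only calls that appear elsewhere on this page and in the GraphicsDevice section:

```cs
// One frame: acquire a command buffer and a swapchain texture,
// record a render pass that clears the swapchain texture, then submit.
var commandBuffer = GraphicsDevice.AcquireCommandBuffer();
var swapchainTexture = commandBuffer.AcquireSwapchainTexture(Window);

commandBuffer.BeginRenderPass(
    new ColorAttachmentInfo(swapchainTexture, Color.CornflowerBlue)
);
// ... bind pipelines and resources, issue draws ...
commandBuffer.EndRenderPass();

// Submitting the command buffer presents the acquired swapchain texture.
GraphicsDevice.Submit(commandBuffer);
```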
## Beginning the render pass
All bindings and draw calls must be made within a render pass. You should think of a render pass as a set of draw calls rendering to the same group of textures.
Beginning a render pass requires a minimum of one ColorAttachmentInfo struct. Up to four ColorAttachmentInfos can be provided, and an optional DepthStencilAttachmentInfo can be provided as well.
The ColorAttachmentInfo struct always includes a texture. Additional info can be provided, like depth, layer, and level regions. You can use these to do things like render into a cube map or different depths of a 3D texture. The most common case is just providing a texture and a clear color. Note that a texture used in ColorAttachmentInfo *must* either be a swapchain texture or have had `TextureUsageFlags.ColorTarget` set on creation.
The DepthStencilAttachmentInfo is similar, but describes depth/stencil parameters.
There can only be one active render pass on a command buffer at a time.
```cs
myCommandBuffer.BeginRenderPass(
    new ColorAttachmentInfo(swapchainTexture, Color.CornflowerBlue)
);
```
In this example, the render pass will render to the swapchain texture we acquired, and we will clear it to the cornflower blue color at the start of the render pass.
You can also clear the framebuffer by calling `CommandBuffer.Clear`. It's strongly recommended to clear when beginning a render pass if you need the render target cleared, but it is possible to clear mid-pass if you wish.
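If you also want a depth/stencil attachment, the call is similar. The following is only a sketch - the exact `DepthStencilAttachmentInfo` constructor arguments are an assumption and should be checked against the actual API:

```cs
// Assumed constructor arguments - verify against DepthStencilAttachmentInfo.
// Clears depth to 1 and stencil to 0 at the start of the pass.
myCommandBuffer.BeginRenderPass(
    new DepthStencilAttachmentInfo(myDepthStencilTexture, new DepthStencilValue(1, 0)),
    new ColorAttachmentInfo(swapchainTexture, Color.CornflowerBlue)
);
```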
@@ -56,6 +69,8 @@ myCommandBuffer.BindGraphicsPipeline(myGraphicsPipeline);
Note that if you use multiple graphics pipelines that all share a render pass, you can call this method again and it will bind a new graphics pipeline without needing to end the render pass.
Note that the graphics pipeline must be *compatible* with the current render pass.
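For example, within a single render pass (the pipeline names here are hypothetical):

```cs
// Two compatible pipelines bound one after another in the same render pass.
myCommandBuffer.BindGraphicsPipeline(mySpritePipeline);
// ... bind resources and draw sprites ...

myCommandBuffer.BindGraphicsPipeline(myParticlePipeline);
// ... bind resources and draw particles ...
```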
## Binding Buffers
```cs
@@ -147,18 +162,6 @@ When you need to change framebuffers or present to the screen, it's time to end
myCommandBuffer.EndRenderPass();
```
## Presenting the Frame
"Presenting" is when we provide a final image that we want to show in our game window.
To present, we must provide a texture or texture slice, a filter, and an optional destination rectangle. If a destination rectangle is not provided the resulting image will be automatically scaled to the size of the game window.
```cs
myCommandBuffer.QueuePresent(myColorTarget, Filter.Nearest);
```
Note that if you use multiple command buffers, you still only want to call `QueuePresent` exactly once per frame.
## Copying Data
Sometimes you might want to copy the contents of a texture to another texture.
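A sketch of what such a copy might look like. The copy method name and its signature are assumptions here, not confirmed API - check the CommandBuffer documentation for the real call:

```cs
// Assumed method name and parameters - verify against the CommandBuffer API.
myCommandBuffer.CopyTextureToTexture(
    new TextureSlice(mySourceTexture),
    new TextureSlice(myDestinationTexture),
    Filter.Nearest
);
```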
View File
@@ -17,10 +17,10 @@ var commandBuffer = GraphicsDevice.AcquireCommandBuffer();
GraphicsDevice.Submit(commandBuffer);
```
The GraphicsDevice can query the swapchain format for a window.
```cs
var swapchainFormat = GraphicsDevice.GetSwapchainFormat(Window);
```
There is one last thing that the GraphicsDevice can do, and that is `Wait`. To understand waiting, you need to understand a little bit about how MoonWorks.Graphics processes rendering.
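Calling it is simple, assuming here that `Wait` takes no arguments:

```cs
// Block until the GPU has finished all previously submitted work.
GraphicsDevice.Wait();
```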
View File
@@ -4,7 +4,7 @@ date: 2021-01-24T20:54:00-08:00
weight: 1
---
If the other MoonWorks tools have seemed simple so far, MoonWorks.Graphics may surprise you. Graphics is implemented using the [Refresh](https://gitea.moonside.games/MoonsideGames/Refresh) library. Refresh is based on Vulkan, an industry-standard cross-platform graphics API. It provides no "standard" rendering system that you can easily hook into, unlike what you may be used to with an engine like Unity. In order to render to the screen, you will need to set up, at minimum, some Shader Modules and a Graphics Pipeline, and provide vertex data to be rendered through the pipeline.
This might sound complicated, and I can't deny that, yeah, it's complicated. But everything is complicated at first. I promise to walk you through it as best I can, and once you understand how the pipeline works, you will have total control over your rendering and you will be able to create extremely high performance games that look exactly how you want.
View File
@@ -1,37 +0,0 @@
---
title: "Framebuffer"
date: 2021-01-26T17:58:24-08:00
weight: 7
---
A framebuffer is a collection of render targets. When we want to draw things, we "bind" the framebuffer so that it can be drawn to. You can create a framebuffer for a single render target, but you can also create a framebuffer consisting of up to 4 color targets and a depth/stencil target. This is useful for techniques such as [deferred rendering](https://learnopengl.com/Advanced-Lighting/Deferred-Shading), because it is much cheaper to render to a bunch of render targets all at once instead of switching them out.
To create a framebuffer, we must provide dimensions, a Render Pass, and some render targets.
```cs
var myFramebuffer = new Framebuffer(
    GraphicsDevice,
    1280,
    720,
    myRenderPass,
    myColorTarget
);
```
This will create a framebuffer of size 1280x720 that uses `myColorTarget` as its color target and is based on `myRenderPass`. There is a concept of "render pass compatibility" that will let you use a framebuffer with multiple different render passes, but it's usually easier to just create the framebuffers you need for each render pass you will be using.
```cs
var myMultiTargetFrameBuffer = new Framebuffer(
    GraphicsDevice,
    1280,
    720,
    myGBufferRenderPass,
    myPositionTarget,
    myNormalTarget,
    myAlbedoTarget,
    myMetallicRoughnessTarget,
    myDepthTarget
);
```
This will create a multiple render target framebuffer. That's pretty much all there is to say about framebuffers for now.
View File
@@ -0,0 +1,35 @@
---
title: "Attachment Info"
date: 2021-01-28T12:55:51-08:00
weight: 10
---
GraphicsPipelineAttachmentInfo tells the graphics pipeline what kinds of render passes will be used with the pipeline. These are *compatible* render passes. It also describes how the pipeline should blend colors. Blending does not affect compatibility, but the other properties do.
This structure is composed of a set of ColorAttachmentDescriptions, and an optional depth/stencil format.
Each ColorAttachmentDescription is composed of a texture format, a multisample count, and a ColorAttachmentBlendState.
Our color attachment always has color information stored in it - the blend state tells the renderer how to combine new pixel values with the original values to produce a new color.
`BlendEnable` is a bool. If we turn off blending entirely, the new pixel color will just overwrite any existing values. Simple!
`ColorWriteMask` tells the renderer which color channels it is allowed to write to.
Next we have `AlphaBlendOp` and `ColorBlendOp`, which are `BlendOp` values, and the rest of the fields are `BlendFactor` values. Rather than spend a bunch of time explaining the nuances of various blend modes, I will simply link you to a [very thorough explanation of graphics blending](https://learnopengl.com/Advanced-OpenGL/Blending).
Here is an example blend mode, which does something we call "additive blending". This kind of blending is very useful for lighting and particle effects.
```cs
var myColorTargetBlendState = new ColorTargetBlendState
{
    BlendEnable = true,
    ColorWriteMask = ColorComponentFlags.RGBA,
    AlphaBlendOp = BlendOp.Add,
    ColorBlendOp = BlendOp.Add,
    SourceColorBlendFactor = BlendFactor.SourceAlpha,
    SourceAlphaBlendFactor = BlendFactor.SourceAlpha,
    DestinationColorBlendFactor = BlendFactor.One,
    DestinationAlphaBlendFactor = BlendFactor.One
};
```
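A sketch of how this blend state might then be plugged into the rest of the attachment info. The field names below are assumptions based on the description above, not confirmed API:

```cs
// Assumed field names - verify against ColorAttachmentDescription
// and GraphicsPipelineAttachmentInfo before using.
var myColorAttachmentDescription = new ColorAttachmentDescription
{
    Format = TextureFormat.R8G8B8A8,
    SampleCount = SampleCount.One,
    BlendState = myColorTargetBlendState
};

var myGraphicsPipelineAttachmentInfo = new GraphicsPipelineAttachmentInfo
{
    // One color attachment; a depth/stencil format could optionally be set here too.
    ColorAttachmentDescriptions = new[] { myColorAttachmentDescription }
};
```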
View File
@@ -4,35 +4,9 @@ date: 2021-01-27T14:11:42-08:00
weight: 1
---
Color blend state is comprised of three fields - a logical operation, whether that logical operation is enabled, and some blending constants.
`LogicOp` lets you do a bitwise combination of the original and new color. You can read about that in more detail [over here](https://www.khronos.org/registry/vulkan/specs/1.2/html/chap28.html#framebuffer-logicop).
`BlendConstants` are used by the blend factors to produce result colors. You can read about that [here](https://www.khronos.org/registry/vulkan/specs/1.2/html/chap28.html#framebuffer-blendconstants).
@@ -49,9 +23,10 @@ var myBlendConstants = new BlendConstants
var myColorBlendState = new ColorBlendState
{
    LogicOpEnable = true,
    LogicOp = LogicOp.And,
    BlendConstants = myBlendConstants
};
```
Further blending state control is provided in GraphicsPipelineAttachmentInfo, which will be discussed later.
View File
@@ -1,55 +0,0 @@
---
title: "Render Pass"
date: 2021-01-26T17:55:09-08:00
weight: 6
---
Render passes are a sort of intermediary between graphics pipelines and framebuffers that describe the kind of render targets that will be used in a particular section of rendering. This is probably a little hard to understand because we haven't talked about graphics pipelines or framebuffers yet, but I might as well introduce you to them now.
Render passes describe the kinds of targets that will be included in framebuffers so that the pipeline can set itself up correctly. They do this using `ColorTargetDescription`s and an optional `DepthStencilTargetDescription`.
```cs
var myColorTargetDescription = new ColorTargetDescription
{
    Format = TextureFormat.R8G8B8A8,
    MultisampleCount = SampleCount.One,
    LoadOp = LoadOp.Clear,
    StoreOp = StoreOp.Store
};

var myRenderPass = new RenderPass(
    GraphicsDevice,
    myColorTargetDescription
);
```
With this definition we are saying that any framebuffer that will be used with this render pass will have exactly one render target in the RGBA8 format with no multisampling.
StoreOp tells the renderer what to do with the render target when the render pass finishes. `Store` means that the pixel data on the render target will be saved for later use. `DontCare` leaves the result up to the driver - generally it will do whatever it thinks is fastest, but you don't have any guarantees about what the contents will be.
LoadOp tells the renderer what to do with the render target when the render pass starts. `Load` will load in the pixel information as it was stored. `Clear` will clear the render target to a single color that you provide when you begin the render pass. `DontCare` is the same as above.
So in this example, we are saying that at the beginning of the render pass the render target will be cleared, and at the end of the render pass the render target will be stored for later use.
There is also a special type of description that you must use for depth/stencil targets.
```cs
var myDepthStencilTargetDescription = new DepthStencilTargetDescription
{
    Format = TextureFormat.D32S8,
    LoadOp = LoadOp.DontCare,
    StoreOp = StoreOp.DontCare,
    StencilLoadOp = LoadOp.DontCare,
    StencilStoreOp = StoreOp.DontCare
};

var myRenderPass = new RenderPass(
    GraphicsDevice,
    myDepthStencilTargetDescription,
    myColorTargetDescription
);
```
`StencilLoadOp` and `StencilStoreOp` allow you to specify different ops for the stencil buffer in particular. `LoadOp` and `StoreOp` here refer to the depth buffer. If your format does not include a stencil component, `StencilLoadOp` and `StencilStoreOp` will be ignored.
That's it for render passes for now. Let's talk about framebuffers so this all starts making more sense.
View File
@@ -1,89 +0,0 @@
---
title: "Render Target"
date: 2021-01-26T17:59:45-08:00
weight: 5
---
When you write rendering code, you might intuitively think we are just drawing to the screen, but you would be wrong! In MoonWorks, we are always drawing to an object called a render target. You can think of a render target like a canvas. It can be whatever size we want and whatever color format, and we can composite them together to get our final render.
To create a render target, we need a texture slice. We can also optionally provide a multisample count.
Multisampling is a technique which can produce higher-quality images at a processing cost. You can [read more about multisampling elsewhere](https://www.khronos.org/opengl/wiki/Multisampling) if you're interested. If you don't want multisampling, simply use `SampleCount.One` or omit the sample count argument.
```cs
var myMultisampleRenderTarget = new RenderTarget(
    GraphicsDevice,
    myTextureSlice,
    SampleCount.Four
);

var myRegularRenderTarget = new RenderTarget(
    GraphicsDevice,
    myTextureSlice
);
```
When we use a texture slice to create a render target, we say that the render target is "backed" by the texture slice. This code will create a render target that is backed by `myTextureSlice`. In order to use a texture as a backing texture, you need to make sure you create it with the `TextureUsageFlags.RenderTarget` flag.
A lot of the time you will be creating render targets and textures at the same time. We have a nice shortcut for that.
```cs
var myRenderTarget = RenderTarget.CreateBackedRenderTarget(
    GraphicsDevice,
    1280,
    720,
    TextureFormat.R8G8B8A8,
    false
);
```
This code creates a 1280x720 color render target, and creates the backing texture alongside it.
Another important kind of render target is a "depth buffer". Depth buffers are used so that the graphics card can know to skip rendering on pixels that will be obscured by other objects. This can be a major optimization.
To create a depth buffer, you must use one of the depth texture formats.
```cs
var textureCreateInfo = new TextureCreateInfo
{
    Width = 1280,
    Height = 720,
    Depth = 1,
    Format = TextureFormat.D32,
    IsCube = false,
    LevelCount = 1,
    SampleCount = SampleCount.One,
    UsageFlags = TextureUsageFlags.RenderTarget
};

var depthTexture = new Texture(GraphicsDevice, textureCreateInfo);

var depthBuffer = new RenderTarget(
    GraphicsDevice,
    new TextureSlice(depthTexture)
);
```
This sets up a 1280x720 32-bit depth buffer. We can also use our above-mentioned method to do the same thing.
```cs
var depthBuffer = RenderTarget.CreateBackedRenderTarget(
    GraphicsDevice,
    1280,
    720,
    TextureFormat.D32,
    false
);
```
One last thing to note - depth buffers can also contain a "stencil buffer". Stencil buffers provide a much more nuanced way to cull pixels from rendering in particular cases. You should [read more about that](https://www.khronos.org/opengl/wiki/Stencil_Test) if you're interested.
```cs
var depthStencilBuffer = RenderTarget.CreateDepthStencilBuffer(
    GraphicsDevice,
    1280,
    720,
    TextureFormat.D32S8,
    false
);
```
View File
@@ -8,18 +8,12 @@ MoonWorks provides eight different kinds of graphics resources that you use to c
`Buffer` holds generic data, like vertex information for example. The way this data is interpreted is determined by the pipeline.
`Texture` holds image data in a specified format. Textures can also be rendered to (see the sketch after this list).
`Sampler` tells a shader how it should sample texture data.
`GraphicsPipeline` sets up the graphics device to do rendering work.
`RenderTarget` is a structure that can be rendered to.
`Framebuffer` is essentially a collection of `RenderTarget`s.
`RenderPass` is a structure that tells the `GraphicsPipeline` how it should use a `Framebuffer`.
`ComputePipeline` sets up the graphics device to do computational work using [compute shaders](https://anteru.net/blog/2018/intro-to-compute-shaders/index.html).
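Here is a hedged sketch of creating a texture that can be rendered to. The `TextureCreateInfo` fields follow the struct shown earlier in this diff, and the `ColorTarget` usage flag comes from the command buffer page; treat the exact combination as an assumption rather than confirmed API:

```cs
// Assumed field names and flags - verify against TextureCreateInfo
// and TextureUsageFlags in the current API.
var myRenderTexture = new Texture(GraphicsDevice, new TextureCreateInfo
{
    Width = 1280,
    Height = 720,
    Depth = 1,
    Format = TextureFormat.R8G8B8A8,
    IsCube = false,
    LevelCount = 1,
    SampleCount = SampleCount.One,
    UsageFlags = TextureUsageFlags.ColorTarget
});
```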