lots more docs
continuous-integration/drone/push Build is passing

main
cosmonaut 2021-01-27 15:03:07 -08:00
parent bfdc99601e
commit 8aed199e3c
8 changed files with 250 additions and 10 deletions

View File

@ -40,3 +40,5 @@ myPixelBuffer.GetData(myPixelData, width * height * pixelSizeInBytes);
```
There are some important things to note here. The buffer is not guaranteed to actually contain your data until `Submit` is called. Transferring memory to and from the GPU is expensive, so MoonWorks batches these operations aggressively. It *is* safe to assume that any command buffer commands you use after calling `SetData` will have access to the data, but it is *not* safe to assume that the buffer will contain the data you expect right after calling `SetData`. You need to call `GraphicsDevice.Submit` and then `GraphicsDevice.Wait` before the data is guaranteed to exist in the buffer. You should have a *very* good reason for doing this - this kind of behavior is called a "synchronization point" and it is extremely detrimental to performance.
Let's use an analogy: when you store data in a buffer using `SetData`, this is *not* equivalent to writing something down in a notebook and being able to look it up whenever you want. It's more like you just sealed a letter in an envelope and put it in the mailbox. Eventually when you call `Submit` the mailperson will come along, pick up your letter, and the postal service will deliver it to where it needs to go. When you use `Wait` this is basically equivalent to waiting around for the postal service to deliver you a response. You could have been walking your dog, cooking, playing a video game, or doing literally anything other than just sitting around by your mailbox! This is why using `Wait` is generally a bad idea unless you absolutely need that information, and if you do absolutely need it, that might indicate a need to structure your program differently anyway.
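To make the ordering concrete, here is a rough sketch of that round trip. Only `SetData`, `GetData`, `Submit`, and `Wait` are named above, so treat the `myCommandBuffer` variable and the exact argument lists as illustrative assumptions:
```cs
// Rough sketch of the round trip described above. myCommandBuffer, myPixelBuffer,
// myPixelData, and the exact argument lists are illustrative assumptions.
myPixelBuffer.SetData(myPixelData);     // queues the upload - the letter is in the mailbox
GraphicsDevice.Submit(myCommandBuffer); // the mail pickup - batched transfers actually run
GraphicsDevice.Wait();                  // synchronization point - waiting around by the mailbox
myPixelBuffer.GetData(myPixelData, width * height * pixelSizeInBytes); // now the data is guaranteed to be there
```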

View File

@ -1,6 +1,37 @@
---
title: "Framebuffer"
date: 2021-01-26T17:58:24-08:00
weight: 7
---
A framebuffer is a collection of render targets. When we want to draw things, we "bind" the framebuffer so that it can be drawn to. You can create a framebuffer for a single render target, but you can also create a framebuffer consisting of up to 4 color targets and a depth/stencil target. This is useful for techniques such as [deferred rendering](https://learnopengl.com/Advanced-Lighting/Deferred-Shading), because it is much cheaper to render to a bunch of render targets all at once instead of switching them out.
To create a framebuffer, we must provide dimensions, a Render Pass, and some render targets.
```cs
var myFramebuffer = new Framebuffer(
GraphicsDevice,
1280,
720,
myRenderPass,
myColorTarget
);
```
This will create a framebuffer of size 1280x720 that uses `myColorTarget` as its color target and is based on `myRenderPass`. There is a concept of "render pass compatibility" that will let you use a framebuffer with multiple different render passes, but it's usually easier to just create the framebuffers you need for each render pass you will be using.
```cs
var myMultiTargetFrameBuffer = new Framebuffer(
GraphicsDevice,
1280,
720,
myGBufferRenderPass,
myPositionTarget,
myNormalTarget,
myAlbedoTarget,
myMetallicRoughnessTarget,
myDepthTarget
);
```
This will create a multiple render target framebuffer. That's pretty much all there is to say about framebuffers for now.

View File

@ -0,0 +1,57 @@
---
title: "Color Blend State"
date: 2021-01-27T14:11:42-08:00
weight: 1
---
Color blend state is composed of four fields - a logical operation, whether that logical operation is enabled, some blending constants, and a collection of color *target* blend states. Let's talk about the target states first.
### Color Target Blend State
You might remember our old friend RenderTarget. A color target is just a render target in a color format. Our color target always has color information stored in it - the blend mode tells the renderer how to combine new pixel values with the original values to produce a new color.
`BlendEnable` is a bool. If we turn off blending entirely, the new pixel color will just overwrite any existing values. Simple!
`ColorWriteMask` tells the renderer which color channels it is allowed to write to.
Next we have `AlphaBlendOp` and `ColorBlendOp`, which are `BlendOp` values, and the rest of the fields are `BlendFactor` values. Rather than spend a bunch of time explaining the nuances of various blend modes, I will simply link you to a [very thorough explanation of graphics blending](https://learnopengl.com/Advanced-OpenGL/Blending).
Here is an example blend mode, which does something we call "additive blending". This kind of blending is very useful for lighting and particle effects.
```cs
var myColorTargetBlendState = new ColorTargetBlendState
{
BlendEnable = true,
ColorWriteMask = ColorComponentFlags.RGBA,
AlphaBlendOp = BlendOp.Add,
ColorBlendOp = BlendOp.Add,
SourceColorBlendFactor = BlendFactor.SourceAlpha,
SourceAlphaBlendFactor = BlendFactor.SourceAlpha,
DestinationColorBlendFactor = BlendFactor.One,
DestinationAlphaBlendFactor = BlendFactor.One
};
```
Next we have `LogicOp`. This lets you do bitwise combination of the original and new color. You can read about that in more detail [over here](https://www.khronos.org/registry/vulkan/specs/1.2/html/chap28.html#framebuffer-logicop).
`BlendConstants` are used by the blend factors to produce result colors. You can read about that [here](https://www.khronos.org/registry/vulkan/specs/1.2/html/chap28.html#framebuffer-blendconstants).
Let's put it all together:
```cs
var myBlendConstants = new BlendConstants
{
R = 1f,
G = 1f,
B = 1f,
A = 1f
};
// assumption for illustration: ColorTargetBlendStates takes an array of the per-target states defined above
var myColorTargetBlendStates = new ColorTargetBlendState[] { myColorTargetBlendState };
var myColorBlendState = new ColorBlendState
{
LogicOpEnable = false,
BlendConstants = myBlendConstants,
ColorTargetBlendStates = myColorTargetBlendStates
};
```

View File

@ -0,0 +1,54 @@
---
title: "DepthStencilState"
date: 2021-01-27T14:42:40-08:00
weight: 2
---
Depth stencil state determines how data is written to and read from the depth/stencil buffer.
`DepthTestEnable` determines whether the depth buffer will be used at all when determining whether to render a particular pixel. If you turn it off, the pixel will always be written no matter what is in the depth buffer.
`DepthWriteEnable` determines whether the depth buffer will be written to. This enables you to, for example, perform a depth test but not update the depth buffer with the results.
`CompareOp` determines how the depth buffer will be checked when determining whether to write a pixel. For example, if you use `CompareOp.LessOrEqual`, the new pixel's depth value will be checked for being less than or equal to the value already in the depth buffer. If it is, the pixel will be written. Otherwise it will be discarded.
`DepthBoundsTestEnable` determines whether bounds will be checked. `MinDepthBounds` determines the minimum, and `MaxDepthBounds` the maximum. If a pixel's depth is outside of this range it will be ignored. If `DepthBoundsTestEnable` is `false`, pixels will not be rejected on the basis of this bounds check.
`StencilTestEnable` enables or disables the stencil test. Obviously this should be `false` for pipelines that don't use a stencil buffer, and `true` for pipelines that do.
Next there are two kinds of `StencilOpState`s. Stencil tests and writes use the back stencil state when handling back-facing polygons and use the front stencil state when handling front-facing polygons. We'll get into this distinction later when we talk about rasterizer state.
If you want to understand why and how to use stencil testing you should [read this handy explanation](https://learnopengl.com/Advanced-OpenGL/Stencil-testing). For now I'll just detail how to set up stencil state.
The `StencilOpState` fields are as follows:
`FailOp` specifies the action performed on samples that fail the stencil test.
`PassOp` specifies the action performed on samples that pass both the depth and stencil tests.
`DepthFailOp` specifies the action performed on samples that pass the stencil test and fail the depth test.
`CompareOp` specifies the comparison operator used in the stencil test.
`CompareMask` selects the bits of the stencil values participating in the stencil test.
`WriteMask` selects the bits of the stencil values updated by the stencil test in the stencil buffer.
`Reference` is an integer reference value that is used in the unsigned stencil comparison.
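As a quick sketch of how one of these might be filled out - the field names are the ones listed above, but the `StencilOp` member spellings are assumptions for illustration:
```cs
// Hypothetical StencilOpState: write the reference value wherever both tests pass.
// The StencilOp.* member spellings are assumptions, not confirmed API.
var myStencilOpState = new StencilOpState
{
FailOp = StencilOp.Keep,      // stencil test failed: leave the stencil buffer alone
PassOp = StencilOp.Replace,   // stencil and depth both passed: write the reference value
DepthFailOp = StencilOp.Keep, // stencil passed but depth failed: leave it alone
CompareOp = CompareOp.Always, // every sample passes the stencil test
CompareMask = 0xFF,           // compare against all stencil bits
WriteMask = 0xFF,             // allow writes to all stencil bits
Reference = 1                 // the value used for comparison and for Replace
};
```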
Let's put it all together.
```cs
var myDepthStencilState = new DepthStencilState
{
DepthTestEnable = true,
DepthWriteEnable = true,
CompareOp = CompareOp.LessOrEqual,
DepthBoundsTestEnable = false,
StencilTestEnable = false
};
```
The above depth stencil state will write to and read from the depth buffer, letting through pixels that are at a lower or equal depth value to the current depth value.
Sorry for the relatively basic example state, but if you get to the point where you need something more complex you'll probably understand all this stuff pretty well anyway. Let's move on.

View File

@ -0,0 +1,23 @@
---
title: "Graphics Pipeline"
date: 2021-01-27T14:03:22-08:00
weight: 8
---
Graphics pipelines are where we put together every concept we have discussed so far, along with a few other details. A graphics pipeline sets up your graphics hardware so that it can process your rendering as quickly and efficiently as possible. It has a lot of state that we need to explain, so let's get started.
A graphics pipeline needs the following information in order to be created:
* Color Blend State
* Depth Stencil State
* Vertex Shader
* Fragment Shader
* Multisample State
* Pipeline Layout Creation Info
* Rasterizer State
* Primitive Type
* Vertex Input State
* Viewport State
* Render Pass
Whew, that's a lot! Let's break it down one thing at a time.
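Before we do, here is a purely hypothetical sketch of how these pieces might eventually fit together. The `GraphicsPipelineCreateInfo` type and every field name below are illustrative assumptions, not the actual creation call - we'll build up the real thing piece by piece in the following sections:
```cs
// Purely illustrative - all type and field names here are assumptions.
// The real creation parameters are covered one by one in the following sections.
var myPipelineCreateInfo = new GraphicsPipelineCreateInfo
{
ColorBlendState = myColorBlendState,
DepthStencilState = myDepthStencilState,
VertexShader = myVertexShader,
FragmentShader = myFragmentShader,
MultisampleState = myMultisampleState,
PipelineLayoutCreateInfo = myPipelineLayoutCreateInfo,
RasterizerState = myRasterizerState,
PrimitiveType = PrimitiveType.TriangleList,
VertexInputState = myVertexInputState,
ViewportState = myViewportState,
RenderPass = myRenderPass
};
var myGraphicsPipeline = new GraphicsPipeline(GraphicsDevice, myPipelineCreateInfo);
```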

View File

@ -1,6 +1,55 @@
---
title: "Render Pass"
date: 2021-01-26T17:55:09-08:00
weight: 6
---
Render passes are a sort of intermediary between graphics pipelines and framebuffers: they describe the kinds of render targets that will be used in a particular section of rendering. This is probably a little hard to understand because we haven't talked about graphics pipelines or framebuffers yet, but I might as well introduce you to them now.
Render passes describe the kinds of targets that will be included in framebuffers so that the pipeline can set itself up correctly. They do this using `ColorTargetDescription`s and an optional `DepthStencilTargetDescription`.
```cs
var myColorTargetDescription = new ColorTargetDescription
{
Format = TextureFormat.R8G8B8A8,
MultisampleCount = SampleCount.One,
LoadOp = LoadOp.Clear,
StoreOp = StoreOp.Store
};
var myRenderPass = new RenderPass(
GraphicsDevice,
myColorTargetDescription
);
```
With this definition we are saying that any framebuffer that will be used with this render pass will have exactly one render target in the RGBA8 format with no multisampling.
`StoreOp` tells the renderer what to do with the render target when the render pass finishes. `Store` means that the pixel data on the render target will be saved for later use. `DontCare` means the contents may be left in an undefined state - the driver will do whatever it thinks is fastest, and you have no guarantees about what the target will contain afterward.
`LoadOp` tells the renderer what to do with the render target when the render pass starts. `Load` will load the pixel data as it was previously stored. `Clear` will clear the render target to a single color that you provide when you begin the render pass. `DontCare` behaves the same as above.
So in this example, we are saying that at the beginning of the render pass the render target will be cleared, and at the end of the render pass the render target will be stored for later use.
There is also a special type of description that you must use for depth/stencil targets.
```cs
var myDepthStencilTargetDescription = new DepthStencilTargetDescription
{
Format = TextureFormat.D32S8,
LoadOp = LoadOp.DontCare,
StoreOp = StoreOp.DontCare,
StencilLoadOp = LoadOp.DontCare,
StencilStoreOp = StoreOp.DontCare
};
var myRenderPass = new RenderPass(
GraphicsDevice,
myDepthStencilTargetDescription,
myColorTargetDescription
);
```
`StencilLoadOp` and `StencilStoreOp` allow you to specify different ops for the stencil buffer in particular. `LoadOp` and `StoreOp` here refer to the depth buffer. If your format does not include a stencil component, `StencilLoadOp` and `StencilStoreOp` will be ignored.
That's it for render passes for now. Let's talk about framebuffers so this all starts making more sense.

View File

@ -6,15 +6,21 @@ weight: 5
When you write rendering code, you might intuitively think we are just drawing to the screen, but you would be wrong! In MoonWorks, we are always drawing to an object called a render target. You can think of a render target like a canvas. It can be whatever size we want and whatever color format, and we can composite them together to get our final render.
To create a render target, we need a texture slice. We can also optionally provide a multisample count.
Multisampling is a technique which can produce higher-quality images at a processing cost. You can [read more about multisampling elsewhere](https://www.khronos.org/opengl/wiki/Multisampling) if you're interested. If you don't want multisampling, simply use `SampleCount.One` or omit the sample count argument.
```cs
var myMultisampleRenderTarget = new RenderTarget(
GraphicsDevice,
myTextureSlice,
SampleCount.Four
);

var myRegularRenderTarget = new RenderTarget(
GraphicsDevice,
myTextureSlice
);
```
When we use a texture slice to create a render target, we say that the render target is "backed" by the texture slice. This code will create a render target that is backed by `myTextureSlice`. In order to use a texture as a backing texture, you need to make sure you create it with the `TextureUsageFlags.RenderTarget` flag.
@ -58,4 +64,22 @@ var depthBuffer = new RenderTarget(
);
```
This sets up a 1280x720 32-bit depth buffer. Guess what - we have a shortcut for that too!
```cs
var depthBuffer = RenderTarget.CreateDepthBuffer(
GraphicsDevice,
1280,
720
);
```
One last thing to note - depth buffers can also contain a "stencil buffer". Stencil buffers provide a much more nuanced way to cull pixels from rendering in particular cases. You should [read more about that](https://www.khronos.org/opengl/wiki/Stencil_Test) if you're interested.
```cs
var depthStencilBuffer = RenderTarget.CreateDepthStencilBuffer(
GraphicsDevice,
1280,
720
);
```

View File

@ -29,13 +29,13 @@ var mySamplerState = new SamplerState
var mySampler = new Sampler(GraphicsDevice, mySamplerState);
```
There are three kinds of sample filters: Nearest, Linear, and Cubic. Nearest is the cheapest and doesn't do any kind of filtering. This filter is ideal for upscaling pixel games - Linear or Cubic will make your pixel art look terrible. Cubic is the most expensive filter but upscales and downscales in decent quality.
`MinFilter` describes which filter to use when the result image is smaller than the sampled image - `MagFilter`, likewise, is used when the result image is larger.
[Anisotropic filtering](https://en.wikipedia.org/wiki/Anisotropic_filtering) can be used in conjunction with mipmaps to reduce aliasing and blur on obliquely-rendered textures. Explaining this is beyond the scope of this tutorial but I encourage you to look into the technique if you are planning to make 3D games.
When you look up pixels using a sampler, you generally look them up using coordinates in the range of [0, 1]. This coordinate space is often referred to as "texture coordinate space" or "UV space". `SamplerAddressMode` refers to the behavior that will occur if the texture is sampled *outside* of [0, 1] coordinates. `ClampToEdge` will just take the pixel at the nearest edge. `Repeat` will loop the image, and `MirroredRepeat` will mirror the image. `ClampToBorder` will return a predefined border color.
U-coordinates are horizontal, V-coordinates are vertical, and W-coordinates correspond to depth. You can use different address modes for each of these directions.
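To round this out, here is a hedged sketch of a sampler that repeats horizontally but clamps vertically. The `AddressModeU`/`AddressModeV`/`AddressModeW` field names and the `Filter` enum are assumptions based on the naming above, not confirmed API:
```cs
// Hedged sketch: tile along U, clamp along V and W. The field names and the
// Filter/SamplerAddressMode member spellings are assumptions, not confirmed API.
var myTilingSamplerState = new SamplerState
{
MinFilter = Filter.Linear,
MagFilter = Filter.Linear,
AddressModeU = SamplerAddressMode.Repeat,      // horizontal: loop the image
AddressModeV = SamplerAddressMode.ClampToEdge, // vertical: take the nearest edge pixel
AddressModeW = SamplerAddressMode.ClampToEdge  // depth: only matters for 3D textures
};
var myTilingSampler = new Sampler(GraphicsDevice, myTilingSamplerState);
```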