Problems with attribute binding and shaders - C#

Currently I'm upgrading our OpenGL engine to a new shading and attribute system that is more dynamic and less static in usage and programming.
For this I'm replacing the old VertexBuffer class with a new BufferRenderer, to which multiple DataBuffer objects (RenderDataBuffer, RenderIndexBuffer) are assigned; these hold my rendering data. The new system allows instancing with glDrawElementsInstanced as well as static rendering with glDrawElements.
The Problem
It looks like one attribute corrupts an existing position attribute and leads to unexpected results. I tested this with different settings.
Test setup
This code sets up the test data:
_Shader = new Shader(ShaderSource.FromFile("InstancingShader.xml"));
_VertexBuffer = new BufferRenderer();
RenderDataBuffer positionBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.STATIC_DRAW,
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Three, AttributeBindingType.Position));
// Set the position data of the quad
positionBuffer.BufferData(new[] { new Vector3(0, 0, 0), new Vector3(0, 0, 1), new Vector3(1, 0, 0), new Vector3(1, 0, 1) });
RenderDataBuffer instanceBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.DYNAMIC_DRAW,
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Instancing),
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Color));
// Buffer the instance data
instanceBuffer.BufferData<InstanceTestData>(new[] {
new InstanceTestData() { Color = Colors.Red, PRS = new Color(0.1f, 1f, 0.5f, 1) },
new InstanceTestData() { Color = Colors.Blue, PRS = new Color(1f, 1f, 0.5f, 1) },
new InstanceTestData() { Color = Colors.Green, PRS = new Color(0.1f, 1f, 1f, 1) },
new InstanceTestData() { Color = Colors.Yellow, PRS = new Color(1f, 1f, 1f, 1) }
});
// Set up an index buffer for indexed rendering
RenderIndexBuffer indiciesBuffer = new RenderIndexBuffer(type: IndiciesType.UNSIGNED_BYTE);
indiciesBuffer.BufferData(new Byte[] { 2, 1, 0, 1, 2, 3 });
// Register the buffers ( second parameter is used for glVertexAttribDivisor )
_VertexBuffer.AddBuffer(positionBuffer);
_VertexBuffer.AddBuffer(instanceBuffer, 1);
_VertexBuffer.IndexBuffer = indiciesBuffer;
The vertex shader (the pixel shader just outputs the color):
uniform mat4 uModelViewProjection;
varying vec4 vColor;
attribute vec3 aPosition; // POSITION0
attribute vec4 aColor; // COLOR 0
attribute vec4 aInstancePosition; // INSTANCING0
void main()
{
gl_Position = uModelViewProjection * vec4(vec2((aPosition.x * 20) + (gl_InstanceID * 20), aPosition.z * 20), -3, 1);
vColor = aColor;
}
Rendering (pseudocode to simplify reading; not final, for all the performance guys out there). A concrete OpenTK sketch of the same sequence follows below the pseudocode:
glUseProgram
foreach (parameter in shader_parameters)
    glUniformX
foreach (buffer in render_buffers)
    glBindBuffer
    foreach (declaration in buffer.vertex_declarations)
        if (shader.Exists(declaration)) // Check if declaration exists in shader
            glEnableVertexAttribArray(shader.attributeLocation(declaration))
            glVertexAttribPointer
            if (instanceDivisor != null)
                glVertexAttribDivisor
glBindBuffer(index_buffer)
glDrawElementsInstanced
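For reference, a minimal sketch of the same sequence with raw OpenTK (OpenGL 4) calls; the buffer handles, attribute locations and counts are placeholders rather than the engine's real state:
// programHandle, the buffer handles and the *Location variables are placeholders.
GL.UseProgram(programHandle);
GL.UniformMatrix4(mvpLocation, false, ref modelViewProjection);
// Per-vertex data (divisor 0 is the default)
GL.BindBuffer(BufferTarget.ArrayBuffer, positionBufferHandle);
GL.EnableVertexAttribArray(positionLocation);
GL.VertexAttribPointer(positionLocation, 3, VertexAttribPointerType.Float, false, 0, 0);
// Per-instance data: one Vector4 "PRS" and one Vector4 color, 32-byte stride,
// advanced once per instance because of the divisor of 1
GL.BindBuffer(BufferTarget.ArrayBuffer, instanceBufferHandle);
GL.EnableVertexAttribArray(instancePositionLocation);
GL.VertexAttribPointer(instancePositionLocation, 4, VertexAttribPointerType.Float, false, 32, 0);
GL.VertexAttribDivisor(instancePositionLocation, 1);
GL.EnableVertexAttribArray(colorLocation);
GL.VertexAttribPointer(colorLocation, 4, VertexAttribPointerType.Float, false, 32, 16);
GL.VertexAttribDivisor(colorLocation, 1);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBufferHandle);
GL.DrawElementsInstanced(PrimitiveType.Triangles, 6, DrawElementsType.UnsignedByte, IntPtr.Zero, 4);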
Shader attribute binding
The shader attribute binding is done at initialization and looks like this:
_VertexAttributes = source.Attributes.ToArray();
for (uint i = 0; i < _VertexAttributes.Length; i++)
{
ShaderAttribute attribute = _VertexAttributes[i];
GLShaders.BindAttribLocation(_ShaderHandle, i, attribute.Name);
}
So there should be no attribute aliasing in the shader; each attribute gets a unique index (matrices are not implemented yet, I know they require more than one index, but I'm also not using them as vertex attributes right now). As mentioned in the comment, I filter the attributes after linking the shader, so no location is bound for an attribute that does not exist.
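The filtering mentioned above boils down to a check against glGetAttribLocation after linking; a small sketch with raw OpenTK calls (not the engine's actual wrapper):
// After linking, drop every attribute the driver optimized away
// (glGetAttribLocation returns -1 for those), so later binding code only
// touches attributes that actually exist in the program.
GL.LinkProgram(_ShaderHandle);
_VertexAttributes = _VertexAttributes
    .Where(a => GL.GetAttribLocation(_ShaderHandle, a.Name) != -1)
    .ToArray();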
This is the code for the attribute binding:
Bind();
Int32 offset = 0;
for (UInt32 i = 0; i < _Declarations.Length; i++)
{
    VertexDeclaration data = _Declarations[i];
    ShaderAttributeLocation location;
    if (shader.GetAttributeLocation(data.Binding, out location))
    {
        GLVertexBuffer.EnableVertexAttribArray(location);
        GLVertexBuffer.VertexAttribPointer(location, (AttributeSize)data.Size, (AttributeType)data.Type, data.Normalized, _StrideSize, new IntPtr(offset));
        if (instanceDivisor != null)
            GLVertexBuffer.VertexAttribDivisor(location, instanceDivisor.Value);
    }
    offset += data.ComponentSize;
}
Test results
The results are as seen here:
Now, if I swap the binding on the code side (AttributeBindingType.Color <-> AttributeBindingType.Instancing), it looks like this:
If I now change vColor = aColor; to vColor = aInstancePosition;, the result is simple: instead of multiple small colored quads I get one big fullscreen quad which is red. The locations of the attributes are all distinct from each other, so technically the values should be correct, but they appear not to be. Using both attributes in the shader doesn't solve the problem either.
I'm searching for an idea or a solution to this problem.
Tracking the problem down
I've started to track it down further; with this complex code it cost me hours, but I found something: the shader I'm using only works when I leave out attribute index 0 when calling BindAttribLocation. In other words, this is a workaround that only works for this specific shader:
foreach (attribute in vertexAttributes)
{
    if (shader == problemShader)
        // i is index of the attribute
        glBindAttribLocation(_ShaderHandle, i + 1, attribute.Name);
    // All other shaders
    else
        glBindAttribLocation(_ShaderHandle, i, attribute.Name);
}
I guess it has something to do either with instancing or with the multiple VBOs I'm using for instancing; these are the only differences from the normal shaders. The normal ones also only work when I start the attribute location index at 0; they stop working when starting at 1.

I found the solution to the problem, and it is really simple:
After rendering with instancing, I need to call glVertexAttribDivisor(location, 0); on every attribute that had the divisor enabled before.
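For illustration, a minimal sketch of that reset with raw OpenTK calls; the locations are placeholders for whichever attributes had a divisor set:
// Instanced draw as before ...
GL.DrawElementsInstanced(PrimitiveType.Triangles, 6, DrawElementsType.UnsignedByte, IntPtr.Zero, 4);
// ... then reset the divisors so later non-instanced draws that reuse the same
// attribute locations read per-vertex data again.
GL.VertexAttribDivisor(instancePositionLocation, 0);
GL.VertexAttribDivisor(colorLocation, 0);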

Related

Cut faraway objects based on depth map

I would like to do a GrabCut that uses a depth map to cut away far objects, for use in a mixed reality application. I would like to show just the foreground of what I see and render the background as a virtual reality scene.
The problem: right now I have tried to adapt some code, and what I get is the cut-out foreground but in black, i.e. actually the mask.
I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    int width = mask.rows();
    int height = mask.cols();
    byte[] buffer = new byte[width * height];
    mask.get(0, 0, buffer);
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int value = buffer[y * width + x];
            if (value == Imgproc.GC_BGD)
            {
                buffer[y * width + x] = 0; // for sure background
            }
            else if (value == Imgproc.GC_PR_BGD)
            {
                buffer[y * width + x] = 85; // probably background
            }
            else if (value == Imgproc.GC_PR_FGD)
            {
                buffer[y * width + x] = (byte)170; // probably foreground
            }
            else
            {
                buffer[y * width + x] = (byte)255; // for sure foreground
            }
        }
    }
    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));
depth.copyTo(maskFar);
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);
Mat bgModel = new Mat();
Mat fgModel = new Mat();
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);
convertToGrayScaleValues(maskFar); // back to grayscale values
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);
Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, graph cut on the depth image might not be the right method to solve all of your issues.
If you insist that the processing should be done on the depth image, in order to find everything that is not on the table and filter out the table part, you may first apply a disparity-based approach to find the objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the locally connected components that are grouped together. You may follow this link, how to do this disparity map in OpenCV, which asks about a similar way to find the objects that are not on the ground.
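To make the connected-component step concrete, here is a rough OpenCVForUnity sketch; the minimum blob area is a placeholder value and maskFar is assumed to be the already-thresholded binary depth mask from the question:
// Label connected blobs in the binary mask and keep only reasonably large ones.
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int count = Imgproc.connectedComponentsWithStats(maskFar, labels, stats, centroids);
for (int i = 1; i < count; i++) // label 0 is the background
{
    double area = stats.get(i, Imgproc.CC_STAT_AREA)[0];
    if (area < 500) // placeholder: drop small speckles
        continue;
    // blob i is a candidate object that is not part of the table/ground plane
}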
If you are OK with RGB-based approaches, then using a deep-learning-based method to recognize the monitor should be the correct approach. It can directly detect the monitor bounding box. By applying this bounding box to the depth image, you may get what you want. For deep-learning-based approaches there are many available packages, such as the YOLO series; you may find one that is suitable for you. Reference: https://medium.com/#dvshah13/project-image-recognition-1d316d04cb4c

OpenGL 4.2 LookAt matrix only works with -z value for eye position

I am trying to understand and apply modern OpenGL matrix transformations. I have already read a lot of different sources, but I am not sure what I am actually doing wrong.
The issue I have is also commented in the code: if I set the eye coordinates of the Matrix4.LookAt to a z value that is greater than or equal to 0, or lower than -2, the triangle is not visible anymore.
Can someone explain why? As far as I understood the method, the triangle should just be visible from the other side (explicitly disabling face culling does not change anything).
Another strange thing: if I rotate the triangle it seems to get clipped when I use eye z = -2; if I use -1 it looks "smoother". Any ideas?
Here is the complete program:
using System;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL4;
namespace OGL420_Matrices
{
// OpenTK version 3.1.0
internal class Program
{
public static void Main(string[] args)
{
var program = new Program();
program.Run();
}
private GameWindow _gameWindow;
private Matrix4 _projectionMatrix;
private Matrix4 _viewMatrix;
private Matrix4 _viewProjectionMatrix;
private Matrix4 _modelMatrix;
private int _vbaId, _programId, _viewProjectionUniformId, _modelMatrixUniformId;
private void Run()
{
// 4, 2 is OpenGL 4.2
using (_gameWindow = new GameWindow(800, 600, GraphicsMode.Default, "", GameWindowFlags.Default,
DisplayDevice.Default, 4, 2, GraphicsContextFlags.Default))
{
_gameWindow.Load += OnLoad;
_gameWindow.Resize += OnResize;
_gameWindow.RenderFrame += OnRenderFrame;
_gameWindow.Run();
}
}
private void OnResize(object sender, EventArgs e)
{
var clientArea = _gameWindow.ClientRectangle;
GL.Viewport(0, 0, clientArea.Width, clientArea.Height);
}
private void OnLoad(object sender, EventArgs e)
{
_projectionMatrix = Matrix4.CreateOrthographic(3, 3, 0.001f, 50);
// change -1 to -2.1f you dont see anything
// change -1 to -2f you still see the same
// change -1 to >= 0 you dont see anything; of course 0 doesn't make sense but 1 would
_viewMatrix = Matrix4.LookAt(
new Vector3(0, 0, -1f), // eye
new Vector3(0, 0, 0), // target
new Vector3(0, 1, 0)); // up
_modelMatrix = Matrix4.Identity;
var data = new float[]
{
0, 0, 0,
1, 0, 0,
0, 1, 0
};
var vboId = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, vboId);
GL.BufferData(BufferTarget.ArrayBuffer, data.Length * sizeof(float), data, BufferUsageHint.StaticDraw);
_vbaId = GL.GenVertexArray();
GL.BindVertexArray(_vbaId);
GL.BindBuffer(BufferTarget.ArrayBuffer, vboId);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, 0);
var vertexShaderId = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vertexShaderId, @"#version 420
layout(location = 0) in vec3 position;
uniform mat4 viewProjection;
uniform mat4 model;
out vec3 outColor;
void main()
{
gl_Position = viewProjection * model * vec4(position, 1);
outColor = vec3(1,1,1);
}");
GL.CompileShader(vertexShaderId);
GL.GetShader(vertexShaderId, ShaderParameter.CompileStatus, out var result);
if (result != 1)
throw new Exception("compilation error: " + GL.GetShaderInfoLog(vertexShaderId));
var fragShaderId = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fragShaderId, @"#version 420
in vec3 outColor;
out vec4 fragmentColor;
void main()
{
fragmentColor = vec4(outColor, 1);
}");
GL.CompileShader(fragShaderId);
GL.GetShader(fragShaderId, ShaderParameter.CompileStatus, out result);
if (result != 1)
throw new Exception("compilation error: " + GL.GetShaderInfoLog(fragShaderId));
_programId = GL.CreateProgram();
GL.AttachShader(_programId, vertexShaderId);
GL.AttachShader(_programId, fragShaderId);
GL.LinkProgram(_programId);
GL.GetProgram(_programId, GetProgramParameterName.LinkStatus, out var linkStatus);
if (linkStatus != 1) // 1 for true
throw new Exception("Shader program compilation error: " + GL.GetProgramInfoLog(_programId));
GL.DeleteShader(vertexShaderId);
GL.DeleteShader(fragShaderId);
_viewProjectionUniformId = GL.GetUniformLocation(_programId, "viewProjection");
_modelMatrixUniformId = GL.GetUniformLocation(_programId, "model");
}
private void OnRenderFrame(object sender, FrameEventArgs e)
{
GL.Clear(ClearBufferMask.ColorBufferBit);
_viewProjectionMatrix = _projectionMatrix * _viewMatrix;
GL.UniformMatrix4(_viewProjectionUniformId, false, ref _viewProjectionMatrix);
GL.UniformMatrix4(_modelMatrixUniformId, false, ref _modelMatrix);
GL.UseProgram(_programId);
GL.BindVertexArray(_vbaId);
GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
_gameWindow.SwapBuffers();
}
}
}
First I'll quote a comment on the OpenTK issue Problem with matrices #687:
Because of how matrices are treated in C# and OpenTK, multiplication order is inverted from what you might expect in C/C++ and GLSL. This is an old artefact in the library, and it's too late to change now, unfortunately.
Compared to GLSL, where column-major matrices have to be multiplied from right to left (the rightmost matrix is the one applied "first"), in OpenTK the matrices have to be multiplied from left to right.
This means: if you want to calculate the viewProjectionMatrix, which applies the view transformation followed by the projection, then in GLSL it is (for column-major matrices):
mat4 viewProjectionMatrix = projectionMatrix * viewMatrix;
If you want to do the same in OpenTK, using Matrix4, then you have to do:
_viewProjectionMatrix = _viewMatrix * _projectionMatrix;

WPF applying geometry transform

Why doesn't this geometry transform work?
private void DrawElement(DrawingVisual dv)
{
    using (var dvc = dv.RenderOpen())
    {
        GeometryGroup gg = new GeometryGroup();
        var pen = new Pen(Brushes.Black, 2);
        TranslateTransform tt = new TranslateTransform(0, (!IsInverted ? -1 : 1) * _pixelsPerMillimeter);
        TransformGroup tg = new TransformGroup();
        Point lu, lb, ru, rb;
        lu = new Point(rct_center.Left, (!IsMaxillar ? rct_center.Bottom - _pixelsPerMillimeter : _pixelsPerMillimeter + rct_center.Top));
        lb = new Point(lu.X, lu.Y + (!IsInverted ? -1 : 1) * rct_center.Height / 2);
        ru = new Point(rct_center.Right - _pixelsPerMillimeter, lu.Y);
        rb = new Point(ru.X, lb.Y);
        LineGeometry upperLeft = new LineGeometry(lu, new Point(lu.X + _pixelsPerMillimeter, lu.Y));
        LineGeometry bottomLeft = new LineGeometry(lb, new Point(lb.X + _pixelsPerMillimeter, lb.Y));
        LineGeometry upperRight = new LineGeometry(ru, new Point(ru.X + _pixelsPerMillimeter, ru.Y));
        LineGeometry bottomRight = new LineGeometry(rb, new Point(rb.X + _pixelsPerMillimeter, rb.Y));
        gg.Children.Add(upperLeft);
        gg.Children.Add(bottomLeft);
        gg.Children.Add(upperRight);
        gg.Children.Add(bottomRight);
        for (int j = 30 - 1; j >= 0; --j) // -1 because the start line isn't drawn.
        {
            dvc.DrawGeometry(Brushes.Transparent, pen, gg);
            tg.Children.Add(tt);
            gg.Transform = tg;
        }
    }
}
I want to repeatedly draw the group of 4 lines below the previous one, or above it if IsInverted.
When I debug, the TransformGroup matrix is modified, but the transformation isn't applied to the geometry group.
I'm a newbie in WPF, coming from WinForms, and I'm a little bit lost without GDI's GraphicsPath.
Is there some better way to do it? Maybe I'm thinking too much in the GDI way of doing it.
What you want to do here is use the rendering context's (dvc's) Push method instead of relying on the Transform of the geometry object itself. When using the drawing contexts, WPF relies on Push/Pop operations for setting the transform. You can see a related answer here: How do you apply a Scale Translation to a DrawingContext?
In addition, I think you need to study how C# handles reference types. The way you have it in your sample code, you are assigning the same instance of the transform to each geometry object, which I don't think was intended. I'm not sure if the Push method uses the context during the call or after; you'd have to experiment with that. Creating a new TranslateTransform inside your inner loop would be a sure way to avoid that. However, I agree with the commentators in that using some offset property would be more performant.
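A minimal sketch of the Push/Pop approach, reusing the gg geometry group, pen and _pixelsPerMillimeter from the question (the offset step and the copy count are placeholders):
using (var dvc = dv.RenderOpen())
{
    double step = (!IsInverted ? -1 : 1) * _pixelsPerMillimeter;
    for (int j = 0; j < 30; j++)
    {
        // Push a fresh transform for this copy, draw the group, then pop it again.
        dvc.PushTransform(new TranslateTransform(0, j * step));
        dvc.DrawGeometry(Brushes.Transparent, pen, gg);
        dvc.Pop();
    }
}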

Cannot draw primitives with SharpDX on Windows 10 Universal App (but still able to clear the background)

Sorry for my bad English. I am trying to write a very simple DirectX 11 app for Windows 10 with SharpDX, which draws a triangle in the middle of the window. The problem is that the triangle is not displayed, while I can still change the background color (using ClearRenderTargetView). I verified that the render function is called periodically and that my triangle is front-facing (clockwise). What I have tried:
Disable back-face culling
Set static width and height
Try other primitives (line, triangle strip)
Change vertex shader input from float3 to float4 and vice-versa
I found that this post has very similar symptoms, but it still doesn't work!
I have posted my code on GitHub at: https://github.com/minhcly/UWP3DTest
Here is my initialization code (where I think the problem resides):
D3D11.Device2 device;
D3D11.DeviceContext deviceContext;
DXGI.SwapChain2 swapChain;
D3D11.Texture2D backBufferTexture;
D3D11.RenderTargetView backBufferView;
private void InitializeD3D()
{
using (D3D11.Device defaultDevice = new D3D11.Device(D3D.DriverType.Hardware, D3D11.DeviceCreationFlags.Debug))
this.device = defaultDevice.QueryInterface<D3D11.Device2>();
this.deviceContext = this.device.ImmediateContext2;
DXGI.SwapChainDescription1 swapChainDescription = new DXGI.SwapChainDescription1()
{
AlphaMode = DXGI.AlphaMode.Ignore,
BufferCount = 2,
Format = DXGI.Format.R8G8B8A8_UNorm,
Height = (int)(this.swapChainPanel.RenderSize.Height),
Width = (int)(this.swapChainPanel.RenderSize.Width),
SampleDescription = new DXGI.SampleDescription(1, 0),
Scaling = SharpDX.DXGI.Scaling.Stretch,
Stereo = false,
SwapEffect = DXGI.SwapEffect.FlipSequential,
Usage = DXGI.Usage.RenderTargetOutput
};
using (DXGI.Device3 dxgiDevice3 = this.device.QueryInterface<DXGI.Device3>())
using (DXGI.Factory3 dxgiFactory3 = dxgiDevice3.Adapter.GetParent<DXGI.Factory3>())
{
DXGI.SwapChain1 swapChain1 = new DXGI.SwapChain1(dxgiFactory3, this.device, ref swapChainDescription);
this.swapChain = swapChain1.QueryInterface<DXGI.SwapChain2>();
}
using (DXGI.ISwapChainPanelNative nativeObject = ComObject.As<DXGI.ISwapChainPanelNative>(this.swapChainPanel))
nativeObject.SwapChain = this.swapChain;
this.backBufferTexture = this.swapChain.GetBackBuffer<D3D11.Texture2D>(0);
this.backBufferView = new D3D11.RenderTargetView(this.device, this.backBufferTexture);
this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);
deviceContext.Rasterizer.State = new D3D11.RasterizerState(device, new D3D11.RasterizerStateDescription()
{
CullMode = D3D11.CullMode.None,
FillMode = D3D11.FillMode.Solid,
IsMultisampleEnabled = true
});
deviceContext.Rasterizer.SetViewport(0, 0, (int)swapChainPanel.Width, (int)swapChainPanel.Height);
CompositionTarget.Rendering += CompositionTarget_Rendering;
Application.Current.Suspending += Current_Suspending;
isDXInitialized = true;
}
InitScene() function:
D3D11.Buffer triangleVertBuffer;
D3D11.VertexShader vs;
D3D11.PixelShader ps;
D3D11.InputLayout vertLayout;
RawVector3[] verts;
private void InitScene()
{
D3D11.InputElement[] inputElements = new D3D11.InputElement[]
{
new D3D11.InputElement("POSITION", 0, DXGI.Format.R32G32B32_Float, 0)
};
using (CompilationResult vsResult = ShaderBytecode.CompileFromFile("vs.hlsl", "main", "vs_4_0"))
{
vs = new D3D11.VertexShader(device, vsResult.Bytecode.Data);
vertLayout = new D3D11.InputLayout(device, vsResult.Bytecode, inputElements);
}
using (CompilationResult psResult = ShaderBytecode.CompileFromFile("ps.hlsl", "main", "ps_4_0"))
ps = new D3D11.PixelShader(device, psResult.Bytecode.Data);
deviceContext.VertexShader.Set(vs);
deviceContext.PixelShader.Set(ps);
verts = new RawVector3[] {
new RawVector3( 0.0f, 0.5f, 0.5f ),
new RawVector3( 0.5f, -0.5f, 0.5f ),
new RawVector3( -0.5f, -0.5f, 0.5f )
};
triangleVertBuffer = D3D11.Buffer.Create(device, D3D11.BindFlags.VertexBuffer, verts);
deviceContext.InputAssembler.InputLayout = vertLayout;
deviceContext.InputAssembler.PrimitiveTopology = D3D.PrimitiveTopology.TriangleList;
}
Render function:
private void RenderScene()
{
this.deviceContext.ClearRenderTargetView(this.backBufferView, new RawColor4(red, green, blue, 0));
deviceContext.InputAssembler.SetVertexBuffers(0,
new D3D11.VertexBufferBinding(triangleVertBuffer, Utilities.SizeOf<RawVector3>(), 0));
deviceContext.Draw(verts.Length, 0);
this.swapChain.Present(0, DXGI.PresentFlags.None);
}
Thank you for your help.
I have solved the problem. I used the Graphics Debugger in Visual Studio and found that the OutputMerger had no RenderTarget. So I moved the line
this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);
into the RenderScene() function and it works. However, I can't understand why I must reset this every frame. I'm new to Direct3D, so if anyone has an answer, please comment. Thank you.
P/S: I have committed the working project to GitHub for anyone who encounters the same problem as me.
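For reference, this is roughly what RenderScene() looks like with the render target re-bound at the start of every frame:
private void RenderScene()
{
    // Bind the back buffer as the render target again before drawing;
    // moving this call here from InitializeD3D() is what made the triangle appear.
    this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);
    this.deviceContext.ClearRenderTargetView(this.backBufferView, new RawColor4(red, green, blue, 0));
    deviceContext.InputAssembler.SetVertexBuffers(0,
        new D3D11.VertexBufferBinding(triangleVertBuffer, Utilities.SizeOf<RawVector3>(), 0));
    deviceContext.Draw(verts.Length, 0);
    this.swapChain.Present(0, DXGI.PresentFlags.None);
}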

Dynamic Vertexbuffer in SharpDX

I am having massive problems figuring out how to set up a dynamic VertexBuffer and IndexBuffer using SharpDX.
I have to generate triangles wherever the user presses on the screen.
I think I have to set up a transformation function that converts my screen coordinates to projection coordinates.
But I never even get that far...
I want to set up a buffer with space for 10000 vertices.
layout = new InputLayout(d3dDevice, vertexShaderByteCode, new[]
{
new SharpDX.Direct3D11.InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0),
new SharpDX.Direct3D11.InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)
});
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
vertexBufferBinding = new VertexBufferBinding(vb, Utilities.SizeOf<Vector4>() * 2, 0);
I want to update that buffer every time I have to add new triangles, using:
d3dDevice.ImmediateContext.UpdateSubresource(updateVB, vb);
updateVB holds the new triangles to be added.
Rendering works the following way:
// Prepare matrices
var view = Matrix.LookAtLH(new Vector3(0, 0, -5), new Vector3(0, 0, 0), Vector3.UnitY);
var proj = Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, width / (float)height, 0.1f, 100.0f);
var viewProj = Matrix.Multiply(view, proj);
// Set targets (This is mandatory in the loop)
d3dContext.OutputMerger.SetTargets(render.DepthStencilView, render.RenderTargetView);
// Clear the views
d3dContext.ClearDepthStencilView(render.DepthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
d3dContext.ClearRenderTargetView(render.RenderTargetView, Colors.Black);
// Calculate WorldViewProj
var worldViewProj = Matrix.Scaling(1f) * viewProj;
worldViewProj.Transpose();
// Setup the pipeline
d3dContext.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
d3dContext.InputAssembler.InputLayout = layout;
d3dContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
d3dContext.VertexShader.Set(vertexShader);
d3dContext.PixelShader.Set(pixelShader);
d3dContext.Draw(vertexCount, 0);
I am new to DirectX, and the DirectX 9 tutorials on the web don't help me much with DirectX 11.1.
Thanks
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
is wrong, since you want 10000 vertices but allocate only 10000 bytes. According to your input layout, the size should be:
10000 * sizeof(Vector4) * 2
Also, to write into your buffer, you should look at context.MapSubresource instead.
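A rough sketch of both points, assuming one vertex is two Vector4 values (position + color) as in the input layout above; d3dDevice and newVertices are placeholders:
// Allocate room for 10000 vertices of 2 * Vector4 (32 bytes each).
int vertexSizeInBytes = Utilities.SizeOf<Vector4>() * 2;
var desc = new BufferDescription(
    10000 * vertexSizeInBytes,
    ResourceUsage.Dynamic,
    BindFlags.VertexBuffer,
    CpuAccessFlags.Write,
    ResourceOptionFlags.None,
    0);
vb = new SharpDX.Direct3D11.Buffer(d3dDevice, desc);

// Write new vertices with Map/Unmap instead of UpdateSubresource
// (UpdateSubresource is not meant for ResourceUsage.Dynamic resources).
DataStream stream;
d3dDevice.ImmediateContext.MapSubresource(vb, 0, MapMode.WriteDiscard, MapFlags.None, out stream);
stream.WriteRange(newVertices); // Vector4[] holding position/color pairs
d3dDevice.ImmediateContext.UnmapSubresource(vb, 0);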
