I have massive problems figuring out how to set up a dynamic VertexBuffer and IndexBuffer using SharpDX.
I have to generate triangles wherever the user presses on the screen.
I think I have to set up a transformation function that converts my screen coordinates to projection coordinates.
But I never even get that far...
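Something like this mapping from pixels to normalized device coordinates is what I have in mind (just a sketch; mouseX/mouseY are the tap position, width/height as in the render code below):
// Map a tap position (origin top-left) to normalized device coordinates (-1..1).
float ndcX = (2.0f * mouseX / width) - 1.0f;  // 0..width  -> -1..+1
float ndcY = 1.0f - (2.0f * mouseY / height); // 0..height -> +1..-1 (y is flipped)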
I want to set up a buffer with space for 10000 vertices.
layout = new InputLayout(d3dDevice, vertexShaderByteCode, new[]
{
new SharpDX.Direct3D11.InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0),
new SharpDX.Direct3D11.InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)
});
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
vertexBufferBinding = new VertexBufferBinding(vb, Utilities.SizeOf<Vector4>() * 2, 0);
I want to update that buffer every time I have to add new triangles, using:
d3dDevice.ImmediateContext.UpdateSubresource(updateVB, vb);
updateVB contains the new triangles to be added.
Rendering works as follows:
// Prepare matrices
var view = Matrix.LookAtLH(new Vector3(0, 0, -5), new Vector3(0, 0, 0), Vector3.UnitY);
var proj = Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, width / (float)height, 0.1f, 100.0f);
var viewProj = Matrix.Multiply(view, proj);
// Set targets (This is mandatory in the loop)
d3dContext.OutputMerger.SetTargets(render.DepthStencilView, render.RenderTargetView);
// Clear the views
d3dContext.ClearDepthStencilView(render.DepthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
d3dContext.ClearRenderTargetView(render.RenderTargetView, Colors.Black);
// Calculate WorldViewProj
var worldViewProj = Matrix.Scaling(1f) * viewProj;
worldViewProj.Transpose();
// Setup the pipeline
d3dContext.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
d3dContext.InputAssembler.InputLayout = layout;
d3dContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
d3dContext.VertexShader.Set(vertexShader);
d3dContext.PixelShader.Set(pixelShader);
d3dContext.Draw(vertexCount, 0);
I am new to DirectX, and the DirectX 9 tutorials on the web don't help me much with DirectX 11.1.
Thanks
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
is wrong, since you want 10000 vertices but allocate only 10000 bytes. According to your input layout, it should be:
10000 * sizeof(Vector4) * 2
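In SharpDX terms (a sketch reusing the variables from your snippet; the stride matches the two Vector4 elements of your VertexBufferBinding):
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000 * Utilities.SizeOf<Vector4>() * 2, ResourceUsage.Dynamic, CpuAccessFlags.Write);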
Also, to write into your buffer you should look at context.MapSubresource instead; UpdateSubresource cannot be used on resources created with ResourceUsage.Dynamic.
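A minimal sketch of that (assuming the DeviceContext overload that returns a DataStream; allVertices stands in for your accumulated triangle list):
DataStream ds;
d3dDevice.ImmediateContext.MapSubresource(vb, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out ds);
// WriteDiscard invalidates the old contents, so write the full vertex set again;
// MapMode.WriteNoOverwrite would let you append without touching data in flight.
ds.WriteRange(allVertices);
d3dDevice.ImmediateContext.UnmapSubresource(vb, 0);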
What is the way to set and get the position of an imported STL file? I'm looking for a solution to set the x, y, z position of an imported STL file, as is possible, for example, with a Joint.
Normally things are moved in Eyeshot by a transformation matrix. This matrix consists of a 3 x 3 rotation part, a 3 x 1 location part, and a bottom row for skew/stretch; all together this makes a 4 x 4 transformation matrix.
An imported STL actually contains lots of locations, but all you need to do is grab one of them. Below I have just grabbed the min point of the bounding box.
Then, to get to a place, create an identity transformation matrix to ensure the rotation and skew parts stay neutral, and insert your location into the location part of the matrix. The TransformBy function will now move every point of the STL to the new location.
To move between points you need the vector difference between the points.
Mesh myMesh = Mesh.CreateBox(10, 10, 10); // stand-in for the imported STL mesh
//Mesh myMesh = new Mesh();
Point3D getLocation = myMesh.BoxMin;            // current location: min point of the bounding box
Point3D setLocation = new Point3D(20, -10, 0);  // target location
Point3D moveVector = setLocation - getLocation; // vector difference between the points
Transformation goPlaces = new Transformation(1); // identity: rotation and skew stay neutral
goPlaces[0, 3] = moveVector.X; // insert the move vector into the location column
goPlaces[1, 3] = moveVector.Y;
goPlaces[2, 3] = moveVector.Z;
//Transformation goPlaces = new Transformation(
// new double[,]
// {
// { 1, 0, 0, 20 },
// { 0, 1, 0,-10 },
// { 0, 0, 1, 0 },
// { 0, 0, 0, 1 }
// }
//);
Transformation goBack = (Transformation)goPlaces.Clone();
goBack.Invert();              // the inverse transformation moves the mesh back again
myMesh.TransformBy(goPlaces); // moves every point of the STL to the new location
myMesh.TransformBy(goBack);   // and back again, to demonstrate the round trip
Cheers!
I'm trying to learn how to use OpenGL in a 2D application by using OpenTK, and have read that the built-in calls like glMatrixMode are not modern. I want to use a top-left origin and pixel coordinates in my shader inputs, and assumed I could define a matrix to do these translations.
I am trying to do this with my own matrix using the OpenTK matrix classes. However, I think I have made a mistake in setting up the projection matrix and want to verify what I should be doing:
TranslationMatrix = Matrix4.Identity * Matrix4.CreateScale(1, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
var TranslatedPoint = TranslationMatrix * new Vector4(new Vector3(1024, 768, 0), 1); // bounds = {0, 0, 1024, 768 }
This results in TranslatedPoint.Xyz == { 2, -2, 0 }. I thought that the x and y coordinates used in gl_Position in the vertex shader should range from -1 to 1.
I guess I've got a major misunderstanding somewhere, what should I be looking at?
OpenTK stores the matrices in transposed (row-major) form. This means you have to write everything in reversed order, with the vector on the left.
var TranslationMatrix = Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateScale(1, -1, 1);
var TranslatedPoint = new Vector4(1024, 768, 0, 1) * TranslationMatrix;
The result should now be [1, -1, 0, 1].
I'm trying to calculate the gradient magnitude and orientation of a grayscale image using OpenCvSharp. The problem is that the "Pow" function doesn't seem to be the right one for an IplImage.
I also want to know how I can calculate tan^-1 (or arctan) of featureImage.
Thank you
using (IplImage cvImage = new IplImage("grayImage.png", LoadMode.AnyDepth |
LoadMode.GrayScale))
using (IplImage dstXImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
using (IplImage dstYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
{
float[] data = { 0, -1, -1, 2 };
CvMat kernel = new CvMat(2, 2, MatrixType.F32C1, data);
Cv.Sobel(cvImage, dstXImage, 1, 0, ApertureSize.Size1);
Cv.Sobel(cvImage, dstYImage, 0, 1, ApertureSize.Size1);
Cv.Normalize(dstXImage, dstXImage, 1.0, 0, NormType.L1);
Cv.Filter2D(cvImage, dstXImage, kernel, new CvPoint(0, 0));
Cv.Normalize(dstYImage, dstYImage, 1.0, 0, NormType.L1);
Cv.Filter2D(cvImage, dstYImage, kernel, new CvPoint(0, 0));
// to calculate gradient magnitude: sqrt(dx^2 + dy^2)
dstXImage.Mul(dstXImage, dstXImage);
dstYImage.Mul(dstYImage, dstYImage);
IplImage dstXYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
dstXImage.Add(dstYImage, dstXYImage);
dstXYImage.Pow(dstXYImage, 1/2); // this line is not working, the output image is black (note: 1/2 is integer division, i.e. Pow by 0)
// to calculate gradient orientation, arctan(dy/dx)
IplImage thetaImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
dstYImage.Div(dstXImage, thetaImage); // afterwards I need help to calculate the arctan
using (new CvWindow("SrcImage", cvImage))
using (new CvWindow("DstXImage", dstXImage))
using (new CvWindow("DstYImage", dstYImage))
using (new CvWindow("DstXYImage", dstXYImage))
using (new CvWindow("thetaImage", thetaImage))
{
Cv.WaitKey(0);
}
You can use the "cartToPolar" function for your purpose.
This function calculates the magnitude and angle of 2D vectors.
magnitude(I) = sqrt(x(I)^2 + y(I)^2)
angle(I) = atan2(y(I), x(I)) [* 180 / pi, if the degrees flag is set]
For example:
IplImage dstXYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
IplImage thetaImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
Cv.CartToPolar(dstXImage, dstYImage, dstXYImage, thetaImage, true); // true = angle in degrees
Currently I'm upgrading our OpenGL engine to a new shading and attribute system which is more dynamic and not as static in usage and programming.
For this I'm replacing the old VertexBuffer class with a new BufferRenderer, to which multiple data buffer objects (RenderDataBuffer, RenderIndexBuffer) are assigned; these hold my rendering data. The new system allows instancing with glDrawElementsInstanced and also static rendering with glDrawElements.
The Problem
It looks like an attribute corrupts an existing position attribute and leads to unexpected results. I tested this with different settings.
Test setup
This code sets up the test data:
_Shader = new Shader(ShaderSource.FromFile("InstancingShader.xml"));
_VertexBuffer = new BufferRenderer();
RenderDataBuffer positionBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.STATIC_DRAW,
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Three, AttributeBindingType.Position));
// Set the position data of the quad
positionBuffer.BufferData(new[] { new Vector3(0, 0, 0), new Vector3(0, 0, 1), new Vector3(1, 0, 0), new Vector3(1, 0, 1) });
RenderDataBuffer instanceBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.DYNAMIC_DRAW,
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Instancing),
new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Color));
// Buffer the instance data
instanceBuffer.BufferData<InstanceTestData>(new[] {
new InstanceTestData() { Color = Colors.Red, PRS = new Color(0.1f, 1f, 0.5f, 1) },
new InstanceTestData() { Color = Colors.Blue, PRS = new Color(1f, 1f, 0.5f, 1) },
new InstanceTestData() { Color = Colors.Green, PRS = new Color(0.1f, 1f, 1f, 1) },
new InstanceTestData() { Color = Colors.Yellow, PRS = new Color(1f, 1f, 1f, 1) }
});
// Set up an index buffer for indexed rendering
RenderIndexBuffer indiciesBuffer = new RenderIndexBuffer(type: IndiciesType.UNSIGNED_BYTE);
indiciesBuffer.BufferData(new Byte[] { 2, 1, 0, 1, 2, 3 });
// Register the buffers ( second parameter is used for glVertexAttribDivisor )
_VertexBuffer.AddBuffer(positionBuffer);
_VertexBuffer.AddBuffer(instanceBuffer, 1);
_VertexBuffer.IndexBuffer = indiciesBuffer;
The vertex shader (the pixel shader just outputs the color):
uniform mat4 uModelViewProjection;
varying vec4 vColor;
attribute vec3 aPosition; // POSITION0
attribute vec4 aColor; // COLOR 0
attribute vec4 aInstancePosition; // INSTANCING0
void main()
{
gl_Position = uModelViewProjection * vec4(vec2((aPosition.x * 20) + (gl_InstanceID * 20), aPosition.z * 20), -3, 1);
vColor = aColor;
}
Rendering (pseudocode to simplify reading; not final, for all the performance guys out there):
glUseProgram
foreach (parameter in shader_parameters)
glUniformX
foreach (buffer in render_buffers)
glBindBuffer
foreach (declaration in buffer.vertex_declarations)
if (shader.Exists(declaration)) // Check if declaration exists in shader
glEnableVertexAttribArray(shader.attributeLocation(declaration))
glVertexAttribPointer
if (instanceDivisor != null)
glVertexAttribDivisor
glBindBuffer(index_buffer)
glDrawElementsInstanced
Shader attribute binding
The shader attribute binding is done at initialization and looks like this:
_VertexAttributes = source.Attributes.ToArray();
for (uint i = 0; i < _VertexAttributes.Length; i++)
{
ShaderAttribute attribute = _VertexAttributes[i];
GLShaders.BindAttribLocation(_ShaderHandle, i, attribute.Name);
}
So there should be no attribute aliasing in the shader; each of them gets a unique number. (Matrices are not implemented yet; I know they require more than one index, but I'm not using them as vertex attributes right now.) As mentioned in the comments, I filter the attributes after linking the shader, so no location is bound that does not exist.
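A sketch of that filtering step (GLShaders.GetAttribLocation and the _AttributeLocations map are assumptions, not the actual engine code):
// After linking, query each attribute; -1 means the linker optimized it out,
// so GetAttributeLocation must report "not found" for that binding later on.
for (uint i = 0; i < _VertexAttributes.Length; i++)
{
    Int32 location = GLShaders.GetAttribLocation(_ShaderHandle, _VertexAttributes[i].Name);
    if (location != -1)
        _AttributeLocations[_VertexAttributes[i].Binding] = location;
}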
This is the code for the attribute binding:
Bind();
Int32 offset = 0;
for (UInt32 i = 0; i < _Declarations.Length; i++)
{
VertexDeclaration data = _Declarations[i];
ShaderAttributeLocation location;
if (shader.GetAttributeLocation(data.Binding, out location))
{
GLVertexBuffer.EnableVertexAttribArray(location);
GLVertexBuffer.VertexAttribPointer(location, (AttributeSize)data.Size, (AttributeType)data.Type, data.Normalized, _StrideSize, new IntPtr(offset));
if (instanceDivisor != null)
GLVertexBuffer.VertexAttribDivisor(location, instanceDivisor.Value);
}
offset += data.ComponentSize;
}
Test results
The results are as seen here:
Now, if I swap the binding on the code side (AttributeBindingType.Color <-> AttributeBindingType.Instancing), it looks like this:
If I now change vColor = aColor; to vColor = aInstancePosition;, the results are simple: instead of multiple small colored quads I get one big fullscreen quad which is red. The location of each attribute is different from all the others, so technically the values should be correct, but they seem not to be. Using both attributes in the shader doesn't solve the problem either.
I'm searching for an idea or a solution to this problem.
Tracking the problem down
I've started to track it down more and more; with this complex code it cost me hours, but I found something: the shader I'm using only works when I leave out attribute index 0 when calling BindAttribLocation. In other words, this is a workaround which only works for this specific shader:
foreach (attribute in vertexAttributes)
{
if (shader == problemShader)
// i is index of the attribute
glBindAttribLocation(_ShaderHandle, i + 1, attribute.Name);
// All other shaders
else glBindAttribLocation(_ShaderHandle, i, attribute.Name);
}
I guess it has something to do with either instancing or the multiple VBOs which I'm using for instancing; these are the only differences from the normal shaders. The normal ones, by contrast, only work when I start the attribute location index at 0; they do not work when starting at 1.
I found the solution to the problem, and it is really simple:
After rendering with instancing, I need to call glVertexAttribDivisor(location, 0); on the attributes which had the divisor enabled before.
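A sketch of the fix in the render loop (the wrapper call matches the binding code above; locationsWithDivisor is an assumed bookkeeping list):
// Divisor state sticks to the attribute location, not to the shader or VBO,
// so after the instanced draw reset every divisor to 0 (back to per-vertex).
foreach (ShaderAttributeLocation location in locationsWithDivisor)
    GLVertexBuffer.VertexAttribDivisor(location, 0);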
I've ported this code 1:1 from C++/OpenGL to C# SharpGL:
float[] cameraAngle = { 0, 0, 0 };
float[] cameraPosition = { 0, 0, 10 };
float[] modelPosition = { 0, 0, 0 };
float[] modelAngle = { 0, 0, 0 };
float[] matrixView = new float[16];
float[] matrixModel = new float[16];
float[] matrixModelView = new float[16];
// clear buffer
gl.ClearColor(0.1f, 0.1f, 0.1f, 1);
gl.Clear(OpenGL.COLOR_BUFFER_BIT | OpenGL.DEPTH_BUFFER_BIT | OpenGL.STENCIL_BUFFER_BIT);
// initialize ModelView matrix
gl.PushMatrix();
gl.LoadIdentity();
// ModelView matrix is product of viewing matrix and modeling matrix
// ModelView_M = View_M * Model_M
// First, transform the camera (viewing matrix) from world space to eye space
// Notice all values are negated, because we move the whole scene with the
// inverse of camera transform
gl.Rotate(-cameraAngle[0], 1, 0, 0); // pitch
gl.Rotate(-cameraAngle[1], 0, 1, 0); // heading
gl.Rotate(-cameraAngle[2], 0, 0, 1); // roll
gl.Translate(-cameraPosition[0], -cameraPosition[1], -cameraPosition[2]);
// we have set the viewing matrix up to this point (matrix from world space to eye space)
// save the view matrix only
gl.GetFloat(OpenGL.MODELVIEW_MATRIX, matrixView); // save viewing matrix
//=========================================================================
// always Draw the grid at the origin (before any modeling transform)
//DrawGrid(10, 1);
// In order to get the modeling matrix only, reset OpenGL.MODELVIEW matrix
gl.LoadIdentity();
// transform the object
// From now, all transform will be for modeling matrix only. (transform from object space to world space)
gl.Translate(modelPosition[0], modelPosition[1], modelPosition[2]);
gl.Rotate(modelAngle[0], 1, 0, 0);
gl.Rotate(modelAngle[1], 0, 1, 0);
gl.Rotate(modelAngle[2], 0, 0, 1);
// save modeling matrix
gl.GetFloat(OpenGL.MODELVIEW_MATRIX, matrixModel);
//=========================================================================
// restore the OpenGL.MODELVIEW matrix by multiplying matrixView and matrixModel before drawing the object
// ModelView_M = View_M * Model_M
gl.LoadMatrixf(matrixView); // Mmv = Mv
gl.MultMatrixf(matrixModel); // Mmv *= Mm
// save ModelView matrix
gl.GetFloat(OpenGL.MODELVIEW_MATRIX, matrixModelView);
//=========================================================================
// Draw a teapot after ModelView transform
// v' = Mmv * v
//DrawAxis(4);
//DrawTeapot();
gl.PopMatrix();
It doesn't look like the ModelView matrix gets multiplied; the result is the identity matrix!
What could be wrong?
Thanks
Wrong glMatrixMode?
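If another matrix mode (e.g. the projection stack) is current, all the Rotate/Translate calls above go to the wrong stack and the modelview matrix stays the identity. A sketch, assuming the constant naming convention used above:
// Make the modelview stack current before PushMatrix/LoadIdentity/Rotate,
// otherwise GetFloat(OpenGL.MODELVIEW_MATRIX) keeps returning the identity.
gl.MatrixMode(OpenGL.MODELVIEW);
gl.PushMatrix();
gl.LoadIdentity();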