I'm trying to calculate the gradient magnitude and orientation of a grayscale image using OpenCvSharp. The problem is that the "Pow" function doesn't seem to be the right one for IplImage.
I also want to know how I can calculate tan⁻¹ (or arctan) of featureImage.
Thank you
using (IplImage cvImage = new IplImage("grayImage.png", LoadMode.AnyDepth |
LoadMode.GrayScale))
using (IplImage dstXImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
using (IplImage dstYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
{
float[] data = { 0, -1, -1, 2 };
CvMat kernel = new CvMat(2, 2, MatrixType.F32C1, data);
Cv.Sobel(cvImage, dstXImage, 1, 0, ApertureSize.Size1);
Cv.Sobel(cvImage, dstYImage, 0, 1, ApertureSize.Size1);
Cv.Normalize(dstXImage, dstXImage, 1.0, 0, NormType.L1);
Cv.Filter2D(cvImage, dstXImage, kernel, new CvPoint(0, 0));
Cv.Normalize(dstYImage, dstYImage, 1.0, 0, NormType.L1);
Cv.Filter2D(cvImage, dstYImage, kernel, new CvPoint(0, 0));
// to calculate gradient magnitude, sqrt[(dy)power 2 + (dx)power 2]
dstXImage.Mul(dstXImage, dstXImage);
dstYImage.Mul(dstYImage, dstYImage);
IplImage dstXYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
dstXImage.Add(dstYImage, dstXYImage);
dstXYImage.Pow(dstXYImage, 1/2); // this line not working, output image is black (note: 1/2 is integer division and evaluates to 0)
// to calculate gradient orientation, arctan(dy/dx)
IplImage thetaImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
dstYImage.Div(dstXImage, thetaImage); //afterwards need help to calculate arctan
using (new CvWindow("SrcImage", cvImage))
using (new CvWindow("DstXImage", dstXImage))
using (new CvWindow("DstYImage", dstYImage))
using (new CvWindow("DstXYImage", dstXYImage))
using (new CvWindow("thetaImage", thetaImage))
{
Cv.WaitKey(0);
}
}
You can use the CartToPolar function for your purpose.
This function calculates the magnitude and angle of 2D vectors:
magnitude(I) = sqrt(x(I)^2 + y(I)^2)
angle(I) = atan2(y(I), x(I)) [* 180 / pi, if angleInDegrees is set]
For example:
IplImage dstXYImage = new IplImage(cvImage.Size, BitDepth.F32, 1);
IplImage thetaImage = new IplImage(cvImage.Size, BitDepth.F32, 1);
Cv.CartToPolar(dstXImage, dstYImage, dstXYImage, thetaImage, true); // true = angle in degrees
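Because cvCartToPolar only accepts floating-point arrays (and cvSobel on an 8-bit source clips negative derivatives), the whole magnitude/orientation computation is easiest with 32-bit float intermediates. A minimal sketch, assuming OpenCvSharp's old C-style API as used in the question:
using (IplImage src = new IplImage("grayImage.png", LoadMode.GrayScale))
using (IplImage srcF = new IplImage(src.Size, BitDepth.F32, 1))
using (IplImage dx = new IplImage(src.Size, BitDepth.F32, 1))
using (IplImage dy = new IplImage(src.Size, BitDepth.F32, 1))
using (IplImage magnitude = new IplImage(src.Size, BitDepth.F32, 1))
using (IplImage orientation = new IplImage(src.Size, BitDepth.F32, 1))
{
    Cv.Convert(src, srcF);                          // 8U -> 32F so negative derivatives survive
    Cv.Sobel(srcF, dx, 1, 0, ApertureSize.Size3);   // d/dx
    Cv.Sobel(srcF, dy, 0, 1, ApertureSize.Size3);   // d/dy
    Cv.CartToPolar(dx, dy, magnitude, orientation, true); // sqrt(dx^2+dy^2) and atan2 in degrees
}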
I'm trying to learn how to use OpenGL in a 2D application using OpenTK, and have read that the built-in matrix-stack calls such as glMatrixMode are not modern. I want to use a top-left origin and pixel coordinates in my shader inputs, and assumed I could define a matrix to do these translations.
I am trying to do this with my own matrix using the OpenTK matrix classes. However, I think I have made a mistake in setting up the projection matrix and want to verify what I should be doing:
TranslationMatrix = Matrix4.Identity * Matrix4.CreateScale(1, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
var TranslatedPoint = TranslationMatrix * new Vector4(new Vector3(1024, 768, 0), 1); // bounds = {0, 0, 1024, 768 }
This results in TranslatedPoint.Xyz == { 2, -2, 0 }. I thought the x and y coordinates used in gl_Position in the vertex shader should range from -1 to 1.
I guess I've got a major misunderstanding somewhere; what should I be looking at?
OpenTK stores matrices in transposed (row-vector) form. This means you have to write the multiplications in reversed order.
var TranslationMatrix = Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateScale(1, -1, 1);
var TranslatedPoint = new Vector4(1024, 768, 0, 1) * TranslationMatrix;
The result should now be [1, -1, 0, 1].
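As a quick sanity check, here is a sketch using the same row-vector convention (bounds assumed to be 1024x768): the screen corners should map to the corners of NDC, with the Y flip giving the top-left origin the question asks for.
var m = Matrix4.CreateOrthographicOffCenter(0, 1024, 0, 768, -1, 1)
        * Matrix4.CreateScale(1, -1, 1);
Console.WriteLine(new Vector4(0, 0, 0, 1) * m);      // (-1,  1, 0, 1): top-left pixel
Console.WriteLine(new Vector4(1024, 768, 0, 1) * m); // ( 1, -1, 0, 1): bottom-right pixel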
I am trying to create a function that takes a grayscale image and a color, and colors the grayscale image with that shade while keeping the shading levels of the original. The function also should not color the transparent parts of the image. I have multiple layers (multiple PNGs) that I will combine later, and only certain layers need coloring.
I have looked around and found similar things, but not quite what I need. I know how to do it on the front end with HTML5 Canvas, but I need a way to achieve the same thing on the back end, I'm guessing either manually via locked bitmap memory (LockBits) or with the ColorMatrix class. Can anyone help me? Graphics aren't my strongest area, but I am slowly learning. See the function below for what I need in C#, which I did in JavaScript. The hidden-canvas part isn't as important, because I am doing this server side and saving to a PNG file...
function drawImage(imageObj, color) {
    var hidden_canvas = document.createElement("canvas");
    hidden_canvas.width = imageObj.width;
    hidden_canvas.height = imageObj.height;
    var hidden_context = hidden_canvas.getContext("2d");

    // draw the image on the hidden canvas
    hidden_context.drawImage(imageObj, 0, 0);

    if (color !== undefined) {
        var imageData = hidden_context.getImageData(0, 0, imageObj.width, imageObj.height);
        var data = imageData.data;
        for (var i = 0; i < data.length; i += 4) {
            var brightness = 0.34 * data[i] + 0.5 * data[i + 1] + 0.16 * data[i + 2];
            data[i] = brightness + color.R;     // red
            data[i + 1] = brightness + color.G; // green
            data[i + 2] = brightness + color.B; // blue
        }
        // overwrite original image
        hidden_context.putImageData(imageData, 0, 0);
    }

    var canvas = document.getElementById('card');
    var context = canvas.getContext('2d');
    context.drawImage(hidden_canvas, 0, 0);
}
This should do the job:
public static Bitmap MakeChromaChange(Bitmap bmp0, Color tCol, float gamma)
{
Bitmap bmp1 = new Bitmap(bmp0.Width, bmp0.Height);
using (Graphics g = Graphics.FromImage(bmp1))
{
float f = (tCol.R + tCol.G + tCol.B) / 765f;
float tr = tCol.R / 255f - f;
float tg = tCol.G / 255f - f;
float tb = tCol.B / 255f - f;
ColorMatrix colorMatrix = new ColorMatrix(new float[][]
{ new float[] {1f + tr, 0, 0, 0, 0},
new float[] {0, 1f + tg, 0, 0, 0},
new float[] {0, 0, 1f + tb, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {0, 0, 0, 0, 1} });
ImageAttributes attributes = new ImageAttributes();
attributes.SetGamma(gamma);
attributes.SetColorMatrix(colorMatrix);
g.DrawImage(bmp0, new Rectangle(0, 0, bmp0.Width, bmp0.Height),
0, 0, bmp0.Width, bmp0.Height, GraphicsUnit.Pixel, attributes);
}
return bmp1;
}
Note that I kept a gamma parameter; if you don't need it, keep the value at 1f.
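Hypothetical usage for the server-side PNG scenario in the question (the file names and color are placeholders):
using (Bitmap layer = new Bitmap("layer.png"))
using (Bitmap tinted = MakeChromaChange(layer, Color.FromArgb(200, 40, 40), 1f))
{
    tinted.Save("layer_tinted.png", System.Drawing.Imaging.ImageFormat.Png); // transparency is preserved
}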
Here it is at work, adding first red, then more red and some blue:
Transparent pixels are not affected.
For more on ColorMatrix, here is a really nice intro!
As a fun project, I applied the known colors to a known face:
I need help from any C# and/or OpenCV experts in making my circle detection script more accurate.
In OpenCV, circle detection is accomplished by the HoughCircles algorithm:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
I am using a C# wrapper of OpenCV for Unity, OpenCVForUnity (HoughCircles), which in turn is directly based on the official Java wrapper of OpenCV.
My circle detection code is as follows (without the OpenCV dependencies, of course).
I've also attached 2 images so you can see the results, plus the original 2 images for reference. What changes are needed to improve the results?
using UnityEngine;
using System.Collections;
using System;
using OpenCVForUnity;
public class HoughCircleSample : MonoBehaviour{
Point pt;
// Use this for initialization
void Start ()
{
Texture2D imgTexture = Resources.Load ("balls2_bw") as Texture2D;
Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC3);
Utils.texture2DToMat (imgTexture, imgMat);
//Debug.Log ("imgMat dst ToString " + imgMat.ToString ());
Mat grayMat = new Mat ();
Imgproc.cvtColor (imgMat, grayMat, Imgproc.COLOR_RGB2GRAY);
Imgproc.Canny (grayMat, grayMat, 50, 200);
Mat circles = new Mat();
int minRadius = 0;
int maxRadius = 0;
// Apply the Hough Transform to find the circles
Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 3, grayMat.rows() / 8, 200, 100, minRadius, maxRadius);
Debug.Log ("circles toString " + circles.ToString ());
Debug.Log ("circles dump" + circles.dump ());
if (circles.cols() > 0)
for (int x = 0; x < Math.Min(circles.cols(), 10); x++)
{
double[] vCircle = circles.get(0, x);
if (vCircle == null)
break;
pt = new Point(Math.Round(vCircle[0]), Math.Round(vCircle[1]));
int radius = (int)Math.Round(vCircle[2]);
// draw the found circle
Core.circle(imgMat, pt, radius, new Scalar(255, 0, 0), 1);
}
Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.RGBA32, false);
Utils.matToTexture2D (imgMat, texture);
gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
}
}
This code is in C++, but you can easily convert it to C#.
I needed to change param2 of HoughCircles to 200, resulting in:
HoughCircles(grayMat, circles, CV_HOUGH_GRADIENT, 3, grayMat.rows / 8, 200, 200, 0, 0);
which is
the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
You also shouldn't feed HoughCircles a "Canny-ed" image, since it already applies Canny internally. Use grayMat without the Canny edge detection step.
Results are shown below. The second one is more tricky, because of the light conditions.
Here is the whole code. Again, it's C++, but may be useful as a reference.
#include <opencv2/opencv.hpp>
using namespace cv;
int main(){
Mat3b src = imread("path_to_image");
Mat1b src_gray;
cvtColor(src, src_gray, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 3, src_gray.rows / 8, 200, 200, 0, 0);
/// Draw the circles detected
for (size_t i = 0; i < circles.size(); i++)
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
// circle center
circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);
// circle outline
circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
}
imshow("src", src);
waitKey();
return 0;
}
In the fourth parameter (dp) you have set 3, but most of your images have a ratio close to 1, so that could be a probable improvement. You also have to try other sets of values for parameters 6 and 7 (param1 and param2), because those values depend on the contours extracted by the Canny edge detector. I hope this helps.
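Translated to the OpenCVForUnity API used in the question, the suggested changes would look something like this (a sketch only; the thresholds still need tuning per image):
Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT,
    1,                   // dp: accumulator resolution ratio close to 1
    grayMat.rows() / 8,  // minDist between detected centers
    200,                 // param1: upper threshold of the internal Canny
    200,                 // param2: accumulator threshold for circle centers
    0, 0);               // minRadius / maxRadius (0 = unrestricted)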
I'm getting much closer now, with 2 overlapping circles for each ball object. If I can correct for this, it is basically solved.
Imgproc.Canny (grayMat, grayMat, 500, 200);
Mat circles = new Mat();
int minRadius = 50;
int maxRadius = 200;
Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 1, grayMat.rows() / 4, 1000, 1, minRadius, maxRadius);
I am having massive problems figuring out how to set up a dynamic VertexBuffer and IndexBuffer using SharpDX.
I have to generate triangles wherever the user presses on the screen.
I think I have to set up a transformation function that converts my screen coordinates to projection coordinates.
But I don't even get that far...
I want to set up a buffer with space for 10000 vertices.
layout = new InputLayout(d3dDevice, vertexShaderByteCode, new[]
{
new SharpDX.Direct3D11.InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0),
new SharpDX.Direct3D11.InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)
});
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
vertexBufferBinding = new VertexBufferBinding(vb, Utilities.SizeOf<Vector4>() * 2, 0);
I want to update that buffer every time I have to add new triangles, using:
d3dDevice.ImmediateContext.UpdateSubresource(updateVB, vb);
where updateVB contains the new triangles to be added.
Rendering works the following way:
// Prepare matrices
var view = Matrix.LookAtLH(new Vector3(0, 0, -5), new Vector3(0, 0, 0), Vector3.UnitY);
var proj = Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, width / (float)height, 0.1f, 100.0f);
var viewProj = Matrix.Multiply(view, proj);
// Set targets (This is mandatory in the loop)
d3dContext.OutputMerger.SetTargets(render.DepthStencilView, render.RenderTargetView);
// Clear the views
d3dContext.ClearDepthStencilView(render.DepthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
d3dContext.ClearRenderTargetView(render.RenderTargetView, Colors.Black);
// Calculate WorldViewProj
var worldViewProj = Matrix.Scaling(1f) * viewProj;
worldViewProj.Transpose();
// Setup the pipeline
d3dContext.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
d3dContext.InputAssembler.InputLayout = layout;
d3dContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
d3dContext.VertexShader.Set(vertexShader);
d3dContext.PixelShader.Set(pixelShader);
d3dContext.Draw(vertexCount, 0);
I am new to DirectX, and the DirectX9 tutorials on the web don't help me much with DirectX 11.1.
Thanks
vb = Buffer.Create(d3dDevice, BindFlags.VertexBuffer, stream, 10000, ResourceUsage.Dynamic, CpuAccessFlags.Write);
is wrong, since you want 10000 vertices but allocate only 10000 bytes. According to your input layout, each vertex is two Vector4s, so the size should be:
10000 * Utilities.SizeOf<Vector4>() * 2
Also, UpdateSubresource cannot be used on a resource created with ResourceUsage.Dynamic; to write into your buffer, you should look at context.MapSubresource instead.
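A minimal sketch of the Map/Unmap route in SharpDX, assuming newVertices is a Vector4[] of position/color pairs matching your input layout (note that MapMode.WriteDiscard invalidates the previous contents, so you write the full buffer each time):
var context = d3dDevice.ImmediateContext;
DataStream ds;
context.MapSubresource(vb, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out ds);
ds.WriteRange(newVertices);        // upload the triangle vertices
context.UnmapSubresource(vb, 0);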
Good day,
I am trying to display a real-time stereo video using NVIDIA 3D Vision and two IP cameras. I am totally new to DirectX, but have tried to work through some tutorials and other questions on this and other sites. For now, I am displaying two static bitmaps for the left and right eyes. These will be replaced by bitmaps from my cameras once I have this part of my program working.
This question, NV_STEREO_IMAGE_SIGNATURE and DirectX 10/11 (nVidia 3D Vision), has helped me quite a bit, but I am still struggling to get my program working as it should. What I am finding is that my shutter glasses start working as they should, but only the image for the right eye gets displayed, while the left eye remains blank (except for the mouse cursor).
Here is my code for generating the stereo images:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.Windows;
using SlimDX.DXGI;
using Device = SlimDX.Direct3D11.Device; // Make sure we use DX11
using Resource = SlimDX.Direct3D11.Resource;
namespace SlimDxTest2
{
static class Program
{
private static Device device; // DirectX11 Device
private static int Count; // Just to make sure things are being updated
// The NVSTEREO header.
static byte[] stereo_data = new byte[] {0x4e, 0x56, 0x33, 0x44, //NVSTEREO_IMAGE_SIGNATURE = 0x4433564e;
0x00, 0x0F, 0x00, 0x00, //Screen width * 2 = 1920*2 = 3840 = 0x00000F00;
0x38, 0x04, 0x00, 0x00, //Screen height = 1080 = 0x00000438;
0x20, 0x00, 0x00, 0x00, //dwBPP = 32 = 0x00000020;
0x02, 0x00, 0x00, 0x00}; //dwFlags = SIH_SCALE_TO_FIT = 0x00000002
[STAThread]
static void Main()
{
Bitmap left_im = new Bitmap("Blue.png"); // Read in Bitmaps
Bitmap right_im = new Bitmap("Red.png");
// Device creation
var form = new RenderForm("Stereo test") { ClientSize = new Size(1920, 1080) };
var desc = new SwapChainDescription()
{
BufferCount = 1,
ModeDescription = new ModeDescription(1920, 1080, new Rational(120, 1), Format.R8G8B8A8_UNorm),
IsWindowed = false, //true,
OutputHandle = form.Handle,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.Discard,
Usage = Usage.RenderTargetOutput
};
SwapChain swapChain;
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);
RenderTargetView renderTarget; // create a view of our render target, which is the backbuffer of the swap chain we just created
using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
renderTarget = new RenderTargetView(device, resource);
var context = device.ImmediateContext; // set up a viewport
var viewport = new Viewport(0.0f, 0.0f, form.ClientSize.Width, form.ClientSize.Height);
context.OutputMerger.SetTargets(renderTarget);
context.Rasterizer.SetViewports(viewport);
// prevent DXGI handling of alt+enter, which doesn't work properly with Winforms
using (var factory = swapChain.GetParent<Factory>())
factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);
form.KeyDown += (o, e) => // handle alt+enter ourselves
{
if (e.Alt && e.KeyCode == Keys.Enter)
swapChain.IsFullScreen = !swapChain.IsFullScreen;
};
form.KeyDown += (o, e) => // Alt + X -> Exit Program
{
if (e.Alt && e.KeyCode == Keys.X)
{
form.Close();
}
};
context.ClearRenderTargetView(renderTarget, Color.Green); // Fill Screen with specified colour
Texture2DDescription stereoDesc = new Texture2DDescription()
{
ArraySize = 1,
Width = 3840,
Height = 1081,
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Write,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0)
};
// Main Loop
MessagePump.Run(form, () =>
{
Texture2D texture_stereo = Make3D(left_im, right_im); // Create Texture from two bitmaps in memory
ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };
context.CopySubresourceRegion(texture_stereo, 0, stereoSrcBox, renderTarget.Resource, 0, 0, 0, 0);
texture_stereo.Dispose();
swapChain.Present(0, PresentFlags.None);
});
// Dispose resources
swapChain.IsFullScreen = false; // Required before swapchain dispose
device.Dispose();
swapChain.Dispose();
renderTarget.Dispose();
}
static Texture2D Make3D(Bitmap leftBmp, Bitmap rightBmp)
{
var context = device.ImmediateContext;
Bitmap left2 = leftBmp.Clone(new RectangleF(0, 0, leftBmp.Width, leftBmp.Height), PixelFormat.Format32bppArgb); // Change bmp to 32bit ARGB
Bitmap right2 = rightBmp.Clone(new RectangleF(0, 0, rightBmp.Width, rightBmp.Height), PixelFormat.Format32bppArgb);
// Show FrameCount on screen: (To test)
Graphics left_graph = Graphics.FromImage(left2);
left_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(100, 100));
left_graph.Dispose();
Graphics right_graph = Graphics.FromImage(right2);
right_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(200, 200));
right_graph.Dispose();
Count++;
Texture2DDescription desc2d = new Texture2DDescription()
{
ArraySize = 1,
Width = 1920,
Height = 1080,
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Write,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0)
};
Texture2D leftText2 = new Texture2D(device, desc2d); // Texture2D for each bmp
Texture2D rightText2 = new Texture2D(device, desc2d);
Rectangle rect = new Rectangle(0, 0, left2.Width, left2.Height);
BitmapData leftData = left2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
IntPtr left_ptr = leftData.Scan0;
int left_num_bytes = Math.Abs(leftData.Stride) * leftData.Height;
byte[] left_bytes = new byte[left_num_bytes];
byte[] left_bytes2 = new byte[left_num_bytes];
System.Runtime.InteropServices.Marshal.Copy(left_ptr, left_bytes, 0, left_num_bytes); // Get Byte array from bitmap
left2.UnlockBits(leftData);
DataBox box1 = context.MapSubresource(leftText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
box1.Data.Write(left_bytes, 0, left_bytes.Length);
context.UnmapSubresource(leftText2, 0);
BitmapData rightData = right2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
IntPtr right_ptr = rightData.Scan0;
int right_num_bytes = Math.Abs(rightData.Stride) * rightData.Height;
byte[] right_bytes = new byte[right_num_bytes];
System.Runtime.InteropServices.Marshal.Copy(right_ptr, right_bytes, 0, right_num_bytes); // Get Byte array from bitmap
right2.UnlockBits(rightData);
DataBox box2 = context.MapSubresource(rightText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
box2.Data.Write(right_bytes, 0, right_bytes.Length);
context.UnmapSubresource(rightText2, 0);
Texture2DDescription stereoDesc = new Texture2DDescription()
{
ArraySize = 1,
Width = 3840,
Height = 1081,
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Write,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0)
};
Texture2D stereoTexture = new Texture2D(device, stereoDesc); // Texture2D to contain stereo images and Nvidia 3DVision Signature
// Identify the source texture region to copy (all of it)
ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };
// Copy it to the stereo texture
context.CopySubresourceRegion(leftText2, 0, stereoSrcBox, stereoTexture, 0, 0, 0, 0);
context.CopySubresourceRegion(rightText2, 0, stereoSrcBox, stereoTexture, 0, 1920, 0, 0); // Offset by 1920 pixels
// Open the staging texture for reading and go to last row
DataBox box = context.MapSubresource(stereoTexture, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
box.Data.Seek(stereoTexture.Description.Width * (stereoTexture.Description.Height - 1) * 4, System.IO.SeekOrigin.Begin);
box.Data.Write(stereo_data, 0, stereo_data.Length); // Write the NVSTEREO header
context.UnmapSubresource(stereoTexture, 0);
left2.Dispose();
leftText2.Dispose();
right2.Dispose();
rightText2.Dispose();
return stereoTexture;
}
}
}
I have tried various methods of copying the Texture2D of the stereo image including signature (3840x1081) to the backbuffer, but none of the methods I have tried display both images...
Any help or comments will be much appreciated,
Ryan
If using DirectX 11.1 is an option, there is a much easier way to enable stereoscopic features, without having to rely on nVidia's byte wizardry. Basically, you create a SwapChain1 instead of a regular SwapChain; then it is as simple as setting Stereo to true.
Have a look at this post I made; it shows you how to create a stereo swap chain. The code is a port to C# of MS's own stereo sample. Then you'll have two render targets, and it is much simpler. Before rendering you have to:
void RenderEye(bool rightEye, ITarget target)
{
RenderTargetView currentTarget = rightEye ? target.RenderTargetViewRight : target.RenderTargetView;
context.OutputMerger.SetTargets(target.DepthStencilView, currentTarget);
[clean color/depth]
[render scene]
[repeat for each eye]
}
where ITarget is an interface for a class providing access to the backbuffer, render targets, etc.
That's it; DirectX will take care of everything. Hope this helps.
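For reference, the stereo swap chain boils down to one flag in the DXGI 1.2 description. A sketch using SharpDX names (SlimDX, as used in the question, does not expose DXGI 1.2; the other field values are assumptions to adapt):
var desc = new SwapChainDescription1
{
    Width = 1920,
    Height = 1080,
    Format = Format.R8G8B8A8_UNorm,
    Stereo = true,                           // the key flag
    SampleDescription = new SampleDescription(1, 0),
    Usage = Usage.RenderTargetOutput,
    BufferCount = 2,
    SwapEffect = SwapEffect.FlipSequential,  // flip model is required for stereo
};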
Try creating the backbuffer with width = 1920, not 3840.
Stretch each image to half its width and put them side by side.
I remember seeing this exact same question while searching a couple of days ago on the NVIDIA Developer forums. Unfortunately the forums are down due to a recent hacker attack. I remember that the OP on that thread was able to get it working with DX11 and SlimDX using the signature hack. You do not use the StretchRectangle method; it was something like createResourceRegion(), but not exactly that, I can't remember. It might be the CopyResource() or CopySubresourceRegion() methods found in this similar thread on Stack Overflow:
Copy Texture to Texture
Also, are you rendering the image continuously, or at least a few times? I was doing the same thing in DX9 and had to tell DX to render 3 frames before the driver recognized it as 3D Vision. Did your glasses kick on? Is your backbuffer (width * 2) x (height + 1), and are you writing to it like so:
 _________________________
|           |            |
|   img1    |   img2     |
|           |            |
|-----------|------------|
|_______signature________|   (this last row is 1 pixel tall)
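One detail worth checking when writing that signature row: the mapped staging texture can have a padded row pitch, so seek with the pitch DirectX reports rather than assuming width * 4. A sketch against the SlimDX code above, where box is the DataBox returned by MapSubresource:
// The signature lives at the start of the extra (Height + 1)th row.
int sigOffset = box.RowPitch * 1080;   // box.RowPitch may be larger than 3840 * 4
box.Data.Seek(sigOffset, System.IO.SeekOrigin.Begin);
box.Data.Write(stereo_data, 0, stereo_data.Length);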