I need to expose a Unity Texture/RenderTexture to a native plugin, which requires the "D3D11_RESOURCE_MISC_SHARED" flag on the texture.
Textures created by Unity don't have this flag, so I created the texture on the plugin side, created a reference texture in Unity with CreateExternalTexture, and copied the contents into the native texture with Graphics.CopyTexture.
The two textures have the same dimensions, size, format, and mipCount (0).
The problem is that when I display the result in Unity (for debugging purposes), I see nothing, and no error occurs.
By the way, if I copy with ReadPixels instead, an error occurs:
ReadPixels called on undefined image 0 (valid values are 0 - -1
If I create the texture with the Unity API, CopyTexture succeeds and the result is visible, but then I lose the "D3D11_RESOURCE_MISC_SHARED" flag.
So maybe the texture I created is not valid?
My code:
D3D11_TEXTURE2D_DESC desc = { 0 };
desc.Width = width;
desc.Height = height;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // Can the format be adjusted here? For example, is the A8 channel needed?
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // an ordinary resource
//desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ; // CPU access should not be needed, neither read nor write
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // for the "OpenSharedHandle" D3D11 API
HRESULT hr = E_FAIL;
if (FAILED(hr = pDevice->CreateTexture2D(&desc, nullptr, &g_unityEquirectTexture)))
{
    Log("Create Shared Texture Failed!");
    return NULL;
}
Log("CreateSharedTexture success");
//return g_unityEquirectTexture;
Unity CopyTexture Code:
if (output == null)
{
    Debug.Log($"limit = {QualitySettings.masterTextureLimit}");
    //output = new Texture2D(equirect.width, equirect.height, TextureFormat.RGBA32, false); // uncomment this line and then the CopyTexture below succeeds
    IntPtr externalTextureData = CGDKInterOp.cgdk_c_CreateExternalTexture(equirectLeft.GetNativeTexturePtr(), equirectLeft.width * 2, equirectLeft.height);
    if (externalTextureData != IntPtr.Zero)
    {
        output = Texture2D.CreateExternalTexture(equirectLeft.width * 2, equirectLeft.height, TextureFormat.RGBA32, false, true, externalTextureData);
    }
}
if (output == null)
{
    Debug.LogError("create texture from external failed!");
    return;
}
//RenderTexture.active = equirect;
//output.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
//RenderTexture.active = null;
Graphics.CopyTexture(equirect, output);
OK, I solved it myself.
The problem is: MipLevels == 0. This causes D3D11 to create a texture with 0 B of memory allocated!
Changing MipLevels to 1 solved the problem.
Note:
In the Unity inspector, we can see the memory allocated by textures. I found from the inspector that my texture had 0 B of memory, searched using this clue, and found the solution.
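For reference, a minimal sketch of the corrected native-side descriptor (the same fields as the question's code, with only MipLevels changed):
D3D11_TEXTURE2D_DESC desc = { 0 };
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1; // 1 instead of 0: with 0, the texture showed 0 B allocated in this setup
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;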
There might be a better solution for this, but the last time I needed to copy texture data from the native side to managed (Unity) code, I did it through marshalling of the data.
You essentially just need to expose a method in the native plugin that passes you the texture data as an array, and then have a method in the C# code fetch the data (by calling that method), releasing the pointer to native memory when you're done. You can find information about marshalling and interop in Microsoft's documentation (e.g. https://learn.microsoft.com/en-us/dotnet/framework/interop/marshalling-different-types-of-arrays).
If your texture is always guaranteed to be of the same size and format, it's easier; but if you need some additional parameters to know how the Texture should be represented in managed land, you can always pass yourself the additional data through the same method.
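A minimal sketch of that approach; the native export names (GetTextureData, FreeTextureData) and the plugin name (MyNativePlugin) are hypothetical placeholders for whatever your plugin exposes:
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public static class NativeTextureBridge
{
    // Hypothetical exports: real names and signatures depend on your plugin.
    [DllImport("MyNativePlugin")]
    private static extern IntPtr GetTextureData(out int byteCount);

    [DllImport("MyNativePlugin")]
    private static extern void FreeTextureData(IntPtr data);

    public static Texture2D FetchTexture(int width, int height)
    {
        IntPtr ptr = GetTextureData(out int byteCount);
        if (ptr == IntPtr.Zero) return null;
        try
        {
            // Copy the native buffer into managed memory, then upload it.
            var pixels = new byte[byteCount];
            Marshal.Copy(ptr, pixels, 0, byteCount);
            var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
            tex.LoadRawTextureData(pixels);
            tex.Apply();
            return tex;
        }
        finally
        {
            FreeTextureData(ptr); // release the native buffer when done
        }
    }
}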
Related
I get the below error when trying to use DXGI to capture the built-in screen on my laptop, which runs on an Intel 630 HD with the latest driver. The code works when I capture the external screen running on my GTX 1070.
SharpDX.SharpDXException
HResult=0x80070057
Message=HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.
The code in my form:
desktopDuplicator = new DesktopDuplicatorD11(1, 0, DesktopDuplicatorD11.VSyncLevel.None);
The section of the code that errors:
private bool RetrieveFrame()
{
    if (desktopImageTexture == null)
        desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
    frameInfo = new OutputDuplicateFrameInformation();
    try
    {
        mDeskDuplication.AcquireNextFrame(500, out frameInfo, out desktopResource);
    }
    catch (SharpDXException ex)
    {
        if (ex.ResultCode.Code == SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
        {
            return true;
        }
        if (ex.ResultCode.Failure)
        {
            throw new DesktopDuplicationException("Failed to acquire next frame.");
        }
    }
    using (var tempTexture = desktopResource.QueryInterface<Texture2D>())
    {
        mDevice.ImmediateContext.CopyResource(tempTexture, desktopImageTexture);
    }
    return false;
}
It errors specifically on the line:
desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
What is causing the error when using the internal display and the Intel 630?
Edit #1:
mTextureDescription creation:
this.mTextureDescription = new Texture2DDescription()
{
CpuAccessFlags = CpuAccessFlags.Read,
BindFlags = BindFlags.None,
Format = Format.B8G8R8A8_UNorm,
Width = this.mOutputDescription.DesktopBounds.Right,
Height = this.mOutputDescription.DesktopBounds.Bottom,
OptionFlags = ResourceOptionFlags.None,
MipLevels = 1,
ArraySize = 1,
SampleDescription = { Count = 1, Quality = 0 },
Usage = ResourceUsage.Staging
};
The whole Desktop Duplication process is done on the same thread.
Update #2:
On the Intel 630, Width = this.mOutputDescription.DesktopBounds.Right returns 0, whereas on my 1070 it returns 1920.
The simplest reason is usually the actual problem.
Intel's final WDDM 2.6 drivers do not work properly with switchable graphics; update to the DCH WDDM 2.7 driver.
First, to get hints from the API about the invalid argument (which is exactly what you have), you need to enable the Direct3D Debug Layer. The article explains it for C++, and it is possible to do a similar trick with C# as well.
Second, what matters is the effective arguments in the failing call, not just the code.
The code is about right, but if the coordinates in mOutputDescription are zero or invalid, the mentioned API call will fail as well. You need to set a breakpoint and inspect the variable.
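A minimal sketch of the first suggestion in SharpDX: create the device with the debug layer enabled (this assumes the D3D11 SDK layers / Graphics Tools are installed on the machine):
using SharpDX.Direct3D;
using SharpDX.Direct3D11;

// With the Debug flag, an invalid Texture2DDescription (e.g. a zero width)
// produces a descriptive message in the native debug output instead of a
// bare E_INVALIDARG. Enable native code debugging in the project to see it.
var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);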
I am trying to use the Intel RealSense SDK to acquire some guided frames, which I then try to stitch using the Intel stitching algorithm.
However, the SetFileName function is not writing the file to the directory.
Can you please help?
PXCMSenseManager pp = PXCMSenseManager.CreateInstance();
RaiseMessage("Starting...");
pp.EnableFace();
pp.Enable3DScan();
if (!string.IsNullOrEmpty(StreamOutputFilename))
{
    if (File.Exists(StreamOutputFilename)) throw new Exception("File already exists");
    System.Diagnostics.Debug.WriteLine(StreamOutputFilename);
    pp.QueryCaptureManager().SetFileName(StreamOutputFilename, true);
}
Please refer to the solution below:
int wmain(int argc, WCHAR* argv[]) {
// Create a SenseManager instance
PXCPointF32 *uvmap = 0;
pxcCHAR *file = L"C:\\new.rssdk";
PXCSenseManager *sm = PXCSenseManager::CreateInstance();
// Set file recording or playback
sm->QueryCaptureManager()->SetFileName(file, 1);
// Select the depth stream
sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480, 0);
// Initialize and Record 300 frames
sm->Init();
for (int i = 0; i<300; i++) {
// This function blocks until a sample is ready
if (sm->AcquireFrame(true)<PXC_STATUS_NO_ERROR) break;
// Retrieve the sample
PXCCapture::Sample *sample = sm->QuerySample();
sm->ReleaseFrame();
}
// close down
sm->Release();
return 0;
}
I want to use DirectX from C#, and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world position, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is code that I think is ready to render something:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;
namespace CurrencyConverter
{
static class Program
{
    [STAThread]
    static void Main()
    {
// Enable object tracking
SharpDX.Configuration.EnableObjectTracking = true;
SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();
#region Direct3D Initialization
// Create the window to render to
Form1 form = new Form1();
form.Text = "D3DRendering - EmptyProject";
form.Width = 640;
form.Height = 480;
// Declare the device and swapChain vars
Device device;
SwapChain swapChain;
// Create the device and swapchain
// First create a regular D3D11 device
using (var device11 = new Device(
SharpDX.Direct3D.DriverType.Hardware,
DeviceCreationFlags.None,
new[] {
SharpDX.Direct3D.FeatureLevel.Level_11_1,
SharpDX.Direct3D.FeatureLevel.Level_11_0,
}))
{
// Query device for the Device1 interface (ID3D11Device1)
device = device11.QueryInterfaceOrNull<Device1>();
if (device == null)
throw new NotSupportedException(
"SharpDX.Direct3D11.Device1 is not supported");
}
// Rather than create a new DXGI Factory we reuse the
// one that has been used internally to create the device
using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
using (var adapter = dxgi.Adapter)
using (var factory = adapter.GetParent<Factory2>())
{
var desc1 = new SwapChainDescription1()
{
Width = form.ClientSize.Width,
Height = form.ClientSize.Height,
Format = Format.R8G8B8A8_UNorm,
Stereo = false,
SampleDescription = new SampleDescription(1, 0),
Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
BufferCount = 1,
Scaling = Scaling.Stretch,
SwapEffect = SwapEffect.Discard,
};
swapChain = new SwapChain1(factory,
device,
form.Handle,
ref desc1,
new SwapChainFullScreenDescription()
{
RefreshRate = new Rational(60, 1),
Scaling = DisplayModeScaling.Centered,
Windowed = true
},
// Restrict output to specific Output (monitor)
adapter.Outputs[0]);
}
// Create references for backBuffer and renderTargetView
var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain,
0);
var renderTargetView = new RenderTargetView(device,
backBuffer);
#endregion
// Setup object debug names
device.DebugName = "The Device";
swapChain.DebugName = "The SwapChain";
backBuffer.DebugName = "The Backbuffer";
renderTargetView.DebugName = "The RenderTargetView";
#region Render loop
// Create and run the render loop
RenderLoop.Run(form, () =>
{
// Clear the render target with...
var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
SharpDX.Color.DarkBlue,
(float)((timer.Time) / 10.0 % 1.0));
device.ImmediateContext.ClearRenderTargetView(
renderTargetView,
lerpColor);
// Execute rendering commands here...
//...
//I DO NOT HAVE ANY IDEA
//...
// Present the frame
swapChain.Present(0, PresentFlags.RestrictToOutput);
});
#endregion
#region Direct3D Cleanup
// Release the device and any other resources created
renderTargetView.Dispose();
backBuffer.Dispose();
device.Dispose();
swapChain.Dispose();
#endregion
}
}
}
Generally speaking, with Direct3D you need a substantial amount of code before anything happens on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend looking particularly at the following (a short sketch of two of these steps follows the list):
Depth buffer creation (DepthStencilView)
The fx file, as you need shaders to get anything on the screen (no more fixed function)
How the vertex buffer is created; you need to split geometry into triangles (in common cases; there are other possibilities)
Don't forget the SetViewport (it's really common to have it omitted)
The calls referring to the Input Assembler, which assign the geometry to be drawn
Constant buffer creation: this is to pass matrices and changing data (like diffuse color)
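A minimal sketch of two of those steps, assuming the device, renderTargetView, and form variables from the question's code: creating a depth buffer (DepthStencilView) and setting the viewport:
using SharpDX.Direct3D11;
using SharpDX.DXGI;

var depthDesc = new Texture2DDescription
{
    Format = Format.D32_Float,
    Width = form.ClientSize.Width,
    Height = form.ClientSize.Height,
    ArraySize = 1,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
    BindFlags = BindFlags.DepthStencil,
    Usage = ResourceUsage.Default
};
var depthBuffer = new Texture2D(device, depthDesc);
var depthView = new DepthStencilView(device, depthBuffer);

// Bind the render target together with the depth buffer,
// and define the region of the back buffer we render into.
device.ImmediateContext.OutputMerger.SetTargets(depthView, renderTargetView);
device.ImmediateContext.Rasterizer.SetViewport(0, 0, form.ClientSize.Width, form.ClientSize.Height);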
Also make sure to use DeviceCreationFlags.Debug with the Device.CreateWithSwapChain call, and in the Visual Studio debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason in case any resource creation fails (instead of "Invalid Args", which is quite pointless).
As another recommendation: all the Direct3D 11 resource creation parameters are incredibly error-prone and tedious (many options are incompatible with each other), so it is quite important to wrap those into easier-to-use helper functions (and write a small number of unit tests to validate them once and for all). The old Toolkit has quite a few examples of those.
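For illustration, a minimal sketch of such a helper; the name and defaults are hypothetical, not taken from the Toolkit:
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Wrap an error-prone Texture2DDescription behind a helper with safe defaults.
static Texture2D CreateColorTexture(Device device, int width, int height)
{
    var desc = new Texture2DDescription
    {
        Width = width,
        Height = height,
        MipLevels = 1,
        ArraySize = 1,
        Format = Format.R8G8B8A8_UNorm,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget
    };
    return new Texture2D(device, desc);
}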
The SharpDX wrapper stays relatively close to its C++ counterpart, so anything in the C++ documentation applies to it too.
I would like to store the motion capture data from Kinect 2 as a BVH file. I found code which does so for Kinect 1, which can be found here. I went through the code and found several things that I was not able to understand.
For example, I've tried to understand what exactly the Skeleton skel object, found in several places in the code, actually is. If that approach won't work, are there any known applications available to accomplish this?
EDIT: I tried to change Skeleton skel to Body skel, which I think is the corresponding object for Kinect SDK 2.0. However, I get an error when I try to get the position of the body:
tempMotionVektor[0] = -Math.Round( skel.Position.X * 100,2);
tempMotionVektor[1] = Math.Round( skel.Position.Y * 100,2) + 120;
tempMotionVektor[2] = 300 - Math.Round( skel.Position.Z * 100,2);
I've gotten errors when calling Position on the Body skel. How can I retrieve the X, Y, Z of the skeleton in SDK 2.0? I tried to change the above three lines to:
tempMotionVektor[0] = -Math.Round(skel.Joints[0].Position.X * 100, 2);
tempMotionVektor[1] = Math.Round(skel.Joints[0].Position.Y * 100, 2) + 120;
tempMotionVektor[2] = 300 - Math.Round(skel.Joints[0].Position.Z * 100, 2);
EDIT: Basically, I managed to store a BVH file after combining bodyBasicsWPF and kinect2bvh. However, it seems that the skeleton I am storing is not accurate: there are strange movements in the elbows. I am trying to understand whether I have to change something in the file kinectSkeletonBVH.cp. More specifically, what are the changes in the joint axis orientations for the Kinect 2 version? How can I change the following line: skel.BoneOrientations[JointType.ShoulderCenter].AbsoluteRotation.Quaternion? I tried to replace that line with skel.JointOrientations[JointType.ShoulderCenter].Orientation. Am I right? I am using the following code to add the joints to BVHBone objects:
BVHBone hipCenter = new BVHBone(null, JointType.SpineBase.ToString(), 6, TransAxis.None, true);
BVHBone hipCenter2 = new BVHBone(hipCenter, "HipCenter2", 3, TransAxis.Y, false);
BVHBone spine = new BVHBone(hipCenter2, JointType.SpineMid.ToString(), 3, TransAxis.Y, true);
BVHBone shoulderCenter = new BVHBone(spine, JointType.SpineShoulder.ToString(), 3, TransAxis.Y, true);
BVHBone collarLeft = new BVHBone(shoulderCenter, "CollarLeft", 3, TransAxis.X, false);
BVHBone shoulderLeft = new BVHBone(collarLeft, JointType.ShoulderLeft.ToString(), 3, TransAxis.X, true);
BVHBone elbowLeft = new BVHBone(shoulderLeft, JointType.ElbowLeft.ToString(), 3, TransAxis.X, true);
BVHBone wristLeft = new BVHBone(elbowLeft, JointType.WristLeft.ToString(), 3, TransAxis.X, true);
BVHBone handLeft = new BVHBone(wristLeft, JointType.HandLeft.ToString(), 0, TransAxis.X, true);
BVHBone neck = new BVHBone(shoulderCenter, "Neck", 3, TransAxis.Y, false);
BVHBone head = new BVHBone(neck, JointType.Head.ToString(), 3, TransAxis.Y, true);
BVHBone headtop = new BVHBone(head, "Headtop", 0, TransAxis.None, false);
I can't understand where in the code the axis for every joint is calculated.
The code you used for Kinect 1.0 to obtain a BVH file uses the joint information to build bone vectors by reading the Skeleton:
public static double[] getBoneVectorOutofJointPosition(BVHBone bvhBone, Skeleton skel)
{
double[] boneVector = new double[3] { 0, 0, 0 };
double[] boneVectorParent = new double[3] { 0, 0, 0 };
string boneName = bvhBone.Name;
JointType Joint;
if (bvhBone.Root == true)
{
boneVector = new double[3] { 0, 0, 0 };
}
else
{
if (bvhBone.IsKinectJoint == true)
{
Joint = KinectSkeletonBVH.String2JointType(boneName);
boneVector[0] = skel.Joints[Joint].Position.X;
boneVector[1] = skel.Joints[Joint].Position.Y;
boneVector[2] = skel.Joints[Joint].Position.Z;
..
Source: Nguyên Lê Đặng - Kinect2BVH.V2
In Kinect 2.0, however, the Skeleton class has been replaced by the Body class, so you need to change the code to deal with a Body instead, and obtain the joints by following the steps quoted below.
// Kinect namespace
using Microsoft.Kinect;
// ...
// Kinect sensor and Kinect stream reader objects
KinectSensor _sensor;
MultiSourceFrameReader _reader;
IList<Body> _bodies;
// Kinect sensor initialization
_sensor = KinectSensor.GetDefault();
if (_sensor != null)
{
_sensor.Open();
}
We also added a list of bodies, where all of the body/skeleton-related
data will be saved. If you have developed for Kinect version 1, you
will notice that the Skeleton class has been replaced by the Body class.
Remember the MultiSourceFrameReader? This class gives us access on
every stream, including the body stream! We simply need to let the
sensor know that we need body tracking functionality by adding an
additional parameter when initializing the reader:
_reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
FrameSourceTypes.Depth |
FrameSourceTypes.Infrared |
FrameSourceTypes.Body);
_reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
The Reader_MultiSourceFrameArrived method will be called whenever a
new frame is available. Let’s specify what will happen in terms of the
body data:
Get a reference to the body frame
Check whether the body frame is null – this is crucial
Initialize the _bodies list
Call the GetAndRefreshBodyData method, so as to copy the body data into the list
Loop through the list of bodies and do awesome stuff!
Always remember to check for null values. Kinect provides you with
approximately 30 frames per second – anything could be null or
missing! Here is the code so far:
void Reader_MultiSourceFrameArrived(object sender,
MultiSourceFrameArrivedEventArgs e)
{
var reference = e.FrameReference.AcquireFrame();
// Color
// ...
// Depth
// ...
// Infrared
// ...
// Body
using (var frame = reference.BodyFrameReference.AcquireFrame())
{
if (frame != null)
{
_bodies = new Body[frame.BodyFrameSource.BodyCount];
frame.GetAndRefreshBodyData(_bodies);
foreach (var body in _bodies)
{
if (body != null)
{
// Do something with the body...
}
}
}
}
}
This is it! We now have access to the bodies Kinect identifies. Next
step is to display the skeleton information on-screen. Each body
consists of 25 joints. The sensor provides us with the position (X, Y,
Z) and the rotation information for each one of them. Moreover, Kinect
lets us know whether the joints are tracked, hypothesized, or not
tracked. It’s a good practice to check whether a body is tracked
before performing any critical functions.
The following code illustrates how we can access the different body
joints:
if (body != null)
{
if (body.IsTracked)
{
Joint head = body.Joints[JointType.Head];
float x = head.Position.X;
float y = head.Position.Y;
float z = head.Position.Z;
// Draw the joints...
}
}
Source: Vangos Pterneas Blog - KINECT FOR WINDOWS VERSION 2: BODY TRACKING
Update 10/03
OK, so I figured out the problem. C# is a funny old language... I changed
int navMeshCount = navMesh.GetPath(ref start, ref end, routePolys, navMeshRoute, false);
to
var navMeshCount = navMesh.GetPath(ref start, ref end, routePolys, navMeshRoute, false);
and finally I'm getting the pathfinding values to work!
When I debug, the navMeshRoute array fills up with the values I require.
http://i.imgur.com/QgH8UL6.png [Screencap of the debug in question]
I'll update again once I'm done.
I'm having some major problems getting a navigation mesh to work; in particular, I can't get it to do any pathfinding or get it to respond to my Vector3 input in any useful way.
I'm working on the SunBurn engine; it sits on top of XNA 4.0, so all of XNA's functions are still usable.
Recently Recast Navigation, from the Digesting Duck blog, was ported over to C#, and I've used it to create navigation meshes for my levels. Here is a link to the tutorial/guide I've been trying to follow: http://xboxforums.create.msdn.com/forums/p/109484/647991.aspx
I'm not sure how to pass in the start and end values for the GetPath call; are these meant to be values in world space or object space or something? If I enter Vector3 values as the start and end (and then subsequently set them in a place in the array), I don't get an array of values that shows pathfinding; I just get those two verbatim values inside the navMeshRoute array.
Here is where I try to run the path; it's in my Update:
routePolys = new ushort[dtStatNavMesh.MAX_POLYS];
navMeshRoute = new Vector3[dtStatNavMesh.MAX_POLYS];
if (ks.IsKeyDown(Keys.L))
{
    if (!runOnce)
    {
        start = new Vector3(0, 0, 0);
        end = new Vector3(1000, 0, 1000);
        ppos = x;
        navMeshRoute[0].X = start.X;
        navMeshRoute[0].Y = start.Y;
        navMeshRoute[0].Z = start.Z;
        navMeshRoute[count].X = end.X;
        navMeshRoute[count].Y = end.Y;
        navMeshRoute[count].Z = end.Z;
        int navMeshStart = 0;
        navMesh.GetPath(ref start, ref end, routePolys, navMeshRoute, true);
        Console.WriteLine("navMeshPC" + navMesh.getPolyCount());
        int navMeshCount = navMesh.GetPath(ref start, ref end, routePolys, navMeshRoute, true);
        while (navMeshStart < navMeshCount)
        {
            routeNode = navMeshRoute[navMeshStart++];
        }
        Console.Out.WriteLine("NMc : " + navMeshCount);
    }
}