Programmatically enable stereoscopic 3D in the graphics card - C#

We have an NVIDIA Quadro 5000 and want to apply the following settings in the graphics card at the start of our C# program, so that the screen automatically detects the 3D application.
The following settings need to be set:
Stereoscopic Settings: Enable Stereoscopic 3D
Stereo Display Mode: Generic Active Stereo
Enable Stereo: On
Vertical Sync: On
Is this possible, maybe even with XNA?

I had the same problem a while ago and found that you can use an API called NVAPI provided by Nvidia. Please see this topic:
How do you prevent Nvidia's 3D Vision from kicking in? (HOW TO CALL NVAPI FROM .NET)
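To make the linked approach concrete, here is a minimal P/Invoke sketch for calling NVAPI from C#. It assumes 64-bit Windows with NVIDIA drivers installed (use nvapi.dll for 32-bit processes). The interface IDs are taken from nvapi.h (NvAPI_Initialize = 0x0150E828, NvAPI_Stereo_Enable = 0x239C4545); verify them against the NVAPI SDK version you target. Note this only enables stereo globally; settings such as the stereo display mode would have to go through the NVAPI driver settings (DRS) functions.

```csharp
using System;
using System.Runtime.InteropServices;

// Minimal NVAPI wrapper sketch. NVAPI exposes its functions through a single
// exported entry point, nvapi_QueryInterface, which returns a function pointer
// for a given interface ID.
static class NvApi
{
    [DllImport("nvapi64.dll", EntryPoint = "nvapi_QueryInterface",
               CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr QueryInterface(uint interfaceId);

    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    private delegate int NvApiCall();   // parameterless NvAPI_Status-returning call

    private static int Invoke(uint id)
    {
        IntPtr ptr = QueryInterface(id);
        if (ptr == IntPtr.Zero)
            throw new EntryPointNotFoundException($"NVAPI interface 0x{id:X8} not found");
        return Marshal.GetDelegateForFunctionPointer<NvApiCall>(ptr)();
    }

    public static void EnableStereo()
    {
        // Both functions return an NvAPI_Status; 0 == NVAPI_OK.
        if (Invoke(0x0150E828) != 0)   // NvAPI_Initialize
            throw new InvalidOperationException("NvAPI_Initialize failed");
        if (Invoke(0x239C4545) != 0)   // NvAPI_Stereo_Enable
            throw new InvalidOperationException("NvAPI_Stereo_Enable failed");
    }
}
```

Call NvApi.EnableStereo() once at program start, before creating your rendering device.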

Related

How to take high quality photo from integrated camera in WPF

I'm implementing a WPF app where I need to take high-quality photos from the integrated camera. So far I've been successful with capturing video and taking frames from it (described, for example, here: Wpf and C# capture webcam and network cameras).
But this is not what I want, because the video frame quality is not great. I have an MS Surface Pro 4, which has an 8 Mpx camera with full HD video support, and with the above method I can only get a full HD frame from it. But I would like the full 8 Mpx picture, like the one the native Windows Camera app can take.
In UWP I would probably have been successful with CameraCaptureUI class, but I didn't find any clues for WPF.
Does anyone have an idea how this could be implemented?
I've found out that XAML Islands work with .NET Framework 4.8, so I've been able to implement a WPF solution using the UWP components MediaCapture and CaptureElement. With that I can take photos at full resolution, which was my goal.
A simple sample project can be found here: https://github.com/ondrasvoboda/WPFCamera; consider it just a proof of concept.
If your app will run on Windows 10 or above, you can now use most of the APIs from Windows 10 in a WPF application.
https://blogs.windows.com/windowsdeveloper/2019/04/30/calling-windows-10-apis-from-a-desktop-application-just-got-easier/
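As a rough illustration of the MediaCapture route, here is a sketch of taking a still photo from a WPF app, assuming the Windows 10 API contracts are referenced as described in the linked blog post (e.g. via the Microsoft.Windows.SDK.Contracts NuGet package):

```csharp
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;
using Windows.Storage;

// Sketch: capture a photo via the UWP MediaCapture API. Unlike grabbing a
// video frame, CapturePhotoToStorageFileAsync uses the camera's photo stream,
// so it can deliver the sensor's full still-image resolution.
public static class PhotoCapture
{
    public static async Task<StorageFile> TakePhotoAsync()
    {
        var capture = new MediaCapture();
        await capture.InitializeAsync();   // default camera

        var file = await KnownFolders.PicturesLibrary.CreateFileAsync(
            "photo.jpg", CreationCollisionOption.GenerateUniqueName);

        await capture.CapturePhotoToStorageFileAsync(
            ImageEncodingProperties.CreateJpeg(), file);
        return file;
    }
}
```

If the default photo stream does not come up at full resolution, you can enumerate the available photo formats with VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.Photo) and select the largest one via SetMediaStreamPropertiesAsync before capturing.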

DirectShow IVideoWindow can't be wider than 4096px

I have a C# application which uses DirectShow to play video clips. We recently tried to play a video that is 9600x1080 px and it would not show. DirectShow emits events indicating that everything is fine.
We use the K-Lite codec pack (1295) and utilise ffdshow with libx264 as the codec and video renderer. Media Player Classic, using the same renderer, can play the clip just fine. The latest version of our application uses DirectShow.NET, while the older versions call the DirectShow interfaces directly. Both old and new versions of our application have the same issue.
After some experimentation we have found out the following:
If the video window width is 4096 px or narrower it will render video; if it is 4097 px or wider it will not render any video. We tried playing an HD clip and a 720p clip with the same results: they play when the video window is 4096x1080 but not when the window is 4097x1080 or wider.
When changing resolution or graphics settings there are some flashes (a few frames) of the video as the settings are applied, which suggests that the video is in fact playing but is displayed only as black.
Tested on Windows 10, 64 bit.
Any ideas what we can do to fix this?
The essential part is the video renderer you are using. Even though you did not mention it, it is likely that you just use the defaults, which is the VMR-7 in windowed mode. This gets you an aged legacy component with the limitations you are hitting.
You should update your application to use the EVR (Enhanced Video Renderer).
Choosing the Right Video Renderer
[…]
In Windows Vista and later, applications should use the EVR if the hardware supports it.
[…] methods use the VMR-7 by default. […] The EVR and VMR-9 are never the default renderers.
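Since the graph builder never picks the EVR on its own, you have to add it to the graph explicitly. A minimal DirectShow.NET sketch (CLSID_EnhancedVideoRenderer is {FA10746C-9B63-4B6C-BC49-FC300EA5F256}; intelligent connect prefers filters already present in the graph):

```csharp
using System;
using DirectShowLib;

// Sketch: build a playback graph with the EVR added up front, so RenderFile
// connects the video stream to it instead of the default VMR-7.
public static class EvrGraph
{
    public static void Build(string path)
    {
        var graph = (IGraphBuilder)new FilterGraph();

        // Instantiate the Enhanced Video Renderer by its CLSID.
        var evr = (IBaseFilter)Activator.CreateInstance(
            Type.GetTypeFromCLSID(new Guid("FA10746C-9B63-4B6C-BC49-FC300EA5F256")));
        DsError.ThrowExceptionForHR(graph.AddFilter(evr, "EVR"));

        // With the EVR already in the graph, intelligent connect will use it
        // when rendering the file's video stream.
        DsError.ThrowExceptionForHR(graph.RenderFile(path, null));
    }
}
```

With the EVR you then position the video through IMFVideoDisplayControl (obtained from the filter's service interfaces) rather than IVideoWindow.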

Change the resolution of stream interactively in Klak Syphon on Unity

I'm very new to Unity and C#. I'm creating an application which streams the camera output from the Unity desktop to a web browser with WebRTC. I'm using this pipeline:
Unity camera → Syphon → CamTwist → broadcast client → smart phone.
First, I've sent a window size from the smartphone to Unity (done). Then I have to stream video from Unity to Syphon at a resolution that matches the smartphone's.
My question is: how can I adjust the resolution of the Syphon stream from Unity? Is there another good approach?
I know I would otherwise have to find a way to change the resolution in CamTwist, so I'm open to a different approach.
I found a solution in the Klak Syphon test project.
Setting the camera size in Klak Syphon is a little trickier than in other plugins such as Funnel or Syphon for Unity.
If you want to set the camera size for Klak Syphon:
Create a Render Texture and set its size
Add a Camera object
Set the Render Texture in the camera's Target Texture field
Add an empty GameObject, attach the Syphon Server (Script) component, and set the Render Texture in its Source Texture field
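The steps above can also be done from script, which makes it possible to follow the size the phone requests at runtime. A sketch (the Syphon Server's source-texture member name is assumed from its inspector label, so check the actual component):

```csharp
using UnityEngine;

// Sketch: recreate the camera's Render Texture at a requested resolution,
// so the texture handed to the Syphon Server matches the phone's window size.
public class SyphonResolution : MonoBehaviour
{
    public Camera sourceCamera;

    public void SetResolution(int width, int height)
    {
        // Release the old texture before swapping in a new one.
        if (sourceCamera.targetTexture != null)
            sourceCamera.targetTexture.Release();

        var rt = new RenderTexture(width, height, 24);
        sourceCamera.targetTexture = rt;   // camera now renders into rt

        // Assign rt to the Syphon Server's Source Texture here as well
        // (via its public member, or keep the assignment in the inspector
        // if you resize an existing texture instead of creating a new one).
    }
}
```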

How to disable auto white balance in Kinect 2.0 (Windows)?

I'm developing a chroma key effect for Kinect 2.0. With a static image (background only) it works fine, but the Kinect adjusts its white balance automatically when people appear in the frame and the colors change, so the algorithm no longer works. How can I disable auto white balance in Kinect 2.0 for Windows?
You can't.
The SDK doesn't give you any control over camera settings. You can read the camera settings using the ColorCameraSettings class, but you can't change them.
There was a thread in the official support forum about "Auto Exposure Compensation", which basically does some post-processing on the color image. Maybe you can do something like that.
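One simple form such post-processing can take is a gray-world white balance: rescale each channel so their means match, which largely cancels the camera's shifting white balance between frames. A sketch operating on a BGRA buffer as delivered by the Kinect v2 SDK's ColorFrame.CopyConvertedFrameDataToArray:

```csharp
using System;

// Sketch: gray-world white balance on a BGRA byte buffer. Each channel is
// scaled so its mean equals the overall gray mean, stabilizing colors across
// frames regardless of the camera's automatic white balance.
public static class GrayWorld
{
    public static void Apply(byte[] pixels)
    {
        double sumB = 0, sumG = 0, sumR = 0;
        int count = pixels.Length / 4;
        for (int i = 0; i < pixels.Length; i += 4)
        {
            sumB += pixels[i];      // B
            sumG += pixels[i + 1];  // G
            sumR += pixels[i + 2];  // R (alpha at i + 3 is untouched)
        }
        if (count == 0 || sumB == 0 || sumG == 0 || sumR == 0) return;

        double gray = (sumB + sumG + sumR) / (3.0 * count);
        double kB = gray / (sumB / count);
        double kG = gray / (sumG / count);
        double kR = gray / (sumR / count);

        for (int i = 0; i < pixels.Length; i += 4)
        {
            pixels[i]     = (byte)Math.Min(255.0, pixels[i] * kB);
            pixels[i + 1] = (byte)Math.Min(255.0, pixels[i + 1] * kG);
            pixels[i + 2] = (byte)Math.Min(255.0, pixels[i + 2] * kR);
        }
    }
}
```

Run it on each color frame before the chroma key step; for a chroma key you may want to compute the channel gains only over background regions.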

Question about Kinect Fusion for AR

I want to create an AR application using Kinect Fusion.
I want to insert a 3d model into my kinect fusion reconstruction. The expected output is like these two videos:
https://www.youtube.com/watch?v=gChlRebNloA
https://www.youtube.com/watch?v=MvP4cHfUD5g
How can I overlay a 3D object onto the 3D reconstruction?
Is there a function in the SDK that can be used to achieve my goal?
Thanks
ps: I use C#
You should probably start by exploring the KinectFusionExplorer-WPF sample available on CodePlex:
http://kinectforwindows.codeplex.com/SourceControl/latest#v1.x/ToolkitSamples1.8.0
KinectFusionExplorer-WPF has some nice utility methods for matrix transformations, (virtual) camera configuration, etc.
The process for doing the AR augmentation is:
viewport: set your viewport using the same resolution as the one for the camera image,
video background: retrieve the video image from the Kinect and display it in the background (either as done in the SDK examples or, more traditionally, using a textured quad with the video image as texture),
geometric registration: use WPF3D and set a Camera object using: the intrinsic parameters of the Kinect camera (or the same ones used by KF), the camera pose you get via GetCurrentWorldToVolumeTransform() from your volume,
rendering: set any local transformation you want for your model (XAML) and render it (the way it's generally done with WPF3D).
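For the geometric registration step, the pose you get from the Fusion volume comes back as a Kinect Fusion Matrix4, which has to be converted into a WPF3D Matrix3D before it can drive a MatrixCamera. A sketch of that conversion (Matrix4 stores row-major fields M11..M44; depending on your handedness convention you may additionally need to transpose or invert, so verify against the KinectFusionExplorer-WPF sample):

```csharp
using System.Windows.Media.Media3D;
using Microsoft.Kinect.Fusion;   // Matrix4

// Sketch: copy a Kinect Fusion Matrix4 field-by-field into a WPF3D Matrix3D,
// e.g. to feed a MatrixCamera's ViewMatrix from the current camera pose.
public static class FusionMath
{
    public static Matrix3D ToMatrix3D(Matrix4 m)
    {
        return new Matrix3D(
            m.M11, m.M12, m.M13, m.M14,
            m.M21, m.M22, m.M23, m.M24,
            m.M31, m.M32, m.M33, m.M34,
            m.M41, m.M42, m.M43, m.M44);
    }
}
```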
