I'm developing a chroma key effect for the Kinect 2.0. With a static image (background only) it works fine, but the Kinect performs automatic white balancing when people appear in the frame and the colors change, which breaks the algorithm. How can I disable auto white balance on the Kinect 2.0 for Windows?
You can't.
The SDK doesn't give you any control over camera settings. You can read the camera settings using the ColorCameraSettings class, but you can't change them.
There was a thread in the official support forum about "Auto Exposure Compensation", which basically does some post-processing on the color image. Maybe you can do something like that.
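One post-processing approach along those lines is gray-world normalization: assume the scene averages out to neutral gray and rescale each channel accordingly, which counteracts the camera's balance shifts between frames. A minimal sketch in Python (the helper and the sample frame are illustrative, not Kinect SDK code):

```python
# Illustrative gray-world white-balance compensation (hypothetical helper,
# not part of the Kinect SDK): rescale each channel so its mean matches
# the overall mean, counteracting the camera's automatic balance shifts.

def gray_world_balance(pixels):
    """pixels: list of (r, g, b) tuples with values in 0..255."""
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    mean_gray = (mean_r + mean_g + mean_b) / 3.0

    def scale(value, channel_mean):
        # Guard against division by zero and clamp to the valid range.
        if channel_mean == 0:
            return value
        return min(255, int(round(value * mean_gray / channel_mean)))

    return [(scale(r, mean_r), scale(g, mean_g), scale(b, mean_b))
            for r, g, b in pixels]

# A frame with a strong red cast is pulled back toward neutral gray:
frame = [(200, 100, 100), (180, 90, 90), (220, 110, 110)]
balanced = gray_world_balance(frame)
# → [(133, 133, 133), (120, 120, 120), (147, 147, 147)]
```

Applied per frame (or with the gains smoothed over time), this keeps the background colors stable enough for keying even while the camera rebalances.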
I have implemented background removal (a.k.a. the green screen effect) using Kinect for Windows V2 on the Windows RT C# platform.
It works fine, but I'm running into noise: when I map color coordinates to depth coordinates, the result is very noisy.
One option is OpenCV, but using OpenCV would mean converting my application to a native (C++) application.
Another option is Emgu CV, a C# wrapper for OpenCV, but it is not supported on Windows RT.
Is there any other way to smooth the object acquired from the Kinect?
With OpenCV you will still get noise: it's caused by the Kinect's precision, not by the API. Try Microsoft's background removal API; they have implemented a smoothing function that greatly improves the results.
On MSDN: https://msdn.microsoft.com/en-us/library/dn435663.aspx.
There is a sample on the Kinect SDK as well: https://msdn.microsoft.com/en-us/library/dn435686.aspx.
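If the built-in smoothing isn't available on your platform, even a simple neighborhood majority filter on the player mask removes much of the speckle noise from the color-to-depth mapping. A minimal sketch in Python (plain 0/1 lists stand in for the mapped mask; this is an illustration, not the SDK's own filter):

```python
# Sketch of smoothing a noisy player mask: each pixel takes the majority
# value of its 3x3 neighborhood, which fills small holes inside the
# silhouette and removes isolated speckles in the background.

def smooth_mask(mask):
    """mask: 2D list of 0/1 values; returns a majority-filtered copy."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(rows):
        for x in range(cols):
            ones = total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        total += 1
                        ones += mask[ny][nx]
            # Ties count as foreground so silhouette edges aren't eroded.
            out[y][x] = 1 if ones * 2 >= total else 0
    return out

noisy = [
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],   # hole inside the player silhouette
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],   # isolated speckle in the background
]
clean = smooth_mask(noisy)
# clean[1][1] is now 1 (hole filled), clean[4][4] is now 0 (speckle removed)
```

Running this once per frame over the mapped mask (or a couple of passes for heavier noise) is cheap enough for real-time use.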
We have an NVIDIA Quadro 5000 and want to apply the following graphics card settings at the start of our C# program, so that the screen automatically detects the 3D application.
The following settings need to be set:
Stereoscopic Settings: Enable Stereoscopic 3D
stereo display mode: Generic Active Stereo
Enable Stereo: On
Vertical Sync: On
Is this possible, maybe even with XNA?
I had the same problem a while ago and found that you can use an API called NVAPI provided by Nvidia. Please see this topic:
How do you prevent Nvidia's 3D Vision from kicking in? (HOW TO CALL NVAPI FROM .NET)
I am trying to design a new application that provides biometric authentication services. The app will present the user with an interface where they can get their eye scanned for authentication. The most important feature I want to incorporate is that the user need not have a webcam: the app must be able to read the eye from the display device itself, i.e. the CRT or LCD screen.
I want information about the best framework available for this. Once it is successfully tested, I am planning to offer it as a web service. Anyone who helps me will get a royalty from my income.
I think you want Microsoft's new multi-eye monitors. This is a special version of Multi-Touch intended for eye validation, much like how Microsoft Surface is intended for finger interaction on a surface. For example, you can just lay an eye on the table, and the table can sense the eye is there and validate it, using Bluetooth or whatever. I saw a demo where a guy just shook his eye near the table and it validated him. It was so cool. SDKs will be available for Retina, Iris, etc.
I know for a fact that there has not been a lot of work done in this area, but the potential is big. I wish you luck.
The best way to do this is to use (old) monitors with electron tubes (LCD screens are not suited to your purpose). Apply a rectifier to the electric current input, swap the polarity of the cable set to the electron tube, and focus the electron ray on a radio button on your user interface where the user is required to stare. This ensures the ray hits the user's eye directly and is reflected back to a small canvas you need on your UI (users should look a bit cross-eyed for this purpose). The electron pressure paints the retina layout directly onto the canvas, and you can read it out as a simple bitmap. No special SDK required.
You might try Apple's new iEye. This fantastic, magical add-on to the iPad rests on the eye, and is operated via a single easy-to-use button at the bottom of the device. Unfortunately, it only works with the iPad, and the SDK is proprietary.
I don't get you.
How do you propose the image of the eye is collected without some kind of image capture device?
A bog-standard 'display device' is an 'output device' as opposed to an 'input device' - this means there would be no signal.
Are you talking about mobile phone apps, custom-manufactured eye scanning devices, desktop PCs?
Please elaborate.
Ahh, Patrick Karcher has the correct answer. Plus one for that. I should have been more prepared for coming to Stack Overflow on April Fools' Day.
If you mean getting images from devices without using encoders and drivers, have a look at TWAIN and its FAQ. (TWAIN is officially not an acronym, though it's commonly backronymed as "Technology Without An Interesting Name".)
The most important feature I want to incorporate is that the user need not have a webcam, the app must be able to read the eye from the display device i.e. CRT or LCD screen itself.
Are you sure that's possible with current CRT and LCD technologies? I think you have to have a reading device.
more info from TWAIN.org:
The TWAIN initiative was originally launched in 1992 by leading industry vendors who recognized a need for a standard software protocol and applications programming interface (API) that regulates communication between software applications and imaging devices (the source of the data). TWAIN defines that standard. The three key elements in TWAIN are the application software, the Source Manager software and the Data Source software. The application uses the TWAIN toolkit which is shipped for free.
Good luck.
I know this is an April Fools' question, but... Actually, if you drop the condition that it must come from a CRT or LCD screen, it might be possible without an image capture device attached to the computer.
Possibly using their Facebook username and some red-eye photos of them (reflections of the flash off the back of the retina), plus a lot of luck and R&D.
Authentication then might simply come from some way of proving that you are the person in the photo.
Does WPF use the installed color profile in windows for correcting the colors that are rendered?
I'm pretty sure old Forms/GDI-based applications are not automatically color-corrected, but I wonder if WPF does (or can be made to do) this automatically?
(I know I can do this manually in my own WPF apps by creating a GPU shader to do the color correction.)
The reason I ask is that more and more monitors are now wide gamut, which means that colors that look normal on old monitors will seem much more vibrant on wide-gamut monitors. An example is my new monitor, which has much stronger red and green colors than my other monitors.
I can correct this on a per-application basis for some applications (Firefox, Photoshop, media players using a custom shader, my own WPF apps using GPU shaders, etc.), but it would be nice if there were a way to have WPF do it automatically for all WPF applications.
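For reference, the per-pixel work such a correction shader does is essentially a matrix multiply in linear RGB. A sketch in Python (the matrix below is made up for illustration; a real one would be derived from the monitor's color profile):

```python
# Sketch of the per-pixel math a color-correction shader performs: multiply
# each (linear) RGB pixel by a 3x3 matrix derived from the monitor's profile.
# This matrix is illustrative only, not a real wide-gamut-to-sRGB transform.

CORRECTION = [
    [0.80, 0.15, 0.05],   # tone down the overly strong red primary
    [0.10, 0.85, 0.05],
    [0.05, 0.10, 0.85],
]

def correct_pixel(rgb, matrix=CORRECTION):
    """rgb: (r, g, b) floats in 0..1; returns the matrix-transformed pixel."""
    return tuple(
        min(1.0, max(0.0, sum(row[i] * rgb[i] for i in range(3))))
        for row in matrix
    )

# Pure red is pulled toward a less saturated red:
print(correct_pixel((1.0, 0.0, 0.0)))   # prints (0.8, 0.1, 0.05)
```

A GPU shader does exactly this multiply per fragment, which is why the correction is essentially free once the matrix is computed from the profile.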
It is not a big problem but it is however annoying and I had hoped that Microsoft would take the opportunity with WPF to introduce color correction by default.
edit: question clarified for posterity.
Nope. You have to implement it yourself in your app, just like the old Forms/GDI apps.
I think this is a pretty big problem! The only application I have that supports color profiles is Photoshop, and I hardly use that. In every other situation my new wide-gamut display is much worse than the sRGB one it replaced.
At 92% gamut the colors were a little oversaturated, but displays keep pushing this up and are at 110%+ now. The further they push this, the worse these displays get in non-managed apps. Since there are almost no color-managed apps, most of the time these displays look very bad.
Having each app implement color management support does not seem realistic. This needs to be done at the OS or driver level. You mention that you can do this with GPU shaders, but I know of nobody doing that except for an unofficial plugin for an open-source media player.
How does one convert an image from one color profile to another (screen to printer, or scanner to screen)? In Visual C++ you would use the functions in ICM.h; is there a managed way to do this with GDI+?
I need to use GDI+, not WPF. I'd prefer a managed solution, but if one is not available, I guess P/Invoke will have to suffice.
There are a number of solutions.
For GDI+, check out this article at MSDN (HOW TO: Use GDI+ and Image Color Management to Adjust Image Colors).
For WPF (.NET 3.0), see the System.Windows.Media namespace. There are a number of different classes, such as the BitmapEncoder, that have the concept of a ColorContext, which "Represents the International Color Consortium (ICC) or Image Color Management (ICM) color profile that is associated with a bitmap image."
Both of these seem pretty complex, so there's always the option of buying somebody else's code. Atalasoft's DotImage Photo Pro has ICC profile setting capabilities built in. The code is expensive; a dev license is almost 2k. But based on their participation in the dotnet community, I'd give them a whirl.
You should take a look at LittleCMS (lcms). It's a fairly complete colour management system, but written in C. You can use P/Invoke, but I would recommend a Managed C++ wrapper. I am actually working on a managed wrapper around the engine (just the basics: colour profile conversion, Lab readings). I can post a link to the code once I am done; it may be a week or so though.
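Whichever API you pick, a matrix/TRC profile conversion boils down to undoing the source tone curve and then applying a 3x3 matrix. A rough illustration in Python using the standard sRGB transfer function and sRGB-to-XYZ matrix (a teaching sketch, not a substitute for the ICM, ColorContext, or lcms APIs, which also handle LUT-based profiles and rendering intents):

```python
# Illustration of what a matrix/TRC ICC transform does internally:
# undo the sRGB tone curve per channel, then apply the profile's
# RGB-to-XYZ matrix (the standard D65 sRGB matrix is shown here).

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def srgb_to_linear(c):
    """Invert the piecewise sRGB transfer curve for one channel in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    """Convert an sRGB triple in 0..1 to CIE XYZ."""
    lin = [srgb_to_linear(c) for c in rgb]
    return tuple(sum(row[i] * lin[i] for i in range(3)) for row in SRGB_TO_XYZ)

# sRGB white maps to the D65 white point, roughly (0.9505, 1.0000, 1.0890):
x, y, z = srgb_to_xyz((1.0, 1.0, 1.0))
```

Converting between two profiles is this transform into XYZ followed by the inverse transform of the destination profile, which is exactly the pipeline lcms builds when you create a transform between two profiles.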