Motion detection and object extraction in C#?

Can anyone help me detect moving objects in images and extract them using C#? I tried to extract them by subtracting an image from the previous image, but this didn't work for me.

OpenCV is a good library to get started with this sort of thing. Be warned that it's not very well written, but the algorithms are there.
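Frame differencing is basically the right idea; the missing pieces are usually a threshold to suppress noise and contour extraction to pull the changed regions out. A minimal sketch, assuming the Emgu CV wrapper (C# bindings for OpenCV); the file names, the threshold of 25 and the minimum blob size are placeholders to tune:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    class MotionExtractor
    {
        static void Main()
        {
            // Placeholder file names: two consecutive frames of the same scene.
            using Mat previous = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale);
            using Mat current = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale);

            // Absolute difference highlights the pixels that changed between frames.
            using Mat diff = new Mat();
            CvInvoke.AbsDiff(current, previous, diff);

            // Threshold the difference so sensor noise is ignored (25 is arbitrary).
            using Mat mask = new Mat();
            CvInvoke.Threshold(diff, mask, 25, 255, ThresholdType.Binary);

            // Each connected blob in the mask is a candidate moving object.
            using VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
            CvInvoke.FindContours(mask, contours, null, RetrType.External,
                                  ChainApproxMethod.ChainApproxSimple);

            for (int i = 0; i < contours.Size; i++)
            {
                Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
                if (box.Width * box.Height < 100) continue; // skip tiny blobs

                // Crop the moving region out of the current frame and save it.
                using Mat extracted = new Mat(current, box);
                CvInvoke.Imwrite($"object_{i}.png", extracted);
            }
        }
    }

A real detector would usually also blur the frames and dilate the mask before finding contours, otherwise a single moving object tends to break up into several small blobs.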

Motion detection is a very complex subject, and isn't related to C# that much. What exactly would you like to do? What are the "objects" in the video?
Do you want to detect any motion of anything, or motion of specific objects?
Can your application recognize the objects and differentiate them from the background?
Alex.

Related

How can I make an overlay which can see what's on the screen?

I want to make an overlay which can analyse the screen and search for specific pictures on it, e.g. in a game to see how much your inventory is worth, or to count something in a picture.
Use OpenCV (with Python; I would almost always prefer to do such tasks in Python).
But if you have to do it in C#, then you need the C# OpenCV wrapper, Emgu CV.
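If you do end up in C# with Emgu CV, the "find a specific picture on the screen" part is template matching. A rough sketch, assuming Emgu CV; the file paths and the 0.8 score cutoff are placeholders (on Windows the screenshot itself could be captured with Graphics.CopyFromScreen from System.Drawing):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class TemplateFinder
    {
        // Returns the top-left corner of the best match of the template inside the
        // screenshot, or null if the match score is below minScore.
        public static Point? Find(string screenshotPath, string templatePath,
                                  double minScore = 0.8)
        {
            using Mat screen = CvInvoke.Imread(screenshotPath, ImreadModes.Color);
            using Mat template = CvInvoke.Imread(templatePath, ImreadModes.Color);
            using Mat result = new Mat();

            // Slide the template over the screenshot and score every position.
            CvInvoke.MatchTemplate(screen, template, result,
                                   TemplateMatchingType.CcoeffNormed);

            double minVal = 0, maxVal = 0;
            Point minLoc = default, maxLoc = default;
            CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal, ref minLoc, ref maxLoc);

            // For CcoeffNormed the best match is the maximum; 1.0 is a perfect match.
            return maxVal >= minScore ? maxLoc : (Point?)null;
        }
    }

For something like reading an inventory value you would then crop the matched region and run OCR on it, which is a separate problem (Tesseract has .NET wrappers).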

Rubber Band (Implement in Unity 2D)

I'm trying to make a rubber band in Unity and I can't figure it out. I found this example, but it's done in ActionScript (Flash) https://www.deviantart.com/willmh93/art/Ball-Elastic-142211333..... and I can't even convert the code to JavaScript or C#.
I managed to do something similar, but that one uses a LineRenderer and it doesn't look very realistic.
(slingshot image)
My simulation uses a slingshot mechanism, but it doesn't do what I need. I want the rubber band to act on a draggable object.
I don't need you to write code for me, but please give me some ideas or sources I can orient myself with: what to use and how, to get the same result. Thanks a lot.
I believe you are looking for Spring Joint:
https://docs.unity3d.com/Manual/class-SpringJoint.html
It allows you to attach 2 objects with springiness. I'm sure you can fiddle with the values to get a more 'rubber-band' feel.
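To make that concrete: since the question is 2D, the 2D variant is SpringJoint2D, and wiring it up from code is only a few lines. A hedged sketch; the anchor reference, frequency and damping numbers are placeholders to tune, and the visible band can still be drawn with a LineRenderer between the two ends (the joint only does the physics):

    using UnityEngine;

    // Attach this to the draggable object; assign the anchor Rigidbody2D in the Inspector.
    // The joint pulls the object back toward the anchor like a rubber band.
    public class RubberBand : MonoBehaviour
    {
        public Rigidbody2D anchor;        // the fixed end of the band
        public float frequency = 2f;      // higher = stiffer band (tune this)
        public float dampingRatio = 0.3f; // lower = more bounce (tune this)

        void Start()
        {
            // Adding the joint also adds a Rigidbody2D to this object if it has none.
            SpringJoint2D joint = gameObject.AddComponent<SpringJoint2D>();
            joint.connectedBody = anchor;
            joint.autoConfigureDistance = false;
            joint.distance = 0.1f;        // rest length of the band
            joint.frequency = frequency;
            joint.dampingRatio = dampingRatio;
        }
    }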

Different images from different points of view

I want different images to be displayed depending on the point of view. For the whole concept explanation, please look at the images; they explain my idea/query!
As in the first image, you can see there are three people looking at the monitor from different angles. Now I want the webcam to track the eyes and show each user the particular image defined for that view. For example: if the user is at a 45-degree angle, then show image1.png.
The computer should show the image depending on the user's viewing perspective.
(The lady is the game character, for representation purposes.)
Can you please guide me on what steps can be taken to accomplish this? Is there any plugin available for Unity that tracks faces? Please guide me.
Also thanks for the compliments on my sketching skills xD
Stack Overflow is not really meant for recommending plugins, since the choice is usually opinion-based, so there is no exact answer.
That being said, one of the most commonly used APIs for computer vision (meaning interpreting images, including face recognition) is OpenCV, so that could be a good start for you to look at.
And fortunately for you, there is a Unity plugin for OpenCV.
It is too broad to give you more details about how it works here. You should try to make it work, and if you have a problem with your code, open a new question with the code portion that you struggle with.
PS: nice sketching skills
Perhaps an easier option would be to use a Kinect
(trying to detect faces or eyes from that far might be shaky?).
With Kinect you can get skeletons for multiple people, and getting the angle between the target and those Kinect avatars would be easy.
If there is no space to put the Kinect in a good position,
you could consider placing it on the ceiling above (and then use only the depth data to detect people in its view).
The only issue is that Microsoft has apparently stopped Windows Kinect support,
so you would need to find second-hand units. (The Unity Asset Store still has some Kinect plugins and examples available.)
https://www.polygon.com/2018/1/2/16842072/xbox-one-kinect-adapter-out-of-stock-production-ended
Or look for Kinect alternatives that work with Unity; try RealSense cameras:
https://www.intel.sg/content/www/xa/en/architecture-and-technology/realsense-overview.html
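Whichever tracker you pick (OpenCV face detection, a Kinect skeleton, RealSense), the Unity side boils down to turning the viewer's head position into an angle relative to the screen and picking the image defined for that angle. A rough sketch; GetHeadPosition is a hypothetical stub standing in for the tracking plugin, and the 22.5-degree boundaries are arbitrary:

    using UnityEngine;
    using UnityEngine.UI;

    public class ViewDependentImage : MonoBehaviour
    {
        public Image display;         // UI Image that shows the picture
        public Sprite[] imagesByView; // e.g. [left45, front, right45]

        void Update()
        {
            // Hypothetical: replace with the head position reported by your tracker,
            // expressed in the monitor's space (x = sideways offset, z = distance).
            Vector3 head = GetHeadPosition();

            // Signed angle in degrees between "straight at the screen" and the viewer.
            float angle = Mathf.Atan2(head.x, head.z) * Mathf.Rad2Deg;

            // Map the angle onto the defined images, e.g. -45°, 0°, +45°.
            if (angle < -22.5f)     display.sprite = imagesByView[0];
            else if (angle > 22.5f) display.sprite = imagesByView[2];
            else                    display.sprite = imagesByView[1];
        }

        Vector3 GetHeadPosition()
        {
            return Vector3.forward; // stub so the script compiles
        }
    }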

Get user input from the webcam for the game

I'm creating a simple game using Unity which uses the arrow keys to move the player. Now what I want to do is use the webcam as a movement-detecting device, track the user's movements, and move the player according to them. (For example, when the user moves his hand to the right, the webcam can track it and move the player to the right...)
So, is this possible? If so, what are the techniques/APIs I should use for this...?
Thanks!
Have a look at OpenCV; it is used a lot in the field of body and head tracking, and there's a Unity plugin which implements it that might be useful.
Video Demo
Out of the box it can't, but there is a lot of stuff out there on the internet.
This one has some interesting looking links.
Emgu CV looks interesting too.
There is a JavaScript hand-tracking tool too.
And of course there's Kinect, but you need the 3D sensor.
You could also use Leap Motion.
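To give an idea of what the simplest plugin-free version looks like: Unity's WebCamTexture gives you the raw frames, and differencing them tells you on which side of the frame the motion happened. A rough sketch (the thresholds are arbitrary, and anything serious should use OpenCV/Emgu, Kinect or Leap Motion rather than raw pixel differencing):

    using UnityEngine;

    // Moves the player left/right toward whichever side of the webcam image
    // shows the most motion between consecutive frames.
    public class WebcamMotionInput : MonoBehaviour
    {
        public float speed = 5f;
        public int pixelThreshold = 30; // how much a pixel must change to count (tune)

        WebCamTexture cam;
        Color32[] previous;

        void Start()
        {
            cam = new WebCamTexture(160, 120); // low resolution keeps the loop cheap
            cam.Play();
        }

        void Update()
        {
            if (!cam.didUpdateThisFrame) return;

            Color32[] current = cam.GetPixels32();
            if (previous != null && previous.Length == current.Length)
            {
                long sumX = 0; int count = 0;
                for (int i = 0; i < current.Length; i++)
                {
                    int diff = Mathf.Abs(current[i].r - previous[i].r)
                             + Mathf.Abs(current[i].g - previous[i].g)
                             + Mathf.Abs(current[i].b - previous[i].b);
                    if (diff > pixelThreshold)
                    {
                        sumX += i % cam.width; // x coordinate of the changed pixel
                        count++;
                    }
                }

                if (count > 50) // ignore frames with almost no motion
                {
                    float normalizedX = (float)sumX / count / cam.width; // 0..1
                    float direction = normalizedX < 0.5f ? -1f : 1f;     // may need mirroring
                    transform.Translate(Vector3.right * direction * speed * Time.deltaTime);
                }
            }
            previous = current;
        }
    }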

XNA 4.0 Glowing objects

I am trying to achieve a simple task (or so I thought) in XNA 4.0. I have objects that need to glow (rendering and Gaussian blurring them and then adding them to the main scene). These objects can be at different depths so I will need to make sure that they are obscured by objects in front of them in the main scene. It's 3D.
Because depth buffers cannot be re-used in XNA 4.0, I am having a hard time figuring out how I can achieve this.
I can find no examples, tutorials, or explanations of this process. XNA's bloom post-process sample also does not do exactly what I need, as it post-processes the entire scene.
I know that I can preserve RenderTarget info by using PreserveContents, but it sounded like it was slow. Is there a way to achieve this without using PreserveContents?
Any help would be appreciated.
Thank you,
Riaan.
I'd try to render the glowing objects using a second render target to store their depth... this is a multiple-render-target technique...
I suppose that you will need another one for the non-glowing objects (or use the same target, storing both depths with additive blending) to compare them at post-process time...
Later in the post-process it would be possible to know whether the glowing object is visible or hidden.
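For reference, a rough sketch of the C#-side setup that this answer describes, in XNA 4.0; the effect that actually writes depth to the second target and the post-process shader that compares it against the scene depth are not shown, and DrawGlowingObjects is a placeholder for your own draw code:

    using Microsoft.Xna.Framework.Graphics;

    // Fields inside your Game class (GraphicsDevice comes from Game).
    RenderTarget2D glowColorTarget;
    RenderTarget2D glowDepthTarget;

    void CreateGlowTargets(int width, int height)
    {
        // Two targets bound at once (multiple render targets): the first receives
        // the glow color, the second the glow objects' depth as a single float.
        glowColorTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
            SurfaceFormat.Color, DepthFormat.Depth24);
        glowDepthTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
            SurfaceFormat.Single, DepthFormat.None);
    }

    void DrawGlowPass()
    {
        // The glow effect's pixel shader writes color to COLOR0 and depth to COLOR1.
        GraphicsDevice.SetRenderTargets(glowColorTarget, glowDepthTarget);
        DrawGlowingObjects(); // placeholder for your own draw code

        // Back to the back buffer. Blur glowColorTarget, then in the post-process
        // shader compare glowDepthTarget with the main scene's depth to decide
        // whether each glowing pixel is occluded before adding it in.
        GraphicsDevice.SetRenderTarget(null);
    }

This sidesteps RenderTargetUsage.PreserveContents entirely, since the depth needed for the comparison is carried in its own texture rather than in a reused depth buffer.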
