I have a C# WPF MVVM application that needs to record and take pictures. The camera is an industrial camera STC-MCA5MUSB3.
The only experience I have with image manipulation is with OpenCV in C++, so I am thinking about using Emgu CV, a .NET wrapper for OpenCV, but I feel like it might be overkill.
My program has to show a live preview at 400x300 pixels, take 5 MP pictures, and record Full HD (1920x1080) video.
The algorithm I have in mind is:
1. Receive a 5 MP frame from the camera.
2. If the user pressed the "Take Picture" button, save this frame to the pictures folder.
3. Scale the frame down to Full HD and pass it to a library that appends it to a movie, pipeline-style, recording to the computer's HDD in real time.
4. Scale the frame down to 400x300 px and show it in the preview area on the screen.
I am afraid this might be too much for the computer to process, especially in C#, so I am open to suggestions on both the algorithm and libraries.
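For what it's worth, Emgu CV can cover all three branches of that loop in a few calls. A minimal sketch of the steps above — the file names, camera index, and 30 fps are my assumptions, not values from the camera SDK:

```csharp
// Minimal Emgu CV sketch of the capture loop described above.
// Assumes the STC-MCA5MUSB3 is visible as a regular capture device;
// otherwise, frames from the vendor SDK can be wrapped in a Mat the same way.
using System.Drawing;
using Emgu.CV;

class CapturePipeline
{
    public static void Run(bool takePicture)
    {
        using var capture = new VideoCapture(0);                  // 5 MP source
        using var writer = new VideoWriter("movie.avi",
            VideoWriter.Fourcc('M', 'J', 'P', 'G'), 30,           // assumed 30 fps
            new Size(1920, 1080), true);

        using var frame = new Mat();
        using var fullHd = new Mat();
        using var preview = new Mat();

        while (capture.Read(frame) && !frame.IsEmpty)
        {
            if (takePicture)
                CvInvoke.Imwrite("Pictures/shot.png", frame);     // full 5 MP still

            CvInvoke.Resize(frame, fullHd, new Size(1920, 1080)); // recording branch
            writer.Write(fullHd);

            CvInvoke.Resize(frame, preview, new Size(400, 300));  // preview branch
            // hand `preview` to the WPF Image control (e.g. via a WriteableBitmap)
        }
    }
}
```

Since the encoder runs at 1080p rather than 5 MP, the per-frame cost is dominated by the two resizes, which OpenCV handles in native code; the C# layer is only glue, so it should not be the bottleneck.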
Related
I have a camera and I want to use C# to display, in real time, what the camera captures. In my C# program I use a C-based .exe to retrieve images from the camera into a fixed folder on my computer, then try to display and update the images (like live video). My problems:
(1) How can I update the photo viewer smoothly in C# (I want to avoid visible refreshing of the interface)? Or should I assemble the photos into a video first?
(2) Since the pictures in the folder are updated in real time, how can I guarantee the playback order and avoid read/write conflicts? I plan to keep only 10 images, with the 11th image retrieved replacing the 1st, and so on, but I am not very clear on how to do that.
Thanks for reading; sorry if these are very silly questions.
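On (2), a plain modulo over a fixed set of file names gives exactly the 10-slot ring described. A sketch (the names and helper are hypothetical), using the usual trick of writing to a temp file and renaming so a reader never sees a half-written image:

```csharp
using System.IO;

static class FrameRing
{
    public const int Slots = 10;

    // The Nth retrieved frame always lands in slot N % 10,
    // so the 11th frame (index 10) overwrites the 1st (slot 0).
    public static string NameFor(int frameIndex) =>
        $"frame_{frameIndex % Slots}.png";

    // Write to a temp file first, then atomically replace the slot,
    // so a concurrent reader never opens a half-written image.
    public static void Save(string folder, int frameIndex, byte[] pngBytes)
    {
        string target = Path.Combine(folder, NameFor(frameIndex));
        string temp = target + ".tmp";
        File.WriteAllBytes(temp, pngBytes);
        File.Move(temp, target, overwrite: true);  // overwrite flag: .NET Core 3.0+
    }
}
```

On (1), the flicker usually comes from repainting: enabling double buffering on the form and simply swapping the `PictureBox.Image` reference each time a new file arrives is normally smooth enough that no video assembly is needed.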
I'm currently developing with Unity 3D, but my background is Apple development. In iOS/OSX development I can use AVFoundation to load an mp4 and get frame-by-frame data from it, in BGRA format, as it plays.
I was wondering what the equivalent of this is for .NET (Unity uses .NET 2.0)?
I know it's possible to call functions in an Objective-C++ file from Unity, but I need a way to do this on any platform, not just iOS.
AFAIK there is no built-in way to access the video frame data on mobile in Unity. Playing videos in Unity on mobile devices is just sad: what they offer out of the box basically only works for full-screen videos, like cut scenes.
If you want to do more complex things, like pipe a video into a texture, you have two options:
Mobile Movie Texture
Easy Movie Texture
Note: there are more available on the asset store, but these are the two we use.
The best option:
Easy Movie Texture
We use Easy Movie Texture for our VR apps (Gear VR & Cardboard); it decodes videos up to 4K on a Galaxy S6.
For stereo videos we use a component that ships with the plugin, VideoCopyTexture, which copies the texture data from the video into a material (to avoid duplicate decoding and rendering). It calls MediaPlayerCtrl.GetVideoTexture(), which returns a Unity Texture2D, so from there you should have access to all the data you need!
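For example, pulling raw pixels out of that Texture2D might look like this — a sketch, with the caveat that on some platforms the video texture is not CPU-readable, in which case you first need to blit it into a RenderTexture and read that back instead:

```csharp
// Sketch: read raw pixel data from the Texture2D returned by
// Easy Movie Texture's MediaPlayerCtrl.GetVideoTexture().
// Assumes the texture is CPU-readable on your target platform.
using UnityEngine;

public class VideoFrameReader : MonoBehaviour
{
    public MediaPlayerCtrl player;   // Easy Movie Texture component

    void Update()
    {
        Texture2D tex = player.GetVideoTexture();
        if (tex == null) return;

        Color32[] pixels = tex.GetPixels32();   // RGBA frame data
        // ... process `pixels` (width = tex.width, height = tex.height) ...
    }
}
```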
The not-so-great option — but we also use it, so I'll mention it anyway:
Mobile Movie Texture
Mobile Movie Texture only works with Ogg-encoded videos. I don't think it gives you access to the raw frame data; I'd suggest contacting the developer directly or checking the docs.
But that can be "hacked": use a render texture and a separate camera that looks at a plane/quad with the video playing on it, then grab that texture's bitmap data. It's not ideal, but it should do the trick.
Long story short: I use a similar render-to-texture mechanism to pipe video data to Scaleform in Unity, and didn't see any performance loss. Mobile Movie Texture is not the fastest to begin with, and in our case it even decodes in software on some devices, but it works everywhere, so we only use it for small videos.
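The hack looks roughly like this — a sketch, where the render-texture resolution and the dedicated camera setup are my assumptions:

```csharp
// Sketch of the render-texture grab: a dedicated camera films the quad
// that the video is playing on, and we read the result back to the CPU.
using UnityEngine;

public class RenderTextureGrabber : MonoBehaviour
{
    public Camera videoCamera;       // camera that only sees the video quad
    RenderTexture rt;
    Texture2D frame;

    void Start()
    {
        rt = new RenderTexture(1024, 512, 0);    // assumed video size
        videoCamera.targetTexture = rt;
        frame = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
    }

    void Update()
    {
        videoCamera.Render();                    // render the quad into rt
        RenderTexture.active = rt;
        frame.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        frame.Apply();
        RenderTexture.active = null;

        Color32[] pixels = frame.GetPixels32();  // the frame's bitmap data
        // ... use `pixels` ...
    }
}
```

`ReadPixels` stalls the GPU, so doing this every frame has a cost; for occasional grabs it is fine.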
Cheers,
J.
I am currently doing a project where I use 3 webcams and take a photo on a button click. The image is saved to a specific folder on my C:\ drive; that is working fine now. I would now like to display the image in another, new Windows Form, where I can pick out the eye coordinates of the person in the saved image. The purpose of picking out the eye coordinates is so the image can go through the normalization process for facial recognition.
Now, my main issue is how I can pick out the eye coordinates from a picture box in C#, before the coordinates are handed to Python scripts and the normalization process runs. If anyone has done anything similar, please give me some advice.
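Picking points from a PictureBox is usually done with a MouseClick handler; the only subtlety is mapping click coordinates back to image coordinates when SizeMode scales the picture. A sketch (the helper is hypothetical; the math assumes the image is uniformly scaled and centered, which is what PictureBoxSizeMode.Zoom does):

```csharp
using System.Drawing;

static class EyePicker
{
    // Map a click inside a PictureBox (SizeMode = Zoom) back to
    // pixel coordinates in the original image.
    public static Point MapToImage(Size box, Size image, Point click)
    {
        float scale = System.Math.Min((float)box.Width / image.Width,
                                      (float)box.Height / image.Height);
        float offsetX = (box.Width - image.Width * scale) / 2f;   // letterbox bars
        float offsetY = (box.Height - image.Height * scale) / 2f;
        int x = (int)((click.X - offsetX) / scale);
        int y = (int)((click.Y - offsetY) / scale);
        return new Point(x, y);
    }
}
```

Wire this to the PictureBox's MouseClick event, collect the two eye points, then write them to a file (or pass them as arguments) for the Python script.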
I'd like to apply various modifications to every frame in a video — for instance, blacking out the leftmost 100 pixel columns of every frame for the length of the video.
It's actually just for a bit of fun, i.e. spoofing some newsreaders' background pictures with company staff "stories".
I've looked into FFmpeg briefly, but I'm wondering if there is anything more straightforward, given the nature of the project (I don't want to burn too much time on it).
I literally just want to overwrite patches of the screen for x number of frames.
Take a look at DirectShow.NET: http://directshownet.sourceforge.net/. There should be a sample where they draw a spider on the video.
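Whatever framework ends up delivering the frames, the masking itself is trivial once you have raw pixel data. A sketch on a BGRA byte buffer (the helper name and the top-down, row-major layout are my assumptions):

```csharp
static class FrameMask
{
    // Black out the leftmost `cols` pixel columns of a top-down,
    // row-major BGRA frame buffer (4 bytes per pixel).
    public static void BlackOutLeftColumns(byte[] bgra, int width, int height, int cols)
    {
        for (int y = 0; y < height; y++)
        {
            int rowStart = y * width * 4;
            for (int x = 0; x < cols; x++)
            {
                int p = rowStart + x * 4;
                bgra[p]     = 0;    // B
                bgra[p + 1] = 0;    // G
                bgra[p + 2] = 0;    // R
                bgra[p + 3] = 255;  // A: opaque black
            }
        }
    }
}
```

Run this on each decoded frame before handing it back to the encoder or renderer.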
I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and create an image out of it, basically appending each pixel row onto the image throughout the video. The video file I plan to supply isn't a regular video; it's a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far down as the game will let me, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30 fps) and compile the rows into an image that will effectively (hopefully) reconstruct the landscape as a single image.
I thought about doing this the quick and dirty way: using an AxWindowsMediaPlayer control, locking the form so it couldn't be moved or resized, and capturing the screen with a Graphics object. But that wouldn't be fast enough and would cause way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I just installed the Windows SDK but haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing when I tried it a while back). Hopefully someone can give me a pointer in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
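For the compositing step itself, once any library hands you raw frames, the row-appending is straightforward. A sketch on raw BGRA buffers (the helper name and layout are my assumptions):

```csharp
using System.Collections.Generic;

static class SlitScan
{
    // Append the middle row of each BGRA frame to an output buffer,
    // building the landscape strip one video frame at a time.
    // frames: raw frames, all width x height, 4 bytes/pixel, row-major.
    public static byte[] Compose(List<byte[]> frames, int width, int height)
    {
        int rowBytes = width * 4;
        var output = new byte[rowBytes * frames.Count]; // one row per frame
        int middleRow = height / 2;
        for (int i = 0; i < frames.Count; i++)
        {
            System.Array.Copy(frames[i], middleRow * rowBytes,
                              output, i * rowBytes, rowBytes);
        }
        return output;  // an image of size width x frames.Count
    }
}
```

At 30 fps the strip grows 30 rows per second of footage, so the camera pan speed determines the vertical scale of the reconstructed landscape.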
You could use a video renderer in renderless mode (e.g. VMR9, EVR), which allows you to process every frame yourself. With frame-stepping playback you can step one frame at a time and process each frame.
DirectShow.NET can help you use managed code where possible, and I can recommend it. It is, however, only a wrapper over DirectShow, so it might be worthwhile to look at more advanced libraries as well.
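A rough sketch of that stepping loop with DirectShow.NET — hedged: exact wrapper signatures vary between DirectShowLib versions, and a real application should wait for the EC_STEP_COMPLETE event between steps rather than looping blindly:

```csharp
// Rough sketch: frame-stepping playback with DirectShow.NET (DirectShowLib).
// Error handling omitted; a production version waits for EC_STEP_COMPLETE
// via IMediaEventEx before issuing the next step.
using DirectShowLib;

class FrameStepper
{
    public static void Run(string videoPath)
    {
        var graph = (IGraphBuilder)new FilterGraph();
        graph.RenderFile(videoPath, null);       // build a default playback graph

        var control = (IMediaControl)graph;
        control.Pause();                         // paused graph, ready to step

        // The filter graph manager exposes IVideoFrameStep.
        var stepper = (IVideoFrameStep)graph;
        while (stepper.Step(1, null) == 0)       // 0 == S_OK
        {
            // The renderer now holds the next frame; grab/process it here
            // (e.g. via a SampleGrabber filter inserted into the graph).
        }
    }
}
```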
A few side notes: wouldn't you run into lighting issues, since the lighting differs from angle to angle? Perhaps it's easier to capture some screenshots and use existing stitching algorithms?