I need some help deciding what to use to acquire an image from a webcam. I want to acquire a single image. I know you can typically acquire a still image at a higher resolution than a single video frame.
Currently I am using MATLAB's Image Acquisition Toolbox, which apparently only supports obtaining frames in video mode (so at the lower resolution). Which other libraries do you recommend? Has anyone else encountered this problem?
Are you referring to the fact that the largest resolution reported by the Image Acquisition Toolbox is (for example) 1024x768 but the webcam claims that it can acquire 6 megapixel still images? If so, every webcam that I have ever seen has a note in very small print somewhere that explains that the higher resolution is achieved via software interpolation.
You can just acquire the image in the largest format the toolbox supports and then use IMRESIZE to scale it to whatever resolution you want; given the interpolation note above, the result is essentially what the camera's own software upscaling would produce.
We've used WIA at work before. I can't share our code, but we basically bring up the WIA capture screen (which the user has to interact with before the image is captured). For an automated solution, have a look at this: http://www.codeproject.com/KB/cs/WebCamService.aspx
Hi, everyone. I'm currently experiencing some problems with an image capture application I need to complete in C#.
I have a Windows Forms C# program that uses the DirectShow .NET library to obtain a constant flow of still images from a webcam at 20+ frames per second. Each image is loaded into a System.Drawing.Bitmap object and processed with some filters and logic. The result is either presented to the user in a picture box, as the modified capture from the webcam at the capture frame rate, or saved to disk in JPEG format.
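For reference, the grab path looks roughly like this (a simplified sketch assuming DirectShowLib's ISampleGrabber callback delivering RGB24 frames; FrameSink and ProcessFrame are just my names, standing in for the real filters and logic):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using DirectShowLib;

    class FrameSink : ISampleGrabberCB
    {
        private readonly int width, height; // taken from the negotiated media type

        public FrameSink(int width, int height)
        {
            this.width = width;
            this.height = height;
        }

        // Not used when the grabber is set to buffer mode.
        public int SampleCB(double sampleTime, IMediaSample sample) { return 0; }

        public int BufferCB(double sampleTime, IntPtr buffer, int bufferLen)
        {
            int stride = ((width * 3) + 3) & ~3; // RGB24, rows 4-byte aligned
            // Wrap the unmanaged buffer; it is only valid during this call,
            // so the bitmap must be cloned before handing it to another thread.
            using (var frame = new Bitmap(width, height, stride,
                                          PixelFormat.Format24bppRgb, buffer))
            {
                frame.RotateFlip(RotateFlipType.RotateNoneFlipY); // frames arrive bottom-up
                ProcessFrame(frame); // stand-in for the filters / picture box / JPEG logic
            }
            return 0;
        }

        private void ProcessFrame(Bitmap frame) { /* filters and logic */ }
    }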
It all works well except for the capture itself. After capturing still images continuously for a random period of time (anywhere from minutes to hours), I start getting exceptions from DirectShow .NET saying that the still capture timed out, or the capture simply crashes and the whole program stops working.
I have tried disposing of the DirectShow .NET resources and re-acquiring them, but in some instances the program hangs waiting either for the disposal to complete or for the re-acquisition to happen.
I don't know whether using still capture is the right approach, or whether there is another way to access each frame from the camera, with DirectShow .NET or any other technology, that can run the described processing for extended periods of uninterrupted work without problems.
I do not need to show the processed video frames to the user all the time, so I cannot count on a preview window being visible; when the user is not monitoring, the program saves the frames to disk in JPEG format.
Any suggestions on what the problem could be, or on whether I should switch to another technology for the capture (if so, suggestions and information are welcome)?
I am developing a Windows Store application. The Windows Simulator offers seven resolutions, and I want to set different image sizes for the different resolutions. Does anyone know what the image sizes should be for the simulator resolutions?
These are the resolutions the simulator provides:
10.6" 1024 x 768
10.6" 1366 x 768
10.6" 1920 x 1080
10.6" 2560 x 1440
12" 1280 x 800
23" 1920 x 1080
27" 2560 x 1440
My question is about the image sizes for these resolutions, specifically for the background image and the launch image (splash screen). I want to select an image based on screen size; please guide me on what the image sizes should be for each screen size.
I have searched all over the internet. Any help is appreciated.
Depending on the kind of application you are dealing with, the best option may be to scale the image or to adjust the image size.
Take a look at the resource Guidelines for scaling to screens (Windows Store apps); it will give you a complete picture of this very important topic.
In the same reference guide you also have Guidelines for scaling to pixel density (Windows Store apps).
I truly recommend reading the MSDN reference for Windows 8 apps. It's very well documented and has tons of examples.
It's worth mentioning that the system can automatically swap images for you when the screen pushes the scale factor to 140% or 180% of the 1366 x 768 baseline. To take advantage of this auto swapping, simply include three versions of your image:
MyImage.scale-100.jpg
MyImage.scale-140.jpg
MyImage.scale-180.jpg
Then, when you reference the image in your application, just reference it as MyImage.jpg. The system will take care of the rest.
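For example, in code-behind (a minimal sketch; the asset path and the control are placeholders for whatever you use in your app):

    using System;
    using Windows.UI.Xaml.Controls;
    using Windows.UI.Xaml.Media.Imaging;

    // Reference the unqualified name; the resource loader resolves it to
    // MyImage.scale-100/140/180.jpg based on the current scale factor.
    void LoadBackground(Image imageControl)
    {
        imageControl.Source = new BitmapImage(
            new Uri("ms-appx:///Assets/MyImage.jpg"));
    }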
If you need to swap images out at thresholds other than 140% and 180%, you will need to write your own custom code.
In addition to the great resources that Agustin has provided, please note that the launch image/splash screen must always be 620 x 300 pixels, regardless of your screen size/resolution. You can find this in your Package.appxmanifest file under the "Application UI" tab, at the bottom.
I have a small video clip that I've run through my video-to-image software and noticed that the resulting image sets come out different. Both sets are cut at 1-second intervals and should be identical; where they vary is that one set seems brighter than the other. I'm trying to think what could cause this subtle difference, but I'm at a loss.
I thought that the difference in hardware might cause this, but I'm not doing anything on the GPU. I also thought it could be the codec being used, but if the video is encoded the same way, using the same codec and information, would decoding really affect it in this way?
Here is what the program does:
Takes a video and saves it out as images at 1-second intervals
Uses DirectX in C# to load the video and save out the texture
The video is encoded using MPEG-4 or similar compression
I understand that this may not be much information to go on, but I am at a loss as to where to look.
Any advice is greatly appreciated.
I'd say the images are not actually different. It is unlikely that MPEG-4 decoding uses any GPU resources (hardware decoding of MPEG-4 Part 10 is possible, but it is subject to certain conditions too). Far more likely, the effect is due to one of the reasons below (or both):
If you show the picture within a video streaming context, or, as you mentioned, via textures, the images might be coming from YUV surfaces, which the video hardware manages differently from regular content such as the desktop; the hardware may also apply a separate set of brightness/contrast/gamma controls to those surfaces, resulting in a different presentation.
You have different codecs/decoders installed and they decode the video with certain differences, such as different post-processing; even with identical encoded video, the decoded presentation might be a bit different.
I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and build an image out of it, appending each row onto the image over the course of the video. The video file I plan to supply isn't a regular video; it's a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far down as the game allows, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30 fps) and compile the rows into an image that will, hopefully, reconstruct the landscape as a single picture.
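The compositing step itself seems simple enough; here's a sketch of what I have in mind, assuming I can first decode each frame into a System.Drawing.Bitmap (BuildStrip is just my working name for it):

    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;

    // Copy row `sourceRow` of every frame into successive rows of one image.
    static Bitmap BuildStrip(List<Bitmap> frames, int sourceRow)
    {
        int width = frames[0].Width;
        var strip = new Bitmap(width, frames.Count, PixelFormat.Format24bppRgb);
        using (var g = Graphics.FromImage(strip))
        {
            for (int i = 0; i < frames.Count; i++)
            {
                // Frame i contributes one pixel row at output row i.
                g.DrawImage(frames[i],
                            new Rectangle(0, i, width, 1),          // destination row
                            new Rectangle(0, sourceRow, width, 1),  // source row
                            GraphicsUnit.Pixel);
            }
        }
        return strip;
    }

The part I'm missing is getting those frames out of the video file in the first place.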
I thought about doing this the quick and dirty way, using an AxWindowsMediaPlayer control and locking the form so it couldn't be moved or resized, then using a Graphics object to capture the screen, but that wouldn't be fast enough and would cause way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I actually just installed the Windows SDK but haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing when I tried it a while back). Hopefully someone can point me in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
You could use a video renderer in renderless mode (e.g., VMR9 or EVR), which allows you to process every frame yourself. Using frame-stepping playback, you can advance one frame at a time and process each frame.
DirectShow.NET lets you stay in managed code where possible, and I can recommend it. It is, however, only a wrapper around DirectShow, so it might be worthwhile to look at more advanced libraries as well.
A few side notes: wouldn't you experience issues with lighting that differs from angle to angle? Perhaps it would be easier to capture some screenshots and use existing stitching algorithms?
I need to perform video scaling in my C++ app. I know I can use COM to communicate with C#, and I know C# can perform video scaling, so I am thinking of bridging the two worlds.
How can I, using C#, scale a video image which is in memory, and then get the video data after the image is scaled?
This question seems similar, but how do I get the data after scaling instead of showing it on screen?
High Quality Image Scaling Library
C# (using GDI+ a.k.a. System.Drawing) can scale individual images, but it has no built-in way of scaling full videos (like MPEGs or AVIs).
Assuming you actually only need to scale individual images (i.e., not full videos), you would be better off doing this directly from C++ (StretchBlt would be the main API method to use).
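That said, if you do end up doing the scaling from C#, a minimal sketch with System.Drawing would look something like this (ScaleFrame is an illustrative name; it returns the raw RGB24 bytes instead of presenting them on screen):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    static byte[] ScaleFrame(Bitmap source, int newWidth, int newHeight)
    {
        using (var scaled = new Bitmap(newWidth, newHeight, PixelFormat.Format24bppRgb))
        {
            using (var g = Graphics.FromImage(scaled))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.DrawImage(source, 0, 0, newWidth, newHeight);
            }

            // Copy the scaled pixels out of the bitmap instead of drawing them.
            BitmapData data = scaled.LockBits(
                new Rectangle(0, 0, newWidth, newHeight),
                ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            var bytes = new byte[data.Stride * newHeight];
            Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
            scaled.UnlockBits(data);
            return bytes; // note: rows are padded to data.Stride bytes
        }
    }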