I need some guidance.
I have to create a simple program that captures still images every n seconds from 4 cameras attached via USB.
My problem is that the cameras cannot be "open" at the same time, as that would exceed the USB bus bandwidth, so I will have to cycle between them.
I'm not sure how to go about that.
I've tried using EmguCV, which is a .NET wrapper for OpenCV, but I'm having trouble controlling the quality of my images. I can capture 1280x720 as intended, but it seems like the images are just scaled up, and all the image files are around 200 kB.
Any ideas on how to do this properly?
I hate to answer my own question, but this is how I ended up doing it.
I continued with EmguCV. The images are not very large (in file size), but it seems that is not an issue. They are saved at 96 dpi and look pretty good.
I cycle through the attached cameras by initializing a camera, taking a snapshot, and then releasing the camera again.
It's not as fast as I had hoped, but it works. On average, there are about 2 seconds between each image.
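For reference, the open/snapshot/release cycle looks roughly like this. It is only a minimal sketch: it assumes an EmguCV version where the capture class is called VideoCapture and properties are set with Set(CapProp...) (older releases use Capture and SetCaptureProperty), and the device indices and interval are placeholders.

    using System;
    using System.Threading;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    class CameraCycler
    {
        static void Main()
        {
            int[] cameraIndices = { 0, 1, 2, 3 };  // adjust to the real device indices
            int intervalSeconds = 10;              // the "n seconds" between rounds

            while (true)
            {
                foreach (int index in cameraIndices)
                {
                    // Open only one camera at a time so the USB bandwidth is never shared.
                    using (var capture = new VideoCapture(index))
                    {
                        capture.Set(CapProp.FrameWidth, 1280);
                        capture.Set(CapProp.FrameHeight, 720);

                        // Grab a few frames so the exposure can settle before the snapshot.
                        Mat frame = null;
                        for (int i = 0; i < 5; i++)
                            frame = capture.QueryFrame();

                        if (frame != null)
                            CvInvoke.Imwrite(
                                string.Format("camera{0}_{1:yyyyMMdd_HHmmss}.png", index, DateTime.Now),
                                frame);
                    } // Dispose() releases the camera so the next one can be opened.
                }

                Thread.Sleep(intervalSeconds * 1000);
            }
        }
    }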
I'm confused as to what is going wrong here, but I am sure the problem is my render textures/video players. I have maybe 20 game objects that are iPhones, and I need animated .mov files that I made to play behind the screens.
To do this I followed tutorials to hook up VideoPlayers with render textures (there are now about 8 of them), then plugged each render texture into the emission slot of a material.
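In code, that hookup amounts to roughly the following. This is only a sketch of what the tutorials describe, using Unity's VideoPlayer API; it assumes the screen material uses the Standard shader's emission slot, and the field names are placeholders.

    using UnityEngine;
    using UnityEngine.Video;

    public class PhoneScreenVideo : MonoBehaviour
    {
        public VideoClip clip;                // the .mov imported as a VideoClip
        public RenderTexture screenTexture;   // one render texture per phone screen

        void Start()
        {
            var player = gameObject.AddComponent<VideoPlayer>();
            player.clip = clip;
            player.isLooping = true;
            player.renderMode = VideoRenderMode.RenderTexture;
            player.targetTexture = screenTexture;

            // Feed the render texture into the emission slot of the screen material.
            var material = GetComponent<Renderer>().material;
            material.EnableKeyword("_EMISSION");
            material.SetTexture("_EmissionMap", screenTexture);

            player.Play();
        }
    }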
And with even 2 render-textured cubes the game is incredibly laggy; here are the stats:
I tried turning the depth off, but I don't know what's wrong here - my movie files are only in the KB range. How can I play videos without lagging?
Based on the CPU taking 848ms per frame rendered, you are clearly bottlenecked on CPU. If you want to run at 30 frames per second, you'll need to get CPU time below 33ms per frame.
Since the CPU time is getting markedly worse after adding video players, it seems that the video codec is taxing your CPU heavily. Consider reducing the video quality as much as possible, especially reducing the resolution.
If that won't work, you may need to implement a shader-based solution using animated sprite sheets. That's more work for you, but it will run much more efficiently in the engine.
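A rough sketch of that sprite-sheet approach, assuming the video frames have been baked into a single grid texture ahead of time (the frame counts and playback speed below are placeholders):

    using UnityEngine;

    // Scrolls the material's UVs across a grid of pre-rendered frames,
    // replacing the VideoPlayer/RenderTexture pair entirely.
    public class SpriteSheetAnimator : MonoBehaviour
    {
        public int columns = 8;              // frames per row in the sheet
        public int rows = 8;                 // rows of frames in the sheet
        public float framesPerSecond = 30f;

        private Material material;

        void Start()
        {
            material = GetComponent<Renderer>().material;
            // Each frame occupies 1/columns x 1/rows of the texture.
            material.mainTextureScale = new Vector2(1f / columns, 1f / rows);
        }

        void Update()
        {
            int totalFrames = columns * rows;
            int frame = (int)(Time.time * framesPerSecond) % totalFrames;

            int column = frame % columns;
            int row = frame / columns;

            // UVs start at the bottom-left, so flip the row to read the sheet top-down.
            material.mainTextureOffset = new Vector2(
                column / (float)columns,
                1f - (row + 1) / (float)rows);
        }
    }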
I have made a very simple hypercasual game. Everything works fine, but after a few minutes of gameplay the FPS drops from 60 to 50 and the phone even heats up. Similar to this question. I tried profiling, but I just can't see anything off. I even tried removing some UI elements, but still no luck. I tried various vsync settings. I had also used this to display the FPS; even without it, the lag can be seen. Even if I just open the game and do nothing, after 5 minutes the FPS drops to 50. If I go back using the home button and re-enter the game, the FPS becomes 60 again. I'm using Unity 2018.2.6f1. I have never experienced this behavior in my other Android games.
Basically, it was a faulty custom vertex shader applied to a plane that changed the background color over time. I had not used the mobile vertex color shader because I was not getting the desired output, but now I'll stick to the mobile one.
The two symptoms you observed are very likely connected.
The phone might heat up because you are using its full power, which in turn makes thermal throttling kick in, reducing the performance.
I've had exactly the same problem and was trying to fix it for a very long time. You said something about a faulty shader you used, and that is the key to solving our problem.
I use a 2-color gradient as a background, so I have to use a shader too. Since I'm a total noob at writing shader code, I had to find something on the Internet, and that was my biggest mistake.
To fix the problem and remove the FPS drop, remove the gradient and the shader attached to it from the scene, and try to find a more optimized shader for a 2D game (or you can always write your own).
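If the shader only has to draw a two-color gradient, one alternative is to skip custom shader code entirely: paint the background quad's vertex colors and let the built-in mobile vertex-color shader mentioned above do the blending. A minimal sketch, assuming the background is Unity's standard Quad primitive (whose local Y runs from -0.5 to 0.5):

    using UnityEngine;

    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class VertexGradientBackground : MonoBehaviour
    {
        public Color topColor = Color.cyan;
        public Color bottomColor = Color.blue;

        void Start()
        {
            // Work on an instance so the shared Quad mesh asset is untouched.
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector3[] vertices = mesh.vertices;
            Color[] colors = new Color[vertices.Length];

            // Blend between the two colors based on each vertex's height in the quad.
            for (int i = 0; i < vertices.Length; i++)
                colors[i] = Color.Lerp(bottomColor, topColor, vertices[i].y + 0.5f);

            mesh.colors = colors;
        }
    }

The renderer's material then just needs one of the built-in vertex-color shaders (for example one from the Mobile/Particles family) instead of the downloaded gradient shader.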
I ran into a problem with my GUI.
My GUI has multiple parts in it.
The first one is for an image (from 500x500 to 3000x3000 pixels, and it has to update up to 4 times each second)
The second one is the main menu
The third one has buttons with options regarding the image. I am not showing all of them at once; I scroll through several menus and render only the ones that are visible. (An example of the options is the pixel size of the image.)
I display the image inside a Viewbox that is 800x800 px. I stretch the image inside the box with Stretch="{Binding Path=StretchMode}".
The image I get comes from some kind of stream; I receive multiple images each second.
Now when I display an image, the first GUI part works fine (regardless of the image size), but the others sometimes have a heavy frame drop. I will give some examples:
Example 1:
The image is 500x500 pixels. I can work without a frame drop and the whole GUI updates correctly.
Example 2:
The image is 1500x1500 pixels. I can work without a frame drop and the whole GUI updates correctly.
Example 3:
The image is 2500x2500 pixels. The image updates fast, but the rest of the GUI drops heavily from 60 fps, sometimes even to 1 fps.
My thoughts about this problem were:
Hardware is at its limit. But a look at the Task Manager and a CPU/RAM analysis with Visual Studio says everything is all right.
It is too much for the GUI to render, because 3000x3000 is big and 4 updates per second is not slow either. But after a test with 3000x3000 images loaded from an HDD (same data type), it worked fast and without problems.
Too many changes to the GUI at once. I tried the software with only 5 updates. Still the same problem.
The Microsoft Prism event not occurring. This wasn't it either, because it arrives in the controller and raises the change whenever there is one. (I have a logger that writes log files, and it shows the change event being raised more often than the GUI actually updates.)
Usage of a different data type. Possible, but I tried out several (BitmapSource, BitmapImage and WriteableBitmap).
Outsourcing some rendering work to another thread (see the sketch below). Still no change.
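A simplified, illustrative sketch of that threaded attempt (not my exact code; the callback stands in for the Prism-driven property update):

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Media.Imaging;

    public static class FrameLoader
    {
        public static void PushFrame(byte[] encodedFrame, Action<BitmapSource> setImageProperty)
        {
            Task.Run(() =>
            {
                BitmapSource bitmap;
                using (var stream = new MemoryStream(encodedFrame))
                {
                    var image = new BitmapImage();
                    image.BeginInit();
                    image.CacheOption = BitmapCacheOption.OnLoad;  // decode now, not lazily on the UI thread
                    image.StreamSource = stream;
                    image.EndInit();
                    bitmap = image;
                }

                bitmap.Freeze();  // required so the UI thread may use a bitmap created on this thread

                // Marshal only the cheap property assignment back to the UI thread.
                Application.Current.Dispatcher.BeginInvoke(
                    new Action(() => setImageProperty(bitmap)));
            });
        }
    }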
I hope you can give me some ideas why the GUI has this behaviour.
If you need code, please tell me.
I am not using a VM.
The problem occurs on Windows 7 64-bit and Windows 8.1 64-bit (not tested on Windows 10).
My hardware differs. The problem appears on my laptop (Intel i7-4702MQ @ 2.2 GHz, 8 GB DDR RAM, Intel onboard graphics) and on work PCs with different specs (the highest are: an Intel Xeon at 3.5 GHz, 128 GB DDR4 RAM, a Titan X and a 4K monitor).
EDIT: Apologies, stack doesn't have my score high enough to comment, so comment-ish content here instead.
EDIT1: Process Explorer (from Sysinternals) could perhaps be used as low-hanging fruit to see which (if any) video resources have high consumption. There is a tab/area in Process Explorer for looking at graphics resources: you can do a 'Select Columns' and add GPU resources for easy viewing. Might be a good angle to try.
I would venture a guess at this point that we are dealing with a kernel driver or DPC issue.
An ETW trace with DPC data might be helpful. Also, you don't mention whether you are in a VM or not; a VM would make Task Manager reflect inaccurate CPU resource consumption.
An example of how to collect such a trace is here: https://blogs.technet.microsoft.com/jeff_stokes/2012/09/18/how-to-collect-a-trace-for-audio-or-video-problems-in-windows-7/
In any event, I suspect you are looking at kernel drivers or IPC/DPC issues rather than just raw CPU consumption here. More data needs to be provided, I think (hardware specs, OS used, VM or not).
I have a .NET WinForms application and I need to show specific frames of a video file. The frames aren't necessarily requested in sequential order; they are loaded when the user moves a slider or when the application fires certain events. I tried the following things:
Using EmguCV (OpenCV wrapper): The problem here is that when I use SetCaptureProperty (with CAP_PROP.CV_CAP_PROP_POS_FRAMES, AVI_RATIO or MSEC) to set the capture's position, the position isn't set correctly (I checked it with GetCaptureProperty right after the SetCaptureProperty instruction). So the frame returned by QueryFrame isn't the frame I need.
Using WPF MediaElement with clock-driven behavior: I can set the position of the video to the place I need. The problem is that I don't know how to get only one frame of the video sequence. By default, I have the clock controller paused. When I set the position, if I call Clock.Controller.Resume(), the video starts playing from there. If I don't call Clock.Controller.Resume(), or if I call Clock.Controller.Resume() and then Clock.Controller.Pause(), nothing happens.
I'm looking for another video library that could be used to accomplish this, but I'm not sure what to use. Any ideas?
Thanks a lot to all community members, not only for the help with this answer, but for the great help you give me with my problems every day. I am new, but I will try to return it by helping others with their problems.
Sorry for my terrible English! (I'm a Spanish speaker and English is not my strong suit.)
I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and create an image out of it, basically appending the pixel row onto the image throughout the video. The video file I plan to supply isn't a regular video; it's actually just a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far as the game will let me, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30 fps) and compile the rows into an image that will effectively (hopefully) reconstruct the landscape into a single image.
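The compositing step itself is straightforward; roughly something like the sketch below, which assumes each decoded frame ends up as a System.Drawing.Bitmap (getting to the decoded frames is the part I'm asking about):

    using System.Drawing;
    using System.Drawing.Imaging;

    class StripBuilder
    {
        private readonly Bitmap output;
        private int nextRow;

        public StripBuilder(int width, int totalFrames)
        {
            // One output row per source frame.
            output = new Bitmap(width, totalFrames, PixelFormat.Format24bppRgb);
        }

        // Copies one horizontal pixel row (rowInFrame) out of the frame and
        // appends it as the next row of the output image.
        public void AppendRow(Bitmap frame, int rowInFrame)
        {
            using (Graphics g = Graphics.FromImage(output))
            {
                g.DrawImage(frame,
                    new Rectangle(0, nextRow, output.Width, 1),    // destination: next output row
                    new Rectangle(0, rowInFrame, frame.Width, 1),  // source: one row of the frame
                    GraphicsUnit.Pixel);
            }
            nextRow++;
        }

        public void Save(string path)
        {
            output.Save(path, ImageFormat.Png);
        }
    }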
I thought about doing this the quick and dirty way: using an AxWindowsMediaPlayer control, locking the form so it couldn't be moved or resized, then just using a Graphics object to capture the screen. But that wouldn't be fast enough and there would be way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I actually just installed the Windows SDK but haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing for me a while back when I tried it). Hopefully someone can give me a pointer in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
You could use a video renderer in renderless mode (e.g. VMR9, EVR), which allows you to process every frame yourself. By using frame-stepping playback you can step one frame at a time and process each frame.
DirectShow.NET helps you use managed code where possible, and I can recommend it. It is, however, only a wrapper around DirectShow, so it might be worthwhile to look at more advanced libraries as well.
A few side notes: wouldn't you experience issues with lighting that differs from angle to angle? Perhaps it's easier to capture some screenshots and use existing stitching algorithms?