I'm confused as to what is going wrong here, but I am sure the problem is my render textures/video players. I have maybe 20 GameObjects that are iPhones, and I need the animated .mov files that I made to play behind their screens.
To do this I followed tutorials and hooked up Video Players to render textures (there are now about 8 of them) like this, then plugged each render texture into the emission slot of a material:
Even with only 2 render-textured cubes the game is INCREDIBLY laggy; here are the stats:
I tried turning the depth off, but I don't know what's wrong here; my movie files are only in the KB range. How can I play videos without lagging?
Based on the CPU taking 848ms per frame rendered, you are clearly bottlenecked on CPU. If you want to run at 30 frames per second, you'll need to get CPU time below 33ms per frame.
Since the CPU time is getting markedly worse after adding video players, it seems that the video codec is taxing your CPU heavily. Consider reducing the video quality as much as possible, especially reducing the resolution.
If that won't work, you may need to implement a shader-based solution using animated sprite sheets. That's more work for you, but it will run much more efficiently in the engine.
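If you go the sprite-sheet route, here is a minimal sketch of the idea (assuming each animation is packed as a grid of frames on a single texture assigned as the material's main texture; the class name, grid size and frame rate below are placeholders). It drives the material's UV offset from a script, with no VideoPlayer involved:

    using UnityEngine;

    // Steps a material's UV offset through a grid-packed sprite sheet,
    // giving a flipbook "video" without a VideoPlayer or custom shader.
    public class SpriteSheetFlipbook : MonoBehaviour
    {
        public int columns = 4;            // frames per row on the sheet
        public int rows = 4;               // rows of frames on the sheet
        public float framesPerSecond = 15f;

        private Material material;

        void Start()
        {
            // Per-renderer material instance so each phone screen animates independently.
            material = GetComponent<Renderer>().material;
            // Scale the UVs so only one cell of the grid is visible at a time.
            material.mainTextureScale = new Vector2(1f / columns, 1f / rows);
        }

        void Update()
        {
            int totalFrames = columns * rows;
            int frame = (int)(Time.time * framesPerSecond) % totalFrames;

            int column = frame % columns;
            int row = frame / columns;

            // Sheets are usually laid out top-to-bottom, while UV v runs bottom-to-top.
            material.mainTextureOffset = new Vector2(
                (float)column / columns,
                1f - (float)(row + 1) / rows);
        }
    }

Attached to each phone-screen renderer, the only per-frame work is changing a texture offset, which is negligible compared to decoding a video stream on the CPU.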
We are working on optimizing our WebGL build (intended to run on Chromebooks, latest version of Chrome).
Currently we have achieved about 40 fps throughout the game, which is quite near our requirement.
The issue is that if the game is left "on" for some time (e.g. 30-45 minutes), the fps gradually drops from the initial 40 fps to about 20 fps, and it continues to decrease in the same manner the longer the game is left running.
We can say this isn't due to the GPU, because in all our scenes the draw calls are about 100-150 and they stay constant. Furthermore, we have optimized as far as the GPU is concerned (static/dynamic batching, GPU instancing, disabled shadows, texture compression, etc.).
Currently we are unable to profile the actual build (the development build is about 2 GB, which can't be loaded in any browser), so we are profiling in the editor.
Deep profiling the CPU scripts doesn't reveal anything obvious that could be gradually eating up the fps over a period of 45 minutes.
Has anyone else encountered this in their WebGL builds?
Any advice for optimization and maintaining a consistent fps?
Thanks.
Unity's AudioSource was causing the fps drop in the WebGL build. We replaced it with this asset and the fps drop vanished.
I have made a very simple hypercasual game. Everything works fine, but after a few minutes of gameplay the fps drops from 60 to 50 and the phone even heats up. Similar to this question. I tried profiling but just can't see anything off. I even tried removing some UI elements, but still no luck. I tried various VSync settings. Also, I had used this to display the fps; even without it, the lag can be seen. Even if I just open the game and do nothing, after 5 minutes the fps drops to 50. If I go back using the home button and re-enter the game, the fps becomes 60 again. Using Unity 2018.2.6f1. Never experienced this behavior in my other Android games.
Basically it was a faulty custom vertex shader, applied to a plane, that changed the background color over time. I had not used the mobile vertex color shader because I was not getting the desired output, but now I'll stick to the mobile one.
The two symptoms you observed are very likely connected.
The phone heats up because you are using its full power, which in turn makes thermal throttling kick in, reducing the performance.
I've had exactly the same problem and was trying to fix it for a very long time. You said something about the faulty shader you used, and that is the key to solving our problem.
I use a 2-color gradient as a background, so I have to use a shader too. Since I'm a total noob at writing shader code, I had to find something on the Internet, and that was my biggest fail.
To fix the problem and remove this fps drop, remove the gradient and the shader attached to it from the scene, and try to find a more optimized shader for a 2D game (or you can always write your own).
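One hedged alternative, if the background really is just a static two-color gradient: skip the per-pixel gradient shader entirely and bake the gradient into a tiny texture once at startup, then display it with an ordinary unlit or sprite material. A minimal sketch (the class name, colors and texture size are placeholders):

    using UnityEngine;

    // Bakes a simple two-color vertical gradient into a texture once,
    // so no custom shader has to run every frame for the background.
    public class GradientBackground : MonoBehaviour
    {
        public Color topColor = Color.cyan;
        public Color bottomColor = Color.blue;

        void Start()
        {
            // A 2x256 texture is enough: bilinear filtering smooths it when stretched.
            var texture = new Texture2D(2, 256, TextureFormat.RGBA32, false);
            texture.wrapMode = TextureWrapMode.Clamp;

            for (int y = 0; y < texture.height; y++)
            {
                Color color = Color.Lerp(bottomColor, topColor, (float)y / (texture.height - 1));
                for (int x = 0; x < texture.width; x++)
                    texture.SetPixel(x, y, color);
            }
            texture.Apply();

            // Assign to whatever renderer is used for the background quad/sprite.
            GetComponent<Renderer>().material.mainTexture = texture;
        }
    }

Since the texture is generated once in Start() and the default shader does no extra per-pixel work afterwards, nothing shader-related keeps running to heat the phone up.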
I'd like to apply various modifications to every frame in a video, for instance making every pixel in the first 100 pixels of width black for the full height of the frame.
It's actually just for a bit of fun, i.e. spoofing some newsreaders' background pictures with company staff "stories".
I've looked into FFmpeg just briefly, but I'm wondering if there is anything more straightforward considering the nature of the project (I don't want to burn too much time on it).
I literally just want to overwrite patches of the screen for x number of frames.
Take a look at this: http://directshownet.sourceforge.net/
There should be a sample where they draw a spider on the video.
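For what it's worth, since FFmpeg came up in the question: this kind of static patch can also be done without writing any code, using FFmpeg's drawbox filter to paint a filled rectangle over every frame (treat this as a sketch; exact option syntax can vary a little between FFmpeg versions):

    ffmpeg -i input.mov -vf "drawbox=x=0:y=0:w=100:h=ih:color=black:t=fill" output.mov

That blacks out a 100-pixel-wide strip down the left edge of every frame; restricting it to a range of frames would need the filter's timeline/enable option or a cut-and-concat step.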
I need some guidance.
I have to create a simple program that captures still images every n seconds from 4 cameras attached via USB.
My problem is that the cameras cannot be "open" at the same time, as that would exceed the USB bus bandwidth, so I will have to cycle between them.
I'm not so sure how to do this.
I've tried using EmguCV, which is a .NET wrapper for OpenCV, but I'm having trouble controlling the quality of my images. I can capture 1280x720 as intended, but it seems like the images are just scaled up, and all the image files are around 200 KB.
Any ideas on how to do this properly?
I hate to answer my own question, but this is how I have ended up doing it.
I continued with EmguCV. The images are not very large (file size), but it seems like that is not an issue. They are saved at 96 dpi and look pretty good.
I cycle through the attached cameras by initializing a camera, taking a snapshot and then releasing the camera again.
It's not as fast as I had hoped, but it works. On average, there are about 2 seconds between each image.
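In case it helps someone later, here is a rough sketch of that open-snapshot-release cycle (assuming a recent Emgu.CV where VideoCapture is the capture class; the camera indices, interval and file names are placeholders):

    using System;
    using System.Threading;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    class CameraCycler
    {
        // Indices of the attached USB cameras (assumed to enumerate as 0..3).
        static readonly int[] CameraIndices = { 0, 1, 2, 3 };

        static void Main()
        {
            while (true)
            {
                foreach (int index in CameraIndices)
                {
                    // Open one camera at a time so the USB bus bandwidth is never exceeded.
                    using (var capture = new VideoCapture(index))
                    using (var frame = new Mat())
                    {
                        capture.Set(CapProp.FrameWidth, 1280);
                        capture.Set(CapProp.FrameHeight, 720);

                        // Grab a single frame and write it to disk; the camera is released
                        // when the using block disposes the capture.
                        if (capture.Read(frame) && !frame.IsEmpty)
                        {
                            string file = $"camera{index}_{DateTime.Now:yyyyMMdd_HHmmss}.png";
                            CvInvoke.Imwrite(file, frame);
                        }
                    }
                }

                // Wait n seconds before the next round of snapshots.
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }

Because each VideoCapture is disposed before the next camera is opened, only one camera holds USB bandwidth at any moment; the cost is the per-camera initialization time, which is where the roughly 2 seconds between images comes from.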
I'm working on a wallpaper application. Wallpapers are changed every few minutes as specified by the user.
The feature I want is to fade in a new image while fading out the old one. Anyone who has a Mac can see the behavior I want by setting the wallpaper to change every X minutes.
My current thinking is to lay one image over the other and vary the opacity: start the old image at 90% and the new image at 10%, then decrease the old image by 10% until it reaches 0% while increasing the new image by 10% until it reaches 90%, and finally set the wallpaper to the new image.
To make it look like a smooth transition, I would create the transition wallpapers before starting the process instead of doing it in real time.
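As a rough sketch of that pre-rendering step (assuming .NET with System.Drawing; the file names and step count are placeholders), each transition frame can be produced by drawing the old wallpaper fully opaque and then the new one on top with partial opacity via a ColorMatrix, which amounts to the same cross-fade:

    using System.Drawing;
    using System.Drawing.Imaging;

    class TransitionRenderer
    {
        // Writes a series of blended frames between two wallpapers to disk.
        static void Main()
        {
            using (var oldImage = new Bitmap("old_wallpaper.jpg"))
            using (var newImage = new Bitmap("new_wallpaper.jpg"))
            {
                const int steps = 10;
                for (int i = 1; i <= steps; i++)
                {
                    float alpha = (float)i / steps;   // opacity of the new image this frame

                    using (var frame = new Bitmap(oldImage.Width, oldImage.Height))
                    using (var g = Graphics.FromImage(frame))
                    {
                        // Old image fully opaque underneath.
                        g.DrawImage(oldImage, 0, 0, oldImage.Width, oldImage.Height);

                        // New image drawn on top with partial opacity.
                        var matrix = new ColorMatrix { Matrix33 = alpha };
                        using (var attributes = new ImageAttributes())
                        {
                            attributes.SetColorMatrix(matrix);
                            g.DrawImage(newImage,
                                new Rectangle(0, 0, frame.Width, frame.Height),
                                0, 0, newImage.Width, newImage.Height,
                                GraphicsUnit.Pixel, attributes);
                        }

                        frame.Save($"transition_{i:D2}.jpg", ImageFormat.Jpeg);
                    }
                }
            }
        }
    }

Each saved frame could then be set as the wallpaper in sequence to fake the fade.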
My question is, is there a more effective way to do this?
I can think of some optimizations such as saving the transition images with a lower quality.
Any ideas on approaches that would make this more efficient than I described?
Sounds like an issue of trade-off.
It depends on the emphasis:
Speed of rendering
Use of resources
Speed of rendering comes down to how long the blending process takes to produce a screen-drawable image. If the blending takes too long (transparency effects may take a long time compared to regular opaque drawing operations), then pre-rendering the transition may be a good approach.
Of course, pre-rendering means that there will be multiple images either in memory or disk storage which will have to be held onto. This will mean that more resources will be required for temporary storage of the transition effect. If resources are scarce, then doing the transition on-the-fly may be more desirable. Additionally, if the images are on the disk, there is going to be a performance hit due to the slower I/O speed of data outside of the main memory.
On the issue of "saving the transition images with a lower quality" -- what do you mean by "lower quality"? Do you mean compressing the image, or using a smaller image? I can see some pros and cons for each method.
Compress the image
Pros: Per image, the amount of memory consumed will be lower. This would require less disk space, or space on the memory.
Cons: Decompression of the image is going to take processing. The decompressed image is going to take additional space in the memory before being drawn to the screen. If lossy compression like JPEG is used, compression artifacts may be visible.
Use a smaller image
Pros: Again, per image, the amount of memory used will be lower.
Cons: The process of stretching the image to the screen size will take some processing power. Again, additional memory will be needed to produce the stretched image.
Finally, there's one point to consider -- Is rendering the transition in real-time really not going to be fast enough?
It may actually turn out that rendering doesn't take too long, and this may all turn out to be premature optimization.
It might be worth a shot to make a prototype without any optimizations, and see if it would really be necessary to pre-render the transition. Profile each step of the process to see what is taking time.
If the performance of on-the-fly rendering is unsatisfactory, weigh the positives and negatives of each approach of pre-rendering, and pick the one that seems to work best.
Pre-rendering each blended frame of the transition will take up a lot of disk space (and potentially bandwidth). It would be better to simply load the two images and use the graphics card to do the blending in real time. If you have to use something like openGL directly, you will probably be able to just create two rectangles, set the images as the textures, and vary the alpha values. Most systems have simpler 2d apis that would let you do this very easily. (eg. CoreAnimation on OS X, which will automatically vary the transparency over time and make full use of hardware acceleration.)
On-the-fly rendering should be quick enough if handled by the graphics card, especially since it's the same texture with a different opacity (a lot of graphics rendering time is often spent loading textures to the card; ever wondered what game loading screens were actually doing?).