I have a scene with a few panoramic images. The trouble is that each one weighs a lot: between 400 kB and 1 MB on disk, around 30 MB in RAM once loaded, and around 3 MB to 4 MB after applying Texture2D.Compress().
The scene needs to run in Unity WebGL, so it has to use as little memory as possible. But when I run the compression function on an image, it freezes the scene and causes a very bad user experience.
I tried to work around it with LZMA compression, but that is far too slow. Are there any other compression algorithms I can use within the Unity C# environment, or any alternatives to Compress() that I'm not aware of?
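If the images really have to be decoded and compressed at runtime, one way to at least avoid a single long freeze is to spread the Compress() calls across frames with a coroutine. A minimal sketch (texturesToCompress is an assumed field; each call still blocks for its own texture, but the stalls are separated):

```csharp
using System.Collections;
using UnityEngine;

public class DeferredCompressor : MonoBehaviour
{
    // Panoramas loaded at runtime, queued for compression (assumed to be assigned elsewhere).
    public Texture2D[] texturesToCompress;

    IEnumerator Start()
    {
        foreach (var tex in texturesToCompress)
        {
            tex.Compress(false);     // false = fast, lower-quality DXT compression
            tex.Apply(false, true);  // upload and release the CPU-side copy to save memory
            yield return null;       // wait a frame before compressing the next one
        }
    }
}
```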
We are working on optimizing our WebGL build (intended to run on Chromebooks, latest version of Chrome).
Currently we have achieved about 40 fps throughout the game, which is quite close to our requirement.
The issue is that if the game is left running for some time (e.g. 30-45 minutes), the fps gradually drops from the initial 40 fps to about 20 fps, and keeps decreasing in the same manner the longer the game stays on.
We can say this isn't due to the GPU, because in all our scenes the draw calls are around 100-150 and stay constant. Furthermore, we have optimized as far as the GPU is concerned (static/dynamic batching, GPU instancing, disabled shadows, texture compression, etc.).
Currently we are unable to profile the actual build (the development build is about 2 GB, which can't be loaded in any browser), so we are profiling in the editor.
Deep-profiling the CPU scripts doesn't reveal anything obvious that could be gradually eating up the fps over a period of 45 minutes.
Has anyone else encountered this in their WebGL builds?
Any advice for optimization and maintaining a consistent fps?
Thanks.
Unity's AudioSource was causing the fps drop in the WebGL build. We replaced it with this asset and the fps drop vanished.
I'm confused as to what is going wrong here, but I am sure the problem is my render textures/video players. I have maybe 20 GameObjects that are iPhones, and I need the animated .mov files that I made to play behind their screens.
To do this, I followed tutorials and hooked up VideoPlayers with render textures (there are now about 8 of them), then plugged each render texture into the emission slot of a material.
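In code, that hookup amounts to roughly the following (a sketch assuming the Standard shader's emission map; the clip, RenderTexture, and renderer references are placeholders):

```csharp
using UnityEngine;
using UnityEngine.Video;

public class PhoneScreenVideo : MonoBehaviour
{
    public VideoClip clip;           // the imported movie clip
    public RenderTexture target;     // e.g. a 512x512 RenderTexture asset
    public Renderer screenRenderer;  // renderer whose material has an emission slot

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = target;
        player.isLooping = true;

        // Feed the RenderTexture into the Standard material's emission map.
        screenRenderer.material.EnableKeyword("_EMISSION");
        screenRenderer.material.SetTexture("_EmissionMap", target);

        player.Play();
    }
}
```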
Even with just 2 render-textured cubes the game is INCREDIBLY laggy; here are the stats:
I tried turning the depth off, but I don't know what's wrong here; my movie files are only in the kB range. How can I play videos without lagging?
Based on the CPU taking 848 ms per frame rendered, you are clearly bottlenecked on the CPU. If you want to run at 30 frames per second, you'll need to get CPU time below 33 ms per frame.
Since the CPU time gets markedly worse after adding video players, it seems the video codec is taxing your CPU heavily. Consider reducing the video quality as much as possible, especially the resolution.
If that won't work, you may need to implement a shader-based solution using animated sprite sheets. That's more work for you, but it will run much more efficiently in the engine.
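For reference, a sprite-sheet approach can be as simple as stepping a material's UV offset each frame instead of decoding video at all. A minimal sketch, assuming the frames are packed into a regular grid on a single texture:

```csharp
using UnityEngine;

public class SpriteSheetAnimator : MonoBehaviour
{
    public int columns = 4;             // frames per row in the sheet
    public int rows = 4;                // number of rows in the sheet
    public float framesPerSecond = 15f;

    Material mat;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
        // Scale the UVs so only one grid cell of the sheet is visible at a time.
        mat.mainTextureScale = new Vector2(1f / columns, 1f / rows);
    }

    void Update()
    {
        int frame = (int)(Time.time * framesPerSecond) % (columns * rows);
        int x = frame % columns;
        int y = frame / columns;
        // UV origin is bottom-left, so count rows down from the top of the sheet.
        mat.mainTextureOffset = new Vector2((float)x / columns,
                                            1f - (float)(y + 1) / rows);
    }
}
```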
I have raw bitmap data (pixel colors, 32-bit) in an array.
Is there a simple way to display such a "sprite" on a D3DImage?
Preferably without third-party libs (such as SharpDX).
Background:
I have raw bitmap frames coming from an unmanaged DLL, and I need to display them in WPF at a high frame rate with low CPU usage. So far I've tried WriteableBitmap and InteropBitmap, but they are too slow: I get ~110 fps in a maximized window (1680x1050 display) on a Core i5, and that is just a simple test that fills the memory. I believe modern computers should do this much faster without using 100% of a CPU core.
I'm also looking at VideoRendererElement (Displaying live video from a raw uncompressed byte source in C#: WPF vs. Win Forms), but it is a bit overcomplicated (it involves unmanaged DLLs, regsvr32 registration, etc.).
As far as I can see from the D3DImage samples, it achieves good frame rates without stressing the CPU. I wonder whether it can be used for bitmap display.
UPD:
I've found this: http://www.codeproject.com/Articles/113991/Using-Direct2D-with-WPF
It is basically what I was looking for. I've plugged it into my project, and it is on par with InteropBitmap, maybe even a little slower. It looks like memory bandwidth is the bottleneck in my case: in both approaches most of the time is spent copying the bitmap (I use an unmanaged memcpy).
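For reference, the WriteableBitmap variant of that copy looks roughly like this (a sketch; the frame size is hard-coded, the frame pointer stands in for whatever the unmanaged DLL delivers, and Buffer.MemoryCopy assumes .NET 4.6+ with unsafe code enabled, whereas older code would P/Invoke memcpy instead):

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class FramePresenter
{
    // Assumes fixed-size 32-bit BGRA frames coming from the unmanaged DLL.
    const int Width = 1680, Height = 1050;

    readonly WriteableBitmap bitmap =
        new WriteableBitmap(Width, Height, 96, 96, PixelFormats.Bgra32, null);

    public ImageSource Source => bitmap;   // bind a WPF <Image> to this

    // Called on the UI thread for every new frame pointer from the DLL.
    public unsafe void Present(IntPtr framePtr)
    {
        bitmap.Lock();
        long bytes = (long)Height * bitmap.BackBufferStride;
        Buffer.MemoryCopy((void*)framePtr, (void*)bitmap.BackBuffer, bytes, bytes);
        bitmap.AddDirtyRect(new Int32Rect(0, 0, Width, Height));
        bitmap.Unlock();
    }
}
```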
I need some guidance.
I have to create a simple program that captures a still image every n seconds from each of 4 cameras attached via USB.
My problem is that the cameras cannot all be "open" at the same time, as that would exceed the USB bus bandwidth, so I will have to cycle between them.
I'm not sure how to go about this.
I've tried using EmguCV, which is a .NET wrapper for OpenCV, but I'm having trouble controlling the quality of the images. I can capture at 1280x720 as intended, but the frames look as if they have just been scaled up, and all the image files are around 200 kB.
Any ideas on how to do this properly?
I hate to answer my own question, but this is how I have ended up doing it.
I continued with EmguCV. The image files are not very large, but that does not seem to be an issue; they are saved at 96 dpi and look pretty good.
I cycle through the attached cameras by initializing a camera, taking a snapshot, and then releasing the camera again.
It's not as fast as I had hoped, but it works. On average there are about 2 seconds between images.
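In code, the cycle boils down to roughly the following (a sketch against a recent Emgu.CV release; class and property names such as VideoCapture and CapProp differ between versions, and the interval and file naming are placeholders):

```csharp
using System;
using System.Threading;
using Emgu.CV;
using Emgu.CV.CvEnum;

class CameraCycler
{
    static void Main()
    {
        int[] cameraIndices = { 0, 1, 2, 3 };

        while (true)
        {
            foreach (int index in cameraIndices)
            {
                // Open only one camera at a time so the USB bus is never shared.
                using (var capture = new VideoCapture(index))
                {
                    capture.Set(CapProp.FrameWidth, 1280);
                    capture.Set(CapProp.FrameHeight, 720);

                    using (var frame = capture.QueryFrame())
                    {
                        if (frame != null)
                            frame.Save($"camera{index}_{DateTime.Now:yyyyMMdd_HHmmss}.png");
                    }
                } // Dispose() releases the camera before the next one is opened.
            }

            Thread.Sleep(TimeSpan.FromSeconds(10)); // the "every n seconds" interval
        }
    }
}
```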
I'm working on a wallpaper application. Wallpapers are changed every few minutes, as specified by the user.
The feature I want is to fade in the new image while fading out the old one. Anyone who has a Mac can see the behavior I want by setting the wallpaper to change every X minutes.
My current thought on how to approach this is to lay one image over the other and vary the opacity: start the old image at 90% and the new image at 10%, then decrease the old image in 10% steps until it reaches 0% while increasing the new image in 10% steps until it reaches 90%. I would then set the wallpaper to the new image.
To make the transition look smooth, I would create the transition wallpapers before starting the process instead of generating them in real time.
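For concreteness, one pre-rendered step of that crossfade could be produced roughly like this (a sketch using System.Drawing as an assumed environment; the helper name and types are illustrative):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class Crossfade
{
    // Draws newImage over oldImage at the given opacity (0..1) and returns the blended frame.
    public static Bitmap BlendStep(Image oldImage, Image newImage, float newOpacity)
    {
        var frame = new Bitmap(oldImage.Width, oldImage.Height);
        using (var g = Graphics.FromImage(frame))
        {
            g.DrawImage(oldImage, 0, 0, oldImage.Width, oldImage.Height);

            // A ColorMatrix with a scaled alpha component draws the new image translucently.
            var matrix = new ColorMatrix { Matrix33 = newOpacity };
            using (var attrs = new ImageAttributes())
            {
                attrs.SetColorMatrix(matrix);
                g.DrawImage(newImage,
                    new Rectangle(0, 0, oldImage.Width, oldImage.Height),
                    0, 0, newImage.Width, newImage.Height,
                    GraphicsUnit.Pixel, attrs);
            }
        }
        return frame;
    }
}
```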
My question is, is there a more effective way to do this?
I can think of some optimizations, such as saving the transition images at a lower quality.
Any ideas on approaches that would make this more efficient than I described?
This sounds like a question of trade-offs.
It depends on the emphasis:
Speed of rendering
Use of resources
Speed of rendering comes down to how long the blending of the two images takes before the result can be drawn to the screen. If the blending process takes too long (transparency effects can be slow compared to regular opaque drawing operations), then pre-rendering the transition may be a good approach.
Of course, pre-rendering means that multiple images will have to be kept either in memory or on disk, so more resources are needed for temporary storage of the transition effect. If resources are scarce, doing the transition on the fly may be more desirable. Additionally, if the images live on disk, there will be a performance hit from the slower I/O of storage compared to main memory.
On the point of "saving the transition images with a lower quality": what do you mean by "lower quality"? Do you mean compressing the image, or using a smaller image? I can see some pros and cons for each approach.
Compress the image
Pros: Per image, less memory is consumed, so less disk space or memory is required.
Cons: Decompressing the image takes processing time, and the decompressed image takes additional memory before it is drawn to the screen. If lossy compression such as JPEG is used, compression artifacts may be visible.
Use a smaller image
Pros: Again, per image, the amount of memory used will be lower.
Cons: The process of stretching the image to the screen size will take some processing power. Again, additional memory will be needed to produce the stretched image.
Finally, there's one more point to consider: is rendering the transition in real time really not going to be fast enough?
It may turn out that rendering doesn't take long at all, and this could all be premature optimization.
It might be worth a shot to make a prototype without any optimizations, and see if it would really be necessary to pre-render the transition. Profile each step of the process to see what is taking time.
If the performance of on-the-fly rendering is unsatisfactory, weigh the positives and negatives of each approach of pre-rendering, and pick the one that seems to work best.
Pre-rendering each blended frame of the transition will take up a lot of disk space (and potentially bandwidth). It would be better to simply load the two images and let the graphics card do the blending in real time. If you have to use something like OpenGL directly, you can probably just create two rectangles, set the images as their textures, and vary the alpha values. Most systems have simpler 2D APIs that let you do this very easily (e.g. Core Animation on OS X, which will automatically vary the transparency over time and make full use of hardware acceleration).
On-the-fly rendering should be quick enough if it is handled by the graphics card, especially since it is the same pair of textures drawn at different opacities (a lot of graphics rendering time is often spent loading textures onto the card; ever wondered what game loading screens were actually doing?).
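To illustrate the simpler-2D-API route in a .NET/WPF setting (a sketch; oldImage and newImage are assumed to be two overlapping Image controls, and the compositor does the blending in hardware):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

static class WallpaperFade
{
    // Cross-fades two stacked Image controls; no pre-rendered frames are needed.
    public static void CrossFade(Image oldImage, Image newImage, TimeSpan duration)
    {
        var fadeOut = new DoubleAnimation(1.0, 0.0, duration);
        var fadeIn = new DoubleAnimation(0.0, 1.0, duration);

        oldImage.BeginAnimation(UIElement.OpacityProperty, fadeOut);
        newImage.BeginAnimation(UIElement.OpacityProperty, fadeIn);
    }
}
```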