I am creating a WPF application with a Viewport3D, and inside the viewport I have meshes with text on them. The text is different for each mesh. (This means I can share a single material reference for all the regular meshes, but for the text meshes I need to create a different material every time.)
I have also frozen all the regular meshes, since they are static.
I can create any number of meshes with a SolidColorBrush and the performance stays stable. (I have tried up to about 700 - 800 meshes.)
However, as soon as I add the text meshes, the performance drops drastically. For example, with around 200 regular meshes and 200 text meshes, the performance is very bad.
I have tried two different ways to render text:
- I have tried rendering text as a Viewport2DVisual3D. (However, I presume this is a terrible way, since in my prior example it means 200 additional viewports on top of the Viewport3D itself.)
- I have tried rendering text as a GeometryModel3D, so the creation is the same as for the regular meshes, except that the material consists of a VisualBrush instead of a SolidColorBrush. (This improves performance quite a bit, but it is still not perfect.)
Does anyone have suggestions to further improve text-rendering performance, so that I can render many more meshes?
(I have already followed most of the performance guidelines on the following page:
https://msdn.microsoft.com/en-us/library/bb613553%28v=vs.110%29.aspx)
Edit:
I have found that if I do the following with a VisualBrush:
VisualBrush v = new VisualBrush(Text.createCubeStackText(text1));
RenderOptions.SetCachingHint(v, CachingHint.Cache); // cache the brush's rendered content instead of re-rendering it
viewport.Material = new DiffuseMaterial(v);
it improves performance dramatically. I can now render 700 regular meshes and 300 text meshes without any performance problems; performance only starts to drop at around 550 text meshes and 550 regular meshes.
(I would still like to hear other suggestions, though.)
Text in 3D can be very slow due to the tessellation that is needed to render the text. You might want to consider:
- Remove the 3D rendering of text by overlaying the Viewport3D with a canvas: calculate the proper location of the text and render it as a 2D overlay. You could even change the font size depending on the Z value of the text's position.
- Render the text to a (transparent) bitmap and add it as a texture on a billboard in the 3D scene (see the sketch after this list).
- Use IsHitTestVisible = false to click through the overlay, and project the 3D position of the text onto the canvas by using this: Projecting a 3D point to a 2D screen coordinate
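To illustrate the billboard option, here is a minimal sketch that renders a string into a transparent bitmap and wraps it in a cacheable material. The helper name, font, and sizes are arbitrary assumptions, not code from the question:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

Material CreateTextMaterial(string text)
{
    var textBlock = new TextBlock
    {
        Text = text,
        FontSize = 32,
        Foreground = Brushes.White,
        Background = Brushes.Transparent
    };

    // Measure and arrange so the TextBlock has a size before rendering.
    textBlock.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
    textBlock.Arrange(new Rect(textBlock.DesiredSize));

    var bmp = new RenderTargetBitmap(
        (int)Math.Ceiling(textBlock.DesiredSize.Width),
        (int)Math.Ceiling(textBlock.DesiredSize.Height),
        96, 96, PixelFormats.Pbgra32);
    bmp.Render(textBlock);
    bmp.Freeze();                      // the bitmap is static, so freeze it

    var brush = new ImageBrush(bmp);
    RenderOptions.SetCachingHint(brush, CachingHint.Cache);

    var material = new DiffuseMaterial(brush);
    material.Freeze();                 // frozen materials are cheaper to render
    return material;
}

The returned material can then be assigned to a simple quad (two triangles) that is kept oriented towards the camera.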
Related
I've put a TextBlock in a 3D panel (Planerator) and used a Storyboard to animate it (as a text crawl).
When the field of view is 1 everything works fine, but if I set the field of view to more than 50 the frame rate drops sharply and rendering becomes choppy.
I used CompositionTarget.Rendering.
Please see the following images:
I need 2D animations in a 3D view with good performance.
Please tell me how I can solve this problem. Should I leave WPF and go to DirectX?
UPDATE 1:
I just want to move ONE 2D text in 3D space, but the performance is poor (rendering isn't smooth; it is choppy).
This is a sample project.
UPDATE 2:
This is the updated version of the sample project, based on cokeman19's answer. (Performance has improved by ~10 frames, but I need flawless rendering.)
UPDATE 3:
Finally, I got acceptable performance with the help of cokeman19's answer and the contents of this page.
I'm not sure if it's just a byproduct of the sample app, but under Planerator.CreateVisualChild(), it doesn't seem to be necessary to set the GeometryModel3D.BackMaterial. For reference:
VisualBrush vb = new VisualBrush(_logicalChild);
SetCachingForObject(vb); // big perf wins by caching!!
Material backMaterial = new DiffuseMaterial(vb);
...
GeometryModel3D backModel = new GeometryModel3D() { ..., BackMaterial = backMaterial };
The BackMaterial is a VisualBrush wrapper around the logical child, which doesn't belong to the visual tree, so rendering it doesn't seem to make sense here. Moreover, the logical child (the LayoutInvalidationCatcher class) is in turn a wrapper around the visual child, which is already rendered (using _logicalChild) in setting frontModel.Visual.
Removing the code for the creation and setting of BackMaterial brings the FPS up to ~55.
In addition, if it's an option, setting the following brings the FPS back up to 60, with no noticeable degradation in quality.
RenderOptions.SetEdgeMode(_viewport3d, EdgeMode.Aliased);
Update:
The only other gain I was able to make was to set the CacheMode to BitmapCache, which may not be applicable for your needs.
frontModel.CacheMode = new BitmapCache(20) { EnableClearType = false }; // 20 = renderAtScale
Even on my slowest machine, this allowed for maximum FPS, but there are some drawbacks. Because the zoom level on the text element is so high, and this technique creates a picture to use in the animation (instead of animating the UIElement itself), I had to set the scale level to 20 before the quality loss became almost visually imperceptible. This of course has memory implications as well.
I'm trying to develop an object-detection algorithm. I plan to compare two images taken with different focus distances: one correctly focused on the object and one correctly focused on the background.
Reading about autofocus algorithms, I think this can be done with a contrast-detection passive autofocus algorithm, which works on the light intensity measured at the sensor.
But I'm not sure the light-intensity values from the image file match those from the sensor (it's not a RAW file but a JPEG). Are the intensity values in a JPEG the same as on the sensor? Can I use them to detect focus correctness with contrast detection? And is there a better way to detect which areas of an image are in focus?
I have tried processing the images a bit and saw some progress. This is what I did using OpenCV:
- converted the images to gray using cvtColor(I, Mgrey, CV_RGB2GRAY);
- downsampled/decimated them a bit, since they are huge (several MB);
- took the sum of absolute horizontal and vertical gradients using http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#cv.Sobel.
The result is below. The foreground does look brighter than the background when it is in focus, and vice versa.
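For reference, here is a minimal sketch of that pipeline in C#, assuming the OpenCvSharp binding (the snippet above uses the C++ API); the kernel and scale choices are arbitrary:

using OpenCvSharp;

// Focus measure: sum of absolute horizontal and vertical Sobel gradients.
static Mat ComputeGradientMagnitude(Mat bgr)
{
    var gray = new Mat();
    Cv2.CvtColor(bgr, gray, ColorConversionCodes.BGR2GRAY);

    // Downsample once, since the source images are several MB.
    var small = new Mat();
    Cv2.PyrDown(gray, small);

    var gx = new Mat();
    var gy = new Mat();
    Cv2.Sobel(small, gx, MatType.CV_16S, 1, 0);
    Cv2.Sobel(small, gy, MatType.CV_16S, 0, 1);

    var ax = new Mat();
    var ay = new Mat();
    Cv2.ConvertScaleAbs(gx, ax);   // |dI/dx|
    Cv2.ConvertScaleAbs(gy, ay);   // |dI/dy|

    var sum = new Mat();
    Cv2.Add(ax, ay, sum);          // approximate gradient magnitude
    return sum;
}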
You can probably try to match and subtract these images using the translation obtained from matchTemplate() on the original gray images, and then assemble the pieces using the convex hull of the results as an initialization mask for grab cut, plugging the color images back in. In case you aren't familiar with grab cut, check out my answer to this question.
But maybe a simpler method will work here as well: instead of precise matching, try applying a strong blur to your gradient images and see what the difference gives you in this case. The images below demonstrate the idea, where I turned the difference into binary masks.
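A sketch of that simpler variant, again assuming OpenCvSharp, with the blur size and threshold as guesses:

// Blur both gradient maps heavily, then threshold their difference
// into a binary foreground mask.
static Mat FocusDifferenceMask(Mat gradObjectFocused, Mat gradBackgroundFocused)
{
    var b1 = new Mat();
    var b2 = new Mat();
    Cv2.GaussianBlur(gradObjectFocused, b1, new Size(51, 51), 0);
    Cv2.GaussianBlur(gradBackgroundFocused, b2, new Size(51, 51), 0);

    var diff = new Mat();
    Cv2.Subtract(b1, b2, diff);    // positive where the object is sharper

    var mask = new Mat();
    Cv2.Threshold(diff, mask, 10, 255, ThresholdTypes.Binary);
    return mask;
}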
It would be helpful to see your images. If I understood you correctly, you are trying to separate the background from the foreground using a focus (or blur) cue. Contrast in the image depends on focus, but it also depends on the contrast of the target itself: if the target is clouds, you will never get sharp edges or high contrast. Finally, a JPEG that uses little compression should not affect the critical properties of your algorithm.
I would try to capture a series of images across all possible focus settings and then build a graph of contrast as a function of focal length (or, even better, focusing distance). The peak in this graph gives you the distance to the object regardless of the object's own contrast. Note, however, that the accuracy of such visual cues drops sharply with viewing distance.
This is what I expect you to obtain when measuring the sum of absolute gradient in a small window:
The next step will be to combine the in-focus areas with areas of solid color, which show no particular peak in the graph but nonetheless belong to the same object. Sometimes taking the convex hull of the focused areas helps to pinpoint the rough boundary of the object.
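As a sketch of that per-window peak search (reusing the hypothetical ComputeGradientMagnitude() helper from the earlier sketch; the window is assumed to be given in the downsampled image's coordinates):

using System.Collections.Generic;

// For one window, find the index of the focus setting that maximizes
// local contrast; that index is a proxy for the distance to the object.
static int FindSharpestIndex(IList<Mat> focusStack, Rect window)
{
    int best = 0;
    double bestScore = double.MinValue;
    for (int i = 0; i < focusStack.Count; i++)
    {
        using var grad = ComputeGradientMagnitude(focusStack[i]);
        double score = Cv2.Sum(new Mat(grad, window)).Val0;
        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best;
}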
I am working on an application that loads live video images from a camera and displays an overlay on top of that image. The overlay does not change often, so it can be considered static. However, it usually contains about 1,000 to 10,000 lines.
When the video image is updated, there is a notable impact on CPU load depending on whether the overlay is visible or not. The overlay is neither invalidated nor changed; only the image behind it changes.
My setup is this:
<Canvas>
    <Image/>
    <Canvas>
        <OverlayElement 1/>
        <OverlayElement 2/>
        <OverlayElement 3/>
        <.../>
    </Canvas>
</Canvas>
The Image's Source is a WriteableBitmap. Every time a new camera image (a byte[]) is available, the main Canvas's Dispatcher is invoked to write the image data using WriteableBitmap.WritePixels().
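For reference, a minimal sketch of that update path (names such as _image and _bitmap are placeholders, not code from the question):

// Called from the camera thread for every new frame.
void OnFrameReceived(byte[] frame)
{
    _image.Dispatcher.Invoke(() =>
    {
        // _bitmap is the WriteableBitmap assigned to Image.Source.
        var rect = new Int32Rect(0, 0, _bitmap.PixelWidth, _bitmap.PixelHeight);
        int stride = _bitmap.PixelWidth * ((_bitmap.Format.BitsPerPixel + 7) / 8);
        _bitmap.WritePixels(rect, frame, stride, 0);
    });
}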
The inner Canvas contains all overlay elements, namely:
- a contour (Polyline),
- a circle (Path with an EllipseGeometry), and
- a set of rays (Path with one Figure containing LineSegments).
The number n of points in the contour equals the number of line segments in the last-mentioned Path. n is usually around 1,000 - 3,000.
Depending on the count and length of the lines shown in the overlay, the CPU load for showing a live image varies (it increases as length or count go up) even when the overlay does not change. At some point this affects the frame rate and makes the program unusable. Line length mostly correlates with line intersections, so maybe the Path is struggling to calculate its fill area even though it is not painted?
So how could I improve the performance here?
What bugs me most is that even if the overlay does not change, the render time increases with its primitive count. I would expect constant render time once the overlay has been drawn in its last set state. What could I do to achieve that, aside from rendering the whole overlay to a bitmap?
I am also open to suggestions on how to get the byte[] onto the screen more efficiently. Just keep in mind that this problem is part of a bigger application, and I cannot change all of its paradigms while concentrating on how to get the image drawn.
What I have tried so far:
1. Overriding the OnRender() method of the inner Canvas and drawing the overlay myself. This works fine but has the performance issue that brings me here ;)
2. Using Shapes (Polyline, Ellipse, Path) as the inner Canvas's children to hold the overlay elements. This works too; it is faster to redraw the overlay when it changes, but on the other hand it worsens the performance issue when updating the background image.
3. Like 2., but using Freeze() on the Geometries wherever possible. This had little or no performance impact. (A small sketch of what I mean is shown below.)
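For clarity, the freezing in attempt 3 looked roughly like this (a reconstruction with placeholder values, not the actual project code):

// Freeze() makes a Freezable read-only, so WPF can skip change tracking.
var circle = new EllipseGeometry(new Point(100, 100), 50, 50);
circle.Freeze();

var pen = new Pen(Brushes.Lime, 1.0);
pen.Freeze();          // pens and brushes can be frozen as well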
Thanks in advance for your help.
I have an image like below.
What I want is a monochrome image in which the white parts are kept white and the rest is black. However, the tricky part is that I also want to reduce the white parts to a thickness of one pixel.
It's the second part that I'm stuck with.
My first thought was to do a simple threshold, then use a sort of "Game of Life"-style iterative process in which a white pixel is removed if it has neighbours on one side but not the other (i.e. it is an edge). However, I have a feeling this would reduce the ends of lines to nothing over time, so I'd end up with a blank image.
What algorithm can I use to get the image I want, given the original image?
(My language of choice is C#, but anything is fine)
Original Image:
After detecting the morphological extended maxima of a given height:
and then thinning gives:
You can also manipulate the height parameter, or prune the thinned image.
Code in Mathematica:
img = ColorConvert[Import["http://i.stack.imgur.com/zPtl6.png"], "Grayscale"];
max = MaxDetect[img, .55]
Thinning[max]
EDIT: I followed my own advice, and a height of .4 gives segments which are more precisely localized:
I suggest that you look into binary morphological transformations such as erosion and dilation. Graphics libraries such as OpenCV (http://opencv.willowgarage.com/wiki/) and the statistics/matrix tool GNU Octave (http://octave.sourceforge.net/image/function/bwmorph.html) support these operations.
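Since the asker's language of choice is C#, here is a minimal sketch of the classic morphological skeleton (iterated erosion and opening), assuming the OpenCvSharp binding; it approximates the thinning step, though a dedicated thinning/pruning pass gives cleaner results:

using OpenCvSharp;

// Morphological skeleton: repeatedly erode the image and collect the
// pixels that an opening would remove. Input must be a binary image.
static Mat Skeletonize(Mat binary)
{
    var skeleton = new Mat(binary.Size(), MatType.CV_8UC1, Scalar.All(0));
    var element = Cv2.GetStructuringElement(MorphShapes.Cross, new Size(3, 3));
    var img = binary.Clone();

    while (Cv2.CountNonZero(img) > 0)
    {
        var eroded = new Mat();
        var opened = new Mat();
        var edge = new Mat();
        Cv2.Erode(img, eroded, element);
        Cv2.Dilate(eroded, opened, element);   // opening = erode, then dilate
        Cv2.Subtract(img, opened, edge);       // pixels the opening removes
        Cv2.BitwiseOr(skeleton, edge, skeleton);
        img = eroded;
    }
    return skeleton;
}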
I am looking for options on how to draw two rulers at different scales on a canvas (assume a Canvas) that will rescale based on user-entered data.
Placing the tick marks and text once isn't a big deal; the problem is how to rescale the data as the max/min values are changed by the user AND how to get the points (ellipses) on the canvas to look correct.
Foolishly, I set the size of the canvas to the max values of the current data, but as the data changes that won't work... I had hoped for a 1:1 translation...
Something like taking the current canvas size and redrawing the rulers is where I am headed.
thanks,
rusty
I was overthinking the issue, plus I have just started with 'drawing', 'graphics', etc. and have a lot to learn. It was the basic logical-to-physical translation math that most everyone 'knows'.
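For anyone landing here, that translation is just a linear mapping from data (logical) coordinates to canvas (physical) pixels. A sketch, with dot and the min/max names as placeholders, and the Y value inverted because the canvas origin is top-left:

// Map a data value in [min, max] onto a canvas extent in pixels.
double ToPixels(double value, double min, double max, double extent)
{
    return (value - min) / (max - min) * extent;
}

// Example: place an ellipse for the data point (x, y).
Canvas.SetLeft(dot, ToPixels(x, xMin, xMax, canvas.ActualWidth));
Canvas.SetTop(dot, canvas.ActualHeight - ToPixels(y, yMin, yMax, canvas.ActualHeight));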
rusty