Resizing System.Windows.Controls.Image with Nearest Neighbour resizing mode - c#

I'm trying to make a game that uses a pixel art style. I've created my assets and used them as resources, which are added to the window as System.Windows.Controls.Image objects.
In the designer, they always turn out too small. As a result, when I resize them, they become somewhat blurred due to some sort of bicubic interpolation being applied to them.
I have managed to avoid this by avoiding resizing: I resize in the designer to find out what size is appropriate, then use nearest neighbour resizing on the original image (in an external program) to bring the source file to that size. I then update the image in the project and remove any resizing, leaving it at its original size, i.e. interpolation-free.
As you can imagine, this is a rather tedious process. I looked into interpolation choices during resizing, but most answers I can find relate to System.Drawing.Image, not System.Windows.Controls.Image. I feel like any such solution (if adapted) would be horribly messy and involve multiple (and perhaps unnecessary) conversions/casts.
Is there any way to get nearest neighbour resizing on System.Windows.Controls.Image?

To set the resize mode, you need to set the RenderOptions.BitmapScalingMode="NearestNeighbor" option for the visual tree. You can set this at the window level.
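For instance, a minimal sketch from code-behind (PixelArtImage is a hypothetical x:Name; the property can just as well be set as a XAML attribute):

using System.Windows;
using System.Windows.Media;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Nearest-neighbour scaling for one Image element
        // ("PixelArtImage" is a hypothetical x:Name from the XAML):
        RenderOptions.SetBitmapScalingMode(PixelArtImage, BitmapScalingMode.NearestNeighbor);

        // Or once for the whole window's visual tree, in XAML:
        //   <Window ... RenderOptions.BitmapScalingMode="NearestNeighbor">
    }
}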
To address the larger issue, it seems that something is causing your images to be scaled in the first place:
* Ensure you are setting the Stretch="None" option on the Image control.
* Ensure that you are using SnapsToDevicePixels or layout rounding (UseLayoutRounding).
* Lastly, if all else fails, explicitly set the width and height of the image.
I have also run into instances where the image file's DPI not being set to 96 (WPF's default) causes the renderer to apply scaling.

Related

Properly Scale UI Images in Unity

I have a PNG that's 150x100 in size, and I set the UI Image to the same, but it makes a bunch of extra space around it (that can be interacted with). How do I fix this?
Image of Problem: https://imgur.com/a/2ILXY1t
Unity isn't adding extra space. The image itself HAS that space.
There are options to crop out the alpha space in Unity by using the Sprite Editor, but in my experience I prefer using a proper image editor like GIMP. Using one is the best way to handle your image assets.
To crop out the extra space you just have to reduce the canvas size.
Well, first you could check (in Unity) whether your Image has its Preserve Aspect property set to True.
You could click Set Native Size, which is right below it, so the 'box' around your image will take its size.
Edit: Never mind the first two. I do not know why I thought they could solve it; I looked at your image again and I, too, think there are transparent pixels above and below it. So you should try this:
Check whether your picture has any transparent pixels around it, using an image editor. If it has, you will need to cut them out.

Visualising multi-layer image in WPF

My data structure has two fields:
* BackgroundImage (of type Bitmap/Image)
* Points (of type Point2D[])
My use case is as follows: a user can load an image into the app. After the image appears on the user's screen, they might add points to it (by clicking a mouse button). The points should be visualized on top of the image, but the user should be able to reposition them if needed (e.g. drag'n'drop).
At the moment I solve the problem by doing the following every time the user adds / moves a point (sketched below):
* clone the BackgroundImage
* draw all the points on the cloned image (using System.Drawing.Graphics)
* return the cloned image with the points (expose it as a property and bind to Image in WPF).
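For illustration, a minimal sketch of that clone-and-draw step, assuming System.Drawing types (Point standing in for my Point2D) and a fixed dot size:

using System.Drawing;

// Clone the background and stamp every point onto the copy.
Bitmap RenderPoints(Bitmap backgroundImage, Point[] points)
{
    var clone = (Bitmap)backgroundImage.Clone();          // full copy each time
    using (var g = Graphics.FromImage(clone))
    {
        foreach (var p in points)
            g.FillEllipse(Brushes.Red, p.X - 3, p.Y - 3, 6, 6);   // 6 px dot
    }
    return clone;   // exposed as a property and bound to the Image in WPF
}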
The time performance of this solution is OK; however, it consumes a lot of memory, as I end up copying the whole image every time. I'm wondering if there's a better way of doing this (e.g. by using layers - then my BackgroundImage remains the same all the time while I keep modifying only the top layer).
My code is quite long, but if it's needed just let me know and I'll post it.
When it comes to memory consumption, there is nothing wrong with the approach you described, as long as you make sure the old instances of the image are no longer rooted so that the GC can remove them.
However, while the cloned image is being modified, memory usage can of course peak at up to double what it would be without cloning.
To reduce this memory consumption, the movable points can be implemented as UIElements. This could also keep the implementation simpler by using WPF hit testing for the "drag'n'drop" part.
Since a UIElement requires more memory than a point drawn into the BitmapImage, the actual savings depend on the ratio of drawn points to movable points.
To implement the points using UIElements, place the BitmapImage together with a Canvas in a Panel. Then use the Canvas as a container for the points and set their positions using the attached properties Canvas.Top and Canvas.Left.
To make the points appear in front of the BitmapImage, set the Panel.ZIndex of the Canvas.
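A minimal sketch of that layout built in code-behind, using a Grid as the Panel (backgroundBitmap, points, and the dot styling are assumptions):

using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

// Grid > (Image, Canvas): the Canvas holds one lightweight element per point.
Grid BuildView(ImageSource backgroundBitmap, System.Windows.Point[] points)
{
    var grid = new Grid();
    var image = new Image { Source = backgroundBitmap, Stretch = Stretch.None };
    var overlay = new Canvas();
    Panel.SetZIndex(overlay, 1);             // points render in front of the image
    grid.Children.Add(image);
    grid.Children.Add(overlay);

    foreach (var p in points)
    {
        var dot = new Ellipse { Width = 6, Height = 6, Fill = Brushes.Red };
        Canvas.SetLeft(dot, p.X - 3);        // centre the 6 px dot on the point
        Canvas.SetTop(dot, p.Y - 3);
        overlay.Children.Add(dot);
    }
    return grid;
}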
But if you are seeing unreasonable memory consumption, you should use a memory profiler to take a closer look at which parts of the process are actually taking up the space.

Comparing focus of 2 images

I'm trying to develop an object detection algorithm. I plan to compare 2 images with different focus distances: one correctly focused on the object and one correctly focused on the background.
From reading about autofocus algorithms, I think it can be done with a contrast-detection passive autofocus algorithm, which works on the light intensity on the sensor.
But I'm not sure that the light intensity values from the image file are the same as those from the sensor (it's not a RAW image file, but a JPEG image). Are the light intensity values in a JPEG image the same as on the sensor? Can I use them to detect focus correctness with contrast detection? Is there a better way to detect which areas of the image are in focus?
I have tried to process the images a bit and I saw some progress. This is what I did using OpenCV:
* converted the images to gray using cvtColor(I, Mgrey, CV_RGB2GRAY);
* downsampled/decimated them a bit since they are huge (several MB);
* took the sum of absolute horizontal and vertical gradients using http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#cv.Sobel.
The result is below. The foreground, when in focus, does look brighter than the background and vice versa.
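A minimal sketch of those steps, assuming the OpenCvSharp binding (the original code is C++ OpenCV), with a box filter added to get the per-window gradient sums discussed further down:

using OpenCvSharp;

// Focus map: per-window mean of the summed absolute Sobel gradients.
static Mat FocusMap(string path)
{
    using var src = Cv2.ImRead(path);             // ImRead loads BGR
    var gray = new Mat();
    Cv2.CvtColor(src, gray, ColorConversionCodes.BGR2GRAY);

    var small = new Mat();
    Cv2.PyrDown(gray, small);                     // downsample: the images are huge

    var gx = new Mat(); var gy = new Mat();
    Cv2.Sobel(small, gx, MatType.CV_16S, 1, 0);   // horizontal gradient
    Cv2.Sobel(small, gy, MatType.CV_16S, 0, 1);   // vertical gradient

    var ax = new Mat(); var ay = new Mat();
    Cv2.ConvertScaleAbs(gx, ax);                  // |dI/dx| as 8-bit
    Cv2.ConvertScaleAbs(gy, ay);                  // |dI/dy| as 8-bit

    var grad = new Mat();
    Cv2.Add(ax, ay, grad);                        // sum of absolute gradients

    var focus = new Mat();
    Cv2.BoxFilter(grad, focus, MatType.CV_32F, new Size(15, 15));  // small window
    return focus;                                 // brighter = sharper = in focus
}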
You can probably try to match and subtract these images using translation from matchTemplate() on the original gray images, and then assemble pieces using the convex hull of the results as an initialization mask for grab cut, plugging in the color images. In case you aren't familiar with grab cut, check out my answer to this question.
But maybe a simpler method will work here as well. You can try to apply a strong blur to your gradient images instead of precise matching and see what the difference gives you in this case. The images below demonstrate the idea when I turned the difference into binary masks.
It would be helpful to see your images. If I understood you correctly, you are trying to separate background from foreground using a focus (or blur) cue. Contrast in the image depends on focus, but it also depends on the contrast of the target itself. So if the target is clouds you will never get sharp edges or high contrast. Finally, a JPEG that uses little compression should not affect the critical properties of your algorithm.
I would try to get a number of images in a row at all possible focus settings and then build a graph of contrast as a function of focal length (or, even better, focusing distance). The peak in this graph will give you the distance to the object regardless of the object's own contrast. Note, however, that the accuracy of such visual cues drops sharply with viewing distance.
This is what I expect you to obtain when measuring the sum of absolute gradient in a small window:
The next step for you will be to combine the areas that are in focus with areas of solid color, that is, areas with no particular peak in the graph but which nonetheless belong to the same object. Sometimes getting a convex hull of the focused areas can help to pinpoint the rough boundary of the object.

High CPU load when changing background image of Canvas containing overlay elements

I am working on an application that loads live video images from a camera and displays an overlay on top of said image. The overlay does not change often, so it can be considered still. However, it usually contains about 1,000 to 10,000 lines.
When the video image is updated, there is a notable impact on the CPU load depending on whether the overlay is visible or not. The overlay is neither invalidated nor changed; just the image behind it is changing.
My setup is this:
<Canvas>
    <Image/>
    <Canvas>
        <OverlayElement 1/>
        <OverlayElement 2/>
        <OverlayElement 3/>
        <.../>
    </Canvas>
</Canvas>
The Image's Source is a WriteableBitmap. Every time a new camera image (type byte[]) is available, the main Canvas' Dispatcher is invoked to write the image data by using WriteableBitmap.WritePixels().
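A minimal sketch of that update path (pixel format, stride, and names are assumptions):

using System;
using System.Windows;
using System.Windows.Media.Imaging;

// The WriteableBitmap assigned once to Image.Source at startup, e.g.
// new WriteableBitmap(width, height, 96, 96, PixelFormats.Gray8, null).
WriteableBitmap _bitmap;

// Called from the camera thread whenever a new frame arrives.
void OnFrame(byte[] frame)
{
    _bitmap.Dispatcher.Invoke(() =>
    {
        var rect = new Int32Rect(0, 0, _bitmap.PixelWidth, _bitmap.PixelHeight);
        int stride = _bitmap.PixelWidth;          // 1 byte per pixel for Gray8
        _bitmap.WritePixels(rect, frame, stride, 0);
    });
}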
The inner Canvas contains all overlay elements:
- a contour (PolyLine),
- a circle (Path with EllipseGeometry) and
- a set of rays (Path with one Figure containing LineSegments).
The number n of points in the contour equals the number of line segments in the last-mentioned Path. n is usually around 1,000 - 3,000.
Depending on the count and length of the lines shown in the overlay, the CPU load for showing a live image varies (it increases if length or count go up) even if the overlay does not change. At some point this affects the frame rate and makes the program unusable. Line length is mostly correlated with line intersections, so maybe the Path is struggling to calculate its fill area even though it is not painted?
So how could I improve the performance here?
What bugs me most is that even if the overlay does not change, the render time increases with its primitive count. I would expect constant render time once the overlay has been drawn in its last set state. What could I do to achieve that, aside from rendering the whole overlay to a bitmap?
I am also open-minded about suggestions on how to get the byte[] onto the screen more efficiently. Just keep in mind that this problem is part of a bigger application and I cannot change all paradigms while concentrating on how to get the image drawn.
What I have tried so far:
1. Override the OnRender() method of the inner Canvas, drawing the overlay myself. This works fine but has the performance issue that brings me here ;)
2. Use Shapes (PolyLine, Ellipse, Path) as the inner Canvas' children to hold the overlay elements. This works, too. It is faster to redraw the overlay when it changes, but on the other hand it worsens the performance issue when updating the background image.
3. Like 2., but use Freeze() on Geometries wherever possible (sketched below). Has no or little performance impact.
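For reference, a minimal sketch of approaches 2 and 3 (the geometry values and styling here are assumptions, not the actual overlay code):

using System.Windows;
using System.Windows.Media;
using System.Windows.Shapes;

// One overlay Shape whose geometry is frozen (approach 3 on top of approach 2).
static Path BuildCircle(double cx, double cy, double r)
{
    var geometry = new EllipseGeometry(new Point(cx, cy), r, r);
    geometry.Freeze();                 // immutable; cheaper for the render thread
    return new Path
    {
        Data = geometry,
        Stroke = Brushes.Yellow,
        StrokeThickness = 1.0          // no Fill set, so the area is not painted
    };
}

// Usage: innerCanvas.Children.Add(BuildCircle(320, 240, 100));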
Thanks for your help in advance.

System.Drawing.Region resizing

I'm writing a paint application. The user must be able to move all objects after they are painted or edited. I have brush and erase tools, so the user can erase all or any part of an object painted with the brush. So I made an object DrawBrush that holds a System.Drawing.Region made from a GraphicsPath.
But I don't know how to resize it. I need to change the size in each direction separately on mouse move (for example, only to the left).
Can someone help me?
I'm able to do anything with this object (moving), but not sizing...
A region is like a fence - it simply marks out the boundary of an area. It does not "contain" any graphics, so resizing a region will have no direct/visible effect.
If you wish to be able to move or resize portions of a bitmap image within your editor, you will need to copy a piece of your main image (as specified by your region) into a temporary Bitmap. Then you can draw the temporary bitmap back onto your main image (in a different location and/or at a different size).
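A minimal sketch of that copy step using System.Drawing (the helper name and dot-for-dot drawing are assumptions):

using System.Drawing;
using System.Drawing.Drawing2D;

// Copy the part of `source` covered by `region` into its own Bitmap.
Bitmap CopyRegion(Bitmap source, Region region)
{
    RectangleF bounds;
    using (var g = Graphics.FromImage(source))
        bounds = region.GetBounds(g);                  // bounding box of the region

    var piece = new Bitmap((int)bounds.Width, (int)bounds.Height);
    using (var g = Graphics.FromImage(piece))
    {
        g.TranslateTransform(-bounds.X, -bounds.Y);    // map the region into the piece
        g.SetClip(region, CombineMode.Replace);        // keep only pixels inside it
        g.DrawImage(source, 0, 0);
    }
    return piece;
}

// Later, draw it back at a new size/position:
//   g.DrawImage(piece, new Rectangle(newX, newY, newWidth, newHeight));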
If you wish to be able to draw multiple objects in your painting program, and then edit them (move them around and resize them) independently later, then you will need to store each of them in a separate bitmap object and composite them together to display the final image on screen or save it to a flat bitmap format. If you don't keep all the shapes separately like this, you will lose too much information and you won't be able to edit them later.
Before you try to write the code to do this, you may need to think about the design of your editor - what does it need to do, and how will you achieve it? How is your "document" going to be described? (A single bitmap? Many small bitmaps drawn at different locations? Vector paths?) If you write code before you understand how you will represent the document, you're likely to paint yourself into a corner (sorry about the pun) and get totally stuck.
