I've come across strange behavior of a pixel shader in WPF.
The problem is 100% reproducible, so I wrote a small demo program. You can download the source code here.
The root of all evil is a tiny class called MyFrameworkElement:
internal sealed class MyFrameworkElement : FrameworkElement
{
    // CLR wrapper for the EndX dependency property.
    public double EndX
    {
        get
        {
            return (double)this.GetValue(MyFrameworkElement.EndXProperty);
        }
        set
        {
            this.SetValue(MyFrameworkElement.EndXProperty, value);
        }
    }

    public static readonly DependencyProperty EndXProperty =
        DependencyProperty.Register("EndX",
            typeof(double),
            typeof(MyFrameworkElement),
            new FrameworkPropertyMetadata(0d, FrameworkPropertyMetadataOptions.AffectsRender));

    protected override void OnRender(DrawingContext dc)
    {
        // The red line's end point follows EndX; the green line has fixed coordinates.
        dc.DrawLine(new Pen(Brushes.Red, 2), new Point(0, 0), new Point(this.EndX, 100));
        dc.DrawLine(new Pen(Brushes.Green, 3), new Point(10, 300), new Point(200, 10));
    }
}
As you can see, this framework element renders 2 lines: the lower line has fixed coordinates, but the upper line depends on the EndX dependency property.
This framework element is the target of a pixel shader effect. For simplicity's sake I use the grayscale shader effect found here. So I applied the GrayscaleEffect to MyFrameworkElement. You can see the result; it looks nice.
Until I increase the EndX property drastically.
The small line is blurred, but the big line is fine!
But if I remove the grayscale effect, all lines look as they should.
Can anybody explain the reason for this blurring?
Or, even better, how can I solve this problem?
With a custom pixel shader, WPF has to create an intermediate bitmap, and that texture then gets sampled by the pixel shader.
You're creating a massive rendering, so you're hitting some limitation in the render path.
A quick fix is to clip what you want rendered, as follows:
// Clip drawing to the element's actual bounds so the intermediate bitmap stays small.
Geometry clip = new RectangleGeometry(new Rect(0, 0, this.ActualWidth, this.ActualHeight));
dc.PushClip(clip);
dc.DrawLine(new Pen(Brushes.Red, 2), new Point(0, 0), new Point(this.EndX, 100));
dc.DrawLine(new Pen(Brushes.Green, 3), new Point(200, 10), new Point(10, 300));
dc.Pop();
UPDATE:
One theory is that WPF uses a filter to scale the bitmap when it exceeds the maximum texture size (which can vary depending on your graphics card architecture)... so it goes through the pixel shader at a different size... then it gets scaled back to the original size.
Thus the scaling filter is causing artifacts depending on the content of your bitmap (i.e. horizontal and vertical lines survive a scale down and up better than diagonal lines).
.NET 4 changed the default filter it uses for scaling to a lower-quality one... Bilinear, instead of Fant... maybe this impacts the quality that you get too.
http://10rem.net/blog/2010/05/16/more-on-image-resizing-in-net-4-vs-net-35sp1-bilinear-vs-fant
UPDATE2:
This kind of confirms what I was thinking above.
If you use the Windows Performance Toolkit/Suite (part of the Windows SDK), you can see the video memory being gobbled up in the orange graph while you increase the slider value, because a bigger intermediate bitmap texture is being created. It keeps increasing until it hits a limit, then it flatlines... and that's when the pixelation becomes evident.
UPDATE3:
If you set the render mode to the "Software Renderer" (Tier 0), you can see how it copes with rendering such a large visual - the artifacts start appearing at a different point... presumably because the texture size limit is larger than/different from your GPU's. But the artifacts still appear, because it uses a Bilinear filter internally.
Trying to use RenderOptions.SetBitmapScalingMode to up the filter to Fant doesn't seem to change the rendering quality in any way (I guess because it isn't honoured when it goes through the custom pixel shader path).
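(For reference, such an attempt looks roughly like RenderOptions.SetBitmapScalingMode(myFrameworkElement, BitmapScalingMode.Fant); - the target element here is my assumption, since the exact call isn't shown above.)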
Put this in Application_Startup to see the software renderer results:
RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
Note that the image is normally blurred in the vertical direction but jagged in the horizontal one.
Since shaders are applied to raster images, not vectors, the lines are rasterized into a texture. Hardware usually supports textures up to 8192*8192.
In my case the "blurring", as you call it, appears at a slider value of 16384. So my VirtualBox virtual graphics card supports up to 16384*16384.
Your limit may differ.
So just keep this value lower than that.
But it's strange that WPF rasterizes the whole image, since only a small part of it is visible.
So there is also another possible reason that lies inside the shader itself, but it is compiled into a binary, so I can't check it.
Update:
In my case it looks this way:
Looks like it is filtered vertically but not horizontally.
Ok, I've got it!
I decompiled the library with your grayscale effect and also decompiled the WPF PresentationCore library to check why BlurEffect works perfectly in the same situation.
I found that BlurEffect implements the abstract method Effect.GetRenderBounds, which is absent in GrayscaleEffect. I also noticed that GrayscaleEffect is built against PresentationCore v3.0.0, where Effect does not have GetRenderBounds.
So this is an incompatibility between the 3rd and 4th versions of WPF.
There are three ways to fix it:
If you have the source code of GrayscaleEffect - add the needed methods and compile it against the 4.0.0 version of the runtime.
You can switch the runtime your application uses to version 3.*.
If you don't have the sources of GrayscaleEffect but can't use the 3rd version of the runtime, write a wrapper for GrayscaleEffect that inherits from Effect (v4) and implements the absent methods.
I tried the 2nd way and the problem disappeared.
Old question, but this might be useful for someone having a problem with blurring of an image after applying a custom ShaderEffect.
The problem the OP mentioned might also be related to the scale of the rendered content.
I had a similar problem with blurring after applying ShaderEffects from WPFShadersLibrary to video, text and any other content within a normal window.
What I noticed is that the image shifts down by a tiny bit, resulting in "pixel splitting", so I created two new properties for the chosen ShaderEffect, XOffset and YOffset, applied them in HLSL (see the code below), and bound them to Sliders in XAML:
float2 newPos;
newPos.x = uv.x + offsetX;
newPos.y = uv.y + offsetY;
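For completeness, the C# side of those two properties could be wired up roughly like this. This is only a sketch under my own assumptions: the class name (MyShaderEffect) and the constant registers c1/c2 (matching offsetX/offsetY in the HLSL) are placeholders that must agree with your Shazzam-generated shader:
public double XOffset
{
    get { return (double)GetValue(XOffsetProperty); }
    set { SetValue(XOffsetProperty, value); }
}

// Assumes register c1 holds offsetX in the compiled shader.
public static readonly DependencyProperty XOffsetProperty =
    DependencyProperty.Register("XOffset", typeof(double), typeof(MyShaderEffect),
        new UIPropertyMetadata(0.0, PixelShaderConstantCallback(1)));

public double YOffset
{
    get { return (double)GetValue(YOffsetProperty); }
    set { SetValue(YOffsetProperty, value); }
}

// Assumes register c2 holds offsetY in the compiled shader.
public static readonly DependencyProperty YOffsetProperty =
    DependencyProperty.Register("YOffset", typeof(double), typeof(MyShaderEffect),
        new UIPropertyMetadata(0.0, PixelShaderConstantCallback(2)));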
Then I experimented with some arbitrary offsets and was able to re-align the picture. There is still some minimal blurring (or loss of detail), but the result was noticeably better.
The problem with this solution is that I currently don't know how to predict the offset, whether as a function of resolution or of window size.
Related
Your help is much appreciated. I am using C# and EmguCV for image processing.
I have tried noise removal, but nothing happens. I also tried a median filter; it only works on the first image and does not work on the second image, where it just makes the image blurry and the objects larger and more square-like.
I want to remove the obviously distinct objects (the green ones) in my first image below so that it turns all black, because they are clearly separated and not grouped, unlike in the second image below.
Image 1:
In the same way, I want to do it in my second image below, but remove only those objects (the black ones) that are not grouped/lumped together, so that what remains in the image are the objects that are grouped/larger in scale.
Image 2:
Thank you
You may try Gaussian Blur and then apply Threshold to the image.
Gaussian Blur is a widely used effect in graphics software, typically to reduce image noise and reduce detail, which I think matches your requirements well.
For your 1st image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // You may need to customize Size and Sigma depending on the input image.
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 10, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
For your 2nd image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // You may need to customize Size and Sigma depending on the input image.
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 240, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
Hope this helps!
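Putting the two calls together, a minimal end-to-end sketch could look like this (the file names and the threshold cut-off are placeholders, and loading as grayscale is my assumption; it uses the same CvInvoke calls as above):
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System.Drawing;

// Load the source as grayscale (file name is a placeholder).
Image<Gray, byte> src = new Image<Gray, byte>("input.png");
Image<Gray, byte> blurred = src.CopyBlank();
Image<Gray, byte> result = src.CopyBlank();

// Blur so that small, isolated objects fade out; tune Sigma per image.
CvInvoke.GaussianBlur(src, blurred, new Size(0, 0), 5);

// Threshold the blurred image; tune the cut-off (10 vs. 240 above) per image.
CvInvoke.Threshold(blurred, result, 10, 255, ThresholdType.Binary);

result.Save("output.png");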
You should first threshold the image using Otsu's method. Second, run connected component analysis on the thresholded image. Third, go over all the components you found and, for the ones whose size is smaller than some minimum size, delete them from the original image. (A rough EmguCV sketch of these steps follows after the list below.)
1. cvThreshold (with CV_THRESH_BINARY/CV_THRESH_BINARY_INV (choose according to the image) + CV_THRESH_OTSU) http://www.emgu.com/wiki/files/1.3.0.0/html/9624cb8e-921e-12a0-3c21-7821f0deb402.htm + http://www.emgu.com/wiki/files/1.3.0.0/html/bc08707a-63f5-9c73-18f4-aeab7878d7a6.htm
2. CvInvoke.FindContours (RetrType == External, ChainApproxNone)
3. For each contour found in step 2, calculate CvInvoke.ContourArea.
4. If the area is smaller than minArea, draw the value you want for them (0, I suppose) into the original image (the one you want to filter) using CvInvoke.DrawContours with the current contour and thickness == -1 to fill the inside of the contour.
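A hedged sketch of those four steps with the Emgu.CV 3.x API (the file names and minArea value are assumptions to tune for your own images):
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Image<Gray, byte> original = new Image<Gray, byte>("input.png");
Image<Gray, byte> binary = original.CopyBlank();

// 1. Otsu threshold (use BinaryInv instead if the objects are dark on a light background).
CvInvoke.Threshold(original, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

// 2. Connected components via external contours.
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxNone);

    double minArea = 100; // assumption: smallest object size worth keeping
    for (int i = 0; i < contours.Size; i++)
    {
        // 3. + 4. Fill every component smaller than minArea with black in the original image.
        if (CvInvoke.ContourArea(contours[i]) < minArea)
            CvInvoke.DrawContours(original, contours, i, new MCvScalar(0), -1);
    }
}

original.Save("filtered.png");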
In the image above, the first image is loaded via C# script. The second is assigned via the inspector in Unity editor. Note the dark gray border around the first image. How can I load the image via C# and have it not have the border?
The source image is a white-on-transparent PNG, 512x512 pixels. It's being displayed in a UnityEngine.UI.Image sized at 30x30 with a red color assigned. The source image is identical (same location on disk) for both examples above.
The code I am using for the first image is as follows;
// The 512x512 size passed here is replaced by LoadImage with the PNG's actual dimensions.
var texture = new Texture2D(512, 512);
texture.LoadImage(File.ReadAllBytes(Path.Combine(TexturePath, name)));
image.sprite = Sprite.Create(texture, new Rect(0, 0, texture.width, texture.height), new Vector2(.5f, .5f), 100);
where image is the appropriate UnityEngine.UI.Image.
Note
The advantage of using the code above is that the images do not need to be embedded in the game that Unity ends up building, so they can be distributed separately from the game. Using Resources.Load does not cater for this and, I suspect, is the same as assigning the image via the inspector, meaning that Unity has already done something to the texture prior to assignment (likely something via UnityEditor.TextureImporter).
Update
I investigated the Texture2D constructor some more and determined that the following code results in the image above, where the edges of the sprite no longer have the grey border but now appear jagged. (Setting the last parameter to true retains the grey border.)
var texture = new Texture2D(512, 512, TextureFormat.Alpha8, false);
Some googling has me thinking that the issue is mipmap-related, and that the Unity Editor may be resolving this on import via whatever UnityEditor.TextureImporter.borderMipMap does, as seen here. However, the UnityEditor namespace is not available when building the project.
The issue is that the PNG format uses non-premultiplied alpha, while Unity's alpha blending is designed to work best with premultiplied alpha colors.
Better in-depth descriptions of Pre vs non-Pre can be found from:
NVidia
Microsoft
A Unity-specific discussion can be found Here (however, note this problem has nothing to do with mip-mapping, but can be exacerbated by filtering and resizing techniques)
You can also look to Unity's documentation on alpha importing to see a visual example of a common solution to this problem if you have access to pipeline-side creation of these PNGs (I have typically solved this by applying a post-process to modify PNGs after/during their creation).
Using a cutout shader is also a solution though it can result in jaggy/visual artifacts.
You would think rendering a sprite using Unity's primary supported runtime-loaded image file format would be a simple affair but, alas, it is quite a bit more complicated than that. I don't know the inner workings of UnityEditor.TextureImporter.borderMipMap, but I suspect its inception is built around a similar problem (the weighting between an edge pixel's alpha and any neighbours used in the filtering).
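If you cannot post-process the PNGs themselves, one rough runtime workaround for a white-on-transparent source like this one (my own suggestion, not part of the answer above) is to rewrite the RGB of the fully transparent pixels after LoadImage, so that bilinear filtering no longer blends the white glyph toward the black colour those transparent pixels carry:
// Sketch: make transparent texels carry the glyph colour (white) so filtering between
// opaque and transparent pixels no longer produces a dark fringe.
Color32[] pixels = texture.GetPixels32();
for (int i = 0; i < pixels.Length; i++)
{
    if (pixels[i].a == 0)
        pixels[i] = new Color32(255, 255, 255, 0); // keep alpha at 0, change RGB only
}
texture.SetPixels32(pixels);
texture.Apply();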
Use Resources.Load; that way you can configure the texture import settings in the editor.
I noticed ugly banding issues when using gradients in WPF, and saw that a solution was to set the "bits per pixel" property to 32.
The thing is that the property seems to be Windows Phone only, i.e. not working in a program for desktop devices, since trying to add this string to the ApplicationManifest didn't seem to do anything.
Does anyone know if/how I can set this property?
Thank you.
My function which draws the gradients:
public LinearGradientBrush getGradient(Color c1, Color c2, double opacity)
{
    LinearGradientBrush gradient = new LinearGradientBrush();
    gradient.StartPoint = new Point(0, 0);
    gradient.EndPoint = new Point(1, 1);
    gradient.GradientStops.Add(new GradientStop(c1, 0.0));
    gradient.GradientStops.Add(new GradientStop(c2, 1.0));
    gradient.Opacity = opacity;
    return gradient;
}
I draw the gradients from the two most dominant colors in an album cover. You can see the two colors at the top left of the window. I then call the getGradient function like this:
getGradient(Colors[0], Colors[1], 0.5); // 0.5 is dynamic depending on the brightness of those colors. Tried with opacity 1 but it's still the same.
Here are the sample images (in PNG and uploaded without compression)
Image1
Image2
Image3
As you can see, there is banding going on. There are worse examples, but I can't remember which cover produced them.
Please note that Image1 does not have banding on its album cover, even though there is a gradient on it.
A quick search turned up suggestions that the issue may just be a visual effect resulting from having only 256 values for each of the R, G and B channels that define a color, combined with the way gradients work. If you try to cover a large area with a gradient, it will be divided into smaller areas filled with solid colors that change slightly between neighbouring areas. Additionally, there is an optical illusion called Mach bands that makes the borders of the areas even more visible.
Take a look at those links for more information and some suggested solutions:
how to make the brush smooth without lines in the middle
http://social.msdn.microsoft.com/Forums/vstudio/en-US/cea96578-a6b3-4b29-b813-e3643d7770ae/lineargradientbrush-can-see-individual-gradient-steps?forum=wpf
After digging around a long time I finally found the best solution:
adding a little bit of noise (dithering) to the image! This does mean I have to draw the gradient myself, but I believe the quality will be much better.
I will update this post with the algorithm itself and examples when I'm done writing.
Stay tuned I guess.
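In the meantime, here is a rough sketch of the idea (my own, not the poster's final algorithm): draw the gradient into a WriteableBitmap and add about one quantisation step of random noise per channel before rounding, so the 256-level bands get broken up:
// Sketch: render a dithered diagonal gradient between two colours into a WriteableBitmap.
public WriteableBitmap RenderDitheredGradient(Color c1, Color c2, int width, int height)
{
    var bmp = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgr32, null);
    var pixels = new byte[width * height * 4];
    var rng = new Random();

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // Diagonal blend matching the (0,0) -> (1,1) gradient in getGradient above.
            double t = (x / (double)(width - 1) + y / (double)(height - 1)) / 2.0;
            double noise = rng.NextDouble() - 0.5; // +/- half a quantisation step
            int i = (y * width + x) * 4;
            pixels[i + 2] = ClampToByte(c1.R + (c2.R - c1.R) * t + noise); // R
            pixels[i + 1] = ClampToByte(c1.G + (c2.G - c1.G) * t + noise); // G
            pixels[i + 0] = ClampToByte(c1.B + (c2.B - c1.B) * t + noise); // B
        }
    }

    bmp.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, 0);
    return bmp;
}

private static byte ClampToByte(double v)
{
    return (byte)Math.Max(0, Math.Min(255, Math.Round(v)));
}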
I'm writing a GUI which uses OpenGL via OpenTK and the GLControl in C#, and I'm trying to use dirty rectangles to draw only the controls that need to be redrawn. Obviously it's not wise to redraw an entire maximized form just to refresh a mouse-hover button.
My first attempt was to use glScissor, but this doesn't limit SwapBuffers, which on my platform, I suspect (because the performance is almost entirely dependent on the window size), doesn't 'swap' but does a full copy of the back buffer onto the front buffer.
The second attempt was glAddSwapHintRectWIN, which in theory would limit the swapped (in this case copied) area of SwapBuffers, but it is only a hint and it doesn't do anything at all.
The third attempt was glDrawBuffer to copy a part of the back buffer onto the front buffer; for some unknown reason, even when I copy only a part of the buffer, the performance still degrades the same way as before when the window size increases.
It seems that a full-area refresh is still happening no matter what I do.
So I'm trying to use glReadPixels() and somehow get a pointer to draw directly onto the hDC pixel data obtained from the control's CreateGraphics(). Is this possible?
EDIT:
I think something is wrong with the GLControl. Why does the performance of this code depend on the screen size? I'm not doing any SwapBuffers or clearing, just drawing a constant-size triangle on the front buffer. A driver problem, maybe?
GL.DrawBuffer(DrawBufferMode.Front); // draw directly to the front buffer - no swap, no clear
Vector4 Color;
Color = new Vector4((float)R.NextDouble(), 0, 0, 0.3F);
GL.Begin(BeginMode.Triangles);
GL.Color4(Color.X, Color.Y, Color.Z, Color.W);
GL.Vertex3(50, 50, 0);
GL.Vertex3(150F, 50F, 0F);
GL.Vertex3(50F, 150F, 0F);
GL.End();
GL.Finish();
EDIT 2
These solutions are not viable:
Drawing onto a texture and using glGetTexImage to copy into a GDI bitmap and then drawing that bitmap onto the window hDC.
Reading buffer pixels with glReadPixels into a GDI bitmap and then drawing that bitmap onto the window hDC.
Splitting the window into a grid of viewports and updating only the cells that contain the dirty rectangle.
First of all, what platform (GPU and OS) are you using? What kind of performance are we talking about?
Keep in mind that there are several limitations when trying to combine GDI and OpenGL on the same hDC. Indeed, in most cases this will turn off hardware acceleration and give you OpenGL 1.1 through Microsoft's software renderer.
Hardware accelerated OpenGL is optimized for redrawing the entire window every frame. SwapBuffers() invalidates the contents of the backbuffer, which makes dirty rectangles impossible to implement when double buffering on the default framebuffer.
There are two solutions:
do not call SwapBuffers(). Set GL.DrawBuffer(DrawBufferMode.Front) and use single-buffering to update the rectangles that are dirty. This has severe drawbacks, including turning off desktop composition on Windows.
do not render directly to the default framebuffer. Instead, allocate and render into a framebuffer object. This way, you can update only the regions of the FBO that have been modified. (You will still need to copy the FBO to screen every frame, so it may or may not be a performance win depending on your GUI complexity.)
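A rough OpenTK sketch of this second option (width, height, dirty and glControl are placeholder names, and error checking is omitted):
// One-time setup: create a texture-backed FBO the size of the control.
int fbo = GL.GenFramebuffer();
int colorTex = GL.GenTexture();

GL.BindTexture(TextureTarget.Texture2D, colorTex);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,
              width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0,
                        TextureTarget.Texture2D, colorTex, 0);

// Per frame: redraw only the dirty rectangle into the FBO...
GL.Enable(EnableCap.ScissorTest);
GL.Scissor(dirty.X, dirty.Y, dirty.Width, dirty.Height);
// ... draw the controls that intersect the dirty rectangle here ...
GL.Disable(EnableCap.ScissorTest);

// ...then blit the whole FBO to the window and swap as usual.
GL.BindFramebuffer(FramebufferTarget.ReadFramebuffer, fbo);
GL.BindFramebuffer(FramebufferTarget.DrawFramebuffer, 0);
GL.BlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
glControl.SwapBuffers();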
Edit:
40-60ms for a single triangle indicates that you are not getting any hardware acceleration. Check GL.GetString(StringName.Renderer) - does it give the name of your GPU or does it return "Microsoft GDI renderer"?
If it is the latter, then you must install OpenGL drivers from the website of your GPU vendor. Do that and the performance problem will disappear.
After several tests with OpenTK, it appears that in single- or double-buffered mode, the slowdown observed as the control size increases still remains, even with a constant-size scissor enabled. Whether or not GL.Clear() is used doesn't affect the slowdown either.
(Note that only height changes have a significant impact.)
Testing with an ANSI C example, I had the same results.
Making the same couple of tests under Linux gave the same results too.
Under Linux I noticed that the frame rate changes when I move from one display to the other, even with vsync disabled.
The next step would be to check whether DirectX has the same behaviour. If yes, then the limitation is located on the bus between the display and the graphics card.
EDIT: conclusion:
This behaviour is leading you to a false impression. Consider building your interface in an FBO with a dirty-rect mechanism, rendering it onto a quad (made of tris is better), and swapping as usual, without expecting to improve swapping for a given window size by clipping some operations.
I hope I am not making my first mistake with my first post.
I am writing a library for several graphical effects and filters (for example a Sobel or Gauß mask).
Because of the low speed of doing this on the CPU, I wrote some shaders with the Shazzam tool.
My concrete problem is that I am not able to use these shaders from C#.
On the Internet I found only advice on how to apply a pixel shader as an effect directly to an element in XAML, which is not usable for my application, because it makes it impossible to apply several shaders to one image in sequence, as is needed, for example, by the Canny edge detector.
To illustrate the issue, here is a little pseudo-code showing what I expect from the method:
PixelShader somePixelShader = new PixelShader(pixelshader.ps);
somePixelShader.Input = Bitmap;
somePixelShader.Height = 200;
somePixelShader.Width = 800;
somePixelShader.Execute();
Bitmap = somePixelShader.Result;
As you see, everything should be done in C#.
Perhaps you can help me with my issue.
You can make a copy of the current effect output as a bitmap with RenderTargetBitmap, then submit this outputted image as the new input for the next effect, rinse, repeat.
Update : after a small (and inconclusive) test, this will not work : Can't render pixel shader to RenderTargetBitmap! Please help!
Check out these white papers for step-by-step instructions + examples on how to compile and use a pixel shader in WPF or SL.
You may also want to check out the WPF Pixel Shader Effects Library here.