I hope I don't make a mistake with my very first post.
I am writing a library of graphical effects and filters (for example a Sobel or Gauß mask).
Because doing this on the CPU is slow, I wrote some shaders with the Shazzam tool.
My concrete problem is that I am not able to use these shaders from C#.
On the Internet I only found advice on how to apply a pixel shader as an effect directly to an element in XAML, which is not usable for my application, because it makes it impossible to apply several shaders to one image, which is needed, for example, by the Canny edge detector.
To illustrate the issue, here is a little pseudo-code that shows what I expect from the method:
PixelShader somePixelShader = new PixelShader("pixelshader.ps");
somePixelShader.Input = bitmap;
somePixelShader.Height = 200;
somePixelShader.Width = 800;
somePixelShader.Execute();
bitmap = somePixelShader.Result;
As you can see, everything should be done in C#.
Perhaps you can help me with my issue.
You can make a copy of the current effect's output as a bitmap with RenderTargetBitmap, then submit that output image as the input for the next effect; rinse, repeat.
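A minimal sketch of that idea (untested on my side, and note the update below reports it failing in a quick test; inputBitmap and someShaderEffect are placeholders):
// Host the bitmap in an off-screen Image, apply the shader effect,
// and snapshot the rendered output.
var image = new Image { Source = inputBitmap, Effect = someShaderEffect };
image.Measure(new Size(800, 200));
image.Arrange(new Rect(0, 0, 800, 200));
var rtb = new RenderTargetBitmap(800, 200, 96, 96, PixelFormats.Pbgra32);
rtb.Render(image);
// rtb now holds the effect output; use it as the Source for the next pass.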
Update: after a small (and inconclusive) test, this does not seem to work; see Can't render pixel shader to RenderTargetBitmap! Please help!
Check out these white papers for step-by-step instructions and examples on how to compile and use a pixel shader in WPF or Silverlight.
You may also want to check out the WPF Pixel Shader Effects Library here.
Hello! I am trying to implement a simple way to display the deformed shape of a beam. I found HelixToolkit, which offers perfect tools, but I can't find a way to display different tiles of the same mesh with a different colour or gradient. I found this: https://github.com/helix-toolkit/helix-toolkit/issues/885, which adds the VertColorMaterial property, but it looks like it is for the SharpDX flavour of the library, and I started with HelixToolkit.Wpf (I don't understand if it is also available there).
I can't even find a way to do it with SharpDX: there seems to be almost no documentation on the internet.
Additionally, SharpDX development has stopped.
So:
do you know any example?
can you suggest another library that is fast, offers the ability to navigate the model, and is compatible with/uses the WPF framework?
I would also like the ability to refine and subdivide a mesh.
Any kind of advice would be useful; I am new to the world of 3D computer graphics.
Thanks
EDIT 1:
I followed JonasH's hint of applying a texture, but it applies the texture to each tile (see image).
I can only distinguish between the outer material and the inner material (shown in the picture as the hue and the arrow texture).
I need to apply one colour per polygon to give the mesh a "FEM" style. Do you know how this is possible with HelixToolkit?
You might consider using Kitware's VTK instead of HelixToolkit. It's an extremely powerful library for scientific data visualization, well documented, and perfect for finite element pre- and post-processing. You can take a look at my app; unfortunately it is not documented yet, but just as an example:
https://github.com/galuszkm/STAN
I assume you have a color per vertex that you want to use. I would recommend using WPF or HelixToolkit.Wpf, since they are quite easy to use, but as far as I'm aware they do not support vertex coloring.
A workaround is to use a texture. I assume you want to visualize some scalar property as a color. You would first need to create your MeshGeometry3D and assign the TextureCoordinates: simply assign the value you want to visualize to one of the texture coordinates, normalized to the 0-1 range. You would also need to create a gradient texture, either with a GradientBrush or from an image. You would then assign the brush like:
var brush = new ImageBrush()
{
    // gradient.png is a color ramp; the U texture coordinate
    // selects the color along it
    ImageSource = new BitmapImage(new Uri("gradient.png", UriKind.Relative))
};
var material = new DiffuseMaterial(brush);
GeometryModel3D model = new GeometryModel3D(mesh, material);
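The piece not shown above is how the scalar values end up in TextureCoordinates; a rough sketch of what I mean, where vertices, v.Scalar, min and max stand in for your own data:
// Normalize each vertex's scalar into the 0-1 range and store it as the
// U coordinate, so the gradient image acts as the color scale.
var mesh = new MeshGeometry3D();
foreach (var v in vertices)
{
    mesh.Positions.Add(v.Position);
    double t = (v.Scalar - min) / (max - min);
    mesh.TextureCoordinates.Add(new Point(t, 0));
}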
In the image above, the first image is loaded via a C# script. The second is assigned via the inspector in the Unity editor. Note the dark gray border around the first image. How can I load the image via C# and have it not have the border?
The source image is a white-on-transparent PNG, 512x512 pixels. It is displayed in a UnityEngine.UI.Image sized at 30x30 with a red color assigned. The source image is identical (same location on disk) for both examples above.
The code I am using for the first image is as follows:
var texture = new Texture2D(512, 512);
texture.LoadImage(File.ReadAllBytes(Path.Combine(TexturePath, name)));
image.sprite = Sprite.Create(texture, new Rect(0,0, texture.width, texture.height), new Vector2(.5f,.5f), 100);
where image is the appropriate UnityEngine.UI.Image.
Note
The advantage of using the code above is that the images do not need to be embedded in the game that Unity ends up building. It means these images can be distributed separately from the game. Using Resources.Load does not cater for this and, I suspect, is the same as assigning the image via the inspector, meaning that Unity has already done something to the texture prior to assignment (likely something via the UnityEditor.TextureImporter).
Update
I investigated the Texture2D constructor some more and determined that the following code results in the image above, where the edges of the sprite no longer have the grey border, but now appear jagged. (Setting the last parameter to true retains the grey border).
var texture = new Texture2D(512, 512, TextureFormat.Alpha8, false);
Some googling has me thinking that the issue is mipmap-related, and that the Unity Editor may be resolving this on import via whatever occurs with UnityEditor.TextureImporter.borderMipMap, as seen here. However, the UnityEditor namespace is not available when building the project.
The issue is that the PNG format uses non-premultiplied alpha, and Unity uses straight alpha blending, which is designed to work best with premultiplied alpha colors.
Better in-depth descriptions of premultiplied vs. non-premultiplied alpha can be found from:
NVidia
Microsoft
A Unity-specific discussion can be found here (however, note that this problem has nothing to do with mip-mapping, though it can be exacerbated by filtering and resizing techniques).
You can also look at Unity's documentation on alpha importing to see a visual example of a common solution to this problem, if you have access to pipeline-side creation of these PNGs (I have typically solved this by applying a post-process to modify the PNGs after/during their creation).
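If you only have runtime access, a sketch of the same idea applied after LoadImage (assuming a readable RGBA texture and a solid-white sprite): overwrite the RGB of fully transparent pixels so filtering no longer blends toward a dark color at the edges.
// Bleed the sprite color into fully transparent pixels.
Color32[] px = texture.GetPixels32();
for (int i = 0; i < px.Length; i++)
{
    if (px[i].a == 0)
        px[i] = new Color32(255, 255, 255, 0); // keep alpha, fix hidden RGB
}
texture.SetPixels32(px);
texture.Apply();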
Using a cutout shader is also a solution, though it can result in jagged edges/visual artifacts.
You would think rendering a sprite using the features of Unity's primary supported runtime-loadable image file format would be a simple affair, but, alas, it is quite a bit more complicated than that. I don't know the inner workings of UnityEditor.TextureImporter.borderMipMap, but I suspect its inception is built around a similar problem (weighting between the edge pixels' alpha and any neighbors used in the filtering).
Use Resources.Load; that way you can configure the texture settings in the editor.
Question:
What is a fast way to scale and/or crop a bitmap provided from a WriteableBitmap for display in the UI?
Requirements:
Have Low CPU usage
Handle large images (5 megapixels, about 2500x2000 pixels)
Resize and/or crop to the same resolution/area as the UI element the bitmap is displayed in.
Use WPF
Specifically, it must allow a 14 FPS, 5-megapixel camera image stream to be displayed in a WPF UI element at full speed.
Update:
I have been able to speed up the drawing quite a bit by painting to a Canvas control using an ImageBrush as follows, where m_bitmap is my WriteableBitmap:
ImageBrush brush = new ImageBrush();
brush.ImageSource = m_bitmap;
brush.Stretch = Stretch.Uniform;
canvas.Background = brush;
I'm now able to get the full 14 FPS, though it is still using about 20% CPU, so I'm not sure how well it will perform if I add another camera or two (the plan is to have 4 or so running).
Another thing I think might be slowing down the drawing is that the images are in a mono, Gray8, format, not the standard RGB32 (or is it Bgra32 for WPF?) format. If I understand correctly, the image has to be converted to the standard format to be displayed, which would add significant overhead to each frame's drawing time.
Some background:
I'm currently working with a 5-megapixel, 14 FPS video camera and am trying to get the frames to render to the screen at full speed. I would like to do this using WPF.
I currently have an example in WinForms that runs at full speed for an unscaled image, but (as I would expect) it has major trouble if I set pictureBox.SizeMode = PictureBoxSizeMode.Zoom;. The example reads raw data directly from the camera stream into a buffer and then copies the data from the buffer into the bitmap assigned to the PictureBox control. The copy algorithm uses LockBits to speed things up.
I converted that example to WPF, rewriting the parts that used Bitmap objects to use WriteableBitmap objects instead, and an Image control instead of a PictureBox. Unfortunately this is not able to render the stream to the screen at any decent rate, scaled or unscaled. Both have significant CPU load and very slow updates.
Performance with rendering to the screen turned off is great: it is able to process the image stream at full speed and resolution while using around 3% CPU and less than 100 MB of memory.
Note: when I say rendering to the screen is turned off, I mean the WriteableBitmap is still being continuously updated; it is just not assigned to the Image control.
I've seen a lot of discussion about fast bitmap updating in WPF, but have been unsuccessful in getting it to work at a reasonable speed/CPU load. I would also like to have the image scaled in such a way that I can see the whole image.
I imagine the key lies in some scaling/crop combination so that WPF will not try to render (cache?) all 5 million pixels, but only those on the screen, and only at the current screen resolution. I imagine/hope this can be done fairly easily and without too much of a hit to memory or CPU, but I currently have no idea how. I have found the DecodePixelWidth and DecodePixelHeight properties, but those are only applicable when loading an image from a file into a BitmapImage.
Did you have a look at the following post?
Resizing WritableBitmap
If it does not solve your problem, I have more questions for you:
What is the resolution of your image?
Is the size of your UI element constant? What's its size?
Edit:
After your edit, I noticed that you want to display the bitmap in the Gray8 PixelFormat; why don't you try setting this format when creating your WriteableBitmap (m_bitmap)?
// untested; WriteableBitmap.Format is read-only, so specify Gray8 in the constructor
m_bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Gray8, null);
I am fairly certain that taking your 8 bits per pixel and quadrupling the bits needed per pixel, while not gaining any quality, is slowing down your application, especially because you run operations on 32-bpp images when you could be running those operations on 8-bpp images.
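If you go that route, pushing each raw frame into the Gray8 bitmap could look like this (a sketch; frame, width and height are assumed to come from your camera code):
// One WritePixels call per frame; the stride equals the width because
// Gray8 uses 1 byte per pixel.
var rect = new Int32Rect(0, 0, width, height);
m_bitmap.WritePixels(rect, frame, width, 0);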
While its interface is a bit old-fashioned, I believe that convert (see http://en.wikipedia.org/wiki/ImageMagick) is very often used (and may in fact be the industry standard).
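For instance, a typical downscale invocation (file names and target size are just an illustration):
convert input.png -resize 1280x1024 output.png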
Edit: Stack Overflow has about 2,300 questions tagged with imagemagick here. See for example What is the difference for sample/resample/scale/resize/adaptive-resize/thumbnail operators in ImageMagick convert?
The OP for https://apple.stackexchange.com/a/41531 decided to go with ImageMagick. And the accepted answer to Efficient JPEG Image Resizing in PHP also suggests ImageMagick, with 19 votes.
However, I don't know whether ImageMagick is capable of meeting your requirements of 14FPS, 5 Megapixels.
The only answer to Recommendation for real time image processing tools on Linux suggests a fork, GraphicsMagick, which seems to also be available for Windows.
I've come across strange behavior of pixel shader in WPF.
This problem is 100% reproducible, so I wrote small demo program. You can download source code here.
The root of all evil is a tiny class titled MyFrameworkElement:
internal sealed class MyFrameworkElement : FrameworkElement
{
    public double EndX
    {
        get
        {
            return (double)this.GetValue(MyFrameworkElement.EndXProperty);
        }
        set
        {
            this.SetValue(MyFrameworkElement.EndXProperty, value);
        }
    }

    public static readonly DependencyProperty EndXProperty =
        DependencyProperty.Register("EndX",
            typeof(double),
            typeof(MyFrameworkElement),
            new FrameworkPropertyMetadata(0d, FrameworkPropertyMetadataOptions.AffectsRender));

    protected override void OnRender(DrawingContext dc)
    {
        dc.DrawLine(new Pen(Brushes.Red, 2), new Point(0, 0), new Point(this.EndX, 100));
        dc.DrawLine(new Pen(Brushes.Green, 3), new Point(10, 300), new Point(200, 10));
    }
}
As you can see, this framework element renders two lines: the lower line has fixed coordinates, but the upper line depends on the EndX dependency property.
So this framework element is the target for a pixel shader effect. For simplicity's sake I use a grayscale shader effect found here. I applied GrayscaleEffect to MyFrameworkElement, and you can see the result; it looks nice.
Until I increase the EndX property drastically.
The small line is blurred, but the big line is fine!
If I remove the grayscale effect, all lines look as they should.
Can anybody explain the reason for this blurring?
Or, even better, how can I solve this problem?
With a custom pixel shader, WPF has to create an intermediate bitmap, and then that texture gets sampled by the pixel shader.
You're creating a massive rendering, so you're hitting some limitation in the render path.
A quick fix is to clip what you want rendered, as follows:
// Clip the drawing to the element's actual size so the intermediate
// bitmap handed to the shader stays small.
Geometry clip = new RectangleGeometry(new Rect(0, 0, this.ActualWidth, this.ActualHeight));
dc.PushClip(clip);
dc.DrawLine(new Pen(Brushes.Red, 2), new Point(0, 0), new Point(this.EndX, 100));
dc.DrawLine(new Pen(Brushes.Green, 3), new Point(200, 10), new Point(10, 300));
dc.Pop();
UPDATE:
One theory is that WPF uses a filter to scale the bitmap when it exceeds the maximum texture size (which can vary depending on your graphics card architecture)... so it goes through the pixel shader at a different size... then it gets scaled back to the original size.
Thus the scaling filter causes artifacts depending on the content of your bitmap (i.e. horizontal and vertical lines survive a scale down and back up better than diagonal lines).
.NET 4 changed the default filter it uses for scaling to a lower-quality one, Bilinear, instead of Fant... maybe this impacts the quality you get, too.
http://10rem.net/blog/2010/05/16/more-on-image-resizing-in-net-4-vs-net-35sp1-bilinear-vs-fant
UPDATE2:
This kind of confirms what I was thinking above.
If you use the Windows Performance Toolkit/Suite (part of the Windows SDK), you can see the video memory being gobbled up in the orange graph as you increase the slider value, because a bigger intermediate bitmap texture is being created. It keeps increasing until it hits a limit, then it flatlines... and that's when the pixelation becomes evident.
UPDATE3:
If you set the render mode to the software renderer (Tier 0), you can see how it copes with rendering such a large visual: the artifacts start appearing at a different point... presumably because the texture size limit is larger than/different from your GPU's. But the artifacts still appear, because it uses a bilinear filter internally.
Trying to use RenderOptions.SetBitmapScalingMode to raise the filter to Fant doesn't seem to change the rendering quality in any way (I guess it isn't honoured when rendering goes through the custom pixel shader path).
Put this in Application_Startup to see the software renderer results:
RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
Note that the image is blurred in the vertical direction but jagged in the horizontal direction.
Since shaders are applied to raster images, not vector graphics, the lines are rasterized into a texture. Hardware usually supports textures up to 8192*8192.
In my case the "blurring", as you call it, appears at a slider value of 16384. So my VirtualBox virtual graphics card supports up to 16384*16384.
Your limit may differ.
So just keep this value lower than that.
But it's strange that WPF rasterizes the whole image, since only a small part of it is visible.
So there is also another possible reason that lies inside the shader itself, but the shader is compiled into a binary, so I can't check it.
Update:
In my case it looks this way:
Looks like it is filtered vertically but not horizontally.
OK, I've got it!
I decompiled the library with your grayscale effect and also decompiled the WPF PresentationCore library to check why BlurEffect works perfectly in the same situation.
I found that BlurEffect implements the abstract method Effect.GetRenderBounds, which is absent in GrayscaleEffect. I also noticed that GrayscaleEffect is built against PresentationCore v3.0.0, where Effect does not have GetRenderBounds.
So this is an incompatibility between the 3rd and 4th versions of WPF.
There are three ways to fix it:
If you have the source code of GrayscaleEffect, add the needed methods and compile it against version 4.0.0 of the runtime.
You can switch the runtime your application uses to version 3.*.
If you don't have the sources of GrayscaleEffect but can't use the 3rd version of the runtime, write a wrapper for GrayscaleEffect that inherits from Effect (v4) and implements the absent methods.
I tried the 2nd way and the problem disappeared.
Old question, but this might be useful for someone having a problem with blurring of an image after applying a custom ShaderEffect.
The problem the OP mentioned might also be related to the scale of the rendered content.
I had a similar problem with blurring after applying ShaderEffects from WPFShadersLibrary to video, text and any other content within a normal window.
I noticed that the image shifts down by a tiny bit, resulting in "pixel splitting", so I created two new properties for the chosen ShaderEffect, XOffset and YOffset, applied them in HLSL (see code below), and then bound them to Sliders in XAML:
float2 newPos;
newPos.x = uv.x + offsetX; // offsetX/offsetY are fed from the new
newPos.y = uv.y + offsetY; // XOffset/YOffset dependency properties
Then I experimented with some arbitrary offsets and was able to re-align the picture. There is still some minimal blurring (or loss of detail), but the result was noticeably better.
The problem with this solution is that I currently don't know how to predict the offset, whether it depends on resolution or window size.
So here are the details (I am using C# BTW):
I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format.
I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format using these methods:
// causes a "Parameter not valid" error
Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed);
// no error, but the resulting image is black due to information loss I assume
Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed);
I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with please let me know.
EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish.
We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image.
The image is displayed next to a track bar. When the user moves the track bar (adjusting line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently.
When the user adjusts the track bar, I adjust the matrix. This does not give me per pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like for all pixels with a brightness value of > 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.
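For reference, a sketch of that ColorMatrix/ImageAttributes pattern (gain, g, bmp and destRect are placeholders): scaling the R, G and B rows adjusts the overall intensity when the image is drawn.
// Multiply R, G and B by 'gain'; the alpha row stays untouched.
float gain = 1.2f;
var cm = new ColorMatrix(new float[][]
{
    new float[] { gain, 0, 0, 0, 0 },
    new float[] { 0, gain, 0, 0, 0 },
    new float[] { 0, 0, gain, 0, 0 },
    new float[] { 0, 0, 0, 1, 0 },
    new float[] { 0, 0, 0, 0, 1 }
});
var attrs = new ImageAttributes();
attrs.SetColorMatrix(cm);
g.DrawImage(bmp, destRect, 0, 0, bmp.Width, bmp.Height, GraphicsUnit.Pixel, attrs);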
Going from 32bpp to 8bpp indexed will almost always result in quality loss, unless the original image has fewer than 256 colors in total.
Can you create another image that is an overlay with the affected pixels red, then show both of those?
Since you are going for brightness > 240, you can convert the overlay to grayscale first, then to indexed to get the overbright pixels.
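A sketch of that grayscale step with a ColorMatrix (standard luminance weights; applied via the same ImageAttributes pattern as in your edit):
// Each output channel becomes .299R + .587G + .114B.
var gray = new ColorMatrix(new float[][]
{
    new float[] { .299f, .299f, .299f, 0, 0 },
    new float[] { .587f, .587f, .587f, 0, 0 },
    new float[] { .114f, .114f, .114f, 0, 0 },
    new float[] { 0, 0, 0, 1, 0 },
    new float[] { 0, 0, 0, 0, 1 }
});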
You don't specify what you are doing with it once you have tagged the offenders, so I don't know if that will work.
Sounds like something you could do easily with a pixel shader. Even very early shader models would support something as simple as this.
The question is however:
Can you include shader support in your application without too much hassle?
Do you know shader programming?
EDIT:
You probably don't have a 3D context where you can do stuff like this =/
I was mostly just airing my thoughts.
Manipulating the picture pixel by pixel should be doable in real time with a single CPU, shouldn't it?
If not, look into GPGPU programming and OpenCL.
EDIT AGAIN:
If you gave some more details about what the app actually does, we might be able to help a bit more. For example, if you're making a web app, none of my tips would make sense.
Thanks for the help, everyone. It seems that this can be solved using the ImageAttributes class by simply setting a color remap table.
ColorMap[] maps = new ColorMap[someNum];
// add mappings
imageAttrs.SetRemapTable(maps);
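For a grayscale source, populating the table might look like this (a sketch, assuming the overbright pixels are pure gray values, since ColorMap matches exact colors):
// Remap every gray level above 240 to red.
// Needs System.Drawing and System.Drawing.Imaging.
var mapList = new List<ColorMap>();
for (int v = 241; v <= 255; v++)
{
    mapList.Add(new ColorMap { OldColor = Color.FromArgb(v, v, v), NewColor = Color.Red });
}
imageAttrs.SetRemapTable(mapList.ToArray());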
Thanks for the help again, at least I learned something.