Writing an HLSL shader for rescaling floating-point textures - c#

When using HalfSingle/Single format for my Texture2D, XNA complains that sampling must be set to PointClamp, and this makes my texture look jagged. I am actually using this to pass depth data to the shader, so I am trying to get a better dynamic range than simply using RGBA grayscale values.
If I use Color or Bgra, then I basically only have 256 levels. If I encode the depth values as color pixels, then I can enable antialiasing, but it doesn't work correctly because the sampler treats each byte/nibble separately when lerping.
Question:
Is there a way to tell HLSL to sample my floating point texture using anti-aliasing filters, or do I need to write the shader myself?

It turns out there are two ways to solve this:
Pack .NET 32-bit floats into RGBA using my own conversion methods, or
Use SurfaceFormat.Rg32 to pack two 16-bit values into each pixel (this format supports texture filtering).
I went for the first method.
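As a minimal sketch of the first approach (the author's actual conversion isn't shown, so the scheme and names here are assumptions): store the depth as 32-bit fixed point spread across the four RGBA8 channels, most significant byte first. Note that this still requires point sampling, since the GPU cannot meaningfully interpolate the four channels independently.

```csharp
using System;

static class DepthPacking
{
    // Encode a depth value in [0, 1] as four bytes (RGBA8), most
    // significant byte first, using 32-bit fixed point.
    public static byte[] Pack(double depth)
    {
        uint q = (uint)Math.Round(depth * uint.MaxValue);
        return new[] { (byte)(q >> 24), (byte)(q >> 16), (byte)(q >> 8), (byte)q };
    }

    // Recover the depth value from the four channel bytes.
    public static double Unpack(byte[] rgba)
    {
        uint q = ((uint)rgba[0] << 24) | ((uint)rgba[1] << 16)
               | ((uint)rgba[2] << 8) | rgba[3];
        return q / (double)uint.MaxValue;
    }
}
```

The round trip is accurate to roughly 1/2^32, far beyond the 256 levels of a plain grayscale channel.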

Related

Byte array or bitmap to svg in c#

How can we convert a byte array or bitmap to SVG in .NET and save the SVG file?
Is there any library provided by .NET, or any third-party library, which can handle this?
The problem is that Bitmap, JPG, PNG, etc. files are raster graphics: they store a fixed array of pixels in various shades of red, green and blue (or hue, lightness and shade - whatever), while SVG files are vector graphics - which means they store images as "draw line from (x1, y1) to (x2, y2)" and "draw an arc here" commands.
And the two are not compatible.
It is possible to do - the process is called "vectorisation" - but the results are unlikely to be perfect, and if you are talking about a natural world object (like a photo of a face) it's very, very unlikely to work without some considerable effort on your part.

Uniform buffer size on Nvidia GPUs

I use C# with OpenTK to access OpenGL API. My project uses tessellation to render a heightmap. My tessellation control shader splits a square into a grid of 64 squares and my tessellation evaluation shader adds vertical offsets to those points. Vertical offsets are stored in a uniform float buffer like this:
uniform float HeightmapBuffer[65 * 65];
Everything works fine when I run the project on my laptop with an AMD Radeon 8250 GPU. The problems start when I try to run it on Nvidia graphics cards. I tried an older GT 430 and a brand-new GTX 1060, but the results are the same:
Tessellation evaluation info
----------------------------
0(13) : error C5041: cannot locate suitable resource to bind variable "HeightmapBuffer". Possibly large array.
As I researched this problem, I found GL_MAX_UNIFORM_BLOCK_SIZE variable which returns ~500MB on the AMD and 65.54 kB on both Nvidia chips. It's a little strange, since my array actually uses only 16.9 kB, so I am not even sure if the "BLOCK SIZE" actually limits the size of one variable. Maybe it limits the size of all uniforms passed to one shader? Even so, I can't believe that my program would use 65 kB.
Note that I also tried to go the 'common' way of using a texture, but I think there were problems with interpolation, so when placing two adjacent heightmaps together, the borders didn't match. With a uniform buffer array, on the other hand, things work perfectly.
So what is the actual meaning of GL_MAX_UNIFORM_BLOCK_SIZE? Why is this value on Nvidia GPUs so low? Is there any other way to pass a large array to my shader?
As I researched this problem, I found GL_MAX_UNIFORM_BLOCK_SIZE variable which returns ~500MB on the AMD and 65.54 kB on both Nvidia chips.
GL_MAX_UNIFORM_BLOCK_SIZE is the wrong limit. That applies only to Uniform Buffer Objects.
You just declare an array
uniform float HeightmapBuffer[65 * 65];
outside of a uniform block. Since you seem to use this in a tessellation evaluation shader, the relevant limit is MAX_TESS_EVALUATION_UNIFORM_COMPONENTS (there is a separate such limit for each programmable stage). This limit counts the number of float components, so a vec4 consumes 4 components and a float just one.
In your particular case, the latest GL spec at the time of this writing, the GL 4.6 core profile (https://www.khronos.org/registry/OpenGL/specs/gl/glspec46.core.pdf), guarantees only a minimum value of 1024 for that limit (= 4 KiB), and you are way beyond it.
It is actually a very bad idea to use plain uniforms for such amounts of data. You should consider using UBOs, Texture Buffer Objects, Shader Storage Buffer Objects or even plain textures to store your array. UBOs would probably be the most natural choice in your scenario.
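A sketch of the UBO route with OpenTK (the block name and helper are assumptions, and a valid GL context plus a linked program handle are required). One caveat worth knowing: under the std140 layout, each element of a float[] array is padded out to 16 bytes, which would push 65 × 65 floats past Nvidia's 64 KiB block limit, so it is better to pack the heights tightly into a vec4 array.

```csharp
using System;
using OpenTK.Graphics.OpenGL;

static class HeightmapUbo
{
    // 'program' is a linked shader program containing the block:
    //   layout(std140) uniform HeightmapBlock { vec4 h[(65 * 65 + 3) / 4]; };
    // In the shader, read element i as h[i / 4][i % 4].
    public static int Create(int program, float[] heights)
    {
        // Pack the floats tightly into vec4s: 4225 floats ~= 16.9 kB,
        // comfortably under the 64 KiB uniform block limit.
        float[] padded = new float[((heights.Length + 3) / 4) * 4];
        Array.Copy(heights, padded, heights.Length);

        int ubo = GL.GenBuffer();
        GL.BindBuffer(BufferTarget.UniformBuffer, ubo);
        GL.BufferData(BufferTarget.UniformBuffer,
                      (IntPtr)(padded.Length * sizeof(float)),
                      padded, BufferUsageHint.StaticDraw);

        // Attach the buffer to binding point 0 and point the block at it.
        int blockIndex = GL.GetUniformBlockIndex(program, "HeightmapBlock");
        GL.UniformBlockBinding(program, blockIndex, 0);
        GL.BindBufferBase(BufferRangeTarget.UniformBuffer, 0, ubo);
        return ubo;
    }
}
```

For arrays larger than the block limit, the same upload code works with a Shader Storage Buffer Object by switching the target to ShaderStorageBuffer and the block to std430.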

C# Convert Bitmap to indexed colour format

How can I convert a 24-bit colour System.Drawing.Bitmap to an indexed (256-colour) format? I'm having trouble working out how to calculate the palette. I can iterate over the pixels and use an int[] to contain the various colours, but the problem comes when there are more than 256 colours. Is there a way to convert to an indexed format and extract a 256-colour palette from a Bitmap?
Using the Bitmap Clone method, you can directly convert the source image to a 256-colour palette-indexed image like this:
Bitmap Result = Source.Clone(new Rectangle(0, 0, Source.Width, Source.Height), PixelFormat.Format8bppIndexed);
Then, if you want to access the palette colors, just use the Result.Palette.Entries property.
I had the same challenge earlier. It's possible to solve using GDI+ in .Net.
This article helped me a lot (including samples): http://msdn.microsoft.com/en-us/library/Aa479306
For best quality use "Octree-based Quantization".
WPF has access to the Windows Imaging Component, from there you can use a FormatConvertedBitmap to convert the image to a new pixel format. WIC is much much faster than the System.Drawing methods on Vista and 7 and will allow you a lot more options.
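For example, a minimal sketch of that WIC route (WPF's PresentationCore; the 2x2 in-memory source is only there to keep the sketch self-contained - in practice the source would come from a BitmapImage or decoder):

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;

// A tiny 2x2 BGR24 source built in memory (stride = 2 pixels * 3 bytes).
byte[] pixels =
{
    255, 0, 0,   0, 255, 0,     // blue, green
    0, 0, 255,   128, 128, 128, // red, gray
};
var source = BitmapSource.Create(2, 2, 96, 96, PixelFormats.Bgr24, null, pixels, 6);

// Convert to 8-bit indexed using one of the stock 256-color palettes.
var indexed = new FormatConvertedBitmap(source, PixelFormats.Indexed8,
                                        BitmapPalettes.Halftone256, 0);
// indexed.Palette.Colors now holds the 256 palette entries.
```

For a palette derived from the image itself rather than a stock one, the BitmapPalette(BitmapSource, int) constructor can build one by quantizing the source.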
This is not built in, but you can either use external .NET libraries for this or shell out to the console to invoke ImageMagick.
Some reading material to get you started.
Graphics Gems I, pp. 287-293: "A Simple Method for Color Quantization: Octree Quantization"
B. Kurz. "Optimal Color Quantization for Color Displays." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1983, pp. 217-224.
Graphics Gems II, pp. 116-125: "Efficient Inverse Color Map Computation"
This paper describes an efficient technique to map actual colors to a reduced color map selected by one of the techniques in the other papers.
Graphics Gems II, pp. 126-133: "Efficient Statistical Computations for Optimal Color Quantization"
Xiaolin Wu. "Color Quantization by Dynamic Programming and Principal Analysis." ACM Transactions on Graphics, Vol. 11, No. 4, October 1992, pp. 348-372.

Convert BitmapImage to grayscale, and keep alpha channel

I'm having an issue with converting a BitmapImage (WPF) to grayscale, whilst keeping the alpha channel. The source image is a PNG.
The MSDN article here works fine, but it removes the alpha channel.
Is there any quick and effective way of converting a BitmapImage to a grayscale?
You should have a look at image transformation using matrices.
In particular, this article describes how to convert a bitmap to grayscale using a ColorMatrix. (It is written in VB.NET, but it should be easy enough to translate to C#).
I haven't tested if it works with the alpha channel, but I'd say it's worth a try, and it definitely is a quick and effective way of modifying bitmaps.
It really depends upon what your source PixelFormat is. Assuming your source is PixelFormats.Bgra32 and you want to go to grayscale, you might consider a target pixel format of PixelFormats.Gray16. However, Gray16 doesn't support alpha; it just has 65,536 gradations between black and white, inclusive.
You have a few options. One is to stay with Bgra32 and just set the blue, green and red channels to the same value. That way you can keep the alpha channel. This may be wasteful if you don't require an 8-bit alpha channel (for differing levels of alpha per pixel).
Another option is to use an indexed pixel format such as PixelFormats.Indexed8 and create a palette that contains the gray colours you need and alpha values. If you don't need to blend alpha, you could make the palette colour at position zero be completely transparent (an alpha of zero) and then progress solid black in index 1 through to white in 255.
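That palette can be sketched like this (WPF; the linear gray ramp from index 1 to 255 is my own choice):

```csharp
using System.Collections.Generic;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Index 0: fully transparent. Indices 1..255: black ramping up to white.
var colors = new List<Color>(256) { Color.FromArgb(0, 0, 0, 0) };
for (int i = 1; i < 256; i++)
{
    byte v = (byte)((i - 1) * 255 / 254);   // index 1 -> black, index 255 -> white
    colors.Add(Color.FromRgb(v, v, v));
}
var palette = new BitmapPalette(colors);
```

The resulting palette can then be passed to a FormatConvertedBitmap targeting PixelFormats.Indexed8.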
If relying on API calls fails, you can always try the 'do it yourself' approach: just get access to the RGBA bytes of the picture, and replace every RGBA with MMMA, where M = (R+G+B)/3.
If you want it more accurate, you should weight the contributions of the RGB components. The eye is more receptive to green, so that value should weigh more.
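A sketch of that DIY approach on raw BGRA bytes, using the common Rec. 601 luma weights (0.299 R, 0.587 G, 0.114 B) rather than a flat average:

```csharp
using System;

static class Grayscale
{
    // Converts a BGRA pixel buffer to grayscale, replacing the three
    // color channels with the weighted luma and leaving alpha untouched.
    public static byte[] ToGrayBgra(byte[] bgra)
    {
        var result = new byte[bgra.Length];
        for (int i = 0; i < bgra.Length; i += 4)
        {
            byte b = bgra[i], g = bgra[i + 1], r = bgra[i + 2];
            byte m = (byte)Math.Round(0.114 * b + 0.587 * g + 0.299 * r);
            result[i] = result[i + 1] = result[i + 2] = m;
            result[i + 3] = bgra[i + 3];   // alpha preserved
        }
        return result;
    }
}
```

The buffer layout matches what WriteableBitmap.CopyPixels produces for a Bgra32 source.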
While not exactly quick and easy, a ShaderEffect would do the job and perform quite well. I've done it myself, and it works great. This article references how to do it and has source associated. I've not used his source, so I can't vouch for it. If you run into problems, ask, and I may be able to post some of my code.
Not every day you get to use HLSL in your LOB app. :)

image conversion

I need to convert an RGB (jpg) image to grayscale CMYK using only the black channel (K).
I'm trying to do this with ImageGlue, but the result is not what I'm looking for, since it converts the grays using the C, M and Y channels and leaves the black channel at 0%.
Does anyone have experience with any other .NET library/API that could do this?
I would start by looking at the ColorConvertedBitmap class in WPF. Here is a link to the docs and a basic example:
http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.colorconvertedbitmap(VS.85).aspx
Have you tried AForge.NET?
There is also ImageMagick, a C++ framework for image processing, with a .NET wrapper (google for MagickNet).
Here is an RGB to/from CMYK question which is related to this one:
How is 1-bit bitmap data converted to 8bit (24bpp)?
I found the bitmap transform classes useful when trying to do some image format conversions, but... CMYK is one of the most complicated conversions you can tackle, because there is more than one way to represent some colours. In particular, equal CMY percentages give you shades of grey which are equivalent to the same percentage of K. Printers often use undercolour removal/transformation, which normalises CMYK so that a large common percentage is taken from CMY and transferred to K. This is supposed to give purer blacks and grey tones. So even if you have a greyscale image represented using nothing but CMY with a zero black channel, it could still print using nothing but K once it reaches a printer that uses undercolour removal.
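The full-undercolour-removal case described above is easy to sketch: a pure gray maps to the K channel alone, with C = M = Y = 0 (a hypothetical helper, with channels expressed as fractions in [0, 1]):

```csharp
static class GrayToCmyk
{
    // Map an 8-bit gray level to K-only CMYK: 255 (white) -> K = 0,
    // 0 (black) -> K = 1, with C = M = Y = 0 throughout.
    public static (double C, double M, double Y, double K) Convert(byte gray)
    {
        return (0.0, 0.0, 0.0, 1.0 - gray / 255.0);
    }
}
```

Whether a given library or printer driver applies this normalisation is exactly the behaviour the answer above warns about.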
