I'm working on scientific imaging software for my university, and I've encountered a major problem. The scientific camera (an Apogee Alta U57) at my lab provides images as a 16bpp array - that's 0-65535 values per pixel! We want to keep this range, but we can't display it on a monitor (0-255 grayscale range). So I found a way to resolve this problem: make use of colors, and display the whole image as a heatmap (from black through blue, green, and red, to pure white).
I mean something like this: [example heatmap image I want to achieve]
My only question is: how do I efficiently convert a 16bpp array of pixel values into a complete heatmap bitmap in C#? Are there any libraries for doing that? If not, how do I achieve it using .NET resources?
My idea was to create a function that maps the 65,536 values into 24-bit RGB (255 R, 255 G, 255 B), but it's a tough job - especially without using the HSV model.
I would be much obliged for any help provided!
Your question consists of several parts:
reading in the 16 bit pixel data values
mapping them to 24 bit rgb colors
writing them out to an image file
I'll skip parts one and three and give you a few ideas about part two.
It is in fact harder than it seems. A unique mapping that doesn't lose any information is simple, in fact trivial: a little bit-shifting will do.
But you also want the result to work visually; meaning not so much that it should be visually appealing, but that it should make sense to a human eye. So we need a mapping with a credible yet large enough gradient.
For this you should experiment a little. I suggest making use of the LinearGradientBrush, as I show here. Have a look at the interpolateColors function! It uses only 6 colors in the example, way too few for your case!
You should pick many more; you may need to go through the color space in a spiral.
The trick will be to choose stop colors that are both nice and numerous enough to create a set of 64k unique colors, ideally going from blueish to reddish.
You will need to test the result for uniqueness; in fact you may want to create a pair of dictionaries (say, Dictionary<ushort, Color> and Dictionary<Color, ushort>) for the forward and reverse mappings.
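To illustrate the idea (only a rough sketch with made-up stop colors, not the linked interpolateColors function, and with no claim to uniqueness): build a 65,536-entry lookup table by linearly interpolating between the stop colors, then index it with each 16-bit pixel value.

    using System;
    using System.Drawing;

    static Color[] BuildLut(Color[] stops)
    {
        var lut = new Color[65536];
        int segments = stops.Length - 1;
        for (int v = 0; v < lut.Length; v++)
        {
            double pos = (double)v / 65535 * segments; // position along the whole gradient
            int i = Math.Min((int)pos, segments - 1);  // which segment we are in
            double t = pos - i;                        // 0..1 within that segment
            Color a = stops[i], b = stops[i + 1];
            lut[v] = Color.FromArgb(
                (int)Math.Round(a.R + (b.R - a.R) * t),
                (int)Math.Round(a.G + (b.G - a.G) * t),
                (int)Math.Round(a.B + (b.B - a.B) * t));
        }
        return lut;
    }

    // Usage, with the asker's black -> blue -> green -> red -> white ramp:
    // var lut = BuildLut(new[] { Color.Black, Color.Blue, Color.Green, Color.Red, Color.White });
    // Color c = lut[pixelValue16];

Note that with only five stops, each segment spans about 16k table entries but contains at most a few hundred distinct colors, so most entries collide - which is exactly why you need many more stops and the uniqueness test.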
Related
I am trying to develop an application for image processing.
Here is my complete code in DotNetFiddle.
I have tested my application with different images from the Internet:
Cameraman is GIF.
Baboon is PNG.
Butterfly is PNG.
Pheasant is JPG.
Butterfly and Pheasant are re-sized to 300x300.
The following two images show the correct Fourier and inverse Fourier spectra:
The following two images do not show the expected outcome:
What could be the reason?
Are there any problems with the latter two images?
Do we need to use images of a specific quality to test image-processing applications?
The code you linked to is a radix-2 FFT implementation, which only works for images whose dimensions are exact powers of 2.
Incidentally, the Cameraman image is 256 x 256 (a power of 2 in each dimension) and the Baboon image is 512 x 512 (again powers of 2). The other two images, resized to 300 x 300, are not. After resizing them to an exact power of 2 (for example 256 or 512), the output of FrequencyPlot for the brightness component of the last two images should look somewhat like the following:
butterfly
pheasant
A common workaround for images of other sizes is to pad the image to sizes that are exact powers of 2. Otherwise, if you must process arbitrary sized images, you should consider other 2D discrete Fourier transform (DFT) algorithms or libraries which will often support sizes that are the product of small primes.
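For illustration, a minimal sketch of the zero-padding workaround, assuming the brightness data lives in a double[,] (the names are mine):

    static int NextPow2(int n)
    {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    static double[,] PadToPow2(double[,] src)
    {
        int h = src.GetLength(0), w = src.GetLength(1);
        var dst = new double[NextPow2(h), NextPow2(w)]; // zero-initialized
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y, x] = src[y, x];
        return dst; // e.g. 300 x 300 becomes 512 x 512
    }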
Note that for the purpose of validating your output, you also have the option of using the direct DFT formula (though you should not expect the same performance).
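For example, a naive 2D DFT taken straight from the definition - O(N^4), so only usable on small test images, but handy as a reference implementation:

    using System;
    using System.Numerics;

    static Complex[,] Dft2D(double[,] f)
    {
        int rows = f.GetLength(0), cols = f.GetLength(1);
        var F = new Complex[rows, cols];
        for (int u = 0; u < rows; u++)
            for (int v = 0; v < cols; v++)
            {
                Complex sum = Complex.Zero;
                for (int y = 0; y < rows; y++)
                    for (int x = 0; x < cols; x++)
                        sum += f[y, x] * Complex.Exp(new Complex(0,
                            -2 * Math.PI * ((double)u * y / rows + (double)v * x / cols)));
                F[u, v] = sum;
            }
        return F; // compare against the FFT output on a small power-of-2 image
    }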
I haven't got time to dig through your code. Like I said in my comments, you should focus on the differences between those images.
There is no reason why you should be able to calculate the FFT of one image but fail for another, unless your code can't handle some difference between those images. If you can display them, you should be able to process them.
So the first thing that catches my eye is that both images you succeed with have even dimensions while the images your algorithm produces garbage for have at least one odd dimension. I won't look into it any further as from experience I'm pretty confident that this causes your issue.
So before you do anything else:
Take one of the images that works fine, remove one row or column, and see if you still get a good result. Then fix your code.
I'm having difficulty knowing how to approach or tackle this problem. I've looked at some tutorials, but they are meant for programmers who already know what they're doing. I followed a video on how to perform a form of pixel collision that applies to regular bounding boxes: when the bounding boxes collide, it checks whether any non-transparent pixels in the two intersecting boxes overlap, and if they do, a boolean returns true. Where and how could I start implementing the changing of the bounding box's axes for a rotating object, to complement the texture's appearance? I'd prefer not to be pointed to an external tutorial, because most of the ones I've read assume the programmer knows everything the writer is talking about.
I've also looked at some source code that perfectly demonstrates what I'm looking for, but it seems I'd need a very in-depth explanation to make any use of reading code as well.
First off, I don't really recommend doing this, as it's going to be either compute- or memory-intensive (or both).
That said, one idea is to still use your aforementioned AABB method of straight-up pixel-on-pixel testing. This requires you to maintain your own pixel data in memory, used only for collision, rather than relying strictly on the texture's data.
To be more specific, with this method you will have to generate what is essentially an "image" - a two-dimensional matrix of some kind - that represents/follows your rotated image's pixels. But you will not store color information in it, as you would with a normal image. Instead, each "pixel" or entry in the structure holds collision data: "block" or "not block". You could easily use a bitmask to represent this, with 1 meaning "block" and 0 meaning "not block"; you'd need one bit per pixel. (Note: usually a boolean "on"/"off" is all you need, but you may want different types of collision per pixel; if so, single bits won't do - encode whatever you need per pixel. The overall idea remains the same.)
Generating a bitmask (or other such structure) for your sprite will enable you to just use the AABB method; all you'd have to do is use the generated bitmask instead of the texture data directly, and everything else stays the same as before. But how do we generate it? That's the true difficulty of this method, because generating your own image basically replicates the work your graphics card does when you tell it to rotate things.
You would essentially "draw" out the rotated image yourself. This can be done by stepping through your base texture's image data pixel by pixel and applying a rotation matrix to each pixel to find its destination in your bitmask/buffer. Once you have the destination, test the image data for "block" or "not block" (using transparency, as you mentioned) and write a 1 or 0 there accordingly.
While you're generating, you should also keep track of the local minima and maxima: how far left, right, up, and down your rotated image extends. That gives it an actual true AABB to live inside for quick checks (i.e. "Do I even need to check per-pixel collision?").
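A rough sketch of the generation step (the names and layout are mine, not from any XNA API; note that forward-mapping source pixels like this can leave pinholes in the mask, which is the interpolation issue discussed next):

    using System;

    // solid[x, y] is true where the un-rotated sprite has a non-transparent pixel.
    static bool[,] BuildRotatedMask(bool[,] solid, float radians,
                                    out int minX, out int minY, out int maxX, out int maxY)
    {
        int w = solid.GetLength(0), h = solid.GetLength(1);
        float cos = (float)Math.Cos(radians), sin = (float)Math.Sin(radians);
        float cx = w / 2f, cy = h / 2f;

        // The rotated sprite always fits in a square whose side is the diagonal.
        int size = (int)Math.Ceiling(Math.Sqrt(w * w + h * h));
        var mask = new bool[size, size];
        minX = minY = size;
        maxX = maxY = 0;

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (!solid[x, y]) continue;            // transparent: nothing to mark
                float dx = x - cx, dy = y - cy;        // rotate around the sprite center
                int rx = (int)(dx * cos - dy * sin + size / 2f);
                int ry = (int)(dx * sin + dy * cos + size / 2f);
                if (rx < 0 || ry < 0 || rx >= size || ry >= size) continue;
                mask[rx, ry] = true;                   // mark "block"
                if (rx < minX) minX = rx;
                if (rx > maxX) maxX = rx;
                if (ry < minY) minY = ry;
                if (ry > maxY) maxY = ry;
            }
        return mask;
    }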
To be fully accurate, you will probably also need to know which interpolation/rounding algorithm is in use (bilinear, nearest neighbor, etc.), which can get ugly. Graphics systems often do very complicated things, so taking ALL of this into account just for collision is pretty extreme. At the end of the day, even with this method the result may not be truly "pixel perfect" in the sense of being perfectly synchronized with the rendered output, unless you go very far in replicating exactly what XNA / DirectX is doing.
Finally, when does this generation occur? The answer: every time anything rotates! Otherwise you'll be checking stale data. You could keep a single buffer per sprite and keep overwriting it so as not to hog memory, but that still means regenerating potentially once per frame if something rotates continuously, and several times per frame if multiple sprites are all rotating. Not the most computationally friendly approach.
I'm using Oxyplot HeatMapSeries for representing some graphical data.
For a new application I need to represent the data with isosurfaces, something looking like this:
Some ideas around this:
I know the ContourSeries can do the isolines, but I can't find any option that allows me to fill the gaps between the lines. Does this option exist?
I know the HeatMapSeries can be shown under the ContourSeries so I can get a similar result, but it does not fit our needs.
Another option would be limiting the HeatMapSeries colours and eliminating the interpolation. Is this possible?
If anyone has another approach to the solution, I'd be glad to hear it!
Thanks in advance!
I'm evaluating whether OxyPlot will meet my needs, and this question interests me... From looking at the ContourSeries source code, it appears to find and render only the contour lines, not fill the areas between them. Looking at AreaSeries, I don't think you could just feed it contours, because it expects two sets of points which, when the ends are connected, form a simple closed polygon. The best idea I have is to "rasterize" your data, rounding each data point down to the nearest contour level, and then plot a heatmap of that rasterized data under the contours. The ContourSeries appears to calculate a level step that gives 20 levels across the data by default.
My shortcut for rasterizing based on a step value is to divide the data by the level step you want, then truncate the number with Math.Floor.
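For example (data, rows, cols, min and max are placeholders for your own values):

    double step = (max - min) / 20.0; // mimic ContourSeries' default of ~20 levels
    var rastered = new double[rows, cols];
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            rastered[i, j] = Math.Floor(data[i, j] / step) * step;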
Looking at HeatMapSeries, it looks like you could try turning interpolation off, using the HeatMapRenderMethod.Rectangles render method, or supplying a LinearColorAxis with fewer steps and letting the rendering do the rasterization. The palettes available for a LinearColorAxis can be seen in the OxyPalettes source: BlueWhiteRed31, Hot64, Hue64, BlackWhiteRed, BlueWhiteRed, Cool, Gray, Hot, Hue, HueDistinct, Jet, and Rainbow.
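An untested sketch of that configuration, reusing rastered from the snippet above:

    using OxyPlot;
    using OxyPlot.Axes;
    using OxyPlot.Series;

    var model = new PlotModel();
    model.Axes.Add(new LinearColorAxis
    {
        Position = AxisPosition.Right,
        Palette = OxyPalettes.Jet(8)                   // few steps -> visible bands
    });
    model.Series.Add(new HeatMapSeries
    {
        X0 = 0, X1 = cols - 1, Y0 = 0, Y1 = rows - 1,
        Interpolate = false,                           // no smoothing between cells
        RenderMethod = HeatMapRenderMethod.Rectangles, // hard-edged cells
        Data = rastered
    });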
I'm not currently in a position to run OxyPlot to test things, but I figured I would share what I could glean from the source code and limited documentation.
I'm looking at creating a heatmap of numerical data spread over various locations within a building. I've spent a few hours researching data mapping and am looking for some advice. I am new to GIS. The majority of available options are tile APIs that use lat/long, which are overkill for my requirements...
Ultimately, I just want to output a background image (a floor plan) with a heatmap overlay demonstrating areas of high intensity. The data is bound to specific locations (for example, activity level: 14, location: reception entrance) and so is not randomly distributed over the map. The data has timestamps, and the final objective is to print PNGs of hourly activity for animation.
I feel like I have two options.
I like this tutorial (http://dylanvester.com/post/Creating-Heat-Maps-with-NET-20-%28C-Sharp%29.aspx) as it offers a huge amount of flexibility and the final imagery is very similar to what I would like - it's a great head start. That said, I'd need to assign locations such as "reception entrance" to an x,y co-ordinate, or even a number of x,y co-ordinates. I'd then need to process a matrix prior to every heatmap, taking data from my CSV files and placing activity values in the appropriate co-ordinates.
The other option I think I have is to create a custom shapefile (?) from the floor plan. That is, create a vector graphic with defined regions, each attributable to a taggable location. This seems the most flexible option, but I'm really, really struggling to find out how to create shapefiles.
My unfamiliarity with GIS terminology is making searches difficult. The latter seems the most sensible solution (using the shapefile with something like https://gist.github.com/1370472 to change the activity values over time).
Links found:
guthcad.com/cad2shape.htm (but don't have CAD drawing, just raster floorplan)
stackoverflow.com/questions/4014072/arcgis-flex-overlay-floor-plan-png (unhelpful, don't want tiled)
oliverobrien.co.uk/2010/01/simple-choropleth-maps-in-quantum-gis/
gis.stackexchange.com/questions/20901/using-gis-for-interactive-floor-plan (looks great)
To summarise: I'd like to map data bound to locations within a building. There's very good code in a C# tutorial I'd like to use, but linking the activity data to co-ordinates is potentially messy (although it could allow for describing transitions of activity between locations, since vectors between co-ordinates could be used...). The other option is to create an image with regions that can be linked to CSV data by something like QGIS. Could people with more experience suggest the best direction, or even alternatives?
Thank you!
I recently did something similar for a heatmap of certain events in the USA.
My input data was simply a CSV file with three columns: Latitude, Longitude, and Number of Events.
After examining available options, I ended up using GHeat.Net. It was quite easy to use and required only a little modification to meet my needs.
The output is a transparent PNG that I then overlaid onto Google Maps.
Although your scale is quite different, I imagine the same solution should work in your case.
UPDATE
If the x,y values are integers in a reasonably small range, and if you have enough samples, you might simply create a (sparse?) array, with each array element's value being the number of samples at that co-ordinate. Identify the "hottest" array element (the one with the most samples) and equate that to white in the heat map, with lesser values corresponding to colder colors. In other words, normalize all values in the array using the highest value, map the normalized values to a color scale, and then map the array to a PNG.
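A minimal sketch of that idea using System.Drawing (samples, width and height are placeholders; SetPixel is slow, so use LockBits for real data):

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Linq;

    var counts = new int[width, height];
    foreach (var (x, y) in samples)                   // integer co-ordinates
        counts[x, y]++;

    int max = counts.Cast<int>().Max();               // the "hottest" element
    using (var bmp = new Bitmap(width, height))
    {
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
            {
                int v = max == 0 ? 0 : 255 * counts[x, y] / max; // normalize to 0..255
                bmp.SetPixel(x, y, Color.FromArgb(v, v, v));     // swap in a heat palette here
            }
        bmp.Save("heatmap.png", ImageFormat.Png);
    }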
Heat maps like GHeat create a sphere of influence around each data point. Depending on your data, you may not need that.
If your sample rate is not high enough, you could lift the sphere of influence code out of GHeat and apply it to your own array.
The sphere of influence stuff basically adds a value of "1" to the specific coordinate in the data sample, and also adds a smaller value to adjacent pixels in the map in order to provide for smoother-looking maps. I don't know the specific algorithm used in GHeat, but the basic idea is to add to the specific x,y value as well as neighbors using a pattern something like this:
0.25 | 0.5 | 0.25
-----------------
0.5 | 1.0 | 0.5
-----------------
0.25 | 0.5 | 0.25
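Applied literally (this is just the pattern above, not GHeat's actual code), splatting one sample into a float accumulation buffer looks like:

    static readonly float[,] Kernel =
    {
        { 0.25f, 0.5f, 0.25f },
        { 0.5f,  1.0f, 0.5f  },
        { 0.25f, 0.5f, 0.25f },
    };

    static void Splat(float[,] heat, int x, int y)
    {
        for (int ky = -1; ky <= 1; ky++)
            for (int kx = -1; kx <= 1; kx++)
            {
                int px = x + kx, py = y + ky;
                if (px < 0 || py < 0 ||
                    px >= heat.GetLength(0) || py >= heat.GetLength(1))
                    continue;                         // neighbor falls off the map
                heat[px, py] += Kernel[ky + 1, kx + 1];
            }
    }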
I'm having an issue with converting a BitmapImage (WPF) to grayscale, whilst keeping the alpha channel. The source image is a PNG.
The MSDN article here works fine, but it removes the alpha channel.
Is there any quick and effective way of converting a BitmapImage to grayscale?
You should have a look at image transformation using matrices.
In particular, this article describes how to convert a bitmap to grayscale using a ColorMatrix. (It is written in VB.NET, but it should be easy enough to translate to C#).
I haven't tested if it works with the alpha channel, but I'd say it's worth a try, and it definitely is a quick and effective way of modifying bitmaps.
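For reference, a hedged C# translation of the ColorMatrix approach that article describes; the fourth row and column leave the alpha channel untouched, which is why I'd expect transparency to survive:

    using System.Drawing;
    using System.Drawing.Imaging;

    static Bitmap ToGrayscale(Bitmap source)
    {
        var gray = new ColorMatrix(new[]
        {
            new float[] { 0.299f, 0.299f, 0.299f, 0, 0 },
            new float[] { 0.587f, 0.587f, 0.587f, 0, 0 },
            new float[] { 0.114f, 0.114f, 0.114f, 0, 0 },
            new float[] { 0,      0,      0,      1, 0 }, // alpha passes through
            new float[] { 0,      0,      0,      0, 1 },
        });
        var result = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppArgb);
        using (var g = Graphics.FromImage(result))
        using (var attrs = new ImageAttributes())
        {
            attrs.SetColorMatrix(gray);
            g.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height),
                        0, 0, source.Width, source.Height, GraphicsUnit.Pixel, attrs);
        }
        return result;
    }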
It really depends upon what your source PixelFormat is. Assuming your source is PixelFormats.Bgra32 and that you want to go to grayscale, you might consider using a target pixel format of PixelFormats.Gray16. However, Gray16 doesn't support alpha; it just has 65,536 graduations between black and white, inclusive.
You have a few options. One is to stay with Bgra32 and just set the blue, green and red channels to the same value. That way you can keep the alpha channel. This may be wasteful if you don't require an 8-bit alpha channel (for differing levels of alpha per pixel).
Another option is to use an indexed pixel format such as PixelFormats.Indexed8 and create a palette that contains the gray colours and alpha values you need. If you don't need to blend alpha, you could make the palette colour at position zero completely transparent (an alpha of zero) and then progress from solid black at index 1 through to white at index 255.
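A sketch of that palette (width, height and pixels are placeholders; the pixel buffer holds one palette index per byte):

    using System.Collections.Generic;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    var colors = new List<Color> { Color.FromArgb(0, 0, 0, 0) }; // index 0: transparent
    for (int i = 1; i <= 255; i++)
    {
        byte v = (byte)((i - 1) * 255 / 254);                    // black at 1, white at 255
        colors.Add(Color.FromArgb(255, v, v, v));
    }
    var palette = new BitmapPalette(colors);

    var bmp = BitmapSource.Create(width, height, 96, 96,
                                  PixelFormats.Indexed8, palette, pixels, width);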
If relying on API calls fails, you can always try the 'do it yourself' approach: just get access to the RGBA bytes of the picture, and replace every RGBA with MMMA, where M = (R+G+B)/3.
If you want it more perfect, you should add weights to the contributions of the RGB components. The eye is more receptive to green, so that value should weigh more.
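For example, on a BGRA byte buffer (as you'd get from BitmapSource.CopyPixels; pixels is a placeholder), using the common BT.601 weights:

    for (int i = 0; i < pixels.Length; i += 4)
    {
        byte b = pixels[i], g = pixels[i + 1], r = pixels[i + 2];
        byte m = (byte)(0.114 * b + 0.587 * g + 0.299 * r);   // weighted average
        pixels[i] = pixels[i + 1] = pixels[i + 2] = m;        // alpha at i+3 stays
    }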
While not exactly quick and easy, a ShaderEffect would do the job and perform quite well. I've done it myself, and it works great. This article references how to do it and has source associated. I've not used his source, so I can't vouch for it. If you run into problems, ask, and I may be able to post some of my code.
Not every day you get to use HLSL in your LOB app. :)