I'm looking at creating a heatmap of numerical data spread over various locations within a building. I've spent a few hours researching data mapping, etc. and am looking for some advice. I am new to GIS. Most of the available options are tile APIs that use lat/long, and are overkill for my requirements...
Ultimately, I just want to output a background image (a floor plan) with the heatmap overlay demonstrating areas of high intensity. The data is bound to specific locations (example, activity level: 14, location: reception entrance) and so is not randomly distributed over the map. Data has timestamps, and the final objective is to print PNGs of hourly activity for animation.
I feel like I have two options.
I like this tutorial (http://dylanvester.com/post/Creating-Heat-Maps-with-NET-20-%28C-Sharp%29.aspx) as it offers a huge amount of flexibility and the final imagery is very similar to what I would like - it's a great head start. That said, I'd need to assign locations such as "reception entrance" to an x,y co-ordinate, or even a number of x,y co-ordinates. I'd then need to process a matrix prior to every heatmap, taking data from my CSV files and placing activity values in the appropriate co-ordinates.
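Roughly, the pre-processing I have in mind would look something like this (the locations, coordinates and CSV rows below are made up for illustration):
    using System.Collections.Generic;

    // Map named locations to pixel positions on the floor-plan image (assumed values).
    var locationCoords = new Dictionary<string, (int X, int Y)>
    {
        ["reception entrance"] = (120, 340),
        ["lobby"]              = (200, 300),
    };

    // Intensity matrix the heat-map code would consume; same size as the floor plan.
    var intensity = new float[800, 600];

    // Rows already parsed from one hour of CSV data: (location, activity).
    var rows = new[] { ("reception entrance", 14f), ("lobby", 3f) };

    foreach (var (location, activity) in rows)
    {
        if (locationCoords.TryGetValue(location, out var p))
            intensity[p.X, p.Y] += activity;   // unknown locations are simply skipped
    }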
The other option I think I have is to create a custom shapefile (?) from the floor plan. That is, create a vector graphic with defined regions, where each is attributable to a taggable location. This seems the most flexible option, but I'm really, really struggling to find out how to create shapefiles.
My unfamiliarity with GIS terminology is making searches difficult. The latter seems the most sensible solution (use the shapefile with something like https://gist.github.com/1370472) to change the activity values over time.
Links found:
guthcad.com/cad2shape.htm (but don't have CAD drawing, just raster floorplan)
stackoverflow.com/questions/4014072/arcgis-flex-overlay-floor-plan-png (unhelpful, don't want tiled)
oliverobrien.co.uk/2010/01/simple-choropleth-maps-in-quantum-gis/
gis.stackexchange.com/questions/20901/using-gis-for-interactive-floor-plan (looks great)
To summarise: I'd like to map data bound to locations within a building. There's very good code in a C# tutorial I'd like to use, but the linking of activity data to co-ordinates is potentially messy (although it could allow for describing transitions of activity between locations, since vectors between co-ordinates could be used...). The other option is to create an image with regions that can be linked to CSV data by something like QGIS. Could people with more experience suggest the best direction, or even alternatives?
Thank you!
I recently did something similar for a heatmap of certain events in the USA.
My input data was simply a CSV file with three columns: Latitude, Longitude, and Number of Events.
After examining available options, I ended up using GHeat.Net. It was quite easy to use and required only a little modification to meet my needs.
The output is a transparent PNG that I then overlaid onto Google Maps.
Although your scale is quite different, I imagine the same solution should work in your case.
UPDATE
If the x,y values are integers in a reasonably small range, and if you have enough samples, you might simply create a (sparse?) array, with each array element's value being the number of samples at that coordinate. Identify the "hottest" array element (the one with the most samples) and equate that to "white" in a heat map, with lesser values corresponding to colder colors (or in other words, normalize all values in the array using the highest value and map the normalized values to a color scale). Map the array to a PNG.
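For example (a minimal sketch using System.Drawing; grey scale here, but you can swap in any colour ramp):
    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    int width = 640, height = 480;                                          // assumed map size
    var samples = new[] { (X: 10, Y: 20), (X: 10, Y: 20), (X: 11, Y: 20) }; // illustrative data

    // Count samples per coordinate.
    var counts = new int[width, height];
    foreach (var (x, y) in samples)
        counts[x, y]++;

    // Find the "hottest" cell so everything can be normalized against it.
    int max = 1;
    foreach (int c in counts) max = Math.Max(max, c);

    // Map the normalized counts to pixels and write the PNG.
    using var bitmap = new Bitmap(width, height);
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            int v = (int)(255.0 * counts[x, y] / max);       // 255 = hottest
            bitmap.SetPixel(x, y, Color.FromArgb(v, v, v));
        }
    bitmap.Save("heatmap.png", ImageFormat.Png);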
Heat maps like GHeat create a sphere of influence around each data point. Depending on your data, you may not need that.
If your sample rate is not high enough, you could lift the sphere of influence code out of GHeat and apply it to your own array.
The sphere of influence stuff basically adds a value of "1" to the specific coordinate in the data sample, and also adds a smaller value to adjacent pixels in the map in order to provide for smoother-looking maps. I don't know the specific algorithm used in GHeat, but the basic idea is to add to the specific x,y value as well as neighbors using a pattern something like this:
0.25 | 0.5 | 0.25
-----------------
0.5 | 1.0 | 0.5
-----------------
0.25 | 0.5 | 0.25
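Applied to your own array, splatting one sample with that pattern would look something like this (illustrative only, not GHeat's actual code):
    // Weights for the centre pixel and its eight neighbours.
    float[,] kernel =
    {
        { 0.25f, 0.5f, 0.25f },
        { 0.5f,  1.0f, 0.5f  },
        { 0.25f, 0.5f, 0.25f },
    };

    // Add one data sample at (x, y), spreading a little weight onto the neighbours.
    void AddSample(float[,] intensity, int x, int y)
    {
        int w = intensity.GetLength(0), h = intensity.GetLength(1);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                int px = x + dx, py = y + dy;
                if (px >= 0 && px < w && py >= 0 && py < h)
                    intensity[px, py] += kernel[dx + 1, dy + 1];
            }
    }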
What I'm trying to do: I want to compress a 2D grey-scale map (a 2D array of float values between 0 and 1) into a DFT representation. I then want to be able to sample the value of points at continuous coordinates (i.e. arbitrary points in between the data points in the original 2D map).
What I've tried: So far I've looked at Exocortex and some similar libraries, but they seem to be missing functions for sampling a single point or performing lossy compression. Though the math is a bit above my level, I might be able to derive methods to do these things. Ideally someone can point me to a C# library that already has this functionality. I'm also concerned that libraries using the row-column FFT algorithm won't produce sinusoid functions that can be easily sampled this way, since they unwind the 2D array into a 1D array.
More detail on what I'm trying to do: The intended application for all this is an experiment in efficiently pre-computing, storing, and querying line-of-sight information. This is similar to the way spherical harmonic light probes are used to approximate lighting on dynamic objects. A grid of visibility probes stores compressed visibility data using a small number of float values each. From this grid, an observer position can calculate an interpolated probe, then use that probe to sample the estimated visibility of nearby positions. The results don't have to be perfectly accurate; this is intended as a first pass that can cheaply identify objects that are almost certainly visible or obscured, and then perhaps perform more expensive ray-casting on the few on-the-fence objects.
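For clarity, the single-point sampling I'm after would be something like this (a sketch only; the coefficient layout, with kept frequencies -K..K on each axis, is an assumption):
    using System;
    using System.Numerics;

    static class FourierMap
    {
        // Evaluate a truncated 2D Fourier series at an arbitrary point (x, y) in [0,1)^2.
        // coeffs[kx + K, ky + K] holds the complex coefficient for frequency (kx, ky).
        public static double Sample(Complex[,] coeffs, int K, double x, double y)
        {
            Complex sum = Complex.Zero;
            for (int kx = -K; kx <= K; kx++)
                for (int ky = -K; ky <= K; ky++)
                {
                    double phase = 2.0 * Math.PI * (kx * x + ky * y);
                    sum += coeffs[kx + K, ky + K] * Complex.Exp(Complex.ImaginaryOne * phase);
                }
            return sum.Real;   // the source map is real-valued, so the imaginary part is ~0
        }
    }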
I'm using Oxyplot HeatMapSeries for representing some graphical data.
For a new application I need to represent the data with isosurfaces, something looking like this:
Some ideas around this:
I know the ContourSeries can do the isolines, but I can't find any option that allows me to fill the gaps between lines. Does this option exist?
I know the HeatMapSeries can be shown under the ContourSeries so I can get a similar result, but it does not fit our needs.
Another option would be limiting the HeatMapSeries colours and eliminating the interpolation. Is this possible?
If anyone has another approach to the solution I will hear it!
Thanks in advance!
I'm evaluating whether Oxyplot will meet my needs and this question interests me... from looking at the ContourSeries source code, it appears to be only for finding and rendering the contour lines, but not filling the area between the lines. Looking at AreaSeries, I don't think you could just feed it contours because it is expecting two sets of points which when the ends are connected create a simple closed polygon. The best guess I have is "rasterizing" your data so that you round each data point to the nearest contour level, then plot the heatmap of that rasterized data under the contour. The ContourSeries appears to calculate a level step that does 20 levels across the data by default.
My shortcut for doing the rasterizing based on a step value is to divide the data by the level step you want, then truncate the number with Math.Floor.
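Something like this (a small sketch; multiplying back by the step keeps the rasterized values in the original units so the colour axis still lines up):
    using System;

    static class ContourRaster
    {
        // Snap every value down to the contour level below it.
        public static double[,] Rasterize(double[,] data, double levelStep)
        {
            int rows = data.GetLength(0), cols = data.GetLength(1);
            var result = new double[rows, cols];
            for (int i = 0; i < rows; i++)
                for (int j = 0; j < cols; j++)
                    result[i, j] = Math.Floor(data[i, j] / levelStep) * levelStep;
            return result;
        }
    }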
Looking at HeatMapSeries, it looks like you can possibly try to turn interpolation off, use a HeatMapRenderMethod.Rectangles render method, or supply a LinearColorAxis with fewer steps and let the rendering do the rasterization perhaps? The Palettes available for a LinearColorAxis can be seen in the OxyPalettes source: BlueWhiteRed31, Hot64, Hue64, BlackWhiteRed, BlueWhiteRed, Cool, Gray, Hot, Hue, HueDistinct, Jet, and Rainbow.
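I haven't tested it, but based on the source and docs I'd expect something along these lines to give the banded look (verify the property names against your OxyPlot version):
    using OxyPlot;
    using OxyPlot.Axes;
    using OxyPlot.Series;

    double[,] data = new double[20, 20];   // your grid values go here

    var model = new PlotModel { Title = "Banded heat map" };

    model.Axes.Add(new LinearColorAxis
    {
        Position = AxisPosition.Right,
        Palette = OxyPalettes.Jet(5)       // few colour steps => visible bands rather than a gradient
    });

    model.Series.Add(new HeatMapSeries
    {
        X0 = 0, X1 = 10, Y0 = 0, Y1 = 10,
        Data = data,
        Interpolate = false,                           // no smoothing between cells
        RenderMethod = HeatMapRenderMethod.Rectangles  // one rectangle per cell
    });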
I'm not currently in a position to run OxyPlot to test things, but I figured I would share what I could glean from the source code and limited documentation.
I'm working on scientific imaging software for my university, and I've encountered a major problem. The scientific camera (Apogee Alta U57) at my lab provides images as a 16bpp array - that's 0-65535 values per pixel! We want to keep this range, but we can't display it on a monitor (0-255 grayscale range). So I found a way to resolve this problem: simply make use of colors, and display the whole image as a heatmap (from black and blue, through green and red, to pure white).
I mean something like this - Example heatmap image I want to achieve
My only question is: How to efficiently convert 16bpp array of pixel values to complete heatmap bitmap in c#? Are there any libraries for doing that? If not, how do I achieve that using .NET resources?
My idea was to create a function that maps the 65536 values onto 8-bit (R, G, B) triplets, but it's a tough job - especially without using the HSV model.
I would be much obliged for any help provided!
Your question consists of several parts:
reading in the 16 bit pixel data values
mapping them to 24 bit rgb colors
writing them out to an image file
I'll skip part one and three and give you a few ideas about part 2.
It is in fact harder than it seems. A unique mapping that doesn't lose any information is simple, in fact trivial; a little bit-shifting will do.
But you also want the result to work visually, meaning not so much that it should be visually appealing, but that it should make sense to a human eye. So we need a mapping that has a credible yet large enough gradient.
For this you should experiment a little. I suggest making use of the LinearGradientBrush, as I show here. Have a look at the interpolateColors function! It uses only 6 colors in the example, way too few for your case!
You should pick many more; you may need to go through the color space in a spiral...
The trick for you will be to choose stop colors that are both nice and numerous enough to create a set of 64k unique colors, ideally going from bluish to reddish...
You will need to test the result for uniqueness; in fact you may want to create a pair of dictionaries (value-to-color and color-to-value) for the mappings.
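As a rough illustration of the stop-colour idea (this is not the linked interpolateColors code, and the stops here are only placeholders):
    using System;
    using System.Drawing;

    static class HeatPalette
    {
        // Example stops: black -> blue -> green -> red -> white.
        static readonly Color[] Stops =
            { Color.Black, Color.Blue, Color.Green, Color.Red, Color.White };

        // Map a 16-bit intensity to a colour by linear interpolation between stops.
        // With only five stops the result will not be unique across all 65536 values;
        // add many more stops (or spiral through the colour space) for that.
        public static Color Map(ushort value)
        {
            double t = value / 65535.0 * (Stops.Length - 1);
            int i = Math.Min((int)t, Stops.Length - 2);   // lower stop index
            double f = t - i;                             // fraction towards the next stop
            Color a = Stops[i], b = Stops[i + 1];
            return Color.FromArgb(
                (int)(a.R + (b.R - a.R) * f),
                (int)(a.G + (b.G - a.G) * f),
                (int)(a.B + (b.B - a.B) * f));
        }
    }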
First of all, I apologize if the question has already been asked, but in about 10 hours of intensive research on every single link Google offered for every single phrase I gave it, I wasn't able to find anything that could help me with my problem.
What I want to do is the following:
I retrieve two excel sheets with data from two different scientific measurements. Each sheet contains information that can easily be compared to the other.
The only difference between the two sheets is the amount of data points they contain.
For example: The first sheet contains data for a time span of 200 seconds, with one point representing 1 second. The second sheet also contains data for the same time span, but with one point representing 0.5 seconds.
The problem I have to solve is to "scale" the sheet with fewer data points in a way that they can easily be compared in a single chart, so that each line in the chart uses the same space on the X axis.
The problem I'm having with this task is that I lack the mathematical background to create an algorithm.
I've already created the entire application with a GUI, the import of the excel sheets, and smoothing with a moving average (only useful if the datasets have equal length).
Any idea or link to any place where this could be explained is welcome.
I also want to say that any code I currently have is completely irrelevant to this question, it's just about an additional method with said functionality.
Thanks in advance,
marfuc
If there is a direct correlation between the data points of both sets - ie the time matches up for both - then it might be sufficient to do a linear interpolation on the smaller set to generate the missing points.
For instance, let's say your first set of data is:
Time Value
12:00:00.0 100.0
12:00:01.0 120.0
12:00:02.0 117.5
...and your second set looks like:
Time Value
12:00:00.0 2.5
12:00:00.5 3.0
12:00:01.0 2.6
12:00:01.5 2.9
12:00:02.0 2.8
We can fill in the gaps in the first list in a couple of ways, depending on what you're trying to do with the data afterwards.
The simplest is to do a linear interpolation of the values. If your points are equidistant from the value you're looking for (ie: you're finding the value at the half-way point) then just average them together at the missing points:
Time        Value    Lerp
12:00:00.0  100.0
12:00:00.5           110.0
12:00:01.0  120.0
12:00:01.5           118.75
12:00:02.0  117.5
This is OK if the sample rate is high enough with relation to the rate at which the input varies. I've seen a lot of audio processing algorithms that use this sort of calculation for doubling sample rate. Doesn't work so well when you have high frequency data with sample rates that are too low to capture the transitions well.
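In code, the half-way case above is just an average; the general linear interpolation (names are illustrative) is:
    static class Interp
    {
        // Linear interpolation between two known samples a and b,
        // where t is the fraction of the way from a to b (0.5 = half-way).
        public static double Lerp(double a, double b, double t) => a + (b - a) * t;
    }

    // e.g. the 12:00:00.5 value from the table above:
    // Interp.Lerp(100.0, 120.0, 0.5)  ->  110.0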
The second option is to use a spline function to fit a curve against the series of points, then synthesize the missing points as offsets on the curve. This will give you smoother and more natural interpolations, with humps in the data looking much more realistic. This will also give you a fairly good way to offset your data if the timing isn't well aligned between the data sets - calculate each point as an offset along the curve with distance equal to the timing offset. There are plenty of spline implementations out there that you could use for this. I'd suggest Catmull-Rom as a starting algorithm.
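For reference, a minimal uniform Catmull-Rom segment evaluator looks like this (a sketch; real spline libraries handle end conditions and non-uniform spacing for you):
    static class Spline
    {
        // Interpolate between p1 and p2 with t in [0, 1];
        // p0 and p3 are the neighbouring samples that shape the curve.
        public static double CatmullRom(double p0, double p1, double p2, double p3, double t)
        {
            double t2 = t * t, t3 = t2 * t;
            return 0.5 * ((2.0 * p1)
                        + (-p0 + p2) * t
                        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
        }
    }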
Warning: If you're doing some sort of statistical analysis on the outputs then you're not going to get good results doing this, no matter how you do it. Cut the bigger group down instead of fabricating data into the smaller group if analysis is your goal.
Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bef part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps.
1. Build a list of triangles that are above the elevation threshold.
2. Take the union of these triangles to make a polygonal area.
3. Determine the boundary of the polygon.
4. If necessary, smooth the polygon boundary to make your layer look ok when displayed.
If you're trying to generate good-looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about the triangle. If some vertices are above and others below, split your triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation), and include the one or two of those new triangles that are above the threshold in your highland list.
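The vertex-splitting part is just linear interpolation along each edge that crosses the threshold; something like this (type and field names are made up):
    struct Vertex
    {
        public double X, Y, Elevation;
    }

    static class TriangleSplit
    {
        // Point on the edge from p to q where the interpolated elevation
        // equals the threshold (assumes p and q straddle the threshold).
        public static Vertex CrossingPoint(Vertex p, Vertex q, double threshold)
        {
            double t = (threshold - p.Elevation) / (q.Elevation - p.Elevation);
            return new Vertex
            {
                X = p.X + (q.X - p.X) * t,
                Y = p.Y + (q.Y - p.Y) * t,
                Elevation = threshold
            };
        }
    }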
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay algorithm (which you can look up) to organize your points into triangles. Then follow the same algorithm I mentioned above. Warning: this is going to look kind of sketchy if you don't have many points.
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays) you should be able to use array querying techniques to select all of the points where the elevation is above a certain threshold. For example, in python with numpy you can do:
indices = numpy.where(array > value)
And the indices variable will contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example IDL has the WHERE() command, and similar things can be done in Matlab).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/long/elevation array and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares where the max height is not above the search height, and drilling further into those squares where parts of the ground were above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter.)
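A rough sketch of that drill-down (all type and member names are made up for illustration):
    using System.Collections.Generic;
    using System.Linq;

    record GeoPoint(double Lat, double Lon, double Elevation);

    class Square
    {
        public double MaxElevation;         // pre-calculated over everything inside this square
        public Square[] Children;           // null for the smallest squares
        public List<GeoPoint> Points;       // only populated on the smallest squares

        public void CollectAbove(double threshold, List<GeoPoint> result)
        {
            if (MaxElevation < threshold)
                return;                     // the whole square can be discarded

            if (Children == null)           // smallest square: test the individual points
            {
                result.AddRange(Points.Where(p => p.Elevation >= threshold));
                return;
            }

            foreach (var child in Children)
                child.CollectAbove(threshold, result);
        }
    }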
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform lat/lon divisions (bearing in mind that that does not give uniform distance divisions). But if that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then when the aircraft needs data about points at or above an altitude, for a specific piece of terrain, the code only needs to read out a small part of the data in this array, which is indexed so that spatially contiguous "voxels" are also contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the indexes for longitude would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
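A minimal sketch of that approach (sort once up front, then each query is a lower-bound binary search):
    using System;

    static class ElevationIndex
    {
        // elevations must be sorted ascending (Array.Sort once, up front).
        // Returns the first index whose value is >= threshold; everything from
        // that index to the end of the array is at or above the threshold.
        public static int FirstAtOrAbove(double[] elevations, double threshold)
        {
            int lo = 0, hi = elevations.Length;
            while (lo < hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (elevations[mid] < threshold) lo = mid + 1;
                else hi = mid;
            }
            return lo;
        }
    }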
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.