OpenGL: draw every Nth vertex - C#

I am working on writing an application that contains line plots of large datasets.
My current strategy is to load up my data for each channel into 1D vertex buffers.
I then use a vertex shader when drawing to assemble my buffers into vertices (so I can reuse one of my buffers for multiple sets of data).
This is working pretty well, and I can draw a few hundred million data points without slowing down too much.
To stretch things a bit further, I would like to reduce the number of points that actually get drawn through simple reduction (i.e. draw every Nth point), as there is not much point plotting 1000 points that are all represented by a single pixel.
One way I can think of doing this is to use a geometry shader and emit only every Nth point, but I am not sure if this is the best plan of attack.
Would this be the recommended way of doing this?

You can do this much more simply by setting the stride of each vertex attribute to N times its normal value.
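For example, a minimal sketch using OpenTK (sampleBufferId, pointCount, and the shader setup are assumed to exist elsewhere; they are not from the question). The only changes from normal drawing are the stride argument and the reduced vertex count:

int n = 10; // decimation factor: draw every 10th sample
GL.BindBuffer(BufferTarget.ArrayBuffer, sampleBufferId);
GL.VertexAttribPointer(
    0,                             // attribute location in the vertex shader
    1,                             // one float (the sample value) per vertex
    VertexAttribPointerType.Float,
    false,
    n * sizeof(float),             // stride: skip n - 1 samples between vertices
    0);
GL.EnableVertexAttribArray(0);
GL.DrawArrays(PrimitiveType.LineStrip, 0, pointCount / n);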

Related

Sampling an arbitrary point within a DFT?

What I'm trying to do: I want to compress a 2D grey-scale map (2D array of float values between 0 and 1) into a DFT. I then want to be able to sample the value of points in continuous coordinates (i.e. arbitrary points in between the data points in the original 2D map).
What I've tried: So far I've looked at Exocortex and some similar libraries, but they seem to be missing functions for sampling a single point or performing lossy compression. Though the math is a bit above my level, I might be able to derive methods to do these things. Ideally, someone can point me to a C# library that already has this functionality. I'm also concerned that libraries that use the row-column FFT algorithm don't produce sinusoid functions that can be easily sampled this way, since they unwind the 2D array into a 1D array.
More detail on what I'm trying to do: The intended application for all this is an experiment in efficiently pre-computing, storing, and querying line-of-sight information. This is similar to the way spherical harmonic light probes are used to approximate lighting on dynamic objects. A grid of visibility probes stores compressed visibility data using a small number of float values each. From this grid, an observer position can calculate an interpolated probe, then use that probe to sample the estimated visibility of nearby positions. The results don't have to be perfectly accurate; this is intended as a first pass that can cheaply identify objects that are almost certainly visible or obscured, and then maybe perform more expensive ray-casting on the few on-the-fence objects.
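One note on the row-column concern: the row-column algorithm computes the same 2D DFT coefficients as any other method, so the basis sinusoids stay separable and can be evaluated at continuous coordinates directly. A minimal C# sketch of such a sampler (the coefficient layout and names are my assumptions, not any particular library's API):

using System;
using System.Numerics;

// Evaluate the inverse 2D DFT of a W x H real-valued map at a continuous
// point (x, y). coeffs[v, u] holds the complex DFT coefficients; zeroing
// all but a few low-frequency coefficients gives the lossy compression.
static float SampleDft(Complex[,] coeffs, int w, int h, float x, float y)
{
    Complex sum = Complex.Zero;
    for (int v = 0; v < h; v++)
    {
        for (int u = 0; u < w; u++)
        {
            // Map u, v to signed frequencies so values between the original
            // grid points interpolate smoothly instead of oscillating.
            int fu = u <= w / 2 ? u : u - w;
            int fv = v <= h / 2 ? v : v - h;
            double phase = 2.0 * Math.PI * (fu * x / w + fv * y / h);
            sum += coeffs[v, u] * Complex.Exp(Complex.ImaginaryOne * phase);
        }
    }
    return (float)(sum.Real / (w * h)); // the source map was real, so take Re
}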

Determine if an image is blurred

I have seen a lot of topics about this, and I understand the theory, but I'm not able to code it.
I have some pictures and I want to determine whether they are blurred or not. I found a library (AForge.dll) and I used it to compute the FFT of an image.
As an example, here are the two images I'm working on:
My code is in C#:
using System.Drawing;
using AForge.Imaging;

public Bitmap PerformFFT(Bitmap picture)
{
    // Load the image into a complex image (AForge requires an 8 bpp
    // grayscale bitmap with power-of-two dimensions here)
    ComplexImage output = ComplexImage.FromBitmap(picture);
    // Perform the forward FFT in place
    output.ForwardFourierTransform();
    // Return the frequency-domain image
    return output.ToBitmap();
}
How can I determine if the image is blurred? I am not very comfortable with the theory and need a concrete example. I saw this post, but I have no idea how to do that.
EDIT:
I'll clarify my question. Once I have the 2D array of complex values in ComplexImage output (the FFT of the image), what C# code (or pseudocode) can I use to determine whether the image is blurred?
The concept of "blurred" is subjective. How much power at high frequencies indicates it's not blurry? Note that a blurry image of a complex scene can have more power at high frequencies than a sharp image of a very simple scene. For example, a sharp picture of a completely uniform scene has no high frequencies whatsoever. Thus it is impossible to define a unique blurriness measure.
What is possible is to compare two images of the same scene and determine which one is more blurry (or, equivalently, which one is sharper). This is what is used in automatic focusing. I don't know exactly what process commercial cameras use, but in microscopy, images are taken at a series of focal depths and compared.
One of the classical comparison methods doesn't involve Fourier transforms at all: compute the local variance (for each pixel, take a small window around it and compute the variance of those values) and average it across the image. The image with the highest mean local variance has the best focus.
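A rough C# sketch of that statistic (treating the image as a plain 2D grayscale array is my assumption, not tied to any library):

// Mean local variance as a relative focus measure: for each pixel, compute
// the variance of a small window around it, then average over the image.
static double MeanLocalVariance(double[,] pixels, int radius)
{
    int h = pixels.GetLength(0), w = pixels.GetLength(1);
    double total = 0;
    int windows = 0;
    for (int y = radius; y < h - radius; y++)
    {
        for (int x = radius; x < w - radius; x++)
        {
            double sum = 0, sumSq = 0;
            int n = 0;
            for (int dy = -radius; dy <= radius; dy++)
            {
                for (int dx = -radius; dx <= radius; dx++)
                {
                    double v = pixels[y + dy, x + dx];
                    sum += v;
                    sumSq += v * v;
                    n++;
                }
            }
            double mean = sum / n;
            total += sumSq / n - mean * mean; // variance of this window
            windows++;
        }
    }
    return total / windows; // higher = sharper, for images of the same scene
}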
Comparing high vs low frequencies as in MBo's answer is comparable to computing the Laplace-filtered image and averaging its absolute values (absolute because the filter can return negative values). The Laplace filter is a high-pass filter, meaning that low frequencies are removed. Since the power in the high frequencies gives a relative measure of sharpness, this statistic does too (again relative: it is to be compared only against images of the same scene, taken under identical circumstances).
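And a matching C# sketch of the Laplace-based statistic (same assumed grayscale array, using the standard 4-neighbour kernel):

using System;

// Mean absolute Laplacian: high-pass filter the image with the kernel
//   0  1  0
//   1 -4  1
//   0  1  0
// and average the absolute response; low frequencies are removed, so the
// result tracks power in the high frequencies.
static double MeanAbsLaplacian(double[,] pixels)
{
    int h = pixels.GetLength(0), w = pixels.GetLength(1);
    double total = 0;
    int count = 0;
    for (int y = 1; y < h - 1; y++)
    {
        for (int x = 1; x < w - 1; x++)
        {
            double lap = pixels[y - 1, x] + pixels[y + 1, x]
                       + pixels[y, x - 1] + pixels[y, x + 1]
                       - 4 * pixels[y, x];
            total += Math.Abs(lap); // absolute value: the filter can go negative
            count++;
        }
    }
    return total / count; // again only comparable across same-scene images
}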
A blurred image has an FFT result with smaller magnitudes in the high-frequency regions. Array elements with low indexes (near Result[0][0]) represent the low-frequency region.
So divide the resulting array into regions by some criterion, sum the magnitudes in both regions, and compare them. For example, select the quarter of the result array (of size M) with index_x < M/2 and index_y < M/2.
For a series of progressively more blurred versions of the same initial image, you should see a higher and higher ratio Sum(Low)/Sum(High).
The result is a square NxN array. Its magnitude has central symmetry (|F(x,y)| = |F(-x,-y)| because the source is purely real), so it is enough to treat the top half of the array, with y < N/2.
Low-frequency components are located near the top-left and top-right corners of the array (smallest values of y; smallest and largest values of x). So sum the magnitudes of the array elements like this:
double low = 0, high = 0;
for (int y = 0; y < N / 2; y++)
{
    for (int x = 0; x < N; x++)
    {
        double amp = Magnitude(y, x); // magnitude of the complex FFT element
        if (y < N / 4 && (x < N / 4 || x >= 3 * N / 4))
            low += amp;
        else
            high += amp;
    }
}
Note that your picture shows the array quadrants swapped around - it is standard practice to shift the zero-frequency component to the center for display.

Offloading calculations to the GPU

I'm performing a large number of calculations. Each calculation is independent of every other; in other words, the task could be parallelized, and I'd like to offload the job to the GPU.
Specifically, I'm creating light/shadow maps for an OpenGL application, and the calculations are a bunch of Vector math, dot products, square roots, etc.
What are my options here? Does OpenGL natively support anything like this, or should I be looking for an external library/module?
A compute shader is the generic equivalent of CUDA, which is an enhanced compute API specific to nVidia. Note that you don't need to use either: you can do calculations using a vertex -> geometry stream, or render to a pixel shader. So long as you can represent the results as a collection of values (a vertex buffer or texture), you can use the rendering pipeline to do your maths.
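A rough sketch of the compute-shader route from C# with OpenTK (dataBufferId and vectorCount are assumed to be set up elsewhere; the shader itself is plain GLSL and requires OpenGL 4.3):

const string src = @"
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { vec4 v[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    v[i].w = sqrt(dot(v[i].xyz, v[i].xyz)); // e.g. store each vector's length
}";

int shader = GL.CreateShader(ShaderType.ComputeShader);
GL.ShaderSource(shader, src);
GL.CompileShader(shader);
int program = GL.CreateProgram();
GL.AttachShader(program, shader);
GL.LinkProgram(program);

GL.UseProgram(program);
GL.BindBufferBase(BufferRangeTarget.ShaderStorageBuffer, 0, dataBufferId);
GL.DispatchCompute(vectorCount / 64, 1, 1); // one GPU thread per element;
                                            // assumes a multiple of 64 for brevity
GL.MemoryBarrier(MemoryBarrierFlags.ShaderStorageBarrierBit);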

Quickly find and render terrain above a given elevation

Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bef part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps:
1. Build a list of triangles that are above the elevation threshold.
2. Take the union of these triangles to make a polygonal area.
3. Determine the boundary of the polygon.
4. If necessary, smooth the polygon boundary to make your layer look OK when displayed.
If you're trying to generate good-looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about the triangle. If some vertices are above and others below, split the triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation). Include the one or two of those new triangles that are above the threshold in your highland list, as in the sketch below.
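A hedged C# sketch of that classification/splitting step (Vertex and the list types here are illustrative, not from any library):

using System.Collections.Generic;
using System.Linq;

record Vertex(double X, double Y, double Elevation);

// Interpolate along an edge to the exact point where it crosses the threshold.
static Vertex Crossing(Vertex lo, Vertex hi, double threshold)
{
    double t = (threshold - lo.Elevation) / (hi.Elevation - lo.Elevation);
    return new Vertex(lo.X + t * (hi.X - lo.X),
                      lo.Y + t * (hi.Y - lo.Y),
                      threshold);
}

// Step 1: classify one triangle and add its above-threshold part(s) to the list.
static void AddHighlandTriangles(Vertex[] tri, double threshold, List<Vertex[]> highland)
{
    var above = tri.Where(v => v.Elevation >= threshold).ToArray();
    var below = tri.Where(v => v.Elevation < threshold).ToArray();
    if (below.Length == 0) { highland.Add(tri); return; } // wholly above: keep
    if (above.Length == 0) return;                        // wholly below: drop
    if (above.Length == 1)
    {
        // One corner above: keep the small triangle cut off at that corner.
        var a = above[0];
        highland.Add(new[] { a, Crossing(below[0], a, threshold),
                                Crossing(below[1], a, threshold) });
    }
    else
    {
        // Two corners above: the kept piece is a quad; split it into two triangles.
        var b = below[0];
        var p = Crossing(b, above[0], threshold);
        var q = Crossing(b, above[1], threshold);
        highland.Add(new[] { above[0], above[1], p });
        highland.Add(new[] { above[1], q, p });
    }
}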
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay algorithm (which you can look up) to organize your points into triangles, then follow the same algorithm I described above. Warning: this is going to look kind of sketchy if you don't have many points.
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays), you should be able to use array querying techniques to select all of the points where the elevation is above a certain threshold. For example, in Python with numpy you can do:
import numpy as np
indices = np.where(array > value)
The indices variable will then contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example, IDL has the WHERE() command, and similar things can be done in MATLAB).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/lon/elevation data and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares where the max height is not above the search height, and drilling further into those squares where parts of the ground were above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter). A sketch of the pruning idea follows.
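A minimal C# sketch of that pruning scheme (the Square and Point types are illustrative, not from any library):

using System.Collections.Generic;
using System.Linq;

record Point(double Lat, double Lon, double Elevation);

// A nested square with a precomputed maximum ground height. Whole squares
// are discarded when their max height can't reach the search height.
class Square
{
    public double MaxElevation;   // precalculated over everything in the square
    public Square[] Children;     // null for the smallest squares
    public List<Point> Points;    // populated only in the smallest squares

    public void CollectAbove(double threshold, List<Point> result)
    {
        if (MaxElevation < threshold) return; // prune this square entirely
        if (Children == null)
        {
            result.AddRange(Points.Where(p => p.Elevation >= threshold));
            return;
        }
        foreach (var child in Children)
            child.CollectAbove(threshold, result); // drill down
    }
}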
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform
lat/lon divisions (bearing in mind that that does not give uniform distance divisions). If that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then when the aircraft needs data about points at or above an altitude, for a specific piece of terrain, the code only needs to read out a small part of the data in this array, which is indexed so that contiguous "voxels" are contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the indexes for longitude would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
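Sketched in C# (rawPoints, threshold, and the point type are assumed; needs System.Linq for the sort):

// Sort once by elevation, then binary-search for the first point at or
// above the threshold; everything from that index onward qualifies.
var points = rawPoints.OrderBy(p => p.Elevation).ToArray();
int lo = 0, hi = points.Length;
while (lo < hi) // classic lower-bound search
{
    int mid = (lo + hi) / 2;
    if (points[mid].Elevation < threshold) lo = mid + 1;
    else hi = mid;
}
// points[lo], points[lo + 1], ... are all the points above the threshold.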
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.
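The fragment-kill part is a one-liner in GLSL; for instance (a hypothetical shader, shown as a C# string for consistency with the rest of this page):

const string highlandFragmentShader = @"
#version 330
in float elevation;              // interpolated per-vertex elevation
uniform float threshold;
out vec4 color;
void main() {
    if (elevation < threshold) discard; // kill fragments below the cutoff
    color = vec4(1.0, 0.5, 0.0, 0.5);   // translucent highland tint
}";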

Representing a Gameworld that is Irregularly shaped

I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it. The game world is only on the inside of the shape (once again, think lake).
How can I efficiently represent the game world? I know that many worlds are basically square, and work well in a 2 or 3 dimension array. I feel like if I use an array that is square, then I am basically wasting space, and increasing the amount of time that I need to iterate through the array. However, I am not sure how a jagged array would work here either.
Example shape of game world:
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes it easy to do so.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
Use whatever structure you've come up with---you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's waste of space or slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only thing you are saving are valid locations, and you don't waste memory saving the non-game world locations since you can deduce those from lack of presence in your data structure.
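For instance, a minimal C# sketch (Tile and Update are placeholders for whatever your cells and per-cell logic are):

using System.Collections.Generic;

// Only valid world cells are stored; absence from the dictionary means
// "outside the lake", so no memory is spent on the empty surroundings.
var world = new Dictionary<(int x, int y), Tile>();
world[(3, 5)] = new Tile();

bool IsInsideWorld(int x, int y) => world.ContainsKey((x, y));

// Stepping through every valid location is just iterating the entries:
foreach (var ((x, y), tile) in world)
    Update(x, y, tile); // hypothetical per-cell logic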
The easiest thing is to just use the array, and just mark the non-gamespace positions with some special marker. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (but it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map, but not for one that fits well into a rectangle (or other regular shape) with few deviations; a minimal sketch of this representation follows the two examples below.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
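As promised, a minimal C# sketch of the patch-graph representation (Tile is a placeholder for your cell type):

using System.Collections.Generic;

// Each patch is a regular block of cells; the world is the graph of patches.
class Patch
{
    public Tile[,] Cells;                  // the patch itself is regular
    public List<Patch> Neighbours = new(); // graph edges to adjacent patches
}

// Visiting the whole world is then a plain graph traversal, e.g. BFS:
static IEnumerable<Patch> AllPatches(Patch start)
{
    var seen = new HashSet<Patch> { start };
    var queue = new Queue<Patch>();
    queue.Enqueue(start);
    while (queue.Count > 0)
    {
        var patch = queue.Dequeue();
        yield return patch;
        foreach (var n in patch.Neighbours)
            if (seen.Add(n))
                queue.Enqueue(n);
    }
}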
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
Another way would be to store an edge list: a line vector along each straight edge. It's easy to check for inclusion this way, and a quadtree or even a simple location hash on each vertex can speed up lookups. We did this with a height component per edge to model the walls of a baseball stadium, and it worked beautifully.
There is a big issue that nobody here has addressed: the huge difference between storing it on disk and storing it in memory.
Assuming you are talking about a game world as you said, this means it's going to be very large. You're not going to store the whole thing in memory at once; instead you will store the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the given answers provide good suggestions; you can serialize a data structure such as a hash table, or a list of only the filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure, there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around for at least several more years, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential, as you stream more vicinity terrain while the player moves, and you'll probably find it to be a noticeable difference. Don't take my word for it though; I haven't actually tested this sort of thing, it just makes sense, right?
