Is there any possibility to get the array of vertices stored within a display list in OpenGL?
From some other code I get a display list which I am supposed to draw, but I need to know the bounding box of that model. Is there any way I could extract that information from the display list?
Have you considered using the feedback buffer, since this is deprecated OpenGL?
You can set the render mode to GL_FEEDBACK before drawing your display list and then get a buffer full of all the vertices. Since this is a rarely used feature and a deprecated one at that (transform feedback is the modern equivalent, though it functions in a different pipeline stage), some language bindings may not have it.
Unfortunately, the feedback buffer contains more than just vertices. It contains a list of all the raster operations that occurred, and you would have to build some software to make sense of this list. The OpenGL SuperBible has an example of how to do this in C.
The other thing to note is that the vertex positions are in screen space; you will need to reverse-project them into object space for this to work the way you want. This also means that the original positions of any vertices that had to be clipped will be lost. It is far from a perfect solution, more of a hack if anything, but it could be useful.
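For illustration, here is a rough sketch of the feedback approach, assuming an OpenTK-style binding that still exposes the legacy glFeedbackBuffer/glRenderMode entry points (the display list handle and buffer size are placeholders, and the exact enum names may differ between binding versions):

```csharp
using OpenTK.Graphics.OpenGL;   // compatibility-profile bindings

int displayList = 1;                      // replace with the handle you were given
float[] feedback = new float[100000];     // must be large enough for the whole list

GL.FeedbackBuffer(feedback.Length, (FeedbackType)0x0601 /* GL_3D: x, y, z per vertex */, feedback);
GL.RenderMode(RenderingMode.Feedback);

GL.CallList(displayList);                 // nothing is rasterised in feedback mode

int count = GL.RenderMode(RenderingMode.Render);  // number of floats written

// Token values from the GL spec; the vertices are in *window* coordinates,
// so reverse-project them if you need an object-space bounding box.
const float GL_PASS_THROUGH_TOKEN = 0x0700, GL_POINT_TOKEN = 0x0701,
            GL_LINE_TOKEN = 0x0702, GL_POLYGON_TOKEN = 0x0703,
            GL_LINE_RESET_TOKEN = 0x0707;

var min = new OpenTK.Vector3(float.MaxValue);
var max = new OpenTK.Vector3(float.MinValue);

int i = 0;
void ReadVertex()
{
    var v = new OpenTK.Vector3(feedback[i], feedback[i + 1], feedback[i + 2]);
    i += 3;
    min = OpenTK.Vector3.ComponentMin(min, v);
    max = OpenTK.Vector3.ComponentMax(max, v);
}

while (i < count)
{
    float token = feedback[i++];
    if (token == GL_POINT_TOKEN) ReadVertex();
    else if (token == GL_LINE_TOKEN || token == GL_LINE_RESET_TOKEN) { ReadVertex(); ReadVertex(); }
    else if (token == GL_POLYGON_TOKEN)
    {
        int n = (int)feedback[i++];
        for (int k = 0; k < n; k++) ReadVertex();
    }
    else if (token == GL_PASS_THROUGH_TOKEN) i++;   // skip the user marker value
    else break;                                     // bitmap/pixel tokens not handled in this sketch
}
// 'min' and 'max' now bound everything the list drew, in window space.
```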
No. The GL has no support for inspecting display lists. DLs are just for the GL, not for the user.
Having said that, there is still a theoretical possibility of getting the contents of the DL. You could intercept all GL calls made by the code generating the DL, track display-list state, and compute the bounding boxes from the vertex data. The old Chromium open-source project would in principle allow you to do this. However, the effort for this would be extraordinarily high, and I doubt that it would be a viable solution to your problem.
Related
Searching for advice: We are rewriting (in C#) the graphical user interface for the Watershed Risk Analysis Management Framework model, and are using the DotSpatial libraries for our map operations. We need to perform some simple tabulations on raster data, and I'm having trouble finding examples. We need to calculate land use percentages (using the National Land Cover Dataset) within polygons, and calculate average slope and aspect within polygons. Pretty standard stuff for hydrologic analysis. Does anyone know of tutorials or available code sources for DotSpatial raster analysis? Thanks for your time.
Did you find a way to do it? I am in the same position. For the moment, my workaround is this: I have converted my raster into a List<GeoAPI.Geometries.IPoint> listPts using the center coordinates of the pixels, with the Z value as the corresponding raster pixel value. Then, with my PolygonShapefile, I loop over each feature and use the feature.Geometry.Covers(listPts[i]) method to build a list of the points falling within each polygon. After that, I simply cross the two lists together to calculate the corresponding statistics that I need.
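In code, the workaround looks roughly like this. It is only a sketch of the idea: it assumes the pixel-centre point list has already been built (the BuildPointsFromRaster helper is hypothetical) and the shapefile path is made up:

```csharp
using System.Collections.Generic;
using System.Linq;
using DotSpatial.Data;
using GeoAPI.Geometries;

// Assumption: listPts holds one IPoint per raster cell, using the cell-centre
// coordinates, with the pixel value stored in Z (built elsewhere).
List<IPoint> listPts = BuildPointsFromRaster();   // hypothetical helper

var polygons = Shapefile.OpenFile(@"C:\data\catchments.shp");  // made-up path

foreach (IFeature feature in polygons.Features)
{
    // Collect the raster-derived points that fall inside this polygon.
    var inside = listPts.Where(p => feature.Geometry.Covers(p)).ToList();
    if (inside.Count == 0) continue;

    double meanValue = inside.Average(p => p.Z);   // e.g. average slope within the polygon
    // ...store meanValue (or land-use percentages, etc.) per feature here.
}
```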
I would like a better suggestion, but for the moment, it fits my needs.
I am trying to identify changes to an object. To do that, I take a picture before and after the object is used. At the moment I'm working with the absolute difference of the two pictures and taking the contours of the resulting difference image. That works fine as long as the object is positioned exactly as it was captured in the first image. Even small differences in its position make my method useless.
Does anybody have a different approach using OpenCV or EmguCV? I was thinking of checking whether one of the neighboring pixels is identical, in which case no change should be detected, but I don't know of an existing performant algorithm.
Example images (the pictures don't match my use case, but they should help illustrate my problem):
Before
After
Yes, there are many ways to do this. I like the following:
Histogram match. Compute a histogram before and after and check for differences. This is sensitive to changes in lighting, but it is a very good method if you are in a controlled lighting setting.
Correlation match. If you use MatchTemplate you can get the “quality” of the match. This can be made less sensitive to lighting, but it is sensitive to rotation differences between the two images.
Try to implement some and let’s see your code.
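To get you started, here is a rough EmguCV sketch of both ideas. The file names are placeholders and the thresholds are assumptions you would tune for your setup:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat before = CvInvoke.Imread("before.jpg", ImreadModes.Grayscale);
Mat after  = CvInvoke.Imread("after.jpg",  ImreadModes.Grayscale);

// 1) Histogram match: compare the grey-level distributions of the two images.
Mat histBefore = new Mat(), histAfter = new Mat();
using (var vb = new VectorOfMat(before))
    CvInvoke.CalcHist(vb, new[] { 0 }, null, histBefore, new[] { 256 }, new float[] { 0, 256 }, false);
using (var va = new VectorOfMat(after))
    CvInvoke.CalcHist(va, new[] { 0 }, null, histAfter, new[] { 256 }, new float[] { 0, 256 }, false);

// A correlation of 1.0 means identical histograms; pick a threshold that suits your lighting.
double histSimilarity = CvInvoke.CompareHist(histBefore, histAfter, HistogramCompMethod.Correl);
bool changedByHistogram = histSimilarity < 0.95;   // assumed threshold

// 2) Correlation match: MatchTemplate with normalised correlation gives a
//    single "quality" score when both images are the same size.
Mat result = new Mat();
CvInvoke.MatchTemplate(before, after, result, TemplateMatchingType.CcoeffNormed);

double minVal = 0, maxVal = 0;
Point minLoc = Point.Empty, maxLoc = Point.Empty;
CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal, ref minLoc, ref maxLoc);

bool changedByCorrelation = maxVal < 0.9;          // assumed threshold
```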
I have two render targets that I draw to, and I want to combine (blend) them into one picture on the GPU via DirectX, most easily by alpha value. One target is the background, the other is the data I want to plot.
I can't just plot the data over the background, because I don't want to store the data between draws. That works well for other use cases, but not here. So I append the data to the one target, and it would be nice to just blend these two targets efficiently.
I'm a bit lost in the documentation and can't really find an example to relate to. I'm using SharpDX.
Any help appreciated, thanks
You can easily combine two render targets in a shader. Just create the two render targets with the shader-resource bind flag and render your scene to them; after that, you pass the shader resource views to a shader and combine them with your method of choice (multiply, add, etc.). All of this is typically used in "Deferred Shading".
Another source where you can see this approach in action: RobyDX - SharpDX Samples
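As a concrete illustration of the combine step, a minimal blend pixel shader could look something like this. The register slots and the alpha-based blend are assumptions, and the device, context and shader resource views are taken from your existing setup; you would draw a full-screen quad with it:

```csharp
using SharpDX.D3DCompiler;
using SharpDX.Direct3D11;

// HLSL: sample both render targets and blend them (here by the data's alpha).
const string blendPs = @"
Texture2D background : register(t0);
Texture2D data       : register(t1);
SamplerState samp    : register(s0);

float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    float4 bg = background.Sample(samp, uv);
    float4 fg = data.Sample(samp, uv);
    return lerp(bg, fg, fg.a);   // or bg + fg, bg * fg, etc.
}";

// Compile and bind (device, context and the two SRVs come from your existing setup).
using (var bytecode = ShaderBytecode.Compile(blendPs, "PS", "ps_4_0"))
{
    var blendShader = new PixelShader(device, bytecode);
    context.PixelShader.Set(blendShader);
    context.PixelShader.SetShaderResource(0, backgroundSrv);
    context.PixelShader.SetShaderResource(1, dataSrv);
    // ...then draw a full-screen quad into the back buffer (or a third target).
}
```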
If the two targets are independent and you want to render one into the other, I suggest you do something similar to what I do.
That is: make the first render target (from the first pass) a texture resource and sample from it as you render to the second. I've done that in my current project to blend two or three sources together.
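In SharpDX terms, the key is to create the first target's texture with the ShaderResource bind flag and then swap the bindings between passes. A rough sketch, with the size, format and the second target assumed from your existing setup:

```csharp
using SharpDX.Direct3D11;
using SharpDX.DXGI;

int width = 1024, height = 768;   // match your swap chain / data

// The first target must also be created with BindFlags.ShaderResource,
// so it can be sampled later.
var desc = new Texture2DDescription
{
    Width = width, Height = height,
    MipLevels = 1, ArraySize = 1,
    Format = Format.R8G8B8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource
};
var firstTexture = new Texture2D(device, desc);
var firstRtv = new RenderTargetView(device, firstTexture);
var firstSrv = new ShaderResourceView(device, firstTexture);

// Pass 1: render the first image.
context.OutputMerger.SetRenderTargets(firstRtv);
// ...draw pass 1...

// Pass 2: unbind it as a render target, bind it as a texture, render to the second target.
context.OutputMerger.SetRenderTargets(secondRtv);
context.PixelShader.SetShaderResource(0, firstSrv);
// ...draw pass 2, sampling the first target in the pixel shader...
```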
Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5. I am using C#, and for the database I am using MySQL. I have a PictureBox on a form and 10-12 buttons (each with some image manipulation function). On clicking one of the buttons, an OpenFileDialog is shown where the user can select the specific file to provide to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program. e.g.
What I intend to do is that - the program should hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me an approach for this? Any help or suggestions, please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe that you can use EmguCV (a wrapper to the OpenCV library) to solve the first part of the problem. To do that, you have to consider that the white pixels are the background and that the black pixels are objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by the white pixels and the background by the black pixels.
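For the first part, a minimal EmguCV sketch might look like this (the file name is a placeholder; the connectedComponents wrapper is used here, though FindContours would also work):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

Mat src = CvInvoke.Imread("circuit.png", ImreadModes.Grayscale);

// Make the black traces the foreground (white) and the paper the background (black).
Mat binary = new Mat();
CvInvoke.Threshold(src, binary, 128, 255, ThresholdType.BinaryInv);

// Label each connected blob of trace pixels: label 0 is the background,
// labels 1..count-1 are the individual traces.
Mat labels = new Mat();
int count = CvInvoke.ConnectedComponents(binary, labels);

// For the second step (finding the white dots), invert the roles: threshold
// without BinaryInv so the white pixels become the objects, and run
// ConnectedComponents again on each separated trace.
```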
At least for your example case, a very simple algorithm would work.
Find a black pixel from the image
Using a flood-fill algorithm, find all the pixels connected to it, and separate it. That's one of your traces.
Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run to the edge of the image, it's not a hole. If you don't, it might be a hole, or a loop in the trace. Use a threshold for the hole size to determine if it's a terminal hole or a loop.
Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
Remove the whole trace from consideration, and jump to 1.
When there are no more black pixels in consideration in step 1, you're done.
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things, for instance, color everything "not in consideration" red. It also makes for an interesting visualization if you look at it in real time!
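If you do roll your own, the flood-fill building block used in steps 1-5 above is only a few lines of plain C#. Here is a minimal, library-free sketch that works on a 2D colour array (the image representation is an assumption):

```csharp
using System.Collections.Generic;
using System.Drawing;

// Queue-based flood fill: returns every pixel connected to 'start' whose
// colour matches the colour at 'start' (4-connectivity).
static List<Point> FloodFill(Color[,] image, Point start)
{
    int w = image.GetLength(0), h = image.GetLength(1);
    Color target = image[start.X, start.Y];
    var visited = new bool[w, h];
    var region = new List<Point>();
    var queue = new Queue<Point>();
    queue.Enqueue(start);
    visited[start.X, start.Y] = true;

    while (queue.Count > 0)
    {
        Point p = queue.Dequeue();
        region.Add(p);
        foreach (var n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                  new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
        {
            if (n.X < 0 || n.Y < 0 || n.X >= w || n.Y >= h) continue;   // stay inside the image
            if (visited[n.X, n.Y] || image[n.X, n.Y].ToArgb() != target.ToArgb()) continue;
            visited[n.X, n.Y] = true;
            queue.Enqueue(n);
        }
    }
    return region;
}
```

Finding a trace is then a FloodFill starting from a black pixel; deciding hole versus loop is a FloodFill from a white pixel inside it, combined with the edge and size checks from the steps above.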
image http://prod.triplesign.com/map.jpg
How can I produce a similar output in C# window forms in the easiest way?
Is there a good library for this purpose?
I just need to be pointed in the direction of which graphics library is best for this.
You should just roll your own with a 3D graphics library. You could use DirectX. If you are using WPF, 3D support is built in; look up Viewport3D. http://msdn.microsoft.com/en-us/magazine/cc163449.aspx
In graphics programming, what you are building is a very simple version of a heightmap. I think building your own would give you greater flexibility in the long run.
So a single best library doesn't exist. There are plenty of them, and some are just intended for different purposes. Here is a small list of possibilities:
Tao: Make anything yourself with OpenGL
OpenTK: The successor of the Tao framework
Dundas: One of the best, but quite expensive (lacks real-time performance)
Nevron: Quite good, but much cheaper (also has problems with real-time data)
National Instruments: Expensive, not the best looking, but damn good with real-time data.
... and others have probably had different experiences.
Check out the Microsoft Chart Controls library.
Here's how I'd implement this using OpenGL.
First up, you will need a wrapper to import the OpenGL API into C#. A bit of Googling led me to this:
CsGL - OpenGL .NET
There are a few example programs available to demonstrate how the OpenGL interface works. Play around with them to get an idea of how the system works.
To implement the 3D map:
Create an array of vectors (that's not the std::vector/List type but x,y,z triplets) where x and y are along the horizontal plane and z is the up amount.
Set the Z compare to less-than-or-equal (so the overlaid line segments are visible).
Create a list of quads where the vertices of the quads are taken from the array in (1)
Calculate the colour of the quad. Use a dot product of the quad's normal and a light source direction to get a shade value, i.e. normal.light of 1 is black and -1 is white.
Create a list of line segments, again from the array in (1).
Calculate the screen position of the various projected axes points.
Set up your camera and world->view transform (use the example programs to get an idea of how to do this).
Render the quads and lines; OpenGL will do the transformation from world coordinates (the list in (1)) to screen space. Draw the labels; you might not want to do this using OpenGL, as the labels shouldn't scale with distance from the camera, otherwise they could get too small to read.
Since the above is quite a lot of stuff, there isn't really the space (and time on my part) to post working code (but someone else might add something if you're lucky). You could break the task down and ask questions on the parts you don't quite understand.
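To at least sketch the core of steps 1-5, here is roughly what the quad and line passes look like. This assumes an OpenTK-style immediate-mode binding rather than CsGL specifically, and a heights array and light direction that you already have:

```csharp
using OpenTK;
using OpenTK.Graphics.OpenGL;

// heights[x, y] is the z value of grid point (x, y); 'light' is a unit direction vector.
void DrawHeightmap(float[,] heights, Vector3 light)
{
    int nx = heights.GetLength(0), ny = heights.GetLength(1);

    GL.DepthFunc(DepthFunction.Lequal);   // step 2: so the overlaid line segments stay visible

    // One shaded quad per grid cell (steps 3 and 4).
    GL.Begin(PrimitiveType.Quads);
    for (int x = 0; x < nx - 1; x++)
    for (int y = 0; y < ny - 1; y++)
    {
        var a = new Vector3(x,     y,     heights[x,     y]);
        var b = new Vector3(x + 1, y,     heights[x + 1, y]);
        var c = new Vector3(x + 1, y + 1, heights[x + 1, y + 1]);
        var d = new Vector3(x,     y + 1, heights[x,     y + 1]);

        Vector3 normal = Vector3.Normalize(Vector3.Cross(b - a, d - a));
        float shade = 0.5f * (1.0f - Vector3.Dot(normal, light));   // simple diffuse-style shading
        GL.Color3(shade, shade, shade);

        GL.Vertex3(a); GL.Vertex3(b); GL.Vertex3(c); GL.Vertex3(d);
    }
    GL.End();

    // The overlaid line segments (step 5), drawn in white on top of the surface.
    GL.Color3(1.0f, 1.0f, 1.0f);
    GL.Begin(PrimitiveType.Lines);
    for (int x = 0; x < nx - 1; x++)
    for (int y = 0; y < ny; y++)
    {
        GL.Vertex3((float)x,     (float)y, heights[x,     y]);
        GL.Vertex3((float)(x + 1), (float)y, heights[x + 1, y]);
    }
    GL.End();
}
```

The camera setup, the other axis of grid lines, and the axis labels (step 6 onwards) are left out; the example programs that ship with the binding show how to set up the projection and modelview transforms.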
Have you tried this... Gigasoft data visualization tools (it's not free)
And you can check out the online wireframe demo here.