I am trying to figure out a way to digitally map an image of a stained glass window. Between and around every piece of glass is a line of lead solder. My current idea is to map the image based on the range of HSL or HSI values of the soldering material, so that each piece of glass becomes its own zone, with its own information and click function.
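Roughly what I have in mind, as an untested sketch (the threshold values are placeholders I would still have to tune against the actual colour of the lead lines):

using System.Drawing;

class LeadLineDetector
{
    // Placeholder thresholds: the real values would have to be tuned
    // against the actual lightness/saturation of the lead lines.
    const float MaxLightness = 0.25f;
    const float MaxSaturation = 0.20f;

    // True if a pixel looks like part of a lead line rather than glass.
    static bool IsLead(Color pixel)
    {
        return pixel.GetBrightness() <= MaxLightness   // GetBrightness() is the HSL lightness
            && pixel.GetSaturation() <= MaxSaturation; // lead is close to grey, so low saturation
    }

    // Builds a boolean mask of the lead lines; connected regions of 'false'
    // would then be the individual glass pieces (zones).
    static bool[,] BuildLeadMask(Bitmap image)
    {
        var mask = new bool[image.Width, image.Height];
        for (int x = 0; x < image.Width; x++)
            for (int y = 0; y < image.Height; y++)
                mask[x, y] = IsLead(image.GetPixel(x, y)); // GetPixel is slow; fine for a first test
        return mask;
    }
}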
I'm trying to make this a desktop application using C#, but could probably use html/javascript if that would be easier. I have been searching for some time to try and figure out a good way to accomplish this, but I'm having a hard time figuring it out.
Does anyone have any pointers on how I might go about doing this? I'm sorry if this question seems vague; if further clarification is needed, please comment and I will try to be clearer.
I don't have enough reputation to post a picture, but look at these for an idea:
http://rootsofknowledge.tc.uvu.edu/Gallery
I have had a little look, but don't want to do too much in case I am barking up the wrong tree!
I like to use ImageMagick, which is available for free with C/C++, Perl and PHP bindings, and on the command line in most Linux distros. It is available at imagemagick.org.
I started with this image:
and tried a little blurring to make it less sensitive to tonal changes in the glass, followed by a Canny edge detector, and got this:
The command I used was this:
convert glass.jpg -blur 0x2 -canny 0x1+5%+10% edges.jpg
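Since the question mentions C#, the same pipeline should also be possible from code through ImageMagick's .NET binding, Magick.NET. A rough, untested sketch, assuming the package exposes Blur and CannyEdge with these signatures:

using ImageMagick;

class EdgeExtractor
{
    static void Main()
    {
        using (var image = new MagickImage("glass.jpg"))
        {
            // Blur first so the edge detector reacts to the lead lines,
            // not to tonal variation inside each piece of glass.
            image.Blur(0, 2);

            // Canny edge detection, roughly equivalent to -canny 0x1+5%+10%.
            image.CannyEdge(0, 1, new Percentage(5), new Percentage(10));

            image.Write("edges.jpg");
        }
    }
}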
Obviously there is loads more to do, but it is maybe a start... and maybe others will add their expertise now that we have a starting point...
Does anyone know of a binary that exports head-tracking coordinates as output? I have been looking for something like this for several days; I have visited dozens of sites, downloaded dozens of projects and gone through dozens of tutorials, but I can't find anything that really works. I'm trying to create an application similar to a VTuber (Virtual YouTuber) in Construct 2, where head tracking is crucial to make the movement work. I understand the complexity of developing such an engine, but I also imagine there is something easily accessible out there on the internet, for example...
I found this website that shows, in the lower right corner, the information I would need, but it is a website, not a binary whose output I could monitor to animate the character inside Construct 2...
Site: https://www.visagetechnologies.com/HTML5/latest/Samples/ShowcaseDemo/ShowcaseDemo.html
I also found a very cool project, but it is also a website; I really liked it though, because it runs very light.
Site: https://www.auduno.com/clmtrackr/examples/clm_emotiondetection.html
I also found a project called "ARKitFaceTracker" but it was made for Unity and the level of complexity of the project is discouraging for me.
One existing project that uses "ARKitFaceTracker" is PrprLive; it was made in Unity, and it would be perfect if it exported the coordinates to a text file, so that I could try to make a lighter version in my own way. I only know how to program in VB.Net and a little bit of C#, but this "Face Camera" was basically what I wanted; in a way I would just use it as the "engine" to get the direction coordinates of the head.
In short: what I'm looking for is a binary executable that can export only the movement coordinates (left, right, up and down) of this head tracking, so I can read that output (txt or json) from Construct 2 and animate the character in real time. I really want to make this happen. I have been researching and searching the internet for several days and I have hit a wall, so I came here hoping you could help me continue, because I really don't know what else to do. Thank you very much.
It can be easier if you modify from projects like this:
https://github.com/1996scarlet/OpenVtuber
https://github.com/virtuber/openvtuber
Both of these solutions separate the face tracking module from the visualization module. It should be easy for you to set up the Python servers and request coordinates from them.
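If it helps, here is a rough C# sketch of the bridge you would then need on the Construct 2 side: it polls a local tracking server and keeps rewriting a small JSON file that Construct 2 can read. The endpoint URL and the response fields are assumptions and would have to match whatever the tracking module actually exposes:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class HeadTrackingBridge
{
    static async Task Main()
    {
        // Hypothetical endpoint: replace with whatever the Python tracking
        // server from the projects above actually serves.
        const string trackerUrl = "http://127.0.0.1:5000/head_pose";
        const string outputPath = "head_pose.json";

        using var client = new HttpClient();
        while (true)
        {
            // Assumed to return something like {"yaw": ..., "pitch": ...}
            string json = await client.GetStringAsync(trackerUrl);

            // Write to a temp file first so Construct 2 never reads a half-written file.
            File.WriteAllText(outputPath + ".tmp", json);
            File.Copy(outputPath + ".tmp", outputPath, overwrite: true);

            await Task.Delay(33); // roughly 30 updates per second
        }
    }
}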
we have a project at university where we have to create fractals that are controllable by the user (with kinect). We are still in an early phase where we evaluate some ideas (though we already got some working prototypes). Our idea right now is to use a blackboard-image as background and draw the fractals on top of it. To make this look more natural we'd like to use some crayon effects on the lines we draw.
Our internet-research produced two main ideas to achieve this:
Paint an image of a Photoshop-like brush at every mouse position along a line.
Use shaders on drawline-functions.
The first works for an early test, but looks awful. I guess the latter would be the better approach, but the information on the internet seems to be lacking on this topic; at least I didn't find anything that really helped a lot. The question is:
Does anyone have links or general tips on how to achieve this effect, or is it not possible in C#/WPF? Might there be an even better way? And how can I apply a shader I've created to the DrawLine method/brushes?
Thanks in advance and kind regards
Michael
EDIT
Nice tip from @Bradley Uffner! (unfortunately he deleted his answer)
There is a tutorial here on how to achieve such an effect:
http://alastaira.wordpress.com/2013/11/01/hand-drawn-shaders-and-creating-tonal-art-maps/
Another thing I've found recently which might interest you:
http://blogs.msdn.com/b/hemipteran/archive/2014/03/26/generating-noise-for-applications.aspx
I didn't read the title of your question carefully; Windows Forms does not support shaders at all.
Please be more specific about what you mean by a crayon shader.
Regarding shaders in WPF: you cannot apply them at the line level, only at the control level.
This is the class representing a shader in WPF:
http://msdn.microsoft.com/en-us/library/system.windows.media.effects.effect(v=vs.110).aspx
What I suggest
use an Image and set its Source property to a WriteableBitmap
use the excellent WriteableBitmapEx library to easily draw on it; basically it's the WriteableBitmap in WPF but with many extension methods for drawing lines, circles, rectangles etc.
then apply your shader to the Effect property of the Image
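A minimal sketch of that setup, assuming the WriteableBitmapEx package is installed (BlurEffect is only a stand-in for the crayon ShaderEffect you would generate):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Effects;
using System.Windows.Media.Imaging;

public class DrawingWindow : Window
{
    public DrawingWindow()
    {
        // BitmapFactory.New and DrawLine come from the WriteableBitmapEx package.
        WriteableBitmap surface = BitmapFactory.New(800, 600);
        surface.DrawLine(10, 10, 400, 300, Colors.White);

        var image = new Image { Source = surface };

        // Placeholder effect: replace with the crayon ShaderEffect class
        // generated from your HLSL (e.g. via Shazzam, as described below).
        image.Effect = new BlurEffect();

        Content = image;
    }
}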
For developing your shader
Use Shazzam: it allows you to develop an HLSL shader for WPF in a cool interface and preview it instantly, and it will generate the Effect class ready to paste into your project.
There might be a couple of interesting shaders for you in DOSBox SVN-Daum.
Here's an example of the cartoon shader:
Obviously there will be quite some work, as copy-pasting the shader into Shazzam won't work right away, but you'll see the maths behind achieving the effect.
Mark the answer as accepted if you are satisfied with it :D If not, edit your question and add more details.
I have this odd little lifesim program I've been working on that involves data in a 2D array. It was never supposed to be a big thing, and I initially looked at a few snapshots of it by just writing it out to an external bitmap, pixel by pixel, which I then open and look at. This doesn't give me any sort of live update on the screen. It is a horrible way to do this, and in trying to implement drawing directly in a window, I want to do it correctly and efficiently the first time.
I did some searching and found BitBlt, which will let me draw a whole rectangle at a time, but with all of my graphics experience being limited to things like WPF, a lot of the terminology is lost on me. I don't know what format my data should be in to hand it to this function as a bitmap. In reading around MSDN I find references to things like device contexts (DCs), more things I haven't yet learned about.
I don't need to know lots about the Windows graphics API or .NET's drawing framework, and I don't want to learn a bunch of DirectX. I want to make a window of a specific dimension and be able to set the RGB value of each of its pixels as I see fit. No drawing shapes or anything, just pixels. But I also don't want to do it one pixel at a time, with a separate system call for each, because even a lame programmer like myself knows how terribly inefficient that is. Does anyone know of a good resource that will give a simple explanation of graphics in Windows and let me do this? MSDN is great for looking things up, but it's a bit much if you're trying to learn something from scratch.
C# is preferable because the lifesim is written in it, but I don't have any qualms about rewriting it in C++ if there's a good reason to.
You could try the WriteableBitmap class in WPF and see if it fits your purposes.
A tutorial
All you would have to do is keep the data in the 2D array and write it to the WriteableBitmap. Set the WriteableBitmap as the image source of a WPF Image and you're done.
Let me know if you need an example.
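For instance, a minimal sketch along these lines, assuming the simulation state is packed as one BGRA byte quadruple per cell:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class LifeSimWindow : Window
{
    const int W = 320, H = 240;
    readonly WriteableBitmap _bitmap =
        new WriteableBitmap(W, H, 96, 96, PixelFormats.Bgra32, null);
    readonly byte[] _pixels = new byte[W * H * 4]; // B, G, R, A per cell

    public LifeSimWindow()
    {
        Content = new Image { Source = _bitmap };
        CompositionTarget.Rendering += (s, e) => RenderFrame(); // live update every frame
    }

    void RenderFrame()
    {
        // Fill _pixels from your 2D simulation state here; as a placeholder
        // this just paints every cell an opaque mid-grey.
        for (int i = 0; i < _pixels.Length; i += 4)
        {
            _pixels[i] = _pixels[i + 1] = _pixels[i + 2] = 128;
            _pixels[i + 3] = 255;
        }

        // Push the whole frame to the screen in a single call.
        _bitmap.WritePixels(new Int32Rect(0, 0, W, H), _pixels, W * 4, 0);
    }
}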
What you probably want to do is use LockBits to lock up your image data, and then manipulate your image as an array. Here's a great tutorial by Bob Powell:
https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm
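A minimal sketch of the LockBits approach with System.Drawing, assuming a 32bpp ARGB bitmap:

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FastPixels
{
    // Copies the pixel buffer out, lets you edit it as a flat BGRA byte array,
    // then copies it back in one go instead of making per-pixel SetPixel calls.
    public static void FillRed(Bitmap bmp)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
        try
        {
            int bytes = data.Stride * data.Height;
            byte[] buffer = new byte[bytes];
            Marshal.Copy(data.Scan0, buffer, 0, bytes);

            for (int i = 0; i < bytes; i += 4)
            {
                buffer[i]     = 0;   // blue
                buffer[i + 1] = 0;   // green
                buffer[i + 2] = 255; // red
                buffer[i + 3] = 255; // alpha
            }

            Marshal.Copy(buffer, 0, data.Scan0, bytes);
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }
}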
Otherwise, if speed is not a concern, you can use the GetPixel and SetPixel methods. These are horribly slow, but they will work in a managed environment.
I have been searching for a long time to find out how I can place a sphere/cube around my playfield so I'll have an effect similar to a sky. I have seen skyspheres/skyboxes but couldn't implement them. I'm looking for a rather simple solution, since my playfield is really small and I just need something to replace the Clear(Color.Black). I don't care about collision; I just need it to be there with a texture on it. Thanks!
You should take a look at Riemer's XNA Tutorial.
HERE is the post on the skybox.
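If a full skybox feels like too much, something along these lines might already be enough: draw any sphere model ("skysphere" is just a placeholder name for a mesh in your content project, textured so the inside is visible) centred on the camera at the start of Draw, with depth writes disabled and culling turned off. This is only a rough, untested sketch of a method inside your Game class:

// Inside your Game class; call this at the start of Draw(), before the rest of the scene.
void DrawSky(Model skySphere, Texture2D skyTexture,
             Matrix view, Matrix projection, Vector3 cameraPosition)
{
    // Don't write depth, so the sky always ends up behind everything else,
    // and don't cull, because the camera sits inside the sphere.
    GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
    GraphicsDevice.RasterizerState = RasterizerState.CullNone;

    foreach (ModelMesh mesh in skySphere.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.TextureEnabled = true;
            effect.Texture = skyTexture;
            // Keep the sphere centred on the camera so it never looks closer or further away.
            effect.World = Matrix.CreateScale(50f) * Matrix.CreateTranslation(cameraPosition);
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }

    // Restore the defaults for the rest of the scene.
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
}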
As a pet project/learning experience (no this is not homework) I'm working on software to recognize barcodes from a photograph. I'm not looking for software or a library that does it - instead I'm using this as a learning exercise that I'm blogging about and will post up on Codeplex.
I have code that successfully recognizes EAN13 barcodes (which I published on CodePlex), and UPC version A/E should follow shortly. I have two areas that I'm concerned about, though. The first is decoding barcodes in a picture that is a bit blurry or has poor contrast, etc. The second is simply finding the actual barcode in a larger picture (right now you have to give me a photo of just the barcode).
I have the gut feeling that some form of AI is going to help me out here. I played a bit in the past with genetic algorithms and I took a course ages ago on AI so it's not totally foreign to me, but I'm not quite sure where to start.
What type of algorithm is best suited to this type of problem? Any recommended reading or code for the AI grunt work? Yes, I want to understand what's happening, but I don't necessarily want to go down to the level of coding the sorts, etc myself.
I would suggest searching for properties that a barcode has. Some that I have in mind are:
The color histogram shows two distinct colors in a roughly even distribution
A Hough transform finds many parallel lines
The line thickness takes two distinct values
Maybe others?
With these, I would split the image into pieces, classify each piece using these features, and then combine the results to calculate a likelihood of whether the piece contains a barcode or not.
For your problem with blurry images, I would suggest calculating the first-order derivative of the gray values and then detecting the edges of the lines in that space. The maximum of the derivative is lower if the image is blurred, but it should still be detectable up to a certain amount of blur.
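A rough sketch of that idea on a single horizontal scanline, using System.Drawing (the threshold is a placeholder you would have to tune):

using System;
using System.Collections.Generic;
using System.Drawing;

static class ScanlineEdges
{
    // Returns the x positions on the given row where the first-order derivative
    // of the gray value exceeds a threshold, i.e. candidate bar edges.
    public static List<int> FindEdges(Bitmap image, int row, int threshold = 20)
    {
        var edges = new List<int>();
        int previousGray = Gray(image.GetPixel(0, row));

        for (int x = 1; x < image.Width; x++)
        {
            int gray = Gray(image.GetPixel(x, row));
            if (Math.Abs(gray - previousGray) >= threshold)
                edges.Add(x);
            previousGray = gray;
        }
        return edges;
    }

    static int Gray(Color c) => (c.R + c.G + c.B) / 3;
}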
Does this help you?
As mp already noted, you don't necessarily need any real AI technique for this. Have a look at chapter 12 of Real World Haskell; it implements an almost complete barcode recognizer. The sample code is in Haskell, but there is plenty of explanation, so you can probably understand the ideas and tricks even without Haskell experience.
If you want to solve it with AI, then the best bet is probably using ANNs. For the given problem I would recommend using a quite advanced technique called HyperNEAT. See my explanation (and links) in the first answer to the SO question Neural Network Size...
I would probably use two or three different networks:
The first one would find the barcode in the bigger picture: one output neuron per pixel (or set of pixels), whose output value is the confidence that the pixel is part of a barcode. Based on the result I would use some image transformation to convert it to a "standard" format (an x*y rectangle).
If you have difficulty finding the location of the barcode, use a second one: feed it the result of the first one and ask it to give the coordinates of two corners. However, I'm not quite sure that this one will be very easy to evolve.
The last one would work on the standardized format, with output neurons for each line (or square, if you work with a possibly 2D barcode) saying whether the given area should be considered black or white.
It would probably also help to do some pre-processing of the image, e.g. the steps described in RWH.
You don't need any specific AI or soft-computing technique. You need to apply image processing techniques to improve the quality of the image or to isolate the barcode within a larger image.
You could use Matlab for prototyping and for learning more about image processing.