I'm working on a personal project that involves finding the intersections of tubes (thin walled cylinders). It requires two main math computations I'm trying to find in a library or in sample code.
1) The minimum distance between two lines. (I've found code for this already)
2) The two corresponding points on the two lines that are each closest to the other line.
I've found plenty of sites with the math on how to do #2, but no sample code of anyone implementing it.
I am fully capable of writing this from scratch based on the math, but I'd much prefer saving several hours of coding, testing, and verifying by finding existing code I can incorporate in my C# app. Even if the sample is in another language, I can port things over to C# much faster than writing from scratch.
Since this is very much a "solved problem," I assume there has to be an open source library in some language already in existence, and re-inventing the wheel (and testing it and verifying it) would be a waste of time. (And, as we all know, any time we can spare from life for "personal projects" is a scarce and valuable commodity.)
There are many open source libraries. If you are familiar with JavaScript, you can try three.js; there is an implementation here: https://github.com/mrdoob/three.js/blob/master/src/extras/geometries/CylinderGeometry.js
Or you can look at an ActionScript library such as Papervision3D:
http://papervision3d.googlecode.com/svn/trunk/as3/trunk/src/org/papervision3d/objects/primitives/Cylinder.as
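For what it's worth, computation #2 is compact enough to carry directly rather than port from a 3D engine. Below is a minimal C# sketch of the standard parametric formulation for two infinite lines, using System.Numerics; the parallel case falls back to an arbitrary point on the first line. The minimum distance from #1 is then just the length of the vector between the two returned points.

    using System;
    using System.Numerics;

    static class LineClosestPoints
    {
        // Lines are given as a point plus a direction: P(s) = p0 + s*u, Q(t) = q0 + t*v.
        // Returns the pair of points, one on each line, that are closest to each other.
        public static (Vector3 OnLine1, Vector3 OnLine2) Closest(
            Vector3 p0, Vector3 u, Vector3 q0, Vector3 v)
        {
            Vector3 w = p0 - q0;
            float a = Vector3.Dot(u, u);
            float b = Vector3.Dot(u, v);
            float c = Vector3.Dot(v, v);
            float d = Vector3.Dot(u, w);
            float e = Vector3.Dot(v, w);
            float denom = a * c - b * b;           // ~0 when the lines are parallel

            float s, t;
            if (Math.Abs(denom) < 1e-6f)
            {
                s = 0f;                            // parallel: any point on line 1 works
                t = b > c ? d / b : e / c;
            }
            else
            {
                s = (b * e - c * d) / denom;
                t = (a * e - b * d) / denom;
            }
            return (p0 + s * u, q0 + t * v);
        }
    }

Usage: the minimum distance is (result.OnLine1 - result.OnLine2).Length().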
I have to make a simulator. Functionalities are mentioned below:
Inputs: Unchangeable C codes
Function: Compile and run those codes and produce outputs. Each C code may have variables, and a time-based input function is given for each of those variables. So if the input C file has a simple adder, int add(a, b), then a and b change with time according to a specified function: at t=0, a=1 and b=2; at t=4, b may change to 6; and so on. I have to run the code at different times, generate the outputs, and check those outputs as well. For all of that I would also need a GUI.
Need suggestions for both the backend and the GUI tool.
P.S: My research tells me a C backend with GTK for the GUI is workable, but GTK is too lengthy and tedious. I am unsure about Qt/C++, as it may not work all that well with a C backend, and a C++ backend would make it difficult for me to import and run those input codes (extern doesn't always work). I also looked at C# for the GUI with linking to DLL files, but many blogs describe that as not very feasible. Any suggestions? Thanks in advance.
P.P.S Please suggest open source and non-licensed tools only.
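Regarding the C#-plus-DLL route mentioned in the P.S.: calling a compiled C function from C# via P/Invoke is well supported. Here is a minimal sketch, assuming the unchangeable C code (the add example) has been compiled into a shared library named adder; the library name and the time-stepping loop are purely illustrative.

    using System;
    using System.Runtime.InteropServices;

    static class Simulator
    {
        // Maps to "int add(int a, int b)" compiled into adder.dll / libadder.so.
        [DllImport("adder", CallingConvention = CallingConvention.Cdecl)]
        private static extern int add(int a, int b);

        static void Main()
        {
            // Drive the C function with time-varying inputs, e.g. a = 1, b changing at t = 4.
            for (int t = 0; t <= 8; t++)
            {
                int a = 1;
                int b = t < 4 ? 2 : 6;
                Console.WriteLine($"t={t}: add({a},{b}) = {add(a, b)}");
            }
        }
    }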
C++ can and does interface trivially with any C library or "code" out there, so that's not your concern.
I'd suggest using Qt. It comes with a nice, free IDE: Qt Creator. It offers high-level functionality that often surpasses GTK's. Alas, while the GTK library ecosystem is very "hackable" -- the underlying object model allows all sorts of creative uses and abuses -- it has, IMHO, a very shallow learning curve: it'll take you much longer to "fully" understand GTK than Qt.
That's simply because GTK's underlying low-level libraries offer a lot of functionality that's taken over by the C++ language or simply hidden from view in Qt. I've found GTK's underpinnings to be more flexible, but it all seems like a solution in search of a problem. Qt's/C++'s much simpler and less flexible object model works just as well and is easier to learn because there's much less of it.
Think also from the viewpoint of investing your time. Learning to be proficient in C++ is portable to other projects that merely use the same language. Learning to be proficient in glib only helps you with C projects that specifically use glib, and I'd think there are far fewer of those than C++ projects.
Sometimes too much of a good thing is a bad thing, and that seems -- to me -- to be the case with GTK. It's a library collection that's bottom heavy with low-level functionality, but seems to be light on higher-level abstractions. Never mind that the individual libraries are all designed with slightly different guiding principles and are essentially a loosely bound collection. That's good in theory for decoupling of the components, but makes learning much harder as there's really little in the way of common design elements in the various libraries that make up GTK.
Qt, in contrast, is pretty much a single, uniform large library split into modules that you can elect not to use.
PS. Most people use "steep learning curve" incorrectly to mean that something is hard to learn. A learning curve shows "knowledge" vs. time. If it's steep, that means you are learning fast. A shallow one means you're learning slowly.
I am interested in writing a generic Intellisense enabled editor for SQL and C# (et al. if possible!). I would like to do this in C# as an overridden or extended WPF richTextBox-type control. I know there are many example projects available and I have implemented a basic version of my own; but most of the examples that I have come across (and indeed my own) are just that, basic.
A couple of code examples are:
DIY Intellisense By yetanotherchris
CodeTextBox - another RichTextBox control with syntax highlighting and Intellisense By Tamas Honfi
I have, however, found a great example of an SQL editor with Intellisense, QueryCommander SQL Editor by Mikael Håkansson, which seems to work well. Microsoft must use an XML library of command keywords, but my question is: How (in detail) do Microsoft implement their as-you-type Intellisense, and how hard would it be for me to create my own of the same standard?
Edit A: A year on and I have managed to develop my own editor control with basic intellisense, mainly for my own "enjoyment". I thought I would come back and provide a list of freely available .NET projects that helped me with my own development and can be used out-of-the-box and free of charge:
ICSharpCode (WinForms)
AvalonEdit (WPF)
ScintillaNET (WinForms)
Query Commander [for example of intellisense implementation] (WinForms)
Edit B: 15 months after the question was asked I am still looking for new improved editors. This one is nice...
RoslynPad is cool!
Edit C: 2 years+ on from the question, I have found the following projects, both using WPF and backed by AvalonEdit.
CodeCompletion for AvalonEdit using NRefactory. This project is really nice and has a full implementation of intellisense using NRefactory.
ScriptCS. ScriptCS makes it easy to write and execute C# with a simple text editor.
How (in detail) do Microsoft implement their as-you-type Intellisense?
I can describe it to any level of detail you care to name, but I don't have the time for more than a brief explanation. I'll explain how we do it in Roslyn.
First, we build an immutable model of the token stream using a data structure that can efficiently represent edits, since obviously edits are precisely what there are going to be a lot of.
The key insight to making it efficient for persistent reuse is to represent the character lengths of the tokens but not their character positions in the edit buffer; remember, a token at the end of the file is going to change position on every edit but the length of the token does not change. You must at all costs minimize the number of total re-lexings if you want to be efficient on extremely large files.
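A minimal sketch of that idea (these are not Roslyn's actual types; the names are made up for illustration): tokens store widths only, positions are derived on demand, and an edit replaces a small slice while the untouched Token instances are reused rather than re-lexed.

    using System.Collections.Immutable;
    using System.Linq;

    // Hypothetical token kinds -- just enough to illustrate the idea.
    public enum TokenKind { Identifier, Keyword, Punctuation, Whitespace }

    // A token records how wide it is, never where it is. A token near the end of
    // the file is therefore identical before and after an edit near the start,
    // so the old instance can simply be reused.
    public sealed record Token(TokenKind Kind, int Width);

    public sealed class TokenStream
    {
        private readonly ImmutableList<Token> _tokens;

        public TokenStream(ImmutableList<Token> tokens) => _tokens = tokens;

        // Re-lex only the edited region and splice it between the untouched
        // prefix and suffix.
        public TokenStream WithEdit(int firstChanged, int lastChanged, ImmutableList<Token> relexed) =>
            new TokenStream(_tokens.GetRange(0, firstChanged)
                                   .AddRange(relexed)
                                   .AddRange(_tokens.GetRange(lastChanged + 1, _tokens.Count - lastChanged - 1)));

        // Absolute positions are derived on demand by summing widths.
        public int PositionOf(int index) => _tokens.Take(index).Sum(t => t.Width);
    }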
Once you have an immutable model that can handle inserts and deletions to build up an immutable token stream without re-lexing the entire file every time, you then have to do the same thing, but for grammatical analysis. This is in practice a considerably harder problem. I recommend that you obtain an undergraduate or graduate degree in computer science with an emphasis on parser theory if you have not already. We obtained the help of people with PhDs who did their theses on parser theory to design this particular bit of the algorithm.
Then, obviously, build a grammatical analyzer that can analyze C#. Remember, it has to analyze broken C#, not correct C#; IntelliSense has to work while the program is in a non-compiling state. So start by coming up with modifications to the grammar that have good error-recovery characteristics.
OK, so now you've got a parser that can efficiently do grammatical analysis without re-lexing or re-parsing anything but the edited region, most of the time, which means that you can do the work between keystrokes. I forgot to mention, of course you will need to come up with some mechanism to not block the UI thread while doing all of these analyses should the analysis happen to take longer than the time between two keystrokes. The new "async/await" feature of C# 5 should help with that. (I can tell you from personal experience: be careful with the proliferation of tasks and cancellation tokens. If you are careless, it is possible to get into a state where there are tens of thousands of cancelled tasks pending, and that is not fast.)
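A minimal sketch of that keystroke/cancellation pattern (the class and method names are made up, and the analysis itself is a placeholder): each keystroke cancels the previous analysis before starting a new one off the UI thread.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public sealed class BackgroundAnalyzer
    {
        private CancellationTokenSource _cts = new CancellationTokenSource();

        // Called on the UI thread on every keystroke. The previous analysis is
        // cancelled so stale work never piles up behind the current one.
        public async Task OnTextChangedAsync(string text)
        {
            _cts.Cancel();
            _cts = new CancellationTokenSource();
            var token = _cts.Token;
            try
            {
                // Run the expensive analysis off the UI thread.
                var completions = await Task.Run(() => Analyze(text, token), token);
                ShowCompletions(completions);          // back on the UI context after the await
            }
            catch (OperationCanceledException)
            {
                // Superseded by a newer keystroke; nothing to show.
            }
        }

        private static string[] Analyze(string text, CancellationToken token)
        {
            token.ThrowIfCancellationRequested();      // a real analyzer would poll this regularly
            return Array.Empty<string>();              // placeholder for the real work
        }

        private static void ShowCompletions(string[] items) { /* update the dropdown */ }
    }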
Now that you've got a grammatical analysis you need to build a semantic analyzer. Since you are only doing IntelliSense, it does not need to be a particularly sophisticated semantic analyzer. (Our semantic analyzer must do an analysis suitable for generating code from correct programs and correct error analysis from incorrect programs.) But of course, again it has to do good semantic analysis on broken programs, which does increase the complexity considerably.
My advice is to start by building a "top level" semantic analyzer, again using an immutable model that can persist the state of the declared-in-source-code types from edit to edit. The top level analyzer deals with anything that is not a statement or expression: type declarations, directives, namespaces, method declarations, constructors, destructors, and so on. The stuff that makes up the "shape" of the program when the compiler generates metadata.
Metadata! I forgot about metadata. You'll need a metadata reader. You need to be able to produce IntelliSense on expressions that refer to types in libraries, obviously. I recommend using the CCI libraries as your metadata reader, and not Reflection. Since you are only doing IntelliSense, obviously you don't need a metadata writer.
Anyway, once you have a top-level semantic analyzer, then you can write a statement-and-expression semantic analyzer that analyzes the types of the expressions in a given statement. Pay particular attention to name lookup and overload resolution algorithms. Method type inference will be particularly tricky, especially inside LINQ queries.
Once you've got all that, an IntelliSense engine should be easy; just work out the type of the expression at the current cursor position and display a dropdown appropriately.
how hard would it be for me to create my own of the same standard?
Well, we've got a team of, call it ten people, and it'll probably take, call it five years all together to get the whole thing done from start to finish. But we have lots more to do than just the IntelliSense engine. That's maybe only 40% of the work. Oh, and half those people work on VB, now that I think about it. But those people have on average probably five or ten years experience in doing this sort of work, so they're faster at it than you will be if you've never done this before.
So let's say it should take you about ten to twenty years of full time work, working alone, to build a Roslyn-quality IntelliSense engine for C# that can do acceptably-close-to-correct analysis of large programs in the time between keystrokes.
Longer if you need to do that PhD first, obviously.
Or, you could simply use Roslyn, since that's what it's for. That'll take you probably a few hours, but you don't get the fun of doing it yourself. And it is fun!
You can download the preview release here:
http://www.microsoft.com/download/en/details.aspx?id=27746
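To give a sense of the "just use Roslyn" route, here is a minimal console sketch, assuming a current Microsoft.CodeAnalysis.CSharp package is referenced: it parses deliberately broken code and lists the symbols in scope at the caret, which is the core of a completion list.

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    class CompletionSketch
    {
        static void Main()
        {
            // Deliberately broken code: the user has typed "cou" and paused.
            var code = "class C { void M() { int count = 0; cou } }";
            int position = code.LastIndexOf("cou", StringComparison.Ordinal) + 3;

            var tree = CSharpSyntaxTree.ParseText(code);
            var compilation = CSharpCompilation.Create("sketch",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
            var model = compilation.GetSemanticModel(tree);

            // Everything visible at the caret; a real editor would rank and filter further.
            foreach (var symbol in model.LookupSymbols(position)
                                        .Where(s => s.Name.StartsWith("cou", StringComparison.OrdinalIgnoreCase)))
            {
                Console.WriteLine(symbol.Name);
            }
        }
    }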
This is an area where Microsoft typically produces great results - Microsoft developer tools really are awesome. And there is a clear commercial advantage for sales of their developer tools and for sales of Windows to having the best intellisense, so it makes sense for Microsoft to devote the kind of resources Eric describes in his wonderfully detailed answer. Still, I think it's worth pointing out a few things:
Your customers may not actually need all the features that Microsoft's implementation provides. The Microsoft solution might be incredibly over-engineered in terms of the features that you need to provide to your customers/users. Unless you're actually implementing a generic coding environment that is intended to be competitive with Visual Studio, it is likely that there are aspects of your intended use that either simplify the problem, or that allow you to make compromises on the solution that Microsoft feels they cannot make. Microsoft will likely spend resources decreasing response times that are already measured in hundreds of milliseconds. That may not be something you need to do. Microsoft is spending time on providing an API for others to use for code analysis. That's likely not part of your plan. Prioritize your features and decide what "good enough" looks like for you and your customers then estimate the cost of implementing that.
In addition to bearing the obvious costs of implementing requirements that you may not actually have, Microsoft also carries some costs that may not be obvious if you haven't worked in a team. There are huge communication costs associated with teams. It's actually incredibly easy to have five smart people take longer to produce a solution than it takes for a single smart person to produce the equivalent solution. There are aspects of Microsoft's hiring practices and organizational structure that make this scenario more likely. If you hire a bunch of smart people with egos and then empower all of them to make decisions, you too can get a 5% better solution for 500% of the cost. That 5% better solution might be profitable for Microsoft, but it could be deadly for a small company.
Going from a 1 person solution to a 5 person solution increases the costs, but that's just the intra-team development costs. Microsoft has separate teams that are devoted to (roughly) design, development, and testing even for a single feature. The project-related communication between peers across these boundaries has higher friction than within each of the disciplines. This not only increases communication costs between individuals, but it also results in larger team sizes. And more than that - since it's not a single team of 12 individuals, but is instead 3 teams of 5 individuals, there is 3x the upward communication cost. More costs that Microsoft has chosen to carry that may not translate to similar costs for other companies.
My point here is not to describe Microsoft as an inefficient company. My point is that Microsoft makes a ton of decisions about everything from hiring, to team organization, to design and implementation that start from assumptions about profitability and risk that simply do not apply to companies that are not Microsoft.
In terms of the intellisense thing, there are various ways of thinking about the problem. Microsoft is producing a very generic, reusable solution that doesn't just solve intellisense, but also targets code navigation, refactoring, and various other uses for code analysis. You don't need to do things the same way if your sole goal is to make it easy for developers to enter code without having to type much. Targeting that feature doesn't take years of effort and there are all sorts of creative things you can do if you're not just providing an API, but you actually control the UI too.
I've been searching for resources for number recognition in images on the web. I found many links providing lots of resources on that topic, but unfortunately it's more confusing than helpful; I don't know where to start.
I've got an image with 5 numbers in it, non-disturbed (no captcha or something like this). The numbers are black on a white background, written in a standard font.
My first step was to separate the numbers. The algorithm I currently use is quite simple, it just checks if a column is entirely white and thus a space. Then it trims each character, so that there is no white border around it. This works quite well.
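For reference, that column-splitting step fits in a few lines of C# with System.Drawing. A rough sketch (the method name and threshold are made up, and the vertical trimming is left out for brevity):

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class DigitSegmenter
    {
        // Returns one rectangle per character by treating every all-white column
        // as a gap between characters.
        public static List<Rectangle> SplitIntoCharacters(Bitmap image, int whiteThreshold = 200)
        {
            var regions = new List<Rectangle>();
            int start = -1;
            for (int x = 0; x < image.Width; x++)
            {
                bool columnHasInk = false;
                for (int y = 0; y < image.Height && !columnHasInk; y++)
                    columnHasInk = image.GetPixel(x, y).R < whiteThreshold;

                if (columnHasInk && start < 0)
                    start = x;                                                      // a character begins
                else if (!columnHasInk && start >= 0)
                {
                    regions.Add(new Rectangle(start, 0, x - start, image.Height));  // a character ends
                    start = -1;
                }
            }
            if (start >= 0)
                regions.Add(new Rectangle(start, 0, image.Width - start, image.Height));
            return regions;
        }
    }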
But now I'm stuck with the actual recognition of the number. I don't know what the best way of identifying the correct one is. I don't think directly comparing against the font is a good idea, because if the numbers differ even a little, it will no longer work.
Could anyone give me a hint on how this is done?
It doesn't matter to the question, but I'll be implementing this in C# or Java. I found some libraries which would do the job, but I'd like to implement it myself, to learn something.
Why not look at using an open source OCR engine such as Tesseract?
http://code.google.com/p/tesseract-ocr/
C# Wrapper for Tesseract
http://www.pixel-technology.com/freeware/tessnet2/
Java Wrapper for Tesseract
http://sourceforge.net/projects/tessocrinjava/
While you might not consider using a third-party library to be implementing it yourself, there's a tremendous amount of work that goes into just integrating the third-party tool. Keep in mind also that something that may seem simple (recognizing the number 5 versus the number 6) is often very complex; we're talking thousands and thousands of lines of code. At the very least, look at the source code for Tesseract and it'll give you a good reason to want to leverage a third-party library.
Here's another SO question that'll give you some ideas about the algorithms involved: https://stackoverflow.com/questions/850717/what-are-some-popular-ocr-algorithms
I have mixed views about commercial class libraries. Am I better off using a commercial class library or starting from scratch? If buying a library is the way forward which one for a C# developer?
Put a value on your time, say $30 an hour. Estimate how long it would take you to write the library, then add two times that for debugging and testing. Subtract the time it's going to take you to learn how to use the commercial library with the given documentation. Multiply by your hourly rate. Compare.
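For example, with purely illustrative numbers: 40 hours to write it plus 80 more for debugging and testing, minus 10 hours to learn the commercial library, is 110 hours; at $30/hour that is $3,300 of your time, to be compared against the library's price.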
Writing a library can be fun and rewarding, but "not invented here" syndrome keeps a lot of companies from creating anything useful, as they're stuck reinventing the wheel for additional cost. Make sure it is extensible (if you don't get access to the source) and has what you need. Buy it.
As a personal project, it's probably worth writing it from scratch at least once to see what you can learn, but on company dime you need to be productive and efficient.
Or write it from scratch and release it open source ;)
I really think you are missing out the most important strategy:
Use open source and contribute changes back to the community.
I have seen lots of commercial software integrations go bad, usually because of real-life demands that are not met by the commercial product, many of which you do not discover until you are deeply committed (oh, so you wanted search to NOT have an upper limit of 1000 responses? No, we can't do that...)
If these things happen to you with open source, you at least always have the option of forking.
Always use a good quality library wherever possible - but only if you can have the source code. Experience tells me to not use anything that I can't have the source for.
Also - depending on what the library is, it is sometimes sensible to have a layer on top of it so you reduce dependencies, thus making any future replacement of a library easier.
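For what it's worth, a minimal sketch of that kind of insulating layer (the interface and the vendor type are made up for illustration):

    using System.Collections.Generic;
    using System.IO;

    // The application codes against this interface only.
    public interface IChartRenderer
    {
        void Render(IEnumerable<double> values, Stream output);
    }

    // The only class that references the third-party library directly; replacing
    // the vendor later means rewriting this adapter, not the whole application.
    public sealed class AcmeChartRenderer : IChartRenderer
    {
        public void Render(IEnumerable<double> values, Stream output)
        {
            // ...delegate to the hypothetical Acme charting API here...
        }
    }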
That depends on your goals. If you want to experiment and learn about a certain framework feature it makes sense to try to implement it.
However, if you're trying to make money (or fame or whatever) on your software, you should ask yourself how to best spend your time. How much value do you add to the application by implementing yet another Linked List? Probably not a lot, so use whatever good implementations available. On the other hand, you could add value to your application by implementing a very specific graph control (although there are plenty on the market). So look at it from a return on investment point of view.
I've created a workflow/flowchart style designer for something. At the moment it is using relatively simple Bezier curve lines to connect up the various end points of the "blocks" on the workflow.
However I would like something a bit more intuitive for the user. I want the lines to avoid obstacles like other blocks (rectangles) and possibly other lines too.
I prefer the bezier splines rather than polylines because they are prettier and seem to fit in better with the designer in general. But am willing to compromise if they are much harder to accomplish.
I know there is a whole load of science behind this. I've looked into things like Graphviz, Microsoft's GLEE, and their commercial AGL (automatic graph layout) library.
GLEE seems to barely be production worthy. And their commercial alternative is, well, a commercial alternative... it's quite expensive.
Graphviz doesn't seem to have been ported to .NET in any way.
I have seen a polyline implementation used by Windows Workflow Foundation for its "freeform designer". And this works, just about, but it is not really of production-grade appearance.
I'm surprised there isn't some plug-and-play .NET library for this type of thing. Something like:
Point[] RoutePolyline(Point begin, Point end, Rectangle[] rectObstacles, Point[] lineObstacles);
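Lacking such a library, a small-scale version of that signature can be approximated with a breadth-first search over a coarse grid. A rough sketch (lineObstacles, bend minimization, and spline fitting are left out; the cell size is an arbitrary assumption; the returned polyline is one point per grid cell, not simplified):

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    static class PolylineRouter
    {
        // Breadth-first search on a coarse grid; any cell overlapping an obstacle
        // rectangle is blocked.
        public static Point[] RoutePolyline(Point begin, Point end, Rectangle[] rectObstacles, int cell = 10)
        {
            int maxX = Math.Max(begin.X, end.X);
            int maxY = Math.Max(begin.Y, end.Y);
            foreach (var r in rectObstacles) { maxX = Math.Max(maxX, r.Right); maxY = Math.Max(maxY, r.Bottom); }
            int cols = maxX / cell + 2, rows = maxY / cell + 2;

            Point ToCell(Point p) => new Point(p.X / cell, p.Y / cell);
            Point ToWorld(Point c) => new Point(c.X * cell + cell / 2, c.Y * cell + cell / 2);
            bool Blocked(Point c) => rectObstacles.Any(r =>
                r.IntersectsWith(new Rectangle(c.X * cell, c.Y * cell, cell, cell)));

            Point start = ToCell(begin), goal = ToCell(end);
            var cameFrom = new Dictionary<Point, Point> { [start] = start };
            var queue = new Queue<Point>();
            queue.Enqueue(start);
            var steps = new[] { new Point(1, 0), new Point(-1, 0), new Point(0, 1), new Point(0, -1) };

            while (queue.Count > 0)
            {
                Point c = queue.Dequeue();
                if (c == goal) break;
                foreach (var d in steps)
                {
                    var n = new Point(c.X + d.X, c.Y + d.Y);
                    if (n.X < 0 || n.Y < 0 || n.X >= cols || n.Y >= rows) continue;
                    if (cameFrom.ContainsKey(n) || Blocked(n)) continue;
                    cameFrom[n] = c;
                    queue.Enqueue(n);
                }
            }

            // No route (e.g. an endpoint sits inside a block): fall back to a straight segment.
            if (!cameFrom.ContainsKey(goal)) return new[] { begin, end };

            var path = new List<Point> { end };
            for (Point c = goal; c != start; c = cameFrom[c]) path.Add(ToWorld(c));
            path.Add(begin);
            path.Reverse();
            return path.ToArray();
        }
    }

The resulting polyline could then be used as the control polygon for the Bezier splines you already draw.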
I haven't tried it (although I'm a happy customer of their Gantt product), but ILOG have a similar tool here.
To quote:
The ILOG Diagram for .NET algorithms share generic goals such as:
Minimizing the number of overlapping nodes
Minimizing the number of link crossings
Minimizing the total area of the drawing
Minimizing the number of bends (in orthogonal drawings)
Maximizing the smallest angle formed by consecutive incident links
Maximizing the display of symmetries
Supporting incremental layout, partial layout, subgraphs, intergraph links and nested layouts
Perhaps worth a look, at least.
Diagram.NET is a free, open source diagramming library in C#. It hasn't been updated in quite some time, but it's certainly worth a look - there may be something there which you can reuse.
http://www.dalssoft.com/diagram/
Are you limited to managed code only?
I did not have this restriction in the past, and I effectively integrated Graphviz with .NET. What we did was call an external process running the natively compiled "dot" and parse the result into a .NET object model. It worked perfectly and was fast enough for our needs.
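That approach needs no interop at all. A rough sketch of the shell-out step, assuming dot is on the PATH and using its plain-text output format (-Tplain):

    using System.Diagnostics;

    static class GraphVizRunner
    {
        // Feeds a DOT description to the external "dot" executable and returns its
        // plain-text layout output, which can then be parsed into node/edge positions.
        public static string Layout(string dotSource)
        {
            var psi = new ProcessStartInfo("dot", "-Tplain")
            {
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var proc = Process.Start(psi))
            {
                proc.StandardInput.Write(dotSource);      // e.g. Layout("digraph { a -> b }")
                proc.StandardInput.Close();
                string output = proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();
                return output;
            }
        }
    }

The -Tplain output is line-oriented ("graph", "node", "edge", "stop" records with coordinates), which keeps the parsing side simple.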
I'm sure you could do it better and more easily with C++/CLI today.