I have now coded up various graph search algorithms (A*, DFS, BFS, etc.) many times over. Every time, the only real difference is the actual search states I am searching over and how new states are generated from existing ones.
I am now faced with yet another search-heavy project and would like to avoid coding and debugging a general search algorithm yet again. It would be really nice if I could define a search state class, including information for generating successor states, heuristic cost, etc., and just plug it into some kind of existing search framework that does all of the heavy lifting for me. I know the algorithms aren't particularly difficult to code, but there are always enough tricks involved to make it annoying.
Does anything like this exist? I couldn't find anything.
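To make it concrete, here is a rough sketch of the kind of thing I mean (ISearchState and GenericSearch are names I've invented for illustration, not an existing library; requires .NET 6+ for PriorityQueue):

    using System.Collections.Generic;

    // Hypothetical contract a pluggable search state would implement.
    // TState needs sensible Equals/GetHashCode for the visited-set below.
    public interface ISearchState<TState>
    {
        IEnumerable<(TState Next, double StepCost)> Successors();
        double HeuristicCost();   // admissible estimate of remaining cost
        bool IsGoal();
    }

    public static class GenericSearch
    {
        // Generic A* over any state type; returns the cost of the cheapest
        // goal, or null if none is reachable. (Returning the full path would
        // require tracking parent links, omitted here for brevity.)
        public static double? AStar<TState>(TState start)
            where TState : ISearchState<TState>
        {
            var open = new PriorityQueue<(TState State, double G), double>();
            var bestG = new Dictionary<TState, double> { [start] = 0 };
            open.Enqueue((start, 0), start.HeuristicCost());

            while (open.TryDequeue(out var entry, out _))
            {
                var (state, g) = entry;
                if (state.IsGoal()) return g;
                if (g > bestG[state]) continue; // stale queue entry

                foreach (var (next, step) in state.Successors())
                {
                    double g2 = g + step;
                    if (!bestG.TryGetValue(next, out var known) || g2 < known)
                    {
                        bestG[next] = g2;
                        open.Enqueue((next, g2), g2 + next.HeuristicCost());
                    }
                }
            }
            return null; // goal unreachable
        }
    }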
Perhaps QuickGraph will be of interest.
QuickGraph provides generic directed/undirected graph data structures and algorithms for .NET 2.0 and up. It comes with algorithms such as depth-first search, breadth-first search, A* search, shortest path, k-shortest path, maximum flow, minimum spanning tree, least common ancestors, etc.
This sounds like a perfect use case for either a Delegate or a Lambda Expression.
Using Lambda Expressions for Tree Traversal – C#
http://blog.aggregatedintelligence.com/2010/05/using-lambda-expressions-for-tree.html
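A rough sketch of the idea, with invented names: the traversal takes the "get children" logic as a lambda, so a single walker serves any node type:

    using System;
    using System.Collections.Generic;

    public static class TreeWalker
    {
        // Depth-first traversal where the caller supplies the child selector.
        public static IEnumerable<T> DepthFirst<T>(T root, Func<T, IEnumerable<T>> children)
        {
            var stack = new Stack<T>();
            stack.Push(root);
            while (stack.Count > 0)
            {
                var node = stack.Pop();
                yield return node;
                foreach (var child in children(node))
                    stack.Push(child);
            }
        }
    }

    // Usage: walk a directory tree without writing a new traversal.
    // foreach (var dir in TreeWalker.DepthFirst(@"C:\temp",
    //          d => System.IO.Directory.EnumerateDirectories(d)))
    //     Console.WriteLine(dir);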
I found a lot of implementations that calculate the Levenshtein distance between two strings, but is there any implementation that can generate all variations of a given string within a Levenshtein distance of at most 2?
The reason: I'm using ElasticSearch to execute fuzzy searches, but under my query load I have performance issues because ElasticSearch calculates those possibilities on every query. I want to compute and store those values once.
The most commonly cited reference implementation for generating edit-distance variants is in Python; you can see it in this answer.
The original author linked subsequent implementations in other languages at the bottom of his blog under the heading Other Computer Languages. There are four implementations in C#, and this one in particular is functional. (I'm unsure under what license those implementations are published, so I won't transcribe them into this thread.)
But using wildcard searches within ElasticSearch is the correct approach. The engine will implement approximate string matching as efficiently as possible; there are a number of different algorithms this can be based on, and the optimal choice depends on your data structure, etc.
You can sidestep some of that work by generating the edit-distance variants yourself, but in most cases, if you're using a database or a search engine, its implementation will have better performance. (This is a computationally expensive task; there's no way around that.)
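For illustration, here is a hedged C# sketch of the usual "edits1" generator from that Python reference (deletes, transposes, replaces, and inserts over a lowercase alphabet; the names are mine). Applying it twice yields the distance-2 candidates, and the candidate set grows very quickly, which is part of why the engine-side approach usually wins:

    using System.Collections.Generic;
    using System.Linq;

    public static class EditVariants
    {
        const string Alphabet = "abcdefghijklmnopqrstuvwxyz";

        // All strings within (Damerau-)Levenshtein distance 1 of `word`.
        // May yield duplicates; Edits2 deduplicates with a HashSet.
        public static IEnumerable<string> Edits1(string word)
        {
            for (int i = 0; i <= word.Length; i++)
            {
                string left = word.Substring(0, i), right = word.Substring(i);
                if (right.Length > 0)
                    yield return left + right.Substring(1);                       // delete
                if (right.Length > 1)
                    yield return left + right[1] + right[0] + right.Substring(2); // transpose
                foreach (char c in Alphabet)
                {
                    if (right.Length > 0)
                        yield return left + c + right.Substring(1);               // replace
                    yield return left + c + right;                                // insert
                }
            }
        }

        // Distance <= 2: apply Edits1 to every distance-1 variant.
        public static HashSet<string> Edits2(string word) =>
            Edits1(word).SelectMany(Edits1).ToHashSet();
    }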
I have a 3D voxel game and I'm trying to find the best-fitting pathfinding algorithm to use. I've been wondering whether the A* algorithm is capable of handling multiple levels, for example a multi-story building, and of finding routes through staircases or ladders.
Is this possible with A* or should I use something else?
Thanks in advance.
Any pathfinding algorithm (uniform or non-uniform) can be used as long as you use it correctly. You have to decide first what kind of pathfinding you want, depending on what you want to achieve.
Uniform (grid/graph) representations can have a lot of overhead for a multi-story building due to the sheer number of nodes (assuming you model the whole building at once).
Non-uniform approaches are much slower than uniform graph searches but don't have much of an overhead.
It really depends on what you want your characters to do, and how you will optimize it.
A* is fast, but you might want to check out some variants of A*, such as Jump Point Search.
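To illustrate the point that A* itself doesn't care about stories, only the successor function does: if staircase or ladder cells yield neighbors on the level above and below, plain A* will route through them. A sketch with invented types (C# 10+ for record structs):

    using System.Collections.Generic;

    public record struct Voxel(int X, int Y, int Z);

    public static class VoxelNeighbors
    {
        // Hypothetical world query: is this cell standable?
        public delegate bool IsWalkable(Voxel v);

        public static IEnumerable<Voxel> Successors(Voxel v, IsWalkable walkable,
                                                    ISet<Voxel> ladders)
        {
            // Same-floor moves in four directions.
            foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                var n = new Voxel(v.X + dx, v.Y + dy, v.Z);
                if (walkable(n)) yield return n;
            }
            // Vertical moves only where the map marks a ladder/staircase cell.
            if (ladders.Contains(v))
            {
                var up = v with { Z = v.Z + 1 };
                var down = v with { Z = v.Z - 1 };
                if (walkable(up)) yield return up;
                if (walkable(down)) yield return down;
            }
        }
    }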
Try using the NavMesh component-based workflow first. If there is a staircase behaviour that can be used efficiently and automatically, NavMesh will find it.
Otherwise, you will need to use something called hierarchical search. Read an example here.
I'm using a multiclass SVM and a deep neural network.
There are a lot of arguments I can adjust, especially the choice of kernel.
What is the best way to choose the ideal arguments?
Can it be done iteratively with some objective-function minimization algorithm, or will it take forever?
I think this is one of the most time-consuming tasks in data mining projects. Finding the best arguments (known as hyperparameters) is hard and also essential. There are some solutions for it: e.g., scikit-learn, a machine learning library for Python, has Grid Search to find good hyperparameters for algorithms such as SVM; also, here, an evolutionary algorithm is used to find the right hyperparameters for scikit-learn machine learning algorithms.
So for your question, I think it's best to write (or find) something like scikit-learn's Grid Search idea: test a range of parameter (hyperparameter) values for a specific algorithm and return the best parameters according to the test results.
For C# and the Accord.NET framework, there is a GridSearch class to optimize the parameters: http://accord-framework.net/docs/html/T_Accord_MachineLearning_GridSearch_1.htm
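Conceptually, a grid search is just nested loops over candidate values with a validation score per combination. A minimal sketch; trainAndScore is a placeholder for your own SVM training and evaluation code, not a library call:

    using System;

    public static class HyperparameterSearch
    {
        // Exhaustive grid search over (C, gamma) for, e.g., an RBF-kernel SVM.
        // `trainAndScore` is assumed to train on the training set and return
        // validation accuracy; plug in your own implementation.
        public static (double C, double Gamma, double Score) GridSearch(
            double[] cValues, double[] gammaValues,
            Func<double, double, double> trainAndScore)
        {
            var best = (C: 0.0, Gamma: 0.0, Score: double.NegativeInfinity);
            foreach (var c in cValues)
                foreach (var gamma in gammaValues)
                {
                    double score = trainAndScore(c, gamma);
                    if (score > best.Score)
                        best = (c, gamma, score);
                }
            return best;
        }
    }

    // Usage: candidate values are typically log-spaced, e.g.
    // var best = HyperparameterSearch.GridSearch(
    //     new[] { 0.1, 1, 10, 100 }, new[] { 0.001, 0.01, 0.1, 1 },
    //     (c, g) => TrainSvmAndScore(c, g)); // TrainSvmAndScore: your code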
In a future project I will need to implement word-search functionality (either by length, or given a set of characters and their positions in the word) that returns all words meeting certain criteria.
To do so, I will need language dictionaries that can be easily queried with LINQ. The first thing I'd like to ask is whether anyone knows of good dictionaries to use in this kind of application and environment.
I'd also like to ask about good ways to search said dictionary for a word. Would a hash table help speed up queries? The thing is that a language dictionary can be quite huge, and since I will have plenty of search criteria, what would be a good way to implement this functionality without hindering search speed?
Without knowing the exact set of things you are likely to need to optimize for, it's hard to say. The standard data structure for efficiently organizing a large corpus of words for fast retrieval is the trie, or, if space efficiency is important (because, say, you're writing a program for a phone or another memory-constrained environment), a DAWG, a Directed Acyclic Word Graph. (A DAWG is essentially a trie that merges common paths to leaves.)
Other interesting questions that I'd want to know an answer to before designing a data structure are things like: will the dictionary ever change? If it does change, are there performance constraints on how fast the new data needs to be integrated into the structure? Will the structure be used only as a fast lookup device, or would you like to store summary information about the words in it as well? (If the latter then a DAWG is unsuitable, since two words may share the same prefix and suffix nodes.) And so on.
I would search the literature for information on tries, DAWGs and ways to optimize Scrabble programs; clearly Scrabble requires all kinds of clever searching of a corpus of strings, and as a result there have been some very fast variants on DAWG data structures built by Scrabble enthusiasts.
I have recently written an immutable trie data structure in C# which I'm planning on blogging about at some point. I'll update this answer in the coming months if I do end up doing that.
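To make the trie idea concrete, here is a minimal mutable sketch (not the immutable version mentioned above); WithPrefix is the kind of lookup that makes tries attractive for word searches:

    using System.Collections.Generic;

    public class TrieNode
    {
        public readonly Dictionary<char, TrieNode> Children = new();
        public bool IsWord;
    }

    public class Trie
    {
        private readonly TrieNode _root = new();

        public void Add(string word)
        {
            var node = _root;
            foreach (char c in word)
            {
                if (!node.Children.TryGetValue(c, out var next))
                    node.Children[c] = next = new TrieNode();
                node = next;
            }
            node.IsWord = true;
        }

        // All stored words starting with `prefix`: walk down the prefix,
        // then enumerate every word below that node.
        public IEnumerable<string> WithPrefix(string prefix)
        {
            var node = _root;
            foreach (char c in prefix)
                if (!node.Children.TryGetValue(c, out node))
                    yield break; // no word has this prefix
            foreach (var w in Collect(node, prefix))
                yield return w;
        }

        private static IEnumerable<string> Collect(TrieNode node, string soFar)
        {
            if (node.IsWord) yield return soFar;
            foreach (var (c, child) in node.Children)
                foreach (var w in Collect(child, soFar + c))
                    yield return w;
        }
    }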
This may be a silly question (with MSDN and all), but maybe some of you will be able to help me sift through amazing amounts of information.
I need to know the specifics of the implementations of common data structures and algorithms in C#. That is, I need to know, for example, how linked lists are handled and represented, and how they and their methods are defined.
Is there a good centralized source of documentation for this (with code), or should I just reconstruct it? Have you ever had to know the specifics of these things to decide what to use?
Regards, and thanks.
Scott Mitchell has a great 6-part article that covers many .NET data structures:
An Extensive Examination of Data Structures
For an algorithmic overview of data structures, I suggest reading the textbook "Introduction to Algorithms" by Cormen et al.
For details on each .NET data structure the MSDN page on that specific class is good.
When all of those fail to address the issue, Reflector is always there. You can use it to dig through the actual source and see things for yourself.
If you really want to learn it, try making your own.
Googling for linked lists will give you a lot of hits and sample code to go off of. Wikipedia will also be a good resource.
It depends on the language. Most languages now ship with the basics built in, but that doesn't mean their implementations are the same. The same-named type, LinkedList, is completely different in C# than in Java or C++. Even the string libraries differ. Strings in C#, for instance, are immutable, so every modification creates a new String object; this is something you learn quickly when it brings your program to a crashing halt the first time you work with substrings in C#.
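To make that concrete: since .NET strings are immutable, building a string by repeated concatenation allocates a fresh object on every pass, while StringBuilder mutates a single buffer:

    using System.Text;

    // Each += allocates a brand-new string (roughly O(n^2) work overall):
    string s = "";
    for (int i = 0; i < 100_000; i++)
        s += "x";

    // StringBuilder appends into one growable buffer instead:
    var sb = new StringBuilder();
    for (int i = 0; i < 100_000; i++)
        sb.Append('x');
    string t = sb.ToString();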
So the answer to your question is complicated, because I don't know quite what you're after. If you're just going to teach a class what the generic versions of these algorithms and data structures are, you can present them without getting into the problems I mentioned above. You'll just need to select and read about a particular implementation of each. For LinkedList, for example, you need to be able to instantiate the list, destroy the list, copy the list, add to the list somewhere (usually the front or back), remove from the list, and so on. You could get fancy and add as many methods as you want.
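If you do roll your own as suggested above, a toy singly linked list covering those basic operations might look like this (a sketch for learning purposes, not how the BCL's LinkedList<T>, a doubly linked list, is implemented):

    using System;

    // Toy singly linked list: instantiate, add to front, remove, enumerate.
    public class SinglyLinkedList<T>
    {
        private sealed class Node
        {
            public T Value;
            public Node Next;
            public Node(T value) => Value = value;
        }

        private Node _head;

        // Insert at the front: O(1).
        public void AddFirst(T value) =>
            _head = new Node(value) { Next = _head };

        // Remove the first node holding `value`: O(n) scan.
        public bool Remove(T value)
        {
            Node prev = null;
            for (var cur = _head; cur != null; prev = cur, cur = cur.Next)
            {
                if (Equals(cur.Value, value))
                {
                    if (prev == null) _head = cur.Next;
                    else prev.Next = cur.Next;
                    return true;
                }
            }
            return false;
        }

        // Visit every element in order.
        public void ForEach(Action<T> action)
        {
            for (var cur = _head; cur != null; cur = cur.Next)
                action(cur.Value);
        }
    }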