Does anyone have a ready implementation of a reverse breadth-first traversal algorithm in C#?
By reverse breadth-first traversal, I mean that instead of traversing a tree starting from a common root node, I want to traverse it from the bottom and gradually converge on the common node.
Consider the figure below, which shows the output of a breadth-first traversal:
In my reverse breadth-first traversal, 9, 10, 11 and 12 would be the first few nodes found (their order among themselves is not important, as they are all on the same level). 5, 6, 7 and 8 would be the next few nodes found, and so on; 1 would be the last node found.
Any ideas or pointers?
Edit: changed "Breadth First Search" to "Breadth First traversal" to clarify the question.
Use a combination of a stack and queue.
Do the 'normal' BFS using the queue (which I presume you already know how to do), and push each node onto the stack as you encounter it.
Once the BFS is done, the stack will contain the nodes in reverse BFS order.
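A minimal sketch of this queue-plus-stack idea (in Python for brevity; it maps directly onto C#'s Queue&lt;T&gt; and Stack&lt;T&gt;, and the example tree is made up):

```python
from collections import deque

def reverse_bfs(root, children):
    """children(node) -> iterable of that node's children."""
    queue = deque([root])   # drives the normal BFS
    stack = []              # records the visit order
    while queue:
        node = queue.popleft()
        stack.append(node)
        queue.extend(children(node))
    while stack:            # popping the stack reverses the BFS order
        yield stack.pop()

# Hypothetical tree: 1 -> (2, 3), 2 -> (4, 5)
tree = {1: [2, 3], 2: [4, 5], 3: [], 4: [], 5: []}
print(list(reverse_bfs(1, tree.__getitem__)))  # [5, 4, 3, 2, 1]
```

Note that the order within each level comes out reversed as well, which is fine here since order within a level doesn't matter.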
Run a normal BFS from the root node and let depth[i] be a linked list of the nodes at depth i. For your example you'd have:
depth[1] = {1}, depth[2] = {2, 3, 4}, etc. You can build this with a simple BFS. Then print all the nodes in depth[maxDepth], then those in depth[maxDepth - 1], and so on.
The depth of a node i is equal to the depth of its parent node + 1; the depth of the root node can be taken as 1 or 0.
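Sketching the depth-bucket approach (a Python rendering with the root at depth 0, on a truncated version of the example tree; the helper name is made up):

```python
from collections import deque, defaultdict

def nodes_by_depth(root, children):
    """BFS that buckets nodes by their depth (root = depth 0)."""
    depth = defaultdict(list)
    queue = deque([(root, 0)])
    while queue:
        node, d = queue.popleft()
        depth[d].append(node)
        for child in children(node):
            queue.append((child, d + 1))
    return depth

# Truncated example tree: 1 -> (2, 3, 4), 2 -> (5, 6), 3 -> 7, 4 -> 8
tree = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [8],
        5: [], 6: [], 7: [], 8: []}
levels = nodes_by_depth(1, tree.__getitem__)
for d in sorted(levels, reverse=True):  # deepest level first
    print(levels[d])                    # [5, 6, 7, 8] then [2, 3, 4] then [1]
```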
I have a linked list like this:
public class Node {
    public Node next;
    public Node prev;
    public int length;
    public int weight;
}
I am trying to find a rolling density over a non-circular linked list (one with a clear beginning and end), using a specific length as the window. This adds complexity because the end nodes will contribute only a fraction of their weight.
That means given 3 nodes
A (L: 10, W:10) -> B (L: 5, W:10) -> C (L:20, W:5)
(where L means length and W means weight)
and a window of 9, then for node B the window would use all of node B and have 4 left over. It would split the remaining window evenly before and after: 2 from A and 2 from C.
so the density would be:
[(2/10)*(10) + (5/5)*(10) + (2/20)*(5)] / 9 = 1.3889
This common case is not the part I am struggling with; it's the end points. When there is not enough length on the left side, the window should take more from the right side, and vice versa. There is also the case where there is not enough length on either side.
I am unsure whether I should implement this as a recursive function or as a loop. I know a loop would require fewer calculations, but a recursive function could be easier to understand.
Case 1: There is just 1 node in the linked list.
Take the density of that one node, ignoring the window.
Case 2: There is not enough length on the left/right side.
Take the remainder from the right/left side instead.
Case 3: There is not enough length on both sides, but there is more than just 1 node.
Take all the nodes and don't require the window to be met.
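To make the cases concrete, here is one way they might fold into a single loop-based helper (a Python sketch over (length, weight) pairs; `take` and `side_capacity` are invented helpers, and the exact tie-breaking when both sides run short is my guess, not something stated in the question):

```python
def take(nodes, start, step, budget):
    """Walk from index `start` in direction `step`, consuming up to `budget`
    length; returns (weighted contribution, length actually taken)."""
    total, taken, i = 0.0, 0.0, start
    while budget > 1e-9 and 0 <= i < len(nodes):
        length, weight = nodes[i]
        use = min(length, budget)
        total += (use / length) * weight
        taken += use
        budget -= use
        i += step
    return total, taken

def side_capacity(nodes, start, step):
    """Total length available on one side of the center node."""
    cap, i = 0.0, start
    while 0 <= i < len(nodes):
        cap += nodes[i][0]
        i += step
    return cap

def rolling_density(nodes, i, window):
    length, weight = nodes[i]
    if len(nodes) == 1:                      # Case 1: ignore the window
        return weight / length
    center = min(length, window)
    want = (window - center) / 2             # even split of the leftover
    lcap = side_capacity(nodes, i - 1, -1)
    rcap = side_capacity(nodes, i + 1, +1)
    # Case 2: a short side pushes its shortfall onto the other side.
    lsum, ltaken = take(nodes, i - 1, -1, min(lcap, want + max(0.0, want - rcap)))
    rsum, rtaken = take(nodes, i + 1, +1, min(rcap, want + max(0.0, want - lcap)))
    used = center + ltaken + rtaken          # Case 3: used < window, so divide
    return ((center / length) * weight + lsum + rsum) / used  # by what was taken

# The A -> B -> C example, window 9, centered on B:
nodes = [(10, 10), (5, 10), (20, 5)]
print(rolling_density(nodes, 1, 9))  # 12.5 / 9 = 1.3888...
```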
With all you wrote, it seems your only question is: "should I loop or should I recurse?" Depending on your needs: whichever is easiest to read and maintain (or, if performance is your highest priority, whichever is more performant).
You're dealing with a linked list, so I would recommend simply looping rather than recursing (if you were dealing with a tree, that would be a different story). In either case, you may find a way to save a lot of computation with some form of memoization: if your window involves going through hundreds of nodes to the left and right, you can store much of your density calculation for node n, and almost all of it will be reusable for n+1. Before you get into that, though, I'd test the non-memoized version first and see whether it's sufficiently performant.
One design pattern that might help you reduce the number of edge cases is a null (sentinel) node:
Node nullNodeBeginning = new Node { length = 0, weight = 0 };
nullNodeBeginning.prev = nullNodeBeginning;
Node nullNodeEnding = new Node { length = 0, weight = 0 };
nullNodeEnding.next = nullNodeEnding;
If you add nullNodeBeginning to the beginning of your linked list and nullNodeEnding to the ending of your linked list, you effectively have an infinite linked list. Then your logic becomes something like:
Get the length of the specific center node
For previous, next:
Get the length of n nodes in that direction (may total to 0)
If total length = total length of list, you can't fill the window
If length < n, get nodes from the other direction
There are other ways to do it (and this one requires maintaining the total length of the nodes), but by capping your list with null nodes you should be able to avoid all special cases other than "insufficient nodes for the window", and make your logic much cleaner.
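A small Python sketch of the capping idea (all names are invented; the zero-length sentinels link to themselves, so a walk never has to test for null and simply stops when it reaches a node of length 0):

```python
class Node:
    def __init__(self, length, weight):
        self.length, self.weight = length, weight
        self.prev = self.next = None

def cap(head, tail):
    """Attach self-linked, zero-length sentinel nodes at both ends."""
    start, end = Node(0, 0), Node(0, 0)
    start.prev = start
    start.next = head
    head.prev = start
    end.next = end
    end.prev = tail
    tail.next = end

def walk(node, direction, budget):
    """Consume up to `budget` length walking in `direction` ('prev'/'next').
    The zero-length cap ends the walk; leftover budget > 0 means we ran short."""
    total = 0.0
    while budget > 1e-9 and node.length > 0:
        use = min(node.length, budget)
        total += (use / node.length) * node.weight
        budget -= use
        node = getattr(node, direction)
    return total, budget

# A(L=10, W=10) <-> B(L=5, W=10) <-> C(L=20, W=5)
a, b, c = Node(10, 10), Node(5, 10), Node(20, 5)
a.next, b.prev, b.next, c.prev = b, a, c, b
cap(a, c)
print(walk(b.prev, 'prev', 2))  # (2.0, 0) -- 2 units of A, nothing left over
```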
An interview question on building and searching an adjacency tree, but I've never worked with them before, so I'm not sure where to begin.
I have a file containing data like:
2 13556 225 235
225 2212 226
2212 8888 2213
8888 144115 72629 141336 8889 146090 129948 167357
144115 160496 163089 144114 144116
...
formatted as such:
<parent node> <child node> [ <child node> [ …] ]
Every edge has length 1.
I then need to calculate the shortest path between two of the nodes (the two are specified in the question). Then, I need to provide the estimated complexity in big-O notation.
The latter I can probably fudge; I'd never even heard of it until now, and Wikipedia doesn't help me much in understanding how to work out the big-O of a search function, but I'll worry about that later (unless someone has a good link they could share).
My concern now is modeling this data and then searching it for the shortest path. Like I said, I've never worked with this kind of structure before, so I'm at a loss as to where to even begin. I found another question on adjacency lists here, but it doesn't appear to be quite what I'm looking for, unless I'm totally missing the point. It seems the input data would need to be reorganized to satisfy the structure used in that question; since I'm reading my data from a file, I'd have to traverse every node and list of nodes to determine whether I've already entered a parent, which could potentially take a long time. I also don't see how I'd build a BFS over that structure.
There are lots of examples of searching out there, so I can likely sort out that part, but any help in getting started on a data model that is suitable both for loading from the data file and for a BFS (or, if there's a better search option out there, please school me) would be a great help.
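As a starting point, here's how the file might be loaded into an adjacency structure (a Python sketch; I'm assuming the path may travel up through a node's parent as well as down through its children, so each edge is stored in both directions — drop the second append if the search should only go downward):

```python
from collections import defaultdict

def load_graph(lines):
    """Each line: '<parent> <child> [<child> ...]'; every edge has length 1."""
    links = defaultdict(list)
    for line in lines:
        parent, *children = map(int, line.split())
        for child in children:
            links[parent].append(child)
            links[child].append(parent)   # undirected: store the edge both ways
    return links

lines = ["2 13556 225 235", "225 2212 226", "2212 8888 2213"]
g = load_graph(lines)
print(g[225])  # [2, 2212, 226] -- its parent plus its own children
```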
You'll likely be storing this data in a Dictionary&lt;int, List&lt;int&gt;&gt; (Links): the key is a node ID, and the value is the list of nodes you can reach from that node.
You'll also need a Dictionary&lt;int, int&gt; (ShortestPathLastStep), which maps a node ID to the node ID you arrived from. This records the last step of the shortest path to a given node, which you need in order to play the shortest path back.
To perform a BFS (breadth-first search) you'll use a Queue&lt;int&gt; (bfsQueue). Enqueue the start node (given in your question), then execute the following algorithm:
while bfsQueue is not empty:
    currentNodeID = dequeue bfsQueue
    foreach childNodeID in Links[currentNodeID]:
        if childNodeID == destinationNodeID:
            exit and play back the shortest path
        if not ShortestPathLastStep.ContainsKey(childNodeID):
            ShortestPathLastStep.Add(childNodeID, currentNodeID)
            bfsQueue.Enqueue(childNodeID)
(Note that a node is enqueued only the first time it is seen; otherwise the queue can grow without bound.)
This solution assumes traveling between any two nodes has constant cost. That is ideal for BFS, because the first time you arrive at the destination you will have taken a shortest path (not true if links have variable length). If links are not constant length, you'll have to add more logic when deciding whether to overwrite the ShortestPathLastStep value: you can't exit until the queue is empty, and you only push a node onto the queue if you've never been there (it isn't in ShortestPathLastStep) or you've discovered a shorter way of arriving there (in which case you'll have to recalculate the shortest distances for the nodes reachable from it).
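The algorithm above, including the playback step, might look like this (a Python sketch with the answer's names; in C# you'd use Dictionary&lt;int, List&lt;int&gt;&gt;, Dictionary&lt;int, int&gt; and Queue&lt;int&gt;, and the example graph is made up):

```python
from collections import deque

def shortest_path(links, start, dest):
    """BFS over unit-length edges; last_step[n] records where we came from."""
    if start == dest:
        return [start]
    last_step = {start: None}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        for child in links.get(current, []):
            if child in last_step:
                continue                      # already reached at least as fast
            last_step[child] = current
            if child == dest:                 # first arrival = shortest path
                path = [dest]
                while last_step[path[-1]] is not None:
                    path.append(last_step[path[-1]])
                return path[::-1]             # play back, then reverse
            queue.append(child)
    return None                               # destination unreachable

links = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6]}
print(shortest_path(links, 1, 6))  # [1, 2, 4, 6]
```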
I have a certain subset of nodes of an undirected and unweighted graph. I am trying to determine whether there is a path between all of these nodes, and, if there is, what is the shortest path which includes the fewest nodes which are not in the subset of nodes.
I have been trying to think of a way to modify a minimum spanning tree algorithm to accomplish this, but so far I haven't come up with a workable solution.
Is there a good way to do this or is this a description of an already known algorithm?
I am trying to determine whether there is a path between all of these nodes
(I understand from this that you are looking for a single path that visits all the marked nodes.)
Well, my friend, this could be a problem: you are describing a variation of the Traveling Salesman Problem and the Hamiltonian Path Problem (if you are looking for a simple path, the reduction from Hamiltonian Path is straightforward: mark all the nodes).
But I am afraid these problems are NP-hard.
An NP-hard problem is a problem for which no polynomial-time solution is known, and the general assumption is that one does not exist.1
Thus, your best shot is probably going to be some exponential solution: there is an O(n^2 * 2^n) dynamic-programming solution to TSP, or a brute-force solution which is O(n!).
(1) This is not really a formal definition, but it is enough information to understand the problem; there is really a lot more to NP-hard problems.
Here is an approach that may get you some of the way there:
Use Floyd-Warshall or Dijkstra's to find the distance d(i, j) for every pair of nodes i, j in the subset of nodes.
(if d(i,j) = infinity then stop now, there is no solution)
Make a new graph which contains each node from the subset. For each d(i, j), add an edge between node i, node j in the new graph with the weight = d(i, j)
Now use a traveling salesman algorithm on this new graph to find the shortest path to visit all nodes.
This shortest path gives you the length of the path but the path may visit some nodes multiple times. This means we have an upper bound on the number of nodes outside of the subset required.
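A sketch of steps 1-3 in Python. Since the graph here is unweighted, a plain BFS from each subset node gives the pairwise distances (instead of Floyd-Warshall or Dijkstra's), and the TSP step is brute force, which is only viable for small subsets:

```python
from collections import deque
from itertools import permutations

def bfs_distances(graph, source):
    """Unweighted single-source shortest distances (step 1)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def subset_path_length(graph, subset):
    """Metric closure over the subset (step 2), then brute-force search for
    the shortest path visiting every subset node (step 3). Swap in Held-Karp
    (O(n^2 * 2^n)) for larger subsets."""
    d = {u: bfs_distances(graph, u) for u in subset}
    if any(v not in d[u] for u in subset for v in subset):
        return None  # some pair is disconnected: no solution
    return min(sum(d[p[i]][p[i + 1]] for i in range(len(p) - 1))
               for p in permutations(subset))

# Hypothetical chain 1 - 2 - 3 - 4 with subset {1, 3, 4}:
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(subset_path_length(graph, [1, 3, 4]))  # 3 (start at 4 or 1, walk the chain)
```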
Use Dijkstra's algorithm or a breadth-first search.
You should use Dijkstra's shortest path algorithm. First, you must assign weights (or distances) to all edges in the graph: every edge that connects two nodes that are not in the subset is given weight 1, and every edge that connects one or two nodes from the subset is given infinite weight. Second, run Dijkstra's algorithm on the resulting graph.
This algorithm will examine every edge of the graph.
Also, you can use A* (A-star) algorithm.
Update:
I didn't understand this problem at first. As @amit says, this is an NP-hard problem, a combination of HCP and TSP. Maybe some sort of stochastic search algorithm can solve it in polynomial time with high probability.
For someone who doesn't really have a background in graph theory: I have tackled this problem and found that in an unweighted, undirected graph the easiest method is depth-first search. Implementations of algorithms such as Dijkstra's often assume a weighted graph and make you input an arbitrary value for the weight.
The solution I found to work was to traverse the nodes using DFS and log every successful journey; then it's simply a case of returning the shortest successful one.
Here's the file that does the heavy lifting:
Depth First Search Algorithm
I created Graph/Node/Connection classes that not only show you the shortest path but can also tell you whether all nodes are connected:
var allNodesAreConnected = StartNode.AllNodes.All(n => n.IsConnectedToStartNode);
Or if you want to know what nodes are not connected change it a little bit:
var notConnectedNodes = StartNode.AllNodes.Where(n => !n.IsConnectedToStartNode);
More examples and full code in this post:
Create your own navigation system (with a Graph, Node and Connection class)
I have a very simple question regarding BSTs. I have seen multiple definitions regarding duplicate entries: some define BSTs as not allowing duplicates at all; some say a node's left child is <= the node's value and the right child is greater; and some definitions are the opposite (left child < the node, right child >=).
So my question is: what is the official definition (if one exists) of BSTs regarding duplicate entries? For example, what would a BST look like after inserting the values 3, 5, 10, 8, 5, 10?
Thank you in advance for clarifying the definition and answering my question!
One of the well-known books in the algorithms and data structures area is CLRS, also known as the bible of data structures and algorithms.
According to that book's definition, duplicate entries are placed in the right subtree of the node that contains the same key. As an example, take a look at the BST insertion algorithm adapted from the book:
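The book's pseudocode isn't reproduced here, but the rule can be sketched as follows (a Python rendering of the CLRS-style rule: go left only on strictly smaller keys, so equal keys fall to the right), applied to the question's sequence 3, 5, 10, 8, 5, 10:

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def insert(root, key):
    """CLRS-style TREE-INSERT: go left on key < node.key, otherwise right,
    so equal keys end up in the right subtree of the node they match."""
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for k in [3, 5, 10, 8, 5, 10]:
    root = insert(root, k)
# Resulting shape:
# 3
#  \
#   5
#    \
#     10
#    /  \
#   8    10
#  /
# 5
```

Note that the duplicate 5 lands in the right subtree of the original 5, as the rule promises, even though locally it ends up as the left child of 8.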
The important point is that not having duplicates in the tree ensures fast lookup times.
If you have duplicates on one side of a node, your search time will suffer, because you have to go through all the duplicates before you can continue.
http://en.wikipedia.org/wiki/Binary_search_tree
I'm doing some processing on a treeview. I don't use either a stack or a queue to process the nodes; I just do this:
void somemethod(TreeNode root)
{
    foreach (TreeNode item in root.Nodes)
    {
        // doSomething on item
        somemethod(item);
    }
}
I'm a little blocked right now (can't think with clarity) and I can't see what kind of tree processing I'm doing. Is this BFS or DFS, or neither of them?
My guess was DFS, but I wasn't sure. The CLR doesn't do anything weird like processing two siblings before passing down, taking advantage of multiprocessing, does it? That weird thought came to my mind and clouded my judgment.
You are doing a DFS (Depth first search/traversal) right now using recursion.
It's depth-first because recursion works the same way a stack would: you process the children of the current node before you process its next sibling, so you go deep first instead of broad.
Edit:
In response to your comment / updated question: your code will be processed sequentially, item by item; there will be no parallel processing and no "magic" involved. Traversal using recursion is equivalent to using a stack (LIFO = last in, first out), just implicitly. So your method could also have been written like the following, which produces the same order of traversal:
public void SomeMethod(TreeNode root)
{
    Stack<TreeNode> nodeStack = new Stack<TreeNode>();
    nodeStack.Push(root);
    while (nodeStack.Count > 0)
    {
        TreeNode node = nodeStack.Pop();
        // do something on node
        // (note: unlike the recursive version, this also visits root itself)
        // Push children in reverse order, so the first child is popped first.
        // Cast is needed because TreeNodeCollection is non-generic; this
        // requires "using System.Linq;".
        foreach (TreeNode item in node.Nodes.Cast<TreeNode>().Reverse())
            nodeStack.Push(item);
    }
}
I hope this makes it clearer what is going on - it might be useful for you to write out the nodes to the console as they are being processed or actually walk through step by step with a debugger.
(Also, both the recursive method and the one using a stack assume there is no cycle and don't test for that; the assumption is that this is a tree, not an arbitrary graph. For the latter, DFS introduces a visited flag to mark nodes already seen.)
I'm pretty sure your example corresponds to "depth-first search", because the nodes on which you "do something" increase in depth before breadth.