Here is my problem. I have a series of photos of different parts of a building and I need to link them together. After that I need to show the photos in sequence to display a path from point A to point B, e.g. from a classroom to a fire escape.
I have done a bit of research and I believe an undirected, unweighted graph should do the trick.
As I don't have much experience in this area, I was wondering how I should store the photos in a data structure, and whether there are any libraries out there to do the job?
Yes, you need to apply an algorithm that can solve the problem for you.
You can use this great library:
QuickGraph
to solve this part of the problem.
As for storing your data, you need to define vertices (the photos) and edges between vertices, like (photo A, photo B), (photo A, photo C), and so on.
You then recover that info from the database, load the corresponding structure into QuickGraph, and let it find the path for you.
Here you have extensive docs and samples:
QuickGraph documentation
For something similar to this I used:
A MyEdge class, which implements IEdge<T> (T should be your photo ID type, int or whatever it is) - represents the edge between two photos (places)
A Graph class, which inherits AdjacencyGraph<T, MyEdge>. You load it with the available MyEdge instances (this is a directed graph)
A PathFinder algorithm class: I inherited from FloydWarshallAllShortestPathAlgorithm<T, MyEdge>
Then you have to:
Create the edges (i.e. read them from the DB)
Instance a Graph class, and add all edges to it
Use the PathFinder constructor, passing the graph as a parameter. This computes the paths.
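Putting those steps together, here is a minimal sketch (integer photo IDs, and a hard-coded edge list standing in for the database read; the exact constructor signatures may vary slightly between QuickGraph versions):
using System;
using System.Collections.Generic;
using QuickGraph;
using QuickGraph.Algorithms.ShortestPath;

class PathDemo
{
    static void Main()
    {
        // Vertices are photo IDs; each edge says "from this photo you can reach that one".
        var graph = new AdjacencyGraph<int, Edge<int>>();
        graph.AddVerticesAndEdge(new Edge<int>(1, 2)); // classroom -> hallway
        graph.AddVerticesAndEdge(new Edge<int>(2, 1)); // and the reverse route
        graph.AddVerticesAndEdge(new Edge<int>(2, 3)); // hallway -> fire escape
        graph.AddVerticesAndEdge(new Edge<int>(3, 2));

        // Unweighted: every edge gets the same cost.
        var pathFinder = new FloydWarshallAllShortestPathAlgorithm<int, Edge<int>>(graph, edge => 1.0);
        pathFinder.Compute();

        IEnumerable<Edge<int>> path;
        if (pathFinder.TryGetPath(1, 3, out path))
            foreach (var edge in path)
                Console.WriteLine("Show photo {0}, then photo {1}", edge.Source, edge.Target);
    }
}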
This algorithm lets you specify which photos you can reach from a given photo (the edges), and assumes the distance between them is similar, but you have to define all the routes yourself (from A to B, from B to A, and so on). That's the "un-weighted" part of the OP. If your case differs you'll have to read the docs.
You can use an undirected graph if you prefer, so that adding A to B also adds B to A. It can spare some lines of code, but I usually prefer adding all the possibilities myself. It's easier to think "from the Library I can go to aisle A and aisle B; from aisle B to the Library and the Laboratory", and so on, than trying to think of all edges at once.
You can create two tables in a database:
Photos (with IDs)
Path (IdFrom and IdTo)
This is easy to maintain and implement.
I have a bunch of Shape-classes (classic) like Rectangle and Circle in my first module.
In my second module, I have a GUI, made with WPF. I want to show a ListBox of all Shape-classes. The ListBox shall contain the localizable name of the shape, which is saved as a resource string, and an icon, saved as a resource image.
I want my whole code to be as modular as possible, e.g. if I add a new Shape-class, I want to change as few classes as possible.
My first approach would be to make a helper class in my GUI-module, which for each shape holds the Shape's Type, its name as string, and its icon as a Bitmap (or similar). I would then initialize the list at one place, e.g.
var shapeList = new List<ShapeHelperClass>
{
new ShapeHelperClass(typeof(Rectangle), Resources.StringRectangle, Resources.IconRectangle),
new ShapeHelperClass(typeof(Circle), Resources.StringCircle, Resources.IconCircle),
};
and bind this list to the ListBox. Now, if I rename my classes or my resources, nothing will break, and localization should work properly. But, of course, if I add a new Shape-class in the first module, I also need to update this list.
Another approach would be to use reflection to find all my Shape classes, and build the list out of that. However, I would still need some Dictionaries or something similar to map the classes to the Resources. I could find the resources if they follow a pattern, like "Icon" + "Classname". However, if no icon is found, this is only noticed at runtime.
So, my questions are:
Is my first approach a good one, or could it be improved?
How can I make sure that a programmer who adds a new Shape also adds the new Resources and extends the mapping-list? Maybe by Unit-testing?
1. Is my first approach a good one, or could it be improved?
You could create a method in your first module that returns all shapes and call it in your client application, e.g.:
var shapes = GetShapes();
List<ShapeHelperClass> helpers = new List<ShapeHelperClass>();
foreach (var shape in shapes)
    helpers.Add(...); // construct a ShapeHelperClass from each shape's metadata
Then you should never have to modify the client application when a shape is added or removed in the first module.
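On the module side, that method might look something like this (ShapeInfo and ShapeModule are hypothetical names; the string and icon resources would then have to live in, or be accessible from, the first module):
using System;
using System.Collections.Generic;

public class ShapeInfo
{
    public Type Type { get; private set; }
    public string Name { get; private set; }
    public System.Drawing.Bitmap Icon { get; private set; }

    public ShapeInfo(Type type, string name, System.Drawing.Bitmap icon)
    {
        Type = type;
        Name = name;
        Icon = icon;
    }
}

public static class ShapeModule
{
    // The one place that has to change when a shape is added or removed.
    public static IEnumerable<ShapeInfo> GetShapes()
    {
        yield return new ShapeInfo(typeof(Rectangle), Resources.StringRectangle, Resources.IconRectangle);
        yield return new ShapeInfo(typeof(Circle), Resources.StringCircle, Resources.IconCircle);
    }
}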
2. How can I make sure that a programmer who adds a new Shape also adds the new Resources and extends the mapping-list? Maybe by Unit-testing?
Maybe you could write a unit test that uses reflection to find all shape types and asserts that they are included in the list of shapes returned from the first module, and likewise for the resources. I can't think of any better automatic way of ensuring this, really.
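A rough sketch of such a test (assuming NUnit, a common Shape base class, the hypothetical ShapeModule.GetShapes() from the previous answer, and the "Icon" + class-name resource pattern from the question):
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class ShapeRegistrationTests
{
    [Test]
    public void EveryShapeIsRegisteredAndHasAnIcon()
    {
        // Find every concrete Shape subclass in the first module's assembly.
        var shapeTypes = typeof(Shape).Assembly.GetTypes()
            .Where(t => t.IsSubclassOf(typeof(Shape)) && !t.IsAbstract);

        var registered = ShapeModule.GetShapes().Select(s => s.Type).ToList();

        foreach (var type in shapeTypes)
        {
            Assert.That(registered.Contains(type),
                "Shape " + type.Name + " is missing from GetShapes()");
            Assert.IsNotNull(Resources.ResourceManager.GetObject("Icon" + type.Name),
                "No icon resource found for " + type.Name);
        }
    }
}
This moves the "only noticed at runtime" problem into the build: the test fails as soon as someone adds a Shape without the matching entry and resources.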
I am working on generating terrain for our project, something that would be contained in the Model class that I can draw. A new class would also be alright, since I may need to look inside it for specific data often, and then I would just need the basic functions to work with the Game class.
Anyway, I have a fair amount of knowledge of the XNA framework, but it handles everything in a rather convoluted way. My problem is that I can't just make a Model; I can't instantiate that class at all. I have what I believe is the proper data to form a model's geometry, which is all I need right now; later it could possibly be textured.
I don't know where to go from here.
In XNA you usually use Content.Load to have the content pipeline read in a file and parse it, but I want to avoid that because I want my terrain generated at runtime. I can compute an array of vertex data and indices for the triangles that make up a mesh, but so far my efforts to instantiate objects like Model, or the classes it contains, have failed.
If there is some factory class I can use to build one, I have no idea what it is, so if someone can point me in the right direction and give me a rough outline of how to build a model, that would help.
If that's not the answer, maybe I need to do something completely different, whether centered on using Content.Load or not. Basically I don't want my terrain sitting in a file, identical between executions; I want to control the mesh data on load, randomize it, etc.
So how can I get a model generated completely programmatically, to show up on the screen, and still have its data exposed?
Model and its associated classes (e.g. ModelMesh) are convenience classes. They are not the only way to draw models. It is expected that sometimes, particularly when doing something "special", you will have to re-implement them entirely, using the same low-level methods that Model uses.
Here's the quick version of what you should do:
First of all, at load time, create a VertexBuffer and an IndexBuffer and use SetData on each to fill each with the appropriate data.
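For example, a load-time sketch for XNA 4.0 (vertexCount, indexCount and the vertex format are assumptions; fill the arrays from your terrain generator):
var vertices = new VertexPositionNormalTexture[vertexCount];
var indices = new short[indexCount];
// ... generate your terrain into vertices and indices here ...

myVertexBuffer = new VertexBuffer(GraphicsDevice,
    typeof(VertexPositionNormalTexture), vertices.Length, BufferUsage.WriteOnly);
myVertexBuffer.SetData(vertices);

myIndexBuffer = new IndexBuffer(GraphicsDevice,
    IndexElementSize.SixteenBits, indices.Length, BufferUsage.WriteOnly);
myIndexBuffer.SetData(indices);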
Then, at draw time, do this:
GraphicsDevice.SetVertexBuffer(myVertexBuffer);
GraphicsDevice.Indices = myIndexBuffer;
// Set up your effect. Use a BasicEffect here, if you don't have something else.
myEffect.CurrentTechnique.Passes[0].Apply();
GraphicsDevice.Textures[0] = myTexture; // From Content.Load<Texture2D>("...")
GraphicsDevice.DrawIndexedPrimitives(...); // e.g. PrimitiveType.TriangleList, plus your vertex/index ranges and primitive count
I am just starting to learn about MongoDB and was wondering if I am doing something wrong... I have two objects:
public class Part
{
public Guid Id;
public IList<Material> Materials;
}
public class Material
{
public Guid MaterialId;
public Material ParentMaterial;
public IList<Material> ChildMaterials;
public string Name;
}
When I try to save this particular object graph I receive a stack overflow error because of the circular reference. My question is, is there a way around this? In WCF I am able to set the "IsReference" attribute on the DataContract to true and it serializes just fine.
What driver are you using?
In NoRM you can create a DbReference like so:
public DbReference<Material> ParentMaterial;
Mongodb-csharp does not offer strongly typed DbReferences, but you can still use them.
public DBRef ParentMaterial;
You can follow the reference with Database.FollowReference(ParentMaterial).
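For instance, a rough sketch with mongodb-csharp (the "materials" collection name, the Database type and the surrounding store class are assumptions; FollowReference is the call mentioned above):
using System;
using System.Collections.Generic;
using MongoDB.Driver; // mongodb-csharp

public class Material
{
    public Guid MaterialId;
    public DBRef ParentMaterial;        // a reference instead of the object itself,
    public IList<DBRef> ChildMaterials; // so there is no cycle to serialize
    public string Name;
}

public class MaterialStore
{
    private readonly Database database;

    public MaterialStore(Database database)
    {
        this.database = database;
    }

    public void LinkToParent(Material child, Material parent)
    {
        child.ParentMaterial = new DBRef("materials", parent.MaterialId);
    }

    public Document LoadParent(Material child)
    {
        return database.FollowReference(child.ParentMaterial);
    }
}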
Just for future reference, things like references between objects which are not embedded within a sub-document structure are handled extremely well by a NoSQL ODB, which is generally designed to deal with transparent relations in arbitrarily complex object models.
If you are familiar with Hibernate, imagine that without any mapping file at all, and with orders of magnitude faster performance because there is no runtime JOIN behind the scenes; all relations are resolved with the speed of a b-tree lookup.
Here is a video from Versant (disclosure - I work for them), so you can see how it works.
This is a little boring in the beginning, but shows every single step to take a Java application and make it persistent in an ODB... then make it fault tolerant, distributed, do some parallel queries, optimize cache load, etc...
If you want to skip to the cool part, jump about 20 minutes in; you will avoid the building of the application and just see how easy it is to dynamically evolve the schema, and add distribution and fault tolerance to any existing application :)
If you want to store object graphs with relationships between them, requiring multiple "joins" to get to the answer, you are probably better off with a SQL-style database. The document-centric approach of MongoDB and others would probably structure this rather differently.
Take a look at MongoDB nested sets which suggests some ways to represent data like this.
I was able to accomplish exactly what I needed by using a modified NoRM MongoDB driver.
Back story:
So I've been stuck on an architecture problem for the past couple of nights on a refactor I've been toying with. Nothing important, but it's been bothering me. It's actually an exercise in DRY, an attempt to take it to such an extreme that the DAL architecture is completely DRY. It's a completely philosophical/theoretical exercise.
The code is based in part on one of #JohnMacIntyre's refactorings which I recently convinced him to blog about at http://whileicompile.wordpress.com/2010/08/24/my-clean-code-experience-no-1/. I've modified the code slightly, as I tend to, in order to take the code one level further - usually, just to see what extra mileage I can get out of a concept... anyway, my reasons are largely irrelevant.
Part of my data access layer is based on the following architecture:
abstract public class AppCommandBase : IDisposable { }
This contains basic stuff, like creation of a command object and cleanup after the AppCommand is disposed of. All of my command base objects derive from this.
abstract public class ReadCommandBase<T, ResultT> : AppCommandBase { }
This contains basic stuff that affects all read-commands - specifically in this case, reading data from tables and views. No editing, no updating, no saving.
abstract public class ReadItemCommandBase<T, FilterT> : ReadCommandBase<T, T> { }
This contains some more basic generic stuff - like the definition of methods that are required to read a single item from a table in the database, where the table name, key field name and field list names are defined as required abstract properties (to be defined by the derived class).
public class MyTableReadItemCommand : ReadItemCommandBase<MyTableClass, int?> { }
This contains specific properties that define my table name, the list of fields from the table or view, the name of the key field, a method to parse the data out of the IDataReader row into my business object and a method that initiates the whole process.
Now, I also have this structure for my ReadList...
abstract public class ReadListCommandBase<T> : ReadCommandBase<T, IEnumerable<T>> { }
public class MyTableReadListCommand : ReadListCommandBase<MyTableClass> { }
The difference being that the List classes contain properties that pertain to list generation (i.e. PageStart, PageSize, Sort) and return an IEnumerable, vs. the return of a single data object (which just requires a filter that identifies a unique record).
Problem:
I'm hating that I've got a bunch of properties in my MyTableReadListCommand class that are identical in my MyTableReadItemCommand class. I've thought about moving them to a helper class, but while that may centralize the member contents in one place, I'll still have identical members in each of the classes that simply point to the helper class, which I still dislike.
My first thought was that dual inheritance would solve this nicely, even though I agree that dual inheritance is usually a code smell; still, it would solve this issue very elegantly. So, given that .NET doesn't support dual inheritance, where do I go from here?
Perhaps a different refactor would be more suitable... but I'm having trouble wrapping my head around how to sidestep this problem.
If anyone needs a full code base to see what I'm harping on about, I've got a prototype solution on my DropBox at http://dl.dropbox.com/u/3029830/Prototypes/Prototype%20-%20DAL%20Refactor.zip. The code in question is in the DataAccessLayer project.
P.S. This isn't part of an ongoing active project, it's more a refactor puzzle for my own amusement.
Thanks in advance folks, I appreciate it.
Separate the result processing from the data retrieval. Your inheritance hierarchy is already more than deep enough at ReadCommandBase.
Define an interface IDatabaseResultParser. Implement ItemDatabaseResultParser and ListDatabaseResultParser, both with a constructor parameter of type ReadCommandBase (and maybe convert that to an interface too).
When you call IDatabaseResultParser.Value() it executes the command, parses the results and returns a result of type T.
Your commands focus on retrieving the data from the database and returning it as tuples of some description (actual Tuples or an array of arrays, etc.); your parser focuses on converting the tuples into objects of whatever type you need. See NHibernate's IResultTransformer for an idea of how this can work (and it's probably a better name than IDatabaseResultParser too).
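A rough sketch of that split; to keep it self-contained, the command is reduced here to a row-producing delegate rather than the ReadCommandBase from the question, and all names are illustrative:
using System;
using System.Collections.Generic;
using System.Linq;

public interface IDatabaseResultParser<TResult>
{
    TResult Value();
}

public class ItemResultParser<T> : IDatabaseResultParser<T>
{
    private readonly Func<IEnumerable<object[]>> executeCommand;
    private readonly Func<object[], T> mapRow;

    public ItemResultParser(Func<IEnumerable<object[]>> executeCommand, Func<object[], T> mapRow)
    {
        this.executeCommand = executeCommand;
        this.mapRow = mapRow;
    }

    public T Value()
    {
        // Execute once, expect exactly one row, map it to the business object.
        return mapRow(executeCommand().Single());
    }
}

public class ListResultParser<T> : IDatabaseResultParser<IEnumerable<T>>
{
    private readonly Func<IEnumerable<object[]>> executeCommand;
    private readonly Func<object[], T> mapRow;

    public ListResultParser(Func<IEnumerable<object[]>> executeCommand, Func<object[], T> mapRow)
    {
        this.executeCommand = executeCommand;
        this.mapRow = mapRow;
    }

    public IEnumerable<T> Value()
    {
        // Same command, same row mapping; only the aggregation differs.
        return executeCommand().Select(mapRow).ToList();
    }
}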
Favor composition over inheritance.
Having looked at the sample I'll go even further...
Throw away AppCommandBase - it adds no value to your inheritance hierarchy, as all it does is check that the connection is not null and open, and create a command.
Separate query building from query execution and result parsing - now you can greatly simplify the query execution implementation as it is either a read operation that returns an enumeration of tuples or a write operation that returns the number of rows affected.
Your query builder could all be wrapped up in one class to include paging / sorting / filtering; however, it may be easier to build some form of limited structure around these so you can separate paging, sorting and filtering. If I were doing this I wouldn't bother building the queries; I would simply write the SQL inside an object that allowed me to pass in some parameters (effectively stored procedures in C#), as sketched below.
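A rough sketch of that "SQL inside an object" idea (the table and names are illustrative):
using System.Data;

// Effectively a stored procedure written in C#: the SQL and its parameters
// travel together, and the object knows how to attach itself to a command.
public class GetItemByIdQuery
{
    private const string Sql = "SELECT * FROM MyTable WHERE Id = @id";

    public int Id { get; set; }

    public IDbCommand CreateCommand(IDbConnection connection)
    {
        var command = connection.CreateCommand();
        command.CommandText = Sql;

        var parameter = command.CreateParameter();
        parameter.ParameterName = "@id";
        parameter.Value = Id;
        command.Parameters.Add(parameter);

        return command;
    }
}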
So now you have IDatabaseQuery / IDatabaseCommand / IResultTransformer and almost no inheritance =)
I think the short answer is that, in a system where multiple inheritance has been outlawed "for your protection", strategy/delegation is the direct substitute. Yes, you still end up with some parallel structure, such as the property for the delegate object. But it is minimized as much as possible within the confines of the language.
But let's step back from the simple answer and take a wider view...
Another big alternative is to refactor the larger design structure such that you inherently avoid this situation where a given class consists of the composite of behaviors of multiple "sibling" or "cousin" classes above it in the inheritance tree. To put it more concisely, refactor to an inheritance chain rather than an inheritance tree. This is easier said than done. It usually requires abstracting very different pieces of functionality.
The challenge you'll have in taking the tack I'm recommending is that you've already made a concession in your design: you're optimizing for different SQL in the "item" and "list" cases. Preserving this as-is will get in your way no matter what, because you've given them equal billing, so they must by necessity be siblings. So I would say that your first step in trying to get out of this "local maximum" of design elegance should be to roll back that optimization and treat the single item as what it truly is: a special case of a list, with just one element. You can always try to re-introduce an optimization for single items later. But wait till you've addressed the elegance issue that is vexing you at the moment.
But you have to acknowledge that any optimization for anything other than the elegance of your C# code is going to put a roadblock in the way of design elegance for the C# code. This trade-off, just like the time-space trade-off of algorithm design, is fundamental to the very nature of programming.
As Kirk mentions, this is the delegation pattern. When I do this, I usually construct an interface that is implemented by both the delegator and the delegated class. This reduces the perceived code smell, at least for me.
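For example, a minimal sketch with illustrative names (base classes and the Sort member omitted for brevity):
public interface IPagingInfo
{
    int PageStart { get; set; }
    int PageSize { get; set; }
}

// The delegated class: the shared members live here, exactly once.
public class PagingInfo : IPagingInfo
{
    public int PageStart { get; set; }
    public int PageSize { get; set; }
}

// The delegator: implements the same interface by forwarding to the shared class.
public class MyTableReadListCommand : IPagingInfo
{
    private readonly PagingInfo paging = new PagingInfo();

    public int PageStart
    {
        get { return paging.PageStart; }
        set { paging.PageStart = value; }
    }

    public int PageSize
    {
        get { return paging.PageSize; }
        set { paging.PageSize = value; }
    }
}
The duplication shrinks to thin pass-through properties, while the actual member contents live in exactly one place.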
I think the simple answer is... since .NET doesn't support multiple inheritance, there is always going to be some repetition when creating objects of a similar type. .NET simply does not give you the tools to re-use some classes in a way that would facilitate perfect DRY.
The not-so-simple answer is that you could use code generation tools, instrumentation, CodeDOM, and other techniques to inject the objects you want into the classes you want. It still creates duplication in memory, but it would simplify the source code (at the cost of added complexity in your code injection framework).
This may seem unsatisfying like the other solutions, however if you think about it, that's really what languages that support MI are doing behind the scenes, hooking up delegation systems that you can't see in source code.
The question comes down to: how much effort are you willing to put into making your source code simple? Think about that; it's rather profound.
I haven't looked deeply at your scenario, but I have some thoughts on the dual-hierarchy problem in C#. To share code in a dual hierarchy, we need a different construct in the language: either a mixin, a trait (pdf) (C# research - pdf) or a role (as in Perl 6). C# makes it very easy to share code with inheritance (which is not the right operator for code reuse), and very laborious to share code via composition (you know, you have to write all that delegation code by hand).
There are ways to get a kind of mixin in C#, but it's not ideal.
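For what it's worth, one common approximation is to keep the state behind an interface and attach the shared behavior as extension methods (a sketch; names are illustrative):
public interface IPageable
{
    int PageStart { get; set; }
    int PageSize { get; set; }
}

public static class PageableMixin
{
    // Behavior written once, available on every IPageable implementer.
    public static void NextPage(this IPageable pageable)
    {
        pageable.PageStart += pageable.PageSize;
    }
}
Each class still has to declare the properties itself, which is exactly the not-ideal part: extension methods can share behavior, but not state.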
The Oxygene (download) language (an Object Pascal for .NET) also has an interesting feature for interface delegation that can be used to create all that delegating code for you.
Small design question here. I'm trying to develop a calculation app in C#. I have a class, let's call it InputRecord, which holds hundreds of fields (multi-dimensional arrays). This InputRecord class will be used in a number of CalculationEngines. Each CalculationEngine can make changes to a number of fields in the InputRecord. These changes are steps needed for its calculation.
Now I don't want the local changes made to the InputRecord to be visible to the other CalculationEngine classes.
The first solution that comes to mind is using a struct: structs are value types. However, I'd like to use inheritance: each CalculationEngine needs a few fields only relevant to that engine, so it has its own InputRecord, based on a BaseInputRecord.
Can anyone point me to a design that will help me accomplish this?
If you really have a lot of data, using structs or common cloning techniques may not be very space-efficient (i.e. they would use a lot of memory).
It sounds like a design where you need a "master store" and a "diff store", analogous to an RDBMS, where you have data files and transactions.
Basically, you need to keep a list of the changes performed per calculation engine, and use the master values for items which aren't affected by any changes.
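A minimal sketch of that idea, assuming (for illustration only) that the fields can be addressed by index:
using System.Collections.Generic;

// One shared master record, with per-engine overrides kept in a local diff.
public class InputRecordView
{
    private readonly double[] master;                 // shared, never modified
    private readonly Dictionary<int, double> changes  // this engine's local diff
        = new Dictionary<int, double>();

    public InputRecordView(double[] master)
    {
        this.master = master;
    }

    public double this[int index]
    {
        get
        {
            double changed;
            return changes.TryGetValue(index, out changed) ? changed : master[index];
        }
        set { changes[index] = value; }
    }
}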
The elegant solution would be to not change the InputRecord at all. That would allow sharing (and parallel processing).
If that is not an option you will have to clone the data. Give each derived class a constructor that takes the base input record as a parameter.
You can declare a Clone() method on your BaseInputRecord, then pass a copy to each CalculationEngine.
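A sketch of that approach (the Rates field is illustrative); each engine-specific derived class would override Clone() to copy its own fields as well:
public class BaseInputRecord
{
    public double[,] Rates;

    public virtual BaseInputRecord Clone()
    {
        var copy = (BaseInputRecord)MemberwiseClone();
        // Deep-copy the arrays so one engine's changes can't leak into another's.
        copy.Rates = (double[,])Rates.Clone();
        return copy;
    }
}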