Imagine a tree structure of data that I load at program start. Each node in the tree has several properties. Now I want to extend the data of each tree node with a plugin that may or may not be loaded.
My question is: how could I load and save the extended data for these objects? Should I save all the data in one place or in different places (e.g., one XML file vs. two)?
Edit
I think it would be possible to use a dictionary for additional data (i.e., var data = node.Data["pluginA"]).
The data itself might be serialized with a BinaryFormatter or XmlSerializer.
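A minimal sketch of that idea, assuming each node exposes a dictionary keyed by plugin name (DataNode, PluginData and tree.bin are made-up names, not from any particular library). BinaryFormatter handles the dictionary directly; XmlSerializer would need the entries flattened into a serializable list first.

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class DataNode
{
    public string Name;
    public List<DataNode> Children = new List<DataNode>();
    // Extension data keyed by plugin name; the values must be [Serializable] too
    public Dictionary<string, object> PluginData = new Dictionary<string, object>();
}

// Saving the whole tree in one place, plugin data included
// (rootNode is assumed to be the root of the loaded tree)
using (var stream = File.Create("tree.bin"))
    new BinaryFormatter().Serialize(stream, rootNode);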
Related
I have a class that contains a list of instances of the same class. This can get deeply nested and is fairly dynamic. The class has other members as well, so you can have many different cases / setups with different kinds of nesting.
I am trying to save it as JSON and deserialize when I need to access data.
Serializing is easy since it successfully navigates through the nesting and generates the appropriate jsons.
I am having trouble with updating. When the session starts, I deserialize the JSON data that I have and use it as a reference to load saved settings when certain elements are created.
My problem is when I want to add a new entry to the json or make updates.
To make updates, I can recursively loop through, find the item and modify it, but adding a new item is where I am having difficulty.
Say a few levels down the nesting I have added a new item. How can I add this to the saved JSON in the appropriate spot?
I would post my code but it spans several classes and I would need to post a lot of code.
Essentially, I have a root class and then subclasses of the same type added as children to the list and other items.
How do I determine where in the nestings the new item belongs?
I can find specific items by recursively searching every branch for a unique ID and edit that item to update it, but I am not sure how to dynamically add a new item to a specific place in the nesting (see the sketch below).
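For what it's worth, a minimal sketch of the find-and-add idea, assuming each item has a unique Id and a Children list (Item, Id, Children, root, parentId and newItem are placeholder names, not your actual types):

using System.Collections.Generic;

public class Item
{
    public string Id;
    public List<Item> Children = new List<Item>();
}

// Recursively search every branch for the item with the given ID
static Item FindById(Item node, string id)
{
    if (node.Id == id) return node;
    foreach (var child in node.Children)
    {
        var found = FindById(child, id);
        if (found != null) return found;
    }
    return null;
}

// To add: locate the parent in the deserialized tree, attach the new child,
// then serialize the whole root again and overwrite the file
var parent = FindById(root, parentId);
if (parent != null) parent.Children.Add(newItem);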
When first run, I just take the settings the user sets and store it as JSON.
Next time, when it is run, I deserialize the data I have and store it.
Based on what the user selects, I will load from the above step the settings for any item he selects.
If the user adds a new item somewhere in the nesting, I need to be able to determine where, then combine the old JSON and the new JSON (override old values and add new elements if any), then serialize and write to file.
I can't just serialize the new data and store it because then it would be missing the old data if it was not created in this session.
It is kind of confusing, but hopefully I have explained it well enough.
Thanks
There are many classes that represent Umbraco documents:
1) umbraco.cms.businesslogic.Content
2) umbraco.cms.businesslogic.web.Document
3) umbraco.MacroEngines.DynamicNode
4) umbraco.presentation.nodeFactory.Node
Are there any others?
Can you explain what they do, and when to use them?
umbraco.MacroEngines.DynamicNode and umbraco.presentation.nodeFactory.Node seem to be the same. Perhaps it is better to use the Node class because it is faster?
I have a theory:
umbraco.cms.businesslogic.Content and umbraco.cms.businesslogic.web.Document are representations of the cmsContent and cmsDocument DB tables.
umbraco.presentation.nodeFactory.Node and umbraco.MacroEngines.DynamicNode represent the node cached in the XML file, for use in the website.
The first is simply the Node; the second is the same Node with dynamic properties added, one per property defined in the nodeType.
So I think that Node is faster than DynamicNode.
Is there someone that can confirm this?
Based on personal use:
Content: Never use it directly; rather, use the Document|Media|Member APIs (which inherit from this class).
Document: Use it for Create|Update|Delete operations. It performs all of its operations directly against the DB, so it should be used for Reading only when you need the values straight from the DB.
Node: Use this the most: when Reading|Displaying data through user controls, code libraries, XSLT extensions, etc.
DynamicNode: Razor macros. I have not yet used this one enough to provide more info.
See below for more detail, but no, Node and DynamicNode are not the same (DynamicNode uses Examine and will also fall back to reading from the DB if needed).
umbraco.cms.businesslogic.Content:
Content is an intermediate layer between CMSNode and classes which will use generic data. Content is a data structure that holds generic data defined in its corresponding ContentType. Content can in some sense be compared to a row in a database table: its ContentType holds a definition of the columns and the Content contains the data. Note that Content data in Umbraco is not tabular but in a tree structure.
I have never had the need to use this class directly though, as all of its operations are handled by the corresponding subclasses, e.g. Document, Media, Member. This class in turn inherits from CMSNode, which is the base class for every piece of content data inside Umbraco.
umbraco.cms.businesslogic.web.Document: Document represents a web page; published Documents are exposed to the runtime/the public website in a cached XML document.
Use this class when referencing nodes from your "Content Section". It handles CRUD operations. Through this class you also get a reference to the DataType of each property in case you want to render those controls in an aspx page.
umbraco.NodeFactory.Node: It implements the INode interface, which exposes read-only methods. All of its information comes from the Umbraco cached XML. You do not get access to the controls of each property, but rather to each property's value, formatted depending on the datatype.
You can only use this class for reading operations. It makes it really fast to show data since everything comes from cache (published nodes only).
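For context, typical read-only usage looks roughly like this (written from memory of the legacy v4 API, so treat the exact members as approximate; the node id and the "pageTitle" alias are made up):

// Read-only access to published content from the XML cache
var node = new umbraco.presentation.nodeFactory.Node(1234);
string title = node.GetProperty("pageTitle").Value;
foreach (umbraco.presentation.nodeFactory.Node child in node.Children)
{
    // child.Name, child.NiceUrl, etc. are all read from the cached XML
}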
umbraco.MacroEngines.DynamicNode: It was implemented to work with Razor macros. It uses NodeFactory under the hood, which means it also accesses the cached XML. However, if you use the related DynamicMedia, be careful: 1) it uses the Examine index, which strips out any HTML tags, and 2) it falls back to its default Media type (the DB, if it isn't in the runtime cache) in Umbraco v4.11.5.
Same as the above.
I just know the difference between Document and Node.
The Node class uses the data stored in the Umbraco cache; the Document class gets data directly from the database.
Node is faster than Document.
Node only returns the content that is saved and published.
95% of the time you should use Node.
Content allows you to retrieve/edit any content (page/media/...) from the DB, including non-published content. Document allows you to retrieve/edit only page content from the DB, including non-published content. Node is used for fast read-only access to (published only) page content from the XML cache. DynamicNode is comparable to Node but was implemented in later versions of Umbraco for macros using Razor.
I often run into the question of what is the correct way to store a list of objects as a property in a class, and how to properly serialize them to XML.
For instance, we have a TabGroup class which will contain zero or multiple Tabs.
Is it better to have a list-of-Tabs property or a list of references to Tabs, provided that Tabs are identified by unique slugs?
List<Tab>
List<string>
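For illustration, the two shapes might look like this with XmlSerializer (TabGroup, Tab and slug are the names from the question; the attributes are just one possible way to lay out the XML):

using System.Collections.Generic;
using System.Xml.Serialization;

// Option 1: the TabGroup graph owns its Tabs; one file holds everything
public class TabGroup
{
    public string Name;
    [XmlArray("Tabs"), XmlArrayItem("Tab")]
    public List<Tab> Tabs = new List<Tab>();
}

// Option 2: the TabGroup stores only slugs; each Tab is serialized to its own file
public class TabGroupWithRefs
{
    public string Name;
    [XmlArray("TabSlugs"), XmlArrayItem("Slug")]
    public List<string> TabSlugs = new List<string>();
}

public class Tab
{
    public string Slug;   // unique identifier used as the reference
    public string Title;
}

With option 1 a single XmlSerializer call persists the whole graph; with option 2 each Tab gets its own XmlSerializer call and the slugs are resolved against the Tab files on load.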
In the end it comes down to either:
1) Serializing only the whole TabGroup graph (which will contain all its Tabs and their content), or
2) Serializing TabGroups and Tabs independently, maintaining them separately and referencing the Tabs through a list of slugs in the serialized TabGroup graph.
Most notable pro of 1:
The TabGroup in its entirety is persisted in one serialized file, keeping the datastore structure simple.
Most notable con of 1:
Each time an update is made to one of the contained Tabs, the TabGroup must be updated (reserialized) too.
Most notable pro of 2:
Updating Tabs does not require reserialization of the TabGroup (at least when nothing was added or removed) since the references stay the same; only the updated Tab has to be serialized again.
Most notable con of 2 (this is the main reason why I am writing this):
Individual Tab files can be deleted in the filestore while the list of references stays the same, so errors/exceptions occur when viewing/rendering TabGroups; complex logic would have to be implemented to render something like "Tab was removed from the datastore in an unsupported way; remove it from the TabGroup as well?"
What do you suggest to tackle this problem? I will accept the answer that covers a wide array of implications. Please note that we are talking only about XML persistence here; obviously in SQL we have little room to experiment, since TabGroups and Tabs would normally be in separate tables anyway (with a one-to-many relationship between them).
Unless you have some very compelling reason why complicating the data store is a good idea, you should typically go with keeping it simple. Secondly, having read the entire post twice, I do not really understand what your question is.
I'm not quite sure what your problem is, but if you are asking whether your design should return a List<Tab> or a List<string> where each string is a link to a Tab, then I would argue for List<Tab>. You can lazy load the entire structure except for the ID (or whatever you are using for a link) if loading is an issue. Generally it is just easier to get what you are looking for directly out of an object instead of having to get a list of links and load each link individually.
Without more information specific to the actual problem, I doubt anyone would be able to help you more than that other than to give some long winded pros/cons based on assumed circumstances.
I am currently saving a .NET (C#) UserControl to disk as an XML file by saving each property as an element in the XML document. The file is used for later recreation of the controls at runtime. I am wondering if it is possible, or better, to save the control as a binary file. There would be many controls, so I guess it would have to have a header section describing the location and length of each saved control. Thoughts?
Brad
BTW, this is a Windows app.
EDIT:
What I currently have in place is a public member function that uses the PropertyDescriptor class to iterate through all the properties and create an XML document from that.
PropertyDescriptorCollection pdc = TypeDescriptor.GetProperties(this);
for (int i = 0; i < pdc.Count; i++)
{
    // Each descriptor describes one public property of the control
    string name = pdc[i].Name;
    Type type = pdc[i].PropertyType;
    string category = pdc[i].Category;
    object value = pdc[i].GetValue(this);
    // ... write name/type/value out as an XML element here
}
I will look into making the class Serializable - thanks.
We had to do this for a data-driven application where a user could create persistable views. We did an XML version to start but moved to using BinaryFormatter and the ISerializable interface, as this allows us to control exactly what gets persisted and which constructors to use. For the controls we actually persisted the CodeCompileUnit that the designer had created, but that means you have to actually use a designer to lay them out.
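As a rough illustration of the ISerializable approach mentioned above (PersistedView and its fields are invented names, not our actual classes), you implement GetObjectData plus a deserialization constructor, which gives you full control over what is written and how it is restored:

using System;
using System.Runtime.Serialization;

[Serializable]
public class PersistedView : ISerializable
{
    public string Title;
    public int SortOrder;

    public PersistedView() { }

    // Called by BinaryFormatter when serializing: decide exactly what gets persisted
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("title", Title);
        info.AddValue("sortOrder", SortOrder);
    }

    // Deserialization constructor: BinaryFormatter calls this when reading back
    protected PersistedView(SerializationInfo info, StreamingContext context)
    {
        Title = info.GetString("title");
        SortOrder = info.GetInt32("sortOrder");
    }
}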
Winforms controls don't serialize especially well, and you might have a lot of difficulty getting the base-classes (i.e. not your code) to play ball. Things like Color, for example, regularly prove surprisingly troublesome to serialize.
XML would be an obvious (if somewhat predictable) choice, but you generally need to nominate sub-classes ahead of time. And of course, the base-classes won't be marked serializable. BinaryFormatter would avoid some of that, but as a field-based serializer, you'd have problems with the "handles" etc. in the base-classes, which are meaningless once serialized.
I'm not saying it can't be done - but it won't be trivial either. As a starter, you'd want to look at TypeConverter.GetProperties, and use the Converter of each to get the value as an invariant string.
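A minimal sketch of that idea, using the static TypeDescriptor.GetProperties and each property's Converter (control here stands for whatever instance is being persisted):

using System.Collections.Generic;
using System.ComponentModel;

var values = new Dictionary<string, string>();
foreach (PropertyDescriptor prop in TypeDescriptor.GetProperties(control))
{
    object value = prop.GetValue(control);
    if (value == null) continue;
    // Culture-invariant string form; restore later with ConvertFromInvariantString
    values[prop.Name] = prop.Converter.ConvertToInvariantString(value);
}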
Just had a thought: maybe you don't need to serialize the base/sub-classes. Maybe you could write another serializer that only serializes the top tier of the class inheritance hierarchy? Only serializing the classes you wrote, and maybe storing metadata for the base classes you derive from (so that you can re-map this on deserialization)? This could just be pie-in-the-sky too.
I have an application that reads a table from a database.
I issue a SQL query to get a result set. Based on a unique string value I glean from the results, I use a case/switch statement to generate certain objects (they inherit from TreeNode, BTW). These created objects get shunted into a Dictionary object to be used later.
Whilst generating these objects I use some of the values from the result set to populate values in the object via the setters.
I query the Dictionary to return a particular object type and use it to populate a TreeView. However, it is not possible to populate the TreeView with two objects of the same type from the Dictionary (you get a runtime error - which escapes me at the moment, something to do with referencing the same object). So what I have to do is use MemberwiseClone and implement ICloneable to get around this.
Am I doing this right? Is there a better way? I think this is causing my program to be really slow at this point. At the very least I think it's a bit clunky - any advice from people who know more than me would be greatly appreciated.
Is there a reason you are using the external dictionary? I would populate the tree directly as the data is queried.
If you do require the dictionary, you could set the .Tag property of the tree node to point to the data in your dictionary.
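For example (a rough sketch; DisplayName is a made-up property of your data class, and dataDictionary stands for your existing Dictionary):

// One TreeNode per visual node; Tag points back at the underlying data object
foreach (var entry in dataDictionary)
{
    var treeNode = new TreeNode(entry.Value.DisplayName) { Tag = entry.Value };
    treeView1.Nodes.Add(treeNode);
}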
To add to #Brad, only populate the tree as needed. That means hooking into the expand event of the tree nodes. This is similar to how Windows Explorer functions when dealing with network shares.
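A common sketch of that pattern: give each unexpanded node a dummy child so the expand arrow shows, then swap in the real children in BeforeExpand (LoadChildren is a placeholder for your own loading code):

// The placeholder child makes the expand arrow appear before the real children exist
parentNode.Nodes.Add(new TreeNode("loading..."));

treeView1.BeforeExpand += (s, e) =>
{
    if (e.Node.Nodes.Count == 1 && e.Node.Nodes[0].Text == "loading...")
    {
        e.Node.Nodes.Clear();
        LoadChildren(e.Node);   // your code: query the data and add the real child nodes
    }
};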
There should be 1 TreeNode object per actual tree node in the tree - don't try to reuse the things. You may either associate them with your data using the Tag property (this is the recommended method), or you can subclass the TreeNode itself (this is the Java method, but used less in .NET).
(The use of cloning methods is usually a hint that you're either (a) doing something wrong, or (b) need to factor your domain model to separate mutable objects from immutable.)
Have you considered using a virtual tree view, which only loads the nodes the user actually wants to look at? I've had good success with the component from www.infralution.com.