There are many classes that represent Umbraco documents:
1) umbraco.cms.businesslogic.Content
2) umbraco.cms.businesslogic.web.Document
3) umbraco.MacroEngines.DynamicNode
4) umbraco.presentation.nodeFactory.Node
Are there any others?
Can you explain what they do, and when to use them?
umbraco.MacroEngines.DynamicNode and umbraco.presentation.nodeFactory.Node seem the same. Perhaps it is better to use the Node class because it is faster?
I have a theory:
umbraco.cms.businesslogic.Content and umbraco.cms.businesslogic.web.Document are representations of the cmsContent and cmsDocument DB tables.
umbraco.presentation.nodeFactory.Node and umbraco.MacroEngines.DynamicNode represent the node cached in the XML file, for use on the website.
The first is simply the Node; the second is the same Node with added dynamic properties, one for each property defined in the nodeType.
So I think that Node is faster than DynamicNode.
Can someone confirm this?
Based on personal use:
Content: Never use it directly; rather, use the Document|Media|Member APIs (which inherit from this class).
Document: Use it for Create|Update|Delete operations. It performs all of its operations directly against the DB, so it should be used for reading only when you need values straight from the database.
Node: Use this the most: when Reading|Displaying data through usercontrols, code libraries, xslt extensions, etc.
DynamicNode: Razor macros. I have not yet used this one enough to provide more info.
See below for more detail, but no, Node and DynamicNode are not the same (DynamicNode uses Examine and will also fall back to reading from the DB if needed).
umbraco.cms.businesslogic.Content:
Content is an intermediate layer between CMSNode and classes which will use generic data. Content is a data structure that holds generic data defined in its corresponding ContentType. Content can in some sense be compared to a row in a database table: its ContentType holds a definition of the columns, and the Content contains the data. Note that Content data in Umbraco is not tabular but in a tree structure.
I have never had the need to use this class directly though, as all of its operations are handled by the corresponding subclasses, e.g. Document, Media, Member. This class in turn inherits from CMSNode, which is the base class for every piece of content data inside Umbraco.
umbraco.cms.businesslogic.web.Document: Document represents a webpage; published Documents are exposed to the runtime/the public website in a cached XML document.
Use this class when referencing nodes from your "Content Section". It handles CRUD operations. Through this class you also get a reference to the DataType of each property in case you want to render those controls in an aspx page.
umbraco.NodeFactory.Node: It implements the INode interface, which exposes read-only methods. All of its information comes from the Umbraco cached XML. You will not get access to the controls of each property, but rather the value of each, formatted depending on the datatype.
You can only use this class for reading operations. It makes it really fast to show data since everything comes from cache (published nodes only).
umbraco.MacroEngines.DynamicNode: It was implemented to work with Razor macros. It uses NodeFactory under the hood, which means it also accesses the cached XML. However, if you use the related DynamicMedia, be careful, because (1) it uses the ExamineIndex, which strips out any HTML tags, and (2) it falls back to its default Media type (the DB, if it isn't in the runtime cache) in umbraco_v4.11.5.
Same as the above.
I just know the difference between Document and Node.
The Node class uses the data stored in the Umbraco cache; the Document class will get data directly from the database.
Node is faster than Document.
Node only returns the content that is saved and published.
95% of time you should use Node.
Content allows you to retrieve/edit any content (page/media/...) from the DB (including non-published content); Document allows you to retrieve/edit only page content from the DB (including non-published content); Node is used for fast read-only access to (published-only) page content from the XML cache; and DynamicNode is comparable to Node but implemented in later versions of Umbraco for macros using Razor.
Related
I'm trying to write some logic which applies to pages of any of a group of document types in Umbraco.
I'd like to say 'is this document's type, or any of its parent document types, equal to a certain type', but I can't work out how to even get to the document type structure.
I tried using 'HasProperty' with a property defined on the parent to achieve the same effect (because the properties are inherited), but if the property doesn't have a value, then HasProperty returns false (which seems broken to me, but apparently is just how it works).
Using Razor in Umbraco v6.1.5 (Assembly version: 1.0.4993.19246).
(For now I'll have to check if the NodeTypeAlias is in a big list I suppose).
You can't retrieve the DocumentType hierarchy from the published cache, i.e. from the Node or IPublishedContent objects.
However, what you could do is:
1) On application start, generate a Collection<DocumentType> object of all the content (not media or member) DocumentType objects and cache it. I certainly wouldn't do this from the UI, as querying the DocumentType will cause calls to the database, so caching and reusing the objects is much better.
2) Possibly add the same code to the AfterNew event of a DocumentType. This way new DocumentType objects are also added to the cached list.
3) Create an extension method like the existing IsDocumentType(string documentTypeAlias) which queries the cached list.
The following code will retrieve the parent structure, you just need to add a while(docType.MasterContentType != null) construct:
DocumentType docType = DocumentType.GetByAlias("ContentPage");
DocumentType parentDocType = new DocumentType(docType.MasterContentType);
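A minimal sketch of the full walk (assuming MasterContentType holds the parent doc type's ID and is 0 when there is no parent - check this against your Umbraco version):

using System.Collections.Generic;
using umbraco.cms.businesslogic.web;

// Collect the alias of a doc type and all of its ancestors.
var aliases = new List<string>();
DocumentType docType = DocumentType.GetByAlias("ContentPage");
while (docType != null)
{
    aliases.Add(docType.Alias);
    // Assumption: MasterContentType is the parent's ID, 0 when there is none.
    docType = docType.MasterContentType > 0
        ? new DocumentType(docType.MasterContentType)
        : null;
}
// aliases now holds the doc type hierarchy, most-derived first.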
Edit:
If you want to go down the route of having a property on the IPublishedContent or Node instance, then I would adopt a similar approach.
Have all doc types inherit from a single doc type which has a label property called parentDocTypeAliases.
On the Newing event of a Document, add the aliases of the parent doc types as a comma-delimited list to the parentDocTypeAliases property. You'll still need the code listed above to query the doc type hierarchy.
Create a new extension method (e.g. InheritsFromDocumentType(string documentTypeAlias)) that queries the csv value.
You can do 'is this document's type, or any of its parent document types, equal to a certain type' with the following:
node.AncestorsOrSelf().FirstOrDefault(x => x.DocumentTypeAlias == "your doctype alias");
This will return an IPublishedContent object of the first node it encounters of type "your doctype alias".
Source: http://our.umbraco.org/documentation/Reference/Mvc/querying
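If you need this check in several places, you could wrap it in an extension method - a sketch, with the name InheritsFromDocumentType chosen to match the suggestion above (the usings assume Umbraco v6 MVC):

using System.Linq;
using Umbraco.Core.Models;
using Umbraco.Web;

public static class PublishedContentExtensions
{
    // True if this node, or any of its ancestors, has the given doc type alias.
    public static bool InheritsFromDocumentType(this IPublishedContent node, string documentTypeAlias)
    {
        return node.AncestorsOrSelf().Any(x => x.DocumentTypeAlias == documentTypeAlias);
    }
}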
Imagine there is a tree structure of data. I load this data upon program start. Each node in the tree has several properties. Now I want to extend the data for each tree node with a plugin, that is maybe loaded, maybe not.
My question is, how could I load and save extended data for objects? Should I save all the data in one place or different places (e.g., one xml file vs. two)?
Edit
I think it would be possible to use a dictionary for additional data (i.e., var data = node.Data["pluginA"]).
The data itself might be serialized with a BinaryFormatter or XmlSerializer.
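A minimal sketch of that dictionary idea (the Node class shape and the plugin key are hypothetical):

using System.Collections.Generic;

public class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();

    // Extension data keyed by plugin identifier; each plugin owns its own entry.
    public Dictionary<string, object> Data = new Dictionary<string, object>();
}

// A plugin reads and writes only its own slot:
// node.Data["pluginA"] = pluginAState;
// var data = node.Data["pluginA"];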
SOME CONTEXT
One of my projects requires carrying around some "metadata" (yes, I hate using that word).
What the metadata specifically consists of is not important, only that it's more complex than a simple "table" or "list" - you could think of it as a mini-database of information.
Currently I have this metadata stored in an XML file and have an XSD that defines the schema.
I want to package this metadata with my project; currently that means keeping the XML file as a resource.
However, I have been looking for a more strongly-typed alternative. I am considering moving it from an XML file to C# code - so instead of using XML APIs to traverse my metadata, I would rely on .NET code via reflection on types.
Besides the strong(er) typing, some useful characteristics I see from using an assembly for this are: (1) I can refactor the "schema" to some extent with tools like ReSharper, (2) the metadata can come with code, (3) I don't have to rely on any external DB component.
THE QUESTIONS
If you have tried something like this, I am curious about what you learned.
Was your experience positive?
What did you learn?
What problems in this approach did you uncover?
What are some considerations I should take into account?
Would you do this again?
NOTES
Am not asking for how to use Reflection - no help is needed there
Am fundamentally asking about your experiences and design considerations
UPDATE: INFORMATION ABOUT THE METADATA
Because people are asking I'll try describing the metadata a bit more. I'm trying to abstract a bit - so this will seem a bit artificial.
There are three entities in the model:
A set of "groups" - each group has a unique name and several properites (usually int values that represent ID numbers of some kind)
Each "group" contains 1 or more "widgets" (never more than 50) - each item has properties like name (therea are multiple names), IDs, and various boolean properties.
Each widget contains a one or more "scenarios". Each "scenario" is documentation- a URL to a description of how to use the widget.
Typically I need to run these kinds of "queries":
Get the names of all the widgets
Get the names of all groups that contain at least one widget where BoolProp1=true
Given the ID of a widget, get the group that contains that widget
How I was thinking about modelling the entities in the assembly
There are 3 classes: Group, Widget, Documentation
There are 25 Groups, so I will have 25 Group classes - "FooGroup" will derive from Group, and the same pattern follows for widgets and documentation
Each class will have attributes to account for names, ids, etc.
I have used and extended Metadata for a large part of my projects, many of them related to describing components, relationships among them, mappings, etc.
(Major categories of using attributes extensively include O/R Mappers, Dependency Injection framework, and Serialization description - specially XML Serialization)
Well, I'm going to ask you to describe a little bit more about the nature of the data you want to embed as a resource. Attributes are naturally good for the type of data that describes your types and type elements, but each usage of attributes is a simple and short one. Attributes (I think) should be very cohesive and somehow independent from each other.
One of the solutions that I want to point you at is the "XML Serialization" approach. You can keep your current XMLs, put them into your assemblies as an Embedded Resource (which is probably what you've done already), and read the whole XML at once into a strongly-typed hierarchy of objects.
XML Serialization is very very simple to use, much simpler than the typical XML API or even LINQ2XML, in my opinion. It uses Attributes to map class properties to XML elements and XML attributes. Once you've loaded the XML into the objects, you have everything you want in the memory as "typed" data.
Based on what I understand from your description, I think you have a lot of data to be placed on a single class. This means a large and (in my opinion) ugly block of attribute code above the class. (Unless you can distribute your data among members, making each of them small and independent, which is nice.)
I have had many positive experiences using XML Serialization for large amounts of data. You can arrange data as you want, you get type safety, you get IntelliSense (if you give your XSD to Visual Studio), and you also get half of the refactoring story: ReSharper (or any other refactoring tool that I know of) doesn't recognize XML Serialization, so when you refactor your typed classes it doesn't change the XML itself, but it does change all the usages of the data.
If you give me more details on what your data is, I might be able to add something to my answer.
For XML Serialization samples, just Google "XML Serialization" or look it up in MSDN.
UPDATE
I strongly recommend NOT using classes to represent instances of your data; using a class to encapsulate an instance of data goes against its logical definition.
I guess your best bet would be XML Serialization, provided that you already have your data in XML. You get all the benefits you want, with less code. And you can perform any query on the XML Serializable objects using LINQ2Objects.
A part of your code can look like the following:
[XmlRoot]
public class MyMetadata
{
    [XmlElement]
    public Group[] Groups { get; set; }
}

public class Group
{
    [XmlAttribute]
    public string Name { get; set; }

    [XmlAttribute]
    public int SomeNumber { get; set; }

    [XmlElement]
    public Widget[] Widgets { get; set; }
}

public class Widget
{
    ...
}
You should call new XmlSerializer(typeof(MyMetadata)) to create a serializer, and call its Deserialize method giving it the stream of your XML, and you get a filled instance of MyMetadata class.
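A minimal sketch of that load step, assuming the XML is kept as an embedded resource (the resource name "MyAssembly.Metadata.xml" and the MetadataStore class are hypothetical):

using System.IO;
using System.Reflection;
using System.Xml.Serialization;

public static class MetadataStore
{
    public static MyMetadata Load()
    {
        var serializer = new XmlSerializer(typeof(MyMetadata));
        using (Stream stream = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream("MyAssembly.Metadata.xml"))
        {
            return (MyMetadata)serializer.Deserialize(stream);
        }
    }
}

// Typed LINQ2Objects queries then work directly, e.g.:
// var bigGroups = MetadataStore.Load().Groups.Where(g => g.Widgets.Length > 10);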
It's not clear from your description but it sounds like you have assembly-level metadata that you want to be able to access (as opposed to type-level). You could have a single class in each assembly that implements a common interface, then use reflection to hunt down that class and instantiate it. Then you can hard-code the metadata within.
The problems of course are the benefits that you lose from the XML -- namely that you can't modify the metadata without a new build. But if you're going this direction you probably have already taken that into account.
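A sketch of that reflection hunt (the IAssemblyMetadata interface and MetadataLoader class are hypothetical names):

using System;
using System.Linq;
using System.Reflection;

public interface IAssemblyMetadata
{
    // Hard-code whatever metadata the assembly carries behind this interface.
    string Description { get; }
}

public static class MetadataLoader
{
    // Find the single implementing class in the assembly and instantiate it.
    public static IAssemblyMetadata Load(Assembly assembly)
    {
        Type type = assembly.GetTypes()
            .FirstOrDefault(t => typeof(IAssemblyMetadata).IsAssignableFrom(t)
                                 && t.IsClass && !t.IsAbstract);
        return type == null ? null : (IAssemblyMetadata)Activator.CreateInstance(type);
    }
}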
I am currently saving a .NET (C#) usercontrol to disk as an XML file by saving each property as an element in the XML document. The file is used for later recreation of the controls at runtime. I am wondering if it is possible, or better, to save the control as a binary file. There would be many controls, so I guess it would have to have a header section describing the location and length of each saved control. Thoughts?
Brad
BTW this is a windows app
EDIT:
What I currently have in place is a public member function that uses the PropertyDescriptor class to iterate through all the properties and create an XML document from that:
PropertyDescriptorCollection pdc = TypeDescriptor.GetProperties(this);
for (int i = 0; i < pdc.Count; i++)
{
    // Each property contributes its name, type, and category to the XML.
    string name = pdc[i].Name;
    Type type = pdc[i].PropertyType;
    string category = pdc[i].Category;
}
I will look into making the class Serializable - thanks
We had to do this for a data-driven application where a user could create persistable views. We did an XML version to start but moved to using BinaryFormatter and the ISerializable interface as this allows us to control exactly what gets persisted and which constructors to use. For the controls we actually persisted the CodeCompileUnit that the designer has created, but that means you have to actually use a designer to lay them out.
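A sketch of the ISerializable pattern described above (the PersistedView class and its field are hypothetical):

using System;
using System.Runtime.Serialization;

[Serializable]
public class PersistedView : ISerializable
{
    private readonly string _name;

    public PersistedView(string name)
    {
        _name = name;
    }

    // The special constructor BinaryFormatter calls on deserialization.
    protected PersistedView(SerializationInfo info, StreamingContext context)
    {
        _name = info.GetString("name");
    }

    // Controls exactly what gets persisted.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", _name);
    }
}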
Winforms controls don't serialize especially well, and you might have a lot of difficulty getting the base classes (i.e. not your code) to play ball. Things like Color, for example, regularly prove surprisingly troublesome to serialize.
Xml would be an obvious (if somewhat predictable) choice, but you generally need to nominate subclasses ahead of time. And of course, the base classes won't be marked serializable. BinaryFormatter would avoid some of that, but as a field-based serializer you'd have problems with the "handles" etc. in the base classes, which are meaningless once serialized.
I'm not saying it can't be done - but it won't be trivial either. As a starter, you'd want to look at TypeConverter.GetProperties, and use the Converter of each to get the value as an invariant string.
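A sketch of that idea, enumerating the properties with TypeDescriptor.GetProperties and using each property's Converter for the string round-trip (control is a hypothetical instance):

using System.ComponentModel;

PropertyDescriptorCollection props = TypeDescriptor.GetProperties(control);
foreach (PropertyDescriptor prop in props)
{
    TypeConverter converter = prop.Converter;
    // Only persist values that can round-trip through an invariant string.
    if (converter != null
        && converter.CanConvertTo(typeof(string))
        && converter.CanConvertFrom(typeof(string)))
    {
        string value = converter.ConvertToInvariantString(prop.GetValue(control));
        // write prop.Name / value into the XML here
    }
}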
Just had a thought: maybe you don't need to serialize the base/sub-classes. Maybe you could write another serializer that only serializes the top tier of the class inheritance hierarchy - only serializing your classes (the ones you wrote) and maybe storing metadata for the base classes you derive from (so that you can re-map this on deserialization)? This could just be pie-in-the-sky too.
I have an application that reads a table from a database.
I issue an SQL query to get a result set, and based on a unique string value I glean from the results, I use a case/switch statement to generate certain objects (they inherit from TreeNode, BTW). These created objects get shunted into a Dictionary object to be used later.
Whilst generating these objects I use some of the values from the result set to populate values in the object via the setters.
I query the Dictionary to return a particular object type and use it to populate a treeview. However, it is not possible to populate 2 objects of the same type in a treeview from the Dictionary object (you get a runtime error - which escapes me at the moment - something to do with referencing the same object). So what I have to do is use a MemberwiseClone and implement ICloneable to get around this.
Am I doing this right? Is there a better way? I think this is causing my program to be really slow at this point. At the very least I think it's a bit clunky - any advice from people who know more than me would be greatly appreciated.
Is there a reason you are using the external dictionary? I would populate the tree directly as the data is queried.
If you do require the dictionary, you could set the .Tag property of the tree node to point to the data in your dictionary.
To add to #Brad, only populate the tree as needed. That means hooking into the expand event of the tree nodes. This is similar to how Windows Explorer functions when dealing with network shares.
There should be one TreeNode object per actual tree node in the tree - don't try to reuse them. You may either associate them with your data using the Tag property (this is the recommended method), or you can subclass TreeNode itself (this is the Java method, but it is used less in .NET).
(The use of cloning methods is usually a hint that you're either (a) doing something wrong, or (b) need to factor your domain model to separate mutable objects from immutable.)
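A sketch of the Tag approach (MyData, queryResults, and treeView are hypothetical names):

using System.Collections.Generic;
using System.Windows.Forms;

// One fresh TreeNode per tree position; the data rides along in Tag.
private void PopulateTree(IEnumerable<MyData> queryResults, TreeView treeView)
{
    foreach (MyData item in queryResults)
    {
        var node = new TreeNode(item.DisplayName) { Tag = item };
        treeView.Nodes.Add(node);
    }
}

// Later, recover the data without cloning anything:
// var data = (MyData)treeView.SelectedNode.Tag;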
Have you considered using a virtual tree view, which only loads the nodes the user actually wants to look at? I've had good success with the component from www.infralution.com