I am currently implementing the Level content type, which is supposed to represent a game level's contents (most importantly, references to textures that are stored separately). The input level data contains the full paths to the source texture resources.
Problem: How to determine the resulting content "names" that I can write out into the compiled level content? The textures are supposed to be reused, so baking them into the level content would be a bad idea (waste of space). It is impossible to provide the names during content creation (e.g. within the level editor).
To give an example: Source file Levels/Level01.level refers to Textures/Granite.png and Textures/Dirt.png using their full path names. I would like to infer Textures/Granite and Textures/Dirt from that data at compile time.
The correct solution was to use the (not that well documented) ExternalReference<T> class instead of plain string names.
On the writer's side, the level content object contains an ExternalReference<TextureContent> (or even a List<ExternalReference<TextureContent>>) instance. When serialized (via ContentWriter), this takes care of adjusting the reference properly.
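For illustration, a minimal sketch of both sides of that setup; LevelContent, its TexturePaths/Textures members and the "TextureProcessor" name are assumptions made up for this example:

// In the content processor: turn each full source path into a reference to a separately built asset.
foreach (string texturePath in input.TexturePaths)
{
    levelContent.Textures.Add(
        context.BuildAsset<TextureContent, TextureContent>(
            new ExternalReference<TextureContent>(texturePath), "TextureProcessor"));
}

// In the ContentTypeWriter: WriteObject stores the external references as content names,
// not as embedded texture data, so the textures stay shared between levels.
protected override void Write(ContentWriter output, LevelContent value)
{
    output.WriteObject(value.Textures);   // List<ExternalReference<TextureContent>>
}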
The reader's side is simplified to a plain ContentReader.ReadObject<> call, e.g.
public sealed class LocationReader : ContentTypeReader<Location>
{
    protected override Location Read(ContentReader input, Location existingInstance)
    {
        // ReadObject resolves each external reference and loads the shared Texture2D assets.
        List<Texture2D> textures = input.ReadObject<List<Texture2D>>();
        return new Location(textures); // assuming Location exposes such a constructor
    }
}
NOTE: Take care to properly override the GetRuntimeType() method in your writer classes; this was the source of oddball ReflectionReader<T> issues for me.
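For completeness, a hedged sketch of those writer overrides (LevelContent, Location and LocationReader match the snippets above; if the pipeline project cannot reference the runtime assembly, return the assembly-qualified names as literal strings instead):

[ContentTypeWriter]
public sealed class LocationWriter : ContentTypeWriter<LevelContent>
{
    protected override void Write(ContentWriter output, LevelContent value)
    {
        output.WriteObject(value.Textures);
    }

    // Must name the runtime type; otherwise the default ReflectionReader<T> gets picked up.
    public override string GetRuntimeType(TargetPlatform targetPlatform)
    {
        return typeof(Location).AssemblyQualifiedName;
    }

    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        return typeof(LocationReader).AssemblyQualifiedName;
    }
}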
I would like to determine which source file defines a specific type using Mono.Cecil.
For methods, I can use the SequencePoint collection (for example, I could grab the first SequencePoint and fetch the Url of its Document). I'm assuming this will always work as long as the PDBs are loaded, since even empty methods should have at least one instruction (nop):
if (methodDefinition.DebugInformation.HasSequencePoints) {
    var firstSequencePoint = methodDefinition.DebugInformation.SequencePoints[0];
    return firstSequencePoint.Document.Url;
}
However, for types, I am not sure how this would work. Do the PDB files even contain the mapping between a type and a document? Obviously a type can be defined across multiple documents (in case of partial classes, for instance) which is fine - but is this information actually available? If yes, is it exposed in Mono.Cecil? Mono.Cecil.Cil.PortablePdbReader does read CustomDebugInformation for a module but I don't think this is it (I looked at the raw data and it doesn't contain anything of interest).
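I don't know of a documented type-to-document mapping, but one workaround sketch (an assumption, not a confirmed Mono.Cecil feature) is to aggregate the documents of a type's methods; it will miss types whose methods have no bodies:

// Requires reading symbols: new ReaderParameters { ReadSymbols = true } when loading the module.
static IEnumerable<string> GetDocumentsForType(TypeDefinition type)
{
    return type.Methods
        .Where(m => m.DebugInformation != null && m.DebugInformation.HasSequencePoints)
        .SelectMany(m => m.DebugInformation.SequencePoints)
        .Select(sp => sp.Document.Url)
        .Distinct();
}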
I am writing a generic way to convert from database objects to business objects.
My business objects contain custom attributes, and depending on those attributes I would like to perform specific operations on the properties.
When reading from the db it is quite easy, because I can use AfterMap (not a perfect solution, since I have to do it via reflection and set the value depending on the attribute).
But when writing back to the database I would have to do it in BeforeMap, and that would change the source permanently, while I only want it done transiently: perform the operation on the source values on the fly, but do not modify the source object.
It is a generic solution, so I cannot work with concrete properties.
protected static T MapFromDatabaseWithConversion<T, TSource>(TSource source) where T : MappingModel, new()
{
    var config = new MapperConfiguration(cfg => cfg.CreateMap<TSource, T>().AfterMap((src, dest) => dest.ConvertFromDatabase()));
    return config.CreateMapper().Map<T>(source);
}
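For comparison, the write direction described above would look roughly like this (only a sketch: ConvertToDatabase is a hypothetical counterpart of ConvertFromDatabase, and the BeforeMap call is exactly what mutates the original source object):

protected static TDb MapToDatabaseWithConversion<TDb, TSource>(TSource source)
    where TSource : MappingModel
    where TDb : new()
{
    var config = new MapperConfiguration(cfg =>
        cfg.CreateMap<TSource, TDb>()
           // BeforeMap runs against the original source instance, so the change is permanent.
           .BeforeMap((src, dest) => src.ConvertToDatabase()));
    return config.CreateMapper().Map<TDb>(source);
}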
Do you have any solution for checking a property's attribute on the fly and changing the value depending on it - or any idea how to apply the operation to the source only transiently, so the result of the operation is not written back to the source object?
Thank you very much.
I think you have to include value tracking in your objects. For each class member you would need a boolean to reflect whether the value changed, and a method that checks them all at once such as isObjectChanged(). You can hard-code this or wrap your object in a Proxy object at runtime, which is more complicated but does not clutter your class with value-tracking data/methods. On the other hand, Java Data Objects (https://db.apache.org/jdo/) can do this for you by re-compiling your class files to include value tracking within the class. It takes a bit to set up and may be overkill for your specific question, but I have used it many times when targeting multiple data sources in the same project, such as a database or a spreadsheet. JDO allows me to use the same code with a different data type manager that can be swapped at runtime. You can also target a No-SQL database and other data stores as well.
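A minimal hand-rolled sketch of that idea in C# (the isObjectChanged name comes from the suggestion above; only one member is tracked here for brevity):

public class TrackedModel
{
    private string _name;
    private bool _nameChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                _nameChanged = true;   // remember that this member was modified
            }
        }
    }

    // OR together one flag per tracked member.
    public bool isObjectChanged() { return _nameChanged; }
}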
I have an application and I'm using MEF to compose it. I want to know if it is possible to update the Metadata information of the parts after they were imported.
The reason to do this is the following: I display the imported parts' names and an int property in a ListBox, and they are not loaded until the corresponding ListBoxItem is selected (pretty standard). Now I want to update the Metadata info of one part when some event is raised, so the displayed info in the ListBox is something like "[Part name] ([new number])".
I'm importing the metadata as an interface that defines its info, but when I make the int property editable (with a set accessor) I receive the following exception at composition time:
"The MetadataView 'myMetadataInterface' is invalid
because property 'myInt' has a property set method."
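For illustration, the shape that triggers that exception looks roughly like this (the interface and property names are taken from the error message, PartName is assumed from the ListBox description; the setter is what MEF rejects):

public interface myMetadataInterface
{
    // Read-only properties are fine in a metadata view...
    string PartName { get; }

    // ...but a set accessor here makes the view invalid at composition time.
    int myInt { get; set; }
}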
Is there ANY way to achieve this? Or is the metadata ALWAYS read only once the part is created?
I know this question looks weird, but it doesn't make it any less difficult and therefore interesting ;-)
EDIT (based on Lee's answer, in order to keep people focused on the core of the question)
I just want to know if it is possible to update a Metadata property after the part is composed, but before it is actually loaded (HasValue == false). Don't worry about filtering or finding the part.
I added a property to the export interface which is meant only to be displayed in the UI and to be updated; this property has no other function and the parts are not filtered by it.
Thanks
Metadata filtering and DefaultValueAttribute
When you specify a metadata view, an implicit filtering will occur to match only those exports which contain the metadata properties defined in the view. You can specify on the metadata view that a property is not required by using the System.ComponentModel.DefaultValueAttribute. Below you can see where we have specified a default value of false on IsSecure. This means if a part exports IMessageSender, but does not supply IsSecure metadata, then it will still be matched.
citation
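A short sketch of the metadata view that the quoted passage describes (IMessageSender and IsSecure are the documentation's example names; the interface name here is made up):

using System.ComponentModel;

public interface IMessageSenderMetadata
{
    // Not required: parts exporting IMessageSender without IsSecure metadata still match.
    [DefaultValue(false)]
    bool IsSecure { get; }
}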
Short Version (EDITED in after question edit).
You shouldn't ever need to update metadata at runtime. If you have some data that should be updated and belongs to a MEF part, you need to choose either to have it updated by recompiling, or to store that data in flexible storage outside of the dll. There's no way to store the change you made in the dll without recompiling, so this is a flawed design.
Previous post.
Altering values on the view would be lying about the components loaded. Sure, the metadata is just an interface to an object that returns initialized values; sure, you can technically update those values, but that's not the purpose of metadata.
You wouldn't be changing the Name field of an instance of Type. Why not? Because it's metadata. Updating metadata at runtime would imply that the nature of the instance of real data is somehow modified.
This code, even if it were possible, wouldn't introduce a Triple type.
typeof(Double).Name = "Triple";
var IGotATriple = new Triple();
If you want to alter values, you need to just make another object with that information and bind to that. Metadata is compiled in. If you change it after a part is loaded, it doesn't change anything in the part's source, so you'd be lying. (unless you're going to have access to the source-code and you change it there and recompile).
Let's look at an example:
[Export(typeof(IPart))]
[ExportMetadata("Part Name","Gearbox")]
[ExportMetadata("Part Number","123")]
[PartCreationPolicy(CreationPolicy.NonShared)]
public class GearBoxPart : Part { public double GearRatio ... }
Now, let's assume that you had a UI that showed available parts and their numbers. Now, the manufacturer changes the part number for whatever reason and you want to update it. If this is possible, you might want to consider storing part number in a manifest or database instead. Alternatively you'd have to recompile every time a part number changes.
Recompiling is possible. You have a controller UI that does the above, but instead of updating the metadata, you submit a request to rebuild the part's codefile. The request would be handled by parsing the codefile, replacing the part number, then sending it off for a batch recompile and redistributing the new dll. That's a lot of work for nothing IMO.
So, you set up a database. Then you change the object metadata to this.
[ExportMetadata("OurCompanyNamePartNumber","123")]
Then you have a database/manifest/xml that maps your unique permanent static part number that your company devises to the current part number. Modifications in your control UI update the database/manifest/xml.
<PartMap>
<PartMapEntry OurCompanyNamePartNumber="123" ManufacturerPartNumber="456"/>
...
</PartMap>
Then the end-user UI does lookups for the part by manufacturer part number, and the mef code looks in the PartMap to get the mef part number.
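A possible lookup sketch against the PartMap shown above (the file name and the "456" value are placeholders):

// using System.Linq; using System.Xml.Linq;
// Resolve a manufacturer part number to the company's internal part number.
var map = XDocument.Load("PartMap.xml");
string internalNumber = map.Descendants("PartMapEntry")
    .Where(e => (string)e.Attribute("ManufacturerPartNumber") == "456")
    .Select(e => (string)e.Attribute("OurCompanyNamePartNumber"))
    .FirstOrDefault();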
There are many classes that represent Umbraco documents:
1) umbraco.cms.businesslogic.Content
2) umbraco.cms.businesslogic.web.Document
3) umbraco.MacroEngines.DynamicNode
4) umbraco.presentation.nodeFactory.Node
Are there any others?
Can you explain what they do, and when to use them?
umbraco.MacroEngines.DynamicNode and umbraco.presentation.nodeFactory.Node seem the same. Perhaps it is better to use the Node class because it is faster?
I have a theory:
umbraco.cms.businesslogic.Content and umbraco.cms.businesslogic.web.Document are the representation of cmsContent and cmsDocument DB tables.
umbraco.presentation.nodeFactory.Node and umbraco.MacroEngines.DynamicNode represent the node cached in the XML file, for use on the website.
The first is simply the Node; the second is the same Node with dynamic properties added, one for each property defined in the nodeType.
So, I think that Node is faster than DynamicNode
Is there someone that can confirm this?
Based on personal use:
Content: Never use it directly; rather, use the Document|Media|Member APIs (which inherit from this class).
Document: Use it for Create|Update|Delete operations. It does all of its operations directly against the DB, so it should only be used for reading when you need values straight from the db.
Node: Use this most: when Reading|Displaying data through usercontrols, code libraries, xslt extensions, etc.
DynamicNode: Razor macros. I have not yet used this one enough to provide more info.
See below for more detail, but no, Node and DynamicNode are not the same (DynamicNode uses Examine and will also fall back to reading from the DB if needed).
umbraco.cms.businesslogic.Content:
Content is an intermediate layer between CMSNode and classes which will use generic data. Content is a data structure that holds generic data defined in its corresponding ContentType. Content can in some sense be compared to a row in a database table: its ContentType holds the definition of the columns and the Content contains the data. Note that Content data in umbraco is not tabular but in a tree structure.
I have never had the need to use this class directly though, as all of its operations are handled by the corresponding subclasses, e.g. Document, Media, Member. This class in turn inherits from CMSNode, which is the base class for every piece of content data inside umbraco.
umbraco.cms.businesslogic.web.Document: Document represents a webpage; published Documents are exposed to the runtime/the public website in a cached xml document.
Use this class when referencing nodes from your "Content Section". It handles CRUD operations. Through this class you also get a reference to the DataType of each property in case you want to render those controls in an aspx page.
umbraco.NodeFactory.Node: It implements the INode interface, which exposes read-only methods. All of its information comes from the umbraco cached xml. You will not get access to the controls of each property, but rather the value of each property, formatted depending on its datatype.
You can only use this class for reading operations. It makes it really fast to show data since everything comes from cache (published nodes only).
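A rough usage sketch from memory of the Umbraco 4-era NodeFactory API (the node id and property alias are made up, so treat this as an assumption rather than exact API):

// Read-only access to a published node straight from the XML cache.
var node = new umbraco.NodeFactory.Node(1234);
string name = node.Name;
var bodyText = node.GetProperty("bodyText");
if (bodyText != null)
{
    // You get the stored/formatted value, not the property editor control.
    string value = bodyText.Value;
}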
umbraco.MacroEngines.DynamicNode: It was implemented to work with razor macros. It uses NodeFactory under the hood, which means it also accesses the cached xml. However, if you use the related DynamicMedia, be careful: 1) it uses the ExamineIndex, which strips out any html tags; 2) it falls back to its default Media type (the db, if it isn't in the runtime cache) in umbraco_v4.11.5.
Same as the above.
I just know the difference between Document and Node.
The Node class uses the data stored in the umbraco cache, the Document class will get data directly from the database.
Node is faster than Document.
Node only returns the content that is saved and published.
95% of time you should use Node.
Content allows you to retrieve/edit any content (page/media/..) from the DB, including non-published content.
Document allows you to retrieve/edit only page content from the DB, including non-published content.
Node is used for fast read-only access to (published-only) page content from the XML cache.
DynamicNode is comparable to Node, but implemented in later versions of Umbraco for macros using Razor.
I am currently saving a .NET (C#) usercontrol to disk as an XML file by saving each property as an element in the xml document. The file is used to recreate the controls later at runtime. I am wondering if it is possible, or better, to save the control as a binary file instead. There would be many controls, so I guess it would need a header section describing the location and length of each saved control. Thoughts?
Brad
BTW this is a windows app
EDIT:
What I currently have in place is a public member function that uses the PropertyDescriptor class to iterate through all the properties and create an xml document from that.
PropertyDescriptorCollection pdc = TypeDescriptor.GetProperties(this);
for (int i = 0; i < pdc.Count; i++)
{
    // Each descriptor exposes the metadata that gets written out as XML elements.
    string name = pdc[i].Name;
    Type type = pdc[i].PropertyType;
    string category = pdc[i].Category;
    // ... write name/type/category (and the current value) to the xml document
}
I will look into making the class Serializable - thanks
We had to do this for a data-driven application where a user could create persistable views. We did an XML version to start but moved to using BinaryFormatter and the ISerializable interface as this allows us to control exactly what gets persisted and which constructors to use. For the controls we actually persisted the CodeCompileUnit that the designer has created, but that means you have to actually use a designer to lay them out.
Winforms controls don't serialize especially well, and you might have a lot of difficulty getting the base-classes (i.e. not your code) to play ball. Things like Color, for example, regularly prove surprisingly troublesome to serialize.
Xml would be an obvious (if somewhat predictable) choice, but you generally need to nominate sub-classes ahead of time. And of course, the base-classes won't be marked serializable. BinaryFormatter would avoid some of that, but as a field-based serializer, you'd have problems with the "handles" etc. in the base-classes, which are meaningless when serialized.
I'm not saying it can't be done - but it won't be trivial either. As a starter, you'd want to look at TypeConverter.GetProperties, and use the Converter of each to get the value as an invariant string.
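A rough sketch of that approach, assuming you only persist writable properties whose values differ from their defaults (the name/value pairs here are just collected as strings):

// using System.ComponentModel; using System.Collections.Generic;
var lines = new List<string>();
foreach (PropertyDescriptor prop in TypeDescriptor.GetProperties(control))
{
    // Skip read-only properties and anything still at its default value.
    if (prop.IsReadOnly || !prop.ShouldSerializeValue(control))
        continue;

    object value = prop.GetValue(control);
    string text = prop.Converter.ConvertToInvariantString(value);
    lines.Add(prop.Name + "=" + text);
}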
Just had a thought: maybe you don't need to serialize the base/sub-classes. Maybe you could write another serializer that only serializes the top tier of the class inheritance hierarchy - only serializing the classes you wrote, and perhaps storing metadata for the base classes you derive from (so that you can re-map this on de-serialization)? This could just be pie-in-the-sky too.