Purpose and semantics of the IMigrationMetadata interface in Entity Framework - C#

I'm trying to find out the semantics of the System.Data.Entity.Migrations.Infrastructure.IMigrationMetadata interface in EF. I know that it's used to manage and apply DB migrations, but I can't find detailed information about it. To be specific, I would like to know:
What is the Source property used for? Why is it always null when I generate migrations using the tools?
What is the Target property used for? I see that the tools generate something Base64-looking and place it into resources. What is it? Why is it generated in such a non-friendly format?
Is it possible to develop a migration manually, without using the tools? I suppose it is not easy, because that Base64-like Target value has to be generated somehow. Am I right?
When is this interface actually used? So far I have found that migrations not implementing this interface can't be found automatically by the migrator. Am I right? Is that the only purpose of the interface?

The IMigrationMetadata interface has the following responsibilities that I know of:
Identify the migration via the Id property, so that it can be recognized and included by commands such as Update-Database.
Supply a snapshot of the model as it is after the migration is applied, via the Target property. This is used to determine the changes that should be included in a new migration.
I am guessing that the Source property is often not implemented by the tooling because it is not required by the implementation of Add-Migration. That code probably just compares the model as it was at the end of the most recent existing migration with a model generated from the code, to determine the changes that need to be included in the new migration.
The Target property returns a model in EDMX format that has been compressed with GZipStream and then encoded with Convert.ToBase64String. I wrote the following code to both decode and encode these values. You will probably find it useful if you are going to be coding migrations manually.
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

namespace ConsoleApplication6
{
    class Program
    {
        static void Main()
        {
            var minimalModel = File.ReadAllText("Model1.edmx");
            var encodedMinimalModel = Encode(minimalModel);
            var decodedMinimalModel = Decode(encodedMinimalModel);
        }

        private static string Decode(string encodedText)
        {
            var compressedBytes = Convert.FromBase64String(encodedText);
            var decompressedBytes = Decompress(compressedBytes);
            return Encoding.UTF8.GetString(decompressedBytes);
        }

        private static string Encode(string plainText)
        {
            var bytes = Encoding.UTF8.GetBytes(plainText);
            var compressedBytes = Compress(bytes);
            return Convert.ToBase64String(compressedBytes);
        }

        public static byte[] Decompress(byte[] bytes)
        {
            using (var memoryStream = new MemoryStream(bytes))
            {
                using (var gzipStream = new GZipStream(memoryStream, CompressionMode.Decompress))
                {
                    return ToByteArray(gzipStream);
                }
            }
        }

        private static byte[] ToByteArray(Stream stream)
        {
            using (var resultMemoryStream = new MemoryStream())
            {
                stream.CopyTo(resultMemoryStream);
                return resultMemoryStream.ToArray();
            }
        }

        public static byte[] Compress(byte[] bytes)
        {
            using (var memoryStream = new MemoryStream())
            {
                // The GZipStream must be disposed before reading the buffer so
                // that all compressed data is flushed to the MemoryStream.
                using (var gzipStream = new GZipStream(memoryStream, CompressionMode.Compress))
                {
                    gzipStream.Write(bytes, 0, bytes.Length);
                }
                return memoryStream.ToArray();
            }
        }
    }
}
The compression probably explains why a non-human-readable format was chosen. This content is repeated at least once (in the Target property) for each migration and can be large, depending on the size of the model. The compression saves space.
On that note, as far as I can see, only the last migration is required to return a true representation of the model after it has been applied. Only that migration is used by Add-Migration to calculate the changes required in the new migration. If you are dealing with a very large model and/or a very large number of migrations, removing that content could be advantageous. The remainder of this post covers my derivation of a minimal value for the Target property, which can be used in all but the most recent migration.
The Target property must return a string object - an ArgumentNullException is thrown from the call to System.Convert.FromBase64String in System.Data.Entity.Migrations.DbMigrator.ApplyMigration when Update-Database is run, if Target returns null.
Further, it must be a valid XML document. When I returned an empty string from Target, I got an XmlException with the message "Root element is missing."
From this point on, I used my code from above to encode the values.
I did not get very far with gradually building up the model (starting with <root />, for example), so I swapped over to discarding elements from an empty EDMX file that I generated by adding a new 'ADO.NET Entity Data Model' to my project and choosing the 'Empty Model' option. This was the result.
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="3.0" xmlns:edmx="http://schemas.microsoft.com/ado/2009/11/edmx">
  <edmx:Runtime>
    <edmx:StorageModels>
      <Schema xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl" Namespace="Model1.Store"
              Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005">
      </Schema>
    </edmx:StorageModels>
  </edmx:Runtime>
</edmx:Edmx>
When I encoded this using my code from above, this was the result.
H4sIAAAAAAAEAJVQy07DMBC8I/EP1t6xExASRA1VVTgWIYK4W/amtfCjeN2q/D12HsqJAxdLOzOe2Z3V+uIsO2MkE3wLNa+AoVdBG79v4ZT6mwdYP11frVC7S/OSH/Y5i++KOH/31BS2hUNKx0YIUgd0krgzKgYKfeIqOCF1ELdV9SjqWhQ5ZFfGRt/3k0/G4YDMWJdClHvcBY2WJiZz3WA+xv4vURBpC+xVOqSjVNjC4F3zkoTANtbIbNmh7YG9xXA2GmOefyih488ySd5926016NMi2ElveqT0Eb4wd5Lz7mHZVozrzoeJPy6biKWGCSh95+kXfT3Qv6UBAAA=
Be careful to ensure that you retain the real Target values for each of your migrations in source control in case you need to roll back to an earlier version. You could try applying the migration to a database and then using Visual Studio to generate an EDMX file. Another alternative would be to roll back the classes that form your model and then execute Add-Migration. Take the Target value from the newly created migration.
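On question 3 (hand-coded migrations): once you can produce an encoded Target value with the Encode method above, a manual migration is just a class deriving from DbMigration that also implements IMigrationMetadata. Below is a minimal sketch, not something generated by the tooling; the class name, table, and the truncated encoded string are hypothetical, and the Id must follow the timestamp_Name pattern (like the generated "201905250916038_Fourth" shown later) so the migrator can order it.

using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Infrastructure;

// A hypothetical hand-written migration.
public sealed class AddCustomerTable : DbMigration, IMigrationMetadata
{
    // Produced by Encode(...) above from an EDMX snapshot of the model
    // as it looks after this migration runs (truncated placeholder here).
    private const string EncodedTargetModel = "H4sIAAAAAAAEA...";

    public override void Up()
    {
        CreateTable(
            "dbo.Customers",
            c => new
            {
                Id = c.Int(nullable: false, identity: true),
                Name = c.String(),
            })
            .PrimaryKey(t => t.Id);
    }

    public override void Down()
    {
        DropTable("dbo.Customers");
    }

    string IMigrationMetadata.Id
    {
        get { return "201305170000000_AddCustomerTable"; }
    }

    string IMigrationMetadata.Source
    {
        get { return null; } // not required, as discussed above
    }

    string IMigrationMetadata.Target
    {
        get { return EncodedTargetModel; }
    }
}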

I was just looking into this because I wanted to use the Source property to enforce a strict ordering of migrations.
The answer to question 1 is hidden in DbMigrator.Scaffold:

var scaffoldedMigration
    = _configuration.CodeGenerator.Generate(
        migrationId,
        migrationOperations,
        (sourceModel == _emptyModel.Value)
        || (sourceModel == _currentModel)
        || !sourceMigrationId.IsAutomaticMigration()
            ? null
            : Convert.ToBase64String(modelCompressor.Compress(sourceModel)),
        Convert.ToBase64String(modelCompressor.Compress(_currentModel)),
        @namespace,
        migrationName);
In other words, the Source property is only filled in if the previous migration was an "automatic migration". I just tested it, and a migration scaffolded after an automatic migration yields something like this:
[GeneratedCode("EntityFramework.Migrations", "6.2.0-61023")]
public sealed partial class Fourth : IMigrationMetadata
{
    private readonly ResourceManager Resources = new ResourceManager(typeof(Fourth));

    string IMigrationMetadata.Id
    {
        get { return "201905250916038_Fourth"; }
    }

    string IMigrationMetadata.Source
    {
        get { return Resources.GetString("Source"); }
    }

    string IMigrationMetadata.Target
    {
        get { return Resources.GetString("Target"); }
    }
}

Go to the EF6 repository on CodePlex and you will see:
public interface IMigrationMetadata
{
    /// <summary>
    /// Gets the unique identifier for the migration.
    /// </summary>
    string Id { get; }

    /// <summary>
    /// Gets the state of the model before this migration is run.
    /// </summary>
    string Source { get; }

    /// <summary>
    /// Gets the state of the model after this migration is run.
    /// </summary>
    string Target { get; }
}
You can get the project and check references to see how this interface is used. The Base64 string is your compressed model; again, with the code you should be able to track how it is produced.

Related

What is the replacement for TestContext.DataRow["MyColumnName"]

Using MSTest in a .NET Core unit test project, I am attempting to use a CSV data source to provide the data for a test method.
Previously, I would use something like the following in a .NET Framework test project:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", @"data.csv", "data#csv", DataAccessMethod.Sequential),
 DeploymentItem("data.csv"),
 TestMethod]
public void ValuesController_Post()
{
    _controller.Post(TestContext.DataRow["body"]);
    _valuesRepository.Verify(_ => _.Post(It.IsAny<string>()), Times.Once);
}
The key here is the DataRow property found on TestContext. This doesn't appear to exist in the .NET Core version of TestContext.
How would I go about doing this in .NET Core?
Since moving to ASP.NET Core, I've never been able to use the same [DataSource(...)] attribute to iterate through test data; my data-driven tests are always skipped.
Have you considered switching to another approach, with [DataTestMethod] and [DynamicData] and a custom source that reads your file?
Here's a good article on this:
https://www.meziantou.net/2018/02/05/mstest-v2-data-tests
Maybe another way would be to read the whole file at the beginning of the test and then iterate through the data set as one single unit test?
Hope this helps.
It took me an afternoon to fiddle with things, but I finally found a solution. Since you don't specify your test or CSV file, here is a quick example I could get working.
Long story short, I installed the CsvHelper NuGet package, because parsing CSV is dead easy right up to the point it is not. As Carl Verret pointed out, you need to use the [DynamicData(...)] attribute above your test method, and then parse the CSV using CsvHelper.
The CSV File (Example.csv)
A,B,IsLessThanZero
1,2,FALSE
3,-5,TRUE
Important: Make sure this CSV file is included in your test project and "Copy To Output Directory" is set to "Always" in the properties for the CSV file in Solution Explorer.
Data Transfer Object Used By CsvHelper
public class AdditionData
{
    public int A { get; set; }
    public int B { get; set; }
    public bool IsLessThanZero { get; set; }
}
The Test Class
[TestClass]
public class ExampleTests
{
    // HINT: Look in {Your Test Project Folder}\bin\{Configuration}\netcoreapp3.1\FolderYourCsvFileIsIn for the CSV file.
    // Change this path to work with your test project folder structure.
    private static readonly string DataFilePath =
        Path.GetDirectoryName(typeof(ExampleTests).Assembly.Location) + @"\FolderYourCsvFileIsIn\Example.csv";

    [TestMethod]
    [DynamicData(nameof(GetData), DynamicDataSourceType.Method)]
    public void AddingTwoNumbers(AdditionData data)
    {
        bool isLessThanZero = data.A + data.B < 0;
        Assert.AreEqual(data.IsLessThanZero, isLessThanZero);
    }

    private static IEnumerable<object[]> GetData()
    {
        using var stream = new StreamReader(DataFilePath);
        using var reader = new CsvReader(stream, new CsvConfiguration(CultureInfo.CurrentCulture));

        var rows = reader.GetRecords<AdditionData>();
        foreach (var row in rows)
        {
            yield return new object[] { row };
        }
    }
}
After building your solution, you will see a single test in Test Explorer. Running this single test runs all variants defined in your CSV file:

EF 6 CodeFirst Add-Migration scaffolds and generates changes that don't exist

I have been using EF Migrations for a while in my current project and all was working great, until today. The situation is as follows:
I made a small change, adding a string property.
I called an API method and got an error that there are changes in the model.
I ran the command "Add-Migration MigrationXYZ".
A new migration was created with extra changes that didn't happen.
I ran "Add-Migration MigrationXYZ -Force" to make sure it's not a one-off issue, and I dropped the DB and restarted VS (2015), but it's all the same.
Another issue is that even if I apply the migration as scaffolded, the error still occurs, saying "Unable to update database to match the current model because there are pending changes..."
Looking at these changes, all but one are about string properties with the [Required] attribute that the scaffolder wants to make nullable. Below is a sample.
public partial class MigrationXYZ : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Foos", "NewProperty", c => c.String());            // <-- Expected change
        AlterColumn("dbo.Bars", "Name", c => c.String());                 // <-- Unexpected change
    }

    public override void Down()
    {
        AlterColumn("dbo.Bars", "Name", c => c.String(nullable: false));  // <-- Unexpected change
        DropColumn("dbo.Foos", "NewProperty");                            // <-- Expected change
    }
}

public class Bar
{
    // This was not touched in ages; some of it predates the first migration.
    [Required]
    public string Name { get; set; }
}
And now I am stuck and don't know how to fix this... corruption in the migration state.
Edit
I have been trying to debug the Add-Migration command to understand why EF sees the model as different from what it really is, but using the EF source is not possible when you have dependencies like Identity, which need signed DLLs to work.
However, additional research led me to the answer here, which leads to this blog post by @trailmax and the code to decipher the migration hash. With a little searching in the EF source, I made a small app to extract both the current model and the last migration's model, to compare them side by side.
The code to get the current model representation in XML
// Extracted from EF source code
public static class DbContextExtensions
{
    public static XDocument GetModel(this DbContext context)
    {
        return GetModel(w => EdmxWriter.WriteEdmx(context, w));
    }

    public static XDocument GetModel(Action<XmlWriter> writeXml)
    {
        using (var memoryStream = new MemoryStream())
        {
            using (var xmlWriter = XmlWriter.Create(
                memoryStream, new XmlWriterSettings { Indent = true }))
            {
                writeXml(xmlWriter);
            }
            memoryStream.Position = 0;
            return XDocument.Load(memoryStream);
        }
    }
}
// In Program.cs
using (var db = new DbContext())
{
    var model = db.GetModel();
    using (var streamWriter = new StreamWriter(@"D:\Current.xml"))
    {
        streamWriter.Write(model);
    }
}
The code to extract the model from the migration in XML
// Code from the Trailmax Tech blog
public class MigrationDecompressor
{
    public string ConnectionString { get; set; }

    public string DecompressMigrationFromSource(IMigrationMetadata migration)
    {
        var target = migration.Target;
        var xmlDoc = Decompress(Convert.FromBase64String(target));
        return xmlDoc.ToString();
    }

    public string DecompressDatabaseMigration(string migrationName)
    {
        var sqlToExecute = String.Format(
            "select model from __MigrationHistory where migrationId like '%{0}'", migrationName);

        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            var command = new SqlCommand(sqlToExecute, connection);
            var reader = command.ExecuteReader();
            if (!reader.HasRows)
            {
                throw new Exception("No rows to display. Probably the migration name is incorrect.");
            }
            while (reader.Read())
            {
                var model = (byte[])reader["model"];
                var decompressed = Decompress(model);
                return decompressed.ToString();
            }
        }
        throw new Exception("Something went wrong. You should not get here.");
    }

    /// <summary>
    /// Stealing the decompressor from EF itself:
    /// http://entityframework.codeplex.com/SourceControl/latest#src/EntityFramework/Migrations/Edm/ModelCompressor.cs
    /// </summary>
    private XDocument Decompress(byte[] bytes)
    {
        using (var memoryStream = new MemoryStream(bytes))
        {
            using (var gzipStream = new GZipStream(memoryStream, CompressionMode.Decompress))
            {
                return XDocument.Load(gzipStream);
            }
        }
    }
}
// Inside Program.cs
var decompresser = new MigrationDecompressor
{
    ConnectionString = "<connection string>"
};

var databaseSchemaRecord = decompresser.DecompressDatabaseMigration("<migration name>");
using (var streamWriter = new StreamWriter(@"D:\LastMigration.xml"))
{
    streamWriter.Write(databaseSchemaRecord);
}
Unfortunately, I still cannot find the issue. The only difference between the current model and the one hashed with the last migration is the expected change for the added property; none of the unexpected changes show up. Also, after running the migration suggested by EF and comparing the current model with the suggested migration, the model still doesn't match the changes: what should be not-null is still not-null in the model, while the suggested migration shows it as nullable.
The expected changes do show up:
<Property Name="NewProperty" Type="String" MaxLength="Max" FixedLength="false" Unicode="true" />
...
<ScalarProperty Name="NewProperty" ColumnName="NewProperty" />
...
<Property Name="NewProperty" Type="nvarchar(max)" Nullable="true" />
Try rolling back your database to one of your previous migrations with:
Update-Database -TargetMigration "NameOfPreviousMigration"
(You may need to run Update-Database before running the above; I am not certain.)
Then delete your new migration, create a completely new migration, and run Update-Database.
Hopefully this will fix the problem of it thinking there is an extra migration.
Another option, though probably not the best solution, is to manually edit the migration and take out the unexpected part, as shown in the sketch below.
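For example, based on the scaffolded migration shown in the question, the manually trimmed version would keep only the expected change:

public partial class MigrationXYZ : DbMigration
{
    public override void Up()
    {
        // The unexpected AlterColumn call on dbo.Bars has been removed.
        AddColumn("dbo.Foos", "NewProperty", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Foos", "NewProperty");
    }
}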
Well, looking again at @trailmax's answer, I wanted to try something. A piece of information that I didn't include in the question, because I had dismissed it as the cause (it's used in other places, was not changed in this migration, and @trailmax dismissed it as well), is the attributes - the ExpressiveAnnotations attributes in particular.
My actual Bar class looks like this:

public class Bar
{
    // This was not touched in ages; some of it predates the first migration.
    [Required]
    [AssertThat(@"<Condition>", ErrorMessage = "Please revise the name")]
    public string Name { get; set; }
}
I commented out the AssertThat attribute, and guess what, all the changes that shouldn't exist disappeared.
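In other words, the version of the class that finally scaffolded cleanly was simply:

public class Bar
{
    [Required]
    //[AssertThat(@"<Condition>", ErrorMessage = "Please revise the name")]
    public string Name { get; set; }
}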
Please try providing the connection string and provider explicitly with the Update-Database command. You can find these values in your connection string.
Sometimes we need to direct Entity Framework to connect to the right database. One such case is selecting the wrong project as the startup project, which will make Entity Framework connect to the default database.
Update-Database -ConnectionString "<connection string>" -ConnectionProviderName "<provider name>"
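For a concrete invocation (the connection string here is a placeholder for illustration; -ConnectionString and -ConnectionProviderName are the EF6 package manager console parameter names):

Update-Database -ConnectionString "Data Source=.;Initial Catalog=MyDb;Integrated Security=True" -ConnectionProviderName "System.Data.SqlClient"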

Settings and IsDirty when modifying the settings through external references

I have a C# application with a settings field that is a collection of custom objects.
When I start the application, I create some class instances that take instances from the settings entry collection, keep references to them internally, and modify them. While debugging, I saw that changes done through these external references are not reflected when I call Settings.Default.Save(), but changes done by directly accessing the properties, like Settings.Default.<property>, work fine.
I looked up the code responsible for Save() and saw that the implementation actually checks a SettingsPropertyValue.IsDirty field to decide whether or not to serialize it. Of course, when I access the external references to the objects in the settings field, that value is not set.
Is there any lightweight solution to this?
I don't think I'm the first person to encounter this. One way I can think of is implementing the IsDirty property in the collections I serialize, and adding the INotifyPropertyChanged interface's PropertyChanged event to all the contained instances, so that the container is notified of changes and can reflect them to the actual settings property. But that means wrapping each of the settings classes with this logic. So what I am asking is whether there is a lighter-weight solution found by anyone else who has encountered this issue.
Example
Consider this class:
namespace SettingsTest
{
    [DataContract(Name = "SettingsObject")]
    public class SettingsObject
    {
        [DataMember]
        public string Property { get; set; }
    }
}
And the following program:
namespace SettingsTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var settingsObjects = new List<SettingsObject>();
            var settingsObject = new SettingsObject { Property = "foo" };
            settingsObjects.Add(settingsObject);

            Settings.Default.SettingsObjects = settingsObjects;
            Settings.Default.Save();

            settingsObject.Property = "bar";
            Settings.Default.Save();
        }
    }
}
After the second Save() call the final output in user.config file is:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <userSettings>
    <SettingsTest.Settings>
      <setting name="SettingsObjects" serializeAs="Xml">
        <value>
          <ArrayOfSettingsObject xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <SettingsObject>
              <Property>foo</Property>
            </SettingsObject>
          </ArrayOfSettingsObject>
        </value>
      </setting>
    </SettingsTest.Settings>
  </userSettings>
</configuration>
As you can see the modification to the property via the external reference to the SettingsObject instance was not persisted.
You seem to be covering a bunch of issues with this question. As you correctly said, "changes done by directly accessing the properties like Settings.Default.<property> work fine." Modifying in-memory references will not cause the settings to be updated automatically as you wanted. INotifyPropertyChanged is one solution.
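As a minimal sketch of the INotifyPropertyChanged route (assuming the SettingsObject class and the generated Settings class from the question), the trick is that re-assigning the settings property marks it dirty, so a change handler can do that for you:

using System.ComponentModel;
using System.Runtime.Serialization;

[DataContract(Name = "SettingsObject")]
public class SettingsObject : INotifyPropertyChanged
{
    private string _property;

    [DataMember]
    public string Property
    {
        get { return _property; }
        set
        {
            if (_property == value) return;
            _property = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Property"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

// During startup, subscribe each object; any change re-assigns the settings
// property, which sets IsDirty so that Save() will serialize it again.
foreach (var item in Settings.Default.SettingsObjects)
{
    item.PropertyChanged += (s, e) =>
        Settings.Default.SettingsObjects = Settings.Default.SettingsObjects;
}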
If you want a quick and dirty method of serializing these objects into your settings file you can use the following code.
Objects involved:
/// <summary>Generic class to support serializing lists/collections to a settings file.</summary>
[Serializable()]
public class SettingsList<T> : System.Collections.ObjectModel.Collection<T>
{
    public string ToBase64()
    {
        // If you don't want a crash & burn at runtime, you should probably add
        // this guard clause: 'if (typeof(T).IsDefined(typeof(SerializableAttribute), false))'
        using (var stream = new MemoryStream())
        {
            var formatter = new BinaryFormatter();
            formatter.Serialize(stream, this);
            stream.Position = 0;
            byte[] buffer = new byte[(int)stream.Length];
            stream.Read(buffer, 0, buffer.Length);
            return Convert.ToBase64String(buffer);
        }
    }

    public static SettingsList<T> FromBase64(string settingsList)
    {
        using (var stream = new MemoryStream(Convert.FromBase64String(settingsList)))
        {
            var deserialized = new BinaryFormatter().Deserialize(stream);
            return (SettingsList<T>)deserialized;
        }
    }
}

[Serializable()]
public class SettingsObject
{
    public string Property { get; set; }

    public SettingsObject()
    {
    }
}
The main method demonstrates the issue you are facing, with a solution.
class Program
{
    static void Main(string[] args)
    {
        // Make sure we don't overwrite the previous run's settings.
        if (String.IsNullOrEmpty(Settings.Default.SettingsObjects))
        {
            // Create the initial settings.
            var list = new SettingsList<SettingsObject> {
                new SettingsObject { Property = "alpha" },
                new SettingsObject { Property = "beta" }
            };
            Console.WriteLine("settingsObject.Property[0] is {0}", list[0].Property);

            // Save initial values to Settings.
            Settings.Default.SettingsObjects = list.ToBase64();
            Settings.Default.Save();

            // Change a property.
            list[0].Property = "theta";

            // This is where you went wrong; settings will not be persisted at
            // this point because you have only modified the in-memory list.
            // You need to set the property on settings again to persist the value.
            Settings.Default.SettingsObjects = list.ToBase64();
            Settings.Default.Save();
        }

        // Pull that property back out & make sure it saved.
        var deserialized = SettingsList<SettingsObject>.FromBase64(Settings.Default.SettingsObjects);
        Console.WriteLine("settingsObject.Property[0] is {0}", deserialized[0].Property);

        Console.WriteLine("Finished! Press any key to continue.");
        Console.ReadKey();
    }
}
So all I'm doing is storing your whole list of objects as a Base64-encoded string, then deserializing it on the next run.
From what I understand of the question, you want to hold only in-memory references without hitting the settings object. You can simply run some code at the end of Main to persist this list and any other settings you need. Any changes will still be in memory as long as you hold a reference to the object, and will remain around until you terminate the application.
If you need the settings to be saved while the application is running, just create a Save() method of your own and call it from the end of Main() as well as whenever the user performs an action that requires saving the settings, e.g.:
public static void SaveSettings(SettingsList<SettingsObject> list)
{
    Settings.Default.SettingsObjects = list.ToBase64();
    Settings.Default.Save();
}
Edit: one caveat, as mentioned in the comments below.
From my benchmarks this method is very slow, so it's not a good idea to persist large object lists in settings this way. Once you have more than a handful of properties, you might want to look at an embedded database like SQLite. A CSV, INI, or XML file could also be an option for large numbers of trivial settings. One benefit of simpler storage formats is easy modification by non-developers, e.g. CSV in Excel. Of course, this may not be what you want ;)

MvvmCross ViewTypeResolver doesn't resolve Tag (fragment or custom type)

The current situation is that I added the "fragment" case to the MvxViewTypeResolver class, so it looks like this:
#region Copyright
// <copyright file="MvxViewTypeResolver.cs" company="Cirrious">
// (c) Copyright Cirrious. http://www.cirrious.com
// This source is subject to the Microsoft Public License (Ms-PL)
// Please see license.txt on http://opensource.org/licenses/ms-pl.html
// All other rights reserved.
// </copyright>
//
// Project Lead - Stuart Lodge, Cirrious. http://www.cirrious.com
#endregion

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Android.Views;
using Cirrious.MvvmCross.Binding.Android.Interfaces.Binders;

namespace Cirrious.MvvmCross.Binding.Android.Binders
{
    public class MvxViewTypeResolver : IMvxViewTypeResolver
    {
        private Dictionary<string, Type> _cache = new Dictionary<string, Type>();

        public IDictionary<string, string> ViewNamespaceAbbreviations { get; set; }

        #region IMvxViewTypeResolver Members

        public virtual Type Resolve(string tagName)
        {
            Type toReturn;
            if (_cache.TryGetValue(tagName, out toReturn))
                return toReturn;

            var unabbreviatedTagName = UnabbreviateTagName(tagName);
            var longLowerCaseName = GetLookupName(unabbreviatedTagName);
            var viewType = typeof(View);

#warning AppDomain.CurrentDomain.GetAssemblies is only the loaded assemblies - so we might miss controls if not already loaded
            var query = from assembly in AppDomain.CurrentDomain.GetAssemblies()
                        from type in assembly.GetTypes()
                        where viewType.IsAssignableFrom(type)
                        where (type.FullName ?? "-").ToLowerInvariant() == longLowerCaseName
                        select type;

            toReturn = query.FirstOrDefault();
            _cache[tagName] = toReturn;
            return toReturn;
        }

        private string UnabbreviateTagName(string tagName)
        {
            var filteredTagName = tagName;
            if (ViewNamespaceAbbreviations != null)
            {
                var split = tagName.Split(new char[] { '.' }, 2, StringSplitOptions.RemoveEmptyEntries);
                if (split.Length == 2)
                {
                    var abbreviate = split[0];
                    string fullName;
                    if (ViewNamespaceAbbreviations.TryGetValue(abbreviate, out fullName))
                    {
                        filteredTagName = fullName + "." + split[1];
                    }
                }
            }
            return filteredTagName;
        }

        #endregion

        protected string GetLookupName(string tagName)
        {
            var nameBuilder = new StringBuilder();

            switch (tagName)
            {
                case "View":
                case "ViewGroup":
                    nameBuilder.Append("android.view.");
                    break;
                case "fragment":
                    nameBuilder.Append("android.app.");
                    break;
                default:
                    if (!IsFullyQualified(tagName))
                        nameBuilder.Append("android.widget.");
                    break;
            }

            nameBuilder.Append(tagName);
            return nameBuilder.ToString().ToLowerInvariant();
        }

        private static bool IsFullyQualified(string tagName)
        {
            return tagName.Contains(".");
        }
    }
}
Now it submits the correct longLowerCaseTagName (android.app.fragment), but the query isn't able to resolve the type.
My suspicion is that the fragment control isn't loaded at the time the type should be resolved. Maybe there is another way to get the type resolved?
Also, if I add a custom type (giving the tag Mvx.MyCustomType in the AXML), it doesn't get resolved. Do I have to add something to MvxBindingAttributes.xml in this case?
Thanks for the help!
First, an explanation of the code:
The custom XML inflater factory used by the MvvmCross binder tries to load views in a very similar way to the standard 2.x Android XML inflater.
The default code for the view type resolution is indeed in: https://github.com/slodge/MvvmCross/blob/master/Cirrious/Cirrious.MvvmCross.Binding/Android/Binders/MvxViewTypeResolver.cs
If your XML contains a name such as <MyCompany.MyProject.MyViews.MyFirstView />, then the view type resolver:
first checks for abbreviations and expands these into full namespaces - by default the only known abbreviation is Mvx., which is expanded to Cirrious.MvvmCross.Binding.Android.Views. If you want to add more abbreviations, then override ViewNamespaceAbbreviations in https://github.com/slodge/MvvmCross/blob/master/Cirrious/Cirrious.MvvmCross.Binding/Android/MvxBaseAndroidBindingSetup.cs (see the sketch after this list)
then checks whether the unabbreviated name is a non-namespaced name. If it is, it assumes the class is in the Android namespace and prepends android.view. or android.widget.
then converts the fully namespaced name to all lowercase, as a case-insensitive lookup key
uses that lowercase key to search all types which derive from View in all loaded assemblies
caches the result (whether it's null or not) in order to speed up subsequent inflations.
All of this behaviour was designed to match the default Android xml view inflation code in http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.3.6_r1/android/view/LayoutInflater.java#LayoutInflater.createViewFromTag%28java.lang.String%2Candroid.util.AttributeSet%29
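As a rough sketch of adding an abbreviation (the subclass and the exact member to override are hypothetical here - check the MvxBaseAndroidBindingSetup source linked above for the real signature), the idea is:

// Hypothetical sketch: extend the abbreviation table so that
// <My.MyFirstView /> in AXML expands to MyCompany.MyProject.MyViews.MyFirstView.
public class MyBindingSetup : MvxBaseAndroidBindingSetup
{
    protected override IDictionary<string, string> ViewNamespaceAbbreviations
    {
        get
        {
            var abbreviations = base.ViewNamespaceAbbreviations;
            abbreviations["My"] = "MyCompany.MyProject.MyViews";
            return abbreviations;
        }
    }
}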
With that explanation out of the way - here's an answer to your questions:
MvvmCross does not currently contain any Fragment support. The official MonoDroid fragment support itself was only released last week, and I've not yet had anybody request fragments - Android "fragmentation" seems to have kept most people on Activity- and Dialog-based code.
Briefly looking at the documentation, fragment isn't an Android View - it looks like Fragment inherits directly from Java.Lang.Object - see http://developer.android.com/reference/android/app/Fragment.html
Because of this, there's no way the MvvmCross ViewTypeResolver will currently work with fragments.
I would suggest that if you need both MvvmCross and fragments today, your best bet is to replace the default resolver (using IoC) with your own resolver - see the sketch below - but I can't offer much advice on this, as I haven't yet fully read and understood the droid docs on http://developer.android.com/guide/topics/fundamentals/fragments.html
From my experience in creating the current inflation code, I think you will find the source essential reading when you do this - e.g. see: http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.0.1_r1/android/view/LayoutInflater.java#LayoutInflater.createViewFromTag%28android.view.View%2Cjava.lang.String%2Candroid.util.AttributeSet%29
I can't give you any information on when official MvvmCross fragment support will be available - it's not something that is currently scheduled.
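Based on the resolver source shown in the question (Resolve is virtual), a replacement resolver might be sketched like this - hypothetical code, assuming the MonoDroid fragment bindings are available:

// Sketch only: Fragment does not derive from Android.Views.View, so the
// default View-based assembly scan can never find it; intercept the tag
// before falling back to the base implementation.
public class MyViewTypeResolver : MvxViewTypeResolver
{
    public override Type Resolve(string tagName)
    {
        if (tagName == "fragment")
            return typeof(Android.App.Fragment);

        return base.Resolve(tagName);
    }
}

You would then register this class with the IoC container in place of the default IMvxViewTypeResolver.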
Custom views are supported, but will not normally live in the Mvx. abbreviated namespace.
They are much more likely to live in your UI application namespace, or in some shared library.
To see a custom view in action, see the PullToRefresh example in the tutorial - https://github.com/slodge/MvvmCross/blob/master/Sample%20-%20Tutorial/Tutorial/Tutorial.UI.Droid/Resources/Layout/Page_PullToRefreshView.axml

Is this an abuse of the Settings Feature?

I've been storing collections of user settings in the Properties.Settings.Default object, using the Visual Studio settings designer (right-click on your project, click Properties, and then click the Settings tab) to set the thing up. Recently, several users have complained that the data this particular setting tracks goes missing, randomly.
To give an idea (not exactly how I do it, but somewhat close), the way it works is I have an object, like this:
class MyObject
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    public string Property3 { get; set; }
    public string Property4 { get; set; }
}
Then in code, I might do something like this to save the information:
public void SaveInfo()
{
    ArrayList userSetting = new ArrayList();
    foreach (Something s in SomeCollectionHere) // For example, a ListView contains the info
    {
        MyObject o = new MyObject {
            Property1 = s.P1,
            Property2 = s.P2,
            Property3 = s.P3,
            Property4 = s.P4
        };
        userSetting.Add(o);
    }
    Properties.Settings.Default.SettingName = userSetting;
}
Now, the code to pull it out is something like this:
public void RestoreInfo()
{
    ArrayList setting = Properties.Settings.Default.SettingName;
    foreach (object o in setting)
    {
        MyObject data = (MyObject)o;
        // Do something with the data, like load it in a ListView
    }
}
I've also made sure to decorate the Settings.Designer.cs file with [global::System.Configuration.SettingsSerializeAs(global::System.Configuration.SettingsSerializeAs.Binary)], like this:
[global::System.Configuration.UserScopedSettingAttribute()]
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.Configuration.SettingsSerializeAs(global::System.Configuration.SettingsSerializeAs.Binary)]
public global::System.Collections.ArrayList SettingName
{
    get {
        return ((global::System.Collections.ArrayList)(this["SettingName"]));
    }
    set {
        this["SettingName"] = value;
    }
}
Now, randomly, the information will disappear. I can debug this and see that Properties.Settings.Default is returning an empty ArrayList for SettingName. I would really rather not use an ArrayList, but I don't see a way to get a generic collection to store in this way.
I'm about to give up and save this information using plain XML on my own. I just wanted to verify that I was indeed pushing this bit of .NET infrastructure too far. Am I right?
I couldn't find an answer as to why these settings were disappearing, and since it kept happening, I ended up storing the complicated sets of settings separately in an XML file, manually serializing and deserializing them myself.
I had a very similar experience when using the SettingsSerializeAs Binary attribute in the Settings designer class. It worked in testing, but some time later it failed to restore the property values.
In my case, there had been subsequent additions to the settings made via the designer. Source control history showed that the SettingsSerializeAs attribute had been removed from Settings.Designer.cs without my knowledge.
I added the following code to check that the attribute hadn't been accidentally lost, in the equivalent of the RestoreInfo() method.
#if(DEBUG)
// Verify that the property has the required attribute for binary serialization.
System.Reflection.PropertyInfo binarySerializeProperty =
    Properties.Settings.Default.GetType().GetProperty("SettingName");
object[] customAttributes = binarySerializeProperty.GetCustomAttributes(
    typeof(System.Configuration.SettingsSerializeAsAttribute), false);
if (customAttributes.Length != 1)
{
    throw new ApplicationException("SettingsSerializeAsAttribute required for SettingName property");
}
#endif
Also, only because it is missing from your example, don't forget to call Save. Say after calling SaveInfo().
Properties.Settings.Default.Save();
When using the Settings feature with the User scope, the settings are saved to the currently logged-in user's Application Data (AppData in Vista/7) folder. So if UserA logged in and used your application, and then UserB logged in, he wouldn't have UserA's settings loaded; he would have his own.
For what you're trying to accomplish, I would suggest using the XmlSerializer class for serializing a list of objects. The use is pretty simple:
To serialize it:
ArrayList list = new ArrayList();
XmlSerializer s = new XmlSerializer(typeof(ArrayList));
using (FileStream fs = new FileStream(@"C:\path\to\settings.xml", FileMode.OpenOrCreate))
{
    s.Serialize(fs, list);
}
To deserialize it:
ArrayList list;
XmlSerializer s = new XmlSerializer(typeof(ArrayList));
using (FileStream fs = new FileStream(@"C:\path\to\settings.xml", FileMode.Open))
{
    list = (ArrayList)s.Deserialize(fs);
}
From your example, I don't see anything incorrect about what you're trying to do. I think the root of the problem you're describing might be your assembly version changing. User settings do not auto-magically upgrade themselves (at least, I couldn't get them to).
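If the version change is the cause, the usual pattern is to call ApplicationSettingsBase.Upgrade() once per new version. Upgrade() is a real API; the UpgradeRequired flag below is a user-scoped bool setting (default True) that you would add yourself:

// Run once at application startup.
if (Properties.Settings.Default.UpgradeRequired)
{
    // Copies user settings forward from the previous version's folder.
    Properties.Settings.Default.Upgrade();
    Properties.Settings.Default.UpgradeRequired = false;
    Properties.Settings.Default.Save();
}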
I can appreciate your plight; I went through this a few months ago. I coded up a UserSettings class that provides standard name/value pair collections (KeyValueConfigurationElement) under a named group heading, something like the following:
<configSections>
  <section name="userSettings" type="CSharpTest.Net.AppConfig.UserSettingsSection, CSharpTest.Net.Library"/>
</configSections>
<userSettings>
  <add key="a" value="b"/>
  <sections>
    <section name="c">
      <add key="a" value="y"/>
    </section>
  </sections>
</userSettings>
Anyway, see if this meets your needs or provides some insight into implementing a custom ConfigurationSection that allows what you need.
Oh yeah, the code is here:
http://csharptest.net/browse/src/Library/AppConfig
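For a flavor of the general technique (a minimal sketch, not the code at the link above), a custom section exposing a default key/value collection can be as small as this:

using System.Configuration;

// Registered under <configSections>, this lets you write
// <userSettings><add key="a" value="b"/></userSettings> and read it back via
// ((UserSettingsSection)ConfigurationManager.GetSection("userSettings")).Settings["a"].Value.
public class UserSettingsSection : ConfigurationSection
{
    [ConfigurationProperty("", IsDefaultCollection = true)]
    public KeyValueConfigurationCollection Settings
    {
        get { return (KeyValueConfigurationCollection)base[""]; }
    }
}

The nested <sections> group shown above would need an additional ConfigurationElementCollection, which is omitted here.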
