How to Remove an Attribute of a Property in a Class - c#

I want to make some changes to a table that already exists (I am using SQLite). I want to remove an attribute of a property in a class, specifically the [Required] attribute. How can I do that? What should I change: do I need changes in the DbContext or in the migrations folder, and which commands can I use in the Package Manager Console?
public class Appointment
{
    [Required]
    [MaxLength(50)]
    public string Company { get; set; }
    // ...
}

This is a good example of why it is better to use the fluent API than attributes to specify your database: you want the same class to be used in a different database.
The DbContext defines the database: which tables are in it, how the tables relate to each other, which classes represent the tables, and what constraints apply to the tables.
For instance, maybe in one database you want Company to have a MaxLength of 50 characters, in another database you might desire a length of 100 characters. Some databases need a simple DateTime, others require a DateTime2. Or maybe you want a different precision for your decimals?
Hence it is usually better to specify these database specifics where they belong: in the definition of the database, which is the DbContext.
Back to your question
It depends a bit on the Entity Framework version you are using how the database reacts if, during migration, you remove the Required attribute and move it to the fluent API. I guess it is best to experiment with it.
In OnModelCreating, or the migration equivalent of it, you will have something like:
var entityAppointment = modelBuilder.Entity<Appointment>();
var propertyCompany = entityAppointment.Property(appointment => appointment.Company);
propertyCompany.IsOptional()
    .HasMaxLength(50);
    // ... other non-default specifications of Company,
    // like HasColumnName, IsUnicode, etc.
When migrating, the statements might be a little different, but I guess you get the gist.
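If you keep the attribute approach instead of moving to the fluent API, the concrete steps are: delete [Required] from the property, then scaffold and apply a migration. A minimal sketch, assuming EF6 code first; the migration and table names below are illustrative, and note that SQLite supports only a subset of ALTER TABLE, so its provider may have to rebuild the table:

```csharp
using System.Data.Entity.Migrations;

// Package Manager Console, after deleting [Required] from the class:
//   Add-Migration MakeCompanyOptional
//   Update-Database
//
// The scaffolded migration should look roughly like this:
public partial class MakeCompanyOptional : DbMigration
{
    public override void Up()
    {
        // Drops NOT NULL while keeping the 50-character limit.
        AlterColumn("dbo.Appointments", "Company",
            c => c.String(maxLength: 50, nullable: true));
    }

    public override void Down()
    {
        AlterColumn("dbo.Appointments", "Company",
            c => c.String(maxLength: 50, nullable: false));
    }
}
```

If the scaffolded Up() comes out empty, check that the attribute was really removed from the compiled model before running Add-Migration.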

Related

Backward compatibility to old db schema with Linq-To-Sql

We have a software suite with different components and a SQL Server database that is used by most of these components.
Let's say we have a table Configuration with two columns, [Name] and [Value]. This table contains our db schema version (as Name='Version', Value='1.0').
Then we have a table Customer with columns ID and Name.
So the class for LINQ-to-SQL queries on this table would look something like this:
[Table(Name = "Customer")]
public class Customer
{
    [Column]
    public long ID { get; private set; }
    [Column]
    public string Name { get; set; }
}
For our next release, we need to change this Customer table and add a column Address. When we install this new schema, we set the version value in the Configuration table to 1.1, so we can determine which version is installed.
A lot of our customers are waiting for this new version of the component because of a bugfix, but they don't actually need the new db schema (the reasons don't matter here, let's just assume our software needs backward compatibility to old db schema versions).
So my question is, is there any best practice technology to handle this situation?
If I add the new column to the Customer class like
[Column]
public string Address {get; set;}
I won't be able to use this class for queries against a db with the 1.0 schema (because of the SqlException for the missing column). If I create a new class (derived, via interfaces, or whatever), I have to deal with different types for different schema versions. And of course Table<Customer10> will never be convertible to Table<Customer11>, no matter what the inheritance relation between Customer10 and Customer11 is. So far I have not found a way to encapsulate these problems inside my DataContext derivative in a transparent way:
public class MyDataContext : DataContext
{
public Table<Customer> Customers
{
get
{
// this won't compile, no matter how Customer10 and Customer11 are related
return version > 10
? GetTable<Customer11>()
: GetTable<Customer10>();
}
}
}
I also experimented with inheritance hierarchies and discriminators, but since the version is not a column of Customer, this does not seem to work.
I'd like to find a solution where I can use one type per table for different schemas, and where removed or added columns would take default values if they are not supported by the installed db schema.
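One way to get close to that goal with LINQ to SQL is to drop the attribute-based mapping and use external XML mapping files instead, choosing the file that matches the schema version read from the Configuration table. A sketch under assumptions: the mapping file names and the version check below are hypothetical.

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;

// Customer stays a plain class (no [Column] attributes). The Address
// property simply keeps its default value when the loaded mapping file
// contains no Address column, so no SqlException occurs against 1.0.
class CustomerRepository
{
    public static DataContext Open(string connectionString, Version schemaVersion)
    {
        // Hypothetical file names: one mapping file per supported schema version.
        MappingSource mapping = XmlMappingSource.FromUrl(
            schemaVersion >= new Version(1, 1)
                ? "Customer.1.1.map.xml"
                : "Customer.1.0.map.xml");
        return new DataContext(connectionString, mapping);
    }
}
```

This keeps a single Customer type for all schema versions; the per-version differences live entirely in the XML files, which matches the "one type per table, defaults for unsupported columns" requirement above.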

Entity Framework: use the same model for two tables with the same layout but different table names

I have two tables that have the same layout -
Report Table
ID
ReportCol1
ReportCol2
In another database I have
Reporting Table
ID
ReportCol1
ReportCol2
I want to use a single entity model called Report to load the data from both of these tables.
In my context class I have
public DbSet<Report> Report{ get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Configurations.Add(new ReportMap());
}
In my call to the first database Report table I get the results as expected.
I change the connection string to point to the second database, but I can't change the name of the table in the table mapping.
I don't want to use stored procs for the reason outlined in my comment.
What can I do, short of changing the table names in the database (which is not an option)?
Have you tried the fluent API modelBuilder.Entity<Report>().ToTable("Reporting"); ? You may need to write this so it runs conditionally based on which database you are connecting to. You may need to have your configuration say "database A uses this mapping and connection string" and "database B uses this other mapping and connection string"; then, rather than changing the connection string, you specify the database by some name/key, and your app looks up that name to determine which mapping code to run.
if(dbMappingconfig == DbMapping.A)//some enum you create
{
modelBuilder.Entity<Report>().ToTable("Reporting");
}
If your goal is to be able to pass these entities to other methods like DisplayReport(Report r) so that you don't have to duplicate code, you could have both Reporting and Report classes implement a IReport interface.
EF also supports inheritance hierarchies, so you could have them inherit from the same class, BUT I have a strong feeling that will not work across databases.
If OnModelCreating doesn't rerun, the model is probably already cached. Put modelBuilder.CacheForContextType = false; in there so it doesn't cache it in future; to clear the current cache I think you can just do a Clean+Rebuild. This comes at the price of rebuilding the model every time instead of reusing a cache. What you'd really want is to use the cache up until the connection string changes. I don't know of any way to manually clear the cache, but there might be one. You can manage the model building yourself:
DbModelBuilder builder = new DbModelBuilder();
// Setup configurations
DbModel model = builder.Build(connection);
DbCompiledModel compiledModel = model.Compile();
DbContext context = new DbContext(connection, compiledModel);
But that will introduce additional complexities since you will need to manage the caching yourself.
While searching on this, I came across a question where they are trying to accomplish the same thing and have gone down the same path (see the final section of the question): How to map an Entity framework model to a table name dynamically
Are you able to create the same named view in each database and map to that instead of a variable table name?
I have two copies of tables with different names in my solution and deal with that by having two contexts and two sets of map files (generated text templates).
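Tying the answers above together: the caching concern can be handled by compiling one model per mapping and reusing it, so the model-building cost is paid once per database rather than once per context. A sketch assuming EF6, using the hypothetical per-database enum idea from earlier:

```csharp
using System.Collections.Concurrent;
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class Report
{
    public int ID { get; set; }
    public string ReportCol1 { get; set; }
    public string ReportCol2 { get; set; }
}

public enum DbMapping { A, B }   // hypothetical: one value per target database

public static class ReportContextFactory
{
    // One compiled model per mapping, built lazily and reused afterwards.
    static readonly ConcurrentDictionary<DbMapping, DbCompiledModel> Cache =
        new ConcurrentDictionary<DbMapping, DbCompiledModel>();

    public static DbContext Create(DbMapping mapping, DbConnection connection)
    {
        DbCompiledModel model = Cache.GetOrAdd(mapping, m =>
        {
            var builder = new DbModelBuilder();
            // Same entity class, different physical table per database.
            builder.Entity<Report>().ToTable(m == DbMapping.A ? "Report" : "Reporting");
            return builder.Build(connection).Compile();
        });
        return new DbContext(connection, model, contextOwnsConnection: false);
    }
}
```

This avoids both problems mentioned above: OnModelCreating caching never gets in the way because the factory bypasses it entirely, and nothing is rebuilt on every context creation.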

Need Help With Application Design

So, I'd love some feedback on the best way to design the classes and store the data for the following situation:
I have an interface called ITask that looks like this:
interface ITask
{
    int ID { get; set; }
    string Title { get; set; }
    string Description { get; set; }
}
I would like the ability to create different types of Tasks depending on who is using the application...for example:
public class SoftwareTask: ITask
{
//ITask Implementation
string BuildVersion {get; set;}
bool IsBug {get; set;}
}
public class SalesTask: ITask
{
//ITask Implementation
int AccountID {get; set;}
int SalesPersonID {get; set;}
}
So the way I see it, I can create a Tasks table in the database with columns that match the ITask interface, plus a column that shoves all the properties of the more specific tasks into a single column (or maybe even serializes the whole task object into a single column)
OR
Create a table for each task type to store the properties that are unique to that type.
I really don't like either solution right now. I need to be able to create different types of Tasks ( or any other class) that all share a common core set of properties and methods through a base interface, but have the ability to store their unique properties in a fashion that is easy to search and filter against without having to create a bunch of database tables for each type.
I've started looking into plug-in architectures and the strategy pattern, but I don't see how either would address my problem of storing and accessing the data.
Any help or push in the right direction is greatly appreciated!!!
Your second approach (one table per type) is the canonical way to solve this problem - while it requires a bit more effort to implement it fits better with the relational model of most databases and preserves a consistent and cohesive representation of the data. The approach of using one table per concrete type works well, and is compatible with most ORM libraries (like EntityFramework and NHibernate).
There are, however, a couple of alternative approaches sometimes used when the number of subtypes is very large, or subtypes are created on the fly.
Alternative #1: the key-value extension table. This is a table with one row per additional field of data you wish to store, a foreign key back to the core table (Task), and a column that specifies what kind of field this is. Its structure is typically something like:
TaskExt Table
=================
TaskID : Number (foreign key back to Task)
FieldType : Number or String (this would be AccountID, SalesPersonID, etc)
FieldValue : String (this would be the value of the associated field)
Alternative #2: the type-mapped extension table. In this alternative, you create a table with a bunch of nullable columns of different data types (numbers, strings, date/time, etc) with names like DATA01, DATA02, DATA03, and so on. For each kind of Task, you select a subset of the columns and map them to particular fields. So DATA01 may end up being the BuildVersion for a SoftwareTask and an AccountName for a SalesTask. In this approach, you must manage some metadata somewhere that controls which columns specific fields map to. A type-mapped table will often look something like:
TaskExt Table
=================
TaskID : Number (foreign key back to task)
Data01 : String
Data02 : String
Data03 : String
Data04 : String
Data05 : Number
Data06 : Number
Data07 : Number
Data08 : Number
Data09 : Date
Data10 : Date
Data11 : Date
Data12 : Date
// etc...
The main benefit of option #1 is that you can dynamically add as many different fields as you need, and you can even support a level of backward compatibility. A significant downside, however, is that even simple queries can become challenging because fields of the objects are pivoted into rows in the table. Unpivoting turns out to be an operation that is both complicated and often poorly performing.
The benefit of option #2 is that it's easy to implement and preserves a 1-to-1 correspondence between rows, making queries easy. Unfortunately, there are some downsides as well. The first is that the column names are completely uninformative, and you have to refer to some metadata dictionary to understand which column maps to which field for which type of task. The second is that most databases limit the number of columns on a table to a relatively small number (usually 50 - 300 columns). As a result, you can only have so many numeric, string, datetime, etc columns available. So if your type ends up having more DateTime fields than the table supports, you have to either use string fields to store dates or create multiple extension tables.
Be forewarned, most ORM libraries do not provide built-in support for either of these modeling patterns.
You should probably take a lead from how ORMs deal with this, like TPH/TPC/TPT
Given that ITask is an interface, you should probably go for TPC (Table per Concrete Type). If you make it a base class, TPT and TPH are also options.
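In EF6 code-first terms, the TPC option suggested above looks roughly like the sketch below. Note that EF cannot map interfaces, so ITask has to become an abstract base class; all names here are illustrative:

```csharp
using System.Data.Entity;

public abstract class TaskBase   // ITask turned into a base class for EF's sake
{
    public int ID { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
}

public class SoftwareTask : TaskBase
{
    public string BuildVersion { get; set; }
    public bool IsBug { get; set; }
}

public class SalesTask : TaskBase
{
    public int AccountID { get; set; }
    public int SalesPersonID { get; set; }
}

public class TaskContext : DbContext
{
    public DbSet<TaskBase> Tasks { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // TPC: each concrete type gets its own table,
        // including columns for the inherited properties.
        modelBuilder.Entity<SoftwareTask>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("SoftwareTasks");
        });
        modelBuilder.Entity<SalesTask>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("SalesTasks");
        });
    }
}
```

Queries against context.Tasks then span both tables, while each concrete type's unique fields stay in its own searchable, strongly typed columns.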

Is there a .NET Polymorphic Data Framework

I'm beginning work on a new project that would be much easier if there were some way to make different data models polymorphic. I'm looking at using Entity Framework 4.0 (when it's released), but have been unable to determine whether it will actually work.
Here's the basic scenario. I'm implementing a comment system and would like to be able to connect it to many different types of models. Maybe I want comments on a person's profile, and comments on a webpage. The way I would have done this in the past is to create the relationship between the person table and the comment table separately from the relationship between the webpage table and the comment table. I think this leads to an overly complicated table structure in the database, however.
It would be best if I could just be able to add an interface to the objects I want comments on, and then simplify the table structure in the database to a single relationship.
The problem I'm running into is that I don't seem to know the right terminology in order to find information about how to do this type of thing. Any help anyone can provide would be greatly appreciated.
If you design your "comments table" to be comment-type-agnostic (just the basics, like an id, date & time, and text content), you can then use a single additional table that maps them all.
public interface ICommentable
{
    int CommentTypeCode { get; }
    int Id { get; }
    ...
}
Now that mapper table contains columns:
comment_type_code
target_object_id
comment_id
Your comments all go in one table, with an Id
Your various "target objects" must all have an Id of the same type
Now you can arbitrarily add new "commentable" objects to your system without changing the comments table or the mapper table -- just assign it a new type code and create the table with the requisite Id column.
I accomplish this with LINQ to SQL and partial classes. For each class that I want to implement an interface, I create a non-tool-generated file containing the part of the partial class that declares the interface implementation.
For example:
Generated code:
// this code is generated by a tool blah blah
partial class FooComment {
// all the generated crap
string Author {
// ...
}
// etc
}
The interface:
interface IComment{
string Author{ get; }
// etc
}
My code:
// lovingly hand-written by me
partial class FooComment : IComment {
}
Now, if you want to cast any group of FooComments to IComment, use the Cast linq extension method:
db.FooComments.Cast<IComment>()
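The Cast call works because each partial class now implements the interface, so any code written against IComment accepts either comment type. A self-contained illustration of the pattern (FooComment here is a stand-in for the tool-generated class, and BarComment is a hypothetical second comment type):

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IComment { string Author { get; } }

// Stand-in for the tool-generated half of the partial class.
public partial class FooComment { public string Author { get; set; } }
// Hand-written half: only declares the interface.
public partial class FooComment : IComment { }

public class BarComment : IComment { public string Author { get; set; } }

public static class CommentDemo
{
    // Code written against IComment works for every comment type.
    public static List<string> Authors(IEnumerable<IComment> comments) =>
        comments.Select(c => c.Author).ToList();
}
```

Usage: CommentDemo.Authors(new IComment[] { new FooComment { Author = "alice" }, new BarComment { Author = "bob" } }) returns the authors in order.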

Map unknown amount of columns to Dictionary

I have a legacy system that dynamically augments a table with additional columns when needed. Now I would like to access said table via C#/NHibernate.
There is no way to change the behaviour of the legacy system and I dynamically need to work with the data in the additional columns. Therefore dynamic-component mapping is not an option since I do not know the exact names of the additional columns.
Is there a way to put all unmapped columns into a dictionary (column name as key)? Or if that's not an option put all columns into a dictionary?
Again, I do not know the names of the columns at compile time so this has to be fully dynamic.
Example:
public class History
{
public Guid Id { get; set; }
public DateTime SaveDateTime { get; set; }
public string Description { get; set; }
public IDictionary<string, object> AdditionalProperties { get; set; }
}
So if the table History contains the Columns Id, SaveDateTime, Description, A, B, C and D I would like to have "A", "B", "C" and "D" in the IDictionary. Or if that's too hard to do simply throw all columns in there.
For starters I would also be fine with only using string columns if that helps.
You probably need an ADO.NET query to get this data out. If you use NH, even with a SQL query using SELECT *, you won't get the column names.
You can try using SMO (SQL Server Management Objects, a .NET API for SQL Server) or some other way to find the table definitions. Then you build up the mapping using Fluent NHibernate with a dynamic component. I'm not sure whether you can change the mappings after you have already used the session factory. It's worth a try. Good luck :-)
I guess with the following code you can get your results in a Hashtable:
var hashTable = (Hashtable)Session.CreateSQLQuery("SELECT * FROM MyTable")
.SetResultTransformer(Transformers.AliasToEntityMap)
.UniqueResult();
Obviously all your data will be detached from the session...
I think the best you can do is to find the columns at runtime, create a mapping for these extra columns, and then write the output to an XML file. Once that is done you can add the mapping at runtime...
ISessionFactory sessionFactory = new Configuration()
    .AddFile("myDynamicMapping.hbm.xml")
    .BuildSessionFactory();
How you would use this mapping is a good question, as you would have to create your class dynamically as well; then you are SOL.
good luck.
What's not possible in SQL is not possible in NHibernate.
It's not possible to write an insert query that inserts into unknown columns.
I assume that your program builds a single Configuration object on startup by reading XML files, and then uses the Configuration object to build ISessionFactory objects.
Instead of reading the XML files, building the Configuration object, and calling it a day, however, your program can send a query to the database to figure out any extra columns on this table, and then alter the Configuration, adding columns to the DynamicMapping programmatically, before compiling the Configuration object into an ISessionFactory.
NHibernate does have ways to get the database schema, provided it's supported by the database type/dialect. It is primarily used by the SchemaExport and SchemaUpdate functions.
If you're not scared of getting your hands a bit dirty;
Start by looking at the GenerateSchemaUpdateScript function in the Configuration class:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Cfg/Configuration.cs
In particular, you'd be interested in this class, which is referenced in that method:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Tool/hbm2ddl/DatabaseMetadata.cs
The DatabaseMetadata object will allow you to traverse the metadata for all tables and fields in the database, allowing you to figure out which fields are not mapped. If you look at the Configuration class again, it holds a list of its mappings in the TableMappings collection. Taking hints from the GenerateSchemaUpdateScript function, you can compare a Table object from TableMappings against any object implementing ITableMetadata returned by the DatabaseMetadata.GetTableMetadata function to figure out which columns are unmapped.
Use this information to then rebuild the mapping file used by the "dynamic" class at runtime placing all the dynamic/runtime fields in the "AdditionalProperties" dynamic-component section of the mapping file. The mapping file will need to be included as an external file and not an embedded resource to do this, but this is possible with the Configuration AddFile function. After it is rebuilt, reload the configuration, and finally rebuild the session factory.
At this time it looks like Firebird, MsSQL Compact, MsSQL, MySQL, Oracle, SQLite, and SybaseAnywhere have implementations for ITableMetadata, so it's only possible with one of these(unless you make your own implementation).
