Best practice: objects from SQL - C#

I tried searching but couldn't find a proper answer.
I am creating an application that contains a lot of different objects. The data for these objects is saved in an MSSQL database. What is the best way to get data out?
For simplicity I will use two objects here:
ItemObject
UserObject
Both of them have a constructor that gets data from the database:
public ItemObject(int ID) //same one for UserObject
{
//Code to get the data from the database for this particular item
}
ItemObject has a property called CreatedBy which is a UserObject.
Now the question is what is the best way to create the ItemObject?
I have two possible solutions:
Solution #1:
public ItemObject(int ID)
{
DataTable dt = dal.GetDataTable("SELECT TOP 1 * FROM Items WHERE ID = #ID")
this.CreatedBy = new UserObject((int)dt.rows[0]["UserID"])
}
Solution #2:
public ItemObject(int ID)
{
DataTable dt = dal.GetDataTable("SELECT TOP 1 * FROM Items INNER JOIN Users ON Items.CreatedBy = Users.ID WHERE Items.ID = #ID")
this.CreatedBy = new UserObject((int)dt.rows[0]["UserID"], dt.rows[0]["Username"].ToString())
}
public UserObject(int ID, string Username)
{
this.ID = ID;
this.Username = Username;
}
In solution #1 I ask for data twice, but in solution #2 I ask for data only once, even though solution #1 is much "cleaner" and easier to read.
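As an aside, here is a minimal sketch of how the @ID parameter would actually be passed with plain ADO.NET (assuming dal.GetDataTable wraps something like this; connectionString is a stand-in):

public DataTable GetDataTable(string sql, int id)
{
    //Parameterized query: the value travels separately from the SQL text,
    //so there is no string concatenation and no injection risk.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add("@ID", SqlDbType.Int).Value = id;
        var dt = new DataTable();
        new SqlDataAdapter(cmd).Fill(dt); //Fill opens and closes the connection
        return dt;
    }
}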
Edited after Steve's correction.

I would go with solution two. From my point of view solution #1 is not acceptable, even though it is "cleaner".
And I don't think there is a single best practice for reading data into objects. I like Entity Framework a lot for this purpose.
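For illustration, a rough sketch of what loading the same object graph could look like with Entity Framework (all names here are made up for the example; this is not the asker's schema):

using System.Data.Entity; // EF 4.1+ DbContext API
using System.Linq;

public class User
{
    public int ID { get; set; }
    public string Username { get; set; }
}

public class Item
{
    public int ID { get; set; }
    public virtual User CreatedBy { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Item> Items { get; set; }
    public DbSet<User> Users { get; set; }
}

public static class ItemLoader
{
    //One round trip: EF generates the JOIN and materializes both objects.
    public static Item GetItem(int id)
    {
        using (var ctx = new AppContext())
        {
            return ctx.Items
                      .Include(i => i.CreatedBy) //eager-load the creating user
                      .SingleOrDefault(i => i.ID == id);
        }
    }
}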

Related

Generalized DTO population method with different query select lists

For reasons that I do not quite understand, I have chosen not to use an ORM Framework and have gone with a generalized ADO.NET data access layer. I initially created a single database class from which all my controllers had access. As anyone but myself could have predicted, this access object has become a monstrosity.
In an attempt to refactor my data layer, I have created a 'database adapter' class as a DI injected service and have created a 'service layer' to utilize it. So each controller now has a 'domain service' that will use the database adapter to query the database and return a generic data table. The service will then populate the result of the queries and return the domain objects back to the controller where it can assemble the view models.
I am running into an issue where I cannot seem to abstract the code designed to map the DataSets returned from the database access layer because each query may select different fields. For example, a simple reference data service:
public class ReferenceDataService : IReferenceDataService
{
private readonly IDatabaseAdapter _dbAdapter;
public ReferenceDataService(IDatabaseAdapter dbAdapter)
{
_dbAdapter = dbAdapter;
}
public IEnumerable<ReferenceData> GetReferenceData(string table)
{
List<ReferenceData> rdList = new List<ReferenceData>();
StringBuilder sb = new StringBuilder();
sb.Append("SELECT [CODE], [LABEL] FROM [dbo].");
sb.Append(table);
sb.Append(" WHERE END_DATETIME > GETDATE()");
DataSet ds = _dbAdapter.ExecuteDataSet(sb.ToString(), null);
foreach (DataRow row in ds.Tables[0].Rows)
{
rdList.Add(PopulateRecord(row));
}
return rdList;
}
private ReferenceData PopulateRecord(DataRow row)
{
return new ReferenceData
{
ReferenceId = (int)row["REFERENCE_ID"],
Code = (string)row["CODE"],
Label = (string)row["LABEL"],
Description = (string)row["DESCRIPTION"],
BeginDatetime = (DateTime)row["BEGIN_DATETIME"],
EndDatetime = (DateTime)row["END_DATETIME"],
UpdatedBy = (string)row["UPDATED_BY"],
UpdatedOn = (DateTime)row["UPDATED_ON"],
CreatedBy = (string)row["CREATED_BY"],
CreatedOn = (DateTime)row["CREATED_ON"]
};
}
}
In this example, an exception is thrown from the populate method because, as you can see, I am only selecting CODE and LABEL for this particular query. I'd like to avoid a custom mapping for every method, but I also do not want to needlessly return ALL the data from each table row to the controller. I'd like to keep the populate method generic, so that any query against that table will be mapped appropriately.
I realize I'm basically almost rolling my own ORM, but I'd like to use a service pattern without it because at this point I am way too invested.
After some digging around, it turns out there was a very obvious and straightforward solution that I had been missing: a DataRow instance can check its parent table's columns for existence. By wrapping each assignment from the table row in one of these checks, the population method no longer cares what was actually selected into the DataTable and can populate an object regardless of how much data the query returned.
So in my example, if I want to keep a generic population method for ReferenceData but use a query that only returns the CODE and LABEL columns, the following change keeps the population of the returned business object agnostic and error free:
private ReferenceData PopulateRecord(DataRow row)
{
return new ReferenceData
{
ReferenceId = row.Table.Columns.Contains("REFERENCE_ID") ? (int)row["REFERENCE_ID"] : default(int),
Code = row.Table.Columns.Contains("CODE") ? (string)row["CODE"] : default(string),
Label = row.Table.Columns.Contains("LABEL") ? (string)row["LABEL"] : default(string),
Description = row.Table.Columns.Contains("DESCRIPTION") ? (string)row["DESCRIPTION"] : default(string),
BeginDatetime = row.Table.Columns.Contains("BEGIN_DATETIME") ? (DateTime)row["BEGIN_DATETIME"] : default(DateTime),
EndDatetime = row.Table.Columns.Contains("END_DATETIME") ? (DateTime)row["END_DATETIME"] : default(DateTime),
UpdatedBy = row.Table.Columns.Contains("UPDATED_BY") ? (string)row["UPDATED_BY"] : default(string),
UpdatedOn = row.Table.Columns.Contains("UPDATED_ON") ? (DateTime)row["UPDATED_ON"] : default(DateTime),
CreatedBy = row.Table.Columns.Contains("CREATED_BY") ? (string)row["CREATED_BY"] : default(string),
CreatedOn = row.Table.Columns.Contains("CREATED_ON") ? (DateTime)row["CREATED_ON"] : default(DateTime)
};
}
This would allow me to use PopulateRecord on a select statement that only returned CODE and LABEL (as I would want to do if I was populating a SelectItemList for a dropdown for example).
I do not know what kind of performance hit this may or may not incur, so that is something to consider. But it allows for the flexibility I was looking for, and I hope this post will help someone else looking for the same type of solution.
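As a possible refinement (a sketch I have not benchmarked against the code above), the repeated Contains/cast pattern can be pulled into a small extension method so each mapper stays a one-liner per column:

using System.Data;

public static class DataRowExtensions
{
    //Returns the column value cast to T, or default(T) when the column was
    //not part of the SELECT list or the value is DBNull.
    public static T GetValueOrDefault<T>(this DataRow row, string column)
    {
        if (!row.Table.Columns.Contains(column) || row.IsNull(column))
            return default(T);
        return (T)row[column];
    }
}

With that in place, each assignment in PopulateRecord becomes e.g. Code = row.GetValueOrDefault<string>("CODE").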
If there are better ways to approach this please let me know. Thanks!

How to populate collection of user detail more efficiently

I have a Module class, a User, a UserModule and a UserModuleLevel class.
_module_objects is a static ObservableCollection of Modules and gets created when the program starts; there are about 10 of them, e.g. User Management, Customer Services, etc.
User, as you can probably guess, holds user details: ID, Name, etc., populated from a db query.
With UserModules, I do not keep the module information in the db, just the module security levels. These are kept in the db as: User_ID, Module_ID, ModuleLevel, ModuleLevelAccess.
What I'm trying to do is populate an ObservableCollection of users in the fastest manner. I have about 120,000 users, and usually these users only have access to 2 or 3 of the 10 modules.
Below is what I have tried so far; however, the piece marked with asterisks is the bottleneck, because it goes through every module of every user.
Hoping for some advice to speed things up.
public class UserRepository
{
ObservableCollection<User> m_users = new ObservableCollection<User>();
public UserRepository(){}
public void LoadUsers()
{
var users = SelectUsers();
foreach (var u in users)
{
m_users.Add(u);
}
}
public IEnumerable<User> SelectUsers()
{
var userModulesLookup = GetUserModules();
var userModuleLevelsLookup = GetUserModuleLevels().ToLookup(x => Tuple.Create(x.User_ID, x.Module_ID));
clsDAL.SQLDBAccess db = new clsDAL.SQLDBAccess("DB_USERS");
db.setCommandText("SELECT * FROM USERS");
using (var reader = db.ExecuteReader())
{
while (reader.Read())
{
var user = new User();
var userId = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
user.User_ID = userId;
user.Username = NullSafeGetter.GetValueOrDefault<string>(reader, "USERNAME");
user.Name = NullSafeGetter.GetValueOrDefault<string>(reader, "NAME");
user.Job_Title = NullSafeGetter.GetValueOrDefault<string>(reader, "JOB_TITLE");
user.Department = NullSafeGetter.GetValueOrDefault<string>(reader, "DEPARTMENT");
user.Company = NullSafeGetter.GetValueOrDefault<string>(reader, "COMPANY");
user.Phone_Office = NullSafeGetter.GetValueOrDefault<string>(reader, "PHONE_OFFICE");
user.Phone_Mobile = NullSafeGetter.GetValueOrDefault<string>(reader, "PHONE_MOBILE");
user.Email = NullSafeGetter.GetValueOrDefault<string>(reader, "EMAIL");
user.UserModules = new ObservableCollection<UserModule>(userModulesLookup);
//**************** BOTTLENECK **********************************
foreach (var mod in user.UserModules)
{
mod.UserModuleLevels = new ObservableCollection<UserModuleLevel>(userModuleLevelsLookup[Tuple.Create(userId, mod.Module.Module_ID)]);
}
//**************************************************************
yield return user;
}
}
}
private static IEnumerable<Users.UserModule> GetUserModules()
{
foreach (Module m in ModuleKey._module_objects)
{
//Set a reference in the UserModule to the original static module.
var user_module = new Users.UserModule(m);
yield return user_module;
}
}
private static IEnumerable<Users.UserModuleLevel> GetUserModuleLevels()
{
clsDAL.SQLDBAccess db_user_module_levels = new clsDAL.SQLDBAccess("DB_USERS");
db_user_module_levels.setCommandText(@"SELECT * FROM USER_MODULE_SECURITY");
using (var reader = db_user_module_levels.ExecuteReader())
{
while (reader.Read())
{
int u_id = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
int m_id = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_ID");
int ml_id = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_LEVEL_ID");
int mla = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_LEVEL_ACCESS");
yield return new Users.UserModuleLevel(u_id, m_id, ml_id, mla);
}
}
}
}
In the end I'll put the users into a DataGrid with module security displayed; buttons with green show there is some type of access to that module, and clicking one will bring up the actual security settings.
For performance gains you can do a few things:
Change your data access code to perform JOINs in SQL so that your data comes back as a single result set (a rough sketch follows after these points).
SQL tends to be a fair bit faster at returning a result set of relational data than C# is at gluing the data together after the fact. The database is optimised to do just that, and you should take advantage of it.
You should probably consider paging the results - any user who says they need all 120,000 results at once should be slapped upside the head with a large trout. Paging the results will limit the amount of processing that you need to do in the application.
Doing the above can be quite daunting, as you would need to modify your application to include paging - but 3rd party controls such as grids often have some paging mechanism built in, and these days most ORM software has some sort of paging support which translates your C# code to the correct dialect for your chosen RDBMS.
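To make the first point concrete, here is a hedged sketch of the single-result-set approach, reusing the clsDAL and NullSafeGetter helpers from the question (column names are assumed from the code above, and only a few of the user columns are shown):

public IEnumerable<User> SelectUsersSingleTrip()
{
    var db = new clsDAL.SQLDBAccess("DB_USERS");
    db.setCommandText(@"SELECT u.USER_ID, u.USERNAME, u.NAME,
                               s.MODULE_ID, s.MODULE_LEVEL_ID, s.MODULE_LEVEL_ACCESS
                        FROM USERS u
                        LEFT JOIN USER_MODULE_SECURITY s ON s.USER_ID = u.USER_ID
                        ORDER BY u.USER_ID");
    using (var reader = db.ExecuteReader())
    {
        User current = null;
        while (reader.Read())
        {
            int userId = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
            //The rows arrive grouped by user, so start a new User whenever the id changes.
            if (current == null || current.User_ID != userId)
            {
                if (current != null) yield return current;
                current = new User
                {
                    User_ID = userId,
                    Username = NullSafeGetter.GetValueOrDefault<string>(reader, "USERNAME"),
                    Name = NullSafeGetter.GetValueOrDefault<string>(reader, "NAME")
                    //remaining columns as in SelectUsers above
                };
            }
            //Attach this row's module level (if the LEFT JOIN matched) to the
            //appropriate UserModule on 'current' here.
        }
        if (current != null) yield return current;
    }
}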
A good example of such an ORM (which I've been working with a bit lately) is ServiceStack OrmLite.
I believe it to be free as long as you are using the legacy V3 version (which is pretty darn good: https://github.com/ServiceStackV3/ServiceStackV3), and I've seen some forks of it on GitHub which are currently maintained (http://www.nservicekit.com/).
There is a small learning curve, but nothing the examples/docs can't tell you
Here's an extension method I'm using to page my queries in my service layer:
public static SqlExpressionVisitor<T> PageByRequest<T>(this SqlExpressionVisitor<T> expr, PagedRequest request)
{
return expr.Limit((request.PageNumber - 1) * request.PageSize, request.PageSize);
}
The request contains the page number and page size (from my web app), and the Limit extension method in OrmLite does the rest. I should probably add that the <T> generic parameter is the object type that OrmLite will map to after it has queried.
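PagedRequest itself is nothing special; roughly this shape (sketched here, since only its two properties are described above):

public class PagedRequest
{
    public int PageNumber { get; set; } //1-based page index from the web app
    public int PageSize { get; set; }   //rows per page
}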
Here's an example of such a mapped type (it's just a POCO with some annotations):
[Alias("Customers")]
public class Customer : IHasId<string>
{
[Alias("AccountCode")]
public string Id { get; set; }
public string CustomerName { get; set; }
// ... a load of other fields
}
The method is translated to T-SQL and results in the following query against the DB (for this example I selected page 4 on my customer list with a page size of 10):
SELECT <A big list of Fields> FROM
(SELECT ROW_NUMBER() OVER (ORDER BY AccountCode) As RowNum, * FROM "Customers")
AS RowConstrainedResult
WHERE RowNum > 40 AND RowNum <= 50
This keeps the query time down to way less than a second and ensures I don't need to write a shedload of vendor-specific SQL.
It really depends on how much application you have already got - if you are too far in, it may be a nightmare to refactor for an ORM, but it's worth considering for other projects.

LINQ to SharePoint 2010 getting error "All new entities within an object graph must be added/attached before changes are submitted."

I've been having a problem for some time, and I've exhausted all means of figuring this out for myself.
I have 2 lists in an MS SharePoint 2010 environment that hold personal physician data for a medical group... nothing special, just mainly text fields and a few lookup choice fields.
I am trying to write a program that will migrate the data over from List A to List B. I am using LINQ to SharePoint to accomplish this. Everything compiles just fine, but when it runs and hits the SubmitChanges() method, I get a runtime error that states:
"All new entities within an object graph must be added/attached before changes are submitted."
This issue must be outside my realm of C# knowledge, because I simply cannot find the solution for it. The problem DEFINITELY stems from the fact that some of the columns are of type "Lookup": when I create a new "Physician" entity in my LINQ query, if I comment out the fields that deal with the lookup columns, everything runs perfectly.
With the lookup columns included, if I debug and hit breakpoints before the SubmitChanges() method, I can look at the new "Physician" entities created from the old list, and the data from the lookup columns looks good; it is in there the way I want it to be. It just flakes out whenever it tries to actually update the new list with the new entities.
I have tried several methods of working around this error, all to no avail. In particular, I tried creating a brand new EntityList and calling the Attach() method after each new "Physician" entity is created, but that just sends me around in circles, chasing other errors such as "ID cannot be null" and "Cannot insert entities that have been deleted".
I am no farther now than when I first got this error and any help that anyone can offer would certainly be appreciated.
Here is my code:
using (ProviderDataContext ctx = new ProviderDataContext("http://dev"))
{
SPSite sitecollection = new SPSite("http://dev");
SPWeb web = sitecollection.OpenWeb();
SPList theOldList = web.Lists.TryGetList("OldList_Physicians");
//Create new Physician entities.
foreach(SPListItem l in theOldList.Items)
{
PhysiciansItem p = new PhysiciansItem()
{
FirstName = (String)l["First Name"],
Title = (String)l["Last Name"],
MiddleInitial = (String)l["Middle Init"],
ProviderNumber = Convert.ToInt32(l["Provider No"]),
Gender = ConvertGender(l),
UndergraduateSchool =(String)l["UG_School"],
MedicalSchool = (String)l["Med_School"],
Residency = (String)l["Residency"],
Fellowship = (String)l["Fellowship"],
Internship = (String)l["Internship"],
PhysicianType = ConvertToPhysiciantype(l),
Specialty = ConvertSpecialties(l),
InsurancesAccepted = ConvertInsurance(l),
};
ctx.Physicians.InsertOnSubmit(p);
}
ctx.SubmitChanges(); //this is where it flakes out
}
}
//These are conversion functions that I wrote to convert the data from the old list to the new lookup columns.
private Gender ConvertGender(SPListItem l)
{
//"Sex" in the old list is "M" or "F".
return (String)l["Sex"] == "M" ? Gender.M : Gender.F;
}
//Process and convert the 'Physician Type', namely the distinction between MD (Medical Doctor) and
//DO (Doctor of Osteopathic Medicine). State regulations require this information to be attached
//to a physician's profile.
private ProviderTypesItem ConvertToPhysiciantype(SPListItem l)
{
ProviderTypesItem p = new ProviderTypesItem();
p.Title = (String)l["Provider_Title:Title"];
p.Intials = (String)l["Provider_Title"];
return p;
}
//Process and convert current Specialty and SubSpecialty data into the single multi-choice lookup column
private EntitySet<Item> ConvertSpecialties(SPListItem l)
{
EntitySet<Item> theEntityList = new EntitySet<Item>();
Item i = new Item();
i.Title = (String)l["Provider Specialty"];
theEntityList.Add(i);
if ((String)l["Provider SubSpecialty"] != null)
{
Item theSubSpecialty = new Item();
theSubSpecialty.Title = (String)l["Provider SubSpecialty"];
theEntityList.Add(theSubSpecialty);
}
return theEntityList;
}
//Process and add insurance accepted.
//Note this is a conversion from 3 boolean columns in the SP Environment to a multi-select enabled checkbox
//list.
private EntitySet<Item> ConvertInsurance(SPListItem l)
{
EntitySet<Item> theEntityList = new EntitySet<Item>();
if ((bool)l["TennCare"] == true)
{
Item TenncareItem = new Item();
TenncareItem.Title = "TennCare";
theEntityList.Add(TenncareItem);
}
if ((bool)l["Medicare"] == true)
{
Item MedicareItem = new Item();
MedicareItem.Title = "Medicare";
theEntityList.Add(MedicareItem);
}
if ((bool)l["Commercial"] == true)
{
Item CommercialItem = new Item();
CommercialItem.Title = "Commercial";
theEntityList.Add(CommercialItem);
}
return theEntityList;
}
}
So this may not be the answer you're looking for, but it's what has worked for me in the past. I've found updating lookup fields with LINQ to SharePoint to be quite frustrating: it frequently doesn't work, or doesn't work efficiently (forcing me to query an item by ID just to set the lookup value).
You can set up the entity so that it has an int property for the lookup id (for each lookup field) and a string property for the lookup value. If, when you generate the entities using SPMetal, you don't generate the list that is being looked up, it will do this on its own. What I like to do is (using your entity as an example):
Generate the entity for just that one list (Physicians) in some temporary folder
Pull out the properties for lookup id & value (there will also be private backing fields that need to come along for the ride too) for each of the lookups (or the ones that I'm interested in)
Create a partial class file for Physicians in my actual project, so that regenerating the entire SPMetal file normally (without restricting it to just that one list) doesn't overwrite my changes
Paste the lookup id & value properties into this partial Physicians class.
Now you will have 3 properties for each lookup field. For example, for PhysicianType there will be:
PhysicianType, which is the one that is currently there. This is great when querying data, as you can perform joins and such very easily.
PhysicianTypeId, which can occasionally be useful for queries if you only need the ID, as it makes them a bit simpler; but mostly I use it whenever setting the value. To set a lookup field you only need to set the ID. This is easy, and it has a good track record of actually working (correctly) in my experience.
PhysicianTypeValue, which can be useful when performing queries if you just need the lookup value as a string (meaning it will be the raw value, rather than something already parsed, if it's a multivalued field, a user field, etc.). Sometimes I'd rather parse it myself, or just see what the underlying value is during development. Even if you don't use it and stick with the first property, I often bring it along for the ride since I'm already doing most of the work to bring the PhysicianTypeId field over.
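To make step 4 concrete, the hand-copied properties end up looking roughly like this (a sketch only; the exact attribute arguments and backing-field names depend on what SPMetal actually generated for your list):

using Microsoft.SharePoint.Linq;

public partial class PhysiciansItem
{
    private System.Nullable<int> _physicianTypeId;
    private string _physicianTypeValue;

    //Setting this ID is all that is needed to write the lookup on save.
    [Column(Name = "PhysicianType", Storage = "_physicianTypeId",
        FieldType = "Lookup", IsLookupId = true)]
    public System.Nullable<int> PhysicianTypeId
    {
        get { return _physicianTypeId; }
        set { _physicianTypeId = value; }
    }

    //Raw lookup value as a string; handy for reads and debugging.
    [Column(Name = "PhysicianType", Storage = "_physicianTypeValue",
        FieldType = "Lookup", IsLookupValue = true)]
    public string PhysicianTypeValue
    {
        get { return _physicianTypeValue; }
        set { _physicianTypeValue = value; }
    }
}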
It seems a bit hacky and contrary to the general design of LINQ to SharePoint. I agree, but it has the advantage of actually working, and it isn't all that hard once you get the rhythm of it down and learn exactly what needs to be copied over to move the properties from one file to another.

Optimize query on unknown database

A third party application creates one database per project. All the databases have the same tables and structure, and new projects may be added at any time, so I can't use a fixed EF schema.
What I do now is:
private IEnumerable<Respondent> getListRespondentWithStatuts(string db)
{
return query("select * from " + db + ".dbo.respondent");
}
private List<Respondent> query(string sqlQuery)
{
using (var sqlConx = new SqlConnection(Settings.Default.ConnectionString))
{
sqlConx.Open();
var cmd = new SqlCommand(sqlQuery, sqlConx);
return transformReaderIntoRespondentList(cmd.ExecuteReader());
}
}
private List<Respondent> transformReaderIntoRespondentList(SqlDataReader sqlDataReader)
{
var listeDesRépondants = new List<Respondent>();
while (sqlDataReader.Read())
{
var respondent = new Respondent
{
CodeRépondant = (string)sqlDataReader["ResRespondent"],
IsActive = (bool?)sqlDataReader["ResActive"],
CodeRésultat = (string)sqlDataReader["ResCodeResult"],
Téléphone = (string)sqlDataReader["Resphone"],
IsUnContactFinal = (bool?)sqlDataReader["ResCompleted"]
};
listeDesRépondants.Add(respondent);
}
return listeDesRépondants;
}
This works fine, but it is deadly slow (20,000 records per minute). Do you have any hints on what strategy would be faster? For info, the really slow part is the transformReaderIntoRespondentList method.
Thanks!!
Generally speaking, anything SELECT * FROM is bad practice, and here it could also be forcing you to pull back more data than is actually required. The transform operates on only a few columns; are more columns than required being returned? Consider replacing it with:
private IEnumerable<Respondent> getListRespondentWithStatuts(string db)
{
return query("select ResRespondent, ResActive, ResCodeResult, Resphone, ResCompleted from " + db + ".dbo.respondent");
}
Also, guard against SQL injection attacks; concatenating strings into SQL queries is very dangerous.
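For example (a sketch; the set of valid names below is a stand-in for however you track project databases): an identifier like a database name cannot be passed as a SqlParameter, so validate it before splicing it into the query.

//Identifiers can't be parameterized, so whitelist them instead.
private static readonly HashSet<string> knownProjectDbs =
    new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "ProjectA", "ProjectB" };

private static string validateDbName(string db)
{
    if (!knownProjectDbs.Contains(db))
        throw new ArgumentException("Unknown project database: " + db);
    return db; //safe to concatenate once validated
}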
When pulling data from a DataReader, I find that ordinal (non-named) lookups work best:
var respondent = new Respondent
{
CodeRépondant = sqlDataReader.GetString(0),
IsActive = sqlDataReader.IsDBNull(1) ? (Boolean?)null : sqlDataReader.GetBoolean(1),
CodeRésultat = sqlDataReader.GetString(2),
Téléphone = sqlDataReader.GetString(3),
IsUnContactFinal = sqlDataReader.IsDBNull(4) ? (Boolean?)null : sqlDataReader.GetBoolean(4)
};
I have not explicitly tested the performance difference in a long while, but it used to make a notable difference: the ordinal getters do not have to do a named lookup, and they also avoid boxing/unboxing the values.
Other than that, without more info it is hard to say... do you need all 20,000 records?
UPDATE
I ran a simple local test case with 300,000 records and reduced the time to load all the data by almost 50%. I imagine the results will vary depending on the type of data being retrieved, but it still makes a difference in overall execution time. For what it's worth, in my environment we are talking about a drop from 650ms to just over 300ms.
NOTE
If respondent is a view, what is likely "really slow" is the database building up the result set; although the data reader will start processing rows as soon as records are available, the ultimate bottleneck will be the database itself and/or network latency. Beyond the optimizations above, there is not much more you can do in your code unless you can index the view/table to optimize the query and/or reduce the information required.

What is the correct way of using Entity Framework?

I have a DB like this that I generated from EF:
Now I'd like to add a "fielduserinput" entity so I write the following code:
public bool AddValueToField(string field, string value, string userId)
{
//adds a value to the db
var context = new DBonlyFieldsContainer();
var fieldSet = (from fields in context.fieldSet
where fields.fieldName.Equals(field)
select fields).SingleOrDefault();
var userSet = (from users in context.users
where users.id.Equals(userId)
select users).SingleOrDefault();
var inputField = new fielduserinput { userInput = value, field = fieldSet, user = userSet };
return false;
}
Obviously it's not finished but I think it conveys what I'm doing.
Is this really the right way of doing this? My goal is to add a row to fielduserinput that contains the value and references to user and field. It seems a bit tedious to do it this way. I'm imagining something like:
public bool AddValueToField(string userId, string value, string fieldId)
{
var context = new db();
var newField = { field.fieldId = idField, userInput = value, user.id = userId }
//Add and save changes
}
For older versions of EF, I think you're doing more or less what needs to be done. It's one of the many reasons I didn't feel EF was ready until recently. I'm going to lay out the scenario we use, to give you another option.
We use the code-first approach in the EF 4 CTP. If this change is important enough, read on, wait for other answers (because the Flying Spaghetti Monster knows I could be wrong) and then decide if you want to upgrade. Keep in mind it's a CTP, not an RC, so considerable changes could still be coming. But if you're starting to write a new application, I highly recommend reading up on it before getting too far.
With the code-first approach, it is possible to create models that contain a property for a reference to another model and a property for the id of the other model (User & UserId). When configured correctly, setting a value for either the reference or the id will set the id correctly in the database.
Take the following class ...
public class FieldUserInput{
public int UserId {get;set;}
public int FieldId {get;set;}
public virtual User User {get;set;}
public virtual Field Field {get;set;}
}
... and configuration
public class FieldUserInputConfiguration{
public FieldUserInputConfiguration(){
MapSingleType(fli => new {
userid = fli.UserId,
fieldid = fli.FieldId
});
HasRequired(fli => fli.User).HasConstraint((fli, u)=>fli.UserId == u.Id);
HasRequired(fli => fli.Field).HasConstraint((fli, f)=>fli.FieldId == f.Id);
}
}
You can write the code...
public void CreateField(User user, int fieldId){
var context = new MyContext();
var fieldUserInput = new FieldUserInput{ User = user, FieldId = fieldId };
context.FieldUserInputs.Add(fieldUserInput);
context.SaveChanges();
}
... or vice versa with the properties and everything will work out fine in the database. Here's a great post on full configuration of EF.
Another point to remember is that this level of configuration is not necessary. Code first can be used without any configuration at all if you stick to the conventions described in the first set of posts referenced. It doesn't create the prettiest names in the database, but it works.
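For instance, a minimal convention-only sketch (assuming a released code-first version, where a property named UserId is picked up by convention as the foreign key for the User navigation; the CTP conventions may differ slightly):

//User and Field are as defined earlier in this thread.
public class FieldUserInput
{
    public int Id { get; set; }
    public int UserId { get; set; }   //FK by convention for User
    public int FieldId { get; set; }  //FK by convention for Field
    public virtual User User { get; set; }
    public virtual Field Field { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<FieldUserInput> FieldUserInputs { get; set; }
    public DbSet<User> Users { get; set; }
    public DbSet<Field> Fields { get; set; }
}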
Not a great answer, but figured I'd share.
