I'm building this app at night after work and have been struggling with this design problem for a week or two now.
I'm building a program that has 44 different types of entries and requires the ability to create a custom type.
Because users might change the fields in a particular type of entry and/or define their own, my first approach of generating an Entity class for each type of entry doesn't seem workable. If users change any of the fields or their version of the schema (subject to validation, of course) then my classes wouldn't really reflect that.
Even if I did not allow users to change fields, I want to make sure that data schema changes do not create problems for existing data.
In order to build a schema capable of all of this, I have done the following:
dtype
    id
    datatype

field
    id
    fieldName
    fieldDataType (linked by foreign key to dtype)

dataStore
    id
    dataText
    dataString
    dataDate
    dataDouble
    dataInt
    fieldID (linked by foreign key to field)
    entryID (linked by foreign key to the id field of entries)

types
    id
    typeName
    fields

entries
    id
    typeid (linked by foreign key to the id of types)
Well, the schema is very flexible (a single "Title" value for an entry, for example, becomes one dataStore row pointing at the "Title" field definition), but it is difficult to work with in ASP.NET MVC.
My second crack at it involved making a class with typed properties to store whichever datatype the entry happened to be.
Custom class for the domain level:
public class entry
{
    public List<dataBlob> dataList = new List<dataBlob>();

    public entry(int id)
    {
        kleraDataContext s = new kleraDataContext();
        var dataSet = s.dataStores.Where(c => c.entryID == id);
        foreach (dataStore y in dataSet)
        {
            dataBlob tempd = new dataBlob();
            // Look up the field definition to get its name and data type
            var temp = s.fields.Where(c => c.id == y.fieldID).First();
            tempd.fieldname = temp.fieldName;
            tempd.dataType = temp.dtype.dataType;
            // Copy the value out of whichever physical column matches the type
            // (the Date/Double/Int cases assume those dtype names and non-nullable columns)
            switch (tempd.dataType)
            {
                case "String":
                    tempd.dString = y.dataString;
                    break;
                case "Text":
                    tempd.dString = y.dataText;
                    break;
                case "Date":
                    tempd.dDate = y.dataDate;
                    break;
                case "Double":
                    tempd.dDouble = y.dataDouble;
                    break;
                case "Int":
                    tempd.dInt = y.dataInt;
                    break;
                default:
                    // Unknown type: nothing to copy
                    break;
            }
            this.dataList.Add(tempd);
        }
    }
}
public class dataBlob
{
    private string _dString;
    private DateTime _dDate;
    private int _dInt;
    private double _dDouble;
    private object _data;
    private string _fieldname;
    private string _dataType;

    public string dataType
    {
        get { return _dataType; }
        set { _dataType = value; }
    }

    // Read-only, type-erased view of whichever typed property was last set
    public object data
    {
        get { return _data; }
    }

    public string dString
    {
        get { return _dString; }
        set { _dString = value; _data = value; }
    }

    public string fieldname
    {
        get { return _fieldname; }
        set { _fieldname = value; }
    }

    public DateTime dDate
    {
        get { return _dDate; }
        set { _dDate = value; _data = value; }
    }

    public double dDouble
    {
        get { return _dDouble; }
        set { _dDouble = value; _data = value; }
    }

    public int dInt
    {
        get { return _dInt; }
        set { _dInt = value; _data = value; }
    }
}
Note several problems with this:
1. I'm having trouble getting a generic enough property to store the data regardless of which physical column it lives in. Ideally, data's accessor would just retrieve whatever the datatype happened to be.
2. I still don't have a good way to provide ASP.NET MVC's views with a coherent model, so that the presentation code does not have to do parsing. Ideally, the view would just get an object with a list of fields and their corresponding data.
3. Related to #2, I can't figure out an appropriate way of persisting changes. Writing a query and having it return the fields to the view could be done, but because each field is not a strongly typed accessor, I'm not sure how to persist changes from the view back to the model. Naively, I've thought of inserting a key in a hidden span and using a Dictionary object in the controller to map edits/creation; a rough sketch of what I have in mind follows.
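Here is that rough sketch (the action name, input naming, and helper logic are just illustrative):

// DefaultModelBinder can bind inputs named values[0].Key / values[0].Value
// into a Dictionary<string, string>
[HttpPost]
public ActionResult Save(int entryId, Dictionary<string, string> values)
{
    foreach (var pair in values)
    {
        // pair.Key = field name, pair.Value = posted value;
        // look up the field row here and write the matching dataStore column
    }
    return RedirectToAction("Index");
}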
Thoughts?
Ron
While I am not exactly sure of your ultimate goal, I may have an option for you. You need a highly dynamic "entity" that allows your users to create their own data structures. Statically typed languages like C# do not lend themselves well to such a thing, and even with a dynamic language I think you would run into some difficulties. However, XML is an excellent way to represent dynamic data structures in an ad-hoc, runtime-creatable way.
If you are using SQL Server, I recommend you create a simpler type, as depicted below, and store it in a table that uses the 'xml' data type for one of the columns:
public class DynamicEntity
{
    public int ID { get; set; }
    public string TypeName { get; set; }
    public XDocument DynamicContent { get; set; }
}
The above entity could be stored in the following table:
CREATE TABLE DynamicEntity
(
    ID int IDENTITY(1,1) NOT NULL,
    NAME varchar(50) NOT NULL,
    DynamicContent xml NULL
)
Given SQL Server's xml capabilities, you will still be able to query the data in that xml column. Not only that, if you want your users' custom structures to be validated against a schema, you can also put that schema in the database and 'type' your xml column against it. Using an XML column in SQL Server does come with some caveats, but it might be a simple solution to your otherwise complicated problem.
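For illustration, here is a minimal sketch of storing and querying such an entity with plain ADO.NET; the connection string, the fields/field element layout, and the sample values are all made up for the example:

using System;
using System.Data.SqlClient;
using System.Xml.Linq;

class DynamicEntityDemo
{
    static void Main()
    {
        // Build the user-defined structure at runtime (field names are arbitrary)
        var content = new XDocument(
            new XElement("fields",
                new XElement("field", new XAttribute("name", "Author"), "Borges"),
                new XElement("field", new XAttribute("name", "Year"), "1944")));

        using (var conn = new SqlConnection(@"Server=.;Database=Test;Integrated Security=true"))
        {
            conn.Open();

            // Insert one dynamic entity
            var insert = new SqlCommand(
                "INSERT INTO DynamicEntity (NAME, DynamicContent) VALUES (@name, @content)", conn);
            insert.Parameters.AddWithValue("@name", "Book");
            insert.Parameters.AddWithValue("@content", content.ToString());
            insert.ExecuteNonQuery();

            // Query against the xml column with SQL Server's XML methods
            var query = new SqlCommand(
                "SELECT NAME FROM DynamicEntity " +
                "WHERE DynamicContent.exist('/fields/field[@name=\"Author\"]') = 1", conn);
            Console.WriteLine(query.ExecuteScalar());
        }
    }
}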
Related
I work with an old database where everything is saved as a string. I have a client table, and this table has a status column. I also use Entity Framework code first, still version 6. I used reverse engineering to start code first from the database at the beginning.
[StringLength(1)]
public string status { get; set; }
What you need to understand is that everything that should be an enum in a good database design is a string in my database. In my C# I would like to use the enum. How can I save an enum as a string by default in my database, read it as a string, and parse it to an enum?
Given
public enum MyFunkyEnum
{
    SomeValue
}
You could just use a calculated property that is not in the actual DB schema:
public string status { get; set; }
public MyFunkyEnum MyFunkyEnumStatus => (MyFunkyEnum)Enum.Parse(typeof(MyFunkyEnum), status);
Note: you will need to tweak the logic if you have null or empty strings.
Or, for both directions:
[NotMapped]
public MyFunkyEnum MyFunkyEnumStatus
{
    get => (MyFunkyEnum)Enum.Parse(typeof(MyFunkyEnum), status);
    set => status = value.ToString();
}
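If the column can hold null or empty strings (per the note above), a null-safe variant might look like the following sketch; the fallback value here is an arbitrary choice for illustration:

[NotMapped]
public MyFunkyEnum MyFunkyEnumStatus
{
    get
    {
        MyFunkyEnum parsed;
        // Fall back to a default when the column is null, empty, or unrecognized
        return Enum.TryParse(status, out parsed) ? parsed : MyFunkyEnum.SomeValue;
    }
    set { status = value.ToString(); }
}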
Suppose I have a model with 20 fields, and on my index page I want to list all models that are stored in my database.
On the index page, instead of listing all fields of the model, I only want to list 3 fields.
So I made two classes:
class CompleteModel {
    public int Id { get; set; }
    public string Field01 { get; set; }
    public string Field02 { get; set; }
    public string Field03 { get; set; }
    public string Field04 { get; set; }
    public string Field05 { get; set; }
    ...
    public string Field20 { get; set; }
}
Now, in my controller, I can use:
await _context.CompleteModel.ToListAsync();
but that does not feel like the right way to do it, because I'm fetching all the fields while using only 3.
So I wrote this instead:
class ViewModel {
    public string Field02 { get; set; }
    public string Field04 { get; set; }
    public string Field08 { get; set; }
}

var result = await _context.CompleteModel.Select(
    x => new {
        x.Field02,
        x.Field04,
        x.Field08
    }).ToListAsync();

var listResults = new List<ViewModel>();
if (result != null)
{
    listResults.AddRange(result.Select(x => new ViewModel
    {
        Field02 = x.Field02,
        Field04 = x.Field04,
        Field08 = x.Field08
    }));
}
I think this is a lot of code to do something so simple: first I select all the fields I want, then copy everything into another object.
Is there a more direct way to do the same thing? Something like:
_context.CompleteModel.Select(x => new ViewModel { Field02, Field04, Field08 });
You could use AutoMapper to reduce the boilerplate so you're not manually copying field values over.
If you include the AutoMapper NuGet package, you'd need the following in your startup somewhere to configure it for your classes:
Mapper.Initialize(cfg => cfg.CreateMap<CompleteModel, ViewModel>());
You could then do something like the following:
var results = await _context.CompleteModel.ToListAsync();
var viewModelResults = results.Select(Mapper.Map<ViewModel>).ToList();
There are a lot of configuration options for the package so do take a look at the documentation to see if it suits your needs and determine the best way to use it if it does.
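Note that the snippet above still pulls every column from the database before mapping. If the goal is to fetch only the three fields, AutoMapper's queryable extensions can push the projection into the generated SQL; a sketch, assuming the configuration above is in place:

using AutoMapper.QueryableExtensions;

// Translates the mapping into a SELECT of only the mapped columns
var viewModelResults = await _context.CompleteModel
    .ProjectTo<ViewModel>()
    .ToListAsync();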
In my view this is one of the weaknesses of over-abstraction and layering. The VM contains the data that is valuable to your application within the context of use (screen, process, etc.). The data model contains all the data that could be stored and might be relevant. At some point you need to match the two.
Use EF projection to fetch only the data you need from the database into projected data model classes (using the EF POCO layer to define the query, but not to store the resultant data).
Map the projected classes onto your VM, using AutoMapper or similar if the mapping is naive. However, unless you are just writing CRUD screens, a simple field-by-field mapping is of little value: the data you fetch from your data store via EF is in its raw, probably relational form, and the data required by your VM is probably not going to fit that form very neatly (again, unless you are doing a simple CRUD form). So you are going to need to add some value by coding the relationship between the data store and the view model.
I think concentrating on the line count leads to the wrong question. Look at the code and ask "is it adding any value?" If you can delegate the task to AutoMapper, then great; but if the data-model-to-VM copying can be consistently delegated, your VM isn't really pulling its weight beyond carrying some validation annotations.
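For completeness, a minimal sketch of the direct EF projection described above, using the classes from the question (EF can project into ViewModel here because it is not a mapped entity type):

// EF translates this into a SELECT of just these three columns
var viewModels = await _context.CompleteModel
    .Select(x => new ViewModel
    {
        Field02 = x.Field02,
        Field04 = x.Field04,
        Field08 = x.Field08
    })
    .ToListAsync();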
I am looking for design advice for the following scenario:
I have a code-first EF5 MVC application. I am building a full-text search function which will incorporate multiple weighted columns from many tables. As I cannot create an indexed view over these tables (some of them contain text/binary columns), I have created a stored procedure which outputs the ID of my object (e.g. PersonID) and the rank associated with that object based on the search terms.
My current approach is to create a helper class for executing full-text searches, which calls the stored procedure(s) and loads the objects from the context based on the returned IDs.
My questions are:
1. Does my approach seem sensible / follow reasonable best practice?
2. Has anyone else done something similar, with any lessons learned?
3. Is there a way to do this more efficiently (i.e. have the results of the stored procedure return/map to the entities directly, without an additional look-up required)?
UPDATE
Moved my detailed implementation from an edit of this question into its own answer, as frequently recommended on meta.stackexchange.com.
Seeing as you can't use SQL methods like CONTAINSTABLE with Entity Framework code first (which the rest of your application could be using), you are more or less 'forced' to do something with a stored procedure like you describe. Whether it's best practice I don't know, but if it gets the job done I don't see why it wouldn't be sensible.
Yes - I have, and still am, working on a project built around EF code first where I had to do a fairly complex search that included several search parameters marked as 'must have' and several marked as 'nice to have', and from that return a weighted result.
Depending on the complexity of the result set, I don't think you need to do a second round trip to the database, and I will show you the way I have been doing it below.
Bear in mind that below is simply an example:
public List<Person> GetPeople(params string[] p)
{
    var people = new List<Person>();
    using (var db = new DataContext())
    {
        var context = ((IObjectContextAdapter)db).ObjectContext;
        db.Database.Connection.Open();
        var command = db.Database.Connection.CreateCommand();
        command.CommandText = "SomeStoredProcedureReturningWeightedResultSetOfPeople";
        command.CommandType = System.Data.CommandType.StoredProcedure;
        // Add parameters to command object here

        // Materialize the reader's rows directly into Person entities
        people = context.Translate<Person>(command.ExecuteReader()).ToList();
    }
    return people;
}
Even though the stored procedure has a column for the weight value, it won't get mapped when you translate it.
You could potentially derive a class from Person that includes the weight value if you need it.
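For example, something like this sketch (WeightedPerson is just an illustrative name; Translate matches columns to properties by name, so the stored procedure's weight column would need to match):

// A Person subtype that also captures the ranking column
public class WeightedPerson : Person
{
    public double Weight { get; set; }
}

// In GetPeople, translate into the derived type instead, e.g.:
// var weighted = context.Translate<WeightedPerson>(command.ExecuteReader()).ToList();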
Posting this as an answer rather than an edit to my question.
Taking some of the insight provided by @Drauka (and Google), here is what I did for my initial iteration:
1. Created the stored procedure to do the full-text searching. It was really too complex to be done in EF even if supported (as one example, some of my entities are related via business logic and I wanted to group them, returning as a single result). The stored procedure maps to a DTO with the entity IDs and a rank.
2. Modified this blogger's snippet/code to make the call to the stored procedure and populate my DTO: http://www.lucbos.net/2012/03/calling-stored-procedure-with-entity.html
3. Populated my results object with totals and paging information from the results of the stored procedure, and then just loaded the entities for the current page of results:
int[] projectIDs = new int[Settings.Default.ResultsPerPage];
int index = 0;
foreach (ProjectFTS_DTO dto in
    RankedSearchResults
        .Skip(Settings.Default.ResultsPerPage * (pageNum - 1))
        .Take(Settings.Default.ResultsPerPage)) {
    projectIDs[index] = dto.ProjectID;
    index++;
}

IEnumerable<Project> projects = _repository.Projects
    .Where(o => projectIDs.Contains(o.ProjectID));
Full Implementation:
As this question receives a lot of views, I thought it might be worthwhile to post more details of my final solution, to help others or allow for possible improvement.
The complete solution looks like:
DatabaseExtensions class:
public static class DatabaseExtensions {
    public static IEnumerable<TResult> ExecuteStoredProcedure<TResult>(
        this Database database,
        IStoredProcedure<TResult> procedure,
        string spName) {
        var parameters = CreateSqlParametersFromProperties(procedure);
        var format = CreateSPCommand<TResult>(parameters, spName);
        return database.SqlQuery<TResult>(format, parameters.Cast<object>().ToArray());
    }

    private static List<SqlParameter> CreateSqlParametersFromProperties<TResult>
        (IStoredProcedure<TResult> procedure) {
        var procedureType = procedure.GetType();
        var propertiesOfProcedure = procedureType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
        // Each public property becomes a @-prefixed SQL parameter of the same name
        var parameters =
            propertiesOfProcedure.Select(propertyInfo => new SqlParameter(
                    string.Format("@{0}", propertyInfo.Name),
                    propertyInfo.GetValue(procedure, new object[] { })))
                .ToList();
        return parameters;
    }

    private static string CreateSPCommand<TResult>(List<SqlParameter> parameters, string spName)
    {
        // Builds e.g. "ResultsAdvancedFTS @SearchText, @MinRank, ..."
        string queryString = spName;
        parameters.ForEach(x => queryString = string.Format("{0} {1},", queryString, x.ParameterName));
        return queryString.TrimEnd(',');
    }

    public interface IStoredProcedure<TResult> {
    }
}
Class to hold stored proc inputs:
class AdvancedFTS :
    DatabaseExtensions.IStoredProcedure<ResultsFTSDTO> {
    public string SearchText { get; set; }
    public int MinRank { get; set; }
    public bool IncludeTitle { get; set; }
    public bool IncludeDescription { get; set; }
    public int StartYear { get; set; }
    public int EndYear { get; set; }
    public string FilterTags { get; set; }
}
Results object:
public class ResultsFTSDTO {
    public int ID { get; set; }
    public decimal weightRank { get; set; }
}
Finally, calling the stored procedure:
public List<ResultsFTSDTO> getAdvancedFTSResults(
    string searchText, int minRank,
    bool IncludeTitle,
    bool IncludeDescription,
    int StartYear,
    int EndYear,
    string FilterTags) {

    AdvancedFTS sp = new AdvancedFTS() {
        SearchText = searchText,
        MinRank = minRank,
        IncludeTitle = IncludeTitle,
        IncludeDescription = IncludeDescription,
        StartYear = StartYear,
        EndYear = EndYear,
        FilterTags = FilterTags
    };
    IEnumerable<ResultsFTSDTO> resultSet = _context.Database.ExecuteStoredProcedure(sp, "ResultsAdvancedFTS");
    return resultSet.ToList();
}
I have the following C# model class:
public class Thingy
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public DateTime TimeCreated { get; set; }
    public string Content { get; set; }
    public string UUID { get; set; }
}
and the following ASP.NET MVC controller action:
public ActionResult Create(Thingy thing)
{
    var query = Query.EQ("UUID", thing.UUID);
    var update = Update.Set("Title", thing.Title)
                       .Set("Content", thing.Content);
    var t = _collection.Update(query, update, SafeMode.True);
    if (t.UpdatedExisting == false)
    {
        thing.TimeCreated = DateTime.Now;
        thing.UUID = System.Guid.NewGuid().ToString();
        _collection.Insert(thing);
    }

    /*
    var t = _collection.FindOne(query);
    if (t == null)
    {
        thing.TimeCreated = DateTime.Now;
        thing.UUID = System.Guid.NewGuid().ToString();
        _collection.Insert(thing);
    }
    else
    {
        _collection.Update(query, update);
    }
    */

    return RedirectToAction("Index", "Home");
}
This method does either an update or an insert. If it needs to do an insert, it must set the UUID and TimeCreated members. If it needs to do an update, it must leave UUID and TimeCreated alone but update the Title and Content members.
The code that's commented out works, but does not seem the most efficient. The FindOne call is one trip to MongoDB; then, if it reaches the else clause, it issues another query and an update operation, so that's more trips to MongoDB than necessary.
What is a more efficient way to do what I'm trying to accomplish?
As mentioned in the linked SO answer, for upserts to work, you need to update the entire document, not just a few properties.
Personally, I would separate Create and Edit into separate MVC actions (SRP: creating a Thingy has different considerations from updating it).
If you still want to do an upsert instead of separate insert/update calls, you will need to use the following code:
_collection.Update(
    Query.EQ("UUID", thing.UUID),
    Update.Replace(thing),
    UpdateFlags.Upsert
);
The question now becomes: how do we ensure thing has the appropriate values in both cases, i.e. insert as well as update?
My assumption (based on your code model-binding to a Thingy instance) is that your view sends back all fields, including UUID and TimeCreated. This implies that, in the case of an update, the view already has UUID and TimeCreated pre-populated, so the thing object carries the latest values.
In the case of a create, you could render DateTime.MinValue into the TimeCreated field when the view is displayed. In your Create MVC action you can then check whether TimeCreated is DateTime.MinValue; if so, set it to the current time and store a new value for UUID.
This way the thing has the correct values in the insert case as well, and we can safely do an upsert.
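Putting that together, a minimal sketch of the resulting Create action:

public ActionResult Create(Thingy thing)
{
    if (thing.TimeCreated == DateTime.MinValue)
    {
        // First save: stamp the fields that must never change afterwards
        thing.TimeCreated = DateTime.Now;
        thing.UUID = System.Guid.NewGuid().ToString();
    }

    // One round trip: update if the UUID exists, insert otherwise
    _collection.Update(
        Query.EQ("UUID", thing.UUID),
        Update.Replace(thing),
        UpdateFlags.Upsert);

    return RedirectToAction("Index", "Home");
}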
I take this approach when doing upserts for Mongo from the controller:
public ActionResult Create(Thingy model)
{
    var thing = _collection.FindOneAs<Thingy>(Query.EQ("UUID", model.UUID));
    if (thing == null)
    {
        thing = new Thingy {
            TimeCreated = DateTime.Now,
            UUID = System.Guid.NewGuid().ToString(),
            Id = ObjectId.GenerateNewId()
        };
    }
    else
    {
        thing.Content = model.Content;
        // other updates here
    }
    _collection.Save<Thingy>(thing);
    return View();
}
I'm in charge of migrating our own DAL to a solution based on Entity Framework 4 but, before I can do it, I need to be sure it's possible to translate all our "constructs" to this new technology.
One of the biggest issues I'm having is reading a field and building a custom type from it. Valid examples could be a bit mask saved in a BIGINT field, a list of mail addresses saved as a CSV list in an NVARCHAR field, or an XML field containing aggregated data not worth having its own table/entity. Basically, the serialization mechanism is not fixed.
Let's take the classic "Address" example.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string Zip { get; set; }
    public string Country { get; set; }
}
and let's suppose we want to save it in an XML field using this template:
<address>
    <street>Abrahamsbergsvägen, 73</street>
    <city>Stockholm</city>
    <zip>16830</zip>
    <country>Sweden</country>
</address>
The question basically is: does there exist a way to override how EF4 serializes and deserializes the content of a field mapped to a property of an entity?
I found this solution. It's not as clean as I wished, but it seems impossible to get anything better.
Given this base entity:
public class Institute
{
    public int InstituteID { get; set; }
    public string Name { get; set; }
    // other properties omitted
}
I added an XML field called Data to the database, containing some strings that use this simple template:
<values>
    <value>Value 1</value>
    <value>Value 2</value>
    <value>Value 3</value>
</values>
In the entity I added these properties and mapped the database field "Data" to the property "DataRaw":
protected string DataRaw
{
    get
    {
        // If Data was never materialized, hand back the raw XML untouched;
        // otherwise re-serialize the current values
        if (_Data == null)
            return _DataRaw;
        else
            return new XElement("values",
                from s in Data select new XElement("value", s)).ToString();
    }
    set
    {
        _DataRaw = value;
    }
}
private string _DataRaw;

private string[] _Data;
public string[] Data
{
    get
    {
        // Parse the raw XML on first access and cache the result
        if (_Data == null)
        {
            _Data = (from elem in XDocument.Parse(_DataRaw).Root.Elements("value")
                     select elem.Value).ToArray();
        }
        return _Data;
    }
    set
    {
        _Data = value;
    }
}
This solution works. Here is the sample code:
class Program
{
    static void Main(string[] args)
    {
        var ctx = new ObjectContext("name=TestEntities");
        var institute = ctx.CreateObjectSet<Institute>().First();
        System.Console.WriteLine("{0}, {1}", institute.InstituteID, institute.Name);
        foreach (string data in institute.Data)
            System.Console.WriteLine("\t{0}", data);

        institute.Data = new string[] {
            "New value 1",
            "New value 2",
            "New value 3"
        };
        ctx.SaveChanges();
    }
}
Does anyone have a better solution?
Entity Framework does NOT serialize or deserialize entities, nor does it control how serialization should take place in other layers or modules of your application.
What you need to do is simply open your POCO(s) and annotate their properties with the appropriate attributes, which will be taken into account at serialization time when you ship them to your other application layers.
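For example, a minimal sketch using DataContract attributes on the Address class from earlier in this thread (assuming, for illustration, that your other layers serialize with the DataContractSerializer; swap in [XmlElement], [Serializable], etc. for other serializers):

using System.Runtime.Serialization;

[DataContract]
public class Address
{
    [DataMember] public string Street { get; set; }
    [DataMember] public string City { get; set; }
    [DataMember] public string Zip { get; set; }
    [DataMember] public string Country { get; set; }
}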