I am trying to create a driver for LINQPad and have a question:
When creating a DynamicDataContextDriver, I must create a class TypedDataContext.
What should I put in it?
How will it be populated?
Can I control how it will be populated?
If I use an object database here, is there anything I must bear in mind?
I found some answers here, but I cannot find answers to all of the above.
A typed data context is simply a class with properties/fields suitable for querying. Those properties/fields will typically return IEnumerables or IQueryables. For example:
public class TypedDataContext
{
    public IEnumerable<Customer> Customers { get { ... } }
    public IEnumerable<Order> Orders { get { ... } }
    ...
}
When you use Visual Studio to create a new item of kind "LINQ to SQL classes" or "ADO.NET Entity Data Model", Visual Studio creates a typed data context for you which is an excellent example of what LINQPad expects. A typed data context can also expose methods (e.g., to map stored procedures or functions) - in fact it can expose anything that makes sense to the end user.
When you execute a query in LINQPad that has a connection, LINQPad subclasses the typed data context associated with the connection so that the query has access to all of its fields/properties. This is why Customers.Dump() is a valid query - we can just access Customers without having to instantiate the typed data context first.
A LINQPad driver can work in one of two ways. Either it can act like Visual Studio and build the typed data context automatically and on the fly (dynamic data context driver), or it can extract a typed data context from an existing assembly provided by the user (static data context driver). When you add a connection in LINQPad, you'll notice that the drivers are listed in two list boxes (Build data context automatically = dynamic driver, and Use a typed data context from your own assembly = static driver).
The typed data context is instantiated whenever a query executes. Because its properties typically return lazily evaluated IEnumerables/IQueryables, it's not helpful to think of "populating" it. However, it will need to know how to access an underlying data source, and this is done by passing arguments into the constructor.
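For illustration, here is a minimal sketch of such a context; the Customer type, the member names, and the query plumbing are purely illustrative (they are not part of any LINQPad API):

using System;
using System.Linq;

public class Customer { public int ID { get; set; } }

public class TypedDataContext
{
    readonly string _connectionString;

    // Connection details arrive via the constructor - nothing is "populated" up front.
    public TypedDataContext(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Lazily evaluated - nothing is fetched until a query iterates it.
    public IQueryable<Customer> Customers
    {
        get { return CreateQuery<Customer>("Customers"); }
    }

    IQueryable<T> CreateQuery<T>(string source)
    {
        // Hook up your underlying data source here using _connectionString.
        throw new NotImplementedException();
    }
}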
LINQPad normally keeps the query's application domain alive between query runs, which may be useful for caching and optimization if you're writing a driver for an object database. Other than that, there shouldn't be any special considerations for object databases.
Related
I'm using Entity Framework 6.
I want to run a stored procedure that returns non-entity objects (3 columns per row).
using(var dbContext = new DBContextEntity())
{
    var queryProducts = dbContext.Database.SqlQuery<DataTable>("dbo.GetProductByDesc @q", query);
}
How can I get that data as a DataSet or an anonymous object that I can iterate over?
As far as I know, Entity Framework does not provide anonymous object materialization. The reason is probably that it generates IL code for each type and caches it (or simply caches PropertyInfo lookups).
The solution is to create a simple class with properties matching the names of the stored procedure's result set, and use this class as the generic parameter for SqlQuery.
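For example, assuming the procedure returns three columns named like below (the names are guesses for illustration only):

// POCO whose property names/types match the stored procedure's result set.
public class ProductResult
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

// Usage - note the SqlParameter (System.Data.SqlClient) supplying @q:
using (var dbContext = new DBContextEntity())
{
    var queryProducts = dbContext.Database.SqlQuery<ProductResult>(
        "dbo.GetProductByDesc @q",
        new SqlParameter("@q", query));
}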
Edit:
SqlQuery implements IEnumerable, and when you iterate over it, the query executes automatically on the current thread. To iterate the result you can, for example:
foreach (var product in queryProducts)
{
    // do something with each product here
}
You can also pass a list of product class instances to a function expecting it:
ShowProducts(queryProducts.ToList());
You can also run the query in the background and have it return a list of Product once it has finished; more information about asynchronous fetching can be found here: http://www.codeguru.com/csharp/.net/net_framework/performing-asynchronous-operations-using-entity-framework.htm
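For instance, EF6's DbRawSqlQuery (what SqlQuery returns) also exposes ToListAsync directly, so inside an async method you could write something like this (reusing the illustrative ProductResult class from above; requires .NET 4.5+):

using (var dbContext = new DBContextEntity())
{
    // Runs the query without blocking the calling thread.
    List<ProductResult> products = await dbContext.Database
        .SqlQuery<ProductResult>("dbo.GetProductByDesc @q", new SqlParameter("@q", query))
        .ToListAsync();
}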
Like @Too said, it is best to define a POCO class with properties corresponding to the field names and data types returned by the stored procedure or other SQL statement.
It is generally better to avoid using DataSets in any new development work. They do have their uses, but they carry a performance penalty in high-throughput scenarios which POCOs clearly avoid.
If the attraction of DataSets is the ability to easily serialize the data over the wire or to a file for later use, then the various serialization frameworks will help you with that, e.g. DataContractSerializer, Newtonsoft.Json, etc.
This also allows for portability if the POCO is defined in a PCL (Portable Class Library).
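A minimal sketch with Newtonsoft.Json, assuming a List<ProductResult> like the POCO above:

// Serialize the POCO list to a file for later use...
string json = JsonConvert.SerializeObject(products);
File.WriteAllText("products.json", json);

// ...and restore it later.
var restored = JsonConvert.DeserializeObject<List<ProductResult>>(
    File.ReadAllText("products.json"));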
If you must use DataSets, I would rather use typed DataSets. The DataRows can then serve as the POCO in @Too's answer, since they have a default constructor. Just be careful of nulls and their special treatment in fields other than String.
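For example, reading a null value-type column on a typed DataRow throws a StrongTypingException, so you guard with the generated Is<Column>Null() methods (the table and column names below are illustrative):

foreach (ProductsDataSet.ProductRow row in ds.Product)
{
    // Value-type columns throw when null - check the generated guard first.
    int? stock = row.IsStockCountNull() ? (int?)null : row.StockCount;
}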
I have two tables that have the same layout -

Report Table
    ID
    ReportCol1
    ReportCol2

In another database I have

Reporting Table
    ID
    ReportCol1
    ReportCol2
I want to use a single entity model called Report to load the data from both of these tables.
In my context class I have
public DbSet<Report> Report { get; set; }

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Configurations.Add(new ReportMap());
}
In my call to the first database Report table I get the results as expected.
I change the connection string to point to the second database, but I can't change the name of the table in the table mapping.
I don't want to use stored procs for the reason outlined in my comment.
What can I do, short of changing the table names in the database (which is not an option)?
Have you tried the fluent API modelBuilder.Entity<Report>().ToTable("Reporting"); ? You may need to apply it conditionally based on which database you are connecting to. Your configuration could let you say "database A uses this mapping and connection string" and "database B uses this other mapping and connection string"; then, rather than changing the connection string, you specify a database by some name/key, and your app looks up that name to determine which mapping code to run.
if (dbMappingConfig == DbMapping.A) // some enum you create
{
    modelBuilder.Entity<Report>().ToTable("Reporting");
}
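Putting those pieces together, one possible shape is sketched below; this is illustrative only, and note the model-caching caveat discussed further down, since OnModelCreating normally runs only once per context type:

public enum DbMapping { A, B } // which database's mapping to use

public class ReportContext : DbContext
{
    readonly DbMapping _mapping;

    public ReportContext(string connectionString, DbMapping mapping)
        : base(connectionString)
    {
        _mapping = mapping;
    }

    public DbSet<Report> Report { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new ReportMap());
        // Pick the table name from the mapping key instead of hard-coding it.
        modelBuilder.Entity<Report>()
                    .ToTable(_mapping == DbMapping.A ? "Report" : "Reporting");
    }
}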
If your goal is to be able to pass these entities to other methods like DisplayReport(Report r) so that you don't have to duplicate code, you could have both the Reporting and Report classes implement an IReport interface, for example:
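A sketch of that interface (the column types are guesses based on the question):

public interface IReport
{
    int ID { get; }
    string ReportCol1 { get; }
    string ReportCol2 { get; }
}

// Entity classes are typically partial, so the interface can be added
// without touching generated code.
public partial class Report : IReport { }
public partial class Reporting : IReport { }

// Shared code then accepts either entity:
void DisplayReport(IReport r) { /* ... */ }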
EF also supports inheritance hierarchies, so you could have them inherit from the same base class, but I have a strong feeling that will not work across databases.
If OnModelCreating doesn't rerun, the model is probably already cached. Put modelBuilder.CacheForContextType = false; in there so it doesn't cache the model in future; to clear the current cache, I think a Clean + Rebuild will do. This comes at the price of rebuilding the model every time instead of reusing a cache; what you'd really want is to reuse the cache right up until the connection string changes. I don't know of a way to manually clear the cache, but there might be one. Alternatively, you can manage the model building yourself:
DbModelBuilder builder = new DbModelBuilder();
// Set up your configurations/mappings here, e.g. builder.Entity<Report>().ToTable(...)
DbModel model = builder.Build(connection);       // build the model for this connection
DbCompiledModel compiledModel = model.Compile(); // compile it (cache this yourself)
DbContext context = new DbContext(connection, compiledModel);
But that will introduce additional complexities since you will need to manage the caching yourself.
While searching on this, I came across a question where they seem to be trying to accomplish the same thing and have gone down the same path (see the final section of that question): How to map an Entity framework model to a table name dynamically
Are you able to create the same-named view in each database and map to that instead of a variable table name?
I have 2 copies of tables with different names in my solution and deal with that by having 2 contexts and 2 sets of map files (generated text templates).
I have an application that will generate reports from various databases, each with the same schema. The only difference between DB1, DB2, DB3, and DBx is the data. All tables, views, etc are the same in structure.
For each participating DB in my application I have created a distinct Linq to SQL DataContext.
I am creating a ReportHelper class that will generate reports from the underlying data. I want to be able to call a method like "GetCustomerSales" and have it spit back the data for my report. The problem is that I want to pass or set the DataContext for the GetCustomerSales method before I call it (ideally when constructing the ReportHelper class).
However, my GetCustomerSales method wants a specific DataContext, and I do not want to create this method over and over for each potential DataContext in use in the app. What's the correct approach here?
Have a single data context that matches the common schema. The difference is that instead of using just new SomeDataContext(), you should supply an appropriate connection string (or connection) to the constructor:
var db = new SomeDataContext(connectionString);
or
var db = new SomeDataContext(connection);
Now all you need is multiple connection strings, which is easier than multiple data contexts. Two choices there: you can store multiple strings, perhaps in config (this is especially useful if they each need different user accounts etc.); or you can use SqlConnectionStringBuilder to create the connection string at runtime, specifying the appropriate database.
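For example (the server name below is a placeholder):

var csb = new SqlConnectionStringBuilder
{
    DataSource = @".\SQLEXPRESS",   // placeholder server
    InitialCatalog = databaseName,  // "DB1", "DB2", ... chosen at runtime
    IntegratedSecurity = true
};
var db = new SomeDataContext(csb.ConnectionString);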
To illustrate: every site in the StackExchange network is ultimately a different database, but they all use the same data-context type. We simply tell it about the connection at runtime (based on which hostname you accessed).
You can get around it by passing a connection string to your data context.
public class DataRepository
{
    private MyDataContext ctx;

    public DataRepository(string connection)
    {
        ctx = new MyDataContext(connection);
    }

    // now you can use your context
}
I have several databases that contain exactly the same base tables with identical design. Now I need to be able to access a base table from any one of those databases.
Is there any way to create a common interface that can still utilize the power of Linq2Sql? I was thinking that I would have a factory that created a data context for that chosen database which I could afterwards query:
string university = "SomeUniversity";
var database = UniversityDataContextFactory.Get(university);
var user = database.Users.Where(u => u.Id == id).First();
This, however, would require the data contexts returned from the individual databases to implement a shared interface and very possibly also to share data classes.
If the database schemas are identical then you only need one set of data classes - the only difference between one "data context" and another will be the connection string.
This is where things get a bit more fun. If you are explicitly creating the data context - as in your factory example - then there's no issue, so long as you have a means to provide or create the appropriate connection string, since a data context has a constructor that takes a connection string as a parameter. However, if you're creating one implicitly (in back of something like Dynamic Data), then I'm not sure what you'd need to do (I've dealt with this on a per-application-instance basis, but not for multiple connections in a single application).
The key thing to remember is that the connection string in your .config is the default, but not necessarily the only connection string you can use with a data context.
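To make that concrete, here is a minimal sketch of the factory from the question, assuming a single generated context type and one named connection string per university in config (all names are illustrative):

using System.Configuration;

public static class UniversityDataContextFactory
{
    public static UniversityDataContext Get(string university)
    {
        // One <connectionStrings> entry per university/database.
        string connectionString = ConfigurationManager
            .ConnectionStrings[university].ConnectionString;
        return new UniversityDataContext(connectionString);
    }
}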
I have an existing SQL Server database, where I store data from large specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again.
From the database, I have created both an Entity Framework model and a DataSet model via the Visual Studio designers. The DataSet is only for bulk-importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework model, whose CreateQuery method is exposed via an interface like this:
public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
{
    return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
}
Now, sometimes my files are very small, and in such cases I would like to skip the import into the database and instead have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing it, to transfer it directly into an ObjectContext which is accessible via the interface.
Does this make sense?
Now here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the individual rows into entities of the corresponding type, and add them to an instantiated object of my typed entity context class, like so:
MyEntities context = new MyEntities(); // create new in-memory context
//....

// get the item in the navigations table
MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); // a foreach would be necessary in a real-world scenario
NavigationResult entity = new NavigationResult
{
    Direction = dataRow.Direction,
    //...
    NavigationResultID = dataRow.NavigationResultID
}; // convert to entities
context.AddToNavigationResult(entity); // add to entities
//....
Very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet. And beware if I ever change my database model....
Also, I have found that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this gets in the way of my intentions. I intend to have only an in-memory proxy database.
Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object?
P.S.: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
There are tools that map between objects, such as AutoMapper, which is a very good open-source tool.
However, these tools sometimes have problems, for example generating duplicate entity keys, or when the structure of the objects being mapped is very different.
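To give a flavour, the classic AutoMapper static API (in older versions) looks roughly like this; whether it copes with typed DataSet rows is exactly the kind of structural mismatch mentioned above:

// One-time configuration, then map row -> entity.
Mapper.CreateMap<MyDataSet.NavigationResultRow, NavigationResult>();
NavigationResult entity =
    Mapper.Map<MyDataSet.NavigationResultRow, NavigationResult>(dataRow);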
If you are trying to automate it, I think there is a greater chance of it working if you use EF 4 and POCO objects.
If you end up writing the mapping code manually, I would move it into a separate procedure with automated unit tests on it.
The way we do this is to create a static class with "Map" methods:
From DTO to EF object
From EF to DTO
Then write a test for each method in which we check that the fields were mapped correctly. For example:
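A sketch of what that can look like, using the types from the question (CreateTestRow is a test helper you would write yourself; NUnit shown, but any test framework works):

public static class NavigationResultMapper
{
    public static NavigationResult ToEntity(MyDataSet.NavigationResultRow row)
    {
        return new NavigationResult
        {
            Direction = row.Direction,
            NavigationResultID = row.NavigationResultID
        };
    }
}

[Test]
public void ToEntity_MapsAllFields()
{
    var row = CreateTestRow(direction: "N", id: 42);
    var entity = NavigationResultMapper.ToEntity(row);
    Assert.AreEqual("N", entity.Direction);
    Assert.AreEqual(42, entity.NavigationResultID);
}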