I have been reading several examples of Sitecore DataProvider implementations that work against a single database and a single table (using config file parameters to specify the particular table and columns to integrate with). I am wondering whether it is possible to implement a data provider that works on multiple tables instead of just one. I couldn't find any examples of this, so I'm asking for any ideas or possibilities.
The first problem I encounter when trying to deal with multiple tables is overriding the GetItemDefinition method, since this method returns only one item definition and needs to know which particular table to read the item information from (with a single table, this is specified in the config file). Basically I am looking for a way to switch between tables dynamically without changing the config file params every time.
If you're creating a custom data provider then the implementation is left entirely up to you. If you have been following some of the examples, such as the Northwind DataProvider, then as you state the implementation acts on a single database as specified in config. But you can specify whatever you need in the methods that you implement, and run logic to switch the SELECT statement you call in methods such as GetItemDefinition() and GetItemFields(). You can see in the Northwind example that the SQL query is built dynamically:
StringBuilder sqlSelect = new StringBuilder();
sqlSelect.AppendFormat("SELECT {0} FROM {1}", nameField, table);
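For example, here is a rough sketch of a GetItemDefinition override that picks the table at runtime rather than from config. The table-routing helper, column names, and connection handling are assumptions for illustration, not part of the Sitecore API:

using System.Data.SqlClient;
using Sitecore.Data;
using Sitecore.Data.DataProviders;

public class MultiTableDataProvider : DataProvider
{
    private readonly string connectionString;

    public MultiTableDataProvider(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Hypothetical routing logic: decide which table backs a given item ID,
    // e.g. from a lookup table or a registry built at startup.
    private string ResolveTableForId(ID itemId)
    {
        return "Products";
    }

    public override ItemDefinition GetItemDefinition(ID itemId, CallContext context)
    {
        string table = ResolveTableForId(itemId);
        if (table == null)
        {
            return null;
        }

        // Same dynamic-SQL idea as the Northwind sample, but the table name is
        // chosen per item instead of being read from config. "Name" and
        // "TemplateId" are assumed column names.
        string sql = string.Format("SELECT Name, TemplateId FROM [{0}] WHERE Id = @id", table);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@id", itemId.ToGuid());
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                {
                    return null;
                }

                return new ItemDefinition(itemId, reader.GetString(0), new ID(reader.GetGuid(1)), ID.Null);
            }
        }
    }
}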
If you are building a read-only data provider then you might be able to make use of SQL views, allowing you to write a query that combines the results from several tables using the UNION operator. As long as each record has an ID that is unique across tables (e.g. if you are using GUIDs as the ID) then this should work fine.
Related
I have more than 50 data tables that have nearly identical structures. Some of the tables have additional columns. I'm developing an application to help me monitor and track changes to the data in these tables, and I only need to be able to read that data. I want to create an Entity Framework model that will work with all of the tables and give me access to all columns that exist.
As long as the model contains only the subset of columns that exist in all of the tables, my model works and I can dynamically switch between the tables with the same model. However, I need access to the additional columns when they exist. When my model contains a column that doesn't exist in the table I switch to, I get an exception for an invalid column. Is there a way to have my model be the set of all columns and, when a column doesn't exist in a particular table, handle it in a way that still gives me access to the columns that do exist? I know that using straight SQL I can do this quite easily, but I'm curious whether there is a way to do it with Entity Framework. Essentially I am looking for the equivalent of querying sys.columns to determine the structure of the table and then interacting with the table based on knowing which columns exist from the sys.columns query.
Sample of issue:
The 50+ tables hold data from different counties. Some of these counties have included additional data, for instance a URL link to an image or file, so I have a varchar column that contains this link. Many of the counties don't supply this type of attribute and it isn't part of the table for those counties. But there are 100 other reported attributes that are common to all tables. I realize one solution to this issue is to have all tables contain all possible columns. However, in practice this has been hard to achieve due to frequent changes made to provide more to our clients in certain counties.
From the EF perspective I do not know of a solution, but you can try something with an extension method like the one below:
public static DbRawSqlQuery<YourBaseModel> GetDataFromTable(this ApplicationDbContext context, string tableName)
{
    // Note: tableName is concatenated into the SQL, so it should come from a
    // trusted whitelist rather than from user input.
    return context.Database.SqlQuery<YourBaseModel>("select * from " + tableName);
}
I think this will map only the columns that exist in the table to properties in your model.
This is not tested, by the way, but it should give you an idea of what I mean.
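Usage would look something like this (the table name is illustrative and should come from your own list of known tables):

using (var context = new ApplicationDbContext())
{
    // YourBaseModel contains only the columns shared by every table.
    List<YourBaseModel> rows = context.GetDataFromTable("CountyA_Parcels").ToList();
}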
Entity Framework supports Table per Concrete type (TPC) mapping, which lets you have a base class that contains all the shared columns and derived classes for each specific table:
https://weblogs.asp.net/manavi/inheritance-mapping-strategies-with-entity-framework-code-first-ctp5-part-3-table-per-concrete-type-tpc-and-choosing-strategy-guidelines
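A minimal sketch of what that could look like for the county tables (EF6 Code First; class, table, and column names are illustrative):

using System.Data.Entity;

// Shared columns live on the abstract base; TPC maps each concrete class to its own table.
public abstract class ParcelRecordBase
{
    // With TPC, store-generated identity keys can collide across tables, so GUID
    // keys (or another scheme that is unique across tables) are the safer choice.
    public System.Guid Id { get; set; }
    public string OwnerName { get; set; }
    // ...the other ~100 shared columns...
}

// County table that has the extra image-link column.
public class CountyAParcel : ParcelRecordBase
{
    public string ImageUrl { get; set; }
}

// County table with only the shared columns.
public class CountyBParcel : ParcelRecordBase
{
}

public class ParcelContext : DbContext
{
    public DbSet<ParcelRecordBase> Parcels { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<CountyAParcel>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("CountyA_Parcels");
        });
        modelBuilder.Entity<CountyBParcel>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("CountyB_Parcels");
        });
    }
}

Querying context.Parcels then unions the mapped tables, and you can filter to a derived type (e.g. with OfType<CountyAParcel>()) when you need the extra columns. The obvious cost is one derived class per table shape.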
I am using entity framework to interface with a database and I would like to create a generic insert function!
I have got it working with standalone tables, i.e. adding a new record to the Customer table, which uses the Set function to get the table of the correct type. The problem is that cross-reference tables are mapped into lists in Entity Framework, conforming to an object-oriented interface (so I understand why). However, how can I account for such inserts in a generic manner? In general I will be dealing with whole entities, but I have a few scenarios where I need to deal with the lists inside an entity.
Of course I can create specific methods to deal with these specific cases but I really want to avoid doing this!
One idea I have had is to create a dictionary of Type to Action: the type being the DTO types associated with list inserts the service may receive, and the action being the specific insert code for dealing with the lists. That way I can still use the generic insert function and just check whether there are any "insert rules" in the dictionary that should be executed instead of the generic insert code (a sketch of this idea follows the code below). This way, from a client programming perspective, only the one insert function is ever used. BUT this still requires writing specific insert code, which I really would like to avoid.
I don't have that much experience with EF, so what I would like to know is: do I have any cleaner options for getting around this problem?
Code demonstrating my issue:
Normal generic insert -
public void InsertRecord(object newRecord)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        var table = db.Set(newRecord.GetType());
        table.Add(newRecord);
        db.SaveChanges();
    }
}
As you can see, this handles standard inserts into tables.
Insert into cross reference table -
public void InsertCarerCheck(int carer_id, Check newCheck)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        Foster_Carers carer_record = (from c in db.Foster_Carers
                                      where c.foster_carer_id == carer_id
                                      select c).SingleOrDefault();
        carer_record.Checks1.Add(newCheck);
        db.SaveChanges();
    }
}
Checks1 is a list property generated by EF for a cross-reference table linking a foster carer and a check record. How can scenarios like this be accounted for in a generic insert function?
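For concreteness, here is a rough sketch of the Type-to-Action "insert rules" idea described earlier. PIRSDBCon, Foster_Carers, Checks1 and Check come from the code above; the DTO and the dictionary wiring are illustrative (requires System, System.Collections.Generic and System.Linq):

// Hypothetical DTO the service would receive for the cross-reference case.
public class CarerCheckDto
{
    public int CarerId { get; set; }
    public Check NewCheck { get; set; }
}

// One entry per DTO type that needs special insert handling.
private static readonly Dictionary<Type, Action<PIRSDBCon, object>> insertRules =
    new Dictionary<Type, Action<PIRSDBCon, object>>
    {
        {
            typeof(CarerCheckDto), (db, record) =>
            {
                var dto = (CarerCheckDto)record;
                var carer = db.Foster_Carers.SingleOrDefault(c => c.foster_carer_id == dto.CarerId);
                if (carer != null)
                {
                    carer.Checks1.Add(dto.NewCheck);
                }
            }
        }
    };

public void InsertRecord(object newRecord)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        Action<PIRSDBCon, object> rule;
        if (insertRules.TryGetValue(newRecord.GetType(), out rule))
        {
            // Special-case insert, e.g. adding to a cross-reference list.
            rule(db, newRecord);
        }
        else
        {
            // Generic path for whole entities.
            db.Set(newRecord.GetType()).Add(newRecord);
        }
        db.SaveChanges();
    }
}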
I'm using the PetaPoco mini-ORM, which in my implementation runs stored procedures and maps them to object models I've defined. This works very intuitively for queries that pull from a single table (e.g. SELECT * FROM Orders), but less so when I start writing queries that pull aggregate results. For example, say I've got a Customers table and an Orders table, where the Orders table contains a foreign key reference to a CustomerID. I want to retrieve a list of all orders, but in my application's view display the customer name as well as all the other order fields, i.e.
SELECT
Customers.Name,
Orders.*
FROM
Orders
INNER JOIN Customers
ON Orders.CustomerID = Customers.ID
Having not worked with an ORM of any sort before, I'm unsure of the proper method to handle this sort of data. I see two options right now:
Create a new aggregate model for the specific operation. I feel like I would end up with a ton of models in any large application by doing this, but it would let me map a query result directly to an object.
Have two separate queries, one that retrieves Orders, another that retrieves Customers, then join them via LINQ. This seems a better alternative than #1, but similarly seems obtuse as I am pulling out 30 columns when I desire one (although my particular mini-ORM allows me to pull out just one row and bind it to a model).
Is there a preferred method of doing this, either of the two I mentioned, or a better way I haven't thought of?
Option #1 is common in CQRS-based architectures. It makes sense when you think about it: even though it requires some effort, it maps intuitively to what you are doing, and it doesn't impact other pieces of your solution. So if you have to change it, you can do so without breaking anything elsewhere.
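For what it's worth, option #1 with PetaPoco can stay fairly small. A sketch, assuming PetaPoco's Fetch<T> and a connection string named "MyConnection"; the order columns are illustrative, and with your stored procedures you would pass the proc call (or use your existing helper) instead of the inline SQL:

// Read model for this one screen; property names match the columns/aliases the query returns.
public class OrderWithCustomerName
{
    public int ID { get; set; }
    public int CustomerID { get; set; }
    public DateTime OrderDate { get; set; }   // illustrative Orders columns
    public string Name { get; set; }          // Customers.Name
}

var db = new PetaPoco.Database("MyConnection");
List<OrderWithCustomerName> orders = db.Fetch<OrderWithCustomerName>(@"
    SELECT Customers.Name, Orders.*
    FROM Orders
    INNER JOIN Customers ON Orders.CustomerID = Customers.ID");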
I have a legacy system that dynamically augments a table with additional columns when needed. Now I would like to access said table via C#/NHibernate.
There is no way to change the behaviour of the legacy system and I dynamically need to work with the data in the additional columns. Therefore dynamic-component mapping is not an option since I do not know the exact names of the additional columns.
Is there a way to put all unmapped columns into a dictionary (column name as key)? Or if that's not an option put all columns into a dictionary?
Again, I do not know the names of the columns at compile time so this has to be fully dynamic.
Example:
public class History
{
public Guid Id { get; set; }
public DateTime SaveDateTime { get; set; }
public string Description { get; set; }
public IDictionary<string, object> AdditionalProperties { get; set; }
}
So if the table History contains the columns Id, SaveDateTime, Description, A, B, C and D, I would like to have "A", "B", "C" and "D" in the IDictionary. Or if that's too hard to do, simply throw all columns in there.
For starters I would also be fine with only using string columns if that helps.
You probably need an ADO.NET query to get this data out. If you use NH, even with a SQL query using SELECT *, you won't get the column names.
You can try using SMO (SQL Server Management Objects, a .NET API for SQL Server) or some other way to find the table definitions. Then you can build up the mapping using Fluent NHibernate with a dynamic component. I'm not sure whether you can change the mappings after you have already used the session factory. It's worth a try. Good luck :-)
I guess with the following code you can get your results in a Hashtable:
var hashTable = (Hashtable)Session.CreateSQLQuery("SELECT * FROM MyTable")
.SetResultTransformer(Transformers.AliasToEntityMap)
.UniqueResult();
Obviously all your data will be detached from the session...
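If you then want to push that result into the History.AdditionalProperties dictionary from the question, a small follow-up sketch (it assumes the statically mapped columns come back under those names and types; requires System.Collections and System.Linq):

var history = new History
{
    Id = (Guid)hashTable["Id"],
    SaveDateTime = (DateTime)hashTable["SaveDateTime"],
    Description = (string)hashTable["Description"],
    AdditionalProperties = new Dictionary<string, object>()
};

// Everything that is not one of the statically mapped columns goes into the dictionary.
var mappedColumns = new[] { "Id", "SaveDateTime", "Description" };
foreach (DictionaryEntry entry in hashTable)
{
    string column = (string)entry.Key;
    if (!mappedColumns.Contains(column, StringComparer.OrdinalIgnoreCase))
    {
        history.AdditionalProperties[column] = entry.Value;
    }
}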
I think the best you can do is to find the columns at runtime, create a mapping for these extra columns, and then write the output to an XML file. Once that is done you can add the mapping at runtime:
ISessionFactory sessionFactory = new Configuration()
    .AddFile("myDynamicMapping.hbm.xml")
    .BuildSessionFactory();
How you would use this mapping is a good question, as you would have to create your class dynamically as well, and then you are SOL.
Good luck.
What's not possible in SQL is not possible in NHibernate.
It's not possible to write an insert query that inserts into unknown columns.
I assume that your program builds a single Configuration object on startup, by reading XML files, and then uses the Configuration object to build ISessionFactory objects.
Instead of reading the XML files, building the Configuration object, and calling it a day, however, your program can send a query to the database to figure out any extra columns on this table, and then alter the Configuration, adding columns to the DynamicMapping programmatically, before compiling the Configuration object into an ISessionFactory.
NHibernate does have ways to get the database schema, provided it's supported by the database type/dialect. It is primarily used by the SchemaExport and SchemaUpdate functions.
If you're not scared of getting your hands a bit dirty;
Start by looking at the GenerateSchemaUpdateScript function in the Configuration class:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Cfg/Configuration.cs
In particular, you'd be interested in this class, which is referenced in that method:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Tool/hbm2ddl/DatabaseMetadata.cs
The DatabaseMetadata object will allow you to traverse the metadata for all tables and fields in the database, allowing you to figure out which fields are not mapped. If you look at the Configuration class again, it holds a list of its mappings in the TableMappings collection. Taking hints from the GenerateSchemaUpdateScript function, you can compare a Table object from TableMappings against the object implementing ITableMetadata returned by the DatabaseMetadata.GetTableMetadata function to figure out which columns are unmapped.
Use this information to rebuild the mapping file used by the "dynamic" class at runtime, placing all the dynamic/runtime fields in the "AdditionalProperties" dynamic-component section of the mapping file. The mapping file will need to be included as an external file and not an embedded resource to do this, but that is possible with the Configuration's AddFile function. After it is rebuilt, reload the configuration, and finally rebuild the session factory.
At this time it looks like Firebird, MsSQL Compact, MsSQL, MySQL, Oracle, SQLite, and SybaseAnywhere have implementations for ITableMetadata, so it's only possible with one of these (unless you make your own implementation).
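As a rough illustration of the discovery step, here is a sketch that finds the columns NHibernate does not know about for the History table. It takes a shortcut and queries INFORMATION_SCHEMA directly (so SQL Server is assumed) instead of going through DatabaseMetadata, but the comparison against the Configuration's mapped columns is the same idea:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using NHibernate.Cfg;

public static IList<string> FindUnmappedColumns(Configuration cfg, string connectionString)
{
    // Columns NHibernate already maps for the History class.
    var mapped = new HashSet<string>(
        cfg.GetClassMapping(typeof(History)).Table.ColumnIterator.Select(c => c.Name),
        StringComparer.OrdinalIgnoreCase);

    // Columns that physically exist in the History table.
    var unmapped = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'History'",
        connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                string column = reader.GetString(0);
                if (!mapped.Contains(column))
                {
                    unmapped.Add(column);
                }
            }
        }
    }
    return unmapped;
}

The returned names are what you would write into the dynamic-component section of the regenerated mapping file before rebuilding the session factory.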
Edit: I am using SqlDataAdapters to fill the data sets. Sorry--I should have been more clear.
I'm working on a project where I need to fill a number of strongly-typed data sets with information from stored procedures. Right now, I have a generic method in my data access layer:
public static DataSet FillDataSet(DataSet dataSet, string storedProcedureName, Dictionary<string, string> parameters);
The problem with this is that I need to establish mappings between the returned recordsets from the stored procedure and the tables in my data sets. I have come up with two options for doing this:
Add a new formal parameter to my FillDataSet method (KeyValuePair<string, string>[] mappings) that would provide the information for the table mappings.
Create a DataSetMappingFactory that would take a DataSet as a parameter and then add the appropriate mappings based on its type. If it were an unknown type, then it wouldn't add any mappings. Then, it would return the DataSet to the FillDataSet method.
Do any of you have other thoughts about how I could approach this problem? Also, does anyone want to weigh in on an approach that would be best in terms of object-oriented design?
The first question I'd ask is: do I really need to do this at all? The typed DataSet designer already gives you a tool for defining the mapping between a stored procedure and a DataTable. If you design your DataSet with care, you already have a Fill method for every DataTable. Does it make sense to reinvent that wheel?
I think it might. It's really cool that there's a way to maintain that mapping, but everything in that mapping is frozen at compile time. If you want to change the mapping, you need to rebuild your assembly. Also, the typed DataSet designer doesn't deal with stored procedures that return multiple result sets. If you want to generically map parameters and values, you have to use reflection to get the argument lists from the Fill methods. It may be that if you look at those factors (and others I'm not thinking of), working with the existing tool isn't the way to go.
In that case, it seems to me that your goal is to be able to populate a DataSet from a series of stored procedures with code that knows as little as possible about the implementation details. So this is a process that's going to be driven by metadata. When you have a process driven by metadata, what's going to matter the most to you in the long run is how easy it's going to be to maintain the metadata that the process uses. Once you get the code working, you probably won't touch it very much. But you'll be tweaking the metadata constantly.
If I look at the problem from that perspective, the first thing I think to do is design a typed DataSet to contain the metadata. This gives us a bunch of things that we'd otherwise have to figure out:
a persistence format
a straightforward path to building a bound UI
an equally straightforward path to persisting the metadata in a database if we decide to go down that road
an object model for navigating the data.
In this DataSet, you'd have a DataSetType table, keyed on the Type of each typed DataSet you intend to be able to populate. It would have a child StoredProcedures table, with a row for each SP that gets called. That would have two child tables, Parameter and DataTableType. There would be one DataTableType row, ordered by ordinal position, for each result set the SP is expected to return. The DataTableType table would have a child ColumnMapping table. It's in that table that you'd maintain the mappings between the columns in the result set and the columns in the table you're populating.
Make sure all of your DataRelations are Nested, and that you've given rational names to the relations. (I like FK_childtablename_parenttablename.)
Once you have this, the class design becomes pretty straightforward. The class has a reference to the metadata DataSet, the Connection, etc., and it exposes a method with this signature:
public void FillDataSet(DataSet targetDs, Dictionary<string, Dictionary<string, KeyValuePair<string, string>>> parameterMap);
You start by using the targetDs's Type to find the top-level DataSetType row. Then all of the private methods iterate through lists of DataRows returned by DataRow.GetChildRows(). And you add an event or two to the class design, so that as it performs the operation it can raise events to let the calling application know how it's progressing.
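One of those private methods might look roughly like this; it assumes SqlDataAdapter (per the edit at the top of the question), and the metadata column and relation names are illustrative:

// Runs one StoredProcedures metadata row and maps its result sets onto the target
// DataSet using ADO.NET's default "Table", "Table1", ... source-table names.
// (Requires System.Data, System.Data.SqlClient and System.Collections.Generic.)
private void FillFromStoredProcedure(DataSet targetDs, SqlConnection connection,
    DataRow spRow, Dictionary<string, string> parameters)
{
    using (var command = new SqlCommand((string)spRow["ProcedureName"], connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        foreach (var parameter in parameters)
        {
            command.Parameters.AddWithValue(parameter.Key, parameter.Value);
        }

        var adapter = new SqlDataAdapter(command);

        // One DataTableType child row per expected result set, in ordinal order.
        DataRow[] tableRows = spRow.GetChildRows("FK_DataTableType_StoredProcedures");
        for (int i = 0; i < tableRows.Length; i++)
        {
            string sourceName = i == 0 ? "Table" : "Table" + i;
            var tableMapping = adapter.TableMappings.Add(sourceName, (string)tableRows[i]["TargetTableName"]);

            // ColumnMapping child rows: result-set column -> DataTable column.
            foreach (DataRow columnRow in tableRows[i].GetChildRows("FK_ColumnMapping_DataTableType"))
            {
                tableMapping.ColumnMappings.Add((string)columnRow["SourceColumn"], (string)columnRow["TargetColumn"]);
            }
        }

        adapter.Fill(targetDs);
    }
}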
Probably the first place I'd expect to refactor this design is in giving me more fine-grained control over the filling process. For instance, as designed, there's only one set of SPs per typed DataSet. What if I only want to fill a subset of the DataSet? As designed, I can't. But you could easily make the primary key of the DataSetType table two-part, with the parts being DataSet type and some string key (with a name like SPSetName, or OperationName), and add the second part of the key to the FillDataSet argument list.