I have the following situation: I need to create some temporary tables to optimize a load problem that has recently occurred. It seems that LINQ to SQL doesn't work well with temporary tables unless they are mapped in the DBML. Honestly, I still don't understand how scope works in LINQ to SQL. With that in mind, I set out to define every temporary table in the DBML.
But, as always, things can't be that easy. I can't define at compile time (which is what LINQ needs) what name my temporary table will have, because it is only determined when a user logs on to the system. To make things worse, I will have several of these dynamic temporary tables, so there's no way I can map them all in the DBML.
I then tried to create my temporary tables through ExecuteCommand, select from them, and cast the results to a strong type (TempTableDefinition). However, when I tried to insert values into this newly created temporary table, I got a SqlException saying 'Invalid object name #NewTempTable' (the same name I used to create the table).
It appears that I will have to use plain old ADO.NET to create every temporary table and map its properties to a strongly typed object (I prefer this approach). I really wouldn't like to mix ADO.NET with LINQ, since I just read that it's a bad idea. Plus, I prefer LINQ's approach of strongly typed objects to the ADO.NET way.
Summary:
So, is it even possible to create dynamic temporary tables that LINQ to SQL can work with? I can't define their names at compile time, only at runtime. Any tips will be appreciated.
The problem seems to be that L2S by default opens and closes the connection for each logical request. That kills your temp tables.
Either open the connection manually (and close it, of course) or wrap everything in a TransactionScope which integrates with L2S and keeps the connection open.
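For example, here is a minimal sketch of the manual-connection idea, assuming MyDataContext is your DBML-generated context and TempTableDefinition is a plain class you define yourself with properties matching the temp table's columns:

// TempTableDefinition's property names must match the temp table's column names
public class TempTableDefinition
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// ...inside your data-access method:
using (var db = new MyDataContext())
{
    db.Connection.Open();   // keep one session open so the #temp table survives between commands
    try
    {
        db.ExecuteCommand("CREATE TABLE #NewTempTable (Id INT, Name NVARCHAR(50))");
        db.ExecuteCommand("INSERT INTO #NewTempTable (Id, Name) VALUES ({0}, {1})", 42, "test");

        // ExecuteQuery maps the result set onto the strongly typed class above
        var rows = db.ExecuteQuery<TempTableDefinition>("SELECT Id, Name FROM #NewTempTable").ToList();
    }
    finally
    {
        db.Connection.Close();  // the temp table is dropped when the session ends
    }
}

Wrapping the same work in a TransactionScope should have the same effect, per the point above, since the connection stays enlisted until the scope completes.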
"to optimize a load problem"
LINQ to SQL and batch/bulk loading won't work well together anyway. Every insert/update/delete results in a single statement (all in one transaction, but one statement each nonetheless). For hardcore performance, avoid LINQ to SQL for the load itself; once your data is in, use LINQ with all its advantages, like strong typing.
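If the bulk load itself is the bottleneck, the usual ADO.NET route is SqlBulkCopy. A minimal sketch, with a made-up staging table and connection string:

using System.Data;
using System.Data.SqlClient;

string connectionString = "<your connection string>";

// Build the rows in memory, then push them in one bulk operation instead of N single INSERTs
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "first");
table.Rows.Add(2, "second");

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.StagingTable";
    bulk.WriteToServer(table);
}

Once the data is loaded this way, you can go back to LINQ for querying it.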
I apologize if this is duplicative; I could find nothing directly pertaining.
The difficulty involves EF Core (v 3.1.8, if it matters), but is not specific or restricted thereto. I am doing code first, creating a number of entities, but the key point is that I am getting my initial data set from an app that I am trying to replace. My new app has a number of structural differences in every corresponding entity, but the data in the old app is still critical, so I will be transferring it to my new database. (Old db is hosted by MS SQL 2008; new db is hosted by MS SQL 2019, if it matters).
Most of the key fields are GUIDs, and the problem is that in EF Core, at the point in the future when I want to use the new app to do more data entry, I will also want the database to choose the GUID. In EF Core Fluent API parlance, that would be, for example:
modelBuilder.Entity("ReplaceOldApp.Models.Address", b =>
{
    b.Property<Guid>("AddressID")
        .ValueGeneratedOnAdd()
        .HasColumnType("uniqueidentifier");
});
However, if I inform EF Core that I want the database to create the key, then it will create the tables such that when I try to transfer the data from the old database (whether using EF or some other means), the new database will ignore the old GUID and create a new, unrelated one. (Or at least, that's what I think will happen. I'm not ready to try it yet.) If that happens, then all of the data from, say, the old Person entity and its related entities (such as the Address entity implied above) will no longer be linked in the new database, because every record will have a shiny new GUID. I will have all the information, and no way to actually use it.
Obviously I can tell EF Core to inform the database that it will not be creating the GUIDs, and I can then read, unmunge and transfer the data from the old database to the new without fear of data loss (God willing). But then going forward, for any new data entry, the GUIDs will not be automatically genned. I can of course then mod my IEntityTypeConfiguration Fluent API classes for the various entities and do a second migration, re-genning the affected tables, but I'm worried that EF Core will decide that it needs to DROP the tables to accommodate such a change. (Again, I do not know for sure because I have not tried it: sorry.)
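If it helps to make the trade-off concrete, the "database does not generate the key" configuration mentioned above would look something like this, reusing the hypothetical Address mapping from before:

modelBuilder.Entity("ReplaceOldApp.Models.Address", b =>
{
    b.Property<Guid>("AddressID")
        .ValueGeneratedNever()              // the application (or the migrated data) supplies the GUID
        .HasColumnType("uniqueidentifier");
});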
So my question is: How would you approach such a situation? Should I ignore EF and do something clever with MS SQL Studio? Should I do two migrations with a transfer in-between? Should I tell the database, even though it has been told to gen the keys, somehow to accept the old keys without changing things, perhaps via LINQ?
============== Edit:
I'm sure SSIS would work to transfer the data from old to new databases, but the learning curve appears daunting, and I am only trying to solve one problem, not gain a new career. Powershell ditto, although it may be a bit more of a hacker's tool, and as such knowledge of it might assist tweaking or help to solve a diverse set of one-time SQL Server headaches. However, again, as would you, I prefer to use what I know, or failing that, learn or learn more about a tool which promises to serve me consistently into the future.
With the very welcome new (to me) information about IDENTITY_INSERT, and information gained from Linq To Sql and identity_insert, I believe I should not use LINQ to SQL because it may assume that IDENTITY_INSERT is OFF and simply filter out the crucial GUID, failing therefore to provide it to the target server. Rather, it seems I can use C# to produce a series of generated SQL statements, and then run each one on the target server inside a TransactionScope(). Because each such insert will thereby run 'in the same connection', the state of IDENTITY_INSERT will be preserved for that entire insert transaction, and (creek don't rise) it should work.
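A minimal sketch of that approach, assuming one connection enlisted in a TransactionScope; the table name and statement list below are placeholders:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Transactions;

string targetConnectionString = "<target connection string>";
// The INSERT statements would be generated earlier in C# from the old database's rows
var generatedInsertStatements = new List<string>();

using (var scope = new TransactionScope())
using (var conn = new SqlConnection(targetConnectionString))
{
    conn.Open();   // the connection enlists in the ambient transaction

    void Exec(string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }

    // IDENTITY_INSERT is a session-level setting, so it holds as long as this one connection is used
    Exec("SET IDENTITY_INSERT dbo.Person ON");
    foreach (var insertSql in generatedInsertStatements)
        Exec(insertSql);
    Exec("SET IDENTITY_INSERT dbo.Person OFF");

    scope.Complete();   // commit everything, or roll back if anything throws
}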
Again, I appreciate your answer, Randy in Marin. It has, it seems, led me to an approach that will work within the potential constraints of my context (EF Core), while allowing me to preserve the crucial existing IDENTITY information. Peace.
Not being an EF programmer, I don't know if there is an option for identity insert that you can enable for a migration. You might search the term to see if it comes up.
Our team supports database migrations. We can do it a number of ways. I would not even consider EF, because it's not designed for data migrations - or for database design. (And because we tend to use what we know.)
This is not the way I would do it, but it might be better than SSIS if you have not used SSIS. If the tables are in the same database or in databases on the same server, you can use T-SQL to load each table one at a time. Even if not on the same server, a linked server would allow a distributed transaction. (I avoid linked servers like the plague, but for a one time thing like a migration I would tolerate it. I would rather restore a copy of the source database to the destination server to use as a source. Distributed transactions gone wrong have forced me to reboot critical servers.)
Each table can have a 4 part name. If the server part (e.g., using a linked server name) is not present, the local instance is used. If the database part is not present, the current database is used. This is the format I assume for the "src_table" and "dst_table".
[myserver\myinstance].[mydatabase].[myschema].[mytable]
Each table is loaded with T-SQL as follows:
TRUNCATE TABLE dst_table
SET IDENTITY_INSERT dst_table ON
INSERT dst_table (...) SELECT ... FROM src_table
SET IDENTITY_INSERT dst_table OFF -- must be turned off - only 1 table can have this ON
If there are foreign keys, some tables (e.g., def tables) would need to be loaded first.
If the table does not have an IDENTITY column (EF code creates all values), you don't use the IDENTITY_INSERT stuff. It will fail if you use it and there is not an identity column. It will fail if you don't use it and try to insert into an identity column.
If there is a lot of data in a table, the transaction might be too big or slow. Inserting in batches might be called for.
If it were something to run on a schedule, I would likely create an SSIS package to do the load.
If I wanted to try something new, I would use PowerShell and the DBATools module cmdlets to see if extracting to CSV and importing the CSV would be efficient. The import cmdlet has a column mapping parameter, among many others. PowerShell could be used to do transformation, but I think this crosses over into SSIS territory.
I have dealt with migrations where the GUIDs and IDs no longer related after the move. Using queries joining the new data to the old data, we were able to fix the related values. It's likely more work to fix it after than to plan for it to be correct from the start.
I've recently studied some LINQ sources and decided to use it in the project I'm working on. Everything is almost clear except for one thing.
I'm making complicated reports that are built up from several tables. Earlier I used stored procedures for this purpose. I formed several temporary pieces of data that I stored in temporary tables and then joined them together using a series of two-table joins.
Trouble is: LINQ doesn't allow the creation of temporary tables. I know that complicated queries are built up in LINQ in a "cascading" way, but suppose I do it that way.
Question is: what am I going to receive in DataContext.Log in the end? I assume it's going to be a really huge query that is impossible to understand and use for debugging. Am I right? If I am, how to find a workaround for this? DataLoadOptions and LoadWith won't do, because I am processing all the data at once and using it will lead to an avalanche of queries.
Thanks in advance
LINQ definitely allows the equivalent of temporary tables: any data type that implements IEnumerable can serve (Lists being a good example). So if you need a temp table, you just do this...
var tempTable1 = [LINQ Query goes here].ToList()
and voila, you have a temp table. If you don't like generic variables, then you can create classes and use a List<tableClass> instead of a generic.
From this point, you can use the new Var as something to select from while running LINQ queries. If you need it to persist longer, you can store it in Session or pass the variable around.
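For instance, a small sketch of the pattern with invented entity and property names:

// Materialize an intermediate result once, then reuse it like a temp table
var activeOrders = (from o in db.Orders              // db is your DataContext
                    where o.Status == "Active"
                    select new { o.OrderId, o.CustomerId, o.Total })
                   .ToList();                        // executes the SQL once

// Later queries run in memory (LINQ to Objects) against the materialized list
var bigSpenders = from o in activeOrders
                  group o by o.CustomerId into g
                  where g.Sum(x => x.Total) > 1000
                  select new { CustomerId = g.Key, Total = g.Sum(x => x.Total) };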
I would like to have your advice.
I'm now developing a small WPF client application using C#, bindings, ADO.Net Entity Framework, ODP.net and an Oracle database.
The application is a small one, two XAML screens, about 15 tables. I was developing using entities by filling my entities through the application and using the SaveChanges method.
However, our DBA told me that I don't have the right to access the tables directly, only through stored procedures. I asked him why, and he said it is for security reasons, because using stored procedures forces you to provide the row identifier when deleting a record from a table.
According to him, the risk is that the application might otherwise delete all the rows in a table instead of only one row, whereas the stored procedure requires the id to be provided.
I find that a lot of overkill for only 15 tables.
What do you think about that?
Have you suggested to your DBA that you use LINQ to SQL? That way you can extract objects representing individual rows, which would make it far less likely that you would accidentally delete multiple rows.
Personally I think EDM might be overkill for the size of DB.
I should say I'm a big proponent of LINQ to SQL and not a big fan of SPs however....
LINQ2SQL on top of ODP.NET is a great stack. And I agree with Andrew, because you would have to write code to load the records, delete all of them, and commit the changes, it's not exactly something that can happen "easily".
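As an illustration, a single-row delete in LINQ to SQL typically looks something like this (the context, entity names, and key value are made up):

int idToDelete = 42;   // example key

using (var db = new MyDataContext())
{
    // You have to load the specific row first; its identity travels with the object
    var customer = db.Customers.Single(c => c.CustomerId == idToDelete);

    db.Customers.DeleteOnSubmit(customer);   // marks exactly this row for deletion
    db.SubmitChanges();                      // emits a DELETE whose WHERE clause carries the full key
}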
Forgetting a where clause in a LINQ statement is no easier or harder than forgetting a where clause in a stored procedure.
I'm looking for a good solution to make my life easier with regard to writing/reading to a SQL Server DB in a dynamic manner. I started with Entity Framework to make my life easier to begin with, but as the software becomes more general and config-driven, I'm finding that Entity Framework becomes less and less appropriate because it relies on specific objects defined at design time.
What I'd like to do:
Generate Tables/Fields at runtime.
Select rows from tables by table name with unknown schema into a generic data type (eg Dictionary)
Insert rows into tables by table name using generic data types (dictionary, where the string maps to a field name), where the data type mapping between typeof(object) and the field type is taken care of.
I've started implementing this stuff myself, but I imagine someone has already done it before.
Any suggestions?
Thanks.
I'm having trouble understanding how what you are describing is any different than plain old ADO.NET. DataTables are dynamically constructed based on a SQL query and a DataRow is just a special case of an IndexedDictionary (sometimes called an OrderedDictionary where you can access values via a string name or an integer index like a list). I make no judgment as to whether choosing ADO.NET is actually right or wrong for your needs, but I'm trying to understand why you seem to have ruled it out.
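For what it's worth, here is a sketch of that plain ADO.NET path, reading any table's rows into dictionaries (the connection string and table name are up to you; the table name should be validated against sys.tables before being spliced into SQL):

using System.Collections.Generic;
using System.Data.SqlClient;

public static List<Dictionary<string, object>> ReadTable(string connectionString, string tableName)
{
    var rows = new List<Dictionary<string, object>>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand($"SELECT * FROM [{tableName}]", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                rows.Add(row);
            }
        }
    }
    return rows;
}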
You can use Sql.Net ( http://sqlom.sourceforge.net ) to easily generate dynamic SQL statements in C#.
The iBATIS.NET (now MyBatis.NET) Data Mapper framework doesn't automatically generate tables or fields at runtime, but it does allow you to select and commit data via Dictionary objects.
It's probably not going to suit your needs completely (it's kind of tedious to set up, but pretty easy to maintain once it is), but it might be worth a look. Here's a link to the online documentation.
Other popular frameworks might do the same or similar, such as NHibernate.
I have a legacy database with a pretty evil design that I need to write some applications for. I am not allowed to touch the database design at all, seeing how this is a fragile old system held together by spit and prayers. I am of course very aware that this is not how the database should have been designed in the first place, but real life sometimes gets in the way.
For my new application I am using NHibernate (with Fluent for mappings and NHibernate LINQ for querying) and trying to Do Things Right. So there is IoC and repositories and more interfaces than I can count. However, the DB structure is giving me some headaches.
The system is very much focused around the concept of customers, and each customer lives in a campaign. These campaigns are created by one of the old applications. Each campaign in the system is defined in a table called CampaignSettings. One of the columns of this table is simply a text column called "Table", which refers to a database table that is created at the same time as the campaign entry in CampaignSettings. The name of this table is related to the name of the campaign, which can pretty much be anything the customer wants (within the constraints given by SQL Server (2000 or 2005)). In these tables the customers live.
So that is challenge #1 - I won't know the table names until runtime. And it will change from site to site - no static mapping I guess.
To make it even worse, we have challenge #2 - this campaign table is also dynamic in structure, meaning it has a certain number of columns that are always there (customer id, name, phone number, email address and other housekeeping stuff), and then there are two other sets of columns, added depending on the requirements of the customer on a case-by-case basis.
The old applications use SQL to get the column names present in the table, then add the ones they don't know about as "custom fields" in the application. I need to handle this.
I know I probably can't handle these challenges simply by using mapping magic, and I am prepared to do some ugly SQL in addition to the ORM goodness that I get from NHibernate (there are 20-some "static" tables in here as well which NHibernate handles beautifully) - but how?
I will create a Customer entity that I guess I can populate manually by doing direct SQL like
SELECT * FROM SomeCampaignTable WHERE id=<?>
and then going through the columns one by one and putting stuff where it belongs. Not fun, but necessary.
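If it helps, here is a rough sketch of that manual population step; the Customer class, its fixed columns, and the CustomFields dictionary are assumptions about the shape described above:

using System.Collections.Generic;
using System.Data;

public Customer LoadCustomer(IDbConnection conn, string campaignTable, int id)
{
    // conn is assumed to be open; columns every campaign table is assumed to share:
    var knownColumns = new HashSet<string> { "Id", "Name", "PhoneNumber", "EmailAddress" };

    using (var cmd = conn.CreateCommand())
    {
        // campaignTable comes from CampaignSettings; validate it before splicing it into SQL
        cmd.CommandText = $"SELECT * FROM [{campaignTable}] WHERE Id = @id";
        var p = cmd.CreateParameter();
        p.ParameterName = "@id";
        p.Value = id;
        cmd.Parameters.Add(p);

        using (var reader = cmd.ExecuteReader())
        {
            if (!reader.Read()) return null;

            var customer = new Customer
            {
                Id = reader.GetInt32(reader.GetOrdinal("Id")),
                Name = reader.GetString(reader.GetOrdinal("Name"))
            };

            // Everything not in the fixed set becomes a "custom field"
            for (int i = 0; i < reader.FieldCount; i++)
            {
                var column = reader.GetName(i);
                if (!knownColumns.Contains(column))
                    customer.CustomFields[column] = reader.IsDBNull(i) ? null : reader.GetValue(i);
            }
            return customer;
        }
    }
}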
And then I guess to discover the structure of the table in the first place, I could run SQL like this:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SomeCampaignTable'
ORDER BY ORDINAL_POSITION
And again do some manual work to configure my object to handle the custom fields.
My question is simply - how can I do this in NHibernate? Is it a simple matter of finding a way to run my own SQL, then looping through the results, or is there a more elegant way to take the pain out of it?
While I appreciate that this database design belongs in some kind of Museum of Torture somewhere, answers like "Add some views" or "Change the DB" won't help me - I will be shot if I suggest something like that.
Thanks for anything that could help save my sanity here!
You might be able to use NHibernate's Native SQL Entity Queries. Forget Linq2NH - not that I would recommend Linq2NH for any serious application anyway.
Check this page.
13.1.2. Entity queries
https://www.hibernate.org/hib_docs/nhibernate/1.2/reference/en/html/querysql.html
You could maybe do something like this:
Map your entities based on a 'fake' table to keep NHibernate happy when it compiles the mapping documents (I know you said you can't change the DB, but hopefully ok to make an empty table to keep NH happy).
Then run a query like this, as per 13.1.2 above:
sess.CreateSQLQuery("SELECT tempColumn1 as mappingFileColumn1, tempColumn2 as mappingFileColumn2, tempColumn3 as mappingFileColumn3 FROM tempTableName").AddEntity(typeof(Cat));
NHibernate should stitch together the columns you've returned with the mapped entity and give you an entity of type 'Cat' with all the properties populated. I am speculating here, though; I do not know for sure if this will work, but it's the only way I can think of to use NHibernate for this, given that you don't know the tables/columns at compile time. You definitely cannot use HQL, Criteria, or Linq2NH, since you don't know the tables and columns at compile time, and HQL et al. all convert your mappings to the mapped column names to produce the underlying SQL. Native SQL queries are the only way, I think.