I'm writing an app in C# that connects to SQL Server using Entity Framework. Every instance of the app shares the tables and some variables (ints, strings, and bools).
What's the best way to share those variables (ints, strings, and bools) via tables in SQL Server?
Since table columns have fixed types, a single table won't do it without losing type safety in C#, because every value would have to be converted to a string or boxed to object.
The two solutions I came up with are: one table with three columns (int, varchar, bit), writing each value to the appropriately typed column; or three tables with one column each.
Or maybe I'm totally missing the point here...
The question would be: what's the most elegant way to accomplish saving typed data to a SQL Server?
You can use sql_variant for that purpose:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/sql-variant-transact-sql?view=sql-server-2017
CREATE TABLE tableA (colA sql_variant, colB int);

INSERT INTO tableA VALUES (CAST(46279.1 AS decimal(8,2)), 1689);

SELECT SQL_VARIANT_PROPERTY(colA, 'BaseType') AS 'Base Type',
       SQL_VARIANT_PROPERTY(colA, 'Precision') AS 'Precision',
       SQL_VARIANT_PROPERTY(colA, 'Scale') AS 'Scale'
FROM tableA
WHERE colB = 1689;
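On the C# side, a sql_variant column comes back to ADO.NET as a plain object, so you recover type safety with a runtime type check. A minimal sketch against the tableA example above (connection string and printed output are illustrative):

using System;
using System.Data.SqlClient;

class VariantDemo
{
    static void Main()
    {
        // Connection string is an assumption; point it at your own server.
        const string cs = "Server=.;Database=MyDb;Integrated Security=true";

        object raw;
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT colA FROM tableA WHERE colB = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 1689);
            conn.Open();
            raw = cmd.ExecuteScalar();
        }

        // Pattern-match on the CLR type the provider inferred from the variant.
        switch (raw)
        {
            case int i:     Console.WriteLine($"int: {i}"); break;
            case string s:  Console.WriteLine($"string: {s}"); break;
            case bool b:    Console.WriteLine($"bool: {b}"); break;
            case decimal d: Console.WriteLine($"decimal: {d}"); break;
            default:        Console.WriteLine($"other: {raw}"); break;
        }
    }
}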
I want to develop a dynamic insert method with LINQ.
Let's assume I have two tables like:
Product
{
    id int,
    name varchar(20),
    price int
}

Factory
{
    id int,
    name varchar(50),
    address varchar(240)
}
But assume that I don't know the tables at compile time, only their names.
This is how I get the column names of the table which I know the name of:
var db = new DataContext();
var columnNames = db.Mapping.MappingSource
    .GetModel(typeof(DataContext))
    .GetMetaType(typeof(table_name))
    .DataMembers;
But I can't figure out how to get the column names of a table whose name I only know at runtime. What I tried so far:
context.Mapping.GetTables().FirstOrDefault(
    x => x.TableName == table_name).Model.ContextType.Attributes;
table_name changes dynamically and can be Product, Factory, etc. But I think this way is a dead end.
So, in the end, I couldn't figure out how to get the column names of an arbitrary table, let alone insert a row into one.
I can do this the classic way using SqlCommands, but I want to know how to do it with LINQ.
As Mant101 said in his comment:
I don't think LINQ is going to help here. You could write some code in ADO.NET to get the column definitions from the database, then use some reflection to build an insert/update statement based on the properties of the object that match the columns. I would ask why you need to do this; are you working with some database that is in an unknown state when the app runs?
And StriplingWarrior backs him up with:
Mant101 is right: The whole purpose of an object-relational mapper is to make it easier to work with persisted data by converting it into objects that you can use in the programming language. Since you're not going to program against those objects in this case, you don't get any value from LINQ-to-SQL. You're better off bypassing the ORM and using straight SQL/ADO.NET.
Inserting into an arbitrary table with generic LINQ methods seems impossible.
However, you can do it with plain SQL.
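For illustration, a sketch of that SQL/ADO.NET route, assuming you know nothing about the table but its name: discover the columns via INFORMATION_SCHEMA, then build a parameterized INSERT. The method shape and the dictionary of values are illustrative choices, not from the question:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class DynamicSql
{
    // values maps column name -> value for the row to insert.
    public static void InsertRow(string connectionString, string tableName,
                                 IDictionary<string, object> values)
    {
        // tableName is concatenated into SQL, so validate it against a
        // whitelist of known table names before calling this.
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        // 1. Discover the table's columns.
        var columns = new List<string>();
        using (var cmd = new SqlCommand(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @t",
            conn))
        {
            cmd.Parameters.AddWithValue("@t", tableName);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                columns.Add(reader.GetString(0));
        }

        // 2. Build a parameterized INSERT for the columns we have values for.
        var used = columns.Where(values.ContainsKey).ToList();
        var sql = $"INSERT INTO [{tableName}] " +
                  $"({string.Join(", ", used.Select(c => $"[{c}]"))}) " +
                  $"VALUES ({string.Join(", ", used.Select(c => "@" + c))})";

        using var insert = new SqlCommand(sql, conn);
        foreach (var c in used)
            insert.Parameters.AddWithValue("@" + c, values[c]);
        insert.ExecuteNonQuery();
    }
}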
If I have a record type defined in a PL/SQL package, along with a procedure in the same package, is it possible to create the record type on the .NET (C#) side and pass it to the procedure as type t_my_rec? I'm sure I could do this using UDTs (Oracle user-defined data types), but since I am using the managed driver, they aren't yet supported.
TYPE t_my_rec IS RECORD
(
    item_id   items.item_id%TYPE,
    item_name items.item_name%TYPE
);

TYPE t_arr_my_rec IS TABLE OF t_my_rec INDEX BY PLS_INTEGER;
PROCEDURE insert_my_rec
(
    p_my_rec IN t_my_rec
);

PROCEDURE bulk_insert_my_rec
(
    p_my_recs IN t_arr_my_rec
);
Ideally I'd like to avoid defining array types for every single item in the table to do bulk FORALL insert statements.
I really appreciate the help!
I don't think you can deal with Oracle type declarations in ODP.NET outside of a UDT, and even then I've only done so with type declarations made in the database rather than in a package.
You could also consider passing the collection of objects across as XML and parsing it on both sides. That ensures you can define the structures in play, although you will incur the overhead of creating/validating/parsing the string, and the data overhead of passing numbers as strings rather than as a couple of bytes.
Heck, in the old days before any decent UDT or XML support, I remember stuffing a bunch of data into a CLOB to pass across and parse out, once both sides agreed on the format. Works OK if you never ever EVER expect the data format to change. A flipping maintenance nightmare otherwise. But doable.
No, it is not possible. You'll need to use some other technique, such as flattening the record into multiple stored procedure parameters, using a temp table, etc.
Here is a relevant thread over on the OTN forums.
https://community.oracle.com/thread/3620578
I had a similar problem. I solved it by using an associative array for each field in the record: instead of a single parameter of type PL/SQL table of records, have as many parameters as there are columns. In the package I defined two basic associative array types, of varchar2 and number.
CREATE OR REPLACE PACKAGE xxx AS
    TYPE t_tbl_alfa IS TABLE OF varchar2(50) INDEX BY binary_integer;
    TYPE t_tbl_num  IS TABLE OF number INDEX BY binary_integer;
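On the C# side, the managed driver can bind such associative arrays via OracleCollectionType.PLSQLAssociativeArray. A minimal sketch, assuming the package also exposes a procedure taking one array per column (insert_items is a hypothetical name):

using System.Data;
using Oracle.ManagedDataAccess.Client;

class AssocArrayDemo
{
    static void Main()
    {
        // Assumed: the package declares something like
        //   PROCEDURE insert_items(p_ids IN t_tbl_num, p_names IN t_tbl_alfa);
        // Connection string is illustrative.
        const string cs = "User Id=scott;Password=tiger;Data Source=orcl";

        using (var conn = new OracleConnection(cs))
        using (var cmd = new OracleCommand("xxx.insert_items", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            cmd.Parameters.Add(new OracleParameter
            {
                ParameterName = "p_ids",
                OracleDbType = OracleDbType.Int32,
                CollectionType = OracleCollectionType.PLSQLAssociativeArray,
                Value = new[] { 1, 2, 3 },
                Size = 3 // number of elements bound
            });

            cmd.Parameters.Add(new OracleParameter
            {
                ParameterName = "p_names",
                OracleDbType = OracleDbType.Varchar2,
                CollectionType = OracleCollectionType.PLSQLAssociativeArray,
                Value = new[] { "hammer", "wrench", "saw" },
                Size = 3,
                ArrayBindSize = new[] { 50, 50, 50 } // max length per varchar2 element
            });

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}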
I have many tables in the database that have at least one column containing a URL, and these are repeated a lot throughout the database. So I normalize them to a dedicated table and just use numeric IDs everywhere I need them. I often need to join on them, so numeric IDs are much better than full strings.
In MySQL + C++, to insert a lot of URLs in one shot, I used to use multi-row INSERT IGNOREs or mysql_set_local_infile_handler(). Then a batch SELECT with IN () to pull the IDs back from the database.
In C# + SQL Server I noticed there's a SqlBulkCopy class that's very useful and fast for mass insertion. But I also need mass selection to resolve the URL IDs after I insert them. Is there any such helper class that would work the same as SELECT WHERE IN (many, urls, here)?
Or do you have a better idea for turning URLs into numbers in a consistent manner in C#? I thought about crc32'ing or crc64'ing the URLs, but I worry about collisions. I wouldn't care if collisions were few, but if not... it would be an issue.
PS: We're talking about tens of millions of URLs, to give an idea of scale.
PS: For basic large inserts, SqlBulkCopy is faster than SqlDbType.Structured. Plus it has the SqlRowsCopied event for a status tracking callback.
There is an even better way than SqlBulkCopy.
It's called structured parameters, and it allows you to pass a table-valued parameter to a stored procedure or query through ADO.NET.
There are code examples in the article, so I will only highlight what you need to do to get it up and working:
Create a user-defined table type in the database. You can call it UrlTable.
Set up an SP or query which does the SELECT by joining with a table variable of type UrlTable.
In your backing code (C#), create a DataTable with the same structure as UrlTable, populate it with URLs, and pass it to a SqlCommand as a structured parameter (see the sketch below). Note that column order correspondence is critical between the DataTable and the table type.
What ADO.NET does behind the scenes (you can see this if you profile the query) is declare a variable of type UrlTable before the query and populate it (with INSERT statements) from what you pass in the structured parameter.
Other than that, query-wise, you can do pretty much everything with table-valued parameters in SQL (join, select, etc).
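For concreteness, a minimal sketch of those steps, assuming a table type CREATE TYPE dbo.UrlTable AS TABLE (Url nvarchar(2083)) and a lookup table dbo.Urls(Id, Url); both names are illustrative, and the TVP is joined in an ad-hoc parameterized query rather than an SP:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class UrlLookup
{
    // Returns (Id, Url) pairs for the given URLs.
    public static DataTable ResolveIds(string connectionString, IEnumerable<string> urls)
    {
        // Shape must match dbo.UrlTable; column order matters.
        var tvp = new DataTable();
        tvp.Columns.Add("Url", typeof(string));
        foreach (var url in urls)
            tvp.Rows.Add(url);

        using var conn = new SqlConnection(connectionString);
        using var cmd = new SqlCommand(
            "SELECT u.Id, u.Url FROM dbo.Urls u JOIN @urls t ON t.Url = u.Url", conn);

        var p = cmd.Parameters.AddWithValue("@urls", tvp);
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.UrlTable"; // the user-defined table type

        var result = new DataTable();
        conn.Open();
        using var reader = cmd.ExecuteReader();
        result.Load(reader);
        return result;
    }
}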
I think you could use the IGNORE_DUP_KEY option on your index. If you set IGNORE_DUP_KEY = ON on the index of the URL column, the duplicate values are simply ignored and the rest are inserted appropriately.
In a C# program I have an array with about 100,000 elements.
Then I have a SQL Server 2008 table whose primary key column contains nearly all elements of the array (but a few are missing). The table can have up to 30,000,000 rows.
Now I want to determine which elements of the array do not exist in the table. How can this be achieved efficiently?
The most efficient method would probably be to bulk-insert those 100,000 elements into a temp table and then perform the comparison within the database itself.
(Note that I haven't tested this theory; it's just an educated guess.)
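A sketch of that idea, with the same untested caveat: bulk-insert the keys into a temp table with SqlBulkCopy, then let the server compute the difference with EXCEPT. Table and column names (#Keys, BigTable, Id) are illustrative:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class MissingKeys
{
    // Returns the keys that are in the array but not in BigTable.
    public static List<int> Find(string connectionString, int[] keys)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        // The temp table lives for the lifetime of this connection.
        using (var create = new SqlCommand("CREATE TABLE #Keys (Id int PRIMARY KEY)", conn))
            create.ExecuteNonQuery();

        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        foreach (var k in keys)
            table.Rows.Add(k);

        using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#Keys" })
            bulk.WriteToServer(table);

        // Keys present in the array but absent from the big table.
        var missing = new List<int>();
        using var cmd = new SqlCommand(
            "SELECT Id FROM #Keys EXCEPT SELECT Id FROM BigTable", conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            missing.Add(reader.GetInt32(0));
        return missing;
    }
}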
Query the table with a

SELECT <primarykey> FROM <table> WHERE <primarykey> IN (<primary keys of your list of elements in C#>)

This should be faster than inserting all the rows into a table and then checking for missing elements with an EXCEPT/MINUS, because it does not involve any write operations.
Once you have the list of primary keys which are common, pull it back into C# and compare.
A way to avoid creating temp tables is to use a stored procedure which accepts a table-valued parameter of a user-defined table type (UDTT). This table type would have a schema of one column, with a data type matching that of your array.
If you populate a DataTable (with a schema matching the UDTT schema) with your array values and supply the DataTable as your stored proc's parameter, you can pass up all 100,000 of your items in their SQL binary format. The proc can just do a join between the 30M-row table and the table-valued parameter, returning the items in the TVP table with no matches in the master table.
This avoids needing to build massive IN statements.
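A compact sketch of that UDTT approach, assuming a one-column type (dbo.IntList, a hypothetical name) and dbo.BigTable as the 30M-row table; the left join keeps only the TVP rows with no match:

using System;
using System.Data;
using System.Data.SqlClient;

class TvpDemo
{
    static void Main()
    {
        // Assumed: CREATE TYPE dbo.IntList AS TABLE (Id int PRIMARY KEY);
        // Connection string and array contents are illustrative.
        const string cs = "Server=.;Database=MyDb;Integrated Security=true";
        int[] myArray = { 1, 2, 3, 42 };

        var tvp = new DataTable();
        tvp.Columns.Add("Id", typeof(int));
        foreach (var id in myArray)
            tvp.Rows.Add(id);

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT t.Id FROM @ids t LEFT JOIN dbo.BigTable b ON b.Id = t.Id WHERE b.Id IS NULL",
            conn))
        {
            var p = cmd.Parameters.AddWithValue("@ids", tvp);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.IntList";

            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader.GetInt32(0)); // missing from BigTable
        }
    }
}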
EDIT: Regarding the comment from @Kyro below:
I'm now less confident in this approach. I found an article showing the under-the-covers row-by-row inserts that Kyro describes. What you might gain by sending binary data over the network rather than a large T-SQL WHERE IN () statement may well be taken away by the row-by-row performance on the SQL side. However, it's a fairly simple code approach, so it might just be worth a quick test. Let us know how you get on.
I have SQL Server 2008 and VS 2008 Pro, and I am coding in C#. I accept a text input file from the customer and parse this data into a DataTable in my C# aspx.cs file. I know this is working correctly, so the DataTable is populated. But now how do I load this data into an SP? Can I use a dynamic table parameter, or should I use XML instead?
The problem is that the number of required columns varies depending on which table they want to insert into. The details: I let the user select which table they want to append data to. For simplicity, let's say:
TABLE_NAME   NUM_COLS
A            2
B            3
C            4
And also let's assume that the first column in each of these is an INT primary key.
So if they choose Table B, then the DataTable would look something like:

PK   C1    C2    C3
1    'd'   'e'   '3/10/99'
2    'g'   'h'   '4/10/99'
So now I want to append this data above into Table B in my AdventureWorks DB. What is the easiest way to implement this both in the SP definition and also the C# code which calls this SP?
Thank you!
I think I understand what you're asking. I'm going to assume each row of your data import will map directly/cleanly to a table in the database. I am also going to assume your application logic can determine where each row of data shall be persisted.
This said, I suggest working with each row of the .NET DataTable individually rather than passing the data in bulk to SQL as a single stored procedure parameter and then depending on SQL to do any data parsing and table mapping.
Basically, loop through your DataTable, determine the type of data and execute the appropriate insert for each row. I hope this helps.
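For example, a rough sketch of that loop, building one parameterized INSERT from the DataTable's own columns (the method shape is illustrative):

using System.Data;
using System.Data.SqlClient;
using System.Linq;

static class RowLoader
{
    // Inserts every row of the DataTable into the chosen table, one
    // parameterized INSERT per row. Because tableName is concatenated into
    // the SQL, validate it against the known tables (A, B, C) first.
    public static void InsertRows(string connectionString, string tableName, DataTable data)
    {
        var cols = data.Columns.Cast<DataColumn>()
                       .Select(c => c.ColumnName)
                       .ToList();
        var sql = $"INSERT INTO [{tableName}] " +
                  $"({string.Join(", ", cols.Select(c => $"[{c}]"))}) " +
                  $"VALUES ({string.Join(", ", cols.Select(c => "@" + c))})";

        using var conn = new SqlConnection(connectionString);
        conn.Open();
        foreach (DataRow row in data.Rows)
        {
            using var cmd = new SqlCommand(sql, conn);
            foreach (var c in cols)
                cmd.Parameters.AddWithValue("@" + c, row[c]);
            cmd.ExecuteNonQuery();
        }
    }
}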