I have created a stored procedure that returns a recordset model which joins several different tables in a single database. I have scoured the internet for information regarding "proper syntax for calling a stored procedure from mvc 6 C# without parameters" and have learned several things.
Firstly, there used to be what looked like understandable answers, to wit: "ExecuteSqlCommand" and "ExecuteSqlCommandAsync", which evidently are no longer used. Their replacements are explained here: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.x/breaking-changes#fromsql. They seem to be limited to "FromSql/FromSqlRaw" (which returns a recordset model) and "ExecuteSqlRaw/ExecuteSqlRawAsync()", which returns an integer with a specified meaning.
The second thing is that, everywhere "before and after" examples are given, the examples without parameters are skipped (as in all of the MS docs).
And thirdly, all of the examples that return a recordset model with data seem tied to a table, such as:
"var students = context.Students.FromSql("GetStudents 'Bill'").ToList();" And, as stored procedures are stored in their own directories, can reference any tables, multiple tables, or even no tables, I don't understand this relationship requirement in calling them.
(such as here:
https://www.entityframeworktutorial.net/efcore/working-with-stored-procedure-in-ef-core.aspx
var students = context.Students.FromSql("GetStudents 'Bill'").ToList();)
Or maybe they are models (since in this Entity Framework, everything seems to have the exact same name)... But what if your stored procedure isn't returning a recordset tied to a model? Do you have to create a new model just for the output of this stored procedure? I tried that, and it didn't seem to help.
So, my fundamental question is, how do I call a stored procedure without any parameters that returns a recordset model with data?
return await _context.(what goes here?).ExecuteSqlRaw("EXEC MyStoredProcedure").ToListAsync();
return await _context.ReturnModel.ExecuteSqlRaw("EXEC? MyStoredProcedure").ToListAsync();
Updated Code:
Added Model
public class InquiryQuote
{
public Inquiry inquiry { get; set; }
public int QuoteID { get; set; } = 0;
}
Added DBSet:
public virtual DbSet<InquiryQuote> InquiryQuotes { get; set; } = null!;
And updated the calling controller:
// GET: api/Inquiries
[HttpGet]
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
//return await _context.Inquiries.ToListAsync();
//return await _context.Inquiries.Where(i => i.YNDeleted == false).ToListAsync();
// var IQ = await _context.InquiryQuotes.FromSqlRaw("GetInquiryList").ToListAsync();
var IQ = await _context.InquiryQuotes.FromSqlRaw("EXEC GetInquiryList").ToListAsync();
return Ok(IQ);
}
Both versions of "IQ" return the same results:
System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values.
at Microsoft.Data.SqlClient.SqlBuffer.ThrowIfNull()
at Microsoft.Data.SqlClient.SqlBuffer.get_Int32()
at Microsoft.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
at lambda_method17(Closure , QueryContext , DbDataReader , Int32[] )
...
[And here is the image of the stored procedure run directly from my development site:][1]
UPDATE (And partial answer to the question in the comments):
I am using the Entity Framework, and will be performing data manipulation prior to returning the newly created InquiryQuotes model from the stored procedure to be used in several views.
Why am I getting a SQL error thrown in Postman (System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values.) when calling the stored procedure directly from Visual Studio returns a "dataset" as shown in my image? Does it have something to do with additional values being returned from the stored procedure that are not being accounted for, like "DECLARE @return_value Int / SELECT @return_value as 'Return Value'", or is this just a feature of executing it from VS? Since it has no input params, where is the NULL coming from?
[1]: https://i.stack.imgur.com/RJhMr.png
I seem to have found the answer (but still don't know the why...)
I started breaking it down bit by bit. The procedure ran on SQL, and ran remotely from Visual Studio when directly accessing SQL, but not when called. So I replaced the complex stored procedure with a simple one that returned all fields from the Inquiry table where the id matched an input variable (because I had LOTS of examples for that).
Stored Procedure:
CREATE PROCEDURE [dbo].[GetInquiry]
@InquiryID int = 0
AS
BEGIN
SET NOCOUNT ON
select i.*
FROM dbo.Inquiries i
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
END
And the controller method (with the InquiryQuote model modified to eliminate the "quote" requirement):
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
//return await _context.Inquiries.ToListAsync();
//return await _context.Inquiries.Where(i => i.YNDeleted == false).ToListAsync();
SqlParameter ID = new SqlParameter();
ID.Value = 0;
var IQ = _context.InquiryQuotes.FromSqlRaw("GetInquiryList {0}", ID).ToList();
//var IQ = await _context.InquiryQuotes.FromSqlRaw("dbo.GetInquiryList").ToListAsync();
return IQ;
}
And (after a bit of tweaking) it returned a JSON result of the inquiry data for the ID in Postman.
{
"inquiryId": 9,
(snip)
"ynDeleted": false
}
So, once I had something that at least worked, I added just the quote back in to this simple model and ran it again
select i.*, 0 AS Quoteid
FROM dbo.Inquiries i
LEFT JOIN dbo.Quotes q ON i.InquiryId = q.InquiryId
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
(I set the QuoteID to 0, because I had no data in the Quotes table yet).
AND the Model:
[Keyless]
public class InquiryQuote
{
public Inquiry inquiry { get; set; }
public int QuoteID { get; set; } = 0;
}
And ran it again, and the results were astonishing:
{
inquiry:{null},
QuoteID:0
}
I still don't understand why, but evidently my LEFT JOIN of the Inquiry table against an empty Quotes table came back null - even though, when running on SQL, results were returned... The join in SQL worked and returned rows, but somewhere between SQL and the API the data was being nullified...
To test this theory, I updated my InquiryQuote model to put the "inquiry" data and "quoteid" at the same level, to wit:
public class InquiryQuote
{
public int InquiryId { get; set; } = 0;
(snip)
public Boolean YNDeleted { get; set; } = false;
public int QuoteID { get; set; } = 0;
}
and the entire result set was null...
So at that point, I figured it must have something to do with that LEFT JOIN with a table with no records. So I added a blank (default) entry into that table and, voila, the data I was expecting:
{
"inquiryId": 9,
(snip)
"ynDeleted": false,
"quoteID": 0
}
So, now I have a working way to call a stored procedure with one parameter!!
I then updated the stored procedure to deal with nulls from the database as so:
select i.*, ISNULL(q.QuoteId,0) AS Quoteid
FROM dbo.Inquiries i
LEFT JOIN dbo.Quotes q ON i.InquiryId = q.InquiryId
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
And now am returning correct data.
I still don't know why the stored procedure runs in SQL and returns data, but throws a SQL error when run from the controller. That will require a deeper dive into the interconnectivity between SQL and the API and how errors are passed between the two. And I am pretty certain I will be able to figure out how to convert this call into one that uses no parameters.
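For the record, here is the shape I expect the parameterless call to take once I convert it (a sketch only, not yet tested; it assumes the flattened InquiryQuote model is marked [Keyless], or mapped with modelBuilder.Entity<InquiryQuote>().HasNoKey(), and that the procedure's column names match the model's properties):
// GET: api/Inquiries
[HttpGet]
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
    // No parameters: execute the procedure and materialize the rows directly.
    var IQ = await _context.InquiryQuotes
        .FromSqlRaw("EXEC dbo.GetInquiryList")
        .ToListAsync();
    return Ok(IQ);
}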
Thank you everyone for your help.
I'm using a SQL table as a job queue very similar to the article here: https://vladmihalcea.com/database-job-queue-skip-locked/
My problem is that I'm using Entity Framework 6 with Database First code and, from what I can tell, EF6 doesn't support the SKIP LOCKED command. Here is my table class; I'm using each computer as a worker to handle the task I'm passing it.
public partial class InProgress
{
public int ID { get; set; }
public string Task { get; set; }
public string Computer { get; set; }
public DateTime Date { get; set; }
}
Does anyone have any C# code they can share so I can make sure that no other computer can work on the same task as another computer at the same time?
UPDATE: I want to clarify that I'm not doing a traditional queue where you constantly add and remove to the queue or in this case table. I have a table that contains a task list and I'm constantly having the tasks worked on by multiple computers and when they are finished, they update the Date column with the finished time. I work on the tasks that have the oldest Date first.
Here is some pseudo code of what I'm trying to do based on the info provided
create procedure usp_enqueuePending
@date datetime,
@task varchar(50),
@computer varchar(50)
as
set nocount on;
insert into InProgresses(Date, Task, Computer)
values (@date, @task, @computer);
go
create procedure usp_dequeuePending
as
set nocount on;
declare @now datetime;
set @now = getutcdate();
with cte as (
select top(1)
Task
from InProgresses with (rowlock, updlock, readpast)
where Date < @now
order by Date)
delete from cte
output deleted.Task;
go
using var context = new context();
var dequeuedItem = context.usp_dequeuePending(); // not sure how to convert this back to an InProgress class
// do work here I'm guessing
// add to the queue when finished with it??
context.usp_enqueuePending(DateTime.UtcNow, task, computer);
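(For what it's worth, my rough idea for the C# side is a sketch like the one below, using EF6's Database.SqlQuery<T> so the dequeued row maps back onto my InProgress class by column name. It assumes usp_dequeuePending is changed to OUTPUT deleted.* rather than just deleted.Task, and the context name is made up.)
using (var context = new MyDbContext()) // hypothetical EF6 database-first context
{
    // Dequeue: columns returned by the proc are matched to InProgress properties by name.
    var item = context.Database
        .SqlQuery<InProgress>("EXEC dbo.usp_dequeuePending")
        .FirstOrDefault();

    if (item != null)
    {
        // ... do the work for item.Task here ...

        // Put it back with the finish time so it sorts to the end of the queue.
        context.Database.ExecuteSqlCommand(
            "EXEC dbo.usp_enqueuePending @date, @task, @computer",
            new SqlParameter("@date", DateTime.UtcNow),
            new SqlParameter("@task", item.Task),
            new SqlParameter("@computer", Environment.MachineName));
    }
}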
You can write custom queries in EF Core, see here. So you could do something like this:
dbContext.InProgress
.FromSqlRaw("SELECT * FROM model.InProgress WITH (rowlock, updlock, readpast)")
.Where(...) // do other LINQ stuff here
It's not super pretty, but I don't know of a better solution at the moment.
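One caveat: UPDLOCK/READPAST only help while a transaction is open, so in practice you would wrap the read and the follow-up update in one. A rough sketch (EF Core; the table and property names are assumed from the question's pseudo code):
using (var transaction = dbContext.Database.BeginTransaction())
{
    // Lock the oldest row; READPAST lets other workers skip rows that are already locked.
    var next = dbContext.InProgress
        .FromSqlRaw("SELECT TOP(1) * FROM dbo.InProgresses WITH (ROWLOCK, UPDLOCK, READPAST) ORDER BY Date")
        .AsEnumerable()   // run the raw SQL as-is, no further SQL composition
        .FirstOrDefault();

    if (next != null)
    {
        // ... do the work for next.Task ...
        next.Date = DateTime.UtcNow;   // mark it finished
        dbContext.SaveChanges();
    }

    transaction.Commit();
}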
I need to insert a huge CSV file into 2 tables with a 1:n relationship within a MySQL database.
The CSV file comes weekly, is about 1 GB, and needs to be appended to the existing data.
Each of the 2 tables has an auto-increment primary key.
I've tried:
Entity Framework (takes most time of all approaches)
Datasets (same)
Bulk Upload (doesn't support multiple tables)
MySqlCommand with Parameters (needs to be nested, my current approach)
MySqlCommand with StoredProcedure including a Transaction
Any further suggestions?
Let's say, simplified, this is my data structure:
public class User
{
public string FirstName { get; set; }
public string LastName { get; set; }
public List<string> Codes { get; set; }
}
I need to insert from the csv into this database:
User (1-n) Code
+---+-----+-----+ +---+---+-----+
|PID|FName|LName| |CID|PID|Code |
+---+-----+-----+ +---+---+-----+
| 1 |Jon | Foo | | 1 | 1 | ed3 |
| 2 |Max | Foo | | 2 | 1 | wst |
| 3 |Paul | Foo | | 3 | 2 | xsd |
+---+-----+-----+ +---+---+-----+
Here a sample line of the CSV-file
Jon;Foo;ed3,wst
A bulk load like LOAD DATA LOCAL INFILE is not possible because I have restricted write permissions.
Referring to your answer, I would replace
using (MySqlCommand myCmdNested = new MySqlCommand(cCommand, mConnection))
{
foreach (string Code in item.Codes)
{
myCmdNested.Parameters.Add(new MySqlParameter("#UserID", UID));
myCmdNested.Parameters.Add(new MySqlParameter("#Code", Code));
myCmdNested.ExecuteNonQuery();
}
}
with
List<string> lCodes = new List<string>();
foreach (string code in item.Codes)
{
lCodes.Add(String.Format("('{0}','{1}')", UID, MySqlHelper.EscapeString(code)));
}
string cCommand = "INSERT INTO Code (UserID, Code) VALUES " + string.Join(",", lCodes);
using (MySqlCommand myCmdNested = new MySqlCommand(cCommand, mConnection))
{
myCmdNested.ExecuteNonQuery();
}
That generates one INSERT statement instead of item.Codes.Count separate ones.
Given the great size of data, the best approach (performance wise) is to leave as much data processing to the database and not the application.
Create a temporary table in which the data from the .csv file will be temporarily saved.
CREATE TABLE `imported` (
`id` int(11) NOT NULL,
`firstname` varchar(45) DEFAULT NULL,
`lastname` varchar(45) DEFAULT NULL,
`codes` varchar(450) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Loading the data from the .csv to this table is pretty straightforward. I would suggest the use of MySqlCommand (which is also your current approach). Also, using the same MySqlConnection object for all INSERT statements will reduce the total execution time.
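For example, the loading step could look roughly like this (a sketch only; the file path, the ';' split, and the running id are assumptions based on the sample line in the question):
using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();

    // One reusable, parameterized command for every row of the file.
    using (var cmd = new MySqlCommand(
        "INSERT INTO imported (id, firstname, lastname, codes) VALUES (@id, @fn, @ln, @codes);",
        connection))
    {
        cmd.Parameters.Add("@id", MySqlDbType.Int32);
        cmd.Parameters.Add("@fn", MySqlDbType.VarChar);
        cmd.Parameters.Add("@ln", MySqlDbType.VarChar);
        cmd.Parameters.Add("@codes", MySqlDbType.VarChar);

        int id = 0;
        foreach (var line in File.ReadLines(csvPath))
        {
            var parts = line.Split(';');          // e.g. "Jon;Foo;ed3,wst"
            cmd.Parameters["@id"].Value = ++id;
            cmd.Parameters["@fn"].Value = parts[0];
            cmd.Parameters["@ln"].Value = parts[1];
            cmd.Parameters["@codes"].Value = parts[2];
            cmd.ExecuteNonQuery();
        }
    }
}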
Then to furthermore process the data, you can create a stored procedure that will handle it.
Assuming these two tables (taken from your simplified example):
CREATE TABLE `users` (
`PID` int(11) NOT NULL AUTO_INCREMENT,
`FName` varchar(45) DEFAULT NULL,
`LName` varchar(45) DEFAULT NULL,
PRIMARY KEY (`PID`)
) ENGINE=InnoDB AUTO_INCREMENT=3737 DEFAULT CHARSET=utf8;
and
CREATE TABLE `codes` (
`CID` int(11) NOT NULL AUTO_INCREMENT,
`PID` int(11) DEFAULT NULL,
`code` varchar(45) DEFAULT NULL,
PRIMARY KEY (`CID`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
you can have the following stored procedure.
CREATE DEFINER=`root`@`localhost` PROCEDURE `import_data`()
BEGIN
DECLARE fname VARCHAR(255);
DECLARE lname VARCHAR(255);
DECLARE codesstr VARCHAR(255);
DECLARE splitted_value VARCHAR(255);
DECLARE done INT DEFAULT 0;
DECLARE newid INT DEFAULT 0;
DECLARE occurance INT DEFAULT 0;
DECLARE i INT DEFAULT 0;
DECLARE cur CURSOR FOR SELECT firstname,lastname,codes FROM imported;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur;
import_loop: LOOP
FETCH cur INTO fname, lname, codesstr;
IF done = 1 THEN
LEAVE import_loop;
END IF;
INSERT INTO users (FName,LName) VALUES (fname, lname);
SET newid = LAST_INSERT_ID();
SET i=1;
SET occurance = (SELECT LENGTH(codesstr) - LENGTH(REPLACE(codesstr, ',', '')) + 1);
WHILE i <= occurance DO
SET splitted_value =
(SELECT REPLACE(SUBSTRING(SUBSTRING_INDEX(codesstr, ',', i),
LENGTH(SUBSTRING_INDEX(codesstr, ',', i - 1)) + 1), ',', ''));
INSERT INTO codes (PID, code) VALUES (newid, splitted_value);
SET i = i + 1;
END WHILE;
END LOOP;
CLOSE cur;
END
For every row in the source data, it makes an INSERT statement for the user table. Then there is a WHILE loop to split the comma separated codes and make for each one an INSERT statement for the codes table.
Regarding the use of LAST_INSERT_ID(), it is reliable on a PER CONNECTION basis (see doc here). If the MySQL connection used to run this stored procedure is not used by other transactions, the use of LAST_INSERT_ID() is safe.
The ID that was generated is maintained in the server on a per-connection basis. This means that the value returned by the function to a given client is the first AUTO_INCREMENT value generated for most recent statement affecting an AUTO_INCREMENT column by that client. This value cannot be affected by other clients, even if they generate AUTO_INCREMENT values of their own. This behavior ensures that each client can retrieve its own ID without concern for the activity of other clients, and without the need for locks or transactions.
Edit: Here is a variant for the OP that omits the temporary table imported. Instead of inserting the data from the .csv into the imported table, you call the SP to store it directly in your database.
CREATE DEFINER=`root`@`localhost` PROCEDURE `import_data`(IN fname VARCHAR(255), IN lname VARCHAR(255), IN codesstr VARCHAR(255))
BEGIN
DECLARE splitted_value VARCHAR(255);
DECLARE done INT DEFAULT 0;
DECLARE newid INT DEFAULT 0;
DECLARE occurance INT DEFAULT 0;
DECLARE i INT DEFAULT 0;
INSERT INTO users (FName,LName) VALUES (fname, lname);
SET newid = LAST_INSERT_ID();
SET i=1;
SET occurance = (SELECT LENGTH(codesstr) - LENGTH(REPLACE(codesstr, ',', '')) + 1);
WHILE i <= occurance DO
SET splitted_value =
(SELECT REPLACE(SUBSTRING(SUBSTRING_INDEX(codesstr, ',', i),
LENGTH(SUBSTRING_INDEX(codesstr, ',', i - 1)) + 1), ',', ''));
INSERT INTO codes (PID, code) VALUES (newid, splitted_value);
SET i = i + 1;
END WHILE;
END
Note: The code to split the codes is taken from here (MySQL does not provide a split function for strings).
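The C# side then shrinks to one call per CSV line, along these lines (again a sketch reusing a single connection; the ';' split follows the sample line from the question):
using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();

    using (var cmd = new MySqlCommand("import_data", connection))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@fname", MySqlDbType.VarChar);
        cmd.Parameters.Add("@lname", MySqlDbType.VarChar);
        cmd.Parameters.Add("@codesstr", MySqlDbType.VarChar);

        foreach (var line in File.ReadLines(csvPath))
        {
            var parts = line.Split(';');          // "Jon;Foo;ed3,wst"
            cmd.Parameters["@fname"].Value = parts[0];
            cmd.Parameters["@lname"].Value = parts[1];
            cmd.Parameters["@codesstr"].Value = parts[2];
            cmd.ExecuteNonQuery();
        }
    }
}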
I developed my WPF application using the Entity Framework with a SQL Server database, and needed to read data from an Excel file and insert that data into 2 tables that have a relationship between them. For roughly 15,000 rows in Excel it used to take around 4 hours. Then I switched to inserting a block of 500 rows per insert, and this sped up the insertion unbelievably: now it takes a mere 3-5 seconds to import the same data.
So I would suggest you add your rows to a context in batches of 100/200/500 at a time and then call the SaveChanges method (if you really want to be using EF). There are other helpful tips as well to speed up EF's performance. Please read this for your reference.
var totalRecords = TestPacksData.Rows.Count;
var totalPages = (totalRecords / ImportRecordsPerPage) + 1;
while (count <= totalPages)
{
var pageWiseRecords = TestPacksData.Rows.Cast<DataRow>().Skip(count * ImportRecordsPerPage).Take(ImportRecordsPerPage);
count++;
Project.CreateNewSheet(pageWiseRecords.ToList());
Project.CreateNewSpool(pageWiseRecords.ToList());
}
And here is the CreateNewSheet method
/// <summary>
/// Creates a new Sheet record in the database
/// </summary>
/// <param name="rows">List of DataRows containing the Sheet records</param>
public void CreateNewSheet(List<DataRow> rows)
{
var tempSheetsList = new List<Sheet>();
foreach (var row in rows)
{
var sheetNo = row[SheetFields.Sheet_No.ToString()].ToString();
if (string.IsNullOrWhiteSpace(sheetNo))
continue;
var testPackNo = row[SheetFields.Test_Pack_No.ToString()].ToString();
TestPack testPack = null;
if (!string.IsNullOrWhiteSpace(testPackNo))
testPack = GetTestPackByTestPackNo(testPackNo);
var existingSheet = GetSheetBySheetNo(sheetNo);
if (existingSheet != null)
{
UpdateSheet(existingSheet, row);
continue;
}
var isometricNo = GetIsometricNoFromSheetNo(sheetNo);
var newSheet = new Sheet
{
sheet_no = sheetNo,
isometric_no = isometricNo,
ped_rev = row[SheetFields.PED_Rev.ToString()].ToString(),
gpc_rev = row[SheetFields.GPC_Rev.ToString()].ToString()
};
if (testPack != null)
{
newSheet.test_pack_id = testPack.id;
newSheet.test_pack_no = testPack.test_pack_no;
}
if (!tempSheetsList.Any(l => l.sheet_no == newSheet.sheet_no))
{
DataStore.Context.Sheets.Add(newSheet);
tempSheetsList.Add(newSheet);
}
}
try
{
DataStore.Context.SaveChanges();
DataStore.Dispose(); // This is very important. Dispose the context.
}
catch (DbEntityValidationException ex)
{
// Create log for the exception here
}
}
CreateNewSpool is the same method except for the field names and the table name, because it updates a child table. But the idea is the same.
1 - Add a column VirtualId to User table & class.
EDITED
2 - Assign numbers in a loop for the VirtualId (use negative numbers starting -1 to avoid collisions in the last step) field in each User Object. For each Code c object belonging to User u object set the c.UserId = u.VirtualId.
3 - Bulk load Users into User table, Bulk load Codes into Code table.
4 - UPDATE Code C, User U SET C.UserId = U.Id WHERE C.UserId = U.VirtualId.
NOTE : If you have a FK Constraint on Code.UserId you can drop it and re-add it after the Insert.
public class User
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public int VirtualId { get; set; }
}
public class Code
{
public int Id { get; set; }
public string Code { get; set; }
public int UserId { get; set; }
}
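A rough sketch of steps 2 and 4, assuming an in-memory list of User objects that each hold their Code objects and an already open MySqlConnection (the bulk load itself in step 3 is left out):
// Step 2: hand out temporary negative ids and propagate them to the codes.
int nextVirtualId = -1;
foreach (var user in users)
{
    user.VirtualId = nextVirtualId--;
    foreach (var code in user.Codes)
        code.UserId = user.VirtualId;
}

// Step 3: bulk load `users` into the User table and `codes` into the Code table here.

// Step 4: fix up the real ids in a single statement.
using (var cmd = new MySqlCommand(
    "UPDATE Code C, User U SET C.UserId = U.Id WHERE C.UserId = U.VirtualId;",
    connection))
{
    cmd.ExecuteNonQuery();
}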
Can you break the CSV into two files?
E.g. Suppose your file has the following columns:
... A ... | ... B ...
a0 | b0
a0 | b1
a0 | b2 <-- data
a1 | b3
a1 | b4
So one set of A might have multiple B entries. After you break it apart, you get:
... A ...
a0
a1
... B ...
b0
b1
b2
b3
b4
Then you bulk insert them separately.
Edit: Pseudo code
Based on the conversation, something like:
DataTable tableA = ...; // query schema for TableA
DataTable tableB = ...; // query schema for TableB
List<String> usernames = select distinct username from TableA;
Hashtable htUsername = new Hashtable(StringComparer.InvariantCultureIgnoreCase);
foreach (String username in usernames)
htUsername[username] = "";
int colUsername = ...;
foreach (String[] row in CSVFile) {
String un = row[colUsername] as String;
if (htUsername[un] == null) {
// add new row to tableA
DataRow newRow = tableA.NewRow();
newRow["Username"] = un;
// etc.
tableA.Rows.Add(newRow);
htUsername[un] = "";
}
}
// bulk insert TableA
select userid, username from TableA
Hashtable htUserId = new Hashtable(StringComparer.InvariantCultureIgnoreCase);
// htUserId[username] = userid;
int colUserId = ...;
foreach (String[] row in CSVFile) {
String un = row[colUsername] as String;
int userid = (int) htUserId[un];
DataRow newRow = tableB.NewRow();
newRow[colUserId] = userid;
// fill in other values
tableB.Rows.Add(newRow);
if (tableB.Rows.Count == 65000) {
// bulk insert TableB
var t = tableB.Clone();
tableB.Dispose();
tableB = t;
}
}
if (tableB.Rows.Count > 0)
// bulk insert TableB
AFAIK the insertions done in a single table are sequential, while insertions into different tables can be done in parallel. Open two separate new connections to the same database and then insert in parallel, maybe by using the Task Parallel Library.
However, if there are integrity constraints about 1:n relationship between the tables, then:
Insertions might fail and thus any parallel insert approach would be wrong. Clearly then your best bet would be to do sequential inserts only, one table after the other.
You can try to sort the data of both tables and write the InsertInto method shown below such that the insert into the second table happens only after you are done inserting the data into the first one.
Edit: Since you have requested, if there is a possibility for you to perform the inserts in parallel, following is the code template you can use.
private void ParallelInserts()
{
..
//Other code in the method
..
//Read first csv into memory. It's just a GB so should be fine
ReadFirstCSV();
//Read second csv into memory...
ReadSecondCSV();
//Because the inserts will last more than a few CPU cycles...
var taskFactory = new TaskFactory(TaskCreationOptions.LongRunning, TaskContinuationOptions.None);
//An array to hold the two parallel inserts
var insertTasks = new Task[2];
//Begin insert into first table...
insertTasks[0] = taskFactory.StartNew(() => InsertInto(commandStringFirst, connectionStringFirst));
//Begin insert into second table...
insertTasks[1] = taskFactory.StartNew(() => InsertInto(commandStringSecond, connectionStringSecond));
//Let them be done...
Task.WaitAll(insertTasks);
Console.WriteLine("Parallel insert finished.");
}
//Defining the InsertInto method which we are passing to the tasks in the method above
private static void InsertInto(string commandString, string connectionString)
{
using (/*open a new connection using the connectionString passed*/)
{
//In a while loop, iterate until you have 100/200/500 rows
while (fileIsNotExhausted)
{
using (/*commandString*/)
{
//Execute command to insert in bulk
}
}
}
}
When you say "efficiently" are you talking memory, or time?
In terms of improving the speed of the inserts, if you can do multiple value blocks per insert statement, you can get 500% improvement in speed. I did some benchmarks on this over in this question: Which is faster: multiple single INSERTs or one multiple-row INSERT?
My approach is described in the answer, but simply put, reading in up to, say, 50 "rows" (to be inserted) at once and bundling them into a single INSERT INTO(...), VALUES(...),(...),(...)...(...),(...) type statement seems to really speed things up. At least, if you're prevented from bulk loading.
Another approach btw if you have live data you can't drop indexes on during the upload, is to create a memory table on the mysql server without indexes, dump the data there, and then do an INSERT INTO live SELECT * FROM mem. Though that uses more memory on the server, hence the question at the start of this answer about "what do you mean by 'efficiently'?" :)
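As a sketch (the table and column names are placeholders, and `connection` is an open MySqlConnection):
using (var cmd = connection.CreateCommand())
{
    // Stage into an index-free MEMORY table...
    cmd.CommandText = "CREATE TABLE mem_users (FName VARCHAR(45), LName VARCHAR(45)) ENGINE=MEMORY;";
    cmd.ExecuteNonQuery();

    // ... dump the CSV rows into mem_users here (multi-VALUES INSERTs) ...

    // ... then copy everything across in one statement and clean up.
    cmd.CommandText = "INSERT INTO live_users (FName, LName) SELECT FName, LName FROM mem_users;";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "DROP TABLE mem_users;";
    cmd.ExecuteNonQuery();
}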
Oh, and there's probably nothing wrong with iterating through the file and doing all the first table inserts first, and then doing the second table ones. Unless the data is being used live, I guess. In that case you could definitely still use the bundled approach, but the application logic to do that is a lot more complex.
UPDATE: OP requested example C# code for multivalue insert blocks.
Note: this code assumes you have a number of structures already configured:
tables List<string> - table names to insert into
fieldslist Dictionary<string, List<String>> - list of field names for each table
typeslist Dictionary<string, List<MySqlDbType>> - list of MySqlDbTypes for each table, same order as the field names.
nullslist Dictionary<string, List<Boolean>> - list of flags to tell if a field is nullable or not, for each table (same order as field names).
prikey Dictionary<string, string> - list of primary key field name, per table (note: this doesn't support multiple field primary keys, though if you needed it you could probably hack it in - I think somewhere I have a version that does support this, but... meh).
theData Dictionary<string, List<Dictionary<int, object>>> - the actual data, as a list of fieldnum-value dictionaries, per table.
Oh yeah, and localcmd is the MySqlCommand created by calling CreateCommand() on the local MySqlConnection object.
Further note: I wrote this quite a while back when I was kind of starting. If this causes your eyes or brain to bleed, I apologise in advance :)
const int perinsert = 50;
foreach (string table in tables)
{
string[] fields = fieldslist[table].ToArray();
MySqlDbType[] types = typeslist[table].ToArray();
bool[] nulls = nullslist[table].ToArray();
int thisblock = perinsert;
int rowstotal = theData[table].Count;
int rowsremainder = rowstotal % perinsert;
int rowscopied = 0;
// Do the bulk (multi-VALUES block) INSERTs, but only if we have more rows than there are in a single bulk insert to perform:
while (rowscopied < rowstotal)
{
if (rowstotal - rowscopied < perinsert)
thisblock = rowstotal - rowscopied;
// Generate a 'perquery' multi-VALUES prepared INSERT statement:
List<string> extravals = new List<string>();
for (int j = 0; j < thisblock; j++)
extravals.Add(String.Format("(@{0}_{1})", j, String.Join(String.Format(", @{0}_", j), fields)));
localcmd.CommandText = String.Format("INSERT INTO {0} VALUES{1}", table, String.Join(",", extravals.ToArray()));
// Now create the parameters to match these:
for (int j = 0; j < thisblock; j++)
for (int i = 0; i < fields.Length; i++)
localcmd.Parameters.Add(String.Format("{0}_{1}", j, fields[i]), types[i]).IsNullable = nulls[i];
// Keep doing bulk INSERTs until there's less rows left than we need for another one:
while (rowstotal - rowscopied >= thisblock)
{
// Queue up all the VALUES for this block INSERT:
for (int j = 0; j < thisblock; j++)
{
Dictionary<int, object> row = theData[table][rowscopied++];
for (int i = 0; i < fields.Length; i++)
localcmd.Parameters[String.Format("{0}_{1}", j, fields[i])].Value = row[i];
}
// Run the query:
localcmd.ExecuteNonQuery();
}
// Clear all the paramters - we're done here:
localcmd.Parameters.Clear();
}
}
I was hoping that I had solved the problem with the triggers. According to all the documentation, if I do an insert or an update in C# (via the Repository pattern), the trigger must return a value. This works fine for simple triggers.
If I use a more complex trigger, I get an error on the return value.
a) If the trigger returns SELECT @@identity;
The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type.
b) If the trigger returns: select * from dbo.Table where IDColumn = scope_identity();
Operation failed due to an AutoSync member failure. For the after-insert AutoSync operation to work, the AutoSynced member must either be an automatically generated identity, or a key that is not modified by the database after the insertion.
Used: C#, .NET FW4.5, Linq to SQL, MSSQL Express 2012 (compatibility 2008)
I would also appreciate pointers to the problems between triggers and LINQ to SQL.
More complex trigger (cleaned and reduced to a minimum):
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[bitrgTable]
ON [dbo].[Table]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT OFF;
-- Declare variables
DECLARE @new...
-- Get inserted values
SELECT @new... FROM INSERTED
-- do update all unique values
UPDATE [dbo].[Table] SET
...
WHERE
...;
-- if the last result is null
IF @@ROWCOUNT = 0
BEGIN
-- Do insert new values
INSERT INTO [dbo].[Table]
(...)
SELECT
...
FROM INSERTED
END
-- somehow return the result
-- ----------------------------------------------------------------
-- select * from dbo.table where TableID = scope_identity();
-- SELECT @@identity;
SELECT ??????
END
Best regards, Peter
Based on Ismet Alkan's answer, here is additional information to complement the question.
// Reposity class
public class Reposity
{
private DataContext myDataContext = new DataContext();
public void Save()
{
myDataContext.SubmitChanges();
}
}
// Where working with reposity (class in console app)
public class ConsoleShell
{
// declare variable
Reposity myReposity = new Reposity();
// Many methods, working with treads etc,
// here I worked with Save()
private void ExecuteThread(....)
{
...
// go through all found results
foreach (KeyValuePair<Oid, AsnType> kvpDot1dTpFdbAddress in dictDot1dTpFdbAddress)
{
// create new object instance
CustomObjectFromLinq objCustomObjectFromLinq = new CustomObjectFromLinq();
// append values for objCustomObjectFromLinq
if(CONDITION == TRUE)
{
// save item
myReposity.Add(objCustomObjectFromLinq);
myReposity.Save();
}
else
{
// if want ignore result, do nothing, prevent object null
objCustomObjectFromLinq = null;
}
// ... next code
} // end foreach
} // end method
} // end class
I tried the ways you mentioned. However, it requires a huge effort and you really don't need to work around this. I'm not giving an exact answer here, but rather the way I got it working.
You're creating a class that has a DataContext as an attribute. I think it's better to create a class that inherits it. I used DbContext in my ASP.NET MVC3 application instead of DataContext, but I don't know if there's a big deal of difference. Another thing: I don't add an element directly to the context; instead, I add it to a DbSet in the context. Here's my context class.
public class YourDBContext : DbContext
{
public DbSet<YourObject> YourObjects { get; set; }
}
Operating the context:
public class TheClassYouOperate
{
private YourDBContext yourDBContext = new YourDBContext();
public int SaveAndReturnID(YourObject newObject)
{
if (ModelState.IsValid) // or CONDITION == TRUE for your case
{
yourDBContext.YourObjects.Add(newObject);
yourDBContext.SaveChanges();
return newObject.ID;
}
return 0; // or null or -1, whatever the way you'll understand insertion fails
}
}
Hope it helps!
In a project I am currently working on, I need to access 2 databases in LINQ in the following manner:
I get a list of all trip numbers between a specified date range from DB1, and store this as a list of 'long' values
I perform an extensive query with a lot of joins on DB2, but only looking at trips that have their trip number included in the above list.
Problem is, the trip list from DB1 often returns over 2100 items - and I of course hit the 2100 parameter limit in SQL, which causes my second query to fail. I've been looking at ways around this, such as described here, but this has the effect of essentially changing my query to LINQ-to-Objects, which causes a lot of issues with my joins.
Are there any other workarounds I can do?
As LINQ-to-SQL can call stored procs, you could
have a stored proc that takes an array as an input and then puts the values in a temp table to join on
likewise by taking a string that the stored proc splits
Or upload all the values to a temp table yourself and join on that table.
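For the last option, a rough sketch with LINQ to SQL's raw-SQL methods (the Trip class and the table/column names are placeholders; note the connection has to stay open so the #temp table survives between commands):
db.Connection.Open();   // keep one connection so the #temp table is not dropped between commands

db.ExecuteCommand("CREATE TABLE #TripNumbers (TripNumber BIGINT PRIMARY KEY)");

// Insert the DB1 values in batches, each well under the 2100-parameter limit
// (the values are longs, so building the VALUES list as text is safe here).
foreach (var batch in tripNumbers.Select((n, i) => new { n, i })
                                 .GroupBy(x => x.i / 1000, x => x.n))
{
    db.ExecuteCommand(
        "INSERT INTO #TripNumbers (TripNumber) VALUES "
        + string.Join(",", batch.Select(n => "(" + n + ")")));
}

// Join against the temp table instead of passing 2100+ parameters to Contains().
var trips = db.ExecuteQuery<Trip>(
    @"SELECT t.* FROM dbo.Trips t
      JOIN #TripNumbers tn ON tn.TripNumber = t.TripNumber").ToList();

db.Connection.Close();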
However maybe you should rethink the problem:
SQL Server can be configured to allow queries against tables in other databases (including Oracle); if you are allowed, this may be an option for you.
Could you use some replication system to keep a table of trip numbers updated in DB2?
Not sure whether this will help, but I had a similar issue for a one-off query I was writing in LinqPad and ended up defining and using a temporary table like this.
[Table(Name="#TmpTable1")]
public class TmpRecord
{
[Column(DbType="Int", IsPrimaryKey=true, UpdateCheck=UpdateCheck.Never)]
public int? Value { get; set; }
}
public Table<TmpRecord> TmpRecords
{
get { return base.GetTable<TmpRecord>(); }
}
public void DropTable<T>()
{
ExecuteCommand( "DROP TABLE " + Mapping.GetTable(typeof(T)).TableName );
}
public void CreateTable<T>()
{
ExecuteCommand(
typeof(DataContext)
.Assembly
.GetType("System.Data.Linq.SqlClient.SqlBuilder")
.InvokeMember("GetCreateTableCommand",
BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.InvokeMethod
, null, null, new[] { Mapping.GetTable(typeof(T)) } ) as string
);
}
Usage is something like
void Main()
{
List<int> ids = ....
this.Connection.Open();
// Note, if the connection is not opened here, the temporary table
// will be created but then dropped immediately.
CreateTable<TmpRecord>();
foreach(var id in ids)
TmpRecords.InsertOnSubmit( new TmpRecord() { Value = id}) ;
SubmitChanges();
var list1 = (from r in CustomerTransaction
join tt in TmpRecords on r.CustomerID equals tt.Value
where ....
select r).ToList();
DropTable<TmpRecord>();
this.Connection.Close();
}
In my case the temporary table only had one int column, but you should be able to define whatever column type(s) you want (as long as you have a primary key).
You may split your query or use a temporary table in database2 filled with results from database1.