I'm working on a pet project that will allow me to store my game collection in a DB and write notes on those games. Single game entries are handled by inserting the desired variables into my game_information table and outputting the PK (identity) of the newly created row from that table, so I can insert it into my game_notes table along with the note.
var id = db.QueryValue("INSERT INTO Game_Information (gamePrice, name, edition) OUTPUT Inserted.gameId VALUES (@0, @1, @2)", gamePrice, name, edition);
db.Execute("INSERT INTO Game_Notes (gameId, notes, noteDate) VALUES (@0, @1, @2)", id, notes, noteDate);
I'm now playing with uploading data in bulk via CSV, but how can I write a BULK INSERT that outputs all PKs of the newly created rows, so I can insert them into my second table (game_notes) along with a variable called notes?
At the moment I have the following:
Stored procedure that reads the .csv and uses BULK INSERT to dump the information into a view of game_information:
CREATE PROCEDURE Procedure1
@FileName nvarchar(200)
AS
BEGIN
DECLARE @sql nvarchar(MAX);
SET @sql = 'BULK INSERT myview
FROM ''mycsv.csv''
WITH
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2
)'
EXEC(@sql)
END
C# code, set up in WebMatrix, that calls the procedure:
if (IsPost && Request.Files[0].FileName != "")
{
    var fileSavePath = ""; // destination folder path omitted
    var uploadedFile = Request.Files[0];
    var fileName = Path.GetFileName(uploadedFile.FileName);
    uploadedFile.SaveAs(fileSavePath + fileName);
    var command = "EXEC Procedure1 @FileName = @0";
    db.Execute(command, fileSavePath + fileName);
    File.Delete(fileSavePath + fileName);
}
This allows CSV records to be inserted into game_information.
If this isn't feasible with BULK INSERT, would something along the lines of the following be a valid solution to attempt?
BULK INSERT into a temp_table
INSERT from temp_table to my game_information table
OUTPUT the game_Ids from the INSERT as an array(?)
then INSERT the Ids along with note into game_notes.
I've also been looking at OPENROWSET but I'm unsure if that will allow for what I'm trying to accomplish. Feedback on this is greatly appreciated.
Thank you for your input, womp. I was able to get the desired results by amending my BULK INSERT as follows:
BEGIN
DECLARE @sql nvarchar(MAX);
SET @sql =
'CREATE TABLE #Temp (--define table--)
BULK INSERT #Temp --Bulk into my temp table--
FROM ' + char(39) + @FileName + char(39) + '
WITH
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2
)
INSERT myDB.dbo.game_information(gamePrice, name, edition, date)
OUTPUT INSERTED.gameId, INSERTED.Date INTO myDB.dbo.game_notes(gameId, noteDate)
SELECT gamePrice, name, edition, date
FROM #Temp'
EXEC(#sql)
END
This placed the correct IDs into game_notes and left the Notes column NULL for those entries, which meant I could run a simple
"UPDATE game_notes SET Notes = #0 WHERE Notes IS NULL";
to push the desired note into the correct rows. I'm executing this and the bulk stored procedure inside the same if (IsPost) block, so I feel protected from accidentally updating the wrong notes.
You have a few different options.
Bulk inserting into a temp table and then copying the information into your permanent tables is definitely a valid solution. However, based on what you're trying to do, I don't see the need for a temp table. Just bulk import into game_information, SELECT your IDs back to your application, and then do your update of game_notes.
Another option would be to insert your keys yourself. You can allow IDENTITY_INSERT to be on for your tables and just have your keys as part of the CSV file. See here: https://msdn.microsoft.com/en-ca/library/ms188059.aspx?f=255&MSPPError=-2147217396. If you did this, you could do a BULK INSERT into your Game_information table, and then a second BULK INSERT into your secondary tables using a different CSV file. Be sure to re-enable key constraints and turn IDENTITY_INSERT off after it's finished.
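A minimal sketch of that second option, assuming the CSV's first column already contains the gameId values (for BULK INSERT specifically, the KEEPIDENTITY option is what preserves the identity values from the file):
BULK INSERT Game_Information
FROM 'C:\games.csv' -- file path is an assumption
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW = 2,
KEEPIDENTITY -- keep the identity values supplied in the file
)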
If you need more particular control over the data you're selecting from the CSV file, then you can use OPENROWSET, but there aren't enough details in your post to comment further.
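For reference, a minimal OPENROWSET(BULK ...) sketch would look like the following; note that it needs a format file describing the CSV columns, and both paths here are assumptions:
INSERT INTO Game_Information (gamePrice, name, edition)
SELECT gamePrice, name, edition
FROM OPENROWSET(BULK 'C:\mycsv.csv', FORMATFILE = 'C:\mycsv.fmt', FIRSTROW = 2) AS src;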
SQL Server FileTable was first introduced in 2012, yet it is not supported by Entity Framework (either .NET Core or the full .NET Framework). Using SQL Server FileTable or FILESTREAM allows faster file uploads and downloads.
I want to use FileTable with my .NET Core application, where I must create a relationship between this FileTable and another simple table.
To do that, I need the hierarchyid type in C#, as well as FILESTREAM support enabled at the SQL Server instance level. Entity Framework doesn't seem to provide support for creating a database with FileTable support when it creates the database during migrations.
Earlier I asked this question and found no answer, wanting to use SQL FileTables with my .NET Core application using a code-first approach. Now that I have resolved the issue, I would like to share it. I am not going into every detail; this is documented on Microsoft's website.
First of all, you need to enable FILESTREAM support at the SQL Server instance level. Right-click the MSSQLSERVER service in SQL Server Configuration Manager (my SQL Server instance name was MSSQLSERVER) and choose Properties. In the FILESTREAM tab, enable the first two check boxes ("Enable FILESTREAM for Transact-SQL access" and "Enable FILESTREAM for file I/O access"). Then restart the service.
In SQL Server Management Studio (SSMS), right-click the top node (mostly named after your computer) and click Properties. Go to the Advanced tab and set FILESTREAM Access Level as desired (Transact-SQL access or Full access).
Thereafter, create the database with the following query:
CREATE DATABASE xyz
ON PRIMARY
(NAME = FS,
FILENAME = 'D:\Database\xyzDB.mdf'),
FILEGROUP FileStreamFS CONTAINS FILESTREAM(NAME = FStream,
FILENAME = 'D:\Database\xyzFs')
LOG ON
(NAME = FILESDBLog,
FILENAME = 'D:\Database\xyzDBLog.ldf')
WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'xyz')
GO
Here, D:\Database is the directory where I am storing the database files, including the files that will be stored in the SQL FileTable. xyz is the database name; the data, FILESTREAM, and log files are named xyzDB.mdf, xyzFs, and xyzDBLog.ldf respectively. I enabled NON_TRANSACTED_ACCESS, but you can disable it if you don't want such access. Note that the directory must exist before you run this query.
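You can confirm the settings took effect by querying the catalog view sys.database_filestream_options:
SELECT DB_NAME(database_id) AS db, non_transacted_access_desc, directory_name
FROM sys.database_filestream_options
WHERE DB_NAME(database_id) = 'xyz';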
Now that you have created the database, you can go ahead and run the migration from your .NET application with the same database name in your connection string.
You will also require SQL functions to support your operations. I created them using the MigrationBuilder class:
migrationBuilder.Sql("CREATE FUNCTION dbo.HierarchyIdToString (@Id hierarchyid) RETURNS varchar(max) WITH SCHEMABINDING AS BEGIN RETURN CONVERT(varchar(max), CONVERT(varbinary(max), @Id, 1), 1); END");
and
migrationBuilder.Sql("CREATE FUNCTION dbo.StringToHierarchyId (@Id varchar(max)) " +
"RETURNS hierarchyid WITH SCHEMABINDING AS " +
"BEGIN " +
"RETURN CONVERT(hierarchyid, CONVERT(VARBINARY(MAX), @Id, 1), 1) " +
"END");
Later, I will use these functions and explain their role along the way.
Now, create the FileTable that will store your files. You can of course create as many FileTables as you want for different types of files.
migrationBuilder.Sql("CREATE TABLE DocumentStore AS FILETABLE WITH (FileTable_Directory = 'DocumentStore', FileTable_Collate_Filename = database_default);");
My File Table name is DocumentStore.
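Once created, you can verify it works by querying the FileTable's system-defined columns:
SELECT name, file_type, cached_file_size, path_locator.ToString() AS path
FROM DocumentStore;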
You will also require a stored procedure to get the FileTable root path (where it stores files) in order to access those files at the file-system level, in case you want to access them in a NON-TRANSACTED way. Below is the code:
migrationBuilder.Sql("CREATE PROCEDURE GetFileTableRootPath #TableName VARCHAR(100), #Path VARCHAR(1000) OUTPUT AS BEGIN SET #Path = (SELECT FILETABLEROOTPATH(#TableName)) END");
Note that FILETABLEROOTPATH('TableName') is the built-in function I used within the stored procedure. I knew how to call stored procedures from .NET, so I just wrapped the function in a stored procedure.
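Calling it from T-SQL looks like this (the result is a UNC path; the exact value depends on your machine and instance names):
DECLARE @RootPath varchar(1000);
EXEC GetFileTableRootPath @TableName = 'DocumentStore', @Path = @RootPath OUTPUT;
SELECT @RootPath; -- e.g. \\MACHINE\MSSQLSERVER\xyz\DocumentStore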
I created another stored procedure to get the Path_Locator of any file stored in the FileTable. Path_Locator is the primary key of a FileTable, which I would later need as a reference to the file in another table. A sketch of such a procedure, looking the file up by its name column and converting the hierarchyid to a string with the helper function from above, could be:
migrationBuilder.Sql("CREATE PROCEDURE GetPathLocator @FileName VARCHAR(255), @PathLocator VARCHAR(MAX) OUTPUT AS BEGIN SET @PathLocator = (SELECT dbo.HierarchyIdToString(path_locator) FROM DocumentStore WHERE name = @FileName) END");
I also created a table from a simple .NET model class named Documents.cs, including the usual attributes (duplicated, though, since they are available in the FileTable as well) plus an attribute to reference the file in the FileTable. Since the FileTable has a PK named Path_Locator of type HIERARCHYID, I created a varchar(max) column in the Documents table and store the Path_Locator in it after converting from the SQL HIERARCHYID to the SQL VARCHAR data type. This table is part of my domain classes and has relationships as a table normally would.
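The migration generates that table from Documents.cs; its shape is roughly the following sketch (all column names except HierarchyInString, which the triggers below rely on, are assumptions):
CREATE TABLE Documents
(
Id int IDENTITY(1,1) PRIMARY KEY,
Name varchar(255),
HierarchyInString varchar(max)
);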
Now that I have created the supporting tables, I also need to implement CASCADE DELETE functionality, which I can do using SQL triggers. The first trigger is on the FileTable:
migrationBuilder.Sql(
"CREATE TRIGGER dbo.CascadeDelete ON DocumentStore " +
"AFTER DELETE NOT FOR REPLICATION " +
"AS " +
"BEGIN " +
"SET NOCOUNT ON " +
"DECLARE @Id varchar(max); " +
"DECLARE @Table TABLE (MyHierarchy hierarchyid); " +
"INSERT INTO @Table SELECT deleted.path_locator FROM deleted; " +
"WHILE ((SELECT COUNT(*) FROM @Table) > 0) " +
"BEGIN " +
"SELECT @Id = dbo.HierarchyIdToString((SELECT TOP 1 MyHierarchy FROM @Table)); " +
"DELETE FROM Documents WHERE HierarchyInString = @Id; " +
"DELETE FROM @Table WHERE MyHierarchy = dbo.StringToHierarchyId(@Id); " +
"END END"
);
And the second trigger works on the migration-generated table, Documents, to reflect row deletes in the FileTable (keeping the two in sync):
migrationBuilder.Sql(
"CREATE TRIGGER dbo.CascadeDeleteDocuments ON Documents " +
"AFTER DELETE NOT FOR REPLICATION " +
"AS " +
"BEGIN " +
"SET NOCOUNT ON " +
"DECLARE @Id hierarchyid; " +
"DECLARE @Table TABLE (MyHierarchyInString varchar(max)); " +
"INSERT INTO @Table SELECT deleted.HierarchyInString FROM deleted; " +
"WHILE ((SELECT COUNT(*) FROM @Table) > 0) " +
"BEGIN " +
"SELECT @Id = dbo.StringToHierarchyId((SELECT TOP 1 MyHierarchyInString FROM @Table)); " +
"DELETE FROM DocumentStore WHERE path_locator = @Id; " +
"DELETE FROM @Table WHERE MyHierarchyInString = dbo.HierarchyIdToString(@Id); " +
"END END");
To make these triggers work, I needed to convert HIERARCHYID to VARCHAR(MAX) and vice versa, which is what the scalar functions from earlier are for.
Now, to insert files, I store them into the Windows file system at the root path retrieved from the GetFileTableRootPath stored procedure, and each file automatically shows up in my FileTable. Simultaneously, I add a record to my other table (Documents, created from the C# model class), using GetPathLocator to fetch the new file's Path_Locator, which keeps both tables in step.
In future, I would like to create the database with FILESTREAM support and enable FILESTREAM at the SQL Server instance level from code within the application, to avoid these out-of-code manual steps.
I am facing a peculiar issue with loading a list of tables from a specific database (well, rather a group of databases) while attached to the master database. Currently my query loads all of the databases on the server, then loops through them, sending information back to the client via RAISERROR. As this loop executes, I need a nested loop to load all of the tables for the current database, for later transmission as a SELECT once the query has completed. The issue I'm running into is that this must execute as a single query from C# code. Ideally I would like to load everything in SQL and return it to the client for processing. For example:
WHILE (@dbLoop < @dbCount) BEGIN
    -- Do cool things and send details back to client.
    SET @dbName = (SELECT _name FROM dbTemp WHERE _id = @dbLoop);
    -- USE [@dbName]
    -- Get a count of the tables from info schema on the newly specified database.
    WHILE (@tableLoop < @tableCount) BEGIN
        -- USE [@dbName]
        -- Do super cool things and load tables from info schema.
        SET @tableLoop += 1;
    END
    SET @dbLoop += 1;
END
-- Return the list of tables from all databases to the client for use with SQLDataAdapter.
SELECT * FROM tableTemp;
The requirement is pretty straightforward; I just need a way to access tables in a specified database (preferably by name) without having to change the connection on the SqlConnection object, and without a loop in my C# code running the same query once per database. It would be more efficient to load everything in SQL and send it back to the application. Any help that can be provided on this would be great!
Thanks,
Jamie
All the tables are in the metadata; you can just query it and join to the list of schemas you want to look at.
SELECT tab.name
FROM sys.tables AS tab
JOIN sys.schemas AS sch on tab.schema_id = sch.schema_id
JOIN dbTemp temp on sch.name = temp.[_name]
This returns a list of the tables to send back as a result set.
The statement USE [@dbName] takes effect AFTER it is run (usually via the GO statement).
USE [@dbName]
GO
The above two lines would make you start using the new database. You cannot use this in the middle of your SQL or stored procedure.
One other option is to use dot notation, i.e., the dbname..tablename syntax, to query tables in another database.
double dot notation post
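For example, assuming a database named SalesDb with a Customers table (both names are placeholders), you can read its data and metadata without changing the connection:
SELECT name FROM SalesDb.sys.tables;
SELECT * FROM SalesDb..Customers;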
Okay, after spending all day working on this, I have finally come up with a solution. I load all the databases into a table variable, then loop through them, sending each database's details back to the client. After those details have been sent via RAISERROR, I use sp_executesql to run a sub-query against the current database to collect its tables for processing at the end of the primary query. The example below demonstrates the basic structure of this process for others experiencing this issue in the future.
Thank you all once again for your help!
-Jamie
DECLARE @LoopCounter INT = 1, @DatabaseCount INT = 0;
DECLARE @SQL NVARCHAR(MAX), @dbName NVARCHAR(MAX);
DECLARE @Databases TABLE ( _id INT, _name NVARCHAR(MAX) );
DECLARE @Tables TABLE ( _name NVARCHAR(MAX), _type NVARCHAR(15) );
INSERT INTO @Databases
SELECT ROW_NUMBER() OVER(ORDER BY name) AS id, name
FROM sys.databases
WHERE name NOT IN ( 'master', 'tempdb', 'msdb', 'model' );
SET @DatabaseCount = (SELECT COUNT(*) FROM @Databases);
WHILE (@LoopCounter <= @DatabaseCount) BEGIN
    SET @dbName = (SELECT _name FROM @Databases WHERE _id = @LoopCounter);
    SET @SQL = 'SELECT TABLE_NAME, TABLE_TYPE
    FROM [' + @dbName + '].INFORMATION_SCHEMA.TABLES';
    INSERT INTO @Tables EXEC sp_executesql @SQL;
    SET @LoopCounter += 1;
END
I have to get the IDENTITY values from a table after a SqlBulkCopy into that same table. The volume of data could be thousands of records. Can someone help me out with this?
Disclaimer: I'm the owner of the project Bulk Operations
In short, this project overcomes SqlBulkCopy limitations by adding MUST-HAVE features like outputting inserted identity value.
Under the hood, it uses SqlBulkCopy and a similar method to @Mr Moose's answer.
var bulk = new BulkOperation(connection);
// Output Identity Value
bulk.ColumnMappings.Add("CustomerID", ColumnMappingDirectionType.Output);
// Map Column
bulk.ColumnMappings.Add("Code");
bulk.ColumnMappings.Add("Name");
bulk.ColumnMappings.Add("Email");
bulk.BulkInsert(dt);
EDIT: answering a comment:
"Can I simply get an IList back? I see it's saved back in the customers table, but there is no variable where I can get hold of it. Can you please help with that, so I can insert into the Orders.CustomerID column?"
It depends; you can keep a reference to the Customer DataRow, named CustomerRef, in the Order DataTable.
Then, once you have merged your customers, you can easily populate a CustomerID column from the CustomerRef column in your Order DataTable.
Here is an example of what I'm trying to say: https://dotnetfiddle.net/Hw5rf3
I've used a solution similar to this one from Marc Gravell: it is useful to first import into a temp table.
I've also used MERGE and OUTPUT, as described by Jamie Thomson in this post, to match the data I inserted into my temp table with the IDs generated by the IDENTITY column of the table I want to insert into.
This is particularly useful when you need to use that ID as a foreign key reference to other tables you are populating.
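A minimal sketch of that MERGE/OUTPUT pattern (#Staging, MyTable, and the column names are assumptions); the trick is that MERGE's OUTPUT clause, unlike INSERT's, may reference source columns:
DECLARE @Mapping TABLE (StagingKey int, NewId int);
MERGE INTO MyTable AS tgt
USING #Staging AS src
ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
INSERT (data) VALUES (src.data)
OUTPUT src.StagingKey, INSERTED.id INTO @Mapping (StagingKey, NewId);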
Try this
CREATE TABLE #temp
(
DataRow varchar(max)
)
BULK INSERT #Temp FROM 'C:\tt.txt'
ALTER TABLE #temp
ADD id INT IDENTITY(1,1) NOT NULL
SELECT * FROM #temp
-- dummy schema
CREATE TABLE TMP (data varchar(max))
CREATE TABLE [Table1] (id int not null identity(1,1), data varchar(max))
CREATE TABLE [Table2] (id int not null identity(1,1), id1 int not null, data varchar(max))
-- imagine this is the SqlBulkCopy
INSERT TMP VALUES('abc')
INSERT TMP VALUES('def')
INSERT TMP VALUES('ghi')
-- now push into the real tables
INSERT [Table1]
OUTPUT INSERTED.id, INSERTED.data INTO [Table2](id1,data)
SELECT data FROM TMP
Due to some bad database design, I've had to go through a bunch of steps to filter the results I'm looking for, using a stored procedure and a table-valued function. Now that I have the SP working properly and returning the records I want, I need to take those matches and work them back into the main SQL query in my code-behind.
So, I've got my stored procedure called usp_County which returns multiple records, then I have my normal query in code behind like this:
Select * From MyTable Where Name = @Name AND Address = @Address and so on.
Is there some (hopefully simple) way to work in my results from the stored procedure? Something along the lines of this maybe?
Select * From MyTable Where dbo.usp_County(@county) AND Name = @Name AND Address = @Address and so on.
The results I'm trying to find are all in the same table that the SP is running against, just using the SP and this query to further filter.
For example, I want to search/query based on:
Name1 Address1 County(this is already populated by records through the SP)
Edit
My attempt at creating temp table:
CREATE TABLE #spResults (id int, Counties varchar(max))
INSERT INTO #spResults (id, Counties)
EXEC usp_County @county
GO
You can return sproc data into a temp table (the sproc's output data types must match the table definition), then join to the temp table for the extra data and to apply your filter.
CREATE TABLE #spResults
(col1, col2, col3..)
INSERT INTO #spResults
EXEC sprocName @Param
SELECT * FROM MyTable JOIN #spResults ON <Cond>
I will import many data rows from a CSV file into a SQL Server database (through a web application). I need the auto-generated identity values back for the client.
If I do this in a loop, the performance is very bad (but I can use SCOPE_IDENTITY() without any problems).
A more performant solution would be something like this:
INSERT INTO [MyTable]
VALUES ('1'), ('2'), ('3')
SELECT SCOPE_IDENTITY()
Is there any way to get all generated IDs and not only the last generated id?
No, SCOPE_IDENTITY() only gives you the single, most recently inserted IDENTITY value. But you can check out the OUTPUT clause of SQL Server...
DECLARE @IdentityTable TABLE (SomeKeyValue INT, NewIdentity INT)
INSERT INTO [MyTable]
OUTPUT Inserted.KeyValue, Inserted.ID INTO @IdentityTable(SomeKeyValue, NewIdentity)
VALUES ('1'), ('2'), ('3')
Once you've run your INSERT statement, the table variable will hold "some key value" (for you, to identify the row) and the newly inserted ID values for each row inserted. Now go crazy with this! :-)
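For example, to read the captured values back and return them to the client:
SELECT SomeKeyValue, NewIdentity FROM @IdentityTable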