I have to insert new records into a database every day from a tab-delimited text file.
I'm trying to turn this into a stored procedure with a parameter for the file to read data from.
CREATE PROCEDURE dbo.UpdateTable
    @FilePath varchar(255)
AS
BULK INSERT TMP_UPTable
FROM @FilePath
WITH
(
    FIRSTROW = 2,
    MAXERRORS = 0,
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n'
)
RETURN
Then I would call this stored procedure from my C# code, specifying the file to insert.
This is obviously not working, so how can I do it?
Just to be clear, the problem here is that I can't pass the @FilePath parameter to the FROM clause, or at least I don't know how.
Sorry, I misunderstood.
You need to create the SQL statement dynamically and then execute it:
CREATE procedure dbo.UpdateTable
    @FilePath varchar(max)
AS
declare @sql varchar(max)
declare @parameters varchar(100)
set @parameters = 'FIRSTROW = 2, MAXERRORS = 0, FIELDTERMINATOR = ''\t'', ROWTERMINATOR = ''\n'''
-- The file path must end up inside single quotes, and the options inside WITH (...)
SET @sql = 'BULK INSERT TMP_UPTable FROM ''' + @FilePath + ''' WITH (' + @parameters + ')'
EXEC (@sql)
RETURN
Sorry if I am late here, but I would suggest a different approach: open the file in your C# application and convert it to something more SQL-friendly, such as a DataTable or even XML. In C# you have complete control over how you parse the files. Then write the stored procedure to accept your DataTable or XML. A DataTable is preferable, but cannot be used with Entity Framework.
There is plenty of help around on how to do inserts by joining to this sort of input, and SQL Server is optimised for set operations.
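As a rough sketch of what the table-valued-parameter route might look like (the type, procedure and column names here are just placeholders, not part of the original question):

```sql
-- Hypothetical table type matching the parsed file's columns
CREATE TYPE dbo.UpTableRow AS TABLE
(
    Col1 varchar(50),
    Col2 int
);
GO

CREATE PROCEDURE dbo.UpdateTableFromRows
    @Rows dbo.UpTableRow READONLY
AS
BEGIN
    -- One set-based insert instead of row-by-row work
    INSERT INTO TMP_UPTable (Col1, Col2)
    SELECT Col1, Col2
    FROM @Rows;
END
```

From C#, a DataTable is passed as a SqlParameter with SqlDbType.Structured and its TypeName set to "dbo.UpTableRow".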
I have a code-first database table which I truncate and then insert approximately 12,000 records into with a C# script and Entity Framework Core 2.2.6.
I have 5 indexes on the table which I need to recreate after I do my database work. I can of course do this manually each time after I run my script, but being a programmer, that feels wrong.
I tried to find a way to do this with Entity Framework Core, but I cannot seem to find one.
As a last resort I can execute a SQL command of course, but I was wondering whether there is some EntityFrameworkCore functionality that I am overlooking.
Or are there other ways of doing this more efficiently?
Edit:
I run my script each time I receive a new DB from a third party to create our own from it. In development that is roughly once a month; later it will be less often. So doing it manually is an option, but I have an allergy to doing things manually.
I ended up using a stored procedure which does the below, taken from here:
DECLARE @TableName VARCHAR(255)
DECLARE @sql NVARCHAR(500)
DECLARE @fillfactor INT
SET @fillfactor = 80

DECLARE TableCursor CURSOR FOR
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id])) + '.' + QUOTENAME(name) AS TableName
FROM sys.tables

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'ALTER INDEX ALL ON ' + @TableName + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3), @fillfactor) + ')'
    EXEC (@sql)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
GO
I've got a stored procedure that returns data for a grid control. Given a table name, the grid will display data from that table. The user can sort and filter this data. There is also paging logic for large data sets.
The names of the tables that data is pulled from are not known until runtime, so dynamic SQL was used. This works well, but is vulnerable to SQL injection: the tableName, sortExpression and filterExpression values are generated client-side and passed through to the server.
Below is a simplified version of the procedure:
create procedure ReadTable (
    @tableName as varchar(128),
    @sortExpression as varchar(128),
    @filterExpression as varchar(512)
)
as
begin
    declare @SQLString as nvarchar(max) =
        'select * from ' + @tableName +
        ' where ' + @filterExpression +
        ' order by ' + @sortExpression
    exec sp_executesql @SQLString
end
I'm struggling to find a way to easily prevent SQL injection in this case. I've found a good answer explaining how to check that @tableName is legitimate (How should I pass a table name into a stored proc?), but that approach won't work for the filter or sort strings.
One way would perhaps be to do some sanitising server-side before the data is passed through to the database: breaking the expressions down into column names and checking them against the known column names of the table.
Would there be an easier way?
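Roughly, the column check I have in mind might look like this (untested sketch; it assumes @sortExpression is a single column name rather than a full expression):

```sql
-- Reject a sort column that does not exist on the target table
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE [object_id] = OBJECT_ID(@tableName)
      AND name = @sortExpression
)
BEGIN
    RAISERROR('Invalid sort column.', 16, 1)
    RETURN
END

-- QUOTENAME brackets the validated identifiers before concatenation
declare @SQLString as nvarchar(max) =
    'select * from ' + QUOTENAME(@tableName) +
    ' order by ' + QUOTENAME(@sortExpression)
exec sp_executesql @SQLString
```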
I'm working on a pet project that will let me store my game collection in a DB and write notes on those games. Single game entries are inserted by passing the desired variables into my game_information table and outputting the PK (identity) of the newly created row, so I can insert it into my game_notes table along with the note.
var id = db.QueryValue("INSERT INTO Game_Information (gamePrice, name, edition) OUTPUT Inserted.gameId VALUES (@0, @1, @2)", gamePrice, name, edition);
db.Execute("INSERT INTO Game_Notes (gameId, notes, noteDate) VALUES (@0, @1, @2)", id, notes, noteDate);
I'm now playing with uploading data in bulk via CSV, but how can I write a BULK INSERT that would output all the PKs of the newly created rows, so I can insert them into my second table (game_notes) along with a variable called notes?
At the moment I have the following:
Stored Procedure that reads .csv and uses BULK INSERT to dump information into a view of game_information
@FileName nvarchar(200)
AS
BEGIN
    DECLARE @sql nvarchar(MAX);
    SET @sql = 'BULK INSERT myview
    FROM ''mycsv.csv''
    WITH
    (
        FIELDTERMINATOR = '','',
        ROWTERMINATOR = ''\n'',
        FIRSTROW = 2
    )'
    EXEC(@sql)
END
C# code that creates set up in WebMatrix
if ((IsPost) && (Request.Files[0].FileName!=" "))
{
var fileSavePath = "";
var uploadedFile = Request.Files[0];
fileName = Path.GetFileName(uploadedFile.FileName);
uploadedFile.SaveAs(//path +filename);
var command = "EXEC Procedure1 @FileName = @0";
db.Execute(command, //path +filename);
File.Delete(//path +filename);
}
Which allows for csv records to be inserted into game_information.
If this isn't feasible with BULK INSERT, would something along the following lines be a valid approach to attempt?
BULK INSERT into a temp_table
INSERT from temp_table to my game_information table
OUTPUT the game_Ids from the INSERT as an array(?)
then INSERT the Ids along with note into game_notes.
I've also been looking at OPENROWSET but I'm unsure if that will allow for what I'm trying to accomplish. Feedback on this is greatly appreciated.
Thank you for your input, womp. I was able to get the desired results by amending my BULK INSERT as follows:
BEGIN
    DECLARE @sql nvarchar(MAX);
    SET @sql =
    'CREATE TABLE #Temp (--define table--)
    BULK INSERT #Temp --Bulk into my temp table--
    FROM ' + char(39) + @FileName + char(39) + '
    WITH
    (
        FIELDTERMINATOR = '','',
        ROWTERMINATOR = ''\n'',
        FIRSTROW = 2
    )
    INSERT myDB.dbo.game_information (gamePrice, name, edition, date)
    OUTPUT INSERTED.gameId, INSERTED.Date INTO myDB.dbo.game_notes (gameId, noteDate)
    SELECT gamePrice, name, edition, date
    FROM #Temp'
    EXEC(@sql)
END
This placed the correct ids into game_notes and left the Notes column of the table as NULL for those entries, which meant I could run a simple
"UPDATE game_notes SET Notes = @0 WHERE Notes IS NULL";
to push the desired note into the correct rows. I'm executing this and the bulk stored procedure in the same if (IsPost) block, so I feel protected from accidentally updating the wrong notes.
You have a few different options.
Bulk inserting into a temp table and then copying the information into your permanent tables is definitely a valid solution. However, based on what you're trying to do, I don't see the need for a temp table. Just bulk import into game_information, SELECT your IDs back into your application, and then do your update of game_notes.
Another option would be to insert your own keys. You can allow identity values to be kept for your tables and just have your keys as part of the CSV file. See here: https://msdn.microsoft.com/en-ca/library/ms188059.aspx?f=255&MSPPError=-2147217396. If you did this you could do a BULK INSERT into your Game_Information table, and then a second BULK INSERT into your secondary tables from a different CSV file. Be sure to re-enable key constraints and turn IDENTITY_INSERT off after it's finished.
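A minimal sketch of that second option (file paths and column layout are only examples): for BULK INSERT specifically, the option that keeps the id values from the file instead of generating new identity values is KEEPIDENTITY:

```sql
-- First file: game rows that already contain explicit gameId values
BULK INSERT dbo.Game_Information
FROM 'C:\data\games_with_ids.csv'
WITH (KEEPIDENTITY, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Second file: notes keyed by those same gameId values
BULK INSERT dbo.Game_Notes
FROM 'C:\data\notes_with_ids.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```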
If you need more particular control over the data you're selecting from the CSV file then you can use OPENROWSET but there's not enough details in your post to comment further.
I need to insert Tamil text into SQL Server 2005. I tried plain INSERT and UPDATE queries and they worked fine, but when moving to a stored procedure I don't know how to pass the parameter.
ALTER PROCEDURE [dbo].[spr_Sam]
    @Row_Id int = NULL,
    @Description_Ta nvarchar(MAX) = null
AS
BEGIN
    update tblTest set
        Description_Ta = @Description_Ta
    where Row_Id = @Row_Id
END
exec [dbo].[spr_Sam] 2, 'பெண்டிரேம்';
If I execute this it gets inserted as ?????.
exec [dbo].[spr_Sam] 2, N'பெண்டிரேம்';
If I execute this it gets inserted correctly, but I don't know how to pass that N prefix from my C# application. I use a text box to get the Description_Ta parameter.
C# should add the N prefix automatically if you use SqlDbType.NVarChar for the SqlParameter.
You must currently be using SqlDbType.VarChar.
The MSDN doc for SqlDbType states (my bold)
VarChar: A variable-length stream of non-Unicode characters...
...
NVarChar: A variable-length stream of Unicode characters...
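For example, with a plain SqlCommand (the connection string and text-box name are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.spr_Sam", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Row_Id", SqlDbType.Int).Value = 2;
    // NVarChar sends the value as Unicode, which is exactly what
    // the N'...' prefix does in raw T-SQL; -1 means nvarchar(max)
    cmd.Parameters.Add("@Description_Ta", SqlDbType.NVarChar, -1).Value = textBoxDescription.Text;
    conn.Open();
    cmd.ExecuteNonQuery();
}
```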
Here is the correct update statement:
update tblTest
    set Description_Ta = @Description_Ta
    where Row_Id = @Row_Id;
You don't need single quotes around a variable.
But, I think the posting is confused. To call the procedure use:
exec [dbo].[spr_Sam] 2, N'பெண்டிரேம்';
To modify it:
ALTER PROCEDURE [dbo].[spr_Sam]
    @Row_Id int = NULL,
    @Description_Ta nvarchar(MAX) = null
AS
BEGIN
    update tblTest
        set Description_Ta = @Description_Ta
        where Row_Id = @Row_Id;
END;
You don't supply argument values when you define the stored procedure; values are only passed when you execute it.
I need to create a stored procedure that creates a table to capture form data; this is part of a bigger project to create a form generator.
I was wondering if anyone had created a stored procedure that takes a stringified JSON object as input and creates the table based on that schema?
I'm still toying with whether I should do this within the sproc (preferable) or build the dynamic SQL within a C# service.
Personally I wouldn't approach this problem by passing the JSON string to a stored procedure. However, if you wish to do it this way you could pass the JSON object directly to the stored procedure and then manipulate the string as below. I have provided the code to manipulate the table name and create a table based upon the example JSON string '{TABLENAME:TABLENAME, Fields: {field1:varchar, field2: int }}'. You would then have to modify this to include fields and datatypes based upon the string.
CREATE PROCEDURE CreateTableFromJSON
(
    @JSON VARCHAR(100)
)
AS
DECLARE @TableName VARCHAR(100)
SET @TableName = SUBSTRING(@JSON, CHARINDEX(':', @JSON) + 1, CHARINDEX(',', @JSON) - CHARINDEX(':', @JSON) - 1)

DECLARE @SQL VARCHAR(100)
SET @SQL = 'CREATE TABLE ' + @TableName + ' (ID INT)'
EXEC(@SQL)
GO

EXEC CreateTableFromJSON '{TABLENAME:TABLENAME, Fields: {field1:varchar, field2: int }}'