SQL Server FileTable was first introduced in SQL Server 2012, yet it is still not supported by Entity Framework (either .NET Core or the full .NET Framework). Using SQL Server FileTable or FILESTREAM allows faster file uploads and downloads.
I want to use FileTable with my .NET Core application, where I must create a relationship between this FileTable and another, ordinary table.
To do that, I need the hierarchyid type in C# as well as FILESTREAM support enabled at the SQL Server instance level. Entity Framework doesn't seem to provide support for FileTable when it creates the database while applying migrations.
Earlier I asked this question and found no answer on how to use SQL File Tables with my .NET Core application using the code-first approach. Now that I have resolved the issue, I would like to share the solution. I won't go into every detail here; the full background is documented on Microsoft's website.
First of all, you need to enable FILESTREAM support at the SQL Server instance level. In SQL Server Configuration Manager, right-click the MSSQLSERVER service (my SQL Server instance name was MSSQLSERVER) and choose Properties. On the FILESTREAM tab, enable the first two check boxes ("Enable FILESTREAM for Transact-SQL access" and "Enable FILESTREAM for file I/O access"), then restart the service.
In SQL Server Management Studio (SSMS), right-click the top server node (usually named after your computer) and click Properties. Go to the Advanced tab and set the FILESTREAM Access Level as desired (Transact-SQL access or Full access).
Thereafter, create the database with the following query:
CREATE DATABASE xyz
ON PRIMARY
(NAME = FS,
FILENAME = 'D:\Database\xyzDB.mdf'),
FILEGROUP FileStreamFS CONTAINS FILESTREAM(NAME = FStream,
FILENAME = 'D:\Database\xyzFs')
LOG ON
(NAME = FILESDBLog,
FILENAME = 'D:\Database\xyzDBLog.ldf')
WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'xyz')
GO
Here, D:\Database is the directory where I am storing the database files, including the files that will end up in the SQL File Table. xyz is the database name; it reappears in the file names xyzDB.mdf, xyzFs and xyzDBLog.ldf and in the N'xyz' directory name. I enabled NON_TRANSACTED_ACCESS, but you can disable it if you don't want such access. Note that the directory must exist before you run this query.
Now that you have created the database, you can go ahead and run the migrations from your .NET application, using the same database name in your connection string.
You will also require some SQL functions to support your operations; I created them using the MigrationBuilder class:
migrationBuilder.Sql("CREATE FUNCTION dbo.HierarchyIdToString (#Id hierarchyid) RETURNS varchar(max) with schemabinding AS BEGIN RETURN CONVERT(varchar(max),CONVERT(varbinary(max),#Id,1),1); END");
and
migrationBuilder.SQL("CREATE FUNCTION StringToHierarchyId (#Id varchar(max)) "+
"RETURNS hierarchyid WITH SCHEMABINDING AS "+
"BEGIN "+
"RETURN CONVERT(hierarchyid,CONVERT(VARBINARY(MAX),#Id,1),1) "+
"END");
Later, I will use this function and explain its role along the way.
Now, create the File Table that will store your files. You can of course create as many file tables as you want for different types of files.
migrationBuilder.Sql("CREATE TABLE DocumentStore AS FILETABLE WITH (FileTable_Directory = 'DocumentStore', FileTable_Collate_Filename = database_default);");
My File Table name is DocumentStore.
You will also require a stored procedure to get the File Table root path (where it stores the files) in order to access these files at the file-system level, in case you want non-transacted access. Below is the code:
migrationBuilder.Sql("CREATE PROCEDURE GetFileTableRootPath #TableName VARCHAR(100), #Path VARCHAR(1000) OUTPUT AS BEGIN SET #Path = (SELECT FILETABLEROOTPATH(#TableName)) END");
Note that FileTableRootPath('TableName') is the built-in function I used within the stored procedure. I already knew how to call stored procedures from .NET, so I simply wrapped the function in a stored procedure.
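As a rough illustration, calling such a procedure with an output parameter from .NET might look like the following sketch (the connection string and table name are assumptions):
using System.Data;
using System.Data.SqlClient; // or Microsoft.Data.SqlClient

string GetFileTableRootPath(string connectionString, string tableName)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("GetFileTableRootPath", connection) { CommandType = CommandType.StoredProcedure })
    {
        command.Parameters.AddWithValue("@TableName", tableName);
        var path = command.Parameters.Add("@Path", SqlDbType.VarChar, 1000);
        path.Direction = ParameterDirection.Output;

        connection.Open();
        command.ExecuteNonQuery();

        // e.g. \\MACHINE\MSSQLSERVER\xyz\DocumentStore
        return path.Value as string;
    }
}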
I created another stored procedure to get the path_locator of any file stored in the File Table. path_locator is the primary key of a File Table, which I would later need as a reference to the file in another table. The code is:
migrationBuilder.Sql("CREATE PROCEDURE GetFileTableRootPath #TableName VARCHAR(100), #Path VARCHAR(1000) OUTPUT AS BEGIN SET #Path = (SELECT FILETABLEROOTPATH(#TableName)) END");
I also created a table from a simple .NET model class named Documents.cs, containing the usual attributes (somewhat redundant, since they are available in the File Table as well) plus an attribute that references the file in the File Table. Since the File Table has a primary key named path_locator of type HIERARCHYID, I created a varchar(max) column in the Documents table and store the path_locator there after converting it from the SQL HIERARCHYID data type to VARCHAR. This table is part of my domain classes and participates in relationships just like any other table.
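A minimal sketch of such a model class, with property names that are my assumptions (only HierarchyInString is referenced later by the triggers), could look like this:
using System;

public class Document
{
    public int DocumentId { get; set; }

    // path_locator of the corresponding row in the DocumentStore FileTable,
    // stored as a string (converted via dbo.HierarchyIdToString).
    public string HierarchyInString { get; set; }

    // Metadata duplicated from the FileTable row.
    public string FileName { get; set; }
    public DateTime UploadedOn { get; set; }

    // Plus whatever ordinary relationships the domain needs, e.g.:
    // public int OwnerId { get; set; }
    // public User Owner { get; set; }
}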
Now that I have created the supporting tables, I also need to implement CASCADE DELETE functionality, which I can do with SQL triggers. The first trigger, on the File Table, is:
migrationBuilder.Sql(
"CREATE TRIGGER dbo.CascadeDelete ON DocumentStore "+
"AFTER DELETE NOT FOR REPLICATION " +
"AS "+
"BEGIN "+
"SET NOCOUNT ON "+
"DECLARE #Id varchar(max); "+
"DECLARE #Table Table (MyHierarchy hierarchyid); "+
"INSERT INTO #Table SELECT deleted.path_locator from deleted; "+
"WHILE ((SELECT COUNT(*) FROM #Table) > 0) "+
"BEGIN "+
"select #Id = dbo.HierarchyIdToString((SELECT TOP 1 * from #Table)); "+
"DELETE FROM Documents WHERE HierarchyInString = #Id; "+
"DELETE FROM #Table where MyHierarchy = dbo.StringToHierarchyId(#Id); "+
"END END"
);
And the second trigger works on the Documents table created through migrations, reflecting deletes back into the File Table (keeping the two in sync):
migrationBuilder.Sql(
"CREATE TRIGGER dbo.CascadeDeleteDocuments ON Documents AFTER DELETE NOT FOR REPLICATION AS BEGIN SET NOCOUNT ON DECLARE #Id hierarchyid; DECLARE #Table Table (MyHierarchyInString varchar(max)); INSERT INTO #Table SELECT deleted.HierarchyInString from deleted; WHILE ((SELECT COUNT(*) FROM #Table) > 0) BEGIN select #Id = dbo.StringToHierarchyId((SELECT TOP 1 * from #Table)); DELETE FROM DocumentStore WHERE path_locator = #Id; DELETE FROM #Table where MyHierarchyInString = dbo.HierarchyIdToString(#Id); END END");
To make these triggers work, I needed to convert HIERARCHYID to VARCHAR(MAX) and vice versa, which is what the two scalar functions created earlier are for.
Now, to insert files, I write them to the Windows file system at the FileTable root path retrieved via the stored procedure above, and the file automatically appears in my File Table. At the same time, I add a record to the other table created from the C# model class, i.e. Documents, so both tables stay in sync.
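As a rough sketch of that flow (assuming the GetFileTableRootPath call shown earlier, a hypothetical C# wrapper around the path-locator procedure, and an EF Core context with a Documents set; all names here are illustrative):
using System;
using System.IO;

void StoreDocument(MyDbContext db, string connectionString, string sourceFile)
{
    // 1. Copy the file into the FileTable's share; SQL Server turns it into a row in DocumentStore.
    string rootPath = GetFileTableRootPath(connectionString, "DocumentStore"); // wrapper shown earlier
    string targetPath = Path.Combine(rootPath, Path.GetFileName(sourceFile));
    File.Copy(sourceFile, targetPath, true);

    // 2. Look up the new row's path_locator (as a string) and record it in Documents.
    string pathLocator = GetPathLocatorForFile(connectionString, targetPath); // hypothetical wrapper around the procedure sketched above
    db.Documents.Add(new Document
    {
        FileName = Path.GetFileName(sourceFile),
        HierarchyInString = pathLocator,
        UploadedOn = DateTime.UtcNow
    });
    db.SaveChanges();
}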
In the future, I would like to create the database with FILESTREAM support and enable FILESTREAM at the SQL Server instance level from code within the application, to avoid these out-of-band manual steps.
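Part of that can already be scripted: the instance access level (though not the Windows-level FILESTREAM enablement done in SQL Server Configuration Manager) can be set with sp_configure. A sketch, assuming a connection with sufficient rights:
using System.Data.SqlClient;

void EnableFileStreamAccessLevel(string masterConnectionString)
{
    using (var connection = new SqlConnection(masterConnectionString))
    using (var command = new SqlCommand(
        "EXEC sp_configure 'filestream access level', 2; RECONFIGURE;", connection))
    {
        // 2 = Transact-SQL and file I/O access; 1 = Transact-SQL access only.
        connection.Open();
        command.ExecuteNonQuery();
    }
}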
Related
I have a DataGridView and a Show button on my Windows Forms app. In the DataGridView, I want to show the tables of multiple databases, but with my SQL code I can only show the current database's tables. How can I show all databases' tables?
SELECT TABLE_CATALOG AS 'DATABASE Name',
TABLE_SCHEMA AS 'SCHEMA Name',
TABLE_NAME AS 'TABLE Name'
FROM INFORMATION_SCHEMA.TABLES
You can use the SQL syntax table1 JOIN table2 to show multiple tables. If you can't do that, use multiple SQL queries to feed a table and then set it as the ItemsSource of your data grid (the property is named a bit differently in Windows Forms than in WPF).
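For illustration, a minimal Windows Forms sketch of that idea might look like this (connection string and control names are assumptions; in WinForms the grid property is DataSource):
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

void ShowTables(DataGridView grid, string connectionString)
{
    const string query = @"SELECT TABLE_CATALOG AS [DATABASE Name],
                                  TABLE_SCHEMA  AS [SCHEMA Name],
                                  TABLE_NAME    AS [TABLE Name]
                           FROM INFORMATION_SCHEMA.TABLES";

    var result = new DataTable();
    using (var connection = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter(query, connection))
    {
        adapter.Fill(result); // the adapter opens and closes the connection itself
    }

    // In Windows Forms the property is DataSource; ItemsSource is the WPF equivalent.
    grid.DataSource = result;
}
To list tables from every database rather than just the current one, the query can be swapped for the sp_MSforeachdb approach shown in the next answer.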
One way is to use the undocumented SQL Server stored procedure sp_MSforeachdb.
Excerpt from the site:
6 Common Uses of the undocumented Stored Procedure sp_MSforeachdb
http://www.sqlservercurry.com/2009/04/6-common-uses-of-undocumented-stored.html
Print all the database names in a SQL Server Instance
Print all the tables in all the databases of a SQL Server Instance
Display the size of all databases in a SQL Server instance
Determine all the physical names and attributes (size, growth, usage) of all databases in a SQL Server instance
Change Owner of all databases to 'sa'
Check the Logical and Physical integrity of all objects in the database
/* Create temp table to hold result */
Declare @blackFrog as Table
(
[DATABASE Name] [nvarchar](128) NULL,
[SCHEMA Name] [sysname] NULL,
[TABLE Name] [sysname] NOT NULL
);
DECLARE @command varchar(1000) /* create the command you want to execute on all databases */
SELECT @command = 'IF ''?'' NOT IN(''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN USE ? SELECT TABLE_CATALOG AS ''DATABASE Name'',
TABLE_SCHEMA AS ''SCHEMA Name'', TABLE_NAME AS ''TABLE Name'' FROM
INFORMATION_SCHEMA.TABLES; END'
Insert Into @blackFrog EXEC sp_MSforeachdb @command;
Select * From #blackFrog
It's best to wrap the whole thing in a stored procedure. It is strongly recommended to avoid using undocumented features of SQL Server in your Production environment.
I have two databases as mentioned below:
[QCR_DEV]
[QCR_DEV_LOG]
All application data is stored in [QCR_DEV]. On each table of [QCR_DEV], there is a trigger that inserts the details of inserts and updates on that table into the [QCR_DEV_LOG] database.
Suppose I have a table [party] in the [QCR_DEV] database. Whenever I insert, update or delete a record in that table, there will be one insert into the table [party_log], which exists in the [QCR_DEV_LOG] database. In short, I am keeping a log of the actions performed on tables of [QCR_DEV] in the [QCR_DEV_LOG] database.
When the application connects to the database, it does so using a connection string. In my stored procedures, I did not use the database name, like this:
Select * From [QCR_DEV].[party];
I am using it like this:
Select * From [party];
This is because, in future, if I need to change the database name, I will only need to change the connection string.
Now to the point: I need to get data from the [QCR_DEV_LOG] database. I am writing a stored procedure in which I need to get data from both databases, like:
Select * From [QCR_DEV_LOG].[dbo].[party_log]
INNER JOIN [person] on [person].person_id = [QCR_DEV_LOG].[dbo].[party_log].person_id
where party_id = 1
This stored procedure is in the [QCR_DEV] database, and I need to get data from both databases. For this I need to mention the database name in the query, which I don't want. Is there any way to set the database name globally and use that name in my queries, so that if I need to change the database name in future, I only change it in the one place where it is set globally?
I would second Jeroen Mostert's comment and use synonyms:
CREATE SYNONYM [party_log] FOR [QCR_DEV_LOG].[dbo].[party_log];
And when the target database is renamed, this query would generate a migration script:
SELECT 'DROP SYNONYM [' + name + ']; CREATE SYNONYM [' + name + '] FOR ' + REPLACE(base_object_name, '[OldLogDbName].', '[NewLogDbName].') + ';'
FROM sys.synonyms
WHERE base_object_name LIKE '[OldLogDbName].%';
You could do this in the DEV database:
CREATE VIEW [dbo].[party_log]
AS
SELECT * FROM [QCR_DEV_LOG].[dbo].[party_log]
Then you can write SELECT-queries as if the [party_log] table exists in the DEV database.
Any WHERE or JOIN ... ON clauses get applied as part of the combined query when it is executed, so the view does not have to materialize the whole log table first.
If the LOG database ever gets moved or renamed, then you'd only need to update the view (or a couple of views, but probably never a lot).
If you expect regular changes, or if you need to use this on multiple servers then you could use dynamic SQL:
IF OBJECT_ID('[dbo].[party_log]') IS NOT NULL DROP VIEW [dbo].[party_log]
-- etc, repeat to DROP other views
DECLARE @logdb VARCHAR(80) = 'QCR_DEV_LOG'
EXEC ('CREATE VIEW [dbo].[party_log] AS SELECT * FROM [' + @logdb + '].[dbo].[party_log]')
-- etc, repeat to create other views
I'm working on a pet project that will allow me to store my game collection in a DB and write notes on those games. Single game entries are handled by inserting the desired variables into my game_information table and outputting the PK (identity) of the newly created row, so I can insert it into my game_notes table along with the note.
var id = db.QueryValue("INSERT INTO Game_Information (gamePrice, name, edition) output Inserted.gameId VALUES (@0, @1, @2)", gamePrice, name, edition);
db.Execute("INSERT INTO Game_Notes(gameId, notes, noteDate) VALUES (@0, @1, @2)", id, notes, noteDate);
I'm now playing with uploading data in bulk via CSV, but how can I write a BULK INSERT that would output all PKs of the newly created rows, so I can insert them into my second table (game_notes) along with a variable called notes?
At the moment I have the following:
Stored Procedure that reads .csv and uses BULK INSERT to dump information into a view of game_information
CREATE PROCEDURE Procedure1
@FileName nvarchar(200)
AS
BEGIN
DECLARE @sql nvarchar(MAX);
SET @sql = 'BULK INSERT myview
FROM ''mycsv.csv''
WITH
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2
)'
EXEC(@sql)
END
C# code that sets this up in WebMatrix
if ((IsPost) && (Request.Files[0].FileName!=" "))
{
var fileSavePath = "";
var uploadedFile = Request.Files[0];
fileName = Path.GetFileName(uploadedFile.FileName);
uploadedFile.SaveAs(//path +filename);
var command = "EXEC Procedure1 #FileName = #0";
db.Execute(command, //path +filename);
File.Delete(//path +filename);
}
This allows CSV records to be inserted into game_information.
If this isn't feasible with BULK INSERT, would something along the following lines be a valid approach to attempt?
BULK INSERT into a temp_table
INSERT from temp_table to my game_information table
OUTPUT the game_Ids from the INSERT as an array(?)
then INSERT the Ids along with note into game_notes.
I've also been looking at OPENROWSET but I'm unsure if that will allow for what I'm trying to accomplish. Feedback on this is greatly appreciated.
Thank you for your input, womp. I was able to get the desired results by amending my BULK INSERT as follows:
BEGIN
DECLARE @sql nvarchar(MAX);
SET @sql =
'CREATE TABLE #Temp (--define table--)
BULK INSERT #Temp --Bulk into my temp table--
FROM '+char(39)+@FileName+char(39)+'
WITH
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2
)
INSERT myDB.dbo.game_information(gamePrice, name, edition, date)
OUTPUT INSERTED.gameId, INSERTED.Date INTO myDB.dbo.game_notes(gameId, noteDate)
SELECT gamePrice, name, edition, date
FROM #Temp'
EXEC(@sql)
END
This placed the correct ids into game_notes and left the Notes column as NULL for those entries, which meant I could run a simple
"UPDATE game_notes SET Notes = @0 WHERE Notes IS NULL";
to push the desired note into the correct rows. I'm executing this and the stored bulk procedure in the same if (IsPost) block, so I feel protected from accidental note updates.
You have a few different options.
Bulk inserting into a temp table and then copying information into your permanent tables is definitely a valid solution. However, based on what you're trying to do, I don't see the need for a temp table. Just bulk import into game_information, SELECT your IDs back to your application, and then do your update of game_notes.
Another option would be to insert your own keys. You can turn IDENTITY_INSERT on for your tables and include the keys in the CSV file. See here: https://msdn.microsoft.com/en-ca/library/ms188059.aspx?f=255&MSPPError=-2147217396. If you did this, then you could do a BULK INSERT into your Game_Information table and a second BULK INSERT into your secondary tables using a different CSV file, as sketched below. Be sure to re-enable key constraints and turn IDENTITY_INSERT off after it's finished.
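A rough sketch of that idea in the same WebMatrix style as the question (file paths and the exact column layout of the CSVs are assumptions; KEEPIDENTITY tells BULK INSERT to keep the gameId values supplied in the file):
// Hypothetical CSV files that already contain the gameId key column; paths are assumptions.
db.Execute(@"BULK INSERT Game_Information
             FROM 'D:\import\game_information.csv'
             WITH (KEEPIDENTITY, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)");

db.Execute(@"BULK INSERT Game_Notes
             FROM 'D:\import\game_notes.csv'
             WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)");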
If you need more particular control over the data you're selecting from the CSV file, then you can use OPENROWSET, but there aren't enough details in your post to comment further.
There's a long version of this question, and a short version.
The short version:
why are both LINQ and EF so slow at inserting a single, large (7 Mb) record into a remote SQL Server database ?
And here's the long version (with some information about workarounds, which might be useful to other readers):
All of the following example code does run okay, but as my users are in Europe and our Data Centers are based in America, it is damned slow. But if I run the same code on a Virtual PC in America, it runs instantly. (And no, sadly my company wants to keep all data in-house, so I can't use Azure, Amazon Cloud Services, etc)
Quite a few of my corporate apps involve reading/writing data from Excel into SQL Server, and often, we'll want to save a raw-copy of the Excel file in a SQL Server table.
This is very straightforward to do, simply reading in the raw data from a local file, and saving it into a record.
private int SaveFileToSQLServer(string filename)
{
// Read in an Excel file, and store it in a SQL Server [External_File] record.
//
// Returns the ID of the [External_File] record which was added.
//
DateTime lastModifed = System.IO.File.GetLastWriteTime(filename);
byte[] fileData = File.ReadAllBytes(filename);
// Create a new SQL Server database record, containing our file's raw data
// (Note: the table has an IDENTITY Primary-Key, so will generate a ExtFile_ID for us.)
External_File newFile = new External_File()
{
ExtFile_Filename = System.IO.Path.GetFileName(filename),
ExtFile_Data = fileData,
ExtFile_Last_Modified = lastModifed,
Update_By = "mike",
Update_Time = DateTime.UtcNow
};
dc.External_Files.InsertOnSubmit(newFile);
dc.SubmitChanges();
return newFile.ExtFile_ID;
}
Yup, no surprises there, and it works fine.
But, what I noticed is that for large Excel files (7-8Mb), this code to insert one (large!) record would take 40-50 seconds to run. I put this in a background thread, and it all worked fine, but, of course, if the user quit my application, this process would get killed off, which would cause problems.
As a test, I tried to replace this function with code to do this:
copy the file into a shared directory on the SQL Server machine
called a stored procedure to read the raw data (blob) into the same table
Using this method, the entire process would take just 3-4 seconds.
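On the client side, that replacement might look roughly like this (a sketch; the share path, server name and connection string are assumptions, and the stored procedure it calls is shown below):
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

private int SaveFileToSQLServerViaShare(string filename)
{
    // 1. Copy the Excel file to a folder that is local to the SQL Server machine.
    //    \\SQLSERVER01\ImportData is an assumed share that maps to D:\ImportData on that server.
    string remoteCopy = Path.Combine(@"\\SQLSERVER01\ImportData", Path.GetFileName(filename));
    File.Copy(filename, remoteCopy, true);

    // 2. Ask SQL Server to load the file from its own disk via the stored procedure below.
    using (var connection = new SqlConnection(connectionString))   // connectionString: assumed field
    using (var command = new SqlCommand("dbo.UploadFileToDatabase", connection) { CommandType = CommandType.StoredProcedure })
    {
        command.Parameters.AddWithValue("@LocalFilename", @"D:\ImportData\" + Path.GetFileName(filename));
        connection.Open();
        // The procedure SELECTs the new [External_File] ID (or -1 on failure).
        return Convert.ToInt32(command.ExecuteScalar());
    }
}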
If you're interested, here's the Stored Procedure I used to upload a file (which MUST be stored in a folder on the SQL Server machine itself) into a database record:
CREATE PROCEDURE [dbo].[UploadFileToDatabase]
@LocalFilename nvarchar(400)
AS
BEGIN
-- By far, the quickest way to do this is to copy the file onto the SQL Server machine, then call this stored
-- procedure to read the raw data into a [External_File] record, and link it to the Pricing Account record.
--
-- EXEC [dbo].[UploadPricingToolFile] 'D:\ImportData\SomeExcelFile.xlsm'
--
-- Returns: -1 if something went wrong (eg file didn't exist) or the ID of our new [External_File] record
--
-- Note that the INSERT will go wrong, if the user doesn't have "bulkadmin" rights.
-- "You do not have permission to use the bulk load statement."
-- EXEC master..sp_addsrvrolemember @loginame = N'GPP_SRV', @rolename = N'bulkadmin'
--
SET NOCOUNT ON;
DECLARE
@filename nvarchar(300), -- eg "SomeFilename.xlsx" (without the path)
@SQL nvarchar(2000),
@New_ExtFile_ID int
-- Extract (just) the filename from our Path+Filename parameter
SET @filename = RIGHT(@LocalFilename,charindex('\',reverse(@LocalFilename))-1)
SET @SQL = 'INSERT INTO [External_File] ([ExtFile_Filename], [ExtFile_Data]) '
SET @SQL = @SQL + 'SELECT ''' + @filename + ''', * '
SET @SQL = @SQL + ' FROM OPENROWSET(BULK ''' + @LocalFilename +''', SINGLE_BLOB) rs'
PRINT convert(nvarchar, GetDate(), 108) + ' Running: ' + @SQL
BEGIN TRY
EXEC (@SQL)
SELECT @New_ExtFile_ID = @@IDENTITY
END TRY
BEGIN CATCH
PRINT convert(nvarchar, GetDate(), 108) + ' An exception occurred.'
SELECT -1
RETURN
END CATCH
PRINT convert(nvarchar, GetDate(), 108) + ' Finished.'
-- Return the ID of our new [External_File] record
SELECT @New_ExtFile_ID
END
The key to this code is that it builds up a SQL command like this:
INSERT INTO [External_File] ([ExtFile_Filename], [ExtFile_Data])
SELECT 'SomeFilename.xlsm', * FROM OPENROWSET(BULK N'D:\ImportData\SomeExcelFile.xlsm', SINGLE_BLOB) rs
.. and, as both the database and file to be uploaded are both on the same machine, this runs almost instantly.
As I said, overall, it took 3-4 seconds to copy the file to a folder on the SQL Server machine, and run this stored procedure, compared to 40-50 seconds to do the same using C# code with LINQ or EF.
Exporting blob data from SQL Server into an external file
And, of course, the same is true in the opposite direction.
First, I wrote some C#/LINQ code to load the one (7Mb !) database record and write its binary data into a raw-file. This took about 30-40 seconds to run.
But if I exported the SQL Server data to a file (saved on the SQL Server machine) first..
EXEC master..xp_cmdshell 'BCP "select ef.ExtFile_Data FROM [External_File] ef where ExtFile_ID = 585" queryout "D:\ImportData\SomeExcelFile.xlsx" -T -N'
...and then copied the file from the SQL Server folder to the user's folder, then once again, it ran in a couple of seconds.
And this is my question: Why are both LINQ and EF so bad at inserting a single large record into the database ?
I assume the latency (the distance between us, here in Europe, and our Data Centers in the States) is a major cause of the delay, but it's just odd that a bog-standard file copy can be so much faster.
Am I missing something?
Obviously, I've found workarounds to these problems, but they involve adding some extra permissions to our SQL Server machines and shared folders on SQL Server machines, and our DBAs really don't like granting rights for things like "xp_cmdshell"...
A few months later...
I had the same issue again this week, and tried Kevin H's suggestion to use Bulk-Insert to insert a large (6Mb) record into SQL Server.
Using bulk-insert, it took around 90 seconds to insert the 6Mb record, even though our data centre is 6,000 miles away.
So, the moral of the story: when inserting very-large database records, avoid using a regular SubmitChanges() command, and stick to using bulk-insert.
You could try using profiler to see what Entity Framework is doing with the insert. For example, if it's selecting data out of your table, it could be taking a long time to return the data over the wire, and you may not notice that locally.
I have found that the best way to load a large amount of data (both record count and record size) into sql server from c# is to use the SqlBulkCopy class. Even though you are inserting only 1 record, you may still benefit from this change.
To use bulk copy, just create a datatable that matches the structure of your table. Then call the code like this.
using (SqlConnection destinationConnection = new SqlConnection(connectionString))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
{
bulkCopy.DestinationTableName = "External_File";
bulkCopy.WriteToServer(dataTable);
}
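For completeness, here is a sketch of building that DataTable; the column names follow the External_File entity shown earlier in the question, and you may need explicit ColumnMappings if your table's column order differs (e.g. an identity ExtFile_ID column first):
using System;
using System.Data;
using System.IO;

DataTable BuildExternalFileRow(string filename, byte[] fileData)
{
    var table = new DataTable("External_File");
    table.Columns.Add("ExtFile_Filename", typeof(string));
    table.Columns.Add("ExtFile_Data", typeof(byte[]));
    table.Columns.Add("ExtFile_Last_Modified", typeof(DateTime));
    table.Columns.Add("Update_By", typeof(string));
    table.Columns.Add("Update_Time", typeof(DateTime));

    // A single row is fine; SqlBulkCopy still streams it efficiently.
    table.Rows.Add(Path.GetFileName(filename), fileData,
                   File.GetLastWriteTime(filename), "mike", DateTime.UtcNow);
    return table;
}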
I'm converting an application from Access to SQL Server 2014. One of the capabilities of this tool is to allow users to create ad-hoc SQL queries that modify, delete, or add data in a number of tables.
Right now in Access there is no tracking of who does what, so if something gets messed up by accident, there is no way to know who did it or when it happened (it has happened often enough that it is a serious issue, and one of many reasons the tool is being rewritten).
The application I'm writing is a Windows application in C#. I'm looking for ANY and all suggestions on ways this can be done without putting a huge demand on the server (processing or space). Since the users are creating their own queries I can't just add a column for user name and date (also that would only track the most recent change).
We don't need to keep the old data or even identifying exactly what was changed. Just who changed data and when they did. I want to be able to look at something (view, table or even separate database) that shows me a list of users that made a change and when they did it.
You haven't specified the SQL Server version; anyway, if you have a version >= 2008 R2 you can use Extended Events to monitor your system.
On Stack Overflow you can read my answer to a similar problem.
You can consider using triggers and a log table; this will work on all SQL Server versions. Triggers are a bit more expensive than CDC, but if your users are already updating your tables directly, this should not be a problem. It will also depend on how many tables you want to log.
I will provide you with a simple example for logging the users that have changed a table, or several tables (just add the trigger to each table):
CREATE TABLE UserTableChangeLog
(
ChangeID INT PRIMARY KEY IDENTITY(1,1)
, TableName VARCHAR(128) NOT NULL
, SystemUser VARCHAR(256) NOT NULL DEFAULT SYSTEM_USER
, ChangeDate DATETIME NOT NULL DEFAULT GETDATE()
)
GO
CREATE TABLE TestTable
(
ID INT IDENTITY(1,1)
, Test VARCHAR(255)
)
GO
--This sql can be added for multiple tables, just change the trigger name, and the table name
CREATE TRIGGER TRG_TABLENAME_Log ON TestTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON;
--Can be used to get the type of change, and which data was altered.
--SELECT * FROM INSERTED;
--SELECT * FROM DELETED;
DECLARE @tableName VARCHAR(255) = (SELECT OBJECT_NAME( parent_id ) FROM sys.triggers WHERE object_id = @@PROCID);
INSERT INTO UserTableChangeLog (TableName) VALUES (@tableName);
END
GO
This is how it will work:
INSERT INTO TestTable VALUES ('1001');
INSERT INTO TestTable VALUES ('2002');
INSERT INTO TestTable VALUES ('3003');
GO
UPDATE dbo.TestTable SET Test = '4004' WHERE ID = 2
GO
SELECT * FROM UserTableChangeLog