I am trying to create a temp table from a select statement so that I can get the schema information from the temp table.
I am able to achieve this in SQL Server with the following code:
-- This creates the temp table
SELECT location.id, location.name into #URM_TEMP_TABLE from location
-- This retrieves column information from the temp table
SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME like '#U%'
If I run the code in c# like so:
using (CONN = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
{
    var commandText = ReportToDisplay.ReportQuery.ToLower().Replace("from", "into #URM_TEMP_TABLE from");
    using (SqlCommand command = CONN.CreateCommand())
    {
        // Create temp table
        CONN.Open();
        command.CommandText = commandText;
        int retVal = command.ExecuteNonQuery();
        CONN.Close();

        // Get column data from temp table
        command.CommandText = "SELECT * FROM TEMPDB.INFORMATION_SCHEMA.Columns WHERE TABLE_NAME like '#U%'";
        CONN.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ColumnsForReport.Add(new ListBoxCheckBoxItemModel
                {
                    Name = reader["COLUMN_NAME"].ToString(),
                    DataType = reader["DATA_TYPE"].ToString(),
                    IsSelected = false,
                    RVMCommandModel = this
                });
            }
        }
        CONN.Close();

        // Drop table
        command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
        CONN.Open();
        command.ExecuteNonQuery();
        CONN.Close();
    }
}
Everything works until it gets to the drop statement: Cannot drop the table '#URM_TEMP_TABLE'.
ExecuteNonQuery returns 2547, which is the number of rows the temp table is supposed to contain. However, it seems the table does not actually get created this way. Is ExecuteNonQuery the right method to call?
Temporary tables are only in scope for the current session. In the code you've posted, you're opening a connection, creating a temp table, and closing the connection;
then you're opening another connection (a new session) and attempting to drop a table which is not in scope of that session.
You would need to drop the temp table within the same connection, or possibly make it a global temp table (##), though in this case, with two separate connections, a global temp table would still fall out of scope.
Additionally, as was pointed out in the comments, your temp tables will be cleaned up automatically; but if you really do want to drop them, you must do so from the session that created them.
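A minimal sketch of the single-connection fix (using the same table and queries as above; `connectionString` stands in for your settings value) keeps the connection open for all three steps so the temp table stays in scope:

```csharp
// Sketch: one connection (one session) for create, read, and drop.
using (var conn = new SqlConnection(connectionString))
using (var command = conn.CreateCommand())
{
    conn.Open(); // stay open for the whole sequence

    command.CommandText = "SELECT location.id, location.name INTO #URM_TEMP_TABLE FROM location";
    command.ExecuteNonQuery();

    command.CommandText = "SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'";
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // read reader["COLUMN_NAME"] / reader["DATA_TYPE"] as before
        }
    }

    command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
    command.ExecuteNonQuery();
} // closing the connection ends the session; the temp table would be dropped here anyway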
EDIT taken from another SO thread:
Global temporary tables in SQL Server
Global temporary tables operate much like local temporary tables; they
are created in tempdb and cause less locking and logging than
permanent tables. However, they are visible to all sessions, until the
creating session goes out of scope (and the global ##temp table is no
longer being referenced by other sessions). If two different sessions
try the above code, if the first is still active, the second will
receive the following:
Server: Msg 2714, Level 16, State 6, Line 1 There is already an object
named '##people' in the database.
I have yet to see a valid justification for the use of a global ##temp
table. If the data needs to persist to multiple users, then it makes
much more sense, at least to me, to use a permanent table. You can
make a global ##temp table slightly more permanent by creating it in
an autostart procedure, but I still fail to see how this is
advantageous over a permanent table. With a permanent table, you can
deny permissions; you cannot deny users from a global ##temp table.
Looks like global temp tables still go out of scope... they're just bad to use in general, IMO. Can you just drop the table in the same session, or rethink your solution?
Related
I'm copying some data from one SQL Server database to another.
That works fine; what I need is to check whether some of the data already exists and, if so, not copy it. How can I do that? Any suggestions?
string Source = ConfigurationManager.ConnectionStrings["Db1"].ConnectionString;
string Destination = ConfigurationManager.ConnectionStrings["Db2"].ConnectionString;

using (SqlConnection sourceCon = new SqlConnection(Source))
{
    SqlCommand cmd = new SqlCommand("SELECT [Id],[Client] FROM [Db1].[dbo].[Client]", sourceCon);
    sourceCon.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        using (SqlConnection destCon = new SqlConnection(Destination))
        {
            using (SqlBulkCopy bc = new SqlBulkCopy(destCon))
            {
                bc.DestinationTableName = "Clients";
                bc.ColumnMappings.Add("Id", "ClientId");
                bc.ColumnMappings.Add("Client", "Client");
                destCon.Open();
                bc.WriteToServer(rdr);
            }
        }
    }
}
One way to do what you're after would be to bulk-copy into a staging table (a separate table with similar layout), and then perform a conditional insert from the staging table into the real table.
You could also do something similar using a table-valued-parameter instead of SqlBulkCopy, and treat the table-valued-parameter as the staging table.
Copy all the tables from your source database to your destination database as temp tables, then run SQL to add the missing records from the temp tables to the destination tables. The final step is to delete the temp tables.
Hope that works for you.
You could create a linked server from the source database to the destination and run a query to work out which rows need to transit, but be careful not to drag too much data over the link as it could make the process slow; realistically you only need the columns you will use to determine whether a row in the source equals a row in the destination.
Typically, though, it's easier to bulk copy all the data into a temporary table at the destination, then use a MERGE or an INSERT with a LEFT JOIN to insert only the missing rows from the temporary table into the real table.
Here's an example of how to insert only some rows that don't already exist:
INSERT INTO real (column1, column2...)
SELECT temp.column1, temp.column2...
FROM temp
LEFT JOIN real ON real.ID = temp.ID
WHERE real.ID IS NULL
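The same conditional insert can also be written with MERGE (a sketch using the same hypothetical temp/real tables and columns as the example above):

```sql
MERGE real AS target
USING temp AS source
    ON target.ID = source.ID
WHEN NOT MATCHED BY TARGET THEN
    INSERT (column1, column2)
    VALUES (source.column1, source.column2);
```

Note that MERGE statements must be terminated with a semicolon, and MERGE also lets you add WHEN MATCHED clauses later if you ever need to update existing rows in the same pass.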
In C# terms it would look like:
new SqlCommand(@"INSERT INTO real (column1, column2...)
    SELECT temp.column1, temp.column2...
    FROM temp
    LEFT JOIN real ON real.ID = temp.ID
    WHERE real.ID IS NULL", conn).ExecuteNonQuery();
You need to run this using a connection (conn) to your destination database.
I am creating a progress tracker for a school.
The progress tracker stores scores for each student in various threads and criteria within the threads.
I am currently planning on using a table per class (of students) which stores their progress in each thread and then a table per thread which stores their progress in each criteria within that thread.
I have no way of knowing how many classes (tables) are going to be in the school so I need to find some way of allowing the Administrator accounts to create classes (tables) with a name specified by the Admin.
The easiest way I thought of doing this was to use variables as the table name upon creation, but there may be a better way of doing this?
You CAN do something like that, but as D Stanley highlighted, you can't use parameters for table names. As such, you wouldn't be able to parameterize the user's input if it's to be used as the table name, which makes it a very bad idea: it would immediately open you up to SQL injection, which is never a good plan.
Even with tight sanitization of the user's input there are too many variables to consider; it would no doubt require far more work than desired and could still fall prone to attack as SQL evolves.
I would suggest rewording your question to perhaps giving a general idea of what your app is trying to achieve to see if there's another way forwards without creating a table per user.
UPDATE
Based on your rewording of your question it sounds like you need to think about your desired database structure. I'd be tempted to have the following tables:
Students, with 1 entry per student, primary key of StudentId
Classes - with 1 entry per class, primary key of ClassId
Criteria - 1 entry per type of class criteria, primary key of CriteriaId
Progress - potentially multiple entries per student referencing the StudentId, ClassId, CriteriaId and the Score (perhaps ClassScore and CriteriaScore).
You could then have queries to the Progress table that pulled out a student's progress based on just their Id, or their Id and ClassId, or further still their Id, ClassId and CriteriaId etc.
In terms of allowing Admins to create their own you'd simply create queries that allow Admins to insert student records into the Student table, classes into the Class table and criteria into the Criteria table. On creating a Student record you'd also presumably capture their classes and criteria at the same time, which would insert their record into the Progress table (initially 0 for progress so far). You'd presumably also want an update statement to allow admins to update the Progress table for any given student.
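A rough sketch of that structure (table and column names are assumptions, not taken from the question):

```sql
CREATE TABLE Students (StudentId INT IDENTITY PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Classes  (ClassId INT IDENTITY PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Criteria (CriteriaId INT IDENTITY PRIMARY KEY, Description NVARCHAR(200));

-- One row per student/class/criteria combination
CREATE TABLE Progress (
    StudentId  INT NOT NULL REFERENCES Students(StudentId),
    ClassId    INT NOT NULL REFERENCES Classes(ClassId),
    CriteriaId INT NOT NULL REFERENCES Criteria(CriteriaId),
    Score      INT NOT NULL DEFAULT 0,
    PRIMARY KEY (StudentId, ClassId, CriteriaId)
);

-- One student's progress in one class:
SELECT c.Description, p.Score
FROM Progress p
JOIN Criteria c ON c.CriteriaId = p.CriteriaId
WHERE p.StudentId = @StudentId AND p.ClassId = @ClassId;
```

With this shape, an Admin "creating a class" is just an INSERT into Classes; no new tables are ever created at runtime, so there is nothing to sanitize as an identifier.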
Anyway, hopefully this is enough of a pointer to enable you to not have to create a table per student or per class etc.
Well, first you must create the database and the table (or you can create them later using C#). You must connect C# with SQL Server using the resource files, which looks something like this example:
Provider=XXXXXX.DataSource.1 ; Data Source=XXXX.XXXX.XXXX.CXXX;
Persist Security Info=True;User ID=XXXXX;pASSWORD=XXXXXX;
Initial Catalog=XXXXX;Force Translate=0;
Catalog Library List=XXXXXX,XXXXX
Then you create a SqlConnection object and create the connection with this method:
CreateConnection()
Select the one you put in your resx file (or Resources) with:
NameOfObject.ConnectionString = ConnStr();
and use this method to open it: NameOfObject.Open();
Now you can insert, delete, and execute queries. You should end up with something like this in your code:
SqlConnection sqlConnection1 = new SqlConnection("Your Connection String"); // Here you can put the string that you'll use in your resx file
SqlCommand cmd = new SqlCommand(); // Initialize the command object for executing instructions (queries)
cmd.CommandText = "INSERT INTO TABLE_NAME_HERE VALUES (@name)"; // Parameterize user input to avoid SQL injection
cmd.Parameters.AddWithValue("@name", nameYouWillExtractFromTheUser);
cmd.CommandType = CommandType.Text; // The command is plain text
cmd.Connection = sqlConnection1; // Assign the connection
sqlConnection1.Open(); // Open the connection
cmd.ExecuteNonQuery(); // Execute the INSERT; use ExecuteReader only for statements that return rows
I have an identity (auto increment integer) column in my data table in a SQL Server database.
When I start my program and add new record to this table identity column always equals -1. For the next record it becomes -2 and so on. I add new record this way:
http://msdn.microsoft.com/en-us/library/5ycd1034.aspx
However, when I restart my program, all the identity values are renumbered (they become 1, 2, ...).
Any ideas why this happens? It would be no issue if I could delete these records without restarting. I use SQL Server 2008.
Also, is there any way to specify a MAX size for a column's data type through the GUI (when adding a table in the Visual Studio 2012 Server Explorer)?
Why they are negative I don't know. However, when you reload the application those records already exist in the database and have id's that were assigned when they were committed to the database; that's why they have real values on restart.
But, you don't need to delete the records from the DataTable, you just need to refresh that row after committing it to the database. There are a number of ways to do this, and would depend significantly on exactly how you're accessing your data now, but you can do things like tack on the SELECT SCOPE_IDENTITY() command with the INSERT command and then use ExecuteScalar to commit the row, like this:
var insertCmd = "INSERT INTO tbl (fld1, fld2) VALUES (@fld1, @fld2); SELECT SCOPE_IDENTITY()";
using (var c = new SqlConnection(connString))
using (var cmd = new SqlCommand(insertCmd, c))
{
    cmd.Parameters.AddWithValue("@fld1", fld1Value);
    cmd.Parameters.AddWithValue("@fld2", fld2Value);

    c.Open();
    // SCOPE_IDENTITY() comes back boxed as a decimal, so convert rather than TryParse
    var result = cmd.ExecuteScalar();
    int id = Convert.ToInt32(result);

    // update the DataTable row here
    dataTable.Rows[index]["id_column"] = id;
    dataTable.AcceptChanges();
}
You could even choose to reload the entire DataTable after performing the update.
How do I restrict other users from updating or inserting into a table after a certain transaction has begun?
I tried this :
MySqlConnection con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;");
con.Open();
MySqlTransaction trans = con.BeginTransaction();
try
{
    string sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','483', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',-233.22,1,'asadfsaf','Bank OD A/c','Receipt','4274',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();

    sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','4274', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',100,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();

    sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','427', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',133.22,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();

    trans.Commit();
}
catch (Exception ex)
{
    trans.Rollback();
}
finally
{
    con.Close();
}
but this still allows to insert rows after BeginTransaction.
BeginTransaction does not mean "your transaction has started and everything is locked". It just informs the RDBMS of your intent to initiate a transaction, and that everything you do from now on should and must be considered atomic.
This means that you could call BeginTransaction and I could delete all data from all tables in your database, and the RDBMS will happily let me do that. Hopefully, it should not let me drop the DB because you have an open connection to it; however, you never know these days. There might be some undocumented features I am not aware of.
Atomic means any action or set of actions must be performed as one; if any one of them fails, they all fail. It is an all-or-nothing concept.
Looks like you are inserting three rows into a table. If your table is empty or has a very low number of rows, the RDBMS might lock the whole table, depending on its LOCK ESCALATION rules. However, if it is a large or partitioned table, lock escalation might not produce a table lock, so it might still be possible for multiple transactions to insert rows into your table at the same time. It all depends on how the RDBMS handles this situation and how your data model is structured.
Now to answer your question:
HINT - Look for a way to lock the entire table before you start inserting data.
However, this is usually not good but I am assuming that you have a reasonable reason to do it.
Hope this helps.
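Following that hint, one way to serialize writers in MySQL (a sketch using the transaction_ledger table from the code above) is an explicit table lock around the batch:

```sql
LOCK TABLES transaction_ledger WRITE;
-- run the three INSERT statements here;
-- other sessions block on reads and writes to this table until the lock is released
UNLOCK TABLES;
```

Be aware that in MySQL, LOCK TABLES implicitly commits any open transaction, so it does not combine directly with BeginTransaction; with InnoDB it is usually better to rely on row locks inside the transaction (e.g. SELECT ... FOR UPDATE) than to lock the whole table.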
I have a C# application, using ADO.Net to connect to MSSQL
I need to create the table (with a dynamic number of columns), then insert many records, then do a select back out of the table.
Each step must be a separate C# call, although I can keep a connection/transaction open for the duration.
There are two types of temp tables in SQL Server, local temp tables and global temp tables. From the BOL:
Prefix local temporary table names with single number sign (#tablename), and prefix global temporary table names with a double number sign (##tablename).
Local temp tables live for just your current connection. Globals are available to all connections. Thus, if you re-use the same connection across your related calls (and you did say you could), you can just use a local temp table without worrying about simultaneous processes interfering with each other's temp tables.
You can get more info on this from the BOL article, specifically under the "Temporary Tables" section about halfway down.
The issue is that #Temp tables exist only within the Connection AND the Scope of the execution.
When the first call from C# to SQL completes, control passes up to a higher level of scope.
This is just as if you had a T-SQL script that called two stored procedures, each of which created a table named #MyTable. The second SP would be referencing a completely different table than the first SP.
However, if the parent T-SQL code created the table, both SPs could see it, but they still can't see each other's.
The solution here is to use ##Temp tables. They cross scope and connections.
The danger, though, is that if you use a hard-coded name, two instances of your program running at the same time could see the same table. So dynamically set the table name to something that will always be unique.
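One way to get such a unique name (a sketch; the table and query are illustrative, and any collision-free suffix works):

```csharp
// Build a per-instance global temp table name, e.g. ##Temp_9f8b1c...
string tempTableName = "##Temp_" + Guid.NewGuid().ToString("N");

// Table names can't be parameterized, so splice the generated name into the SQL.
// Safe here because the name comes from Guid.NewGuid(), not from user input.
string createSql = $"SELECT location.id, location.name INTO {tempTableName} FROM location";
string dropSql = $"DROP TABLE {tempTableName}";
```

Keep the generated name around for the lifetime of the operation so every later call (insert, select, drop) targets the same table.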
You might take a look at the repository pattern for dealing with this concept in C#. It allows you to have a low-level repository layer for data access where each method performs one task, but the connection is passed in to the method and the actual actions are performed within a transaction scope. This means you can theoretically call many different methods in your data access layer (implemented as a repository), and if any of them fail you can roll back the whole operation.
http://martinfowler.com/eaaCatalog/repository.html
The other aspects of your question would be handled by standard SQL, where you can dynamically create a table, insert into it, delete from it, etc. The tricky part here is keeping one transaction away from another. You might look at using temp tables... or you might simply have a second database specifically for this dynamic table concept.
Personally, I think you are doing this the hard way. Do all the steps in one stored proc.
One way to extend the scope/lifetime of your single pound sign #Temp is to use a transaction. For as long as the transaction lives, the #temp table continues to exist. You can also use TransactionScope to give you the same effect, because TransactionScope creates an ambient transaction in the background.
The below test methods pass, proving that the #temp table contents survive between executions.
This may be preferable to using double-pound temp tables, because ##temp tables are global objects. If you have more than one client that happens to use the same ##temp table name, then they could step on each other. Also, ##temp tables do not survive a server restart, so their lifespan is technically not forever. IMHO it's best to control the scope of #temp tables because they're meant to be limited.
using System.Linq;
using System.Transactions;
using Dapper;
using Microsoft.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using IsolationLevel = System.Data.IsolationLevel;

namespace TestTempAcrossConnection
{
    [TestClass]
    public class UnitTest1
    {
        private string _testDbConnectionString = @"Server=(localdb)\mssqllocaldb;Database=master;trusted_connection=true";

        class TestTable1
        {
            public int Col1 { get; set; }
            public string Col2 { get; set; }
        }

        [TestMethod]
        public void TempTableBetweenExecutionsTest()
        {
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted);
            conn.Execute("create table #test1(col1 int, col2 varchar(20))", transaction: tran);
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')", transaction: tran);
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1", transaction: tran).ToList();
            Assert.AreEqual(1, tableResult[0].Col1);
            Assert.AreEqual("one", tableResult[0].Col2);
            tran.Commit();
        }

        [TestMethod]
        public void TempTableBetweenExecutionsScopeTest()
        {
            using var scope = new TransactionScope();
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            conn.Execute("create table #test1(col1 int, col2 varchar(20))");
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')");
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1").ToList();
            Assert.AreEqual(2, tableResult[1].Col1);
            Assert.AreEqual("two", tableResult[1].Col2);
            scope.Complete();
        }
    }
}