I am creating a progress tracker for a school.
The progress tracker stores scores for each student in various threads and criteria within the threads.
I am currently planning on using a table per class (of students) which stores their progress in each thread, and then a table per thread which stores their progress in each criterion within that thread.
I have no way of knowing how many classes (tables) there are going to be in the school, so I need to find some way of allowing the Administrator accounts to create classes (tables) with a name specified by the Admin.
The easiest way I could think of was to use a variable as the table name upon creation, but is there a better way of doing this?
You CAN do something like that but, as D Stanley highlighted, you can't use parameters for table names. As such you wouldn't be able to parameterise the user's input if it's to be used as the table name, which makes this a very bad idea: it would immediately open you up to SQL injection, which is never a good plan.
Even with tight sanitization of the user's input there are too many variables to consider, which would no doubt require far more work than desired and could still fall prone to attacks as SQL evolves.
I would suggest rewording your question to perhaps giving a general idea of what your app is trying to achieve to see if there's another way forwards without creating a table per user.
UPDATE
Based on your rewording of your question it sounds like you need to think about your desired database structure. I'd be tempted to have the following tables:
Students - 1 entry per student, primary key of StudentId
Classes - 1 entry per class, primary key of ClassId
Criteria - 1 entry per type of class criteria, primary key of CriteriaId
Progress - potentially multiple entries per student, referencing the StudentId, ClassId, CriteriaId and the Score (perhaps ClassScore and CriteriaScore).
You could then have queries to the Progress table that pulled out a student's progress based on just their Id, or their Id and ClassId, or further still their Id, ClassId and CriteriaId etc.
In terms of allowing Admins to create their own you'd simply create queries that allow Admins to insert student records into the Student table, classes into the Class table and criteria into the Criteria table. On creating a Student record you'd also presumably capture their classes and criteria at the same time, which would insert their record into the Progress table (initially 0 for progress so far). You'd presumably also want an update statement to allow admins to update the Progress table for any given student.
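As a minimal sketch of one such admin query (assuming a ClassName column on the Classes table and a connectionString variable, both illustrative), note how the admin's input is passed as a parameter, avoiding the injection problem discussed above:

using (var conn = new SqlConnection(connectionString)) // assumed connection string
using (var cmd = new SqlCommand("INSERT INTO Classes (ClassName) VALUES (@ClassName)", conn))
{
    // The admin-supplied name is data, never part of the SQL text itself
    cmd.Parameters.AddWithValue("@ClassName", classNameFromAdmin);
    conn.Open();
    cmd.ExecuteNonQuery();
}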
Anyway, hopefully this is enough of a pointer to enable you to not have to create a table per student or per class etc.
Well, first of all you must create the database and the table (or you can create the table later using C#). You must connect C# to SQL using a connection string kept in the resources files, which looks something like this example:
Provider=XXXXXX.DataSource.1; Data Source=XXXX.XXXX.XXXX.CXXX;
Persist Security Info=True; User ID=XXXXX; Password=XXXXXX;
Initial Catalog=XXXXX; Force Translate=0;
Catalog Library List=XXXXXX,XXXXX
Then you create a SqlConnection object, for example via a CreateConnection() method, and assign it the connection string you put in your resx file (or Resources):
NameOfObject.ConnectionString = ConnStr();
and open it with NameOfObject.Open();
Now you can insert, delete and execute queries. In your code you will end up with something like this (the value is passed as a parameter rather than concatenated into the SQL, to avoid the injection issue described above):
SqlConnection sqlConnection1 = new SqlConnection("Your Connection String"); // here you can put the string that you'll use in your resx file
SqlCommand cmd = new SqlCommand(); // initialize the command object for executing instructions (queries)
cmd.CommandText = "INSERT INTO TABLE_NAME_HERE VALUES (@name)"; // parameterised, not concatenated
cmd.Parameters.AddWithValue("@name", nameYouWillExtractFromTheUser);
cmd.CommandType = CommandType.Text; // we say that the command is plain text
cmd.Connection = sqlConnection1; // assign the connection to the command
sqlConnection1.Open(); // open the connection
cmd.ExecuteNonQuery(); // an INSERT returns no result set, so use ExecuteNonQuery rather than ExecuteReader
I am trying to create a temp table from a select statement so that I can get the schema information from the temp table.
I am able to achieve this in SQL Server with the following code:
-- This creates the temp table
SELECT location.id, location.name into #URM_TEMP_TABLE from location
-- This retrieves column information from the temp table
SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME like '#U%'
If I run the code in C# like so:
using (CONN = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
{
    var commandText = ReportToDisplay.ReportQuery.ToLower().Replace("from", "into #URM_TEMP_TABLE from");
    using (SqlCommand command = CONN.CreateCommand())
    {
        //Create temp table
        CONN.Open();
        command.CommandText = commandText;
        int retVal = command.ExecuteNonQuery();
        CONN.Close();

        //get column data from temp table
        command.CommandText = "SELECT * FROM TEMPDB.INFORMATION_SCHEMA.Columns WHERE TABLE_NAME like '#U%'";
        CONN.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ColumnsForReport.Add(new ListBoxCheckBoxItemModel
                {
                    Name = reader["COLUMN_NAME"].ToString(),
                    DataType = reader["DATA_TYPE"].ToString(),
                    IsSelected = false,
                    RVMCommandModel = this
                });
            }
        }
        CONN.Close();

        //drop table
        command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
        CONN.Open();
        command.ExecuteNonQuery();
        CONN.Close();
    }
}
Everything works until it gets to the drop statement, which fails with: Cannot drop the table '#URM_TEMP_TABLE'
So ExecuteNonQuery returns 2547 - which is the number of rows the temp table is supposed to have in it. However, it seems that the table does not actually persist. Is ExecuteNonQuery the right method to call?
Temporary tables are only in scope for the current session. In the code you've posted you're opening a connection, creating a temp table, and closing the connection;
then you're opening another connection (a new session) and attempting to drop a table which is not in scope of that session.
You would need to drop the temp table within the same connection, or possibly make it a global temp table (##) - though in this case with two separate connections, a global temp table would still fall out of scope.
Additionally, as it was pointed out in the comments your temp tables will be cleaned up automatically - but if you really did want to drop them, you must do so from the session that created them.
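For illustration, here's a minimal sketch (reusing the names from the code above) that keeps a single connection open for the whole create/read/drop cycle, so the temp table stays in scope:

using (var conn = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
using (SqlCommand command = conn.CreateCommand())
{
    conn.Open(); // one session for the whole lifetime of the temp table

    command.CommandText = commandText; // the SELECT ... INTO #URM_TEMP_TABLE statement
    command.ExecuteNonQuery();

    command.CommandText = "SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'";
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // read COLUMN_NAME / DATA_TYPE as before
        }
    }

    command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
    command.ExecuteNonQuery(); // same session, so the drop succeeds
} // closing the connection would clean the temp table up anyway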
EDIT taken from another SO thread:
Global temporary tables in SQL Server
Global temporary tables operate much like local temporary tables; they
are created in tempdb and cause less locking and logging than
permanent tables. However, they are visible to all sessions, until the
creating session goes out of scope (and the global ##temp table is no
longer being referenced by other sessions). If two different sessions
try the above code, if the first is still active, the second will
receive the following:
Server: Msg 2714, Level 16, State 6, Line 1 There is already an object
named '##people' in the database.
I have yet to see a valid justification for the use of a global ##temp
table. If the data needs to persist to multiple users, then it makes
much more sense, at least to me, to use a permanent table. You can
make a global ##temp table slightly more permanent by creating it in
an autostart procedure, but I still fail to see how this is
advantageous over a permanent table. With a permanent table, you can
deny permissions; you cannot deny users from a global ##temp table.
Looks like global temp tables still go out of scope... they're just bad to use in general IMO. Can you just drop the table in the same session or rethink your solution?
I am using stored procedures to fetch information from the database. First I fetch all the parent elements and hold them in an array, and then, using each parent Id, I fetch all the related children. Each parent can have 150 children, and there are about 100 parent elements. What is the best way to increase the performance of the fetch operation? Currently it takes 13 seconds to retrieve.
Here is the basic algorithm:
while (reader.Read())
{
    Parent p = new Parent();
    // assign properties to the parent
    p.Children = GetChildrenByParentId(p.Id);
}
You should get all that data in one SQL select / stored proc (doing some sort of join on the child data) and then populate the parent and children objects. Right now you make one round trip per parent - about 100 of them, pulling back 100 * 150 = 15,000 child rows in total - and if you can do this with one request I would expect a dramatic performance effect.
As Brian mentioned in a comment, that pattern is known as RBAR: Row By Agonizing Row :)
If you like a good acronym, there is more here:
https://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/
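As a hedged sketch (the table and column names are assumptions, since the question doesn't show the schema, and Parent is assumed to initialise its Children list), the whole load becomes a single JOIN and one pass over the reader:

// One round trip: parents joined to children, ordered so each parent's rows arrive together
var parents = new Dictionary<int, Parent>();
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT p.Id, p.Name, c.Id AS ChildId, c.Name AS ChildName
      FROM Parent p
      LEFT JOIN Child c ON c.ParentId = p.Id
      ORDER BY p.Id", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            int parentId = reader.GetInt32(0);
            if (!parents.TryGetValue(parentId, out Parent p))
            {
                p = new Parent { Id = parentId, Name = reader.GetString(1) };
                parents.Add(parentId, p);
            }
            if (!reader.IsDBNull(2)) // LEFT JOIN yields NULL child columns for childless parents
                p.Children.Add(new Child { Id = reader.GetInt32(2), Name = reader.GetString(3) });
        }
    }
}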
The first and most important step is to measure the performance. Is it SQL Server that is the bottleneck, or .NET?
Also, you need to minimize the times you have to go back to the database, so if you can retrieve all of the data you need in a single stored procedure, that would be best.
From your question, it sounds to me like SQL Server is the problem. To test this, run your stored procedure from SQL Query Analyzer and see how long it takes for a known parent id. I bet you just need some indexes added to your underlying tables to make it possible for SQL Server to get the data faster. If possible, look at the execution plan for the stored procedure; a good article on reading execution plans is well worth the time.
SQL Server 2008 makes this easy: create a user-defined table type and pass the list of parent IDs to that, OR you can just use the logic you used to get those parent IDs in the first place and join to the tables that hold the child data.
To create the table type, you make something like this:
CREATE TYPE [dbo].[Int32List]
AS TABLE (
[ID] int NOT NULL
);
GO
And your stored proc goes something like this:
CREATE PROCEDURE [dbo].[MyStoredProc]
    @ParentIDTable [dbo].[Int32List] READONLY
AS
    --logic goes here
GO
And you call that procedure from your C# code like this:
DataTable ParentIDs = new DataTable();
ParentIDs.Columns.Add("ID", typeof(int));
// add one row per parent: ParentIDs.Rows.Add(parentId);
SqlConnection connection = new SqlConnection(yourConnectionInfo);
SqlCommand command = new SqlCommand("MyStoredProc", connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@ParentIDTable", SqlDbType.Structured).Value = ParentIDs;
command.Parameters["@ParentIDTable"].TypeName = "Int32List";
This way is nice, because it's a great way to effectively pass a list of values to SQL Server and treat it like a table. I use table types all over my applications where I want to pass an array of values to a stored proc. Just remember that the column names in the C# DataTable need to match the column names in the table type you created, and the TypeName property needs to match the table type's name.
With this method, you will only make 1 call to the DB, and when you iterate through the results, you should also make sure to include the ParentID in the select list so you can match each child to the proper parent object.
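For instance, iterating the results could look like this (parentLookup and MapChild are hypothetical helpers, not from the question):

connection.Open();
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        int parentId = (int)reader["ParentID"]; // ParentID included in the select list
        parentLookup[parentId].Children.Add(MapChild(reader)); // attach each child to its parent
    }
}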
Here's a great resource to explain table types in more detail: http://www.sommarskog.se/arrays-in-sql-2008.html
Can someone suggest the best way to retrieve a scalar value when the site uses .xsd files for the data sets? I have such a site, where before I commit to an insert task I need to check for duplicates.
Back in the day one would just instantiate new connection and command objects and run the query through the BLL/DAL - an easy job. With this prepackaged .xsd file that the Studio creates for you, I have no idea how to do it.
Thanks,
Risho
First, I would recommend adding a unique index in your database to ensure that it's impossible to create duplicates.
To answer your question: you can add queries to the automatically created TableAdapters:
How to: Create TableAdapter queries
From MSDN
TableAdapter with multiple queries
Unlike standard data adapters, TableAdapters can contain multiple
queries to fill their associated data tables. You can define as many
queries for a TableAdapter as your application requires, as long as
each query returns data that conforms to the same schema as its
associated data table. This enables loading of data that satisfies
differing criteria. For example, if your application contains a table
of customers, you can create a query that fills the table with every
customer whose name begins with a certain letter, and another query
that fills the table with all customers located in the same state. To
fill a Customers table with customers in a given state you can create
a FillByState query that takes a parameter for the state value: SELECT
* FROM Customers WHERE State = @State. You execute the query by calling the FillByState method and passing in the parameter value like
this: CustomerTableAdapter.FillByState("WA").
In addition to queries that return data of the same schema as the
TableAdapter's data table, you can add queries that return scalar
(single) values. For example, creating a query that returns a count of
customers (SELECT Count(*) From Customers) is valid for a
CustomersTableAdapter even though the data returned does not conform
to the table's schema.
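As a hedged sketch (CustomersTableAdapter and CountByCustomerNumber are illustrative names - you would create the scalar query yourself in the DataSet designer), the duplicate check before an insert could look like this:

// Assumed designer-created scalar query on the TableAdapter:
//   SELECT COUNT(*) FROM Customers WHERE CustomerNumber = @CustomerNumber
var adapter = new CustomersTableAdapter(); // generated from the .xsd
int duplicates = Convert.ToInt32(adapter.CountByCustomerNumber(customerNumber));
if (duplicates == 0)
{
    // no duplicate found - safe to run the insert
}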
I'm stuck on a little problem concerning the database.
Once a month I get an XML file with customer information (name, address, city, etc.). My primary key is a customer number which is provided in the XML file.
I have no trouble inserting the information into the database:
var cmd = new SqlCommand(@"insert into [customer_info]
    (customer_nr, firstname, lastname, address_1, address_2, address_3.......)");
//some code
cmd.ExecuteNonQuery();
Now, when the next file arrives, I would like to update my table or just fill it with the new information. How can I achieve this?
I've tried using a TableAdapter but it does not work.
And I can only insert a given XML file once, because customer_nr is the primary key and duplicates are not allowed.
So basically, how do I update or fill my table with new information?
Thanks.
One way would be to bulk insert the data into a new staging table in the database (you could use SqlBulkCopy for this for optimal insert speed). Once it's in there, you could then index the customer_nr field and then run 2 statements:
-- UPDATE existing customers
UPDATE ci
SET ci.firstname = s.firstname,
ci.lastname = s.lastname,
... etc
FROM StagingTable s
INNER JOIN Customer_Info ci ON s.customer_nr = ci.customer_nr
-- INSERT new customers
INSERT Customer_Info (customer_nr, firstname, lastname, ....)
SELECT s.customer_nr, s.firstname, s.lastname, ....
FROM StagingTable s
LEFT JOIN Customer_Info ci ON s.customer_nr = ci.customer_nr
WHERE ci.customer_nr IS NULL
Finally, drop your staging table.
Alternatively, instead of the 2 statements, you could just use the MERGE statement if you are using SQL Server 2008 or later, which allows you to do INSERTs and UPDATEs via a single statement.
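For reference, a hedged sketch of what that MERGE might look like for the tables above, executed from ADO.NET (the column list is abbreviated, as in the statements above, and connectionString is assumed):

// Single statement doing both the UPDATE and the INSERT (SQL Server 2008+)
var merge = @"
MERGE Customer_Info AS ci
USING StagingTable AS s ON ci.customer_nr = s.customer_nr
WHEN MATCHED THEN
    UPDATE SET ci.firstname = s.firstname, ci.lastname = s.lastname
WHEN NOT MATCHED THEN
    INSERT (customer_nr, firstname, lastname)
    VALUES (s.customer_nr, s.firstname, s.lastname);";
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(merge, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}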
If I understand your question correctly - if the customer already exists you want to update their information, and if they don't already exist you want to insert a new row.
I have a lot of problems with hard-coded SQL commands in your code, so I would firstly be very tempted to refactor what you have done. However, to achieve what you want, you will need to execute a SELECT on the primary key; if it returns any results you should execute an UPDATE, otherwise an INSERT.
It would be best to do this in something like a stored procedure - you can pass the information to the stored procedure and then it can decide whether to UPDATE or INSERT - this would also reduce the overhead of making several calls from your code to the database (a stored procedure would be much quicker).
AdaTheDev has indeed given a good suggestion.
But in case you must insert/update from .NET code, then you can:
Create a stored procedure that will handle the insert/update, i.e. instead of using a direct insert query as command text, make a call to the stored proc. The SP will check whether the row exists and then update (or insert) accordingly (a sketch follows below).
Use a TableAdapter - but this would be tedious. First you have to set up both insert & update commands. Then you have to query the database to get the existing customer numbers and update the corresponding rows in the DataTable, marking their RowState as Modified. I would rather not go this way.
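A minimal sketch of the first option, assuming a stored procedure named UpsertCustomer (the name and parameters are illustrative):

// Hypothetical stored procedure, created once in the database:
//   CREATE PROCEDURE UpsertCustomer @customer_nr int, @firstname nvarchar(50), ... AS
//   IF EXISTS (SELECT 1 FROM customer_info WHERE customer_nr = @customer_nr)
//       UPDATE customer_info SET firstname = @firstname, ... WHERE customer_nr = @customer_nr
//   ELSE
//       INSERT INTO customer_info (customer_nr, firstname, ...) VALUES (@customer_nr, @firstname, ...)
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("UpsertCustomer", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@customer_nr", customerNr);
    cmd.Parameters.AddWithValue("@firstname", firstName);
    // ... one parameter per column from the XML file
    conn.Open();
    cmd.ExecuteNonQuery();
}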
I have a C# application, using ADO.NET to connect to MSSQL.
I need to create the table (with a dynamic number of columns), then insert many records, then do a select back out of the table.
Each step must be a separate C# call, although I can keep a connection/transaction open for the duration.
There are two types of temp tables in SQL Server, local temp tables and global temp tables. From the BOL:
Prefix local temporary table names with single number sign (#tablename), and prefix global temporary table names with a double number sign (##tablename).
Local temp tables will live for just your current connection. Globals will be available to all connections. Thus, if you re-use the same connection across your related calls (and you did say you could), you can just use a local temp table without worrying about simultaneous processes interfering with each other's temp tables.
You can get more info on this from the BOL article, specifically under the "Temporary Tables" section about halfway down.
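A hedged sketch of that approach (the table and method names are illustrative): each step is a separate C# call, but all of them receive the same open connection, so the local #temp table survives between them:

// The caller opens the connection once and keeps it open across all three steps
void CreateWorkTable(SqlConnection conn, string createSql) // e.g. a dynamically built CREATE TABLE #Work (...)
{
    using (var cmd = new SqlCommand(createSql, conn))
        cmd.ExecuteNonQuery();
}

void InsertRow(SqlConnection conn, int id)
{
    using (var cmd = new SqlCommand("INSERT INTO #Work (Id) VALUES (@id)", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        cmd.ExecuteNonQuery();
    }
}

SqlDataReader SelectAll(SqlConnection conn)
{
    var cmd = new SqlCommand("SELECT * FROM #Work", conn);
    return cmd.ExecuteReader(); // caller disposes the reader
}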
The issue is that #Temp tables exist only within the Connection AND the Scope of the execution.
When the first call from C# to SQL completes, control passes up to a higher level of scope.
This is just as if you had a T-SQL script that called two stored procedures, each of which created a table named #MyTable. The second SP would be referencing a completely different table than the first SP.
However, if the parent T-SQL code created the table, both SPs could see it, but they couldn't see each other's.
The solution here is to use ##Temp tables. They cross scope and connections.
The danger though is that if you use a hard coded name, then two instances of your program running at the same time could see the same table. So dynamically set the table name to something that will be always be unique.
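For example, a quick way to build such a unique name (a sketch; the prefix is arbitrary):

// Table names can't be parameterised, but a GUID we generate ourselves is safe to concatenate
string tempTableName = "##Results_" + Guid.NewGuid().ToString("N");
string createSql = "CREATE TABLE [" + tempTableName + "] (Id int NOT NULL)"; // columns are illustrative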
You might take a look at the repository pattern for dealing with this concept in C#. It gives you a low-level repository layer for data access where each method performs one task, with the connection passed into the method and the actual actions performed within a transaction scope. This means you can call many different methods in your data access layer (implemented as a repository) and, if any of them fail, roll back the whole operation.
http://martinfowler.com/eaaCatalog/repository.html
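A hedged sketch of the shape such a repository might take (the class and method names are illustrative):

// Each method does one task; the caller owns the connection and transaction,
// so several calls can succeed or be rolled back together.
public sealed class WorkTableRepository
{
    private readonly SqlConnection _conn;
    private readonly SqlTransaction _tx;

    public WorkTableRepository(SqlConnection conn, SqlTransaction tx)
    {
        _conn = conn;
        _tx = tx;
    }

    public void CreateWorkTable(string createSql)
    {
        using (var cmd = new SqlCommand(createSql, _conn, _tx))
            cmd.ExecuteNonQuery();
    }

    // Insert, Select and Drop methods follow the same shape;
    // if any step throws, the caller rolls back _tx and the whole operation is undone
}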
The other aspects of your question would be handled by standard SQL, where you can dynamically create a table, insert into it, delete from it, etc. The tricky part here is keeping one transaction away from another transaction. You might look to using temp tables... or you might simply have a second database specifically for this dynamic table concept.
Personally, I think you are doing this the hard way. Do all the steps in one stored proc.
One way to extend the scope/lifetime of your single pound sign #Temp is to use a transaction. For as long as the transaction lives, the #temp table continues to exist. You can also use TransactionScope to give you the same effect, because TransactionScope creates an ambient transaction in the background.
The test methods below pass, proving that the #temp table contents survive between executions.
This may be preferable to using double-pound temp tables, because ##temp tables are global objects. If you have more than one client that happens to use the same ##temp table name, then they could step on each other. Also, ##temp tables do not survive a server restart, so their lifespan is technically not forever. IMHO it's best to control the scope of #temp tables because they're meant to be limited.
using System.Linq;
using System.Transactions;
using Dapper;
using Microsoft.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using IsolationLevel = System.Data.IsolationLevel;

namespace TestTempAcrossConnection
{
    [TestClass]
    public class UnitTest1
    {
        private string _testDbConnectionString = @"Server=(localdb)\mssqllocaldb;Database=master;trusted_connection=true";

        class TestTable1
        {
            public int Col1 { get; set; }
            public string Col2 { get; set; }
        }

        [TestMethod]
        public void TempTableBetweenExecutionsTest()
        {
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted);
            conn.Execute("create table #test1(col1 int, col2 varchar(20))", transaction: tran);
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')", transaction: tran);
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1", transaction: tran).ToList();
            Assert.AreEqual(1, tableResult[0].Col1);
            Assert.AreEqual("one", tableResult[0].Col2);
            tran.Commit();
        }

        [TestMethod]
        public void TempTableBetweenExecutionsScopeTest()
        {
            using var scope = new TransactionScope();
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            conn.Execute("create table #test1(col1 int, col2 varchar(20))");
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')");
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1").ToList();
            Assert.AreEqual(2, tableResult[1].Col1);
            Assert.AreEqual("two", tableResult[1].Col2);
            scope.Complete();
        }
    }
}