Fatal error encountered during command execution while updating blob to MySQL - C#

string query = "";
string constr = ConfigurationSettings.AppSettings["MySQLConnectionStringForIMS"];
using (MySqlConnection con = new MySqlConnection(constr))
{
    //string query = "INSERT INTO user(name, files, contentType) VALUES (@name, @files, @contentType)";
    if (update == "mainSec")
    {
        query = "update main_section set contentType=@contentType,fileData=@fileData,fileNameAfterUploading=@fname,haveDir=@dir where id=@id";
    }
    else
    {
        query = "update sub_section set subContentType=@contentType,subFileData=@fileData,fileNameAfterUploading=@fname,haveDir=@dir where MainSecId=@id and id=@subId";
    }
    using (MySqlCommand cmd = new MySqlCommand(query))
    {
        cmd.Connection = con;
        cmd.CommandType = CommandType.Text;
        cmd.Parameters.AddWithValue("@contentType", contentType);
        cmd.Parameters.AddWithValue("@fileData", data);
        cmd.Parameters.AddWithValue("@fname", filename);
        cmd.Parameters.AddWithValue("@dir", 1);
        cmd.Parameters.AddWithValue("@id", mainId);
        if (update == "subSec")
        {
            cmd.Parameters.AddWithValue("@subId", subId);
        }
        con.Open();
        int st = cmd.ExecuteNonQuery();
        if (st == 1)
        {
            //Uri uri = new Uri(url, UriKind.Absolute);
            //System.IO.File.Delete(uri.LocalPath);
        }
        con.Close();
    }
}
We are using MySql.Data.dll version 6.9.5.0.
This fails with the error "Fatal error encountered during command execution." Any ideas why this would fail?

TL;DR
Because of mismatched branch comparisons, you are executing a query with 6 unbound variables, but you are only binding 5 parameters.
Detail
There wasn't really sufficient information in the stack trace / exception to answer definitively, but it seems the guess about the bad branching practice above was right and is the root cause, i.e. in these two branches:
if (update == "mainSec")
{
    query = ...   // query has 5 unbound variables
}
else
{
    query = ...   // query has 6 unbound variables
}
and
if (update == "subSec")
{
    ...           // bind the 6th parameter here
}
... because the update type / mode string wasn't constrained to either "mainSec" or "subSec", there is a path which uses the sub_section query (6 parameter tokens) but never binds the 6th token (@subId), causing the error.
In situations like this, instead of using weakly constrained strings, I would recommend that you rigidly constrain the range of inputs to your update, e.g. with an enum:
enum UpdateMode
{
    Invalid = 0, // This will be the default, and can be used to ensure assignment
    MainSection,
    SubSection
}
Since there are only two possible modes, you could avoid the first query-assignment branch with a conditional assignment, i.e.
Contract.Assert(updateMode != UpdateMode.Invalid);
var query = updateMode == UpdateMode.MainSection
    ? "update main_section set contentType=@contentType ... "
    : "update sub_section set subContentType=@contentType ... ";
This has the benefit that the declaration and assignment of query are tied together (and the compiler guarantees that query is assigned).
(And if there were more than two queries (and more than two enum states), then a static IReadOnlyDictionary<UpdateMode, string> would allow this pattern to be extended, as sketched below.)
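For illustration, a minimal sketch of that lookup, assuming the enum above (the query text is elided; substitute the real statements):
private static readonly IReadOnlyDictionary<UpdateMode, string> QueriesByMode =
    new Dictionary<UpdateMode, string>
    {
        { UpdateMode.MainSection, "update main_section set ... where id=@id" },
        { UpdateMode.SubSection,  "update sub_section set ... where MainSecId=@id and id=@subId" }
    };

// An unknown mode throws KeyNotFoundException here, surfacing the bad input
// immediately instead of failing inside MySQL with an unbound parameter.
var query = QueriesByMode[updateMode];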
The binding would also change to
if (updateMode == UpdateMode.SubSection)
{
    cmd.Parameters.AddWithValue("@subId", subId);
}
Some notes
con.Close(); isn't needed, since you already have a using around the new Connection - Dispose will call .Close if it's open
I know this is commented out, but I would strongly recommend against doing File IO at this point
if (st == 1)
{
// File.IO
}
Since:
From a separation-of-concerns point of view, deleting files belongs elsewhere. If the deletion depends on exactly one row being updated, that fact can be returned from this blob-update method (see the sketch after these notes).
The I/O would be inside the scope of the using block, which can delay returning the MySqlConnection to the connection pool.
The I/O could fail, and depending on any transaction control this could leave your system in a problematic state, where the row is updated but the file is not deleted.
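A minimal sketch of that separation, assuming the update code is wrapped in a hypothetical UpdateBlob method:
// Inside the data-access method: report whether exactly one row was updated.
int st = cmd.ExecuteNonQuery();
return st == 1;

// At the call site, outside the connection's using block:
if (UpdateBlob(update, contentType, data, filename, mainId, subId))
{
    // File I/O happens here, after the connection has been released.
    System.IO.File.Delete(new Uri(url, UriKind.Absolute).LocalPath);
}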

Related

Error from SQL query

Currently I'm working on cleaning up some code on the backend of an application I'm contracted to maintain. I ran across a method where a call is made to the database via an Oracle data reader. After examining the SQL, I realized it was not necessary to open the Oracle data reader, since the object being loaded was already within the context of our Entity Framework model. I changed the code to use the entity model instead. Below are the changes I made.
Original code
var POCs = new List<TBLPOC>();
Context.Database.Connection.Open();
var cmd = (OracleCommand)Context.Database.Connection.CreateCommand();
OracleDataReader reader;
var SQL = string.Empty;
if (IsAssociate == 0)
    SQL = @"SELECT tblPOC.cntPOC,INITCAP(strLastName),INITCAP(strFirstName)
            FROM tblPOC,tblParcelToPOC
            WHERE tblParcelToPOC.cntPOC = tblPOC.cntPOC AND
                  tblParcelToPOC.cntAsOf = 0 AND
                  tblParcelToPOC.cntParcel = " + cntParcel + " ORDER BY INITCAP(strLastName)";
else
    SQL = @"SELECT cntPOC,INITCAP(strLastName),INITCAP(strFirstName)
            FROM tblPOC
            WHERE tblPOC.cntPOC NOT IN ( SELECT cntPOC
                                         FROM tblParcelToPOC
                                         WHERE cntParcel = " + cntParcel + @"
                                         AND cntAsOf = 0 )
            AND tblPOC.ysnActive = 1 ORDER BY INITCAP(strLastName)";
cmd.CommandText = SQL;
cmd.CommandType = CommandType.Text;
using (reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        POCs.Add(new TBLPOC { CNTPOC = (decimal)reader[0],
                              STRLASTNAME = reader[1].ToString(),
                              STRFIRSTNAME = reader[2].ToString() });
    }
}
Context.Database.Connection.Close();
return POCs;
Replacement code
var sql = string.Empty;
if (IsAssociate == 0)
    sql = string.Format(@"SELECT tblPOC.cntPOC,INITCAP(strLastName),INITCAP(strFirstName)
                          FROM tblPOC,tblParcelToPOC
                          WHERE tblParcelToPOC.cntPOC = tblPOC.cntPOC
                          AND tblParcelToPOC.cntAsOf = 0
                          AND tblParcelToPOC.cntParcel = {0}
                          ORDER BY INITCAP(strLastName)",
                        cntParcel);
else
    sql = string.Format(@"SELECT cntPOC,INITCAP(strLastName),INITCAP(strFirstName)
                          FROM tblPOC
                          WHERE tblPOC.cntPOC NOT IN (SELECT cntPOC
                                                      FROM tblParcelToPOC
                                                      WHERE cntParcel = {0}
                                                      AND cntAsOf = 0)
                          AND tblPOC.ysnActive = 1
                          ORDER BY INITCAP(strLastName)",
                        cntParcel);
return Context.Database.SqlQuery<TBLPOC>(sql, "0").ToList<TBLPOC>();
The issue I'm having right now is that when the replacement code is executed, I get the following error:
The data reader is incompatible with the specified 'TBLPOC'. A member of the type 'CNTPOCORGANIZATION', does not have a corresponding column in the data reader with the same name.
The field cntPOCOrganization does exist within tblPOC, as well as within the TBLPOC entity. cntPOCOrganization is a nullable decimal (don't ask why decimal; I don't get why the previous contractors used decimals rather than ints for identifiers either...). However, in both the old code and the new code, there is no need to fill that field. I'm confused about why it errors out on that particular field.
If anyone has any insight, I would truly appreciate it. Thanks.
EDIT: After thinking on it a bit more and doing some research, I think I know what the issue is. In the entity model for TBLPOC, the cntPOCOrganization field is null; however, there is an object tied to this entity model called TBLPOCORGANIZATION, which also has cntPOCOrganization within itself. I'm guessing it is trying to fill that, and that is what is causing the issue.
That may also be why the previous contractor wrote the Oracle command rather than running it through Entity Framework. I'm going to revert for the time being (on a deadline and really don't want to play with it too long). Thanks!
This error is raised when your EF entity model does not match the query result. If you post the entity model you are trying to fetch into, the SQL can be fixed. In general you need to alias each selected column to the corresponding model property name:
sql = string.Format(@"SELECT tblPOC.cntPOC AS <your_EF_model_property_name_here>,
                             INITCAP(strLastName) AS <your_EF_model_property_name_here>,
                             INITCAP(strFirstName) AS <your_EF_model_property_name_here>
                      FROM tblPOC,tblParcelToPOC
                      WHERE tblParcelToPOC.cntPOC = tblPOC.cntPOC
                      AND tblParcelToPOC.cntAsOf = 0
                      AND tblParcelToPOC.cntParcel = {0}
                      ORDER BY INITCAP(strLastName)",
                    cntParcel);
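For example, assuming the model properties really are named CNTPOC, STRLASTNAME and STRFIRSTNAME (as the original reader code suggests), the projection might look like the sketch below; passing the value as a bind parameter instead of formatting it into the string is also worth doing (exact parameter syntax depends on your Oracle provider):
var sql = @"SELECT tblPOC.cntPOC AS CNTPOC,
                   INITCAP(strLastName) AS STRLASTNAME,
                   INITCAP(strFirstName) AS STRFIRSTNAME
            FROM tblPOC, tblParcelToPOC
            WHERE tblParcelToPOC.cntPOC = tblPOC.cntPOC
              AND tblParcelToPOC.cntAsOf = 0
              AND tblParcelToPOC.cntParcel = :cntParcel
            ORDER BY INITCAP(strLastName)";

// SqlQuery<TBLPOC> matches columns to TBLPOC properties by name. If the entity also
// maps CNTPOCORGANIZATION, either select that column too (tblPOC.cntPOCOrganization
// AS CNTPOCORGANIZATION) or project into a smaller DTO type instead of the full entity.
var POCs = Context.Database.SqlQuery<TBLPOC>(sql, new OracleParameter("cntParcel", cntParcel)).ToList();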

Code not synced throwing Index was out of bounds array?

Before you mark this question as a duplicate, here is the tricky part I don't understand: the error is sporadic. I believe the code is correct and it normally works, and I'm handling the possible mistakes with an if/else condition inside the reader part. Here is the code:
public static Tuple<int, string> GetIDAndString(string term)
{
    try
    {
        using (SqlConnection con = GetConnection())
        using (cmd = new SqlCommand())
        using (myReader)
        {
            int ID = 0;
            string status = string.Empty;
            cmd.Connection = con;
            con.Open();
            cmd.CommandText = @"SELECT t.TableID, t.Status
                                FROM Table t WITH (NOLOCK) /* I know NOLOCK is not causing the mistake as far as I know */
                                WHERE t.Term = @term";
            cmd.Parameters.AddWithValue("@term", term);
            myReader = cmd.ExecuteReader();
            while (myReader.Read())
            {
                ID = myReader.IsDBNull(0) ? 0 : myReader.GetInt32(0);
                status = myReader.IsDBNull(1) ? string.Empty : myReader.GetString(1).Trim();
            }
            myReader.Close();
            return new Tuple<int, string>(ID, status);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
I know I should be using a class instead of a Tuple, but I can't change that existing code, as you can see. The main problem is that on the production server there was an "Index was out of bounds of the array" exception in that method, but I can't identify the cause.
Even if the term is not found by the query, the while loop over myReader simply won't be entered and I'll return ID = 0 and status = string.Empty. Sometimes when I'm debugging on the development server, my code starts to crash everywhere, showing me exceptions in code that is already tested, and I have to reopen the solution to avoid that (I haven't found a fix for this, not even cleaning the solution).
So I hope someone has experience with something like this on a production server. I don't have the production server's specifications, so I don't know anything about it.
First, you don't need the try/catch block; you don't do anything with it. Next, don't share the SqlDataReader across the class; this can cause problems, and the issue probably comes from this. You are also overwriting the values of ID and status on every iteration of your while loop; it would probably be better to select TOP 1 and order by the appropriate field. Also, there is no strict need to Dispose() the SqlCommand; its constructor calls GC.SuppressFinalize().
Why this problem can happen: imagine your query returns 1000 records with the TableID and Status columns and you enter the while loop. At that moment another user hits your application and executes another method which overwrites the SqlDataReader and returns 5 records with only one column. On the next iteration of your while loop you will get your exception. Because of that you should never define your readers as static fields of the class; static variables are shared between all users of the application.
public static Tuple<int, string> GetIDAndString(string term)
{
    int ID = 0;
    string status = string.Empty;
    using (SqlConnection con = GetConnection())
    {
        SqlCommand cmd = new SqlCommand();
        cmd.Connection = con;
        con.Open();
        cmd.CommandText = @"SELECT t.TableID, t.Status
                            FROM Table t WITH (NOLOCK) /* I know NOLOCK is not causing the mistake as far as I know */
                            WHERE t.Term = @term";
        cmd.Parameters.AddWithValue("@term", term);
        using (SqlDataReader myReader = cmd.ExecuteReader())
        {
            while (myReader.Read())
            {
                ID = myReader.IsDBNull(0) ? 0 : myReader.GetInt32(0);
                status = myReader.IsDBNull(1) ? string.Empty : myReader.GetString(1).Trim();
            }
        }
    }
    return new Tuple<int, string>(ID, status);
}
This probably happens when you do ID = myReader.IsDBNull(0) ? 0 : myReader.GetInt32(0); or status = myReader.IsDBNull(1) ? string.Empty : myReader.GetString(1).Trim(); because the result set does not conform to your expectations. You should add logging of the reader's row before actually reading it; that might help you pinpoint the issue.
I guess the problem is caused by the myReader field which I suppose is static. If you look at the SqlDataReader (I suppose that's the field's type) documentation at https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader(v=vs.110).aspx, you'll find that instance methods are not thread safe, hence you must synchronize access to that field.
using (myReader) captures the value that the variable holds at that moment and disposes that value later. It does not track the variable. This has to be so, as you can see from an example like using (someRandomCondition() ? myReader : null): clearly the C# language will not re-execute that expression at dispose time; it runs it just once.
So you're disposing some old/other reader.
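A tiny sketch of that capture behaviour (names are hypothetical):
myReader = oldReader;               // using captures this value...
using (myReader)
{
    myReader = cmd.ExecuteReader(); // ...so this reassignment is not tracked
    while (myReader.Read()) { /* ... */ }
}                                   // oldReader is disposed here; the new reader is not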
In case you are sharing objects between threads (maybe via static variables), this is trivially a race condition. Don't do that. Use locals; there is no need for, or advantage to, static variables here.

Embedded loops containing SQL connections

I have the following code:
SqlConnection connection1, connection2;
SqlCommand command1, command2;
SqlDataReader reader1, reader2;
using (connection1 = new SqlConnection("connection string here"))
{
    using (command1 = new SqlCommand(@"SELECT * FROM [SERVER1].[DATABASE1].[TABLE1] WHERE COL1 = @COL1 AND COL2 = @COL2", connection1))
    {
        command1.Parameters.Add("@COL1", SqlDbType.VarChar, 255).Value = TextBox1.Text;
        command1.Parameters.Add("@COL2", SqlDbType.VarChar, 255).Value = TextBox2.Text;
        connection1.Open();
        using (reader1 = command1.ExecuteReader())
        {
            while (reader1.Read())
            {
                int COL3Index = reader1.GetOrdinal("COL3");
                Console.Write("### LOOP 1 ###");
                Console.Write(reader1.GetDouble(COL3Index));
                using (connection2 = new SqlConnection("same connection string here"))
                {
                    using (command2 = new SqlCommand(@"SELECT * FROM [SERVER1].[DATABASE1].[TABLE2] WHERE COL1 = @COL1", connection1))
                    {
                        command2.Parameters.Add("@COL1", SqlDbType.Float).Value = reader1.GetDouble(COL3Index);
                        connection2.Open();
                        using (reader2 = command2.ExecuteReader())
                        {
                            while (reader2.Read())
                            {
                                int COL2Index = reader2.GetOrdinal("COL2");
                                Console.Write("### LOOP 2 ###");
                                Console.Write(reader2.GetDouble(COL2Index));
                            }
                        }
                    }
                }
            }
        }
    }
}
Basically two of everything. I will need to do this 5 times, i.e. a loop within a loop within a loop within a loop within a loop...
The first loop on its own works, but the second one does not work and gives the following error:
There is already an open DataReader associated with this Command which
must be closed first.
on the line:
using (reader2 = command2.ExecuteReader())
How can I get this to work, given that I need to nest the loops?
This is the definition of a SELECT N+1 problem and should be avoided if possible. I would recommend using something like Entity Framework and eagerly loading the child values.
If that's not possible, loop through your entire reader1 result set, copy it into a local collection, close reader1, and then iterate through the local collection and load based on the local values, as sketched below.
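A minimal sketch of that buffering approach, reusing command1 and connection1 from the question (column names follow the question; the rest is illustrative):
// First pass: read all COL3 values from the outer query, then release the reader.
var col3Values = new List<double>();
using (var reader1 = command1.ExecuteReader())
{
    int col3Index = reader1.GetOrdinal("COL3");
    while (reader1.Read())
    {
        col3Values.Add(reader1.GetDouble(col3Index));
    }
}

// Second pass: run the inner query per buffered value on the same open connection.
foreach (double col3 in col3Values)
{
    using (var command2 = new SqlCommand(
        "SELECT * FROM [SERVER1].[DATABASE1].[TABLE2] WHERE COL1 = @COL1", connection1))
    {
        command2.Parameters.Add("@COL1", SqlDbType.Float).Value = col3;
        using (var reader2 = command2.ExecuteReader())
        {
            while (reader2.Read())
            {
                Console.Write(reader2.GetDouble(reader2.GetOrdinal("COL2")));
            }
        }
    }
}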
You have no reason to open the connection twice if you have MARS enabled for the same database/connection string. This can be done by adding "MultipleActiveResultSets=True" to your connection string, for example:
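A short sketch (server and database names are placeholders):
// MARS lets command2 execute on connection1 while reader1 is still open.
const string connectionString =
    "Data Source=SERVER1;Initial Catalog=DATABASE1;Integrated Security=True;" +
    "MultipleActiveResultSets=True";

using (var connection1 = new SqlConnection(connectionString))
{
    connection1.Open();
    // ... run command1/reader1 and command2/reader2 against connection1 ...
}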
Additionally, You can use a DataAdapter to load the data into a DataSet/DataTable and then query the DataSet. However, this assumes that your tables aren't too big that you can load them into memory, otherwise an ORM would be a better option.
An ORM solution such as LINQ-to-SQL or LINQ-to-Entities via Entity Framework (as mentioned in Mike Cole's answer) could really help you out here so you don't need to worry about writing these queries and handling the connections. Instead you just rely on a DataContext to handle the connections.

What am I doing wrong with this query?

I can't seem to find out why this function doesn't insert records into the database. :(
I get no error messages whatsoever, just nothing in the database.
EDIT: this is how my query looks now... still nothing.
connection.Open();
XmlNodeList nodeItem = rssDoc.SelectNodes("/edno23/posts/post");
foreach (XmlNode xn in nodeItem)
{
    cmd.Parameters.Clear();
    msgText = xn["message"].InnerText;
    C = xn["user_from"].InnerText;
    avatar = xn["user_from_avatar"].InnerText;
    string endhash = GetMd5Sum(msgText.ToString());
    cmd.Parameters.Add("@endhash", endhash);
    cmd.CommandText = "Select * FROM posts Where hash=@endhash";
    SqlCeDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        string msgs = reader["hash"].ToString();
        if (msgs != endhash || msgs == null)
        {
            sql = "INSERT INTO posts([user],msg,avatar,[date],hash) VALUES(@username,@messige,@userpic,@thedate,@hash)";
            cmd.CommandText = sql;
            cmd.Parameters.Add("@username", C);
            cmd.Parameters.Add("@messige", msgText.ToString());
            cmd.Parameters.Add("@userpic", avatar.ToString());
            cmd.Parameters.Add("@thedate", dt);
            cmd.Parameters.Add("@hash", endhash);
            cmd.ExecuteNonQuery(); // executes query
            adapter.Update(data);  // saves the changes
        }
    }
    reader.Close();
}
connection.Close();
Does nodeItem actually have any items in it? If not, the contents of the foreach loop aren't being executed.
What's the adapter and data being used for? The queries and updates seem be done via other commands and readers.
What does 'hash' actually contain? If it's a hash, why are you hashing the content of the hash inside the while loop? If not, why is it being compared against a hash in the query Select * FROM posts Where hash=@endhash?
Won't closing the connection before the end of the while loop invalidate the reader used to control the loop?
Lots of things going on here...
You are using the command 'cmd' to loop over records with a datareader, and then using the same 'cmd' command inside the while statement to execute an insert statement. You declared another command 'cmdAdd' before but don't seem to use it anywhere; is that what you intended to use for the insert statement?
You also close your data connection inside the while loop that iterates over your datareader. You are only going to read one record and then close the connection to your database that way; if your conditional for inserting is not met, you're not going to write anything to the database.
EDIT:
You really should open and close the connection to the database outside the foreach on the xmlnodes. If you have 10 nodes to loop over, the db connection is going to be opened and closed 10 times (well, connection pooling will probably prevent that, but still...)
You are also loading the entire 'posts' table into a dataset for seemingly no reason. You're not changing any of the values in the dataset, yet you are calling an update on it repeatedly (at "saves the changes"). If the 'posts' table is even remotely large, this is going to suck a lot of memory for no reason (on a handheld device, no less).
Is anything returned from "Select * FROM posts Where hash=@endhash"?
If not, nothing inside the while loop matters....
Why are you closing the Database Connection inside the while loop?
The code you posted should throw an exception when you try to call cmd.ExecuteNonQuery() with an unopen DB connection object.
SqlCeCommand.ExecuteNonQuery() method returns the number of rows affected.
Why don't you check whether it is returning 1 or not in the debugger as shown below?
int rowsAffectedCount = cmd.ExecuteNonQuery();
Hope it helps :-)
You've got some issues with not implementing "using" blocks. I've added some to your inner code below. The blocks for the connection and select command are more wishful thinking on my part. I hope you're doing the same with the data adapter.
using (var connection = new SqlCeConnection(connectionString))
{
    connection.Open();
    var nodeItem = rssDoc.SelectNodes("/edno23/posts/post");
    foreach (XmlNode xn in nodeItem)
    {
        using (var selectCommand =
            new SqlCeCommand(
                "Select * FROM posts Where hash=@endhash",
                connection))
        {
            var msgText = xn["message"].InnerText;
            var c = xn["user_from"].InnerText;
            var avatar = xn["user_from_avatar"].InnerText;
            var endhash = GetMd5Sum(msgText);
            selectCommand.Parameters.Add("@endhash", endhash);
            selectCommand.CommandText =
                "Select * FROM posts Where hash=@endhash";
            using (var reader = selectCommand.ExecuteReader())
            {
                while (reader.Read())
                {
                    var msgs = reader["hash"].ToString();
                    if (msgs == endhash && msgs != null)
                    {
                        continue;
                    }
                    const string COMMAND_TEXT =
                        "INSERT INTO posts([user],msg,avatar,[date],hash) VALUES(@username,@messige,@userpic,@thedate,@hash)";
                    using (var insertCommand =
                        new SqlCeCommand(COMMAND_TEXT, connection))
                    {
                        insertCommand.Parameters.Add("@username", c);
                        insertCommand.Parameters.Add("@messige", msgText);
                        insertCommand.Parameters.Add("@userpic", avatar);
                        insertCommand.Parameters.Add("@thedate", dt);
                        insertCommand.Parameters.Add("@hash", endhash);
                        insertCommand.ExecuteNonQuery(); // executes query
                    }
                    adapter.Update(data); // saves the changes
                }
                reader.Close();
            }
        }
    }
    connection.Close();
}
Of course with the additional nesting, parts should be broken out as separate methods.
I suspect your problem is that you're trying to reuse the same SqlCeCommand instances.
Try making a new SqlCeCommand within the while loop. Also, you can use the using statement to close your data objects.
Why are you calling adapter.Update(data) since you're not changing the DataSet at all? I suspect you want to call adapter.Fill(data). The Update method will save any changes in the DataSet to the database.
How to debug programs: http://www.drpaulcarter.com/cs/debug.php
Seriously, can you post some more information about where it's working? Does it work if you use SQL Server Express instead of SQL CE? If so, can you break out SQL Profiler and take a look at the SQL commands being executed?

Check if a SQL table exists

What's the best way to check if a table exists in a SQL database, in a database-independent way?
I came up with:
bool exists;
const string sqlStatement = @"SELECT COUNT(*) FROM my_table";
try
{
    using (OdbcCommand cmd = new OdbcCommand(sqlStatement, myOdbcConnection))
    {
        cmd.ExecuteScalar();
        exists = true;
    }
}
catch
{
    exists = false;
}
Is there a better way to do this? This method will not work when the connection to the database fails. I've found ways for Sybase, SQL Server and Oracle, but nothing that works for all databases.
bool exists;
try
{
    // ANSI SQL way. Works in PostgreSQL, MSSQL, MySQL.
    var cmd = new OdbcCommand(
        "select case when exists((select * from information_schema.tables where table_name = '" + tableName + "')) then 1 else 0 end");
    exists = (int)cmd.ExecuteScalar() == 1;
}
catch
{
    try
    {
        // Other RDBMS. Graceful degradation
        exists = true;
        var cmdOthers = new OdbcCommand("select 1 from " + tableName + " where 1 = 0");
        cmdOthers.ExecuteNonQuery();
    }
    catch
    {
        exists = false;
    }
}
If you're aiming for database independence, you will have to assume a minimum standard. IIRC, the ANSI INFORMATION_SCHEMA views are required for ODBC conformance, so you could query against them like:
select count (*)
from information_schema.tables
where table_name = 'foobar'
Given that you are using ODBC, you can also use various ODBC API calls to retrieve this metadata, for example:
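From .NET, that metadata is exposed through OdbcConnection.GetSchema; a minimal sketch (the restrictions array ordering can vary by driver, so treat the filter as an assumption to verify):
// Returns true if the ODBC driver reports a table with the given name.
static bool TableExists(OdbcConnection connection, string tableName)
{
    // "Tables" is a standard schema collection; restrictions are typically
    // { catalog, schema, table, tableType }, but drivers differ.
    DataTable tables = connection.GetSchema("Tables", new string[] { null, null, tableName, null });
    return tables.Rows.Count > 0;
}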
Bear in mind that portability equates to write once, test everywhere, so you are still going to have to test the application on every platform you intend to support. This means you are inherently limited to a finite number of possible database platforms, as you only have so much resource for testing.
The upshot is that you need to find a lowest common denominator for your application (which is quite a lot harder than it looks for SQL) or build a platform-dependent section where the non-portable functions can be plugged in on a per-platform basis.
I don't think there is one generic way that works for all databases, since this is something very specific that depends on how the DBMS is built.
But why do you want to do this with a specific query?
Can't you abstract the implementation away from what you want to do?
I mean: why not create a generic interface which has, among others, a method called TableExists(string tablename)?
Then, for each DBMS that you want to support, you create a class which implements this interface, and in the TableExists method you write logic specific to that DBMS.
The SQLServer implementation will then contain a query which queries sysobjects.
In your application, you can have a factory class which creates the correct implementation for a given context, and then you just call the TableExists method.
For instance:
IMyInterface foo = MyFactory.CreateMyInterface (SupportedDbms.SqlServer);
if( foo.TableExists ("mytable") )
...
I think this is how I should do it.
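A minimal sketch of that shape (names are hypothetical; the SQL Server implementation queries sysobjects as described above):
using System.Data.SqlClient;

public interface IMyInterface
{
    bool TableExists(string tableName);
}

public sealed class SqlServerDbms : IMyInterface
{
    private readonly string _connectionString;

    public SqlServerDbms(string connectionString)
    {
        _connectionString = connectionString;
    }

    public bool TableExists(string tableName)
    {
        using (var con = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM sysobjects WHERE xtype = 'U' AND name = @name", con))
        {
            cmd.Parameters.AddWithValue("@name", tableName);
            con.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}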
I fully support Frederik Gheysels' answer. If you have to support multiple database systems, you should implement your code against an abstract interface with a specific implementation per database system. There are many more examples of incompatible syntax than just checking for an existing table (e.g. limiting a query to a certain number of rows).
But if you really have to perform the check using the exception handling from your example, you should use the following query that is more efficient than a COUNT(*) because the database has no actual selection work to do:
SELECT 1 FROM my_table WHERE 1=2
I would avoid executing select count(x) from xxxxxx, as the DBMS will actually go ahead and do it, which may take some time on a large table.
Instead, just prepare a select * from mysterytable query. The prepare will fail if mysterytable does not exist. There is no need to actually execute the prepared statement, as sketched below.
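A minimal sketch of that prepare-only check over ODBC (whether Prepare round-trips to the server depends on the driver, so treat this as an assumption to verify; tableName must come from a trusted source, since it is concatenated into the SQL):
static bool TableExists(OdbcConnection connection, string tableName)
{
    try
    {
        using (var cmd = new OdbcCommand("select * from " + tableName + " where 1 = 0", connection))
        {
            cmd.Prepare(); // fails if the table does not exist (driver permitting)
            return true;
        }
    }
    catch (OdbcException)
    {
        return false;
    }
}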
The following works well for me:
private bool TableExists(SqlConnection conn, string database, string name)
{
    string strCmd = null;
    SqlCommand sqlCmd = null;
    try
    {
        strCmd = "select case when exists((select '['+SCHEMA_NAME(schema_id)+'].['+name+']' As name FROM [" + database + "].sys.tables WHERE name = '" + name + "')) then 1 else 0 end";
        sqlCmd = new SqlCommand(strCmd, conn);
        return (int)sqlCmd.ExecuteScalar() == 1;
    }
    catch { return false; }
}
In my current project at work I need to write a 'data agent' which supports a lot of database types.
So I decided to do the following: write a base class with the base (database-independent) functionality using virtual methods, and override all the database-specific parts in subclasses.
Very simple:
use YOUR_DATABASE -- optional
SELECT count(*) as Exist from INFORMATION_SCHEMA.TABLES where table_name = 'YOUR_TABLE_NAME'
If the answer is 1, the table exists.
If the answer is 0, there is no table.
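The same check from C#, parameterized (a sketch assuming an already open SqlConnection named con):
using (var cmd = new SqlCommand(
    "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @name", con))
{
    cmd.Parameters.AddWithValue("@name", "YOUR_TABLE_NAME");
    bool tableExists = (int)cmd.ExecuteScalar() > 0;
}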
If you want to avoid try/catch solutions, I suggest this method, which reads the table names from sys.tables:
private bool IsTableExisting(string table)
{
    // The first column of sys.tables is the table name.
    string command = "select * from sys.tables";
    using (SqlConnection con = new SqlConnection(Constr))
    using (SqlCommand com = new SqlCommand(command, con))
    {
        con.Open();
        using (SqlDataReader reader = com.ExecuteReader())
        {
            while (reader.Read())
            {
                if (reader.GetString(0).ToLower() == table.ToLower())
                    return true;
            }
        }
    }
    return false;
}
