I'm still building my application, but I'd like to test it online rather than locally to see how it performs.
Essentially, as it will be a distributed application, I want it to be secure. I want the connection string to be encrypted so that later, when I apply obfuscation, it will increase the overall security of the application.
http://puu.sh/is8IX/12ccc76d65.png
Overall, how does it look? It works perfectly locally; I just want to improve it, as I'm heading into an SE job and want to be prepared.
Apart from security, any other advice?
Personally, regardless of obfuscation, I would not keep the connection string information in the class. Sorry, I had to say this even though you said it wasn't security related.
Apart from that, I would change your code to use using statements, for example:
using (var connection = U_SQLConnection.GetConnection())
{
    using (var command = new SqlCommand("", connection))
    {
        using (var reader = command.ExecuteReader())
        {
        }
    }
}
You don't "Have To" use using statements, but mostly they would be considered good practice. Also from a personal point of view, I would not keep the SqlConnection variable globally in the class either (SqlConnection _con). If you take a look you are in fact setting it twice, once in line 4 and again on line 32 (line numbers are more guessing), it might not break anything, but seems like it is not required.
I've been trying to find information about transactions that span multiple connections, but I haven't been able to find any good source.
Now for what I'm trying to do. I have code that looks like this:
using (var Connection1 = m_Db.CreateConnection())
using (var Connection2 = m_Db.CreateConnection())
{
    Connection1.DoRead(..., (IDataReader Reader) =>
    {
        // Do stuff
        Connection2.DoWrite(...);
        Connection2.DoRead(..., (IDataReader Reader2) =>
        {
            // Do more stuff
            using (var Connection3 = m_Db.CreateConnection())
            {
                Connection3.DoWrite(...);
                Connection3.Commit(); // Is this even right?
            }
        });
    });
    Connection1.DoRead(..., (IDataReader Reader3) =>
    {
        // Do yet more stuff
    });
    Connection1.Commit();
    Connection2.Commit();
}
Each CreateConnection starts a new transaction via MySqlConnection.BeginTransaction. The CreateConnection method returns a Connection object that wraps a MySqlConnection. The DoRead function executes some SQL and disposes the IDataReader when done.
Every Connection will do a Rollback when disposed.
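To make the discussion concrete, the wrapper described might look roughly like this. It is a sketch reconstructed purely from the behaviour stated above (transaction on creation, rollback on dispose); every detail beyond that is guessed.

using System;
using System.Data;
using MySql.Data.MySqlClient;

// Guessed shape of the Connection wrapper described in the question.
class Connection : IDisposable
{
    private readonly MySqlConnection m_Conn;
    private MySqlTransaction m_Tx;

    public Connection(string connectionString)
    {
        m_Conn = new MySqlConnection(connectionString);
        m_Conn.Open();
        m_Tx = m_Conn.BeginTransaction(); // every wrapper starts a transaction
    }

    public void DoRead(string sql, Action<IDataReader> handler)
    {
        using (var cmd = new MySqlCommand(sql, m_Conn, m_Tx))
        using (var reader = cmd.ExecuteReader())
        {
            handler(reader); // reader is disposed when the handler returns
        }
    }

    public void DoWrite(string sql)
    {
        using (var cmd = new MySqlCommand(sql, m_Conn, m_Tx))
        {
            cmd.ExecuteNonQuery();
        }
    }

    public void Commit()
    {
        m_Tx.Commit();
        m_Tx = null;
    }

    public void Dispose()
    {
        if (m_Tx != null) m_Tx.Rollback(); // roll back anything uncommitted
        m_Conn.Dispose();
    }
}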
Now for some notes:
I have ONE server with multiple databases.
I am running MySql server with InnoDB databases.
I am doing both reads and writes to these databases.
For performance reasons, and to avoid leaving the database in an inconsistent state, I am using transactions.
The code is (at least, for now) entirely serial. There are NO concurrent threads. All inserts and queries are done in serial fashion.
I use multiple connections to the database because a read or write is not allowed while another read is in progress (basically the reader object has not yet been disposed).
I basically want every connection to see all changes. So for example, after Connection 3 does some writes, Connection 1 should see those. But the data should be in the transaction and not written to the database (yet).
Now, as for my questions:
Does this work? Will everything be committed only once the last Commit function is called? Should I use another approach?
Is this right? Is my approach completely and utterly wrong and silly?
Any drawbacks? Especially regarding performance.
Thanks.
Welp, it seems no one knows. But that's okay.
For now, I just went with the method of using one connection and reading all the results into a list, then closing the reader, thereby avoiding the problem of having to use multiple connections.
Might there be performance problems? Maybe, but it's better than having to deal with uncertainty and deadlocks.
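In code, that single-connection workaround looks roughly like this (a sketch: conn and tx are assumed to be an open MySqlConnection and its transaction, System.Collections.Generic is assumed imported, and the table and columns are invented):

// Buffer all rows into memory first, so the reader can be disposed
// before any further commands run on the same connection.
var rows = new List<object[]>();
using (var cmd = new MySqlCommand("SELECT id, name FROM items", conn, tx))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        var values = new object[reader.FieldCount];
        reader.GetValues(values);
        rows.Add(values);
    }
} // reader disposed here; the connection is free again

foreach (var row in rows)
{
    // Now it is safe to issue writes on the same connection.
    using (var cmd = new MySqlCommand("UPDATE items SET name = @n WHERE id = @id", conn, tx))
    {
        cmd.Parameters.AddWithValue("@id", row[0]);
        cmd.Parameters.AddWithValue("@n", row[1]);
        cmd.ExecuteNonQuery();
    }
}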
My hosting company blocked my website for using more than 15 concurrent database connections. But in my code I closed each and every connection that I opened. Still, they say there are too many concurrent connections, and they suggested that I change the source code of my website. So please tell me the solution to this. Also, my website is dynamic; would making it a static, old-style plain-HTML site make a difference?
Also note that, when I couldn't think of any other solution, I tried adding con.Close() before every con.Open(), so that any other open connection would be closed first.
The first thing to do is to check when you open connections and see if you can minimise that. For example, are you doing "n+1" queries on different connections?
If you have a single server, the technical solution here is a semaphore - for example, something like:
someSemaphore.WaitOne();
try
{
    using (var conn = GetConnection())
    {
        ...
    }
}
finally
{
    someSemaphore.Release();
}
which will (assuming someSemaphore is shared, for example static) ensure that you can only get into that block "n" times at once. In your case, you would create the semaphore with 15 spaces:
static readonly Semaphore someSemaphore = new Semaphore(15,15);
However! Caution is recommended: in some cases you could get a deadlock. Imagine two poorly written threads that each need 9 connections: thread A takes 7 and thread B takes 8. Both need more, and neither will ever get them. Thus, using WaitOne with a timeout is important:
static void TakeConnection()
{
    if (!someSemaphore.WaitOne(3000))
    {
        throw new TimeoutException("Unable to reserve connection");
    }
}
static void ReleaseConnection()
{
    someSemaphore.Release();
}
...
TakeConnection();
try
{
    using (var conn = GetConnection())
    {
        ...
    }
}
finally
{
    ReleaseConnection();
}
It would also be possible to wrap that up in IDisposable to make usage more convenient.
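For example, a small disposable wrapper might look like this (a sketch; ConnectionSlot is an invented name):

using System;
using System.Threading;

// Acquire a semaphore slot in the constructor, release it on Dispose.
sealed class ConnectionSlot : IDisposable
{
    static readonly Semaphore someSemaphore = new Semaphore(15, 15);

    public ConnectionSlot()
    {
        if (!someSemaphore.WaitOne(3000))
            throw new TimeoutException("Unable to reserve connection");
    }

    public void Dispose()
    {
        someSemaphore.Release();
    }
}

Usage then collapses to:

using (new ConnectionSlot())
using (var conn = GetConnection())
{
    ...
}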
Change Hosting Company.
Seriously.
Unless you run a pathetic little home blog.
You can easily have more than 15 pages/requests being handled at the same time. I am always wary of "runaway connections", but I would not consider 15 connections to even be worth mentioning. This is like a car rental company complaining that you drive more than 15 km: it is simply a REALLY low limit.
On a busy website you can have 50, 100, even 200 open connections, just because you have that many requests at the same time.
This is something not so obvious, but even if you take care to open and close your connections properly, there is one more thing to look at.
If you make the smallest change to the text you use to build a connection string, .NET will create a whole new connection pool instead of using the one already open (even if the connection uses MARS). So, just in case, check whether your code creates connection strings on the fly instead of using a single one from your web.config.
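For example (illustrative strings only), pooling keys on the exact connection string text, so these two connections end up in two separate pools even though the settings are identical:

// Same settings, different text (key order swapped) => two separate pools.
var a = new SqlConnection("Server=.;Database=Shop;Integrated Security=SSPI");
var b = new SqlConnection("Integrated Security=SSPI;Server=.;Database=Shop");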
I believe SQL Connections are pooled. When you close one, you actually just return it to connection pool.
You can use SqlConnection.ClearPool(connection) or SqlConnection.ClearAllPools to actually close the connection, but it will affect the performance of your site.
Also, you can disable pooling by using connection string parameter Pooling=false.
There is also Max Pool Size (default 100); you may want to set it to a lower number.
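Both of those are set in the connection string itself; for example (illustrative values):

// Cap this application at 10 pooled connections, or disable pooling entirely.
var capped = "Server=myHost;Database=myDb;Uid=me;Pwd=secret;Max Pool Size=10";
var noPool = "Server=myHost;Database=myDb;Uid=me;Pwd=secret;Pooling=false";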
All of this might work, but I would also suggest switching providers...
If you only fetch data from the database, then it is not very difficult to create some sort of cache (see the sketch below). But if there is full CRUD, then the better solution is to change hosting provider.
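If the data really is read-mostly, even a very simple in-process cache can cut the number of connections dramatically. A minimal sketch, where Product and GetProductsFromDb are invented placeholders:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Serve repeat reads from an in-process cache so that most requests
// never touch the database at all.
static List<Product> GetProducts()
{
    var cached = MemoryCache.Default.Get("products") as List<Product>;
    if (cached == null)
    {
        cached = GetProductsFromDb(); // the only path that opens a connection
        MemoryCache.Default.Set("products", cached, DateTimeOffset.Now.AddMinutes(5));
    }
    return cached;
}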
I know that creating a custom data access layer is not a very good idea unless you: 1) Know exactly what you're doing, and/or 2) Have a very specific need. However, I am maintaining some legacy code that uses a custom data access layer where each method looks something like this:
using (SqlConnection cn = new SqlConnection(connectionString))
{
    using (SqlDataAdapter da = new SqlDataAdapter("sp_select_details", cn))
    {
        using (DataSet ds = new DataSet())
        {
            da.SelectCommand.Parameters.Add("@blind", SqlDbType.Bit).Value = blind;
            da.SelectCommand.CommandType = CommandType.StoredProcedure;
            da.SelectCommand.CommandTimeout = CommandTimeout;
            da.Fill(ds, "sp_select_details");
            return ds;
        }
    }
}
Consequently, the usage looks something like this:
protected void Page_Load(object sender, EventArgs e)
{
    using (Data da = new Data("SQL Server connection string"))
    {
        DataSet ds = da.sp_select_blind_options(Session.SessionID); // opens a connection
        Boolean result = da.sp_select_login_exists("someone");      // opens another connection
    }
}
I am thinking that using Microsoft's Enterprise Library would save me from setting up and tearing down the connection to SQL Server on every method call. Am I correct in this thinking?
I've used Enterprise Library in the past very successfully, and Enterprise Library would hide some of the messy details from you, but essentially it would be using the same code internally as that demonstrated in your example.
As @tigran says, I wouldn't recommend trying to change an existing codebase unless there are fundamental issues with it.
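For comparison, the same stored-procedure call through Enterprise Library's Data Access Application Block would look roughly like this (a sketch from memory; the connection string comes from configuration, and older versions use DatabaseFactory while newer ones use DatabaseProviderFactory):

using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

// EntLib resolves the connection from config and opens/closes it internally;
// connection pooling still applies underneath, just as in the hand-written code.
Database db = DatabaseFactory.CreateDatabase();
DataSet ds = db.ExecuteDataSet("sp_select_details", blind);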
Yes, it will definitely save you time, but you will pay in terms of performance and flexibility.
So creating a custom data layer is also a very good way to gain performance and flexibility.
Considering that you're talking about legacy code that, I suppose, works, I wouldn't change it to something modern (but less performant) only to have something fresh in the code.
A solid, workable data layer is a better choice than any new technology you might force into legacy code.
In short, change it only if you have really serious reasons to do so. I understand the desire to change things, because it's always hard to understand code written by someone else, but believe me, very often not changing old legacy code is the best choice for the project.
Good luck.
Yep, by default connection pooling is on. The application domain basically maintains a list of connections, and when you issue a call to create one, it returns an unused connection from the pool if one exists, or creates one if not.
So when your connection cn goes out of scope at the end of the using statement and gets disposed, what actually happens is that it goes back into the pool, ready for the next request, hanging around in there based on various optimisation parameters.
Google "ADO.NET connection pooling" for more details; there's a lot out there.
I've been searching for some time now, here and elsewhere, and can't find a good answer to why LINQ-to-SQL with NOLOCK is not possible.
Every time I search for how to apply the WITH (NOLOCK) hint to a LINQ-to-SQL context (applied to a single SQL statement), people answer by suggesting that you force a transaction (TransactionScope) with IsolationLevel set to ReadUncommitted. They rarely mention that this causes the connection to open a transaction (which, as I've also read somewhere, you must make sure gets closed manually).
Using ReadUncommitted in my application as-is is really not that good. Right now I have using context statements for the same connection nested within each other, like:
using (var ctx1 = new Context())
{
    ... some code here ...
    using (var ctx2 = new Context())
    {
        ... some code here ...
        using (var ctx3 = new Context())
        {
            ... some code here ...
        }
        ... some code here ...
    }
    ... some code here ...
}
With a total execution time of one second and many simultaneous users, changing the isolation level will cause the contexts to wait for each other to release a connection, because all the connections in the connection pool are in use.
So one (of many) reasons for changing to NOLOCK is to avoid deadlocks (right now we get one customer deadlock per day). The consequence of the above is just another kind of deadlock, so it really doesn't solve my issue.
So what I know I could do is:
Avoid nested usage of same connection
Increase the connection pool size at the server
But my problem is:
This is not possible in the near future because of the many lines of code that would need refactoring, and it would conflict with the architecture (without even starting to discuss whether that is good or bad).
Even though this would of course work, it is what I would call "symptomatic treatment": I don't know how much the application will grow, or whether this is a reliable solution for the future (and I might end up in an even worse situation with a lot more users affected).
My thoughts are:
Can it really be true that NOLOCK is not possible (per statement, without starting transactions)?
If 1 is true, can it really be that no one else has had this problem and solved it with a generic LINQ-to-SQL modification?
If 2 is true, why is this not an issue for others?
Is there another workaround I haven't looked at, maybe?
Is nesting the same connection many times such bad practice that no one else has this issue?
1: LINQ-to-SQL does indeed not allow you to indicate hints like NOLOCK; it is possible to write your own TSQL, though, and use ExecuteQuery<T> etc
2: to solve in an elegant way would be pretty complicated, frankly; and there's a strong chance that you would be using it inappropriately. For example, in the "deadlock" scenario, I would wager that actually it is UPDLOCK that you should be using (during the first read), to ensure that the first read takes a write lock; this prevents a second later query getting a read lock, so you generally get blocking instead of deadlock
3: using the connection isn't necessarily a big problem (although note that new Context() won't generally share a connection; to share a connection you would use new Context(connection)). If seeing this issue, there are three likely solutions (if we exclude "use an ORM with hint support"):
using an explicit transaction (which doesn't have to be TransactionScope; it can be a connection-level transaction) to specify the isolation level (see the sketch after this answer)
write your own TSQL with hints
use a connection-level isolation level (noting the caveat I added as a comment)
IIRC there is also a way to subclass the data-context and override some of the transaction-creation code to control the isolation-level for the transactions that it creates internally.
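To illustrate the first two of those options, here is a rough sketch (Customer, the Customers table property, and the SQL are invented; Context(connection) is the constructor mentioned above, and System.Data plus System.Linq are assumed imported):

// Option "write your own TSQL with hints": ExecuteQuery maps raw SQL,
// including the NOLOCK hint, back onto an entity type.
var ctx = new Context(connection);
var risky = ctx.ExecuteQuery<Customer>(
    "SELECT Id, Name FROM Customers WITH (NOLOCK) WHERE Name = {0}", "someone");

// Option "explicit connection-level transaction": assign it to the context
// so its queries run at the isolation level you chose.
ctx.Connection.Open();
using (var tx = ctx.Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    ctx.Transaction = tx;
    var list = ctx.Customers.ToList(); // runs under READ UNCOMMITTED
    tx.Commit();
}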
Is there a way to write custom refactorings or code transformations for Visual Studio?
An example: I have a codebase with a billion instances of:
DbConnection conn = null;
conn = new DbConnection();
conn.Open();
...a number of statements using conn...
conn.Close();
conn = null;
I would like to transform this into:
using (DbConnection conn = GetConnection())
{
    ...statements...
}
Everywhere the above pattern appears.
Edit: The above is just an example. The point is that I need to do a number of code transformations which are too complex to perform with a text-based search-replace. I wonder if I can hook into the same mechanism underlying the built-in refactorings to write my own code transformations.
As Marc said, this is more of a 'replace' thing than a refactoring. But in any case, ReSharper is an option, and if you decide to use it, you can check out this guide. Good luck!
Strictly speaking, that isn't a pure refactor, since it changes the code in a way that significantly changes the behaviour (in particular, calling Dispose()). I would hope that either "Resharper" or "Refactor! Pro" would have a bulk "introduce using" (or similar). I've checked on "Refactor! Pro" (since that is what I use), and although it detects the undisposed local (at least, it does with DbConnection conn = new SqlConnection();), it doesn't offer an automated fix (trivial to do manually, of course). I would suggest:
check Resharper (there is an evaluation period)
if not, do it manually
You would need to write a macro to do this.