Write custom refactorings for Visual Studio - C#

Is there a way to write custom refactorings or code transformations for Visual Studio?
An example: I have a codebase with a billion instances of:
DbConnection conn = null;
conn = new DbConnection();
conn.Open();
...a number of statements using conn...
conn.Close();
conn = null;
I would like to transform this into:
using (DbConnection conn = GetConnection()){
...statements...
}
Everywhere the above pattern appears.
Edit: The above is just an example. The point is that I need to do a number of code transformations which are too complex to perform with a text-based search-replace. I wonder if I can hook into the same mechanism underlying the built-in refactorings to write my own code transformations.

As Marc said, this is more of a 'replace' thing than a refactoring. But in any case, ReSharper is an option, and if you decide to use it, you can check out this guide. Good luck!
It appears that the above link is now broken; try this one instead.

Strictly speaking, that isn't a pure refactor, since it changes the code in a way that significantly changes the behaviour (in particular, calling Dispose()). I would hope that either ReSharper or Refactor! Pro would have a bulk "introduce using" (or similar). I've checked in Refactor! Pro (since that is what I use), and although it detects the undisposed local (at least, it does with DbConnection conn = new SqlConnection();), it doesn't offer an automated fix (trivial to do manually, of course). I would suggest:
check ReSharper (there is an evaluation period)
if not, do it manually

You would need to write a macro to do this.
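For what it's worth, a later option is to drive the transformation from the syntax tree itself. Below is a minimal, hedged sketch using Roslyn (the Microsoft.CodeAnalysis.CSharp package, which is the mechanism behind the built-in refactorings in current Visual Studio, though it postdates this question). It only locates candidate methods containing the manual Open()/Close() pattern; an automated rewrite would build on the same API:
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class PatternFinder
{
    static void Main(string[] args)
    {
        // args[0] is the path of the source file to scan
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0]));
        foreach (var method in tree.GetRoot()
                                   .DescendantNodes()
                                   .OfType<MethodDeclarationSyntax>())
        {
            var calls = method.DescendantNodes()
                              .OfType<InvocationExpressionSyntax>()
                              .Select(i => i.Expression.ToString())
                              .ToList();
            // Flag methods that open and close a connection by hand.
            if (calls.Any(c => c.EndsWith(".Open")) && calls.Any(c => c.EndsWith(".Close")))
                Console.WriteLine("Candidate: " + method.Identifier);
        }
    }
}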

Related

Dapper vs ADO.NET with reflection: which is faster?

I have studied Dapper and ADO.NET and performed select tests on both, and found that sometimes ADO.NET is faster than Dapper and sometimes it is the reverse. I understand this could be a database issue, as I am using SQL Server. It is said that reflection is slow, and I am using reflection in my ADO.NET code. So can anyone tell me which approach is faster?
Here is what I coded.
Using ADO.NET
DashboardResponseModel dashResp = null;
SqlConnection conn = new SqlConnection(connStr);
try
{
    SqlCommand cmd = new SqlCommand("spGetMerchantDashboard", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@MID", mid);
    conn.Open();
    var dr = cmd.ExecuteReader();
    // MapToList is a custom reflection-based extension method
    List<MerchantProduct> lstMerProd = dr.MapToList<MerchantProduct>();
    List<MerchantPayment> lstMerPay = dr.MapToList<MerchantPayment>();
    if (lstMerProd != null || lstMerPay != null)
    {
        dashResp = new DashboardResponseModel();
        dashResp.MerchantProduct = lstMerProd ?? new List<MerchantProduct>();
        dashResp.MerchantPayment = lstMerPay ?? new List<MerchantPayment>();
    }
    dr.Close();
}
finally
{
    conn.Close(); // release the connection even if the query or mapping fails
}
return dashResp;
Using Dapper
DashboardResponseModel dashResp = null;
var multipleresult = db.QueryMultiple("spGetMerchantDashboard",
    new { mid = mid }, commandType: CommandType.StoredProcedure);
var merchantproduct = multipleresult.Read<MerchantProduct>().ToList();
var merchantpayment = multipleresult.Read<MerchantPayment>().ToList();
if (merchantproduct.Count > 0 || merchantpayment.Count > 0)
    dashResp = new DashboardResponseModel
    {
        MerchantProduct = merchantproduct,
        MerchantPayment = merchantpayment
    };
return dashResp;
Dapper basically straddles ADO.NET as a very thin abstraction - so in theory it can't be faster than well-written ADO.NET code (although to be honest: most people don't write well-written ADO.NET code).
It can be virtually indistinguishable, though; assuming you're using just Dapper (not any of the things that sit on top of it), it doesn't include any query generation, expression-tree / DSL parsing, complex model configuration, or any of those other things that tend to make full ORMs more flexible but more expensive.
Instead, it focuses on just executing user-supplied queries and mapping results. What it does is generate all of the materialization code (how to map your columns to MerchantProduct) via IL-emit and cache it; it prepares the parameter-handling code in the same way. So at runtime it is usually just fetching two delegate instances from a cache and invoking them.
Since the combination of (latency to the RDBMS + query execution cost + network bandwidth cost of the results) is going to be much higher than the overhead of fetching two delegates from dictionaries, we can essentially ignore that cost.
In short: it would be rare that you can measure a significant overhead here.
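To make the caching idea concrete, here is a much-simplified sketch. It uses plain reflection where Dapper emits IL, and the names (MaterializerCache, For) are illustrative, not Dapper's internals:
using System;
using System.Collections.Concurrent;
using System.Data;

static class MaterializerCache
{
    static readonly ConcurrentDictionary<Type, Func<IDataRecord, object>> cache =
        new ConcurrentDictionary<Type, Func<IDataRecord, object>>();

    public static Func<IDataRecord, object> For(Type type)
    {
        // Build the mapping delegate once per type; every later call is a cache hit.
        return cache.GetOrAdd(type, t => record =>
        {
            var instance = Activator.CreateInstance(t);
            for (int i = 0; i < record.FieldCount; i++)
            {
                var prop = t.GetProperty(record.GetName(i));
                if (prop != null && !record.IsDBNull(i))
                    prop.SetValue(instance, record.GetValue(i));
            }
            return instance;
        });
    }
}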
As a minor optimization to your code: prefer AsList() to ToList() to avoid creating a copy.
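Applied to the code above (AsList() ships with Dapper itself; when the sequence is already a buffered List<T>, it returns that list directly instead of copying it):
var merchantproduct = multipleresult.Read<MerchantProduct>().AsList();
var merchantpayment = multipleresult.Read<MerchantPayment>().AsList();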
Theory:
Dapper is a micro-ORM, or a Data Mapper. It internally uses ADO.NET. Additionally, Dapper maps the ADO.NET data structures (the DataReader, say) to your custom POCO classes. As this is additional work Dapper does, in theory it cannot be faster than ADO.NET.
The following is copied from one of the comments (@MarcGravell) on this answer:
it can't be faster than the raw API that it sits on top of; it can, however, be faster than the typical ADO.NET consuming code - most code that consumes ADO.NET tends to be badly written, inefficient etc; and don't even get me started on DataTable :)
This comparison assumes that ADO.NET is used properly, in optimized ways. Otherwise, the result may be the opposite; but that is not the fault of ADO.NET. If ADO.NET is used incorrectly, it may under-perform Dapper. This is what happens when ADO.NET is used directly, bypassing the practices Dapper enforces.
Practical:
In most cases, Dapper performs on par with ADO.NET (the difference is negligible). Dapper internally implements many of the optimizations recommended for ADO.NET that fall within its scope. It also enforces many good ADO.NET coding practices that ultimately improve performance (and security).
As mapping is a core part of Dapper, it is heavily optimized through the use of IL. This makes Dapper a better choice than mapping manually in code.
Refer to this blog post, which explains how Dapper was invented and how it is optimized for performance: https://samsaffron.com/archive/2011/03/30/How+I+learned+to+stop+worrying+and+write+my+own+ORM
In the following scenarios, Dapper MAY be slower:
If the returned data structure is large enough (which increases the mapping time), Dapper will be slightly slower. But this is equally true of ADO.NET. As said earlier, the mapper part of Dapper is heavily optimized, so it is still a better choice than manual mapping in code. Further, Dapper provides a buffered parameter; if it is set to false, Dapper does not materialize the list but simply hands over each item to you in an iterator (see the sketch after this list). Refer to the comment on this answer by @Marc.
Dapper does not implement provider-specific features, as it is written over IDbConnection. This may hurt performance in those very rare cases. But it can be done if you implement an interface to tell Dapper how to do it.
Dapper does not support prepared statements. That may be an issue in a very few cases. Read this blog post.
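A minimal illustration of the buffered flag mentioned above (connection, the SQL text and Process are placeholders for your own code):
var rows = connection.Query<MerchantProduct>(
    "select * from MerchantProduct",
    buffered: false);           // rows are streamed, not materialized into a list
foreach (var row in rows)
    Process(row);               // each row is mapped lazily as it is consumed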
Against this slight and rare performance hit, you get huge benefits, including strongly typed data structures and much less, far more manageable code. That is a really big gain.
There are many performance-comparison statistics for Dapper (against other ORMs and raw ADO.NET) available on the net; have a look in case you are interested.

C# WPF ConnectionString Class Security

I'm just building my application, but wish to test it online rather than locally, to see how it performs.
Essentially, as it will be a distributed application, I want it to be secure. I want the connection string to be encrypted so that later, when I perform obfuscation, it will add to the overall security of the application.
Screenshot of the code: http://puu.sh/is8IX/12ccc76d65.png
Overall, how does it look? It works perfectly locally; I just wish to improve it, as I'm heading into an SE job and want to be prepared.
Apart from security, any other advice?
Personally, obfuscation or not, I would not keep the connection string information in the class; sorry, I had to say this even though you asked for advice apart from security.
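As a minimal sketch of the alternative, assuming an App.config with a connectionStrings section (the name "MainDb" is a placeholder):
using System.Configuration;   // reference the System.Configuration assembly

static class Db
{
    public static string ConnectionString
    {
        get { return ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString; }
    }
}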
Apart from that,
I would change your code to use using statements, example:
using (var connection = U_SQLConnection.GetConnection())
{
    using (var command = new SqlCommand("...", connection)) // pass the connection to the command
    {
        using (var reader = command.ExecuteReader())
        {
        }
    }
}
You don't "Have To" use using statements, but mostly they would be considered good practice. Also from a personal point of view, I would not keep the SqlConnection variable globally in the class either (SqlConnection _con). If you take a look you are in fact setting it twice, once in line 4 and again on line 32 (line numbers are more guessing), it might not break anything, but seems like it is not required.

Static Data Access Methods

I know that creating a custom data access layer is not a very good idea unless you: 1) Know exactly what you're doing, and/or 2) Have a very specific need. However, I am maintaining some legacy code that uses a custom data access layer where each method looks something like this:
using (SqlConnection cn = new SqlConnection(connectionString))
{
    using (SqlDataAdapter da = new SqlDataAdapter("sp_select_details", cn))
    {
        using (DataSet ds = new DataSet())
        {
            da.SelectCommand.Parameters.Add("@blind", SqlDbType.Bit).Value = blind;
            da.SelectCommand.CommandType = CommandType.StoredProcedure;
            da.SelectCommand.CommandTimeout = CommandTimeout;
            da.Fill(ds, "sp_select_details");
            return ds;
        }
    }
}
Consequently, the usage looks something like this:
protected void Page_Load(object sender, EventArgs e)
{
    using (Data da = new Data("SQL Server connection string"))
    {
        DataSet ds = da.sp_select_blind_options(Session.SessionID); // opens a connection
        Boolean result = da.sp_select_login_exists("someone");      // opens another connection
    }
}
I am thinking that using Microsoft's Enterprise Library would save me from setting up and tearing down the connection to SQL Server on every method call. Am I correct in this thinking?
I've used Enterprise Library in the past very successfully, and Enterprise Library would hide some of the messy details from you, but essentially it would be using the same code internally as that demonstrated in your example.
As @tigran says, I wouldn't recommend trying to change an existing codebase unless there are fundamental issues with it.
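For reference, a hedged sketch of what the equivalent call looks like through the Enterprise Library Data Access Application Block (API names from memory; verify them against the EntLib version you use):
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

static DataSet SelectDetails(bool blind)
{
    Database db = DatabaseFactory.CreateDatabase();           // reads connection info from config
    DbCommand cmd = db.GetStoredProcCommand("sp_select_details");
    db.AddInParameter(cmd, "@blind", DbType.Boolean, blind);
    return db.ExecuteDataSet(cmd);                            // open/close handled internally
}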
Yes, it will definitely save you time, but you will pay in terms of performance and flexibility.
By the same token, creating a custom data layer is a very good way to gain performance and flexibility.
Considering that you're talking about legacy code that, I suppose, works, I wouldn't change it to something modern (but less performant) only to have something fresh in the code.
A solid, workable data layer is a better choice than any new technology you might push into legacy code.
In short, change it only if you have really serious reasons to do so. I understand your desire to change the stuff, because it's always hard to understand code written by someone else, but believe me, very often not changing old legacy code is the best choice for the project.
Good luck.
Yep, by default connection pooling will be on. The application domain basically maintains a list of connections; when you issue a call to create a connection, it returns an unused one from the pool if one exists, or creates one if not.
So when your connection cn goes out of scope in the using statement and gets disposed, what actually happens is that it goes back into the pool, ready for the next request, and hangs around in there based on various optimisation parameters.
Google "ADO.NET connection pooling" for more details; there's a lot in there.
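A minimal sketch of what that means in code (the connection string value is a placeholder):
using System.Data.SqlClient;

static void PoolingDemo(string connectionString)
{
    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();   // taken from the pool, or created on first use
    }                // Dispose() returns it to the pool; the wire connection stays alive

    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();   // typically reuses the same physical connection from the pool
    }
}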

Static analysis tool to check locking before access to variable

I know there are quite a few static analysis tools for C# or .NET around. See this question for a good list of available tools. I have used some of them in the past, and they are good at detecting problems.
I am currently looking for a way to automatically enforce some locking rules we have in our teams. For example, I would like to enforce the following rules:
"Every public method that uses member foo must acquire a lock on bar"
Or
"Every call to the foobar event must be made outside a lock on bar"
Writing custom FxCop rules, if feasible, seems rather complex. Is there any simpler way of doing it?
Multithreading is hard. Using locks is not the only way to make operations thread-safe. A developer may use non-blocking synchronization with a loop and Interlocked.CompareExchange, or some other mechanism, instead. A rule cannot determine whether something is thread-safe.
If the purpose of the rules is to ensure high-quality code, I think the best way to go about this is to create a thread-safe version of your class that is simple to consume. Put checks in place so that the more complex synchronization code is only modified, under code review, by developers who understand multithreading.
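For example, this hypothetical counter is thread-safe without any lock, so a rule phrased as "must acquire a lock on bar" would either flag it wrongly or miss it entirely:
using System.Threading;

class Counter
{
    private int _value;

    public int Increment()
    {
        int current, incremented;
        do
        {
            current = _value;            // snapshot the current value
            incremented = current + 1;
        }
        // retry if another thread changed _value between the snapshot and here
        while (Interlocked.CompareExchange(ref _value, incremented, current) != current);
        return incremented;
    }
}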
With NDepend you could write a code rule over a LINQ query (CQLinq) that could look like:
warnif count > 0 from m in Methods where
m.IsUsing ("YourNamespace.YourClass.foo") && (
! m.IsUsing ("YourNamespace.YourClass.bar") ||
! m.IsUsing ("System.Threading.Monitor.Enter(Object)".AllowNoMatch()) ||
! m.IsUsing ("System.Threading.Monitor.Exit(Object)".AllowNoMatch()) )
select new { m, m.NbLinesOfCode }
Basically it matches methods that use the field foo without using the field bar, or without calling Monitor.Enter()/Monitor.Exit(). This is not exactly what you are asking for, since you want the lock to be taken explicitly on bar, but it is simple and quite close.
Note that you can also write...
m.AssignField("YourNamespace.YourClass.foo")
...to restrict the match to a specific write/assign usage of the field foo.
One possible solution could be to implement Code Contracts. You define rules, run them at compile time (so they can also be integrated into your CI environment, if any) and get the results.
For an example of using Code Contracts as a static analysis tool, see:
Static Code Analysis and Code Contracts

LINQ-To-SQL NOLOCK (NOT ReadUncommitted)

I've been searching for some time now, here and in other places, and can't find a good answer to why LINQ-to-SQL with NOLOCK is not possible.
Every time I search for how to apply the WITH (NOLOCK) hint to a LINQ-to-SQL context (applied to one SQL statement), people often answer: force a transaction (TransactionScope) with IsolationLevel set to ReadUncommitted. What they rarely mention is that this causes the connection to open a transaction (which, as I've also read somewhere, must be ensured to be closed manually).
Using ReadUncommitted in my application as it is, is really not that good. Right now I've got using-context statements for the same connection nested within each other, like:
using (var ctx1 = new Context())
{
    // ... some code here ...
    using (var ctx2 = new Context())
    {
        // ... some code here ...
        using (var ctx3 = new Context())
        {
            // ... some code here ...
        }
        // ... some code here ...
    }
    // ... some code here ...
}
With a total execution time of 1 second and many simultaneous users, changing the isolation level will cause the contexts to wait for each other to release a connection, because all the connections in the connection pool are in use.
So one (of many) reasons for changing to NOLOCK is to avoid deadlocks (right now we have one customer deadlock per day). The consequence of the above is just another kind of deadlock, so it really doesn't solve my issue.
So what I know I could do is:
Avoid nested usage of same connection
Increase the connection pool size at the server
But my problem with these is:
Refactoring that many lines of code is not possible in the near future, and it would conflict with the architecture (without even starting to discuss whether that is good or bad).
Even though increasing the pool size would of course work, it is what I would call symptomatic treatment: I don't know how much the application will grow, so it isn't a reliable solution for the future (I might just end up in an even worse situation, with a lot more users affected).
My thoughts are:
Can it really be true that NOLOCK is not possible (per statement, without starting transactions)?
If 1 is true, can it really be true that no one else has had this problem and solved it with a generic LINQ-to-SQL modification?
If 2 is true, why is this not an issue for others?
Is there another workaround I haven't considered, maybe?
Is the nested use of the same connection so bad a practice that no one else has this issue?
1: LINQ-to-SQL does indeed not allow you to specify hints like NOLOCK; it is possible to write your own TSQL, though, and use ExecuteQuery<T> etc.
2: to solve this in an elegant way would be pretty complicated, frankly; and there's a strong chance that you would be using it inappropriately. For example, in the "deadlock" scenario, I would wager that it is actually UPDLOCK that you should be using (during the first read), to ensure that the first read takes a write lock; this prevents a second, later query getting a read lock, so you generally get blocking instead of deadlock.
3: using the same connection isn't necessarily a big problem (although note that new Context() won't generally share a connection; to share a connection you would use new Context(connection)). If you are seeing this issue, there are three likely solutions (if we exclude "use an ORM with hint support"):
using an explicit transaction (which doesn't have to be TransactionScope - it can be a connection-level transaction) to specify the isolation level
write your own TSQL with hints
use a connection-level isolation level (noting the caveat I added as a comment)
IIRC there is also a way to subclass the data-context and override some of the transaction-creation code to control the isolation-level for the transactions that it creates internally.
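To illustrate the first two options, here is a hedged sketch; YourDataContext and Order are hypothetical stand-ins for your own context and entity types:
using System.Data;
using System.Linq;

// Option 1: connection-level transaction with ReadUncommitted isolation.
using (var ctx = new YourDataContext())
{
    ctx.Connection.Open();
    using (var tx = ctx.Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        ctx.Transaction = tx;   // LINQ-to-SQL now runs its queries in this transaction
        var orders = ctx.Orders.Where(o => o.Total > 100).ToList();
        tx.Commit();
    }
}

// Option 2: hand-written TSQL with an explicit NOLOCK hint.
using (var ctx = new YourDataContext())
{
    var orders = ctx.ExecuteQuery<Order>(
        "SELECT * FROM dbo.Orders WITH (NOLOCK) WHERE Total > {0}", 100).ToList();
}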
