Provider model and performance - c#

Are there any performance implications with using the provider pattern?
Does it rely on reflection for each instantiation or anything?

Yes, the provider model usually involves a small amount of reflection, so there is a slight performance hit, but only in the instantiation of the provider object. Once the object is instantiated, it is accessed as normal (usually via an interface). There should be very little difference versus a hard-coded model, and the gain from a programming perspective far outweighs any performance penalty, assuming the provider may actually change one day. If not, just hard-code it.

Providers are instanced once per app-domain. Although newing up an object via reflection is slower than doing it inline, it is still very, very fast. I would say there is no performance concern for most business apps.
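To make that concrete, here is a minimal sketch of the idea, not any particular provider framework; the interface name, config key, and type name are all assumptions. The reflection cost is paid once in a static factory, and every subsequent call goes through the interface just like hard-coded code would.

using System;
using System.Configuration;

// The contract the rest of the application programs against.
public interface ILogProvider
{
    void Write(string message);
}

public static class LogProviderFactory
{
    // Instantiated once per app-domain; the reflection cost is paid a single time.
    private static readonly ILogProvider _instance = CreateFromConfig();

    public static ILogProvider Provider
    {
        get { return _instance; }
    }

    private static ILogProvider CreateFromConfig()
    {
        // e.g. <add key="logProvider" value="MyApp.Logging.FileLogProvider, MyApp" />
        string typeName = ConfigurationManager.AppSettings["logProvider"];
        Type providerType = Type.GetType(typeName, true);
        return (ILogProvider)Activator.CreateInstance(providerType);
    }
}

Callers simply write LogProviderFactory.Provider.Write("..."), so swapping the concrete provider becomes a config change rather than a recompile.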

Related

Migrating Custom .NET ORM to Entity Framework / Dapper

I've inherited a project that is massive in scale, and a bit of a labyrinth. The traffic is substantial enough to want to optimize the data access, so I've begun converting some WCF services that are invoked by JavaScript to Web API.
Unfortunately, the database's primary keys (not auto-incrementing) are also managed by the custom ORM, which queries a MySQL function that returns the next set of IDs to be used. The ORM then caches them and serves them up to the application. The database itself is an ever-growing 2 TB of data, which would make downtime significant.
I had planned on using Dapper, as I've enjoyed its ease of use and performance in the past, but weaning this database off of this custom ORM seems daunting and prone to error.
Questions:
Does anyone have any advice for tackling a large scale project?
Should I focus more on migrating the database into an entirely new data structure? (it needs significant normalization, too!)
My humble opinion:
A rule of thumb when dealing with legacy code is: if something works, keep it that way; make a change or an improvement only when necessary. The main reasons are:
The effect of a redesign is almost zero when what you want is to add business value to the system.
The system, good or bad, works. No matter how careful you are, you can always break something with a structural change.
Besides, the reasons to change (adding a feature, fixing a bug, improving the design, or optimizing resource usage) depend a lot on the company's plans. My humble experience tells me that time and budget are very important, and although we always want to redesign (or in some cases, to code from scratch), what matters most is the business objectives and the added value.
In your case, maybe it's not necessary to change the whole ORM. Since you say the IDs are cached, a better approach would be to modify the PKs, adding the identity (auto-increment) property to them (with the proper starting value on each table). After that you can delete the particular part of the code that gets the next IDs.
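If you do go that route, the Dapper side becomes very small. This is only a hedged sketch, assuming the PK column has been switched to AUTO_INCREMENT; the table, column, and class names are invented.

using Dapper;
using MySql.Data.MySqlClient;

public class CustomerRepository
{
    private readonly string _connectionString;

    public CustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // With AUTO_INCREMENT in place, the database hands out the key itself, so the
    // custom "fetch the next block of IDs" function and its cache can be retired.
    public long Insert(string name)
    {
        using (var connection = new MySqlConnection(_connectionString))
        {
            const string sql =
                "INSERT INTO Customer (Name) VALUES (@Name); SELECT LAST_INSERT_ID();";
            return connection.ExecuteScalar<long>(sql, new { Name = name });
        }
    }
}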
In some cases, an unnormalized database has its reasons. I've seen cases in which data is duplicated across tables to avoid joins, which affects performance. I'm talking about millions of records...
Reasons to change the ORM: maybe it's inefficient, or it does not release unmanaged resources (in which case a better approach is to implement the IDisposable interface). If it works, a better approach may be to adopt a new ORM only when you need to create new functionality. If the project needs refactoring for optimization purposes, the change needs to be applied to the bottlenecks, not the entire codebase.
There is a lot of discussion about the topic. Good recommended resources are "Working Effectively with Legacy Code" by Michael Feathers and "Getting Started with DDD When Surrounded by Legacy Systems" by Eric Evans.
Greetings!

Benefits and drawbacks of strongly typed datasets for DAL in .Net

I'm currently working with a system that has inherited a DAL using .NET's strongly typed datasets. I have never worked with them before this, but I'm finding that I have a strong aversion to using them. Compared to a POCO-based DAL, they seem clunky and difficult to manage, and the resulting objects are highly coupled with database-specific concerns (e.g., accessing objects through tables and rows, getting desired data by key values, etc. -- isn't the entire purpose of a DAL to abstract this away from the logic layers?).
There has been some discussion about rewriting and/or refactoring parts of the database layer. I, personally, would like to see these datasets removed, but I'm having a hard time convincing some of my colleagues, who are used to using them.
What are some of the pros and cons of using strongly typed datasets vs. a POCO-based DAL? Am I justified in my aversion to strongly typed datasets, or is the community consensus that they aren't a problem? Are there any other solutions that I'm missing?
Although I also agree that there are benefits to using an ORM framework like NHibernate, I think a library of that complexity would be a hard sell to my colleagues. If anyone can provide a compelling enough argument in this direction, I would like to hear it.
Strongly typed datasets were an easy way to do a designer-based approach to database access. They could be generated from the database and are fairly easy to update. They also have the benefit of enforcing data types.
You can think of them as a transitional stage between raw ADO.NET with DataSets, DataTables, and DataAdapters, and Entity Framework. I would try presenting Entity Framework, using the designer and database-first code generation, to your colleagues as a replacement for the current method. It should be a familiar pattern and will allow them to transition more easily. It should also be the lowest amount of additional work to retrofit the existing code.
You can use that introduction to work on their comfort level and later start to introduce POCOs, LINQ and separation of concerns in new projects. Remember that, typically, the faster the pace of change (and/or the higher the workload), the greater the resistance. If you can present new methodologies in bite-sized pieces and as safe-feeling proofs of concept, you'll be much better received. Change is risk, so managing both perceptions and potential work expansion due to unknowns is important.
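To make the contrast concrete, here is a hedged sketch (table, column, and connection-string names are invented) of the same query exposed two ways: a dataset-style method that leaks rows and column names to callers, and a POCO-returning method that keeps those details inside the DAL.

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerDal
{
    private readonly string _connectionString;

    public CustomerDal(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Dataset-style: callers end up writing (string)row["Name"] and similar.
    public DataTable GetCustomersAsTable()
    {
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customer", _connectionString))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }

    // POCO-style: the database-specific concerns stay behind the method boundary.
    public IList<Customer> GetCustomers()
    {
        var result = new List<Customer>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Customer", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new Customer { Id = reader.GetInt32(0), Name = reader.GetString(1) });
                }
            }
        }
        return result;
    }
}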

Regarding the use of reflection in .NET and its performance

We know that with the help of reflection we can create an instance of a class dynamically at run time and call its methods very easily. In that sense reflection is late binding, because the action is taken at run time. So I just want to know whether reflection is fast or not.
What is the performance of reflection? Is it good or bad? Is it resource hungry? Please discuss. Thanks.
Technically speaking, reflection is a performance hit. But if you're doing something that needs it, then you have to use it. If you can go without it, avoid it.
EDIT
To further emphasize, reflection is neither good nor bad. It's in the Framework because there are very legitimate reasons to use it. That said, 90% of the time that I see someone using reflection, they're trying to do something the hard way, not knowing the easy route. Often it's because they don't know about generics.
Generally, the performance of reflection is worse than doing the same thing without reflection. But whether it is too slow for you depends on your performance requirements (do you need it to be fast?) and on what exactly you are doing.
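As a rough, hedged illustration of both points (the generics "easy route" and measuring against your own requirements), here is a throwaway micro-benchmark; the Gadget type is invented and the numbers will vary by machine and runtime.

using System;
using System.Diagnostics;

public class Gadget { }

public static class Program
{
    // The "easy route": a generic constraint instead of type-name strings and Activator calls.
    public static T Create<T>() where T : new()
    {
        return new T();
    }

    public static void Main()
    {
        const int iterations = 1000000;

        var direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var g = new Gadget();
        }
        direct.Stop();

        var generic = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var g = Create<Gadget>();
        }
        generic.Stop();

        var reflected = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var g = (Gadget)Activator.CreateInstance(typeof(Gadget));
        }
        reflected.Stop();

        Console.WriteLine("new:        {0} ms", direct.ElapsedMilliseconds);
        Console.WriteLine("generic:    {0} ms", generic.ElapsedMilliseconds);
        Console.WriteLine("reflection: {0} ms", reflected.ElapsedMilliseconds);
    }
}

Reflection will come out noticeably slower per call, but whether the absolute difference matters depends entirely on how often the code runs.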

ASP.NET MVC: Is UpdateModel an "expensive" operation (due to Reflection)?

I wondered whether UpdateModel is considered an "expensive" operation (due to Reflection lookup of the model properties), especially when seen in the context of a larger web application (think StackOverflow)?
I don't want to engage in premature optimization but I consider it a design choice to use UpdateModel which is why I'd like to know early whether it is advisable or not to go with it. The other (tedious) choice is writing my own UpdateModel method for various domain objects with fixed properties.
Thank you!
You are smart to want to not engage in premature optimization. Especially since this "optimization" would favor the processor's time over yours, which is far more expensive.
The primary rule of optimization is to optimize the slow stuff first. So consider how often you actually update a model versus selecting from your database backend. I'm guessing it's 1/10 as often or less. Now consider the cost of selecting from the database backend versus the cost of reflection. The cost of reflection is measured in milliseconds. The cost of selecting from the database backend can be measured in seconds at worst. My experience is that POSTs are rarely very slow, and when they are it's usually the database at fault rather than the reflection. I think you're likely to spend most of your optimization time on GETs.
Compared to network latency, database calls and general IO, the UpdateModel() call is trivial and I wouldn't bother with it.
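For reference, here is a hedged sketch of the two choices side by side in an MVC 2/3-era controller; the Product type, repository, and property names are assumptions, not anything from the question.

using System.Globalization;
using System.Web.Mvc;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public interface IProductRepository
{
    Product GetById(int id);
    void Save(Product product);
}

public class ProductController : Controller
{
    private readonly IProductRepository _repository; // assumed abstraction

    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Edit(int id, FormCollection form)
    {
        Product product = _repository.GetById(id);

        // Option 1: let model binding copy matching form values onto the object via reflection.
        if (TryUpdateModel(product, new[] { "Name", "Price" }))
        {
            _repository.Save(product);
            return RedirectToAction("Index");
        }

        // Option 2 (the "tedious" alternative): assign the fixed properties by hand, e.g.
        // product.Name = form["Name"];
        // product.Price = decimal.Parse(form["Price"], CultureInfo.InvariantCulture);

        return View(product);
    }
}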
I think UpdateModel is a bit of a shortcut that causes a huge amount of coupling between the view and the model.
I choose not to use "built-in" models (like being able to pass LINQ-created objects to the view directly from the database) because I want the option to replace my model with something more sophisticated - or even just another database provider. It is very tempting to use LINQ to SQL (or ADO.NET Entities) for fast prototyping, though.
What I tend to do is create my MVC application, then expose a 'service' layer which is then connected to a 'model' (which is an OO view of my domain). That way I can easily create a web service layer, swap databases, write new workflows etc without concern.
(and make sure you write your tests and use DI - it saves a lot of hassle!)
Rob
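A very small sketch of the layering Rob describes, with invented names; the point is that the controller only ever sees the service interface, so the database or ORM underneath can change without touching the web layer.

using System.Web.Mvc;

// What the web layer sees: plain view-model objects and a service contract.
public class OrderSummary
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderService
{
    OrderSummary GetOrder(int id);
}

public class OrderController : Controller
{
    private readonly IOrderService _orders;

    // Injected by your DI container of choice; the controller never references the ORM.
    public OrderController(IOrderService orders)
    {
        _orders = orders;
    }

    public ActionResult Details(int id)
    {
        return View(_orders.GetOrder(id));
    }
}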

What is the "cost" of .NET reflection? [duplicate]

Possible Duplicate:
How costly is .NET reflection?
I am currently in a programming mentality that reflection is my best friend. I use it a lot for dynamic loading of content that allows "loose implementation" rather than strict interfaces, as well as a lot of custom attributes.
What is the "real" cost to using reflection?
Is it worth the effort to cache reflection for frequently reflected types, as we did in our own pre-LINQ DAL object code that maps all the properties to table definitions?
Would the caching memory footprint outweigh the reflection CPU usage?
Reflection requires a large amount of type metadata to be loaded and then processed. This can result in larger memory overhead and slower execution. According to this article, property modification is about 2.5x-3x slower and method invocation is 3.5x-4x slower.
Here is an excellent MSDN article outlining how to make reflection faster and where the overhead is. I highly recommend reading it if you want to learn more.
There is also an element of complexity that reflection can add to the code, making it substantially more confusing and hence difficult to work with. Some people, like Scott Hanselman, believe that by using reflection you often create more problems than you solve. This is especially the case if your team is mostly junior devs.
You may be better off looking into the DLR (Dynamic Language Runtime) if you need a lot of dynamic behaviour. With the new changes coming in .NET 4.0 you may want to see if you can incorporate some of it into your solution. The added support for dynamic in VB and C# makes using dynamic code very elegant and creating your own dynamic objects fairly straightforward.
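For example, here is a hedged C# 4 sketch (not a recommendation for any particular design): ExpandoObject plus the dynamic keyword covers many "loose implementation" scenarios that would otherwise need hand-rolled reflection, and the DLR caches its call sites under the covers.

using System;
using System.Dynamic;

public static class Program
{
    public static void Main()
    {
        // Members are attached at run time; the DLR binds and caches the member accesses.
        dynamic bag = new ExpandoObject();
        bag.Name = "gadget";
        bag.Price = 9.99m;

        Console.WriteLine("{0} costs {1}", bag.Name, bag.Price);
    }
}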
Good luck.
EDIT: I did some more poking around Scott's site and found this podcast on reflection. I have not listened to it, but it might be worthwhile.
There are lots of things you can do to speed up reflection. For example, if you are doing lots of property-access, then HyperDescriptor might be useful.
If you are doing a lot of method-invoke, then you can cache methods to typed delegates using Delegate.CreateDelegate - this then does the type-checking etc only once (during CreateDelegate).
If you are doing a lot of object construction, then Delegate.CreateDelegate won't help (you can't use it on a constructor) - but (in 3.5) Expression can be used to do this, again compiling to a typed delegate.
So yes: reflection is slow, but you can optimize it without too much pain.
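Here is a hedged sketch of those last two suggestions, using an invented Widget type: the reflection lookup happens once, and the cached delegates are then called like ordinary code.

using System;
using System.Linq.Expressions;
using System.Reflection;

public class Widget
{
    public string Describe(int count)
    {
        return count + " widget(s)";
    }
}

public static class Program
{
    // Method invoke: resolved once into an open instance delegate (the target is passed as the first argument).
    private static readonly MethodInfo DescribeMethod = typeof(Widget).GetMethod("Describe");
    private static readonly Func<Widget, int, string> CachedDescribe =
        (Func<Widget, int, string>)Delegate.CreateDelegate(typeof(Func<Widget, int, string>), DescribeMethod);

    // Object construction: CreateDelegate can't wrap a constructor, but a compiled Expression can.
    private static readonly Func<Widget> CreateWidget =
        Expression.Lambda<Func<Widget>>(Expression.New(typeof(Widget))).Compile();

    public static void Main()
    {
        Widget w = CreateWidget();                 // no Activator.CreateInstance per call
        Console.WriteLine(CachedDescribe(w, 3));   // no MethodInfo.Invoke per call
    }
}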
With great power comes great responsibility.
As you say, reflection has costs associated with it, and depending on how much reflection you do it can slow the application down significantly.
One of the very appropriate places to use it is for IoC (Inversion of Control), since, depending on the size of your application, it would probably bring more benefits than costs.
Thanks for the great links and great comments, especially the part about the junior devs; that hit it right on the money.
For us it is easier for our junior developers to do this:
[TableName("Table")]
public class SomeDal : BaseDal
{
[FieldName("Field")]
public string Field
}
rather than some larger implementation of a DAL. This speeds up their building of the DAL objects, while hiding all the internal workings, which the senior developers deal with.
Too bad LINQ didn't come out earlier; I feel at times we wrote half of it.
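For anyone curious what those internal workings roughly look like, here is a hedged sketch of how a base class could read such attributes with reflection to build a SELECT statement. The attribute classes are defined here only because they are custom to that codebase; none of this is the poster's actual implementation, and a real version would cache the reflected metadata.

using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Class)]
public class TableNameAttribute : Attribute
{
    public string Name { get; private set; }
    public TableNameAttribute(string name) { Name = name; }
}

[AttributeUsage(AttributeTargets.Property)]
public class FieldNameAttribute : Attribute
{
    public string Name { get; private set; }
    public FieldNameAttribute(string name) { Name = name; }
}

public abstract class BaseDal
{
    // Reflects over the derived type to map the attributes onto SQL.
    protected string BuildSelect()
    {
        Type type = GetType();
        var table = (TableNameAttribute)type.GetCustomAttributes(typeof(TableNameAttribute), false).Single();

        var columns = type.GetProperties()
            .Select(p => p.GetCustomAttributes(typeof(FieldNameAttribute), false)
                          .Cast<FieldNameAttribute>()
                          .FirstOrDefault())
            .Where(a => a != null)
            .Select(a => a.Name);

        return "SELECT " + string.Join(", ", columns.ToArray()) + " FROM " + table.Name;
    }
}

With that in place, the SomeDal example above would yield "SELECT Field FROM Table".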
One thing that can sometimes bite you when using reflection is forgetting to update reflection-based calls when refactoring. Tools like ReSharper will prompt you to update comments and strings when you change a method name, so you can catch most of them that way, but when you're calling methods that have been dynamically generated, or the method name itself has been dynamically generated, you might miss something.
The only solution is good documentation and thorough unit testing.
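One cheap way to get part of that safety net, sketched here with NUnit and an invented Widget type standing in for something invoked by name, is a test that fails as soon as the member the reflection code depends on disappears.

using NUnit.Framework;

// Stand-in for a type whose method is invoked by name somewhere via reflection.
public class Widget
{
    public string Describe(int count) { return count + " widget(s)"; }
}

[TestFixture]
public class ReflectionContractTests
{
    [Test]
    public void Widget_still_exposes_the_method_invoked_by_name()
    {
        Assert.IsNotNull(typeof(Widget).GetMethod("Describe"),
            "Renaming Widget.Describe would silently break the reflection-based call.");
    }
}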
