I have noticed that it takes a long time to connect to the database when executing my query. Is it possible to write an ASP.NET application in such a way that it has a database connection that is always open? Or is it better to write a service and have the ASP.NET app communicate with that service?
You can use connection pooling to save the time it takes to initialize a connection. BTW, the ADO.NET provider for SQL Server enables pooling out of the box, so you don't have to implement it yourself.
It does not matter much which means you use to connect to the DB (ADO.NET, DAAB, etc.).
As to your second suggestion, writing a service and having the application pass requests to it: it won't help in this scenario, since you are simply moving the problem to another process; the accumulated time to fulfill a request remains the same, or even grows given the extra network time.
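For illustration (server and database names are made up), pooling is on by default with the SqlClient provider and can be tuned in the connection string; the habit to build is open late, close early, and let the pool handle reuse:

    using System.Data.SqlClient;

    public static class PoolingDemo
    {
        public static void RunQuery()
        {
            // Pooling is enabled by default; the pool-size knobs are optional.
            var cs = "Server=myServer;Database=myDb;Integrated Security=true;" +
                     "Min Pool Size=5;Max Pool Size=100";

            using (var conn = new SqlConnection(cs))
            {
                conn.Open();  // cheap after the first call: leased from the pool
                // ... execute your command here ...
            }                 // Close/Dispose returns the connection to the pool
        }
    }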
I have a unique (or so I think) problem: we have an ASP.NET web app built on MVC principles. The project will be at most single-threaded (our business requires a single point of control). We are using Entity Framework to connect to the database.
Problem:
We want to query our database less frequently than every page load.
I have considered putting our database connection in a singleton, but I am worried about connecting too infrequently -- will a query still work if the connection was opened a significant time ago? How would you recommend connecting to the database?
How would you recommend connecting to the database?
Do NOT use a shared connection. Connections are not thread-safe, and are pooled by .NET, so creating one generally isn't an expensive operation.
The best practice is to create a command and connection for every database request. If you are using Entity Framework, then this will be taken care of for you.
If you want to cache results using the built-in Session or Cache properties, then that's fine, but don't cache disposable resources like connections, EF contexts, etc.
If at some point you find you have a measurable performance problem directly related to creating connections or contexts, then you can try to deal with that, but don't optimize something that might not even be a problem.
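To make the per-request pattern concrete, a minimal sketch (the Orders table and connection string are placeholders); the using blocks dispose both objects, and the pool keeps the repeated Open() cheap:

    using System.Data.SqlClient;

    public static class OrderData
    {
        public static int GetOrderCount(string connectionString)
        {
            // New connection and command per request; Open() normally just
            // leases an existing physical connection from the ADO.NET pool.
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            } // disposal returns the connection to the pool, not a physical close
        }
    }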
If you want to get data without connecting to the database, you need to cache it - in memory, in a file, or in whatever means of storage you want - but you need to put it in front of the DB somehow. There is no other way known to me.
If by connecting you mean building a completely new SqlConnection to your DB, then you can either rely on connection pooling (the ADO.NET pool underneath EF keeps physical connections alive for some minutes even after you finish your business) or you can create connections and keep them alive inside your application by not closing them immediately (i.e., keeping track of them inside some structure).
But you should definitely consider whether this is REALLY what you want. The way EF does it internally is most of the time exactly what you want.
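As a rough sketch of the in-memory caching option (ShopContext and Product are hypothetical EF types, and the five-minute expiry is arbitrary): cache the query results, never the context or the connection:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Caching;

    public static class ProductCache
    {
        // ShopContext (an EF DbContext) and Product are hypothetical app types.
        public static List<Product> GetProducts()
        {
            var cached = MemoryCache.Default.Get("products") as List<Product>;
            if (cached != null)
                return cached;

            // Fresh, short-lived context per cache miss; disposal returns the
            // underlying connection to the ADO.NET pool.
            using (var db = new ShopContext())
            {
                var products = db.Products.ToList();
                MemoryCache.Default.Set("products", products,
                    DateTimeOffset.Now.AddMinutes(5));
                return products;
            }
        }
    }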
Some further reading:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I have a C# web app. I have multiple databases holding the same data, so I can use a round-robin method to distribute the database calls.
I plan to read in each connection string, and iterate through each DB and return the data for the first call that passes.
I would like to record the last database that was used, so I can try the next database in the list for the next call that comes in.
A database seems overkill for this, so could I use a static List to track this and lock the read and update of the list?
In terms of doing a round robin approach, you can definitely use a List with a Lock, but I make some general comments below which might be helpful.
You are trying to implement a network load balancer here, and you will face a couple of problems. First, IIS will happily spin up multiple threads of calls to your website if it receives multiple requests before the first one has completed. Second, if your website runs in a web garden (multiple IIS worker processes on the same computer) or as a web farm (multiple machines), or is on Azure or some other cloud platform, then those multiple instances of your web call might not even be on the same machine (or VM). So be aware that it will be almost impossible to generate a true round-robin series of database hits on a properly scalable website.
I'm not sure creating a new synchronisation point between all your web threads is a good idea for scalability. Round robin is also not the best use of resources: if you want your website to run as fast as possible using as few resources as possible (generally why an NLB system is put in place), then use a pool-based approach, leasing an open database connection rather than iterating around a set of open database connections. The calling code gets handed the next connection that has not yet been released back into the pool.
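That said, if you do stick with round robin within a single process, a static list with an Interlocked counter (a lighter alternative to a lock, named here as a deliberate swap) is enough; a minimal sketch, with hypothetical connection strings:

    using System.Threading;

    public static class DbRoundRobin
    {
        // Hypothetical connection strings, e.g. loaded from configuration.
        private static readonly string[] ConnectionStrings =
        {
            "Server=db1;Database=App;Integrated Security=true",
            "Server=db2;Database=App;Integrated Security=true",
        };

        private static int _counter = -1;

        // Thread-safe within a single process; the counter resets per worker
        // process, so this is only "best effort" round robin.
        public static string Next()
        {
            int i = Interlocked.Increment(ref _counter);
            return ConnectionStrings[(i & int.MaxValue) % ConnectionStrings.Length];
        }
    }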
Actually I am confused about one thing.
I have a .NET application using SQL Server 2008 for the database.
In my method A, I am filling a DataReader.
During the while loop I call another method, B, passing one property of the result.
At that point I also open a DB connection and close it.
The same thing happens when calling another method, C, from method B:
there, too, I open a DB connection and close it.
Filling the final list in method A is taking a long time.
So my question is: is opening and closing a DB connection a time-consuming process?
Is opening and closing a DB connection a time-consuming process?
Yes. If pooling is enabled and the connection is returned from the pool, then opening and closing costs at least one round-trip to reset the connection. If the connection is not pooled, opening and closing is a complete SSPI handshake. If SQL authentication is used and encryption is enabled, there is an additional full SSL handshake to establish a secure channel before the SQL handshake. Even under ideal conditions it takes tens of milliseconds, and it can go up to whole seconds with some minimal network latency added.
A well-written ASP.NET application needs one (pooled) connection per request, never more.
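If you want to see the cost on your own network, a small sketch (server name hypothetical) that times a cold open against a pooled one:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            const string cs = "Server=.;Database=master;Integrated Security=true";

            // First open pays the full handshake cost.
            var sw = Stopwatch.StartNew();
            using (var c = new SqlConnection(cs)) { c.Open(); }
            Console.WriteLine("cold open:   {0} ms", sw.ElapsedMilliseconds);

            // Subsequent opens lease from the pool: just a reset round-trip.
            sw.Restart();
            using (var c = new SqlConnection(cs)) { c.Open(); }
            Console.WriteLine("pooled open: {0} ms", sw.ElapsedMilliseconds);
        }
    }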
It is generally very bad practice to call out to a third-party system (database, web service or similar) inside any kind of loop. Communication like this always takes a relatively long time.
As the number of elements in the loop increases, the application becomes progressively slower, since every iteration pays the full round-trip cost.
A much better approach is to perform one operation for all the elements up front, then pass the required data into your loop logic.
As with all things there are exceptions: in loops where you have millions of entities to process and the connection is a small overhead, it may be more efficient to process each entity atomically.
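For illustration only - the Orders table, Total column, and method names below are invented - here is what that refactoring can look like in ADO.NET:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class LoopRefactor
    {
        // Slow: one pooled open + network round-trip per element.
        public static List<decimal> TotalsPerElement(string cs, IEnumerable<int> orderIds)
        {
            var totals = new List<decimal>();
            foreach (var id in orderIds)
            {
                using (var conn = new SqlConnection(cs))
                using (var cmd = new SqlCommand("SELECT Total FROM Orders WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    conn.Open();
                    totals.Add((decimal)cmd.ExecuteScalar());
                }
            }
            return totals;
        }

        // Better: a single round-trip, then loop over the results in memory.
        public static List<decimal> TotalsInOneQuery(string cs)
        {
            var totals = new List<decimal>();
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("SELECT Id, Total FROM Orders", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        totals.Add(reader.GetDecimal(1));
                }
            }
            return totals;
        }
    }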
I have an application that, once started, gets some initial data from my database; after that, some functions may update or insert data into it.
Since my database is not on the same computer as the one running the application, and I would like to be able to freely move the application server around, I am looking for a more flexible way to insert/update/query data as needed.
I was thinking of using a web API on a separate thread in my application, with some kind of list, where this thread tries to push the updates every X minutes, and once a given entry has been written it is removed from the list.
This way, instead of being held up by database queries and the like, the application would run freely, queuing whatever has to be updated or inserted.
The main point is that I could run the functions without worrying about connectivity issues to the database, or related issues, since all the changes are queued to be applied to it.
Is this approach OK? Bad? Are there better recommendations for this scenario?
On "can access DB through some web server instead of talking directly to DB server": yes this is very common and recommended approach. It is much easier to limit set of operations exposed through custom API (web services, REST services, ...) than restrict direct communication with DB.
On "sync on separate thread..." - you need to figure out what are requirements of the synchronization. Delayed sync may be ok if you don't need to know latest data and not care if updates from client are commited to storage immediately.
I have a requirement to read data from a table (SQL Server 2005) and send that data to another application every 5 seconds. I am looking for the best approach to do this.
Right now I am planning to write a console application (.NET and C#) which will read the data from SQL Server 2005 (a QUEUE table, filled by different applications) and send it to the other application over TCP/IP (a central server), running the console application as a scheduled task every 5 seconds. I am assuming the scheduler will discard a new run event if the task is already running (to avoid concurrent executions).
Has anybody come across a similar situation? Please share your experience and advise me on the best approach.
Thanks in advance for spending your valuable time on my request.
-Por-hills-
We have done similar work. If you are going to query a SQL database every 5 seconds, be sure to use a stored procedure that is optimized to be very fast. It should not update data unless absolutely necessary. This approach is typically called 'polling', and I've found it acceptable if your SQL Server is not otherwise bogged down with too many other calls.
In the approaches we've used, a Windows Service that does the polling works well.
To communicate results to another app, it all depends on what your other app is doing, what type of interface you can make into it, and how quickly you need the results. The WCF class libraries from Microsoft provide many workable approaches for real-time communication. My preference is to write to the application's database and then have the application read the data (if that works for the app). If you need something closer to real time, WCF is the way to go: I'd suggest a stateless protocol like HTTP (standard HTTP posts) if a response time under 5 seconds is acceptable, or TCP/IP if sub-second response time is required.
Since I assume your central storage is also SQL Server 2005, have you considered using what SQL Server 2005 offers out of the box to achieve your requirements? Rather than poll every 5 seconds, marshal and unmarshal TCP/IP, implement authentication and authorization for the TCP/IP pipe, scale transmission with boxcarring, manage message acknowledgments and retries, deal with central-site availability, fragment large messages, implement fairness in transmission and so on and so forth, why not simply use Service Broker? It does all you need and more, out of the box, already tested and already tuned for performance and scalability.
Getting reliable messaging right is not trivial, and you should focus your efforts on meeting your business specifics, not reinventing the wheel.
I would recommend writing a Windows Service (since you are using C#) with a timer that fires every 5 seconds. That way you won't be starting and stopping an application all the time, it can run even when no one is logged into the machine, and it will start automatically when the machine is restarted.
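A minimal sketch of such a service, assuming a plain System.Timers.Timer and hypothetical polling logic:

    using System.ServiceProcess;
    using System.Timers;

    public class PollingService : ServiceBase
    {
        private Timer _timer;

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(5000);          // fire every 5 seconds
            _timer.Elapsed += (s, e) => Poll();
            _timer.AutoReset = true;
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
            _timer.Dispose();
        }

        private void Poll()
        {
            // Hypothetical: read the QUEUE table and forward rows to the
            // central server here. Guard against re-entrancy if a poll can
            // take longer than the 5-second interval.
        }

        public static void Main()
        {
            Run(new PollingService());
        }
    }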
For one of my projects, I needed to do something periodically. I opted for a service and set up a timer that takes care of reading the data. You might consider that solution. It has worked well for me.
I suggest creating a Windows Service, not an application, and handling the timing yourself: create a timer and execute one step on each timer event. For the communication you have many choices; I would consider standard technologies like a web service or Windows Communication Foundation.
Besides this custom solution, I would evaluate whether the task can be solved using Microsoft SQL Server Integration Services.
Finally, another question comes to mind: why do you need this application at all? Why don't the application(s) consuming the data query the database themselves? Is the expensive polling required? Could the data producers signal the availability of new data directly to the data consumers?
I am not sure about the details of your project, specifically regarding security, but maybe it would be better to create an SSIS package and schedule it as a job?