Negative impact of setting SQL query command timeout to max [closed] - C#

I came across a timeout exception during the execution of my SQL query, so I increased the timeout in my C# code and now it's working fine.
DbCommand.CommandTimeout = 3600;
This must have occurred as a result of the growing amount of data in the database.
I do not want this exception to occur in the future for any other scenarios.
So is it a good practice to add the command timeout line in all my methods?
It would be great to know the positive and negative side of this operation.

Having a reasonable expectation of how fast you expect something to run is always a good idea, but frankly it is very rarely necessary to specify an explicit timeout - usually this is only done when you know something will take a long time and you can't currently fix it at the DB for whatever reason. It is the exception, not the norm. If you have utility code that wraps your data access, you could perhaps provide a centralized default timeout there.
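For what it's worth, a minimal sketch of what such a centralized default might look like; the helper name and the 120-second value are assumptions for illustration, not anything taken from the question:
using System.Data.SqlClient;

public static class Db
{
    // Every command created through this helper gets the same default timeout,
    // so individual data-access methods don't each need to set CommandTimeout.
    public static SqlCommand CreateCommand(SqlConnection connection, string sql)
    {
        var command = new SqlCommand(sql, connection);
        command.CommandTimeout = 120; // seconds; illustrative default, override per call only when justified
        return command;
    }
}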
The only positive aspect of setting a long timeout is as a band-aid: to make it work. However, this is an automatic code smell - you should really look at why it is taking so long and re-architect it a bit. There are significant real issues that this can raise, including long-running blocked operations (perhaps even an undetectable deadlock) that will never finish; the other, more immediate, negative aspect is that it distracts you from fixing the real problem.

You can also set a timeout on the SQL Server side. Simply add the following to your stored procedure or query:
EXEC sp_configure 'remote query timeout', 1800
RECONFIGURE
EXEC sp_configure

C#: method for storing large and complex data? [closed]

I am trying to store a software "Calendar" (complex logs that are saved and browsed by date).
I tried using both .ini and XML, but when the application read the entire file to find the info for one specific day out of roughly 100 days, it took almost 9 seconds to pull 5 variables out of 500. The actual file might eventually hold more than 40 variables per day.
Also, I would rather not create a file for each day; that seems a little unprofessional and messy.
I am asking whether there is an alternative that keeps things fast and neat. The data includes different types of variables in different amounts. I know I am kind of overdoing it with the logging, but the program needs the logs to do its work.
If the data must be stored, it has to be either a file or a database (local or remote). I'd go for SQLite: it ends up as a single file, but you can query the data with SELECT, JOIN, etc.
EDIT:
You can use SQLite3 from C# if you include this package:
https://www.nuget.org/packages/System.Data.SQLite/
You'll need to learn some SQL, but after that you'll just use something like:
select Message from Logs where Date > '2015-11-01' and Date < '2015-11-25';
which is easier, faster and clearer than messing with XML, and it will not load the whole file.
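For illustration, roughly how that query might be issued from C# with the package above; the database file name and the Logs(Date, Message) schema are assumptions taken from the example query:
using System;
using System.Data.SQLite; // from the System.Data.SQLite NuGet package

class LogReader
{
    static void Main()
    {
        // "logs.db" and the Logs table are assumptions for illustration.
        using (var connection = new SQLiteConnection("Data Source=logs.db"))
        {
            connection.Open();
            using (var command = new SQLiteCommand(
                "select Message from Logs where Date > @from and Date < @to", connection))
            {
                command.Parameters.AddWithValue("@from", "2015-11-01");
                command.Parameters.AddWithValue("@to", "2015-11-25");
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}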
As mentioned above, SQLite is a great option, since you (and, frankly, not a lot of people out here) are unlikely to write a database management system that is as efficient as the ones that already exist.
https://www.sqlite.org
The whole point of using an RDBMS is that it's far more efficient than dealing with files yourself.
SQLite is lightweight and easier to deploy. But remember that:
SQLite only supports a single writer at a time (meaning the execution of an individual transaction). SQLite locks the entire database when it needs a lock (either read or write) and only one writer can hold a write lock at a time. Due to its speed this actually isn't a problem for low to moderate size applications, but if you have a higher volume of writes (hundreds per second) then it could become a bottleneck.
Reference this question
If this is an enterprise-level application requirement, I would go for an Azure Table storage based solution, which is ideal for this sort of scenario.

Performance evaluation of Web API calls with Database LINQ queries [closed]

I have a UI which calls Web APIs (Web API 2.0). The Web APIs are basically LINQ queries (against an MS SQL database) plus some processing logic for the data. I want to do a performance evaluation of the complete flow (click in the UI, call to the API, back to the UI to display data) against a large DB with 30K - 60K records in it.
How can it be done? Let me know the methods/tools used for this.
Currently I am tracking the time taken in the Chrome debug window, which shows the total time for each network call.
Wow. This is a subject in its own right but here's an approach:
The bits are independent so you break it down. You measure your LINQ queries without any of the logic or web api stuff getting in the way. If LINQ is against stored procedures then measure those first. Then you measure the cost of the logic, then you measure the cost of sending X rows of data using WebAPI. You should avoid including the cost of actually retrieving the rows from the database so you're checking just the connectivity. I'd also consider writing a browserless test client (i.e. GETS/POSTS or whatever) to eliminate the browser as a variable.
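As a rough sketch of that first step, timing just the LINQ query in isolation with a Stopwatch; MyDbContext and Orders are hypothetical stand-ins for the real EF context and table:
using System;
using System.Diagnostics;
using System.Linq;

class QueryTiming
{
    static void Main()
    {
        using (var db = new MyDbContext()) // hypothetical EF context
        {
            var sw = Stopwatch.StartNew();

            // ToList() forces execution, so the database round trip is included in the measurement.
            var rows = db.Orders.Where(o => o.Amount > 100).ToList();

            sw.Stop();
            Console.WriteLine("Query returned {0} rows in {1} ms", rows.Count, sw.ElapsedMilliseconds);
        }
    }
}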
Now you've got a fairly good picture of where the time gets spent. You know if you've got DB issues, query issues, network issues or application server issues.
Assuming it all goes well, now add a bunch of instances to your test harness so you're testing concurrent access, load testing and the like. Often if you get something wrong you can't surface that with a single user so this is important.
Break it down into chunks and have a data set that you can consistently bring back to a known state.
As for tools, it really depends on what you use. VS comes with a bunch of useful things but there are tons of third party ones too. If you have a dedicated test team this might be part of their setup. SQL Server has a huge chunk of monitoring capability. Ask your DBAs. If you've got to find your own way, just keep in mind that you want to be able to do this by pressing a button, not by setting up a complex environment.
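And the browserless test client mentioned above can start out as small as this; the URL is a placeholder for one of the Web API endpoints under test:
using System;
using System.Diagnostics;
using System.Net.Http;

class ApiTimingClient
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            var sw = Stopwatch.StartNew();

            // Blocking on .Result is fine for a simple single-threaded timing harness.
            var response = client.GetAsync("http://localhost:5000/api/orders").Result;
            var body = response.Content.ReadAsStringAsync().Result;

            sw.Stop();
            Console.WriteLine("{0}: {1} bytes in {2} ms",
                response.StatusCode, body.Length, sw.ElapsedMilliseconds);
        }
    }
}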

Does frequent INSERT/UPDATE query compromise the database safety? [closed]

Hi, I am relatively new to using databases in a web application and I am trying to develop a dynamic application for my users.
So, I was wondering how safe it is to have your application frequently (say, every 2 seconds) execute an INSERT/UPDATE query from the same user.
I'm aware that INSERT/UPDATE queries are quite slow to execute (I've read it's about 12 INSERTs per second) but that's not my question. My question concerns the safety of my database.
The frequency of executing INSERTs or UPDATEs against your database is in no way related to the safety of your database. But it might impact its performance, that's possible.
I think the concern of the OP is more about robustness, i.e. the database's ability to handle the load without becoming corrupt, rather than issues of SQL injection. (Valuable comments to take note of, though.) The volumes suggested per user should be of no concern.
Database integrity for any DB should be checked with regular maintenance. Make sure you are doing the following and you should have no problems with performance and reliability.
Back up your DB. Full backup and transaction log backups.
Rebuild and/or reorganize indexes.
Delete old data backups and check free space often.
Check the maintenance logs for issues on your DB.
Then monitor performance for sufficient memory, disk I/O, and CPU.
Without a WITH (NOLOCK) hint after the table name, the table being inserted into or updated is always locked.
If so, how can it perform well?

Intelligent compilers can't prevent infinite loops and SQL Cartesian joins [closed]

I am frustrated that my programs crash due to the following two problems:
Infinite loops (e.g. in C# or JavaScript)
SQL joins where I forgot to add a join clause
These seem to be preventable problems if the compilers were competent enough. How can these problems be prevented programmatically?
Modern compilers can and do unroll loops for optimization reasons, but without knowing some of the data ahead of time they can't even make a heuristic for whether your loops will terminate (see: dataflow programming). In fact, deciding whether your program will terminate is called the Halting Problem, and it is undecidable in general.
In other cases, you want infinite loops. For example a graphics engine usually does something like this:
while (true)
    Render(); // draw the next frame, forever
As for your SQL joins... I guess it should be pretty obvious when you miss one. In some cases, an INNER JOIN is implied when you don't give one, so in that sense your compiler is fixing this exact issue.

Multithreading advice on approach needed [closed]

I have been told to make a process that inserts data for clients using multithreading.
I need to update a client database in a short period of time. There is an application that does the job, but it's single-threaded. I need to make it multithreaded.
The idea is to insert the data in batches using the existing application, e.g.:
Process 50,000 records
Assign 5,000 records to each thread
The idea is to fire 10-20 threads, and even multiple instances of the same application, to do the job.
Any ideas, suggestions or examples of how to approach this?
It's .NET 2.0, unfortunately.
Are there any good examples of how to do it that you have come across, e.g. ThreadPool etc.?
I am reading up on multithreading in the meantime.
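For reference, a minimal sketch of the batching idea described in the question using ThreadPool.QueueUserWorkItem, which is available in .NET 2.0; LoadRecords and InsertBatch are hypothetical stand-ins for the existing load and insert logic:
using System;
using System.Collections.Generic;
using System.Threading;

class BatchInserter
{
    static void Main()
    {
        List<string> records = LoadRecords(); // hypothetical: the 50,000 records to insert
        const int batchSize = 5000;
        if (records.Count == 0) return;

        int remaining = (records.Count + batchSize - 1) / batchSize; // number of batches still running
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            for (int i = 0; i < records.Count; i += batchSize)
            {
                // Each iteration captures its own batch, so the queued work items don't interfere.
                List<string> batch = records.GetRange(i, Math.Min(batchSize, records.Count - i));
                ThreadPool.QueueUserWorkItem(delegate
                {
                    InsertBatch(batch); // hypothetical: the existing insert logic, run per batch
                    if (Interlocked.Decrement(ref remaining) == 0)
                        done.Set(); // last batch finished
                });
            }

            done.WaitOne(); // block until every batch has been inserted
        }
    }

    static List<string> LoadRecords() { return new List<string>(); }
    static void InsertBatch(List<string> batch) { /* existing insert logic goes here */ }
}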
I'll bet dollars to donuts the problem is that the existing code just uses an absurdly inefficient algorithm. Making it multi-threaded won't help unless you fix the algorithm too. And if you fix the algorithm, it likely will not need to be multi-threaded. This doesn't sound like the type of problem that typically benefits from multi-threading itself.
The only possible scenario I could see where this matters is if latency to the database is an issue. But if it's on the same LAN or in the same datacenter, that won't be an issue.
