When configuring UseScheduledRedelivery for a MassTransit consumer, what is the best practice for which exceptions should be handled?
Is handling Exception overkill? And is there a list of exceptions that can probably be recovered from?
Redelivery is second-level retry. It means that it handles exceptions that are not recovered by first-level retry (retry policies).
Basically, you probably want to retry everything except exceptions that are caused by your message data. However, even a NullReferenceException can be worth retrying. For example, you query the database for a record and get null back. That can be because the record isn't there yet but will appear later, since there is a message in the queue to create it. So race conditions can lead to such exceptions.
Second-level retries, however, are different. You want to use them to overcome, for example, issues with resource starvation (a busy database or similar). These exceptions are very specific, like a network timeout or a database timeout. But there is no "list"; you need to look at your system design to decide where you apply first-level retries, where you use second-level retries, and which exceptions are handled by each.
We use retries for all exceptions and redelivery for a very small number of exceptions and not in all services. Usually we redeliver after getting database timeout.
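To make the split concrete, here is a rough sketch of how the two levels might be configured on a receive endpoint. The queue name, consumer type, and the choice of SqlException/TimeoutException as "redeliverable" exceptions are illustrative assumptions, and UseScheduledRedelivery also needs a message scheduler configured on the bus:

// First-level retry: short, in-memory, for almost everything except bad message data.
// Second-level redelivery: scheduled, only for the few resource-starvation exceptions.
cfg.ReceiveEndpoint("order-queue", e =>
{
    e.UseScheduledRedelivery(r =>
    {
        r.Intervals(TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(15), TimeSpan.FromMinutes(30));
        r.Handle<SqlException>();        // e.g. database timeout under load (assumed scenario)
        r.Handle<TimeoutException>();
    });

    e.UseMessageRetry(r =>
    {
        r.Interval(5, TimeSpan.FromSeconds(1));
        r.Ignore<ArgumentException>();   // caused by the message data; retrying won't help
    });

    e.Consumer<OrderConsumer>();         // OrderConsumer is a hypothetical consumer
});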
I need a counter on the server that holds the number of HTTP requests it has received. Since the server can handle multiple requests concurrently (let's say the same controller is being called by every user) at any given point in time, where can I place the counter so that it is shared between all requests?
Yes, the controller is instantiated and disposed with each request. Yes, each request gets its own thread, though that thread may be exchanged (in the case of async work) or may serve multiple requests over its lifetime.
Parallelism is a complex topic and depends on numerous different factors at any given time. Simplistically, though, yes, threads will run in parallel. However, they do not share resources between each other (for the most part). Ultimately, there is some resource sharing via the parent process, but for practical purposes you should treat them as independent of one another.
Based on all that and your final question, if you have designs on trying to implement a counter in your code, don't. It won't work and never will. Even if you can somewhat coordinate some sort of process-bound thread-safe counter, it won't work with workers and it will be killed every time the App Pool recycles, crashes or otherwise restarts for any reason.
A counter should be implemented in a database or other persistent data store. Full-stop. Even then, you'll need to be extremely careful with concurrency, and unless you devote a ton of time to excluding bots, repeat page loads by the same users, etc., your count will be off no matter what.
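If you do go the persistent route, the key is to let the database do the increment atomically rather than read-modify-write from C#. A minimal sketch, where the Counters table, connection string, and counter name are assumptions rather than anything ASP.NET provides:

using System.Data.SqlClient;

// Assumed schema: CREATE TABLE Counters (Name NVARCHAR(50) PRIMARY KEY, Value BIGINT NOT NULL);
public static class RequestCounter
{
    private const string ConnectionString = "...";   // your connection string here

    public static long Increment()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Counters SET Value = Value + 1 OUTPUT INSERTED.Value WHERE Name = 'HttpRequests';",
            conn))
        {
            conn.Open();
            // A single UPDATE statement is atomic, so concurrent requests and
            // multiple worker processes cannot lose increments.
            return (long)cmd.ExecuteScalar();
        }
    }
}

Call it from one central place, such as a global action filter or Application_BeginRequest, so it runs exactly once per request.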
I can't figure out what the difference is between HttpException and other exceptions in ASP.NET MVC.
According to the ASP.NET documentation, HttpException "describes an exception that occurred during the processing of HTTP requests."
It says "during the processing of HTTP requests", but don't other exceptions also occur during HTTP requests?
The processing of an HTTP request can raise other exceptions; it depends on what methods you are calling.
For example, BinaryRead() raises an ArgumentOutOfRangeException.
You need to look at the exceptions raised by each method and decide whether you can sensibly trap them or whether you have to let them propagate.
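A minimal sketch of why the distinction matters in practice: HttpException carries an HTTP status code that ordinary exceptions do not, so a global handler (here Application_Error in Global.asax, as an assumed setup) can treat the two differently:

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    HttpException httpEx = ex as HttpException;
    if (httpEx != null)
    {
        // Raised by the HTTP processing pipeline; carries a status code.
        int statusCode = httpEx.GetHttpCode();   // e.g. 404, 500
        // choose an error page based on statusCode
    }
    else
    {
        // Any other exception thrown while handling the request
        // (ArgumentOutOfRangeException, NullReferenceException, ...).
        // log it and show a generic error page
    }
}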
I have quite a large class that uses many other classes. It uses external resources (database, files, etc.), and a few exceptions might happen.
As I have learned, it is sometimes preferable to use the UnhandledException event instead of putting try-catch blocks everywhere.
However, my class is just one of many other classes, and the aforementioned solution works at the application level.
Can I somehow narrow it down so that the event fires only for exceptions thrown in this class, while other unhandled exceptions are not caught?
Using AOP seems like a good way but I'm not sure.
I'm not sure if I entirely understand your question, but allow me to try to answer it anyway:
You're asking whether it's possible to load your class in such a way that it knows which of the exceptions it might generate are being handled by the class that loads the DLL? That seems impossible, simply because of the calling hierarchy. What I would suggest instead is that you document which exceptions your classes may throw, using this mechanism:
/// <exception cref="ArgumentOutOfRangeException">Thrown if argument is greater than the size of the array.</exception>
That way your calling classes can be better prepared to handle the exceptions, and know more or less which possible exceptions aren't being handled.
Another approach is to encapsulate your code in try-catch blocks and use the fact that more specific exception types are matched first. You can then handle the scenarios you can resolve programmatically, and catch the generic Exception last to ensure your program remains stable even if underlying classes fail catastrophically.
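A short sketch of that ordering, where ProcessOrder and Log are hypothetical stand-ins for your class's work and your logging:

try
{
    ProcessOrder(order);                 // hypothetical call into the large class
}
catch (FileNotFoundException ex)
{
    // Specific and recoverable: fall back to a default file, re-prompt, etc.
    Log(ex);
}
catch (IOException ex)
{
    // Broader I/O failure, still something we can report meaningfully.
    Log(ex);
}
catch (Exception ex)
{
    // Last resort: log and keep the application stable even when an
    // underlying class fails in a way we did not anticipate.
    Log(ex);
}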
Unfortunately, I don't see how you're going to tell the called DLL which of the exceptions it might throw are being handled.
I came across a timeout exception during the execution of my SQL query, so I increased the timeout in my C# code and now it's working fine.
DbCommand.CommandTimeout = 3600;
This must have occurred as a result of the growing amount of data in the database.
I do not want this exception to occur in the future for any other scenarios.
So is it a good practice to add the command timeout line in all my methods?
It would be great to know the positive and negative sides of this approach.
Having a reasonable expectation of how fast you expect something to run is always a good idea, but frankly it is very rarely necessary to specify an explicit timeout - usually this is only done when you know something will take a long time and you can't currently fix it at the db for whatever reason. It is the exception, not the norm. If you have utility code that wraps your data access, you could perhaps provide a centralized default timeout.
The only positive aspect of setting a long timeout is as a band-aid: to make it work. However, this is an automatic code smell - you should really be looking at why it is taking so long and re-architect it a bit. There are significant real issues that this can raise, including long-running blocked operations (perhaps even an undetectable deadlock) that will never finish; the other, more immediate negative aspect is that it distracts you from fixing the real problem.
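If you do have such a wrapper, the "centralized default" can be as simple as one factory method that owns the timeout, so individual call sites never hard-code it. A sketch, with made-up class, method, and procedure names:

public static class Db
{
    // A deliberately modest default; raise it only for the few known-slow operations.
    public const int DefaultCommandTimeoutSeconds = 30;

    public static SqlCommand CreateCommand(SqlConnection connection, string sql,
                                           int? timeoutSeconds = null)
    {
        var cmd = new SqlCommand(sql, connection);
        cmd.CommandTimeout = timeoutSeconds ?? DefaultCommandTimeoutSeconds;
        return cmd;
    }
}

// Normal call sites use the default; the one known-slow operation opts in explicitly:
// var cmd = Db.CreateCommand(conn, "EXEC dbo.NightlyRebuild", timeoutSeconds: 3600);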
You can also set the timeout on the SQL Server side.
Simply add the following code to your stored procedure or query:
EXEC sp_configure 'remote query timeout', 1800
RECONFIGURE
EXEC sp_configure
I have been told to build a process that inserts data for clients using multithreading.
I need to update a client database in a short period of time. There is an application that does the job, but it's single-threaded; I need to make it multithreaded.
The idea is to insert data in batches using the existing application,
e.g.
process 50,000 records,
assign 5,000 records to each thread.
The plan is to fire up 10-20 threads, or even multiple instances of the same application, to do the job.
Any ideas, suggestions, or examples of how to approach this?
It's .NET 2.0, unfortunately.
Are there any good examples you have come across of how to do this, e.g. ThreadPool, etc.?
I'm reading up on multithreading in the meantime.
I'll bet dollars to donuts the problem is that the existing code just uses an absurdly inefficient algorithm. Making it multi-threaded won't help unless you fix the algorithm too. And if you fix the algorithm, it likely will not need to be multi-threaded. This doesn't sound like the type of problem that typically benefits from multi-threading itself.
The only possible scenario I could see where this matters is if latency to the database is an issue. But if it's on the same LAN or in the same datacenter, that won't be an issue.
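For what it's worth, if the existing tool inserts row by row, the algorithmic fix hinted at above is usually batching rather than threading. SqlBulkCopy has been in System.Data.SqlClient since .NET 2.0; a sketch with made-up table and column names:

// 'table' is a DataTable already filled with the 50,000 records;
// its columns must line up with the assumed dbo.ClientRecords table.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.ClientRecords";
        bulk.BatchSize = 5000;              // send rows to the server in batches of 5,000
        bulk.WriteToServer(table);
    }
}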