Calling SqlDependency.Start twice in a row, the second call fails? - c#

The purpose of calling SqlDependency.Start multiple times is to make sure it is running before some other action, such as creating a new instance of SqlCacheDependency based on a command. According to Microsoft's documentation for SqlDependency.Start at https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.start(v=vs.110).aspx (the Remarks section), calling SqlDependency.Start multiple times appears to be perfectly fine:
Multiple calls with identical parameters (the same connection string and Windows credentials in the calling thread) are valid.
In practice, however, the second call can fail (for me it has never succeeded), and that makes every subsequent attempt to call SqlDependency.Start fail as well (silently, by returning false; no exception is thrown).
What I did should satisfy the first restriction (mentioned in the Remarks section of the link above), namely that all the calls to SqlDependency.Start have the same parameters (in fact there is just one parameter, the connection string). It looks like this:
// at initialization (such as in Application_Start() in ASP.NET MVC)
SqlDependency.Start(myConnectionString); // this usually returns OK

// later, just before creating an instance of SqlCacheDependency,
// I call the Start method again to make sure everything is OK
var ok = SqlDependency.Start(myConnectionString); // almost always false
if (ok)
{
    // almost never reach here ...
}
So it's really hard to reconcile this with what Microsoft states (in the first restriction in the Remarks section): the two calls are exactly the same. Yet once the second call has failed, any identical call made after that fails as well (meaning there is no chance of starting it successfully once I have attempted to call it more than once).
When I look at the log in SQL Server, I can see a lot of messages saying something like Cannot find the remote service ... because it does not exist
I don't need a solution or a workaround for this problem; I just need some explanation of why it does not work as Microsoft states, or of what I have misunderstood in Microsoft's statement.

As Jeroen Mostert mentioned in the comments, and as the docs for SqlDependency.Start() state:
Returns
Boolean
true if the listener initialized successfully; false if a compatible listener already exists.
As the remarks in the docs describe, SqlDependency.Start() and SqlDependency.Stop() keep track of the number of calls to each one. A background connection is kept running (or is being set up) as long as the number of calls to SqlDependency.Start() exceeds the number of calls to SqlDependency.Stop() (though I think it loses track and resets its count if you call SqlDependency.Stop() more times than you call SqlDependency.Start()).
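As a hedged illustration of that reference counting (myConnectionString is the same connection string as in the question):
// Requires System.Data.SqlClient. Both calls use the same connection string,
// so the second one just returns false ("a compatible listener already exists")
// while the existing listener keeps running.
bool first = SqlDependency.Start(myConnectionString);  // typically true
bool second = SqlDependency.Start(myConnectionString); // typically false, not an error

// ... application lifetime ...

// Each Start should eventually be matched by a Stop; the listener is only
// torn down once Stop has been called as many times as Start.
SqlDependency.Stop(myConnectionString);
SqlDependency.Stop(myConnectionString);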
Start() Errors
It may help to clarify that it is possible for SqlDependency.Start() to fail. One way to get it to fail is to call it multiple times from one AppDomain with different connection strings. Within a particular AppDomain, SqlDependency.Start() will throw an exception if you pass in a different connection string, unless at least one of the following properties in the connection string differs from a previously passed connection string:
Database name
Username
I.e., you are expected to normalize or cache the connection string you first pass to SqlDependency.Start() so that you never pass it a string that has, for example, a different value for Max Pool Size. I think it does this to try to avoid creating a lot of broker queues and connections for a single process. Additionally, when it tries to match up a command to a broker queue when you actually set up an SqlDependency later, it probably uses these distinguishing connection string properties to decide which queue to use.
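A minimal sketch of that "remember the exact string you started with" idea (the DependencyBootstrap class and its members are hypothetical, not a framework API, and it is not made thread-safe here):
using System.Data.SqlClient;

public static class DependencyBootstrap
{
    // The exact string first passed to SqlDependency.Start is remembered so
    // that every later call (Start, Stop, building commands) reuses it verbatim.
    private static string _dependencyConnectionString;

    public static void Start(string connectionString)
    {
        if (_dependencyConnectionString == null)
        {
            _dependencyConnectionString = connectionString;
            SqlDependency.Start(_dependencyConnectionString);
        }
        // Deliberately never call Start with a string that might differ only in
        // pooling options etc.; callers should read DependencyConnectionString instead.
    }

    public static string DependencyConnectionString
    {
        get { return _dependencyConnectionString; }
    }
}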
ASP.NET Life Cycle
From the ASP.NET Application Life Cycle documentation under “Life Cycle Events and the Global.asax file”, note the following:
The Application_Start method, while an instance method, is called only when the application is starting, which typically happens during the first HTTP request for your application. The documentation specifically states:
You should set only static data during application start. Do not set any instance data because it will be available only to the first instance of the HttpApplication class that is created.
The method you should use to clean up things which you initialized in Application_Start is Application_End. When a webapp is gracefully stopped, an instance of your application class will be created and Application_End called on it. Note that this might be a different instance of the application class than Application_Start was called on.
Because of ASP.NET’s architecture, a distinct HttpApplication class instance is required for each request being processed, which means that multiple instances are created to handle concurrent requests. The docs also state that, for performance reasons, application class instances may be cached by the framework and used for multiple requests. To give you an opportunity to initialize and clean up your application class at the instance level, you may implement Init and Dispose methods. These methods should configure the application class’s instance variables that are not specific to a particular request. The docs state:
Init
Called once for every instance of the HttpApplication class after all modules have been created.
Dispose
Called before the application instance is destroyed.
However, you mentioned that you were initializing global state (i.e., SqlDependency.Start()) in Application_Start and cleaning up global state (i.e., SqlDependency.Stop()) in Dispose(). Because Application_Start is called once and is intended for configuring statics/globals, while Dispose() is called for every application class instance that the framework retires (which may happen multiple times before Application_End() is called), it is likely that you are stopping the dependency prematurely.
Thus, it may be that when the server runs out of queued requests it retires an HttpApplication instance by calling Dispose(), which in turn calls SqlDependency.Stop(). Any attempt to actually start monitoring for changes by attaching an SqlDependency to an SqlCommand is then likely to fail. I am not sure what already-subscribed commands will do, but they may fail at that point, which would trigger your code to resubscribe a new dependency, which should then hit an error. This could be the explanation for your “Cannot find the remote service” errors: you called SqlDependency.Stop() too early and too often.
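For reference, a hedged sketch of the pairing the life-cycle documentation suggests: start the dependency in Application_Start and stop it in Application_End, keeping Dispose() free of any dependency teardown (the "MyDb" connection string name is illustrative):
using System.Data.SqlClient;

public class MvcApplication : System.Web.HttpApplication
{
    // Illustrative only: read once so Start and Stop always see the same string.
    private static readonly string DependencyConnectionString =
        System.Configuration.ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

    protected void Application_Start()
    {
        // Global, once-per-application setup belongs here.
        SqlDependency.Start(DependencyConnectionString);
    }

    protected void Application_End()
    {
        // Global teardown belongs here, not in Dispose(), which runs every
        // time the framework retires one HttpApplication instance.
        SqlDependency.Stop(DependencyConnectionString);
    }
}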

Related

WCF - Is GetCallbackChannel reliable?

In the part "Figure 5 Storing the Callback References for Later Use" of this tutorial, it's clear that the service would need to keep the manual cache list synchronized reflecting the connected clients only to prevent exceptions caused by the reference to old clients that got disconnected. But, if I don't plan to use such a cache mechanism (for which I don't see any need at all) and I directly access GetCallbackChannel<T> instead to perform event calls to the client, is it guaranteed that the internal list will only contain all connected clients and would never throw a corresponding CommunicationException when calling a contained event?
Sorry, I hadn't read the documentation here, where it says:
Gets a channel to the client instance that called the current operation.
This immediately makes the "Figure 5 Storing the Callback References for Later Use" part of the first tutorial make sense: we will be calling the clients (multiple ones, in fact) from another thread, i.e. deferred from their original requests. I had thought GetCallbackChannel simply represented all of the acknowledged callbacks (one per client) at any point during the service's execution.
I understand, then, that I will naturally have to catch exceptions such as CommunicationException (or just Exception) once I mimic that caching-list approach.
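For illustration, a minimal sketch of that caching-list approach with the exceptions caught (the IClientCallback contract, its OnNotify operation, and the service class are all invented names):
using System;
using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical callback contract for illustration.
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void OnNotify(string message);
}

public class NotificationService
{
    private static readonly List<IClientCallback> _callbacks = new List<IClientCallback>();

    public void Subscribe()
    {
        // Capture the caller's channel while we are inside its operation.
        var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        lock (_callbacks)
        {
            if (!_callbacks.Contains(callback))
                _callbacks.Add(callback);
        }
    }

    public void Broadcast(string message)
    {
        List<IClientCallback> snapshot;
        lock (_callbacks)
        {
            snapshot = new List<IClientCallback>(_callbacks);
        }

        foreach (var callback in snapshot)
        {
            try
            {
                callback.OnNotify(message);
            }
            catch (CommunicationException)
            {
                // The client is gone; drop its stale reference so later broadcasts skip it.
                lock (_callbacks) { _callbacks.Remove(callback); }
            }
            catch (TimeoutException)
            {
                lock (_callbacks) { _callbacks.Remove(callback); }
            }
        }
    }
}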

ASP.NET: How to make data visible for AOP/Cross-cutting concerns?

I would be interested in seeing a relevant pattern for this, but here is my immediate specific problem.
I have a requirement to modify logging in a web application such that every entry in the log is annotated with the entity (user name or process name) responsible for executing the code that resulted in the log entry. For example, rather than:
timestamp level loggerName A sensitive object was deleted from the database
you would get
timestamp level loggerName [ELLIOT.ALDERSON] A sensitive object was deleted from the database
or
timestamp level loggerName [DAILY CRON JOB] A sensitive object was deleted from the database
In addition to identifying the "user" (or process) that took an action, if it is a user, there is also a requirement to log information about the request itself (e.g. ip address, user agent, headers, etc.), although that data can be written to an adjunct log so the main log itself stays readable.
In Java, this was relatively trivial to do without modifying the interface to our logger because the HTTP server we use (Tomcat) 'guarantees' one request/one thread, resulting in my being able to put both user information and request information in thread-local variables. Any of my code, anywhere, could figure out "who" called it and access request properties by asking for the current user and request associated with that thread, with no need to pass user and request variables in every method down through the entire application. Which meant that when any of my code wrote to the log, my minimally-modified logger code could produce the desired output without changing any single call to the logger in my application.
In C#.NET, I don't know how to do this. IIS pretty much guarantees thread reuse from a pool, so thread-local variables cannot be used to identify which user and which request are associated with any particular method call (and therefore which user/request to tie to logger calls made by that method). All of the AOP articles I've ever read deal with applying behavior, not so much data. Inside a controller method itself, of course, I can see session and request information. But the controllers call methods that call methods that call methods, ad nauseam, and those methods have no such visibility unless the session and request are passed as additional parameters to every method, which is a non-starter. (I also considered and dismissed walking the stack up to the controller, or until I'm convinced there is no controller; however, a stack trace essentially just identifies the source code associated with a particular frame, it doesn't give you access to the actual objects on the stack. Plus, as expensive as formatting and writing log data already is, the additional expense of walking the stack seems a bit excessive.)
Is there a technique that would allow me the same kind of visibility to arbitrary context-specific data (in this case session and request objects) to my cross-cutting concern code?
If you need static access to the current request's data, you can use HttpContext.Current.Items. It is a dictionary of string to object, and it differs for every request. If you change thread (i.e. you are using async/await), the context is preserved and you will still find the correct data.
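A minimal sketch of that idea (the LogContext class, its key, and the usage comment are invented for illustration):
using System.Web;

public static class LogContext
{
    private const string UserKey = "LogContext.User"; // invented key name

    // Set once per request, e.g. in an IHttpModule or at the top of a controller action.
    public static void SetCurrentUser(string userName)
    {
        var context = HttpContext.Current;
        if (context != null)
            context.Items[UserKey] = userName;
    }

    // Read from anywhere down the call chain without passing parameters around.
    public static string CurrentUser
    {
        get
        {
            var context = HttpContext.Current;
            if (context == null)
                return "UNKNOWN";
            return (context.Items[UserKey] as string) ?? "UNKNOWN";
        }
    }
}

// The logger wrapper can then annotate every entry, for example:
// log.Info("[" + LogContext.CurrentUser + "] A sensitive object was deleted from the database");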

.NET: 100% CPU usage in HttpClient because of Dictionary?

Short Question:
Has anyone else encountered an issue in using a singleton .NET HttpClient where the application pegs the processor at 100% until it's restarted?
Details:
I'm running a Windows Service that does continuous, schedule-based ETL. One of the data-syncing threads occasionally either just dies, or starts running out of control and pegs the processor at 100%.
I was lucky enough to see this happening live before someone simply restarted the service (the standard fix), and was able to grab a dump-file.
Loading this in WinDbg (with SOS and SOSEX), I found that I have about 15 threads (sub-tasks of the main processing thread) all running with identical stack traces. However, there don't appear to be any deadlocks; the high-utilization threads are running but never finishing.
The relevant stack-trace segment follows (addresses omitted):
System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib]].FindEntry(System.__Canon)
System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib]].TryGetValue(System.__Canon, System.__Canon ByRef)
System.Net.Http.Headers.HttpHeaders.ContainsParsedValue(System.String, System.Object)
System.Net.Http.Headers.HttpGeneralHeaders.get_TransferEncodingChunked()
System.Net.Http.Headers.HttpGeneralHeaders.AddSpecialsFrom(System.Net.Http.Headers.HttpGeneralHeaders)
System.Net.Http.Headers.HttpRequestHeaders.AddHeaders(System.Net.Http.Headers.HttpHeaders)
System.Net.Http.HttpClient.SendAsync(System.Net.Http.HttpRequestMessage, System.Net.Http.HttpCompletionOption, System.Threading.CancellationToken)
...
[Our Application Code]
According to this article (and others I've found), the use of dictionaries is not thread-safe, and infinite loops are possible (as are straight-up crashes) if you access a dictionary in a multi-threaded manner.
BUT our application code is not using a dictionary explicitly. So where is the dictionary mentioned in the stack-trace?
Following through via .NET Reflector, it appears that the HttpClient uses a dictionary to store any values that have been configured in the "DefaultRequestHeaders" property. Any request that gets sent through the HttpClient therefore triggers an enumeration of a singleton, non-thread-safe dictionary (in order to add the default headers to the request), which can spin the threads involved forever (or kill them) if corruption occurs.
Microsoft has stated bluntly that the HttpClient class is thread-safe. But it seems to me like this is no longer true if any headers have been added to the DefaultRequestHeaders of the HttpClient.
My analysis seems to indicate that this is the real root problem, and an easy workaround is simply never to use DefaultRequestHeaders anywhere the HttpClient may be used from multiple threads.
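For illustration, a minimal sketch of that workaround, setting headers per request instead of on the shared DefaultRequestHeaders collection (the ApiCaller class and URL parameter are placeholders, not our actual code):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ApiCaller
{
    private static readonly HttpClient _client = new HttpClient();

    public static async Task<string> GetJsonAsync(string url)
    {
        // Build a fresh request and set headers on it, so the shared
        // (non-thread-safe) DefaultRequestHeaders collection is never touched.
        using (var request = new HttpRequestMessage(HttpMethod.Get, url))
        {
            request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            using (var response = await _client.SendAsync(request))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}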
However, I'm looking for some confirmation that I'm not barking up the wrong tree. If this is correct, it seems like a bug in the .NET framework, which I automatically tend to doubt.
Sorry for the wordy question, but thanks for any input you may have.
Thanks for all the comments; they got me thinking along different lines, and helped me find the ultimate root cause of the issue.
Although the issue was a result of corruption in the backing dictionary of the DefaultRequestHeaders, the real culprit was the initialization code for the HttpClient object:
private HttpClient InitializeClient()
{
    if (_client == null)
    {
        _client = GetHttpClient();
        _client.DefaultRequestHeaders.Accept.Clear();
        _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        SetBaseAddress(BaseAddress);
    }
    return _client;
}
I said that the HttpClient was a singleton, which is partially incorrect. It's created as a single instance that is shared amongst multiple threads doing a unit of work, and is disposed when the work is complete. A new instance will be spun up the next time this particular task must be done.
The "InitializeClient" method above is called every time a request is to be sent, and should just short-circuit due to the "_client" field not being null after the first run-through.
(Note that this isn't being done in the object's constructor because it's an abstract class, and "GetHttpClient" is an abstract method -- BTW: don't ever call an abstract method in a base-class's constructor... that causes other nightmares)
Of course, it's fairly obvious that this isn't thread-safe, and the resultant behavior is non-deterministic.
The fix is to put this code behind a double-checked "lock" statement (although I will be eliminating the use of the "DefaultRequestHeaders" property anyway, just because).
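A sketch of that fix, reusing the fields and abstract members shown above (only the lock object is new; strictly speaking the textbook pattern also marks _client volatile, or publishes a fully configured local instance last, so another thread can never observe a half-configured client):
// One lock object per instance; this field is new, everything else matches the code above.
private readonly object _clientLock = new object();

private HttpClient InitializeClient()
{
    // Cheap check outside the lock, real check inside it, so only one thread
    // ever builds the client and mutates its DefaultRequestHeaders.
    if (_client == null)
    {
        lock (_clientLock)
        {
            if (_client == null)
            {
                _client = GetHttpClient();
                _client.DefaultRequestHeaders.Accept.Clear();
                _client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));
                SetBaseAddress(BaseAddress);
            }
        }
    }
    return _client;
}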
In a nutshell, my original question shouldn't ever be an issue if you're careful in how you initialize the HttpClient.
Thanks for the clarity of thought that you all provided!

WCF - Sharing/caching of data between calls

I am new to WCF and service development and have the following question.
I want to write a service which relies on some data (from a database, for example) in order to process client requests and reply.
I do not want to go to the database for every single call. My question is: is there any technique or way to load such data either up front or just once, so that the service does not need to fetch it for every request?
I read that setting InstanceContextMode to Single can be a bad idea (I'm not exactly sure why). Can somebody explain the best way to deal with such a situation?
Thanks
The BCL has a Lazy<T> class that is made for this purpose. Unfortunately, in the case of a transient exception (network issue, timeout, ...) it caches that exception forever (at least with its default thread-safety mode). This means that your service is down forever if that happens. That's unacceptable. The Lazy class is therefore unusable. Microsoft has declared that they are unwilling to fix this.
The best way to deal with this is to write your own lazy or use something equivalent (a sketch of one possible implementation follows below).
You can also use LazyInitializer; see the documentation.
I don't know how instance mode Single behaves in the case of an exception. In any case, it is architecturally unwise to put lazy resources into the service class: if you want to share those resources with multiple services, that's a problem, and it's not the responsibility of the service class anyway.
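For illustration, a minimal sketch of such an exception-discarding lazy (the RetryLazy name is invented; if the value factory throws, nothing is cached and the next caller simply retries):
using System;

// Hypothetical replacement for Lazy<T> that does not cache exceptions:
// if the factory throws, nothing is stored and the next access retries.
public sealed class RetryLazy<T>
{
    private readonly Func<T> _factory;
    private readonly object _gate = new object();
    private bool _created;
    private T _value;

    public RetryLazy(Func<T> factory)
    {
        _factory = factory;
    }

    public T Value
    {
        get
        {
            lock (_gate)
            {
                if (!_created)
                {
                    _value = _factory(); // if this throws, _created stays false
                    _created = true;
                }
                return _value;
            }
        }
    }
}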
It all depends on the amount of data to load and the pattern of data usage.
Assuming that your service calls are independent and may require different portions of data, you may implement some caching (using Lazy<T> or similar techniques). But this solution has one important caveat: once data is loaded into the cache, it stays there forever unless you define some expiration strategy (time-based, flush on write, or something else). Without a cache-entry expiration strategy, your service will consume more and more memory over time.
This may not be too important a problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures that a service object is created for the lifetime of a session (which stays alive while a particular WCF client is connected), and all calls from that client are dispatched to the same service object. It may or may not be appropriate from a business-domain point of view. If it is appropriate, you can load your data from the database on the first call, and subsequent calls within the same session can reuse the data. A new session (another client, or the same client after a reconnect) will have to load the data again.
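As a hedged sketch of that per-session approach (the contract, the service class, and the LoadFromDatabase method are all placeholders):
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface ILookupService
{
    [OperationContract]
    string GetName(int id);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class LookupService : ILookupService
{
    // Lives as long as this client's session; each session loads its own copy.
    private Dictionary<int, string> _cache;

    public string GetName(int id)
    {
        if (_cache == null)
            _cache = LoadFromDatabase(); // hit the database once per session

        string name;
        return _cache.TryGetValue(id, out name) ? name : null;
    }

    private Dictionary<int, string> LoadFromDatabase()
    {
        // Placeholder: real code would query the database here.
        return new Dictionary<int, string>();
    }
}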

Unique value in StackTrace?

Background:
I have written a generic logging library in .NET 3.5. Basically the developer needs to call the Begin() method at the beginning of their method and the End() method at the end of it. These methods do not take a parameter - the library uses the stacktrace to figure out where it came from. The library has a collection that keeps track of the call stack and writes out the elapsed time of each method.
This works really well and I'm happy with it.
But now we want to add it to the server. When multiple users are on the system, there is only one log file and the stack traces are lumped together. It's impossible to tell which thread is doing what.
My question is this:
Is there a way to retrieve a unique value from the StackTrace class or an individual StackFrame? What about using reflection? I would like to be able to create a separate file for each user. At the very least, I'd like to be able to tag each line with a unique value so we can filter the file by this value when reviewing traces.
We are using WCF TcpBinding as our server side communication protocol, if that helps. I am looking for a thread id, hashcode, address, something to distinguish where the call stack came from.
Any ideas?
Thanks.
You could use something associated with the current thread - perhaps the thread id?
Threads from the thread pool get reused, so you would see the ids repeated throughout the log file, but for the lifetime of a Begin/End pair it would uniquely tag a single user.
If you used some form of aspect-oriented programming (such as PostSharp) you might find a better, declarative way to get the information you need. Thread.CurrentThread.ManagedThreadId would give you a reference for the thread running the code at the time, but all your developers would have to do is apply an attribute to a method, rather than calling Begin() and End() in every method.
To get the user account under which the current thread is running, you can use WindowsIdentity.GetCurrent().
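To make that concrete, a small sketch that combines the two suggestions (the LogTag class is invented; Begin() and End() are your library's existing methods):
using System.Security.Principal;
using System.Threading;

public static class LogTag
{
    // Invented helper: combines the managed thread id with the Windows
    // account the calling thread is currently running under.
    public static string Current()
    {
        int threadId = Thread.CurrentThread.ManagedThreadId;
        string user = WindowsIdentity.GetCurrent().Name;
        return string.Format("[{0}/{1}]", threadId, user);
    }
}

// Usage inside the logging library, wherever Begin()/End() write a line:
// writer.WriteLine(LogTag.Current() + " " + methodName + " took " + elapsedMs + " ms");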
