In the "Figure 5 Storing the Callback References for Later Use" part of this tutorial, it's clear that the service must keep the manual cache list synchronized with the currently connected clients, to prevent exceptions caused by references to clients that have since disconnected. But if I don't plan to use such a cache mechanism (for which I don't see any need at all) and instead call GetCallbackChannel<T> directly to raise events on the client, is it guaranteed that the internal list will contain only connected clients and never throw a CommunicationException when one of the contained events is called?
Sorry, I hadn't read the part of the documentation that says:
Gets a channel to the client instance that called the current
operation.
This immediately makes the "Figure 5 Storing the Callback References for Later Use" part of the first tutorial make sense: we call the clients back (multiple ones, in fact) on another thread, deferred from their original requests. I had thought GetCallbackChannel simply exposed all the acknowledged callbacks (one per client) at any point in the service's execution.
I understand, then, that I'll naturally have to catch exceptions such as CommunicationException (or simply Exception) once I mimic that caching-list approach.
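For reference, a minimal sketch of that caching-list approach with the exception handling folded in; IMyService, IMyCallback, and OnNotify are hypothetical stand-ins for the tutorial's contracts:

using System;
using System.Collections.Generic;
using System.ServiceModel;

public class MyService : IMyService
{
    static readonly List<IMyCallback> Subscribers = new List<IMyCallback>();

    public void Subscribe() //each client calls this once
    {
        //capture the channel of the client making *this* call
        var callback = OperationContext.Current.GetCallbackChannel<IMyCallback>();
        lock (Subscribers)
            if (!Subscribers.Contains(callback))
                Subscribers.Add(callback);
    }

    public void NotifyAll(string message) //later, possibly on another thread
    {
        lock (Subscribers)
            for (int i = Subscribers.Count - 1; i >= 0; i--)
            {
                try { Subscribers[i].OnNotify(message); }
                catch (CommunicationException) { Subscribers.RemoveAt(i); } //client gone
                catch (TimeoutException) { Subscribers.RemoveAt(i); }
            }
    }
}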
The purpose of calling SqlDependency.Start multiple times is to confirm it is still fine before some other action, such as creating a new SqlCacheDependency based on a SqlCommand. According to Microsoft's documentation for SqlDependency.Start at https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.start(v=vs.110).aspx (the Remarks section), calling SqlDependency.Start multiple times appears to be perfectly fine:
Multiple calls with identical parameters (the same connection string and Windows credentials in the calling thread) are valid.
But in practice the second call can fail (it has never succeeded for me), and it makes all subsequent calls to SqlDependency.Start fail as well, silently returning false rather than throwing an exception.
What I do should satisfy the first restriction mentioned in the Remarks section of the link above: all my calls to SqlDependency.Start use the same parameters (in fact there is just one parameter, the connection string). It looks like this:
//at initialization (such as in Application_Start() in ASP.NET MVC)
SqlDependency.Start(myConnectionString); //this usually returns true

//later, just before creating an instance of SqlCacheDependency,
//I call the Start method again to ensure everything is OK
var ok = SqlDependency.Start(myConnectionString); //almost always false
if (ok)
{
    //almost never reached ...
}
So it's really hard to square this with what Microsoft states (the first restriction in the Remarks section): the two calls are exactly identical. Yet once the second call fails, every identical call after it fails too, meaning there is no chance to start it successfully again once I have attempted to call it more than once.
When I look at the SQL Server log I can see many messages saying something like Cannot find the remote service ... because it does not exist.
I don't need a solution or work-around for this problem; I just need an explanation of why it does not work as Microsoft stated, or of what I have misunderstood in their statement.
As Jeroen Mostert mentioned in the comments, and as the docs for SqlDependency.Start() state:
Returns
Boolean
true if the listener initialized successfully; false if a compatible listener already exists.
As the Remarks in the docs describe, SqlDependency.Start() and SqlDependency.Stop() keep track of the number of calls to each. A background connection is kept running (or set up) as long as the number of calls to SqlDependency.Start() exceeds the number of calls to SqlDependency.Stop() (though I think it loses track and resets its count if you call SqlDependency.Stop() more times than you call SqlDependency.Start()).
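In other words, per the quoted return value, false on a repeated call is not a failure; it just means a compatible listener is already up. A sketch of the documented ref-counting behavior (connStr is a hypothetical connection string; using System.Data.SqlClient):

bool first  = SqlDependency.Start(connStr);  //true: listener initialized
bool second = SqlDependency.Start(connStr);  //false: compatible listener already exists
SqlDependency.Stop(connStr);                 //count 2 -> 1, listener keeps running
SqlDependency.Stop(connStr);                 //count 1 -> 0, listener shuts down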
Start() Errors
It may help to clarify that it is possible for SqlDependency.Start() to fail. One way to make it fail is to call it multiple times from one AppDomain with different connection strings. Within a particular AppDomain, SqlDependency.Start() will throw an exception if you pass in a different connection string, unless at least one of the following properties of the connection string differs from a previously passed connection string:
Database name
Username
I.e., you are expected to normalize or cache the connection string you first pass to SqlDependency.Start() so that you never pass it a string that has, for example, a different value for Max Pool Size. I think it does this to avoid creating a lot of broker queues and connections for a single process. Additionally, when it later tries to match a command to a broker queue as you actually set up an SqlDependency, it probably uses these distinguishing connection string properties to decide which queue to use.
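For example, a sketch of caching one canonical string (the "Default" configuration key is hypothetical; using System.Configuration):

static readonly string DependencyConnectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

SqlDependency.Start(DependencyConnectionString);
//... never build a fresh string with, e.g., a different Max Pool Size ...
SqlDependency.Stop(DependencyConnectionString);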
ASP.NET Life Cycle
From the ASP.NET Application Life Cycle documentation under “Life Cycle Events and the Global.asax file”, note the following:
The Application_Start method, while an instance method, is called only when the application is starting, which often occurs during the first HTTP request for your application. The documentation specifically states:
You should set only static data during application start. Do not set any instance data because it will be available only to the first instance of the HttpApplication class that is created.
The method you should use to clean up things initialized in Application_Start is Application_End. When a web app is gracefully stopped, an instance of your application class is created and Application_End is called on it. Note that this might be a different instance of the application class than the one Application_Start was called on.
Because of ASP.NET's architecture, a distinct HttpApplication class instance is required for each request being processed. That means multiple instances will be created to handle concurrent requests. The docs also state that, for performance reasons, application class instances may be cached by the framework and reused for multiple requests. To give you an opportunity to initialize and clean up your application class at the instance level, you may implement Init and Dispose methods. These methods should configure the application class instance's variables that are not specific to a particular request. The docs state:
Init
Called once for every instance of the HttpApplication class after all modules have been created.
Dispose
Called before the application instance is destroyed.
However, you mentioned that you were initializing global state (i.e., SqlDependency.Start()) in Application_Start and cleaning up global state (i.e., SqlDependency.Stop()) in Dispose(). Because Application_Start is called once and is intended for configuring statics/globals, while Dispose() is called for every application class instance the framework retires (which may happen multiple times before Application_End() is ever called), it is likely that you are stopping the dependency too quickly.
Thus, SqlDependency.Stop() may be called as soon as the server runs out of requests and retires an HttpApplication instance by calling Dispose(). Any attempt after that to actually start monitoring for changes by attaching an SqlDependency to an SqlCommand will likely fail. I am not sure what already-subscribed commands will do, but they may fail at that point, which would trigger your code to resubscribe a new dependency and then hit an error. This could be the explanation for your "Cannot find the remote service" errors: you called SqlDependency.Stop() too early and too often.
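A minimal Global.asax sketch of the implied fix, pairing the global Start with Application_End rather than with per-instance Dispose() (myConnectionString as in your code):

protected void Application_Start()
{
    SqlDependency.Start(myConnectionString); //global setup, once per AppDomain
}

protected void Application_End()
{
    SqlDependency.Stop(myConnectionString);  //global cleanup; do not put this in Dispose()
}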
My ApplicationUser model contains a property:
public bool SubscribedToNewsletter { get;set; }
I would like to make sure that whenever its value is updated in the database, an external API is called to add or remove the user from a list in my email automation system, without me calling the method manually, so that synchronization is ensured regardless of the programmer's intentions.
Is there built-in functionality for this in ASP.NET? Or do I have to extend the UserManager class and centralize all the calls that update the database?
Calling an external API to keep it in sync with your application data is a little more complicated than making a simple change to a domain model.
If you did this, would you call the API before or after you persist changes to the database? If before:
How do you make sure that the change is going to be accepted by the DB?
What if the API call fails? Do you refuse to update the DB?
What if the API call succeeds but the application crashes before updating the DB or the DB connection is temporarily lost?
If after:
The API could be unavailable (e.g. outage). How do you make sure this gets called later to keep things in sync?
The application crashes after updating the DB. How do you make sure the API gets called when it restarts?
There are a few different ways you could potentially solve this. However, bear in mind that by synchronising with an external system you lose the ACID semantics you may be used to, and your application will have to deal with eventual consistency.
A simple solution would be to have another database table that acts as a queue of API calls to be made (it's important that this is ordered by time). When the user's subscription is updated, you add a row with the relevant details as part of the same DB transaction. This ensures the request to call the API is always recorded together with the update.
Then you would have a separate process (or thread) that polls this table. You could use pg_notify to support push notifications rather than polling.
This process can read the row (in order) then call the relevant API to make the change in the external system. If it succeeds, it can remove the row. If it fails, it can try again using an exponential back-off. Continued failures should be logged for investigation.
The worst-case scenario now is that you have at-least-once delivery semantics for updating the external system (e.g. if the API call succeeded but the process crashed before removing the row, the call would be made again when the process restarted). If you needed at-most-once semantics, you would remove the row before attempting the call.
This obviously glosses over some of the details and would need modification for a high-throughput system, but it should illustrate the principles.
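A minimal sketch of the transactional part using plain SqlClient; the table and column names (ApiCallQueue and so on) are hypothetical:

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        using (var update = new SqlCommand(
            "UPDATE AspNetUsers SET SubscribedToNewsletter = @sub WHERE Id = @id",
            conn, tx))
        {
            update.Parameters.AddWithValue("@sub", subscribed);
            update.Parameters.AddWithValue("@id", userId);
            update.ExecuteNonQuery();
        }

        using (var enqueue = new SqlCommand(
            "INSERT INTO ApiCallQueue (UserId, Subscribed, CreatedAtUtc) " +
            "VALUES (@id, @sub, SYSUTCDATETIME())",
            conn, tx))
        {
            enqueue.Parameters.AddWithValue("@id", userId);
            enqueue.Parameters.AddWithValue("@sub", subscribed);
            enqueue.ExecuteNonQuery();
        }

        tx.Commit(); //the user update and the queued API call commit atomically
    }
}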
I usually tackle this sort of thing with LISTEN and NOTIFY plus a queue table. You send a NOTIFY from a trigger when there's a change of interest, and insert a row into a queue table. A LISTENing connection notices the change, grabs the new row(s) from the queue table, actions them, and marks them as completed.
Instead of LISTEN and NOTIFY you can just poll the queue table; LISTEN and NOTIFY are an optimisation.
To make this reliable, either the actions you take must be in the same DB and done on the same connection as the update to the queue, or you need to use two-phase commit (2PC) to synchronise the actions. That's beyond the scope of this sort of answer, as you need a transaction resolver for crash recovery, etc.
If it's safe to call the API multiple times (i.e. it's idempotent), then after a failure midway through an operation it's fine to just execute all entries in the pending queue table again on crash recovery/restart/etc. You generally only need 2PC if you cannot safely repeat one of the actions.
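For the listening side, a minimal sketch using Npgsql; the channel name and the row-processing step are hypothetical:

using Npgsql;

using (var conn = new NpgsqlConnection(connectionString))
{
    conn.Open();
    //fires whenever a trigger runs pg_notify('api_queue', ...);
    //ProcessPendingQueueRows should use its own connection
    conn.Notification += (sender, args) => ProcessPendingQueueRows();
    using (var cmd = new NpgsqlCommand("LISTEN api_queue", conn))
        cmd.ExecuteNonQuery();
    while (true)
        conn.Wait(); //blocks until a NOTIFY arrives on this connection
}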
I am new to WCF and service development and have the following question.
I want to write a service that relies on some data (from a database, for example) in order to process client requests and reply.
I do not want to query the database for every single call. My question is: is there a technique or pattern that lets me load such data upfront, or just once, so the service need not fetch it for every request?
I have read that setting InstanceContextMode to Single can be a bad idea (I'm not exactly sure why). Can somebody explain the best way to deal with this situation?
Thanks
The BCL has a Lazy<T> class made exactly for this purpose. Unfortunately, in the case of a transient exception (network issue, timeout, ...) it caches the exception forever. This means that your service is down forever if that happens. That's unacceptable, which makes the default Lazy<T> behavior unusable here. Microsoft has declared that they are unwilling to fix this.
The best way to deal with this is to write your own lazy or use something equivalent.
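One built-in near-equivalent, assuming the cached value is safe to construct more than once: a Lazy<T> created with LazyThreadSafetyMode.PublicationOnly does not cache factory exceptions, so a transient failure can be retried on the next access. A sketch (ReferenceData and LoadReferenceData are hypothetical):

using System;
using System.Threading;

static readonly Lazy<ReferenceData> Data = new Lazy<ReferenceData>(
    LoadReferenceData,                     //hypothetical loader
    LazyThreadSafetyMode.PublicationOnly); //exceptions are not cached

//note: in PublicationOnly mode several threads may race to run the factory
//and only one result is published, so the loader must be safe to run twice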
You can also use LazyInitializer; see the documentation.
I don't know how instance mode Single behaves in the case of an exception. In any case, it is architecturally unwise to put lazy resources into the service class. If you want to share those resources across multiple services, that's a problem; and it's not the responsibility of the service class to manage them anyway.
It all depends on the amount of data to load and the pattern of data usage.
Assuming your service calls are independent and may require different portions of the data, you can implement some caching (using Lazy<T> or similar techniques). But this solution has one important caveat: once data is loaded into the cache, it will stay there forever unless you define an expiration strategy (time-based, flush-on-write, or something else). Without a cache-entry expiration strategy, your service will consume more and more memory over time.
This may not be an important problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
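A minimal sketch of a time-based expiration strategy using System.Runtime.Caching.MemoryCache (the cache key, ReferenceData, and LoadFromDatabase are hypothetical):

using System;
using System.Runtime.Caching;

static readonly ObjectCache Cache = MemoryCache.Default;

static ReferenceData GetReferenceData()
{
    var cached = Cache.Get("ref-data") as ReferenceData;
    if (cached != null)
        return cached;

    var data = LoadFromDatabase(); //hypothetical database call
    Cache.Set("ref-data", data, new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) //bounds memory growth
    });
    return data;
}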
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures a service object is created for the lifetime of each session (which stays alive while the corresponding WCF client is connected), and all calls from that client are dispatched to the same service object. That may or may not be appropriate from a business-domain point of view. If it is, you can load your data from the database on the first call, and subsequent calls within the same session can reuse it. A new session (another client, or the same client after a reconnect) will have to load the data again.
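A minimal sketch of the per-session variant; the binding must support sessions (e.g. netTcpBinding or wsHttpBinding), and IDataService, ReferenceData, and LoadFromDatabase are hypothetical:

using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class DataService : IDataService
{
    ReferenceData _data; //lives as long as this client's session

    public ReferenceData GetData()
    {
        if (_data == null)
            _data = LoadFromDatabase(); //paid once, on the session's first call
        return _data;
    }
}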
Let's suppose I have a .NET client application that connects to a WCF service, or perhaps a message queue. During normal execution of the program there might be connection losses, the user might be forced to log off by the administrator, or the administrator might send the app a message telling it to log in to another WCF server (e.g. some form of manual load balancing).
The client application would only find out about this when one of many low-level methods tries to make a WCF call and the call fails.
When such a thing happens, I'd like the application and all its windows to somehow be disabled or hidden, a dialog box / splash window to come up and perform the reconnection, and the windows to be shown again once it succeeds.
How does one go about doing this? The problem I see is that the code that first discovers the problem is at the lowest level (e.g. the result of a button click on a dialog sitting on top of the main windows). The program almost needs to be turned inside out to handle this intuitively. So I assume there are patterns or frameworks that help with this?
Unfortunately there isn't a great way of doing this, because the resulting exceptions can start anywhere a WCF call can happen and propagate upward until something catches them. For the HTTP bindings you know when that will be, because WCF only does anything when you make an explicit call, so you can catch any disconnect/timeout exceptions and deal with them appropriately.
For message queues or TCP bindings it might get a bit messier, but the tactic is the same: any time you make a WCF call, watch for the appropriate exceptions, and have the application call some function that changes the UI the way you want.
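A minimal sketch of that tactic: funnel every WCF call through one helper so failures surface in a single place the UI can subscribe to (ServiceGateway and the usage names are hypothetical):

using System;
using System.ServiceModel;

public static class ServiceGateway
{
    public static event Action ConnectionLost;

    public static T Call<T>(Func<T> wcfCall)
    {
        try
        {
            return wcfCall();
        }
        catch (CommunicationException)
        {
            ConnectionLost?.Invoke(); //UI hides windows, shows reconnect dialog
            throw;
        }
        catch (TimeoutException)
        {
            ConnectionLost?.Invoke();
            throw;
        }
    }
}

//usage: var data = ServiceGateway.Call(() => client.GetData(42));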
I believe what you're looking for is called "exception handling". Exceptions are the way to get from the bottom to the top.
One possible solution is to call some kind of non-transactional method that returns a minimal result at a fixed interval (a heartbeat). Alternatively, if you can get hold of the underlying socket object of the instantiated WCF client, the overhead of checking it is not that large. Socket objects probably don't have any event related to disconnection, though; you may only be able to check by trying to communicate with the other end, but I might be wrong about this.
If an administrator logs on to my service, he may wish to disconnect sessions that meet (or don't meet) certain requirements, whether automated or manual. Throwing exceptions seems like a simple and effective solution, as all resources are released.
I could use a local bool field which, if true, would disconnect the user the next time he calls any of the methods, but that doesn't seem like an elegant solution.
And it doesn't have to be throwing an exception; I've already noticed you can use OperationContext.Current.Channel.Close(), or Abort, to disable access to that session.
Is there a "standard" way to do this in WCF?
From within a given 'session', I do not think there is an out-of-the-box way to do anything with another session (i.e. instance context).
You can, however, track clients that connect to your service by using the IChannelInitializer interface in conjunction with a service or contract behavior. This gives you access to each client's associated channel, which you could then close if you wanted to, although that would not necessarily produce a very informative fault on the client end, as it would look like a communication issue.
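A minimal sketch of that tracking; the registry is hypothetical, and the initializer must be registered by a service or endpoint behavior that adds it to DispatchRuntime.ChannelInitializers:

using System.Collections.Concurrent;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;

public class TrackingChannelInitializer : IChannelInitializer
{
    public static readonly ConcurrentDictionary<string, IClientChannel> Channels =
        new ConcurrentDictionary<string, IClientChannel>();

    public void Initialize(IClientChannel channel)
    {
        string id = channel.SessionId;
        if (id == null) return; //sessionless binding
        Channels[id] = channel;
        channel.Closed += (s, e) =>
        {
            IClientChannel removed;
            Channels.TryRemove(id, out removed);
        };
    }
}

//an admin operation could then forcibly drop a session:
//IClientChannel ch;
//if (TrackingChannelInitializer.Channels.TryGetValue(sessionId, out ch))
//    ch.Abort(); //the client sees this as a communication failure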
Another option is to look at callbacks, if you have control over the clients that access the service. By combining the client tracking mentioned above with callbacks, you can have the client close its own channel more gracefully and potentially inform the user of what happened.