Thread in WCF basicHttpBinding [duplicate] - c#

This question already has answers here (closed 10 years ago).
Possible duplicate: WCF Windows Service - Long operations/Callback to calling module
I have a WCF application hosted in a Windows service. I have to use basicHttpBinding. The service runs a long analysis over the data and then returns the results to the clients.
Is it possible for a WCF call to create a thread that carries out the analysis, with the ID of that thread being returned to the client?
The client should be able to query the service using the transmitted ID and, once it learns that the data is ready, download it. That would, in turn, release the thread.
How can I achieve this functionality?
Update: OK, it works. The client call creates a thread that keeps running in the background after the GUID has been returned and saves the result of the operation. What is the best way to store these results? Because the service runs as PerCall, an instance-level dictionary is reset on every call to the service. I could declare the data as static, but that does not feel like a good idea. Any suggestions?
namespace WCFRiskService
{
    [ServiceContract]
    public interface IRiskService
    {
        // return Thread ID
        [OperationContract]
        int GetAnalysis(int analysisId);

        [OperationContract]
        string GetAnalysisData(int threadId);
    }

    public class Analysis
    {
        public string Result { get; private set; }

        public void GenerateAnalysis()
        {
            Thread.Sleep(20000);   // stand-in for the long-running analysis
            Result = "Generated Data";
        }
    }

    public class RiskService : IRiskService
    {
        // How can I change this to use non-static objects?
        static string AnalysisData = "";

        public string GetAnalysisData(int threadId)
        {
            return AnalysisData;
        }

        public int GetAnalysis(int analysisId)
        {
            Analysis aObject = new Analysis();
            // aObject.Tree = analysisTree;   // (AnalysisTree is not defined in this snippet)
            Thread workerThread = new Thread(aObject.GenerateAnalysis);
            int managedThreadId = workerThread.ManagedThreadId;
            workerThread.Start();
            while (!workerThread.IsAlive) ;
            return managedThreadId;
        }
    }
}

You could create a job Id (Guid) for each job and pass it back to the client. Then, in the service, store the job Id in a ConcurrentDictionary<Guid, AnalysisResult>, and when the client asks for the results, return the AnalysisResult that corresponds to the job Id. The client will need to check whether the AnalysisResult returned by the operation is not null, and so on.
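A minimal sketch of that job-store idea, assuming a PerCall service; the contract, class and member names here (IRiskJobService, StartAnalysis, GetResult, AnalysisResult) are illustrative and not taken from the question:
using System;
using System.Collections.Concurrent;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Threading;
using System.Threading.Tasks;

[DataContract]
public class AnalysisResult
{
    [DataMember] public bool IsComplete { get; set; }
    [DataMember] public string Data { get; set; }
}

[ServiceContract]
public interface IRiskJobService
{
    [OperationContract]
    Guid StartAnalysis();

    [OperationContract]
    AnalysisResult GetResult(Guid jobId);
}

public class RiskJobService : IRiskJobService
{
    // Static so it survives PerCall instancing; shared across all service instances.
    private static readonly ConcurrentDictionary<Guid, AnalysisResult> Jobs =
        new ConcurrentDictionary<Guid, AnalysisResult>();

    public Guid StartAnalysis()
    {
        Guid jobId = Guid.NewGuid();
        Jobs[jobId] = new AnalysisResult { IsComplete = false };

        Task.Run(() =>
        {
            Thread.Sleep(20000);   // stand-in for the real long-running analysis
            Jobs[jobId] = new AnalysisResult { IsComplete = true, Data = "Generated Data" };
        });

        return jobId;              // the client polls GetResult with this id
    }

    public AnalysisResult GetResult(Guid jobId)
    {
        AnalysisResult result;
        return Jobs.TryGetValue(jobId, out result) ? result : null;   // null means unknown job id
    }
}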
Note that polling is not the best approach though.
If you could replace basicHttpBinding with wsDualHttpBinding, then have a look at duplex services, which allow both endpoints to send messages. That way the server can send messages to the client whenever it wishes. You could create a callback interface for progress reporting.
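For the duplex route, a rough outline of what the contracts might look like; the interface and method names are illustrative (it assumes System.ServiceModel):
// Sketch only: duplex contracts for pushing progress/results over wsDualHttpBinding.
[ServiceContract(CallbackContract = typeof(IAnalysisCallback))]
public interface IAnalysisService
{
    [OperationContract(IsOneWay = true)]
    void StartAnalysis(Guid jobId);
}

public interface IAnalysisCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportProgress(Guid jobId, int percentComplete);

    [OperationContract(IsOneWay = true)]
    void AnalysisCompleted(Guid jobId, string data);
}
The service would capture the callback channel with OperationContext.Current.GetCallbackChannel<IAnalysisCallback>() when StartAnalysis is called and push progress and results to it from the worker thread.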

Related

Wait for a third-party API callback

I need to create a REST API that connects to a third-party SOAP API. The third-party API's events are sent by callback to a URL I provide.
The typical flow is that my API starts a session with the third party by providing an ID and a callback URL. The third party can then send new events to my API through this URL when, for example, a new participant connects. Sometimes I need to request specific information, such as the list of participants for a given session (ID), and wait for the event containing that information.
Note that there may be multiple open sessions at the same time.
An example of what I need:
private string url = "http://myapi/callback";

[HttpGet]
[Route("createSession")]
public async Task<string> CreateSession()
{
    var id = Guid.NewGuid().ToString();
    var result = await ExternAPI.CreateSession(id, this.url);
    return result; // contains the id
}

[HttpGet]
[Route("endSession")]
public async Task<string> EndSession([FromUri] string id)
{
    var result = await ExternAPI.EndSession(id);
    return result;
}

[HttpGet]
[Route("participants")]
public async Task<string> Participants([FromUri] string id)
{
    ExternAPI.participants(id); // the results of this call arrive later via the callback
    string results = null;      // <-- wait here for the callback event matching this id
    return results;
}

[HttpPost]
[Route("callback")]
public void Callback([FromBody] string body)
{
    // notify the waiting request and pass the body along
}
I came up with a solution using ReactiveX, but I'm not really sure about its reliability in production. What I have in mind is to create a subject that never terminates and handles all the events, but that is not a usual lifetime for a subject; what happens on error? And I don't think I did it the "Rx way" (state concerns).
Here it is (you will need System.Reactive to run this code):
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading;
using System.Threading.Tasks;

class Data
{
    public int id;
    public string value;
}

class Program
{
    private static Subject<Data> sub;

    static void Main(string[] args)
    {
        sub = new Subject<Data>();
        Task.Run(async () =>
        {
            int id = 1;
            ExternAPI(CallBackHook, id);
            // Wait for the first event whose id matches this request.
            Data result = await sub.Where(data => data.id == id).FirstAsync();
            Console.WriteLine("{0}", result.value);
        });
        Console.ReadLine();
    }

    static void CallBackHook(Data data)
    {
        sub.OnNext(data);
    }

    static string ExternAPI(Action<Data> callback, int id)
    {
        // Third-party API, accessed via SOAP. The callback is normally a URL (string).
        Task.Run(() =>
        {
            Thread.Sleep(1000);
            callback(new Data { id = id, value = "test" });
        });
        return "success";
    }
}
Another way would be a dictionary of subjects, one per session, so I could manage their lifetimes.
it is not a usual lifetime for a subject
what happens on error?
And I don't think I did it the "RX-way"
Yes, these are all perfectly valid concerns with this kind of approach. Personally, I don't much mind the last one, because even though Subjects are frowned upon, many times they're just plain easier to use than the proper Rx way. With the learning curve of Rx being what it is, I tend to optimize for developer maintainability, so I do "cheat" and use Subjects unless the alternative is equally understandable.
Regarding lifetime and errors, the solutions there depend on how you want your application to behave.
For lifetime, it looks like currently you have a WebAPI resource (the SOAP connection) requiring an explicit disconnect call from your client; this raises some red flags. At the very least, you'd want some kind of timeout there where that resource is disposed even if endSession is never called. Otherwise, it'll be all too easy to end up with dangling resources.
Also for errors, you'll need to decide the appropriate approach. You could "cache" the error and report it to each call that tries to use that resource, and "clear" the error when endSession is called. Or, if it's more appropriate, you could let an error take down your ASP.NET process. (ASP.NET will restart a new one for you).
To delay an API until you get some other event, use TaskCompletionSource<T>. When starting the SOAP call (e.g., ExternAPI.participants), you should create a new TCS<T>. The API call should then await the TaskCompletionSource<T>.Task. When the SOAP service responds with an event, it should take that TaskCompletionSource<T> and complete it. Points of note:
If you have multiple SOAP calls that are expecting responses over the same event, you'll need a collection of TaskCompletionSource<T> instances, along with some kind of message-identifier to match up which events are for which calls.
Be sure to watch your thread safety. Incoming SOAP events are most likely arriving on the thread pool, with (possibly multiple) API requests on other thread pool threads. TaskCompletionSource<T> itself is threadsafe, but you'd need to make your collection threadsafe as well.
You may want to write a Task-based wrapper for your SOAP service first (handling all the TaskCompletionSource<T> stuff), and then consume that from your WebAPI.
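A minimal sketch of that idea; the SoapSessionClient wrapper and its members are hypothetical and only assume the ExternAPI calls shown in the question:
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch only: a Task-based wrapper that matches callback events to pending requests by id.
public class SoapSessionClient
{
    // One pending request per session id; ConcurrentDictionary keeps the collection thread-safe.
    private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _pending =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    // Called by the WebAPI action that needs to wait for a callback event.
    public Task<string> GetParticipantsAsync(string sessionId)
    {
        var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[sessionId] = tcs;

        ExternAPI.participants(sessionId);   // fire the SOAP request; the result arrives via the callback

        return tcs.Task;                     // the caller awaits until the callback completes it
    }

    // Called from the /callback controller action when the third party posts an event.
    public void OnCallback(string sessionId, string body)
    {
        TaskCompletionSource<string> tcs;
        if (_pending.TryRemove(sessionId, out tcs))
        {
            tcs.TrySetResult(body);          // wakes up the awaiting API request
        }
        // else: an event nobody is waiting for; log or ignore
    }
}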
As a very broad alternative, instead of bridging SOAP with WebAPI, I would consider bridging SOAP with SignalR. You may find that this is a more natural translation. Among other things, SignalR will give you client-connect and client-disconnect events (complete with built-in timeouts for clients). So that may solve your lifetime issues more naturally. You can use the same Task-based wrapper for your SOAP service as well, or just expose the SOAP events directly as SignalR messages.
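If you go that route, the bridge could look roughly like this with classic ASP.NET SignalR 2; the hub, group and method names are illustrative, and this is only a sketch of the idea:
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Sketch only: pushing SOAP callback events to connected clients via a SignalR hub.
public class SessionHub : Hub
{
    // Clients call this to receive events for a particular session.
    public Task JoinSession(string sessionId)
    {
        return Groups.Add(Context.ConnectionId, sessionId);
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // Built-in disconnect notification; clean up any per-client session state here.
        return base.OnDisconnected(stopCalled);
    }
}

public static class SessionNotifier
{
    // Called from the SOAP callback endpoint when an event arrives.
    public static void PublishEvent(string sessionId, string body)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<SessionHub>();
        context.Clients.Group(sessionId).receiveEvent(body);   // dynamic dispatch to that session's clients
    }
}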

How to call a method of the ServiceHost from the hosting process in WCF C#

I have a publisher/subscriber pattern WCF duplex ServiceHost that is hosted by a Windows service. The Windows service receives events from a separate process. On each event I would like my WCF host to publish that data to all subscribed clients. If a client were making the call this would be straightforward, but when the service host itself needs to do it, I can't get my head around how.
I have 2 questions:
1: I do not know how to create a channel in the WCF host from my Windows service that it can use to publish to the subscribers.
2: I have read Creating WCF ChannelFactory, so I know I would be creating a DuplexChannelFactory (two per second), which might be too much overhead.
Any help, examples, or hints are greatly appreciated. I am not a WCF expert, and I already know more about it than I thought I would need to in order to use it.
I had read on SO
Can I call a Method in a self hosted wcf host locally?
So then I have created a method inside my WCFHost like so:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 AutomaticSessionShutdown = false,
                 IncludeExceptionDetailInFaults = true)]
[CallbackBehavior(UseSynchronizationContext = false, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ServerHost<TService> : ServiceHost where TService : class
{
    public T GetDuplexClientChannel<T, Cback>(BindingType bindingType, EndpointAddress endPointAddress) where T : class
    {
        ServiceEndpoint sep = GetContractServiceEndPoint<T>(bindingType, endPointAddress);
        lock (_syncRoot)
        {
            DuplexChannelFactory<T> factory = new DuplexChannelFactory<T>(typeof(Cback), sep);
            return factory.CreateChannel(endPointAddress);
        }
    }
}
I get an error, of course, that there is no InstanceContext because I am constructing the factory with typeof(Cback):
"This CreateChannel overload cannot be called on this instance of DuplexChannelFactory, as the DuplexChannelFactory was initialized with a Type and no valid InstanceContext was provided."
So I am not sure how to go about this.
And for those who say "read the error": yes, I read the error.
The question is how to do that when the InstanceContext does not exist; OperationContext.Current is not available at this point because I am calling this method from my hosting process into my WCF host.
A clear example of how to do this, even one based on the code in the second link (implementing the DuplexChannelFactory), would be greatly appreciated.
EDIT
Basically, the Windows service does some heavy work monitoring other services; about twice a second it must publish the results to "subscribed" clients via WCF.
I think you have got very confused about how everything is wired together and are mixing concepts from the client in with the service. You haven't provided much concrete information about your scenario to go on so I'm going to provide a small example and hopefully you will be able to apply the ideas to your problem.
[ServiceContract(CallbackContract = typeof(IMyServiceCallback))]
public interface IMyService
{
    [OperationContract]
    void Register();
}

public interface IMyServiceCallback
{
    [OperationContract]
    void ReceiveData(string data);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyService
{
    static HashSet<IMyServiceCallback> s_allClients = new HashSet<IMyServiceCallback>();
    static object s_lockobj = new object();

    public void Register()
    {
        lock (s_lockobj)
        {
            s_allClients.Add(OperationContext.Current.GetCallbackChannel<IMyServiceCallback>());
        }
    }

    public static void SendDataToClients(string data)
    {
        HashSet<IMyServiceCallback> tempSet;
        lock (s_lockobj)
        {
            // Copy the set so we don't hold the lock while calling back into clients.
            tempSet = new HashSet<IMyServiceCallback>(s_allClients);
        }
        foreach (IMyServiceCallback cb in tempSet)
        {
            try
            {
                cb.ReceiveData(data);
            }
            catch (Exception)
            {
                lock (s_lockobj)
                {
                    // Drop the dead client and tear down its channel.
                    s_allClients.Remove(cb);
                    ((ICommunicationObject)cb).Abort();
                }
            }
        }
    }
}
In your OnEvent method, you would then make a call similar to this:
MyService.SendDataToClients(mydata);
This uses static data to store the list of clients. If you wanted to do something like segmenting clients by endpoint, you would need a different structure. There is also a potential out-of-order-message and scaling problem with this code if your OnEvent method can be called again while the previous call hasn't completed: for example, if you receive two messages, the first large and the second small, clients later in the HashSet iteration order could be sent the second, smaller message before they have been sent the first. This also won't scale to a large number of clients, because blocking on one client that is timing out holds up messages to all the other clients. You could use something like Tasks to dispatch multiple message deliveries concurrently. If this needs to scale, I would suggest looking at Reactive Extensions for .NET.
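A rough sketch of that Task-based fan-out, assuming it lives in the same MyService class as above (SendDataToClientsParallel is an illustrative name):
// Sketch only: dispatch each callback on the thread pool so one slow client
// cannot hold up delivery to the others. Requires System.Threading.Tasks.
public static void SendDataToClientsParallel(string data)
{
    HashSet<IMyServiceCallback> tempSet;
    lock (s_lockobj)
    {
        tempSet = new HashSet<IMyServiceCallback>(s_allClients);
    }

    foreach (IMyServiceCallback cb in tempSet)
    {
        IMyServiceCallback client = cb;   // capture for the closure
        Task.Run(() =>
        {
            try
            {
                client.ReceiveData(data);
            }
            catch (Exception)
            {
                lock (s_lockobj)
                {
                    s_allClients.Remove(client);
                    ((ICommunicationObject)client).Abort();
                }
            }
        });
    }
}
Note that this trades strict per-client ordering for throughput; if ordering matters, queue messages per client instead of firing a task per delivery.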

WCF - different implementation of shared types

I am trying to design a client/server application that can exchange "commands". The server is processing some work, and I would like the client to be able to, for example, send a "pause" command to the server.
My manager suggested that the best approach would be to create an interface (ICommand, for example) and then a class for each command (Pause, Resume) that inherits from ICommand. We could then simply create a Pause object marked with the [DataContract] attribute and send it over to the server.
For that purpose, I tried to use shared types: I created a separate assembly containing all the [DataContract] types, so that both server and client can reference them.
On the server, we would then have an [OperationContract] that takes a [DataContract] as a parameter and returns a [DataContract] as well, like this:
[ServiceKnownType(typeof(PauseServer))]
[ServiceKnownType(typeof(Resume))]
[ServiceContract]
public interface ITestService
{
    [OperationContract]
    ICommand DoCommand(ICommand command);
}
The problem is that, apart from some properties, we would like to have a method such as Execute(param1, param2) that performs a certain operation; this method would do one thing on the server (pause the process) and something different on the client side (change the status and enable the "Resume" button, for example). Like this:
[DataContract(Namespace = "PauseContract")]
public class Pause
{
    string _param1;
    int _param2;

    public void Execute()
    {
        // DO SOMETHING HERE
    }

    [DataMember]
    public string Param1
    {
        get { return _param1; }
        set { this._param1 = value; }
    }

    [DataMember]
    public int Param2
    {
        get { return _param2; }
        set { this._param2 = value; }
    }
}
In the end, the whole process would look like this:
1) The client wants to pause the process, so it creates a Pause object containing, for example, the ID of the process.
2) This object is passed to the DoCommand() method, which creates a Pause object on the server side and runs its Execute() method with the given parameters.
3) If pausing succeeded, the Pause object is returned to the client (with the process ID and other attributes).
4) When the client gets this response, it knows the process has been paused and runs its own Execute() method on its own Pause object, which would update the GUI and so on.
So my question is: is it somehow possible to have different implementations, on the server and the client, of contracts stored in a common library? Or is this approach wrong in general? From what I have heard, it is not advisable to put behaviour (methods) on [DataContract] types, but I thought it would be OK if I don't mark the methods with the [DataMember] attribute.
Thank You, Jakub.
To be honest, I don't think the ICommand with ServiceKnownType attribute idea works well for commands.
ServiceKnownType is designed to support polymorphism across service boundaries in the context of type properties and not behavior.
Your Pause/Resume scenario could be implemented very easily with the exchange of two distinct request/response DataContract definitions.
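A minimal sketch of what such contracts might look like; type and member names are illustrative, not from the question (it assumes System.ServiceModel and System.Runtime.Serialization):
// Sketch only: explicit request/response contracts instead of a polymorphic ICommand.
[DataContract]
public class PauseRequest
{
    [DataMember]
    public int ProcessId { get; set; }
}

[DataContract]
public class PauseResponse
{
    [DataMember]
    public int ProcessId { get; set; }

    [DataMember]
    public bool Succeeded { get; set; }
}

[ServiceContract]
public interface ITestService
{
    [OperationContract]
    PauseResponse Pause(PauseRequest request);

    // A Resume operation would follow the same request/response pattern.
}
The client then decides locally how to update its UI when a PauseResponse with Succeeded set to true comes back; no behaviour has to travel across the wire.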

How to implement a distributed system for a monitoring platform

I am having some trouble implementing the right patterns for a work project, and I don't want to proceed until I am satisfied with the design strategy.
The project is based on the Genesys Computer Telephony Integration (CTI) platform. Essentially, using an SDK provided by Genesys, a single client subscribes to a number of Genesys services (or TServers) running remotely. The client then registers a whole heap of Directory Numbers (DNs) associated with a particular TServer and waits for call events. When an event occurs, it is captured by the client and stored in a database. A number of other operations are executed, which is irrelevant at this stage. A lot of the communication work is handled by the Genesys ProtocolManager object, so a single event handler captures event data across all clients, which in turn is handled by an EventBrokerService. Here is some simple code to illustrate the connection process, registration of a single DN and the event handler:
EventBrokerService eventBrokerService;

using (var client = new TServerProtocol(
    new Endpoint(new Uri("tcp://tserver01:11234"))))
{
    client.Open();
    eventBrokerService = BrokerServiceFactory.CreateEventBroker(client);
    eventBrokerService.Activate();
    eventBrokerService.Register(this.OnEvent);

    RequestRegisterAddress requestRegisterAddress =
        RequestRegisterAddress.Create("977845873",
                                      RegisterMode.ModeMonitor,
                                      ControlMode.RegisterDefault,
                                      AddressType.DN);
    IMessage response = client.Request(requestRegisterAddress);
}
and then we listen for events (there are many different events):
private void OnEvent(IMessage response)
{
    switch (response.Id)
    {
        case EventACK.MessageId:
            // do something
            break;
        case EventLinkConnected.MessageId:
            var ev = response as EventLinkConnected;
            // Insert the event into the DB and perform some other operations...
            break;
    }
}
The Genesys Platform, comes with another component called a Genesys Configuration server. The config server holds all of the TServer details, including the DN information and a whole bunch of other "objects". It is really just a fancy DBMS. The difference is, you can also subscribe to the config server and register for CRUD events (i.e. CreateEvent, UpdateEvent etc...). Without illustrating the code, the concept is similar to the one above. (i.e. You can register to a number of different Configuration Servers and listen for CRUD events).
For the most part, I have covered the above well and I am satisfied with the implementation so far. What I am trying to achieve is as follows:
I am attempting to implement a distributed system. In a nutshell, the system will consist of two components: Monitoring Service and Dispatcher Service components (all of them Windows services).
Monitoring Service Component
The "Monitoring Service(s)" connect to one or more TServers to monitor for call events
The monitoring service will ALSO subscribe to a dispatcher service
Dispatcher Service Component
The "Dispatcher Service" connects to 1 or more Configuration Servers and waits for CRUD events.
Once an event occurs (e.g. a new DN was added on the config server), the dispatcher captures the creation event and notifies all monitoring service subscribers. The dispatcher also updates a local database, so the DN information is preserved for redundancy (in case the dispatcher cannot connect to a Configuration Server).
The monitoring subscriber to which the newly created DN belongs (identified by unique DBID and TServerID values) will accept the DN and register it for call events (as illustrated in the first code snippet). A monitoring subscriber that does not have the required TServer connection will naturally drop the received request.
The dispatcher can also receive newly added TServers; in that case it decides which monitoring service should open another connection. This is determined by factors such as the number of sessions a monitoring service is currently running or how much memory it is chewing up at the time.
I have come up with some basic concepts and here is some of the code to illustrate what I have done thus far:
The communication method I have chosen is WCF with NetTcpBinding, so for the simple part, I have exposed an interface:
[ServiceContract(Namespace = "urn:Netwatch",
                 SessionMode = SessionMode.Required,
                 CallbackContract = typeof(IDisMonServiceCallback))]
public interface IDisMonService
{
    [OperationContract]
    bool Subscribe(string uid);

    [OperationContract(IsOneWay = true)]
    void Unsubscribe(string uid);
}

[ServiceContract(Namespace = "urn:Netwatch")]
public interface IDisMonServiceCallback
{
    [OperationContract]
    bool DNRegistered(int tServerId, string dnEntry);
}
and on the dispatcher, I have implemented it:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class DisMonService : IDisMonService
{
    private ConcurrentDictionary<string, IDisMonServiceCallback> subscribers =
        new ConcurrentDictionary<string, IDisMonServiceCallback>();

    public IDisMonServiceCallback this[string uid]
    {
        get
        {
            IDisMonServiceCallback callback;
            if (!subscribers.TryGetValue(uid, out callback))
                return null;
            return callback;
        }
    }

    public List<IDisMonServiceCallback> GetAllServiceCallbacks()
    {
        return new List<IDisMonServiceCallback>(subscribers.Values);
    }

    public bool Subscribe(string uid)
    {
        IDisMonServiceCallback callback = GlobalHelper.Callback<IDisMonServiceCallback>();
        if (!subscribers.ContainsKey(uid))
            if (!subscribers.TryAdd(uid, callback))
                return false;
        return true;
    }

    public void Unsubscribe(string uid)
    {
        IDisMonServiceCallback callback;
        if (subscribers.ContainsKey(uid))
            if (!subscribers.TryRemove(uid, out callback))
                return;
        return;
    }
}
As the code above shows, each subscribing monitoring service has a unique identifier, so the right callback context can be retrieved (in case I decide to do some other funky operations later).
This is where my dilemma begins. To cut a long story short, my questions are as follows:
How do I reach the DisMonService class from within the dispatcher service when I need to pass messages on to all subscribers, i.e. a new DN has been added, so call into DisMonService and notify all subscribers? (See the sketch after this list.)
What would be the most suitable pattern for pushing updates to all subscribers from within DisMonService?
At the moment my dummy client connects to the dispatcher and registers itself. Going forward, what is the best way to access the DisMonService class?
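One hedged sketch of an answer to the first question, assuming the dispatcher hosts DisMonService as a singleton and keeps a reference to the instance; the DispatcherHost class and NotifySubscribers method are illustrative names, not part of the question:
using System.ServiceModel;

// Sketch only: the dispatcher process hosts DisMonService as a singleton and calls into it
// directly when a configuration-server event arrives.
public class DispatcherHost
{
    private readonly DisMonService _service = new DisMonService();
    private ServiceHost _host;

    public void Start()
    {
        // Passing the instance is valid because DisMonService declares InstanceContextMode.Single.
        _host = new ServiceHost(_service);
        _host.Open();
    }

    // Called from the configuration-server CRUD event handler, e.g. when a new DN appears.
    public void NotifySubscribers(int tServerId, string dnEntry)
    {
        foreach (IDisMonServiceCallback callback in _service.GetAllServiceCallbacks())
        {
            try
            {
                // Each subscriber decides whether this DN belongs to one of its TServer sessions.
                callback.DNRegistered(tServerId, dnEntry);
            }
            catch (CommunicationException)
            {
                // Subscriber is gone; remove it and abort its channel in a real implementation.
            }
        }
    }
}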
I hope I am not confusing anybody with what I am asking. What I am really after is the best way to implement the above system: any suggestions are welcome, and code samples or snippets would be really helpful.
This is my first post here, so I apologise if I haven't explained myself to the forum's standards.

How to write a WCF service with in-memory persistent storage?

I wrote a WCF service, but the data stored in the service implementation doesn't persist between calls, not even when stored in a static variable. What can I do?
The service implementation is as follows:
public class Storage : IStorage
{
    // Lock on a dedicated object: _data itself is replaced on every Insert,
    // and locking on a null or reassigned reference would fail or be unsafe.
    private static readonly object _dataLock = new object();
    protected static object[] _data = new object[0];

    #region IStorage Members

    public void Insert(object[] data)
    {
        lock (_dataLock)
        {
            _data = _data.Concat(data).ToArray();
        }
    }

    public object[] SelectAll()
    {
        lock (_dataLock)
        {
            return (object[])_data.Clone();
        }
    }

    #endregion
}
The service host is a console application:
static void Main(string[] args)
{
    ServiceHost serviceHost = new ServiceHost(typeof(Storage));
    serviceHost.Open();
    Console.WriteLine("Service running. Press 'Enter' to exit...");
    Console.ReadLine();
}
By default, the WCF InstanceContextMode is set to PerCall, meaning the data used in the service instance is specific to that client for that single method call.
On your implementation, try adding:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)]
public class MyService : IService
This makes the service essentially a singleton.
What you are looking to do is create a durable service:
WCF durable services are WCF services in which the operations can remember the values of private variables (i.e. the state of the service) in between restarts of the service host and/or the client.
Are you wanting to persist the data beyond the lifetime of your ServiceHost instance? If so, then I agree that a durable service makes sense.
However, if you are only wanting to persist data between calls to your WCF service while the service is alive, then a durable service is overkill in my humble opinion. Using static data is perfectly acceptable; it is precisely what I do in my WCF project. In fact, the code that you've shown should work, so something else is going on here.
Is the Main() method actually as you've shown it? If so, then that's a problem. As soon as your WCF-enabled console application starts up, it immediately shuts back down, taking the WCF service with it. You need to have some logic in there to keep the console application alive because the WCF service will only remain 'hosted' while the console application is running.
If this is not the problem, let me know, and I'll add the full code of a simple application that demonstrates how to do this.
Add:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
above your class, and you'll have a service that is a single instance (i.e. the class's fields and properties persist between calls) and allows multiple concurrent connections.
Now you have to take care of thread safety for your reads and writes, e.g. use locks as you've already done (or some other technique).
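As one example of "some other technique", a ReaderWriterLockSlim lets many readers proceed concurrently while writers get exclusive access. This is only a sketch of that pattern applied to the Storage class above (it assumes System.Linq and System.Threading):
// Sketch only: protecting the shared array with a ReaderWriterLockSlim instead of a plain lock.
public class Storage : IStorage
{
    private static readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
    private static object[] _data = new object[0];

    public void Insert(object[] data)
    {
        _rwLock.EnterWriteLock();   // exclusive access for writers
        try
        {
            _data = _data.Concat(data).ToArray();
        }
        finally
        {
            _rwLock.ExitWriteLock();
        }
    }

    public object[] SelectAll()
    {
        _rwLock.EnterReadLock();    // many readers may hold this concurrently
        try
        {
            return (object[])_data.Clone();
        }
        finally
        {
            _rwLock.ExitReadLock();
        }
    }
}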
