We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more of this control can appear in the full system with different functionality).
So we made a plan, where everything looks great, except one thing. Here is a short summary:
Silverlight UserControl can be extended and manipulated via ContentPresenter at the UI and ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
I think this will be okay. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back-end you can skip certain actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. Ok, you add new properties to the inherited ViewModel, then you can bind to them. You have the additional data. When you save base data, you know it's succeeded, then you can start saving your additional data (additional data can be anything, in a different table at back-end for example). Fine, we extended our UserControl and the back-end logic and the original userControl still doesn't know anything about our extension.
But we lose the transaction. For example, we save the base data, but saving the additional data throws an exception; now we have the updated base data but nothing in the additional table. We really don't want that possibility, so I came up with this idea:
One WCF call should wait for the other at the back-end, and once both have arrived, we can begin cross-thread communication between them and, of course, handle the base and the additional data in the same transaction, while the base component still doesn't know anything about the other (it just provides a feature to do something with it, but it doesn't know who is going to do it).
I made a very simplified proof-of-concept solution; this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
using System.ServiceModel;
using System.Threading;

namespace MyService
{
    [ServiceContract]
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1
    {
        // volatile so the busy-wait loop below reliably sees the update made from the other call's thread
        protected volatile bool _sameArrived;
        protected Piece _same;

        [OperationContract]
        public Piece SendPiece(Piece piece)
        {
            _sameArrived = false;
            Mediator.Instance.WaitFor(piece, sameArrived);
            // block this call until the matching piece arrives on another call
            while (!_sameArrived)
            {
                Thread.Sleep(100);
            }
            return _same;
        }

        protected void sameArrived(Piece piece)
        {
            _same = piece;
            _sameArrived = true;
        }
    }
}
Piece (entity)
using System.Runtime.Serialization;

namespace MyService
{
    [DataContract]
    public class Piece
    {
        [DataMember]
        public long ID { get; set; }

        [DataMember]
        public string SameIdentifier { get; set; }
    }
}
Mediator
using System;
using System.Collections.Generic;
using System.Linq;

namespace MyService
{
    public sealed class Mediator
    {
        private static Mediator _instance;
        private static readonly object syncRoot = new Object();
        private readonly List<Tuple<Piece, Action<Piece>>> _waitsFor;

        private Mediator()
        {
            _waitsFor = new List<Tuple<Piece, Action<Piece>>>();
        }

        public static Mediator Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (syncRoot)
                    {
                        // re-check inside the lock so only one instance is ever created
                        if (_instance == null)
                        {
                            _instance = new Mediator();
                        }
                    }
                }
                return _instance;
            }
        }

        public void WaitFor(Piece piece, Action<Piece> callback)
        {
            lock (_waitsFor)
            {
                var waiter = _waitsFor.FirstOrDefault(i => i.Item1.SameIdentifier == piece.SameIdentifier);
                if (waiter != null)
                {
                    // the matching piece is already waiting: release both callers
                    _waitsFor.Remove(waiter);
                    waiter.Item2(piece);
                    callback(waiter.Item1);
                }
                else
                {
                    // nobody is waiting for this identifier yet, so queue it
                    _waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
                }
            }
        }
    }
}
And the client side code
namespace MyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Client c1 = new Client(new Piece()
            {
                ID = 1,
                SameIdentifier = "customIdentifier"
            });
            Client c2 = new Client(new Piece()
            {
                ID = 2,
                SameIdentifier = "customIdentifier"
            });

            c1.SendPiece();
            Console.WriteLine("Press return to send the second piece");
            Console.ReadLine();
            c2.SendPiece();
            Console.ReadLine();
        }
    }

    class Client
    {
        protected Piece _piece;
        protected Service1Client _service;

        public Client(Piece piece)
        {
            _piece = piece;
            _service = new Service1Client();
        }

        public void SendPiece()
        {
            Console.WriteLine("{0} send begins", _piece.ID);
            _service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
        }

        protected void sendPieceCallback(IAsyncResult result)
        {
            Piece returnedPiece = _service.EndSendPiece(result);
            Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
        }
    }
}
So, is it a good idea to wait for another WCF call (which may or may not be invoked, so in a real example it would be more complex) and process them together with cross-thread communication? Or not, and should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, the Managed Extensibility Framework.
For using MEF with silverlight see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
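A minimal sketch of how such an extension point could look with MEF; the ISaveExtension contract and the class names here are made up for illustration, and the composition call noted in the comment is only one way to wire it up in Silverlight:
using System.Collections.Generic;
using System.ComponentModel.Composition;

// Hypothetical extension contract for the extra save steps of the base control.
public interface ISaveExtension
{
    void AfterBaseSave(long baseId);
}

// A project-specific extension, discovered by MEF without touching the base code.
[Export(typeof(ISaveExtension))]
public class AdditionalDataSaveExtension : ISaveExtension
{
    public void AfterBaseSave(long baseId)
    {
        // save the additional data here
    }
}

public class BaseSaveCoordinator
{
    // MEF fills this with every exported ISaveExtension found in the catalogs,
    // e.g. after calling CompositionInitializer.SatisfyImports(this) in Silverlight.
    [ImportMany]
    public IEnumerable<ISaveExtension> Extensions { get; set; }

    public void Save(long baseId)
    {
        // ... save the base data first ...
        foreach (var extension in Extensions)
        {
            extension.AfterBaseSave(baseId);
        }
    }
}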
I would not wait for 2 WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge, that two services should be called together, in the client
I would call a single service that aggregates the two services.
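A rough sketch of what that aggregating operation could look like, assuming hypothetical SaveBaseData/SaveAdditionalData helpers and placeholder data contracts; wrapping both saves in a TransactionScope keeps them atomic:
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IAggregatedSaveService
{
    [OperationContract]
    void SaveAll(BaseData baseData, AdditionalData additionalData);
}

public class AggregatedSaveService : IAggregatedSaveService
{
    public void SaveAll(BaseData baseData, AdditionalData additionalData)
    {
        // both saves commit or roll back together
        using (var scope = new TransactionScope())
        {
            SaveBaseData(baseData);                 // existing base logic
            if (additionalData != null)
            {
                SaveAdditionalData(additionalData); // project-specific extension
            }
            scope.Complete();
        }
    }

    private void SaveBaseData(BaseData data) { /* ... */ }
    private void SaveAdditionalData(AdditionalData data) { /* ... */ }
}

// Placeholder contracts for the two payloads.
public class BaseData { }
public class AdditionalData { }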
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.
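As a very rough illustration of that idea (the names are made up, not an existing WCF mechanism), a loosely typed composite request could look something like this, with the server routing each part to a registered handler and committing everything in one transaction:
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Transactions;

[DataContract]
public class PartialRequest
{
    [DataMember] public string HandlerKey { get; set; } // which server-side handler should process this part
    [DataMember] public string Payload { get; set; }    // loosely typed body, e.g. serialized XML
}

[DataContract]
public class CompositeRequest
{
    [DataMember] public List<PartialRequest> Parts { get; set; }
}

[ServiceContract]
public interface ICompositeService
{
    [OperationContract]
    void Process(CompositeRequest request);
}

public class CompositeService : ICompositeService
{
    // handlers registered per key; extensions add their own without the base knowing about them
    private readonly Dictionary<string, System.Action<string>> _handlers =
        new Dictionary<string, System.Action<string>>();

    public void Process(CompositeRequest request)
    {
        using (var scope = new TransactionScope())
        {
            foreach (var part in request.Parts)
            {
                _handlers[part.HandlerKey](part.Payload);
            }
            scope.Complete();
        }
    }
}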
Related
I have an abstract class called HttpHelper it has basic methods like, GET, POST, PATCH, PUT
What I need to achieve is this:
Store the url, time & date in the database each time the function is called GET, POST, PATCH, PUT
I don't want to write directly to the database each time the functions are called (that would be slow) but to put the data somewhere (like a static queue/in-memory cache) which would be faster and non-blocking, and have a long-running background process that looks into this cache-like storage and then stores the values in the database.
I have no clear idea how to do this, but the main purpose is to get a count of calls per hour or day, by domain, resource and URL query.
I'm wondering if I could do the following:
Create a static class which uses ConcurrentQueue<T> to store data and call that class in each function inside HttpHelper class
Create a background task similar to this: Asp.Net core long running/background task
Or use Hangfire, but that might be too much for simple task
Or is there a built-in method for this in .NET Core?
Both Hangfire and background tasks would do the trick as consumers of the queue items.
Hangfire predates .NET Core's long-running background tasks, so go with long-running tasks for .NET Core implementations.
There is a "but" here, though.
How important is it to you that you never miss a call? If it is important, then neither option can help you.
The queue or whatever static construct you use will be lost the moment your application crashes, the machine restarts, or the application pool is recycled.
You need to consider some kind of external queuing mechanism like RabbitMQ with persistence turned on.
You can also append to a file, but the reads/writes might add some delay as well.
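For the in-process variant, a minimal sketch (assuming ASP.NET Core's BackgroundService and a hypothetical RequestLogEntry type) could look like this; remember the queue contents are lost if the process dies:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical log entry for one HTTP call.
public class RequestLogEntry
{
    public string Url { get; set; }
    public string Method { get; set; }
    public DateTime Timestamp { get; set; }
}

// Static, non-blocking collector the HttpHelper methods can call.
public static class RequestLogQueue
{
    public static readonly ConcurrentQueue<RequestLogEntry> Entries = new ConcurrentQueue<RequestLogEntry>();

    public static void Add(string method, string url) =>
        Entries.Enqueue(new RequestLogEntry { Method = method, Url = url, Timestamp = DateTime.UtcNow });
}

// Long-running consumer that drains the queue and writes batches to the database.
public class RequestLogWriter : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            while (RequestLogQueue.Entries.TryDequeue(out var entry))
            {
                // insert 'entry' into the database (batched in a real implementation)
            }
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

// registration in Startup/Program: services.AddHostedService<RequestLogWriter>();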
I do not know how complex your problem is but I would consider two solutions.
The first is calling an async insert method, which will not block your main thread but start a task. You can return the response without waiting for your log entry to be appended to the database. Since you want this implemented in only some methods, I would do it using attributes and middleware.
Simplified example:
public IActionResult SomePostMethod()
{
    LogActionAsync("This Is Post Method");
    return StatusCode(201);
}

public static Task LogActionAsync(string someParameter)
{
    return Task.Run(() => {
        // Communicate with database (X ms)
    });
}
A better solution is creating a buffer that does not hit the database on every call but only when it is filled or at an interval. It would look like this:
public IActionResult SomePostMethod()
{
    APILog.Log(new APILog.Item() { Date = DateTime.Now, Item1 = "Something" });
    return StatusCode(201);
}

public partial class APILog
{
    private static List<APILog.Item> _buffer = null;
    private const int _msTimeout = 60000; // Timeout between updates
    private static object _updateLock = new object();

    static APILog()
    {
        StartDBUpdateLoopAsync();
    }

    private static void StartDBUpdateLoopAsync()
    {
        // check if it has been started already and other stuff
        Task.Run(() => {
            while (true) // Do not use true but some other expression that is telling you if your application is running.
            {
                Thread.Sleep(_msTimeout);
                lock (_updateLock)
                {
                    if (_buffer == null || _buffer.Count == 0)
                        continue;
                    foreach (APILog.Item item in _buffer)
                    {
                        // Import into database here
                    }
                    _buffer.Clear(); // do not import the same items again on the next pass
                }
            }
        });
    }

    public static void Log(APILog.Item item)
    {
        lock (_updateLock)
        {
            if (_buffer == null)
                _buffer = new List<APILog.Item>();
            _buffer.Add(item);
        }
    }
}

public partial class APILog
{
    public class Item
    {
        public string Item1 { get; set; }
        public DateTime Date { get; set; }
    }
}
Also, in this second example I would not call APILog.Log() by hand each time, but use middleware in combination with an attribute.
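A rough sketch of what such middleware might look like (only the middleware piece, reusing the APILog class above; the attribute-based filtering is omitted):
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Minimal ASP.NET Core middleware that records each request into the APILog buffer.
public class ApiLogMiddleware
{
    private readonly RequestDelegate _next;

    public ApiLogMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // enqueue first so logging never waits on the database
        APILog.Log(new APILog.Item
        {
            Date = DateTime.Now,
            Item1 = context.Request.Path.Value + context.Request.QueryString.Value
        });
        await _next(context);
    }
}

// registration in Startup.Configure: app.UseMiddleware<ApiLogMiddleware>();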
I have a simple service interface I am using to synchronize data with a server via HTTP. The service interface has a method to start and stop the synchronization process. The idea is to start the synchronization process after the user signs in, and stop the synchronization at the end of the application before the user signs out. The synchronization service will check for new messages every few minutes, and then notify the ViewModel(s) of new/changed data using the MvxMessenger plugin.
What is the recommended way to ensure the synchronization service lives for the duration of the app? I am currently using a custom IMvxAppStart which registers the service interface as a singleton, and then holds a static reference to the service interface. Is that enough to keep the service alive for the lifetime of the app, or is there a better way?
public class App : MvxApplication
{
    public override void Initialize()
    {
        ...
        RegisterAppStart(new CustomAppStart());
    }
}

public class CustomAppStart : MvxNavigatingObject, IMvxAppStart
{
    public static ISyncClient SynchronizationClient { get; set; }

    public void Start(object hint = null)
    {
        SynchronizationClient = Mvx.Resolve<ISyncClient>();
        ShowViewModel<SignInViewModel>();
    }
}

public interface ISyncClient
{
    void StartSync();
    void StopSync();
    bool IsSyncActive { get; }
}
You don't need a static property for this. When you register the interface as a singleton, the IoC container does the work for you. Example: in one of our apps we needed a state property with important data for the whole lifetime of the app.
The models that need this state just use the following code snippet:
protected IApplicationState AppState
{
    get { return _appstate ?? (_appstate = Mvx.GetSingleton<IApplicationState>()); }
}
private IApplicationState _appstate;
But: you can also do it with a static property; in that case, though, you don't need a singleton registration in the IoC container.
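For completeness, a minimal sketch of the registration side, assuming a hypothetical SyncClient implementation of ISyncClient (exact API names vary between MvvmCross versions):
public class App : MvxApplication
{
    public override void Initialize()
    {
        // register once as a singleton; every Mvx.Resolve<ISyncClient>() then returns
        // the same instance, which the IoC container keeps alive for the app's lifetime
        Mvx.LazyConstructAndRegisterSingleton<ISyncClient, SyncClient>();

        RegisterAppStart(new CustomAppStart());
    }
}

public class CustomAppStart : MvxNavigatingObject, IMvxAppStart
{
    public void Start(object hint = null)
    {
        // no static property needed; just resolve and start it
        Mvx.Resolve<ISyncClient>().StartSync();
        ShowViewModel<SignInViewModel>();
    }
}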
So I am only a few days into learning about wcf services, specifically duplex, and I am starting with a test app. The goal is to have a Service that has an internal (static?) class which stores variables, and a Client that fetches for those variables.
Currently I have two variables in the Storage class, one which is a list of Subscribers (ObservableCollection<IMyContractCallBack>) and one which is an ObservableCollection<string>, where each string gets sent in the callback method to the client.
I would like to be able to have the client Fetch (which first Subscribes if not already, by adding its context to the collection on the server side) the strings in the collection on the server side. That part works as expected. However, I would also like to Push a string from the server to every client in the subscription list, as well as Add strings to the collection of strings. That's where my issues crop up.
Anytime I Fetch, it adds "test1..." and "test2..." to the string list and sends them, then the client updates a TextBlock in the UI (wpa), so if I Fetch twice I'll have "test1...", "test2...", "test1...", "test2..." because right now there's no checking for duplicates. That proves the collection gets updated and remembered on the server side from Fetch to Fetch. However, when I try to Add or Send a given text, all variables are forgotten, so the subscriber list is null and the list-to-add-to is empty. Yet when I then Fetch again, the old list is back (now with 6 things: test1..., test2..., etc.).
I have this before the class
[ServiceBehavior(InstanceContextMode= InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Single)]
and I also tried a Singleton context mode to no avail. Changing the ConcurrencyMode to Multiple doesn't do anything different either. Any ideas as to why my static data is being reset only when internal commands come from the server itself?
Here is the code for my Service:
namespace WcfService3
{
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)]
    public class Service1 : IService1
    {
        public static event Action NullContext;
        public static ObservableCollection<IMyContractCallBack> Subscriptions;

        public void NormalFunction()
        {
            //Only sends to Subs that are STILL Open
            //(iterate over a snapshot so removing closed users doesn't invalidate the enumerator)
            foreach (IMyContractCallBack user in Subscriptions.ToList())
            {
                //Removes the Closed users, because they are hanging around from last session
                if (((ICommunicationObject)user).State != CommunicationState.Opened)
                {
                    Subscriptions.Remove(user);
                }
                else
                {
                    ObservableCollection<string> holder = Storage.GetList();
                    foreach (string str in holder)
                    {
                        user.CallBackFunction(str);
                    }
                }
            }
        }

        public static void Send(string str)
        {
            try
            {
                foreach (IMyContractCallBack user in Subscriptions)
                {
                    user.CallBackFunction(str);
                }
            }
            catch
            {
                //For some reason 'Subscriptions' is always null
                NullContext.Invoke();
            }
        }

        public static void Add(string str)
        {
            //For some reason 'SendList' is always null here, too
            Storage.AddToList(str);
            if (Subscriptions != null)
            {
                //For same reason 'Subscriptions' is always null
                foreach (IMyContractCallBack user in Subscriptions)
                {
                    user.CallBackFunction(str);
                }
            }
        }

        public void Subscribe()
        {
            //Adds the callback client to a list of Subscribers
            IMyContractCallBack callback = OperationContext.Current.GetCallbackChannel<IMyContractCallBack>();
            if (Subscriptions == null)
            {
                Subscriptions = new ObservableCollection<IMyContractCallBack>();
            }
            if (!Subscriptions.Contains(callback))
            {
                Subscriptions.Add(callback);
            }
        }
    }
}
and here is my code for the Storage class:
namespace WcfService3
{
    public static class Storage
    {
        public static readonly ObservableCollection<string> SendList = new ObservableCollection<string>();
        public static IMyContractCallBack callback;

        public static ObservableCollection<string> GetList()
        {
            if (SendList.Count == 0)
            {
                AddToList("Test1...");
                AddToList("Test2...");
            }
            return SendList;
        }

        public static void AddToList(string str)
        {
            SendList.Add(str);
        }
    }
}
I can provide more code if needed.
Are you using the ThreadStatic attribute anywhere? (Just do a quick search.) That's a real long shot and probably not your issue.
You probably have a threading issue. Do all your clients connect at the same time (I really mean in close succession)? If yes, you are going to have threading issues with this code in your Subscribe method:
if (Subscriptions == null)
{
    Subscriptions = new ObservableCollection<IMyContractCallBack>();
}
You should constrain access to your Subscriptions collection more tightly so you can see who modifies it and when, and use Console statements to figure out where you're going wrong.
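As a rough sketch (not your exact contract), guarding the collection with a lock and initializing it eagerly removes the race where two clients both see null and create separate collections:
public class Service1 : IService1
{
    // eager initialization: no null check needed, so there is no window for a race
    private static readonly ObservableCollection<IMyContractCallBack> Subscriptions =
        new ObservableCollection<IMyContractCallBack>();
    private static readonly object SubscriptionsLock = new object();

    public void Subscribe()
    {
        IMyContractCallBack callback = OperationContext.Current.GetCallbackChannel<IMyContractCallBack>();
        lock (SubscriptionsLock)
        {
            if (!Subscriptions.Contains(callback))
            {
                Subscriptions.Add(callback);
            }
        }
    }

    // every other method that touches Subscriptions should take the same lock
}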
I have an application (say App1) which is connected to another application (App2) via .NET Remoting. App2 acts as a server. If App2 goes down, App1 will not be able to pull data from App2. We are planning to run another instance of App2 (say App2a) on another machine so that if App2 goes down, App1 automatically takes the data from App2a. When App2 runs again, App1 will need to take the data from App2. The failover mechanism is not implemented yet. Please suggest a design pattern so that in future any number of server instances can be added for App1 to pull data from.
Thanks
The closest design pattern that I can think of is the Chain of Responsibility pattern.
The idea is that:
You build a chain of objects (servers)
Let the object (server) handle the request
If it is unable to do so, pass the request down the chain
Code:
// Server interface
public interface IServer
{
    object FetchData(object param);
}

public class ServerProxyBase : IServer
{
    // Successor.
    // Alternate server to contact if the current instance fails.
    public ServerProxyBase AlternateServerProxy { get; set; }

    // Interface
    public virtual object FetchData(object param)
    {
        if (AlternateServerProxy != null)
        {
            return AlternateServerProxy.FetchData(param);
        }
        throw new NotImplementedException("Unable to recover");
    }
}

// Server implementation
public class ServerProxy : ServerProxyBase
{
    private readonly string _address;

    public ServerProxy(string address)
    {
        // address of the actual server this proxy talks to
        _address = address;
    }

    // Interface implementation
    public override object FetchData(object param)
    {
        try
        {
            // Contact actual server and return data
            // Remoting/WCF code in here...
            throw new NotImplementedException("placeholder so the sketch compiles; replace with the real remote call");
        }
        catch
        {
            // If fail to contact server,
            // run base method (attempt to recover)
            return base.FetchData(param);
        }
    }
}

public class Client
{
    private IServer _serverProxy;

    public Client()
    {
        // Wire up main server, and its failover/retry servers
        _serverProxy = new ServerProxy("mainserver:2712")
        {
            AlternateServerProxy = new ServerProxy("failover1:2712")
            {
                AlternateServerProxy = new ServerProxy("failover2:2712")
            }
        };
    }
}
This example wires up a chain of 3 servers (mainserver, failover1, failover2).
A call to FetchData() will always attempt mainserver first.
When it fails, it'll then attempt failover1, followed by failover2, before finally throwing an exception.
If it were up to me, I wouldn't mind using something quick and dirty such as:
public class FailoverServerProxy : IServer
{
    private readonly List<IServer> _servers = new List<IServer>();

    public FailoverServerProxy RegisterServer(string address)
    {
        _servers.Add(new ServerProxy(address));
        return this;
    }

    // Implement interface
    public object FetchData(object param)
    {
        foreach (var server in _servers)
        {
            try
            {
                return server.FetchData(param);
            }
            catch
            {
                // Failed. Continue to next server in list
                continue;
            }
        }
        // No more servers to try. No longer able to recover
        throw new Exception("Unable to fetch data");
    }
}

public class Client
{
    private IServer _serverProxy;

    public Client()
    {
        // Wire up main server, and its failover/retry servers
        _serverProxy = new FailoverServerProxy()
            .RegisterServer("mainserver:2712")
            .RegisterServer("failover1:2712")
            .RegisterServer("failover2:2712");
    }
}
I think it borrows ideas from other patterns such as Facade, Strategy and Proxy.
But my motivations are simply to:
Make the least impact on existing classes (ie, No extra property in the Server class)
Separation of concerns:
Central class for the server's failover/recovery logic.
Keep the failover/recovery's implementation hidden from the Client/Server.
I am trying to migrate my .NET Remoting code to WCF but I'm finding it difficult. Can someone help me migrate this simple Remoting-based program below to use WCF? The program implements a simple publisher/subscriber pattern where a single TemperatureProviderProgram publishes to many TemperatureSubcriberPrograms that subscribe to the TemperatureProvider.
To run the programs:
Copy the TemperatureProviderProgram and TemperatureSubcriberProgram into separate console application projects.
Copy the remaining classes and interfaces into a common Class Library project, then add a reference to the System.Runtime.Remoting library.
Add a reference to the Class Library project from the console app projects.
Compile and run one TemperatureProviderProgram and multiple TemperatureSubcriberPrograms.
Please note no IIS or xml should be used. Thanks in advance.
public interface ITemperatureProvider
{
    void Subcribe(ObjRef temperatureSubcriber);
}

[Serializable]
public sealed class TemperatureProvider : MarshalByRefObject, ITemperatureProvider
{
    private readonly List<ITemperatureSubcriber> _temperatureSubcribers = new List<ITemperatureSubcriber>();
    private readonly Random randomTemperature = new Random();

    public void Subcribe(ObjRef temperatureSubcriber)
    {
        ITemperatureSubcriber tempSubcriber = (ITemperatureSubcriber)RemotingServices.Unmarshal(temperatureSubcriber);
        lock (_temperatureSubcribers)
        {
            _temperatureSubcribers.Add(tempSubcriber);
        }
    }

    public void Start()
    {
        Console.WriteLine("TemperatureProvider started...");
        BinaryServerFormatterSinkProvider provider = new BinaryServerFormatterSinkProvider();
        provider.TypeFilterLevel = System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
        TcpServerChannel tcpChannel = new TcpServerChannel("TemperatureProviderChannel", 5001, provider);
        ChannelServices.RegisterChannel(tcpChannel, false);
        RemotingServices.Marshal(this, "TemperatureProvider", typeof(ITemperatureProvider));
        while (true)
        {
            double nextTemp = randomTemperature.NextDouble();
            lock (_temperatureSubcribers)
            {
                foreach (var item in _temperatureSubcribers)
                {
                    try
                    {
                        item.OnTemperature(nextTemp);
                    }
                    catch (SocketException)
                    { }
                    catch (RemotingException)
                    { }
                }
            }
            Thread.Sleep(200);
        }
    }
}

public interface ITemperatureSubcriber
{
    void OnTemperature(double temperature);
}

[Serializable]
public sealed class TemperatureSubcriber : MarshalByRefObject, ITemperatureSubcriber
{
    private ObjRef _clientRef;
    private readonly Random portGen = new Random();

    public void OnTemperature(double temperature)
    {
        Console.WriteLine(temperature);
    }

    public override object InitializeLifetimeService()
    {
        return null;
    }

    public void Start()
    {
        BinaryServerFormatterSinkProvider provider = new BinaryServerFormatterSinkProvider();
        provider.TypeFilterLevel = System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
        int port = portGen.Next(1, 65535);
        TcpServerChannel tcpChannel = new TcpServerChannel(string.Format("TemperatureSubcriber_{0}", Guid.NewGuid()), port, provider);
        ChannelServices.RegisterChannel(tcpChannel, false);
        ITemperatureProvider p1 = (ITemperatureProvider)RemotingServices.Connect(typeof(ITemperatureProvider), "tcp://localhost:5001/TemperatureProvider");
        _clientRef = RemotingServices.Marshal(this, string.Format("TemperatureSubcriber_{0}_{1}.rem", Environment.MachineName, Guid.NewGuid()));
        p1.Subcribe(_clientRef);
    }
}

public class TemperatureProviderProgram
{
    static void Main(string[] args)
    {
        TemperatureProvider tp = new TemperatureProvider();
        tp.Start();
    }
}

public class TemperatureSubcriberProgram
{
    static void Main(string[] args)
    {
        Console.WriteLine("Press any key to start TemperatureSubcriber.");
        Console.ReadLine();
        TemperatureSubcriber ts = new TemperatureSubcriber();
        ts.Start();
        Console.ReadLine();
    }
}
In WCF, with a "push" from the server you're really talking about duplex comms; the MarshalByRefObject is largely redundant here (AFAIK). The page here discusses various scenarios, including duplex/callbacks.
If the issue is xml (for some philosophical reason), then simply using NetDataContractSerializer rather than DataContractSerializer might help.
The other approach is to have the clients "pull" data periodically; this works well if you need to support basic http, etc.
What it sounds like you want to do is use WCF NetTcpBinding with Callbacks.
Take a look at this: http://www.codeproject.com/KB/WCF/publisher_subscriber.aspx
"Learning WCF" by Michele Bustamante is also very good. You can get Chpt1 for VS2008 at her website along with the code for the book. Chpt1 will explain/demo setting up connections and such. She also has downloadable sample code. One of the Samples is a DuplexPublishSubscribe.
You will need to modify your logic a bit if you want to migrate this app to WCF: you will need to have clients pull data from the service at regular intervals.
You will also need a Windows service or application to host the WCF service, like the console you are using in the previous code.
Well I build real time systems so polling is not an option - I need to push data.
Also, I am finding there is no WCF equivalent of System.Runtime.Remoting.ObjRef! This is an extremely useful type that encapsulates a service endpoint and can be serialised and passed around the network to other remoting services.
Think I’ll be sticking with good old remoting until an ObjRef equivalent is introduced.
Yes, that is true, with just one correction:
ObjRefs are created automatically when any MarshalByRefObject-derived object travels outside its appdomain.
So in this case your ITemperatureProvider interface's Subcribe method should take an ITemperatureSubcriber instead of an ObjRef.
Then on the client side just call p1.Subcribe(this), and the remoting layer will generate the ObjRef from the object, which will be serialized and sent (passing it by reference).
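In other words (a minimal sketch against the original remoting code), the contract and the client call would become:
public interface ITemperatureProvider
{
    // take the subscriber interface directly; remoting builds the ObjRef for you
    void Subcribe(ITemperatureSubcriber temperatureSubcriber);
}

// inside TemperatureSubcriber.Start(), after connecting:
// p1.Subcribe(this);  // 'this' is marshalled by reference automatically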