I'm nearing the end of a project for which I'm trying to use DDD, but have discovered a glaring bug that I'm not sure how to easily solve.
Here is my entity - I've reduced it for simplicity:
public class Contribution : Entity
{
protected Contribution()
{
this.Parts = new List<ContributionPart>();
}
internal Contribution(Guid id)
{
this.Id = id;
this.Parts = new List<ContributionPart>();
}
public Guid Id { get; private set; }
protected virtual IList<ContributionPart> Parts { get; private set; }
public void UploadParts(string path, IEnumerable<long> partLengths)
{
if (this.Parts.Count > 0)
{
throw new InvalidOperationException("Parts have already been uploaded.");
}
long startPosition = 0;
int partNumber = 1;
foreach (long partLength in partLengths)
{
this.Parts.Add(new ContributionPart(this.Id, partNumber, partLength));
this.Commands.Add(new UploadContributionPartCommand(this.Id, partNumber, path, startPosition, partLength));
startPosition += partLength;
partNumber++;
}
}
public void SetUploadResult(int partNumber, string etag)
{
if (etag == null)
{
throw new ArgumentNullException(nameof(etag));
}
ContributionPart part = this.Parts.SingleOrDefault(p => p.PartNumber == partNumber);
if (part == null)
{
throw new ContributionPartNotFoundException(this.Id, partNumber);
}
part.SetUploadResult(etag);
if (this.Parts.All(p => p.IsUploaded))
{
IEnumerable<PartUploadedResult> results = this.Parts.Select(p => new PartUploadedResult(p.PartNumber, p.ETag));
this.Events.Add(new ContributionUploaded(this.Id, results));
}
}
}
My bug occurs in the SetUploadResult method. Basically, multiple threads are performing uploads concurrently, and then call SetUploadResult at the end of the upload. But because the entity was loaded a few seconds beforehand, each thread will be calling SetUploadResult on a different instance of the entity, and so the check if (this.Parts.All(p => p.IsUploaded)) will never evaluate to true.
I'm not sure how to easily resolve this. The idea behind adding multiple UploadContributionPartCommands to the Commands collection was so that each ContributionPart could be uploaded in parallel - my CommandBus ensures this - but with each part uploaded in parallel, it causes problems for my entity logic.
I think you can refactor the Contribution so that it does not handle SetUploadResult itself. That decouples the Contribution entity and isolates the side effects of SetUploadResult, keeping the technical concern out of the Contribution domain model.
Create a dispatcher class that contains what the SetUploadResult is doing.
Once the Contribution entity is finished carrying out its logic, the thread of execution will return to the application service. It is at this point that the events from the entity can be fed into the dispatcher.
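As a rough illustration only (the IDomainEvent, IEventHandler and EventDispatcher names below are my own, not types from the question), the dispatcher could be as simple as a type-to-handler map that the application service feeds the entity's events into:
using System;
using System.Collections.Generic;
public interface IDomainEvent { }
public interface IEventHandler<TEvent> where TEvent : IDomainEvent
{
    void Handle(TEvent domainEvent);
}
// The application service collects the entity's Events after its method returns
// and feeds each one into the dispatcher.
public class EventDispatcher
{
    private readonly Dictionary<Type, List<Action<IDomainEvent>>> _handlers =
        new Dictionary<Type, List<Action<IDomainEvent>>>();
    public void Register<TEvent>(IEventHandler<TEvent> handler) where TEvent : IDomainEvent
    {
        List<Action<IDomainEvent>> list;
        if (!_handlers.TryGetValue(typeof(TEvent), out list))
        {
            list = new List<Action<IDomainEvent>>();
            _handlers[typeof(TEvent)] = list;
        }
        list.Add(e => handler.Handle((TEvent)e));
    }
    public void Dispatch(IDomainEvent domainEvent)
    {
        List<Action<IDomainEvent>> list;
        if (_handlers.TryGetValue(domainEvent.GetType(), out list))
        {
            foreach (var handle in list)
            {
                handle(domainEvent);
            }
        }
    }
}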
If the handlers are long-running processes, you can add them to a collection of tasks and run them asynchronously, then await until all the tasks are done. You can search on SO for how to do this.
var results = await Task.WhenAll(task1, task2,...taskN);
If several threads may call the SetUploadResult method simultaneously and you have a race condition, you should protect the critical section using a synchronization mechanism such as a lock: https://msdn.microsoft.com/en-us/library/c5kehkcz.aspx.
If you make the lock field static it will be shared across all instances of your entity type, e.g.:
private static readonly object _lock = new object();
public void SetUploadResult(int partNumber, string etag)
{
if (etag == null)
{
throw new ArgumentNullException(nameof(etag));
}
ContributionPart part = this.Parts.SingleOrDefault(p => p.PartNumber == partNumber);
if (part == null)
{
throw new ContributionPartNotFoundException(this.Id, partNumber);
}
part.SetUploadResult(etag);
lock (_lock) //Only one thread at a time can enter this critical section.
//The second thread will wait here until the first thread leaves the critical section.
{
if (this.Parts.All(p => p.IsUploaded))
{
IEnumerable<PartUploadedResult> results = this.Parts.Select(p => new PartUploadedResult(p.PartNumber, p.ETag));
this.Events.Add(new ContributionUploaded(this.Id, results));
}
}
}
I'm doing a small project to map a network (routers only) using SNMP. In order to speed things up, I'm trying to have a pool of threads responsible for doing the jobs I need, apart from the first job, which is done by the main thread.
At this time I have two jobs; one takes a parameter, the other doesn't:
UpdateDeviceInfo(NetworkDevice nd)
UpdateLinks() *not defined yet
What I'm trying to achieve is to have those worker threads waiting for a job to appear on a Queue<Action>, and wait while it is empty. The main thread will add the first job and then wait for all workers, which might add more jobs, to finish before adding the second job and waking up the sleeping threads.
My problem/questions are:
How to define the Queue<Action> so that I can insert the methods and the parameters, if any. If that's not possible, I could make all functions accept the same parameter.
How to launch the worker threads so they run indefinitely. I'm not sure where I should put the for(;;).
This is my code so far:
public enum DatabaseState
{
Empty = 0,
Learning = 1,
Updating = 2,
Stable = 3,
Exiting = 4
};
public class NetworkDB
{
public Dictionary<string, NetworkDevice> database;
private Queue<Action<NetworkDevice>> jobs;
private string _community;
private string _ipaddress;
private Object _statelock = new Object();
private DatabaseState _state = DatabaseState.Empty;
private readonly int workers = 4;
private Object _threadswaitinglock = new Object();
private int _threadswaiting = 0;
public Dictionary<string, NetworkDevice> Database { get => database; set => database = value; }
public NetworkDB(string community, string ipaddress)
{
_community = community;
_ipaddress = ipaddress;
database = new Dictionary<string, NetworkDevice>();
jobs = new Queue<Action<NetworkDevice>>();
}
public void Start()
{
NetworkDevice nd = SNMP.GetDeviceInfo(new IpAddress(_ipaddress), _community);
if (nd.Status > NetworkDeviceStatus.Unknown)
{
database.Add(nd.Id, nd);
_state = DatabaseState.Learning;
nd.Update(this); // The first job is done by the main thread
for (int i = 0; i < workers; i++)
{
Thread t = new Thread(JobRemove);
t.Start();
}
lock (_statelock)
{
if (_state == DatabaseState.Learning)
{
Monitor.Wait(_statelock);
}
}
lock (_statelock)
{
if (_state == DatabaseState.Updating)
{
Monitor.Wait(_statelock);
}
}
foreach (KeyValuePair<string, NetworkDevice> n in database)
{
using (System.IO.StreamWriter file = new System.IO.StreamWriter(n.Value.Name + ".txt"))
{
file.WriteLine(n);
}
}
}
}
public void JobInsert(Action<NetworkDevice> func, NetworkDevice nd)
{
lock (jobs)
{
jobs.Enqueue(func); // note: nd is not captured here - passing it along is exactly the open question
if (jobs.Count == 1)
{
// wake up any blocked dequeue
Monitor.Pulse(jobs);
}
}
}
public void JobRemove()
{
Action<NetworkDevice> item;
lock (jobs)
{
while (jobs.Count == 0)
{
lock (_threadswaitinglock)
{
_threadswaiting += 1;
if (_threadswaiting == workers)
Monitor.Pulse(_statelock);
}
Monitor.Wait(jobs);
}
lock (_threadswaitinglock)
{
_threadswaiting -= 1;
}
item = jobs.Dequeue();
item.Invoke();
}
}
public bool NetworkDeviceExists(NetworkDevice nd)
{
try
{
Monitor.Enter(database);
if (database.ContainsKey(nd.Id))
{
return true;
}
else
{
database.Add(nd.Id, nd);
Action<NetworkDevice> action = new Action<NetworkDevice>(UpdateDeviceInfo);
jobs.Enqueue(action);
return false;
}
}
finally
{
Monitor.Exit(database);
}
}
//Job1 - Learning -> Update device info
public void UpdateDeviceInfo(NetworkDevice nd)
{
nd.Update(this);
try
{
Monitor.Enter(database);
nd.Status = NetworkDeviceStatus.Self;
}
finally
{
Monitor.Exit(database);
}
}
//Job2 - Updating -> After Learning, create links between neighbours
private void UpdateLinks()
{
}
}
Your best bet seems like using a BlockingCollection instead of the Queue class. They behave effectively the same in terms of FIFO, but a BlockingCollection will let each of your threads block until an item can be taken by calling GetConsumingEnumerable or Take. Here is a complete example.
http://mikehadlow.blogspot.com/2012/11/using-blockingcollection-to-communicate.html?m=1
As for including the parameters, it seems like you could use a closure to capture the NetworkDevice itself and then just enqueue Action instead of Action<NetworkDevice>.
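A rough sketch of how that could look, reusing the question's NetworkDB/NetworkDevice names (the Worker method and everything else here is illustrative, not a drop-in replacement):
using System;
using System.Collections.Concurrent;
public class NetworkDB
{
    // jobs now holds parameterless actions; parameters are captured by closures
    private readonly BlockingCollection<Action> jobs = new BlockingCollection<Action>();
    // each worker thread runs this loop and blocks until a job is available
    public void Worker()
    {
        foreach (Action job in jobs.GetConsumingEnumerable())
        {
            job();
        }
    }
    // enqueue a job together with its parameter by capturing nd in a closure
    public void JobInsert(NetworkDevice nd)
    {
        jobs.Add(() => UpdateDeviceInfo(nd));
    }
    // a parameterless job can be enqueued directly:
    //   jobs.Add(UpdateLinks);
    // and when no more jobs will ever be added, call jobs.CompleteAdding()
    // so GetConsumingEnumerable ends and the workers exit
    public void UpdateDeviceInfo(NetworkDevice nd) { /* from the question */ }
    private void UpdateLinks() { }
}
public class NetworkDevice { /* placeholder for the question's type */ }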
I have a situation where new tasks are being constantly generated and added to a ConcurrentBag<Task>.
I need to wait for all tasks to complete.
Waiting for all the tasks in the ConcurrentBag via WaitAll is not enough, as the number of tasks may have grown by the time the previous wait completes.
At the moment I am waiting it in the following way:
private void WaitAllTasks()
{
while (true)
{
int countAtStart = _tasks.Count();
Task.WaitAll(_tasks.ToArray());
int countAtEnd = _tasks.Count();
if (countAtStart == countAtEnd)
{
break;
}
#if DEBUG
if (_tasks.Count() > 100)
{
tokenSource.Cancel();
break;
}
#endif
}
}
I am not very happy with the while(true) solution.
Can anyone suggest a better, more efficient way to do this (without having to constantly poll the processor with a while(true))?
Additional context information as requested in the comments; I don't think, though, that this is relevant to the question.
This piece of code is used in a web crawler. The crawler scans page content and looks for two types of information: Data Pages and Link Pages. Data Pages will be scanned and data will be collected; Link Pages will be scanned and more links will be collected from them.
As each of the tasks carries out its work and finds more links, it adds the links to an EventList. There is an OnAdd event on the list (code below) that is used to trigger other tasks to scan the newly added URLs. And so forth.
The job is complete when there are no more running tasks (so no more links will be added) and all items have been processed.
public IEventList<ISearchStatus> CurrentLinks { get; private set; }
public IEventList<IDataStatus> CurrentData { get; private set; }
public IEventList<System.Dynamic.ExpandoObject> ResultData { get; set; }
private readonly ConcurrentBag<Task> _tasks = new ConcurrentBag<Task>();
private readonly CancellationTokenSource tokenSource = new CancellationTokenSource();
private readonly CancellationToken token;
public void Search(ISearchDefinition search)
{
CurrentLinks.OnAdd += UrlAdded;
CurrentData.OnAdd += DataUrlAdded;
var status = new SearchStatus(search);
CurrentLinks.Add(status);
WaitAllTasks();
_exporter.Export(ResultData as IList<System.Dynamic.ExpandoObject>);
}
private void DataUrlAdded(object o, EventArgs e)
{
var item = o as IDataStatus;
if (item == null)
{
return;
}
_tasks.Add(Task.Factory.StartNew(() => ProcessObjectSearch(item), token));
}
private void UrlAdded(object o, EventArgs e)
{
var item = o as ISearchStatus;
if (item==null)
{
return;
}
_tasks.Add(Task.Factory.StartNew(() => ProcessFollow(item), token));
_tasks.Add(Task.Factory.StartNew(() => ProcessData(item), token));
}
public class EventList<T> : List<T>, IEventList<T>
{
public EventHandler OnAdd { get; set; }
private readonly object locker = new object();
public new void Add(T item)
{
//lock (locker)
{
base.Add(item);
}
OnAdd?.Invoke(item, null);
}
public new bool Contains(T item)
{
//lock (locker)
{
return base.Contains(item);
}
}
}
I think this task can be done with the TPL Dataflow library with a very basic setup. You'll need a TransformManyBlock<ParseTask, DataTask> and an ActionBlock (maybe more of them) for the actual data processing, like this:
// queue for a new urls to parse
var buffer = new BufferBlock<ParseTask>();
// parser itself, returns many data tasks from one url
// similar to LINQ.SelectMany method
var transform = new TransformManyBlock<ParseTask, DataTask>(task =>
{
// get all the additional urls to parse
var parsedLinks = GetLinkTasks(task);
// get all the data to parse
var parsedData = GetDataTasks(task);
// setup additional links to be parsed
foreach (var parsedLink in parsedLinks)
{
buffer.Post(parsedLink);
}
// return all the data to be processed
return parsedData;
});
// actual data processing
var consumer = new ActionBlock<DataTask>(s => ProcessData(s));
After that you need to link the blocks with each other:
buffer.LinkTo(transform, new DataflowLinkOptions { PropagateCompletion = true });
transform.LinkTo(consumer, new DataflowLinkOptions { PropagateCompletion = true });
Now you have a nice pipeline which will execute in the background. At the moment you realize that everything you need has been parsed, you simply call the Complete method on a block so it stops accepting new messages. After the buffer becomes empty, it will propagate completion down the pipeline to the transform block, which will propagate it down to the consumer(s), and you wait for the Completion task:
// no additional links would be accepted
buffer.Complete();
// after all the tasks are done, this will get fired
await consumer.Completion;
You can check for the moment of completion, for example, when the buffer's Count property, the transform's InputCount, and the transform's CurrentDegreeOfParallelism (an internal property of the TransformManyBlock) are all equal to 0.
However, I'd suggest you implement some additional logic here to determine the current number of transformers, as relying on internal state isn't a great solution. As for cancelling the pipeline, you can create each TPL block with a CancellationToken, either one for all blocks or a dedicated one per block, and get cancellation out of the box.
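For example, cancellation might be wired up roughly like this (a sketch only, reusing the ParseTask/DataTask/ProcessData names assumed above; it needs using System.Threading and System.Threading.Tasks.Dataflow):
// one token source for the whole pipeline
var cts = new CancellationTokenSource();
var blockOptions = new ExecutionDataflowBlockOptions
{
    CancellationToken = cts.Token
};
// the same token (or a dedicated one) can be passed to every block
var buffer = new BufferBlock<ParseTask>(new DataflowBlockOptions { CancellationToken = cts.Token });
var consumer = new ActionBlock<DataTask>(s => ProcessData(s), blockOptions);
// later, to tear the whole pipeline down early:
// cts.Cancel();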
Why not write one function that yields your tasks as necessary, when they are created? This way you can just use Task.WhenAll to wait for them to complete. Or have I missed the point? See this working here.
using System;
using System.Threading.Tasks;
using System.Collections.Generic;
public class Program
{
public static void Main()
{
try
{
Task.WhenAll(GetLazilyGeneratedSequenceOfTasks()).Wait();
Console.WriteLine("Fisnished.");
}
catch (Exception ex)
{
Console.WriteLine(ex);
}
}
public static IEnumerable<Task> GetLazilyGeneratedSequenceOfTasks()
{
var random = new Random();
var finished = false;
while (!finished)
{
var n = random.Next(1, 2001);
if (n < 50)
{
finished = true;
}
if (n > 499)
{
yield return Task.Delay(n);
}
Task.Delay(20).Wait();
}
yield break;
}
}
Alternatively, if your question is not as trivial as my answer may suggest, I'd consider a mesh with TPL Dataflow. The combination of a BufferBlock and an ActionBlock would get you very close to what you need. You could start here.
Either way, I'd suggest you want to include a provision for accepting a CancellationToken or two.
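For the sequence approach above, that might look roughly like this (a sketch only, reworking the GetLazilyGeneratedSequenceOfTasks example to observe a token; it needs an extra using System.Threading):
public static IEnumerable<Task> GetLazilyGeneratedSequenceOfTasks(CancellationToken token)
{
    var random = new Random();
    while (!token.IsCancellationRequested)
    {
        var n = random.Next(1, 2001);
        if (n < 50)
        {
            yield break; // the natural end of the work
        }
        if (n > 499)
        {
            // pass the token so a pending delay is cancelled as well
            yield return Task.Delay(n, token);
        }
        Task.Delay(20).Wait();
    }
}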
Update - solved
The final solution differs a bit from Brandon's suggestion, but his answer set me on the right track.
class State
{
public int Offset { get; set; }
public HashSet<string> UniqueImageUrls = new HashSet<string>();
}
public IObservable<TPicture> GetPictures(ref object _state)
{
var localState = (State) _state ?? new State();
_state = localState;
return Observable.Defer(()=>
{
return Observable.Defer(() => Observable.Return(GetPage(localState.Offset)))
.SubscribeOn(TaskPoolScheduler.Default)
.Do(x=> localState.Offset += 20)
.Repeat()
.TakeWhile(x=> x.Count > 0)
.SelectMany(x=> x)
.Where(x=> !localState.UniqueImageUrls.Contains(x.ImageUrl))
.Do(x=> localState.UniqueImageUrls.Add(x.ImageUrl));
});
}
IList<TPicture> GetPage(int offset)
{
...
return result;
}
Original Question
I'm currently struggling with the following problem. The PictureProvider implementation shown below works with an offset variable used for paging results of a backend service providing the actual data. What I would like to implement is an elegant solution making the current offset available to the consumer of the observable, to allow resuming the observable sequence at a later time at the correct offset. Resuming is already accounted for by the initialState argument to GetPictures().
Recommendations for improving the code in a more Rx-like fashion would be welcome as well. I'm actually not so sure if the Task.Run() stuff is appropriate here.
public class PictureProvider :
IPictureProvider<Picture>
{
#region IPictureProvider implementation
public IObservable<Picture> GetPictures(object initialState)
{
return Observable.Create<Picture>((IObserver<Picture> observer) =>
{
var state = new ProducerState(initialState);
ProducePictures(observer, state);
return state;
});
}
#endregion
void ProducePictures(IObserver<Picture> observer, ProducerState state)
{
Task.Run(() =>
{
try
{
while(!state.Terminate.WaitOne(0))
{
var page = GetPage(state.Offset);
if(page.Count == 0)
{
observer.OnCompleted();
break;
}
else
{
foreach(var picture in page)
observer.OnNext(picture);
state.Offset += page.Count;
}
}
}
catch (Exception ex)
{
observer.OnError(ex);
}
state.TerminateAck.Set();
});
}
IList<Picture> GetPage(int offset)
{
var result = new List<Picture>();
... boring web service call here
return result;
}
public class ProducerState :
IDisposable
{
public ProducerState(object initialState)
{
Terminate = new ManualResetEvent(false);
TerminateAck = new ManualResetEvent(false);
if(initialState != null)
Offset = (int) initialState;
}
public ManualResetEvent Terminate { get; private set; }
public ManualResetEvent TerminateAck { get; private set; }
public int Offset { get; set; }
#region IDisposable implementation
public void Dispose()
{
Terminate.Set();
TerminateAck.WaitOne();
Terminate.Dispose();
TerminateAck.Dispose();
}
#endregion
}
}
I suggest refactoring your interface to yield the state as part of the data. Now the client has what they need to resubscribe where they left off.
Also, once you start using Rx, you should find that synchronization primitives like ManualResetEvent are rarely necessary. If you refactor your code so that retrieving each page is its own Task, then you can eliminate all of that synchronization code.
Also, if you are calling a "boring web service" in GetPage, then just make it async. This gets rid of the need to call Task.Run among other benefits.
Here is a refactored version, using .NET 4.5 async/await syntax. It could also be done without async/await. I also added a GetPageAsync variant that uses Observable.Start, just in case you really cannot convert your web service call to be asynchronous.
/// <summary>A set of pictures</summary>
public struct PictureSet
{
public int Offset { get; private set; }
public IList<Picture> Pictures { get; private set; }
/// <summary>Clients will use this property if they want to pick up where they left off</summary>
public int NextOffset { get { return Offset + Pictures.Count; } }
public PictureSet(int offset, IList<Picture> pictures)
:this() { Offset = offset; Pictures = pictures; }
}
public class PictureProvider : IPictureProvider<PictureSet>
{
public IObservable<PictureSet> GetPictures(int offset = 0)
{
// use Defer() so we can capture a copy of offset
// for each observer that subscribes (so multiple
// observers do not update each other's offset
return Observable.Defer<PictureSet>(() =>
{
var localOffset = offset;
// Use Defer so we re-execute GetPageAsync()
// each time through the loop.
// Update localOffset after each GetPageAsync()
// completes so that the next call to GetPageAsync()
// uses the next offset
return Observable.Defer(() => GetPageAsync(localOffset))
.Select(pictures =>
{
var s = new PictureSet(localOffset, pictures);
localOffset += pictures.Count;
return s;
})
.Repeat()
.TakeWhile(pictureSet => pictureSet.Pictures.Count > 0);
});
}
private async Task<IList<Picture>> GetPageAsync(int offset)
{
var data = await BoringWebServiceCallAsync(offset);
return data.Pictures.ToList();
}
// this alternative version uses Observable.Start (which runs the call on the
// default scheduler) in case you cannot convert your
// web service call to be asynchronous - use it instead of the async version above
private IObservable<IList<Picture>> GetPageAsync(int offset)
{
return Observable.Start<IList<Picture>>(() =>
{
var result = new List<Picture>();
... boring web service call here
return result;
});
}
}
Clients just need to add a SelectMany call to get their IObservable<Picture>. They can choose to store the pictureSet.NextOffset if they wish.
pictureProvider
.GetPictures()
.SelectMany(pictureSet => pictureSet.Pictures)
.Subscribe(picture => whatever);
Instead of thinking about how to save the subscription state, I would think about how to replay the state of the inputs (i.e. I'd try to create a serializable ReplaySubject that, on resume, would just resubscribe and catch back up to the current state).
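Roughly, the catch-up part of that idea looks like this with a stock ReplaySubject (the serialization/persistence part is left out, since the suggestion only sketches it, and the Interval source is a stand-in for the picture stream):
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
// a stand-in source; in the question this would be the stream of pictures
IObservable<long> source = Observable.Interval(TimeSpan.FromSeconds(1));
// the subject records everything it has seen so far
var replay = new ReplaySubject<long>();
source.Subscribe(replay);
// ...later, possibly much later...
// a consumer that subscribes now first receives every value produced so far,
// then continues with the live values - i.e. it catches back up
replay.Subscribe(x => Console.WriteLine(x));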
I am interfacing with a back-end system, where I must never ever have more than one open connection to a given object (identified by its numeric ID), but different consumers may be opening and closing them independently of one another.
Roughly, I have a factory class fragment like this:
private Dictionary<ulong, IFoo> _openItems = new Dictionary<ulong, IFoo>();
private object _locker = new object();
public IFoo Open(ulong id)
{
lock (_locker)
{
if (!_openItems.ContainsKey(id))
{
_openItems[id] = _nativeResource.Open(id);
}
_openItems[id].RefCount++;
return _openItems[id];
}
}
public void Close(ulong id)
{
lock (_locker)
{
if (_openItems.ContainsKey(id))
{
_openItems[id].RefCount--;
if (_openItems[id].RefCount == 0)
{
_nativeResource.Close(id);
_openItems.Remove(id);
}
}
}
}
Now, here is the problem. In my case, _nativeResource.Open is very slow. The locking in here is rather naive and can be very slow when there are a lot of different concurrent .Open calls, even though they are (most likely) referring to different ids and don't overlap, especially if they are not in the _openItems cache.
How do I structure the locking so that I am only preventing concurrent access to a specific ID and not to all callers?
What you may want to look into is a striped locking strategy. The idea is that you share N locks for M items (possible IDs in your case), and choose a lock such that for any ID the lock chosen is always the same one. The classic way of choosing a lock for this technique is modulo division: simply divide the ID by N, take the remainder, and use the lock with that index:
// Assuming the allLocks class member is defined as follows:
private static AutoResetEvent[] allLocks = new AutoResetEvent[10];
// And initialized thus (in a static constructor):
for (int i = 0; i < 10; i++) {
allLocks[i] = new AutoResetEvent(true);
}
// Your method becomes
var lockIndex = (int)(id % (ulong)allLocks.Length);
var lockToUse = allLocks[lockIndex];
// Wait for the lock to become free
lockToUse.WaitOne();
try {
// At this point we have taken the lock
// Do the work
} finally {
lockToUse.Set();
}
If you are on .NET 4, you could try the ConcurrentDictionary with something along these lines:
private ConcurrentDictionary<ulong, IFoo> openItems = new ConcurrentDictionary<ulong, IFoo>();
private object locker = new object();
public IFoo Open(ulong id)
{
var foo = this.openItems.GetOrAdd(id, x => nativeResource.Open(x));
lock (this.locker)
{
foo.RefCount++;
}
return foo;
}
public void Close(ulong id)
{
IFoo foo = null;
if (this.openItems.TryGetValue(id, out foo))
{
lock (this.locker)
{
foo.RefCount--;
if (foo.RefCount == 0)
{
if (this.openItems.TryRemove(id, out foo))
{
this.nativeResource.Close(id);
}
}
}
}
}
If anyone can see any glaring issues with that, please let me know!
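One caveat worth noting (my addition, not part of the answer above): ConcurrentDictionary.GetOrAdd may invoke the value factory more than once when two threads race on the same key, so the slow _nativeResource.Open could run twice for one id. A common mitigation is to store Lazy<IFoo> values instead, roughly like this (reusing the nativeResource and locker fields from the answer):
private ConcurrentDictionary<ulong, Lazy<IFoo>> openItems =
    new ConcurrentDictionary<ulong, Lazy<IFoo>>();
public IFoo Open(ulong id)
{
    // LazyThreadSafetyMode.ExecutionAndPublication (the default) guarantees the
    // slow nativeResource.Open runs at most once per id, even if two threads
    // race on GetOrAdd for the same key
    Lazy<IFoo> lazy = this.openItems.GetOrAdd(
        id,
        x => new Lazy<IFoo>(() => nativeResource.Open(x)));
    IFoo foo = lazy.Value;
    lock (this.locker)
    {
        foo.RefCount++;
    }
    return foo;
}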
I'm looking for resources, or anyone who has written a scheduler in WCF.
What I want to achieve is basically what you can see in OGame, or Travian, or dozens of other text-based browser games.
A player clicks and sends a task to the server to make a building for him, or a unit, or something else.
From what I've figured out, I need to run some kind of scheduler on the server that will gather all tasks and track them until some period of time has passed (2 mins, 10 mins, 3 days etc.), and after that period ends the service should call an action, like sending data to the database.
Well, I've been trying to make something very simple that I can start from, and ended up with this:
public class BuildingScheduler
{
public int TaskID { get; set; }
public string UserName { get; set; }
public DateTime TaskEnd { get; set; }
public string BuildingName { get; set; }
public bool TaskDone { get; set; }
public DateTime RemainingTime { get; set; }
TestDBModelContainer _ctx;
TestData _testData;
public IDuplexClient Client { get; set; }
public BuildingScheduler()
{
TaskDone = false;
}
public void MakeBuilding()
{
while (DateTime.Now <= TaskEnd)
{
//Client.DisplayMessage(DateTime.Now.ToString());
RemainingTime = DateTime.Now;
}
_testData = new TestData { DataName = BuildingName, Created = DateTime.Now };
_ctx = new TestDBModelContainer();
_ctx.TestDataSet.AddObject(_testData);
_ctx.SaveChanges();
TaskDone = true;
//Client.DisplayMessage("Building completed!");
}
}
static List<UserChannel> _userChannels = new List<UserChannel>();
static List<BuildingScheduler> _buildingSchedules = new List<BuildingScheduler>();
List<BuildingScheduler> _buildingSchedulesToRemove = new List<BuildingScheduler>();
[OperationContract]
public void DoWork()
{
IDuplexClient client = OperationContext.Current.GetCallbackChannel<IDuplexClient>();
UserChannel userChannel = new UserChannel { Client = client, ClientName = "TestClient" };
string clientName = (from p in _userChannels
where p.ClientName == "TestClient"
select p.ClientName).FirstOrDefault();
lock (((ICollection)_userChannels).SyncRoot)
{
if (clientName == null)
{
_userChannels.Add(userChannel);
}
}
CheckBuilding();
}
private void CheckBuilding()
{
BuildingScheduler bs = (from p in _buildingSchedules
where p.UserName == "TestClient"
select p).FirstOrDefault();
IDuplexClient client = (from p in _userChannels
where p.ClientName == "TestClient"
select p.Client).FirstOrDefault();
if (bs != null)
{
client.DisplayMessage(bs.RemainingTime);
}
}
private void StartBuilding()
{
foreach (BuildingScheduler bs in _buildingSchedules)
{
if (bs.TaskDone == false)
{
bs.MakeBuilding();
}
else if (bs.TaskDone == true)
{
_buildingSchedulesToRemove.Add(bs);
}
}
for(int i = 0; i <= _buildingSchedulesToRemove.Count; i++)
{
BuildingScheduler bs = _buildingSchedulesToRemove.Where(p => p.TaskDone == true).Select(x => x).FirstOrDefault();
_buildingSchedules.Remove(bs);
_buildingSchedulesToRemove.Remove(bs);
}
CheckBuilding();
}
[OperationContract]
public void MakeBuilding(string name)
{
BuildingScheduler _buildng = new BuildingScheduler();
//_buildng.Client = client;
_buildng.TaskID = 1;
_buildng.UserName = "TestClient";
_buildng.TaskEnd = DateTime.Now.AddSeconds(50);
_buildng.BuildingName = name;
_buildingSchedules.Add(_buildng);
StartBuilding();
}
I have hardcoded most values, for testing.
InstanceContextMode is set to PerCall.
Anyway, this code is working, at least to some point. If we ignore a zillion exceptions from Entity Framework, I can add tasks from multiple clients and they are added to the db in order from newest to oldest (or shortest to longest?).
The point is, the user CAN'T track his tasks. When I change the page I don't see how much time remains until the task is done. This code essentially does not provide time tracking, because I got rid of it as it wasn't working anyway.
I guess I should store my tasks in some persistent data storage, and every time the user checks the page the newest state should be drawn from that storage and sent to him.
I'm not an expert, but I think the best option here is to store all data in memory until the task is done.
Any relational database will probably be too slow if I have to constantly update records with the newest state of the task.
I know I should just synchronize a client-side timer with the server and not constantly stream the time from the server. But the big question here is how to get the newest state of task progress when the user comes back to the page after 3 seconds or 3 hours?
Any advice on how to make it work as expected?
And btw, I'm using polling duplex and Silverlight.
It sounds like you are using WCF to do something for which it wasn't designed (but I understand the need). I would suggest you look at Workflow Foundation (WF) as a possible solution to your problem. Here is a good explanation:
http://msdn.microsoft.com/library/dd851337.aspx
Here is also a good intro video:
http://channel9.msdn.com/Blogs/mwink/Introduction-to-Workflow-Services-building-WCF-Services-with-WF
Workflow can consume WCF services and it is designed to work over time. It holds data in state until something changes (regardless of time) without consuming excess resources. Also, it allows for persistence and parallel processes.
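As a very rough illustration of the "work over time" idea, a long-running wait can be expressed declaratively in WF (a minimal sketch using System.Activities; the delay and messages are placeholders, not the asker's game logic, and a real deployment would host this as a persisted workflow service rather than invoke it in-process):
using System;
using System.Activities;
using System.Activities.Statements;
// a workflow that waits for the build time and then "completes the building"
Activity buildWorkflow = new Sequence
{
    Activities =
    {
        new WriteLine { Text = "Building started" },
        new Delay { Duration = TimeSpan.FromMinutes(2) },
        new WriteLine { Text = "Building completed" }
    }
};
// runs the workflow synchronously here just for illustration;
// a workflow service host would persist the Delay instead of holding a thread
WorkflowInvoker.Invoke(buildWorkflow);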