Suppose I have a set of URIs that I am monitoring for availability. Each URI is either "up" or "down", and new URIs to monitor may be added to the system at any time:
public enum ConnectionStatus
{
    Up,
    Down
}

public class WebsiteStatus
{
    public string Uri { get; set; }

    public ConnectionStatus Status { get; set; }
}
public class Program
{
    static void Main(string[] args)
    {
        var statusStream = new Subject<WebsiteStatus>();
        Test(statusStream);

        Console.WriteLine("Done");
        Console.ReadKey();
    }

    private static void Test(IObservable<WebsiteStatus> statusStream)
    {
    }
}
Now suppose in Test() I want to reactively ascertain:
whether all URIs are down (as a bool)
which URIs are down (as IEnumerable<string>)
So Test would end up creating an observable like IObservable<Tuple<bool, IEnumerable<string>>> where the bool indicates whether all URIs are down and the IEnumerable<string> contains those URIs that are.
How do I go about this? My initial thinking is that I would need to group by the URI, then combine the latest from each group into a list that I could then perform a Select against. However, this did not work out due to the way CombineLatest works.
EDIT: Thanks to Matthew's answer I looked into Rxx and found that it implements a CombineLatest overload in exactly the fashion I would have expected from Rx out of the box, except that I needed to change it so that it publishes even when there is only a single source stream being combined (by default it waits for a minimum of two source streams). Also, I can't justify pulling in an extra 2 MB of binaries for the sake of one method, so I have copy/pasted it into my project. With that in place, I was able to solve it as follows:
private static void Test(IObservable<WebsiteStatus> statusStream)
{
    statusStream
        .GroupBy(x => x.Uri)
        .CombineLatest()
        .Select(x =>
        {
            var down = x.Where(y => y.Status == ConnectionStatus.Down);
            var downCount = down.Count();
            var downUris = down.Select(y => y.Uri).ToList();

            return new
            {
                AllDown = x.Count == downCount,
                DownUris = downUris
            };
        })
        .Subscribe(x =>
        {
            Console.WriteLine(
                "  Sources down ({0}): {1}",
                x.AllDown ? "that's all of them" : "some are still up",
                string.Join(" | ", x.DownUris));
        });
}
The neatest way is to use the Rxx extension in this answer. An alternative is below; it just keeps running lists of the sites that are down/up.
var downStream = statusStream
    .Scan<WebsiteStatus, IEnumerable<string>>(new string[0], (down, newStatus) =>
    {
        // Scan (rather than Aggregate) emits the running list on every status change
        if (newStatus.Status == ConnectionStatus.Up)
            return down.Where(uri => uri != newStatus.Uri);
        else if (!down.Contains(newStatus.Uri))
            return down.Concat(new[] { newStatus.Uri });
        else
            return down;
    });

var upStream = statusStream
    .Scan<WebsiteStatus, IEnumerable<string>>(new string[0], (up, newStatus) =>
    {
        if (newStatus.Status == ConnectionStatus.Down)
            return up.Where(uri => uri != newStatus.Uri);
        else if (!up.Contains(newStatus.Uri))
            return up.Concat(new[] { newStatus.Uri });
        else
            return up;
    });
var allDown = upStream.Select(up => !up.Any());
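The per-update list maintenance inside each accumulator can be checked in isolation with plain LINQ, no Rx required. A minimal sketch (the URIs are made-up sample data):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DownListDemo
{
    // Mirrors the accumulator: apply one status update to the "down" set.
    static IEnumerable<string> ApplyStatus(IEnumerable<string> down, string uri, bool isUp)
    {
        if (isUp)
            return down.Where(u => u != uri).ToList();
        if (!down.Contains(uri))
            return down.Concat(new[] { uri }).ToList();
        return down;
    }

    static void Main()
    {
        var updates = new (string Uri, bool IsUp)[]
        {
            ("http://a", false), // a goes down
            ("http://b", false), // b goes down
            ("http://a", true),  // a recovers
        };

        IEnumerable<string> down = new string[0];
        foreach (var u in updates)
            down = ApplyStatus(down, u.Uri, u.IsUp);

        Console.WriteLine(string.Join(",", down)); // only b remains down
    }
}
```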
I'm trying to test a persistent actor, but the behavior is weird.
My tested actor:
public class PredictionManager : ReceivePersistentActor
{
    // Field declarations implied by the snippet
    private readonly string _persistanceId;
    private readonly PredictionManagerState _state = new PredictionManagerState();

    public override string PersistenceId => _persistanceId;

    public PredictionManager(string persistenceId)
    {
        _persistanceId = persistenceId;
        Command<AddPredictionRequest>(OnPrediction);
        Recover<SnapshotOffer>(x => OnRecover((PredictionManagerState)x.Snapshot), x => x.Snapshot is PredictionManagerState);
    }

    private void OnPrediction(AddPredictionRequest request)
    {
        /* some code */
        _state.Add(request);
        SaveSnapshot(_state);
    }

    private void OnRecover(PredictionManagerState state)
    {
        foreach (var request in state.RequestMap)
        {
            OnPrediction(request.Value);
        }
    }
}
My state saves all messages and deletes them after the manager actor receives some message. When I debug my test, the Recover function is called first, and only after that is OnPrediction called. My question is: how is that possible? If the data is stored in memory, why is there a SnapshotOffer? I have also tried generating a new persistenceId from Guid.NewGuid(), but that doesn't work either.
public void AddPrediction_PassToChild_CreateNewManager_PassToChild()
{
    var sender = CreateTestProbe(Sys);
    var persistanceId = "AddPrediction_PassToChild_CreateNewManager_PassToChild";
    var props = Props.Create(() => new PredictionManager(Mock.Of<IEventBus>(), persistanceId));
    var predictionManager = ActorOf(props);
    var message = new PredictionManager.AddPredictionRequest(
        Props.Create(() => new ChildTestActor(sender.Ref)),
        new StartPrediction<IPredictionParameter>("a", 1, "a", new Param()));

    //Act
    predictionManager.Tell(message, sender);
    sender.ExpectMsg<string>(x => x == "ok", TimeSpan.FromSeconds(15));
    Sys.Stop(predictionManager);

    predictionManager = Sys.ActorOf(props);
    sender.ExpectMsg<string>(x => x == "ok", TimeSpan.FromSeconds(15));
    Sys.Stop(predictionManager);
}
I found out that the default storage for snapshots is local (file-based) storage, not memory storage. Snapshots are stored in files, which is why a SnapshotOffer arrives after an app restart. But I still can't see why Guid.NewGuid() as the persistenceId is not working.
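This matches the Akka.NET defaults: the journal defaults to an in-memory plugin, but the snapshot store defaults to the local file-based one, so snapshots survive restarts. For tests, one option (assuming your Akka.NET version ships the in-memory snapshot store plugin) is to point the snapshot store at it via HOCON when creating the test actor system:

```hocon
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.inmem"
```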
For example, I have an observable of collections which indicate the statuses of objects (I get them periodically through a REST API).
class User
{
    int Id { get; }
    string Name { get; }
    string Status { get; }
}

IObservable<IEnumerable<User>> source;
I want to create a DynamicCache object and update it each time the source gives me a new result. So I wrote:
var models = new SourceCache<User, int>(user => user.Id);

models.Connect()
    .Transform(u => new UserViewModel() {...})
    ...
    .Bind(out viewModels)
    .Subscribe();

source.Subscribe(ul => models.EditDiff(ul, (a, b) => a.Status == b.Status));
But now, every time a user changes status, the .Transform(...) method creates a new instance of UserViewModel, which isn't the desired behaviour.
Can I somehow define a rule for updating an existing ViewModel's properties (in the derived collection) when the source item with the same Id changes, instead of creating a new one every time?
The answer is that you need to create a custom operator. I have posted a gist, TransformWithInlineUpdate, which you can copy into your solution. Example usage:
var users = new SourceCache<User, int>(user => user.Id);

var transformed = users.Connect()
    .TransformWithInlineUpdate(u => new UserViewModel(u), (previousViewModel, updatedUser) =>
    {
        previousViewModel.User = updatedUser;
    });
For completeness of the answer, here is the code:
public static IObservable<IChangeSet<TDestination, TKey>> TransformWithInlineUpdate<TObject, TKey, TDestination>(
    this IObservable<IChangeSet<TObject, TKey>> source,
    Func<TObject, TDestination> transformFactory,
    Action<TDestination, TObject> updateAction = null)
{
    return source.Scan((ChangeAwareCache<TDestination, TKey>)null, (cache, changes) =>
    {
        // The change-aware cache captures a history of all changes so downstream operators can replay them
        if (cache == null)
            cache = new ChangeAwareCache<TDestination, TKey>(changes.Count);

        foreach (var change in changes)
        {
            switch (change.Reason)
            {
                case ChangeReason.Add:
                    cache.AddOrUpdate(transformFactory(change.Current), change.Key);
                    break;
                case ChangeReason.Update:
                {
                    if (updateAction == null) continue;

                    var previous = cache.Lookup(change.Key)
                        .ValueOrThrow(() => new MissingKeyException($"{change.Key} is not found."));

                    // Callback when an update has been received
                    updateAction(previous, change.Current);

                    // Send a refresh, as this will force downstream operators to filter, sort, group etc.
                    cache.Refresh(change.Key);
                }
                break;
                case ChangeReason.Remove:
                    cache.Remove(change.Key);
                    break;
                case ChangeReason.Refresh:
                    cache.Refresh(change.Key);
                    break;
                case ChangeReason.Moved:
                    // Do nothing!
                    break;
            }
        }
        return cache;
    }).Select(cache => cache.CaptureChanges()); // Invoke CaptureChanges to return the changeset
}
Is this the most efficient way of skipping random changesets when getting latest from TFS?
I have done a LOT of research into this subject and have yet to run across a solution. All comments and suggestions are welcome, even if that suggestion is to use a completely different solution (that works).
In my first attempt I filtered the changesets and then looped through them issuing a workspace.Get(). This was incredibly slow and did not get the right results. It ended up taking over 45 minutes for one folder, where my final solution took 3:30 minutes for the same folder; a normal get on the same folder took around 50 seconds each time.
This is test code right now, intended only to get this working, so it is missing basics like exception handling and other best practices. It has passed all the tests I have thrown at it so far, but it is a bit slower than a normal get, and I do not see a way to make it faster.
Here is what I ended up with:
You will need references to:
Microsoft.TeamFoundation.Client
Microsoft.TeamFoundation.Common
Microsoft.TeamFoundation.VersionControl.Client
Microsoft.TeamFoundation.VersionControl.Common
Microsoft.VisualStudio.Services.Common
The code:
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Common;
using Microsoft.TeamFoundation.VersionControl.Client;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
class Program
{
    static void Main(string[] args)
    {
        // These would be the changesets to ignore
        var ignoreChangeSets = new List<int>()
        {
            1, 10, 50, 900 // Change these to the ids you want to ignore. These are just random example numbers
        };

        // Replace with your setup
        var tfsServer = @"http://server_name:8080/TFS/";
        var serverPath = @"$/TFS_PATH_TO_FOLDER/";

        // Connect to server
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(tfsServer));
        tfs.Connect(ConnectOptions.None);
        var vcs = tfs.GetService<VersionControlServer>();

        // Get both sets so we can do a comparison of the final changes
        var folderName = "Foo";
        var sourceDir = $@"{Path.GetTempPath()}\{folderName}\";
        var targetDir = $@"{Path.GetTempPath()}\{folderName}-ChangeSets\";

        // Download the entire source
        DownloadSource(vcs, serverPath, sourceDir);

        var changeSets = GetChangeSets(vcs, serverPath);

        // Technically this query could be anything, as long as it filters the changesets.
        // You could filter by user, date, info in the changesets - anything really. Up to you.
        var filteredChangeSets = from cs in changeSets
                                 where !ignoreChangeSets.Contains(cs.ChangesetId)
                                 select cs;

        if (changeSets.Count() == filteredChangeSets.Count())
        {
            // We did not filter anything, so do a normal pull as it is faster
            DownloadSource(vcs, serverPath, targetDir);
        }
        else
        {
            GetChangeSetsLatest(vcs, serverPath, filteredChangeSets, targetDir);
        }
    }
    private static void RecreateDir(string dir)
    {
        if (Directory.Exists(dir))
        {
            Directory.Delete(dir, true);
        }
        Directory.CreateDirectory(dir);
    }
    private static GetStatus DownloadSource(VersionControlServer vcs, string serverPath, string dir)
    {
        string wsName = "TempWorkSpace";
        Workspace ws = null;
        try
        {
            ws = vcs.GetWorkspace(wsName, Environment.UserName);
        }
        catch (WorkspaceNotFoundException)
        {
            ws = vcs.CreateWorkspace(wsName, Environment.UserName);
        }

        RecreateDir(dir);
        ws.Map(serverPath, dir);
        var getResponse = ws.Get(VersionSpec.Latest, GetOptions.GetAll | GetOptions.Overwrite);
        vcs.DeleteWorkspace(wsName, Environment.UserName);
        return getResponse;
    }
    private static IEnumerable<Changeset> GetChangeSets(VersionControlServer vcs, string serverPath)
    {
        VersionSpec versionFrom = null; // VersionSpec.ParseSingleSpec("C529", null);
        VersionSpec versionTo = VersionSpec.Latest;

        // Get changesets
        var changesets = vcs.QueryHistory(
            serverPath,
            VersionSpec.Latest,
            0,
            RecursionType.Full,
            null,
            versionFrom,
            versionTo,
            Int32.MaxValue,
            true,
            false
        ).Cast<Changeset>();

        return changesets;
    }
    private static void GetChangeSetsLatest(VersionControlServer vcs, string serverPath, IEnumerable<Changeset> changesets, string dir)
    {
        // We hold the latest item (file) in this dictionary so we can do all our downloads at the end.
        // The key is the TFS server file path.
        var items = new Dictionary<string, Item>();
        RecreateDir(dir);

        // We need the changesets ordered by ChangesetId.
        var changesetsOrdered = changesets.OrderBy(c => c.ChangesetId);

        // DO NOT PARALLEL HERE. We need these changesets in EXACT order.
        foreach (var changeset in changesetsOrdered)
        {
            foreach (var change in changeset.Changes.Where(i => i.Item.ItemType == ItemType.File))
            {
                if (change.ChangeType.HasFlag(ChangeType.Edit) && change.ChangeType.HasFlag(ChangeType.SourceRename))
                {
                    if (change.Item.DeletionId == 0)
                    {
                        items.AddOrUpdate(change.Item.ServerItem, change.Item);
                    }
                    else
                    {
                        items.TryRemove(change.Item.ServerItem);
                    }
                }
                else if (change.ChangeType.HasFlag(ChangeType.Delete) && change.ChangeType.HasFlag(ChangeType.SourceRename))
                {
                    var previousChange = GetPreviousServerChange(vcs, change.Item);
                    if (previousChange != null)
                    {
                        items.TryRemove(previousChange.Item.ServerItem);
                    }

                    if (change.Item.DeletionId == 0)
                    {
                        items.AddOrUpdate(change.Item.ServerItem, change.Item);
                    }
                    else
                    {
                        items.TryRemove(change.Item.ServerItem);
                    }
                }
                else if (change.ChangeType.HasFlag(ChangeType.Rollback) && change.ChangeType.HasFlag(ChangeType.Delete))
                {
                    items.TryRemove(change.Item.ServerItem);
                }
                else if (change.ChangeType.HasFlag(ChangeType.Rollback))
                {
                    var item = GetPreviousServerChange(vcs, change.Item)?.Item;
                    if (item != null)
                    {
                        items.AddOrUpdate(item.ServerItem, item);
                    }
                }
                else if (change.ChangeType.HasFlag(ChangeType.Add) || change.ChangeType.HasFlag(ChangeType.Edit) || change.ChangeType.HasFlag(ChangeType.Rename))
                {
                    if (change.Item.DeletionId == 0)
                    {
                        items.AddOrUpdate(change.Item.ServerItem, change.Item);
                    }
                }
                else if (change.ChangeType.HasFlag(ChangeType.Delete))
                {
                    items.TryRemove(change.Item.ServerItem);
                }
                else
                {
                    Console.ForegroundColor = ConsoleColor.Yellow;
                    Console.WriteLine($"Unknown change types: {change.ChangeType}");
                    Console.ResetColor();
                }
            }
        }

        // HUGE penalty for switching to parallel (Parallel.ForEach); stick to a single file at a time.
        // One test went from 3:30 to 11:05. The file system does not appreciate threading. :|
        foreach (var item in items)
        {
            var itemPath = item.Key.Replace(serverPath, dir).Replace("/", "\\");
            item.Value.DownloadFile(itemPath);
            Console.WriteLine(item.Value.ChangesetId + " - " + itemPath);
        }
    }
    // Really not sure this is the right way to do this. It works quite well, but surely there must be an easier way?
    private static Change GetPreviousServerChange(VersionControlServer vcs, Item item)
    {
        // Get the changesets in reverse order so we can take the one after ours
        var changesets = GetChangeSets(vcs, item.ServerItem).OrderByDescending(cs => cs.ChangesetId);

        // Skip until we find our changeset, then take the following changeset
        var previousChangeSet = changesets.SkipWhile(c => c.ChangesetId != item.ChangesetId).Skip(1).FirstOrDefault();

        // Return the Change that matches the ItemId (file id)
        return previousChangeSet?.Changes.FirstOrDefault(c => c.Item.ItemId == item.ItemId);
    }
}
static class Extensions
{
    public static void AddOrUpdate<TKey, TValue>(this IDictionary<TKey, TValue> dictionary, TKey key, TValue value)
    {
        // The indexer adds the key if missing, or overwrites the existing value
        dictionary[key] = value;
    }

    public static void TryRemove<TKey, TValue>(this IDictionary<TKey, TValue> dictionary, TKey key)
    {
        if (dictionary.ContainsKey(key))
        {
            dictionary.Remove(key);
        }
    }
}
EDIT: I am being forced into this by existing business procedures. Multiple teams or devs can work on the same database at the same time. The changes per team or dev are cataloged under an RFC #. Each RFC # can have its own release schedule and can be released at any point in time. I will use Red Gate SQL Compare to compare the folder containing everything (as the source) against the folder minus the RFC change-sets (as the target) to generate a change script for that RFC.
Then there are these rules:
An RFC can be parked for an indeterminate period of time. For example, I have seen RFCs parked in Staging for over a year while other RFCs pass them by on their way to production.
Individual RFCs can be withdrawn from a production push at the last minute.
The chance of me changing these existing procedures is nil, so I had to figure out a way to work around them, and this was that way. I would prefer a normal release schedule in which all changes go out every release and flow all the way to production. Unfortunately, that is not the case here.
It seems like you need a Release branch and a Dev branch. Every check-in is then done in the Dev branch when the work is done. Whenever an RFC is approved for release, you merge that particular changeset from Dev to Release, followed by whatever steps you need for your release.
Depending on the complexity of your development and releases you might need more branches of each, but I would suggest to aim for as few branches as possible, because it tends to get complicated very fast.
I have a very large set of methods that I want to make accessible asynchronously. The methods are complex and sometimes very long. The only approach I can think of is copying all the existing methods and making them async, but then whenever I make a change I have to edit two methods. Is there a better approach that keeps the code in one place?
As you can see the code is basically the same. Is it possible to combine those 2 methods into 1?
public async Task ManufacturersToWebshopAsync(HttpContext httpContext, ManufacturerNopServiceClient manufacturerNopServiceClient, bool onlyChanged = false, bool includeNight = false)
{
    Log.Verbose("ManufacturersToWebshop", "Start", "");

    // Client
    if (manufacturerNopServiceClient == null)
    {
        var host = httpContext.Request.Url.Host;
        manufacturerNopServiceClient = GetManufacturerNopServiceClient(host);
    }

    var manufacturers = _manufacturerService.GetAllManufacturers();
    if (onlyChanged && includeNight)
    {
        manufacturers = manufacturers.Where(x => x.State == State.Changed || x.State == State.Night).ToList();
    }
    else
    {
        if (onlyChanged)
        {
            manufacturers = manufacturers.Where(x => x.State == State.Changed).ToList();
        }
        if (includeNight)
        {
            manufacturers = manufacturers.Where(x => x.State == State.Night).ToList();
        }
    }

    var tasks = new List<Task>();
    var total = manufacturers.Count();
    var count = 1;
    foreach (var manufacturer in manufacturers)
    {
        Log.Information("ManufacturersToWebshop", "Manufacturer " + count + " of " + total, "");
        //tasks.Add(ManufacturerToWebshop(httpContext, manufacturer, manufacturerNopServiceClient));
        await ManufacturerToWebshopAsync(httpContext, manufacturer, manufacturerNopServiceClient);
        count++;
    }
    //await Task.WhenAll(tasks);

    Log.Verbose("ManufacturersToWebshop", "End", "");
}
public void ManufacturersToWebshop(HttpContext httpContext, ManufacturerNopServiceClient manufacturerNopServiceClient, bool onlyChanged = false, bool includeNight = false)
{
    Log.Verbose("ManufacturersToWebshop", "Start", "");

    // Client
    if (manufacturerNopServiceClient == null)
    {
        var host = httpContext.Request.Url.Host;
        manufacturerNopServiceClient = GetManufacturerNopServiceClient(host);
    }

    var manufacturers = _manufacturerService.GetAllManufacturers();
    if (onlyChanged && includeNight)
    {
        manufacturers = manufacturers.Where(x => x.State == State.Changed || x.State == State.Night).ToList();
    }
    else
    {
        if (onlyChanged)
        {
            manufacturers = manufacturers.Where(x => x.State == State.Changed).ToList();
        }
        if (includeNight)
        {
            manufacturers = manufacturers.Where(x => x.State == State.Night).ToList();
        }
    }

    var total = manufacturers.Count();
    var count = 1;
    foreach (var manufacturer in manufacturers)
    {
        Log.Information("ManufacturersToWebshop", "Manufacturer " + count + " of " + total, "");
        ManufacturerToWebshop(httpContext, manufacturer, manufacturerNopServiceClient);
        count++;
    }

    Log.Verbose("ManufacturersToWebshop", "End", "");
}
Is it possible to combine those 2 methods into 1?
Not in a good way that works in all scenarios. There are hacks to write synchronous wrappers for asynchronous methods, and other hacks to write asynchronous wrappers for synchronous methods - but none of the options work in all scenarios. There is no generic, general-purpose solution for this problem.
I recommend that you consider what your method is doing, and decide whether it should be asynchronous or not. E.g., if it is doing I/O, then it should be asynchronous. Then, if the method should be asynchronous, just make it asynchronous (without a synchronous version).
If you're updating code, then this will require updating all the code that calls the new asynchronous method (and code that calls those methods, etc, etc). If this update will take too long to apply throughout the system, then (temporary) duplication of code is my recommended approach. However, if you don't control the calling code, then you might have to consider one of the hacks in the linked articles. Just be sure to think through the ramifications of each, and choose the one appropriate for your specific code.
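If a transitional period really does require both entry points, one common (and imperfect) pattern is to keep all the logic in the async method and expose a thin blocking wrapper. A minimal sketch (the method names here are hypothetical, not from the code above):

```csharp
using System;
using System.Threading.Tasks;

class Exporter
{
    // All the real logic lives here, once.
    public async Task<int> ExportAsync()
    {
        await Task.Delay(10); // stand-in for real async I/O
        return 42;
    }

    // Thin blocking wrapper. Beware: this can deadlock when called on a
    // thread with a SynchronizationContext (UI thread, classic ASP.NET).
    public int Export() => ExportAsync().GetAwaiter().GetResult();
}

class Program
{
    static void Main()
    {
        var exporter = new Exporter();
        // Safe here because console apps have no SynchronizationContext
        Console.WriteLine(exporter.Export());
    }
}
```

This keeps the duplication down to one line per method, but it is exactly the kind of sync-over-async hack the linked articles warn about, so weigh the deadlock and thread-pool costs before adopting it broadly.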
I asked a similar question over on Unity3D Q&A, but I didn't get a response. I've finally got back round to this project and still have the same issue.
Link To Unity Question
I have searched, but I don't think I've hit the right keywords, because I still haven't found an answer that fits. Anyway, I will ask the question a different way and hopefully someone can point me in the right direction.
Below is the code I have come up with. It's not the actual code in the game, but it does a good job of showing my problem.
Basically, in the build() method I want to check that both lists contain the same types of tool, without actually hard-coding the types. I don't care about the specific tool instance, just that it is of a certain type. The aim is that I can create new tools without having to modify the build method to incorporate the new types.
If there is a better way to do it I'm all ears.
Thanks
namespace TypeExample
{
    class Tool
    {
    }

    class Spanner : Tool
    {
    }

    class Wrench : Tool
    {
    }

    class Builder
    {
        List<Tool> toolsAvailable = new List<Tool>();
        List<Tool> toolsRequired = new List<Tool>();

        public Builder()
        {
            Spanner spanner = new Spanner();
            Wrench wrench = new Wrench();
            toolsRequired.Add(spanner);
            toolsRequired.Add(wrench);
        }

        public void GiveTool(Tool tool)
        {
            toolsAvailable.Add(tool);
        }

        public void build()
        {
            // if true
            Console.WriteLine("Building");
            // else
            Console.WriteLine("I don't have the tools to build!");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Spanner spanner = new Spanner();
            Wrench wrench = new Wrench();

            Builder builder = new Builder();
            builder.GiveTool(spanner);
            builder.GiveTool(wrench);
            builder.build();

            Console.ReadLine();
        }
    }
}
Basically, you should get all the types from the collections using LINQ and then compare the results.
var toolsAvailableTypes = toolsAvailable.Select(t => t.GetType()).Distinct();
var toolsRequiredTypes = toolsRequired.Select(t => t.GetType()).Distinct();

if (!toolsRequiredTypes.Except(toolsAvailableTypes).Any())
{
    // building - no required type is missing
}
It is not clear whether you should compare only the types of the tools or the quantities as well. My answer assumes you don't care about quantity, so you can build with one spanner even when you require two.
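If quantities do matter, one way to extend the check is to count tools per type and verify that the available count covers the required count. A sketch of that idea (using `typeof(string)`/`typeof(int)` as stand-ins for the tool types so the example is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class QuantityCheckDemo
{
    // True when, for every required type, at least as many
    // tools of that exact type are available.
    static bool CanBuildWithQuantities(List<Type> required, List<Type> available)
    {
        var availableCounts = available
            .GroupBy(t => t)
            .ToDictionary(g => g.Key, g => g.Count());

        return required
            .GroupBy(t => t)
            .All(g => availableCounts.TryGetValue(g.Key, out var n) && n >= g.Count());
    }

    static void Main()
    {
        var required = new List<Type> { typeof(string), typeof(string), typeof(int) };
        var availableEnough = new List<Type> { typeof(string), typeof(string), typeof(int) };
        var availableShort = new List<Type> { typeof(string), typeof(int) };

        Console.WriteLine(CanBuildWithQuantities(required, availableEnough)); // True
        Console.WriteLine(CanBuildWithQuantities(required, availableShort));  // False
    }
}
```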
Update:
To comply with the requirement about subclasses (Sriram Sakthivel mentioned it), you can check whether an available tool type is a subclass of the required tool type, so you can use any SpecialSpanner when you need a Spanner:
var toolsAvailableTypes = toolsAvailable.Select(t => t.GetType()).Distinct().ToList();
var toolsRequiredTypes = toolsRequired.Select(t => t.GetType()).Distinct().ToList();

if (CanBuild(toolsAvailableTypes, toolsRequiredTypes))
{
    Console.WriteLine("building");
}
else
{
    Console.WriteLine("not enough minerals");
}
CanBuild method:
bool CanBuild(List<Type> toolsAvailableTypes, List<Type> toolsRequiredTypes)
{
    foreach (var requiredType in toolsRequiredTypes)
    {
        bool isAvailable = toolsAvailableTypes.Any(availableType =>
            availableType.IsSubclassOf(requiredType) || availableType == requiredType);
        if (!isAvailable)
        {
            return false;
        }
    }
    return true;
}
var reqdTypes = toolsRequired.Select(x => x.GetType());
var availableTypes = toolsAvailable.Select(x => x.GetType());

if (reqdTypes.Except(availableTypes).Any())
{
    // Something exists in reqdTypes which is not in availableTypes
}
Note: this will fail if you provide a more derived type than the expected type. For example, if you provide a SpecialSpanner in place of a Spanner, this won't work.