Any downsides to replacing REST endpoints with SignalR? - c#

I'm building a fairly simple single page app. It's basically a list of items, where each item has some details, an activity log, and a current status along with some buttons to trigger actions on the server to advance the status along a workflow.
It was originally written using MVC and REST/Web API but I got stuck on the problem of keeping concurrent users up to date. For example, if User A adds an item, we want the list on User B's screen to now update to include it.
To solve this I looked into SignalR which works great. But I had a problem.
When adding an item (using POST) the callback adds the item on the requesting client. This is fine.
I then triggered a SignalR broadcast on the server to tell all clients about the new item. This worked fine except for the local client, which now has 2 items.
I looked into filtering out the duplicate id client-side, or sending the connection id with the POST and then broadcasting to all clients except the requester, but it seems needlessly complicated.
Instead I'm just doing this:
public class UpdateHub : Hub
{
    public void AddNewItem(NewItem item)
    {
        // do some server-side stuff, persist in the data store, etc.
        item.trackingID = Guid.NewGuid(); // new Guid() would produce an empty Guid
        item.addLogEntry("new item");
        // ...
        dataStore.addItem(item);

        // send message type and data payload
        Clients.All.broadcastMessage("add", item);
    }
}
It seems a lot simpler to just get rid of all the REST stuff altogether, so am I missing anything important?
It'll run on an intranet for a handful of users using IE11+ and I guess we do lose some commonly-understood semantics around HTTP response codes for error handling, but I don't think that's a huge deal in this situation.

To avoid the duplicate you can use Clients.Others inside the Hub class, or Clients.AllExcept(connectionId) if you are not in the Hub class.
Clients.Others.broadcastMessage("add", item);
In your case using SignalR shouldn't have any downsides.
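If you do decide to keep the REST endpoint and broadcast from outside the hub instead, a rough sketch (assuming SignalR 2; the controller and the idea of posting the caller's connection id are illustrative, not code from the question) could look like this:
using Microsoft.AspNet.SignalR;
using System.Web.Http;

// Sketch only: the client sends its $.connection.hub.id along with the POST, the server
// persists the item and then broadcasts to everyone except that connection.
public class ItemsController : ApiController
{
    [HttpPost]
    public IHttpActionResult Post(NewItem item, string connectionId)
    {
        dataStore.addItem(item); // assumed to be reachable from the controller

        var hubContext = GlobalHost.ConnectionManager.GetHubContext<UpdateHub>();
        hubContext.Clients.AllExcept(connectionId).broadcastMessage("add", item);

        return Ok(item); // the caller updates its own list from the normal HTTP response
    }
}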

Related

Task is not running asynchronously

I have wrapped the action in Task.Run, but it seems I am missing something very basic and I'm unable to figure it out.
public void SaveOrderList(List<Order> inputList)
{
    Dictionary<string, string> result = new Dictionary<string, string>();
    string code = string.Empty;
    Task.Run(() =>
    {
        foreach (var item in inputList)
        {
            code = CreateSingleOrder(item);
            result.Add(item.TicketNumber, code);
        }
        ////TODO: Write logic to send mail
        emailSender.SendEmail("abc@xyz.com");
    });
}
Since there can be many entries in inputList and each entry may take 5 seconds to process, I don't want the UI to be blocked for the end user. Instead, I will send a mail notifying how many were processed successfully and which ones failed.
To achieve this, the best I knew was Task.Run. But the problem is that as soon as the function completes, I see no sign that the code inside the foreach loop ever ran, because nothing makes it to the DB.
Can anyone help me figure out what I am missing here?
Just for information, this function is called from Web API, and the Web API POST method is called from JavaScript. Below is the code for the Web API endpoint.
[HttpPost, Route("SaveOrderList")]
[ResponseType(typeof(bool))]
public IHttpActionResult SaveOrderList(List<Order> orderList)
{
    orderManagerService.SaveOrderList(orderList);
    return this.Ok();
}
Thanks in advance for help.
You need to consider carefully how this works. There are a few suggestions in this article:
https://blog.stephencleary.com/2014/06/fire-and-forget-on-asp-net.html
But I would point out that 'fire and forget' on a web application is usually the wrong approach.
For your example, you really want to consider your UX - if I make an order on your site and then only find out some time later that the order failed (via email, which I may not be checking), I'd not be too impressed. It would be better to await the save result, or make multiple API requests for single order items and show the incremental result of successful orders on your front end.
I'd also suggest a hard look at why your order saving is so slow - this will continue to be problematic for you until it's faster.
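If some flavour of background work really is needed, one commonly suggested middle ground on .NET 4.5.2+ is HostingEnvironment.QueueBackgroundWorkItem, which at least registers the work with the ASP.NET runtime. A minimal sketch, reusing the names from the question (whether those services are safe to use after the request has ended is an assumption you would need to verify):
using System.Web.Hosting;

// Sketch only: QueueBackgroundWorkItem registers the work with ASP.NET so the runtime
// knows about it and tries to delay shutdown until it finishes. CreateSingleOrder and
// emailSender come from the question; their lifetime outside the request is assumed.
public void SaveOrderList(List<Order> inputList)
{
    HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
    {
        var result = new Dictionary<string, string>();
        foreach (var item in inputList)
        {
            result.Add(item.TicketNumber, CreateSingleOrder(item));
        }
        emailSender.SendEmail("abc@xyz.com"); // address copied from the question
    });
}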

Sending multiple emails with transaction behavior (if one fails no emails get sent)

Sorry if I failed to find an existing post with my problem.
What I am trying to do is as follows:
I simply want to send a couple of emails (2-3) to different people and, more importantly, with different content. The emails are official, important and also logged in the system. Long story short, if for some reason one of them fails I need to stop the sending of the others. I need either all of them sent or none of them.
What have I done so far
It is not the first time the system I work on has had to send an automatic email. The application is an ASP.NET MVC website, so some time ago I installed MvcMailer and used it the way it was explained. It worked quite well, and I liked the idea of previewing the email (as you can give it a view to send).
So, in light of my new problem, I read the MvcMailer documentation carefully and did not find anything about sending multiple emails in a transaction-like manner, nor could I think of a way to force them to behave this way. In my tests, even if it is one email with a few CCs, all the working emails still get sent when just one of them fails (wrong email address, closed port... whatever).
I was hoping someone could suggest a way to achieve something like this. I hope my explanation is sufficient; if not, let me know and I will provide all the details required.
If this is impossible with MvcMailer, is it possible with other tools?
My implementation so far: (keep in mind I'm still in the testing stages)
public class MailerController : Controller
{
    private MailerRepository mailerRep;

    public MailerController()
    {
        this.mailerRep = new MailerRepository();
    }

    public void TestTransmittalEmail()
    {
        var model = this.mailerRep.GetTransmittalModel(1234); // I've hard-coded a specific clientId
        var mailer = new TransmittalsMailer();
        mailer.TransmittalEmail(model).Send();
    }
}

public class TransmittalsMailer : MailerBase
{
    public TransmittalsMailer()
    {
    }

    public MvcMailMessage TransmittalEmail(TransmittalManifestModel model)
    {
        var mailMessage = new MvcMailMessage() { Subject = "Transmittals - TESTING EMAIL" };

        // embedding a few images in the email
        var resources = new Dictionary<string, string>();
        resources["logo"] = PDFResourcePaths.VripackLogo;
        resources["companyInfo"] = PDFResourcePaths.CompanyInfoBlock;

        mailMessage.To.Add("test1@email.com");
        mailMessage.Attachments.Add(new Attachment(@"D:\ASD\TransmittalFolders\1\Archives\150812.1433.MMA.rar"));

        ViewData["model"] = model;
        this.PopulateBody(mailMessage, "TransmittalEmailView", resources);
        return mailMessage;
    }
}
This is actually quite a difficult problem to solve, because when you send an email it will work as long as the SMTP server can be found (no exception will be thrown).
So you basically have to wait some arbitrary amount of time, then check the inbox of the address you sent from for a delivery failure. If there is a delivery failure, you stop sending.
So, long story short, you probably shouldn't do this. You should simply send all three and notify yourself some other way (probably another email) that there was a failure and which email failed.
You can check whether an exception is thrown and then cancel the sending of subsequent emails until the issue is addressed. However, this only deals with issues on your end when sending each individual email. If the first two emails are sent successfully and the last throws an exception, you can't unsend the first two.
A possible workaround (ugly as it is) would be to initially send the emails to an address you control. If an exception is thrown at this step, log the error and do not send any further emails until the error is dealt with. If no exceptions are raised, send the emails for real. However, this does not and cannot handle issues that may occur on the recipient's side.
A delivery failure notice is not guaranteed either: depending on the configuration of your SMTP server and the recipient's, a delivery failure notification might not be sent at all. Even when such notifications are enabled, they may not arrive for days (at one of my previous jobs, delivery failure notifications were not sent until 14 days had elapsed).
Ideally, this should be dealt with at the initial input of the email addresses, before you ever send any documents to anyone. You simply send a verification email to the address in question and have the user click a verification link back to a web service you control. Until this has been done, you don't send any emails to the intended recipient.
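To illustrate the "stop at the first failure" part, which is about the most you can guarantee from the sending side, here is a rough sketch using plain System.Net.Mail; the helper name and the idea of pre-building every message first are assumptions, not MvcMailer API:
using System.Collections.Generic;
using System.Linq;
using System.Net.Mail;

// Sketch only: build every message first, then send them one by one and stop at the
// first failure. This cannot unsend messages that already went out, and it cannot
// detect downstream delivery failures; it only stops the remaining sends.
public bool TrySendAll(IEnumerable<MailMessage> messages)
{
    var prepared = messages.ToList(); // fail early if building any message throws

    using (var smtp = new SmtpClient()) // configured from web.config as usual
    {
        foreach (var message in prepared)
        {
            try
            {
                smtp.Send(message);
            }
            catch (SmtpException)
            {
                // log the exception, flag the batch as partially sent, stop sending the rest
                return false;
            }
        }
    }
    return true;
}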

Contact presence/status on Lync 2013 SDK shows "Presence unknown" until manual client search

I'm working on an automation service for Lync that will automatically add people to an IM conversation based on their availability/Lync "presence". It essentially goes down a list, checks who is online, and adds the first person to a call.
The problem is that sometimes (usually when Lync had to be restarted), it does not fetch the contact's presence.
First I just had it grab the presence. Then I added code to check for the ContactInformationChanged event firing, but that does not seem to happen unless I go into the app and manually type the alias I'm looking for.
Is there a Refresh() method I'm missing somewhere? Or is there any way to force it to find this? Here are my search methods:
public Contact GetContact(string emailAddress)
{
    Contact user;
    lock (ContactLookupCache)
    {
        while (!ContactLookupCache.TryGetValue(emailAddress.ToLower(), out user))
        {
            lock (Client)
            {
                Client.ContactManager.BeginSearch(emailAddress, this.HandleContactLookup, null);
            }
            Monitor.Wait(ContactLookupCache);
        }
    }
    return user;
}

public string GetContactPresenceState(Contact contact)
{
    string presenceStatus = contact.GetContactInformation(ContactInformationType.Activity).ToString();

    // see if the status is either "Presence unknown" or "Updating..."
    if (IsUnknownPresenceState(presenceStatus))
    {
        lock (contact)
        {
            // bug?? This event seems to only fire sometimes when you search on the app for contact details
            contact.ContactInformationChanged += (object sender, ContactInformationChangedEventArgs e) =>
            {
                if (e.ChangedContactInformation.Contains(ContactInformationType.Activity))
                {
                    lock (contact)
                    {
                        presenceStatus = contact.GetContactInformation(ContactInformationType.Activity).ToString();
                        if (!IsUnknownPresenceState(presenceStatus))
                            Monitor.PulseAll(contact);
                    }
                }
            };
            Monitor.Wait(contact);
        }
    }
    return presenceStatus;
}
Also, sorry for the crappy code... I was just trying to get it to work and kept throwing more junk code in hoping something would help.
Could you verify that the code works fine for all the contacts in your contact list, and that it's only the ones that aren't listed for which presence change events aren't raised correctly?
This makes sense to me given you are using the client SDK which will only tell you about events the client is interested in. For example it would be pretty traffic intensive if all 85,000 clients received the presence changes for the other 85,000 clients in a company.
I think you are in the realms of either polling the presence at regular intervals or adding the contacts to the client (perhaps under a relevant group just to keep things tidy).
Failing that, you may want to look into the UCMA SDK, which is better suited to centralised services than the client SDK.
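If you go the polling route, a rough sketch along those lines, reusing the GetContact and IsUnknownPresenceState helpers from the question (the 15-second interval and the callback body are placeholders):
using System;
using System.Threading;

// Sketch only: poll the contact's activity every few seconds instead of relying solely
// on ContactInformationChanged. GetContact and IsUnknownPresenceState come from the
// question; the interval is arbitrary.
private Timer presencePoller;

public void StartPresencePolling(string emailAddress)
{
    var contact = GetContact(emailAddress);
    presencePoller = new Timer(_ =>
    {
        string status = contact.GetContactInformation(ContactInformationType.Activity).ToString();
        if (!IsUnknownPresenceState(status))
        {
            // use the status, e.g. decide whether to add the person to the IM conversation
        }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(15));
}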

SignalR not adding server callbacks

Whenever I attempt to add a new server callback function, I cannot get the callback to show up in the $.connection server callback list.
What do I need to do to refresh the JavaScript that SignalR produces and sends to the client so that it contains the latest list of server callbacks?
I would think that I should be able to just add a server callback for the client to call, rebuild my app, fire up a new instance of Google Chrome, and the newly added server callback would be in the list of available callbacks on the client.
For example here is exactly what I've done.
1.) A client joins a group and everyone is notified.
public override Task OnConnected()
{
    string pid = this.Context.QueryString["pid"],
           uid = this.Context.QueryString["uid"],
           ispro = this.Context.QueryString["ispro"];

    Groups.Add(this.Context.ConnectionId, pid);
    return Clients.Group(pid).joined(new cmsg(Context.ConnectionId, UtilCommon.GetUserMini(new Guid(uid), bool.Parse(ispro))));
}
2.) On the client the joined function is called from the server
this.collaborateHub.client.joined = function (cmsg) {
    that.chatBox.addAttendee(cmsg);

    // let the new attendee know about you
    if (cmsg.cnnid !== that.chatBox.getMeAttendee().cnnid) {
        that.chatBox.getMeAttendee().newbieid = cmsg.cnnid;
        debugger
        this.server.addMeNewbie(that.chatBox.getMeAttendee());
    }
};
3.) Now, if someone joined and it was not the user of the currently opened window, that means the person who just joined is someone other than myself, so I need to call the server back and notify the newly joined user to add me to their user list. Instead of having to keep track of the currently signed-on users for the given group in a database, I am just using every signed-on client of the group as a distributed database and letting them manage their own info.
So, I need to call the server callback named addMeNewbie; however, this callback function is not available in the list.
public Task addMeNewbie(cmsg cmsg)
{
    return Clients.Client(cmsg.newbieid).addCurrAttendee(cmsg);
}
Here is a snapshot of the client side in debug mode
4.) And finally, here is the client-side callback that I need the server to call so that the newly joined group member can update their list of current group members.
this.collaborateHub.client.addCurrAttendee = function (cmsg) {
    debugger
    that.chatBox.addAttendee(cmsg);
};
I would think that I should be able to add as many server callbacks as I want and they should show up on the next build and restart of a new instance of Google Chrome; however, this is not the case. What do I need to do to get newly added server callbacks to show up in the available server callbacks on the client side?
This doesn't make sense to me, but I am thinking maybe this is a caching issue or something?
The only callback that is available is the one shown in the snapshot provided. At this time I've only got two callbacks, the one that you see in the snapshot and the one that is not visible.
Somewhere along the way I created a signalr-hubs.js file from the generated JavaScript file that can be retrieved from http://localhost:3250/signalr/hubs, and I was adding new server functions to my hub but not updating the signalr-hubs.js file.
At this point, every time I add a new server callback I retrieve the newly generated proxy by putting the URL http://localhost:3250/signalr/hubs into the browser, copying the code, and then updating my signalr-hubs.js file.

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach (var stock in StockService.GetStocksWithBigNews())
{
    var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
    var connections = /* find connections associated with user ids */;
    foreach (var connection in connections)
    {
        connection.Send(...);
    }
}
In this question on filtering connections, they mention that I could keep current connections in memory but (1) it's bad for scaling and (2) it's bad for multi node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this as well. I used 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (some servers doing processing work); when they start up they invoke ComputeUnitRegister.
await HubProxy.Invoke("ComputeUnitReqister", _ComputeGuid); // the name must match the hub method below
and every time they do something they call
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is :
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable ?
RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint"):
"http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app, or whatever on .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();
public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitReqister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for these calls (let me know if you need to see that code as well). Basically, they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever the server compute unit is doing some work it calls Running, which triggers a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how Groups and Connections can be used. And lastly, it's also scaled out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
    serviceBusConnectionString,
    2,
    3,
    GetRoleInstanceNumber(),
    topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus in Azure; remember the Provider=SharedSecret. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out over. I used 3 because I have 2 Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2).
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at : http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server they are given a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even the SQL Server backplane (more to come), for example. So, long story short, we take care of your scale-out scenarios for you via backplanes/service buses without you having to worry about it.
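For the original question of targeting a specific user, a minimal sketch of the group-per-user idea (the hub name and client method are illustrative, StockService/UserService come from the question's pseudo-code, and it assumes a backplane is configured as described above so it works across nodes):
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Sketch only: name a group after each user id, then target that group whenever a stock
// update arrives. No connection ids need to be tracked by your own code.
public class StockTickerHub : Hub
{
    public override async Task OnConnected()
    {
        string userId = Context.User.Identity.Name; // or however you identify users
        await Groups.Add(Context.ConnectionId, userId);
        await base.OnConnected();
    }
}

// Somewhere outside the hub, e.g. the process that detects stock news:
public void NotifyBigNews()
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<StockTickerHub>();
    foreach (var stock in StockService.GetStocksWithBigNews())
    {
        foreach (var userId in UserService.GetUserIdsThatCareAboutStock(stock))
        {
            hubContext.Clients.Group(userId).stockUpdated(stock);
        }
    }
}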
