Using "Queue" in C# for sending SMS - c#

I am sending SMS to clients from transactional data. It works fine, but sometimes the staging table gets overwhelmed by the data load. I would like to queue the data and process it in first-in-first-out (FIFO) order. I am still learning C#, so please help me implement the following code in a smarter way.
I am sharing what I did.
private static void Main(string[] args)
{
    var datetime = DateTime.Now;
    var hr = datetime.Hour;
    var mm = datetime.Minute + 1;
    var ss = double.Parse(ConfigurationManager.AppSettings["SMSTIME"]);
    // For Interval in Seconds
    // This Scheduler will start at current time + 1 minute and call after every 3 Seconds
    // IntervalInSeconds(start_hour, start_minute, seconds)
    SchedulerTasks.IntervalInSeconds(hr, mm, ss,
        () =>
        {
            CreateNewSms.Instance.PostNewSmsText();
        });
    Console.ReadLine();
}
public async void PostNewSmsText()
{
    var smsDt = await Task.Run(() => DataAccessLayer.Instance.InsertSmsDataToLocalDb());
    if (smsDt.Rows.Count != 0)
    {
        var smsData = GetSmsData(smsDt);
        var xmlRespDoc = new XmlDocument();
        await Task.Run(() =>
        {
            foreach (var sms in smsData)
            {
                var smsText = GetSmsText(sms.Channel, sms.AccountNumber, sms.Amount, sms.DrCr, sms.CurrentBalance, sms.BranchName,
                    sms.CurrencyName, sms.TrnDescription, sms.UbsTrnDateTime);
                var smsSubmitTime = DateTime.Now.ToString(CultureInfo.InvariantCulture);
                var response = ExecuteSmsService.Instance.PostSmsText(sms.MobileNo, smsText, sms.TrnRefNo);
                var smsDeliveryTime = DateTime.Now.ToString(CultureInfo.InvariantCulture);
                var operatorInfo = ConfigurationManager.AppSettings["Operator"];
                xmlRespDoc.LoadXml(response);
                var smsCsmsId = xmlRespDoc.SelectSingleNode("//SMSINFO/CSMSID")?.InnerText;
                var smsRefNo = xmlRespDoc.SelectSingleNode("//SMSINFO/REFERENCEID")?.InnerText;
                var smsDeliveryStatus = xmlRespDoc.SelectSingleNode("//SMSINFO/MSISDNSTATUS")?.InnerText;
                if (smsRefNo != null) // Save response data if sms delivery successful
                {
                    DataAccessLayer.Instance.InsertSmsResponseDetails(
                        sms.EntrySerialNo,
                        smsCsmsId,
                        smsRefNo,
                        smsText,
                        smsSubmitTime,
                        smsDeliveryTime,
                        true,
                        "Success",
                        operatorInfo);
                    DataAccessLayer.Instance.UpdateSmsStatus(sms.EntrySerialNo);
                    DataAccessLayer.Instance.DeleteSmsData(sms.EntrySerialNo, true);
                    Console.WriteLine("SMS successfully sent to: " + sms.MobileNo);
                }
                else
                {
                    // If any sms delivery failed here we set a counter to retry again in next date
                    DataAccessLayer.Instance.SetSmsCounter(sms.EntrySerialNo);
                    // Save response data if sms delivery unsuccessful with failed status
                    DataAccessLayer.Instance.InsertSmsResponseDetails(
                        sms.EntrySerialNo,
                        smsCsmsId,
                        null,
                        smsText,
                        smsSubmitTime,
                        smsDeliveryTime,
                        false,
                        smsDeliveryStatus,
                        operatorInfo);
                    DataAccessLayer.Instance.DeleteSmsData(sms.EntrySerialNo, false);
                    Console.WriteLine("Invalid mobile no: " + sms.MobileNo);
                }
            }
        });
    }
    else
    {
        Console.WriteLine("Waiting for transactions.....");
    }
}
I was searching for queue implementations and found the class below, but I am confused about how to use it in my code.
public class SendSmsQueue
{
    private ConcurrentQueue<object> _jobs = new ConcurrentQueue<object>();

    public SendSmsQueue()
    {
        var thread = new Thread(new ThreadStart(OnStart));
        thread.IsBackground = true;
        thread.Start();
    }

    public void Enqueue(object job)
    {
        _jobs.Enqueue(job);
    }

    private void OnStart()
    {
        while (true)
        {
            if (_jobs.TryDequeue(out object result))
            {
                Console.WriteLine(result);
            }
        }
    }
}
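To tie that class into the code above, here is a minimal sketch of one way it could look (my assumption of how to wire it up, not tested against the rest of the project): the scheduled job only enqueues rows, and a single background consumer sends them one at a time in FIFO order. It uses BlockingCollection<T> over a ConcurrentQueue<T> so the consumer blocks when the queue is empty instead of spinning on TryDequeue. SmsJob and ProcessJob are hypothetical placeholders; the actual ExecuteSmsService/DataAccessLayer calls from PostNewSmsText would go inside ProcessJob.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical DTO standing in for one row of the staging table.
public sealed class SmsJob
{
    public string MobileNo { get; set; }
    public string Text { get; set; }
    public long EntrySerialNo { get; set; }
}

public sealed class SendSmsQueue : IDisposable
{
    // BlockingCollection wraps a ConcurrentQueue, so items come out in FIFO order
    // and the consumer blocks instead of busy-waiting when the queue is empty.
    private readonly BlockingCollection<SmsJob> _jobs =
        new BlockingCollection<SmsJob>(new ConcurrentQueue<SmsJob>());
    private readonly Task _consumer;

    public SendSmsQueue()
    {
        _consumer = Task.Run(() => Consume());
    }

    public void Enqueue(SmsJob job) => _jobs.Add(job);

    private void Consume()
    {
        // GetConsumingEnumerable blocks until an item is available
        // and ends when CompleteAdding() is called.
        foreach (var job in _jobs.GetConsumingEnumerable())
        {
            try
            {
                ProcessJob(job); // send one SMS, save the response, update/delete the row, etc.
            }
            catch (Exception ex)
            {
                Console.WriteLine("Failed to send to " + job.MobileNo + ": " + ex.Message);
            }
        }
    }

    private void ProcessJob(SmsJob job)
    {
        // Placeholder: the SMS gateway and data-access calls from PostNewSmsText go here.
        Console.WriteLine("Sending SMS to " + job.MobileNo);
    }

    public void Dispose()
    {
        _jobs.CompleteAdding();
        _consumer.Wait();
    }
}
With this in place, the scheduled PostNewSmsText would only read the staging table and call Enqueue for each row, so a large data load just makes the queue longer instead of overlapping sends.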

Related

ZeroMQ Broadcast with Proxy losing messages

I am doing some performance tests on ZeroMQ in order to compare it with others like RabbitMQ and ActiveMQ.
In my broadcast tests, to avoid "the dynamic discovery problem" described in the ZeroMQ documentation, I use a proxy. In my scenario, I have 50 concurrent publishers, each sending 500 messages with a 1 ms delay between sends. Each message is then read by 50 subscribers. And as the title says, I am losing messages: each subscriber should receive a total of 25000 messages, but each receives only between 5000 and 10000.
I am using Windows and C# .Net client clrzmq4 (4.1.0.31).
I have already tried some solutions that I found in other posts (see also the high-water-mark sketch after this list):
I set Linger to TimeSpan.MaxValue.
I set ReceiveHighWatermark to 0 (documented as infinite, but I also tried Int32.MaxValue).
I checked for slow-starting receivers: I made the receivers start a few seconds before the publishers.
I made sure that the socket instances are not garbage collected (Linger should cover this, but just to be sure).
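One more setting that might be worth checking, and this is my assumption rather than something tried in the question: PUB/XPUB sockets silently drop outgoing messages once their send-side high-water mark is full, so setting only ReceiveHighWatermark may not be enough. A minimal sketch, assuming clrzmq4's ZSocket exposes SendHighWatermark alongside ReceiveHighWatermark:
using ZeroMQ;

class SendHwmSketch
{
    static void Configure()
    {
        using (var context = new ZContext())
        using (var publisher = new ZSocket(context, ZSocketType.PUB))
        {
            // 0 means "no limit" for the underlying ZMQ_SNDHWM option.
            publisher.SendHighWatermark = 0;
            publisher.ReceiveHighWatermark = 0;
            // ... CURVE setup and Connect as in MCVEPublisher below ...
            // The same would apply to the xpubSocket/xsubSocket pair in the proxy.
        }
    }
}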
I have a similar scenario (with similar logic) using NetMQ, and it works fine. That scenario does not use security, though, while this one does (which is also why I use clrzmq here: I need client authentication with certificates, which is not yet possible with NetMQ).
EDIT:
public class MCVEPublisher
{
public void publish(int numberOfMessages)
{
string topic = "TopicA";
ZContext ZContext = ZContext.Create();
ZSocket publisher = new ZSocket(ZContext, ZSocketType.PUB);
//Security
// Create or load certificates
ZCert serverCert = Main.GetOrCreateCert("publisher");
var actor = new ZActor(ZContext, ZAuth.Action, null);
actor.Start();
// send CURVE settings to ZAuth
actor.Frontend.Send(new ZFrame("VERBOSE"));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("ALLOW"), new ZFrame("127.0.0.1") }));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("CURVE"), new ZFrame(".curve") }));
publisher.CurvePublicKey = serverCert.PublicKey;
publisher.CurveSecretKey = serverCert.SecretKey;
publisher.CurveServer = true;
publisher.Linger = TimeSpan.MaxValue;
publisher.ReceiveHighWatermark = Int32.MaxValue;
publisher.Connect("tcp://127.0.0.1:5678");
Thread.Sleep(3500);
for (int i = 0; i < numberOfMessages; i++)
{
Thread.Sleep(1);
var update = $"{topic} {"message"}";
using (var updateFrame = new ZFrame(update))
{
publisher.Send(updateFrame);
}
}
//just to make sure it does not end instantly
Thread.Sleep(60000);
//just to make sure publisher is not garbage collected
ulong Affinity = publisher.Affinity;
}
}
public class MCVESubscriber
{
private ZSocket subscriber;
private List<string> prints = new List<string>();
public void read()
{
string topic = "TopicA";
var context = new ZContext();
subscriber = new ZSocket(context, ZSocketType.SUB);
//Security
ZCert serverCert = Main.GetOrCreateCert("xpub");
ZCert clientCert = Main.GetOrCreateCert("subscriber");
subscriber.CurvePublicKey = clientCert.PublicKey;
subscriber.CurveSecretKey = clientCert.SecretKey;
subscriber.CurveServer = true;
subscriber.CurveServerKey = serverCert.PublicKey;
subscriber.Linger = TimeSpan.MaxValue;
subscriber.ReceiveHighWatermark = Int32.MaxValue;
// Connect
subscriber.Connect("tcp://127.0.0.1:1234");
subscriber.Subscribe(topic);
while (true)
{
using (var replyFrame = subscriber.ReceiveFrame())
{
string messageReceived = replyFrame.ReadString();
messageReceived = Convert.ToString(messageReceived.Split(' ')[1]);
prints.Add(messageReceived);
}
}
}
public void PrintMessages()
{
Console.WriteLine("printing " + prints.Count);
}
}
public class Main
{
static void Main(string[] args)
{
broadcast(500, 50, 50, 30000);
}
public static void broadcast(int numberOfMessages, int numberOfPublishers, int numberOfSubscribers, int timeOfRun)
{
new Thread(() =>
{
using (var context = new ZContext())
using (var xsubSocket = new ZSocket(context, ZSocketType.XSUB))
using (var xpubSocket = new ZSocket(context, ZSocketType.XPUB))
{
//Security
ZCert serverCert = GetOrCreateCert("publisher");
ZCert clientCert = GetOrCreateCert("xsub");
xsubSocket.CurvePublicKey = clientCert.PublicKey;
xsubSocket.CurveSecretKey = clientCert.SecretKey;
xsubSocket.CurveServer = true;
xsubSocket.CurveServerKey = serverCert.PublicKey;
xsubSocket.Linger = TimeSpan.MaxValue;
xsubSocket.ReceiveHighWatermark = Int32.MaxValue;
xsubSocket.Bind("tcp://*:5678");
//Security
serverCert = GetOrCreateCert("xpub");
var actor = new ZActor(ZAuth.Action0, null);
actor.Start();
// send CURVE settings to ZAuth
actor.Frontend.Send(new ZFrame("VERBOSE"));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("ALLOW"), new ZFrame("127.0.0.1") }));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("CURVE"), new ZFrame(".curve") }));
xpubSocket.CurvePublicKey = serverCert.PublicKey;
xpubSocket.CurveSecretKey = serverCert.SecretKey;
xpubSocket.CurveServer = true;
xpubSocket.Linger = TimeSpan.MaxValue;
xpubSocket.ReceiveHighWatermark = Int32.MaxValue;
xpubSocket.Bind("tcp://*:1234");
using (var subscription = ZFrame.Create(1))
{
subscription.Write(new byte[] { 0x1 }, 0, 1);
xpubSocket.Send(subscription);
}
Console.WriteLine("Intermediary started, and waiting for messages");
// proxy messages between frontend / backend
ZContext.Proxy(xsubSocket, xpubSocket);
Console.WriteLine("end of proxy");
//just to make sure it does not end instantly
Thread.Sleep(60000);
//just to make sure xpubSocket and xsubSocket are not garbage collected
ulong Affinity = xpubSocket.Affinity;
int ReceiveHighWatermark = xsubSocket.ReceiveHighWatermark;
}
}).Start();
Thread.Sleep(5000); //to make sure proxy started
List<MCVESubscriber> Subscribers = new List<MCVESubscriber>();
for (int i = 0; i < numberOfSubscribers; i++)
{
MCVESubscriber ZeroMqSubscriber = new MCVESubscriber();
new Thread(() =>
{
ZeroMqSubscriber.read();
}).Start();
Subscribers.Add(ZeroMqSubscriber);
}
Thread.Sleep(10000);//to make sure all subscribers started
for (int i = 0; i < numberOfPublishers; i++)
{
MCVEPublisher ZeroMqPublisherBroadcast = new MCVEPublisher();
new Thread(() =>
{
ZeroMqPublisherBroadcast.publish(numberOfMessages);
}).Start();
}
Thread.Sleep(timeOfRun);
foreach (MCVESubscriber Subscriber in Subscribers)
{
Subscriber.PrintMessages();
}
}
public static ZCert GetOrCreateCert(string name, string curvpath = ".curve")
{
ZCert cert;
string keyfile = Path.Combine(curvpath, name + ".pub");
if (!File.Exists(keyfile))
{
cert = new ZCert();
Directory.CreateDirectory(curvpath);
cert.SetMeta("name", name);
cert.Save(keyfile);
}
else
{
cert = ZCert.Load(keyfile);
}
return cert;
}
}
This code also produces the expected number of messages when security is disabled, but with security turned on it does not.
Does anyone know of something else to check, or has this happened to anyone before?
Thanks

M2MQTT client disconnecting without an exception or error message

I'm trying to create an API that consumes various topics.
To do this, I'm multi-threading things so that the whole thing can later be scaled out into multiple APIs, but that's beside the point.
I'm using ASP.NET Core 4.0, if that has anything to do with it, and Entity Framework as well.
My problem is that the connection to my Mosquitto server breaks after a minute or so without throwing an exception or anything of the kind. It doesn't matter how big the messages are or how many are exchanged. I have no idea how to create a callback, or anything of the kind, to find out what's going on with my connection. Can anyone help?
I'll include the code I use to establish a connection and subscribe to a topic below. Using the Subscribe method or doing it manually changes nothing. I'm at a loss here.
Thanks in advance!
Main.cs:
Task.Factory.StartNew(() => DataflowController.ResumeQueuesAsync());
BuildWebHost(args).Run();
DataflowController.cs:
public static Boolean Subscribe(String topic)
{
Console.WriteLine("Hello from " + topic);
MqttClient mqttClient = new MqttClient(brokerAddress);
byte code = mqttClient.Connect(Guid.NewGuid().ToString());
// Register to message received
mqttClient.MqttMsgPublishReceived += client_recievedMessageAsync;
string clientId = Guid.NewGuid().ToString();
mqttClient.Connect(clientId);
// Subscribe to topic
mqttClient.Subscribe(new String[] { topic }, new byte[] { MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE });
System.Console.ReadLine();
return true;
}
public static async Task ResumeQueuesAsync()
{
var mongoClient = new MongoClient(connectionString);
var db = mongoClient.GetDatabase(databaseName);
var topics = db.GetCollection<BsonDocument>(topicCollection);
var filter = new BsonDocument();
List<BsonDocument> result = topics.Find(filter).ToList();
var resultSize = result.Count;
Task[] subscriptions = new Task[resultSize];
MqttClient mqttClient = new MqttClient(brokerAddress);
byte code = mqttClient.Connect(Guid.NewGuid().ToString());
// Register to message received
mqttClient.MqttMsgPublishReceived += client_recievedMessageAsync;
string clientId = Guid.NewGuid().ToString();
mqttClient.Connect(clientId);
int counter = 0;
foreach(var doc in result)
{
subscriptions[counter] = new Task(() =>
{
Console.WriteLine("Hello from " + doc["topic"].ToString());
// Subscribe to topic
mqttClient.Subscribe(new String[] { doc["topic"].ToString() }, new byte[] { MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE });
System.Console.ReadLine();
});
counter++;
}
foreach(Task task in subscriptions)
{
task.Start();
}
}
static async void client_recievedMessageAsync(object sender, MqttMsgPublishEventArgs e)
{
// Handle message received
var message = System.Text.Encoding.Default.GetString(e.Message);
var topic = e.Topic;
var id = topic.Split("/")[2];
BsonDocument doc = new BsonDocument {
{"Plug ID", id },
{"Consumption", message }
};
await Save(doc, "smartPDM_consumption");
System.Console.WriteLine("Message received from " + topic + " : " + message);
}
This line was the issue:
byte code = mqttClient.Connect(Guid.NewGuid().ToString());
Deleted it, and it just worked.
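For reference, here is a sketch of the Subscribe method from DataflowController.cs with the duplicate Connect removed (connecting twice with two different client IDs from the same client object is presumably what was dropping the session). The ConnectionClosed handler is my addition, assuming the M2Mqtt MqttClient event of that name, as one way to get a callback when the broker or the network drops the connection:
public static Boolean Subscribe(String topic)
{
    Console.WriteLine("Hello from " + topic);
    MqttClient mqttClient = new MqttClient(brokerAddress);

    // Register to message received before connecting
    mqttClient.MqttMsgPublishReceived += client_recievedMessageAsync;
    // Not in the original code: log when the connection is closed by the broker/network
    mqttClient.ConnectionClosed += (sender, e) => Console.WriteLine("Connection to broker closed");

    // Connect exactly once
    string clientId = Guid.NewGuid().ToString();
    mqttClient.Connect(clientId);

    // Subscribe to topic
    mqttClient.Subscribe(new String[] { topic }, new byte[] { MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE });
    System.Console.ReadLine();
    return true;
}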

SignalR 2 still sees the connection as live even after the internet cuts out on the client side

I configure the server as follows in Startup.cs:
GlobalHost.HubPipeline.RequireAuthentication();
// Make long polling connections wait a maximum of 110 seconds for a
// response. When that time expires, trigger a timeout command and
// make the client reconnect.
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(40);
// Wait a maximum of 30 seconds after a transport connection is lost
// before raising the Disconnected event to terminate the SignalR connection.
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
// For transports other than long polling, send a keepalive packet every
// 10 seconds.
// This value must be no more than 1/3 of the DisconnectTimeout value.
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
GlobalHost.HubPipeline.AddModule(new SOHubPipelineModule());
var hubConfiguration = new HubConfiguration { EnableDetailedErrors = true };
var heartBeat = GlobalHost.DependencyResolver.Resolve<ITransportHeartbeat>();
var monitor = new PresenceMonitor(heartBeat);
monitor.StartMonitoring();
app.MapSignalR(hubConfiguration);
where PresenceMonitor is the class responsible for checking for dead connections, which I keep in the database using the following code:
public class PresenceMonitor
{
private readonly ITransportHeartbeat _heartbeat;
private Timer _timer;
// How often we plan to check if the connections in our store are valid
private readonly TimeSpan _presenceCheckInterval = TimeSpan.FromSeconds(40);
// How many periods need pass without an update to consider a connection invalid
private const int periodsBeforeConsideringZombie = 1;
// The number of seconds that have to pass to consider a connection invalid.
private readonly int _zombieThreshold;
public PresenceMonitor(ITransportHeartbeat heartbeat)
{
_heartbeat = heartbeat;
_zombieThreshold = (int)_presenceCheckInterval.TotalSeconds * periodsBeforeConsideringZombie;
}
public async void StartMonitoring()
{
if (_timer == null)
{
_timer = new Timer(_ =>
{
try
{
Check();
}
catch (Exception ex)
{
// Don't throw on background threads, it'll kill the entire process
Trace.TraceError(ex.Message);
}
},
null,
TimeSpan.Zero,
_presenceCheckInterval);
}
}
private async void Check()
{
// Get all connections on this node and update the activity
foreach (var trackedConnection in _heartbeat.GetConnections())
{
if (!trackedConnection.IsAlive)
{
await trackedConnection.Disconnect();
continue;
}
var log = AppLogFactory.Create<WebApiApplication>();
log.Info($"{trackedConnection.ConnectionId} still live ");
var connection = await (new Hubsrepository()).FindAsync(c => c.ConnectionId == trackedConnection.ConnectionId);
// Update the client's last activity
if (connection != null)
{
connection.LastActivity = DateTimeOffset.UtcNow;
await (new Hubsrepository()).UpdateAsync(connection, connection.Id).ConfigureAwait(false);
}
}
// Now check all db connections to see if there's any zombies
// Remove all connections that haven't been updated based on our threshold
var hubRepository = new Hubsrepository();
var zombies =await hubRepository.FindAllAsync(c =>
SqlFunctions.DateDiff("ss", c.LastActivity, DateTimeOffset.UtcNow) >= _zombieThreshold);
// We're doing ToList() since there's no MARS support on azure
foreach (var connection in zombies.ToList())
{
await hubRepository.DeleteAsync(connection);
}
}
}
and my hub's connect, disconnect, and reconnect handlers look like this:
public override async Task OnConnected()
{
var log = AppLogFactory.Create<WebApiApplication>();
if (Context.QueryString["transport"] == "webSockets")
{
log.Info($"Connection is Socket");
}
if (Context.Headers.Any(kv => kv.Key == "CMSId"))
{
// Check For security
var hederchecker = CryptLib.Decrypt(Context.Headers["CMSId"]);
if (string.IsNullOrEmpty(hederchecker))
{
log.Info($"CMSId cannot be decrypted {Context.Headers["CMSId"]}");
return;
}
log.Info($" {hederchecker} CMSId online at {DateTime.UtcNow} ");
var user = await (new UserRepository()).FindAsync(u => u.CMSUserId == hederchecker);
if (user != null)
await (new Hubsrepository()).AddAsync(new HubConnection()
{
UserId = user.Id,
ConnectionId = Context.ConnectionId,
UserAgent = Context.Request.Headers["User-Agent"],
LastActivity = DateTimeOffset.UtcNow
}).ConfigureAwait(false);
//_connections.Add(hederchecker, Context.ConnectionId);
}
return;
}
public override async Task OnDisconnected(bool stopCalled)
{
try
{
//if (!stopCalled)
{
var hubRepo = (new Hubsrepository());
var connection = await hubRepo.FindAsync(c => c.ConnectionId == Context.ConnectionId);
if (connection != null)
{
var user = await (new UserRepository()).FindAsync(u => u.Id == connection.UserId);
await hubRepo.DeleteAsync(connection);
if (user != null)
{
//var log = AppLogFactory.Create<WebApiApplication>();
//log.Info($"CMSId cannot be decrypted {cmsId}");
using (UserStatusRepository repo = new UserStatusRepository())
{
//TODO :: To be changed immediatley in next release , Date of change 22/02/2017
var result = await (new CallLogRepository()).CallEvent(user.CMSUserId);
if (result.IsSuccess)
{
var log = AppLogFactory.Create<WebApiApplication>();
var isStudent = await repo.CheckIfStudent(user.CMSUserId);
log.Info($" {user.CMSUserId} CMSId Disconnected here Before Set offline at {DateTime.UtcNow} ");
var output = await repo.OfflineUser(user.CMSUserId);
log.Info($" {user.CMSUserId} CMSId Disconnected here after Set offline at {DateTime.UtcNow} ");
if (output)
{
log.Info($" {user.CMSUserId} CMSId Disconnected at {DateTime.UtcNow} ");
Clients.All.UserStatusChanged(user.CMSUserId, false, isStudent);
}
}
}
}
}
}
}
catch (Exception e)
{
var log = AppLogFactory.Create<WebApiApplication>();
log.Error($"CMSId cannot Faild to be offline {Context.ConnectionId} with error {e.Message}{Environment.NewLine}{e.StackTrace}");
}
}
public override async Task OnReconnected()
{
string name = Context.User.Identity.Name;
var log = AppLogFactory.Create<WebApiApplication>();
log.Info($" {name} CMSId Reconnected at {DateTime.UtcNow} ");
var connection = await (new Hubsrepository()).FindAsync(c => c.ConnectionId == Context.ConnectionId);
if (connection == null)
{
var user = await (new UserRepository()).FindAsync(u => u.CMSUserId == name);
if (user != null)
await (new Hubsrepository()).AddAsync(new HubConnection()
{
UserId = user.Id,
ConnectionId = Context.ConnectionId,
UserAgent = Context.Request.Headers["User-Agent"],
LastActivity = DateTimeOffset.UtcNow
}).ConfigureAwait(false);
}
else
{
connection.LastActivity = DateTimeOffset.UtcNow;
await (new Hubsrepository()).UpdateAsync(connection, connection.Id).ConfigureAwait(false);
}
}
All test cases pass except this one: when the internet is cut on the client side, the connection stays live for more than 10 minutes. Is this related to authentication, or is some configuration wrong on my side? Any help is appreciated; I really don't know what's wrong. The client uses the WebSocket transport.

How to batch-queue records, execute them on a different thread, and wait until it's over?

I am using PushSharp version 4.0.4.
I am using it in a Windows application.
I have three main methods:
1- BroadCastToAll
2- BroadCastToIOS
3- BroadCastToAndriod
I have a button called Send. In the button's click event, I call the BroadCastToAll function.
private void btnSend_Click(object sender, EventArgs e)
{
    var url = "www.mohammad-jouhari.com";
    var promotion = new Promotion();
    BroadCastToAll(promotion, url);
}
Here is the BroadCastToAll function:
public void BroadCastToAll(Promotion promotion, string url)
{
    var deviceCatalogs = GetDeviceCatalog();
    BroadCastToIOS(promotion, url, deviceCatalogs.Where(d => d.OS == "IOS").ToList());
    BroadCastToAndriod(promotion, url, deviceCatalogs.Where(d => d.OS == "Android").ToList());
}
Here is the BroadCastToIOS function:
public void BroadCastToIOS(Promotion promotion, string url, List<DeviceCatalog> deviceCatalogs)
{
    if (deviceCatalogs.Count == 0)
        return;
    lock (_lock) // Added this lock because there is a potential chance that the PushSharp callback executes while devices are being registered
    {
        QueueAllAppleDevicesForNotification(promotion, url, deviceCatalogs);
    }
}
Here is the BroadCastToAndriod function:
public void BroadCastToAndriod(Promotion promotion, string url, List<DeviceCatalog> deviceCatalogs)
{
    if (deviceCatalogs.Count == 0)
        return;
    lock (_lock) // Added this lock because there is a potential chance that the PushSharp callback executes while devices are being registered
    {
        QueueAllGcmDevicesForNotification(promotion, url, deviceCatalogs);
    }
}
Here is the QueueAllAppleDevicesForNotification function
private void QueueAllAppleDevicesForNotification(Promotion promotion, string url, List<DeviceCatalog> deviceCatalogs)
{
var apnsServerEnviroment = UseProductionCertificate ? ApnsConfiguration.ApnsServerEnvironment.Production : ApnsConfiguration.ApnsServerEnvironment.Sandbox;
var fileService = new FileService();
var filePath = Application.StartupPath+ "/Certifcates/" + (UseProductionCertificate ? "prod.p12" : "dev.p12");
var buffer = fileService.GetFileBytes(filePath);
var config = new ApnsConfiguration(apnsServerEnviroment, buffer, APPLE_CERTIFICATE_PWD);
apnsServiceBroker = new ApnsServiceBroker(config);
apnsServiceBroker.OnNotificationFailed += (notification, aggregateEx) => {
aggregateEx.Handle (ex => {
// Log the Resposne
});
};
apnsServiceBroker.OnNotificationSucceeded += (notification) => {
// Log The Response
};
apnsServiceBroker.Start();
foreach (var deviceToken in deviceCatalogs) {
var title = GetTitle(promotion, deviceToken);
//title += DateTime.UtcNow.TimeOfDay.ToString();
var NotificationPayLoadObject = new NotificationPayLoadObjectApple();
NotificationPayLoadObject.aps.alert = title;
NotificationPayLoadObject.aps.badge = 0;
NotificationPayLoadObject.aps.sound = "default";
NotificationPayLoadObject.url = url;
var payLoad = JObject.Parse(JsonConvert.SerializeObject(NotificationPayLoadObject));
apnsServiceBroker.QueueNotification(new ApnsNotification
{
Tag = this,
DeviceToken = deviceToken.UniqueID,
Payload = payLoad
});
}
var fbs = new FeedbackService(config);
fbs.FeedbackReceived += (string deviceToken, DateTime timestamp) =>
{
// This Token is no longer avaialble in APNS
new DeviceCatalogService().DeleteExpiredIosDevice(deviceToken);
};
fbs.Check();
apnsServiceBroker.Stop();
}
And here is the QueueAllGcmDevicesForNotification
private void QueueAllGcmDevicesForNotification(Promotion promotion, string url, List<DeviceCatalog> deviceCatalogs)
{
var config = new GcmConfiguration(ANDROID_SENDER_ID, ANDROID_SENDER_AUTH_TOKEN, ANDROID_APPLICATION_ID_PACKAGE_NAME);
gcmServiceBroker = new GcmServiceBroker(config);
gcmServiceBroker.OnNotificationFailed += (notification, aggregateEx) => {
aggregateEx.Handle (ex => {
// Log Response
return true;
});
};
gcmServiceBroker.OnNotificationSucceeded += (notification) => {
// Log Response
};
var title = GetTitle(promotion);
gcmServiceBroker.Start ();
foreach (var regId in deviceCatalogs) {
var NotificationPayLoadObject = new NotificationPayLoadObjectAndriod(url, title, "7", promotion.ImageUrl);
var payLoad = JObject.Parse(JsonConvert.SerializeObject(NotificationPayLoadObject));
gcmServiceBroker.QueueNotification(new GcmNotification
{
RegistrationIds = new List<string> {
regId.UniqueID
},
Data = payLoad
});
}
gcmServiceBroker.Stop();
}
Now, when I click the Send button, the event starts executing.
BroadCastToAll is called; it calls BroadCastToIOS first and then BroadCastToAndriod.
Is there any way to call BroadCastToIOS and wait until all the devices have been queued, the notifications have been pushed by the library, and the callback events have fired completely, before starting to execute BroadCastToAndriod?
What lines of code do I need to add?
Also, is there any way to batch the number of devices to be queued?
For example, let us say I have 1000 devices: 500 iOS and 500 Android.
Can I queue 100, 100, 100, 100, 100 for iOS and, when that's done, queue 100, 100, 100, 100, 100 for Android?
Any Help is appreciated.
Thanks.
The call to broker.Stop() will, by default, block until all the notifications in the queue have been processed.
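Building on that, here is a sketch of how both questions could be handled (my assumption, not tested against PushSharp 4.0.4): since QueueAllAppleDevicesForNotification already ends with apnsServiceBroker.Stop(), and Stop() blocks until the queue is drained, BroadCastToAll already waits for the iOS broadcast to finish before starting the Android one. Batching can then be done by splitting the device list into chunks before queuing; the Chunk helper below is a hypothetical extension method, not part of PushSharp.
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeviceBatching
{
    // Hypothetical helper: split a device list into batches of a fixed size.
    public static IEnumerable<List<T>> Chunk<T>(this List<T> source, int size)
    {
        for (int i = 0; i < source.Count; i += size)
        {
            yield return source.GetRange(i, Math.Min(size, source.Count - i));
        }
    }
}

// Revised BroadCastToAll from the question, using batches of 100 devices.
public void BroadCastToAll(Promotion promotion, string url)
{
    var deviceCatalogs = GetDeviceCatalog();

    // Each call ends with apnsServiceBroker.Stop(), which blocks until that batch
    // has been processed, so all iOS batches complete before Android starts.
    foreach (var batch in deviceCatalogs.Where(d => d.OS == "IOS").ToList().Chunk(100))
    {
        BroadCastToIOS(promotion, url, batch);
    }

    // Only reached after every iOS batch has been drained.
    foreach (var batch in deviceCatalogs.Where(d => d.OS == "Android").ToList().Chunk(100))
    {
        BroadCastToAndriod(promotion, url, batch);
    }
}
The trade-off is that a new broker is created and stopped per batch; whether that overhead is acceptable depends on how many devices you have.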

SqlWorkflowInstanceStore WaitForEvents returns HasRunnableWorkflowEvent but LoadRunnableInstance fails

Dears
Please help me with resuming delayed (and persisted) workflows.
I'm trying to check, on a self-hosted workflow store, whether there is any instance that was delayed and can be resumed. For testing purposes I've created a dummy activity that delays and persists on the delay.
In general, the resume process looks like this:
get the WF definition
configure the SQL instance store
call WaitForEvents
check whether there is an event named HasRunnableWorkflowEvent.Value, and if there is,
create a WorkflowApplication object and call LoadRunnableInstance
This works fine if the store is created/initialized, WaitForEvents is called, and the store is closed each time. In that case the store reads all available workflows from the persistence DB and throws a timeout exception when there are no workflows left to resume.
The problem happens when the store is created once and only the WaitForEvents loop runs (the same thing happens with BeginWaitForEvents). In that case it reads all available workflows from the DB (with the proper IDs), but then, instead of a timeout exception, it tries to read one more instance (I know exactly how many workflows are ready to be resumed because I use a separate test database). That read fails and throws InstanceNotReadyException. In the catch I check workflowApplication.Id, but it was never saved by my test.
I've tried running against a new (empty) persistence database and the result is the same :(
This code fails:
using (var storeWrapper = new StoreWrapper(wf, connStr))
for (int q = 0; q < 5; q++)
{
var id = Resume(storeWrapper); // InstanceNotReadyException here when all instances have already been resumed
But this one works as expected:
for (int q = 0; q < 5; q++)
using (var storeWrapper = new StoreWrapper(wf, connStr))
{
var id = Resume(storeWrapper); // timeout exception here or beginWaitForEvents continues to wait
What is the best solution in such a case? Add an empty catch for InstanceNotReadyException and ignore it?
Here are my tests
const int delayTime = 15;
string connStr = "Server=db;Database=AppFabricDb_Test;Integrated Security=True;";
[TestMethod]
public void PersistOneOnIdleAndResume()
{
var wf = GetDelayActivity();
using (var storeWrapper = new StoreWrapper(wf, connStr))
{
var id = CreateAndRun(storeWrapper);
Trace.WriteLine(string.Format("done {0}", id));
}
using (var storeWrapper = new StoreWrapper(wf, connStr))
for (int q = 0; q < 5; q++)
{
var id = Resume(storeWrapper);
Trace.WriteLine(string.Format("resumed {0}", id));
}
}
Activity GetDelayActivity(string addName = "")
{
var name = new Variable<string>(string.Format("incr{0}", addName));
Activity wf = new Sequence
{
DisplayName = "testDelayActivity",
Variables = { name, new Variable<string>("CustomDataContext") },
Activities =
{
new WriteLine
{
Text = string.Format("before delay {0}", delayTime)
},
new Delay
{
Duration = new InArgument<TimeSpan>(new TimeSpan(0, 0, delayTime))
},
new WriteLine
{
Text = "after delay"
}
}
};
return wf;
}
Guid CreateAndRun(StoreWrapper sw)
{
var idleEvent = new AutoResetEvent(false);
var wfApp = sw.GetApplication();
wfApp.Idle = e => idleEvent.Set();
wfApp.Aborted = e => idleEvent.Set();
wfApp.Completed = e => idleEvent.Set();
wfApp.Run();
idleEvent.WaitOne(40 * 1000);
var res = wfApp.Id;
wfApp.Unload();
return res;
}
Guid Resume(StoreWrapper sw)
{
var res = Guid.Empty;
var events = sw.GetStore().WaitForEvents(sw.Handle, new TimeSpan(0, 0, delayTime));
if (events.Any(e => e.Equals(HasRunnableWorkflowEvent.Value)))
{
var idleEvent = new AutoResetEvent(false);
var obj = sw.GetApplication();
try
{
obj.LoadRunnableInstance(); //instancenotready here if the same store has read all instances from DB and no delayed left
obj.Idle = e => idleEvent.Set();
obj.Completed = e => idleEvent.Set();
obj.Run();
idleEvent.WaitOne(40 * 1000);
res = obj.Id;
obj.Unload();
}
catch (InstanceNotReadyException)
{
Trace.TraceError("failed to resume {0} {1} {2}", obj.Id
, obj.DefinitionIdentity == null ? null : obj.DefinitionIdentity.Name
, obj.DefinitionIdentity == null ? null : obj.DefinitionIdentity.Version);
foreach (var e in events)
{
Trace.TraceWarning("event {0}", e.Name);
}
throw;
}
}
return res;
}
Here is the store wrapper definition I'm using for the tests:
public class StoreWrapper : IDisposable
{
Activity WfDefinition { get; set; }
public static readonly XName WorkflowHostTypePropertyName = XNamespace.Get("urn:schemas-microsoft-com:System.Activities/4.0/properties").GetName("WorkflowHostType");
public StoreWrapper(Activity wfDefinition, string connectionStr)
{
_store = new SqlWorkflowInstanceStore(connectionStr);
HostTypeName = XName.Get(wfDefinition.DisplayName, "ttt.workflow");
WfDefinition = wfDefinition;
}
SqlWorkflowInstanceStore _store;
public SqlWorkflowInstanceStore GetStore()
{
if (Handle == null)
{
InitStore(_store, WfDefinition);
Handle = _store.CreateInstanceHandle();
var view = _store.Execute(Handle, new CreateWorkflowOwnerCommand
{
InstanceOwnerMetadata = { { WorkflowHostTypePropertyName, new InstanceValue(HostTypeName) } }
}, TimeSpan.FromSeconds(30));
_store.DefaultInstanceOwner = view.InstanceOwner;
//Trace.WriteLine(string.Format("{0} owns {1}", view.InstanceOwner.InstanceOwnerId, HostTypeName));
}
return _store;
}
protected virtual void InitStore(SqlWorkflowInstanceStore store, Activity wfDefinition)
{
}
public InstanceHandle Handle { get; protected set; }
XName HostTypeName { get; set; }
public void Dispose()
{
if (Handle != null)
{
var deleteOwner = new DeleteWorkflowOwnerCommand();
//Trace.WriteLine(string.Format("{0} frees {1}", Store.DefaultInstanceOwner.InstanceOwnerId, HostTypeName));
_store.Execute(Handle, deleteOwner, TimeSpan.FromSeconds(30));
Handle.Free();
Handle = null;
_store = null;
}
}
public WorkflowApplication GetApplication()
{
var wfApp = new WorkflowApplication(WfDefinition);
wfApp.InstanceStore = GetStore();
wfApp.PersistableIdle = e => PersistableIdleAction.Persist;
Dictionary<XName, object> wfScope = new Dictionary<XName, object> { { WorkflowHostTypePropertyName, HostTypeName } };
wfApp.AddInitialInstanceValues(wfScope);
return wfApp;
}
}
I'm not a Workflow Foundation expert, so my answer is based on the official examples from Microsoft. The first one is "WF4 host resumes delayed workflow" (CSWF4LongRunningHost) and the second is Microsoft.Samples.AbsoluteDelay. In both samples you will find code similar to yours, i.e.:
try
{
    wfApp.LoadRunnableInstance();
    ...
}
catch (InstanceNotReadyException)
{
    // Some logging
}
Taking this into account, the answer is that you are right: an empty catch for InstanceNotReadyException is a good solution.
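Applied to the Resume method from the question, that amounts to logging and swallowing the exception instead of rethrowing, and returning Guid.Empty so the caller knows nothing was resumed. A minimal sketch of the changed try/catch:
Guid Resume(StoreWrapper sw)
{
    var res = Guid.Empty;
    var events = sw.GetStore().WaitForEvents(sw.Handle, new TimeSpan(0, 0, delayTime));
    if (events.Any(e => e.Equals(HasRunnableWorkflowEvent.Value)))
    {
        var idleEvent = new AutoResetEvent(false);
        var obj = sw.GetApplication();
        try
        {
            obj.LoadRunnableInstance();
            obj.Idle = e => idleEvent.Set();
            obj.Completed = e => idleEvent.Set();
            obj.Run();
            idleEvent.WaitOne(40 * 1000);
            res = obj.Id;
            obj.Unload();
        }
        catch (InstanceNotReadyException)
        {
            // The store reported HasRunnableWorkflowEvent, but there is no instance
            // left to load (the situation described above): log it and fall through,
            // returning Guid.Empty instead of rethrowing.
            Trace.TraceWarning("nothing left to resume");
        }
    }
    return res;
}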
