Graceful shutdown in an Azure WebJob in C#

I'm new to Azure WebJobs. I was trying to achieve a graceful shutdown using the WebJobsShutdownWatcher class.
public static void Main()
{
    try
    {
        var config = new JobHostConfiguration();
        if (config.IsDevelopment)
        {
            config.UseDevelopmentSettings();
        }
        var watcher = new WebJobsShutdownWatcher();
        Task.Run(() =>
        {
            bool isCancelled = false;
            while (!isCancelled)
            {
                if (watcher.Token.IsCancellationRequested)
                {
                    Console.WriteLine("WebJob cancellation Token Requested!");
                    isCancelled = true;
                }
            }
        }, watcher.Token).Wait();
        var host = new JobHost();
        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
    catch (Exception)
    {
        Console.WriteLine("Error");
    }
}
To test the graceful shutdown, I stopped the WebJob and hosted it on Azure again.
After hosting it on Azure, the queues are no longer triggered. When I debug the code, control stops at the WebJobsShutdownWatcher class.
What did I do wrong?

As Thomas says, there is something wrong with your code.
The WebJobsShutdownWatcher keeps watching the WebJob's status, and because you call Wait() on the task that polls its cancellation token, host.RunAndBlock() is never reached.
Remove the Wait() call and the code will work.
Here is a test demo that works well:
static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    var watcher = new WebJobsShutdownWatcher();
    Task.Run(() =>
    {
        bool isCancelled = false;
        while (!isCancelled)
        {
            if (watcher.Token.IsCancellationRequested)
            {
                Console.WriteLine("WebJob cancellation Token Requested!");
                isCancelled = true;
            }
        }
    }, watcher.Token);
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
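As a side note, the polling loop isn't required; below is a minimal sketch, using the same JobHost and WebJobsShutdownWatcher types as above, that registers a callback on the shutdown token instead of spinning on it:

static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    var watcher = new WebJobsShutdownWatcher();
    // CancellationToken.Register runs the callback once when Azure signals shutdown,
    // so no dedicated polling task is needed.
    watcher.Token.Register(() => Console.WriteLine("WebJob cancellation token requested!"));
    var host = new JobHost(config);
    host.RunAndBlock();
}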


Losing messages with MQTT in C# using uPLibrary.Networking.M2Mqtt

I have a problem where I lose messages with MQTT even though I send them with QOS_LEVEL_EXACTLY_ONCE.
The loss only occurs when the receiver is not running and is started later.
Those messages are then never delivered.
The version of M2Mqtt is 4.3.0.
If both clients, i.e. receiver and sender, are running, no messages are lost.
Only the messages published while the receiver is not running fail to arrive at the receiver.
I can't find any setting on the server (broker) for how long messages should be kept.
Sender
public class Programm
{
    static MqttClient mqttClient;

    static async Task Main(string[] args)
    {
        var locahlost = true;
        var clientName = "Sender 1";
        Console.WriteLine($"{clientName} Startet");
        var servr = locahlost ? "localhost" : "test.mosquitto.org";
        mqttClient = new MqttClient(servr);
        mqttClient.Connect(clientName);
        Task.Run(() =>
        {
            if (mqttClient != null && mqttClient.IsConnected)
            {
                for (int i = 0; i < 100; i++)
                {
                    var Message = $"{clientName} ->Test {i}";
                    mqttClient.Publish("Application1/NEW_Message", Encoding.UTF8.GetBytes($"{Message}"), MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE, true);
                    Console.WriteLine(Message);
                    Thread.Sleep(i * 1000);
                }
            }
        });
        Console.WriteLine($"{clientName} End");
    }
}
Server
public class Programm
{
    static async Task Main(string[] args)
    {
        Console.WriteLine("Server");
        MqttServerOptionsBuilder options = new MqttServerOptionsBuilder()
            // set endpoint to localhost
            .WithDefaultEndpoint()
            // port used will be 1883
            .WithDefaultEndpointPort(1883);
        // creates a new mqtt server
        IMqttServer mqttServer = new MqttFactory().CreateMqttServer();
        // start the server with options
        mqttServer.StartAsync(options.Build()).GetAwaiter().GetResult();
        // keep application running until user presses a key
        Console.ReadLine();
    }
}
Receiver
public class Programm
{
    static MqttClient mqttClient;

    static async Task Main(string[] args)
    {
        var clientName = "Emfänger 1";
        var locahlost = true;
        Console.WriteLine($"Start of {clientName}");
        Task.Run(() =>
        {
            var servr = locahlost ? "localhost" : "test.mosquitto.org";
            mqttClient = new MqttClient(servr);
            mqttClient.MqttMsgPublishReceived += MqttClient_MqttMsgPublishReceived;
            mqttClient.Subscribe(new string[] { "Application1/NEW_Message" }, new byte[] { MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE });
            mqttClient.Connect(clientName);
        });
        // client.UseConnecedHandler(e=> {Console.WriteLine("Verbunden") });
        Console.ReadLine();
        Console.WriteLine($"end of {clientName}");
        Console.ReadLine();
    }

    private static void MqttClient_MqttMsgPublishReceived(object sender, uPLibrary.Networking.M2Mqtt.Messages.MqttMsgPublishEventArgs e)
    {
        var message = Encoding.UTF8.GetString(e.Message);
        Console.WriteLine(message);
    }
}
The default value for the Clean session flag when connecting to the broker with M2MQTT is true.
This means that the broker will discard any queued messages.
https://m2mqtt.wordpress.com/using-mqttclient/
You need to set this to false to ensure the client receives the queued messages.
mqttClient.Connect(clientName, false);
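If your installed version of M2Mqtt has no two-argument Connect(clientId, cleanSession) overload, a hedged alternative is the longer overload that also takes credentials and a keep-alive period; the exact signature is an assumption here, so verify it against your package:

// Assumed overload: Connect(clientId, username, password, cleanSession, keepAlivePeriod).
// null credentials mean anonymous access; 60 is the keep-alive period in seconds.
mqttClient.Connect(clientName, null, null, false, 60);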
I found the error; persistence was missing.
Here is the new code for the server:
static async Task Main(string[] args)
{
    Console.WriteLine("Server");
    MqttServerOptionsBuilder options = new MqttServerOptionsBuilder()
        .WithDefaultEndpoint()
        .WithDefaultEndpointPort(1883)
        .WithConnectionValidator(OnNewConnection)
        .WithApplicationMessageInterceptor(OnNewMessage)
        .WithClientMessageQueueInterceptor(OnOut)
        .WithDefaultCommunicationTimeout(TimeSpan.FromMinutes(5))
        .WithMaxPendingMessagesPerClient(10)
        .WithPersistentSessions()
        .WithStorage(storage);
    // creates a new mqtt server
    IMqttServer mqttServer = new MqttFactory().CreateMqttServer();
    // start the server with options
    mqttServer.StartAsync(options.Build()).GetAwaiter().GetResult();
    // keep application running until user presses a key
    Console.ReadLine();
}
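The storage variable passed to WithStorage is not shown above; the following is a hypothetical minimal in-memory implementation of MQTTnet's IMqttServerStorage (interface and method names assumed from MQTTnet 3.x, so check them against your version). A real server would persist the messages to disk instead of memory.

using System.Collections.Generic;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Server;

public class InMemoryStorage : IMqttServerStorage
{
    private IList<MqttApplicationMessage> retainedMessages = new List<MqttApplicationMessage>();

    // Called by the server whenever the retained messages change.
    public Task SaveRetainedMessagesAsync(IList<MqttApplicationMessage> messages)
    {
        retainedMessages = messages;
        return Task.CompletedTask;
    }

    // Called by the server on startup to restore the retained messages.
    public Task<IList<MqttApplicationMessage>> LoadRetainedMessagesAsync()
    {
        return Task.FromResult(retainedMessages);
    }
}

// Hypothetical usage: var storage = new InMemoryStorage();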

Kafka consume message and then produce to another topic

I have to consume from a Kafka topic, get the message, do some JSON cleaning and filtering, and then produce the new message to another Kafka topic. My code is like this:
public static YamlMappingNode configs;

public static void Main(string[] args)
{
    using (var reader = new StreamReader(Path.Combine(Directory.GetCurrentDirectory(), ".gitlab-ci.yml")))
    {
        var yaml = new YamlStream();
        yaml.Load(reader);
        //find variables
        configs = (YamlMappingNode)yaml.Documents[0].RootNode;
        configs = (YamlMappingNode)configs.Children.Where(k => k.Key.ToString() == "variables")?.FirstOrDefault().Value;
    }
    CancellationTokenSource cts = new CancellationTokenSource();
    Console.CancelKeyPress += (_, e) => {
        e.Cancel = true; // prevent the process from terminating.
        cts.Cancel();
    };
    Run_ManualAssign(configs, cts.Token);
}

public static async void Run_ManualAssign(YamlMappingNode configs, CancellationToken cancellationToken)
{
    var brokerList = configs.Where(k => k.Key.ToString() == "kfk_broker")?.FirstOrDefault().Value.ToString();
    var topics = configs.Where(k => k.Key.ToString() == "input_kfk_topic")?.FirstOrDefault().Value.ToString();
    var config = new ConsumerConfig
    {
        // the group.id property must be specified when creating a consumer, even
        // if you do not intend to use any consumer group functionality.
        GroupId = new Guid().ToString(),
        BootstrapServers = brokerList,
        // partition offsets can be committed to a group even by consumers not
        // subscribed to the group. in this example, auto commit is disabled
        // to prevent this from occurring.
        EnableAutoCommit = true
    };
    using (var consumer =
        new ConsumerBuilder<Ignore, string>(config)
            .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
            .Build())
    {
        //consumer.Assign(topics.Select(topic => new TopicPartitionOffset(topic, 0, Offset.Beginning)).ToList());
        consumer.Assign(new TopicPartitionOffset(topics, 0, Offset.End));
        //var producer = new ProducerBuilder<Null, string>(config).Build();
        try
        {
            while (true)
            {
                try
                {
                    var consumeResult = consumer.Consume(cancellationToken);
                    /// Note: End of partition notification has not been enabled, so
                    /// it is guaranteed that the ConsumeResult instance corresponds
                    /// to a Message, and not a PartitionEOF event.
                    //filter message
                    var result = ReadMessage(configs, consumeResult.Message.Value);
                    //send to kafka topic
                    await Run_ProducerAsync(configs, result);
                }
                catch (ConsumeException e)
                {
                    Console.WriteLine($"Consume error: {e.Error.Reason}");
                }
            }
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Closing consumer.");
            consumer.Close();
        }
    }
}
#endregion

#region Run_Producer
public static async Task Run_ProducerAsync(YamlMappingNode configs, string message)
{
    var brokerList = configs.Where(k => k.Key.ToString() == "kfk_broker")?.FirstOrDefault().Value.ToString();
    var topicName = configs.Where(k => k.Key.ToString() == "target_kafka_topic")?.FirstOrDefault().Value.ToString();
    var config = new ProducerConfig {
        BootstrapServers = brokerList,
    };
    using (var producer = new ProducerBuilder<Null, string>(config).Build())
    {
        try
        {
            /// Note: Awaiting the asynchronous produce request below prevents flow of execution
            /// from proceeding until the acknowledgement from the broker is received (at the
            /// expense of low throughput).
            var deliveryReport = await producer.ProduceAsync(topicName, new Message<Null, string> { Value = message });
            producer.Flush(TimeSpan.FromSeconds(10));
            Console.WriteLine($"delivered to: {deliveryReport.TopicPartitionOffset}");
        }
        catch (ProduceException<string, string> e)
        {
            Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
        }
    }
}
#endregion
Am I doing something wrong here? The program exited immediately when executing var deliveryReport = await producer.ProduceAsync(topicName, new Message<Null, string> { Value = message });, with no error message and no error code.
Meanwhile I configured the producer the same way in Python, and it works well.
Run_ManualAssign(configs, cts.Token);
For this line in the Main function, you are calling an async method without await from a synchronous method. The program therefore exits immediately after the call has started (not finished, since it is async).
You have two options:
Use an async Main function and add await in front of this call; this also means declaring Run_ManualAssign as async Task instead of async void so that it can be awaited.
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-7.1/async-main
If you really want to call an async method from a synchronous method:
Run_ManualAssign(configs, cts.Token).ConfigureAwait(false).GetAwaiter().GetResult();
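A minimal sketch of the first option, assuming Run_ManualAssign is changed to return Task; the loop body below is only a stand-in for the real consume/filter/produce pipeline:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    // Stand-in for the question's method; note that it returns Task, not void.
    private static async Task Run_ManualAssign(CancellationToken cancellationToken)
    {
        try
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                // consume, filter, produce...
                await Task.Delay(1000, cancellationToken);
            }
        }
        catch (OperationCanceledException)
        {
            // Ctrl+C requested shutdown.
        }
    }

    public static async Task Main(string[] args)
    {
        var cts = new CancellationTokenSource();
        Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

        // Awaiting keeps Main alive until the loop finishes instead of exiting immediately.
        await Run_ManualAssign(cts.Token);
    }
}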
I solved this problem, but I don't actually know why. I opened an issue here.

Prevent running multiple instances of a mono app

I know how to prevent running multiple instances of a given app on Windows:
Prevent multiple instances of a given app in .NET?
That code does not work under Linux using MonoDevelop, though. It compiles and runs, but it does not work. How can I prevent multiple instances under Linux using Mono?
This is what I have tried, but the code only works on Windows, not on Linux:
static void Main()
{
    Task.Factory.StartNew(() =>
    {
        try
        {
            var p = new NamedPipeServerStream("SomeGuid", PipeDirection.In, 1);
            Console.WriteLine("Waiting for connection");
            p.WaitForConnection();
        }
        catch
        {
            Console.WriteLine("Error: another instance already running");
            Environment.Exit(1); // terminate application
        }
    });
    Thread.Sleep(1000);
    Console.WriteLine("Doing work");
    // Do work....
    Thread.Sleep(10000);
}
I came up with this answer. Call the following method, passing it a unique ID:
public static void PreventMultipleInstance(string applicationId)
{
    // Under Windows this is:
    // C:\Users\SomeUser\AppData\Local\Temp\
    // Under Linux this is:
    // /tmp/
    var temporaryDirectory = Path.GetTempPath();

    // Application ID (make sure this GUID is different across your different applications!)
    var applicationGuid = applicationId + ".process-lock";

    // file that will serve as our lock
    var fileFulePath = Path.Combine(temporaryDirectory, applicationGuid);

    try
    {
        // Prevents other processes from reading from or writing to this file
        var _InstanceLock = new FileStream(fileFulePath, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
        _InstanceLock.Lock(0, 0);
        MonoApp.Logger.LogToDisk(LogType.Notification, "04ZH-EQP0", "Aquired Lock", fileFulePath);

        // todo: investigate why we need a reference to the file stream. Without it the GC releases the lock!
        System.Timers.Timer t = new System.Timers.Timer()
        {
            Interval = 500000,
            Enabled = true,
        };
        t.Elapsed += (a, b) =>
        {
            try
            {
                _InstanceLock.Lock(0, 0);
            }
            catch
            {
                MonoApp.Logger.Log(LogType.Error, "AOI7-QMCT", "Unable to lock file");
            }
        };
        t.Start();
    }
    catch
    {
        // Terminate application because another instance with this ID is running
        Environment.Exit(102534);
    }
}
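A minimal call site, assuming the method above lives in the same class; the ID string is just a placeholder and can be any value unique to your application:

static void Main()
{
    // Exits the process immediately if another instance already holds the lock file.
    PreventMultipleInstance("b9c1f6d2-example-app-id");

    Console.WriteLine("Doing work");
    // Do work....
}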

Program throwing OperationCanceledException despite catching it

I'm trying to use cancellation tokens to cancel a Task started by Task.Run. I took the pattern from the Microsoft site: https://msdn.microsoft.com/pl-pl/library/hh160373(v=vs.110).aspx
This is my code:
public static class Sender
{
    public static async Task sendData(NetworkController nc) {
        await Task.Run(() => {
            IPEndPoint endPoint = new IPEndPoint(nc.serverIp, nc.dataPort);
            byte[] end = Encoding.ASCII.GetBytes("end");
            while (true) {
                if (Painting.pointsQueue.Count > 0 && !nc.paintingSenderToken.IsCancellationRequested) {
                    byte[] sendbuf = Encoding.ASCII.GetBytes(Painting.color.ToString());
                    nc.socket.SendTo(sendbuf, endPoint);
                    do {
                        sendbuf = Painting.pointsQueue.Take();
                        nc.socket.SendTo(sendbuf, endPoint);
                    } while (sendbuf != end && !nc.paintingSenderToken.IsCancellationRequested);
                }
                else if (nc.paintingSenderToken.IsCancellationRequested) {
                    nc.paintingSenderToken.ThrowIfCancellationRequested();
                    return;
                }
            }
        }, nc.paintingSenderToken);
    }
}
And here I start this task:
public void stopController() {
    try {
        paintingSenderTokenSource.Cancel();
        senderTask.Wait();
    } catch (AggregateException e) {
        string message = "";
        foreach (var ie in e.InnerExceptions)
            message += ie.GetType().Name + ": " + ie.Message + "\n";
        MessageBox.Show(message, "Przerwano wysylanie");
    }
    finally {
        paintingSenderTokenSource.Dispose();
        byte[] message = Encoding.ASCII.GetBytes("disconnect");
        IPEndPoint endPoint = new IPEndPoint(serverIp, serverPort);
        socket.SendTo(message, endPoint);
        socket.Close();
        mw.setStatus("disconnected");
    }
}

public async void initialize() {
    Task t = Reciver.waitForRespond(this);
    sendMessage("connect");
    mw.setStatus("connecting");
    if (await Task.WhenAny(t, Task.Delay(5000)) == t) {
        mw.setStatus("connected");
        Painting.pointsQueue = new System.Collections.Concurrent.BlockingCollection<byte[]>();
        senderTask = Sender.sendData(this);
    }
    else {
        mw.setStatus("failed");
    }
}
}
In the initialize() method I wait for the response from the server and, if I get it, I start a new thread via the sendData() method. It is in a static class to keep the code cleaner. When I want to stop this thread I call the stopController() method. On the Microsoft site we can read:
The CancellationToken.ThrowIfCancellationRequested method throws an OperationCanceledException exception that is handled in a catch block when the calling thread calls the Task.Wait method.
But my program breaks on nc.paintingSenderToken.ThrowIfCancellationRequested(); in the sendData() method, and the error says that the OperationCanceledException was not handled. I ran the program from the Microsoft site and it works perfectly. I think I'm doing everything the way they did, but unfortunately it doesn't work as it should.
You may have "Enable Just My Code" enabled. With that option on, the debugger breaks on exceptions that leave user code even when they are handled later, as the OperationCanceledException here is by the catch around senderTask.Wait().
To find the setting go to:
Tools => Options => Debugging => General => Enable Just My Code
If this checkbox is checked, un-check it and run your application again.

BrokeredMessage disposed after accessing from different thread

This might be a duplicate of this question but that's confused with talk about batching database updates and still has no proper answer.
In a simple example using Azure Service Bus queues, I can't access a BrokeredMessage after it's been placed on a queue; it's always disposed if I read the queue from another thread.
Sample code:
class Program {
    private static string _serviceBusConnectionString = "XXX";
    private static BlockingCollection<BrokeredMessage> _incomingMessages = new BlockingCollection<BrokeredMessage>();
    private static CancellationTokenSource _cancelToken = new CancellationTokenSource();
    private static QueueClient _client;

    static void Main(string[] args) {
        // Set up a few listeners on different threads
        Task.Run(async () => {
            while (!_cancelToken.IsCancellationRequested) {
                var msg = _incomingMessages.Take(_cancelToken.Token);
                if (msg != null) {
                    try {
                        await msg.CompleteAsync();
                        Console.WriteLine($"Completed Message Id: {msg.MessageId}");
                    } catch (ObjectDisposedException) {
                        Console.WriteLine("Message was disposed!?");
                    }
                }
            }
        });

        // Now set up our service bus reader
        _client = GetQueueClient("test");
        _client.OnMessageAsync(async (message) => {
            await Task.Run(() => _incomingMessages.Add(message));
        },
        new OnMessageOptions() {
            AutoComplete = false
        });

        // Now start sending
        Task.Run(async () => {
            int sent = 0;
            while (!_cancelToken.IsCancellationRequested) {
                var msg = new BrokeredMessage();
                await _client.SendAsync(msg);
                Console.WriteLine($"Sent {++sent}");
                await Task.Delay(1000);
            }
        });

        Console.ReadKey();
        _cancelToken.Cancel();
    }

    private static QueueClient GetQueueClient(string queueName) {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(_serviceBusConnectionString);
        if (!namespaceManager.QueueExists(queueName)) {
            var settings = new QueueDescription(queueName);
            settings.MaxDeliveryCount = 10;
            settings.LockDuration = TimeSpan.FromSeconds(5);
            settings.EnableExpress = true;
            settings.EnablePartitioning = true;
            namespaceManager.CreateQueue(settings);
        }
        var factory = MessagingFactory.CreateFromConnectionString(_serviceBusConnectionString);
        factory.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(0.1), maxBackoff: TimeSpan.FromSeconds(30), maxRetryCount: 100);
        var queueClient = factory.CreateQueueClient(queueName);
        return queueClient;
    }
}
I've tried playing around with settings but can't get this to work. Any ideas?
Answering my own question with the response from Serkant Karaca @ Microsoft here:
Very basic rule, and I am not sure if this is documented: the received message needs to be processed within the callback function's lifetime. In your case, messages are disposed when the async callback completes, which is why your complete attempts fail with ObjectDisposedException on another thread.
I don't really see how queuing messages for further processing helps throughput; it only adds more burden to the client. Try processing the message in the async callback; that should be performant enough.
Bugger.
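For illustration, a minimal sketch of the suggested fix using the same QueueClient API as the question: the message is processed and completed inside the OnMessageAsync callback, while the BrokeredMessage is still alive.

_client = GetQueueClient("test");
_client.OnMessageAsync(async (message) => {
    // Do the real work here, before the callback returns and the message is disposed.
    Console.WriteLine($"Processing Message Id: {message.MessageId}");
    await message.CompleteAsync();
},
new OnMessageOptions() {
    AutoComplete = false
});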
