This might be a duplicate of an existing question, but that one is muddled by discussion of batching database updates and still has no proper answer.
In a simple example using Azure Service Bus queues, I can't access a BrokeredMessage after it has been received and handed off to an in-memory queue; it's always already disposed by the time another thread takes it from that queue and tries to complete it.
Sample code:
class Program {
private static string _serviceBusConnectionString = "XXX";
private static BlockingCollection<BrokeredMessage> _incomingMessages = new BlockingCollection<BrokeredMessage>();
private static CancellationTokenSource _cancelToken = new CancellationTokenSource();
private static QueueClient _client;
static void Main(string[] args) {
// Set up a few listeners on different threads
Task.Run(async () => {
while (!_cancelToken.IsCancellationRequested) {
var msg = _incomingMessages.Take(_cancelToken.Token);
if (msg != null) {
try {
await msg.CompleteAsync();
Console.WriteLine($"Completed Message Id: {msg.MessageId}");
} catch (ObjectDisposedException) {
Console.WriteLine("Message was disposed!?");
}
}
}
});
// Now set up our service bus reader
_client = GetQueueClient("test");
_client.OnMessageAsync(async (message) => {
await Task.Run(() => _incomingMessages.Add(message));
},
new OnMessageOptions() {
AutoComplete = false
});
// Now start sending
Task.Run(async () => {
int sent = 0;
while (!_cancelToken.IsCancellationRequested) {
var msg = new BrokeredMessage();
await _client.SendAsync(msg);
Console.WriteLine($"Sent {++sent}");
await Task.Delay(1000);
}
});
Console.ReadKey();
_cancelToken.Cancel();
}
private static QueueClient GetQueueClient(string queueName) {
var namespaceManager = NamespaceManager.CreateFromConnectionString(_serviceBusConnectionString);
if (!namespaceManager.QueueExists(queueName)) {
var settings = new QueueDescription(queueName);
settings.MaxDeliveryCount = 10;
settings.LockDuration = TimeSpan.FromSeconds(5);
settings.EnableExpress = true;
settings.EnablePartitioning = true;
namespaceManager.CreateQueue(settings);
}
var factory = MessagingFactory.CreateFromConnectionString(_serviceBusConnectionString);
factory.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(0.1), maxBackoff: TimeSpan.FromSeconds(30), maxRetryCount: 100);
var queueClient = factory.CreateQueueClient(queueName);
return queueClient;
}
}
I've tried playing around with settings but can't get this to work. Any ideas?
Answering my own question with the response from Serkant Karaca @ Microsoft:
This is a very basic rule, and I am not sure whether it is documented: the received message needs to be processed within the callback function's lifetime. In your case, messages are disposed when the async callback completes, which is why your Complete attempts on another thread fail with ObjectDisposedException.
I don't really see how queuing messages for further processing helps throughput; it will certainly add more burden on the client. Try processing the message in the async callback instead; that should be performant enough.
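For illustration, here is a minimal sketch of that advice applied to the sample above: all processing and the Complete call stay inside the OnMessageAsync callback. (MaxConcurrentCalls and the ProcessAsync helper are assumptions for the sketch, not part of the original code.)
_client.OnMessageAsync(async message =>
{
    try
    {
        // Do the work within the callback's lifetime, while the message is still valid.
        await ProcessAsync(message);      // hypothetical processing method
        await message.CompleteAsync();
    }
    catch (Exception)
    {
        // Give the message back so Service Bus can redeliver it (up to MaxDeliveryCount).
        await message.AbandonAsync();
    }
},
new OnMessageOptions
{
    AutoComplete = false,
    MaxConcurrentCalls = 10   // parallelism comes from the message pump, not from a hand-rolled queue
});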
Bugger.
I have to consume from a Kafka topic, do some JSON cleaning and filtering on each message, and then produce the new message to another Kafka topic. My code is like this:
public static YamlMappingNode configs;
public static void Main(string[] args)
{
using (var reader = new StreamReader(Path.Combine(Directory.GetCurrentDirectory(), ".gitlab-ci.yml")))
{
var yaml = new YamlStream();
yaml.Load(reader);
//find variables
configs = (YamlMappingNode)yaml.Documents[0].RootNode;
configs = (YamlMappingNode)configs.Children.Where(k => k.Key.ToString() == "variables")?.FirstOrDefault().Value;
}
CancellationTokenSource cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) => {
e.Cancel = true; // prevent the process from terminating.
cts.Cancel();
};
Run_ManualAssign(configs, cts.Token);
}
public static async void Run_ManualAssign(YamlMappingNode configs, CancellationToken cancellationToken)
{
var brokerList = configs.Where(k => k.Key.ToString() == "kfk_broker")?.FirstOrDefault().Value.ToString();
var topics = configs.Where(k => k.Key.ToString() == "input_kfk_topic")?.FirstOrDefault().Value.ToString();
var config = new ConsumerConfig
{
// the group.id property must be specified when creating a consumer, even
// if you do not intend to use any consumer group functionality.
GroupId = new Guid().ToString(),
BootstrapServers = brokerList,
// partition offsets can be committed to a group even by consumers not
// subscribed to the group. in this example, auto commit is disabled
// to prevent this from occurring.
EnableAutoCommit = true
};
using (var consumer =
new ConsumerBuilder<Ignore, string>(config)
.SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
.Build())
{
//consumer.Assign(topics.Select(topic => new TopicPartitionOffset(topic, 0, Offset.Beginning)).ToList());
consumer.Assign(new TopicPartitionOffset(topics, 0, Offset.End));
//var producer = new ProducerBuilder<Null, string>(config).Build();
try
{
while (true)
{
try
{
var consumeResult = consumer.Consume(cancellationToken);
/// Note: End of partition notification has not been enabled, so
/// it is guaranteed that the ConsumeResult instance corresponds
/// to a Message, and not a PartitionEOF event.
//filter message
var result = ReadMessage(configs, consumeResult.Message.Value);
//send to kafka topic
await Run_ProducerAsync(configs, result);
}
catch (ConsumeException e)
{
Console.WriteLine($"Consume error: {e.Error.Reason}");
}
}
}
catch (OperationCanceledException)
{
Console.WriteLine("Closing consumer.");
consumer.Close();
}
}
}
#endregion
#region Run_Producer
public static async Task Run_ProducerAsync(YamlMappingNode configs, string message)
{
var brokerList = configs.Where(k => k.Key.ToString() == "kfk_broker")?.FirstOrDefault().Value.ToString();
var topicName = configs.Where(k => k.Key.ToString() == "target_kafka_topic")?.FirstOrDefault().Value.ToString();
var config = new ProducerConfig {
BootstrapServers = brokerList,
};
using (var producer = new ProducerBuilder<Null, string>(config).Build())
{
try
{
/// Note: Awaiting the asynchronous produce request below prevents flow of execution
/// from proceeding until the acknowledgement from the broker is received (at the
/// expense of low throughput).
var deliveryReport = await producer.ProduceAsync(topicName, new Message<Null, string> { Value = message });
producer.Flush(TimeSpan.FromSeconds(10));
Console.WriteLine($"delivered to: {deliveryReport.TopicPartitionOffset}");
}
catch (ProduceException<Null, string> e)
{
Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
}
}
}
#endregion
Am I doing something wrong here? The program exited immediately when executing var deliveryReport = await producer.ProduceAsync(topicName, new Message<Null, string> { Value = message });, with no error message and no error code.
Meanwhile, I configured a producer the same way in Python and it works fine.
Run_ManualAssign(configs, cts.Token);
On this line in the Main function, you are calling an async method from a synchronous method without awaiting it, so the program exits immediately after the call starts (it does not wait for the async method to finish).
You have two options:
Use an async Main method and add await in front of this call.
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-7.1/async-main
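A minimal sketch of that first option (it requires C# 7.1 or later; Run_ManualAssign also has to be changed to return Task instead of void, which is an assumption on top of the code in the question):
public static async Task Main(string[] args)
{
    // ... load the YAML configs and wire up the CancellationTokenSource exactly as before ...
    await Run_ManualAssign(configs, cts.Token);   // Main now waits for the consumer loop to finish
}

// and the consumer method becomes:
public static async Task Run_ManualAssign(YamlMappingNode configs, CancellationToken cancellationToken)
{
    // body unchanged
}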
If you really want to call the async method from a synchronous method (note that Run_ManualAssign must still be changed to return Task instead of void):
Run_ManualAssign(configs, cts.Token).ConfigureAwait(false).GetAwaiter().GetResult();
I solved this problem, but I don't actually know why. I opened an issue here.
I'm posting two messages back to the user as replies, as shown below:
static Timer t = new Timer(new TimerCallback(TimerEvent));
static Timer t1 = new Timer(new TimerCallback(TimerEventInActivity));
static int timeOut = Convert.ToInt32(ConfigurationManager.AppSettings["disableEndConversationTimer"]); //3600000
public static void CallTimer(int due) {
t.Change(due, Timeout.Infinite);
}
public static void CallTimerInActivity(int due) {
t1.Change(due, Timeout.Infinite);
}
public async static Task PostAsyncWithDelay(this IDialogContext ob, string text) {
try {
var message = ob.MakeMessage();
message.Type = Microsoft.Bot.Connector.ActivityTypes.Message;
message.Text = text;
await PostAsyncWithDelay(ob, message);
CallTimer(300000);
if ("true".Equals(ConfigurationManager.AppSettings["disableEndConversation"])) {
CallTimerInActivity(timeOut);
}
} catch (Exception ex) {
Trace.TraceInformation(ex.Message);
}
}
await context.PostAsyncWithDelay("Great!");
await context.PostAsyncWithDelay("I can help you with that.");
But there is no delay between them when they are received; both messages arrive in one go.
How can I delay the second message with some time?
In Root Dialog
To delay your message you can use the Task.Delay method. Change your PostAsyncWithDelay to:
public async static Task PostAsyncWithDelay(IDialogContext context, string text)
{
await Task.Delay(4000).ContinueWith(t =>
{
var message = context.MakeMessage();
message.Text = text;
using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
{
var client = scope.Resolve<IConnectorClient>();
client.Conversations.ReplyToActivityAsync((Activity)message);
}
});
}
Call the PostAsyncWithDelay method when you want to delay a message; otherwise, use the context.PostAsync method to send your messages.
private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
//Sending a message nomally
await context.PostAsync("Hi");
//Notify the user that the bot is typing
var typing = context.MakeMessage();
typing.Type = ActivityTypes.Typing;
await context.PostAsync(typing);
//The message you want to delay.
//NOTE: Do not use context.PostAsyncWithDelay instead simply call the method.
await PostAsyncWithDelay(context, "2nd Hi");
}
How can I delay the second message with some time?
If you’d like to delay sending the second message, you can try the following code snippet:
await context.PostAsync($"You sent {activity.Text} at {DateTime.Now}");
Task.Delay(5000).ContinueWith(t =>
{
using (var scope = Microsoft.Bot.Builder.Dialogs.Internals.DialogModule.BeginLifetimeScope(Conversation.Container, activity))
{
var client = scope.Resolve<IConnectorClient>();
Activity reply = activity.CreateReply($"I can help you with that..");
client.Conversations.ReplyToActivityAsync(reply);
}
});
context.Wait(MessageReceivedAsync);
Besides, as others have mentioned in the comments, PostAsyncWithDelay does not appear to be a built-in method of the Bot Builder SDK. If you defined that custom method to achieve this requirement, please post its code.
To delay all replies, you can insert this directly in the controller:
if (activity.Type == ActivityTypes.Message)
{
var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
Activity isTypingReply = activity.CreateReply();
isTypingReply.Type = ActivityTypes.Typing;
await connector.Conversations.ReplyToActivityAsync(isTypingReply);
var message = isTypingReply;
await Task.Delay(4000).ContinueWith(t =>
{
using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
{
}
});
await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
}
There is a method HTTP_actions.put_import() in XenAPI which is synchronous but supports cancellation via a delegate.
I have the following method:
private void UploadImage(.., Func<bool> isTaskCancelled)
{
try
{
HTTP_actions.put_import(
cancellingDelegate: () => isTaskCancelled(),
...);
}
catch (HTTP.CancelledException exception)
{
}
}
It so happens that in some cases HTTP_actions.put_import hangs and does not react to isTaskCancelled(); when that happens, the whole application hangs as well.
I could run this method on a separate thread and kill it forcefully once I receive the cancellation signal, but the method doesn't always hang, and when it doesn't I want to cancel it gracefully. Only when the method is really hanging do I want to kill it myself.
What is the best way to handle such situation?
I wrote a blog post about this: http://pranayamr.blogspot.in/2017/12/abortcancel-task.html
I've tried a lot of solutions over the last two hours and came up with the working solution below; please try it out.
class Program
{
//capture the thread running the request, so it can be cancelled
// if it takes too long
static Thread threadToCancel = null;
static async Task<string> DoWork(CancellationToken token)
{
var tcs = new TaskCompletionSource<string>();
//enable this for your use
//await Task.Factory.StartNew(() =>
//{
// //Capture the thread
// threadToCancel = Thread.CurrentThread;
// HTTP_actions.put_import(...);
//});
//tcs.SetResult("Completed");
//return tcs.Task.Result;
//comment out this whole block; it is only used for testing
await Task.Factory.StartNew(() =>
{
//Capture the thread
threadToCancel = Thread.CurrentThread;
//Simulate work (usually from 3rd party code)
for (int i = 0; i < 100000; i++)
{
Console.WriteLine($"value {i}");
}
Console.WriteLine("Task finished!");
});
tcs.SetResult("Completed");
return tcs.Task.Result;
}
public static void Main()
{
var source = new CancellationTokenSource();
CancellationToken token = source.Token;
DoWork(token);
Task.Factory.StartNew(()=>
{
while(true)
{
if (token.IsCancellationRequested && threadToCancel != null)
{
threadToCancel.Abort();
Console.WriteLine("Thread aborted");
break; // stop polling once the thread has been aborted
}
}
});
// here 1000 can be replaced by the number of milliseconds after which you want to
// abort the thread that is calling your long-running method
source.CancelAfter(1000);
Console.ReadLine();
}
}
Here is my final implementation (based on Pranay Rana's answer).
public class XenImageUploader : IDisposable
{
public static XenImageUploader Create(Session session, IComponentLogger parentComponentLogger)
{
var logger = new ComponentLogger(parentComponentLogger, typeof(XenImageUploader));
var taskHandler = new XenTaskHandler(
taskReference: session.RegisterNewTask(UploadTaskName, logger),
currentSession: session);
return new XenImageUploader(session, taskHandler, logger);
}
private XenImageUploader(Session session, XenTaskHandler xenTaskHandler, IComponentLogger logger)
{
_session = session;
_xenTaskHandler = xenTaskHandler;
_logger = logger;
_imageUploadingHasFinishedEvent = new AutoResetEvent(initialState: false);
_xenApiUploadCancellationReactionTime = new TimeSpan();
}
public Maybe<string> Upload(
string imageFilePath,
XenStorage destinationStorage,
ProgressToken progressToken,
JobCancellationToken cancellationToken)
{
_logger.WriteDebug("Image uploading has started.");
var imageUploadingThread = new Thread(() =>
UploadImageOfVirtualMachine(
imageFilePath: imageFilePath,
storageReference: destinationStorage.GetReference(),
isTaskCancelled: () => cancellationToken.IsCancellationRequested));
imageUploadingThread.Start();
using (new Timer(
callback: _ => WatchForImageUploadingState(imageUploadingThread, progressToken, cancellationToken),
state: null,
dueTime: TimeSpan.Zero,
period: TaskStatusUpdateTime))
{
_imageUploadingHasFinishedEvent.WaitOne(MaxTimeToUploadSvm);
}
cancellationToken.PerformCancellationIfRequested();
return _xenTaskHandler.TaskIsSucceded
? new Maybe<string>(((string) _xenTaskHandler.Result).GetOpaqueReferenceFromResult())
: new Maybe<string>();
}
public void Dispose()
{
_imageUploadingHasFinishedEvent.Dispose();
}
private void UploadImageOfVirtualMachine(string imageFilePath, XenRef<SR> storageReference, Func<bool> isTaskCancelled)
{
try
{
_logger.WriteDebug("Uploading thread has started.");
HTTP_actions.put_import(
progressDelegate: progress => { },
cancellingDelegate: () => isTaskCancelled(),
timeout_ms: -1,
hostname: new Uri(_session.Url).Host,
proxy: null,
path: imageFilePath,
task_id: _xenTaskHandler.TaskReference,
session_id: _session.uuid,
restore: false,
force: false,
sr_id: storageReference);
_xenTaskHandler.WaitCompletion();
_logger.WriteDebug("Uploading thread has finished.");
}
catch (HTTP.CancelledException exception)
{
_logger.WriteInfo("Image uploading has been cancelled.");
_logger.WriteInfo(exception.ToDetailedString());
}
_imageUploadingHasFinishedEvent.Set();
}
private void WatchForImageUploadingState(Thread imageUploadingThread, ProgressToken progressToken, JobCancellationToken cancellationToken)
{
progressToken.Progress = _xenTaskHandler.Progress;
if (!cancellationToken.IsCancellationRequested)
{
return;
}
_xenApiUploadCancellationReactionTime += TaskStatusUpdateTime;
if (_xenApiUploadCancellationReactionTime >= TimeForXenApiToReactOnCancel)
{
_logger.WriteWarning($"XenApi didn't cancel for {_xenApiUploadCancellationReactionTime}.");
if (imageUploadingThread.IsAlive)
{
try
{
_logger.WriteWarning("Trying to forcefully abort uploading thread.");
imageUploadingThread.Abort();
}
catch (Exception exception)
{
_logger.WriteError(exception.ToDetailedString());
}
}
_imageUploadingHasFinishedEvent.Set();
}
}
private const string UploadTaskName = "Xen image uploading";
private static readonly TimeSpan TaskStatusUpdateTime = TimeSpan.FromSeconds(1);
private static readonly TimeSpan TimeForXenApiToReactOnCancel = TimeSpan.FromSeconds(10);
private static readonly TimeSpan MaxTimeToUploadSvm = TimeSpan.FromMinutes(20);
private readonly Session _session;
private readonly XenTaskHandler _xenTaskHandler;
private readonly IComponentLogger _logger;
private readonly AutoResetEvent _imageUploadingHasFinishedEvent;
private TimeSpan _xenApiUploadCancellationReactionTime;
}
HTTP_actions.put_import calls HTTP_actions.put, which calls HTTP.put, which calls HTTP.CopyStream.
The cancellation delegate is passed down to CopyStream, which checks both that the delegate isn't null (i.e. one was passed) and whether it returns true. However, it only does this at the while statement, so the chances are it is the Read of the stream that is the blocking operation, though it could also occur in the progressDelegate if one is used.
To get around this, put the call to HTTP_actions.put_import() inside a task or background thread and then separately check for cancellation or for a return from the task/thread.
Interestingly enough, a quick glance at that CopyStream code revealed a bug to me. Because the cancellation delegate is evaluated once in the loop condition and again when deciding whether to throw, a delegate whose return value changes between those two calls can make the loop exit without a CancelledException ever being generated. The result of the cancellation delegate call should be stored in a local variable instead.
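A sketch of the fixed pattern (illustrative only, not the actual XenAPI source): the delegate is evaluated once per iteration, and the stored result decides both when to stop and whether to throw.
static void CopyStreamSketch(Stream input, Stream output, Func<bool> cancellingDelegate)
{
    var buffer = new byte[8192];
    bool cancelled;
    // Evaluate the delegate exactly once per pass and remember the result.
    while (!(cancelled = cancellingDelegate != null && cancellingDelegate()))
    {
        int read = input.Read(buffer, 0, buffer.Length);   // this is where the blocking Read happens
        if (read <= 0) break;
        output.Write(buffer, 0, read);
    }
    if (cancelled)
        throw new OperationCanceledException();   // XenAPI throws its own HTTP.CancelledException here
}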
I have a kind of work manager in C# which receives tasks to do and executes them. Tasks arrive from different threads, but they must be executed one at a time, in the order they were received.
I don't want a while loop running all the time, checking whether there are new tasks in the queue. Is there a built-in queue, or an easy way to implement one, that waits for tasks and executes them synchronously without busy-waiting?
As per the comments, you should look into ConcurrentQueue, and also at BlockingCollection: use GetConsumingEnumerable() instead of the while loop you want to avoid.
BlockingCollection<YourClass> _collection =
new BlockingCollection<YourClass>(new ConcurrentQueue<YourClass>());
_collection.Add() can be called from multiple threads.
On a separate thread you can use:
foreach (var message in _collection.GetConsumingEnumerable())
{}
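A slightly fuller sketch of that consumer (YourClass is your work-item type; Process is a hypothetical handler, not part of the original answer):
// Dedicated consumer: GetConsumingEnumerable() blocks without busy-waiting
// until an item is available, and items come out in FIFO order.
Task.Run(() =>
{
    foreach (var message in _collection.GetConsumingEnumerable())
    {
        Process(message);   // executed one at a time, in the order the items were added
    }
});

// Producers on any thread:
_collection.Add(new YourClass());

// When no more work will ever arrive, complete the collection so the foreach ends:
_collection.CompleteAdding();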
You can use a SemaphoreSlim (https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim(v=vs.110).aspx) together with a ConcurrentQueue.
Example:
private delegate void TaskBody();
private class TaskManager
{
private ConcurrentQueue<TaskBody>
TaskBodyQueue = new ConcurrentQueue<TaskBody>();
private readonly SemaphoreSlim
TaskBodySemaphoreSlim = new SemaphoreSlim(1, 1);
public async void Enqueue(TaskBody body)
{
TaskBodyQueue.Enqueue(body);
await TaskBodySemaphoreSlim.WaitAsync();
Console.WriteLine($"Cycle ...");
if (TaskBodyQueue.TryDequeue(out body) == false) {
throw new InvalidProgramException($"TaskBodyQueue is empty!");
}
body();
Console.WriteLine($"Cycle ... done ({TaskBodyQueue.Count} left)");
TaskBodySemaphoreSlim.Release();
}
}
public static void Main(string[] args)
{
var random = new Random();
var tm = new TaskManager();
Parallel.ForEach(Enumerable.Range(0, 30), async number => {
await Task.Delay(100 * number);
tm.Enqueue(delegate {
Console.WriteLine($"Print {number}");
});
});
Task
.Delay(4000)
.Wait();
WaitFor(action: "exit");
}
public static void WaitFor(ConsoleKey consoleKey = ConsoleKey.Escape, string action = "continue")
{
Console.Write($"Press {consoleKey} to {action} ...");
var consoleKeyInfo = default(ConsoleKeyInfo);
do {
consoleKeyInfo = Console.ReadKey(true);
}
while (Equals(consoleKeyInfo.Key, consoleKey) == false);
Console.WriteLine();
}
I just can't seem to understand how to structure an asynchronous call to SendPingAsync. I want to loop through a list of IP addresses and ping them all asynchronously before moving on in the program... right now it takes forever to go through all of them one at a time. I asked a question about it earlier thinking I'd be able to figure out async but apparently I was wrong.
private void button1_Click(object sender, EventArgs e)
{
this.PingLoop();
MessageBox.Show("hi"); //for testing
}
public async void PingLoop()
{
Task<int> longRunningTask = PingAsync();
int result = await longRunningTask;
MessageBox.Show("async call is finished!");
//eventually want to loop here but for now just want to understand how this works
}
private async Task<int> PingAsync()
{
Ping pingSender = new Ping();
string reply = pingSender.SendPingAsync("www.google.com", 2000).ToString();
pingReplies.Add(reply); //what should i be awaiting here??
return 1;
}
I'm afraid I just don't get what is really going on here enough... when should I return a task? When I run this as is I just get a frozen UI and a ping error. I have read the MSDN documentation and tons of questions here and I'm just not getting it.
You'd want to do something like:
private async Task<List<PingReply>> PingAsync()
{
var tasks = theListOfIPs.Select(ip => new Ping().SendPingAsync(ip, 2000));
var results = await Task.WhenAll(tasks);
return results.ToList();
}
This will start off one request per IP in theListOfIPs asynchronously, then asynchronously wait for them all to complete. It will then return the list of replies.
Note that it's almost always better to return the results rather than setting them in a field. The latter can lead to bugs if you use the field (pingReplies) before the asynchronous operation completes; by returning the results and adding them to your collection after the awaited call, you make the code clearer and less bug-prone.
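For example, the calling code might then look like this (a sketch; it assumes pingReplies is a List<PingReply> and reuses the button handler from the question):
private async void button1_Click(object sender, EventArgs e)
{
    var replies = await PingAsync();   // the UI thread stays responsive while the pings run
    pingReplies.AddRange(replies);     // only touch the field after the awaited call has completed
    MessageBox.Show($"Finished: {replies.Count} replies received.");
}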
What you do here, pingSender.SendPingAsync("www.google.com", 2000).ToString();, doesn't make much sense: ToString() on the returned Task gives you the task's type name, not the ping reply.
Instead you should return pingSender.SendPingAsync("www.google.com", 2000) and then
await Task.WhenAll(all of your ping tasks)
What you want is to start all pings at once:
var pingTargetHosts = ...; //fill this in
var pingTasks = pingTargetHosts.Select(
host => new Ping().SendPingAsync(host, 2000)).ToList();
Now the pings are running. Collect their results:
var pingResults = await Task.WhenAll(pingTasks);
Now the concurrent phase of the processing is done and you can examine and process the results.
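For example (assuming pingResults from the snippet above):
foreach (var reply in pingResults)
{
    // Each element is a System.Net.NetworkInformation.PingReply.
    Console.WriteLine($"{reply.Address}: {reply.Status} ({reply.RoundtripTime} ms)");
}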
Here is how I do it:
private delegate void scanTargetDelegate(IPAddress ipaddress);
private Task<PingReply> pingAsync(IPAddress ipaddress)
{
var tcs = new TaskCompletionSource<PingReply>();
try
{
AutoResetEvent are = new AutoResetEvent(false);
Ping ping = new Ping();
ping.PingCompleted += (obj, sender) =>
{
tcs.SetResult(sender.Reply);
};
ping.SendAsync(ipaddress, new object { });
}
catch (Exception)
{
}
return tcs.Task;
}
In a BackgroundWorker I do this:
List<Task<PingReply>> pingTasks = new List<Task<PingReply>>();
addStatus("Scanning Network");
foreach (var ip in range)
{
pingTasks.Add(pingAsync(ip));
}
Task.WaitAll(pingTasks.ToArray());
addStatus("Network Scan Complete");
scanTargetDelegate d = null;
IAsyncResult r = null;
foreach (var pingTask in pingTasks)
{
if (pingTask.Result.Status.Equals(IPStatus.Success))
{
d = new scanTargetDelegate(scanTarget); //do something with the ip
r = d.BeginInvoke(pingTask.Result.Address, null, null);
Interlocked.Increment(ref Global.queuedThreads);
}
else
{
if (!ownIPs.Contains(pingTask.Result.Address))
{
failed.Add(pingTask.Result.Address);
}
}
}
if (r != null)
{
WaitHandle[] waits = new WaitHandle[] { r.AsyncWaitHandle };
WaitHandle.WaitAll(waits);
}
public static async Task<bool> PingAsync(string host)
{
try
{
var ping = new System.Net.NetworkInformation.Ping();
var reply = await ping.SendPingAsync(host);
return (reply.Status == System.Net.NetworkInformation.IPStatus.Success);
}
catch { return false; }
}