I have a line of C# which does not work reliably: it never times out and can run forever.
To be more precise, I am trying to check the connection to a proxy with WebClient.DownloadString.
I want it to time out after 5 seconds without making the whole method asynchronous,
so the code should be like this:
bool success = false
do_this_for_maximum_5_seconds_or_until_we_reach_the_end
{
WebClient.DownloadString("testurl");
success = true;
}
It will try to download testurl, and after the download completes it will set success to true. If DownloadString takes more than 5 seconds, the call is cancelled and we never reach the line that sets success to true, so it remains false and I know the call failed.
The thread should remain blocked while we try DownloadString, so the action does not run in parallel. The ONLY difference from a normal call would be that it times out after 5 seconds.
Please do not suggest alternatives such as using HttpClient, because I need similar code in other places too; I simply want code that runs in a synchronous application (I have not learned anything about asynchronous programming, so I would like to avoid it completely).
My approach was the one suggested by Andrew Arnott in this thread:
Asynchronously wait for Task&lt;T&gt; to complete with timeout
However, I am not exactly sure what type "SomeOperationAsync()" is in his example (it seems to be a task, but how can I put actions into the task?), and the bigger issue is that VS wants to make the whole method asynchronous, whereas I want to run everything synchronously with a timeout on just one specific line of code.
In case this question has already been answered somewhere, kindly provide a link.
Thank you for any help!
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive and add using System.Reactive.Linq; - then you can do this:
var downloadString =
Observable
.Using(() => new WebClient(), wc => Observable.Start(() => wc.DownloadString("testurl")))
.Select(x => new { success = true, result = x });
var timeout =
Observable
.Timer(TimeSpan.FromSeconds(5.0))
.Select(x => new { success = false, result = (string)null });
var operation = Observable.Amb(downloadString, timeout);
var output = await operation;
if (output.success)
{
Console.WriteLine(output.result);
}
The first observable downloads your string. The second sets up a timeout. The third uses the Amb operator to get the result from whichever of the two input observables completes first.
Then we can await the third observable to get its value, and checking which result you got is then a simple task.
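If you would rather stay fully synchronous (as the question asks), here is a minimal sketch of a blocking timeout wrapper built on Task.Run/Task.Wait. The SyncTimeout name is my own, not from any library, and note the caveat: on timeout the download keeps running in the background, because synchronous code cannot truly cancel WebClient.DownloadString.

```csharp
using System;
using System.Threading.Tasks;

static class SyncTimeout
{
    // Runs `action` on a thread-pool thread and blocks the caller for at most
    // `timeout`. Returns true if the action finished in time, false otherwise.
    // On timeout the action is abandoned, not aborted.
    public static bool TryRun(Action action, TimeSpan timeout)
    {
        var task = Task.Run(action);
        return task.Wait(timeout);
    }
}
```

Usage would look like `bool success = SyncTimeout.TryRun(() => new WebClient().DownloadString("testurl"), TimeSpan.FromSeconds(5));`. Be aware that if the action throws (e.g. a bad proxy), Task.Wait rethrows it wrapped in an AggregateException, so you may want a try/catch around the call.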
I am trying to create a unit test to simulate my API being called by many people at the same time.
I've got this code in my unit test:
var tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
var id = i; // must assign to new variable inside for loop
var t = Task.Run(async () =>
{
response = await Client.GetAsync("/api/test2?id=" + id);
Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
});
tasks.Add(t);
}
await Task.WhenAll(tasks);
Then in my controller I am putting in a Thread.Sleep.
But when I do this, the total time for all tests to complete is 10 x the sleep time.
I expected all the calls to be made and to have ended up at the Thread.Sleep call at more or less the same time.
But it seems the API calls are actually made one after the other.
The reason I am testing the parallel API call is because I want to test a deadlock issue with my data repository when using SQLite which has only happened when more than 1 user uses my website at the same time.
And I have never been able to simulate this and I thought I'd create a unit test, but the code I have now seems to not be executing the calls in parallel.
My plan with the Thread.Sleep calls was to put a couple in the Controller method to make sure all requests end up between certain code blocks at the same time.
Do I need to set a max number of parallel requests on the web server, or am I doing something obviously wrong?
Thanks in advance.
Update 1:
I forgot to mention I get the same results with await Task.Delay(1000); and many similar alternatives.
Not sure if it's clear but this is all running within a unit test using NUnit.
And the "Web Server" and Client is created like this:
var builder = new WebHostBuilder().UseStartup<TStartup>();
Server = new TestServer(builder);
Client = Server.CreateClient();
You can use Task.Delay(time in milliseconds). Thread.Sleep does not release the thread, so it cannot process other tasks while waiting for the result.
The HttpClient class in .NET has a limit of two concurrent requests to the same server by default, which I believe might be causing the issue in this case. Usually, this limit can be overridden by creating a new HttpClientHandler and using it as an argument in the constructor:
new HttpClient(new HttpClientHandler
{
MaxConnectionsPerServer = 100
})
But because the clients are created using the TestServer method, that gets a little more complicated. You could try changing the ServicePointManager.DefaultConnectionLimit property like below, but I'm not sure if that will work with the TestServer:
System.Net.ServicePointManager.DefaultConnectionLimit = 100;
That being said, I believe using Unit Tests for doing load testing is not a good approach and recommend looking into tools specific for load testing.
Reference for the ServicePointManager class
This blog post also has more in-depth information on the subject.
I found the problem with my test.
It was not the TestServer or the client code, it was the Database code.
In my controller I was starting an NHibernate Transaction, and that was blocking the requests because it would put a lock on the table being updated.
This behavior is correct, so I had to change my code a bit to not start a transaction automatically, and instead leave that up to the calling code to manage.
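The fix described can be sketched like this; all the type names here (ISessionLike, IUnitOfWork, FakeSession) are made-up stand-ins for the poster's NHibernate types, just to show the shape of "let the caller own the transaction":

```csharp
using System;

// Stand-ins for NHibernate's ISession / ITransaction.
public interface IUnitOfWork : IDisposable { void Commit(); }
public interface ISessionLike
{
    IUnitOfWork BeginTransaction();
    void Save(object entity);
}

public class Repository
{
    private readonly ISessionLike _session;
    public Repository(ISessionLike session) { _session = session; }

    // Before the fix, this method opened its own transaction on every call,
    // taking a table lock and serializing concurrent requests.
    // Now it only saves; transaction scope is the caller's decision.
    public void Add(object entity) { _session.Save(entity); }
}

public static class Caller
{
    // The controller (or service) decides when a transaction is needed
    // and keeps its scope as short as possible.
    public static void AddWithTransaction(ISessionLike session, object entity)
    {
        using (var tx = session.BeginTransaction())
        {
            new Repository(session).Add(entity);
            tx.Commit();
        }
    }
}

// Tiny in-memory fake so the flow can be exercised without a database.
public class FakeSession : ISessionLike
{
    public bool Saved, Committed;
    public IUnitOfWork BeginTransaction() { return new Tx(this); }
    public void Save(object entity) { Saved = true; }
    private class Tx : IUnitOfWork
    {
        private readonly FakeSession _s;
        public Tx(FakeSession s) { _s = s; }
        public void Commit() { _s.Committed = true; }
        public void Dispose() { }
    }
}
```

The design point is simply that short, caller-controlled transactions reduce the window in which SQLite holds its write lock, which is what made the concurrent requests deadlock.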
This issue is really hard to debug: it does not always happen (and never quickly enough that I can just step through the code), and it looks like no one out there has had a similar issue (I've googled for hours without finding anything related).
In short, my dataflow network works fine until, at some point, the terminal block (which updates the UI) seems to stop working (no new data shows up on the UI) while all the upstream dataflow blocks keep working fine. So it is as if there were some disconnection between the other blocks and the UI block.
Here is my detailed dataflow network, let's check out first before I'm going to explain more about the issue:
//the network graph first
[raw data block]
-> [switching block] -> [data counting block]
-> [processing block] -> [ok result block] -> [completion monitoring]
-> [not ok result block] -> [completion monitoring]
//in the UI code behind where I can consume the network and plug-in some other blocks for updating
//like this:
[ok result block] -> [ok result counting block]
[not ok result block] -> [other ui updating]
The block [ok result block] is a BroadcastBlock which pushes results to the [ok result counting block]. The issue I described above is that this [ok result counting block] seems to be disconnected from [ok result block].
var options = new DataflowBlockOptions { EnsureOrdered = false };
var execOptions = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 80 };
//[raw data block]
var rawDataBlock = new BufferBlock<Input>(options);
//[switching block]
var switchingBlock = new TransformManyBlock<Input,Input>(e => new[] {e,null});
//[data counting block]
var dataCountingBlock = new BroadcastBlock<Input>(null);
//[processing block]
var processingBlock = new TransformBlock<Input,int>(async e => {
//call another api to compute the result
var result = await …;
//rollback the input for later processing (some kind of retry)
if(result < 0){
//per my logging, there is only one call dropping
//in this case
Task.Run(rollback);
}
//local function to rollback
async Task rollback(){
await rawDataBlock.SendAsync(e).ConfigureAwait(false);
}
return result;
}, execOptions);
//[ok result block]
var okResultBlock = new BroadcastBlock<int>(null, options);
//[not ok result block]
var notOkResultBlock = new BroadcastBlock<int>(null, options);
//[completion monitoring]
var completionMonitoringBlock = new ActionBlock<int>(e => {
if(rawDataBlock.Completion.IsCompleted && processingBlock.InputCount == 0){
processingBlock.Complete();
}
}, execOptions);
//connect the blocks to build the network
rawDataBlock.LinkTo(switchingBlock);
switchingBlock.LinkTo(processingBlock, e => e != null);
switchingBlock.LinkTo(dataCountingBlock, e => e == null);
processingBlock.LinkTo(okResultBlock, e => e >= 9);
processingBlock.LinkTo(notOkResultBlock, e => e < 9);
okResultBlock.LinkTo(completionMonitoringBlock);
notOkResultBlock.LinkTo(completionMonitoringBlock);
In the UI code behind, I plug in some other UI blocks to update the info. Here I'm using WPF but I think it does not matter here:
var uiBlockOptions = new ExecutionDataflowBlockOptions {
TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
};
dataCountingBlock.LinkTo(new ActionBlock<int>(e => {
//these are properties in the VM class, which is bound to the UI (xaml view)
RawInputCount++;
}, uiBlockOptions));
okResultBlock.LinkTo(new ActionBlock<int>(e => {
//these are properties in the VM class, which is bound to the UI (xaml view)
ProcessedCount++;
OkResultCount++;
}, uiBlockOptions));
notOkResultBlock.LinkTo(new ActionBlock<int>(e => {
//these are properties in the VM class, which is bound to the UI (xaml view)
ProcessedCount++;
PendingCount = processingBlock.InputCount;
}, uiBlockOptions));
I do have code monitoring the completion status of the blocks: rawDataBlock, processingBlock, okResultBlock, notOkResultBlock.
I also have other logging code inside the processingBlock to help diagnosing.
So as I said, after a fairly long time (about 1 hour with about 600K items processed, although that number says nothing definite; it could be random), the network still seems to run fine, except that some counts (ok result, not ok result) stop updating, as if okResultBlock and notOkResultBlock were disconnected from processingBlock, OR disconnected from the UI blocks that update the UI. I have verified that processingBlock is still working (no exception logged, and results are still written to file), that dataCountingBlock is still working (its count keeps updating on the UI), and that none of processingBlock, okResultBlock and notOkResultBlock has completed (their completions have a .ContinueWith that logs the status, and nothing was logged).
So it is really stuck there, and I do not have any clue why it could stop working like that; this kind of thing can only happen with a black-box library like TPL Dataflow. I know it may be hard for you to diagnose as well, but I am asking for suggestions, shared experience with similar issues, and any guesses about what could cause this kind of behavior in TPL Dataflow.
UPDATE:
I have managed to reproduce the bug one more time, this time with some extra code in place to write out debugging info. The issue now comes down to this: processingBlock somehow does not actually push/post/send any message to its linked blocks (including okResultBlock and notOkResultBlock), AND even a brand-new block linked to it (prepended with a DataflowLinkOptions having Append set to false) does not receive any message. As I said, processingBlock does seem to still work fine (the code inside its delegate runs and produces result logging normally). So this is still a very strange issue.
In short, the problem now becomes: why can processingBlock not send/post its messages to the other linked blocks? Is there any possible cause for that to occur? And how can I tell whether blocks are linked successfully (after the call to .LinkTo)?
It was actually my fault: the processingBlock really is blocked, but it is blocked correctly and in a good way (by design).
The processingBlock is blocked by a combination of two factors:
EnsureOrdered is true (the default), so outputs are always queued in processing order.
There is at least one output that cannot be pushed out (to any other block).
So if one output cannot be pushed out, it becomes a blocking item: because all outputs are queued in processing order, every output processed after it simply queues up behind that first undeliverable output.
In my case, the special output that cannot be pushed out is a null result. That null result can only be produced by some error (exception handling). I have the two blocks okResultBlock and notOkResultBlock linked to the processingBlock, but both links are filtered to let only non-null results through. Sorry that my question does not reflect my exact code regarding the output type: in the question it is a simple int, but it is actually a (nullable) class, and the actual linking code looks like this:
processingBlock.LinkTo(okResultBlock, e => e != null && e.Point >= 9);
processingBlock.LinkTo(notOkResultBlock, e => e != null && e.Point < 9);
So the null output blocks all outputs processed after it (because EnsureOrdered is true by default).
To fix this, I simply set EnsureOrdered to false (not strictly required to avoid the blocking, but it suits my case) and, most importantly, added one more link to consume the null outputs:
processingBlock.LinkTo(DataflowBlock.NullTarget<Output>(), e => e == null);
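Here is a minimal, self-contained sketch of that failure mode and the fix (it assumes the System.Threading.Tasks.Dataflow NuGet package; the int-to-string pipeline is a simplified stand-in for the real network). Commenting out the NullTarget link makes Run() hang forever, which is exactly the "stuck but apparently healthy" behavior described above.

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow package

class NullBlockingDemo
{
    public static async Task<int> Run()
    {
        // Emits null for one input, simulating the error path in the question.
        var producer = new TransformBlock<int, string>(
            i => i == 2 ? null : "item " + i);

        int received = 0;
        var consumer = new ActionBlock<string>(s => received++);

        // Filtered link: nulls match no target...
        producer.LinkTo(consumer,
            new DataflowLinkOptions { PropagateCompletion = true },
            s => s != null);
        // ...so without this extra link, the null produced for i == 2 would sit
        // at the head of the ordered output queue and block items 3 and 4 forever.
        producer.LinkTo(DataflowBlock.NullTarget<string>(), s => s == null);

        for (int i = 0; i < 5; i++) producer.Post(i);
        producer.Complete();
        await consumer.Completion;
        return received; // the 4 non-null items
    }
}
```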
I am attempting to implement a timeout for my Durable function.
In my function, I am currently doing a fan out of activity functions, each of which call a separate API to collect current pricing data. (Price comparison site). All of this works well and I am happy with the results, however I need to implement a time out in case 1 or more APIs do not respond within a reasonable time (~15 seconds)
I am using the following pattern:
var parallelActivities = new List<Task<T>>
{
context.CallActivityAsync<T>( "CallApi1", input ),
context.CallActivityAsync<T>( "CallApi2", input ),
context.CallActivityAsync<T>( "CallApi3", input ),
context.CallActivityAsync<T>( "CallApi4", input ),
context.CallActivityAsync<T>( "CallApi5", input ),
context.CallActivityAsync<T>( "CallApi16", input )
};
var timeout = TimeSpan.FromSeconds(15);
var deadline = context.CurrentUtcDateTime.Add(timeout);
using ( var cts = new CancellationTokenSource() )
{
var timeoutTask = context.CreateTimer(deadline, cts.Token);
var taskRaceWinner = await Task.WhenAny(Task.WhenAll( parallelActivities ), timeoutTask);
if ( taskRaceWinner != timeoutTask )
{
cts.Cancel();
}
foreach ( var completedParallelActivity in parallelActivities.Where( task => task.Status == TaskStatus.RanToCompletion ) )
{
//Process results here
}
//More logic here
}
Everything seems to work correctly. If any activity doesn't return within the time limit, the timeout task wins, and the data is processed and returned correctly.
The Durable Functions documentation states: "This mechanism does not actually terminate in-progress activity function execution. Rather, it simply allows the orchestrator function to ignore the result and move on. For more information, see the Timers documentation."
Unfortunately my function remains in the "running" status until it ultimately hits the durable function timeout and recycles.
Am I doing something wrong? I realize that, generally, the durable function will be marked as running until all activities have completed, however the documentation above indicates that I should be able to "ignore" the activities that are running too long.
I could implement a timeout in each individual API, however that doesn't seem like good design and I have been resisting. So, please help me stackoverflow!
According to this, the Durable Task Framework will not change an orchestration's status to "completed" until all outstanding tasks are completed or canceled, even if their output is ignored. Also, according to this and this, an activity/sub-orchestration cannot currently be cancelled from the parent. So at the moment, the only way I can think of is to pass a timeout parameter (of type TimeSpan) from the parent as part of the input object to the activity (e.g. context.CallActivityAsync&lt;T&gt;( "CallApi1", input )) and let the child activity function handle its own exit, respecting that timeout. I tested this myself and it works fine. Please feel free to reach out to me for any follow-up.
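A sketch of what "let the activity handle its own timeout" could look like, using plain Task.WhenAny inside the activity body (the RunWithTimeout helper and the fallback value are my own illustration, not a Durable Functions API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class ActivityTimeout
{
    // Runs `work` but gives up after `timeout`, returning `fallback` instead.
    // The activity then returns promptly, so the orchestrator's WhenAll can finish.
    public static async Task<T> RunWithTimeout<T>(
        Func<CancellationToken, Task<T>> work, TimeSpan timeout, T fallback)
    {
        using (var cts = new CancellationTokenSource())
        {
            var workTask = work(cts.Token);
            var winner = await Task.WhenAny(workTask, Task.Delay(timeout, cts.Token));
            if (winner == workTask)
            {
                cts.Cancel();            // stop the delay timer
                return await workTask;   // propagate the result (or exception)
            }
            cts.Cancel();                // ask the work to stop; we stop waiting
            return fallback;
        }
    }
}
```

Inside each CallApiN activity you would then write something like `return await ActivityTimeout.RunWithTimeout(ct => CallPricingApiAsync(input, ct), input.Timeout, fallback: null);`, where input.Timeout is the TimeSpan passed from the orchestrator and CallPricingApiAsync is a placeholder for the real API call.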
I'm struggling with a ReactiveUI use case that I feel is so simple there must be "out-of-the-box" support for it. But I cannot find it.
The scenario is a basic search interface with these features:
A search string TextBox where the user enters the search text
A result TextBox where the result is presented
An indicator showing that a search is in progress
The search should work like this:
The search string TextBox is throttled, so that after 500ms of
inactivity, a search operation is initiated.
Each time a new search is initiated any ongoing search operation should be cancelled.
Basically I'm trying to extend the "Compelling example" to cancel the currently executing command before starting a new command.
Seems easy enough? Yeah, but I cannot get it right using ReactiveCommand. This is what I have:
var searchTrigger = this.WhenAnyValue(vm => vm.SearchString)
.Throttle(TimeSpan.FromMilliseconds(500))
.Publish().RefCount();
var searchCmd = ReactiveCommand.CreateFromObservable(
() => Observable
.StartAsync(ct => CancellableSearch(SearchString, ct))
.TakeUntil(searchTrigger));
searchCmd.ToPropertyEx(this, vm => vm.Result);
searchCmd.IsExecuting.ToPropertyEx(this, vm => vm.IsSearching);
searchTrigger.Subscribe(_ => searchCmd.Execute(Unit.Default).Subscribe());
The above code works in all respects except searchCmd.IsExecuting. I kick off a new search regardless of the state of searchCmd.CanExecute, which makes IsExecuting unreliable, since it assumes the commands run serially. And I cannot use InvokeCommand instead of Execute, because then new searches would not start while a search is in progress.
I currently have a working solution without ReactiveCommand, but I have a strong feeling this simple use case should be supported in a straightforward way using ReactiveCommand. What am I missing?
AFAICT ReactiveUI 7 doesn't really handle this kind of overlapping execution. All the messages will eventually make it through, but not in a way that keeps IsExecuting consistently true. ReactiveUI 6 used an in-flight counter, so overlapping executions were handled, but version 7 simplified it all way down, most likely for performance and reliability (I'm just guessing).
Because tasks aren't going to cancel right away, the first command completes after the second command starts, which leads to IsExecuting toggling true to false to true to false. That middle false/true/false transition happens instantly as the messages catch up.
I know you said you had a non-ReactiveCommand version working, but here's a version that I think works with ReactiveCommand, by waiting for the first command to finish (or finish cancelling) before firing again. One advantage of waiting until the task actually cancels is that you are assured you don't have two hands in the cookie jar :-) which might not matter in your case, but can be nice in some cases.
//Fires an event right away so search is cancelled faster
var searchEntered = this.WhenAnyValue(vm => vm.SearchString)
.Where(x => !String.IsNullOrWhiteSpace(x))
.Publish()
.RefCount();
ReactiveCommand<string, string> searchCmd = ReactiveCommand.CreateFromObservable<string, string>(
(searchString) => Observable.StartAsync(ct => CancellableSearch(SearchString, ct))
.TakeUntil(searchEntered));
//if triggered wait for IsExecuting to transition back to false before firing command again
var searchTrigger =
searchEntered
.Throttle(TimeSpan.FromMilliseconds(500))
.Select(searchString => searchCmd.IsExecuting.Where(e => !e).Take(1).Select(_ => searchString))
.Publish()
.RefCount();
_IsSearching =
searchCmd.IsExecuting
.ToProperty(this, vm => vm.IsSearching);
searchTrigger
.Switch()
.InvokeCommand(searchCmd);
I am developing some USB communication with a custom-made device. I use this USB DLL to make things easier:
HidLibrary
It is a very good library, but it has one little bug: for example, if I send something to the USB device and the device does not respond to it (it is not the right command, etc.), the library waits for the command response forever!
Now I would like to add a timeout, so that if there is no response after a few seconds, execution moves on.
I currently use a method with a bool status response, so I know whether the read was successful or not:
var readreport = _choosendevice.ReadReport();
Later I need the "readreport" variable, because inside it (readreport.Data) is the accepted data.
My question is: how can I wrap this line in some kind of timeout? I already found a proposed solution for the bug, but it did not work for me (bug fix link).
If you have any questions, please ask. If my question is not asked in the right way, sorry about that; I am a beginner in C#. THANKS for any help!
You can use Tasks to do so:
Task<HidReport> myTask = Task.Factory.StartNew(() => _choosendevice.ReadReport());
myTask.Wait(100); //Wait for 100 ms.
if (myTask.IsCompleted)
Console.WriteLine("myTask completed.");
else
Console.WriteLine("Timed out before myTask completed.");
HidReport report = myTask.Result; // note: this still blocks if the task has not completed yet
EDIT: I didn't know the return type of your function. It returns a HidReport object, so I modified the task creation to fit that return type.
As said in the comments, the library already provides this mechanism, so you can just call the right method:
HidReport report = await ReadReportAsync(timeout);
EDIT: This code worked well for me:
HidDevice device = HidDevices.Enumerate().ToList().First(e =>e.Description.Contains("mouse"));
Task<HidReport> t = Task.Factory.StartNew(() => device.ReadReport(1000));
t.Wait();
HidReport report = t.Result;
Late response, but in case someone visits this question:
It's better to use separate tasks for the result and the waiting.
var waitTask = Task.Delay(timeoutInMs);
Task<MyReport> reportTask = Task.Factory.StartNew(() => _choosendevice.ReadReport());
await Task.WhenAny(waitTask, reportTask);
if (reportTask.IsCompleted)
{
return await reportTask;
}
else
{
// preferred timeout error handling...
}
That way you do not have to wait for the timeout if the report is ready in time.
(And Task.Delay is better than Task.Wait because it doesn't block the thread.)
Task<HidReport> myTask = Task.Factory.StartNew(() => _choosendevice.ReadReport());
myTask.Wait(1000);
if (myTask.IsCompleted)
{
HidReport report = myTask.Result;
}
else
{
// note: you cannot Dispose a task that has not completed; just abandon it
// show ERROR
}
Maybe this will work. It has been working so far; I will just run a few more tests and then confirm. (This page helped me with this. :))