I'm running a .NET application (.NET 4.5) on Mono on Debian/Raspbian (on a Raspberry Pi). Very often, say 9 out of 10 runs, I see after a while:
_wapi_handle_ref: Attempting to ref unused handle 0x770
_wapi_handle_unref_full: Attempting to unref unused handle 0x770
Of course the "0x770" is always different.
The application runs fine for a short time, but eventually fails - either hard, or it just stops progressing (it looks like a deadlock/livelock).
Is there any guidance on how to pinpoint the problem in the .NET code that causes this, and on how to help the Mono project resolve it?
Mono version info:
Mono JIT compiler version 3.2.3 (Debian 3.2.3+dfsg-5+rpi1)
Copyright (C) 2002-2012 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
TLS: __thread
SIGSEGV: normal
Notifications: epoll
Architecture: armel,vfp+hard
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: sgen
On 3.2.7 (built from current sources) the app fails even harder:
Stacktrace:
at <unknown> <0xffffffff>
at (wrapper managed-to-native) System.Buffer.BlockCopyInternal (System.Array,int,System.Array,int,int) <0xffffffff>
at System.IO.FileStream.ReadSegment (byte[],int,int) <0x0006f>
at System.IO.FileStream.ReadInternal (byte[],int,int) <0x00233>
at (wrapper runtime-invoke) <Module>.runtime_invoke_int__this___object_int_int (object,intptr,intptr,intptr) <0xffffffff>
Native stacktrace:
Debug info from gdb:
Mono support loaded.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
[New Thread 0xb519a430 (LWP 1456)]
[New Thread 0xb52ba430 (LWP 32577)]
[New Thread 0xb52da430 (LWP 32576)]
[New Thread 0xb5538430 (LWP 32574)]
[New Thread 0xb5b7b430 (LWP 32573)]
0xb6f05494 in pthread_cond_wait@@GLIBC_2.4 () from /lib/arm-linux-gnueabihf/libpthread.so.0
Id Target Id Frame
6 Thread 0xb5b7b430 (LWP 32573) "mono" 0xb6f07700 in sem_wait@@GLIBC_2.4 () from /lib/arm-linux-gnueabihf/libpthread.so.0
5 Thread 0xb5538430 (LWP 32574) "mono" 0xb6f09250 in nanosleep () from /lib/arm-linux-gnueabihf/libpthread.so.0
4 Thread 0xb52da430 (LWP 32576) "mono" 0xb6e6de84 in epoll_wait () from /lib/arm-linux-gnueabihf/libc.so.6
3 Thread 0xb52ba430 (LWP 32577) "mono" 0xb6f07954 in sem_timedwait () from /lib/arm-linux-gnueabihf/libpthread.so.0
2 Thread 0xb519a430 (LWP 1456) "mono" 0xb6f09a3c in waitpid () from /lib/arm-linux-gnueabihf/libpthread.so.0
* 1 Thread 0xb6fd9000 (LWP 32571) "mono" 0xb6f05494 in pthread_cond_wait@@GLIBC_2.4 () from /lib/arm-linux-gnueabihf/libpthread.so.0
Thread 6 (Thread 0xb5b7b430 (LWP 32573)):
#0 0xb6f07700 in sem_wait@@GLIBC_2.4 () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1 0x001fb618 in mono_sem_wait (sem=0x2eff34, alertable=1) at mono-semaphore.c:119
#2 0x0017a52c in finalizer_thread (unused=<optimized out>) at gc.c:1073
#3 0x0015f3ec in start_wrapper_internal (data=0x974850) at threads.c:609
#4 start_wrapper (data=0x974850) at threads.c:654
#5 0x001f1718 in thread_start_routine (args=0x92f628) at wthreads.c:294
#6 0x001ff824 in inner_start_thread (arg=<optimized out>) at mono-threads-posix.c:49
#7 0xb6f00bfc in start_thread () from /lib/arm-linux-gnueabihf/libpthread.so.0
#8 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
#9 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Thread 5 (Thread 0xb5538430 (LWP 32574)):
#0 0xb6f09250 in nanosleep () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1 0xb6f08044 in __pthread_enable_asynccancel () from /lib/arm-linux-gnueabihf/libpthread.so.0
#2 0x001f08f8 in SleepEx (ms=<optimized out>, alertable=162) at wthreads.c:842
#3 0x00160e80 in monitor_thread (unused=<optimized out>) at threadpool.c:779
#4 0x0015f3ec in start_wrapper_internal (data=0xa0e400) at threads.c:609
#5 start_wrapper (data=0xa0e400) at threads.c:654
#6 0x001f1718 in thread_start_routine (args=0x92f7d8) at wthreads.c:294
#7 0x001ff824 in inner_start_thread (arg=<optimized out>) at mono-threads-posix.c:49
#8 0xb6f00bfc in start_thread () from /lib/arm-linux-gnueabihf/libpthread.so.0
#9 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
#10 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Thread 4 (Thread 0xb52da430 (LWP 32576)):
#0 0xb6e6de84 in epoll_wait () from /lib/arm-linux-gnueabihf/libc.so.6
#1 0x00161804 in tp_epoll_wait (p=0x2efd7c) at ../../mono/metadata/tpool-epoll.c:118
#2 0x0015f3ec in start_wrapper_internal (data=0xbead88) at threads.c:609
#3 start_wrapper (data=0xbead88) at threads.c:654
#4 0x001f1718 in thread_start_routine (args=0x92fb38) at wthreads.c:294
#5 0x001ff824 in inner_start_thread (arg=<optimized out>) at mono-threads-posix.c:49
#6 0xb6f00bfc in start_thread () from /lib/arm-linux-gnueabihf/libpthread.so.0
#7 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
#8 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Thread 3 (Thread 0xb52ba430 (LWP 32577)):
#0 0xb6f07954 in sem_timedwait () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1 0x001fb6f8 in mono_sem_timedwait (sem=0x2efcfc, timeout_ms=<optimized out>, alertable=1) at mono-semaphore.c:82
#2 0x00163844 in async_invoke_thread (data=0xb6e16e00) at threadpool.c:1565
#3 0x0015f3ec in start_wrapper_internal (data=0xbea9c8) at threads.c:609
#4 start_wrapper (data=0xbea9c8) at threads.c:654
#5 0x001f1718 in thread_start_routine (args=0x92fbc8) at wthreads.c:294
#6 0x001ff824 in inner_start_thread (arg=<optimized out>) at mono-threads-posix.c:49
#7 0xb6f00bfc in start_thread () from /lib/arm-linux-gnueabihf/libpthread.so.0
#8 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
#9 0xb6e6d758 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Thread 2 (Thread 0xb519a430 (LWP 1456)):
#0 0xb6f09a3c in waitpid () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1 0x000b1284 in mono_handle_native_sigsegv (signal=<optimized out>, ctx=<optimized out>) at mini-exceptions.c:2299
#2 0x000277e4 in mono_sigsegv_signal_handler (_dummy=11, info=0xb5199548, context=0xb51995c8) at mini.c:6777
#3 <signal handler called>
#4 mono_array_get_byte_length (array=0xb4e54010) at icall.c:6121
#5 ves_icall_System_Buffer_BlockCopyInternal (src=0xb3512010, src_offset=<optimized out>, dest=<optimized out>, dest_offset=<optimized out>, count=4096) at icall.c:6192
#6 0xb6817a18 in ?? ()
Cannot access memory at address 0xff8
=================================================================
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
I have a long-running task that needs to have only one instance running at a time. I chose Azure Durable Entities based on the documentation saying they are designed for this sort of situation, but it seems that past a certain threshold the entity re-runs the task after execution has completed. It does the job of single-threading the task beautifully, but never seems to recognize that it has completed.
Here is an example with the Sleep call representing the long running task execution:
[FunctionName("LongRunningTask")]
public static void LongRunningTask([EntityTrigger] IDurableEntityContext context, ILogger log)
{
var sleepTime = context.GetInput<TimeSpan>();
var state = context.GetState<LongRunningTaskState>() ?? new LongRunningTaskState();
log.LogInformation($"Waiting for {sleepTime}... State when started: {JsonConvert.SerializeObject(state)}");
System.Threading.Thread.Sleep(sleepTime);
state.RunCount++;
context.SetState(state);
var updatedState = context.GetState<LongRunningTaskState>() ?? new LongRunningTaskState();
log.LogInformation($"Finished waiting {sleepTime}. New state is {JsonConvert.SerializeObject(updatedState)}");
context.Return(state);
}
[FunctionName("StartLongRunningTask")]
public static async Task<IActionResult> StartLongRunningTask(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "StartLongRunningTask/{seconds:int}")] HttpRequestMessage req,
int seconds,
[DurableClient] IDurableEntityClient starter,
ILogger log)
{
var delay = TimeSpan.FromSeconds(seconds);
var entityId = new EntityId(nameof(LongRunningTask), "Singleton");
await starter.SignalEntityAsync(entityId, "Download", operationInput: delay);
return (ActionResult)(new OkObjectResult($"Schedule task for {delay}"));
}
If I tell it to wait 30 seconds, it behaves as expected:
[2023-02-03T23:02:22.452Z] Executing 'StartLongRunningTask' (Reason='This function was programmatically called via the host APIs.', Id=542d8dad-58a2-47a1-94ac-5f5489a56d67)
[2023-02-03T23:02:22.514Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' scheduled. Reason: EntitySignal:Download. IsReplay: False. State: Scheduled. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 2.
[2023-02-03T23:02:22.529Z] Executed 'StartLongRunningTask' (Succeeded, Id=542d8dad-58a2-47a1-94ac-5f5489a56d67, Duration=98ms)
[2023-02-03T23:02:22.660Z] Executing 'LongRunningTask' (Reason='(null)', Id=11e224a1-acc1-404e-b29f-7dd062e16e40)
[2023-02-03T23:02:22.666Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' started. IsReplay: False. Input: (216 bytes). State: Started. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 3. TaskEventId: -1
[2023-02-03T23:02:22.676Z] Waiting for 00:00:30... State when started: {"runCount":1}
[2023-02-03T23:02:52.773Z] Finished waiting 00:00:30. New state is {"runCount":2}
[2023-02-03T23:02:52.777Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed 'Download' operation 7470263c-de86-4243-84e5-eec14d489af0 in 30103.6145ms. IsReplay: False. Input: (216 bytes). Output: (56 bytes). HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 4.
[2023-02-03T23:02:52.809Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed. ContinuedAsNew: True. IsReplay: False. Output: (56 bytes). State: Completed. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 5. TaskEventId: -1
[2023-02-03T23:02:52.815Z] Executed 'LongRunningTask' (Succeeded, Id=11e224a1-acc1-404e-b29f-7dd062e16e40, Duration=30157ms)
But if I tell it to wait 10 minutes, it enters an endless loop where it doesn't recognize the successful completion of the previous iteration. Note the state at the beginning of each attempt stays at "runCount": 1:
[2023-02-03T22:29:41.355Z] Executing 'StartLongRunningTask' (Reason='This function was programmatically called via the host APIs.', Id=bd751b6f-71f8-4c1d-97ee-24b3027a6d85)
[2023-02-03T22:29:41.427Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' scheduled. Reason: EntitySignal:Download. IsReplay: False. State: Scheduled. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 2.
[2023-02-03T22:29:41.453Z] Executed 'StartLongRunningTask' (Succeeded, Id=bd751b6f-71f8-4c1d-97ee-24b3027a6d85, Duration=124ms)
[2023-02-03T22:29:41.474Z] Executing 'LongRunningTask' (Reason='(null)', Id=bdbb65aa-ac65-4252-bab8-028974a81148)
[2023-02-03T22:29:41.481Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' started. IsReplay: False. Input: (216 bytes). State: Started. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 3. TaskEventId: -1
[2023-02-03T22:29:41.500Z] Waiting for 00:10:00... State when started: {"runCount":1}
[2023-02-03T22:39:41.586Z] Finished waiting 00:10:00. New state is {"runCount":2}
[2023-02-03T22:39:41.592Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed 'Download' operation 8a07b5fb-fa93-4ff5-9278-2b33eb125ea1 in 600097.4966ms. IsReplay: False. Input: (216 bytes). Output: (56 bytes). HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 4.
[2023-02-03T22:39:41.616Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed. ContinuedAsNew: True. IsReplay: False. Output: (56 bytes). State: Completed. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 5. TaskEventId: -1
[2023-02-03T22:39:41.619Z] Executed 'LongRunningTask' (Succeeded, Id=bdbb65aa-ac65-4252-bab8-028974a81148, Duration=600148ms)
[2023-02-03T22:39:41.653Z] Executing 'LongRunningTask' (Reason='(null)', Id=18e7056a-22e2-496f-8486-a7704791713e)
[2023-02-03T22:39:41.654Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' started. IsReplay: False. Input: (216 bytes). State: Started. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 6. TaskEventId: -1
[2023-02-03T22:39:41.656Z] Waiting for 00:10:00... State when started: {"runCount":1}
[2023-02-03T22:49:42.466Z] Finished waiting 00:10:00. New state is {"runCount":2}
[2023-02-03T22:49:42.470Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed 'Download' operation 4e08a7d2-a0bf-4d68-bc2a-eff989ef3007 in 600814.92ms. IsReplay: False. Input: (216 bytes). Output: (56 bytes). HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 7.
[2023-02-03T22:49:42.473Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed. ContinuedAsNew: True. IsReplay: False. Output: (56 bytes). State: Completed. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 8. TaskEventId: -1
[2023-02-03T22:49:42.475Z] Executed 'LongRunningTask' (Succeeded, Id=18e7056a-22e2-496f-8486-a7704791713e, Duration=600822ms)
[2023-02-03T22:49:42.502Z] Executing 'LongRunningTask' (Reason='(null)', Id=d57f3470-af6f-464a-9b14-8091e70946c2)
[2023-02-03T22:49:42.503Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' started. IsReplay: False. Input: (216 bytes). State: Started. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 9. TaskEventId: -1
[2023-02-03T22:49:42.505Z] Waiting for 00:10:00... State when started: {"runCount":1}
[2023-02-03T22:59:42.567Z] Finished waiting 00:10:00. New state is {"runCount":2}
[2023-02-03T22:59:42.570Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed 'Download' operation 8a07b5fb-fa93-4ff5-9278-2b33eb125ea1 in 600065.2645ms. IsReplay: False. Input: (216 bytes). Output: (56 bytes). HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 10.
[2023-02-03T22:59:42.571Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' completed. ContinuedAsNew: True. IsReplay: False. Output: (56 bytes). State: Completed. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 11. TaskEventId: -1
[2023-02-03T22:59:42.572Z] Executed 'LongRunningTask' (Succeeded, Id=d57f3470-af6f-464a-9b14-8091e70946c2, Duration=600070ms)
[2023-02-03T22:59:42.594Z] Executing 'LongRunningTask' (Reason='(null)', Id=94e639a1-c460-494e-b48c-3215253acf6f)
[2023-02-03T22:59:42.595Z] #longrunningtask#Singleton: Function 'longrunningtask (Entity)' started. IsReplay: False. Input: (216 bytes). State: Started. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.9.0. SequenceNumber: 12. TaskEventId: -1
[2023-02-03T22:59:42.596Z] Waiting for 00:10:00... State when started: {"runCount":1}
I have the function timeout set to 4 hours in my host.json file and a premium app service plan that allows for longer running functions.
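For reference, the timeout override mentioned above lives in host.json; a minimal fragment (a sketch - only the functionTimeout value is taken from the description above) looks like this:

```json
{
  "version": "2.0",
  "functionTimeout": "04:00:00"
}
```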
When I go to the Unity editor, then go to Collaborate, Unity crashes when I switch to another commit revision. Why? I have no logs and no error message - it crashes immediately, and I haven't found anything on the internet that can help me.
EDIT: I found logs for the editor:
[./Editor/Src/AssetPipeline/PluginManager.cpp line 151]
(Filename: Library/PackageCache/com.unity.collab-proxy#1.2.16/Editor/Collab/CollabToolbarButton.cs Line: 149)
Refreshing native plugins compatible for Editor in 2.89 ms, found 3 plugins.
Preloading 0 native plugins for Editor in 0.08 ms.
UPID Received '6b4a481d-7f06-4928-ac27-e359b0198a45'.
UPID Received '6b4a481d-7f06-4928-ac27-e359b0198a45'.
Obtained 17 stack frames.
Thread 0x30ed53000 may have been prematurely finalized
#0 0x000001024fbd8e in MemoryManager::Allocate(unsigned long, unsigned long, MemLabelId const&, AllocateOptions, char const*, int)
Thread 0x30ed53000 may have been prematurely finalized
#1 0x000001024f6c7b in malloc_internal(unsigned long, unsigned long, MemLabelId const&, AllocateOptions, char const*, int)
Thread 0x30ed53000 may have been prematurely finalized
#2 0x0000010049d036 in core::StringStorageDefault<char>::assign(char const*, unsigned long)
Thread 0x30ed53000 may have been prematurely finalized
#3 0x0000010049dcda in core::basic_string<char, core::StringStorageDefault<char> >::basic_string(core::basic_string<char, core::StringStorageDefault<char> > const&)
Thread 0x30ed53000 may have been prematurely finalized
#4 0x00000101397129 in CollabEntry::CollabEntry(CollabEntry const&)
Thread 0x30ed53000 may have been prematurely finalized
#5 0x0000010139953c in CollabChangeItem::CollabChangeItem(CollabChangeItem const&)
Thread 0x30ed53000 may have been prematurely finalized
#6 0x0000010140f61d in CollabUpdateAsyncJob::Exec_RemoveMatchingLocalChanges()
Thread 0x30ed53000 may have been prematurely finalized
#7 0x00000101412e60 in CollabUpdateAsyncJob::Execute()
Thread 0x30ed53000 may have been prematurely finalized
#8 0x000001013f8c1f in CollabJob::DoJobCaller(CollabJob*)
Thread 0x30ed53000 may have been prematurely finalized
#9 0x00000102d24840 in JobQueue::Exec(JobInfo*, long long, int)
Thread 0x30ed53000 may have been prematurely finalized
#10 0x00000102d24b8f in JobQueue::Steal(JobGroup*, JobInfo*, long long, int, bool)
Thread 0x30ed53000 may have been prematurely finalized
#11 0x00000102d24e56 in JobQueue::ExecuteJobFromQueue()
Thread 0x30ed53000 may have been prematurely finalized
#12 0x00000102d25168 in JobQueue::ProcessJobs(JobQueue::ThreadInfo*, void*)
Thread 0x30ed53000 may have been prematurely finalized
#13 0x00000102d240d9 in JobQueue::WorkLoop(void*)
Thread 0x30ed53000 may have been prematurely finalized
#14 0x000001032e137f in Thread::RunThreadWrapper(void*)
Thread 0x30ed53000 may have been prematurely finalized
#15 0x007fff2033e8fc in _pthread_start
Thread 0x30ed53000 may have been prematurely finalized
#16 0x007fff2033a443 in thread_start
Launching bug reporter
I am working on a console application that sends multiple requests to an API, making use of async, tasks and await. I am using a Stopwatch to show the time spent for each request/task, and I noticed that it starts very low (150 ms) and then grows by around ~100 ms for each subsequent task.
I think the tasks are running concurrently, because the program completes 83 requests/tasks in 8 seconds, and when I measure the GET request with Chrome it shows around 200 ms.
Do you know why the time increases as the tasks go on? Is there something wrong with the measuring, or with my code logic?
Isn't this supposed to be faster? From what I read, WhenAll should run the tasks concurrently, and the overall completion time should be the maximum task time from the list.
public async Task<List<CatalogEvent>> GetEventsAsync(int id)
{
sw.Restart();
var request = GetRequest(msCatalogEndpoint);
request.AddParameter("id", id, ParameterType.UrlSegment);
List<CatalogEvent> events = new List<CatalogEvent>();
var response = await client.ExecuteTaskAsync(request).ConfigureAwait(false);
var catalog = JsonConvert.DeserializeObject<CatalogEndpoint>(response.Content);
if (!(catalog.catalogEvents is null))
{
foreach (var ev in catalog.catalogEvents)
{
CatalogEvent catalogEvent = ev.Value;
catalogEvent.eventName = ev.Key.ToString();
catalogEvent.titleId = id;
DateTime dateTime = DateTime.UtcNow;
catalogEvent.date = dateTime.ToString();
events.Add(catalogEvent);
}
}
Console.WriteLine($"Task for TitleId: {id} took {sw.ElapsedMilliseconds} ms and was managed by Thread: {Thread.CurrentThread.ManagedThreadId}");
return events;
}
I am using RestSharp package to make the requests.
The main method is like this:
static void Main(string[] args)
{
//this list has 83 ids which I am getting from a database
List<int> ids = GetIds();
async Task ProcessEvents()
{
IEnumerable<Task<List<CatalogEvent>>> techBriefEvents = ids.Select(id => GetEventsAsync(id));
await Task.WhenAll(techBriefEvents);
}
Task.WhenAll(ProcessEvents());
Console.ReadKey();
}
This is the output:
Task for TitleId: 142 took 164 ms and was managed by Thread: 8
Task for TitleId: 16 took 349 ms and was managed by Thread: 5
Task for TitleId: 10 took 634 ms and was managed by Thread: 6
Task for TitleId: 215 took 650 ms and was managed by Thread: 5
Task for TitleId: 114 took 826 ms and was managed by Thread: 6
Task for TitleId: 214 took 843 ms and was managed by Thread: 5
Task for TitleId: 56 took 983 ms and was managed by Thread: 6
Task for TitleId: 212 took 1001 ms and was managed by Thread: 5
Task for TitleId: 168 took 1141 ms and was managed by Thread: 6
Task for TitleId: 21 took 1168 ms and was managed by Thread: 5
Task for TitleId: 26 took 1309 ms and was managed by Thread: 6
Task for TitleId: 30 took 1334 ms and was managed by Thread: 5
Task for TitleId: 213 took 1462 ms and was managed by Thread: 6
Task for TitleId: 24 took 1510 ms and was managed by Thread: 5
Task for TitleId: 29 took 1619 ms and was managed by Thread: 6
Task for TitleId: 23 took 1669 ms and was managed by Thread: 5
Task for TitleId: 31 took 1779 ms and was managed by Thread: 6
Task for TitleId: 14 took 1906 ms and was managed by Thread: 5
Task for TitleId: 18 took 1943 ms and was managed by Thread: 6
Task for TitleId: 20 took 2064 ms and was managed by Thread: 6
Task for TitleId: 19 took 2110 ms and was managed by Thread: 6
Task for TitleId: 175 took 2222 ms and was managed by Thread: 8
Task for TitleId: 15 took 2275 ms and was managed by Thread: 6
Task for TitleId: 102 took 2400 ms and was managed by Thread: 8
Task for TitleId: 33 took 2464 ms and was managed by Thread: 8
Task for TitleId: 135 took 2563 ms and was managed by Thread: 5
Task for TitleId: 5 took 2632 ms and was managed by Thread: 8
Task for TitleId: 137 took 2750 ms and was managed by Thread: 5
Task for TitleId: 12 took 2796 ms and was managed by Thread: 8
Task for TitleId: 41 took 2911 ms and was managed by Thread: 5
Task for TitleId: 136 took 2998 ms and was managed by Thread: 8
Task for TitleId: 43 took 3084 ms and was managed by Thread: 5
Task for TitleId: 139 took 3159 ms and was managed by Thread: 8
Task for TitleId: 51 took 3240 ms and was managed by Thread: 5
Task for TitleId: 42 took 3322 ms and was managed by Thread: 5
Task for TitleId: 39 took 3393 ms and was managed by Thread: 5
Task for TitleId: 44 took 3502 ms and was managed by Thread: 8
Task for TitleId: 122 took 3583 ms and was managed by Thread: 5
Task for TitleId: 36 took 3697 ms and was managed by Thread: 8
Task for TitleId: 95 took 3744 ms and was managed by Thread: 5
Task for TitleId: 67 took 3871 ms and was managed by Thread: 8
Task for TitleId: 229 took 3896 ms and was managed by Thread: 5
Task for TitleId: 226 took 4034 ms and was managed by Thread: 8
Task for TitleId: 108 took 4078 ms and was managed by Thread: 5
Task for TitleId: 123 took 4213 ms and was managed by Thread: 8
Task for TitleId: 143 took 4285 ms and was managed by Thread: 5
Task for TitleId: 236 took 4364 ms and was managed by Thread: 8
Task for TitleId: 228 took 4466 ms and was managed by Thread: 5
Task for TitleId: 232 took 4540 ms and was managed by Thread: 6
Task for TitleId: 230 took 4641 ms and was managed by Thread: 5
Task for TitleId: 149 took 4715 ms and was managed by Thread: 6
Task for TitleId: 176 took 4793 ms and was managed by Thread: 5
Task for TitleId: 208 took 4902 ms and was managed by Thread: 6
Task for TitleId: 155 took 4946 ms and was managed by Thread: 5
Task for TitleId: 61 took 5057 ms and was managed by Thread: 6
Task for TitleId: 190 took 5097 ms and was managed by Thread: 5
Task for TitleId: 93 took 5262 ms and was managed by Thread: 5
Task for TitleId: 194 took 5280 ms and was managed by Thread: 5
Task for TitleId: 156 took 5419 ms and was managed by Thread: 6
Task for TitleId: 101 took 5440 ms and was managed by Thread: 5
Task for TitleId: 193 took 5572 ms and was managed by Thread: 6
Task for TitleId: 167 took 5598 ms and was managed by Thread: 5
Task for TitleId: 197 took 5730 ms and was managed by Thread: 6
Task for TitleId: 111 took 5755 ms and was managed by Thread: 5
Task for TitleId: 216 took 5882 ms and was managed by Thread: 6
Task for TitleId: 60 took 5930 ms and was managed by Thread: 5
Task for TitleId: 9 took 6059 ms and was managed by Thread: 5
Task for TitleId: 152 took 6085 ms and was managed by Thread: 5
Task for TitleId: 169 took 6218 ms and was managed by Thread: 6
Task for TitleId: 154 took 6264 ms and was managed by Thread: 5
Task for TitleId: 7 took 6403 ms and was managed by Thread: 6
Task for TitleId: 141 took 6506 ms and was managed by Thread: 5
Task for TitleId: 58 took 6560 ms and was managed by Thread: 6
Task for TitleId: 172 took 6670 ms and was managed by Thread: 5
Task for TitleId: 11 took 6730 ms and was managed by Thread: 6
Task for TitleId: 17 took 6846 ms and was managed by Thread: 5
Task for TitleId: 55 took 6912 ms and was managed by Thread: 6
Task for TitleId: 166 took 7020 ms and was managed by Thread: 5
Task for TitleId: 140 took 7069 ms and was managed by Thread: 6
Task for TitleId: 110 took 7177 ms and was managed by Thread: 5
Task for TitleId: 90 took 7222 ms and was managed by Thread: 6
Task for TitleId: 160 took 7352 ms and was managed by Thread: 5
Task for TitleId: 97 took 7400 ms and was managed by Thread: 6
Task for TitleId: 200 took 7503 ms and was managed by Thread: 5
Task for TitleId: 153 took 7556 ms and was managed by Thread: 6
Task for TitleId: 207 took 7654 ms and was managed by Thread: 5
Task for TitleId: 161 took 7721 ms and was managed by Thread: 6
Task for TitleId: 231 took 7810 ms and was managed by Thread: 5
Task for TitleId: 202 took 7873 ms and was managed by Thread: 6
Task for TitleId: 220 took 8068 ms and was managed by Thread: 6
One obvious misconception is the "and was managed by Thread:" part. Unless ExecuteTaskAsync is very badly implemented, there is no thread dedicated to each request.
If the requests are being made to the same host, you might be running into service point manager limitations.
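If that's what's happening, the classic knob on .NET Framework (which RestSharp's default HTTP stack goes through) is ServicePointManager.DefaultConnectionLimit. A minimal sketch, to run once at startup; the value 50 here is just an illustration, not a recommendation:

```csharp
using System.Net;

static class HttpSetup
{
    // Call once at startup, before any requests are made.
    public static void Configure()
    {
        // The default is 2 concurrent connections per host for non-ASP.NET apps,
        // which would serialize most of the 83 "parallel" requests.
        ServicePointManager.DefaultConnectionLimit = 50;
    }
}
```

With the limit raised, requests to the same host are no longer queued behind the first two connections, which would flatten the steadily increasing timings.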
The code sample below
using System;
using System.Threading;
namespace TimerApp
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("***** Timer Application *****\n");
Console.WriteLine("In the thread #{0}", Thread.CurrentThread.ManagedThreadId);
// Create the delegate for the Timer type.
TimerCallback timerCB = new TimerCallback(ShowTime);
// Establish timer settings.
Timer t = new Timer(
timerCB, // The TimerCallback delegate object.
"Hello from Main()", // Any info to pass into the called method (null for no info).
0, // Amount of time to wait before starting (in milliseconds).
1000); // Interval of time between calls (in milliseconds).
Console.WriteLine("Hit key to terminate...");
Console.ReadLine();
}
// Method to show current time...
public static void ShowTime(object state)
{
Console.WriteLine("From the thread #{0}, it is background?{1}: time is {2}, param is {3}",
Thread.CurrentThread.ManagedThreadId,
Thread.CurrentThread.IsBackground,
DateTime.Now.ToLongTimeString(),
state.ToString());
}
}
}
produces the following output
***** Timer Application *****
In the thread #1
Hit key to terminate...
From the thread #4, it is background?True: time is 10:37:54 PM, param is Hello from Main()
From the thread #4, it is background?True: time is 10:37:55 PM, param is Hello from Main()
From the thread #5, it is background?True: time is 10:37:56 PM, param is Hello from Main()
From the thread #4, it is background?True: time is 10:37:57 PM, param is Hello from Main()
From the thread #5, it is background?True: time is 10:37:58 PM, param is Hello from Main()
From the thread #4, it is background?True: time is 10:37:59 PM, param is Hello from Main()
From the thread #5, it is background?True: time is 10:38:00 PM, param is Hello from Main()
...
Press any key to continue . . .
Does the System.Threading.Timer make callbacks using several threads at a time?
It makes use of the thread pool, picking the first thread it finds available at each time interval. The timer simply triggers the firing of these callbacks.
void Main()
{
System.Threading.Timer timer = new Timer((x) =>
{
Console.WriteLine($"{DateTime.Now.TimeOfDay} - Is Thread Pool Thread: {Thread.CurrentThread.IsThreadPoolThread} - Managed Thread Id: {Thread.CurrentThread.ManagedThreadId}");
Thread.Sleep(5000);
}, null, 1000, 1000);
Console.ReadLine();
}
Output
07:19:44.2628607 - Is Thread Pool Thread: True - Managed Thread Id: 10
07:19:45.2639080 - Is Thread Pool Thread: True - Managed Thread Id: 13
07:19:46.2644998 - Is Thread Pool Thread: True - Managed Thread Id: 9
07:19:47.2649563 - Is Thread Pool Thread: True - Managed Thread Id: 8
07:19:48.2660500 - Is Thread Pool Thread: True - Managed Thread Id: 12
07:19:49.2664012 - Is Thread Pool Thread: True - Managed Thread Id: 14
07:19:50.2669635 - Is Thread Pool Thread: True - Managed Thread Id: 15
07:19:51.2679269 - Is Thread Pool Thread: True - Managed Thread Id: 10
07:19:52.2684307 - Is Thread Pool Thread: True - Managed Thread Id: 9
07:19:53.2693090 - Is Thread Pool Thread: True - Managed Thread Id: 13
07:19:54.2839838 - Is Thread Pool Thread: True - Managed Thread Id: 8
07:19:55.2844800 - Is Thread Pool Thread: True - Managed Thread Id: 12
07:19:56.2854568 - Is Thread Pool Thread: True - Managed Thread Id: 15
In the code above we make each callback wait 5 seconds, so after printing to the console the thread is kept alive for an additional 5 seconds before completing execution and returning to the thread pool.
The timer carries on firing each second regardless; it does not wait for the thread it triggered to complete.
I have the following Scenario.
I take 50 jobs from the database into a blocking collection.
Each job is a long running one. (potentially could be). So I want to run them in a separate thread. (I know - it may be better to run them as Task.WhenAll and let the TPL figure it out - but I want to control how many runs simultaneously)
Say I want to run 5 of them simultaneously (configurable)
I create 5 tasks (TPL), one for each job and run them in parallel.
What I want to do is to pick up the next Job in the blocking collection as soon as one of the jobs from step 4 is complete and keep going until all 50 are done.
I am thinking of creating a static BlockingCollection and a TaskCompletionSource which will be invoked when a job is complete, so it can call the consumer again to pick one job at a time from the queue. I would also like to use async/await on each job - but that's on top of this - I'm not sure if it has an impact on the approach.
Is this the right way to accomplish what I'm trying to do?
Similar to this link, but the catch is that I want to process the next job as soon as one of the first N items is done, not after all N are done.
Update :
OK, I have this code snippet doing exactly what I want, if someone wants to use it later. As you can see below, 5 threads are created and each thread starts the next job when it is done with the current one. Only 5 threads are active at any given time. I understand this may not always work 100% like this, and there will be context-switching overhead if it's used on a single CPU/core.
var block = new ActionBlock<Job>(
job => Handler.HandleJob(job),
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 5 });
foreach (Job j in GetJobs())
block.SendAsync(j);
Job 2 started on thread :13. wait time:3600000ms. Time:8/29/2014
3:14:43 PM
Job 4 started on thread :14. wait time:15000ms. Time:8/29/2014
3:14:43 PM
Job 0 started on thread :7. wait time:600000ms. Time:8/29/2014
3:14:43 PM
Job 1 started on thread :12. wait time:900000ms. Time:8/29/2014
3:14:43 PM
Job 3 started on thread :11. wait time:120000ms. Time:8/29/2014
3:14:43 PM
job 4 finished on thread :14. 8/29/2014 3:14:58 PM
Job 5 started on thread :14. wait time:1800000ms. Time:8/29/2014
3:14:58 PM
job 3 finished on thread :11. 8/29/2014 3:16:43 PM
Job 6 started on thread :11. wait time:1200000ms. Time:8/29/2014
3:16:43 PM
job 0 finished on thread :7. 8/29/2014 3:24:43 PM
Job 7 started on thread :7. wait time:30000ms. Time:8/29/2014 3:24:43
PM
job 7 finished on thread :7. 8/29/2014 3:25:13 PM
Job 8 started on thread :7. wait time:100000ms. Time:8/29/2014
3:25:13 PM
job 8 finished on thread :7. 8/29/2014 3:26:53 PM
Job 9 started on thread :7. wait time:900000ms. Time:8/29/2014
3:26:53 PM
job 1 finished on thread :12. 8/29/2014 3:29:43 PM
Job 10 started on thread :12. wait time:300000ms. Time:8/29/2014
3:29:43 PM
job 10 finished on thread :12. 8/29/2014 3:34:43 PM
Job 11 started on thread :12. wait time:600000ms. Time:8/29/2014
3:34:43 PM
job 6 finished on thread :11. 8/29/2014 3:36:43 PM
Job 12 started on thread :11. wait time:300000ms. Time:8/29/2014
3:36:43 PM
job 12 finished on thread :11. 8/29/2014 3:41:43 PM
Job 13 started on thread :11. wait time:100000ms. Time:8/29/2014
3:41:43 PM
job 9 finished on thread :7. 8/29/2014 3:41:53 PM
Job 14 started on thread :7. wait time:300000ms. Time:8/29/2014
3:41:53 PM
job 13 finished on thread :11. 8/29/2014 3:43:23 PM
job 11 finished on thread :12. 8/29/2014 3:44:43 PM
job 5 finished on thread :14. 8/29/2014 3:44:58 PM
job 14 finished on thread :7. 8/29/2014 3:46:53 PM
job 2 finished on thread :13. 8/29/2014 4:14:43 PM
You can easily achieve what you need using TPL Dataflow.
What you can do is use a BufferBlock<T>, which is a buffer for storing your data, and link it to an ActionBlock<T> which will consume those requests as they come in from the BufferBlock<T>.
Now, the beauty here is that you can specify how many requests you want the ActionBlock<T> to handle concurrently using the ExecutionDataflowBlockOptions class.
Here's a simplified console version which processes a bunch of numbers as they come in, printing each number and the Thread.CurrentThread.ManagedThreadId that handled it:
private static void Main(string[] args)
{
var bufferBlock = new BufferBlock<int>();
var actionBlock =
new ActionBlock<int>(i => Console.WriteLine("Reading number {0} in thread {1}",
i, Thread.CurrentThread.ManagedThreadId),
new ExecutionDataflowBlockOptions
{MaxDegreeOfParallelism = 5});
bufferBlock.LinkTo(actionBlock);
Produce(bufferBlock);
Console.ReadKey();
}
private static void Produce(BufferBlock<int> bufferBlock)
{
foreach (var num in Enumerable.Range(0, 500))
{
bufferBlock.Post(num);
}
}
You can also post them asynchronously if needed, using the awaitable BufferBlock<T>.SendAsync method.
That way, you let the TPL handle all the throttling for you without needing to do it manually.
You can use a BlockingCollection and it will work just fine, but it was built before async-await, so it blocks synchronously, which can be less scalable in most cases.
You're better off using the async-ready TPL Dataflow library, as Yuval Itzchakov suggested. All you need is an ActionBlock that processes each item concurrently with a MaxDegreeOfParallelism of 5; you post your work to it synchronously (block.Post(item)) or asynchronously (await block.SendAsync(item)):
private static void Main()
{
var block = new ActionBlock<Job>(
async job => await job.ProcessAsync(),
new ExecutionDataflowBlockOptions {MaxDegreeOfParallelism = 5});
for (var i = 0; i < 50; i++)
{
block.Post(new Job());
}
Console.ReadKey();
}
You could do this with a SemaphoreSlim like in this answer, or using ForEachAsync like in this answer.
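For completeness, here is a hedged sketch of the SemaphoreSlim variant (the processAsync delegate stands in for your job's ProcessAsync): the semaphore caps concurrency at maxConcurrency, and each finished job releases its slot so the next queued job can start immediately:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static class Throttler
{
    public static async Task RunThrottledAsync<T>(
        IEnumerable<T> items, Func<T, Task> processAsync, int maxConcurrency)
    {
        using (var semaphore = new SemaphoreSlim(maxConcurrency))
        {
            var tasks = items.Select(async item =>
            {
                await semaphore.WaitAsync();          // wait for a free slot
                try { await processAsync(item); }
                finally { semaphore.Release(); }      // hand the slot to the next job
            }).ToList();                              // materialize so all wrappers start
            await Task.WhenAll(tasks);
        }
    }
}
```

Usage would be something like `await Throttler.RunThrottledAsync(jobs, j => j.ProcessAsync(), 5);` - as soon as one of the first 5 jobs finishes, the sixth starts.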