I have an Azure Function in which I use a Service Bus processor to receive messages from a Service Bus topic while the function is running (please note this is not the trigger).
Based on the message received from the topic, I set a CancellationToken.
I was able to move all the Service Bus related code into a separate service, injecting the ServiceBusClient in Program.cs:
builder.Services.AddAzureClients(clientBuilder =>
{
clientBuilder.AddServiceBusClientWithNamespace(serviceBusOptions.ServiceBusUri)
.WithCredential(new DefaultAzureCredential())
.ConfigureOptions(options =>
{
//
});
});
But I am not sure whether I can also inject the ServiceBusProcessor, or whether I can move this logic into a service. According to the documentation, the processor is recommended to be cached.
Current code (inside the function):
processor = queueService.GetProcessor(subscriptionName);
processor.ProcessMessageAsync += async (ProcessSessionMessageEventArgs args) => { ... };
processor.ProcessErrorAsync += Processor_ProcessErrorAsync;
await processor.StartProcessingAsync();
I want to avoid keeping this code inside the main function class, is there any way to inject it or better write the same?
Edit: The function runs some SQL stored procedures; at times a procedure might take a while, and the user may request cancellation. Since there is no other way to stop a running instance of the function, I went with this approach. Please suggest if there is a better way to stop a running function instance.
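For illustration, here is a minimal sketch of one way to cache the processor in a singleton service and inject it. The ITopicListener abstraction and the topic/subscription names are my own placeholders, not from the SDK, and this uses the non-session processor (the question's code uses session args):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public interface ITopicListener : IAsyncDisposable
{
    Task StartAsync(Func<ProcessMessageEventArgs, Task> onMessage);
}

public sealed class TopicListener : ITopicListener
{
    private readonly ServiceBusProcessor _processor;

    public TopicListener(ServiceBusClient client)
    {
        // The processor is created once and cached for the service lifetime,
        // as the SDK docs recommend. Names below are placeholders.
        _processor = client.CreateProcessor("my-topic", "my-subscription");
    }

    public async Task StartAsync(Func<ProcessMessageEventArgs, Task> onMessage)
    {
        _processor.ProcessMessageAsync += onMessage;
        _processor.ProcessErrorAsync += args => Task.CompletedTask; // log errors here
        await _processor.StartProcessingAsync();
    }

    public async ValueTask DisposeAsync() => await _processor.DisposeAsync();
}

// Registration in Program.cs, next to the AddAzureClients call:
// builder.Services.AddSingleton<ITopicListener, TopicListener>();
```

The ServiceBusClient is resolved from the AddAzureClients registration shown above; the function class then only depends on ITopicListener.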
As someone already pointed out here, I have a similar feeling about this approach.
The issue is with running a ServiceBusProcessor within a function. While it is technically possible, the processor is not designed for short-term execution: a ServiceBusProcessor is in its own right a message pump, and so is an Azure Function. Perhaps a function-based implementation is not the right approach for this scenario.
Related
In my multi-tenant application I have a background process that runs in a webjob and can take several minutes. The time varies according to each customer's data.
But sometimes, when I'm testing something, I start the process and (looking at the logs) soon I realize something is wrong and I want to cancel that specific run.
I cannot just kill all the messages in the queue or stop the WebJob, because I'd be killing the processes that are running for the other customers.
And I want to do it programmatically so I can put a Cancel button in my web application.
I was not able to find the best architecture approach (or a pattern) to work with this kind of execution cancellation.
I read about passing a CancellationTokenSource, but I couldn't work out how I would call the Cancel() method on the specific run that I want to cancel. Should I store all currently running tokens in a static class, and then send another message to the webjob telling it that I want to cancel?
(I think that might be the answer, but I'm afraid I'm overthinking. That's why I'm asking it here.)
My Function is as simple as:
public static void EngineProcessQueue([QueueTrigger("job-for-process")] string message, TextWriter log)
{
// Inside this method there is a huge codebase
// and I'm afraid that I'll have to put "if (token.IsCancellationRequested)" in lots of places...
// (but that's another question)
ProcessQueueMessage(message, log);
}
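The "store all currently running tokens in a static class" idea above can be sketched as a small registry keyed by run ID. All names here are illustrative, and note the caveat: this only works if the cancel request reaches the same process that is running the job.

```csharp
using System.Collections.Concurrent;
using System.Threading;

public static class RunRegistry
{
    private static readonly ConcurrentDictionary<string, CancellationTokenSource> _runs =
        new ConcurrentDictionary<string, CancellationTokenSource>();

    // Called when a run starts; the returned token is passed down into the work.
    public static CancellationToken Register(string runId)
    {
        var cts = new CancellationTokenSource();
        _runs[runId] = cts;
        return cts.Token;
    }

    // Called when a "cancel" message (or the Cancel button's API call) arrives.
    public static bool Cancel(string runId)
    {
        if (_runs.TryRemove(runId, out var cts))
        {
            cts.Cancel();
            cts.Dispose();
            return true;
        }
        return false; // unknown run, or it already finished
    }

    // Called when a run completes normally, to avoid leaking sources.
    public static void Complete(string runId)
    {
        if (_runs.TryRemove(runId, out var cts)) cts.Dispose();
    }
}
```

The long-running work then checks token.IsCancellationRequested (or calls token.ThrowIfCancellationRequested()) at safe points.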
QueueTrigger is essentially a function trigger, and the kind of cancel you want is not supported.
Once execution of the function method has started, the business logic may include asynchronous operations. Even if we deleted or stopped the QueueTrigger at that point, business data would be affected and a rollback could not be achieved.
The following is my personal suggestion,
because I think the cancel operation can be handled in the business logic:
Use a Redis cache and create an object named mypools to store your business commands.
When the webjob runs, we can enumerate all queues (also visible in Azure Storage Explorer) and record each one in mypools with a special command.
The command format could be ClientName-TriggerName-Status-Extend, such as Acompany-jobforprocess-run-null; while that command has not yet been executed, we can change it to Acompany-jobforprocess-cancel-null to request cancellation.
We can set the Azure WebJob queue name at runtime and then handle the business logic dynamically in the program. For work that has already executed, perform a data rollback.
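As a hedged sketch of the suggestion above using StackExchange.Redis: the mypools name and the ClientName-TriggerName command format follow the answer, everything else is illustrative. The job checks for a "cancel" command between steps and rolls back if it finds one.

```csharp
using StackExchange.Redis;

public class CancelFlag
{
    private readonly IDatabase _db;

    public CancelFlag(IConnectionMultiplexer mux) => _db = mux.GetDatabase();

    // Record that a run has started for this client/trigger pair.
    public void MarkRunning(string client, string trigger) =>
        _db.HashSet("mypools", $"{client}-{trigger}", "run");

    // The Cancel button in the web application flips the command.
    public void RequestCancel(string client, string trigger) =>
        _db.HashSet("mypools", $"{client}-{trigger}", "cancel");

    // The long-running webjob calls this between steps; if true,
    // it stops and performs its data rollback.
    public bool IsCancelRequested(string client, string trigger) =>
        _db.HashGet("mypools", $"{client}-{trigger}") == "cancel";
}
```

This only coordinates intent; the job itself still has to poll the flag at points where stopping (and rolling back) is safe.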
I am working on a project with the following details:
No IIS or other 3rd party webservers involved
.NET 4.5 Windows application, which acts as a "server"
The server starts multiple WCF webservices, all of them self-hosted
The webservices are accessible to multiple users
Here is a minimal example of one of the many webservice methods:
public async Task<int> Count()
{
int result = 0;
//There is nothing to await here, so I use Task.Run
await Task.Run(() =>
{
using (IDB ctx = new DB())
{
result = ctx.Customers.All.Count();
}
//Here could happen a lot more
//System.Threading.Thread.Sleep(10000);
}).ConfigureAwait(false);
return result;
}
As you can see, I am using Task.Run to access some data, because none of the repository interfaces offers async methods; there is nothing I can await. If I wanted to do "real async", I would have to rewrite the complete repository interface.
If I did not use Task.Run, the server would block all other incoming requests.
My 2 questions are:
Is there anything wrong using Task.Run in this scenario?
Even if it is working and maybe not completely wrong, is there a better, more professional solution to call synchronous code in an async method?
The initial reason for this question is that I read that using Task.Run in an async method is "fake async". (I think Task.Run starts a new thread, while "real async" code does not.)
I answered my own question, see answer below. I hope it can help others.
Yes, it is fake async and less scalable: it starts a thread and blocks it, and the thread is not given back until the work is finished.
However,
as Stephen Cleary alludes to in his Task.Run Etiquette and Proper Usage
I call such methods “fake-asynchronous methods” because they look
asynchronous but are really just faking it by doing synchronous work
on a background thread. In general, do not use Task.Run in the
implementation of the method; instead, use Task.Run to call the
method. There are two reasons for this guideline:
Consumers of your code assume that if a method has an asynchronous signature, then it will act truly asynchronously. Faking
asynchronicity by just doing synchronous work on a background thread
is surprising behavior.
If your code is ever used on ASP.NET, a fake-asynchronous method leads developers down the wrong path. The goal of async on the server
side is scalability, and fake-asynchronous methods are less scalable
than just using synchronous methods.
Also Stephen Toub (aka Mr. Parallel) Should I expose asynchronous wrappers for synchronous methods?
The idea of exposing “async over sync” wrappers is also a very
slippery slope, which taken to the extreme could result in every
single method being exposed in both synchronous and asynchronous
forms. Many of the folks that ask me about this practice are
considering exposing async wrappers for long-running CPU-bound
operations. The intention is a good one: help with responsiveness.
But as called out, responsiveness can easily be achieved by the
consumer of the API, and the consumer can actually do so at the right
level of chunkiness, rather than for each chatty individual operation.
Further, defining what operations could be long-running is
surprisingly difficult. The time complexity of many methods often
varies significantly.
However, you don't really fit into either of these categories. From your description you are hosting this WCF service yourself. WCF will run your code concurrently anyway if you have set the InstanceContextMode and ConcurrencyMode correctly. You will additionally be able to use the task-based async wrappers for your calls from the client, assuming you generated your proxies with the appropriate settings.
If I understand you correctly, you could just make this method entirely synchronous, and let WCF take care of the details and save resources.
Update
An example: If I use Task.Run inside any webservice methode, I can
even call Thread.Sleep(10000) inside Task.Run and the server stays
responsive to any incoming traffic.
I think the following might help you the most
Sessions, Instancing, and Concurrency
A session is a correlation of all messages sent between two endpoints.
Instancing refers to controlling the lifetime of user-defined service
objects and their related InstanceContext objects. Concurrency is the
term given to the control of the number of threads executing in an
InstanceContext at the same time.
It seems like your WCF service is set up for InstanceContextMode.PerSession and ConcurrencyMode.Single. If your service is stateless, you probably want to use InstanceContextMode.PerCall, and only use async when you have something that can truly be awaited.
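For illustration, here is what those settings look like as attributes on the service. The CustomerService/ICustomerService names are placeholders, and IDB/DB are the question's own types:

```csharp
using System.Linq;
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    int Count();
}

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerCall,  // fresh instance per request
    ConcurrencyMode = ConcurrencyMode.Multiple)]        // allow parallel dispatch
public class CustomerService : ICustomerService
{
    public int Count()
    {
        // Plain synchronous data access, as in the question's repository;
        // WCF handles the concurrency, so no Task.Run is needed.
        using (IDB ctx = new DB())
        {
            return ctx.Customers.All.Count();
        }
    }
}
```

With PerCall instancing there is no shared state per session, so Multiple concurrency is safe as long as the repository itself is thread-safe to construct per call.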
First of all: Thank all of you for your hints. I needed them to dive deeper into the problem.
I have found the real solution to this problem and I think, I could add some value to the community by answering my own question in detail.
The solution can also be found in this great article: https://www.oreilly.com/library/view/learning-wcf/9780596101626/ch04s04.html
Here is a quick summary of the initial problem and the solution:
The goal
My goal is to host multiple self-hosted WCF services in a .NET 4.5 application
All self-hosted WCF services are accessible to multiple clients
All self-hosted WCF services MUST NOT block each other when multiple users are using them
The problem (and the false solution) (my initial question)
My problem was, that whenever one client used a webservice, it would block the other webservices until it returned to the client
It did not matter what kind of InstanceContextMode or ConcurrencyMode I used
My false solution was to use async and Task.Run ("Fake Async"). It worked, but it was not the real solution.
The solution (found in the article)
When self-hosting WCF webservices, you MUST make sure that you always call ServiceHost.Open on a separate thread, different from the UI thread
Whenever you open a ServiceHost in a Console, WinForms or WPF application, or a Windows Service, you have to be aware of when you call ServiceHost.Open and how you use ServiceBehaviorAttribute.UseSynchronizationContext
The default value for ServiceBehaviorAttribute.UseSynchronizationContext is True. (this is bad and leads to blocking!)
If you just call ServiceHost.Open without setting UseSynchronizationContext = false, all ServiceHosts will run on the UI thread, blocking that thread and each other.
Solution 1 (tested and it works - no more blocking)
Set ServiceBehaviorAttribute.UseSynchronizationContext = false
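As a sketch, Solution 1 is a single attribute on the service class (MyService/IMyService are placeholder names):

```csharp
using System.ServiceModel;

// Opt out of the UI synchronization context, so calls are not
// marshaled onto the UI thread that opened the ServiceHost.
[ServiceBehavior(UseSynchronizationContext = false)]
public class MyService : IMyService
{
    // service methods unchanged
}
```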
Solution 2 (tested and it works - no more blocking)
Do NOT touch ServiceBehaviorAttribute.UseSynchronizationContext, just let it be true
But create at least one or multiple threads in which you call ServiceHost.Open
Code:
private List<ServiceHost> _ServiceHosts = new List<ServiceHost>();
private List<Thread> _Threads = new List<Thread>();
foreach (ServiceHost host in _ServiceHosts)
{
_Threads.Add(new Thread(() => { host.Open(); }));
_Threads[_Threads.Count - 1].IsBackground = true;
_Threads[_Threads.Count - 1].Start();
}
Solution 3 (not tested, but mentioned in the article)
Do NOT touch ServiceBehaviorAttribute.UseSynchronizationContext, just let it be true
But make sure, that you call ServiceHost.Open BEFORE the UI thread is created
Then the ServiceHosts will use a different thread and not block the UI thread
I hope this can help others with the same problem.
Here is my problem: I have a WCF project, which doesn't really matter much in fact, because I believe this is more about C#/.NET. In my WCF service, when a client requests one of the methods, I validate the input, and if validation succeeds I start some business logic calculations. I want to start this logic in another thread/task, so that right after the input validation I can immediately return a response. It's something like this:
XXXX MyMethod(MyArgument arg)
{
var validation = _validator.Validate(arg);
if (validation.Succeed)
{
Task.Run(() => businessLogic());
}
return new MyResponseModel();
}
I need to make it like this because my businessLogic can involve long-running calculations and database saves at the end, but the client requesting the service has to know immediately whether the model is correct.
In the businessLogic calculations/saves that run on the background thread, I have to catch exceptions if something fails and save them in the database. (It's a fairly big piece of logic, so many exceptions can be thrown; for example, after the calculations I persist the object in the database, so a save error can be thrown if the database is offline.)
How do I correctly implement this, and what should I use for such requirements? Is using Task.Run and invoking all the logic inside it a good practice?
You can do it like this.
Be aware, though, that worker processes can exit at any time. In that case outstanding work will simply be lost. Maybe you should queue the work to a message queue instead.
Also, if the task "crashes" you will not be notified in any way. Implement your own error logging.
Also, there is no limit to the number of tasks that you can spawn like this. If processing is too slow more and more work will queue up. This might not at all be a problem if you know that the server will not be overloaded.
It was suggested that Task.Run will use threads and therefore not scale. This is not necessarily so. Usually, the bottleneck of any processing is not the number of threads but the backend resources being used (database, disk, services, ...). Even using hundreds of threads is not in any way likely to be a bottleneck. Async IO is not a way around backend resource constraints.
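One way to address the error-logging caveat above is a small helper that wraps Task.Run and funnels any exception into your own logging; the names here are illustrative, and the logger callback stands in for whatever persistence you use.

```csharp
using System;
using System.Threading.Tasks;

public static class FireAndForget
{
    // Runs work on the thread pool; any exception is caught and
    // handed to logError instead of being silently lost.
    public static void Run(Action work, Action<Exception> logError)
    {
        Task.Run(() =>
        {
            try
            {
                work();
            }
            catch (Exception ex)
            {
                logError(ex); // e.g. save to your database error table
            }
        });
    }
}

// Usage inside MyMethod, after validation succeeds:
// FireAndForget.Run(() => businessLogic(), ex => SaveErrorToDatabase(ex));
```

This does not change the other caveats: work is still lost if the process exits, and there is still no backpressure if tasks queue up faster than they complete.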
In my WCF operations I will do the logic necessary for the operation: save a record, get a dataset, etc. and in some cases I need to log the activity as well. However, in these cases I feel that there is no point in having the client application waiting for the WCF operation to log the activity. I would like to fire off the logging process and then immediately return whatever necessary to the client without waiting for the logging process to complete.
I do not care to know when the logging process is complete, just fire and forget.
I also prefer to use BasicHttpBinding to maintain maximum interoperability.
Is this possible? Would anyone care sharing coding samples or links to sites with coding examples?
This can be accomplished pretty easily using any number of threading techniques.
For a very simple example, try modifying this:
// Log something going on.
System.Threading.ThreadPool.QueueUserWorkItem((args) =>
{
System.Diagnostics.EventLog.WriteEntry("my source", "my logging message");
});
Inside that lambda method you can use whatever logging class you prefer, and you can include local variables to the logger if you want to log some current state.
I need a console app that calls a web method.
It must be asynchronous and without a timeout (we don't know how much time the method takes to complete its task).
Is this a good way:
[WebMethod]
[SoapDocumentMethod(OneWay = true)]
??
Don't use one-way if you need results
First, if you need a response from your method, you don't want [SoapDocumentMethod(OneWay = true)]. This attribute creates a "fire and forget" call which never returns a response back to the caller and must return void. Instead, use a regular method call and call it asynchronously.
One method or two?
If you're using ASMX, there are two basic solutions: one method with a very long timeout, or two methods (as #Aaronaught suggested above): one to kick off the operation and return an ID of the operation, and another to pass in the ID and retrieve results (if available).
Personally, I would not recommend this two-method approach in most cases because of the additional complexity involved, including:
client and server code needs to be changed to support 2-step invocation
ASP.NET intrinsic objects like Request and Response are not available when called from a background task launched with ThreadPool.QueueUserWorkItem.
throttling on a busy server gets much harder if there are multiple threads involved with each request.
the server must hang onto the results until the client picks them up (or you decide to throw them out), which may eat up RAM if results are large.
you can't stream large, intermediate results back to the client
True, in some scenarios the 2-method approach may scale better and will be more resilient to broken network connections between client and server. If you need to pick up results hours later, this is something to consider. But if your operations only take a few minutes and you can guarantee the client will stay connected, then given the additional dev complexity of the 2-method approach I'd consider it a last resort, to be used only if the one-method solution doesn't match your needs.
Anyway, the solution requires two pieces. First, you need to call the method asynchronously from the client. Second, you need to lengthen timeouts on both client and server. I cover both below.
Calling ASMX Web Services Asynchronously
For calling an ASMX web service asynchronously from a command-line app, take a look at this article starting with page 2. It shows how to call a web service asynchronously from a .NET client app using the newer Event-Based Async Pattern. Note that the older .NET 1.0 approach described there, which relies on BeginXXX/EndXXX methods on the proxy, is not recommended anymore since Visual Studio's proxy generator doesn't create those methods. It is better to use the event-based pattern as linked above.
Here's an excerpt/adaptation from the article above, so you can get an idea of the code involved:
void KickOffAsyncWebServiceCall(object sender, EventArgs e)
{
HelloService service = new HelloService();
//Hookup async event handler
service.HelloWorldCompleted += new
HelloWorldCompletedEventHandler(this.HelloWorldCompleted);
service.HelloWorldAsync();
}
void HelloWorldCompleted(object sender,
HelloWorldCompletedEventArgs args)
{
//Display the return value
Console.WriteLine (args.Result);
}
Lengthen server and client timeouts
To prevent timeouts, http://www.dotnetmonster.com/Uwe/Forum.aspx/asp-net-web-services/5202/Web-Method-TimeOut has a good summary of how to adjust both client and server timeouts. You didn't specify in your question if you own the server-side method or just the client-side call, so the excerpt below covers both cases:
There are two settings that affect webservice call timeout behavior:

** The ASP.NET webservice's server-side httpRuntime timeout setting; this is configured through the following element:
httpRuntime Element (ASP.NET Settings Schema)
http://msdn2.microsoft.com/en-us/library/e1f13641.aspx

<configuration>
  <system.web>
    <httpRuntime .............
      executionTimeout="45"
      .............../>
  </system.web>
</configuration>

Also, make sure that you've set <compilation debug="false" /> so as to make the timeout work correctly.

** If you're using the wsdl.exe or VS IDE "Add Web Reference" generated proxy to call webservice methods, there is also a timeout setting on the client proxy class (derived from the SoapHttpClientProtocol class). This is the "Timeout" property inherited from the "WebClientProtocol" class:
WebClientProtocol.Timeout Property http://msdn2.microsoft.com/en-us/library/system.web.services.protocols.webclientprotocol.timeout.aspx

Therefore, you can consider adjusting these two values according to your application's scenario. Here is a former thread that also mentioned this:
http://groups.google.com/group/microsoft.public.dotnet.framework.webservices/browse_thread/thread/73548848d0544bc9/bbf6737586ca3901
Note that I'd strongly recommend making your timeouts long enough to encompass your longest operation (plus enough buffer to be safe should things get slower), but I wouldn't recommend turning off timeouts altogether. It's generally bad programming practice to allow unlimited timeouts, since an errant client or server can permanently tie up the other. Instead, just make timeouts very long, and make sure to log instances where your clients or servers time out, so you can detect and diagnose the problem when it happens!
Finally, to echo the commenters above: for new code it's best to use WCF. But if you're stuck using ASMX web services, the above solution should work.
If the method is actually one-way, and you don't care about the result or ever need to follow up on the status of your request, then that is good enough.
If you do need a result (eventually), or need to check on the status of the operation, then this won't work very well. What your method should do in that case is start the work in a background thread, then immediately return an ID that can be used in a different web method to look up the status.
So something like this:
public enum JobStatus { Running, Completed, Failed };
public class MyService : WebService
{
[WebMethod]
public int BeginJob()
{
int id = GetJobID();
// Save to a database or persistent data source
SaveJobStatus(id, JobStatus.Running);
ThreadPool.QueueUserWorkItem(s =>
{
// Do the work here
SaveJobStatus(id, JobStatus.Completed);
});
return id;
}
[WebMethod]
public JobStatus GetJobStatus(int id)
{
// Load the status from database or other persistent data source
return ( ... )
}
}
That's one method to start the work, and another method to check on its status. It's up to the client to poll periodically. It's not a very good system, but you don't have a lot of options with ASMX.
Of course, if you do need a response from this operation, a much better way is to use WCF instead. WCF gives you callback contracts, which you can use to begin a one-way operation and subscribe to a notification when that operation is complete, which eliminates the need for polling above.
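For illustration, the callback-contract idea can be sketched like this; all names are placeholders, and a duplex-capable binding (e.g. netTcpBinding or wsDualHttpBinding) is assumed:

```csharp
using System.ServiceModel;

// Contract implemented by the *client*; the service calls back on it.
public interface IJobCallback
{
    [OperationContract(IsOneWay = true)]
    void JobCompleted(int jobId, bool succeeded);
}

[ServiceContract(CallbackContract = typeof(IJobCallback))]
public interface IJobService
{
    [OperationContract(IsOneWay = true)]
    void BeginJob(int jobId);
}

public class JobService : IJobService
{
    public void BeginJob(int jobId)
    {
        // Capture the client's callback channel for this call.
        var callback = OperationContext.Current.GetCallbackChannel<IJobCallback>();

        // ... do the long-running work here, then notify the client:
        callback.JobCompleted(jobId, succeeded: true);
    }
}
```

The client supplies its IJobCallback implementation via a DuplexChannelFactory (or a duplex-generated proxy), so no polling method is needed.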
So, to summarize all that:
If you don't need any response or status updates, just use IsOneWay = true.
If you do need updates, and can use WCF on the service side, use that with a callback contract. You should be using WCF for new Web Service projects anyway.
If you need updates and cannot use WCF, do the work in a background thread and implement a periodic polling system with an additional status-check web method.