I'm new to Azure Functions and I couldn't find a good explanation of output bindings.
For example, if I want to upload a blob to Azure Storage, when is it recommended to use an output binding and when to upload manually (what are the advantages/disadvantages of each)?
And what is the difference between an output binding declared as a parameter of the Run function versus as a return attribute?
Parameter:
[FunctionName("MyFunction")]
public static void Run([ServiceBusTrigger("myqueue")] Message message,
[Blob("output-container/{name}", FileAccess.Write)] Stream stream)
{ }
Attribute:
[FunctionName("MyFunction")]
[return: Blob("output-container/{name}")]
public static string Run([ServiceBusTrigger("myqueue")] Message message, ILogger log)
{
    return message.MessageId; // the returned string is written to the blob
}
When is it recommended to use an output binding and when to upload manually (what are the advantages/disadvantages of each)?
Here is an overview of when to use output bindings and of what each service offers when you use them.
As we know, bindings cannot exist without an Azure Functions trigger, and two types of bindings are available: input and output.
Output binding: data sent by your function is an example of an output binding.
Let's discuss the 2 ways of using output binding:
1) Queue data stored to a Blob container, with the Blob container acting as the output binding
Scenario:
In an Azure Functions (EF Core) HTTP trigger context, a Queue is used as an output binding: once the data has been saved to the database, it is also written to Queue storage.
Second, when data is added to the Queue store, a new function is triggered. Here the queue serves as the input binding, and the queue data is saved to the Blob container as the output binding, by adding a Blob attribute to the queue-triggered function. A sketch of this second function is below.
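This is a minimal sketch, assuming a queue named "items-queue" and a container named "items-container" (both hypothetical names):
[FunctionName("QueueToBlob")]
public static void Run(
    [QueueTrigger("items-queue")] string queueItem,                    // queue as input binding (trigger)
    [Blob("items-container/{rand-guid}.txt")] out string blobContent,  // blob container as output binding
    ILogger log)
{
    // The queue message is persisted to the Blob container by the output binding.
    blobContent = queueItem;
    log.LogInformation($"Saved queue item to blob: {queueItem}");
}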
2) Service Bus queue acting as the output binding
Scenario:
First, when the HTTP request fires, the HTTP request body is sent to a Service Bus queue using an output binding (a sketch follows below).
Second, after the message arrives on the Service Bus queue, a queue-triggered function runs and the data is logged in the application.
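Here's a minimal sketch of that first step, assuming a queue named "orders" and a connection setting named "ServiceBusConnection" (both hypothetical names):
[FunctionName("HttpToServiceBus")]
[return: ServiceBus("orders", Connection = "ServiceBusConnection")]
public static async Task<string> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    // The request body is returned, and the return binding sends it to the queue.
    string body = await new StreamReader(req.Body).ReadToEndAsync();
    log.LogInformation("Forwarding HTTP body to the Service Bus queue.");
    return body;
}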
There is a blog article by Ashish Patel that covers the use cases and features of each service in detail.
A Storage queue is the ideal choice if you want to get code running as quickly as possible or if you want to deliver each message to just one recipient/destination. Otherwise, Service Bus queues offer a lot more flexibility and capabilities.
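As for manual uploads versus output bindings: the binding removes the client-management boilerplate, while calling the SDK yourself gives full control over metadata, access tiers, retries, and error handling. A minimal sketch of the manual approach with the Azure.Storage.Blobs SDK (the container and setting names are illustrative):
[FunctionName("ManualUpload")]
public static async Task Run(
    [ServiceBusTrigger("myqueue")] Message message, ILogger log)
{
    // Create and manage the blob client yourself instead of letting the binding do it.
    var blobClient = new BlobClient(
        Environment.GetEnvironmentVariable("AzureWebJobsStorage"),
        "output-container",
        $"{message.MessageId}.txt");

    await blobClient.UploadAsync(new BinaryData(message.Body));
}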
I'm coding a C# function which is triggered by the upload of a blob. I would like to read another file in the container. How would an input binding bring in the second blob?
public static async Task Run([BlobTrigger("csv/{name}.csv", Connection = "StorageConnectionAppSetting")]Stream myBlob, string name, ILogger log)
In addition to this question, how can I reference values from local.settings.json in my code? I'm able to reference "StorageConnectionAppSetting" in the input binding, but I'm not sure how to do the same in the parts of my code where I'm creating clients using API keys.
Thanks!
A blob trigger binds to a single blob, but you can add an input binding to your function. In this case you can bind to a CloudBlobContainer by adding a reference to the storage SDK, and then read any blob in that container: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-input?tabs=csharp#usage
Another option would be not to use an input binding and instead read the container and its contents the way you normally would with the storage SDK. You will need to add a reference to Microsoft.Azure.Storage.Blob in both cases. A sketch of the first option is below.
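Here's a minimal sketch of the container input binding, assuming the "csv" container from your trigger path and a hypothetical second file name:
[FunctionName("ProcessCsv")]
public static async Task Run(
    [BlobTrigger("csv/{name}.csv", Connection = "StorageConnectionAppSetting")] Stream myBlob,
    [Blob("csv", Connection = "StorageConnectionAppSetting")] CloudBlobContainer container,
    string name, ILogger log)
{
    // Read a second, known blob from the same container.
    var otherBlob = container.GetBlockBlobReference("lookup.csv"); // hypothetical file name
    string contents = await otherBlob.DownloadTextAsync();
    log.LogInformation($"Read {contents.Length} characters from lookup.csv");
}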
For app settings you can use System.Environment.GetEnvironmentVariable("APIKEY");, assuming APIKEY is your custom setting. Remember, local.settings.json is only used locally; you will need to set these values in Azure via the Azure portal or your CI/CD pipeline (for example with an ARM template).
You can also use Azure functions dependency injection and inject configuration. Check the section Working with options and settings at https://learn.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#working-with-options-and-settings
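Here's a sketch of the options pattern from that docs page, assuming a hypothetical MyOptions class bound to a "MyOptions" configuration section:
[assembly: FunctionsStartup(typeof(MyNamespace.Startup))]

namespace MyNamespace
{
    public class MyOptions
    {
        public string ApiKey { get; set; }
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddOptions<MyOptions>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    // Binds values such as "MyOptions:ApiKey" from app settings.
                    configuration.GetSection("MyOptions").Bind(settings);
                });
        }
    }
}
A function class can then take IOptions<MyOptions> in its constructor and read ApiKey from it.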
I need to add a correlationId to my logging context. I did it in my MVC project by adding the CorrelationId NuGet package and setting up its middleware, but I could not do the same in Azure Functions.
I have loaded the ICorrelationContextAccessor using dependency injection and then set my correlationId like this:
[FunctionName("func1")]
public async Task Run([ServiceBusTrigger("mytopic", "MySubscription", Connection = "ServiceBusConnectionString")]Message message)
{
_correlationContextAccessor.CorrelationContext = _correlationContextFactory.Create(message.CorrelationId, "X-Correlation-ID");
_logger.LogInformation($"C# ServiceBus topic trigger function processed message: {message.MessageId}, {Encoding.UTF8.GetString(message.Body)}");
It works fine and I see my correlationId in that log line and in my services in the function. The only part I am missing is that the logs for the start and finish of the function still have no correlationId, which makes sense, because when the function logs that it has received the message, the correlationId has not been set yet.
The short version is that you can't affect the logging code that runs before your function using the built-in bindings.
You won't be able to change the log entries written before your code runs, as the message hasn't been read at that point; it would be the same as trying to set the correlation ID in your MVC project before reading the incoming HTTP request.
You could add logging as soon as the message is first received by creating a custom binding. I would encourage you to consider carefully whether it is worth building and maintaining a custom binding just to get your logging set up a few lines sooner.
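If the goal is just to attach the correlationId to everything logged after the message is read, a logger scope is a lighter alternative. A sketch (whether scope properties appear in the output depends on the configured logging provider):
using (_logger.BeginScope(new Dictionary<string, object>
{
    ["CorrelationId"] = message.CorrelationId
}))
{
    // Everything logged inside this scope carries the correlation id,
    // assuming the provider is configured to include scopes.
    _logger.LogInformation("Processing message {MessageId}", message.MessageId);
}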
I have a method called ReadFileData(string blobStorageFilePath) in my .NET Web API project. This method reads the text content of an Azure Blob file; the Azure Blob Storage file path is passed via the parameter. Until now a client application (web) called this method to read file data, but now I have to automate the process.
So, is it possible to call this Web API method automatically whenever a new file is added to Azure Blob Storage? That way there would be no need for any client application.
Which approach should I use to implement this process?
Any working example will be appreciated.
You can add a WebJob to your Azure App Service and install the Azure WebJobs SDK. Then you can trigger your read with a simple declarative blob trigger:
https://learn.microsoft.com/en-us/azure/app-service-web/websites-dotnet-webjobs-sdk-storage-blobs-how-to
public static void CopyBlob([BlobTrigger("input/{name}")] TextReader input,
[Blob("output/{name}")] out string output)
{
output = input.ReadToEnd();
}
You can create a function triggered by Azure Blob storage; see https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-storage-blob-triggered-function for the complete example, and the sketch below.
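A minimal sketch of that approach, assuming a container named "uploads" and that the ReadFileData logic can be moved into (or referenced from) the function project (both assumptions):
[FunctionName("OnFileUploaded")]
public static void Run(
    [BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
    string name, ILogger log)
{
    log.LogInformation($"New blob detected: {name}");

    // Instead of a client calling the Web API, run the logic here,
    // e.g. ReadFileData($"uploads/{name}"), or call the API over HTTP.
}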
I am building a console application that will run as a continuous Azure WebJob. I am using the Azure WebJobs SDK via the NuGet package Microsoft.Azure.Jobs.ServiceBus v0.3.1-beta (prerelease). I have a static method that triggers on an Azure Service Bus queue. I do some processing and then want the option to send a response via the out parameter to another queue. The method signature looks like this:
public static void TriggerOnQueue(
[ServiceBusTrigger(QueueName)] BrokeredMessage receivedMessage,
[ServiceBus(QueueResponseName)] out BrokeredMessage responseMessage)
{
...
}
My initial thought was to set the responseMessage to null. However, when I do this an error appears in the console window. It doesn't stop execution (so it technically does what I want it to do), but I would rather not push something throwing errors to production. Is there any non-hackish way to set a value in the response message that will not throw an error, but will not submit the message to the response queue?
If not, is there another pattern I am missing that I could use? I would prefer to use the pipeline feature of the WebJobs SDK as opposed to creating the output queue manually. I could probably submit the requests that need a response to a separate queue and have 2 separate triggers, but given the small amount of work this does, I would rather keep it simple and together.
Thoughts?
This pattern of specifying null for an out parameter works for Azure Storage queues, but it throws an exception for Service Bus queues. This looks like a bug in the SDK; I will open a bug for us to fix it. Thank you for reporting this issue.
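Until that's fixed, one pattern worth knowing: in later versions of the WebJobs SDK (not the 0.3.1-beta above), you can bind to ICollector<T>, which makes the output optional because nothing is sent unless Add is called. A sketch:
public static void TriggerOnQueue(
    [ServiceBusTrigger(QueueName)] BrokeredMessage receivedMessage,
    [ServiceBus(QueueResponseName)] ICollector<BrokeredMessage> responses)
{
    // ... process receivedMessage ...

    if (NeedsResponse(receivedMessage)) // hypothetical helper
    {
        responses.Add(new BrokeredMessage("done"));
    }
    // If nothing is added, no message is sent and no error is raised.
}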
This is on .NET 4, full framework.
I'm trying to make a simple WinForms app that will make some simple WCF REST calls. It's using ChannelFactory and the service contract interface. Of the ~20 methods in the interface, 2 involve Stream (an upload and a download method), so the service side (and currently also the client side) is using TransferMode=Streamed.
My goal is to include the full HTTP request and response (much like you would see in Ethereal/Wireshark, or Fiddler), with headers, in a textbox of the WinForms app (just to show what went over the wire).
In trying to use both the built-in diagnostics (via SvcConfigEditor) and my own (implementing IClientMessageInspector, then IEndpointBehavior to add the inspector, then channelFactory.Endpoint.Behaviors.Add to add the behavior), I'm running into 2 issues:
1) When doing request.ToString() or reply.ToString() in BeforeSendRequest and AfterReceiveReply, I only get the 'body' and not the headers. Digging around in the debugger, it looks like the reply has them in reply.Properties["httpResponse"], but the request's request.Properties["httpRequest"] has an empty Headers property, even though Fiddler shows headers for Content-Type, Host, Accept-Encoding, and Connection. It seems likely there's a better way to get the 'raw' message that I'm missing (and if there's not, someone probably knows an existing chunk of code to 'reconstruct' the raw message from the Message object).
2) Since the transfer mode is Streamed, the 'body' part just shows up as the string '... stream ...', both in SvcTraceViewer (and in the 'raw' svclog, even with logEntireMessage=true) and when doing a ToString(). If the mode is Buffered instead, it shows the actual body fine. I tried making a copy with reply.CreateBufferedCopy(int.MaxValue), but that caused the actual WCF call to fail with an InvalidOperationException: This message cannot support the operation because it has been copied.
One fallback would be to move the client to Buffered and change to StreamedRequest for the one upload call and StreamedResponse for the download call (though I'd have to do that programmatically, AFAICT, since it's set at the binding level in the config and I don't see any way of doing it via attributes on the calls). That would take care of the 'body' part and leave me with just the "get the HTTP request headers" part (issue #1, specifically request.Properties["httpRequest"].Headers being empty) to deal with, but I'm hoping there's some way of logging the 'raw' messages without doing so, leaving the TransferMode as Streamed.
Thanks!
I can't find any reference right now, but it is a known limitation that you cannot capture the contents of a streamed message with WCF tracing. When streaming is enabled, only the headers of the message will be traced.
Here's the source: Configuring Message Logging on MSDN
See towards the end of the page:
Service Level
Messages logged at this layer are about to enter (on receiving) or leave (on sending) user code. If filters have been defined, only messages that match the filters are logged. Otherwise, all messages at the service level are logged. Infrastructure messages (transactions, peer channel, and security) are also logged at this level, except for Reliable Messaging messages. On streamed messages, only the headers are logged. In addition, secure messages are logged decrypted at this level.
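On the "This message cannot support the operation because it has been copied" error from the question: a WCF Message can only be consumed once, so after calling CreateBufferedCopy you need to hand a fresh copy back to WCF. A sketch of an inspector that does this (it still won't reveal a streamed body; it only avoids the exception once the transfer mode is Buffered):
public class RawLoggingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Buffer the message, log one copy, and give WCF a fresh copy to send.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        Message copyForLog = buffer.CreateMessage();
        request = buffer.CreateMessage();
        Console.WriteLine(copyForLog.ToString());
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
        Message copyForLog = buffer.CreateMessage();
        reply = buffer.CreateMessage();
        Console.WriteLine(copyForLog.ToString());
    }
}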