I am using an Azure Function to get messages from a RabbitMQ broker to an Event Hub.
The function works perfectly when I run it locally.
Here is the code of the function:
using System.Text;
using System.Dynamic;
using System.Threading.Tasks;
using CaseOnline.Azure.WebJobs.Extensions.Mqtt;
using CaseOnline.Azure.WebJobs.Extensions.Mqtt.Messaging;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class Test
{
    [FunctionName("EventHubOutput")]
    public static async Task Run(
        [MqttTrigger("topic/#")] IMqttMessage message,
        [EventHub("eventhubname", Connection = "EventHubConnectionAppSetting")] IAsyncCollector<string> outputEvents,
        ILogger log)
    {
        var body = message.GetMessage();
        var bodyString = Encoding.UTF8.GetString(body);

        // Deserialize the payload, attach the MQTT topic, and forward it to the Event Hub.
        dynamic obj = JsonConvert.DeserializeObject<ExpandoObject>(bodyString);
        obj.Topic = message.Topic;

        await outputEvents.AddAsync(JsonConvert.SerializeObject(obj));
    }
}
When deployed and run in the Azure portal, I get the following error messages:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: EventHubOutput
---> System.InvalidOperationException: Error while handling parameter outputEvents after function returned:
---> System.Net.Sockets.SocketException (0xFFFDFFFF): Name or service not known
at (...)
Any idea what the problem might be?
Thank you.
You are using the bindings incorrectly. Check out the RabbitMQ bindings for Azure Functions overview.
The following example shows a C# function that reads and logs the RabbitMQ message as a RabbitMQ event (BasicDeliverEventArgs):
[FunctionName("RabbitMQTriggerCSharp")]
public static void RabbitMQTrigger_BasicDeliverEventArgs(
[RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] BasicDeliverEventArgs args,
ILogger logger
)
{
logger.LogInformation($"C# RabbitMQ queue trigger function processed message: {Encoding.UTF8.GetString(args.Body)}");
}
The following example shows how to read the message as a POCO:
namespace Company.Function
{
    public class TestClass
    {
        public string x { get; set; }
    }

    public class RabbitMQTriggerCSharp
    {
        [FunctionName("RabbitMQTriggerCSharp")]
        public static void RabbitMQTrigger_BasicDeliverEventArgs(
            [RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] TestClass pocObj,
            ILogger logger)
        {
            logger.LogInformation($"C# RabbitMQ queue trigger function processed message: {pocObj}");
        }
    }
}
I recommend checking out this complete guide to setting up the RabbitMQ trigger in Azure Functions: RabbitMQ trigger for Azure Functions overview.
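For your specific scenario (forwarding the broker message on to an Event Hub), the RabbitMQ trigger can be combined with the same Event Hub output binding you already use. Here is a rough sketch, reusing the queue name and app setting names from the examples above and from your question; adjust them to your setup, and note that the routing-key enrichment is just an illustration:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using RabbitMQ.Client.Events;

public static class RabbitToEventHub
{
    [FunctionName("RabbitToEventHub")]
    public static async Task Run(
        // RabbitMQ trigger instead of the MQTT trigger used in the question.
        [RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] BasicDeliverEventArgs args,
        // Same Event Hub output binding as in the question.
        [EventHub("eventhubname", Connection = "EventHubConnectionAppSetting")] IAsyncCollector<string> outputEvents,
        ILogger log)
    {
        var bodyString = Encoding.UTF8.GetString(args.Body);
        log.LogInformation($"Forwarding RabbitMQ message: {bodyString}");

        // Enrich the payload before forwarding, roughly as the original function did with the MQTT topic.
        var obj = JObject.Parse(bodyString);
        obj["RoutingKey"] = args.RoutingKey;

        await outputEvents.AddAsync(JsonConvert.SerializeObject(obj));
    }
}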
All the code samples I've seen so far for Azure WebJobs rely on some kind of trigger (e.g. TimerTrigger or QueueTrigger).
I am looking specifically at WebJobs SDK 3.x, by the way.
So, for a triggerless WebJob (a Windows Service-like one), am I expected to use NoAutomaticTrigger and find a way to kick off my "main" code manually?
Or should I resort to implementing and registering a class that implements the IHostedService interface?
So far that's the approach I'm taking, but it feels more like a hack than a recommended way.
I have not even tried to deploy this code and have only run it on my local machine, so I am afraid the publishing process will reveal that my code is not suitable for Azure WebJobs in its current form.
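For context, the NoAutomaticTrigger route I mention above would look roughly like this; this is a rough sketch based on the SDK docs, not code I actually run, and the names are illustrative:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ManualFunctions
{
    // A function with no trigger; the SDK will not invoke it on its own.
    [NoAutomaticTrigger]
    public static async Task ListenForMessages(ILogger logger, CancellationToken cancellationToken)
    {
        logger.LogInformation("Starting the Service Bus listener...");
        // Start the Microsoft.Azure.ServiceBus receiver here, then keep the method alive until shutdown.
        await Task.Delay(Timeout.Infinite, cancellationToken);
    }
}

// ...and it would be kicked off manually from Main after the host is built, roughly like:
//   var jobHost = (JobHost)host.Services.GetService(typeof(IJobHost));
//   await host.StartAsync();
//   await jobHost.CallAsync(nameof(ManualFunctions.ListenForMessages));
//   await host.StopAsync();

What I actually run today is the IHostedService approach below.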
EntryPoint.cs
This is how the application is bootstrapped when the process starts.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace AbcCorp.Jobs
{
    public static class Program
    {
        static async Task Main(string[] args)
        {
            var config = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json", false)
                .Build();

            var hostBuilder = new HostBuilder()
                .ConfigureWebJobs(builder => { builder.AddAzureStorageCoreServices(); })
                .ConfigureServices(serviceCollection =>
                {
                    ConfigureServices(serviceCollection, config);
                    serviceCollection.AddHostedService<ConsoleApplication>();
                });

            using (var host = hostBuilder.Build())
                await host.RunAsync();
        }

        private static IServiceCollection ConfigureServices(IServiceCollection services, IConfigurationRoot configuration)
        {
            services.AddTransient<ConsoleApplication>();
            // ... more DI registrations
            return services;
        }
    }
}
ConsoleApplication.cs
This would normally be implemented as a function with a trigger.
The thing is, I want this code to run only once, on process startup.
It will start listening for Service Bus events using the regular Microsoft.Azure.ServiceBus SDK package.
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using AbcCorp.Internal.Microsoft.Azure.ServiceBus;
using AbcCorp.Api.Messaging;

namespace AbcCorp.Jobs
{
    public sealed class ConsoleApplication : IHostedService
    {
        private readonly IReceiver<SubmissionNotification> _messageReceiver;
        private readonly MessageHandler _messageHandler;

        public ConsoleApplication(IReceiver<SubmissionNotification> messageReceiver, MessageHandler messageHandler)
        {
            _messageReceiver = messageReceiver;
            _messageHandler = messageHandler;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            _messageReceiver.StartListening(_messageHandler.HandleMessage, _messageHandler.HandleException);
            return Task.Delay(Timeout.Infinite);
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            _messageReceiver.Dispose();
            return Task.CompletedTask;
        }
    }
}
So you want a console application to run in a WebJob and listen for messages. You don't really care about WebJob magic like triggers; it's just a place to run your console app. I've done the exact same thing before.
I found the IHostedService abstraction to be very helpful, but I didn't like their SDK. I found it bloated and hard to use. I didn't want to take a large dependency in order to use a large array of special magic Azure stuff, when all I wanted to do was run a console application in a WebJob for now, and maybe move it elsewhere later.
So I ended up just deleting that dependency, stealing the shutdown code from the SDK, and writing my own service host. The result is in my GitHub repo azure-webjob-host. Feel free to use it or raid it for ideas. I don't know, maybe if I did it again I'd have another attempt at getting the SDK to work, but I present this as a bit of an alternative to the SDK.
Basically I wrote an IHostedService implementation not too different from yours (except that StartAsync exited when stuff started instead of just hanging). Then I wrote my own service host, which is basically just a loop:
await _service.StartAsync(cancellationToken);
while (!cancellationToken.IsCancellationRequested) { await Task.Delay(1000); }
await _service.StopAsync(default);
Then I stole the WebJobsShutdownWatcher code from their repo.
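The idea behind that watcher is simple: Azure signals a WebJob to stop by creating the file named in the WEBJOBS_SHUTDOWN_FILE environment variable. A minimal home-grown version might look like this; this is a sketch of the concept, not the SDK's actual code:

using System;
using System.IO;
using System.Threading;

public sealed class ShutdownWatcher : IDisposable
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private readonly FileSystemWatcher _watcher;

    // Cancelled when Azure asks the WebJob to shut down.
    public CancellationToken Token => _cts.Token;

    public ShutdownWatcher()
    {
        // Azure sets WEBJOBS_SHUTDOWN_FILE; when that file appears, the job should stop gracefully.
        var shutdownFile = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");
        if (string.IsNullOrEmpty(shutdownFile))
            return; // running locally, nothing to watch

        _watcher = new FileSystemWatcher(Path.GetDirectoryName(shutdownFile));
        _watcher.Created += (s, e) =>
        {
            if (string.Equals(e.FullPath, shutdownFile, StringComparison.OrdinalIgnoreCase))
                _cts.Cancel();
        };
        _watcher.EnableRaisingEvents = true;
    }

    public void Dispose()
    {
        _watcher?.Dispose();
        _cts.Dispose();
    }
}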
Then I created an IHostedService that started my message handler. (I was using Rabbit, which has nothing to do with triggers or Azure stuff.)
public class MessagingService : IHostedService, IDisposable
{
    public MessagingService(ConnectionSettings connectionSettings,
        AppSubscriberSettings subscriberSettings,
        MessageHandlerTypeMapping[] messageHandlerTypeMappings,
        ILogger<MessagingService> logger)
    {
        ....
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        cancellationToken.ThrowIfCancellationRequested();
        await Task.WhenAll(subscribers.Value.Select(s => s.StartSubscriptionAsync()));
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        ...
    }

    public void Dispose()
    {
        ...
    }
}
Then I put that all together into something like this:
IHostedService myService = new MyService();
using (var host = new ServiceHostBuilder().HostService(myService))
{
    await host.RunAsync(default);
}
I have some workers attached to Service Bus topics, and what we do is the following (ServiceBusClient is a custom class that wraps our SubscriptionClient):
public override Task StartAsync(CancellationToken cancellationToken)
{
    _serviceBusClient.RegisterOnMessageHandlerAndReceiveMessages(MessageReceivedAsync);
    _logger.LogDebug("Started the Import Client successfully. Listening for messages...");
    return base.StartAsync(cancellationToken);
}

public void RegisterOnMessageHandlerAndReceiveMessages(Func<Message, CancellationToken, Task> ProcessMessagesAsync)
{
    // Configure the message handler options in terms of exception handling, number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(), set to 1 for simplicity.
        // Set it according to how many messages the application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the messages after returning from the user callback.
        // False below indicates the Complete will be handled by the user callback, as in ProcessMessagesAsync below.
        AutoComplete = false
    };

    // Register the function that processes messages.
    SubscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}
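The ProcessMessagesAsync callback and the ExceptionReceivedHandler referenced above are not shown in the snippet. With AutoComplete = false they might look roughly like this; the sketch assumes both methods live where SubscriptionClient is in scope (for example on the custom ServiceBusClient) and that _logger is an ILogger field:

private async Task MessageReceivedAsync(Message message, CancellationToken token)
{
    var body = Encoding.UTF8.GetString(message.Body);
    _logger.LogInformation($"Received message {message.MessageId}: {body}");

    // ... your processing logic ...

    // AutoComplete is false, so the message must be completed explicitly once handled.
    await SubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
}

private Task ExceptionReceivedHandler(ExceptionReceivedEventArgs args)
{
    _logger.LogError(args.Exception, $"Message handler error. Action: {args.ExceptionReceivedContext.Action}");
    return Task.CompletedTask;
}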
And then you can use DI to instantiate your Service Bus client and inject it into the constructor of your worker class.
Here is the initialization of the singleton instance of my custom ServiceBusClient class:
services.AddSingleton<IServiceBusClient, ServiceBusClient>((p) =>
{
    var diagnostics = p.GetService<EventHandling>();
    var sbc = new ServiceBusClient(
        programOptions.Endpoint,
        programOptions.TopicName,
        programOptions.Subscriber,
        programOptions.SubscriberKey);
    sbc.Exception += exception => diagnostics.HandleException(exception);
    return sbc;
});
Then, in this custom class, I initialize my SubscriptionClient:
public ServiceBusClient(
    string endpoint,
    string topicName,
    string subscriberName,
    string subscriberKey,
    ReceiveMode mode = ReceiveMode.PeekLock)
{
    var connBuilder = new ServiceBusConnectionStringBuilder(endpoint, topicName, subscriberName, subscriberKey);
    var connectionString = connBuilder.GetNamespaceConnectionString();

    ConnectionString = connectionString;
    TopicName = topicName;
    SubscriptionName = subscriberName;
    SubscriptionClient = new SubscriptionClient(connectionString, topicName, subscriberName, mode);
}
You can check george chen's answer from this post: How to create service bus trigger webjob?
There, instead of creating a receiver and registering a message handler, you can use the built-in queue trigger and write your message handler logic inside it.
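With that approach the SDK owns the message pump, and your handler shrinks to a triggered function. A minimal sketch (the queue name is illustrative, and it reuses the SubmissionNotification type from the question):

public class Functions
{
    // The WebJobs SDK invokes this whenever a message arrives on the queue;
    // no manual receiver registration or IHostedService is needed.
    public static void ProcessSubmissionNotification(
        [ServiceBusTrigger("submission-notifications")] SubmissionNotification notification,
        ILogger logger)
    {
        logger.LogInformation("Handling submission notification...");
        // ... the logic that used to live in MessageHandler.HandleMessage ...
    }
}

This assumes builder.AddServiceBus() is added in ConfigureWebJobs and a Service Bus connection string (AzureWebJobsServiceBus by default) is configured.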
I am attempting to convert my v1 function to a v2 function, but I cannot find a replacement for deferring a message.
In V1 of Azure Functions it was a method on the BrokeredMessage called .DeferAsync(). In V2 there is no longer a BrokeredMessage, just a Microsoft.Azure.ServiceBus.Message, and it does not have a .DeferAsync() method.
According to the docs:
The API is BrokeredMessage.Defer or BrokeredMessage.DeferAsync in the .NET Framework client, MessageReceiver.DeferAsync in the .NET Standard client, and messageReceiver.defer or messageReceiver.deferSync in the Java client.
But how can I get access to the MessageReceiver?
Here is an example of my function:
[FunctionName("MyFunction")]
public static void Run([ServiceBusTrigger("topic", "subscription", Connection = "AzureServiceBusPrimary")]Message message, ILogger log)
{
//Code
}
So does anyone know how to defer a V2 Message that is triggered from the Azure Service Bus?
As you mention, the new message receiver offers an async defer method and you can add this to your function by using the following code:
[FunctionName("MyFunction")]
public static async Task Run([ServiceBusTrigger("topic", "subscription", Connection = "AzureServiceBusPrimary")]Message message, string lockToken, MessageReceiver messageReceiver, ILogger log)
{
//Your function logic
await messageReceiver.DeferAsync(lockToken);
}
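Keep in mind that a deferred message is not redelivered by the trigger; it has to be fetched again explicitly by its sequence number. A rough sketch of that later retrieval, assuming the Microsoft.Azure.ServiceBus client, that connectionString is available, and that the sequence number was persisted when the message was deferred:

// At defer time, remember the sequence number (e.g. persist it in a table or another queue).
long sequenceNumber = message.SystemProperties.SequenceNumber;

// Later, fetch the deferred message explicitly by that sequence number.
var receiver = new MessageReceiver(
    connectionString,
    EntityNameHelper.FormatSubscriptionPath("topic", "subscription"));

Message deferred = await receiver.ReceiveDeferredMessageAsync(sequenceNumber);
// ... process it ...
await receiver.CompleteAsync(deferred.SystemProperties.LockToken);
await receiver.CloseAsync();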
I have an Azure WebJobs (v2.2.0) project that I would like to monitor with Application Insights (AI), and there are events that I would like to be able to track. In a normal web app that is configured to use AI you can just use this:
TelemetryClient tc = new TelemetryClient();
tc.TrackEvent("EventName");
However, this does not seem to work in the context of a WebJob! I have configured my WebJob project as per the instructions in the WebJob SDK repo, which ends up looking like this:
Program
using System.Configuration;
using System.Diagnostics;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace WebJobs
{
    public class Program
    {
        public static void Main()
        {
            JobHostConfiguration config = new JobHostConfiguration();
            config.UseTimers();

            using (LoggerFactory loggerFactory = new LoggerFactory())
            {
                string key = ConfigurationManager.AppSettings["webjob-instrumentation-key"];
                loggerFactory.AddApplicationInsights(key, null);
                loggerFactory.AddConsole();

                config.LoggerFactory = loggerFactory;
                config.Tracing.ConsoleLevel = TraceLevel.Off;

                if (config.IsDevelopment)
                    config.UseDevelopmentSettings();

                JobHost host = new JobHost(config);
                host.RunAndBlock();
            }
        }
    }
}
Functions
This is just a test function that will run every minute for half an hour.
using Core.Telemetry;
using Microsoft.ApplicationInsights;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;
using System;
using System.Collections.Generic;

namespace WebJobs.Functions
{
    public class TestFunctions
    {
        public void TelemetryTest([TimerTrigger(typeof(Schedule))] TimerInfo timer)
        {
            TelemetryClient tc = new TelemetryClient();
            tc.TrackEvent("TelemetryTestEvent");
        }

        // schedule that will run every minute
        public class Schedule : DailySchedule
        {
            private static readonly string[] times =
            {
                "12:01","12:02","12:03","12:04","12:05","12:06","12:07","12:08","12:09","12:10",
                "12:11","12:12","12:13","12:14","12:15","12:16","12:17","12:18","12:19","12:20",
                "12:21","12:22","12:23","12:24","12:25","12:26","12:27","12:28","12:29","12:30"
            };

            public Schedule() : base(times) { }
        }
    }
}
This seems to partially work, in that I can see some telemetry in AI but not the custom events. For example, I can see a Request show up each time TestFunctions.TelemetryTest() runs, and various Traces during the initialisation of the WebJob.
I have probably not configured something properly or am not getting the TelemetryClient in the correct manner, but I cannot find any documentation on tracking custom events in WebJobs.
Any help would be appreciated.
Try setting the instrumentation key explicitly:
tc.Context.InstrumentationKey = "<your_key>";
According to the docs, you should be able to get the key using
System.Environment.GetEnvironmentVariable(
    "APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process)
if you have set up Application Insights integration.
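Putting both suggestions together, the timer function might end up looking like this (a sketch; the explicit Flush is added because a short-lived invocation can exit before buffered telemetry is sent):

public void TelemetryTest([TimerTrigger(typeof(Schedule))] TimerInfo timer)
{
    var tc = new TelemetryClient();

    // Set the key explicitly rather than relying on it being picked up automatically.
    tc.Context.InstrumentationKey = Environment.GetEnvironmentVariable(
        "APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);

    tc.TrackEvent("TelemetryTestEvent");

    // Flush the in-memory buffer so the event is not lost if the process ends soon after.
    tc.Flush();
}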
I'm now working on writing a unit test for an Azure Service Bus trigger function.
I need to somehow mock the BrokeredMessage object that is passed into the function. The function declaration is given below:
public static void Run(
    [ServiceBusTrigger("saas01.queue.dbmigration", AccessRights.Manage, Connection = "connection")] BrokeredMessage message)
Unfortunately, I can't find any workable way to mock it. It is hard to mock because the class is sealed, and I can't even create a wrapper around it. Do you have any ideas?
Thanks for helping.
One solution is to create a wrapper around BrokeredMessage that you can test, as is done here. Here's also an MSDN post to the Service Bus team that talks about using a wrapper too.
Note that Azure Functions V2 uses the Message class, which is public and not sealed.
[FunctionName("ServiceBusFunc")]
public static void Run([ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "ServiceBus")]BrokeredMessage myQueueItem, TraceWriter log)
{
var message = new MyBrokeredMessage(myQueueItem);
BusinessLogic(message, log);
}
public static void BusinessLogic(MyBrokeredMessage myMessage, TraceWriter log)
{
var stream = myMessage.GetBody<Stream>();
var reader = new StreamReader(stream);
log.Info($"C# ServiceBus queue trigger function processed message: '{reader.ReadToEnd() }'");
}
public class MyBrokeredMessage
{
private BrokeredMessage _msg;
public MyBrokeredMessage(BrokeredMessage msg) => _msg = msg;
public T GetBody<T>()
{
return _msg.GetBody<T>();
}
}
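Worth spelling out the V2 note above: because Message is public and has an accessible constructor, a V2 test can often skip the wrapper and construct the message directly. A rough xUnit-style sketch, where MyV2Function and the payload are purely illustrative:

[Fact]
public void Run_ProcessesMessageBody()
{
    // Microsoft.Azure.ServiceBus.Message can be constructed directly, unlike the sealed BrokeredMessage.
    var message = new Message(Encoding.UTF8.GetBytes("{ \"x\": \"hello\" }"));

    // Call the hypothetical V2 function directly with the fabricated message and a no-op logger.
    MyV2Function.Run(message, NullLogger.Instance);
}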
Edit: I will accept Azure configuration-related changes as an answer to this question.
I am attempting to set up a retry policy to prevent instantly retrying a message when a 3rd-party service is temporarily unavailable.
Currently the job is retried immediately multiple times and fails each time due to the temporary outage of the 3rd party service.
How do I set a retry delay for these messages?
I have the following code for Main:
class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        if (config.IsDevelopment)
            config.UseDevelopmentSettings();

        config.UseCore();
        config.UseServiceBus(new ServiceBusConfiguration()
        {
            ConnectionString = Configuration.GetAppSetting("Microsoft.ServiceBus.ConnectionString"),
            MessageOptions = new OnMessageOptions()
            {
            }
        });

        var host = new JobHost(config);
        LogManager.GetCurrentClassLogger().Information("F1.Birst.Automation web job starting.");

        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
I have an ErrorMonitor set up which properly logs errors:
public class ExceptionHandler
{
    private static readonly ILogger Log = LogManager.GetCurrentClassLogger();

    public void Handle([ErrorTrigger] TraceFilter message, TextWriter log)
    {
        foreach (var exception in message.GetEvents())
            Log.Error(exception.Exception.InnerException, exception.Message);
    }
}
And my message handler looks like this:
public class ChurchCodeChangedEventHandler : ChurchSpaceHandler
{
    private static readonly ILogger Log = LogManager.GetCurrentClassLogger();

    public void Handle([ServiceBusTrigger(nameof(ChurchCodeChangedEvent), "F1.Birst.Automation.ChurchCodeChangedEvent")] ChurchCodeChangedEvent message, TextWriter log)
    {
        Log.Information(LogTemplates.ChurchCodeChanged, message.ChurchId);
        // snip
    }
}
How do I set a retry delay for these messages?
WebJobs do not support the concept of delayed retries. You can only control a few things using ServiceBusConfiguration, and looking at the source code, those do not include retries.
You could use frameworks like NServiceBus or MassTransit to get delayed retries. There's an example of how to use NServiceBus with WebJobs, and you can run it locally to see how delayed retries would work.
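For completeness, the few things you can control sit on the OnMessageOptions you already pass to ServiceBusConfiguration; they cover concurrency, auto-completion and lock renewal, not retry delay. A sketch with illustrative values:

config.UseServiceBus(new ServiceBusConfiguration
{
    ConnectionString = Configuration.GetAppSetting("Microsoft.ServiceBus.ConnectionString"),
    MessageOptions = new OnMessageOptions
    {
        // Number of messages processed in parallel.
        MaxConcurrentCalls = 1,
        // Complete the message automatically when the function returns without throwing.
        AutoComplete = true,
        // How long the lock is renewed automatically while the handler runs.
        AutoRenewTimeout = TimeSpan.FromMinutes(5)
    }
});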