ManualTrigger in Azure WebJob without outputting to queue at end - c#

In my Visual Studio project I've created a WebJob that runs on a schedule (once a week) to dump logs from the db to blob storage. This works great but all the code is in a ManualTrigger method that has a queue message as a required output. I really have no need of this message so I'd rather not create it and have the queue sitting there growing with unused messages.
Am I understanding this correctly or is this queue message for something else and automatically removed? I can't seem to find any documentation on the generated ManualTrigger method.
My ManualTrigger code looks like:
[NoAutomaticTrigger]
public static void ManualTrigger(TextWriter log, int value, [Queue("queue")] out string message)
{
    // ... Log Dump Code ...
    message = "Unused message";
}
Thanks,
Jason

If you don't need the queue output, don't use it :) Just remove the last parameter and you should be good to go. For example:
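A minimal sketch of the same method with the output binding removed (the body placeholder is from the question):
[NoAutomaticTrigger]
public static void ManualTrigger(TextWriter log, int value)
{
    // ... Log Dump Code ...
}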

Related

Checking status of message insert to Azure Queue

I have a problem with Azure Queue. After inserting a message into an Azure Queue, how can I check in my (.net) code whether the message was inserted successfully?
My current solution is to check the number of messages already stored in the queue and check it again after inserting the new message. I am trying to find a better solution. Thanks for helping me.
Update:
The method 'AddMessageAsync' in Microsoft.WindowsAzure.Storage.Queue is declared like this:
public virtual Task AddMessageAsync(CloudQueueMessage message);
It returns nothing (a plain Task with no result), so you have two options:
The first is what you describe: get the number of messages in the queue and check it again after adding the message.
The second is to put the logic in a try-catch and treat an exception as a failed insert. For example:
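A minimal sketch of the try-catch approach with the legacy Microsoft.WindowsAzure.Storage SDK; the connection string and queue name are placeholders:
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Program
{
    static async Task Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("my-queue");

        try
        {
            await queue.AddMessageAsync(new CloudQueueMessage("Hello, queue!"));
            // Reaching this line means the service accepted the message.
            Console.WriteLine("Insert succeeded.");
        }
        catch (StorageException ex)
        {
            // Any storage failure surfaces as an exception.
            Console.WriteLine($"Insert failed: {ex.Message}");
        }
    }
}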
Original Answer:
Have a look at this API reference:
https://learn.microsoft.com/en-us/dotnet/api/azure.storage.queues.queueclient.sendmessage?view=azure-dotnet
The SendMessage method returns a Response, so I don't think you need to count the messages. Just check that the response status is OK. For example:
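A minimal sketch with the newer Azure.Storage.Queues client; the connection string and queue name are placeholders:
using System;
using Azure;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

var queueClient = new QueueClient("<connection-string>", "my-queue");
Response<SendReceipt> response = queueClient.SendMessage("Hello, queue!");

// The service answers 201 Created when the message is inserted.
if (response.GetRawResponse().Status == 201)
{
    Console.WriteLine($"Inserted message {response.Value.MessageId}");
}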

Azure Storage Queue - processing messages on poison queue

I've been using Azure Storage Queues to post messages to, then write the messages to a db table. However, I've noticed that when an error occurs processing messages on the queue, the message is written to a poison queue.
Here is some background to the setup of my app:
Azure Web App -> Writes message to the queue
Azure function -> Queue trigger processes the message and writes the contents to a db
There was an issue with the db schema which caused the INSERTS to fail. Each message was retried 5 times, which I believe is the default for retrying queue messages, and after the 5th attempt the message was placed on the poison queue.
The db schema was subsequently fixed but now I've no way of processing the messages on the poison queue.
My question is can we recover messages written to the poison queue in order to process them and INSERT them into the db, and if so how?
For your particular problem, I would recommend the solution mentioned in the question part of this post: Azure: How to move messages from poison queue to back to main queue?
Please note that the name of the poison queue is $"{queueName}-poison".
In my current project I've created what I call "Support functions" in the Function App. It exposes a special HTTP endpoint with the Admin authorization level that can be executed at any time.
Please see the code below, which solves the problem of reprocessing messages from the poison queue:
public static class QueueOperations
{
    [FunctionName("Support_ReprocessPoisonQueueMessages")]
    public static async Task<IActionResult> Support_ReprocessPoisonQueueMessages(
        [HttpTrigger(AuthorizationLevel.Admin, "put", Route = "support/reprocessQueueMessages/{queueName}")] HttpRequest req,
        ILogger log,
        [Queue("{queueName}")] CloudQueue queue,
        [Queue("{queueName}-poison")] CloudQueue poisonQueue,
        string queueName)
    {
        log.LogInformation("Support_ReprocessPoisonQueueMessages function processed a request.");
        int.TryParse(req.Query["messageCount"], out var messageCountParameter);
        var messageCount = messageCountParameter == 0 ? 10 : messageCountParameter;

        var processedMessages = 0;
        while (processedMessages < messageCount)
        {
            var message = await poisonQueue.GetMessageAsync();
            if (message == null)
                break;

            var messageId = message.Id;
            var popReceipt = message.PopReceipt;

            await queue.AddMessageAsync(message); // a new Id and PopReceipt is assigned
            await poisonQueue.DeleteMessageAsync(messageId, popReceipt);
            processedMessages++;
        }

        return new OkObjectResult($"Reprocessed {processedMessages} messages from the {poisonQueue.Name} queue.");
    }
}
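Once deployed, it can be invoked with an HTTP PUT; the queue name and count below are illustrative, and the Admin authorization level means the request must carry the function app's master key (e.g. via the code query parameter):
PUT https://<your-function-app>.azurewebsites.net/api/support/reprocessQueueMessages/orders?messageCount=50&code=<master-key>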
Alternatively, it may be a good idea to create a new message with additional metadata (recording that the message has already been processed unsuccessfully in the past - then it may be sent to a dead letter queue).
You have two options:
Add another function that is triggered by messages added to the poison queue. You can try adding the contents to the db in this function. More details on this approach can be found here. Of course, if this function too fails to process the message, you could check the dequeue count and post a notification that needs manual intervention.
Add an int 'dequeueCount' parameter to the function processing the queue and, after say 5 retries, log the failure instead of letting the message go to the poison queue. For example, you can send an email to notify that manual intervention is required. A rough sketch of the first option follows.
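A minimal sketch of the first option, assuming a poison-queue trigger with the automatically bound dequeueCount metadata; the queue and function names are illustrative:
[FunctionName("ProcessPoisonQueue")]
public static void Run(
    [QueueTrigger("myqueue-poison")] string myQueueItem,
    int dequeueCount, // bound from the queue message's metadata
    ILogger log)
{
    if (dequeueCount >= 5)
    {
        // Still failing after several attempts: flag for manual
        // intervention instead of letting the message bounce forever.
        log.LogError($"Manual intervention required for: {myQueueItem}");
        return;
    }

    // ... retry the db INSERT with the message contents here ...
}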
You can use Azure Management Studio (Cerulean) to move the messages from the poison queue back to the actual queue. It's a highly recommended tool for working with queues and blobs and for any production-related activity. https://www.cerebrata.com/products/cerulean
I am just a user of the tool and in no way affiliated; I recommend it because it is very powerful, very useful, and makes you very productive.
Click on Move and the messages can be moved back to the originally uploaded queue.
Just point your Azure function to the poison queue and the items in that poison queue will be handled. More details here: https://briancaos.wordpress.com/2018/05/03/azure-functions-how-to-retry-messages-in-the-poison-queue/
Azure Storage Explorer (version 1.15.0 and above) has now added support for moving messages from one queue to another. This makes it possible to move all, or a selected set of, messages from the poison queue back to the original queue.
https://github.com/microsoft/AzureStorageExplorer/issues/1064

How to get trace/exception data from Console C# application to the Application Insights

I am trying to send trace and exception data to Application Insights. I am using the following code:
TelemetryClient tc = new TelemetryClient();
tc.InstrumentationKey = "xxxxxx-xxxxxxx-xxxxxxxx-xxxxxxx";
var traceTelemetry = new TraceTelemetry("Console trace critical", SeverityLevel.Critical);
tc.TrackTrace(traceTelemetry);
tc.TrackException(new ApplicationException("Test for AI"));
tc.Flush();
But it does not work; I cannot find such traces or exceptions in the Application Insights dashboard, search, or metrics explorer.
I would try adding a 5-second sleep before exiting the process (after the flush). IIRC, Flush only flushes the local buffer and does not force-send the telemetry to Application Insights.
It looks like tc.Flush() is not enough. I tried your code in a console app and didn't see a request in Fiddler. When I added Thread.Sleep(5000), the exception showed up.
class Program
{
    static void Main(string[] args)
    {
        TelemetryClient tc = new TelemetryClient();
        tc.InstrumentationKey = "fe549116-0099-49fe-a3d6-f36b3dd20860";
        var traceTelemetry = new TraceTelemetry("Console trace critical", SeverityLevel.Critical);
        tc.TrackTrace(traceTelemetry);
        tc.TrackException(new ApplicationException("Test for AI"));
        tc.Flush();
        // Without this line Fiddler didn't show a request
        Thread.Sleep(5000);
    }
}
And I was able to see an Exception in "Failures" screen.
I think you may want to use InMemoryChannel if you are adding AI to a console/desktop/worker application and would like to Flush synchronously.
InMemoryChannel does not store data locally before it sends telemetry out, so there is no protection from data loss if anything happens on the wire. However, when you call Flush() it will actually try to send the telemetry instead of saving it to disk.
Using InMemoryChannel helps avoid error-prone code like adding a Sleep() to give ServerTelemetryChannel time to send the locally stored telemetry items.
You'll need to replace the ServerTelemetryChannel via code or in the ApplicationInsights.config file, where it is registered like this:
<TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChannel">
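And a rough sketch of the code route, assuming a recent Microsoft.ApplicationInsights package (the instrumentation key is a placeholder):
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

var config = TelemetryConfiguration.CreateDefault();
config.InstrumentationKey = "xxxxxx-xxxxxxx-xxxxxxxx-xxxxxxx";
config.TelemetryChannel = new InMemoryChannel();

var tc = new TelemetryClient(config);
tc.TrackException(new ApplicationException("Test for AI"));
tc.Flush(); // with InMemoryChannel, Flush actually sends the telemetry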
It can take up to an hour for events/messages to become visible in App Insights. I guess you have to be a little more patient for the message to show up.

Azure Queue Trigger giving null queue item incorrectly

I'm trying to write a system in Azure, and for one part of it I want to be able to have a number of bits of code writing to a queue, and have a single bit of processing code deal with each item in the queue.
Items are being added to the queue correctly. I've checked this; I have Visual Studio with the Azure plugins. I can then use Cloud Explorer to pull up the storage account and view the queue. In here, the queue content seems correct, in that the Message Text Preview looks as I would expect.
However, when I add an Azure Functions with a Queue Trigger to process this, while the trigger fires, the queue item comes out empty. I've tried the tutorial code, cut down a little. When I set the run function to be:
public static void Run(string myQueueItem,
    DateTimeOffset expirationTime,
    DateTimeOffset insertionTime,
    DateTimeOffset nextVisibleTime,
    string queueTrigger,
    TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: '{myQueueItem.GetType()}'\n" +
             $"queueTrigger={queueTrigger}\n" +
             $"expirationTime={expirationTime}\n" +
             $"insertionTime={insertionTime}\n" +
             $"nextVisibleTime={nextVisibleTime}\n");
}
I then get output in which the queue item is empty, when I know it isn't. The queue trigger element is also empty. Here is some sample output from running the function directly in Azure Functions:
2016-11-01T13:47:41.834 C# Queue trigger function processed:
queueTrigger=
expirationTime=12/31/9999 11:59:59 PM +00:00
insertionTime=11/1/2016 1:47:41 PM +00:00
The fact that this triggers at all, and has a sensible looking insertion time suggests that I'm connecting to the right queue.
Does anyone know why the string myQueueItem is coming out empty, when the queue peeking tool can see the full preview string?
I've now got this working. I did two things.
First I cleared out the 'poison' queue. I had been trying to deserialize some object from the queue earlier.
Then, I enabled the queue trigger function - it was disabled earlier. It seems to me that when you manually run a disabled Queue Trigger, it provides some fake information and doesn't take anything from the queue - it doesn't even dequeue a message, which was the hint.
From this point on, when I add queues, they get processed correctly.

Debugging/profiling/optimizing C# Windows service in VS 2012

I am creating a Windows service in C#. Its purpose is to consume info from a feed on the Internet. I get the data by using zeromq's pub/sub architecture (my service is a subscriber only). To debug the service I "host" it in a WPF control panel. This allows me to start, run, and stop the service without having to install it. The problem I am seeing is that when I call my stop method it appears as though the service continues to write to the database. I know this because I put a Debug.WriteLine() where the writing occurs.
More info on the service:
I am attempting to construct my service in a fashion that allows it to write to the database asynchronously. This is accomplished by using a combination of threads and the ThreadPool.
public void StartDataReceiver() // Entry point to service from WPF host
{
    // setup zmq subscriber socket
    receiverThread = new Thread(SpawnReceivers);
    receiverThread.Start();
}

internal void SpawnReceivers()
{
    while (!stopEvent.WaitOne(0))
    {
        // subscriber.Recv() blocks when there is no data to receive (according to
        // the zmq docs), so this loop should remain under control, and threads are
        // only queued in the pool when there is data to process.
        ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessReceivedData), subscriber.Recv());
    }
}

internal void ProcessReceivedData(Object recvdData)
{
    // cast recvdData from object -> byte[]
    // convert byte[] -> JSON string
    // deserialize JSON -> MyData
    using (MyDataEntities context = new MyDataEntities())
    {
        // build up EF model object
        Debug.WriteLine("Write obj to db...");
        context.MyDatas.Add(myEFModel);
        context.SaveChanges();
    }
}

internal void QData(Object recvdData)
{
    Debug.WriteLine("Queued obj in queue...");
    q.Enqueue((byte[])recvdData);
}

public void StopDataReceiver()
{
    stopEvent.Set();
    receiverThread.Join();
    subscriber.Dispose();
    zmqContext.Dispose();
    stopEvent.Reset();
}
The methods above are the ones I am concerned with. When I debug the WPF host with ProcessReceivedData queued to the thread pool, everything seems to work as expected until I stop the service by calling StopDataReceiver. As far as I can tell, the thread pool never queues any more work items (I checked this by placing a break point on that line), but I continue to see "Write obj to db..." in the output window, and when I 'Break All' in the debugger a little green arrow appears on the context.SaveChanges(); line, indicating that is where execution is currently halted. When I instead have the thread pool queue up the QData method, everything works as expected: I see "Queued obj in queue..." messages in the output window until I stop the service, and once I do, no more messages appear.
TL;DR:
I don't know how to determine whether Entity Framework is just slowing things way down and the messages I am seeing are simply the thread pool clearing its backlog of work items, or whether there is something larger at play. How do I go about solving something like this?
Would a better solution be to queue the incoming JSON strings as byte[], as I do in the QData method, and then have the thread pool queue up a different method to work through clearing that queue? I feel that solution would only shift the problem around and not actually solve it.
Could another solution be to write a new service dedicated to clearing that queue? The problem I see with writing another service is that I would probably have to use WCF (or possibly zmq) to communicate between the two services, which would obviously add overhead and possibly be less performant.
I see the critical section in all of this as getting the data off the wire fast enough, because the publisher I am subscribed to is set to begin discarding messages if my subscriber can't keep up.
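To make the queue-then-drain idea concrete, here is a rough sketch using BlockingCollection; Receive() and WriteToDb() are stand-ins for subscriber.Recv() and the EF insert from the code above:
using System;
using System.Collections.Concurrent;
using System.Threading;

class DataReceiver
{
    private readonly BlockingCollection<byte[]> queue = new BlockingCollection<byte[]>();
    private readonly ManualResetEvent stopEvent = new ManualResetEvent(false);
    private Thread receiverThread;
    private Thread writerThread;

    public void Start()
    {
        // Producer: drain the wire as fast as possible so the publisher
        // never starts discarding messages; enqueueing is cheap.
        receiverThread = new Thread(() =>
        {
            while (!stopEvent.WaitOne(0))
                queue.Add(Receive());
        });

        // Single consumer: db writes proceed at their own pace and never
        // hold up the receive loop. GetConsumingEnumerable blocks until
        // an item arrives or CompleteAdding is called.
        writerThread = new Thread(() =>
        {
            foreach (byte[] data in queue.GetConsumingEnumerable())
                WriteToDb(data);
        });

        receiverThread.Start();
        writerThread.Start();
    }

    public void Stop()
    {
        stopEvent.Set();
        receiverThread.Join();   // Receive() blocks, same caveat as subscriber.Recv()
        queue.CompleteAdding();  // let the writer finish its backlog, then exit
        writerThread.Join();     // when this returns, no hidden work remains
    }

    private byte[] Receive() { /* subscriber.Recv() in the real service */ return new byte[0]; }
    private void WriteToDb(byte[] data) { /* EF insert, as in ProcessReceivedData */ }
}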
