C# - Save multiple blobs into Azure account

I've got a simple project that uses the M2Mqtt client library to connect to a HiveMQ broker. When a message arrives an event fires. The problem is that I can receive up to 100 messages per second, but the program is only able to process about 20 messages per second.
HiveMQClient.MqttMsgPublishReceived += HiveMQClient_MqttMsgPublishReceived;
So, I have all the HiveMQ logs and telemetry, and I can clearly see that the messages arrive in my application at the expected rate (100 per second). The strange thing is that the CPU of the PC hosting the client program runs at only 10% of its capacity.
I was wondering if I need to "multi-thread the event" or whether there is something I'm missing.
Thank you all
EDIT
Inside the MqttMsgPublishReceived event handler I've got a ThreadPool call that stores the messages I receive in an Azure Blob Storage account. After some review I understood that this is the problem (thanks @Hans Kilian).
Now I've got an Azure Blob Storage account (standard configuration) that accepts only about 30 calls per second. I tried to upgrade to the premium tier, but it is only for virtual machine VHD images.
Does anybody know how to improve these numbers?

The MqttMsgPublishReceived callback runs on the client's network thread, so if you care about performance you should not do any real work in this callback.
For high-performance applications the usual model is to use the MqttMsgPublishReceived handler only to place the incoming message on a local queue in the client, and to have a thread pool consume messages from that queue.
This becomes even more important when using QoS 1 or 2 messages, because the broker will not send the next message until the MqttMsgPublishReceived handler has returned and the QoS handshake completes.
As @Hans Kilian says in the comments, things like databases can also be a bottleneck, but a thread pool combined with a database connection pool helps, since it ensures you are not building and tearing down a database connection for each message.
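A minimal sketch of that queue-plus-workers pattern, assuming the M2Mqtt (uPLibrary.Networking.M2Mqtt) client from the question; the broker address, topic, worker count and StoreMessage placeholder are illustrative, not taken from the original code:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

class MqttIngest
{
    // Bounded queue so a burst of incoming messages cannot exhaust memory.
    private static readonly BlockingCollection<byte[]> Pending =
        new BlockingCollection<byte[]>(boundedCapacity: 10000);

    static void Main()
    {
        var client = new MqttClient("broker.hivemq.com");   // illustrative broker address
        client.MqttMsgPublishReceived += OnMessage;
        client.Connect(Guid.NewGuid().ToString());
        client.Subscribe(new[] { "my/topic" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

        // A small pool of workers does the slow work (e.g. blob uploads)
        // off the network thread.
        for (var i = 0; i < 8; i++)
            Task.Run(() => ConsumeLoop());

        Console.ReadLine();
    }

    // Runs on the client's network thread: do nothing here but enqueue.
    private static void OnMessage(object sender, MqttMsgPublishEventArgs e)
    {
        Pending.Add(e.Message);
    }

    private static void ConsumeLoop()
    {
        foreach (var payload in Pending.GetConsumingEnumerable())
        {
            StoreMessage(payload);   // placeholder for the real storage call
        }
    }

    private static void StoreMessage(byte[] payload)
    {
        // e.g. upload to Azure Blob Storage, write to a database, etc.
    }
}

Because the queue is bounded, the network thread blocks when the workers fall behind, which (for QoS 1 or 2) throttles the broker instead of letting messages pile up in memory.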

Related

Azure Service Bus scheduled message arriving too late

I'm using ASB topics. I'm connecting to the service using the Microsoft .NET Service Bus NuGet package (namespace Microsoft.Azure.ServiceBus.Core).
When a message arrives, my consumer either handles the message and completes it, or re-sends it to the topic with a delay.
The problem is that when the delay is less than 15 seconds, the message sometimes only arrives after 15 seconds.
e.g. setting the delay to 3s or 10s usually works fine, but some of the messages arrive only after 15s (in both the 3s and 10s cases).
When setting the delay to 20 seconds it works just fine with no exceptions.
It's definitely not load on the consumer, because in some cases it was idle during the wait time.
I tried using prefetchCount but it had no effect.
I wanted to track the scheduled message in the Azure portal, but it seems that this option is available only for queues (not topics).
Any idea why is that happening and what can I do? thanks!
I'm using the premium tier, and my receiver runs in Azure Kubernetes. I'm quite sure this 15s number is defined somewhere to delay messages in certain cases; I was wondering if someone knows about this.
You can submit messages to a queue or topic for later processing; for example, you can schedule a job to be ready for processing by a system at a certain time. This functionality allows you to build a reliable, distributed, time-based scheduler.
Scheduled messages do not appear in the queue until the enqueue time has passed. Scheduled messages can be cancelled before that time; cancelling deletes the message.
You can use any of the clients to schedule messages in one of two ways:
Use the standard send API, but set the ScheduledEnqueueTimeUtc property on the message before sending.
Pass both the normal message and the scheduled time to the schedule message API. This returns the SequenceNumber of the scheduled message, which you can use to cancel it later if necessary.
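A hedged sketch of both options using the Microsoft.Azure.ServiceBus package the question mentions; the connection string, topic name and delays are placeholders:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class ScheduledSendSample
{
    static async Task Main()
    {
        var sender = new MessageSender("<connection-string>", "<topic-name>");   // placeholders

        // Option 1: set ScheduledEnqueueTimeUtc and use the normal send API.
        var delayed = new Message(Encoding.UTF8.GetBytes("retry later"))
        {
            ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(30)
        };
        await sender.SendAsync(delayed);

        // Option 2: use the schedule API, which returns a sequence number
        // that can be used to cancel the scheduled message later.
        var scheduled = new Message(Encoding.UTF8.GetBytes("or schedule explicitly"));
        long sequenceNumber = await sender.ScheduleMessageAsync(
            scheduled, DateTimeOffset.UtcNow.AddSeconds(30));

        // Cancel it if it is no longer needed.
        await sender.CancelScheduledMessageAsync(sequenceNumber);

        await sender.CloseAsync();
    }
}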
For more information please refer to the links below:
Microsoft documentation: Scheduled messages and Best practices for performance improvements using Service Bus Messaging

How to store messages at client side when server goes down using WCF?

I came across a situation in my work environment where I have a WCF service which receives messages from a client and stores them in a database. Now my problem: suppose the server is down for 10 minutes; the messages from those 10 minutes should be stored somewhere on the client, and the client should check for the availability of the server every minute. Is there any procedure I could follow? Any help would be appreciated. Thank you.
Binding: netTcpBinding
MSMQ does exactly what your first sentence says - when you send an MSMQ message, if it can't get the remote queue then it stays with the client and the built-in MSMQ service retries in the background. That way your message, once sent, is "safe." It's going to reach its destination if at all possible. (If you have a massive message volume and messages need to be stored for a long time then storage capacity can be an issue, but that's very, very unlikely.)
Configure WCF to send/receive MSMQ messages
I'd only do this if it's necessary. It involves modifying both the service and the client, and the documentation isn't too friendly.
Here's the documentation for MsmqBinding. Steps 3 and 4 for configuring the WCF service are blank. That's not helpful! When I selected the .NET 4.0 documentation, those details were filled in.
I looked at several tutorials, and if I was going to look at this I'd start with this one. I find that a lot of tutorials muddy concepts by explaining too many things at once and including unnecessary information about other parts of the writers' projects.
The client queues its messages locally
If you don't want to make lots of modifications to your service to support MsmqBinding, you could just implement the queuing locally. If the WCF service is down, the client puts the message in a local MSMQ queue and then at intervals reads the messages back from that queue and tries sending them to the WCF service again. (If the WCF service is still down, it puts the message back in the queue.)
I'd just send messages straight to the queue and have another process dequeue and send to WCF. That way the client itself just "fires and forgets", if that's okay.
That way you don't have to deal with the hassle of modifying your service, but you still get the benefit. If your message can't go to the WCF service then it goes someplace "safe" where it can even survive the client app terminating or the computer restarting.
Sending and receiving messages in a local queue is much easier to configure. Your client can check to see if the queues exist and create them if needed. This is much easier to work with and the code samples are much more complete and on-point.
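A rough sketch of that local store-and-forward idea using System.Messaging; the queue path is arbitrary and sendToService stands in for the real WCF call:

using System;
using System.Messaging;   // reference System.Messaging.dll

class LocalStoreAndForward
{
    private const string QueuePath = @".\private$\PendingWcfMessages";   // local private queue

    public static void EnsureQueueExists()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
    }

    // Called when the WCF service is unreachable.
    public static void StoreLocally(string payload)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(payload, "pending message");
        }
    }

    // Called from a timer (e.g. every minute) to retry delivery.
    public static void TryForwardPending(Func<string, bool> sendToService)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                Message msg;
                try { msg = queue.Receive(TimeSpan.FromSeconds(1)); }
                catch (MessageQueueException) { break; }   // queue empty (receive timed out)

                var payload = (string)msg.Body;
                if (!sendToService(payload))
                {
                    // Service still down: put the message back and stop for now.
                    queue.Send(payload, msg.Label);
                    break;
                }
            }
        }
    }
}

Note that re-sending a failed message places it at the back of the queue; if strict ordering matters you would want a different retry strategy (for example, peek first and only remove the message after a successful send).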

Any patterns on how to horizontally scale a service process that subscribes to Exchange Web Services?

I'd like to get your ideas as to how I can make my service process scale horizontally by being able to run it across multiple servers. It is a Windows service written in C#, and its purpose in life is to subscribe to our company's Exchange Web Service (EWS) so that it gets notified (via HTTP callback) whenever there's a new incoming email message. The service then gets the email message, processes it, sends a reply if possible, then goes back to sleep and waits for the next incoming email.
If I run it on more than one machine, I can either have all of them subscribing to EWS notification, or only one of them. If I have all of them subscribe, I am kind of hesitant because it might add burden to our MS Exchange infrastructure. Also this will result in all machines receiving and processing the email. I wouldn't want the sender to receive a reply N times (where N is the number of servers in the farm) for a given request message! Now if I have only one machine subscribing to EWS, that exposes me to a single point of failure.
I'd like to get your suggestions on how to address this. I'd love to have multiple servers process incoming messages by distributing email messages among them (perhaps I'll have to do this by making use of a message queueing server). Thanks.
It depends on whether you are scaling for reliability or throughput.
If reliability, you can have a primary and a standby process. The primary process subscribes and processes all emails. The standby process exchanges keep-alive messages with the primary and takes over as primary if the keep-alive times out.
If throughput, then a message queue mechanism, as you suggested, may be a good approach. You could run primary and standby as above, but the primary just pulls emails into a queue and a farm of message processors pulls them off the queue.
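As a very rough sketch of the keep-alive idea (not something from the original answer): a UDP heartbeat, where the port, addresses, interval and timeout are arbitrary illustrative values:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class HeartbeatSketch
{
    private const int Port = 9050;                                     // illustrative port
    private static readonly TimeSpan Interval = TimeSpan.FromSeconds(5);
    private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(15);

    // Primary: while it owns the EWS subscription, it sends "alive" to the standby.
    public static async Task RunPrimaryAsync(CancellationToken ct)
    {
        using (var udp = new UdpClient())
        {
            var standby = new IPEndPoint(IPAddress.Parse("10.0.0.2"), Port);   // standby address (placeholder)
            var payload = Encoding.ASCII.GetBytes("alive");
            while (!ct.IsCancellationRequested)
            {
                await udp.SendAsync(payload, payload.Length, standby);
                await Task.Delay(Interval);
            }
        }
    }

    // Standby: listens for heartbeats; if none arrive within the timeout, it takes over.
    public static async Task RunStandbyAsync(Func<Task> takeOver, CancellationToken ct)
    {
        using (var udp = new UdpClient(Port))
        {
            while (!ct.IsCancellationRequested)
            {
                var receive = udp.ReceiveAsync();
                var completed = await Task.WhenAny(receive, Task.Delay(Timeout));
                if (completed != receive)
                {
                    await takeOver();   // e.g. subscribe to EWS notifications itself
                    return;
                }
            }
        }
    }
}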

How to send email with delay?

I have an ASP.NET MVC application and I need to send an email to each user "X" minutes after they leave the page (the time is different for each user).
How can I do it?
HTTP is stateless, and once the response is sent, execution of the page is finished. You need an application that will send mail even when the website has not been accessed by anybody for a significant time. You can put the mails that need to be sent after an interval of time into a database. A separate application, such as a Windows service, can then poll the database at a fixed interval (say every 30 seconds) and send the mails whose send time has been reached.
The solution I would choose depends on the needed scale and reliability of the system you're building.
If it's a low-scale (i.e. one server with not too many simultaneous users), non-mission-critical system (i.e. it's OK if from time to time some emails are not actually sent, for example if your server crashes), then the solution can be as simple as managing a queue in memory, with a thread that wakes periodically to send emails to the users who recently left the page.
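A minimal sketch of that in-memory approach, assuming a single server; the SendEmail body is a placeholder for your real mail-sending code:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class DelayedEmailQueue
{
    private sealed class PendingEmail
    {
        public string Address;
        public DateTime DueUtc;
    }

    private readonly ConcurrentQueue<PendingEmail> _pending = new ConcurrentQueue<PendingEmail>();

    // Called when the user leaves the page; the delay differs per user.
    public void Schedule(string address, TimeSpan delay)
    {
        _pending.Enqueue(new PendingEmail { Address = address, DueUtc = DateTime.UtcNow + delay });
    }

    // Background loop: wake every few seconds and send whatever is due.
    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var notYetDue = new ConcurrentQueue<PendingEmail>();
            while (_pending.TryDequeue(out var item))
            {
                if (item.DueUtc <= DateTime.UtcNow)
                    SendEmail(item.Address);      // placeholder for the real mail call
                else
                    notYetDue.Enqueue(item);      // keep items that are not due yet
            }
            foreach (var item in notYetDue) _pending.Enqueue(item);

            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }

    private void SendEmail(string address) { /* SmtpClient, SendGrid, etc. */ }
}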
If you need to build something that would be very reliable and potentially have to send a very large number of emails in a short time, and if your system has to scale to a lot of machines, then you would want to build a solution based on a queue in some storage, where as many machines as needed would pick items and handle them. An API such as Windows Azure Queue Service can be a good fit for this if you need a really high scale and reliability.

Selective Reading From a Queue--Custom MSMQ Service, ESB, or Something Else?

Looking for some ideas/pattern to solve a design problem for a system I will be starting work on soon. There is no question that I will need to use some sort of messaging (probably MSMQ) to communicate between certain areas of the system. I don't want to reinvent the wheel, but at the same time I want to make sure I am using the right tool for the job. I have been tinkering with and reading up on NServiceBus, and I'm very impressed with what it does--however I'm not sure it's intended for what I'm trying to achieve.
Here is a (hopefully) very simple and conceptual description of what the system needs to do:
I have a service that clients can send messages to. The service is "Fire and Forget"--the most the client would get back is something that may say success or failure (success being that the message was received).
The handling/processing of each message is non-trivial, and may take up significant system resources. For this reason only X messages can be handled concurrently, where X is a configurable value (based on system specs, etc.). Incoming messages will be stored in queue until it's "their turn" to be handled.
For each client, messages must be handled in order (FIFO). However, some clients may send many messages in succession (thousands or more), for example if they lost connectivity for a period of time. For this reason, messages must be handled in a round-robin fashion across clients--no client is allowed to gorge and no client is allowed to starve. So the system will either have to be able to query the queue for a specific client, or create separate queues per client (automatically, since the clients won't be known at compile time) and pull from them in rotation.
My current thinking is that I really just need to use vanilla MSMQ, create a service to accept messages and write them to one or more queues, then create a process to read messages from the queue(s) and handle/process them. However, the reliability, auditing, scalability, and ease of configuration you get with something like NServiceBus look very appealing.
Is an ESB the wrong tool for the job? Is there some other technology or pattern I should be looking at?
Update
A few clarifications.
Regarding processing messages "in order"--in the context of a single client, the messages absolutely need to be processed in the order they are received. It's complicated to explain the exact reasons why, but this is a firm requirement. I neglected to mention that only one message per client would ever be processed concurrently. So even if there were 10 worker threads and only one client had messages waiting to be processed, only one of those messages would be processed at a time--there would be no worry of a race condition.
I believe this is generally possible with vanilla MSMQ--that you can have a list of messages in a queue and always take the oldest one first.
I also wanted to clarify a use case for the round robin ordering. In this example, I have two clients (A and B) who send messages, and only one worker thread. All queues are empty. Client A has lost connectivity overnight, so at 8am sends 1000 messages to the service. These messages get queued up and the worker thread takes the oldest one and starts processing it. As this first message is being processed, client B sends a message into the service, which gets queued up (as noted, probably in a separate queue). When Client A's first message completes processing, the logic should check whether client B has a message (it's client B's "turn"), and since it finds one, process it next.
If client B hadn't sent a message during that time, the worker would continue processing client A's messages one at a time, always checking after processing to see if other client queues contained waiting messages to ensure that no client was being starved.
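A self-contained sketch of that rotation, using in-memory FIFO queues to keep it short; with MSMQ the per-client queues would be real queues and Enqueue/DequeueNext would wrap Send/Receive, but the turn-taking logic is the same:

using System;
using System.Collections.Generic;

class RoundRobinDispatcher
{
    // One FIFO queue per client, plus a rotation list of client ids.
    private readonly Dictionary<string, Queue<string>> _queues = new Dictionary<string, Queue<string>>();
    private readonly List<string> _clients = new List<string>();
    private int _next;   // index of the client whose turn comes next

    public void Enqueue(string clientId, string message)
    {
        if (!_queues.TryGetValue(clientId, out var queue))
        {
            queue = new Queue<string>();
            _queues[clientId] = queue;
            _clients.Add(clientId);
        }
        queue.Enqueue(message);
    }

    // Returns the next message in round-robin order across clients,
    // or null when every client queue is empty.
    public string DequeueNext(out string clientId)
    {
        for (var i = 0; i < _clients.Count; i++)
        {
            var candidate = _clients[(_next + i) % _clients.Count];
            var queue = _queues[candidate];
            if (queue.Count > 0)
            {
                _next = (_next + i + 1) % _clients.Count;   // the following client gets the next turn
                clientId = candidate;
                return queue.Dequeue();
            }
        }
        clientId = null;
        return null;
    }
}

In the scenario above, once client A's first message finishes, the rotation moves to client B, so B's message is handled next, and the worker then returns to draining A's backlog one message at a time.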
Where I still feel there may be a mismatch between an ESB and this problem is that an ESB is designed to facilitate communication between services; what I am trying to achieve is a combination of messaging/communication and a selective queuing system.
So the system will either have to be able to query the queue for a specific client,
Searching through an MSMQ queue for a message from a particular client using cursors can be inefficient and doesn't scale.
or create separate queues per client (automatically, since the clients won't be known at compile time) and pull from them in rotation.
MSMQ cannot create queues automatically. All messages have to be sent to a known queue first. Your own custom dispatcher service, though, could then create new queues on demand and put copies of the messages in them.
(I avoid saying "move" messages, as you can't do that with application code; you can only read a message and create a new message using the original data. This distinction is important when you are using Source Journaling, for example.)
Cheers
John Breakwell
Using an ESB like NServiceBus seems like a good solution to your problem. But based on your conceptual description, there are some things to consider. Let's go through your requirements step by step, using NServiceBus as a possible ESB solution:
I have a service that clients can send messages to. The service is "Fire and Forget"--the most the client would get back is something that may say success or failure (success being that the message was received).
This is easily done with NServiceBus. You can Bus.Send(Message) from the client. If your client requires an answer, you can use Bus.Return(ErrorCode). You mention that "success being that the message was received". If you use an ESB like NServiceBus, it's up to the messaging platform to deliver the message, so if your Bus.Send doesn't throw an exception, you can be sure that the message has been sent properly. Because of this you probably don't have to send success/failure messages back to the client.
The handling/processing of each message is non-trivial, and may take up significant system resources. For this reason only X messages can be handled concurrently, where X is a configurable value (based on system specs, etc.). Incoming messages will be stored in queue until it's "their turn" to be handled.
When using NServiceBus, you can configure the number of worker threads by setting the "NumberOfWorkerThreads" option. If your server has multiple cores/CPUs, you can use this setting to balance the workload.
For each client, messages must be handled in order (FIFO).
This is something that may cause problems depending on your requirements. ESBs in general don't promise to process messages in order if they have many threads working on them. In the case of NServiceBus, you can send an array of messages from the client into the bus and these will be processed in order. You can also solve some of the in-order messaging problems by using Sagas.
However, some clients may send many messages in succession (thousands or more), for example if they lost connectivity for a period of time
When using an ESB solution, your server doesn't have to be up for the client to work. Clients can still send messages and the server will start processing them as soon as it's back online. Here's a small introduction on this.
For this reason, messages must be handled in a round-robin fashion across clients--no client is allowed to gorge and no client is allowed to starve.
This isn't a problem because you've decided to use messages :)
So the system will either have to be able to query the queue for a specific client, or create separate queues per client (automatically, since the clients won't be known at compile time) and pull from them in rotation.
Could you expand on this? I'm not sure of your design on this one.
