C# IMAP client - multiple connections to mail server - c#

After experimenting with many IMAP APIs, I have decided to write my own (most are memory hogs, some just do not work, out-of-memory exceptions, etc.).
Anyway, I have written some code that works (using the TcpClient object) and it's good so far. However, I want mine to be able to handle multiple requests to the mail server. For example:
Say I get a list of all the UIDs, then I cycle through this list getting what I want (Body, Header, etc) for each message.
The question is, how would I handle multiple requests at the same time? So instead of looping through the UIDs one at a time, I could process 10 at a time.
What would be the best approach here? An array of TCP Clients, each with its own thread?
Thanks.

In general it's recommended that IMAP clients have at most one connection to the server at all times. Not only do additional connections consume valuable resources on the server, but more importantly, the IMAP specification does not guarantee that two connections can select the same mailbox at the same time. Relying on this capability being present may render your client incompatible with some servers.
Instead you should use the protocol as efficiently as possible. Note that many commands can operate on a set or range of UIDs. This allows you to make one single request that specifies every UID, instead of making one request for each UID separately.
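For instance, UID FETCH accepts a comma-separated list of UIDs and colon-separated ranges, so a client can collapse its UID list into one compact set and issue a single command. A small sketch of building that set string (the helper name is mine, not part of any API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class UidSet
{
    // Collapse a list of UIDs into the compact IMAP sequence-set syntax,
    // e.g. [1,2,3,5,7,8] -> "1:3,5,7:8", suitable for "UID FETCH <set> ...".
    public static string Build(IEnumerable<uint> uids)
    {
        var sorted = uids.Distinct().OrderBy(u => u).ToList();
        var parts = new List<string>();
        int i = 0;
        while (i < sorted.Count)
        {
            // Extend j while the UIDs are consecutive, then emit one range.
            int j = i;
            while (j + 1 < sorted.Count && sorted[j + 1] == sorted[j] + 1) j++;
            parts.Add(i == j ? sorted[i].ToString() : $"{sorted[i]}:{sorted[j]}");
            i = j + 1;
        }
        return string.Join(",", parts);
    }
}
```

With the result, a single command like `A001 UID FETCH 1:3,5,7:8 (FLAGS RFC822.SIZE BODY.PEEK[HEADER])` retrieves the data for every message in one round trip.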
Another good practice is to not request more data than what is currently needed. For example say that you have a list of messages. Then don't request detailed information for all of them, only request information for the messages that are currently visible.
I highly recommend that you read RFC 2683, IMAP4 Implementation Recommendations. It covers this among other things.
If you decide to use multiple connections anyway, then a good approach is usually to use asynchronous operations rather than explicit individual threads. Combining this with some kind of run-loop integration is often useful as well; that way your code is called when there is data to read, instead of your code having to poll or explicitly check for it. This is often a good approach even if you're only using a single connection. Keep in mind that according to the IMAP protocol, the server may send you responses even when you have not explicitly asked for them.
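As a sketch of that single-connection asynchronous style (names are illustrative; TLS and response parsing are omitted): one read loop consumes every line the server sends, solicited or not, and hands it to a dispatcher, so unsolicited responses like "* 23 EXISTS" are handled without polling.

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

class ImapReadLoop
{
    // Continuously read lines from the server and dispatch them.
    // The server may send untagged ("* ...") responses at any time,
    // so this loop runs independently of any command we issued.
    public static async Task RunAsync(TcpClient client, Action<string> onLine)
    {
        using var reader = new StreamReader(client.GetStream());
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            onLine(line);   // e.g. route to per-tag or untagged-response handlers
        }
    }
}
```

Commands are written to the same stream; replies come back through `onLine`, keyed by the tag you sent, rather than through a blocking read after each write.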

How to call the web api recursively in .net core?

I have an endpoint which returns a response containing hotels and a flag indicating that more results are available. The client needs to call this endpoint repeatedly until the server returns the more-results flag as false. What is the better way to implement this? Could anyone help me with this?
First Option: Avoid It If Possible
Please try to avoid repeated calls to HTTP APIs, so as to avoid network latency.
This is very important if you want to make multiple calls from a client which is supposed to be responsive.
E.g. if you are developing a web application / WPF application and you want the user to click on something which triggers 10-20 calls to the API, the operation may not complete quickly and may result in a poor user experience.
If it is a background job, then multiple calls would probably make more sense.
Second Option: Optimize HTTP Calls From Client
If you still want to make multiple calls over HTTP, then you will have to optimize the code in such a way that you at least avoid the network latency.
To avoid network latency, you can bring all the data, or a major chunk of the data, to the client side in one call. The client can then iterate over this set of data.
Even if you reduce the number of calls by half, you buy much more time for client processing.
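If the endpoint stays paged, the client-side part is better written as a loop than as literal recursion. A minimal sketch, where the HotelPage shape and the fetchPage delegate are assumptions standing in for the real endpoint (in practice fetchPage would wrap an HttpClient call):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical response shape: one page of hotels plus a "more results" flag.
record HotelPage(IReadOnlyList<string> Hotels, bool MoreResults);

static class HotelClient
{
    // Loop (not recurse) until the server reports no more results.
    // 'fetchPage' stands in for the real HTTP call, e.g.
    // page => httpClient.GetFromJsonAsync<HotelPage>($"/hotels?page={page}").
    public static async Task<List<string>> GetAllAsync(Func<int, Task<HotelPage>> fetchPage)
    {
        var all = new List<string>();
        int page = 0;
        HotelPage result;
        do
        {
            result = await fetchPage(page++);
            all.AddRange(result.Hotels);
        } while (result.MoreResults);
        return all;
    }
}
```

A loop avoids growing the call stack and makes it easy to add a page cap or cancellation token later.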
Another Option
You can also consider whether this can be a disconnected operation: the client sends just one notification to the server, and the server then performs all the iterations.
The client can read the status from somewhere, such as a database, to know whether the operation is complete.
That way your client UI stays responsive and you are able to offload all heavy processing to the server.
You will have to think about which of these options suits the high-level design of your product/project.
Hope I have given enough food for thought (although this may not solve your issue directly).

Invoke NServiceBus Saga as a single awaitable request-response

Consider a web application that implemented every database action except querying (i.e. add, update, remove) as an NServiceBus message, so that whenever a user calls a web API, the back end maps it to an await endpointInstance.Request call to return the response over the same HTTP connection.
The challenge is when a message handler needs to send some other messages and wait for their responses to finish its job. NServiceBus does not allow calling Request inside a message handler.
I ended up using a Saga to implement message handlers that rely on other message handlers' responses. But the problem with a Saga is that I can't send back the result in the same HTTP request, because Sagas use the publish/subscribe pattern.
All our web APIs need to respond within the same HTTP request (the connection should be kept open until the result is received or a timeout occurs).
Is there any clean solution (preferably without using Saga)?
An example scenario:
user calls http://test.com/purchase?itemId=5&paymentId=133
web server calls await endpointInstance.Request<PurchaseResult>(new PurchaseMessage(itemId, paymentId));
PurchaseMessage handler should call await endpointInstance.Request<AddPaymentResult>(new AddPaymentMessage(paymentId));
if the AddPaymentResult was successful, store the purchase details in the database and return true as the PurchaseResult; otherwise return false
You're trying to achieve something that we (at Particular Software) are trying to actively prevent. Let me explain.
With Remote Procedure Calls (RPC) you call another component out of process. That is what makes the procedure call 'remote'. Whereas with regular programming you do everything in-process and it is blazing fast, with RPC you have the overhead of serialization, latency and more. Basically, you have to deal with the fallacies of distributed computing.
Still, people do it for various reasons. Sometimes because you want to use a WebAPI (or 'old fashioned' web service) because it offers the functionality you don't want to develop. Oldest example in the book is searching for an address by postal code. Or deducting money from someone's bank account. If you're building a CRM, you can use these remote components. These days a lot of people build distributed monoliths because they are taught at conferences that this is a good thing. In an architecture diagram, it looks really nice, but there's still temporal coupling that can provide a lot of headaches.
Some of these headaches come from the fact that you're trying to do stuff in an atomic action. Back in the day, with in-process calling of code/classes/etc., this was easy and fast. Until you hit limitations, like tons of locks on a database.
A solution to this is asynchronous communication. You send some information via fire-and-forget. This solves the temporal coupling. Instead of having a database that is getting dozens and dozens of requests to update data, with your website grinding to a halt as a result, you have various options to make sure this doesn't happen. This is a really good thing, because instead of a single atomic operation, you have various smaller operations and many ways to distribute work, scale your system, etc.
It also brings additional challenges, because not everyone is able to work with fire-and-forget. Some systems that were already built try to introduce asynchronous communication via messaging (and hopefully NServiceBus). Some parts can work flawlessly with this. But other parts can't; mainly the user interface (UI), because it was built to get an immediate result. So when you send a message from the UI, you expect a result!
With NServiceBus we've built a package called "Client-Side Callbacks" to make exactly this a possibility. We highly recommend our customers not to use it, except for this specific scenario that I just described. It is much better to migrate your entire UI to be able to deal with the fact that you don't receive an immediate answer, but we understand this is so much work, that not many will be able to achieve this.
However, once that first message was sent and the UI received a result, there is no need to use callbacks anymore. As a result I'd like to propose this scenario:
user calls http://test.com/purchase?itemId=5&paymentId=133
web server calls await endpointInstance.Request<PurchaseResult>();
PurchaseMessage handler retrieves info it needs and sends or publishes a message to (an)other component(s) and then replies back to the web server with an answer.
The next handler works with the send/published message and continues the process
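A rough sketch of what that handler could look like against NServiceBus's IHandleMessages API (the message and result types here are illustrative, and this assumes the Callbacks package on the web-server side so that Request<PurchaseResult> completes when the handler replies):

```csharp
using System.Threading.Tasks;

// Illustrative types; in a real system these live in a shared messages assembly.
public class PurchaseMessage { public int ItemId; public int PaymentId; }
public class AddPaymentMessage { public int PaymentId; }
public class PurchaseResult { public bool Accepted; }

public class PurchaseHandler : NServiceBus.IHandleMessages<PurchaseMessage>
{
    public async Task Handle(PurchaseMessage message, NServiceBus.IMessageHandlerContext context)
    {
        // 1. Do only the part the HTTP caller must wait for.
        bool accepted = message.ItemId > 0;   // placeholder validation

        // 2. Hand the rest of the process to the next handler asynchronously.
        await context.Send(new AddPaymentMessage { PaymentId = message.PaymentId });

        // 3. Reply immediately so the web server's Request<T> completes
        //    and the HTTP connection can close.
        await context.Reply(new PurchaseResult { Accepted = accepted });
    }
}
```

The key design point is that the reply happens before the downstream payment work finishes; the UI gets an acknowledgement, and the remaining steps continue as ordinary messages.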
Let us know if you need more information. You can always contact us by sending an email to support@particular.net

stackexchange redis cache performance

I have to add N (independent) items frequently to the Redis cache using StackExchange.Redis in C#, each with a different expiration time, so that there is minimum time spent at the client side and minimum blocking and cost at the server side. The Redis server will receive hundreds of get requests per second, so I don't want to affect the get time at all.
I have read the documentation here and the answer here. I could not find a single method that does this operation. Considering the different options:
Using a transaction - this will block operations at the server side, so this should not be the right solution.
Using batching - this will block operations at the client side till the whole batch is complete. This should not be the right solution either.
Using pipelining - this will not block operations at the client side or the server side, and it sends fewer packets than N separate requests would, but buffering the requests may increase memory consumption at the client side, which may induce latency.
Using fire and forget - this also will not block operations at the client side or the server side, but it sends more packets than pipelining, which may consume more network bandwidth, with no extra memory consumption at the client side.
Which should be the best approach?
I assumed 'competing operations' means that two inserts, or one get and one insert, cannot go together even though they may be accessing different keys. Am I correct in this? Otherwise, what does it mean?
Redis is single-threaded when it comes to either read or write on a database.
What's the best solution in your case? Who knows, it might depend on a lot of variables and each use case should be analyzed separately to implement the right solution.
Redis MULTI can't be avoided unless you want to risk corrupting your data if something goes wrong in your application layer. Actually, if you want to avoid making many requests to Redis, you should use Lua scripts instead.
On the other hand, the point of Redis is to make many operations, but be sure that they are as small as possible, because of Redis's single-threaded nature. Right, it's blazing fast, unless you execute an operation that takes too much time.
In summary, I wouldn't be too concerned about sending many requests, as it's an in-memory database and works at light speed. Also, consider the wonders of Redis Cluster (i.e. sharding) to be able to optimize your scenario.
Finally, I would take a look at this Redis tutorial: Redis latency problems troubleshooting
You should add Lua scripting to your list of options. See EVAL.
Also, consider the data structures that you will use. For example, you can use MSET to send multiple values to Redis in one hop.
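One caveat worth noting: MSET cannot attach per-key expirations, so when every item has its own TTL the usual answer is individual SETs that the StackExchange.Redis multiplexer pipelines for you (or a Lua script). A sketch, with key/value names made up; issuing the async calls without awaiting each one lets them share packets on the single connection:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class CacheWriter
{
    // Write N items, each with its own expiry, awaiting only the final batch.
    public static Task AddAllAsync(IDatabase db, (string Key, string Value, TimeSpan Ttl)[] items)
    {
        var tasks = new Task[items.Length];
        for (int i = 0; i < items.Length; i++)
        {
            // Not awaited individually: the multiplexer pipelines all of
            // these onto one connection without blocking other callers.
            tasks[i] = db.StringSetAsync(items[i].Key, items[i].Value, items[i].Ttl);
        }
        return Task.WhenAll(tasks);
        // For pure fire-and-forget, pass flags: CommandFlags.FireAndForget
        // to StringSetAsync and skip awaiting (no per-item result is available).
    }
}
```

Because nothing here blocks the connection the way MULTI does, concurrent GET traffic is unaffected apart from ordinary server load.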

Get a variable from one program into another

I'm not even sure how to ask this question, but I'll give it a shot.
I have a program in C# which reads in values from sensors on a manufacturing line that are indicative of the line's health. These values update every 500 milliseconds. I have four lines that this is done for. I would like to write an "overview" program which will be able to access these values over the network to give a good summary of how the factory is doing. My question is: how do I get the values from the C# programs on the lines to the C# overview program in real time?
If my question doesn't make much sense, let me know and I'll try to rephrase it.
Thanks!
You have several options:
MSMQ
Write the messages in MSMQ (Microsoft Message Queuing). This is an (optionally) persistent and fast store for transporting messages between machines.
Since you say that you need the messages in the other app in near real time, it makes sense to use MSMQ, because you do not want to write logic in that app for handling large amounts of incoming messages.
Keep MSMQ in the middle and take out what you need, and, most importantly, when you can.
WCF
The other app could expose a WCF service which can be called by your realtime app each time there's data available. The endpoint could be over net.tcp, meaning low overhead, especially if you send small messages.
Other options include what has been said before: database, file, etc. So you can make your choice between a wide variety of options.
It depends on a number of things, I would say. First of all, is it just the last value of each line that is interesting for the 'overview' application, or do you need multiple values to determine line health, or do you perhaps want to have a history of values?
If you're only interested in the last value, I would directly communicate this value to the overview app. As suggested by others, you have numerous possibilities here:
Raw TCP using TcpClient (may be a bit too low-level).
Expose an HTTP endpoint on the overview application (maybe it's a web application) and post new values to this endpoint.
Use WCF to expose some endpoint (named pipes, net.tcp, http, etc.) on the overview application and call this endpoint from each client application.
Use MSMQ to have each client enqueue messages that are then picked up by the overview app (also directly supported by WCF).
If you need some history of values, or you need multiple values to determine line health, I would go with a database solution. Then again you have a choice: does each client write to the database, or does each client post to the overview app (using any of the communication means described above) with the overview app writing to the database?
Without knowing any more constraints for your situation, it's hard to decide between any of these.
You can use named pipes (see http://msdn.microsoft.com/en-us/library/bb546085.aspx) to have a fast way to communicate between two processes.
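As a minimal sketch of that approach with System.IO.Pipes (the pipe name and one-message flow are illustrative; a real overview app would keep the server stream open and loop over incoming readings):

```csharp
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

static class PipeDemo
{
    // Overview side: wait for one line app to connect and read one reading.
    public static async Task<string> ReceiveOneAsync(string pipeName)
    {
        using var server = new NamedPipeServerStream(pipeName, PipeDirection.In);
        await server.WaitForConnectionAsync();
        using var reader = new StreamReader(server);
        return await reader.ReadLineAsync();
    }

    // Line-app side: connect and push the latest sensor value.
    public static async Task SendAsync(string pipeName, string message)
    {
        using var client = new NamedPipeClientStream(".", pipeName, PipeDirection.Out);
        await client.ConnectAsync();
        using var writer = new StreamWriter(client) { AutoFlush = true };
        await writer.WriteLineAsync(message);
    }
}
```

Note that plain named pipes are one server to one client per instance; for four line apps you would create one server instance per line (or use MSMQ/WCF as suggested above).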
A database. Put your values into a database and the other app then pulls them out of that same database. This is a very common solution to this problem and opens up worlds of new scenarios.
see: relational database

Finding or building an inter-process broadcast communication channel

So we have this somewhat unusual need in our product. We have numerous processes running on the local host and need to construct a means of communication between them. The difficulty is that ...
There is no 'server' or master process
Messages will be broadcast to all listening nodes
Nodes are all Windows processes, but may be C++ or C#
Nodes will be running in both 32-bit and 64-bit simultaneously
Any node can jump in/out of the conversation at any time
A process abnormally terminating should not adversely affect other nodes
A process responding slowly should also not adversely affect other nodes
A node does not need to be 'listening' to broadcast a message
A few more important details...
The 'messages' we need to send are trivial in nature. A name of the type of message and a single string argument would suffice.
The communications are not necessarily secure and do not need to provide any means of authentication or access control; however, we want to group communications by Windows log-on session. Perhaps of interest here is that a non-elevated process should be able to interact with an elevated process and vice versa.
My first question: is there an existing open-source library, or something that can be used to fulfill this with little effort? As of now I haven't been able to find anything :(
If a library doesn't exist for this, then... what technologies would you use to solve this problem? Sockets, named pipes, memory-mapped files, event handles? It seems like connection-based transports (sockets/pipes) would be a bad idea in a fully connected graph, since n nodes require n(n-1)/2 connections. Using event handles and some form of shared storage seems the most plausible solution right now...
Updates
Does it have to be reliable and guaranteed? Yes and no... Let's say that if I'm listening, and I'm responding in a reasonable time, then I should always get the message.
What are the typical message sizes? Less than 100 bytes including the message identifier and argument(s). These are small.
What message rate are we talking about? Low throughput is acceptable; 10 per second would be a lot, and average usage would be around 1 per minute.
What are the number of processes involved? I'd like it to handle between 0 and 50, with the average being between 5 and 10.
I don't know of anything that already exists, but you should be able to build something with a combination of:
Memory mapped files
Events
Mutex
Semaphore
This can be built in such a way that no "master" process is required, since all of those can be created as named objects that are then managed by the OS and not destroyed until the last client closes its handle to them. The basic idea is that the first process to start up creates the objects you need, and then all other processes connect to those. If the first process shuts down, the objects remain as long as at least one other process is maintaining a handle to them.
The memory mapped file is used to share memory among the processes. The mutex provides synchronization to prevent simultaneous updates. If you want to allow multiple readers or one writer, you can build something like a reader/writer lock using a couple of mutexes and a semaphore (see Is there a global named reader/writer lock?). And events are used to notify everybody when new messages are posted.
I've waved my hand over some significant technical detail. For example, knowing when to reset the event is kind of tough. You could instead have each app poll for updates.
But going this route will provide a connectionless way of sharing information. It doesn't require that a "server" process is always running.
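A rough sketch of those pieces wired together, using .NET's named kernel objects (which are Windows-specific). The object names and the single-slot message layout are assumptions, and, as noted above, event resetting and message sequencing are hand-waved here:

```csharp
using System.IO.MemoryMappedFiles;
using System.Text;
using System.Threading;

// First process to run creates the named objects; later ones open the
// same names. One length-prefixed message lives at the start of the map.
class BroadcastSlot
{
    readonly MemoryMappedFile _map;
    readonly Mutex _mutex;
    readonly EventWaitHandle _posted;

    public BroadcastSlot(string name)
    {
        _map = MemoryMappedFile.CreateOrOpen(name + ".map", 4096);
        _mutex = new Mutex(false, name + ".mutex");
        _posted = new EventWaitHandle(false, EventResetMode.ManualReset, name + ".posted");
    }

    public void Post(string message)
    {
        _mutex.WaitOne();
        try
        {
            using var view = _map.CreateViewAccessor();
            var bytes = Encoding.UTF8.GetBytes(message);
            view.Write(0, bytes.Length);               // length prefix
            view.WriteArray(4, bytes, 0, bytes.Length); // payload
            _posted.Set();                              // wake every waiting reader
        }
        finally { _mutex.ReleaseMutex(); }
    }

    public string Read()
    {
        _posted.WaitOne();
        _mutex.WaitOne();
        try
        {
            using var view = _map.CreateViewAccessor();
            int len = view.ReadInt32(0);
            var bytes = new byte[len];
            view.ReadArray(4, bytes, 0, len);
            return Encoding.UTF8.GetString(bytes);
        }
        finally { _mutex.ReleaseMutex(); }
    }
}
```

For mixed C++/C# nodes, the same named objects are reachable from C++ via CreateFileMapping, CreateMutex and CreateEvent, since .NET maps onto those Win32 primitives.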
For implementation, I would suggest implementing it in C++ and let the C# programs call it through P/Invoke. Or perhaps in C# and let the C++ apps call it through COM interop. That's assuming, of course, that your C++ apps are native rather than C++/CLI.
I've never tried this, but in theory it should work. As I mentioned in my comment, use a UDP port on the loopback device; then all the processes can read from and write to this socket. As you say, the messages are small, so each should fit into a single packet. Maybe you can look at something like Google's Protocol Buffers to generate the structures, or simply memcpy the structure into the packet to send and cast it back at the other end. Given it's all on the local host, you don't have any alignment or network byte-order issues to worry about. To support different types of messages, ensure a common header which can be checked for the type, so that you can stay backward compatible.
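A minimal sketch of the loopback-UDP idea, showing the type-plus-argument wire format from the question (the framing is my own; for true one-to-many delivery you would bind with SocketOptionName.ReuseAddress and join a multicast group on the loopback interface, which this sketch omits):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

// Each node owns a UDP socket on loopback; a message is a small datagram
// with a newline-separated type name and single string argument.
class UdpNode
{
    public readonly UdpClient Socket = new UdpClient(new IPEndPoint(IPAddress.Loopback, 0));

    public int Port => ((IPEndPoint)Socket.Client.LocalEndPoint).Port;

    public void Send(int targetPort, string type, string payload)
    {
        var data = Encoding.UTF8.GetBytes(type + "\n" + payload);
        Socket.Send(data, data.Length, new IPEndPoint(IPAddress.Loopback, targetPort));
    }

    public (string Type, string Payload) Receive()
    {
        var from = new IPEndPoint(IPAddress.Any, 0);
        var parts = Encoding.UTF8.GetString(Socket.Receive(ref from)).Split('\n', 2);
        return (parts[0], parts[1]);
    }
}
```

Putting the type name first means old nodes can skip message types they do not recognize, which gives the backward compatibility mentioned above.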
2cents...
I think one more important consideration is performance: what message rate are we talking about, and how many processes?
Either way you are relying on a "master" that provides the communication channel, be it a custom service or something system-provided (pipes, message queues and such).
If you don't need to keep track of and query past messages, I do think you should consider a dead simple service that opens a named pipe, allowing all other processes to either read from or write to it as pipe clients. If I am not mistaken, it ticks all the items on your list.
What you're looking for is Mailslots!
See CreateMailslot:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365147(v=vs.85).aspx
