Thread-lock by ParameterInstance.lockObject, does it work? - c#

I've googled far and wide and found no answer to this. I am programming my own little Tcp library to make it easy for myself. On the server I have a 'ConnectedClient' object that has a socket and a network stream. On the server static class I have a Send function that sends a length-prefixed stream. I want the stream to be thread safe, but for each client. Would this work for that?
void Send(ConnectedClient client, ...) // rest of parameters not relevant
{
    lock (client.lockObject)
    {
        // Writing to the stream, thread-safely I hope...
    }
}
I hope I made myself clear enough, if not, just ask for more details.

It looks like you are writing some kind of multiplexer. Indeed, that should work fine as long as you write an entire payload (and length-prefix) within a single lock, and as long as the lockObject is representative of the mutual-exclusive resource (i.e. must be a common lockObject for all clients that we don't want to collide).
Perhaps the trickier question is: are you going to read the reply within that method (success/return-value/critical-fail), or are you going to read the reply asynchronously, and let the next writer write to the stream while the first message is flying...
For comparison, when writing BookSleeve (a redis multiplexer, full source available if you want some reference code), I chose a different strategy: one dedicated thread to do all the writing to the stream, with all the callers simply appending to a thread-safe queue; that way, even if there is a backlog of work, the callers aren't delayed.
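That strategy can be sketched roughly like this (an illustration only, not BookSleeve's actual code; the length-prefix framing matches the question's description):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Sketch of the dedicated-writer pattern: callers append to a
// thread-safe queue, and a single thread owns the stream and does
// all the writing, so payloads can never interleave.
class OutboundWriter : IDisposable
{
    private readonly BlockingCollection<byte[]> queue = new BlockingCollection<byte[]>();
    private readonly Stream stream;
    private readonly Thread worker;

    public OutboundWriter(Stream stream)
    {
        this.stream = stream;
        worker = new Thread(WriteLoop) { IsBackground = true };
        worker.Start();
    }

    // Callers never touch the stream directly; even with a backlog,
    // Enqueue returns immediately.
    public void Enqueue(byte[] payload) => queue.Add(payload);

    private void WriteLoop()
    {
        foreach (var payload in queue.GetConsumingEnumerable())
        {
            // Length prefix plus payload written as one unit.
            var prefix = BitConverter.GetBytes(payload.Length);
            stream.Write(prefix, 0, prefix.Length);
            stream.Write(payload, 0, payload.Length);
            stream.Flush();
        }
    }

    public void Dispose()
    {
        queue.CompleteAdding(); // let the worker drain and exit
        worker.Join();
    }
}
```

Whether this beats a per-client lock depends on whether callers need to wait for replies anyway; the queue mainly helps when there can be a backlog.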

Related

How to make HttpResponse.Flush() async?

How to implement an asynchronous call to the HttpResponse.Flush() method using .NET 4.0 & VS2013?
I tried delegate:
var caller = new AsyncFlush(context.Response.Flush);
var result1 = caller.BeginInvoke(null, null);
caller.EndInvoke(result1);
then task:
Task.Factory.StartNew(() => context.Response.Flush()).Start();
and finally thread:
new Thread(new ThreadStart(() => context.Response.Flush())).Start();
But each case seems to freeze my Internet Explorer when flushing large files (1GB+). Any ideas?
Regards.
Whether you flush the response or not does not matter. It also does not matter what chunk size you use when writing to the response object. Client and server communicate over the TCP protocol, which does not preserve or communicate chunk sizes in any way. The client is never impacted by the way the server wrote. The client can't even tell the difference if it wanted to. It's an implementation detail of the server.
The reason why your browser "freezes" is unknown but it is not the way you flush data. Browsers have no trouble downloading arbitrarily sized files.
Note that all three of the code samples you posted are either slightly harmful and pointless, or do not work at all. You need to throw this away. Look elsewhere for the reason for the freeze.
Your approach for creating an async wrapper is fine. But here are a few things you should know.
Response.Flush() forces the complete buffer to be sent to the client. So try to avoid sending the complete 1 GB+ of data to the client at once; processing that huge buffer may tie the client up and end up hanging it.
Rather than sending the huge buffer to the client in one go, send the stream in chunks and call Flush for each chunk, so that the client doesn't hang while processing your request.
See this KB for writing a huge file in chunks to the response using Response.Flush multiple times.
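The chunked pattern from that KB article looks roughly like this (the chunk size is an arbitrary choice and error handling is omitted):

```csharp
using System.IO;
using System.Web;

static class ChunkedDownload
{
    // Stream a large file to the response in small buffers, flushing
    // each one, instead of buffering the whole file in memory.
    public static void WriteFileInChunks(HttpResponse response, string path)
    {
        const int chunkSize = 64 * 1024; // 64 KB per write, an assumption
        var buffer = new byte[chunkSize];
        response.BufferOutput = false;   // don't accumulate the whole body

        using (var file = File.OpenRead(path))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                if (!response.IsClientConnected)
                    break; // stop if the browser went away
                response.OutputStream.Write(buffer, 0, read);
                response.Flush(); // push this chunk to the client now
            }
        }
    }
}
```

The `IsClientConnected` check matters for 1 GB+ downloads: without it the server keeps reading and writing long after the user has cancelled.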

C# game client with asynchronous sockets

I'm developing a small online game in C#. Currently I am using simple sync TCP sockets. But now (because this is some kind of "learning project") I want to convert to asynchronous sockets. In the client I have the method: byte[] SendAndReceive(Opcode op, byte[] data).
But when I use async sockets this isn't possible anymore.
For example my MapManager class first checks if a map is locally in a folder (checksum) and if it isn't, the map will be downloaded from the server.
So my question:
Is there any good way to send some data and get the answer without saving the received data to some kind of buffer and polling till this buffer isn't null?
Check out IO Completion Ports and the SocketAsyncEventArgs that goes with it. It raises events when data has been transferred, but you still need a buffer. Just no polling. It's fast and pretty efficient.
http://www.codeproject.com/Articles/83102/C-SocketAsyncEventArgs-High-Performance-Socket-Cod
and another example on MSDN
http://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.aspx
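A minimal receive loop with SocketAsyncEventArgs might look like this (buffer size and the `handleMessage` callback are placeholders for illustration):

```csharp
using System;
using System.Net.Sockets;

// Event-driven receiving: the Completed event fires when data arrives,
// so there is no polling of a buffer.
static class AsyncReceiver
{
    public static void StartReceiving(Socket socket, Action<byte[], int> handleMessage)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[8192], 0, 8192);
        args.Completed += (s, e) => OnReceive(socket, e, handleMessage);

        // ReceiveAsync returns false if it completed synchronously,
        // in which case the Completed event will NOT fire.
        if (!socket.ReceiveAsync(args))
            OnReceive(socket, args, handleMessage);
    }

    private static void OnReceive(Socket socket, SocketAsyncEventArgs e,
                                  Action<byte[], int> handleMessage)
    {
        if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
        {
            handleMessage(e.Buffer, e.BytesTransferred);
            // Re-issue the receive to keep the loop going. Note: in a
            // real client you'd avoid unbounded recursion on the
            // synchronous-completion path.
            if (!socket.ReceiveAsync(e))
                OnReceive(socket, e, handleMessage);
        }
    }
}
```

To get request/response semantics like the old `SendAndReceive`, the usual trick is to tag each request with an id and complete a pending callback when the matching reply arrives, rather than polling.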
A code example of what you have would help, but I'd suggest using a new thread for each socket connection with a thread manager. Let me know if that makes sense or if it's applicable here. :)

Can I rely on DataAvailable for an SSL-wrapped networkstream?

I'm dealing with an application that does a lot of asynchronous reading. To improve performance, I'd like to directly do a synchronous call to Read from an SslStream provided that the call does not block.
The SslStream itself does not provide a DataAvailable property like the underlying NetworkStream does.
So, given that I know it's a wrapped network stream being read, will DataAvailable being true guarantee that the call to SslStream.Read won't block?
Like this:
public void Read(NetworkStream netStream, SslStream sslStream)
{
    // given that netStream is the inner stream of sslStream
    if (netStream.DataAvailable)
    {
        // Will not block
        sslStream.Read(...);
    }
    else
    {
        // Would block
        sslStream.Read(...);
    }
}
The SslStream is already authenticated and ready to go. I'm not sure if there is any additional overhead apart from the encrypting/decrypting. I assume the answer depends on whether the SslStream requires a read of more than one byte from the underlying stream in order to produce one decrypted byte.
No, it doesn't guarantee that, because there are SSL records at the next layer down, and you may not have received an entire one yet; cryptographically speaking, you can't do anything until you have it all, as you first have to check the MAC for integrity purposes.
But more to the point, I question the whole strategy. Just issue the reads as you need them in normal code: don't try to guess which mode will work best in each situation. The SSL overhead will probably swamp the sync/async difference, and the network bandwidth limitation will swamp them both.
It depends on the cipher in use: endpoints using RC4 or another stream cipher are more likely to be decryptable one byte at a time, but no guarantees. An endpoint configured for DES or other block ciphers will wait until a full block is available.
You could do some screwy stuff with a peekable intermediate buffering stream and try to make sure you've got a reasonable block size before making a blocking read, but that's nasty.
If you absolutely can't block, I'd stick to BeginRead and a completion delegate.
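The BeginRead approach might look like this (the `onData` callback and buffer size are placeholders, and error handling is omitted):

```csharp
using System;
using System.Net.Security;

// Non-blocking alternative: BeginRead with a completion delegate.
// No thread sits blocked waiting for a full SSL record to decrypt;
// the delegate runs once decrypted bytes are actually available.
static class SslReader
{
    public static void ReadLoop(SslStream ssl, Action<byte[], int> onData)
    {
        var buffer = new byte[4096];
        ssl.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int read = ssl.EndRead(ar); // 0 means the stream was closed
            if (read > 0)
            {
                onData(buffer, read);
                ReadLoop(ssl, onData); // issue the next read
            }
        }, null);
    }
}
```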

Serial Comms programming structure in c# / net /

I'm an embedded programmer trying to do a little bit of coding for a communications app and need a quick start guide on the best / easiest way to do something.
I'm successfully sending serial data packets but need to implement some form of send/response protocol to avoid overflow on the target system and to ensure that each packet was received OK.
Right now - I have all the transmit code under a button click and it sends the whole lot without any control.
What's the best way to structure this code, i.e. sending some packets, waiting for a response, sending more, etc., until it's all done, then carrying on with the main program?
I've not used threads or callbacks or suchlike in this environment before but will learn; I just need a pointer to the most straightforward way to do it.
Thanks
Rob
The .NET SerialPort uses buffers; learn to work with them.
Sending packets that are (far) smaller than the send buffer can be done without threading.
Receiving can be done via the DataReceived event, but beware that it is called from another thread. You might as well start your own thread and use blocking reads from there.
The best approach depends on what your 'packets' and protocol look like.
I believe I have long experience with serial comms, both MCU- and PC-based.
I strongly advise against the single-thread-based solution; it is very convenient for quick testing, but absolutely out of the question for final releases.
You may choose among several patterns, but they are mostly shaped around a dedicated thread for the comms process and a finite state machine to parse the protocol (during receiving).
The previous answers give you an idea of how to build a simple program, but the right choice may depend on the protocol specification, target device, scope of the application, etc.
There are of course different ways.
I will describe a thread-based and an async-operation-based way:
If you don't use threads, your app will block as long as the operation is running. This is not what a user expects today. Since you are talking about a series of send and receive commands, I would recommend running the protocol on a thread and then waiting for it to finish. You might also provide an Abort button if necessary. Set the ReadTimeout values, and at every receive be ready to catch the exception! An introduction to creating such a worker thread is here.
If you prefer, use async send/receive functions instead of a thread (e.g. NetworkStream.BeginRead etc.). But this is more difficult because you have to manage state between the calls: I recommend using a finite state machine then. In essence you create an enumeration (e.g. ProtocolState) and change the state whenever an operation has completed. You can then simply write a function that performs the next step of the protocol with a simple switch/case statement. Since you are working with a remote entity (in your case the serial target system), you always have to consider that the device may not be working, or may stop working during the protocol. Handle this with a timeout timer (e.g. set to 2000 ms), started after sending each command (assuming each command gets a reply in your protocol). Stop it when the reply is received successfully, or abort on timeout.
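The finite-state-machine idea can be sketched like this (the STX/ETX framing is a made-up example protocol, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

// FSM parser for a hypothetical framing: STX (0x02), payload, ETX (0x03).
// Feed it bytes as they arrive (e.g. from DataReceived); the switch on
// the state enum is the "next step of the protocol" described above.
enum ProtocolState { WaitingForStx, ReadingPayload }

class FrameParser
{
    private ProtocolState state = ProtocolState.WaitingForStx;
    private readonly List<byte> payload = new List<byte>();

    // Returns a completed frame, or null if more bytes are needed.
    public byte[] Feed(byte b)
    {
        switch (state)
        {
            case ProtocolState.WaitingForStx:
                if (b == 0x02) { payload.Clear(); state = ProtocolState.ReadingPayload; }
                return null;

            case ProtocolState.ReadingPayload:
                if (b == 0x03)
                {
                    state = ProtocolState.WaitingForStx;
                    return payload.ToArray();
                }
                payload.Add(b);
                return null;

            default:
                return null;
        }
    }
}
```

A real protocol would add states for length fields, checksums, and escape bytes, plus the timeout timer described above, but the shape stays the same: one enum, one switch, one transition per received byte.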
You could also implement low-level handshaking on the serial port: set the serial port's Handshake property to RTS/CTS or XON/XOFF.
Otherwise (or in addition), use a background worker thread. For simple threads, I like a Monitor.Wait/Pulse mechanism for managing the thread.
I have some code that does read-only serial communications in a thread; email me and I'll be happy to send it to you.
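The Monitor.Wait/Pulse mechanism mentioned above might be sketched like this (the packet queue and the loop body are illustrative assumptions, not the code offered by email):

```csharp
using System.Collections.Generic;
using System.Threading;

// Background worker managed with Monitor.Wait/Pulse: the worker sleeps
// holding no CPU until a caller pulses it with new work.
class SerialWorker
{
    private readonly object gate = new object();
    private readonly Queue<byte[]> pending = new Queue<byte[]>();

    public void Start() => new Thread(Loop) { IsBackground = true }.Start();

    public void Post(byte[] packet)
    {
        lock (gate)
        {
            pending.Enqueue(packet);
            Monitor.Pulse(gate); // wake the worker
        }
    }

    private void Loop()
    {
        while (true)
        {
            byte[] packet;
            lock (gate)
            {
                while (pending.Count == 0)
                    Monitor.Wait(gate); // releases the lock while waiting
                packet = pending.Dequeue();
            }
            // Send the packet on the serial port here, wait for the
            // device's response, retry or report errors, etc.
        }
    }
}
```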
I wasn't sure from your question if you were designing both the PC and embedded sides of the communication link, if you are you might find this SO question interesting.

WCF service with XML based storage. Concurrency issues?

I programmed a simple WCF service that stores messages sent by users and sends these messages to the intended user when asked for. For now, the persistence is implemented by creating username.xml files with the following structure:
<messages recipient="username">
  <message sender="otheruser">
    ...
  </message>
</messages>
It is possible for more than one user to send a message to the same recipient at the same time, possibly causing the xml file to be updated concurrently. The WCF service is currently implemented with basicHttp binding, without any provisions for concurrent access.
What concurrency risks are there? How should I deal with them? A ReadWrite lock on the xml file being accessed?
Currently the service runs with 5 users at the most, this may grow up to 50, but no more.
EDIT:
As stated above, the client will instantiate a new service class with every call it makes (InstanceContextMode is PerCall, ConcurrencyMode irrelevant). This is inherent to the use of basicHttpBinding with default settings on the service.
The code below:
public class SomeWCFService : ISomeServiceContract
{
    ClassThatTriesToHoldSomeInfo useless;

    public SomeWCFService()
    {
        useless = new ClassThatTriesToHoldSomeInfo();
    }

    #region Implementation of ISomeServiceContract
    public void IncrementUseless()
    {
        useless.Counter++;
    }
    #endregion
}
behaves is if it were written:
public class SomeWCFService : ISomeServiceContract
{
    ClassThatTriesToHoldSomeInfo useless;

    public SomeWCFService()
    { }

    #region Implementation of ISomeServiceContract
    public void IncrementUseless()
    {
        useless = new ClassThatTriesToHoldSomeInfo();
        useless.Counter++;
    }
    #endregion
}
So concurrency is never an issue until you try to access some externally stored data as in a database or in a file.
The downside is that you cannot store any data between method calls of the service unless you store it externally.
If your WCF service is a singleton service and guaranteed to be that way, then you don't need to do anything. Since WCF will allow only one request at a time to be processed, concurrent access to the username files is not an issue unless the operation that serves that request spawns multiple threads that access the same file. However, as you can imagine, a singleton service is not very scalable and not something you want in your case I assume.
If your WCF service is not a singleton, then concurrent access to the same user file is a very realistic scenario and you must definitely address it. Multiple instances of your service may concurrently attempt to access the same file to update it, and you will get a 'cannot access file because it is being used by another process' exception or something like that. So this means that you need to synchronize access to user files. You can use a monitor (lock), ReaderWriterLockSlim, etc. However, you want this lock to operate on a per-file basis. You don't want to lock updates on other files out while an update on a different file is going on. So you will need to maintain a lock object per file and lock on that object, e.g.:
// when a new user file is added, create a new sync object
fileLockDictionary.Add("user1file.xml", new object());

// when updating a file
lock (fileLockDictionary["user1file.xml"])
{
    // update file.
}
Note that that dictionary is also a shared resource that will require synchronized access.
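One way to sidestep synchronizing the dictionary itself (on .NET 4 or later, an assumption about your target framework) is ConcurrentDictionary, whose GetOrAdd is thread-safe and returns the same lock object for the same file name every time:

```csharp
using System.Collections.Concurrent;

// Per-file lock objects without locking the dictionary itself.
static class FileLocks
{
    private static readonly ConcurrentDictionary<string, object> locks =
        new ConcurrentDictionary<string, object>();

    public static object For(string fileName) =>
        locks.GetOrAdd(fileName, _ => new object());
}

// usage:
// lock (FileLocks.For("user1file.xml")) { /* update the file */ }
```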
Now, dealing with concurrency and ensuring synchronized access to shared resources at the appropriate granularity is very hard, not only in terms of coming up with the right solution but also in terms of debugging and maintaining it. Debugging a multi-threaded application is no fun, and problems are hard to reproduce. Sometimes you don't have an option, but sometimes you do. So, is there any particular reason why you're not using or considering a database-based solution? A database will handle concurrency for you; you don't need to do anything. If you are worried about the cost of purchasing a database, there are very good, proven open-source databases out there such as MySQL and PostgreSQL that won't cost you anything.
Another problem with the XML-file-based approach is that updates will be costly. You will load the XML from a user file into memory, create a message element, and save it back to the file. As that XML grows, the process will take longer, require more memory, etc. It will also hurt your scalability because the update process will hold the lock longer. Plus, I/O is expensive. There are also benefits that come with a database-based solution: transactions, backups, being able to easily query your data, replication, mirroring, etc.
I don't know your requirements and constraints but I do think that file-based solution will be problematic going forward.
You need to read the file before adding to it and writing to disk, so you do have a (fairly small) risk of attempting two overlapping operations - the second operation reads from disk before the first operation has written to disk, and the first message will be overwritten when the second message is committed.
A simple answer might be to queue your messages to ensure that they are processed serially. When the messages are received by your service, just dump the contents into an MSMQ queue. Have another single-threaded process which reads from the queue and writes the appropriate changes to the xml file. That way you can ensure you only write one file at a time and resolve any concurrency issues.
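That MSMQ approach could look roughly like this (the queue path is a placeholder, and System.Messaging requires a reference to System.Messaging.dll plus MSMQ installed on the machine):

```csharp
using System;
using System.Messaging; // requires a reference to System.Messaging.dll

// Sketch of the queue-then-serialize idea: the WCF operation only
// enqueues; a separate single-threaded process drains the queue and
// is the only writer of the xml files.
static class MessageStore
{
    const string QueuePath = @".\private$\usermessages"; // hypothetical queue name

    public static void Enqueue(string recipient, string body)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
        using (var queue = new MessageQueue(QueuePath))
            queue.Send(body, recipient); // label carries the recipient
    }

    // Call this in a loop from the single writer process.
    public static void DrainOnce()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            var msg = queue.Receive(); // blocks until a message arrives
            msg.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            string body = (string)msg.Body;
            // append body to the xml file for msg.Label here
        }
    }
}
```

Because only one process ever touches the files, no file locking is needed at all; MSMQ also makes the enqueued messages durable if you use a transactional queue.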
The basic problem is when you access a global resource (like a static variable, or a file on the filesystem) you need to make sure you lock that resource or serialize access to it somehow.
My suggestion here (if you want to just get it done quick without using a database or anything, which would be better) would be to insert your messages into a Queue structure in memory from your service code.
public class MyService : IMyService
{
    public static Queue queue = new Queue();

    public void SendMessage(string from, string to, string message)
    {
        Queue syncQueue = Queue.Synchronized(queue);
        syncQueue.Enqueue(new Message(from, to, message));
    }
}
Then somewhere else in your app you can create a background thread that reads from that queue and writes to the filesystem one update at a time.
void Main()
{
    var timer = new System.Windows.Forms.Timer { Interval = 1000 };
    timer.Tick += (o, e) =>
    {
        Queue syncQueue = Queue.Synchronized(MyService.queue);
        while (syncQueue.Count > 0)
        {
            Message message = syncQueue.Dequeue() as Message;
            WriteMessageToXMLFile(message);
        }
    };
    timer.Start();

    // Or whatever you do here
    StartupService();
}
It's not pretty (and I'm not 100% sure it compiles) but it should work. It sort of follows the "get it done with the tools I have, not the tools I want" kind of approach I think you are looking for.
The clients are also off the line as soon as possible, rather than waiting for the file to be written to the filesystem before they disconnect. This can also be bad: clients might not know their message wasn't delivered if your app goes down after they disconnect but before the background thread has written their message.
Other approaches on here are just as valid... I wanted to post the serialization approach, rather than the locking approach others have suggested.
HTH,
Anderson
Well, it just so happens that I've done something almost exactly the same, except that it wasn't actually messages...
Here's how I'd handle it.
Your service itself talks to a central object (or objects), which can dispatch message requests based on the sender.
The object relating to each sender maintains an internal lock while updating anything. When it gets a new request for a modification, it then can read from disk (if necessary), update the data, and write to disk (if necessary).
Because different updates will be happening on different threads, the internal lock will serialize them. Just be sure to release the lock before calling any 'external' objects, to avoid deadlock scenarios.
If I/O becomes a bottleneck, you can look at different strategies involving putting messages in one file, separate files, not immediately writing them to disk, etc. In fact, I'd think about storing the messages for each user in a separate folder for exactly that reason.
The biggest point is that each service instance acts, essentially, as an adapter to the central class, and that only one instance of one class will ever be responsible for reading/writing messages for a given recipient. Other classes may request a read/write, but they do not actually perform it (or even know how it's performed). This also means that their code is going to look like 'AddMessage(message)', not 'SaveMessages(GetMessages.Add(message))'.
That said, using a database is a very good suggestion, and will likely save you a lot of headaches.
