ProtectedMemory.Unprotect outputs garbage - c#

I've got this code to store and recover an authorization token (which is alphanumeric):
public static void Store (string token)
{
byte[] buffer = Encoding.UTF8.GetBytes (token.PadRight (32));
ProtectedMemory.Protect (buffer, MemoryProtectionScope.SameLogon);
Settings.Default.UserToken = buffer.ToHexString ();
Settings.Default.Save ();
}
public static string Retrieve ()
{
byte[] buffer = Settings.Default.UserToken.FromHexString ();
if (buffer.Length == 0)
return String.Empty;
ProtectedMemory.Unprotect (buffer, MemoryProtectionScope.SameLogon);
return Encoding.UTF8.GetString (buffer).Trim ();
}
And it mostly works fine, although sometimes I get garbage out (many FD bytes, plus some readable ones). I suspect this happens only when I reboot, but I've had some difficulty reproducing it.
Is this the intended behaviour? That is, does MemoryProtectionScope.SameLogon mean that the data will always be unreadable upon reboot? Am I doing something wrong?
The FromHexString and ToHexString methods do exactly what you would expect from them.

Yes, ProtectedMemory will always fail after you reboot (or, depending on the MemoryProtectionScope, after you restart the process, etc.). It's only meant to protect data in memory, not data at rest in storage.
You want to use ProtectedData instead:
byte[] encryptedBuffer = ProtectedData.Protect(buffer, null, DataProtectionScope.CurrentUser);
Both of those are managed wrappers over the DPAPI (introduced with Windows 2000). There's a bunch of posts with more details on the .NET security blog - http://blogs.msdn.com/b/shawnfa/archive/2004/05/05/126825.aspx
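Note that, unlike ProtectedMemory, ProtectedData.Protect returns a new encrypted array rather than encrypting in place, and it has no 16-byte block-size requirement, so the PadRight(32) is no longer needed. A rough sketch of the poster's pair rewritten around ProtectedData (ToHexString/FromHexString are the poster's own helpers):

public static void Store(string token)
{
    byte[] plain = Encoding.UTF8.GetBytes(token);
    // ProtectedData returns a new, encrypted array; the input is left untouched.
    byte[] encrypted = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
    Settings.Default.UserToken = encrypted.ToHexString();
    Settings.Default.Save();
}

public static string Retrieve()
{
    byte[] encrypted = Settings.Default.UserToken.FromHexString();
    if (encrypted.Length == 0)
        return String.Empty;
    byte[] plain = ProtectedData.Unprotect(encrypted, null, DataProtectionScope.CurrentUser);
    return Encoding.UTF8.GetString(plain);
}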

Related

What is the correct way to use USBHIDDRIVER for multiple writes?

I am writing an application that needs to write messages to a USB HID device and read responses. For this purpose, I'm using USBHIDDRIVER.dll (https://www.leitner-fischer.com/2007/08/03/hid-usb-driver-library/ )
Now it works fine when writing many of the message types, i.e. the short ones.
However, there is one type of message where I have to write a .hex file containing about 70,000 lines. The protocol requires that each line be written individually and sent in a packet containing framing information (start byte, end byte, checksum).
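For illustration (the real start/end bytes and checksum algorithm aren't important here, so the values below are invented), framing a single line looks roughly like this:

// Hypothetical framing: 0x02 = start byte, 0x03 = end byte, checksum = XOR over the payload.
static byte[] FramePacket(byte[] lineBytes)
{
    byte checksum = 0;
    foreach (byte b in lineBytes)
        checksum ^= b;
    byte[] packet = new byte[lineBytes.Length + 3];
    packet[0] = 0x02;
    Array.Copy(lineBytes, 0, packet, 1, lineBytes.Length);
    packet[packet.Length - 2] = checksum;
    packet[packet.Length - 1] = 0x03;
    return packet;
}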
However I'm encountering problems with this.
I've tried something like this:
private byte[] _responseBytes;
private ManualResetEvent _readComplete;
public byte[][] WriteMessage(byte[][] message)
{
List<byte[]> devResponse = new List<byte[]>();
_readComplete = new ManualResetEvent(false);
for (int i = 0; i < message.Length; i++)
{
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
usbHid.enableUsbBufferEvent(UsbHidReadEvent);
if (!usbHid.write(message[i])) {
throw new Exception ("Write Failed");
}
usbHid.startRead();
if (!_readComplete.WaitOne(10000)) {
usbHid.stopRead();
throw new Exception ("Timeout waiting for read");
}
usbHid.stopRead();
_readComplete.Reset();
devResponse.Add(_responseBytes.ToArray());
usbHid = null;
}
return devResponse.ToArray();
}
private void UsbHidReadEvent(object sender, EventArgs e)
{
// The driver raises this event with the buffer list as the sender;
// grab the bytes first, then signal the waiting writer.
_responseBytes = (byte[])((ListWithEvent)sender)[0];
if (_readComplete != null)
{
_readComplete.Set();
}
}
This appears to work. In Wireshark I can see the messages going back and forth. However, as you can see, it's creating an instance of the USBInterface class on every iteration. This seems very clunky, and I can see in Task Manager that it starts to eat up a lot of memory - the current run has it above 1 GB and eventually it falls over with an OutOfMemory exception. It is also very slow: the current run is not complete after about 15 minutes, although I've seen another application do the same job in less than one minute.
However, if I move the creation and connection of the USBInterface out of the loop as in...
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
usbHid.enableUsbBufferEvent(UsbHidReadEvent);
for (int i = 0; i < message.Length; i++)
{
if (!usbHid.write(message[i])) {
throw new Exception ("Write Failed");
}
usbHid.startRead();
if (!_readComplete.WaitOne(10000)) {
usbHid.stopRead();
throw new Exception ("Timeout waiting for read");
}
usbHid.stopRead();
_readComplete.Reset();
devResponse.Add(_responseBytes.ToArray());
}
usbHid = null;
... now what happens is it only allows me to do one write! I write the data, read the response and when it comes around the loop to write the second message, the application just hangs in the write() function and never returns. (Doesn't even time out)
What is the correct way to do this kind of thing?
(BTW I know it's adding a lot of data to that devResponse object but this is not the source of the issue - if I remove it, it still consumes an awful lot of memory)
UPDATE
I've found that if I don't enable reading, I can do multiple writes without having to create a new USBInterface object on each iteration. This is an improvement, but I'd still like to be able to read each response. (I can see the responses are still coming back in Wireshark.)

SerialPort.ReadLine() slow compared to manual method?

I've recently implemented a small program which reads data coming from a sensor and plots it as a diagram.
The data comes in as chunks of 5 bytes, roughly every 500 µs (baudrate: 500000). Around 3000 chunks make up a complete line. So the total transmission time is around 1.5 s.
As I was looking at the live diagram I noticed a severe lag between what is shown and what is currently measured. Investigating, it all boiled down to:
SerialPort.ReadLine();
It takes around 0.5 s longer than the line takes to be transmitted, so each line read takes around 2 s. Interestingly, no data is lost; it just lags further behind with each new line read. This is very irritating for the user, so I couldn't leave it like that.
I've implemented my own variant and it shows a consistent time of around 1.5 s, and no lag occurs. I'm not really proud of my implementation (more or less polling the BaseStream) and I'm wondering if there is a way to speed up the ReadLine function of the SerialPort class. With my implementation I'm also getting some corrupted lines, and haven't found the exact issue yet.
I've tried changing the ReadTimeout to 1600, but that just produced a TimeoutException, even though the data arrived.
Any explanation as of why it is slow or a way to fix it is appreciated.
As a side-note: I've tried this on a Console application with only SerialPort.ReadLine() as well and the result is the same, so I'm ruling out my own application affecting the SerialPort.
I'm not sure this is relevant, but my implementation looks like this:
LineSplitter lineSplitter = new LineSplitter();
async Task<string> SerialReadLineAsync(SerialPort serialPort)
{
byte[] buffer = new byte[5];
string ret = string.Empty;
while (true)
{
try
{
int bytesRead = await serialPort.BaseStream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
byte[] line = lineSplitter.OnIncomingBinaryBlock(this, buffer, bytesRead);
if (null != line)
{
return Encoding.ASCII.GetString(line).TrimEnd('\r', '\n');
}
}
catch
{
return string.Empty;
}
}
}
With LineSplitter being the following:
class LineSplitter
{
// based on: http://www.sparxeng.com/blog/software/reading-lines-serial-port
public byte Delimiter = (byte)'\n';
byte[] leftover;
public byte[] OnIncomingBinaryBlock(object sender, byte[] buffer, int bytesInBuffer)
{
leftover = ConcatArray(leftover, buffer, 0, bytesInBuffer);
int newLineIndex = Array.IndexOf(leftover, Delimiter);
if (newLineIndex >= 0)
{
byte[] result = new byte[newLineIndex+1];
Array.Copy(leftover, result, result.Length);
byte[] newLeftover = new byte[leftover.Length - result.Length];
Array.Copy(leftover, newLineIndex + 1, newLeftover, 0, newLeftover.Length);
leftover = newLeftover;
return result;
}
return null;
}
static byte[] ConcatArray(byte[] head, byte[] tail, int tailOffset, int tailCount)
{
byte[] result;
if (head == null)
{
result = new byte[tailCount];
Array.Copy(tail, tailOffset, result, 0, tailCount);
}
else
{
result = new byte[head.Length + tailCount];
head.CopyTo(result, 0);
Array.Copy(tail, tailOffset, result, head.Length, tailCount);
}
return result;
}
}
I ran into this issue in 2008 talking to GPS modules. Essentially the blocking functions are flaky, and the solution is to use the asynchronous programming model (APM, i.e. BeginRead/EndRead on the BaseStream).
Here are the gory details in another Stack Overflow answer: How to do robust SerialPort programming with .NET / C#?
You may also find this of interest: How to kill off a pending APM operation
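A minimal sketch of the APM approach on the port's BaseStream (error handling omitted; HandleBytes is a placeholder for something like the LineSplitter shown above):

byte[] readBuffer = new byte[4096];

void StartReading(SerialPort port)
{
    port.BaseStream.BeginRead(readBuffer, 0, readBuffer.Length, ar =>
    {
        int bytesRead = port.BaseStream.EndRead(ar);
        if (bytesRead > 0)
        {
            // Hand the raw bytes to a line reassembler, then re-arm the read.
            HandleBytes(readBuffer, bytesRead);
            StartReading(port);
        }
    }, null);
}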

C# TCP Connection saving clients and broadcasting to them

For practice, I wanted to create client and server applications to simulate a lobby.
In the server application I accept incoming connections, create a ClientInfo object containing the TcpClient object, username, id, etc. and the methods for sending and receiving data, and store that ClientInfo object in a List in my lobby class. When a user does something like chatting, the message is sent to the server and broadcast to all available clients.
The problem I have is:
The first client connects. Broadcasts go to DefaultUser1.
The second client connects. Broadcasts go to DefaultUser2 + DefaultUser2.
As you can see, the first Client is not receiving data anymore, nor can the Server receive data from him. Somehow the data in the list must be corrupted. Here is the relevant bit of code:
Accepting incoming connections, creating the ClientInfo object and storing it in the lobby:
while (mWorking)
{
TcpClient client = mListener.AcceptTcpClient();
mNumberOfClients++;
Console.WriteLine("New Tcp-Connection with client: " + client.Client.LocalEndPoint.ToString());
ClientInfo newInfo = new ClientInfo(client, mNumberOfClients);
mLobby.AddClient(newInfo);
}
The ClientInfo constructor:
public ClientInfo(TcpClient client, int clientNumber)
{
mClient = client;
mClientNumber = clientNumber;
mUsername = "DefaultUser" + mClientNumber.ToString();
mStream = client.GetStream();
mEncoding = new ASCIIEncoding();
}
The sending method in ClientInfo:
public void Send(String message)
{
mCurrentMessage = message;
Thread sendThread = new Thread(this.WriteTask);
sendThread.Start();
}
private void WriteTask()
{
byte[] data = mEncoding.GetBytes(mCurrentMessage);
byte[] sizeinfo = new byte[4];
sizeinfo[0] = (byte)data.Length;
sizeinfo[1] = (byte)(data.Length >> 8);
sizeinfo[2] = (byte)(data.Length >> 16);
sizeinfo[3] = (byte)(data.Length >> 24);
mStream.Write(sizeinfo, 0, sizeinfo.Length);
mStream.Write(data, 0, data.Length);
}
Relevant code in the lobby class:
private static List<ClientInfo> mClients;
private static processDel mProcessDel;
public Lobby(processDel del)
{
mProcessDel = del;
mClients = new List<ClientInfo>();
}
public void AddClient(ClientInfo client)
{
mClients.Add(client);
client.Listen(mProcessDel);
Broadcast("UJOIN§" + client.username + "$");
}
public void Broadcast(String message)
{
for (int i = 0; i < mClients.Count; i++)
{
Console.WriteLine("Broadcasting to " + mClients[i].username);
mClients[i].Send(message);
}
}
I also tried the broadcasting with foreach, same result. The processDel is a delegate method I need for processing the received data. Receiving is handled by a separate thread for each client.
As a guess, it seems that you misunderstood what static means in C#.
static means that the method or field is part of the type, rather than the instance of a type. So if all of your fields are static, you don't actually have any instance data, and all the state is shared across all instances of your class - so the second client overwrites all the data associated with the first client as well. The solution is simple - just remove the statics, and you should be fine.
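To make that concrete for the Lobby fields shown above (the same applies to any per-client state inside ClientInfo), the sketch is simply the same declarations without static:

private List<ClientInfo> mClients;   // one list per Lobby instance
private processDel mProcessDel;

public Lobby(processDel del)
{
    mProcessDel = del;
    mClients = new List<ClientInfo>();
}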
Other than that, your code has some thread-safety issues. Most types in .NET are not thread-safe by default, and you need to add appropriate locking to make sure that consistency is maintained. This is more of a topic for CodeReview, perhaps, so I'll just note the first things that come to mind:
Send always starts a new thread to send the message. However, this also means that if it's called twice in succession under just the right conditions, it can completely corrupt your TCP stream - for example, the first thread might write the length data, then the second writes its length data before the first writes the actual data and you're in trouble. It's also possible that you'd just send the second message twice, since you're passing the text to send through a field.
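A sketch of one way to fix that, keeping the length-prefix format: take the message as a parameter instead of going through a field, and hold a lock across both writes so a prefix can never be separated from its body (mWriteLock is a new field introduced here):

private readonly object mWriteLock = new object();

public void Send(string message)
{
    byte[] data = mEncoding.GetBytes(message);
    byte[] sizeinfo = BitConverter.GetBytes(data.Length);   // same little-endian prefix as the shifts above
    lock (mWriteLock)
    {
        // Both writes happen atomically with respect to other Send calls on this client.
        mStream.Write(sizeinfo, 0, sizeinfo.Length);
        mStream.Write(data, 0, data.Length);
    }
}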
List<T> isn't thread-safe. That means that you can only safely use it from a single thread - it's not entirely clear from your code, but it seems like you might have trouble with that. Using something like ConcurrentDictionary<IPEndPoint, ClientInfo> might be a better idea, but that really depends on what you're doing.
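For example (a sketch only; keying by the client's remote endpoint is just one option):

private readonly ConcurrentDictionary<IPEndPoint, ClientInfo> mClients =
    new ConcurrentDictionary<IPEndPoint, ClientInfo>();

// In the accept loop:
TcpClient client = mListener.AcceptTcpClient();
mClients[(IPEndPoint)client.Client.RemoteEndPoint] = new ClientInfo(client, ++mNumberOfClients);

// Broadcasting: enumerating a ConcurrentDictionary is safe while clients are added or removed.
foreach (ClientInfo info in mClients.Values)
    info.Send(message);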
You could also explore some alternative options, like using asynchronous I/O instead of spamming threads, but that's a more advanced option (mind you, multi-threading is even worse :)). Regardless, a good start for thread-safety would be http://www.albahari.com/threading/. It's somewhat long, but multi-threading is a very complex and dangerous topic, and it tends to produce errors that are hard to find and reproduce, especially while running in a debugger.

How to restart a Network Stream after a SocketException

I have a piece of code that reads a JSON stream from a server on the public internet. I am trying to make the connection a little more robust by catching the exception and trying to restart it on a given interval but I haven't been able to figure out how to restart it.
My stream code is as follows
TcpClient connection = new TcpClient(hostname, port);
NetworkStream stream = connection.GetStream();
thread = new Thread(ProcessStream);
thread.Start(stream);
My ProcessStream method is
private void ProcessStream(object stream)
{
Stream source = (NetworkStream)stream;
byte[] line;
int count;
const int capacity = 300;
ReadState readState;
while ((readState = ReadStreamLine(source, out line, out count, capacity)) != ReadState.EOF && _stopFeed == false)
{
if (readState != ReadState.Error && count > 4)
{
byte[] line1 = new byte[count];
Array.Copy(line, line1, count);
Process(line1); // return ignored in stream mode
}
else
{
ReadFail(line, count);
}
}
}
and my ReadStreamLine function takes the stream s, does an s.ReadByte and then catches the exception when the network connection is broken. It is here that I am not sure how to try and restart the stream on a timed basis. It does not restart automatically when the network is restored.
That is not possible. It is like calling your friend on the phone and having him hang up in the middle of the conversation. No matter how long you wait, you'll never hear from him again. All you can do is hang up the phone and dial the number again. Unless the server supports restartable downloads (use HttpWebRequest.AddRange), you'll have to download the JSON again from the beginning.
If this happens a lot, so it can't be explained by the server going offline or getting overloaded, do keep in mind that the server might well be doing this on purpose. Usually because you exceeded some kind of quota. Talk to the server owner, they typically have a paid plan to allow you to use more resources.
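A sketch of what "dialing again" can look like for the code above (the retry interval is arbitrary; ProcessStream and _stopFeed are the poster's own members):

private void RunFeed(string hostname, int port)
{
    while (!_stopFeed)
    {
        try
        {
            using (TcpClient connection = new TcpClient(hostname, port))
            using (NetworkStream stream = connection.GetStream())
            {
                ProcessStream(stream);   // returns or throws when the connection drops
            }
        }
        catch (SocketException) { /* server unreachable; fall through and retry */ }
        catch (IOException)     { /* connection lost mid-read; fall through and retry */ }
        Thread.Sleep(TimeSpan.FromSeconds(30));   // arbitrary back-off before reconnecting
    }
}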
From what I can tell, you instantiate your TcpClient before you start your method. So, in order to restart your stream, you need to re-instantiate or re-initialize your connection stream before trying again.
int retryCount = 0;   // track how many times we've tried to re-establish the connection
try
{
// Do something (read from the stream)
}
catch (Exception ex)
{
// Caught your exception; it might be ideal to log it too.
// If we haven't hit our retry limit, re-create the connection and try again.
if (retryCount < 5)
{
retryCount++;
// re-initialize or re-instantiate the connection
TcpClient connection = new TcpClient(host, port);
NetworkStream stream = connection.GetStream();
ProcessStream(stream);
}
}
I hope this helps.
You could first add your stream to a static list of running streams and remove it from there after you finish reading.
Remember to use locking!
Then, in the catch block for the network failure, you can copy your list to a "todoAfterNetworkIsUpAgain" list and start a timer that checks for the network; once the network is up again, restart reading those streams.
This might look a bit tough, but it's not.
Use recursion and threading in a better way and your problem might get resolved.
For recursion:
http://www.dotnetperls.com/recursion
For threading:
Take a look at the MSDN documentation or at the concepts from Albahari.

Are there well-known patterns for asynchronous network code in C#?

I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple:
The proxy server and clients both use the same message format; in the code I use a ProxyMessage class to represent both requests from clients and responses generated by the server:
public class ProxyMessage
{
public int Length; // message length (not including the length bytes themselves)
public string Body; // an XML string containing a request/response
// writes this message instance in the proper network format to stream
// (helper for response messages)
public void WriteToStream(Stream stream) { ... }
}
The messages are as simple as could be: the length of the body + the message body.
I have a separate ProxyClient class that represents a connection to a client. It handles all the interaction between the proxy and a single client.
What I'm wondering is: are there any design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in the processing of the current message. In my current code, I do all of this work in my callback function for TcpClient.BeginRead, and manage the state of the buffer and the current message processing state with the help of a few instance variables.
The code for my callback function that I'm passing to BeginRead is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?).
private enum BufferStates
{
GetMessageLength,
GetMessageBody
}
// The read buffer. Initially 4 bytes because we are initially
// waiting to receive the message length (a 32-bit int) from the client
// on first connecting. By constraining the buffer length to exactly 4 bytes,
// we make the buffer management a bit simpler, because
// we don't have to worry about cases where the buffer might contain
// the message length plus a few bytes of the message body.
// Additional bytes will simply be buffered by the OS until we request them.
byte[] _buffer = new byte[4];
// A count of how many bytes read so far in a particular BufferState.
int _totalBytesRead = 0;
// The state of our buffer processing. Initially, we want
// to read in the message length, as it's the first thing
// a client will send
BufferStates _bufferState = BufferStates.GetMessageLength;
// ...ADDITIONAL CODE OMITTED FOR BREVITY...
// This is called every time we receive data from
// the client.
private void ReadCallback(IAsyncResult ar)
{
try
{
int bytesRead = _tcpClient.GetStream().EndRead(ar);
if (bytesRead == 0)
{
// No more data/socket was closed.
this.Dispose();
return;
}
// The state passed to BeginRead is used to hold a ProxyMessage
// instance that we use to build to up the message
// as it arrives.
ProxyMessage message = (ProxyMessage)ar.AsyncState;
if(message == null)
message = new ProxyMessage();
switch (_bufferState)
{
case BufferStates.GetMessageLength:
_totalBytesRead += bytesRead;
// if we have the message length (a 32-bit int)
// read it in from the buffer, grow the buffer
// to fit the incoming message, and change
// state so that the next read will start appending
// bytes to the message body
if (_totalBytesRead == 4)
{
int length = BitConverter.ToInt32(_buffer, 0);
message.Length = length;
_totalBytesRead = 0;
_buffer = new byte[message.Length];
_bufferState = BufferStates.GetMessageBody;
}
break;
case BufferStates.GetMessageBody:
string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead);
_totalBytesRead += bytesRead;
message.Body += bodySegment;
if (_totalBytesRead >= message.Length)
{
// Got a complete message.
// Notify anyone interested.
// Pass a response ProxyMessage object to
// with the event so that receivers of OnReceiveMessage
// can send a response back to the client after processing
// the request.
ProxyMessage response = new ProxyMessage();
OnReceiveMessage(this, new ProxyMessageEventArgs(message, response));
// Send the response to the client
response.WriteToStream(_tcpClient.GetStream());
// Re-initialize our state so that we're
// ready to receive additional requests...
message = new ProxyMessage();
_totalBytesRead = 0;
_buffer = new byte[4]; //message length is 32-bit int (4 bytes)
_bufferState = BufferStates.GetMessageLength;
}
break;
}
// Wait for more data...
_tcpClient.GetStream().BeginRead(_buffer, 0, _buffer.Length, this.ReadCallback, message);
}
catch
{
// do nothing
}
}
So far, my only real thought is to extract the buffer-related stuff into a separate MessageBuffer class and simply have my read callback append new bytes to it as they arrive. The MessageBuffer would then worry about things like the current BufferState and fire an event when it received a complete message, which the ProxyClient could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example).
We create a wrapper around Stream (the base class of NetworkStream, which is what TcpClient.GetStream() returns). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes), we check whether we have a full message (4 bytes + message body length). When we do, we raise a MessageReceived event with the message body and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations.
public class MessageStream : IMessageStream, IDisposable
{
public MessageStream(Stream stream)
{
if(stream == null)
throw new ArgumentNullException("stream", "Stream must not be null");
if(!stream.CanWrite || !stream.CanRead)
throw new ArgumentException("Stream must be readable and writable", "stream");
this.Stream = stream;
this.readBuffer = new byte[512];
messageBuffer = new List<byte>();
stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
// These belong to the ReadCallback thread only.
private byte[] readBuffer;
private List<byte> messageBuffer;
// Set when Dispose runs (see below) so the read loop stops re-arming itself.
private bool disposed;
private void ReadCallback(IAsyncResult result)
{
int bytesRead = Stream.EndRead(result);
messageBuffer.AddRange(readBuffer.Take(bytesRead));
// Extract every complete message currently sitting in the buffer.
while(messageBuffer.Count >= 4)
{
int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32
// Keep buffering until we get a full message.
if(messageBuffer.Count < length + 4)
break;
// Skip() alone would not mutate the list, so copy the body out and then
// remove the consumed prefix + body from the buffer.
byte[] body = messageBuffer.Skip(4).Take(length).ToArray();
OnMessageReceived(new MessageEventArgs(body));
messageBuffer.RemoveRange(0, length + 4);
}
// FIXME below is kinda hacky (I don't know the proper way of doing things...)
// Don't bother reading again. We don't have stream access.
if(disposed)
return;
try
{
Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
catch(ObjectDisposedException)
{
// DO NOTHING
// Ends read loop.
}
}
public Stream Stream
{
get;
private set;
}
public event EventHandler<MessageEventArgs> MessageReceived;
protected virtual void OnMessageReceived(MessageEventArgs e)
{
var messageReceived = MessageReceived;
if(messageReceived != null)
messageReceived(this, e);
}
public virtual void SendMessage(Message message)
{
// Have fun ...
}
// Dispose stuff here
}
I think the design you've used is fine; that's roughly how I would do (and have done) the same sort of thing. I don't think you'd gain much by refactoring into additional classes/structs, and from what I've seen you'd actually make the solution more complex by doing so.
The only comment I'd have is whether the two reads, where the first is always the message length and the second is always the body, are robust enough. I'm always wary of approaches like that: if they somehow get out of sync due to an unforeseen circumstance (such as the other end sending the wrong length), it's very difficult to recover. Instead, I'd do a single read with a big buffer so that I always get all the available data from the network, and then inspect the buffer to extract complete messages. That way, if things do go wrong, the current buffer can just be thrown away to get things back to a clean state, and only the current messages are lost rather than the whole service stopping.
Actually, at the moment you would have a problem if your message body was big and arrived in two separate receives, and the next message in line sent its length at the same time as the second half of the previous body. If that happened, the new message's length would end up appended to the body of the previous message, and you'd be in the situation described in the previous paragraph.
You can use yield return to automate the generation of a state machine for asynchronous callbacks. Jeffrey Richter promotes this technique through his AsyncEnumerator class, and I've played around with the idea here.
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail here.
