I've been struggling with a problem when downloading very big files (>2GB) on Silverlight. My application is an out-of-browser Download Manager running with elevated permissions.
When the downloaded file reaches a certain amount of data (2 GB), the following exception is thrown:
System.ArgumentOutOfRangeException was caught
  Message=Specified argument was out of the range of valid values.
  Parameter name: count
  StackTrace:
    at MS.Internal.InternalNetworkStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
    at MS.Internal.InternalNetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
    at MySolution.DM.Download.BeginResponseCallback(IAsyncResult ar)
  InnerException:
    null
The only clue I have is this site, which shows the BeginRead implementation. This exception occurs only when count is negative.
My code:
/* "Target" is a File object. "source" is a Stream object. */
var buffer = new byte[64 * 1024];
int bytesRead;
Target.Seek(0, SeekOrigin.End); // The file might already exist when resuming a download
/* The exception is thrown from inside "source.Read" */
while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
{
    Target.Write(buffer, 0, bytesRead);
    _fileBytes = Target.Length;
    Deployment.Current.Dispatcher.BeginInvoke(() => DownloadPercentual = Math.Round((double)_fileBytes / _totalSize * 100, 5));
}
Target.Close();
logFile.Close();
The error occurs with different kinds of files, which come from public buckets on Amazon S3 (via regular HTTP requests).
I searched a bit and it looks like this is a known limitation in Silverlight. One possible workaround is to perform the download in multiple sections, each smaller than 2GB, using the Range header.
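To sketch the Range-header idea: the snippet below targets the desktop HttpWebRequest API (Silverlight's HttpWebRequest only exposes the Begin/End async pattern and may not support AddRange, so treat this purely as an illustration of the approach; RangedDownload, FormatRange, and DownloadSection are made-up names):

```csharp
using System;
using System.IO;
using System.Net;

public static class RangedDownload
{
    // Builds the Range header value for one section, e.g. "bytes=0-1048575".
    public static string FormatRange(long from, long toInclusive)
    {
        return string.Format("bytes={0}-{1}", from, toInclusive);
    }

    // Downloads bytes [from, toInclusive] of url and appends them to target.
    // Each section stays well below the 2 GB limit that triggers the exception.
    public static void DownloadSection(string url, Stream target, long from, long toInclusive)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(from, toInclusive); // desktop .NET 4+; not available in Silverlight
        using (var response = request.GetResponse())
        using (var source = response.GetResponseStream())
        {
            var buffer = new byte[64 * 1024];
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
                target.Write(buffer, 0, bytesRead);
        }
    }
}
```

S3 supports Range requests on regular GETs, so the caller would invoke DownloadSection repeatedly, advancing the offsets until the full file length is reached.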
Related
I have a UART device to which I write a command (via System.IO.Ports.SerialPort); the device then responds immediately.
So basically my approach is:
write to the SerialPort -> await Task.Delay -> read from the port.
// The port is open all the time.
public async Task<byte[]> WriteAndRead(byte[] command)
{
    port.Write(command, 0, command.Length);
    await Task.Delay(timeout);
    var msglen = port.BytesToRead;
    if (msglen > 0)
    {
        byte[] message = new byte[msglen];
        int readbytes = 0;
        while (port.Read(message, readbytes, msglen - readbytes) <= 0)
            ;
        return message;
    }
    return null;
}
This works fine on my computer. But if I try it on another computer, for example, the BytesToRead property sometimes doesn't match: there are empty bytes in it, or the answer is incomplete. (E.g. I get two bytes when I expect one: 0xBB, 0x00 or 0x00, 0xBB.)
I've also looked into the SerialPort.DataReceived event, but it fires too often and (as far as I understand) isn't really useful for this write-and-read approach, since I expect the answer from the device immediately.
Is there a better approach to a write-and-read?
Read the Remarks in https://msdn.microsoft.com/en-us/library/ms143549(v=vs.110).aspx carefully:
You should not rely on the BytesToRead value to indicate message length.
You should know how much data you expect to read in order to decompose the message.
Also, as @itsme85 noticed, you are not updating readbytes, and therefore you always write the received bytes to the beginning of your array. Proper code that updates readbytes would look like this:
int r;
while (readbytes < msglen && (r = port.Read(message, readbytes, msglen - readbytes)) > 0)
{
    readbytes += r;
}
However, more data can arrive while you are reading, so your "message" might still be incomplete.
Rethink what you want to achieve.
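One alternative to guessing with a fixed Task.Delay is to rely on the port's read timeout and loop until the expected number of bytes has arrived. This is only a sketch and assumes you know each command's reply length up front; SerialExchange and ReadExactly are made-up names:

```csharp
using System;
using System.IO;
using System.IO.Ports;

public static class SerialExchange
{
    public static byte[] WriteAndRead(SerialPort port, byte[] command, int expectedLength)
    {
        port.DiscardInBuffer(); // drop stale bytes left over from a previous exchange
        port.Write(command, 0, command.Length);
        return ReadExactly(port.BaseStream, expectedLength);
    }

    // Loops until exactly `count` bytes have been read. A single Read call
    // may return fewer bytes than requested, so looping is mandatory.
    public static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int read = 0;
        while (read < count)
        {
            int n = stream.Read(buffer, read, count - read);
            if (n == 0)
                throw new EndOfStreamException("stream closed before the full reply arrived");
            read += n;
        }
        return buffer;
    }
}
```

Setting port.ReadTimeout to a sensible value makes a lost reply surface as a TimeoutException instead of hanging the loop forever.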
Suppose I am writing a tcp proxy code.
I am reading from the incoming stream and writing to the output stream.
I know that Stream.Copy uses a buffer, but my question is:
Does the Stream.Copy method write to the output stream while fetching the next chunk from the input stream, or is it a loop like "read chunk from input, write chunk to output, read chunk from input, etc."?
Here's the implementation of CopyTo in .NET 4.5:
private void InternalCopyTo(Stream destination, int bufferSize)
{
    int num;
    byte[] buffer = new byte[bufferSize];
    while ((num = this.Read(buffer, 0, buffer.Length)) != 0)
    {
        destination.Write(buffer, 0, num);
    }
}
So as you can see, it reads from the source, then writes to the destination. This could probably be improved ;)
EDIT: here's a possible implementation of a piped version:
public static void CopyToPiped(this Stream source, Stream destination, int bufferSize = 0x14000)
{
    byte[] readBuffer = new byte[bufferSize];
    byte[] writeBuffer = new byte[bufferSize];
    int bytesRead = source.Read(readBuffer, 0, bufferSize);
    while (bytesRead > 0)
    {
        Swap(ref readBuffer, ref writeBuffer);
        var iar = destination.BeginWrite(writeBuffer, 0, bytesRead, null, null);
        bytesRead = source.Read(readBuffer, 0, bufferSize);
        destination.EndWrite(iar);
    }
}

static void Swap<T>(ref T x, ref T y)
{
    T tmp = x;
    x = y;
    y = tmp;
}
Basically, it reads a chunk synchronously, starts copying it to the destination asynchronously, then reads the next chunk and waits for the write to complete.
I ran a few performance tests:
using MemoryStreams, I didn't expect a significant improvement, since it doesn't use IO completion ports (AFAIK); and indeed, the performance is almost identical
using files on different drives, I expected the piped version to perform better, but it doesn't... it's actually slightly slower (by 5 to 10%)
So it apparently doesn't bring any benefit, which is probably the reason why it isn't implemented this way...
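A MemoryStream comparison along the lines described above could be reproduced with a small harness like this (sizes are arbitrary choices, and TimeCopy is a made-up helper, not the harness actually used for the numbers quoted):

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class CopyBenchmark
{
    // Runs one copy from a fresh MemoryStream pair and returns elapsed milliseconds.
    public static long TimeCopy(Action<Stream, Stream> copy, byte[] data)
    {
        using (var source = new MemoryStream(data))
        using (var destination = new MemoryStream(data.Length))
        {
            var sw = Stopwatch.StartNew();
            copy(source, destination);
            sw.Stop();
            if (destination.Length != data.Length)
                throw new InvalidOperationException("copy was incomplete");
            return sw.ElapsedMilliseconds;
        }
    }
}
```

Usage would be something like `CopyBenchmark.TimeCopy((s, d) => s.CopyTo(d), data)` versus `CopyBenchmark.TimeCopy((s, d) => s.CopyToPiped(d), data)`, averaged over several runs.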
According to Reflector, it does not. Such behavior had better be documented, because it would introduce concurrency, which is never safe to do in general. So the API design not to "pipe" is sound.
So this is not just a question of Stream.Copy being more or less smart: copying concurrently is not an implementation detail.
Stream.Copy is a synchronous operation. I don't think it is reasonable to expect it to use asynchronous reads/writes to read and write simultaneously.
I would expect an asynchronous version (like RandomAccessStream.CopyAsync) to use simultaneous reads and writes.
Note: using multiple threads during the copy would be unwelcome behavior, but using asynchronous read and write to run them at the same time is fine.
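To illustrate that last point: with Task-based reads and writes, each write can be overlapped with the next read without spinning up any extra threads. A sketch (CopyToPipedAsync is a made-up name, mirroring the APM version above):

```csharp
using System.IO;
using System.Threading.Tasks;

public static class PipedCopyExtensions
{
    public static async Task CopyToPipedAsync(this Stream source, Stream destination, int bufferSize = 0x14000)
    {
        var readBuffer = new byte[bufferSize];
        var writeBuffer = new byte[bufferSize];
        int bytesRead = await source.ReadAsync(readBuffer, 0, bufferSize);
        while (bytesRead > 0)
        {
            // Swap buffers so the data just read is in writeBuffer.
            var tmp = readBuffer; readBuffer = writeBuffer; writeBuffer = tmp;
            Task writeTask = destination.WriteAsync(writeBuffer, 0, bytesRead);
            bytesRead = await source.ReadAsync(readBuffer, 0, bufferSize); // overlaps with the write
            await writeTask;
        }
    }
}
```

No thread is blocked while the read and write are in flight; the continuation-based overlap is what distinguishes this from using multiple threads.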
Writing to the output stream while fetching the next chunk is impossible (when using one buffer), because fetching the next chunk can overwrite the buffer while it's being used for output.
You could use double buffering, but it's pretty much the same as using a double-sized buffer.
I am trying to use Memcached.ClientLibrary. I was able to make it work, but after a few hits (even before I see a page for the first time), I get this weird error, about which I couldn't find any information:
Error message:
Cannot write to a BufferedStream while the read buffer is not empty if the underlying stream is not seekable. Ensure that the stream underlying this BufferedStream can seek or avoid interleaving read and write operations on this BufferedStream.
Stack trace:
[NotSupportedException: Cannot write to a BufferedStream while the read buffer is not empty if the underlying stream is not seekable. Ensure that the stream underlying this BufferedStream can seek or avoid interleaving read and write operations on this BufferedStream.]
System.IO.BufferedStream.ClearReadBufferBeforeWrite() +10447571
System.IO.BufferedStream.Write(Byte[] array, Int32 offset, Int32 count) +163
Memcached.ClientLibrary.SockIO.Write(Byte[] bytes, Int32 offset, Int32 count) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\SockIO.cs:411
Memcached.ClientLibrary.SockIO.Write(Byte[] bytes) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\SockIO.cs:391
Memcached.ClientLibrary.MemcachedClient.Set(String cmdname, String key, Object obj, DateTime expiry, Object hashCode, Boolean asString) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\MemCachedClient.cs:766
Memcached.ClientLibrary.MemcachedClient.Set(String key, Object value, DateTime expiry) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\MemCachedClient.cs:465
Yuusoft.Julian.Server.Models.Utils.Caching.CacheWrapper.Add(CacheKey key, T o, CacheDependency dependencies, Nullable`1 expirationTime, CacheItemRemovedCallback callBack)
My code to initialize (static constructor):
SockIOPool pool = SockIOPool.GetInstance();
pool.SetServers(CacheWrapper.Servers);
pool.InitConnections = 3;
pool.MinConnections = 1;
pool.MaxConnections = 50;
pool.SocketConnectTimeout = 1000;
pool.SocketTimeout = 3000;
pool.MaintenanceSleep = 30;
pool.Failover = true;
pool.Nagle = false;
pool.Initialize();
// Code to set (the second line is the one erroring - but not on the first hit?!)
MemcachedClient mc = new MemcachedClient();
mc.Set(key, o, expirationTime.Value);
// Code to get
MemcachedClient mc = new MemcachedClient();
object o = mc.Get(key);
In addition to this exception, two other exceptions were present in my log4net logs for Memcached.ClientLibrary ("Error storing data in cache for key: <key with spaces>" and "Exception thrown while trying to get object from cache for key: <key with spaces>"). I was able to resolve all three exceptions by ensuring that the memcached key doesn't contain any whitespace.
Reference: https://groups.google.com/forum/#!topic/memcached/4WMcTbL8ZZY
Memcached Version: memcached-win32-1.4.4-14
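The memcached text protocol forbids whitespace and control characters in keys, which is why keys with spaces fail. A minimal sketch of a sanitizer to apply before every Set/Get (CacheKeys.Sanitize is a hypothetical helper, not part of Memcached.ClientLibrary):

```csharp
using System.Text.RegularExpressions;

public static class CacheKeys
{
    // Replaces every run of whitespace with a single underscore so the key
    // is valid for the memcached text protocol.
    public static string Sanitize(string key)
    {
        return Regex.Replace(key, @"\s+", "_");
    }
}
```

Usage would then be `mc.Set(CacheKeys.Sanitize(key), o, expirationTime.Value)` and `mc.Get(CacheKeys.Sanitize(key))`, so both sides always agree on the stored key.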
I'm trying to handle a serial port using the SerialPort class.
The application requires us to receive a command first and then send a reply within 20 ms; the problem is that there is a delay (up to 15 ms) between the actual command and the command we read, so we don't have time to send the reply back.
The length of the command we need to read is fixed at 20 bytes, and we poll one byte from the input buffer at a time:
serialPort.Read(input, 0, 1)
I don't know what is wrong with this process.
Why read one byte at a time? If you're expecting 20 bytes, you can write:
byte[] buffer = new byte[20];
int bytesRead;
int totalBytesRead = 0;
while (totalBytesRead < buffer.Length
       && (bytesRead = serialPort.Read(buffer, totalBytesRead, buffer.Length - totalBytesRead)) != 0)
{
    totalBytesRead += bytesRead;
}
At that point, you have all 20 bytes or you've reached the end of the stream.
What do you mean by "there is a delay (up to 15 ms) between the command we read and the actual command"?
Are you using the DataReceived event? I had a similar error some time ago; apparently some of the functionality is not invoked without using the event handler.
Is there any limit on the size of data that can be received by a TCP client?
With TCP socket communication, the server is sending more data, but the client only gets 4K and stops.
I'm guessing that you're doing exactly 1 Send and exactly 1 Receive.
You need to do multiple reads; there is no guarantee that a single read from the socket will return everything.
The Receive method reads as much data as is available, up to the size of the buffer, but it returns as soon as it has some data so your program can use it.
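Because TCP is a byte stream with no message boundaries, a common way for the client to know when it has everything is length-prefix framing: send a 4-byte length, then the payload, and loop on the receiving side until exactly that many bytes have arrived. A sketch (Framing is a made-up helper class, written against Stream so it works with NetworkStream):

```csharp
using System;
using System.IO;

public static class Framing
{
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // 4 bytes
        stream.Write(lengthPrefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadFrame(Stream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    // Loops until exactly `count` bytes have been read, since a single
    // Read may return fewer bytes than requested.
    static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int read = 0;
        while (read < count)
        {
            int n = stream.Read(buffer, read, count - read);
            if (n == 0)
                throw new EndOfStreamException("connection closed mid-frame");
            read += n;
        }
        return buffer;
    }
}
```

Note that BitConverter uses the machine's endianness; for cross-platform protocols you would pin the byte order explicitly.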
You may consider splitting your read/writes over multiple calls. I've definitely had some problems with TcpClient in the past. To fix that we use a wrapped stream class with the following read/write methods:
public override int Read(byte[] buffer, int offset, int count)
{
    int totalBytesRead = 0;
    int chunkBytesRead = 0;
    do
    {
        chunkBytesRead = _stream.Read(buffer, offset + totalBytesRead, Math.Min(__frameSize, count - totalBytesRead));
        totalBytesRead += chunkBytesRead;
    } while (totalBytesRead < count && chunkBytesRead > 0);
    return totalBytesRead;
}

public override void Write(byte[] buffer, int offset, int count)
{
    int bytesSent = 0;
    do
    {
        int chunkSize = Math.Min(__frameSize, count - bytesSent);
        _stream.Write(buffer, offset + bytesSent, chunkSize);
        bytesSent += chunkSize;
    } while (bytesSent < count);
}

// _stream is the wrapped stream.
// __frameSize is a constant; we use 4096 since it's easy to allocate.
No, it should be fine. I suspect that your code to read from the client is flawed, but it's hard to say without you actually showing it.
No limit; a TCP socket is a stream.
In theory there's no limit on the amount of data sent over TCP, but since we're limited by physical resources (i.e., memory), implementations such as Microsoft Winsock use something called the "TCP window size".
That means that when you send something with Winsock's send() function, for example (without setting any options on the socket handle), the data is first copied to the socket's temporary send buffer. Only once the receiving side has acknowledged the data does Winsock reuse that memory.
So if you send faster than the buffer frees up, you can flood it - and then you get an error!