Array size based on available physical memory - C#

I am trying to make an encryption algorithm.
I can read a file and convert it to bytes without any problem, saving the bytes into a byte array.
The problem is that I am currently creating the array with a fixed size, like this:
byte[] FileArray = new byte[10000000];
FileStream TheFileStream = new FileStream(FilePath.Text, FileMode.Open);
BinaryReader TheFileBinary = new BinaryReader(TheFileStream);
for (int i = 0; i < TheFileStream.Length; i++)
{
    FileArray = TheFileBinary.ReadBytes(10000000);
    // I call a function here
    if (TheFileStream.Position == TheFileStream.Length)
        break;
}
However, I don't want the array size to be fixed: if I hard-code a large size like 10000000, machines with little memory might run into problems.
How can I set the array size dynamically for each machine, based on the free (unallocated) physical memory available?
I have noticed that the larger the array size, the faster the read, so I don't want to make it too small either.
I would really appreciate the help.

The FileStream keeps track of how many bytes are in the file. Just use the Length property.
FileStream TheFileStream = new FileStream(FilePath.Text, FileMode.Open);
BinaryReader TheFileBinary = new BinaryReader(TheFileStream);
// Length is a long, but ReadBytes takes an int, so this only works for files under 2 GB
byte[] FileArray = TheFileBinary.ReadBytes((int)TheFileStream.Length);
Okay, I reread the question and finally found the part of it that was a question: "how can I know the free unallocated memory space so I can put it in the byteArray". Anyway, I suggest you take a look at this question along with its highest-rated comment.

If you're really worried about space, then use a simple List<byte>, read the stream in chunks at a time (say, 1024 bytes), and call the AddRange method on the list. After you're done, call ToArray on the list, and now you have a properly sized byte array.
List<byte> byteArr = new List<byte>();
byte[] buffer = new byte[1024];
int bytesRead = 0;
using (FileStream TheFileStream = new FileStream(FilePath.Text, FileMode.Open))
{
    while ((bytesRead = TheFileStream.Read(buffer, 0, buffer.Length)) > 0)
        byteArr.AddRange(buffer.Take(bytesRead)); // add only the bytes actually read (Take needs System.Linq)
}
buffer = byteArr.ToArray();
// call your method here.
// call your method here.
Edit: It's still preferable to read in chunks for larger files. You can of course play with the buffer size however you want, but 1024 bytes is usually a good starting point. Reading the entire file at once ultimately DOUBLES the memory used, as you also have to deal with the internal read buffer being the size of the stream (on top of your own buffer). Breaking the reads up into chunks takes only FileStream.Length + <buffer size> memory, as opposed to FileStream.Length * 2. Just something to keep in mind...
byte[] buffer = null;
using (FileStream TheFileStream = new FileStream(FilePath.Text, FileMode.Open))
{
    buffer = new byte[TheFileStream.Length];
    int offset = 0;
    int bytesRead;
    while ((bytesRead = TheFileStream.Read(buffer, offset, Math.Min(1024, buffer.Length - offset))) > 0)
        offset += bytesRead;
    // Or just TheFileStream.Read(buffer, 0, buffer.Length) if it's small enough.
}

You can use WMI to retrieve the instance of the Win32_OperatingSystem class and base your memory calculations off of the FreePhysicalMemory or TotalVisibleMemorySize properties:
static ulong GetAvailableMemoryKilobytes()
{
    const string memoryPropertyName = "FreePhysicalMemory";
    // "=@" selects the WMI singleton instance; requires a reference to System.Management
    using (ManagementObject operatingSystem = new ManagementObject("Win32_OperatingSystem=@"))
        return (ulong)operatingSystem[memoryPropertyName];
}
static ulong GetTotalMemoryKilobytes()
{
    const string memoryPropertyName = "TotalVisibleMemorySize";
    using (ManagementObject operatingSystem = new ManagementObject("Win32_OperatingSystem=@"))
        return (ulong)operatingSystem[memoryPropertyName];
}
Then pass the result of either method to a method like this to scale the size of your read buffer to the memory of the local machine:
static int GetBufferSize(ulong memoryKilobytes)
{
    const int bufferStepSize = 256;        // 256 kilobytes of buffer...
    const int memoryStepSize = 128 * 1024; // ...for every 128 megabytes of memory...
    const int minBufferSize = 512;         // ...no less than 512 kilobytes...
    const int maxBufferSize = 10 * 1024;   // ...no more than 10 megabytes

    int bufferSize = bufferStepSize * (int)(memoryKilobytes / memoryStepSize);
    bufferSize = Math.Max(bufferSize, minBufferSize);
    bufferSize = Math.Min(bufferSize, maxBufferSize);
    return bufferSize;
}
Obviously, increasing your buffer size by 256 KB for every 128 MB of RAM seems a little silly, but these numbers are just examples of how you might scale your buffer size if you really wanted to do that. Unless you're reading many, many files at once, worrying about a buffer that's a few hundred kilobytes or a few megabytes might be more trouble than it's worth. You might be better off just benchmarking to see which buffer size gives the best performance (it might not need to be as large as you think) and using that.
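For example, a quick-and-dirty benchmark along these lines would do (a minimal sketch: the file path and candidate sizes are placeholders, and OS file caching will skew repeated runs, so take the first pass with a grain of salt):

// Times a full sequential read of one file with several candidate buffer sizes.
const string testFilePath = @"C:\temp\testfile.bin"; // placeholder path
foreach (int bufferSize in new[] { 4 * 1024, 64 * 1024, 1024 * 1024 })
{
    byte[] buffer = new byte[bufferSize];
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    using (var stream = new FileStream(testFilePath, FileMode.Open, FileAccess.Read))
    {
        while (stream.Read(buffer, 0, buffer.Length) > 0) { }
    }
    stopwatch.Stop();
    Console.WriteLine("{0,9} bytes: {1} ms", bufferSize, stopwatch.ElapsedMilliseconds);
}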
Now you can simply update your code like this:
ulong memoryKilobytes = GetAvailableMemoryKilobytes();
// ...or GetTotalMemoryKilobytes();
int bufferSize = GetBufferSize(memoryKilobytes);
using (FileStream TheFileStream = new FileStream(FilePath.Text, FileMode.Open))
using (BinaryReader TheFileBinary = new BinaryReader(TheFileStream))
{
    byte[] FileArray = new byte[bufferSize];
    int readCount;
    while ((readCount = TheFileBinary.Read(FileArray, 0, bufferSize)) > 0)
    {
        // Call a method here, passing FileArray as a parameter
        // (only the first readCount bytes of FileArray are valid)
    }
}

Related

Get PCM byte array from MediaFoundationResampler, NAudio

I'm working on a method to resample a wav file, here's the method:
internal byte[] ResampleWav(byte[] rawPcmData, int frequency, int bits, int channels, int newFrequency)
{
    byte[] pcmData;
    using (MemoryStream AudioSample = new MemoryStream(rawPcmData))
    {
        RawSourceWaveStream Original = new RawSourceWaveStream(AudioSample, new WaveFormat(frequency, bits, channels));
        using (MediaFoundationResampler conversionStream = new MediaFoundationResampler(Original, new WaveFormat(newFrequency, bits, channels)))
        {
            // Here should go the code to get the array of bytes with the resampled PCM data
        }
    }
    return pcmData;
}
The problem here is that there isn't any property in the MediaFoundationResampler that returns the size of the buffer. The method should return an array of bytes with the resampled PCM data only.
Thanks in advance!
--Edit
After some time working on it, I got this:
internal byte[] WavChangeFrequency(byte[] rawPcmData, int frequency, int bits, int channels, int newFrequency)
{
    byte[] pcmData;
    using (MemoryStream AudioSample = new MemoryStream(rawPcmData))
    {
        RawSourceWaveStream Original = new RawSourceWaveStream(AudioSample, new WaveFormat(frequency, bits, channels));
        using (MediaFoundationResampler conversionStream = new MediaFoundationResampler(Original, newFrequency))
        {
            //Start reading PCM data
            using (MemoryStream wavData = new MemoryStream())
            {
                byte[] readBuffer = new byte[1024];
                while ((conversionStream.Read(readBuffer, 0, readBuffer.Length)) != 0)
                {
                    wavData.Write(readBuffer, 0, readBuffer.Length);
                }
                pcmData = wavData.ToArray();
            }
        }
    }
    return pcmData;
}
"Seems" to work fine, but there's another problem, seems that the PCM data byte array is greater than expected. Here's one of the tests I've tested with the method:
Input settings:
44100 Hz, 16 bits, 1 channel, 1846324 bytes of PCM data
Expected (when I resample the same wav file with Audition, Audacity or WaveFormatConversionStream, I get this):
22050 Hz, 16 bits, 1 channel, 923162 bytes
MediaFoundationResampler result:
22050 Hz, 16 bits, 1 channel, 923648 bytes
And the size changes drastically if I change the size of the readBuffer array.
The main problem is that MediaFoundationResampler doesn't have a Length property to tell you the real size of the resampled PCM data. Using WaveFormatConversionStream the code would be the following, but the quality is not very good:
internal byte[] WavChangeFrequency(byte[] rawPcmData, int frequency, int bits, int channels, int newFrequency)
{
    byte[] pcmData;
    using (MemoryStream AudioSample = new MemoryStream(rawPcmData))
    {
        RawSourceWaveStream Original = new RawSourceWaveStream(AudioSample, new WaveFormat(frequency, bits, channels));
        using (WaveFormatConversionStream wavResampler = new WaveFormatConversionStream(new WaveFormat(newFrequency, bits, channels), Original))
        {
            pcmData = new byte[wavResampler.Length];
            wavResampler.Read(pcmData, 0, pcmData.Length);
        }
    }
    return pcmData;
}
What should I do to get the expected PCM data array, using the MediaFoundationResampler?
Disclaimer
I'm not familiar with the NAudio library, so there might be a more proper way of doing this.
EDIT
Still not a good answer; it seems to still be off by a few bytes...
Some corrections to the code, based on Mark Heath's (the NAudio creator's) comment on this answer: https://stackoverflow.com/a/14481756/9658671
I'll keep the answer here for now, as it might help in finding a real answer, but I'll edit or remove it if necessary.
/EDIT
The difference in length between the file produced by Audition and your code is 923648 - 923162 = 486 bytes, which is less than your 1024-byte buffer.
It can be explained by the following mechanism:
At the very last call to the Read method, the remaining byte count is smaller than your buffer size, so instead of getting 1024 bytes, you get fewer.
But your code still writes a full 1024-byte block, instead of the smaller number actually read. That explains the 486-byte difference, and the fact that this number changes if you choose another buffer size.
Fixing this should be easy.
From NAudio documentation:
https://github.com/naudio/NAudio/blob/fb35ce8367f30b8bc5ea84e7d2529e172cf4c381/Docs/WaveProviders.md
The Read method returns the number of bytes that were read. This should never be more than numBytes and can only be less if the end of the audio stream is reached. NAudio playback devices will stop playing when Read returns 0.
So instead of always pushing 1024 bytes at each iteration, push only the number of bytes that the Read method returned.
Also, from Mark Heath's comment:
the buffer size should be configurable to be an exact multiple of the block align of the WaveStream
So instead of choosing a "random" buffer size, use a multiple of the block align.
internal byte[] WavChangeFrequency(byte[] rawPcmData, int frequency, int bits, int channels, int newFrequency, int BlockAlign)
{
    byte[] pcmData;
    var BufferSize = BlockAlign * 1024;
    using (MemoryStream AudioSample = new MemoryStream(rawPcmData))
    {
        RawSourceWaveStream Original = new RawSourceWaveStream(AudioSample, new WaveFormat(frequency, bits, channels));
        using (MediaFoundationResampler conversionStream = new MediaFoundationResampler(Original, newFrequency))
        {
            //Start reading PCM data
            using (MemoryStream wavData = new MemoryStream())
            {
                var ByteCount = 0;
                var readBuffer = new byte[BufferSize];
                while ((ByteCount = conversionStream.Read(readBuffer, 0, readBuffer.Length)) != 0)
                {
                    wavData.Write(readBuffer, 0, ByteCount); // write only the bytes actually read
                }
                pcmData = wavData.ToArray();
            }
        }
    }
    return pcmData;
}

Read a large binary file (5 GB) into a byte array in C#?

I have a recording file (a binary file) of more than 5 GB. I have to read that file and filter out the data that needs to be sent to the server.
The problem is that a byte[] array supports only up to 2 GB of file data, so I need help if someone has already dealt with this type of situation.
using (FileStream str = File.OpenRead(textBox2.Text))
{
    int itemSectionStart = 0x00000000;
    BinaryReader breader = new BinaryReader(str);
    breader.BaseStream.Position = itemSectionStart;
    int length = (int)breader.BaseStream.Length;
    byte[] itemSection = breader.ReadBytes(length); // first frame data
}
Issues:
1: Length exceeds the range of int.
2: Tried using long and uint, but byte[] only supports int lengths and indices.
Edit.
Another approach I want to try: read the data on a frame-buffer basis. Suppose my frame buffer size is 24000; the byte array stores that many frames' worth of data, I process the frame data, flush the byte array, and store the next 24000 frames. Keep going until the end of the binary file.
You cannot read that big a file all at once, so you have to either split the file into small portions and then process them,
OR
read the file using a buffer and flush the buffer once you are done with its data.
I faced the same issue, so I tried the buffer-based approach and it worked for me.
FileStream inputTempFile = new FileStream(Path, FileMode.OpenOrCreate, FileAccess.Read);
int Buffer_value = 1024;
byte[] Array_buffer = new byte[Buffer_value];
int bytesRead;
while ((bytesRead = inputTempFile.Read(Array_buffer, 0, Buffer_value)) > 0)
{
    // walk only the bytes actually read, four at a time
    for (int z = 0; z + 4 <= bytesRead; z += 4)
    {
        string temp_id = BitConverter.ToString(Array_buffer, z, 4);
        string[] temp_strArrayID = temp_id.Split(new char[] { '-' });
        string temp_ArraydataID = temp_strArrayID[0] + temp_strArrayID[1] + temp_strArrayID[2] + temp_strArrayID[3];
    }
}
This way you can process your data.
In my case I was trying to store the buffered data in a List; that works fine up to 2 GB of data, after which it throws an out-of-memory exception.
The approach I followed: read the data from the buffer, apply the needed filters, write the filtered data to a text file, and then process that file.
//text file approach
FileStream inputTempFile = new FileStream(Path, FileMode.OpenOrCreate, FileAccess.Read);
int Buffer_value = 1024;
// note: OutputPath stands in for a separate output file; writing back to the input Path would fail
StreamWriter writer = new StreamWriter(OutputPath, true);
byte[] Array_buffer = new byte[Buffer_value];
int bytesRead;
while ((bytesRead = inputTempFile.Read(Array_buffer, 0, Buffer_value)) > 0)
{
    for (int z = 0; z + 4 <= bytesRead; z += 4) // walk only the bytes actually read
    {
        string temp_id = BitConverter.ToString(Array_buffer, z, 4);
        string[] temp_strArrayID = temp_id.Split(new char[] { '-' });
        string temp_ArraydataID = temp_strArrayID[0] + temp_strArrayID[1] + temp_strArrayID[2] + temp_strArrayID[3];
        if (temp_ArraydataID == "XYZ Condition")
        {
            writer.WriteLine(temp_ArraydataID);
        }
    }
}
writer.Close();
As said in comments, I think you have to read your file with a stream. Here is how you can do this:
int nbRead = 0;
var step = 10000;
byte[] buffer = new byte[step];
var hugeArray = new List<byte[]>(); // collects the file in sub-parts
do
{
    nbRead = breader.Read(buffer, 0, step); // breader is the question's BinaryReader
    if (nbRead > 0)
    {
        // copy the chunk; re-adding the same buffer reference would make
        // every entry point at the most recently read data
        var part = new byte[nbRead];
        Array.Copy(buffer, part, nbRead);
        hugeArray.Add(part);
    }
    foreach (var oneByte in hugeArray.SelectMany(part => part)) // SelectMany needs System.Linq
    {
        // Here you can read byte by byte this subpart
    }
}
while (nbRead > 0);
If I understand your needs correctly, you are looking for a specific pattern in your file?
I think you can do it by looking for the start of your pattern byte by byte, as in the sketch below. Once you find it, you can start reading the important bytes. If the whole important section is greater than 2 GB, as said in the comments, you will have to send it to your server in several parts.
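As a rough illustration, a scan like this finds a byte pattern in a stream of any size without loading it all into memory (a minimal sketch: the simple match counter does not handle self-overlapping patterns such as AAB inside AAAB; a complete version would use KMP-style failure links):

static long FindPattern(Stream stream, byte[] pattern)
{
    byte[] buffer = new byte[64 * 1024];
    long position = 0; // absolute position in the stream
    int matched = 0;   // how many pattern bytes have matched so far
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < bytesRead; i++, position++)
        {
            if (buffer[i] == pattern[matched])
            {
                matched++;
                if (matched == pattern.Length)
                    return position - pattern.Length + 1; // offset where the pattern starts
            }
            else
            {
                matched = (buffer[i] == pattern[0]) ? 1 : 0;
            }
        }
    }
    return -1; // pattern not found
}

Because the match counter carries over between reads, a pattern split across two buffer loads is still found.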

Copy all but the last 16 bytes of a stream? Early detection of end-of-stream?

This is C# related. We have a case where we need to copy an entire source stream into a destination stream except for the last 16 bytes.
EDIT: The streams can range up to 40 GB, so a static byte[] allocation (e.g. .ToArray()) is not an option.
Looking at the MSDN documentation, it seems that we can reliably determine the end of a stream only when the return value is 0. Return values between 0 and the requested size can mean bytes are "not currently available" (what does that really mean?).
Currently it copies every single byte as follows. inStream and outStream are generic - can be memory, disk or network streams (actually some more too).
public static void StreamCopy(Stream inStream, Stream outStream)
{
    var buffer = new byte[8*1024];
    var last16Bytes = new byte[16];
    int bytesRead;
    while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytesRead);
    }
    // Issues:
    // 1. We already wrote the last 16 bytes into
    //    outStream (possibly over the n/w)
    // 2. last16Bytes = ? (inStream may not necessarily support rewinding)
}
What is a reliable way to ensure all but the last 16 bytes are copied? I can think of using Position and Length on the inStream, but there is a gotcha on MSDN that says:
If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
1. Read between 1 and n bytes from the input stream. [1]
2. Append the bytes to a circular buffer. [2]
3. Write the first max(0, b - 16) bytes from the circular buffer to the output stream, where b is the number of bytes in the circular buffer.
4. Remove the bytes that you have just written from the circular buffer.
5. Go to step 1.
(A sketch of these steps in code follows the footnotes below.)
[1] This is what the Read method does – if you call int n = Read(buffer, 0, 500); it will read between 1 and 500 bytes into buffer and return the number of bytes read. If Read returns 0, you have reached the end of the stream.
[2] For maximum performance, you can read the bytes directly from the input stream into the circular buffer. This is a bit tricky, because you have to deal with the wraparound within the array underlying the buffer.
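Put together, the steps look roughly like this (a minimal sketch: it stages each read through a temporary chunk rather than reading directly into the ring as footnote [2] suggests, and it drains byte by byte for clarity rather than speed):

static void CopyAllButLast16(Stream inStream, Stream outStream)
{
    const int holdBack = 16;
    byte[] chunk = new byte[8 * 1024];
    byte[] ring = new byte[chunk.Length + holdBack]; // the circular buffer
    int head = 0, count = 0;                         // next write slot, bytes currently held
    int bytesRead;
    while ((bytesRead = inStream.Read(chunk, 0, chunk.Length)) > 0)
    {
        // Step 2: append the bytes to the circular buffer
        for (int i = 0; i < bytesRead; i++)
        {
            ring[head] = chunk[i];
            head = (head + 1) % ring.Length;
        }
        count += bytesRead;
        // Steps 3-4: write out everything except the 16 bytes held back
        int writable = Math.Max(0, count - holdBack);
        int tail = (head - count + ring.Length) % ring.Length;
        for (int i = 0; i < writable; i++)
        {
            outStream.WriteByte(ring[(tail + i) % ring.Length]);
        }
        count -= writable;
    }
    // the final 16 (or fewer) bytes remain in the ring, unwritten
}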
The following solution is fast and tested. Hope it's useful. It uses the double buffering idea you already had in mind. EDIT: simplified loop removing the conditional that separated the first iteration from the rest.
public static void StreamCopy(Stream inStream, Stream outStream) {
    // Define the size of the chunk to copy during each iteration (1 KiB)
    const int blockSize = 1024;
    const int bytesToOmit = 16;
    const int buffSize = blockSize + bytesToOmit;

    // Generate working buffers
    byte[] buffer1 = new byte[buffSize];
    byte[] buffer2 = new byte[buffSize];

    // Initialize first iteration
    byte[] curBuffer = buffer1;
    byte[] prevBuffer = null;
    int bytesRead;

    // Attempt to fully fill the buffer
    bytesRead = inStream.Read(curBuffer, 0, buffSize);
    if( bytesRead == buffSize ) {
        // We successfully retrieved a whole buffer, we will output
        // only [blockSize] bytes, to avoid writing the last
        // bytes in the buffer in case the remaining 16 bytes happen to
        // be the last ones
        outStream.Write(curBuffer, 0, blockSize);
    } else {
        // We couldn't retrieve the whole buffer
        int bytesToWrite = bytesRead - bytesToOmit;
        if( bytesToWrite > 0 ) {
            outStream.Write(curBuffer, 0, bytesToWrite);
        }
        // There's no more data to process
        return;
    }

    curBuffer = buffer2;
    prevBuffer = buffer1;

    while( true ) {
        // Attempt again to fully fill the buffer
        bytesRead = inStream.Read(curBuffer, 0, buffSize);
        if( bytesRead == buffSize ) {
            // We retrieved the whole buffer, output first the last 16
            // bytes of the previous buffer, and output just [blockSize]
            // bytes from the current buffer
            outStream.Write(prevBuffer, blockSize, bytesToOmit);
            outStream.Write(curBuffer, 0, blockSize);
        } else {
            // We could not retrieve a complete buffer
            if( bytesRead <= bytesToOmit ) {
                // The bytes to output come solely from the previous buffer
                outStream.Write(prevBuffer, blockSize, bytesRead);
            } else {
                // The bytes to output come from the previous buffer and
                // the current buffer
                outStream.Write(prevBuffer, blockSize, bytesToOmit);
                outStream.Write(curBuffer, 0, bytesRead - bytesToOmit);
            }
            break;
        }
        // swap buffers for next iteration
        byte[] swap = prevBuffer;
        prevBuffer = curBuffer;
        curBuffer = swap;
    }
}
static void Assert(Stream inStream, Stream outStream) {
    // Routine that tests the copy worked as expected
    const int bytesToOmit = 16; // must match the constant used in StreamCopy
    inStream.Seek(0, SeekOrigin.Begin);
    outStream.Seek(0, SeekOrigin.Begin);
    Debug.Assert(outStream.Length == Math.Max(inStream.Length - bytesToOmit, 0));
    for( int i = 0; i < outStream.Length; i++ ) {
        int byte1 = inStream.ReadByte();
        int byte2 = outStream.ReadByte();
        Debug.Assert(byte1 == byte2);
    }
}
A much easier solution to code, yet slower since it works at the byte level, is to put an intermediate queue between the input stream and the output stream. The process first reads and enqueues 16 bytes from the input stream. Then it iterates over the remaining input bytes: read a single byte, enqueue it, then dequeue a byte and write it to the output stream, until all bytes from the input stream are processed. The unwanted 16 bytes are left lingering in the intermediate queue.
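In code, that byte-at-a-time queue approach is only a few lines (a minimal sketch, trading speed for obvious correctness):

static void CopyAllButLast16Queue(Stream inStream, Stream outStream)
{
    var queue = new Queue<byte>();
    int b;
    while ((b = inStream.ReadByte()) != -1)
    {
        queue.Enqueue((byte)b);
        if (queue.Count > 16)
            outStream.WriteByte(queue.Dequeue()); // the output always lags the input by 16 bytes
    }
    // the unwanted last 16 bytes are left in the queue
}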
Hope this helps!
=)
Using a circular buffer sounds great, but there is no circular buffer class in .NET, which means additional code anyway. I ended up with the following algorithm, a sort of map-and-copy - I think it's simple. The variable names are longer than usual for the sake of being self-descriptive here.
This flows thru the buffers as
[outStream] <== [tailBuf] <== [mainBuf] <== [inStream]
public byte[] CopyStreamExtractLastBytes(Stream inStream, Stream outStream,
                                         int extractByteCount)
{
    //var mainBuf = new byte[1024*4]; // 4K buffer ok for network too
    var mainBuf = new byte[4651];     // nearby prime for testing
    int mainBufValidCount;
    var tailBuf = new byte[extractByteCount];
    int tailBufValidCount = 0;
    while ((mainBufValidCount = inStream.Read(mainBuf, 0, mainBuf.Length)) > 0)
    {
        // Map: how much of what (passthru/tail) lives where (mainBuf/tailBuf)
        // more than tail is passthru
        int totalPassthruCount = Math.Max(0, tailBufValidCount +
                                             mainBufValidCount - extractByteCount);
        int tailBufPassthruCount = Math.Min(tailBufValidCount, totalPassthruCount);
        int tailBufTailCount = tailBufValidCount - tailBufPassthruCount;
        int mainBufPassthruCount = totalPassthruCount - tailBufPassthruCount;
        int mainBufResidualCount = mainBufValidCount - mainBufPassthruCount;

        // Copy: Passthru must be flushed per FIFO order (tailBuf then mainBuf)
        outStream.Write(tailBuf, 0, tailBufPassthruCount);
        outStream.Write(mainBuf, 0, mainBufPassthruCount);

        // Copy: Now reassemble/compact tail into tailBuf
        var tempResidualBuf = new byte[extractByteCount];
        Array.Copy(tailBuf, tailBufPassthruCount, tempResidualBuf, 0,
                   tailBufTailCount);
        Array.Copy(mainBuf, mainBufPassthruCount, tempResidualBuf,
                   tailBufTailCount, mainBufResidualCount);
        tailBufValidCount = tailBufTailCount + mainBufResidualCount;
        tailBuf = tempResidualBuf;
    }
    return tailBuf;
}

Setting the offset in a stream

It says here (msdn.microsoft.com/en-us/library/system.io.stream.read.aspx) that the Stream.Read and Stream.Write methods both advance the position/offset in the stream automatically, so why do the examples at http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx and http://msdn.microsoft.com/en-us/library/system.io.filestream.read.aspx manually change the offset?
Do you only set the offset in a loop if you know the size of the stream, and set it to 0 if you don't know the size and are using a buffer?
// Now read s into a byte buffer.
byte[] bytes = new byte[s.Length];
int numBytesToRead = (int)s.Length;
int numBytesRead = 0;
while (numBytesToRead > 0)
{
    // Read may return anything from 0 to 10.
    int n = s.Read(bytes, numBytesRead, 10);
    // The end of the file is reached.
    if (n == 0)
    {
        break;
    }
    numBytesRead += n;
    numBytesToRead -= n;
}
and
using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
{
    const int size = 4096;
    byte[] buffer = new byte[size];
    using (MemoryStream memory = new MemoryStream())
    {
        int count = 0;
        do
        {
            count = stream.Read(buffer, 0, size);
            if (count > 0)
            {
                memory.Write(buffer, 0, count);
            }
        }
        while (count > 0);
        return memory.ToArray();
    }
}
The offset is actually the offset of the buffer, not the stream. Streams are advanced automatically as they are read.
Edit (to the edited question):
In none of the code snippets you pasted into the question do I see any stream offset being set.
I think you are mistaking the calculation of bytes to read vs. bytes received. This protocol may seem funny (why would you receive fewer bytes than requested?), but it makes sense when you consider that you might be reading from a high-latency, packet-oriented source (think: network sockets).
You might receive 6 characters in one burst (from a TCP packet) and only receive the remaining 4 characters in your next read (when the next packet has arrived).
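When you genuinely need a fixed number of bytes, the usual pattern is to loop until they have all arrived (a small sketch; works for any Stream):

static int ReadFully(Stream stream, byte[] buffer, int offset, int count)
{
    int total = 0;
    while (total < count)
    {
        int n = stream.Read(buffer, offset + total, count - total);
        if (n == 0)
            break; // end of stream before count bytes arrived
        total += n;
    }
    return total; // less than count only at end of stream
}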
Edit In response to your linked example from the comment:
using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
{
    // ... snip
    count = stream.Read(buffer, 0, size);
    if (count > 0)
    {
        memory.Write(buffer, 0, count);
    }
It appears that the coders relied on prior knowledge of the underlying stream implementation: that stream.Read will always return either 0 OR the size requested. That seems like a risky bet to me. But if the docs for GZipStream do state that, it could be all right. However, since the MSDN samples use a generic Stream variable, it is (way) more correct to check the exact number of bytes read.
The first linked example uses a MemoryStream in both Write and Read fashion. The position is reset in between, so the data that was written first will be read:
Stream s = new MemoryStream();
for (int i = 0; i < 100; i++)
{
    s.WriteByte((byte)i);
}
s.Position = 0;
The second example linked does not set the stream position. You'd typically have seen a call to Seek if it did. You may be confusing the offsets into the data buffer with the stream position?
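A tiny example of the difference (hypothetical values, using a MemoryStream):

var ms = new MemoryStream(new byte[] { 1, 2, 3, 4, 5, 6 });
var dest = new byte[6];
ms.Read(dest, 0, 3); // dest = { 1, 2, 3, 0, 0, 0 }; ms.Position is now 3
ms.Read(dest, 3, 3); // dest = { 1, 2, 3, 4, 5, 6 }; ms.Position is now 6
// The offset argument (3) only says where in 'dest' the bytes land;
// the stream's own Position advanced by itself.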

Error reading file into array

I get the following error on the second iteration of my loop:
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
and this is my loop
FileStream fs = new FileStream("D:\\06.Total Eclipse Of The Moon.mp3", FileMode.Open);
byte[] _FileName = new byte[1024];
long _FileLengh = fs.Length;
int position = 0;
for (int i = 1024; i < fs.Length; i += 1024)
{
    fs.Read(_FileName, position, Convert.ToInt32(i));
    sck.Client.Send(_FileName);
    Thread.Sleep(30);
    long unsend = _FileLengh - position;
    if (unsend < 1024)
    {
        position += (int)unsend;
    }
    else
    {
        position += i;
    }
}
fs.Close();
fs.Length = 5505214
On the first iteration, you're calling
fs.Read(_FileName, 0, 1024);
That's fine (although why you're calling Convert.ToInt32 on an int, I don't know.)
On the second iteration, you're going to call
fs.Read(_FileName, position, 2048);
which is trying to read into the _FileName byte array starting at position (which is non-zero) and fetching up to 2048 bytes. The byte array is only 1024 bytes long, so that can't possibly work.
Additional problems:
You haven't used a using statement, so on exceptions you'll leave the stream open
You're ignoring the return value from Read, which means you don't know how much of your buffer has actually been read
You're unconditionally sending the socket the complete buffer, regardless of how much has been read.
Your code should probably look more like this:
using (FileStream fs = File.OpenRead("D:\\06.Total Eclipse Of The Moon.mp3"))
{
    byte[] buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // send only the bytes actually read (Socket.Send needs SocketFlags with this overload)
        sck.Client.Send(buffer, 0, bytesRead, SocketFlags.None);
        // Do you really need this?
        Thread.Sleep(30);
    }
}
