I am trying to send a file over a NetworkStream and rebuild it on the client side. I can get the data over correctly (I think), but when I use either a BinaryWriter or a FileStream object to recreate the file, the file is cut off at the beginning, at the same point, no matter which approach I use.
private void ReadandSaveFileFromServer(ref TcpClient clientATF, ref NetworkStream currentStream, string locationToSave)
{
    int fileSize = 0;
    string fileName = "";

    fileName = ReadStringFromServer(ref clientATF, ref currentStream);
    fileSize = ReadIntFromServer(ref clientATF, ref currentStream);

    byte[] fileSent = new byte[fileSize];

    if (currentStream.CanRead && clientATF.Connected)
    {
        currentStream.Read(fileSent, 0, fileSent.Length);
        WriteToConsole("Log Received");
    }
    else
    {
        WriteToConsole("Log Transfer Failed");
    }

    FileStream fileToCreate = new FileStream(locationToSave + "\\" + fileName, FileMode.Create);
    fileToCreate.Seek(0, SeekOrigin.Begin);
    fileToCreate.Write(fileSent, 0, fileSent.Length);
    fileToCreate.Close();

    //binWriter = new BinaryWriter(File.Open(locationToSave + "\\" + fileName, FileMode.Create));
    //binWriter.Write(fileSent);
    //binWriter.Close();
}
When I step through and check fileName and fileSize, they are correct. The byte[] is also fully populated. Any clue as to what I can do next?
Thanks in advance...
Sean
EDIT!!!:
So I figured out what is happening. When I read the string and then the int from the stream, the byte array is 256 bytes long, so my read for the string is also taking in the int, which then clobbers the data that follows. Need to figure this out...
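(One common way out of this kind of clobbering is to length-prefix each field, so the receiver reads exactly as many bytes as were sent and no more. Below is a minimal sketch, assuming UTF-8 strings and a 4-byte length prefix; the helper names are illustrative, not the question's actual ReadStringFromServer/ReadIntFromServer.)

// Hedged sketch of length-prefixed framing (needs System, System.IO, System.Text).
static void WriteString(Stream stream, string value)
{
    byte[] payload = Encoding.UTF8.GetBytes(value);
    stream.Write(BitConverter.GetBytes(payload.Length), 0, 4); // 4-byte length prefix
    stream.Write(payload, 0, payload.Length);
}

static string ReadString(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return Encoding.UTF8.GetString(ReadExactly(stream, length));
}

// Reads exactly 'count' bytes, or throws if the stream ends early.
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException();
        offset += read;
    }
    return buffer;
}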
For one thing, you can use the convenience method File.WriteAllBytes to write the data more simply. But I doubt that that's the problem.
You're assuming you can read all the data in a single call to Read. You're ignoring the return value. Don't do that - instead, read multiple times until either you've read everything you expect to, or you've reached the end of the stream. See this article for more details. If you're using .NET 4, there's a new CopyTo method you may find useful.
(As an aside, your use of ref suggests that you don't understand what it really means. It's well worth making sure you understand how arguments are passed in C#.)
To add to Jon Skeet's answer, your reading code should be:
int bytesRead;
int readPos = 0;
do
{
    bytesRead = currentStream.Read(fileSent, readPos, fileSent.Length - readPos);
    readPos += bytesRead;
} while (bytesRead > 0);
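(Once that loop has filled fileSent, the write side collapses to Jon's File.WriteAllBytes suggestion, reusing the question's own path expression:)

File.WriteAllBytes(locationToSave + "\\" + fileName, fileSent);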
If you are looking for a general solution for sending and receiving files over a network, have you considered using a C# network library? It will probably have solved most of the issues you will come across when trying to do this.
Disclaimer: I'm one of the developers of this library.
Ack. I am trying to open a specific entry in a zip file archive and store the contents in a string, instead of saving it to a file. I cannot use disk space for this per the client.
Here's what I have:
string scontents = "";
byte[] abbuffer = null;
MemoryStream oms = new MemoryStream();
try
{
    // get the file contents
    ozipentry.Open().CopyTo(oms);
    int length = (int)oms.Length; // get file length
    abbuffer = new byte[length];  // create buffer
    int icount;                   // actual number of bytes read
    int isum = 0;                 // total number of bytes read

    // read until Read method returns 0 (end of the stream has been reached)
    while ((icount = oms.Read(abbuffer, isum, length - isum)) > 0)
    {
        isum += icount; // sum is a buffer offset for next reading
    }

    scontents = BytesToString(abbuffer); // <-- abbuffer is filled with ASCII 0
}
finally
{
    oms.Close();
}
The variable abbuffer is supposed to hold the contents of the stream, but all it holds is a bunch of ASCII zeros, which I guess means it didn't read (or copy) the stream! But I do not get any error messages or anything. Can someone tell me how to get this working?
I've looked everywhere on Stack Overflow and on the web, and nowhere does anyone answer this question specifically for the ASP.NET 4.5 ZipArchive library. I cannot use any other library, so an answer using something else, while it would be educational, won't help me at all in this instance. Thanks so much for any help!
One more thing. 'ozipentry' is of type ZipArchiveEntry and is an element in a ZipArchive Entries array (i.e. ozipentry = oziparchive.Entries[i]).
Oops. One more thing! The function 'BytesToString' is not included because it is irrelevant: before it is called, the abbuffer array is already filled with 0's.
Ok. Sorry for being so dense. I realized I was overthinking this. I changed the function to do this:
osr = new StreamReader(ozipentry.Open(), Encoding.Default);
scontents = osr.ReadToEnd();
And it worked fine! Didn't even have to worry about Encoding...
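(For the record, the original buffer-based approach was failing because after CopyTo the MemoryStream's Position is at the end of the stream, so every subsequent Read returns 0 bytes and abbuffer stays zero-filled. Rewinding first should also have fixed it:)

oms.Position = 0; // rewind before reading back, otherwise Read returns nothing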
I have C# code that reads a text file and prints it out, which looks like this:
StreamReader sr = new StreamReader(File.OpenRead(ofd.FileName));
byte[] buffer = new byte[100]; // is there a way to simply specify the length of this to be the number of bytes in the file?
sr.BaseStream.Read(buffer, 0, buffer.Length);
foreach (byte b in buffer)
{
    label1.Text += b.ToString("x") + " ";
}
Is there any way I can know how many bytes my file has?
I want to know the length of the byte[] buffer in advance so that in the Read call I can simply pass buffer.Length as the third argument.
System.IO.FileInfo fi = new System.IO.FileInfo("myfile.exe");
long size = fi.Length;
In order to find the file size, the system has to read from the disk. So the example above performs a read from disk, but it only touches the file's metadata; it does not read the file's content.
It's not clear why you're using StreamReader at all if you're going to read binary data. Just use FileStream instead. You can use the Length property to find the length of the file.
Note, however, that that still doesn't mean you should just call Read and assume that a single call will read all the data. You should loop until you've read everything:
byte[] data;
using (var stream = File.OpenRead(...))
{
    data = new byte[(int) stream.Length];
    int offset = 0;
    while (offset < data.Length)
    {
        int chunk = stream.Read(data, offset, data.Length - offset);
        if (chunk == 0)
        {
            // Or handle this some other way
            throw new IOException("File has shrunk while reading");
        }
        offset += chunk;
    }
}
Note that this is assuming you do want to read the data. If you don't want to even open the stream, use FileInfo.Length as other answers have shown. Note that both FileStream.Length and FileInfo.Length have a type of long, whereas arrays are limited to 32-bit lengths. What do you want to happen with a file which is bigger than 2 gigs?
You can use the FileInfo.Length property.
Take a look at the example given in the link.
I would imagine something in here should help.
I doubt you can preemptively guess the size of a file without reading it...
How do I use File.ReadAllBytes in chunks
If it is a large file, then reading it in chunks might help.
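(A minimal sketch of that idea; the path variable and the 64 KB buffer size are illustrative choices, not from the question:)

using (FileStream fs = File.OpenRead(path))
{
    byte[] chunk = new byte[64 * 1024];
    int read;
    while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
    {
        // process chunk[0..read) here instead of holding the whole file in memory
    }
}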
OK, I made a C# WinForms app, a File_Splitter_Joiner.
You just give it a file and it splits it into the number of pieces you specify.
The splitting is done in a separate thread.
Everything was working pretty fine until I sliced a 1-gig file!
In the task manager, I saw that my program started consuming 1 gigabyte of memory and my computer almost died!
Not just that: when slicing finished, the consumption didn't change!
(Dunno if this means the garbage collector isn't working, although I'm pretty sure I lost all references to whatever was holding the big data chunks, so it should work.)
Here's the Splitter constructor (just to give you a better idea):
public FileSplitter(string FileToSplitPath, string PiecesFolder, int NumberOfPieces, int PieceSize, SplittingMethod Method)
{
    FileToSplitInfo = new FileInfo(FileToSplitPath);
    this.FileToSplitPath = FileToSplitPath;
    this.PiecesFolder = PiecesFolder;
    this.NumberOfPieces = NumberOfPieces;
    this.PieceSize = PieceSize;
    this.Method = Method;
    SplitterThread = new Thread(Split);
}
And here is the method that did the actual splitting:
(I'm still a newbie, so what you're about to see 'may not' be done in the best way ever possible, I'm just learning here)
private void Split()
{
    int remainingSize = 0;
    int remainingPos = -1;
    bool isNumberOfPiecesEqualInSize = true;

    int fileSize = (int)FileToSplitInfo.Length; // FileToSplitInfo is a FileInfo object
    if (fileSize % PieceSize != 0)
    {
        remainingSize = fileSize % PieceSize;
        remainingPos = fileSize - remainingSize;
        isNumberOfPiecesEqualInSize = false;
    }

    byte[] fileBytes = new byte[fileSize];
    var _fs = File.Open(FileToSplitPath, FileMode.Open);
    BinaryReader br = new BinaryReader(_fs);
    br.Read(fileBytes, 0, fileSize);
    br.Close();
    _fs.Close();

    for (int i = 0, index = 0; i < NumberOfPieces; i++, index += PieceSize)
    {
        var fs = File.Create(PiecesFolder + "\\" + Path.GetFileName(FileToSplitPath) + "." + (i + 1).ToString());
        var bw = new BinaryWriter(fs);
        bw.Write(fileBytes, index, PieceSize);
        if (i == NumberOfPieces - 1 && !isNumberOfPiecesEqualInSize && Method == SplittingMethod.NumberOfPieces)
            bw.Write(fileBytes, remainingPos, remainingSize);
        bw.Close();
        fs.Close();
    }

    MessageBox.Show("File has been split successfully!");
    SplitterThread.Abort();
}
Now, instead of reading the bytes of the file via a BinaryReader, I was first reading them via the File.ReadAllBytes method. It worked fine with small file sizes, but I got an OutOfMemoryException when I dealt with our big guy; dunno why I didn't get that exception when I read the bytes via a BinaryReader.
(That was an in-between question.)
So, the main question is: how can I load big files (gigs speaking) in a way that doesn't consume so much memory? I mean, how can I make my program not consume all that memory?
And how can I free the used memory after the splitting is done?
(I actually used
bw.Dispose(); fs.Dispose();
instead of
bw.Close(); fs.Close();
and it was the same.)
I know the Q might not make sense, cuz when we load something it gets into our memory, not somewhere else. But the reason I asked it like that is cuz I used another splitting/joining program (not written by me) just to see if it had the same problem: I loaded the file, the program consumed about 5 megs of RAM, and when I started splitting it used about 10 megs!!
Now that is a VERY big difference... Probably that app was in C/C++.
So to sum up, who sucks? Is it my code, and if so how can I fix it? Or is it C# when it comes to performance?
Thank you SOOO much for anything you could hook me up with :)
The following 2 lines will kill you:
int fileSize = (int)FileToSplitInfo.Length; // a FileInfo object
...
byte[] fileBytes = new byte[fileSize];
Your code will fail when the size is over Int32.MaxValue. The cast is unnecessary; just use long fileSize = FileToSplitInfo.Length;
Even this corrected code will fail when there is not enough contiguous memory: fragmentation (of the LOH) will bring you down sooner or later.
You allocate memory for the entire file, but you only need PieceSize bytes at a time.
You don't even need to know the fileSize; just:
byte[] pieceBuffer = new byte[PieceSize];
while (true)
{
    int nBytes = br.Read(pieceBuffer, 0, pieceBuffer.Length);
    if (nBytes == 0)
        break;

    // write this piece, the length is nBytes
}
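(Filled out a little, assuming the question's FileToSplitPath, PiecesFolder and PieceSize fields are in scope; the piece-naming scheme just mirrors the question's code:)

using (var source = File.OpenRead(FileToSplitPath))
{
    byte[] pieceBuffer = new byte[PieceSize];
    int pieceIndex = 1;
    int nBytes;
    while ((nBytes = source.Read(pieceBuffer, 0, pieceBuffer.Length)) > 0)
    {
        string piecePath = Path.Combine(PiecesFolder,
            Path.GetFileName(FileToSplitPath) + "." + pieceIndex++);
        using (var piece = File.Create(piecePath))
        {
            piece.Write(pieceBuffer, 0, nBytes); // only one piece is ever in memory
        }
    }
}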
There are different aspects that can be made better:
If you are working with a big file, why first read it all into an array and afterwards write it into other files? Just write into the new file while reading from the other.
Use using to guarantee disposal of the streams in any case, whether there is an exception or not.
If you begin to work with really big files, like 1 GB or even more, I would recommend looking at Memory-Mapped Files. You gain a big memory-consumption benefit at some increased performance cost.
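(A hedged memory-mapped sketch; it needs System.IO.MemoryMappedFiles, and the zero offset and piece-sized window are illustrative:)

using (var mmf = MemoryMappedFile.CreateFromFile(FileToSplitPath, FileMode.Open))
using (var view = mmf.CreateViewStream(0, PieceSize)) // map one piece-sized window
{
    byte[] piece = new byte[PieceSize];
    int read = view.Read(piece, 0, piece.Length);
    // write 'piece' out as before, then map the next window at the next offset
}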
My problem is simple yet complex. I'm a SQL DB Developer that now has to create an SSIS package with C# code in it. The code needs to do a few things but the components I'm having a problem with are:
Reading the last line in the text file and inserting that line into columns in a SQL table.
Each of the detail rows, from the 2nd row to the 2nd-last row, needs to be inserted into another table.
I'm relatively new to C# and have NO idea how to go about this. Can someone please guide me in the right direction?
How big is the file? If you can read it all into memory, it'll make life a lot simpler:
string[] lines = File.ReadAllLines("file.txt");
// Could use lines[lines.Length - 1] but LINQ makes it simpler...
string lastLine = lines.Last();
IEnumerable<string> otherLines = lines.Skip(1).Take(lines.Length - 2);
So that's the "getting the data out of a file" part. For the database part, you haven't really given us enough information to help you. You need to determine how you're going to do database access (straight ADO.NET, LINQ etc) and then read tutorials on that topic. There are plenty around.
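(If the ADO.NET route is chosen, here is a minimal hedged sketch; the connection string, table name and column name are placeholders, not anything from the question:)

// Needs System.Data.SqlClient; DetailRows/LineText are made-up names.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (string line in otherLines)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO DetailRows (LineText) VALUES (@line)", conn))
        {
            cmd.Parameters.AddWithValue("@line", line);
            cmd.ExecuteNonQuery();
        }
    }
}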
Try this; I found it online for you:
public static String ReadLastLine(string path)
{
    return ReadLastLine(path, Encoding.ASCII, "\n");
}

public static String ReadLastLine(string path, Encoding encoding, string newline)
{
    int charsize = encoding.GetByteCount("\n");
    byte[] buffer = encoding.GetBytes(newline);
    using (FileStream stream = new FileStream(path, FileMode.Open))
    {
        long endpos = stream.Length / charsize;
        // walk backwards from the end of the file, one character at a time
        for (long pos = charsize; pos < endpos; pos += charsize)
        {
            stream.Seek(-pos, SeekOrigin.End);
            stream.Read(buffer, 0, buffer.Length);
            if (encoding.GetString(buffer) == newline)
            {
                // found the last newline: everything after it is the last line
                buffer = new byte[stream.Length - stream.Position];
                stream.Read(buffer, 0, buffer.Length);
                return encoding.GetString(buffer);
            }
        }
    }
    return null;
}
http://social.msdn.microsoft.com/Forums/en/netfxbcl/thread/ff6c07e2-9c36-4490-a989-f24dcff76145
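(Usage would be something like this; the path is illustrative:)

string lastLine = ReadLastLine(@"C:\logs\output.txt"); // null if no newline is found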
There is probably no other way to do this, but is there a way to append the contents of one text file into another text file, while clearing the first after the move?
The only way I know is to just use a reader and writer, which seems inefficient for large files...
Thanks!
No, I don't think there's anything which does this.
If the two files use the same encoding and you don't need to verify that they're valid, you can treat them as binary files, e.g.
using (Stream input = File.OpenRead("file1.txt"))
using (Stream output = new FileStream("file2.txt", FileMode.Append,
                                      FileAccess.Write, FileShare.None))
{
    input.CopyTo(output); // Using .NET 4
}
File.Delete("file1.txt");
Note that if file1.txt contains a byte order mark, you should skip past this first to avoid having it in the middle of file2.txt.
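(A hedged sketch of that BOM skip for UTF-8, whose marker is EF BB BF; other encodings use different markers. This would go just before the CopyTo call above:)

byte[] possibleBom = new byte[3];
int read = input.Read(possibleBom, 0, possibleBom.Length);
bool isUtf8Bom = read == 3 && possibleBom[0] == 0xEF
                           && possibleBom[1] == 0xBB
                           && possibleBom[2] == 0xBF;
if (!isUtf8Bom && read > 0)
{
    output.Write(possibleBom, 0, read); // not a BOM, so keep these bytes
}
input.CopyTo(output);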
If you're not using .NET 4 you can write your own equivalent of Stream.CopyTo... even with an extension method to make the hand-over seamless:
public static class StreamExtensions
{
    public static void CopyTo(this Stream input, Stream output)
    {
        if (input == null)
        {
            throw new ArgumentNullException("input");
        }
        if (output == null)
        {
            throw new ArgumentNullException("output");
        }
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, bytesRead);
        }
    }
}
Ignoring error handling, encodings, and efficiency for the moment, something like this would probably work (but I haven't tested it)
File.AppendAllText("path/to/destination/file", File.ReadAllText("path/to/source/file"));
Then you just have to delete or clear out the first file once this step is complete.
The cmd.exe version of this is
type fileone.txt >>filetwo.txt
del fileone.txt
You could shell out to the system to do this. It should be pretty efficient.
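(A hedged sketch of doing that from C# with System.Diagnostics; the file names are the ones from the cmd example:)

var psi = new ProcessStartInfo("cmd.exe",
    "/c type fileone.txt >> filetwo.txt && del fileone.txt")
{
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}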
I'm not sure what you mean by "inefficient". Jon's answer is probably enough for most cases.
However, if you are concerned about extremely large source files, Memory-Mapped Files could be your friend. See this link for more info.