Certain files getting corrupted by SQL Server FILESTREAM - C#

I am saving files to a SQL Server 2008 (Express) database using FILESTREAM; the trouble I'm having is that certain files seem to be getting corrupted in the process.
For example, if I save a Word or Excel document in one of the newer formats (docx or xlsx), then when I try to open the file I get an error message saying that the data is corrupted and asking whether I would like Word/Excel to try to recover it. If I click yes, Office is able to 'recover' the data and opens the file in compatibility mode.
However, if I zip the file first, then after extracting the contents I'm able to open the file without a problem. Strangely, if I save an mp3 file to the database I have the reverse issue: I can open the file without a problem, but if I save a zipped version of the mp3 I can't even extract the contents of that zip. When I tried to save a PDF or PowerPoint file I ran into similar problems (the PDF I could only read if I zipped it first, and the PPT I couldn't read at all).
Update: here's my code that I'm using to write to the database and to read
To write to the database:
SQL = "SELECT Attachment.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM Activity " +
"WHERE RowID = CAST(#RowID as uniqueidentifier)";
transaction = connection.BeginTransaction();
command.Transaction = transaction;
command.CommandText = SQL;
command.Parameters.Clear();
command.Parameters.Add(rowIDParam);
SqlDataReader readerFS = null;
readerFS = command.ExecuteReader();
readerFS.Read(); // advance to the single row returned
string path = (string)readerFS[0].ToString();
byte[] context = (byte[])readerFS[1];
int length = context.Length;
SqlFileStream targetStream = new SqlFileStream(path, context, FileAccess.Write);
int blockSize = 1024 * 512; //half a megabyte
byte[] buffer = new byte[blockSize];
int bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
while (bytesRead > 0)
{
targetStream.Write(buffer, 0, bytesRead);
bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
}
targetStream.Close();
sourceStream.Close();
readerFS.Close();
transaction.Commit();
And to read:
SqlConnection connection = null;
SqlTransaction transaction = null;
try
{
connection = getConnection();
connection.Open();
transaction = connection.BeginTransaction();
SQL = "SELECT Attachment.PathName(), + GET_FILESTREAM_TRANSACTION_CONTEXT() FROM Activity"
+ " WHERE ActivityID = #ActivityID";
SqlCommand command = new SqlCommand(SQL, connection);
command.Transaction = transaction;
command.Parameters.Add(new SqlParameter("ActivityID", activity.ActivityID));
SqlDataReader reader = command.ExecuteReader();
reader.Read(); // advance to the single row returned
string path = (string)reader[0];
byte[] context = (byte[])reader[1];
int length = context.Length;
reader.Close();
SqlFileStream sourceStream = new SqlFileStream(path, context, FileAccess.Read);
int blockSize = 1024 * 512; //half a megabyte
byte[] buffer = new byte[blockSize];
List<byte> attachmentBytes = new List<byte>();
int bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
while (bytesRead > 0)
{
bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
foreach (byte b in buffer)
{
attachmentBytes.Add(b);
}
}
FileStream outputStream = File.Create(outputPath);
foreach (byte b in attachmentBytes)
{
byte[] barr = new byte[1];
barr[0] = b;
outputStream.Write(barr, 0, 1);
}
outputStream.Close();
sourceStream.Close();
command.Transaction.Commit();

Your read code is incorrect:
while (bytesRead > 0)
{
bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
foreach (byte b in buffer)
{
attachmentBytes.Add(b);
}
}
If bytesRead is less than buffer.Length, you still add the entire buffer to attachmentBytes. Thus you always corrupt the returned document by appending whatever garbage happens to sit at the end of the last buffer past bytesRead.
Other than that, allow me to have a really WTF moment. Reading a stream as a List<byte> ?? C'mon! First, I don't see the reason why you need to read into an intermediate in-memory storage to start with. You can simply read buffer by buffer and write each buffer straight into the outputStream. Second, if you must use an intermediate in-memory storage, use a MemoryStream, not a List<byte>.
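For illustration, here is a minimal sketch of that buffer-by-buffer approach (hedged: sourceStream, outputPath and the block size are taken from the question's code; this is not the poster's exact implementation):
// Hedged sketch: copy the SqlFileStream straight to the output file,
// writing only the bytes actually read on each pass.
using (FileStream outputStream = File.Create(outputPath))
{
    byte[] buffer = new byte[1024 * 512]; // half a megabyte
    int bytesRead;
    while ((bytesRead = sourceStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outputStream.Write(buffer, 0, bytesRead);
    }
}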

I had the exact same problem a few months back and figured out that I was adding an extra byte at the end of the file when reading it from FILESTREAM.


Convert a VERY LARGE binary file into a Base64String incrementally

I need help converting a VERY LARGE binary file (ZIP file) to a Base64String and back again. The files are too large to be loaded into memory all at once (they throw OutOfMemoryExceptions); otherwise this would be a simple task. I do not want to process the contents of the ZIP file individually; I want to process the entire ZIP file.
The problem:
I can convert the entire ZIP file (test sizes vary from 1 MB to 800 MB at present) to a Base64String, but when I convert it back, it is corrupted. The new ZIP file is the correct size; it is recognized as a ZIP file by Windows and WinRAR/7-Zip, etc., and I can even look inside the ZIP file and see the contents with the correct sizes/properties. But when I attempt to extract from the ZIP file, I get: "Error: 0x80004005", which is a general error code.
I am not sure where or why the corruption is happening. I have done some investigating, and I have noticed the following:
If you have a large text file, you can convert it to Base64String incrementally without issue. If calling Convert.ToBase64String on the entire file yielded: "abcdefghijklmnopqrstuvwx", then calling it on the file in two pieces would yield: "abcdefghijkl" and "mnopqrstuvwx".
Unfortunately, if the file is binary, the result is different. While the entire file might yield "abcdefghijklmnopqrstuvwx", trying to process it in two pieces would yield something like "oiweh87yakgb" and "kyckshfguywp".
Is there a way to incrementally base 64 encode a binary file while avoiding this corruption?
My code:
private void ConvertLargeFile()
{
FileStream inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read);
byte[] buffer = new byte[MultipleOfThree];
int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
while(bytesRead > 0)
{
byte[] secondaryBuffer = new byte[buffer.Length];
int secondaryBufferBytesRead = bytesRead;
Array.Copy(buffer, secondaryBuffer, buffer.Length);
bool isFinalChunk = false;
Array.Clear(buffer, 0, buffer.Length);
bytesRead = inputStream.Read(buffer, 0, buffer.Length);
if(bytesRead == 0)
{
isFinalChunk = true;
buffer = new byte[secondaryBufferBytesRead];
Array.Copy(secondaryBuffer, buffer, buffer.Length);
}
String base64String = Convert.ToBase64String(isFinalChunk ? buffer : secondaryBuffer);
File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
}
inputStream.Dispose();
}
The decoding is more of the same. I use the size of the base64String variable above (which varies depending on the original buffer size that I test with), as the buffer size for decoding. Then, instead of Convert.ToBase64String(), I call Convert.FromBase64String() and write to a different file name/path.
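For reference, a minimal sketch of that decode pass might look like the following (hedged: the output path is illustrative, MultipleOfThree is the question's placeholder constant, and the chunk size is simply kept a multiple of 4 so each chunk of Base64 text decodes independently):
// Hedged sketch of the decode side: read the Base64 text in chunks whose
// length is a multiple of 4 and append the decoded bytes to the output file.
private void ConvertLargeFileBack()
{
    int chunkChars = (MultipleOfThree / 3) * 4; // 3 input bytes -> 4 Base64 chars
    using (var reader = new StreamReader("C:\\Users\\test\\Desktop\\Base64Zip"))
    using (var output = new FileStream("C:\\Users\\test\\Desktop\\my_restored.zip", FileMode.Create))
    {
        char[] charBuffer = new char[chunkChars];
        int charsRead;
        while ((charsRead = reader.ReadBlock(charBuffer, 0, charBuffer.Length)) > 0)
        {
            byte[] decoded = Convert.FromBase64String(new string(charBuffer, 0, charsRead));
            output.Write(decoded, 0, decoded.Length);
        }
    }
}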
EDIT:
In my haste to reduce the code (I refactored it into a new project, separate from other processing, to eliminate code that isn't central to the issue) I introduced a bug. The base 64 conversion should be performed on the secondaryBuffer for all iterations save the last (identified by isFinalChunk), when buffer should be used. I have corrected the code above.
EDIT #2:
Thank you all for your comments/feedback. After correcting the bug (see the above edit), I re-tested my code, and it is actually working now. I intend to test and implement @rene's solution as it appears to be the best, but I thought that I should let everyone know of my discovery as well.
Based on the code shown in the blog post by Wiktor Zychla, the following code works. The same solution is indicated in the Remarks section of Convert.ToBase64String, as pointed out by Ivan Stoev.
// using System.Security.Cryptography
private void ConvertLargeFile()
{
//encode
var filein= #"C:\Users\test\Desktop\my.zip";
var fileout = #"C:\Users\test\Desktop\Base64Zip";
using (FileStream fs = File.Open(fileout, FileMode.Create))
using (var cs=new CryptoStream(fs, new ToBase64Transform(),
CryptoStreamMode.Write))
using(var fi =File.Open(filein, FileMode.Open))
{
fi.CopyTo(cs);
}
// the zip file is now stored in base64zip
// and decode
using (FileStream f64 = File.Open(fileout, FileMode.Open) )
using (var cs=new CryptoStream(f64, new FromBase64Transform(),
CryptoStreamMode.Read ) )
using(var fo =File.Open(filein +".orig", FileMode.Create))
{
cs.CopyTo(fo);
}
// the original file is in my.zip.orig
// use the command-line tool
//   fc my.zip my.zip.orig
// to verify that the original file and the encoded-then-decoded file
// are the same
}
The code uses standard classes from the System.Security.Cryptography namespace: a CryptoStream together with ToBase64Transform and its counterpart FromBase64Transform.
You can avoid using a secondary buffer by passing offset and length to Convert.ToBase64String, like this:
private void ConvertLargeFile()
{
using (var inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read))
{
byte[] buffer = new byte[MultipleOfThree];
int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
while(bytesRead > 0)
{
String base64String = Convert.ToBase64String(buffer, 0, bytesRead);
File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
bytesRead = inputStream.Read(buffer, 0, buffer.Length);
}
}
}
The above should work, but I think Rene's answer is actually the better solution.
Use this code:
public void ConvertLargeFile(string source , string destination)
{
using (FileStream inputStream = new FileStream(source, FileMode.Open, FileAccess.Read))
{
int buffer_size = 30000; //or any multiple of 3
byte[] buffer = new byte[buffer_size];
int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
while (bytesRead > 0)
{
byte[] buffer2 = buffer;
if(bytesRead < buffer_size)
{
buffer2 = new byte[bytesRead];
Buffer.BlockCopy(buffer, 0, buffer2, 0, bytesRead);
}
string base64String = System.Convert.ToBase64String(buffer2);
File.AppendAllText(destination, base64String);
bytesRead = inputStream.Read(buffer, 0, buffer.Length);
}
}
}

CSVReader throwing error at the last row last column

I am using CSVReader in my project for reading .csv files in a Windows service. We pass the .csv file to CSVReader for processing, and it was working fine.
But recently we decided to save the csv file to a database table and read it from there: when a user submits a csv file for processing, our aspx page reads the file, converts it to a byte array and saves it to the database, and the service then reads the table, picks up the byte array and converts it back to a stream. This stream is passed to CSVReader for further work.
Now it throws an error for the last row, last column, and it only happens after saving to and reading from the database. I am getting the following error and have no idea how to fix it.
"The CSV appears to be corrupt near record '9' field '3 at position '494'. Current raw data : '
Here is the code.
Converting the file to a byte array:
try
{
fs = new FileStream(filepath, FileMode.Open);
fsbuffer = new byte[fs.Length];
fs.Read(fsbuffer, 0, (int)fs.Length);
}
Reading from the DB into a byte array:
myobject.FileByteArray = ObjectToByteArray(row);
public byte[] ObjectToByteArray(DataRow row)
{
if (row["fileBytearray"] == null)
return null;
try
{
BinaryFormatter bf = new BinaryFormatter();
System.IO.MemoryStream ms = new System.IO.MemoryStream();
bf.Serialize(ms, row["fileBytearray"]);
return ms.ToArray();
}
}
Stream fileStream = new MemoryStream(myobject.FileByteArray)
using (CsvReader csv =
new CsvReader(new StreamReader(fileStream, System.Text.Encoding.UTF7), hasHeader, ','))
I haven't been able to recreate your issue, so without seeing the save and load from beginning to end, I'll suggest trying the following for retrieval:
public MemoryStream LoadReportData(int rowId)
{
MemoryStream stream = new MemoryStream();
using (BinaryWriter writer = new BinaryWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true)) // keep the MemoryStream usable by the caller
{
using (DbConnection connection = db.CreateConnection())
{
DbCommand selectCommand = "SELECT CSVData FROM YourTable WHERE Id = #rowId";
selectCommand.Connection = connection;
db.AddInParameter(selectCommand, "#rowId", DbType.Int32, rowId);
connection.Open();
using (IDataReader reader = selectCommand.ExecuteReader(CommandBehavior.SequentialAccess))
{
while (reader.Read())
{
int startIndex = 0;
int bufferSize = 8192;
byte[] buffer = new byte[bufferSize];
long retVal = reader.GetBytes(0, startIndex, buffer, 0, bufferSize);
while (retVal == bufferSize)
{
writer.Write(buffer);
writer.Flush();
startIndex += bufferSize;
retVal = reader.GetBytes(0, startIndex, buffer, 0, bufferSize);
}
writer.Write(buffer, 0, (int)retVal);
}
}
}
}
stream.Position = 0; // rewind so the caller can read from the start
return stream;
}
You'll need to replace the selectCommand SQL and parameters with whatever SQL you are using to return the data.
This method uses a data reader opened with CommandBehavior.SequentialAccess to read bytes from the appropriate row (identified by rowId in my example) and column (called CSVData in my example), which should avoid any truncation issues on the way out (it could be that your DataTable object is only returning the first n bytes). The MemoryStream object could be used to resave the CSV file to the file system for testing, or fed straight into your CSVReader.
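As an illustration, a hedged sketch of consuming the returned stream (the CsvReader constructor arguments and encoding mirror the question, and a LumenWorks-style ReadNextRecord API is assumed; rowId and hasHeader are placeholders):
// Hedged usage sketch: feed the MemoryStream straight into CsvReader.
using (MemoryStream csvStream = LoadReportData(rowId))
using (CsvReader csv = new CsvReader(new StreamReader(csvStream, System.Text.Encoding.UTF7), hasHeader, ','))
{
    while (csv.ReadNextRecord())
    {
        // process each record as before
    }
}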
If you can post your Save method (where you actually persist the data to the database) then we can check that too to make sure that truncation isn't happening there, either.
One other suggestion I can make right now involves your loading the file into a byte array. If you are going to load the file in one go, then you can simply replace:
try
{
fs = new FileStream(filepath, FileMode.Open);
fsbuffer = new byte[fs.Length];
fs.Read(fsbuffer, 0, (int)fs.Length);
}
with
byte[] fileBytes = File.ReadAllBytes(filepath);

Downloading large files, saved in Database

We have a lot of files, saved as binary in our SQL Server database.
I have made an .ashx file, that delivers these files, to the users.
Unfortunately, when the files become rather large, it fails with the following error:
Overflow or underflow in the arithmetic operation
I assume it runs out of memory, as I load the binary into a byte[].
So my question is: how can I make this functionality read in chunks (maybe?) when the data comes from a database table? Response.TransmitFile() also seems like a good option, but again, how would that work with a database?
The DB.GetReposFile(), in the code beneath, gets the file from the database. There are various fields, for the entry:
Filename, ContentType, datestamps and the FileContent as varbinary.
This is my function, to deliver the file:
context.Response.Clear();
try
{
if (!String.IsNullOrEmpty(context.Request.QueryString["id"]))
{
int id = Int32.Parse(context.Request.QueryString["id"]);
DataTable dtbl = DB.GetReposFile(id);
string FileName = dtbl.Rows[0]["FileName"].ToString();
string Extension = FileName.Substring(FileName.LastIndexOf('.')).ToLower();
context.Response.ContentType = ReturnExtension(Extension);
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
byte[] buffer = (byte[])dtbl.Rows[0]["FileContent"];
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
}
else
{
context.Response.ContentType = "text/html";
context.Response.Write("<p>Need a valid id</p>");
}
}
catch (Exception ex)
{
context.Response.ContentType = "text/html";
context.Response.Write("<p>" + ex.ToString() + "</p>");
}
Update:
The function I ended up with, is the one listed below.
DB.GetReposFileSize() simply gets the content's DATALENGTH, as Tim mentions.
I call this function, in the original code, instead of these two lines:
byte[] buffer = (byte[])dtbl.Rows[0]["FileContent"];
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
New download function:
private void GetFileInChunks(HttpContext context, int ID)
{
//string path = #"c:\somefile.txt";
//FileInfo file = new FileInfo(path);
int len = DB.GetReposFileSize(ID);
context.Response.AppendHeader("content-length", len.ToString());
context.Response.Buffer = false;
//Stream outStream = (Stream)context.Response.OutputStream;
SqlConnection conn = null;
string strSQL = "select FileContent from LM_FileUploads where ID=#ID";
try
{
DB.OpenDB(ref conn, DB.DatabaseConnection.PDM);
SqlCommand cmd = new SqlCommand(strSQL, conn);
cmd.Parameters.AddWithValue("#ID", ID);
SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
reader.Read();
byte[] buffer = new byte[1024];
int bytes;
long offset = 0;
while ((bytes = (int)reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
{
// write only the bytes actually read in this pass
context.Response.OutputStream.Write(buffer, 0, bytes);
offset += bytes;
}
}
catch (Exception ex)
{
throw; // rethrow without resetting the stack trace
}
finally
{
DB.CloseDB(ref conn);
}
}
You can use DATALENGTH to get the size of the VARBINARY column and stream it, for instance, with a SqlDataReader and its GetBytes method.
Have a look at this answer to see an implementation: Best way to stream files in ASP.NET
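For reference, a hedged sketch of how the content length might be looked up (the table and column names come from the question's update; the helper itself is illustrative, not the poster's DB.GetReposFileSize):
// Hedged sketch: fetch the stored size with DATALENGTH so the
// content-length header can be set before streaming.
private int GetReposFileSize(SqlConnection conn, int id)
{
    using (SqlCommand cmd = new SqlCommand(
        "SELECT DATALENGTH(FileContent) FROM LM_FileUploads WHERE ID = @ID", conn))
    {
        cmd.Parameters.AddWithValue("@ID", id);
        return Convert.ToInt32(cmd.ExecuteScalar());
    }
}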

C# / Crystal Reports: Receiving an error when displaying Image from database to Report

I am currently receiving an error when loading an image from a file path retrieved from the database, using the following code from this website:
private void DisplayImages(DataRow row, string img, string ImagePath)
{
FileStream stream = new FileStream(ImagePath, FileMode.Open, FileAccess.Read);
byte[] ImgData = new byte[stream.Length];
stream.Read(ImgData, 0, Convert.ToInt32(stream.Length));
stream.Close();
row[img] = ImgData;
}
The column is named Image and has the data type nvarchar.
The error I am currently experiencing is posted below
Inconvertible type mismatch between SourceColumn 'Image' of String and the DataColumn 'Image' of Byte[].
Don't use the nvarchar type; use a BLOB type (varbinary(max) in SQL Server). Here is a sample:
http://msdn2.microsoft.com/en-us/library/87z0hy49(VS.80).aspx
Code snippet:
// Assumes that connection is a valid SqlConnection object.
SqlCommand command = new SqlCommand(
"SELECT pub_id, logo FROM pub_info", connection);
// Holds the BLOB data (kept in memory here rather than written to a .bmp file).
Stream stream;
// Streams the BLOB into the stream object.
BinaryWriter writer;
// Size of the BLOB buffer.
int bufferSize = 100;
// The BLOB byte[] buffer to be filled by GetBytes.
byte[] outByte = new byte[bufferSize];
// The bytes returned from GetBytes.
long retval;
// The starting position in the BLOB output.
long startIndex = 0;
// The publisher id to use in the file name.
string pubID = "";
// Open the connection and read data into the DataReader.
connection.Open();
SqlDataReader reader = command.ExecuteReader(CommandBehavior.SequentialAccess);
while (reader.Read())
{
// Get the publisher id, which must occur before getting the logo.
pubID = reader.GetString(0);
// Create a memory stream to hold the output.
stream = new MemoryStream();
writer = new BinaryWriter(stream);
// Reset the starting byte for the new BLOB.
startIndex = 0;
// Read bytes into outByte[] and retain the number of bytes returned.
retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
// Continue while there are bytes beyond the size of the buffer.
while (retval == bufferSize)
{
writer.Write(outByte);
writer.Flush();
// Reposition start index to end of last buffer and fill buffer.
startIndex += bufferSize;
retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
}
// Write the remaining buffer.
writer.Write(outByte, 0, (int)retval);
writer.Flush();
// Rewind the stream so it can be consumed below
// (closing it here would make Image.FromStream fail on a disposed stream).
stream.Position = 0;
}
// Close the reader and the connection.
reader.Close();
connection.Close();
Image _Image = Image.FromStream(stream);
stream.Position = 0; // rewind again before handing the stream to WPF
System.Windows.Controls.Image _WPFImage = new System.Windows.Controls.Image();
_WPFImage.Source = System.Windows.Media.Imaging.BitmapFrame.Create(stream);
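For the write side, a hedged sketch of storing the image bytes in a varbinary(max) column rather than an nvarchar path (the table and column names here echo the sample above and are illustrative only):
// Hedged sketch: store the raw image bytes in a varbinary(max) BLOB column.
byte[] imgData = File.ReadAllBytes(imagePath);
using (SqlCommand insert = new SqlCommand(
    "INSERT INTO pub_info (pub_id, logo) VALUES (@pub_id, @logo)", connection))
{
    insert.Parameters.AddWithValue("@pub_id", pubID);
    insert.Parameters.Add("@logo", SqlDbType.VarBinary, -1).Value = imgData; // -1 = varbinary(max)
    insert.ExecuteNonQuery();
}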

Save XLSM file to Database column and retrieve?

The code previously implemented takes an xls file and saves it into a column of a table using a stream. I use the same method, with the only change being that the saved file is an xlsm or xlsx type file, and it saves to the column in the database.
When I try to get the contents back from the database and send the saved xlsm or xlsx file to the user, I get the error "Excel found unreadable content; do you want to recover the contents of this workbook?"
Here's the code to save the xlsm or xlsx file:
System.IO.Stream filestream = System.IO.File.Open(file, System.IO.FileMode.Open);
int fileLength = (int)filestream.Length;
byte[] input = new byte[fileLength];
filestream.Read(input, 0, fileLength);
string Sql = "insert into upload values(#contents)";
con.Open();
System.Data.SqlClient.SqlCommand c = new System.Data.SqlClient.SqlCommand(Sql, con);
c.Parameters.Add("#contents", System.Data.SqlDbType.Binary);
c.Parameters["#contents"].Value = input;
c.ExecuteNonQuery();
To retrieve the file and send it to the user:
SqlCommand comm = new SqlCommand("select contents from upload order by id desc", con);
SqlDataReader reader = comm.ExecuteReader();
int bufferSize = 32768;
byte[] outbyte = new byte[bufferSize];
long retval;
long startIndex = 0;
startIndex = 0;
retval = reader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
while (retval > 0)
{
System.Web.HttpContext.Current.Response.BinaryWrite(outbyte);
startIndex += bufferSize;
if (retval == bufferSize)
{
retval = reader.GetBytes(2, startIndex, outbyte, 0, bufferSize);
}
else
{
retval = 0;
}
}
A couple of things strike me as possibilities.
Firstly, you are not calling reader.Read().
Secondly, there is no need for the check on retval == bufferSize - just call GetBytes again and it will return 0 if no bytes were read from the field.
Thirdly, as you are writing to the HttpResponse you need to make sure that you call Response.Clear() before writing the bytes to the output, and Response.End() after writing the file to the response.
The other thing to try is saving the file to the hard drive and comparing it to the original. Is it the same size? If it is bigger then you are writing too much information to the file (see previous comments about HttpResponse). If it is smaller then you are not writing enough, and are most likely exiting the loop too soon (see comment about retval).
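Putting those suggestions together, a hedged sketch of what the corrected retrieval loop might look like (the column index 0 and the 32 KB buffer come from the question; writing through Response.OutputStream is used here so that only the bytes actually read are sent, and comm/con are the question's command and connection):
// Hedged sketch: call Read(), loop on GetBytes until it returns 0,
// write only the bytes read, and wrap the output in Clear()/End().
HttpResponse response = System.Web.HttpContext.Current.Response;
response.Clear();
using (SqlDataReader reader = comm.ExecuteReader(System.Data.CommandBehavior.SequentialAccess))
{
    if (reader.Read())
    {
        byte[] outbyte = new byte[32768];
        long startIndex = 0;
        long retval;
        while ((retval = reader.GetBytes(0, startIndex, outbyte, 0, outbyte.Length)) > 0)
        {
            response.OutputStream.Write(outbyte, 0, (int)retval);
            startIndex += retval;
        }
    }
}
response.End();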
I couldn't help but notice the number of places where your code failed to wrap an IDisposable in a using block, like the following:
using (SqlConnection con = new SqlConnection(connectionString))
{
byte[] input;
using (System.IO.Stream filestream = System.IO.File.Open(file, System.IO.FileMode.Open))
{
int fileLength = (int)filestream.Length;
input = new byte[fileLength];
filestream.Read(input, 0, fileLength);
}
const string Sql = "insert into upload values(#contents)";
con.Open();
using (System.Data.SqlClient.SqlCommand c = new System.Data.SqlClient.SqlCommand(Sql, con))
{
c.Parameters.Add("#contents", System.Data.SqlDbType.Binary);
c.Parameters["#contents"].Value = input;
c.ExecuteNonQuery();
}
using (SqlCommand comm = new SqlCommand("select contents from upload order by id desc", con))
{
using (SqlDataReader reader = comm.ExecuteReader())
{
int bufferSize = 32768;
byte[] outbyte = new byte[bufferSize];
long retval;
long startIndex = 0;
startIndex = 0;
retval = reader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
while (retval > 0)
{
System.Web.HttpContext.Current.Response.BinaryWrite(outbyte);
startIndex += bufferSize;
if (retval == bufferSize)
{
retval = reader.GetBytes(2, startIndex, outbyte, 0, bufferSize);
}
else
{
retval = 0;
}
}
}
}
}
