How to read a binary blob from Firebird into a byte[] with C#?

I have very poor knowledge of C#, but I need to write code that reads a binary blob into a byte[].
I wrote this code:
byte[] userBlob;
myCommand.CommandText = "SELECT id, userblob FROM USERS";
myCommand.Connection = myFBConnection;
myCommand.Transaction = myTransaction;
FbDataReader reader = myCommand.ExecuteReader();
try
{
    while (reader.Read())
    {
        Console.WriteLine(reader.GetString(0));
        userBlob = // what should I do here??
    }
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
    Console.WriteLine("Can't read data from DB");
}
But what should I put there? As I understand it, I need to use streams, but I can't figure out how to do it.

A little late to the game; I hope this is on point.
I'm assuming you're using the Firebird .NET provider, which is a C# implementation that does not ride on top of the native fbclient.dll. Unfortunately, it does not provide a streaming interface to BLOBs, which would allow reading potentially huge data in chunks without blowing out memory.
Instead, you use the FbDataReader.GetBytes() method to read the data, and it all has to fit in memory. GetBytes takes a user-provided buffer and stuffs the BLOB data in the position referenced, and it returns the number of bytes it actually copied (which could be less than the full size).
Passing a null buffer to GetBytes returns you the full size of the BLOB (but no data!) so you can reallocate as needed.
Here we assume you have an INT for field #0 (not interesting) and the BLOB for #1, and this naive implementation should take care of it:
// temp buffer for all BLOBs, reallocated as needed
byte[] blobbuffer = new byte[512];

while (reader.Read())
{
    int id = reader.GetInt32(0); // read first field

    // get bytes required for this BLOB
    long n = reader.GetBytes(
        i: 1,            // field number
        dataIndex: 0,
        buffer: null,    // no buffer = size check only
        bufferIndex: 0,
        length: 0);

    // extend buffer if needed
    if (n > blobbuffer.Length)
        blobbuffer = new byte[n];

    // read again into nominally "big enough" buffer
    n = reader.GetBytes(1, 0, blobbuffer, 0, blobbuffer.Length);

    // Now: <n> bytes of <blobbuffer> has your data. Go at it.
}
It's possible to optimize this somewhat, but the Firebird .NET provider really needs a streaming BLOB interface like the native fbclient.dll offers.
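One possible optimization is to pull the BLOB in fixed-size pieces instead of one allocation sized to the whole value. This is only a sketch, assuming FbDataReader.GetBytes honors a nonzero dataIndex the way other ADO.NET readers do (ideally with CommandBehavior.SequentialAccess on the reader); whether it actually saves memory depends on the provider's internal buffering, per the caveat above, but it at least avoids allocating one giant byte[] of your own:
// Sketch: read the BLOB in 8 KB pieces instead of one big allocation.
// Field #1 is assumed to be the BLOB column, as in the code above.
const int pieceSize = 8192;
byte[] piece = new byte[pieceSize];
long fieldOffset = 0;
long got;

while ((got = reader.GetBytes(1, fieldOffset, piece, 0, pieceSize)) > 0)
{
    // process piece[0 .. got) here, e.g. write it to a FileStream
    fieldOffset += got;
    if (got < pieceSize)
        break; // a short read means we've hit the end of the BLOB
}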

byte[] toBytes = Encoding.ASCII.GetBytes(yourString);
So, in your case;
userBlob = Encoding.ASCII.GetBytes(reader.GetString(0));
However, I am not sure what you are trying to achieve with your code as you are pulling back all users and then creating the blob over and over.

Related

Stream File Content from DB

Back in the WebForms days, I could use Response.OutputStream.Write() and Response.Flush() to chunk file data to the client - because the files we are streaming are huge and will consume too much web server memory. How can I do that now with the new MVC classes like FileStreamResult?
My exact situation is: The DB contains the file data (CSV or XLS) in a VarBinary column. In the WebForms implementation, I send down a System.Func to the DataAccess layer, which would iterate through the IDataReader and use the System.Func to stream the content to the client. The point is that I don't want the webapp to have to have any specific DB knowledge, including IDataReader.
How can I achieve the same result using MVC?
The Func (which I define in the web layer and send down to the DB layer) is:
Func<byte[], long, bool> partialUpdateFunc = (data, length) =>
{
    if (Response.IsClientConnected)
    {
        // Write the data to the current output stream.
        Response.OutputStream.Write(data, 0, (int)length);
        // Flush the data to the HTML output.
        Response.Flush();
        return true;
    }
    else
    {
        return false;
    }
};
and in the DB layer, we get the IDataReader from the DB stored procedure (inside a using statement with ExecuteReader):
using (var reader = conn.ExecuteReader())
{
    if (reader.Read())
    {
        byte[] outByte = new byte[BufferSize];
        long startIndex = 0;

        // Read bytes into outByte[] and retain the number of bytes returned.
        long retval = reader.GetBytes(0, startIndex, outByte, 0, BufferSize);

        // Continue while there are bytes beyond the size of the buffer.
        bool stillConnected = true;
        while (retval == BufferSize)
        {
            stillConnected = partialUpdateFunc(outByte, retval);
            if (!stillConnected)
            {
                break;
            }

            // Reposition start index to end of last buffer and fill buffer.
            startIndex += BufferSize;
            retval = reader.GetBytes(0, startIndex, outByte, 0, BufferSize);
        }

        // Write the remaining buffer.
        if (stillConnected)
        {
            partialUpdateFunc(outByte, retval);
        }
    }

    // Close the reader and the connection.
    reader.Close();
}
If you want to reuse FileStreamResult, you need to create a Stream-derived class that reads data from the DB and pass that stream to the FileStreamResult; a sketch of such a class follows the list below.
A couple of issues with that approach:
Action results are executed synchronously, so your download will not release the thread while data is read from the DB and sent; that may be OK for a small number of parallel downloads. To get around it you may need to use a handler or download from an async action directly (which feels wrong for the MVC approach).
At least old versions of FileStreamResult did not have "streaming" support (discussed here); make sure you are fine with that.
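Here is a rough sketch (untested, the class name is my own) of such a Stream-derived class: it wraps an IDataReader that is already positioned on a row and pulls the VarBinary column through GetBytes on demand, so a FileStreamResult can stream it without the web layer touching IDataReader directly.
using System;
using System.Data;
using System.IO;

// Read-only Stream over column 0 of an IDataReader (assumed to be the VarBinary
// data, opened with CommandBehavior.SequentialAccess and already advanced with
// reader.Read()). Disposing the stream disposes the reader.
public class DataReaderStream : Stream
{
    private readonly IDataReader _reader;
    private long _dataOffset;

    public DataReaderStream(IDataReader reader)
    {
        _reader = reader;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long read = _reader.GetBytes(0, _dataOffset, buffer, offset, count);
        _dataOffset += read;
        return (int)read;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { return _dataOffset; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            _reader.Dispose();
        base.Dispose(disposing);
    }
}
The controller action would then return something like new FileStreamResult(new DataReaderStream(reader), "text/csv"), keeping the two caveats above in mind.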

How to find the no. of bytes of a text file without reading it?

I have c# code reading a text file and printing it out which looks like this:
StreamReader sr = new StreamReader(File.OpenRead(ofd.FileName));
byte[] buffer = new byte[100]; // is there a way to simply specify the length of this to be the number of bytes in the file?
sr.BaseStream.Read(buffer, 0, buffer.Length);
foreach (byte b in buffer)
{
    label1.Text += b.ToString("x") + " ";
}
Is there any way I can know how many bytes my file has?
I want to know the length of the byte[] buffer in advance so that in the Read function, I can simply pass in buffer.Length as the third argument.
System.IO.FileInfo fi = new System.IO.FileInfo("myfile.exe");
long size = fi.Length;
In order to find the file size, the system has to read from the disk, but the example above only reads the file's metadata, not its content.
It's not clear why you're using StreamReader at all if you're going to read binary data. Just use FileStream instead. You can use the Length property to find the length of the file.
Note, however, that that still doesn't mean you should just call Read and assume that a single call will read all the data. You should loop until you've read everything:
byte[] data;
using (var stream = File.OpenRead(...))
{
    data = new byte[(int)stream.Length];
    int offset = 0;
    while (offset < data.Length)
    {
        int chunk = stream.Read(data, offset, data.Length - offset);
        if (chunk == 0)
        {
            // Or handle this some other way
            throw new IOException("File has shrunk while reading");
        }
        offset += chunk;
    }
}
Note that this is assuming you do want to read the data. If you don't want to even open the stream, use FileInfo.Length as other answers have shown. Note that both FileStream.Length and FileInfo.Length have a type of long, whereas arrays are limited to 32-bit lengths. What do you want to happen with a file which is bigger than 2 gigs?
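For example, if the answer is simply "refuse such files", a minimal up-front guard could look like this (a sketch only; path is a placeholder):
using (var stream = File.OpenRead(path))
{
    if (stream.Length > int.MaxValue)
        throw new IOException("File is too large to load into a single byte[].");

    var data = new byte[(int)stream.Length];
    // ... then fill it with the read loop shown above ...
}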
You can use the FileInfo.Length property.
Take a look at the example given in the link.
I would imagine something in here should help.
I doubt you can preemptively guess the size of a file without reading it...
How do I use File.ReadAllBytes in chunks?
If it is a large file, then reading it in chunks might help.
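File.ReadAllBytes itself always loads the whole file, so "in chunks" really means reading through a FileStream with a fixed-size buffer. A minimal sketch (the path and the 4 KB buffer size are just placeholders):
using (var stream = File.OpenRead(path))
{
    var buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0 .. bytesRead) here
    }
}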

File Chunking Performance in C#

I am trying to empower users to upload large files. Before I upload a file, I want to chunk it up. Each chunk needs to be a C# object. The reason why is for logging purposes. It's a long story, but I need to create actual C# objects that represent each file chunk. Regardless, I'm trying the following approach:
public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    List<FileChunk> chunks = new List<FileChunk>();
    if (fileBytes.Length > 0)
    {
        FileChunk chunk = new FileChunk();
        for (int i = 0; i < (fileBytes.Length / 512); i++)
        {
            chunk.Number = (i + 1);
            chunk.Offset = (i * 512);
            chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
            chunks.Add(chunk);
            chunk = new FileChunk();
        }
    }
    return chunks;
}
Unfortunately, this approach seems to be incredibly slow. Does anyone know how I can improve the performance while still creating objects for each chunk?
thank you
I suspect this is going to hurt a little:
chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
Try this instead:
byte[] buffer = new byte[512];
Buffer.BlockCopy(fileBytes, chunk.Offset, buffer, 0, 512);
chunk.Bytes = buffer;
(Code not tested)
And the reason why this code would likely be slow is because Skip doesn't do anything special for arrays (though it could). This means that every pass through your loop is iterating the first 512*n items in the array, which results in O(n^2) performance, where you should just be seeing O(n).
Try something like this (untested code):
public static List<FileChunk> GetAllForFile(string fileName)
{
    var chunks = new List<FileChunk>();
    using (var stream = new FileStream(fileName, FileMode.Open))
    {
        int i = 0;
        while (stream.Position < stream.Length)
        {
            var data = new byte[512];
            int bytesRead = stream.Read(data, 0, 512);
            if (bytesRead < 512)
                Array.Resize(ref data, bytesRead); // the last chunk may be short

            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = data;
            chunks.Add(chunk);
            i++;
        }
    }
    return chunks;
}
The above code skips several steps in your process, preferring to read the bytes from the file directly.
Note that, if the file is not an even multiple of 512, the last chunk will contain less than 512 bytes.
Same as Robert Harvey's answer, but using a BinaryReader, that way I don't need to specify an offset. If you use a BinaryWriter on the other end to reassemble the file, you won't need the Offset member of FileChunk.
public static List<FileChunk> GetAllForFile(string fileName) {
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open)) {
        BinaryReader reader = new BinaryReader(stream);
        int i = 0;
        bool eof = false;
        while (!eof) {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = reader.ReadBytes(512);
            chunks.Add(chunk);
            i++;
            if (chunk.Bytes.Length < 512) { eof = true; }
        }
    }
    return chunks;
}
Have you thought about what you're going to do to compensate for packet loss and data corruption?
Since you mentioned that the load is taking a long time, I would use asynchronous file reading in order to speed up the loading process. The hard disk is the slowest component of a computer. Google does asynchronous reads and writes in Google Chrome to improve their load times. I had to do something like this in C# in a previous job.
The idea would be to spawn several asynchronous requests over different parts of the file. Then when a request comes in, take the byte array and create your FileChunk objects taking 512 bytes at a time. There are several benefits to this:
If you have this run in a separate thread, then you won't have the whole program waiting to load the large file you have.
You can process a byte array, creating FileChunk objects, while the hard disk is still trying to fulfill read requests on other parts of the file.
You will save RAM if you limit the number of pending read requests you can have. This allows less page faulting to the hard disk and uses the RAM and CPU cache more efficiently, which speeds up processing further.
You would want to use the following methods in the FileStream class.
[HostProtectionAttribute(SecurityAction.LinkDemand, ExternalThreading = true)]
public virtual IAsyncResult BeginRead(
    byte[] buffer,
    int offset,
    int count,
    AsyncCallback callback,
    Object state
)

public virtual int EndRead(
    IAsyncResult asyncResult
)
Also this is what you will get in the asyncResult:
// Extract the FileStream (state) out of the IAsyncResult object
FileStream fs = (FileStream) ar.AsyncState;
// Get the result
Int32 bytesRead = fs.EndRead(ar);
Here is some reference material for you to read.
This is a code sample of working with Asynchronous File I/O Models.
This is a MS documentation reference for Asynchronous File I/O.
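To make the idea concrete, here is a rough sketch (untested, names are my own) of the BeginRead/EndRead pattern reading one 512-byte chunk at a time; the FileChunk type is assumed from the question:
using System;
using System.IO;

class AsyncChunkReader
{
    private readonly byte[] _buffer = new byte[512];
    private FileStream _fs;

    public void Start(string fileName) // fileName is a placeholder
    {
        _fs = new FileStream(fileName, FileMode.Open, FileAccess.Read,
                             FileShare.Read, 512, useAsync: true);
        _fs.BeginRead(_buffer, 0, _buffer.Length, OnReadComplete, _fs);
    }

    private void OnReadComplete(IAsyncResult ar)
    {
        // Extract the FileStream (state) out of the IAsyncResult object
        FileStream fs = (FileStream)ar.AsyncState;
        // Get the result
        int bytesRead = fs.EndRead(ar);

        if (bytesRead > 0)
        {
            // ... build a FileChunk from _buffer[0 .. bytesRead) here ...
            fs.BeginRead(_buffer, 0, _buffer.Length, OnReadComplete, fs); // request the next chunk
        }
        else
        {
            fs.Dispose(); // end of file
        }
    }
}
On newer frameworks the same loop is usually written with FileStream.ReadAsync and async/await, but the callback form above matches the BeginRead/EndRead signatures listed earlier.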

How to store an IStream to a file via C#?

I'm working with a 3rd party component that returns an IStream object (System.Runtime.InteropServices.ComTypes.IStream). I need to take the data in that IStream and write it to a file. I've managed to get that done, but I'm not really happy with the code.
With "strm" being my IStream, here's my test code...
// access the structure containing statistical info about the stream
System.Runtime.InteropServices.ComTypes.STATSTG stat;
strm.Stat(out stat, 0);
System.IntPtr myPtr = (IntPtr)0;
// get the "cbSize" member from the stat structure
// this is the size (in bytes) of our stream.
int strmSize = (int)stat.cbSize; // *** DANGEROUS *** (long to int cast)
byte[] strmInfo = new byte[strmSize];
strm.Read(strmInfo, strmSize, myPtr);
string outFile = @"c:\test.db3";
File.WriteAllBytes(outFile, strmInfo);
At the very least, I don't like the long to int cast as commented above, but I wonder if there's not a better way to get the original stream length than the above? I'm somewhat new to C#, so thanks for any pointers.
You don't need to do that cast, as you can read data from IStream source in chunks.
// ...
// pcbRead must point at real memory that IStream.Read can write the byte count into
IntPtr bytesReadPtr = Marshal.AllocHGlobal(sizeof(int));
try
{
    using (FileStream fs = new FileStream(@"c:\test.db3", FileMode.OpenOrCreate))
    {
        byte[] buffer = new byte[8192];
        int bytesRead;
        do
        {
            strm.Read(buffer, buffer.Length, bytesReadPtr);
            bytesRead = Marshal.ReadInt32(bytesReadPtr);
            fs.Write(buffer, 0, bytesRead);
        } while (bytesRead == buffer.Length); // a short read means the end of the stream
    }
}
finally
{
    Marshal.FreeHGlobal(bytesReadPtr);
}
This way (if it works for your stream) is more memory efficient, as it just uses a small block of memory to transfer data between the two streams.
System.Runtime.InteropServices.ComTypes.IStream is the managed wrapper for the COM IStream interface, which extends ISequentialStream.
From MSDN: http://msdn.microsoft.com/en-us/library/aa380011(VS.85).aspx
The actual number of bytes read can be less than the number of bytes requested if an error occurs or if the end of the stream is reached during the read operation. The number of bytes returned should always be compared to the number of bytes requested. If the number of bytes returned is less than the number of bytes requested, it usually means the Read method attempted to read past the end of the stream.
In other words, you can keep reading in a loop for as long as pcbRead equals cb; once pcbRead comes back less than cb, you have reached the end of the stream.

How to insert characters to a file using C#

I have a huge file where I have to insert certain characters at a specific location. What is the easiest way to do that in C# without rewriting the whole file?
Filesystems do not support "inserting" data in the middle of a file. If you really have a need for a file that can be written to in a sorted kind of way, I suggest you look into using an embedded database.
You might want to take a look at SQLite or BerkeleyDB.
Then again, you might be working with a text file or a legacy binary file. In that case your only option is to rewrite the file, at least from the insertion point up to the end.
I would look at the FileStream class to do random I/O in C#.
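For instance, a small sketch of random-access writing with a FileStream (note this overwrites bytes in place rather than inserting; the path and offset are placeholders):
using (var fs = new FileStream(@"c:\data.bin", FileMode.Open, FileAccess.ReadWrite))
{
    fs.Seek(1024, SeekOrigin.Begin);      // jump to byte offset 1024
    byte[] patch = { 0x58, 0x59, 0x5A };  // "XYZ"
    fs.Write(patch, 0, patch.Length);     // overwrites 3 bytes at that position
}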
You will probably need to rewrite the file from the point where you insert the changes to the end. You might be best off always writing to the end of the file and using tools such as sort and grep to get the data out in the desired order. I am assuming you are talking about a text file here, not a binary file.
There is no way to insert characters into a file without rewriting them. With C# it can be done with any of the Stream classes. If the files are huge, I would recommend using the GNU Core Utils from your C# code; they are the fastest. I used to handle very large text files with the core utils (4 GB, 8 GB or more). Commands like head, tail, split, csplit, cat, shuf, shred and uniq really help a lot with text manipulation.
For example, if you need to put some chars into a 2 GB file, you can use split -b BYTECOUNT, append the new text to the first piece, and then concatenate the rest of the content back onto it. This should supposedly be faster than any other way.
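As a rough sketch (untested, and it assumes the coreutils binaries are on the PATH, e.g. via Cygwin or WSL), that split/append/concatenate workflow could be driven from C# like this:
using System.Diagnostics;
using System.IO;
using System.Linq;

class InsertWithCoreUtils
{
    static void Run(string exe, string args)
    {
        using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
        {
            p.WaitForExit();
        }
    }

    static void Main()
    {
        // cut big.txt into 1 MB pieces: piece_aa, piece_ab, ...
        Run("split", "-b 1048576 big.txt piece_");

        // here the insertion point is the 1 MB boundary, so append the new text to the first piece
        File.AppendAllText("piece_aa", "characters to insert");

        // stitch the pieces back together, in order, into a new file
        using (var output = File.Create("big_new.txt"))
        {
            foreach (var piece in Directory.GetFiles(".", "piece_*").OrderBy(name => name))
            {
                using (var input = File.OpenRead(piece))
                {
                    input.CopyTo(output);
                }
            }
        }
    }
}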
Hope it works. Give it a try.
You can use random access to write to specific locations of a file, but you won't be able to do it in text format, you'll have to work with bytes directly.
If you know the specific location to which you want to write the new data, use the BinaryWriter class:
using (BinaryWriter bw = new BinaryWriter(File.Open(strFile, FileMode.Open)))
{
    string strNewData = "this is some new data";
    byte[] byteNewData = new byte[strNewData.Length];

    // copy contents of string to byte array
    for (var i = 0; i < strNewData.Length; i++)
    {
        byteNewData[i] = Convert.ToByte(strNewData[i]);
    }

    // write new data to file
    bw.Seek(15, SeekOrigin.Begin); // seek to position 15
    bw.Write(byteNewData, 0, byteNewData.Length);
}
You may take a look at this project:
Win Data Inspector
Basically, the code is the following:
// this.Stream is the stream in which you insert data
{
    long position = this.Stream.Position;
    long length = this.Stream.Length;
    MemoryStream ms = new MemoryStream();
    this.Stream.Position = 0;
    DIUtils.CopyStream(this.Stream, ms, position, progressCallback);
    ms.Write(data, 0, data.Length);
    this.Stream.Position = position;
    DIUtils.CopyStream(this.Stream, ms, this.Stream.Length - position, progressCallback);
    this.Stream = ms;
}

#region Delegates
public delegate void ProgressCallback(long position, long total);
#endregion
DIUtils.cs
public static void CopyStream(Stream input, Stream output, long length, DataInspector.ProgressCallback callback)
{
    long totalsize = input.Length;
    long byteswritten = 0;
    const int size = 32768;
    byte[] buffer = new byte[size];
    int read;
    int readlen = length < size ? (int)length : size;
    while (length > 0 && (read = input.Read(buffer, 0, readlen)) > 0)
    {
        output.Write(buffer, 0, read);
        byteswritten += read;
        length -= read;
        readlen = length < size ? (int)length : size;
        if (callback != null)
            callback(byteswritten, totalsize);
    }
}
Depending on the scope of your project, you may want to load each line of text from your file into a table data structure, sort of like a database table. That way you can insert at a specific location at any given moment, and not have to read in, modify, and write out the entire text file each time. This is given the fact that your data is "huge", as you put it. You would still recreate the file, but at least you create a scalable solution in this manner.
It may be "possible" depending on how the filesystem stores files to quickly insert (ie, add additional) bytes in the middle. If it is remotely possible it may only be feasible to do so a full block at a time, and only by either doing low level modification of the filesystem itself or by using a filesystem specific interface.
Filesystems are not generally designed for this operation. If you need to quickly do inserts you really need a more general database.
Depending on your application a middle ground would be to bunch your inserts together, so you only do one rewrite of the file rather than twenty.
You will always have to rewrite the remaining bytes from the insertion point. If this point is at 0, then you will rewrite the whole file. If it is 10 bytes before the last byte, then you will rewrite the last 10 bytes.
In any case, there is no function that directly supports "insert into a file", but the following code can do it accurately.
var sw = new Stopwatch();
var ab = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ";

// create
var fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
fs.Seek(0, SeekOrigin.Begin);
for (var i = 0; i < 40000000; i++) fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
fs.Dispose();

// insert
fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
byte[] b = new byte[262144];
long target = 10, offset = fs.Length - b.Length;
while (offset != 0)
{
    if (offset < 0)
    {
        offset = b.Length - target;
        b = new byte[offset];
    }
    fs.Position = offset; fs.Read(b, 0, b.Length);
    fs.Position = offset + target; fs.Write(b, 0, b.Length);
    offset -= b.Length;
}
fs.Position = target; fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
To gain better performance for file IO, play with "magic" power-of-two numbers like in the code above. The creation of the file uses a buffer of 262144 bytes (256 KB), which does not help at all, but the same buffer size for the insertion does the "performance job", as you can see from the Stopwatch results if you run the code. A draft test on my PC gave the following results:
13628.8 ms for creation and 3597.0971 ms for insertion.
Note that the target byte for insertion is 10, meaning that almost the whole file was rewritten.
Why don't you put a pointer to the end of the file (literally, a four-byte offset just past the current end of the file) and then, at the end of the file, write the length of the inserted data followed by the data you want to insert? For example, if you have a string in the middle of the file and you want to insert a few characters into the middle of that string, you can overwrite four characters of the string with a pointer to the end of the file, and then write those four characters at the end together with the characters you originally wanted to insert. It's all about ordering the data. Of course, you can only do this if you are writing the whole file yourself, i.e. you are not using other codecs.
