Load or convert varbinary from SQL to textbox text? - C#

I'm new to C# programming.
At the moment I'm trying to save and load a text file to SQL Server. I watched and searched some videos and documents on YouTube; they stored an image file to SQL. Following those, I have already saved the text file to the server as binary data. Now I'm stuck on how to load it back or convert it:
When I save it, I used memorystream:
MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(textreview.Text));
byte[] theBytes = Encoding.UTF8.GetBytes(textreview.Text);
ms.Position = 0;
ms.Read(theBytes, 0, theBytes.Length);
//insert value to server
SqlCommand cmdb = new SqlCommand("insert into Assignment(text) values (@text)", con);
cmdb.Parameters.AddWithValue("@textname", textPathLabel.Text);
cmdb.Parameters.AddWithValue("@text", theBytes);
Now I'm trying to load it back, but I'm stuck here:
//load from sql
byte[] picarr = (byte[])dr["text"];
ms = new MemoryStream(picarr);
ms.Seek(0, SeekOrigin.Begin);
StreamReader reader = new StreamReader(ms);
string text = reader.ReadToEnd();
textreview.Text = text;

Here's an example of how to convert text to bytes and then back again:
var bytes = Encoding.UTF8.GetBytes("test");
This will give you a byte array that looks like this:
{ 116, 101, 115, 116 }
Now to get the text again, you can use Encoding.UTF8.GetString() like this:
Encoding.UTF8.GetString(bytes);
This means that you could simply do this when saving the value:
cmdb.Parameters.AddWithValue("@text", Encoding.UTF8.GetBytes(textreview.Text));
And then when loading it back you simply do this:
textreview.Text = Encoding.UTF8.GetString((byte[])dr["text"]);
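Putting it together, here is a minimal sketch of the full round trip (assuming your Assignment table has a varbinary column named text and that con is an open connection):
// Save: encode the textbox text as UTF-8 bytes and insert them as varbinary
byte[] theBytes = Encoding.UTF8.GetBytes(textreview.Text);
using (SqlCommand cmd = new SqlCommand("insert into Assignment(text) values (@text)", con))
{
    cmd.Parameters.AddWithValue("@text", theBytes);
    cmd.ExecuteNonQuery();
}
// Load: decode the varbinary bytes back into a string
textreview.Text = Encoding.UTF8.GetString((byte[])dr["text"]);
No MemoryStream or StreamReader is needed in either direction.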

Related

How to retrieve a PDF from a MongoDB database using C#

I have a collection of PDF files saved to MongoDB. I'm attempting to convert all the files to Base64, but I'm having trouble retrieving the actual files from the database. The file path is not what I'm expecting, and I'm unsure of what else to try.
Below is the code I have tried. The error message is:
"System.IO.IOException Message=The filename, directory name, or volume label syntax is incorrect. : 'C:\Users\Jabari.LAPTOP-FERO3UB5\Desktop\codebase\LMO\API\LMO.Api\https:\LMOdevstorage.blob.core.windows.net\documents\57a36b7e6789102ddf\J_Broomstick58059342787ba42f12345.pdf'
From what I can see, it's trying to pull from my local codebase and not the actual database. I'm not sure why this is; I'm new to MongoDB operations.
Below is my code:
var file = document.AzureDocumentUri;
if (file != null)
{
Byte[] bytes = File.ReadAllBytes(file);
String base64String = Convert.ToBase64String(bytes);
StreamWriter sw64 = new StreamWriter($"{filePath}test64.txt");
sw64.Write(base64String);
sw64.Close();
sw64.Dispose();
}
What I'm expecting is to be able to access the PDF file directly from the database, read it into memory, and then convert it to Base64.
Why is it attempting to read from my local directory instead of the collection in the database?
Byte[] bytes = File.ReadAllBytes(file);
is strange; you can't download a URI like this. File.ReadAllBytes expects a local file path, so the HTTPS URL stored in AzureDocumentUri is treated as a relative path and combined with your working directory - which is exactly the invalid path in your error message.
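If the document only stores the blob's HTTPS URI, you need to download it over HTTP rather than through the file system. A minimal sketch, assuming the blob is publicly readable or the URI already carries a SAS token (requires using System.Net.Http;):
using (HttpClient client = new HttpClient())
{
    // Download the PDF bytes from the blob URI instead of the local file system
    byte[] bytes = await client.GetByteArrayAsync(document.AzureDocumentUri);
    string base64String = Convert.ToBase64String(bytes);
    File.WriteAllText($"{filePath}test64.txt", base64String);
}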
To find the best way to store files in MongoDB itself, look at GridFS.
MemoryStream ms = new MemoryStream(new byte[] { 0x00, 0x01, 0x02 });
ms.Position = 0;
// MemoryStream ms2 = new MemoryStream(await File.ReadAllBytesAsync("test.pdf"));
// Write to MongoDB
// MongoDB documents are limited to 16 MB; GridFS works around this, allowing unlimited file size.
// https://www.mongodb.com/docs/manual/core/document/#:~:text=Document%20Size%20Limit,MongoDB%20provides%20the%20GridFS%20API.
GridFSBucket gridFSBucket = new GridFSBucket(db);
ObjectId id = await gridFSBucket.UploadFromStreamAsync("exemple.bin", ms);
// Get the file
MemoryStream ms3 = new MemoryStream();
await gridFSBucket.DownloadToStreamAsync(id, ms3);
Full example here
https://github.com/iso8859/learn-mongodb-by-example/blob/main/dotnet/02%20-%20Intermediate/StoreFile.cs
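Tying this back to the question: once the PDF is stored in GridFS, you can read it back into memory and Base64-encode it. A short sketch, reusing gridFSBucket and id from the snippet above (DownloadAsBytesAsync is part of the driver's GridFS API):
byte[] pdfBytes = await gridFSBucket.DownloadAsBytesAsync(id);
string base64String = Convert.ToBase64String(pdfBytes);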

Sending GZipped data via TcpClient [duplicate]

I've got a pesky problem with GZipStream targeting .NET 3.5. This is my first time working with GZipStream; I have modeled my code after a number of tutorials, including here, and I'm still stuck.
My app serializes a DataTable to XML and inserts it into a database, storing the compressed data in a varbinary(max) field as well as the original length of the uncompressed buffer. Then, when I need it, I retrieve this data, decompress it, and recreate the DataTable. The decompress is what seems to fail.
EDIT: Sadly, after changing GetBuffer to ToArray as suggested, my issue remains. Code updated below.
Compress code:
DataTable dt = new DataTable("MyUnit");
//do stuff with dt
//okay... now compress the table
using (MemoryStream xmlstream = new MemoryStream())
{
//instead of stream, use xmlwriter?
System.Xml.XmlWriterSettings settings = new System.Xml.XmlWriterSettings();
settings.Encoding = Encoding.GetEncoding(1252);
settings.Indent = false;
System.Xml.XmlWriter writer = System.Xml.XmlWriter.Create(xmlstream, settings);
try
{
dt.WriteXml(writer);
writer.Flush();
}
catch (ArgumentException)
{
//likely an encoding issue... okay, base64 encode it
var base64 = Convert.ToBase64String(xmlstream.ToArray());
xmlstream.Write(Encoding.GetEncoding(1252).GetBytes(base64), 0, Encoding.GetEncoding(1252).GetBytes(base64).Length);
}
using (MemoryStream zipstream = new MemoryStream())
{
GZipStream zip = new GZipStream(zipstream, CompressionMode.Compress);
log.DebugFormat("Compressing commands...");
zip.Write(xmlstream.GetBuffer(), 0, xmlstream.ToArray().Length);
zip.Flush();
float ratio = (float)zipstream.ToArray().Length / (float)xmlstream.ToArray().Length;
log.InfoFormat("Resulting compressed size is {0:P2} of original", ratio);
using (SqlCommand cmd = new SqlCommand())
{
cmd.CommandText = "INSERT INTO tinydup (lastid, command, compressedlength) VALUES (#lastid,#compressed,#length)";
cmd.Connection = db;
cmd.Parameters.Add("#lastid", SqlDbType.Int).Value = lastid;
cmd.Parameters.Add("#compressed", SqlDbType.VarBinary).Value = zipstream.ToArray();
cmd.Parameters.Add("#length", SqlDbType.Int).Value = xmlstream.ToArray().Length;
cmd.ExecuteNonQuery();
}
}
Decompress Code:
/* This is an encapsulation of what I get from the database
public class DupUnit{
public uint lastid;
public uint complength;
public byte[] compressed;
}*/
//I have already retrieved my list of work to do from the database in a List<DupUnit> dupunits
foreach (DupUnit unit in dupunits)
{
DataSet ds = new DataSet();
//DataTable dt = new DataTable();
//uncompress and extract to original datatable
try
{
using (MemoryStream zipstream = new MemoryStream(unit.compressed))
{
GZipStream zip = new GZipStream(zipstream, CompressionMode.Decompress);
byte[] xmlbits = new byte[unit.complength];
//WHY ARE YOU ALWAYS 0!!!!!!!!
int bytesdecompressed = zip.Read(xmlbits, 0, unit.compressed.Length);
MemoryStream xmlstream = new MemoryStream(xmlbits);
log.DebugFormat("Uncompressed XML against {0} is: {1}", m_source.DSN, Encoding.GetEncoding(1252).GetString(xmlstream.ToArray()));
try{
ds.ReadXml(xmlstream);
}catch(Exception)
{
//it may have been base64 encoded... decode first.
ds.ReadXml(Encoding.GetEncoding(1254).GetString(
Convert.FromBase64String(
Encoding.GetEncoding(1254).GetString(xmlstream.ToArray())))
);
}
xmlstream.Dispose();
}
}
catch (Exception e)
{
log.Error(e);
Thread.Sleep(1000);//sleep a sec!
continue;
}
}
Note the comment above... bytesdecompressed is always 0. Any ideas? Am I doing it wrong?
EDIT 2:
So this is weird. I added the following debug code to the decompression routine:
GZipStream zip = new GZipStream(zipstream, CompressionMode.Decompress);
byte[] xmlbits = new byte[unit.complength];
int offset = 0;
while (zip.CanRead && offset < xmlbits.Length)
{
while (zip.Read(xmlbits, offset, 1) == 0) ;
offset++;
}
When debugging, sometimes that loop would complete, but other times it would hang. When I'd stop the debugging, it would be at byte 1600 out of 1616. I'd continue, but it wouldn't move at all.
EDIT 3: The bug appears to be in the compress code. For whatever reason, it is not saving all of the data. When I try to decompress the data using a third party gzip mechanism, I only get part of the original data.
I'd start a bounty, but I really don't have much reputation to give as of now :-(
Finally found the answer. The compressed data wasn't complete because GZipStream.Flush() does absolutely nothing to ensure that all of the data is out of the buffer - you need to use GZipStream.Close() as pointed out here. Of course, if you get a bad compress, it all goes downhill - if you try to decompress it, you will always get 0 returned from the Read().
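In other words, the GZipStream must be closed (for example by a using block) before you read the compressed bytes out of the underlying stream. A minimal sketch of both directions, assuming xmlBytes holds the serialized XML:
// Compress: disposing the GZipStream flushes the final gzip blocks
byte[] compressed;
using (MemoryStream zipstream = new MemoryStream())
{
    using (GZipStream zip = new GZipStream(zipstream, CompressionMode.Compress))
    {
        zip.Write(xmlBytes, 0, xmlBytes.Length);
    }
    compressed = zipstream.ToArray(); // ToArray still works on a closed MemoryStream
}
// Decompress: loop until Read returns 0 (.NET 3.5 has no Stream.CopyTo)
using (MemoryStream zipstream2 = new MemoryStream(compressed))
using (GZipStream unzip = new GZipStream(zipstream2, CompressionMode.Decompress))
using (MemoryStream output = new MemoryStream())
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = unzip.Read(buffer, 0, buffer.Length)) > 0)
        output.Write(buffer, 0, read);
    byte[] xmlbits = output.ToArray();
}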
I'd say this line, at least, is the most wrong:
cmd.Parameters.Add("#compressed", SqlDbType.VarBinary).Value = zipstream.GetBuffer();
MemoryStream.GetBuffer:
Note that the buffer contains allocated bytes which might be unused. For example, if the string "test" is written into the MemoryStream object, the length of the buffer returned from GetBuffer is 256, not 4, with 252 bytes unused. To obtain only the data in the buffer, use the ToArray method.
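A quick illustration of the difference the documentation describes:
MemoryStream ms = new MemoryStream();
byte[] data = Encoding.ASCII.GetBytes("test");
ms.Write(data, 0, data.Length);
Console.WriteLine(ms.GetBuffer().Length); // 256: the whole allocated buffer
Console.WriteLine(ms.ToArray().Length);   // 4: only the bytes actually written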
It should be noted that the zip format works by first locating data stored at the end of the file - so if you've stored more data than was required, the required entries at the "end" of the file don't exist.
As an aside, I'd also recommend a different name for your compressedlength column - I'd initially taken it (despite your narrative) as being intended to store, well, the length of the compressed data (and had written part of my answer to address that). Maybe originalLength would be a better name?

How to get the ZIP file from Base64 string in Javascript?

I am trying to get the compressed ZIP file back in JavaScript. I am able to convert the ZIP file into Base64 string format. (The ZIP file is on the server.)
Here is my attempt (on the server side):
System.IO.FileStream fs = new System.IO.FileStream(SourceFilePath + "Arc.zip", System.IO.FileMode.Open);
Byte[] zipAsBytes = new Byte[fs.Length];
fs.Read(zipAsBytes, 0, zipAsBytes.Length);
String base64String = System.Convert.ToBase64String(zipAsBytes, 0, zipAsBytes.Length);
fs.Close();
if (zipAsBytes.Length > 0)
{
_response.Status = "ZipFile";
_response.Result = base64String;
}
return _json.Serialize(_response);
This part of the code returns the JSON data, which includes the Base64 string. Now what I want to do is get the original ZIP file back from the Base64 string. I searched over the internet but couldn't find how.
Is this achievable?
It is achievable. First you must convert the Base64 string to an ArrayBuffer, which can be done with this function:
function base64ToBuffer(str){
str = window.atob(str); // decodes Base64 into a binary string (one character per byte)
var buffer = new ArrayBuffer(str.length),
view = new Uint8Array(buffer);
for(var i = 0; i < str.length; i++){
view[i] = str.charCodeAt(i);
}
return buffer;
}
Then, using a library like JSZip, you can read the ArrayBuffer as a ZIP file and extract its contents:
var buffer = base64ToBuffer(str);
var zip = new JSZip(buffer);
var fileContent = zip.file("someFileInZip.txt").asText();
JavaScript itself does not have that functionality built in.
Theoretically there may be some JS library that does this, but its size would probably be bigger than the original text file itself.
You can also enable gzip compression on your server, so that any text output gets compressed. Most browsers will then decompress the data upon its arrival.

Error while trying to save an image into SQL Server

I have created code to upload an image into SQL Server.
Here is the code to convert the image into bytes:
//Use FileInfo object to get file size.
FileInfo fInfo = new FileInfo(p);
//Open FileStream to read file
FileStream fStream = new FileStream(p, FileMode.Open, FileAccess.Read);
byte[] numBytes = new byte[fStream.Length];
fStream.Read(numBytes, 0, Convert.ToInt32(fStream.Length));
//Use BinaryReader to read file stream into byte array.
//BinaryReader br = new BinaryReader(fStream);
//When you use BinaryReader, you need to supply number of bytes to read from file.
//In this case we want to read entire file. So supplying total number of bytes.
// data = br.ReadBytes((int)numBytes);
return numBytes;
And here is the code to add the bytes to a SqlCommand parameter as values:
objCmd.Parameters.Add("#bill_Image", SqlDbType.Binary).Value = imageData;
objCmd.ExecuteNonQuery();
But I am getting this error:
String or binary data would be truncated. The statement has been terminated.
How can I overcome this problem?
The error clearly indicates that you're trying to save more bytes than the field definition allows.
Not sure what SQL type you're using for bill_Image, but an appropriate field definition to store an image would be varbinary(MAX).
Check the definition of the bill_Image column in your database.
It should be something like
bill_Image varbinary(X)
Just increase X, or use MAX instead of a number (if your image is larger than 8,000 bytes).
Info about binary/varbinary type here
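As an aside, if the column is varbinary(MAX), you can pass -1 as the parameter size, which maps to MAX (a sketch, assuming imageData holds the file bytes):
objCmd.Parameters.Add("@bill_Image", SqlDbType.VarBinary, -1).Value = imageData;
objCmd.ExecuteNonQuery();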

Can I encode/decode an Image object to keep it in an XML file using the Compact Framework?

As the title mentions, I want to encode an Image object into some kind of text data (the Compact Framework doesn't support BinaryFormatter; correct me if I'm wrong). Is there any way to encode an Image object into text data and keep it in an XML file, so it can be decoded from the XML file back into an Image object later?
UPDATE: Here is what I did following Sam's response. Thanks Sam!
//Write to XML
byte[] Ret;
using (MemoryStream ms = new MemoryStream())
{
myImage.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
Ret = ms.ToArray();
}
StreamWriter myWrite = new StreamWriter(myPathFile);
myWrite.Write(Convert.ToBase64String(Ret));
myWrite.Flush();
myWrite.Close();
Then, when I want to decode from the Base64 string back to an Image:
StreamReader StrR = new StreamReader(myPathFile);
BArr = Convert.FromBase64String(StrR.ReadToEnd());
StrR.Close();
using (MemoryStream ms = new MemoryStream(BArr, 0, BArr.Length))
{
// The constructor already fills the stream, so no extra Write is needed
listControl1.BGImage = new Bitmap(ms);
}
Typically binary data is converted to Base64 when included in XML. Look at Convert.ToBase64String.
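To actually keep it in an XML file rather than a bare text file, here is a minimal sketch (the ImageStore/Image element names are just examples; XmlDocument is available on the Compact Framework):
// Write the Base64 string into an XML element
System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
System.Xml.XmlElement root = doc.CreateElement("ImageStore");
System.Xml.XmlElement img = doc.CreateElement("Image");
img.InnerText = Convert.ToBase64String(Ret); // Ret from the code above
root.AppendChild(img);
doc.AppendChild(root);
doc.Save(myPathFile);
// Read the Base64 text back and decode it
doc = new System.Xml.XmlDocument();
doc.Load(myPathFile);
byte[] BArr = Convert.FromBase64String(doc.DocumentElement["Image"].InnerText);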
