I have a byte array that I am trying to turn into a string, with no success.
I have tried to decode it in many ways in C#:
var stringToByte112 = System.Text.Encoding.UTF32.GetString(bytes, 0, i);
var stringToByte11 = System.Text.Encoding.UTF7.GetString(bytes, 0, i);
var stringToByte111 = System.Text.Encoding.UTF8.GetString(bytes, 0, i);
var stringToByte1111 = System.Text.Encoding.Default.GetString(bytes, 0, i);
var stringToByte11112 = System.Text.Encoding.Unicode.GetString(bytes, 0, i);
var stringToByte11111 = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
but with no success.
The closest I was able to get was this:
"\u0016\u0003\u0001\u0001\"\u0001\0\u0001\u001e\u0003\u0003A F?4Y'Y???\u0012}?\08?<SF?3B#??S?(?%?\0\0??0?,?(?$?\u0014?\n\0?\0?\0?\0?\0K\0J\0I\0H\09\08\07\06\0?\0?\0?\0??2?.?*?&?\u000f?\u0005\0?\0=\05\0??/?+?'?#?\u0013?\t\0?\0?\0?\0?\0G\0#\0?\0>\03\02\01\00\0?\0?\0?\0?\0E\0D\0C\0B?1?-?)?%?\u000e?\u0004\0?\0<\0/\0?\0A\0\a?\u0011?\a?\f?\u0002\0\u0005\0\u0004?\u0012?\b\0\u0016\0\u0013\0\u0010\0\r?\r?\u0003\0\n\0?\u0001\0\0I\0\v\0\u0004\u0003\0\u0001\u0002\0\n\0\u0010\0\u000e\0\u0017\0\u0019\0\u001c\0\u001b\0\u0018\0\u001a\0\u0016\0#\0\0\0\r\0 \0\u001e\u0006\u0001\u0006\u0002\u0006\u0003\u0005\u0001\u0005\u0002\u0005\u0003\u0004\u0001\u0004\u0002\u0004\u0003\u0003\u0001\u0003\u0002\u0003\u0003\u0002\u0001\u0002\u0002\u0002\u0003\0\u000f\0\u0001\u0001"
I also tried to detect the encoding with this:
using (StreamReader reader = new StreamReader(new MemoryStream(bytes),
                                              detectEncodingFromByteOrderMarks: true))
{
    text = reader.ReadToEnd();
    enc = reader.CurrentEncoding; // the reader detects the encoding for you!
}
and this returned UTF-8 (presumably just the fallback when no byte order mark is found), but the UTF-8 decoder returns gibberish.
The byte array in question:
"16-03-01-01-22-01-00-01-1E-03-03-99-38-BB-63-4E-10-BB-9D-8A-1D-55-33-5C-29-1B-83-D2-40-76-57-1D-58-95-28-28-37-F0-4F-E8-4C-0E-6C-00-00-AC-C0-30-C0-2C-C0-28-C0-24-C0-14-C0-0A-00-A5-00-A3-00-A1-00-9F-00-6B-00-6A-00-69-00-68-00-39-00-38-00-37-00-36-00-88-00-87-00-86-00-85-C0-32-C0-2E-C0-2A-C0-26-C0-0F-C0-05-00-9D-00-3D-00-35-00-84-C0-2F-C0-2B-C0-27-C0-23-C0-13-C0-09-00-A4-00-A2-00-A0-00-9E-00-67-00-40-00-3F-00-3E-00-33-00-32-00-31-00-30-00-9A-00-99-00-98-00-97-00-45-00-44-00-43-00-42-C0-31-C0-2D-C0-29-C0-25-C0-0E-C0-04-00-9C-00-3C-00-2F-00-96-00-41-00-07-C0-11-C0-07-C0-0C-C0-02-00-05-00-04-C0-12-C0-08-00-16-00-13-00-10-00-0D-C0-0D-C0-03-00-0A-00-FF-01-00-00-49-00-0B-00-04-03-00-01-02-00-0A-00-10-00-0E-00-17-00-19-00-1C-00-1B-00-18-00-1A-00-16-00-23-00-00-00-0D-00-20-00-1E-06-01-06-02-06-03-05-01-05-02-05-03-04-01-04-02-04-03-03-01-03-02-03-03-02-01-02-02-02-03-00-0F-00-01-01"
Not really sure where to go from here.
Does anyone have an idea what encoding this is and how to get something "normal" looking?
Link to a txt file with all the bytes saved: https://drive.google.com/file/d/14orWRM5LunbJOgPYQ5wbhxvwG-AFZ1SM/view?usp=sharing
In the end I was able to contact the vendor, and there was a bug on their side; the bytes are nonsense.
I'm trying to translate C code to C#, and I stumbled upon a line of code which I'm having problems translating.
sprintf((char*)&u8FirmareBuffer[0x1C0] + strlen((char*)&u8FirmareBuffer[0x1C0]), ".B%s", argv[3]);
Specifically, this line.
u8FirmwareBuffer is an unsigned char array in C, so a byte array in C#, I would guess.
argv[3] is a string.
How can I translate this line to C#?
Thank you for your help.
Edit: This has been marked as a duplicate, but I think it differs because I am using pointers, which don't work with the solutions presented in the marked post.
You could do something like:
string myString = "This is my string";
byte[] buffer = new byte[1024];
int offset = 0;

// if you pass a byte buffer to the constructor of a MemoryStream, it will use that buffer; don't forget that it cannot grow it.
using (var memStream = new MemoryStream(buffer))
{
    // you can even seek to a specific position
    memStream.Seek(offset, SeekOrigin.Begin);

    // check your encoding..
    var data = Encoding.UTF8.GetBytes(myString);

    // write it at the current offset in the memory stream
    memStream.Write(data, 0, data.Length);
}
It's also possible with a StreamWriter:
string myString = "This is my string";
byte[] buffer = new byte[1024];
int offset = 0;

// if you pass a byte buffer to the constructor..... (see above)
using (var memStream = new MemoryStream(buffer))
using (var streamWriter = new StreamWriter(memStream))
{
    // you can even seek to a specific position
    memStream.Seek(offset, SeekOrigin.Begin);

    streamWriter.Write(myString);

    // don't forget to flush before you seek again
    streamWriter.Flush();
}
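To tie this back to the original line, here is a minimal sketch of that sprintf using the MemoryStream approach, assuming u8FirmwareBuffer is a byte[] holding an ASCII, NUL-terminated string at offset 0x1C0, and that suffix stands in for argv[3] (the buffer size and version string below are made up for the example):

byte[] u8FirmwareBuffer = new byte[0x1000]; // assumed size, just for the sketch
string suffix = "1.2.3";                    // stands in for argv[3]

// strlen((char*)&u8FirmwareBuffer[0x1C0]): find the terminating NUL after offset 0x1C0
int end = 0x1C0;
while (end < u8FirmwareBuffer.Length && u8FirmwareBuffer[end] != 0)
    end++;

// sprintf(..., ".B%s", argv[3]): write ".B" + suffix (plus a trailing NUL, as sprintf does) at that position
using (var memStream = new MemoryStream(u8FirmwareBuffer))
{
    memStream.Seek(end, SeekOrigin.Begin);
    byte[] data = Encoding.ASCII.GetBytes(".B" + suffix + '\0');
    memStream.Write(data, 0, data.Length);
}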
I am trying to read the byte[] for each file and add it to a MemoryStream. Below is the code, which throws an error. What am I missing in appending?
byte[] ba = null;
List<string> fileNames = new List<string>();
int startPosition = 0;

using (MemoryStream allFrameStream = new MemoryStream())
{
    foreach (string jpegFileName in fileNames)
    {
        ba = GetFileAsPDF(jpegFileName);
        allFrameStream.Write(ba, startPosition, ba.Length); // Error here
        startPosition = ba.Length - 1;
    }

    allFrameStream.Position = 0;
    ba = allFrameStream.GetBuffer();

    Response.ClearContent();
    Response.AppendHeader("content-length", ba.Length.ToString());
    Response.ContentType = "application/pdf";
    Response.BinaryWrite(ba);
    Response.End();
    Response.Close();
}
Error:
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
startPosition is not an offset into the MemoryStream; it is an offset into ba. Change it to:
allFrameStream.Write(ba, 0, ba.Length);
All byte arrays will then be appended to allFrameStream.
BTW: don't use ba = allFrameStream.GetBuffer(); use ba = allFrameStream.ToArray() instead (you don't actually want the internal buffer of the MemoryStream).
The MSDN documentation on Stream.Write might help clarify the problem.
Streams are modelled as a continuous sequence of bytes. Reading or writing to a stream moves your position in the stream by the number of bytes read or written.
The second argument to Write is the index in the source array at which to start copying bytes from. In your case this is 0, since you want to read from the start of the array.
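To make this concrete, a minimal corrected version of the loop might look like the sketch below (assuming, as in the question, that GetFileAsPDF returns the complete bytes of each file):

using (MemoryStream allFrameStream = new MemoryStream())
{
    foreach (string jpegFileName in fileNames)
    {
        byte[] fileBytes = GetFileAsPDF(jpegFileName);

        // always copy from index 0 of the source array;
        // the stream keeps track of where the next write goes
        allFrameStream.Write(fileBytes, 0, fileBytes.Length);
    }

    // ToArray() copies exactly the written bytes, unlike GetBuffer()
    byte[] combined = allFrameStream.ToArray();
    // ... hand 'combined' to the response as in the original code
}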
Maybe this is a simple solution; not the best, but it is easy:
List<byte> list = new List<byte>();
list.AddRange(Encoding.UTF8.GetBytes("aaaaaaaaaaaaa"));
list.AddRange(Encoding.UTF8.GetBytes("bbbbbbbbbbbbbbbbbb"));
list.AddRange(Encoding.UTF8.GetBytes("cccccccc"));
byte[] c = list.ToArray();
Sorry for the long post, will try to make this as short as possible.
I'm consuming a JSON API (which of course has zero documentation) that returns something like this:
{
    uncompressedlength: 743637,
    compressedlength: 234532,
    compresseddata: "lkhfdsbjhfgdsfgjhsgfjgsdkjhfgj"
}
The data (XML in this case) is compressed and then base64 encoded, and I am attempting to extract it. All I have is their demo code, written in Perl, to decode it:
use Compress::Zlib qw(uncompress);
use MIME::Base64 qw(decode_base64);
my $uncompresseddata = uncompress(decode_base64($compresseddata));
Seems simple enough.
I've tried a number of methods to decode the base64:
private string DecodeFromBase64(string encodedData)
{
    byte[] encodedDataAsBytes = System.Convert.FromBase64String(encodedData);
    string returnValue = System.Text.Encoding.Unicode.GetString(encodedDataAsBytes);
    return returnValue;
}
public string base64Decode(string data)
{
    try
    {
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        System.Text.Decoder utf8Decode = encoder.GetDecoder();

        byte[] todecode_byte = Convert.FromBase64String(data);
        int charCount = utf8Decode.GetCharCount(todecode_byte, 0, todecode_byte.Length);
        char[] decoded_char = new char[charCount];
        utf8Decode.GetChars(todecode_byte, 0, todecode_byte.Length, decoded_char, 0);
        string result = new String(decoded_char);
        return result;
    }
    catch (Exception e)
    {
        throw new Exception("Error in base64Decode" + e.Message);
    }
}
And I have tried using Ionic.Zip.dll (DotNetZip?) and zlib.net to inflate the zlib compression, but everything errors out. I am trying to track down where the problem is coming from: is it the base64 decode or the inflate?
I always get an error when inflating: a bad magic number error using zlib.net, and "Bad state (invalid stored block lengths)" using DotNetZip:
string decoded = DecodeFromBase64(compresseddata);
string decompressed = UnZipStr(GetBytes(decoded));
public static string UnZipStr(byte[] input)
{
    using (MemoryStream inputStream = new MemoryStream(input))
    {
        using (Ionic.Zlib.DeflateStream zip =
               new Ionic.Zlib.DeflateStream(inputStream, Ionic.Zlib.CompressionMode.Decompress))
        {
            using (StreamReader reader =
                   new StreamReader(zip, System.Text.Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }
}
After reading this:
http://george.chiramattel.com/blog/2007/09/deflatestream-block-length-does-not-match.html
and listening to one of the comments, I changed the code to this:
MemoryStream memStream = new MemoryStream(Convert.FromBase64String(compresseddata));
memStream.ReadByte();
memStream.ReadByte();
DeflateStream deflate = new DeflateStream(memStream, CompressionMode.Decompress);
string doc = new StreamReader(deflate, System.Text.Encoding.UTF8).ReadToEnd();
And it's working fine.
This was the culprit:
http://george.chiramattel.com/blog/2007/09/deflatestream-block-length-does-not-match.html
By skipping the first two bytes, I was able to simplify it to:
MemoryStream memStream = new MemoryStream(Convert.FromBase64String(compresseddata));
memStream.ReadByte();
memStream.ReadByte();
DeflateStream deflate = new DeflateStream(memStream, CompressionMode.Decompress);
string doc = new StreamReader(deflate, System.Text.Encoding.UTF8).ReadToEnd();
First, use System.IO.Compression.DeflateStream to re-inflate the data. You should be able to use a MemoryStream as the input stream. You can create a MemoryStream using the byte[] result of Convert.FromBase64String.
You are likely causing all kinds of trouble trying to convert the base64 result to a given encoding; feed the raw bytes directly to the DeflateStream.
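A minimal sketch of that approach, assuming (as the accepted fix above shows) that the payload is zlib-wrapped, so the two-byte zlib header has to be skipped before DeflateStream can decompress the raw deflate data:

byte[] raw = Convert.FromBase64String(compresseddata);

using (var memStream = new MemoryStream(raw))
{
    // skip the two-byte zlib header; System.IO.Compression.DeflateStream
    // only understands the raw deflate stream that follows it
    memStream.ReadByte();
    memStream.ReadByte();

    using (var deflate = new DeflateStream(memStream, CompressionMode.Decompress))
    using (var reader = new StreamReader(deflate, Encoding.UTF8))
    {
        string xml = reader.ReadToEnd(); // the uncompressed XML document
    }
}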
Background
I'm setting up a generic handler to:
Combine & compress Javascript and CSS files
Cache a GZip version & a Non-GZip version
Serve the appropriate version based on the request
I'm working in MonoDevelop v2.8.2 on OSX 10.7.2
Problem
Since I want to cache the GZipped version, I need to GZip without using a response filter.
Using this code, I can compress and decompress a string on the server successfully, but when I serve it to the client I get:
Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error. (Chrome)
Cannot decode raw data (Safari)
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression. (Firefox)
Relevant Code
string sCompiled = null;
if (bCanGZip)
{
    context.Response.AddHeader("Content-Encoding", "gzip");
    bHasValue = CurrentCache.CompiledScripts.TryGetValue(context.Request.Url.ToString() + "GZIP", out sCompiled);
}
//...
//Process files if bHasValue is false
//Compress the result of file concatenation/minification
//Compression method
public static string CompressString(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);

    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }

        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);

        byte[] gZipBuffer = new byte[compressedData.Length + 4];
        Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
        Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
        return Convert.ToBase64String(gZipBuffer);
    }
}
//...
//Return value
switch (Type)
{
    case FileType.CSS:
        context.Response.ContentType = "text/css";
        break;
    case FileType.JS:
        context.Response.ContentType = "application/javascript";
        break;
}

context.Response.AddHeader("Content-Length", sCompiled.Length.ToString());
context.Response.Clear();
context.Response.Write(sCompiled);
Attempts to Resolve
Since I'm not sure what the lines:
byte[] gZipBuffer = new byte[compressedData.Length + 4];
Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
are accomplishing, I tried removing them.
I tried playing with different Encodings/options.
At this point I'm really not sure how to attack the problem since I don't know the source of the error (Encoding/Compression/other).
Any help would be very appreciated!
Other Resources I've found on the subject
http://beta.blogs.microsoft.co.il/blogs/mneiter/archive/2009/03/24/how-to-compress-and-decompress-using-gzipstream-object.aspx
http://madskristensen.net/post/Compress-and-decompress-strings-in-C.aspx
http://www.codeproject.com/KB/files/GZipStream.aspx
http://www.codeproject.com/KB/aspnet/HttpCombine.aspx
http://webreflection.blogspot.com/2009/01/quick-tip-c-gzip-content.html
http://www.dominicpettifer.co.uk/Blog/17/gzip-compress-your-websites-html-css-script-in-code
This is one of those things where, once you explain your problem, you quickly find the answer.
I need to write out the response as binary. So, modifying the compression algorithm to return a byte array:
public static byte[] CompressStringToArray(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);

    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }

        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);
        return compressedData;
    }
}
and then calling:
//Writes a byte buffer without encoding the response stream
context.Response.BinaryWrite(GZipTools.CompressStringToArray(sCompiled));
Solves the issue. Hopefully this helps others who will face the same problem.
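For completeness, a rough sketch of how the cached bytes might then be served; the variable names follow the question, everything else is an assumption. Note that Content-Length now comes from the compressed byte array rather than from a string length:

byte[] compressed = GZipTools.CompressStringToArray(sCompiled);
// cache 'compressed' (e.g. alongside the non-GZip version), then serve it:

context.Response.ContentType = "application/javascript"; // or "text/css" for CSS
context.Response.AddHeader("Content-Encoding", "gzip");
context.Response.AddHeader("Content-Length", compressed.Length.ToString());
context.Response.BinaryWrite(compressed);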
I'm currently developing an application in C# that uses Amazon SQS.
The size limit for a message is 8kb.
I have a method that is something like:
public void QueueMessage(string message)
Within this method, I'd like to first compress the message (most messages are passed in as JSON, so they are already fairly small).
If the compressed string is still larger than 8kb, I'll store it in S3.
My question is:
How can I easily test the size of a string, and what's the best way to compress it?
I'm not looking for massive reductions in size, just something nice and easy - and easy to decompress at the other end.
To know the "size" (in KB) of a string, we need to know the encoding. If we assume UTF-8, then it is (not including a BOM etc.) as below (but swap the encoding if it isn't UTF-8):
int len = Encoding.UTF8.GetByteCount(longString);
Regarding packing it: I would suggest GZip over the UTF-8 bytes, optionally followed by base64 if it has to be a string:
using (MemoryStream ms = new MemoryStream())
{
    using (GZipStream gzip = new GZipStream(ms, CompressionMode.Compress, true))
    {
        byte[] raw = Encoding.UTF8.GetBytes(longString);
        gzip.Write(raw, 0, raw.Length);
        gzip.Close();
    }

    byte[] zipped = ms.ToArray(); // as a BLOB
    string base64 = Convert.ToBase64String(zipped); // as a string
    // store zipped or base64
}
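Since the question also mentions decompressing at the other end, the reverse path might look like this sketch (assuming the same GZip-then-base64 packing as above):

byte[] zipped = Convert.FromBase64String(base64);

using (MemoryStream ms = new MemoryStream(zipped))
using (GZipStream gzip = new GZipStream(ms, CompressionMode.Decompress))
using (MemoryStream plain = new MemoryStream())
{
    gzip.CopyTo(plain); // Stream.CopyTo needs .NET 4+; on older frameworks, read in a loop
    string longString = Encoding.UTF8.GetString(plain.ToArray());
}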
Pass the bytes you want to unzip to this function. The best I could come up with was:
public static byte[] ZipToUnzipBytes(byte[] bytesContext)
{
    byte[] arrUnZipFile = null;
    if (bytesContext.Length > 100) // crude sanity check on the input size
    {
        using (var inFile = new MemoryStream(bytesContext))
        using (var decompress = new GZipStream(inFile, CompressionMode.Decompress, false))
        {
            // the last 4 bytes of a gzip stream hold the uncompressed length (mod 2^32)
            byte[] lengthBytes = new byte[4];
            inFile.Position = inFile.Length - 4;
            inFile.Read(lengthBytes, 0, 4);
            inFile.Position = 0;

            arrUnZipFile = new byte[BitConverter.ToInt32(lengthBytes, 0)];

            // GZipStream.Read may return fewer bytes than requested, so read in a loop
            int offset = 0;
            while (offset < arrUnZipFile.Length)
            {
                int read = decompress.Read(arrUnZipFile, offset, arrUnZipFile.Length - offset);
                if (read == 0)
                    break;
                offset += read;
            }
        }
    }
    return arrUnZipFile;
}
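If peeking at the gzip length trailer feels fragile, a simpler alternative (a different technique from the answer above, sketched here under the same assumptions) is to let a second MemoryStream grow while the data is decompressed:

public static byte[] ZipToUnzipBytes(byte[] bytesContext)
{
    using (var inFile = new MemoryStream(bytesContext))
    using (var decompress = new GZipStream(inFile, CompressionMode.Decompress))
    using (var outStream = new MemoryStream())
    {
        decompress.CopyTo(outStream); // .NET 4+; on older versions read in a loop
        return outStream.ToArray();
    }
}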