I need to open and edit an executable file in binary mode, to replace a hex value given as a string.
In PHP, it looks like this:
<?php
$fp = fopen('file.exe', 'r+');
$content = fread($fp, filesize('file.exe'));
fclose($fp);
print $content;
/* [...] This program cannot be run in DOS mode.[...] */
?>
How do I do this in C#?
public void Manipulate()
{
    byte[] data = File.ReadAllBytes("file.exe");

    // Walk through data, do what you need to do, and put the result in newData.
    byte[] newData = (byte[])data.Clone();

    File.WriteAllBytes("new_file.exe", newData);
}
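If it helps, here is a rough sketch of what that "walk through data" step might look like when the goal (as in the question) is to replace one byte pattern with another of the same length; the method name and parameters are placeholders:
public static void ReplacePattern(string inputFile, string outputFile, byte[] find, byte[] replace)
{
    byte[] data = File.ReadAllBytes(inputFile);

    // Scan for the first occurrence of the "find" pattern.
    for (int i = 0; i <= data.Length - find.Length; i++)
    {
        bool match = true;
        for (int j = 0; j < find.Length; j++)
        {
            if (data[i + j] != find[j]) { match = false; break; }
        }
        if (match)
        {
            // Overwrite it with the replacement (same length, so offsets stay valid).
            Array.Copy(replace, 0, data, i, replace.Length);
            break;
        }
    }

    File.WriteAllBytes(outputFile, data);
}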
Use File.ReadAllBytes to read the bytes of a file as a byte array.
byte[] bytes = File.ReadAllBytes("file.exe");
If you want to convert this to a hex string (in general I'd advise against doing so - strings are immutable in C#, so modifying even a single byte requires copying the rest of the string) you can, for example, use:
string hex = BitConverter.ToString(bytes);
You ask about writing to a file, but your PHP code is for reading. For working with files you can use the FileStream class:
using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Write))
{
    // ...
    stream.WriteByte(value);
    // ...
}
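If you already know the offset you want to patch, a minimal sketch of overwriting bytes in place might look like this (the offset and byte values below are made-up placeholders):
using (var stream = new FileStream("file.exe", FileMode.Open, FileAccess.ReadWrite))
{
    stream.Seek(0x1234, SeekOrigin.Begin);          // jump to the offset you want to patch
    stream.Write(new byte[] { 0x90, 0x90 }, 0, 2);  // overwrite two bytes
}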
I have a scenario with a class like this:
class Document
{
    public string Name { get; set; }
    public byte[] Contents { get; set; }
}
Now I am trying to implement import/export functionality where I keep the document in binary form, so the document will be stored in a JSON file with other fields, and its contents will look something like this:
UEsDBBQABgAIAAAAIQCitGbRsgEAALEHAAATAAgCW0NvbnRlbnRfVHlwZXNdLnhtbCCiBAIooAACAAAAAAA==
Now when I upload this file back, I get the same data as a string, but when I try to convert it back to a binary byte[], the file becomes corrupt.
How can I achieve this?
I use something like this to convert:
var ss = sr.ReadToEnd();
MemoryStream stream = new MemoryStream();
StreamWriter writer = new StreamWriter(stream);
writer.Write(ss);
writer.Flush();
stream.Position = 0;
var bytes = default(byte[]);
bytes = stream.ToArray();
This looks like Base64. Use:
System.Convert.ToBase64String(b)
https://msdn.microsoft.com/en-us/library/dhx0d524%28v=vs.110%29.aspx
And
System.Convert.FromBase64String(s)
https://msdn.microsoft.com/en-us/library/system.convert.frombase64string%28v=vs.110%29.aspx
You need to decode it from Base64, like this:
Assuming you've read the file into ss as a string.
var bytes = Convert.FromBase64String(ss);
There are several things going on here. You need to know the encoding of the StreamWriter; if it is not specified, it defaults to UTF-8. However, .NET strings are always UTF-16 internally.
See: MemoryStream from string - confusion about Encoding to use
I would suggest using System.Convert.ToBase64String(someByteArray) and its counterpart System.Convert.FromBase64String(someString) to handle this for you.
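A small sketch of that approach, assuming Document is the class from the question (LoadDocument is a hypothetical loader):
Document doc = LoadDocument();                              // hypothetical: however you obtain the document

string exported = Convert.ToBase64String(doc.Contents);     // store this string in the JSON file

byte[] restored = Convert.FromBase64String(exported);       // identical to the original Contents on import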
I have very large data, around 5 GB, in the form of bytes.
I need to store this data in a file, ServerData.xml. The data should first be converted into a string and then saved to the file so that we can perform operations on the file.
I used the code below to convert the stream of bytes to a string and then save it to the file.
private const string fileName = "ServerData.xml";
public void ProcessBuffer(byte[] receiveBuffer, int bytes)
{
if (!File.Exists(fileName))
{
using (File.Create(fileName)) { };
}
TextWriter tw = new StreamWriter(fileName, true);
tw.Write(Encoding.UTF8.GetString(receiveBuffer).TrimEnd((Char)0));
tw.Close();
}
Is this the right way?
Or please suggest a better way so that there are no memory issues in the future.
The code in your question can only work if ProcessBuffer is always called with a UTF-8 encoded text that is broken on code point boundaries. That seems pretty unlikely to me, so I would expect that you encounter errors when decoding to text.
However, decoding to text and then writing is rather pointless and indeed counter-productive. The bytes are already UTF-8 encoded. Write them directly to the file as they arrive from the socket. Don't perform any processing on them. When you come to read the XML using XmlReader, the parser will read the encoding as UTF-8 from the document's XML declaration and be able to decode the rest of the document. I am assuming that the document's XML declaration specifies UTF-8, but that seems highly likely; you should check.
You should get rid of the text writer which is no use to you for writing bytes. Write the bytes directly to a file stream. And try to avoid opening and closing the file repeatedly. That's very inefficient. Open and close the file exactly once.
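A sketch of that suggestion, assuming ProcessBuffer is called repeatedly for a single download (the class name is made up): open the FileStream once, write each buffer as-is, and dispose it when the transfer is done.
public sealed class XmlFileSink : IDisposable
{
    private readonly FileStream _stream;

    public XmlFileSink(string fileName)
    {
        _stream = new FileStream(fileName, FileMode.Create, FileAccess.Write);
    }

    public void ProcessBuffer(byte[] receiveBuffer, int bytes)
    {
        _stream.Write(receiveBuffer, 0, bytes);   // raw bytes, no text decoding
    }

    public void Dispose()
    {
        _stream.Dispose();
    }
}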
Why do you need to convert it to a string?
using System.IO;
public static void WriteBytes(byte[] bytes, string filename)
{
using (FileStream fs = new FileStream(filename, FileMode.OpenOrCreate))
using (BinaryWriter writer = new BinaryWriter(fs, Encoding.UTF8))
{
writer.Write(bytes);
}
}
You can simply write these bytes to a file using FileStream:
public void ProcessBuffer(byte[] receivedBuffer, int bytes)
{
using (var fileStream = new FileStream(fileName, FileMode.Create)) // overwrites file
{
fileStream.Write(receivedBuffer, 0, bytes);
}
}
Update: You won't be able to work with such a big XML document if you don't have enough resources. I would suggest reformatting this file. For example, I would parse this XML and insert data into a SQL database. Then, you can easily operate with such amounts of data.
I would prefer to write all the bytes to the file, and when reading, convert them to a string and then to XML using XDocument, XElement, etc. By writing bytes to the file you save space, and it is efficient.
Instead of using FileStream, I would prefer the File.WriteAllBytes method.
private const string fileName = "ServerData.xml";

public void ProcessBuffer(byte[] receiveBuffer, int bytes)
{
    File.WriteAllBytes(fileName, receiveBuffer);

    // And when reading
    var data = File.ReadAllBytes(fileName);
    var binaryReader = new BinaryReader(new MemoryStream(data));
    // Parse strings and build the XML
    binaryReader.ReadString();
}
I want to read an MP3 file using BinaryReader. My code is:
using (BinaryReader br = new BinaryReader(File.Open("Songs/testbinary.mp3", FileMode.Open)))
{
int length = (int)br.BaseStream.Length;
byte[] bytes = br.ReadBytes(length);
txtBinary.Text = bytes.ToString();
}
.......
When I execute this code it throws an exception:
The process cannot access the file 'URL\testbinary.mp3' because it is being used by another process.
where "URL" is my actual file location.
You open the same file twice (without any sharing option). To read the contents of a file as bytes you can use File.ReadAllBytes:
byte[] bytes = File.ReadAllBytes("Songs/testbinary.mp3");
BTW: Don't forget that txtBinary.Text = bytes.ToString(); doesn't give you what you think. You will have to use BitConverter.ToString or Convert.ToBase64String.
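For example (a sketch; pick whichever representation you actually want in the textbox):
byte[] bytes = File.ReadAllBytes("Songs/testbinary.mp3");

// bytes.ToString() would only print "System.Byte[]".
txtBinary.Text = BitConverter.ToString(bytes);     // hex pairs separated by dashes
// or
txtBinary.Text = Convert.ToBase64String(bytes);    // Base64 text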
On my website I have an option to download all images uploaded by users. The problem is with images that have Hebrew names (I need the original file name). I tried to decode the file names but this does not help. Here is the code:
using ICSharpCode.SharpZipLib.Zip;
Encoding iso = Encoding.GetEncoding("ISO-8859-1");
Encoding utf8 = Encoding.UTF8;
byte[] utfBytes = utf8.GetBytes(file.Name);
byte[] isoBytes = Encoding.Convert(utf8, iso, utfBytes);
string name = iso.GetString(isoBytes);
var entry = new ZipEntry(name + ".jpg");
zipStream.PutNextEntry(entry);
using (var reader = new System.IO.FileStream(file.Name, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
byte[] buffer = new byte[ChunkSize];
int bytesRead;
while ((bytesRead = reader.Read(buffer, 0, buffer.Length)) > 0)
{
byte[] actual = new byte[bytesRead];
Buffer.BlockCopy(buffer, 0, actual, 0, bytesRead);
zipStream.Write(actual, 0, actual.Length);
}
}
After the UTF-8 conversion I get Hebrew file names like this: ??????.jpg
Where is my mistake?
Unicode (UTF-8 is one of its binary encodings) can represent more characters than other 8-bit encodings. Moreover, you are not doing a proper conversion but a re-interpretation, which means that you get garbage for your filenames. You should really read the article from Joel on Unicode.
...
Now that you've read the article, you should know that in C# strings can store Unicode data, so you probably don't need to do any conversion of file.Name and can pass it directly to the ZipEntry constructor, provided the library does not contain encoding-handling bugs (which is always possible).
Try using
ZipStrings.UseUnicode = true;
It should be a part of the ICSharpCode.SharpZipLib.Zip namespace.
After that you can use something like
var newZipEntry = new ZipEntry("My ünicödë string.pdf");
and add the entry as normal to the stream. You shouldn't need to do any conversion of the string before that in C#.
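A rough sketch of the whole thing, assuming a SharpZipLib version that exposes ZipStrings (file.Name comes from the question's code):
using ICSharpCode.SharpZipLib.Zip;

ZipStrings.UseUnicode = true;   // assumption: available in your SharpZipLib version

using (var zipStream = new ZipOutputStream(File.Create("images.zip")))
{
    var entry = new ZipEntry(Path.GetFileName(file.Name) + ".jpg");   // original (Hebrew) name, no re-encoding
    zipStream.PutNextEntry(entry);
    // ... copy the file contents into zipStream as in the question ...
    zipStream.CloseEntry();
}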
You are doing the wrong conversion, since strings in C# are already Unicode.
What tool do you use to check the file names in the archive?
By default, Windows ZIP implementations use the system DOS encoding for file names, while other implementations may use a different encoding.
I have a Base64-encoded object with the following header:
application/x-xfdl;content-encoding="asc-gzip"
What is the best way to proceed in decoding the object? Do I need to strip the first line? Also, if I turn it into a byte array (byte[]), how do I un-gzip it?
Thanks!
I think I misspoke initially. By saying the header was
application/x-xfdl;content-encoding="asc-gzip"
I meant this was the first line of the file. So, in order to use the Java or C# libraries to decode the file, does this line need to be stripped?
If so, what would be the simplest way to strip the first line?
To decode the Base64 content in C# you can use the static methods of the Convert class.
byte[] bytes = Convert.FromBase64String(base64Data);
You can also use the GZipStream class to help deal with the gzipped stream.
Another option is SharpZipLib. This will allow you to extract the original data from the compressed data.
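Here is a minimal C# sketch combining those pieces, assuming the first line of the file is the MIME header and everything after it is the Base64 text:
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;

public static string DecodeXfdl(string path)
{
    string[] lines = File.ReadAllLines(path);
    string base64 = string.Concat(lines.Skip(1));               // strip the header line

    byte[] compressed = Convert.FromBase64String(base64);

    using (var input = new MemoryStream(compressed))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var reader = new StreamReader(gzip))
    {
        return reader.ReadToEnd();                              // the decompressed XFDL (XML) text
    }
}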
I was able to use the following code to convert an .xfdl document into a Java DOM Document.
I used iHarder's Base64 utility to do the Base64 Decode.
private static final String FILE_HEADER_BLOCK =
"application/vnd.xfdl;content-encoding=\"base64-gzip\"";
public static Document OpenXFDL(String inputFile)
throws IOException,
ParserConfigurationException,
SAXException
{
try{
//create file object
File f = new File(inputFile);
if(!f.exists()) {
throw new IOException("Specified File could not be found!");
}
//open file stream from file
FileInputStream fis = new FileInputStream(inputFile);
//Skip past the MIME header
fis.skip(FILE_HEADER_BLOCK.length());
//Decompress from base 64
Base64.InputStream bis = new Base64.InputStream(fis,
Base64.DECODE);
//UnZIP the resulting stream
GZIPInputStream gis = new GZIPInputStream(bis);
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(gis);
gis.close();
bis.close();
fis.close();
return doc;
}
catch (ParserConfigurationException pce) {
throw new ParserConfigurationException("Error parsing XFDL from file.");
}
catch (SAXException saxe) {
throw new SAXException("Error parsing XFDL into XML Document.");
}
}
Still working on successfully modifying and re-encoding the document.
Hope this helps.
In Java, you can use the Apache Commons Base64 class
String decodedString = new String(Base64.decodeBase64(encodedBytes));
It sounds like you're dealing with data that is both gzipped and Base64 encoded. Once you strip off any MIME headers, you should convert the Base64 data to a byte array using something like Apache Commons Codec. You can then wrap the byte[] in a ByteArrayInputStream and pass that to a GZIPInputStream, which will let you read the uncompressed data.
For Java, have you tried Java's built-in java.util.zip package? Alternatively, Apache Commons has the Commons Compress library for working with zip, tar and other compressed file types. As for decoding Base64, there are several open source libraries, or you can use Sun's sun.misc.BASE64Decoder class.
Copied from elsewhere, for Base64 I link to commons-codec-1.6.jar:
public static String decode(String input) throws Exception {
    byte[] bytes = Base64.decodeBase64(input);
    BufferedReader in = new BufferedReader(new InputStreamReader(
            new GZIPInputStream(new ByteArrayInputStream(bytes))));
    StringBuilder buffer = new StringBuilder();
    char[] charBuffer = new char[1024];
    int read;
    while ((read = in.read(charBuffer)) != -1) {
        buffer.append(charBuffer, 0, read); // append only the characters actually read
    }
    return buffer.toString();
}