I would like to pass binary information between Python and C#. I would assume that you can open a standard in/out channel and read and write to that like a file, but there are a lot of moving parts, and I don't know C# too well. I want to do this sort of thing, but without writing a file.
# python code
import subprocess

with open(DATA_PIPE_FILE_PATH, 'wb') as fid:
    fid.write(blob)
subprocess.Popen(C_SHARP_EXECUTABLE_FILE_PATH).wait()  # wait for the C# program to finish
with open(DATA_PIPE_FILE_PATH, 'rb') as fid:
    'Do stuff with the data'
// C# code
static int Main(string[] args)
{
    byte[] binaryData = File.ReadAllBytes(DataPipeFilePath);
    byte[] outputData;
    // Create outputData
    File.WriteAllBytes(DataPipeFilePath, outputData);
    return 0;
}
I've tried several different ways of using standard in/out, but I've had no luck matching them up; like I said, there are a lot of moving parts. I've tried things like
p = subprocess.Popen(C_SHARP_EXECUTABLE_FILE_PATH, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(blob)
p.stdin.close()
or
p = subprocess.Popen(C_SHARP_EXECUTABLE_FILE_PATH, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate(blob)
on the Python side, along with
TextReader tIn = Console.In;
TextWriter tOut = Console.Out;
String str = tIn.ReadToEnd();
//etc...
as well as a couple of other things that didn't work on the C# side. I've had mild success with some things, but I've changed it around so much that I don't remember what has worked for what. Could somebody give me a hint as to which pieces would work the best, or if this is even possible?
The data I want to pass has null and other non-printable characters.
This Python code was correct:
p = Popen(C_SHARP_EXECUTABLE_FILE_PATH, stdout=PIPE, stdin=PIPE, stderr=PIPE)
out, err = p.communicate(blob)
And on the C# side, I got it to work with
Stream ms = Console.OpenStandardInput();
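For completeness, a fuller sketch of that C# side might look like the following (the processing step is a placeholder, and it assumes .NET 4+ for Stream.CopyTo):
using System;
using System.IO;

static class Program
{
    static int Main(string[] args)
    {
        // Read all of the binary data from standard input
        byte[] inputData;
        using (Stream stdin = Console.OpenStandardInput())
        using (var buffer = new MemoryStream())
        {
            stdin.CopyTo(buffer);
            inputData = buffer.ToArray();
        }

        // Placeholder: produce outputData from inputData
        byte[] outputData = inputData;

        // Write the binary result back to standard output
        using (Stream stdout = Console.OpenStandardOutput())
        {
            stdout.Write(outputData, 0, outputData.Length);
        }
        return 0;
    }
}
On the Python side, p.communicate(blob) then hands you whatever the C# program wrote to standard output in out.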
One possibility would be to use something like Python for .NET, which provides interop directly between C# and (standard, C) Python.
Depending on what your Python routines need to do, IronPython can also be a good option, as this is directly consumable and usable from within C#.
Both of these options avoid trying to communicate through the command line, as you have direct access to the Python objects from .NET, and vice versa.
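As a rough illustration only (the module and function names here are hypothetical, and you need a CPython installation that Python for .NET can find), calling into standard Python from C# looks something like this:
using Python.Runtime;  // Python for .NET (pythonnet)

static class PythonInterop
{
    public static void Run(byte[] blob)
    {
        PythonEngine.Initialize();
        using (Py.GIL())
        {
            // "my_module" and "process_blob" are made-up names for your own Python code;
            // how the byte[] is marshalled depends on the pythonnet version.
            dynamic module = Py.Import("my_module");
            dynamic result = module.process_blob(blob);
        }
        PythonEngine.Shutdown();
    }
}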
Related
I need guidance, someone to point me in the right direction. As the title says, I need to save information to a file: Date, string, integer and an array of integers. And I also need to be able to access that information later, when a user wants to review it.
Optional: the file is plain text so I can check it directly and it is understandable.
Bonus points if chosen method can be "easily" converted to working with a database in the future instead of individual files.
I'm pretty new to C# and what I've found so far is that I should turn the array into a string with separators.
So, what'd you guys suggest?
// JSON.Net
string json = JsonConvert.SerializeObject(objOrArray);
File.WriteAllText(path, json);
// (note: you can also use File.Create etc. if you don't need the string in memory)
or...
// protobuf-net
using (var file = File.Create(path))
{
    Serializer.Serialize(file, objOrArray);
}
The first is readable; the second will be smaller. Both will cope fine with "Date, string, integer and an array of integers", or an array of such objects. Protobuf-net would require adding some attributes to help it, but that is really simple.
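To give an idea, the protobuf-net attributes would look something like this (the class name and field numbers are just an example):
[ProtoContract]
public class Record
{
    [ProtoMember(1)] public DateTime Date { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public int Count { get; set; }
    [ProtoMember(4)] public int[] Values { get; set; }
}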
As for working with a database as columns... the array of integers is the glitch there, because most databases don't support "array of integers" as a column type. I'd say "separation of concerns" - have a separate model for DB persistence. If you are using the database purely to store documents, then: pretty much every DB will support CLOB and BLOB data, so either is usable. Many databases now have inbuilt JSON support (helper methods, etc), which might make JSON as a CLOB more tempting.
I would probably serialize this to JSON and save it somewhere. Json.Net is a very popular library for this.
An added advantage is that you create a class that can later be used to work with an Object-Relational Mapper.
var userInfo = new UserInfoModel();
// write the data (overwrites)
using (var stream = new StreamWriter(@"path/to/your/file.json", append: false))
{
    stream.Write(JsonConvert.SerializeObject(userInfo));
}
//read the data
using (var stream = new StreamReader(@"path/to/your/file.json"))
{
    userInfo = JsonConvert.DeserializeObject<UserInfoModel>(stream.ReadToEnd());
}
public class UserInfoModel
{
    public DateTime Date { get; set; }
    // etc.
}
For the plain text file you're right.
Use one line for each entry:
Date
string
Integer
Array of Integer
If you read the file in your code you can easily separate them by reading it line by line.
Make a string with a specific separator out of the array:
[1,2,3] -> "1,2,3"
When you read the line back you can split the string on "," and get an array of strings. Parse each entry to int into an array of int with the same length.
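A minimal sketch of that approach (the variable and file names are placeholders; needs System and System.IO):
// Write: one line per entry; the int array is joined with ","
var lines = new[]
{
    date.ToString("yyyy-MM-dd"),
    someText,
    someNumber.ToString(),
    string.Join(",", someIntArray)
};
File.WriteAllLines("entry.txt", lines);

// Read it back
string[] read = File.ReadAllLines("entry.txt");
DateTime dateBack = DateTime.Parse(read[0]);
string textBack = read[1];
int numberBack = int.Parse(read[2]);
int[] arrayBack = Array.ConvertAll(read[3].Split(','), int.Parse);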
For how to read and write the file, have a look at Easiest way to read from and write to files.
If you really want to switch to a database at some point, try a JSON format for your file. It is easy to handle and there are some good libraries to work with.
Regards,
Henne
The way I got started with C# is via the game Space Engineers on the Steam platform. The mods need to save a file locally (%AppData%\Local\Temp\SpaceEngineers\ or %AppData%\Roaming\SpaceEngineers\Storage\) for various settings, and their logging is similar to what @H. Sandberg mentioned (line by line, perhaps with a separator to parse with later). The upside to this is that it's easy to retrieve, easy to append, easy to overwrite, and I'm pretty sure it's even possible to retrieve the file size. Combined with file deletion and file creation, that can prevent runaway file sizes, since it allows you to set an upper limit to check against, letting you run it on a server with minimal impact (probably best to include a minimum date filter {make sure X is at least Y days old before deleting it for being over Z bytes} to prevent debugging data loss {"Why was it over that limit?"}).
As far as the actual code behind the idea, I'm approximately at the same skill level as the OP, which is to say: rookie. I would advise looking at the code in the Space Engineers mods for some samples (plus it's not half bad for a beta game), as they are almost all written in C#. Also, the Programmable Blocks compile in C# as well, so you'll be able to use that both to assist in learning C# and to reinforce and utilize what you already know (although certain C# commands aren't allowed for security reasons; using the Mod API you'll have more flexibility to do things such as creating/maintaining log files, retrieving/modifying object properties, etc.). You are even capable of printing text to various in-game text monitors.
I apologise if my syntax needs some work, and I'm sorry I am not currently capable of just whipping up some code to solve your issue, but I do know
using System;
Console.WriteLine("Hello World");
so at least it's not a total loss, but my example code likely won't compile, since it's likely missing things like an output location, perhaps an API reference or two, and probably a few other settings. Like I said, I'm new, but that is valid C#; I know I got that part correct.
Edit: here's a better attempt:
using System;
class Test
{
    static void Main()
    {
        string a = "Hello Hal, ";
        string b = "Please open the Airlock Doors.";
        string c = "I'm sorry Dave, ";
        string d = "I'm afraid I can't do that.";
        Console.WriteLine(a + b);
        Console.WriteLine(c + d);
        Console.Read();
    }
}
This:
"Hello Hal, Please open the Airlock Doors."
"I'm sorry Dave, I'm afraid I can't do that."
Should be the result. (The quotation marks shouldn't appear in the readout {the last code block}; they're simply there to improve readability.)
I'm seriously stuck on this problem.
The problem is caused by my weak grasp of C# concepts.
All I want to do is handle the GIF-format data that an electronic instrument returns, which I believe is binary.
So I want to convert this data to an image.
// The code below just sends the command to the instrument; the manual says it "Returns an image of the display in .gif format"
my6705B.WriteString("hcop:sdump:data?", true);
string image_format = my6705B.ReadString();
So I received GIF data from the instrument; the manual says it "Returns an image of the display in .gif format", which I believe is binary data.
The link below shows what is inside the string image_format:
string image_format
http://i.stack.imgur.com/UcYqV.png
My goal is to convert this string to an image file (PNG or JPG, whatever).
So I converted this string variable to a byte array.
Below is my code after this command:
// Attempt 1: this didn't work
System.Text.UnicodeEncoding encode = new System.Text.UnicodeEncoding();
byte[] byte_array22 = encode.GetBytes(image_format);
MemoryStream ms4 = new MemoryStream(byte_array22);
Image image = Image.FromStream(ms4); //// error point
image.Save(@"C:\Users\Administrator\Desktop\imageTest.png");
// Attempt 2: this didn't work either
byte[] byte_array22 = Encoding.Unicode.GetBytes(image_format);
MemoryStream ms4 = new MemoryStream(byte_array22);
Image image = Image.FromStream(ms4, true, true); // always errors here
image.Save(@"C:\Users\Administrator\Desktop\imageTest.png", System.Drawing.Imaging.ImageFormat.Png);
Neither version worked, and the error point is the same; I marked it in the comments.
The string-to-byte-array conversion itself works, though.
I've been struggling with this problem for several days.
But my vendor wrote this code in C++, and it works.
Let me share my vendor's code; it is implemented in C++.
char szReadBuffer[102400] = {'\0', };
char szReadBinary[102400] = {'\0', };
m_iStatus = viOpenDefaultRM(&m_vDefaultRM);
m_iStatus = viOpen(m_vDefaultRM, (LPSTR)(LPCTSTR)m_strVISA, VI_NULL, VI_NULL, (ViPSession)&m_iDevHandle);
m_iStatus = viSetAttribute(m_iDevHandle, VI_ATTR_TMO_VALUE, 15000);
m_iStatus = QueryGPIB("HCOPy:SDUMp:DATA?", szReadBuffer, sizeof(szReadBuffer));
//Store the results in a text file
CFile file;
file.Open("PICTURE.GIF", CFile::modeReadWrite | CFile::modeCreate | CFile::typeBinary);
memcpy(szReadBinary, &szReadBuffer[2], sizeof(szReadBuffer));
file.Write(szReadBinary, sizeof(szReadBinary));
file.Close();
I think the important point is what they declare: they declare char[].
They also advised me that this C++ code uses multi-byte strings (I just heard this from them).
I have no experience with C++.
If I follow this C++ code, it works.
My goal is to implement this in C#, so I need to follow the C++ code.
Please advise me on this problem.
It can be confusing sometimes to port C++ to C# if you're unfamiliar with one or the other (never mind both! :) ). One thing to keep in mind: there's no "byte" type in C++. Instead, binary data is stored in char[] arrays, just like C strings.
On the other hand, C# distinguishes between the two. So when you see a char[] in C++ that's being used to store binary data instead of character data, the C# equivalent is a byte[], not a char[] or System.String as it might be for other C++ usages of char[].
Your "my6705B" object appears to be some kind of abstraction of your hardware device. Presumably in addition to the WriteString() and ReadString() methods, there are methods that can be used to write and read binary data, using a byte[] type instead of characters or strings. Use that instead.
Let's assume the proper method is named "ReadBytes()". Then your code would look like this:
byte[] image_format = my6705B.ReadBytes();
MemoryStream ms4 = new MemoryStream(image_format);
Image image = Image.FromStream(ms4);
image.Save(@"C:\Users\Administrator\Desktop\imageTest.png");
Now, that may or may not be exactly what you need. You haven't provided enough information about the "my6705B" object. Many I/O APIs allow for partial reads of available data, so it's possible you would need to read from the device in a loop until you know (somehow) that you've received all of the available bytes for the image. Or maybe the type you're using for the "my6705B" object handles that all for you. I have no way to know…you'll have to figure that out yourself.
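If it does turn out that the device hands the data back in chunks, the accumulation loop might look roughly like this; ReadBytes() and the way the end of the data is signalled are assumptions about your driver, not something I can verify:
using (var buffer = new MemoryStream())
{
    byte[] chunk;
    // Hypothetical: ReadBytes() returns the next chunk, and null or an
    // empty array once the instrument has nothing more to send.
    while ((chunk = my6705B.ReadBytes()) != null && chunk.Length > 0)
    {
        buffer.Write(chunk, 0, chunk.Length);
    }

    buffer.Position = 0;
    Image image = Image.FromStream(buffer);
    image.Save(@"C:\Users\Administrator\Desktop\imageTest.png",
               System.Drawing.Imaging.ImageFormat.Png);
}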
But hopefully the above gets you oriented enough wrt the C++ vs C# issues to get you a little further.
I need to translate this C# code from the NReplayGain library here https://github.com/karamanolev/NReplayGain into working VB.NET code.
TrackGain trackGain = new TrackGain(44100, 16);
foreach (sampleSet in track) {
trackGain.AnalyzeSamples(leftSamples, rightSamples)
}
double gain = trackGain.GetGain();
double peak = trackGain.GetPeak();
I've translated this:
Dim trackGain As New TrackGain(samplerate, samplesize)
Dim gain As Double = trackGain.GetGain()
Dim peak As Double = trackGain.GetPeak()
Use an online converter. C# to VB converters:
dotnet Spider
SharpDevelop
Telerik
developerFusion
Your C# code shown above has errors; probably it is written in pseudocode. I have not found any declaration of a sample set at the GitHub address you mentioned.
A semicolon is missing (inside the loop). The loop variable sampleSet is not declared. Where do leftSamples and rightSamples come from? The loop variable is not used inside the loop. Probably the left and right samples are part of the sampleSet. If I correct this, I can convert the code by using one of these online converters.
C#:
TrackGain trackGain = new TrackGain(44100, 16);
foreach (SampleSet sampleSet in track) {
trackGain.AnalyzeSamples(sampleSet.leftSamples, sampleSet.rightSamples);
}
double gain = trackGain.GetGain();
double peak = trackGain.GetPeak();
VB:
Dim trackGain As New TrackGain(44100, 16)
For Each sampleSet As SampleSet In track
trackGain.AnalyzeSamples(sampleSet.leftSamples, sampleSet.rightSamples)
Next
Dim gain As Double = trackGain.GetGain()
Dim peak As Double = trackGain.GetPeak()
After all, the two versions don't look that different!
It is fairly simple to reference assemblies written in different languages.
I frequently reference C# code from F# and have referenced VB.NET code from C#.
Just be sure to compile both projects to target the same framework version, say .NET 4.5 or Mono 2.10, and the same CPU architecture.
If you need the code to reside in the same assembly, I would suggest you study the C# syntax and convert it manually.
Edit: After browsing the Repository, I only see a handful of classes.
Besides, learning new languages is a great way to improve both your ability to write code and to read code in the languages you are already comfortable with.
A good online solution for translating .NET languages such as VB.NET and C# into each other, or to another language such as JavaScript, is CodeTranslator - Carlossag. So far, I haven't had problems with this translator.
I'm starting with C# again after 3 years (I have average experience with object-oriented languages; here I'm mainly missing function names). I'm not too sure it's possible in C#, so if you can recommend another language I will try to look there.
My Question(s):
On program start (or a button press) I want to extract part of a website and save it (temporarily or to a file, it doesn't matter). That way I won't need to load anything again and can access the content if I go offline afterwards.
I want to extract some numbers out of the content and do simple math with them.
It would be great to know if it's possible and how. I'd be happy if you could tell me the main functions I should look into. Some basic code would be great too, if it's not too much to ask.
If you want to have access to the information even if your program closes/restarts then you will need to export the source code to a file as follows:
using (WebClient wb = new WebClient())
{
string source = wb.DownloadString("http://example.com");
File.WriteAllText("c:\\exampleFile.txt", source);
}
Otherwise you can remove the File.WriteAllText("c:\\exampleFile.txt", source); and simply parse the parts you want from the source and do your calculations.
Keep in mind this will download the source code of the URL as-is, which means you will need to do some parsing of the text in order to get the information you want out of it.
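For the second part of your question (pulling numbers out of the downloaded text and doing simple math with them), a small sketch using Regex could look like this; the pattern is only an example and depends on how the numbers appear in the page:
// needs: using System.Text.RegularExpressions; using System.Globalization;
double sum = 0;
foreach (Match m in Regex.Matches(source, @"\d+(\.\d+)?"))
{
    sum += double.Parse(m.Value, CultureInfo.InvariantCulture);
}
Console.WriteLine("Sum of all numbers found: " + sum);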
Maybe you are looking for this:
var contents = new System.Net.WebClient().DownloadString(url);
I've been trying to deal with some delimited text files that have non-standard delimiters (not comma/quote or tab delimited). The delimiters are random ASCII characters that don't show up often between the delimiters. After searching around, I seem to have found no solutions in .NET that suit my needs, and the custom libraries that people have written for this seem to have flaws when it comes to gigantic input (a 4GB file with some field values easily having several million characters).
While this seems a bit extreme, it is actually a standard in the Electronic Document Discovery (EDD) industry for some review software to have field values that contain the full contents of a document. For reference, I've previously done this in Python using the csv module with no problems.
Here's an example input:
Field delimiter =
quote character = þ
þFieldName1þþFieldName2þþFieldName3þþFieldName4þ
þValue1þþValue2þþValue3þþSomeVery,Very,Very,Large value(5MB or so)þ
...etc...
Edit:
So I went ahead and created a delimited file parser from scratch. I'm kind of wary of using this solution, as it may be prone to bugs. It also doesn't feel "elegant" or correct to have to write my own parser for a task like this. I also have a feeling that I probably didn't need to write a parser from scratch for this anyway.
Use the FileHelpers API. It's .NET and open source. It's extremely high-performance, using compiled IL code to set fields on strongly typed objects, and it supports streaming.
It supports all sorts of file types and custom delimiters; I've used it to read files larger than 4GB.
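To give an idea of the shape of it, a FileHelpers record class for the example above might look roughly like this (attribute details can differ between FileHelpers versions, and the delimiter placeholder needs to be replaced with your real one):
[DelimitedRecord("?")]          // replace "?" with your actual field delimiter
public class EddRecord
{
    [FieldQuoted('þ')] public string Field1;
    [FieldQuoted('þ')] public string Field2;
    [FieldQuoted('þ')] public string Field3;
    [FieldQuoted('þ')] public string Field4;
}

// Streaming read, so the whole 4GB file is never held in memory
var engine = new FileHelperAsyncEngine<EddRecord>();
using (engine.BeginReadFile("input.dat"))
{
    foreach (EddRecord record in engine)
    {
        // process record...
    }
}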
If for some reason that doesn't do it for you, try just reading line by line with a string.split:
public IEnumerable<string[]> CreateEnumerable(StreamReader input)
{
string line;
while ((line = input.ReadLine()) != null)
{
yield return line.Split('þ');
}
}
That'll give you simple string arrays representing the lines in a streamy fashion that you can even LINQ into ;) Remember, however, that the IEnumerable is lazily loaded, so don't close or alter the StreamReader until you've iterated (or caused a full load operation like ToList/ToArray or such - given your file size, however, I assume you won't do that!).
Here's a good sample use of it:
using (StreamReader sr = new StreamReader("c:\\test.file"))
{
var qry = from l in CreateEnumerable(sr).Skip(1)
where l[3].Contains("something")
select new { Field1 = l[0], Field2 = l[1] };
foreach (var item in qry)
{
Console.WriteLine(item.Field1 + " , " + item.Field2);
}
}
Console.ReadLine();
This will skip the header line, then print out the first two field from the file where the 4th field contains the string "something". It will do this without loading the entire file into memory.
Windows and high-performance I/O means: use I/O Completion Ports. You may have to do some extra plumbing to get it working in your case.
This is with the understanding that you want to use C#/.NET, and according to Joe Duffy
18) Don’t use Windows Asynchronous Procedure Calls (APCs) in managed code.
I had to learn that one the hard way ;), but ruling out APC use, IOCP is the only sane option. It also supports many other types of I/O, frequently used in socket servers.
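In managed code you don't usually talk to the completion port directly; opening the FileStream for overlapped I/O and using the async read pattern gives you IOCP-backed reads under the hood. A rough sketch, with the actual parsing left as a placeholder:
using System;
using System.IO;

static class AsyncChunkReader
{
    public static void Read(string path)
    {
        var buffer = new byte[8192];
        // FileOptions.Asynchronous opens the handle for overlapped I/O, so read
        // completions are dispatched via the thread pool's I/O completion port.
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                    FileShare.Read, buffer.Length,
                                    FileOptions.Asynchronous);

        AsyncCallback callback = null;
        callback = ar =>
        {
            int read = stream.EndRead(ar);
            if (read > 0)
            {
                // TODO: feed buffer[0..read) into your parser state here
                stream.BeginRead(buffer, 0, buffer.Length, callback, null);
            }
            else
            {
                stream.Dispose();
            }
        };
        stream.BeginRead(buffer, 0, buffer.Length, callback, null);
    }
}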
As far as parsing the actual text, check out Eric White's blog for some streamlined stream use.
I would be inclined to use a combination of memory-mapped files (MSDN points to a .NET wrapper here) and a simple incremental parse, yielding back an IEnumerable of your records / text lines (or whatever).
You mention that some fields are very, very big; if you try to read them in their entirety into memory you may be getting yourself into trouble. I would read through the file in 8K (or similarly small) chunks, parse the current buffer, and keep track of state.
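With the .NET 4 wrapper (System.IO.MemoryMappedFiles) the starting point might look something like this; the incremental parsing itself is left as a placeholder:
// needs System.IO and System.IO.MemoryMappedFiles
using (var mmf = MemoryMappedFile.CreateFromFile(fileName, FileMode.Open))
using (var view = mmf.CreateViewStream())
{
    var buffer = new byte[8192];
    int read;
    while ((read = view.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Feed buffer[0..read) into an incremental parser, keep track of state,
        // and yield complete records/lines as they are recognised.
    }
}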
What are you trying to do with this data that you are parsing? Are you searching for something? Are you transforming it?
I don't see a problem with you writing a custom parser. The requirements seem sufficiently different to anything already provided by the BCL, so go right ahead.
"Elegance" is obviously a subjective thing. In my opinion, if your parser's API looks and works like a standard BCL "reader"-type API, then that is quite "elegant".
As for the large data sizes, make your parser work by reading one byte at a time and use a simple state machine to work out what to do. Leave the streaming and buffering to the underlying FileStream class. You should be OK with performance and memory consumption.
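A bare-bones version of that state machine, reading one character at a time through a TextReader over the FileStream (one byte at a time works the same way with a single-byte encoding), could look like this. The þ quote character is taken from the example input, the delimiter is passed in, and escaped quotes inside values are not handled:
// needs System.Collections.Generic, System.IO and System.Text
static IEnumerable<List<string>> ReadRecords(TextReader reader, char delimiter)
{
    var record = new List<string>();
    var field = new StringBuilder();
    bool inQuotes = false;
    int c;
    while ((c = reader.Read()) != -1)
    {
        char ch = (char)c;
        if (inQuotes)
        {
            if (ch == 'þ') inQuotes = false;   // closing quote ends the value
            else field.Append(ch);             // value text may contain delimiters/newlines
        }
        else if (ch == 'þ')
        {
            inQuotes = true;                   // opening quote starts a value
        }
        else if (ch == delimiter)
        {
            record.Add(field.ToString());
            field.Length = 0;
        }
        else if (ch == '\n')
        {
            record.Add(field.ToString());
            field.Length = 0;
            yield return record;
            record = new List<string>();
        }
        // anything else outside quotes (e.g. '\r') is ignored
    }
    if (field.Length > 0 || record.Count > 0)
    {
        record.Add(field.ToString());
        yield return record;
    }
}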
Example of how you might use such a parser class:
using (var reader = new EddReader(new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 8192))) {
// Read a small field
string smallField = reader.ReadFieldAsText();
// Read a large field
Stream largeField = reader.ReadFieldAsStream();
}
While this doesn't help address the large input issue, a possible solution to the parsing issue might include a custom parser that uses the strategy pattern to supply a delimiter.
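A minimal sketch of what that strategy interface might look like (all names here are made up):
public interface IDelimiterStrategy
{
    char FieldDelimiter { get; }
    char QuoteCharacter { get; }
}

public class EddDelimiterStrategy : IDelimiterStrategy
{
    public char FieldDelimiter { get { return '|'; } }  // substitute the real delimiter
    public char QuoteCharacter { get { return 'þ'; } }
}

// The parser would then take the strategy in its constructor:
// var parser = new DelimitedParser(new EddDelimiterStrategy());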