I have an INI file that includes this line: _DeviceName = #1234. I want to get the _DeviceName value, which is 1234, but my code shows me the full line, _DeviceName = #1234.
I tried this code:
if (File.Exists("inifile.ini"))
{
if (File.ReadAllText("inifile.ini").Split('\r', '\n').First(st => st.StartsWith("_DeviceName")) != null)
{
string s = File.ReadAllText("inifile.ini").Split('\r', '\n').First(st => st.StartsWith("_DeviceName"));
MessageBox.Show(s);
}
}
You can use File.ReadAllLines instead. You may want to look at existing INI file readers if you're doing anything more complex, but this should work. As a side note, it's not efficient to call File.ReadAllText twice in a row; in most cases it's best to just store the result in a variable.
if (File.Exists("inifile.ini"))
{
string[] allLines = File.ReadAllLines("inifile.ini");
string deviceLine = allLines.Where(st => st.StartsWith("_DeviceName")).FirstOrDefault();
if(!String.IsNullOrEmpty(deviceLine))
{
string value = deviceLine.Split('=')[1].Trim();
MessageBox.Show(value);
}
}
You could add another split to get the value out:
if (File.Exists("inifile.ini"))
{
    string line = File.ReadAllText("inifile.ini")
        .Split('\r', '\n')
        .FirstOrDefault(st => st.StartsWith("_DeviceName"));
    if (line != null)
    {
        string s = line.Split('=')[1];
        MessageBox.Show(s);
    }
}
I made an application that implements an INI Reader/Writer. The source code is not available just yet, but keep checking back; I will have the source uploaded in a few days (today is 8/14/13). This particular INI reader is CASE SENSITIVE and may need modification for special characters, but so far I have had no problems.
http://sourceforge.net/projects/dungeonchest/
A dirty way would be to use String.Replace("_DeviceName = #", "")
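For completeness, a minimal sketch of that approach (it assumes the exact "_DeviceName = #" spelling and spacing shown in the question):
string line = File.ReadAllLines("inifile.ini")
    .FirstOrDefault(st => st.StartsWith("_DeviceName"));
if (line != null)
{
    MessageBox.Show(line.Replace("_DeviceName = #", "")); // shows "1234"
}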
I have this JSON, ["[\"~:bbl:P5085\",\"~:cosco:NoTag\"]"], coming in via
options.Type1.Values()
I am trying to keep only the values tagged with bbl, so from the example above I want to keep P5085 and remove the rest. There can be multiple bbl values, and I need to keep all of them. I tried the code below, but it's not working. The splitting gives me
P5085","~:cosco
I don't understand what I am doing wrong in the code below. Can someone provide a fix?
private void InitializePayload(JsonTranslatorOptions options)
{
_payload.Add("ubsub:attributes", _attributes);
_payload.Add("ubsub:relations", _relations);
JArray newType = new JArray();
foreach (JValue elem in options.Type1.Values())
{
if (elem.ToString().Contains("rdl"))
{
string val = elem.ToString().Split(":")[1];
newType.Add(val);
}
}
_payload.Add("ubsub:type", newType);
}
Try this:
var input = "['[\"~:bbl:P5085\",\"~:cosco:NoTag\"]']";
var BBLs_List = JArray.Parse(input)
.SelectMany(m => JArray.Parse(m.ToString()))
.Select(s => s.ToString().Split(":"))
.Where(w => w[1] == "bbl")
.Select(s => s[2])
.ToList();
As I explain in the comments this isn't JSON, except at the top level which is an array with a single string value. That specific string could be parsed as a JSON array itself, but its values can't be handled as JSON in any way. They're just strings.
While you could try parsing and splitting that string, it would be a lot safer to find the actual specification of that format and write a parser for it. Or find a library for that API.
You could use the following code for parsing, but it's slow, not very readable and based on assumptions that can easily break - what happens if a value contains a colon?
foreach (var longString in JArray.Parse(input))
{
    foreach (var smallString in JArray.Parse(longString.ToString()))
    {
        var values = smallString.ToString().Split(":");
        if (values[1] == "bbl")
        {
            return values[2];
        }
    }
}
return null;
You could convert that to LINQ, but that would be just as hard to read:
var value = JArray.Parse(input)
    .SelectMany(longString => JArray.Parse(longString.ToString()))
    .Select(smallString => smallString.ToString().Split(":"))
    .Where(values => values[1] == "bbl")
    .Select(values => values[2])
    .FirstOrDefault();
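To address the colon concern above, one hedged tweak is to cap the number of parts produced by Split so that anything after the second ':' stays in one piece:
var value = JArray.Parse(input)
    .SelectMany(longString => JArray.Parse(longString.ToString()))
    .Select(smallString => smallString.ToString().Split(new[] { ':' }, 3))
    .Where(parts => parts.Length == 3 && parts[1] == "bbl")
    .Select(parts => parts[2]) // "P5085", even if the value itself contains ':'
    .FirstOrDefault();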
I'm trying to learn some C# here. My goal is to create and write to multiple custom files whose names vary based on a part of the string being written. Below are some examples.
Let's say the strings to be written are basically rows of a CSV file:
2019-10-28 16:14:14;;15.5;0;;3;false;false;0;111;123;;;10;false;1;2.5;;;;0;
2019-10-28 16:13:11;;18;0;;1;false;false;222;333;123;;;10;false;1;1;;;;0;G
2019-10-29 16:13:11;;18;0;;3;false;false;true;
As you may notice, the first field of each string is a date, and that is the key field for choosing the name of the file to write to.
The first two rows have the same date, so both strings will be written to a single file; the third goes to a second file since it has a different date.
Expected Result:
First File:
2019-10-28 16:14:14;;15.5;0;;3;false;false;0;111;123;;;10;false;1;2.5;;;;0;
2019-10-28 16:13:11;;18;0;;1;false;false;222;333;123;;;10;false;1;1;;;;0;
Second File:
2019-10-29 16:13:11;;18;0;;3;false;false;true;
Now I have many rows like those, and I'd like to write them to different files based on their first value.
I managed to create a class which might represent each row:
class Value {
public DateTime date = DateTime.Now;
public decimal cod = 0;
public decimal quantity = 0;
public decimal price = 0;
//Other irrelevant fields
}
I also tried to develop a method to write a single Value to a given file:
private static void WriteValue(Value content, string folder, string fileName) {
using(StreamWriter writer = new StreamWriter(Path.Combine(folder, fileName), true, Encoding.ASCII)) {
writer.Write(content.dataora.ToString("yyyyMMdd"));
writer.Write("0000");
writer.Write("I");
writer.Write("C");
writer.Write(content.codpro.ToString().PadLeft(14, '0'));
writer.Write(Convert.ToInt64(content.qta * 100).ToString().PadLeft(8, '0'));
writer.WriteLine();
}
}
And a method to write the Values into files:
static void WriteValues(List<Value> fileContent) {
//Once I got all Values of the file in a List of Values, I try to write them to files
if(fileContent.Count > 0) {
foreach(Value riga in fileContent) {
//Temp Dates, used to compare previous Date in order to know if I have to write Value in new File or it can be written on same File
string dataTemp = riga.dataora.ToString("yyyy-MM-dd");
string lastData = string.Empty;
string FileName = "ordinivoa_999999." + DateTime.Now.ToString("yyMMddHHmmssfff");
//If lastData is Empty we are writing first value
if (string.IsNullOrEmpty(lastData)) {
WriteValue(riga, toLinfaFolder, FileName);
lastData = dataTemp;
}
//Else if lastData is equal as last managed date we write on same file
else if (lastData == dataTemp) {
WriteValue(riga, toLinfaFolder, FileName);
}
else {
//Else current date of Value is new, so we write it in another file
string newFileName = "ordinivoa_999999." + DateTime.Now.AddMilliseconds(1).ToString("yyMMddHHmmssfff");
WriteValue(riga, toLinfaFolder, newFileName);
lastData = dataTemp;
}
}
}
}
My issue is that the method above behaves strangely: it writes the first group of equal dates to a single file, which is good, but then writes all the other values to a single file even though they have different dates.
How can I make sure values end up in the same file only when they share the same date?
You can group equal dates easily with a LINQ query
private static void WriteValues(List<Value> fileContent)
{
var dateGroups = fileContent
.GroupBy(v => $"ordinivoa_999999.{v.date:yyMMddHHmmssfff}");
foreach (var group in dateGroups) {
string path = Path.Combine(toLinfaFolder, group.Key);
using (var writer = new StreamWriter(path, true, Encoding.ASCII)) {
foreach (Value item in group) {
//TODO: write item to file
writer.WriteLine(...
}
}
}
}
Since a DateTime stores values in units of one ten-millionth of a second, two dates that look equal once formatted might still be different, so I suggest grouping on the file name to avoid this effect. I used string interpolation to create and format the file name.
Don't open and close the file for each text line.
At the top of your code file you need a
using System.Linq;
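As a hedged sketch of the //TODO line above, you could reuse the formatting from the question's WriteValue inside the inner loop (field names follow the Value class from the question; adjust them if your real class uses dataora/codpro/qta):
writer.Write(item.date.ToString("yyyyMMdd"));
writer.Write("0000");
writer.Write("I");
writer.Write("C");
writer.Write(item.cod.ToString().PadLeft(14, '0'));
writer.Write(Convert.ToInt64(item.quantity * 100).ToString().PadLeft(8, '0'));
writer.WriteLine();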
You are on the right path declaring a class, but you're also doing a whole bunch of unnecessary stuff. Using LINQ, this can be simplified a great deal.
First I define a class, and since all you want to do is write each record, I would use a DateTime field, and a string field for the entire raw record.
class MyRecordOfSomeType
{
public DateTime Date { get; set; }
public string RawData { get; set; }
}
The DateTime field will come in handy when you're doing the LINQ grouping.
Now we iterate through your data, split using ;, then create your class instance list.
var data = new List<string>()
{
"2019-10-28 16:14:14;;15.5;0;;3;false;false;0;111;123;;;10;false;1;2.5;;;;0;",
"2019-10-28 16:13:11;;18;0;;1;false;false;222;333;123;;;10;false;1;1;;;;0;G",
"2019-10-29 16:13:11;;18;0;;3;false;false;true;"
};
var records = new List<MyRecordOfSomeType>();
foreach (var item in data)
{
var parts = item.Split(';');
DateTime.TryParse(parts[0], out DateTime result);
var rec = new MyRecordOfSomeType() { Date = result, RawData = item };
records.Add(rec);
}
Then we group by date. Note that it's important to group by the Date component of the DateTime structure, otherwise it will consider the Time component as well and you'll have more files than you need.
var groups = records.GroupBy(x => x.Date.Date);
Finally, iterate your groups, and write contents of each group to a new file.
foreach (var group in groups)
{
var fileName = string.Format("ordinivoa_999999_{0}.csv", group.Key.ToString("yyMMddHHmmssfff"));
File.WriteAllLines(fileName, group.Select(x => x.RawData));
}
I want to build a list without duplicates from a file that has many lines with an identifier, sometimes repeated. When I use List<string>.Contains, it doesn't work. I think this is because I'm adding objects instead of the strings directly.
public List<string> obterRelacaoDeBlocos()
{
List<string> listaDeBlocos = new List<string>();
foreach(string linhas in arquivos.obterLinhasDoArquivo())
{
string[] linhaQuebrada = linhas.Split('|');
string bloco = linhaQuebrada[1].ToString();
if (listaDeBlocos.Contains((string)bloco) != true)
{
listaDeBlocos.Add( bloco + ":" + listaDeBlocos.Contains(bloco).ToString());
}
}
return listaDeBlocos;
}
You're appending ":" + listaDeBlocos.Contains(bloco).ToString() to the string before you add it to the list. That's not going to match when you encounter the same word again, so Contains will return false and the same word will get added again.
I don't see what appending the result of Contains to the end of each string accomplishes anyway (at that point it will always be ":False"), so just remove that part and it should work.
if (!listaDeBlocos.Contains(bloco))
{
listaDeBlocos.Add(bloco);
}
Since you're only interested in one part of each string, based on how you're splitting, you could rewrite your method using LINQ. This is untested but should work:
public List<string> obterRelacaoDeBlocos()
{
return arquivos.obterLinhasDoArquivo().Select(x => x.Split('|')[1]).Distinct().ToList();
}
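If the file is very large, a HashSet<string> avoids the linear scan that List<string>.Contains performs on every line; a rough, untested sketch of the same method using that approach:
public List<string> obterRelacaoDeBlocos()
{
    var blocos = new HashSet<string>();
    foreach (string linha in arquivos.obterLinhasDoArquivo())
    {
        blocos.Add(linha.Split('|')[1]); // Add simply returns false for duplicates
    }
    return blocos.ToList();
}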
I have a text file containing lines similar to the following (around 500k lines in total):
ADD GTRX:TRXID=0, TRXNAME="M_RAK_JeerExch_G_1879_18791_A-0", FREQ=81, TRXNO=0, CELLID=639, IDTYPE=BYID, ISMAINBCCH=YES, ISTMPTRX=NO, GTRXGROUPID=2556;
ADD GTRX:TRXID=1, TRXNAME="M_RAK_JeerExch_G_1879_18791_A-1", FREQ=24, TRXNO=1, CELLID=639, IDTYPE=BYID, ISMAINBCCH=NO, ISTMPTRX=NO, GTRXGROUPID=2556;
ADD GTRX:TRXID=5, TRXNAME="M_RAK_JeerExch_G_1879_18791_A-2", FREQ=28, TRXNO=2, CELLID=639, IDTYPE=BYID, ISMAINBCCH=NO, ISTMPTRX=NO, GTRXGROUPID=2556;
ADD GTRX:TRXID=6, TRXNAME="M_RAK_JeerExch_G_1879_18791_A-3", FREQ=67, TRXNO=3, CELLID=639, IDTYPE=BYID, ISMAINBCCH=NO, ISTMPTRX=NO, GTRXGROUPID=2556;
My intention is first to get the FREQ value where ISMAINBCCH=YES, which I did easily, and then to concatenate the FREQ values where ISMAINBCCH=NO. I did that using File.ReadLines, but it is taking a long time. Is there a better way to do this? The ISMAINBCCH=NO lines belonging to the same cell appear within a range of about 10 lines above and below the ISMAINBCCH=YES line, but I don't know how to exploit that; probably I should work from the current line where ISMAINBCCH=YES. The following is the code I have so far:
using (StreamReader sr = File.OpenText(filename))
{
    while ((s = sr.ReadLine()) != null)
    {
        if (s.Contains("ADD GTRX:"))
        {
            try
            {
                var gtrx = new Gtrx
                {
                    CellId = int.Parse(PullValue(s, "CELLID")),
                    Freq = int.Parse(PullValue(s, "FREQ")),
                    //TrxNo = int.Parse(PullValue(s, "TRXNO")),
                    IsMainBcch = PullValue(s, "ISMAINBCCH").ToUpper() == "YES",
                    Commabcch = new List<string> { PullValue(s, "ISMAINBCCH") },
                    DEFINED_TCH_FRQ = null,
                    TrxName = PullValue(s, "TRXNAME"),
                };
                var result = String.Join(",",
                    from ss in File.ReadLines(filename)
                    where ss.Contains("ADD GTRX:")
                    where int.Parse(PullValue(ss, "CELLID")) == gtrx.CellId
                    where PullValue(ss, "ISMAINBCCH").ToUpper() != "YES"
                    select int.Parse(PullValue(ss, "FREQ")));
                gtrx.DEFINED_TCH_FRQ = result;
            }
            catch
            {
                // (exception handling omitted)
            }
        }
    }
}
from ss in File.ReadLines(filename)
This reads through the entire file to build the joined string, and you do it inside a loop that is itself driven by reading the same file, so that work is thrown away and repeated for every line. You're reading the same file number_of_lines + 1 times when it hasn't changed in the meantime.
An obvious boost would therefore be to read the file once (for example with File.ReadAllLines(filename)), store the resulting array, and then use that array both to drive the loop instead of while ((s = sr.ReadLine()) != null) and inside the loop instead of the repeated call to ReadLines().
But there's a flaw in your logic in even looking at ReadLines() repeatedly; you're already scanning through the file so you're going to come across all the lines relevant to the same CELLID later anyway:
var gtrxDict = new Dictionary<int, Gtrx>();
using (StreamReader sr = File.OpenText(filename))
{
while ((s = sr.ReadLine()) != null)
{
if (s.Contains("ADD GTRX:"))
{
int cellID = int.Parse(PullValue(s, "CELLID"));
Gtrx gtrx;
if(gtrxDict.TryGetValue(cellID, out gtrx)) // Found previous one
gtrx.DEFINED_TCH_FRQ += "," + int.Parse(PullValue(s, "FREQ"));
else // First one for this ID, so create a new object
gtrxDict[cellID] = new Gtrx
{
CellId = cellID,
Freq = int.Parse(PullValue(s, "FREQ")),
IsMainBcch = PullValue(s, "ISMAINBCCH").ToUpper() == "YES",
Commabcch = new List<string> { PullValue(s, "ISMAINBCCH") },
DEFINED_TCH_FRQ = int.Parse(PullValue(s, "FREQ")).ToString(),
TrxName = PullValue(s, "TRXNAME"),
};
}
}
}
This way we don't need to keep more than one line from the file in memory at all, never mind doing so repeatedly. After this has run gtrxDict will contain a Gtrx object for each distinct CELLID in the file, with DEFINED_TCH_FRQ as a comma-separated list of the values from each matching line.
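A hedged usage sketch of the result (console output just for illustration):
foreach (var pair in gtrxDict)
{
    Console.WriteLine("CELLID " + pair.Key + ": FREQs " + pair.Value.DEFINED_TCH_FRQ);
}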
The following code snippet can be used to read the entire text file:
using System.IO;
/// Read Text Document specified by full path
private string ReadTextDocument(string TextFilePath)
{
string _text = String.Empty;
try
{
// open file if exists
if (File.Exists(TextFilePath))
{
using (StreamReader reader = new StreamReader(TextFilePath))
{
_text = reader.ReadToEnd();
reader.Close();
}
}
else
{
throw new FileNotFoundException();
}
return _text;
}
catch { throw; }
}
Get the in-memory string, then apply the Split() function to create a string[] and process the array elements the same way as lines of the original text file. In the case of a very large file, this approach also provides the option of reading it in chunks of data, processing them and then disposing of them upon completion (re: https://msdn.microsoft.com/en-us/library/system.io.streamreader%28v=vs.110%29.aspx).
As mentioned in the comments by @Michael Liu, another option is File.ReadAllText(), which provides an even more compact solution and can be used instead of reader.ReadToEnd(). Other useful methods of the File class are detailed in: https://msdn.microsoft.com/en-us/library/system.io.file%28v=vs.110%29.aspx
And, finally, the FileStream class can be used for both file read and write operations with various levels of granularity (re: https://msdn.microsoft.com/en-us/library/system.io.filestream%28v=vs.110%29.aspx).
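A minimal sketch of that split-in-memory approach, reusing the ReadTextDocument method above (filename stands in for your actual path):
string text = ReadTextDocument(filename); // or File.ReadAllText(filename)
string[] lines = text.Split(new[] { "\r\n", "\n" }, StringSplitOptions.None);
foreach (string line in lines)
{
    // process each line exactly as you would a line read via StreamReader.ReadLine()
}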
SUMMARY
In response to the interesting comments thread, here is a brief summary.
The biggest bottleneck in the procedure described in the OP's question is disk IO. Here are some numbers: the average seek time of a good-quality HDD is about 5 ms, plus the actual read time per access. It may well be that processing the entire file's data in memory takes less time than a single HDD IO read (sometimes significantly less; incidentally, an SSD does better but is still no match for DDR3 RAM). The RAM of a modern PC is rather large (typically 4 to 8 GB, which is more than enough to handle most text files). Thus, the core idea of my solution is to minimize disk IO read operations and do all the file data processing in memory. The implementation can vary, of course.
Hope this may help. Best regards,
I think that this more-or-less gets you what you want.
First read in all the data:
var data =
(
from s in File.ReadLines(filename)
where s != null
where s.Contains("ADD GTRX:")
select new Gtrx
{
CellId = int.Parse(PullValue(s, "CELLID")),
Freq = int.Parse(PullValue(s, "FREQ")),
//TrxNo = int.Parse(PullValue(s, "TRXNO")),
IsMainBcch = PullValue(s, "ISMAINBCCH").ToUpper() == "YES",
Commabcch = new List<string> { PullValue(s, "ISMAINBCCH") },
DEFINED_TCH_FRQ = null,
TrxName = PullValue(s, "TRXNAME"),
}
).ToArray();
Based on the loaded data create a lookup to return the frequencies based on each cell id:
var lookup =
data
.Where(d => !d.IsMainBcch)
.ToLookup(d => d.CellId, d => d.Freq);
Now update the DEFINED_TCH_FRQ based on the lookup:
foreach (var d in data)
{
d.DEFINED_TCH_FRQ = String.Join(",", lookup[d.CellId]);
}
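As a quick, hedged check of the result (console output just for illustration), each main-BCCH record now carries the comma-separated frequencies of the other TRXs in its cell:
foreach (var d in data.Where(d => d.IsMainBcch))
{
    Console.WriteLine("CELLID " + d.CellId + ": main FREQ " + d.Freq + ", others: " + d.DEFINED_TCH_FRQ);
}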
I have seen several posts giving examples of how to read from text files, and examples of how to make a string 'public' (static or const), but I haven't been able to combine the two inside a 'function' in a way that makes sense to me.
I have a text file called 'MyConfig.txt'.
In that, I have 2 lines.
MyPathOne=C:\TestOne
MyPathTwo=C:\TestTwo
I want to be able to read that file when I start the form, making both MyPathOne and MyPathTwo accessible from anywhere inside the form, using something like this:
ReadConfig("MyConfig.txt");
The way I am trying to do that now, which is not working, is this:
public voice ReadConfig(string txtFile)
{
using (StreamReader sr = new StreamResder(txtFile))
{
string line;
while ((line = sr.ReadLine()) !=null)
{
var dict = File.ReadAllLines(txtFile)
.Select(l => l.Split(new[] { '=' }))
.ToDictionary( s => s[0].Trim(), s => s[1].Trim());
}
public const string MyPath1 = dic["MyPathOne"];
public const string MyPath2 = dic["MyPathTwo"];
}
}
The txt file will probably never grow over 5 or 6 lines, and I am not stuck on using StreamReader or dictionary.
As long as I can access the path variables by name from anywhere, and it doesn't add like 400 lines of code or something, then I am OK with doing whatever would be best, safest, fastest, and easiest.
I have read many posts where people say the data should be stored in XML, but I figure that part doesn't matter much, because reading the file and getting the variables would be almost the same either way. That aside, I would rather use a plain txt file that somebody (an end user) could edit without having to understand XML. (Which of course means lots of checks for blank lines, whether the path exists, etc. I am OK with doing that part; I just want to get this part working first.)
I have read about different ways of using ReadAllLines into an array, and some say to create a new separate 'class' file (which I don't really understand yet, but I'm working on it). Mainly I want to find a 'stable' way to do this.
(project is using .Net4 and Linq by the way)
Thanks!!
The code you've provided doesn't even compile. Instead, you could try this:
public string MyPath1;
public string MyPath2;
public void ReadConfig(string txtFile)
{
using (StreamReader sr = new StreamReader(txtFile))
{
// Declare the dictionary outside the loop:
var dict = new Dictionary<string, string>();
// (This loop reads every line until EOF or the first blank line.)
string line;
while (!string.IsNullOrEmpty((line = sr.ReadLine())))
{
// Split each line around '=':
var tmp = line.Split(new[] { '=' },
StringSplitOptions.RemoveEmptyEntries);
// Add the key-value pair to the dictionary:
dict[tmp[0]] = tmp[1];
}
// Assign the values that you need:
MyPath1 = dict["MyPathOne"];
MyPath2 = dict["MyPathTwo"];
}
}
To take into account:
You can't declare public fields inside methods.
You can't initialize const fields at run-time. Instead you provide a constant value for them at compilation time.
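To illustrate those two points, a minimal sketch (the class and member names here are just examples):
public class PathSettings
{
    public const string DefaultFile = "MyConfig.txt";                  // const: value fixed at compile time
    public static readonly string LoadedAt = DateTime.Now.ToString();  // static readonly: set once, at run time
    public static string MyPath1;                                      // ordinary field: assignable whenever you read the config
}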
Got it. Thanks!
public static string Path1;
public static string Path2;
public static string Path3;
public void ReadConfig(string txtFile)
{
using (StreamReader sr = new StreamReader(txtFile))
{
var dict = new Dictionary<string, string>();
string line;
while (!string.IsNullOrEmpty((line = sr.ReadLine())))
{
dict = File.ReadAllLines(txtFile)
.Select(l => l.Split(new[] { '=' }))
.ToDictionary( s => s[0].Trim(), s => s[1].Trim());
}
Path1 = dict["PathOne"];
Path2 = dict["PathTwo"];
Path3 = Path1 + @"\Test";
}
}
You need to define the variables outside the function to make them accessible to other functions.
public string MyPath1; // (Put these at the top of the class.)
public string MyPath2;
public void ReadConfig(string txtFile)
{
var dict = File.ReadAllLines(txtFile)
.Select(l => l.Split(new[] { '=' }))
.ToDictionary( s => s[0].Trim(), s => s[1].Trim()); // read the entire file into a dictionary.
MyPath1 = dict["MyPathOne"];
MyPath2 = dict["MyPathTwo"];
}
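A hedged usage sketch, e.g. from the form's constructor or Load event, using the sample MyConfig.txt from the question:
ReadConfig("MyConfig.txt");
MessageBox.Show(MyPath1); // C:\TestOne
MessageBox.Show(MyPath2); // C:\TestTwo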
This question is similar to Get parameters out of text file
(I put an answer there. I "can't" paste it here.)
(Unsure whether I should "flag" this question as duplicate. "Flagging" "closes".)
(Do duplicate questions ever get consolidated? Each can have virtues in the wording of the [often lame] question or the [underreaching and overreaching] answers. A consolidated version could have the best of all, but consolidation is rarely trivial.)