I am trying to write data to a CSV file. I want the data to be wrapped in double quotes when the file is opened in Notepad, but the quotes shouldn't be visible when the file is opened as a CSV (for example, in a spreadsheet application).
The following is the code that I am using, but it doesn't give the expected result.
csvFile.WriteField("\"" +data.FirstName == null ? string.Empty : data.FirstName + "\"");
Can someone help me out with this?
The problem is that you are fighting CsvHelper's quoting. According to the RFC 4180 specification:
Fields containing line breaks (CRLF), double quotes, and commas
should be enclosed in double-quotes. For example:
"aaa","b CRLF
bb","ccc" CRLF
zzz,yyy,xxx
If double-quotes are used to enclose fields, then a double-quote
appearing inside a field must be escaped by preceding it with
another double quote. For example:
"aaa","b""bb","ccc"
So when you add double quotes to the field, CsvHelper recognizes that the whole field needs to be enclosed in double quotes and that the added quotes need to be escaped with another double quote. That is why you end up with 3 double quotes: a field value of "John" (with your manually added quotes) is written as """John""" in the file.
CsvHelper has a configuration function where you can tell it when it should quote a field. @JoshClose already has an answer here for wrapping all fields in double quotes. I'll update it for the current version of CsvHelper.
void Main()
{
    var records = new List<Foo>
    {
        new Foo { Id = 1, Name = "one" },
        new Foo { Id = 2, Name = "two" },
    };

    using (var writer = new StringWriter())
    using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
    {
        csv.Configuration.ShouldQuote = (field, context) => true;

        csv.WriteRecords(records);

        writer.ToString().Dump();
    }
}

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}
If you still wanted to add the double quotes yourself, you could turn off CsvHelper's double quoting.
csv.Configuration.ShouldQuote = (field, context) => false;
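A minimal sketch of that combination (not from the original answer; it assumes the pre-20.0.0 Configuration API shown above and the data object from the question):

// Turn off CsvHelper's own quoting so the manually added quotes are written as-is.
using (var writer = new StringWriter())
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    csv.Configuration.ShouldQuote = (field, context) => false;

    // data.FirstName comes from the question; null falls back to an empty string.
    csv.WriteField("\"" + (data.FirstName ?? string.Empty) + "\"");
    csv.NextRecord();

    Console.WriteLine(writer.ToString());
}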
Breaking change for Version 20.0.0 and later
Changed CsvConfiguration to a read only record to eliminate threading issues.
You now need to create a CsvConfiguration, set ShouldQuote when initializing it, and then pass it to the CsvWriter.
void Main()
{
    var records = new List<Foo>
    {
        new Foo { Id = 1, Name = "one" },
        new Foo { Id = 2, Name = "two" },
    };

    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        ShouldQuote = (field, context) => true
    };

    using (var writer = new StringWriter())
    using (var csv = new CsvWriter(writer, config))
    {
        csv.WriteRecords(records);

        writer.ToString().Dump();
    }
}

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}
I made the code a bit more readable by assigning the first name to a variable that is passed later. I used an escaped string to add the quotes to the final result. Now you can pass it to your WriteField method (see the sketch after the snippet below).
More information about string escaping can be found here and here.
var firstName = data.FirstName == null ? string.Empty : data.FirstName;
var firstNameWithQuotes = $@"""{firstName}""";
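As a usage sketch (not part of the original answer), the quoted value could then be passed to WriteField. Note that, per the first answer above, CsvHelper will still escape those quotes unless ShouldQuote is configured to return false:

// Hypothetical wiring into the question's writer; "data" and "csvFile" are assumed to exist as in the question.
var firstName = data.FirstName == null ? string.Empty : data.FirstName;
var firstNameWithQuotes = $@"""{firstName}""";

csvFile.WriteField(firstNameWithQuotes);
csvFile.NextRecord();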
Using CsvConfiguration, you can change how the CSV file is written. To add double quotes around every field, have ShouldQuote return true; to remove them, have it return false.
Please try this in the latest version; I tried it in v28 and it works for me.
var config = new CsvConfiguration(CultureInfo.InvariantCulture)
{
    ShouldQuote = (field) => false // Set to false to remove the double quotes
};

using (StreamWriter _streamWriter = new StreamWriter(fileNamePath))
{
    using (var _csvWriter = new CsvWriter(_streamWriter, config))
    {
        await _csvWriter.WriteRecordsAsync(models);
    }
}
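For completeness (this snippet is not in the original answer), the same configuration with quoting turned on would be:

// To wrap every field in double quotes instead, have ShouldQuote return true.
var quoteAllConfig = new CsvConfiguration(CultureInfo.InvariantCulture)
{
    ShouldQuote = (field) => true // Set to true to quote every field
};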
I have recently migrated my CSV library (CsvHelper) from version 2.7.1 to version 30.
With my old code, if the header is not there, it does not go into the if condition, so hasHeaderRecord stays false. If the CSV file has a header, it then checks the header count.
But after upgrading, if the CSV file does not have a header, it still goes into the if condition, which is not expected.
if (csvReader.Read())
{
    var csvHeaders = csvReader.HeaderRecord;
    hasHeaderRecord = csvHeaders.Intersect(_Column).Count() == csvHeaders.Count();
}
How should I modify this piece of code after the upgrade?
I believe your 2.7.1 code did go into the if statement even if there was no header. Your code csvHeaders.Intersect(_ColumnNames).Count() == csvHeaders.Count() is what you used to determine if there was a header. The difference in version 30.0.1 is that you now have to read the header manually.
Version 2.7.1 - Read() reads the line and, if there is no header yet, automatically tries to assign that line to the header:
csvReader.Read()
Version 30.0.1 - reading the line and assigning the header have been split up. Read() reads in the line; ReadHeader() attempts to assign the line just read to the header:
csvReader.Read()
csvReader.ReadHeader();
A possible upgrade of your code:
void Main()
{
    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        BadDataFound = null
    };

    var _ColumnNames = new string[] { "Id", "Name" };
    var hasHeaderRecord = false;

    using (var reader = new StringReader("1,name1\n2,name2"))
    using (var csvReader = new CsvReader(reader, config))
    {
        if (csvReader.Read())
        {
            csvReader.ReadHeader();
            var csvHeaders = csvReader.HeaderRecord;
            hasHeaderRecord = csvHeaders.Intersect(_ColumnNames).Count() == csvHeaders.Count();
        }
    }
}

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Does CsvHelper not support quoted unescaped output or is there something wrong with my writer configuration?
Input:
Jake|"B" Street
CsvHelper reader configuration:
var readerConfig = new CsvConfiguration(CultureInfo.InvariantCulture)
{
    Delimiter = "|",
    TrimOptions = TrimOptions.Trim,
    IgnoreBlankLines = true,
    Mode = CsvMode.NoEscape
};
After the record is read into memory, the value with the double quotes is unescaped as expected: "B" Street
However, when I write the records to a CSV file, if ShouldQuote = args => true is used without CsvMode.NoEscape, the double quotes are escaped; and if I add CsvMode.NoEscape, the output is not quoted at all.
writerConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture)
{
    ShouldQuote = args => true,
    Mode = CsvMode.NoEscape
};
Output with CsvMode.NoEscape: "B" Street // Output not quoted and not escaped.
Output without CsvMode.NoEscape: """B"" Street" // Double quotes escaped.
Any ideas on how to write unescaped quoted values with CsvHelper?
This feels a little hacky, but I was able to do what I think you are trying to do with CsvMode.NoEscape and a custom string converter to add the outer quotes. Another option would be to use Convert in a ClassMap to add the outer quotes on a column-by-column basis (see the sketch after the example below).
void Main()
{
    var records = new List<Foo>
    {
        new Foo { Id = 1, Name = "\"B\" Street" },
    };

    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        Mode = CsvMode.NoEscape
    };

    using (var csv = new CsvWriter(Console.Out, config))
    {
        csv.Context.TypeConverterCache.AddConverter<string>(new OuterQuoteConverter());
        csv.WriteRecords(records);
    }
}

public class OuterQuoteConverter : StringConverter
{
    public override string ConvertToString(object value, IWriterRow row, MemberMapData memberMapData)
    {
        var quoted = "\"" + (string)value + "\"";

        return base.ConvertToString(quoted, row, memberMapData);
    }
}

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}
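For the ClassMap option mentioned above, a rough sketch (my own, not from the original answer) using the Convert overload that takes a ConvertToString delegate, combined with the same CsvMode.NoEscape configuration so the added quotes are not escaped, might look like this:

// Hypothetical map that adds outer quotes to the Name column only.
// Register it before writing: csv.Context.RegisterClassMap<FooMap>();
public sealed class FooMap : ClassMap<Foo>
{
    public FooMap()
    {
        Map(m => m.Id);
        Map(m => m.Name).Convert(args => "\"" + args.Value.Name + "\"");
    }
}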
This is my first time posting, so I apologize for any ignorance or failed use of examples.
I have a console app project to create. I have been given a fair few CSV files, and I need to create some kind of parent/child/grandchild relationship out of them (XML, maybe?) that I can then use to do the uploads and writes to the DMS with minimal calls - I don't want to be querying whether a folder exists over and over.
I am a little out of my depth on this one.
I need to know the best way to do this without third-party library dependencies - pure C#. Using the OLEDB JET provider is most likely required, as it will handle the parsing. There is no order to the CSV files in regard to date; previous years could appear further down the list and vice versa.
Here's an example of the CSV output
"DESCRIPTION1","8, 5/8\" X 6.4MM","STRING","filename001.pdf","2016-09-19","1"
"DESCRIPTION2","12, 3/4\" X 6.4MM","STRING","filename001.pdf","2016-09-19","1"
"DESCRIPTION3","12, 3/4\" X 6.4MM","STRING","filename001.pdf","2016-09-19","1"
"another description 20# gw","1","388015","Scan123.pdf","2015-10-24","1"
"another description 20# gw","3","385902","Scan456.pdf","2015-04-14","1"
"STRINGVAL1","273.10 X 9.27 X 6000","45032-01","KHJDWNEJWKFD9101529.pdf","2012-02-03","1"
"STRINGVAL2","273.10 X 21.44 X 6000","7-09372","DJSWH68767681540.pdf","2017-02-03","1"
The end output will be YEAR/MONTH/FILENAME plus the attributes for each file (these are for eventually updating columns inside a DMS):
The year and month are retrieved from the column with the date.
If the YEAR already exists, it will not be created again.
If the month under that year exists, it will not be created again.
If the filename already exists under that YEAR/MONTH, it will not be created again, BUT the additional ATTRIBUTES for that FileName will be added to its attributes - "line separated?"
Required Output:
I have attempted a LINQ query to begin to output the possible required XML, but it outputs every row and does no grouping; I am not familiar with LINQ at the moment.
I also ran into issues with the basic escaping when using .Split(',') this way (see the original CSV examples above compared to my use of TAB separation in the test file and example below), which is why I want the OLEDB provider to handle it.
string[] source = File.ReadAllLines(@"C:\Processing\In\mockCsv.csv");
XElement item = new XElement("Root",
    from str in source
    let fields = str.Split('\t')
    select new XElement("Year", fields[4].Substring(0, 4),
        new XElement("Month", fields[4].Substring(5, 2),
            new XElement("FileName", fields[3]),
            new XElement("Description", fields[0]),
            new XElement("Length", fields[1]),
            new XElement("Type", fields[2]),
            new XElement("FileName", fields[3]),
            new XElement("Date", fields[4]),
            new XElement("Authorised", fields[5]))
    )
);
I also need to log every step of the process, so I have set up a Logger class:
private class Logger
{
    private static string LogFile = null;

    internal enum MsgType
    {
        Info,
        Debug,
        Error
    }

    static Logger()
    {
        var processingDetails = ConfigurationManager.GetSection(SECTION_PROCESSINGDETAILS) as NameValueCollection;

        LogFile = Path.Combine(processingDetails[KEY_WORKINGFOLDER],
            String.Format("Log_{0}.txt", StartTime.ToString("MMMyyyy")));

        if (File.Exists(LogFile))
            File.Delete(LogFile);
    }

    internal static void Write(string msg, MsgType msgType, bool isNewLine, bool closeLine)
    {
        if (isNewLine)
            msg = String.Format("{0} - {1} : {2}", DateTime.Now.ToString("dd/MM/yyyy HH:mm:ss"), msgType, msg);

        if (closeLine)
            Console.WriteLine(msg);
        else
            Console.Write(msg);

        if (String.IsNullOrEmpty(LogFile))
            return;

        try
        {
            using (StreamWriter sw = new StreamWriter(LogFile, true))
            {
                if (closeLine)
                    sw.WriteLine(msg);
                else
                    sw.Write(msg);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
Used as such
Logger.Write(String.Format("Reading records from csv file ({0})... ",
csvFile), Logger.MsgType.Info, true, false);
Try the following. If you are reading from a file, use a StreamReader instead of the StringReader (see the note after the code):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.Xml.Linq;
using System.IO;
using System.Text.RegularExpressions;

namespace ConsoleApplication74
{
    class Program
    {
        static void Main(string[] args)
        {
            string input =
                "\"DESCRIPTION1\",\"8, 5/8 X 6.4MM\",\"STRING\",\"filename001.pdf\",\"2016-09-19\",\"1\"\n" +
                "\"DESCRIPTION2\",\"12, 3/4 X 6.4MM\",\"STRING\",\"filename001.pdf\",\"2016-09-19\",\"1\"\n" +
                "\"DESCRIPTION3\",\"12, 3/4 X 6.4MM\",\"STRING\",\"filename001.pdf\",\"2016-09-19\",\"1\"\n" +
                "\"another description 20# gw\",\"1\",\"388015\",\"Scan123.pdf\",\"2015-10-24\",\"1\"\n" +
                "\"another description 20# gw\",\"3\",\"385902\",\"Scan456.pdf\",\"2015-04-14\",\"1\"\n" +
                "\"STRINGVAL1\",\"273.10 X 9.27 X 6000\",\"45032-01\",\"KHJDWNEJWKFD9101529.pdf\",\"2012-02-03\",\"1\"\n" +
                "\"STRINGVAL2\",\"273.10 X 21.44 X 6000\",\"7-09372\",\"DJSWH68767681540.pdf\",\"2017-02-03\",\"1\"\n";

            string pattern = "\\\"\\s*,\\s*\\\"";
            string inputline = "";

            StringReader reader = new StringReader(input);
            XElement root = new XElement("Root");

            while ((inputline = reader.ReadLine()) != null)
            {
                string[] splitLine = Regex.Split(inputline, pattern);

                Item newItem = new Item()
                {
                    description = splitLine[0].Replace("\"", ""),
                    length = splitLine[1],
                    type = splitLine[2],
                    filename = splitLine[3],
                    date = DateTime.Parse(splitLine[4]),
                    authorized = splitLine[5].Replace("\"", "") == "1" ? true : false
                };
                Item.items.Add(newItem);
            }

            foreach (var year in Item.items.GroupBy(x => x.date.Year).OrderBy(x => x.Key))
            {
                XElement newYear = new XElement("_" + year.Key.ToString());
                root.Add(newYear);

                foreach (var month in year.GroupBy(x => x.date.Month).OrderBy(x => x.Key))
                {
                    XElement newMonth = new XElement("_" + month.Key.ToString());
                    newYear.Add(newMonth);

                    newMonth.Add(
                        month.OrderBy(x => x.date).Select(x => new XElement(
                            x.filename,
                            string.Join("\r\n", new object[] {
                                x.description,
                                x.length,
                                x.type,
                                x.date.ToString(),
                                x.authorized.ToString()
                            })
                        )).ToList());
                }
            }
        }
    }

    public class Item
    {
        public static List<Item> items = new List<Item>();

        public string description { get; set; }
        public string length { get; set; }
        public string type { get; set; }
        public string filename { get; set; }
        public DateTime date { get; set; }
        public Boolean authorized { get; set; }
    }
}
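If you are reading from a file rather than the in-memory string, the StringReader setup above could be swapped for something like this (using the mockCsv.csv path from the question):

// File-based variant of the reader; parse each line exactly as inputline is parsed above.
using (var fileReader = new StreamReader(@"C:\Processing\In\mockCsv.csv"))
{
    string fileLine;
    while ((fileLine = fileReader.ReadLine()) != null)
    {
        // same Regex.Split / Item creation as above
    }
}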
In Xamarin Google Maps for Android using C#, you can create polygons like so, based on this tutorial:
public void OnMapReady(GoogleMap googleMap)
{
    mMap = googleMap;

    PolylineOptions geometry = new PolylineOptions()
        .Add(new LatLng(37.35, -37.0123))
        .Add(new LatLng(37.35, -37.0123))
        .Add(new LatLng(37.35, -37.0123));

    Polyline polyline = mMap.AddPolyline(geometry);
}
However, I have downloaded a CSV file from my Fusion Table layer in Google Maps, as I think this might be the easiest option for working with polygon/polyline data. The output looks like this:
description,name,label,geometry
,Highland,61,"<Polygon><outerBoundaryIs><LinearRing><coordinates>-5.657018,57.3352 -5.656396,57.334463 -5.655076,57.334556 -5.653439,57.334477 -5.652366,57.334724 -5.650064,57.334477 -5.648096,57.335082 -5.646846,57.335388 -5.644733,57.335539 -5.643309,57.335428 -5.641981,57.335448 -5.640451,57.33578 -5.633217,57.339118 -5.627278,57.338921 -5.617161,57.337649 -5.607948,57.341015 -5.595812,57.343583 -5.586043,57.345373 -5.583581,57.350648 -5.576851,57.353609 -5.570088,57.354017 -5.560732,57.354102 -5.555254,57.354033 -5.549713,57.353146 -5.547766,57.352275 -5.538932,57.352255 -5.525891,57.356217 -5.514888,57.361865 -5.504272,57.366027 -5.494515,57.374515 -5.469829,57.383765 -5.458661,57.389781 -5.453695,57.395033 -5.454057,57.402943 -5.449189,57.40731 -5.440583,57.411447 -5.436133,57.414616 -5.438312,57.415474 -5.438628,57.417955 -5.440956,57.417909 -5.444013,57.414976 -5.450778,57.421362 -5.455035,57.422333 -5.462081,57.420719 -5.468775,57.416975 -5.475205,57.41135 -5.475976,57.409117 -5.47705,57.407092 -5.478101,57.406056 -5.478901,57.40536 -5.479489,57.404534 -5.480051,57.403782 -5.481036,57.403107 -5.484538,57.402102 -5.485647,57.401856 -5.487358,57.401287 -5.488709,57.400962 -5.490175,57.400616 -5.491116,57.400176 -5.493832,57.399318 -5.495279,57.399134 -5.496726,57.39771 -5.498724,57.396836 -5.49974,57.396314 -5.501317,57.39627 -5.502869,57.395426</coordinates></LinearRing></innerBoundaryIs></Polygon>"
,Strathclyde,63,"<Polygon><outerBoundaryIs><LinearRing><coordinates>-5.603129,56.313564 -5.603163,56.312536 -5.603643,56.311794 -5.601467,56.311875 -5.601038,56.312481 -5.600697,56.313489 -5.60071,56.31535 -5.60159,56.316107 -5.600729,56.316598 -5.598625,56.316058 -5.596203,56.317477 -5.597024,56.318119 -5.596095,56.318739 -5.595432,56.320116 -5.589343,56.322469 -5.584888,56.325178 -5.582907,56.327169 -5.581414,56.327472 -5.581435,56.326663 -5.582355,56.325602 -5.581515,56.323891 -5.576993,56.331062 -5.57886,56.331475 -5.57676,56.334449 -5.572748,56.335689 -5.569012,56.338143 -5.564802,56.342113 -5.555237,56.346668 -5.551214,56.347448 -5.547651,56.346391 -5.54444,56.344945 -5.541247,56.345945 -5.539099,56.349674 -5.533874,56.34763 -5.525195,56.342888 -5.523518,56.345066 -5.52345,56.346605 -5.526417,56.354361 -5.535455,56.353681 -5.537463,56.35508 -5.536035,56.356271 -5.538923,56.357205 -5.53891,56.359336 -5.539952,56.361491 -5.538102,56.36372 -5.535934,56.36567 -5.53392,56.367705 -5.531369,56.369729 -5.529853,56.371022 -5.532371,56.371274 -5.534177,56.371708 -5.532846,56.373256 -5.529845,56.37496 -5.527675,56.375327 -5.528531,56.375995 -5.526732,56.376343 -5.525442,56.377809 -5.524739,56.379843 -5.526069,56.380561</coordinates></LinearRing></innerBoundaryIs></Polygon>"
I uploaded a KML file to a Google Maps Fusion Table layer, which then created the map. I then went to File > Download > CSV, and it gave me the above example.
I have added this CSV file to the Assets folder of my Xamarin Android Google Maps app. Since LatLng takes two doubles as its input, my question is: is there a way I could feed the above data from the CSV file into this method, and if so, how?
I am not sure how to read the above CSV, extract the <coordinates>, and then add those coordinates as new LatLng values in the example code above.
Notice, however, that the coordinates are split into lat and lng, and each lat/lng pair is separated from the next by a space: -5.657018,57.3352 -5.656396,57.334463.
Pseudo code (this may or may not require Xamarin or Android experience and may just require C#/LINQ):
Read CSV var sr = new StreamReader(Read csv from Asset folder);
Remove description,name,label,geometry
Foreach line in CSV
Extract Item that contains double quotes
Foreach Item Remove Quotes and <Polygon><outerBoundaryIs><LinearRing><coordinates> from start and end
Foreach item separated by a space Extract coordinates
(This will now leave a long list of 37.35,-37.0123 coordinates for each line)
Place in something like this maybe?:
public class Row
{
    public double Lat { get; set; }
    public double Lng { get; set; }

    public Row(string str)
    {
        string[] separator = { "," };
        var arr = str.Split(separator, StringSplitOptions.None);

        Lat = Convert.ToDouble(arr[0]);
        Lng = Convert.ToDouble(arr[1]);
    }
}
private void OnMapReady()
var rows = new List<Row>();
Foreach name/new line
PolylineOptions geometry = new PolylineOptions()
ForEach (item in rows) //not sure how polyline options will take a foreach
.Add(New LatLng(item.Lat, item.Lng))
Polyline polyline = mMap.AddPolyline(geometry);
As there is no way of using Fusion Table layers in Xamarin Android with Google Maps API v2, this may provide a quicker and easier workaround for those who need to split maps into regions.
If I understand correctly, the question is how to parse the above CSV file.
Each line (except the first one with headers) can be represented with the following class:
class MapEntry
{
    public string Description { get; set; }
    public string Name { get; set; }
    public string Label { get; set; }
    public IEnumerable<LatLng> InnerCoordinates { get; set; }
    public IEnumerable<LatLng> OuterCoordinates { get; set; }
}
Note the Inner and Outer coordinates. They are represented inside the XML by outerBoundaryIs (required) and innerBoundaryIs (optional) elements.
A side note: the Highland line in your post is incorrect - you seem to have trimmed part of the line, leading to incorrect XML (<outerBoundaryIs>...</innerBoundaryIs>).
Here is the code that does the parsing:
static IEnumerable<MapEntry> ParseMap(string csvFile)
{
    return from line in File.ReadLines(csvFile).Skip(1)
           let tokens = line.Split(new[] { ',' }, 4)
           let xmlToken = tokens[3]
           let xmlText = xmlToken.Substring(1, xmlToken.Length - 2)
           let xmlRoot = XElement.Parse(xmlText)
           select new MapEntry
           {
               Description = tokens[0],
               Name = tokens[1],
               Label = tokens[2],
               InnerCoordinates = GetCoordinates(xmlRoot.Element("innerBoundaryIs")),
               OuterCoordinates = GetCoordinates(xmlRoot.Element("outerBoundaryIs")),
           };
}

static IEnumerable<LatLng> GetCoordinates(XElement node)
{
    if (node == null) return Enumerable.Empty<LatLng>();

    var element = node.Element("LinearRing").Element("coordinates");

    return from token in element.Value.Split(' ')
           let values = token.Split(',')
           select new LatLng(XmlConvert.ToDouble(values[0]), XmlConvert.ToDouble(values[1]));
}
I think the code is self-explanatory. The only details worth mentioning are:
let tokens = line.Split(new[] { ',' }, 4)
Here we use the string.Split overload that allows us to specify the maximum number of substrings to return, thus avoiding the trap of processing the commas inside the XML token.
and:
let xmlText = xmlToken.Substring(1, xmlToken.Length - 2)
which strips the quotes from the XML token.
Finally, a sample usage for your case:
foreach (var entry in ParseMap(csv_file_full_path))
{
    PolylineOptions geometry = new PolylineOptions();

    foreach (var item in entry.OuterCoordinates)
        geometry.Add(item);

    Polyline polyline = mMap.AddPolyline(geometry);
}
UPDATE: To make Xamarin happy (as mentioned in the comments), replace the File.ReadLines call with a call to the following helper:
static IEnumerable<string> ReadLines(string path)
{
    var sr = new StreamReader(Assets.Open(path));
    try
    {
        string line;
        while ((line = sr.ReadLine()) != null)
            yield return line;
    }
    finally { sr.Dispose(); }
}
I'm not sure I understand your question, but I think this can help:
You have to read the CSV file line by line and, for each line, extract the geometry field, remove the quotes, and read the result with an XmlReader to extract the element you want (coordinates). Then you split the result on whitespace to get the list of LatLng pairs, split each pair on ",", and you have all the information you need.
System.IO.StreamReader file = new System.IO.StreamReader(@"file.csv");

string line = file.ReadLine(); // skip the header line

while ((line = file.ReadLine()) != null)
{
    // The geometry is the 4th field; limit the split so the commas inside the XML are kept.
    var xmlString = line.Split(new[] { ',' }, 4)[3];

    // Update: strip the surrounding quotes first.
    xmlString = xmlString.Replace("\"", "");

    using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
    {
        reader.ReadToFollowing("coordinates");
        string coordinates = reader.ReadElementContentAsString();

        PolylineOptions geometry = new PolylineOptions();
        var latLngs = coordinates.Split(' ');

        foreach (var latLng in latLngs)
        {
            double lat = 0;
            double lng = 0;

            var arr = latLng.Split(',');
            double.TryParse(arr[0], out lat);
            double.TryParse(arr[1], out lng);

            geometry.Add(new LatLng(lat, lng));
        }

        Polyline polyline = mMap.AddPolyline(geometry);
        // Do something with your polyline
    }
}

file.Close();
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Xml.Linq;

namespace ConsoleApplication35
{
    public class LatLng
    {
        // Stub for the example - the real Xamarin LatLng stores the values.
        public LatLng(double lat, double lng)
        {
        }
    }

    class Program
    {
        private static IEnumerable<LatLng> GetCoordinates(string polygon)
        {
            var xElement = XElement.Parse(polygon);
            //var innerBoundaryCoordinates = xElement.Elements("innerBoundaryIs").FirstOrDefault()?.Value ?? "";
            var outerBoundaryCoordinates = xElement.Elements("outerBoundaryIs").Single()?.Value ?? "";

            return outerBoundaryCoordinates
                .Split(' ')
                .Select(latLng =>
                {
                    var splits = latLng.Split(',');
                    var lat = double.Parse(splits[0], CultureInfo.InvariantCulture);
                    var lng = double.Parse(splits[1], CultureInfo.InvariantCulture);
                    return new LatLng(lat, lng);
                });
        }

        static void Main()
        {
            const string header = "description,name,label,geometry";

            var latLngs = File.ReadLines("file.csv")
                .SelectMany(x => x.Split(new[] { header }, StringSplitOptions.RemoveEmptyEntries)) // drops the header line; all geometry lines end up in one array
                .Select(x => x.Split('"'))                     // the quoted geometry ends up at index 1
                .SelectMany(x => GetCoordinates(x[1]))
                .ToArray();
        }
    }
}
I want to use a string array stored in the web.config so I can easily change its values. It is in the format full_w=670|small_w=100,q=low|tiny_h=30,c=true. Each template is separated by a | (pipe), and each of those sets comprises a name (left of the _) and its corresponding values (right of the _); there can be several values, each separated by a , (comma). I think this possibly qualifies as a 3D array; I just can't seem to find an easy way to read it in a sensible manner. Any ideas or solutions as to the best way to read/manage the data from this string?
Basically, in the end I want to be able to call the template small and read its values, which in this case are width=100 and quality=low.
Here's the function I wrote to parse one of these settings strings:
public static Dictionary<string, Dictionary<string, string>> getSettings(string settingsStr)
{
    return settingsStr.Split('|').ToDictionary(
        template => template.Split('_')[0],
        template => template.Split('_')[1].Split(',').ToDictionary(
            setting => setting.Split('=')[0],
            setting => setting.Split('=')[1]));
}
It just uses a lot of string .Splitting and .ToDictionarying.
Here's the test, showing that it works:
var result = getSettings("full_w=670|small_w=100,q=low|tiny_h=30,c=true");
/*
result = {
[ "full" => [ "w" => "670" ] ]
[ "small" => [ "w" => "100", "q" => "low" ] ]
[ "tiny" => [ "h" => "30", "c" => "true" ] ]
}
*/
To read the values w and q from template small, you can do this:
int width = int.Parse(result["small"]["w"]);
string quality = result["small"]["q"];
Edit: As an added bonus, if you want to convert the Dictionary<string, Dictionary<string, string>> back into a single settings string, you can use this method:
public static string getSettingsStr(Dictionary<string, Dictionary<string, string>> settings)
{
    return string.Join("|",
        settings.Select(kvp =>
            kvp.Key + "_" + string.Join(",",
                kvp.Value.Select(setting =>
                    setting.Key + "=" + setting.Value))));
}
Use:
string settingsStr = getSettingsStr(result);
// settingsStr = "full_w=670|small_w=100,q=low|tiny_h=30,c=true"
If you want to check that a specific template or setting exists, then use the .ContainsKey() method:
// If I have "Dictionary<string, Dictionary<string, string>> settings;"
int width = -1;
string quality = null;

if (settings.ContainsKey("small"))
{
    if (settings["small"].ContainsKey("w"))
        width = int.Parse(settings["small"]["w"]);

    if (settings["small"].ContainsKey("q"))
        quality = settings["small"]["q"];
}
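As a small aside (not part of the original answer), TryGetValue avoids looking each key up twice, if you prefer that style:

// Alternative existence checks using TryGetValue (a single lookup per key).
int width = -1;
string quality = null;

if (settings.TryGetValue("small", out var small))
{
    if (small.TryGetValue("w", out var w))
        width = int.Parse(w);

    if (small.TryGetValue("q", out var q))
        quality = q;
}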
Have you considered using plain old XML serialization with your own plain old C# objects? Here is an example:
public class Program
{
    static void Main(string[] args)
    {
        var data = new MyConfig[2];
        for (int i = 0; i < 2; i++)
        {
            data[i] = new MyConfig { Name = "Name" + i };
            data[i].Properties = new MyConfigAttribute[]
            {
                new MyConfigAttribute { Name = "Property Name " + i, Value = "Property Value " + i },
                new MyConfigAttribute { Name = "2nd Property Name " + i, Value = "2nd Property Value " + i },
            };
        }

        var serializer = new XmlSerializer(typeof(MyConfig[]));

        using (StreamWriter tw = File.CreateText(@"c:\temp\myconfig.xml"))
        {
            serializer.Serialize(tw, data);
        }

        using (StreamReader tw = File.OpenText(@"c:\temp\myconfig.xml"))
        {
            var readBack = serializer.Deserialize(tw);
        }

        Console.ReadLine();
    }

    [XmlRoot("MY_CONFIG")]
    public class MyConfig
    {
        [XmlElement("NAME")]
        public string Name { get; set; }

        [XmlArray]
        [XmlArrayItem(typeof(MyConfigAttribute))]
        public MyConfigAttribute[] Properties { get; set; }
    }

    [XmlRoot("MY_CONFIG_ATTRIBUTE")]
    public class MyConfigAttribute
    {
        [XmlElement("ATTRIBUTE_NAME")]
        public string Name { get; set; }

        [XmlElement("ATTRIBUTE_VALUE")]
        public string Value { get; set; }
    }
}
Basically, you create a class to store your individual attributes (MyConfigAttribute in this case), wrap it in another class that provides your name for a group of related attributes (MyConfig in this case), and then use normal XML serialization to write the settings out to a separate XML file, as in this section of the code:
var serializer = new XmlSerializer(typeof(MyConfig[]));

using (StreamWriter tw = File.CreateText(@"c:\temp\myconfig.xml"))
{
    serializer.Serialize(tw, data);
}
You can read it back to objects again using this section of the code:
using (StreamReader tw = File.OpenText(@"c:\temp\myconfig.xml"))
{
    var readBack = serializer.Deserialize(tw);
}
The advantages of this are:
It is simple to understand and use.
You can add features to your custom classes, e.g. to add values to the array of properties, which lends itself to wrapping a custom screen around it.
Look up C# XML Serialization on Google!
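For reference, the file written by the example above should look roughly like the following; the root and wrapper element names come from XmlSerializer's defaults for a MyConfig[] array, so treat this as an approximation rather than exact output:

<?xml version="1.0" encoding="utf-8"?>
<ArrayOfMyConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <MyConfig>
    <NAME>Name0</NAME>
    <Properties>
      <MyConfigAttribute>
        <ATTRIBUTE_NAME>Property Name 0</ATTRIBUTE_NAME>
        <ATTRIBUTE_VALUE>Property Value 0</ATTRIBUTE_VALUE>
      </MyConfigAttribute>
      <MyConfigAttribute>
        <ATTRIBUTE_NAME>2nd Property Name 0</ATTRIBUTE_NAME>
        <ATTRIBUTE_VALUE>2nd Property Value 0</ATTRIBUTE_VALUE>
      </MyConfigAttribute>
    </Properties>
  </MyConfig>
  <MyConfig>
    <NAME>Name1</NAME>
    ...
  </MyConfig>
</ArrayOfMyConfig>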