I'm creating a Word document to use with Aspose for document generation.
I need to add a conditional tag with multiple conditions.
I tried:
<<if [condition 1] OR [Condition 2]>>
Write something
<</if>>
<<if [condition 1] AND [Condition 2]>>
Write something
<</if>>
but all the syntaxes I tried failed.
Can anyone help with this?
You can use regular C# or Java syntax for multiple conditions:
<<if [Condition1 || Condition2]>>
Write something
<</if>>
<<if [Condition1 && Condition2]>>
Write something
<</if>>
For example, see the following simple code:
Document doc = new Document();
DocumentBuilder builder = new DocumentBuilder(doc);
builder.Writeln("<<if [1!=1 || 2==2]>>Condition is true<<else>>Condition is false<</if>>");
builder.Writeln("<<if [1!=1 && 2==2]>>Condition is true<<else>>Condition is false<</if>>");
ReportingEngine engine = new ReportingEngine();
engine.BuildReport(doc, new object());
doc.Save(@"C:\Temp\out.docx");
I have a CSV file
Date,Open,High,Low,Close,Volume,Adj Close
2011-09-23,24.90,25.15,24.69,25.06,64768100,25.06
2011-09-22,25.30,25.65,24.60,25.06,96278300,25.06
...
and I have a class StockQuote with fields
Date, Open, High, ...
How can I make a list of StockQuote objects from the CSV file using LINQ?
I'm trying something like this:
string[] Data = parser.ReadFields();
var query = from d in Data
            where !String.IsNullOrWhiteSpace(d)
            let data = d.Split(',')
            select new StockQuote()
            {
                Date = data[0], Open = double.Parse(data[1]),
                ...
You can do something like this:
var yourData = File.ReadAllLines("yourFile.csv")
.Skip(1)
.Select(x => x.Split(','))
.Select(x => new
{
Date= x[0],
Open = double.Parse(x[1]),
High = double.Parse(x[2]),
Low = double.Parse(x[3]),
Close = double.Parse(x[4]),
Volume = double.Parse(x[5]),
AdjClose = double.Parse(x[6])
});
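If you want actual StockQuote objects rather than an anonymous type, the same pipeline works with an object initializer (a sketch, assuming StockQuote has matching public properties like those in the question):

```csharp
List<StockQuote> quotes = File.ReadAllLines("yourFile.csv")
    .Skip(1)                                  // skip the header row
    .Where(line => !String.IsNullOrWhiteSpace(line))
    .Select(line => line.Split(','))
    .Select(x => new StockQuote
    {
        Date = x[0],
        Open = double.Parse(x[1]),
        High = double.Parse(x[2]),
        Low = double.Parse(x[3]),
        Close = double.Parse(x[4]),
        Volume = double.Parse(x[5]),
        AdjClose = double.Parse(x[6])
    })
    .ToList();
```

The `.ToList()` at the end materializes the query into the `List<StockQuote>` the question asks for.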
You should not be using LINQ, Regex or the like for CSV parsing. For CSV parsing, use a CSV parser.
LINQ and Regex will work exactly until you run into an escaped control character, multiline fields or something of the sort. Then they will simply break, and probably be unfixable.
Take a look at this question:
Parsing CSV files in C#, with header
The answer mentioning the .NET integrated CSV parser seems fine.
And no, you don't need LINQ for this.
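As a sketch of that built-in parser (TextFieldParser from the Microsoft.VisualBasic assembly; it handles quoting and escaping that a plain Split(',') breaks on):

```csharp
using Microsoft.VisualBasic.FileIO; // add a reference to Microsoft.VisualBasic

var quotes = new List<StockQuote>();
using (var parser = new TextFieldParser("yourFile.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true; // quoted fields with embedded commas stay one field

    parser.ReadLine(); // skip the header row
    while (!parser.EndOfData)
    {
        string[] x = parser.ReadFields(); // correctly unescapes quoted fields
        quotes.Add(new StockQuote
        {
            Date = x[0],
            Open = double.Parse(x[1])
            // ... remaining fields as in the question
        });
    }
}
```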
I'm pretty new to programming in C# and I'm having trouble processing a lot of data from several CSV files into one XML file.
The CSV files I have look like the following:
"ID","NODE","PROCESS_STATE","TIME_STAMP","PREV_TIME_STAMP","CALCULATED"
206609474,2175,47,31.03.2015 00:01:25,31.03.2015 00:01:24,1
206609475,2175,47,31.03.2015 00:02:25,31.03.2015 00:01:25,1
206609476,2175,47,31.03.2015 00:03:25,31.03.2015 00:02:25,1
In a first step I remove all entries that aren't important for my calculations (e.g. all entries that don't contain specific dates) and then save each file again.
The second step is to merge all those prepared files (~100) into one big CSV file.
Up to here everything works well and fast.
The last step is to convert the csv-file into an xml-file of the following format:
<data-set>
<PDA_DATA>
<ID>484261933</ID>
<NODE>2190</NODE>
<PROCESS_STATE>18</PROCESS_STATE>
<PREV_TIME_STAMP>05.05.2016 22:53:41</PREV_TIME_STAMP>
</PDA_DATA>
<PDA_DATA>
<ID>484261935</ID>
<NODE>2190</NODE>
<PROCESS_STATE>47</PROCESS_STATE>
<PREV_TIME_STAMP>06.05.2016 00:44:17</PREV_TIME_STAMP>
</PDA_DATA>
</data-set>
As you can see, I remove the elements "TIME_STAMP" and "CALCULATED", and furthermore I also remove all entries where "TIME_STAMP" is equal to "PREV_TIME_STAMP". I'm doing this with the following code:
string[] csvlines = File.ReadAllLines("All_Machines.csv");
XElement xml = new XElement("data-set",
    from str in csvlines.Skip(1) // skip the header row
    let columns = str.Split(',')
    select new XElement("PDA_DATA",
        new XElement("ID", columns[0]),
        new XElement("NODE", columns[1]),
        new XElement("PROCESS_STATE", columns[2]),
        new XElement("TIME_STAMP", columns[3]),
        new XElement("PREV_TIME_STAMP", columns[4]),
        new XElement("CALCULATED", columns[5])));
// Remove unnecessary elements
xml.Elements("PDA_DATA")
    .Where(e =>
        e.Element("TIME_STAMP").Value.Equals(e.Element("PREV_TIME_STAMP").Value))
    .Remove(); // Remove entries with duration = 0
xml.Elements("PDA_DATA").Elements("TIME_STAMP").Remove();
xml.Elements("PDA_DATA").Elements("CALCULATED").Remove();
xml.Save("All_Machines.xml");
And here is my problem: if I exclude the line where I remove elements whose TIME_STAMP equals PREV_TIME_STAMP, everything works well and fast.
But with that command it takes a lot of time and only works with small CSV files.
I have no knowledge of resource-efficient programming, so I'd be really glad if someone could tell me where the problem is or how to do it better.
This works out much faster:
string[] csvlines = File.ReadAllLines("All_Machines.csv");
XElement xml = new XElement("data-set",
    from str in csvlines.Skip(1) // skip the header row
    let columns = str.Split(',')
    select new XElement("PDA_DATA",
        new XElement("ID", columns[0]),
        new XElement("NODE", columns[1]),
        new XElement("PROCESS_STATE", columns[2]),
        new XElement("TIME_STAMP", columns[3]),
        new XElement("PREV_TIME_STAMP", columns[4]),
        new XElement("CALCULATED", columns[5])));
// Remove unnecessary elements
XElement xml2 = new XElement("data-set",
    from el in xml.Elements()
    where el.Element("TIME_STAMP").Value != el.Element("PREV_TIME_STAMP").Value
    select el
);
xml2.Elements("PDA_DATA").Elements("TIME_STAMP").Remove();
xml2.Elements("PDA_DATA").Elements("CALCULATED").Remove();
xml2.Save("All_Machines.xml");
Still not perfect for CSV file sizes over 150 MB. Any better suggestions?
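A further improvement (a sketch, not benchmarked against the original data): filter rows and drop the unwanted columns during the initial query, so each row is touched exactly once and no second pass over the XML tree is needed at all:

```csharp
string[] csvlines = File.ReadAllLines("All_Machines.csv");

// Build only the elements we want to keep, and filter rows up front,
// instead of constructing elements and removing them afterwards.
XElement xml = new XElement("data-set",
    from str in csvlines.Skip(1)           // skip the header row
    let columns = str.Split(',')
    where columns[3] != columns[4]         // TIME_STAMP != PREV_TIME_STAMP
    select new XElement("PDA_DATA",
        new XElement("ID", columns[0]),
        new XElement("NODE", columns[1]),
        new XElement("PROCESS_STATE", columns[2]),
        new XElement("PREV_TIME_STAMP", columns[4])));

xml.Save("All_Machines.xml");
```

This avoids creating TIME_STAMP and CALCULATED elements only to delete them, and avoids the expensive remove-after-build step entirely.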
With Cinchoo ETL, an open source framework, you can read and write large CSV/XML files quickly with a few lines of code, as below:
using (var csv = new ChoCSVReader("NodeData.csv").WithFirstLineHeader(true)
.WithFields("ID", "NODE", "PROCESS_STATE", "PREV_TIME_STAMP"))
{
using (var xml = new ChoXmlWriter("NodeData.xml").WithXPath("data-set/PDA_DATA"))
xml.Write(csv);
}
The output XML looks like:
<data-set>
<PDA_DATA>
<ID>206609474</ID>
<NODE>2175</NODE>
<PROCESS_STATE>47</PROCESS_STATE>
<PREV_TIME_STAMP>31.03.2015 00:01:25</PREV_TIME_STAMP>
</PDA_DATA>
<PDA_DATA>
<ID>206609475</ID>
<NODE>2175</NODE>
<PROCESS_STATE>47</PROCESS_STATE>
<PREV_TIME_STAMP>31.03.2015 00:02:25</PREV_TIME_STAMP>
</PDA_DATA>
<PDA_DATA>
<ID>206609476</ID>
<NODE>2175</NODE>
<PROCESS_STATE>47</PROCESS_STATE>
<PREV_TIME_STAMP>31.03.2015 00:03:25</PREV_TIME_STAMP>
</PDA_DATA>
</data-set>
Disclosure: I'm the author of this library.
I'm trying to find a solution to a problem using Microsoft Solver Foundation in C# and I'm having trouble setting up all the constraints I need. My basic model is: I have a list of bays, and I need to load each bay so that the total across all the bays is maximised. I'm currently doing it like this:
var solver = SolverContext.GetContext();
var model = solver.CreateModel();
var decisions =
bays.Select(b => new Decision(Domain.IntegerNonnegative, "B"+b.bay.getShortName()));
model.AddDecisions(decisions.ToArray());
foreach (BayPositionLoading bay in bays)
{
model.AddConstraint(
"B" + bay.bay.getShortName() + "Cons",
model.Decisions
.First(d => d.Name == "B" + bay.bay.getShortName()) <= bay.bay.maxLoad);
}
What I'd really like to be able to do is add a constraint that a certain function returns true. The function would be something like this:
public bool isValid (List<Bay> bays)
{
return blah;
}
But I can't figure out how to create the list of bays to pass to this function. I would like to do something like the following, but it keeps throwing an exception when I call ToDouble or GetDouble:
foreach(Bay b in bays)
{
var dec = model.Decisions.First(it => it.Name == "B" + b.getShortName());
b.actualLoad = dec.ToDouble(); // Or GetDouble
}
model.AddConstraint("func", isValid(bays) == true);
Can anyone suggest how this can be done?
Thanks!
You need to use the math operations provided by the OML language only; I don't think custom functions like the one you're trying to use are supported in MSF.
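If the check inside isValid can be expressed algebraically, you can state it directly as a constraint over the decisions instead. A sketch (the totalCapacity limit is hypothetical, standing in for whatever rule isValid encodes):

```csharp
// Hypothetical: suppose isValid really checks that the combined load
// stays under some capacity. Express that as algebra over the decisions.
double totalCapacity = 100; // hypothetical limit
Term[] loads = model.Decisions.ToArray();
model.AddConstraint("TotalLoadCons", Model.Sum(loads) <= totalCapacity);
```

The solver can only reason about terms it can see symbolically, which is why an opaque bool-returning C# function can't participate in the model.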
This question may have been asked or answered before, but I feel that none of the hits really apply.
I would like to create a little class whose attributes correspond to the names and attributes in an XML-like output stream. The class should help the program create an XML-like string.
string test = "<graph caption='SomeHeader' attribute9='#someotherinfo'>" +
"<set name='2004' value='37800' color='AFD8F8' />" +
"<set name='2005' value='21900' color='F6BD0F' />" +
"<set name='2006' value='32900' color='8BBA00' />" +
"<set name='2007' value='39800' color='FF8E46' />" +
"</graph>";
I think you get the idea. I have a static set of known attributes which will be used in the tags. The only tags here are set and graph.
I would like to do something like this:
Helper o = new Helper();
List<Tag> tag = new List<Tag>();
foreach (var someitem in somedatabaseresult)
{
tag.Add(new Graph() { Caption = someitem.field, attribute9 = someitem.otherField });
foreach (var detail in someitem)
{
tag.Add(new Set() { name = detail.Year, value = detail.Value, color = detail.Color });
}
}
o.Generate(); // Which will create the structure of result sample above
// and for future extension..
// o.GenerateXml();
// o.GenerateJson();
Please remember that this code is pseudocode, straight from my head. I have some ideas, but it would take a day to code and test which is best (or worst).
What would be best practice to solve this task?
[EDIT]
This mysterious "Helper" is the (unluckily named) class which contains a list of Graph, a list of Set, and also (as I envision it) all available attributes per Graph/Set object. The foreach loops above are meant to fill the Helper class with the data.
[EDIT2]
Result here:
https://gist.github.com/1233331
Why not just create a couple of classes, Graph and Set? Graph would have a property of type List<Set>.
In your foreach you can then create an instance of Graph and add instances of Set to its list.
When you're done, use the XmlSerializer to serialize the Graph object out to XML. It's then nice and easy to output to another format as well if your needs change later, e.g. serializing to JSON.
Edit following comment:
From the top of my head, so this may not be 100% correct...
var myGraph = BuildMeAGraph();
var serializer = new XmlSerializer(typeof(Graph));
var writer = XmlWriter.Create("myfile.xml");
serializer.Serialize(writer, myGraph);
But something like that should write it out to a file. If you want the XML in memory, then write it out to an XmlTextWriter based on a memory stream instead, and then you can write the contents to a string variable or do whatever you need with it.
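A minimal sketch of the in-memory variant (assuming Graph is a plain class with public properties, as above; a StringWriter is simpler than a raw memory stream here):

```csharp
using System.IO;
using System.Xml.Serialization;

// Serialize the graph to a string instead of a file.
var serializer = new XmlSerializer(typeof(Graph));
string xml;
using (var stringWriter = new StringWriter())
{
    serializer.Serialize(stringWriter, myGraph);
    xml = stringWriter.ToString(); // the XML document as a string
}
```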
If you want to create XML out of the object tree, then I think you could try this:
XDocument doc = new XDocument
(
    somedatabaseresult.Select
    (someitem =>
        new XElement("graph",
            new XAttribute("Caption", ""),
            new XAttribute("attribute9", "#something"),
            someitem.Select
            (detail =>
                new XElement("Set",
                    new XAttribute("name", "2003"),
                    new XAttribute("value", "34784"),
                    new XAttribute("color", "#003300")
                )
            )
        )
    )
);
//save to file as XML
doc.Save("output.xml");
//save to local variable as XML string
string test = doc.ToString();
I wrote the same values for the tags as you've used in your code. However, I think you would like this:
new XAttribute("name", detail.name),
new XAttribute("value", detail.value),
new XAttribute("color", detail.color)
Or whatever value you want to give to each attribute from the object detail.
Use the XmlSerializer.
I'd use the ExpandoObject, but I don't see the reason for what you are doing.
I am using the Lucene.NET API directly in my ASP.NET/C# web application. When I search using a wildcard, like "fuc*", the highlighter doesn't highlight anything, but when I search for the whole word, like "fuchsia", it highlights fine. Does Lucene have the ability to highlight using the same logic it used to match with?
Various maybe-relevant code-snippets below:
var formatter = new Lucene.Net.Highlight.SimpleHTMLFormatter(
"<span class='srhilite'>",
"</span>");
var fragmenter = new Lucene.Net.Highlight.SimpleFragmenter(100);
var scorer = new Lucene.Net.Highlight.QueryScorer(query);
var highlighter = new Lucene.Net.Highlight.Highlighter(formatter, scorer);
highlighter.SetTextFragmenter(fragmenter);
and then on each hit...
string description = Server.HtmlEncode(doc.Get("Description"));
var stream = analyzer.TokenStream("Description",
new System.IO.StringReader(description));
string highlighted_text = highlighter.GetBestFragments(
stream, description, 1, "...");
And I'm using the QueryParser and the StandardAnalyzer.
You'll need to ensure you set the parser rewrite method to SCORING_BOOLEAN_QUERY_REWRITE.
This change seems to have become necessary since Lucene v2.9 came along.
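A sketch of the change on the QueryParser you already have (verify the exact member names against your Lucene.NET version):

```csharp
// By default, wildcard queries rewrite to a constant-score form, which the
// highlighter cannot expand back into individual terms. The scoring boolean
// rewrite keeps the matched terms visible to the highlighter.
parser.SetMultiTermRewriteMethod(
    Lucene.Net.Search.MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);

var query = parser.Parse("fuc*"); // wildcard matches now highlight like whole words
```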
Hope this helps,