I have a model which collects data in my ASP.NET MVC app:
namespace myapp.Models
{
[Table("mytable")]
public partial class mytbl
{
// column specifications
}
}
but this model takes all rows of the selected table. I want to add some filter rules. In an SQL query it would look like this:
SELECT *
FROM mytable
WHERE mycol = 2 OR mycol = 3
In the example above I typed 2 and 3 by hand, but I have a text file toshow.txt (located in wwwroot). The contents of toshow.txt are:
2
3
How can I subset the data according to the text file?
I'm not super familiar with ASP.NET MVC app development. Sorry if my question doesn't make sense.
Thanks in advance.
You'll need to
read the file
convert strings read from the file to int values
filter the data in the database
The below should at least get you started:
// get filename and read the file
string fileName = Path.Combine(_env.WebRootPath, "toshow.txt");
string[] lines = File.ReadAllLines(fileName);
// parse values in file to List<int>
List<int> filter = lines.Select(line => int.Parse(line)).ToList();
// could also be written lines.Select(int.Parse)
// filter your DB
var filteredResult = _context.MyTable.Where(x => filter.Contains(x.MyCol));
Where _env is an IWebHostEnvironment and WebRootPath maps to the wwwroot directory. You may need to use ContentRootPath instead if you're using IHostingEnvironment. More details can be found in Static files in ASP.NET Core.
The above will filter MyCol on the values in the file.
You'll also need to perform your own error checking and validation if, for example, the file doesn't contain values which are convertible to ints.
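As a hedged sketch of that error checking, int.TryParse can be used to skip lines that aren't valid integers (the file name and the injected _env/_context are the same as in the answer above):

```csharp
// Sketch: tolerant parsing of toshow.txt, skipping non-numeric lines.
string fileName = Path.Combine(_env.WebRootPath, "toshow.txt");

List<int> filter = new List<int>();
foreach (string line in File.ReadAllLines(fileName))
{
    if (int.TryParse(line.Trim(), out int value))
        filter.Add(value);
    // else: the line is not a valid int; log or ignore as appropriate
}

var filteredResult = _context.MyTable.Where(x => filter.Contains(x.MyCol));
```

This way a blank or malformed line simply drops out of the filter instead of throwing a FormatException.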
PROBLEM
I have recently started learning more about CsvHelper and I need some advice on how to achieve my goal.
I have a CSV file containing some user records (thousands to hundreds of thousands records) and I need to parse the file and validate/process the data. What I need to do is two things:
I need a way to validate whole row while it is being read
the record contains date range and I need to verify it is a valid range
If it's not I need to write the offending line to the error file
one record can also be present multiple times with different date ranges and I need to validate that the ranges don't overlap and if they do, write the WHOLE ORIGINAL LINE to an error file
What I can basically get by with is a way to preserve the whole original row alongside the parsed data, but a way to verify the whole row while the raw data is still available would be better.
QUESTIONS
Are there some events/actions hidden somewhere I can use to validate row of data after it was created but before it was added to the collection?
If not is there a way to save the whole RAW row into the record so I can verify the row after parsing it AND if it is not valid do what I need with them?
CODE I HAVE
What I've created is the record class like this:
class Record
{
    // simplified; fluff omitted for brevity
    public string Login { get; set; }
    public string Domain { get; set; }
    public DateTime? Created { get; set; }
    public DateTime? Ended { get; set; }
}
and a class map:
class RecordMapping : ClassMap<Record>
{
    // simplified; fluff omitted for brevity
    public RecordMapping(ConfigurationElement config)
    {
        //..the set up of the mapping...
    }
}
and then use them like this:
public void ProcessFile(...)
{
...
using(var reader = new StreamReader(...))
using(var csvReader = new CsvReader(reader))
using(var errorWriter = new StreamWriter(...))
{
csvReader.Configuration.RegisterClassMap(new RecordMapping(config));
//...set up of csvReader configuration...
try
{
var records = csvReader.GetRecords<Record>();
}
catch (Exception ex)
{
//..in case of problems...
}
....
}
....
}
In this scenario the data might be "valid" from CsvHelper's viewpoint, because it can read the data, but invalid for more complex reasons (like an invalid date range.)
In that case, this might be a simple approach:
public IEnumerable<Thing> ReadThings(TextReader textReader)
{
var result = new List<Thing>();
using (var csvReader = new CsvReader(textReader))
{
while (csvReader.Read())
{
var thing = csvReader.GetRecord<Thing>();
if (IsThingValid(thing))
result.Add(thing);
else
LogInvalidThing(thing);
}
}
return result;
}
If what you need to log is the raw text, that would be:
LogInvalidRow(csvReader.Context.RawRecord);
Another option - perhaps a better one - might be to completely separate the validation from the reading. In other words, just read the records with no validation.
var records = csvReader.GetRecords<Record>();
Your reader class returns them without being responsible for determining which are valid or what to do with them.
Then another class can validate an IEnumerable<Record>, returning the valid rows and logging the invalid rows.
That way the logic for validation and logging isn't tied up with the code for reading. It will be easier to test and easier to re-use if you get a collection of Record from something other than a CSV file.
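A minimal sketch of that separation might look like this (the class and the validity rule are illustrative assumptions, not part of CsvHelper):

```csharp
// Hypothetical validator: takes the records as read, yields the valid ones,
// and hands invalid ones to a callback (e.g. a logger or error-file writer).
public class RecordValidator
{
    public IEnumerable<Record> Validate(IEnumerable<Record> records,
                                        Action<Record> onInvalid)
    {
        foreach (var record in records)
        {
            // valid when both dates are present and form an ordered range
            bool valid = record.Created.HasValue
                      && record.Ended.HasValue
                      && record.Created <= record.Ended;

            if (valid)
                yield return record;
            else
                onInvalid(record);
        }
    }
}
```

Overlap checking across rows with the same Login would need a second pass (e.g. grouping the materialized list by Login), since it can't be decided one record at a time.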
Is it possible to generate permanent pages with ASP.NET from data stored in a database?
For example, given a few locations/types, I want to do something like this single example:
Location: New York
Type: Car
Generated Permanent Page With information:
Cars in New York
Generated link:
mywebsite.com/cars/newyork
At the moment I've made a search filter that displays the results based on the selected Location and selected Type.
Hope you can give me some tips about this.
One approach is to save an HTML template in the db and then use string.Format to insert values, e.g. like this:
string name = "test";
int age = 167;
string test = "{0} format {1}";
string[] args = new string[] { name, age.ToString() };
Console.WriteLine(string.Format(test, args));
prints: test format 167
Then you return a View whose model is the string of HTML with the inserted values, and you "display" it (allow the browser to render it) using:
@model string
@Html.Raw(Model)
https://learn.microsoft.com/en-us/dotnet/api/system.web.mvc.htmlhelper.raw?view=aspnet-mvc-5.2
But be careful: this may be very dangerous if the user is able to provide or manipulate those values. Read about XSS attacks.
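For illustration, a controller action along those lines might look like this (the Templates table, its properties, and _db are assumptions, not from the question):

```csharp
public IActionResult Generated(string type, string location)
{
    // Hypothetical lookup: an HTML template stored in the database,
    // e.g. "<h1>{0} in {1}</h1>..."
    string template = _db.Templates
        .Single(t => t.Type == type)
        .Html;

    string html = string.Format(template, type, location);

    // Rendered by a view declaring @model string and calling @Html.Raw(Model)
    return View(model: html);
}
```

A route such as `cars/newyork` would then map `type` and `location` to this action, giving the "permanent" URL without storing a physical page.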
I have an ASP.NET MVC web application.
The SQL table has one column ProdNum and it contains data such as 4892-34-456-2311.
The user needs a form to search the database that includes this field.
The problem is that the user wants 4 separate fields in the UI razor view, where each field should match one of the 4 parts of the data above (split on -).
For example ProdNum1, ProdNum2, ProdNum3 and ProdNum4 field should match with 4892, 34, 456, 2311.
Since the entire search form contains many fields including these 4 fields, the search logic is based on a predicate which is inherited from the PredicateBuilder class.
Something like this:
...other field to be filtered
if (!string.IsNullOrEmpty(ProdNum1))
{
    predicate = predicate.And(
        t => t.ProdNum.ToString().Split('-')[0].Contains(ProdNum1));
}
...other fields to be filtered
But the above code has run-time error:
The LINQ expression node type 'ArrayIndex' is not supported in LINQ to Entities
Does anybody know how to resolve this issue?
Thanks a lot for all the responses; in the end I found an easy way to resolve it.
Instead of rebuilding the models and changing the database tables, I just add extra separators to the search strings to match the search criteria. Since the data format is always 4892-34-456-2311, I use StartsWith(ProdNum1) to search the first field, Contains("-" + ProdNum2 + "-") to search the second and third strings (replacing ProdNum2 with ProdNum3), and EndsWith("-" + ProdNum4) to search the 4th string. This way I don't need to change anything else; it is simple.
Again, thanks a lot for all responses, much appreciated.
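A minimal sketch of that workaround, assuming the PredicateBuilder and the four search strings from the question (names are illustrative):

```csharp
// StartsWith / Contains / EndsWith all translate to SQL LIKE,
// so they avoid the unsupported Split(...)[index] expression.
if (!string.IsNullOrEmpty(ProdNum1))
    predicate = predicate.And(t => t.ProdNum.StartsWith(ProdNum1));           // 4892-...
if (!string.IsNullOrEmpty(ProdNum2))
    predicate = predicate.And(t => t.ProdNum.Contains("-" + ProdNum2 + "-")); // ...-34-...
if (!string.IsNullOrEmpty(ProdNum3))
    predicate = predicate.And(t => t.ProdNum.Contains("-" + ProdNum3 + "-")); // ...-456-...
if (!string.IsNullOrEmpty(ProdNum4))
    predicate = predicate.And(t => t.ProdNum.EndsWith("-" + ProdNum4));       // ...-2311
```

One caveat: a Contains match for the second segment can also hit the third segment (and vice versa), so this is only exact when that ambiguity is acceptable.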
If I understand this correctly, you have one column which you want to act like 4 different columns? This isn't worth it. For that you need to split each row's column data, create a class to handle the split data, and finally use a List. That's a needless workaround; I'd rather suggest you use 4 columns instead.
But if you still want to go with your existing method, you first need to split as mentioned earlier. Here's an example:
public void Test()
{
    // assumes an open connection and a SqlCommand selecting the rows
    using (SqlDataReader dataReader = command.ExecuteReader())
    {
        while (dataReader.Read())
        {
            string[] parts = dataReader.GetString(1).Split('-');
            string part1 = parts[0]; // the 1st part of your column data
            string part2 = parts[1]; // the 2nd part of your column data
        }
    }
}
Now, as mentioned in the comments, you can instead use a class to handle all the data. For example, let's call it MyData:
public class MyData
{
    public string Part1 { get; set; }
    public string Part2 { get; set; }
    public string Part3 { get; set; }
    public string Part4 { get; set; }
}
Now, within the while loop of the SqlDataReader, declare a new instance of this class and pass the values to it. An example:
public void Test()
{
    using (SqlDataReader dataReader = command.ExecuteReader())
    {
        while (dataReader.Read())
        {
            string[] parts = dataReader.GetString(1).Split('-');
            MyData allData = new MyData();
            allData.Part1 = parts[0];
            allData.Part2 = parts[1];
        }
    }
}
Create a list of the class at class level:
public class MyForm
{
    List<MyData> storedData = new List<MyData>();
}
Within the while loop of the SqlDataReader, add this at the end:
storedData.Add(allData);
So finally, you have a list of all the split data, so you can write your filtering logic easily :)
As already mentioned in a comment, the error means that accessing data via index (see [0]) is not supported when translating your expression to SQL. Split('-') is also not supported hence you have to resort to the supported functions Substring() and IndexOf(startIndex).
You could do something like the following to first transform the string into 4 number strings ...
.Select(t => new {
t.ProdNum,
FirstNumber = t.ProdNum.Substring(0, t.ProdNum.IndexOf("-")),
Remainder = t.ProdNum.Substring(t.ProdNum.IndexOf("-") + 1)
})
.Select(t => new {
t.ProdNum,
t.FirstNumber,
SecondNumber = t.Remainder.Substring(0, t.Remainder.IndexOf("-")),
Remainder = t.Remainder.Substring(t.Remainder.IndexOf("-") + 1)
})
.Select(t => new {
t.ProdNum,
t.FirstNumber,
t.SecondNumber,
ThirdNumber = t.Remainder.Substring(0, t.Remainder.IndexOf("-")),
FourthNumber = t.Remainder.Substring(t.Remainder.IndexOf("-") + 1)
})
... and then you could simply write something like
if (!string.IsNullOrEmpty(ProdNum3))
{
    predicate = predicate.And(
        t => t.ThirdNumber.Contains(ProdNum3));
}
I have a csv file I am going to read from disk. I do not know up front how many columns or the names of the columns.
Any thoughts on how I should represent the fields? Ideally I want to say something like:
string Val = DataStructure.GetValue(i,ColumnName).
where i is the ith Row.
Oh, just as an aside, I will be parsing using the TextFieldParser class:
http://msdn.microsoft.com/en-us/library/cakac7e6(v=vs.90).aspx
That sounds as if you would need a DataTable which has a Rows and Columns property.
So you can say:
string Val = table.Rows[i].Field<string>(ColumnName);
A DataTable is a table of in-memory data. It can be used strongly typed (as suggested with the Field method), but internally it stores its data as objects.
You could use this parser to convert the csv to a DataTable.
Edit: I've only just seen that you want to use the TextFieldParser. Here's a possible simple approach to convert a csv to a DataTable:
var table = new DataTable();
using (var parser = new TextFieldParser(File.OpenRead(path)))
{
parser.Delimiters = new[]{","};
parser.HasFieldsEnclosedInQuotes = true;
// load DataColumns from first line
String[] headers = parser.ReadFields();
foreach(var h in headers)
table.Columns.Add(h);
// load all other lines as data
String[] fields;
while ((fields = parser.ReadFields()) != null)
{
table.Rows.Add().ItemArray = fields;
}
}
If the column names are in the first row read that and store in a Dictionary<string, int> that maps the column name to the column index.
You could then store the remaining rows in a simple structure like List<string[]>.
To get a column for a row you'd do csv[rowIndex][nameToIndex[ColumnName]];
nameToIndex[ColumnName] gets the column index from the name, csv[rowIndex] gets the row (string array) we want.
This could of course be wrapped in a class.
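A sketch of such a wrapper, under the assumptions above (first row holds the headers, remaining rows stored as string arrays; the class name is illustrative):

```csharp
// Minimal CSV holder: maps header name -> column index, keeps rows as string[].
public class CsvData
{
    private readonly Dictionary<string, int> nameToIndex;
    private readonly List<string[]> rows;

    public CsvData(string[] headers, List<string[]> rows)
    {
        this.rows = rows;
        nameToIndex = new Dictionary<string, int>();
        for (int i = 0; i < headers.Length; i++)
            nameToIndex[headers[i]] = i;
    }

    // GetValue(i, "ColumnName") as described in the question
    public string GetValue(int rowIndex, string columnName)
        => rows[rowIndex][nameToIndex[columnName]];
}
```

Lookups are O(1) per cell, and ragged rows surface as IndexOutOfRangeException, which you may want to catch or guard against.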
Use the CSV parser if you want, but a text parser is something very easy to write yourself if you need customization.
For your need, I would use one (or more) Dictionary. At least one mapping the property string --> column index, and maybe the reverse one, column index --> property string, if needed.
When I parse a csv file, I usually put the result in a list while parsing, and then in an array once complete for speed reasons (List.ToArray()).
I would like to store values in a text file as comma or tab seperated values (it doesn't matter).
I am looking for a reusable library that can manipulate this data in this text file, as if it were a sql table.
I need select * from... and delete from where ID = ...... (ID will be the first column in the text file).
Is there some code plex project that does this kind of thing?
I do not need complex functionality like joining or relationships. I will just have 1 text file, which will become 1 database table.
SQLite
:)
Use LINQ to CSV.
http://www.codeproject.com/KB/linq/LINQtoCSV.aspx
http://www.thinqlinq.com/Post.aspx/Title/LINQ-to-CSV-using-DynamicObject.aspx
If it's not CSV, in that case:
Let your file hold one record per line. At runtime, read each record into a collection of type Record [assuming Record is a custom class representing an individual record]. You can do LINQ operations on the collection and write the collection back into the file.
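A hedged sketch of that read/modify/write cycle, assuming a Record class and a file layout with the ID as the first comma-separated field (both are illustrative):

```csharp
// Illustrative record type: ID plus the rest of the line.
class Record
{
    public int ID;
    public string Rest;
}

// Read every line into Record objects, filter with LINQ, write back.
var records = File.ReadAllLines("table.txt")
    .Select(line => line.Split(new[] { ',' }, 2))
    .Select(parts => new Record { ID = int.Parse(parts[0]), Rest = parts[1] })
    .ToList();

// equivalent of: delete from table where ID = 2
records.RemoveAll(r => r.ID == 2);

File.WriteAllLines("table.txt",
    records.Select(r => r.ID + "," + r.Rest));
```

Rewriting the whole file on each change is fine for a single small table, which matches the question's scope.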
Use ODBC. There is a Microsoft Text Driver for csv files, so I think this would be possible. I haven't tested whether you can manipulate data via ODBC, but you can test it easily.
For querying you can also use linq.
Have you looked at the FileHelpers library? It has the capability of reading and parsing a text file into CLR objects. Combining that with the power of something like LINQ to Objects, you have the functionality you need.
public class Item
{
public int ID { get; set; }
public string Type { get; set; }
public string Instance { get; set; }
}
class Program
{
static void Main(string[] args)
{
string[] lines = File.ReadAllLines("database.txt");
var list = lines
.Select(l =>
{
var split = l.Split(',');
return new Item
{
ID = int.Parse(split[0]),
Type = split[1],
Instance = split[2]
};
});
Item secondItem = list.Where(item => item.ID == 2).Single();
List<Item> newList = list.ToList<Item>();
newList.RemoveAll(item => item.ID == 2);
//override database.txt with new data from "newList"
}
}
What about data deletion? LINQ is for querying, not for manipulation. However, List provides a predicate-based RemoveAll that does what you want:
newList.RemoveAll(item => item.ID == 2);
Also you can overview more advanced solution "LINQ to Text or CSV Files"
I would also suggest to use ODBC. This should not complicate the deployment, the whole configuration can be set in the connection-string so you do not need a DSN.
Together with a schema.ini file you can even set column names and data-types, check this KB article from MS.
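For illustration, a schema.ini alongside the data file might look like this (the file name, columns, and types are assumptions, not from the question):

```ini
[table.txt]
Format=CSVDelimited
ColNameHeader=False
Col1=ID Integer
Col2=Name Text
Col3=Value Text
```

The section name must match the data file's name, and the driver then exposes the file as a table with those typed columns.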
SQLite or LINQ to Text and CSV :)