DataGridView of my list of values
How could I add the values of columns 5-10 from three comma-separated-values (CSV) files in one go,
say: C:\FYP\2000data\Z1ert00000.csv,
C:\FYP\2000data\Z1ert00001.csv and
C:\FYP\2000data\Z1ert00002.csv,
when I click the add/import button?
P.S.: I have 2000 CSV files to insert into the DataGridView, which means I will
end up with 2000 columns appearing in it. Is there
any easier way to do this?
private void btnImport_Click(object sender, EventArgs e)
{
    var parsedData = new List<string[]>();
    using (var sr = new StreamReader(txtFilename.Text))
    {
        string line;
        while ((line = sr.ReadLine()) != null)
        {
            string[] row = line.Split(',');
            parsedData.Add(row);
        }
    }
    dataGridView1.ColumnCount = 2;
    for (int i = 0; i < 2; i++)
    {
        // Use the i-th value of the first parsed row as the column header
        dataGridView1.Columns[i].Name = parsedData[0][i];
    }
    foreach (string[] row in parsedData)
    {
        dataGridView1.Rows.Add(row);
    }
    for (int x = 0; x < 5; x++)
    {
        dataGridView1.Rows.Remove(dataGridView1.Rows[0]); // drop the first 5 rows of the data
    }
    dataGridView1.Columns.Remove(dataGridView1.Columns[0]); // remove the first column
}
This is the code I currently use to import a single CSV file.
Thanks in advance!! :)
What you're trying to achieve
You're saying you have 2000 files you'd like to see in the DataGridView. I find it hard to believe that this is what you really want, because it will be
pretty slow on the initial load (parsing 2000 files)
unreadable... who scrolls through 2000 columns?
So I think you should first consider what it is you want to do with the information in these CSV files. Think about the UI design. Perhaps create a search function?
Working with large data
If these are static files, I would propose to import all these csv files into a database so you have easy access to them and can use an ORM model in your program. Take a look at Entity Framework.
Importing these files into a SQL database can be as easy as this:
BULK INSERT SchoolsTemp
FROM 'C:\CSVData\Schools.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
Or use any of the already available tutorials out there.
Then you can start thinking about paging the data you're getting out, and about how to visualize it so the data becomes useful.
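Once the data is in the database, paging it back out for display could be sketched like this with Entity Framework (a minimal sketch, not from the original post: `SchoolContext`, `School` and `Id` are hypothetical names for an EF model you would define):

```csharp
using System.Collections.Generic;
using System.Linq;

public static List<School> GetPage(SchoolContext db, int pageIndex, int pageSize)
{
    return db.Schools
             .OrderBy(s => s.Id)          // Skip/Take need a stable ordering
             .Skip(pageIndex * pageSize)  // skip the rows of earlier pages
             .Take(pageSize)              // fetch only one page of rows
             .ToList();
}
```

Each page can then be bound to the grid, so the UI never has to hold all 2000 files' worth of data at once.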
Hope this helps.
Related
I have a DataGridView in a WinForm that I read data into. I name each column after the count in the loop I use. Part of the function that reads the data is below. The file I read from is a CSV created from Excel.
// parser is presumably a Microsoft.VisualBasic.FileIO.TextFieldParser opened on the CSV;
// col holds the column count, and rowCount and num are ints declared earlier.
while (!parser.EndOfData)
{
    string[] fields = parser.ReadFields(); // read in the next row of data
    dgv_data.Rows.Add(); // add new row
    rowCount++;
    // put the row number inside the left margin
    dgv_data.Rows[rowCount - 1].HeaderCell.Value = rowCount.ToString();
    for (int i = 0; i < col; i++)
    {
        dgv_data.Rows[rowCount - 1].Cells[i].Value = fields[i]; // put the data into the cell
        // If the cell is "true" or a number greater than 0 then we colour it green
        if (fields[i].ToLower() == "true") dgv_data.Rows[rowCount - 1].Cells[i].Style.BackColor = Color.SpringGreen;
        if (int.TryParse(fields[i], out num))
        {
            if (num > 0) dgv_data.Rows[rowCount - 1].Cells[i].Style.BackColor = Color.SpringGreen;
        }
        dgv_data.Rows[rowCount - 1].Cells[i].Tag = (rowCount - 1).ToString() + ":" + i.ToString(); // unique cell tag
    }
}
I need to reorder the columns because I need to save them in a different order, BUT I also need to be able to reorder them back to the original order, so I flip-flop between the two orders. I do this with a simple function; I only show a few of the columns here, as there are 30 in total. This works well, even if it's a bit inefficient.
private void btn_reorder_Click(object sender, EventArgs e)
{
if (flag)
{
flag = false;
dgv_data.Columns[22].DisplayIndex = 0;
dgv_data.Columns[20].DisplayIndex = 1;
dgv_data.Columns[12].DisplayIndex = 2;
}
else
{
flag = true;
dgv_data.Columns[0].DisplayIndex = 0;
dgv_data.Columns[1].DisplayIndex = 1;
dgv_data.Columns[2].DisplayIndex = 2;
}
dgv_data.Refresh();
}
The issue comes when I need to save the data to a CSV file: the columns are not saved in the new order. Before saving I need to manipulate a few columns, e.g. change seconds to milliseconds. Using the following method I can do that, but the saved file always has the original layout.
var sb = new StringBuilder();
foreach (DataGridViewRow row in dgv_data.Rows)
{
row.Cells[1].Value = (int.Parse(row.Cells[1].Value.ToString()) * 1000).ToString();
var cells = row.Cells.Cast<DataGridViewCell>();
sb.AppendLine(string.Join(",", cells.Select(cell => "\"" + cell.Value + "\"").ToArray()));
}
File.WriteAllText(saveFileDialog1.FileName, sb.ToString());
I found a different method on the internet, and this does save the new layout, but with it I cannot manipulate the cells before saving them.
dgv_data.ClipboardCopyMode = DataGridViewClipboardCopyMode.EnableWithoutHeaderText;
// Select all the cells
dgv_data.SelectAll();
// Copy selected cells to DataObject
DataObject dataObject = dgv_data.GetClipboardContent();
// Get the text of the DataObject, and serialize it to a file
File.WriteAllText(saveFileDialog1.FileName,
dataObject.GetText(TextDataFormat.CommaSeparatedValue));
How can I make sure that when I reorder the columns I can save them in the same order as they are shown in the DataGridView, and still be able to flip-flop between the two column orders?
DataGridView columns can be addressed in several ways; two of them are their name and their index number in the DataGridView's column collection.
The user chooses the column name, but the Index is assigned by the system when the columns are created, and I cannot see a way to ever edit this number.
If you want to reorder the visual order of your columns in the GUI, you change the DisplayIndex. This does not change the Index number of the column; it just changes how the DGV looks in the UI.
I created a small example which you can download from https://github.com/zizwiz/DataGridView_ReorderColumn_Example
When you save the left-hand reordered DGV by copying to the clipboard you get the view shown in the GUI, but if you save by parsing through the grid you get the original Index order.
To get round this, if you want to reorder and then save by parsing through the DGV, you must copy the original DGV to a new DGV column by column, in the order you now want. There are many ways to do this; I just show one simple way. What you would probably want to do is put the columns in a temporary list, remove all the columns and then add them again.
I cannot find a way of changing the Index property of a column after it has been created, so this copy method, although cumbersome, is what I have used.
As this is only a quick example it does not have all the bells and whistles one might want; it just illustrates how I got over a problem I encountered. The files you save are put in the same folder as the "exe" you run.
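As an alternative to maintaining a second grid, the same save can be sketched by ordering the columns on DisplayIndex while writing (a minimal, untested sketch using the question's `dgv_data` and `saveFileDialog1`; it needs `System.Linq`, `System.Text` and `System.IO`):

```csharp
// Sort the columns by their DisplayIndex so the written file matches
// what is shown on screen, while cell manipulation still works.
var orderedColumns = dgv_data.Columns
    .Cast<DataGridViewColumn>()
    .OrderBy(c => c.DisplayIndex)
    .ToList();

var sb = new StringBuilder();
foreach (DataGridViewRow row in dgv_data.Rows)
{
    if (row.IsNewRow) continue; // skip the empty "new row" placeholder
    var fields = orderedColumns.Select(c => "\"" + row.Cells[c.Index].Value + "\"");
    sb.AppendLine(string.Join(",", fields));
}
File.WriteAllText(saveFileDialog1.FileName, sb.ToString());
```

Because only the write order changes, flipping DisplayIndex back restores the original layout with no copying.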
I have the method below; I'm trying to clean up a text file based on some criteria. With everyone's help I've already gotten this far, and everything is working, creating and filtering. BUT something I noticed during my last set of tests is that the number of rows in the CSV files I'm generating (only for testing purposes) is double the size of the actual DataTable. The first DataTable in my method, called "output", has a row count of 2257, but the CSV that is created has 4514 records in it; the second one, "outputLoc", has a row count of 1402, but its CSV file ends up with 2804 records.
Here is the code currently being executed, which works but generates the numbers above.
try
{
DataTable d = processFileData(concatFile);
// REMOVES ALL RECORDS WITH A CLASS THAT IS NON-LABEL CLASS
var query = from r in d.AsEnumerable()
where !returnClass().Any(r.Field<string>("Column7").Contains)
select r;
DataTable output = query.CopyToDataTable<DataRow>(); // Should have 2257 records
int dtoutputCount = output.Rows.Count;
for (int i = 0; i < dtoutputCount; i++)
{
DataRow rows = output.Rows[i];
output.ImportRow(rows);
}
ToCSV(output, ftype,"filteredclass"); // Only writing to csv for testing and verification of data
//// REMOVES ALL RECORDS THAT HAVE A NON-SELLING OR UNOPENED LOCATION
var queryLoc = from rL in output.AsEnumerable()
where !returnLocations().Any(rL.Field<string>("Column2").Contains)
select rL;
DataTable outputLoc = queryLoc.CopyToDataTable<DataRow>();
int dtoutputLocCount = outputLoc.Rows.Count;
for (int i = 0; i < dtoutputLocCount; i++)
{
DataRow rows = outputLoc.Rows[i];
outputLoc.ImportRow(rows);
}
ToCSV(outputLoc, ftype,"filteredlocation"); // Only writing to csv for testing and verification of data
}
catch (Exception e)
{
Console.WriteLine(e.InnerException);
}
Once everything is working as expected, we will eventually get rid of the ToCSV calls so that we can work with the data in memory; once it's all cleaned, we'll call that to produce the final, filtered file.
Any help would be greatly appreciated in determining why I'm getting a file that is exactly twice as big as expected.
My Excel file is not in tabular form. I am trying to read from an Excel file.
I have sections within my Excel file that are tabular.
I need to loop through rows 3 to 20, which are tabular, and read the data.
Here is part of my code:
string fileName = "C:\\Folder1\\Prev.xlsx";
var workbook = new XLWorkbook(fileName);
var ws1 = workbook.Worksheet(1);
How do I loop through rows 3 to 20 and read columns 3, 4, 6, 7 and 8?
Also, if a row is empty, how do I determine that so I can skip over it without checking that each column has a value for the given row?
To access a row:
var row = ws1.Row(3);
To check if the row is empty:
bool empty = row.IsEmpty();
To access a cell (column) in a row:
var cell = row.Cell(3);
To get the value from a cell:
object value = cell.Value;
// or
string value = cell.GetValue<string>();
For more information see the documentation.
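Putting those pieces together, a minimal sketch (untested, reusing the file path from the question) that reads columns 3, 4, 6, 7 and 8 of rows 3 to 20 and skips empty rows:

```csharp
using ClosedXML.Excel;

var workbook = new XLWorkbook(@"C:\Folder1\Prev.xlsx");
var ws1 = workbook.Worksheet(1);
int[] columns = { 3, 4, 6, 7, 8 };

for (int rowNumber = 3; rowNumber <= 20; rowNumber++)
{
    var row = ws1.Row(rowNumber);
    if (row.IsEmpty())
        continue; // nothing filled in on this row, skip it

    foreach (int c in columns)
    {
        string value = row.Cell(c).GetValue<string>();
        // ... use value ...
    }
}
```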
Here's my jam.
var rows = worksheet.RangeUsed().RowsUsed().Skip(1); // Skip header row
foreach (var row in rows)
{
var rowNumber = row.RowNumber();
// Process the row
}
If you just use .RowsUsed(), your range will contain a huge number of columns. Way more than are actually filled in!
So use .RangeUsed() first to limit the range. This will help you process the file faster!
You can also use .Skip(1) to skip over the column header row (if you have one).
I'm not sure if this solution will solve the OP's problem, but I prefer using the RowsUsed method. It can be used to get only those rows which are non-empty or have been edited by the user. This way I can avoid making an emptiness check while processing each row.
The code snippet below processes the 3rd to 20th row numbers out of all the non-empty rows. I've filtered out the empty rows before starting the foreach loop. Please bear in mind that filtering out the empty rows before processing can affect the total count of rows processed, so you need to be careful with any logic that is based on the total number of rows processed inside the foreach loop.
string fileName = "C:\\Folder1\\Prev.xlsx";
using (var excelWorkbook = new XLWorkbook(fileName))
{
var nonEmptyDataRows = excelWorkbook.Worksheet(1).RowsUsed();
foreach (var dataRow in nonEmptyDataRows)
{
//for row number check
if(dataRow.RowNumber() >=3 && dataRow.RowNumber() <= 20)
{
//to get column # 3's data
var cell = dataRow.Cell(3).Value;
}
}
}
The RowsUsed method is helpful for commonly faced problems that require processing the rows of an Excel sheet.
It works easily:
XLWorkbook workbook = new XLWorkbook(FilePath);
var rowCount = workbook.Worksheet(1).LastRowUsed().RowNumber();
var columnCount = workbook.Worksheet(1).LastColumnUsed().ColumnNumber();
int column = 1;
int row = 1;
List<string> ll = new List<string>();
while (row <= rowCount)
{
while (column <= columnCount)
{
string title = workbook.Worksheet(1).Cell(row, column).GetString();
ll.Add(title);
column++;
}
row++;
column = 1;
}
I am trying to import a CSV into MS SQL using C# code in an ASP.NET application.
The C# code below helps to load the CSV file and choose the table to import into,
and then a button click event matches columns and populates a GridView (dgvmatchdata).
The GridView has two columns: the left-side column lists all the available table headers from the DB,
and the right-side column lists all the CSV headers in a drop-down list.
Now, there are 3 conditions:
1. Both the table and the CSV have an equal number of column headers.
2. The table has more columns than the CSV.
3. The CSV has more columns than the table.
I have successfully finished the first two scenarios, and now I am stuck at the 3rd scenario.
My approach to this is:
For example, let's consider a table with 10 columns and a CSV with 15 columns.
I wish to create 15 rows in dgvmatchdata and display the 10 table column headers on the left side,
each on its own label. The equivalent right side of dgvmatchdata will have a drop-down list
which contains 'ignore this column' + the 15 column headers from the CSV, so the DDL will have 16 items.
I want to place a text box in each of the remaining 5 rows on the table-column side, and I want to populate
each text box with the drop-down list's selected item's text during the drop-down's selected-index-changed event.
After I have successfully got text into the remaining 5 text boxes, I will write code in a button click event
to alter the table in the DB, and then the next step would simply be to import the CSV data into the altered table.
The point where I am seeking help: I have placed the text box correctly, but the text box disappears on the drop-down's selected-index-changed event
due to a postback issue, and I am unable to get the text inside the text box.
Kindly help.
Gridview Code
protected void dgvmatchdata_RowDataBound(object sender, GridViewRowEventArgs e)
{
DataTable dt = (DataTable)Session["importcsv"];
string[] csvcolNames = (from dc in dt.Columns.Cast<DataColumn>() select dc.ColumnName).ToArray();
string tablename = ddltable2.SelectedItem.Text;
string[] dbcolNames = loadDataBaseColumns(tablename);
int dbcol = dbcolNames.Length;
int csvcol = csvcolNames.Length;
if (e.Row.RowType == DataControlRowType.DataRow)
{
//Find the DropDownList in the Row
DropDownList ddlcsvcolumns = (e.Row.FindControl("ddlcsvcolumns") as DropDownList);
ddlcsvcolumns.DataSource = csvcolNames;
ddlcsvcolumns.DataBind();
ddlcsvcolumns.Items.Insert(0, new ListItem("Please select"));
}
for (int i = 0; i < dgvmatchdata.Rows.Count; i++)
{
DropDownList ddlselect = dgvmatchdata.Rows[i].FindControl("ddlcsvcolumns") as DropDownList;
foreach (string col in csvcolNames)
{
string tablcol = ((Label)dgvmatchdata.Rows[i].FindControl("lblcolumns1")).Text;
if (tablcol == col)
{
ddlselect.SelectedItem.Text = col;
ddlselect.Enabled = false;
dgvmatchdata.Rows[i].Cells[1].BackColor = System.Drawing.Color.SpringGreen;
}
}
}
}
Drop Down Selected Index Changed
protected void ddlcsvcolumns_SelectedIndexChanged(object sender, EventArgs e)
{
for (int i = 0; i < dgvmatchdata.Rows.Count; i++)
{
string selectcolumns = ((DropDownList)dgvmatchdata.Rows[i].FindControl("ddlcsvcolumns")).SelectedItem.Text;
Label selectlabel = (dgvmatchdata.Rows[i].FindControl("lblColumns") as Label);
TextBox txtcol = (TextBox)dgvmatchdata.Rows[i].FindControl("txtDynamicText" + i.ToString());
if (selectcolumns.Equals("Please select"))
{
selectlabel.Text = "";
}
else
{
selectlabel.Text = selectcolumns;
}
}
}
I was asked to do a report that combines 3 different Crystal Reports that we use. Those reports are already very slow and heavy, and making one big one was out of the question, so I created a little app in VS 2010.
My main problem is this: I have 3 DataTables (same schema), created with the DataSet designer, that I need to combine. I created an empty table to store the combined values. The queries are already pretty big, so combining them in a single SQL query is really out of the question.
Also, I do not have write access to the SQL Server (2005), because the server is maintained by the company that created our MRP program, although I could always ask support to add a view to the server.
My 3 DataTables contain labor cost, material cost and subcontracting cost. I need to create a total cost table that sums the Cost column of each table by ID. All the tables have keys to find and select rows.
When I fetch only the current jobs it is fine (500 ms for 400 records), because I have a query that fetches only the working jobs. The problem is with inventory: since I do not know when those jobs were finished, I have to fetch the entire database (around 10000 jobs, with subqueries that each return up to 100 records), and this for all 3 of my tables. That takes around 5000 to 8000 ms; although it is very fast compared to the Crystal Report, there is one problem.
I need to create a summary table that combines all these different tables I created, but I also need to do this twice, once for each date that is output, so my data always changes because it is based on a date parameter. Right now it takes around 12-20 sec to fetch everything.
I need a way to reduce the load time. Here is what I tried:
Tried a for loop to combine the 3 tables.
Then tried the DataTableReader class to read each row, using the FindByKey methods that the DataSet designer created to find the matching row in the other table; I have to do this twice. (It seems to go a little bit faster than the for loop.)
Tried LINQ; I don't think it is possible, and would it even give more performance?
Tried a dynamic query that uses "WHERE IN (comma-separated list)" (that actually doubled the execution time compared to fetching the whole database).
Tried joining my inventory query to my cost queries (that also increased the time it took).
1 - Is there any way to combine my tables more effectively? What is the fastest way to merge and sum the records of my 3 tables?
2 - Is there any way to increase the performance of my queries without having write access to the server?
Below is some of the code I used, for reference:
public static void Fill()
{
DateTime Date = Data.Date;
AllieesDBTableAdapters.CoutMatTableAdapter mat = new AllieesDBTableAdapters.CoutMatTableAdapter();
AllieesDBTableAdapters.CoutLaborTableAdapter lab = new AllieesDBTableAdapters.CoutLaborTableAdapter();
AllieesDBTableAdapters.CoutSTTableAdapter st = new AllieesDBTableAdapters.CoutSTTableAdapter();
Data.allieesDB.CoutTOT.Clear();
//Around 2 sec each Fill
mat.FillUni(Data.allieesDB.CoutMat, Date);
Data.allieesDB.CoutMat.CopyToDataTable(Data.allieesDB.CoutTOT, LoadOption.OverwriteChanges);
lab.FillUni(Data.allieesDB.CoutLabor, Date);
MergeTable(Data.allieesDB.CoutLabor);
st.FillUni(Data.allieesDB.CoutST, Date);
MergeTable(Data.allieesDB.CoutST);
}
Here is the MergeTable method (the for loop I tried is in the comments):
private static void MergeTable(DataTable Table)
{
AllieesDB.CoutTOTDataTable dtTOT = Data.allieesDB.CoutTOT;
DataTableReader r = new DataTableReader(Table);
while (r.Read())
{
DataRow drToT = dtTOT.FindByWO(r.GetValue(2).ToString());
if (drToT != null)
{
drToT["Cout"] = (decimal)drToT["Cout"] + (decimal)r.GetValue(3);
} else
{
EA_CoutsDesVentes.AllieesDB.CoutTOTRow row = dtTOT.NewCoutTOTRow();
for (int j = 0; j < r.FieldCount; j++)
{
if (r.GetValue(j) != null)
{
row[j] = r.GetValue(j);
} else
{
row[j] = null;
}
}
dtTOT.AddCoutTOTRow(row);
}
Application.DoEvents();
}
//try
//{
// for (int i = 0; i < Table.Rows.Count; i++)
// {
// DataRow drSource = Table.Rows[i];
// DataRow drToT = dtTOT.FindByWO(drSource["WO"].ToString());
//if (drToT != null)
//{
// drToT["Cout"] = (decimal)drToT["Cout"] + (decimal)drSource["Cout"];
//} else
//{
//
// EA_CoutsDesVentes.AllieesDB.CoutTOTRow row = dtTOT.NewCoutTOTRow();
// for (int j = 0; j < drSource.Table.Columns.Count; j++)
// {
// if (drSource[j] != null)
// {
// row[j] = drSource[j];
// } else
// {
// row[j] = null;
// }
// }
// dtTOT.AddCoutTOTRow(row);
//}
//Application.DoEvents();
// }
//} catch (Exception)
//{
//}
On SQL Server 2005 and up, you can create a materialized (indexed) view of the aggregate values and dramatically speed up performance.
Look at Improving Performance with SQL Server 2005 Indexed Views.