Padding 0's - MySQL - C#

So, I have a column that is my key column and auto-increments, so it can't be varchar or anything fun.
Please hold back the "Erhmahgerd urse werb contrerls" as I like to control my own HTML flow and don't like handing it over to .NET. I've never had good experiences with that (and I like my code to be compliant). I wouldn't like this to be a flame war or anything - I just want to pad with zeroes. I feel the need to say this because it's happened way too many times before.
So, anyway.
DataTable tabledata = new DataTable();
using (OdbcConnection con = new OdbcConnection(conString)) {
    using (OdbcCommand com = new OdbcCommand("SELECT * FROM equipment_table", con)) {
        OdbcDataAdapter myAdapter = new OdbcDataAdapter();
        myAdapter.SelectCommand = com;
        try {
            con.Open();
            myAdapter.Fill(tabledata);
        } catch (Exception) {
            throw;    // rethrow without losing the stack trace
        } finally {
            con.Close();
        }
    }
}
Response.Write("<table id=\"equipment_listtable\"><thead><tr><th>Equipment ID</th><th>Equipment Name</th><th>Equipment Description</th><th>Type</th><th>In Use?</th><th>iOS Version (if applicable)</th><th>Permission Level</th><th>Status</th><th>Asset Tag</th><th>Details</th><th>Change</th><th>Apply</th></tr></thead>");
foreach (DataRow row in tabledata.Rows) {
int counter = (int)row["Equipment_ID"];
Response.Write("<tr>");
foreach (var item in row.ItemArray) {
Response.Write("<td>" + item + "</td>");
}
Response.Write("This stuff is irrelevant to my problem, so it is being left out... It uses counter though, so no complaining about not using variables...");
}
Response.Write("</table>");
As you can imagine, the value of my key column comes out like so in the generated table:
1
10
11
12
13
14
15
16
17
18
19
20
2
21
etc. I'd like to fix this with 0 padding. What is the best way to do this? Is there a way to target a SPECIFIC field while I'm generating the table? I've looked into DataRow.Item, but I've always found the MSDN documentation to be a bit difficult to comprehend.
Alternatively, could I SELECT * and then use MySQL's LPAD on ONE specific field within the *?
Thanks!

SELECT * is generally not a good idea. It inevitably causes more problems than it saves in query-writing time. Specifying the columns explicitly will also allow you to use LPAD on just the one column.
I was about to suggest using something like:
Response.Write("" + item.ToString.PadLeft(2, '0')+ "");
But since you are just looping round each item and rendering them all the same way, the above would pad every cell.
So I think your best option is to change your query to specify every column. Then you can pad the field as you want.
Or use an ORDER BY if you are only concerned that they aren't being ordered correctly (i.e. ordered as chars, not ints).
Alternatively, create a variable for each cell read from the database and render each separately.
This will give you more customisation options, should you require them.
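For example, a minimal sketch of that per-column approach (my own illustration, assuming the DataTable column keeps its database name, Equipment_ID, and using a pad width of 2 purely for illustration):
foreach (DataRow row in tabledata.Rows)
{
    Response.Write("<tr>");
    foreach (DataColumn col in tabledata.Columns)
    {
        if (col.ColumnName == "Equipment_ID")
        {
            // pad only the key column
            Response.Write("<td>" + Convert.ToInt32(row[col]).ToString("D2") + "</td>");
        }
        else
        {
            Response.Write("<td>" + row[col] + "</td>");
        }
    }
    Response.Write("</tr>");
}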

You really should always specify your column names explicitly and not use * anyway.
If you insist on using * then just bring the padded value in as another field:
SELECT *, LPAD(`Equipment_ID`, 2, '0') AS Equipment_ID_Padded FROM equipment_table
Remember LPAD will truncate if your Equipment_ID is longer than 2 digits.
A better solution may be to just pad the values in code using String.Format or ToString("D2");
string paddedString = string.Format("{0:D2}", (int)row["Equipment_ID"]);

You can add padding in C# by using .ToString("Dn"), where n is the total number of digits you want.
eg. if counter = 34 and you call counter.ToString("D5"), you'll get 00034.
If you're using strings, the easiest way would be to Convert.ToInt32() and then apply the above.
If you'd rather keep using strings, just look into --printf whups wrong language-- String.Format.
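A quick sketch of all three (the values here are illustrative):
int counter = 34;
string a = counter.ToString("D5");                  // "00034"
string b = Convert.ToInt32("34").ToString("D5");    // same result, starting from a string
string c = String.Format("{0:D5}", counter);        // String.Format equivalent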

Related

C# - break out large string into multiple smaller strings for export to a database

C# newb here - I have a script written in C# which takes the contents of several fields of the internal database of an application (Contoso Application, in this case) and exports them to a SQL Server database table.
Here is the code:
using System;
using System.IO;
using System.Data.SqlClient;
using Contoso.Application.Api;
using Contoso.Application.Commands;
using System.Linq;

public class Script
{
    public static bool ExportData(DataExportArguments args)
    {
        try
        {
            var sqlStringTest = new SqlConnectionStringBuilder();
            sqlStringTest.DataSource = "SQLserverName";
            sqlStringTest.InitialCatalog = "DatabaseName";
            sqlStringTest.IntegratedSecurity = true; // note: when true, UserID/Password are ignored
            sqlStringTest.UserID = "userid";
            sqlStringTest.Password = "password";
            using (var sqlConnection = new SqlConnection(sqlStringTest.ConnectionString))
            {
                sqlConnection.Open();
                using (IExportReader dataReader = args.Data.GetTable())
                {
                    while (dataReader.Read())
                    {
                        using (var sqlCommand = new SqlCommand())
                        {
                            sqlCommand.Connection = sqlConnection;
                            sqlCommand.CommandText =
                                @"INSERT INTO [dbo].[Table] (
                                    Id,
                                    Url,
                                    articleText)
                                  VALUES (
                                    @Id,
                                    @Url,
                                    @articleText)";
                            sqlCommand.Parameters.AddWithValue("@Id", dataReader.GetStringValue("Id"));
                            sqlCommand.Parameters.AddWithValue("@Url", dataReader.GetStringValue("Url"));
                            sqlCommand.Parameters.AddWithValue("@articleText",
                                dataReader.Columns.Any(x => x.Name == "articleText")
                                    ? dataReader.GetStringValue("articleText")
                                    : (object)DBNull.Value);
                            sqlCommand.ExecuteNonQuery(); // execute the insert for this row
                        }
                    }
                }
            }
        }
        catch (Exception exp)
        {
            args.WriteDebug(exp.ToString(), DebugMessageType.Error);
            return false;
        }
        return true;
    }
}
FYI - articleText is of type nvarchar(max)
What I'm trying to accomplish: sometimes the data in the articleText field is short, sometimes it is very long. What I need to do is break out a record into multiple records when the string in a given articleText field is greater than 10,000 characters. So if a given articleText field is 25,000 characters, there would be 3 records exported: first one would have an articleText field of 10,000 characters, 2nd, 10,000 characters, 3rd, 5,000 characters.
Further to this requirement, I need to ensure that if the character cutoff for each record falls in the middle of a word (which will likely happen most of the time) that I account for that.
Therefore, as an example, if we have a record in the application's internal database with Id of 1, Url of www.contoso.com, and articleText of 28,000 characters, I would want to export 3 records to SQL Server as such:
Record 1:
Id: 1
Url: www.contoso.com
articleText: if articleText is greater than 10,000 characters, export characters 1-10,000; else export the entirety of articleText.
Record 2:
Id: 1
Url: www.contoso.com
articleText: assuming Record 2 only exists if Record 1 was greater than 10k characters, export characters 9,990-20,000 (start at character 9,990 in case Record 1 cut off in the middle of a word).
Record 3:
Id: 1
Url: www.contoso.com
articleText: export characters 19,900-28,000 (or alternatively, 19,900 through end of string).
For any given export session, there are thousands of records in the internal database to be exported (hence the while loop). Approximately 20% of the records will meet the criteria of articleText exceeding 10k characters, so for any that don't, we absolutely only want to export one record. Further, although my example above only goes to 28k characters, this script needs to be able to accommodate any size.
I'm a bit stumped at how one would go about accomplishing something like this. I believe the first step is to get a character count for articleText to determine how many records need to be exported. From there, I feel I've gone down a rabbit hole. Any suggestions on how to go about this would be greatly appreciated.
EDIT #1: to clarify on the cutoff requirement - the reason the above is the approach I'm suggesting to handle the cutoff is because the article may have a person's name in it. Simply finding a space and cutting it off there wouldn't work because it's possible you would split between a first and last name. The approach I mention above would meet our requirements because the word or name only needs to exist in its entirety in one of the records.
Further, reassembly of the separated records in SQL Server is not a requirement and therefore not necessary.
This might be a start: it's not very efficient, admittedly, but just to illustrate how it might be done:
void Main()
{
    string text = "012345 6789012 3456789012 34567890 1234567" +
                  "0123 456789 01234567 8901234567 8901234567" +
                  "012345 67890123456 78901234567890123456" +
                  "0123456 7890123456 789012345 6789012345" +
                  "012345 678901234 5678901234 5678901234" +
                  "01234567 89012345678 901234567890123" +
                  "ABCDEFGHI JLMNOPQES TUVWXYZ";
    int startingPoint = 0;
    int chunkSize = 50;
    int padding = 10;
    List<string> chunks = new List<string>();
    do
    {
        if (startingPoint == 0)
        {
            chunks.Add(new string(text.Take(chunkSize).ToArray()));
        }
        else
        {
            chunks.Add(new string(text.Skip(startingPoint).Take(chunkSize).ToArray()));
        }
        startingPoint = startingPoint + chunkSize - padding;
    }
    while (startingPoint < text.Length);
    Console.WriteLine("Original length: {0}", text.Length);
    Console.WriteLine("Chunk count: {0}", chunks.Count);
    Console.WriteLine("Expected new length: {0}", text.Length + (chunks.Count - 1) * padding);
    Console.WriteLine("Actual new length: {0}", chunks.Sum(c => c.Length));
    Console.WriteLine();
    Console.WriteLine("Chunks:");
    foreach (var chunk in chunks)
    {
        Console.WriteLine(chunk);
    }
}
Output:
Original length: 263
Chunk count: 7
Expected new length: 323
Actual new length: 323
Chunks:
012345 6789012 3456789012 34567890 12345670123 456
670123 456789 01234567 8901234567 8901234567012345
4567012345 67890123456 789012345678901234560123456
4560123456 7890123456 789012345 6789012345012345 6
45012345 678901234 5678901234 567890123401234567 8
01234567 89012345678 901234567890123ABCDEFGHI JLMN
EFGHI JLMNOPQES TUVWXYZ
You are going to have to tokenize the input to be able to split it sensibly. In order to do that, you have to be able to make some assumptions about the input.
For example, you could split the input on the last end-of-sentence that occurs prior to the 10K character boundary. But you have to be able to make concrete assumptions about what constitutes an end-of-sentence. If you can assume that the input is well-punctuated and grammatically correct, then a simple regex like [^.!?]+[.!?] {1,2}[A-Z] can be used to detect the end of a sentence, where the sentence ends with ".", "!", or "?", is followed by at least one but no more than two spaces, and the next character is a capital letter. Since the following capital letter is included in the match, you just drop back one character position and split.
The exact process will depend on the specific assumptions you can make about the input.
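For illustration, a rough sketch of that idea in C# (my own sketch, not tested against the asker's data; the method name and the maxLen parameter are illustrative, and the regex is a variant of the one above using a lookahead so the capital letter is not consumed, which avoids the "drop back one character" step):
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Splits 'text' into chunks of at most maxLen characters, cutting at the
// last detected end-of-sentence before each boundary; falls back to a
// hard cut if no sentence end is found in the window.
static List<string> SplitAtSentences(string text, int maxLen)
{
    var sentenceEnd = new Regex(@"[^.!?]+[.!?]\s{1,2}(?=[A-Z])");
    var chunks = new List<string>();
    int start = 0;
    while (text.Length - start > maxLen)
    {
        int cut = start + maxLen;
        // find the last sentence end within the current window
        var matches = sentenceEnd.Matches(text.Substring(start, maxLen));
        if (matches.Count > 0)
        {
            var last = matches[matches.Count - 1];
            cut = start + last.Index + last.Length;
        }
        chunks.Add(text.Substring(start, cut - start));
        start = cut;
    }
    chunks.Add(text.Substring(start));
    return chunks;
}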

Rounding a field's value in a SELECT statement

What I need to do is to round a field to 2 decimals, but not in the usual way. I have a dropdown that's always rounded to 2 decimals (CIT_NBR). However, in the database table, it's sometimes rounded to 1 decimal. So now I'm trying to create a SELECT statement based on this field, but my front end stores it as 2 decimals and my back end can be stored as either 1 or 2 decimals. Don't ask, it's complicated. :o)
So, what I want to do in "aircode" is something like:
SELECT * FROM VW_MOS_DPL_AccountValidation WHERE CUST_NUM = @CNum
AND Format(CIT_NBR, 2 decimals) = @CITNum
This way, it forces the data in the table to use 2 decimals, so it can be compared to my dropdown.
Here's my code block:
using (SqlConnection con2 = new SqlConnection(str2))
{
    using (SqlCommand cmd2 = new SqlCommand(@"SELECT * FROM VW_MOS_DPL_AccountValidation WHERE CUST_NUM = @CNum AND CIT_NBR = @CITNum", con2))
    {
        con2.Open();
        cmd2.Parameters.AddWithValue("@CNum", TBAccountNum.Text);
        string ddlCITVal2 = ddlCIT.SelectedValue;
        cmd2.Parameters.AddWithValue("@CITNum", ddlCITVal2);
        using (SqlDataReader DT2 = cmd2.ExecuteReader())
        {
            // If the SQL returns any records, process the info
            if (DT2.HasRows)
            {
                while (DT2.Read())
                {
                    .
                    .
                    .
                    etc
How could I go about doing this?
Cast the varchar to a decimal
SELECT SUM(Cast(CitNum as decimal(8,2))) as CitNum FROM table
There is a more performant approach, but this is easiest to read and maintain unless it causes a real performance problem.
SELECT *
FROM VW_MOS_DPL_AccountValidation
WHERE CUST_NUM = @CNum
AND (CIT_NBR = @CITNum OR CIT_NBR + '0' = @CITNum)
Unless you really meant rounded to one decimal, rather than your example (which just drops a trailing zero), in which case a different approach would be needed.
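If you do need genuine rounding, a sketch of one way (my own illustration; it assumes CIT_NBR is a varchar that can be cast, and reuses the names from the question):
using (SqlCommand cmd2 = new SqlCommand(
    @"SELECT * FROM VW_MOS_DPL_AccountValidation
      WHERE CUST_NUM = @CNum
        AND ROUND(CAST(CIT_NBR AS decimal(18,4)), 2) = @CITNum", con2))
{
    cmd2.Parameters.AddWithValue("@CNum", TBAccountNum.Text);
    // compare as a decimal so "1.5" and "1.50" match
    cmd2.Parameters.AddWithValue("@CITNum", decimal.Parse(ddlCIT.SelectedValue));
    // ...open the connection and execute the reader as in the question's code block
}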

Better algorithm for a date comparison task

I would like some help making this comparison faster (sample below). The sample steps a comparison variable forward an hour at a time and checks the array for a matching value; if there is no match, it adds a dummy value to a second array (which is concatenated later).
if (ticks.TypeOf == Period.Hour)
    while (compareAt <= endAt)
    {
        if (range.Where(d => d.time.AddMinutes(-d.time.Minute) == compareAt).Count() < 1)
            gaps.Add(new SomeValue() {
                ...some dummy values.. });
        compareAt = compareAt.AddTicks(ticks.Ticks);
    }
This execution is too expensive when it comes to, for example, hours: there are at most 365 * 24 = 8760 values in the array. In the future there will also be minutes/seconds per month (60 * 24 * 31 = 44,640), which makes it unusable.
If the array were most often complete (meaning no gaps/empty slots), the check could easily be bypassed with if (range.Count() == (hours/day * days)). Though, that day will not be today.
How would I solve this more efficiently?
One example: if there are 7800 values in the array, we are missing about 950, right? Can I find just the endings of the gaps and create only the missing values? That would make the O-notation depend on the number of gaps, not the number of values.
A more efficient loop would also be a welcome answer.
[Edit]
Sorry for the bad English; I am trying my best to describe the problem.
Your performance is low because the range lookup is not using any indexing and rechecks the entire range every time.
One way to do this a lot quicker:
if (ticks.TypeOf == Period.Hour)
{
    // fill a HashSet with the range's unique hourly values
    var rangehs = new HashSet<DateTime>();
    foreach (var r in range)
    {
        rangehs.Add(r.time.AddMinutes(-r.time.Minute));
    }
    // walk all the hours
    while (compareAt <= endAt)
    {
        // quickly check if it's a gap
        if (!rangehs.Contains(compareAt))
            gaps.Add(new SomeValue() { ...some dummy values.. });
        compareAt = compareAt.AddTicks(ticks.Ticks);
    }
}

Error importing data with FoxPro OLEDB driver

I am importing some data from a FoxPro database to a SQL Server database using the FoxPro OLE DB driver. The approach I am taking is to loop through the FoxPro tables, select all records into a DataTable, and then use SqlBulkCopy to insert that table into SQL Server. This works fine except for a few instances where I get the following error:
System.InvalidOperationException: The provider could not determine the Decimal value. For example, the row was just created, the default for the Decimal column was not available, and the consumer had not yet set a new Decimal value.
I have investigated this and logged which rows it appears in; the issue is that the FoxPro table has a fixed width for a numeric value: 1 is stored as 1.00, but 10 is stored as 10.0, and it is the single digit after the decimal point which is causing the issues. Having now found the issue, I am struggling to fix it. The following function is what I am using to convert an OleDbDataReader to a DataTable:
private DataTable FPReaderToDataTable(OleDbDataReader dr, string TableName)
{
    DataTable dt = new DataTable();
    // get datareader schema
    DataTable SchemaTable = dr.GetSchemaTable();
    List<DataColumn> cols = new List<DataColumn>();
    if (SchemaTable != null)
    {
        foreach (DataRow drow in SchemaTable.Rows)
        {
            string columnName = drow["ColumnName"].ToString();
            DataColumn col = new DataColumn(columnName, (Type)(drow["DataType"]));
            col.Unique = (bool)drow["IsUnique"];
            col.AllowDBNull = (bool)drow["AllowDBNull"];
            col.AutoIncrement = (bool)drow["IsAutoIncrement"];
            cols.Add(col);
            dt.Columns.Add(col);
        }
    }
    // populate data
    int RowCount = 1;
    while (dr.Read())
    {
        DataRow row = dt.NewRow();
        for (int i = 0; i < cols.Count; i++)
        {
            try
            {
                row[cols[i]] = dr[i];
            }
            catch (Exception ex)
            {
                if (i > 0)
                {
                    LogImportError(TableName, cols[i].ColumnName, RowCount, ex.ToString(), dr[0].ToString());
                }
                else
                {
                    LogImportError(TableName, cols[i].ColumnName, RowCount, ex.ToString(), "");
                }
            }
        }
        RowCount++;
        dt.Rows.Add(row);
    }
    return dt;
}
What I would like to do is check for values that have the 1 decimal place issue but I am unable to read from the datareader at all in these cases. I would have thought that I could have used dr.GetString(i) on the offending rows however this then returns the following error:
The provider could not determine the String value. For example, the row was just created, the default for the String column was not available, and the consumer had not yet set a new String value.
I am unable to update the FoxPro data as the column does not allow this; how can I read the record from the DataReader and fix it? I have tried all combinations of casting / dr.GetValue / dr.GetData, and all give variations on the same error.
The structure of the FoxPro table is as follows:
Number of data records: 1664
Date of last update: 11/15/10
Code Page: 1252
Field  Field Name  Type       Width  Dec  Index  Collate  Nulls
    1  AV_KEY      Numeric        6       Asc    Machine  No
    2  AV_TEAM     Numeric        6                       No
    3  AV_DATE     Date           8                       No
    4  AV_CYCLE    Numeric        2                       No
    5  AV_DAY      Numeric        1                       No
    6  AV_START    Character      8                       No
    7  AV_END      Character      8                       No
    8  AV_SERVICE  Numeric        6                       No
    9  AV_SYS      Character      1                       No
   10  AV_LENGTH   Numeric        4    2                  No
   11  AV_CWEEKS   Numeric        2                       No
   12  AV_CSTART   Date           8                       No
** Total **                      61
It is the av_length column which is causing the problem.
I don't know if you have access to Visual FoxPro, but it has an upsizing "wizard" that will allow uploading directly to SQL Server.
It looks like a free trial download from MS: Download Visual FoxPro 9, SP2
It may be an issue with memo/blob type columns that are not getting properly interpreted.
You mentioned type-casting, but I'm not sure how you've attempted it... In your try/catch where you have
row[((DataColumn)cols[i])] = dr[i];
you might want to explicitly test the column's data type and FORCE it... something like the following (I'm not positive of the object reference for DataType.ToString() below; you'll have to confirm that while running/debugging):
if (cols[i].DataType.ToString().ToLower().Contains("int"))
    row[((DataColumn)cols[i])] = (int)dr[i];
else
    row[((DataColumn)cols[i])] = dr[i];
You could obviously test for other types too...
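Another angle on the casting idea, purely as a sketch: VFP 9's SQL dialect has a CAST() function, so (assuming the VFP 9 OLE DB provider, and with a placeholder table name) you could widen the troublesome column at query time instead of fixing it per row in C#:
// Sketch: widen AV_LENGTH in the SELECT so the provider can always
// materialize the value; "availability" is a placeholder table name
string sql = @"SELECT AV_KEY, AV_TEAM, AV_DATE, AV_CYCLE, AV_DAY,
                      AV_START, AV_END, AV_SERVICE, AV_SYS,
                      CAST(AV_LENGTH AS N(12,6)) AS AV_LENGTH,
                      AV_CWEEKS, AV_CSTART
               FROM availability";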
From your listed structure of the table, what it is doing IS CORRECT. In VFP, for the table structure listed, AV_LENGTH is of type numeric with a length of 4, 2 of which are allocated for decimal positions. So it will at MOST hold a value of "9.99": VFP forces the numeric field to a maximum of 2 decimal positions, 1 character for the decimal point, and the rest as the whole-number portion.
The rest of the numeric-based fields are Numeric with a length but NO decimal positions, which indicates they are all WHOLE numbers and hence would qualify as integer data types. Numeric with decimals should go into a float or double column type.
That being said, I don't know HOW you are even getting a 10.0 value into a numeric 4,2 field. This is the FIRST time I've ever seen a number larger than the allocated intent of the structure actually be stored in the field like this.
I don't recall the reason why FoxPro has this problem. I think it has something to do with how numbers are stored. Regardless of that, the solution is either (A) clean up the data or (B) re-size the field to allow a larger value. The sample code below demonstrates the problem.
* create a table that can store a value between -0.99 and 99.99
CREATE TABLE "TEST.DBF" (av_length N(4,2))
* insert values between 1.10 and 22,222.22222
INSERT INTO "TEST" (av_length) VALUES(1.1)
INSERT INTO "TEST" (av_length) VALUES(2.2)
INSERT INTO "TEST" (av_length) VALUES(11.11)
INSERT INTO "TEST" (av_length) VALUES(22.22)
INSERT INTO "TEST" (av_length) VALUES(111.111)
INSERT INTO "TEST" (av_length) VALUES(222.222)
INSERT INTO "TEST" (av_length) VALUES(1111.1111)
INSERT INTO "TEST" (av_length) VALUES(2222.2222)
INSERT INTO "TEST" (av_length) VALUES(11111.11111)
INSERT INTO "TEST" (av_length) VALUES(22222.22222)
* view the contents of the table
* note that records 3 to 10 do not match the field definition
BROWSE NORMAL
IF MESSAGEBOX("Fix the Data? Select to Change the Field Definition", 0+4+32) = 6
    * Solution A: fix the data, and view the table contents again
    REPLACE ALL av_length WITH MIN(av_length, 9.99) IN "TEST"
    BROWSE NORMAL
ELSE
    * Solution B: change the field definition, and view the table contents again
    * note that records 9 & 10 still need to be fixed
    ALTER TABLE "TEST.DBF" ALTER COLUMN av_length N(12,6)
    BROWSE NORMAL
ENDIF

read text from file and apply some operations on it

I have a problem: how do I read text from a file and perform operations on it? For example,
I have this text file that includes:
//name-//sex---------//birth //m1//m2//m3
fofo, male, 1986, 67, 68, 69
momo, male, 1986, 99, 98, 100
Habs, female, 1988, 99, 100, 87
toto, male, 1989, 67, 68, 69
lolo, female, 1990, 89, 80, 87
soso, female, 1988, 99, 100, 83
Now, I know how to read line by line till I reach null.
But this time I want to perform an average function to get the average of the first column of numbers, m1,
and then get the average of m1 for females only and for males only,
and some other operations that I can do, no problem.
I need help; I don't know how to get at it.
What I have in mind is to read each line of the text file into a string, then split the string (str.Split(',');), but how do I get the m1 record from each string?
I'm really confused. Should I use a regex to get the integers? Should I use a 2D array? I'm totally lost; any ideas?
Please, if you can illustrate any ideas with a code sample, that would be great and a kindness from you.
And after I've done it, I will post it for you guys to check.
{ as a girl I think I made the wrong decision to join the IT community :-( }
Try something like this.
var qry = from line in File.ReadAllLines(@"C:\Temp\Text.txt")
          let vals = line.Split(new char[] { ',' })
          select new
          {
              Name = vals[0].Trim(),
              Sex = vals[1].Trim(),
              Birth = vals[2].Trim(),
              m1 = Int32.Parse(vals[3]),
              m2 = Int32.Parse(vals[4]),
              m3 = Int32.Parse(vals[5])
          };
double avg = qry.Average(a => a.m1);
double GirlsAvg = qry.Where(a => a.Sex == "female").Average(a => a.m1);
double BoysAvg = qry.Where(a => a.Sex == "male").Average(a => a.m1);
I wrote a blog post a while back detailing the act of reading a CSV file and parsing its columns:
http://www.madprops.org/blog/back-to-basics-reading-a-csv-file/
I took the approach you mention (splitting the string), then use DateTime.TryParseExact() and related methods to convert the individual values to the types I need.
Hope the post helps!
Is there a reason for not creating a data structure that stores the fields of the file (a string, a boolean for m/f, an integer, and 3 integers)? You could make a List of these that stores the values, then loop over it to compute various sums, averages, and whatever other aggregate functions you'd like.
(note: this might seem an over-complicated solution, but I'm assuming that the source data is large (lots of rows), so loading it into a List<T> might not be feasible)
The file reading would be done quite well with an iterator block... if the data is large, you only want to handle one row at a time, not a 2D array.
This actually looks like a good fit for MiscUtil's PushLINQ approach, which can perform multiple aggregates at the same time on a stream of data, without buffering...
An example is below...
Why is this useful? Because it allows you to write multiple queries on a data source using standard LINQ syntax, but only read it once.
Example
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using MiscUtil.Linq;
using MiscUtil.Linq.Extensions;

static class Program
{
    static void Main()
    {
        // prepare a query that is capable of parsing
        // the input file into the expected format
        string path = "foo.txt";
        var qry = from line in ReadLines(path)
                  let arr = line.Split(',')
                  select new
                  {
                      Name = arr[0].Trim(),
                      Male = arr[1].Trim() == "male",
                      Birth = int.Parse(arr[2].Trim()),
                      M1 = int.Parse(arr[3].Trim())
                      // etc
                  };
        // get a "data producer" to start the query process
        var producer = CreateProducer(qry);
        // prepare the overall average
        var avg = producer.Average(row => row.M1);
        // prepare the gender averages
        var avgMale = producer.Where(row => row.Male)
                              .Average(row => row.M1);
        var avgFemale = producer.Where(row => !row.Male)
                                .Average(row => row.M1);
        // run the query; until now *nothing has happened* - we haven't
        // even opened the file
        producer.ProduceAndEnd(qry);
        // show the results
        Console.WriteLine(avg.Value);
        Console.WriteLine(avgMale.Value);
        Console.WriteLine(avgFemale.Value);
    }

    // helper method to get a DataProducer<T> from an IEnumerable<T>, for
    // use with the anonymous type
    static DataProducer<T> CreateProducer<T>(IEnumerable<T> data)
    {
        return new DataProducer<T>();
    }

    // this is just a lazy line-by-line file reader (iterator block)
    static IEnumerable<string> ReadLines(string path)
    {
        using (var reader = File.OpenText(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line;
            }
        }
    }
}
I recommend using the FileHelpers library. Check out the example here: Quick start
You could calculate the average in a foreach loop like the one on the page.
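A rough sketch of what that might look like (the record class, file name, and use of the generic engine are my own assumptions; FieldTrim handles the spaces after the commas):
using System.Linq;
using FileHelpers;

// record layout matching the comma-separated file in the question
[DelimitedRecord(",")]
[IgnoreEmptyLines]
public class StudentRecord
{
    [FieldTrim(TrimMode.Both)]
    public string Name;
    [FieldTrim(TrimMode.Both)]
    public string Sex;
    public int Birth;
    public int M1;
    public int M2;
    public int M3;
}

// usage: read the file once, then aggregate with LINQ
var engine = new FileHelperEngine<StudentRecord>();
StudentRecord[] records = engine.ReadFile("marks.txt");
double avgM1 = records.Average(r => r.M1);
double femaleAvg = records.Where(r => r.Sex == "female").Average(r => r.M1);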
Suzana, I apologize in advance but I don't mean to offend you. You already said "As a girl, you made the wrong decision to join IT...", and I have heard that before from my sisters saying all the time when I tried to help them with their career selection. But if you have conceptual difficulty following the above answers without just copying and paste the code, I think you just validated part of your statement.
Having said that, there is more to IT than just writing code. In other words, coding might just not be for you, but there are other areas of IT in which you might excel, including becoming a manager one day. I have had many managers who are not capable of doing the above in any language, but they do a good job of managing people, projects and resources.
Believe me, it's only getting harder from here on. This is a very basic task in programming. But if you realize this soon enough, you could talk to your managers asking for non-coding challenges in the company. QA might also be an alternative. Again, I only wish to help and am sorry if you become offended. Good luck.
Re your follow-up "what if"; you would simply loop:
// rows is the jagged array of string[] rows (name, sex, birth, m1, m2, m3)
int totalCounter = 0, totalSum = 0,
    maleCount = 0, maleSum = 0,
    femaleCount = 0, femaleSum = 0;
foreach (string[] row in rows)
{
    int m1 = int.Parse(row[3]);
    totalCounter++;
    totalSum += m1;
    switch (row[1].Trim())   // index 1 is the sex column
    {
        case "male":
            maleCount++;
            maleSum += m1;
            break;
        case "female":
            femaleCount++;
            femaleSum += m1;
            break;
    }
}
etc. However, while this works, you can do the same thing a lot more conveniently/expressively in C# 3.0 with LINQ, which is what a lot of the existing replies are trying to show... the fact is, Tim J's post already does all of this:
ReadAllLines: gets the array of lines from the file
Split: gets the array of data per row
"select new {...}": parses the data into something convenient
the 3 "avg" lines show how to take an average over filtered data
The only change I'd make is that I'd chuck a ToArray() in there somewhere so we only read the file once...
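Concretely, with Tim J's qry from above, that would look like:
// materialize the parsed rows once, then reuse them for each aggregate
var rows = qry.ToArray();
double avg = rows.Average(a => a.m1);
double girlsAvg = rows.Where(a => a.Sex == "female").Average(a => a.m1);
double boysAvg = rows.Where(a => a.Sex == "male").Average(a => a.m1);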
