Create JSON string from SqlDataReader - C#

UPDATE
I figured it out. Check out my answer below.
I'm trying to create a JSON string representing a row from a database table to return in an HTTP response. It seems like Json.NET would be a good tool to utilize. However, I'm not sure how to build the JSON string while I'm reading from the database.
The problem is marked by the obnoxious comments /******** ********/
// connect to DB
theSqlConnection.Open(); // open the connection
SqlDataReader reader = sqlCommand.ExecuteReader();
if (reader.HasRows) {
    while (reader.Read()) {
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        using (JsonWriter jsonWriter = new JsonTextWriter(sw)) {
            // read columns from the current row and build this JsonWriter
            jsonWriter.WriteStartObject();
            jsonWriter.WritePropertyName("FirstName");
            // I need to read the value from the database
            /******** I can't just say reader[i] to get the ith column. How would I loop here to get all columns? ********/
            jsonWriter.WriteValue(... ? ...);
            jsonWriter.WritePropertyName("LastName");
            jsonWriter.WriteValue(... ? ...);
            jsonWriter.WritePropertyName("Email");
            jsonWriter.WriteValue(... ? ...);
            // etc...
            jsonWriter.WriteEndObject();
        }
    }
}
The problem is that I don't know how to read each column of the current row from the SqlDataReader so that I can call WriteValue with the correct value and attach it to the correct column name. So if a row looks like this...
| FirstName | LastName | Email |
... how would I create a JsonWriter for each such row such that it contains all column names of the row and the corresponding values in each column and then use that JsonWriter to build a JSON string that is ready for returning through an HTTP Response?
Let me know if I need to clarify anything.

My version:
This doesn't use the schema table, and it wraps the results in an array instead of using a writer per row.
SqlDataReader rdr = cmd.ExecuteReader();
StringBuilder sb = new StringBuilder();
StringWriter sw = new StringWriter(sb);
using (JsonWriter jsonWriter = new JsonTextWriter(sw))
{
    jsonWriter.WriteStartArray();
    while (rdr.Read())
    {
        jsonWriter.WriteStartObject();
        int fields = rdr.FieldCount;
        for (int i = 0; i < fields; i++)
        {
            jsonWriter.WritePropertyName(rdr.GetName(i));
            jsonWriter.WriteValue(rdr[i]);
        }
        jsonWriter.WriteEndObject();
    }
    jsonWriter.WriteEndArray();
}

EDITED FOR SPECIFIC EXAMPLE:
theSqlConnection.Open();
SqlDataReader reader = sqlCommand.ExecuteReader();
DataTable schemaTable = reader.GetSchemaTable();
// Note: GetSchemaTable returns one row per column of the result set,
// so this serializes column metadata (name, type, size, ...), not the data rows.
foreach (DataRow row in schemaTable.Rows)
{
    StringBuilder sb = new StringBuilder();
    StringWriter sw = new StringWriter(sb);
    using (JsonWriter jsonWriter = new JsonTextWriter(sw))
    {
        jsonWriter.WriteStartObject();
        foreach (DataColumn column in schemaTable.Columns)
        {
            jsonWriter.WritePropertyName(column.ColumnName);
            jsonWriter.WriteValue(row[column]);
        }
        jsonWriter.WriteEndObject();
    }
}
theSqlConnection.Close();

Got it! Here's the C#...
// ... SQL connection and command set up, only querying 1 row from the table
StringBuilder sb = new StringBuilder();
StringWriter sw = new StringWriter(sb);
JsonWriter jsonWriter = new JsonTextWriter(sw);
try {
    theSqlConnection.Open(); // open the connection
    // read the row from the table
    SqlDataReader reader = sqlCommand.ExecuteReader();
    reader.Read();
    int fieldcount = reader.FieldCount; // count how many columns are in the row
    object[] values = new object[fieldcount]; // storage for column values
    reader.GetValues(values); // extract the values in each column
    jsonWriter.WriteStartObject();
    for (int index = 0; index < fieldcount; index++) { // iterate through all columns
        jsonWriter.WritePropertyName(reader.GetName(index)); // column name
        jsonWriter.WriteValue(values[index]); // value in column
    }
    jsonWriter.WriteEndObject();
    reader.Close();
} catch (SqlException sqlException) { // exception
    context.Response.ContentType = "text/plain";
    context.Response.Write("Connection Exception: ");
    context.Response.Write(sqlException.ToString() + "\n");
} finally {
    theSqlConnection.Close(); // close the connection
}
// END of method
// the above method returns sb and another uses it to return as HTTP Response...
StringBuilder theTicket = getInfo(context, ticketID);
context.Response.ContentType = "application/json";
context.Response.Write(theTicket);
... so the StringBuilder sb variable is the JSON object that represents the row I wanted to query. Here is the JavaScript...
$.ajax({
    type: 'GET',
    url: 'Preview.ashx',
    data: 'ticketID=' + ticketID,
    dataType: "json",
    success: function (data) {
        // data is the JSON object the server spits out
        // do stuff with the data
    }
});
Thanks to Scott for his answer, which inspired my solution.
Hristo

I made the following method that converts any DataReader to JSON, but only for single-depth serialization.
Pass the reader and the column names as a string array, for example:
String[] columns = { "CustomerID", "CustomerName", "CustomerDOB" };
Then call the method:
public static String json_encode(IDataReader reader, String[] columns)
{
    int length = columns.Length;
    StringBuilder res = new StringBuilder("["); // an array of row objects, not "{"
    bool firstRow = true;
    while (reader.Read())
    {
        if (!firstRow)
            res.Append(","); // separate row objects with commas
        firstRow = false;
        res.Append("{");
        for (int i = 0; i < length; i++)
        {
            // NOTE: values are not JSON-escaped; quotes or backslashes in the data will break the output
            res.Append("\"").Append(columns[i]).Append("\":\"").Append(reader[columns[i]]).Append("\"");
            if (i < length - 1)
                res.Append(",");
        }
        res.Append("}");
    }
    res.Append("]");
    return res.ToString();
}
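For reference, the same single-depth serialization can be done with Json.NET's JsonTextWriter (already used in the answers above), which escapes values automatically. A minimal sketch, not part of the original answer:
// Sketch using Json.NET (Newtonsoft.Json); handles escaping and nulls for you.
public static string JsonEncode(IDataReader reader, string[] columns)
{
    StringBuilder sb = new StringBuilder();
    using (JsonWriter jsonWriter = new JsonTextWriter(new StringWriter(sb)))
    {
        jsonWriter.WriteStartArray();
        while (reader.Read())
        {
            jsonWriter.WriteStartObject();
            foreach (string column in columns)
            {
                jsonWriter.WritePropertyName(column);
                jsonWriter.WriteValue(reader[column]);
            }
            jsonWriter.WriteEndObject();
        }
        jsonWriter.WriteEndArray();
    }
    return sb.ToString();
}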

Related

How to export DataTable to Excel but keep the formatting

I have 2 issues when converting a DataTable to Excel:
1. The date format is messy: for example, my original data is '2016-12-31', but in Excel it becomes '31-12-16'.
2. Excel removes leading zeroes: for example, '01' becomes '1'.
Here's the code
MemoryStream result = new MemoryStream();
StreamWriter writer = new StreamWriter(result);
CSVwriter csv = new CSVwriter();
string content = csv.GetFromTable(table);
writer.Write(content);
writer.Flush();
return result;
GetFromTable method
StringBuilder str = new StringBuilder();
StringBuilder rowstr = new StringBuilder();
string Encloser = "\"";
foreach (DataRow row in table.Rows) {
    rowstr.Length = 0;
    foreach (DataColumn col in table.Columns) {
        if (col.Ordinal > 0) {
            rowstr.Append(",");
        }
        object obj = row[table.Columns.IndexOf(col)]; // C# indexer syntax, not row(...)
        rowstr.Append(Encloser + ToStr(obj) + Encloser);
    }
    if (rowstr.Length > 0) {
        str.AppendLine(rowstr.ToString());
    }
}
return str.ToString();
UPDATE:
str.ToString() result
"Date","Code"
"2016-01-31","01"
"2016-01-31","02"

Using SqlBulkCopy and IDataReader with ColumnMappings

I would like to use a DataReader, since my file has millions of lines and loading a DataTable for SqlBulkCopy seems too slow.
I implemented this reader, which will read my file this way:
// snippet from CSVReader implementation
Read();
_csvHeaderstring = _csvlinestring;
_header = ReadRow(_csvHeaderstring);
int i = 0;
_csvHeaderstring = "";
foreach (var item in _header) // read each column and create a dummy header.
{
    headercollection.Add("COL_" + i.ToString(), null);
    _csvHeaderstring = _csvHeaderstring + "COL_" + i.ToString() + _delimiter;
    i++;
}
_csvHeaderstring = _csvHeaderstring.TrimEnd(_delimiter); // TrimEnd returns a new string; the result must be assigned
_header = ReadRow(_csvHeaderstring);
Close(); // close and reopen to reset the record position to the beginning.
_file = File.OpenText(filePath);
public static void MyMethod()
{
    textDataReader rdr = new textDataReader(file, '\t', false);
    using (SqlBulkCopy bulkcopy = new SqlBulkCopy(connectionString))
    {
        bulkcopy.DestinationTableName = "[dbo].[MyTable]";
        bulkcopy.ColumnMappings.Add("COL_0", "DestinationCol1");
        bulkcopy.ColumnMappings.Add("COL_1", "DestinationCol2");
        bulkcopy.ColumnMappings.Add("COL_3", "DestinationCol3");
        bulkcopy.ColumnMappings.Add("COL_4", "DestinationCol4");
        bulkcopy.ColumnMappings.Add("COL_5", "DestinationCol5");
        bulkcopy.ColumnMappings.Add("COL_6", "DestinationCol6");
        bulkcopy.WriteToServer(rdr);
        bulkcopy.Close();
    }
}
However, when trying to add a SqlBulkCopy column mapping to skip the unnecessary column, I get the error:
the given ColumnMapping does not match up with any column in the source or destination.
I have also tried to map this way, to no avail.
Will this implementation of IDataReader work for SqlBulkCopy? I tried implementing an 'ignore column' method, but that didn't work.
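One thing worth checking (an assumption about your textDataReader, since its code isn't shown): SqlBulkCopy resolves source column names in ColumnMappings by calling the reader's GetOrdinal/GetName, so a custom IDataReader must implement them consistently with the generated "COL_n" headers. A minimal sketch:
// Hypothetical fragments of the custom IDataReader; _header holds "COL_0", "COL_1", ...
public string GetName(int i)
{
    return _header[i];
}

public int GetOrdinal(string name)
{
    for (int i = 0; i < _header.Length; i++)
        if (string.Equals(_header[i], name, StringComparison.OrdinalIgnoreCase))
            return i;
    throw new IndexOutOfRangeException(name); // unmatched mappings surface as the error above
}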

SqlDataReader: change column names when exporting to CSV

I have a query that gets report data via a SqlDataReader and sends the SqlDataReader to a method that exports the content to a .CSV file; however, the column names show up in the .CSV file the way they appear in the database, which is not ideal.
I do not want to alter the query itself (changing the names to have spaces), because this query is called in another location where it maps to an object, and spaces would not work there. I would prefer not to create a duplicate query, because maintenance could be problematic. I also do not want to modify the method that writes out the .CSV, as it is used globally.
Can I modify the column names after I fill the data reader but before I send it to the .CSV method? If so, how?
If I can't do it this way, could I do it if it was a DataTable instead?
Here is the general flow:
public static SqlDataReader RunMasterCSV(Search search)
{
    SqlDataReader reader = null;
    using (Network network = new Network())
    {
        using (SqlCommand cmd = new SqlCommand("dbo.MasterReport"))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            //Parameters here...
            network.FillSqlReader(cmd, ref reader);
            // <-- Ideally would like to find a solution here -->
            return reader;
        }
    }
}
public FileInfo CSVFileWriter(SqlDataReader reader)
{
    DeleteOldFolders();
    FileInfo file = null;
    if (reader != null)
    {
        using (reader)
        {
            var WriteDirectory = GetExcelOutputDirectory();
            double folderToSaveInto = Math.Ceiling((double)DateTime.Now.Hour / Folder_Age_Limit.TotalHours);
            string uploadFolder = GetExcelOutputDirectory() + "\\" + DateTime.Now.ToString("ddMMyyyy") + "_" + folderToSaveInto.ToString();
            //Add directory for today if one does not exist
            if (!Directory.Exists(uploadFolder))
                Directory.CreateDirectory(uploadFolder);
            //Generate random GUID fileName
            file = new FileInfo(uploadFolder + "\\" + Guid.NewGuid().ToString() + ".csv");
            if (file.Exists)
                file.Delete();
            using (file.Create()) { /* kill the file stream immediately */ }
            StringBuilder sb = new StringBuilder();
            if (reader.Read())
            {
                //write the column names
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    AppendValue(sb, reader.GetName(i), (i == reader.FieldCount - 1));
                }
                //write the first row (already consumed by reader.Read() above)
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    AppendValue(sb, reader[i] == DBNull.Value ? "" : reader[i].ToString(), (i == reader.FieldCount - 1));
                }
                int rowcounter = 1;
                while (reader.Read())
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        AppendValue(sb, reader[i] == DBNull.Value ? "" : reader[i].ToString(), (i == reader.FieldCount - 1));
                    }
                    rowcounter++;
                    if (rowcounter == MaxRowChunk)
                    {
                        using (var sw = file.AppendText())
                        {
                            sw.Write(sb.ToString());
                        }
                        sb = new StringBuilder();
                        rowcounter = 0;
                    }
                }
                if (sb.Length > 0)
                {
                    //write the last bit
                    using (var sw = file.AppendText())
                    {
                        sw.Write(sb.ToString());
                    }
                    sb = new StringBuilder();
                }
            }
        }
    }
    return file;
}
I would try a refactoring of your CSVFileWriter.
First you should add a delegate declaration:
public delegate string onColumnRename(string columnName);
Then create an overload of your CSVFileWriter where you pass the delegate together with the reader
public FileInfo CSVFileWriter(SqlDataReader reader, onColumnRename renamer)
{
    // Move here all the code of the old CSVFileWriter
    .....
}
Move the code of the previous CSVFileWriter to the new method and, from the old one, call the new one:
public FileInfo CSVFileWriter(SqlDataReader reader)
{
    // Pass null for the delegate to the new version of CSVFileWriter....
    return this.CSVFileWriter(reader, null);
}
This will keep existing clients of the old method happy. For them nothing has changed.....
Inside the new version of CSVFileWriter, change the code that prepares the column names:
for (int i = 0; i < reader.FieldCount; i++)
{
    string colName = (renamer != null) ? renamer(reader.GetName(i)) : reader.GetName(i);
    AppendValue(sb, colName, (i == reader.FieldCount - 1));
}
Now it is just a matter of creating the renamer function that translates your column names:
private string myColumnRenamer(string columnName)
{
    if (columnName == "yourNameWithoutSpaces")
        return "your Name with Spaces";
    else
        return columnName; // unknown names pass through unchanged
}
This could be optimized with a static dictionary to remove the chain of ifs, as in the sketch below.
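A minimal sketch of the dictionary approach (the mappings shown are illustrative placeholders, not from the original post):
// Hypothetical rename table; fill in your real column names.
private static readonly Dictionary<string, string> columnRenames = new Dictionary<string, string>
{
    { "FirstName", "First Name" },
    { "LastName",  "Last Name" }
};

private string myColumnRenamer(string columnName)
{
    string renamed;
    // Unknown names pass through unchanged.
    return columnRenames.TryGetValue(columnName, out renamed) ? renamed : columnName;
}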
At this point you could call the new CSVFileWriter, passing your function:
FileInfo fi = CSVFileWriter(reader, myColumnRenamer);

How to create a generic text file parser for any kind of text file?

I want to create a generic text file parser in C# for any kind of text file. I have 4 applications, and all 4 get their input data in txt format, but the text files are not homogeneous in nature. I have tried fixed-width delimiting.
private static DataTable FixedWidthDiliminatedTxtRead()
{
    string[] fields;
    StringBuilder sb = new StringBuilder();
    List<StringBuilder> lst = new List<StringBuilder>();
    DataTable dtable = new DataTable();
    ArrayList aList;
    using (TextFieldParser tfp = new TextFieldParser(testOCC))
    {
        tfp.TextFieldType = FieldType.FixedWidth;
        tfp.SetFieldWidths(new int[12] { 2, 25, 8, 12, 13, 5, 6, 3, 10, 11, 10, 24 });
        for (int col = 1; col < 13; ++col)
            dtable.Columns.Add("COL" + col);
        while (!tfp.EndOfData)
        {
            fields = tfp.ReadFields();
            aList = new ArrayList();
            for (int i = 0; i < fields.Length; ++i)
                aList.Add(fields[i] as string);
            if (dtable.Columns.Count == aList.Count)
                dtable.Rows.Add(aList.ToArray());
        }
    }
    return dtable;
}
But I feel this is very rigid and really varies from application to application; I want to make it configurable. Is there a better way than hard-coding the widths?
tfp.SetFieldWidths(new int[12] { 2, 25, 8, 12, 13, 5, 6, 3, 10, 11, 10, 24 });
File nature:
It's a report kind of file.
The positions of the columns are very similar.
The row data differs from file to file.
I found this as a reference:
http://www.codeproject.com/Articles/11698/A-Portable-and-Efficient-Generic-Parser-for-Flat-F
Any other thoughts?
If the only thing different is the field widths, you could just try sending the field widths in as a parameter:
private static DataTable FixedWidthDiliminatedTxtRead(int[] fieldWidthArray)
{
    string[] fields;
    StringBuilder sb = new StringBuilder();
    List<StringBuilder> lst = new List<StringBuilder>();
    DataTable dtable = new DataTable();
    ArrayList aList;
    using (TextFieldParser tfp = new TextFieldParser(testOCC))
    {
        tfp.TextFieldType = FieldType.FixedWidth;
        tfp.SetFieldWidths(fieldWidthArray);
        for (int col = 1; col <= fieldWidthArray.Length; ++col) // one column per configured width
            dtable.Columns.Add("COL" + col);
        while (!tfp.EndOfData)
        {
            fields = tfp.ReadFields();
            aList = new ArrayList();
            for (int i = 0; i < fields.Length; ++i)
                aList.Add(fields[i] as string);
            if (dtable.Columns.Count == aList.Count)
                dtable.Rows.Add(aList.ToArray());
        }
    }
    return dtable;
}
If you will have more logic to grab the data, you might want to consider defining an interface or abstract class for a GenericTextParser and creating concrete implementations for each file format, along the lines of the sketch below.
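A minimal sketch of that abstraction (the names are illustrative, not from the original post):
public interface IGenericTextParser
{
    DataTable Parse(string filePath);
}

// One concrete implementation per file format; this one wraps the fixed-width logic above.
public class FixedWidthParser : IGenericTextParser
{
    private readonly int[] _fieldWidths;

    public FixedWidthParser(int[] fieldWidths)
    {
        _fieldWidths = fieldWidths;
    }

    public DataTable Parse(string filePath)
    {
        // delegate to a fixed-width reader like FixedWidthDiliminatedTxtRead above,
        // parameterized by the configured field widths
        return FixedWidthDiliminatedTxtRead(_fieldWidths);
    }
}
A small factory (keyed by file type or format, perhaps read from configuration) can then pick the right implementation at runtime.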
Hey, I made one of these last week.
I did not write it with the intention of other people using it, so I apologize in advance if it's not documented well, but I cleaned it up for you. ALSO, I grabbed several segments of code from Stack Overflow, so I am not the original author of several pieces of this.
The places you need to edit are path and pathOut and the text separators:
char[] delimiters = new char[]
It searches for part of a word and then grabs the whole word. I used a C# console application for this.
Here you go:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;

namespace UniqueListofStringFinder
{
    class Program
    {
        static void Main(string[] args)
        {
            string path = @"c:\Your Path\in.txt";
            string pathOut = @"c:\Your Path\out.txt";
            string data = "!";
            Console.WriteLine("Current Path In is set to: " + path);
            Console.WriteLine("Current Path Out is set to: " + pathOut);
            Console.WriteLine(Environment.NewLine + Environment.NewLine + "Input String to Search For:");
            string input = Console.ReadLine();
            // Create the file with some sample text if it does not already exist.
            if (!File.Exists(path))
            {
                using (FileStream fs = File.Create(path))
                {
                    Byte[] info = new UTF8Encoding(true).GetBytes("This is some text in the file.");
                    // Add some information to the file.
                    fs.Write(info, 0, info.Length);
                }
            }
            List<string> Spec = new List<string>();
            using (StreamReader file = new StreamReader(path))
            {
                while (!file.EndOfStream)
                {
                    string s = file.ReadLine();
                    if (s.Contains(input))
                    {
                        char[] delimiters = new char[] { '\r', '\n', '\t', ')', '(', ',', '=', '"', '\'', '<', '>', '$', ' ', '#', '[', ']' };
                        string[] parts = s.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
                        foreach (string word in parts)
                        {
                            if (word.IndexOf(input) == 0) // the word starts with the search string
                            {
                                Spec.Add(word);
                            }
                        }
                    }
                }
            }
            Spec.Sort();
            Console.WriteLine();
            StringBuilder builder = new StringBuilder();
            foreach (string s in Spec) // Loop through all strings
            {
                builder.Append(s).Append(Environment.NewLine); // Append string to StringBuilder
            }
            string result = builder.ToString(); // Get string from StringBuilder
            Program a = new Program();
            data = a.uniqueness(result);
            int i = a.writeFile(data, pathOut);
        }

        public string uniqueness(string rawData)
        {
            if (rawData == "")
            {
                return "Empty Data Set";
            }
            List<string> holdData = new List<string>();
            bool testBool = false;
            using (StringReader reader = new StringReader(rawData))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // keep the line only if it has not been seen before
                    foreach (string s in holdData)
                    {
                        if (line == s)
                        {
                            testBool = true;
                        }
                    }
                    if (testBool == false)
                    {
                        holdData.Add(line);
                    }
                    testBool = false;
                }
            }
            string dataOut = "";
            foreach (string s in holdData)
            {
                dataOut += s + "\r\n";
            }
            return dataOut;
        }

        public int writeFile(string dataOut, string pathOut)
        {
            try
            {
                System.IO.StreamWriter file = new System.IO.StreamWriter(pathOut);
                file.WriteLine(dataOut);
                file.Close();
            }
            catch (Exception ex)
            {
                dataOut += ex.ToString();
                return 1;
            }
            return 0;
        }
    }
}
private static DataTable FixedWidthTxtRead(string filename, int[] fieldWidths)
{
    string[] fields;
    DataTable dtable = new DataTable();
    ArrayList aList;
    using (TextFieldParser tfp = new TextFieldParser(filename))
    {
        tfp.TextFieldType = FieldType.FixedWidth;
        tfp.SetFieldWidths(fieldWidths);
        for (int col = 1; col <= fieldWidths.Length; ++col) // Length, not length
            dtable.Columns.Add("COL" + col);
        while (!tfp.EndOfData)
        {
            fields = tfp.ReadFields();
            aList = new ArrayList();
            for (int i = 0; i < fields.Length; ++i)
                aList.Add(fields[i] as string);
            if (dtable.Columns.Count == aList.Count)
                dtable.Rows.Add(aList.ToArray());
        }
    }
    return dtable;
}
Here's what I did:
I built a factory for the type of processor needed (based on file type/format), which abstracted the file reader.
I then built a collection object that contained a set of triggers for each field I was interested in (it also contained the property name for which each field is destined). This settings collection is loaded via an XML configuration file, so all I need to change are the settings, and the base parsing process reacts to however the settings are configured. Finally, I built a reflection wrapper wherein, once a field is parsed, the corresponding property on the model object is set.
As the file flowed through, the triggers for each setting evaluated each line's value. When a trigger found what it was set to find (via pattern matching, or column length values), it fired an event that bubbled up and set a property on the model object. A rough sketch of the concept is below; it needs some work for efficiency's sake, but I like the concept.
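A rough sketch of the trigger idea (all names, and the regex-based matching, are illustrative rather than the original implementation):
using System;
using System.Reflection;
using System.Text.RegularExpressions;

public class FieldTrigger
{
    public string PropertyName { get; set; }  // destination property on the model
    public Regex Pattern { get; set; }        // loaded from the XML settings file

    public event Action<string, string> Fired; // (property name, captured value)

    public void Evaluate(string line)
    {
        Match m = Pattern.Match(line);
        if (m.Success && Fired != null)
            Fired(PropertyName, m.Groups[1].Value); // assumes one capture group per trigger
    }
}

public static class ModelSetter
{
    // Reflection wrapper: when a trigger fires, set the matching property on the model.
    public static void SetProperty(object model, string propertyName, string value)
    {
        PropertyInfo pi = model.GetType().GetProperty(propertyName);
        if (pi != null)
            pi.SetValue(model, Convert.ChangeType(value, pi.PropertyType), null);
    }
}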

How to efficiently write to file from SqlDataReader in C#?

I have a remote SQL connection in C# that needs to execute a query and save its results to the user's local hard disk. There is a fairly large amount of data this thing can return, so I need to think of an efficient way of storing it. I've read before that first putting the whole result into memory and then writing it is not a good idea, so if someone could help, that would be great!
I am currently storing the SQL result data in a DataTable, although I am thinking it could be better to do something like while (myReader.Read()) { ... }.
Below is the code that gets the results:
DataTable t = new DataTable();
string myQuery = QueryLoader.ReadQueryFromFileWithBdateEdate(@"Resources\qrs\qryssysblo.q", newdate, newdate);
using (SqlDataAdapter a = new SqlDataAdapter(myQuery, sqlconn.myConnection))
{
    a.Fill(t);
}
var result = string.Empty;
for (int i = 0; i < t.Rows.Count; i++)
{
    for (int j = 0; j < t.Columns.Count; j++)
    {
        result += t.Rows[i][j] + ",";
    }
    result += "\r\n";
}
So now I have this huge result string, and I have the DataTable. Surely there is a much better way of doing it?
Thanks.
You are on the right track yourself. Use a loop with while (myReader.Read()) { ... } and write each record to the text file inside the loop. The .NET Framework and the operating system will take care of flushing the buffers to disk in an efficient way.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = QueryLoader.ReadQueryFromFileWithBdateEdate(
        @"Resources\qrs\qryssysblo.q", newdate, newdate);
    using (SqlDataReader reader = cmd.ExecuteReader())
    using (StreamWriter writer = new StreamWriter(@"c:\temp\file.txt")) // verbatim string, or the \t would be a tab
    {
        while (reader.Read())
        {
            // Using Name and Phone as example columns.
            writer.WriteLine("Name: {0}, Phone : {1}",
                reader["Name"], reader["Phone"]);
        }
    }
}
I came up with this; it's a better CSV writer than the other answers:
public static class DataReaderExtension
{
    public static void ToCsv(this IDataReader dataReader, string fileName, bool includeHeaderAsFirstRow)
    {
        const string Separator = ",";
        StreamWriter streamWriter = new StreamWriter(fileName);
        StringBuilder sb = null;
        if (includeHeaderAsFirstRow)
        {
            sb = new StringBuilder();
            for (int index = 0; index < dataReader.FieldCount; index++)
            {
                if (dataReader.GetName(index) != null)
                    sb.Append(dataReader.GetName(index));
                if (index < dataReader.FieldCount - 1)
                    sb.Append(Separator);
            }
            streamWriter.WriteLine(sb.ToString());
        }
        while (dataReader.Read())
        {
            sb = new StringBuilder();
            // this loop covers every column, including the last one
            for (int index = 0; index < dataReader.FieldCount; index++)
            {
                if (!dataReader.IsDBNull(index))
                {
                    string value = dataReader.GetValue(index).ToString();
                    if (dataReader.GetFieldType(index) == typeof(String))
                    {
                        // escape embedded quotes and wrap values containing the separator
                        if (value.IndexOf("\"") >= 0)
                            value = value.Replace("\"", "\"\"");
                        if (value.IndexOf(Separator) >= 0)
                            value = "\"" + value + "\"";
                    }
                    sb.Append(value);
                }
                if (index < dataReader.FieldCount - 1)
                    sb.Append(Separator);
            }
            streamWriter.WriteLine(sb.ToString());
        }
        dataReader.Close();
        streamWriter.Close();
    }
}
usage: mydataReader.ToCsv("myfile.csv", true)
Rob Sedgwick's answer is more like it, but it can be improved and simplified. This is how I did it:
string separator = ";";
string fieldDelimiter = "";
bool useHeaders = true;
bool first;   // declared here; missing in the original snippet
string line;  // declared here; missing in the original snippet
// "response" below is the HTTP response object being streamed to (e.g. context.Response)
string connectionString = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
using (SqlConnection conn = new SqlConnection(connectionString))
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        string query = @"SELECT whatever";
        cmd.CommandText = query;
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (!reader.Read())
            {
                return;
            }
            List<string> columnNames = GetColumnNames(reader);
            // Write headers if required
            if (useHeaders)
            {
                first = true;
                foreach (string columnName in columnNames)
                {
                    response.Write(first ? string.Empty : separator);
                    line = string.Format("{0}{1}{2}", fieldDelimiter, columnName, fieldDelimiter);
                    response.Write(line);
                    first = false;
                }
                response.Write("\n");
            }
            // Write all records
            do
            {
                first = true;
                foreach (string columnName in columnNames)
                {
                    response.Write(first ? string.Empty : separator);
                    string value = reader[columnName] == null ? string.Empty : reader[columnName].ToString();
                    line = string.Format("{0}{1}{2}", fieldDelimiter, value, fieldDelimiter);
                    response.Write(line);
                    first = false;
                }
                response.Write("\n");
            }
            while (reader.Read());
        }
    }
}
And you need to have a function GetColumnNames:
List<string> GetColumnNames(IDataReader reader)
{
    List<string> columnNames = new List<string>();
    for (int i = 0; i < reader.FieldCount; i++)
    {
        columnNames.Add(reader.GetName(i));
    }
    return columnNames;
}
I agree that your best bet here would be to use a SqlDataReader. Something like this:
StreamWriter YourWriter = new StreamWriter(@"c:\testfile.txt");
SqlCommand YourCommand = new SqlCommand();
SqlConnection YourConnection = new SqlConnection(YourConnectionString);
YourCommand.Connection = YourConnection;
YourCommand.CommandText = myQuery;
YourConnection.Open();
using (YourConnection)
{
    using (SqlDataReader sdr = YourCommand.ExecuteReader())
    using (YourWriter)
    {
        while (sdr.Read())
            YourWriter.WriteLine(sdr[0].ToString() + sdr[1].ToString() + ",");
    }
}
Mind you, in the while loop, you can write that line to the text file in any format you see fit with the column data from the SqlDataReader.
Keeping your original approach, here is a quick win: instead of using a string as a temporary buffer, use a StringBuilder. That allows you to use the .Append(string) method for concatenations instead of the += operator.
The += operator is especially inefficient: each use allocates a brand-new string, so if you place it in a loop that repeats (potentially) millions of times, performance suffers.
The .Append(string) method modifies the builder in place instead of creating a new object, so it's faster; a sketch of the change is below.
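A minimal sketch of the question's loop rewritten with StringBuilder (variable names taken from the question's code):
StringBuilder sb = new StringBuilder();
for (int i = 0; i < t.Rows.Count; i++)
{
    for (int j = 0; j < t.Columns.Count; j++)
    {
        sb.Append(t.Rows[i][j]).Append(","); // Append mutates the builder in place
    }
    sb.Append("\r\n");
}
string result = sb.ToString();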
Using the response object without calling Response.Close() causes, at least in some instances, the HTML of the page writing out the data to be appended to the file. But if you use Response.Close(), the connection can be closed prematurely and cause an error while producing the file.
Using HttpApplication.CompleteRequest() is the recommended alternative; however, this appears to always cause the HTML to be written to the end of the file.
I have tried the stream in conjunction with the response object and have had success in the development environment; I have not tried it in production yet.
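A sketch of the streaming pattern being described (assuming an ASP.NET handler where context is the current HttpContext and sb holds the CSV text):
// Stream the CSV to the client, then end the request without Response.Close().
context.Response.Clear();
context.Response.ContentType = "text/csv";
context.Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
context.Response.Write(sb.ToString());
context.Response.Flush();
context.ApplicationInstance.CompleteRequest(); // avoids the premature abort Response.Close() can cause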
I used .CSV to export data from the database via a DataReader. In my project, I read the DataReader and create the .CSV file manually: in a loop, I read the DataReader and, for every row, append each cell value to the result string, using "," to separate columns and "\n" to separate rows. Finally, I save the result string as result.csv.
I suggest the high-performance extension method above; I tested it, and it quickly exported 600,000 rows as .CSV.
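A minimal sketch of the manual loop described above (no quoting or escaping, so it only suits data without commas or newlines):
StringBuilder sb = new StringBuilder();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        for (int i = 0; i < reader.FieldCount; i++)
        {
            if (i > 0) sb.Append(",");  // separate columns
            sb.Append(reader.GetValue(i));
        }
        sb.Append("\n");                // separate rows
    }
}
File.WriteAllText("result.csv", sb.ToString());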
I use:
private void SaveData(string path)
{
    DataTable tblResult = new DataTable();
    using (SqlCommand cm = new SqlCommand("select something", objConnect))
    {
        tblResult.Load(cm.ExecuteReader()); // ExecuteReader, not "ExecuteLoad"
    }
    if (tblResult != null)
    {
        using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            BinaryFormatter bin = new BinaryFormatter(); // System.Runtime.Serialization.Formatters.Binary
            bin.Serialize(fs, tblResult);
        }
    }
}
Easy to use, and easy to load back, with:
private DataTable LoadData(string path)
{
    DataTable t = new DataTable();
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        BinaryFormatter bin = new BinaryFormatter();
        t = (DataTable)bin.Deserialize(fs);
    }
    return t;
}
You can also use this method to save a DataSet.
