Populate CellSet using Objects - C#

I have been working with MDX and CellSets recently. I was given an MDX query that returns data in a three-dimensional format, and I am able to get the data using a CellSet in .NET code. I then convert the CellSet to a DataTable to make it much easier to manipulate and display in the application (similar to the code from: http://asmdx.blogspot.in/2008/05/code-utility-code-for-converting.html ).
I was wondering why I need to use a DataTable, which eats up a considerable amount of memory. I would like to replace the DataTable with objects, i.e., convert a CellSet to a collection of user-defined objects. Is that possible? Any help please?

You could get the MDX query results in XML format using the ExecuteXmlReader method of ADOMD.NET: your memory problems would be solved, and then you could consume the resulting XML in your application with (relative) ease (you could, for instance, use LINQ to XML to transform the XML into business objects).
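For example, here is a minimal sketch, assuming the Microsoft.AnalysisServices.AdomdClient library; the connection string, the MDX query, and the CellValue class are placeholders, and the element/attribute names come from the XMLA mddataset schema that ADOMD.NET returns:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.AnalysisServices.AdomdClient;

// Hypothetical business object holding one cell of the result.
public class CellValue
{
    public int Ordinal { get; set; }
    public string Value { get; set; }
}

class Program
{
    static void Main()
    {
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=MyCube"))
        {
            conn.Open();
            var cmd = new AdomdCommand("SELECT ... ON COLUMNS, ... ON ROWS FROM [MyCube]", conn);

            // ExecuteXmlReader streams the XMLA response instead of
            // materializing a CellSet (or a DataTable) in memory.
            XElement root;
            using (var reader = cmd.ExecuteXmlReader())
            {
                root = XElement.Load(reader);
            }

            // The multidimensional result uses the mddataset namespace;
            // each <Cell> carries a CellOrdinal attribute and a <Value>.
            XNamespace ns = "urn:schemas-microsoft-com:xml-analysis:mddataset";
            List<CellValue> cells = root.Descendants(ns + "Cell")
                .Select(c => new CellValue
                {
                    Ordinal = (int)c.Attribute("CellOrdinal"),
                    Value = (string)c.Element(ns + "Value")
                })
                .ToList();

            Console.WriteLine("Read {0} cells", cells.Count);
        }
    }
}
From there, mapping the cells (plus the axis tuples, if you need them) into your own objects is plain LINQ to XML.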

Related

Write SqlDataReader to XML file

I'm testing stored SQL procedures in C#. The procs return the datatype SqlDataReader and I want to write the whole thing to an XML file to compare later. Nothing I've read has provided a very simple solution. Is there a way to do this without looping through all the data in the stream? I don't know much about SQL, so I'm not sure exactly what I'm working with here.
The XML produced by DataSet, DataTable and its ilk leaves something to be desired from the point of view of humans reading it. I'd roll my own.
A SqlDataReader (and it doesn't matter whether it's returning data from a stored procedure or a plain-text SQL query) returns 0 to many result sets. Each such result set has
- a schema that describes the columns being returned in each row, and
- the result set itself, consisting of zero or more rows.
Each row is essentially an array of 1 or more columns, with each cell containing the value for the column with that ordinal position in the row.
Each such column has certain properties, some from the schema, such as name, ordinal position, type, nullability, etc.
Finally, the column value within a row is an object of the type corresponding to the SQL Server data type of the column in the result... or DBNull.Value if the column is null.
The basic loop is pretty straightforward (lots of examples in MSDN on how to do it.) And while it might be a bit of work to write it in the first place, once written, it's usable across the board, so it's a one-time hit. I would suggest doing something like this:
Determine what you want the XML to look like. Assuming your intent is to be able to diff the results from time to time, I'd probably go with something that looks like this (since I like to keep things terse and avoid redundancy):
<stored-procedure-results>
  <name> dbo.some-stored-procedure-name </name>
  <result-sets>
    <result-set>
      <column-schema column-count="N">
        <column ordinal="0..N-1" name="column-name-or-null-if-column-is-unnamed-or-not-unique" data-type=".net-data-type" nullable="true|false" />
        ...
      </column-schema>
      <rows>
        <row>
          <column ordinal="0..N-1" value="..." />
          ...
        </row>
        ...
      </rows>
    </result-set>
    ...
  </result-sets>
</stored-procedure-results>
Build POCO model classes to contain the data. Attribute them with XML serialization attributes to get the markup you want. From the above XML sample, these classes won't be all that complex. You'll probably want to represent column values as strings rather than native data types.
Build a mapper that will run the data reader and construct your model.
Then it's a couple of dozen lines of code to construct the XML serializer of choice and spit out nicely formatted XML.
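A minimal sketch of what those model classes and the mapper might look like (slightly simplified; the column-count attribute is omitted, and all class and property names are illustrative):
using System.Collections.Generic;
using System.Data;
using System.Xml.Serialization;

[XmlRoot("stored-procedure-results")]
public class StoredProcedureResults
{
    [XmlElement("name")]
    public string Name { get; set; }

    [XmlArray("result-sets"), XmlArrayItem("result-set")]
    public List<ResultSet> ResultSets { get; set; } = new List<ResultSet>();
}

public class ResultSet
{
    [XmlArray("column-schema"), XmlArrayItem("column")]
    public List<ColumnSchema> Columns { get; set; } = new List<ColumnSchema>();

    [XmlArray("rows"), XmlArrayItem("row")]
    public List<Row> Rows { get; set; } = new List<Row>();
}

public class ColumnSchema
{
    [XmlAttribute("ordinal")] public int Ordinal { get; set; }
    [XmlAttribute("name")] public string Name { get; set; }
    [XmlAttribute("data-type")] public string DataType { get; set; }
}

public class Row
{
    [XmlElement("column")]
    public List<Cell> Cells { get; set; } = new List<Cell>();
}

public class Cell
{
    [XmlAttribute("ordinal")] public int Ordinal { get; set; }

    // A null string serializes as a missing value attribute, which keeps
    // null distinct from empty string in the output (see Notes below).
    [XmlAttribute("value")] public string Value { get; set; }
}

public static class DataReaderMapper
{
    // Walks every result set in the reader and builds the model, storing
    // values as strings so the XML diffs cleanly. (Nullability would come
    // from reader.GetSchemaTable(); it is omitted in this sketch.)
    public static StoredProcedureResults Map(string procName, IDataReader reader)
    {
        var results = new StoredProcedureResults { Name = procName };
        do
        {
            var rs = new ResultSet();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                rs.Columns.Add(new ColumnSchema
                {
                    Ordinal = i,
                    Name = reader.GetName(i),
                    DataType = reader.GetFieldType(i).FullName
                });
            }
            while (reader.Read())
            {
                var row = new Row();
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    row.Cells.Add(new Cell
                    {
                        Ordinal = i,
                        Value = reader.IsDBNull(i) ? null : reader.GetValue(i).ToString()
                    });
                }
                rs.Rows.Add(row);
            }
            results.ResultSets.Add(rs);
        } while (reader.NextResult());
        return results;
    }
}
Serializing the model is then just new XmlSerializer(typeof(StoredProcedureResults)) plus a Serialize call to a file or stream.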
Notes:
For QA purposes, you might want to capture the parameters, if any, that were passed to the query, along with the query itself and, possibly, the date/time of the run.
There are a few oddball cases where the result-set model I describe can get... wonky. For example, a SELECT statement using COMPUTE BY has to be handled somewhat differently. In my experience, it's pretty safe to ignore that sort of edge case, since you're unlikely to encounter queries like that in the wild.
Think about how you represent null in the XML: null strings are not the same as empty strings.
Try this:
using System;
using System.Data;
using System.Data.SqlClient;

namespace ConsoleApplication1
{
    class Program
    {
        const string FILENAME = @"C:\temp\test.xml";

        static void Main(string[] args)
        {
            string connstr = "Enter your connection string here";
            string SQL = "Enter your SQL here";

            // The adapter opens and closes the connection itself during Fill.
            SqlDataAdapter adapter = new SqlDataAdapter(SQL, connstr);

            // If the SQL takes parameters, add them with values, e.g.:
            // adapter.SelectCommand.Parameters.AddWithValue("@abc", "some value");

            DataSet ds = new DataSet();
            adapter.Fill(ds);

            // WriteSchema embeds the schema so the file can be read back later.
            ds.WriteXml(FILENAME, XmlWriteMode.WriteSchema);
        }
    }
}
I see the main issue as how to test complicated stored procedures before releases, not writing XML from a SqlDataAdapter, which can be very simple: row by row, column by column.
You have a test database which does not contain static data, and you store, somehow, different versions of the stored procedure.
A simple setup would be to run the (let's say 5) versions of the stored procedure you have against the same database content, store the XMLs in a folder, and compare them. I would use, for example, a different folder for each run, with a timestamp to distinguish between them. I would not spend too much effort on how the XMLs are written; to detect whether they differ, you could even end up using String.Compare(fileStream1.ReadToEnd(), fileStream2.ReadToEnd()). If the result is too large, then something more elaborate.
If there are differences between two XMLs, you can look at them with a text compare tool. ...For more complicated stored procedures with multiple joins, the most common difference will likely be the size of the XMLs / the number of rows returned, not the value of a field.
In production, the content of the database is not static, so doing this type of test there would not make sense.
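A minimal sketch of that brute-force comparison (the file paths are placeholders; a text diff tool takes over when the files differ):
using System;
using System.IO;

class CompareRuns
{
    static void Main()
    {
        // One timestamped folder per run, as suggested above.
        string a = File.ReadAllText(@"C:\runs\20240101-1200\proc_v1.xml");
        string b = File.ReadAllText(@"C:\runs\20240101-1200\proc_v2.xml");

        Console.WriteLine(string.Equals(a, b, StringComparison.Ordinal)
            ? "identical"
            : "different - inspect both files in a text compare tool");
    }
}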
When serializing a SqlDataReader using the built-in WriteXml methods of DataTable or DataSet, as described in the accepted answer, and the data contains geography data, the geography data is lost and can't be restored later.
For more details read Datatable with SqlGeography column can't be serialized to xml correctly with loss of Lat,Long and other elements
There is a workaround, provided by @dbc, that saves to XML using the same built-in WriteXml methods without loss of data. Try it online

Trying to get an UPSERT working on a set of data using dapper

I'm trying to get an upsert working on a collection of IDs (not the primary key - that's an identity int column) on a table using Dapper. This doesn't need to be a Dapper function; I'm just including that in case it helps.
I'm wondering if it's possible (either through straight SQL or using a dapper function) to run an upsert on a collection of IDs (specifically an IEnumerable of ints).
I really only need a simple example to get me started, so an example would be:
I have three objects of type Foo:
{ "ExternalID" : 1010101, "DescriptorString" : "I am a descriptive string", "OtherStuff" : "This is some other stuff" }
{ "ExternalID" : 1010122, "DescriptorString" : "I am a descriptive string123", "OtherStuff" : "This is some other stuff123" }
{ "ExternalID" : 1033333, "DescriptorString" : "I am a descriptive string555", "OtherStuff" : "This is some other stuff555" }
I have a table called Bar, with those same column names (where only 1033333 exists):
Table Bar
ID | ExternalID | DescriptorString             | OtherStuff
1  | 1033333    | I am a descriptive string555 | This is some other stuff555
Well, since you said that this didn't need to be dapper-based ;-), I will say that the fastest and cleanest way to get this data upserted is to use Table-Valued Parameters (TVPs) which were introduced in SQL Server 2008. You need to create a User-Defined Table Type (one time) to define the structure, and then you can use it in either ad hoc queries or pass to a stored procedure. But this way you don't need to export to a file just to import, nor do you need to convert it to XML just to convert it back to a table.
Rather than copy/paste a large code block, I have noted three links below where I have posted the code to do this (all here on S.O.). The first two links are the full code (SQL and C#) to accomplish this (the 2nd link being the most analogous to what you are trying to do). Each is a slight variation on the theme (which shows the flexibility of using TVPs). The third is another variation, but not the full code, as it just shows the differences from one of the first two in order to fit that particular situation.

But in all 3 cases, the data is streamed from the app into SQL Server. There is no creating of any additional collection or external file; you use what you currently have and only need to duplicate the values of a single row at a time to be sent over. And on the SQL Server side, it all comes through as a populated Table Variable. This is far more efficient than taking data you already have in memory, converting it to a file (takes time and disk space) or XML (takes cpu and memory) or a DataTable (for SqlBulkCopy; takes cpu and memory) or something else, only to rely on an external factor such as the filesystem (the files will need to be cleaned up, right?) or need to parse out of XML.
How can I insert 10 million records in the shortest time possible?
Pass Dictionary<string,int> to Stored Procedure T-SQL
Storing a Dictionary<int,string> or KeyValuePair in a database
Now, there are some issues with the MERGE command (see Use Caution with SQL Server's MERGE Statement) that might be a reason to avoid using it. So, I have posted the "upsert" code that I have been using for years to an answer on DBA.StackExchange:
How to avoid using Merge query when upserting multiple data using xml parameter?
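For a concrete picture, here is a minimal sketch of the TVP approach against the Foo/Bar example from the question. The dbo.FooType type name and the column sizes are assumptions, and the update-then-insert statement stands in for the MERGE-free upsert in the linked answer:
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

// One-time setup on the SQL Server side:
//   CREATE TYPE dbo.FooType AS TABLE
//   (ExternalID INT, DescriptorString NVARCHAR(200), OtherStuff NVARCHAR(200));

public class Foo
{
    public int ExternalID { get; set; }
    public string DescriptorString { get; set; }
    public string OtherStuff { get; set; }
}

public static class FooRepository
{
    // Streams each Foo as a SqlDataRecord; nothing is materialized into a
    // DataTable, file, or XML document along the way.
    static IEnumerable<SqlDataRecord> ToRecords(IEnumerable<Foo> foos)
    {
        var meta = new[]
        {
            new SqlMetaData("ExternalID", SqlDbType.Int),
            new SqlMetaData("DescriptorString", SqlDbType.NVarChar, 200),
            new SqlMetaData("OtherStuff", SqlDbType.NVarChar, 200)
        };
        foreach (var foo in foos)
        {
            var record = new SqlDataRecord(meta);
            record.SetInt32(0, foo.ExternalID);
            record.SetString(1, foo.DescriptorString);
            record.SetString(2, foo.OtherStuff);
            yield return record;
        }
    }

    // Update existing rows, then insert the missing ones. Note that a TVP
    // parameter must be passed at least one record.
    public static void Upsert(string connStr, IEnumerable<Foo> foos)
    {
        const string sql = @"
UPDATE b
SET    b.DescriptorString = t.DescriptorString,
       b.OtherStuff       = t.OtherStuff
FROM   dbo.Bar b
JOIN   @foos t ON t.ExternalID = b.ExternalID;

INSERT INTO dbo.Bar (ExternalID, DescriptorString, OtherStuff)
SELECT t.ExternalID, t.DescriptorString, t.OtherStuff
FROM   @foos t
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Bar b WHERE b.ExternalID = t.ExternalID);";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            var p = cmd.Parameters.AddWithValue("@foos", ToRecords(foos));
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.FooType";
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}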

Create object at runtime based on SqlQuery executed

The overall objective is to get a Json representation of the query results of the SqlQuery executed. This Json will be used to create visualizations/reports on the browser using js based charting tools.
Now, controls like GridView are able to read the column names as well as the data and give us an HTML representation of the data. So I think it should be possible to write code that can read from a SQL data reader and come up with a JSON representation.
I could not find anything in my searches which does what I want. How do I go about doing this? Any pointers?
You could use an SqlDataAdapter to fill a DataSet. This blog post describes a way of converting a DataTable or DataSet into its JSON representation.
You could use the Json.Net serializer. It supports serializing a Dictionary<string,object> to a JSON object.
Another, heavier option would be using NHibernate and serializing the resulting objects.
Here is another link to using the Json.NET serializer for DataSets:
If you scroll down to the comments on this page, you will see a much shorter solution using the Dictionary approach.
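If you prefer to stay close to the metal, here is a minimal sketch of the Dictionary approach with a plain SqlDataReader and Json.NET; the connection string and query are placeholders:
using System.Collections.Generic;
using System.Data.SqlClient;
using Newtonsoft.Json;

public static class QueryToJson
{
    public static string Run(string connStr, string sql)
    {
        var rows = new List<Dictionary<string, object>>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // One dictionary per row: column name -> value.
                    var row = new Dictionary<string, object>();
                    for (int i = 0; i < reader.FieldCount; i++)
                        row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                    rows.Add(row);
                }
            }
        }
        // A JSON array of objects, ready for js charting tools.
        return JsonConvert.SerializeObject(rows, Formatting.Indented);
    }
}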

Returned types from stored procedure execution

I have an SP I want to execute, saving the whole raw result aside (in a class field).
Later on I want to read the values of some columns for some rows from this result.
What return types are possible? Which one is the most suitable for my goal?
I know there are DataSet, DataReader, ResultSet. What else?
What is the main difference between them?
If you want to store the results and use them later (as you have written), you may use the heavy data sets, or fill lightweight lists of custom container types via the data reader.
Or, in case you want to consume the results immediately, go on with the data reader.
ResultSet is the current Java (JDBC) interface; the old VB6/ADO class was called Recordset, AFAIK.
The traditional way to get data is by using the classes in the System.Data.SqlClient namespace. You can use the DataReader, which is a read-only, forward-only type of cursor: fast and efficient when you just want to read a recordset. The DataReader is bindable, but you read it one record at a time and therefore don't have the option of going back, for instance. If the recordset is very big, the reader is also good because it stores just one record at a time in memory.
You can use the DataAdapter to get a DataSet, and then you have complete control of all the data within the DataSet class. It is heavier on the system but very powerful when you need to work with the data in your application. You can also use a DataSet if the query returns more than one recordset.
So it really depends on what you need to do with the data after getting it from the database. If you just need to read it into something else, use the DataReader; otherwise, the DataSet.
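A minimal sketch of the "lightweight list via data reader" option mentioned above; the stored procedure name and the Customer class are placeholders:
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerStore
{
    public static List<Customer> LoadAll(string connStr)
    {
        var result = new List<Customer>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.GetCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Forward-only: only one record sits in memory while reading,
                // and the list keeps the results around for later use.
                while (reader.Read())
                {
                    result.Add(new Customer
                    {
                        Id = reader.GetInt32(reader.GetOrdinal("Id")),
                        Name = reader.GetString(reader.GetOrdinal("Name"))
                    });
                }
            }
        }
        return result;
    }
}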

Which data object should I use

I have a query that returns only one row (always) and I want to convert this row to a class object (let's say Obi).
I have a feeling that using a DataTable for this kind of query is too much,
but I don't really know which other data object to use.
A DataReader?
Is there a way to execute a SQL command into a DataRow?
DataReader is the best choice here - DataAdapters and DataSets may be overkill for a single row, although, that said, if performance is not critical then keeping it simple isn't a bad thing. You don't need to go from DataReader -> DataRow -> your object; just read the values off the DataReader and you're done.
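For instance (a minimal sketch; the Obi class and query stand in for the asker's, and CommandBehavior.SingleRow hints to the provider that only one row is expected):
using System.Data;
using System.Data.SqlClient;

public class Obi
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ObiLoader
{
    public static Obi LoadSingle(string connStr, int id)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Obi WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            using (var reader = cmd.ExecuteReader(CommandBehavior.SingleRow))
            {
                if (!reader.Read()) return null;

                // Straight from reader to object - no DataRow in between.
                return new Obi
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}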
A datareader lets you query individual fields. If you want the row as a single object, I believe the DataTable/DataRowView family of objects is in fact the way to go.
You might seriously consider taking a look at LINQ to SQL or LINQ to Entities.
The appeal of these frameworks is that they provide automatic serialization of your database data into objects, abstract away many of the mundane details of connection management, and offer better compile-time support via strongly-typed properties which you can use without string keys or column ordinals.
When using LINQ, the difference between retrieving a single row vs. retrieving multiple rows often only involves appending .Single() or .First() to your query, as the sketch below shows.
At any rate, if you already use or are willing to learn one of these frameworks, you may see the bulk and difficulty of your data access code reduce substantially.
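A tiny illustration of that, using LINQ to SQL's attribute mapping; the Obi table and its columns are hypothetical:
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "dbo.Obi")]
public class Obi
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
}

public static class Example
{
    public static Obi LoadOne(string connStr, int id)
    {
        using (var db = new DataContext(connStr))
        {
            var obis = db.GetTable<Obi>();

            // Retrieving one row vs. many differs only in the final operator:
            // ToList() for many, Single()/FirstOrDefault() for exactly one.
            return obis.Single(o => o.Id == id);
        }
    }
}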
With respect to DataReader vs. DataSet/DataTable, it is correct that it takes more cycles to allocate and populate a data table; however, I highly doubt you will notice the difference unless you are making an extremely high volume of database calls.
In case it is helpful, here are documentation examples of data access using data readers and data sets.
DataReader
DataSet
