How to programmatically update the command attribute of an embedded SQL connection - c#

Using C# OpenXML - I am attempting to open an Excel file, bind to its connections.xml stream, and update the embedded SQL query. I am able to successfully replace individual character sequences within the connection/command node, but attempting to explicitly set the command attribute (i.e. node.Attributes["command"].Value = "select * from ....") results in a corrupted workbook.
xmlDoc.Load(wkb.WorkbookPart.ConnectionsPart.GetStream());
csNode = xmlDoc.SelectSingleNode("*/*/*[@connection]");
csNode.Attributes["command"].Value = Regex.Replace(csNode.Attributes["command"].Value, @"\(\[\w*\].\[\w*\].\[\w*\].\[\w*\].*\)", "(" + subQry + ")", RegexOptions.Multiline);
xmlDoc.Save(wkb.WorkbookPart.ConnectionsPart.GetStream());
wkb.Close();

Not sure if this is the only way to solve this issue, but I was able to correct it by deleting the original connections.xml stream and creating/attaching a new one with the correct value to the workbook.
//select connections node from loaded xml Excel
csNode = xmlDoc.SelectSingleNode("*/*/*[@connection]");
//store original node values
oldConnValue = csNode.Attributes["connection"].Value;
oldCommValue = csNode.Attributes["command"].Value;
//delete existing ConnectionsPart - to ensure that bleed-over data is not present
wkb.WorkbookPart.DeletePart(wkb.WorkbookPart.ConnectionsPart);
//create a replacement ConnectionsPart
wkb.WorkbookPart.AddNewPart<ConnectionsPart>();
csNode.Attributes["connection"].Value = oldConnValue; //reassign existing connection value
csNode.Attributes["command"].Value = baseQry; //assign new query
//save changes to stream
xmlDoc.Save(wkb.WorkbookPart.ConnectionsPart.GetStream());
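As an aside, a likely cause of the original corruption is that GetStream() opens the existing part stream without truncating it, so saving a shorter XML document leaves stale bytes from the old content at the end of the part. If that is what's happening, the delete-and-recreate workaround can be avoided; a minimal sketch, assuming the GetStream(FileMode) overload of DocumentFormat.OpenXml:
//opening with FileMode.Create truncates the part stream first, so no stale bytes remain
using (Stream s = wkb.WorkbookPart.ConnectionsPart.GetStream(FileMode.Create))
{
    xmlDoc.Save(s);
}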

Related

sqlite query not matching string

I have a list of about 350 music files that I store in an sqlite table. If new files get added they are added to the db. So I have a query to check if the path of the file is in the table and if not then insert it. However 3 files never match the query even though they are in the table! Therefore they are added over and over each time I start my app.
I am using the SQLite for Universal Windows Platform and SQLite.Net-PCL packages for the SQL side. File is my custom class that stores some strings and other file properties from each file.
//database creation on first app launch
using (var db = DbConnection)
{
var Trackdb = db.CreateTable<File>(); //create the table for the custom File class
}
//where I scan all files and add to db if not already there
var dbconn = DbConnection;
foreach (File file in FileList)
{
var existingfile = dbconn.Query<File>("select * from File where Path = '" + file.Path.Replace("'", "''") + "'").FirstOrDefault();
if (existingfile == null)
{
file.ID = dbconn.Insert(file);
Debug.WriteLine(file.Path + " was added to db");
}
}
The output every time is shown below. From what I see there might be 2 characters which could cause this, but why? "–" vs "-" and "ë" vs "e". Is sqlite not equipped to handle these characters, or is my query not robust enough? The insert statement works fine, leading me to believe it accepts these characters, as it displays in my app fine.
"C:\Data\Users\Public\Music\David Guetta & Avicii – Sunshine.mp3 was added to db"
"C:\Data\Users\Public\Music\Gotye - Somebody That I Used To Know (Tiësto Remix).mp3 was added to db"
"C:\Data\Users\Public\Music\Lana Del Rey – Summertime Sadness (Cedric Gervais Remix).mp3 was added to db"
edit: solved by chue x - use a bound parameter instead of concatenating the SQL string. With a ? parameter the value is passed to SQLite as-is, so the manual quote doubling is unnecessary (it would actually corrupt paths containing apostrophes), and characters like "–" and "ë" survive intact:
var existingfile = dbconn.Query<File>("SELECT * FROM File WHERE Path = ?", file.Path).FirstOrDefault();
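In context, the scan loop then becomes:
foreach (File file in FileList)
{
    //the bound parameter passes the path through untouched, including "–" and "ë"
    var existingfile = dbconn.Query<File>("SELECT * FROM File WHERE Path = ?", file.Path).FirstOrDefault();
    if (existingfile == null)
    {
        file.ID = dbconn.Insert(file);
        Debug.WriteLine(file.Path + " was added to db");
    }
}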

RFC_READ_TABLE passing "options" and "Fields" parameters (c#)

Need help. I'm trying to get sales data from SAP using RFC_READ_TABLE, but I don't know how to pass the OPTIONS and FIELDS parameters to SAP. Here is a sample of my app's code.
The connection is working, but after execution I get a "DATA_BUFFER_EXCEEDED" exception.
public void RFC_READ_TABLE()
{
try
{
ECCDestinationConfig cfg = new ECCDestinationConfig();
RfcDestinationManager.RegisterDestinationConfiguration(cfg);
RfcDestination dest = RfcDestinationManager.GetDestination("ABI_ERP");
RfcRepository repo = dest.Repository;
IRfcFunction fn = repo.CreateFunction("RFC_READ_TABLE");
fn.SetValue("QUERY_TABLE", "VBAP");
fn.GetTable("DATA");
fn.Invoke(dest);
var companyCodeList = fn.GetTable("VBAP");
var companyDataTable = companyCodeList.ToDataTable("VBAP");
dataGridView1.DataSource = companyDataTable;
}
catch (RfcBaseException x)
{
MessageBox.Show("Some problems in programe execution. Check entered data, and try again." +
"\n" +
"\n<SAP Remote Execution Error>" +
"\n" +
"\nAdditional Information on Error: " + x.Message, "Oops, Runtime Error");
}
}
RFC_READ_TABLE isn't the ideal function module to be using to read sales order data (it's really designed for quick-n'-dirty table reads where nothing else exists). I'd investigate the following RFC-enabled function modules:
BAPI_SALESORDER_GETLIST - get a list of sales documents
BAPISDORDER_GETDETAILEDLIST - read the details of a single sales document
If you look at function group 2032 or at Sales and Distribution -> Sales -> Sales Order in transaction BAPI you'll find others that may help.
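If you go the BAPI route, here is a minimal sketch reusing the NCo objects from the question (the customer number and sales organization values are placeholders; BAPI_SALESORDER_GETLIST requires both):
IRfcFunction bapi = repo.CreateFunction("BAPI_SALESORDER_GETLIST");
bapi.SetValue("CUSTOMER_NUMBER", "0000001000"); //placeholder customer number
bapi.SetValue("SALES_ORGANIZATION", "1000"); //placeholder sales org
bapi.Invoke(dest);
IRfcTable salesOrders = bapi.GetTable("SALES_ORDERS"); //one row per sales document item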
You should only request the columns you really need, just as after SELECT in SQL, e.g. sales document, item and material number:
IRfcTable fieldsTable = fn.GetTable("FIELDS");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "VBELN"); //Sales Document
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "POSNR"); // Sales Document Item
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "MATNR"); // Material number
If you don't do that, the invoke will fail with DATA_BUFFER_EXCEEDED, as you mentioned. This is because the DATA structure in the RFC_READ_TABLE function is of type TAB512, which means each row can contain only up to 512 characters. The combined length of all columns in the table you query (VBAP) is around 1,500 characters. You can check the length of each field in the table definition.
Then you will also want to filter the table, similarly to a WHERE clause in SQL.
IRfcTable options = fn.GetTable("OPTIONS");
options.Append();
options.SetValue("TEXT", "VBELN = '000000000' AND POSNR = '000000'");
options.Append();
options.SetValue("TEXT", " OR VBELN = '000000001' AND POSNR = '000001'");
Here you also need to be careful, because each option text can be at most 72 characters long. If you need multiple or longer WHERE clauses, just append new OPTIONS rows, the same way as the FIELDS rows above.
I've noticed that OR causes some performance issues on some tables.
After setting these two properties of your function you should be able to successfully invoke it.
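Once the call succeeds, the result rows come back as strings in the WA field of the DATA table. A sketch of reading them, assuming you also set RFC_READ_TABLE's DELIMITER import parameter so the requested fields can be split apart:
fn.SetValue("DELIMITER", "|");
fn.Invoke(dest);
IRfcTable data = fn.GetTable("DATA");
foreach (IRfcStructure row in data)
{
    //each row is a single WA string containing only the fields requested above
    string[] cols = row.GetString("WA").Split('|');
    string vbeln = cols[0].Trim(); //Sales Document
    string posnr = cols[1].Trim(); //Sales Document Item
    string matnr = cols[2].Trim(); //Material number
}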
Narrow down your query by using the OPTIONS and FIELDS table parameters of RFC_READ_TABLE.

Compare and get difference of 2 CSV files with C#

I'm reading a CSV file a few times a day. It's about 300MB, and each time I have to read through it, compare it with the existing data in the database, add new rows, hide old ones, and update existing ones. There is also a bunch of data that never gets touched.
I have access to all the files, both old and new, and I'd like to compare the new one with the previous one and just update what's changed in the file. I have no idea what to do and I'm using C# to do all my work. The thing that might be most problematic is that a row from the previous file might be at another location in the new file even if it hasn't been updated at all. I want to avoid that problem as well, if possible.
Any ideas would help.
Use one of the existing CSV parsers
Parse each row to a mapped class object
Override Equals and GetHashCode for your object (a sketch follows after these notes)
Keep a List<T> or HashSet<T> in memory; at the first step, initialize it with no contents.
On reading each line from the CSV file, check if it exists in your in-memory collection (List, HashSet).
If the object doesn't exist in your in-memory collection, add it to the collection and insert it in the database.
If the object exists in your in-memory collection, ignore it. The check is based on your Equals and GetHashCode implementation, and then it is as simple as if (inMemoryCollection.Contains(currentRowObject)).
I guess you have a Windows service reading CSV files periodically from a file location. You can repeat the above process every time you read a new CSV file. This way you will be able to maintain an in-memory collection of the previously inserted objects and ignore them, irrespective of their place in the CSV file.
If you have a primary key defined for your data then you can use a Dictionary<TKey, TValue>, where the key is the unique field. This will give you better comparison performance, and you can skip the Equals and GetHashCode implementation.
As a backup to this process, your DB writing routine/stored procedure should be defined so that it first checks whether the record already exists in the table; in that case it UPDATEs the row, otherwise it INSERTs a new record. This would be an UPSERT.
Remember, if you end up maintaining an in-memory collection, keep clearing it periodically, otherwise you could end up with an out-of-memory exception.
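A minimal sketch of the Equals/GetHashCode part, using a hypothetical Record class standing in for your mapped CSV row (the fields are placeholders):
using System.Collections.Generic;

public class Record
{
    public string Id { get; set; }
    public string Name { get; set; }

    //two rows are considered the same record if all mapped fields match
    public override bool Equals(object obj)
    {
        return obj is Record other && Id == other.Id && Name == other.Name;
    }

    public override int GetHashCode()
    {
        return (Id ?? "").GetHashCode() ^ (Name ?? "").GetHashCode();
    }
}

public static class CsvDedupe
{
    public static void Main()
    {
        var seen = new HashSet<Record>();
        var row = new Record { Id = "1", Name = "a" };
        //HashSet.Add returns false for duplicates, so rows already seen are
        //skipped no matter where they appear in the file
        if (seen.Add(row))
        {
            //new row: insert it into the database here
        }
    }
}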
Just curious, why do you have to compare the old file with the new file? Isn't the data from the old file in SQL Server already? (When you say database, you mean SQL Server, right? I'm assuming SQL Server because you use C#/.NET.)
My approach is simple:
Load new CSV file into a staging table
Use stored procs to insert, update, and set inactive records
public static void ProcessCSV(FileInfo file)
{
foreach (string line in ReturnLines(file))
{
//break the lines up and parse the values into parameters
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand command = conn.CreateCommand())
{
command.CommandType = CommandType.StoredProcedure;
command.CommandText = "[dbo].sp_InsertToStaging";
//split the line and parse the values you need from it
string[] fields = line.Split(',');
command.Parameters.Add("@id", SqlDbType.BigInt).Value = long.Parse(fields[0]);
command.Parameters.Add("@SomethingElse", SqlDbType.VarChar).Value = fields[1];
//execute
if (conn.State != ConnectionState.Open)
conn.Open();
try
{
command.ExecuteNonQuery();
}
catch (SqlException exc)
{
//throw or do something
}
}
}
}
public static IEnumerable<string> ReturnLines(FileInfo file)
{
using (FileStream stream = File.Open(file.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
using (StreamReader reader = new StreamReader(stream))
{
string line;
while ((line = reader.ReadLine()) != null)
{
yield return line;
}
}
}
Now you write stored procs to insert, update, and set inactive rows based on Ids. You'll know a row has been updated if Field_x(main_table) != Field_x(staging_table) for a particular Id, and so on.
Here's how you detect changes and updates between your main table and staging table.
/* SECTION: SET INACTIVE */
UPDATE main_table
SET IsActiveTag = 0
WHERE unique_identifier IN
(
SELECT a.unique_identifier
FROM main_table AS a INNER JOIN staging_table AS b
--inner join because you only want existing records
ON a.unique_identifier = b.unique_identifier
--detect any updates
WHERE a.field1 <> b.field1
OR a.field2 <> b.field2
OR a.field3 <> b.field3
--etc
)
/* SECTION: INSERT UPDATED AND NEW */
INSERT INTO main_table
SELECT b.*
FROM staging_table AS b
LEFT JOIN
(SELECT *
FROM main_table
--only get active records
WHERE IsActiveTag = 1) AS a
ON b.unique_identifier = a.unique_identifier
--select only records available in staging table
WHERE a.unique_identifier IS NULL
How big is the CSV file? If it's small, try the following:
string [] File1Lines = File.ReadAllLines(pathOfFileA);
string [] File2Lines = File.ReadAllLines(pathOfFileB);
List<string> NewLines = new List<string>();
for (int lineNum = 0; lineNum < File1Lines.Length; lineNum++)
{
    //note: this assumes both files have the same number of lines
    if (!String.IsNullOrEmpty(File1Lines[lineNum]) &&
        !String.IsNullOrEmpty(File2Lines[lineNum]))
    {
        if (String.Compare(File1Lines[lineNum], File2Lines[lineNum]) != 0)
            NewLines.Add(File2Lines[lineNum]);
    }
    else if (!String.IsNullOrEmpty(File1Lines[lineNum]))
    {
        //the line exists only in the old file - nothing to add
    }
    else
    {
        NewLines.Add(File2Lines[lineNum]);
    }
}
if (NewLines.Count > 0)
{
    File.WriteAllLines(newfilepath, NewLines);
}

How to create multiple DataTables with one sql query?

I'm creating an application that loads data from SQL Database once a day and saves it into a text file.
The main table is a "Transactions" table, which holds data about all transactions made on that day. One of the columns represents a middle-man call sign.
My program saves the data in a DataTable first and then with a StringBuilder I give it the proper form and finally save it into a text file with StreamWriter.
My question is: how, or at which stage of the process, can I distinguish one table entry from another? I want to create two files: one with transactions made by middle-man A and one with those made by middle-man B.
This is my code so far:
// Query for Data
row = new SqlDataAdapter("SELECT [MSISDN], [Amount], [Transaction_ID], POS.[Name], MNO.[Call Sign] FROM "
+ "[Transactions] join [POS] "
+ "on Transactions.POS_ID = POS.idPOS "
+ "join [MNO] on Transactions.MNO_ID = MNO.idMNO "
+ "where [Status] = '1'", con);
row.Fill(Row);
// Save Data in StringBuilder
for (int i = 0; i < Row.Rows.Count; i++)
{
sb.Append(Row.Rows[i].ItemArray[0].ToString()).Append(",");
double amount = Convert.ToDouble(Row.Rows[i].ItemArray[1].ToString());
sb.Append(Math.Round(amount, 2).ToString().Replace(",", ".")).Append(",");
sb.Append(Row.Rows[i].ItemArray[2].ToString()).Append(",");
sb.Append(Row.Rows[i].ItemArray[3].ToString()).Append(",");
sb.Append(Row.Rows[i].ItemArray[4].ToString()).Append(",").Append(Environment.NewLine);
}
// Create a file from StringBuilder
mydocpath = @"C:\Transactions\" + fileDate.ToString(format) + ".txt";
FileStream fsOverwrite = new FileStream(mydocpath, FileMode.Create);
using (StreamWriter outfile = new StreamWriter(fsOverwrite))
{
outfile.Write(sb.ToString()); //synchronous write; an un-awaited WriteAsync can race the StreamWriter's disposal
}
Hope I was clear enough. English isn't my strong side. As well as coding for what it seems...
One option.
Put all your data into a DataSet. And then do Xsl transformations against the ds.GetXml().
Here is kind of an example:
http://granadacoder.wordpress.com/2007/05/15/xml-to-xml-conversion/
But what I would do is eliminate the DataTable altogether. Use an IDataReader.
Loop over the data. Maybe write the original query with "ORDER BY Middle-Man-Identifier", and then when the middleManIdentifier changes, close the previous file and start a new one.
Something like the sketch below.
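A minimal sketch of that split-on-change loop, reusing the con, fileDate, and format variables from the question (con must already be open; the file-naming scheme is a placeholder):
using (SqlCommand cmd = new SqlCommand(
    "SELECT MNO.[Call Sign], [MSISDN], [Amount] FROM [Transactions] " +
    "join [POS] on Transactions.POS_ID = POS.idPOS " +
    "join [MNO] on Transactions.MNO_ID = MNO.idMNO " +
    "where [Status] = '1' ORDER BY MNO.[Call Sign]", con))
using (SqlDataReader reader = cmd.ExecuteReader())
{
    string currentSign = null;
    StreamWriter outfile = null;
    while (reader.Read())
    {
        string sign = reader.GetString(0);
        if (sign != currentSign)
        {
            //call sign changed: close the previous file and start one per middle-man
            if (outfile != null) outfile.Dispose();
            outfile = new StreamWriter(@"C:\Transactions\" + sign + "_" + fileDate.ToString(format) + ".txt");
            currentSign = sign;
        }
        double amount = Convert.ToDouble(reader["Amount"]);
        outfile.WriteLine(reader["MSISDN"] + "," + Math.Round(amount, 2).ToString().Replace(",", "."));
    }
    if (outfile != null) outfile.Dispose();
}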
You may be able to learn something from this demo:
http://granadacoder.wordpress.com/2009/01/27/bulk-insert-example-using-an-idatareader-to-strong-dataset-to-sql-server-xml/
Here is a couple of IDataReader helpers:
http://kalit-codesnippetsofnettechnology.blogspot.com/2009/05/write-textfile-from-sqldatareader.html
and
How to efficiently write to file from SQL datareader in c#?

copy blob data from one table to another in c#

Basically I have a service which looks at two tables - one resides on a remote server, the other locally. I am trying to write a program which will select any required files from the remote server and copy them locally. I can get this working for standard records, but how do I handle the blob in C#? I am just starting out with the language, so be gentle.
a snippet of what i have is below
public static void BroadcastCheck(String ip_addr)
{
OdbcConnection local = new OdbcConnection("DSN=local");
OdbcConnection cloud = new OdbcConnection("DSN=cloud");
local.Open();
cloud.Open();
OdbcCommand update1 = new OdbcCommand("UPDATE exchange set status = '1' where `status`='0' and inp_date=chg_date and LEFT(filename,12)='" + ip_addr + "' and type='UPDATE'", cloud);
update1.ExecuteNonQuery();
OdbcCommand broadcastSelect = new OdbcCommand("select * from exchange where inp_date=chg_date and LEFT(filename,12)='" + ip_addr + "' and status='1' and type='UPDATE'", cloud);
OdbcDataReader DbReader = broadcastSelect.ExecuteReader();
int fCount = DbReader.FieldCount;
byte[] outByte = new byte[500];
while (DbReader.Read())
{
String type = DbReader.GetString(0);
String filename = DbReader.GetString(1);
String data = DbReader.GetBytes(1);
OdbcCommand broadcastCopy = new OdbcCommand("INSERT INTO exchange(type,filename) VALUES('"+type+"','"+filename+"'"+data+")", local);
broadcastCopy.ExecuteNonQuery();
}
cloud.Close();
local.Close();
Console.Write("Broadcast Check Completed \n");
}
Basically the cloud db is queried and may return multiple results; I want to process each record returned and copy it to the local DB.
I have looked around and can't seem to find a decent solution. I can do this simply in Visual FoxPro 9, so I'm guessing there is a similar solution here.
any help appreciated :)
The first part of the answer is, avoid dynamic SQL if you can. You're using "... VALUES ('"+type+"','"+filename+"'"+data+")" when you should be using "... VALUES (?, ?, ?)".
Then, add the parameters using, for instance,
// sample: the name of the parameter (here @Type) can be anything, and the type and length should match your schema.
broadcastCopy.Parameters.Add("@Type", OdbcType.VarChar, 10).Value = type;
The question marks will be replaced by the parameters in the order you specify them, so you should add type, then filename, then data, in that order.
Now, the value you specify should ALSO correspond to the type of field you are inserting into. So instead of String, String, String, you might want your variables to be of type String, String, byte[].
There are about a million reasons not to construct your queries dynamically, so I would recommend studying up on how to use the Parameters collection on your OdbcCommand.
UPDATE
In general you can get DataReader values simply by using the indexer [], without needing to go through the GetXXX() methods. For byte arrays, that's usually simpler, because you don't need to know or try to guess the length beforehand.
You can convert your code to use indexers this way:
String type = (string)DbReader[0];
String filename = (string)DbReader[1];
byte[] data = (byte[])DbReader[2];
Note that your GetBytes() call originally had a 1 in there, but I assume you aren't trying to get the bytes of the filename field. So, if your byte[] data is in another field, use that instead. Be aware, however, that you could also use the string field names just as easily (and it might be clearer the next time you need to read the code):
String type = (string)DbReader["type"]; // replace with whatever your fields are actually called
String filename = (string)DbReader["filename"];
byte[] data = (byte[])DbReader["data"];
On the off-chance you had filename and data both using the same field because data isn't actually in the database and instead you want to take the filename and read that filesystem object in as your data for the insert query, you'll need to use a different method.
byte[] data = System.IO.File.ReadAllBytes(filename); // requires .NET 2.0+
Either way you fill your variables, insert them with a parameterized query as explained above.
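Putting the pieces together, a sketch of the parameterized insert (the data column name and the parameter types/lengths are assumptions; match them to your actual exchange schema):
OdbcCommand broadcastCopy = new OdbcCommand(
    "INSERT INTO exchange(type, filename, data) VALUES (?, ?, ?)", local);
//parameters bind to the question marks in the order they are added
broadcastCopy.Parameters.Add("@Type", OdbcType.VarChar, 10).Value = type;
broadcastCopy.Parameters.Add("@Filename", OdbcType.VarChar, 255).Value = filename;
broadcastCopy.Parameters.Add("@Data", OdbcType.VarBinary, data.Length).Value = data;
broadcastCopy.ExecuteNonQuery();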
