Retrieving Data from SAP using C#

I have two DataGridViews in the main form: the first displays data from SAP and the second displays data from a Vertica DB. The FM I'm using is RFC_READ_TABLE, but there's an exception when calling this FM: if there are too many columns in the target table, the SAP connector returns a DATA_BUFFER_EXCEEDED exception. Are there any other FMs or ways to retrieve data from SAP without this exception?
I figured out a workaround: split the fields into several arrays, store each part's data in a DataTable, then merge the DataTables. But I'm afraid it will cost a lot of time if the row count is too large.
Here is my code:
RfcDestination destination = RfcDestinationManager.GetDestination(cmbAsset.Text);
readTable = destination.Repository.CreateFunction("RFC_READ_TABLE");
/*
 * RFC_READ_TABLE will only extract data up to 512 chars per row.
 * If you load more data, you will get a DATA_BUFFER_EXCEEDED exception.
 */
readTable.SetValue("query_table", table);
readTable.SetValue("delimiter", "~"); // assigns the given string value to the named element, converting it appropriately
if (tbRowCount.Text.Trim() != string.Empty) readTable.SetValue("rowcount", tbRowCount.Text);
t = readTable.GetTable("DATA");
t.Clear(); // removes all rows from this table
t = readTable.GetTable("FIELDS");
t.Clear();
if (selectedCols.Trim() != "")
{
    string[] field_names = selectedCols.Split(",".ToCharArray());
    if (field_names.Length > 0)
    {
        t.Append(field_names.Length); // adds the specified number of rows to this table
        int i = 0;
        foreach (string n in field_names)
        {
            t.CurrentIndex = i++;
            t.SetValue(0, n);
        }
    }
}
t = readTable.GetTable("OPTIONS");
t.Clear();
t.Append(1); // adds one row to this table
t.CurrentIndex = 0;
t.SetValue(0, filter); // the WHERE clause text goes into the OPTIONS table
try
{
    readTable.Invoke(destination);
}
catch (Exception e)
{
    // swallowing the exception here hides DATA_BUFFER_EXCEEDED; at least log it
}

First of all, you should use BBP_READ_TABLE if it is available in your system. It is better for many reasons, but that is not the point of your question. RFC_READ_TABLE has two import parameters, ROWCOUNT and ROWSKIPS, and you have to use them.
I would recommend a ROWCOUNT between 30,000 and 60,000, so you have to execute the RFC several times, incrementing ROWSKIPS each time. First loop: ROWCOUNT=30000 and ROWSKIPS=0; second loop: ROWCOUNT=30000 and ROWSKIPS=30000; and so on...
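A minimal sketch of that paging loop, reusing the readTable and destination objects from the question's code (the page size and variable names here are just for illustration):
int pageSize = 30000; // stay within the recommended 30,000-60,000 range
int skipped = 0;
var allRows = new System.Collections.Generic.List<string>();
while (true)
{
    readTable.SetValue("rowcount", pageSize);
    readTable.SetValue("rowskips", skipped);
    readTable.Invoke(destination);
    IRfcTable data = readTable.GetTable("DATA");
    if (data.RowCount == 0)
        break; // no more rows left
    for (int j = 0; j < data.RowCount; j++)
        allRows.Add(data[j].GetString("WA")); // WA holds the delimited row text
    skipped += pageSize;
}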
Also be careful with float fields when using the old RFC_READ_TABLE (there is one in table LIPS); this RFC has problems with them.

Use transaction BAPI, press Filter and set it to All.
Under Logistics Execution you will find Deliveries.
The detail screen shows the function name.
Test the functions directly to find one that suits you, then call that function instead of RFC_READ_TABLE.
Example: BAPI_LIKP_GET_LIST_MSG

Another possibility is to have an ABAP RFC function developed to get your data (with the advantage that you can get a structured / multi-table response in one call, and the disadvantage that this is not a standard function / BAPI).
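For illustration, calling such a custom function from the NCo connector might look like the sketch below; the function name Z_GET_DELIVERIES and all of its parameter and field names are hypothetical:
// Hypothetical custom ABAP RFC returning a typed result table in one call.
IRfcFunction fn = destination.Repository.CreateFunction("Z_GET_DELIVERIES");
fn.SetValue("IV_DATE_FROM", new DateTime(2014, 3, 1)); // typed import parameter
fn.Invoke(destination);
IRfcTable deliveries = fn.GetTable("ET_DELIVERIES");   // structured response table
for (int j = 0; j < deliveries.RowCount; j++)
    Console.WriteLine(deliveries[j].GetString("VBELN")); // hypothetical field name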

Related

CsvHelper - validate whole row

PROBLEM
I have recently started learning more about csvHelper and I need an advice on how to achieve my goal.
I have a CSV file containing user records (thousands to hundreds of thousands of them) and I need to parse the file and validate/process the data. What I need to do is two things:
I need a way to validate a whole row while it is being read:
the record contains a date range and I need to verify it is a valid range;
if it's not, I need to write the offending line to an error file;
one record can also be present multiple times with different date ranges, and I need to validate that the ranges don't overlap; if they do, write the WHOLE ORIGINAL LINE to an error file.
What I could basically get by with is a way to preserve the whole original row alongside the parsed data, but a way to verify the whole row while the raw data is still available would be better.
QUESTIONS
Are there some events/actions hidden somewhere that I can use to validate a row of data after it has been created but before it is added to the collection?
If not, is there a way to save the whole RAW row into the record so I can verify the row after parsing it AND, if it is not valid, do what I need with it?
CODE I HAVE
What I've created is the record class like this:
class Record
{   //simplified and omitted fluff for brevity
    public string Login { get; set; }
    public string Domain { get; set; }
    public DateTime? Created { get; set; }
    public DateTime? Ended { get; set; }
}
and a class map:
class RecordMapping : ClassMap<Record>
{   //simplified and omitted fluff for brevity
    public RecordMapping(ConfigurationElement config)
    {
        //..the set up of the mapping...
    }
}
and then use them like this:
public void ProcessFile(...)
{
    ...
    using (var reader = new StreamReader(...))
    using (var csvReader = new CsvReader(reader))
    using (var errorWriter = new StreamWriter(...))
    {
        csvReader.Configuration.RegisterClassMap(new RecordMapping(config));
        //...set up of csvReader configuration...
        try
        {
            var records = csvReader.GetRecords<Record>();
        }
        catch (Exception ex)
        {
            //..in case of problems...
        }
        ....
    }
    ....
}
In this scenario the data might be "valid" from CsvHelper's viewpoint, because it can read the data, but invalid for more complex reasons (like an invalid date range.)
In that case, this might be a simple approach:
public IEnumerable<Thing> ReadThings(TextReader textReader)
{
    var result = new List<Thing>();
    using (var csvReader = new CsvReader(textReader))
    {
        while (csvReader.Read())
        {
            var thing = csvReader.GetRecord<Thing>();
            if (IsThingValid(thing))
                result.Add(thing);
            else
                LogInvalidThing(thing);
        }
    }
    return result;
}
If what you need to log is the raw text, that would be:
LogInvalidRow(csvReader.Context.RawRecord);
Another option - perhaps a better one - might be to completely separate the validation from the reading. In other words, just read the records with no validation.
var records = csvReader.GetRecords<Record>();
Your reader class returns them without being responsible for determining which are valid and what to do with them.
Then another class can validate an IEnumerable<Record>, returning the valid rows and logging the invalid rows.
That way the logic for validation and logging isn't tied up with the code for reading. It will be easier to test and easier to re-use if you get a collection of Record from something other than a CSV file.
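A minimal sketch of that separation; the RecordValidator name and its rules below are assumptions based on the requirements stated in the question:
// Hypothetical validator, kept entirely separate from the CSV reading code.
public class RecordValidator
{
    // tracks ranges already seen per user (Domain + Login)
    private readonly Dictionary<string, List<Record>> seen =
        new Dictionary<string, List<Record>>();

    public bool IsValid(Record r)
    {
        // rule 1: the date range itself must be valid
        if (r.Created == null || r.Ended == null || r.Created > r.Ended)
            return false;
        var key = r.Domain + "\\" + r.Login;
        List<Record> previous;
        if (!seen.TryGetValue(key, out previous))
            seen[key] = previous = new List<Record>();
        // rule 2: ranges for the same user must not overlap
        // (two ranges overlap when each starts before the other ends)
        foreach (var p in previous)
            if (r.Created < p.Ended && p.Created < r.Ended)
                return false;
        previous.Add(r);
        return true;
    }
}
The reading code then stays oblivious to the business rules, and the validator can be unit-tested against plain Record lists with no CSV file involved.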

Iterating over Linq-to-Entities IEnumerable causes OutOfMemoryException

The part of the code I'm working on receives an
IEnumerable<T> items
where each item is an instance of a class whose properties reflect an MSSQL database table.
The database table has a total count of 953664 rows.
The dataset in code is filtered down to a set of 284360 rows.
The following code throws an OutOfMemoryException when the process reaches about 1.5 GB of memory allocation.
private static void Save<T>(IEnumerable<T> items, IList<IDataWriter> dataWriters, IEnumerable<PropertyColumn> columns) where T : MyTableClass
{
    foreach (var item in items)
    {
    }
}
The variable items is of type
IQueryable<MyTableClass>
I can't find anyone with the same setup, and others' solutions that I've found don't apply here.
I've also tried paging, using Skip and Take with a page size of 500, but that just takes a long time and ends up with the same result. It seems like objects aren't being released after each iteration. How is that?
How can I rewrite this code to cope with a larger collection set?
Well, as Servy has already said, you didn't provide your full code, so I'll try to make some predictions... (Sorry for my English.)
If you get the exception in "foreach (var item in items)" even when you are using paging then, I guess, something is wrong with the paging. I wrote a couple of examples to explain my idea.
In the first example I suggest (just for testing) putting your filter inside the Save function.
private static void Save<T>(IQueryable<T> items, IList<IDataWriter> dataWriters, IEnumerable<PropertyColumn> columns) where T : MyTableClass
{
    int pageSize = 500; // only 500 records will be loaded at a time
    int currentStep = 0;
    while (true)
    {
        // here we create a new request to the database using our filter
        var tempList = items.Where(yourFilter).Skip(currentStep * pageSize).Take(pageSize);
        foreach (var item in tempList)
        {
            // if you have an exception here, maybe something is wrong in your dataWriters or columns
        }
        currentStep++;
        if (tempList.Count() == 0) // no records have been loaded, so we can leave
            break;
    }
}
The second example shows how to use paging without any changes to the Save function:
int pageSize = 500;
int currentStep = 0;
while (true)
{
    // here we create a new request to the database using our filter
    var tempList = items.Where(yourFilter).Skip(currentStep * pageSize).Take(pageSize);
    Save(tempList, dataWriters, columns); // call the saving function
    currentStep++;
    if (tempList.Count() == 0)
        break;
}
Try both of them and you'll either resolve your problem or find the other place where the exception is raised.
By the way, another potential place is your dataWriters. I guess that is where you store all the data received from the database. Maybe you shouldn't keep all of it in memory; just calculate how much memory all those objects require.
P.S. And don't use while (true) in your code. It's just an example. :)

Adding numbers from two data frames in Deedle using multi key index

I am new to Deedle. I searched everywhere looking for examples that can help me to complete the following task:
Index data frame using multiple columns (3 in the example - Date, ID and Title)
Add numeric columns in multiple data frames together (Sales column in the example)
Group together and add the sales that occurred on the same day
My current approach is given below. First of all, it does not work because of the missing values, and I don't know how to handle them easily while adding data frames. Second, I wonder if there is a better, more elegant way to do it.
// Remove unused columns
var df = dfRaw.Columns[new[] { "Date", "ID", "Title", "Sales" }];
// Index the data frame using 3 columns
var dfIndexed = df.IndexRowsUsing(r => Tuple.Create(r.GetAs<DateTime>("Date"), r.GetAs<string>("ID"), r.GetAs<string>("Title")));
// Remove the indexed columns
dfIndexed.DropColumn("Date");
dfIndexed.DropColumn("ID");
dfIndexed.DropColumn("Title");
// Add the data frames. Does not work, as it will add only
// keys existing in both data frames
dfTotal += dfIndexed;
Table 1
Date,ID,Title,Sales,Market
2014-03-01,ID1,Title1,1,US
2014-03-01,ID1,Title1,2,CA
2014-03-03,ID2,Title2,3,CA
Table 2
Date,ID,Title,Sales,Market
2014-03-02,ID1,Title1,2,US
2014-03-03,ID2,Title2,2,CA
Expected Results
Date,ID,Title,Sales
2014-03-01,ID1,Title1,3
2014-03-02,ID1,Title1,2
2014-03-03,ID2,Title2,5
I think that your approach of using tuples makes sense.
It is a bit unfortunate that there is no easy way to specify default values when adding!
The easiest solution I can think of is to realign both series to the same set of keys and use a fill operation to provide the defaults. Using simple series as an example, something like this should do the trick:
var allKeys = series1.Keys.Union(series2.Keys);
var aligned1 = series1.Realign(allKeys).FillMissing(0.0);
var aligned2 = series2.Realign(allKeys).FillMissing(0.0);
var res = aligned1 + aligned2;
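As a usage sketch with the sample data from the question (after the per-day grouping has already been applied), assuming Deedle's SeriesBuilder and tuple keys:
// Build the two grouped series; keys are (Date, ID, Title) tuples.
var series1 = new SeriesBuilder<Tuple<DateTime, string, string>, double> {
    { Tuple.Create(new DateTime(2014, 3, 1), "ID1", "Title1"), 3.0 }, // 1 (US) + 2 (CA)
    { Tuple.Create(new DateTime(2014, 3, 3), "ID2", "Title2"), 3.0 }
}.Series;
var series2 = new SeriesBuilder<Tuple<DateTime, string, string>, double> {
    { Tuple.Create(new DateTime(2014, 3, 2), "ID1", "Title1"), 2.0 },
    { Tuple.Create(new DateTime(2014, 3, 3), "ID2", "Title2"), 2.0 }
}.Series;

var allKeys = series1.Keys.Union(series2.Keys); // requires System.Linq
var res = series1.Realign(allKeys).FillMissing(0.0)
        + series2.Realign(allKeys).FillMissing(0.0);
// res now holds 3, 2 and 5 for the three keys, matching the expected results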

Retrieve "row pairs" from Excel

I am trying to retrieve data from an Excel spreadsheet using C#. The data in the spreadsheet has the following characteristics:
no column names are assigned
the rows can have varying column lengths
some rows are metadata, and these rows label the content of the columns in the next row
Therefore, the objects I need to construct will always have their name in the very first column, and their parameters are contained in the next columns. It is important that the parameter names are retrieved from the row above. An example:
row1|---------|FirstName|Surname|
row2|---Person|Bob------|Bloggs-|
row3|---------|---------|-------|
row4|---------|Make-----|Model--|
row5|------Car|Toyota---|Prius--|
So unfortunately the data is heterogeneous, and the only way to determine which rows "belong together" is to check whether the first column of a row is empty. If it is, then read all the data in the row and check which parameter names apply by looking at the row above.
At first I thought the straightforward approach would be to simply loop through
1) the dataset containing all sheets, then
2) the datatables (i.e. sheets) and
3) the rows.
However, I found that trying to extract this data with nested loops and if statements results in horrible, unreadable and inflexible code.
Is there a way to do this in LINQ? I had a look at this article to start by filtering out the empty rows between data, but didn't really get anywhere. Could someone point me in the right direction with a few code snippets, please?
Thanks in advance!
hiro
I see that you've already accepted an answer, but I think a more generic solution is possible, using reflection.
Let's say you get your data as a List<string[]>, where each element in the list is an array of strings with all the cells from the corresponding row.
List<string[]> data = LoadData();
var results = new List<object>();
string[] headerRow = null; // assigned when the first metadata row is seen
var en = data.GetEnumerator();
while (en.MoveNext())
{
    var row = en.Current;
    if (string.IsNullOrEmpty(row[0]))
    {
        // metadata row: first column is empty, the remaining cells are property names
        headerRow = row.Skip(1).ToArray();
    }
    else
    {
        // data row: the first column holds the (namespace-qualified) type name
        Type objType = Type.GetType(row[0]);
        object newItem = Activator.CreateInstance(objType);
        for (int i = 0; i < headerRow.Length; i++)
        {
            objType.GetProperty(headerRow[i]).SetValue(newItem, row[i + 1]);
        }
        results.Add(newItem);
    }
}
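For the example spreadsheet above, this approach assumes classes like the following exist (hypothetical names matching the sample rows) and that row[0] contains their namespace-qualified names so Type.GetType can resolve them:
// Hypothetical types matching the sample rows; all properties are
// strings because the reflection loop assigns raw cell text.
public class Person
{
    public string FirstName { get; set; }
    public string Surname { get; set; }
}

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
}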

split SortedList to multiple lists or arrays [duplicate]

Possible Duplicate:
How to split an array into a group of n elements each?
I believe I oversimplified this question so I am editing it a bit.
From within a .NET 3.5 console application I have a SortedList<string, string> that will contain an unknown number of key/value pairs. I will get this collection by reading in rows from a table within a Microsoft Word document. The user will then be able to add additional items into this collection. Once the user has finished adding to the collection, I need to write the collection back to a new Microsoft Word document.
The difficulty is that the items must be written back to the document in alphabetical order into a multi-column table, first down the left side of the table and then down the right side, and since the output will likely be spread across multiple pages, I also need to keep the order across pages. So the first table on the first page may contain A through C on the left side and C through F on the right side; if the table exceeds the page, a new table is needed. The new table may contain F through I on the left side and L through O on the right.
Since the table will likely span multiple pages and I know the maximum number of rows per table per page, I can do the math to determine how many tables I will need overall.
For the sake of brevity, if an output table can contain a maximum of 7 rows per page and 2 items per row, and I have 28 items, then I will need to write the output to 2 tables; but of course I won't really know how many tables I need until I read in the data, so I can't simply hardcode the number of output tables.
What is the best way to take my SortedList and split it out into n collections in order to create the table structure described?
It is not necessary to split the list (if the only purpose is to write items in a table).
You can just iterate through the list and write row breaks in appropriate places.
for (int i = 0; i < sortedList.Count; i++)
{
    if (i % 3 == 0)
    {
        Console.Write("|"); // write beginning of the row
    }
    // a generic SortedList<string, string> is indexed by key, so access
    // the i-th value through the Values collection (Keys[i] for the key)
    Console.Write(sortedList.Values[i].PadRight(10)); // write cell
    Console.Write("|"); // write cell divider
    if (i % 3 == 2)
    {
        Console.WriteLine(); // write end of the row
    }
}
// optional: write empty cells if sortedList.Count % 3 != 0
// optional: write end of the row if sortedList.Count % 3 != 2
You should extend your question by specifying what the output of your program is. If you want to write a table to the console, the above solution is probably the best. However, if you are using a rich user interface (such as WinForms or ASP.NET), you should use the built-in tools and controls to display data in a table.
I played with LINQ a little bit and came up with this solution. It creates a kind of tree structure based on the "input parameters" (rowsPerPage and columnsPerPage). The columns on the last page might not all have the same size (the code can easily be fixed if that is a problem).
SortedList<string, string> sortedList = ... // input sortedList
int rowsPerPage = 7;
int columnsPerPage = 2;

var result = from col in
                 (from i in sortedList.Select((item, index) => new { Item = item, Index = index })
                  group i by (i.Index / rowsPerPage) into g
                  select new { ColumnNumber = g.Key, Items = g })
             group col by (col.ColumnNumber / columnsPerPage) into page
             select new { PageNumber = page.Key, Columns = page };
foreach (var page in result)
{
    Console.WriteLine("Page no. {0}", page.PageNumber);
    foreach (var col in page.Columns)
    {
        Console.WriteLine("\tColumn no. {0}", col.ColumnNumber);
        foreach (var item in col.Items)
        {
            Console.WriteLine("\t\tItem key: {0}, value: {1}", item.Item.Key, item.Item.Value);
        }
    }
}
