Memory leak when reading from a file - C#

I have this method to read from a .dbf file:
public DataTable ReadBulkDBF(string dbfFile, Dictionary<string, string> columnKeys, int maxRows, string dynamicValue, int nextId)
{
    long start = DateTime.Now.Ticks;
    DataTable dt = new DataTable();
    BinaryReader recReader;
    string number;
    string year;
    string month;
    string day;
    long lDate;
    long lTime;
    DataRow row;
    int fieldIndex;
    bool foundLastColumn = false;
    List<string> keys = new List<string>(columnKeys.Keys);
    List<string> values = new List<string>(columnKeys.Values);
    // For testing purposes
    int rowCount = 0;

    // If there isn't even a file, just return an empty DataTable
    if ((!File.Exists(dbfFile)))
    {
        return dt;
    }

    BinaryReader br = null;
    try
    {
        // Will allow shared open as long as the other application using it allows it too.
        // Read the header into a buffer
        br = new BinaryReader(File.Open(dbfFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite));
        byte[] buffer = br.ReadBytes(Marshal.SizeOf(typeof(DBFHeader)));

        // Marshal the header into a DBFHeader structure
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        DBFHeader header = (DBFHeader)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(DBFHeader));
        handle.Free();

        // Read in all the field descriptors. Per the spec, 13 (0x0D) marks the end of the field descriptors
        ArrayList fields = new ArrayList();
        while ((13 != br.PeekChar()))
        {
            buffer = br.ReadBytes(Marshal.SizeOf(typeof(FieldDescriptor)));
            handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            fields.Add((FieldDescriptor)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(FieldDescriptor)));
            handle.Free();
        }

        // Read in the first row of records; we need this to help determine column types below
        ((FileStream)br.BaseStream).Seek(header.headerLen + 1, SeekOrigin.Begin);
        buffer = br.ReadBytes(header.recordLen);
        recReader = new BinaryReader(new MemoryStream(buffer));

        // Create the columns in our new DataTable
        DataColumn col = null;
        dt.Columns.Add(new DataColumn("updateId", typeof(int)));
        if (!dbfFile.Contains("con_compania")) { dt.Columns.Add(new DataColumn("dynamic", typeof(string))); }
        dt.Columns.Add(new DataColumn("fechasync", typeof(DateTime)));
        foreach (FieldDescriptor field in fields)
        {
            // Adds columns to DataTable dt
        }

        // Skip past the end of the header.
        ((FileStream)br.BaseStream).Seek(header.headerLen, SeekOrigin.Begin);

        // Read in all the records
        for (int counter = 0; counter < header.numRecords && dt.Rows.Count < maxRows; counter++)
        {
            // First we'll read the entire record into a buffer and then read each field from the buffer.
            // This helps account for any extra space at the end of each record and probably performs better.
            buffer = br.ReadBytes(header.recordLen);
            recReader = new BinaryReader(new MemoryStream(buffer));

            // All dbf field records begin with a deleted flag field. Deleted - 0x2A (asterisk), else 0x20 (space)
            if (recReader.ReadChar() == '*')
            {
                continue;
            }

            // Loop through each field in a record
            fieldIndex = 2;
            rowCount = dt.Rows.Count;
            row = dt.NewRow();
            foreach (FieldDescriptor field in fields)
            {
                switch (field.fieldType)
                {
                    // Casts the field's value according to its type and saves it in dt.
                }
                fieldIndex++;
            }

            // Looks for the key-value combination in every row until
            // it finds it, to know where to start reading the new rows.
            if (!foundLastColumn && columnKeys.Keys.Count > 0)
            {
                foundLastColumn = true;
                int i = 3;
                if (dbfFile.Contains("con_compania")) { i = 2; }
                for (; i < keys.Count && foundLastColumn; i++)
                {
                    if (!row[keys[i]].ToString().Equals(values[i]))
                    {
                        foundLastColumn = false;
                    }
                }
            }
            else
            {
                dt.Rows.Add(row);
                nextId++;
            }
        }
    }
    catch (Exception e)
    {
        throw e;
    }
    finally
    {
        if (null != br)
        {
            br.Close();
            br.Dispose();
        }
    }
    long count = DateTime.Now.Ticks - start;
    return dt;
}
The problem is that somewhere I am keeping a reference to something, so I'm getting an OutOfMemoryException.
The method is called with something like:
DataTable dt = new ParseDBF().ReadBulkDBF(...);
//Use dt
dt.Dispose();
dt = null;
If I only call Dispose(), the reference is kept; if I set it to null, dt becomes null, but a reference to the ParseDBF object is still alive somewhere.
Any idea where the leak might be? I have looked all over the internet for ideas, tried calling Dispose() and Close(), and set everything I can think of to null after using it, and it keeps happening.

I notice that recReader may never be disposed.
I would strongly suggest making use of using blocks in this code, to ensure that IDisposable objects are cleaned up when execution leaves the using scope.
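As a sketch of that suggestion (using hypothetical fixed-size records rather than the asker's DBF structures), each disposable reader can be scoped with a using block so it is closed even when an exception is thrown or a record is skipped:

```csharp
using System;
using System.IO;

class UsingBlocksDemo
{
    // Reads fixed-size records from a file, disposing every reader deterministically.
    // recordLen and the sample data are illustrative values, not from the original code.
    static int CountLiveRecords(string path, int recordLen)
    {
        int count = 0;
        using (var br = new BinaryReader(File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)))
        {
            while (br.BaseStream.Position + recordLen <= br.BaseStream.Length)
            {
                byte[] buffer = br.ReadBytes(recordLen);
                // The per-record reader is disposed every iteration, instead of being
                // reassigned and abandoned the way recReader is in the original loop.
                using (var recReader = new BinaryReader(new MemoryStream(buffer)))
                {
                    if (recReader.ReadChar() == '*') continue; // skip "deleted" records
                    count++;
                }
            }
        } // br (and its underlying FileStream) are closed here, even on exceptions
        return count;
    }

    static void Main()
    {
        string path = Path.GetTempFileName();
        // Three 2-byte records; the second is flagged deleted with '*'.
        File.WriteAllBytes(path, new byte[] { (byte)' ', 1, (byte)'*', 2, (byte)' ', 3 });
        Console.WriteLine(CountLiveRecords(path, 2));
        File.Delete(path);
    }
}
```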

Related

C# SQL Server data to long array

I am trying to use C# with a SQL Server database and I have a problem.
I have an array like the one below (the actual original array has 10001 elements):
long[] lvl = { 0, 7200000, 15840000, 25920000, 37440000, 50400000, 64800000, 80640000 };
When I try to read the same long[] array back from the database, I get an error.
string sorgu = "select * from paragon";
var komut = new SqlCommand(sorgu, baglanti);
var reader = komut.ExecuteReader();
IList<long> lvl = new List<long>();
while (reader.Read())
{
lvl.Add((long)reader["Paragon"]);
}
reader.Close();
reader.Dispose();
long ns = Convert.ToInt64(textBox1.Text);
long sns = Convert.ToInt64(textBox2.Text);
long nsxp = lvl[ns];
long snsxp = lvl[sns];
long toplam = nsxp + snsxp;
for (int i = 0; i < lvl.Count; i++)
{
if (toplam < lvl[i])
{
textBox3.Text = Convert.ToString(i - 1);
break;
}
}
[Error screenshot omitted]
Your problem is a data type mismatch.
reader gives you one value for each reader.Read() call.
IList<long> myArray = new List<long>();
while (reader.Read())
{
    myArray.Add(reader.GetInt64(0));
}
reader.Close();
reader.Dispose(); // always close and dispose your reader whenever you are done
long ns = Convert.ToInt64(textBox1.Text);
long sns = Convert.ToInt64(textBox2.Text);
long nsxp = myArray[(int)ns];
long snsxp = myArray[(int)sns];
long toplam = nsxp + snsxp;
for (int i = 0; i < myArray.Count; i++)
{
    if (toplam < myArray[i])
    {
        textBox3.Text = Convert.ToString(i - 1);
        break;
    }
}
SqlDataReader's Read() reads a single record from the database. You are trying to use it to read all records at once. This code illustrates an example of how to read each value sequentially:
while (reader.Read()) // This returns true if a record is available, and false once all records have been read.
{
var paragonValue = reader.GetInt64(0); // This reads the current record's Paragon value.
// Do something with paragonValue.
}
See the Microsoft Docs on SqlDataReader.Read for more information.

Cannot implicitly convert DateTime to Timespan

SQL Server has the TEST_TIME column with data type Time(7).
I created the table classes in C# and it automatically assigned the TimeSpan data type.
Now I'm trying to upload CSV file data to the SQL database and it gives me the error "Cannot implicitly convert DateTime to TimeSpan". What would be the best way to fix this?
The user first selects the CSV file:
private void button8_Click(object sender, EventArgs e)
{
try
{
using (OpenFileDialog openfiledialog1 = new OpenFileDialog()
{Filter = "Excel Workbook 97-2003|*.xls|Excel Workbook|*.xlsx|Excel Workbook|*.xlsm|Excel Workbook|*.csv|Excel Workbook|*.txt", ValidateNames = true })
{
--After some IFs--
else if (openfiledialog1.FilterIndex == 4)
{
DataTable oDataTable = null;
int RowCount = 0;
string[] ColumnNames = null;
string[] oStreamDataValues = null;
//using while loop read the stream data till end
while (!oStreamReader.EndOfStream)
{
String oStreamRowData = oStreamReader.ReadLine().Trim();
if (oStreamRowData.Length > 0)
{
oStreamDataValues = oStreamRowData.Split(',');
//Bcoz the first row contains column names, we will populate
//the column name by
//reading the first row and RowCount-0 will be true only once
if (RowCount == 0)
{
RowCount = 1;
ColumnNames = oStreamRowData.Split(',');
oDataTable = new DataTable();
//using foreach looping through all the column names
foreach (string csvcolumn in ColumnNames)
{
DataColumn oDataColumn = new DataColumn(csvcolumn.ToUpper(), typeof(string));
//setting the default value of empty.string to newly created column
oDataColumn.DefaultValue = string.Empty;
//adding the newly created column to the table
oDataTable.Columns.Add(oDataColumn);
}
}
else
{
//creates a new DataRow with the same schema as of the oDataTable
DataRow oDataRow = oDataTable.NewRow();
//using foreach looping through all the column names
for (int i = 0; i < ColumnNames.Length; i++)
{
oDataRow[ColumnNames[i]] = oStreamDataValues[i] == null ? string.Empty : oStreamDataValues[i].ToString();
}
//adding the newly created row with data to the oDataTable
oDataTable.Rows.Add(oDataRow);
}
}
}
result.Tables.Add(oDataTable);
//close the oStreamReader object
oStreamReader.Close();
//release all the resources used by the oStreamReader object
oStreamReader.Dispose();
dataGridView5.DataSource = result.Tables[oDataTable.TableName];
}
Here is the Code:
private void button9_Click(object sender, EventArgs e)
{
try
{
DataClasses1DataContext conn = new DataClasses1DataContext();
else if (textBox3.Text.Contains("GEN_EX"))
{
foreach (DataTable dt in result.Tables)
{
foreach (DataRow dr in dt.Rows)
{
GEN_EX addtable = new GEN_EX()
{
EX_ID = Convert.ToByte(dr[0]),
DOC_ID = Convert.ToByte(dr[1]),
PATIENT_NO = Convert.ToByte(dr[2]),
TEST_DATE = Convert.ToDateTime(dr[3]),
TEST_TIME = Convert.ToDateTime((dr[4])), // <-- offending line
};
conn.GEN_EXs.InsertOnSubmit(addtable);
}
}
conn.SubmitChanges();
MessageBox.Show("File uploaded successfully");
}
else
{
MessageBox.Show("I guess table is not coded yet");
}
}
EDIT
The TEST_TIME represents HH:MM:SS
The Typed Data Set is defined as:
public virtual int Update(
byte EX_ID,
byte DOC_ID,
byte PATIENT_NO,
System.DateTime TEST_DATE,
System.TimeSpan TEST_TIME)
Based on your input that dr[4] represents time values in hours:minutes:seconds format, I recommend the following solution.
private TimeSpan GetTimeSpan(string timeString)
{
var timeValues = timeString.Split(new char[] { ':' });
//Assuming that timeValues array will have 3 elements.
var timeSpan = new TimeSpan(Convert.ToInt32(timeValues[0]), Convert.ToInt32(timeValues[1]), Convert.ToInt32(timeValues[2]));
return timeSpan;
}
Use the above method as follows:
else if (textBox3.Text.Contains("GEN_EX"))
{
foreach (DataTable dt in result.Tables)
{
foreach (DataRow dr in dt.Rows)
{
GEN_EX addtable = new GEN_EX()
{
EX_ID = Convert.ToByte(dr[0]),
DOC_ID = Convert.ToByte(dr[1]),
PATIENT_NO = Convert.ToByte(dr[2]),
TEST_DATE = Convert.ToDateTime(dr[3]),
TEST_TIME = GetTimeSpan(dr[4].ToString()), // <-- changed line
};
conn.GEN_EXs.InsertOnSubmit(addtable);
}
}
conn.SubmitChanges();
MessageBox.Show("File uploaded successfully");
}
This should give you the value you want. You will face runtime issues if the value of dr[4] is not in hours:minutes:seconds format; I will leave that up to you.
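As an alternative to splitting the string manually, TimeSpan.Parse understands the hours:minutes:seconds format directly, and TimeSpan.TryParse guards against malformed rows without throwing. A minimal sketch (the sample strings are illustrative, not from the asker's data):

```csharp
using System;

class TimeSpanParseDemo
{
    static void Main()
    {
        // TimeSpan.Parse accepts "hh:mm:ss" strings such as the dr[4] values described above.
        TimeSpan t = TimeSpan.Parse("13:45:30");
        Console.WriteLine(t.Hours);   // 13
        Console.WriteLine(t.Minutes); // 45

        // TryParse avoids an exception when the cell is malformed.
        TimeSpan safe;
        bool ok = TimeSpan.TryParse("not a time", out safe);
        Console.WriteLine(ok); // False
    }
}
```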
First of all, TimeSpan and DateTime are two different types with no implicit conversion between them. Since a TimeSpan is the interval between two DateTime values, you need to know the reference time (DateTime) from which your TimeSpan is measured.
For example, it could be DateTime dtReferential = new DateTime(1900, 01, 01);
To give SQL a time value you need to give it a C# TimeSpan: change the TEST_TIME value to a TimeSpan, and assign it the value obtained by subtracting your reference time.
Using the previous example:
else if (textBox3.Text.Contains("GEN_EX"))
{
foreach (DataTable dt in result.Tables)
{
foreach (DataRow dr in dt.Rows)
{
GEN_EX addtable = new GEN_EX()
{
EX_ID = Convert.ToByte(dr[0]),
DOC_ID = Convert.ToByte(dr[1]),
PATIENT_NO = Convert.ToByte(dr[2]),
TEST_DATE = Convert.ToDateTime(dr[3]),
TEST_TIME = dtReferential.Subtract(Convert.ToDateTime(dr[4]))
};
conn.GEN_EXs.InsertOnSubmit(addtable);
}
}
conn.SubmitChanges();
MessageBox.Show("File uploaded successfully");
}

Convert IEnumerable string array to datatable

I have a csv file delimited with pipe(|). I am reading it using the following line of code:
IEnumerable<string[]> lineFields = File.ReadAllLines(FilePath).Select(line => line.Split('|'));
Now, I need to bind this to a GridView. So I am creating a dynamic DataTable as follows:
DataTable dt = new DataTable();
int i = 0;
foreach (string[] order in lineFields)
{
if (i == 0)
{
foreach (string column in order)
{
DataColumn _Column = new DataColumn();
_Column.ColumnName = column;
dt.Columns.Add(_Column);
i++;
//Response.Write(column);
//Response.Write("\t");
}
}
else
{
int j = 0;
DataRow row = dt.NewRow();
foreach (string value in order)
{
row[j] = value;
j++;
//Response.Write(column);
//Response.Write("\t");
}
dt.Rows.Add(row);
}
//Response.Write("\n");
}
This works fine. But I want to know if there is a better way to convert IEnumerable<string[]> to a DataTable. I need to read many CSVs like this, so I think the above code might have performance issues.
Starting from .NET 4, use ReadLines:
DataTable FileToDataTable(string FilePath)
{
var dt = new DataTable();
IEnumerable<string[]> lineFields = File.ReadLines(FilePath).Select(line => line.Split('|'));
dt.Columns.AddRange(lineFields.First().Select(i => new DataColumn(i)).ToArray());
foreach (var order in lineFields.Skip(1))
dt.Rows.Add(order);
return dt;
}
(Edit: instead of this code, use the code from Jodrell's answer, which avoids consuming the enumerable twice.)
Before .NET 4, use streaming:
DataTable FileToDataTable1(string FilePath)
{
var dt = new DataTable();
using (var st = new StreamReader(FilePath))
{
// process the first line
if (st.Peek() >= 0)
{
var order = st.ReadLine().Split('|');
dt.Columns.AddRange(order.Select(i => new DataColumn(i)).ToArray());
}
while (st.Peek() >= 0)
dt.Rows.Add(st.ReadLine().Split('|'));
}
return dt;
}
Since, in your linked example, the file has a header row:
const char Delimiter = '|';
var dt = new DataTable();
using (var m = File.ReadLines(filePath).GetEnumerator())
{
m.MoveNext();
foreach (var name in m.Current.Split(Delimiter))
{
dt.Columns.Add(name);
}
while (m.MoveNext())
{
dt.Rows.Add(m.Current.Split(Delimiter));
}
}
This reads the file in one pass.

Code not running outside of loop

EDIT: I am using SharpDevelop
I am new to C#, so the answer may be an easy one. I have some code (below), and the while loop runs just fine. The problem is that once the processing in the while loop has finished, no more code is executed. If I put a breakpoint on my cn.Open(); line and run the program, I never hit that breakpoint. If I put a breakpoint on the closing curly bracket } just above the cn.Open(); line, the code stops each time I hit that breakpoint. I am not sure how to get my additional code to run.
void MainFormLoad(object sender, EventArgs e)
{
DataTable dt = new DataTable();
string line = null;
int i = 0;
SqlConnection cn = new SqlConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=Sandbox;Data Source=test");
StreamReader sr = File.OpenText(@"C:\Users\rl\Desktop\TEST_I~1.CSV");
while ((line = sr.ReadLine()) != null)
{
string[] data = line.Split(',');
if (data.Length > 0)
{
if (i == 0)
{
foreach (var item in data)
{
dt.Columns.Add(item.ToString());
}
i++;
}
DataRow row = dt.NewRow();
row.ItemArray = data;
dt.Rows.Add(row);
}
}
cn.Open();
SqlBulkCopy copy = new SqlBulkCopy(cn);
{
// copy.ColumnMappings.Add(0, 0);
// copy.ColumnMappings.Add(1, 1);
// copy.ColumnMappings.Add(2, 2);
// copy.ColumnMappings.Add(3, 3);
// copy.ColumnMappings.Add(4, 4);
copy.DestinationTableName = "Member2";
copy.WriteToServer(dt);
}
You have a few items you may want to address. These may or may not be related to whatever issue you're having debugging with #develop.
- Declaring things long before you use them (style guidelines)
- Not disposing of things that implement IDisposable (use using statements!)
- Inner scope block: the copy variable is being used in its own scope for no apparently good reason (I may be wrong, but it could be what's throwing #develop's debugger for a loop)
Instead, your code should be closer to this:
void MainFormLoad(object sender, EventArgs e)
{
var dt = new DataTable();
// You may want to pass other parameters to OpenText for read mode, etc.
using (var sr = File.OpenText(@"C:\Users\rl\Desktop\TEST_I~1.CSV"))
{
var first = true;
string line = null;
while ((line = sr.ReadLine()) != null)
{
string[] data = line.Split(',');
if (data.Length > 0)
{
if (first)
{
foreach (var item in data)
{
dt.Columns.Add(item.ToString());
}
first = false;
// Don't add the first row's data in the table (headers?)
continue;
}
var row = dt.NewRow();
row.ItemArray = data;
dt.Rows.Add(row);
}
}
}
using (var cn = new SqlConnection("<connection string>"))
{
cn.Open();
using (var copy = new SqlBulkCopy(cn))
{
// copy.ColumnMappings.Add(0, 0);
// copy.ColumnMappings.Add(1, 1);
// copy.ColumnMappings.Add(2, 2);
// copy.ColumnMappings.Add(3, 3);
// copy.ColumnMappings.Add(4, 4);
copy.DestinationTableName = "Member2";
copy.WriteToServer(dt);
}
}
}
The code is a bit odd but it looks like it should work. There's probably a file lock either making you run against old builds or hang on the .csv open line.
Cory's suggestions for tidying the code are rather good.
One thing to rule out: the while condition itself is correct. (line = sr.ReadLine()) != null is the standard read-loop idiom; the assignment expression evaluates to the value assigned, so the loop terminates once ReadLine() returns null at end of file. If the code after the loop never runs, look for a swallowed exception or a stale build rather than an infinite loop here.
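For reference, the assignment-in-condition pattern does terminate at end of input. A minimal sketch, substituting a StringReader for the CSV file (an assumption for the demo):

```csharp
using System;
using System.IO;

class ReadLinePatternDemo
{
    static void Main()
    {
        // Same loop shape as the question: the assignment expression evaluates to
        // the value assigned, so the condition becomes false at end of input.
        var sr = new StringReader("a,b\n1,2\n3,4");
        string line;
        int count = 0;
        while ((line = sr.ReadLine()) != null)
        {
            count++;
        }
        Console.WriteLine(count); // 3
    }
}
```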

Creating Tables and Retrieving Query results with Dynamics AX 2009 business connector

I'm writing a C# command line tool to fetch data from AX and to add data (create new tables) to AX.
Fetching data from an AX table is easy and documented here: http://msdn.microsoft.com/en-us/library/cc197126.aspx
Adding data to an existing table is also easy: http://msdn.microsoft.com/en-us/library/aa868997.aspx
But I cannot figure out how to do two things:
Create a new AX Table
Retrieve data from an AX Query
Can someone please share some sample code or give some pointers on where to start looking. My searches on Google and MSDN have not revealed much.
NOTE: I am not an experienced AX or ERP developer.
I have created a query in the AOT and was able to use C# to return the data. Find the code below. It's a query that returns the sales that I create Aging Buckets with. I hope this helps.
[DataMethod(), AxSessionPermission(SecurityAction.Assert)]
public static System.Data.DataTable GetCustBuckets(String AccountNum)
{
//Report Parameters
Dictionary<string, object> d = new Dictionary<string, object>();
d.Add("CustTransOpen.AccountNum",AccountNum);
// Create a data table. Add columns for item group and item information.
DataTable table = new DataTable();
table = AxQuery.ExecuteQuery("SELECT * FROM epcCustomerAging",d);
DataTable tableBucket = new DataTable();
DataRow rowBucket;
tableBucket.Columns.Add("Current", typeof(double));
tableBucket.Columns.Add("Bucket31to60", typeof(double));
tableBucket.Columns.Add("Bucket61to90", typeof(double));
tableBucket.Columns.Add("Bucket91to120", typeof(double));
tableBucket.Columns.Add("Over120", typeof(double));
//Variables to hold BUCKETS
double dCurrent = 0;
double dBucket31to60 = 0;
double dBucket61to90 = 0;
double dBucket91to120 = 0;
double dOver120 = 0;
// Iterate through the results. Add the item group to the data table. Call the display method
foreach (DataRow TransRow in table.Rows)
{
DateTime TransDate = Convert.ToDateTime(TransRow["TransDate"].ToString());
double AmountCur = Convert.ToDouble(TransRow["AmountCur"].ToString());
DateTime Today= Microsoft.VisualBasic.DateAndTime.Now;
long nDays = Microsoft.VisualBasic.DateAndTime.DateDiff(Microsoft.VisualBasic.DateInterval.Day, TransDate, Today, 0, 0);
if (nDays <= 30)
{
dCurrent += AmountCur;
}
else if (nDays <= 60)
{
dBucket31to60 += AmountCur ;
}
else if (nDays <= 90)
{
dBucket61to90 += AmountCur;
}
else if (nDays <= 120)
{
dBucket91to120 += AmountCur;
}
else
{
dOver120 += AmountCur;
}
}
rowBucket = tableBucket.NewRow();
rowBucket["Current"] = dCurrent;
rowBucket["Bucket31to60"] = dBucket31to60;
rowBucket["Bucket61to90"] = dBucket61to90;
rowBucket["Bucket91to120"] = dBucket91to120;
rowBucket["Over120"] = dOver120;
tableBucket.Rows.Add(rowBucket);
return tableBucket;
}
Here is a way to create a new AX table from C# (this is using an extension method):
public static bool CreateAXTable(this Axapta ax)
{
string TableName = "MyCustomTable";
string size = "255"; //You could load this from a setting
bool val = false;
if (!ax.TableExists(TableName))
{
AxaptaObject TablesNode = (AxaptaObject)ax.CallStaticClassMethod("TreeNode", "findNode", #"\Data Dictionary\Tables");
AxaptaObject node;
AxaptaObject fields;
AxaptaObject fieldNode;
TablesNode.Call("AOTadd", TableName);
node = (AxaptaObject)ax.CallStaticClassMethod("TreeNode", "findNode", "\\Data dictionary\\Tables\\" + TableName);
fields = (AxaptaObject)ax.CallStaticClassMethod("TreeNode", "findNode", "\\Data dictionary\\Tables\\" + TableName + "\\Fields");
fields.Call("addString", "String1"); //add a string field
fieldNode = (AxaptaObject)fields.Call("AOTfindChild", "String1"); //grab a reference to the field
fieldNode.Call("AOTsetProperty", "StringSize", size);
fieldNode.Call("AOTsave");
fields.Call("addString", "String2"); //add a string field
fieldNode = (AxaptaObject)fields.Call("AOTfindChild", "String2"); //grab a reference to the field
fieldNode.Call("AOTsetProperty", "StringSize", size);
fieldNode.Call("AOTsave");
fields.Call("addString", "String3"); //add a string field
fieldNode = (AxaptaObject)fields.Call("AOTfindChild", "String3"); //grab a reference to the field
fieldNode.Call("AOTsetProperty", "StringSize", size);
fieldNode.Call("AOTsave");
fields.Call("addString", "String4"); //add a string field
fieldNode = (AxaptaObject)fields.Call("AOTfindChild", "String4"); //grab a reference to the field
fieldNode.Call("AOTsetProperty", "StringSize", size);
fieldNode.Call("AOTsave");
fields.Call("addReal", "Real1");
fields.Call("addReal", "Real2");
fields.Call("addReal", "Real3");
fields.Call("addReal", "Real4");
fields.Call("addDate", "Date1");
fields.Call("addDate", "Date2");
fields.Call("addDate", "Date3");
fields.Call("addDate", "Date4");
fields.Call("AOTsave");
node.Call("AOTsave");
AxaptaObject appl = ax.GetObject("appl");
appl.Call("dbSynchronize", Convert.ToInt32(node.Call("applObjectId")), false);
val = true;
}
else //Table already exists
{
val = true;
}
return val;
}
public static bool TableExists(this Axapta ax, string tableName)
{
return ((int)ax.CallStaticClassMethod("Global", "tableName2Id", tableName) > 0);
}
Here is an example of running a query in C#:
(Note: this is a very simplistic approach that uses an existing query definition; you could also build a query from scratch using QueryBuildDataSource objects, etc.)
Axapta ax = new Axapta();
ax.Logon("", "", "", "");
//Create a query object based on the customer group query in the AOT
AxaptaObject query = ax.CreateAxaptaObject("Query", "CustGroupSRS");
//Create a QueryRun object based on the query to fetch records
AxaptaObject queryRun = ax.CreateAxaptaObject("QueryRun", query);
AxaptaRecord CustGroup = null;
while (Convert.ToBoolean(queryRun.Call("next")))
{
//GetTableId function is defined here: .Net Business Connector Kernel Functions
CustGroup = (AxaptaRecord)queryRun.Call("get", ax.GetTableId("CustGroup"));
System.Diagnostics.Debug.WriteLine(CustGroup.get_Field("Name").ToString());
}
CustGroup.Dispose();
queryRun.Dispose();
query.Dispose();
ax.Logoff();
ax.Dispose();
I honestly don't think it's possible to create new tables using the business connector. It has to be done within AX and the AOT.
As for returning mixed data, I would probably use a container object for that. Containers can hold sub-containers or AxaptaRecords. An AxaptaRecord contains data from one defined table.
