Exploring Windows Workflow Foundation (WF), I've run into a difficulty with processing a DataTable. Let's say I have a table and I want to do some calculations based on the data in each of its rows. To do so, I would add a CodeActivity as the first step in my workflow which reads that table and populates a DataTable (stored as a private field of the workflow) with the data. I thought that afterwards I would use the ReplicatorActivity (suggested by MSDN as a replacement for the foreach loop) to iterate through the data, with another CodeActivity inside it doing all the calculations based on each row's data. The problem is that ReplicatorActivity can only iterate over a System.Collections.IList, but System.Data.DataTable.Rows is of type DataRowCollection, which implements ICollection and IEnumerable through InternalDataCollectionBase, not IList.
What would you suggest? Should I use a WhileActivity instead of ReplicatorActivity in this case, or some other approach?
Should I do all the calculations in a single long running CodeActivity?
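(If the ReplicatorActivity route were kept, one sketch of satisfying its IList requirement would be to project the rows into a List&lt;DataRow&gt; and bind the replicator's InitialChildData to it; the class, field, and handler names below are hypothetical.)

```csharp
using System;
using System.Collections;
using System.Data;
using System.Linq;
using System.Workflow.Activities;

public partial class CalculationWorkflow : SequentialWorkflowActivity
{
    private DataTable _dataTable;      // populated by the first CodeActivity
    private IList _rowsForReplicator;  // bind ReplicatorActivity.InitialChildData to this field

    // ExecuteCode handler of the table-loading CodeActivity (hypothetical name)
    private void LoadTable_ExecuteCode(object sender, EventArgs e)
    {
        _dataTable = ReadTableFromDatabase();  // hypothetical data-access helper
        // DataRowCollection implements only ICollection/IEnumerable, so
        // project the rows into an IList that the ReplicatorActivity accepts.
        _rowsForReplicator = _dataTable.Rows.Cast<DataRow>().ToList();
    }

    private DataTable ReadTableFromDatabase()
    {
        // placeholder for the actual database read
        return new DataTable();
    }
}
```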
From what you are describing, a single CodeActivity can take care of the job. Look at the following code and construct something similar:
public sealed class TableManipulationActivity : CodeActivity<DataTable>
{
    [Required]
    public InArgument<DataTable> TableInArgument { get; set; }

    private DataTable _table;

    protected override DataTable Execute(CodeActivityContext context)
    {
        _table = TableInArgument.Get(context);
        // play with _table value and do whatever you want [All sorts of CRUD operations]
        var result = new DataTable(); // populate this result
        // Manipulate result
        // ...
        // ...
        return result;
    }
}
The above approach is quite simple and reusable, given your problem description.
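As a possible usage sketch (assuming a WF4-style host; LoadSourceTable is a hypothetical helper standing in for the earlier loading step), the activity's InArgument can be supplied through WorkflowInvoker:

```csharp
using System.Activities;
using System.Collections.Generic;
using System.Data;

DataTable sourceTable = LoadSourceTable();   // hypothetical helper that loads the table

var inputs = new Dictionary<string, object>
{
    { "TableInArgument", sourceTable }       // key must match the InArgument property name
};

// Invoke runs the activity synchronously and returns its DataTable result.
DataTable result = WorkflowInvoker.Invoke(new TableManipulationActivity(), inputs);
```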
I have a variable [User::WorkOrderProductIdList] in an SSIS package that contains records of a class object.
Work Order Product class
public class WorkOrderProduct
{
    public Guid workOrderId;
    public Guid workOrderProductId;

    public static Guid WorkOrderId { get; set; }
    public static Guid WorkOrderProductId { get; set; }
}
Main Script Task
public override void InputWOProduct_ProcessInputRow(InputWOProductBuffer Row)
{
    ArrayList wopList = new ArrayList();

    WorkOrderProduct wop = new WorkOrderProduct();
    wop.workOrderId = pWorkOrderID;
    wop.workOrderProductId = pWorkOrderProductID;

    wopList.Add(wop);
}
Assign wopList to [User::WorkOrderProductIdList]
public override void PostExecute()
{
    base.PostExecute();
    Variables.WorkOrderProductIdList = this.wopList;
}
In another script task, it takes in [User::WorkOrderProductIdList] as ReadOnlyVariables.
How can I loop through [User::WorkOrderProductIdList] and extract the values of workOrderId and workOrderProductId for each row?
I can see that my ArrayList [User::WorkOrderProductIdList] contains the records and values, but no members show up in IntelliSense when I type . on the field.
Population
IntelliSense issue aside, you'll only ever have at most one row in there.
You are using a Data Flow and, within that, a Script Component acting as a Transformation.
InputWOProduct_ProcessInputRow, which fires for each row that passes through the component, contains the code shown above.
Every time a new row comes in, you empty out the existing ArrayList and reinitialize it.
Instead, declare that variable at class scope and put the initialization logic in the (not shown) PreExecute method:
ArrayList wopList;
// Or, if you wish to use the Generics
// List<WorkOrderProduct> wopList;

public override void PreExecute()
{
    base.PreExecute();
    /*
     * Add your code here
     */
    this.wopList = new ArrayList();
    // Or
    // this.wopList = new List<WorkOrderProduct>();
}
Consumption
You use an ArrayList to hold the elements of your collection, but that is a weakly typed list.
We don't recommend that you use the ArrayList class for new development. Instead, we recommend that you use the generic List class.
When you enumerate it in your foreach loop, what gets popped off the list is of type Object. Not only do I just "know" that, IntelliSense is telling you that all it knows about the type is that it's Object, because it only offers you the members everything has by virtue of deriving from Object.
Yes, the Watch window has inspection magic to show you what the values are, but do you think the team that wrote the former is the same team that wrote the latter?
Since you "know" what the type should be, declare it as such.
foreach (WorkOrderProduct wopObj in ...
However, the next logical error is probably going to be in accessing
Variables.WorkOrderProductIdList itself. Your snipped image shows you're shredding out the array in the PreExecute method. The sequence of operations is that the Data Flow goes through validation, then the pre-execute sequence, so at that point it will shred the results of your array list, and the value of wopObj_workOrderId is going to be the last element of your array.
I am still baffled trying to work out how to get a DataGridView to update automatically when the content of its DataSource changes, without explicitly triggering DataGridView.Update(). There seems to be no difference at all between a DataTable, List, or BindingList as the (direct) DataSource, and the (indirect) route of an additional BindingSource which uses any of the former as its DataSource.
The DataGridView I am actually using for this is non-editable and just shows entries which are updated by the corresponding entity code. My last attempt was with a BindingSource that uses a BindingList and manipulating the content of the BindingSource within the code.
I have omitted some methods here, which do not play a role for the basic problem.
Form:
private void FormLog_Load(object sender, EventArgs e) {
    ...
    dgvLog.DataSource = Log.Current.SourceEntries;
    ...
}

private void ClearLog() {
    Log.Current.RemoveAll();
}

public void UpdateDataSource() {
    dgvLog.Update();
}
Entity (singleton class):
public class LogEntry {
    public int ID { get; set; }
    public string DateTime { get; set; }
    public string Type { get; set; }
    public string Event { get; set; }
    public string Details { get; set; }
}
public class Log {
    public BindingList<LogEntry> Entries { get; set; }
    public BindingSource SourceEntries { get; set; }

    public Log() {
        Entries = new BindingList<LogEntry>();
        SourceEntries = new BindingSource() { DataSource = Entries };
        ReadAll();
    }

    public void Add(string type, string logEvent, string details = "") {
        LogEntry entry = MapToDB(new LogEntry() {
            Type = type,
            Event = logEvent,
            Details = details
        });
        DB.Write(QueryAdd(entry));
        SourceEntries.Add(entry);
        if (Config.Current.GetForm("Log") != null)
            ((FormLog)Config.Current.GetForm("Log")).UpdateDataSource();
    }

    public void ReadAll() {
        for (int i = SourceEntries.Count - 1; i >= 0; i--) {
            SourceEntries.RemoveAt(i);
        }
        DataTable dt = DB.Read(QueryReadAll());
        if (dt != null) {
            foreach (DataRow row in dt.Rows) {
                SourceEntries.Add(MapToList(row));
            }
        }
        if (Config.Current.GetForm("Log") != null)
            ((FormLog)Config.Current.GetForm("Log")).UpdateDataSource();
    }

    public void RemoveAll() {
        DB.Write(QueryRemoveAll());
        for (int i = SourceEntries.Count - 1; i >= 0; i--) {
            SourceEntries.RemoveAt(i);
        }
        Add("I", "Log cleared");
    }
}
This works, but only when I call UpdateDataSource() (which calls dgvLog.Update()) via a self-written FormStack in another singleton class, which I would like to avoid. Of course, one could simply call dgvLog.Update() within the form itself, but, especially with this log example, it is obvious that this does not help when updating data from/within another form while the form that displays the DataGridView is still open in the background.
Also, as there is no difference (between using DataTable or List, etc. and BindingSource or not) I wonder what the benefit/purpose of BindingList and BindingSource are:
Is this the correct approach or am I missing something!?
By the way, I am using .NET v4.5.2.
It seems there is no difference at all between DataTable, List, BindingList as (direct) DataSource and as (indirect) DataSource with an additional BindingSource which uses any of the former as DataSource.
A BindingSource has a few uses
maintains knowledge of position/current row and can thus provide shared navigation (a DataGridView and textboxes all bound to the same BindingSource means the grid can navigate through records and the textboxes update because they always show the "current row"; see the sketch after this list)
provides sorting and filtering facilities
supports complex binding scenarios where it must help filter a list down to only children of some currently selected parent in a different bindingsource
provides separation, so that multiple controls can each browse a common DataSource at their own position
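A minimal sketch of the shared-navigation point above, assuming a grid and a textbox on the same form (control names are hypothetical) and reusing the Entries list from the question:

```csharp
// Both controls bind through the same BindingSource, so navigating rows in the
// grid moves the BindingSource's current row and the textbox follows it.
var source = new BindingSource { DataSource = Log.Current.Entries };

dgvLog.DataSource = source;                               // grid navigates the list
txtDetails.DataBindings.Add("Text", source, "Details");   // textbox shows the current row's Details
```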
works but only when I call UpdateSource() which calls dgvLog.Update() by using a selfwritten FormStack in another singleton class which I would like to avoid. Of course, one could simply call dgvLog.Update() within the form itself but, esp. with this log example, it is obvious that this does not help when updating data from/within another form while the form that displays the DataGridView is still opened in the background.
DataGridView.Update() is concerned with repainting areas of the control that need it; it has nothing to do with committing changes to underlying data models. Perhaps you need EndEdit, which finishes editing operations on the current row and commits them to the underlying data storage. This also happens when you click a different row in a grid. BindingSource also has an EndEdit method. Mostly you don't need to call these methods yourself.
To share data between forms, pass the DataTable (or list) the data is stored in and bind it through a BindingSource in the second form.
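For example, a second form could take the shared BindingList from the question in its constructor and wrap it in its own BindingSource (a sketch with hypothetical form/control names):

```csharp
using System.ComponentModel;
using System.Windows.Forms;

public partial class FormOther : Form
{
    private readonly BindingSource _source;

    // The second form receives the same BindingList the first form is bound to.
    public FormOther(BindingList<LogEntry> entries)
    {
        InitializeComponent();
        // Changes made to "entries" anywhere raise ListChanged,
        // so every grid bound to it through a BindingSource refreshes.
        _source = new BindingSource { DataSource = entries };
        dgvOtherLog.DataSource = _source;
    }
}
```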
Also, as there is no difference (between using DataTable or List, etc. and BindingSource or not) I wonder what the benefit/purpose of BindingList and BindingSource are:
DataTable is a collection of DataRow. A DataRow is, at its heart, an object array. The end.
BindingList is a list of whatever you want, such as your custom class Person. Arguably more useful ultimately, but it's comparing apples and oranges. If instead you open up the DataSet Designer, you can specify tables that have named, typed columns. In that regard it's not hugely different from writing your own classes (it writes a large amount of good code in a short time, though). I use them as data models sometimes.
Good Day,
I have a situation where I'm using code that contains a class called ImportFileContext. The code looks like:
// One of 5 different types can be passed in
public void AddImportData(CustomType ModelData)
{
    // Depending on which of the 5 types was passed in, the formatted type will change
    FormattedType data = ConvertModelDataToFormattedData(ModelData);

    using (var db = new ImportFileContext())
    {
        // Can this next line be made dynamic?
        db.ImportFormattedData.Add(data);
        db.SaveChanges();
    }
}
Basically, a CustomType will always be passed in to the method. However, there are five different custom types that can be passed in, and depending on which one it is, the data will be formatted differently.
Use cases:
Custom Type passed in: format the data to a specific format, then add that item to a List of the db instance.
Custom Type 2 passed in: format the data to a specific format, then add that item to a List of the db instance.
Custom Type 3 passed in: format the data to a specific format, then add that item to a List of the db instance.
So what I'm looking for is a way to add an item to the appropriate List depending on the data type, without having to write several different methods that test which type I'm receiving and then add the item. I know of the strategy pattern and could use that, but what about adding an item to a list?
I'm really trying to avoid writing code that would look like:
// One of 5 different types can be passed in
public void AddImportData(CustomType ModelData)
{
    // Depending on which of the 5 types was passed in, the formatted type will change
    FormattedType data = ConvertModelDataToFormattedData(ModelData);

    using (var db = new ImportFileContext())
    {
        if (ModelData.GetType() == typeof(CustomType))
            db.ImportFormattedData.Add(data);
        else if (ModelData.GetType() == typeof(CustomType1))
            db.ImportCsvData.Add(data);
        else if (ModelData.GetType() == typeof(CustomType2))
            db.ImportTabDelimetedData.Add(data);
        db.SaveChanges();
    }
}
TIA,
coson
I don't know how feasible this is for your application, but you can always add the specific behavior to the CustomType class and subclasses can implement it however they need to.
public class CustomType
{
    public virtual void FormatAndWriteToDB(ImportFileContext db)
    {
        // base/default behavior goes here; subclasses override as needed
    }
}
And then subclasses override as needed:
public class CustomType1 : CustomType
{
    public override void FormatAndWriteToDB(ImportFileContext db)
    {
        FormattedType data = ConvertModelDataToFormattedData(this);
        db.ImportCsvData.Add(data);
    }
}
That would make your code very clean in the example method:
public void AddImportData(CustomType ModelData)
{
    using (var db = new ImportFileContext())
    {
        ModelData.FormatAndWriteToDB(db);
        db.SaveChanges();
    }
}
Of course you can change this around a bit. For example, if the FormattedType call is common to all of them, you could leave that in the AddImportData method and pass it as an argument to the FormatAndWriteToDB method.
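A sketch of that variant, assuming FormatAndWriteToDB's signature is changed to accept the already-formatted data:

```csharp
public void AddImportData(CustomType ModelData)
{
    // the conversion common to all types stays here...
    FormattedType data = ConvertModelDataToFormattedData(ModelData);

    using (var db = new ImportFileContext())
    {
        // ...and each subclass only decides which DbSet the formatted data goes into
        ModelData.FormatAndWriteToDB(db, data);
        db.SaveChanges();
    }
}
```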
The advantage of this is future additions just require implementing the new subclass of CustomType and no modification is needed to AddImportData.
Sounds to me like you're looking for double dispatch. You can do this with the Visitor pattern--which is basically what David Mason detailed--where you decouple the algorithm from the data by putting logic (the visit) into the class that contains the data. That, of course, works, but it requires you to modify the class outside of the algorithm in order to visit.
I find this complex, and it's really only there because statically typed object-oriented languages don't normally do runtime overload resolution. Fortunately, C# 4 introduced the dynamic keyword, which allows us to implement double dispatch much more easily--or at least in a way that looks more like method overloading. You create the method overloads, assign the value to a dynamic variable, then invoke the method; the overload that gets called is chosen at runtime based on the value's actual type. For example:
public static void AddImportData(CustomType ModelData)
{
    FormattedType data = ConvertModelDataToFormattedData(ModelData);

    using (var db = new ImportFileContext())
    {
        dynamic temp = ModelData;
        ImportData(temp, data, db);
    }
}
private static void ImportData(CustomType modelData, FormattedType data, ImportFileContext db)
{
    db.ImportFormattedData.Add(data);
    db.SaveChanges();
}

private static void ImportData(CustomType1 modelData, FormattedType data, ImportFileContext db)
{
    db.ImportCsvData.Add(data);
    db.SaveChanges();
}

private static void ImportData(CustomType2 modelData, FormattedType data, ImportFileContext db)
{
    db.ImportTabDelimetedData.Add(data);
    db.SaveChanges();
}
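For what it's worth, the dispatch happens when the call is made through the dynamic variable: the runtime binder picks the most specific overload for the value's actual type. A small usage sketch (GetNextModel is a hypothetical source of models):

```csharp
CustomType model = GetNextModel();   // hypothetical; suppose it actually returns a CustomType2
AddImportData(model);                // the dynamic call inside picks the CustomType2 overload,
                                     // so the data ends up in ImportTabDelimetedData
```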
I have more details on my blog at: http://msmvps.com/blogs/peterritchie/archive/2010/05/24/using-the-dynamic-keyword-in-c-to-improve-object-orientation.aspx
I have a DataTable with two columns. I want to store the rows of each column in an array so that I can return the rows for each individual column. This way I believe I can populate a list box (the option text from one column and the option value from the other).
Here is what I started out with:
public object dbAccess2()
{
    ArrayList arg = new ArrayList();
    DataTable myTable = GenericDataAccess.ExecuteSelectCmd("Vehicle_GetMakes");

    foreach (DataRow dRow in myTable.Rows)
    {
        arg.Add(dRow["VehicleMake"]);
        arg.Add(dRow["VehicleMakeId"]);
    }
    return arg.ToArray();
}
You can make a class to hold each individual row in this case and use a List<T> to hold the data, like this:
public class Vehicle
{
    public string Make { get; set; }
    public string MakeId { get; set; }
}

// ...

List<Vehicle> Vehicles = new List<Vehicle>();

// ...

foreach (DataRow dRow in myTable.Rows)
{
    Vehicles.Add(
        new Vehicle {
            Make = dRow["VehicleMake"].ToString(),
            MakeId = dRow["VehicleMakeId"].ToString()
        });
}
And later, you can easily populate a listbox with this list:
listBox.DataSource = Vehicles;
listBox.DisplayMember = "Make";
But you may well want to use a ListView instead.
Don't use the ArrayList class, it's practically obsolete. Use arrays or generic lists instead, so that you get a typed result.
You can get the columns into lists like this:
List<string> makes = myTable.AsEnumerable().Select(r => (string)r["VehicleMake"]).ToList();
List<int> makeIds = myTable.AsEnumerable().Select(r => (int)r["VehicleMakeId"]).ToList();
Or into arrays:
string[] makes = myTable.AsEnumerable().Select(r => (string)r["VehicleMake"]).ToArray();
int[] makeIds = myTable.AsEnumerable().Select(r => (int)r["VehicleMakeId"]).ToArray();
An alternative to populating a dropdown (as that is what I assume that you mean, as a ListBox doesn't have options) from arrays is to use data binding:
theDropdown.DataTextField = "VehicleMake";
theDropdown.DataValueField = "VehicleMakeId";
theDropdown.DataSource = myTable;
theDropdown.DataBind();
What you're attempting is to manipulate real objects without the benefit of object design, working with raw data instead. This has very broad and far-reaching problems and is quite far behind current development practice - too broad to go into here, but not least of your problems is that you're building a brittle coupling between your application and your database.
Step one is to model an actual Vehicle class.
public class Vehicle
{
    public string MakeId { get; set; }
    public string Make { get; set; }
}
Step two is to build a managing class for your Vehicles ("Fleet" perhaps?) which can abstract the Vehicle collection behind an IEnumerable interface. Internally you will store the Vehicles collection as a concrete generic collection (a List or Dictionary most likely) and avoid at all costs the really-should-be-considered-obsolete ArrayList structure.
public class Fleet
{
    private List<Vehicle> _vehicles = new List<Vehicle>();

    public IEnumerable<Vehicle> Vehicles
    {
        get { return this._vehicles; }
    }
}
Step three is to internalise to this class (or a class behind this one or behind that one etc, etc) the CRUD operations which will interact with the Database stored data. That's truly an implementation detail, but one you'll apply for all similar classes throughout your architecture.
At this point you'll be able to work with the IEnumerable property directly with standard Databinding methods.
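A rough sketch of that last step, assuming Fleet gets an Add method (not shown above) and reusing the stored procedure and column names from the question:

```csharp
var fleet = new Fleet();

// Step three's loading code would live behind Fleet; it is inlined here only for illustration.
DataTable myTable = GenericDataAccess.ExecuteSelectCmd("Vehicle_GetMakes");
foreach (DataRow row in myTable.Rows)
{
    fleet.Add(new Vehicle               // hypothetical Add method wrapping _vehicles.Add
    {
        MakeId = row["VehicleMakeId"].ToString(),
        Make = row["VehicleMake"].ToString()
    });
}

// Standard data binding against the IEnumerable property.
listBox.DataSource = fleet.Vehicles.ToList();
listBox.DisplayMember = "Make";
listBox.ValueMember = "MakeId";
```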
In an abstract class (C# 3), there is a virtual method named GetData which returns a DataTable. The algorithm of the method is as following:
Get SQL query string from the class.
Get DataTable from database using above query.
Do transformations on DataTable and return it.
In the third step, I clone the original DataTable in order to change the Type of a column (I can't do that on a populated table, and can't set it up in the second step) and enumerate every row, copying and transforming the data from the original one. The transformations depend on the class; each class has private methods which transform the data in its own way.
The question is: how do I write a method in a base class which is driven by the following three parameters: the name of the column to be converted, the Type of the new column, and the action performed during the transformation? The first two are quite simple, but I'm not sure about the third one. I thought about designing a new class which will store these three parameters, with the action stored as the following delegate:
public delegate object Convert(object objectToConvert);
The conversion would look something like that:
int rowCounter = 0;
foreach (DataRow row in dt.Rows)
{
    foreach (var item in Params)
    {
        row[item.ColumnIndex] = item.Convert(originalDataTable.Rows[rowCounter].ItemArray[item.ColumnIndex]);
    }
    ++rowCounter;
}
For now, I have to override the GetData() method in every class where I want transformations, which causes a large amount of duplicated code. What I want to achieve is a base class driven by the parameters mentioned above. Is the above solution a good fit for this problem, or is there another way to do it?
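For illustration, the parameter class sketched above might look roughly like this (the name and exact members are assumptions; Convert is the delegate declared earlier):

```csharp
public class ColumnTransformation
{
    public string ColumnName { get; set; }   // name of the column to be converted
    public int ColumnIndex { get; set; }     // index used by the conversion loop above
    public Type NewColumnType { get; set; }  // Type of the new column
    public Convert Convert { get; set; }     // the transformation action
}
```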
First, I'm not a big fan of DataTable; you can use a DataReader instead if all you're doing is iterating.
using (var dataReader = ....)
{
    while (dataReader.Read())
    {
        foreach (var item in Params)
        {
            // convert each configured column as it is read from the reader,
            // instead of cloning and re-walking a DataTable afterwards
            var converted = item.Convert(dataReader[item.ColumnIndex]);
            // ... use "converted" to build your result
        }
    }
}
Second, make a base class with a virtual method containing the common code, have the objects in the Params list inherit from that class, and override the method only when necessary.
public class MyClass
{
    public virtual object Convert(DataRow row)
    {
        ....
    }
}

public class foo : MyClass
{
}
Okay, for now I have the following solution, which fully satisfies me:
Every convertor implements the following interface:
public interface IConvertor
{
    object Convert(object item);
}
Params class is as following:
Type ColumnType { get; protected set; } // New type
string[] ColumnNames { get; protected set; }
IConvertor Convertor { get; protected set; }
Every object has its own Params array property. I've got one method for every derived class, and all I have to do is set up the parameters.
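For illustration, a concrete convertor against that interface might look like this (a hypothetical example, not part of the original solution):

```csharp
public class StringToIntConvertor : IConvertor
{
    public object Convert(object item)
    {
        // turn the original cell value into the new column's type
        return int.Parse(item.ToString());
    }
}
```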