I have an SSIS package which can:
1) read multiple input files
2) store the data from the files in a database
3) archive the input files
I have to write a functional test using SpecFlow.
One of my test cases is:
Check that the row count of the table in the DB equals the sum of the lines across all of the input files read.
I am not sure how I can achieve this. Can anyone help me with how to get the sum of the lines in each file?
To check the row count in the table in the DB:
Add a variable to your SSIS package, named something like iRowCount.
In the Data Flow Task where the DB is the source, add a Row Count transformation and assign its value to the variable iRowCount.
To get the summation of all lines in each input file read:
Same concept, another two variables, named something like iFilesRowCount and iFilesRowCountTotal.
Then, in each data flow that reads a file, add a Row Count transformation and assign its value to iFilesRowCount. After each of those data flows, add a Script Task that does iFilesRowCountTotal = iFilesRowCountTotal + iFilesRowCount.
Then, somewhere towards the bottom, add a Script Task that compares iRowCount (the DB count) with iFilesRowCountTotal, and on the arrows exiting that task create precedence constraints to give you a 'True' path and a 'False' path.
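If you would rather perform the check from the SpecFlow side, here is a minimal sketch of a step definition; the table name dbo.TargetTable, the archive folder path, the connection string and the absence of header rows in the files are all assumptions, so adjust them to your package.

// Minimal SpecFlow sketch; dbo.TargetTable, the folder and the connection string are assumptions.
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class RowCountSteps
{
    [Then(@"the table row count should equal the total lines in the input files")]
    public void ThenRowCountMatchesInputFiles()
    {
        // Sum the line counts of every processed input file.
        long fileLineTotal = Directory.GetFiles(@"C:\Archive", "*.txt")
                                      .Sum(f => File.ReadLines(f).LongCount());

        // Count the rows that landed in the target table.
        long tableRowCount;
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT COUNT_BIG(*) FROM dbo.TargetTable", conn))
        {
            conn.Open();
            tableRowCount = (long)cmd.ExecuteScalar();
        }

        Assert.AreEqual(fileLineTotal, tableRowCount);
    }
}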
I am downloading a JSON file and successfully putting it into a table using a Data Flow and a Script Component.
Within my JSON file there is a count of the number of rows I should import and how many pages it will take to import the data.
e.g.
"count": 1925,
"current page": 1,
"total pages": 2,
"results":
In my URL to get the JSON file, I have to specify which page I'm trying to get.
e.g. &page=" + PageNo
I would like to automate my SSIS package to do this. I have managed to assign user variables to hold the count of rows imported and the count of rows to be imported, by adding them as ReadWrite variables in the Script Component and assigning them in the PostExecute area.
I would like to use a For Loop container to loop the Data Flow and keep importing data until the count of rows imported equals the count of rows to be imported.
My problem is that when I execute the Control Flow, I keep getting the error:
The collection of variables locked for read and write access is not available outside of PostExecute.
How do I get the variables to change so I can use them in the For Loop container?
Added code as per Brad's request.
public override void PostExecute()
{
    base.PostExecute();
    Variables.RowsImported = i;
    Variables.RowsTobeImported = totalcount;

    if (totalcount > i)
        Variables.PageNoExtra = Variables.PageNoExtra + 1;

    MessageBox.Show(Variables.RowsImported.ToString());
    MessageBox.Show(Variables.RowsTobeImported.ToString());
    //MessageBox.Show(Variables.PageNoExtra.ToString());
}
Your issue appears to be that you are accessing the variables in PostExecute; move that work outside of PostExecute. I access variables from Script Tasks like the code examples below. I had an issue accessing them a different way before (I can't remember exactly what the problem was), but I know this approach solved it, and it is how I do it all the time.
To do it this way you have to set the ReadOnly/ReadWrite variables in your Script Task's property editor, as in the screenshot, to make them accessible.
Another possible issue is that you are trying to modify the variables in PostExecute; since that does not run until execution is complete, it could cause problems (I'm not sure, but I have never done any variable work in PostExecute).
// Read a variable from the SSIS package in a C# Script Task
string VariableUseInScript = Dts.Variables["User::VariableAccessedFromSSISPackage"].Value.ToString();

// Set a variable value in the SSIS package to use in other parts of the package
Dts.Variables["User::SSISVariableValueToSet"].Value = "ValueToSet";
NOTE: In the above screenshot I only have one variable in each list. You can have multiple in each, or they can all be in ReadOnlyVariables (if you are not updating them) or in ReadWriteVariables (if you are updating them).
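For the loop itself, here is a minimal sketch of a Script Task placed inside the For Loop after the Data Flow; RowsImported, RowsTobeImported and PageNoExtra are the package variables from the question and must be listed in the task's ReadWriteVariables, while everything else is an assumption.

// Sketch only: recompute the paging state after each pass of the loop.
public void Main()
{
    int imported = (int)Dts.Variables["User::RowsImported"].Value;
    int toImport = (int)Dts.Variables["User::RowsTobeImported"].Value;

    if (imported < toImport)
    {
        // Request the next page on the following pass of the loop.
        Dts.Variables["User::PageNoExtra"].Value =
            (int)Dts.Variables["User::PageNoExtra"].Value + 1;
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}

The For Loop container's EvalExpression can then simply be @[User::RowsImported] < @[User::RowsTobeImported], so the loop exits once the imported count catches up.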
I have an SSIS package that assembles a dynamic SQL statement and executes it on a different server, with the results needing to be written back to the first server.
Because the SQL is created and passed in as a variable, a Foreach Loop is used to run each instance. The results are put into an Object variable, and this works fine. If I put my Script Task in the Foreach Loop itself, I can write the results back to the original server. However, for performance reasons, I would really like to get the insert out of the Foreach Loop and read the result set / Object variable so that I can open one connection and write all the data in one go. But when I pull the object that reads the results and writes to the database out of the loop, it only writes the last row of data, not all of them.
How can I get to all the rows in the result set outside of the Foreach Loop? Is there a pointer to the first row or something? I can't imagine I'm the first person to need to do this, but my search for answers has come up empty. Or maybe I'm just under-caffeinated.
Well, it can be simplified if some conditions are met. Generally, SSIS is metadata-centric, i.e. it works against a fixed set of columns and their types. So, if each SQL query you run returns the same set of columns, including column names and data types, then you can try the following approach:
In the Foreach Loop, run the SQL commands and store their results in an Object variable.
Then create a Data Flow Task with a Script Component source, fetching its rows from the Object variable from step 1. Add the rows to an SQL table as you usually would; if needed, you can include other data such as the SQL query text. The resulting rows can be written to the table with a DFT destination.
How to use an Object variable as a data source - there are already good answers to this question.
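As a rough illustration of that Script Component source, the usual pattern is to shred the ADO recordset held by the Object variable into a DataTable and push each row to the output buffer. In this sketch the variable name ResultSet (listed in ReadOnlyVariables) and the output column SomeColumn are assumptions; map them to whatever your queries actually return.

// Sketch of CreateNewOutputRows in a Script Component configured as a source.
using System.Data;
using System.Data.OleDb;

public override void CreateNewOutputRows()
{
    var table = new DataTable();

    // Fill accepts the ADO recordset stored in the SSIS Object variable.
    var adapter = new OleDbDataAdapter();
    adapter.Fill(table, Variables.ResultSet);

    foreach (DataRow row in table.Rows)
    {
        Output0Buffer.AddRow();
        Output0Buffer.SomeColumn = row["SomeColumn"].ToString();
        // ... assign the remaining output columns here ...
    }
}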
I'm new to SSIS; your ideas or solutions are greatly appreciated.
I have a flat file where the first row contains the file details (not a header). The second row onwards is the actual data.
Data description
First-row format= Supplier_name, Date, number of records in the file
eg:
Supplier_name^06022017^3
ID1^Member1^NEW YORK^050117^50.00^GENERAL^ANC
ID2^Member2^FLORIDA^050517^50.00^MOBILE^ANC
ID3^Member3^SEATTLE^050517^80.00^MOBILE^ANC
EOF
Problem
Using SSIS, I want to split the first row into output1 and the second row onwards into output2.
I thought I could do this with a Conditional Split, but I'm not sure what condition to give in order to split the rows. Should I try a Multicast?
Thanks
I would handle this by using a Script Task (BEFORE the data flow) to read the first row and do whatever you want with it.
Then, in the Data Flow Task, I would set the flat file source to skip the first row and import from the second row on as data.
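A minimal sketch of that Script Task follows; the variable names User::FilePath, User::SupplierName, User::FileDate and User::ExpectedRecordCount are assumptions, so substitute your own.

// Sketch only: read the first line of the incoming file and park its parts
// in package variables before the Data Flow runs.
using System.IO;
using System.Linq;

public void Main()
{
    string path = Dts.Variables["User::FilePath"].Value.ToString();
    string firstLine = File.ReadLines(path).First();   // e.g. Supplier_name^06022017^3

    string[] parts = firstLine.Split('^');
    Dts.Variables["User::SupplierName"].Value = parts[0];
    Dts.Variables["User::FileDate"].Value = parts[1];
    Dts.Variables["User::ExpectedRecordCount"].Value = int.Parse(parts[2]);

    Dts.TaskResult = (int)ScriptResults.Success;
}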
Thank you all. Here is an alternative solution.
I used a Script Component in SSIS to do this.
Step 1: Create a variable called RowNumber.
Step 2: Then add a Script Component which adds an additional column and increments the row number.
SSIS Script component
private int m_rowNumber;

public override void PreExecute()
{
    base.PreExecute();
    m_rowNumber = 0;
}

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    m_rowNumber++;
    Row.RowNumber = m_rowNumber;
}
Step 3: Use the output of the Script Component as the input of a Conditional Split and create a condition with RowNumber == 1.
The Conditional Split will then route the header row and the data rows to their respective outputs.
I would first make sure that you have the correct number of columns in your Flat File Connection:
Edit the Flat File Connection -> Advanced tab, and press the New button to add columns. In your example you should have 7, Column 0 to Column 6.
Now add a Conditional Split and add two case statements:
Output Name    Condition
HeaderRow      [Column 0] == "Supplier_Name"
DetailRow      [Column 0] != "Supplier_Name"
Now route these to Output 1 and Output 2.
Expanding on Tab Allerman's answer.
For our project, we used a PowerShell script inside an Execute Process Task, which runs a simple PowerShell command to grab the first line of the file.
See this MSDN blog on how to run a PowerShell script.
PowerShell script to get the first line:
Get-Content C:\foo\yourfolderpath\yourfilename.txt -First 1
This not only helps in cases like yours, but generally helps avoid processing large files (GBs and upwards) that have an incorrect header. This simple PowerShell command executes in milliseconds, as opposed to most processes/scripts, which need to load the full file into memory, slowing things down.
I'm new to VS and SSIS. I'm trying to consolidate data in Visual Studio using the Script Component Transformation Editor. The data currently looks like this:
date type seconds
2016/07/07 personal 400
2016/07/07 business 300
2016/07/07 business 600
and transform it to look like this:
date type avgSeconds totalRows
2016/07/07 personal 400 1
2016/07/07 business 450 2
Basically, counting the rows and taking the average of seconds per type and date.
I tried both the VB.NET and C# options and am open to either (I'm new to both). Does anyone have any ideas on how I can do this?
I'm using VS 2012. I'm thinking I need to create a temp or buffer table to keep count of everything as I go through Input0_ProcessInputRow and then write it to the output at the end; I just can't figure out how to do this. Any help is much appreciated!
Based on your comments this might work.
Set up a data flow task that will do a merge join to combine the data
https://www.simple-talk.com/sql/ssis/ssis-basics-using-the-merge-join-transformation/
Send the data to a staging table. This can be created automatically by the OLE DB destination if you click 'New' next to the table drop-down. If you plan to rerun this package, you will need to add an Execute SQL Task before the Data Flow Task to delete from or truncate this staging table.
Create an Execute SQL Task with your aggregation query. Depending on your situation, an insert could work. Something like:
Insert ProductionTable
Select date, type, AVG(Seconds) avgSeconds, Count(*) totalRows
From StagingTable
Group By date, type
You can also use the above query, minus the insert into the production table, as a source in a Data Flow Task in case you need to apply more transforms after the aggregation.
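Alternatively, if you want to stay entirely inside the Script Component as described in the question, here is a minimal sketch of the buffer-and-emit idea using an asynchronous output (the output must not be synchronous to the input). The column names date, type, seconds, avgSeconds and totalRows follow the layout above but are otherwise assumptions, and the sketch treats date and type as strings.

// Sketch only: accumulate per (date, type) while rows stream in, then emit the
// aggregated rows once the input is exhausted.
// Requires System.Collections.Generic and System.Linq at the top of ScriptMain.
private Dictionary<string, List<int>> groups = new Dictionary<string, List<int>>();

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    string key = Row.date.ToString() + "|" + Row.type;
    if (!groups.ContainsKey(key))
        groups[key] = new List<int>();
    groups[key].Add(Row.seconds);
}

public override void Input0_ProcessInput(Input0Buffer Buffer)
{
    base.Input0_ProcessInput(Buffer);   // runs Input0_ProcessInputRow for every row

    if (Buffer.EndOfRowset())
    {
        foreach (var group in groups)
        {
            string[] keyParts = group.Key.Split('|');
            Output0Buffer.AddRow();
            Output0Buffer.date = keyParts[0];
            Output0Buffer.type = keyParts[1];
            Output0Buffer.avgSeconds = (int)group.Value.Average();
            Output0Buffer.totalRows = group.Value.Count;
        }
    }
}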
I have a text file that contains a list of files to load into the database.
The list contains two columns:
FilePath,Type
c:\f1.txt,A
c:\f2.txt,B
c:\f3.txt,B
I want to provide this file as the source to SSIS. I then want it to go through the list line by line. For each line, I want it to read the file in the FilePath column and check the Type.
If the type is A, then I want it to ignore the first 4 lines of the file located at the FilePath of the current line and then load the rest of the data inside that file into a table.
If the type is B, then I want it to open the file and copy the first column of the file into table 1 and the second column into table 2, for all of the lines.
I would really appreciate it if someone could provide me with a high-level list of steps I need to follow.
Any help is appreciated.
Here is one way of doing it within SSIS. The steps below are with respect to SSIS 2008 R2.
Create an SSIS package and create three package variables, namely FileName, FilesToRead and Type. The FilesToRead variable will hold the list of files and their type information. We will have a loop that goes through each of those records and stores the information in the FileName and Type variables on every iteration.
On the control flow tab, place a Data Flow Task followed by a ForEach Loop container. The data flow task will read the file containing the list of files that have to be processed. The loop will then go through each file. Your control flow tab will finally look something like this. For now, there will be errors because nothing is configured. We will get to that shortly.
On the connection manager section, you need four connections.
First, you need an OLE DB connection to connect to the database. Name this as SQLServer.
Second, a flat file connection manager to read the file that contains the list of files and types. This flat file connection manager will contain two configured columns, namely FileName and Type. Name this as Files.
Third, another flat file connection manager to read all files of type A. Name this as Type_A. In this flat file connection manager, enter the value 4 in the text box Header rows to skip so that the first four rows are always skipped.
Fourth, one more flat file connection manager to read all files of type B. Name this as Type_B.
Let's get back to the control flow. Double-click on the first data flow task. Inside the data flow task, place a flat file source that reads the list of files using the connection manager Files, and then place a Recordset Destination. Configure the variable FilesToRead in the Recordset Destination. Your first data flow task will look as shown below.
Now, let's go back to the control flow tab again. Configure the ForEach loop as shown below. This loop will go through the recordset stored in the variable FilesToRead. Since the recordset contains two columns, each time a record is looped through, the variables FileName and Type will be assigned the values of the current record.
Inside the ForEach loop container, there are two data flow tasks, namely Type A files and Type B files. You can configure each of these data flow tasks according to your requirements to read the files from the connection managers. However, we need to disable the tasks based on the type of file that is being read:
Type A files data flow task should be enabled only when A type files are being processed.
Similarly, Type B files data flow task should be enabled only when B type files are being processed.
To achieve this, click on the Type A files data flow task and press F4 to bring up the properties. Click on the Ellipsis button available on the Expressions property.
On the Property Expressions Editor, select the Disable property and enter the expression !(@[User::Type] == "A")
Similarly, click on the Type B files data flow task and press F4 to bring up the properties. Click on the Ellipsis button available on the Expressions property.
On the Property Expressions Editor, select the Disable property and enter the expression !(@[User::Type] == "B")
Here is a sample Files.txt containing only A type files in the list. When the package is executed to read this file, you will notice that only the Type A files data flow task executes.
Here is another sample Files.txt containing only B type files in the list. When the package is executed to read this file, you will notice that only the Type B files data flow task executes.
If Files.txt contains both A and B type files, the loop will execute the appropriate data flow task based on the type of file that is being processed.
Configuring Data Flow task Type A files
Let's assume that your flat files of type A have a three-column layout, as shown below, with comma-separated values. The file data here is shown using Notepad++ with all special characters displayed. CR LF denotes that the lines end with a carriage return and line feed. This file is stored in the path C:\f1.txt
We need a table in the database to import the data. Let's create a table named dbo.Table_A in the SQL Server database as shown here.
Now, go to the SSIS package. Here are the details to configure the flat file connection manager named Type_A. Give a name to the connection manager. You need to specify the value 4 in the Header rows to skip textbox. Your flat file connection manager should look something like this.
On the Advanced tab, you can rename the column names if you would like to.
Now that the connection manager is configured, we need to configure data flow task Type A files to process the corresponding files. Double-click on the data flow task Type A files. Place a Flat file source and OLE DB Destination inside the task.
The flat file source has to be configured to read the files from the flat file connection manager.
The data flow task doesn't do anything special. It simply reads the flat files of type A and inserts the data into the table dbo.Table_A. Now, we need to configure the OLE DB Destination to insert the data into the database. The column names configured in the flat file connection manager and in the table are not the same, so they have to be mapped manually.
Now that the data flow task is configured, we have to make sure that the file path being read from Files.txt is passed on correctly. To do this, click on the Type_A flat file connection manager and press F4 to bring up the properties. Set the DelayValidation property to True. Click on the Ellipsis button on the Expressions property.
On the Property Expression builder, select the ConnectionString property and set it to the expression @[User::FileName]
Here is a sample Files.txt file containing Type A files only.
Here are the sample type A files f01.txt and f02.txt
After the package execution, following data will be found in the table Table_A
The above configuration steps also have to be followed for Type B files. However, the data flow task will look slightly different since the file processing logic is different. The data flow task Type B files would look something like this. Since you have to insert the two columns in type B files into different tables, you have to use a Multicast transformation, which creates clones of the input data. You can use each Multicast output to feed a different transformation or destination.
Hope that helps you to achieve your task.
I would recommend that you create an SSIS package for each different type of file load you're going to do. You can execute those packages from another program; see here: How to execute an SSIS package from .NET?
Given this information, you can write a quick program to execute the relevant packages:
var jobs = File.ReadLines("C:\\temp\\order.txt")
    .Skip(1)
    .Select(line => line.Split(','))
    .Select(tokens => new { File = tokens[0], Category = tokens[1] });

foreach (var job in jobs)
{
    // execute the relevant package for job.Category using job.File
}
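For the body of that loop, here is a minimal sketch using the SSIS runtime API (reference Microsoft.SqlServer.ManagedDTS); the package paths and the User::FileName variable are assumptions.

// Sketch only: load and run a package per job.
using Microsoft.SqlServer.Dts.Runtime;

var app = new Application();
foreach (var job in jobs)
{
    string packagePath = job.Category == "A"
        ? @"C:\Packages\LoadTypeA.dtsx"
        : @"C:\Packages\LoadTypeB.dtsx";

    Package package = app.LoadPackage(packagePath, null);

    // Hand the file path to the package before running it.
    package.Variables["User::FileName"].Value = job.File;

    DTSExecResult result = package.Execute();
}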
My solution would use N + 1 flat file Connection Managers to handle the source files. CM A would address the skip-the-first-4-rows file format, CM B sounds like it's just a 2-column file, etc. The last CM would be used to parse the command file you've illustrated.
Now that you have all of those Connection Managers defined, you can go about the processing logic.
Create 3 variables: 2 of type String (CurrentPath, CurrentType) and 1 of type Object, which I called Recordset.
The first Data Flow reads all the rows from the flat file source using "CM Control." This is the data you supplied in your example.
We will then use that Recordset object as the source for a ForEach Loop Container in what is commonly referred to as shredding. Bingle the term "shred recordset ssis" and you're bound to hit a number of articles describing how to do it. The net result is that for each row in that source CM Control file, you will assign those values to the CurrentPath and CurrentType variables.
Inside that Loop container, create a central point for control to radiate out from. I find a Script Task works wonderfully for this. Drag it onto the canvas, give it a strong name to indicate it's not used for anything, and then create a data flow to handle each processing permutation.
The magic comes from using Expressions. Dang near everything in SSIS can have expressions set on its properties, which is what separates the professionals from the poseurs. Here, we will double-click on the line connecting to a given data flow and change the constraint type from "Constraint" to "Expression and Constraint". The expression you would then use is something like @[User::CurrentType] == "A". This will ensure that path is only taken when both the parent task succeeded and the condition is true.
The second bit of expression magic will be applied to the connection managers themselves. They will need to have their ConnectionString property driven by the value of the @[User::CurrentPath] variable. This will allow a design-time value of C:\filea.txt but a runtime value, from the control file, of \\network\share\ClientFileA.txt. Unless all the files have the same structure, you'll most likely need to set DelayValidation to True in the properties. Otherwise, SSIS will fail PreValidation, as all of "CM A" to "CM N" would be using that CurrentPath variable, which may or may not be a valid connection string for that file layout.