How would you write a function (an external function in C#, F#, a PowerShell script, etc.)
List<string> GetFields(string ssisPackageName, string fileSourceName);
to get the field list of an SSIS package? Since the package is an XML file, can XQuery be used to get the list?
Or, even better, to return more information:
class Field
{
    public string Name { get; set; }
    public string Type { get; set; }
}
List<Field> GetFields(string ssisPackageName, string fileSourceName);
@billinkc is right: you should keep data typing issues in mind. That said, you can at best retrieve the Code Page and Unicode values from the Flat File Connection Manager itself. The following code should get you started; you might need some lookups for the code page and data type attributes.
string path = #"MyPathTo\Package.dtsx";
XNamespace dts = "www.microsoft.com/SqlServer/Dts";
XDocument doc = XDocument.Load(path);
// get all connections
var connections = from ele in doc.Descendants(dts + "ConnectionManager")
                  where ele.Attributes(dts + "ObjectName").Count() != 0
                  select ele;
foreach (var connection in connections)
{
    // look for your flat file connection
    if (connection.Attribute(dts + "ObjectName").Value == "Flat File Connection Manager")
    {
        var connectionDetails = connection.Element(dts + "ObjectData").Element(dts + "ConnectionManager");
        Console.WriteLine("CodePage: " + connectionDetails.Attribute(dts + "CodePage").Value);
        Console.WriteLine("Unicode: " + connectionDetails.Attribute(dts + "Unicode").Value);

        var columnList = connection.Descendants(dts + "FlatFileColumn");
        foreach (var column in columnList)
        {
            Console.WriteLine("Column name: " + column.Attribute(dts + "ObjectName").Value);
            Console.WriteLine("Column type: " + column.Attribute(dts + "DataType").Value);
        }
    }
}
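Wrapped into the signature from the question, a minimal sketch might look like this. It assumes the Field class above, and note that DataType comes back as the raw numeric code stored in the .dtsx file, so mapping it to a friendly type name is still up to you:

public static List<Field> GetFields(string ssisPackageName, string fileSourceName)
{
    // requires System.Linq, System.Xml.Linq and System.Collections.Generic
    XNamespace dts = "www.microsoft.com/SqlServer/Dts";
    XDocument doc = XDocument.Load(ssisPackageName);

    // find the connection manager whose ObjectName matches the requested source,
    // then project its flat file columns into Field objects
    return doc.Descendants(dts + "ConnectionManager")
              .Where(c => (string)c.Attribute(dts + "ObjectName") == fileSourceName)
              .Descendants(dts + "FlatFileColumn")
              .Select(col => new Field
              {
                  Name = (string)col.Attribute(dts + "ObjectName"),
                  Type = (string)col.Attribute(dts + "DataType")
              })
              .ToList();
}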
We are reviewing our logs to make them more effective for audit analysis, so we are trying to include the action name and all inputs supplied by the user on each call.
Consider this sample code:
public JsonResult SampleActionCode(int inputA, Guid inputB, bool inputC)
{ ... }
So our code would be something similar to this, added to that action:
string actionName = this.ControllerContext.RouteData.Values["action"].ToString();
string userInputs = inputA.ToString() + " , " + inputB.ToString() + " , " + inputC.ToString();
string userExecuted = actionName + " , " + userInputs;
//save to database
How could we make a general code that would cycle all inputs available and concatenate those into a string, similar to userInputs shown?
The query string is stored in Request.QueryString, so you can iterate over its keys:
var parts = new List<string>(); // using System.Collections.Generic
foreach (var key in Request.QueryString.AllKeys)
{
    parts.Add(Request.QueryString[key]);
}
string result = string.Join(", ", parts);
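That covers the query string only. To log every model-bound action argument (route values and form fields included), one option is an action filter. Here is a minimal sketch, assuming ASP.NET MVC 5; SaveToDatabase is a hypothetical stand-in for your persistence code:

using System.Linq;
using System.Web.Mvc;

public class LogActionInputsAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // ActionParameters holds every bound argument by name
        string actionName = filterContext.ActionDescriptor.ActionName;
        string userInputs = string.Join(" , ",
            filterContext.ActionParameters.Select(p => p.Key + "=" + p.Value));
        string userExecuted = actionName + " , " + userInputs;
        // SaveToDatabase(userExecuted);
        base.OnActionExecuting(filterContext);
    }
}

Decorating SampleActionCode with [LogActionInputs] would then record the action name and all three inputs without repeating the logging code in each action.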
I'm using a SQL Server database connected to my ASP.NET MVC application via Entity Framework.
The application is deployed on a virtual machine managed by a cluster. Currently max memory is set to 32 GB; I could raise it to 64 GB.
One of the features available to users is loading CSV files into the database. Each file represents one day and consists of 80-90k records with 200 columns. For this I used the following setup:
public ActionResult AddRecordsFromCSVFile2(FileInput fileInputs)
{
    BWSERWISEntities bWDiagnosticEntities2 = new BWSERWISEntities();
    bWDiagnosticEntities2.Configuration.AutoDetectChangesEnabled = true;
    bWDiagnosticEntities2.Configuration.ValidateOnSaveEnabled = false;
    var Files = fileInputs.File;
    if (ModelValidate())
    {
        foreach (var fileInput in Files)
        {
            List<VOLTER_LOGS> LogsToAdd = new List<VOLTER_LOGS>();
            using (MemoryStream memoryStream = new MemoryStream())
            {
                fileInput.InputStream.CopyTo(memoryStream);
                memoryStream.Seek(0, SeekOrigin.Begin);
                using (var streamReader = new StreamReader(memoryStream))
                {
                    var csvConfig = new CsvConfiguration(CultureInfo.InvariantCulture)
                    {
                        Delimiter = ";",
                        HasHeaderRecord = false
                    };
                    using (var csvReader = new CsvReader(streamReader, csvConfig))
                    {
                        var records = csvReader.GetRecords<ReadAllRecords>().ToList();
                        var firstRecord = records.Where(x => x.Parametr4 != "0" && x.Parametr5 != "0" && x.Parametr6 != "0").FirstOrDefault();
                        var StartDateToCheck = Convert.ToDateTime(firstRecord.Parametr4 + "-" + firstRecord.Parametr5 + "-" + firstRecord.Parametr6 + " " + firstRecord.Parametr3 + ":" + firstRecord.Parametr2 + ":" + firstRecord.Parametr1, new CultureInfo("en-GB", true));
                        var lastRecord = records.Where(x => x.Parametr4 != "0" && x.Parametr5 != "0" && x.Parametr6 != "0").LastOrDefault();
                        var EndDateToCheck = Convert.ToDateTime(lastRecord.Parametr4 + "-" + lastRecord.Parametr5 + "-" + lastRecord.Parametr6 + " " + lastRecord.Parametr3 + ":" + lastRecord.Parametr2 + ":" + lastRecord.Parametr1, new CultureInfo("en-GB", true));
                        int matchingMachineID = int.Parse(firstRecord.Parametr7);
                        var matchingElements = bWDiagnosticEntities2.VOLTER_LOGS.Where(x => (x.Parametr7 == matchingMachineID) && (x.Parametr1 >= StartDateToCheck && x.Parametr1 <= EndDateToCheck)).ToList();
                        bWDiagnosticEntities2.VOLTER_LOGS.RemoveRange(matchingElements);
                        bWDiagnosticEntities2.SaveChanges();
                        foreach (var record in records)
                        {
                            if (record.Parametr4 != "0" && record.Parametr5 != "0" && record.Parametr6 != "0")
                            {
                                bWDiagnosticEntities2.Configuration.AutoDetectChangesEnabled = false;
                                string date = record.Parametr4 + "-" + record.Parametr5 + "-" + record.Parametr6 + " " + record.Parametr3 + ":" + record.Parametr2 + ":" + record.Parametr1;
                                DateTime recordDate = DateTime.Parse(date, new CultureInfo("en-GB", true));
                                // DateTime recordDate = DateTime.ParseExact(date, "dd-MM-yyyy HH:mm:ss", new CultureInfo("en-GB"));
                                VOLTER_LOGS vOLTER_LOGS = new VOLTER_LOGS();
                                vOLTER_LOGS.Parametr1 = recordDate;
                                vOLTER_LOGS.Parametr2 = 0;
                                vOLTER_LOGS.Parametr3 = 0;
                                vOLTER_LOGS.Parametr4 = 0;
                                vOLTER_LOGS.Parametr5 = 0;
                                vOLTER_LOGS.Parametr6 = 0;
                                vOLTER_LOGS.Parametr7 = int.Parse(record.Parametr7);
                                // ... Parametr8 through Parametr199 assigned the same way ...
                                vOLTER_LOGS.Parametr200 = int.Parse(record.Parametr200);
                                LogsToAdd.Add(vOLTER_LOGS);
                            }
                        }
                    }
                }
            }
            bWDiagnosticEntities2.VOLTER_LOGS.AddRange(LogsToAdd);
            bWDiagnosticEntities2.SaveChanges();
        }
    }
    return View();
}
This implementation takes 5-8 minutes and uses about 45-55% of the VM's memory, but during the save process it is sometimes hard to get a response to other simple requests (get first and last date), which fail with:
The request limit has expired or the server is not responding.
The server has a lot of free memory, but the table already holds about 7 million records and keeps growing.
Files probably also won't be saved while other users' requests are running.
Now I'm wondering whether it is a good idea to use Entity Framework for this data volume at all.
The main goal of the application is to detect errors in the data over a period of time.
Does anyone have experience with processing this amount of data via Entity Framework?
Is there a way to split resources across tasks, so that every request gets a response?
EF is an ORM, not an import tool. An ORM's job is to load graphs of related entities and give the impression of working with objects instead of tables and SQL. There are no entities or graphs in import jobs, though; there are files, fields, tables and mappings.
The fastest way to import data into SQL Server tables from .NET is the SqlBulkCopy class. It uses the same mechanism as the bcp tool or the BULK INSERT command to import data as a stream, with minimal logging. The input can be a DataTable or an IDataReader instance.
Libraries like CsvHelper or ExcelDataReader provide IDataReader-derived classes out of the box. In other cases, FastMember's ObjectReader can be used to create an IDataReader wrapper over any IEnumerable.
Loading a CSV into a table with CsvHelper and SqlBulkCopy could be as simple as this, provided the file and table columns match:
public async Task SimpleImport(string path, string table, string connectionString)
{
    using var reader = new StreamReader(path);
    var csvConfig = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        Delimiter = ";"
    };
    using var csv = new CsvReader(reader, csvConfig);
    using var dr = new CsvDataReader(csv);
    using var bcp = new SqlBulkCopy(connectionString);
    bcp.DestinationTableName = table;
    await bcp.WriteToServerAsync(dr);
}
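A hypothetical call, with the path, table name and connection string as placeholders (from an async method):

await SimpleImport(@"C:\data\2021-06-01.csv", "dbo.VOLTER_LOGS", connectionString);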
If there are no headers, they'll have to be provided through CsvHelper's configuration, e.g. through GetDynamicPropertyName or a class map:
string[] fields = new[] { ..... };
csvConfig.GetDynamicPropertyName = args =>
{
    if (args.FieldIndex < 10)
    {
        return fields[args.FieldIndex];
    }
    else
    {
        return $"Value_{args.FieldIndex}";
    }
};
Using a ClassMap allows specifying type conversions, including complex ones:
public class Foo
{
    ...
    public int Year { get; set; }
    public int Month { get; set; }
    public int Day { get; set; }
    public DateTime Date { get; set; }
}

public class FooMap : ClassMap<Foo>
{
    public FooMap()
    {
        AutoMap(CultureInfo.InvariantCulture);
        Map(m => m.Date).Convert(row => DateFromParts(row.GetField(5), row.GetField(6), row.GetField(7)));
    }

    DateTime DateFromParts(string year, string month, string day)
    {
        return DateTime.Parse($"{year}-{month}-{day}");
    }
}
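To keep using SqlBulkCopy with these typed records, FastMember's ObjectReader (mentioned above) can wrap the enumeration. A sketch, under the assumption that the Foo/FooMap classes exist and the destination table's columns match the listed members:

using FastMember; // NuGet: FastMember

public async Task ImportWithClassMap(string path, string table, string connectionString)
{
    using var reader = new StreamReader(path);
    var csvConfig = new CsvConfiguration(CultureInfo.InvariantCulture) { Delimiter = ";" };
    using var csv = new CsvReader(reader, csvConfig);
    csv.Context.RegisterClassMap<FooMap>();

    // GetRecords is lazy, so the file is streamed instead of loaded whole
    var records = csv.GetRecords<Foo>();
    using var dr = ObjectReader.Create(records,
        nameof(Foo.Year), nameof(Foo.Month), nameof(Foo.Day), nameof(Foo.Date));
    using var bcp = new SqlBulkCopy(connectionString) { DestinationTableName = table };
    await bcp.WriteToServerAsync(dr);
}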
I'm struggling with a C# script component. What I am trying to do is take an ntext column in my input source, which is delimited with pipes, split it, and write the resulting array to a text file. When I run my component, my output looks like this:
DealerID,StockNumber,Option
161552,P1427,Microsoft.SqlServer.Dts.Pipeline.BlobColumn
I've been working with the GetBlobData method and I'm struggling with it. Any help would be greatly appreciated! Here is the full script:
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    string vehicleoptionsdelimited = Row.Options.ToString();
    //string OptionBlob = Row.Options.GetBlobData(int ;
    //string vehicleoptionsdelimited = System.Text.Encoding.GetEncoding(Row.Options.ColumnInfo.CodePage).GetChars(OptionBlob);
    string[] option = vehicleoptionsdelimited.Split('|');
    string path = @"C:\Users\User\Desktop\Local_DS_CSVs\";
    string[] headerline =
    {
        "DealerID" + "," + "StockNumber" + "," + "Option"
    };
    System.IO.File.WriteAllLines(path + "OptionInput.txt", headerline);
    using (System.IO.StreamWriter file = new System.IO.StreamWriter(path + "OptionInput.txt", true))
    {
        foreach (string s in option)
        {
            file.WriteLine(Row.DealerID.ToString() + "," + Row.StockNumber.ToString() + "," + s);
        }
    }
}
Try using
BlobToString(Row.Options)
using this function:
private string BlobToString(BlobColumn blob)
{
    string result = "";
    try
    {
        if (blob != null)
        {
            result = System.Text.Encoding.Unicode.GetString(blob.GetBlobData(0, Convert.ToInt32(blob.Length)));
        }
    }
    catch (Exception ex)
    {
        result = ex.Message;
    }
    return result;
}
Adapted from:
http://mscrmtech.com/201001257/converting-microsoftsqlserverdtspipelineblobcolumn-to-string-in-ssis-using-c
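With that helper in place, the first line of Input0_ProcessInputRow above becomes:

string vehicleoptionsdelimited = BlobToString(Row.Options);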
Another very easy solution to this problem, because it is a total PITA, is to route the error output to a Derived Column component and cast your blob data to a DT_STR or DT_WSTR as a new column.
Route the output of that to your script component, and the data will come in as an additional column on the pipeline, ready for you to parse.
This will probably only work if your data is less than 8000 characters long.
Basically I'm trying to save a new password and avatar for my Twitter-type website.
Any help would be appreciated.
My code is:
string newPasswordString = Server.MapPath("~") + "/App_Data/tuitterUsers.txt";
string[] newPasswordArray = File.ReadAllLines(newPasswordString);
string newString = Server.MapPath("~") + "/App_Data/tuitterUsers.txt";
newString = File.ReadAllText(newString);
string[] newArray = newString.Split(' ');
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; i++)
    {
        newArray[1] = newPasswordTextBox.Text;
        newArray[2] = avatarDropDownList.SelectedValue;
        newPasswordArray.Replace(" " + Session["Username"].ToString() + " " + Session["UserPassword"].ToString() + " " + Session["UserAvatarID"].ToString() + " ", " " + Session["Username"].ToString() + " " + newPasswordArray[1] + " " + newPasswordArray[2]);
    }
}
string newPasswordString = string.Join(Environment.NewLine, newPasswordArray);
File.WriteAllText(Server.MapPath("~") + "/App_Data/tuitterUsers.txt", newPasswordString);
If I understand your problem correctly, you need to move the
File.WriteAllText(Server.MapPath("~") + "/App_Data/tuitterUsers.txt", newPasswordArray);
outside the loop, otherwise you rewrite the file on each iteration. But this is not enough: you also need to rebuild the whole text file first.
string fileToWrite = string.Join(Environment.NewLine, newPasswordArray);
File.WriteAllText(Server.MapPath("~") + "/App_Data/tuitterUsers.txt", fileToWrite);
EDIT: after the code update and the comment below.
The looping is totally wrong, as is the rebuilding of the array:
string userDataFile = Server.MapPath("~") + "/App_Data/tuitterUsers.txt";
string[] userDataArray = File.ReadAllLines(userDataFile);
for (int x = 0; x < userDataArray.Length; x++)
{
    string[] info = userDataArray[x].Split(' ');
    if (Session["Username"].ToString() == info[0])
    {
        userDataArray[x] = string.Join(" ", Session["Username"].ToString(),
                                       newPasswordTextBox.Text,
                                       avatarDropDownList.SelectedValue.ToString());
        break;
    }
}
string fileToWrite = string.Join(Environment.NewLine, userDataArray);
File.WriteAllText(Server.MapPath("~") + "/App_Data/tuitterUsers.txt", fileToWrite);
Keep in mind that this works only for a limited number of users.
If you are lucky and your site becomes the new Twitter, you cannot use a solution that reads all your users' data into memory.
Firstly, what you're doing is A Bad Idea™. Given that a web server can have multiple threads in operation, you can't be certain that two threads aren't going to be writing different data at the same time. The more users you have, the larger your user file will be, which means it takes longer to read and write the data, which makes it more likely that two threads will come into conflict.
This is why we use databases for things like this. Instead of operating on the whole file every time you want to read or write, you operate on a single record. There are plenty of other reasons to use one, too.
That said, if you insist on using a text file...
If you treat each line in the file as a record - a single user's details in this case - then it makes sense to build a class to handle the content of those records, and make that class able to read and write the line format.
Something like this:
class UserRecord
{
    public string Name;
    public string Password;
    public string Avatar;

    public UserRecord(string name, string password, string avatar)
    {
        Name = name;
        Password = password;
        Avatar = avatar;
    }

    // static factory method
    public static UserRecord Parse(string source)
    {
        if (string.IsNullOrEmpty(source))
            return null;
        string[] parts = source.Split(',');
        if (parts.Length < 3)
            return null;
        return new UserRecord(parts[0], parts[1], parts[2]);
    }

    // convert to string
    public override string ToString()
    {
        return string.Join(",", new string[] { Name, Password, Avatar });
    }
}
Adjust the Parse method to handle whatever format you're using for the data in the line, and change the ToString method to produce that format.
Once you have that working, use it to parse the contents of your file like this:
// Somewhere to put the data - a Dictionary is my first choice here
Dictionary<string, UserRecord> users = new Dictionary<string, UserRecord>();

// Don't forget to use 'using' where appropriate
using (TextReader userfile = File.OpenText(userDataFile))
{
    string srcline;
    while ((srcline = userfile.ReadLine()) != null)
    {
        UserRecord user = UserRecord.Parse(srcline);
        if (user != null)
            users[user.Name] = user;
    }
}
Then you can access the user's data by username, manipulate it as required, and save it back out whenever you like.
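For example, updating the current user's record might look like this (session keys and control names taken from your code):

UserRecord user;
if (users.TryGetValue(Session["Username"].ToString(), out user))
{
    user.Password = newPasswordTextBox.Text;
    user.Avatar = avatarDropDownList.SelectedValue.ToString();
}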
Writing the data back out from a Dictionary of users is as simple as:
StringBuilder sb = new StringBuilder();
foreach (UserRecord user in users.Values)
{
    sb.AppendFormat("{0}\n", user);
}
File.WriteAllText(userDataFile, sb.ToString());
Meanwhile, you have a users collection that you can save for future checks and manipulations.
I still think you should use a database though. They're not hard to learn and they are far better for this sort of thing.
I have a Windows service that takes files with metadata (FIDEF) and a corresponding video file, and translates the XML (FIDEF) using XSLT.
I get the file directory listing for FIDEFs, and if a video file of the same name exists, it translates it. That works OK, but it runs on a timer that searches every minute. I am trying to handle situations where a file name enters the input directory that already exists in the output directory. I just have it changing the output name to (Copy), so if another duplicate enters I should get (Copy)(Copy).mov. But the service won't start with duplicate file names already in the output directory: it works once and then does not seem to pick up any new files.
Any help would be great, as I have tried a few things with no good results. I believe it's the renaming methods, but I've put up most of the code in case it's a cleanup issue or something else.
(Forgive some of the names; I was just trying different things.)
private void getFileList()
{
    //Get FILE LIST FROM Directory
    try
    {
        // Process Each String/File In Directory
        string result;
        //string filename;
        filepaths = null;
        filepaths = Directory.GetFiles(path, Filetype);
        foreach (string s in filepaths)
        {
            for (int i = 0; i < filepaths.Length; i++)
            {
                //Result Returns Video Name
                result = Path.GetFileNameWithoutExtension(filepaths[i]);
                FileInfo f = new FileInfo(filepaths[i]);
                PreformTranslation(f, outputPath + result, result);
            }
        }
    }
    catch (Exception e)
    {
        EventLog.WriteEntry("Error " + e);
    }
}

private void MoveVideoFiles(String Input, String Output)
{
    File.Move(Input, Output);
}

private string GetUniqueName(string name)
{
    //Original Filename
    String ValidName = name;
    //remove FIDEF from filename
    String Justname1 = Path.GetFileNameWithoutExtension(name);
    //get .mov extension
    String Extension2 = Path.GetExtension(Justname1);
    //get filename with NO extensions
    String Justname = Path.GetFileNameWithoutExtension(Justname1);
    //get .FIDEF
    String Extension = Path.GetExtension(name);
    int cnt = 0;
    //string[] FileName = Justname.Split('(');
    //string Name = FileName[0];
    while (File.Exists(ValidName) == true)
    {
        ValidName = outputPath + Justname + "(Copy)" + Extension2 + Extension;
        cnt++;
    }
    return ValidName;
}

private string getMovFile(string name)
{
    String ValidName = name;
    String Ext = Path.GetExtension(name);
    String JustName = Path.GetFileNameWithoutExtension(name);
    while (File.Exists(ValidName))
    {
        ValidName = outputPath + JustName + "(Copy)" + Ext;
    }
    return ValidName;
}

//Preforms the translation; requires XSL & FIDEF name.
private void PreformTranslation(FileInfo FileName, String OutputFileName, String result)
{
    string FidefName = OutputFileName + ".FIDEF";
    String CopyName;
    String copyVidName = outputPath + result;
    XslCompiledTransform myXslTransform;
    myXslTransform = new XslCompiledTransform();
    try
    {
        myXslTransform.Load(XSLname);
    }
    catch
    {
        EventLog.WriteEntry("Error in loading XSL");
    }
    try
    {   //only process FIDEFs with a corresponding video file
        if (AllFidef == "no")
        {
            //Check if video exists; if yes,
            if (File.Exists(path + result))
            {
                //Check for FIDEF file already existing in the output directory.
                if (File.Exists(FidefName))
                {
                    //Get unique name
                    CopyName = GetUniqueName(FidefName);
                    copyVidName = getMovFile(copyVidName);
                    //Translate and create new FIDEF.
                    //double checking the file is here
                    if (File.Exists(outputPath + result))
                    {
                        myXslTransform.Transform(FileName.ToString(), CopyName);
                        File.Delete(FileName.ToString());
                        MoveVideoFiles(path + result, copyVidName);
                    }
                    ////Move video file with corresponding name.
                }
                else
                {   //If no duplicate file exists in the directory, just move.
                    myXslTransform.Transform(FileName.ToString(), OutputFileName + ".FIDEF");
                    MoveVideoFiles(path + result, outputPath + result);
                }
            }
        }
        else
        {
            //Must have FIDEF extension
            //Processes all FIDEFs and moves any video files if found.
            myXslTransform.Transform(FileName.ToString(), OutputFileName + ".FIDEF");
            if (File.Exists(path + result))
            {
                MoveVideoFiles(path + result, outputPath + result);
            }
        }
    }
    catch (Exception e)
    {
        EventLog.WriteEntry("Error Transforming " + "FILENAME = " + FileName.ToString()
            + " OUTPUT_FILENAME = " + OutputFileName + "\r\n" + "\r\n" + e);
    }
}
There is a lot wrong with your code. getFileList has an unneeded inner for loop, for starters; get rid of it. Your foreach loop already has s, which can replace filepaths[i] from the for loop. Also, don't build file paths with outputPath + result. Use Path.Combine(outputPath, result) instead, since Path.Combine handles directory separator characters for you. Also, you need to come up with a better name for getFileList, since that is not what the method does at all. Do not make your method names liars.
I would simply get rid of MoveVideoFiles. The compiler just might, too.
GetUniqueName only works if your file name is of the form name.mov.fidef, which I'm assuming it is. You really need better variable names though, otherwise it will be a maintenance nightmare later on. I would get rid of the == true in the while loop condition, but that is optional. The assignment inside the while is why your files get overwritten: you always generate the same name (something(Copy).mov.fidef), so as far as I can see, if the file exists, the loop never terminates. You need to fix that loop to generate a new name (and don't forget Path.Combine). Maybe something like this (note this is untested):
int copyCount = 0;
while (File.Exists(ValidName))
{
    const string CopyName = "(Copy)";
    string copyString = copyCount == 0 ? CopyName : (CopyName + "(" + (copyCount + 1) + ")");
    string tempName = Justname + copyString + Extension2 + Extension;
    ValidName = Path.Combine(outputPath, tempName);
    copyCount++;
}
This generates something(Copy).mov.fidef for the first copy, something(Copy)(2).mov.fidef for the second, and so on. Maybe not what you want, but you can make adjustments.
At this point you have a lot to do. getMovFile looks as though it could use work in the same manner as GetUniqueName. You'll figure it out. Good luck.
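For completeness, a sketch of getMovFile reworked along the same lines; untested, and it assumes the same outputPath field as your code:

private string GetUniqueMovName(string name)
{
    string validName = name;
    string extension = Path.GetExtension(name);               // e.g. ".mov"
    string justName = Path.GetFileNameWithoutExtension(name);
    int copyCount = 0;
    while (File.Exists(validName))
    {
        const string CopyName = "(Copy)";
        string copyString = copyCount == 0 ? CopyName : (CopyName + "(" + (copyCount + 1) + ")");
        validName = Path.Combine(outputPath, justName + copyString + extension);
        copyCount++;
    }
    return validName;
}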