I currently need to scan a folder within a C# application for files that match a pattern such as RecordId*.*
The folder currently has several hundred thousand files in it and we cannot change that, at least at the moment.
Doing this:
var folderPath = @"\\server\folder\";
var options = new EnumerationOptions()
{
BufferSize = 64000,
RecurseSubdirectories = false,
MatchType = MatchType.Simple,
MatchCasing = MatchCasing.CaseInsensitive
};
var searchPattern = $"{request.RecordId}*.*";
var isMediaFileInFolder = _fileSystem.Directory
.EnumerateFiles(folderPath, searchPattern, options);
Would take about 10 minutes to return the list of found files.
However, we can run this within PowerShell ISE and it runs in less than a second.
$Directory = "\\server\folder\"
$Filter = "REC1234*.*"
Get-ChildItem -Path $Directory -Filter $Filter
I tried running the PowerShell within C# through something like this:
public static List<string> PSFiles(string path, string pattern)
{
try
{
ICollection<PSObject> collection = null;
List<string> returnStrings = new List<string>();
script = "Get-ChildItem -Path " + path + " -Filter" + pattern;
using (PowerShell powerShell = PowerShell.Create())
{
powerShell.AddScript(script);
collection = powerShell.Invoke();
}
foreach (PSObject obj in collection)
{
if (!String.IsNullOrEmpty(obj.ToString()))
{
foreach(var i in Regex.Split(obj.ImmediateBaseObject.ToString().Trim(), "\r\n"))
{
returnStrings.Add(i.Trim());
}
}
}
return returnStrings;
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
And it took longer to run than the EnumerateFiles attempt, with nearly all of the time spent on the powerShell.Invoke() step.
After testing in PowerShell Core, that also takes over 10 minutes. However, the same Directory.EnumerateFiles() call on .NET Framework took less than a second.
So the question is: is there a way to mirror that .NET Framework enumeration performance when running on .NET Core?
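For what it's worth, one workaround sometimes tried for this (purely a hedged sketch, Windows-only, and not something the question above confirms) is to P/Invoke FindFirstFile/FindNextFile so the wildcard is handed to the OS, and potentially to the SMB server, instead of every directory entry being pulled back and filtered in managed code:
// Minimal sketch (assumption: Windows client, UNC share). Passes the pattern straight to the
// Win32 FindFirstFile API so matching can happen below the managed enumeration layer.
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;

static class Win32Find
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct WIN32_FIND_DATA
    {
        public FileAttributes dwFileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime;
        public uint nFileSizeHigh;
        public uint nFileSizeLow;
        public uint dwReserved0;
        public uint dwReserved1;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
        public string cFileName;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)]
        public string cAlternateFileName;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern IntPtr FindFirstFile(string lpFileName, out WIN32_FIND_DATA lpFindFileData);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool FindNextFile(IntPtr hFindFile, out WIN32_FIND_DATA lpFindFileData);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool FindClose(IntPtr hFindFile);

    private static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    public static IEnumerable<string> Find(string folder, string pattern)
    {
        IntPtr handle = FindFirstFile(Path.Combine(folder, pattern), out WIN32_FIND_DATA data);
        if (handle == INVALID_HANDLE_VALUE)
            yield break;                               // nothing matched, or the folder is unreachable
        try
        {
            do
            {
                if ((data.dwFileAttributes & FileAttributes.Directory) == 0)
                    yield return Path.Combine(folder, data.cFileName);
            }
            while (FindNextFile(handle, out data));
        }
        finally
        {
            FindClose(handle);
        }
    }
}
Usage would be something like Win32Find.Find(folderPath, $"{request.RecordId}*").Any() (hypothetical, and it bypasses the _fileSystem abstraction), so treat it as something to benchmark against your share rather than a drop-in replacement.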
I want to use the Invoke-RDUserLogoff cmdlet to log off users on a remote computer. This cmdlet works well when it is submitted through PowerShell. However, every time I try to execute it using the code below I get an "Invoke-RDUserLogoff is not recognized as the name of a cmdlet..." error. Any clue or help will be appreciated. Thanks!
System.Management.Automation.PowerShell psinstance = System.Management.Automation.PowerShell.Create();
string errorMesg = string.Empty;
//string result = string.Empty;
string command = "";
command = @"Invoke-RDUserLogoff -HostServer " + computerNameTB.Text + " -UnifiedSessionID " + sessionIDTB.Text + " -Force";
psinstance.AddCommand(command);
//Make sure return values are outputted to the stream captured by C#
psinstance.AddCommand("out-string");
PSDataCollection<PSObject> outputCollection = new PSDataCollection<PSObject>();
psinstance.Streams.Error.DataAdded += (object sender1, DataAddedEventArgs e1) =>
{
errorMesg = ((PSDataCollection<ErrorRecord>)sender1)[e1.Index].ToString();
};
//IAsyncResult result = psinstance.BeginInvoke<PSObject, PSObject>(null, outputCollection);
IAsyncResult result = psinstance.BeginInvoke();
//wait for powershell command/script to finish executing
psinstance.EndInvoke(result);
StringBuilder sb = new StringBuilder();
foreach (var outputItem in outputCollection)
{
sb.AppendLine(outputItem.BaseObject.ToString());
}
if (!string.IsNullOrEmpty(errorMesg))
{
MessageBox.Show(errorMesg, "Logoff Users");
}
else
{
MessageBox.Show(sb.ToString(), "LogoffUsers app");
}
A hint I think is worth trying: check the project settings. I found a few claims that changing the target platform CPU from x86 to x64 might fix locating the proper modules; please see: Import-Module does not work in c# powershell command
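If the module simply is not loaded into the hosted runspace, a minimal sketch of importing it explicitly before invoking the cmdlet might look like this (assuming the RemoteDesktop module is installed on the machine running the code; requires using System.Management.Automation and System.Management.Automation.Runspaces):
var initial = InitialSessionState.CreateDefault();
initial.ImportPSModule(new[] { "RemoteDesktop" });   // module assumed to provide Invoke-RDUserLogoff

using (PowerShell ps = PowerShell.Create(initial))
{
    // AddCommand takes only the cmdlet name; parameters are added separately
    ps.AddCommand("Invoke-RDUserLogoff")
      .AddParameter("HostServer", computerNameTB.Text)
      .AddParameter("UnifiedSessionID", sessionIDTB.Text)
      .AddParameter("Force");

    ps.Invoke();

    foreach (var err in ps.Streams.Error)
    {
        MessageBox.Show(err.ToString(), "Logoff Users");
    }
}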
Good luck!🤞
I am currently working on a utility that is responsible for pulling audio and video files from the cloud and merging them together via FFMPEG. As I am new to FFMPEG, I am going to split the question into an FFMPEG part and a C# part, just so people can answer one part or the other (or both!).
FFMPEG Part
Currently, I have a working FFMPEG arg if there is only 1 video file present and it needs to be merged with multiple files.
ffmpeg -i input1.mkv -i input1.mka -i input2.mka -i input3.mka -i input4.mka -filter_complex "[1:a]adelay=0s:all=1[a1pad];[2:a]adelay=20s:all=1[a2pad];[3:a]adelay=30s:all=1[a3pad];[4:a]adelay=40s:all=1[a4pad];[a1pad][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]" -map [aout] -map 0:0 output4.mkv
The delays you see in there are determined by subtracting the start time of each file from the start time of the earliest created audio or video file. I know that if I wanted to create a horizontal stack of multiple videos, I could just do
ffmpeg -i input1.mkv -i input1.mka -i input2.mkv -i input2.mka -i input3.mka -i input4.mka
-filter_complex
"[2:v]tpad=start_duration=120:color=black[vpad];
[3:a]adelay=120000:all=1[a2pad];
[4:a]adelay=180000:all=1[a3pad];
[5:a]adelay=200000:all=1[a4pad];
[0:v][vpad]hstack=inputs=2[vout];
[1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]"
-map [vout] -map [aout]
output.mkv
but what I want to do is both keep those delays for the audio and video files AND concatenate (not stack) those videos. How would I go about doing that?
C# Part
You see that giant arg up there? The utility is supposed to generate that based on a List of recordings. Here is the model.
List<FileModel> _records;
public class FileModel {
public string Id { get; set; }
public string FileType { get; set; }
public string StartTime { get; set; }
}
The utility has to then go through that list and create the arg (as seen in the FFMPEG part) to be executed by the Xabe.FFMPEG package. The way I was thinking of approaching this is to create two string builders: one responsible for the -i inputs, the other for the -filter_complex section. Here is what I have so far:
private async Task CombineAsync()
{
var minTime = _records.Min(y => Convert.ToDateTime(y.StartTime));
var frontBuilder = new StringBuilder("-y ");
var middleBuilder = new StringBuilder("-filter_complex \"");
var endString = $" -map [vout] -map [aout] {_folderPath}\\CombinedOutput.mkv";
for (var i = 0; i < _records.Count; i++)
{
var type = _records[i].FileType.ToLower();
var delay = (Convert.ToDateTime(_records[i].StartTime).Subtract(minTime)).TotalSeconds;
frontBuilder.Append($"-i {_folderPath + "\\" + _records[i].Id} ");
var addColon = i != _records.Count - 1 ? ";" : "";
middleBuilder.Append(type.Equals("video") ? $"[{i}:v]tpad=start_duration={delay}:color=black[v{i}pad]{addColon} " : $"[{i}:a]adelay={delay}s:all=1[a{i}pad]{addColon} ");
}
middleBuilder.Append("\"");
Console.WriteLine(frontBuilder.ToString() + middleBuilder.ToString() + endString);
// var args = frontBuilder + middleBuilder + endString;
// try
// {
// var conversionResult = await FFmpeg.Conversions.New().Start(args);
// Console.WriteLine(JsonConvert.SerializeObject(conversionResult));
// }
// catch (Exception e)
// {
// Console.WriteLine(e);
// }
}
Is this the correct way to go about building the argument out?
How in god's name do I get something like this in there, since it relies on naming and total count for the piping and inputs=
[0:v][vpad]hstack=inputs=2[vout]; // This part will change for video concatenation depending on what gets answered above
[1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]
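For illustration, one rough way to do that bookkeeping (building on the CombineAsync loop above, with the same usings, fields, and minTime assumed; videoLabels/audioLabels are made-up names) is to collect the pad labels as they are generated and join them at the end:
// Sketch only: assumes at least two video inputs for hstack and reuses _records/minTime from above.
var videoLabels = new List<string>();
var audioLabels = new List<string>();
var middleBuilder = new StringBuilder("-filter_complex \"");

for (var i = 0; i < _records.Count; i++)
{
    var delay = Convert.ToDateTime(_records[i].StartTime).Subtract(minTime).TotalSeconds;
    if (_records[i].FileType.ToLower() == "video")
    {
        middleBuilder.Append($"[{i}:v]tpad=start_duration={delay}:color=black[v{i}pad];");
        videoLabels.Add($"[v{i}pad]");
    }
    else
    {
        middleBuilder.Append($"[{i}:a]adelay={delay}s:all=1[a{i}pad];");
        audioLabels.Add($"[a{i}pad]");
    }
}

// e.g. "[v0pad][v2pad]hstack=inputs=2[vout];" (this line is what would change for concatenation)
middleBuilder.Append($"{string.Join("", videoLabels)}hstack=inputs={videoLabels.Count}[vout];");

// e.g. "[a1pad][a3pad]amix=inputs=2:weights=1|1[aout]"
var weights = string.Join("|", Enumerable.Repeat("1", audioLabels.Count));   // needs System.Linq
middleBuilder.Append($"{string.Join("", audioLabels)}amix=inputs={audioLabels.Count}:weights={weights}[aout]\"");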
The amix documentation notes:
Note that this filter only supports float samples (the amerge and pan audio filters support many formats).
If your files are in several different formats, try amerge.
To generate arguments more easily with this many filters, try FFmpegArgs:
FFmpegArg ffmpegArg = new FFmpegArg().OverWriteOutput();
List<ImageMap> imageMaps = new List<ImageMap>();
List<AudioMap> audioMaps = new List<AudioMap>();
foreach (var item in _records)
{
if (item.IsVideo)
{
imageMaps.Add(ffmpegArg.AddImageInput(new ImageFileInput(item.FilePath))
.TpadFilter().StartDuration(item.Delay).MapOut);
}
else
{
audioMaps.Add(ffmpegArg.AddAudioInput(new AudioFileInput(item.FilePath))
.AdelayFilter().Delays(item.Delay).All(true).MapOut);
}
}
var imageMap = imageMaps.HstackFilter().MapOut;
var audioMap = audioMaps.AmergeFilter().MapOut;
ffmpegArg.AddOutput(new VideoFileOutput("out.mp4", imageMap, audioMap));
var result = ffmpegArg
.Render(c => c
.WithFFmpegBinaryPath("path to ffmpeg.exe")
.WithWorkingDirectory("working dir"))
.Execute();
result.EnsureSuccess();
Or, following kesh's comment:
FFmpegArg ffmpegArg = new FFmpegArg().OverWriteOutput();
List<ImageMap> imageMaps = new List<ImageMap>();
List<AudioMap> audioMaps = new List<AudioMap>();
foreach (var item in _records)
{
if (item.IsVideo)
{
imageMaps.Add(ffmpegArg.AddImageInput(new ImageFileInput(item.FilePath))
.TpadFilter().StartDuration(item.Delay).MapOut);
}
else
{
audioMaps.Add(ffmpegArg.AddAudioInput(new AudioFileInput(item.FilePath)));
//audioMaps.Add(ffmpegArg.AddAudioInput(new AudioFileInput(item.FilePath).SsPosition(item.Skip)));
}
}
var imageMap = imageMaps.HstackFilter().MapOut;
var concatFilter = audioMaps.Select(x => new ConcatGroup(x)).ConcatFilter();
ffmpegArg.AddOutput(new VideoFileOutput("out.mp4", imageMap, concatFilter.AudioMapsOut.First()));
var result = ffmpegArg
.Render(c => c
.WithFFmpegBinaryPath("path to ffmpeg.exe")
.WithWorkingDirectory("working dir"))
.Execute();
result.EnsureSuccess();
My goal is to come up with a PowerShell script to clean the working directory of XAML build agents.
To get the working directories of the build agents I could use the C# code below, which works fine.
I would like to implement the same in PowerShell.
static void Main(string[] args)
{
var TPC = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("CollectionURI"));
IBuildServer buildServer = TPC.GetService<IBuildServer>();
// {Microsoft.TeamFoundation.Build.Client.BuildServer}
var buildController = buildServer.GetBuildController("ControllerName");
var buildAgent = buildController.Agents;
var workingFolder = string.Empty;
List<string> list = new List<string>();
foreach (IBuildAgent agent in buildAgent)
{
list.Add(agent.BuildDirectory);
}
}
If it is not possible to find the PowerShell equivalent, I will have to consume the C# in PowerShell through an exe or dll.
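(For reference, a rough sketch of that fallback, with hypothetical names: a small class library that PowerShell loads with Add-Type -Path and then calls as a static method. It reuses the same TFS client APIs as the code above and needs the same Microsoft.TeamFoundation.* references.)
// Hypothetical class library ("BuildAgentTools.dll") that PowerShell could load with:
//   Add-Type -Path "C:\tools\BuildAgentTools.dll"
//   [BuildAgentTools.AgentInfo]::GetBuildDirectories("CollectionURI", "ControllerName")
using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

namespace BuildAgentTools
{
    public static class AgentInfo
    {
        public static List<string> GetBuildDirectories(string collectionUri, string controllerName)
        {
            var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(collectionUri));
            var buildServer = tpc.GetService<IBuildServer>();
            var controller = buildServer.GetBuildController(controllerName);

            var directories = new List<string>();
            foreach (IBuildAgent agent in controller.Agents)
            {
                directories.Add(agent.BuildDirectory);
            }
            return directories;
        }
    }
}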
It is similar to your C# code:
$url = "http://xxxx:8080/tfs/CollectionName/";
$tfs = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($url);
$buildservice = $tfs.GetService("Microsoft.TeamFoundation.Build.Client.IBuildServer");
$buildcontroller = $buildservice.GetBuildController("ControllerName");
$buildagents = $buildcontroller.Agents;
foreach($buildagent in $buildagents)
{
Write-Host $buildagent.BuildDirectory;
}
I've been attempting to create a site collection from a custom web service I have built in C#. I already had some methods in this service that ran PowerShell commands, so I figured I would just create the site using PowerShell commands like the code below. This code runs fine without error but does not create a site collection; it seems as if it is doing nothing. Is there a better way, or can someone see what may be wrong below?
public string CreateSiteCollection(string urlroot, string urlname, string database, string primaryadmin, string secondadmin, string language, string description, string title, string template)
{
//Find language and template code
string lang_code = get_lang_code(language);
string temp_code = get_temp_code(template);
// Call the PowerShell.Create() method to create an empty pipeline
PowerShell ps = PowerShell.Create();
// Place our script in the string myscript
string myscript = string.Format(@"
Add-PSSnapin Microsoft.SharePoint.Powershell -erroraction 'silentlycontinue'
$template = Get-SPWebTemplate ""{0}""
New-SPSite {1}{2} -OwnerAlias '{3}' -SecondaryOwnerAlias '{4}' -Language {5} -Description '{6}' -ContentDatabase {7} -Template $template -Name '{8}'
", temp_code, urlroot, urlname, primaryadmin, secondadmin, lang_code, description, database, title);
// Create PowerShell runspace
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();
Pipeline pipeline = runspace.CreatePipeline(); // create pipepline then feed it myscript
pipeline.Commands.AddScript(myscript);
pipeline.Commands.Add("Out-String");
Collection<PSObject> results;
try
{
// Executing the script
results = pipeline.Invoke();
}
catch (Exception e)
{
// An error occurred, return the exception
return string.Format("Exception caught: {0}", e);
}
runspace.Close();
StringBuilder stringBuilder = new StringBuilder();
foreach (PSObject obj in results)
{
stringBuilder.AppendLine(obj.ToString());
}
string output = stringBuilder.ToString().Trim();
if (output == "")
{
return "PowerShell Commands ran sucessfully with no output";
}
else
{
return String.Format("PowerShell Commands ran successfully and returned the following: {0}", output);
}
} // End of CreateSiteCollection
I may use the Admin.asmx web service instead, but it would be easier if I could get this working because it allows me more customization.
Is there a reason why you can't call the SharePoint server side object model directly from your web service like this, http://msdn.microsoft.com/en-us/library/office/ms411953(v=office.14).aspx.
Or is there a requirement to go through a set of CmdLets?
I decided to go with the object model just because it makes more sense. Below is the code for creating a new site collection:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
using (SPSite site = new SPSite("http://servername:port/"))
{
using (SPWeb web = site.OpenWeb())
{
HttpContext.Current = null;
site.AllowUnsafeUpdates = true;
web.AllowUnsafeUpdates = true;
var newSite = site.WebApplication.Sites.Add(....);
}
}
});
I have the following sample PowerShell script that is embedded in my C# application.
PowerShell Code
$MeasureProps = "AssociatedItemCount", "ItemCount", "TotalItemSize"
$Databases = Get-MailboxDatabase -Status
foreach($Database in $Databases) {
$AllMBStats = Get-MailboxStatistics -Database $Database.Name
$MBItemAssocCount = $AllMBStats | %{$_.AssociatedItemCount.value} | Measure-Object -Average -Sum
$MBItemCount = $AllMBStats | %{$_.ItemCount.value} | Measure-Object -Average -Sum
New-Object PSObject -Property @{
Server = $Database.Server.Name
DatabaseName = $Database.Name
ItemCount = $MBItemCount.Sum
}
}
Visual Studio offers me the following embedding options:
Every PowerShell sample I've seen (MSDN on Exchange, and the MSFT Dev Center) required me to chop up the PowerShell command into "bits" and send it through a parser.
I don't want to leave lots of PS1 files alongside my application; I need a single binary with no other "supporting" PS1 files.
How can I make it so myapp.exe is the only thing that my customer sees?
Many customers are averse to moving away from a restricted execution policy because they don't really understand it. It's not a security boundary - it's just an extra hoop to jump through so you don't shoot yourself in the foot. If you want to run ps1 scripts in your own application, simply use your own runspace and use the base authorization manager which pays no heed to system execution policy:
InitialSessionState initial = InitialSessionState.CreateDefault();
// Replace PSAuthorizationManager with a null manager which ignores execution policy
initial.AuthorizationManager = new
System.Management.Automation.AuthorizationManager("MyShellId");
// Extract psm1 from resource, save locally
// ...
// load my extracted module with my commands
initial.ImportPSModule(new[] { <path_to_psm1> });
// open runspace
Runspace runspace = RunspaceFactory.CreateRunspace(initial);
runspace.Open();
RunspaceInvoke invoker = new RunspaceInvoke(runspace);
// execute a command from my module
Collection<PSObject> results = invoker.Invoke("my-command");
// or run a ps1 script
Collection<PSObject> results = invoker.Invoke("c:\temp\extracted\my.ps1");
By using a null authorization manager, execution policy is completely ignored. Remember - this is not some "hack", because execution policy is something for protecting users against themselves. It's not for protecting against malicious third parties.
http://www.nivot.org/nivot2/post/2012/02/10/Bypassing-Restricted-Execution-Policy-in-Code-or-in-Script.aspx
First of all, you should try removing your customer's aversion to scripts. Read up about script signing, execution policy, etc.
Having said that, you can have the script as a multiline string in the C# code itself and execute it. Since you have only one simple script, this is the easiest approach.
You can use the AddScript method, which takes the script as a string (not a script path):
http://msdn.microsoft.com/en-us/library/dd182436(v=vs.85).aspx
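A minimal sketch of that approach (the script body is abbreviated here, and it requires using System.Management.Automation):
// Keep the script as a verbatim string constant and run it with AddScript.
const string Script = @"
$Databases = Get-MailboxDatabase -Status
foreach ($Database in $Databases) {
    # ... rest of the script shown above ...
}
";

using (PowerShell ps = PowerShell.Create())
{
    ps.AddScript(Script);
    foreach (PSObject result in ps.Invoke())
    {
        Console.WriteLine(result);
    }
}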
You can embed it as a resource and retrieve it via reflection at runtime. Here's a link from MSDN. The article is retrieving embedded images, but the principle is the same.
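For example, a rough sketch of the embedded-resource route (the resource name "MyApp.Scripts.MailboxStats.ps1" is hypothetical; the real name depends on your default namespace and folder, with the file's Build Action set to Embedded Resource):
// Rough sketch: read an embedded .ps1 from the assembly and run it with AddScript.
// Usage might look like: EmbeddedScriptRunner.RunEmbeddedScript("MyApp.Scripts.MailboxStats.ps1")
using System.Collections.ObjectModel;
using System.IO;
using System.Management.Automation;
using System.Reflection;

public static class EmbeddedScriptRunner
{
    public static Collection<PSObject> RunEmbeddedScript(string resourceName)
    {
        string script;
        using (Stream stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            script = reader.ReadToEnd();
        }

        using (PowerShell ps = PowerShell.Create())
        {
            ps.AddScript(script);
            return ps.Invoke();
        }
    }
}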
You sort of worked the answer out yourself. By adding it as content, you can get access to it at runtime (see Application.GetResourceStream). Then you can either store that as a temp file and execute it, or figure out a way to invoke PowerShell without the use of files.
Store your POSH scripts as embedded resources then run them as needed using something like the code from this MSDN thread:
public static Collection<PSObject> RunScript(string strScript)
{
HttpContext.Current.Session["ScriptError"] = "";
System.Uri serverUri = new Uri(String.Format("http://exchangsserver.contoso.com/powershell?serializationLevel=Full"));
RunspaceConfiguration rc = RunspaceConfiguration.Create();
WSManConnectionInfo wsManInfo = new WSManConnectionInfo(serverUri, SHELL_URI, (PSCredential)null);
using (Runspace runSpace = RunspaceFactory.CreateRunspace(wsManInfo))
{
runSpace.Open();
RunspaceInvoke scriptInvoker = new RunspaceInvoke(runSpace);
scriptInvoker.Invoke("Set-ExecutionPolicy Unrestricted");
PowerShell posh = PowerShell.Create();
posh.Runspace = runSpace;
posh.AddScript(strScript);
Collection<PSObject> results = posh.Invoke();
if (posh.Streams.Error.Count > 0)
{
bool blTesting = false;
string strType = HttpContext.Current.Session["Type"].ToString();
ErrorRecord err = posh.Streams.Error[0];
if (err.CategoryInfo.Reason == "ManagementObjectNotFoundException")
{
HttpContext.Current.Session["ScriptError"] = "Management Object Not Found Exception Error " + err + " running command " + strScript;
runSpace.Close();
return null;
}
else if (err.Exception.Message.ToString().ToLower().Contains("is of type usermailbox.") && (strType.ToLower() == "mailbox"))
{
HttpContext.Current.Session["ScriptError"] = "Mailbox already exists.";
runSpace.Close();
return null;
}
else
{
HttpContext.Current.Session["ScriptError"] = "Error " + err + "<br />Running command " + strScript;
fnWriteLog(HttpContext.Current.Session["ScriptError"].ToString(), "error", strType, blTesting);
runSpace.Close();
return null;
}
}
runSpace.Close();
runSpace.Dispose();
posh.Dispose();
posh = null;
rc = null;
if (results.Count != 0)
{
return results;
}
else
{
return null;
}
}
}
The customer just can't see the PowerShell script in what you deploy, right? You can do whatever you want at runtime. So write it to a temporary directory--even try a named pipe, if you want to get fancy and avoid files--and simply start the PowerShell process on that.
You could even try piping it directly to stdin. That's probably what I'd try first, actually. Then you don't have any record of it being anywhere on the computer. The Process class is versatile enough to do stuff like that without touching the Windows API directly.
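A hedged sketch of the stdin idea (assuming powershell.exe is on the PATH, embeddedScript is whatever script text you retrieved from your resources, and using System.Diagnostics is in scope):
// Pipe the script text to powershell.exe via standard input ("-Command -" reads the command
// text from stdin), so the script never has to exist on disk.
var psi = new ProcessStartInfo
{
    FileName = "powershell.exe",
    Arguments = "-NoProfile -NonInteractive -Command -",
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
    UseShellExecute = false,
    CreateNoWindow = true
};

using (Process proc = Process.Start(psi))
{
    proc.StandardInput.Write(embeddedScript);   // embeddedScript: your script text (hypothetical variable)
    proc.StandardInput.Close();                 // close stdin so PowerShell knows the input is complete

    string output = proc.StandardOutput.ReadToEnd();
    proc.WaitForExit();
    Console.WriteLine(output);
}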