Password to NTLM from dump file - c#

I work in security and want to consume a massive password dump file (3GB) as part of my usual password audits.
The file is delimited into two columns: the SHA1 hash and the actual password.
For my purposes I need the passwords as NTLM hashes, not SHA1, because Windows stores passwords at rest as NTLM hashes (Kerberos is only used in transport). (You can easily prove this to yourself by doing a password dump; I use DSInternals.)
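For reference, such a dump can be produced offline with DSInternals along these lines (a sketch from memory; the hive/ntds.dit paths are placeholders and the parameter/view names should be checked against the DSInternals docs):

# Sketch: dump NT hashes from an offline copy of AD with DSInternals
$key = Get-BootKey -SystemHivePath 'C:\dump\registry\SYSTEM'
Get-ADDBAccount -All -DBPath 'C:\dump\ntds.dit' -BootKey $key |
    Format-Custom -View HashcatNT | Out-File 'C:\dump\nt_hashes.txt'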
I am currently converting the clear-text passwords to NTLM with this script:
#Install-Module DSInternals
Import-Module DSInternals

$reader = [System.IO.File]::OpenText("C:\...\68_linkedin_found_hash_plain.txt")
try {
    while ($null -ne ($line = $reader.ReadLine())) {
        $pwd  = ConvertTo-SecureString $line.Split(':')[1] -AsPlainText -Force
        $hash = ConvertTo-NTHash $pwd
        Add-Content C:\...\68_linkedin_ntlm.txt $hash
    }
}
finally {
    $reader.Close()
}
Is there any obvious way of processing this faster? I suppose I could ingest it into a DB and process it threaded via a little C# app, but maybe there's something quick and dirty?
The file format is as follows (no, these are not my passwords; they are from a publicly available password dump):
8c9fcfbf9ead0d63d04b5d3120c42cb885af899e:16piret
8c9fd045ee531744a4fdc1f52e59c3878e742ee0:louie310
8c9fd070274a0eebecf58f8f50e283bf53cec215:kery62
8c9fd08d1c17266f7c1e42a3f16a1161613c7572:sa81nt
8c9fd1093bd8592bbaea195785f8d1c81589073f:cuchilleros
8c9fd1a963bbf44ea9b531e91e5cb1b591c454cc:198962914685590
8c9fd1d8cc6d4fa8164a2fcb3adc7a45f3409547:sculp1011
8c9fd20540d66831f6f65a39ce1bca0e654fd5e6:ume1431965
8c9fd2b4a9571db21c4226bf9ecaea282ecadd5e:534015629819772
8c9fd2f3e63c20314cc962b624178ba82c6674a7:siegenthaler
8c9fd3713fe9600d2bea05b4e8cd33efe12bddb1:mkenrick
8c9fd3a39cca8fb8cdeeb52999aed7e6e9435fd3:billscot
8c9fd3b96ee1485e0fd7d6c71ffe3efd2e8a4614:ndiyehova
8c9fd43aef9804dab6e0aebc58415543175fea00:662566123
8c9fd481cf8f35edb6ebd683fffb0efce0478f21:371874conv
8c9fd4f37632294093fb057eb0168a05d9396e74:h3aww7w
8c9fd53dce9b046f73c5f298e2f694213f8f90f1:squishy23
8c9fd55206e0525d119f4946d3ae75e347cccb4b:NEH3112
8c9fd555303ac08f9103ff8451f8c05cf48cf120:marco22580
8c9fd5c6a94b1171518d0ba264033d779a075e8c:Nowornever2010
8c9fd613fb632b5bc6ae20a671aa40decdb8609a:MKSmks1976##
8c9fd627a48f9971df5bee874501156e9d3c011d:Steripro5
TIA
EDIT:
Reading the file into memory and writing to separate files sped the process up a bit. I also used the suggestion from TessellatingHeckler:
Import-Module DSInternals

$lines = [System.IO.File]::ReadAllLines('C:\...\68_linkedin_found_hash_plain.txt')
foreach ($line in $lines) {
    try {
        $password = $line.Substring($line.IndexOf(':') + 1)
        if ($password.Length -lt 128) {
            $pwd  = ConvertTo-SecureString $password -AsPlainText -Force
            $hash = ConvertTo-NTHash $pwd
            # one file per hash; duplicate passwords collapse to the same file name
            Set-Content C:\Temp\Hashes\$hash.txt $hash
        }
    }
    catch {
        # skip lines that fail to convert
    }
}
Then afterwards I can combine the files with:
copy *.txt combined.log
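Or the PowerShell equivalent (writing the combined file outside the source folder so it doesn't get swept into its own concatenation):

Get-ChildItem C:\Temp\Hashes\*.txt | Get-Content | Set-Content C:\Temp\combined.log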

If those are typical line lengths, and your file is 3GB, we're talking 50-60 million lines.
Change $line.Split(':')[1] to $line.Substring($line.IndexOf(':')+1); that will save creating and cleaning up 50 million arrays and 50 million strings of the bit you don't want. (It also keeps the whole password when the password itself contains a colon, which Split(':')[1] would cut short.)
PowerShell calling .Net static methods like [System.IO.File] is reasonably fast, but these bits:
$pwd = ConvertTo-SecureString $line.Split(':')[1] -AsPlainText -Force
$hash = ConvertTo-NTHash $pwd;
Add-Content C:\...\68_linkedin_ntlm.txt $hash
have a huge overhead. Starting and initializing cmdlets costs a lot more than function calls do in other languages, and having Add-Content close and reopen the file 50 million times adds needless file system overhead. Change it so you open the file once and write to it in the loop:
# before the loop
$outStream = [System.IO.StreamWriter]::new(
    [System.IO.FileStream]::new(
        'c:\path\output.txt',
        [System.IO.FileMode]::OpenOrCreate))

# in the loop
$outStream.WriteLine($hash)

# after the loop
$outStream.Close()
The next bit would be to see if you can take the code behind ConvertTo-SecureString and ConvertTo-NTHash and inline it. I don't know what the NTHash one does internally, but the ConvertTo-SecureString source is in the PowerShell repository on GitHub, and it's not going to be trivial to wrap or inline that into PowerShell code.
That's it as far as I can see for "quick and dirty", but it might knock some 20-30% off the runtime.
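Putting those suggestions together, something like this (a sketch, not benchmarked; for what it's worth, an NT hash is just the MD4 of the UTF-16LE password, which is why the conversion itself is cheap and the cmdlet overhead dominates):

Import-Module DSInternals

$outStream = [System.IO.StreamWriter]::new('C:\Temp\68_linkedin_ntlm.txt')
try {
    # ReadLines streams the file instead of loading all 3GB at once
    foreach ($line in [System.IO.File]::ReadLines('C:\...\68_linkedin_found_hash_plain.txt')) {
        $plain = $line.Substring($line.IndexOf(':') + 1)
        if ($plain.Length -gt 0 -and $plain.Length -lt 128) {
            $secure = ConvertTo-SecureString $plain -AsPlainText -Force
            $outStream.WriteLine((ConvertTo-NTHash $secure))
        }
    }
}
finally {
    $outStream.Close()
}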

Related

How to open SolidWorks sldprt files as read-only with PowerShell?

I built this open-file function in PowerShell for a GUI I wrote that lets you find and open various files on a server. I mainly use it for opening SolidWorks files as read-only, but also for PDF files, and it should work for just about any other file if there is a file association for it.
The problem is that sometimes it doesn't work when opening the sldprt files. SolidWorks will either ignore the open-file request or it won't open the file as read-only. I think this is mostly just a SolidWorks issue, as it sometimes won't open files when they are double-clicked in Windows Explorer.
Anyway, my solution is to set the file attribute to read-only, start a job that opens the file in SolidWorks, and then wait for the SolidWorks process to go idle before removing the read-only attribute. It does this through an event that watches for the job state to change. Since this is running through a GUI, it has to be done in the background to prevent the GUI from locking up.
Is there a simpler way to open files as read-only with PowerShell?
I think it might be possible using the SolidWorks .dll files, but they are meant to be loaded in C# or VBScript and I have no idea what I'm doing in either of those languages.
function Open-File {
    param(
        [parameter(Mandatory=$true)]$file,
        [bool]$readOnly = $true,
        $processName = $null
    )

    [scriptblock]$openFileScriptBlock = {
        param(
            $file,
            $readOnly,
            $processName = $null
        )

        # initiate variables
        $loaded = $false
        $file = Get-Item $file
        $processLastCpu = 0
        $timeout = 0

        if ($readOnly -and !$file.IsReadOnly) {
            $file.IsReadOnly = $true

            # call file with default application
            $attempts = 0
            while ($true) {
                try { $startedProcess = Start-Process "$($file.FullName)" -PassThru; break }
                catch {
                    $attempts++
                    if ($attempts -eq 3) { return "cannot open file: $file, Error: $_" }
                }
            }
            Start-Sleep -Seconds 2

            if ($processName) {
                $processName = $startedProcess.Name
                if ($processName -eq "SWSHEL~1") { $processName = "SLDWORKS" }
            }

            # wait until process shows up in the process manager
            while ($loaded -eq $false -and $timeout -lt 25) {
                try {
                    $process = Get-Process -Name $processName -ErrorAction 'Stop'
                    if ($?) { $loaded = $true; $timeout = 0 } else { throw }
                }
                catch { Start-Sleep -Milliseconds 200; $timeout++ }
            }
            Start-Sleep -Seconds 2

            # wait for process to go idle
            while ($process.CPU -ne $processLastCpu -and $timeout -lt 10) {
                $processLastCpu = $process.CPU
                Start-Sleep -Milliseconds 500
                $timeout++
            }

            $file.IsReadOnly = $false
        }
        else { Start-Process "$($file.FullName)" }

        return ,$file
    }

    if (!(Test-Path -Path $file)) { Update-Message "File not found: $file"; return }

    $openFileJob = Start-Job -Name 'openfile' -ScriptBlock $openFileScriptBlock -ArgumentList $file, $readOnly, $processName
    Register-ObjectEvent $openFileJob StateChanged -Action {
        $jobResult = $sender | Receive-Job
        $sender | Remove-Job -Force
        Unregister-Event -SourceIdentifier $event.SourceIdentifier
        Remove-Job -Name $event.SourceIdentifier -Force
        try   { Update-Message "opened file $($jobResult.Name)" }
        catch { Update-Message $jobResult }
    } | Out-Null
}
I know it's an old question, but I was wondering if you ever managed to find a solution?
If not, there are a few things you could try. First off, if your code opens every other file type just fine, the problem does not seem to be in your code.
File associations for the SLD file types work most of the time, but we do see them going bad from time to time (often related to updates). In that case, double-check that all SLD file types are set to open with 'SolidWorks Launcher' (and not SolidWorks directly).
Using the launcher ensures SolidWorks does not try to open a file into an already-running instance of SolidWorks.
Also, check the following: SolidWorks Options -> Collaboration -> 'Enable Multi-user environments'. Is this set?
Whatever state it is in, try changing it to the opposite.
That checkmark allows multiple SolidWorks users to open the same file at the same time, and it does so by changing the read-only state of the file back and forth.
(It could be that it is interfering with your code.)
Both of these settings are PC-specific, so if you change them on one machine, they might also need to be changed on other machines.

PowerShell Set Drive Labels Persisting And Unchangeable Until Reboot

Our software needs to map a network drive depending on which database the User logs in to.
The software first checks that the drive isn't already mapped, if it is then it skips the mapping step.
If the drive isn't mapped, or it is mapped to a different share (i.e. the User was previously logged in to a different database), then it clears any existing drive mapping, and maps the required drive.
It does this by generating and then running a PowerShell script.
Remove-SmbMapping -LocalPath "R:" -Force -UpdateProfile;
Remove-PSDrive -Name "R" -Force;
net use "R" /delete /y;
$password = ConvertTo-SecureString -String "Password" -AsPlainText -Force;
$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "Username", $password;
New-PSDrive -Name "R" -PSProvider "FileSystem" -Root "\\server\share" -Credential $credential -Persist;
$a = New-Object -ComObject shell.application;
$a.NameSpace( "R:" ).self.name = "FriendlyName";
The first three lines remove any existing mapping on that drive letter. They all theoretically do the same thing; however, thanks to Microsoft, it's entirely random which line will actually work. It only consistently works if all three lines are run.
The middle three lines map the new drive.
The last two lines change the drive label of the new drive to something more user-friendly than the default \\server\share label
The first time someone logs in after a reboot the above script works perfectly. The new drive is mapped, and the label is changed.
However, if the User then logs out and logs into a different database, the label will not change.
For example, the User first logs in to 'Database A', and the drive is mapped with the label 'DatabaseAFiles'. All well and good.
But if the User then logs out, and logs in to 'Database B', the drive is correctly mapped and points to the correct share, but the label still says 'DatabaseAFiles' and not 'DatabaseBFiles'.
If the User reboots their PC, however, and logs in to 'Database B', then the label will correctly say 'DatabaseBFiles', but any subsequent log ins to other databases again won't change the label.
Reboot
Log in to Database A, label is DatabaseAFiles
Log out and into Database B, label is still DatabaseAFiles
Reboot
Log in to Database B, label is now DatabaseBFiles
This is not dependent on the last two script lines being present (the two that set the label), I actually added those to try to fix this issue. If those two lines are removed, the label is the default \\server\share label, and still doesn't change correctly, i.e.
Reboot
Log in to Database A, label is \\servera\sharea
Log out and into Database B, label is still \\servera\sharea
Reboot
Log in to Database B, label is now \\serverb\shareb
Regardless of the label, the drive is always correctly mapped to the correct share, and using it has all the correct directories and files.
Everything works correctly, it's just the label that is incorrect after the first login per reboot.
The script is run from within a C# program, in a PowerShell instance created like this:
using (PowerShell PowerShellInstance = PowerShell.Create())
{
    PowerShellInstance.AddScript(script);
    IAsyncResult result = PowerShellInstance.BeginInvoke();
    while (result.IsCompleted == false)
    {
        Thread.Sleep(1000);
    }
}
As it maps a drive, it cannot be run as Administrator (the drive wouldn't be mapped for the actual User); it has to be run in normal mode, so there is a check for that earlier up.
If I take a copy of the script and run it in a PowerShell session outside the C# program, I get exactly the same results (everything works but the label is wrong after the first login), so it's not that it's being run from within the C# program.
It's entirely possible, of course, that the issue is with File Explorer or Windows itself, caching the label somewhere and reusing it.
Anyone have any suggestions of things I can try please?
A while ago I had to rename file shares, and for that I wrote this function. Maybe it is helpful for you.
#--------------------------------------
function Rename-NetworkShare {
#--------------------------------------
    param(
        [string]$sharePattern,
        [string]$value
    )
    $regPath = Get-ChildItem 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2'
    $propertyName = '_LabelFromReg'
    foreach ($child in $regPath) {
        if ($child.PSChildName -like ('*' + $sharePattern + '*')) {
            if (!$child.Property.Contains($propertyName)) {
                New-ItemProperty $child.PSPath -Name $propertyName -PropertyType String | Out-Null
            }
            Set-ItemProperty -Path $child.PSPath -Name $propertyName -Value $value | Out-Null
        }
    }
}

Rename-NetworkShare -sharePattern 'patternOldName' -value 'NewFriendlyName'
It's not ideal, there's one bit I'm not happy about, but this is the best I've been able to come up with so far. If I come up with something better I'll post that instead.
Firstly, I check if there is already a drive mapped to the letter I want to use:-
// Test if mapping already exists for this database
var wrongMapping = false;
var drives = DriveInfo.GetDrives();
foreach (var drive in drives)
{
    var driveLetter = drive.RootDirectory.ToString().Substring(0, 1);
    if (driveLetter == mappingDetails.DriveLetter && Directory.Exists(drive.Name))
    {
        wrongMapping = true; // Assume this is the wrong drive; if not we'll return from the method before it's used anyway
        var unc = "Unknown";
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey("Network\\" + driveLetter))
        {
            if (key != null)
            {
                unc = key.GetValue("RemotePath").ToString();
            }
        }
        if (unc == mappingDetails.Root)
        {
            View.Status = @"Drive already mapped to " + mappingDetails.DriveLetter + ":";
            ASyncDelay(2000, () => View.Close());
            return; // Already mapped, carry on with login
        }
    }
}
If we already have the correct path mapped to the correct drive letter, then we return and skip the rest of the mapping code.
If not, we'll have the variable wrongMapping, which will be true if we have a different path mapped to the drive letter we want. This means that we'll need to unmap that drive first.
This is done via a PowerShell script run by the C# program, and it contains the bit I'm not happy about:-
Remove-PSDrive mappingDetails.DriveLetter;
Remove-SmbMapping -LocalPath "mappingDetails.DriveLetter:" -Force -UpdateProfile;
Remove-PSDrive -Name "mappingDetails.DriveLetter" -Force;
net use mappingDetails.DriveLetter /delete /y;
Stop-Process -ProcessName explorer;
The first four lines are different ways to unmap a drive, and at least one of them will work. Which one does work seems to be random, but between all four the drives (so far) always get unmapped.
Then we get this bit:
Stop-Process -ProcessName explorer;
This will close and restart the Explorer process, thus forcing Windows to admit that the drive we just unmapped is really gone. Without this, Windows won't fully release the drive, and most annoyingly it will remember the drive label and apply it to the next drive mapped (thus making a mapping to CompanyBShare still say CompanyAShare).
However, in so doing it will close any open File Explorer windows, and also briefly blank the taskbar, which is not good.
But, given that currently no Company sites have more than one share, and it's only the Developers and Support that need to remove existing drives and map new ones, for now we'll put up with it.
Once any old drive is unmapped, we then carry on and map the new drive, which again is done via a PowerShell script run from the C# code.
$password = ConvertTo-SecureString -String "mappingDetails.Password" -AsPlainText -Force;
$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "mappingDetails.Username", $password;
New-PSDrive -Name "mappingDetails.DriveLetter" -PSProvider "FileSystem" -Root "mappingDetails.Root" -Credential $credential -Persist;
$sh = New-Object -com Shell.Application;
$sh.NameSpace('mappingDetails.DriveLetter:').Self.Name = 'friendlyName';
New-Item -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\" -Name "foldername";
Remove-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\foldername" -Name "_LabelFromReg";
New-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\foldername" -Name "_LabelFromReg" -Value "friendlyName" -PropertyType "String";
The first part maps the drive:
$password = ConvertTo-SecureString -String "mappingDetails.Password" -AsPlainText -Force;
$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "mappingDetails.Username", $password;
New-PSDrive -Name "mappingDetails.DriveLetter" -PSProvider "FileSystem" -Root "mappingDetails.Root" -Credential $credential -Persist;
The middle part changes the name directly:
$sh = New-Object -com Shell.Application;
$sh.NameSpace('mappingDetails.DriveLetter:').Self.Name = 'friendlyName';
And the end part changes the name in the Registry:
New-Item -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\" -Name "foldername";
Remove-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\foldername" -Name "_LabelFromReg";
New-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\foldername" -Name "_LabelFromReg" -Value "friendlyName" -PropertyType "String";
Firstly, it creates a key for this path (if the key already exists this will fail, but the script will carry on).
Then it removes the existing property _LabelFromReg (if it doesn't exist this will fail, but the script will carry on).
Then it (re)creates the property _LabelFromReg with the new friendly name.
So, again doing the same thing two ways, but between the two it works.
I'd like to find some alternative to having to kill and restart the Explorer process; it's really tacky, but it seems to be the only way to get Windows to acknowledge the changes.
And at least I now get the correct labels on the drives when mapped.
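One thing that might be worth trying instead of killing Explorer (an untested idea; I have not verified it clears the cached drive label): asking the shell to refresh its cached state via the Win32 SHChangeNotify function.

# Untested sketch: broadcast a shell change notification instead of restarting Explorer.
# SHCNE_ASSOCCHANGED (0x08000000) with SHCNF_FLUSH (0x1000) tells the shell to
# re-read its cached state; whether it also drops cached drive labels is an open question.
Add-Type -Namespace Win32 -Name Shell -MemberDefinition @'
[DllImport("shell32.dll")]
public static extern void SHChangeNotify(int wEventId, int uFlags, IntPtr dwItem1, IntPtr dwItem2);
'@
[Win32.Shell]::SHChangeNotify(0x08000000, 0x1000, [IntPtr]::Zero, [IntPtr]::Zero)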

Xmlstarlet ed encoding and powershell inside Process C#

I want to use xmlstarlet from the powershell started with Process in a C# application.
My main problem is that when I use this code:
./xml.exe ed -N ns=http://www.w3.org/2006/04/ttaf1 -d '//ns:div[not(contains(@xml:lang,''Italian''))]' "C:\Users\1H144708H\Downloads\a.mul.ttml" > "C:\Users\1H144708H\Downloads\a.mul.ttml.conv"
on powershell I get a file with the wrong encoding (I need UTF-8).
On Bash I used to just
export LANG=it_IT.UTF-8 &&
before the xmlstarlet call, but in PowerShell I really don't know how to do that.
Maybe there is an alternative; I saw that xmlstarlet can use sel --encoding utf-8, but I don't know how to use it in ed mode (I tried putting it after xml.exe, after ed, etc., but it always fails).
What is the alternative to export LANG=it_IT.UTF-8, or how can I use --encoding utf-8?
P.S. I tried many things, like:
$MyFile = Get-Content "C:\Users\1H144708H\Downloads\a.mul.ttml"; $Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False; [System.IO.File]::WriteAllLines("C:\Users\1H144708H\Downloads\a.mul.ttml.conv", $MyFile, $Utf8NoBomEncoding)
And:
./xml.exe ed -N ns=http://www.w3.org/2006/04/ttaf1 -d '//ns:div[not(contains(@xml:lang,''Italian''))]' "C:\Users\1H144708H\Downloads\a.mul.ttml" | Out-File "C:\Users\1H144708H\Downloads\a.mul.ttml.conv" -Encoding utf8
But characters like è à ì ù are still wrong. If I save the original file with Notepad before the conversion it works (only if I don't use xmlstarlet)... but I need to do the same thing in PowerShell and I don't know how.
EDIT:
I was able to print my UTF-8 file correctly in PowerShell:
Get-Content -Path "C:\Users\1H144708H\Downloads\a.mul.ttml" -Encoding UTF8
But I'm still not able to do the same thing with xmlstarlet.
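For what it's worth, PowerShell decodes the stdout of native programs using [Console]::OutputEncoding, so forcing that to UTF-8 before redirecting may be the closest equivalent to export LANG=it_IT.UTF-8 (a sketch; not verified against xmlstarlet specifically):

# Tell PowerShell to decode the native program's output as UTF-8,
# then write the result back out as UTF-8
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
./xml.exe ed -N ns=http://www.w3.org/2006/04/ttaf1 -d '//ns:div[not(contains(@xml:lang,''Italian''))]' "C:\Users\1H144708H\Downloads\a.mul.ttml" |
    Out-File "C:\Users\1H144708H\Downloads\a.mul.ttml.conv" -Encoding utf8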
In the end I decided to write a native C# method: I just used a StreamReader to read the file line by line. With a simple Contains I detect where the xml:lang="Language" block is, and I then start appending every line to a string. Of course I add the head and the tail of my file around the while loop, and I stop appending when I read a line that contains the closing tag. I know that this is not the best way to do things, but it works for my case.

Loading a Powershell Module from the C# code of a custom Provider

I've been working on a VERY specific functionality "need" to tie into a custom Provider I'm writing in C#.
Basically I set out to find a way to replicate the
A:
B:
etc. functions defined when PowerShell loads, so instead of having to type
CD A:
you can just do the aforementioned
A:
I first tried to have my provider inject the functions into the runspace, but it seems I'm completely missing the timing of how to get that to work, so I went another route.
Basically I have a VERY simple PSM1 file, UseColons.psm1:
function Use-ColonsForPSDrives {
    [CmdletBinding()] Param()
    Write-Verbose "Looping Through Installed PowerShell Providers"
    Get-PSProvider | % {
        Write-Verbose "Found $($_.Name) checking its drives"
        $_.Drives | ? { (Get-Command | ? Name -eq "$($_.Name):") -eq $null } | % {
            Write-Verbose "Setting up: `"function $($_.Name):() {Set-Location $($_.Name):}`""
            if ($Verbose) {
                . Invoke-Expression -Command "function $($_.Name):() {Set-Location $($_.Name):}"
            }
            else {
                . Invoke-Expression -Command "function $($_.Name):() {Set-Location $($_.Name):}" -ErrorAction SilentlyContinue
            }
            Write-Verbose "Finished with drive $($_.Name)"
        }
    }
    # Cert and WSMan do not show up as providers until you try to navigate to their drives.
    # As a result we add their functions manually, but we still check whether they are already set.
    if ((Get-Command | ? Name -eq "Cert:") -eq $null) { . Invoke-Expression -Command "function Cert:() {Set-Location Cert:}" }
    if ((Get-Command | ? Name -eq "WSMan:") -eq $null) { . Invoke-Expression -Command "function WSMan:() {Set-Location WSMan:}" }
}

. Use-ColonsForPSDrives
In simple terms, it loops through all loaded providers, then through all the drives of each provider; it checks whether the Function: drive contains a function matching the {DriveName}: format, and if one is not found it creates one.
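(For what it's worth, the Function: drive can be queried and written directly, so a more compact version of the same idea might look like this; a sketch, not tested against the Cert:/WSMan: special cases:)

# Sketch: create a <Drive>: function for every drive that doesn't have one yet
Get-PSDrive | Where-Object { !(Test-Path "Function:\$($_.Name):") } | ForEach-Object {
    New-Item -Path "Function:\$($_.Name):" -Value "Set-Location '$($_.Name):'" | Out-Null
}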
The .psd1 file is nothing more than "export all functions".
This is stored in the %ProgramFiles%\WindowsPowerShell\Modules path under its own folder.
And finally I have a profile.ps1 under the %windir%\system32\windowspowershell\v1.0 directory that just does:
Remove-Module UseColons -ErrorAction SilentlyContinue
Import-Module UseColons
So when I load PowerShell or the ISE, if I want to, say, dir through the variables, I can just call
Variable:
Or if I need to switch back to the registry
HKLM:
HKCU:
because when you are working with multiple providers, typing that CD over and over as you switch is just annoying.
Now to the problem: I'm still developing the actual PowerShell provider this was originally intended for, but when I debug it, the UseColons module loads BEFORE Visual Studio turns around and loads the new provider. If I manually remove and re-import the module, it does its thing and I have all my drive functions for my provider.
After that LONG explanation, I wanted to know how I can either:
Setup my UseColons module to load LAST
Find a way to have my Custom Provider (technically a module since it has the provider AND custom Cmdlets) load the UseColons module when it initializes
I don't want to remove it from my standard profile because it is very helpful when I'm not working on the new provider and just tooling around using powershell for administrative stuff.
Hopefully someone can give me some ideas or point me in the direction of some good deeper-dive PowerShell provider documentation and how-tos.
In your module manifest (.psd1), you have a DLL as the RootModule?
This is a horrible hack, and does not help for drives that get created in the future, but...
In your module manifest, instead of YourProvider.dll as the RootModule, use Dummy.psm1 instead (it can be an empty file). Then, for NestedModules, use @( 'YourProvider.dll', 'UseColons' ). This allows the UseColons module to be loaded after YourProvider.dll. (Dummy will be last.)

PowerShell -WebClient DownloadFile Wildcards?

I want to copy multiple files from a SharePoint library to a local directory.
Is it possible to use wildcards?
The following code is not working. Is there a way to use the WebClient with wildcards?
(I must use the WebClient. It is not possible to use the SharePoint web services :-( )
$url = "http://mySharePoint/websites/Site/TestDocBib/*.jpg"
$path = "D:\temp\"
$WebClient = New-Object System.Net.WebClient
$WebClient.UseDefaultCredentials = $true
$WebClient.DownloadFile($url, $path)
No, sorry, you can't use wildcards with WebClient. It's not part of HTTP.
What about using WEBDAV?
c:\> copy \\my.sharepoint.site\sites\foo\doclib\*.jpg c:\temp\
If the client end (i.e. not the SharePoint side) is a Server 2008+ platform, you'll need to add the "Desktop Experience" feature and enable the "WebClient" service. This is not the same thing as System.Net.WebClient; it's the HTTP/DAV network redirector service.
If you need to log in with different credentials, you can use this:
c:\> net use * "http://my.sharepoint.site/sites/foo/doclib" /user:foobar
mapped h: to ...
c:\> copy h:\*.jpg c:\temp
Hope this helps.
You can parse through the HTML of the list.
# dummy url - i've added allitems.aspx
$url = "http://mySharePoint/websites/Site/TestDocBib/allitems.aspx"
$path = "D:\temp\"
$dl_file = $path + "allitems.html"
$WebClient = New-Object System.Net.WebClient
$WebClient.UseDefaultCredentials = $true
$WebClient.DownloadFile($url, $dl_file)
Once you've downloaded the file you can parse through it. A quick google turned up that Lee Holmes had done most of it already:
http://www.leeholmes.com/blog/2005/09/05/unit-testing-in-powershell-%E2%80%93-a-link-parser/
The main bit you want is the regex:
$regex = "<\s*a\s*[^>]*?href\s*=\s*[`"']*([^`"'>]+)[^>]*?>"
A very quick hack that may (or may not) work... but the gist is there :)
$test = gc $dl_file
$t = [Regex]::Matches($test, $regex, "IgnoreCase")
$i = 0
$WebClient = New-Object System.Net.WebClient
$WebClient.UseDefaultCredentials = $true
foreach ($tt in $t) {
    # this assumes absolute paths - you may need to add the hostname if the paths are relative
    $url = $tt.Groups[1].Value.Trim()
    $WebClient.DownloadFile($url, $($path + $i + ".jpg"))
    $i = $i + 1
}
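A small refinement of the same hack (a sketch, under the same assumptions as above): only fetch links that end in .jpg, and keep the original file names instead of numbering them:

foreach ($tt in $t) {
    $url = $tt.Groups[1].Value.Trim()
    if ($url -like '*.jpg') {
        # derive the local file name from the URL path
        $name = [System.IO.Path]::GetFileName(([Uri]$url).AbsolutePath)
        $WebClient.DownloadFile($url, (Join-Path $path $name))
    }
}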
