I am using the code below to check in a file to SharePoint 2016 using C#, but it throws Microsoft.SharePoint.Client.ServerException: File Not Found. The file URL is valid, and file.Name is printed to the console, confirming its validity. Please advise what is going wrong here.
string url = "valid url of the file";
var file = clientContext.Web.GetFileByServerRelativeUrl(url);
clientContext.Load(file);
clientContext.ExecuteQuery();
Console.WriteLine(file.Name); //successfully printed expected result
file.CheckIn("Test", CheckinType.MajorCheckIn);
clientContext.Load(file);
clientContext.ExecuteQuery(); //File Not Found Exception thrown at this point
Try this:
using (ClientContext clientContext = GetContextObject())
{
    string url = "valid server-relative url of the file";
    // SaveBinaryDirect expects the server-relative URL as a string (not a File object),
    // and the FileStream needs a local file path (this path is just an example)
    using (FileStream fs = new FileStream(@"C:\temp\localcopy.docx", FileMode.Open))
    {
        Microsoft.SharePoint.Client.File.SaveBinaryDirect(clientContext, url, fs, true);
    }
}
You might find that the "valid url of the file" is not actually valid. I am getting this error from a URL that is "valid" in the sense that it opens the file in SharePoint from the browser, but I think the filename has some characters that the C# CSOM interface doesn't support. In my case the file is called "GetPDF.aspx%2F4AA3-8247ENW.pdf", so it's probably because it has two periods, but I'm struggling to confirm that.
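If escaped characters are the culprit, it may be worth normalizing the URL before handing it to CSOM. A minimal sketch, assuming the %2F really is an encoded '/' and using a made-up site path:
// Hypothetical example: decode percent-escapes such as %2F (an encoded '/')
// before calling GetFileByServerRelativeUrl
string rawUrl = "/sites/docs/GetPDF.aspx%2F4AA3-8247ENW.pdf"; // example path
string decodedUrl = Uri.UnescapeDataString(rawUrl);           // "/sites/docs/GetPDF.aspx/4AA3-8247ENW.pdf"
var file = clientContext.Web.GetFileByServerRelativeUrl(decodedUrl);
clientContext.Load(file);
clientContext.ExecuteQuery();
Whether decoding is the right move depends on whether the server-relative path really contains a slash or the file name literally contains "%2F".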
You might take a look at https://www.codesharepoint.com/csom/get-checkin-comment-of-file-in-sharepoint-using-csom. I've never used CSOM and GetFileByServerRelativeUrl, but I seem to recall CheckIn has to occur on the file as opposed to the item containing the file.
I am trying to download a zip of PDF files without having to go through the full directory path.
So when the files are downloaded, the user has to click through the entire directory path to get to the PDF. I want just the files to download, without the folder structure.
So I did this:
public ActionResult DownloadStatements(string[] years, string[] months, string[] radio, string[] emails, string acctNum)
{
    List<string> manypaths = (List<string>)TempData["temp"];
    using (Ionic.Zip.ZipFile zip = new Ionic.Zip.ZipFile())
    {
        zip.AddFiles(manypaths, @"\");
        MemoryStream output = new MemoryStream();
        zip.Save(output);
        return File(output.ToArray(), "application/zip");
    }
}
In the line zip.AddFiles(manypaths, @"\") I added the @"\" and that seemed to do the trick. But now at that line of code I am getting an error:
System.ArgumentException: 'An item with the same key has already been added.'
None of the files are duplicates; I have checked that. I just don't understand what is going on.
I was thinking that adding a timestamp might help, but I can't figure out the proper way to do this in the DotNetZip library.
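For what it's worth, the "same key" error usually means two of the source paths share a bare file name: AddFiles with a flat archive folder (@"\") uses just the file name as the entry key, so same-named files from different directories collide even though the files themselves differ. A sketch (untested against your data) that disambiguates colliding names, roughly along the timestamp idea:
// Sketch: avoid duplicate entry keys by renaming collisions
// (uses System.IO, System.Collections.Generic and Ionic.Zip)
using (Ionic.Zip.ZipFile zip = new Ionic.Zip.ZipFile())
{
    var used = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
    foreach (string path in manypaths)
    {
        string name = System.IO.Path.GetFileName(path);
        if (!used.Add(name))
        {
            // same file name seen before: suffix a timestamp plus a counter to keep the key unique
            name = System.IO.Path.GetFileNameWithoutExtension(path)
                   + "_" + DateTime.Now.ToString("yyyyMMddHHmmss")
                   + "_" + used.Count
                   + System.IO.Path.GetExtension(path);
            used.Add(name);
        }
        // AddEntry takes an explicit entry name, so no two keys ever collide
        zip.AddEntry(name, System.IO.File.ReadAllBytes(path));
    }
    MemoryStream output = new MemoryStream();
    zip.Save(output);
    return File(output.ToArray(), "application/zip");
}
(System.IO.File is fully qualified because File() inside a controller resolves to the Controller.File helper.)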
Ultimately I'm trying to upload a document from the user's file system to Google Drive via an MVC .NET web site, which uses a service account.
I'm not sure if I'm implementing the appropriate design to accomplish the upload but I am getting hung up on the path of the file to be uploaded.
Web
@Html.TextBox("file", "file", new { type = "file", id = "fileUpload" })
Controller
public ActionResult GoogleDriveList(GoogleDrivePageVM vm, HttpPostedFileBase file)
{
    File _file = new File();
    var _uploadFile = System.IO.Path.GetFileName(file.FileName);
    byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
The error occurs on the ReadAllBytes statement: it could not find file 'C:\Program Files (x86)\IIS Express\Map of Universe.txt'. The file name is correct, but the path is not.
byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
System.IO.MemoryStream stream = new System.IO.MemoryStream(byteArray);
// ... Google Drive file stuff goes here
Then upload the file from the stream.
FilesResource.InsertMediaUpload request = _service.Files.Insert(body, stream, body.MimeType);
request.Upload();
So, am I going down the right path using the HTML file helper? And if so, what's the trick to get the path to work correctly? Also, I want to be able to support file sizes up to 500 MB (if that makes a difference).
If you're getting the filename from a user-selected Windows file explorer dialog, then you shouldn't be using the line below, as it will just strip the filename out of the path and put it into your upload file variable. I'm assuming that bogus path is where you're running the code from, so ReadAllBytes is trying to read the filename from that path.
var _uploadFile = System.IO.Path.GetFileName(file.FileName)
Just change it so the variable has the full path and filename you need to use in ReadAllBytes:
var _uploadFile = file.FileName
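One caveat: most modern browsers send only the bare file name, not a full client path, so there may be no path to recover at all. In that case, a sketch that skips the disk entirely and reads the uploaded bytes from HttpPostedFileBase.InputStream:
// Sketch: read the upload straight from the request instead of a disk path
byte[] byteArray;
using (var ms = new System.IO.MemoryStream())
{
    file.InputStream.CopyTo(ms); // InputStream is the uploaded content itself
    byteArray = ms.ToArray();
}
System.IO.MemoryStream stream = new System.IO.MemoryStream(byteArray);
// hand "stream" to _service.Files.Insert(body, stream, body.MimeType) as before
For uploads approaching 500 MB you would also need to raise maxRequestLength (under httpRuntime, in KB) and maxAllowedContentLength (under requestLimits, in bytes) in web.config, and streaming from file.InputStream rather than buffering the whole byte array is kinder to memory.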
I am trying to download a file using the WebClient method DownloadFile, but it's giving me this error:
The process cannot access the file '...\d915877c-cb7c-4eeb-97d8-41d49b75aa27.docx' because it is being used by another process.
But when I open the file by clicking it, it opens fine.
There are similar questions asking for the same thing, but none have accepted answers.
Any help will be appreciated.
It may be the OS that is not letting the file go. Whatever it is, after searching a lot I am unable to find a solution.
Here is the code to create a file
Document d = new Document();
d.Save(HttpContext.Current.Server.MapPath(@"Invoice\" + iname + ".docx"));
I am using the Aspose Words DLL,
and I am accessing the file the following way:
using (var client = new System.Net.WebClient())
{
    client.UseDefaultCredentials = true;
    client.DownloadFile(Server.MapPath("invoice/" + Request.QueryString["id"].ToString() + ".docx"), Server.MapPath("invoice/" + Request.QueryString["id"].ToString() + ".docx"));
    client.Dispose();
}
And by the way, it's giving the same error even for files that were not created from code.
Save the downloaded file to a path that is different from the download source path. If you want to replace the original file, do it after disposing the WebClient, using the File.Replace() method.
string downloadPath = "your download source";       // URL or path of the file to download
string destinationPath = "where to save the file";  // this must be different from the download path
client.DownloadFile(downloadPath, destinationPath);
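For the replace step, a sketch built from the same query string as the question (downloadUrl stands in for wherever the file actually comes from):
string id = Request.QueryString["id"].ToString();
string original = Server.MapPath("invoice/" + id + ".docx");
string temp = Server.MapPath("invoice/" + id + ".tmp.docx");
string backup = Server.MapPath("invoice/" + id + ".bak.docx");

using (var client = new System.Net.WebClient())
{
    client.UseDefaultCredentials = true;
    client.DownloadFile(downloadUrl, temp); // save to a different path than the source
}

// replace the original only after the using block has released the temp file
System.IO.File.Replace(temp, original, backup);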
I am using Visual Studio C# to parse an XML document for file locations from a local search tool I am using. Specifically, I am using C# to query whether the user has access to certain files and to hide those they cannot access. I have files that should report access as true; however, because not all files are local (i.e. some are web files without proper names), files that should show as accessible do not. The error right now is caused by a URL using .aspx?i=573. Is there a workaround, or am I going to have to just remove all of these files?
Edit: More info...
Right now I am using:
foreach (XmlNode xn in nodeList)
{
    string url = xn.InnerText;
    //Label1.Text = url;
    try
    {
        using (FileStream fs = File.OpenRead(url)) { }
    }
    catch
    {
        i++;
        Label2.Text = i.ToString();
        Label1.Text = url;
    }
}
The issue is, when it attempts to open files like the ...aspx?i=573 one, it puts them in the catch stack. If I attempt to open the file manually, however, it opens just fine (i.e. I have read access, but because of either the file type or the '?=' appended to the file name, it gets tossed into the unreadable stack).
I want everything that is readable, either via URL or local access, to display; everything else should be caught as an error file.
I'm not sure exactly what you are trying to do, but if you only want the path of a URI, you can easily drop the query string portion like this:
Uri baseUri = new Uri("http://www.domain.com/");
Uri myUri = new Uri(baseUri, "home/default.aspx?i=573");
Console.WriteLine(myUri.AbsolutePath); // i.e. "/home/default.aspx"
You cannot have ? in file names in Windows, but they are valid in URIs (that is why IE can open it, but Windows cannot).
Alternatively, you could just replace the '?' with some other character if you are converting a URL to a filename.
In fact, thinking about it now, you could just check whether your "document" is a URI or not, and if it isn't, then try to open the file on the file system. It sounds like you are trying to open anything and everything that is supplied, but it wouldn't hurt to perform some checks on the data.
private static bool IsLocalPath(string p)
{
return new Uri(p).IsFile;
}
This is from Check if the path input is URL or Local File; it looks like exactly what you are looking for.
FileStream reads and writes local files. "?" is not a valid character in a local file name.
It looks like you want to open both local and remote files. If that is what you are trying to do, you should use the appropriate download method for each type; i.e. for HTTP, use WebRequest or related classes.
Note: it would be much easier to answer if you'd say: when url is "..." File.OpenRead(url) fails with an exception, message "...".
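Putting the two suggestions together, a sketch (using System.IO and System.Net) of a check that accepts both local paths and HTTP URLs; anything else lands in the unreadable pile:
private static bool CanRead(string url)
{
    Uri uri;
    if (!Uri.TryCreate(url, UriKind.Absolute, out uri) || uri.IsFile)
    {
        // treat plain paths and file:// URIs as local files
        string path = (uri != null && uri.IsFile) ? uri.LocalPath : url;
        try { using (File.OpenRead(path)) { } return true; }
        catch { return false; }
    }

    // for web resources, a HEAD request avoids downloading the body
    var request = WebRequest.Create(uri);
    request.Method = "HEAD";
    try { using (request.GetResponse()) { } return true; }
    catch (WebException) { return false; }
}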
What is the best way to download all files in a remote directory using C# and FTP and save them to a local directory?
Thanks.
Downloading all files in a specific folder seems like an easy task. However, there are some issues that have to be solved. To name a few:
How to get the list of files (System.Net.FtpWebRequest gives you an unparsed list, and the directory listing format is not standardized in any RFC)
What if the remote directory has both files and subdirectories? Do we have to dive into the subdirectories and download their content?
What if some of the remote files already exist on the local computer? Should they be overwritten? Skipped? Should we overwrite older files only?
What if a local file is not writable? Should the whole transfer fail? Should we skip the file and continue to the next?
How do we handle files on the remote disk that are unreadable because we don't have sufficient access rights?
How are symlinks, hard links and junction points handled? Links can easily be used to create an infinite recursive directory tree structure. Consider folder A with subfolder B which in fact is not a real folder but a *nix hard link pointing back to folder A. A naive approach will end in an application which never ends (at least until somebody manages to pull the plug).
A decent third-party FTP component should have a method for handling those issues. The following code uses our Rebex FTP for .NET.
using (Ftp client = new Ftp())
{
    // connect and login to the FTP site
    client.Connect("mirror.aarnet.edu.au");
    client.Login("anonymous", "my@password");

    // download all files
    client.GetFiles(
        "/pub/fedora/linux/development/i386/os/EFI/*",
        "c:\\temp\\download",
        FtpBatchTransferOptions.Recursive,
        FtpActionOnExistingFiles.OverwriteAll
    );

    client.Disconnect();
}
The code is taken from my blog post, available at blog.rebex.net. The blog post also references a sample which shows how to ask the user how to handle each problem (e.g. Overwrite / Overwrite older / Skip / Skip all).
Using C# FtpWebRequest and FtpWebResponse, you can use the following recursion (make sure the folder strings terminate in '\'):
public void GetAllDirectoriesAndFiles(string getFolder, string putFolder)
{
    List<string> dirItems = DirectoryListing(getFolder);
    foreach (var item in dirItems)
    {
        if (item.Contains('.'))
        {
            GetFile(getFolder + item, putFolder + item);
        }
        else
        {
            var subDirPut = new DirectoryInfo(putFolder + "\\" + item);
            subDirPut.Create();
            GetAllDirectoriesAndFiles(getFolder + item + "\\", subDirPut.FullName + "\\");
        }
    }
}
The "item.Contains('.')" is a bit primitive, but has worked for my purposes. Post a comment if you need an example of the methods:
GetFile(string getFileAndPath, string putFileAndPath)
or
DirectoryListing(getFolder)
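For reference, minimal sketches of both helpers using FtpWebRequest, assuming a _credentials field and folder strings that are full ftp:// URLs (adjust the separators to however you build your paths):
// Assumed field: NetworkCredential _credentials = new NetworkCredential("user", "password");

private List<string> DirectoryListing(string getFolder)
{
    // NLST returns bare names, which is what the recursion above concatenates
    var request = (FtpWebRequest)WebRequest.Create(getFolder);
    request.Method = WebRequestMethods.Ftp.ListDirectory;
    request.Credentials = _credentials;

    var items = new List<string>();
    using (var response = (FtpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        while (!reader.EndOfStream)
        {
            items.Add(reader.ReadLine());
        }
    }
    return items;
}

private void GetFile(string getFileAndPath, string putFileAndPath)
{
    var request = (FtpWebRequest)WebRequest.Create(getFileAndPath);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.UseBinary = true;
    request.Credentials = _credentials;

    using (var response = request.GetResponse())
    using (var ftpStream = response.GetResponseStream())
    using (var fileStream = File.Create(putFileAndPath))
    {
        ftpStream.CopyTo(fileStream);
    }
}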
For the FTP protocol, you can use the FtpWebRequest class from the .NET Framework, though it does not have any explicit support for recursive file operations (including downloads). You have to implement the recursion yourself:
List the remote directory
Iterate the entries, downloading files and recursing into subdirectories (listing them again, etc.)
The tricky part is telling files apart from subdirectories. There's no way to do that in a portable way with FtpWebRequest. FtpWebRequest unfortunately does not support the MLSD command, which is the only portable way to retrieve a directory listing with file attributes in the FTP protocol. See also Checking if object on FTP server is file or directory.
Your options are:
Do an operation on a file name that is certain to fail for files and succeed for directories (or vice versa), i.e. you can try to download the "name": if that succeeds, it's a file; if that fails, it's a directory. But this can become a performance problem when you have a large number of entries.
You may be lucky and in your specific case, you can tell a file from a directory by a file name (i.e. all your files have an extension, while subdirectories do not)
You use a long directory listing (LIST command = ListDirectoryDetails method) and try to parse a server-specific listing. Many FTP servers use *nix-style listing, where you identify a directory by the d at the very beginning of the entry. But many servers use a different format. The following example uses this approach (assuming the *nix format)
void DownloadFtpDirectory(
    string url, NetworkCredential credentials, string localPath)
{
    FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(url);
    listRequest.UsePassive = true;
    listRequest.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
    listRequest.Credentials = credentials;

    List<string> lines = new List<string>();

    using (WebResponse listResponse = listRequest.GetResponse())
    using (Stream listStream = listResponse.GetResponseStream())
    using (StreamReader listReader = new StreamReader(listStream))
    {
        while (!listReader.EndOfStream)
        {
            lines.Add(listReader.ReadLine());
        }
    }

    foreach (string line in lines)
    {
        string[] tokens =
            line.Split(new[] { ' ' }, 9, StringSplitOptions.RemoveEmptyEntries);
        string name = tokens[8];
        string permissions = tokens[0];

        string localFilePath = Path.Combine(localPath, name);
        string fileUrl = url + name;

        if (permissions[0] == 'd')
        {
            Directory.CreateDirectory(localFilePath);
            DownloadFtpDirectory(fileUrl + "/", credentials, localFilePath);
        }
        else
        {
            var downloadRequest = (FtpWebRequest)WebRequest.Create(fileUrl);
            downloadRequest.UsePassive = true;
            downloadRequest.UseBinary = true;
            downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
            downloadRequest.Credentials = credentials;

            var response = downloadRequest.GetResponse();
            using (Stream ftpStream = response.GetResponseStream())
            using (Stream fileStream = File.Create(localFilePath))
            {
                ftpStream.CopyTo(fileStream);
            }
        }
    }
}
The url must be like:
ftp://example.com/ or
ftp://example.com/path/
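A call might then look like this (host, credentials, and target folder are placeholders):
// Example invocation of the method above
NetworkCredential credentials = new NetworkCredential("user", "mypassword");
string url = "ftp://ftp.example.com/directory/to/download/";
DownloadFtpDirectory(url, credentials, @"C:\target\directory");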
Or use 3rd party library that supports recursive downloads.
For example with WinSCP .NET assembly you can download whole directory with a single call to Session.GetFiles:
// Setup session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Ftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
};

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    // Download files
    session.GetFiles("/home/user/*", @"d:\download\").Check();
}
Internally, WinSCP uses the MLSD command, if supported by the server. If not, it uses the LIST command and supports dozens of different listing formats.
(I'm the author of WinSCP)
You could use System.Net.WebClient.DownloadFile(), which supports FTP; details are on MSDN.
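A minimal sketch for a single file (host, credentials, and paths are placeholders); note that DownloadFile handles one file at a time, so the directory recursion still has to come from elsewhere:
// Sketch: single-file FTP download with WebClient
using (var client = new System.Net.WebClient())
{
    client.Credentials = new System.Net.NetworkCredential("user", "password");
    client.DownloadFile("ftp://example.com/remote/file.txt", @"c:\temp\file.txt");
}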
You can use FTPClient from laedit.net. It's under Apache license and easy to use.
It uses FtpWebRequest:
First, use WebRequestMethods.Ftp.ListDirectoryDetails to get the detailed listing of the folder.
Then, for each file, use WebRequestMethods.Ftp.DownloadFile to download it to a local folder.