I have a C# application that needs to populate a list of all the filenames in a specific document library within a particular SharePoint web environment.
Let's say the URL for the document library in question is "http://example.com/lib.aspx".
If I used Server.MapPath like so:
Directory.GetFiles(Server.MapPath("http://example.com/lib.aspx"), "*", SearchOption.TopDirectoryOnly);
This would effectively treat the document library as a physical path and successfully populate an array of filenames, correct?
I don't currently have the ability to test this, and I am wondering whether the operation would be valid; in other words, whether the filenames would (most likely) be retrieved successfully.
That won't work at all. The documents in the library are not located in the server's file system.
If you're enumerating all files in the library, then you can use the Items property of the library and for each item, use the File property to retrieve the SPFile associated with the item.
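For reference, a rough sketch of that approach using the SharePoint server object model (assuming the code runs on the SharePoint server itself; the site URL and the library title "Documents" are placeholders):

using System.Collections.Generic;
using Microsoft.SharePoint;

public static class LibraryFileLister
{
    public static List<string> GetFileNames()
    {
        var fileNames = new List<string>();

        // Placeholder site URL; point this at the web that contains the library.
        using (SPSite site = new SPSite("http://example.com"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList library = web.Lists["Documents"]; // the document library's title

            foreach (SPListItem item in library.Items)
            {
                if (item.File != null) // folders have no associated SPFile
                {
                    fileNames.Add(item.File.Name);
                }
            }
        }

        return fileNames;
    }
}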
I am currently creating an API (ASP.NET Core 2.2) which takes a JSON input, generates XML files based on the JSON, adds them to a folder structure, and then returns this folder structure zipped as the response. I am currently struggling to imagine how I am going to store the files once generated within Docker, and then how to add them to a zip file in a specific folder format to return to the requester.
I created a similar app (without using APIs) in WinForms, meaning I could just add folders and files to a temporary folder on the client's machine and then zip it all up locally, but I'm wondering whether this is possible in Docker too (or if there is a better way altogether?). Once the zip file has been returned to the requester there is no requirement to keep it stored anywhere either.
I have done some research into using file/memory streams but haven't come across anything particularly useful.
Any resources or recommendations would be greatly appreciated!
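One common way to handle this (just a sketch; the controller, route, and the GenerateXmlDocuments helper are all hypothetical) is to build the zip entirely in a MemoryStream and return it as a file result, so nothing has to be written to the container's file system or kept around afterwards:

using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[ApiController]
[Route("api/export")]
public class ExportController : ControllerBase
{
    [HttpPost]
    public IActionResult Export([FromBody] JObject input)
    {
        var zipStream = new MemoryStream();

        // leaveOpen: true so the MemoryStream survives disposing the archive,
        // which is what finalizes the zip's central directory.
        using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var (relativePath, xml) in GenerateXmlDocuments(input))
            {
                // The entry's relative path controls the folder layout inside the zip.
                ZipArchiveEntry entry = archive.CreateEntry(relativePath);
                using (var writer = new StreamWriter(entry.Open()))
                {
                    writer.Write(xml);
                }
            }
        }

        zipStream.Position = 0;

        // The framework disposes the stream once the response has been written.
        return File(zipStream, "application/zip", "export.zip");
    }

    // Hypothetical helper that turns the JSON into (relativePath, xmlContent) pairs.
    private static IEnumerable<(string relativePath, string xml)> GenerateXmlDocuments(JObject input)
    {
        yield return ("example/sample.xml", "<root />");
    }
}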
I am creating some unit tests for our application, and I'd like to store a serialized (XML) list of classes in a resource file to be used by some of our tests, as a mock. Some of these lists could contain hundreds or thousands of items, so an individual resource entry could be quite large. Is a C# Resource file the best approach for this? Can an individual resource in a resource file contain an item this large?
I've thought about adding the serialized lists directly to my project as a file, but then I'm not sure how I would access that file in my app without assuming a particular directory structure, which other developers may not have because their source control mappings are set differently.
Why do you want to use a Resource file? Do you have different versions of your list per culture that you want to test?
If not, you can just as well add your XML list as an XML file straight into your project. The maximum size of an XML file is whatever the operating system supports; even on Windows XP it can be up to 4 GB in length.
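To sidestep the directory-structure concern entirely, one option (a sketch; the resource name and the Customer type are placeholders) is to set the XML file's Build Action to Embedded Resource and load it through the assembly rather than from disk:

using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Xml.Serialization;

public static class TestData
{
    public static List<Customer> LoadCustomers()
    {
        // The default resource name is <default namespace>.<folder path>.<file name>.
        Assembly assembly = Assembly.GetExecutingAssembly();

        using (Stream stream = assembly.GetManifestResourceStream("MyTests.TestData.Customers.xml"))
        {
            // Customer stands in for whatever class the serialized list contains.
            var serializer = new XmlSerializer(typeof(List<Customer>));
            return (List<Customer>)serializer.Deserialize(stream);
        }
    }
}

Alternatively, setting the file's "Copy to Output Directory" property and reading it relative to AppDomain.CurrentDomain.BaseDirectory usually works too, since the file is then copied next to the test assembly regardless of where the source tree lives.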
I am writing an application that would download and replace a PDF file only if its timestamp is newer than that of the already existing one.
I know it's possible to read the timestamp of a file on a local computer via the code line below:
MessageBox.Show(File.GetCreationTime("C:\\test.pdf").ToString());
Is it possible to read the timestamp of a file that is online, without downloading it?
Unless the directory containing the file on the site is configured to show raw file listings, there's no way to get a timestamp for a file via HTTP. Even with raw listings you'd need to parse the HTML yourself to get at the timestamp.
If you had FTP access to the files then you could do this. If you're just using the basic FTP capabilities built into the .NET Framework, you'd still need to parse the directory listing to get at the date. However, there are third-party FTP libraries that fill in the gaps, such as editFTPnet, where you get an FTPFile class.
Updated:
Per comment:
"If I were to set up a simple HTML file with the dates and filenames written manually, I could simply read that to find out which files have actually been updated and download just the required files. Is that a feasible solution?"
That would be one approach. Or, if you have scripting available (ASP.NET, ASP, PHP, Perl, etc.), you could automate this and have the script get the timestamps of the file(s) and render them for you. Or you could write a very simple web service that returns a JSON or XML blob containing the timestamps for the files, which would be less hassle to parse than some HTML.
It's only possible if the web server explicitly serves that data to you. The creation date for a file is part of the file system. However, when you're downloading something over HTTP it's not part of a file system at that point.
HTTP doesn't have a concept of "files" in the way people generally think. Instead, what would otherwise be a "file" is transferred as response data with a response header that gives information about the data. The header can specify the type of the data (such as a PDF "file") and even specify a default name to use if the client decides to save the data as a file on the client's local file system.
However, even when saving that, it's a new file on the client's local file system. It has no knowledge of the original file which produced the data that was served by the web server.
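In that spirit, if the server does happen to send a Last-Modified header for the resource, a HEAD request lets you read it without downloading the body. A minimal sketch (the URL is a placeholder):

using System;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://example.com/files/report.pdf");
request.Method = "HEAD"; // ask for headers only, no response body

using (var response = (HttpWebResponse)request.GetResponse())
{
    // HttpWebResponse.LastModified falls back to the current time when the header
    // is missing, so check the raw header to see whether it was actually sent.
    string rawHeader = response.Headers[HttpResponseHeader.LastModified];

    if (rawHeader != null)
    {
        Console.WriteLine("Last-Modified: {0}", response.LastModified);
    }
    else
    {
        Console.WriteLine("The server did not report a Last-Modified date.");
    }
}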
I am building an interface whose primary function would be to act as a file renaming tool (the underlying task here is to manually classify each file within a folder according to rules that describe their content). So far, I have implemented a customized file explorer and a preview window for the files.
I now have to find a way to inform the user whether a file has already been renamed (this will show up in the file explorer's listView). The program should be able to read as well as modify that state as the files are renamed. I simply do not know what method is optimal for saving this kind of information, as I am not fully used to C#'s potential yet. My initial solution involved text files, but again, I do not know whether there should be only one text file for all files and folders, or a text file per folder indicating the state of its contained items.
A colleague suggested that I use an Excel spreadsheet and then simply import the rows or columns corresponding to my query. I tried to find more direct data structures, but again, I would feel a lot more comfortable with some outside opinion.
So, what do you think would be the best way to store this kind of data?
PS: There are many thousands of files, all of them TIFF images, located on a remote server to which I have complete access.
I'm not sure what you're asking for, but if you simply want to keep some of each file's information, such as name, date, and size, you could use the FileInfo class to read it. FileInfo itself can't be fed straight to XmlSerializer (it has no parameterless constructor), so in practice you'd copy the properties you need into a small class of your own and write an array of those to an XML file by invoking the Serialize method of an XmlSerializer.
I am not sure I understand your question, but what I gather is that you basically want to store metadata about each file. If that is the case, I can make two suggestions.
Store the metadata in a simple XML file, one XML file per folder if you have multiple folders; the XML file could be a hidden file. Your custom application can then load the file, if it exists, when you navigate to the folder and present the data to the user.
If you are using NTFS and you know this will always be the case, you can store the metadata for the file in a file stream. This is not a .NET stream, but an extra stream of data (an NTFS alternate data stream) that can be stored and moved around with each file without impacting the actual file's content. The nice thing about this is that no matter where you move the file, the metadata will move with it, as long as it stays on NTFS.
Here is more info on file streams:
http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
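A rough sketch of the first suggestion (the FileState/FolderState classes and the ".renamestate.xml" file name are hypothetical):

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class FileState
{
    public string FileName { get; set; }
    public bool Renamed { get; set; }
}

public class FolderState
{
    public List<FileState> Files { get; set; } = new List<FileState>();
}

public static class FolderStateStore
{
    private const string StateFileName = ".renamestate.xml";

    public static void Save(string folder, FolderState state)
    {
        string path = Path.Combine(folder, StateFileName);

        // Clear the hidden attribute first, otherwise overwriting the file fails.
        if (File.Exists(path))
            File.SetAttributes(path, FileAttributes.Normal);

        var serializer = new XmlSerializer(typeof(FolderState));
        using (var stream = File.Create(path))
            serializer.Serialize(stream, state);

        File.SetAttributes(path, FileAttributes.Hidden);
    }

    public static FolderState Load(string folder)
    {
        string path = Path.Combine(folder, StateFileName);
        if (!File.Exists(path))
            return new FolderState(); // nothing recorded for this folder yet

        var serializer = new XmlSerializer(typeof(FolderState));
        using (var stream = File.OpenRead(path))
            return (FolderState)serializer.Deserialize(stream);
    }
}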
You could create an object-oriented structure and then serialize the root object to a binary file or to an XML file. You could represent just about any structure this way, so you wouldn't have to struggle with the "one text file for all files and folders or a text file per folder" design issue. You would just have one file containing all of the metadata you need to store. If you want speedier opening/saving and smaller size, go with binary; if you want something that other people could open and view and potentially write their own software against, use XML.
There are lots of variations on how to do this, but to get you started, here is one article from a quick Google search:
http://www.codeproject.com/KB/cs/objserial.aspx
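As a sketch of the single-file variant (the class names are hypothetical; loading is the mirror image via Deserialize, and a binary formatter could be swapped in for a smaller, faster file at the cost of readability):

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Hypothetical root object: one entry per file, across every folder,
// so the whole classification state lives in a single file.
public class ClassificationStore
{
    public List<FileEntry> Entries { get; set; } = new List<FileEntry>();
}

public class FileEntry
{
    public string FullPath { get; set; }
    public bool Renamed { get; set; }
}

public static class ClassificationStoreIo
{
    public static void Save(ClassificationStore store, string path)
    {
        var serializer = new XmlSerializer(typeof(ClassificationStore));
        using (var stream = File.Create(path))
            serializer.Serialize(stream, store);
    }
}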
I am a newbie with Lucene.Net. I want to search for content in a folder which may contain all types of files (.txt, .xls, .pdf, .exe, .ppt, .doc, ...).
Suppose I search for some content: I want to list the file path and the matched content (it should be highlighted) inside the file, if any.
Any sample code would be appreciated.
Note: I want to use this result in a C# class library.
I haven't used it myself, but you should look into using SOLR. AFAIK you cannot host it on a .NET server, but you can connect to it from .NET using solrSHARP.
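I haven't verified this against a live instance, but as a sketch of the general shape of a Solr query from C# over plain HTTP (the host, port, and field name are placeholders; q, hl, hl.fl, and wt are standard Solr query parameters), in case a dedicated client library isn't an option:

using System;
using System.Net;

using (var client = new WebClient())
{
    // Standard Solr select handler with highlighting enabled on the "content" field.
    string url = "http://localhost:8983/solr/select" +
                 "?q=content:invoice&hl=true&hl.fl=content&wt=json";

    string json = client.DownloadString(url);
    Console.WriteLine(json); // parse with your preferred JSON library to get paths and highlighted snippets
}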