Open filestream from sharepoint file - c#

I want to open a filestream from a SharePoint file (Microsoft.SharePoint.Client.File), but I can't figure out how.
I only have access to Microsoft.SharePoint.Client, because the Microsoft.SharePoint package can't be installed due to some errors.
This is the code I have so far:
ClientContext ctx = new ClientContext("https://factionxyz0.sharepoint.com/sites/faktion-devs");
ctx.Credentials = CredentialCache.DefaultCredentials;
Microsoft.SharePoint.Client.File temp = ctx.Web.GetFileByServerRelativeUrl(filePath);
FileStream fs = new FileStream(???);

You can only create a System.IO.FileStream if the file exists on a physical disk (or is mapped to a disk by the operating system).
Workaround: are you able to access the raw URL of the file? If so, download the file to disk (if the size is reasonable) and then read it from there.
For example:
var httpClient = new HttpClient();
// HTTP GET request
var response = await httpClient.GetAsync(... SharePoint URL ...);
// Get the content stream
var stream = await response.Content.ReadAsStreamAsync();
// Create a temporary file
var tempFile = Path.GetTempFileName();
using (var fs = File.OpenWrite(tempFile))
{
    await stream.CopyToAsync(fs);
}
// tempFile now contains your file locally; you can access it like
var fileStream = File.OpenRead(tempFile);
// Make sure you delete the temporary file after using it
File.Delete(tempFile);

A FileStream must map to a file on disk. The following code demonstrates how to get a stream via CSOM; we can then convert it to a FileStream by going through a temp file.
ResourcePath filepath = ResourcePath.FromDecodedUrl(filename);
Microsoft.SharePoint.Client.File temp = context.Web.GetFileByServerRelativePath(filepath);
ClientResult<System.IO.Stream> crstream = temp.OpenBinaryStream();
context.Load(temp);
context.ExecuteQuery();
var tempFile = Path.GetTempFileName();
using (FileStream fs = System.IO.File.OpenWrite(tempFile))
{
    if (crstream.Value != null)
    {
        crstream.Value.CopyTo(fs);
    }
}
As for Azure Functions temp storage, you can refer to the following thread:
Azure Functions Temp storage
Or you can store the data in Azure Storage:
Upload data to blob storage with Azure Functions
Best Regards,
Baker Kong

It's been a while since the question was asked, but this is how I solved it while working on a project. Obviously passing the credentials directly like this isn't the best way, but due to time constraints I was not able to move this project to a newer version of .NET and use Azure AD.
Note that the class is implementing an interface.
public void SetServer(string domainName) {
    if (string.IsNullOrEmpty(domainName)) throw new Exception("Invalid domain name. Name cannot be null");
    _server = domainName.Trim('/').Trim('\\');
}

private string MapPath(string urlPath) {
    var url = string.Join("/", _server, urlPath);
    return url.Trim('/');
}

public ISharePointDocument GetDocument(string path, string fileName) {
    var serverPath = MapPath(path);
    var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
    var document = new SharePointDocument();
    var data = GetClientStream(path, fileName);
    using (var memoryStream = new MemoryStream()) {
        if (data == null) return document;
        data.CopyTo(memoryStream);
        var byteArray = memoryStream.ToArray();
        document = new SharePointDocument {
            FullPath = filePath,
            Bytes = byteArray
        };
    }
    return document;
}

public Stream GetClientStream(string path, string fileName) {
    var serverPath = MapPath(path);
    var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
    var context = GetClientContext(serverPath);
    var web = context.Web;
    context.Load(web);
    context.ExecuteQuery();
    var file = web.GetListItem(filePath).File;
    var data = file.OpenBinaryStream();
    context.Load(file);
    context.ExecuteQuery();
    return data.Value;
}

private static ClientContext GetClientContext(string serverPath) {
    var context = new ClientContext(serverPath) {
        Credentials = new SharePointOnlineCredentials("example@example.com", GetPassword())
    };
    return context;
}

private static SecureString GetPassword() {
    const string password = "XYZ";
    var securePassword = new SecureString();
    foreach (var c in password.ToCharArray()) securePassword.AppendChar(c);
    return securePassword;
}

Related

File path when uploading files through OneDrive SDK

I use the OneDrive SDK to upload files. You have to pass a file path, and uploading with the code below takes a long time.
Can I upload files by passing the temporary file path instead?
Currently I get the file path after saving the file to the server, and this causes a speed problem.
Is there any way to get at the temporary file path?
public async Task<JObject> UploadLargeFiles(string upn, IFormFile files)
{
    var folderName = Path.Combine("wwwroot", "saveLargeFiles");
    var pathToSave = Path.Combine(System.IO.Directory.GetCurrentDirectory(), folderName);
    var fullPath = "";
    if (files.Length > 0)
    {
        var fileName = files.FileName;
        fullPath = Path.Combine(pathToSave, fileName);
        using (var stream = new FileStream(fullPath, FileMode.Create))
        {
            files.CopyTo(stream);
        }
    }
    var fileStream = System.IO.File.OpenRead(fullPath);
    GraphServiceClient client = await MicrosoftGraphClient.GetGraphServiceClient();
    var uploadProps = new DriveItemUploadableProperties
    {
        ODataType = null,
        AdditionalData = new Dictionary<string, object>
        {
            { "@microsoft.graph.conflictBehavior", "rename" }
        }
    };
    var item = await this.SelectUploadFolderID(upn);
    var uploadSession = await client.Users[upn].Drive.Items[item]
        .ItemWithPath(files.FileName).CreateUploadSession(uploadProps).Request().PostAsync();
    int maxChunkSize = 320 * 1024;
    var uploadTask = new LargeFileUploadTask<DriveItem>(uploadSession, fileStream, maxChunkSize);
    var response = await uploadTask.UploadAsync();
    if (response.UploadSucceeded)
    {
        return JObject.FromObject(response.ItemResponse);
    }
    return null;
}
Your server's disk is probably not what makes this slow. By default, uploaded files are buffered to a temporary file anyway; your CopyTo call merely makes that copy permanent.
You can skip that step and call IFormFile.OpenReadStream() to obtain a stream over the temporary file, then pass that to the LargeFileUploadTask.
The point is, it's probably the upload to OneDrive itself that takes most of the time. Depending on your setup, you may want to save incoming files into a queue directory instead (the temp file gets deleted after the request completes), and have a background service read that queue and upload the files to OneDrive.
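The queue-directory idea can be sketched as follows (all names here are illustrative, not from the original project): copy the incoming stream into a uniquely named file in a queue folder, and let the background service pick it up later.

```csharp
using System;
using System.IO;

static class UploadQueue
{
    // Copy an incoming stream to a uniquely named file in the queue directory
    // and return the path that a background service should later pick up.
    public static string SaveToQueue(Stream source, string queueDir, string originalName)
    {
        Directory.CreateDirectory(queueDir);
        // Prefix with a GUID so concurrent uploads of the same file name don't collide
        var target = Path.Combine(queueDir, Guid.NewGuid().ToString("N") + "_" + originalName);
        using (var fs = new FileStream(target, FileMode.CreateNew))
        {
            source.CopyTo(fs);
        }
        return target;
    }
}
```

In the controller you would call something like `UploadQueue.SaveToQueue(files.OpenReadStream(), queueDir, files.FileName)` and return immediately, leaving the OneDrive upload to the background worker.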

Get absolute file path on the machine. ASP.NET Core

I am using the AWS S3 SDK and I want to upload files from my Web API to the bucket. It works perfectly if the provided filePath is of the form C:\User\Desktop\file.jpg, but if I use Path.GetFullPath(file.FileName) it looks for the .jpg file inside my project folder. How can I get the absolute path on the user's machine, not the path in the project folder?
public async Task UploadFileAsync(IFormFile file, string userId)
{
    var filePath = Path.GetFullPath(file.FileName);
    var bucketName = this.configuration.GetSection("Amazon")["BucketName"];
    var accessKey = this.configuration.GetSection("Amazon")["AWSAccessKey"];
    var secretKey = this.configuration.GetSection("Amazon")["AWSSecretKey"];
    var bucketRegion = RegionEndpoint.EUWest1;
    var s3Client = new AmazonS3Client(accessKey, secretKey, bucketRegion);
    try
    {
        var fileTransferUtility = new TransferUtility(s3Client);
        using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            await fileTransferUtility.UploadAsync(fileToUpload, bucketName, file.FileName);
        }
        await this.filesRepository.AddAsync(new FileBlob
        {
            Name = file.FileName,
            Extension = file.FileName.Split('.')[1],
            Size = file.Length,
            UserId = userId,
            UploadedOn = DateTime.UtcNow,
        });
        await this.filesRepository.SaveChangesAsync();
    }
    catch (AmazonS3Exception e)
    {
        Console.WriteLine(e.Message);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
The path I get from GetFullPath points inside the project folder, when I should be getting C:\Users\Pepi\Desktop\All\corgi.png.
I'm sure I'm missing a lot of things here, but for this to work I need the path to the file on the user's machine. If I try to work around it by uploading the file itself through a MemoryStream, S3 says access denied.
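For what it's worth, Path.GetFullPath(file.FileName) can never return the client's path: IFormFile.FileName is just the bare name the browser sent, and GetFullPath resolves a relative name against the server process's current directory. A small sketch of that behavior (paths are illustrative):

```csharp
using System;
using System.IO;

class PathDemo
{
    static void Main()
    {
        // A bare file name, as IFormFile.FileName typically provides
        string name = "corgi.png";
        // GetFullPath resolves it against the server's current directory,
        // not against anything on the client's machine.
        string resolved = Path.GetFullPath(name);
        Console.WriteLine(resolved == Path.Combine(Directory.GetCurrentDirectory(), name)); // True
    }
}
```

If the goal is simply to get the bytes into S3, TransferUtility also has stream overloads (for example UploadAsync(Stream, bucketName, key)), so you can pass file.OpenReadStream() and never touch the disk; an "access denied" from S3 in that case usually points at bucket or credential permissions rather than the stream itself.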

Getting a file from "file://A/"

How can I get a file from an external path like "file://A/B/C/D/"?
On my local machine I have access to the "file://" path, but the user does not.
I want to read some files from "file://A/B/C/D/" and make them downloadable for the user.
How can I do that?
(The current directory is "https://localhost:44331/".)
public async Task<IActionResult> DownloadDocument(string berichtsnummer)
{
    var constantPath = "file://A/B/C/D/";
    using (FileStream fileStream = System.IO.File.OpenRead(constantPath))
    {
        MemoryStream memStream = new MemoryStream();
        memStream.SetLength(fileStream.Length);
        fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
        return File(fileStream, "application/octet-stream");
    }
}
When I click the download link, I get this error:
"IOException: The syntax for filename, directory name, or volume label is incorrect"
(A screenshot showed a view of the path "file://A/B/C/D/".)
A local file path is not "file://". On the local machine you can read the file normally, using a local path such as
var path = "C:\\...";
and then send the content to the client browser.
If the file is not on the local machine, the only way is to access it over a network share. You can then use UNC paths, like
var path = @"\\Server\Path\...";
It's important to change constantPath to "\\\\A\\B\\C\\D\\" (that is, the UNC path \\A\B\C\D\):
private string[] GetListOfDocumentLink()
{
    string constantPath = "\\\\A\\B\\C\\D\\";
    string folderName = string.Empty;
    string year = string.Empty;
    // determine folderName and year.
    string path = Path.Combine(constantPath, folderName, year);
    var filter = Berichtsnummer + "*.pdf";
    string[] allFiles = Directory.GetFiles(path, filter);
    return allFiles;
}
Now you can send the path to DownloadDocument method:
public async Task<IActionResult> DownloadDocument(string path)
{
    byte[] berichtData;
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (BinaryReader br = new BinaryReader(fs))
    {
        berichtData = br.ReadBytes((int)fs.Length);
    }
    return File(berichtData, MimeTypeHelper.GetMimeType("pdf"));
}

How to upload the Stream from an HttpContent result to Azure File Storage

I am attempting to download a list of files from URLs stored in my database and then upload them to my Azure File Storage account. I can download the files successfully and either save them to local storage or convert them to text and upload that. However, I lose data when converting something like a PDF to text, and I do not want to store the files on the Azure app hosting this endpoint, since I do not need to manipulate them in any way.
I have attempted to upload the files from the Stream I get from the HttpContent object, using the UploadFromStream method on the CloudFile. Whenever this runs I get an InvalidOperationException with the message "Operation is not valid due to the current state of the object."
I've also tried converting the original Stream to a MemoryStream, but this just writes a blank file to the File Storage account, even if I set the position to the beginning of the MemoryStream. My code is below; if anyone could point out what I am missing to make this work, I would appreciate it.
public DownloadFileResponse DownloadFile(FileLink fileLink)
{
    string fileName = string.Format("{0}{1}{2}", fileLink.ExpectedFileName, ".", fileLink.ExpectedFileType);
    HttpStatusCode status;
    string hash = "";
    using (var client = new HttpClient())
    {
        client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
        client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
        var request = new HttpRequestMessage(HttpMethod.Get, fileLink.ExpectedURL);
        var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
        status = response.StatusCode;
        if (status == HttpStatusCode.OK)
        {
            var httpStream = response.Content.ReadAsStreamAsync().Result;
            fileStorage.WriteFile(fileLink.ExpectedFileType, fileName, httpStream);
            hash = HashGenerator.GetMD5HashFromStream(httpStream);
        }
    }
    return new DownloadFileResponse(status, fileName, hash);
}

public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
    var options = SetOptions();
    var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
    newFile.UploadFromStream(fileStream, options: options);
}

public FileRequestOptions SetOptions()
{
    FileRequestOptions options = new FileRequestOptions();
    options.ServerTimeout = TimeSpan.FromSeconds(10);
    options.RetryPolicy = new NoRetry();
    return options;
}

public CloudFile GetTargetCloudFile(string targetDirectory, string targetFilePath)
{
    if (!shareConnector.share.Exists())
    {
        throw new Exception("Cannot access Azure File Storage share");
    }
    CloudFileDirectory rootDirectory = shareConnector.share.GetRootDirectoryReference();
    CloudFileDirectory directory = rootDirectory.GetDirectoryReference(targetDirectory);
    if (!directory.Exists())
    {
        throw new Exception("Target Directory does not exist");
    }
    CloudFile newFile = directory.GetFileReference(targetFilePath);
    return newFile;
}
I had the same problem; the only way it worked for me was reading the incoming stream (in your case, httpStream in the DownloadFile(FileLink fileLink) method) into a byte array and using UploadFromByteArray(byte[] buffer, int index, int count) instead of UploadFromStream.
So your WriteFile method would look like:
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
    var options = SetOptions();
    var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
    const int bufferLength = 600; // buffer size to read from the stream; this size is just an example
    byte[] buffer = new byte[bufferLength];
    List<byte> byteArrayFile = new List<byte>(); // the whole file ends up here
    int count;
    try
    {
        while ((count = fileStream.Read(buffer, 0, bufferLength)) > 0)
        {
            // only add the bytes actually read, not the whole buffer
            byteArrayFile.AddRange(buffer.Take(count));
        }
        fileStream.Close();
    }
    catch (Exception)
    {
        throw; // you need to change this
    }
    newFile.UploadFromByteArray(byteArrayFile.ToArray(), 0, byteArrayFile.Count, options: options);
}
According to your description and code, I suggest you use Stream.CopyTo to copy the stream to a local MemoryStream first, then upload the MemoryStream to Azure File Storage.
For more details, refer to the code below. I just changed the DownloadFile method to test it:
HttpStatusCode status;
using (var client = new HttpClient())
{
    client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
    // client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
    // here I use my blob file to test it
    var request = new HttpRequestMessage(HttpMethod.Get, "https://xxxxxxxxxx.blob.core.windows.net/media/secondblobtest-eypt.txt");
    var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
    var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
    status = response.StatusCode;
    if (status == HttpStatusCode.OK)
    {
        MemoryStream ms = new MemoryStream();
        var httpStream = response.Content.ReadAsStreamAsync().Result;
        httpStream.CopyTo(ms);
        ms.Position = 0;
        WriteFile("aaa", "testaa", ms);
        // hash = HashGenerator.GetMD5HashFromStream(httpStream);
    }
}
I had a similar problem and found out that the UploadFromStream method only works with buffered streams. I was able to upload files to Azure Storage successfully by using a MemoryStream, but I don't think that's a very good solution: you use up memory by copying the whole file stream into memory before handing it to the Azure stream. What I came up with instead is writing directly to an Azure stream: use OpenWriteAsync to create the stream, then do a simple CopyToAsync from the source stream.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse( "YourAzureStorageConnectionString" );
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference( "YourShareName" );
CloudFileDirectory root = share.GetRootDirectoryReference();
CloudFile file = root.GetFileReference( "TheFileName" );
using (CloudFileStream fileWriteStream = await file.OpenWriteAsync( fileMetadata.FileSize, new AccessCondition(),
new FileRequestOptions { StoreFileContentMD5 = true },
new OperationContext() ))
{
await fileContent.CopyToAsync( fileWriteStream, 128 * 1024 );
}

File upload in asp.net c#

Hey guys, I'm using the "Bits on the Run" API. The following is the code of the upload API:
public string Upload(string uploadUrl, NameValueCollection args, string filePath)
{
    _queryString = args; // no required args
    WebClient client = createWebClient();
    _queryString["api_format"] = APIFormat ?? "xml"; // xml if not specified - normally set in required args routine
    queryStringToArgs();
    string callUrl = _apiURL + uploadUrl + "?" + _args;
    callUrl = uploadUrl + "?" + _args;
    try {
        byte[] response = client.UploadFile(callUrl, filePath);
        return Encoding.UTF8.GetString(response);
    } catch {
        return "";
    }
}
And below is my code to upload a file. I'm using a FileUpload control to get the full path of the file (but I'm not succeeding at that):
botr = new BotR.API.BotRAPI("key", "secret_code");
var response = doc.Descendants("link").FirstOrDefault();
string url = string.Format("{0}://{1}{2}", response.Element("protocol").Value, response.Element("address").Value, response.Element("path").Value);
// here I want the full path of the file - how can I achieve that?
string filePath = fileUpload.PostedFile.FileName; // "C://Documents and Settings//rkrishna//My Documents//Visual Studio 2008//Projects//BitsOnTheRun//BitsOnTheRun//rough_test.mp4";
col = new NameValueCollection();
FileStream fs = new FileStream(filePath, FileMode.Open);
col["file_size"] = fs.Length.ToString();
col["file_md5"] = BitConverter.ToString(HashAlgorithm.Create("MD5").ComputeHash(fs)).Replace("-", "").ToLower();
col["key"] = response.Element("query").Element("key").Value;
col["token"] = response.Element("query").Element("token").Value;
fs.Dispose();
string uploadResponse = botr.Upload(url, col, filePath);
I read in some forums that for security reasons you can't get the full path of a file from the client side. If that's true, how can I achieve the file upload in my scenario?
Yes, this is true: for security reasons you cannot get the full path on the client machine. What you can do is read the uploaded content from the stream the FileUpload control provides:
byte[] bytes = new byte[fileUpload.PostedFile.ContentLength];
Stream stream = fileUpload.PostedFile.InputStream;
stream.Read(bytes, 0, fileUpload.PostedFile.ContentLength);
Instead of creating your own FileStream, use the stream provided by the FileUpload control. Hope it helps.
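Building on that, the file_size and file_md5 values from the question can also be computed from the same InputStream, so the client path is never needed. A small sketch (the helper name is mine; note the stream must be rewound before it is read again for the actual upload):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class UploadHelper
{
    // Compute the size and lowercase hex MD5 of a stream, rewinding it afterwards
    // so the same stream can still be uploaded.
    public static (long Size, string Md5) Describe(Stream stream)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(stream);
            long size = stream.Length;
            stream.Position = 0; // rewind for the subsequent upload
            return (size, BitConverter.ToString(hash).Replace("-", "").ToLower());
        }
    }
}
```

With this, `UploadHelper.Describe(fileUpload.PostedFile.InputStream)` yields the values for col["file_size"] and col["file_md5"] without ever opening a FileStream on a client path.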