My company is working with an Excel file (.xlsx) hosted on SharePoint. For my application I need the content of the file as a byte array.
If the file were on my local machine, I would use System.IO.File.ReadAllBytes(path).
How can I do the same with a file hosted on a server? The URL is something like "https://sharepoint.com/excel.xlsx".
I tried new WebClient().DownloadData(url), but it returns something I can't use; it seems to be the bytes of some other response rather than the content of the file itself.
Any ideas?
Rather than WebClient, try HttpClient:
using (var client = new HttpClient())
using (HttpResponseMessage response = await client.GetAsync("https://sharepoint.com/excel.xlsx"))
{
    response.EnsureSuccessStatusCode(); // fail fast on a non-2xx status
    // The raw bytes of the workbook, equivalent to File.ReadAllBytes on a local copy
    byte[] fileContents = await response.Content.ReadAsByteArrayAsync();
}
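If the bytes come back as an HTML login or viewer page rather than the workbook, the request probably needs credentials. A minimal sketch, assuming the current Windows account has access to the SharePoint site:

var handler = new HttpClientHandler
{
    UseDefaultCredentials = true // assumption: the site accepts NTLM/Kerberos auth for the current user
};
using (var client = new HttpClient(handler))
using (var response = await client.GetAsync("https://sharepoint.com/excel.xlsx"))
{
    response.EnsureSuccessStatusCode();
    byte[] fileContents = await response.Content.ReadAsByteArrayAsync();
}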
I'm submitting a trusted URL path to my Web API and then trying to upload the image to Azure Storage. The URI I have to upload looks like blob:https://localhost/..., which points to an image stored locally. I need to read this stream, but I'm receiving an exception on the first line of this code:
"The URI prefix is not recognized."
// Throws "The URI prefix is not recognized." on this line:
var req = System.Net.WebRequest.Create("blob:https://localhost:5001/2b28e86c-fef1-482e-ae16-12466e6f729f");
using (var stream = req.GetResponse().GetResponseStream())
{
    containerClient.UploadBlob(image.guid.ToString(), stream);
}
I did a bunch more searching and discovered that blob: URLs are only resolvable inside the browser that created them, so the server can never fetch one. I ended up switching back to FormData.
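For reference, a minimal sketch of the FormData route on the server side (the controller, route, and field name are assumptions; assumes ASP.NET Core and Azure.Storage.Blobs):

using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class ImagesController : ControllerBase
{
    private readonly BlobContainerClient containerClient; // injected via DI elsewhere

    [HttpPost("upload")]
    public IActionResult Upload(IFormFile image) // bound from the FormData field "image"
    {
        // Stream the uploaded file straight into blob storage
        using (var stream = image.OpenReadStream())
        {
            containerClient.UploadBlob(System.Guid.NewGuid().ToString(), stream);
        }
        return Ok();
    }
}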
How do I get a list from another form and download the files on that list via WebClient, one by one?
So, I'm going to assume that the other list is a collection of URL endpoints which contain file content. Knowing this, we could do something like:
var urls = MyFormName.ItemsSource;
Or
var urls = MyFormName.ItemsSource.Cast<Type>();
With the URLs in hand, we can now download the data from each one:
using System.IO;
using System.Net;

var webClient = new WebClient();
foreach (var endpoint in urls)
{
    // Give each download its own target path, or every iteration will overwrite the last file.
    // myFolderLocation is the directory to save into.
    var target = Path.Combine(myFolderLocation, Path.GetFileName(new Uri(endpoint).LocalPath));
    webClient.DownloadFile(endpoint, target);
}
From there, you can grab all the URLs and download their content into a folder. If you want to read a downloaded file's contents back into a byte array, you can re-read the file using:
using (var file = File.OpenRead(filePath)) { }
If you want, you can also download directly to a byte array using:
byte[] response = new System.Net.WebClient().DownloadData(endpoint);
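Note that WebClient is marked obsolete as of .NET 6. With HttpClient the same loop looks roughly like this (a sketch, reusing the myFolderLocation placeholder and assuming the endpoints allow anonymous GET requests):

using System.Net.Http;

var httpClient = new HttpClient();
foreach (var endpoint in urls)
{
    // Download each endpoint straight into memory, then persist it
    byte[] data = await httpClient.GetByteArrayAsync(endpoint);
    var target = Path.Combine(myFolderLocation, Path.GetFileName(new Uri(endpoint).LocalPath));
    File.WriteAllBytes(target, data);
}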
I'm familiar with WinForms and WPF, but new to web development. One day I saw WebClient.UploadValues and decided to try it.
static void Main(string[] args)
{
using (var client = new WebClient())
{
var values = new NameValueCollection();
values["thing1"] = "hello";
values["thing2"] = "world";
//A single file that contains plain html
var response = client.UploadValues("D:\\page.html", values);
var responseString = Encoding.Default.GetString(response);
Console.WriteLine(responseString);
}
Console.ReadLine();
}
After running it, nothing was printed, and the HTML file's content became this:
thing1=hello&thing2=world
Could anyone explain this? Thanks!
The UploadValues method is intended to be used with the HTTP protocol. This means that you need to host your HTML on a web server and make the request like this:
var response = client.UploadValues("http://some_server/page.html", values);
In this case the method will send the values to the server using application/x-www-form-urlencoded encoding, and it will return the response from the HTTP request.
I have never used UploadValues with a local file, and the documentation doesn't seem to mention it; only the HTTP and FTP protocols are covered. So I suppose this is a side effect of using it with a local file: it simply overwrites the file's contents with the payload being sent.
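To see what UploadValues actually transmits, you can point it at a tiny local listener. A minimal sketch using HttpListener (the port is arbitrary); run it, then call client.UploadValues("http://localhost:8080/", values):

using System;
using System.IO;
using System.Net;
using System.Text;

class EchoServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        // Block until UploadValues posts, then dump the form-encoded body
        var context = listener.GetContext();
        using (var reader = new StreamReader(context.Request.InputStream))
            Console.WriteLine(reader.ReadToEnd()); // prints: thing1=hello&thing2=world

        byte[] reply = Encoding.UTF8.GetBytes("received");
        context.Response.OutputStream.Write(reply, 0, reply.Length);
        context.Response.Close();
        listener.Stop();
    }
}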
You are not using WebClient as it was intended.
The purpose of WebClient.UploadValues is to upload the specified name/value collection to the resource identified by the specified URI.
That resource should not be a local file on your disk; it should be a web service listening for requests and issuing responses.
I have a self-hosted Web API using OWIN and Katana. I would like to send files (which can be pretty large, a few hundred MB) from a sample client and save them to the server's disk. Currently I'm just testing the server on my local machine.
I have the following on the test client's machine (it says image here, but it's not always going to be an image):
using System;
using System.IO;
using System.Net.Http;
class Program
{
// These must be static, since they are read from the static methods below
static string port = "1234";
static string fileName = "whatever file I choose will be here";
static void Main(string[] args)
{
string baseAddress = "http://localhost:" + port;
InitiateClient(baseAddress);
}
static void InitiateClient(string serverBase)
{
Uri serverUri = new Uri(serverBase);
using(HttpClient client = new HttpClient())
{
client.BaseAddress = serverUri;
HttpResponseMessage response = SendImage(client, fileName);
Console.WriteLine(response);
Console.ReadLine();
}
}
private static HttpResponseMessage SendImage(HttpClient client, string imageName)
{
using (var content = new MultipartFormDataContent())
{
byte[] imageBytes = System.IO.File.ReadAllBytes(imageName);
content.Add(new StreamContent(new MemoryStream(imageBytes)), "File", "samplepic.png");
// Note: Accept describes the desired *response* type; MultipartFormDataContent
// already sets the request's Content-Type to multipart/form-data
client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("multipart/form-data"));
return client.PostAsync("api/ServiceA", content).Result;
}
}
First, is this the right way of sending a file using POST?
And now here is where I'm really lost. I am not sure how to save the file I receive in the Post method of my ServiceAController, which inherits ApiController. I saw some other examples that used HttpContext.Current, but since it's self-hosted, that seems to be null.
I would split the files into chunks before uploading. 100 MB is a bit large for a single HTTP POST request, and most web servers also impose a limit on HTTP request size.
By using chunks, you won't need to resend all the data if the connection times out.
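A minimal sketch of the chunking idea on the client (the chunk size and the chunk endpoint are assumptions; the server would need a matching action that appends chunk `index` to the target file):

using System;
using System.IO;
using System.Net.Http;

const int chunkSize = 4 * 1024 * 1024; // 4 MB per request (assumption)
using (var file = File.OpenRead(fileName))
using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:1234") })
{
    var buffer = new byte[chunkSize];
    int read, index = 0;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        var content = new ByteArrayContent(buffer, 0, read);
        // Hypothetical endpoint that appends chunk `index` to the file on disk
        var response = client.PostAsync("api/ServiceA/chunk/" + index++, content).Result;
        response.EnsureSuccessStatusCode();
    }
}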
It should not matter whether you use self hosting or IIS, and whether it is an image file or any type of file.
You can check my answer, which gives you simple code to do so:
https://stackoverflow.com/a/10765972/71924
Regarding size, it is definitely better to chunk if possible, but this creates more work for your clients (unless you also own the API client code) and for you on the server, since you must reassemble the file.
It also depends on whether all files will be over 100 MB or only a few. If they are consistently large, I would suggest looking into HTTP byte-range support. It is part of the HTTP standard, and I am sure you can find somebody who has implemented it with Web API.
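To address the second part of the question: in a self-hosted Web API you can read the multipart content without HttpContext.Current. A minimal sketch (the target folder and file-name fallback are assumptions; for very large files a streaming provider would be preferable to buffering in memory):

using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ServiceAController : ApiController
{
    public async Task<HttpResponseMessage> Post()
    {
        if (!Request.Content.IsMimeMultipartContent())
            return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);

        // Buffer each part in memory, then write it to disk
        var provider = await Request.Content.ReadAsMultipartAsync(new MultipartMemoryStreamProvider());
        foreach (var part in provider.Contents)
        {
            var name = part.Headers.ContentDisposition.FileName ?? "upload.bin";
            var bytes = await part.ReadAsByteArrayAsync();
            File.WriteAllBytes(Path.Combine(@"C:\Uploads", name.Trim('"')), bytes);
        }
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}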
I need to download a CAB file from a URL into a stream.
using (WebClient client = new WebClient())
{
client.Credentials = CredentialCache.DefaultCredentials;
byte[] fileContents = client.DownloadData("http://localhost/sites/hfsc/FormServerTemplates/HfscInspectionForm.xsn");
using (MemoryStream ms = new MemoryStream(fileContents))
{
FormTemplate = formExtractor.ExtractFormTemplateComponent(ms, "template.xml");
}
}
This is fairly straightforward; however, my cab extractor (CabLib) is throwing an exception that it's not a valid cabinet.
I was previously using a SharePoint call to get the byte stream, which returned 30942 bytes, and that stream worked correctly with CabLib. The stream I get through WebClient is only 28087 bytes.
I have noticed that the response's Content-Type header comes back as text/html; charset=utf-8.
I'm not sure why, but I think that is what's affecting the data I get back.
I believe the problem is that SharePoint is passing the .xsn to the Forms Server to render as an InfoPath form in HTML for you. You need to stop this from happening, which you can do by adding some query string parameters to the URL request.
These can be found at:
http://msdn.microsoft.com/en-us/library/ms772417.aspx
I suggest you use NoRedirect=true.
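Applied to the code above, that would look something like this (the query string is the only change):

byte[] fileContents = client.DownloadData(
    "http://localhost/sites/hfsc/FormServerTemplates/HfscInspectionForm.xsn?NoRedirect=true");

If that works, the response should be the raw cabinet bytes rather than an HTML rendering of the form.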