Upload large file using dropnet - c#

I'm using DropNet to upload files to Dropbox. Everything works well, but only for small files. The following is the code I'm using to send:
private void btnEnviar_Click(object sender, EventArgs e)
{
    var _client = new DropNetClient("xxxxxxxxxxxxx", "xxxxxxxxxxxxxx", "xxxxxxxxxxxxx", "xxxxxxxxxxxxxxxx");
    _client.UseSandbox = true;
    string arq = "";
    string path = "";
    foreach (DataGridViewRow dr in dgvArquivos.Rows)
    {
        if (dr.Cells[0].Value != null)
        {
            arq = dr.Cells[3].Value.ToString();
            path = "//server/documentos/Scanner_/exames";
            try
            {
                var fileInfo = new FileInfo(path + "/" + arq);
                byte[] content = _client.GetFileContentFromFS(fileInfo);
                var result = _client.UploadFile("/exames", arq, content);
                this.lblMsg.Text = result.ToString();
                dr.Cells[4].Value = "17/12/2014";
            }
            catch (Exception ex)
            {
                this.lblMsg.Text = ex.Message;
            }
        }
    }
}
How do I send files larger than about 50 MB?

Depending on the actual error you're getting, the answer might change, i.e., whether it's an out-of-memory error or an HTTP error from the API.
DropNet does have support for chunked uploading of files, which you might want to look at. Documentation for it is a little lacking at the moment, but looking at the source should tell you how to use it: https://github.com/DropNet/DropNet/blob/master/DropNet/Client/Files.Sync.cs#L224
Call StartChunkedUpload to start the upload, call AppendChunkedUpload to append more bytes to the upload, then call CommitChunkedUpload to complete it. Use this with a streaming read of the file if possible to reduce memory usage.
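For illustration, here is a rough sketch of that flow. The helper name, the 1 MB chunk size, and the exact shapes of the chunked-upload methods are assumptions; verify them against the Files.Sync.cs source linked above before relying on this.
using System;
using System.IO;
using DropNet;

// Sketch only: chunked upload with DropNet, streaming the file from disk
// so the whole file never has to sit in memory at once.
private void UploadLargeFile(DropNetClient client, string localPath, string remotePath)
{
    const int chunkSize = 1024 * 1024; // 1 MB per chunk
    using (var fs = File.OpenRead(localPath))
    {
        var buffer = new byte[chunkSize];
        int read = fs.Read(buffer, 0, buffer.Length);
        var firstChunk = new byte[read];
        Array.Copy(buffer, firstChunk, read);
        // The first chunk opens the upload session.
        var upload = client.StartChunkedUpload(firstChunk);
        // Append the remaining chunks one at a time.
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            var chunk = new byte[read];
            Array.Copy(buffer, chunk, read);
            upload = client.AppendChunkedUpload(upload, chunk);
        }
        // Commit assembles the chunks into the final file at remotePath.
        client.CommitChunkedUpload(upload, remotePath);
    }
}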

Related

C# How to send a file to webhook and get sent file's link?

Hello, I'm trying to upload a file to a link and I tried this:
private void buttonInput_Click(object sender, EventArgs e)
{
    try
    {
        using (WebClient client = new WebClient())
        {
            var resStr = client.UploadFile(@"https://anonfiles.com", @"C:\Users\sadettin\desktop\test.txt");
            var jObjResult = JObject.Parse(Encoding.UTF8.GetString(resStr));
            var linkToFile = jObjResult["link"];
        }
    }
    catch (Exception err)
    {
        MessageBox.Show(err.Message);
    }
}
But I'm getting a 404 error.
Now I want to send a txt file to my Discord webhook address and get the sent file's link.
How can I do this?
Despite your claims, using the correct endpoint and a non-zero-byte file does lead to an uploaded file:
using (WebClient client = new WebClient())
{
    var resStr = client.UploadFile(@"https://api.anonfiles.com/upload", @"C:\tmp\test.txt");
    var jObjResult = JObject.Parse(Encoding.UTF8.GetString(resStr));
    var linkToFile = jObjResult["data"]["file"]["url"]["full"].ToString();
    MessageBox.Show(linkToFile);
}
Do note that the returned JSON structure is different from what your code handles. The URL is found in an attribute full under the path /data/file/url, hence this line in my code example:
var linkToFile = jObjResult["data"]["file"]["url"]["full"];
Here is one of the full URLs that the service returned for my test file:
https://anonfiles.com/nai0Z3S0x5/test_txt
The file is 106 bytes in total.
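As for the Discord part of the question: a webhook accepts a file as multipart form data, and the message it creates carries the attachment URL. The sketch below is untested; the form field name ("file") and the ?wait=true query string (which makes Discord return the created message JSON instead of an empty response) should be verified against the current Discord webhook documentation. The URL and path are placeholders.
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Sketch: post a file to a Discord webhook and read back the attachment URL.
public static async Task<string> SendToWebhookAsync(string webhookUrl, string filePath)
{
    using (var client = new HttpClient())
    using (var form = new MultipartFormDataContent())
    {
        var bytes = System.IO.File.ReadAllBytes(filePath);
        form.Add(new ByteArrayContent(bytes), "file", System.IO.Path.GetFileName(filePath));
        // "?wait=true" asks Discord to return the created message.
        var response = await client.PostAsync(webhookUrl + "?wait=true", form);
        response.EnsureSuccessStatusCode();
        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        // The created message carries the uploaded file under "attachments".
        return (string)json["attachments"][0]["url"];
    }
}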

Upload file to Rest API in WindowsForms C#

I am able to upload files to an API, but I need a little help. Right now the path is hard-coded, but in practice I will have PDF and XML files in two different local storage locations; I need to get the files from those locations and upload them to the API. Can anyone help me achieve this?
private void btnsubmit_Click(object sender, EventArgs e)
{
    UploadFileAsync(@"D:\test\SBP-1102.pdf");
}

public static async Task UploadFileAsync(string path)
{
    HttpClient client = new HttpClient();
    // We need to send a request with multipart/form-data.
    var multiForm = new MultipartFormDataContent();
    // Add the file and directly upload it.
    FileStream fs = File.OpenRead(path);
    multiForm.Add(new StreamContent(fs), "files", Path.GetFileName(path));
    // Send the request to the API.
    var url = "https://spaysaas-dev/api/getOCRDocuments";
    var response = await client.PostAsync(url, multiForm);
    if (response.IsSuccessStatusCode)
    {
        MessageBox.Show("Success");
    }
    else
    {
        MessageBox.Show(response.ToString());
    }
}
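Since UploadFileAsync already works for a single file, one way to handle the two storage locations is simply to enumerate both folders and reuse it. A minimal sketch; the directory paths are placeholders for the real locations:
using System.IO;
using System.Linq;

// Sketch: pick up the PDFs and XMLs from their two folders and upload each one.
private async void btnsubmit_Click(object sender, EventArgs e)
{
    var pdfFiles = Directory.GetFiles(@"D:\storage\pdf", "*.pdf");
    var xmlFiles = Directory.GetFiles(@"D:\storage\xml", "*.xml");
    foreach (var file in pdfFiles.Concat(xmlFiles))
    {
        await UploadFileAsync(file);
    }
}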
This answer is incomplete in that it doesn't actually explain why the file isn't being uploaded, but it might help you diagnose the problem.
The documentation on WebClient.UploadFileAsync says:
The file is sent asynchronously using thread resources that are automatically allocated from the thread pool. To receive notification when the file upload completes, add an event handler to the UploadFileCompleted event.
So you could try handling WebClient.UploadFileCompleted and checking the UploadFileCompletedEventArgs for errors.
private void Upload(string fileName)
{
    var client = new WebClient();
    client.UploadFileCompleted += Client_UploadFileCompleted;
    try
    {
        var uri = new Uri("https://saas-dev/api/getDocs");
        client.Headers.Add("fileName", System.IO.Path.GetFileName(fileName));
        client.UploadFileAsync(uri, fileName);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void Client_UploadFileCompleted(object sender, UploadFileCompletedEventArgs e)
{
    // Check e.Error for errors.
}
I am able to upload the PDF file to an API using multipart form data.

Mass file download - Drive API .NET

Question:
How can I tell my backup tool to download all the files it recorded in fileIds?
The method I'm using is the C#/.NET one from https://developers.google.com/drive/v3/web/manage-downloads#examples
I'll spare the boring details: part of my program logs in once as each user (well, using the Apps Service API), grabs all their files' fileIds, and records them to a flat text file. My program then opens that flat text file and begins downloading each fileId recorded for that user. The problem is that it's soooo slow, because it opens a new connection for a file, waits for the file to finish, then gets a new fileId and starts the whole process over again. It's not very efficient.
Google's example, which I copied pretty much verbatim (I modified the vars a little bit by immediately grabbing and exporting their mimetype, so the first three lines are moot):
var fileId = "0BwwA4oUTeiV1UVNwOHItT0xfa2M";
var request = driveService.Files.Get(fileId);
var stream = new System.IO.MemoryStream();
// Add a handler which will be notified on progress changes.
// It will notify on each chunk download and when the
// download is completed or failed.
request.MediaDownloader.ProgressChanged +=
    (IDownloadProgress progress) =>
    {
        switch (progress.Status)
        {
            case DownloadStatus.Downloading:
                Console.WriteLine(progress.BytesDownloaded);
                break;
            case DownloadStatus.Completed:
                Console.WriteLine("Download complete.");
                break;
            case DownloadStatus.Failed:
                Console.WriteLine("Download failed.");
                break;
        }
    };
request.Download(stream);
Is there any way I can streamline this so that my program downloads all the files it knows about for the user in one big handshake, instead of reading a fileId, opening a session, exporting, downloading, closing, and then doing the exact same thing for the next file? Hope this makes sense.
Thank you for any help ahead of time!
--Mike
---EDIT---
I wanted to add more details so that hopefully what I'm looking to do makes more sense:
So what's happening in the following code is: I am creating a "request" that lets me export the file (whose ID I have from the flat text file as fileId[0]) with the mimetype (which is in the array as fileId[1]).
What's killing the speed of the program is having to build the BuildService request each time for each file.
foreach (var file in deltafiles)
{
    try
    {
        if (bgW.CancellationPending)
        {
            stripLabel.Text = "Backup canceled!";
            e.Cancel = true;
            break;
        }
        DateTime dt = DateTime.Now;
        string[] foldervalues = File.ReadAllLines(savelocation + "folderlog.txt");
        cnttototal++;
        bgW.ReportProgress(cnttototal);
        // Our file is a CSV. Column 1 = file ID, Column 2 = file name.
        var values = file.Split(',');
        string fileId = values[0];
        string fileName = values[1];
        string mimetype = values[2];
        mimetype = mimetype.Replace(",", "_");
        string folder = values[3];
        int foundmatch = 0;
        int folderfilelen = foldervalues.Count();
        fileName = fileName.Replace('\\', '_').Replace('/', '_').Replace(':', '_').Replace('!', '_').Replace('\'', '_').Replace('*', '_').Replace('#', '_').Replace('[', '_').Replace(']', '_');
        var request = CreateService.BuildService(user).Files.Export(fileId, mimetype);
        // Default extension for files. Not sure what this should be, so we'll null it for now.
        string ext = null;
        // Things get sloppy here. The reason we're checking MIME types
        // is because we have to export the files from Google's format
        // to a format that is readable by a desktop computer program.
        // So for example, the google-apps.spreadsheet will become an MS Excel file.
        if (mimetype == mimeSheet || mimetype == mimeSheetRitz || mimetype == mimeSheetml)
        {
            request = CreateService.BuildService(user).Files.Export(fileId, exportSheet);
            ext = ".xls";
        }
        if (mimetype == mimeDoc || mimetype == mimeDocKix || mimetype == mimeDocWord)
        {
            request = CreateService.BuildService(user).Files.Export(fileId, exportDoc);
            ext = ".docx";
        }
        if (mimetype == mimePres || mimetype == mimePresPunch)
        {
            request = CreateService.BuildService(user).Files.Export(fileId, exportPres);
            ext = ".ppt";
        }
        if (mimetype == mimeForm || mimetype == mimeFormfb || mimetype == mimeFormDrawing)
        {
            request = CreateService.BuildService(user).Files.Export(fileId, exportForm);
            ext = ".docx";
        }
        // Any other file type, assume we know what it is (which in our case will be a txt file),
        // apply the MIME type and carry on.
        string dest = Path.Combine(savelocation, fileName + ext);
        var stream = new System.IO.FileStream(dest, FileMode.Create, FileAccess.ReadWrite);
        int oops = 0;
        // Add a handler which will be notified on progress changes.
        // It will notify on each chunk download and when the
        // download is completed or failed.
        request.MediaDownloader.ProgressChanged +=
            (IDownloadProgress progress) =>
            {
                switch (progress.Status)
                {
                    case DownloadStatus.Downloading:
                        Console.WriteLine(progress.BytesDownloaded);
                        break;
                    case DownloadStatus.Completed:
                        Console.WriteLine("Download complete.");
                        break;
                    case DownloadStatus.Failed:
                        oops = 1;
                        logFile.WriteLine(fileName + " could not be downloaded. Possible Google draw/form OR bad name.\n");
                        break;
                }
            };
        request.Download(stream);
        stream.Close();
        stream.Dispose();
    }
    catch (Exception ex)
    {
        // (The catch block was cut off in the original post.)
        Console.WriteLine(ex.Message);
    }
}
Is there any way I could streamline this process so I don't have to build the drive service every time I want to download a file? The flat text file the program reads looks similar to:
FILEID,ACTUAL FILE NAME,MIMETYPE
So is there any way I could cut out the middle man and feed the request.Download method without constantly reminding the foreach statement to export the file type as a file-system-readable file? (Good grief, sorry, I know this sounds like a lot.)
Any pointers would be great!!
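One immediate improvement, whatever else the API allows: build the DriveService once, outside the loop, and reuse it for every request, instead of calling CreateService.BuildService(user) per file. A minimal sketch of that change, keeping the rest of the loop as-is (exportMimeType here is a placeholder for whichever export format the MIME-type checks select):
// Sketch: hoist the service out of the loop and reuse it for every export.
var service = CreateService.BuildService(user); // build once per user
foreach (var file in deltafiles)
{
    // ... parse fileId, fileName and mimetype from the CSV line as before ...
    var request = service.Files.Export(fileId, exportMimeType);
    // ... attach the ProgressChanged handler and call request.Download(stream) as before ...
}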
You might want to try this tutorial: Google Drive API with C# .net – Download. It uses much simpler code to download a file. Also, other factors such as an intermittent internet connection may affect the ETA of downloading the file.
Code sample:
/// <summary>
/// Download a file.
/// Documentation: https://developers.google.com/drive/v2/reference/files/get
/// </summary>
/// <param name="_service">a valid authenticated DriveService</param>
/// <param name="_fileResource">File resource of the file to download</param>
/// <param name="_saveTo">location of where to save the file, including the file name to save it as</param>
public static Boolean downloadFile(DriveService _service, File _fileResource, string _saveTo)
{
    if (!String.IsNullOrEmpty(_fileResource.DownloadUrl))
    {
        try
        {
            var x = _service.HttpClient.GetByteArrayAsync(_fileResource.DownloadUrl);
            byte[] arrBytes = x.Result;
            System.IO.File.WriteAllBytes(_saveTo, arrBytes);
            return true;
        }
        catch (Exception e)
        {
            Console.WriteLine("An error occurred: " + e.Message);
            return false;
        }
    }
    else
    {
        // The file doesn't have any content stored on Drive.
        return false;
    }
}
Using _service.HttpClient.GetByteArrayAsync, we can pass in the download URL of the file we would like to download. Once the file is downloaded, it's a simple matter of writing the bytes to disk.
Hope this helps!
This isn't an answer as much as it is a workaround, and even then it's only half the answer (for right now). I threw my gloves off and played dirty.
First, I updated the Google API NuGet packages in my VS project to the latest version available today. Then I went to https://github.com/google/google-api-dotnet-client, forked/cloned it, and changed the Google.Apis.Drive.v3.cs file (which compiles to google.apis.drive.v3.dll) so that the mimetype is no longer read-only (it now allows both get; and set;, when by default it only allowed get).
Because I already knew the MIME types, I can now force-assign the MIME type to the request and go on with my life, instead of having to build the client service and connect only to export a file type that I already know.
It's not pretty, and not how it should be done, but this was really bothering me!
Going back to @Mr.Rebot, I thank you again for your help and research! :-)

Convert PostScript to text file using Ghostscript - C#

My client wants me to do one task: whenever they print (Ctrl+P) from the browser, the printed contents should go automatically to the database, which is SQL.
Now, let me explain what I have tried to achieve this. PrinterPlusPlus, a third-party tool, adds a virtual printer and prints the files as PS to a temp directory; then I can read the contents of that PostScript file and save them to the database.
My real question is: is there anything with which I can convert these PostScript files to text, or read them and save the text to the database?
Or is there a better way to achieve this task?
Ghostscript is the alternative here, with awesome features for converting PostScript to text or PDF. But I am completely clueless about its documentation and how to execute its commands.
_viewer.Interpreter.RunFile("C:\\PrinterPlusPlus\\Temp\\ankit_SONY-VAIO_sony_20151227_185020_3.ps");
GhostscriptPngDevice dev = new GhostscriptPngDevice(GhostscriptPngDeviceType.Png16m);
dev.GraphicsAlphaBits = GhostscriptImageDeviceAlphaBits.V_4;
dev.TextAlphaBits = GhostscriptImageDeviceAlphaBits.V_4;
dev.ResolutionXY = new GhostscriptImageDeviceResolution(96, 96);
dev.InputFiles.Add(@"C:\PrinterPlusPlus\Temp\ankit_SONY-VAIO_sony_20151227_185020_3.ps");
dev.OutputPath = @"C:\PrinterPlusPlus\Temp\ankit_SONY-VAIO_sony_20151227_185020_3.txt";
dev.Process();
_preview.Activate();
I tried this, but it does not seem to be working as intended for getting ASCII text into the txt file.
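Note that GhostscriptPngDevice renders an image, no matter what extension the output path has. For plain-text output, Ghostscript has a txtwrite device; with Ghostscript.NET that might look like the sketch below. This is untested, and the input/output paths are placeholders:
using System.Collections.Generic;
using Ghostscript.NET.Processor;

// Sketch: extract plain text from a PostScript file via Ghostscript's txtwrite device.
using (GhostscriptProcessor processor = new GhostscriptProcessor())
{
    List<string> switches = new List<string>
    {
        "-empty",
        "-dQUIET",
        "-dBATCH",
        "-dNOPAUSE",
        "-sDEVICE=txtwrite",                              // text-extraction device
        @"-sOutputFile=C:\PrinterPlusPlus\Temp\out.txt",  // placeholder output path
        "-f",
        @"C:\PrinterPlusPlus\Temp\input.ps"               // placeholder input path
    };
    processor.StartProcessing(switches.ToArray(), null);
}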
I found Ghostscript a little confusing, but I found the solution from here:
string inputFile = @"E:\gss_test\test_postscript.ps";
GhostscriptPipedOutput gsPipedOutput = new GhostscriptPipedOutput();
// pipe handle format: %handle%hexvalue
string outputPipeHandle = "%handle%" + int.Parse(gsPipedOutput.ClientHandle).ToString("X2");
using (GhostscriptProcessor processor = new GhostscriptProcessor())
{
    List<string> switches = new List<string>();
    switches.Add("-empty");
    switches.Add("-dQUIET");
    switches.Add("-dSAFER");
    switches.Add("-dBATCH");
    switches.Add("-dNOPAUSE");
    switches.Add("-dNOPROMPT");
    switches.Add("-sDEVICE=pdfwrite");
    switches.Add("-o" + outputPipeHandle);
    switches.Add("-q");
    switches.Add("-f");
    switches.Add(inputFile);
    try
    {
        processor.StartProcessing(switches.ToArray(), null);
        byte[] rawDocumentData = gsPipedOutput.Data;
        //if (writeToDatabase)
        //{
        //    Database.ExecSP("add_document", rawDocumentData);
        //}
        //else if (writeToDisk)
        //{
        //    File.WriteAllBytes(@"E:\gss_test\output\test_piped_output.pdf", rawDocumentData);
        //}
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    finally
    {
        gsPipedOutput.Dispose();
        gsPipedOutput = null;
    }
}
This reads the PostScript files easily :)

Reversing a tsv/csv file or reading only last line using asp.net

I have been pretty much stuck on a problem for the last few days. I have a file located on a remote server that can be accessed using a userId and password. No problem in accessing it.
The problem is that I have around 150 such files, each of variable size: minimum 2 MB, maximum 3 MB.
I have to read them one by one and read the last row/line of data from each, which is what my current code does.
The main problem is that it takes too much time, since it reads each file from top to bottom.
public bool TEst(string ControlId, string FileName, long offset)
{
    // The serverUri parameter should use the ftp:// scheme.
    // It identifies the server file that is to be downloaded,
    // for example: ftp://contoso.com/someFile.txt.
    // The fileName parameter identifies the local file.
    // The serverUri parameter identifies the remote file.
    // The offset parameter specifies where in the server file to start reading data.
    Uri serverUri;
    String ftpserver = "ftp://xxx.xxx.xx.xxx/" + FileName;
    serverUri = new Uri(ftpserver);
    if (serverUri.Scheme != Uri.UriSchemeFtp)
    {
        return false;
    }
    // Get the object used to communicate with the server.
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverUri);
    request.Credentials = new NetworkCredential("test", "test");
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.ContentOffset = offset;
    FtpWebResponse response = null;
    try
    {
        response = (FtpWebResponse)request.GetResponse();
        // long Size = response.ContentLength;
    }
    catch (WebException e)
    {
        Console.WriteLine(e.Status);
        Console.WriteLine(e.Message);
        return false;
    }
    // Get the data stream from the response.
    Stream newFile = response.GetResponseStream();
    // Use a StreamReader to simplify reading the response data.
    StreamReader reader = new StreamReader(newFile);
    string newFileData = reader.ReadToEnd();
    // Pull the fields of interest out of the tab-separated data.
    string[] parser = newFileData.Split('\t');
    string strID = parser[parser.Length - 5];
    string strName = parser[parser.Length - 3];
    string strStatus = parser[parser.Length - 1];
    if (strStatus.Trim().ToLower() != "suspect")
    {
        HtmlTableCell control = (HtmlTableCell)this.FindControl(ControlId);
        control.InnerHtml = strName.Split('.')[0];
    }
    else
    {
        HtmlTableCell control = (HtmlTableCell)this.FindControl(ControlId);
        control.InnerHtml = "S";
    }
    // Cleanup.
    reader.Close();
    response.Close();
    //Console.WriteLine("Download restart - status: {0}", response.StatusDescription);
    return true;
}
Threading:
protected void Page_Load(object sender, EventArgs e)
{
    new Task(() => this.TEst("controlid1", "file1.tsv", 261454)).Start();
    new Task(() => this.TEst1("controlid2", "file2.tsv", 261454)).Start();
}
FTP is not capable of seeking within a file to read only the last few lines (reference: FTP Commands). You'll have to coordinate with the developers and owners of the remote FTP server and ask them to create an additional file containing the data you need.
For example, ask the owners of the remote FTP server to create, for each file, a [filename]_lastrow file that contains that file's last row. Your program would then operate on the [filename]_lastrow files. You'll probably be pleasantly surprised with an accommodating answer of "OK, we can do that for you".
If the FTP server can't be changed, ask for a database connection.
You can also download all your files in parallel and pop them into a queue for parsing as they complete, rather than doing this process synchronously. If the FTP server can handle more connections, use as many as is reasonable for the scenario. Parsing can be done in parallel too; a sketch follows below.
More reading: System.Threading.Tasks
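As a rough sketch of that idea, reusing the existing TEst method (the file names, offsets, and degree of parallelism are assumptions to tune for your server):
// Sketch: start all downloads concurrently and wait for them to finish.
// Requires System.Collections.Generic and System.Threading.Tasks.
var jobs = new List<Task>
{
    Task.Run(() => this.TEst("controlid1", "file1.tsv", 261454)),
    Task.Run(() => this.TEst("controlid2", "file2.tsv", 261454)),
    // ... one task per remaining file ...
};
Task.WaitAll(jobs.ToArray());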
It's kinda buried, but I placed a comment in your original answer. This SO question leads to this blog post, which has some awesome code you can draw from.
Rather than your while loop, you can skip directly to the end of the stream by using Seek. You then want to work your way backwards through the stream until you find the first newline. This post should give you everything you need to know:
Get last 10 lines of very large text file > 10GB
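As a rough illustration of that approach for a seekable stream (a local file here; note that an FTP response stream is not seekable, so you would apply this after downloading the tail, or use ContentOffset as described below):
using System.IO;
using System.Text;

// Sketch: read the last line of a file by scanning backwards from the end.
// Assumes single-byte (ASCII) text, as in a typical TSV.
static string ReadLastLine(string path)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        var sb = new StringBuilder();
        long pos = fs.Length - 1;
        while (pos >= 0)
        {
            fs.Seek(pos, SeekOrigin.Begin);
            int b = fs.ReadByte();
            if (b == '\n' && sb.Length > 0)
                break; // reached the newline just before the last line
            if (b != '\n' && b != '\r')
                sb.Insert(0, (char)b);
            pos--;
        }
        return sb.ToString();
    }
}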
FtpWebRequest includes the ContentOffset property. Find or choose a way to keep the offset of the last line (locally, or remotely, e.g. by uploading a 4-byte file to the FTP server). This is the fastest way to do it and the most optimal for network traffic.
More information about FtpWebRequest can be found on MSDN.
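If the offset of the last line isn't tracked anywhere, a variant of the same idea is to ask the server for the file size first and then request only the tail. A sketch; the tail size is an assumption and just has to exceed the longest possible last line:
using System;
using System.IO;
using System.Net;

// Sketch: download only the last tailBytes of a remote file over FTP.
static string DownloadTail(Uri serverUri, NetworkCredential creds, int tailBytes)
{
    // First request: ask the server for the file size.
    var sizeRequest = (FtpWebRequest)WebRequest.Create(serverUri);
    sizeRequest.Credentials = creds;
    sizeRequest.Method = WebRequestMethods.Ftp.GetFileSize;
    long size;
    using (var sizeResponse = (FtpWebResponse)sizeRequest.GetResponse())
    {
        size = sizeResponse.ContentLength;
    }
    // Second request: start the download near the end of the file.
    var request = (FtpWebRequest)WebRequest.Create(serverUri);
    request.Credentials = creds;
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.ContentOffset = Math.Max(0, size - tailBytes);
    using (var response = (FtpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd(); // split on '\n' to get the last line
    }
}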
