SSH.NET Upload Multiple files asynchronously throws an exception - c#

I am creating an application that will:

1. process a CSV file,
2. create a JObject for each record in the CSV file and save the JSON as a .txt file, and finally
3. upload all these JSON files to an SFTP server.

After looking around for a free library for the third point, I decided to use SSH.NET.
I have created the following class to perform the upload operation asynchronously:
public class JsonFtpClient : IJsonFtpClient
{
    private readonly string _sftpServerIp, _sftpUsername, _sftpPassword;

    public JsonFtpClient(string sftpServerIp, string sftpUsername, string sftpPassword)
    {
        _sftpServerIp = sftpServerIp;
        _sftpUsername = sftpUsername;
        _sftpPassword = sftpPassword;
    }

    public Task<string> UploadDocumentAsync(string sourceFilePath, string destinationFilePath)
    {
        return Task.Run(() =>
        {
            using (var client = new SftpClient(_sftpServerIp, _sftpUsername, _sftpPassword))
            {
                client.Connect();
                using (Stream stream = File.OpenRead(sourceFilePath))
                {
                    client.UploadFile(stream, destinationFilePath);
                }
                client.Disconnect();
            }
            return destinationFilePath;
        });
    }
}
The UploadDocumentAsync method returns a TPL Task so that I can call it to upload multiple files asynchronously.
I call this UploadDocumentAsync method from the following method which is in a different class:
private async Task<int> ProcessJsonObjects(List<JObject> jsons)
{
    var uploadTasks = new List<Task>();
    foreach (JObject jsonObj in jsons)
    {
        var fileName = string.Format("{0}{1}", Guid.NewGuid(), ".txt");
        // save the file to a temp location
        FileHelper.SaveTextIntoFile(AppSettings.ProcessedJsonMainFolder, fileName, jsonObj.ToString());
        // call the FTP client class and store the Task in a collection
        var uploadTask = _ftpClient.UploadDocumentAsync(
            Path.Combine(AppSettings.ProcessedJsonMainFolder, fileName),
            string.Format("/Files/{0}", fileName));
        uploadTasks.Add(uploadTask);
    }
    // wait for all files to be uploaded
    await Task.WhenAll(uploadTasks);
    return jsons.Count;
}
The CSV file results in thousands of JSON records, but I want to upload these in batches of 50. This ProcessJsonObjects method always receives a list of 50 JObjects at a time, which I want to upload asynchronously to the SFTP server. But I receive the following error on the client.Connect(); line of the UploadDocumentAsync method:
Session operation has timed out
Decreasing the batch size to 2 works fine but sometimes results in the following error:
Client not connected.
I need to be able to upload many files at the same time. Alternatively, tell me whether IIS or the SFTP server needs configuration for this type of operation, and what that configuration is.
What am I doing wrong? Your help is much appreciated.
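For reference, a common way to avoid opening fifty simultaneous SFTP connections (which is what typically exhausts a server's connection or session limit) is to connect once and cap concurrency with a SemaphoreSlim. The sketch below is not the poster's code; the limit of 4 is an arbitrary assumption, and whether a single SSH.NET session tolerates parallel transfers should be verified — a small pool of clients is an alternative.

```csharp
// Sketch: one shared SftpClient, concurrency capped by a semaphore.
// The limit (4) is an assumption; tune it against the server's session limits.
public class ThrottledJsonFtpClient
{
    private readonly SftpClient _client;
    private readonly SemaphoreSlim _throttle = new SemaphoreSlim(4);

    public ThrottledJsonFtpClient(string host, string username, string password)
    {
        _client = new SftpClient(host, username, password);
        _client.Connect(); // connect once, not once per file
    }

    public async Task<string> UploadDocumentAsync(string sourceFilePath, string destinationFilePath)
    {
        await _throttle.WaitAsync();
        try
        {
            using (Stream stream = File.OpenRead(sourceFilePath))
            {
                // Wrap the synchronous upload in Task.Run, as in the original code.
                await Task.Run(() => _client.UploadFile(stream, destinationFilePath));
            }
            return destinationFilePath;
        }
        finally
        {
            _throttle.Release();
        }
    }
}
```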

Related

Getting "Invalid path" when sending MemoryStream from Amazon S3 bucket to a remote server using SFTP in C#

I'm working on logic to retrieve a file from an Amazon S3 bucket and then send it to a remote server using SFTP.
However, I'm not able to send the file because I don't have a path value; that is, since my file is in S3, I don't have a path for it on my local machine.
This is my method to send a file - it's inside a class called SftpService:
public async Task SendFile(SftpRequest request, CancellationToken cancellationToken = default)
{
    // Connect to server
    SftpClient client = new SftpClient(request.Host, request.Port, request.Username, request.Password);
    client.Connect();
    if (!client.IsConnected)
    {
        throw new SftpClientException("SFTP Connection was not complete.");
    }

    // Create an object of File Stream and pass a temp file path.
    var tempFilePath = Path.GetTempFileName();
    FileStream fileStream = new FileStream(tempFilePath, FileMode.Open);

    // Copy the MemoryStream (from S3) to the FileStream
    request.RefundFile.CopyTo(fileStream);
    fileStream.Close();

    // Upload the file. [TBD] What's the path for the file? (it is a MemoryStream)
    client.UploadFile(fileStream, tempFilePath);

    // Dispose the object by calling dispose method of sftpClient once the file has uploaded.
    client.Dispose();
}
Now, I'm trying to perform a test on this method (using NUnit).
[SetUp]
public void Setup()
{
    _sut = new SftpService(); // class under test
}

[Test]
public async Task SftpService_ShouldSendFileToServer_Success()
{
    var request = MockSftpRequest();
    await _sut.SendFile(request);
}

private SftpRequest MockSftpRequest()
{
    var serverMock = new ServerOptions()
    {
        BaseAddress = "195.144.107.198",
        Username = "demo",
        Password = "password",
    };

    // Create a mock MemoryStream
    var s3FileMock = new MemoryStream(Encoding.UTF8.GetBytes("This is a mock of the s3 file."));

    SftpRequest request = new SftpRequest(serverMock)
    {
        RefundFile = s3FileMock,
        Port = 22,
    };
    return request;
}
When running the test, I'm getting this error:
Message: Renci.SshNet.Common.SftpPathNotFoundException : Invalid path.
So, my question is: how can I send a MemoryStream over SFTP, without a path?
You have both parameters of SftpClient.UploadFile wrong.
Your immediate issue is that you are passing a local path as the second parameter of SftpClient.UploadFile. Why? Obviously, one parameter must be for the local file/data and the other for the remote path. You are already passing the local data as the first argument, so SftpClient.UploadFile does not need the local path anymore. It needs the remote path.
And why are you saving the MemoryStream to a temporary file, only to open that file as another FileStream to pass to SftpClient.UploadFile? Pass the MemoryStream to SftpClient.UploadFile straight away:
client.UploadFile(request.RefundFile, "/remote/path/file");
Related question: Upload data from memory to SFTP server using SSH.NET
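Putting the answer together, the poster's SendFile method reduces to something like the following sketch. The remote directory /remote/path and the request.FileName property are assumptions for illustration; substitute whatever the real SftpRequest carries.

```csharp
// Sketch of the corrected method: the MemoryStream is uploaded directly,
// with no temporary file. "/remote/path" and request.FileName are placeholders.
public Task SendFile(SftpRequest request, CancellationToken cancellationToken = default)
{
    using (var client = new SftpClient(request.Host, request.Port, request.Username, request.Password))
    {
        client.Connect();

        // Rewind in case the stream has already been read, then upload:
        // first argument = local data, second argument = remote path.
        request.RefundFile.Position = 0;
        client.UploadFile(request.RefundFile, $"/remote/path/{request.FileName}");

        client.Disconnect();
    }
    return Task.CompletedTask;
}
```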

Return file from ADLS2 via Azure Function http endpoint

I am trying to build an HTTP-triggered Azure Function which accepts a file name, picks the file from an ADLS Gen2 location, and returns it to the consumer application. The file type doesn't matter; it just returns the requested file.
For example, the input will be like:
{
    "fileName": "test.mp4",
    "filePath": "container/directory1/directory2/directory3"
}
So it picks the file from that location and returns it to the requester.
Here is the code I wrote:
public async Task<byte[]> GetFileAsync(string fileName, string containerName, string filePath)
{
    try
    {
        if (_dataLakeServiceClient == null)
        {
            return new byte[] { };
        }

        _logger.LogInformation($"Retrieving {fileName} from {containerName}/{filePath}");

        DataLakeFileSystemClient fileSystemClient = _dataLakeServiceClient.GetFileSystemClient(containerName); // container name
        DataLakeDirectoryClient directoryClient = fileSystemClient.GetDirectoryClient(filePath); // immediate directory name or rest of the file path
        DataLakeFileClient dataLakeFileClient = directoryClient.GetFileClient(fileName); // file name

        var response = await dataLakeFileClient.ReadAsync().ConfigureAwait(false);
        if (response == null)
        {
            _logger.LogInformation($"Retrieved {fileName} file response is null from {containerName}/{filePath}");
            return new byte[] { };
        }
        else
        {
            _logger.LogInformation($"Successfully retrieved the file {fileName} from {containerName}/{filePath}");
            return TypeConverterHelper.StreamToByteArray(response.Value.Content);
        }
    }
    catch (Exception ex)
    {
        _logger.LogInformation($"Unable to retrieve the file due to: {ex.Message}");
        return new byte[] { };
    }
}
But I have found a little lag in the processing, and I guess it's because I am reading the whole file. So if it's a 100 MB file, it will take a good amount of time to read.
But my question is: do I really need to read the whole file? Instead, is there any way to return the file as a stream, directly from the source to the consumer? Or is there a better way to accomplish the task?
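For reference, the Azure.Storage.Files.DataLake SDK exposes DataLakeFileClient.OpenReadAsync, which returns a stream that pulls from the service on demand, so the function can hand the stream to the HTTP response instead of buffering a byte array. A sketch, assuming the same client fields as the code above; the caller is responsible for disposing the returned stream.

```csharp
// Sketch: return a lazily-read stream over the remote file rather than a byte[].
public async Task<Stream> GetFileStreamAsync(string fileName, string containerName, string filePath)
{
    DataLakeFileSystemClient fileSystemClient = _dataLakeServiceClient.GetFileSystemClient(containerName);
    DataLakeDirectoryClient directoryClient = fileSystemClient.GetDirectoryClient(filePath);
    DataLakeFileClient dataLakeFileClient = directoryClient.GetFileClient(fileName);

    // OpenReadAsync reads from the service as the consumer reads the stream,
    // so a 100 MB file is never held in memory all at once.
    return await dataLakeFileClient.OpenReadAsync().ConfigureAwait(false);
}
```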

C# Server downloads the file and transfers it to the user at the same time. Download fails [Google Drive]

I'm writing a program with ASP.NET Core.
The program downloads a file from Google Drive without storing it in memory or on disk and transfers it to the user.
The download function in the official Google Drive libraries does not return until the download has finished. For this reason, I send a plain GET request to the API, read the file as a stream, and return it to the user.
But after a certain size, downloading in programs like IDM or a browser results in an error.
In short, I want to make a program that uses the server as a bridge, and the transfer should not be interrupted.
[HttpGet]
[DisableRequestSizeLimit]
public async Task<FileStreamResult> Download([FromQuery(Name = "file")] string fileid)
{
    if (!string.IsNullOrEmpty(fileid))
    {
        var decoded = Base64.Base64Decode(fileid);
        var file = DriveAPI.service.Files.Get(decoded);
        file.SupportsAllDrives = true;
        file.SupportsTeamDrives = true;
        var fileinf = file.Execute();
        var filesize = fileinf.FileSize;

        var cli = new HttpClient(DriveAPI.service.HttpClient.MessageHandler);
        //var req = await cli.SendAsync(file.CreateRequest());
        var req = await cli.GetAsync($"https://www.googleapis.com/drive/v2/files/{decoded}?alt=media", HttpCompletionOption.ResponseHeadersRead);
        //var req = await DriveAPI.service.HttpClient.GetAsync($"https://www.googleapis.com/drive/v2/files/{decoded}?alt=media", HttpCompletionOption.ResponseHeadersRead);

        var contenttype = req.Content.Headers.ContentType.MediaType;
        if (contenttype == "application/json")
        {
            var message = JObject.Parse(req.Content.ReadAsStringAsync().Result).SelectToken("error.message");
            if (message.ToString() == "The download quota for this file has been exceeded")
            {
                // Translation: "The Google Drive daily download quota has been exceeded.
                // Please try again in 24-48 hours."
                throw new Exception("Google Drive Günlük İndirme Kotası Aşıldı. Lütfen 24-48 Saat Sonra Tekrar Deneyin.");
            }
            else
            {
                throw new Exception(message.ToString());
            }
        }
        else
        {
            return File(req.Content.ReadAsStream(), contenttype, fileinf.OriginalFilename, false);
        }
    }
    else
    {
        return null;
    }
}
Some errors are written to the log file when downloading:
Received an unexpected EOF or 0 bytes from the transport stream.
Unable to read data from the transport connection.
etc.
If the user is using IDM, the error is:
Server sent wrong answer on restart command
If the user is downloading from a browser, the error is:
Network Error
I started a 1.5 GB file download on an 8 Mbps connection; when the downloaded size reached approximately 900 MB, IDM stopped the download with the said error.
I have no idea what else to try other than returning a FileStreamResult in ASP.NET Core. How can I serve concurrent downloads like this without interruption?
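One detail worth checking here: IDM's "restart command" and browser resume both rely on HTTP Range requests, and the final false argument in the File(...) call above disables range processing. A hedged sketch of the change; note that serving ranges over a non-seekable upstream stream may additionally require forwarding the Range header to the Drive API, which is an assumption to verify.

```csharp
// enableRangeProcessing: true lets ASP.NET Core answer HTTP Range requests,
// which download managers and browsers use to resume interrupted downloads.
return File(req.Content.ReadAsStream(), contenttype, fileinf.OriginalFilename,
            enableRangeProcessing: true);
```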

asp.net web api how to create zip file from multiple server files and download the zip file in your machine

I am trying to get multiple files from the network, zip them into one file, and then open a window to allow the user to save the zipped file on his/her machine.
My current code zips the files and returns the result to the AngularJS client, but it doesn't initiate the download. From the client I make the GET call, which gets the object list and passes it to the downloadFiles function. I want to handle the download from the API, not from the client side.
// obj List contains all the files information that needs to be downloaded in zip
public HttpResponseMessage downloadFiles(List<obj> arr)
{
    using (ZipFile zip = new ZipFile())
    {
        foreach (var d in arr)
        {
            zip.AddEntry(d.FileName, d.URL);
        }
        return ZipContentResult(zip);
    }
}

protected HttpResponseMessage ZipContentResult(ZipFile zipFile)
{
    var pushStreamContent = new PushStreamContent((stream, content, context) =>
    {
        zipFile.Save(stream);
        stream.Close();
    }, "application/zip");

    return new HttpResponseMessage(HttpStatusCode.OK) { Content = pushStreamContent };
}
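Since the stated problem is that the browser never shows a save dialog, one hedged suggestion: a Content-Disposition: attachment header is usually what makes a browser prompt to save rather than render the response. A sketch of the amended helper; the file name "files.zip" is an assumption.

```csharp
// Sketch: same PushStreamContent, plus a Content-Disposition header so the
// browser treats the response as a file download. "files.zip" is a placeholder.
protected HttpResponseMessage ZipContentResult(ZipFile zipFile)
{
    var pushStreamContent = new PushStreamContent((stream, content, context) =>
    {
        zipFile.Save(stream);
        stream.Close();
    }, "application/zip");

    var response = new HttpResponseMessage(HttpStatusCode.OK) { Content = pushStreamContent };
    response.Content.Headers.ContentDisposition =
        new ContentDispositionHeaderValue("attachment") { FileName = "files.zip" };
    return response;
}
```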

When I use the .NET WebClient DownloadFileAsync I randomly get zero length files returned

I'm trying to download files from my FTP server, multiple at the same time. When I use DownloadFileAsync, random files are returned with a byte[] length of 0. I can 100% confirm the files exist on the server and have content, AND the FTP server (running FileZilla Server) isn't erroring and says the file has been transferred.
private async Task<IList<FtpDataResult>> DownloadFileAsync(FtpFileName ftpFileName)
{
    var address = new Uri(string.Format("ftp://{0}{1}", _server, ftpFileName.FullName));
    var webClient = new WebClient
    {
        Credentials = new NetworkCredential(_username, _password)
    };

    var bytes = await webClient.DownloadDataTaskAsync(address);
    using (var stream = new MemoryStream(bytes))
    {
        // extract the stream data (either files in a zip OR a file);
        return result;
    }
}
When I try this code, it's slower (of course) but all the files have content.
private async Task<IList<FtpDataResult>> DownloadFileAsync(FtpFileName ftpFileName)
{
    var address = new Uri(string.Format("ftp://{0}{1}", _server, ftpFileName.FullName));
    var webClient = new WebClient
    {
        Credentials = new NetworkCredential(_username, _password)
    };

    // NOTICE: I've removed the AWAIT and used a different method.
    var bytes = webClient.DownloadData(address);
    using (var stream = new MemoryStream(bytes))
    {
        // extract the stream data (either files in a zip OR a file);
        return result;
    }
}
Can anyone see what I'm doing wrong, please? Why would the DownloadFileAsync be randomly returning zero bytes?
Try out the FtpWebRequest/FtpWebResponse classes. You have more available to you for debugging purposes.
FtpWebRequest - http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest(v=vs.110).aspx
FtpWebResponse - http://msdn.microsoft.com/en-us/library/system.net.ftpwebresponse(v=vs.110).aspx
Take a look at http://netftp.codeplex.com/. It appears that almost all methods implement IAsyncResult. There isn't much documentation on how to get started, but I would assume it is similar to the synchronous FTP classes in the .NET Framework. You can install the NuGet package here: https://www.nuget.org/packages/System.Net.FtpClient/
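As a sketch of the FtpWebRequest approach suggested above (server, path, and credentials are placeholders), the status description it exposes is the main debugging advantage over WebClient:

```csharp
// Sketch: download one file with FtpWebRequest and surface the server's final
// reply for debugging. All connection details are placeholder assumptions.
private static byte[] DownloadViaFtpWebRequest(string server, string path, string user, string password)
{
    var request = (FtpWebRequest)WebRequest.Create(new Uri($"ftp://{server}{path}"));
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = new NetworkCredential(user, password);

    using (var response = (FtpWebResponse)request.GetResponse())
    using (var responseStream = response.GetResponseStream())
    using (var buffer = new MemoryStream())
    {
        responseStream.CopyTo(buffer);
        // StatusDescription carries the server's final control-channel reply,
        // e.g. "226 Transfer complete", which helps diagnose truncated files.
        Console.WriteLine(response.StatusDescription);
        return buffer.ToArray();
    }
}
```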
