I'm very new to C#, but not to programming. I'm trying to use SSH.NET to make an SSH connection using a private key, using the example here:
public ConnectionInfo CreateConnectionInfo()
{
const string privateKeyFilePath = @"C:\some\private\key.pem";
ConnectionInfo connectionInfo;
using (var stream = new FileStream(privateKeyFilePath, FileMode.Open, FileAccess.Read))
{
var privateKeyFile = new PrivateKeyFile(stream);
AuthenticationMethod authenticationMethod =
new PrivateKeyAuthenticationMethod("ubuntu", privateKeyFile);
connectionInfo = new ConnectionInfo(
"my.server.com",
"ubuntu",
authenticationMethod);
}
return connectionInfo;
}
I don't have the key stored in a file, it comes from another source and I have it in a string. I realize I could write it to a temporary file, then pass that filename, then trash the file, but I'd prefer not to have to write it out to disk if possible. I don't see anything in the documentation that lets me pass a string for this, only a filename. I've tried something like this to convert the string to a stream:
byte[] private_key = Encoding.UTF8.GetBytes(GetConfig("SSHPrivateKey"));
var private_key_stream = new PrivateKeyFile(new MemoryStream(private_key));
And then pass private_key_stream as the second parameter of PrivateKeyAuthenticationMethod. There seem to be no complaints from the compiler, but the method doesn't appear to actually be getting the key (SSH doesn't authenticate, and external attempts like PuTTY using this key to the server do work), so it looks like I'm missing something.
Any thoughts on how to accomplish this without writing out a temp file?
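For reference, a minimal sketch of the full in-memory approach described above, assuming the key string is a complete PEM whose line breaks survived storage (configuration stores sometimes mangle newlines, which makes the key unparseable); the host and user names are taken from the example:
using System.IO;
using System.Text;
using Renci.SshNet;

public ConnectionInfo CreateConnectionInfoFromString(string privateKeyText)
{
    byte[] keyBytes = Encoding.UTF8.GetBytes(privateKeyText);
    using (var keyStream = new MemoryStream(keyBytes))
    {
        // PrivateKeyFile accepts any Stream, so no temp file is needed
        var privateKeyFile = new PrivateKeyFile(keyStream);
        return new ConnectionInfo(
            "my.server.com",
            "ubuntu",
            new PrivateKeyAuthenticationMethod("ubuntu", privateKeyFile));
    }
}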
My class inherits from FluentFTP's FtpClient, and I have created a class like this. I need to create a function called Read in this class. The purpose of the Read function is to return the contents of a file I read from the FTP server, line by line, as a string; I'll process the returned string later. Is there a method for this in FluentFTP? If there is none, how should I create the function?
using FluentFTP;
public class CustomFtpClient : FtpClient
{
public CustomFtpClient(
string host, int port, string username, string password) :
base(host, port, username, password)
{
Client = new FtpClient(host, port, username, password);
Client.AutoConnect();
}
private FtpClient Client { get; }
public string ReadFile(string remoteFileName)
{
Client.BufferSize = 4 * 1024;
// ReadAllText does not exist on FluentFTP's FtpClient; this is the missing piece
return Client.ReadAllText(remoteFileName);
}
}
I can't write it like this, because this Client is an FTP client. In my previous code I derived from an SFTP client and wanted to use a similar snippet here, but there is no such method in FluentFTP. How should I perform the operation in the Read function?
In another file, I want to call it like this.
var customFtpClient = new CustomFtpClient(ftpurl, 21, ftpusername, ftppwd);
var listedfiles = customFtpClient.GetListing("inbound");
var onlyedifiles = listedfiles.Where(z =>
z.FullName.ToLower().Contains(".txt") || z.FullName.ToLower().Contains("940"))
.ToList();
foreach (var item in onlyedifiles)
{
//var filestr = customFtpClient.ReadFile(item.FullName);
}
To read a file into a string using FluentFTP, you can use FtpClient.Download (FtpClient.DownloadStream or FtpClient.DownloadBytes in upcoming versions), which writes the file contents to a Stream or a byte[] array. The following example uses the latter.
if (!client.Download(out byte[] bytes, "/remote/path/file.txt"))
{
throw new Exception("Cannot read file");
}
string contents = Encoding.UTF8.GetString(bytes);
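Applied to the CustomFtpClient from the question, a minimal sketch (assuming the same FluentFTP version as the snippet above, where Download(out byte[], string) is available, and UTF-8 file content; since the class already inherits FtpClient, no inner Client field is needed):
using System;
using System.Text;
using FluentFTP;

public class CustomFtpClient : FtpClient
{
    public CustomFtpClient(string host, int port, string username, string password)
        : base(host, port, username, password)
    {
        AutoConnect();
    }

    public string ReadFile(string remoteFileName)
    {
        // Download the remote file into memory, then decode it as text
        if (!Download(out byte[] bytes, remoteFileName))
            throw new Exception("Cannot read file: " + remoteFileName);
        return Encoding.UTF8.GetString(bytes);
    }
}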
I am using CouchDB as a content-management store for uploading files as binary data. There is no GridFS-like support as in MongoDB for uploading large files, so I need to upload files as chunks and then retrieve them as one file.
Here is my code:
public string InsertDataToCouchDb(string dbName, string id, string filename, byte[] image)
{
var connection = System.Configuration.ConfigurationManager.ConnectionStrings["CouchDb"].ConnectionString;
using (var db = new MyCouchClient(connection, dbName))
{
// HERE I NEED TO UPLOAD MY IMAGE BYTE[] AS CHUNKS
var artist = new couchdb
{
_id = id,
filename = filename,
Image = image
};
var response = db.Entities.PutAsync(artist);
return response.Result.Content._id;
}
}
public byte[] FetchDataFromCouchDb(string dbName, string id)
{
var connection = System.Configuration.ConfigurationManager.ConnectionStrings["CouchDb"].ConnectionString;
using (var db = new MyCouchClient(connection, dbName))
{
//HERE I NEED TO RETRIVE MY FULL IMAGE[] FROM CHUNKS
var test = db.Documents.GetAsync(id, null);
var doc = db.Serializer.Deserialize<couchdb>(test.Result.Content);
return doc.Image;
}
}
THANK YOU
Putting image data in a CouchDB document is a terrible idea. Just don't. This is the purpose of CouchDB attachments.
The potential of bloating the database with redundant blob data via document updates alone will surely have major, negative consequences for anything other than a toy database.
Further, there seems to be a lack of understanding of how async/await works: the code in the OP is invoking async methods, e.g. db.Entities.PutAsync(artist), without an await, so the calls are unlikely to behave as intended. I highly recommend grokking the Microsoft document Asynchronous programming with async and await.
Now as for "chunking": If the image data is so large that it needs to be otherwise streamed, the business of passing it around via a byte array looks bad. If the images are relatively small, just use Attachment.PutAsync as it stands.
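For the small-image case, here's a hedged sketch of the non-streaming path; it assumes MyCouch's byte[]-based PutAttachmentRequest(docId, docRev, name, contentType, content) constructor (the non-stream analog of the request used below), and the attachment name and content type are illustrative placeholders:
using System;
using System.Threading.Tasks;
using MyCouch;
using MyCouch.Requests;

public static async Task PutImageAttachmentAsync(
    MyCouchClient db, string docId, string docRev, byte[] image)
{
    // Store the blob as an attachment instead of embedding it in the document.
    // "Image" and "image/jpeg" are illustrative, not fixed by the library.
    var request = new PutAttachmentRequest(docId, docRev, "Image", "image/jpeg", image);
    var response = await db.Attachments.PutAsync(request);
    if (!response.IsSuccess) throw new Exception(response.Reason);
}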
Although Attachment.PutAsync at MyCouch v7.6 does not support streams (effectively chunking) there exists the Support Streams for attachments #177 PR, which does, and it looks pretty good.
Here's a one-page C# .NET Core console app that uploads a given file as an attachment to a specific document using the very efficient streaming provided by PR 177. Although the code uses PR 177, what matters most is that it uses attachments for blob data. Replacing a stream with a byte array is rather straightforward.
MyCouch + PR 177
In a console, get the MyCouch sources and then apply PR 177:
$ git clone https://github.com/danielwertheim/mycouch.git
$ cd mycouch
$ git pull origin 15a1079502a1728acfbfea89a7e255d0c8725e07
(I don't know git so there's probably a far better way to get a PR)
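For reference, GitHub exposes each pull request as a fetchable ref, so a more direct way to get PR 177 is:
$ git fetch origin pull/177/head:pr-177
$ git checkout pr-177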
MyCouchUploader
With VS2019
Create a new .NET Core console app project and solution named "MyCouchUploader"
Add the MyCouch project pulled with PR 177 to the solution
Add the MyCouch project as a MyCouchUploader dependency
Add the NuGet package "Microsoft.AspNetCore.StaticFiles" as a MyCouchUploader dependency
Replace the content of Program.cs with the following code:
using Microsoft.AspNetCore.StaticFiles;
using MyCouch;
using MyCouch.Requests;
using MyCouch.Responses;
using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Security.Cryptography;
using System.Threading.Tasks;
namespace MyCouchUploader
{
class Program
{
static async Task Main(string[] args)
{
// args: scheme, database, file path of asset to upload.
if (args.Length < 3)
{
Console.WriteLine("\nUsage: MyCouchUploader scheme dbname filepath\n");
return;
}
var opts = new
{
scheme = args[0],
dbName = args[1],
filePath = args[2]
};
Action<Response> check = (response) =>
{
if (!response.IsSuccess) throw new Exception(response.Reason);
};
try
{
// canned doc id for this app
const string docId = "SO-68998781";
const string attachmentName = "Image";
DbConnectionInfo cnxn = new DbConnectionInfo(opts.scheme, opts.dbName)
{ // timely fail if scheme is bad
Timeout = TimeSpan.FromMilliseconds(3000)
};
MyCouchClient client = new MyCouchClient(cnxn);
// ensure db is there
GetDatabaseResponse info = await client.Database.GetAsync();
check(info);
// delete doc for successive program runs
DocumentResponse doc = await client.Documents.GetAsync(docId);
if (doc.StatusCode == HttpStatusCode.OK)
{
DocumentHeaderResponse del = await client.Documents.DeleteAsync(docId, doc.Rev);
check(del);
}
// sniff file for content type
FileExtensionContentTypeProvider provider = new FileExtensionContentTypeProvider();
if (!provider.TryGetContentType(opts.filePath, out string contentType))
{
contentType = "application/octet-stream";
}
// create a hash for silly verification
using var md5 = MD5.Create();
using Stream stream = File.OpenRead(opts.filePath);
byte[] fileHash = md5.ComputeHash(stream);
stream.Position = 0;
// Use PR 177, sea-locks:stream-attachments.
DocumentHeaderResponse put = await client.Attachments.PutAsync(new PutAttachmentStreamRequest(
docId,
attachmentName,
contentType,
stream // :-D
));
check(put);
// verify
AttachmentResponse verify = await client.Attachments.GetAsync(docId, attachmentName);
check(verify);
if (fileHash.SequenceEqual(md5.ComputeHash(verify.Content)))
{
Console.WriteLine("Atttachment verified.");
}
else
{
throw new Exception(String.Format("Attachment failed verification with status code {0}", verify.StatusCode));
}
}
catch (Exception e)
{
Console.WriteLine("Fail! {0}", e.Message);
}
}
}
}
To run:
$ MyCouchUploader http://name:password@localhost:5984 dbname path-to-local-image-file
Use Fauxton to visually verify the attachment for the doc.
I'm creating a feature for an app to store a file on a webserver while maintaining data about the file on SQL Server. I generate a SHA256 hash and store it as BINARY(32) and then upload the file to a WebDav server using HTTPClient. Later when I want to view the file in the app, I do a GET request, download the file, and check the SHA256 hash with the stored hash. It doesn't match :( Why?
I've tried checking the hash on the server and the local machine and it doesn't match either. I've done a ton of research and made sure I wasn't hashing the filename (you can see the code below).
public static byte[] GetSHA256(string path) {
using (var stream = File.OpenRead(path)) {
using (var sha256 = SHA256.Create()) {
return sha256.ComputeHash(stream);
}
}
}
To Upload a file:
public async Task<bool> Upload(string path, string name) {
    // Hash the same local file that will be uploaded
    var storedHash = GetSHA256(path);
    //Store this hash in a database, omitted for brevity
    using (var file = File.OpenRead(path)) {
        var content = new MultipartFormDataContent();
        content.Headers.ContentType.MediaType = "multipart/form-data";
        content.Add(new StreamContent(file));
        var result = await HttpClient.PutAsync(uri, content);
        return result.IsSuccessStatusCode;
    }
}
To download:
var result = await HttpClient.GetAsync(uri);
using (var stream = await result.Content.ReadAsStreamAsync()) {
var fileInfo = new FileInfo("TestFile");
using(var fileStream = fileInfo.Open(FileMode.CreateNew, FileAccess.ReadWrite, FileShare.Delete)) {
await stream.CopyToAsync(fileStream);
}
}
var downloadedFileHash = GetSHA256("TestFile");
//check if downloadedFileHash matches the storedHash by comparing byte[] length and content with for loop.
I expect the hashes to match. I know I'm missing a few using statements and other code, but I omitted a bunch for brevity.
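For the comparison step, a minimal sketch using LINQ's SequenceEqual in place of the manual length-and-loop check (assuming both hashes are non-null byte arrays):
using System.Linq;

// true only if both arrays have the same length and identical bytes
bool hashesMatch = storedHash.SequenceEqual(downloadedFileHash);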
EDIT: The hash of a downloaded file stays the same across downloads, so the problem isn't downloading but uploading. I uploaded the same file multiple times and got back a different hash for each upload, though each uploaded copy's hash stays constant.
Sorry y'all, you can delete this question because I found the problem/answer but am still confused why this is occurring.
Turns out WebDAV was adding extra headers to my file for some reason; see: Header info being written into file when PUT-ing to a Webdav server
Strangest thing. Then I encountered this post: https://blogs.msdn.microsoft.com/robert_mcmurray/2011/10/18/sending-webdav-requests-in-net-revisited/
I rewrote my code as:
public static async Task<HttpResponseMessage> Upload(string path, string name, FileStream file) {
var method = new HttpMethod(@"PUT");
var message = new HttpRequestMessage(method, $"{path}/{name}") {
Content = new StreamContent(file)
};
return await HttpClient.SendAsync(message);
}
And it works... but I wonder how the two methods of uploading differ. (Most likely: MultipartFormDataContent wraps the file in a multipart body, so the boundary lines and part headers are stored verbatim by a plain WebDAV PUT, while StreamContent sends only the raw file bytes.)
Given a stream object which contains an xlsx file, I want to save it as a temporary file and delete it when the file is no longer in use.
I thought of creating a class that implements IDisposable and using it with a using block in order to delete the temp file at the end.
Any idea how to save the stream to a temp file and delete it at the end of use?
Thanks
You could use the TempFileCollection class:
using (var tempFiles = new TempFileCollection())
{
string file = tempFiles.AddExtension("xlsx");
// do something with the file here
}
What's nice about this is that even if an exception is thrown the temporary file is guaranteed to be removed thanks to the using block. By default this will generate the file into the temporary folder configured on the system but you could also specify a custom folder when invoking the TempFileCollection constructor.
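Tying this back to the question, a minimal sketch that copies the incoming stream into the collection-managed temp file (input here stands for the xlsx stream from the question; TempFileCollection lives in System.CodeDom.Compiler):
using System.CodeDom.Compiler;
using System.IO;

using (var tempFiles = new TempFileCollection())
{
    // AddExtension returns a unique temp file name ending in .xlsx
    string file = tempFiles.AddExtension("xlsx");
    using (var fs = File.OpenWrite(file))
    {
        input.CopyTo(fs);
    }
    // work with the file here; it is deleted when tempFiles is disposed
}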
You can get a temporary file name with Path.GetTempFileName(), create a FileStream to write to it, and use Stream.CopyTo to copy all data from your input stream into the temp file:
var stream = /* your stream */
var fileName = Path.GetTempFileName();
try
{
using (FileStream fs = File.OpenWrite(fileName))
{
stream.CopyTo(fs);
}
// Do whatever you want with the file here
}
finally
{
File.Delete(fileName);
}
Another approach here would be:
string fileName = "file.xlsx";
int bufferSize = 4096;
var fileStream = System.IO.File.Create(fileName, bufferSize, System.IO.FileOptions.DeleteOnClose);
// now use that fileStream to save the xlsx stream
This way the file will get removed after closing.
Edit:
If you don't need the stream to live too long (e.g. only a single write operation, or a single loop that writes), you can, as suggested, wrap this stream in a using block. That way you won't have to dispose of it manually.
Code would be like:
string fileName = "file.xlsx";
int bufferSize = 4096;
using(var fileStream = System.IO.File.Create(fileName, bufferSize, System.IO.FileOptions.DeleteOnClose))
{
// now use that fileStream to save the xlsx stream
}
// Get a random temporary file name w/ path:
string tempFile = Path.GetTempFileName();
// Open a FileStream to write to the file:
using (Stream fileStream = File.OpenWrite(tempFile)) { ... }
// Delete the file when you're done:
File.Delete(tempFile);
EDIT:
Sorry, maybe it's just me, but I could have sworn that when you initially posted the question you didn't have all that detail about a class implementing IDisposable, etc. Anyway, I'm not really sure what you're asking in your (edited?) question. But this question: "Any idea how to save the stream to a temp file and delete it at the end of use?" is pretty straightforward. Any number of Google results will come back for ".NET C# Stream to File" or such.
I suggest using Path.GetTempFileName() to create the file. The rest depends on your usage scenario; for example, if you create the file in your temp-creator class and use it only there, it's good to use the using keyword.
So in my program I'm using COM Automation (AutomationFactory in Silverlight 4) to create a FileSystemObject, to which I write a string (theContent). theContent in this case is a small UTF-8 XML file, which I serialized into the string using a MemoryStream.
The string is fine, but for some reason whenever I call the FileSystemObject's Write method I get the error "HRESULT 0x800A0005" (CTL_E_ILLEGALFUNCTIONCALL, according to Google). The strangest part is that if I pass another simple string, like "hello", it works with no problems.
Any ideas?
Alternatively, if there's a way to expose a file/text stream with FileSystemObject that I could serialize to directly, that would be good as well (I can't seem to find anything not in VB).
Thanks in advance!
string theContent = System.Text.Encoding.UTF8.GetString(content, 0, content.Length);
string hello = "hello";
using (dynamic fsoCom = AutomationFactory.CreateObject("Scripting.FileSystemObject"))
{
dynamic file = fsoCom.CreateTextFile("file.xml", true);
file.Write(theContent);
file.Write(hello);
file.Close();
}
I solved the same problem today using ADODB.Stream instead of Scripting.FileSystemObject.
In a Silverlight 4 OOB app (even with elevated trust), you cannot access files in locations outside of 'MyDocuments' and a couple of other user-related special folders. You have to use the 'COM+ Automation' workaround. But Scripting.FileSystemObject, which works great for text files, cannot handle binary files. Fortunately you can also use ADODB.Stream there, and that handles binary files just fine. Here is my code, tested with Word templates (.dotx files):
public static void WriteBinaryFile(string fileName, byte[] binary)
{
const int adTypeBinary = 1;
const int adSaveCreateOverWrite = 2;
using (dynamic adoCom = AutomationFactory.CreateObject("ADODB.Stream"))
{
adoCom.Type = adTypeBinary;
adoCom.Open();
adoCom.Write(binary);
adoCom.SaveToFile(fileName, adSaveCreateOverWrite);
}
}
A file read can be done like this:
public static byte[] ReadBinaryFile(string fileName)
{
const int adTypeBinary = 1;
using (dynamic adoCom = AutomationFactory.CreateObject("ADODB.Stream"))
{
adoCom.Type = adTypeBinary;
adoCom.Open();
adoCom.LoadFromFile(fileName);
return adoCom.Read();
}
}
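Applied to the original question, the UTF-8 XML string can go through the same binary path, which avoids the text-mode Write that raised CTL_E_ILLEGALFUNCTIONCALL (a sketch reusing the WriteBinaryFile helper above):
byte[] xmlBytes = System.Text.Encoding.UTF8.GetBytes(theContent);
WriteBinaryFile("file.xml", xmlBytes);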
Why not just:
File.WriteAllText("file.xml", theContent, Encoding.UTF8);
or even
File.WriteAllBytes("file.xml", content);