I'm developing a Service Fabric stateless service, and at some point it needs to download a file and save it to my desktop folder. But even when I run as Administrator, I get the error:
Access to the path 'C:/Users/me/Desktop/filename' is denied
using (var response = await _httpClient.SendAsync(httpRequestMessage, HttpCompletionOption.ResponseHeadersRead))
{
    using (var streamToReadFrom = await response.Content.ReadAsStreamAsync())
    {
        var fileName = $"C:\\Users\\me\\Desktop\\{name}{DateTime.Now:yyyyMMdd}";
        using (var streamToWriteTo = File.Create(fileName))
        {
            await streamToReadFrom.CopyToAsync(streamToWriteTo);
        }
    }
}
I'm testing on a local debug cluster, but once the service is deployed I want to save to a network path.
If you run Task Manager and go to the Details tab, you'll see that Service Fabric's processes (Fabric.exe, FabricGateway.exe, and others) are running as the NETWORK SERVICE user, not your own user account. The NETWORK SERVICE user doesn't have access to your user profile.
It's hard to suggest what you could do without knowing more about why you're trying to save files. But if that's something you're going to continue to do once your service makes it to production, I'd just implement your final solution now. I guess that's probably saving to Blob Storage, or maybe a database of some kind.
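If Blob Storage is the end state, a minimal sketch of what the download could look like (assuming the Azure.Storage.Blobs package; the connection string and container name are placeholders):

```csharp
using Azure.Storage.Blobs;

// Hypothetical names -- substitute your own connection string and container.
var container = new BlobContainerClient(connectionString, "downloads");
await container.CreateIfNotExistsAsync();

using (var response = await _httpClient.SendAsync(httpRequestMessage, HttpCompletionOption.ResponseHeadersRead))
using (var streamToReadFrom = await response.Content.ReadAsStreamAsync())
{
    // Upload straight from the response stream; nothing touches the local
    // file system, so the NETWORK SERVICE identity is no longer a problem.
    var blobName = $"{name}{DateTime.Now:yyyyMMdd}";
    await container.UploadBlobAsync(blobName, streamToReadFrom);
}
```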
I'm creating a MAUI app and I'm trying to connect it to Firestore.
I have set everything up in Firebase and I can successfully connect to the Firestore by setting the environment variable: "GOOGLE_APPLICATION_CREDENTIALS" on my windows machine locally.
However, I cannot make it work when I test the app on my local Android device, since the device doesn't have the environment variable set.
So my question is: how would I go about this so that I can connect to Firestore on my Android device?
Some thoughts:
I would imagine that I need to set the environment variable in the MAUI code and somehow read it, but wouldn't that expose the secret file since it would now exist inside the MAUI project?
It looks like some people are using the google-services.json file for this purpose (which is not strictly secret) so if I could use that instead, then I guess that would be the best, but again, how would I do that?
I have tried adding the file to the asset files and the .csproj file, setting environment variables in MainProgram, etc., but I just can't make it work, and worse, I'm unsure whether that approach would even be secure.
Any help would be greatly appreciated.
Possible duplicate: How do I connect my MAUI app to a Firestore database? (via service account json file)
I just still don't understand how they made it work.
This is how I successfully set it up in .NET Maui:
Copy the JSON credentials file from Google into the "Resources/Raw" folder
with a Build Action of "MauiAsset"
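In the default .NET MAUI project template, everything under Resources\Raw already gets this build action via a wildcard entry in the .csproj; if yours is missing, it looks like this:

```xml
<ItemGroup>
  <!-- Raw assets (also remove the "Resources\Raw" prefix) -->
  <MauiAsset Include="Resources\Raw\**" LogicalName="%(RecursiveDir)%(Filename)%(Extension)" />
</ItemGroup>
```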
Below is my initialization code:
public async Task<FirestoreDb> InitFirestoreAsync()
{
    // Read the service-account JSON that was bundled as a MauiAsset.
    using var stream = await FileSystem.OpenAppPackageFileAsync(Constants.FirestoreKeyFilename);
    using var reader = new StreamReader(stream);
    var contents = await reader.ReadToEndAsync();

    var builder = new FirestoreClientBuilder { JsonCredentials = contents };
    return FirestoreDb.Create(Constants.FirestoreProjectID, builder.Build());
}
Hope that helps.
This is frustratingly weird. I'm suddenly getting this error in Production, but not locally in development.
Specs:
.NET Core 3.1
IIS 10
Access to the path '\\[network share]\Files\Video' is denied.
It occurs when this method is called:
public Task<bool> UploadAsync()
{
    if (request.Metadata.ChunkIndex == 0)
    {
        if (!Directory.Exists(request.Metadata.UploadLocation))
        {
            Directory.CreateDirectory(request.Metadata.UploadLocation);
        }
    }

    var fixedFileName = FileManager.FixFileName(request.Metadata.FileName);
    var basePath = Path.Combine(request.Metadata.UploadLocation, PARTIALS);
    if (!Directory.Exists(basePath))
    {
        Directory.CreateDirectory(basePath);
    }

    var filePath = Path.Combine(basePath, fixedFileName);
    AppendToFile(filePath, request.File);
    return Task.FromResult(true);
}
... and it fails on this line:
Directory.CreateDirectory(request.Metadata.UploadLocation)
It seems to be due to some missing permission between IIS and the web application, since it works when I run the application under IIS Express.
And this is important, the application in both cases is pointing to the same network share location to drop the video file.
So it seems like the permissions on the network share folder are correct, but IIS can't access it. That's my theory, but my network administrator says he hasn't changed anything about the application's app pool, so I don't have any other ideas.
Any ideas what is causing this issue?
Based on your description and the error, this looks like a permissions issue.
To confirm it, first change the IIS application pool identity to a user that has sufficient permission to access the network share, then try again. If that works, the problem is the application pool identity account's permissions.
For details on how to change the application pool identity, you can refer to this answer.
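One way to switch the pool to a specific account is with appcmd from an elevated prompt (the pool name and account here are placeholders):

```shell
rem Run the "MyAppPool" application pool as a domain account that has rights on the share.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.identityType:SpecificUser /processModel.userName:MYDOMAIN\svc-upload /processModel.password:********
```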
I've written an ASP.NET web app that writes a file to a location on our iSeries file share.
The path looks like this: \\IBMServerAddress\Filepath
This code executes perfectly on my local machine, but fails when it's deployed to my (windows) WebServer.
I understand that I may need to do some sort of impersonation to authenticate against the IFS, but I'm unsure how to proceed.
Here's the code I'm working with:
string filepath = "\\\\IBMServerAddress\\uploads\\";

public int SaveToDisk(string data, string plant)
{
    // Create the text file on the IFS share and write the data to it.
    using (StreamWriter stream = File.CreateText(filepath + plant + ".txt"))
    {
        stream.Write(data + "\r\n");
    }
    return 0;
}
Again, this code executes perfectly on my local machine but does not work when deployed to my Windows web server: access to filepath is denied.
Thanks for your help.
EDIT: I've tried adding a network account with the same credentials as the IFS user and creating a UNC path (iseries) in IIS7 to map the network drive (using the same credentials), but I receive this error:
Access to the path 'iseries\' is denied.
My understanding of Windows in general is that normally services don't have access to standard network shares like a program being run by a user does.
So the first thing would be to see if you can successfully write to a windows file share from the web server.
Assuming that works, you'll need one of two things in order to write to the IBM i share:
1) An IBM i user ID and password that match the user ID and password the process is running under
2) A "guest account" configured on IBM i NetServer
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahl/rzahlsetnetguestprof.htm
You might have better luck with using Linux/UNIX based Network File System (NFS) which is supported in both Windows and the IBM i.
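If you do need to authenticate to the share with explicit credentials rather than a guest profile, one common approach is to open the connection with the Win32 WNetAddConnection2 API before writing. This is a sketch, not production code; the server, account, and password are placeholders:

```csharp
using System;
using System.Runtime.InteropServices;

static class ShareConnection
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct NETRESOURCE
    {
        public int dwScope;
        public int dwType;          // 1 = RESOURCETYPE_DISK
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;
        public string lpRemoteName;
        public string lpComment;
        public string lpProvider;
    }

    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    private static extern int WNetAddConnection2(ref NETRESOURCE netResource,
        string password, string username, int flags);

    // Connects the current process to the UNC path with explicit credentials.
    public static void Connect(string uncPath, string user, string password)
    {
        var nr = new NETRESOURCE { dwType = 1, lpRemoteName = uncPath };
        int result = WNetAddConnection2(ref nr, password, user, 0);
        if (result != 0)
            throw new InvalidOperationException($"WNetAddConnection2 failed with error {result}");
    }
}

// Hypothetical usage before calling SaveToDisk:
// ShareConnection.Connect(@"\\IBMServerAddress\uploads", "IBMUSER", "secret");
```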
I have been attempting to upload files using c# into Hadoop using the WebHDFS REST API.
This code works fine:
using (var client = new System.Net.WebClient())
{
    string result = client.DownloadString("http://host:50070/webhdfs/v1/user/myuser/?op=LISTSTATUS");
    client.DownloadFile("http://host:50070/webhdfs/v1/user/myuser/tbible.txt?user.name=myuser&op=OPEN", @"d:\tbible.txt");
}
This code gets a 403 Forbidden:
using (var client = new System.Net.WebClient())
{
    client.UploadFile("http://host:50070/webhdfs/v1/user/myuser/?user.name=myuser&op=CREATE", "PUT", @"d:\bible.txt");
}
I have tried adding a network credential, with no luck.
How do I authenticate to our Cluster from .NET? The Cluster is Hortonworks HDP1.3 on RHEL5.
Also, I would have liked to use Microsoft's Hadoop SDK, but it is alpha and won't compile in my environment :(
Make sure that you are writing to a directory owned by the group that WebHDFS operates under. By default this is hdfs.
A quick way to check is to run hadoop fs -ls on the directory's parent and look at the group column (it may look like a username).
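For example (hypothetical paths; exact output will differ on your cluster):

```shell
# Inspect owner and group of the target directory's parent
hadoop fs -ls /user

# If the group is wrong, an HDFS superuser can hand the directory over
hadoop fs -chown -R myuser:hdfs /user/myuser
```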
I have a job that needs to connect to two fileshares and copy some files for a data feed.
The source server is on our domain's network, and that part works fine. The remote server, however, chokes and throws a "Could not find part of the path" error. I should add that the destination server lives in a different domain from the source server.
The source and destination paths are read out of my app.config file.
I thought persistently mapping a drive would work, but since this runs as a scheduled task, that doesn't seem to help. I thought about using NET USE, but that doesn't seem to like taking a username and password.
The really weird thing: if I double-click the job while I'm logged into the machine, it runs successfully.
Sample code:
DirectoryInfo di = new DirectoryInfo(srcPath);
try
{
    FileInfo[] files = di.GetFiles();
    foreach (FileInfo fi in files)
    {
        if (!fi.Name.Contains("_desc"))
        {
            Console.WriteLine(fi.Name + System.Environment.NewLine);
            File.Copy(fi.FullName, destPath + fi.Name, true);
            fi.Delete();
        }
    }
}
catch (IOException ex) // catch block added; the original try had none and would not compile
{
    Console.WriteLine(ex.Message);
}
Apparently this isn't as simple as copying the files over. Any suggestions on mapping a drive with credentials in C# 4.0?
EDIT
I'm trying to use a batch file called from the console application that maps the drive while the program is running. I'll know for sure whether that works in the morning.
I'd suggest looking into a proper file transfer protocol, like FTP.
Assuming that's out of the question, try using a UNC path like \\servername\path\file.txt. You will still need credentials, but as long as the account running the application has those permissions, you should be fine. If this were an ASP.NET application, that would mean the account that runs the Application Pool in IIS. See http://learn.iis.net/page.aspx/624/application-pool-identities/
What I finally wound up doing was mapping the drive in a batch file called by my program. I just launch a NET USE command and pause for a few seconds for the mapping to complete.
It looks like when no user is logged in, there's no context for mapped drives.
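The batch file described above could look something like this (the server, share, account, and executable names are placeholders; NET USE does accept credentials in this positional form):

```shell
rem Map the cross-domain share with explicit credentials before the copy runs.
net use \\remoteserver\feedshare MyP@ssw0rd /user:OTHERDOMAIN\svc-feed

rem Pause a few seconds for the mapping to complete.
timeout /t 5

rem Run the data-feed job, then drop the connection.
FeedCopyJob.exe
net use \\remoteserver\feedshare /delete
```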