I would like to write data to a file on a server. Therefore, I use the following code:
System.Net.WebClient webClient = new System.Net.WebClient();
string webAddress = "https://sd2dav.1und1.de/"; //1&1 Webdav
string destinationFilePath = webAddress + "testfile.txt";
webClient.Credentials = new System.Net.NetworkCredential("myuser", "mypassword");
Stream stream = webClient.OpenWrite(destinationFilePath, "PUT");
UnicodeEncoding uniEncoding = new UnicodeEncoding();
using (StreamWriter sw = new StreamWriter(stream, uniEncoding))
{
    sw.Write("text");
    sw.Flush();
}
webClient.Dispose();
This works fine when compiled with Visual Studio 2008 and .NET 3.5 and executed on Windows 7. However, I would like to do this with Mono for Android (4.4.54). There, the code executes, but nothing happens: the file is not created on the server.
Is this a bug in Mono for Android or is anything wrong with the code above? Do you know a way to get this working?
Update: I filed a bug at https://bugzilla.xamarin.com/show_bug.cgi?id=10163 regarding this issue. Hopefully, the guys from Xamarin will take care of this. Furthermore, I presented an alternative way of sending the data to the server at https://stackoverflow.com/questions/14786214/alternative-to-webclient-openwrite.
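One workaround worth trying is to avoid OpenWrite entirely and let WebClient send the whole request in a single call via UploadData, so nothing depends on when the write stream is flushed and closed. A minimal sketch; the URL, user name, and password are placeholders, not real endpoints:

```csharp
using System.Net;
using System.Text;

class WebDavPutSketch
{
    // Send the whole payload in one PUT request instead of writing to a
    // stream obtained from OpenWrite. Parameters are placeholder values.
    static void PutText(string url, string user, string password, string text)
    {
        using (var webClient = new WebClient())
        {
            webClient.Credentials = new NetworkCredential(user, password);
            // Same UTF-16 encoding as the original code used.
            byte[] payload = new UnicodeEncoding().GetBytes(text);
            webClient.UploadData(url, "PUT", payload);
        }
    }
}
```

Whether this sidesteps the Mono for Android issue would need testing on the device; it at least removes the stream-lifetime question from the equation.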
Version: .NET Core 2.1
There is a share path on IIS like https://foo.com/bar; this bar folder is part of my web site on IIS, and https://mysite.com/bar/request returns status code 200.
If targetFilePath is "z:\myfiles\cookieRecipe.txt", the code below succeeds. But if targetFilePath is "https://mysite.com/bar/cookieRecipe.txt", it throws an exception like: System.IO.IOException, HResult: -2147024773, Message: "The filename, directory name, or volume label syntax is incorrect: Z:\Publish\MyWeb.Web\https:\mysite.com\bar". Why did this code prepend Z:\Publish\MyWeb.Web to my path, and how can I solve this?
using (var targetStream = File.Create(targetFilePath))
{
await item.CopyToAsync(targetStream);
}
The documentation explains that only relative or absolute file-system paths are supported by File.Create(path); relative path information is interpreted as relative to the current working directory:
https://learn.microsoft.com/en-us/dotnet/api/system.io.file.create?view=netcore-3.1
If you want to read something from a remote server, use WebClient instead:
using (WebClient client = new WebClient())
using (Stream stream = client.OpenRead("https://mysite/a"))
using (StreamReader reader = new StreamReader(stream))
{
    string content = reader.ReadToEnd();
}
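The mysterious Z:\Publish\MyWeb.Web prefix can also be demonstrated without IIS: a URL is not a rooted file-system path, so File and Path APIs treat it as a relative path and resolve it against the current working directory. A quick sketch, using the paths from the question:

```csharp
using System;
using System.IO;

class PathRootSketch
{
    static void Main()
    {
        // A drive-rooted path is used by File.Create as-is (prints True on Windows).
        Console.WriteLine(Path.IsPathRooted(@"z:\myfiles\cookieRecipe.txt"));

        // A URL is not a rooted file-system path, so it is treated as relative
        // and resolved against the current working directory, which is how
        // "Z:\Publish\MyWeb.Web" ends up prepended to it.
        Console.WriteLine(Path.IsPathRooted("https://mysite.com/bar/cookieRecipe.txt")); // False
    }
}
```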
I am writing an RCON tool in Visual Studio for Black Ops. I know it's an old game, but I still have a server running.
I am trying to download the data from this link
Black Ops Log File
I am using this code.
System.Net.WebClient wc = new System.Net.WebClient();
string raw = wc.DownloadString(logFile);
This takes between 6441 ms and 13741 ms according to Visual Studio.
Another attempt was...
string html = null;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(logFile);
request.AutomaticDecompression = DecompressionMethods.GZip;
request.Proxy = null;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
html = reader.ReadToEnd();
}
This also takes around 6133 ms according to VS debugging.
I have seen other RCON tools respond to commands really quickly. Mine takes 5000 ms at best, which is not really acceptable. How can I download this information more quickly? I am told it shouldn't take this long. What am I doing wrong?
This is just how long the server takes to answer.
In the future you can debug such problems yourself using network tools such as Fiddler or by profiling your code to see what takes the longest amount of time.
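To see for yourself where the time goes, you can time the two phases separately: GetResponse() (dominated by server latency, i.e. time-to-first-byte) versus ReadToEnd() (the actual download). A minimal helper; the usage is only sketched in comments because it depends on the question's logFile URL:

```csharp
using System;
using System.Diagnostics;

static class Timing
{
    // Run an action and return how long it took, in milliseconds.
    public static long MeasureMs(Action action)
    {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}

// Usage sketch against the code from the question:
//   long ttfb = Timing.MeasureMs(() => response = (HttpWebResponse)request.GetResponse());
//   long body = Timing.MeasureMs(() => html = reader.ReadToEnd());
// If ttfb dominates, the server is simply slow to answer and the client code is fine.
```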
I have a small function that uploads and downloads through the C# WebClient. I just want to make sure this is the correct way of doing it without breaking any security rules. It currently works for my application, but I am wondering whether the way I'm doing it is right.
using (WebClient client = new WebClient())
{
    client.Proxy = new WebProxy();
    client.Credentials = new System.Net.NetworkCredential(_username, _password);
    client.BaseAddress = _URLPath; // ftp://10.10.10.10/
    client.UploadFile(_filePath.Substring(_filePath.LastIndexOf("\\") + 1), _filePath); // _filePath = C:\\text.bak
    client.DownloadFile(myFile, myFile); // download myFile = "text.txt"
}
Basically the application uploads a file called "text.bak" from my C:\ drive, the server immediately generates a text file from it called "text.txt", and I download that right away.
Am I opening up any security issues? Thanks.
I have a Web application. It sends a series of images to the server (no problem there) and then uses code from this tutorial to create a PowerPoint presentation. The presentation is saved in a directory on the web server and the URL is returned to the user.
However, the file is still in use and attempting to access it generates a 500.0 error in IIS 7.5.
If I open Task Manager and kill the w3wp.exe process that belongs to the NETWORK SERVICE user, everything works as intended (the file can be downloaded and viewed). I also have another w3wp process belonging to DefaultAppPool, but it doesn't seem to cause a problem.
I'm new to this .Net coding, so it is very possible I forgot to close something down in code. Any ideas?
Edit: This is the method that creates a series of PNGs from image data that is encoded into a string. It uses a Guid to create a unique piece of the directory path, checks to make sure it doesn't already exist, and then creates the directory and places the images there.
It looks like the offending method is this one:
public void createImages(List<String> imageStrings)
{
UTF8Encoding encoding = new UTF8Encoding();
Decoder decoder = encoding.GetDecoder();
Guid id = Guid.NewGuid();
String idString = id.ToString().Substring(0, 8);
while (Directory.Exists(imageStorageRoot + idString))
{
id = Guid.NewGuid();
idString = id.ToString().Substring(0, 8);
}
String imageDirectoryPath = imageStorageRoot + idString + "\\";
DirectoryInfo imagePathInfo = Directory.CreateDirectory(imageDirectoryPath);
for (int i = 0; i < imageStrings.Count; i++)
{
String imageString = imageStrings[i];
Byte[] binary = Convert.FromBase64String(imageString);
using (FileStream stream = new FileStream(imageDirectoryPath + idString + i.ToString() + ".png", FileMode.Create))
{
using (BinaryWriter writer = new BinaryWriter(stream))
{
writer.Write(binary);
}
}
}
}
Edit 2: If there is a better way to go about doing things, please let me know. I am trying to learn here!
Edit 3: Upon further examination, I can comment out all of this code. In fact, the second instance of w3wp.exe starts up as soon as a browser hits the website. I am now wondering if this might have something to do with our stack: it's a Flex app that uses WebORB for remoting to some C# classes.
Does anyone know why this second instance of w3wp.exe (owned by NETWORK SERVICE) would prevent the file from opening properly? Is there some way to get it to release its hold on the file in question?
Make sure you've closed the file after using it (better yet, put the code that accesses your files in using statements). Post some code if you need help figuring out where the issue is.
The sample looks good. Did you deviate from the code structure?
using (Package pptPackage =
Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite))
{
// your code goes inside there
}
If your code does contain the using statement, you should be fine. If not, add a Dispose call on your Package object when you are done, or you will leave the file open for a long time until (possibly) the finalizer closes it.
I am using this code to get a list of all the files in a directory (here webRequestUrl = something.com/directory/):
FtpWebRequest fwrr = (FtpWebRequest)WebRequest.Create(new Uri("ftp://" + webRequestUrl));
fwrr.Credentials = new NetworkCredential(username, password);
fwrr.Method = WebRequestMethods.Ftp.ListDirectoryDetails;

List<string> strList = new List<string>();
using (WebResponse response = fwrr.GetResponse())
using (StreamReader srr = new StreamReader(response.GetResponseStream()))
{
    string str = srr.ReadLine();
    while (str != null)
    {
        strList.Add(str);
        str = srr.ReadLine();
    }
}
However, I am not getting the list of files; I am getting some HTML-document-style lines instead.
This FTP server is Windows-based; the same code works fine against a Unix server.
Please help. Thanks.
It works for me when the FTP server is on an internal machine and I use ftp://192.168.0.155. If I try that URL in IE, I get the same HTML result as yours.
I doubt it's happening because of the URL, but can you try replacing the host name with the IP address (just a wild guess)? Even if you are getting HTML, you can strip out the unnecessary parts and parse the file names.
I even tried ftp://sub.a.com/somefolder and it worked for me. It seems the browser wraps HTML around the FTP response, because I get different HTML when I open the FTP site in IE and in Chrome.
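If the response really is a ListDirectoryDetails listing, bear in mind that its line format is server-dependent; Windows/IIS FTP typically returns DOS-style lines rather than Unix `ls -l` output. A hedged parsing sketch for that one format only (it assumes the entry name starts at the fourth column, so other servers' output will not parse correctly):

```csharp
using System;
using System.Linq;

static class FtpListingParser
{
    // Parse one DOS-style listing line as typically returned by IIS FTP, e.g.
    //   "02-11-13  02:19PM              8192 report.txt"
    //   "02-11-13  02:19PM       <DIR>      images"
    public static (string Name, bool IsDirectory) ParseDosLine(string line)
    {
        string[] tokens = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        bool isDir = tokens[2] == "<DIR>";
        // Re-join from the fourth token on, so names containing single spaces survive.
        string name = string.Join(" ", tokens.Skip(3));
        return (name, isDir);
    }
}
```

You would run each line from strList through something like this instead of (or after) stripping the browser-added HTML.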