I want to read the response from the URI, replace every 'S' with 'X', and return the resulting string to the client.
Below is my code, but the replace is not working.
I downloaded the "response" string to check, and it contains plenty of 'S' characters.
Any idea why this is not working, or how I can manipulate the string?
try
{
// open and read from the supplied URI
stream = webClient.OpenRead(uri);
reader = new StreamReader(stream);
response = reader.ReadToEnd();
response.Replace('S', 'X');
webClient.DownloadFile(uri, "C://Users//MyPC//Desktop//a.txt");
}
Thanks..
You can use webClient.DownloadString(uri), like this:
string str = webClient.DownloadString(uri).Replace('S', 'X');
File.WriteAllText(@"C://Users//MyPC//Desktop//a.txt", str);
I'm trying to create a web service which goes to a URL, e.g. www.domain.co.uk/prices.csv, and then reads the CSV file. Is this possible, and how? Ideally without downloading the CSV file.
You could use:
public string GetCSV(string url)
{
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
StreamReader sr = new StreamReader(resp.GetResponseStream());
string results = sr.ReadToEnd();
sr.Close();
return results;
}
And then to split it:
public static void SplitCSV()
{
List<string> splitted = new List<string>();
string fileList = GetCSV("http://www.google.com");
string[] tempStr;
tempStr = fileList.Split(',');
foreach (string item in tempStr)
{
if (!string.IsNullOrWhiteSpace(item))
{
splitted.Add(item);
}
}
}
That said, there are plenty of CSV parsers out there and I would advise against rolling your own; FileHelpers is a good one.
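As a rough illustration (the PriceRecord class and its fields are made up; the real layout depends on the columns in prices.csv), FileHelpers usage might look something like this:
using System.Net;
using FileHelpers;

// Hypothetical record layout; adjust the fields to match the actual CSV columns.
[DelimitedRecord(",")]
public class PriceRecord
{
    public string ProductName;
    public decimal Price;
}

public PriceRecord[] GetPrices(string url)
{
    // Download the CSV text and let FileHelpers handle the parsing.
    string csv = new WebClient().DownloadString(url);
    FileHelperEngine<PriceRecord> engine = new FileHelperEngine<PriceRecord>();
    return engine.ReadString(csv);
}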
// Download the file to a specified path. Using the WebClient class we can download
// files directly from a provided url, like in this case.
System.Net.WebClient client = new WebClient();
client.DownloadFile(url, csvPath);
Where the url is your site with the csv file and the csvPath is where you want the actual file to go.
In your Web Service you could use the WebClient class to download the file, something like this (I have not added any exception handling, using blocks, or Close/Dispose calls; I just wanted to give you an idea you can refine and improve...):
using System.Net;
WebClient webClient = new WebClient();
webClient.DownloadFile("http://www.domain.co.uk/prices.csv", "prices.csv");
Then you can do anything you like with it once the file content is available in the execution flow of your service.
If you have to return it to the client as the return value of the web service call, you can either return a DataSet or any other data structure you prefer.
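For illustration, a rough ASMX-style web method (the method name is a placeholder and the CSV parsing is deliberately naive, assuming a header row and no commas inside fields) could look like this:
using System;
using System.Data;
using System.Net;
using System.Web.Services;

[WebMethod]
public DataSet GetPrices()
{
    string csv = new WebClient().DownloadString("http://www.domain.co.uk/prices.csv");

    // Naive parsing: assumes the first line holds column names and no field contains commas.
    string[] lines = csv.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);
    DataTable table = new DataTable("Prices");
    foreach (string column in lines[0].Split(','))
        table.Columns.Add(column.Trim());
    for (int i = 1; i < lines.Length; i++)
        table.Rows.Add(lines[i].Split(','));

    DataSet ds = new DataSet();
    ds.Tables.Add(table);
    return ds;
}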
Sebastien Lorion's CSV Reader has a constructor that takes a TextReader, so you can wrap the response stream in a StreamReader.
If you decided to use this, your example would become:
void GetCSVFromRemoteUrl(string url)
{
HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
using (CsvReader csvReader = new CsvReader(new StreamReader(response.GetResponseStream()), true))
{
int fieldCount = csvReader.FieldCount;
string[] headers = csvReader.GetFieldHeaders();
while (csvReader.ReadNextRecord())
{
//Do work with CSV file data here
}
}
}
The ever popular FileHelpers also allows you to read directly from a stream.
The documentation for WebRequest has an example that uses streams. Using a stream allows you to parse the document without storing it all in memory.
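As a rough sketch of that idea (the URL and the per-line handling are placeholders), you can process the response one line at a time instead of buffering the whole document:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.domain.co.uk/prices.csv");
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // Only the current record is held in memory at any point.
        string[] fields = line.Split(',');
        // ... process fields here ...
    }
}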
I am creating a small dictionary, with an additional option to use Google Translate. Here is the problem: when I receive the response from Google and show it in a textbox, I see some kind of strange symbols.
Here is the code of the method which "asks" Google:
public string TranslateText(string inputText, string languagePair)
{
string url = String.Format("http://www.google.com/translate_t?hl=en&ie=UTF8&text={0}&langpair={1}", inputText, languagePair);
WebClient webClient = new WebClient();
webClient.Encoding = System.Text.Encoding.UTF8;
// Get translated text
string result = webClient.DownloadString(url);
result = result.Substring(result.IndexOf("<span title=\"") + "<span title=\"".Length);
result = result.Substring(result.IndexOf(">") + 1);
result = result.Substring(0, result.IndexOf("</span>"));
return result.Trim();
}
...and I call this method like this (after the translate button is clicked):
string resultText;
string inputText = tbInputWord.Text.ToString();
if (inputText != null && inputText.Trim() != "")
{
ExtendedGoogleTranslate urlTranslate = new ExtendedGoogleTranslate();
resultText = urlTranslate.TranslateText(inputText, "en|bg");
tbOutputWord.Text = resultText;
}
So I am translating from English (en) to Bulgarian (bg) and setting the webClient encoding to UTF8, so I think I am missing something in the calling code: resultText probably needs to be parsed somehow before putting it into the tbOutputWord textbox. I know that this code works, because if I choose to translate from English to French (for example) it shows the correct result.
Somehow, Google doesn't respect the ie=UTF8 query parameter. We need to add some headers to our request so that UTF8 is returned:
WebClient webClient = new WebClient();
webClient.Encoding = System.Text.Encoding.UTF8;
webClient.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
webClient.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");
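Used inside the TranslateText method above, the rest of the code stays the same; the relevant call is simply:
// With the Accept-Charset header in place, the response comes back as UTF-8
// and the Bulgarian text decodes correctly.
string result = webClient.DownloadString(url);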
I'm using HttpClient to POST MultipartFormDataContent to a Java web application. I'm uploading several StringContents and one file which I add as a StreamContent using MultipartFormDataContent.Add(HttpContent content, String name, String fileName) using the method HttpClient.PostAsync(String, HttpContent).
This works fine, except when I provide a fileName that contains German umlauts (I haven't tested other non-ASCII characters yet). In this case, the fileName gets base64-encoded. The result for a file named 99 2 LD 353 Temp Äüöß-1.txt
looks like this:
__utf-8_B_VGVtcCDvv73vv73vv73vv71cOTkgMiBMRCAzNTMgVGVtcCDvv73vv73vv73vv70tMS50eHQ___
The Java server shows this encoded file name in its UI, which confuses the users. I cannot make any server-side changes.
How do I disable this behavior? Any help would be highly appreciated.
Thanks in advance!
I just found the same limitation as StrezzOr, as the server that I was consuming didn't respect the filename* standard.
I converted the filename to a byte array of its UTF-8 representation, and then reassembled the bytes as the chars of a "simple" (non-UTF-8) string.
This code creates a stream content and adds it to a multipart content:
FileStream fs = File.OpenRead(_fullPath);
StreamContent streamContent = new StreamContent(fs);
streamContent.Headers.Add("Content-Type", "application/octet-stream");
String headerValue = "form-data; name=\"Filedata\"; filename=\"" + _Filename + "\"";
byte[] bytes = Encoding.UTF8.GetBytes(headerValue);
headerValue="";
foreach (byte b in bytes)
{
headerValue += (Char)b;
}
streamContent.Headers.Add("Content-Disposition", headerValue);
multipart.Add(streamContent, "Filedata", _Filename);
This is working with spanish accents.
Hope this helps.
I recently found this issue and I use a workaround here:
At server side:
private static readonly Regex _regexEncodedFileName = new Regex(@"^=\?utf-8\?B\?([a-zA-Z0-9/+]+={0,2})\?=$");
private static string TryToGetOriginalFileName(string fileNameInput) {
Match match = _regexEncodedFileName.Match(fileNameInput);
if (match.Success && match.Groups.Count > 1) {
string base64 = match.Groups[1].Value;
try {
byte[] data = Convert.FromBase64String(base64);
return Encoding.UTF8.GetString(data);
}
catch (Exception) {
//ignored
return fileNameInput;
}
}
return fileNameInput;
}
And then use this function like this:
string correctedFileName = TryToGetOriginalFileName(fileRequest.FileName);
It works.
In order to pass non-ASCII characters in the Content-Disposition header's filename attribute, it is necessary to use the filename* attribute instead of the regular filename (see the spec).
To do this with HttpClient you can do the following,
var streamcontent = new StreamContent(stream);
streamcontent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment") {
FileNameStar = "99 2 LD 353 Temp Äüöß-1.txt"
};
multipartContent.Add(streamcontent);
The header will then end up looking like this,
Content-Disposition: attachment; filename*=utf-8''99%202%20LD%20353%20Temp%20%C3%84%C3%BC%C3%B6%C3%9F-1.txt
I finally gave up and solved the task using HttpWebRequest instead of HttpClient. I had to build headers and content manually, but this allowed me to ignore the standards for sending non-ASCII filenames. I ended up cramming unencoded UTF-8 filenames into the filename header, which was the only way the server would accept my request.
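For reference, a rough sketch of that approach (the URL, field name, file path and boundary format are made up, and error handling is omitted); the key point is that the Content-Disposition line is written by hand with the raw UTF-8 filename:
using System;
using System.IO;
using System.Net;
using System.Text;

string boundary = "---------------------------" + DateTime.Now.Ticks.ToString("x");
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
request.Method = "POST";
request.ContentType = "multipart/form-data; boundary=" + boundary;

using (Stream body = request.GetRequestStream())
{
    byte[] open = Encoding.UTF8.GetBytes("--" + boundary + "\r\n");
    body.Write(open, 0, open.Length);

    // The filename is written as raw UTF-8, without the encoding
    // that HttpClient applies to non-ASCII names.
    string header = "Content-Disposition: form-data; name=\"Filedata\"; filename=\"99 2 LD 353 Temp Äüöß-1.txt\"\r\n"
                  + "Content-Type: application/octet-stream\r\n\r\n";
    byte[] headerBytes = Encoding.UTF8.GetBytes(header);
    body.Write(headerBytes, 0, headerBytes.Length);

    byte[] fileBytes = File.ReadAllBytes(@"C:\temp\99 2 LD 353 Temp Äüöß-1.txt");
    body.Write(fileBytes, 0, fileBytes.Length);

    byte[] close = Encoding.UTF8.GetBytes("\r\n--" + boundary + "--\r\n");
    body.Write(close, 0, close.Length);
}

using (WebResponse response = request.GetResponse())
{
    // Inspect the server's reply here.
}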
So I have a page that is accepting XML through a POST method. Here's a small bit of the code:
if (Request.ContentType != "text/xml")
throw new HttpException(500, "Unexpected Content Type");
StreamReader stream = new StreamReader(Request.InputStream);
string x = stream.ReadToEnd(); // added to view content of input stream
XDocument xmlInput = XDocument.Load(stream);
I was getting an error, so I converted the stream to a string, just to see if everything was being sent correctly. When I looked at the content, it looked like this:
%3c%3fxml+version%3d%271.0%27+encoding%3d%27UTF-8%27%3f%3e%0d%0a
So I guess I need to decode the stream. The only problem is that I don't know how I can use HtmlDecode on the stream, and still keep it as a StreamReader object.
Is there any way to do this?
Apparently the client is sending the content as URL-encoded XML. So you need to decode the content like this:
StreamReader stream = new StreamReader(Request.InputStream);
string x = stream.ReadToEnd();
string xml = HttpUtility.UrlDecode(x);
XDocument xmlInput = XDocument.Parse(xml);
Anyway, the problem is probably on the client side... why is it encoding the XML this way?
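For comparison, a client that posts the raw XML correctly (the URL and payload here are placeholders) would look roughly like this; URL-encoding usually only appears when the XML is sent as a form field instead:
WebClient client = new WebClient();
client.Headers[HttpRequestHeader.ContentType] = "text/xml";
// Send the document body as-is; wrapping it in a form field would URL-encode it.
string xml = "<?xml version='1.0' encoding='UTF-8'?><root/>";
string reply = client.UploadString("http://yoursite.com/receive.aspx", xml);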
I have a webpage which has nothing on it except some string(s). No images, no background color or anything, just some plain text which is not really that long.
I am just wondering what the best (by that, I mean fastest and most efficient) way is to get the string from the webpage so that I can use it for something else (e.g. display it in a text box). I know of WebClient, but I'm not sure it'll do what I want, and I don't really want to try it even if it would work, because the last time I used it a simple operation took approximately 30 seconds.
Any ideas would be appreciated.
The WebClient class should be more than capable of handling the functionality you describe, for example:
System.Net.WebClient wc = new System.Net.WebClient();
byte[] raw = wc.DownloadData("http://www.yoursite.com/resource/file.htm");
string webData = System.Text.Encoding.UTF8.GetString(raw);
or (further to suggestion from Fredrick in comments)
System.Net.WebClient wc = new System.Net.WebClient();
string webData = wc.DownloadString("http://www.yoursite.com/resource/file.htm");
When you say it took 30 seconds, can you expand on that a little more? There are many reasons why that could have happened: slow servers, internet connections, a dodgy implementation, etc.
You could go a level lower and implement something like this:
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("http://www.yoursite.com/resource/file.htm");
// For a plain GET of the page no request body is needed, so we go straight to the response.
string responseData = string.Empty;
HttpWebResponse httpResponse = (HttpWebResponse)webRequest.GetResponse();
using (StreamReader responseReader = new StreamReader(httpResponse.GetResponseStream()))
{
responseData = responseReader.ReadToEnd();
}
However, at the end of the day the WebClient class wraps up this functionality for you. So I would suggest that you use WebClient and investigate the causes of the 30 second delay.
If you're downloading text then I'd recommend using the WebClient and getting a StreamReader for the text:
WebClient web = new WebClient();
System.IO.Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt");
using (System.IO.StreamReader reader = new System.IO.StreamReader(stream))
{
String text = reader.ReadToEnd();
}
If this is taking a long time then it is probably a network issue or a problem on the web server. Try opening the resource in a browser and see how long that takes.
If the webpage is very large, you may want to look at streaming it in chunks rather than reading all the way to the end as in that example.
Look at http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx to see how to read from a stream.
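As a rough sketch (the buffer size and URL are arbitrary), reading the response in chunks with Stream.Read looks like this:
WebClient web = new WebClient();
using (System.IO.Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt"))
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    // Stream.Read returns 0 once the end of the response has been reached.
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Process just this chunk (e.g. append it to a file or feed it to a decoder)
        // instead of holding the whole page in memory.
    }
}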
Regarding the suggestion:
So I would suggest that you use WebClient and investigate the causes of the 30 second delay.
From the answers to the question "System.Net.WebClient unreasonably slow", try setting Proxy = null:
WebClient wc = new WebClient();
wc.Proxy = null;
Credit to Alex Burtsev
If you use the WebClient to read the contents of the page, it will include HTML tags.
string webURL = "https://yoursite.com";
WebClient wc = new WebClient();
wc.Headers.Add("user-agent", "Only a Header!");
byte[] rawByteArray = wc.DownloadData(webURL);
string webContent = Encoding.UTF8.GetString(rawByteArray);
After getting the content, the HTML tags should be removed. Regex can be used for this:
var result= Regex.Replace(webContent, "<.*?>", String.Empty);
But this method is not very accurate; a better way is to install HtmlAgilityPack and use the following code:
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(webContent);
string result = doc.DocumentNode.InnerText;
You say it takes 30 seconds, but that has nothing to do with using WebClient (the main factor is the internet connection or a proxy). WebClient has worked very well for me. Example:
WebClient client = new WebClient();
// "Text" here is assumed to hold the URL to read (e.g. the form's text box value)
using (Stream data = client.OpenRead(Text))
{
using (StreamReader reader = new StreamReader(data))
{
string content = reader.ReadToEnd();
string pattern = @"((https?|ftp|gopher|telnet|file|notes|ms-help):((//)|(\\\\))+[\w\d:#@%/;$()~_?\+-=\\\.&]*)";
MatchCollection matches = Regex.Matches(content,pattern);
List<string> urls = new List<string>();
foreach (Match match in matches)
{
urls.Add(match.Value);
}
}
}
// This only works when the page is well-formed XML (e.g. XHTML).
XmlDocument document = new XmlDocument();
document.Load("http://www.yourwebsite.com");
string allText = document.InnerText;