I need to read only the mode segment from the request body below:
grant_type=password&username=demouser&password=test123&client_id=500DWCSFS-D3C0-4135-A188-17894BABBCCF&mode=device
I used the code below to read the HTTP body, and it gives me the entire body. How can I get just the mode segment without using Substring or changing the offset passed to Seek(), e.g. bodyStream.BaseStream.Seek(3, SeekOrigin.Begin)?
var bodyStream = new StreamReader(HttpContext.Current.Request.InputStream);
bodyStream.BaseStream.Seek(0, SeekOrigin.Begin);
var bodyText = bodyStream.ReadToEnd();
You can't. HTTP runs over TCP, and you can't "seek" into a TCP stream; even if you could, the entire body would still be read and the unused pieces discarded.
So you have to read the entire stream, and you have to parse it properly, because another parameter could also contain the string "mode" (and it could appear at the start of the body), so you can't just search for &mode either.
Given this is a form post, you can simply access Request.Form["mode"]. If you do want to parse it yourself:
string formData;
using (var reader = new StreamReader(HttpContext.Current.Request.InputStream))
{
formData = reader.ReadToEnd();
}
var queryString = HttpUtility.ParseQueryString(formData);
var mode = queryString["mode"];
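For completeness, here is the simpler route mentioned above; a minimal sketch that assumes a standard application/x-www-form-urlencoded post, which your body appears to be:
// ASP.NET has already parsed the form body, so no stream handling is needed
var mode = HttpContext.Current.Request.Form["mode"];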
Related
I have a request that calls a post method. It is posting XML in the request content (but sending it as raw text). In testing, the length of the xml is 106880 characters.
In the Web API post method, I process the request body to pull out the XML and store each element/value in a dictionary using the following:
var stream = new System.IO.StreamReader(Request.Body);
XmlReaderSettings settings = new XmlReaderSettings() { Async = true };
using (XmlReader r = XmlReader.Create(stream, settings))
{
bool rowsExist = true;
while (rowsExist && await r.ReadAsync())
{
if (nodeType == r.NodeType)
{
var name = r.Name;
rowsExist = await r.ReadAsync();
if (r.NodeType == XmlNodeType.Text)
{
xmlDic[name] = r.Value;
}
}
}
}
This works fine with small XML. However, when the text value is relatively large, the data is truncated on the second ReadAsync call and the XmlReader throws an exception saying:
"Synchronous operations are disallowed. Call ReadAsync or set AllowSynchronousIO to true instead."
The exception makes no sense because ReadAsync is being called, but it appears to be related to the size of the data, as it doesn't happen with a smaller set of XML.
I tested a workaround, which is to read the entire request body into a string and then run the XmlReader over the entire body. However, that uses more memory, as it loads the entire request into memory first, something that shouldn't be necessary.
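For reference, that workaround looks roughly like this; it is only a sketch, which assumes ASP.NET Core's Request.Body, reuses the xmlDic dictionary from the snippet above, and guesses that the original nodeType variable was XmlNodeType.Element:
// read the whole body asynchronously up front, then let XmlReader work from memory
string body;
using (var bodyReader = new StreamReader(Request.Body))
{
    body = await bodyReader.ReadToEndAsync();
}
var settings = new XmlReaderSettings() { Async = true };
using (XmlReader r = XmlReader.Create(new StringReader(body), settings))
{
    while (await r.ReadAsync())
    {
        if (r.NodeType == XmlNodeType.Element)
        {
            var name = r.Name;
            if (await r.ReadAsync() && r.NodeType == XmlNodeType.Text)
            {
                xmlDic[name] = r.Value;
            }
        }
    }
}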
I wondered if there might be a default max size/limit that the stream or XmlReader uses, and I see that the XmlReaderSettings class has two properties that control the maximum characters:
settings.MaxCharactersFromEntities
settings.MaxCharactersInDocument
However, the first defaults to 10,000,000, which is far more than I am posting, and the second defaults to zero, which means no limit. As a result, these don't appear to make any difference.
What could be causing this to fail when reading the body using a StreamReader?
My aim is to get content from a website (for instance a league table from a sports website) and put it in a .txt file so that I can code with a local file.
I have tried multiple pieces of code and other examples, such as:
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("http://www.stackoverflow.com");
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("http://www.stackoverflow.com");
// execute the request
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// we will read data via the response stream
Stream resStream = response.GetResponseStream();
// buffer for each read and a builder to accumulate the page text (buffer size is arbitrary)
byte[] buf = new byte[8192];
StringBuilder sb = new StringBuilder();
string tempString = null;
int count = 0;
do
{
// fill the buffer with data
count = resStream.Read(buf, 0, buf.Length);
// make sure we read some data
if (count != 0)
{
// translate from bytes to ASCII text
tempString = Encoding.ASCII.GetString(buf, 0, count);
// continue building the string
sb.Append(tempString);
}
}
while (count > 0); // any more data to read?
My issue when trying this is that the words request and response are underlined in red and all the tokens are invalid.
Is there a better method to get content from a website to a .txt file or is there a way to fix the code supplied?
Thanks
is there a way to fix the code supplied?
The code you submitted works for me; make sure you have the proper namespaces defined.
In this case : using System.Net;
Or might it be that the duplicate declaration of the request variable isn't a typo? If so, remove one of the two request declarations.
Is there a better method to get content from a website to a .txt file
Since you're reading all the content from the site anyway, there isn't really a need for the read loop. Instead, you can use the ReadToEnd method supplied by StreamReader.
string siteContent = "";
using (StreamReader reader = new StreamReader(resStream)) {
siteContent = reader.ReadToEnd();
}
Also be sure to dispose of the WebResponse; other than that, your code should work fine.
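Putting it all together, a minimal sketch of the whole flow, including writing the result to a .txt file (assumes using System.Net and System.IO; the URL and output path are just placeholders):
// request the page
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.stackoverflow.com");
string siteContent;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    // read the whole response body as text
    siteContent = reader.ReadToEnd();
}
// save it so you can work with a local copy
File.WriteAllText(@"C:\temp\site.txt", siteContent);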
I just started a new project on WCF and to be honest I'm very new at this with limited knowledge.
So what I'm trying to do is open a file that is stored on my computer (e.g. Word, PDF, etc.) and display its contents on the web page in JSON format. I converted the file to a byte array and tried to return the Stream. When I did that, it asked me to open or save the file. I don't want that; I just want the contents of the file to be displayed on localhost when I call the method.
Here's what I have:
public string GetRawFile()
{
string file = @"C:\.....\TestFile.pdf";
byte[] rawFile = File.ReadAllBytes(file);
//Stream stream = new MemoryStream(rawFile);
//DataContractJsonSerializer obj = new DataContractJsonSerializer(typeof(string));
//string result = obj.ReadObject(stream).ToString();
//Deserializing
MemoryStream stream = new MemoryStream();
BinaryFormatter binForm = new BinaryFormatter();
stream.Write(rawFile, 0, rawFile.Length);
stream.Seek(0, SeekOrigin.Begin);
Object obj = (Object) binForm.Deserialize(stream);
System.Web.Script.Serialization.JavaScriptSerializer xyz = new System.Web.Script.Serialization.JavaScriptSerializer();
string ejson = xyz.Serialize(obj);
WebOperationContext.Current.OutgoingRequest.ContentType = "text/json";
return ejson;
}
I'm trying to return a string and it's not working, but when I return just the stream, the browser pops up the "open with" prompt.
Also, should I use GET or POST on my data contract? I'm using REST in C#.
I'm assuming that your file actually contains JSON. If that is the case, just do this:
string file = File.ReadAllText(@"C:\path\to\file.extension");
You're making the problem a lot more complicated than it needs to be. Just read the file and return its data as a string. I think you want to use GET for the HTTP method. Generally speaking, you use POST if you're adding new content: if, for example, the user's request would cause the application to write some data to a file or database, then you would typically use POST. If they're just requesting data, you almost always use GET.
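For illustration only, roughly how that suggestion could be wired up as a GET operation in a WCF REST contract; the interface name, UriTemplate, and file path are invented, and it assumes the file really contains JSON text (requires System.ServiceModel.Web):
[ServiceContract]
public interface IFileService
{
    [OperationContract]
    [WebGet(UriTemplate = "rawfile")]
    string GetRawFile();
}

public class FileService : IFileService
{
    public string GetRawFile()
    {
        // advertise the payload as JSON, then return the file text as-is
        WebOperationContext.Current.OutgoingResponse.ContentType = "application/json";
        return File.ReadAllText(@"C:\path\to\file.json");
    }
}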
So I have a page that is accepting XML through a POST method. Here's a small bit of the code:
if (Request.ContentType != "text/xml")
throw new HttpException(500, "Unexpected Content Type");
StreamReader stream = new StreamReader(Request.InputStream);
string x = stream.ReadToEnd(); // added to view content of input stream
XDocument xmlInput = XDocument.Load(stream);
I was getting an error, so I converted the stream to a string, just to see if everything was being sent correctly. When I looked at the content, it looked like this:
%3c%3fxml+version%3d%271.0%27+encoding%3d%27UTF-8%27%3f%3e%0d%0a
So I guess I need to decode the stream. The only problem is that I don't know how I can use HtmlDecode on the stream, and still keep it as a StreamReader object.
Is there any way to do this?
Apparently the client is sending the content as URL-encoded XML. So you need to decode the content like this:
StreamReader stream = new StreamReader(Request.InputStream);
string x = stream.ReadToEnd();
string xml = HttpUtility.UrlDecode(x);
XDocument xmlInput = XDocument.Parse(xml);
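If you'd rather keep working with a reader than a plain string, you can wrap the decoded text in a StringReader, a small variation on the above:
XDocument xmlInput;
using (var reader = new StringReader(HttpUtility.UrlDecode(x)))
{
    // XDocument.Load accepts any TextReader
    xmlInput = XDocument.Load(reader);
}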
Anyway, the problem is probably on the client side... why is it encoding the XML this way?
I'm trying to obtain an image to encode to a WordML document. The original version of this function used files, but I needed to change it to get images created on the fly with an aspx page. I've adapted the code to use HttpWebRequest instead of a WebClient. The problem is that I don't think the page request is getting resolved and so the image stream is invalid, generating the error "parameter is not valid" when I invoke Image.FromStream.
public string RenderCitationTableImage(string citation_table_id)
{
string image_content = "";
string _strBaseURL = String.Format("http://{0}",
HttpContext.Current.Request.Url.GetComponents(UriComponents.HostAndPort, UriFormat.Unescaped));
string _strPageURL = String.Format("{0}{1}", _strBaseURL,
ResolveUrl("~/Publication/render_citation_chart.aspx"));
string _staticURL = String.Format("{0}{1}", _strBaseURL,
ResolveUrl("~/Images/table.gif"));
string _fullURL = String.Format("{0}?publication_id={1}&citation_table_layout_id={2}",
_strPageURL, publication_id, citation_table_id);
try
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(_fullURL);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream image_stream = response.GetResponseStream();
// Read the image data
MemoryStream ms = new MemoryStream();
int num_read;
byte[] crlf = System.Text.Encoding.Default.GetBytes("\r\n");
byte[] buffer = new byte[1024];
for (num_read = image_stream.Read(buffer, 0, 1024); num_read > 0; num_read = image_stream.Read(buffer, 0, 1024))
{
ms.Write(buffer, 0, num_read);
}
// Base 64 Encode the image data
byte[] image_bytes = ms.ToArray();
string encodedImage = Convert.ToBase64String(image_bytes);
ms.Position = 0;
System.Drawing.Image image_original = System.Drawing.Image.FromStream(ms); // <---error here: parameter is not valid
image_stream.Close();
image_content = string.Format("<w:p>{4}<w:r><w:pict><w:binData w:name=\"wordml://{0}\">{1}</w:binData>" +
"<v:shape style=\"width:{2}px;height:{3}px\">" +
"<v:imagedata src=\"wordml://{0}\"/>" +
"</v:shape>" +
"</w:pict></w:r></w:p>", _word_image_id, encodedImage, 800, 400, alignment.center);
image_content = "<w:br w:type=\"text-wrapping\"/>" + image_content + "<w:br w:type=\"text-wrapping\"/>";
}
catch (Exception ex)
{
return ex.ToString();
}
return image_content;
}
Using the static URL (_staticURL) it works fine. If I replace _staticURL with _fullURL in the WebRequest.Create call, I get the error. Any ideas as to why the page request doesn't fully resolve?
And yes, the full URL resolves fine and shows an image if I paste it into the address bar.
UPDATE:
Just read your updated question. Since you're running into login issues, try doing this before you execute the request:
request.Credentials = CredentialCache.DefaultCredentials;
If this doesn't work, then perhaps the problem is that authentication is not being enforced on static files, but is being enforced on dynamic files. In this case, you'll need to log in first (using your client code) and retain the login cookie (using HttpWebRequest.CookieContainer on the login request as well as on the second request) or turn off authentication on the page you're trying to access.
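A rough sketch of that cookie-sharing approach (the login URL and the way you post credentials are placeholders; the key point is reusing one CookieContainer for both requests):
var cookies = new CookieContainer();

// 1) log in first so the authentication cookie lands in the container
var loginRequest = (HttpWebRequest)WebRequest.Create("http://yourserver/login.aspx");
loginRequest.CookieContainer = cookies;
// ... set Method, ContentType and body as your login page expects ...
using (loginRequest.GetResponse()) { }

// 2) reuse the same container for the image request
var request = (HttpWebRequest)WebRequest.Create(_fullURL);
request.CookieContainer = cookies;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();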
ORIGINAL:
Since it works with one HTTP URL and doesn't work with another, the place to start diagnosing this is figuring out what's different between the two requests, at the HTTP level, which accounts for the difference in behavior in your code.
To figure out the difference, I'd use Fiddler (http://fiddlertool.com) to compare the two requests. Compare the HTTP headers. Are they the same? In particular, are they the same HTTP content type? If not, that's likely the source of your problem.
If headers are the same, make sure both the static and dynamic image are exactly the same content and file type on the server. (e.g. use File...Save As to save the image in a browser to your disk). Then use Fiddler's Hex View to compare the image content. Can you see any obvious differences?
Finally, I'm sure you've already checked this, but just making sure: /Publication/render_citation_chart.aspx refers to an actual image file, not an HTML wrapper around an IMG element, right? This would account for the behavior you're seeing, where a browser renders the image OK but your code doesn't.
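One quick check you can also do from code rather than Fiddler (purely diagnostic): look at what the response claims to be before handing it to Image.FromStream.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

// if this prints text/html instead of image/gif (or similar),
// render_citation_chart.aspx is returning markup, not image bytes
System.Diagnostics.Debug.WriteLine(response.ContentType);
System.Diagnostics.Debug.WriteLine(response.StatusCode);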