Got a problem here... If I put the XML file on the server, I can read it with a StreamReader, convert it to a variable, and everything ends up in the MSSQL database as expected.
However, I'm required to send the XML through an HTTP POST, and that doesn't work with the code below:
page.Response.ContentType = "text/xml";
StreamReader reader = new StreamReader(page.Request.InputStream);
inputString = reader.ReadToEnd();
deleteShip(inputString);
It seems to me that the code above doesn't receive the XML that my program POSTs, because the same deleteShip code works fine when I point it at an XML file on the server.
Is there a way to solve this? As long as I can get any string into deleteShip(string s) I'm happy; the string will be in XML format, though.
Thanks for the help!
It would be useful to see how the XML is POSTed to your program. Typically, data is sent from an HTML form as name-value pairs in the HTTP request body when using the POST method. It's not clear from your question whether you're using an HTML form to POST the XML to your program and it's hard to tell what might be going wrong without more information.
From your code it looks like you're reading the entire HTTP request body, whereas you'd usually read the value of a request parameter, for example:
Request["XmlParameterName"]
Where XmlParameterName is the name of an HTML form input field.
Have you inspected the value of the inputString variable? Is it valid XML? Is it encoded correctly? Are any invalid characters like ampersands (&) escaped correctly?
Update your question with a bit more information if none of the things I mentioned are the problem.
OK, I got it fixed.
Here is the code.
System.IO.Stream stream;
string inputString;
Int32 stringLength;
stream = Request.InputStream;
stringLength = Convert.ToInt32(stream.Length);
byte[] stringArray = new byte[stringLength];
stream.Read(stringArray, 0, stringLength); // read the POST body into the buffer
inputString = System.Text.Encoding.ASCII.GetString(stringArray, 0, stringLength);
deleteShip(inputString);
This reads the POST body of my HTTP request (which in this case is the XML) directly.
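For completeness, here is a minimal client-side sketch of how the XML could be POSTed so that the page reads it from Request.InputStream; the URL and payload below are illustrative, not taken from the question:
using System;
using System.Net;

class PostXmlExample
{
    static void Main()
    {
        string xml = "<ship><id>42</id></ship>"; // illustrative payload
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
            // UploadString sends the string as the raw POST body
            string response = client.UploadString("http://example.com/DeleteShip.aspx", "POST", xml);
            Console.WriteLine(response);
        }
    }
}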
I'm struggling to find a feasible solution to this. I've tried looking around but can't find any documentation on the issue. If a customer sends a message containing a quote character, it breaks the payload syntax and Android sends me back a 400 Bad Request error.
The only solution I can think of is doing my own translation and validation: allow only the basic characters and, for the restricted ones, do my own "parsing", i.e. take a quote, replace it with "/q", and then replace "/q" back on the app when the message is received. I don't like this solution because it puts logic on the app; if I forget something, I want to be able to change it immediately rather than update everyone's phone, app, etc.
I'm looking for an existing encoding I could apply that the GCM servers process correctly, so the messages are accepted, broadcast, and received by the phone with the characters intact.
Base64 encoding should get rid of the special characters. Just encode it before sending and decode it again on receiving:
Edit: sorry, I only have a Java/Android sample here; I don't know exactly how Xamarin works or what functions it provides:
// before sending
byte[] data = message.getBytes("UTF-8");
String base64Message = Base64.encodeToString(data, Base64.DEFAULT);
// on receiving
byte[] data = Base64.decode(base64Message, Base64.DEFAULT);
String message = new String(data, "UTF-8");
.NET translation of @tknell's solution
Decode:
Byte[] data = System.Convert.FromBase64String(encodedString);
String decoded = System.Text.Encoding.UTF8.GetString(data);
Encode:
Byte[] data = System.Text.Encoding.UTF8.GetBytes(decodedString);
String encoded = System.Convert.ToBase64String(data);
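As a quick illustrative round-trip check, a message containing quotes survives the encode/decode unchanged:
string original = "He said \"hello\" to the customer";
string encoded = System.Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(original));
string decoded = System.Text.Encoding.UTF8.GetString(System.Convert.FromBase64String(encoded));
System.Console.WriteLine(decoded == original); // True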
I have a WPF application that sends out an HTML-formatted email when a button is clicked. The entire email message is in HTML format and it does work.
However, I was wondering if there is a way to read an HTML file and send it out, rather than writing the whole message in the code-behind, while keeping all the HTML formatting intact.
I tried something like this:
string MessageTosend = File.ReadAllText("path to txt/html file");
But that just sent out an email that only has text (no styling, no html...just the plain text found in the file).
Then I thought, I may have to convert everything:
string MessageTosend = Convert.ToString(File.ReadAllText("path to txt/html file"));
But that does the same thing as before.
Is there a way to achieve this? Or will I have to stick with
string MessageTosend = @"<html> ... lots of html stuff ... </html>";
for every button that sends an email?
Note: the .txt and .html files I tried to read from contained the same content as the string above (which, again, works as expected), just without the doubled quotes needed inside a verbatim string (for example, width=""100"" in the string becomes width="100" in the file).
Try adding an encoding to your file read:
string MessageTosend = File.ReadAllText("path to txt/html file", Encoding.UTF8);
Try reading a file that simply contains < and compare it to the string "<". Repeat for any special characters until you find a mismatch. Then find the character code like this:
(int)MessageTosend[0] // < should be 60 (3C in UTF-8)
Find out what the offending characters are, and we may be able to help. If I read a file, I do not see this problem.
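A small sketch of the comparison suggested above; the file path and literal below are placeholders, not real values:
string fromFile = System.IO.File.ReadAllText("path to txt/html file", System.Text.Encoding.UTF8);
string literal = "<html>"; // start of the hard-coded string that is known to work
int length = System.Math.Min(fromFile.Length, literal.Length);
for (int i = 0; i < length; i++)
{
    if (fromFile[i] != literal[i])
    {
        System.Console.WriteLine("Mismatch at index {0}: file char code {1}, literal char code {2}",
            i, (int)fromFile[i], (int)literal[i]);
        break;
    }
}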
After a lot of effort I created my own mail parser, and I can now successfully parse and display emails. But a few mails, especially ones sent from Apple devices/iPhones, appear like this after parsing. I have no idea why this is happening. Please help.
=D8=AA=D9=88=D8= =A7=D8=AC=D9=87=D9=86=D9=8A =D9=85=D8=B4=D9=83=D9=84=D8=A9 =D8=A5=D8=B4=D8= =A7=D8=B1=D8=A9 =D9=84=D9=84=D9=83=D8=B1=D8=AA =D8=B1=D9=82=D9=85 410814189= 68 =D8=B9=D9=84=D9=85=D8=A7=D9=8B =D8=A8=D8=A3=D9=86 =D8=A5=D8=B4=D8=
It would appear that your mail parser does not handle decoding of quoted-printable content.
I imagine that if you looked at the headers, you'd find a header like this:
Content-Transfer-Encoding: quoted-printable
I've written several email clients and multiple MIME parsers (the earlier ones in C) and am currently working on a new MIME parser in C# called MimeKit, here: http://github.com/jstedfast/MimeKit. It may be of interest to you...
I've got a filterable stream class that you can add a QuotedPrintableDecoder to (which I've also implemented), then pass your data through that to decode it. Or you could just pass it through the QuotedPrintableDecoder directly, depending on whatever is easiest for you.
Example usage:
var decoder = new QuotedPrintableDecoder ();
var output = new byte[decoder.EstimateOutputLength (input.Length)];
var outputLength = decoder.Decode (input, 0, input.Length, output);
// convert the output into a string displayable to the user...
var text = System.Text.Encoding.UTF8.GetString (output, 0, outputLength);
Obviously you'd use the proper System.Text.Encoding for the content (by looking at the "charset" parameter in the Content-Type header) instead of blindly using System.Text.Encoding.UTF8.
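For reference, here is a rough sketch of what quoted-printable decoding does; it is not RFC-complete, and the class and method names are mine, not MimeKit's:
using System;
using System.Collections.Generic;
using System.Text;

static class QuotedPrintableSketch
{
    // "=XX" becomes the raw byte 0xXX; "=" followed by a line break (a soft
    // line break) is dropped; everything else is copied through unchanged.
    public static string Decode(string input, Encoding charset)
    {
        var bytes = new List<byte>();
        for (int i = 0; i < input.Length; i++)
        {
            if (input[i] == '=' && i + 2 < input.Length &&
                Uri.IsHexDigit(input[i + 1]) && Uri.IsHexDigit(input[i + 2]))
            {
                bytes.Add(Convert.ToByte(input.Substring(i + 1, 2), 16));
                i += 2;
            }
            else if (input[i] == '=' && i + 1 < input.Length &&
                     (input[i + 1] == '\r' || input[i + 1] == '\n'))
            {
                // soft line break: skip "=\r\n" or "=\n"
                i += (input[i + 1] == '\r' && i + 2 < input.Length && input[i + 2] == '\n') ? 2 : 1;
            }
            else
            {
                bytes.Add((byte)input[i]);
            }
        }
        return charset.GetString(bytes.ToArray());
    }
}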
Scenario:
I need to parse millions of HTML files/pages (as fast as I can), read only the Title and Meta parts of each, and dump them to a database.
What I'm doing is using the System.Net.WebClient class's DownloadString(url_path) to download the page and then saving it to the database with LINQ to SQL.
But DownloadString gives me the complete HTML source, and I only need the Title and META tag parts.
Any ideas on how to download only that much content?
I think you can open a stream for this URL and use it to read just the first N characters. I can't tell you the exact number, but you can set it to a reasonable value that should cover the title and the description.
HttpWebRequest fileToDownload = (HttpWebRequest)HttpWebRequest.Create("YourURL");
using (WebResponse fileDownloadResponse = fileToDownload.GetResponse())
{
using (Stream fileStream = fileDownloadResponse.GetResponseStream())
{
using (StreamReader fileStreamReader = new StreamReader(fileStream))
{
// Number = how many characters to read from the start of the page
char[] x = new char[Number];
int charsRead = fileStreamReader.Read(x, 0, Number);
string data = new string(x, 0, charsRead);
}
}
}
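Once that first chunk is read into data, the title can be pulled out with a simple regex rather than a full HTML parser; a sketch (it only works when the whole <title> element falls inside the chunk):
using System.Text.RegularExpressions;

static string GetTitle(string partialHtml)
{
    var match = Regex.Match(partialHtml, @"<title[^>]*>(.*?)</title>",
                            RegexOptions.IgnoreCase | RegexOptions.Singleline);
    return match.Success ? match.Groups[1].Value.Trim() : null;
}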
I suspect that WebClient will try to download the whole page first, in which case you'd probably want a raw client socket: send the appropriate HTTP request (manually, since you're using raw sockets), start reading the response (which will not arrive all at once), and kill the connection when you've read enough. However, the rest will probably already have been sent from the server and be winging its way to your PC whether you want it or not, so you might not save much bandwidth, if any.
Depending on what you want it for, many half decent websites have a custom 404 page which is a lot simpler than a known page. Whether that has the information you're after is another matter.
You can use the verb "HEAD" in an HttpWebRequest to return only the response headers (not the page content). To get the full head element with the meta data, you'll need to download the page and parse out the meta data you want.
var request = System.Net.WebRequest.Create(uri);
request.Method = "HEAD";
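A slightly fuller sketch that sends the HEAD request and inspects the returned headers (uri is whatever page you're checking):
var headRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(uri);
headRequest.Method = "HEAD";
using (var headResponse = headRequest.GetResponse())
{
    System.Console.WriteLine(headResponse.ContentType);   // e.g. text/html; charset=utf-8
    System.Console.WriteLine(headResponse.ContentLength); // -1 if the server doesn't send it
}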
I'm using automatic conversion from WSDL to C#. Everything works apart from encoding: whenever
I have native characters (like 'ł' or 'ó') I get '??' instead of them in string fields ('G????wny' instead of 'Główny'). How do I deal with this? The server sends the document with the correct encoding declared in its header.
EDIT: I noticed in Wireshark that packets sent FROM me have a BOM, but packets sent TO me don't; maybe that's the root of the problem?
So maybe the following will help:
What I'm sure I did is this:
In the web service PHP file, after connecting to the MySQL database I call:
mysql_query("SET CHARSET utf8");
mysql_query("SET NAMES utf8 COLLATE utf8_polish_ci");
The second thing I did:
In the same PHP file,
I added utf8_encode to the service call on the $POST_DATA variable:
$server->service(utf8_encode($POST_DATA));
In class.nusoap_base.php I changed:
//var $soap_defencoding = 'ISO-8859-1';
var $soap_defencoding = 'UTF-8';
and also in nusoap.php, the same as above:
//var $soap_defencoding = 'ISO-8859-1';
var $soap_defencoding = 'UTF-8';
and in the nusoap.php file again:
var $decode_utf8 = true;
Now I can send and receive properly encoded data.
Hope this helps.
Regards,
The problem was on the server side, in the Content-Type header it sent (it was set to "text/xml"). It turns out that for UTF-8 it HAS to be "text/xml; charset=utf-8"; other approaches, such as adding a BOM, aren't correct (according to RFC 3023). More info here: http://annevankesteren.nl/2005/03/text-xml
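For reference, the header value that produced the '??' characters versus the one that works:
Content-Type: text/xml
Content-Type: text/xml; charset=utf-8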