SOAP to Stream to String - C#

I have a SOAP object that I want to capture as a string. This is what I have now:
RateRequest request = new RateRequest();
// Do some stuff to request here
SoapFormatter soapFormat = new SoapFormatter();
using (MemoryStream myStream = new MemoryStream())
{
    soapFormat.Serialize(myStream, request);
    myStream.Position = 0;
    using (StreamReader sr = new StreamReader(myStream))
    {
        string reqString = sr.ReadToEnd();
    }
}
Is there a more elegant way to do this? I don't care that much about the resulting string format - just so it's human readable. XML is fine.

No, that's pretty much the way to do it. You could always factor this out to a method which will do this work for you, and then you can just reduce it to a single call where you need it.
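If it helps, the factored-out version could look roughly like this; a minimal sketch assuming the same SoapFormatter approach as above (the method name is just illustrative):

static string ToSoapString(object graph)
{
    var formatter = new SoapFormatter();
    using (var stream = new MemoryStream())
    {
        formatter.Serialize(stream, graph);
        stream.Position = 0;
        using (var reader = new StreamReader(stream))
        {
            // the stream holds the SOAP XML, so the result is already human-readable
            return reader.ReadToEnd();
        }
    }
}

// usage: string reqString = ToSoapString(request);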

I think you can also do this:
soapFormat.Serialize(myStream, request);
string xml = System.Text.Encoding.UTF8.GetString(myStream.ToArray()); // ToArray() copies only the written bytes (GetBuffer() may include unused capacity), and UTF8 is safer than ASCII if the data contains non-ASCII characters


Reading response body stream resulted in an empty string

I'm using .NET Core 3.1 to create a RESTful API.
Here I'm trying to modify the response body to filter out some values based on a corporate use case that I have.
My problem is that at first I found that the CanRead value of context.HttpContext.Response.Body is false, so it is unreadable. I searched around and found this question and its answers, which
basically convert a stream that can't seek to one that can
so I applied the answer with a little modification to fit my use case:
Stream originalBody = context.HttpContext.Response.Body;
try
{
    using (var memStream = new MemoryStream())
    {
        context.HttpContext.Response.Body = memStream;
        memStream.Position = 0;
        string responseBody = new StreamReader(memStream).ReadToEnd();
        memStream.Position = 0;
        memStream.CopyTo(originalBody);
        string response_body = new StreamReader(originalBody).ReadToEnd();
        PagedResponse<List<UserPhoneNumber>> deserialized_body;
        deserialized_body = JsonConvert.DeserializeObject<PagedResponse<List<UserPhoneNumber>>>(response_body);
        // rest of code logic
    }
}
finally
{
    context.HttpContext.Response.Body = originalBody;
}
But when debugging, I found out that memStream.Length is always 0, and therefore the originalBody value is an empty string: "".
Even so, after this executes the response is still returned successfully (thanks to the finally block).
I can't seem to understand why this is happening. Is this an outdated method? What am I doing wrong?
Thank you in advance.
using is closing the stream. Try:
string body = new StreamReader(Request.Body).ReadToEnd();
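For completeness, the pattern that usually works in this situation is to swap in the MemoryStream before the rest of the pipeline writes the response, let it write, then copy the buffer back to the original stream. A rough sketch as ASP.NET Core middleware (the class name is illustrative, not the poster's exact filter):

public class ResponseCaptureMiddleware
{
    private readonly RequestDelegate _next;

    public ResponseCaptureMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        Stream originalBody = context.Response.Body;
        using (var memStream = new MemoryStream())
        {
            context.Response.Body = memStream;
            try
            {
                await _next(context); // the rest of the pipeline writes into memStream

                memStream.Position = 0;
                string responseBody = await new StreamReader(memStream).ReadToEndAsync();
                // inspect or filter responseBody here

                memStream.Position = 0;
                await memStream.CopyToAsync(originalBody); // forward the buffered response to the client
            }
            finally
            {
                context.Response.Body = originalBody;
            }
        }
    }
}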

What is the correct way to post and save stream response to file using Flurl

I am trying to implement an asynchronous file POST and read the response directly into a file using Flurl. The code below works fine, but I'm not sure about writing the stream to a file using c.Result.CopyTo versus c.Result.CopyToAsync. Which method is correct?
var result = new Url(url)
    .WithHeader("Content-Type", "application/octet-stream")
    .PostAsync(new FileContent(Conversion.SourceFile.FileInfo.ToString()))
    .ReceiveStream().ContinueWith(c =>
    {
        using (var fileStream = File.Open(DestinationLocation + @"\result." + model.DestinationFileFormat, FileMode.Create))
        {
            c.Result.CopyTo(fileStream);
            //c.Result.CopyToAsync(fileStream);
        }
    });
if (!result.Wait(model.Timeout * 1000))
    throw new ApiException(ResponseMessageType.TimeOut);
You can certainly use CopyToAsync here, but it's cleaner if you avoid ContinueWith, which generally isn't nearly as useful since async/await were introduced. It also makes disposing the HTTP stream cleaner. I'd go with something like this:
var request = url.WithHeader("Content-Type", "application/octet-stream");
var content = new FileContent(Conversion.SourceFile.FileInfo.ToString());
using (var httpStream = await request.PostAsync(content).ReceiveStream())
using (var fileStream = new FileStream(path, FileMode.CreateNew))
{
    await httpStream.CopyToAsync(fileStream);
}
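If you still need the timeout from the original code, Flurl can attach it to the request itself instead of calling result.Wait; a small sketch, assuming Flurl.Http's WithTimeout overload that takes seconds:

var request = url
    .WithHeader("Content-Type", "application/octet-stream")
    .WithTimeout(model.Timeout); // seconds; a timeout exception is thrown if it expires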

Using Json.Net, how can I stream a lot of text to a single json property?

I need to construct a JObject that has a single property that could potentially contain a very large amount of text. I have this text being read from a stream, but I can't figure out how to write it to a single JToken.
Here's what I've tried so far:
using (var stream = new MemoryStream())
{
    using (var streamWriter = new StreamWriter(stream))
    {
        // write a lot of random text to the stream
        var docSize = 1024 * 1024;
        var rnd = new Random();
        for (int i = 0; i < docSize; i++)
        {
            var c = (char)rnd.Next('A', 'Z');
            streamWriter.Write(c);
        }
        streamWriter.Flush();
        stream.Seek(0, SeekOrigin.Begin);
        // read from the stream and write a token
        using (var streamReader = new StreamReader(stream))
        using (var jTokenWriter = new JTokenWriter())
        {
            const int blockSize = 1024;
            var buffer = new char[blockSize];
            while (!streamReader.EndOfStream)
            {
                var charsRead = streamReader.Read(buffer, 0, blockSize);
                var str = new string(buffer, 0, charsRead);
                jTokenWriter.WriteValue(str);
            }
            // add the token to an object
            var doc = new JObject();
            doc.Add("Text", jTokenWriter.Token);
            // spit out the json for debugging
            var json = doc.ToString(Formatting.Indented);
            Debug.WriteLine(json);
        }
    }
}
This is just a proof of concept. Of course, in reality, I will be getting the stream from elsewhere (a filestream, for example). The data could potentially be very large - hundreds of megabytes. So just working with strings is out of the question.
This example doesn't work. Only the last block read is left in the token. How can I write a value to the token and have it append to what was previously written instead of replacing it?
Is there a more efficient way to do this?
To clarify - the text being written is not already in json format. It is closer to human readable text. It will need to go through the same escaping and formatting that would occur if you wrote a plain string value.
After much research, I believe that the answer is "It can't be done".
Really, I think a single JValue of a very large string is something to avoid. I instead broke it up into smaller values stored in a JArray.
If I am wrong, please post a better answer. Thanks.
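For reference, the chunked JArray workaround described above might look roughly like this; a minimal sketch assuming Json.NET and an arbitrary 1 KB chunk size:

var chunks = new JArray();
using (var streamReader = new StreamReader(stream))
{
    const int blockSize = 1024;
    var buffer = new char[blockSize];
    int charsRead;
    while ((charsRead = streamReader.Read(buffer, 0, blockSize)) > 0)
    {
        // each chunk becomes its own string value in the array
        chunks.Add(new string(buffer, 0, charsRead));
    }
}
var doc = new JObject();
doc.Add("Text", chunks);

This avoids building one enormous string value, although the whole document still ends up in memory as a JObject.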

HttpWebClient has High Memory Use in MonoTouch

I have a MonoTouch based iOS universal app. It uses REST services to make calls to get data. I'm using the HttpWebRequest class to build and make my calls. Everything works great, with the exception that it seems to be holding onto memory. I've got usings all over the code to limit the scope of things. I've avoided anonymous delegates as well as I had heard they can be a problem. I have a helper class that builds up my call to my REST service. As I make calls it seems to just hold onto memory from making my calls. I'm curious if anyone has run into similar issues with the HttpWebClient and what to do about it. I'm currently looking to see if I can make a call using an nsMutableRequest and just avoid the HttpWebClient, but am struggling with getting it to work with NTLM authentication. Any advice is appreciated.
protected T IntegrationCall<T,I>(string methodName, I input)
{
    HttpWebRequest invokeRequest = BuildWebRequest<I>(GetMethodURL(methodName), "POST", input, true);
    WebResponse response = invokeRequest.GetResponse();
    T result = DeserializeResponseObject<T>((HttpWebResponse)response);
    invokeRequest = null;
    response = null;
    return result;
}
protected HttpWebRequest BuildWebRequest<T>(string url, string method, T requestObject, bool IncludeCredentials)
{
    ServicePointManager.ServerCertificateValidationCallback = Validator;
    var invokeRequest = WebRequest.Create(url) as HttpWebRequest;
    if (invokeRequest == null)
        return null;
    if (IncludeCredentials)
    {
        invokeRequest.Credentials = CommonData.IntegrationCredentials;
    }
    if (!string.IsNullOrEmpty(method))
        invokeRequest.Method = method;
    else
        invokeRequest.Method = "POST";
    invokeRequest.ContentType = "text/xml";
    invokeRequest.Timeout = 40000;
    using (Stream requestObjectStream = new MemoryStream())
    {
        DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
        serializedObject.WriteObject(requestObjectStream, requestObject);
        requestObjectStream.Position = 0;
        using (StreamReader reader = new StreamReader(requestObjectStream))
        {
            string strTempRequestObject = reader.ReadToEnd();
            //byte[] requestBodyBytes = Encoding.UTF8.GetBytes(strTempRequestObject);
            Encoding enc = new UTF8Encoding(false);
            byte[] requestBodyBytes = enc.GetBytes(strTempRequestObject);
            invokeRequest.ContentLength = requestBodyBytes.Length;
            using (Stream postStream = invokeRequest.GetRequestStream())
            {
                postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
            }
        }
    }
    return invokeRequest;
}
Using using is the right thing to do - but your code seems to be duplicating the same content multiple times (which it should not do).
requestObjectStream is turned into a string, which is then turned into a byte[], before being written to another stream. And that's without considering what the extra code (e.g. ReadToEnd and UTF8Encoding.GetBytes) might allocate itself (e.g. more strings, byte[]...).
So if what you serialize is large then you'll consume a lot of extra memory (for nothing). It's even a bit worse for string and byte[] since you can't dispose them manually (the GC will decide when, making measurement harder).
I would try (but did not ;-) something like:
...
using (Stream requestObjectStream = new MemoryStream())
{
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    requestObjectStream.Position = 0;
    invokeRequest.ContentLength = requestObjectStream.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        requestObjectStream.CopyTo(postStream);
}
...
That would let the MemoryStream copy itself to the request stream. An alternative is to call ToArray on the MemoryStream (but that's another copy of the serialized object that the GC will have to track and free).
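For comparison, the ToArray alternative mentioned above would look roughly like this (a sketch only; it reintroduces one extra copy of the serialized data):

byte[] requestBodyBytes = ((MemoryStream)requestObjectStream).ToArray(); // extra byte[] copy the GC must collect
invokeRequest.ContentLength = requestBodyBytes.Length;
using (Stream postStream = invokeRequest.GetRequestStream())
    postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);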

Reading data from a website using C#

I have a webpage which has nothing on it except some string(s). No images, no background color or anything, just some plain text which is not really that long.
I am just wondering, what is the best (by that, I mean fastest and most efficient) way to retrieve the string from the webpage so that I can use it for something else (e.g. display it in a text box)? I know of WebClient, but I'm not sure it'll do what I want, and I don't even want to try it out even if it did work, because the last time I did, a simple operation took approximately 30 seconds.
Any ideas would be appreciated.
The WebClient class should be more than capable of handling the functionality you describe, for example:
System.Net.WebClient wc = new System.Net.WebClient();
byte[] raw = wc.DownloadData("http://www.yoursite.com/resource/file.htm");
string webData = System.Text.Encoding.UTF8.GetString(raw);
or (further to suggestion from Fredrick in comments)
System.Net.WebClient wc = new System.Net.WebClient();
string webData = wc.DownloadString("http://www.yoursite.com/resource/file.htm");
When you say it took 30 seconds, can you expand on that a little more? There are many reasons why that could have happened: slow servers, internet connections, a dodgy implementation, etc.
You could go a level lower and implement something like this:
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("http://www.yoursite.com/resource/file.htm");
using (StreamWriter streamWriter = new StreamWriter(webRequest.GetRequestStream(), Encoding.UTF8))
{
    streamWriter.Write(requestData); // requestData is whatever body you need to send, if any
}
string responseData = string.Empty;
HttpWebResponse httpResponse = (HttpWebResponse)webRequest.GetResponse();
using (StreamReader responseReader = new StreamReader(httpResponse.GetResponseStream()))
{
    responseData = responseReader.ReadToEnd();
}
However, at the end of the day the WebClient class wraps up this functionality for you. So I would suggest that you use WebClient and investigate the causes of the 30 second delay.
If you're downloading text then I'd recommend using the WebClient and getting a StreamReader over the text:
WebClient web = new WebClient();
System.IO.Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt");
using (System.IO.StreamReader reader = new System.IO.StreamReader(stream))
{
    String text = reader.ReadToEnd();
}
If this is taking a long time then it is probably a network issue or a problem on the web server. Try opening the resource in a browser and see how long that takes.
If the webpage is very large, you may want to look at streaming it in chunks rather than reading all the way to the end as in that example.
Look at http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx to see how to read from a stream.
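For example, reading the response in chunks might look roughly like this; a minimal sketch, assuming you process each chunk as it arrives rather than accumulating the whole document in memory:

WebClient web = new WebClient();
using (Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt"))
using (StreamReader reader = new StreamReader(stream))
{
    char[] buffer = new char[4096];
    int charsRead;
    // Read() returns up to buffer.Length characters; 0 means end of stream
    while ((charsRead = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        string chunk = new string(buffer, 0, charsRead);
        // process the chunk here
    }
}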
Regarding the suggestion
"So I would suggest that you use WebClient and investigate the causes of the 30 second delay."
From the answers to the question "System.Net.WebClient unreasonably slow": try setting Proxy = null;
WebClient wc = new WebClient();
wc.Proxy = null;
Credit to Alex Burtsev
If you use the WebClient to read the contents of the page, it will include HTML tags.
string webURL = "https://yoursite.com";
WebClient wc = new WebClient();
wc.Headers.Add("user-agent", "Only a Header!");
byte[] rawByteArray = wc.DownloadData(webURL);
string webContent = Encoding.UTF8.GetString(rawByteArray);
After getting the content, the HTML tags should be removed. Regex can be used for this:
var result = Regex.Replace(webContent, "<.*?>", String.Empty);
But this method is not very accurate; a better way is to install HtmlAgilityPack and use the following code:
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(webContent); // webContent is the string downloaded above
string result = doc.DocumentNode.InnerText;
You say it takes 30 seconds, but that has nothing to do with using WebClient (the main factors are the internet connection or a proxy). WebClient has worked very well for me. Example:
WebClient client = new WebClient();
using (Stream data = client.OpenRead(Text))
{
    using (StreamReader reader = new StreamReader(data))
    {
        string content = reader.ReadToEnd();
        string pattern = @"((https?|ftp|gopher|telnet|file|notes|ms-help):((//)|(\\\\))+[\w\d:##%/;$()~_?\+-=\\\.&]*)";
        MatchCollection matches = Regex.Matches(content, pattern);
        List<string> urls = new List<string>();
        foreach (Match match in matches)
        {
            urls.Add(match.Value);
        }
    }
}

XmlDocument document = new XmlDocument();
document.Load("www.yourwebsite.com");
string allText = document.InnerText;
