I'm using .NET Core 3.1 to create a RESTful API.
Here I'm trying to modify the response body to filter out some values based on a corporate use case that I have.
My problem is that, at first, I figured out that the CanRead value of context.HttpContext.Response.Body is false, so the stream is unreadable. I searched around and found this question and its answers, which basically convert a stream that can't seek into one that can, so I applied the answer with a little modification to fit my use case:
Stream originalBody = context.HttpContext.Response.Body;
try
{
    using (var memStream = new MemoryStream())
    {
        context.HttpContext.Response.Body = memStream;

        memStream.Position = 0;
        string responseBody = new StreamReader(memStream).ReadToEnd();

        memStream.Position = 0;
        memStream.CopyTo(originalBody);
        string response_body = new StreamReader(originalBody).ReadToEnd();

        PagedResponse<List<UserPhoneNumber>> deserialized_body;
        deserialized_body = JsonConvert.DeserializeObject<PagedResponse<List<UserPhoneNumber>>>(response_body);

        // rest of code logic
    }
}
finally
{
    context.HttpContext.Response.Body = originalBody;
}
But when debugging, I found out that memStream.Length is always 0, and therefore the originalBody value is an empty string: "".
Even so, after this is executed the response is returned successfully (thanks to the finally block).
I can't seem to understand why this is happening. Is this an outdated approach? What am I doing wrong?
Thank you in advance.
The using block is closing the stream. Try:
string body = new StreamReader(Request.Body).ReadToEnd();
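For completeness, here is a minimal sketch of the buffering pattern commonly used for this in ASP.NET Core 3.1. This assumes an IMiddleware-based middleware rather than the asker's filter, and the class name is illustrative; the key point is that the replacement MemoryStream has to be in place before the rest of the pipeline writes the response, and is only read back afterwards.
using Microsoft.AspNetCore.Http;
using System.IO;
using System.Threading.Tasks;

public class ResponseBufferingMiddleware : IMiddleware
{
    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        Stream originalBody = context.Response.Body;
        using (var memStream = new MemoryStream())
        {
            context.Response.Body = memStream;
            try
            {
                // let the rest of the pipeline write the response into memStream first
                await next(context);

                memStream.Position = 0;
                string responseBody = await new StreamReader(memStream).ReadToEndAsync();
                // ... inspect/deserialize responseBody here ...

                // copy the buffered body back to the real response stream
                memStream.Position = 0;
                await memStream.CopyToAsync(originalBody);
            }
            finally
            {
                context.Response.Body = originalBody;
            }
        }
    }
}
It would be registered with services.AddTransient<ResponseBufferingMiddleware>() and app.UseMiddleware<ResponseBufferingMiddleware>().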
I am using BotFramework to draw strings on a PNG image. Then I convert it to a base64 string from a byte array in memory. I post the base64 string to a service and get the correct response. Everything works fine, but I get the "Sorry, my bot code is having an issue" message after the process.
case 5:
{
    try
    {
        ...
        graphics.DrawString(text, fonti, brush, drawRect, stringFormat);
        using (MemoryStream m = new MemoryStream())
        {
            image.Save(m, image.RawFormat);
            // I've tried changing this line to String or StringBuilder but
            // nothing changed
            IMAGE = $"data:image/png;base64,{Convert.ToBase64String(m.ToArray())}";
            m.Close();
        }
    }
    catch { await context.PostAsync("ERR1"); }

    string json = null;
    try
    {
        string FormStuff = string.Format($"somecontent");
        StringContent content = new StringContent(
            FormStuff,
            Encoding.UTF8,
            "application/x-www-form-urlencoded");
        string url = string.Format("http://www.example.com/");
        var response = await client.PostAsync(url, content);
        json = (await response.Content.ReadAsStringAsync()).ToString();
    }
    catch { await context.PostAsync("ERR2"); }
    ...
}
break;
The IMAGE variable is a string.
Whenever I remove or change Convert.ToBase64String(), the problem goes away, but then I can't use the service the way I want and the process is broken.
The only problem here is the exception thrown and shown to the end user:
Sorry, my bot code is having an issue
EDIT: I found out that if the content in my post request is too long, I get the error message. I've tried using FormUrlEncodedContent but it throws this:
Invalid URI: The Uri string is too long.
How could I post it in another way?
I do not know what service you are calling and do not have all the values for your parameters to test this with, but this is too long for a comment.
Instead of .ToArray(), try something like:
var image = Convert.ToBase64String(File.ReadAllBytes("FileName"));
You may also want to try setting this to a variable outside of the string you are forming IMAGE with. Something like this has worked for me in the past, but again, I do not know what your service is doing or expecting.
var image = Convert.ToBase64String(File.ReadAllBytes("FileName"));
var image64 = "data:image/png;base64," + image;
I have a CSV file in memory that I want to upload to a Web API.
If I save the CSV file to disk and upload it, it gets accepted.
However, I want to avoid the extra work and also make the code cleaner by simply uploading the text I have as a MemoryStream Object (I think that's the correct format?).
The following code works for uploading the file:
string webServiceUrl = "XXX";
string filePath = @"C:\test.csv";
string cred = "YYY";
using (var client = new WebClient())
{
    client.Headers.Add("Authorization", "Basic " + cred);
    byte[] rawResponse = client.UploadFile(webServiceUrl, "POST", filePath);
    Console.WriteLine(System.Text.Encoding.ASCII.GetString(rawResponse));
}
How would I do it if I had a string with all the contents and wanted to upload it the same way, without having to save it to a file first?
WebClient.UploadData or WebClient.UploadString perhaps?
Thank you
EDIT:
I tried what you said, but using a local file (in case there was something wrong with the string), and I get the same error.
Here is what I suppose the code would look like using your solution:
string webServiceUrl = "XXX";
string file = @"C:\test.csv";
string cred = "YYY";
FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
BinaryReader r = new BinaryReader(fs);
byte[] postArray = r.ReadBytes((int)fs.Length);
using (var client = new WebClient())
{
    client.Headers.Add("Authorization", "Basic " + cred);
    using (var postStream = client.OpenWrite(webServiceUrl, "POST"))
    {
        postStream.Write(postArray, 0, postArray.Length);
    }
}
Any thoughts?
Use OpenWrite() from the WebClient.
using (var postStream = client.OpenWrite(endpointUrl))
{
    byte[] data = memStream.ToArray();
    postStream.Write(data, 0, data.Length);
}
As the documentation mentions:
The OpenWrite method returns a writable stream that is used to send data to a resource.
Update
Try to set the position of the MemoryStream to 0 before uploading.
memoryStream.Position = 0;
When you copy the file into the MemoryStream, the pointer is moved to the end of the stream, so when you then try to read it, you get nothing instead of your stream data.
MSDN - CopyTo()
Copying begins at the current position in the current stream, and does not reset the position of the destination stream after the copy operation is complete.
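As for the WebClient.UploadData / UploadString route the question mentions, a rough sketch (webServiceUrl and cred as in the original snippet; csvText stands in for the in-memory CSV and is an assumption here). Note that UploadFile sends a multipart/form-data request, so the service may expect that format rather than a raw body:
using (var client = new WebClient())
{
    client.Headers.Add("Authorization", "Basic " + cred);
    client.Headers.Add("Content-Type", "text/csv"); // adjust to whatever the service expects
    byte[] payload = Encoding.UTF8.GetBytes(csvText); // the CSV you already have in memory
    byte[] rawResponse = client.UploadData(webServiceUrl, "POST", payload);
    Console.WriteLine(Encoding.ASCII.GetString(rawResponse));
}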
I finally managed to solve it.
First I made a request using CURL that worked.
I analyzed the packet data and made an exact copy of the packet.
I made a lot of changes, but the decisive one was this: none of the functions I found online closed the body with a "last boundary", while cURL did.
So by modifying the function to make sure it properly wrote a last boundary, it finally worked.
Also, another crucial thing was to set PreAuthenticate to true; the examples online didn't do that.
So, all in all:
1. Make sure that the packet is properly constructed.
2. Make sure you pre-authenticate if you need to authenticate.
webrequest.PreAuthenticate = true;
webrequest.Headers[HttpRequestHeader.Authorization] = string.Format("Basic {0}", cred);
Don't forget to set the security protocol if you're using HTTPS (which you probably are if you authenticate):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Ssl3;
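For point 1, here is a rough sketch of what writing the closing ("last") boundary by hand looks like; the boundary value, field name, file name, and csvText variable are illustrative, not taken from the original code:
string boundary = "----MyBoundary";
webrequest.ContentType = "multipart/form-data; boundary=" + boundary;
using (var requestStream = webrequest.GetRequestStream())
using (var writer = new StreamWriter(requestStream))
{
    // each part: leading boundary, part headers, blank line, then the content
    writer.Write("--" + boundary + "\r\n");
    writer.Write("Content-Disposition: form-data; name=\"file\"; filename=\"test.csv\"\r\n");
    writer.Write("Content-Type: text/csv\r\n\r\n");
    writer.Write(csvText);
    // the piece cURL sent but the helpers I found were missing: the terminating boundary
    writer.Write("\r\n--" + boundary + "--\r\n");
}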
Hope this helps someone.
And thanks for the help earlier!
I am trying to re-upload a stream that I just retrieved. It shouldn't really matter that I am using AWS, I believe... Maybe my understanding of working with streams is just too limited? :-)
I am using the following method straight from the AWS documentation to download and upload streams:
File upload:
public bool UploadFile(string keyName, Stream stream)
{
    using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
    {
        try
        {
            TransferUtility fileTransferUtility = new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));
            fileTransferUtility.Upload(stream, bucketName, keyName);
            return true;
        }
        catch (AmazonS3Exception amazonS3Exception)
        {
            [...]
        }
    }
}
Getting the file:
public Stream GetFile(string keyName)
{
    using (client = new AmazonS3Client(Amazon.RegionEndpoint.USEast2))
    {
        try
        {
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = bucketName,
                Key = keyName
            };
            GetObjectResponse response = client.GetObject(request);
            responseStream = response.ResponseStream;
            return responseStream;
        }
        catch (AmazonS3Exception amazonS3Exception)
        {
            [...]
        }
    }
}
Now I am trying to combine the two methods: I get a stream and immediately want to upload it again. However, I am getting the following error message: System.NotSupportedException: HashStream does not support seeking.
I am guessing it has something to do with the fact that the stream I get back is somehow not immediately ready to be uploaded again?
This is how I am trying to get the stream of an existing file (.jpg) and immediately try to upload it with a different filename:
newAWS.UploadFile(newFileName, oldAWS.GetFile(oldFile));
Where newAWS and oldAWS are instances of the AWS class, and newFileName is a string :-)
This is the line I am getting the error message pasted above.
Please let me know if I am missing something obvious here why I would not be able to re-upload a fetched stream. Or could it be that it is related to something else I am not aware of and I am on the wrong track trying to troubleshoot the returned stream?
What I am basically trying to do is to copy one file from an AWS bucket to another using streams. But for some reason I get the error message trying to upload the stream that I just downloaded.
Thank you very much for taking your time digging through my code :-)
As the error indicates, the HashStream object that Amazon returns does not support seeking.
Something in your code, or a method you're calling, is trying to seek on that stream. Most likely it's trying to seek to the beginning of the stream.
So you need to copy the HashStream into a different stream that supports seeking:
using (GetObjectResponse response = client.GetObject(request))
using (Stream responseStream = response.ResponseStream)
using (MemoryStream memStream = new MemoryStream())
{
    responseStream.CopyTo(memStream);
    memStream.Seek(0, SeekOrigin.Begin);
    // now use memStream wherever you were previously using responseStream
}
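Applied to the helper methods from the question, the copy could then look something like this (a sketch reusing the names from the question):
using (Stream responseStream = oldAWS.GetFile(oldFile))
using (var memStream = new MemoryStream())
{
    responseStream.CopyTo(memStream); // buffer the non-seekable HashStream in memory
    memStream.Seek(0, SeekOrigin.Begin); // rewind so the upload can read (and hash) it from the start
    newAWS.UploadFile(newFileName, memStream);
}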
It looks like my approach won't work, and I am not really sure why.
The good news is that you can simply use the following code to copy from one AWS bucket to another:
static IAmazonS3 client;

client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

CopyObjectRequest request = new CopyObjectRequest()
{
    SourceBucket = bucketName,
    SourceKey = objectKey,
    DestinationBucket = bucketName,
    DestinationKey = destObjectKey
};
CopyObjectResponse response = client.CopyObject(request);
So I didn't have to recreate this functionality using the upload and download methods I wrote; I could simply use the copy method provided.
HOWEVER, I would still like to know why my approach was failing. I really think I am missing something regarding streams. Any feedback would be greatly appreciated :-)
I will leave this question as unanswered since I would really like to know what I am missing regarding streams. I would be happy to mark an answer as accepted that explains my missing knowledge :-)
I need to upload a file using a Stream (Azure Blob Storage), and I just cannot find out how to get the stream from the object itself. See the code below.
I'm new to Web API and have used some examples. I'm getting the files and file data, but it's not the correct type for my methods to upload it. Therefore, I need to get or convert it into a normal Stream, which seems a bit hard at the moment :)
I know I need to use ReadAsStreamAsync().Result in some way, but it crashes in the foreach loop since I'm getting two provider.Contents (first one seems right, second one does not).
[System.Web.Http.HttpPost]
public async Task<HttpResponseMessage> Upload()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        this.Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);
    }

    var provider = GetMultipartProvider();
    var result = await Request.Content.ReadAsMultipartAsync(provider);

    // On upload, files are given a generic name like "BodyPart_26d6abe1-3ae1-416a-9429-b35f15e6e5d5"
    // so this is how you can get the original file name
    var originalFileName = GetDeserializedFileName(result.FileData.First());

    // uploadedFileInfo object will give you some additional stuff like file length,
    // creation time, directory name, a few filesystem methods etc..
    var uploadedFileInfo = new FileInfo(result.FileData.First().LocalFileName);

    // Remove this line as well as GetFormData method if you're not
    // sending any form data with your upload request
    var fileUploadObj = GetFormData<UploadDataModel>(result);

    Stream filestream = null;
    using (Stream stream = new MemoryStream())
    {
        foreach (HttpContent content in provider.Contents)
        {
            BinaryFormatter bFormatter = new BinaryFormatter();
            bFormatter.Serialize(stream, content.ReadAsStreamAsync().Result);
            stream.Position = 0;
            filestream = stream;
        }
    }

    var storage = new StorageServices();
    storage.UploadBlob(filestream, originalFileName);
private MultipartFormDataStreamProvider GetMultipartProvider()
{
    var uploadFolder = "~/App_Data/Tmp/FileUploads"; // you could put this to web.config
    var root = HttpContext.Current.Server.MapPath(uploadFolder);
    Directory.CreateDirectory(root);
    return new MultipartFormDataStreamProvider(root);
}
This is identical to a dilemma I had a few months ago (capturing the upload stream before the MultipartStreamProvider took over and auto-magically saved the stream to a file). The recommendation was to inherit that class and override the methods ... but that didn't work in my case. :( (I wanted the functionality of both the MultipartFileStreamProvider and MultipartFormDataStreamProvider rolled into one MultipartStreamProvider, without the autosave part).
This might help; here's one written by one of the Web API developers, and this from the same developer.
Hi, I just wanted to post my answer so that anybody who encounters the same issue can find a solution here.
MultipartMemoryStreamProvider stream = await this.Request.Content.ReadAsMultipartAsync();
foreach (var st in stream.Contents)
{
    var fileBytes = await st.ReadAsByteArrayAsync();
    string base64 = Convert.ToBase64String(fileBytes);
    var contentHeader = st.Headers;
    string filename = contentHeader.ContentDisposition.FileName.Replace("\"", "");
    string filetype = contentHeader.ContentType.MediaType;
}
I used MultipartMemoryStreamProvider and got all the details, like the file name and file type, from the headers of each content part.
Hope this helps someone.
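If what you ultimately need is a plain, seekable Stream for the blob upload (as in the question's UploadBlob call), wrapping the bytes in a MemoryStream should do it. A sketch reusing the names above; StorageServices and UploadBlob come from the question and are assumptions here:
using (var ms = new MemoryStream(fileBytes))
{
    var storage = new StorageServices();
    storage.UploadBlob(ms, filename); // seekable in-memory stream built from the multipart content
}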
I have a MonoTouch based iOS universal app. It uses REST services to make calls to get data. I'm using the HttpWebRequest class to build and make my calls. Everything works great, except that it seems to be holding onto memory. I've got usings all over the code to limit the scope of things. I've avoided anonymous delegates as well, as I had heard they can be a problem. I have a helper class that builds up my call to my REST service. As I make calls it seems to just hold onto memory. I'm curious whether anyone has run into similar issues with HttpWebRequest and what to do about it. I'm currently looking at whether I can make a call using an NSMutableUrlRequest and just avoid HttpWebRequest, but am struggling to get it to work with NTLM authentication. Any advice is appreciated.
protected T IntegrationCall<T, I>(string methodName, I input)
{
    HttpWebRequest invokeRequest = BuildWebRequest<I>(GetMethodURL(methodName), "POST", input, true);
    WebResponse response = invokeRequest.GetResponse();
    T result = DeserializeResponseObject<T>((HttpWebResponse)response);
    invokeRequest = null;
    response = null;
    return result;
}
protected HttpWebRequest BuildWebRequest<T>(string url, string method, T requestObject, bool IncludeCredentials)
{
    ServicePointManager.ServerCertificateValidationCallback = Validator;
    var invokeRequest = WebRequest.Create(url) as HttpWebRequest;
    if (invokeRequest == null)
        return null;

    if (IncludeCredentials)
    {
        invokeRequest.Credentials = CommonData.IntegrationCredentials;
    }

    if (!string.IsNullOrEmpty(method))
        invokeRequest.Method = method;
    else
        invokeRequest.Method = "POST";

    invokeRequest.ContentType = "text/xml";
    invokeRequest.Timeout = 40000;

    using (Stream requestObjectStream = new MemoryStream())
    {
        DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
        serializedObject.WriteObject(requestObjectStream, requestObject);
        requestObjectStream.Position = 0;

        using (StreamReader reader = new StreamReader(requestObjectStream))
        {
            string strTempRequestObject = reader.ReadToEnd();
            //byte[] requestBodyBytes = Encoding.UTF8.GetBytes(strTempRequestObject);
            Encoding enc = new UTF8Encoding(false);
            byte[] requestBodyBytes = enc.GetBytes(strTempRequestObject);
            invokeRequest.ContentLength = requestBodyBytes.Length;

            using (Stream postStream = invokeRequest.GetRequestStream())
            {
                postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
            }
        }
    }

    return invokeRequest;
}
Using using is the right thing to do, but your code seems to be duplicating the same content multiple times (which it should not).
requestObjectStream is turned into a string, which is then turned into a byte[] before being written to another stream. And that's without considering what the extra code (e.g. ReadToEnd and UTF8Encoding.GetBytes) might allocate itself (e.g. more strings, byte[]...).
So if what you serialize is large, you'll consume a lot of extra memory for nothing. It's even a bit worse for string and byte[], since you can't dispose them manually (the GC will decide when, making measurement harder).
I would try (but did not ;-) something like:
...
using (Stream requestObjectStream = new MemoryStream ()) {
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    requestObjectStream.Position = 0;
    invokeRequest.ContentLength = requestObjectStream.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        requestObjectStream.CopyTo (postStream);
}
...
That would let the MemoryStream copy itself to the request stream. An alternative is to call ToArray on the MemoryStream (but that's another copy of the serialized object that the GC will have to track and free).
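For reference, the ToArray variant mentioned above would look roughly like this (it keeps one extra byte[] copy alive for the GC to collect):
using (var requestObjectStream = new MemoryStream())
{
    var serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    byte[] requestBodyBytes = requestObjectStream.ToArray();
    invokeRequest.ContentLength = requestBodyBytes.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
}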