I'm using RestSharp to build REST access to my MVC entry points (actually so I can use them from MonoTouch, but right now I'm testing on Windows 7, VS2010, .NET 4, RestSharp 104.1).
If I create a request and call
client.ExecuteAsPost<Model.Client>( request );
it works; in Fiddler I can see the raw packet:
POST http://localhost.:49165/Services/Client/ClientAdminService/FindClient HTTP/1.1
Timestamp: Monday, March 18, 2013 1:56:02 AM
X-PS-Authentication: YADAYADA:<deleted for brevity>==
Accept: application/xml
User-Agent: RestSharp 104.1.0.0
Content-Type: application/xml; charset=utf-8
Host: localhost.:49165
Content-Length: 256
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
<Client xmlns="http://schemas.datacontract.org/2004/07/PSRMWebService.Model.Version1" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><ID>0</ID><MailingAddress i:nil="true"/><Mask>Name</Mask><Name>Rykercom</Name><PhysicalAddress i:nil="true"/></Client>
As you can see, at the end is the serialized data blob I need to send to the server (of type Model.Client). This is added to the request using
Request.AddParameter("application/xml; charset=utf-8", DataPacket, RestSharp.ParameterType.RequestBody);
where DataPacket is the serialized blob created using a DataContractSerializer
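For reference, a minimal sketch of how such a body can be produced and attached (the BodyBuilder helper is an illustrative name, not the actual project code):
// Sketch only: serialize the model with DataContractSerializer and attach the
// resulting XML string as the raw request body, matching the capture above.
using System.IO;
using System.Runtime.Serialization;
using System.Text;
using RestSharp;

static class BodyBuilder
{
    public static string Serialize(Model.Client client)
    {
        var serializer = new DataContractSerializer(typeof(Model.Client));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, client);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}

// Usage, mirroring the AddParameter call above:
// string dataPacket = BodyBuilder.Serialize(clientModel);
// request.AddParameter("application/xml; charset=utf-8", dataPacket, ParameterType.RequestBody);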
Now if I change the code to call
Client.ExecuteAsyncPost<Model.Client>(Request, (response, handle) => { OnFindClientAsyncComplete(response, handle, Callback ); }, "POST");
Using Fiddler I see quite a different packet, with no body and no Content-Type, and therefore a failed response from the server:
POST http://localhost.:49165/Services/Client/ClientAdminService/FindClient HTTP/1.1
Timestamp: Monday, March 18, 2013 2:35:08 AM
X-PS-Authentication: YADAYADA:<deleted for brevity>==
Accept: application/xml
User-Agent: RestSharp 104.1.0.0
Host: localhost.:49165
Content-Length: 0
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
The X-PS-Authentication header is just a custom auth string. Does anyone have any idea why the async call is leaving me with an empty message body?
The simple answer is to clone the current GitHub repository for RestSharp and build it yourself. It appears the fix for ExecuteAsync is already in the tree.
Any chance whoever owns the NuGet package can update it to the latest sources?
Thanks
From Postman I'm trying to post a file to an ASP.NET Core 2.2 API controller.
POST /api/epd/noshow/uploadfile HTTP/1.1
Host: localhost:44361
User-Agent: PostmanRuntime/7.11.0
Accept: */*
Cache-Control: no-cache
Postman-Token: 3d633228-2554-442b-84f4-4e8214972886,a8f994f4-973e-411f-b96b-778af2c082b4
Host: localhost:44361
accept-encoding: gzip, deflate
content-type: multipart/form-data; boundary=--------------------------201375848683546790642901
content-length: 237
Connection: keep-alive
cache-control: no-cache
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="C:\temp\caldr.json
------WebKitFormBoundary7MA4YWxkTrZu0gW--
The API controller method:
[HttpPost]
[AllowAnonymous]
public IActionResult UploadFile([FromForm]IFormFile file)
{
    return Ok();
}
I've already tried FromForm and FromBody; nothing works. Setting the content type in Postman or leaving it unset always returns the same error.
Any suggestions?
That is simpler than expected :) You need to send the request body as form data. This example shows a simple file post call with ASP.NET Core 2.2 Web API.
Postman:
API:
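The screenshots don't come through here, so this is a rough sketch of the shape that answer describes, with illustrative route and class names (in Postman, pick Body -> form-data, add a key named "file" of type File, and let Postman generate the multipart boundary itself):
// Sketch only: a minimal ASP.NET Core 2.2 action accepting a file posted as
// multipart/form-data. Controller name, route and save path are illustrative.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class UploadController : ControllerBase
{
    [HttpPost("uploadfile")]
    [AllowAnonymous]
    public async Task<IActionResult> UploadFile([FromForm] IFormFile file)
    {
        if (file == null || file.Length == 0)
            return BadRequest("No file received.");

        // Save the upload to a temporary location (path is illustrative).
        var target = Path.Combine(Path.GetTempPath(), file.FileName);
        using (var stream = System.IO.File.Create(target))
        {
            await file.CopyToAsync(stream);
        }

        return Ok(new { file.FileName, file.Length });
    }
}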
Found the problem and solution. Just for reference, in case someone else encounters the same mind-boggling problem.
On the 'grandmother' (base) class of the controller, there is the following attribute:
[Consumes("application/json")]
Of course form-data was not accepted :-)
Override this default behaviour by adding the following attribute to the action:
[Consumes("multipart/form-data")]
I'm successfully using the Graph API for a variety of things, but I need access to the OneNote API to perform student and teacher add/remove operations on Class Notebooks. When I request a token the same way that I do for Graph, with the https://www.onenote.com resource, it provides one, but when I try to use it to access the OneNote API, no matter what (valid) request I send I get 401 - "The request does not contain a valid authentication token."
I've tried using the v1.0 endpoint to generate a token instead with the same results.
My token requests:
POST https://login.microsoftonline.com/{my tenant}/oauth2/v2.0/token HTTP/1.1
Accept: application/json
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 213
Expect: 100-continue
Connection: Keep-Alive
grant_type=client_credentials&client_id={my appid}&client_secret={my secret}&tenant={my tenant}&scope=https%3A%2F%2Fwww.onenote.com%2F.default
OR
POST https://login.microsoftonline.com/{my tenant}/oauth2/token HTTP/1.1
Accept: application/json
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 161
Expect: 100-continue
grant_type=client_credentials&client_id={my appid}&client_secret={my secret}&resource=https%3A%2F%2Fwww.onenote.com
Both return something containing an access_token, like:
{"token_type":"Bearer","expires_in":"3600","ext_expires_in":"3600","expires_on":"1543513719","not_before":"1543509819","resource":"https://www.onenote.com","access_token":"{a token}"}
Request:
GET https://www.onenote.com/api/v1.0/myorganization/groups/{group id}/notes/ HTTP/1.1
ContentType: application/json
Authorization: Bearer {token returned from /token request}
Cache-Control: no-store, no-cache
Host: www.onenote.com
Response:
code=40001
message=The request does not contain a valid authentication token.
For reference, this question is basically a follow-up to: Adding Students with the API and Class Notebook
You're on the right track.
Resource is the right way with the 1.0 auth endpoint.
Scopes need to be registered on the app portal, so you'll need to go back in and add OneNote scopes in the portal.
I'm not 100% sure, but IIRC the resource for OneNote might require a trailing '/'.
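If it helps, here is a rough sketch of the v1.0 client-credentials request with the trailing-slash resource (placeholders as in the question; the trailing slash is exactly the part I'm unsure about, so treat it as something to try):
// Sketch: request an app-only token from the v1.0 endpoint with
// resource=https://www.onenote.com/ (note the trailing slash).
// tenant, appId and secret are placeholders, as in the question.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class TokenExample
{
    static async Task<string> GetOneNoteTokenAsync(string tenant, string appId, string secret)
    {
        using (var http = new HttpClient())
        {
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = appId,
                ["client_secret"] = secret,
                ["resource"] = "https://www.onenote.com/"
            });

            var response = await http.PostAsync(
                "https://login.microsoftonline.com/" + tenant + "/oauth2/token", body);
            response.EnsureSuccessStatusCode();

            // The JSON payload contains "access_token"; parse it with your JSON library of choice.
            return await response.Content.ReadAsStringAsync();
        }
    }
}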
I have a C# .NET application that is writing a zipped file back to the client for download. However, the browser does not receive the file or it rejects the file. The browser does not show any notifications. I have tried it both on Firefox and Chrome.
I have captured the request and response from the client and server using Fiddler:
Request:
POST http://localhost:62526/Reports/_Report_RewardLetters HTTP/1.1
Host: localhost:62526
Connection: keep-alive
Content-Length: 228
Accept: */*
Origin: http://localhost:62526
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.109 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
DNT: 1
Referer: http://localhost:62526/Reports
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Response:
HTTP/1.1 200 OK
Cache-Control: private
Transfer-Encoding: chunked
Content-Type: application/octet-stream
Server: Microsoft-IIS/10.0
content-dispostion: filename=Letter.zip
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcVXNlcnNcc2FpbmlfaFxTb3VyY2VcUmVwb3NcUmVzZWFyY2hPZmZpY2VEYXNoYm9hcmRcUmVzZWFyY2hPZmZpY2VEYXNoYm9hcmRcUmVwb3J0c1xfUmVwb3J0X1Jld2FyZExldHRlcnM=?=
X-Powered-By: ASP.NET
Date: Thu, 11 Feb 2016 23:13:22 GMT
....Truncated the file contents.....
My code:
Response.Clear();
Response.ClearHeaders();
Response.ClearContent();
Response.AddHeader("content-dispostion", "filename=MyFile.zip");
Response.ContentType = "application/octet-stream";
Response.Flush();
Response.WriteFile(myfile);
Response.Flush();
Response.End();
I have tried numerous combinations of Response.Flush(), Response.Clear(), HttpContext.ApplicationInstance.CompleteRequest(), Response.BinaryWrite(), Response.TransmitFile(), etc. but none seem to work. Additionally, I have in my code the necessary checks to determine the existence of the file.
From the Fiddler captures, I think there is something wrong with the encoding or with the server's response for the file being sent to the client, and the browser is rejecting the file without any notification.
Thanks for your help!
Just a thought: do you need the Response.Flush statements? Setting your headers, writing your file and then calling Response.End should be enough.
Also, set the ContentType to "application/zip, application/octet-stream"
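A sketch of what that would look like (untested; I've also spelled the header out as the standard Content-Disposition, which differs from the 'content-dispostion' spelling in the question):
// Sketch of the suggestion: set headers once, write the file, end the response.
Response.Clear();
Response.ClearHeaders();
Response.ClearContent();
Response.ContentType = "application/zip, application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=MyFile.zip");
Response.WriteFile(myfile);
Response.End();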
I had made a mistake that was causing all the fuss: the view was using an Ajax form to make the request instead of a normal HTML form.
I fixed the problem by changing my controller and view according to the information present here:
http://geekswithblogs.net/rgupta/archive/2014/06/23/downloading-file-using-ajax-and-jquery-after-submitting-form-data.aspx
Download Excel file via AJAX MVC
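For anyone who lands here, a rough sketch of the shape I ended up with (action, model and helper names are made up, not my exact code): the controller returns a FileResult, and the view posts with a regular Html.BeginForm (or a plain form) instead of Ajax.BeginForm, so the browser itself handles the download.
// Sketch only: a plain (non-Ajax) form post to this action lets the browser
// save the returned zip. BuildZip is a hypothetical helper.
[HttpPost]
public ActionResult RewardLetters(ReportRequest model)
{
    byte[] zipBytes = BuildZip(model);
    return File(zipBytes, "application/octet-stream", "Letter.zip");
}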
I created a web service and a console application to consume it. I am using Fiddler to see the traffic; I was hoping that both the request and the response would be SOAP, but that didn't happen - only the request was SOAP, not the response.
How can I force my service to respond using SOAP?
These are the raw HTTP response headers obtained:
HTTP/1.1 200 OK
Cache-Control: private, max-age=0
Content-Type: text/xml; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Mon, 17 Nov 2014 20:19:22 GMT
Content-Length: 311
The response body appears to be binary.
My web service was created by adding a Web Service item to an ASP.NET Empty Web Application project; I didn't modify it, and later published it to IIS.
My consumer is very simple too. I just added the service reference and created an instance:
var binding = new BasicHttpBinding();
var endpointAddress = new EndpointAddress("http://172.20.48.59/web-services/MyWebService.asmx");
var serviceRefWithoutConfig = new ServiceReference1.MyWebServiceSoapClient(binding, endpointAddress);
Console.WriteLine(serviceRefWithoutConfig.HelloWorld());
The HTTP response in question is compressed with GZIP.
What happens after you click the big yellow bar in Fiddler that says "Response is encoded and may require decoding before inspection. Click here to transform."?
If you are creating something new, I strongly suggest you look at using WebAPI for something like this.
It's much easier to control the output of your methods, and the resulting service will be easier to consume from mobile platforms, should the need arise in future.
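For example, a minimal Web API 2 controller along these lines (names are illustrative) returns JSON or XML via content negotiation, with no SOAP envelope involved, and is easy to call from a console app or a mobile client with HttpClient:
// Sketch only: a minimal ASP.NET Web API 2 controller. The output format is
// chosen by content negotiation (Accept header), not by SOAP.
using System.Web.Http;

public class HelloController : ApiController
{
    // GET api/hello
    public string Get()
    {
        return "Hello World";
    }
}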
I'm downloading a site for its content using a web crawler I wrote with the Microsoft WebBrowser control.
Part of the site's content is sent only after some kind of verification from the client side; my guess is that it's cookies / session cookies.
When I try to download the page from my crawler, I see (with Fiddler's help) that the inner link for the Ajax call sends 'false' for one of the parameters and the data is not received.
When I try to perform the same action from any browser, Fiddler shows that the property is sent as '1'.
After a day of testing, I'd be grateful for any lead. Is there a way to manipulate this property? Plant cookies? Any other ideas?
Following khunj's answer, I'm adding the headers from IE and from my WebBrowser control:
In both captures I removed the fields that have the same value.
From IE:
GET /feed/prematch/1-1-234562-8527419630-1-2.dat HTTP/1.1
x-requested-with: XMLHttpRequest
Referer: http://www.mySite.com/ref=12345
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0)
Connection: Keep-Alive
Cookie: __utma=1.1088924975.1299439925.1299976891.1300010848.14;
__utmz=1.1299439925.1.1.utmcsr=(direct)|utmccn=
(direct)|__utmb=2.1.10.1300010848; __utmc=136771054; user_cookie=63814658;
user_hash=58b923a5a234ecb78b7cc8806a0371c5; user_time=1297166428; infobox_8=1;
user_login_id=12345; mySite=5e1c0u8g6qh41o2798ua2bfbi3
HTTP/1.1 200 OK
Date: Sun, 13 Mar 2011 10:07:38 GMT
Server: Apache
Last-Modified: Sun, 13 Mar 2011 10:07:25 GMT
ETag: "26a6d9-19df-49e5a5c9ed140"
Accept-Ranges: bytes
Content-Length: 6623
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Connection: close
Content-Type: text/plain
Content-Encoding: gzip
From WebBrowser:
GET /feed/prematch/1-1-234562-8527419630-false-2.dat HTTP/1.1
x-requested-with: XMLHttpRequest
Referer: http://www.mySite.com/ref=12345
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0)
Connection: Keep-Alive
Cookie: __utma=1.1782626598.1299416994.1299974912.1300011023.129;
__utmb=2.1.10.1300011023; __utmz=1.1299416994.1.1.utmcsr=
(direct)|utmccn=(direct)|__utmc=136771054; user_cookie=65192487;
user_hash=6425034050442671103fdd614e4a2932; user_time=1299416986;
user_full_time_zone=37;user_login_id=12345; mySite=q9qlqqm9bunm9siho32tdqdjo0
HTTP/1.1 404 Not Found
Date: Sun, 13 Mar 2011 10:10:33 GMT
Server: Apache
Content-Length: 313
Connection: close
Content-Type: text/html; charset=iso-8859-1
Thanks in advance,
Oz.
Well, the server is obviously treating the request from your crawler differently. Since you already have Fiddler involved, what is different in your request headers when you make the request from IE versus using your crawler? The reason I say IE is because the WebBrowser control uses the same engine as IE for doing its work.
The way I solved my problem was by using Fiddler as a proxy and defining a custom rule so that whenever the PathAndQuery property contains the site address, the 'false' is replaced with '1' before the request reaches the server.
Not the most elegant solution, but it fits my problem.
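If you prefer to stay in C#, roughly the same rewrite can be done with FiddlerCore instead of editing customRules.js. This is a sketch from memory of an older FiddlerCore API, so double-check the calls before relying on it:
// Alternative sketch: a small FiddlerCore proxy that rewrites the URL before
// the request leaves the machine. The path match is hypothetical.
using Fiddler;

class ProxyRewrite
{
    static void Main()
    {
        FiddlerApplication.BeforeRequest += session =>
        {
            if (session.PathAndQuery.Contains("/feed/prematch/"))
            {
                session.fullUrl = session.fullUrl.Replace("-false-", "-1-");
            }
        };

        FiddlerApplication.Startup(8888, FiddlerCoreStartupFlags.Default);
        System.Console.ReadLine();   // keep the proxy alive until Enter is pressed
        FiddlerApplication.Shutdown();
    }
}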
I learned the most from these 2 pages:
FiddlerScript CookBook
A site that explains the specific customRules.js file and the field I needed to edit
Thanks for the help,
Oz.