422 Unprocessable Entity when posting to Laravel - C#

I cannot figure out how to solve this.
The response shows Content-Type: application/json instead of multipart/form-data.
Does anyone know what I need to do?
I'm building with .NET 6 MAUI on Android 12, using RestSharp as the HTTP client.
Please take a look at this image:
(IMAGE) Response StatusCode
var client = new RestClient();
var request = new RestRequest(PostImageUrl, Method.Post);
request.AddHeader("Content-Type", "multipart/form-data");
request.AddFile("image", bauzeichnung);
var response = client.Execute(request);
Everything was tested with Postman and works as expected.
EDIT
also tried:
request.AlwaysMultipartFormData = true;
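For reference, the combination that is usually suggested for this symptom, letting RestSharp generate the multipart Content-Type and boundary itself rather than setting the header manually, would look like the following sketch (URL and file path are placeholders, not values from this question):

```csharp
using RestSharp;

// Placeholder URL and file path; substitute your own.
var client = new RestClient();
var request = new RestRequest("https://example.com/api/uploadImage", Method.Post)
{
    // Let RestSharp build the multipart body and its boundary;
    // do not call AddHeader("Content-Type", ...) yourself, or the
    // boundary parameter will be missing from the header.
    AlwaysMultipartFormData = true
};
request.AddFile("image", "/path/to/photo.jpg");
var response = await client.ExecuteAsync(request);
```

This is only a sketch of the request construction; whether it resolves the 422 depends on what the server actually rejects.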
Is this Directory Path correct?
"/data/user/0/com.Lippert.Digital/cache/b81fe7a766a64981918f1012d7865c8c.jpg"
Can AddFile work with this type of path from my Android phone?
This picture was taken with MediaPicker.Default.CapturePhotoAsync();
(IMAGE) Directory Path
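A minimal sketch (assuming .NET MAUI's MediaPicker and FileSystem APIs) of copying the captured photo out of the cache before uploading. AddFile can read the cache path because it belongs to your own app, but a copy in AppDataDirectory survives cache cleanup:

```csharp
using System.IO;
using Microsoft.Maui.Media;
using Microsoft.Maui.Storage;

// Sketch: copy the captured photo from the cache into app data first.
var photo = await MediaPicker.Default.CapturePhotoAsync();
if (photo != null)
{
    var target = Path.Combine(FileSystem.AppDataDirectory, photo.FileName);
    using var source = await photo.OpenReadAsync();
    using var dest = File.OpenWrite(target);
    await source.CopyToAsync(dest);
    // Then upload `target` instead of the cache path.
}
```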
EDIT
PHP Controller
public function uploadImage(Request $request)
{
    $this->validate($request, [
        'image' => 'file',
    ]);

    if ($request->file('image')) {
        $name = time().$request->file('image')->getClientOriginalName();
        $request->file('image')->move('Bauzeichnungen', $name);
        $image = url('Bauzeichnungen/'.$name);
    } else {
        $image = 'Image not found';
    }

    date_default_timezone_set('Europe/Berlin');
    DB::table('Bauzeichnung')->insert([
        'image' => "$image",
        'erstellt_von' => 26,
        'aktualisiert_von' => 26,
        'Zeitstempel' => date('Y-m-d H:i:s'),
    ]);
}
PHP Route
Route::post('/uploadImage', 'ImageController@uploadImage');

That directory path is not the final location; it belongs to your cache storage. When you capture an image, it is written to the cache, so when you post it you are posting the image from the cache. That is why the path you get back is a cache path.

422 Unprocessable Entity - Solution
After a couple of days I found the solution to the error above.
The reason:
The images were too big, and php.ini only allowed smaller values.
How it can be solved:
Open your php.ini (location: /etc/php/8.1/fpm/php.ini).
Set memory_limit (I set it to -1).
Sets the maximum amount of memory, in bytes, that a script can use. This can be used to prevent badly written scripts from "eating up" all of the available memory on a server. To set no memory limit, set this directive to the value -1.
Set post_max_size (I set it to 128M).
Sets the maximum allowed size of POST data. This option also affects file uploads: to upload larger files, the value must be greater than upload_max_filesize, and in general memory_limit should be greater than post_max_size. If an integer value is used, it is measured in bytes; the shorthand notation described in the PHP FAQ also works. If the size of the POST data is greater than post_max_size, the $_POST and $_FILES superglobals are empty. This can be detected in a number of ways, e.g. by passing a $_GET variable to the script that processes the data and then checking whether $_GET['processed'] is set.
Set upload_max_filesize (I set it to 64M).
The maximum size that an uploaded file can have.
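Collected in one place, the values described above would look like this in php.ini (the numbers are the ones used here; tune them to your own server). Remember to restart PHP-FPM afterwards, e.g. with something like `sudo systemctl restart php8.1-fpm` on Debian/Ubuntu:

```ini
; /etc/php/8.1/fpm/php.ini
memory_limit = -1          ; no per-script memory limit (use with care)
post_max_size = 128M       ; must be >= upload_max_filesize
upload_max_filesize = 64M
```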
Where I got the information - the PHP manual:
https://www.php.net/manual/de/ini.core.php
Big thanks to @Jason

Related

How to implement resumable upload using Microsoft.Graph.GraphServiceClient from C#

Does anyone know how to use the C# OneDrive SDK to perform a resumable upload?
When I use IDriveItemRequestBuilder.CreateUploadSession I always get a new session with the NextExpectedRanges reset.
If I use the .UploadURL and manually send an HTTP POST, I get the correct next ranges back; however, I don't then know how to resume the upload session using the SDK. There doesn't seem to be a way in the API to 'OpenUploadSession', or at least not one I can find.
Nor can I find a working example.
I suspect this must be a common use case.
Please note the keyword in the text: resumable.
I was looking for the same thing and just stumbled on an example in the official docs:
https://learn.microsoft.com/en-us/graph/sdks/large-file-upload?tabs=csharp.
I tried the code and it worked.
In case it helps, here is my sample implementation: https://github.com/xiaomi7732/onedrive-sample-apibrowser-dotnet/blob/6639444d6298492c38f841e411066635760930c2/OneDriveApiBrowser/FormBrowser.cs#L565
The method of resumption depends on how much state you have. The absolute minimum required is UploadSession.UploadUrl (think of it as a unique identifier for the session). If you don't have that URL, you'd need to create a new upload session and start from the beginning; otherwise, if you do have it, you can do something like the following to resume:
var uploadSession = new UploadSession
{
    NextExpectedRanges = Enumerable.Empty<string>(),
    UploadUrl = persistedUploadUrl,
};

var maxChunkSize = 320 * 1024; // 320 KB - change this to your chunk size. 5 MB is the default.
var provider = new ChunkedUploadProvider(uploadSession, graphClient, ms, maxChunkSize);

// This will query the service and make sure the remaining ranges are accurate.
uploadSession = await provider.UpdateSessionStatusAsync();

// Since the remaining ranges are now accurate, this will return the requests
// required to complete the upload.
var chunkRequests = provider.GetUploadChunkRequests();
...
If you have more state, you'd be able to skip some of the above. For example, if you already had a ChunkedUploadProvider but don't know whether it's accurate (maybe it was serialized to disk or something), then you can just start the process with the call to UpdateSessionStatusAsync.
FYI, you can see the code for ChunkedUploadProvider here, in case it's helpful to see what's going on under the covers.

Dropbox file uploading not showing 409 error

I'm uploading a file using the Dropbox Core API. I have written the upload code like this:
RequestResult strReq = OAuthUtility.Put
(
    "https://api-content.dropbox.com/1/files_put/auto/",
    new HttpParameterCollection
    {
        {"access_token", "Token"},
        {"path", "/file.txt"},
        {"overwrite", "false"},
        {"autorename", "false"},
        {stream}
    }
);
Suppose there is an existing file in the root folder named file.txt and I'm again trying to upload a file with the same name to the same folder. I have set
overwrite=false and autorename=false, but surprisingly no error status code is returned in the response; it always returns the success code 200. I need to show the proper error code.
Two things stand out:
Your URL is https://api-content.dropbox.com/1/files_put/auto/, but it should be (for this example) https://api-content.dropbox.com/1/files_put/auto/file.txt. The path parameter should be removed from the HttpParameterCollection.
I'm unfamiliar with the library you're using, but are you sure that those parameters are turned into query parameters and that stream becomes the HTTP body? I.e. the resulting URL should be https://api-content.dropbox.com/1/files_put/auto/file.txt?overwrite=false&autorename=false&access_token=<TOKEN>, and then the file content should go in the body of the request. Please make sure this is what's happening.
Please also share the body that comes back with the 200 response. It should tell you, for example, the path of the file that got written.
Note that if you upload the exact same file content to the same path, it doesn't count as a conflict, so when looking for a 409, make sure you're uploading different content to the file.
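To make the first point concrete, here is a hypothetical sketch of the URL shape the answer describes (the token and filename are placeholders): the file path becomes part of the URL, the remaining options become query parameters, and the file content goes in the request body.

```csharp
using System;

// Placeholder path and token; a real path should be URL-escaped per segment.
var path = "/file.txt";
var url = "https://api-content.dropbox.com/1/files_put/auto" + path
        + "?overwrite=false&autorename=false&access_token=<TOKEN>";
// The file content itself then goes in the HTTP body, not in the parameters.
Console.WriteLine(url);
```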

How do you set up the correct HTTP Response object for a Range request coming from BITS (Background Intelligent Transfer Service)?

I have a requirement to implement a web service that can issue files to BITS (Background Intelligent Transfer Service). The language is ASP.NET (C#). The problem I am having is with the "range" stuff.
My code currently receives the HTTP request (with a valid range of 0-4907 present in the HTTP headers) and subsequently dishes out a portion of a byte array in the response object.
Here's my server code:
_context.Response.Clear();
_context.Response.AddHeader("Content-Range", "bytes " + lower.ToString() + "-" + upper.ToString() + "//" + view.Content.Length.ToString());
_context.Response.AddHeader("Content-Length", upper.ToString());
_context.Response.AddHeader("Accept-Ranges", "bytes");
_context.Response.ContentType = "application/octet-stream";
_context.Response.BinaryWrite(data);
_context.Response.End();
What happens next is that the subsequent request does not have any "range" key in the header at all... it's as if it is asking for the entire file! Needless to say, the BITS job errors, stating that the server's response was not valid.
I suspect that it's all down to the headers the server returns in the response object... I am pretty sure that I am following the protocol here.
If anyone can help with this it would be greatly appreciated... mean while... I'll keep on searching!
Regards
Yes, I found that I had a few issues in total. IIS was an initial problem, then my length calculations... and then, as you say, the range request itself. Ignoring the latter, my final code for this segment was:
_context.Response.StatusCode = 206;
_context.Response.AddHeader("Content-Range", string.Format("bytes {0}-{1}/{2}", lower.ToString(), upper.ToString(), view.Content.Length.ToString()));
_context.Response.AddHeader("Content-Length", length.ToString());
_context.Response.AddHeader("Accept-Ranges", "bytes");
_context.Response.OutputStream.Write(view.Content.ToArray(), lower, length);
Handling multi-range requests can be tackled another day! When BITS requests in that way (as it does on the second request, after the first request asks for the entire file), my code simply returns nothing... BITS then sends a single range in the request, and things work fine from there.
Thanks for the response.
You could also test-run your BITS request(s) against a known static file and sniff the packets with Wireshark. That's sure to reveal exactly how to do it.

Write to a specific position in a .json file + serialize size limit issue C#

I have a method that retrieves data from a JSON-serialized string and writes it to a .json file using:
TextWriter writer = new StreamWriter("~/example.json");
writer.Write("{\"Names\":" + new JavaScriptSerializer().Serialize(jsonData) + "}");
data (sample):
{"People":{"Quantity":"4"}, "info":
[{"Name":"John","Age":"22"}, {"Name":"Jack","Age":"56"}, {"Name":"John","Age":"82"}, {"Name":"Jack","Age":"95"}]
}
This works perfectly; however, the jsonData variable has content that is updated frequently. Instead of always deleting and creating a new example.json when the method is invoked,
is there a way to write data only to a specific location in the file? In the above example, say to the info section, by appending another {"Name":"x","Age":"y"}?
My reasoning for this is that I ran into an issue when trying to serialize a large amount of data using Visual Studio in C#. I got a "The length of the string exceeds the value set on the maxJsonLength property" error. I tried to increase the maximum allowed size in web.config using a few methods suggested in this forum, but they never worked. As the file gets larger I feel I may run into the same issue again. Any other alternatives are always welcome. Thanks in advance.
I am not aware of a JSON serializer that works with chunks of JSON only. You may try Json.NET, which should work with larger data:
var data = JsonConvert.SerializeObject(new { Names = jsonData });
File.WriteAllText("example.json", data);
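If the real goal is to append one record to the info array without hand-building the string, a minimal sketch with Json.NET's LINQ-to-JSON types (assuming the Newtonsoft.Json package) could look like this. Note that a JSON file still has to be rewritten as a whole afterwards; a text file cannot safely be edited in place at an arbitrary position:

```csharp
using Newtonsoft.Json.Linq;

// Sketch: parse, append one entry to "info" in memory, then rewrite once.
var json = JObject.Parse(
    "{\"People\":{\"Quantity\":\"4\"},\"info\":[{\"Name\":\"John\",\"Age\":\"22\"}]}");
var info = (JArray)json["info"];
info.Add(new JObject { ["Name"] = "x", ["Age"] = "y" });
// File.WriteAllText("example.json", json.ToString());
```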

C#: HttpListener Error Serving Content

I have implemented something similar to this; the only real difference is
string filename = context.Request.RawUrl.Replace("/", "\\").Remove(0,1);
string path = Uri.UnescapeDataString(Path.Combine(_baseFolder, filename));
so that I can traverse into subdirectories. This works great for web pages and other text file types, but when trying to serve up media content I get the exception
HttpListenerException: The I/O operation has been aborted because of either a thread exit or an application request
followed by
InvalidOperationException: Cannot close stream until all bytes are written.
in the using statement.
Any suggestions on how to handle this or stop these exceptions?
Thanks
I should mention that I am using Google Chrome as my browser (Chrome doesn't seem to care about MIME types; when it sees audio it will try to use it as if it were in an HTML5 player), but this is also applicable if you are trying to host media content in a page.
Anyway, I was inspecting my headers with Fiddler and noticed that Chrome passes three requests to the server. I started playing with other browsers and noticed they did not do this, but depending on the browser and what I had hard-coded as the MIME type, I would get either a page of garbage text or a download of the file.
On further inspection I noticed that Chrome would first request the file, then request it again with a few different headers, most notably the Range header: the first with bytes=0-, then the next with a different range depending on how large the file was (more than three requests can be made, depending on the file size).
So there was the problem. Chrome first asks for the file. Once it sees the type, it sends another request, which seems to be checking how large the file is (bytes=0-), then another asking for the second half of the file or similar, to allow for the sort of streaming experience you get with HTML5. I quickly coded something up to handle MIME types, threw together an HTML5 page with the audio component, and found that other browsers also do this (except IE).
So here is a quick solution, and I no longer get these errors:
string range = context.Request.Headers["Range"];
int rangeBegin = 0;
int rangeEnd = msg.Length;
if (range != null)
{
    string[] byteRange = range.Replace("bytes=", "").Split('-');
    Int32.TryParse(byteRange[0], out rangeBegin);
    if (byteRange.Length > 1 && !string.IsNullOrEmpty(byteRange[1]))
    {
        Int32.TryParse(byteRange[1], out rangeEnd);
    }
}
context.Response.ContentLength64 = rangeEnd - rangeBegin;
using (Stream s = context.Response.OutputStream)
{
    s.Write(msg, rangeBegin, rangeEnd - rangeBegin);
}
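One caveat with the snippet above: a partial response should normally also carry status 206 and a Content-Range header, as in the BITS answer earlier; without them some clients treat the reply as a plain 200 with a truncated body. A sketch, reusing the variables from the snippet above:

```csharp
// Sketch only, reusing `context`, `range`, `rangeBegin`, `rangeEnd`, and
// `msg` from the snippet above. Content-Range uses an inclusive end index,
// hence rangeEnd - 1.
if (range != null)
{
    context.Response.StatusCode = 206; // Partial Content
    context.Response.AddHeader("Content-Range",
        string.Format("bytes {0}-{1}/{2}", rangeBegin, rangeEnd - 1, msg.Length));
}
```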
Try:
using (Stream s = context.Response.OutputStream)
{
    s.Write(msg, 0, msg.Length);
    s.Flush();
}
