How to implement resumable upload using Microsoft.Graph.GraphServiceClient from C#

Does anyone know how to use the C# OneDrive SDK to perform a resumable upload?
When I use IDriveItemRequestBuilder.CreateUploadSession I always get a new session with the NextExpectedRanges reset.
If I use the .UploadUrl and manually send an HTTP POST, I get the correct next ranges back; however, I then don't know how to resume the upload session using the SDK. There doesn't seem to be a way in the API to 'OpenUploadSession', or at least not one that I can find.
Nor can I find a working example.
I suspect this must be a common use case.

I was looking for the same thing and just stumbled on an example in the official docs:
https://learn.microsoft.com/en-us/graph/sdks/large-file-upload?tabs=csharp.
I tried the code and it worked.
In case it helps, here is my sample implementation: https://github.com/xiaomi7732/onedrive-sample-apibrowser-dotnet/blob/6639444d6298492c38f841e411066635760930c2/OneDriveApiBrowser/FormBrowser.cs#L565

The method of resumption depends on how much state you have. The absolute minimum required is UploadSession.UploadUrl (think of it as the unique identifier for the session). If you don't have that URL, you'll need to create a new upload session and start from the beginning; otherwise, if you do have it, you can do something like the following to resume:
var uploadSession = new UploadSession
{
    NextExpectedRanges = Enumerable.Empty<string>(),
    UploadUrl = persistedUploadUrl,
};
var maxChunkSize = 320 * 1024; // 320 KB - Change this to your chunk size. 5MB is the default.
var provider = new ChunkedUploadProvider(uploadSession, graphClient, ms, maxChunkSize);
// This will query the service and make sure the remaining ranges are accurate.
uploadSession = await provider.UpdateSessionStatusAsync();
// Since the remaining ranges is now accurate, this will return the requests required to
// complete the upload.
var chunkRequests = provider.GetUploadChunkRequests();
...
If you have more state you'd be able to skip some of the above. For example, if you already had a ChunkedUploadProvider but don't know that it's accurate (maybe it was serialized to disk or something) then you can just start the process with the call to UpdateSessionStatusAsync.
FYI, you can see the code for ChunkedUploadProvider here in case that'll be helpful to see what's going on under the covers.
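For completeness, here is roughly what executing those chunk requests looks like. This is a minimal sketch based on the chunked-upload sample in the older SDK docs, so the GetChunkRequestResponseAsync signature and the readBuffer/trackedExceptions parameters are assumptions to verify against your SDK version:
// Drain the chunk requests produced by GetUploadChunkRequests().
var readBuffer = new byte[maxChunkSize];
var trackedExceptions = new List<Exception>();
DriveItem itemResult = null;
foreach (var request in chunkRequests)
{
    // Sends one chunk and records any transient failures in trackedExceptions.
    var result = await provider.GetChunkRequestResponseAsync(request, readBuffer, trackedExceptions);
    if (result.UploadSucceeded)
    {
        itemResult = result.ItemResponse; // the final DriveItem once the last chunk lands
    }
}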

422 Unprocessable Entity when posting to Laravel

I cannot figure out how to solve this.
The response shows Content-Type: application/json instead of multipart/form-data.
Does anyone know what I need to do?
I'm building with .NET 6 MAUI on Android 12, using RestSharp as the HTTP client.
Please take a look at this image:
(IMAGE) Response StatusCode
var client = new RestClient();
var request = new RestRequest(PostImageUrl, Method.Post);
request.AddHeader("Content-Type", "multipart/form-data");
request.AddFile("image", bauzeichnung);
var response = client.Execute(request);
Everything was tested with Postman and works as expected.
EDIT
I also tried:
request.AlwaysMultipartFormData = true;
Is this Directory Path correct?
"/data/user/0/com.Lippert.Digital/cache/b81fe7a766a64981918f1012d7865c8c.jpg"
Can AddFile work with this type of path from my Android phone?
This picture was taken with MediaPicker.Default.CapturePhotoAsync();
(IMAGE) Directory Path
EDIT
PHP Controller
public function uploadImage(Request $request)
{
    $this->validate($request, [
        'image' => 'file',
    ]);
    if ($request->file('image'))
    {
        $name = time().$request->file('image')->getClientOriginalName();
        $request->file('image')->move('Bauzeichnungen', $name);
        $image = url('Bauzeichnungen/'.$name);
    }
    else
    {
        $image = 'Image not found';
    }
    date_default_timezone_set('Europe/Berlin');
    DB::table('Bauzeichnung')->insert([
        'image' => "$image",
        'erstellt_von' => 26,
        'aktualisiert_von' => 26,
        'Zeitstempel' => date('Y-m-d H:i:s'),
    ]);
}
PHP Route
Route::post('/uploadImage','ImageController#uploadImage');
The directory path is not correct; that path belongs to your cache storage. When you capture an image, it is written to the cache, so when you post it you are posting the image from the cache, and when you ask for the path you get the cache path.
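If you need a path outside the cache, one workaround (a sketch assuming .NET MAUI's MediaPicker/FileResult/FileSystem APIs) is to copy the captured photo into app storage first and upload from there:
// Copy the captured photo out of the cache into app data storage,
// then upload from the copied path instead of the cache path.
var photo = await MediaPicker.Default.CapturePhotoAsync();
if (photo != null)
{
    string localPath = Path.Combine(FileSystem.AppDataDirectory, photo.FileName);
    using (Stream source = await photo.OpenReadAsync())
    using (FileStream dest = File.OpenWrite(localPath))
    {
        await source.CopyToAsync(dest);
    }
    // request.AddFile("image", localPath);
}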
422 Unprocessable Entity - Solution
After a couple of days I found the solution for the error above.
The reason for that:
The images were too large, and php.ini only allowed smaller values.
How it can be solved:
Open your php.ini (location: /etc/php/8.1/fpm/php.ini).
Set memory_limit (I set it to -1).
Sets the maximum amount of memory, in bytes, that a script may use. This can be used to prevent badly written scripts from eating up all of the available memory on a server. To set no memory limit, set this directive to -1.
Set post_max_size (I set it to 128M).
Sets the maximum allowed size of POST data. This option also affects file upload; to upload larger files, the value must be greater than upload_max_filesize. In general, memory_limit should be greater than post_max_size. If an int value is used, it is measured in bytes; the shorthand notation described in the PHP FAQ may also be used. If the size of the POST data is greater than post_max_size, the $_POST and $_FILES superglobals are empty. This can be tracked in a number of ways, e.g. by passing a $_GET variable to the script that processes the data and then checking whether $_GET['processed'] is set.
Set upload_max_filesize (I set it to 64M).
The maximum size that an uploaded file may have.
Where I got the information - the PHP manual:
https://www.php.net/manual/de/ini.core.php
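Put together, the relevant php.ini entries would look like this (the path and values are the ones used above):
; /etc/php/8.1/fpm/php.ini
memory_limit = -1
post_max_size = 128M
upload_max_filesize = 64M
Restart PHP-FPM after changing these so they take effect.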
Big thanks to @Jason.

Acquiring waveform of LeCroy oscilloscope from C#/.NET

I am trying to load a waveform from a Teledyne Lecroy Wavesurfer 3054 scope using NI-VISA / IVI library. I can connect to the scope and read and set control variables but I can't figure out how to get the trace data back from the scope into my code. I am using USBTMC and can run the sample code in the Lecroy Automation manual but it does not give an example for getting the waveform array data, just control variables. They do not have a driver for IVI.NET. Here is a distilled version of the code:
// Open session to scope
var session = (IMessageBasedSession)GlobalResourceManager.Open
("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
// { other setup stuff that works OK }
// ...
// ...
// Attempt to query the Channel 1 waveform data
session.FormattedIO.WriteLine("vbs? 'return = app.Acquisition.C1.Out.Result.DataArray'");
So the last line above (which seems to be what the manual suggests) causes a beep and there is no data that can be read. I've tried all the read functions and they all time out with no data returned. If I query the number of data points I get 100002 which seems correct and I know the data must be there. Is there a better VBS query to use? Is there a read function that I can use to read the data into a byte array that I have overlooked? Do I need to read the data in blocks due to a buffer size limitation, etc.? I am hoping that someone has solved this problem before. Thanks so much!
Here is my first effort that got it working:
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
//
// .. a bunch of setup code...
//
session.FormattedIO.WriteLine("C1:WF?"); // Query waveform data for Channel 1
var buff = session.RawIO.Read(MAX_BUFF_SIZE); // buff has .TRC-like contents of waveform data
The buff[] byte array ends up with the same file-formatted data as the .TRC files the scope saves to disk, so it has to be parsed. But at least the waveform data is there! If there is a better way, I may find it and post it, or someone else should feel free to post it.
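If the transfer can exceed a single buffer, a chunked read loop is safer. This is a sketch assuming the Ivi.Visa RawIO overload that reports a ReadStatus; check it against your VISA vendor's .NET binding:
// Read the raw response in chunks until the device signals the end of the message.
var chunks = new List<byte[]>();
Ivi.Visa.ReadStatus status;
do
{
    byte[] chunk = session.RawIO.Read(65536, out status); // 64 KB per read
    chunks.Add(chunk);
} while (status == Ivi.Visa.ReadStatus.MaximumCountReached);
byte[] buff = chunks.SelectMany(b => b).ToArray(); // reassemble the full .TRC-like payload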
The way I achieved this was by saving the screenshot to a local drive on the scope, mapping that drive on the current system, and simply using File.Copy() to copy the image file from the mapped drive to the local computer. It saves the time of parsing the data and re-plotting it, compared to working with the TRC-like contents.

c# parallel writes to Azure Data Lake File

In our Azure Data Lake, we have daily files recording events and coordinates for those events. We need to take these coordinates and lookup what State, County, Township, and Section these coordinates fall into. I've attempted several versions of this code.
I attempted to do this in U-SQL. I even uploaded a custom assembly that implemented Microsoft.SqlServer.Types.SqlGeography methods, only to find ADLA isn't set up to perform row-by-row operations like geocoding.
I pulled all the rows into SQL Server, converted the coordinates into a SqlGeography, and built T-SQL code that would perform the State, County, etc. lookups. After much optimization, I got this process down to ~700 ms/row (with 133M rows in the backlog and ~16k rows added daily, we're looking at nearly 3 years to catch up). So I parallelized the T-SQL; things got better, but not enough.
I took the T-SQL code and built the process as a console application, since the SqlGeography library is actually a .NET library, not a native SQL Server product. I was able to get single-threaded processing down to ~500 ms. Adding in .NET's parallelism (Parallel.ForEach) and throwing 10 or 20 of the cores of my machine at it does a lot, but still isn't enough.
I attempted to rewrite this code as an Azure Function, processing files in the data lake file-by-file. Most of the files timed out, since they took longer than 10 minutes to process. So I've updated the code to read in the files and shred the rows into Azure Queue storage. Then I have a second Azure Function that fires for each row in the queue. The idea is that Azure Functions can scale out far more than any single machine can.
And this is where I'm stuck. I can't reliably write rows to files in ADLS. Here is the code as I have it now.
public static void WriteGeocodedOutput(string Contents, String outputFileName, ILogger log) {
    AdlsClient client = AdlsClient.CreateClient(ADlSAccountName, adlCreds);
    // if the file doesn't exist, write the header first
    try {
        if (!client.CheckExists(outputFileName)) {
            using (var stream = client.CreateFile(outputFileName, IfExists.Fail)) {
                byte[] headerByteArray = Encoding.UTF8.GetBytes("EventDate, Longitude, Latitude, RadarSiteID, CellID, RangeNauticalMiles, Azimuth, SevereProbability, Probability, MaxSizeinInchesInUS, StateCode, CountyCode, TownshipCode, RangeCode\r\n");
                //stream.Write(headerByteArray, 0, headerByteArray.Length);
                client.ConcurrentAppend(outputFileName, true, headerByteArray, 0, headerByteArray.Length);
            }
        }
    } catch (Exception e) {
        log.LogInformation("multiple attempts to create the file. Ignoring this error, since the file was created.");
    }
    // then write the data
    byte[] textByteArray = Encoding.UTF8.GetBytes(Contents);
    for (int attempt = 0; attempt < 5; attempt++) {
        try {
            log.LogInformation("prior to write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            var offset = client.GetDirectoryEntry(outputFileName).Length;
            client.ConcurrentAppend(outputFileName, false, textByteArray, 0, textByteArray.Length);
            log.LogInformation("AFTER write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            // if successful, stop trying to write this row
            attempt = 6;
        }
        catch (Exception e) {
            log.LogInformation($"exception on adls write: {e}");
        }
        Random rnd = new Random();
        Thread.Sleep(rnd.Next(attempt * 60));
    }
}
The file gets created when it needs to be, but I do see several messages in my log that several threads tried to create it, and I'm not always getting the header row written.
I also no longer get any data rows, only:
"BadRequest ( IllegalArgumentException concurrentappend failed with error 0xffffffff83090a6f
(Bad request. The target file does not support this particular type of append operation.
If the concurrent append operation has been used with this file in the past, you need to append to this file using the concurrent append operation.
If the append operation with offset has been used in the past, you need to append to this file using the append operation with offset.
On the same file, it is not possible to use both of these operations.). []
I feel like I'm missing some fundamental design idea here. The code should try to write a row into a file. If the file doesn't yet exist, create it and put the header row in. Then, put in the row.
What's the best-practice way to accomplish this kind of write scenario?
Any other suggestions of how to handle this kind of parallel-write workload in ADLS?
I am a bit late to this, but I guess one of the problems could be the use of Create and ConcurrentAppend on the same file.
The ADLS documentation mentions that they can't be used on the same file. Try changing the CreateFile call to ConcurrentAppend, since the latter can create the file if it doesn't exist.
Also, if you found a better way to do it, please do post your solution here.
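For illustration, here is a minimal sketch of that ConcurrentAppend-only approach, reusing the same Microsoft.Azure.DataLake.Store calls as the question (header handling and retries left out for brevity):
// Rely on ConcurrentAppend's autoCreate flag instead of CreateFile, so every
// writer uses the same append mode and no offset bookkeeping is needed.
public static void WriteGeocodedOutput(string contents, string outputFileName, ILogger log)
{
    AdlsClient client = AdlsClient.CreateClient(ADlSAccountName, adlCreds);
    byte[] textByteArray = Encoding.UTF8.GetBytes(contents);
    // autoCreate: true creates the file on the first append; all writers stay in
    // concurrent-append mode, avoiding the mixed-append-types error above.
    client.ConcurrentAppend(outputFileName, true, textByteArray, 0, textByteArray.Length);
}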

How to read only a small part of a .XML

I built an application to read an XML feed, but even though my connection is fast, the page takes several seconds to load. I would like to know how to read only the first records of this XML:
string rssURL = "http://www.cnt.org.br/Paginas/feed.aspx?t=n";
System.Net.WebRequest myRequest = System.Net.WebRequest.Create(rssURL);
System.Net.WebResponse myResponse = myRequest.GetResponse();
System.IO.Stream rssStream = myResponse.GetResponseStream();
System.Xml.XmlDocument rssDoc = new System.Xml.XmlDocument();
rssDoc.Load(rssStream);
System.Xml.XmlNodeList rssItems = rssDoc.SelectNodes("rss/channel/item");
Thanks.
As the previous posters mention, you can't download part of a web request, but you can start parsing the XML before the request has finished. Using XmlDocument is the wrong approach for your use case, because it needs the complete response to create the object. Try using XmlTextReader instead.
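As a sketch of that idea (using XmlReader.Create, the modern equivalent of new XmlTextReader(...)), you could stream the response and stop after the first few item elements; the count of 10 is just an example:
// Stream the RSS response and stop after the first 10 <item> elements,
// instead of loading the entire document into an XmlDocument.
string rssURL = "http://www.cnt.org.br/Paginas/feed.aspx?t=n";
var myRequest = System.Net.WebRequest.Create(rssURL);
using (var myResponse = myRequest.GetResponse())
using (var rssStream = myResponse.GetResponseStream())
using (var reader = System.Xml.XmlReader.Create(rssStream))
{
    int itemCount = 0;
    while (reader.Read() && itemCount < 10)
    {
        if (reader.NodeType == System.Xml.XmlNodeType.Element && reader.Name == "item")
        {
            // ReadSubtree scopes a nested reader to just this <item>.
            var itemDoc = new System.Xml.XmlDocument();
            itemDoc.Load(reader.ReadSubtree());
            itemCount++;
            // Process itemDoc here, e.g. itemDoc.SelectSingleNode("item/title").
        }
    }
}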
There is no easy way to download part of a web request and ensure it is what you want. One workaround would be to use the Google Feed API.
You'd have to use the JSON interface since they don't provide a library for C#, but since it's going through Google's servers it will be much faster. You'd have to modify your code a little, since it returns JSON by default instead of XML, but that is a trivial change to make. You can also add the parameter output=xml to retrieve the XML representation of the data.
Try going to this page; that is your same feed with fewer elements, and it loads much faster. It only returns a few elements, but if you want 10 elements, all you have to do is add num=10 to the URL. For example, this URL has 10 elements. Read the API documentation a little more to see what parameters you can add to tailor the request to what you want to do.

File IO in Windows 8

I have been trying to read a file, and calculate the hash of the contents to find duplicates. The problem is that in Windows 8 (or WinRT or windows store application or however it is called, I'm completely confused), System.IO has been replaced with Windows.Storage, which behaves differently, and is very confusing. The official documentation is not useful at all.
First I need to get a StorageFile object, which in my case, I get from browsing a folder from a file picker:
var picker = new Windows.Storage.Pickers.FolderPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.MusicLibrary;
picker.FileTypeFilter.Add("*");
var folder = await picker.PickSingleFolderAsync();
var files = await folder.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByName);
Now in files I have the list of files I need to index. Next, I need to open that file:
foreach (StorageFile file in files)
{
var filestream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
Now is the most confusing part: getting the data from the file. The documentation was useless, and I couldn't find any code example. Apparently, Microsoft thought getting pictures from the camera is more important than opening a file.
The file stream has a member ReadAsync which I think reads the data. This method needs a buffer as a parameter and returns another buffer (???). So I create a buffer:
var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024 * 10); // 10 mb should be enough for an mp3
var resultbuffer = await filestream.ReadAsync(buffer, 1024 * 1024 * 10, Windows.Storage.Streams.InputStreamOptions.ReadAhead);
I am wondering... what happens if the file doesn't have enough bytes? I haven't seen any info in the documentation.
Now I need to calculate the hash for this file. To do that, I need to create an algorithm object...
var alg = Windows.Security.Cryptography.Core.HashAlgorithmProvider.OpenAlgorithm("md5");
var hashbuff = alg.HashData(resultbuffer);
// Cleanup
filestream.Dispose();
I also considered reading the file in chunks, but how can I calculate the hash like that? I looked everywhere in the documentation and found nothing about this. Could it be the CryptographicHash class with its 'Append' method?
Now I have another issue. How can I get the data from that weird buffer thing to a byte array? The IBuffer class doesn't have any 'GetData' member, and the documentation, again, is useless.
So all I could do now is wonder about the mysteries of the universe...
// ???
}
So the question is... how can I do this? I am completely confused, and I wonder why Microsoft chose to make reading a file so... so... so... impossible! Even in assembly I could figure it out more easily than... this thing.
WinRT, or Windows Runtime, should not be confused with .NET, because it is not .NET. WinRT has access to only a subset of the Win32 API, not to everything the way .NET does. Here is a pretty good article on the rules and restrictions in WinRT.
WinRT in general does not have access to the file system. It works with capabilities: you can declare the file-access capability, but this restricts your app's access to certain areas. Here is a good example of how to do file access via WinRT.
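On the hashing sub-questions specifically, here is a hedged sketch of chunked hashing with CryptographicHash.Append, plus CryptographicBuffer for getting the bytes out of an IBuffer. It assumes the WinRT stream APIs behave as documented, where ReadAsync at end of file returns a buffer with Length 0:
using System.Threading.Tasks;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;
using Windows.Storage;
using Windows.Storage.Streams;

static async Task<string> HashFileAsync(StorageFile file)
{
    var alg = HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Md5);
    CryptographicHash hasher = alg.CreateHash();
    using (var stream = await file.OpenAsync(FileAccessMode.Read))
    {
        var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024); // 1 MB chunks
        while (true)
        {
            // ReadAsync returns a buffer whose Length is the number of bytes actually
            // read; an empty buffer means end of file, so short files are handled too.
            IBuffer chunk = await stream.ReadAsync(buffer, buffer.Capacity, InputStreamOptions.None);
            if (chunk.Length == 0) break;
            hasher.Append(chunk);
        }
    }
    IBuffer hash = hasher.GetValueAndReset();
    byte[] hashBytes;
    CryptographicBuffer.CopyToByteArray(hash, out hashBytes); // IBuffer -> byte[]
    return CryptographicBuffer.EncodeToHexString(hash);
}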
