I need to send a photo album in a bundle to the telegram bot. The number of photos is unknown in advance.
I wrote the code:
List<IAlbumInputMedia> streamArray = new List<IAlbumInputMedia> {};
foreach (var formFile in files)
{
if (formFile.Length > 0)
{
using var stream = formFile.OpenReadStream();
streamArray.Add(stream); // error here: cannot convert from 'System.IO.Stream' to 'Telegram.Bot.Types.IAlbumInputMedia'
//await clientTg.SendPhotoAsync(groupId,stream); // it works fine
}
}
await clientTg.SendMediaGroupAsync(groupId, streamArray);
I can't add the stream to the streamArray list; the error is "cannot convert from 'System.IO.Stream' to 'Telegram.Bot.Types.IAlbumInputMedia'".
A single stream is sent fine via the SendPhotoAsync method that is commented out in the code.
How do I convert these types and send a group of photos?
According to the docs:
Message[] messages = await botClient.SendMediaGroupAsync(
chatId: chatId,
media: new IAlbumInputMedia[]
{
new InputMediaPhoto("https://cdn.pixabay.com/photo/2017/06/20/19/22/fuchs-2424369_640.jpg"),
new InputMediaPhoto("https://cdn.pixabay.com/photo/2017/04/11/21/34/giraffe-2222908_640.jpg"),
}
);
You must explicitly set the media type for each file.
In your case it will be something like:
streamArray.Add(new InputMediaPhoto(stream, $"file{DateTime.Now.ToString("s").Replace(":", ".")}"));
@Vadim's answer didn't work, probably because I can't call Add in this case, but it did push me in the right direction. I decided to write separate branches of code for different numbers of photos.
if (files.Count == 2) // <<<< 2 photos
{
await using var stream1 = files[0].OpenReadStream();
await using var stream2 = files[1].OpenReadStream();
IAlbumInputMedia[] streamArray =
{
new InputMediaPhoto(new InputMedia(stream1, "111"))
{
Caption = "Cap 111"
},
new InputMediaPhoto(new InputMedia(stream2, "222"))
{
Caption = "Cap 222"
},
};
await clientTg.SendMediaGroupAsync(groupId, streamArray);
}
I'm not sure that I'm using await using correctly, but at least it works.
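For the general case, where the number of photos is unknown in advance, a sketch along the same lines should work, using the same InputMedia/InputMediaPhoto constructors as above; the important part is to keep every stream open until SendMediaGroupAsync has completed:
var openStreams = new List<Stream>();
try
{
    var album = new List<IAlbumInputMedia>();
    var index = 0;
    foreach (var formFile in files)
    {
        if (formFile.Length == 0)
            continue;

        var stream = formFile.OpenReadStream();
        openStreams.Add(stream); // disposed in the finally block, after the album has been sent

        album.Add(new InputMediaPhoto(new InputMedia(stream, $"photo{index}"))
        {
            Caption = $"Cap {index}"
        });
        index++;
    }

    // Note: Telegram media groups accept 2-10 items per request
    if (album.Count > 0)
        await clientTg.SendMediaGroupAsync(groupId, album);
}
finally
{
    foreach (var s in openStreams)
        await s.DisposeAsync();
}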
I want to write export/download functionality for files from an external API.
I've created a separate Action for it. Using the external API I can get a stream for the file.
When I save that stream to a local file, everything is fine and the file isn't empty.
var exportedFile = await this.GetExportedFile(client, this.ReportId, this.WorkspaceId, export);
// Now you have the exported file stream ready to be used according to your specific needs
// For example, saving the file can be done as follows:
string pathOnDisk = @"D:\Temp\" + export.ReportName + exportedFile.FileSuffix;
using (var fileStream = File.Create(pathOnDisk))
{
await exportedFile.FileStream.CopyToAsync(fileStream);
}
But when I return the exportedFile object that contains the stream and do the following:
var result = await this._service.ExportReport(reportName, format, CancellationToken.None);
var fileResult = new HttpResponseMessage(HttpStatusCode.OK);
using (var ms = new MemoryStream())
{
await result.FileStream.CopyToAsync(ms);
ms.Position = 0;
fileResult.Content = new ByteArrayContent(ms.GetBuffer());
}
fileResult.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = $"{reportName}{result.FileSuffix}"
};
fileResult.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return fileResult;
The exported file is always empty.
Is it a problem with the stream or with the code that tries to return that stream as a file?
I tried, as @Nobody suggested, to use ToArray()
fileResult.Content = new ByteArrayContent(ms.ToArray());
the same result.
I also tried to use StreamContent
fileResult.Content = new StreamContent(result.FileStream);
still empty file.
But when I use StreamContent with a MemoryStream
using (var ms = new MemoryStream())
{
await result.FileStream.CopyToAsync(ms);
ms.Position = 0;
fileResult.Content = new StreamContent(ms);
}
as a result I got
{
"error": "no response from server"
}
Note: the stream I get from the 3rd-party API is read-only.
You used GetBuffer() to retrieve the data of the memory stream.
The method you should use is ToArray().
Please read the Remarks section in the documentation of these methods.
https://learn.microsoft.com/en-us/dotnet/api/system.io.memorystream.getbuffer?view=net-6.0
using (var ms = new MemoryStream())
{
ms.Position = 0;
await result.FileStream.CopyToAsync(ms);
fileResult.Content = new ByteArrayContent(ms.ToArray()); //ToArray() and not GetBuffer()
}
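To see why that matters: GetBuffer() returns the whole internal buffer of the MemoryStream, including unused capacity, while ToArray() copies only the bytes that were actually written. A tiny illustration (not part of the original code):
using (var ms = new MemoryStream())
{
    ms.Write(new byte[] { 1, 2, 3 }, 0, 3);

    var buffer = ms.GetBuffer(); // the whole internal buffer (e.g. 256 bytes), padded with zeros
    var data = ms.ToArray();     // exactly the 3 bytes that were written

    Console.WriteLine($"GetBuffer: {buffer.Length} bytes, ToArray: {data.Length} bytes");
}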
Your "mistake" although it's an obvious one is that you return a status message, but not the actual file itself (which is in it's own also a 200).
You return this:
var fileResult = new HttpResponseMessage(HttpStatusCode.OK);
So you're not sending a file, but a response message. What I'm missing in your code samples is the procedure call itself, but since you use an HttpResponseMessage I will assume it's rather like a normal Controller action. If that is the case, you could respond in a different manner:
return new FileContentResult(byteArray, mimeType){ FileDownloadName = filename };
where byteArray is of course just a byte[], the MIME type could be application/octet-stream (though I suggest you find the correct MIME type so the browser can act accordingly), and the filename is the name you want the downloaded file to have.
So, if you were to stitch the above and my comment together, you'd get this:
var exportedFile = await this.GetExportedFile(client, this.ReportId, this.WorkspaceId, export);
// Now you have the exported file stream ready to be used according to your specific needs
// For example, saving the file can be done as follows:
string pathOnDisk = @"D:\Temp\" + export.ReportName + exportedFile.FileSuffix;
using (var fileStream = File.Create(pathOnDisk))
{
await exportedFile.FileStream.CopyToAsync(fileStream);
}
return new FileContentResult(System.IO.File.ReadAllBytes(pathOnDisk), "application/octet-stream") { FileDownloadName = export.ReportName + exportedFile.FileSuffix };
I suggest trying it, since you are still returning a 200 status message (and not a file result).
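If writing the temp file under D:\Temp is not wanted, the same FileContentResult idea can be fed straight from the exported stream instead (a sketch reusing the names from the code above):
var exportedFile = await this.GetExportedFile(client, this.ReportId, this.WorkspaceId, export);

using (var ms = new MemoryStream())
{
    await exportedFile.FileStream.CopyToAsync(ms);

    // ToArray() copies only the written bytes, as discussed in the other answer
    return new FileContentResult(ms.ToArray(), "application/octet-stream")
    {
        FileDownloadName = export.ReportName + exportedFile.FileSuffix
    };
}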
I'm trying to upload a file using Microsoft's Graph SDK but have hit a problem.
I have pretty much copied verbatim the C# example from here, commented-out the progress part and updated the using statement for C# 8, and here's what I have...
public async Task<bool> UploadFileAsync(string parentFolderId, string filename, byte[] bytes)
{
var graphClient = GetGraphClient();
// Declare the variable outside the `using` statement to get around a little C# problem - https://www.tabsoverspaces.com/233779-using-await-using-iasyncdisposable-with-configureawait
var memoryStream = new MemoryStream(bytes);
await using (memoryStream.ConfigureAwait(false))
{
// Use properties to specify the conflict behavior.
// - in this case, replace.
var uploadProps = new DriveItemUploadableProperties
{
AdditionalData = new Dictionary<string, object> {{"#microsoft.graph.conflictBehavior", "replace"}},
ODataType = null
};
try
{
// Create the upload session.
// - itemPath does not need to be a path to an existing item.
var uploadSession = await graphClient.Drives[_driveId]
.Items[parentFolderId]
.ItemWithPath(filename)
.CreateUploadSession(uploadProps)
.Request()
.PostAsync()
.ConfigureAwait(false);
// Max slice size must be a multiple of 320KB.
const int maxSliceSize = 320 * 1024;
var fileUploadTask = new LargeFileUploadTask<DriveItem>(uploadSession, memoryStream, maxSliceSize);
// Upload the file.
var uploadResult = await fileUploadTask.UploadAsync().ConfigureAwait(false);
if (uploadResult.UploadSucceeded)
{
// The ItemResponse object in the result represents the created item.
return true;
}
return false;
}
catch (ServiceException exception)
{
// ...
}
}
}
However the line...
var uploadSession = await graphClient.Drives[_driveId]
.Items[parentFolderId]
...
...throws an exception:
Microsoft.Graph.ServiceException: Code: BadRequest
Message: Multiple action overloads were found with the same binding parameter for 'microsoft.graph.createUploadSession'.
Can anyone help?
I figured out the cause of the problem: the filename I was using contained invalid characters (in my case I was stringifying a DateTime, which contained ':').
It's frustrating that this exception doesn't bubble up correctly and instead I got that "Multiple action overloads" message.
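For anyone hitting the same error, a hypothetical pre-check like the one below avoids passing such a name to ItemWithPath in the first place (the helper name and the character set are my own; the listed characters are the ones OneDrive/SharePoint commonly reject in file names):
// Hypothetical helper: replace characters such as ':' (which appears when a DateTime is stringified).
private static string SanitizeFileName(string fileName)
{
    foreach (var c in new[] { '"', '*', ':', '<', '>', '?', '/', '\\', '|' })
        fileName = fileName.Replace(c, '_');
    return fileName;
}

// e.g. "report 2022-01-01T10:30:00.docx" becomes "report 2022-01-01T10_30_00.docx"
var safeName = SanitizeFileName($"report {DateTime.Now:s}.docx");
await UploadFileAsync(parentFolderId, safeName, bytes);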
I have a 200 GB text file in Azure Blob Storage. I want to search the text and download only the matching line, instead of downloading the whole 200 GB file and then selecting that line.
I have written code in C# that downloads the complete file and then searches and selects, but it takes too much time and then fails with a timeout error.
var contents = ""; // downloading the whole text from Azure Blob Storage
StringReader strReader = new StringReader(contents);
var searchedLines1 = contents.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries).
Select((text, index) => new { text, lineNumber = index + 1 })
.Where(x => x.text.Contains("TYLER15727#YAHOO.COM") || x.lineNumber == 1);
You will need to stream the file and set the timeout. I have wrapped the stream implementation in an IAsyncEnumerable, which is completely unnecessary... but why not.
Given
public static async IAsyncEnumerable<string> Read(StreamReader stream)
{
while(!stream.EndOfStream)
yield return await stream.ReadLineAsync();
}
Usage
var blobClient = new BlobClient( ... , new BlobClientOptions()
{
Transport = new HttpClientTransport(new HttpClient {Timeout = Timeout.InfiniteTimeSpan}),
Retry = {NetworkTimeout = Timeout.InfiniteTimeSpan}
});
await using var stream = await blobClient.OpenReadAsync();
using var reader = new StreamReader(stream);
await foreach (var line in Read(reader))
if (line.Contains("bob"))
{
Console.WriteLine("Yehaa");
// exit or what ever
}
Disclaimer: completely untested.
Note: If you are using C# 4 you will need to remove all the awaits and async methods, and just use a plain loop with StreamReader.ReadLine.
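For reference, a rough synchronous equivalent of the loop above (assuming the synchronous OpenRead overload of BlobClient is available in your SDK version):
using (var stream = blobClient.OpenRead())
using (var reader = new StreamReader(stream))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        if (line.Contains("bob"))
        {
            Console.WriteLine("Yehaa");
            break; // exit or whatever
        }
    }
}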
I'm working on PDF to text file conversion using the Google Cloud Vision API.
I got some initial code help from their side; image to text conversion works fine with the JSON key which I got through registration and activation.
Here is the code I got for PDF to text conversion:
private static object DetectDocument(string gcsSourceUri,
string gcsDestinationBucketName, string gcsDestinationPrefixName)
{
var client = ImageAnnotatorClient.Create();
var asyncRequest = new AsyncAnnotateFileRequest
{
InputConfig = new InputConfig
{
GcsSource = new GcsSource
{
Uri = gcsSourceUri
},
// Supported mime_types are: 'application/pdf' and 'image/tiff'
MimeType = "application/pdf"
},
OutputConfig = new OutputConfig
{
// How many pages should be grouped into each json output file.
BatchSize = 2,
GcsDestination = new GcsDestination
{
Uri = $"gs://{gcsDestinationBucketName}/{gcsDestinationPrefixName}"
}
}
};
asyncRequest.Features.Add(new Feature
{
Type = Feature.Types.Type.DocumentTextDetection
});
List<AsyncAnnotateFileRequest> requests =
new List<AsyncAnnotateFileRequest>();
requests.Add(asyncRequest);
var operation = client.AsyncBatchAnnotateFiles(requests);
Console.WriteLine("Waiting for the operation to finish");
operation.PollUntilCompleted();
// Once the request has completed and the output has been
// written to GCS, we can list all the output files.
var storageClient = StorageClient.Create();
// List objects with the given prefix.
var blobList = storageClient.ListObjects(gcsDestinationBucketName,
gcsDestinationPrefixName);
Console.WriteLine("Output files:");
foreach (var blob in blobList)
{
Console.WriteLine(blob.Name);
}
// Process the first output file from GCS.
// Select the first JSON file from the objects in the list.
var output = blobList.Where(x => x.Name.Contains(".json")).First();
var jsonString = "";
using (var stream = new MemoryStream())
{
storageClient.DownloadObject(output, stream);
jsonString = System.Text.Encoding.UTF8.GetString(stream.ToArray());
}
var response = JsonParser.Default
.Parse<AnnotateFileResponse>(jsonString);
// The actual response for the first page of the input file.
var firstPageResponses = response.Responses[0];
var annotation = firstPageResponses.FullTextAnnotation;
// Here we print the full text from the first page.
// The response contains more information:
// annotation/pages/blocks/paragraphs/words/symbols
// including confidence scores and bounding boxes
Console.WriteLine($"Full text: \n {annotation.Text}");
return 0;
}
This function requires 3 parameters:
string gcsSourceUri,
string gcsDestinationBucketName,
string gcsDestinationPrefixName
I don't understand which values I should set for those 3 params.
I have never worked with a third-party API before, so it's a little bit confusing for me.
Suppose you own a GCS bucket named 'giri_bucket' and you put a PDF at the root of the bucket, 'test.pdf'. If you wanted to write the results of the operation to the same bucket, you could set the arguments to be:
gcsSourceUri: 'gs://giri_bucket/test.pdf'
gcsDestinationBucketName: 'giri_bucket'
gcsDestinationPrefixName: 'async_test'
When the operation completes, there will be 1 or more output files in your GCS bucket at giri_bucket/async_test.
If you want, you could even write your output to a different bucket. You just need to make sure your gcsDestinationBucketName + gcsDestinationPrefixName is unique.
You can read more about the request format in the docs: AsyncAnnotateFileRequest
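So, plugging those illustrative values into the function from the question would look like this:
DetectDocument(
    gcsSourceUri: "gs://giri_bucket/test.pdf",
    gcsDestinationBucketName: "giri_bucket",
    gcsDestinationPrefixName: "async_test");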
I need to upload a file using a Stream (Azure Blob Storage), and just cannot find out how to get the stream from the object itself. See the code below.
I'm new to Web API and have used some examples. I'm getting the files and file data, but it's not the correct type for my upload methods. Therefore, I need to get or convert it into a normal Stream, which seems a bit hard at the moment :)
I know I need to use ReadAsStreamAsync().Result in some way, but it crashes in the foreach loop since I'm getting two provider.Contents (first one seems right, second one does not).
[System.Web.Http.HttpPost]
public async Task<HttpResponseMessage> Upload()
{
if (!Request.Content.IsMimeMultipartContent())
{
this.Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);
}
var provider = GetMultipartProvider();
var result = await Request.Content.ReadAsMultipartAsync(provider);
// On upload, files are given a generic name like "BodyPart_26d6abe1-3ae1-416a-9429-b35f15e6e5d5"
// so this is how you can get the original file name
var originalFileName = GetDeserializedFileName(result.FileData.First());
// uploadedFileInfo object will give you some additional stuff like file length,
// creation time, directory name, a few filesystem methods etc..
var uploadedFileInfo = new FileInfo(result.FileData.First().LocalFileName);
// Remove this line as well as GetFormData method if you're not
// sending any form data with your upload request
var fileUploadObj = GetFormData<UploadDataModel>(result);
Stream filestream = null;
using (Stream stream = new MemoryStream())
{
foreach (HttpContent content in provider.Contents)
{
BinaryFormatter bFormatter = new BinaryFormatter();
bFormatter.Serialize(stream, content.ReadAsStreamAsync().Result);
stream.Position = 0;
filestream = stream;
}
}
var storage = new StorageServices();
storage.UploadBlob(filestream, originalFileName);
private MultipartFormDataStreamProvider GetMultipartProvider()
{
var uploadFolder = "~/App_Data/Tmp/FileUploads"; // you could put this to web.config
var root = HttpContext.Current.Server.MapPath(uploadFolder);
Directory.CreateDirectory(root);
return new MultipartFormDataStreamProvider(root);
}
This is identical to a dilemma I had a few months ago (capturing the upload stream before the MultipartStreamProvider took over and auto-magically saved the stream to a file). The recommendation was to inherit that class and override the methods ... but that didn't work in my case. :( (I wanted the functionality of both the MultipartFileStreamProvider and MultipartFormDataStreamProvider rolled into one MultipartStreamProvider, without the autosave part).
This might help; here's one written by one of the Web API developers, and this from the same developer.
Hi, just wanted to post my answer so that if anybody encounters the same issue they can find a solution here:
MultipartMemoryStreamProvider stream = await this.Request.Content.ReadAsMultipartAsync();
foreach (var st in stream.Contents)
{
var fileBytes = await st.ReadAsByteArrayAsync();
string base64 = Convert.ToBase64String(fileBytes);
var contentHeader = st.Headers;
string filename = contentHeader.ContentDisposition.FileName.Replace("\"", "");
string filetype = contentHeader.ContentType.MediaType;
}
I used MultipartMemoryStreamProvider and got all the details like filename and file type from the content headers.
Hope this helps someone.
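If, like the original question, you then need a Stream for the upload call (the StorageServices.UploadBlob(Stream, string) method shown above), the byte array can simply be wrapped in a MemoryStream; a sketch:
MultipartMemoryStreamProvider provider = await this.Request.Content.ReadAsMultipartAsync();
var storage = new StorageServices();

foreach (var content in provider.Contents)
{
    var fileBytes = await content.ReadAsByteArrayAsync();
    var filename = content.Headers.ContentDisposition.FileName.Replace("\"", "");

    // Wrap the bytes in a MemoryStream so the existing upload method can consume them
    using (var ms = new MemoryStream(fileBytes))
    {
        storage.UploadBlob(ms, filename);
    }
}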