I am using JSReport to generate reports in .NET Core 2. In the method below, the view is returned to the user and the report is saved to a specified directory:
public IActionResult ImageDownload()
{
HttpContext.JsReportFeature().Recipe(Recipe.PhantomPdf)
.Configure((r) => r.Template.Phantom = new Phantom
{
Format = PhantomFormat.A4,
Orientation = PhantomOrientation.Portrait
}).OnAfterRender( (r) =>
{
var streamIo = r.Content; // r.Content is a System.IO.Stream
streamIo.CopyTo(System.IO.File.OpenWrite("C:\\GeneratedReports\\myReport.pdf"));
streamIo.Seek(0, SeekOrigin.Begin);
}
);
var dp = new Classes.DataProvider();
var lstnames = dp.GetRegisteredNames();
var lst = lstnames.ToArray<string>();
return View("Users", lst);
}
Once the view is returned, a new browser tab opens displaying the PDF report, and the same PDF is saved on the web server in the given directory. The problem is that the report created in the directory seems to be locked: I cannot copy it or open it unless I close the .NET solution. Can anyone explain what's happening here?
Your FileStream isn't closed when OnAfterRender completes, so no other application can open or access the file. Try wrapping the File.OpenWrite call in a using block so the FileStream is disposed as soon as the copy finishes, e.g.
public IActionResult ImageDownload()
{
HttpContext.JsReportFeature().Recipe(Recipe.PhantomPdf)
.Configure((r) => r.Template.Phantom = new Phantom
{
Format = PhantomFormat.A4,
Orientation = PhantomOrientation.Portrait
}).OnAfterRender( (r) =>
{
var streamIo = r.Content; // r.Content is a System.IO.Stream
using(var fs = System.IO.File.OpenWrite("C:\\GeneratedReports\\myReport.pdf"))
{
streamIo.CopyTo(fs);
}
streamIo.Seek(0, SeekOrigin.Begin);
}
);
var dp = new Classes.DataProvider();
var lstnames = dp.GetRegisteredNames();
var lst = lstnames.ToArray<string>();
return View("Users", lst);
}
I want to write export/download functionality for files from an external API.
I've created a separate action for it. Using the external API I can get a stream for the file.
When I save that stream to a local file, everything is fine and the file isn't empty.
var exportedFile = await this.GetExportedFile(client, this.ReportId, this.WorkspaceId, export);
// Now you have the exported file stream ready to be used according to your specific needs
// For example, saving the file can be done as follows:
string pathOnDisk = @"D:\Temp\" + export.ReportName + exportedFile.FileSuffix;
using (var fileStream = File.Create(pathOnDisk))
{
await exportedFile.FileStream.CopyToAsync(fileStream);
}
But when I return the exportedFile object that contains the stream and do the following:
var result = await this._service.ExportReport(reportName, format, CancellationToken.None);
var fileResult = new HttpResponseMessage(HttpStatusCode.OK);
using (var ms = new MemoryStream())
{
await result.FileStream.CopyToAsync(ms);
ms.Position = 0;
fileResult.Content = new ByteArrayContent(ms.GetBuffer());
}
fileResult.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = $"{reportName}{result.FileSuffix}"
};
fileResult.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return fileResult;
The exported file is always empty.
Is the problem with the stream, or with the code that tries to return that stream as a file?
Tried ToArray as #Nobody suggested:
fileResult.Content = new ByteArrayContent(ms.ToArray());
the same result.
Also tried to use StreamContent
fileResult.Content = new StreamContent(result.FileStream);
still empty file.
But when I use StreamContent and a MemoryStream
using (var ms = new MemoryStream())
{
await result.FileStream.CopyToAsync(ms);
ms.Position = 0;
fileResult.Content = new StreamContent(ms);
}
the result I get is
{
"error": "no response from server"
}
Note: the stream I get from the 3rd-party API is read-only.
You used GetBuffer() to retrieve the data of the MemoryStream.
The method you should use is ToArray().
Please read the Remarks sections of the documentation for these methods:
https://learn.microsoft.com/en-us/dotnet/api/system.io.memorystream.getbuffer?view=net-6.0
using (var ms = new MemoryStream())
{
await result.FileStream.CopyToAsync(ms);
fileResult.Content = new ByteArrayContent(ms.ToArray()); // ToArray(), not GetBuffer()
}
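The difference is easy to see in a minimal console sketch: ToArray() returns only the bytes actually written, while GetBuffer() returns the whole internal buffer including unused capacity, which is why the returned file can come out larger than the original.

```csharp
using System;
using System.IO;

class BufferVsToArray
{
    static void Main()
    {
        using var ms = new MemoryStream();
        ms.Write(new byte[10], 0, 10);

        Console.WriteLine(ms.ToArray().Length);   // 10: only the bytes written
        Console.WriteLine(ms.GetBuffer().Length); // the full internal buffer (capacity), e.g. 256
    }
}
```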
Your "mistake", although it's an obvious one, is that you return a status message, not the actual file itself (the message on its own is also a 200).
You return this:
var fileResult = new HttpResponseMessage(HttpStatusCode.OK);
So you're not sending a file, but a response message. What's missing from your code samples is the action itself, but since you use an HttpResponseMessage I will assume it's a normal controller action. If that is the case, you could respond in a different manner:
return new FileContentResult(byteArray, mimeType){ FileDownloadName = filename };
where byteArray is of course just a byte[], the mime type could be application/octet-stream (though I suggest you find the correct mime type so the browser can act accordingly), and the filename is the name you want the downloaded file to have.
So, if you were to stitch above and my comment together you'd get this:
var exportedFile = await this.GetExportedFile(client, this.ReportId, this.WorkspaceId, export);
// Now you have the exported file stream ready to be used according to your specific needs
// For example, saving the file can be done as follows:
string pathOnDisk = @"D:\Temp\" + export.ReportName + exportedFile.FileSuffix;
using (var fileStream = File.Create(pathOnDisk))
{
await exportedFile.FileStream.CopyToAsync(fileStream);
}
return new FileContentResult(System.IO.File.ReadAllBytes(pathOnDisk), "application/octet-stream") { FileDownloadName = export.ReportName + exportedFile.FileSuffix };
I suggest you try it, since you currently return a plain 200 (and not a file result).
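If the temp file on disk isn't actually needed, a hedged alternative (assuming an ASP.NET Core MVC controller and the question's _service field) is to hand the stream straight to a FileStreamResult, which writes it to the response and disposes it afterwards, so there is no intermediate MemoryStream to get wrong:

```csharp
// Sketch only: assumes an ASP.NET Core controller and the question's _service field.
public async Task<IActionResult> Download(string reportName, string format)
{
    var result = await this._service.ExportReport(reportName, format, CancellationToken.None);

    // FileStreamResult streams the content to the client and disposes the stream
    // once the response has been written.
    return new FileStreamResult(result.FileStream, "application/octet-stream")
    {
        FileDownloadName = $"{reportName}{result.FileSuffix}"
    };
}
```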
I tried 'using' but it says that the method is not IDisposable. I checked for running processes in Task Manager; nothing there. My goal is to upload a file from a local directory into the rich text editor on my website. Please help me resolve this issue. Thanks in advance.
public void OnPostUploadDocument()
{
var projectRootPath = Path.Combine(_hostingEnvironment.ContentRootPath, "UploadedDocuments");
var filePath = Path.Combine(projectRootPath, UploadedDocument.FileName);
UploadedDocument.CopyTo(new FileStream(filePath, FileMode.Create));
// Retain the path of uploaded document between sessions.
UploadedDocumentPath = filePath;
ShowDocumentContentInTextEditor();
}
private void ShowDocumentContentInTextEditor()
{
WordProcessingLoadOptions loadOptions = new WordProcessingLoadOptions();
Editor editor = new Editor(UploadedDocumentPath, delegate { return loadOptions; }); //passing path and load options (via delegate) to the constructor
EditableDocument document = editor.Edit(new WordProcessingEditOptions()); //opening document for editing with format-specific edit options
DocumentContent = document.GetBodyContent(); //document.GetContent();
Console.WriteLine("HTMLContent: " + DocumentContent);
//string embeddedHtmlContent = document.GetEmbeddedHtml();
//Console.WriteLine("EmbeddedHTMLContent: " + embeddedHtmlContent);
}
FileStream is disposable, so you can use using on it:
using (var stream = new FileStream(filePath, FileMode.Create))
{
UploadedDocument.CopyTo(stream);
}
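Applied to the upload handler from the question, a minimal sketch (keeping the question's UploadedDocument, UploadedDocumentPath, and ShowDocumentContentInTextEditor members) would be:

```csharp
public void OnPostUploadDocument()
{
    var projectRootPath = Path.Combine(_hostingEnvironment.ContentRootPath, "UploadedDocuments");
    var filePath = Path.Combine(projectRootPath, UploadedDocument.FileName);

    // Dispose the FileStream as soon as the copy completes so the file isn't left locked.
    using (var stream = new FileStream(filePath, FileMode.Create))
    {
        UploadedDocument.CopyTo(stream);
    }

    // Retain the path of the uploaded document between sessions.
    UploadedDocumentPath = filePath;
    ShowDocumentContentInTextEditor();
}
```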
I would like to generate the PDF based on the view, but I don't want to display it after generating it, just save it to disk.
As I am on Azure, I had to use the Docker version, but it did not print the footer (page count).
So I will use iText7 to add the footer (page count), delete the original PDF, and display the new output.
A huge job, but the only way I found, since Rotativa and other components that work with wkhtmltopdf.org did not render the CSS correctly.
So my problem is:
How do I save the PDF without displaying it?
The example from the site
https://jsreport.net/learn/dotnet-aspnetcore#save-to-file
needs the return View(), which leaves me displaying the original PDF instead of the modified one.
[MiddlewareFilter(typeof(JsReportPipeline))]
public async Task<IActionResult> InvoiceWithHeader()
{
HttpContext.JsReportFeature().Recipe(Recipe.ChromePdf);
HttpContext.JsReportFeature().OnAfterRender((r) => {
using (var file = System.IO.File.Open("report.pdf", FileMode.Create))
{
r.Content.CopyTo(file);
}
r.Content.Seek(0, SeekOrigin.Begin);
});
return View(InvoiceModel.Example());
}
OnAfterRender does not solve my problem. Is there a way to do the steps below, or is there another, better solution?
Generate the PDF of the action with JsReport
Save the PDF from JsReport
Add the page count with iText7
Delete the original PDF from JsReport
Display the PDF modified by iText7 in the view
NOTE: using new jsreport.Local.LocalReporting() works perfectly; the problem only appeared after deploying to Azure.
Update:
I tried the following, but it didn't work:
var htmlContent = await JsReportMVCService.RenderViewToStringAsync(HttpContext, RouteData, "/Views/OcorrenciaTalaos/GerarPdf.cshtml", retorno);
(var contentType, var generatedFile) = await GeneratePDFAsync(htmlContent);
using (var fileStream = new FileStream("tempJsReport.pdf", FileMode.Create))
{
await generatedFile.CopyToAsync(fileStream);
}
public async Task<(string ContentType, MemoryStream GeneratedFileStream)> GeneratePDFAsync(string htmlContent)
{
IJsReportFeature feature = new JsReportFeature(HttpContext);
feature.Recipe(Recipe.ChromePdf);
if (!feature.Enabled) return (null, null);
feature.RenderRequest.Template.Content = htmlContent;
// var htmlContent = await JsReportMVCService.RenderViewToStringAsync(HttpContext, RouteData, "GerarPdf", retorno);
var report = await JsReportMVCService.RenderAsync(feature.RenderRequest);
var contentType = report.Meta.ContentType;
MemoryStream ms = new MemoryStream();
report.Content.CopyTo(ms);
return (contentType, ms);
}
You can overwrite the final response inside the OnAfterRender this way:
[MiddlewareFilter(typeof(JsReportPipeline))]
public IActionResult InvoiceDownload()
{
HttpContext.JsReportFeature().Recipe(Recipe.ChromePdf)
.OnAfterRender((r) =>
{
// write current report to file
using(var fileStream = System.IO.File.Create("c://temp/out.pdf"))
{
r.Content.CopyTo(fileStream);
}
// do modifications
// ...
// overwrite response with a new pdf
r.Content = System.IO.File.OpenRead("c://temp/final.pdf");
});
return View("Invoice", InvoiceModel.Example());
}
However, the page numbers should work, and you shouldn't need to do it in this complicated way, whether you are in Docker or not. Here is the answer in another question.
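For reference, a hedged sketch of turning on Chrome's built-in page numbers through the .NET SDK; the property names come from jsreport.Types, and the margins and footer markup are illustrative:

```csharp
HttpContext.JsReportFeature().Recipe(Recipe.ChromePdf)
    .Configure((r) => r.Template.Chrome = new Chrome
    {
        DisplayHeaderFooter = true,
        MarginTop = "80px",
        MarginBottom = "80px",
        // Chrome fills in the pageNumber/totalPages spans while printing
        FooterTemplate = "<div style='font-size:10px;text-align:center;width:100%'>" +
                         "<span class='pageNumber'></span> / <span class='totalPages'></span></div>"
    });
```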
I'm working on PDF-to-text conversion using the Google Cloud Vision API.
I got initial code help from their side; image-to-text conversion works fine with the JSON key I got through registration and activation.
Here is the code I got for PDF-to-text conversion:
private static object DetectDocument(string gcsSourceUri,
string gcsDestinationBucketName, string gcsDestinationPrefixName)
{
var client = ImageAnnotatorClient.Create();
var asyncRequest = new AsyncAnnotateFileRequest
{
InputConfig = new InputConfig
{
GcsSource = new GcsSource
{
Uri = gcsSourceUri
},
// Supported mime_types are: 'application/pdf' and 'image/tiff'
MimeType = "application/pdf"
},
OutputConfig = new OutputConfig
{
// How many pages should be grouped into each json output file.
BatchSize = 2,
GcsDestination = new GcsDestination
{
Uri = $"gs://{gcsDestinationBucketName}/{gcsDestinationPrefixName}"
}
}
};
asyncRequest.Features.Add(new Feature
{
Type = Feature.Types.Type.DocumentTextDetection
});
List<AsyncAnnotateFileRequest> requests =
new List<AsyncAnnotateFileRequest>();
requests.Add(asyncRequest);
var operation = client.AsyncBatchAnnotateFiles(requests);
Console.WriteLine("Waiting for the operation to finish");
operation.PollUntilCompleted();
// Once the request has completed and the output has been
// written to GCS, we can list all the output files.
var storageClient = StorageClient.Create();
// List objects with the given prefix.
var blobList = storageClient.ListObjects(gcsDestinationBucketName,
gcsDestinationPrefixName);
Console.WriteLine("Output files:");
foreach (var blob in blobList)
{
Console.WriteLine(blob.Name);
}
// Process the first output file from GCS.
// Select the first JSON file from the objects in the list.
var output = blobList.Where(x => x.Name.Contains(".json")).First();
var jsonString = "";
using (var stream = new MemoryStream())
{
storageClient.DownloadObject(output, stream);
jsonString = System.Text.Encoding.UTF8.GetString(stream.ToArray());
}
var response = JsonParser.Default
.Parse<AnnotateFileResponse>(jsonString);
// The actual response for the first page of the input file.
var firstPageResponses = response.Responses[0];
var annotation = firstPageResponses.FullTextAnnotation;
// Here we print the full text from the first page.
// The response contains more information:
// annotation/pages/blocks/paragraphs/words/symbols
// including confidence scores and bounding boxes
Console.WriteLine($"Full text: \n {annotation.Text}");
return 0;
}
This function requires 3 parameters:
string gcsSourceUri,
string gcsDestinationBucketName,
string gcsDestinationPrefixName
I don't understand which values I should set for these 3 parameters.
I've never worked with a third-party API before, so it's a little confusing for me.
Suppose you own a GCS bucket named 'giri_bucket' and you put a pdf at the root of the bucket 'test.pdf'. If you wanted to write the results of the operation to the same bucket you could set the arguments to be
gcsSourceUri: 'gs://giri_bucket/test.pdf'
gcsDestinationBucketName: 'giri_bucket'
gcsDestinationPrefixName: 'async_test'
When the operation completes, there will be 1 or more output files in your GCS bucket at giri_bucket/async_test.
If you want, you could even write your output to a different bucket. You just need to make sure your gcsDestinationBucketName + gcsDestinationPrefixName is unique.
You can read more about the request format in the docs: AsyncAnnotateFileRequest
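Stitched together, the call with those example values would look like this (the bucket and prefix are the illustrative names from above):

```csharp
// Hypothetical invocation using the example values from this answer.
DetectDocument(
    "gs://giri_bucket/test.pdf", // gcsSourceUri: the PDF to process
    "giri_bucket",               // gcsDestinationBucketName: bucket for the JSON output
    "async_test");               // gcsDestinationPrefixName: prefix for the output objects
```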
I am building an add-in for Word, with the goal of being able to save the open Word document to our MVC web application. I have followed this guide and am sending the slices like this:
function sendSlice(slice, state) {
var data = slice.data;
if (data) {
var fileData = myEncodeBase64(data);
var request = new XMLHttpRequest();
request.onreadystatechange = function () {
if (request.readyState == 4) {
updateStatus("Sent " + slice.size + " bytes.");
state.counter++;
if (state.counter < state.sliceCount) {
getSlice(state);
}
else {
closeFile(state);
}
}
}
request.open("POST", "http://localhost:44379/api/officeupload/1");
request.setRequestHeader("Slice-Number", slice.index);
request.setRequestHeader("Total-Slices", state.sliceCount);
request.setRequestHeader("FileId", "abc29572-8eca-473d-80de-8b87d64e06a0");
request.setRequestHeader("FileName", "file.docx");
request.send(fileData);
}
}
And then receiving the slices like this:
public void Post()
{
if (Files == null) Files = new Dictionary<Guid, Dictionary<int, byte[]>>();
var slice = int.Parse(Request.Headers.GetValues("Slice-Number").First());
var numSlices = int.Parse(Request.Headers.GetValues("Total-Slices").First());
var filename = Request.Headers.GetValues("FileName").First();
var fileId = Guid.Parse(Request.Headers.GetValues("FileId").First());
var content = Request.Content.ReadAsStringAsync().Result;
if (!Files.ContainsKey(fileId)) Files[fileId] = new Dictionary<int, byte[]>();
Files[fileId][slice] = Convert.FromBase64String(content);
if (Files[fileId].Keys.Count == numSlices)
{
byte[] array = Combine(Files[fileId].OrderBy(x => x.Key).Select(x => x.Value).ToArray());
System.IO.FileStream writeFileStream = new System.IO.FileStream("c:\\temp\\test.docx", System.IO.FileMode.Create, System.IO.FileAccess.Write);
writeFileStream.Write(array, 0, array.Length);
writeFileStream.Close();
Files.Remove(fileId);
}
}
The problem is that the file produced by the controller is unreadable in Word. I tested with a Word document containing only "Test123": saved through Word the file is 13 kB, but when sent to the web app and saved from there it is 41 kB.
My assumption is that I am missing something in either the encoding or the decoding, since I am only sending a single slice, so there shouldn't be an issue with recombining them.
There's an Excel snippet in Script Lab that produces the base64-encoded file, which you can paste into an online decoder like www.base64decode.org. The APIs are the same as in Word, so this can help you isolate the encoding code. After you install Script Lab, open the Samples tab and scroll to the Document section; it's the "Get file (using slicing)" snippet.