To avoid the usual problem where an XML file lives in a folder in one project, I need to access it from other projects, and I have to wrestle with file-path issues, I want to download the XML file contents directly from Azure Blob Storage, where the file now resides.
I'm not sure how to accomplish that. I see many examples of downloading images into streams, but I'm not sure how that works for XML.
I am currently using the following (which works until I move the XML file):
public class MenuLoader
{
//var rootpath = HttpContext.Current.Server.MapPath("~");
private static readonly string NavMenuXmlPath = Path.Combine(ServicesHelpers.GetClassLibraryRootPath(),
@"ServicesDataFiles\MRNavigationMenu.xml");
//load the menus, based on the users role into the AppCache
public static void LoadMenus(IPrincipal principal)
{
var navXml = new NavigationMenusFromXml(NavMenuXmlPath);
var nmim = new NavigationMenuItemManager(navXml);
AppCache.Menus = nmim.Load(principal);
}
}
I want to eliminate all the hassle associated with path combining and just download the XML from the file on Azure, i.e. replacing the string
@"ServicesDataFiles\MRNavigationMenu.xml"
with
"https://batlgroupimages.blob.core.windows.net:443/files/MRNavigationMenu.xml"
Naturally, that wouldn't work as written, but there must be some way to load that XML into a variable for use with the method.
Note: that is a publicly accessible file on Azure, for testing.
Use a memory stream. Remember to set position to zero before attempting to read.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.IO;
using System.Xml;
namespace ConsoleApplication1
{
class Program
{
const string URL = "https://batlgroupimages.blob.core.windows.net/files/MRNavigationMenu.xml";
static void Main(string[] args)
{
MemoryStream stream = new MemoryStream();
XmlWriter writer = XmlWriter.Create(stream);
XmlDocument doc = new XmlDocument();
// XmlDocument.Load accepts a URL and fetches the document over HTTP
doc.Load(URL);
writer.WriteRaw(doc.OuterXml);
writer.Flush();
// reset the position before reading the stream back
stream.Position = 0;
}
}
}
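If you don't need the MemoryStream at all, XmlDocument.Load can consume the blob URL directly, or you can fetch the text with HttpClient first. A minimal sketch, using the question's public blob URL; the idea of feeding the parsed document (rather than a file path) into NavigationMenusFromXml is an assumption about that class:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml;

class XmlFromBlob
{
    const string Url = "https://batlgroupimages.blob.core.windows.net/files/MRNavigationMenu.xml";

    static async Task Main()
    {
        // fetch the raw XML text from the publicly readable blob
        using var client = new HttpClient();
        string xml = await client.GetStringAsync(Url);

        // parse it; no local file or Path.Combine involved
        var doc = new XmlDocument();
        doc.LoadXml(xml);
        Console.WriteLine(doc.DocumentElement?.Name);
    }
}
```

With something like this in place, the path-combining code in MenuLoader can go away entirely, provided NavigationMenusFromXml can be given the XML content instead of a path.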
Related
I have an ASP.NET Azure web application written in C# that involves the user uploading different pdfs into Azure Blob storage. I'd like the user to later download a combined PDF inclusive of previously-uploaded blobs in a specific order. Any idea on the best way to accomplish this?
Here are two workarounds you can try:
Use an Azure Function.
Download your PDF files from Azure Blob Storage to your local computer, then merge them.
Use of Azure Functions
Create an Azure Functions project and use the HTTP trigger.
Make sure you install the required NuGet packages before you start coding.
Create the Function code.
Create Azure function in the portal.
Publish the code.
We are ready to start writing code. We need two files:
ResultClass.cs – returns the merged file(s) as a list.
Function1.cs – code that takes the file names from the query string, fetches them from the storage account, merges them into one PDF, and returns a download URL.
ResultClass.cs
using System;
using System.Collections.Generic;
namespace FunctionApp1
{
public class Result
{
public Result(IList<string> newFiles)
{
this.files = newFiles;
}
public IList<string> files { get; private set; }
}
}
Function1.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Configuration;
using Microsoft.WindowsAzure.Storage.Blob;
using Newtonsoft.Json;
using PdfSharp.Pdf;
using PdfSharp.Pdf.IO;
namespace FunctionApp1
{
public class Function1
{
static Function1()
{
// This is required to avoid the "No data is available for encoding 1252" exception when saving the PdfDocument
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
}
[FunctionName("Function1")]
public async Task<Result> SplitUploadAsync(
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequestMessage req,
//container where files will be stored and accessed for retrieval. in this case, it's called temp-pdf
[Blob("temp-pdf", Connection = "")] CloudBlobContainer outputContainer,
ILogger log)
{
//get query parameters
string uriq = req.RequestUri.ToString();
string keyw = uriq.Substring(uriq.IndexOf('=') + 1);
//get file name in query parameters
String fileNames = keyw.Split("mergepfd&filenam=")[1];
//split file name
string[] files = fileNames.Split(',');
//process merge
var newFiles = await this.MergeFileAsync(outputContainer, files);
return new Result(newFiles);
}
private async Task<IList<string>> MergeFileAsync(CloudBlobContainer container, string[] blobfiles)
{
//init instance
PdfDocument outputDocument = new PdfDocument();
//loop through files sent in query
foreach (string fileblob in blobfiles)
{
string intfile = fileblob;
// get a reference to the blob
CloudBlockBlob blob = container.GetBlockBlobReference(intfile);
using (var memoryStream = new MemoryStream())
{
// download the blob's content into the stream
await blob.DownloadToStreamAsync(memoryStream);
// rewind before reading, otherwise PdfReader sees an empty stream
memoryStream.Position = 0;
// open the downloaded document for import
var inputDocument = PdfReader.Open(memoryStream, PdfDocumentOpenMode.Import);
//get pages
int count = inputDocument.PageCount;
for (int idx = 0; idx < count; idx++)
{
//append
outputDocument.AddPage(inputDocument.Pages[idx]);
}
}
}
var outputFiles = new List<string>();
var tempFile = String.Empty;
//call save function to store output in container
tempFile = await this.SaveToBlobStorageAsync(container, outputDocument);
outputFiles.Add(tempFile);
//return file(s) url
return outputFiles;
}
private async Task<string> SaveToBlobStorageAsync(CloudBlobContainer container, PdfDocument document)
{
//file name structure
var filename = $"merge-{DateTime.Now.ToString("yyyyMMddhhmmss")}-{Guid.NewGuid().ToString().Substring(0, 4)}.pdf";
// Creating an empty file pointer
var outputBlob = container.GetBlockBlobReference(filename);
using (var stream = new MemoryStream())
{
// save the result of the merge into the stream
document.Save(stream);
// rewind so the upload starts from the beginning of the stream
stream.Position = 0;
await outputBlob.UploadFromStreamAsync(stream);
}
//get sas token
var sasBlobToken = outputBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5),
Permissions = SharedAccessBlobPermissions.Read
});
//return sas token
return outputBlob.Uri + sasBlobToken;
}
}
}
Download your PDF files from Azure Blob Storage to your local computer, then merge them:
internal static void combineNormalPdfFiles()
{
String inputFilePath1 = @"C:\1.pdf";
String inputFilePath2 = @"C:\2.pdf";
String inputFilePath3 = @"C:\3.pdf";
String outputFilePath = @"C:\Output.pdf";
String[] inputFilePaths = new String[3] { inputFilePath1, inputFilePath2, inputFilePath3 };
// Combine three PDF files and output.
PDFDocument.CombineDocument(inputFilePaths, outputFilePath);
}
REFERENCES:
Azure Function to combine PDF Blobs in Azure Storage Account (Blob container)
C# Merge PDF SDK: Merge, combine PDF files in C#.net, ASP.NET, MVC, Ajax, WinForms, WPF
My objective is to send a PDF file from one server to a REST API that handles some archiving. I am using .NET Core 3.1 and the RestEase library to help with some of the abstraction.
I have a simple console app that runs at a certain time every day. The relevant code is as follows:
using System;
using System.Linq;
using System.Threading.Tasks;
using RestEase;
namespace CandidateUploadFile
{
class Program
{
static async Task Main(string[] args)
{
try
{
var apiClientBuilder = new ApiClientBuilder<ITestApi>();
var api = apiClientBuilder.GetApi("https://my.api.com");
var candidate = await api.GetCandidateByEmailAddress("tester@aol.com");
var fileName = "tester.pdf";
var fileBytesToUpload = await FileHelper.GetBytesFromFile($@"./{fileName}");
var result = await api.UploadCandidateFileAsync(fileBytesToUpload, candidate.Data.First().Id, fileName);
}
catch (Exception e)
{
System.Console.WriteLine(e);
}
}
}
}
apiClientBuilder does some auth-header adding, and that's really it. I'm certain that bit isn't relevant.
ITestApi looks like this:
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Models;
using RestEase;
namespace CandidateUploadFile
{
public interface ITestApi : IApi
{
[Get("v1/candidates/{candidateId}")]
Task<Models.Response<Candidate>> GetCandidate([Path] string candidateId);
[Get("v1/candidates")]
Task<Models.Response<IEnumerable<Candidate>>> GetCandidateByEmailAddress([Query] string email);
[Get("v1/candidates")]
Task<Models.Response<IEnumerable<Candidate>>> GetCandidates();
[Post("v1/candidates/{candidateId}/files?perform_as=327d4d21-5cb0-4bc7-95f5-ae43aabc2db7")]
Task<string> UploadFileAsync([Path] string candidateId, [Body] HttpContent content);
[Get("v1/users")]
Task<Models.Response<IEnumerable<User>>> GetUsers();
}
}
It's UploadFileAsync that is really relevant here.
You'll note from Program.Main that I don't explicitly invoke UploadFileAsync. I instead invoke an extension method that wraps UploadFileAsync in order to upload the PDF as a multipart/form-data request. This approach is what the RestEase library docs recommend. That extension method looks like this:
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
namespace CandidateUploadFile
{
public static class ApiExtension
{
public static async Task<string> UploadCandidateFileAsync(this ITestApi api, byte[] data, string candidateId, string fileName)
{
var content = new MultipartFormDataContent();
var fileContent = new ByteArrayContent(data);
fileContent.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
Name = "file",
FileName = fileName
};
content.Add(fileContent);
return await api.UploadFileAsync(candidateId, content);
}
}
}
So what happens when my console app executes is: I get a successful response from the upload endpoint, and the file gets created on the archive server, but it's blank.
It may be important to know that this does not happen when I send, say, a .txt file; the .txt file saves with the expected content.
Any insight would be helpful. I'm not sure where to start on this one.
Thank you!
The issue was due to what I was doing in my GetBytesFromFile static helper method.
My helper was running the binary content of the .pdf files through a UTF-8 encoding step before uploading. It appeared to work with the .txt files I was uploading, which is to be expected, since those really are text.
Lesson learned: there is no need (and it makes no sense) to try to encode binary content before assigning it to the multipart/form-data content. I just had to pass the binary content through as-is.
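A pass-through version of such a helper might look like the sketch below. GetBytesFromFile is the name used in the question; this body is an assumed reconstruction of the fixed version:

```csharp
using System.IO;
using System.Threading.Tasks;

public static class FileHelper
{
    // Return the file's raw bytes untouched. No text encoding is involved,
    // so binary formats such as PDF survive the round trip intact.
    public static Task<byte[]> GetBytesFromFile(string path)
        => File.ReadAllBytesAsync(path);
}
```

The key design point is that a byte[] handed to ByteArrayContent must be the file's bytes exactly as they are on disk; any Encoding.GetString/GetBytes round trip will corrupt non-text data.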
I am creating a console application that will modify DICOM tags. I load a single DICOM file and update the PatientID tag.
I cannot seem to get anything to modify. I am able to read tags, but updating/adding does not seem to work for me. Previously I used the DICOM Toolkit from PowerShell, which is very straightforward and easy, but I want to start developing in C#, and so far I am failing.
using System;
using System.IO;
using SpiromicsImporterPrep.FileMethods;
using Dicom;
namespace SpiromicsImporterPrep
{
class Program
{
static void Main(string[] args)
{
string filename = @"Z:\SPIROMICS\Human_Scans\Dispatch_Received\NO_BACKUP_DONE_HERE\MIFAR\FORCE\JH114062-FU4\119755500\Non_Con_FRC__0.75__Qr40__5_7094\IM001139";
var file = DicomFile.Open(filename, readOption: FileReadOption.ReadAll);
var dicomDataset = file.Dataset;
dicomDataset.AddOrUpdate(DicomTag.PatientID, "TEST-PATIENT");
}
}
}
I expect that after running the code, when I look at the DICOM header tags for this file with ImageJ or another DICOM reader, the value of the PatientID tag will be "TEST-PATIENT". The code runs with no errors, but nothing seems to be updated or changed when I look at the DICOM header.
You should invoke the DicomFile.Save() method. AddOrUpdate only changes the dataset in memory; nothing is written back to disk until you save. For example:
string[] files = System.IO.Directory.GetFiles(@"D:\AcquiredImages\20191107\1.2.826.0.1.3680043.2.461.11107149.3266627937\1.2.276.0.7230010.3.1.3.3632557514.6848.1573106796.739");
foreach (var item in files)
{
DicomFile dicomFile = DicomFile.Open(item,FileReadOption.ReadAll);
dicomFile.Dataset.AddOrUpdate<string>(DicomTag.PatientName, "abc");
dicomFile.Save(item);
}
FileReadOption.ReadAll is required; otherwise large elements such as pixel data are read lazily from the original file, which causes problems when you save over that same file.
I use the Open XML SDK 2.5 in combination with the PowerTools by Eric White. I've managed to create dynamic .pptx presentations from template files (in C#).
Unfortunately, the thumbnail gets lost during the process. Is there any way to (re)create the thumbnail of a .pptx file using Open XML or the PowerTools? I successfully wrote some code that replaces an existing thumbnail with an image, but when there is no thumbnail it throws a System.NullReferenceException. Here is the code:
using OpenXmlPowerTools;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using DocumentFormat.OpenXml.Packaging;
namespace ConsoleApplication1
{
class AddThumbnail_
{
public static void ReplaceThumbnail(ThumbnailPart thumbnailPart, string newThumbnail)
{
using (
FileStream imgStream = new FileStream(newThumbnail, FileMode.Open, FileAccess.Read))
{
thumbnailPart.FeedData(imgStream);
}
}
static void Main(string[] args)
{
var templatePresentation = "Modified.pptx";
var outputPresentation = "Modified.pptx";
var baPresentation = File.ReadAllBytes(templatePresentation);
var pmlMainPresentation = new PmlDocument("Main.pptx", baPresentation);
OpenXmlMemoryStreamDocument streamDoc = new OpenXmlMemoryStreamDocument(pmlMainPresentation);
PresentationDocument document = streamDoc.GetPresentationDocument();
var thumbNailPart = document.ThumbnailPart;
ReplaceThumbnail(thumbNailPart, @"C:\Path\to\image\image.jpg");
document.SaveAs(outputPresentation);
}
}
}
EDIT:
I realize this question has been asked before (How to generate thumbnail image for a PPTX file in C#?) and the answer is "enable the preview picture option when saving the presentation", but this would mean I'd have to open every .pptx file and set that flag manually. I would appreciate a C# solution.
Thank you in advance!
If the thumbnail has never existed then the ThumbnailPart won't necessarily exist in the document and so the thumbNailPart variable in your code will be null. In that scenario, as well as setting the image for the ThumbnailPart you need to add the part itself.
Normally when using the Open XML SDK you would call the AddPart method, passing in a new ThumbnailPart, but for some reason the ThumbnailPart constructor is protected internal and thus not accessible to you. Instead, there is an AddThumbnailPart method on PresentationDocument that will create a new ThumbnailPart for you. AddThumbnailPart takes either a content-type string or a ThumbnailPartType enum member.
Adding the following to your code should fix your issue:
if (document.ThumbnailPart == null)
document.AddThumbnailPart(ThumbnailPartType.Jpeg);
var thumbNailPart = document.ThumbnailPart;
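Put together with the question's ReplaceThumbnail logic, a condensed sketch might look like this. It assumes the presentation is opened in place via PresentationDocument.Open rather than through the PowerTools stream wrapper, and reuses the question's file names:

```csharp
using System.IO;
using DocumentFormat.OpenXml.Packaging;

class AddThumbnail
{
    static void Main()
    {
        // open the presentation for editing (second argument = isEditable)
        using (var document = PresentationDocument.Open("Modified.pptx", true))
        {
            // create the part if the file was saved without a preview picture
            if (document.ThumbnailPart == null)
                document.AddThumbnailPart(ThumbnailPartType.Jpeg);

            // stream the replacement image into the (possibly new) part
            using (var imgStream = File.OpenRead(@"C:\Path\to\image\image.jpg"))
                document.ThumbnailPart.FeedData(imgStream);
        }
    }
}
```

FeedData replaces the part's content wholesale, so the same code path handles both the "replace existing thumbnail" and "no thumbnail yet" cases once the null check has run.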
I'm trying to decompress a .bz2 file in code using ICSharpCode.SharpZipLib.
It seems that no matter where I create the file, even though I have full access to it, I keep getting this exception. Any help greatly appreciated.
using System;
using System.IO;
using ICSharpCode.SharpZipLib.BZip2;
namespace decompressor
{
class MainClass
{
public static void Main(string[] args)
{
string filePath = "C:\\FreeBase\\opinions.tsv.bz2";
string decompressPath = "C:\\Users\\mike\\Desktop\\Decompressed";
Console.WriteLine("Decompressing {0} to {1}", filePath, decompressPath);
BZip2.Decompress(File.OpenRead(filePath), File.OpenWrite(decompressPath), true);
}
}
}
Your code may not have permission to create new files on your desktop. Check the permissions for "C:\\Users\\mike\\Desktop\\Decompressed".
Also, File.OpenWrite expects a file path, not a directory, so you should write:
string decompressPath = "C:\\Users\\mike\\Desktop\\Decompressed\\opinions.tsv";
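Putting that together, a corrected sketch might look like the following. It reuses the question's example paths, and the Directory.CreateDirectory call is an added safeguard so the target folder exists before the output file is opened:

```csharp
using System;
using System.IO;
using ICSharpCode.SharpZipLib.BZip2;

class Decompressor
{
    public static void Main()
    {
        string filePath = @"C:\FreeBase\opinions.tsv.bz2";
        // the target must be a file path, not a directory
        string decompressPath = @"C:\Users\mike\Desktop\Decompressed\opinions.tsv";

        // make sure the output directory exists before opening the file
        Directory.CreateDirectory(Path.GetDirectoryName(decompressPath));

        Console.WriteLine("Decompressing {0} to {1}", filePath, decompressPath);
        using (var input = File.OpenRead(filePath))
        using (var output = File.OpenWrite(decompressPath))
        {
            // true => decompress the entire stream and close both streams when done
            BZip2.Decompress(input, output, true);
        }
    }
}
```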