I've been facing this issue for a long time; after months I'm still not able to find any solution. Here's the scenario:
VS 2019, .NET Framework 4.6, and Crystal Reports 13.0.27.
The following code takes hours to export a PDF (about 400 pages and 30,000 rows). If I open the report in Crystal Reports and export the document there, the same query only takes seconds.
I tried a couple of things, like ExportToStream and saving the stream to a file, or exporting directly to disk, and in another post I read that "pdfFormatOptions.UsePageRange = True" should help, but the result is the same.
The code works fine with small PDFs of, for example, 100 rows.
// declarations implicit in the original snippet
ReportDocument Informe = new ReportDocument();
Informe.Load(Application.StartupPath + @"\informes\report.rpt");
TableLogOnInfo logOnInfo = new TableLogOnInfo();
logOnInfo.ConnectionInfo.ServerName = "Server";
logOnInfo.ConnectionInfo.DatabaseName = "BBDD";
logOnInfo.ConnectionInfo.UserID = "user";
logOnInfo.ConnectionInfo.Password = "user";
for (int i = 0; i < Informe.Database.Tables.Count; ++i)
{
    Informe.Database.Tables[i].ApplyLogOnInfo(logOnInfo);
}
DiskFileDestinationOptions diskOpts = new DiskFileDestinationOptions();
diskOpts.DiskFileName = PDFPath + _cabe.Guid + "_minutos.pdf";
ExportOptions exportOpts2 = Informe.ExportOptions;
exportOpts2.DestinationOptions = diskOpts;
exportOpts2.ExportFormatType = ExportFormatType.PortableDocFormat;
exportOpts2.ExportDestinationType = ExportDestinationType.DiskFile;
try
{
    Informe.RecordSelectionFormula = "{CabeceraFacturas.Guid}='{" + _cabe.Guid.ToString() + "}'";
    //Informe.Export();
    Stream oStream = (Stream)Informe.ExportToStream(ExportFormatType.PortableDocFormat);
    using (FileStream fileStream = File.Create(RutaGeneracionPDF + _cabe.Guid + "_minutos.pdf"))
    {
        // copy the exported report stream to disk in one pass
        oStream.CopyTo(fileStream);
    }
}
catch (Exception ex)
{
    // surface the export failure instead of swallowing it
    Console.WriteLine(ex.Message);
}
Thanks!
After spending days and hours and many headaches, I finally did the trick.
In each detail row (about 30,000 rows) I had a formula which calculated a value from two fields of the detail and two fields from a joined view. The view was the problem when exporting by code (exporting from within Crystal Reports worked fine with no delay). I had to create a new table in SQL, insert all the rows from the view into this new table, and add that table to the report instead, and voilà: it worked, and the report exported in seconds.
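For anyone hitting the same pattern, the materialization step is plain SQL and can be run from C# before loading the report. This is only a sketch: the table and view names are hypothetical, and connectionString is assumed to point at the same database the report queries.
using System.Data.SqlClient;

// Rebuild a plain cache table from the slow joined view so Crystal Reports
// reads static rows instead of re-evaluating the view for every detail record.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    string sql = @"
        IF OBJECT_ID('dbo.DetalleMinutosCache') IS NOT NULL
            DROP TABLE dbo.DetalleMinutosCache;
        SELECT * INTO dbo.DetalleMinutosCache FROM dbo.vw_DetalleMinutos;";
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.ExecuteNonQuery();
    }
}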
In an Excel Power Query file, the data connection can point to a SQL Server. We have a large number of files that specify a SQL Server by name, and this server is going to be decommissioned. We need to update the connection to replace the old server name with the new server name. This is possible by opening the Excel file, browsing to the query, and editing the server name manually, but due to the large number of files we want to do this using C#. The image below shows the input fields (with the names removed) where you would update this manually.
First, by unzipping the Excel file and browsing the contents under the folder xl > connections.xml, I would have expected it to specify the connection there, but it only says $Workbook$:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<connections xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
<connection id="1" keepAlive="1" name="Query" description="Connection to the query in the workbook." type="5" refreshedVersion="6" background="1" saveData="1">
<dbPr connection="Provider=Microsoft.Mashup.OleDb.1;Data Source=$Workbook$;Location=&quot;table&quot;" command="SELECT * FROM [table]"/>
</connection>
</connections>
On the MSDN forums there is a reference to this topic, and the answer provided by Will Gregg says:
External data source connection information is stored in the XLSX package in a custom part. You can locate the custom part under the customXml folder of the package. For example: customXml\item1.xml.
Contained in item1.xml is a DataMashup element. The definition for the element can be found in the [MS-QDEFF]: Query Definition File Format document (https://msdn.microsoft.com/en-us/library/mt577220(v=office.12).aspx).
In order to work with the data of the element you will need to decode the contents as described in the [MS-QDEFF]: Query Definition File Format document.
Once the data is decoded, you will need to examine the contents of the PackagePart. Within that package you will find the external data connection information in the Formulas\Section1.m part.
This is helpful in pointing me to the item1.xml file in the customXml folder, but it does not give any details on how to decode the information in the DataMashup object. The answer did mention that the [MS-QDEFF]: Query Definition File Format document is available at the link from the main article about the query definition format. The information in this document can seem dense and complex at first glance.
On Stack Overflow there are 6 questions that mention DataMashup, and 4 of them are related to Power BI, which, while similar to this issue, is not the same. The links to each of those questions are listed below:
how to decode/ get encoding of file (Power BI desktop file)
How to edit Power BI Desktop document parameters or data sources programmatically with C#?
Is there documentation/an API for the PBix file format?
How to update clients' Power BI files without ruining their reports?
The other 2 questions are more relevant, as they ask about Excel rather than Power BI; I discuss them below:
This question asks how to remove Power Query queries' custom XML data using VBA. I do not want to delete the query, but rather update the connection string, and I would like to do this in C#, not VBA. That question shows the result of using the macro recorder, and I do not want to open each Excel file to run a VBA macro.
This question asks how to find the query information and comes across the same $Workbook$ that I did. In a comment, Axel Richter says: "In *.xlsx/customXml/ you will find an item1.xml which contains a DataMashup element which contains a base64Binary which is the binary query definition file. I have no clue how to work with that. That's why only a comment and not an answer." Over a year later, an answer was added by Tom Jebo pointing to the Open Specifications details I found as well, but it does not offer a solution on how to manipulate the DataMashup object. I am adding this as a new question since that question is solving a slightly different problem than mine and is also looking for a solution in JavaScript.
What is the best way to decode the DataMashup object, change the server name, and then save the updated connection back to the Excel file?
In this blog post by Jeff Atwood from July 1, 2011, asking and answering your own question is encouraged. In addition, this page from the Stack Overflow Help Center addresses the same issue. I decided to post a full working solution in C# for others to modify and use, hopefully saving them the time of slogging through all the working out I did.
As mentioned in the question, the most helpful document is [MS-QDEFF]: Query Definition File Format. I will include the most relevant parts of it here, but refer to the original document if needed. Below is the example XML with the DataMashup element provided by Microsoft. This is for a short query, but expect something similar if you open your own customXml > item1.xml file.
<DataMashup sqmid="7690c5d6-5698-463c-a560-a0093d4f6332"
xmlns="http://schemas.microsoft.com/DataMashup">
AAAAAEUDAABQSwMEFAACAAgAta0pR62KRJynAAAA+QAAABIAHABDb25maWcvUGFja2FnZS54bWwgohgA
KKAUAAAAAAAAAAAAAAAAAAAAAAAAAAAhY9NDoIwGESvQrqnP4jGkI+ycCuJCdG4bUqFRiiGFsvdXHgkr
yCJYti5nMmb5M3r8YRsbJvgrnqrO5MihikKlJFdqU2VosFdwi3KOByEvIpKBRNsbDJanaLauVtCiPce+
xXu+opElDJyzveFrFUrQm2sE0Yq9FuV/1eIw+kjwyMcxTimmzVmMWVA5h5ybRbMpIwpkEUJu6FxQ6+4M
uGxADJHIN8b/A1QSwMEFAACAAgAta0pRw/K6aukAAAA6QAAABMAHABbQ29udGVudF9UeXBlc10ueG1sI
KIYACigFAAAAAAAAAAAAAAAAAAAAAAAAAAAAG2OSw7CMAxErxJ5n7qwQAg1ZQHcgAtEwf2I5qPGReFsL
DgSVyBtd4ilZ+Z55vN6V8dkB/GgMfbeKdgUJQhyxt961yqYuJF7ONbV9Rkoihx1UUHHHA6I0XRkdSx8I
Jedxo9Wcz7HFoM2d90Sbstyh8Y7JseS5x9QV2dq9DSwuKQsr7UZB3Fac3OVAqbEuMj4l7A/eR3C0BvN2
cQkbZR2IXEZXn8BUEsDBBQAAgAIALWtKUdi3rmEPAAAAEsAAAATABwARm9ybXVsYXMvU2VjdGlvbjEub
SCiGAAooBQAAAAAAAAAAAAAAAAAAAAAAAAAAAArTk0uyczPUwiG0IbWvFy8XMUZiUWpKQqBpalFlYYKt
go5qSW8XApAEJxfWpScChQx1Dbk5crMQxa1BgBQSwECLQAUAAIACAC1rSlHrYpEnKcAAAD5AAAAEgAAA
AAAAAAAAAAAAAAAAAAAQ29uZmlnL1BhY2thZ2UueG1sUEsBAi0AFAACAAgAta0pRw/K6aukAAAA6QAAA
BMAAAAAAAAAAAAAAAAA8wAAAFtDb250ZW50X1R5cGVzXS54bWxQSwECLQAUAAIACAC1rSlHYt65hDwAA
ABLAAAAEwAAAAAAAAAAAAAAAADkAQAARm9ybXVsYXMvU2VjdGlvbjEubVBLBQYAAAAAAwADAMIAAABtA
gAAAAA0AQAA77u/PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48UGVybWlzc2lvb
kxpc3QgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYS1pbnN0YW5jZSIge
G1sbnM6eHNkPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYSI+PENhbkV2YWx1YXRlRnV0d
XJlUGFja2FnZXM+ZmFsc2U8L0NhbkV2YWx1YXRlRnV0dXJlUGFja2FnZXM+PEZpcmV3YWxsRW5hYmxlZ
D50cnVlPC9GaXJld2FsbEVuYWJsZWQ+PFdvcmtib29rR3JvdXBUeXBlIHhzaTpuaWw9InRydWUiIC8+P
C9QZXJtaXNzaW9uTGlzdD7LBwAAAAAAAKkHAADvu788P3htbCB2ZXJzaW9uPSIxLjAiIGVuY29kaW5nP
SJ1dGYtOCI/PjxMb2NhbFBhY2thZ2VNZXRhZGF0YUZpbGUgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczL
m9yZy8yMDAxL1hNTFNjaGVtYS1pbnN0YW5jZSIgeG1sbnM6eHNkPSJodHRwOi8vd3d3LnczLm9yZy8yM
DAxL1hNTFNjaGVtYSI+PEl0ZW1zPjxJdGVtPjxJdGVtTG9jYXRpb24+PEl0ZW1UeXBlPkFsbEZvcm11b
GFzPC9JdGVtVHlwZT48SXRlbVBhdGggLz48L0l0ZW1Mb2NhdGlvbj48U3RhYmxlRW50cmllcyAvPjwvS
XRlbT48SXRlbT48SXRlbUxvY2F0aW9uPjxJdGVtVHlwZT5Gb3JtdWxhPC9JdGVtVHlwZT48SXRlbVBhd
Gg+U2VjdGlvbjEvUXVlcnkxPC9JdGVtUGF0aD48L0l0ZW1Mb2NhdGlvbj48U3RhYmxlRW50cmllcz48R
W50cnkgVHlwZT0iSXNQcml2YXRlIiBWYWx1ZT0ibDAiIC8+PEVudHJ5IFR5cGU9IlJlc3VsdFR5cGUiI
FZhbHVlPSJzTnVtYmVyIiAvPjxFbnRyeSBUeXBlPSJGaWxsRW5hYmxlZCIgVmFsdWU9ImwxIiAvPjxFb
nRyeSBUeXBlPSJGaWxsVG9EYXRhTW9kZWxFbmFibGVkIiBWYWx1ZT0ibDAiIC8+PEVudHJ5IFR5cGU9I
kZpbGxDb3VudCIgVmFsdWU9ImwxIiAvPjxFbnRyeSBUeXBlPSJGaWxsRXJyb3JDb3VudCIgVmFsdWU9I
mwwIiAvPjxFbnRyeSBUeXBlPSJGaWxsQ29sdW1uVHlwZXMiIFZhbHVlPSJzQlE9PSIgLz48RW50cnkgV
HlwZT0iRmlsbENvbHVtbk5hbWVzIiBWYWx1ZT0ic1smcXVvdDtRdWVyeTEmcXVvdDtdIiAvPjxFbnRye
SBUeXBlPSJGaWxsRXJyb3JDb2RlIiBWYWx1ZT0ic1Vua25vd24iIC8+PEVudHJ5IFR5cGU9IkZpbGxMY
XN0VXBkYXRlZCIgVmFsdWU9ImQyMDE1LTA5LTEwVDA0OjQ1OjQxLjkyNzU5MDBaIiAvPjxFbnRyeSBUe
XBlPSJSZWxhdGlvbnNoaXBJbmZvQ29udGFpbmVyIiBWYWx1ZT0ic3smcXVvdDtjb2x1bW5Db3VudCZxd
W90OzoxLCZxdW90O2tleUNvbHVtbk5hbWVzJnF1b3Q7OltdLCZxdW90O3F1ZXJ5UmVsYXRpb25zaGlwc
yZxdW90OzpbXSwmcXVvdDtjb2x1bW5JZGVudGl0aWVzJnF1b3Q7OlsmcXVvdDtTZWN0aW9uMS9RdWVye
TEvQXV0b1JlbW92ZWRDb2x1bW5zMS57UXVlcnkxLDB9JnF1b3Q7XSwmcXVvdDtDb2x1bW5Db3VudCZxd
W90OzoxLCZxdW90O0tleUNvbHVtbk5hbWVzJnF1b3Q7OltdLCZxdW90O0NvbHVtbklkZW50aXRpZXMmc
XVvdDs6WyZxdW90O1NlY3Rpb24xL1F1ZXJ5MS9BdXRvUmVtb3ZlZENvbHVtbnMxLntRdWVyeTEsMH0mc
XVvdDtdLCZxdW90O1JlbGF0aW9uc2hpcEluZm8mcXVvdDs6W119IiAvPjxFbnRyeSBUeXBlPSJGaWxsZ
WRDb21wbGV0ZVJlc3VsdFRvV29ya3NoZWV0IiBWYWx1ZT0ibDEiIC8+PEVudHJ5IFR5cGU9IkFkZGVkV
G9EYXRhTW9kZWwiIFZhbHVlPSJsMCIgLz48RW50cnkgVHlwZT0iUmVjb3ZlcnlUYXJnZXRTaGVldCIgV
mFsdWU9InNTaGVldDIiIC8+PEVudHJ5IFR5cGU9IlJlY292ZXJ5VGFyZ2V0Q29sdW1uIiBWYWx1ZT0ib
DEiIC8+PEVudHJ5IFR5cGU9IlJlY292ZXJ5VGFyZ2V0Um93IiBWYWx1ZT0ibDEiIC8+PEVudHJ5IFR5c
GU9Ik5hbWVVcGRhdGVkQWZ0ZXJGaWxsIiBWYWx1ZT0ibDAiIC8+PEVudHJ5IFR5cGU9IkZpbGxUYXJnZ
XQiIFZhbHVlPSJzUXVlcnkxIiAvPjxFbnRyeSBUeXBlPSJCdWZmZXJOZXh0UmVmcmVzaCIgVmFsdWU9I
mwxIiAvPjxFbnRyeSBUeXBlPSJGaWxsU3RhdHVzIiBWYWx1ZT0ic0NvbXBsZXRlIiAvPjxFbnRyeSBUe
XBlPSJRdWVyeUlEIiBWYWx1ZT0iczdlMDQzNjJlLTkyZjUtNGQ4Mi04YjA3LTI3NjFlYWY2OGFlNSIgL
z48L1N0YWJsZUVudHJpZXM+PC9JdGVtPjxJdGVtPjxJdGVtTG9jYXRpb24+PEl0ZW1UeXBlPkZvcm11b
GE8L0l0ZW1UeXBlPjxJdGVtUGF0aD5TZWN0aW9uMS9RdWVyeTEvU291cmNlPC9JdGVtUGF0aD48L0l0Z
W1Mb2NhdGlvbj48U3RhYmxlRW50cmllcyAvPjwvSXRlbT48L0l0ZW1zPjwvTG9jYWxQYWNrYWdlTWV0Y
WRhdGFGaWxlPhYAAABQSwUGAAAAAAAAAAAAAAAAAAAAAAAA2gAAAAEAAADQjJ3fARXREYx6AMBPwpfrA
QAAACLWGAG5O6FHjkAGtB+m5EQAAAAAAgAAAAAAA2YAAMAAAAAQAAAAaH8KNe2ciHwfVosIvSCr6gAAA
AAEgAAAoAAAABAAAAA40fOKWe6kmTAWJSBXs4cYUAAAAPNy7uF6Dtr9PvADu+eZdeV7JutpIQTh41qqT
3QnFoWPwE0Xyrur5N6Q2s2TEzjlBDfkEmNaGtr3htemOjWZYXKQHP+R5u/90zHWiwOwjjowFAAAAF2UC
6Jm8C98hVmJBo638e4Qk65V
</DataMashup>
The value of this element is encoded as a Base64 string. If you are not familiar with Base64, the Wikipedia article is a good place to start. The first step in the solution is to load the XML document and convert this value into its byte representation. This can be done as follows:
string file = @"\customXml\item1.xml"; // or wherever your XML file is
XDocument doc = XDocument.Load(file);
byte[] dataMashup = Convert.FromBase64String(doc.Root.Value);
NOTE: In the full example provided at the bottom of this answer, all the manipulation is done in memory.
From the Microsoft definition document:
Version (4 bytes): Unsigned integer that MUST be set to 0.
Package Parts Length (4 bytes): Unsigned integer that specifies the length of the Package Parts field.
Package Parts (variable): Variable-length binary stream (section 2.3).
Permissions Length (4 bytes): Unsigned integer that specifies the length of the Permissions field.
Permissions (variable): Variable-length binary stream (section 2.4).
Metadata Length (4 bytes): Unsigned integer that specifies the length of the Metadata field.
Metadata (variable): Variable-length binary stream (section 2.5).
Permission Bindings Length (4 bytes): Unsigned integer that specifies the length of the Permission Bindings field.
Permission Bindings (variable): Variable-length binary stream (section 2.6).
Since each field that defines the length of its content is 4 bytes, I defined a constant:
private const int FIELDS_LENGTH = 4;
Then each of the values defined in this section (quoted from Microsoft above) can be read as shown below. The length fields are 4-byte little-endian unsigned integers, so BitConverter.ToInt32 reads them in full; note also that the permissions field starts after the package parts, so its offset must include packagePartsLength:
int version = BitConverter.ToInt32(dataMashup.Take(FIELDS_LENGTH).ToArray(), 0);
int packagePartsLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH).Take(FIELDS_LENGTH).ToArray(), 0);
byte[] packageParts = dataMashup.Skip(FIELDS_LENGTH * 2).Take(packagePartsLength).ToArray();
int permissionsLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 2 + packagePartsLength).Take(FIELDS_LENGTH).ToArray(), 0);
byte[] permissions = dataMashup.Skip(FIELDS_LENGTH * 3 + packagePartsLength).Take(permissionsLength).ToArray();
int metadataLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 3 + packagePartsLength + permissionsLength).Take(FIELDS_LENGTH).ToArray(), 0);
byte[] metadata = dataMashup.Skip(FIELDS_LENGTH * 4 + packagePartsLength + permissionsLength).Take(metadataLength).ToArray();
int permissionsBindingLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 4 + packagePartsLength + permissionsLength + metadataLength).Take(FIELDS_LENGTH).ToArray(), 0);
byte[] permissionsBinding = dataMashup.Skip(FIELDS_LENGTH * 5 + packagePartsLength + permissionsLength + metadataLength).Take(permissionsBindingLength).ToArray();
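As a quick sanity check (my own addition, not required by the format), the five 4-byte version/length fields plus the four variable-length parts should account for every byte of the decoded array:
int expectedLength = FIELDS_LENGTH * 5 + packagePartsLength + permissionsLength + metadataLength + permissionsBindingLength;
if (expectedLength != dataMashup.Length)
    throw new InvalidDataException("DataMashup did not parse as expected; check the field offsets.");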
The package parts byte[] represents a Package object from the System.IO.Packaging namespace:
using (MemoryStream ms = new MemoryStream(packageParts)) {
    using (Package package = Package.Open(ms, FileMode.Open, FileAccess.ReadWrite)) {
        PackagePart section = package.GetParts().Where(x => x.Uri.OriginalString == "/Formulas/Section1.m").FirstOrDefault();
        string query;
        using (StreamReader reader = new StreamReader(section.GetStream())) {
            query = reader.ReadToEnd();
            // do other replacing, removing of query here
        }
        using (BinaryWriter writer = new BinaryWriter(section.GetStream())) {
            // write updated query back to package part
            writer.Write(Encoding.ASCII.GetBytes(query));
        }
    }
    // NOTE: a MemoryStream constructed over a byte[] cannot grow; the full example
    // below copies packageParts into an expandable MemoryStream before opening the package.
    packageParts = ms.ToArray();
}
Finally, I need to update the original byte[] with the new information from the updated package:
IEnumerable<byte> bytes = BitConverter.GetBytes(version)
    .Concat(BitConverter.GetBytes(packageParts.Length))
    .Concat(packageParts)
    .Concat(BitConverter.GetBytes(permissionsLength))
    .Concat(permissions)
    .Concat(BitConverter.GetBytes(metadataLength))
    .Concat(metadata)
    .Concat(BitConverter.GetBytes(permissionsBindingLength))
    .Concat(permissionsBinding);
doc.Root.Value = Convert.ToBase64String(bytes.ToArray());
// entryStream is the zip entry stream for customXml/item1.xml; see the full example below
entryStream.SetLength(0);
doc.Save(entryStream);
Below is the full example for completeness. It is a console application that takes the directory of files to update as a command line argument and replaces the old server name with the new server name.
using System;
using System.Collections.Generic;
using System.Linq;
using System.IO;
using System.IO.Compression;
using System.Xml.Linq;
using System.IO.Packaging;
using System.Text;

namespace MyApp {
    class Program {
        private const int FIELDS_LENGTH = 4;

        static void Main(string[] args) {
            if (args.Length != 1) {
                Console.WriteLine("specify one directory to update");
                return;
            }
            if (!Directory.Exists(args[0])) {
                Console.WriteLine("directory does not exist");
                return;
            }
            IEnumerable<FileInfo> files = Directory.GetFiles(args[0]).Where(x => Path.GetExtension(x) == ".xlsx").Select(x => new FileInfo(x));
            foreach (FileInfo file in files) {
                using (FileStream fileStream = File.Open(file.FullName, FileMode.OpenOrCreate)) {
                    using (ZipArchive archive = new ZipArchive(fileStream, ZipArchiveMode.Update)) {
                        ZipArchiveEntry entry = archive.GetEntry("customXml/item1.xml");
                        IEnumerable<byte> bytes;
                        using (Stream entryStream = entry.Open()) {
                            XDocument doc = XDocument.Load(entryStream);
                            byte[] dataMashup = Convert.FromBase64String(doc.Root.Value);
                            // each length field is a 4-byte little-endian unsigned integer
                            int version = BitConverter.ToInt32(dataMashup.Take(FIELDS_LENGTH).ToArray(), 0);
                            int packagePartsLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH).Take(FIELDS_LENGTH).ToArray(), 0);
                            byte[] packageParts = dataMashup.Skip(FIELDS_LENGTH * 2).Take(packagePartsLength).ToArray();
                            int permissionsLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 2 + packagePartsLength).Take(FIELDS_LENGTH).ToArray(), 0);
                            byte[] permissions = dataMashup.Skip(FIELDS_LENGTH * 3 + packagePartsLength).Take(permissionsLength).ToArray();
                            int metadataLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 3 + packagePartsLength + permissionsLength).Take(FIELDS_LENGTH).ToArray(), 0);
                            byte[] metadata = dataMashup.Skip(FIELDS_LENGTH * 4 + packagePartsLength + permissionsLength).Take(metadataLength).ToArray();
                            int permissionsBindingLength = BitConverter.ToInt32(dataMashup.Skip(FIELDS_LENGTH * 4 + packagePartsLength + permissionsLength + metadataLength).Take(FIELDS_LENGTH).ToArray(), 0);
                            byte[] permissionsBinding = dataMashup.Skip(FIELDS_LENGTH * 5 + packagePartsLength + permissionsLength + metadataLength).Take(permissionsBindingLength).ToArray();
                            // use two memory streams because a stream constructed over a byte[]
                            // cannot change size when the data mashup object is re-saved
                            using (MemoryStream packagePartsStream = new MemoryStream(packageParts)) {
                                using (MemoryStream ms = new MemoryStream()) {
                                    packagePartsStream.CopyTo(ms);
                                    using (Package package = Package.Open(ms, FileMode.Open, FileAccess.ReadWrite)) {
                                        PackagePart section = package.GetParts().Where(x => x.Uri.OriginalString == "/Formulas/Section1.m").FirstOrDefault();
                                        string query;
                                        using (StreamReader reader = new StreamReader(section.GetStream())) {
                                            query = reader.ReadToEnd();
                                            // do other replacing, removing of query here
                                            query = query.Replace("old-server", "new-server");
                                        }
                                        using (BinaryWriter writer = new BinaryWriter(section.GetStream())) {
                                            writer.Write(Encoding.ASCII.GetBytes(query));
                                        }
                                    }
                                    packageParts = ms.ToArray();
                                }
                            }
                            bytes = BitConverter.GetBytes(version)
                                .Concat(BitConverter.GetBytes(packageParts.Length))
                                .Concat(packageParts)
                                .Concat(BitConverter.GetBytes(permissionsLength))
                                .Concat(permissions)
                                .Concat(BitConverter.GetBytes(metadataLength))
                                .Concat(metadata)
                                .Concat(BitConverter.GetBytes(permissionsBindingLength))
                                .Concat(permissionsBinding);
                            doc.Root.Value = Convert.ToBase64String(bytes.ToArray());
                            entryStream.SetLength(0);
                            doc.Save(entryStream);
                        }
                    }
                }
            }
        }
    }
}
NOTE: As I only needed to update the Package Parts section, I can confirm this decoding/encoding works, but I did not test the decoding/encoding of the Permissions, Metadata, or Permission Bindings parts. If you need those, this should at least get you started.
NOTE: This code does not catch errors or handle every case. It is meant to be a working example of how to update the connections in a Power Query file. Feel free to adapt it as you need.
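For reference, the app is invoked with a single argument, the directory containing the workbooks to update (the path here is hypothetical):
MyApp.exe C:\workbooks-to-update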
As I am new to Aspose, I need help with the case below.
I want to merge multiple PDFs into one PDF using Aspose, which I can do easily, but the problem is that I want to limit the PDF size to 200MB.
That means that if my merged PDF is greater than 200MB, I need to split it into multiple PDFs. For example, if my merged PDF is 300MB, the first PDF should be 200MB and the second 100MB.
The main problem is that I am not able to find the size of the document in the code below, which is what I am using.
Document destinationPdfDocument = new Document();
Document sourcePdfDocument = new Document();
//Merge the PDFs one by one
FileInfo[] filesFromDirectory = new DirectoryInfo(sourceDirectory).GetFiles("*.pdf"); // sourceDirectory: wherever the source PDFs live
for (int i = 0; i < filesFromDirectory.Count(); i++)
{
if (i == 0)
{
destinationPdfDocument = new Document(filesFromDirectory[i].FullName);
}
else
{
// Open second document
sourcePdfDocument = new Document(filesFromDirectory[i].FullName);
// Add pages of second document to the first
destinationPdfDocument.Pages.Add(sourcePdfDocument.Pages);
//** I need to check size of destinationPdfDocument over here to limit the size of resultant PDF**
}
}
// Encrypt PDF
destinationPdfDocument.Encrypt("userP", "ownerP", 0, CryptoAlgorithm.AESx128);
string finalPdfPath = Path.Combine(destinationSourceDirectory, destinatedPdfPath);
// Save concatenated output file
destinationPdfDocument.Save(finalPdfPath);
Another way of merging PDFs based on size would also be appreciated.
Thanks in advance!
I am afraid there is no direct way to determine the PDF file size before saving it physically. Therefore, we have logged a feature request as PDFNET-43073 in our issue tracking system, and the product team is investigating the feasibility of this feature. As soon as we have any significant updates regarding its availability, we will definitely inform you. Please bear with us in the meantime.
However, as a workaround, you may save the document into a memory stream and check whether the size of that memory stream exceeds your desired PDF size. Please check the following code snippet, where we generate PDFs with a desired size of 200MB using this approach:
//Instantiate document objects
Document destinationPdfDocument = new Document();
Document sourcePdfDocument = new Document();
//Load source files which are to be merged
var filesFromDirectory = Directory.GetFiles(dataDir, "*.pdf");
for (int i = 0; i < filesFromDirectory.Count(); i++)
{
if (i == 0)
{
destinationPdfDocument = new Document(filesFromDirectory[i]);
}
else
{
// Open second document
sourcePdfDocument = new Document(filesFromDirectory[i]);
// Add pages of second document to the first
destinationPdfDocument.Pages.Add(sourcePdfDocument.Pages);
//Check the size of destinationPdfDocument here to limit the size of the resultant PDF
long filesize;
using (MemoryStream ms = new MemoryStream())
{
destinationPdfDocument.Save(ms);
filesize = ms.Length;
}
// Compare the filesize in MBs
if (i == filesFromDirectory.Count() - 1)
{
destinationPdfDocument.Save(dataDir + "PDFOutput_" + i + ".pdf");
}
else if ((filesize / (1024 * 1024)) < 200)
continue;
else
{
destinationPdfDocument.Save(dataDir + "PDFOutput_" + i.ToString() + ".pdf");
destinationPdfDocument = new Document();
}
}
}
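As a side note, the size check can be factored into a small helper; this is only a sketch, using nothing beyond the Document.Save(Stream) overload already shown above. Also note that with this approach a chunk can still overshoot 200MB by up to one source file, since the check runs after the pages have been added.
// Sketch: measure a document's current size by saving it to an in-memory
// stream; the document itself is left unchanged.
private static long MeasurePdfSize(Document doc)
{
    using (MemoryStream ms = new MemoryStream())
    {
        doc.Save(ms);
        return ms.Length;
    }
}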
I hope this will be helpful. Please let us know if you need any further assistance.
I work with Aspose as Developer Evangelist.
I have my reports on a Reporting Services server; inside my .rdl there is a query that accepts parameters. I pass those parameters with an instance of ReportViewer. I have a method that downloads the result of the report in Excel format without using the ReportViewer directly. The method is the following:
private void CreateEXCEL(Dictionary<string, string> parametros, string nombreReporte)
{
// Variables
Warning[] warnings;
string[] streamIds;
string mimeType = string.Empty;
string encoding = string.Empty;
string extension = string.Empty;
// Setup the report viewer object and get the array of bytes
string ReportServerURL = ConfigurationManager.AppSettings["ReportServerCompletitudURL"];
string ReportName = ConfigurationManager.AppSettings["ReportNameRankingVentaPDV"] + "/" + nombreReporte;
MyReportViewer.Reset();
MyReportViewer.ProcessingMode = ProcessingMode.Remote;
MyReportViewer.ServerReport.ReportPath = ReportName;
MyReportViewer.ServerReport.ReportServerUrl = new Uri(ReportServerURL);
List<ReportParameter> parameters = new List<ReportParameter>();
foreach (var d in parametros)
{
parameters.Add(new ReportParameter(d.Key, d.Value));
}
MyReportViewer.ServerReport.SetParameters(parameters);
byte[] bytes = MyReportViewer.ServerReport.Render("EXCEL", null, out mimeType, out encoding, out extension, out streamIds, out warnings);
// Now that you have all the bytes representing the Excel report, buffer it and send it to the client.
Response.Buffer = true;
Response.Clear();
Response.ContentType = mimeType;
Response.AddHeader("content-disposition", "attachment; filename=" + nombreReporte + "." + extension);
Response.BinaryWrite(bytes); // create the file
Response.Flush(); // send it to the client to download
}
Now the issue is that I can't create an Excel file with more than 65,536 rows, so the idea is to "ask" whether the result of the query inside the report will yield more than 65k rows, and if so use CSV format instead.
I don't see that the ReportViewer server control has a method that checks the result of the query.
I don't want to use page breaks inside the SSRS reports. Is there any way to ask this in my code-behind?
Not sure if this helps, but this is a workaround for exporting to Excel.
Create a parent group on the tablix (or table, or list), set it to add a page break between each instance of the group, and in the "Group on:" field enter the expression below. It buckets rows into groups of 65,000, so each group lands on its own worksheet and stays under the row limit:
=CInt(Ceiling(RowNumber(Nothing)/65000))
See this related question.
I found the solution to this particular problem like this:
I put this expression on my Details group, in the Disabled property of the page break: =IIF(RowNumber(Nothing) Mod 10000 = 0, False, True), with BreakLocation set to End.
After this change, the exported Excel file is split into a separate worksheet within the same workbook for every 10k rows. I tried the Ceiling approach, but if you have a RowNumber expression inside that group it won't work.
Our web system (the exact same site) has been migrated to new servers. The .tif file attachments, served by MIME type, worked on the previous production servers and no code has been changed, but since the migration we specifically cannot open .tif files. PDF files spin to a blank page in the browser.
The code calls a web service (which works fine) to get a cached document from a JDE environment:
object[] file = docA.CacheDocument("/" + path, filename, doctype, xxx.Global.JDEEnvironment);
fileSize = (int)file[0];
mimeType = (string)file[1];
There is no issue returning the MIME type, which is "image/tiff". Settings have been configured at the server level to accept both .tif and .tiff in the MIME type properties.
HttpContext.Current.Response.ClearHeaders();
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.Buffer = true;
HttpContext.Current.Response.ContentType = mimeType;
string tempPath = "/" + path;
string tempFile = filename;
int i = 0;
while (i < fileSize)
{
int[] byteRangeAry = new int[2];
byteRangeAry[0] = i;
if ((i + _chunkSize) < fileSize)
{
byteRangeAry[1] = i + _chunkSize;
}
else
{
byteRangeAry[1] = fileSize;
}
var docdata = docA.GetByteRange(tempPath, tempFile, byteRangeAry);
HttpContext.Current.Response.BinaryWrite(docdata);
HttpContext.Current.Response.Flush();
//Move the index to the next chunk
i = byteRangeAry[1] + 1;
}
HttpContext.Current.Response.Flush();
This snippet is untouched code that worked in production and now errors out with an object reference error:
var docdata = docA.GetByteRange(tempPath, tempFile, byteRangeAry);
However, when I add a .mime extension to the tempFile, it no longer errors out and gets the byte range:
var docdata = docA.GetByteRange(tempPath, tempFile + ".mime", byteRangeAry);
A dialog box appears and downloads the file, but it opens to a blank page or to an error saying the file appears to be damaged, corrupted, or too large. I have tried opening it in several other formats to no avail. This happens with the .tif files; the PDF just leaves a blank page in the browser without a download dialog box.
This is the same code that worked in production, and it is a .NET 2.0 app. Any suggestions would be much appreciated.
This was resolved; it was a caching issue. We rewrote the CacheDocument method, which was corrupting the header. It is now a GetDocument method, and we are able to grab documents and load them. The problem was the code; it is still strange that it worked in the previous production environment.
I am pretty new to coding in general and very new to C#, so I am probably missing something simple. I wrote a program to pull data from a login-protected website and save that data to files on the local hard drive. The data is power and energy data for solar modules, and each module has its own file. On my main workstation I am running Windows Vista and the program works just fine. When I run the program on the machine running Server 2003, instead of the new data being appended to the files, it just overwrites the data originally in the file.
The data I am downloading is CSV-format text spanning 7 days at a time. I run the program once a day to pull the new day's data and append it to the local file. Every time I run the program, the local file ends up as a copy of the newly downloaded data with none of the old data. Since the data on the website is only updated once a day, I have been testing by removing the last day's data and/or the first day's data in the local file. Any time I change the file and run the program, the file contains the downloaded data and nothing else.
I just tried something new to test why it wasn't working and think I have found the source of the error. When I ran on my local machine, the "filePath" variable was set to "". On the server (and now on my local machine) I changed "filePath" to @"C:\Solar Yard Data\", and on both machines the program catches the file-not-found exception and creates a new file in that directory, which overwrites the original. Anyone have an idea as to why this happens?
The code below is the section that downloads each data set and appends any new data to the local file.
int i = 0;
string filePath = "C:/Solar Yard Data/";
string[] filenamesPower = new string[]
{
"inverter121201321745_power",
"inverter121201325108_power",
"inverter121201326383_power",
"inverter121201326218_power",
"inverter121201323111_power",
"inverter121201324916_power",
"inverter121201326328_power",
"inverter121201326031_power",
"inverter121201325003_power",
"inverter121201326714_power",
"inverter121201326351_power",
"inverter121201323205_power",
"inverter121201325349_power",
"inverter121201324856_power",
"inverter121201325047_power",
"inverter121201324954_power",
};
// download and save every module's power data
foreach (string url in modulesPower)
{
// create web request and download data
HttpWebRequest req_csv = (HttpWebRequest)HttpWebRequest.Create(String.Format(url, auth_token));
req_csv.CookieContainer = cookie_container;
HttpWebResponse res_csv = (HttpWebResponse)req_csv.GetResponse();
// save the data to files
using (StreamReader sr = new StreamReader(res_csv.GetResponseStream()))
{
string response = sr.ReadToEnd();
string fileName = filenamesPower[i] + ".csv";
// save the new data to file
try
{
int startIndex = 0; // start index for substring to append to file
int searchResultIndex = 0; // index returned when searching downloaded data for last entry of data on file
string lastEntry; // will hold the last entry in the current data
//open existing file and find last entry
using (StreamReader sr2 = new StreamReader(fileName))
{
//get last line of existing data
string fileContents = sr2.ReadToEnd();
string nl = System.Environment.NewLine; // newline string
int nllen = nl.Length; // length of a newline
if (fileContents.LastIndexOf(nl) == fileContents.Length - nllen)
{
lastEntry = fileContents.Substring(0, fileContents.Length - nllen).Substring(fileContents.Substring(0, fileContents.Length - nllen).LastIndexOf(nl) + nllen);
}
else
{
lastEntry = fileContents.Substring(fileContents.LastIndexOf(nl) + 2);
}
// search the new data for the last existing line
searchResultIndex = response.LastIndexOf(lastEntry);
}
// if the downloaded data contains the last record on file, append the new data
if (searchResultIndex != -1)
{
startIndex = searchResultIndex + lastEntry.Length;
File.AppendAllText(filePath + fileName, response.Substring(startIndex+1));
}
// else append all the data
else
{
Console.WriteLine("The last entry of the existing data was not found\nin the downloaded data. Appending all data.");
File.AppendAllText(filePath + fileName, response.Substring(109)); // the 109 index removes the file header from the new data
}
}
// if there is no file for this module, create the first one
catch (FileNotFoundException e)
{
// write data to file
Console.WriteLine("File does not exist, creating new data file.");
File.WriteAllText(filePath + fileName, response);
//Debug.WriteLine(response);
}
}
Console.WriteLine("Power file " + (i + 1) + " finished.");
//Debug.WriteLine("File " + (i + 1) + " finished.");
i++;
}
Console.WriteLine("\nPower data finished!\n");
A couple of suggestions which I think will probably resolve the issue.
First, change your filePath string:
string filePath = @"C:\Solar Yard Data\";
Create a string with the full path:
string fullFilePath = filePath + fileName;
Then check whether the file exists and create it if it doesn't:
if (!File.Exists(fullFilePath))
    File.Create(fullFilePath).Dispose(); // File.Create returns an open FileStream; dispose it so the read below can open the file
Finally, put the full path to the file in your StreamReader:
using (StreamReader sr2 = new StreamReader(fullFilePath))
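Putting those suggestions together, here is a minimal sketch of the fixed read-then-append flow (fileName and response are the variables from the question's loop); the key point is that the existence check, the read, and the append all use the same full path:
string filePath = @"C:\Solar Yard Data\";
string fullFilePath = Path.Combine(filePath, fileName);

// create the file up front so the StreamReader below never throws FileNotFoundException
if (!File.Exists(fullFilePath))
    File.Create(fullFilePath).Dispose();

string fileContents;
using (StreamReader sr2 = new StreamReader(fullFilePath))
{
    // read the existing data to locate the last entry, as in the question
    fileContents = sr2.ReadToEnd();
}
// ... find lastEntry / searchResultIndex / startIndex exactly as before ...
File.AppendAllText(fullFilePath, response.Substring(startIndex + 1));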