Writing a byte array to an Excel file - C#

I've been given the task of writing automated tests that check our own API. Part of this process involves testing an end point that generates an Excel template that the recipient is then supposed to fill back out and submit back to us.
From the looks of things this template gets sent back to the user from within the browser using a FileContentResult object that also specifies the content type (application/vnd.ms-excel.sheet.macroEnabled.12; the intended file format is .xlsm).
The problem I have is this: while retrieving the file as a byte array works without issue as far as the call to the endpoint is concerned, I have yet to successfully take the returned byte array and use it for anything useful. Creating an Excel file from it seems to be problematic; just using File.WriteAllBytes() doesn't seem to work, for example, nor does using a BinaryWriter.
Does anybody have any idea how to achieve this from within C# code that isn't running as part of a website?
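(For reference, the verbatim write itself is usually not the problem: if the bytes are intact, File.WriteAllBytes alone produces a file Excel can open. Below is a minimal diagnostic sketch of the fetch-and-save pattern, with a hypothetical endpoint URL; a common failure mode is a base64-encoded response body being written out as if it were raw bytes, which the ZIP signature check catches, since .xlsm files are ZIP containers.)

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class TemplateDownloadSketch
{
    static async Task Main()
    {
        // Hypothetical endpoint URL; substitute the real one.
        var url = "https://example.com/api/templates/42";

        using var client = new HttpClient();
        byte[] bytes = await client.GetByteArrayAsync(url);

        // .xlsm files are ZIP containers, so a valid payload starts with
        // the ZIP signature "PK". If this fails, the bytes were mangled
        // before they reached us (e.g. base64 text written as raw bytes).
        bool looksLikeZip = bytes.Length > 2 && bytes[0] == 0x50 && bytes[1] == 0x4B;
        Console.WriteLine($"ZIP signature present: {looksLikeZip}");

        // If the bytes are intact, this alone yields a valid .xlsm file.
        File.WriteAllBytes("template.xlsm", bytes);
    }
}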

Related

How to read Slither IO websocket binary data with C#

I'm trying to build a custom client for a game called slither.io with C#, but I've run into a small problem: I need to be able to read the binary data sent and received through their websocket.
I should add that you might also need to explain it like I'm dumb.
Here's a screenshot of the binary data I need to decode in C#:
There's probably no need to reverse engineer it by yourself; there is a GitHub project with the details. You can study it and try to incrementally write your own code to parse it, ideally in a sandbox built with the technology you're most familiar with, like WinAPI, Unity, or anything else. Later you will be able to move the code you created into the proper modules and environment you need to use.
https://github.com/ClitherProject/Slither.io-Protocol/blob/master/Protocol.md#type_l_detail
To parse binary data you will have to learn some additional stuff; writing your own hex viewer is a relatively simple but sufficient way to learn how to deal with binary. I think this tutorial is good, although it uses JavaScript: https://www.taniarascia.com/bits-bytes-bases-and-a-hex-dump-javascript/. You can write some simple console output of the parsed data and compare it to an existing hex viewer like HxD.
If you want to master it even better, you can quickly inspect some Chip-8 or other emulator code to see how they parse commands. But in short, you can do the parsing with logical ORs, ANDs, and binary shifts. For example, if you are interested in the third and fourth bytes of an int variable named "a" holding the value 0x11A312AF, you can write the following:
(a >> 16) & 0xFFFF will give 0x11A3 as the result, so for these commands you can check the type and the values of the arguments with a similar approach: shift the bytes down and cover them with a mask of the needed size. If you receive this data in a byte array it is even easier; you just access the byte you want to check.
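In C#, that shift-and-mask idea looks like the sketch below (the value is the reconstructed example from above):

using System;

class BitParsingSketch
{
    static void Main()
    {
        int a = 0x11A312AF; // hypothetical packet value

        // Shift the two bytes of interest down to the low end, then
        // mask off everything above them.
        int high16 = (a >> 16) & 0xFFFF;
        Console.WriteLine($"0x{high16:X4}"); // prints 0x11A3

        // With a byte[] (as received from a websocket) it is simpler
        // still: index the bytes you care about directly.
        byte[] frame = { 0x11, 0xA3, 0x12, 0xAF };
        int fromArray = (frame[0] << 8) | frame[1];
        Console.WriteLine($"0x{fromArray:X4}"); // also 0x11A3
    }
}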
But since you are reverse engineering a browser game, you can look into the browser's JS code, make some source overrides with logs, sometimes put breakpoints where the game dynamics allow, and check the received websocket data in a hex editor like HxD. In the case of the snake, it can be useful to see how its segments are placed and how the mouse position and angle are calculated, etc.

Without first parsing, convert Excel file to JSON string

Okay, this is different from other posts I'm seeing. I'm not trying to first open an Excel file and parse the contents into a JSON object. I'm trying to take the file and convert it to a stream object of some sort, or a byte[], and then convert that to JSON so I can use it as an input parameter to a POST method for a Web API.
Here is the full scenario.
I have clients that will use an internal-only website to select one or more Excel files. The workstations the users work on may or may not have Excel installed, thus, all of my Excel processing has to be done on the server. Once the Excel files are processed, they are combined into a System.Data.DataTable and the values are aggregated into one master report. This aggregated report needs to be returned to the client system so it can be saved.
I currently have this site working just fine in ASP.NET using C#. However, I need the "guts" of the website to be a WebAPI so that automation programs I have can make calls directly to the WebAPI and accomplish the same task that the internal-only website does. This will allow all processing for this sort of task to run through one code base (right now, there are about 4 versions of this task and they all behave differently, providing differing output).
The way I thought to do this was to, from the client, convert the Excel files to an array of System.IO.MemoryStream objects, then serialize the full array as a Json.NET stream and upload the stream to the webserver where it will be deserialized back into an array of MemoryStream. Once that is done, I can iterate the array and process each Excel file by the MemoryStream.
My problem is I can't figure out how to convert the MemoryStream[] into JSON and then deserialize it on the server.
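(For reference, Json.NET serializes byte[] as a base64 string out of the box, so converting each MemoryStream to byte[] before serializing sidesteps the problem entirely. A minimal sketch, with hypothetical file names:)

using System.IO;
using Newtonsoft.Json;

class ExcelToJsonSketch
{
    static void Main()
    {
        // Hypothetical input files selected by the client.
        byte[][] files =
        {
            File.ReadAllBytes("report1.xlsx"),
            File.ReadAllBytes("report2.xlsx"),
        };

        // Each byte[] becomes a base64 string in the JSON payload.
        string json = JsonConvert.SerializeObject(files);

        // On the server, deserialize straight back to byte[][] and wrap
        // each element in a MemoryStream for the Excel processing code.
        byte[][] roundTripped = JsonConvert.DeserializeObject<byte[][]>(json);
        using var ms = new MemoryStream(roundTripped[0]);
    }
}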
Rather than trying to pass the Excel file around as JSON, let the user upload the file to the server and process it from there.
In the JSON, rather than embedding the content of the file, put a link to it.
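A minimal sketch of that upload approach, assuming HttpClient on the client side and a hypothetical endpoint:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class UploadSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        using var form = new MultipartFormDataContent();

        // Hypothetical file and endpoint; substitute your own.
        byte[] bytes = File.ReadAllBytes("report1.xlsx");
        form.Add(new ByteArrayContent(bytes), "file", "report1.xlsx");

        // The Web API action receives this as an uploaded file instead
        // of a JSON-encoded payload, and can respond with a link.
        await client.PostAsync("https://example.com/api/reports", form);
    }
}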

How to Save a Binary Representation to file

I have the following textual binary representation: "0x255044462D312E340D0A25FFFFFFF..."
I know it's a pdf.
I know it's the textual representation of a SQL Server column (image data type).
But I'm lost as to how to save this binary data to a PDF file on my disk and view the content.
Maybe someone can hint me in the right direction.
Best Regards and Thanks in Advance
You're correct that it is a PDF file (at least it masquerades as one). You have hexadecimally encoded bytes; the first bytes read:
255044462D312E340D0A
%PDF-1.4<CR><LF>
So you appear to have a PDF 1.4 header.
Just take two characters at a time from the string, treat them as hex, convert them to the corresponding byte, and write them to a file. Write binary, not textual (you don't want to add extra line breaks in there; PDF is too binary a format for that to work).
(I did the conversion using this site: http://www.dolcevie.com/js/converter.html)
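A minimal sketch of that conversion in C#, using the truncated sample from the question (the full string would come from the database column):

using System;
using System.IO;

class HexToPdfSketch
{
    static void Main()
    {
        // Truncated sample; the real string is much longer.
        string hex = "0x255044462D312E340D0A";
        if (hex.StartsWith("0x"))
            hex = hex.Substring(2);

        // Two hex characters per byte, converted and written as binary.
        byte[] bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);

        File.WriteAllBytes("output.pdf", bytes);
    }
}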
I'm not sure what database you are working with or how you are getting the string you have above.
Many databases allow you to save binary data as a BLOB or some other byte array type. I believe in MSSQL this is called "image", but I am not 100% sure. I would start by looking at the two following links, in order. The first link talks about how to pull byte array data from a database; the example is in Visual Basic but should be easy to change to C# if that is what you are using.
The second link contains an example of how to save that byte array data to the file system.
I would also suggest posting some of the code you have tried as well so that the community may comment and point out areas you possibly had misunderstandings on.
1.) http://support.microsoft.com/kb/308042
2.) Save and load MemoryStream to/from a file
http://www.pdfsharp.com/PDFsharp/ can read in the binary data, and you can call .Save() and it will write the PDF file to disk for you.
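If I remember the PDFsharp API correctly, that is roughly the following (treat this as a sketch, not a verified sample):

using System.IO;
using PdfSharp.Pdf;
using PdfSharp.Pdf.IO;

class PdfSharpSketch
{
    static void Main()
    {
        // "bytes" would be the binary decoded from the hex string above.
        byte[] bytes = File.ReadAllBytes("output.pdf"); // hypothetical source

        // Opening through PDFsharp also validates that the bytes really
        // are a parseable PDF before writing them back out.
        using var ms = new MemoryStream(bytes);
        PdfDocument document = PdfReader.Open(ms, PdfDocumentOpenMode.Modify);
        document.Save("validated.pdf");
    }
}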

SSRS report corrupt when writing to file with WriteAllBytes C#

We have a process that has SQL Server Reporting Services create a PDF file via
ReportExecutionService.Render
from data in the database. Then we save the byte array that Render returns to the database. Later I get the byte array and do a
File.WriteAllBytes
to write it to disk before attaching it to an email and sending it. The problem I'm running into is that after writing the file to disk, it is corrupt somehow. I'm not sure what to look at, can anyone help?
Thanks
EDIT:
I can write the file from SSRS to disk before saving the byte array to the database, and I can view that file fine.
If you work with the byte[] returned by Render, things are fine, but once you write it to the DB and read it back, you have problems, correct?
Why don't you compare the array written to the DB with the one you retrieve, to find the problem? Then start looking into your DB write and read routines, and finally your DB storage.
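A hypothetical helper for that comparison; a length difference or an early mismatch points at the DB write/read routines rather than at WriteAllBytes:

using System;

class ByteCompareSketch
{
    static void Compare(byte[] fromRender, byte[] fromDb)
    {
        // A truncated column (e.g. a binary type with a fixed size)
        // shows up immediately as a length difference.
        if (fromRender.Length != fromDb.Length)
        {
            Console.WriteLine($"Length differs: {fromRender.Length} vs {fromDb.Length}");
            return;
        }

        for (int i = 0; i < fromRender.Length; i++)
        {
            if (fromRender[i] != fromDb[i])
            {
                Console.WriteLine($"First difference at byte {i}");
                return;
            }
        }
        Console.WriteLine("Arrays are identical");
    }
}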
I've done similar things without problems, such as taking the results of a Reporting Services call into a byte stream and attaching it directly to an email, both using a MemoryStream and an on-disk file. So the basics of this are sound and should work.
Not sure if this is your issue or not, but if the PDF file itself is corrupt you might want to look at how it's being written. If Windows Preview can view the PDF but Adobe cannot, it may have to do with the fact that Adobe is expecting %PDF in the first 1024 bytes of the file (otherwise it will consider it corrupt).
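A quick way to check for that marker, with a hypothetical file name:

using System;
using System.IO;
using System.Text;

class PdfHeaderCheckSketch
{
    static void Main()
    {
        // Read only the window Adobe inspects and look for the marker.
        byte[] head = new byte[1024];
        using var fs = File.OpenRead("report.pdf"); // hypothetical file
        int read = fs.Read(head, 0, head.Length);

        string text = Encoding.ASCII.GetString(head, 0, read);
        Console.WriteLine(text.Contains("%PDF")
            ? "%PDF found in the first 1024 bytes"
            : "%PDF missing; Adobe will consider the file corrupt");
    }
}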

Getting images out of MSSQL in C# using streams

I have a database which stores .png images as the SQL "image" type. I have some code which retrieves these images as a byte[], and sends them to the page via the FileContentResult object in .NET. Performance is key in this application, and the images have to be retrieved and displayed as quickly as possible. My question is: can this operation be performed quicker by passing a byte stream from the database to the browser, without at any time storing the whole byte array in memory? If this is possible and worthwhile doing, how do I do it?
Here is the code I have so far:
// GET: /Image/Get/5
public FileResult Get(int id)
{
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(300));
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetValidUntilExpires(true);

    // Get full-size image by PageId.
    return base.File(page.getFullsizeImage(id), "image/png");
}
And
public byte[] getFullsizeImage(int pageId)
{
    return (from t in tPage
            // Filter on pageId.
            where t.PageId == pageId
            select t.Image).Single().ToArray();
}
Thanks for any help!
A nice question.
In reality, the code required to send the image as a stream is minimal: it is just writing the byte array to the response and setting the HTTP content-type header, which should be very fast.
Now, you seem to want to open up your database to the world to get it done quicker. While that is probably possible using features that allow SQL Server to serve HTTP or interact with IIS directly (I looked at this a long time ago), it is not a good idea, and I do not believe you should take that risk.
You are already using caching, which is good, but with large files the cache gets purged frequently.
One thing you can do is keep a local file cache on the IIS server: when an image is first used, write it to a file on the web server, and from then on (until the cache is cleared, perhaps the next day) return the URL of that static asset, so requests do not have to go through the ASP.NET layer. It is not a great idea, but it will achieve what you need with the least risk.
Changing the LINQ from Single to First should give you nicer SQL; if PageId is the primary key, you can safely assume First and Single will return the same result.
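On the original streaming question: ADO.NET (4.5 and later) can hand the column back as a Stream instead of buffering the whole byte[]. A sketch, assuming the table and column names from the question and a placeholder connection string:

using System.Data;
using System.Data.SqlClient;
using System.IO;

class ImageStreamingSketch
{
    static void StreamImage(int pageId, Stream output)
    {
        using var conn = new SqlConnection("<connection string>");
        using var cmd = new SqlCommand(
            "SELECT Image FROM tPage WHERE PageId = @id", conn);
        cmd.Parameters.AddWithValue("@id", pageId);

        conn.Open();
        // SequentialAccess avoids materializing the whole column value.
        using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (reader.Read())
        {
            using var dbStream = reader.GetStream(0);
            dbStream.CopyTo(output); // e.g. the HTTP response stream
        }
    }
}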
Edit: Based on your comments, I think you should consider using DeepZoom from Microsoft. Essentially, this allows you to generate a specialized image file on the server. When a user is browsing the image in full view, just the couple of million or so pixels displayed on the screen are sent to the browser via AJAX; when the user zooms in, the appropriate pixels for the zoom level and the x and y position are streamed out.
There is a DeepZoom Composer which can be accessed via the command line to generate these image files on demand and write them to a network share. Your users will be really impressed.
Take a look at this example. This is a massive image, gigabytes in size. In about the middle of the image you will see some newspaper pages; you can zoom right in and read the articles.
End of Edit
Do you have to have images with a large file size? If they are only meant for displaying in the browser, they should be optimized for the web. All main image editing applications have this ability.
If you do need the large file size, then you could provide optimized images and then when the user clicks on the image, allow them to download the full file. They should expect this download to take some time.
In Photoshop, the task is "Save for Web". There is a similarly named plugin for GIMP.
I know that this doesn't answer your direct question ("can this operation be performed quicker by passing a byte stream"), but it might help solve your problem.
