Attached images in folder or database? - C#

I'm currently working on a .NET Core 3.1 website project and I'm a little stuck on how to handle images. As I could not find a proper answer for my case, here it is.
I'm working on a reports system where the user should be allowed to create a report and attach images if necessary. My question is: should I store the images in a database or in a folder? The images will not contain "national security threats", but I guess they could be of a private nature.
Is it good practice to store them in a database?
I find the procedure to store them a bit messy:
public async Task<IActionResult> Create(IFormFile image)
{
    if (ModelState.IsValid)
    {
        byte[] p1 = null; // As I understand, it should be stored as byte[]
        using (var fs1 = image.OpenReadStream())
        using (var ms1 = new MemoryStream())
        {
            fs1.CopyTo(ms1);
            p1 = ms1.ToArray();
        }
        Image img = new Image();   // This is my Image model
        img.Img = p1;              // The .Img property maps to a varbinary column in the DB
        _imagesDB.Images.Add(img); // My context
        await _imagesDB.SaveChangesAsync();
        return RedirectToAction(nameof(Index)); // If everything went well, go back to Index
    }
    return View(report);
}
This is more or less OK (I guess), but I was not able to read the image back from the database and send it to the view for display.
Any ideas on how to read the images back from my context and, especially, how to send them from the controller to the view?
Thanks in advance.
Alvaro.

There are pros and cons to both methods of storing files. It's convenient to have your files where your data is; however, it takes a toll on the database side.
Text (the file path) in the database is only a few thousand bytes max (the varchar data type, not the text data type in SQL), while a file can be enormous.
Imagine you wanted to query 1,000,000 users (hypothetically) - you would also be querying 1,000,000 files. That's an enormous amount of data. Storing text (the file path) is minimal, and a query could retrieve 1,000,000 rows of text rather quickly.
This can slow down your web app by causing longer load times due to your queries. I've had this issue personally and had to build a lazy-load workaround to speed up the app.
Also, you have to consider the backup/restore process for your database. The larger the database, the longer your backup/restore times will be - and databases only grow. I heard a story about a company that backed up their database nightly, but the backup took longer than a day because of the files in the database. They weren't even done with the previous day's backup when the next one started.
There are other factors to consider, but those few alone are significant.
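If you go the file-path route instead, a minimal sketch of the upload side might look like this (the uploads folder, the injected IWebHostEnvironment _env, and the Path property on the Image entity are all assumptions, not from the original code):

public async Task<IActionResult> Create(IFormFile image)
{
    if (ModelState.IsValid && image != null)
    {
        // Save the file under wwwroot; "uploads" is an assumed folder name.
        var fileName = Guid.NewGuid() + Path.GetExtension(image.FileName);
        var fullPath = Path.Combine(_env.WebRootPath, "uploads", fileName);
        using (var stream = new FileStream(fullPath, FileMode.CreateNew))
        {
            await image.CopyToAsync(stream);
        }

        // Store only the relative path in the database (Image.Path is an assumed property).
        _imagesDB.Images.Add(new Image { Path = "/uploads/" + fileName });
        await _imagesDB.SaveChangesAsync();
        return RedirectToAction(nameof(Index));
    }
    return View();
}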
In regards to the C# view/controller process...
Files are stored as bytes in a database (varbinary). You'll have to query the data into a byte[], just as you are doing now, and convert it back into a file.
Here's a simplified snippet from one of the controllers in my .NET Core 3.1 web app.
This only downloads one PDF file - you will have to adapt it to your needs, of course.
public async Task<IActionResult> Download(string docId, string docSource)
{
    // Some kind of validation...
    if (!string.IsNullOrEmpty(docId))
    {
        // These are my query parameters (I'm using Dapper)
        var p = new
        {
            docId,
            docSource // This is just a parameter for my specific query
        };

        // Query the database for the document:
        // DocumentModel doc = some kind of async query using
        // the p variable as parameters.
        // I cut this part out since your database methods may be different.
        try
        {
            // Return the file
            return File(doc.Content, "application/pdf", doc.LeafName);
        }
        catch
        {
            // You'll probably want to pass some kind of error message to your view
            return View();
        }
    }
    return View();
}
doc.Content holds the bytes and doc.LeafName is just the name of the document.
You can also pass the file back to your view by setting properties on its ViewModel/Model.
return View(new YourViewModel
{
    SomeViewModelProperty = someProp,
    Documents = documents
});
If you use a file server that's accessible to your API or web app then I believe you can retrieve the file directly from there.
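To tie this back to the original question about showing a stored image, a hedged sketch (the GetImage action name and an Id key on the Image entity are assumptions) could return the varbinary bytes with File() and let the view reference that action:

// Controller action: look the image up by id and return the raw bytes.
public async Task<IActionResult> GetImage(int id)
{
    var img = await _imagesDB.Images.FindAsync(id);
    if (img == null)
    {
        return NotFound();
    }
    return File(img.Img, "image/jpeg"); // content type is an assumption
}

In the Razor view, something like <img src="@Url.Action("GetImage", new { id = item.Id })" /> would then render each image.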

Related

Acquiring waveform of LeCroy oscilloscope from C#/.NET

I am trying to load a waveform from a Teledyne LeCroy WaveSurfer 3054 scope using the NI-VISA / IVI library. I can connect to the scope and read and set control variables, but I can't figure out how to get the trace data back from the scope into my code. I am using USBTMC and can run the sample code in the LeCroy Automation manual, but it does not give an example for getting the waveform array data, just control variables. They do not have a driver for IVI.NET. Here is a distilled version of the code:
// Open session to scope
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();

// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");

// { other setup stuff that works OK }
// ...
// ...

// Attempt to query the Channel 1 waveform data
session.FormattedIO.WriteLine("vbs? 'return = app.Acquisition.C1.Out.Result.DataArray'");
So the last line above (which seems to be what the manual suggests) causes a beep and there is no data that can be read. I've tried all the read functions and they all time out with no data returned. If I query the number of data points I get 100002 which seems correct and I know the data must be there. Is there a better VBS query to use? Is there a read function that I can use to read the data into a byte array that I have overlooked? Do I need to read the data in blocks due to a buffer size limitation, etc.? I am hoping that someone has solved this problem before. Thanks so much!
Here is the first approach I got working:
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();

// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");

// .. a bunch of setup code ...

session.FormattedIO.WriteLine("C1:WF?");         // Query waveform data for Channel 1
byte[] buff = session.RawIO.Read(MAX_BUFF_SIZE); // buff has .TRC-like contents of the waveform data
The buff byte array ends up holding the same file-formatted data as the .TRC files that the scope saves to disk, so it has to be parsed. But at least the waveform data is there! If there is a better way, I may find it and post it, or someone else should feel free to post it.
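If the transfer turns out to be larger than a single read, a hedged sketch (assuming that a Read call returning fewer bytes than requested means the instrument has finished sending) could accumulate the blocks into one buffer:

// Read the response in blocks and stitch them together.
byte[] buff;
using (var ms = new MemoryStream())
{
    byte[] block;
    do
    {
        block = session.RawIO.Read(MAX_BUFF_SIZE);
        ms.Write(block, 0, block.Length);
    } while (block.Length == MAX_BUFF_SIZE); // a full block suggests more data may follow
    buff = ms.ToArray();
}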
The way I achieved this was by saving the screenshot to a local drive on the scope, mapping that drive to the current system, and simply using File.Copy() to copy the image file from the mapped drive to the local computer. It saves the time of parsing the data and re-plotting it compared to working with the TRC-like contents.

Issues with load times when attempting to retrieve photos from Azure AD via Graph query

I currently use the following code to pull information from Azure AD for a company directory:
List<QueryOption> options = new List<QueryOption>();
options.Add(new QueryOption("$filter", "accountEnabled%20eq%20true"));
options.Add(new QueryOption("$select", "displayName,companyName,profilePhoto,userPrincipalName"));

IGraphServiceUsersCollectionPage users = await gsc.Users.Request(options).GetAsync();
userResult.AddRange(users);

while (users.NextPageRequest != null)
{
    users = await users.NextPageRequest.GetAsync();
    userResult.AddRange(users);
}
It works relatively well, retrieving ~400 users' worth of data in roughly 5 seconds (I believe I can drop that time, but I'm not 100% clear on the best practices for dealing with async calls yet). The issue comes when I implement the following code to pull the user profile photos:
foreach (User u in userResult)
{
    Stream photo = await gsc.Users[u.UserPrincipalName].Photo.Content.Request().GetAsync();
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = photo.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        imgMe.ImageUrl = "data:image/jpg;base64," + Convert.ToBase64String(ms.ToArray());
    }
}
This bit raises the page load time to over 30 seconds, and I'm having issues reducing that. So my questions in this specific situation are as follows:
Why does the original query (where I do specify profilePhoto in the select options) not actually pull the profile photo information?
What am I doing wrong here that creates such a drastic load time?
1. Why does the original query (where I do specify profilePhoto in the select options) not actually pull the profile photo information?
The Microsoft Graph API is designed this way: you use the Get a user API to get the basic user profile, and you need the separate Get photo API to get the photo.
So the SDK gives you the same experience.
2. What am I doing wrong here that creates such a drastic load time?
Actually, there is nothing wrong. But there is a way to reduce the time cost. As #Matt.G said in a comment, you can create a separate method that returns a Task for downloading the photo data, and then download the photos in parallel.
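A hedged sketch of that suggestion, reusing the exact Graph call from the question; GetPhotoDataUrlAsync is a made-up helper name, and gsc and userResult are the GraphServiceClient and user list from above:

// Wrap the photo download in a Task-returning method.
private async Task<string> GetPhotoDataUrlAsync(User u)
{
    Stream photo = await gsc.Users[u.UserPrincipalName].Photo.Content.Request().GetAsync();
    using (var ms = new MemoryStream())
    {
        await photo.CopyToAsync(ms);
        return "data:image/jpg;base64," + Convert.ToBase64String(ms.ToArray());
    }
}

// Start all downloads first, then await them together.
Task<string>[] photoTasks = userResult.Select(GetPhotoDataUrlAsync).ToArray();
string[] photoDataUrls = await Task.WhenAll(photoTasks);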
For more about this, you may refer to the official tutorial: How to extend the async walkthrough by using Task.WhenAll (C#)

Ensure 2 methods succeed else roll back

Can anyone please tell me whether this is possible.
I have some code that allows a user to upload/change their image; before the change takes place, I delete the default/old image from disk and then upload the new image.
The problem is: if something goes wrong with either the delete or the upload, how can I roll both back so that the original image is restored?
I thought I could use TransactionScope, but either I'm not using it correctly or it's not applicable for this case.
All the examples I have found involve two calls to the database, but my code only involves one call, and that's the update.
// TODO: check that TransactionScope works OK
using (var tran = new TransactionScope())
{
    // Delete old image before updating new image
    // "123" is a bogus number to throw an error
    var deleteOldImage = _igmpfu.DisplayProfileDetailsForUpdate("123")
                                .FirstOrDefault();
    if (Convert.ToString(deleteOldImage) != "5bb188f0-2508-4cbd-b83d-9a5fe5914a1b.png")
    {
        DeleteOldImage(deleteOldImage);
    }

    // Insert new image
    var imageGuid = imageId + ".png";
    bool imageUrl = _iuma.UpdateAvatar(cookieId, imageGuid);
    if (imageUrl)
    {
        TempData["Message"] = "Image updated";
        return RedirectToAction("Index", "Members");
    }
    tran.Complete();
}
Any assistance in helping a newbie would be appreciated
//------------------------
I have been looking at the computer for too long; all I had to do was:
var deleteOldImage = _igmpfu.DisplayProfileDetailsForUpdate("123").FirstOrDefault();
if (deleteOldImage != null)
{
    // code here for writing to disk
}
I have spent ages trying to work this out and that's all I had to do :-(
Thanks everyone for your replies.
The code you have would only work if the classes you are using know how to enlist in the transaction.
The main issue you will encounter when dealing with files is that the file system is difficult to work with transactionally. The approach I would use is this (a sketch follows the steps):
1. Save the new file on disk with a different filename.
2. Update the database with the new filename.
3. If the DB update was successful, delete the old file from disk; if not, delete the new file from disk.
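A hedged sketch of those three steps, reusing _iuma.UpdateAvatar from the question; imageFolder, oldFileName, and the IFormFile uploadedImage are assumed names:

// 1. Save the new file on disk with a different filename.
string newFileName = Guid.NewGuid() + ".png";
string newPath = Path.Combine(imageFolder, newFileName);
string oldPath = Path.Combine(imageFolder, oldFileName);
using (var stream = new FileStream(newPath, FileMode.CreateNew))
{
    await uploadedImage.CopyToAsync(stream);
}

// 2. Update the database with the new filename.
bool updated = _iuma.UpdateAvatar(cookieId, newFileName);

// 3. On success delete the old file; on failure delete the new one (roll back).
if (updated)
{
    if (System.IO.File.Exists(oldPath))
    {
        System.IO.File.Delete(oldPath);
    }
}
else
{
    System.IO.File.Delete(newPath);
}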

Save files in database with entity framework

I have an ASP.NET MVC solution built on Entity Framework with Microsoft SQL Server 2008. I need to create a function that lets my users upload files.
What I would like is:
A solution that uses the Entity Framework to store files in the Database
A solution that detects and prevents uploading the same file twice via some kind of hash/checksum
Tips on database/table design
In your entity model, map the BLOB database column to a byte[] property. Assign the content of the uploaded file to that property of the entity object, and save changes in the ObjectContext.
To compute a hash, you can use the MD5CryptoServiceProvider class.
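A minimal sketch of the duplicate check, assuming an uploaded HttpPostedFileBase named file and an extra ContentHash column on the document entity (both assumptions, reusing the Entities context from the example further down):

using System.Linq;
using System.Security.Cryptography;

// Read the uploaded file into memory.
byte[] content;
using (var ms = new MemoryStream())
{
    file.InputStream.CopyTo(ms);
    content = ms.ToArray();
}

// Compute an MD5 checksum of the content.
string hash;
using (var md5 = new MD5CryptoServiceProvider())
{
    hash = Convert.ToBase64String(md5.ComputeHash(content));
}

// Reject the upload if a document with the same hash already exists.
using (var ctx = new Entities())
{
    if (ctx.EntityDocs.Any(d => d.ContentHash == hash))
    {
        throw new InvalidOperationException("This file has already been uploaded.");
    }
}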
The "right" way to store a file in a SQL Server 2008 database is to use the FILESTREAM data type. I'm not aware that the Entity Framework supports that, but you can certainly try and see what happens.
That said, most of the time when people do this, they don't store the file in the database. Doing so means that you need to go through ASP.NET and the database server just to serve a file that you could serve directly from the web server. It can also somewhat complicate the backup picture for your database and site. So when we upload files to our MVC/Entity Framework application, we store only a reference to the file location in the database, and store the file itself elsewhere.
Obviously, which strategy is right for you depends a lot on the particulars of your application.
Here's how I do it for Podcasts:
ID  Title        Path               Summary         UploadDate
--  -----------  -----------------  --------------  ----------
1   TestPodcast  /Podcasts/ep1.mp3  A test podcast  2010-02-12
The path stores a reference to the physical location of the Podcast. I used a post from Scott Hanselman on File Uploads with ASP.NET MVC to deal with the file upload part.
A working example (only for the file upload, since this question comes up first in Google), based on #Thomas's answer:
public void AddDocument(HttpPostedFileBase file)
{
    try
    {
        using (TransactionScope scope = new TransactionScope())
        {
            try
            {
                using (var ctx = new Entities())
                {
                    EntityDoc doc = new EntityDoc();      // The document table
                    doc.DocumentFileName = file.FileName; // The file name
                    using (var reader = new System.IO.BinaryReader(file.InputStream))
                    {
                        doc.DocumentFile = reader.ReadBytes(file.ContentLength); // The byte[] field
                    }
                    ctx.EntityDocs.Add(doc);
                    ctx.SaveChanges();
                    scope.Complete();
                }
            }
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace
            }
        }
    }
    catch (Exception)
    {
        throw;
    }
}

New Access database, how can it be done?

I have a project in C# using Microsoft Office Access for storage. I can read from and save to the database.
Now I need to allow the user to create a new database file structured like the working one, and also to implement a Save As option.
Besides that, I need to export to a text file/CSV.
Any ideas or sample code would be helpful.
One way to create a blank DB is to try the following:
using System;
using ADOX;

public class CreateDB
{
    public static void Main(string[] args)
    {
        ADOX.CatalogClass cat = new ADOX.CatalogClass();
        string create =
            @"Provider=Microsoft.Jet.OLEDB.4.0;" +
            @"Data Source=C:\BlankAccessDB\MyAccessDBCreatedFromCsharp.mdb;" +
            "Jet OLEDB:Engine Type=5";
        cat.Create(create);
        cat = null;
    }
}
Both Save and Save As are as easy as using a SaveFileDialog to prompt the user for the filename and location to save the file.
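A minimal Save As sketch, assuming a WinForms app where currentDbPath holds the path of the currently open database (both assumptions):

using System.IO;
using System.Windows.Forms;

// Prompt for a destination and copy the current database file there.
using (var dialog = new SaveFileDialog { Filter = "Access database (*.mdb)|*.mdb" })
{
    if (dialog.ShowDialog() == DialogResult.OK)
    {
        File.Copy(currentDbPath, dialog.FileName, overwrite: true);
    }
}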
The way I did this was to create a new, empty Access database file (it comes to about 100 KB) and embed that file as a resource in my application. "Creating" a new database is then simply a matter of extracting the resource to a file - which gives you a blank database - and then running schema update code to create the schema you require in the blank database, and off you go.
I have a project that contains an empty database set to be embedded, a class with one method as below and, er, that's about it.
This is the code to dump the file from the embedded resource - it's not up to date (I wrote it six years ago) but I have had no need to change it:
public void CreateDatabase(string sPath)
{
    // Get the embedded resource and write it out to disk
    System.IO.Stream DBStream;
    System.IO.FileStream OutputStream;

    OutputStream = new FileStream(sPath, FileMode.Create);
    Assembly ass = System.Reflection.Assembly.GetAssembly(this.GetType());
    DBStream = ass.GetManifestResourceStream("SoftwareByMurph.blank.mdb");

    for (int l = 0; l < DBStream.Length; l++)
    {
        OutputStream.WriteByte((byte)DBStream.ReadByte());
    }
    OutputStream.Close();
}
Simple, effective and the .dll is 124 KB.
Note that I use an utterly blank and empty Access file - attempting to maintain the right schema in the embedded file would cause it to grow (because of the way .mdb files work) and may result in shipping data, which probably shouldn't happen. The schema itself is created/updated/maintained by a separate lump of DDL (SQL) that I run from code.
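A hedged sketch of that separate DDL step, assuming an OLE DB connection to the freshly extracted file (reusing sPath from CreateDatabase above; the table and columns are made up):

using System.Data.OleDb;

// Run the DDL that creates the schema in the blank database.
string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + sPath;
using (var conn = new OleDbConnection(connStr))
{
    conn.Open();
    using (var cmd = new OleDbCommand(
        "CREATE TABLE Reports (Id COUNTER PRIMARY KEY, Title TEXT(255), CreatedOn DATETIME)",
        conn))
    {
        cmd.ExecuteNonQuery();
    }
}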
Export to .CSV is moderately trivial to do by hand, since you pretty much just need to iterate over the columns in a table, but for a smarter approach look at FileHelpers.
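A hand-rolled CSV export might look roughly like this (the table name, file paths, and the lack of quoting/escaping are all simplifications):

using System;
using System.Data.OleDb;
using System.IO;
using System.Linq;

// Dump a single table to CSV by iterating over its columns.
string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\BlankAccessDB\MyAccessDBCreatedFromCsharp.mdb";
using (var conn = new OleDbConnection(connStr))
using (var cmd = new OleDbCommand("SELECT * FROM Reports", conn))
using (var writer = new StreamWriter(@"C:\BlankAccessDB\Reports.csv"))
{
    conn.Open();
    using (OleDbDataReader reader = cmd.ExecuteReader())
    {
        // Header row from the column names
        var columns = Enumerable.Range(0, reader.FieldCount).Select(reader.GetName);
        writer.WriteLine(string.Join(",", columns));

        // One line per data row
        while (reader.Read())
        {
            var values = Enumerable.Range(0, reader.FieldCount)
                                   .Select(i => Convert.ToString(reader.GetValue(i)));
            writer.WriteLine(string.Join(",", values));
        }
    }
}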
