RestSharp, Forge API - Getting error: overlapping ranges on file upload - C#

I am trying to upload a file to a bucket using the forge .NET SDK. It works most of the time but gives an {error: overlapping ranges} occasionally. Here is the code snippet.
private string uploadFileToBucket(Configuration configuration, string bucketKey, string filePath)
{
    ObjectsApi objectsApi = new ObjectsApi(configuration);
    string fileName = Path.GetFileName(filePath);
    string base64EncodedUrn, objectKey;

    using (FileStream fileStream = File.Open(filePath, FileMode.Open))
    {
        long contentLength = fileStream.Length;
        string content_range = "bytes 0-" + (contentLength - 1) + "/" + contentLength;
        dynamic result = objectsApi.UploadChunk(bucketKey, fileName, (int)fileStream.Length, content_range,
            "12313", fileStream);
        DynamicJsonResponse dynamicJsonResponse = (DynamicJsonResponse)result;
        JObject json = dynamicJsonResponse.ToJson();
        JToken urn = json.GetValue("objectId");
        string urnStr = urn.ToString();
        base64EncodedUrn = ApiClient.encodeToSafeBase64(urnStr);
        objectKey = fileName;
    }
    return base64EncodedUrn;
}

Before uploading, the file content has to be read into memory; otherwise, the FileStream object in your code snippet is empty. Also note that your snippet passes a hard-coded session id ("12313") to UploadChunk; if a previous upload already used that session for the same byte range, OSS can report overlapping ranges, so a fresh session id per upload (as in the sample below) is safer.
However, I would like to advise you to use PUT buckets/:bucketKey/objects/:objectName instead if you want to upload the whole file in a single chunk only. Here is my test code. Hope it helps~
private static TwoLeggedApi oauth2TwoLegged;
private static dynamic twoLeggedCredentials;
// Shared ObjectsApi instance used by initializeOAuth() and uploadFileToBucket() below.
private static ObjectsApi objectsApi = new ObjectsApi();
private static Random random = new Random();

public static string RandomString(int length)
{
    const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    return new string(Enumerable.Repeat(chars, length)
        .Select(s => s[random.Next(s.Length)]).ToArray());
}

// Initialize the 2-legged OAuth 2.0 client, and optionally set specific scopes.
private static void initializeOAuth()
{
    // You must provide at least one valid scope
    Scope[] scopes = new Scope[] { Scope.DataRead, Scope.DataWrite, Scope.BucketCreate, Scope.BucketRead };
    oauth2TwoLegged = new TwoLeggedApi();
    twoLeggedCredentials = oauth2TwoLegged.Authenticate(FORGE_CLIENT_ID, FORGE_CLIENT_SECRET, oAuthConstants.CLIENT_CREDENTIALS, scopes);
    objectsApi.Configuration.AccessToken = twoLeggedCredentials.access_token;
}
private static void uploadFileToBucket(string bucketKey, string filePath)
{
    Console.WriteLine("*****Start uploading file to the OSS");
    string path = filePath;
    // File total size
    var info = new System.IO.FileInfo(path);
    long fileSize = info.Length;

    using (FileStream fileStream = File.Open(filePath, FileMode.Open))
    {
        string sessionId = RandomString(12);
        Console.WriteLine(string.Format("sessionId: {0}", sessionId));
        long contentLength = fileSize;
        string content_range = "bytes 0-" + (contentLength - 1) + "/" + contentLength;
        Console.WriteLine("Uploading range: " + content_range);

        byte[] buffer = new byte[contentLength];
        MemoryStream memoryStream = new MemoryStream(buffer);
        int nb = fileStream.Read(buffer, 0, (int)contentLength);
        memoryStream.Write(buffer, 0, nb);
        memoryStream.Position = 0;

        dynamic response = objectsApi.UploadChunk(bucketKey, info.Name, (int)contentLength, content_range,
            sessionId, memoryStream);
        Console.WriteLine(response);
    }
}

static void Main(string[] args)
{
    initializeOAuth();
    uploadFileToBucket(BUCKET_KEY, FILE_PATH);
}
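For the single-chunk case specifically, the SDK also wraps PUT buckets/:bucketKey/objects/:objectName directly. Here is a minimal sketch, assuming the same static objectsApi and access-token setup as above; the UploadObject call mirrors Autodesk's Forge samples:
private static void uploadWholeFileToBucket(string bucketKey, string filePath)
{
    using (FileStream fileStream = File.Open(filePath, FileMode.Open))
    {
        // One PUT for the whole file; no chunk session, so no ranges to overlap.
        dynamic response = objectsApi.UploadObject(bucketKey, Path.GetFileName(filePath),
            (int)fileStream.Length, fileStream, "application/octet-stream");
        Console.WriteLine(response);
    }
}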


MVC WebAPI return multiple images

I have 3 images in a directory but my code always returns one of them. I'd like to return all 3 images (image1.jpg, image2.jpg, image3.jpg) and get them in my Xamarin app.
I think returning the result as an array might solve the problem, but I don't understand what I need.
var result = new HttpResponseMessage(HttpStatusCode.OK);
MemoryStream ms = new MemoryStream();
for (int i = 0; i < 3; i++)
{
    String filePath = HostingEnvironment.MapPath("~/Fotos/Empresas/Comer/" + id + (i + 1) + ".jpg");
    FileStream fileStream = new FileStream(filePath, FileMode.OpenOrCreate);
    Image image = Image.FromStream(fileStream);
    image.Save(ms, ImageFormat.Jpeg);
    fileStream.Close();
    byte[] bytes = File.ReadAllBytes(filePath);
    byte[] length = BitConverter.GetBytes(bytes.Length);
    // Write length followed by file bytes to stream
    ms.Write(length, 0, 3);
    ms.Write(bytes, 0, bytes.Length);
}
result.Content = new StreamContent(ms);
return result;
Now I'm getting bytes; I edited the code a little:
byte[] imageAsBytes = client.GetByteArrayAsync(url).Result;
MemoryStream stream1 = new MemoryStream(imageAsBytes);
img.Source = ImageSource.FromStream(() => { return stream1; });
This is my Xamarin code to get the images, but I'm still getting nothing. =/
If you just return a MemoryStream, it's not easy to tell one image from another in the stream. Instead, you can return a List of byte arrays; then you can access each position in the list and convert from byte array to image.
Here is a fully functional .NET Core Web API controller:
public class GetImagesController : Controller
{
    private readonly IWebHostEnvironment _host;

    public GetImagesController(IWebHostEnvironment host)
    {
        _host = host;
    }

    [HttpGet("{images}")]
    public async Task<List<byte[]>> Get([FromQuery] string images)
    {
        List<byte[]> imageBytes = new List<byte[]>();
        String[] strArray = images.Split(',');
        for (int i = 0; i < strArray.Length; i++)
        {
            String filePath = Path.Combine(_host.ContentRootPath, "images", strArray[i] + ".jpg");
            byte[] bytes = System.IO.File.ReadAllBytes(filePath);
            imageBytes.Add(bytes);
        }
        return imageBytes;
    }
}
This controller can be called like this:
https://localhost:44386/getImages?images=P1,P2,P3
Given that you have a folder called images with the files P1.jpg, P2.jpg and P3.jpg under your ContentRootPath.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/web-host?view=aspnetcore-3.0
You'll need something in the response to delimit where each image starts and finishes. As a basic solution, you could write the image length as an Int32 and follow it with the image data. On the other end, you'll need to read the 4-byte length followed by that many bytes:
[HttpGet]
public HttpResponseMessage Get(string id)
{
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    String[] strArray = id.Split(',');
    var ms = new MemoryStream();
    for (int i = 0; i < strArray.Length; i++)
    {
        String filePath = HostingEnvironment.MapPath("~/Fotos/Empresas/Comer/" + strArray[i] + (i + 1) + ".jpg");
        byte[] bytes = File.ReadAllBytes(filePath);
        byte[] length = BitConverter.GetBytes(bytes.Length);
        // Write length followed by file bytes to stream
        ms.Write(length, 0, 4);
        ms.Write(bytes, 0, bytes.Length);
    }
    ms.Position = 0; // rewind before handing the stream to StreamContent
    result.Content = new StreamContent(ms);
    return result;
}
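On the Xamarin side, here is a minimal sketch of reading that length-prefixed stream back into a list of byte arrays; the HttpClient and URL plumbing are assumptions, not from the original post:
// Reads 4-byte length prefixes followed by that many image bytes until EOF.
async Task<List<byte[]>> ReadImagesAsync(HttpClient client, string url)
{
    var images = new List<byte[]>();
    using (var stream = await client.GetStreamAsync(url))
    {
        var lengthBytes = new byte[4];
        while (await ReadExactlyAsync(stream, lengthBytes, 4))
        {
            int length = BitConverter.ToInt32(lengthBytes, 0);
            var image = new byte[length];
            if (!await ReadExactlyAsync(stream, image, length))
                break; // truncated stream
            images.Add(image);
        }
    }
    return images;
}

// Stream.Read can return fewer bytes than requested, so loop until full.
static async Task<bool> ReadExactlyAsync(Stream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0) return false;
        offset += read;
    }
    return true;
}
Each byte array can then be displayed with ImageSource.FromStream, as in the question's snippet.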

Trying to download large blob files

I need to download large backup files from my storage account.
I tried it with SAS: I generated a link, and when I enter that link directly into the browser it downloads the file, but when I try to download through my code I get an empty file or no download at all. The commented-out lines are approaches I already tried; the last one is Redirect(blobSasUri);
public async Task DownloadBlobItemAsync([FromQuery] string userId, [FromRoute] string fileName, [FromBody] PathObject path, [FromRoute] int filestorageConnectionId)
{
    var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
    var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
    CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
    // CloudBlockBlob blobNew = new CloudBlockBlob(new Uri(blobSasUri));
    // var pathNew = Directory.GetCurrentDirectory();
    // blobNew.DownloadToFileAsync(pathNew, FileMode.Create);
    //await blob.DownloadToFileAsync(blobSasUri, FileMode.Create);
    Redirect(blobSasUri);
    //using (var client = new WebClient())
    //{
    //    client.DownloadFile(blobSasUri, fileName);
    //}
}
I don't know which method you used to download the blob; I tested with blobSas.DownloadToStream() and it worked for me, so maybe you could try my code.
static void Main(string[] args)
{
    string storageConnectionString = "connection string";
    // Check whether the connection string can be parsed.
    CloudStorageAccount storageAccount;
    CloudStorageAccount.TryParse(storageConnectionString, out storageAccount);

    var containerName = "test";
    var blobName = "testfile.zip";
    string saveFileName = @"E:\testfilefolder\myfile1.zip";

    var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
    var blob = blobContainer.GetBlockBlobReference(blobName);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));

    // Download blob through SAS url
    CloudBlockBlob blobSas = new CloudBlockBlob(new Uri(blobSasUri));
    long startPosition = 0;
    using (MemoryStream ms = new MemoryStream())
    {
        blobSas.DownloadToStream(ms);
        byte[] data = new byte[ms.Length];
        ms.Position = 0;
        ms.Read(data, 0, data.Length);
        using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
        {
            fs.Position = startPosition;
            fs.Write(data, 0, data.Length);
        }
    }
}
Besides using a SAS URL to download a large blob, another option is to download the file in chunks. Here is the code.
int segmentSize = 1 * 1024 * 1024; // 1 MB chunk
var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
var blob = blobContainer.GetBlockBlobReference(blobName);
blob.FetchAttributes();
var blobLengthRemaining = blob.Properties.Length;
long startPosition = 0;
string saveFileName = @"E:\testfilefolder\myfile.zip";
do
{
    long blockSize = Math.Min(segmentSize, blobLengthRemaining);
    byte[] blobContents = new byte[blockSize];
    using (MemoryStream ms = new MemoryStream())
    {
        blob.DownloadRangeToStream(ms, startPosition, blockSize);
        ms.Position = 0;
        ms.Read(blobContents, 0, blobContents.Length);
        using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
        {
            fs.Position = startPosition;
            fs.Write(blobContents, 0, blobContents.Length);
        }
    }
    startPosition += blockSize;
    blobLengthRemaining -= blockSize;
}
while (blobLengthRemaining > 0);
Hope this helps; if you still have any other problems, please feel free to let me know.
This doesn't work for me for large files (>5 GB). What I did was return the path to the file with a SAS appended and send that to the frontend. Now the frontend has a link with the SAS and downloads the file directly through the browser.
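A minimal sketch of that "return a SAS link" approach, reusing the SAS generation from the question's code; the action name and return shape here are assumptions for illustration:
public async Task<IActionResult> GetDownloadLinkAsync([FromQuery] string userId, [FromRoute] string fileName, [FromBody] PathObject path, [FromRoute] int filestorageConnectionId)
{
    var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
    var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
    CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    // Return the SAS link; the browser downloads straight from blob storage,
    // so the file never has to flow through this API.
    return Ok(blob.Uri + sas);
}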

How to download an image and save it in local storage using Xamarin.Forms?

I want to download an image and store it in a specific folder in local storage.
I am using this to download image:
var imageData = await AzureStorage.GetFileAsync(ContainerType.Image, uploadedFilename);
var img = ImageSource.FromStream(() => new MemoryStream(imageData));
Create a FileService interface
In your shared code, create a new interface, for instance called IFileService.cs:
public interface IFileService
{
    void SavePicture(string name, Stream data, string location = "temp");
}
Implementation Android
In your Android project, create a new class called "FileService.cs".
Make sure it derives from the interface created before and decorate it with the dependency attribute:
[assembly: Dependency(typeof(FileService))]
namespace MyApp.Droid
{
    public class FileService : IFileService
    {
        public void SavePicture(string name, Stream data, string location = "temp")
        {
            var documentsPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
            documentsPath = Path.Combine(documentsPath, "Orders", location);
            Directory.CreateDirectory(documentsPath);

            string filePath = Path.Combine(documentsPath, name);

            byte[] bArray = new byte[data.Length];
            using (FileStream fs = new FileStream(filePath, FileMode.OpenOrCreate))
            {
                using (data)
                {
                    // Note: a single Read is fine for memory-backed streams, but
                    // Stream.Read may return fewer bytes than requested in general.
                    data.Read(bArray, 0, (int)data.Length);
                }
                fs.Write(bArray, 0, bArray.Length);
            }
        }
    }
}
Implementation iOS
The implementation for iOS is basically the same:
[assembly: Dependency(typeof(FileService))]
namespace MyApp.iOS
{
    public class FileService : IFileService
    {
        public void SavePicture(string name, Stream data, string location = "temp")
        {
            var documentsPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
            documentsPath = Path.Combine(documentsPath, "Orders", location);
            Directory.CreateDirectory(documentsPath);

            string filePath = Path.Combine(documentsPath, name);

            byte[] bArray = new byte[data.Length];
            using (FileStream fs = new FileStream(filePath, FileMode.OpenOrCreate))
            {
                using (data)
                {
                    data.Read(bArray, 0, (int)data.Length);
                }
                fs.Write(bArray, 0, bArray.Length);
            }
        }
    }
}
In order to save your file, in your shared code, you call
DependencyService.Get<IFileService>().SavePicture("ImageName.jpg", imageData, "imagesFolder");
and you should be good to go.
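Putting it together with the question's download snippet (a sketch: SavePicture takes a Stream, so the downloaded byte array is wrapped in a MemoryStream; AzureStorage and ContainerType come from the question):
// Download via the question's helper, then save through the dependency service.
var imageData = await AzureStorage.GetFileAsync(ContainerType.Image, uploadedFilename);
DependencyService.Get<IFileService>().SavePicture(uploadedFilename, new MemoryStream(imageData), "imagesFolder");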
public void DownloadImage(string URL)
{
    var webClient = new WebClient();
    webClient.DownloadDataCompleted += (s, e) =>
    {
        byte[] bytes = e.Result; // the downloaded data
        string documentsPath = Android.OS.Environment.GetExternalStoragePublicDirectory(Android.OS.Environment.DirectoryPictures).AbsolutePath;
        var partedURL = URL.Split('/');
        string localFilename = partedURL[partedURL.Length - 1];
        string localPath = System.IO.Path.Combine(documentsPath, localFilename);
        File.WriteAllBytes(localPath, bytes); // writes to local storage
        Application.Current.MainPage.IsBusy = false;
        Application.Current.MainPage.DisplayAlert("Download", "Download Finished", "OK");
        MediaScannerConnection.ScanFile(Forms.Context, new string[] { localPath }, null, null);
    };
    var url = new Uri(URL);
    webClient.DownloadDataAsync(url);
}
Here you have to use the dependency service from the Xamarin.Forms PCL to call this method from the Android project. This will store your image in a public folder. I will edit this if I get time to make a demo with iOS as well.
I really liked Karan's approach. I've made a sort of combination of both answers and wanted to share it here. It worked pretty well, actually.
public String DownloadImage(Uri URL)
{
    WebClient webClient = new WebClient();
    string folderPath = System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "Images", "temp");
    string fileName = URL.ToString().Split('/').Last();
    string filePath = System.IO.Path.Combine(folderPath, fileName);
    webClient.DownloadDataCompleted += (s, e) =>
    {
        Directory.CreateDirectory(folderPath);
        File.WriteAllBytes(filePath, e.Result);
    };
    webClient.DownloadDataAsync(URL);
    // Note: the download is asynchronous, so the file at the returned path
    // only exists once DownloadDataCompleted has fired.
    return filePath;
}

Opening large files

I have a process I made that has been working well for several months now. The process recursively zips up all files and folders in a given directory and then uploads the zip file to an FTP server. It's been working, but now the zip file is exceeding 2 GB and it's erroring out. Can someone please help me figure out how to get around this 2 GB limit? I commented the offending line in the code. Here is the code:
class Program
{
    // Location of upload directory
    private const string SourceFolder = @"C:\MyDirectory";
    // FTP server
    private const string FtpSite = "10.0.0.1";
    // FTP User Name
    private const string FtpUserName = "myUserName";
    // FTP Password
    private const string FtpPassword = "myPassword";

    static void Main(string[] args)
    {
        try
        {
            // Zip everything up using SharpZipLib
            string tmpFile = Path.GetTempFileName();
            var zip = new ZipOutputStream(File.Create(tmpFile));
            zip.SetLevel(8);
            ZipFolder(SourceFolder, SourceFolder, zip);
            zip.Finish();
            zip.Close();

            // Upload the zip file
            UploadFile(tmpFile);

            // Delete the zip file
            File.Delete(tmpFile);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private static void UploadFile(string fileName)
    {
        string remoteFileName = "/ImagesUpload_" + DateTime.Now.ToString("MMddyyyyHHmmss") + ".zip";
        var request = (FtpWebRequest)WebRequest.Create("ftp://" + FtpSite + remoteFileName);
        request.Credentials = new NetworkCredential(FtpUserName, FtpPassword);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.KeepAlive = false;
        request.Timeout = -1;
        request.UsePassive = true;
        request.UseBinary = true;

        // Error occurs in the next line!!!
        byte[] b = File.ReadAllBytes(fileName);
        using (Stream s = request.GetRequestStream())
        {
            s.Write(b, 0, b.Length);
        }
        using (var resp = (FtpWebResponse)request.GetResponse())
        {
        }
    }

    private static void ZipFolder(string rootFolder, string currentFolder, ZipOutputStream zStream)
    {
        string[] subFolders = Directory.GetDirectories(currentFolder);
        foreach (string folder in subFolders)
            ZipFolder(rootFolder, folder, zStream);

        string relativePath = currentFolder.Substring(rootFolder.Length) + "/";
        if (relativePath.Length > 1)
        {
            var dirEntry = new ZipEntry(relativePath) { DateTime = DateTime.Now };
        }
        foreach (string file in Directory.GetFiles(currentFolder))
        {
            AddFileToZip(zStream, relativePath, file);
        }
    }

    private static void AddFileToZip(ZipOutputStream zStream, string relativePath, string file)
    {
        var buffer = new byte[4096];
        var fi = new FileInfo(file);
        string fileRelativePath = (relativePath.Length > 1 ? relativePath : string.Empty) + Path.GetFileName(file);
        var entry = new ZipEntry(fileRelativePath) { DateTime = DateTime.Now, Size = fi.Length };
        zStream.PutNextEntry(entry);
        using (FileStream fs = File.OpenRead(file))
        {
            int sourceBytes;
            do
            {
                sourceBytes = fs.Read(buffer, 0, buffer.Length);
                zStream.Write(buffer, 0, sourceBytes);
            } while (sourceBytes > 0);
        }
    }
}
You are trying to allocate an array with more than 2 billion elements. .NET limits the maximum size of an array to System.Int32.MaxValue elements, i.e. 2 GB is the upper bound.
You're better off reading the file in pieces and uploading it in pieces, e.g. using a read loop:
const int buflen = 128 * 1024;
byte[] buf = new byte[buflen];
using (FileStream source = new FileStream(fileName, FileMode.Open))
using (Stream dest = request.GetRequestStream())
{
    while (true)
    {
        int bytesRead = source.Read(buf, 0, buflen);
        if (bytesRead == 0) break;
        dest.Write(buf, 0, bytesRead);
    }
}
The problem isn't in the zip, but in the File.ReadAllBytes call, which returns an array, and arrays have a default size limit of 2 GB.
It is possible to lift this limit (via the gcAllowVeryLargeObjects runtime setting), as detailed here. I'm assuming you're already compiling this specifically for 64-bit to handle these kinds of file sizes. Enabling this option switches .NET over to using 64-bit addressing for arrays instead of the default 32-bit.
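For reference, a minimal sketch of that app.config switch (.NET Framework 4.5+); note that even with it enabled, a single array dimension is still capped near Int32.MaxValue elements, so the chunked approach above remains the safer route for ReadAllBytes-style loads:
<configuration>
  <runtime>
    <!-- Allows objects larger than 2 GB in 64-bit processes -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>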
It would probably be better to split the archive into parts and upload them separately, however. As far as I know, the built-in ZipFile class doesn't support multi-part archives, but several of the third-party libraries do.
Edit: I was thinking about the resulting zip output rather than the input. To load a huge amount of data INTO the zip, you should use the buffer-based approach suggested by Petesh and philip.

File got corrupted when I Read the file's all bytes on list type of object then write a file again Using c#

I've tried a lot to write a file from a collection of bytes, but the file always gets corrupted and I'm not sure why it's happening. If somebody knows what's going on, it would help me a lot.
Note: It always works fine when I uncomment this line under the while loop: //AppendAllBytes(pathSource, bytes);
But I need the bytes from the object; later on I will use this concept for p2p.
namespace Sender
{
    static class Program
    {
        static void Main(string[] args)
        {
            string pathSource = "../../Ok&SkipButtonForWelcomeToJakayaWindow.jpg";
            using (FileStream fsSource = new FileStream(pathSource,
                FileMode.Open, FileAccess.Read))
            {
                // Read the source file into a byte array.
                const int numBytesToRead = 100000; // Your amount to read at a time
                byte[] bytes = new byte[numBytesToRead];
                int numBytesRead = 0;
                if (File.Exists(pathSource))
                {
                    Console.WriteLine("File of this name already exist, you want to continue?");
                    System.IO.FileInfo obj = new System.IO.FileInfo(pathSource);
                    pathSource = "../../Files/" + Guid.NewGuid() + obj.Extension;
                }

                int i = 0;
                byte[] objBytes = new byte[numBytesRead];
                List<FileInfo> objFileInfo = new List<FileInfo>();
                Guid fileID = Guid.NewGuid();
                FileInfo fileInfo = null;

                while (numBytesToRead > 0)
                {
                    // Read may return anything from 0 to numBytesToRead.
                    int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
                    i++;
                    //AppendAllBytes(pathSource, bytes);
                    fileInfo = new FileInfo { FileID = fileID, FileBytes = bytes, FileByteID = i };
                    objFileInfo.Add(fileInfo);

                    // Break when the end of the file is reached.
                    if (n == 0)
                    {
                        break;
                    }
                    // Do here what you want to do with the bytes read (convert to string using Encoding.YourEncoding.GetString())
                }

                //foreach (var b in objFileInfo.OrderBy(m => m.FileByteID))
                //{
                //    AppendAllBytes(pathSource, b.FileBytes);
                //}

                foreach (var item in objFileInfo)
                {
                    AppendAllBytes(pathSource, item.FileBytes);
                }
                fileInfo = null;
            }
        }

        static void AppendAllBytes(string path, byte[] bytes)
        {
            using (var stream = new FileStream(path, FileMode.Append))
            {
                stream.Write(bytes, 0, bytes.Length);
            }
        }
    }

    class FileInfo
    {
        public Guid FileID { get; set; }
        public int FileByteID { get; set; }
        public byte[] FileBytes { get; set; }
    }
}
You don't increase numBytesRead and don't decrease numBytesToRead.
objFileInfo is a List of FileInfo, and each FileInfo holds a reference-type byte[].
You copy the reference to the bytes when you create a new FileInfo and then repeatedly overwrite those bytes until you reach the end of the file.
byte[] bytes = new byte[numBytesToRead];
//...
List<FileInfo> objFileInfo = new List<FileInfo>();
//...
while (numBytesToRead > 0)
{
    int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
    //First time here bytes[0] == the first byte of the file
    //Second time here bytes[0] == the 100,000th byte of the file
    //...
    //The following line should copy the bytes into the file info instead of
    //storing a reference to the existing byte array
    fileInfo = new FileInfo { ..., FileBytes = bytes, ... };
    objFileInfo.Add(fileInfo);
    //First time here objFileInfo[0].FileBytes[0] == first byte of file
    //Second time here objFileInfo[0].FileBytes[0] == the 100,000th byte of the
    //file, because objFileInfo[All].FileBytes == bytes
    //...
}
You can test this by looking at the FileBytes variable of multiple FileInfo entries; I'd bet the contents look similar.
There are two problems in your code:
The blocks of data are all of size 100000, which cannot work most of the time: unless the file size is an exact multiple of that, the last block will be padded with 0s.
Every FileInfo.FileBytes references the same buffer, so each new read overwrites the data of every block already stored, leaving all of them identical to the last block read.
using System;
using System.Collections.Generic;
using System.IO;

static class Program
{
    static void Main(string[] args)
    {
        string pathSource = "test.jpg";
        using (FileStream fsSource = new FileStream(pathSource, FileMode.Open, FileAccess.Read))
        {
            // Read the source file into a byte array.
            const int BufferSize = 100000; // Your amount to read at a time
            byte[] buffer = new byte[BufferSize];
            if (File.Exists(pathSource))
            {
                Console.WriteLine("File of this name already exist, you want to continue?");
                System.IO.FileInfo obj = new System.IO.FileInfo(pathSource);
                pathSource = "Files/" + Guid.NewGuid() + obj.Extension;
            }

            int i = 0, offset = 0, bytesRead;
            List<FileInfo> objFileInfo = new List<FileInfo>();
            Guid fileID = Guid.NewGuid();

            while (0 != (bytesRead = fsSource.Read(buffer, offset, BufferSize)))
            {
                // Copy only the bytes actually read into a fresh array,
                // so every FileInfo owns its own data.
                var data = new byte[bytesRead];
                Array.Copy(buffer, data, bytesRead);
                objFileInfo.Add(new FileInfo { FileID = fileID, FileBytes = data, FileByteID = ++i });
            }

            foreach (var item in objFileInfo)
            {
                AppendAllBytes(pathSource, item.FileBytes);
            }
        }
    }

    static void AppendAllBytes(string path, byte[] bytes)
    {
        using (var stream = new FileStream(path, FileMode.Append))
        {
            stream.Write(bytes, 0, bytes.Length);
        }
    }
}

class FileInfo
{
    public Guid FileID { get; set; }
    public int FileByteID { get; set; }
    public byte[] FileBytes { get; set; }
}
