I have a code-first Entity Framework class that looks like this:
public class Image
{
public int ImageID { get; set; }
public string DataType { get; set; }
public int Size { get; set; }
public byte[] Content { get; set; }
}
I have written several lines of code in a controller that receives an IFormFile, extracts the file data, and reads it into an Image:
var content = formFile.OpenReadStream();
byte[] buf = new byte[content.Length];
content.Read(buf, 0, buf.Length);
var newImg = new Image();
newImg.Content = buf;
newImg.DataType = formFile.ContentType;
Since this code is re-used wherever an image is being uploaded, I would like to somehow encapsulate it as a 'SaveUpload' function. It will be used in various controllers across the solution.
Is the best way to add this as a {set;} only non-mapped property to the Image class/table?
Or do I somehow encapsulate or inherit this property (sorry, I am still learning OOP terms)?
You may want to consider creating an UploadService class to encapsulate this logic. Neither the controllers nor the Image class should really be aware of these details, and by abstracting it into a service you will likely find the code more readable and easier to maintain.
UploadService:
public class UploadService
{
public void SaveUpload(IFormFile formFile)
{
    // Note: a single Stream.Read is not guaranteed to fill the buffer for large
    // files; copying the upload via formFile.CopyTo(memoryStream) is safer.
    using (var content = formFile.OpenReadStream())
    {
        byte[] buf = new byte[content.Length];
        content.Read(buf, 0, buf.Length);

        var newImg = new Image();
        newImg.Content = buf;
        newImg.DataType = formFile.ContentType;

        //insert newImg...
    }
}
}
I would even consider taking it one step further by injecting this service into your controllers using a dependency injection framework like Unity or Ninject, which will allow you to isolate the service for unit testing.
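For instance, with the built-in ASP.NET Core container (instead of Unity or Ninject) the registration and constructor injection might look roughly like the sketch below; IUploadService and ImagesController are illustrative names, and UploadService is assumed to implement the interface:
public interface IUploadService
{
    void SaveUpload(IFormFile formFile);
}

// In Startup.ConfigureServices (or Program.cs):
// services.AddScoped<IUploadService, UploadService>();

public class ImagesController : Controller
{
    private readonly IUploadService _uploadService;

    // The container supplies the service, so the controller can be unit tested
    // with a fake IUploadService.
    public ImagesController(IUploadService uploadService)
    {
        _uploadService = uploadService;
    }

    [HttpPost]
    public IActionResult Upload(IFormFile formFile)
    {
        _uploadService.SaveUpload(formFile);
        return Ok();
    }
}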
You could do something like
public class Image
{
    public int ImageID { get; set; }
    public string DataType { get; set; }
    public int Size { get; set; }
    public byte[] Content { get; set; }

    public static Image SaveUpload(IFormFile formFile)
    {
        var newImage = new Image();
        var content = formFile.OpenReadStream();
        byte[] buf = new byte[content.Length];
        content.Read(buf, 0, buf.Length);
        newImage.Content = buf;
        newImage.DataType = formFile.ContentType;
        return newImage;
    }
}
Save with
var image = Image.SaveUpload(formFile);
Inside your class, add a method like this:
public static Image YourMethod(IFormFile formFile /* plus any other parameters you need */)
{
    var content = formFile.OpenReadStream();
    byte[] buf = new byte[content.Length];
    content.Read(buf, 0, buf.Length);
    var newImg = new Image();
    newImg.Content = buf;
    newImg.DataType = formFile.ContentType;
    return newImg;
}
It's possible to add a normal static function, but as good practice a POCO class shouldn't contain any logic except its fields. A better way is to create another class, e.g. an UploadService with a method Save(Image image). Your choice.
I need to store big data (on the order of gigabytes) to a stream using protobuf-net 2.4.0.
Currently, I write a small header with LengthPrefix using PrefixStyle.Base128 and the big body with the standard protobuf serialization approach, as in the code below, and it works like a charm.
private void Serialize(Stream stream)
{
Model.SerializeWithLengthPrefix(stream, FileHeader, typeof(FileHeader), PrefixStyle.Base128, 1);
if (FileHeader.SerializationMode == serializationType.Compressed)
{
using (var gzip = new GZipStream(stream, CompressionMode.Compress, true))
using (var bs = new BufferedStream(gzip, GZIP_BUFFER_SIZE))
{
Model.Serialize(bs, FileBody);
}
}
else
Model.Serialize(stream, FileBody);
}
Now I need to split the body into 2 different objects, so I have to use the LengthPrefix approach for them too, but I don't know what the best PrefixStyle is in such a scenario. Can I continue to use Base128? What does "useful for compatibility" mean in the Fixed32 description?
UPDATE
I found this post where Marc Gravell explains that there is an option to use a start-marker and an end-marker, but I'm not sure whether it can be used with the LengthPrefix approach.
To be clearer, is the approach shown in the code below valid?
[ProtoContract]
public class FileHeader
{
[ProtoMember(1)]
public int Version { get; }
[ProtoMember(2)]
public string Author { get; set; }
[ProtoMember(3)]
public string Organization { get; set; }
}
[ProtoContract(IsGroup = true)] // can IsGroup=true help with LengthPrefix for big data?
public class FileBody1
{
[ProtoMember(1, DataFormat = DataFormat.Group)]
public List<Foo1> Foo1s { get; }
[ProtoMember(2, DataFormat = DataFormat.Group)]
public List<Foo2> Foo2s { get; }
[ProtoMember(3, DataFormat = DataFormat.Group)]
public List<Foo3> Foo3s { get; }
}
[ProtoContract(IsGroup = true)] // can IsGroup=true help with LengthPrefix for big data?
public class FileBody2
{
[ProtoMember(1, DataFormat = DataFormat.Group)]
public List<Foo4> Foo4s { get; }
[ProtoMember(2, DataFormat = DataFormat.Group)]
public List<Foo5> Foo5s { get; }
[ProtoMember(3, DataFormat = DataFormat.Group)]
public List<Foo6> Foo6s { get; }
}
public static class Helper
{
private static void SerializeFile(Stream stream, FileHeader header, FileBody1 body1, FileBody2 body2)
{
var model = RuntimeTypeModel.Create();
var serializationContext = new ProtoBuf.SerializationContext();
model.SerializeWithLengthPrefix(stream, header, typeof(FileHeader), PrefixStyle.Base128, 1);
model.SerializeWithLengthPrefix(stream, body1, typeof(FileBody1), PrefixStyle.Base128, 1, serializationContext);
model.SerializeWithLengthPrefix(stream, body2, typeof(FileBody2), PrefixStyle.Base128, 1, serializationContext);
}
private static void DeserializeFile(Stream stream, ref FileHeader header, ref FileBody1 body1, ref FileBody2 body2)
{
var model = RuntimeTypeModel.Create();
var serializationContext = new ProtoBuf.SerializationContext();
header = model.DeserializeWithLengthPrefix(stream, null, typeof(FileHeader), PrefixStyle.Base128, 1) as FileHeader;
body1 = model.DeserializeWithLengthPrefix(stream, null, typeof(FileBody1), PrefixStyle.Base128, 1, null, out _, out _, serializationContext) as FileBody1;
body2 = model.DeserializeWithLengthPrefix(stream, null, typeof(FileBody2), PrefixStyle.Base128, 1, null, out _, out _, serializationContext) as FileBody2;
}
}
If so, I suppose I can continue to store big data without worrying about the length prefix (I mean the marker indicating the length of the message).
Base128 is probably the best general-purpose choice, simply because it maintains protocol compatibility (the others do not). What I would suggest, though, is that for very large files, using "group" mode on the collections (and sub-objects in general) may be highly desirable; this makes serialization faster, by virtue of not having to calculate any length-prefixes for large object graphs.
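As a rough sketch of what that looks like on the members (Foo1 is a placeholder type from the question, FileBodySketch and FileSection are hypothetical names), group encoding is requested per member via DataFormat.Group:
[ProtoContract]
public class FileBodySketch
{
    // Group encoding writes start/end markers instead of a length prefix,
    // so the serializer does not have to measure large collections up front.
    [ProtoMember(1, DataFormat = DataFormat.Group)]
    public List<Foo1> Foo1s { get; } = new List<Foo1>();

    // The same applies to nested sub-objects.
    [ProtoMember(2, DataFormat = DataFormat.Group)]
    public FileSection Section { get; set; }
}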
I'm developing a system that uses Entity Framework Code First for database creation, and I need to store an image file in the database. I have this working, but there is a problem when no image file is selected.
So I need to know how to pass a default image file to the db if no image is selected (assume I have no default image on my hard drive, so the code needs to generate the image file itself).
I'm using a layered architecture:
1. Entity Layer - for code-first db creation
2. Data Access Layer - for accessing the db; all the db-related methods are here
3. Business Logic Layer - for connecting the interface with the data access layer
4. Presentation Layer - for interface design
The code for layers 1, 2, and 3 is complete, as follows:
-------Entity-------
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace BisCare.BMS.Entity
{
public class RentingGood
{
public RentingGood() { }
public Guid RentingGoodId { get; set; }
public string RentingGoodName { get; set; }
public bool RentingGoodStatus { get; set; }
public byte[] RentingGoodImage { get; set; }
public virtual Category Category { get; set; }
public virtual Company Company { get; set; }
public virtual ICollection<RentingGoodsStock> RentingGoodsStocks { get; set; }
}
}
------- Data access-------
public bool AddRentingGoodData(string name, string catName, byte[] image)
{
try
{
Guid catGuid = new Guid();
using (EntityModel model = new EntityModel())
{
var CatId = from b in model.Categories
where b.CategoryName == catName
select b;
foreach (var x in CatId)
{
catGuid = x.CategoryId;
}
Category category = model.Categories.Find(catGuid);
RentingGood newGood = new RentingGood() { RentingGoodId = Guid.NewGuid(), RentingGoodName = name, Category = category, RentingGoodStatus = true,RentingGoodImage = image };
model.RentingGoods.Add(newGood);
model.SaveChanges();
return true;
}
}
catch
{
return false;
}
}
You could simply generate a standard image at runtime:
Bitmap standardImage = new Bitmap(200, 200);
Graphics standardGraphics = Graphics.FromImage(standardImage);
int red = 0;
int white = 10;
while (white <= 200)
{
standardGraphics.FillRectangle(Brushes.Red, 0, red, 200, 10);
standardGraphics.FillRectangle(Brushes.White, 0, white, 200, 10);
red += 20;
white += 20;
}
//This will be your byte array
byte[] result = ConvertImageToByteArray(standardImage);
The example above draws a red and white striped image. More examples and tutorials about drawing in C# can be found here.
I made the method ConvertImageToByteArray to convert the image to a byte array that can be saved in the database; it looks like this:
public byte[] ConvertImageToByteArray(System.Drawing.Image image)
{
using (MemoryStream ms = new MemoryStream())
{
image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
return ms.ToArray();
}
}
It would be better if you could save the generated byte array somewhere, so you don't need to regenerate the image every time someone doesn't upload an image.
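For example, a minimal sketch (DefaultImage is just an illustrative name) that generates the fallback image once and reuses the cached bytes afterwards:
public static class DefaultImage
{
    // Lazy<T> makes sure the image is generated only once, on first use.
    private static readonly Lazy<byte[]> _bytes = new Lazy<byte[]>(Generate);

    public static byte[] Bytes => _bytes.Value;

    private static byte[] Generate()
    {
        using (var bitmap = new System.Drawing.Bitmap(200, 200))
        using (var graphics = System.Drawing.Graphics.FromImage(bitmap))
        using (var ms = new System.IO.MemoryStream())
        {
            // Any placeholder drawing will do; here just a plain light-gray square.
            graphics.Clear(System.Drawing.Color.LightGray);
            bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
            return ms.ToArray();
        }
    }
}
// Usage when no file was uploaded:
// byte[] image = uploadedBytes ?? DefaultImage.Bytes;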
I'm using ProtoBuf.NET to serialize/deserialize some classes. I'm finding that on deserializing, I'm getting a corrupt byte[] (extra 0's). Before you ask: yes, I need the *WithLengthPrefix() versions of the ProtoBuf API, since the ProtoBuf portion is at the start of a custom stream :)
Anyway, I see
Original object is (JSON depiction):
{"ByteArray":"M+B6q+PXNuF8P5hl","ByteArray2":"DmErxVQ2y87IypSRcfxcWA==","K":2,"V
":1.0}
Protobuf: Raw Hex (42 bytes):
29-2A-20-0A-0C-33-E0-7A-AB-E3-D7-36-E1-7C-3F-98-65-12-10-0E-61-2B-C5-54-36-CB-CE-C8-CA-94-91-71-FC-5C-58-08-02-15-00-00-80-3F
Regenerated object is (JSON depiction):
{"ByteArray":"AAAAAAAAAAAAAAAAM+B6q+PXNuF8P5hl","ByteArray2":"DmErxVQ2y87IypSRcf
xcWA==","K":2,"V":1.0}
The extra A's in the ByteArray member are basically 0x00 bytes in base64.
The app logic is similar to
static void Main(string[] args)
{
var parent = new Parent();
parent.Init();
Console.WriteLine("\nOriginal object is (JSON depiction):");
Console.WriteLine(JsonConvert.SerializeObject(parent));
using (var ms = new MemoryStream())
{
Serializer.SerializeWithLengthPrefix(ms, parent, PrefixStyle.Base128);
byte[] bytes2 = ms.ToArray();
var hex2 = BitConverter.ToString(bytes2);
Console.WriteLine("\nProtobuf: Hex ({0} bytes):\n{1}", bytes2.Length, hex2);
ms.Seek(0, SeekOrigin.Begin);
var backFirst = Serializer.DeserializeWithLengthPrefix<Parent>(ms,PrefixStyle.Base128);
Console.WriteLine("\nRegenerated object is (JSON depiction):");
Console.WriteLine(JsonConvert.SerializeObject(backFirst));
}
}
The DTO classes are
[DataContract]
[ProtoContract]
internal class Parent : Child
{
[DataMember(Name = "ByteArray", Order = 10)]
[ProtoMember(1)]
public byte[] ByteArray { get; set; }
[DataMember(Name = "ByteArray2", Order = 30, EmitDefaultValue = false)]
[ProtoMember(2)]
public byte[] ByteArray2 { get; set; }
public Parent()
{
ByteArray = new byte[12];
}
internal void Init(bool bindRow = false)
{
base.Init();
var rng = new RNGCryptoServiceProvider();
rng.GetBytes(ByteArray);
ByteArray2 = new byte[16];
rng.GetBytes(ByteArray2);
}
}
[DataContract]
[ProtoContract]
[ProtoInclude(5, typeof(Parent))]
public class Child
{
[DataMember(Name = "K", Order = 100)]
[ProtoMember(1)]
public Int32 K { get; set; }
[DataMember(Name = "V", Order = 110)]
[ProtoMember(2)]
public float V { get; set; }
internal void Init()
{
K = 2;
V = 1.0f;
}
}
I do see that when I move ByteArray = new byte[12] out of the Parent constructor and into its Init() method, ProtoBuf works fine. However, we have app logic that prevents that in the real version (vs. the trimmed-down SO code you see above).
Are we doing something wrong or is this a bug in ProtoBuf?
Here we go:
public Parent()
{
ByteArray = new byte[12];
}
Note: protobuf is designed (by Google) to be both appendable and mergeable, and for lists/arrays etc. a merge is synonymous with an append.
Two options (both possible via attributes):
disable the constructor: [ProtoContract(SkipConstructor = true)]
disable the append: [ProtoMember(1, OverwriteList = true)]
There are other options too, but those are the ones I'd lean towards.
You remark that the array initialisation is different in the real code, but: I can't comment on code I can't see.
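Applied to the Parent type from the question, the two options look like this (you would normally pick one or the other):
[DataContract]
[ProtoContract(SkipConstructor = true)]      // option 1: the constructor (and its new byte[12]) is not run on deserialize
internal class Parent : Child
{
    [DataMember(Name = "ByteArray", Order = 10)]
    [ProtoMember(1, OverwriteList = true)]   // option 2: replace the existing array instead of appending to it
    public byte[] ByteArray { get; set; }

    [DataMember(Name = "ByteArray2", Order = 30, EmitDefaultValue = false)]
    [ProtoMember(2)]
    public byte[] ByteArray2 { get; set; }

    public Parent()
    {
        ByteArray = new byte[12];
    }
}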
I was facing the same issue. My ByteString data was actually XML data I get from the server, and my application already had an XML serializer, so I decided to use that instead of introducing a new serializer for Protobuf and decorating all of my models, which I found very time-consuming.
Here is my function, which deserializes my ByteString.
public SceneDTO ParseSceneAsyncFromByteString(ByteString byteStringData) {
var serializer = new CustomXmlSerializer(typeof(SceneDTO), _timingManager);
serializer.UnknownNode += OnUnknownNode;
serializer.UnknownAttribute += OnUnknownAttribute;
var ms = new MemoryStream(byteStringData.ToByteArray());
var scene = (SceneDTO) serializer.Deserialize(ms);
serializer.UnknownNode -= OnUnknownNode;
serializer.UnknownAttribute -= OnUnknownAttribute;
return scene;
}
The other reason I went this way is that I was getting the error below after using the Protobuf deserializer:
Unexpected end-group in source data; this usually means the source
data is corrupt
And even after fixing this, my model data was always null, so I decided not to spend many hours fixing that issue since I already had a working solution.
I am having a hell of a time getting a variable from a function in another class to be usable in my Game1 (main) class. Specifically, I want to take width and height from the SaveData function in SetWindowSize.cs and use them in ReadSettings in Game1.cs.
I get the error
'ShovelShovel.SetWindowSize' does not contain a definition for
'height'. Same for 'width'.
Game1.cs (the function only)
protected void ReadSettings()
{
try
{
if (File.Exists(SetWindowSize.savePath))
{
using (FileStream fileStream = new FileStream(SetWindowSize.savePath,
FileMode.Open))
{
using (BinaryReader binaryReader = new BinaryReader(fileStream))
{
SetWindowSize.width = binaryReader.ReadInt32();
SetWindowSize.height = binaryReader.ReadInt32();
}
}
}
}
catch
{
}
}
SetWindowSize.cs
namespace ShovelShovel
{
    public class SetWindowSize
    {
        // (savePath and the other members are not shown in this snippet)
        protected void ReadSettings()
        {
            try
            {
                if (File.Exists(savePath))
                {
                    using (FileStream fileStream = new FileStream(savePath, FileMode.Open))
                    {
                        using (BinaryReader binaryReader = new BinaryReader(fileStream))
                        {
                            var windowSize = WindowSizeStorage.ReadSettings();
                            WindowSize.Width = windowSize.Width;
                            WindowSize.Height = windowSize.Height;
                        }
                    }
                }
            }
            catch
            {
            }
        }
    }
}
Thank you so much to anyone and everyone that can help me, I really appreciate it.
This might do the trick:
public class WindowSize
{
public int Width { get; set; }
public int Height { get; set; }
}
public static class WindowSizeStorage
{
public static string savePath = "WindowSize.dat";
public static WindowSize ReadSettings()
{
var result = new WindowSize();
using (FileStream fileStream = new FileStream(savePath, FileMode.Open))
{
using (BinaryReader binaryReader = new BinaryReader(fileStream))
{
result.Width = binaryReader.ReadInt32();
result.Height = binaryReader.ReadInt32();
}
}
return result;
}
public static void WriteSettings(WindowSize toSave)
{
using (BinaryWriter binaryWriter = new BinaryWriter(File.Open(savePath, FileMode.Create)))
{
binaryWriter.Write(toSave.Width);
binaryWriter.Write(toSave.Height);
}
}
}
Usage:
// Read
var windowSize = WindowSizeStorage.ReadSettings();
myForm.Width = windowSize.Width;
myForm.Height = windowSize.Height;
// Write
var windowSize = new WindowSize { Width = myForm.Width, Height = myForm.Height };
WindowSizeStorage.WriteSettings(windowSize);
Please note that writing an answer like this (presenting all the code) is not the common way; I just felt like it. I tried to show some object-oriented design principles, where each class does its own thing.
If you want to transfer complicated objects between methods (i.e. more than one primitive type), you will usually create a Data Transfer Object (DTO) like WindowSize.
The WindowSizeStorage class has the sole responsibility of storing and retrieving such a WindowSize object. From your code you simply tell the storage to store or retrieve the settings you wish.
But as I gather from your question and comments, you haven't got much experience using C#, or perhaps any programming experience at all. Try to pick up a tutorial or two so you can learn how to put your thoughts into code.
You cannot access a function's local variables outside that function.
To achieve this, define static fields, assign the height and width to them, and then access them wherever you want.
A function's purpose is not to store and expose variables. You can't access variables if their scope is not wide enough. In your case, you should expose those values one way or another, for example by storing them in the class and making them public:
class SetWindowSize
{
    public const string savePath = "file.dat";
    public static int width { get; set; }
    public static int height { get; set; }

    public static void SaveData(int width, int height)
    {
        using (BinaryWriter binaryWriter = new BinaryWriter(File.Open(savePath, FileMode.Create)))
        {
            binaryWriter.Write(width);
            binaryWriter.Write(height);
            SetWindowSize.width = width;
            SetWindowSize.height = height;
        }
    }
}
That is far from optimal, but it works.
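With the members made static, usage from Game1 might look roughly like this (a sketch; graphics is the usual XNA GraphicsDeviceManager field):
// Somewhere in the settings/UI code:
SetWindowSize.SaveData(800, 600);

// Later, e.g. in Game1, the statics are readable from anywhere:
graphics.PreferredBackBufferWidth = SetWindowSize.width;
graphics.PreferredBackBufferHeight = SetWindowSize.height;
graphics.ApplyChanges();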
Your SetWindowSize class does not contain width or height members. You can add them as properties:
public class SetWindowSize
{
// other stuff
public int Height {get;set;}
public int Width {get;set;}
// other stuff
}
Then you can access them using the SetWindowSize instance:
SetWindowSize sws = new SetWindowSize();
sws.Height = 512;
sws.Width = 512;
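And reading them back from the file in ReadSettings would then assign to that instance rather than to the type itself (a sketch, assuming savePath is the static field from your original SetWindowSize):
SetWindowSize sws = new SetWindowSize();
using (FileStream fileStream = new FileStream(SetWindowSize.savePath, FileMode.Open))
using (BinaryReader binaryReader = new BinaryReader(fileStream))
{
    sws.Width = binaryReader.ReadInt32();
    sws.Height = binaryReader.ReadInt32();
}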
Hey all, I am developing a site for work that will push info from a database into WordPress using the WordPress XML-RPC API. I can grab info and post it just fine; however, when I get to the point of uploading images it seems to work (no runtime errors, and the image appears in the WP Media tab), but it uploads a broken image link. It appears it is somehow not getting the data from my image, and I am not certain why. Here is some of my code.
MemoryStream ms = new MemoryStream();
System.Drawing.Image img = System.Drawing.Image.FromFile(HttpContext.Current.Server.MapPath("_Images/DownloadButton-PSD.png"));
img.Save(ms, ImageFormat.Png);
byte[] imagebytes = new byte[ms.Length];
ms.Position = 0;
ms.Read(imagebytes, 0, Convert.ToInt32(ms.Length));
After that code loads the image data, I pass it to the function as a Data variable:
var data = new Data
{
Base64 = Convert.ToBase64String(imagebytes),
Name = "DownloadButton-PSD.png",
Type = "image/png",
Overwrite = false,
};
_wpWrapper.UploadFile(data);
FYI: I am also using the DLLs from http://joeblogs.codeplex.com/ for my project.
The Data Class looks like this:
public class Data
{
public string Name { get; set; }
public string Type { get; set; }
public string Base64 { get; set; }
public bool Overwrite { get; set; }
}
The Upload File Function looks like this:
public void UploadFile(Data data)
{
var xmlRpcData = Map.From.Data(data);
var result = _wrapper.UploadFile(this.BlogID, Username, Password, xmlRpcData);
}
In the JoeBlogs library, try using the MetaWeblogWrapper class and its method MediaObjectInfo NewMediaObject(MediaObject mediaObject) to upload the image.
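Very roughly, the call might look like the sketch below; the exact constructor and property names are assumptions based on the metaWeblog.newMediaObject spec (name / type / bits), so check the JoeBlogs source for the real signatures:
// Sketch only: constructor and property names are assumptions, not verified
// against the JoeBlogs API.
var wrapper = new MetaWeblogWrapper("http://yourblog.example.com/xmlrpc.php", "username", "password");

var mediaObject = new MediaObject
{
    Name = "DownloadButton-PSD.png",
    Type = "image/png",
    Bits = imagebytes            // raw bytes rather than a Base64 string
};

MediaObjectInfo info = wrapper.NewMediaObject(mediaObject);
// info should contain the URL of the uploaded file if the call succeeded.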