We have a huge client/server WinForms app that uses .NET remoting to pass DAOs between the layers, which has a few problems.
All of the DAOs were defined to use fields instead of properties long before I got here, and you can't bind fields to controls.
Adding fields or properties to a DAO changes the serialization format, requiring a dual client/server deployment, which is much more difficult for us than either a client or server deployment (we have to work around doctors' schedules to minimize downtime).
Using a simple, contrived, and imaginary example, would changing the object from this:
public class Employee
{
public int ID;
public string Name;
public DateTime DateOfBirth;
}
to this:
public class Employee
{
public int ID { get; set; }
public string Name { get; set; }
public DateTime DateOfBirth { get; set; }
}
change the serialization format, breaking compatibility with older clients?
Important edit: would this version be both backward compatible and allow binding?
public class Employee
{
private int id;
private string name;
private DateTime dateOfBirth;
public int ID { get {return id;} set {id = value;} }
public string Name { get {return name;} set {name = value;} }
public DateTime DateOfBirth { get {return dateOfBirth;}
set {dateOfBirth = value;} }
}
Certainly worth a try, no?
Yes, this will cause problems if client/server are out of sync.
.NET remoting uses BinaryFormatter, which (without a bespoke ISerializable implementation) serializes by field name. Switching to automatic properties changes the compiler-generated field names, breaking the format.
As long as you update client and server at the same time, it should work. Another option is to use name-independent serialization, such as protobuf-net. I can provide an example if you want (it supports ISerializable usage).
(By the way, adding properties on top of the existing fields should not affect BinaryFormatter, since it is field-based.)
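To see why automatic properties change the stream, you can list the fields the compiler actually generates. A minimal sketch (class names are illustrative, and the exact backing-field name is a compiler implementation detail):

```csharp
using System;
using System.Reflection;

[Serializable]
public class EmployeeWithField
{
    public int ID; // BinaryFormatter stores this under the name "ID"
}

[Serializable]
public class EmployeeWithAutoProp
{
    // The compiler generates a hidden backing field, typically named
    // "<ID>k__BackingField", and that name goes into the serialized stream.
    public int ID { get; set; }
}

static class FieldNameDemo
{
    static void Main()
    {
        const BindingFlags all =
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        foreach (FieldInfo f in typeof(EmployeeWithAutoProp).GetFields(all))
            Console.WriteLine(f.Name); // e.g. "<ID>k__BackingField"
    }
}
```

Note that BinaryFormatter matches field names exactly, so even the hand-written property version in the question would likely change the format if the private fields are named id rather than ID.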
As requested, here's an example of using protobuf-net to control remoting serialization (taken directly from one of my unit tests). Note that this will also be incompatible until both client and server agree, but it should withstand changes after that (it is designed to be very extensible). There are lots of ways of using it: explicit member notation (like data contracts), implicit fields (like BinaryFormatter), and everything in between. This is just one way of using it:
[Serializable, ProtoContract]
public sealed class ProtoFragment : ISerializable
{
[ProtoMember(1, DataFormat=DataFormat.TwosComplement)]
public int Foo { get; set; }
[ProtoMember(2)]
public float Bar { get; set; }
public ProtoFragment() { }
private ProtoFragment(
SerializationInfo info, StreamingContext context)
{
Serializer.Merge(info, this);
}
void ISerializable.GetObjectData(
SerializationInfo info, StreamingContext context)
{
Serializer.Serialize(info, this);
}
}
Here, the last two methods satisfy ISerializable and simply pass execution to the protobuf-net engine. The [ProtoMember(...)] attributes define the fields (with unique identification markers). As already stated, it can also infer these, but it is safer (less brittle) to be explicit.
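For illustration, here's a sketch of round-tripping the same fragment directly with protobuf-net's standard entry points (Serializer.Serialize / Serializer.Deserialize), outside of remoting. The values survive because the stream is keyed on the numeric tags, not on member names:

```csharp
using System;
using System.IO;
using ProtoBuf;

static class RoundTripDemo
{
    static void Main()
    {
        var frag = new ProtoFragment { Foo = 123, Bar = 4.5f };
        using (var ms = new MemoryStream())
        {
            // Writes a compact, name-independent wire format keyed on the
            // [ProtoMember] numbers, so later renames don't break old data.
            Serializer.Serialize(ms, frag);
            ms.Position = 0;
            ProtoFragment copy = Serializer.Deserialize<ProtoFragment>(ms);
            Console.WriteLine(copy.Foo); // 123
        }
    }
}
```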
I've seen the following code in various places:
namespace My.name.space
{
class myClass
{
public CustomObject Name
{
get { return new CustomObject (this.Dog); }
set { return; }
}
}
}
What is the purpose of set { return; }?
I don't understand what purpose set return would serve.
I would think you could just remove the set accessor completely.
None. It's somebody who doesn't quite know that a read-only property can be expressed much more simply by not including the set:
public Derp MuhDerp { get { return _derp; } }
Interesting point brought up by CSharpie in a comment...
If you have to have a set because it is defined in an interface, you can add the set but omit the return:
public Derp MuhDerp { get { return _derp; } set { } }
Of course, if the interface defines a setter, you should probably make sure it works as expected :)
It basically gives the illusion that there is a setter; well, there is one, but it does nothing. It was probably done to keep some interface or parent class happy:
public CustomObject Name
{
get { return new CustomObject( this.Dog ); }
set { return; } // does absolutely nothing
}
Here is a class:
public abstract class A {
public abstract void DoWork();
public abstract string SomeProperty { get; set; }
}
Here, class B gives the illusion of implementing the abstract class, but it really is not implementing everything:
public class B : A {
public override string SomeProperty
{
get
{
return "whatever";
}
set
{
return; // keep interface happy
}
}
public override void DoWork() {
// not actually doing anything, but the compiler is happy
}
}
That code also breaks Liskov Substitution Principle.
That simply means "don't do anything, I don't want to make an assignment". It's like a no-op in the setter. It's also equivalent to an empty setter, i.e. set { }. It's a matter of preference, really; some people prefer not to have empty code bodies, I guess.
Of course, you wouldn't typically do it that way (as Will points out). You would just use a read-only property, but there is a key difference: when using a read-only property, an attempt to set it will fail at compile time; if you're using the one you asked about, then it won't fail at all, it will simply "do nothing" at runtime.
Which one you use largely depends on what you want your application to do. I'll point out that using this approach (rather than a read-only property) can lead to brittle code, as the programmer may not be aware that their deliberate attempt to assign a value is being ignored.
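A short sketch of that difference (types are illustrative): the read-only version fails at compile time, while the no-op setter compiles and silently discards the value:

```csharp
using System;

public class WithReadOnly
{
    private int _x = 1;
    public int X { get { return _x; } } // no setter at all
}

public class WithNoOpSetter
{
    private int _x = 1;
    public int X { get { return _x; } set { } } // setter silently ignores the value
}

static class Demo
{
    static void Main()
    {
        // new WithReadOnly().X = 5; // compile-time error CS0200 (read-only)
        var b = new WithNoOpSetter();
        b.X = 5;                     // compiles and runs, but does nothing
        Console.WriteLine(b.X);      // still prints 1
    }
}
```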
Properties in C# are just syntactic sugar for a special case of methods.
A "regular" property like
public int Foo {get;set;}
is actually (something similar to):
int _foo;
public int get_Foo() { return _foo;}
public void set_Foo(int value) { _foo = value;}
However, you are allowed to specify yourself what happens in the setter and getter:
public int Foo { get { return 47; } set { Console.WriteLine(value); } }
So taking your example, the compiler will turn it into:
public CustomObject get_Name() { return new CustomObject (this.Dog); }
public void set_Name(CustomObject value) { return; }
which does nothing at all in the set method. So why would someone do this?
There are a few reasons that make sense:
They want to later introduce functionality to that setter, so it serves as a placeholder for now
The setter is required, because the Property comes from an interface, yet it makes no sense to set the value in that concrete implementation
Some reflection-based API requires a set method, even if it is never used.
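The reflection case comes up with XmlSerializer, for example, which by default skips properties that have no public setter; an empty setter keeps a computed value in the output. A sketch (the Invoice type is illustrative):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Invoice
{
    public decimal Net { get; set; }
    public decimal Tax { get; set; }

    // Computed; without the empty setter, XmlSerializer would omit it.
    public decimal Total
    {
        get { return Net + Tax; }
        set { } // ignored; the value is always recomputed from Net + Tax
    }
}

static class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Invoice));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Invoice { Net = 100m, Tax = 20m });
            Console.WriteLine(writer.ToString()); // XML now contains a <Total> element
        }
    }
}
```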
This is the same as a read-only property, except that the property can still be assigned (obviously to no effect).
There are some great answers here (and I know this is an old question), but I wanted to provide some additional context for what this is, when it might be useful, and how I've used it in production code.
First, the answer: This is normally not useful. Just define a readonly property. As others have said, you're completely right that you could remove the set and have almost the same effect -- except that the property could no longer be assigned to.
Now, here's where this can be useful. For a lot of the smaller microservices I write, I use Azure Tables for storage. This is a key-value database, where your key comes in two parts: a PartitionKey and a RowKey. They're pretty self-explanatory.
Generally I'll end up with a couple of small tables (such as ones holding basic application-managed settings) that don't make sense to use a PartitionKey with -- i.e. they only have one identifying piece of information, such as an id, along with a value.
The Azure.Data.Tables library is really nice, and provides an interface to implement for your DTOs. Implementing this interface would normally look like the following:
using Azure;
using Azure.Data.Tables;
namespace SmartsheetIntegration.Qar.DataStores.Models;
public class MyDto : ITableEntity
{
public MyDto() {}
public MyDto(string partitionKey, string rowKey)
{
PartitionKey = partitionKey;
RowKey = rowKey;
}
public string PartitionKey { get; set; }
public string RowKey { get; set; }
public DateTimeOffset? Timestamp { get; set; }
public ETag ETag { get; set; }
}
Now, what about these simple app settings? I want to make sure that they never end up with a PartitionKey, because I need to know for sure that if I want to retrieve all of them, I can query for 'PartitionKey = ""' and get everything back.
The simple solution to this is:
using Azure;
using Azure.Data.Tables;
namespace SmartsheetIntegration.Qar.DataStores.Models;
public class MyDto : ITableEntity
{
public MyDto() {}
public MyDto(string partitionKey, string rowKey)
{
PartitionKey = partitionKey;
RowKey = rowKey;
}
public string PartitionKey
{
get => "";
set { } // the same as set { return; }
}
public string RowKey { get; set; }
public DateTimeOffset? Timestamp { get; set; }
public ETag ETag { get; set; }
}
Now Azure Tables can do whatever it wants in its library, but I can guarantee that the PartitionKey saved to the database will always be an empty string.
So yes, the number of situations where this is useful isn't huge -- but there are interface implementations where it makes sense.
We want to send two collections of objects out of a WCF service. The classes for the two objects share some common properties, and each have others, unique to them...
public class TypeA {
public string A { get; set; }
public string B { get; set; }
public string C { get; set; }
public string D { get; set; }
}
public class TypeB {
public string A { get; set; }
public string B { get; set; }
public string E { get; set; }
public string F { get; set; }
}
Yes I could/should use a common base class, but that isn't the question here
On the one hand, having two classes like this means each class only has the properties it needs, which keeps it slim and focused. On the other hand, as these are basically two different views of the same underlying object, it's perfectly reasonable to combine them and just populate the properties needed.
I can't find a way of seeing how big the WCF payload is, so don't know if using one common class is going to consume more bandwidth than using specific classes. I need this to be as efficient as possible.
Anyone know if using one common class is going to increase the WCF payload? If so, any way of find out how much?
You can put the XmlSerializerFormat attribute on your service contract to force WCF to use the XmlSerializer engine (the one used by classic ASP.NET web services) when sending your objects. This gives you access to other attributes and methods for taking full control over which properties are serialized.
Then, you can restrict properties from being serialized using two methods:
1. Implement a ShouldSerializeXXXX() method
2. Use the DefaultValue attribute. Properties which already have the default value are not serialized.
Example:
[XmlSerializerFormat, ServiceContract]
public interface IMyService
{
[OperationContract]
MyData GetData();
}
[DataContract]
public class MyData
{
[XmlAttribute, DataMember]
public int Value1 { get; set; }
// Explicit method to control serialization of Value1 property
public bool ShouldSerializeValue1()
{
// do not serialize this value if it's 0
return Value1 != 0;
}
// Use default value of 0 to prevent serializing zeros
[XmlAttribute, DataMember, DefaultValue(0)]
public int Value2 { get; set; }
}
I would be more concerned with the data transfer format you are using if you are worried about packet size. For example, if you are using the SOAP protocol, your packet size is going to be very large, and instead of worrying about base classes versus separate objects, you could shrink your packet size tremendously by switching to binary, or even JSON. See here for reference: https://dzone.com/articles/wcf-rest-xml-json-or-both
Also, to inspect true packet size, I would install Fiddler on your machine and inspect the data as it goes over the network.
I hope that sets you on the correct path...
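If you just want a ballpark figure without a sniffer, you can also serialize the DTOs with DataContractSerializer (WCF's default serializer) into a MemoryStream and compare byte counts. A rough sketch using one of the question's types (the actual on-the-wire size will differ once SOAP envelopes and message encoding are added):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

public class TypeA
{
    public string A { get; set; }
    public string B { get; set; }
    public string C { get; set; }
    public string D { get; set; }
}

static class PayloadSize
{
    // Size in bytes of the XML that WCF's default serializer would produce.
    static long MeasureBytes(object dto)
    {
        var serializer = new DataContractSerializer(dto.GetType());
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, dto);
            return ms.Length;
        }
    }

    static void Main()
    {
        Console.WriteLine(MeasureBytes(new TypeA { A = "a", B = "b", C = "c", D = "d" }));
    }
}
```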
This is kind of a design problem:
class Suite
{
List<Plugins> Plugins {get;set;}
}
class DataCollector : Plugin
{
string Property1 {get;set;}
string Property2 {get;set;}
public void PrintMyName();
}
class Measurement: Plugin
{
string Property1 {get;set;}
string Property2 {get;set;}
public void PrintMyName();
}
Now, I want the class suite to be serializable. The problem here is this:
XML serialization is basically used to serialize DTO-style objects, so you serialize/deserialize your stateful properties, and that is fine.
But this particular class to be serialized contains objects of type Plugin, which combine stateful property values with functionality.
That means I need a factory to get a real instance of each Plugin, so it comes back to life with all its functionality as well as its property values.
Am I looking at an XmlSerializer + factory combination? Is there any elegant way to achieve this?
Why not implement the IDeserializationCallback interface, so that upon deserialization you can bring your object back to life via a call to your factory?
I don't really understand the purpose of your classes, but I will give an example as best I can:
public class MyDataClass : IDeserializationCallback
{
public int SimpleKeyProperty { get; set; }
public string TextProperty{ get; set; }
public void OnDeserialization(object sender)
{
//upon deserialization use the int key to get the latest value from
//factory (this is make believe...)
TextProperty = SomeFactory.GetCurrentValue( this.SimpleKeyProperty) ;
}
}
I am trying to work out the best way to approach the following.
I need to pass AvailabilityOption/LimitedAvailabilityOption types to a service, and then get back BookingOption types.
I have routines which will generate the availability types, but I am unsure whether I need to go through each of my Option objects, effectively duplicating versions of them inheriting from BookingOption and AvailabilityOption in turn, or whether I can do some sort of 'decoration' of the simple options with the availability classes and then cast them back down to booking ones when I pass them back again.
I know there is a decorator pattern, but having read a bit about it, it appears to be more about decorating at runtime. I may well be misunderstanding.
I suspect I haven't explained this very well, but here is some code.
EDIT: effectively the Option is a base for a number of possible options on a booking, such as an excursion or some other extra, of which there are quite a few. The availability extends that to determine what space there is on any option, but it is just extending the option itself with the price and possibly the numbers already booked on that option.
The BookingOption is there to be returned from the routine that effectively chooses from the options based on their price and availability. I am just trying to return the bare minimum at the booking point, which is really the date when the option is required and which option it is; the availability at this point is moot.
public abstract class Option{
public int OptionID { get; set; }
public OptionType OptionType { get; set; }
public string EqtCode { get; set; }
public string CentreCode { get; set; }
public virtual string Description { get; set; }
}
public abstract class BookingOption : Option{
public DateTime WeekStartDate{get;set;}
}
public abstract class AvailabilityOption : BookingOption {
public decimal Price{get;set;}
public override string Description{
get{return string.Format("{0} # {1}", base.Description, Price.ToString());}
set{ base.Description = value;}
}
}
public abstract class LimitedAvailabilityOption : AvailabilityOption{
public int MinNumber { get; set; }
public int MaxNumber { get; set; }
public int TotalBooked { get; set; }
public int TotalRemaining { get; set; }
public override string Description
{
get
{
return string.Format("{0} ({1} # {2})",
base.Description, TotalRemaining.ToString(), Price.ToString());
}
set { base.Description = value;}
}
}
public class Option1 : Option{
public Option1(){}
}
public class Option2 : Option{
public Option2(){}
}
public List<BookingOption> BookWithAvail(List<AvailabilityOption> options){
//pick options based on avail and pass back the booking versions so write away...
}
It looks like the answer depends on how you plan to use the Availability and LimitedAvailability qualities. If those qualities are only applicable to the AvailabilityOption and LimitedAvailabilityOption classes, then there does not seem to be much need to implement Availability or LimitedAvailability as separate classes, since each would only ever distinguish one other type (AvailabilityOption and LimitedAvailabilityOption respectively).
It would only make sense to use the decorator pattern, implementing Availability and LimitedAvailability as separate classes, if you plan to attach each of them to multiple types that are not connected by an inheritance relationship (including inheritance through intermediate classes).
And if you do plan to spread the availability qualities across multiple classes that are not supposed to share a common ancestor holding an availability property, then extracting those qualities into separate classes is the only choice.
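For reference, if the availability qualities did need to attach to unrelated option types at runtime, a decorator built on the question's Option base class might look roughly like this (a sketch; names are illustrative):

```csharp
// Wraps any Option and adds availability data without subclassing each one.
public class AvailabilityDecorator : Option
{
    private readonly Option inner;

    public decimal Price { get; set; }

    public AvailabilityDecorator(Option inner)
    {
        this.inner = inner;
        OptionID = inner.OptionID;
        OptionType = inner.OptionType;
        EqtCode = inner.EqtCode;
        CentreCode = inner.CentreCode;
    }

    public override string Description
    {
        get { return string.Format("{0} # {1}", inner.Description, Price); }
        set { inner.Description = value; }
    }

    // Hand back the undecorated option when booking, where availability is moot.
    public Option Unwrap() { return inner; }
}
```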
I have a group of POCO classes:
class ReportBase
{
public string Name { get; set; }
public int CustomerID { get; set; }
}
class PurchaseReport : ReportBase
{
public int NumberOfPurchases { get; set; }
public double TotalPurchases { get; set; }
public bool IsVip { get; set; }
}
class SaleReport : ReportBase
{
public int NumberOfSales { get; set; }
public double TotalSales { get; set; }
}
I have a web method that returns ReportBase. The caller uses the return value to update the UI (WPF) based on the actual type, by downcasting and checking the type (one grid for sales and one for purchases). Someone suggested using three web methods, each returning the specific type.
I understand that downcasting is generally against design principles, since it introduces if/else; instead we should use virtual functions. But in POCO classes, we don't really have virtual behavior (only extra fields).
Are you for or against downcast in this case, why?
IMO it's all about intention. Returning just the base class doesn't say anything, especially as you return it only to save some keystrokes. As a developer, which do you prefer?
ReportBase GetReport() // if type==x downcast.
//or
PurchaseReport GetPurchaseReport()
SaleReport GetSalesReport()
Which approach would you want to use to make the code more maintainable? Checking the type and downcasting is an implementation detail after all, and you probably have a method like this:
public void AssignReport(ReportBase report)
{
//check, cast and dispatch to the suitable UI
}
What's wrong with this? It lacks transparency, and the method always has to know which reports are needed by which UI elements. Any time you add or remove an element, you have to modify this method too.
I think something like this is much clearer and more maintainable:
salesGrid.DataSource=repository.GetSalesReport();
purchaseGrid.DataSource=repository.GetPurchaseReport();
than this
var report=repository.GetReport();
AssignReport(report); //all UI elements have their data assigned here or only 2 grids?
So I think that, POCO or not, I will favour the three web methods approach.