I recently got a "The mapping of interface member ..... is not supported" error, which I resolved based on this thread. To demonstrate:
public interface IMyInterface { string valueText { get; set; } }

public class MyData : IMyInterface
{
    public int ID { get; set; }
    public string valueText { get; set; }
}

public class MyOtherData : IMyInterface
{
    public long ID { get; set; }
    public string valueText { get; set; }
}
and
public static IEnumerable<T> GetByValue<T>(string value) where T : class, IMyInterface, new()
{
using (var context = new DataContext())
{
// The important line
return context.GetTable<T>().Where(x => x.valueText == value);
}
}
Running this code, I'd get a NotSupportedException: "The mapping of interface member IMyInterface.valueText is not supported". However, if I replace the x.valueText == value with x.valueText.Equals(value), this works entirely as expected.
I've solved this in my code, but I want to understand why it behaves this way. Can anyone explain it?
Update: As per my comment below, the LINQ to SQL team closed this as a "Won't fix". I think that means it now counts as a known bug, but one that isn't going to be resolved any time soon. I'd still like to know why it behaves differently in the first place, though.
Apparently the decision to push the query upstream to the server is made based on an incomplete set of rules, and then LINQ-to-SQL finds a construct (an interface) that it can't deal with.
The method call isn't supported by LINQ-to-SQL, so it generates a query to retrieve all records and then uses LINQ-to-Objects to filter them. (Actually, based on your other thread, LINQ-to-SQL may make a special exception for object.Equals and knows how to convert that to SQL).
LINQ-to-SQL probably should fall back to the LINQ-to-Objects behavior when an interface is involved, but apparently it just throws an exception instead.
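For reference, a minimal sketch of the workaround from the question, with the comparison swapped to Equals (the trailing ToList() is my addition, so the results are materialised before the DataContext is disposed):

// Sketch of the workaround: Equals instead of == on the interface member.
public static IEnumerable<T> GetByValue<T>(string value)
    where T : class, IMyInterface, new()
{
    using (var context = new DataContext())
    {
        // Per the question, the Equals form is translated without complaint,
        // while the == form throws NotSupportedException.
        return context.GetTable<T>()
                      .Where(x => x.valueText.Equals(value))
                      .ToList(); // materialise before the context is disposed
    }
}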
I have a library of fairly heavy-weight DTOs that is currently being used by some WCF services. We are attempting to bring it into the protobuf-net world with as little modification as possible. One particular set of items is giving me trouble in serialization. I'm going to simplify them here because it gets a little complicated, but the gist of the problem is:
public class Key
{
public string Id {get; set;}
}
public class KeyCollection : IEnumerable<Key>
{
private readonly List<Key> list;
#region IEnumerable
// etc...
#endregion
}
public class Item
{
public long Id { get; set; }
}
public abstract class ContainerBase
{ }
public abstract class ContainerBase<T> : ContainerBase
where T : Item
{ }
public abstract class ContainerType1Base : ContainerBase<Item>
{
public KeyCollection Keys { get; set; }
}
public class ContainerType1 : ContainerType1Base
{ }
I've left out the decorators because I don't think they're the problem, mostly because if I add void Add(Key item) { } to KeyCollection the whole thing seems to work. Otherwise, I run into problems attempting to serialize an instance of ContainerType1.
Actually, changing the signature of KeyCollection is kind of prohibitive, so I'm attempting to follow this answer to try to do it programmatically. Specifically, setting itemType and defaultType to null on the "Keys" ValueMember of ContainerType1, ContainerType1Base and ContainerBase<Item>. I also set IgnoreListHandling to true on KeyCollection... which totally doesn't work. I get a generic "failed to deserialize" exception on the client, which I can post here if it would help. On the server side, I serialize it out using Serializer.Serialize(), and I spit out Serializer.GetProto<>() as well as JSON of the object, and they all seem to work okay.
How can I turn off the list handling? Related to that, is there a way to turn on extra debugging while serializing to try to get some more information of the problem?
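Roughly, the programmatic configuration I'm attempting looks like this (a sketch against protobuf-net's RuntimeTypeModel, using the simplified types above; the lookup of the "Keys" member by name is just illustrative):

using System.Linq;
using ProtoBuf.Meta;

// Sketch of the programmatic configuration being attempted.
var model = RuntimeTypeModel.Default;

// Ask protobuf-net to stop treating KeyCollection as a list.
model.Add(typeof(KeyCollection), true).IgnoreListHandling = true;

// Clear the list-related settings on the "Keys" member of the container types.
var containerTypes = new[] { typeof(ContainerType1), typeof(ContainerType1Base), typeof(ContainerBase<Item>) };
foreach (var type in containerTypes)
{
    var keys = model.Add(type, true)
                    .GetFields()
                    .FirstOrDefault(f => f.Member.Name == "Keys");
    if (keys != null)
    {
        keys.ItemType = null;
        keys.DefaultType = null;
    }
}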
Fundamentally, the code shown looks fine. Unfortunately, there's currently a "feature" in gRPC that means that it discards the original exception when a marshaller (serializer) fails for some reason, so gRPC does not currently expose the actual problem. I have submitted a fix for this - it may or may not be accepted.
In the interim, I suggest that you simply remove gRPC from the equation, and simulate just the marshaller workload; to do this, on the server: generate the data you are trying to send, and do:
var ms = new MemoryStream();
Serializer.Serialize(ms, yourDataHere);
var payload = Convert.ToBase64String(ms.ToArray());
and obtain the value of payload (which is just a string). Now at the client, reverse this:
var ms = new MemoryStream(Convert.FromBase64String(thatStringValue));
var obj = Serializer.Deserialize<YourTypeHere>(ms);
My expectation here is that this should throw an exception that will tell you what the actual problem is.
If the gRPC change gets merged, then the fault should be available via:
catch (RpcException fault)
{
var originalFault = fault.Status.DebugException;
// ^^^
}
I have a 'Validator' class that needs to do some simple validation. However, there are some instances where all of the validation methods need to be called, and others where only a single one does.
The interface for the validator is defined as:
internal interface IBrandValidator
{
BrandInfo ValidateBrands();
}
The class definition for the object being returned:
internal class BrandInfo
{
public Organisation Brand { get; set; }
public Client Client { get; set; }
public Location Location { get; set; }
public Language Language { get; set; }
}
The class that implements this interface:
internal class ClientValidator : IBrandValidator
{
private readonly int? clientId;
private readonly int? locationId;
private readonly int? languageId;
public ClientValidator(int clientId, int? locationId, int? languageId)
{
this.clientId = clientId;
this.locationId = locationId;
this.languageId = languageId;
}
public BrandInfo ValidateBrands()
{
    var brandInfo = new BrandInfo();

    // Optional validation
    if (clientId != null)
        brandInfo.Client = ValidateClient(clientId);
    if (locationId != null)
        brandInfo.Location = ValidateLocation(locationId);
    if (languageId != null)
        brandInfo.Language = ValidateLanguage(languageId);

    return brandInfo;
}
}
My question is: the three validation calls under the 'Optional validation' comment may or may not need to be made. However, there may be additional things I need to validate in future, and using nullable ints with if statements feels like a bad route.
Is there a design pattern I can implement to achieve something similar?
Your code is hard to predict just by reading it. For example:
brandInfo.Client = ValidateClient(clientId);
ValidateClient sounds like it should return a truthy or falsy result, but here it is assigned to a property named "Client".
Your validator returns a BrandInfo object, but that object does not include any property or method that indicates whether it is valid or not.
And the ClientValidator does not have to validate a client at all, because the id is nullable?
I think you should consider reorganizing part of your code.
If a class creates many objects from an identifier, you could probably use the Factory pattern.
If you want to validate a complex object, name the validator after it: ComplexObjectValidator.
Every part of the complex object gets validated.
If it is valid for an id to be nullable, for example, put that check inside the validator implementation.
It is hard to say more because it is unclear what your code does or intends to do.
Edit:
As a rule of thumb:
Truthy/falsy methods: prefix them with "Is", "Must", "Should", "Has", "Can", etc.
Methods that return an object: "GetValidatedClient", "ValidateAndReturnClient", "CreateClient".
That way, someone reading your code - which may be you in the future (6 months, 3 years, 10 years) - can infer the behaviour from the method names alone.
ValidateClient would imply that it just validates; more specifically, that it returns void, because it just validates. If it returns a truthy or falsy value, use one of the prefixes listed above. If it returns a validation result object, use something like "GetValidationResultFor(xyz)".
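For illustration only, a rough sketch of what those naming conventions could look like (the method bodies and the LookUpClient helper are placeholders, not the asker's real logic):

// Hypothetical sketch of the naming convention described above.
internal class ClientValidator
{
    // "Is" prefix: the caller can expect a boolean result.
    public bool IsValidClient(int clientId)
    {
        return clientId > 0; // placeholder check
    }

    // "GetValidated" prefix: the caller can expect the validated object back.
    public Client GetValidatedClient(int clientId)
    {
        if (!IsValidClient(clientId))
            throw new ArgumentException("Unknown client id", nameof(clientId));

        return LookUpClient(clientId); // placeholder lookup
    }

    private Client LookUpClient(int clientId) { /* elided */ return null; }
}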
Yesterday I was working on a code refactor and came across an exception that I really couldn't find much information on. Here is the situation.
We have a pair of EF entities that have a many-to-many relationship through a relation table. The objects in question look like this, leaving out the unnecessary bits.
public partial class MasterCode
{
public int MasterCodeId { get; set; }
...
public virtual ICollection<MasterCodeToSubCode> MasterCodeToSubCodes { get; set; }
}
public partial class MasterCodeToSubCode
{
public int MasterCodeToSubCodeId { get; set; }
public int MasterCodeId { get; set; }
public int SubCodeId { get; set; }
...
}
Now, I attempted to run a LINQ query against these entities. We use a lot of LINQ projections into DTOs. The DTO and the query follow. masterCodeId is a parameter passed in.
public class MasterCodeDto
{
public int MasterCodeId { get; set; }
...
public ICollection<int> SubCodeIds { get; set; }
}
(from m in MasterCodes
where m.MasterCodeId == masterCodeId
select new MasterCodeDto
{
...
SubCodeIds = (from s in m.MasterCodeToSubCodes
select s.SubCodeId).ToList(),
...
}).SingleOrDefaultAsync();
The inner query throws the following exception:
Expression of type 'System.Data.Entity.Infrastructure.ObjectReferenceEqualityComparer' cannot be used for constructor parameter of type 'System.Collections.Generic.IEqualityComparer`1[System.Int32]'
We have done inner queries like this before in other places in our code and not had any issues. The difference in this one is that we aren't new-ing up an object and projecting into it but rather returning a group of ints that we want to put in a list.
I have found a workaround by changing the ICollection on MasterCodeDto to IEnumerable and dropping the ToList() but I was never able to find out why I couldn't just select the ids and return them as a list.
Does anyone have any insight into this issue? Normally returning just an id field and calling ToList() works fine when it is not part of an inner query. Am I missing a restriction on inner queries that prevents an operation like this from happening?
Thanks.
Edit: To show where this pattern does work, here's an example of a query that succeeds.
(from p in Persons
where p.PersonId == personId
select new PersonDto
{
...
ContactInformation = (from pc in p.PersonContacts
select new ContactInformationDto
{
ContactInformationId = pc.PatientContactId,
...
}).ToList(),
...
}).SingleOrDefaultAsync();
In this example, we are selecting into a new DTO rather than just selecting a single value, and it works fine. The issue seems to stem from selecting just a single value.
Edit 2: In another fun twist, if instead of selecting into a MasterCodeDto I select into an anonymous type, the exception is not thrown even with ToList() in place.
I think you stumbled upon a bug in Entity Framework. EF has some logic for picking an appropriate concrete type to materialize collections. HashSet<T> is one of its favorites. Apparently (I can't fully follow EF's source code here) it picks HashSet for ICollections and List for IEnumerable.
It looks like EF tries to create a HashSet by using the constructor that accepts an IEqualityComparer<T>. (This happens in EF's DelegateFactory class, method GetNewExpressionForCollectionType.) The error is that it uses its own ObjectReferenceEqualityComparer for this. But that's an IEqualityComparer<object>, which cannot be converted to an IEqualityComparer<int>.
In general I think it is best practice not to use ToList in LINQ queries and to use IEnumerable for collections in DTO types. That way, EF has total freedom to pick an appropriate concrete type.
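To make that concrete, here is a sketch of the shape that avoids the problem, based on the workaround mentioned in the question (context.MasterCodes stands in for however the entity set is actually exposed):

public class MasterCodeDto
{
    public int MasterCodeId { get; set; }
    // IEnumerable instead of ICollection, so EF is free to pick the concrete type.
    public IEnumerable<int> SubCodeIds { get; set; }
}

var dto = await (from m in context.MasterCodes
                 where m.MasterCodeId == masterCodeId
                 select new MasterCodeDto
                 {
                     MasterCodeId = m.MasterCodeId,
                     // No ToList(): EF materialises the collection itself.
                     SubCodeIds = from s in m.MasterCodeToSubCodes
                                  select s.SubCodeId
                 }).SingleOrDefaultAsync();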
How do I check if a nested model object has any items?
I.e. if I have an object/viewmodel:
public class CarViewModel
{
public string Type { get; set; }
public long ID { get; set; }
public virtual IQueryable<Feature> Features { get; set; }
}
public class Feature
{
public string Offer { get; set; }
public decimal Rate { get; set; }
public virtual CarViewModel CarViewModel { get; set; }
}
...and it is populated as follows - so that one car object has two additional features, and the other car object has no additional features:
[
{"Type":"SoftTop","ID":1,
"Features":
[{"Offer":"Alloys","Rate":"500"},{"Offer":"Standard","Rate":"100"}]},
{"Type":"Estate","ID":2,
"Features":[]}
]
So in my code, I had "Cars" populated with the data above:
foreach (var car in Cars)
{
if (!car.Features.Any())
{
car.Type = "Remove";
}
}
However, I get the message "This method is not supported against a materialized query result." at the if (!car.Features.Any()) line.
I got the same error when trying if (car.Features.Count()==0)
Is there a way of checking if the number of Features is 0?
Or is there a linq way of removing any items from the object, where the number of features is 0?
Thank you,
Mark
UPDATE
I changed the viewModel to use IEnumerable and then the following:
cars = cars.Where(x => x.Features.Count() > 0).ToList();
That seems to work - although I'm not 100% sure. If anyone can say whether this is a "bad" fix or not, I'd appreciate it.
Thanks, Mark
Try fetching the results first, then checking the count:
car.Features.ToList().Count
I don't think there's anything wrong with the fix - when you're using an IQueryable<T> that came from a LINQ-to-database provider (LINQ to SQL, Entity Framework, etc.) you pretty much have to materialise it before you can use things like Any() or Count() inside a foreach.
As to why this is - I'm actually not 100% certain, and I believe the error is a bit misleading in this respect, but I think what it's complaining about is that neither Cars nor car.Features has actually been fully evaluated and run yet (i.e. you only start hitting the database at the point where you go foreach ... in your code, because it's IQueryable<T>).
However, on a broader note, I'd recommend that you not use IQueryable<T> in your view models; it's much safer to use IEnumerable<T> - no chance of accidentally setting off a database access when rendering your view, for example.
Also, when you are returning data from your data layer (or wherever), a good rule of thumb is to materialise it as quickly as possible, so that you can move on with an actual list of actual things as opposed to a "promise to go and look" for certain things in the database at some unspecified point in the future :) So your data layer should only ever return IEnumerable<T>s.
You can always call AsQueryable() on an IEnumerable<T> if for some reason you need to...
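As a sketch of that rule of thumb (hypothetical CarContext and repository method; the names are purely illustrative):

public IEnumerable<Car> GetCarsWithFeatures()
{
    using (var db = new CarContext()) // hypothetical DbContext
    {
        // Materialise inside the data layer, so the caller gets real objects
        // rather than a deferred query tied to a soon-to-be-disposed context.
        return db.Cars
                 .Where(c => c.Features.Any())
                 .ToList();
    }
}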
Here is some code that uses a parameter class to contain the possible parameters to the Show() method. The values in the FooOptions class aren't very related, as you can see by looking at the implementation of Show() below. I know this is bad code, but are there any anti-patterns related to doing this?
class FooOptions {
public int? Id { get; set; }
public string BazContext { get; set; }
public int? BazId { get; set; }
}
class BarMgr {
public Bar Show(FooOptions options) {
if (options == null)
options = new FooOptions();
if (options.Id.HasValue)
return svc.GetBar(options.Id.Value);
if (!string.IsNullOrEmpty(options.BazContext) && options.BazId.HasValue)
return svc.GetBar(options.BazContext, options.BazId.Value);
return null;
}
}
Update:
I know that parameter objects are not an anti-pattern. In my experience, though, the properties of a parameter object are related to each other; here they are not, and that is the possible anti-pattern I am trying to put a name to. Setting all three properties at once makes no sense.
After your update, here is my answer:
As far as I know, there is no real name for an anti-pattern like this, but there is at least one principle that this method violates:
the Single Responsibility Principle.
And it really is a problem with the method, not with the parameter object.
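As a rough sketch of what splitting those responsibilities could look like (keeping the asker's svc calls; the method names are illustrative):

class BarMgr
{
    // One method per way of identifying a Bar, instead of one catch-all Show().
    public Bar GetBarById(int id)
    {
        return svc.GetBar(id);
    }

    public Bar GetBarByContext(string bazContext, int bazId)
    {
        return svc.GetBar(bazContext, bazId);
    }
}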
It's called the parameter object pattern, and it's not considered an anti-pattern - it's a good way to deal with methods that would otherwise have too many parameters.
There might be an anti-pattern if you use options like this a lot: it's called feature envy, and it's an indication that you might want to move functionality into the actual feature being used.