I created a static class to hold user messages such as "Item successfully added" or "Password successfully changed", etc. Inside the class is a static dictionary which holds all the messages. The key of the dictionary is the UserId. I then have an Action which renders in _Layout.cshtml so that the messages will follow the user even if they are redirected off the page.
For example, I might allow a user to "Add" an item, and then once the item is added successfully, it will redirect to the List page for that item and then display the message "Item successfully added."
This worked great until I deployed to my production site and I noticed that the messages were "lagging". I would add an item and then it would redirect to the list page, but the message would not display. I would then navigate to somewhere else in the application and then the message would display on that page.
Any ideas why this would be happening?
Here's the code for my UserMessageManager
public static class UserMessageManager
{
    private static readonly Dictionary<int, Queue<UserMessage>> UserMessages = new Dictionary<int, Queue<UserMessage>>();

    public static void Add(int userId, string message)
    {
        if (string.IsNullOrWhiteSpace(message))
            return;

        if (!UserMessages.Keys.Contains(userId))
        {
            UserMessages.Add(userId, new Queue<UserMessage>());
        }

        UserMessages[userId].Enqueue(new UserMessage { Message = message });
    }

    public static List<UserMessage> Get(int userId)
    {
        if (!UserMessages.Keys.Contains(userId))
        {
            UserMessages.Add(userId, new Queue<UserMessage>());
        }

        var messages = new List<UserMessage>();
        while (UserMessages[userId].Any())
        {
            messages.Add(UserMessages[userId].Dequeue());
        }
        return messages;
    }
}

public class UserMessage
{
    public string Message { get; set; }
}
EDIT: After some playing around, I've noticed the messages will sometimes even "bunch up". I will add messages to the dictionary after creating a few "items", and then they will all suddenly display at once at a seemingly random time.
Firstly, if this is for a real site, I wouldn't take this approach anyway. If you want information persisted, use something persistent (e.g. a database). Otherwise when the AppDomain is recycled or the whole server goes down (you do have redundancy, right?) you'll lose all the messages. This approach also doesn't load balance nicely.
Secondly, it's just possible that you're seeing the results of the standard collection classes not being thread-safe. Without any synchronization or other memory barriers, it's possible that one thread isn't seeing the data written by another. It would also be entirely possible for two threads to write to the dictionary (or write to the same list) at the same time. You could just add some synchronization, e.g. have a lock object and lock on it for the entirety of each method. I'd start using a database instead though...
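If you do keep the static approach for now, a minimal sketch of the synchronized version might look like this; it keeps your existing design and just guards both methods with a single lock object (the database approach is still the better long-term fix):

public static class UserMessageManager
{
    // One lock object shared by both methods so reads and writes never interleave.
    private static readonly object SyncRoot = new object();
    private static readonly Dictionary<int, Queue<UserMessage>> UserMessages =
        new Dictionary<int, Queue<UserMessage>>();

    public static void Add(int userId, string message)
    {
        if (string.IsNullOrWhiteSpace(message))
            return;

        lock (SyncRoot)
        {
            Queue<UserMessage> queue;
            if (!UserMessages.TryGetValue(userId, out queue))
            {
                queue = new Queue<UserMessage>();
                UserMessages.Add(userId, queue);
            }
            queue.Enqueue(new UserMessage { Message = message });
        }
    }

    public static List<UserMessage> Get(int userId)
    {
        lock (SyncRoot)
        {
            var messages = new List<UserMessage>();
            Queue<UserMessage> queue;
            if (UserMessages.TryGetValue(userId, out queue))
            {
                while (queue.Count > 0)
                    messages.Add(queue.Dequeue());
            }
            return messages;
        }
    }
}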
Related
How can I add items to my list SearchedVideos?
I would like to keep these items in the list for the lifetime of my application.
Right now I get an error like this:
NullReferenceException: Object reference not set to an instance of an object.
I created a context with this property and registered it as a singleton, like this:
public List<QueryViewModel> SearchedVideos { get; set; }
In startup
services.AddSingleton<YtContext>();
My model
public class ExecutedQuery
{
    public Query Query { get; }
    public string Title { get; set; }
    public IReadOnlyList<Video> Videos { get; set; }

    public ExecutedQuery(Query query, string title, IReadOnlyList<Video> videos)
    {
        Query = query;
        Title = title;
        Videos = videos;
    }
}
My service
public async Task<ExecutedQuery> ExecuteQueryAsync(Query query)
{
    // Search
    if (query.Type == QueryType.Search)
    {
        var videos = await _youtubeClient.SearchVideosAsync(query.Value);
        var title = $"Search: {query.Value}";

        var executedQueries = new ExecutedQuery(query, title, videos);

        var qw = new QueryViewModel
        {
            ExecutedQueries = executedQueries,
        };

        _ytcontext.SearchedVideos.Add(qw);

        return executedQueries;
    }
}
My QueryViewModel
public ExecutedQuery ExecutedQueries { get; set; }
My Controller
[HttpGet("Search/all")]
public async Task<IActionResult> ListAllQueriesAsync(string query)
{
var req = _queryService.ParseQuery(query);
var res = await _queryService.ExecuteQueryAsync(req);
return View(res);
}
If you want to edit this list from one instance to another, then you'll need to use some kind of data source. If a database is not an option, then a text file will have to do: serialize your object to a JSON string and deserialize it back. https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-serialize-and-deserialize-json-data. I've used this method to mock up an application, but if you are going to be doing a lot of writing to the file you may run into issues.
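A rough sketch of the file-based approach, assuming Newtonsoft.Json (Json.NET) and a hypothetical queries.json path (the store class name is made up):

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public static class SearchedVideoStore
{
    // Hypothetical file location; adjust to your environment.
    private const string FilePath = "queries.json";

    public static void Save(List<QueryViewModel> searchedVideos)
    {
        // Serialize the whole list and overwrite the file.
        File.WriteAllText(FilePath, JsonConvert.SerializeObject(searchedVideos));
    }

    public static List<QueryViewModel> Load()
    {
        if (!File.Exists(FilePath))
            return new List<QueryViewModel>();

        var json = File.ReadAllText(FilePath);
        return JsonConvert.DeserializeObject<List<QueryViewModel>>(json)
               ?? new List<QueryViewModel>();
    }
}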
If keeping the list in the application's memory is acceptable, then a singleton will work. Read up here: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.2
Each request is its own thing, unaffected by anything that's happened before or since. As such, you pretty much start from a blank slate.
The typical means for persisting state between one or more additional requests is the session. Sessions are essentially fake state: through a combination of server-side (some persistent store) and client-side components (cookies), something that appears like persistence of state can be achieved. However, particularly on the server side, you still need some sort of store, which is generally a database of some sort, be it relational (SQL Server, etc.) or NoSQL (Redis, etc.). The default session store is in-memory, which may suffice for your needs, but as memory is volatile, any sort of app restart will take anything stored there along with it.
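For illustration, the built-in ASP.NET Core session middleware can be wired up roughly like this (fragments only; the key name is just an example, and the GetString/SetString extensions live in Microsoft.AspNetCore.Http):

// In Startup.ConfigureServices
services.AddDistributedMemoryCache();   // default in-memory backing store for sessions
services.AddSession();

// In Startup.Configure, before UseMvc
app.UseSession();

// In a controller action: store and read a value for the current client
HttpContext.Session.SetString("LastQuery", query ?? string.Empty);
var lastQuery = HttpContext.Session.GetString("LastQuery");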
Alternatively, there's statics and objects with singleton lifetimes. In either case, they're virtually the same as in-memory storage - they'll persist the life of the application and no more.
Statics are just members with a static keyword on them. It's probably the simplest and most straight-forward approach, but also the most fragile. It's virtually impossible to test statics, so you're basically creating black-holes in your code where anything could happen.
A better approach is to simply use an object with a singleton lifetime. These can be created via the AddSingleton<T> method on the service collection. For example, you could create a class like:
public class MySingleton
{
    public ICollection<IReadOnlyList<Video>> SearchedVideos { get; set; }
        = new List<IReadOnlyList<Video>>();
}
And then register it as a singleton in ConfigureServices:
services.AddSingleton<MySingleton>();
Then, in your controllers, views, and such, you can inject MySingleton to access the SearchedVideos property. As a singleton, the data there will persist for the life of the application.
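For instance, constructor injection into a controller might look like this (a minimal sketch; the controller name and route are hypothetical):

public class SearchController : Controller
{
    private readonly MySingleton _store;

    public SearchController(MySingleton store)
    {
        // The same MySingleton instance is handed to every controller
        // for the life of the application.
        _store = store;
    }

    [HttpGet("Search/history")]
    public IActionResult ListAll()
    {
        return View(_store.SearchedVideos);
    }
}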
The chief difference between sessions, particularly in-memory sessions, and either statics or singletons is one of breadth. Sessions will always be tied to a particular client, whereas statics and singletons will be scoped to the application. That means that if you use statics or singletons, all clients will see the same data and will potentially manipulate the same data. If you need something that is client-specific, you must use sessions, instead.
@natsukiss I guess you are trying to call the Add() method on a null property. Even though you declared the list property, you still need to assign it an initial instance; if you don't create one, the property never refers to an actual object in memory, and calling Add() on it throws a NullReferenceException. This is the same reason we sometimes write string TestVal = "" — assigning an initial value so there is a real object to work with.
public List<QueryViewModel> SearchedVideos { get; set; } = new List<QueryViewModel>(); //<==
Or, if you are working with Entity Framework, you should use:
public ICollection<QueryViewModel> SearchedVideos { get; set; } = new HashSet<QueryViewModel>(); //<===
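Putting that together, the YtContext you register with AddSingleton would then look something like this (a sketch; only the property from the question is shown):

public class YtContext
{
    // Initialized inline so the first call to SearchedVideos.Add(...) never hits a null reference.
    public List<QueryViewModel> SearchedVideos { get; set; } = new List<QueryViewModel>();
}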
I'm not sure if it is correct for me to ask this, but I've been self-learning WPF and I can't figure out a method to save the data the user enters in my application.
Let's say a project requires the user to input an IList<int> of values, so I have a class storing that information. This information can be loaded from a JSON file if the user has already entered it and saved it within the application.
public class Vault : BindableBase
{
    public Vault(string savedFilePath = null)
    {
        if (string.IsNullOrEmpty(savedFilePath))
        {
            Measures = new List<int> { 1, 2, 3, 4 };
        }
        else
        {
            // Read the saved file and deserialize its JSON content.
            Measures = JsonConvert.DeserializeObject<List<int>>(File.ReadAllText(savedFilePath));
        }
    }

    public IList<int> Measures { get; set; }
}
Now, when I create the application view, I want to load all the ViewModels the user will use. Each ViewModel receives one element of the Measures list.
public MainWindowViewModel()
{
    vault = new Vault(savedFilePath);

    Collection = new ObservableCollection<object>
    {
        new FirstViewViewModel(vault.Measures[0]),
        new SecondViewViewModel(vault.Measures[1])
    };
}
So that when I press Save, the Vault class can be serialized.
public void Save()
{
    File.WriteAllText(fileLocation, JsonConvert.SerializeObject(vault));
}
As I want to modify the values in Vault with the user input, I need a direct reference to it; therefore, in the ViewModels, what I do is:
public class FirstViewViewModel : BindableBase
{
    private int _measure;

    public FirstViewViewModel(int measure)
    {
        _measure = measure;
    }

    public int Measure
    {
        get => _measure;
        set => SetProperty(ref _measure, value);
    }
}
Nevertheless, this seems like an awful way to connect the user input with the data I want to save in a file.
This is a simplified case of what I want to achieve. However, I am sure there is a better way, one that would allow me to change the values in Vault when raising a property change on the ViewModel. Ideally it would also make unit testing easy (I haven't started with that yet).
If anyone could offer me a clue to find a better method to deal with this kind of situation, I would really appreciate it.
This will probably get flagged for being too broad in scope, but in general you should serialize the data to a database. This article is a great place to start:
https://learn.microsoft.com/en-us/ef/ef6/modeling/code-first/workflows/new-database
If your data structures are very lightweight then you might want to use something like SQLite, which stores the database in a local file and doesn't require installing any 3rd-party applications along with your application. Plenty of info here on how to get that working with Entity Framework:
Entity Framework 6 with SQLite 3 Code First - Won't create tables
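As a rough illustration of the code-first shape (a sketch only: the entity and context names are made up, and this uses the EF Core SQLite provider rather than EF6):

using Microsoft.EntityFrameworkCore;

// Hypothetical entity representing one saved measure value.
public class MeasureEntity
{
    public int Id { get; set; }
    public int Value { get; set; }
}

public class VaultContext : DbContext
{
    public DbSet<MeasureEntity> Measures { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Stores the database in a local file next to the application.
        optionsBuilder.UseSqlite("Data Source=vault.db");
    }
}

// Usage: persist the user's values instead of serializing the whole Vault.
// using (var db = new VaultContext())
// {
//     db.Database.EnsureCreated();
//     db.Measures.Add(new MeasureEntity { Value = 42 });
//     db.SaveChanges();
// }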
I'm currently trying to find a better design for my multi-module solution using DI/IOC, but now I'm somehow lost. I have a solution where different kind of entities can be distributed to recipients via different channels.
This is a simplified version of my classes:
#region FTP Module
public interface IFtpService
{
    void Upload(FtpAccount account, byte[] data);
}

public class FtpService : IFtpService
{
    public void Upload(FtpAccount account, byte[] data)
    {
    }
}
#endregion

#region Email Module
public interface IEmailService : IDistributionService
{
    void Send(IEnumerable<string> recipients, byte[] data);
}

public class EmailService : IEmailService
{
    public void Send(IEnumerable<string> recipients, byte[] data)
    {
    }
}
#endregion

public interface IDistributionService { }

#region GenericDistributionModule
public interface IDistributionChannel
{
    void Distribute();
}

public interface IDistribution
{
    byte[] Data { get; }
    IDistributionChannel DistributionChannel { get; }
    void Distribute();
}
#endregion
#region EmailDistributionModule
public class EmailDistributionChannel : IDistributionChannel
{
    public void Distribute()
    {
        // Set some properties
        // Call EmailService???
    }

    public List<string> Recipients { get; set; }
}
#endregion

#region FtpDistributionModule
public class FtpDistributionChannel : IDistributionChannel
{
    public void Distribute()
    {
        // Set some properties
        // Call FtpService???
    }

    public FtpAccount ftpAccount { get; set; }
}
#endregion
#region Program
public class Report
{
    public List<ReportDistribution> DistributionList { get; private set; }
    public byte[] reportData { get; set; }
}

public class ReportDistribution : IDistribution
{
    public Report Report { get; set; }
    public byte[] Data { get { return Report.reportData; } }
    public IDistributionChannel DistributionChannel { get; private set; }

    public void Distribute()
    {
        DistributionChannel.Distribute();
    }
}
class Program
{
    static void Main(string[] args)
    {
        EmailService emailService = new EmailService();
        FtpService ftpService = new FtpService();
        FtpAccount aAccount;
        Report report;

        ReportDistribution[] distributions =
        {
            new ReportDistribution(new EmailDistributionChannel(new List<string> { "test@abc.xyz", "foo@bar.xyz" })),
            new ReportDistribution(new FtpDistributionChannel(aAccount))
        };

        report.DistributionList.AddRange(distributions);

        foreach (var distribution in distributions)
        {
            // Old code:
            // if (distribution.DistributionChannel is EmailDistributionChannel)
            // {
            //     emailService.Send(...);
            // }
            // else if (distribution.DistributionChannel is FtpDistributionChannel)
            // {
            //     ftpService.Upload(...);
            // }
            // else { throw new NotImplementedException(); }

            // New code:
            distribution.Distribute();
        }
    }
}
#endregion
In my current solution it is possible to create and store persistent IDistribution POCOs (I'm using a ReportDistribution here) and attach them to the distributable entity (a Report in this example).
E.g. someone wants to distribute an existing Report via email to a set of recipients. Therefore he creates a new ReportDistribution with an EmailDistributionChannel. Later he decides to distribute the same Report via FTP to a specified FTP server. Therefore he creates another ReportDistribution with an FtpDistributionChannel.
It is possible to distribute the same Report multiple times on the same or different channels.
An Azure Webjob picks up stored IDistribution instances and distributes them. The current, ugly implementation uses if-else to distribute Distributions with a FtpDistributionChannel via a (low-level) FtpService and EmailDistributionChannels with an EmailService.
I'm now trying to implement the interface method Distribute() on FtpDistributionChannel and EmailDistributionChannel. But for this to work the entities need a reference to the services. Injecting the Services into the entities via ConstructorInjection seems to be considered bad style.
Mike Hadlow comes up with three other solutions:
Creating Domain Services. I could e.g. create a FtpDistributionService, inject a FtpService and write a Distribute(FtpDistributionChannel distribution) method (and also an EmailDistributionService). Apart from the drawback mentioned by Mike, how can I select a matching DistributionService based on the IDistribution instance? Replacing my old if-else with another one does not feel right.
Inject IFtpService/IEmailService into the Distribute() method. But how should I define the Distribute() method in the IDistribution interface? EmailDistributionChannel needs an IEmailService while FtpDistributionChannel needs an IFtpService.
Domain events pattern. I'm not sure how this can solve my problem.
Let me try to explain why I came up with this quite complicated solution:
It started with a simple list of Reports. Soon someone asked me to send reports to some recipients (and store the list of recipients). Easy!
Later, someone else added the requirement to send a report to a FtpAccount. Different FtpAccounts are managed in the application, therefore the selected account should also be stored.
This was to the point where I added the IDistributionChannel abstraction. Everything was still fine.
Then someone needed the possibility to also send some kind of persistent Logfiles via Email. This led to my solution with IDistribution/IDistributionChannel.
If now someone needs to distribute some other kind of data, I can just implement another IDistribution for this data. If another DistributionChannel (e.g. Fax) is required, I implement it and it is available for all distributable entities.
I would really appreciate any help/ideas.
First of all, why do you create an interface for the FtpAccount? The class is isolated and provides no behavior that needs to be abstracted away.
Let's start with your original problem and build from there. The problem, as I interpret it, is that you want to send something to a client using a set of different mediums.
By expressing it in code it can be done like this instead:
public void SendFileToUser(string userName, byte[] file)
{
    var distributors = new IDistributor[] { new EmailDistributor(), new FtpDistributor() };
    foreach (var distributor in distributors)
    {
        distributor.Distribute(userName, file);
    }
}
See what I did? I added a bit of context, because your original use case was way too generic. It's not often that you want to distribute some arbitrary data to an arbitrary distribution service.
The change that I made introduces a domain and a real problem.
With that change we can also model the rest of the classes a bit differently.
public class FtpDistributor : IDistributor
{
    private FtpAccountRepository _repository = new FtpAccountRepository();
    private FtpClient _client = new FtpClient();

    public void Distribute(string userName, byte[] file)
    {
        var ftpAccount = _repository.GetAccount(userName);

        _client.Connect(ftpAccount.Host);
        _client.Authenticate(ftpAccount.userName, ftpAccount.Password);
        _client.Send(file);
    }
}
See what I did? I moved the responsibility of keeping track of the FTP account to the actual service. In reality you probably have an administration web page or similar where the account can be mapped to a specific user.
By doing so I also isolated all handling regarding FTP to within the service and therefore reduced the complexity in the calling code.
The email distributor would work in the same way.
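For completeness, a sketch of what the matching email distributor might look like (the EmailAccountRepository, host name, addresses, and file name are all hypothetical; SmtpClient and MailMessage are the standard System.Net.Mail types):

public class EmailDistributor : IDistributor
{
    private EmailAccountRepository _repository = new EmailAccountRepository();

    public void Distribute(string userName, byte[] file)
    {
        // Look up the recipient address mapped to this user.
        var account = _repository.GetAccount(userName);

        using (var client = new System.Net.Mail.SmtpClient("mail.example.com"))       // hypothetical host
        using (var message = new System.Net.Mail.MailMessage("noreply@example.com",   // hypothetical sender
                                                             account.Email))
        {
            message.Subject = "Your report";
            message.Attachments.Add(new System.Net.Mail.Attachment(
                new System.IO.MemoryStream(file), "report.pdf"));                     // hypothetical file name
            client.Send(message);
        }
    }
}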
When you start to code problems like this, try to go from top->down. It's otherwise easy to create an architecture that seems to be SOLID while it doesn't really solve the actual business problem.
Update
I've read your update and I don't see why you must use the same classes for the new requirements?
Then someone needed the possibility to also send some kind of persistent Logfiles via Email
That's an entirely different use case and should be separated from the original use case. Create new code for it. The SmtpClient in .NET is quite easy to use and does not need to be abstracted away.
If now someone needs to distribute some other kind of data, I can just implement another IDistribution for this data.
Why? what complexity are you trying to hide?
If another DistributionChannel (e.g. Fax) is required, I implement it and it is available for all distributable entities
No. Distributing thing A is not the same as distributing thing B. You can't, for instance, transport parts of a large bridge on an airplane; either a freight ship or a truck is required.
What I'm trying to say is that creating overly generic abstractions/contracts to promote code reuse seems like a good idea, but it usually just makes your application more complex and less readable.
Create abstractions when there are real complexity issues, not beforehand.
I am relatively new at MVVM, and have run into a problem. We are writing a database application in WPF using the MVVM-Light framework. The specs of the program state we must be able to have multiple instances of the ClaimView open at once.
To open new windows we are sending a Message from the ViewModel that is caught in the View, and opens the new window. We are using Enumerated tokens to identify the correct recipient to get the request.
Now, if I have two instances of the ClaimView open at once and I call the Messenger, it opens two of the same window, because both Views are receiving the message.
We have tried running each instance of the ViewModel on a separate thread (verified by outputting the ManagedThreadId), and the message is still being received by both instances.
We have also tried unregistering the registered message, so that is not the problem.
Any help would be appreciated.
New Answer
As pointed out by the OP (Daryl), my original answer (see below) was not quite right, so I'm providing a new answer in case someone with the same problem comes across this later:
It makes sense that if you have two instances of something that are registering for the same message type with the same token, both instances will receive the message. The solution is to provide a token that is unique to each View-ViewModel pair.
Instead of just using a plain enum value as your token, you can place your enum value in a class, like this:
public class UniqueToken
{
    public MessengerToken Token { get; private set; }

    public UniqueToken(MessengerToken token)
    {
        Token = token;
    }
}
Then in your ViewModel, add a new property to store one of these unique tokens:
// add a property to your ViewModel
public UniqueToken OpenWindowToken { get; private set; }
// place this in the constructor of your ViewModel
OpenWindowToken = new UniqueToken(MessengerToken.OpenWindow);
// in the appropriate method, send the message
Messenger.Send(message, OpenWindowToken);
Finally, in your View, you can now grab the unique token and use it to register for the OpenWindow message:
var viewModel = (MyViewModel)DataContext;
var token = viewModel.OpenWindowToken;
Messenger.Register<TMessage>(this, token, message => OpenWindow(message));
It is necessary for both the ViewModel and View to use a single instance of UniqueToken, because the messenger will only send a message if the receiver token and sender token are the exact same object, not just instances with the same property values.
Original Answer (not quite correct)
I think there may be a typo in your question: You say that to open a new window, you send a message from the ViewModel to the View, but then later you say both ViewModels are receiving the message. Did you mean both Views are receiving the message?
In any case, it makes sense that if you have two instances of something that are registering for the same message type with the same token, both instances will receive the message.
To solve this, you will first need each instance of your ViewModel to have a unique ID. This could accomplished with a Guid. Something like:
// add a property to your ViewModel
public Guid Id { get; private set; }
// place this in the constructor of your ViewModel
Id = Guid.NewGuid();
Then you would need your token to be an object that has two properties: one for the guid and one for the enum value:
public class UniqueToken
{
    public Guid Id { get; private set; }
    public MessengerToken Token { get; private set; }

    public UniqueToken(Guid id, MessengerToken token)
    {
        Id = id;
        Token = token;
    }
}
Then when you register in your View (or is it your ViewModel?), you need to grab the Guid from the ViewModel. This could work like this:
var viewModel = (MyViewModel)DataContext;
var id = viewModel.Id;
var token = new UniqueToken(id, MessengerToken.OpenWindow);
Messenger.Register<TMessage>(this, token, message => OpenWindow(message));
Finally, in your ViewModel, you need to do something like this:
var token = new UniqueToken(Id, MessengerToken.OpenWindow);
Messenger.Send(message, token);
Edit
After typing all that out, it occurred to me that you don't really need an Id property on the ViewModel. You could just use the ViewModel itself as the unique identifier. So, for UniqueToken, you could just replace public Guid Id with public MyViewModel ViewModel, and it should still work.
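Taking that idea one step further, the ViewModel instance itself can serve as the token, with no wrapper class at all. A short sketch, assuming MVVM Light's Messenger.Default and a hypothetical OpenWindowMessage type:

// In the ViewModel: send the message using the ViewModel itself as the token.
Messenger.Default.Send(new OpenWindowMessage(), this);

// In the View: register with the DataContext (the same ViewModel instance) as the token.
var viewModel = (MyViewModel)DataContext;
Messenger.Default.Register<OpenWindowMessage>(this, viewModel, message => OpenWindow(message));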
Hey all. I realize this is a rather long question, but I'd really appreciate any help from anyone experienced with RIA services. Thanks!
I'm working on a Silverlight 4 app that views data from the server. I'm relatively inexperienced with RIA Services, so have been working through the tasks of getting the data I need down to the client, but every new piece I add to the puzzle seems to be more and more problematic. I feel like I'm missing some basic concepts here, and it seems like I'm just 'hacking' pieces on, in time-consuming ways, each one breaking the previous ones as I try to add them. I'd love to get the feedback of developers experienced with RIA services, to figure out the intended way to do what I'm trying to do. Let me lay out what I'm trying to do:
First, the data. The source of this data is a variety of sources, primarily created by a shared library which reads data from our database, and exposes it as POCOs (Plain Old CLR Objects). I'm creating my own POCOs to represent the different types of data I need to pass between server and client.
DataA - This app is for viewing a certain type of data, lets call DataA, in near-realtime. Every 3 minutes, the client should pull data down from the server, of all the new DataA since the last time it requested data.
DataB - Users can view the DataA objects in the app, and may select one of them from the list, which displays additional details about that DataA. I'm bringing these extra details down from the server as DataB.
DataC - One of the things that DataB contains is a history of a couple important values over time. I'm calling each data point of this history a DataC object, and each DataB object contains many DataCs.
The Data Model - On the server side, I have a single DomainService:
[EnableClientAccess]
public class MyDomainService : DomainService
{
    public IEnumerable<DataA> GetDataA(DateTime? startDate)
    {
        /* Pieces together the DataAs that have been created
           since startDate, and returns them */
    }

    public DataB GetDataB(int dataAID)
    {
        /* Looks up the extended info for that dataAID,
           constructs a new DataB with that DataA's data,
           plus the extended info (with multiple DataCs in a
           List<DataC> property on the DataB), and returns it */
    }

    // Not exactly sure why these are here, but I think it
    // wouldn't compile without them for some reason? The data
    // is entirely read-only, so I don't need to update.
    public void UpdateDataA(DataA dataA)
    {
        throw new NotSupportedException();
    }

    public void UpdateDataB(DataB dataB)
    {
        throw new NotSupportedException();
    }
}
The classes for DataA/B/C look like this:
[KnownType(typeof(DataB))]
public partial class DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalA { get; set; }

    [DataMember]
    public string MyStringA { get; set; }

    [DataMember]
    public DateTime MyDateTimeA { get; set; }
}

public partial class DataB : DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalB { get; set; }

    [DataMember]
    public string MyStringB { get; set; }

    [Include] //I don't know which of these, if any, I need?
    [Composition]
    [Association("DataAToC", "DataAID", "DataAID")]
    public List<DataC> DataCs { get; set; }
}

public partial class DataC
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [Key]
    [DataMember]
    public DateTime Timestamp { get; set; }

    [DataMember]
    public decimal MyHistoricDecimal { get; set; }
}
I guess a big question I have here is... Should I be using Entities instead of POCOs? Are my classes constructed correctly to be able to pass the data down correctly? Should I be using Invoke methods instead of Query (Get) methods on the DomainService?
On the client side, I'm having a number of issues. Surprisingly, one of my biggest has been threading. I didn't expect there to be so many threading issues with MyDomainContext. What I've learned is that you only seem to be able to create MyDomainContext objects on the UI thread, that all of the queries you can make are asynchronous only, and that if you try to fake doing it synchronously by blocking the calling thread until the LoadOperation finishes, you have to do so on a background thread, since the UI thread is used to make the query. So here's what I've got so far.
The app should display a stream of the DataA objects, spreading each 3min chunk of them over the next 3min (so they end up displayed 3min after the occurred, looking like a continuous stream, but only have to be downloaded in 3min bursts). To do this, the main form initializes, creates a private MyDomainContext, and starts up a background worker, which continuously loops in a while(true). On each loop, it checks if it has any DataAs left over to display. If so, it displays that Data, and Thread.Sleep()s until the next DataA is scheduled to be displayed. If it's out of data, it queries for more, using the following methods:
public DataA[] GetDataAs(DateTime? startDate)
{
    _loadOperationGetDataACompletion = new AutoResetEvent(false);

    LoadOperation<DataA> loadOperationGetDataA = null;
    loadOperationGetDataA =
        _context.Load(_context.GetDataAQuery(startDate),
            System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataA.Completed += new
        EventHandler(loadOperationGetDataA_Completed);

    _loadOperationGetDataACompletion.WaitOne();

    List<DataA> dataAs = new List<DataA>();
    foreach (var dataA in loadOperationGetDataA.Entities)
        dataAs.Add(dataA);

    return dataAs.ToArray();
}

private static AutoResetEvent _loadOperationGetDataACompletion;

private static void loadOperationGetDataA_Completed(object sender, EventArgs e)
{
    _loadOperationGetDataACompletion.Set();
}
Seems kind of clunky trying to force it into being synchronous, but since this already is on a background thread, I think this is OK? So far, everything actually works, as much of a hack as it seems like it may be. It's important to note that if I try to run that code on the UI thread, it locks, because it waits on the WaitOne() forever, locking the thread, so it can't make the Load request to the server.
So once the data is displayed, users can click on one as it goes by to fill a details pane with the full DataB data about that object. To do that, I have the details pane user control subscribing to a selection event I have set up, which gets fired when the selection changes (on the UI thread). I use a similar technique there to get the DataB object:
void SelectionService_SelectedDataAChanged(object sender, EventArgs e)
{
    DataA dataA = /*Get the selected DataA*/;

    MyDomainContext context = new MyDomainContext();
    var loadOperationGetDataB =
        context.Load(context.GetDataBQuery(dataA.DataAID),
            System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataB.Completed += new
        EventHandler(loadOperationGetDataB_Completed);
}

private void loadOperationGetDataB_Completed(object sender, EventArgs e)
{
    this.DataContext =
        ((LoadOperation<DataB>)sender).Entities.SingleOrDefault();
}
Again, it seems kinda hacky, but it works... except on the DataB that it loads, the DataCs list is empty. I've tried all kinds of things there, and I don't see what I'm doing wrong to allow the DataCs to come down with the DataB. I'm about ready to make a 3rd query for the DataCs, but that's screaming even more hackiness to me.
It really feels like I'm fighting against the grain here, like I'm doing this in an entirely unintended way. If anyone could offer any assistance, and point out what I'm doing wrong here, I'd very much appreciate it!
Thanks!
I have to say it does seem a bit overly complex.
If you use Entity Framework (which, as of version 4, can generate POCOs if you need them) and RIA Services/LINQ, you can do all of that implicitly using lazy loading, 'expand' statements, and table-per-type inheritance.
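On the empty-DataCs problem specifically, the usual pattern is to keep the [Include]/[Association] attributes on the child collection and eagerly load it in the server-side query method so RIA Services serializes the children with the parent. A rough sketch with Entity Framework (the context and entity-set names are made up):

public DataB GetDataB(int dataAID)
{
    using (var db = new MyEntitiesContext()) // hypothetical EF context
    {
        // Eagerly load the child DataC rows so they travel to the client
        // along with the parent DataB.
        return db.DataBs
                 .Include("DataCs")
                 .SingleOrDefault(b => b.DataAID == dataAID);
    }
}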