Silverlight 4 + WCF RIA - Data Service Design Best Practices - c#

Hey all. I realize this is a rather long question, but I'd really appreciate any help from anyone experienced with RIA services. Thanks!
I'm working on a Silverlight 4 app that views data from the server. I'm relatively inexperienced with RIA Services, so have been working through the tasks of getting the data I need down to the client, but every new piece I add to the puzzle seems to be more and more problematic. I feel like I'm missing some basic concepts here, and it seems like I'm just 'hacking' pieces on, in time-consuming ways, each one breaking the previous ones as I try to add them. I'd love to get the feedback of developers experienced with RIA services, to figure out the intended way to do what I'm trying to do. Let me lay out what I'm trying to do:
First, the data. The source of this data is a variety of sources, primarily created by a shared library which reads data from our database, and exposes it as POCOs (Plain Old CLR Objects). I'm creating my own POCOs to represent the different types of data I need to pass between server and client.
DataA - This app is for viewing a certain type of data, let's call it DataA, in near-realtime. Every 3 minutes, the client should pull down from the server all the new DataA created since the last time it requested data.
DataB - Users can view the DataA objects in the app, and may select one of them from the list, which displays additional details about that DataA. I'm bringing these extra details down from the server as DataB.
DataC - One of the things that DataB contains is a history of a couple important values over time. I'm calling each data point of this history a DataC object, and each DataB object contains many DataCs.
The Data Model - On the server side, I have a single DomainService:
[EnableClientAccess]
public class MyDomainService : DomainService
{
    public IEnumerable<DataA> GetDataA(DateTime? startDate)
    {
        /*Pieces together the DataAs that have been created
          since startDate, and returns them*/
    }

    public DataB GetDataB(int dataAID)
    {
        /*Looks up the extended info for that dataAID,
          constructs a new DataB with that DataA's data,
          plus the extended info (with multiple DataCs in a
          List<DataC> property on the DataB), and returns it*/
    }

    //Not exactly sure why these are here, but I think it
    //wouldn't compile without them for some reason? The data
    //is entirely read-only, so I don't need to update.
    public void UpdateDataA(DataA dataA)
    {
        throw new NotSupportedException();
    }

    public void UpdateDataB(DataB dataB)
    {
        throw new NotSupportedException();
    }
}
The classes for DataA/B/C look like this:
[KnownType(typeof(DataB))]
public partial class DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalA { get; set; }

    [DataMember]
    public string MyStringA { get; set; }

    [DataMember]
    public DateTime MyDateTimeA { get; set; }
}
public partial class DataB : DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalB { get; set; }

    [DataMember]
    public string MyStringB { get; set; }

    [Include] //I don't know which of these, if any, I need?
    [Composition]
    [Association("DataAToC", "DataAID", "DataAID")]
    public List<DataC> DataCs { get; set; }
}

public partial class DataC
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [Key]
    [DataMember]
    public DateTime Timestamp { get; set; }

    [DataMember]
    public decimal MyHistoricDecimal { get; set; }
}
I guess a big question I have here is... Should I be using Entities instead of POCOs? Are my classes constructed correctly to be able to pass the data down correctly? Should I be using Invoke methods instead of Query (Get) methods on the DomainService?
On the client side, I'm having a number of issues. Surprisingly, one of my biggest ones has been threading. I didn't expect there to be so many threading issues with MyDomainContext. What I've learned is that you can only create MyDomainContext objects on the UI thread, all of the queries are asynchronous only, and if you try to fake doing it synchronously by blocking the calling thread until the LoadOperation finishes, you have to do that on a background thread, since the load uses the UI thread to make the query. So here's what I've got so far.
The app should display a stream of the DataA objects, spreading each 3min chunk of them over the next 3min (so each item is displayed 3min after it occurred, looking like a continuous stream, but only has to be downloaded in 3min bursts). To do this, the main form initializes, creates a private MyDomainContext, and starts up a background worker that loops continuously in a while(true). On each loop, it checks whether it has any DataAs left over to display. If so, it displays that data and Thread.Sleep()s until the next DataA is scheduled to be displayed. If it's out of data, it queries for more, using the following methods:
public DataA[] GetDataAs(DateTime? startDate)
{
    _loadOperationGetDataACompletion = new AutoResetEvent(false);
    LoadOperation<DataA> loadOperationGetDataA = null;
    loadOperationGetDataA = _context.Load(_context.GetDataAQuery(startDate),
        System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataA.Completed += new EventHandler(loadOperationGetDataA_Completed);
    _loadOperationGetDataACompletion.WaitOne();

    List<DataA> dataAs = new List<DataA>();
    foreach (var dataA in loadOperationGetDataA.Entities)
        dataAs.Add(dataA);
    return dataAs.ToArray();
}

private static AutoResetEvent _loadOperationGetDataACompletion;

private static void loadOperationGetDataA_Completed(object sender, EventArgs e)
{
    _loadOperationGetDataACompletion.Set();
}
Seems kind of clunky trying to force it into being synchronous, but since this is already on a background thread, I think this is OK? So far, everything actually works, as much of a hack as it seems. It's important to note that if I try to run that code on the UI thread, it deadlocks: it waits on the WaitOne() forever, blocking the thread, so the Load request to the server can never be made.
So once the data is displayed, users can click on an item as it goes by to fill a details pane with the full DataB data about that object. To do that, I have the details pane user control subscribing to a selection event I have set up, which gets fired when the selection changes (on the UI thread). I use a similar technique there to get the DataB object:
void SelectionService_SelectedDataAChanged(object sender, EventArgs e)
{
    DataA dataA = /*Get the selected DataA*/;
    MyDomainContext context = new MyDomainContext();
    var loadOperationGetDataB = context.Load(context.GetDataBQuery(dataA.DataAID),
        System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataB.Completed += new EventHandler(loadOperationGetDataB_Completed);
}

private void loadOperationGetDataB_Completed(object sender, EventArgs e)
{
    this.DataContext = ((LoadOperation<DataB>)sender).Entities.SingleOrDefault();
}
Again, it seems kinda hacky, but it works... except that on the DataB it loads, the DataCs list is empty. I've tried all kinds of things there, and I don't see what I'm doing wrong that keeps the DataCs from coming down with the DataB. I'm about ready to make a third query just for the DataCs, but that's screaming even more hackiness to me.
It really feels like I'm fighting against the grain here, like I'm doing this in an entirely unintended way. If anyone could offer any assistance, and point out what I'm doing wrong here, I'd very much appreciate it!
Thanks!

I have to say it does seem a bit overly complex.
If you use Entity Framework (which, as of version 4, can generate POCOs if you need them) together with RIA Services and LINQ, you can do all of that implicitly using lazy loading, 'expand' statements and table-per-type inheritance.
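To make that concrete, here is a rough sketch of the direction this answer points in, assuming an EF4 model exposed through the EF-backed DomainService base class (the context name MyEntities and the exact member names are illustrative, not from the question):

[EnableClientAccess]
public class MyEfDomainService : LinqToEntitiesDomainService<MyEntities>
{
    // Returns only the DataAs created since startDate (or all of them when null).
    public IQueryable<DataA> GetDataA(DateTime? startDate)
    {
        return startDate == null
            ? ObjectContext.DataAs
            : ObjectContext.DataAs.Where(a => a.MyDateTimeA >= startDate.Value);
    }

    // With table-per-type inheritance, DataB is just the derived type of DataA.
    // Marking the DataCs navigation property with [Include] in the metadata class
    // and eagerly loading it here is what makes the children travel with the parent.
    public IQueryable<DataB> GetDataB(int dataAID)
    {
        return ObjectContext.DataAs
            .OfType<DataB>()
            .Include("DataCs")
            .Where(b => b.DataAID == dataAID);
    }
}

Because the query methods return IQueryable, the client can also compose additional filters onto a query before it executes on the server.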

Related

how to ensure that the editing (IsEdited or whatever) state of an EFCore entity is notified

I've got a class Invoice that has InvoiceRows.
public class Invoice
{
    [Key]
    public int? Id { get; set; }

    public DateTime InvoiceDate {
        get => invoiceDate;
        set => PropertySet<DateTime>(value, ref invoiceDate);
    }
    private DateTime invoiceDate;

    public List<InvoiceRow> Rows { get; } = new List<InvoiceRow>();

    [NotMapped]
    public bool IsEdited { get; set; } = false;

    public void PropertySet<T>(T value, ref T field)
    {
        if (value.Equals(field)) return;
        field = value;
        IsEdited = true;
    }
}
The Invoice is edited in a WPF graphical interface that needs notifications when the invoice has been edited, in order to enable the Save button, for example.
(In a first implementation I thought IsEdited was a viewmodel concern, so I omitted it from the model and included it in the viewmodel instead. That turned out to be quite complex when handling sub-records. However, I haven't yet fully worked out which implementation - the model one or the viewmodel one - is best, and this is possibly a second question.)
The IsEdited is easily managed at record level, but how to handle it at the InvoiceRow level?
The first idea that comes to mind is to notify back to the main record.
This in turn requires additional code to wire up notifications between Invoice and Rows.
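For illustration, a minimal sketch of that wiring, assuming InvoiceRow follows the same PropertySet pattern and exposes a hypothetical Edited event (none of these members exist in the code above):

public class InvoiceRow
{
    [Key]
    public int? Id { get; set; }

    public decimal Amount {
        get => amount;
        set { if (!value.Equals(amount)) { amount = value; OnEdited(); } }
    }
    private decimal amount;

    [NotMapped]
    public bool IsEdited { get; private set; }

    // Raised whenever a mapped property changes, so the parent Invoice can listen.
    public event EventHandler Edited;

    private void OnEdited()
    {
        IsEdited = true;
        Edited?.Invoke(this, EventArgs.Empty);
    }
}

// On Invoice, rows would be added through a method so the parent can subscribe:
public void AddRow(InvoiceRow row)
{
    row.Edited += (s, e) => IsEdited = true;
    Rows.Add(row);
    IsEdited = true;
}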
Another idea is to leverage the DbContext state that holds in one place the required information instead of gathering it around sub-records.
Are there any other options left? One of the challenges of such a decision is fully evaluating the consequences of each approach beforehand. What are the pros/cons of the different ways of handling this?
In my limited experience with WPF I've read something about hierarchical viewmodels. Maybe they are suitable for this, handling the wrap-up at the viewmodel level?

How to create collection without database in .net core

How can I add items to my list SearchedVideos?
I would like to keep these items in the list until the application shuts down.
Right now I get this error:
NullReferenceException: Object reference not set to an instance of an object.
I create the context with this property, and register it as a singleton:
public List<QueryViewModel> SearchedVideos { get; set; }
In startup
services.AddSingleton<YtContext>();
My model
public class ExecutedQuery
{
    public Query Query { get; }
    public string Title { get; set; }
    public IReadOnlyList<Video> Videos { get; set; }

    public ExecutedQuery(Query query, string title, IReadOnlyList<Video> videos)
    {
        Query = query;
        Title = title;
        Videos = videos;
    }
}
My service
public async Task<ExecutedQuery> ExecuteQueryAsync(Query query)
{
    // Search
    if (query.Type == QueryType.Search)
    {
        var videos = await _youtubeClient.SearchVideosAsync(query.Value);
        var title = $"Search: {query.Value}";
        var executedQueries = new ExecutedQuery(query, title, videos);
        var qw = new QueryViewModel
        {
            ExecutedQueries = executedQueries,
        };
        _ytcontext.SearchedVideos.Add(qw);
        return executedQueries;
    }
}
My QueryViewModel
public ExecutedQuery ExecutedQueries { get; set; }
My Controller
[HttpGet("Search/all")]
public async Task<IActionResult> ListAllQueriesAsync(string query)
{
var req = _queryService.ParseQuery(query);
var res = await _queryService.ExecuteQueryAsync(req);
return View(res);
}
If you want to edit this list from one instance to another, you'll need to use some kind of data source. If a database is not an option then a text file will have to do: use a JSON string and serialize/deserialize to your object (https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-serialize-and-deserialize-json-data). I've used this method to mock up an application, but if you are going to be doing a lot of writing to the file you may run into issues.
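As a rough illustration of the file approach (using Newtonsoft.Json here rather than the DataContractJsonSerializer from the linked article; the QueryStore class and file path are made up):

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class QueryStore
{
    private const string FilePath = "searchedVideos.json";

    public List<QueryViewModel> Load()
    {
        if (!File.Exists(FilePath))
            return new List<QueryViewModel>();

        var json = File.ReadAllText(FilePath);
        return JsonConvert.DeserializeObject<List<QueryViewModel>>(json)
               ?? new List<QueryViewModel>();
    }

    public void Save(List<QueryViewModel> items)
    {
        File.WriteAllText(FilePath, JsonConvert.SerializeObject(items));
    }
}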
If you can hard-code the list in the application then a singleton will work. Read up here: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.2
Each request is its own thing, unaffected by anything that's happened before or after it. As such, you pretty much start from a blank slate. The typical means for persisting state between one or more additional requests is the session. Sessions are essentially fake state: through a combination of server-side (some persistent store) and client-side (cookies) components, something that looks like persisted state can be achieved. However, particularly on the server side, you still need some sort of store, which is generally a database of some sort, be it relational (SQL Server, etc.) or NoSQL (Redis, etc.). The default session store is in-memory, which may suffice for your needs, but as memory is volatile, any sort of app restart will take anything stored there along with it.
Alternatively, there's statics and objects with singleton lifetimes. In either case, they're virtually the same as in-memory storage - they'll persist the life of the application and no more.
Statics are just members with the static keyword on them. It's probably the simplest and most straightforward approach, but also the most fragile. It's virtually impossible to test statics, so you're basically creating black holes in your code where anything could happen.
A better approach is to simply use an object with a singleton lifetime. These can be created via the AddSingleton<T> method on the service collection. For example, you could create a class like:
public class MySingleton
{
    // Initialized here so callers can Add() to it without hitting a null reference.
    public ICollection<IReadOnlyList<Video>> SearchedVideos { get; set; }
        = new List<IReadOnlyList<Video>>();
}
And then register it as a singleton in ConfigureServices:
services.AddSingleton<MySingleton>();
Then, in your controllers, views, and such, you can inject MySingleton to access the SearchedVideos property. As a singleton, the data there will persist for the life of the application.
The chief difference between sessions, particularly in-memory sessions, and either statics or singletons is one of breadth. Sessions will always be tied to a particular client, whereas statics and singletons will be scoped to the application. That means that if you use statics or singletons, all clients will see the same data and will potentially manipulate the same data. If you need something that is client-specific, you must use sessions, instead.
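For example, injecting it into a controller might look like this (a sketch; the controller and action names are made up):

public class VideosController : Controller
{
    private readonly MySingleton _store;

    // The container hands every controller the same MySingleton instance.
    public VideosController(MySingleton store)
    {
        _store = store;
    }

    [HttpGet("Search/history")]
    public IActionResult History()
    {
        return View(_store.SearchedVideos);
    }
}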
@natsukiss I guess you are trying to call the Add() method on a null property. Even though you declare the list property, you still need to give it an initial instance; if you never create one, the property doesn't reference any object in memory, so calling Add() on it fails. It's the same reason we sometimes write string TestVal = "" - we give the member an initial value so there is an actual instance to work with.
public List<QueryViewModel> SearchedVideos { get; set; } = new List<QueryViewModel>(); //<==
or if you are working with EntityFramework you should use
public ICollection<QueryViewModel> SearchedVideos { get; set; } = new HashSet<QueryViewModel>(); //<===

REST API Best practice for handling junction data

I am working on a service oriented architecture. I have 3 tables Meeting, Stakeholder and MeetingStakeholder (a junction table).
A simple representation of POCO classes for all 3 tables:
public class Meeting
{
    public int Id { get; set; }
    public IList<MeetingStakeholder> MeetingStakeholders { get; set; }
}

public class Stakeholder
{
    public int Id { get; set; }
}

public class MeetingStakeholder
{
    public int Id { get; set; }

    public int MeetingId { get; set; }
    public Meeting Meeting { get; set; }

    public int StakeholderId { get; set; }
    public Stakeholder Stakeholder { get; set; }
}
A simple representation of Meeting Dto:
public class MeetingDto
{
    public int Id { get; set; }
    public IList<int> StakeholderIds { get; set; }
}
In PUT action,
PUT: api/meetings/1
First I remove all existing records from MeetingStakeholder (the junction table), then prepare a new List<MeetingStakeholder> meetingStakeholders using meetingDto.StakeholderIds, and create them.
{
    List<MeetingStakeholder> existingMeetingStakeholders =
        _unitOfWork.MeetingStakeholderRepository.Where(x => x.MeetingId == meetingDto.Id);
    _unitOfWork.MeetingStakeholderRepository.RemoveRange(existingMeetingStakeholders);

    List<MeetingStakeholder> meetingStakeholders = ... ;

    _unitOfWork.MeetingRepository.Update(meeting);
    _unitOfWork.MeetingStakeholderRepository.CreateRange(meetingStakeholders);
    _unitOfWork.SaveChanges();
    return Ok(meetingDto);
}
Everything seems fine to me, but my architect told me that I'm doing the wrong thing.
He said that in the PUT action (according to SRP) I should not be removing and re-creating MeetingStakeholder records; I should be responsible for updating the meeting object only.
According to him, MeetingStakeholderIds (an array of integers) should be sent in the request body to these routes:
For assigning new stakeholders to meeting.
POST: api/meetings/1/stakeholders
For removing existing stakeholders from meeting.
Delete: api/meetings/1/stakeholders
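For what it's worth, those two routes might look roughly like this as Web API actions (a sketch only; the repository calls just mirror the ones above and are not from the original post):

[HttpPost("api/meetings/{meetingId}/stakeholders")]
public IActionResult AddStakeholders(int meetingId, [FromBody] List<int> stakeholderIds)
{
    var links = stakeholderIds
        .Select(id => new MeetingStakeholder { MeetingId = meetingId, StakeholderId = id })
        .ToList();
    _unitOfWork.MeetingStakeholderRepository.CreateRange(links);
    _unitOfWork.SaveChanges();
    return Ok();
}

[HttpDelete("api/meetings/{meetingId}/stakeholders")]
public IActionResult RemoveStakeholders(int meetingId, [FromBody] List<int> stakeholderIds)
{
    var links = _unitOfWork.MeetingStakeholderRepository
        .Where(x => x.MeetingId == meetingId && stakeholderIds.Contains(x.StakeholderId))
        .ToList();
    _unitOfWork.MeetingStakeholderRepository.RemoveRange(links);
    _unitOfWork.SaveChanges();
    return Ok();
}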
But the problem is that in the meeting edit screen my front-end developer uses a multi-select for stakeholders. He will need to maintain two arrays of integers.
The first array for the stakeholder IDs the end user unselects in the multi-select.
The second array for the newly selected stakeholder IDs.
Then he will send these two arrays to their respective routes, as I mentioned above.
If my architect is right then I have no problem, but how should my front-end developer handle stakeholder selection in the edit screen?
One thing I want to clarify is that my junction table is very simple; it does not contain any columns other than MeetingId and StakeholderId (a very basic junction). So in this scenario, does it make sense to create separate POST/DELETE actions on "api/meetings/1/stakeholders" that receive StakeholderIds (a list of integers), instead of receiving StakeholderIds directly in the MeetingDto?
First of all, if I am not mistaken:
you have a resource: "Meeting";
you want to update the said resource (using HTTP/PUT).
So updating a meeting by requesting a PUT on "/api/meetings/:id" seems fairly simple, concise, direct and clear - all good traits for designing a good interface. And it still respects the Single Responsibility Principle: you are updating a resource.
Nonetheless, I also agree with your architect about providing, in addition to the previous method, POST/DELETE actions on "api/meetings/1/stakeholders" if the requirements justify it. We should be pragmatic at some level and not over-engineer something that isn't required.
Now, if your architect said that only because of HOW IT IS PERSISTED, then he is wrong. Interfaces should be clear to the end user (the frontend today, another service or app tomorrow...), but most importantly, in this case, ignorant of persistence or any other implementation detail.
Your API should focus on your domain and your business rules, not on how you store the information.
This is just my view. If someone does not agree with me, I'd like to be called out, so we can both grow and learn together.
Hope I could be of some help. Cheers :)

Event sourcing - should we save to event store only changes in values of object or all object?

I am trying to understand event sourcing with CQRS. As I understand it, the command handler saves changes to a relational database and raises an event, which should be saved in a NoSQL database. I have a Notes class (that is my table in the database):
public partial class Notes
{
    public System.Guid Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public System.DateTime CreatedDate { get; set; }
    public System.DateTime ModifiedDate { get; set; }
}
If somebody modifies only the Title of a single note, should I create an event with just the Id and Title properties, or with all the properties of the Notes class? Right now I create the event with all properties - I don't check which property has changed:
public abstract class DomainEvent
{
    public Guid Id { get; set; }
    public DateTime CreatedDateEvent { get; set; }
}

// my event:
public class NoteChangedEvent : DomainEvent
{
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime ModifiedDate { get; set; }
}

// my command handler class:
public class EditNoteCommandHandler : ICommandHandler<EditNoteCommand>
{
    private readonly DbContext _context;
    private readonly IEventDispatcher _eventDispatcher;

    public EditNoteCommandHandler(DbContext context, IEventDispatcher eventDispatcher)
    {
        _context = context;
        _eventDispatcher = eventDispatcher;
    }

    public void Execute(EditNoteCommand command)
    {
        Notes note = _context.Notes.Find(command.Id);
        note.Title = command.Title;
        note.Content = command.Content;
        note.ModifiedDate = DateTime.Now;
        _context.SaveChanges(); // save to relational database

        // throw event:
        NoteChangedEvent noteChangedEvent = new NoteChangedEvent
        {
            Id = note.Id,
            Title = note.Title,
            Content = note.Content,
            ModifiedDate = note.ModifiedDate,
            CreatedDateEvent = DateTime.Now
        };
        _eventDispatcher.Dispatch(noteChangedEvent);
    }
}
// my dispatcher class:
public class EventDispatcher : IEventDispatcher
{
    public void Dispatch<TEvent>(TEvent @event) where TEvent : DomainEvent
    {
        foreach (IEventHandler<TEvent> handler in DependencyResolver.Current.GetServices<IEventHandler<TEvent>>())
        {
            handler.Handle(@event);
        }
    }
}
Am I doing this correctly, and is my understanding right?
I am trying to understand event sourcing with CQRS.
As I understand it, the command handler saves changes to a relational database and raises an event, which should be saved in a NoSQL database.
No. Think two different actions, happening at different times.
First: the command handler saves the events (there will often be more than one). That's the event sourcing part - the sequence of events is the system of record; the events are used to rehydrate your entities for the next write, so the event store is part of the C side of CQRS.
Using a nosql database for this part is an implementation detail; it's fairly common to use a relational database to store the events (example: http://blog.oasisdigital.com/2015/cqrs-linear-event-store/ ).
After the write to the system of record is complete, that data (the event stream) is used to create new projections in the read model. That can have any representation you like, but ideally is whatever is most perfectly suited to your read model. In other words, you could create a representation of the data in a relational database; or you could run a bunch of queries and cache the results in a nosql store, or.... The point of CQRS is to separate the write concerns from the read concerns -- once you have done that, you can tune each side separately.
Mind you, if you are starting with a write model where you are reading and writing from an object store, extending that model to dispatch events (as you have done here) is a reasonable intermediate step to take.
But if you were starting from scratch, you would not ever save the current state of the Note in the write model (no object store), just the sequence of events that brought the note to the current state.
If somebody modifies only the Title of a single note, should I create an event with just the Id and Title properties, or with all the properties of the Notes class?
Best practices are to describe the change.
This is normally done with more finely grained events. You can get away with EntityChanged, enumerating the fields that have changed (in most cases, including the entity id). In CRUD solutions, sometimes that's all that makes sense.
But if you are modeling a business domain, there will usually be some context - some motivation for the change. A fully fleshed out event model might have more than one event describing changes to the same set of fields (consider the use cases of correcting the mailing address of a customer vs tracking that a customer has relocated).
The point being that if the write model can describe the meaning behind the change, the read model doesn't need to try to guess it from the generic data included in the event.
Discovering that intent often means capturing more semantic information in the command (before it is sent to the write model). In other words, EditNoteCommand itself is vague, and like the events can be renamed to more explicitly express the different kinds of edits that are of interest.
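As a purely hypothetical illustration of that advice for the Notes example, more finely grained events might look something like this (the event names are made up to show intent, not taken from the question):

// Instead of one generic NoteChangedEvent, each event names the change that happened.
public class NoteRetitledEvent : DomainEvent
{
    public string NewTitle { get; set; }
}

public class NoteContentCorrectedEvent : DomainEvent
{
    public string NewContent { get; set; }
}

public class NoteContentAppendedEvent : DomainEvent
{
    public string AppendedText { get; set; }
}

The read model can then react to each of these differently, without having to diff a generic before/after snapshot to guess what actually happened.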

Dealing with Object Graphs - Web API

I recently encountered a (hopefully) small issue when toying around with a Web API project that involves returning object graphs so that they can be read as JSON.
Example of Task Object (generated through EF) :
//A Task Object (Parent) can consist of many Activities (Child Objects)
public partial class Task
{
    public Task()
    {
        this.Activities = new HashSet<Activity>();
    }

    public int TaskId { get; set; }
    public string TaskSummary { get; set; }
    public string TaskDetail { get; set; }

    public virtual ICollection<Activity> Activities { get; set; }
}
Within my ApiController, I request a specific Task (by Id) along with all of its associated Activities, via:
Example of Single Task Request
//Simple example of pulling an object along with the associated activities.
return repository.Single(t => t.Id == id).Include("Activities");
Everything appears to be working fine - however, when I attempt to navigate to a URL to access this, such as /api/tasks/1, the method executes as it should, but no object is returned (just a simple "cannot find that page").
If I request an Task that contains no activities - everything works as expected and it returns the proper JSON object with Activities : [].
I'm sure there are many ways to tackle this issue - I just thought I would get some insight as to what people consider the best method of handling this.
Considered Methods (so far):
Using an alternative JSON serializer (such as Newtonsoft.Json), which fixed the issue but appended $id and $ref properties throughout the returned data, which I believe could make parsing for Knockout difficult.
Using projection and leveraging anonymous types to return the data. (Untested so far)
Removing the Include entirely and simply accessing the Child Data through another request.
Any and all suggestions would be greatly appreciated.
I had a similar issue with EF types and Web API recently. Depending on how your generated EF models are set up, the navigation properties may result in circular references. So if your generated Activity class has a Task reference, the serializer will try to walk the object graph and get thrown into a nasty little cycle.
One solution would be to create a simple view model to get the serializer working
public class TaskViewModel
{
    public TaskViewModel()
    {
        this.Activities = new List<ActivityViewModel>();
    }

    public int TaskId { get; set; }
    public string TaskSummary { get; set; }
    public string TaskDetail { get; set; }

    public virtual IList<ActivityViewModel> Activities { get; set; }
}

public class ActivityViewModel
{
    public ActivityViewModel()
    {
    }

    //Activity stuff goes here
    //No reference to Tasks here!!
}
Depending on what you're doing, you may even be able to create a flatter model than this, but removing the Task reference will help the serialization. That's probably why it worked when Activities was empty.
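A rough sketch of mapping the EF entity onto that view model before returning it (the query shape is assumed, not taken from the original repository code):

public TaskViewModel GetTask(int id)
{
    var task = repository.Single(t => t.TaskId == id);

    // Copy only the fields the client needs; no back-reference from activities to task,
    // so the serializer never hits a cycle.
    return new TaskViewModel
    {
        TaskId = task.TaskId,
        TaskSummary = task.TaskSummary,
        TaskDetail = task.TaskDetail,
        Activities = task.Activities
            .Select(a => new ActivityViewModel { /* Activity stuff goes here */ })
            .ToList()
    };
}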
