How can I add items to my SearchedVideos list?
I would like these items to stay in the list until the end of my application.
Right now I get an error like this:
NullReferenceException: Object reference not set to an instance of an object.
I create a context with the property, registered as a singleton, like this:
public List<QueryViewModel> SearchedVideos { get; set; }
In startup
services.AddSingleton<YtContext>();
My model
public class ExecutedQuery
{
    public Query Query { get; }
    public string Title { get; set; }
    public IReadOnlyList<Video> Videos { get; set; }

    public ExecutedQuery(Query query, string title, IReadOnlyList<Video> videos)
    {
        Query = query;
        Title = title;
        Videos = videos;
    }
}
My service
public async Task<ExecutedQuery> ExecuteQueryAsync(Query query)
{
    // Search
    if (query.Type == QueryType.Search)
    {
        var videos = await _youtubeClient.SearchVideosAsync(query.Value);
        var title = $"Search: {query.Value}";
        var executedQuery = new ExecutedQuery(query, title, videos);
        var qw = new QueryViewModel
        {
            ExecutedQueries = executedQuery,
        };
        _ytcontext.SearchedVideos.Add(qw);
        return executedQuery;
    }

    // The method must return on every path
    throw new NotSupportedException($"Unsupported query type: {query.Type}");
}
My QueryViewModel
public ExecutedQuery ExecutedQueries { get; set; }
My Controller
[HttpGet("Search/all")]
public async Task<IActionResult> ListAllQueriesAsync(string query)
{
var req = _queryService.ParseQuery(query);
var res = await _queryService.ExecuteQueryAsync(req);
return View(res);
}
If you want to edit this list from one instance to another, you'll need some kind of data source. If a database is not an option, then a text file will have to do: serialize/deserialize your object to and from a JSON string. https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-serialize-and-deserialize-json-data. I've used this method to mock up an application, but if you are going to be doing a lot of writing to the file you may run into issues. A minimal sketch of the idea is below.
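For example, a rough sketch assuming Newtonsoft.Json (the file path and store class are placeholders, not from the question):

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public static class SearchedVideosStore
{
    private const string FilePath = "searched-videos.json"; // hypothetical location

    public static void Save(List<QueryViewModel> items)
    {
        // Serialize the whole list to a JSON string and persist it
        File.WriteAllText(FilePath, JsonConvert.SerializeObject(items));
    }

    public static List<QueryViewModel> Load()
    {
        if (!File.Exists(FilePath))
            return new List<QueryViewModel>();
        // Deserialize the file contents back into the list
        return JsonConvert.DeserializeObject<List<QueryViewModel>>(File.ReadAllText(FilePath));
    }
}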
If you can keep the list inside the application, then a singleton will work. Read up here: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.2
Each request is its own thing, unaffected by anything that's happened before or since. As such, you pretty much start from a blank slate. The typical means of persisting state across requests is the session. Sessions are essentially fake state: through a combination of a server-side component (some persistent store) and a client-side component (cookies), something that looks like persisted state can be achieved. However, on the server side you still need some sort of store, which is generally a database of some kind, be it relational (SQL Server, etc.) or NoSQL (Redis, etc.). The default session store is in-memory, which may suffice for your needs, but since memory is volatile, any sort of app restart will take anything stored there along with it.
Alternatively, there are statics and objects with singleton lifetimes. In either case, they're virtually the same as in-memory storage: they'll persist for the life of the application and no more.
Statics are just members with the static keyword on them. This is probably the simplest and most straightforward approach, but also the most fragile. It's virtually impossible to test statics, so you're basically creating black holes in your code where anything could happen.
A better approach is to simply use an object with a singleton lifetime. These can be created via the AddSingleton<T> method on the service collection. For example, you could create a class like:
public class MySingleton
{
    public ICollection<IReadOnlyList<Video>> SearchedVideos { get; set; } = new List<IReadOnlyList<Video>>();
}
And then register it as a singleton in ConfigureServices:
services.AddSingleton<MySingleton>();
Then, in your controllers, views, and such, you can inject MySingleton to access its SearchedVideos property. As a singleton, the data there will persist for the life of the application. A quick sketch:
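(Constructor injection into a controller; the controller name and route are placeholders.)

public class SearchController : Controller
{
    private readonly MySingleton _store;

    public SearchController(MySingleton store)
    {
        _store = store; // the same instance for every request, for the app's lifetime
    }

    [HttpGet("Search/history")]
    public IActionResult History()
    {
        return View(_store.SearchedVideos);
    }
}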
The chief difference between sessions, particularly in-memory sessions, and either statics or singletons is one of breadth. Sessions will always be tied to a particular client, whereas statics and singletons will be scoped to the application. That means that if you use statics or singletons, all clients will see the same data and will potentially manipulate the same data. If you need something that is client-specific, you must use sessions, instead.
@natsukiss I guess you are trying to call the Add() method on a null property. Even though you declare the list property, you still have to assign it an initial instance. If you don't create an instance, the property doesn't refer to any object in memory. That's the same reason we sometimes write string TestVal = "": setting an initial value gives the Common Language Runtime (CLR) an actual object to reference.
public List<QueryViewModel> SearchedVideos { get; set; } = new List<QueryViewModel>(); //<==
or, if you are working with Entity Framework, you should use
public ICollection<QueryViewModel> SearchedVideos { get; set; } = new HashSet<QueryViewModel>(); //<===
Is there a way to reduce/remove constant duplication of user access checks (or some other checks) in a business layer?
Let's consider the following example: a simple CRUD application with one entity, BlogPost:
public class BlogPost
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public int AuthorId { get; set; }
}
In PUT/DELETE requests, before modifying or deleting an entity, I need to check whether the user making the request is the author of the BlogPost and is therefore permitted to delete/edit it.
So in both the UpdateBlogPost and DeleteBlogPost methods of an imaginary BlogPostService I'll have to write something like this:
var blogPostInDb = _blogPostRepository.GetBlogPost(id);
if (blogPostInDb == null)
{
    // throw exception or do whatever is needed
}
if (blogPostInDb.AuthorId != _currentUser.Id)
{
    // throw exception etc...
}
This kind of code will be the same for both the Update and Delete methods, as well as for other methods that may be added in the future, and the same for all entities.
Is there any way to reduce or completely remove such duplication?
I thought this over and came up with following solutions, but they don't satisfy me fully.
First solution
Using filters. We can create custom filters like [EnsureEntityExists] and [EnsureUserCanManageEntity], but this way we're spreading some of our business logic into the API layer, and it's not flexible enough, since we need to create such a filter for every entity. Perhaps some kind of generic filter could be made using reflection.
There is another problem with this approach. Let's say we've made such a filter that checks our rules: we fetch the entity from the db, do the checks, throw exceptions and all that, and let the controller method execute. BUT in the service layer we need to fetch the entity again, so we're making two roundtrips to the db. Maybe I'm overthinking this, and two roundtrips are fine, given that caching can be applied. A sketch of such a filter is below.
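(For reference, a minimal sketch of this kind of filter in ASP.NET Core; IBlogPostRepository, GetBlogPostAsync, and the "sub" claim are assumptions, not from the question. It also stashes the fetched entity in HttpContext.Items, which is one way to avoid the second roundtrip.)

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class EnsureUserCanManageBlogPostAttribute : TypeFilterAttribute
{
    public EnsureUserCanManageBlogPostAttribute() : base(typeof(Filter)) { }

    private class Filter : IAsyncActionFilter
    {
        private readonly IBlogPostRepository _repository; // hypothetical repository abstraction

        public Filter(IBlogPostRepository repository) => _repository = repository;

        public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
        {
            var id = (int)context.ActionArguments["id"]; // assumes the action has an "id" parameter
            var post = await _repository.GetBlogPostAsync(id); // hypothetical method
            if (post == null)
            {
                context.Result = new NotFoundResult();
                return;
            }
            var userId = int.Parse(context.HttpContext.User.FindFirst("sub").Value); // assumes a "sub" claim
            if (post.AuthorId != userId)
            {
                context.Result = new ForbidResult();
                return;
            }
            // Stash the entity so downstream code could reuse it instead of re-fetching
            context.HttpContext.Items["BlogPost"] = post;
            await next();
        }
    }
}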
Second solution
Since I'm using CQRS (or at least some form of it), I have the MediatR library and can make use of pipeline behaviors, and even pass the fetched entity further down the pipeline by mutating TRequest (which I don't want to do). This solution requires a common interface for all requests so the behavior can retrieve the id of the entity. The roundtrip problem applies here too.
public interface IBlogPostAccess
{
    public int Id { get; set; }
}

public class ChangeBlogPostCommand : IRequest, IBlogPostAccess
{
    // ...
}

public class DeleteBlogPostCommand : IRequest, IBlogPostAccess
{
    // ...
}
public class BlogPostAccessBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IBlogPostAccess
{
    private readonly IBlogPostRepository _blogPostRepository;
    private readonly ICurrentUser _currentUser;

    // all necessary stuff injected via DI
    public BlogPostAccessBehavior(IBlogPostRepository blogPostRepository, ICurrentUser currentUser)
    {
        _blogPostRepository = blogPostRepository;
        _currentUser = currentUser;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        var blogPostInDb = _blogPostRepository.GetBlogPost(request.Id);
        if (blogPostInDb == null)
        {
            // throw exception or do whatever is needed
        }
        if (blogPostInDb.AuthorId != _currentUser.Id)
        {
            // throw exception etc...
        }
        return await next();
    }
}
Third solution
Create something like a request-context service. In a very simplified form it would be a dictionary persisted across the request where we can store data (in this case the BlogPost that we've fetched in the filter/pipeline). This seems lame and reminds me of ViewBag in ASP.NET MVC.
Fourth solution
It's more of an enhancement than a solution, but we can use guard clauses or extension methods to reduce the nesting of if statements, as in the sketch below.
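(A minimal sketch of the extension-method flavor; the exception types are placeholders.)

public static class BlogPostGuards
{
    public static BlogPost EnsureExists(this BlogPost post)
    {
        if (post == null)
            throw new NotFoundException("Blog post not found"); // hypothetical exception type
        return post;
    }

    public static BlogPost EnsureOwnedBy(this BlogPost post, int userId)
    {
        if (post.AuthorId != userId)
            throw new ForbiddenException("Not the author"); // hypothetical exception type
        return post;
    }
}

// Usage: the two checks collapse into one fluent chain:
// var post = _blogPostRepository.GetBlogPost(id).EnsureExists().EnsureOwnedBy(_currentUser.Id);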
Again, maybe I'm overthinking this problem, or it's not a problem at all, or it's a design issue. Any help or thoughts appreciated.
If you are concerned about making many database calls, you could try caching the returned objects per request with something like LazyCache: https://github.com/alastairtree/LazyCache
I would not recommend caching across requests...
For code organization, I would recommend extracting the authorization logic into a separate method and calling that method on each request. The benefit is that if the logic changes, you only need to update it in one place.
For example something like this:
bool CanEdit(int userId)
{
    var user = GetUserByUserId(userId);
    if (user.IsAdmin) return true;
    // depending on where this method lives, it might have access to the blog post here
    if (_blogPost.AuthorId == userId) return true;
    return false;
}
In my scenario, I have a Winforms client that connects to WebApi2. The data is stored in a SQL Server database.
To speed up performance, I am researching whether storing data in a local cache is a viable solution. Preferably, the local cache should be stored in files rather than kept in-memory, as RAM might be an issue. The data is all POCO classes, some much more complex than others, and most classes are related to each other.
I have made a shortlist of which frameworks might be viable:
MemoryCache
MemCached
CacheManager
StackExchange.Redis
Local Database
Using MemoryCache, I would need to implement my own solution, but it will fit my initial requirements.
However, one common problem that I am seeing is the updating of related classes. For example, I have a relationship between CustomerAddress and PostCode. If I change some properties in a postcode object, I can easily update its local cache. But how is it possible to update/invalidate any other classes that use this postcode, in this case CustomerAddress?
Does any of the frameworks above have methods that help in this kind of situation, or is it totally dependent on the developer to handle such cache invalidation?
The CachingFramework.Redis library provides a mechanism to relate tags to keys and hashes so you can then invalidate them in a single operation.
I'm assuming that you will:
Store the Customer Addresses in Redis with keys like "Address:{AddressId}".
Store the Post Codes in Redis with keys like "PostCode:{PostCodeId}".
And that your model is something like this:
public class CustomerAddress
{
    public int CustomerAddressId { get; set; }
    public int CustomerId { get; set; }
    public int PostCodeId { get; set; }
}

public class PostCode
{
    public int PostCodeId { get; set; }
    public string Code { get; set; }
}
My suggestion is to:
Mark the CustomerAddress objects in Redis with tags like "Tag-PostCode:{PostCodeId}".
Use a cache-aside pattern to retrieve the Customer Addresses and Post Codes from cache/database.
Invalidate the cache objects by tag when a Post Code is changed.
Something like this should probably work:
public class DataAccess
{
    private Context _cacheContext = new CachingFramework.Redis.Context("localhost:6379");

    private string FormatPostCodeKey(int postCodeId)
    {
        return string.Format("PostCode:{0}", postCodeId);
    }

    private string FormatPostCodeTag(int postCodeId)
    {
        return string.Format("Tag-PostCode:{0}", postCodeId);
    }

    private string FormatAddressKey(int customerAddressId)
    {
        return string.Format("Address:{0}", customerAddressId);
    }

    public void InsertPostCode(PostCode postCode)
    {
        Sql.InsertPostCode(postCode);
    }

    public void UpdatePostCode(PostCode postCode)
    {
        Sql.UpdatePostCode(postCode);
        // Invalidate cache: remove the PostCode and the CustomerAddresses related to it
        _cacheContext.Cache.InvalidateKeysByTag(FormatPostCodeTag(postCode.PostCodeId));
    }

    public void DeletePostCode(int postCodeId)
    {
        Sql.DeletePostCode(postCodeId);
        _cacheContext.Cache.InvalidateKeysByTag(FormatPostCodeTag(postCodeId));
    }

    public PostCode GetPostCode(int postCodeId)
    {
        // Get/Insert the postcode from/into Cache with key = PostCode:{PostCodeId}.
        // Mark the object with tag = Tag-PostCode:{PostCodeId}
        return _cacheContext.Cache.FetchObject(
            FormatPostCodeKey(postCodeId),            // Redis key to use
            () => Sql.GetPostCode(postCodeId),        // Delegate to get the value from the database
            new[] { FormatPostCodeTag(postCodeId) }); // Related tags
    }

    public void InsertCustomerAddress(CustomerAddress customerAddress)
    {
        Sql.InsertCustomerAddress(customerAddress);
    }

    public void UpdateCustomerAddress(CustomerAddress customerAddress)
    {
        var updated = Sql.UpdateCustomerAddress(customerAddress);
        if (updated.PostCodeId != customerAddress.PostCodeId)
        {
            var addressKey = FormatAddressKey(customerAddress.CustomerAddressId);
            _cacheContext.Cache.RenameTagForKey(addressKey, FormatPostCodeTag(customerAddress.PostCodeId), FormatPostCodeTag(updated.PostCodeId));
        }
    }

    public void DeleteCustomerAddress(CustomerAddress customerAddress)
    {
        Sql.DeleteCustomerAddress(customerAddress.CustomerAddressId);
        // Clean-up: remove the postcode tag from the CustomerAddress
        _cacheContext.Cache.RemoveTagsFromKey(FormatAddressKey(customerAddress.CustomerAddressId), new[] { FormatPostCodeTag(customerAddress.PostCodeId) });
    }

    public CustomerAddress GetCustomerAddress(int customerAddressId)
    {
        // Get/Insert the address from/into Cache with key = Address:{CustomerAddressId}.
        // Mark the object with tag = Tag-PostCode:{PostCodeId}
        return _cacheContext.Cache.FetchObject(
            FormatAddressKey(customerAddressId),
            () => Sql.GetCustomerAddress(customerAddressId),
            a => new[] { FormatPostCodeTag(a.PostCodeId) });
    }
}
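A short usage sketch of the flow above (the ids are placeholders):

var data = new DataAccess();

// First call hits SQL and caches the address, tagged with its postcode
var address = data.GetCustomerAddress(10);

// Cached under "PostCode:{PostCodeId}" and tagged "Tag-PostCode:{PostCodeId}"
var postCode = data.GetPostCode(address.PostCodeId);

// Updating the postcode invalidates, in a single operation, both the postcode key
// and every cached CustomerAddress carrying that tag
postCode.Code = "2000";
data.UpdatePostCode(postCode);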
To speed up performance, I am researching if storing data in local cache is a viable solution. Preferably, the local cache should be stored in files instead of kept in-memory as RAM might be an issue
The whole point is to avoid storing it in files: disk operations are slow, which is why Redis keeps its data in RAM.
Does any of the frameworks above have methods that help in this kind of situation, or is it totally dependent on the developer to handle such cache invalidation?
You can save the entire object as JSON instead of writing logic that disassembles the object into pieces, which would also be slow and error-prone when applying changes. A sketch is below.
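(A minimal sketch of the whole-object-as-JSON idea, assuming StackExchange.Redis and Newtonsoft.Json; the key name is a placeholder.)

using Newtonsoft.Json;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost:6379");
var db = redis.GetDatabase();

// Store the whole object as a single JSON string
var address = new CustomerAddress { CustomerAddressId = 10, CustomerId = 1, PostCodeId = 7 };
db.StringSet("Address:10", JsonConvert.SerializeObject(address));

// Read it back and deserialize in one step
var cached = JsonConvert.DeserializeObject<CustomerAddress>(db.StringGet("Address:10"));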
I'm currently developing a SPA in Angular, and so I've created a REST service using ServiceStack. I am also using ServiceStack's default authentication and authorization solution, which allows me to decorate services with the Authenticate attribute, and also allows me to authorize roles.
However, since my application has users, and users own resources, I need a way to restrict non-authorized users from performing certain actions. Furthermore, I would like to be able to create a single service for each discrete entity which can properly figure out what is safe to write to the database and what is safe to return to the user depending on their level of authorization.
So as an example, let's say I've created a service to handle operations on a Group entity. One of the actions I allow on a Group is to get the details for it:
Route: api/groups/{Id}
Response: Name, Description, CoverImageUrl, Members
However, depending on who the user is, I wish to restrict what data is returned:
Not authenticated: Name, CoverImageUrl
Authenticated: Name, CoverImageUrl, Description
Member of requested group: Full access
Admin of website: Full access
So one simple approach to doing this is to create 3 different response DTOs, one for each type of response. Then in the service itself I can check who the user is, check on their relation to the resource, and return the appropriate response. The problem with this approach is that I would be repeating myself a lot, and would be creating DTOs that are simply subsets of the "master" DTO.
For me, the ideal solution would be some way to decorate each property on the DTO with attributes like:
[CanRead("Admin", "Owner", "Member")]
[CanWrite("Admin", "Owner")]
Then somewhere during the request, it would limit what is written to the database based on who the user is and would only serialize the subset of the "master" DTO that the user is permitted to read.
Does anyone know how I can attain my ideal solution within ServiceStack, or perhaps something even better?
The direct approach is the easiest, but you could also take advantage of custom filter attributes.
[Route("/groups/{Id}"]
public class UpdateGroup
{
public int Id { get; set; }
public string Name { get; set; }
public string CoverImageUrl { get; set; }
public string Description { get; set; }
}
[RequiresAnyRole("Admin", "FullAccess")]
[Route("/admin/groups/{Id}"]
public class AdminUpdateGroup
{
public int Id { get; set; }
public string Name { get; set; }
public string CoverImageUrl { get; set; }
public string Description { get; set; }
//... other admin properties
}
Service implementation:
public object Any(UpdateGroup request)
{
    var session = base.SessionAs<AuthUserSession>();
    if (session.IsAuthenticated)
    {
        //.. update Name, CoverImageUrl, Description
    }
    else
    {
        //.. only update Name, CoverImageUrl
    }
}

public object Any(AdminUpdateGroup request)
{
    //... Full Access
}
What ended up being the most pragmatic solution for me was actually pretty simple. The basic idea is that whichever service requires row-level authorization should implement a GetUserRole method, which in my case returns the user's most permissive role.
protected string GetUserRole(Domain.Group entity)
{
    var session = SessionAs<AuthUserSession>();
    var username = session.UserName;
    if (session.Roles.Contains("Admin"))
    {
        return "Admin";
    }
    if (entity.Id == default(int) || entity.Leader.Username.Equals(username))
    {
        return "Leader";
    }
    // More logic here...
    return session.IsAuthenticated ? "User" : "Anonymous";
}
Then I can use the user's role to figure out what to let them write:
var entityToWriteTo = ... // code that gets your entity
var userRole = GetUserRole(entityToWriteTo);

if (new[] { "Admin" }.Contains(userRole))
{
    // write to admin-only entity properties
}
if (new[] { "Admin", "Leader" }.Contains(userRole))
{
    // write to admin or leader entity properties
}
// Etc.
And the same logic applies for reads: you populate a DTO with properties set conditionally based on the user's role. Later, when you return the DTO to the client, any properties you haven't set either won't be serialized or will be serialized with a null value. A sketch of the read side is below.
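(Sketch of the read side; GroupDto and its properties are hypothetical, and the role names follow GetUserRole above.)

var userRole = GetUserRole(entity);
var dto = new GroupDto
{
    // safe for everyone, including anonymous users
    Name = entity.Name,
    CoverImageUrl = entity.CoverImageUrl
};
if (userRole != "Anonymous")
{
    dto.Description = entity.Description;
}
if (new[] { "Admin", "Leader" }.Contains(userRole))
{
    dto.Members = entity.Members; // full access
}
return dto;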
Ultimately, this solution allows you to use a single service for a resource instead of creating multiple services, each with its own request DTO. There are, of course, refactorings you can do that make this solution more streamlined. For example, you can isolate all of your reads and writes to one part of your code, which keeps the services themselves free of role checks and the like.
I'm still not sure of the best way to store select-list options for front-end display or db storage.
I've been using Enums at the moment, and also using description decorators (How do you create a dropdownlist from an enum in ASP.NET MVC?)
I'm now thinking that I might as well just create a full class for this stuff, so I can store the following information properly with full control:
Item Name
Full description
int for storage in db
order
Any methods to get information from the list in any way needed.
Is it right I should be thinking about implementing all this myself by hand? I want a really solid way of doing this, and an enum doesn't really feel like it's going to cut it.
Is it right I should be thinking about implementing all this myself by hand?
Yes. Enums are often leaky and insufficient abstractions that aren't always suitable for the complex domain model you actually wish to represent.
Rather than roll your own, you may want to consider Headspring's Enumeration class (via github, nuget). We use it all the time instead of enums because it's nearly as simple and is much more flexible.
An example of a "State" enumeration and using it as a select list:
public class State : Enumeration<State>
{
    public static State Alabama = new State(1, "AL", "Alabama");
    public static State Alaska = new State(2, "AK", "Alaska");
    // .. many more
    public static State Wyoming = new State(50, "WY", "Wyoming");

    public State(int value, string displayName, string description) : base(value, displayName)
    {
        Description = description;
    }

    public string Description { get; private set; }
}
public IEnumerable<SelectListItem> Creating_a_select_list(State selected)
{
    return State.GetAll().Select(
        x => new SelectListItem
        {
            Selected = x == selected,
            Text = x.Description,
            Value = x.Value.ToString()
        });
}
I'm not trying to sell you on this particular implementation, you could certainly hand code your own (the Enumeration class is only about 100 lines of code). But I definitely think you'd benefit from moving beyond basic enums. It is the right approach given the scenario you described in your question.
The first place such information should live is the database, or any "virtual store" (such as a web service) that offers an interface to your db. If other db entities use these values, they MUST be represented in the database, otherwise you will run into big trouble. Suppose one of these values is a string: if you don't define a table containing all possible values plus a key, and simply write the string as-is into the other tables, it will become impossible to change the format of the string, since it will be "spread" all over your db. On the contrary, if you just use a foreign key to refer to such strings, you can change them easily, since each string is stored in just ONE place in your db.
The enumeration solution also suffers from the problem that you cannot add or delete values, so if such operations "conceptually" make sense, you cannot use an enumeration. You can use an enumeration when the options "conceptually span" all possibilities, so you are sure you will never add or delete options, as in the case of the enumeration (yes, no, unknown).
That said, once you have your options in the db, the remainder is easy: you will have DTO entities or business entities representing them in exactly the same way as all your other db entities.
For visualization purposes you may have a ViewModel version of these options that contains just the key and description, and a "repository method" that your controllers can call to get the list of all options.
Once retrieved, your controllers put them in the overall page ViewModel, together with all the other information to be shown on the page. From the ViewModel you can access them to populate a dropdown.
Summing up:
1) You need a db representation of your options.
2) Then you will have DTO, business-layer, and view versions of these entities, as needed, exactly as for all other db entities. A sketch of what this might look like is below.
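(A minimal sketch of the lookup entity and repository method described above; all names are hypothetical.)

// Maps to a db lookup table; the int Id is what other tables store as a foreign key
public class OptionItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public int SortOrder { get; set; }
}

// The repository method controllers call to fill a dropdown
public interface IOptionRepository
{
    IEnumerable<OptionItem> GetAllOrdered(); // e.g. ordered by SortOrder
}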
Are you looking for a one-size-fits-all solution for all your select list options? I personally advocate choosing the option that best fits the specific issue.
In a recent project I was introduced to a hybrid of a Smart Enum. Here's an example (I apologize for typos, I'm typing this cold):
public class Priority
{
    public enum Types
    {
        High,
        Medium,
        Low
    }

    public Types Type { get; private set; }
    public string Name { get { return this.Type.ToString(); } } // ToString() with no arguments is not deprecated
    public string Description { get; private set; }

    public static Priority High = new Priority { Type = Types.High, Description = "..." };
    public static Priority Medium = new Priority { Type = Types.Medium, Description = "..." };
    public static Priority Low = new Priority { Type = Types.Low, Description = "..." };

    public static IEnumerable<Priority> All = new[] { High, Medium, Low };

    public static Priority For(Types priorityType)
    {
        return All.Single(x => x.Type == priorityType);
    }
}
So, in implementation, you would store the enum value, but reference the object itself (Priority.For(entity.Priority)) for the additional metadata when rendering your views, as in the snippet below.
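(Entity and property names assumed.)

// Persist only the lightweight enum value
entity.Priority = Priority.Types.High;

// At render time, recover the rich object for its metadata
var meta = Priority.For(entity.Priority);
var label = $"{meta.Name}: {meta.Description}";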
Is that closer to what you're looking for?
Of course, one of the gotchas is that if you need to write a query against the database that relies on the lookup's metadata, this solution is going to create a few tears along the way.
You can use "repository pattern" for data access and use viewmodels between your controllers and views. Example:
//Model
public class CustomerViewModel
{
    public Customer Customer { get; set; }
    public IEnumerable<Village> Villages { get; set; }
}
//Controller
public ActionResult Index()
{
    var customerViewModel = new CustomerViewModel
    {
        Customer = new Customer(),
        Villages = _villageService.GetAll()
    };
    return View(customerViewModel);
}
//View
@model CustomerViewModel
@Html.DropDownListFor(q => q.Customer.VillageId, new SelectList(Model.Villages, "Id", "Title"), "Please Select")
I have written a blog post about the repository pattern; you may want to have a look.
I store my options in the View Models themselves:
public class ViewModel {
    [Required]
    public int SelectListValue { get; set; }

    public IDictionary<String, String> SelectListOptions {
        get {
            return new Dictionary<String, String> {
                { "0", Resources.Option1 },
                { "1", Resources.Option2 },
                { "2", Resources.Option3 }
            };
        }
    }
}
Then I can just drop the following line into my view to render the select list:
<%= Html.DropDownListFor(m => m.SelectListValue, new SelectList(this.Model.SelectListOptions, "Key", "Value", "")) %>
Hey all. I realize this is a rather long question, but I'd really appreciate any help from anyone experienced with RIA services. Thanks!
I'm working on a Silverlight 4 app that views data from the server. I'm relatively inexperienced with RIA Services, so have been working through the tasks of getting the data I need down to the client, but every new piece I add to the puzzle seems to be more and more problematic. I feel like I'm missing some basic concepts here, and it seems like I'm just 'hacking' pieces on, in time-consuming ways, each one breaking the previous ones as I try to add them. I'd love to get the feedback of developers experienced with RIA services, to figure out the intended way to do what I'm trying to do. Let me lay out what I'm trying to do:
First, the data. The source of this data is a variety of sources, primarily created by a shared library which reads data from our database, and exposes it as POCOs (Plain Old CLR Objects). I'm creating my own POCOs to represent the different types of data I need to pass between server and client.
DataA - This app is for viewing a certain type of data, let's call it DataA, in near-realtime. Every 3 minutes, the client should pull down from the server all the new DataA created since the last time it requested data.
DataB - Users can view the DataA objects in the app, and may select one of them from the list, which displays additional details about that DataA. I'm bringing these extra details down from the server as DataB.
DataC - One of the things that DataB contains is a history of a couple important values over time. I'm calling each data point of this history a DataC object, and each DataB object contains many DataCs.
The Data Model - On the server side, I have a single DomainService:
[EnableClientAccess]
public class MyDomainService : DomainService
{
    public IEnumerable<DataA> GetDataA(DateTime? startDate)
    {
        /* Pieces together the DataAs that have been created
           since startDate, and returns them */
    }

    public DataB GetDataB(int dataAID)
    {
        /* Looks up the extended info for that dataAID,
           constructs a new DataB with that DataA's data,
           plus the extended info (with multiple DataCs in a
           List<DataC> property on the DataB), and returns it */
    }

    // Not exactly sure why these are here, but I think it
    // wouldn't compile without them for some reason? The data
    // is entirely read-only, so I don't need to update.
    public void UpdateDataA(DataA dataA)
    {
        throw new NotSupportedException();
    }

    public void UpdateDataB(DataB dataB)
    {
        throw new NotSupportedException();
    }
}
The classes for DataA/B/C look like this:
[KnownType(typeof(DataB))]
public partial class DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalA { get; set; }

    [DataMember]
    public string MyStringA { get; set; }

    [DataMember]
    public DateTime MyDateTimeA { get; set; }
}

public partial class DataB : DataA
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [DataMember]
    public decimal MyDecimalB { get; set; }

    [DataMember]
    public string MyStringB { get; set; }

    [Include] //I don't know which of these, if any, I need?
    [Composition]
    [Association("DataAToC", "DataAID", "DataAID")]
    public List<DataC> DataCs { get; set; }
}

public partial class DataC
{
    [Key]
    [DataMember]
    public int DataAID { get; set; }

    [Key]
    [DataMember]
    public DateTime Timestamp { get; set; }

    [DataMember]
    public decimal MyHistoricDecimal { get; set; }
}
I guess a big question I have here is... Should I be using Entities instead of POCOs? Are my classes constructed correctly to be able to pass the data down correctly? Should I be using Invoke methods instead of Query (Get) methods on the DomainService?
On the client side, I'm having a number of issues. Surprisingly, one of my biggest ones has been threading. I didn't expect there to be so many threading issues with MyDomainContext. What I've learned is that you only seem to be able to create MyDomainContext objects on the UI thread, all of the queries are asynchronous only, and if you try to fake a synchronous call by blocking the calling thread until the LoadOperation finishes, you have to do so on a background thread, since the load uses the UI thread to make the query. So here's what I've got so far.
The app should display a stream of the DataA objects, spreading each 3min chunk of them over the following 3min (so they end up displayed 3min after they occurred, looking like a continuous stream, while only being downloaded in 3min bursts). To do this, the main form initializes, creates a private MyDomainContext, and starts a background worker that loops continuously in a while(true). On each loop, it checks whether it has any DataAs left to display. If so, it displays that data and Thread.Sleep()s until the next DataA is scheduled to be displayed. If it's out of data, it queries for more, using the following methods:
public DataA[] GetDataAs(DateTime? startDate)
{
    _loadOperationGetDataACompletion = new AutoResetEvent(false);
    LoadOperation<DataA> loadOperationGetDataA = null;
    loadOperationGetDataA =
        _context.Load(_context.GetDataAQuery(startDate),
            System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataA.Completed += new
        EventHandler(loadOperationGetDataA_Completed);
    _loadOperationGetDataACompletion.WaitOne();

    List<DataA> dataAs = new List<DataA>();
    foreach (var dataA in loadOperationGetDataA.Entities)
        dataAs.Add(dataA);
    return dataAs.ToArray();
}

private static AutoResetEvent _loadOperationGetDataACompletion;

private static void loadOperationGetDataA_Completed(object sender, EventArgs e)
{
    _loadOperationGetDataACompletion.Set();
}
Trying to force it to be synchronous seems kind of clunky, but since this is already on a background thread, I think it's OK? So far, everything actually works, as much of a hack as it may seem. It's important to note that if I try to run that code on the UI thread, it deadlocks: it waits on the WaitOne() forever, blocking the thread, so the Load request to the server can never be made.
So once the data is displayed, users can click on an item as it goes by to fill a details pane with the full DataB data about that object. To do that, I have the details pane user control subscribe to a selection event I've set up, which gets fired when the selection changes (on the UI thread). I use a similar technique there to get the DataB object:
void SelectionService_SelectedDataAChanged(object sender, EventArgs e)
{
    DataA dataA = /*Get the selected DataA*/;
    MyDomainContext context = new MyDomainContext();
    var loadOperationGetDataB =
        context.Load(context.GetDataBQuery(dataA.DataAID),
            System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
    loadOperationGetDataB.Completed += new
        EventHandler(loadOperationGetDataB_Completed);
}

private void loadOperationGetDataB_Completed(object sender, EventArgs e)
{
    this.DataContext =
        ((LoadOperation<DataB>)sender).Entities.SingleOrDefault();
}
Again, it seems kinda hacky, but it works... except that on the DataB it loads, the DataCs list is empty. I've tried all kinds of things, and I don't see what I'm doing wrong that prevents the DataCs from coming down with the DataB. I'm about ready to make a third query just for the DataCs, but that's screaming even more hackiness to me.
It really feels like I'm fighting against the grain here, like I'm doing this in an entirely unintended way. If anyone could offer any assistance, and point out what I'm doing wrong here, I'd very much appreciate it!
Thanks!
I have to say it does seem a bit overly complex.
If you use Entity Framework (which, as of version 4, can generate POCOs if you need them) together with RIA Services and LINQ, you can do all of that implicitly using lazy loading, 'expand' statements, and table-per-type inheritance. A sketch of the server-side piece is below.
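(As an illustration only, not necessarily the asker's exact setup: with an EF-backed domain service, the query method can return IQueryable and eagerly load the children, and the [Include] attribute on the DataCs association is what lets them travel to the Silverlight client. The context and entity-set names here are assumptions.)

public class MyDomainService : LinqToEntitiesDomainService<MyEntities>
{
    public IQueryable<DataB> GetDataB(int dataAID)
    {
        // Include("DataCs") eager-loads the children on the server;
        // [Include] on the DataB.DataCs association sends them to the client
        return this.ObjectContext.DataBs
            .Include("DataCs")
            .Where(b => b.DataAID == dataAID);
    }
}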