ASP.NET REST API Collections - C#

I'm trying to build a REST API based on an existing database model. I already have one built, but I want to make it simpler and clearer before I start coding the client app. I've decided to use ASP.NET Core as the back-end technology with a WPF front end (there will also be an Angular/Ionic front end). The database model is very simple; it contains around 30 tables (different documents with related resources and collections).
So far the API uses flat URLs, which means I sometimes have to POST/PUT a child object together with its parent. Should I go with nested URLs (API/Document/{id}/Item) to make sending objects simpler, or should I send only the id, which keeps the object flat?
The second problem comes up when I need data from a child object to populate a data grid's source. Should I add a new method/controller that returns a ViewModel with all the properties the data grid needs, or should I fetch the parent object collection first, then fetch the child objects and construct the view in the client app?

Ultimately, this choice depends on many parameters and on your team's preferences. You didn't give enough detail for definitive advice, and even if you had, there might not be a definitive answer anyway.
When in doubt, for both of your problems, I would recommend going with flat, simple, complete data transfer objects. (EDIT: of course they won't be strictly flat if you have linked collections, but they would still be simple and complete.)
This has the advantage of reducing the number of connections/calls to the API, each of which carries overhead for the network infrastructure and for the client.
Second, I think this simplifies development (though I admit this is debatable).
As for the second problem, it also helps separate the concerns between your API and your client app. Building a ViewModel is often necessary (you may not want to expose some information, for security or performance reasons), but don't make it too complicated just for one client app; you want your API to be easily usable by a new client or a new version later.
To show why it's usually worse to make many individual calls:
Imagine you want to retrieve 40 documents.
If each document has an Item1 and an Item2, that would be 80 additional calls if you have to retrieve Documents/1/Item1, Documents/1/Item2, and so on!
Also, for your front-end development, you have to manage the callbacks (first request the document, then once it arrives request Item1 and Item2), which is more complicated than getting the whole lot in one go (since ultimately you need to wait for everything to be there anyway).
Worse, an object and its children may change between the calls. You could end up with version A of your parent object but version B of its child items!
Of course, there are situations in which separate calls for child items are worthwhile.
If you often need only the item part of a document, without reloading the whole thing, that is a good argument for them.
Likewise if the overall document is large and you want to display parts of it before loading finishes.
A last drawback I can see, when you have linked collections of related objects, is that the same linked objects can be repeated many times in the response. In that case it can make sense to do something more elaborate: make a few separate calls for the main objects and their relations, and load each related object only once even if it is referenced multiple times.
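To make the recommendation concrete, here is a minimal sketch of the flat, complete DTO approach in ASP.NET Core; DocumentDto, ItemDto, and IDocumentRepository are hypothetical names, not part of the question's actual model.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ItemDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DocumentDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    // Children travel with the parent, so one call returns everything the grid needs.
    public List<ItemDto> Items { get; set; } = new List<ItemDto>();
}

// Hypothetical data-access abstraction; the real one would query the database.
public interface IDocumentRepository
{
    Task<DocumentDto> GetDocumentWithItemsAsync(int id);
}

[Route("api/[controller]")]
public class DocumentsController : Controller
{
    private readonly IDocumentRepository _repository;
    public DocumentsController(IDocumentRepository repository) => _repository = repository;

    // GET api/documents/5 returns the document and its items in a single response.
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var dto = await _repository.GetDocumentWithItemsAsync(id);
        return dto == null ? (IActionResult)NotFound() : Ok(dto);
    }
}
```

One GET per document (or one for a whole page of documents) replaces the 1 + 2N calls described above.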

Do I really have to create multiple models?

MS stack developer historically.
I have committed to retooling to the following stack
angular -> ms web.api2 -> C# business objects -> sql server
Being old, I develop the database from requirements and use CodeSmith to generate the business logic layer (yes, I have heard of Entity Framework; I even tried it once).
As I embrace Angular and Web API 2,
I find that Angular wants me to write a model on the front end. This seems to be just a data structure; I can't even add helper methods to it.
So I often also write a class with helper methods that takes an instance of the model. Kind of ugly, but it does marry structure and logic.
I find that Web API 2 wants me to write a model too. This again seems to be just a data structure. I am exploring the dynamic data type, but really this doesn't buy me much: instead of writing a class, I'm writing a mapping function.
The question is this:
Is there any way around having 3+ copies of each class spread across the stack?
CodeSmith is a very capable code generator... it can gen multiple files... but...
If it's just a couple of data members, and 3 places, I can copy/paste/edit and get it done.
It just seems to me that committing to keeping a data structure in sync across 3 different environments is setting oneself up for a lot of work.
I have spent the last 15 years trying to shove as much code as I can into a framework of inheritable classes so I can keep things DRY.
Am I missing something? Are there any patterns that can be suggested?
[I know this isn't a question tailored for SO, but it is where all the smart people shop. Downvote me if you feel honor bound to do so.]
I'm not entirely familiar with how CodeSmith generates its classes, but if they are just plain-old-CLR-objects that serialize nicely, you can have Web API return them directly to your Angular application. There are purists who will frown upon this, but depending on the application, there may be a justification.
Then, in the world of Angular, you have a few options, depending on your requirements/justification and your application - again, purists will definitely frown upon some of them:
Create classes that match what's coming down from the server (the more correct method).
Treat everything as "any", lose type safety, and just access properties as you need them, i.e., don't create the model (obviously the less correct method).
Find a code-generation tool that will explore the API endpoints, determine what they return, and generate your TypeScript classes for you.
Personally, using Entity Framework, I (manually) create my POCOs for database interaction, a "view"/DTO class that Web API sends back to the client, and a definition of the object in TypeScript - but I am a control freak, and don't like generated code.
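For illustration, a minimal sketch of that POCO-plus-DTO split (all names hypothetical); the TypeScript definition on the Angular side would simply mirror the DTO's shape:

```csharp
// Entity Framework POCO: the database shape, including fields that
// should never leave the server.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; }  // stays server-side
}

// "View"/DTO class: the shape Web API returns to Angular.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }

    // The mapping function mentioned above, kept next to the DTO.
    public static CustomerDto FromEntity(Customer c) =>
        new CustomerDto { Id = c.Id, Name = c.Name };
}
```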

When using protobuf-net, how do I know what fields will be updated (or have been updated) when using merge on an existing object

Using protobuf-net, I want to know which properties of an object have been updated at the end of a merge operation, so that I can notify interested code to update other components that may relate to those updated properties.
I noticed that there are a few different types of properties/methods I can add to help me serialize selectively (Specified and ShouldSerialize). I noticed in MemberSpecifiedDecorator that the 'read' method sets the Specified property to true when it reads a field. However, even if I add Specified properties for each field, I'd have to check each one (and update the code whenever new properties are added).
My current plan is to create a custom SerializationContext.Context object, detect it during the deserialization process, and update a list of members. However, there are quite a few places in the code I'd need to touch to do that, and I'd rather use an existing mechanism if possible.
It would be much more desirable to get a list of updated member information. I realize that walking down an object graph may touch many members, but in my use case I'm not merging complex objects, just simple POCOs with value-type properties.
Getting a delta log isn't an inbuilt feature, partly because of the complexity when it comes to complex models, as you note. The Specified trick would work, although this isn't the purpose it was designed for - but to avoid adding complexity to your own code, that would be something best handled via reflection, perhaps using the Expression API for performance. Another approach might be to use a ProtoReader to know in advance which fields will be touched, but that demands an understanding of the field-number/member map (which can be queried via RuntimeTypeModel).
Are you using hand-crafted models? Or are you using protogen? Yet another option would be to have code in the setters that logs changes somewhere. I don't think protogen currently emits partial method hooks, but it possibly could.
But let me turn this around: it isn't a feature that is built in right now, and it is somewhat limited by complexity anyway, but: what would a "good" API for this look like to you?
As a side note: this isn't really a common feature in serializers - you'd face very similar challenges in any mainstream serializer that I can think of.
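To make the Specified trick concrete, here is a hedged sketch (the Poco type is hypothetical): per the question, each *Specified flag is set when the corresponding field is read during a merge, and a small reflection helper gathers the touched member names afterwards.

```csharp
using System.Linq;
using ProtoBuf;

[ProtoContract]
public class Poco
{
    [ProtoMember(1)] public string Name { get; set; }
    public bool NameSpecified { get; set; }  // set to true when field 1 is read

    [ProtoMember(2)] public int Age { get; set; }
    public bool AgeSpecified { get; set; }   // set to true when field 2 is read
}

public static class DeltaHelper
{
    // Returns the names of members whose *Specified flag is true,
    // so newly added properties are picked up without touching this code.
    public static string[] GetUpdatedMembers(object obj) =>
        obj.GetType().GetProperties()
           .Where(p => p.PropertyType == typeof(bool)
                       && p.Name.EndsWith("Specified")
                       && (bool)p.GetValue(obj, null))
           .Select(p => p.Name.Substring(0, p.Name.Length - "Specified".Length))
           .ToArray();
}
```

After Serializer.Merge(stream, poco), DeltaHelper.GetUpdatedMembers(poco) yields the touched members; remember to reset the flags before the next merge.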

.NET/SQL Server: creating an abstract object layer architecture

I have a broad-scoped architecture question. I have multiple .NET projects and multiple databases, all interacting with each other across multiple web applications, all for internal use. We would like to create an over-arching layer, somewhere, somehow, to create objects out of the data we query from a database. These objects would have functions/methods available to them, defined somewhere. The question is: what is the most efficient and flexible way to define these objects, and what are the advantages/disadvantages of each option?
For example, let's say I have multiple HR-related tables: tbl_employees and tbl_departments. When I pull an employee into an application, I then have a whole bunch of code in that project associated with that employee, most prominently the functions I can perform on that employee - functions such as edit_contact_info, increase_salary, or add_disciplinary_note. Similarly for a department, with functions such as edit_department or manage_department_employees. Some functions may include logic; some may just redirect to a particular page.
My question is: how, where, or with what can I classify an employee entry, or even a series of employee entries, as an "object", so that wherever I pull that "object", I also have its associated actions? Whether I am pulling the data as a list or as part of a data visualization, I would like to have the appropriate functions/methods available, even in a different project.
I am looking for different possibilities, not necessarily one answer, and I am not entirely sure of the best way to go about it. I have thought about creating another layer within the database that holds all the "object" definition data, or perhaps something within the .NET Framework itself, but I lack the expertise to know exactly what I am talking about. From my limited knowledge, I believe I am looking for some sort of ORM (maybe in-memory) implementation, but I am not sure how to get started.
Does anyone have any ideas or a direction to point me in perhaps?
I appreciate any and all help!
Edit
To be clear, what I am trying to find is something I can apply on top of the projects and applications I already have up and running and that are being used. I would like a way to implement this over-arching object functionality on top of pre-existing MVC projects.
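For what it's worth, one common direction is a shared class library that every project references: a rich domain class plus a repository that materializes database rows into it. A minimal sketch, with all names hypothetical:

```csharp
using System.Collections.Generic;

// Domain object: the data from tbl_employees plus the actions that belong to it.
public class Employee
{
    public Employee(int id) { Id = id; }

    public int Id { get; private set; }
    public string ContactInfo { get; private set; }
    public decimal Salary { get; private set; }
    private readonly List<string> _disciplinaryNotes = new List<string>();

    public void EditContactInfo(string newContactInfo) => ContactInfo = newContactInfo;
    public void IncreaseSalary(decimal amount) => Salary += amount;
    public void AddDisciplinaryNote(string note) => _disciplinaryNotes.Add(note);
}

// Repository: turns query results into Employee objects,
// typically implemented with an ORM such as Entity Framework.
public interface IEmployeeRepository
{
    Employee GetById(int id);
    IList<Employee> GetByDepartment(int departmentId);
    void Save(Employee employee);
}
```

Any MVC project that references this library gets the same Employee behavior, whether the object feeds a list, a grid, or a visualization.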

Moving SqlDataSource to class file ASP.NET

I have been tasked with moving all SqlDataSource objects out of an ASP.NET page's .aspx files and putting them into a separate class file, but I am lost. Is there a way to create a SqlDataSource object in a separate class and assign query strings to the SelectCommand, DeleteParameters, InsertParameters, etc.?
So that you can use the object from a separate page rather than having the code in the .aspx?
Yes, you can do that. I would recommend that you move all the database I/O to a web service.
I also want to add that this is a very good step you are taking for the security of your application. Separating your data access from your user I/O like this is something I consider a must-have security measure. Done right, it will bump up the security of your application significantly.
You can create a WCF web service easily enough; there are plenty of tutorials on the web. The web service would expose CRUD (create, read, update, delete) operations. You can then create an "Object" type data source on the web page that points to your web service, and the elements on the page can get their data from those object data sources. You can also instantiate the web service client in your code-behind and use it to manipulate the data. When you create the object data sources, you specify the service methods that correspond to each of the commands (select, insert, update...). Hope this points you in the right direction; feel free to ask more in the comments.
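As an illustration of the shape such a data-access class might take (EmployeeService and Employee are hypothetical), an ObjectDataSource's SelectMethod/InsertMethod/UpdateMethod/DeleteMethod properties would then name these methods:

```csharp
using System.Collections.Generic;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// CRUD operations the page binds to; behind a WCF service these would be
// service operations, but the calling pattern is the same.
public class EmployeeService
{
    public List<Employee> GetEmployees()
    {
        // Query the database (or call the web service) here.
        return new List<Employee>();
    }

    public void InsertEmployee(Employee e) { /* INSERT */ }
    public void UpdateEmployee(Employee e) { /* UPDATE */ }
    public void DeleteEmployee(Employee e) { /* DELETE */ }
}
```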
As already mentioned, it is possible to move the SqlDataSource controls out of the WebForm. I'll assume for the moment that your current code declares these controls in the .aspx file. You could switch to an imperative approach, for example, and instantiate the DataSource controls in an event handler of a WebForm's code-behind class, but you don't gain much by doing so. Indeed, you could even move much of that instantiation code into a helper class called by the code-behind, but this doesn't get you much further.
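For completeness, a sketch of that imperative approach (table and helper names are hypothetical), which answers the literal question even though, as noted, it doesn't gain you much:

```csharp
using System;
using System.Web.UI.WebControls;

public static class EmployeeDataSourceFactory
{
    // Builds the SqlDataSource in code so the .aspx no longer declares it.
    public static SqlDataSource Create(string connectionString)
    {
        var ds = new SqlDataSource
        {
            ID = "EmployeeDataSource",
            ConnectionString = connectionString,
            SelectCommand = "SELECT Id, Name FROM Employees",
            DeleteCommand = "DELETE FROM Employees WHERE Id = @Id"
        };
        ds.DeleteParameters.Add("Id", TypeCode.Int32, string.Empty);
        return ds;
    }
}
```

The code-behind would add the returned control to the page's control tree and point a GridView's DataSourceID at it.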
The original intent of the DataSource controls was to give developers a way to create rapid prototypes and proofs of concept. They aren't really meant for production systems: they violate separation of concerns and make unit testing difficult, if not impossible.
In some ways, the DataSource controls can be easier to work with (say, in conjunction with a GridView control). But this convenience comes at a price, which probably helps explain (in part) why you're being asked to do something about them. It's unfortunate that when ASP.NET 2.0 (WebForms) was released in 2005, the literature published at the time heavily promoted these DataSource controls. The community has since learned that their production value is questionable, unless you are working on simple systems that don't need to evolve much over time.
As was mentioned by Anon316, you could use a web service to handle the CRUD operations. However, this might not be what you really need: a web service incurs additional overhead (i.e., additional HTTP requests). Having your application call the database directly can still be a very good approach.
With that said, consider creating a separate class (or classes) providing data access facilities (e.g., a Repository). Entity Framework, for example, makes creating this kind of thing fairly straightforward (and there are many other data access libraries in the .NET ecosystem). Be prepared to add code to the code-behind classes of your WebForms so they interact with your Repository (or other data access) class(es). The benefit you gain is more testability and reuse of your data access code. Consider putting your data access class(es) into a separate project in your solution (to start).
Whether you create separate data access class(es) in your solution or a web service, you still have significant refactoring to do to move away from the DataSource controls. So, again, be mindful of the additional overhead of a web service, and recognize that a web service tends to make sense when you have multiple clients (e.g., web and mobile), not when you have only one.
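A minimal sketch of that Repository direction with Entity Framework 6 (Employee and EmployeeContext are hypothetical):

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class EmployeeContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }
}

// The data access lives here, not in the .aspx markup or the code-behind.
public class EmployeeRepository
{
    public IList<Employee> GetAll()
    {
        using (var db = new EmployeeContext())
            return db.Employees.AsNoTracking().ToList();
    }

    public void Update(Employee employee)
    {
        using (var db = new EmployeeContext())
        {
            db.Entry(employee).State = EntityState.Modified;
            db.SaveChanges();
        }
    }
}
```

A code-behind can then bind with grid.DataSource = new EmployeeRepository().GetAll(); grid.DataBind(); and the repository can be unit tested without a page.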

C# reference collection for storing reference types

I'd like to implement a collection (something like List<T>) that would hold all the objects I create over the entire lifespan of my application, as if it were an array of pointers in C++. The idea is that when my process starts, I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically, I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects would be in one place: my collection. A cool thing I could do with this is avoid database calls for data I already have (even if I updated it after retrieval, it is still up to date, unless some other process updated it, but that's a different concern). I don't want to call new Customer("James Thomas") again if I initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the AppDomain, some out of sync and others in sync, and even though I handle this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my AppDomain (per process would be even better).
I can't use regular collections like List or ArrayList because I cannot pass parameters by their real local reference to the existing Add() methods using ref, so that's no good, I think. So how can this be implemented - can it be implemented at all? A 'linked list' type of class with all methods taking ref and out parameters is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?
So the bottom line is: I don't want to re-create an object if I've already created it during the application's lifetime, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the db). Are there alternatives?
The easiest way to do what you're trying to accomplish is to create a wrapper that holds the list. The wrapper has an Add method that looks up the value in the list and creates it only when it can't find it. In other words, a cache.
But... this statement would make me worry:
"I don't want to re-create an object if I've already created it during the application's lifetime"
As Raymond Chen points out, a cache with a bad policy is another name for a memory leak. What you've described is a cache with no policy.
To fix this, for a non-web app, consider System.Runtime.Caching (on .NET 4.0) or the Enterprise Library Caching Block (on 3.5 and earlier). For a web app, you can use System.Web.Caching. Or, if you must roll your own, at least put a sensible policy in place.
All of this, of course, assumes that your database's own caching is insufficient.
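A minimal sketch of the System.Runtime.Caching route with an explicit policy (Customer and LoadCustomer are hypothetical stand-ins):

```csharp
using System;
using System.Runtime.Caching;

public class Customer { /* fields loaded from the database */ }

public static class CustomerCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Customer Get(int id)
    {
        string key = "customer:" + id;
        var cached = Cache.Get(key) as Customer;
        if (cached != null)
            return cached;                 // one instance per customer, reused

        var customer = LoadCustomer(id);   // hypothetical database call
        Cache.Add(key, customer, new CacheItemPolicy
        {
            // The policy is the point: idle entries are evicted automatically
            // instead of accumulating for the life of the process.
            SlidingExpiration = TimeSpan.FromMinutes(10)
        });
        return customer;
    }

    private static Customer LoadCustomer(int id)
    {
        return new Customer();  // stand-in for the real fetch
    }
}
```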
Using IoC will save you many, many bugs, and it will make your application easier to test and your modules less coupled.
IoC performance is pretty good.
I recommend the Castle Project's implementation, Windsor:
http://stw.castleproject.org/Windsor.MainPage.ashx
It may take you a day to learn, but it's great.
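A minimal Windsor registration sketch, assuming Windsor 3.x APIs (ICustomerRepository and CustomerRepository are hypothetical):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ICustomerRepository { }
public class CustomerRepository : ICustomerRepository { }

public static class Bootstrapper
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();
        container.Register(
            Component.For<ICustomerRepository>()
                     .ImplementedBy<CustomerRepository>()
                     .LifestyleSingleton());  // one shared instance per container
        return container;
    }
}
```

Resolving ICustomerRepository anywhere then returns the same instance, which dovetails with the "one copy per process" goal above.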
