We're building a survey system using ASP.NET MVC and wondered if anyone can offer suggestions on the architecture.
Here's the problem we're trying to solve. Essentially, an agency sends out several surveys every year. They're very structured and not SurveyMonkey-style surveys - they're really applications for feedback. Much like a visa application, there are lots of things respondents need to do, and sometimes it takes them 2-3 weeks to fill one out.
They can upload files (proofs of purchase etc. - PDF/JPG) and also multiple "items". For example, say they've worked for McDonald's: there could be 20 different franchises, and they build a list of the locations they've worked at. Three weeks later there could be another 3 new locations, and 2 may have closed down. So we need to ensure the forms can handle those situations.
The forms themselves (markup and data) change every year - I should mention that this is for a taxation/finance/budget system.
We were thinking of using MVC, with XML to store the data (temporarily), XSD to validate it, and XSL to transform it into presentable markup (for them to fill out); once they "Submit" an application it gets stored in the DB in the relevant areas.
When the user starts the application process, they can save their progress so far (we validate whatever they entered and ignore whatever they haven't), and we save it as an XML blob and store it in the DB. When they're finally ready to submit, we do a full validation, upload the files and store them securely (they contain business proofs and accounting statements), and then run some workflows.
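To make the draft-save and submit-time validation concrete, here is a minimal sketch. It assumes a hypothetical `SurveyApplication` model and a per-year XSD file path; the names are illustrative, not from the original design:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// Hypothetical model type for one application.
public class SurveyApplication
{
    public string BusinessName { get; set; }
    public int Year { get; set; }
}

public class DraftApplicationService
{
    // Serialize the in-progress application to an XML blob for storage in the DB.
    public string SaveDraft(SurveyApplication application)
    {
        var serializer = new XmlSerializer(typeof(SurveyApplication));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, application);
            return writer.ToString(); // persist this string in a "pending applications" table
        }
    }

    // Full validation against the year's XSD, run only when the user submits.
    public bool ValidateAgainstSchema(string xml, string xsdPath, Action<string> onError)
    {
        var valid = true;
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, xsdPath);
        settings.ValidationEventHandler += (sender, e) =>
        {
            valid = false;
            onError(e.Message);
        };

        using (var reader = XmlReader.Create(new StringReader(xml), settings))
        {
            while (reader.Read()) { } // walk the document; schema violations fire the handler
        }
        return valid;
    }
}
```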
What I'm really concerned about is how to manage changing forms versions (a year later). How are form/application systems written these days? We have 2 months to pull this off and about 30 forms to deliver. So 30xXML, 30xXSD, 30xXSL.
This might be a case for integration with Windows Workflow Foundation, since you're talking about maintaining the state of a long-running workflow (completing the application).
If you could compartmentalize the various components of the application process, you could modify the workflow in future years by removing, rerouting, and/or modifying existing portions of the workflow.
That said, it sounds like you might have pretty tight time constraints. It might be worth a couple of hours' investigation into WF, but consider carefully whether introducing something new might jeopardize your deadline.
As for the XML, XSD, XSL route, I think it depends on your team's experience. Personally, I shy away from that and would store the data in one or more "pending applications" tables in a relational database. From there (of course, you could do this from XML, too), build up proper business objects and models to which my MVC views can bind. Field-level validation is performed with Enterprise Validation or Fluent Validation or the like, and final validation is performed by one or more validator classes that inspect all the constituent parts of the application.
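As a rough illustration of the field-level versus final-validation split, here is a sketch using FluentValidation (one of the options mentioned above) with hypothetical `PendingApplication` and `FranchiseLocation` model types:

```csharp
using System.Collections.Generic;
using FluentValidation;

// Hypothetical model types for one form in the application process.
public class FranchiseLocation
{
    public string Name { get; set; }
    public string Address { get; set; }
}

public class PendingApplication
{
    public string ApplicantName { get; set; }
    public List<FranchiseLocation> Locations { get; set; } = new List<FranchiseLocation>();
    public bool HasProofOfPurchase { get; set; }
}

// Field-level validation, run whenever the user saves progress on a partial form.
public class FranchiseLocationValidator : AbstractValidator<FranchiseLocation>
{
    public FranchiseLocationValidator()
    {
        RuleFor(l => l.Name).NotEmpty();
        RuleFor(l => l.Address).NotEmpty();
    }
}

// Final validation, run only at submit time, when every constituent part must be complete.
public class PendingApplicationValidator : AbstractValidator<PendingApplication>
{
    public PendingApplicationValidator()
    {
        RuleFor(a => a.ApplicantName).NotEmpty();
        RuleFor(a => a.Locations).NotEmpty();
        RuleForEach(a => a.Locations).SetValidator(new FranchiseLocationValidator());
        RuleFor(a => a.HasProofOfPurchase).Equal(true);
    }
}
```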
To deal with possible changes, keep a clean separation between each of the 30 forms. You should be able to modify a given form next year without messing up others. Remember, you can always subclass or compose a model type if there are new requirements in future years, and you don't have to remove obsolete parts -- your new views just won't expose certain parts of the model.
I'm new to DDD and I would like your advice.
In my UI I need to view data from 2 aggregates. I'm using EF Core, and as I have read it's better to keep only one navigation between entities so as not to mix two aggregates and to avoid serialization issues due to circular references.
How should I make the query?
Do I need to create a new view whenever I need data from 2 aggregates?
If I need to create views, in which layer should they live? The infrastructure/persistence layer or the domain?
Thank you
How should I make the query?
With the simplest and fastest technology you can use. I mean: if building the query with EF Core requires several steps and a lot of extra objects, change approach and try a direct SQL request. It's just a query - something you can test quickly and change equally quickly, whenever you need to.
Do I need to create a new view whenever I need data from 2 aggregates?
You don't. With a view you hide the complexity of the data read away in the view (at the cost of having to change the DB every time the data to show changes), with the illusion/feeling that you are managing an entity. Of course it should be clear that the data comes from a view. A query, on the other hand, is more code-related (to change the data shown you just change the query), but it also shows directly that the data comes from several sources.
Note: I used EF Core years ago, and only for a really simple project. If by view you instead mean an EF Core view, then I would say yes - but only if building it doesn't require several steps/joins to gather the information. I would always consider a direct approach when the code starts to look a bit too complex just to show some data.
Here, anyway, things can go really deep: do you have all your (root) entities in the same project, or do you have several microservices? With microservices, how do you share the data and how do you store it? Maybe a query is not viable, or it reads partially stale data. As you can see, there are several things to take into account when you have to read the data.
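As a sketch of the "plain query instead of a view" idea: assuming a hypothetical `AppDbContext` with `Order` and `Customer` aggregates that share no navigation property, the read model can be shaped in the query itself:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical aggregates; no navigation property links them.
public class Order    { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }
public class Customer { public int Id { get; set; } public string Name { get; set; } }

public class AppDbContext : DbContext // provider configuration omitted for brevity
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }
}

// Read model shaped for one UI screen.
public class OrderSummaryDto
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderSummaryQuery
{
    private readonly AppDbContext _db;
    public OrderSummaryQuery(AppDbContext db) { _db = db; }

    public Task<List<OrderSummaryDto>> RunAsync()
    {
        // Join the two aggregates in the query itself instead of adding a navigation
        // between them or creating a database view.
        return (from o in _db.Orders.AsNoTracking()
                join c in _db.Customers.AsNoTracking() on o.CustomerId equals c.Id
                select new OrderSummaryDto { OrderId = o.Id, CustomerName = c.Name, Total = o.Total })
               .ToListAsync();
    }
}
```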
If I need to create views, in which layer should they live? The infrastructure/persistence layer or the domain?
As stated before, if you mean a view within EF Core, I would put it really close to the layer where you're going to use it. But it could depend. You could have a look here.
Personally I use 3 layers: domain, application and infrastructure. My views are in the application layer, because I have several queries that I reuse for different purposes. But before going into the infrastructure (where the requests are) I transform the results into the format required by the UI.
DDD is about putting together all the business logic that is otherwise spread around several entities, services and even controllers. With this solution, all the actions that the domain offers can be performed without requiring extra logic outside the domain itself. Of course you need to implement the services that the domain is going to use; that much is obvious.
On the other hand it is clear, at least to me, that reuse is limited to the domain itself. I mean:
I can build a big query that collects a lot of information from different sources and reuse it for several UI views, but I have to be ready to pay the price of something I must touch every time something in the UI changes (and I still need to transform the result into a view-related object);
I can build small, specialized queries that I use for 1 or 2 (if they are the same) UI views, paying the price of more code to maintain (but simple, specialized code that is really fast to test!); here the query can produce something close or equal to the view-related object.
The second approach is the basis of CQRS, and I prefer it. Remember, you can do CQRS even without an event store and eventual consistency: you take just part of it, not the whole. We design to simplify our lives, not to make them harder.
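A minimal sketch of that second, CQRS-flavoured approach (small specialized queries, no event store); the query, handler, and row names are hypothetical:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// A lightweight read side: each screen gets its own small, specialized query.
// No event store or eventual consistency involved; this is just a naming convention.
public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    Task<TResult> HandleAsync(TQuery query);
}

// Hypothetical query for one UI view.
public class ActiveCustomersQuery : IQuery<IReadOnlyList<CustomerRow>>
{
    public string City { get; set; }
}

public class CustomerRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The handler owns the SQL or EF Core code needed to fill the view model;
// changing the screen means changing only this class.
public class ActiveCustomersQueryHandler : IQueryHandler<ActiveCustomersQuery, IReadOnlyList<CustomerRow>>
{
    public Task<IReadOnlyList<CustomerRow>> HandleAsync(ActiveCustomersQuery query)
    {
        // ... run a direct SQL statement or an EF Core projection here ...
        IReadOnlyList<CustomerRow> rows = new List<CustomerRow>();
        return Task.FromResult(rows);
    }
}
```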
I'm trying to figure out how to write an IQueryable data source that can pull and combine data from multiple sources (in this case Azure Table, Azure Blobs, and ElasticSearch). I'm really having a hard time figuring out where to start with this though.
The idea is that a web service (in this case an ASP.NET Web API) can present a queryable OData interface, but when it gets queried it pulls data from multiple sources depending on what is requested. So large queries might hit the indexing service (ElasticSearch), which wouldn't necessarily have the full object available, but calls to get an individual object would go directly to the Azure Tables. From the service user's perspective, though, it's always just accessing the same data source.
While I would like to just use the index as our search service and the tables as our backup, I have a design requirement that it has to pull data from multiple sources, which greatly complicates this whole thing.
I'm wondering if anyone has any guidance on this or can point me towards the right technologies. Some of the big issues I'm seeing are:
the back-end objects aren't necessarily the same as the front-end object being queried. Multiple back-end objects may get combined into a single front-end one, or the front-end object may have computed values. So a LINQ query would have to be translated or mapped
changing data sources based on query parameters
Here is a quick overview of the technology I'm working with:
ASP.Net Web API 2 web service running as an Azure Cloud service
ElasticSearch running on SUSE VMs (on Azure)
Azure Tables
Azure Blobs
First, you need to separate the data access from the Web API project. The Web API project is merely an interface, so remove it from the equation. The solution to the problem should be the same regardless of whether it is web API or an ASP.NET web page, an MVC solution, a WPF desktop application, etc.
You can then focus on the data problem. What you need is some form of "router" to determine the data source based on the parameters that make the decision. In this case, you are talking about sending single-item lookups to Azure Tables and using the index (with map/reduce-style aggregation) when more than one item is requested. I would set up the rules as a strategy or similar so you can swap them out if you find that 1 versus 2+ items is not a good condition for changing routing.
Then you solve the data access problem for each methodology.
The system as a whole.
User asks for data (user can be a real person or another system through the web api)
Query is partially parsed to determine routing path
Router sends data request to proper class that handles data access for the route
Data is returned
Data is routed back to the user via whatever User interface is used (in this case Web API - see paragraph 1 for other options)
One caution. Don't try to mix all types of persistence, as a generic "I can pull data from a table or a blob or a {name your favorite other persistent storage here}" layer often ends up becoming a garbage can.
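To make the "router" idea above concrete, here is a rough strategy-based sketch. The `Document`, `DocumentQuery`, and source types are hypothetical; the real mapping from back-end records to the front-end object would live inside each source:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical front-end model exposed by the service; records from Azure Tables,
// blobs, or Elasticsearch are mapped into this shape by each source.
public class Document
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public class DocumentQuery
{
    public string Id { get; set; }         // set for single-item lookups
    public string SearchText { get; set; } // set for broad searches
}

// One strategy per data source.
public interface IDocumentSource
{
    bool CanHandle(DocumentQuery query);
    IEnumerable<Document> Fetch(DocumentQuery query);
}

public class TableStorageSource : IDocumentSource
{
    public bool CanHandle(DocumentQuery q) => !string.IsNullOrEmpty(q.Id);
    public IEnumerable<Document> Fetch(DocumentQuery q)
    {
        // ... point lookup against Azure Table storage ...
        yield break;
    }
}

public class ElasticsearchSource : IDocumentSource
{
    public bool CanHandle(DocumentQuery q) => !string.IsNullOrEmpty(q.SearchText);
    public IEnumerable<Document> Fetch(DocumentQuery q)
    {
        // ... full-text search against the index; may return partial documents ...
        yield break;
    }
}

// The router inspects the query and picks the first strategy that can serve it,
// so the routing rule (1 item vs. many) can be swapped out later.
public class DocumentRouter
{
    private readonly IReadOnlyList<IDocumentSource> _sources;
    public DocumentRouter(IReadOnlyList<IDocumentSource> sources) { _sources = sources; }

    public IEnumerable<Document> Query(DocumentQuery query) =>
        _sources.First(s => s.CanHandle(query)).Fetch(query);
}
```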
This post has been out a while. The 2nd/last paragraph is close, yet still restricted... Even a couple of years ago, this architecture was commonplace.
Whether the core interface is written in WPF, ASP.NET, Java, or whatever - the critical path is the result set produced by a query for information. This is high-level, but I'm sharing more than I should because of other details of a project I've been part of for several years.
Develop your core interface. We did a complete shell that replaced Windows/Linux entirely.
Develop a solution architecture wherein Providers are both the source component and the publishing component.
Now - regardless of your query "source" - it's just another Provider. The interfacing to that Provider is abstract and consistent, regardless of the Provider::SourceAPI/ProviderSourceAPI::Interface.
When the user wants to query for anything - literally anything: criminal background checks, a plain Google search, activity on checkouts or check-ins at specific public libraries somewhere in the USA - it's all relevant. Step back and consider the objective. No solution is too small, and guaranteed, none is too large for this: abstract the objectives of the solution and code them.
All queries - regardless of what is being searched for - are simply queries.
All responses - regardless of the response/result-set - are results - the ResultantProviderModel / ResultantProviderController (no, I'm not referencing MVC specifically).
I cannot code you a literal example here, but I hope I can challenge you to consider an approach and solution that is much more abstract and open than what I've read here. The physical implementation should be much simpler and VERY abstracted from any specific technology stack. The searched source MUST be abstract, and implemented through a Provider architecture. So if I have a tool that my desktop or office workers use, they query for something... "What has John Doe written on physics?"
In a corporation leveraging SharePoint and FAST Search? This is easy, out of the box stuff...
For a custom user-interfacing component - well, then you have the back-end plumbing to resolve. So abstract each piece/layer from an architectural standpoint. Pseudo-code it out, however you choose to do that. Most important is that you do not get locked into a mindset tied to a specific development paradigm, language, IDE, or whatever. If you can design the solution abstractly and walk through it in pseudo code - and do this for each abstraction layer - then start coding it. The source is relative... the publishing aspect is relative and consistent.
I do not know if you'll grasp this - but perhaps someone will - and it'll prove helpful.
HTH's...
I am new to this concept, so I need guidance on what would be best to use in the following scenario.
I have to make a desktop application that contains many features like parts stock, employee data, company car data, etc.
Now the problem is that many users will be using the application, and the offices where it is installed are situated in different cities.
I want a scheme where, if one user uploads any data to the database, the others see it reflected and get updated instantly. For example, if more cars are added, everyone using the application gets their car list updated.
My idea was to use web services, with the data stored in a central website database, so that everyone's application refreshes its lists every 20 seconds or so.
Any help is appreciated
You wouldn't reload all your data constantly; there are a couple of common approaches here:
keep a list of changes; if you add new data you add the primary data record and you write the fact that the change happened (essentially an "events" list). Then you can query the change log periodically and get any additions/updates/deletes simply by asking for all events after (x) - see the sketch after this list
if the infrastructure allows, some kind of pub/sub framework - same approach really but typically using middleware for the changes, rather than the main DB
re how you get the data: polling is simple and effective; active pushing is harder to set up but may reduce latency - not sure it is worth it here
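As a sketch of the change-log plus polling approach described above (the `ChangeEvent` shape and service interface are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical change-log entry; one row is written for every insert/update/delete.
public class ChangeEvent
{
    public long Sequence { get; set; }      // monotonically increasing id
    public string EntityType { get; set; }  // e.g. "Car"
    public int EntityId { get; set; }
    public string Action { get; set; }      // "Insert", "Update", "Delete"
    public DateTime OccurredAt { get; set; }
}

public interface IChangeLogClient
{
    // Web service call: "give me everything after sequence X".
    IList<ChangeEvent> GetChangesSince(long lastSeenSequence);
}

// Each desktop client polls every 20 seconds or so and applies only the deltas.
public class ChangePoller
{
    private readonly IChangeLogClient _client;
    private long _lastSeenSequence;

    public ChangePoller(IChangeLogClient client) { _client = client; }

    public void PollOnce(Action<ChangeEvent> apply)
    {
        foreach (var change in _client.GetChangesSince(_lastSeenSequence))
        {
            apply(change);                       // update the local list/cache
            _lastSeenSequence = change.Sequence; // remember the high-water mark
        }
    }
}
```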
Another approach, though, is to design it as a web app - then all your data lives at the server-farm and is trivial to update immediately. Your "desktop" app could be a web page using ajax
Try cloud computing and store your data in the cloud.
OK trying to recover my points here after the downvote.
The cloud (Windows Azure especially) is a great fit for this project. Web services would help too, as they can be easily scaled out to a number of web servers (Instances, in Azure speak). Having many desktop clients talking directly to a database is not a good idea and often results in scalability issues.
Output caching could help a great deal here if you are refreshing your client-side data frequently; it can be implemented with almost no code, which makes it much easier than managing lists of changes.
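Assuming the service side were exposed through ASP.NET MVC controllers (an assumption - the question doesn't specify the service technology), output caching for the frequently refreshed lists can be as simple as an attribute; the controller and data-access names here are hypothetical:

```csharp
using System.Web.Mvc;

public class CarsController : Controller
{
    // Cache the rendered car list for 20 seconds so frequent client refreshes
    // don't each hit the database.
    [OutputCache(Duration = 20, VaryByParam = "none")]
    public ActionResult List()
    {
        var cars = LoadCarsFromDatabase(); // hypothetical data access call
        return Json(cars, JsonRequestBehavior.AllowGet);
    }

    private object LoadCarsFromDatabase()
    {
        // ... query the shared database or cloud storage here ...
        return new string[0];
    }
}
```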
Good afternoon - I have a pretty general question today. I've been tasked with creating a web application to manage some basic information on customers. It's a very simple application, but what I don't know is what to keep in mind when developing the site to support multiple users at their own domains or subdomains of our URL.
How would I restrict users from logging in to each other's portion of the app?
I've seen mention of database scoping in similar questions on Stack Overflow, could anybody elaborate on best practices for an implementation like this?
Are there any new features in MVC3 to support multi-tenancy? I am facing this issue with MVC2 and my eCommerce site where we decided we wanted it white-labeled and customizable for multiple shop owners, and don't know where to begin in implementing these features in an existing application. Any input is appreciated.
edit
To elaborate on multi-tenancy, what I mean - in the context of a store for example, multiple users sign up for their own store at www.mystore.com and are each given a unique subdomain to access their own instance of the store, at user1.mystore.com, user2.mystore.com etc. Each store would have customers with order histories, and those customers would have logins. I would need to restrict customers of user1.mystore.com from logging in at user2.mystore.com without a new account, and likewise prevent user2.mystore.com from accessing user1.mystore.com's customer history.
I implemented a complete MVC multi-tenant app. Here are some links I found handy and some sample apps:
http://msdn.microsoft.com/en-us/library/aa479086.aspx
http://codeofrob.com/archive/2010/02/14/multi-tenancy-in-asp.net-mvc-controller-actions-part-i.aspx
http://www.developer.com/design/article.php/10925_3801931_2/Introduction-to-Multi-Tenant-Architecture.htm
http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_cc
http://lukesampson.com/post/303245177/subdomains-for-a-single-application-with-asp-net-mvc
http://code.google.com/p/multimvc/
http://www.paulstovell.com/widgets
http://www.agileatwork.com/bolt-on-multi-tenancy-in-asp-net-mvc-with-unity-and-nhibernate/
http://ayende.com/blog/3530/multi-tenancy-approaches-and-applicability
http://weblogs.asp.net/zowens/archive/tags/Multi-tenancy/default.aspx
http://cloudsamurai.codeplex.com/
http://cloudninja.codeplex.com/
http://msdn.microsoft.com/en-us/library/hh534484.aspx
http://blog.maartenballiauw.be/post/2009/05/20/ASPNET-MVC-Domain-Routing.aspx
http://blog.tonywilliams.me.uk/asp-net-mvc-2-routing-subdomains-to-areas
Even starting from scratch, you are in for a world of hurt. The MVC framework does very little to help you address the issues.
Most likely you are about to spend a fair amount of time restructuring your database.
The first step is that you are going to create a table to house your "Tenant" list. Then you need to add this TenantId to just about every table in your system to make sure no one steps on each other. You can skip any tables that are global in nature. One example might be a list of Status Codes.
However, everything from users to the data they hold will have to carry this ID. Also, modify all of your indexes to take TenantId into account.
Once you have that, you'll need to modify all of your queries to take the tenantid into account.
One column of the tenants table should be the portal URL, like customername.oursite.com or whatever. This way you can point multiple URLs at the exact same code. When the site needs the current TenantId, it just looks it up based on the URL that was passed in.
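A rough sketch of that lookup and of scoping queries by TenantId; the `Tenant`, `ITenantRepository`, and `CustomerRecord` types are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical tenant record; PortalUrl holds e.g. "user1.mystore.com".
public class Tenant
{
    public int TenantId { get; set; }
    public string Name { get; set; }
    public string PortalUrl { get; set; }
}

public interface ITenantRepository
{
    Tenant FindByHost(string host);
}

// Resolve the current tenant once per request from the incoming host name,
// then pass its TenantId into every query.
public class TenantResolver
{
    private readonly ITenantRepository _tenants;
    public TenantResolver(ITenantRepository tenants) { _tenants = tenants; }

    public Tenant Resolve(Uri requestUrl)
    {
        var tenant = _tenants.FindByHost(requestUrl.Host.ToLowerInvariant());
        if (tenant == null)
            throw new InvalidOperationException("Unknown tenant: " + requestUrl.Host);
        return tenant;
    }
}

// Example of scoping a query: every customer row carries the TenantId column,
// so customers of one store can never be returned for another store's subdomain.
public class CustomerRecord
{
    public int Id { get; set; }
    public int TenantId { get; set; }
    public string Email { get; set; }
}

public class CustomerQueries
{
    public IList<CustomerRecord> GetCustomers(IQueryable<CustomerRecord> customers, int tenantId)
    {
        return customers.Where(c => c.TenantId == tenantId).ToList();
    }
}
```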
If I was doing this, I'd plan to spend about 1 to 2 hours per table in the database to make it "multi-tenant". Obviously some tables (and their queries) will go faster; others will take longer.
Incidentally, this doesn't cover things like customizing the UI (look/feel) per tenant or anything of that nature. If you need to do this then you'll have to either create a directory on the server for each tenant to hold their style sheets or load them directly from the DB (which has its own issues with regard to caching).
Typically, you design for this at the beginning of the project. Refitting an already (or almost) complete project is a PITA.
Finally, test, test, test and do more testing. You will have to make sure that every single query pulls only the data it absolutely needs to.
There has been some talk of multi-tenancy support in Sharp Architecture (based on MVC 3) found here: http://www.yellowfeather.co.uk/2011/02/multi-tenancy-on-sharp-architecture-revisited/
Not sure if that really helps you with your existing application, porting over would be a bit of a job.
I am working on a Sometimes Connected CRUD application that will be primarily used by teams (2-4) of social workers and nurses to track patient information in the form of a plan. The application is a reworking of an ASP.NET app that was created before my time. There are approximately 200 tables across 4 databases. The web app version relied heavily on stored procedures (SPs), but since this version is a WinForms app that will be pointing at a local DB, I see no reason to continue with SPs. Also of note, I had planned to use merge replication to handle the syncing portion, and there seem to be some issues with those two together.
I am trying to understand what approach to use for the DAL. I originally planned to use LINQ to SQL, but I have read tidbits stating it doesn't work well in a Sometimes Connected setting. I have therefore been trying to read about and experiment with numerous solutions: SubSonic, NHibernate, Entity Framework. This is a relatively simple application, and due to a "looming" version 3 redesign this effort can be borderline "throwaway." The emphasis here is on getting a desktop version up and running ASAP.
What I am asking here is for anyone with experience using any of these technologies (or one I didn't list) to lend me your hard-earned wisdom. What, in your opinion, is the best approach for me to pursue? Any other insights on creating this kind of app? I am really struggling with the DAL portion of this program.
Thank you!
If the stored procedures do what you want them to, I would have to say I'm dubious that you will get benefits by throwing them away and reimplementing them. Moreover, it shouldn't matter if you use stored procedures or LINQ to SQL style data access when it comes time to replicate your data back to the master database, so worrying about which DAL you use seems to be a red herring.
The tricky part about sometimes connected applications is coming up with a good conflict resolution system. My suggestions:
Always use RowGuids as your primary keys to tables. Merge replication works best if you always have new records uniquely keyed.
Realize that merge replication can only do so much: it is great for bringing new data in disparate systems together. It can even figure out one sided updates. It can't magically determine that your new record and my new record are actually the same nor can it really deal with changes on both sides without human intervention or priority rules.
Because of this, you will need "matching" rules to resolve records that are claiming to be new but actually aren't. Note that this is a fuzzy step: rarely can you rely on a unique key to actually be entered exactly the same on both sides and without error. This means giving weighted matches where many of your indicators are the same or similar - see the sketch after this list.
The user interface for resolving conflicts and matching up "new" records with the original needs to be easy to operate. I use something that looks similar to the classic three way merge that many source control systems use: Record A, Record B, Merged Record. They can default the Merged Record to A or B by clicking a header button, and can select each field by clicking against them as well. Finally, Merged Records fields are open for edit, because sometimes you need to take parts of the address (say) from A and B.
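A minimal sketch of that weighted-matching idea, with a hypothetical `PersonRecord` shape and purely illustrative weights and threshold:

```csharp
using System;
using System.Linq;

// Hypothetical patient/contact record coming from either side of the merge.
public class PersonRecord
{
    public Guid RowGuid { get; set; }
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public DateTime? DateOfBirth { get; set; }
    public string Phone { get; set; }
}

// A weighted match score: no single field is trusted on its own, but several
// similar fields together push the pair over a "probably the same person" threshold.
public static class RecordMatcher
{
    public static double Score(PersonRecord a, PersonRecord b)
    {
        double score = 0;
        if (Same(a.LastName, b.LastName)) score += 0.35;
        if (Same(a.FirstName, b.FirstName)) score += 0.25;
        if (a.DateOfBirth.HasValue && a.DateOfBirth == b.DateOfBirth) score += 0.25;
        if (Same(Digits(a.Phone), Digits(b.Phone))) score += 0.15;
        return score; // e.g. treat >= 0.7 as a candidate to show in the merge UI
    }

    private static bool Same(string x, string y) =>
        !string.IsNullOrWhiteSpace(x) &&
        string.Equals(x.Trim(), y?.Trim(), StringComparison.OrdinalIgnoreCase);

    private static string Digits(string s) =>
        s == null ? null : new string(s.Where(char.IsDigit).ToArray());
}
```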
None of this should affect your data access layer in the slightest: this is all either lower level (merge replication, provided by the database itself) or higher level (conflict resolution, provided by your business rules for resolution) than your DAL.
If you can install a DB system locally, go for something you are familiar with. The greatest problem, I think, will be the syncing and merging part. You must think about several possibilities: what if you changed something that someone else deleted on the server? Who decides?
I've never used the Sync Framework myself, just read an article, but it may give you a solid foundation to build on. Whichever way you go with data access, the solution for the business logic will probably have a much wider impact...
There is a sample app called issueVision that Microsoft put out back in 2004.
http://windowsclient.net/downloads/folders/starterkits/entry1268.aspx
Found the link in an old thread on joelonsoftware.com. http://discuss.joelonsoftware.com/default.asp?joel.3.25830.10
Other ideas...
What about mobile broadband? A couple of 3G cellular cards would work tomorrow, and your app would need no changes apart from large pages/graphics.
An Excel spreadsheet used in the field, with DTS or SSIS to import the data into the application, while a "better" solution is created.
Good luck!
If by SPs you mean stored procedures... I'm not sure I understand your reasoning for trying to move away from them, considering that they're fast, proven, and already written for you (i.e. tested).
Surely, if you're making an app that will mimic the original, there are definite merits to keeping as much of the original (working) codebase as possible - not the least of which is speed.
I'd try installing a local copy of the db, and then pushing all affected records since the last connected period to the master db when it does get connected.
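A rough sketch of that push-on-connect idea, assuming each local row carries a last-modified timestamp and the master DB exposes an upsert keyed on the RowGuid (all names hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical local row with a LastModified column maintained by the client app.
public class PlanRecord
{
    public Guid RowGuid { get; set; }
    public DateTime LastModifiedUtc { get; set; }
    public string Payload { get; set; }
}

public interface IMasterDatabase
{
    void Upsert(PlanRecord record); // insert-or-update keyed on RowGuid
}

// When a connection becomes available, push everything touched since the last sync.
public class PushOnConnectSync
{
    private DateTime _lastSyncedUtc = DateTime.MinValue;

    public void Push(IEnumerable<PlanRecord> localRecords, IMasterDatabase master)
    {
        var cutoff = _lastSyncedUtc;
        foreach (var record in localRecords)
        {
            if (record.LastModifiedUtc > cutoff)
                master.Upsert(record);
        }
        _lastSyncedUtc = DateTime.UtcNow;
    }
}
```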