I have an application with several Web API controllers, and I now have a requirement to be able to filter GET results by object properties. I've been looking at using OData, but I'm not sure it's a good fit, for a couple of reasons:
The Web API controller does not have direct access to the DataContext; instead, it gets data from our database through our "domain" layer, so it has no visibility into our Entity Framework models.
Tying into the first item, the Web API deals with lightweight DTO model objects which are produced in the domain layer; this is effectively what hides the EF models. The issue here is that I want these queries to be executed in our database, but by the time the Web API method gets a collection from the domain layer, all of the objects in the collection have been mapped to these DTO objects, so I don't see how the OData filter could possibly do its job when the objects are once-removed from EF in this way.
This item may be the most important one: we don't really want to allow arbitrary querying against our Web API/database. We just want to leverage the OData library to avoid writing our own filters, and filter parsers/builders, for every type of object that could be returned by one of our Web API endpoints.
Am I on the wrong track based on #3? If not, would we be able to use this OData library without significant refactoring to how our Web API and our EF interact?
I haven't had experience with OData, but from what I can see it's designed to be fed a Context, and it manages the interaction with, and the returning of, those models. I am definitely not a fan of returning entities in any form to a client.
It's an ugly situation to be in, but when faced with this, my first course of action is to push back to the clients to justify their searching needs. The default request is almost always "Well, it would be nice to be able to search against everything." My answer to that is: I don't want to know what you want, I want to know what you need, because I don't want to give you a loaded gun to shoot your own foot off with and then have you blame me when the system grinds to a halt. Searching is a huge performance killer if it's too open-ended. It's hard to test for accuracy/relevance, and to index efficiently for 100% of possible search cases, when users only need 25% of those scenarios. If the clients cannot tell you what searching they will need, and just want everything because they might need it, then they don't need it yet.
Personally, I stick to specific search DTOs and translate those into LINQ expressions.
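A minimal sketch of what I mean, assuming a simple Order model and illustrative criteria names; the real DTOs would match whatever the clients actually need:

using System;
using System.Linq;

// Illustrative domain type and search DTO (names are assumptions).
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; } = "";
    public DateTime CreatedAt { get; set; }
}

public class OrderSearchCriteria
{
    public string? CustomerName { get; set; }
    public DateTime? CreatedAfter { get; set; }
}

public static class OrderSearch
{
    // Each non-null criterion appends one Where clause; when the source is
    // an EF IQueryable, the whole thing translates into a single SQL query.
    public static IQueryable<Order> Apply(IQueryable<Order> source, OrderSearchCriteria criteria)
    {
        if (!string.IsNullOrEmpty(criteria.CustomerName))
            source = source.Where(o => o.CustomerName.Contains(criteria.CustomerName));
        if (criteria.CreatedAfter.HasValue)
            source = source.Where(o => o.CreatedAt >= criteria.CreatedAfter.Value);
        return source;
    }
}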
If I was faced with a hard requirement to implement something like that, I would:
Try to push for these searches/reports to be done off a reporting replica that is synchronized with the live database. (To minimize the bleeding when some idiot managers fire up some wacky non-indexed search criteria so that it doesn't tie up the production DB where people are trying to do work.)
Create a new bounded DbContext specific to searching, with separate entity definitions that only expose the minimum number of properties needed to represent search criteria and IDs.
Hook this bounded context into the API and OData. It will return "search results". When a user selects a search result, use the ID(s) against the API to load the applicable domain, or initiate an action, etc.
No. 1 is optional, a nice-to-have, provided they can live with searches not "seeing" updated data until it is replicated (i.e. a few seconds to minutes, depending on the replication strategy/size). Normally these searches are used for reporting-type queries, so I'd push to keep them separate from the normal day-to-day searching options that users use (i.e. an advanced search option or the like).
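For illustration, a minimal sketch of the bounded search context from item 2, using EF Core syntax with invented entity/table names:

using Microsoft.EntityFrameworkCore;

// Slim, read-only projection: just the ID plus the searchable fields.
public class CustomerSearchResult
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public string Region { get; set; } = "";
}

public class SearchDbContext : DbContext
{
    public SearchDbContext(DbContextOptions<SearchDbContext> options) : base(options) { }

    public DbSet<CustomerSearchResult> Customers => Set<CustomerSearchResult>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Maps to the same table as the full Customer entity in the main
        // context, but this bounded context only ever reads these columns.
        modelBuilder.Entity<CustomerSearchResult>().ToTable("Customers");
    }
}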
I'm reviewing OData again, as I would like to use it in a new REST project with EF, but I have the same concerns I had a few years back.
Exposing a general IQueryable can be quite dangerous; restricting potentially expensive queries must be done elsewhere, at the DB or connection level.
OData doesn't allow any interception/customisation of the behaviour by developers as it sits outside the interface.
OData doesn't play well with DI in general. While it is possible to DI an alternative IQueryable, you can't intercept the OData calls to check, amend or modify them.
My suggestion is that the tool be broken down into more distinct elements to allow far greater customisation and re-use. Break open the black box :) It would also be better in terms of single responsibility. Would it be possible to have components that did the following?
Expression generators from URLs: convert OData URL extensions into typed expressions usable with an IQueryable but independent of it, generating an Expression<Func<T, bool>> for a Where clause, for example. This would be a super useful standalone component, and would support OData URL formats being used more widely as a standard.
An EF Adaptor to attach the expressions to an EF context, or to use in any other DI'ed code. So rather than exposing a public IQueryable, the service can encapsulate an interface and get the benefits of OData functionality: REST GET -> Expression Generation -> Map to IQueryable.
This approach would allow developers to intercept the query calls and customise the behaviour if required while maintaining the ease of use for simple cases. We could embed OData and EF within repository patterns where we add our own functionality.
There is a lot of misunderstanding in your post; it's not really well suited to this site, but it is a recurring line of speculation that does need to be addressed.
OData doesn't play well with DI in general. While it is possible to DI an alternative IQueryable, you can't intercept the OData calls to check, amend or modify them.
This statement is just not accurate at all, neither on the DI topic nor on query interception. Going into detail is too far out of scope, as there are many different ways to achieve this; it would be better to post a specific scenario that you are challenged by, and we can post a specific solution.
Exposing a general IQueryable can be quite dangerous; restricting potentially expensive queries must be done elsewhere, at the DB or connection level.
Exposing a raw IQueryable as a concept has some inherent dangers if you do not put in any restrictions, but in OData we are not exposing the IQueryable to the public at all, not in the traditional SDK or direct API sense. Yes, your controller method can (and should) return an IQueryable, but OData parses the path and query from the incoming HTTP request to compose the final query to serve the request, without pre-loading data into memory.
The inherent risk with IQueryable comes from when you allow external logic to compose or execute a query that is attached to your internal data context, but in OData the HTTP Host boundary prevents external operators from interacting with your query or code directly, so this risk is not present due to the hosting model.
OData gives you granularity over which fields are available for projecting, filtering or sorting, and although there is rich support for composing extended queries including functions and aggregates, the IQueryable expression itself does not pass the boundary of the executable interface. The IQueryable method response is itself fundamental to many of the features that drive us to choose OData in the first place.
However, you do not need to expose IQueryable at all if you really do not want to! You can return IEnumerable instead, but by doing so you will need to load enough data into memory to satisfy the query request, if you want to fulfil it, that is. There are extension points to help you do this, as well as tools to parse the URL query parameters into simple strings or an expression tree that you can apply to your own data models if you need to.
The EnableQueryAttribute is an Action Filter that will compose a LINQ query over the results from your controller endpoints to apply any $filter criteria or $select/$expand projections or even $apply aggregations.
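As a minimal sketch (ASP.NET Core OData syntax here; the controller, context and entity names are illustrative), this is all it takes for the attribute to compose the URL options onto a deferred query:

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.OData.Query;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Description { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}

public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db) => _db = db;

    [HttpGet("api/products")]
    [EnableQuery(MaxTop = 100)]   // guard rail: cap the page size
    public IQueryable<Product> Get()
    {
        // Nothing is loaded here; $filter/$select/$orderby from the URL are
        // appended to this expression before EF translates it to SQL.
        return _db.Products;
    }
}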
OData doesn't allow any interception/customisation of the behaviour by developers as it sits outside the interface.
EnableQueryAttribute is about as close to a Black Box as you can find in OData, but the OData libraries are completely open source and you can extend or override the implementation, or omit the attribute altogether. If you do so (omit it), you will then need to process and format the response to be OData compliant. The specification allows for a high degree of flexibility; the major caveat is that you need to make sure the $metadata document describes the inputs and outputs.
The very nature of the ASP.NET request processing pipeline means that we can inject all sorts of middleware implementations at many different points; we can even implement our own custom query options, or pass the query through the request body if we need to.
If your endpoints do NOT return IQueryable, then the LINQ composition in the EnableQueryAttribute can only operate over the data that is in the IEnumerable feed. A simple example of the implication of this is if the URL Query includes a $select parameter for a single field, something like this:
http://my.service.net/api/products(101)?$select=Description
If you are only exposing IEnumerable, then you must manually load the data from the underlying store. You can use the ODataQueryOptions class to access the OData arguments through a structured interface; the specific syntax will vary based on your DAL, ORM and the actual model, of course. However, like most Repository or MVC implementations, many implementations that do not use IQueryable will default to simply loading the entire object into memory instead of only the specifically requested fields; they might end up loading the results of this comparative SQL query:
SELECT * FROM Product WHERE Id = #Id
If this Product has 20 fields, then all of that data will be materialised into memory to service the request, even though only 1 field was requested. Even without using IQueryable, OData still has significant benefits here by reducing the bytes sent across the wire to the client application. This reduces cost as well as the time it takes to fulfill a request.
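A hedged sketch of that manual path (ASP.NET Core OData; the DTO and repository abstraction are invented for illustration):

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.OData.Query;

public class ProductDto
{
    public int Id { get; set; }
    public string Description { get; set; } = "";
}

public interface IProductRepository          // illustrative DAL abstraction
{
    IEnumerable<ProductDto> Search(string? filter, string? select);
}

public class ProductsController : ControllerBase
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository) => _repository = repository;

    [HttpGet("api/products")]
    public IActionResult Get(ODataQueryOptions<ProductDto> options)
    {
        // The raw OData arguments, parsed into a structured interface.
        string? filter = options.Filter?.RawValue;        // e.g. "Id eq 101"
        string? select = options.SelectExpand?.RawValue;  // e.g. "Description"

        // It is now our job to push these down to the store; anything we
        // cannot push down must be loaded into memory and trimmed here.
        return Ok(_repository.Search(filter, select));
    }
}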
By comparison, if the controller method returned an IQueryable expression that had been deferred or not yet materialised, then the final SQL that gets executed could be something much more specific:
SELECT Description FROM Product WHERE Id = #Id
This can have significant performance benefits, not just in the SQL execution but in the transport between the data store and the service layer as well as the serialization of the data that is received.
Serialization is often taken for granted as a necessary aspect of API development, but that doesn't mean there is no room to improve the process. In the cloud age where we pay for individual CPU cycles there is a lot of wasted processing that we can reclaim by only loading the information that we need, when we need it.
Fully realising the performance gains requires selective data calls from the client. If the end client makes a call that explicitly requests all fields, then there should be no difference between OData and a traditional API approach, but with OData the potential is there to be realised.
If the controller is exposing a complex view, not a traditional table, then there is even more significance in supporting IQueryable. For custom business DTOs (views) that do not match the underlying storage model, we are often forced to compromise between performance practicalities and data structures. Without OData allowing the caller to trim the data schema, it is common for APIs either to implement some fully dynamic endpoints, or to see a sprawl of similar DTO models with restricted scope or potentially a single purpose. OData provides a mechanism to expose a single common view that has more metadata than all callers need, while still allowing individual callers to retrieve only the sub-set that they need.
In aggregate views, individual columns can add significant impact to the overall query execution; in traditional REST APIs this becomes a common justification for having multiple similar DTO models. With OData we can define the view once and give the callers the flexibility to choose when the extra data, which comes with a longer response wait time, should be queried and when it should not.
OData provides a way to balance between being 100% generic with your DTOs or resorting to single use DTOs.
The flexibility provided by OData can significantly reduce the overall time to market by reducing the iterative evolution of views and complex types that often comes up as the front-end development teams start to consume your services. The nature of IQueryable and the conventions offered by the OData standard mean that there is potential for front-end work to begin before the API is fully implemented.
This was a very simple and contrived example; we haven't yet covered $expand or $apply, which can lead to very memory-intensive operations to support. I will, however, quickly talk about $count. It is a seemingly simple requirement: return a count of all records for a specific criteria, or for no criteria at all. An OData IQueryable implementation requires no additional code and almost zero processing to service this request, as it can be passed entirely to the underlying data store in the form of a SELECT COUNT(*) FROM...
With OData and the OData Libraries, we get a lot of functionality and flexibility OOTB, but the default functionality is just the start, you can extend your controllers with additional Functions and Actions and views as you need to.
Regarding the Dangers of IQueryable...
A key argument against exposing IQueryable from the DbContext is that it might allow callers to access more of your database than you intended. OData has a number of protections against this. The first is that, for each field in the entire schema, you can specify whether the field is available at all, can be filtered, or can be sorted.
The next level of protection is that for each endpoint we can specify the overall expansion depth; by default this is 2.
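For example (ASP.NET Core OData model builder; the entity and property names are made up), the per-field and per-endpoint guards look roughly like this:

using Microsoft.OData.Edm;
using Microsoft.OData.ModelBuilder;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public string InternalNotes { get; set; } = "";
}

public static class ApiEdmModel
{
    public static IEdmModel Build()
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<Product>("Products");

        // Per-field guard: the property is returned, but cannot be used
        // in $filter or $orderby.
        builder.EntityType<Product>()
               .Property(p => p.InternalNotes)
               .IsNotFilterable()
               .IsNotSortable();

        return builder.GetEdmModel();
    }
}

// Per-endpoint guard: [EnableQuery(MaxExpansionDepth = 2)] on the action
// caps how deep $expand can traverse (2 is also the default).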
It is worth mentioning that it is not necessary to expose your data model directly through OData, if your domain model is not in-line with your data model, it may be practical to only expose selected views or DTOs through the OData API, or only a sub-set of tables in your schema.
For more discussion on DTOs and over/under-posting protection, have a read of this post: How to deal with overposting/underposting when your setup is OData with Entity framework
Opening the Black Box
Expression generators from URLs: convert OData URL extensions into typed expressions usable with an IQueryable but independent of it, generating an Expression<Func<T, bool>> for a Where clause, for example.
This is a problematic concept if you're not open to IQueryable... That being said, you can use open types and have a completely dynamic schema that you can validate in real time, or that can be derived from the query routes entirely without validation. There is not a lot of published documentation on this, mainly because the scenarios where you want to implement it are highly specific, but it's not hard to sort out. While out of scope for this post, if you post a question to SO with a specific scenario in mind, we can post specific implementation advice...
An EF Adaptor to attach the expressions to an EF context, or to use in any other DI'ed code. So rather than exposing a public IQueryable, the service can encapsulate an interface and get the benefits of OData functionality: REST GET -> Expression Generation -> Map to IQueryable.
What you are describing is pretty close to how the OData context works. To configure OData, you need to specify the structure of the entities that the OData model exposes. There are convention-based mappers provided OOTB that can help you expose an OData model that is close to a 1:1 representation of an Entity Framework DbContext model with minimal code, but OData is not dependent on EF at all. The only requirement is that you define the DTO models, including the actions and functions; from this model the OData runtime is able to validate and parse the incoming HTTP request into queryable expressions composed from the base expressions that your controllers provide.
I don't recommend it, but I have seen many implementations that use AutoMapper to map between the EF Model to DTOs, and then the DTOs are mapped to the OData Entity model. The OData Model is itself an ORM that maps between your internal model and the model that you want to expose through the API. If this model is a significantly different structure or involves different relationships, then AutoMapper can be justified.
You don't have to implement the whole OData runtime, including the OData Entity Model configuration and inheriting from ODataController, if you don't want to.
The usual approach when you want to Support OData Query Options in ASP.NET Web API 2 without fully implementing the OData API is to use the EnableQueryAttribute in your standard API; it is, after all, just an Action Filter, and an example of how the OData libraries are already packaged in a way that lets you implement OData query conventions within other API patterns.
OData Url "$filter=id eq 1"
becomes:
Func<TestModel, bool> filterLambda = x => x.id == 1;
where TestModel is 'implied' in some way (maybe this is the problem you refer to)
in terms of generating the code, it's something like
Expression.Equal(Expression.PropertyOrField(ExpressionParam, "id"), Expression.Constant(1))
generalised into an OData expression parser
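Pulling that together into a runnable fragment (TestModel and the helper are illustrative; a real parser would walk the OData AST rather than hard-code equality):

using System;
using System.Linq.Expressions;

public class TestModel
{
    public int id { get; set; }
}

public static class FilterBuilder
{
    // Builds x => x.<property> == <value>, the shape "$filter=id eq 1" implies.
    public static Expression<Func<T, bool>> EqualConstant<T>(string property, object value)
    {
        var param = Expression.Parameter(typeof(T), "x");
        var body = Expression.Equal(
            Expression.PropertyOrField(param, property),
            Expression.Constant(value));
        return Expression.Lambda<Func<T, bool>>(body, param);
    }
}

// Usage: var filter = FilterBuilder.EqualConstant<TestModel>("id", 1);
// 'filter' can be handed to Queryable.Where without exposing the IQueryable.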
Thanks for the reply; your time is much appreciated. This isn't really an answer, rather a design discussion. Apologies if it's not appropriate here.
I'm no expert with this tech so I may also be missing options. Manually exposing IEnumerables instead of IQueryable seems to require much more coding if I'm reading that suggestion correctly. It would also lead to service processing of the data after the database queries. The idea of custom actions on a custom IQueryable may also be worth some investigation.
An example of not playing well with DI: if we have an HTTP-context user whose token has some limits on the queries they can do, we would like to take the user info from the HTTP context and restrict the DbContext queries accordingly. Maybe the user can only see certain business units or certain clients. There are many other use cases.
It should be possible to append/amend queries with the extra user context before they are presented to the database. This is where the decomposition idea comes in: if OData could generate a structure (a lambda expression may not be good here either) that can be manipulated, we could have the best of both by manipulating the query before execution.
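To make the use-case concrete, a sketch of the kind of composition I'm after (the user abstraction and entity names are all invented): the user restriction is appended to the base query before anything from the URL is applied.

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.OData.Query;

public interface ICurrentUser          // assumed: populated from the HTTP context/token
{
    int[] BusinessUnitIds { get; }
}

public class Client
{
    public int Id { get; set; }
    public int BusinessUnitId { get; set; }
}

public class ClientsController : ControllerBase
{
    private readonly IQueryable<Client> _clients;   // assumed: supplied via DI
    private readonly ICurrentUser _user;

    public ClientsController(IQueryable<Client> clients, ICurrentUser user)
    {
        _clients = clients;
        _user = user;
    }

    [EnableQuery]
    public IQueryable<Client> Get()
    {
        // The security predicate is composed first; whatever $filter the
        // caller sends is layered on top and cannot widen this scope.
        var allowed = _user.BusinessUnitIds;
        return _clients.Where(c => allowed.Contains(c.BusinessUnitId));
    }
}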
The $filter and $expand concepts could be added more generally to interfaces that would allow the database to be better encapsulated: the interface applies the filter behind the scenes. The OData implementation could make the filters available on the context rather than applying them 'outside' of the controller.
It would be interesting to know what you mean by "a problematic concept". This model seems so natural to me. I'm amazed it doesn't work this way already.
I'm new to DDD and I would like your advice.
In my UI I need to view data from 2 aggregates. I'm using EF Core, and as I have read, it's better to keep only one navigation between entities so as not to mix two aggregates, and to avoid serialization issues due to circular references.
How should I make the query?
Do I need to create a new view whenever I need data from 2 aggregates?
If I need to create views, in which layer should they exist? The infrastructure/persistence layer or the domain?
Thank you
How should I make the query?
With the simplest and fastest technology you can use. I mean: if building the query with EF Core requires several steps and a lot of extra objects, change approach and try a direct SQL request. It's a query, something you can test fast and change equally fast, whenever you need to.
Do I need to create a new view whenever I need data from 2 aggregates?
You don't. With a view you hide away (in the view) the complexity of the data read (at the cost of changing the DB every time the data to show changes), with the illusion/feeling that you are managing an entity. Of course, it should be clear that the data comes from a view. A query, on the other hand, is more code-related (to change the data shown you just change the query), but you also show "directly" that the data comes from several sources.
Note: I used EF Core years ago, and for a really simple project. If by view you instead mean an EF Core view, then I would say yes, but only if building it doesn't require several steps/joins to gather the information. I would always consider a direct approach when it looks like the code is starting to get a bit too complex just to show some data.
Here, anyway, things can go really deep: do you have all your entities (roots) in the same project? Or do you have several microservices? With microservices, how do you share the data, and how do you store it? Maybe a query is not viable, or it reads partially stale data. As you can see, there are several things to take into account when you have to read the data.
If I need to create views, in which layer should they exist? The infrastructure/persistence layer or the domain?
As stated before, if you mean a view within EF Core, I would put it really close to the layer where you're going to use it. But it could depend. You could have a look here.
Personally, I use 3 layers: domain, application and infrastructure. My views are in the application layer, because I have several queries that I reuse for different purposes. But before going into the infrastructure (where the requests are), I transform the results into the format required by the UI.
DDD is about putting together all the business logic that would otherwise be spread around several entities, services and even controllers. With this solution, all the actions that the domain offers can be performed without requiring extra logic outside the domain itself. Of course you need to implement the services that the domain is going to use; this is obvious.
On the other side, it is clear, at least to me, that reuse is limited to the domain itself. I mean:
I can build a big query that collects a lot of information from different sources and reuse it for several UI views, but I have to be ready to pay the price of something I must touch every time something in the UI changes (and anyway I need to transform the result into a view-related object);
I can build small, specialized queries that I use for 1 or 2 (if they are the same) UI views, paying the price of more code to maintain (but simple, specialized, and really fast to test!), where the query can produce something close or equal to the view-related object.
The second approach is the basis of CQRS, and I prefer that one. Remember, you can do CQRS even without an event store and eventual consistency: you just take part of it, not the whole. We design to simplify our lives, not to make them harder.
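A sketch of such a small, specialized read-side query (Dapper-style raw SQL assumed; the table, column and type names are invented):

using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

// The exact shape one UI view needs, combining data from two aggregates.
public record OrderSummaryView(int OrderId, string CustomerName, decimal Total);

public class OrderSummaryQuery
{
    private readonly string _connectionString;

    public OrderSummaryQuery(string connectionString) => _connectionString = connectionString;

    public async Task<IReadOnlyList<OrderSummaryView>> RunAsync(int customerId)
    {
        const string sql = @"
            SELECT o.Id AS OrderId, c.Name AS CustomerName, o.Total
            FROM Orders o
            JOIN Customers c ON c.Id = o.CustomerId
            WHERE c.Id = @customerId";

        await using var connection = new SqlConnection(_connectionString);
        var rows = await connection.QueryAsync<OrderSummaryView>(sql, new { customerId });
        return rows.AsList();
    }
}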
I currently access a Web API endpoint serving up hierarchical objects (complex deals) using JSON/BSON. The objects are translated from Entity Framework objects stored as standard normalised data in a SQL Server database. This all works well.
However, as the number of these objects grows, it becomes increasingly inefficient to serialise/deserialise them across the wire before filtering out those required at the client. Having methods for all objects or object-by-id is fine, but often there are more complex criteria for filtering, which would require a myriad of different method signatures to fully capture. In an ideal world it would be possible to send a Func<Deal, bool> to the Deals endpoint, providing a filtering mechanism defined client-side but enacted server-side. The premise is that different users will be interested in deals based on varying facets.
This may be mad, but is there any way that something along these lines can be achieved?
I do this by passing a "SearchCriteria" object to the search endpoint and then performing filtering at the server based on the values set in the various criteria properties. However, we do have a fairly well-defined list of criteria, so performing the filtering isn't too bad.
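Roughly what I mean, as a hedged sketch (the DTO, entity and endpoint names are illustrative, and ASP.NET Core syntax is assumed):

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class Deal
{
    public int Id { get; set; }
    public string Counterparty { get; set; } = "";
    public decimal Notional { get; set; }
}

public class DealSearchCriteria
{
    public string? Counterparty { get; set; }
    public decimal? MinNotional { get; set; }
}

public class DealsController : ControllerBase
{
    private readonly IQueryable<Deal> _deals;   // assumed: supplied by the data layer

    public DealsController(IQueryable<Deal> deals) => _deals = deals;

    [HttpPost("api/deals/search")]
    public ActionResult<List<Deal>> Search([FromBody] DealSearchCriteria criteria)
    {
        // Only the criteria the client actually set become Where clauses.
        var query = _deals;
        if (!string.IsNullOrEmpty(criteria.Counterparty))
            query = query.Where(d => d.Counterparty == criteria.Counterparty);
        if (criteria.MinNotional.HasValue)
            query = query.Where(d => d.Notional >= criteria.MinNotional.Value);
        return query.ToList();
    }
}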
Alternatively, I've not used OData, but from what I understand it might be what you are looking for. If I were pondering this again, I would investigate it.
https://learn.microsoft.com/en-us/aspnet/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/create-an-odata-v4-endpoint
I've been using BreezeJS in a number of projects for a while now, and I have to say that in most ways it makes your life MUCH easier, which is why I keep coming back to it. However, I seem to consistently run into a scenario where it falls completely flat, and I can't seem to find any "correct" way of working around the issue.
Let me explain. One of the best things about BreezeJS is that it follows a UoW pattern that allows you to save entities using the saveChanges method of the entity manager, like EF.
However, this is also part of the problem, because as you develop more and more sophisticated applications, I sometimes feel this approach is not always appropriate. I find that I often have:
Operations that don't really involve creating entities on the client, but rather involve executing an action on the Web API that may result in the creation of various entities or other forms of state on the server, which should then be sent back to the client.
Operations that involve entities with properties that cannot be saved, because some of them are private to the server and should not be put on the client (often solved with a JsonIgnore for the client, but that comes with issues when you start persisting the given entity again).
I feel that there is one thing that could solve these issues relatively easily, and it is a concept that already exists in OData: Actions. Actions can be performed globally, on entity sets, or on specific entities, and then return either custom objects or entities that will be directly tracked by BreezeJS.
Currently, I find myself doing the following workaround (which I don't know if is appropriate):
Make a "Resource" action on the BreezeController that represents an action rather than an an actual resource. This takes in a custom parameter object and returns a non-entity object, that may contain actual entities (as described under "Cool Breezes" with the Lookups, because these will then be track by BreezeJS)
Use the "ajaxpost" breeze lab to allow querying a resource with a POST instead of a GET so any sort of arguments can be passed in.
Is there a more appropriate way of accomplishing something like this? Are there future plans to support custom actions?
An approach I have seen to solve this type of operation is to simply make these sorts of operations "around" the breeze API controller; that is, simply using an ApiController that has nothing to do with breeze. But I kind of feel this defeats the purpose of breeze, because then, if the operation results in the creation or deletion of entities, you must start tracking them yourself on the client, by either creating them locally or by issuing another breeze query to go get them. This really gets tiresome if you need a lot of these types of operations.
I’m struggling with the same issue myself. I have an app that uses breeze to store trades in a SQL database and after the trade is stored, another user can use the app to send the trade to a backend trading system. I created an OData action to do the import to the external trading system so I can do a post to /trades(123)/ImportTrade. When I get the metadata for the service using /$metadata it sees that the trades entity has this action (it's in the metadata).
I was hoping that breeze would see this in the metadata and create a method on the trades entity to do a post to my OData action, but it does not. This would be a great feature if it was added to breeze (exposing OData actions as methods on entities).
As a workaround I have extended the breeze entity myself with a custom method that does the post to /trades(???)/ImportTrade.
It would be great if breeze could handle this for us!
Your approach using ajax post is a good way to do those kinds of things...
You can also make your own context, by inheriting from BreezeContext, that has nothing to do with a DB, and do your actions there without saving the entities, and still get the result back as a non-tracked object or entities.
If you create a new entity on the server (not always a good idea with breeze, but it can still be done), you have to make sure that breeze will still generate the temp keys for that entity.
You can use the temp key generator, or just delete the primary keys of the non-tracked object.
You can use the metadata of that entity type to get its primary key properties and then delete them using JavaScript, like so: delete obj[prop]
Then use createEntity with the non-tracked entity that doesn't have primary keys.
Breeze will then generate the primary keys for you and you're all set.
I also hope that Breeze will address the need to do custom actions that may return a custom non-tracked object in a more intuitive way.
Hope this helps
I'm trying to figure out how to write an IQueryable data source that can pull and combine data from multiple sources (in this case Azure Table, Azure Blobs, and ElasticSearch). I'm really having a hard time figuring out where to start with this though.
The idea is that a web service (in this case an ASP.NET Web API) can present a queryable OData interface, but when it gets queried it pulls data from multiple sources depending on what is requested. So large queries might hit the indexing service (ElasticSearch), which wouldn't necessarily have the full object available, but calls to get an individual object would go directly to the Azure Tables. From the service user's perspective, though, it's always just the same data source.
While I would like to just use the index as our search service and the tables as our backup, I have a design requirement that it has to pull data from multiple sources, which greatly complicates this whole thing.
I'm wondering if anyone has any guidance on this or can point me towards the right technologies. Some of the big issues I'm seeing are:
the backend objects aren't necessarily the same as the front-end objects being queried. Multiple backend objects may get combined into a single front-end one, or it may have computed values, so a LINQ query would have to be translated or mapped
changing data sources based on query parameters
Here is a quick overview of the technology I'm working with:
ASP.Net Web API 2 web service running as an Azure Cloud service
ElasticSearch running on SUSE VMs (on Azure)
Azure Tables
Azure Blobs
First, you need to separate the data access from the Web API project. The Web API project is merely an interface, so remove it from the equation. The solution to the problem should be the same regardless of whether it is web API or an ASP.NET web page, an MVC solution, a WPF desktop application, etc.
You can then focus on the data problem. What you need is some form of "router" to determine the data source based on the parameters that drive the decision. In this case, you are talking about routing single-item lookups to Azure and using the map/reduce-style index when more than 1 item is requested (I would set up the rules as a strategy or similar, so you can swap them out if you find that 1 versus 2+ is not a good condition for changing the route; a sketch follows the flow below).
Then you solve the data access problem for each methodology.
The system as a whole:
User asks for data (user can be a real person or another system through the web api)
Query is partially parsed to determine routing path
Router sends data request to proper class that handles data access for the route
Data is returned
Data is routed back to the user via whatever user interface is used (in this case Web API; see paragraph 1 for other options)
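A minimal sketch of that router-as-strategy idea (all type names are invented; the real condition and data calls would be yours):

using System;
using System.Collections.Generic;

public record DataQuery(int? Id, string? SearchText);
public record DataResult(int Id, string Summary);

public interface IDataSourceStrategy
{
    bool CanHandle(DataQuery query);              // the routing condition
    IReadOnlyList<DataResult> Execute(DataQuery query);
}

public class AzureTableSource : IDataSourceStrategy
{
    // Single-item lookups by key go straight to Azure Tables.
    public bool CanHandle(DataQuery query) => query.Id.HasValue;
    public IReadOnlyList<DataResult> Execute(DataQuery query)
        => new List<DataResult>(); // placeholder for the table lookup
}

public class ElasticSearchSource : IDataSourceStrategy
{
    // Broad searches hit the index, which holds only a partial projection.
    public bool CanHandle(DataQuery query) => !query.Id.HasValue;
    public IReadOnlyList<DataResult> Execute(DataQuery query)
        => new List<DataResult>(); // placeholder for the index search
}

public class DataSourceRouter
{
    private readonly IEnumerable<IDataSourceStrategy> _sources;

    public DataSourceRouter(IEnumerable<IDataSourceStrategy> sources) => _sources = sources;

    public IReadOnlyList<DataResult> Execute(DataQuery query)
    {
        foreach (var source in _sources)
            if (source.CanHandle(query))
                return source.Execute(query);
        throw new InvalidOperationException("No data source can handle this query.");
    }
}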
One caution: don't try to mix all types of persistence, as a generic "I can pull data or a blob or a {name your favorite other persistent storage here}" often ends up becoming a garbage can.
This post has been out a while. The 2nd/last paragraph is close, yet still restricted... even a couple of years ago, this architecture was commonplace.
Whether the core interface is written in WPF or ASP.NET or Java, or whatever, the critical path is the result set based on a query for information. This is high-level, but I'm sharing more than I should because of other details of a project I've been part of for several years.
Develop your core interface. We did a complete shell that replaced Windows/Linux entirely.
Develop a solution architecture wherein Providers are the source component. Providers are the publishing component.
Now, regardless of your query 'source', it's just another Provider. The interfacing to that Provider is abstract and consistent, regardless of the Provider::SourceAPI/ProviderSourceAPI::Interface.
When the user wants to query for anything... literally anything... criminal background checks... just hit Google... query these specific public libraries somewhere in SW USA/Anywhere USA for activity on checkouts or check-ins... it's all relevant. Step back and consider the objective. No solution is too small, and guaranteed, none is too large for this: abstract the objectives of the solution, and code them.
All queries, regardless of what is being searched for, are simply queries.
All responses, regardless of the response/result-set, are results: the ResultantProviderModel / ResultantProviderController (no, I'm not referencing MVC specifically).
I cannot code you a literal example here, but I hope I challenge you to consider the approach and solution in a much more abstract and open way than what I've read here. The physical implementation should be much simpler and VERY abstract from a specific technology stack. The searched source? It MUST be abstract, and use a Provider architecture to implement it. So, if I have a tool my desktop or basically office workers use, they query for something... What has John Doe written on physics?
In a corporation leveraging SharePoint and FAST Search? This is easy, out of the box stuff...
For a custom user-interfacing component, well, then you have the backend plumbing to resolve. So abstract each piece/layer from an architecture approach. Pseudo-code it out, however you choose to do that. Most important is that you do not get locked into a mindset tied to a specific development paradigm, language, IDE, or whatever. If you can design the solution abstractly and walk it through in pseudo-code, and do this for each abstraction layer, then start coding it. The source is relative... the publishing aspect is relative, consistent.
I do not know if you'll grasp this - but perhaps someone will - and it'll prove helpful.
HTH's...