Design pattern for multiple DAC and SAC calls [closed] - c#

Scenario: after request validation, I will need to:
1. Call the database and update the record. Based on a certain condition involving the outcome of this database call and the request data, I will call Service1.
2. After that step, I will call another database and update a record from the request.
3. Finally, I will call an audit service to save the transaction details.
This is achievable with a normal code structure, but I am fairly confident more steps will be plugged in after step 1 or 2, i.e., another database/service call will be introduced in the next release after step 1 or step 2 (TBD).
I decided to opt for Chain of Responsibility.
Problems
Wherever an operation breaks or an exception is thrown, the code should stop executing.
With a single Logging object, I am having difficulty handling the sequential calls.
For step 1's conditional service call, dynamically modifying the chain of operations is a bit complex, as I have to rely on a single return type from the AbstractionHandler.
Is there any alternative design pattern that I can follow?

You have a scenario in which you have a sequence of operations that may or may not occur based on the result of previous operations.
In my opinion you're choosing the right pattern; Chain of Responsibility is a good choice.
You just need to adapt the classic implementation, which passes the request along the chain of potential handlers until one of them handles it.
Basically, you can change the implementation of each operation so that when its condition is valid, it executes its own logic and returns the result for the next operation in the chain.
So wherever an operation fails, you shouldn't throw exceptions, because exceptions should be reserved for exceptional conditions (conditions that your normal logical flow does not handle); within a chain of responsibility it is expected that some operation may return a signal to interrupt the chain (an expected result).
Considering that, in my opinion you shouldn't throw an exception in this situation. Instead, you should return a controlled signal to stop the flow of the chain.
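A minimal sketch of that idea (RequestContext, HandlerResult and the concrete handlers below are hypothetical names, not taken from your code):

using System;

// Hypothetical request/result types; adapt them to your actual DTOs.
public class RequestContext { /* request data and intermediate results */ }
public class HandlerResult
{
    public bool Continue { get; set; }
    public string Message { get; set; }
}

public abstract class Handler
{
    private Handler _next;
    public Handler SetNext(Handler next) { _next = next; return next; }

    public HandlerResult Handle(RequestContext context)
    {
        var result = Process(context);
        // Controlled stop: no exception, the chain simply ends here.
        if (!result.Continue || _next == null) return result;
        return _next.Handle(context);
    }

    protected abstract HandlerResult Process(RequestContext context);
}

public class UpdateRecordHandler : Handler
{
    protected override HandlerResult Process(RequestContext context)
    {
        bool updated = true; // placeholder for the real database update outcome
        return new HandlerResult { Continue = updated, Message = updated ? null : "Record update failed" };
    }
}

public class AuditHandler : Handler
{
    protected override HandlerResult Process(RequestContext context)
    {
        Console.WriteLine("Audit saved"); // placeholder for the audit service call
        return new HandlerResult { Continue = true };
    }
}

Wiring is then just keeping a reference to the first link (for example new UpdateRecordHandler() with SetNext(new AuditHandler()) chained onto it) and calling Handle on it; the step you expect to add in a later release becomes one more link inserted between two existing ones.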
Regards,

Related

Heavy I/O with events firing much faster than they can be handled, causing problematic backlog [closed]

In a nutshell, my project is receiving data faster than it can process and then write it to a database (EF6 to SQL Server 2016), and I'm not sure what the best-practice approach is (ditch EF? Offload to database via Service Broker? Something else?) Write events are not being handled fast enough, so they result in cascading event logjams and fatal memory crashes.
The write events are (I want them to be) low-priority, and I'm using async tasks for them. The write events involve a lot of data and a lot of relationships, and EF is just not handling them efficiently (I'm using AddRange, but EF is just sending everything in many single inserts, which I've read is its regular behavior).
I've tried paring back the relationships, and I've moved more processing over to the database, and I've tried using a batched "Delayed"Queue (an observable queue implementation that triggers an "empty me" event when a threshold is met), so that the inbound write events can be handled very quickly (just dump the request in the queue and move on), but this didn't get me anywhere (not surprising, I suppose, since I've basically added a message queue on top of the built-in message queue?).
Please correct me if I'm wrong, but it seems to me that EF is not the right tool for something as write-heavy and relationship-heavy as what I have (I know there are bulk-write extensions...). So, in an effort to resolve this sensibly, would it make sense to bypass EF and do my own bulk-write queries, or is this an appropriate use for Service Broker? With Service Broker, I could just send a dataset in one sproc, which just adds the dataset to the queue, frees the frontend to move on, and the database can handle and build the relationships whenever. Are these solutions sensible or best practice, or am I barking up the wrong tree (or putting lipstick-on-a-pig maybe)?
Thank you.
"Please correct me if I'm wrong, but it seems to me that EF is not the right tool for something as write-heavy and relationship-heavy as what I have"
You are right.
By default, as you said, Entity Framework performs one database round-trip for every record it saves, which is INSANELY slow.
Disclaimer: I'm the owner of Entity Framework Extensions
(The library is not free)
This library allows you to improve Entity Framework performance.
I'm not sure if our library can help you, but it's worth a try if you save multiple entities at once.
For example, BulkSaveChanges works exactly like SaveChanges but is much faster, dramatically reducing the number of database round-trips required.
Bulk SaveChanges
Bulk Insert
Bulk Delete
Bulk Update
Bulk Merge
Example
// Easy to use
context.BulkSaveChanges();
// Easy to customize
context.BulkSaveChanges(bulk => bulk.BatchSize = 100);
// Perform Bulk Operations
context.BulkDelete(endItems);
context.BulkInsert(endItems);
context.BulkUpdate(endItems);
// Customize Primary Key
context.BulkMerge(endItems, operation => {
    operation.ColumnPrimaryKeyExpression = endItem => endItem.Code;
});
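If you would rather not take a dependency on a commercial library, the "bypass EF for the bulk path" idea from the question could look roughly like this with plain SqlBulkCopy (the table name and batch size below are made up; map them to your real schema):

using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class BulkWriter
{
    // One bulk operation instead of one INSERT round-trip per row.
    public static async Task WriteAsync(DataTable rows, string connectionString)
    {
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.EndItems"; // hypothetical target table
            bulk.BatchSize = 5000;
            await bulk.WriteToServerAsync(rows);
        }
    }
}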

How would you correctly return a collection of objects asynchronously? [closed]

I need to define methods in my core interface that return lists. My project heavily relies on the use of async/await so I need to define my core references/interfaces as asynchronous as possible. I also use EF7 for my data-access layer. I currently use IAsyncEnumerable everywhere.
I am currently deciding whether to keep using IAsyncEnumerable or to revert back to using Task<IEnumerable<T>>. IAsyncEnumerable seems promising at this point. EF7 is using it as well. The trouble is, I don't know and can't figure out how to use it. There is almost nothing on the website that tells anyone how to use Ix.Net. There's a ToAsyncEnumerable extension that I can use on IEnumerable objects but this wouldn't do anything asynchronously (or does it??). Another drawback is that given the below signature:
IAsyncEnumerable<Person> GetPersons();
Because this isn't a function that returns Task, I can't use async/await inside the function block.
On the other hand, my gut is telling me that I should stick with using Task<IEnumerable<T>>. This of course has its problems as well. EF does not have an extension method that returns this type. It has ToArrayAsync and ToListAsync extension methods, but these of course require you to call await inside the method because Task<T> isn't covariant. This is potentially a problem because it creates an extra operation that could be avoided if I simply returned the Task object.
My questions is: Should I keep using IAsyncEnumerable (preferred) or should I change everything back to Task<IEnumerable<T>> (not preferred)? I'm open to other suggestions as well.
I would go with IAsyncEnumerable. It allows you to keep your operations both asynchronous and lazy.
Without it you need to return Task<IEnumerable<T>>, which means you're loading all the results into memory. In many cases that means querying for, and holding, more memory than needed.
The classic case is a query on which the user calls Any. If it's Task<IEnumerable<T>>, it will load all the results into memory first; if it's IAsyncEnumerable, loading one result is enough.
Also relevant: with Task<IEnumerable<T>> you need to hold the entire result set in memory at the same time, while with IAsyncEnumerable you can "stream" the results a few at a time.
Also, that's the direction the ecosystem is heading: it was added by Reactive Extensions, is used by a new library suggested by Stephen Toub just this week, and will probably be supported natively in a future version of C#.
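To make the difference concrete, here is a small self-contained sketch (it uses the await foreach / async-iterator syntax that later shipped with C# 8; with EF the same roles are played by ToListAsync versus an async enumerable over the query):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class StreamingDemo
{
    // Streamed: each element is produced (and awaited) only when the consumer asks for it.
    static async IAsyncEnumerable<int> GetNumbersStreamedAsync()
    {
        for (var i = 0; i < 1000; i++)
        {
            await Task.Delay(10); // stand-in for a per-row async fetch
            yield return i;
        }
    }

    // Buffered: all 1000 elements are produced before the caller sees anything.
    static async Task<IEnumerable<int>> GetNumbersBufferedAsync()
    {
        var list = new List<int>();
        for (var i = 0; i < 1000; i++)
        {
            await Task.Delay(10);
            list.Add(i);
        }
        return list;
    }

    static async Task Main()
    {
        // With the streamed version we can stop after the first element,
        // paying for one delay instead of a thousand.
        await foreach (var n in GetNumbersStreamedAsync())
        {
            Console.WriteLine(n);
            break;
        }
    }
}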
You should just use Task<IEnumerable<T>> return types. The reason is simply that you don’t want to lazily run a new query against the database for every object you want to read, so just let EF query those at once, and then pass that collection on.
Of course you could make the async list into an async enumerable then, but why bother. Once you have the data in memory, there’s no reason to artificially delay access to it.

Using static functions in an ASP.NET 3.5 website [closed]

I am building an ASP.NET web application in which I use several classes containing static functions for retrieving database values and such (based on the user's session, so their results are session-specific, not application-wide).
These functions can also be called from markup, which makes developing my GUI fast and easy.
Now I am wondering: is this the right way of doing things, or is it better to create a class, containing these functions and create an instance of the class when needed?
What will happen when there are a lot of visitors to this website? Will a visitor have to wait until the function is 'ready' if it's also called by another session? Or will IIS spread the workload over multiple threads?
Or is this just up to personal preferences and one should test what works best?
EDIT AND ADDITIONAL QUESTION:
I'm using code like this:
public class HandyAdminStuff
{
    public static string GetClientName(Guid clientId)
    {
        Client client = new ClientController().GetClientById(clientId);
        return client.Name;
    }
}
Will the Client and ClientController instances be released after this function completes? Will the garbage collector clean them up? Or will they continue to 'live' and bulk up memory every time the function is called?
** Please, I don't need answers like 'measure instead of asking', I know that. I'd like to get feedback from people who can give a good answer and maybe some pros and cons, based on their experience. Thank you.
"Will a visitor have to wait until the function is 'ready' if it's also called by another session?"
Yes. It may happen if the function body synchronizes access (for example, takes a lock), or if you perform DB operations within a transaction that locks the database.
Take a look at these threads:
http://forums.asp.net/t/1933971.aspx?THEORY%20High%20load%20on%20static%20methods%20How%20does%20net%20handle%20this%20situation%20
Does IIS give each connected user a thread?
It would be better to have instance-based objects, because they can also be easily disposed (connections, possibly?) and you wouldn't have to worry about multithreading issues, in addition to all the problems "peek" mentioned.
For example, each and every function of your static DAL layer should be atomic. That is, no variables should be shared between calls inside the DAL. It is a common mistake in ASP.NET to think that [ThreadStatic] data is safe to use inside static functions. The only safe pool for storing per-request data is the Context.Items pool; everything else is unsafe.
Edit:
I forgot to answer your question regarding IIS threads. Each and every request from your customers will be handled by a different thread. As long as you are not using Session State, concurrent requests from the same user will also be handled concurrently by different threads.
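A tiny illustration of the Context.Items point above (the key and property names are arbitrary):

using System.Web;

public static class RequestCache
{
    // Per-request storage: safe even from a static member, because
    // HttpContext.Current.Items belongs to the request currently being handled.
    public static string ClientName
    {
        get { return (string)HttpContext.Current.Items["ClientName"]; }
        set { HttpContext.Current.Items["ClientName"] = value; }
    }
}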
I would not recommend using static functions for retrieving data, because they will make your code harder to test and maintain, and they can't take advantage of OO design principles. You will end up with more duplicate code, etc.

Looping code or recursive method calling? [closed]

I am working on a project where we have to work out delivery times based on rules in the database. The same day can have a few possibilities (same-day deliveries), but Fridays and Saturdays don't have rules, so we have to look ahead and find the Monday rule.
Sorry for the long-winded explanation...
To add to the complexity, we also calculate when the item can be collected at the delivery point, so we work out the pick-up time based on AM/PM guarantees and make sure the place is open and it's not a holiday...
When I initially wrote the logic, I made a method that takes a date, calculates all these values, and returns our Calculated model. Just before the end I put in a test to make sure the model is populated; if it isn't, no match was made for that date and time, so I increment the day by 1 and call my method again with the incremented datetime and the rule, until I hit the return and everything bubbles back to the original stack call. For me that worked like a charm: single-level if statements and no complicated ands and ors.
Basically, that code was shot down because the testers and other developers did not understand how to debug it or what it was doing.
The new proposal is a single method that does the same thing but enclosed in a while loop that runs until the condition is met. Within the while there is a foreach that validates that the deliveries can be met, followed by a line of conditional ifs and nested ors, and then it returns the Calculated model.
Is it bad to call the same method from within itself, with adjusted values, until the ultimate condition is met?
Both code fragments work fine; I just find a nested foreach inside a while with conditional ifs more difficult to decipher than a flat set of rules.
Although recursion can lead to some elegant solutions, it can also lead to difficult-to-follow code and stack overflows, as each recursive call allocates a new stack frame. By default each thread has a 1 MB stack, so it doesn't take long to run out of space.
Tail recursion can fix this, as long as you're actually making a tail-recursive call and the compiler can spot it. At the IL level there is support for tail recursion via the tail. prefix, but the C# compiler doesn't generate code that uses it.
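For what it's worth, the "advance the day until a rule matches" part described in the question can be written either way; a rough sketch of both forms (GetRuleFor, BuildModel and CalculatedModel are hypothetical stand-ins for the real rule lookup and model building):

using System;

class DeliveryCalculator
{
    class CalculatedModel { public DateTime Delivery; }

    // Hypothetical stand-in: Fridays and Saturdays have no rule.
    string GetRuleFor(DateTime date)
    {
        return (date.DayOfWeek == DayOfWeek.Friday || date.DayOfWeek == DayOfWeek.Saturday)
            ? null : "some rule";
    }

    CalculatedModel BuildModel(DateTime date, string rule)
    {
        return rule == null ? null : new CalculatedModel { Delivery = date };
    }

    // Recursive version: reads as a flat set of rules, but each miss adds a stack frame.
    CalculatedModel Calculate(DateTime date)
    {
        var model = BuildModel(date, GetRuleFor(date));
        return model ?? Calculate(date.AddDays(1));
    }

    // Iterative version: the same logic with constant stack depth.
    CalculatedModel CalculateLoop(DateTime date)
    {
        while (true)
        {
            var model = BuildModel(date, GetRuleFor(date));
            if (model != null) return model;
            date = date.AddDays(1);
        }
    }
}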

Does encapsulation require more processing than a public variable? [closed]

I'm used to using encapsulation no matter what, all of my variables are private.
But when I'm handling thousands of instances with thousands of properties, I start thinking about optimization, wondering if the benefits of encapsulation justify the performance penalty (if any).
I'm aware of why one should use encapsulation; what I'm asking is: is encapsulation worth the processing it requires when it isn't strictly needed? How much overhead does it add?
I think you're missing the point of encapsulation. The point of encapsulation is that the object controls ALL interaction with its fields, thus enforcing business logic uniformly and protecting the state of the system. Given that you would have to run the business logic anyway, you're not saving anything by just using data objects.
Your first choice should be to encapsulate things. Most of the time, the setter and getter functions should get inlined.
All you "lose" is the time it takes for any extra logic involved in verifying that you are not setting an invalid value, etc. But you wouldn't want to skip that just for the sake of speed, would you?
So, if the alternative is to write
if (x >= 0) obj.x = x;
or
obj.setx(x); // where setx checks that x >= 0.
which is better?
If there are performance criteria for the system, then benchmark. If you are meeting the criteria, fine. If not, figure out where the bottlenecks are. But as long as your setter and getter functions are "normal" ones (that is, just storing the value after some checking), they shouldn't be the bottleneck. The typical bottleneck is a poor choice of algorithm.
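For example, a trivial comparison (the JIT will normally inline the Value accessors, so the encapsulated form costs essentially the validation check and nothing more):

using System;

public class Measurement
{
    private int _value;

    // Encapsulated: the check runs on every assignment, everywhere, automatically.
    public int Value
    {
        get { return _value; }
        set
        {
            if (value < 0) throw new ArgumentOutOfRangeException("value");
            _value = value;
        }
    }
}

public class RawMeasurement
{
    // Public field: every caller must remember to validate before assigning.
    public int Value;
}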
