How do I dispose of objects in a singleton WCF service? I am using Entity Framework (3.5) and returning a bunch of custom POCO objects to the client. The service needs to stay alive because it provides cross-client communication, so duplex binding is used. I would like to dispose of all the POCO objects once they have been serialized to the client.
As the session, and hence the service, is still alive, it looks like the framework is not garbage collecting these objects, and over time the service crashes with an "Insufficient Memory" style error (at around 2 GB).
I don't think Dispose can be called before the return statement, as the objects have not been serialized by that point.
Please suggest a solution.
Thanks in advance.
First, do not use a singleton service. Why? Well, your question is the answer.
As I see it, your service should use per-call instancing, and the callback channels should be managed by another class or as a static member of the service class.
Second, check whether you are keeping references to the POCOs you return to the client, because the GC only cleans up unreferenced objects. If you find such a reference, just set those members to null and the GC will do the rest (you have nothing to worry about with method variables).
I think you're on the wrong track here; if your objects are POCOs, do they even implement IDisposable (not sure why you would for a POCO class)? My guess is you've got something else that is chewing up your memory. Possibly your singleton service is just living too long and accumulating too much state; you might want to look at a different service model, maybe an instance per session or something like that.
One thing you could do, however, is rather than serializing your POCO objects directly, create very simple 'messaging' classes that have only the properties you want to serialize and send those instead. You could copy the properties to your message objects and then dispose of your database objects immediately, as sketched below.
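For illustration, here is a minimal sketch of that approach, assuming a hypothetical OrderMessage DTO, a MyDuplexService singleton service and a MyEntities EF 3.5 context; all names are stand-ins for your own types:

using System.Linq;
using System.Runtime.Serialization;

// Hypothetical message/DTO type: only the members the client actually needs.
[DataContract]
public class OrderMessage
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Description { get; set; }
}

public class MyDuplexService // your singleton WCF service
{
    public OrderMessage GetOrder(int id)
    {
        // Create and dispose the EF context per call so the long-lived
        // singleton service doesn't keep entities or change tracking alive.
        using (var context = new MyEntities()) // hypothetical EF 3.5 context
        {
            var entity = context.Orders.First(o => o.Id == id);

            // Copy the data out; the entity becomes collectible as soon as
            // the context is disposed and nothing else references it.
            return new OrderMessage { Id = entity.Id, Description = entity.Description };
        }
    }
}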
In the video of the course CQRS in Practice, the Startup.cs code has the following:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddScoped<UnitOfWork>();
}
However, does the code need to be services.AddTransient<UnitOfWork>() because there is no Dispose method on UnitOfWork? Why is UnitOfWork.Dispose() required for AddScoped?
The lifetime of an object (scoped, transient, singleton) is a wholly separate issue from whether or not the object implements IDisposable.
It is sometimes the case that objects that implement IDisposable are used in Dependency Injection (often because they're external dependencies that have unmanaged resources), but it's not always that way.
AddScoped, in the context of ASP.NET Core, means that for the lifetime of an ASP.NET request, that same object will be used.
AddTransient, in the context of ASP.NET Core, means that every resolution of the type, even during the same HTTP request, will produce a new instance of that object.
For your particular problem, the Unit of Work issue: before switching to Transient, make sure whatever database you're using is OK with multiple readers and writers. If you use AddTransient and make multiple calls to the database, you will open new transactions and (possibly) connections for each call, and some databases do not like this very much (Postgres being a shining example).
The lingo we use to talk about that is the Multiple Active Result Sets issue, and each database handles it differently.
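To make the difference concrete, here is a rough sketch of the two registrations with a disposable UnitOfWork; MyDbContext is a hypothetical placeholder for whatever data access you actually use:

using System;
using Microsoft.Extensions.DependencyInjection;

public class UnitOfWork : IDisposable
{
    private readonly MyDbContext _context = new MyDbContext(); // hypothetical data access

    public void Commit()
    {
        _context.SaveChanges();
    }

    // The container calls Dispose when the scope the instance was resolved
    // from ends; for a web app that is the end of the HTTP request, for
    // both scoped and transient registrations.
    public void Dispose()
    {
        _context.Dispose();
    }
}

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // One instance per HTTP request: everything resolved during the request
    // shares the same UnitOfWork (and therefore the same connection/transaction).
    services.AddScoped<UnitOfWork>();

    // Alternative: a new instance every time UnitOfWork is injected, which can
    // mean several open connections within a single request.
    // services.AddTransient<UnitOfWork>();
}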
I have a WCF-based, C# CRUD REST api that works on Schedule objects. There are methods for creating, updating, deleting as you'd expect.
My problem is that the Schedule object contains a TriggerInfo subobject. If you just call the constructor, you're really calling the constructor of a proxy, and the real constructor is never called, so the subobjects are not initialized.
The proxy that WCF emits has TriggerInfo as a field but it's always going to be null because the constructor logic in the "real" class is never called.
In other words, when the client creates a C# 'Schedule' object, it's really creating a proxy of the real Schedule class, and the proxy knows nothing about having to init anything!
So in this chicken-and-egg situation, who creates the C# 'Schedule' object that the client can "fill out"?
I thought the C# client could create a Schedule object, fill out all the properties and pass it to the CreateSchedule() api and it'd work. Not so easy!
It'd work if I made a big, flat monolithic class where all of the TriggerInfo properties were properties on the Schedule object instead, but it's not very tidy, especially if you have multiple subclasses.
I could have a ScheduleFactory object exposed on my API that knows how to create one, but I don't know if that's a valid approach!
Don't create the Schedule object client-side if it needs any nontrivial initialization; just add a New or Create method to your WCF service and do it server-side. Alternatively, you can use new Schedule() client-side, get a new proxy instance with a lot of null properties, and fill in those properties with sensible default values server-side in the Save method.
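A rough sketch of the server-side Create approach, assuming hypothetical IScheduleService and SaveSchedule names on top of the Schedule and TriggerInfo types from the question:

using System.ServiceModel;

[ServiceContract]
public interface IScheduleService
{
    // Server-side factory: the real Schedule constructor runs here, so
    // TriggerInfo and the other sub-objects come back fully initialized.
    [OperationContract]
    Schedule CreateSchedule();

    [OperationContract]
    void SaveSchedule(Schedule schedule);
}

public class ScheduleService : IScheduleService
{
    public Schedule CreateSchedule()
    {
        // This runs on the server, so it is the genuine constructor,
        // not the client-side proxy's.
        return new Schedule();
    }

    public void SaveSchedule(Schedule schedule)
    {
        // Defensive defaults for clients that built the proxy type themselves
        // and left sub-objects as null.
        if (schedule.TriggerInfo == null)
            schedule.TriggerInfo = new TriggerInfo();

        // ... persist the schedule ...
    }
}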
In the Newtonsoft docs for CustomCreationConverter, it says:
"A more complicated scenario could involve an object factory or service locator that resolves the object at runtime."
I'm dealing with such a scenario where I have to hydrate a persistent object with an incoming DTO (which is in JSON). To hydrate the persistent object, I have to first load it using a service call. I can make this call during deserialization using CustomCreationConverter. Or I can consume the JSON as an ExpandoObject and transitively assign all members. Complexity is not much as the object graph is small.
Something about making service calls during deserialization does not seem right. It makes testing more complicated as I would have to first load that object into memory to be able to deserialize it. This also screams tight coupling.
So my question is: Is it a good idea to make service calls during deserialization in this scenario?
As I understand it, you are doing these steps:
Call the deserialization
In a CustomCreationConverter you retrieve a pre-populated object instance via a remote Service
Json.NET does its thing on the retrieved instance.
Well, it seems to me you could make use of the PopulateObject method, like so:
var obj = RemoteService.Retrieve(id);
Newtonsoft.Json.JsonConvert.PopulateObject(jsonString, obj);
This way you keep your code simple (albeit less fun) and testable.
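If you want to keep the remote call entirely out of the deserialization path, one possible shape is a small helper like this; the Hydrate method and the Order type are illustrative, not part of Json.NET:

using Newtonsoft.Json;

public static class Hydrator
{
    // Loading the target and applying the JSON are separate steps, so tests
    // can pass in any pre-built object instead of hitting a remote service
    // from inside a converter.
    public static T Hydrate<T>(T target, string json)
    {
        JsonConvert.PopulateObject(json, target);
        return target;
    }
}

// Production: load the persistent object first, then apply the DTO.
// var entity = Hydrator.Hydrate(RemoteService.Retrieve(id), jsonString);

// Tests: no service call needed.
// var entity = Hydrator.Hydrate(new Order { Id = 42 }, jsonString);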
I'm trying to use Ninject (version 3.0.1) in a WinForms application. I have several (currently) self-bound service classes, which I construct using Ninject. Some service classes need other service classes (sub-services). Most of these service classes need a repository to interact with the database, for which I have an abstract IRepository interface. I need the same repository for the whole service hierarchy within a service class, so I'm using the InCallScope() scope when binding IRepository. Currently I'm using XPO as an ORM tool, so I have an XpoRepository implementation, which I'm binding to. See my other question about this scenario.
My binding looks like this:
Bind<IRepository>().To<XpoRepository>().InCallScope();
I don't have explicit ToSelf() bindings for each service class, so I assume that when I get them from Ninject, they have the transient scope, which I interpret as meaning I have to dispose of them manually.
Assume that I have a Services1 and a Services2 service class, both having a constructor parameter of type IRepository. Now assume that Services1 would like to use some methods of Services2, so I add another constructor parameter of type Services2 to Services1. Without Ninject, I would do:
var repo = new MyRepository(); // implementing IRepository
var service1 = new Services1(repo, new Services2(repo));
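With the InCallScope binding above, the Ninject equivalent would look roughly like this; the implicit self-bindings for Services1 and Services2 are assumed:

using Ninject;
using Ninject.Extensions.NamedScope; // provides InCallScope()

var kernel = new StandardKernel();
kernel.Bind<IRepository>().To<XpoRepository>().InCallScope();

// Services1 and Services2 resolve through their implicit self-bindings.
// Because of InCallScope, this single Get call hands the SAME XpoRepository
// instance to Services1 and to the Services2 injected into it.
var service1 = kernel.Get<Services1>();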
I'm using one of the services in a background thread (using TPL), in a loop like this:
while (true)
{
    if (CancellationPending())
        break;

    using (var service = _kernel.Get<Service1>())
    {
        // do some stuff using the service class
    }

    Thread.Sleep(20 * 1000);
}
I had the same structure before using Ninject, so I have (I think) properly implemented disposal of every object, including repositories, in the correct places. However, I've noticed that since I started using Ninject for this, I have a big memory leak in my application, and it crashes every 2-3 hours with an OutOfMemoryException. I put a breakpoint inside the loop and noticed that the Ninject cache holds thousands of entries full of disposed XpoRepository objects. They were disposed by me, I guess, but I'm not sure who called the Dispose method.
Why is Ninject holding these disposed objects? I would expect that when I dispose the main service in the end of the using block (which is the scope of the IRepository objects due to InCallScope()) every object in its scope should be disposed and released by Ninject.
EDIT: Before any comment or answer about why this pattern is not good: I know that it could be better. I know I could extract service interfaces to actually make use of DI and improve testability, and I also know that I should probably use a Func<IRepository> as a constructor parameter and inject into that, so that every service could have its own responsibility to dispose of the repository. I simply have no time for such refactorings currently.
Ninject will release the repository if all the following things are true:
no one is holding a reference to service1
service1 itself has been GC'd (since you have a thread sleep of 20 seconds, there is a high chance that it has been promoted to Gen 2, which is collected very rarely)
Cache pruning was executed after service1 was GC'd. The cache pruning interval defaults to 30 seconds; you may want to try a shorter interval.
Alternatively to the previous point, you can try to force immediate release by implementing Ninject.Infrastructure.Disposal.INotifyWhenDisposed in service1.
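Something along these lines should cover the last two points; treat it as a sketch and verify the member names against your Ninject 3.0.1 build:

using System;
using Ninject;
using Ninject.Infrastructure.Disposal;

public static class KernelFactory
{
    // Option 1: prune the Ninject cache more aggressively (default is 30 s),
    // so entries for GC'd/disposed scopes are evicted sooner.
    public static IKernel Create()
    {
        return new StandardKernel(new NinjectSettings
        {
            CachePruningInterval = TimeSpan.FromSeconds(5)
        });
    }
}

// Option 2: tell Ninject immediately when the service is disposed, so the
// cached objects scoped to it can be released without waiting for GC/pruning.
public class Service1 : INotifyWhenDisposed
{
    public bool IsDisposed { get; private set; }
    public event EventHandler Disposed;

    public void Dispose()
    {
        if (IsDisposed) return;
        IsDisposed = true;
        var handler = Disposed;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}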
In my Azure web role code I have a CustomIdentity class implementing System.Security.Principal.IIdentity. At some point the .NET runtime tries to serialize that class, and serialization doesn't work. Trying to resolve that, I searched a lot, found this answer, and tried to inherit my class from MarshalByRefObject.
Now that my CustomIdentity class inherits from MarshalByRefObject, there are no serialization attempts anymore and my code works. However, I'd like to know the performance implications of using the MarshalByRefObject class.
My code runs like this. First the request comes to IIS and is passed to the authentication code that creates an instance of CustomIdentity and attaches that instance to HTTP context. Then some time later the same HTTP context is passed to the ASP.NET handler that accesses that CustomIdentity instance at most once. The CustomIdentity object lives for the duration of request and is then destroyed.
Now with serialization my CustomIdentity would be serialized into a stream, then deserialized from that stream into a new object. With MarshalByRefObject there's no serialization but a proxy is created and the access will be marshaled via RPC to where the actual object resides.
How expensive will using MarshalByRefObject be in this scenario? Which - MarshalByRefObject or serialization - will be more costly?
MarshalByRefObject means that all calls (methods, properties, etc.) are proxied over the wire. This potentially means that instead of transferring the data once and then running multiple methods locally on the transferred data, you are making a network call on every access. How many times (per request) is a role tested, for example? Or the name queried? I honestly don't know, but I'm guessing it is more than 1 (all totalled). Plus the original setup costs...
The bandwidth probably won't be significant, but latency is very significant, especially if you have distributed nodes (since you mention a cloud scenario).
Personally, I would avoid MarshalByRefObject like the plague, but up to you...
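To make the trade-off concrete, here is a minimal sketch of the two shapes the class could take; the members are illustrative, not your actual CustomIdentity:

using System;
using System.Security.Principal;

// Serialization route: the whole object is copied across the boundary once,
// and every later Name/IsAuthenticated read is a local call on the copy.
[Serializable]
public class SerializableIdentity : IIdentity
{
    public string Name { get; private set; }
    public string AuthenticationType { get; private set; }
    public bool IsAuthenticated { get; private set; }

    public SerializableIdentity(string name, string authenticationType, bool isAuthenticated)
    {
        Name = name;
        AuthenticationType = authenticationType;
        IsAuthenticated = isAuthenticated;
    }
}

// MarshalByRefObject route: only a proxy crosses the boundary, and every
// property read becomes a remoted call back to the original object.
public class RemotedIdentity : MarshalByRefObject, IIdentity
{
    public string Name { get; set; }
    public string AuthenticationType { get; set; }
    public bool IsAuthenticated { get; set; }
}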