I have a long-running database networking call and I want to populate my model in chunks. We are talking about ASP.NET MVC.
I have a vague idea that each time a new chunk is available I should trigger model.Bind(),
but I don't know how to do the plumbing between:
a) the service which provides the data in chunks; it's implemented using the event pattern: each time a new chunk is available, an event is raised. But which event? Should it hold a reference to the model?
b) the data which will be bound to the model (I suppose it should not be a Bind(), but an addition to some collection)
c) if everything is OK in steps a and b, will the changes be propagated to the view without further ado?
You could use long polling with a hidden iframe and chunked transfer encoding from the server, which emits <script> tags as data becomes available. In each script tag you invoke a custom JavaScript callback function that takes care of formatting the results.
UPDATE:
As requested in the comments section here's a sample implementation of a long polling technique using a hidden iframe.
Let's suppose that you have some model:
public class MyViewModel
{
public string Foo { get; set; }
}
and that you have a service which returns this model in chunks and notifies the caller that a chunk is available through a callback:
public class MyService
{
public void GetModels(Action<MyViewModel, object> onModelAvailable, object state, Action onComplete)
{
Task.Factory.StartNew(x =>
{
try
{
for (int i = 0; i < 10; i++)
{
onModelAvailable(new MyViewModel
{
Foo = "foo " + i
}, x);
Thread.Sleep(1000);
}
}
finally
{
onComplete();
}
}, state);
}
}
Now, we could have the following controller:
public class HomeController : AsyncController
{
public ActionResult Index()
{
return View();
}
public ActionResult LongPoll()
{
var service = new MyService();
return new MyActionResult(service);
}
}
and the following view:
<script type="text/javascript">
// we define a callback function which will be invoked
// when a chunk is available from the server
var callback = function (model) {
// the model variable passed here will represent the chunk
$('<div/>', {
html: model.Foo
}).appendTo('#result');
};
</script>
<iframe style="display:none;" src="@Url.Action("LongPoll")"></iframe>
<div id="result"></div>
Now the last part of course is the implementation of the custom action result which will do the chunked transfer:
public class MyActionResult : ActionResult
{
private readonly MyService _service;
public MyActionResult(MyService service)
{
_service = service;
}
public override void ExecuteResult(ControllerContext context)
{
var response = context.HttpContext.Response;
response.BufferOutput = true;
response.ContentType = "text/html";
var wait = new ManualResetEvent(false);
_service.GetModels((model, state) =>
{
var httpResponse = (HttpResponseBase)state;
// BufferOutput and ContentType were already configured above
var serializer = new JavaScriptSerializer();
var script = string.Format(
"<script type=\"text/javascript\">window.parent.callback({0});</script>",
serializer.Serialize(model)
);
httpResponse.Write(script);
httpResponse.Flush();
},
response,
() =>
{
wait.Set();
});
wait.WaitOne();
}
}
The simplest solution is to use polling: just an AJAX call every n seconds to check whether new data is available. Downsides of this approach: latency and server load. Advantage: it's rather simple to implement.
A better but much more involved solution is to use something like long polling, WebSockets, etc. If this feature is worth the trouble, take a look at SignalR, an async signaling library for ASP.NET that helps you build real-time, multi-user interactive web applications. Adding it to an ASP.NET MVC 3 web application is very straightforward. This is a good intro to the library: Asynchronous Scalable Web Applications With Realtime Persistent Long running Connections With SignalR
I have an abstract class called HttpHelper; it has basic methods like GET, POST, PATCH, and PUT.
What I need to achieve is this:
Store the URL, time and date in the database each time GET, POST, PATCH or PUT is called.
I don't want to write to the database directly each time the functions are called (that would be slow), but to put the data somewhere first (like a static queue or in-memory cache), which must be faster and non-blocking, and have a long-running background process that looks into this cache-like storage and then stores the values in the database.
I have no clear idea how to do this, but the main purpose is to count the calls per hour or day, by domain, resource and URL query.
I'm wondering whether I could do the following:
Create a static class which uses ConcurrentQueue<T> to store data and call that class in each function inside HttpHelper class
Create a background task similar to this: Asp.Net core long running/background task
Or use Hangfire, but that might be too much for a simple task
Or is there a built-in method for this in .NET Core?
Both Hangfire and background tasks would do the trick as consumers of the queue items.
Hangfire was there before long-running background tasks (pre-.NET Core), so go with long-running tasks for .NET Core implementations.
There is a "but" here, though.
How important is it to you that you never miss a call? If it is important, then neither can help you: the queue or whatever static construct you have will be lost the moment your application crashes, the machine restarts, or the application pool is recycled.
You need to consider some kind of external queuing mechanism, like RabbitMQ with persistence turned on.
You could also append to a file, but that might also cause some delays due to reads/writes.
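To illustrate the consumer idea, here is a minimal in-memory sketch. All names are hypothetical and the persistence step is reduced to a callback standing in for the database insert; remember that anything still in the queue is lost on a crash or recycle, as noted above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical log entry for one HTTP call.
public class CallLogEntry
{
    public string Url { get; set; }
    public DateTime Timestamp { get; set; }
}

public static class CallLogQueue
{
    private static readonly ConcurrentQueue<CallLogEntry> _queue =
        new ConcurrentQueue<CallLogEntry>();

    // Called from GET/POST/PATCH/PUT; never blocks the request thread.
    public static void Enqueue(string url) =>
        _queue.Enqueue(new CallLogEntry { Url = url, Timestamp = DateTime.UtcNow });

    // Background consumer: drains the queue in batches and hands each batch
    // to a persistence callback (a database insert in the real application).
    public static Task StartConsumer(Action<List<CallLogEntry>> persistBatch,
                                     CancellationToken token)
    {
        return Task.Run(() =>
        {
            while (!token.IsCancellationRequested)
            {
                var batch = new List<CallLogEntry>();
                while (_queue.TryDequeue(out var entry))
                    batch.Add(entry);
                if (batch.Count > 0)
                    persistBatch(batch);
                Thread.Sleep(100); // polling interval; tune as needed
            }
        }, token);
    }
}
```

The ConcurrentQueue keeps Enqueue lock-free for the request threads, while the single consumer batches writes, which is exactly the non-blocking behavior the question asks for.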
I don't know how complex your problem is, but I would consider two solutions.
The first is calling an async insert method which does not block your main thread but starts a task. You can return the response without waiting for your log to be appended to the database. Since you want it implemented in only some methods, I would do it using attributes and middleware.
Simplified example:
public IActionResult SomePostMethod()
{
LogActionAsync("This Is Post Method");
return StatusCode(201);
}
public static Task LogActionAsync(string someParameter)
{
return Task.Run(() => {
// Communicate with database (X ms)
});
}
A better solution is to create a buffer which does not talk to the database on every call, but only when it is filled or at an interval. It would look like this:
public IActionResult SomePostMethod()
{
APILog.Log(new APILog.Item() { Date = DateTime.Now, Item1 = "Something" });
return StatusCode(201);
}
public partial class APILog
{
private static List<APILog.Item> _buffer = new List<APILog.Item>();
private const int _msTimeout = 60000; // Timeout between updates
private static object _updateLock = new object();
static APILog()
{
StartDBUpdateLoopAsync();
}
private static void StartDBUpdateLoopAsync()
{
// check if it has already been started and other stuff
Task.Run(() => {
while(true) // Do not use true but some other expression that tells you whether your application is still running.
{
Thread.Sleep(_msTimeout);
lock(_updateLock)
{
foreach(APILog.Item item in _buffer)
{
// Import into database here
}
_buffer.Clear(); // avoid importing the same items again on the next pass
}
}
});
}
public static void Log(APILog.Item item)
{
lock(_updateLock)
{
_buffer.Add(item);
}
}
}
public partial class APILog
{
public class Item
{
public string Item1 { get; set; }
public DateTime Date { get; set; }
}
}
Also, in this second example, I would not call APILog.Log() directly each time, but use middleware in combination with an attribute.
I'm developing a service that receives an HTTP POST request.
That request should start a long-running process (seconds to minutes).
On the client side I don't want to wait all that time for the response, so I built the workflow as follows:
the client sends the request
the server starts the job and immediately returns a GUID, the job ID
when the job completes on the server, it raises an event to inform the client about the completion (technically I use SignalR, but that doesn't matter)
A code sample:
[ApiController]
[Route("api/[controller]")]
public class JobsController : ControllerBase
{
private readonly IJobService jobService;
public JobsController(IJobService jobService)
{
this.jobService = jobService ?? throw new ArgumentNullException(nameof(jobService));
jobService.JobCompleted += id =>
{
// inform the client somehow
};
}
[HttpPost]
[Route("newJob")]
public IActionResult CreateNewJob(NewJob newJob)
{
var jobId = jobService.CreateNewJob(newJob);
return Ok(jobId);
}
}
public class JobService : IJobService
{
private readonly IImportantService importantService;
public event Action<string> JobCompleted;
public JobService(IImportantService importantService)
=> this.importantService = importantService ?? throw new ArgumentNullException(nameof(importantService));
public string CreateNewJob(NewJob newJob)
{
var id = Guid.NewGuid().ToString("N"); // "H" is not a valid Guid format specifier
Task.Run(async () =>
{
// do the long running operation
Thread.Sleep(30000);
await importantService.DoSomethingAsync(newJob); // here is the problem - this service is already disposed
JobCompleted?.Invoke(id);
});
return id;
}
}
So the problem is that I'm using importantService in a task that keeps running long after the request has returned to the user.
But the ASP.NET Core DI container is (normally, and correctly) disposing all services created for the request immediately.
How can I tell ASP.NET Core not to dispose importantService as long as the background task is still running?
Or is it better to do this in a completely different way? I'm thinking about hangfire.io or similar...
Thank you! :)
You need to register your service as a singleton; that means it will stay alive throughout the application life cycle.
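As a rough sketch (ImportantService as the concrete implementation is an assumption; the names come from the question, and the registration call itself is the standard ASP.NET Core pattern), the registration in Startup.ConfigureServices would look like this:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Singletons live for the whole application lifetime, so they are not
    // disposed when the request scope ends, and the background task can
    // keep using them after the action has returned.
    services.AddSingleton<IImportantService, ImportantService>();
    services.AddSingleton<IJobService, JobService>();
}
```

Keep in mind that a singleton must not capture scoped services (such as a request-scoped DbContext) directly; if it needs one, it should create its own scope via IServiceScopeFactory.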
I have a service that talks to a third party on the internet. If I want this to scale, I don't want my threads to be blocked waiting for the response when they could be handling new requests. One way around this is to use the httpRequest.BeginGetResponse method and pass in a callback. The issue with this is that the method returns immediately and I have nothing to give back to the caller of my service, and once in the callback I can't do anything useful.
What I would like is something like this:
public string CallServiceNonBlocking(string url) {
var httpRequest = (HttpWebRequest)WebRequest.Create(url);
HttpResponse res = httpRequest.FetchNonBlocking(); //this does the work but my thread should be returned to the asp.net thread pool to do other work rather than blocking here
return res.GetValue();
}
Note that I don't have .NET 4.5 available to me, only .NET 4. But for reference, can I see a solution for both versions?
Coding this with .NET 4.5 and C# 5.0 async/await is very easy.
You can make it work asynchronously with .NET 4.0 / C# 4.0 too, if you have to. You'd need to derive your controller from AsyncController and use AsyncManager. The basic steps are described in "Using an Asynchronous Controller in ASP.NET MVC".
In this light, your code may look like this (untested):
public static class WebRequestExtensions
{
// Extension methods must live in a static class.
public static Task<WebResponse> GetResponseTapAsync(this WebRequest request)
{
return Task.Factory.FromAsync(
(asyncCallback, state) => request.BeginGetResponse(asyncCallback, state),
asyncResult => request.EndGetResponse(asyncResult),
null);
}
}
// ...
public class YourController : AsyncController
{
public void YourMethodAsyc(string url)
{
AsyncManager.OutstandingOperations.Increment();
var request = (HttpWebRequest)WebRequest.Create(url);
request.GetResponseTapAsync().ContinueWith(responseTask =>
{
try
{
var stream = responseTask.Result.GetResponseStream();
using (var streamReader = new StreamReader(stream))
{
// still blocking here, see notes below
var data = streamReader.ReadToEnd();
AsyncManager.Parameters["data"] = data;
}
}
finally
{
AsyncManager.OutstandingOperations.Decrement();
}
}, TaskScheduler.FromCurrentSynchronizationContext());
}
public ActionResult YourMethodCompleted(string data)
{
return View("Data", new ViewModel
{
Data = data
});
}
}
You could take it further and implement ReadToEndAsync yourself (it is absent in .NET 4.0), but then you won't be able to use using. Check this for some more details.
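A rough sketch of such an implementation over the APM pattern (BeginRead/EndRead) might look as follows. The method name is mine, UTF-8 is assumed for simplicity, and a real version would honor the response's character set:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public static class StreamExtensions
{
    // Asynchronously reads a stream to the end using BeginRead/EndRead,
    // so it works on .NET 4.0 without async/await.
    public static Task<string> ReadToEndAsync40(this Stream stream)
    {
        var tcs = new TaskCompletionSource<string>();
        var buffer = new byte[4096];
        var collected = new MemoryStream();

        Action readLoop = null;
        readLoop = () =>
        {
            try
            {
                stream.BeginRead(buffer, 0, buffer.Length, ar =>
                {
                    try
                    {
                        int read = stream.EndRead(ar);
                        if (read == 0)
                        {
                            // End of stream: decode everything we collected.
                            tcs.SetResult(Encoding.UTF8.GetString(collected.ToArray()));
                        }
                        else
                        {
                            collected.Write(buffer, 0, read);
                            readLoop(); // issue the next asynchronous read
                        }
                    }
                    catch (Exception ex) { tcs.SetException(ex); }
                }, null);
            }
            catch (Exception ex) { tcs.SetException(ex); }
        };
        readLoop();
        return tcs.Task;
    }
}
```

Note there is no using block around the stream here; the caller has to dispose it in a continuation, which is exactly the awkwardness mentioned above.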
Ideally, if you need to target ASP.NET MVC with .NET 4.0, but develop with VS2012+ in C# 5.0, you still can use async/await, Microsoft provides Microsoft.Bcl.Async library for that. Then your code might look like this:
public class YourController : AsyncController
{
async Task<string> YourMethodAsyncImpl(string url)
{
var request = (HttpWebRequest)WebRequest.Create(url);
using (var response = await request.GetResponseAsync())
using (var streamReader = new StreamReader(response.GetResponseStream()))
return await streamReader.ReadToEndAsync();
}
public void YourMethodAsyc(string url)
{
AsyncManager.OutstandingOperations.Increment();
YourMethodAsyncImpl(url).ContinueWith(resultTask =>
{
try
{
AsyncManager.Parameters["data"] = resultTask.Result;
}
finally
{
AsyncManager.OutstandingOperations.Decrement();
}
}, TaskScheduler.FromCurrentSynchronizationContext());
}
public ActionResult YourMethodCompleted(string data)
{
return View("Data", new ViewModel
{
Data = data
});
}
}
I'd use Microsoft's Reactive Framework for this. (NuGet "Rx-Main")
It's then super easy.
Define your function like this:
public IObservable<string> CallServiceNonBlocking(string url)
{
return Observable.Start(() =>
{
var httpRequest = (HttpWebRequest)WebRequest.Create(url);
HttpResponse res = httpRequest.FetchBlocking();
return res.GetValue();
});
}
Then call it like this:
CallServiceNonBlocking(url)
.Subscribe(response =>
{
/* handle response here */
});
It's easy to use ObserveOn to make the subscription run on the UI thread, the current context, etc.
What's the best way to set cache control headers for public caching servers in WebAPI?
I'm not interested in OutputCache control on my server; I'm looking to control caching on the CDN side and beyond (I have individual API calls where the response can be indefinitely cached for the given URL). But everything I've read so far either references pre-release versions of Web API (and thus references things that seem to no longer exist, like System.Web.HttpContext.Current.Response.Headers.CacheControl) or seems massively complicated for just setting a couple of HTTP headers.
Is there a simple way to do this?
As suggested in the comments, you can create an ActionFilterAttribute. Here's a simple one that only handles the MaxAge property:
public class CacheControlAttribute : System.Web.Http.Filters.ActionFilterAttribute
{
public int MaxAge { get; set; }
public CacheControlAttribute()
{
MaxAge = 3600;
}
public override void OnActionExecuted(HttpActionExecutedContext context)
{
if (context.Response != null)
{
context.Response.Headers.CacheControl = new CacheControlHeaderValue
{
Public = true,
MaxAge = TimeSpan.FromSeconds(MaxAge)
};
}
base.OnActionExecuted(context);
}
}
Then you can apply it to your methods:
[CacheControl(MaxAge = 60)]
public string GetFoo(int id)
{
// ...
}
The cache control header can be set like this.
public HttpResponseMessage GetFoo(int id)
{
var foo = _FooRepository.GetFoo(id);
var response = Request.CreateResponse(HttpStatusCode.OK, foo);
response.Headers.CacheControl = new CacheControlHeaderValue()
{
Public = true,
MaxAge = new TimeSpan(1, 0, 0, 0)
};
return response;
}
In case anyone lands here looking for an answer specific to ASP.NET Core: you can now do what @Jacob suggested without writing your own filter. ASP.NET Core already includes this:
[ResponseCache(VaryByHeader = "User-Agent", Duration = 1800)]
public async Task<JsonResult> GetData()
{
// ...
}
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/response
Like this answer suggesting filters, consider the "extended" version: http://www.strathweb.com/2012/05/output-caching-in-asp-net-web-api/
It used to be available as the NuGet package Strathweb.CacheOutput.WebApi2, but that doesn't seem to be hosted anymore; it now lives on GitHub: https://github.com/filipw/AspNetWebApi-OutputCache
We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more instances of this control can appear in the full system with different functionality).
So we made a plan where everything looks great, except one thing. Here is a short summary:
The Silverlight UserControl can be extended and manipulated via a ContentPresenter in the UI, and via ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
This should be okay, I think. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back end you can skip some actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. OK, you add new properties to the inherited ViewModel, then you can bind to them. You have the additional data. When you save the base data, you know it succeeded, then you can start saving your additional data (additional data can be anything, for example rows in a different table at the back end). Fine: we extended our UserControl and the back-end logic, and the original UserControl still doesn't know anything about our extension.
But we lost the transaction. For example, we can save the base data, but saving the additional data throws an exception; we then have the updated base data but nothing in the additional table. We really don't want this possibility, so I came up with this idea:
One WCF call should wait for the other at the back end, and when both have arrived, we can begin cross-thread communication between them. Of course, we can then handle the base and the additional data in the same transaction, and the base component still doesn't know anything about the other (it just provides a feature to do something with it, but it doesn't know who is going to do it).
I made a very simplified proof-of-concept solution; this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
namespace MyService
{
[ServiceContract]
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service1
{
protected bool _sameArrived;
protected Piece _same;
[OperationContract]
public Piece SendPiece(Piece piece)
{
_sameArrived = false;
Mediator.Instance.WaitFor(piece, sameArrived);
while (!_sameArrived)
{
Thread.Sleep(100);
}
return _same;
}
protected void sameArrived(Piece piece)
{
_same = piece;
_sameArrived = true;
}
}
}
Piece (entity)
namespace MyService
{
[DataContract]
public class Piece
{
[DataMember]
public long ID { get; set; }
[DataMember]
public string SameIdentifier { get; set; }
}
}
Mediator
namespace MyService
{
public sealed class Mediator
{
private static Mediator _instance;
private static object syncRoot = new Object();
private List<Tuple<Piece, Action<Piece>>> _waitsFor;
private Mediator()
{
_waitsFor = new List<Tuple<Piece, Action<Piece>>>();
}
public static Mediator Instance
{
get
{
if (_instance == null)
{
lock (syncRoot)
{
if (_instance == null)
{
_instance = new Mediator();
}
}
}
return _instance;
}
}
public void WaitFor(Piece piece, Action<Piece> callback)
{
lock (_waitsFor)
{
var waiter = _waitsFor.Where(i => i.Item1.SameIdentifier == piece.SameIdentifier).FirstOrDefault();
if (waiter != null)
{
_waitsFor.Remove(waiter);
waiter.Item2(piece);
callback(waiter.Item1);
}
else
{
_waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
}
}
}
}
}
And the client side code
namespace MyClient
{
class Program
{
static void Main(string[] args)
{
Client c1 = new Client(new Piece()
{
ID = 1,
SameIdentifier = "customIdentifier"
});
Client c2 = new Client(new Piece()
{
ID = 2,
SameIdentifier = "customIdentifier"
});
c1.SendPiece();
Console.WriteLine("Press return to send the second piece");
Console.ReadLine();
c2.SendPiece();
Console.ReadLine();
}
}
class Client
{
protected Piece _piece;
protected Service1Client _service;
public Client(Piece piece)
{
_piece = piece;
_service = new Service1Client();
}
public void SendPiece()
{
Console.WriteLine("{0} send begins", _piece.ID);
_service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
}
protected void sendPieceCallback(IAsyncResult result)
{
Piece returnedPiece = _service.EndSendPiece(result);
Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
}
}
}
So, is it a good idea to wait for another WCF call (which may or may not be invoked, so a real example would be more complex than this) and process them together with cross-thread communication? Or should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, the Managed Extensibility Framework.
For using MEF with Silverlight, see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
I would not wait for two WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge (that two services should be called together) in the client
I would call a single service that aggregates the two services.
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.