I'm making a program which downloads files over HTTP.
I've got it downloading, but I want to be able to pause a download, close the program, and resume it again at a later date.
I know the location I'm downloading from supports this.
I'm downloading the file through HttpWebResponse and reading the response into a Stream obtained from GetResponseStream.
When I close the app and restart it, I'm stuck on how to resume the download. I've tried calling Seek on the stream, but it throws NotSupportedException.
What would be the best way to do this?
If the server supports this, you need to send the Range HTTP header with your request, using the AddRange method:
request.AddRange(1024);
This instructs the server to skip the first kilobyte and start sending the file from byte 1024. Then just read the response stream as normal.
To test whether a server supports resuming, you can send a HEAD request and check whether it returns the Accept-Ranges: bytes header.
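That check might look something like this (a sketch, where `url` is assumed to be the download URL; note some servers honour Range without advertising Accept-Ranges, so treat a missing header as inconclusive):

```csharp
// Probe the server with a HEAD request to see if it advertises range support.
HttpWebRequest probe = (HttpWebRequest)WebRequest.Create(url);
probe.Method = "HEAD";
using (HttpWebResponse response = (HttpWebResponse)probe.GetResponse())
{
    bool supportsResume = string.Equals(
        response.Headers["Accept-Ranges"], "bytes",
        StringComparison.OrdinalIgnoreCase);
}
```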
How about an HTTPRangeStream class?
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Text;
namespace Ionic.Kewl
{
public class HTTPRangeStream : Stream
{
private string url;
private long length;
private long position;
private long totalBytesRead;
private int totalReads;
public HTTPRangeStream(string URL)
{
url = URL;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse result = (HttpWebResponse)request.GetResponse())
{
    length = result.ContentLength;
}
}
public long TotalBytesRead { get { return totalBytesRead; } }
public long TotalReads { get { return totalReads; } }
public override bool CanRead { get { return true; } }
public override bool CanSeek { get { return true; } }
public override bool CanWrite { get { return false; } }
public override long Length { get { return length; } }
public override bool CanTimeout
{
get
{
return base.CanTimeout;
}
}
public override long Position
{
get
{
return position;
}
set
{
if (value < 0) throw new ArgumentException();
position = value;
}
}
public override long Seek(long offset, SeekOrigin origin)
{
switch (origin)
{
case SeekOrigin.Begin:
position = offset;
break;
case SeekOrigin.Current:
position += offset;
break;
case SeekOrigin.End:
position = Length + offset;
break;
default:
break;
}
return Position;
}
public override int Read(byte[] buffer, int offset, int count)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.AddRange(position, position + count - 1); // Range is inclusive at both ends
    int bytesRead;
    using (HttpWebResponse result = (HttpWebResponse)request.GetResponse())
    using (Stream stream = result.GetResponseStream())
    {
        // Stream.Read may return fewer bytes than requested, so loop until
        // the range is exhausted or the server stops sending data.
        int total = 0;
        while (total < count)
        {
            int n = stream.Read(buffer, offset + total, count - total);
            if (n == 0) break;
            total += n;
        }
        bytesRead = total;
    }
    totalBytesRead += bytesRead;
    totalReads++;
    Position += bytesRead;
    return bytesRead;
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotSupportedException();
}
public override void SetLength(long value)
{
throw new NotSupportedException();
}
public override void Flush()
{
throw new NotSupportedException();
}
}
}
Your solution is fine, but it will only work when the server sends a Content-Length header. That header will not be present for dynamically generated content.
Also, this solution sends a request for each Read. If the content changes on the server between requests, you will get inconsistent results.
I would improve on this by storing the data locally, either on disk or in memory. Then you can seek into it all you want, there is no consistency problem, and you need only one HttpWebRequest to download it.
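Coming back to the original question, pause/resume across restarts usually amounts to appending to a local file and requesting only the remainder. A minimal sketch, assuming `url` points at the file and the local path is a placeholder:

```csharp
// Resume a download into a local file; assumes the server honours Range.
string path = "download.part"; // hypothetical local path
long existing = File.Exists(path) ? new FileInfo(path).Length : 0;

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
if (existing > 0)
    request.AddRange(existing); // ask for the remainder only

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream body = response.GetResponseStream())
using (FileStream file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
{
    // If the server ignored the Range header it replies 200 (not 206)
    // and resends the whole file, so start over in that case.
    if (existing > 0 && response.StatusCode != HttpStatusCode.PartialContent)
        file.SetLength(0);
    file.Seek(0, SeekOrigin.End); // append after any existing bytes
    body.CopyTo(file);
}
```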
I get error
System.IO.IOException: 'The process cannot access the file 'xxx' because it is being used by another process.'
when I try to delete a temp file in a background worker service in aspnet core.
I am eventually allowed to delete the file after about a minute (52s, 73s).
If I change garbage collection to workstation mode, I may instead delete after ~1s (but still, a delay).
I have tried a combination of FileOptions to no avail, including FileOptions.WriteThrough.
When the controller writes the file, I use FlushAsync(), Close(), Dispose(), and 'using' (I know it's overkill).
I also tried just File.WriteAllBytesAsync, with the same result.
In the background reader, I use Close() and Dispose() as well.
(Hint: the background reader will not let me use DeleteOnClose, which would have been ideal.)
As I search Stack Overflow for similar 'used by another process' issues, all those I have found eventually resolve to 'argh, it turns out I still had an extra open instance/reference I forgot about', but I have not been able to find such a leak in my own code.
Another hint: in the writing controller I am able to delete the file immediately after writing it, I presume because I am still on the same thread?
Is there some secret knowledge I should read somewhere about being able to delete recently opened files across threads?
UPDATE: Here relevant(?) code snippets:
// (AspNet Controller)
[RequestSizeLimit(9999999999)]
[DisableFormValueModelBinding]
[RequestFormLimits(MultipartBodyLengthLimit = MaxFileSize)]
[HttpPost("{sessionId}")]
public async Task<IActionResult> UploadRevisionChunk(Guid sessionId) {
log.LogWarning($"UploadRevisionChunk: {sessionId}");
string uploadFolder = UploadFolder.sessionFolderPath(sessionId);
if (!Directory.Exists(uploadFolder)) { throw new Exception($"chunk-upload failed"); }
var cr = parseContentRange(Request);
if (cr == null) { return this.BadRequest("no content range header specified"); }
string chunkName = $"{cr.From}-{cr.To}";
string saveChunkPath = Path.Combine(uploadFolder,chunkName);
await streamToChunkFile_WAB(saveChunkPath); // write-all-bytes.
//await streamToChunkFile_MAN(saveChunkPath); // Manual.
long crTo = cr.To ?? 0;
long crFrom = cr.From ?? 0;
long expected = (crTo - crFrom) + 1;
var fi = new FileInfo(saveChunkPath);
var dto = new ChunkResponse { wrote = fi.Length, expected = expected, where = "?" };
string msg = $"at {crFrom}, wrote {dto.wrote} bytes (expected {dto.expected}) to {dto.where}";
log.LogWarning(msg);
return Ok(dto);
}
private async Task streamToChunkFile_WAB(string saveChunkPath) {
  using (MemoryStream ms = new MemoryStream()) {
    await Request.Body.CopyToAsync(ms); // async copy; ASP.NET Core disallows synchronous IO on the request body by default
    byte[] allBytes = ms.ToArray();
    await System.IO.File.WriteAllBytesAsync(saveChunkPath, allBytes);
  }
}
// stream reader in the backgroundService:
public class MyMultiStream : Stream {
string[] filePaths;
FileStream curStream = null;
IEnumerator<string> i;
ILogger log;
QueueItem qItem;
public MyMultiStream(string[] filePaths_, Stream[] streams_, ILogger log_, QueueItem qItem_) {
qItem = qItem_;
log = log_;
filePaths = filePaths_;
log.LogWarning($"filepaths has #items: {filePaths.Length}");
IEnumerable<string> enumerable = filePaths;
i = enumerable.GetEnumerator();
i.MoveNext();// necessary to prime the iterator.
}
public override bool CanRead { get { return true; } }
public override bool CanWrite { get { return false; } }
public override bool CanSeek { get { return false; } }
public override long Length { get { throw new Exception("dont get length"); } }
public override long Position {
get { throw new Exception("dont get Position"); }
set { throw new Exception("dont set Position"); }
}
public override void SetLength(long value) { throw new Exception("dont set length"); }
public override long Seek(long offset, SeekOrigin origin) { throw new Exception("dont seek"); }
public override void Write(byte[] buffer, int offset, int count) { throw new Exception("dont write"); }
public override void Flush() { throw new Exception("dont flush"); }
public static int openStreamCounter = 0;
public static int closedStreamCounter = 0;
string curFileName = "?";
private FileStream getNextStream() {
string nextFileName = i.Current;
if (nextFileName == null) { throw new Exception("getNextStream should not be called past file list"); }
//tryDelete(nextFileName,log);
FileStream nextStream = new FileStream(
path:nextFileName,
mode: FileMode.Open,
access: FileAccess.Read,
share: FileShare.ReadWrite| FileShare.Delete,
bufferSize:4096, // apparently default.
options: 0
| FileOptions.Asynchronous
| FileOptions.SequentialScan
// | FileOptions.DeleteOnClose // (1) this ought to be possible, (2) we should fix this approach (3) if we can fix this, our issue is solved, and our code much simpler.
);
log.LogWarning($"TELLUS making new stream [{nextFileName}] opened:[{++openStreamCounter}] closed:[{closedStreamCounter}]");
curFileName = nextFileName;
++qItem.chunkCount;
return nextStream;
}
public override int Read(byte[] buffer, int offset, int count) {
int bytesRead = 0;
while (true) {
bytesRead = 0;
if (curStream == null) { curStream = getNextStream(); }
try {
bytesRead = curStream.Read(buffer, offset, count);
log.LogWarning($"..bytesRead:{bytesRead} [{Path.GetFileName(curFileName)}]"); // (only show a short name.)
} catch (Exception e) {
log.LogError($"failed reading [{curFileName}] [{e.Message}]",e);
}
if (bytesRead > 0) { break; }
curStream.Close();
curStream.Dispose();
curStream = null;
log.LogWarning($"TELLUS closing stream [{curFileName}] opened:[{openStreamCounter}] closed:[{++closedStreamCounter}]");
//tryDelete(curFileName); Presumably we can't delete so soon.
bool moreFileNames = i.MoveNext();
log.LogWarning($"moreFileNames?{moreFileNames}");
if (!moreFileNames) {
break;
}
}
return bytesRead;
}
..
// Background worker operating multistream:
public class BackgroundChunkWorker: BackgroundService {
ILogger L;
ChunkUploadQueue q;
public readonly IServiceScopeFactory scopeFactory;
public BackgroundChunkWorker(ILogger<int> log_, ChunkUploadQueue q_, IServiceScopeFactory scopeFactory_) {
q = q_; L = log_;
scopeFactory = scopeFactory_;
}
override protected async Task ExecuteAsync(CancellationToken cancel) { await BackgroundProcessing(cancel); }
private async Task BackgroundProcessing(CancellationToken cancel) {
while (!cancel.IsCancellationRequested) {
try {
await Task.Delay(1000,cancel);
bool ok = q.q.TryDequeue(out var item);
if (!ok) { continue; }
L.LogInformation($"item found! {item}");
await treatItemScope(item);
} catch (Exception ex) {
L.LogCritical("An error occurred when processing. Exception: {#Exception}", ex);
}
}
}
private async Task<bool> treatItemScope(QueueItem Qitem) {
using (var scope = scopeFactory.CreateScope()) {
var ris = scope.ServiceProvider.GetRequiredService<IRevisionIntegrationService>();
return await treatItem(Qitem, ris);
}
}
private async Task<bool> treatItem(QueueItem Qitem, IRevisionIntegrationService ris) {
await Task.Delay(0);
L.LogWarning($"TryAddValue from P {Qitem.sessionId}");
bool addOK = q.p.TryAdd(Qitem.sessionId, Qitem);
if (!addOK) {
L.LogError($"why couldnt we add session {Qitem.sessionId} to processing-queue?");
return false;
}
var startTime = DateTime.UtcNow;
Guid revisionId = Qitem.revisionId;
string[] filePaths = getFilePaths(Qitem.sessionId);
Stream[] streams = filePaths.Select(fileName => new FileStream(fileName, FileMode.Open)).ToArray();
MyMultiStream multiStream = new MyMultiStream(filePaths, streams, this.L, Qitem);
BimRevisionStatus brs = await ris.UploadRevision(revisionId, multiStream, startTime);
// (launchDeletes is my current hack/workaround,
// it is not part of the problem)
// await multiStream.launchDeletes();
Qitem.status = brs;
return true;
}
..
I am working on a project (server side) where I need to stream data (videos, large files) to clients.
This worked perfectly using ByteRangeStreamContent, as I was serving files from disk and could create a seekable stream (FileStream).
if (Request.Headers.Range != null)
{
try
{
HttpResponseMessage partialResponse = Request.CreateResponse(HttpStatusCode.PartialContent);
partialResponse.Content = new ByteRangeStreamContent(fs, Request.Headers.Range, mediaType);
return partialResponse;
}
catch (InvalidByteRangeException invalidByteRangeException)
{
return Request.CreateErrorResponse(invalidByteRangeException);
}
}
else
{
response.Content = new StreamContent(fs);
response.Content.Headers.ContentType = mediaType;
return response;
}
But I moved the file provider from disk to an external service. The service allows me to get chunks of data (Range {0}-{1}).
Of course, it's not possible to download the whole file into memory and then use a MemoryStream for ByteRangeStreamContent, for the obvious reason that too many concurrent downloads would eventually consume all the available memory.
I found this article https://vikingerik.wordpress.com/2014/09/28/progressive-download-support-in-asp-net-web-api/ where the author says:
A change request I got for my library was to support reading only the
necessary data and sending that out rather than opening a stream for
the full data. I wasn’t sure what this would buy until the user
pointed out they are reading their resource data from a WCF stream
which does not support seeking and would need to read the whole stream
into a MemoryStream in order to allow the library to generate the
output.
That limitation still exists in this specific object but there is a
workaround. Instead of using a ByteRangeStreamContent, you could
instead use a ByteArrayContent object instead. Since the majority of
RANGE requests will be for a single start and end byte, you could pull
the range from the HttpRequestMessage, retrieve only the bytes you
need and send it back out as a byte stream. You’ll also need to add
the CONTENT-RANGE header and set the response code to 206
(PartialContent) but this could be a viable alternative (though I
haven’t tested it) for users who do not want or can’t easily get a
compliant stream object.
So, my question basically is: how can I do that?
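For reference, the ByteArrayContent route the quoted author describes would look roughly like this (an untested sketch assuming a single-range request; `externalService.GetBytes` and `totalLength` are hypothetical stand-ins for the external service's API):

```csharp
var range = Request.Headers.Range.Ranges.First();
long from = range.From ?? 0;
long to = range.To ?? totalLength - 1; // totalLength reported by the external service

byte[] data = externalService.GetBytes(from, to); // hypothetical chunk fetch

// Send back only the requested slice, with the 206 status and
// Content-Range header the article mentions.
var response = Request.CreateResponse(HttpStatusCode.PartialContent);
response.Content = new ByteArrayContent(data);
response.Content.Headers.ContentType = mediaType;
response.Content.Headers.ContentRange =
    new ContentRangeHeaderValue(from, to, totalLength);
return response;
```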
I finally managed to do it.
Here's how:
Custom implementation of a stream:
public class BufferedHTTPStream : Stream
{
private readonly Int64 cacheLength = 4000000;
private const Int32 noDataAvaiable = 0;
private MemoryStream stream = null;
private Int64 currentChunkNumber = -1;
private Int64? length;
private Boolean isDisposed = false;
private Func<long, long, Stream> _getStream;
private Func<long> _getContentLength;
public BufferedHTTPStream(Func<long, long, Stream> streamFunc, Func<long> lengthFunc)
{
_getStream = streamFunc;
_getContentLength = lengthFunc;
}
public override Boolean CanRead
{
get
{
EnsureNotDisposed();
return true;
}
}
public override Boolean CanWrite
{
get
{
EnsureNotDisposed();
return false;
}
}
public override Boolean CanSeek
{
get
{
EnsureNotDisposed();
return true;
}
}
public override Int64 Length
{
get
{
EnsureNotDisposed();
if (length == null)
{
length = _getContentLength();
}
return length.Value;
}
}
public override Int64 Position
{
get
{
EnsureNotDisposed();
Int64 streamPosition = (stream != null) ? stream.Position : 0;
Int64 position = (currentChunkNumber != -1) ? currentChunkNumber * cacheLength : 0;
return position + streamPosition;
}
set
{
EnsureNotDisposed();
EnsurePositiv(value, "Position");
Seek(value);
}
}
public override Int64 Seek(Int64 offset, SeekOrigin origin)
{
EnsureNotDisposed();
switch (origin)
{
case SeekOrigin.Begin:
break;
case SeekOrigin.Current:
offset = Position + offset;
break;
default:
offset = Length + offset;
break;
}
return Seek(offset);
}
private Int64 Seek(Int64 offset)
{
Int64 chunkNumber = offset / cacheLength;
if (currentChunkNumber != chunkNumber)
{
ReadChunk(chunkNumber);
currentChunkNumber = chunkNumber;
}
offset = offset - currentChunkNumber * cacheLength;
stream.Seek(offset, SeekOrigin.Begin);
return Position;
}
private void ReadNextChunk()
{
currentChunkNumber += 1;
ReadChunk(currentChunkNumber);
}
private void ReadChunk(Int64 chunkNumberToRead)
{
Int64 rangeStart = chunkNumberToRead * cacheLength;
if (rangeStart >= Length) { return; }
Int64 rangeEnd = rangeStart + cacheLength - 1;
if (rangeStart + cacheLength > Length)
{
rangeEnd = Length - 1;
}
if (stream != null) { stream.Close(); }
stream = new MemoryStream((int)cacheLength);
var responseStream = _getStream(rangeStart, rangeEnd);
if (responseStream.CanSeek) { responseStream.Position = 0; } // rewind only if the source stream supports seeking
responseStream.CopyTo(stream);
responseStream.Close();
stream.Position = 0;
}
public override void Close()
{
EnsureNotDisposed();
base.Close();
if (stream != null) { stream.Close(); }
isDisposed = true;
}
public override Int32 Read(Byte[] buffer, Int32 offset, Int32 count)
{
EnsureNotDisposed();
EnsureNotNull(buffer, "buffer");
EnsurePositiv(offset, "offset");
EnsurePositiv(count, "count");
if (buffer.Length - offset < count) { throw new ArgumentException("count"); }
if (stream == null) { ReadNextChunk(); }
if (Position >= Length) { return noDataAvaiable; }
if (Position + count > Length)
{
count = (Int32)(Length - Position);
}
Int32 bytesRead = stream.Read(buffer, offset, count);
Int32 totalBytesRead = bytesRead;
count -= bytesRead;
while (count > noDataAvaiable)
{
ReadNextChunk();
offset = offset + bytesRead;
bytesRead = stream.Read(buffer, offset, count);
count -= bytesRead;
totalBytesRead = totalBytesRead + bytesRead;
}
return totalBytesRead;
}
public override void SetLength(Int64 value)
{
    EnsureNotDisposed();
    throw new NotSupportedException(); // read-only stream
}
public override void Write(Byte[] buffer, Int32 offset, Int32 count)
{
    EnsureNotDisposed();
    throw new NotSupportedException(); // read-only stream
}
public override void Flush()
{
EnsureNotDisposed();
}
private void EnsureNotNull(Object obj, String name)
{
if (obj != null) { return; }
throw new ArgumentNullException(name);
}
private void EnsureNotDisposed()
{
if (!isDisposed) { return; }
throw new ObjectDisposedException("BufferedHTTPStream");
}
private void EnsurePositiv(Int32 value, String name)
{
if (value > -1) { return; }
throw new ArgumentOutOfRangeException(name);
}
private void EnsurePositiv(Int64 value, String name)
{
if (value > -1) { return; }
throw new ArgumentOutOfRangeException(name);
}
private void EnsureNegativ(Int64 value, String name)
{
if (value < 0) { return; }
throw new ArgumentOutOfRangeException(name);
}
}
Usage:
var fs = new BufferedHTTPStream((start, end) =>
{
// return stream from external service
}, () =>
{
// return stream length from external service
});
HttpResponseMessage partialResponse = Request.CreateResponse(HttpStatusCode.PartialContent);
partialResponse.Content = new ByteRangeStreamContent(fs, Request.Headers.Range, mediaType);
partialResponse.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = fileName
};
return partialResponse;
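If the external service is itself reachable over plain HTTP with range support, the two delegates might be implemented along these lines (an illustrative sketch; `serviceUrl` is a placeholder, and each chunk is copied into a MemoryStream so the class gets back a seekable, rewindable stream):

```csharp
var fs = new BufferedHTTPStream(
    (start, end) =>
    {
        // Fetch one chunk from the backing service with an inclusive Range request.
        var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
        request.AddRange(start, end);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var body = response.GetResponseStream())
        {
            var chunk = new MemoryStream();
            body.CopyTo(chunk);
            chunk.Position = 0; // hand back a rewound, seekable stream
            return chunk;
        }
    },
    () =>
    {
        // Ask the service for the total size with a HEAD request.
        var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
        request.Method = "HEAD";
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.ContentLength;
        }
    });
```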
I have to create a C# library that can send commands to a device and process the command-specific responses and broadcasts over a serial port (or another communications method). The library must also be able to handle request and response extensions held in other libraries, as certain devices implement an extended command set, but it must be possible to choose whether these extended commands are used (I guess using reflection in the client app). I have created a Packet class that can build a packet, add its payload, calculate its checksum, and write the packet to the stream.
public class Packet
{
internal PacketHeaderType Header { get; private set; }
internal List<byte> Payload { get; private set; }
protected int PayloadLength { get { return Payload.Count; } }
protected byte HeaderByte { get { return (byte)((Convert.ToByte(Header) << 4) | PayloadLength); } } //we need to add the packet length to the lower nibble of the header before sending
public Packet(PacketHeaderType header, List<byte> payload)
{
this.Header = header;
this.Payload = new List<byte>(payload);
}
public Packet(PacketHeaderType headerByte)
{
this.Header = headerByte;
this.Payload = new List<byte>();
}
internal byte XorByte
{
    get
    {
        byte xorByte = HeaderByte;
        for (int i = 0; i < PayloadLength; i++)
            xorByte ^= Payload[i]; // index the list directly rather than copying it to an array on every iteration
        return xorByte;
    }
}
public async Task WriteAsync(Stream stream, bool flush = true, CancellationToken token = default(CancellationToken))
{
    var buffer = new List<byte>();
    buffer.Add(HeaderByte);
    if (Payload != null && PayloadLength > 0)
    {
        buffer.AddRange(Payload);
    }
    buffer.Add(XorByte);
    await stream.WriteAsync(buffer.ToArray(), 0, buffer.Count, token); // honour the caller's cancellation token
    if (flush)
    {
        await stream.FlushAsync(token);
    }
}
}
I have also created child classes that implement Type packet for each of the valid commands.
Finally I have also created a class of type PacketHandler that is able to read bytes from a stream and create it into a packet object.
The way I envisage using the library would be like this:
public async Task<string> GetCmdStnSoftwareVersion()
{
    var msgReq = new CmdStnSoftwareVersionReqMessage();
    await msgReq.WriteAsync(sPort.BaseStream);
    var response = await msgReq.GetResponse(5); // 5 = timeout in seconds
    return String.Format("{0}.{1}", response.Major, response.Minor);
}
What I am stuck on is a good pattern and/or example implementation for handling responses which is compatible with implementing the extension libraries. Can anyone provide input?
I'm returning a video file through IIS for a range request in a WCF service.
The end of the code looks like this:
WriteResponseHeaders(stuff);
while (remainingBytes > 0)
{
if (response.IsClientConnected) // response is a System.Web.HttpResponse
{
int chunkSize = stream.Read(buffer, 0, 10240 < remainingBytes ? 10240 : remainingBytes);
response.OutputStream.Write(buffer, 0, chunkSize);
remainingBytes -= chunkSize;
response.Flush();
}
else
{
return;
}
}
In Firefox, Internet Explorer and Opera it works correctly. In Chrome, the video will stop playing a while before the end. Fiddler shows a 504 error:
[Fiddler] ReadResponse() failed: The server did not return a response for this request. Server returned 16556397 bytes.
If I stick a breakpoint just after the loop, and let the program sit there until the video has progressed past its stopping point, Chrome will play the full video without any problem and Fiddler will show the response with all of the correct headers and such. The only code that gets executed between that breakpoint and the end of the call is to flush the log stream.
As a test, I stuck in:
while (response.IsClientConnected)
{
System.Threading.Thread.Sleep(1000);
}
after the loop and playback was fine in all browsers. My response also looked fine in Fiddler. Of course this has way too many problems to be a proper solution, but it seems to show me that this is an issue more of timing than of behaviour.
Why does allowing the code to progress past this point too soon cause a problem and how do I prevent it from doing so?
Try returning a Stream instead of writing to the response.OutputStream.
[ServiceContract]
public interface IStreamingService
{
[OperationContract]
[WebGet(BodyStyle=WebMessageBodyStyle.Bare, UriTemplate = "/video?id={id}")]
Stream GetVideo(string id);
}
public class StreamingService : IStreamingService
{
public System.IO.Stream GetVideo(string id)
{
Stream stream = File.OpenRead("c:\\Temp\\Video.mp4");
//WriteResponseHeaders(stuff);
return stream;
}
}
Update:
If you want to support seeking you can either copy the chunk into a byte[] and return a MemoryStream or you could wrap your stream in a proxy that only returns a part of your full file.
public class PartialStream : Stream
{
private Stream underlying;
private long offset;
private long length;
public PartialStream(Stream underlying, long offset, long length)
{
this.underlying = underlying;
this.offset = offset;
if (offset + length > underlying.Length) {
this.length = underlying.Length - offset;
} else {
this.length = length;
}
this.underlying.Seek(offset, SeekOrigin.Begin);
}
public override bool CanRead { get { return true; } }
public override bool CanSeek { get { return false; } }
public override bool CanWrite { get { return false; } }
public override void Flush()
{
throw new NotSupportedException();
}
public override long Length
{
get { return this.length; }
}
public override long Position
{
get
{
return this.underlying.Position - offset;
}
set
{
this.underlying.Position = offset + Math.Min(value,this.length) ;
}
}
public override int Read(byte[] buffer, int offset, int count)
{
    // offset is an index into buffer, not into the stream, so only
    // Position matters when clamping against the window length.
    if (this.Position >= this.length)
        return 0;
    if (this.Position + count > this.length) {
        count = (int)(this.length - this.Position);
    }
    return underlying.Read(buffer, offset, count);
}
protected override void Dispose(bool disposing)
{
base.Dispose(disposing);
this.underlying.Dispose();
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override void SetLength(long value)
{
throw new NotImplementedException();
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
}
And you have to respect the Range request header.
public System.IO.Stream GetVideo(string id)
{
RangeHeaderValue rangeHeader;
bool hasRangeHeader = RangeHeaderValue.TryParse(
WebOperationContext.Current.IncomingRequest.Headers["Range"],
out rangeHeader);
var response = WebOperationContext.Current.OutgoingResponse;
Stream stream = File.OpenRead("c:\\Temp\\Video.mp4");
var offset = hasRangeHeader ? rangeHeader.Ranges.First().From.Value : 0;
response.Headers.Add("Accept-Ranges", "bytes");
response.ContentType = "video/mp4";
if (hasRangeHeader) {
response.StatusCode = System.Net.HttpStatusCode.PartialContent;
var totalLength = stream.Length;
stream = new PartialStream(stream, offset, 10 * 1024 * 1024);
var header = new ContentRangeHeaderValue(offset, offset + stream.Length - 1,totalLength);
response.Headers.Add("Content-Range", header.ToString());
}
response.ContentLength = stream.Length;
return stream;
}
I am localizing an ASP.NET MVC 5 application using PO files.
I created an HTTP module to intercept responses of type html, javascript, etc.:
public class I18NModule : IHttpModule {
private Regex _types;
private II18NNuggetService _service; // used below; assumed to be assigned elsewhere (omitted here)
public I18NModule() {
_types = new Regex(@"^(?:(?:(?:text|application)/(?:plain|html|xml|javascript|x-javascript|json|x-json))(?:\s*;.*)?)$");
} // I18NModule
public void Init(HttpApplication application) {
application.ReleaseRequestState += OnReleaseRequestState;
} // Init
public void Dispose() {
} // Dispose
private void OnReleaseRequestState(Object sender, EventArgs e) {
HttpContextBase context = new HttpContextWrapper(HttpContext.Current);
if (_types != null && _types.Match(context.Response.ContentType).Success)
context.Response.Filter = new I18NFilter(context, context.Response.Filter, _service);
} // OnReleaseRequestState
} // I18NModule
Then I have an I18NFilter as follows:
public class I18NFilter : MemoryStream {
private II18NNuggetService _service;
protected HttpContextBase _context;
private MemoryStream _buffer = new MemoryStream();
protected Stream _output;
public I18NFilter(HttpContextBase context, Stream output, II18NNuggetService service) {
_context = context;
_output = output;
_service = service;
} // I18NFilter
public override void Write(Byte[] buffer, Int32 offset, Int32 count) {
_buffer.Write(buffer, offset, count);
} // Write
public override void Flush() {
Encoding encoding = _context.Response.ContentEncoding;
Byte[] buffer = _buffer.GetBuffer();
String entity = encoding.GetString(buffer, 0, (Int32)_buffer.Length);
_buffer.Dispose();
_buffer = null;
buffer = null;
/* USE SERVICE TO LOAD PO FILE AND PROCESS IT */
buffer = encoding.GetBytes(entity);
encoding = null;
Int32 count = buffer.Length;
_output.Write(buffer, 0, count);
_output.Flush();
} // Flush
} // I18NFilter
When I intercept the response I look for strings like [[[some text]]]. "some text" is the key I look up in the PO file for the current thread language.
So I need to load the PO file for the current language, process it, and find the strings that need to be translated.
My problem is performance... Should I load the entire file into a static class?
Should I load the file on each request and use CacheDependency?
How should I do this?
Since this is an HTTP application I would take advantage of HttpRuntime.Cache. Here is an example of how it could be used to minimize the performance cost:
public override void Flush() {
...
var fileContents = GetLanguageFileContents(Thread.CurrentThread.CurrentUICulture.Name); // pass the current language name
...
}
private string GetLanguageFileContents(string languageName) {
if (HttpRuntime.Cache[languageName] != null)
{
//Just pull it from memory!
return (string)HttpRuntime.Cache[languageName];
}
else
{
//Take the IO hit :(
var fileContents = ReadFileFromDiskOrDatabase();
//Store the data in memory to avoid future IO hits :)
HttpRuntime.Cache[languageName] = fileContents;
return fileContents;
}
}
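Since the question also mentions CacheDependency: if the PO files live on disk, you can let the cache invalidate itself when a file changes instead of caching forever. A sketch along those lines (the `~/App_Data/po` layout is an assumption for illustration):

```csharp
private string GetLanguageFileContents(string languageName)
{
    var cached = (string)HttpRuntime.Cache[languageName];
    if (cached != null)
        return cached;

    // Hypothetical layout: one PO file per language under ~/App_Data/po.
    string path = HttpContext.Current.Server.MapPath(
        "~/App_Data/po/" + languageName + ".po");
    string fileContents = File.ReadAllText(path);

    // The CacheDependency evicts the entry automatically when the file
    // changes, so PO edits are picked up without restarting the app.
    HttpRuntime.Cache.Insert(languageName, fileContents,
                             new CacheDependency(path));
    return fileContents;
}
```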