Generic Import method for EF - c#

I would like to make a generic method to import data into my application.
For example, say I have:
private static async Task<int> ImportAccount(string filename)
{
var totalRecords = await GetLineCount(filename);
var ctx = new AccountContext();
var count = 0;
var records = 0;
using (var stream = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
using (var reader = new StreamReader(stream, Encoding.UTF8))
{
string line;
while ((line = await reader.ReadLineAsync()) != null)
{
var data = line.Split('\t');
var acc = new Account(data);
await ctx.Accounts.AddAsync(acc);
// need this to avoid using all the memory
// maybe there is a smarter or better way to do it
// with 10k it uses about 500mb memory,
// files have million rows+
if (count % 10000 == 1)
{
var result = await ctx.SaveChangesAsync();
records += result;
if (result > 0)
{
ctx.Dispose();
ctx = new AccountContext();
}
}
count++;
}
}
}
await ctx.SaveChangesAsync();
ctx.Dispose();
return records;
}
In the above example I am importing data from a tab-delimited file into the Accounts database.
Then I have properties, lands, and a whole lot of other tables I need to import.
Instead of having to write a method like the above for each table, I would like to make something like:
internal static readonly Dictionary<string, ??> FilesToImport = new Dictionary<string, ??>
{
{ "fullpath to file", ?? would be what I need to pass to T }
... more files ...
};
private static async Task<int> Import<T>(string filename)
Where T would be the entity type in question.
All my classes have one thing in common: they all have a constructor that takes a string[] data.
But I have no idea how I could make a method that I would be able to accept:
private static async Task<int> Import<T>(string filename)
And then be able to do a:
var item = new T(data);
await ctx.Set<T>().AddAsync(item);
And if I recall correctly, I would not be able to instantiate T with a parameter.
How could I make this generic Import method, and is it even possible?

The easiest way to accomplish this is to pass a function that accepts the array of split values and returns an entity with its values set. Use the ctx.AddAsync() method, which supports generics and adds the entity to the correct set.
private static async Task<int> Import<T>(string filename, Func<string[], T> transform) where T : class
{
var totalRecords = await GetLineCount(filename);
var ctx = new AccountContext();
var count = 0;
var records = 0;
using (var stream = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
using (var reader = new StreamReader(stream, Encoding.UTF8))
{
string line;
while ((line = await reader.ReadLineAsync()) != null)
{
var data = line.Split('\t');
var entity = transform(data);
await ctx.AddAsync(entity);
if (count % 10000 == 1)
{
var result = await ctx.SaveChangesAsync();
records += result;
if (result > 0)
{
ctx.Dispose();
ctx = new AccountContext();
}
}
count++;
}
}
}
await ctx.SaveChangesAsync();
ctx.Dispose();
return records;
}
// Usage
await Import(filename, splits =>
{
/* do whatever you need to transform the data */
return new Whatever(splits);
});
Since generic types cannot be constructed by passing a parameter, you will have to use a function as the second type in the dictionary.
Dictionary<string, Func<string[], object>> FilesToImport = new Dictionary<string, Func<string[], object>>
{
{ "fullpath to file", data => new Account(data) },
{ "fullpath to file", data => new Whatever(data) },
{ "fullpath to file", data => new Whatever2(data) },
};
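A minimal sketch of how that dictionary could drive the generic method. Note the assumption here: the compiler infers T = object, and EF Core resolves the actual entity type from the instance's runtime type; this is worth verifying against your EF version, and the reflection route mentioned in the next answer is the more explicit option.
foreach (var pair in FilesToImport)
{
// T is inferred as object from the Func<string[], object> value
var records = await Import(pair.Key, pair.Value);
Console.WriteLine($"{pair.Key}: {records} records imported");
}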

C# has only the new() constraint for generic type arguments; unfortunately, it is not possible to require a type to have a constructor with parameters.
One workaround for this is to define an interface like this:
interface IImportedEntity<T>
// where T: YourBaseClass
{
T Init(string[] data);
}
In this case all implementing classes will have to implement such a method:
class Account : /*YourBaseClass*/ IImportedEntity<Account>
{
public Account()
{
// for EF
}
// can be made private or protected
public Account(string[] data)
{
// your code
}
// public Account Init(string[] data) => { /*populate current instance*/ return this;};
// can be implemented in base class
public Account Init(string[] data) => new Account(data);
}
Finally you can restrict your generic Import method to deal only with imported entities:
private static async Task<int> Import<T>(string filename)
where T: class, IImportedEntity<T>, new()
{
....
var item = new T();
item = item.Init(data);
await ctx.Set<T>().AddAsync(item);
...
}
Note: if you still want to use this with a dictionary, you will need to use reflection; a sketch follows.
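A rough sketch of closing Import<T> over a runtime Type with MakeGenericMethod (the static class name Importer and the dictionary are illustrative assumptions; requires using System.Reflection):
var imports = new Dictionary<string, Type>
{
{ "fullpath to file", typeof(Account) },
};
// Grab the open generic method, then close it per entity type.
var open = typeof(Importer).GetMethod("Import", BindingFlags.NonPublic | BindingFlags.Static);
foreach (var pair in imports)
{
var closed = open.MakeGenericMethod(pair.Value);
// Import<T> returns Task<int>, so the boxed result can be awaited after a cast.
var records = await (Task<int>)closed.Invoke(null, new object[] { pair.Key });
}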

Lazy load a list that is obtained from a using statement

I am using the CsvHelper library to read from a CSV file, but that's not what this is about.
Please refer to the code below
public class Reader
{
public IEnumerable<CSVModel> Read(string file)
{
using var reader = new StreamReader(@"C:\Users\z0042d8s\Desktop\GST invoice\RISK All RISKs_RM - Copy.CSV");
using var csv = new CsvReader(reader, CultureInfo.InvariantCulture);
IEnumerable<CSVModel> records = csv.GetRecords<CSVModel>();
return records;
}
}
The csv.GetRecords call in the above method uses yield return and returns each CSV row as soon as it is read, rather than waiting until the entire file has been read (it streams data from the CSV).
I have a consumer class which, as the name suggests, consumes the data returned by the Read method.
class Consumer
{
public void Consume(IEnumerable<CSVModel> data)
{
foreach(var item in data)
{
//Do whatever you want with the data. I am gonna log it to the console
Console.WriteLine(item);
}
}
}
And below is the caller
public static void Main()
{
var data = new Reader().Read("file.csv");
new Consumer().Consume(data);
}
Hope I didn't lose you.
The problem I am facing is below
As the data variable above is an IEnumerable, it is lazily evaluated (in other words, the CSV file is not read as long as the variable is not iterated over). But by the time I call the Consume() method, which iterates over data and forces the read in Read(), the reader and csv objects in the using statements have already been disposed, throwing an ObjectDisposedException.
Also, I don't want to move the reader and csv objects outside of the using blocks, as they should be disposed to prevent memory leaks.
The exception message is below
System.ObjectDisposedException: 'GetRecords<T>() returns an IEnumerable<T>
that yields records. This means that the method isn't actually called until
you try and access the values. e.g. .ToList() Did you create CsvReader inside
a using block and are now trying to access the records outside of that using
block?
And I know I can use a greedy operator (.ToList()). But I want the lazy loading to work.
Please suggest if there are any ways out.
Thanks in advance.
You may pass an action as a parameter to the reader. It changes the approach a bit, though:
public class Reader
{
public void Read(string file, Action<CSVModel> action)
{
using var reader = new StreamReader(@"C:\Users\z0042d8s\Desktop\GST invoice\RISK All RISKs_RM - Copy.CSV");
using var csv = new CsvReader(reader, CultureInfo.InvariantCulture);
IEnumerable<CSVModel> records = csv.GetRecords<CSVModel>();
foreach(var record in records){
action(record);
}
}
}
class Consumer
{
public void Consume(CSVModel data)
{
//Do whatever you want with the data. I am gonna log it to the console
Console.WriteLine(data);
}
}
public static void Main()
{
var consumer = new Consumer();
new Reader().Read("file.csv", consumer.Consume); // Pass the action here
}
Alternatively, you can make the whole Reader class disposable:
public class Reader : IDisposable
{
private readonly StreamReader _reader;
private readonly CsvReader _csv;
private bool _disposed;
public Reader(string file)
{
_reader = new StreamReader(file);
_csv = new CsvReader(_reader, CultureInfo.InvariantCulture);
}
public IEnumerable<CSVModel> Read()
{
return _csv.GetRecords<CSVModel>();
}
public void Dispose() => Dispose(true);
protected virtual void Dispose(bool disposing)
{
if (_disposed)
{
return;
}
if (disposing)
{
_csv.Dispose();
_reader.Dispose();
}
_disposed = true;
}
}
class Consumer
{
public void Consume(IEnumerable<CSVModel> data)
{
foreach(var item in data)
{
//Do whatever you want with the data. I am gonna log it to the console
Console.WriteLine(item);
}
}
}
public static void Main()
{
using var myReader = new Reader("c:\\path.csv");
new Consumer().Consume(myReader.Read());
}
You can lazily enumerate items from GetRecords IEnumerable and yield records to the consumer like this:
public class Reader
{
public IEnumerable<CSVModel> Read(string file)
{
using var reader = new StreamReader(@"C:\Users\z0042d8s\Desktop\GST invoice\RISK All RISKs_RM - Copy.CSV");
using var csv = new CsvReader(reader, CultureInfo.InvariantCulture);
foreach (var csvRecord in csv.GetRecords<CSVModel>())
{
yield return csvRecord;
}
}
}
This way you guarantee that the records are enumerated before the underlying readers are disposed, and you don't need to load all the data up front.
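With this iterator version, the original calling code works unchanged, since disposal is deferred until the enumeration in Consume finishes (a sketch using a placeholder path):
var data = new Reader().Read("file.csv"); // nothing is read yet
new Consumer().Consume(data); // the file is read, then disposed, during this loop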

Read x number of lines of a file at a time C#

I want to read and process 10+ lines at a time for GB-sized files, but haven't found a solution that spits out 10 lines at a time until the end of the file.
My last attempt was :
int n = 10;
foreach (var line in File.ReadLines("path")
.AsParallel().WithDegreeOfParallelism(n))
{
System.Console.WriteLine(line);
Thread.Sleep(1000);
}
I've seen solutions that use buffer sizes but I want to read in the entire row.
The default behaviour is to read all the lines in one shot. If you want to read fewer than that, you need to dig a little deeper into how the file is read and use a StreamReader, which will let you control the reading process.
using (StreamReader sr = new StreamReader(path))
{
while (sr.Peek() >= 0)
{
Console.WriteLine(sr.ReadLine());
}
}
It also has a ReadLineAsync method that returns a task.
If you keep these tasks in a ConcurrentBag you can very easily keep the processing running on 10 lines at a time.
var bag = new ConcurrentBag<Task>();
using (StreamReader sr = new StreamReader(path))
{
while(sr.Peek() >=0)
{
if(bag.Count < 10)
{
Task processing = sr.ReadLineAsync().ContinueWith(read =>
{
string s = read.Result; // EDIT: removed await to reflect Scot's comment
// process line
});
bag.Add(processing);
}
else
{
Task.WaitAny(bag.ToArray());
// remove completed tasks from bag
}
}
}
Note: this code is for guidance only, not to be used as-is; one way to fill in the task-removal step is sketched below.
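A hedged sketch for draining completed tasks: ConcurrentBag has no Remove, so the bag is rebuilt from the tasks still in flight (requires using System.Linq):
Task.WaitAny(bag.ToArray()); // block until at least one line has been processed
bag = new ConcurrentBag<Task>(bag.Where(t => !t.IsCompleted)); // keep only running tasks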
If all you want is the last ten lines, you can get that with the solution here:
How to read a text file reversely with iterator in C#
This method would create "pages" of lines from your file.
public static IEnumerable<string[]> ReadFileAsLinesSets(string fileName, int setLen = 10)
{
using (var reader = new StreamReader(fileName))
while (!reader.EndOfStream)
{
var set = new List<string>();
for (var i = 0; i < setLen && !reader.EndOfStream; i++)
{
set.Add(reader.ReadLine());
}
yield return set.ToArray();
}
}
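For example, consuming the pages might look like this (each iteration yields an array of up to setLen lines):
foreach (string[] page in ReadFileAsLinesSets("path", 10))
{
// process the batch of up to 10 lines here
Console.WriteLine(string.Join(" | ", page));
}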
... More fun version...
class Example
{
static void Main(string[] args)
{
"YourFile.txt".ReadAsLines()
.AsPaged(10)
.Select(a=>a.ToArray()) //required or else you will get random data since "WrappedEnumerator" is not thread safe
.AsParallel()
.WithDegreeOfParallelism(10)
.ForAll(a =>
{
//Do your work here.
Console.WriteLine(a.Aggregate(new StringBuilder(),
(sb, v) => sb.AppendFormat("{0:000000} ", v),
sb => sb.ToString()));
});
}
}
public static class ToolsEx
{
public static IEnumerable<IEnumerable<T>> AsPaged<T>(this IEnumerable<T> items,
int pageLength = 10)
{
using (var enumerator = new WrappedEnumerator<T>(items.GetEnumerator()))
while (!enumerator.IsDone)
yield return enumerator.GetNextPage(pageLength);
}
public static IEnumerable<T> GetNextPage<T>(this IEnumerator<T> enumerator,
int pageLength = 10)
{
for (var i = 0; i < pageLength && enumerator.MoveNext(); i++)
yield return enumerator.Current;
}
public static IEnumerable<string> ReadAsLines(this string fileName)
{
using (var reader = new StreamReader(fileName))
while (!reader.EndOfStream)
yield return reader.ReadLine();
}
}
internal class WrappedEnumerator<T> : IEnumerator<T>
{
public WrappedEnumerator(IEnumerator<T> enumerator)
{
this.InnerEnumerator = enumerator;
this.IsDone = false;
}
public IEnumerator<T> InnerEnumerator { get; private set; }
public bool IsDone { get; private set; }
public T Current { get { return this.InnerEnumerator.Current; } }
object System.Collections.IEnumerator.Current { get { return this.Current; } }
public void Dispose()
{
this.InnerEnumerator.Dispose();
this.IsDone = true;
}
public bool MoveNext()
{
var next = this.InnerEnumerator.MoveNext();
this.IsDone = !next;
return next;
}
public void Reset()
{
this.IsDone = false;
this.InnerEnumerator.Reset();
}
}

How to check if a class implements interface methods in NRefactory

I have two files. One of them is a class declaration and the other is an interface declaration. The class should implement the interface. How can I check in NRefactory whether the class implements the interface's methods?
I should give more details.
First file - for example:
class Test : IF
{
}
and the second
interface IF
{
void Foo();
}
I have to read these files and parse them with NRefactory. I need to check whether class Test implements the method from interface IF.
Without compilation and loading compiled assembly.
Use the is keyword
http://msdn.microsoft.com/en-us/library/scekt9xw.aspx
if (myObj is IMyInterface) {
....
}
I found a solution in the NRefactory code and modified it to achieve my goal. First we implement a visitor which checks whether a class implements each method from its interfaces:
public class MissingInterfaceMemberImplementationVisitor : DepthFirstAstVisitor
{
private readonly CSharpAstResolver _resolver;
public bool IsInterfaceMemberMissing { get; private set; }
public MissingInterfaceMemberImplementationVisitor(CSharpAstResolver resolver)
{
_resolver = resolver;
IsInterfaceMemberMissing = false;
}
public override void VisitTypeDeclaration(TypeDeclaration typeDeclaration)
{
if (typeDeclaration.ClassType == ClassType.Interface || typeDeclaration.ClassType == ClassType.Enum)
return;
base.VisitTypeDeclaration(typeDeclaration);
var rr = _resolver.Resolve(typeDeclaration);
if (rr.IsError)
return;
foreach (var baseType in typeDeclaration.BaseTypes)
{
var bt = _resolver.Resolve(baseType);
if (bt.IsError || bt.Type.Kind != TypeKind.Interface)
continue;
bool interfaceMissing;
var toImplement = ImplementInterfaceAction.CollectMembersToImplement(rr.Type.GetDefinition(), bt.Type, false, out interfaceMissing);
if (toImplement.Count == 0)
continue;
IsInterfaceMemberMissing = true;
}
}
}
And now we have to read all the files, parse them, and invoke the above class in the following way:
var solutionFiles = new List<FileInfo>();
var trees = new Dictionary<FileInfo, SyntaxTree>();
IProjectContent projectContent = new CSharpProjectContent();
foreach (var file in solutionFiles.Where(f => f.Extension == ".cs").Distinct())
{
var parser = new ICSharpCode.NRefactory.CSharp.CSharpParser();
SyntaxTree syntaxTree;
using (var fs = new FileStream(file.FullName, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, FileOptions.SequentialScan))
{
syntaxTree = parser.Parse(fs, file.FullName);
}
trees.Add(file, syntaxTree);
var unresolvedFile = syntaxTree.ToTypeSystem();
projectContent = projectContent.AddOrUpdateFiles(unresolvedFile);
}
var compilation = projectContent.CreateCompilation();
foreach (var sharpFile in trees)
{
var originalResolver = new CSharpAstResolver(compilation, sharpFile.Value, sharpFile.Value.ToTypeSystem());
var visitor = new MissingInterfaceMemberImplementationVisitor(originalResolver);
sharpFile.Value.AcceptVisitor(visitor);
if (visitor.IsInterfaceMemberMissing)
return false;
}
return true;
Alternatively, you can also use the as keyword.
class MyClass { }
interface IInterface { }
MyClass instance = new MyClass();
var inst = instance as IInterface;
if (inst != null)
{
// your class implements the interface
}
Use this when you need to use the instance afterwards without an additional cast. With is you still need to do:
if (instance is IInterface)
{
var inst = (IInterface) instance;
}
or
(instance as IInterface).InstanceProperty
so why not do it right away.
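As an aside, C# 7.0 pattern matching combines the test and the cast into a single step:
if (instance is IInterface iface)
{
// iface is already typed as IInterface here; no separate cast is needed
}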

ObjectDisposedException when trying to upload a file

I have this service class:
public delegate string AsyncMethodCaller(string id, HttpPostedFileBase file);
public class ObjectService : IDisposable
{
private readonly IObjectRepository repository;
private readonly IAmazonS3 client;
private readonly string bucketName;
private static object syncRoot = new object();
private static IDictionary<string, int> processStatus { get; set; }
public ObjectService(string accessKey, string secretKey, string bucketName)
{
var credentials = new BasicAWSCredentials(accessKey, secretKey);
this.bucketName = bucketName;
this.client = new AmazonS3Client(credentials, RegionEndpoint.EUWest1);
this.repository = new ObjectRepository(this.client, this.bucketName);
if (processStatus == null)
processStatus = new Dictionary<string, int>();
}
public IList<S3Object> GetAll()
{
return this.repository.GetAll();
}
public S3Object Get(string key)
{
return this.GetAll().Where(model => model.Key.Equals(key, StringComparison.OrdinalIgnoreCase)).SingleOrDefault();
}
/// <summary>
/// Note: You can upload objects of up to 5 GB in size in a single operation. For objects greater than 5 GB you must use the multipart upload API.
/// Using the multipart upload API you can upload objects up to 5 TB each. For more information, see http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html.
/// </summary>
/// <param name="id">Unique id for tracking the upload progress</param>
/// <param name="bucketName">The name of the bucket that the object is being uploaded to</param>
/// <param name="file">The file that will be uploaded</param>
/// <returns>The unique id</returns>
public string Upload(string id, HttpPostedFileBase file)
{
var reader = new BinaryReader(file.InputStream);
var data = reader.ReadBytes((int)file.InputStream.Length);
var stream = new MemoryStream(data);
var utility = new TransferUtility(client);
var request = new TransferUtilityUploadRequest()
{
BucketName = this.bucketName,
Key = file.FileName,
InputStream = stream
};
request.UploadProgressEvent += (sender, e) => request_UploadProgressEvent(sender, e, id);
utility.Upload(request);
return id;
}
private void request_UploadProgressEvent(object sender, UploadProgressArgs e, string id)
{
lock (syncRoot)
{
processStatus[id] = e.PercentDone;
}
}
public void Add(string id)
{
lock (syncRoot)
{
processStatus.Add(id, 0);
}
}
public void Remove(string id)
{
lock (syncRoot)
{
processStatus.Remove(id);
}
}
public int GetStatus(string id)
{
lock (syncRoot)
{
if (processStatus.Keys.Count(x => x == id) == 1)
{
return processStatus[id];
}
else
{
return 100;
}
}
}
public void Dispose()
{
this.repository.Dispose();
this.client.Dispose();
}
}
and my controller looks like this:
public class _UploadController : Controller
{
public void StartUpload(string id, HttpPostedFileBase file)
{
var bucketName = CompanyProvider.CurrentCompanyId();
using (var service = new ObjectService(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"], bucketName))
{
service.Add(id);
var caller = new AsyncMethodCaller(service.Upload);
var result = caller.BeginInvoke(id, file, new AsyncCallback(CompleteUpload), caller);
}
}
public void CompleteUpload(IAsyncResult result)
{
var caller = (AsyncMethodCaller)result.AsyncState;
var id = caller.EndInvoke(result);
}
//
// GET: /_Upload/GetCurrentProgress
public JsonResult GetCurrentProgress(string id)
{
try
{
var bucketName = CompanyProvider.CurrentCompanyId();
this.ControllerContext.HttpContext.Response.AddHeader("cache-control", "no-cache");
using (var service = new ObjectService(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"], bucketName))
{
return new JsonResult { Data = new { success = true, progress = service.GetStatus(id) } };
}
}
catch (Exception ex)
{
return new JsonResult { Data = new { success = false, error = ex.Message } };
}
}
}
Now, I have found that sometimes I get an ObjectDisposedException when trying to upload a file, on this line: var data = reader.ReadBytes((int)file.InputStream.Length);. I read that I should not use the using keyword because of the asynchronous calls, but the stream still seems to get disposed.
Can anyone tell me why?
Update 1
I have changed my controller to this:
private ObjectService service = new ObjectService(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"], CompanyProvider.CurrentCompanyId());
public void StartUpload(string id, HttpPostedFileBase file)
{
service.Add(id);
var caller = new AsyncMethodCaller(service.Upload);
var result = caller.BeginInvoke(id, file, new AsyncCallback(CompleteUpload), caller);
}
public void CompleteUpload(IAsyncResult result)
{
var caller = (AsyncMethodCaller)result.AsyncState;
var id = caller.EndInvoke(result);
this.service.Dispose();
}
but I am still getting the error on the file.InputStream line.
Update 2
The problem seems to be with the BinaryReader.
I changed the code to look like this:
var inputStream = file.InputStream;
var i = inputStream.Length;
var n = (int)i;
using (var reader = new BinaryReader(inputStream))
{
var data = reader.ReadBytes(n);
var stream = new MemoryStream(data);
var request = new TransferUtilityUploadRequest()
{
BucketName = this.bucketName,
Key = file.FileName,
InputStream = stream
};
try
{
request.UploadProgressEvent += (sender, e) => request_UploadProgressEvent(sender, e, id);
utility.Upload(request);
}
catch
{
file.InputStream.Dispose(); // Close our stream
}
}
If the upload fails and I try to re-upload the item, that is when the error is thrown. It is like the item is locked or something.
You are disposing the service with the using statement before the asynchronous call started by BeginInvoke has completed.
using (var service = new ObjectService(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"], bucketName))
{
service.Add(id);
var caller = new AsyncMethodCaller(service.Upload);
var result = caller.BeginInvoke(id, file, new AsyncCallback(CompleteUpload), caller);
}
You have to dispose your service when the job is done:
private ObjectService service = new ObjectService(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"], CompanyProvider.CurrentCompanyId());
public void StartUpload(string id, HttpPostedFileBase file)
{
service.Add(id);
var caller = new AsyncMethodCaller(service.Upload);
var result = caller.BeginInvoke(id, file, new AsyncCallback(CompleteUpload), caller);
}
public void CompleteUpload(IAsyncResult result)
{
var caller = (AsyncMethodCaller)result.AsyncState;
var id = caller.EndInvoke(result);
service.Dispose();
}
Also, your file might be corrupted; try this code:
byte[] buffer = new byte[file.InputStream.Length];
file.InputStream.Seek(0, SeekOrigin.Begin);
file.InputStream.Read(buffer, 0, (int)file.InputStream.Length);
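Note that Stream.Read is not guaranteed to fill the buffer in a single call; a more defensive sketch loops until everything has been read:
byte[] buffer = new byte[file.InputStream.Length];
file.InputStream.Seek(0, SeekOrigin.Begin);
int offset = 0;
while (offset < buffer.Length)
{
// Read returns the number of bytes actually read; 0 means end of stream.
int read = file.InputStream.Read(buffer, offset, buffer.Length - offset);
if (read == 0) break;
offset += read;
}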

Webapi formdata upload (to DB) with extra parameters

I need to upload a file while sending extra parameters.
I have found the following post in stackoverflow: Webapi ajax formdata upload with extra parameters
It describes how to do this using MultipartFormDataStreamProvider and saving data to fileserver. I do not need to save file to server, but to DB instead.
And I already have working code using MultipartMemoryStreamProvider, but it doesn't use the extra parameter.
Can you give me clues how to process extra parameters in Web API?
For example, if I add a file and also a test parameter:
data.append("myParameter", "test");
Here is my Web API code that processes the file upload without the extra parameter:
if (Request.Content.IsMimeMultipartContent())
{
var streamProvider = new MultipartMemoryStreamProvider();
var task = Request.Content.ReadAsMultipartAsync(streamProvider).ContinueWith<IEnumerable<FileModel>>(t =>
{
if (t.IsFaulted || t.IsCanceled)
{
throw new HttpResponseException(HttpStatusCode.InternalServerError);
}
_fleDataService = new FileDataBLL();
FileData fle;
var fleInfo = streamProvider.Contents.Select(i => {
fle = new FileData();
fle.FileName = i.Headers.ContentDisposition.FileName;
var contentTest = i.ReadAsByteArrayAsync();
contentTest.Wait();
if (contentTest.Result != null)
{
fle.FileContent = contentTest.Result;
}
// get extra parameters here ??????
_fleDataService.Save(fle);
return new FileModel(i.Headers.ContentDisposition.FileName, 1024); //todo
});
return fleInfo;
});
return task;
}
Expanding on gooid's answer, I encapsulated the FormData extraction into the provider because I was having issues with it being quoted. This just made for a better implementation, in my opinion.
public class MultipartFormDataMemoryStreamProvider : MultipartMemoryStreamProvider
{
private readonly Collection<bool> _isFormData = new Collection<bool>();
private readonly NameValueCollection _formData = new NameValueCollection(StringComparer.OrdinalIgnoreCase);
private readonly Dictionary<string, Stream> _fileStreams = new Dictionary<string, Stream>();
public NameValueCollection FormData
{
get { return _formData; }
}
public Dictionary<string, Stream> FileStreams
{
get { return _fileStreams; }
}
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
if (parent == null)
{
throw new ArgumentNullException("parent");
}
if (headers == null)
{
throw new ArgumentNullException("headers");
}
var contentDisposition = headers.ContentDisposition;
if (contentDisposition == null)
{
throw new InvalidOperationException("Did not find required 'Content-Disposition' header field in MIME multipart body part.");
}
_isFormData.Add(String.IsNullOrEmpty(contentDisposition.FileName));
return base.GetStream(parent, headers);
}
public override async Task ExecutePostProcessingAsync()
{
for (var index = 0; index < Contents.Count; index++)
{
HttpContent formContent = Contents[index];
if (_isFormData[index])
{
// Field
string formFieldName = UnquoteToken(formContent.Headers.ContentDisposition.Name) ?? string.Empty;
string formFieldValue = await formContent.ReadAsStringAsync();
FormData.Add(formFieldName, formFieldValue);
}
else
{
// File
string fileName = UnquoteToken(formContent.Headers.ContentDisposition.FileName);
Stream stream = await formContent.ReadAsStreamAsync();
FileStreams.Add(fileName, stream);
}
}
}
private static string UnquoteToken(string token)
{
if (string.IsNullOrWhiteSpace(token))
{
return token;
}
if (token.StartsWith("\"", StringComparison.Ordinal) && token.EndsWith("\"", StringComparison.Ordinal) && token.Length > 1)
{
return token.Substring(1, token.Length - 2);
}
return token;
}
}
And here's how I'm using it. Note that I used await since we're on .NET 4.5.
[HttpPost]
public async Task<HttpResponseMessage> Upload()
{
if (!Request.Content.IsMimeMultipartContent())
{
return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType, "Unsupported media type.");
}
// Read the file and form data.
MultipartFormDataMemoryStreamProvider provider = new MultipartFormDataMemoryStreamProvider();
await Request.Content.ReadAsMultipartAsync(provider);
// Extract the fields from the form data.
string description = provider.FormData["description"];
int uploadType;
if (!Int32.TryParse(provider.FormData["uploadType"], out uploadType))
{
return Request.CreateResponse(HttpStatusCode.BadRequest, "Upload Type is invalid.");
}
// Check if files are on the request.
if (!provider.FileStreams.Any())
{
return Request.CreateResponse(HttpStatusCode.BadRequest, "No file uploaded.");
}
IList<string> uploadedFiles = new List<string>();
foreach (KeyValuePair<string, Stream> file in provider.FileStreams)
{
string fileName = file.Key;
Stream stream = file.Value;
// Do something with the uploaded file
UploadManager.Upload(stream, fileName, uploadType, description);
// Keep track of the filename for the response
uploadedFiles.Add(fileName);
}
return Request.CreateResponse(HttpStatusCode.OK, "Successfully Uploaded: " + string.Join(", ", uploadedFiles));
}
You can achieve this in a not-so-very-clean manner by implementing a custom DataStreamProvider that duplicates the logic for parsing FormData from multi-part content from MultipartFormDataStreamProvider.
I'm not quite sure why the decision was made to subclass MultipartFormDataStreamProvider from MultiPartFileStreamProvider without at least extracting the code that identifies and exposes the FormData collection since it is useful for many tasks involving multi-part data outside of simply saving a file to disk.
Anyway, the following provider should help solve your issue. You will still need to ensure that when you iterate the provider content you ignore anything that does not have a filename (specifically in the streamProvider.Contents.Select() statement), or else you risk trying to upload the form data to the DB. Hence the code that asks the provider whether an HttpContent is a stream (IsStream()); this is a bit of a hack, but it was the simplest way I could think to do it.
Note that it is basically a cut-and-paste hatchet job from the source of MultipartFormDataStreamProvider; it has not been rigorously tested (inspired by this answer).
public class MultipartFormDataMemoryStreamProvider : MultipartMemoryStreamProvider
{
private readonly Collection<bool> _isFormData = new Collection<bool>();
private readonly NameValueCollection _formData = new NameValueCollection(StringComparer.OrdinalIgnoreCase);
public NameValueCollection FormData
{
get { return _formData; }
}
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
if (parent == null) throw new ArgumentNullException("parent");
if (headers == null) throw new ArgumentNullException("headers");
var contentDisposition = headers.ContentDisposition;
if (contentDisposition != null)
{
_isFormData.Add(String.IsNullOrEmpty(contentDisposition.FileName));
return base.GetStream(parent, headers);
}
throw new InvalidOperationException("Did not find required 'Content-Disposition' header field in MIME multipart body part.");
}
public override async Task ExecutePostProcessingAsync()
{
for (var index = 0; index < Contents.Count; index++)
{
if (IsStream(index))
continue;
var formContent = Contents[index];
var contentDisposition = formContent.Headers.ContentDisposition;
var formFieldName = UnquoteToken(contentDisposition.Name) ?? string.Empty;
var formFieldValue = await formContent.ReadAsStringAsync();
FormData.Add(formFieldName, formFieldValue);
}
}
private static string UnquoteToken(string token)
{
if (string.IsNullOrWhiteSpace(token))
return token;
if (token.StartsWith("\"", StringComparison.Ordinal) && token.EndsWith("\"", StringComparison.Ordinal) && token.Length > 1)
return token.Substring(1, token.Length - 2);
return token;
}
public bool IsStream(int idx)
{
return !_isFormData[idx];
}
}
It can be used as follows (using TPL syntax to match your question):
[HttpPost]
public Task<string> Post()
{
if (!Request.Content.IsMimeMultipartContent())
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.NotAcceptable, "Invalid Request!"));
var provider = new MultipartFormDataMemoryStreamProvider();
return Request.Content.ReadAsMultipartAsync(provider).ContinueWith(p =>
{
var result = p.Result;
var myParameter = result.FormData.GetValues("myParameter").FirstOrDefault();
foreach (var stream in result.Contents.Where((content, idx) => result.IsStream(idx)))
{
var file = new FileData(stream.Headers.ContentDisposition.FileName);
var contentTest = stream.ReadAsByteArrayAsync();
// ... and so on, as per your original code.
}
return myParameter;
});
}
I tested it with the following HTML form:
<form action="/api/values" method="post" enctype="multipart/form-data">
<input name="myParameter" type="hidden" value="i dont do anything interesting"/>
<input type="file" name="file1" />
<input type="file" name="file2" />
<input type="submit" value="OK" />
</form>
I really needed the media type and length of the uploaded files, so I modified Mark Seefeldt's answer slightly to the following:
public class MultipartFormFile
{
public string Name { get; set; }
public long? Length { get; set; }
public string MediaType { get; set; }
public Stream Stream { get; set; }
}
public class MultipartFormDataMemoryStreamProvider : MultipartMemoryStreamProvider
{
private readonly Collection<bool> _isFormData = new Collection<bool>();
private readonly NameValueCollection _formData = new NameValueCollection(StringComparer.OrdinalIgnoreCase);
private readonly List<MultipartFormFile> _fileStreams = new List<MultipartFormFile>();
public NameValueCollection FormData
{
get { return _formData; }
}
public List<MultipartFormFile> FileStreams
{
get { return _fileStreams; }
}
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
if (parent == null)
{
throw new ArgumentNullException("parent");
}
if (headers == null)
{
throw new ArgumentNullException("headers");
}
var contentDisposition = headers.ContentDisposition;
if (contentDisposition == null)
{
throw new InvalidOperationException("Did not find required 'Content-Disposition' header field in MIME multipart body part.");
}
_isFormData.Add(String.IsNullOrEmpty(contentDisposition.FileName));
return base.GetStream(parent, headers);
}
public override async Task ExecutePostProcessingAsync()
{
for (var index = 0; index < Contents.Count; index++)
{
HttpContent formContent = Contents[index];
if (_isFormData[index])
{
// Field
string formFieldName = UnquoteToken(formContent.Headers.ContentDisposition.Name) ?? string.Empty;
string formFieldValue = await formContent.ReadAsStringAsync();
FormData.Add(formFieldName, formFieldValue);
}
else
{
// File
var file = new MultipartFormFile
{
Name = UnquoteToken(formContent.Headers.ContentDisposition.FileName),
Length = formContent.Headers.ContentLength,
MediaType = formContent.Headers.ContentType.MediaType,
Stream = await formContent.ReadAsStreamAsync()
};
FileStreams.Add(file);
}
}
}
private static string UnquoteToken(string token)
{
if (string.IsNullOrWhiteSpace(token))
{
return token;
}
if (token.StartsWith("\"", StringComparison.Ordinal) && token.EndsWith("\"", StringComparison.Ordinal) && token.Length > 1)
{
return token.Substring(1, token.Length - 2);
}
return token;
}
}
Ultimately, the following was what worked for me:
string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);
var filesReadToProvider = await Request.Content.ReadAsMultipartAsync(provider);
foreach (var file in provider.FileData)
{
var fileName = file.Headers.ContentDisposition.FileName.Replace("\"", string.Empty);
byte[] documentData;
documentData = File.ReadAllBytes(file.LocalFileName);
DAL.Document newRecord = new DAL.Document
{
PathologyRequestId = PathologyRequestId,
FileName = fileName,
DocumentData = documentData,
CreatedById = ApplicationSecurityDirector.CurrentUserGuid,
CreatedDate = DateTime.Now,
UpdatedById = ApplicationSecurityDirector.CurrentUserGuid,
UpdatedDate = DateTime.Now
};
context.Documents.Add(newRecord);
context.SaveChanges();
}
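One caveat worth noting on this last approach: MultipartFormDataStreamProvider writes each uploaded part to a temporary file under root, and those files are not cleaned up automatically. A hedged addition after the record is saved:
// The temp file written by MultipartFormDataStreamProvider can be
// deleted once its bytes have been copied into the database record.
File.Delete(file.LocalFileName);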
