What does the keyword "Action" do? [duplicate] - c#

This question already has answers here:
Delegates: Predicate vs. Action vs. Func
(10 answers)
Closed 8 years ago.
In your code, when you open the Mp3FileReader, you use the word Action and then place a method inside it using a lambda expression.
What does the keyword Action do?
What is being done inside the File.Create method?
var mp3Path = @"C:\Users\ronnie\Desktop\mp3\dotnetrocks_0717_alan_dahl_imagethink.mp3";
int splitLength = 120;
var mp3Dir = Path.GetDirectoryName(mp3Path);
var mp3File = Path.GetFileName(mp3Path);
var splitDir = Path.Combine(mp3Dir, Path.GetFileNameWithoutExtension(mp3Path));
Directory.CreateDirectory(splitDir);

int splitI = 0;
int secsOffset = 0;
using (var reader = new Mp3FileReader(mp3Path))
{
    FileStream writer = null;
    Action createWriter = new Action(() =>
    {
        writer = File.Create(Path.Combine(splitDir, Path.ChangeExtension(mp3File, (++splitI).ToString("D4") + ".mp3")));
    });

    Mp3Frame frame;
    while ((frame = reader.ReadNextFrame()) != null)
    {
        if (writer == null) createWriter();

        if ((int)reader.CurrentTime.TotalSeconds - secsOffset >= splitLength)
        {
            writer.Dispose();
            createWriter();
            secsOffset = (int)reader.CurrentTime.TotalSeconds;
        }

        writer.Write(frame.RawData, 0, frame.RawData.Length);
    }

    if (writer != null) writer.Dispose();
}

As noted in the comments, Action here is a delegate type, which, given its placement in the variable declaration, many readers could probably have inferred from the context. :)
The lambda passed to the Action delegate simply calls File.Create() with a new file name generated from the splitI index, and assigns the resulting FileStream to writer.
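For reference, here's a minimal standalone sketch (not from the question's code) of how Action and its related delegate types behave:

// Action encapsulates a method with no parameters and no return value.
Action greet = () => Console.WriteLine("Hello");
greet(); // prints "Hello"

// Action<T> takes arguments; Func<..., TResult> returns a value.
Action<string> print = name => Console.WriteLine("Hi, " + name);
Func<int, int> square = x => x * x;

print("Ronnie");              // prints "Hi, Ronnie"
Console.WriteLine(square(4)); // prints "16"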
Ironically, in this particular case, the use of Action is superfluous. The code really should not have been written this way, as the delegate just makes it harder to read. A better version looks like this:
using (var reader = new Mp3FileReader(mp3Path))
{
    FileStream writer = null;
    try
    {
        Mp3Frame frame;
        while ((frame = reader.ReadNextFrame()) != null)
        {
            if (writer != null &&
                (int)reader.CurrentTime.TotalSeconds - secsOffset >= splitLength)
            {
                writer.Dispose();
                writer = null;
                secsOffset = (int)reader.CurrentTime.TotalSeconds;
            }

            if (writer == null)
                writer = File.Create(Path.Combine(splitDir,
                    Path.ChangeExtension(mp3File, (++splitI).ToString("D4") + ".mp3")));

            writer.Write(frame.RawData, 0, frame.RawData.Length);
        }
    }
    finally
    {
        if (writer != null) writer.Dispose();
    }
}
That way, the work to create a new FileStream instance is only ever needed in one place.
Even if it really were required to call it from two different places, IMHO this particular scenario would call for a named method instead. The code would have been more readable that way than with the delegate instance.
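For illustration, a sketch of that named-method version; it assumes splitDir, mp3File, and splitI are hoisted to fields (the names come from the question's code):

private FileStream CreateSplitWriter()
{
    // ++splitI yields the next zero-padded segment number, e.g. "0001.mp3"
    return File.Create(Path.Combine(splitDir,
        Path.ChangeExtension(mp3File, (++splitI).ToString("D4") + ".mp3")));
}

// Both call sites then read naturally:
// writer = CreateSplitWriter();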

Related

Read lines batch-wise in C#

I have the code below, which reads a .json stream line by line. Since it will be a lengthy process, I have decided to take 100 lines at a time before calling my main function, and the code below works fine. But it also gives me an issue if the number of lines is less than 100: in that case my main function will not be called. How can I optimize the code below to handle both scenarios, i.e. read at most 100 lines at a time and pass them to the main function, or read all the lines if there are fewer than 100?
public async void ReadJsonStream()
{
    JsonSerializer serializer = new JsonSerializer();
    using (Stream data = await manager.DownloadBlob(null, "TestMultipleLines.json", null))
    {
        using (StreamReader streamReader = new StreamReader(data, Encoding.UTF8))
        {
            int counter = 1;
            List<string> lines = new List<string>();
            while (streamReader.Peek() >= 0)
            {
                lines.Add(streamReader.ReadLine());
                if (counter == 100)
                {
                    counter = 1;
                    // call main function with lines
                    lines.Clear();
                }
                counter++;
            }
        }
    }
}
I feel what you are trying to do is wrong. How will you parse 100 lines? Do you want to rebuild a JSON deserializer from scratch? And what will happen if some piece of JSON is split between line 100 and line 101?
But in the end, you asked for something, and I'll give you what you asked for.
public async void ReadJsonStream()
{
    JsonSerializer serializer = new JsonSerializer();
    using (Stream data = await manager.DownloadBlob(null, "TestMultipleLines.json", null))
    {
        using (StreamReader streamReader = new StreamReader(data, Encoding.UTF8))
        {
            List<string> lines = new List<string>();
            string line;
            while ((line = await streamReader.ReadLineAsync()) != null)
            {
                lines.Add(line);
                if (lines.Count == 100)
                {
                    // call main function with lines
                    lines.Clear();
                }
            }
            if (lines.Count != 0)
            {
                // call main function with lines
                lines.Clear(); // useless
            }
        }
    }
}
As others noted, you forgot the "additional" call to // call main function with lines at the end of the cycle. I've modified the code accordingly. You don't need .Peek(); .ReadLine() returns null at the end of the input stream. And since you already made your method async, you could make it fully async by using .ReadLineAsync().
Note that the JsonSerializer of Json.NET already has a Deserialize method that accepts a TextReader (and a StreamReader is a TextReader), and that method will read the file a piece at a time, without preloading it before parsing.
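For example, a sketch of deserializing straight from the stream that way; RootObject is a hypothetical type matching the shape of the JSON document:

using Newtonsoft.Json;

using (var streamReader = new StreamReader(data, Encoding.UTF8))
using (var jsonReader = new JsonTextReader(streamReader))
{
    JsonSerializer serializer = new JsonSerializer();
    // Reads and parses the stream incrementally, with no full preload:
    RootObject result = serializer.Deserialize<RootObject>(jsonReader);
}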
Add a check after the while loop. If the lines list is not empty, call main.
public async void ReadJsonStream()
{
    JsonSerializer serializer = new JsonSerializer();
    using (Stream data = await manager.DownloadBlob(null, "TestMultipleLines.json", null))
    {
        using (StreamReader streamReader = new StreamReader(data, Encoding.UTF8))
        {
            int counter = 1;
            List<string> lines = new List<string>();
            while (streamReader.Peek() >= 0)
            {
                lines.Add(streamReader.ReadLine());
                if (counter == 100)
                {
                    counter = 1;
                    // call main function with lines
                    lines.Clear();
                }
                counter++;
            }
            if (lines.Count > 0)
            {
                // call main function with lines
            }
        }
    }
}

Sudden memory consumption jump resulting in out of memory exception while processing huge text file

I need to process a very large text file (6-8 GB). I wrote the code attached below. Unfortunately, every time the output file (being created next to the source file) reaches ~2 GB, I observe a sudden jump in memory consumption (from ~100 MB to a few GB) and, as a result, an out-of-memory exception.
The debugger indicates that the OOM occurs at while ((tempLine = streamReader.ReadLine()) != null)
I am targeting .NET 4.7 and x64 architecture only.
A single line is at most 50 characters long.
I could work around this by splitting the original file into smaller parts, so as not to face the problem while processing, and merging the results back into one file at the end, but I would like to avoid that.
Code:
public async Task PerformDecodeAsync(string sourcePath, string targetPath)
{
    var allLines = CountLines(sourcePath);
    long processedlines = default;
    using (File.Create(targetPath)) ;
    var streamWriter = File.AppendText(targetPath);
    var decoderBlockingCollection = new BlockingCollection<string>(1000);
    var writerBlockingCollection = new BlockingCollection<string>(1000);

    var producer = Task.Factory.StartNew(() =>
    {
        using (var streamReader = new StreamReader(File.OpenRead(sourcePath), Encoding.Default, true))
        {
            string tempLine;
            while ((tempLine = streamReader.ReadLine()) != null)
            {
                decoderBlockingCollection.Add(tempLine);
            }
            decoderBlockingCollection.CompleteAdding();
        }
    });

    var consumer1 = Task.Factory.StartNew(() =>
    {
        foreach (var line in decoderBlockingCollection.GetConsumingEnumerable())
        {
            short decodeCounter = 0;
            StringBuilder builder = new StringBuilder();
            foreach (var singleChar in line)
            {
                var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
                if (positionInDecodeKey > 0)
                    builder.Append(model.Substring(positionInDecodeKey, 1));
                else
                    builder.Append(singleChar);

                if (decodeCounter > 18)
                    decodeCounter = 0;
                else ++decodeCounter;
            }

            writerBlockingCollection.TryAdd(builder.ToString());
            Interlocked.Increment(ref processedlines);
            if (processedlines == (long)allLines)
                writerBlockingCollection.CompleteAdding();
        }
    });

    var writer = Task.Factory.StartNew(() =>
    {
        foreach (var line in writerBlockingCollection.GetConsumingEnumerable())
        {
            streamWriter.WriteLine(line);
        }
    });

    Task.WaitAll(producer, consumer1, writer);
}
Solutions, as well as advice on how to optimize it a little more, are greatly appreciated.
Like I said, I'd probably go for something simpler first, unless or until it's demonstrated that it's not performing well. As Adi said in their answer, this work appears to be I/O bound - so there seems little benefit in creating multiple tasks for it.
public void PerformDecode(string sourcePath, string targetPath)
{
    File.WriteAllLines(targetPath, File.ReadLines(sourcePath).Select(line =>
    {
        short decodeCounter = 0;
        StringBuilder builder = new StringBuilder();
        foreach (var singleChar in line)
        {
            var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
            if (positionInDecodeKey > 0)
                builder.Append(model.Substring(positionInDecodeKey, 1));
            else
                builder.Append(singleChar);

            if (decodeCounter > 18)
                decodeCounter = 0;
            else ++decodeCounter;
        }
        return builder.ToString();
    }));
}
Now, of course, this code actually blocks until it's done, which is why I've not marked it async. But then, so did yours, and the compiler should have been warning about that already.
(You could try using PLINQ instead of LINQ for the Select portion, but honestly, the amount of processing we're doing here looks trivial; profile first before applying any such change.)
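If profiling ever did justify it, a sketch of that PLINQ variant might look like this (DecodeLine stands in for the per-line transform above; AsOrdered keeps the output lines in input order):

File.WriteAllLines(targetPath,
    File.ReadLines(sourcePath)
        .AsParallel()
        .AsOrdered()                      // preserve the original line order
        .Select(line => DecodeLine(line)));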
As the work you are doing is mostly IO bound, you aren't really gaining anything from parallelization. It also looks to me (correct me if I'm wrong) like your transformation algorithm doesn't depend on reading the file line by line, so I would recommend instead doing something like this:
void Main()
{
    // Set up streams for testing
    using (var inputStream = new MemoryStream())
    using (var outputStream = new MemoryStream())
    using (var inputWriter = new StreamWriter(inputStream))
    using (var outputReader = new StreamReader(outputStream))
    {
        // Write test string and rewind stream
        inputWriter.Write("abcdefghijklmnop");
        inputWriter.Flush();
        inputStream.Seek(0, SeekOrigin.Begin);

        var inputBuffer = new byte[5];
        var outputBuffer = new byte[5];
        int inputLength;
        while ((inputLength = inputStream.Read(inputBuffer, 0, inputBuffer.Length)) > 0)
        {
            for (var i = 0; i < inputLength; i++)
            {
                // Transform each character
                outputBuffer[i] = ++inputBuffer[i];
            }
            // Write to output
            outputStream.Write(outputBuffer, 0, inputLength);
        }

        // Read back for testing
        outputStream.Seek(0, SeekOrigin.Begin);
        var output = outputReader.ReadToEnd();
        Console.WriteLine(output);
        // Outputs: "bcdefghijklmnopq"
    }
}
Obviously, you would use FileStreams instead of MemoryStreams, and you can increase the buffer length to something much larger (this was just a demonstrative example). Also, as your original method is async, you could use the async variants of Stream.Write and Stream.Read.
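A sketch of that async, FileStream-based variant under those assumptions; Transform stands in for the actual per-byte decode step:

public async Task DecodeAsync(string sourcePath, string targetPath)
{
    var buffer = new byte[81920]; // a much larger buffer than the demo's 5 bytes
    using (var input = File.OpenRead(sourcePath))
    using (var output = File.Create(targetPath))
    {
        int read;
        while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            for (var i = 0; i < read; i++)
                buffer[i] = Transform(buffer[i]); // hypothetical transform
            await output.WriteAsync(buffer, 0, read);
        }
    }
}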

Operation is not valid due to the current state of the object

I am a beginner in Windows Phone development and the Nokia Imaging SDK. I have two functions.
First, I call the function below to change the image to grayscale:
private async void PickImageCallback(object sender, PhotoResult e)
{
    if (e.TaskResult != TaskResult.OK || e.ChosenPhoto == null)
    {
        return;
    }
    using (var source = new StreamImageSource(e.ChosenPhoto))
    {
        using (var filters = new FilterEffect(source))
        {
            var sampleFilter = new GrayscaleFilter();
            filters.Filters = new IFilter[] { sampleFilter };
            var target = new WriteableBitmap((int)CartoonImage.ActualWidth, (int)CartoonImage.ActualHeight);
            var renderer = new WriteableBitmapRenderer(filters, target);
            {
                await renderer.RenderAsync();
                _thumbnailImageBitmap = target;
                CartoonImage.Source = target;
            }
        }
    }
    SaveButton.IsEnabled = true;
}
Then I call a function to change the image to binary color:
private async void Binary(WriteableBitmap bm_image)
{
    var target = new WriteableBitmap((int)CartoonImage.ActualWidth, (int)CartoonImage.ActualHeight);
    MemoryStream stream = new MemoryStream();
    bm_image.SaveJpeg(stream, bm_image.PixelWidth, bm_image.PixelHeight, 0, 100);
    using (var source = new StreamImageSource(stream))
    {
        using (var filters = new FilterEffect(source))
        {
            var sampleFilter = new StampFilter(5, 0.7);
            filters.Filters = new IFilter[] { sampleFilter };
            var renderer1 = new WriteableBitmapRenderer(filters, target);
            {
                await renderer1.RenderAsync();
                CartoonImage.Source = target;
            }
        }
    }
}
But when it runs to "await renderer1.RenderAsync();" in the second function, it doesn't work. How can I solve this? And can you explain to me how "await" and "async" work?
Thank you very much!
I'm mostly guessing here, since I do not know what error you get, but I'm pretty sure your problem lies in setting up the source. Have you made sure the memory stream position is set to the beginning (0) before creating the StreamImageSource?
Try adding:
stream.Position = 0;
before creating the StreamImageSource.
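In context, the relevant part of the Binary method would then look roughly like this:

MemoryStream stream = new MemoryStream();
bm_image.SaveJpeg(stream, bm_image.PixelWidth, bm_image.PixelHeight, 0, 100);
stream.Position = 0; // rewind; otherwise StreamImageSource starts reading at the end
using (var source = new StreamImageSource(stream))
{
    // ... filters and rendering as before ...
}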
Instead of trying to create a memory stream from the writeable bitmap I suggest doing:
using Nokia.InteropServices.WindowsRuntime;
...
using (var source = new BitmapImageSource(bm_image.AsBitmap()))
{
    ...
}

C# expression equivalent to Ruby's sandwich block code

I am a .NET developer and recently started learning Ruby with ruby_koans. Some of Ruby's syntax is amazing, and one example is the way it handles "sandwich" code.
The following is ruby sandwich code.
def file_sandwich(file_name)
  file = open(file_name)
  yield(file)
ensure
  file.close if file
end

def count_lines2(file_name)
  file_sandwich(file_name) do |file|
    count = 0
    while line = file.gets
      count += 1
    end
    count
  end
end

def test_counting_lines2
  assert_equal 4, count_lines2("example_file.txt")
end
I am fascinated that I can get rid of the cumbersome "file open and close" code each time I access a file, but I cannot think of any equivalent C# code. Maybe I could use an IoC container's dynamic proxy to do the same thing, but is there any way I can do it purely with C#?
Many thanks in advance.
You certainly don't need anything IoC-related here. How about:
public T ActOnFile<T>(string filename, Func<Stream, T> func)
{
    using (Stream stream = File.OpenRead(filename))
    {
        return func(stream);
    }
}
public int CountLines(string filename)
{
    return ActOnFile(filename, stream =>
    {
        using (StreamReader reader = new StreamReader(stream))
        {
            int count = 0;
            while (reader.ReadLine() != null)
            {
                count++;
            }
            return count;
        }
    });
}
In this case it doesn't help very much, as the using statement already does most of what you want... but the general principle holds. Indeed, that's how LINQ is so flexible. If you haven't looked at LINQ yet, I strongly recommend that you do.
Here's the actual CountLines method I'd use:
public int CountLines(string filename)
{
    return File.ReadLines(filename).Count();
}
Note that this will still only read a line at a time... but the Count extension method acts on the returned sequence.
In .NET 3.5 it would be:
public int CountLines(string filename)
{
    using (var reader = File.OpenText(filename))
    {
        int count = 0;
        while (reader.ReadLine() != null)
        {
            count++;
        }
        return count;
    }
}
... still pretty simple.
Are you just looking for something that opens and closes the stream for you?
public IEnumerable<string> GetFileLines(string path)
{
    // The using() statement will open, close, and dispose your stream for you:
    using (FileStream fs = new FileStream(path, FileMode.Open))
    using (var reader = new StreamReader(fs))
    {
        // do stuff here, e.g. yield each line:
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return line;
    }
}
Is yield return what you're looking for?
using will call Dispose() and Close() when it reaches the closing brace, but I think the question is how to achieve this particular structure of code.
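For reference, a using statement over a reference type compiles to roughly this try/finally pattern:

// using (var reader = new StreamReader("file.txt")) { /* use reader */ }
// ...is roughly equivalent to:
var reader = new StreamReader("file.txt");
try
{
    // use reader
}
finally
{
    if (reader != null) reader.Dispose();
}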
Edit: Just realized that this isn't exactly what you're looking for, but I'll leave this answer here since a lot of people aren't aware of this technique.
static IEnumerable<string> GetLines(string filename)
{
    using (var r = new StreamReader(filename))
    {
        string line;
        while ((line = r.ReadLine()) != null)
            yield return line;
    }
}

static void Main(string[] args)
{
    Console.WriteLine(GetLines("file.txt").Count());

    // Or, similarly:
    int count = 0;
    foreach (var l in GetLines("file.txt"))
        count++;
    Console.WriteLine(count);
}

Drag and drop virtual files using IStream

I want to enable drag and drop from our Windows Forms based application to Windows Explorer. The big problem: the files are stored in a database, so I need to use delayed data rendering. There is an article on codeproject.com, but the author is using an HGLOBAL object, which leads to memory problems with files bigger than approx. 20 MB. I haven't found a working solution that uses an IStream object instead. I think this must be possible to implement, because this isn't an unusual case (an FTP program needs such a feature too, for example).
Edit: Is it possible to get an event when the user drops the file? Then I could, for example, copy it to a temp folder and Explorer could get it from there. Maybe there is an alternative approach for my problem...
AFAIK, there is no working article about this for .NET, so you will have to write it yourself. This is somewhat complicated, because the .NET DataObject class is limited. I have a working example of the opposite task (accepting delayed-rendering files from Explorer), but that one is easier, because I did not need my own IDataObject implementation.
So your tasks will be:
Find a working IDataObject implementation in .NET. I recommend you look here (Shell Style Drag and Drop in .NET (WPF and WinForms)).
You also need an IStream wrapper for a managed stream; it is relatively easy to implement, see the sketch below this list.
Implement delayed rendering using information from MSDN (Shell Clipboard Formats).
This is the starting point, and in general enough information to implement such a feature. With a bit of patience and several unsuccessful attempts, you will get there :)
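For the second point, a minimal sketch of such a wrapper, assuming read-only, seekable content is enough for delayed rendering (error handling omitted; the class name ManagedIStream is mine):

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;

// Sketch: expose a managed Stream as a COM IStream, enough for
// read-only CFSTR_FILECONTENTS delayed rendering.
class ManagedIStream : IStream
{
    private readonly Stream _stream;

    public ManagedIStream(Stream stream) { _stream = stream; }

    public void Read(byte[] pv, int cb, IntPtr pcbRead)
    {
        int read = _stream.Read(pv, 0, cb);
        if (pcbRead != IntPtr.Zero) Marshal.WriteInt32(pcbRead, read);
    }

    public void Seek(long dlibMove, int dwOrigin, IntPtr plibNewPosition)
    {
        // dwOrigin values (0, 1, 2) match SeekOrigin Begin/Current/End
        long pos = _stream.Seek(dlibMove, (SeekOrigin)dwOrigin);
        if (plibNewPosition != IntPtr.Zero) Marshal.WriteInt64(plibNewPosition, pos);
    }

    public void Stat(out System.Runtime.InteropServices.ComTypes.STATSTG pstatstg, int grfStatFlag)
    {
        pstatstg = new System.Runtime.InteropServices.ComTypes.STATSTG
        {
            cbSize = _stream.Length,
            type = 2 // STGTY_STREAM
        };
    }

    // The remaining members are not needed for read-only rendering:
    public void Write(byte[] pv, int cb, IntPtr pcbWritten) { throw new NotSupportedException(); }
    public void SetSize(long libNewSize) { throw new NotSupportedException(); }
    public void CopyTo(IStream pstm, long cb, IntPtr pcbRead, IntPtr pcbWritten) { throw new NotSupportedException(); }
    public void Commit(int grfCommitFlags) { }
    public void Revert() { }
    public void LockRegion(long libOffset, long cb, int dwLockType) { throw new NotSupportedException(); }
    public void UnlockRegion(long libOffset, long cb, int dwLockType) { throw new NotSupportedException(); }
    public void Clone(out IStream ppstm) { throw new NotSupportedException(); }
}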
Update: The following code lacks many necessary methods and functions, but the main logic is here.
// ...
private static IEnumerable<IVirtualItem> GetDataObjectContent(System.Windows.Forms.IDataObject dataObject)
{
    if (dataObject == null)
        return null;
    List<IVirtualItem> Result = new List<IVirtualItem>();
    bool WideDescriptor = dataObject.GetDataPresent(ShlObj.CFSTR_FILEDESCRIPTORW);
    bool AnsiDescriptor = dataObject.GetDataPresent(ShlObj.CFSTR_FILEDESCRIPTORA);
    if (WideDescriptor || AnsiDescriptor)
    {
        IDataObject NativeDataObject = dataObject as IDataObject;
        if (NativeDataObject != null)
        {
            object Data = null;
            if (WideDescriptor)
                Data = dataObject.GetData(ShlObj.CFSTR_FILEDESCRIPTORW);
            else if (AnsiDescriptor)
                Data = dataObject.GetData(ShlObj.CFSTR_FILEDESCRIPTORA);
            Stream DataStream = Data as Stream;
            if (DataStream != null)
            {
                Dictionary<string, VirtualClipboardFolder> FolderMap =
                    new Dictionary<string, VirtualClipboardFolder>(StringComparer.OrdinalIgnoreCase);
                BinaryReader Reader = new BinaryReader(DataStream);
                int Count = Reader.ReadInt32();
                for (int I = 0; I < Count; I++)
                {
                    VirtualClipboardItem ClipboardItem;
                    if (WideDescriptor)
                    {
                        FILEDESCRIPTORW Descriptor = ByteArrayHelper.ReadStructureFromStream<FILEDESCRIPTORW>(DataStream);
                        if (((Descriptor.dwFlags & FD.FD_ATTRIBUTES) > 0) && ((Descriptor.dwFileAttributes & FileAttributes.Directory) > 0))
                            ClipboardItem = new VirtualClipboardFolder(Descriptor);
                        else
                            ClipboardItem = new VirtualClipboardFile(Descriptor, NativeDataObject, I);
                    }
                    else
                    {
                        FILEDESCRIPTORA Descriptor = ByteArrayHelper.ReadStructureFromStream<FILEDESCRIPTORA>(DataStream);
                        if (((Descriptor.dwFlags & FD.FD_ATTRIBUTES) > 0) && ((Descriptor.dwFileAttributes & FileAttributes.Directory) > 0))
                            ClipboardItem = new VirtualClipboardFolder(Descriptor);
                        else
                            ClipboardItem = new VirtualClipboardFile(Descriptor, NativeDataObject, I);
                    }
                    string ParentFolder = Path.GetDirectoryName(ClipboardItem.FullName);
                    if (string.IsNullOrEmpty(ParentFolder))
                        Result.Add(ClipboardItem);
                    else
                    {
                        VirtualClipboardFolder Parent = FolderMap[ParentFolder];
                        ClipboardItem.Parent = Parent;
                        Parent.Content.Add(ClipboardItem);
                    }
                    VirtualClipboardFolder ClipboardFolder = ClipboardItem as VirtualClipboardFolder;
                    if (ClipboardFolder != null)
                        FolderMap.Add(PathHelper.ExcludeTrailingDirectorySeparator(ClipboardItem.FullName), ClipboardFolder);
                }
            }
        }
    }
    return Result.Count > 0 ? Result : null;
}
// ...
public class VirtualClipboardFile : VirtualClipboardItem, IVirtualFile
{
    // ...
    public Stream Open(FileMode mode, FileAccess access, FileShare share, FileOptions options, long startOffset)
    {
        if ((mode != FileMode.Open) || (access != FileAccess.Read))
            throw new ArgumentException("Only open file mode and read file access supported.");
        System.Windows.Forms.DataFormats.Format Format = System.Windows.Forms.DataFormats.GetFormat(ShlObj.CFSTR_FILECONTENTS);
        if (Format == null)
            return null;
        FORMATETC FormatEtc = new FORMATETC();
        FormatEtc.cfFormat = (short)Format.Id;
        FormatEtc.dwAspect = DVASPECT.DVASPECT_CONTENT;
        FormatEtc.lindex = FIndex;
        FormatEtc.tymed = TYMED.TYMED_ISTREAM | TYMED.TYMED_HGLOBAL;
        STGMEDIUM Medium;
        FDataObject.GetData(ref FormatEtc, out Medium);
        try
        {
            switch (Medium.tymed)
            {
                case TYMED.TYMED_ISTREAM:
                    IStream MediumStream = (IStream)Marshal.GetTypedObjectForIUnknown(Medium.unionmember, typeof(IStream));
                    ComStreamWrapper StreamWrapper = new ComStreamWrapper(MediumStream, FileAccess.Read, ComRelease.None);
                    // Seek from beginning
                    if (startOffset > 0)
                    {
                        if (StreamWrapper.CanSeek)
                            StreamWrapper.Seek(startOffset, SeekOrigin.Begin);
                        else
                        {
                            byte[] Null = new byte[256];
                            int Readed = 1;
                            while ((startOffset > 0) && (Readed > 0))
                            {
                                Readed = StreamWrapper.Read(Null, 0, (int)Math.Min(Null.Length, startOffset));
                                startOffset -= Readed;
                            }
                        }
                    }
                    StreamWrapper.Closed += delegate(object sender, EventArgs e)
                    {
                        ActiveX.ReleaseStgMedium(ref Medium);
                        Marshal.FinalReleaseComObject(MediumStream);
                    };
                    return StreamWrapper;
                case TYMED.TYMED_HGLOBAL:
                    byte[] FileContent;
                    IntPtr MediumLock = Windows.GlobalLock(Medium.unionmember);
                    try
                    {
                        long Size = FSize.HasValue ? FSize.Value : Windows.GlobalSize(MediumLock).ToInt64();
                        FileContent = new byte[Size];
                        Marshal.Copy(MediumLock, FileContent, 0, (int)Size);
                    }
                    finally
                    {
                        Windows.GlobalUnlock(Medium.unionmember);
                    }
                    ActiveX.ReleaseStgMedium(ref Medium);
                    Stream ContentStream = new MemoryStream(FileContent, false);
                    ContentStream.Seek(startOffset, SeekOrigin.Begin);
                    return ContentStream;
                default:
                    throw new ApplicationException(string.Format("Unsupported STGMEDIUM.tymed ({0})", Medium.tymed));
            }
        }
        catch
        {
            ActiveX.ReleaseStgMedium(ref Medium);
            throw;
        }
    }
    // ...
}
Googlers may find this useful: download a file using windows IStream
