I am working on a realtime simulation model. The models are written in unmanaged code, but they are controlled by C# managed code called the ExecutiveManager. An ExecutiveManager runs multiple models at a time and controls their timing (for example, if a model has a "framerate" of 20 frames per second, the executive tells the model when to start its next frame).
We are seeing consistently high CPU load when running the simulation: it can reach 100% and stay there on a machine that should be more than adequate. I used a CPU profiler to determine where the time is going, and it pointed me to two methods: WriteMemoryRegion and ReadMemoryRegion. Models have shared memory regions, and the ExecutiveManager reads and writes those regions through these methods. Both of them call Marshal.Copy, and my gut tells me that's where the issue is, but I don't want to trust my gut! We are going to do further testing to narrow things down, but I wanted a quick sanity check on Marshal.Copy first. WriteMemoryRegion and ReadMemoryRegion are called every frame, for every shared region of every model, and each model typically has 6 shared regions. So for 10 models, each with 6 regions, running at 20 frames per second and calling both WriteMemoryRegion and ReadMemoryRegion, that's 10 × 6 × 20 × 2 = 2,400 calls to Marshal.Copy per second. Is this unreasonable, or could my problem lie elsewhere? Here are the two methods:
public async Task ReadMemoryRegion(MemoryRegionDefinition g) {
    if (!cache.ContainsKey(g.Name)) {
        cache.Add(g.Name, mmff.CreateOrOpen(g.Name, g.Size));
    }
    var mmf = cache[g.Name];

    using (var stream = mmf.CreateViewStream())
    using (var reader = brf.Create(stream)) {
        var buffer = reader.ReadBytes(g.Size);
        await WriteIcBuffer(g, buffer).ConfigureAwait(false);
    }
}

private Task WriteIcBuffer(MemoryRegionDefinition g, byte[] buffer) {
    Marshal.Copy(buffer, 0, new IntPtr(g.BaseAddress), buffer.Length);
    return Task.FromResult(0);
}
public async Task WriteMemoryRegion(MemoryRegionDefinition g) {
    if (!cache.ContainsKey(g.Name)) {
        if (g.Size > 0) {
            cache.Add(g.Name, mmff.CreateOrOpen(g.Name, g.Size));
        } else if (g.Size == 0) {
            throw new EmptyGlobalException(
                $"Global {g.Name} not created as it does not contain any variables.");
        } else {
            throw new NegativeSizeGlobalException(
                $"Global {g.Name} not created as it has a negative size.");
        }
    }
    var mmf = cache[g.Name];

    using (var stream = mmf.CreateViewStream())
    using (var writer = bwf.Create(stream)) {
        var buffer = await ReadIcBuffer(g);
        writer.Write(buffer);
    }
}

private Task<byte[]> ReadIcBuffer(MemoryRegionDefinition g) {
    var buffer = new byte[g.Size];
    Marshal.Copy(new IntPtr(g.BaseAddress), buffer, 0, g.Size);
    return Task.FromResult(buffer);
}
I need to come up with a solution so that my processor isn't catching on fire. I'm very green in this area, so all ideas are welcome. Again, I'm not sure Marshal.Copy is the issue, but it seems possible. Please let me know if you see other issues that could be contributing to the processor problem.
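For a quick sanity check of Marshal.Copy in isolation, here is a minimal micro-benchmark sketch that times one second's worth of copies against an unmanaged buffer. The 4 KB region size is an assumption (the question doesn't say how big the regions are); the 2,400 write+read pairs match the per-second figure computed above. If this finishes in well under a second, the copies themselves are unlikely to be the bottleneck:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class MarshalCopyCheck
{
    static void Main()
    {
        const int regionSize = 4096;     // assumed shared-region size
        const int pairsPerSecond = 2400; // from the question's arithmetic
        var managed = new byte[regionSize];
        IntPtr unmanaged = Marshal.AllocHGlobal(regionSize);
        try
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < pairsPerSecond; i++)
            {
                Marshal.Copy(managed, 0, unmanaged, regionSize); // managed -> unmanaged (write)
                Marshal.Copy(unmanaged, managed, 0, regionSize); // unmanaged -> managed (read)
            }
            sw.Stop();
            Console.WriteLine("2400 write+read pairs took {0:F2} ms",
                sw.Elapsed.TotalMilliseconds);
        }
        finally
        {
            Marshal.FreeHGlobal(unmanaged);
        }
    }
}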
I'm writing a console app to do some data migration between a legacy system and a new version. Each record has associated images stored on one web server and I'm downloading/altering/uploading each image to Azure (and also recording some data about each image in a database).
Here's a rough outline, in code:
public void MigrateData()
{
    var records = GetRecords();
    foreach (var record in records)
    {
        // ...
        MigrateImages(record.Id, record.ImageCount);
    }
}

public void MigrateImages(int recordId, int imageCount)
{
    for (int i = 1; i <= imageCount; i++)
    {
        var legacyImageData = DownloadImage("the image url");
        if (legacyImageData != null && legacyImageData.Length > 0)
        {
            // discard because we don't need the image id here, but it's used in other workflows
            var _ = InsertImage(recordId, legacyImageData);
        }
    }
}
// This method can be used elsewhere, so the return of int is necessary and cannot be changed
public int InsertImage(int recordId, byte[] imageData)
{
    var urls = UploadImage(imageData).Result;
    return // method call to save image and return image ID
}

public async Task<(Uri LargeUri, Uri ThumbnailUri)> UploadImage(byte[] imageData)
{
    byte[] largeData = ResizeImageToLarge(imageData);
    byte[] thumbnailData = ResizeImageToThumbnail(imageData);
    var largeUpload = largeBlob.UploadFromByteArrayAsync(largeData, 0, largeData.Length);
    var thumbUpload = thumbsBlob.UploadFromByteArrayAsync(thumbnailData, 0, thumbnailData.Length);
    await Task.WhenAll(largeUpload, thumbUpload);
    Uri largeUrl = null;  // logic to build url
    Uri thumbUrl = null;  // logic to build url
    return (largeUrl, thumbUrl);
}
I'm using async/await in UploadImage() to allow the large and thumbnail uploads to run in parallel, saving time.
My question is (if it's possible/makes sense): how can I utilize async/await in MigrateImages() to upload images in parallel and reduce the overall time the task takes to complete? Does the fact that I'm already using async/await within UploadImage() hinder my goal?
It's probably obvious from my question, but async/await is still something I can't fully wrap my head around, in terms of how to correctly utilize and/or implement it.
The async/await technology is intended for facilitating asynchrony, not concurrency. In your case you want to speed up the image-uploading process by uploading multiple images concurrently. It is not important if you waste some thread-pool threads by blocking them, and you have no UI thread that needs to remain unblocked. You are making a tool that you are going to use once or twice and that's it. So I suggest that you spare yourself the trouble of trying to understand why your app is not behaving the way you expect, and avoid async/await altogether. Stick with the simple and familiar synchronous programming model for this assignment.
Combining synchronous and asynchronous code is dangerous, and even more so if your experience with and understanding of async/await is limited. There are many intricacies in this technology. Using the Task.Result property in particular is a red flag. When you become more proficient with async/await code, you are going to treat any use of Result like an unlocked grenade, ready to explode in your face at any time and make you look like a fool. When used in apps with a synchronization context (Windows Forms, ASP.NET), it can introduce deadlocks so easily that it's not even funny.
Here is how you can achieve the desired concurrency, without having to deal with the complexities of asynchrony:
public (Uri LargeUri, Uri ThumbnailUri) UploadImage(byte[] imageData)
{
    byte[] largeData = ResizeImageToLarge(imageData);
    byte[] thumbnailData = ResizeImageToThumbnail(imageData);
    var largeUpload = largeBlob.UploadFromByteArrayAsync(largeData, 0, largeData.Length);
    var thumbUpload = thumbsBlob.UploadFromByteArrayAsync(thumbnailData, 0, thumbnailData.Length);
    Task.WaitAll(largeUpload, thumbUpload);
    Uri largeUrl = null;  // logic to build url
    Uri thumbUrl = null;  // logic to build url
    return (largeUrl, thumbUrl);
}
I just replaced await Task.WhenAll with Task.WaitAll, and removed the wrapping Task from the method's return value.
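To get the concurrency across images that the question actually asks about, still without async/await, a Parallel.ForEach over the records is enough. A sketch, where the MaxDegreeOfParallelism value is an assumption to tune against your bandwidth and the server's limits (InsertImage would then call the synchronous UploadImage directly, dropping the .Result):

public void MigrateData()
{
    var records = GetRecords();
    var options = new ParallelOptions { MaxDegreeOfParallelism = 4 }; // assumed cap
    Parallel.ForEach(records, options, record =>
    {
        // Each record's images are migrated on a worker thread;
        // blocking inside MigrateImages is acceptable for a one-off tool.
        MigrateImages(record.Id, record.ImageCount);
    });
}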
Any idea why referencing the Position property of a stream drastically increases IO time?
The execution time of:
sw.Restart();
fs = new FileStream("tmp", FileMode.Open);
var br = new BinaryReader(fs);
for (int i = 0; i < n; i++)
{
    fs.Position += 0; // should be a no-op
    a[i] = br.ReadDouble();
}
Debug.Write("\n");
Debug.Write(sw.ElapsedMilliseconds.ToString());
Debug.Write("\n");
fs.Close();
sw.Stop();
Debug.Write(a[0].ToString() + "\n");
Debug.Write(a[n - 1].ToString() + "\n");
is ~100 times slower than the equivalent loop without the "fs.Position += 0;" line.
Normally the intention of using Seek (or manipulating the Position property) is to speed things up when you don't need all the data in the file.
But if, e.g., you only need every second value in the file, it is apparently much faster to read the entire file and discard the data you don't need than to skip every second value by moving Stream.Position.
You're doing two things:
Fetching the position
Setting it
It's entirely possible that each of those performs a direct interaction with the underlying Win32 APIs, whereas normally you can read a fair amount of data without having to interop with native code at all, due to buffering.
I'm slightly surprised at the extent to which it's worse, but I'm not surprised that it is worse. I think it would be worth doing separate tests to find out which has more effect - the read or the write. So write similar code which either just reads or just writes. That shouldn't affect the code you write later, but it may satisfy your curiosity a bit further.
You might also try
fs.Seek(0, SeekOrigin.Current);
... which is more likely to be ignored as a genuine no-op. But even so, skipping a single byte using fs.Seek(1, SeekOrigin.Current) may well be expensive again.
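As a sketch of those separate tests (reusing fs, br, a, and n from the question), a getter-only loop isolates the read side; comparing it against the original getter-plus-setter loop then attributes the remaining cost to the setter:

// Getter only - measures just the cost of fetching Position:
for (int i = 0; i < n; i++)
{
    long unused = fs.Position;
    a[i] = br.ReadDouble();
}

// Seek-based "no-op" - may be recognized and skipped by the stream:
for (int i = 0; i < n; i++)
{
    fs.Seek(0, SeekOrigin.Current);
    a[i] = br.ReadDouble();
}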
From the reflector:
public override long Position {
    [SecuritySafeCritical]
    get {
        if (this._handle.IsClosed) {
            __Error.FileNotOpen();
        }
        if (!this.CanSeek) {
            __Error.SeekNotSupported();
        }
        if (this._exposedHandle) {
            this.VerifyOSHandlePosition();
        }
        return this._pos + (long)(this._readPos - this._readLen + this._writePos);
    }
    set {
        if (value < 0L) {
            throw new ArgumentOutOfRangeException("value", Environment.GetResourceString("ArgumentOutOfRange_NeedNonNegNum"));
        }
        if (this._writePos > 0) {
            this.FlushWrite(false);
        }
        this._readPos = 0;
        this._readLen = 0;
        this.Seek(value, SeekOrigin.Begin);
    }
}
It's fairly self-explanatory: every set discards the read buffer and causes a Seek (plus a flush if there are pending writes), and there is no check for whether you're setting the position to its current value. If you're not happy with FileStream, make your own stream proxy that handles position updates more gracefully :)
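A minimal sketch of such a proxy (the name and the skip-if-unchanged policy are illustrative, not an existing BCL type): it forwards a position set only when it would actually move the stream, and delegates everything else:

public class NoOpFriendlyStream : Stream
{
    private readonly Stream _inner;

    public NoOpFriendlyStream(Stream inner) { _inner = inner; }

    public override long Position
    {
        get { return _inner.Position; }
        set
        {
            // Skip sets that would not move the stream; the inner getter
            // is cheap compared to the flush-and-seek in the inner setter.
            if (value != _inner.Position)
                _inner.Position = value;
        }
    }

    // Everything else just delegates to the wrapped stream:
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }
    public override long Length { get { return _inner.Length; } }
    public override void Flush() { _inner.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { return _inner.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { _inner.SetLength(value); }
    public override void Write(byte[] buffer, int offset, int count) { _inner.Write(buffer, offset, count); }
}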
I have tried passing a stream as an argument, but I am not sure which way is "the best", so I would like to hear your opinions/suggestions on my code sample.
I personally prefer Option 3, but I have never seen it done this way anywhere else.
Option 1 is good for small streams (and streams with a known size).
Options 2_1 and 2_2 would always leave the handler in doubt about who has the responsibility for disposing/closing.
public interface ISomeStreamHandler
{
    // Option 1
    void HandleStream(byte[] streamBytes);

    // Option 2
    void HandleStream(Stream stream);

    // Option 3
    void HandleStream(Func<Stream> openStream);
}

public interface IStreamProducer
{
    Stream GetStream();
}

public class SomeTestClass
{
    private readonly ISomeStreamHandler _streamHandler;
    private readonly IStreamProducer _streamProducer;

    public SomeTestClass(ISomeStreamHandler streamHandler, IStreamProducer streamProducer)
    {
        _streamHandler = streamHandler;
        _streamProducer = streamProducer;
    }

    public void DoOption1()
    {
        var buffer = new byte[16 * 1024];
        using (var input = _streamProducer.GetStream())
        using (var ms = new MemoryStream())
        {
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                ms.Write(buffer, 0, read);
            }
            _streamHandler.HandleStream(ms.ToArray());
        }
    }

    public void DoOption2_1()
    {
        _streamHandler.HandleStream(_streamProducer.GetStream());
    }

    public void DoOption2_2()
    {
        using (var stream = _streamProducer.GetStream())
        {
            _streamHandler.HandleStream(stream);
        }
    }

    public void DoOption3()
    {
        _streamHandler.HandleStream(_streamProducer.GetStream);
    }
}
Option 2_2 is the standard way of dealing with disposable resources.
Your SomeTestClass instance asks the producer for a stream - then SomeTestClass owns the stream and is responsible for cleaning it up.
Options 3 and 2_1 rely on a different object to clean up the resource owned by SomeTestClass - an expectation that might not be met.
Option 1 is just copying a stream's content into another stream - I don't see any benefit in doing that.
You may not realize it, but you are attempting to implement the pipeline design pattern. As a starting point, consider taking a look at:
MSDN: Pipelines
Steve's Blog: Pipeline Design Pattern
(other good references??)
With regards to your implementation, I recommend that you go with option #2:
public interface IStreamHandler
{
    void Process(Stream stream);
}
With regards to object lifetime, it is my belief that:
the implementation should be consistent in how it handles calling Dispose
your solution will be more flexible if IStreamHandler does not call Dispose (you can then chain handlers together, much as you would in Unix pipes)
THIRD-PARTY SOLUTIONS
Building a pipeline solution can be fun, but it is also worth noting that there are existing products on the market:
Yahoo: Pipes
Microsoft: BizTalk
IBM: Cast Iron
StackOverflow: Alternatives to Yahoo Pipes
ADDITIONAL NOTES
There is a design issue related to your proposed Option 2:
void Process(Stream stream);
In Unix pipes you can chain a number of programs together by taking the output of one program and making it the input of another. If you build a similar solution using Option 2, you will run into problems if you use multiple handlers and your data Stream is forward-only (i.e. stream.CanSeek == false).
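If the chain must support such sources, one sketch (the RunPipeline method is illustrative, reusing the IStreamProducer and IStreamHandler types from above) is to buffer a non-seekable source once, then rewind before each handler:

public void RunPipeline(IStreamProducer producer, IEnumerable<IStreamHandler> handlers)
{
    using (var source = producer.GetStream())
    {
        Stream replayable = source;
        if (!source.CanSeek)
        {
            // Forward-only source: copy it once so it can be re-read.
            replayable = new MemoryStream();
            source.CopyTo(replayable);
        }

        foreach (var handler in handlers)
        {
            replayable.Position = 0; // rewind so every handler reads from the start
            handler.Process(replayable);
        }
    }
}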
I'm curious as to how IEnumerable differs from IObservable under the hood. I understand the pull and push patterns respectively, but how does C#, in terms of memory etc., notify subscribers (for IObservable) that they should receive the next bit of data to process? How does the observed instance know it has had a change in data to push to its subscribers?
My question comes from a test I was performing, reading lines from a file. The file was about 6 MB in total.
Standard Time Taken: 4.7s, lines: 36587
Rx Time Taken: 0.68s, lines: 36587
How is Rx able to massively improve a normal iteration over each of the lines in the file?
private static void ReadStandardFile()
{
    var timer = Stopwatch.StartNew();
    var linesProcessed = 0;
    foreach (var l in ReadLines(new FileStream(_filePath, FileMode.Open)))
    {
        var s = l.Split(',');
        linesProcessed++;
    }
    timer.Stop();
    _log.DebugFormat("Standard Time Taken: {0}s, lines: {1}",
        timer.Elapsed.ToString(), linesProcessed);
}

private static void ReadRxFile()
{
    var timer = Stopwatch.StartNew();
    var linesProcessed = 0;
    var query = ReadLines(new FileStream(_filePath, FileMode.Open)).ToObservable();
    using (query.Subscribe(line =>
    {
        var s = line.Split(',');
        linesProcessed++;
    }));
    timer.Stop();
    _log.DebugFormat("Rx Time Taken: {0}s, lines: {1}",
        timer.Elapsed.ToString(), linesProcessed);
}

private static IEnumerable<string> ReadLines(Stream stream)
{
    using (StreamReader reader = new StreamReader(stream))
    {
        while (!reader.EndOfStream)
            yield return reader.ReadLine();
    }
}
My hunch is that the behavior you're seeing reflects the OS caching the file. I would imagine that if you reversed the order of the calls you would see a similar difference in speeds, just swapped.
You could improve this benchmark by performing a few warm-up runs, or by copying the input file to a temp file using File.Copy prior to testing each one. That way the file would not be "hot" and you would get a fair comparison.
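For example, a couple of discarded warm-up passes would put both benchmarks on equal, warm-cache footing; a sketch:

// Discarded warm-up passes: both measured runs then read from a warm OS cache.
ReadStandardFile(); // warm-up, ignore timing
ReadRxFile();       // warm-up, ignore timing

ReadStandardFile(); // measured
ReadRxFile();       // measured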
I'd suspect that you're seeing some kind of internal optimization of the CLR. It probably caches the content of the file in memory between the two calls so that ToObservable can pull the content much faster...
Edit: Oh, the good colleague with the crazy nickname, eeh... @sixlettervariables, was faster, and he's probably right: it's the OS doing the optimizing rather than the CLR.
I was seeing some strange behavior in a multithreaded application I wrote, which was not scaling well across multiple cores.
The following code illustrates the behavior I am seeing. It appears that heap-intensive operations do not scale across multiple cores; rather, they seem to slow down - i.e., using a single thread would be faster.
class Program
{
    public static Data _threadOneData = new Data();
    public static Data _threadTwoData = new Data();
    public static Data _threadThreeData = new Data();
    public static Data _threadFourData = new Data();

    static void Main(string[] args)
    {
        // Do heap intensive tests
        var start = DateTime.Now;
        RunOneThread(WorkerUsingHeap);
        var finish = DateTime.Now;
        var timeLapse = finish - start;
        Console.WriteLine("One thread using heap: " + timeLapse);

        start = DateTime.Now;
        RunFourThreads(WorkerUsingHeap);
        finish = DateTime.Now;
        timeLapse = finish - start;
        Console.WriteLine("Four threads using heap: " + timeLapse);

        // Do stack intensive tests
        start = DateTime.Now;
        RunOneThread(WorkerUsingStack);
        finish = DateTime.Now;
        timeLapse = finish - start;
        Console.WriteLine("One thread using stack: " + timeLapse);

        start = DateTime.Now;
        RunFourThreads(WorkerUsingStack);
        finish = DateTime.Now;
        timeLapse = finish - start;
        Console.WriteLine("Four threads using stack: " + timeLapse);

        Console.ReadLine();
    }

    public static void RunOneThread(ParameterizedThreadStart worker)
    {
        var threadOne = new Thread(worker);
        threadOne.Start(_threadOneData);
        threadOne.Join();
    }

    public static void RunFourThreads(ParameterizedThreadStart worker)
    {
        var threadOne = new Thread(worker);
        threadOne.Start(_threadOneData);
        var threadTwo = new Thread(worker);
        threadTwo.Start(_threadTwoData);
        var threadThree = new Thread(worker);
        threadThree.Start(_threadThreeData);
        var threadFour = new Thread(worker);
        threadFour.Start(_threadFourData);
        threadOne.Join();
        threadTwo.Join();
        threadThree.Join();
        threadFour.Join();
    }

    static void WorkerUsingHeap(object state)
    {
        var data = state as Data;
        for (int count = 0; count < 100000000; count++)
        {
            var property = data.Property;
            data.Property = property + 1;
        }
    }

    static void WorkerUsingStack(object state)
    {
        var data = state as Data;
        double dataOnStack = data.Property;
        for (int count = 0; count < 100000000; count++)
        {
            dataOnStack++;
        }
        data.Property = dataOnStack;
    }

    public class Data
    {
        public double Property { get; set; }
    }
}
This code was run on a Core 2 Quad (4 core system) with the following results:
One thread using heap: 00:00:01.8125000
Four threads using heap: 00:00:17.7500000
One thread using stack: 00:00:00.3437500
Four threads using stack: 00:00:00.3750000
So using the heap, four threads did 4 times the work but took almost 10 times as long. This means it would be roughly twice as fast in this case to use only one thread?!
Using the stack behaved much more as expected.
I would like to know what is going on here. Can the heap only be written to from one thread at a time?
The answer is simple - run outside of Visual Studio...
I just copied your entire program, and ran it on my quad core system.
Inside VS (Release Build):
One thread using heap: 00:00:03.2206779
Four threads using heap: 00:00:23.1476850
One thread using stack: 00:00:00.3779622
Four threads using stack: 00:00:00.5219478
Outside VS (Release Build):
One thread using heap: 00:00:00.3899610
Four threads using heap: 00:00:00.4689531
One thread using stack: 00:00:00.1359864
Four threads using stack: 00:00:00.1409859
Note the difference. Outside VS, the extra time in the four-thread runs is pretty much all due to the overhead of starting the threads. Your work in this case is too small to really test, and you're not using the high-performance counters, so it's not a perfect test.
Main rule of thumb - always do perf testing outside VS, i.e. use Ctrl+F5 instead of F5 to run.
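On the high-performance-counter point: swapping the DateTime.Now bookkeeping for a Stopwatch is a small change; a sketch for one of the tests:

// Stopwatch uses the high-resolution performance counter,
// unlike DateTime.Now, whose resolution is roughly 15 ms.
var sw = System.Diagnostics.Stopwatch.StartNew();
RunFourThreads(WorkerUsingHeap);
sw.Stop();
Console.WriteLine("Four threads using heap: " + sw.Elapsed);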
Aside from the debug-vs-release effects, there is something more you should be aware of.
You cannot effectively evaluate multi-threaded code for performance in 0.3s.
The point of threads is two-fold: effectively model parallel work in code, and effectively exploit parallel resources (cpus, cores).
You are trying to evaluate the latter. Given that thread start overhead is not vanishingly small in comparison to the interval over which you are timing, your measurement is immediately suspect. In most perf test trials, a significant warm-up interval is appropriate. This may sound silly to you - it's a computer program after all, not a lawnmower. But warm-up is absolutely imperative if you are really going to evaluate multi-thread performance. Caches get filled, pipelines fill up, pools get filled, GC generations get filled. The steady-state, continuous performance is what you would like to evaluate. For the purposes of this exercise, the program behaves like a lawnmower.
You could say - Well, no, I don't want to evaluate the steady state performance. And if that is the case, then I would say that your scenario is very specialized. Most app scenarios, whether their designers explicitly realize it or not, need continuous, steady performance.
If you truly need the perf to be good only over a single 0.3s interval, you have found your answer. But be careful to not generalize the results.
If you want general results, you need to have reasonably long warm up intervals, and longer collection intervals. You might start at 20s/60s for those phases, but here is the key thing: you need to vary those intervals until you find the results converging. YMMV. The valid times vary depending on the application workload and the resources dedicated to it, obviously. You may find that a measurement interval of 120s is necessary for convergence, or you may find 40s is just fine. But (a) you won't know until you measure it, and (b) you can bet 0.3s is not long enough.
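As a sketch of that phase structure (the 20s/60s intervals are the starting points suggested above; the workload call is the question's own RunFourThreads):

// Warm-up phase: run the workload untimed until caches, pools,
// and GC generations reach steady state.
var warmup = Stopwatch.StartNew();
while (warmup.Elapsed < TimeSpan.FromSeconds(20))
    RunFourThreads(WorkerUsingHeap);

// Measurement phase: time repeated runs and report the average.
long runs = 0;
var measured = Stopwatch.StartNew();
while (measured.Elapsed < TimeSpan.FromSeconds(60))
{
    RunFourThreads(WorkerUsingHeap);
    runs++;
}
Console.WriteLine("avg per run: {0:F0} ms",
    measured.Elapsed.TotalMilliseconds / runs);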
[edit]Turns out, this is a release vs. debug build issue -- not sure why it is, but it is. See comments and other answers.[/edit]
This was very interesting -- I wouldn't have guessed there'd be that much difference. (similar test machine here -- Core 2 Quad Q9300)
Here's an interesting comparison -- add a decent-sized additional element to the 'Data' class -- I changed it to this:
public class Data
{
    public double Property { get; set; }
    public byte[] Spacer = new byte[8096];
}
It's still not quite the same time, but it's very close (running it for 10x as long results in 13.1s vs. 17.6s on my machine).
If I had to guess, I'd speculate that it's related to cross-core cache coherency (what's usually called false sharing), at least if I'm remembering how CPU caches work. With the small version of 'Data', if a single cache line contains multiple instances of Data, the cores have to constantly invalidate each other's caches (worst case if they're all on the same cache line). With the 'spacer' added, their memory addresses are far enough apart that one CPU's write to a given address doesn't invalidate the caches of the other CPUs.
Another thing to note -- the 4 threads start nearly concurrently, but they don't finish at the same time -- another indication that there are cross-core issues at work here. Also, I'd guess that running on a multi-CPU machine of a different architecture would bring more interesting issues to light.
I guess the lesson from this is that in a highly concurrent scenario, if you're doing a bunch of work with a few small data structures, you should try to make sure they aren't all packed on top of each other in memory. Of course, there's really no way to guarantee that, but I'm guessing there are techniques (like adding spacers) that could be used to try to make it happen.
[edit]
This was too interesting -- I couldn't put it down. To test this out further, I thought I'd try varying-sized spacers, and use an integer instead of a double to keep the object without any added spacers smaller.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("name\t1 thread\t4 threads");
        RunTest("no spacer", WorkerUsingHeap, () => new Data());
        var values = new int[] { -1, 0, 4, 8, 12, 16, 20 };
        foreach (var sv in values)
        {
            var v = sv;
            RunTest(string.Format(v == -1 ? "null spacer" : "{0}B spacer", v),
                WorkerUsingHeap, () => new DataWithSpacer(v));
        }
        Console.ReadLine();
    }

    public static void RunTest(string name, ParameterizedThreadStart worker, Func<object> fo)
    {
        var start = DateTime.UtcNow;
        RunOneThread(worker, fo);
        var middle = DateTime.UtcNow;
        RunFourThreads(worker, fo);
        var end = DateTime.UtcNow;
        Console.WriteLine("{0}\t{1}\t{2}", name, middle - start, end - middle);
    }

    public static void RunOneThread(ParameterizedThreadStart worker, Func<object> fo)
    {
        var data = fo();
        var threadOne = new Thread(worker);
        threadOne.Start(data);
        threadOne.Join();
    }

    public static void RunFourThreads(ParameterizedThreadStart worker, Func<object> fo)
    {
        var data1 = fo();
        var data2 = fo();
        var data3 = fo();
        var data4 = fo();
        var threadOne = new Thread(worker);
        threadOne.Start(data1);
        var threadTwo = new Thread(worker);
        threadTwo.Start(data2);
        var threadThree = new Thread(worker);
        threadThree.Start(data3);
        var threadFour = new Thread(worker);
        threadFour.Start(data4);
        threadOne.Join();
        threadTwo.Join();
        threadThree.Join();
        threadFour.Join();
    }

    static void WorkerUsingHeap(object state)
    {
        var data = state as Data;
        for (int count = 0; count < 500000000; count++)
        {
            var property = data.Property;
            data.Property = property + 1;
        }
    }

    public class Data
    {
        public int Property { get; set; }
    }

    public class DataWithSpacer : Data
    {
        // A negative size means "no spacer field at all" (the null-spacer case).
        public DataWithSpacer(int size) { Spacer = size < 0 ? null : new byte[size]; }
        public byte[] Spacer;
    }
}
Result:
name         1 thread           4 threads
no spacer    00:00:06.3480000   00:00:42.6260000
null spacer  00:00:06.2300000   00:00:36.4030000
0B spacer    00:00:06.1920000   00:00:19.8460000
4B spacer    00:00:06.1870000   00:00:07.4150000
8B spacer    00:00:06.3750000   00:00:07.1260000
12B spacer   00:00:06.3420000   00:00:07.6930000
16B spacer   00:00:06.2250000   00:00:07.5530000
20B spacer   00:00:06.2170000   00:00:07.3670000
No spacer = 1/6th the speed, null spacer = 1/5th the speed, 0B spacer = 1/3rd the speed, 4B spacer = full speed.
I don't know the full details of how the CLR allocates or aligns objects, so I can't speak to what these allocation patterns look like in real memory, but these definitely are some interesting results.