Memory management in C# for high speed applications

I am building an application that processes an incoming image from a file or buffer and outputs results in the form of an array of doubles of fixed size. The application needs to be relatively quick. I ran into a problem with cycle time: I started recording cycle time while processing one image, and it went from a minimum of 65 ms and gradually increased all the way to 500 ms, which is far too slow. Sure enough, I checked the memory usage and it was steadily increasing as well.
I'm running GC after every cycle and dumping unused variables as often as possible. I don't create new objects within the processing loop. Image processing is done on its own thread so that all the resources get dumped. It seems the majority of the cycle-time increase happens when I'm pulling the image from file. What could I be missing?
Here's the rough code; the full thing is pretty large.
Main Function
private void button4_Click(object sender, EventArgs e)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    cogDisplay1.Image = null;
    ImageFromFile.Operator.Open(Properties.Settings.Default.ImageFile, CogImageFileModeConstants.Read);
    ImageFromFile.Run();
    cogDisplay1.Image = ImageFromFile.OutputImage;
    cogDisplay1.AutoFit = true;
    Thread t = new Thread(Vision);
    t.Start();
    textBox3.Clear();
    sw.Stop();
    textBox3.AppendText(sw.ElapsedMilliseconds.ToString() + Environment.NewLine + "TS: " + t.ThreadState.ToString());
    textBox3.AppendText("GC" + GC.GetTotalMemory(true).ToString());
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, false);
}
Image Processing
public void Vision()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    try
    {
        AlignmentParams.ApproximateNumberToFind = 1;
        AlignmentParams.SearchRegionMode = 0;
        AlignmentResult = AlignmentPat.Execute(cogDisplay1.Image as CogImage8Grey, null, AlignmentParams);
        Fixture.InputImage = cogDisplay1.Image;
        Fixture.RunParams.UnfixturedFromFixturedTransform = AlignmentResult[0].GetPose();
        Fixture.Run();
        AlignmentResult = null;
        #region FindLineTools
        #endregion
        #region FindEllipse
        #endregion
        sw.Stop();
        SetText("V" + sw.ElapsedMilliseconds.ToString() + Environment.NewLine);
    }
    catch (Exception err)
    {
        SetText(Environment.NewLine + "***********Error***********" + Environment.NewLine);
        SetText(err.ToString() + Environment.NewLine);
        SetText("***************************" + Environment.NewLine);
    }
}

First, I would recommend posting cleaned-up code for better readability (remove all the commented-out stuff). Second, focus only on the essential part, namely: your problem is memory overuse/leakage due to heavy image processing (correct me if I'm wrong). Therefore, in your thread named Vision, de-reference the image objects and set them to null immediately after processing completes (as mentioned above, GC is not a big help in your case). The concept is briefly demonstrated by the following code snippet:
public void Vision()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    try
    {
        AlignmentParams.ApproximateNumberToFind = 1;
        AlignmentParams.SearchRegionMode = 0;
        AlignmentResult = AlignmentPat.Execute(cogDisplay1.Image as CogImage8Grey, null, AlignmentParams);
        Fixture.InputImage = cogDisplay1.Image;
        Fixture.RunParams.UnfixturedFromFixturedTransform = AlignmentResult[0].GetPose();
        Fixture.Run();
        AlignmentResult = null;
        // your coding stuff
        sw.Stop();
        SetText("V" + sw.ElapsedMilliseconds.ToString() + Environment.NewLine);
    }
    catch (Exception err)
    {
        SetText(Environment.NewLine + "***********Error***********" + Environment.NewLine);
        SetText(err.ToString() + Environment.NewLine);
        SetText("***************************" + Environment.NewLine);
    }
    finally
    {
        // de-reference the image as soon as processing is done
        Fixture.InputImage = null;
    }
}

Don't call GC.Collect. Let the runtime decide when it is out of memory and must do the collection. You are typically not in a good position to decide the best time for a GC.Collect unless you're running some other heartbeat or idle-watching thread.
Secondly, ensure that whatever resources you're receiving from the method calls are being disposed. Setting variables to null does NOT do this; you should be explicitly calling Dispose or wrapping the resource in a using block:
using (SomeResource resource = Class.GiveMeResource("image.png"))
{
    int width = resource.Width;
    int height = resource.Height;
    Console.Write("that image has {0} pixels", width * height);
} // implicitly calls IDisposable.Dispose() here
You also need to do some memory and call profiling to detect where, if any, leaks exist.
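For a quick first pass without a dedicated profiler, you can at least watch whether the managed heap keeps growing between cycles. Below is a minimal sketch, assuming you wrap the processing call you already have (the variable names are illustrative only). Note that unmanaged memory held inside image objects will not show up in these numbers, which is exactly why explicit Dispose matters:
long bytesBefore = GC.GetTotalMemory(false);
int gen2Before = GC.CollectionCount(2);

Vision(); // the processing step under test

long bytesAfter = GC.GetTotalMemory(false);
int gen2After = GC.CollectionCount(2);

// If this delta keeps climbing run after run, something is still holding references
// (or the memory is unmanaged and needs an explicit Dispose) - GC.Collect won't fix that.
Console.WriteLine("Managed heap delta: {0:N0} bytes, Gen 2 collections: {1}",
    bytesAfter - bytesBefore, gen2After - gen2Before);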

Related

Saving images via producer-consumer pattern using BlockingCollection

I'm facing a producer-consumer problem: I have a camera that sends images very quickly, and I have to save them to disk. The images are in the form of ushort[]. The camera always overwrites the same ushort[] variable, so between one acquisition and the next I have to copy the array and then, when possible, save it in order to free up the memory for that image. The important thing is not to lose any images from the camera, even if that means increasing the memory used: it is entirely acceptable for the consumer (saving images and freeing memory) to be slower than the producer; it is not acceptable to fail to copy an image into memory in time.
I've written sample code that should simulate the problem:
immage_ushort: the image generated by the camera, which must be copied into the BlockingCollection before the next image arrives
producerTask: runs a loop that simulates the arrival of an image every time_wait milliseconds; within that window the producer should copy the image into the BlockingCollection
consumerTask: works through the BlockingCollection, saving the images to disk and thereby freeing the memory; it doesn't matter if the consumer is slower than the producer
I put a time_wait of 1 millisecond to test the performance (in reality the camera will not reach that speed). The times are respected (with a maximum delay of 1-2 ms, which is acceptable) if there is no saving to disk in the code (commenting out image1.ImWrite(file_name)). With saving to disk enabled, I instead get delays that sometimes exceed 100 ms.
This is my code:
private void Execute_test_producer_consumer1()
{
    // Images are stored as ushort arrays, so we create a BlockingCollection<ushort[]>
    // to keep images when they arrive from the camera
    BlockingCollection<ushort[]> imglist = new BlockingCollection<ushort[]>();
    string lod_date = "";

    /* producerTask simulates a camera that returns an image every time_wait
       milliseconds. The image is copied and inserted in the BlockingCollection
       to be then saved on disk in the consumerTask */
    Task producerTask = Task.Factory.StartNew(() =>
    {
        // Number of images to process
        int num_img = 3000;
        // Time between one image and the next
        long time_wait = 1;
        // Time log variables
        var watch1 = System.Diagnostics.Stopwatch.StartNew();
        long watch_log = 0;
        long delta_time = 0;
        long timer1 = 0;
        List<long> timer_delta_log = new List<long>();
        List<long> timer_delta_log_time = new List<long>();
        int ii = 0;
        Console.WriteLine("-----START producer");
        watch1.Restart();
        // Here I expect every wait_time (or a little more) an image will be inserted
        // into imglist
        while (ii < num_img)
        {
            timer1 = watch1.ElapsedMilliseconds;
            delta_time = timer1 - watch_log;
            if (delta_time >= time_wait || ii == 0)
            {
                // Add image
                imglist.Add((ushort[])immage_ushort.Clone());
                // Inserting data for time log
                timer_delta_log.Add(delta_time);
                timer_delta_log_time.Add(timer1);
                watch_log = timer1;
                ii++;
            }
        }
        imglist.CompleteAdding();
        watch1.Stop();
        lod_date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
        Console.WriteLine("-----END producer: " + lod_date);
        // We only print images that are not inserted on schedule
        int gg = 0;
        foreach (long timer_delta_log_t in timer_delta_log)
        {
            if (timer_delta_log_t > time_wait)
            {
                Console.WriteLine("-- Image " + (gg + 1) + ", delta: "
                    + timer_delta_log_t + ", time: " + timer_delta_log_time[gg]);
            }
            gg++;
        }
    });

    Task consumerTask = Task.Factory.StartNew(() =>
    {
        string file_name = "";
        int yy = 0;
        // saving images and removing data
        foreach (ushort[] imm in imglist.GetConsumingEnumerable())
        {
            file_name = @"output/" + yy + ".png";
            Mat image1 = new Mat(row, col, MatType.CV_16UC1, imm);
            // By commenting on this line, the timing of the producer is respected
            image1.ImWrite(file_name);
            image1.Dispose();
            yy++;
        }
        imglist.Dispose();
        lod_date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
        Console.WriteLine("-----END consumer: " + lod_date);
    });
}
I also thought that the BlockingCollection might remain blocked for the entire duration of the foreach, and therefore for the whole disk save. So I tried replacing the foreach with this:
while (!imglist.IsCompleted)
{
    ushort[] elem = imglist.Take();
    file_name = @"output/" + yy + ".png";
    Mat image1 = new Mat(row, col, MatType.CV_16UC1, elem);
    // By commenting on this line, the timing of the producer is respected
    image1.ImWrite(file_name);
    image1.Dispose();
    yy++;
}
But the result doesn't change.
What am I doing wrong?
You might want to start your tasks with the LongRunning option:
LongRunning
Specifies that a task will be a long-running, coarse-grained operation involving fewer, larger components than fine-grained systems. It provides a hint to the TaskScheduler that oversubscription may be warranted. Oversubscription lets you create more threads than the available number of hardware threads. It also provides a hint to the task scheduler that an additional thread might be required for the task so that it does not block the forward progress of other threads or work items on the local thread-pool queue.
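For example, here is a minimal sketch of passing that option to the two tasks from the question (only the option argument is new; the lambda bodies stand in for the existing code):
Task producerTask = Task.Factory.StartNew(() =>
{
    // ... fill imglist exactly as in the question ...
}, TaskCreationOptions.LongRunning);

Task consumerTask = Task.Factory.StartNew(() =>
{
    // ... drain imglist.GetConsumingEnumerable() and write the files ...
}, TaskCreationOptions.LongRunning);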

ScriptService execute eating memory

When I run Execute on a ScriptService run request, it takes some memory and fails to release it. Testing this by running under MonoDevelop on a Raspberry Pi shows the memory rising at an alarming rate, and it will eventually crash the program. The GC.Collect was an attempt at reclaiming this memory. Is there any insight into what I am doing wrong?
public MainWindow() : base(Gtk.WindowType.Toplevel)
{
    Build();
    while (true)
    {
        getDashRow();
        Thread.Sleep(1000);
        Console.WriteLine("Total available memory before collection: {0:N0}", System.GC.GetTotalMemory(false));
        System.GC.Collect();
        Console.WriteLine("Total available memory collection: {0:N0}", System.GC.GetTotalMemory(true));
    }
}
private int getDashRow()
{
    ScriptsResource.RunRequest runreq;
    DriveService driveservice;
    ExecutionRequest exrequest;
    Console.WriteLine("getDashRow");
    int retval = 0;
    exrequest = new ExecutionRequest();
    exrequest.Function = "getMacRow";
    IList<object> parameters = new List<object>();
    parameters.Add(spreadsheetname);
    exrequest.Parameters = parameters;
    exrequest.DevMode = false;
    try
    {
        // run a Google Apps Script function on the online sheet to find number of rows (more efficient)
        runreq = scriptservice.Scripts.Run(exrequest, dashscriptid);
        // following line consumes the memory
        Operation op = runreq.Execute();
        retval = Convert.ToInt16(op.Response["result"]);
        parameters = null;
        exrequest = null;
        op = null;
    }
    catch (Exception ex)
    {
        Console.WriteLine("getDashRow: " + ex.Message);
    }
    return retval;
}
I solved this: installing Mono Preview v6.0.0.277 resolved the problem; memory is now correctly freed without manually calling GC.Collect().
Related issue which led to the solution

COM object method is blocking in ASP.NET Core but not unit tests

I have a class that wraps a COM object (MLApp), which is used to call into MATLAB. I have a unit test that initializes 5 MATLAB sessions and runs a command that pauses for 10 seconds in each individual session. My unit test takes ~10 seconds to run, which is expected since the sessions should be executing in parallel.
[Test]
public void ParallelExecution()
{
    List<MatlabRunner> runners = new List<MatlabRunner>();
    for (int i = 0; i < 5; i++)
    {
        runners.Add(new MatlabRunner());
    }
    try
    {
        List<Task<string>> t = new List<Task<string>>();
        foreach (MatlabRunner r in runners)
            t.Add(Task.Run<string>(() => r.Execute("pause(10)")));
        Task<string>.WaitAll(t.ToArray());
        t.ForEach((task) => System.Console.WriteLine(task.Result));
    }
    finally
    {
        runners.ForEach((r) => r.Quit());
    }
}
However, when I put what amounts to the same code in my ASP.NET Core project, I can make two web requests and they are both received in parallel (I know this because I log that the request is received), but then the call into runner.Execute() is serialized.
[HttpGet("Verify")]
public JsonResult Verify(String lot = null,String lotStatus = null,int? NumPorts = null,int? wafer = null)
{
System.IO.File.AppendAllText(#"C:\Logs\log.txt", "\n\Received Request! " + System.DateTime.Now + "\n");
String command = "pause(10)";
MatlabRunner runner = null;
String result = "";
try
{
runner = new MatlabRunner();
result = runner.Execute(command);
}
catch(Exception e)
{
System.IO.File.AppendAllText(#"C:\Logs\log.txt", e.StackTrace + " " + e.InnerException + " " + e.GetType() + e.Source + " " + e.GetBaseException());
}
finally
{
runner.Quit();
System.IO.File.AppendAllText(#"C:\Logs\log.txt", "\nQuit! " + System.DateTime.Now + "\n");
}
return new JsonResult(result);
}
The problem is that I make two parallel GET requests to this Verify method. While one request is calling runner.Execute("pause(10)"), the other request is blocked from calling this method. I would expect the two calls to the Verify method to execute in parallel, finishing in a total of ~10 seconds. However, it takes 20 seconds for both requests to complete (10 for the first, 10 for the second). In my unit test, I call this same method in parallel using Tasks and the entire unit test executes in 10 seconds, despite having 5 runners launched. If I were to do the same in the web service, it would take 50 seconds. Hopefully this is enough info. I can attach a log with timestamps to show how things are executing, if it would be valuable.
Am I missing something from a configuration perspective in IIS? Potentially something when I build or publish?

Async function with DownloadStringAsync

What I am trying
In my WPF app I crawl my internal network and check whether an IP/URL can be accessed. So I call 255 IPs with DownloadStringAsync in a loop. Whenever I receive a response or a timeout I want to update my progress bar (which has a maximum of 255). My code seems to work only for the first x IPs (I believe something like 10-15), as I can see the progress bar moving.
The Call-Loop
try
{
    while (i < 255)
    {
        var client = new WebClient();
        client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(GetInfo);
        client.DownloadStringAsync(new Uri("http://" + myIPNet + "." + i + ":1400/status/topology"));
        i++;
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
The GetInfo Function
private void GetInfo(object sender, DownloadStringCompletedEventArgs e)
{
    /* Each call, count up Global Ip Counter and move progressBar */
    countIps++;
    pbSearch.Value = countIps;
    /* Do Stuff with e.Result */
    ...
    /* Check if we have all IPs in net already */
    if (countIps == 255)
    {
        /* Reset progressBar */
        pbSearch.Value = 0;
        /* Enable Button */
        btnGetAll.IsEnabled = true;
    }
}
I found this: Asynchronous File Download with Progress Bar, and I think I understand this answer (https://stackoverflow.com/a/9459441/1092632), but it updates the progress bar for a single download in progress. Any ideas on how to "convert" this to my case?
I think the main issue with your code here is that you're expecting the DownloadStringAsync to block until the DownloadStringCompleted event has fired. This does not happen as per the description of the function:
Downloads the resource specified as a Uri. This method does not block
the calling thread.
In effect you are rapidly making 255 concurrent web calls and then dropping out the bottom of the loop before they're completed.
As blocking using the Event-based Asynchronous Pattern (EAP) would require significant re-architecture of your code (and significantly increase complexity) I would encourage you to look at using Task-based Asynchronous Pattern instead. Stephen Toub has an excellent guide to converting the WebClient from EAP to TAP patterns here.
Using this you could rewrite your code as follows and should meet with significantly more success:
var client = new WebClient();
while (i < 255)
{
    try
    {
        byte[] result = await client.DownloadDataTask(new Uri("http://" + myIPNet + "." + i + ":1400/status/topology"));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    pbSearch.Value = i++;
}
/* Now we have all IPs */
/* Reset progressBar */
pbSearch.Value = 0;
/* Enable Button */
btnGetAll.IsEnabled = true;
Hope it helps.
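As an aside, if the project targets .NET 4.5 or later, WebClient already ships task-based methods such as DownloadStringTaskAsync, so no hand-rolled EAP-to-TAP wrapper is needed. A rough sketch along the same lines, assuming the loop lives in an async event handler and that i, myIPNet, pbSearch, and btnGetAll are the same members as in the question:
var client = new WebClient();
while (i < 255)
{
    try
    {
        // Built-in TAP method available on WebClient from .NET 4.5 onward
        string result = await client.DownloadStringTaskAsync(
            new Uri("http://" + myIPNet + "." + i + ":1400/status/topology"));
        /* Do Stuff with result */
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    pbSearch.Value = i++;
}
/* Now we have all IPs: reset the progress bar and re-enable the button */
pbSearch.Value = 0;
btnGetAll.IsEnabled = true;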

Unity - How to reduce time spent in Update and GC

I have script in Unity that is exchanging data with another Python app. It has a while loop that listens for UDP messages as a background thread. Also the script is asking for new data every frame via the Update function.
After I receive a message, the script parses it as a string and it needs to split the string by tabs in order to retrieve all the values. Currently, the string contains eyetracker and joystick data that Unity needs as player inputs.
UDPController.cs
private void init()
{
    // define address to send data to
    pythonEndPoint = new IPEndPoint(IPAddress.Parse(IP), pythonPort);
    unityEndPoint = new IPEndPoint(IPAddress.Parse(IP), unityPort);
    pythonSock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    // define client to receive data at
    client = new UdpClient(unityPort);
    client.Client.ReceiveTimeout = 1;
    client.Client.SendTimeout = 1;
    // start background thread to receive information
    receiveThread = new Thread(new ThreadStart(ReceiveData));
    receiveThread.IsBackground = true;
    receiveThread.Start();
}

void Update()
{
    if (Calibration.calibrationFinished && startRequestNewFrame)
    {
        RequestData();
    }
}

private void RequestData()
{
    // Sends this to the UDP server written in Python
    SendString("NEWFRAME");
}

// receive thread which listens for messages from the Python UDP server
private void ReceiveData()
{
    while (true)
    {
        try
        {
            if (client.Available > 0)
            {
                double unixRecvTimeStamp = DataManager.ConvertToUnixTimestamp(DateTime.Now);
                byte[] data = client.Receive(ref pythonEndPoint);
                string rawtext = Encoding.UTF8.GetString(data);
                string[] msgs = rawtext.Split('\t');
                string msgType = msgs[0];
                double pythonSentTimeStamp = double.Parse(msgs[msgs.Length - 1].Split(' ')[1]);
                DataManager.UdpRecvBuffer += '"' + rawtext + '"' + "\t" + pythonSentTimeStamp + "\t" + unixRecvTimeStamp + "\t" + DataManager.ConvertToUnixTimestamp(DateTime.Now) + "\n";
                if (String.Equals(msgType, "FRAMEDATA"))
                {
                    DataManager.gazeAdcsPos = new Vector2(float.Parse(msgs[1].Split(' ')[1]), float.Parse(msgs[2].Split(' ')[1]));
                    float GazeTimeStamp = float.Parse(msgs[3].Split(' ')[1]);
                    DataManager.rawJoy = new Vector2(float.Parse(msgs[4].Split(' ')[1]), 255 - float.Parse(msgs[5].Split(' ')[1]));
                    float joyC = float.Parse(msgs[6].Split(' ')[1]);
                    float ArduinoTimeStamp = float.Parse(msgs[7].Split(' ')[1]);
                }
            }
        }
        catch (Exception err)
        {
            print(err.ToString());
        }
    }
}
According to the Unity Profiler, a huge amount of time is spent in Behaviour Update, especially inside UDPController.Update() and GC.Collect. My initial hypothesis is that perhaps I'm creating too many strings and arrays over time, and the garbage collector kicks in quite often to remove the unused memory.
So my question is: is my hypothesis right? If so, how can I rewrite this code to increase performance and reduce the drop in FPS and the perceived lag? If not, where is the issue? Currently the game starts to lag about 10 minutes in.
Moreover, is there a better way or format for transferring the data? It seems like I could be using something like JSON, Google Protocol Buffers, or msgpack, or would that be overkill?
There are a lot of allocations in your while loop: every Split call creates new arrays and substrings, and the garbage collector eventually has to clean all of that up. Where possible, declare reusable variables and buffers outside of the method.
Moreover, avoid string concatenation in while loops and in Update(): strings are immutable, so your code creates a new copy to store the result after every concatenation. Use a StringBuilder in these situations to reduce GC pressure.
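For example, the UdpRecvBuffer concatenation in ReceiveData could be replaced by a StringBuilder field that is reused for every message. A rough sketch under that assumption (the field name udpRecvLog is illustrative, and System.Text must be imported):
using System.Text;

// Reused for every message instead of allocating a new string per concatenation
private readonly StringBuilder udpRecvLog = new StringBuilder(8192);

// Inside ReceiveData(), instead of DataManager.UdpRecvBuffer += ...:
udpRecvLog.Append('"').Append(rawtext).Append('"').Append('\t')
          .Append(pythonSentTimeStamp).Append('\t')
          .Append(unixRecvTimeStamp).Append('\t')
          .Append(DataManager.ConvertToUnixTimestamp(DateTime.Now)).Append('\n');
The same idea applies to the per-message Split calls: reusing parsed buffers, or switching to a binary format such as Protocol Buffers or msgpack as the question suggests, avoids most of the per-frame allocations.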
Read More