I have two "customer" objects that share an Order Manager script in Unity. Ideally, every time a customer creates a new order, it should be different from the last one. What happens instead is that customer 1 orders X and Y, and then customer 2 also orders X and Y instead of Z (or whatever different combination). When the script is placed on just one customer instead of two, the random order function works correctly. There are no static variables.
Additionally, if I clear customer 1's order, the next order is different, but customer 2's orders will still follow customer 1's. For example:
Customer 1 has 2 orders: X X, then Y Y.
Customer 2 won't start at the second order; it orders the same way: X X, then Y Y.
This is the method that's causing an issue:
IEnumerator CustomerOrder(bool status, float delayTime)
{
    yield return new WaitForSeconds(delayTime); // delay between orders
    imageOffset.y = -0.75f; // space between order visuals
    numberOfOrderItems = rnd.Next(1, maxOrderNumber + 1); // between 1 and 3 items per order
    customerOrder = new GameObject[numberOfOrderItems]; // order collection
    Instantiate(orderBackgrounds[numberOfOrderItems - 1], orderVisual.transform); // spawn the correct background for the number of items
    for (int x = 0; x < numberOfOrderItems; x++)
    {
        int y = rnd.Next(products.Length); // choose a random index into the products array
        customerOrder[x] = Instantiate(productImages[y],
            orderVisual.transform.position + imageOffset,
            Quaternion.identity,
            orderVisual.transform); // spawn a visual representation of the order
        imageOffset.y += -0.95f; // place the next order visual lower
    }
    makingOrder = false; // we've finished making the order
}
I feel really silly. I was able to fix this issue by adding one line to the beginning of the method:
rnd = new System.Random();
I'm not certain if this is really the best way, but it seems to work.
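For anyone hitting the same symptom, the likely root cause: on the .NET Framework/Mono runtime Unity uses, new System.Random() seeds itself from the system clock, so two instances created in the same frame (e.g. one per customer at scene load) get the same seed and produce identical sequences. Re-creating rnd inside the method works because the two customers call it at different times. A more robust fix is a single Random shared by all instances of the script; a minimal sketch, with OrderManager standing in for whatever the script class is actually called (and with the re-seeding line removed):
public class OrderManager : MonoBehaviour
{
    // One shared generator for every OrderManager in the scene, so two
    // components initialized in the same frame can't end up with the
    // same clock-based seed.
    private static readonly System.Random rnd = new System.Random();

    // ...rest of the script unchanged; CustomerOrder keeps calling rnd.Next(...)
}
Alternatively, UnityEngine.Random is already a single engine-wide generator, so UnityEngine.Random.Range(1, maxOrderNumber + 1) (upper bound exclusive for the int overload) would sidestep the problem entirely.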
I work with cellular automata. My repo for this work is here. The basic structure is
1) A grid of
2) cells, which may have
3) agents.
The agents act according to a set of rules, and typically one designates "states" for the agents (agents in different states follow different rules). One (relatively) well-known CA is Conway's Game of Life.
I'm trying to expand things a bit more and incorporate other types of "properties" in my CAs, mainly to simulate various phenomena (imagine an animal agent that consumes a plant agent, with the plant's "biomass" or what have you decreasing).
To do this I'm incorporating a normal dictionary, with strings as keys and a struct called CAProperty as the value. The struct is as follows:
public struct CAProperty
{
    public readonly string name;
    public readonly dynamic value;
    //public readonly Type type;

    public CAProperty(string name, dynamic value)
    {
        this.name = name;
        this.value = value;
    }
}
(Note: previously I had the "type" field to enable accurate typing at runtime, but I removed it while attempting to solve the issue in this post. It will need to be added back in eventually.)
This is well and good. However, I'm trying to do some work with large grid sizes. 100x100. 1000x1000. 5000x5000, or 25 million cells (and agents). That would be 25 million dictionaries.
See the image: a memory snapshot from Visual Studio for a 4000x4000 grid, or 16 million agents (I tried 5000x5000, but Visual Studio wouldn't let me take a snapshot).
On the right, one can clearly see that the debugger reports 8 GB of memory usage (a release build showed 6875 MB). However, when I add up everything in the third column of the snapshot, I arrive at less than 4 GB.
Why is there such a dramatic discrepancy between the total memory usage and the size of objects stored in memory?
Additionally: how might I optimize memory usage (mainly the dictionaries - is there another collection with similar behavior but lower memory usage)?
Edit: For each of the three "components" (Grid, Cell, Agent) I have a class. They all inherit from an original CAEntity class. All are shown below.
public abstract class CAEntity
{
    public CAEntityType Type { get; }
    public Dictionary<string, dynamic> Properties { get; private set; }

    public CAEntity(CAEntityType type)
    {
        this.Type = type;
    }

    public CAEntity(CAEntityType type, Dictionary<string, dynamic> properties)
    {
        this.Type = type;
        if (properties != null)
        {
            this.Properties = new Dictionary<string, dynamic>(properties);
        }
    }
}
public class CAGraph : CAEntity
{
    public ValueTuple<ushort, ushort, ushort> Dimensions { get; }
    public CAGraphCell[,,] Cells { get; }
    List<ValueTuple<ushort, ushort, ushort>> AgentCells { get; set; }
    List<ValueTuple<ushort, ushort, ushort>> Updates { get; set; }
    public CA Parent { get; private set; }
    public GridShape Shape { get; }
    //List<double> times = new List<double>();
    //System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();

    public CAGraph(CA parent, ValueTuple<ushort, ushort, ushort> size, GridShape shape) : base(CAEntityType.Graph)
    {
        this.Parent = parent;
        this.Shape = shape;
        AgentCells = new List<ValueTuple<ushort, ushort, ushort>>();
        Updates = new List<ValueTuple<ushort, ushort, ushort>>();
        Dimensions = new ValueTuple<ushort, ushort, ushort>(size.Item1, size.Item2, size.Item3);
        Cells = new CAGraphCell[size.Item1, size.Item2, size.Item3];
        for (ushort i = 0; i < Cells.GetLength(0); i++)
        {
            for (ushort j = 0; j < Cells.GetLength(1); j++)
            {
                for (ushort k = 0; k < Cells.GetLength(2); k++)
                {
                    Cells[i, j, k] = new CAGraphCell(this, new ValueTuple<ushort, ushort, ushort>(i, j, k));
                }
            }
        }
    }

    public CAGraph(CA parent, ValueTuple<ushort, ushort, ushort> size, GridShape shape, List<ValueTuple<ushort, ushort, ushort>> agents, CAGraphCell[,,] cells, Dictionary<string, dynamic> properties) : base(CAEntityType.Graph, properties)
    {
        Parent = parent;
        Shape = shape;
        AgentCells = agents.ConvertAll(x => new ValueTuple<ushort, ushort, ushort>(x.Item1, x.Item2, x.Item3));
        Updates = new List<ValueTuple<ushort, ushort, ushort>>();
        Dimensions = new ValueTuple<ushort, ushort, ushort>(size.Item1, size.Item2, size.Item3);
        Cells = new CAGraphCell[size.Item1, size.Item2, size.Item3];
        for (ushort i = 0; i < size.Item1; i++)
        {
            for (ushort j = 0; j < size.Item2; j++)
            {
                for (ushort k = 0; k < size.Item3; k++)
                {
                    //if(i == 500 && j == 500)
                    //{
                    //    Console.WriteLine();
                    //}
                    Cells[i, j, k] = cells[i, j, k].Copy(this);
                }
            }
        }
    }
}
public class CAGraphCell : CAEntity
{
    public CAGraph Parent { get; set; }
    public CAGraphCellAgent Agent { get; private set; }
    public ValueTuple<ushort, ushort, ushort> Position { get; private set; }
    //private Tuple<ushort, ushort, ushort>[][] Neighbors { get; set; }
    //System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();

    public CAGraphCell(CAGraph parent, ValueTuple<ushort, ushort, ushort> position) : base(CAEntityType.Cell)
    {
        this.Parent = parent;
        this.Position = position;
        //this.Neighbors = new Tuple<ushort, ushort, ushort>[Enum.GetNames(typeof(CANeighborhoodType)).Count()][];
    }

    public CAGraphCell(CAGraph parent, ValueTuple<ushort, ushort, ushort> position, Dictionary<string, dynamic> properties, CAGraphCellAgent agent) : base(CAEntityType.Cell, properties)
    {
        this.Parent = parent;
        this.Position = position;
        if (agent != null)
        {
            this.Agent = agent.Copy(this);
        }
    }
}
public class CAGraphCellAgent : CAEntity
{
    // have to change...this has to be a property? Or no, it's a CAEntity which has a list of CAProperties.
    //public int State { get; set; }
    public CAGraphCell Parent { get; set; }
    //System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();

    public CAGraphCellAgent(CAGraphCell parent, ushort state) : base(CAEntityType.Agent)
    {
        this.Parent = parent;
        AddProperty(("state", state));
    }

    public CAGraphCellAgent(CAGraphCell parent, Dictionary<string, dynamic> properties) : base(CAEntityType.Agent, properties)
    {
        this.Parent = parent;
    }
}
It sounds like your problem is that your representation of your agents (using dictionaries) consumes too much memory. If so, the solution is to find a more compact representation.
Since you're working in an object-oriented language, the typical solution would be to define an Agent class, possibly with subclasses for different types of agents, and use instance variables to store the state of each agent. Then your CA grid will be an array of Agent instances (or possibly nulls for vacant cells). This will be a lot more compact than using dictionaries with string keys.
Also, I would recommend not storing the position of the agent on the grid as part of the agent's state, but passing it as a parameter to any methods that need it. Not only does this save a bit of memory just by itself, but it also allows you to place references to the same Agent instance at multiple cells on the grid to represent multiple identical agents. Depending on how often such identical agents occur in your CA, this may save a huge amount of memory.
Note that, if you modify the state of such a reused agent instance, the modification will obviously affect all agents on the grid represented by that instance. For that reason, it may be a good idea to make your Agent objects immutable and just create a new one whenever the agent's state changes.
You might also want to consider maintaining a cache (e.g. a set) of Agent instances already on the grid so that you can easily check if a new agent might be identical with an existing one. Whether this will actually do any good depends on your specific CA model — with some CA you might be able to handle de-duplication sufficiently well even without such a cache (it's perfectly OK to have some duplicate Agent objects), while for others there might simply not be enough identical agents to make it worthwhile. Also, if you do try this, note that you'll need to either design the cache to use weak references (which can be tricky to get right) or periodically purge and rebuild it to avoid old Agent objects lingering in the cache even after they've been removed from the grid.
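To make that concrete, here is a minimal sketch of such an intern-style cache. It assumes an immutable Agent class that overrides Equals and GetHashCode so that agents with identical state compare equal; the AgentCache name and its methods are mine, purely for illustration:
public static class AgentCache
{
    private static Dictionary<Agent, Agent> cache = new Dictionary<Agent, Agent>();

    // Returns a canonical instance for the given state, so identical
    // agents on the grid can share a single object.
    public static Agent Intern(Agent agent)
    {
        if (cache.TryGetValue(agent, out Agent existing))
            return existing;
        cache[agent] = agent;
        return agent;
    }

    // Per the caveat above: purge and rebuild periodically so agents
    // that have left the grid don't linger in the cache.
    public static void Purge() => cache = new Dictionary<Agent, Agent>();
}
With that in place, anything that puts an agent on the grid would route it through AgentCache.Intern(...) first.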
Addendum based on your comment below, which I'll quote here:
Imagine an environment where the temperature varies seasonally (so a property on the graph). There are land and water cells (so properties on cells), and in low enough temperatures the water cells become frozen so animal agents can use them to cross over between land locations. Imagine those animal agents hunt other animal agents to eat them (so properties on the agents). Imagine the animal agents that get eaten eat trees (so other agents with properties), and tend to eat young saplings (limiting tree growth), thereby limiting their own growth (and that of the carnivore agents).
OK, so let's sketch out the classes you'd need. (Please excuse any syntax errors; I'm not really a C# programmer and I haven't actually tested this code. Just think of it as C#-like pseudocode or something.)
First of all, you will obviously need a bunch of agents. Let's define an abstract superclass for them:
public abstract class Agent {
    public abstract void Act(Grid grid, int x, int y, double time);
}
Our CA simulation (which, for simplicity, I'm going to assume to be stochastic, i.e. one where the agents act one at a time in a random order, like in the Gillespie algorithm) is basically going to involve repeatedly picking a random cell (x, y) on the grid, checking if that cell contains an agent, and if so, calling Act() on that agent. (We'll also need to update any time-dependent global state while we're doing that, but let's leave that for later.)
The Act() methods for the agents will receive a reference to the grid object, and can call its methods to make changes to the state of nearby cells (or even get a reference to the agents in those cells and call their methods directly). This could involve e.g. removing another agent from the grid (because it just got eaten), adding a new agent (reproduction), changing the acting agent's location (movement) or even removing that agent from the grid (e.g. because it starved or died of old age). For illustration, let's sketch a few agent classes:
public class Sapling : Agent {
    private static readonly double MATURATION_TIME = 10; // arbitrary time delay
    private double birthTime; // could make this a float to save memory
    public Sapling(double time) => birthTime = time;
    public override void Act(Grid grid, int x, int y, double time) {
        // if the sapling is old enough, it replaces itself with a tree
        if (time >= birthTime + MATURATION_TIME) {
            grid.SetAgentAt(x, y, Tree.INSTANCE);
        }
    }
}
public class Tree : Agent {
    public static readonly Tree INSTANCE = new Tree();
    public override void Act(Grid grid, int x, int y, double time) {
        // trees create saplings in nearby land cells
        (int x2, int y2) = grid.RandomNeighborOf(x, y);
        if (grid.GetAgentAt(x2, y2) == null && grid.CellTypeAt(x2, y2) == CellType.Land) {
            grid.SetAgentAt(x2, y2, new Sapling(time));
        }
    }
}
For the sake of brevity, I'll leave the implementation of the animal agents as an exercise. Also, the Tree and Sapling implementations above are kind of crude and could be improved in various ways, but they should at least illustrate the concept.
One thing worth noting is that, to minimize memory usage, the agent classes above carry as little internal state as possible. In particular, the agents don't store their own location on the grid, but receive it as arguments to the Act() method. Since omitting the location in fact made my Tree class completely stateless, I went ahead and used the same global Tree instance for all trees on the grid! While this won't always be possible, when it is, it can save a lot of memory.
Now, what about the grid? A basic implementation (ignoring the different cell types for a moment) would look something like this:
public class Grid {
    private readonly int width, height;
    private readonly Agent?[,] agents;
    public Grid(int w, int h) {
        width = w;
        height = h;
        agents = new Agent?[w, h];
    }
    // TODO: handle grid edges
    public Agent? GetAgentAt(int x, int y) => agents[x, y];
    public void SetAgentAt(int x, int y, Agent? agent) => agents[x, y] = agent;
}
Now, what about the cell types? You have a couple of ways to handle those.
One way would be to make the grid store an array of Cell objects instead of agents, and have each cell store its state and (possibly) an agent. But for optimizing memory use it's probably better to just have a separate 2D array storing the cell types, something like this:
public enum CellType : byte { Land, Water, Ice }

public class Grid {
    private readonly Random rng = new Random();
    private readonly int width, height;
    private readonly Agent?[,] agents;
    private readonly CellType[,] cells; // TODO: init in constructor?
    private float temperature = 20; // global temperature in Celsius
    // ...
    public CellType CellTypeAt(int x, int y) {
        CellType type = cells[x, y];
        if (type == CellType.Water && temperature < 0) return CellType.Ice;
        else return type;
    }
}
Note how the CellType enum is byte-based, which should keep the array storing them a bit more compact than if they were int-based.
Now, let's finally look at the main CA simulation loop. At its most basic, it could look like this:
Grid grid = new Grid(width, height);
grid.SetAgentAt(width / 2, height / 2, Tree.INSTANCE);

// average simulation time per loop iteration, assuming that each
// actor on the grid acts once per unit time step on average
// (note the 1.0: plain integer division would truncate dt to zero)
double dt = 1.0 / (width * height);

for (double t = 0; t < maxTime; t += dt) {
    (int x, int y) = grid.GetRandomLocation();
    Agent? agent = grid.GetAgentAt(x, y);
    if (agent != null) agent.Act(grid, x, y, t);
    // TODO: update temperature here?
}
(Technically, to correctly implement the Gillespie algorithm, the simulation time increment between iterations should be an exponentially distributed random number with mean dt, not a constant increment. However, since only the actor on one of the width * height cells is chosen in each iteration, the number of iterations between actions by the same actor is geometrically distributed with mean width * height, and multiplying this by dt = 1 / (width * height) gives an excellent approximation for an exponential distribution with mean 1. Which is a long-winded way of saying that in practice using a constant time step is perfectly fine.)
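For completeness, if you ever do want the exact exponentially distributed increment, inverse-transform sampling makes it a one-liner (rng is assumed to be whatever Random instance the simulation already owns):
// Exponentially distributed waiting time with mean dt;
// the 1.0 - NextDouble() guards against Math.Log(0).
double wait = -Math.Log(1.0 - rng.NextDouble()) * dt;
t += wait;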
Since this is getting long enough, I'll let you continue from here. I'll just note that there are plenty of ways to further expand and/or optimize the algorithm I've sketched above.
For example, you could speed up the simulation by maintaining a list of all grid locations that contain a live actor and randomly sampling actors from that list (but then you'd also need to scale the time step by the inverse of the length of the list); a sketch of this follows below. Also, you may decide that you want some actors to get more frequent chances to act than others; while the simple way to do that is just to use rejection sampling (i.e. have the actor only do something if rng.NextDouble() < prob for some prob between 0 and 1), a more efficient way would be to maintain multiple lists of locations depending on the type of the actor there.
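A rough sketch of that first optimization, under the assumption that liveActors is a List<(int x, int y)> kept in sync by SetAgentAt (not shown) and that rng is accessible at this scope:
// Sample only locations known to hold an actor; the time step now
// scales with the live-actor count instead of the full grid size.
for (double t = 0; t < maxTime && liveActors.Count > 0; )
{
    (int x, int y) = liveActors[rng.Next(liveActors.Count)];
    Agent? agent = grid.GetAgentAt(x, y);
    if (agent != null) agent.Act(grid, x, y, t);
    t += 1.0 / liveActors.Count;
}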
I have a list that stores well over a million objects. I need to look through the list and update the objects found through the below query as efficiently as possible.
I was thinking of using a Dictionary or HashSet, but I'm relatively new to C# and could not figure out how to implement these other two approaches. My current code is simply a LINQ statement searching through an IList.
public IList<LandObject> landObjects = new List<LandObject>();

var lObjsToUpdate = from obj in landObjects
                    where
                        obj.position.x >= land.x - size.x &&
                        obj.position.x <= land.x + size.x &&
                        obj.position.y >= land.y - size.y &&
                        obj.position.y <= land.y + size.y
                    select obj;

foreach (var item in lObjsToUpdate)
{
    // do what I need to do with records I've found
}
Could anyone be so kind as to suggest how I might approach this efficiently?
The real answer should involve performance tests and comparisons, and depends on your runtime environment (memory vs. CPU, etc.).
The first thing I would try: since land and size are constant, you can maintain a simple HashSet<LandObject> of the objects that fit the criteria (in addition to a list or set of all objects, or of all other objects). Every time a new object is added, check whether it fits the criteria and, if so, add it to that set. I don't know how good HashSet is at avoiding collisions when working with object references, but you can try to measure it.
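A rough sketch of that incremental approach, assuming every object enters the world through a single add method (AddLandObject is my name for it, not from the question):
HashSet<LandObject> objsInRegion = new HashSet<LandObject>();

void AddLandObject(LandObject obj)
{
    landObjects.Add(obj);
    // Test the criteria once at insertion time, instead of re-scanning
    // the entire list on every update pass.
    if (obj.position.x >= land.x - size.x && obj.position.x <= land.x + size.x &&
        obj.position.y >= land.y - size.y && obj.position.y <= land.y + size.y)
    {
        objsInRegion.Add(obj);
    }
}
Updating then just iterates objsInRegion directly. (If objects can move or be removed, the set has to be maintained on those code paths too.)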
BTW, this related question discussing memory limits of HashSet<int> might interest you. With .NET < 4.5 your HashSet should be ok up to several million items. From what I understand, with some configuration .NET 4.5 removes the limitation of 2gb max object size and you'll be able to go crazy, assuming you have a 64-bit machine.
One thing that will probably help with that many iterations is to do the bounds calculations once and use local variables inside your query instead of recomputing them. It should also help somewhat to widen the range by 1 on each end, which lets you drop the equality checks (note this slightly widens the accepted region for non-integer coordinates; if the exact bounds matter, keep the >=/<= comparisons and just use the precomputed variables):
public IList<LandObject> landObjects = new List<LandObject>();

float xmax = land.x + size.x + 1;
float xmin = land.x - size.x - 1;
float ymax = land.y + size.y + 1;
float ymin = land.y - size.y - 1;

var lObjsToUpdate = from obj in landObjects
                    where
                        obj.position.x > xmin &&
                        obj.position.x < xmax &&
                        obj.position.y > ymin &&
                        obj.position.y < ymax
                    select obj;
Ideally you need a way to partition elements so that you don't have to test every single one to find the ones that fit and the ones that should be thrown out. How you partition will depend on how dense the items are - it might be as simple as partitioning on the integer portion of the X coordinate for instance, or by some suitable scaled value of that coordinate.
Given a method (let's call it Partition for now) that takes an X coordinate and returns a partition value for it, you can filter on the X coordinate fairly quickly as a first-pass to reduce the total number of nodes you need to check. You might need to play with the partition function a little to get the right distribution though.
For example, say that you have floating-point coordinates in the range -100 < X <= 100, with your 1,000,000+ objects distributed fairly uniformly across that range. That would divide the list into 200 partitions of (on average) 5000 entries if partitioned on integer values of X. That means that for every integer step in the X dimension of your search range you only have ~5,000 entries to test.
Here's some code:
public interface IPosition2F
{
    float X { get; }
    float Y { get; }
}

public class CoordMap<T> where T : IPosition2F
{
    SortedDictionary<int, List<T>> map = new SortedDictionary<int, List<T>>();
    readonly Func<float, int> xPartition = (x) => (int)Math.Floor(x);

    public void Add(T entry)
    {
        int xpart = xPartition(entry.X);
        List<T> col;
        if (!map.TryGetValue(xpart, out col))
        {
            col = new List<T>();
            map[xpart] = col;
        }
        col.Add(entry);
    }

    public T[] ExtractRange(float left, float top, float right, float bottom)
    {
        var rngLeft = xPartition(left) - 1;
        var rngRight = xPartition(right) + 1;
        var cols =
            from keyval in map
            where keyval.Key >= rngLeft && keyval.Key <= rngRight
            select keyval.Value;
        var cells =
            from cell in cols.SelectMany(c => c)
            where cell.X >= left && cell.X <= right &&
                  cell.Y >= top && cell.Y <= bottom
            select cell;
        return cells.ToArray();
    }

    public CoordMap()
    { }

    // Create an instance with a custom partition function
    public CoordMap(Func<float, int> partfunc)
    {
        xPartition = partfunc;
    }
}
That will partition on the X coordinate, reducing your final search space. If you wanted to take it a step further you could also partition on the Y coordinate... I'll leave that as an exercise for the reader :)
If your partition function is very finely grained and could result in a large number of partitions, it might be useful to add a ColumnRange method similar to:
public IEnumerable<List<T>> ColumnRange(int left, int right)
{
    using (var mapenum = map.GetEnumerator())
    {
        // MoveNext returns true while there are elements left, so
        // "finished" is its negation
        bool finished = !mapenum.MoveNext();
        while (!finished && mapenum.Current.Key < left)
            finished = !mapenum.MoveNext();
        while (!finished && mapenum.Current.Key <= right)
        {
            yield return mapenum.Current.Value;
            finished = !mapenum.MoveNext();
        }
    }
}
The ExtractRange method can then use that like so:
public T[] ExtractRange(float left, float top, float right, float bottom)
{
    var rngLeft = xPartition(left) - 1;
    var rngRight = xPartition(right) + 1;
    var cells =
        from cell in ColumnRange(rngLeft, rngRight).SelectMany(c => c)
        where cell.X >= left && cell.X <= right &&
              cell.Y >= top && cell.Y <= bottom
        select cell;
    return cells.ToArray();
}
I used SortedDictionary for convenience, and because it makes it possible to do an ExtractRange method that is reasonably quick. There are other container types that are possibly better suited to the task.
I have a matrix and I want to create a new matrix which will be the old matrix, but without the first row and first column. Is there a way to do this without using loops?
I want to create a new matrix
From this it sounds to me like you want a new T[,] object.
which will be the old matrix, but without the first row and first column
I interpret this to mean you want the new T[,] object to contain the same values as the original, excepting the first row/column.
Is there a way to do this without using loops?
If I've interpreted your question correctly, then no, not really. You will need to copy elements from one array to another, and that requires enumeration. But that doesn't mean you can't abstract this operation into a reusable extension method (in fact, that's exactly what you should do).
public static T[,] SubMatrix<T>(this T[,] matrix, int xstart, int ystart)
{
    int width = matrix.GetLength(0);
    int height = matrix.GetLength(1);
    if (xstart < 0 || xstart >= width)
    {
        throw new ArgumentOutOfRangeException("xstart");
    }
    else if (ystart < 0 || ystart >= height)
    {
        throw new ArgumentOutOfRangeException("ystart");
    }
    T[,] submatrix = new T[width - xstart, height - ystart];
    for (int i = xstart; i < width; ++i)
    {
        for (int j = ystart; j < height; ++j)
        {
            submatrix[i - xstart, j - ystart] = matrix[i, j];
        }
    }
    return submatrix;
}
The above code isn't pretty, but once it's in place you'll be able to use it quite neatly:
T[,] withoutFirstRowAndColumn = originalMatrix.SubMatrix(1, 1);
Now, if I misinterpreted your question, and you are not dead-set on creating a new T[,] object, you can improve the efficiency of this approach by not allocating a new T[,] at all; you could take Abel's idea (along with its caveats) and use unsafe code to essentially simulate a T[,] with indices pointing to the elements of the original matrix. Come to think of it, you could even achieve this without resorting to unsafe code; you'd simply need to define an interface for the functionality you'd want to expose (a this[int, int] property comes to mind) and then implement that functionality (your return type wouldn't be a T[,] in this case, but what I'm getting at is that it could be something like it).
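For illustration, here is a rough sketch of such a non-copying view, in the same untested spirit as the rest of this answer (MatrixView is my name, not a framework type):
public class MatrixView<T>
{
    private readonly T[,] source;
    private readonly int rowOffset, colOffset;

    public MatrixView(T[,] source, int rowOffset, int colOffset)
    {
        this.source = source;
        this.rowOffset = rowOffset;
        this.colOffset = colOffset;
    }

    public int Width => source.GetLength(0) - rowOffset;
    public int Height => source.GetLength(1) - colOffset;

    // Reads and writes pass straight through to the original matrix,
    // so no elements are copied and no extra storage is allocated.
    public T this[int i, int j]
    {
        get => source[i + rowOffset, j + colOffset];
        set => source[i + rowOffset, j + colOffset] = value;
    }
}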
Simply put: no. But if you use multi-dimensional arrays rather than jagged arrays, and if you take some time to study the memory layout of arrays in .NET, you could do it with unsafe pointers, erasing part of the memory and moving the starting pointer of the multi-dim array. Whether this works would still depend on how you design your arrays and matrices.
However, I'd highly advise against it. There's a big chance you'd screw up the type and confuse the garbage collector if you did.
Alternatively, if you'd like to try this as an exercise, use C++/CLI for the task. In C++ you have more control, and it's easier to manipulate memory and move pointers directly. You also have more control over the destructors and finalizers, which may come in handy here. That said, you'd still need marshaling. If you're doing all this for performance, I'd advise going back to the simple loops; they'll perform faster in most cases.
Maybe you should have a look at using a maths library with good support for Matrix operations? Here's a thread which mentions a few:
Matrix Library for .NET
Using some methods from the Buffer class, you can do a row-wise copy if the matrix element type is a primitive type. This should be faster than an element-wise copy. Here is a generic extension method which demonstrates the use of Buffer:
static PrimitiveType[,] SubMatrix<PrimitiveType>(
    this PrimitiveType[,] matrix, int fromRow, int fromCol) where PrimitiveType : struct
{
    var (srcRowCount, srcColCount) = (matrix.GetLength(0), matrix.GetLength(1));
    if (fromRow < 0 || fromRow > srcRowCount)
    {
        throw new IndexOutOfRangeException(nameof(fromRow));
    }
    if (fromCol < 0 || fromCol > srcColCount)
    {
        throw new IndexOutOfRangeException(nameof(fromCol));
    }
    var (dstRowCount, dstColCount) = (srcRowCount - fromRow, srcColCount - fromCol);
    var subMatrix = new PrimitiveType[dstRowCount, dstColCount];
    var elementSize = Buffer.ByteLength(matrix) / matrix.Length;
    for (var row = 0; row < dstRowCount; ++row)
    {
        var srcOffset = (srcColCount * (row + fromRow) + fromCol) * elementSize;
        var dstOffset = dstColCount * row * elementSize;
        Buffer.BlockCopy(matrix, srcOffset, subMatrix, dstOffset, dstColCount * elementSize);
    }
    return subMatrix;
}
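For example, trimming the first row and column of an int matrix (int is a primitive type, so both the struct constraint and Buffer.BlockCopy are satisfied):
int[,] original = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
int[,] trimmed = original.SubMatrix(1, 1); // { { 5, 6 }, { 8, 9 } }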