What do I want to achieve?
I'm working on an evolutionary algorithm that finds the min/max of non-linear functions. I have a fully functional WPF application, but there's one feature missing: 3D plots.
What is the problem?
To accomplish this I started with a free trial of ILNumerics, which provides 3D data visualisation. It works completely fine with the examples from the documentation, but something prevents me from plotting my own 3D graphs properly.
Visualising the problem:
So, here is how it behaves at the moment.
These are graphs of the non-linear function x1^4 + x2^4 - 0.62*x1^2 - 0.62*x2^2.
Left side: contour plot produced with OxyPlot.
Right side: 3D graph produced with ILNumerics.
As you can see, the OxyPlot contour is completely fine, while the 3D graph I'm trying to plot with exactly the same data is not right at all.
How is the current (not working) solution done?
I'm trying to visualise a 3D surface using points in space. ILNumerics has a class called Surface, an instance of which I have to create in order to plot my graph. It has the following constructor:
public Surface(InArray<float> ZXYPositions, InArray<float> C = null, Tuple<float, float> colorsDataRange = null, Colormap colormap = null, object tag = null);
where, as you can see, ZXYPositions is what I actually have a problem with. Before instantiating the Surface object I'm filling an array like this:
int m = 0;
for (int i = 0; i < p; ++i)
{
    for (int j = 0; j < p; ++j)
    {
        sigma[m, 0] = (float)data[i, j];
        sigma[m, 1] = (float)xy[0][i];
        sigma[m, 2] = (float)xy[1][j];
        m++;
    }
}
where sigma[m, 0] = Z; sigma[m, 1] = X; sigma[m, 2] = Y;
And here's the problem. I cannot find any logical error in this approach.
Here is the code responsible for creating the object which I'm passing to the ILNumerics plot panel:
var scene = new PlotCube(twoDMode: false) {
    // add a surface
    new Surface(sigma) {
        // make thin transparent wireframes
        Wireframe = { Color = Color.FromArgb(50, Color.LightGray) },
        // choose a different colormap
        Colormap = Colormaps.Jet,
    }
};
Additionally, I want to say that the sigma array is constructed properly, because I've printed out its values and they're definitely correct.
Plotting only data points
Finally, I need to add that when I don't create a Surface object and only plot the data points, it looks much more reasonable:
But sadly it's not what I'm looking for. I want to create a surface with this data.
Good News!
I found the answer. Oddly, almost everything was fine; I misunderstood just one thing. When I pass the ZXYPositions argument to Surface, it can actually expect only Z data from me in order to plot the graph correctly.
What did I change to make it work?
The first two for loops are now reduced to:
sigma = data;
As you can see, there are no loops any more, because sigma now contains only the "solution" coordinates (the Z values), so I just assign the data array to sigma.
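For context, here is a minimal sketch of how such a Z-only matrix could be filled for the example function x1^4 + x2^4 - 0.62*x1^2 - 0.62*x2^2 (the names p, xMin, xMax and data are illustrative and not taken from my actual application):

// Illustrative only: fill a p x p matrix with the Z values of
// f(x1, x2) = x1^4 + x2^4 - 0.62*x1^2 - 0.62*x2^2.
// xMin/xMax are assumed grid bounds for this sketch.
int p = 100;
double xMin = -1.0, xMax = 1.0;
double step = (xMax - xMin) / (p - 1);
var data = new double[p, p];
for (int i = 0; i < p; ++i)
{
    for (int j = 0; j < p; ++j)
    {
        double x1 = xMin + i * step;
        double x2 = xMin + j * step;
        data[i, j] = Math.Pow(x1, 4) + Math.Pow(x2, 4) - 0.62 * x1 * x1 - 0.62 * x2 * x2;
    }
}
// Surface only needs these Z values; X and Y default to the matrix indices,
// which is why the axis ranges are rescaled below.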
The second part, where I create the Surface, now looks like this:
var B = ILMath.tosingle(sigma);
var scene = new PlotCube(twoDMode: false) {
    // add a surface
    new Surface(B) {
        // make thin transparent wireframes
        Wireframe = { Color = Color.FromArgb(50, Color.LightGray) },
        // choose a different colormap
        Colormap = Colormaps.Jet,
    }
};
scene.Axes.XAxis.Max = (float)arguments[0].Maximum;
scene.Axes.XAxis.Min = (float)arguments[0].Minimum;
scene.Axes.YAxis.Max = (float)arguments[1].Maximum;
scene.Axes.YAxis.Min = (float)arguments[1].Minimum;
scene.First<PlotCube>().Rotation = Matrix4.Rotation(new Vector3(1f, 0.23f, 1), 0.7f);
Basically, the only other thing that changed is scaling the X and Y axes to proper values.
Final results
Here are the final results:
Related
I am fairly new to C# (3 weeks) and to StackOverflow, and by searching I have not yet found anything that answers my question.
How can one plot an x^2 function in C#? (Obviously I am not interested only in x^2 but in any function of my choice.)
This should be plotted in a grid in a user application. Before that I would need to gather some data from a binary file which the user selects himself, and I assume I should pass these points into arrays so that I can plot a graph.
Issues I am not familiar with:
How can I use arrays (if possible) to plot a graph with the least memory usage? Any links or references to learn from would be useful.
private void Pic1D_Click(object sender, RoutedEventArgs e)
{
    Line myLine = new Line();
    myLine.Stroke = System.Windows.Media.Brushes.LightSteelBlue;
    myLine.X1 = 20;
    myLine.Y1 = 20;
    for (int i = 0; i <= 8; i++)
    {
        myLine.X2 = i + 20;
        myLine.Y2 = i * i + 20;
        myLine.VerticalAlignment = VerticalAlignment.Center;
        myLine.StrokeThickness = 2;
        FunctionGrid.Children.Add(myLine);
        myLine.X2 = myLine.X1;
        myLine.Y2 = myLine.Y1;
    }
}
Another issue I am trying to research: is it possible to plot the above graph using binary inputs in arrays? That is, the x value would be represented in binary, as would the y value. Is there a function or class which I could use to do this? I know how to convert the content to binary, though the file itself is a raw file. Ideally, in the end, I would want to use the file read below to plot the function above in 2D.
if (NewDialogx.ShowDialog() == System.Windows.Forms.DialogResult.OK)
{
    xPath.Text = NewDialogx.FileName;
}
byte[] fileBytes = File.ReadAllBytes(xPath.Text);
StringBuilder sb = new StringBuilder();
foreach (byte b in fileBytes)
{
    count++;
    if (count > 4096)
    {
        sb.Append(Convert.ToString(b, 2).PadLeft(1, '!'));
        // GraphPlot[count-512,0,0] = Convert.ToString(b, 2).PadLeft(8, '0');
    }
}
File.WriteAllText(@"C:\Users\raiti\Desktop\NEW1", sb.ToString());
Below is how I tried to do this. I get this exception:
System.ArgumentException: 'Specified Visual is already a child of another Visual or the root of a CompositionTarget.'
The issue is that I do not want to create this as a list, since that seems very impractical (though please do correct me) if I need to plot around 10,000+ data lines, and it might take up a lot of space.
I hope I have been specific enough on this :).
I was going to put this in comments but there's way too much and I have some code to show.
I wouldn't usually worry about memory usage much in a WPF application. This is presumably going to be running on a desktop, and even the weaker end computers nowadays can cope with a shed load of graphics.
You should at least consider graphing software; there are a bunch of free possibilities. I also wouldn't totally dismiss setting WPF aside, dumping the data to disk and charting it in Excel.
If this is sort of an academic exercise in that you only win if you use the least memory then I'd have to try the options and look at how much memory they use.
The most efficient way to do graphics is supposed to be the various Draw... options. You could draw lines into a picture. Although a bitmap takes a fair bit of memory, you're potentially saving memory if you have many lines, because the bitmap size won't go up whilst other approaches would use more memory as lines are added.
You'd draw many lines using DrawLine, one between each pair of points.
https://msdn.microsoft.com/en-us/library/system.windows.media.drawingcontext(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/ms606810(v=vs.110).aspx
The following code (which I just happen to have to hand) draws a set of grid lines on an image used as the overlay on a map:
public static async Task<BitmapSource> GetGridImageAsync(FrameworkElement fe)
{
    Matrix m = PresentationSource.FromVisual(fe)
        .CompositionTarget.TransformToDevice;
    double dpiFactor = 1 / m.M11;
    return await Task.Run(() =>
    {
        Pen GreyPen = new Pen(Brushes.Gray, 1 * dpiFactor);
        Pen GreyThickPen = new Pen(Brushes.Gray, 2 * dpiFactor);
        GreyPen.Freeze();
        int ix = 0;
        int iy = 0;
        int Width = 1155;
        int Height = 805;
        BitmapSource image = Visuals.CreateBitmap(
            Width, Height, 96,
            drawingContext =>
            {
                int count = 0;
                while (ix <= Width)
                {
                    if (count % 5 == 0)
                    {
                        drawingContext.DrawLine(
                            GreyThickPen, new Point(ix, 0), new Point(ix, Map.Height - 1));
                    }
                    else
                    {
                        drawingContext.DrawLine(
                            GreyPen, new Point(ix, 0), new Point(ix, Map.Height - 1));
                    }
                    ix += 35;
                    count++;
                }
                count = 0;
                while (iy <= Height)
                {
                    if (count % 5 == 0)
                    {
                        drawingContext.DrawLine(
                            GreyThickPen, new Point(0, iy), new Point(Map.Width - 1, iy));
                    }
                    else
                    {
                        drawingContext.DrawLine(
                            GreyPen, new Point(0, iy), new Point(Map.Width - 1, iy));
                    }
                    iy += 35;
                    count++;
                }
            });
        return image;
    });
}
fe is the window passed in, and I use that scaling so the lines are crisp. It gives a factor of 0.8 on my machine.
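If you wanted to adapt that low-level approach to connect data points rather than draw a grid, a minimal sketch (my own illustration; the points are assumed to already be scaled to pixel coordinates) could look like this:

// Sketch: connect consecutive data points with DrawLine and render the result
// into a bitmap that can be shown in an Image control.
// Uses System.Windows.Media and System.Windows.Media.Imaging.
public static BitmapSource RenderLineChart(IList<Point> points, int width, int height)
{
    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        var pen = new Pen(Brushes.LightSteelBlue, 2);
        pen.Freeze();
        for (int i = 1; i < points.Count; i++)
        {
            dc.DrawLine(pen, points[i - 1], points[i]);
        }
    }
    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);
    bitmap.Freeze();
    return bitmap;
}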
That low-level approach might not really be necessary for your purpose, in which case you could use polylines (as Clemens suggested).
A common way to present numerous things using WPF is to use an ItemsControl and bind its ItemsSource. You then template the data you bind into UI objects (a Polyline each, in this case).
A Polyline takes a PointCollection as its Points property.
If your data is gathered once and then doesn't change, you can just bind that in a DataTemplate like:
<Polyline Points="{Binding Points}" Stroke="Red" StrokeThickness="2" />
Where Points is a public property exposing a PointCollection on whatever object you have for each row of the collection bound to ItemsSource.
If your lines are supposed to be curved, then this is considerably more complicated. You can do Bezier curves using Paths, but you'd probably need to calculate the control points based on your data points.
At that point I'd be thinking again about using purpose built graphing software, personally.
2) Your array. You'd probably want to convert that to a PointCollection somewhere. That could be in a converter or in a viewmodel. MVVM (and a viewmodel) is the de facto standard pattern for WPF development.
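For example, a sketch of a per-row object exposing such a PointCollection, assuming the raw data is a double[] of y values (all names here are illustrative):

// Sketch: convert an array of y values into a PointCollection a Polyline can bind to.
public class ChartRowViewModel
{
    public PointCollection Points { get; }

    public ChartRowViewModel(double[] yValues)
    {
        var points = new PointCollection(yValues.Length);
        for (int x = 0; x < yValues.Length; x++)
        {
            points.Add(new Point(x, yValues[x]));   // x is just the index here
        }
        points.Freeze();                            // avoids per-point change tracking
        Points = points;
    }
}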
3) Your error.
You get that when you try to use a piece of UI in two places.
https://social.technet.microsoft.com/wiki/contents/articles/29964.wpf-tips-only-one-parent.aspx
Sometimes it doesn't error but also doesn't give you the expected result.
I think I'd need to see more code to work out exactly why.
I'm using the official Kinect SDK 2.0 and Emgu CV in order to recognize the colors of a Rubik's Cube.
First I use Canny edge extraction on the infrared camera image, since it handles different lighting conditions better than the RGB camera and is much better for detecting contours.
Then I use this code to convert the coordinates of the infrared sensor to the ones of the RGB camera.
As you can see in the picture, they are still off from what I am looking for. Since I already use the official KinectSensor.CoordinateMapper.MapDepthFrameToColorSpace, I don't know how else I can improve the situation.
using (var colorFrame = reference.ColorFrameReference.AcquireFrame())
using (var irFrame = reference.InfraredFrameReference.AcquireFrame())
{
    if (colorFrame == null || irFrame == null)
        return;

    // initialize depth frame data
    FrameDescription depthDesc = irFrame.FrameDescription;

    if (_depthData == null)
    {
        uint depthSize = depthDesc.LengthInPixels;
        _depthData = new ushort[depthSize];
        _colorSpacePoints = new ColorSpacePoint[depthSize];

        // fill Array with max value so all pixels can be mapped
        for (int i = 0; i < _depthData.Length; i++)
        {
            _depthData[i] = UInt16.MaxValue;
        }

        // didn't work so well with the actual depth-data
        //depthFrame.CopyFrameDataToArray(_depthData);

        _sensor.CoordinateMapper.MapDepthFrameToColorSpace(_depthData, _colorSpacePoints);
    }
}
This is a helper function I created to convert point arrays in infrared space to color space:
public static System.Drawing.Point[] DepthPointsToColorSpace(System.Drawing.Point[] depthPoints, ColorSpacePoint[] colorSpace)
{
    for (int i = 0; i < depthPoints.Length; i++)
    {
        // 512 is the width of the depth/infrared image
        int index = 512 * depthPoints[i].Y + depthPoints[i].X;
        depthPoints[i].X = (int)Math.Floor(colorSpace[index].X + 0.5);
        depthPoints[i].Y = (int)Math.Floor(colorSpace[index].Y + 0.5);
    }
    return depthPoints;
}
We can solve this problem by transforming infrared image coordinates to color image coordinates with a quadrilateral-to-quadrilateral mapping.
Take a quadrilateral Q(x1,y1,x2,y2,x3,y3,x4,y4) in the infrared image; its corresponding quadrilateral in the color image is Q'(x1',y1',x2',y2',x3',y3',x4',y4').
We can write the above mapping in equation form as follows:
Q' = Q * A
where A is a 3 × 3 matrix with coefficients a11, a12, a13, a21, ..., a33.
The formulas to obtain the coefficients from the measured corner points are as follows:
x1=173; y1=98; x2=387; y2=93; x3=395; y3=262; x4=172; y4=264;
x1p=787; y1p=235; x2p=1407; y2p=215; x3p=1435; y3p=705; x4p=795; y4p=715;
tx=(x1p-x2p+x3p-x4p)*(y4p-y3p)-(y1p-y2p+y3p-y4p)*(x4p-x3p);
ty=(x2p-x3p)*(y4p-y3p)-(x4p-x3p)*(y2p-y3p);
a31=tx/ty;
tx=(y1p-y2p+y3p-y4p)*(x2p-x3p)-(x1p-x2p+x3p-x4p)*(y2p-y3p);
ty=(x2p-x3p)*(y4p-y3p)-(x4p-x3p)*(y2p-y3p);
a32=tx/ty;
a11=x2p-x1p+a31*x2p;
a12=x4p-x1p+a32*x4p;
a13=x1p;
a21=y2p-y1p+a31*y2p;
a22=y4p-y1p+a32*y4p;
a23=y1p;
a33=1.0;
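To make the application of those coefficients concrete, here is a small sketch of my own (assuming, as in the standard projective-mapping derivation, that the coefficients above map the unit square (u, v) in [0,1] x [0,1] onto the color-image quadrilateral):

// Sketch: apply the projective mapping defined by a11..a33 to a point (u, v)
// in the unit square. To map an infrared pixel you would first normalise it
// into (u, v) relative to the infrared quadrilateral Q, then apply this.
static System.Drawing.PointF MapToColorSpace(
    float u, float v,
    float a11, float a12, float a13,
    float a21, float a22, float a23,
    float a31, float a32, float a33)
{
    float w = a31 * u + a32 * v + a33;          // homogeneous divisor
    float xp = (a11 * u + a12 * v + a13) / w;   // color-image x
    float yp = (a21 * u + a22 * v + a23) / w;   // color-image y
    return new System.Drawing.PointF(xp, yp);
}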
It's because the camera that retrieves the depth data is not the same camera as the one that retrieves the color data.
So you should apply a correction factor to displace the depth data.
It's a factor that is almost constant, but it's related to the distance.
I've got no code for you, but it's something you can calculate yourself.
Hello, I have 2D matrix data saved in an ILArray<double>. This matrix represents the weights of a neural network for one neuron, and I want to see how the weights look with ILNumerics. Any idea how I can do this? I found many examples of 3D plotting but nothing for plotting a 2D image-style representation of the data.
Image data are currently best (simplest) visualized by utilizing ILSurface. Since this is a 3D plot, you may not get the optimal performance for large image data. Fortunately, ILNumerics' scene graph makes it easy to improve this with your own implementation.
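If you go the ILSurface route, a minimal sketch could look like this (assuming ILSurface accepts a Z-only matrix in the same way Surface does in the question above, and that ILPlotCube takes the same twoDMode switch; weights stands in for your ILArray<double>):

private void ilPanel1_Load(object sender, EventArgs e) {
    using (ILScope.Enter()) {
        // stand-in for your weight matrix; replace with your own ILArray<double>
        ILArray<double> weights = ILMath.rand(20, 30);
        // twoDMode gives a top-down, image-like view of the surface
        ilPanel1.Scene.Add(new ILPlotCube(twoDMode: true) {
            // only the Z values are needed; X/Y default to the matrix indices
            new ILSurface(ILMath.tosingle(weights))
        });
    }
}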
The simplest such custom implementation would take an ILPoints shape, arrange the needed number of points in a grid, and let every point visualize the value of the corresponding element of the input matrix, say by color (or size).
private void ilPanel1_Load(object sender, EventArgs e) {
    using (ILScope.Enter()) {
        // some 'input matrix'
        ILArray<float> Z = ILSpecialData.sincf(40, 50);
        // do some reordering: prepare vertices
        ILArray<float> Y = 1, X = ILMath.meshgrid(
            ILMath.vec<float>(1, Z.S[1]),
            ILMath.vec<float>(1, Z.S[0]),
            Y);
        // reallocate the vertex positions matrix
        ILArray<float> pos = ILMath.zeros<float>(3, X.S.NumberOfElements);
        // fill in values
        pos["0;:"] = X[":"];
        pos["1;:"] = Y[":"];
        pos["2;:"] = Z[":"];
        // colormap used to map the values to colors
        ILColormap cmap = new ILColormap(Colormaps.Hot);
        // setup the scene
        ilPanel1.Scene.Add(new ILPlotCube {
            new ILPoints() {
                Positions = pos,
                Colors = cmap.Map(Z).T,
                Color = null
            }
        });
    }
}
Obviously, the resulting points do not scale with the form, so the 'image' suffers from larger gaps between the points when the form size is increased. For a better implementation you may adapt the approach to utilize ILTriangles instead of ILPoints, in order to assemble adjacent rectangles.
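A sketch of the vertex arrangement for that triangle-based variant (the cell-to-triangle arithmetic is the part this is meant to illustrate; ILTriangles is assumed to take Positions in the same 3 x N layout as ILPoints above):

// Sketch: build two triangles (one filled quad) per matrix element, so every
// value is shown as a cell instead of a single point. Z is the input matrix
// from the snippet above.
int rows = Z.S[0], cols = Z.S[1];
int nVerts = rows * cols * 6;                           // 6 vertices per cell
ILArray<float> tpos = ILMath.zeros<float>(3, nVerts);
int v = 0;
for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        // corners of cell (r, c) in the XY plane
        float x0 = c, x1 = c + 1, y0 = r, y1 = r + 1;
        float z = Z.GetValue(r, c);
        // triangle 1: (x0,y0) (x1,y0) (x1,y1); triangle 2: (x0,y0) (x1,y1) (x0,y1)
        float[,] quad = { { x0, y0 }, { x1, y0 }, { x1, y1 },
                          { x0, y0 }, { x1, y1 }, { x0, y1 } };
        for (int k = 0; k < 6; k++, v++) {
            tpos.SetValue(quad[k, 0], 0, v);
            tpos.SetValue(quad[k, 1], 1, v);
            tpos.SetValue(z, 2, v);                     // flat cell at the element's value
        }
    }
}
// assumed usage, mirroring the ILPoints snippet above:
// ilPanel1.Scene.First<ILPlotCube>().Add(new ILTriangles { Positions = tpos });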
I have decided to have a go at making a dungeon crawler game with the XNA framework. I am a computer science student and am quite familiar with C# and the .NET framework. I have some questions about different parts of the development of my engine.
Loading Maps
I have a Tile class that stores the Vector2 position, Texture2D and dimensions of the tile. I have another class called TileMap that has a list of tiles indexed by position. I am reading from a text file of tile numbers, matching each number to an index in the tile list, creating a new tile with the correct texture and position, and storing it in another list of tiles.
public List<Tile> tiles = new List<Tile>(); // List of tiles that I have added to the game
public List<TileRow> testTiles = new List<TileRow>(); // TileRow contains a list of tiles along the x axis along with their Vector2 positions.
Reading and storing the map tiles.
using (StreamReader stream = new StreamReader("TextFile1.txt"))
{
    while (stream.EndOfStream != true)
    {
        line = stream.ReadLine().Trim(' ');
        lineArray = line.Split(' ');
        TileRow tileRow = new TileRow();
        for (int x = 0; x < lineArray.Length; x++)
        {
            tileXCo = x * tiles[int.Parse(lineArray[x])].width;
            tileYCo = yCo * tiles[int.Parse(lineArray[x])].height;
            tileRow.tileList.Add(new Tile(tiles[int.Parse(lineArray[x])].titleTexture, new Vector2(tileXCo, tileYCo)));
        }
        testTiles.Add(tileRow);
        yCo++;
    }
}
For drawing the map.
public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
{
    foreach (TileRow tes in testTiles)
    {
        foreach (Tile t in tes.tileList)
        {
            spriteBatch.Draw(t.titleTexture, t.position, Color.White);
        }
    }
}
Questions:
Is this the correct way I should be doing it, or should I just be storing a list referencing my tiles list?
How would I deal with Multi Layered Maps?
Collision Detection
At the moment I have a method that loops through every tile stored in my testTiles list, checks whether its dimensions intersect with the player's dimensions, and then returns a list of all the tiles that do. I have a class derived from my Tile class called CollisionTile that triggers a collision when the player and that rectangle intersect (public class CollisionTile : Tile).
public List<Tile> playerArrayPosition(TileMap tileMap)
{
    List<Tile> list = new List<Tile>();
    foreach (TileRow test in tileMap.testTiles)
    {
        foreach (Tile t in test.tileList)
        {
            Rectangle rectangle = new Rectangle((int)tempPosition.X, (int)tempPosition.Y, (int)playerImage.Width / 4, (int)playerImage.Height / 4);
            Rectangle rectangle2 = new Rectangle((int)t.position.X, (int)t.position.Y, t.width, t.height);
            if (rectangle.Intersects(rectangle2))
            {
                list.Add(t);
            }
        }
    }
    return list;
}
Yeah, I am pretty sure this is not the right way to check for tile collision. Any help would be great.
Sorry for the long post, any help would be much appreciated.
You are right. This is a very inefficient way to draw and check for collision on your tiles. What you should be looking into is a Quadtree data structure.
A quadtree will store your tiles in a manner that will allow you to query your world using a Rectangle, and your quadtree will return all tiles that are contained inside of that Rectangle.
List<Tiles> tiles = Quadtree.GetObjects(rectangle);
This allows you to select only the tiles that need to be processed. For example, when drawing your tiles, you could specify a Rectangle the size of your viewport, and only those tiles would be drawn (culling).
Another example: you can query the world with your player's Rectangle and only check for collisions against the tiles returned for that portion of your world.
For loading your tiles, you may want to consider loading into a two dimensional array, instead of a List. This would allow you to fetch a tile based on its position, instead of cross referencing it between two lists.
Tile[,] tiles = new Tile[mapWidth, mapHeight]; // mapWidth/mapHeight: your map's dimensions in tiles
Tile tile = tiles[x, y];
Also, in this case, an array data structure would be a lot more efficient than using a List.
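A sketch of the loading loop from the question rewritten to fill a Tile[,] (reusing the question's tiles prototype list and text-file format; the map dimensions are taken from the file itself):

// Sketch: read the map into a 2D array ('map') instead of nested lists.
// 'tiles' is the question's list of tile prototypes.
Tile[,] map;
using (StreamReader stream = new StreamReader("TextFile1.txt"))
{
    var rows = new List<string[]>();
    while (!stream.EndOfStream)
    {
        rows.Add(stream.ReadLine().Trim().Split(' '));
    }
    int height = rows.Count;
    int width = rows[0].Length;
    map = new Tile[width, height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            Tile proto = tiles[int.Parse(rows[y][x])];
            map[x, y] = new Tile(proto.titleTexture,
                new Vector2(x * proto.width, y * proto.height));
        }
    }
}
// later: Tile t = map[x, y];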
For uniform sets of tiles with standard widths and heights, it is quite easy to calculate which tiles are visible on the screen, and to determine which tile(s) your character is overlapping with. Even though I wrote the QuadTree in Jon's answer, I think it's overkill for this. Generally, the formula is:
tileX = someXCoordinate / tileWidth;
tileY = someYCoordinate / tileHeight;
Then you can just look that up in a 2D array tiles[tileX, tileY]. For drawing, this can be used to figure out which tile is in the upper left corner of the screen, then either do the same again for the bottom right (+1), or add tiles to the upper left to fill the screen. Then your loop will look more like:
leftmostTile = screenX / tileWidth; // screenX is the left edge of the screen in world coords
topmostTile = screenY / tileHeight;
rightmostTile = (screenX + screenWidth) / tileWidth;
bottommostTile = (screenY + screenHeight) / tileHeight;
for (int tileX = leftmostTile; tileX <= rightmostTile; tileX++)
{
    for (int tileY = topmostTile; tileY <= bottommostTile; tileY++)
    {
        Tile t = tiles[tileX, tileY];
        // ... more stuff
    }
}
The same simple formula can be used to quickly figure out which tile(s) are under rectangular areas.
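For example, something like this sketch would collect every tile overlapped by a rectangle (the player's bounds or the camera viewport), assuming uniform tile sizes and the Tile[,] array from above:

// Sketch: find every tile a world-space rectangle overlaps.
List<Tile> GetOverlappingTiles(Rectangle area, Tile[,] tiles, int tileWidth, int tileHeight)
{
    var result = new List<Tile>();
    int left   = Math.Max(area.Left / tileWidth, 0);
    int top    = Math.Max(area.Top / tileHeight, 0);
    int right  = Math.Min((area.Right - 1) / tileWidth, tiles.GetLength(0) - 1);
    int bottom = Math.Min((area.Bottom - 1) / tileHeight, tiles.GetLength(1) - 1);
    for (int x = left; x <= right; x++)
    {
        for (int y = top; y <= bottom; y++)
        {
            result.Add(tiles[x, y]);
        }
    }
    return result;
}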
If, however, your tiles are non-uniform, or you have an isometric view, or you want the additional functionality a QuadTree provides, I would consider Jon's answer and make use of a QuadTree. I would try to keep tiles out of the QuadTree if you can, though.
I'm having an issue with creating a histogram representation of an image in a WinRT app. What I'd like to make consists of four histogram plots for Red, Green, Blue, Luminosity for an image.
My main issue is how to actually draw a picture of that histogram so I can show it on the screen. My code so far is pretty messy. I've searched a lot on this topic; most of my results consisted of code in Java, which I'm trying to translate into C#, but the API is pretty different. I had an attempt based on AForge as well, but that's WinForms.
Here's my messy code; I know it looks bad, but I'm striving to make this work first:
public static WriteableBitmap CreateHistogramRepresentation(long[] histogramData, HistogramType type)
{
//I'm trying to determine a max height of a histogram bar, so
//I could determine a max height of the image that then I'll remake it
//at a lower resolution :
var max = histogramData[0];
//Determine the max value, the highest bar in the histogram, the initial height of the image.
for (int i = 0; i < histogramData.Length; i++)
{
if (histogramData[i] > max)
max = histogramData[i];
}
var bitmap = new WriteableBitmap(256, 500);
//Set a color to draw with according to the type of the histogram :
var color = Colors.White;
switch (type)
{
case HistogramType.Blue :
{
color = Colors.RoyalBlue;
break;
}
case HistogramType.Green:
{
color = Colors.OliveDrab;
break;
}
case HistogramType.Red:
{
color = Colors.Firebrick;
break;
}
case HistogramType.Luminosity:
{
color = Colors.DarkSlateGray;
break;
}
}
//Compute a scaler to scale the bars to the actual image dimensions :
var scaler = 1;
while (max/scaler > 500)
{
scaler++;
}
var stream = bitmap.PixelBuffer.AsStream();
var streamBuffer = new byte[stream.Length];
//Make a white image initially :
for (var i = 0; i < streamBuffer.Length; i++)
{
streamBuffer[i] = 255;
}
//Color the image :
for (var i = 0; i < 256; i++) // i = column
{
for (var j = 0; j < histogramData[i] / scaler; j++) // j = line
{
streamBuffer[j*256*4 + i*4] = color.B; //the image has a 256-pixel width, 4 bytes (BGRA) per pixel
streamBuffer[j*256*4 + i*4 + 1] = color.G;
streamBuffer[j*256*4 + i*4 + 2] = color.R;
streamBuffer[j*256*4 + i*4 + 3] = color.A;
}
}
//Write the Pixel Data into the Pixel Buffer of the future Histogram image :
stream.Seek(0, 0);
stream.Write(streamBuffer, 0, streamBuffer.Length);
return bitmap.Flip(WriteableBitmapExtensions.FlipMode.Horizontal);
}
This creates a pretty bad histogram representation; it doesn't even colour it with the corresponding colour. It's not working properly, and I'm working on fixing it.
Any link you can contribute, any code you might know for a histogram representation in WinRT apps, or anything else is greatly appreciated.
While you could use a charting control, as JP Alioto pointed out, histograms tend to represent a lot of data. In your sample alone you're rendering 256 bars * 4 channels (R, G, B, L). The problem with charting controls is that they usually like to be handed collections (or arrays) of hydrated data, which they draw and tend to keep in memory. A histogram like yours would need 1024 objects (256 * 4) in memory, passed to the chart as a whole. It's just not a good use of memory.
The alternative of course is to draw it yourself. But as you've found, pixel-by-pixel drawing can be a bit of a pain. The best answer - in my opinion - is to agree with Shahar and recommend you use WriteableBitmapEx on CodePlex.
http://writeablebitmapex.codeplex.com
WriteableBitmapEx includes methods for drawing shapes like lines and rectangles that are very very fast. You can draw the data as you enumerate it (instead of having to have it all in memory at one time) and the result is a nice compact image that is already "bitmap cached" (meaning it renders very fast since it doesn't have to redrawn on each frame).
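For example, a rough sketch of one histogram channel drawn with WriteableBitmapEx (Clear and FillRectangle are the library's extension methods; the 256 x 500 size mirrors the question, and System.Linq is used for Max):

// Sketch: draw one histogram channel as vertical bars with WriteableBitmapEx.
public static WriteableBitmap DrawHistogram(long[] histogramData, Color barColor)
{
    const int height = 500;
    var bitmap = new WriteableBitmap(256, height);
    bitmap.Clear(Colors.White);
    long max = histogramData.Max();
    double scale = max > 0 ? (double)height / max : 0;
    for (int i = 0; i < 256; i++)
    {
        int barHeight = (int)(histogramData[i] * scale);
        if (barHeight > 0)
        {
            // bars grow upward from the bottom edge
            bitmap.FillRectangle(i, height - barHeight, i + 1, height, barColor);
        }
    }
    return bitmap;
}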