I'm trying to build a full sprite from two parts: one sprite with the head and another with the body.
I assign the two textures in the Inspector and create another one through code, which is the one I actually want.
What I do is get the pixels I want for the body and set them; no problem there. The problem comes when I want to place the head: it's a 128x128 texture and I don't use all of it, so the copy picks up its transparent pixels too and overwrites the body's pixels.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class MSSGameHandler : MonoBehaviour
{
[SerializeField] private Texture2D baseTexture;
[SerializeField] private Texture2D headTexture;
[SerializeField] private Texture2D bodyTexture;
[SerializeField] private Material guestMaterial;
private Sprite mysprite;
private void Awake()
{
Texture2D texture = new Texture2D(512, 512, TextureFormat.RGBA32, true);
Color[] spriteSheetBasePixels = baseTexture.GetPixels(0, 0, 512, 512);
texture.SetPixels(0, 0, 512, 512, spriteSheetBasePixels);
Color[] bodyPixels = bodyTexture.GetPixels(0, 0, 128, 128);
texture.SetPixels(0, 256, 128, 128, bodyPixels);
Color[] headPixels = headTexture.GetPixels(0, 0, 128, 128);
texture.SetPixels(0, 294, 128, 128, headPixels);
texture.Apply();
guestMaterial.mainTexture = texture;
//mysprite = Sprite.Create(texture)
}
}
Well, SetPixels overwrites all existing pixels. You should instead loop over them "manually" and check the alpha value:
var finalpixels = new Color[512 * 512];
var spriteSheetBasePixels = baseTexture.GetPixels(0, 0, 512, 512);
for(var i = 0; i < finalpixels.Length; i++)
{
finalpixels[i] = spriteSheetBasePixels[i];
}
var bodyPixels = bodyTexture.GetPixels(0, 0, 128, 128);
for(var x = 0; x < 128; x++)
{
for(var y = 256; y < 256 + 128; y++)
{
finalpixels[x + y * 512] = bodyPixels[x + (y - 256) * 128];
}
}
var headPixels = headTexture.GetPixels(0, 0, 128, 128);
for(var x = 0; x < 128; x++)
{
for(var y = 294; y < 294 + 128; y++)
{
var pixel = headPixels[x + (y - 294) * 128];
if(Mathf.Approximately(pixel.a, 0)) continue;
finalpixels[x + y * 512] = pixel;
}
}
texture.SetPixels(finalpixels);
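The alpha check in the loop above is just a conditional copy. As an illustrative sketch (plain Python tuples standing in for Unity's Color values; names are hypothetical, not Unity API), the merge logic is:

```python
def blit_with_alpha(dest, dest_w, src, src_w, src_h, ox, oy):
    """Copy src onto dest at offset (ox, oy), skipping fully transparent pixels.

    dest and src are flat lists of (r, g, b, a) tuples, laid out row by row
    like the arrays returned by Unity's GetPixels.
    """
    for y in range(src_h):
        for x in range(src_w):
            pixel = src[x + y * src_w]
            if pixel[3] == 0:  # fully transparent: keep the destination pixel
                continue
            dest[(ox + x) + (oy + y) * dest_w] = pixel
    return dest

# 4x4 destination filled with opaque white, 2x2 source with one transparent pixel
dest = [(255, 255, 255, 255)] * 16
src = [(255, 0, 0, 255), (0, 0, 0, 0),
       (0, 255, 0, 255), (0, 0, 255, 255)]
blit_with_alpha(dest, 4, src, 2, 2, 1, 1)
```

The transparent source pixel leaves the destination untouched, which is exactly what the `Mathf.Approximately(pixel.a, 0)` check achieves in the C# loop.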
Body image
Head image
Result (in a RawImage)
Sorry for my painting skills :D
Merging these two textures together adds unnecessary loading time at the start of your application for very little per-frame benefit. And, if you decide to move the relative positions of the head and body (e.g., as part of an idle animation), you have to take a performance hit to re-create the new texture showing the new relative positions.
So, instead, put the head and the body into separate objects then use a Sorting Group component to keep them sorted together.
Example character:
Before sorting group
Apply sorting group to root object (ChombiBoy, in this example):
After sorting group (the character rendered on top has a higher Order in Layer in its Sorting Group):
(images taken from Unity documentation)
Sorting a Sorting Group
Unity uses the concept of sorting layers to allow you to divide sprites into groups for overlay priority. Sorting Groups with a Sorting Layer lower in the order are overlaid by those in a higher Sorting Layer.
Sometimes, two or more objects in the same Sorting Layer can overlap (for example, two player characters in a side scrolling game, as shown in the example below). The Order in Layer property can be used to apply consistent priorities to Sorting Groups in the same layer. As with Sorting Layer, lower numbers are rendered first, and are obscured by Sorting Groups with higher layer numbers, which are rendered later. See the documentation on Tags and Layers for details on editing Sorting Layers.
The descendants of a Sorting Group are sorted against the descendants of the closest or next Sorting Group (depending on whether sorting is by distance or Order in Layer). In other words, the Sorting Group creates a local sorting space for its descendants only. This allows each of the Renderers inside the group to be sorted using the Sorting Layer and Order in Layer, but locally to the containing Sorting Group.
Related
I am trying to take a grayscale bitmap, extract a single line from it, and then graph the gray values. I got something to work, but I'm not really happy with it; it just seems slow and tedious. I am sure someone has a better idea.
WriteableBitmap someImg; //camera image
int imgWidth = someImg.PixelWidth;
int imgHeight = someImg.PixelHeight;
Int32Rect rectLine = new Int32Rect(0, imgHeight / 2, imgWidth, 1); //horizontal line half way down the image as a rectangle with height 1
//calculate stride and buffer size
int imgStride = (imgWidth * someImg.Format.BitsPerPixel + 7) / 8; // not sure I understand this part
byte[] buffer = new byte[imgStride * rectLine.Height];
//copy pixels to buffer
someImg.CopyPixels(rectLine, buffer, imgStride, 0);
const int xGraphHeight = 256;
WriteableBitmap xgraph = new WriteableBitmap(imgWidth, xGraphHeight, someImg.DpiX, someImg.DpiY, PixelFormats.Gray8, null);
//loop through pixels
for (int i = 0; i < imgWidth; i++)
{
Int32Rect dot = new Int32Rect(i, buffer[i], 1, 1); //1x1 rectangle
byte[] WhiteDotByte = { 255 }; //white
xgraph.WritePixels(dot, WhiteDotByte, imgStride, 0);//write pixel
}
You can see the image and the plot below the green line. I guess I am having some WPF issues that make it look funny but that's a problem for another post.
I assume the goal is to create a plot of the pixel value intensities of the selected line.
The first approach to consider is to use an actual plotting library. I have used OxyPlot; it works fine but is lacking in some aspects. Unless you have specific performance requirements, this will likely be the most flexible approach to take.
If you actually want to render to an image you might be better off using unsafe code to access the pixel values directly. For example:
xgraph.Lock();
try
{
    unsafe
    {
        for (int y = 0; y < xGraphHeight; y++)
        {
            var rowPtr = (byte*)(xgraph.BackBuffer + y * xgraph.BackBufferStride);
            for (int x = 0; x < imgWidth; x++)
                rowPtr[x] = (byte)(y < buffer[x] ? 0 : 255);
        }
        xgraph.AddDirtyRect(new Int32Rect(0, 0, imgWidth, xGraphHeight));
    }
}
finally { xgraph.Unlock(); }
This should be faster than writing 1x1 rectangles. It also writes whole columns instead of single pixels, which should help make the graph more visible. You might also consider allowing an arbitrary image height and scaling the comparison value accordingly.
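As a hedged illustration of the column idea (plain Python lists standing in for the bitmap, not WPF code): each output pixel is black while its row index is less than the column's value, counting from the top row, and white below, mirroring the `y < buffer[x]` test above.

```python
def render_graph(values, height):
    """Build a height x len(values) grayscale image as a list of rows.

    Mirrors the unsafe loop above: pixel (x, y) is black (0) while
    y < values[x], and white (255) otherwise, so each column encodes one value.
    """
    return [[0 if y < values[x] else 255 for x in range(len(values))]
            for y in range(height)]

img = render_graph([0, 2, 4], 4)
```

Because every column is filled from the top down to its value, the result reads as a solid curve rather than a scatter of isolated dots.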
If you want to plot the pixel values along an arbitrary line, and not just a horizontal one, you can take equidistant samples along the line and use bilinear interpolation to sample the image.
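A sketch of that sampling approach (illustrative Python with nested lists as the grayscale image; the function names are mine, not a library API):

```python
def bilinear_sample(img, x, y):
    """Sample a 2D grayscale image (list of rows) at a fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    # interpolate horizontally on the two bracketing rows, then vertically
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def sample_line(img, p0, p1, n):
    """Take n equidistant bilinear samples from p0 to p1 (inclusive)."""
    (x0, y0), (x1, y1) = p0, p1
    return [bilinear_sample(img,
                            x0 + (x1 - x0) * i / (n - 1),
                            y0 + (y1 - y0) * i / (n - 1))
            for i in range(n)]

img = [[0, 100],
       [100, 200]]
samples = sample_line(img, (0, 0), (1, 1), 3)
```

The midpoint sample falls between all four pixels and averages them, which is what makes diagonal lines come out smooth instead of stair-stepped.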
I need to graph rectangles of different heights and widths in a C# application. The rectangles may or may not overlap.
I thought the System.Windows.Forms.DataVisualization.Charting would have what I need, but every chart type I've explored wants data points composed of a single value in one dimension and multiple values in the other.
I've considered: Box, Bubble, and Range Bar.
It turns out that Richard Eriksson has the closest answer in that the Charting package doesn't contain what I needed. The solution I'm moving forward with is to use a Point chart to manage axes and whatnot, but overload the PostPaint event to effectively draw the rectangles I need on top. The Chart provides value-to-pixel (and vice versa) conversions.
Here is a minimal example that throws 100 squares of different colors and sizes randomly onto one Chart of ChartType Point with custom Marker Images.
You can modify to de-couple the datapoints from the colors, allow for any sizes or shapes etc..:
int count = 100;
int mSize = 60; // marker size
List<Color> colors = new List<Color>(); // a color list
for (int i = 0; i < count; i++)
colors.Add(Color.FromArgb(255, 255 - i * 2, (i*i) %256, i*2));
Random R = new Random(99);
for (int i = 0; i < count; i++) // create and store the marker images
{
int w = 10 + R.Next(50); // inner width of visible marker
int off = (mSize - w) / 2;
Bitmap bmp = new Bitmap(mSize, mSize);
using (Graphics G = Graphics.FromImage(bmp))
{
G.Clear(Color.Transparent);
G.FillRectangle(new SolidBrush(colors[i]), off, off, w, w);
chart5.Images.Add(new NamedImage("NI" + i, bmp));
}
}
for (int i = 0; i < count; i++) // now add a few points to random locations
{
int p = chart5.Series["S1"].Points.AddXY(R.Next(100), R.Next(100));
chart5.Series["S1"].Points[p].MarkerImage = "NI" + p;
}
Note that this is really just a quick one; in the Link to the original answer about a heat map I show how to resize the Markers along with the Chart. Here they will always stay the same size..:
I have lowered the Alpha of the colors for this image from 255 to 155, btw.
The sizes also stay fixed when zooming in on the Chart; see how nicely they drift apart, so you can see the space between them:
This may or may not be what you want, of course..
Note that I had disabled both Axes in the first images for nicer looks. For zooming I have turned them back on so I get the simple reset button..
Also note that posting the screenshots here introduces some level of resizing, which doesn't come from the chart!
How would I go about generating the 2D coordinates for an area of an image? For example, if one of the countries on this map were singled out and were the only one visible, on a canvas the same size, how would I get the 2D coordinates for it?
I then want to create hover/click areas based on these coordinates using C#, and I'm unable to find a tool that can detect, for example, a shape within a blank canvas and output its outline coordinates.
I mainly believe this to be a phrasing/terminology issue on my part, as I feel this whole process is already a "thing", and well documented.
There are many ways to achieve your task; here are a few:
Look at Generating Polygons from Image (Filled Shapes), which is almost a duplicate of yours but has a slightly different starting point.
In a nutshell:
extract all non white pixels which are neighboring white pixel
Just loop through the whole image (except the outer border pixels); if the processed pixel is not white, look at its 4/8 neighbors. If any of them is a different color, add the processed pixel's color and coordinates to a list.
sort the point list by color
This will separate countries
apply closed loop / connectivity analysis
This is vectorisation/polygonize process. Just join not yet used neighboring pixels from list to form lines ...
There is also A* alternative for this that might be easier to implement:
extract all non white pixels which are neighboring white pixel
Just loop through the whole image (except the outer border pixels); if the processed pixel is not white, look at its 4/8 neighbors. If none of them is a different color, clear the current pixel with some unused color (black).
recolor all white and the clear color to single color (black).
from this the recolor color will mean wall
Apply A* path finding
Find the first non-wall pixel and apply A*-like growth filling. When you are done filling, trace back, remembering the order of points in a list as a polygon. Optionally join straight-line pixels into single lines ...
Another option is adapt this Finding holes in 2d point sets
[notes]
If your image is filtered (antialiasing, scaling, etc.) then you need to do the color comparisons with some margin for error, and maybe even convert to HSV (depending on the level of color distortion).
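One detail to watch if you do move to HSV: hue is an angle, so a margin test needs to wrap around 360° (the distance between 350° and 10° is 20°, not 340°). A small illustrative sketch:

```python
def hue_distance(h1, h2):
    """Angular distance between two hues in degrees, in the range 0..180."""
    d = abs(h1 - h2) % 360.0
    return 360.0 - d if d > 180.0 else d

def similar_hue(h1, h2, margin):
    """True when the two hues are within `margin` degrees of each other."""
    return hue_distance(h1, h2) <= margin
```

A plain `abs(h1 - h2) <= margin` test would wrongly treat two near-red hues on opposite sides of 0° as completely different colors.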
You can use OpenCV's findContours() function. See documentation here: http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html.
I think you're going at this the wrong way. Outlines of continents are madness; they are often made up of several parts with lots of small islands. And, you don't need the coordinates of the continents on the image; looking up if your current coordinates are in a list would take far too long. Instead, you should do the opposite: make an index table of the whole image, on which is indicated for each pixel which continent it belongs to.
And that's much, much easier.
Since you obviously have to assign a colour to each continent to identify them, you can go over all of the image's pixels, match each pixel's colour to the closest match in the colours of your continents, and fill each byte in the array with the corresponding found continent index. This way, you get a byte array that directly references your continents array. Effectively, this means you create an indexed 8-bit image, just as a plain bytes array. (There are methods to actually combine this with the colours array and get an image you can use, mind you. It's not too hard.)
For the actual colour matching, the best practice is to use LockBits on the source image to get direct access to the underlying bytes array. In the code below, the call to GetImageData gets me the bytes and the data stride. Then you can iterate over the bytes per line, and build a colour from each block of data that represents one pixel. If you don't want to bother too much with supporting different pixel sizes (like 24bpp), a quick trick is to just paint the source image on a new 32bpp image of the same dimensions (the call to PaintOn32bpp), so you can always simply iterate per four bytes and take the byte values in the order 3,2,1,0 for ARGB. I ignored transparency here because it just complicates the concept of what is and isn't a colour.
private void InitContinents(Bitmap map, Int32 nearPixelLimit)
{
// Build hues map from colour palette. Since detection is done
// by hue value, any grey or white values on the image will be ignored.
// This does mean the process only works with actual colours.
// In this function it is assumed that index 0 in the palette is the white background.
Double[] hueMap = new Double[this.continentsPal.Length];
for (Int32 i = 0; i < this.continentsPal.Length; i++)
{
Color col = this.continentsPal[i];
if (col.GetSaturation() < .25)
hueMap[i] = -2;
else
hueMap[i] = col.GetHue();
}
Int32 w = map.Width;
Int32 h = map.Height;
Bitmap newMap = ImageUtils.PaintOn32bpp(map, continentsPal[0]);
// BUILD REDUCED COLOR MAP
Byte[] guideMap = new Byte[w * h];
Int32 stride;
Byte[] imageData = ImageUtils.GetImageData(newMap, out stride);
for (Int32 y = 0; y < h; y++)
{
Int32 sourceOffs = y * stride;
Int32 targetOffs = y * w;
for (Int32 x = 0; x < w; x++)
{
Color c = Color.FromArgb(255, imageData[sourceOffs + 2], imageData[sourceOffs + 1], imageData[sourceOffs + 0]);
Double hue;
// Detecting on hue. Values with < 25% saturation are ignored.
if (c.GetSaturation() < .25)
hue = -2;
else
hue = c.GetHue();
// Get the closest match
Double smallestHueDiff = Int32.MaxValue;
Int32 smallestHueIndex = -1;
for (Int32 i = 0; i < hueMap.Length; i++)
{
Double hueDiff = Math.Abs(hueMap[i] - hue);
if (hueDiff < smallestHueDiff)
{
smallestHueDiff = hueDiff;
smallestHueIndex = i;
}
}
guideMap[targetOffs] = (Byte)(smallestHueIndex < 0 ? 0 : smallestHueIndex);
// Increase read pointer with 4 bytes for next pixel
sourceOffs += 4;
// Increase write pointer with 1 byte for next index
targetOffs++;
}
}
// Remove random edge pixels, and save in global var.
this.continentGuide = RefineMap(guideMap, w, h, nearPixelLimit);
// Build image from the guide map.
this.overlay = ImageUtils.BuildImage(this.continentGuide, w, h, w, PixelFormat.Format8bppIndexed, this.continentsPal, null);
}
The GetImageData function:
/// <summary>
/// Gets the raw bytes from an image.
/// </summary>
/// <param name="sourceImage">The image to get the bytes from.</param>
/// <param name="stride">Stride of the retrieved image data.</param>
/// <returns>The raw bytes of the image</returns>
public static Byte[] GetImageData(Bitmap sourceImage, out Int32 stride)
{
BitmapData sourceData = sourceImage.LockBits(new Rectangle(0, 0, sourceImage.Width, sourceImage.Height), ImageLockMode.ReadOnly, sourceImage.PixelFormat);
stride = sourceData.Stride;
Byte[] data = new Byte[stride * sourceImage.Height];
Marshal.Copy(sourceData.Scan0, data, 0, data.Length);
sourceImage.UnlockBits(sourceData);
return data;
}
Now, back to the process; once you have that reference table, all you need are the coordinates of your mouse and you can check the reference map at index (Y*Width + X) to see what area you're in. To do that, you can add a MouseMove listener on an ImageBox, like this:
private void picImage_MouseMove(object sender, MouseEventArgs e)
{
Int32 x = e.X - picImage.Padding.Left;
Int32 y = e.Y - picImage.Padding.Top;
Int32 coord = y * this.picWidth + x;
if (x < 0 || x >= this.picWidth || y < 0 || y >= this.picHeight || coord >= this.continentGuide.Length)
return;
Int32 continent = this.continentGuide[coord];
if (continent == previousContinent)
return;
previousContinent = continent;
if (continent >= this.continents.Length)
return;
this.lblContinent.Text = this.continents[continent];
this.picImage.Image = GetHighlightPic(continent);
}
Note that a simple generated map produced by nearest colour matching may have errors; when I did automatic mapping of this world map's colours, the border between blue and red, and some small islands in Central America, ended up identifying as Antarctica's purple colour, and some other rogue pixels appeared around the edges of different continents too.
This can be avoided by clearing (I used 0 as default "none") all indices not bordered by the same index at the top, bottom, left and right. This removes some smaller islands, and creates a slight gap between any neighbouring continents, but for mouse coordinates detection it'll still very nicely match the areas. This is the RefineMap call in my InitContinents function. The argument it gets determines how many identical neighbouring values an index needs to allow it to survive the pruning.
A similar technique of checking neighbouring pixels can be used to get outlines, by making a map of the pixels not surrounded on all sides by the same value.
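Both of these neighbour checks boil down to the same loop. A hypothetical Python sketch of the pruning variant (not the actual RefineMap; this is the simplest all-four-neighbours rule): clear any cell whose four orthogonal neighbours don't all share its index.

```python
def prune_edges(grid, w, h, default=0):
    """Clear cells whose four orthogonal neighbours don't all share the
    cell's index; border cells are cleared too. Returns a new flat list."""
    out = [default] * (w * h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = grid[x + y * w]
            if (grid[x + (y - 1) * w] == v and grid[x + (y + 1) * w] == v
                    and grid[(x - 1) + y * w] == v and grid[(x + 1) + y * w] == v):
                out[x + y * w] = v
    return out

grid = [1, 1, 1, 1,
        1, 1, 1, 1,
        1, 1, 2, 1,
        1, 1, 1, 1]
pruned = prune_edges(grid, 4, 4)
```

The lone rogue `2` disappears, and so do the `1` cells touching it, which is what produces the slight gap between neighbouring regions mentioned above.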
I'm working on a project to show a 2D world generation process in steps using bitmap images.
Array data is stored in this way:
Main.tile[i, j].type = x;
With x being an integer value >= 0.
Basically, i and j are changed every time the program loops using for-loops, and the statement above is run at the end of the loop once certain conditions are met.
So, a possible sequence could be:
Main.tile[4, 67].type = 1;
Main.tile[4, 68].type = 1;
Main.tile[4, 69].type = 0;
And so on.
I tried several methods of directly modifying the bitmap image once the array was changed/updated (using Bitmap.SetPixel), but this seemed way too slow to be useful for a 21k x 8k pixel resolution bitmap.
I'm looking for a way to digest the whole array at the end of the whole looping process (not after each individual loop, but between steps), and put colored points (depending on the value of the array) accordingly to i, j (as if it were a coordinate system).
Are there any faster alternatives to SetPixel, or are there easier ways to save an array to a bitmap/image file?
Change your array to a one-dimensional array, apply all operations on the one-dimensional array, and only when you want to display the image convert it back to two dimensions.
How to copy a whole 2D array into a 1D array:
byte[,] imageData = new byte[2, 2]
{
    { 1, 2 },
    { 3, 4 }
};
var mergedData = new byte[imageData.Length];
Buffer.BlockCopy(imageData, 0, mergedData, 0, imageData.Length);
// mergedData is now { 1, 2, 3, 4 }
From 2D to 1D:
// depending on whether you read from left to right or top to bottom.
index = x + (y * width)
index = y + (x * height)
From 1D to 2D:
x = index % width
y = index / width
or
x = index / height
y = index % height
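The index formulas are easy to sanity-check; a quick Python round-trip (illustrative, using the row-major left-to-right convention; `//` is integer division):

```python
def to_index(x, y, width):
    """Flatten (x, y) to a 1D index, reading left to right, top to bottom."""
    return x + y * width

def to_xy(index, width):
    """Recover (x, y) from a 1D index via remainder and integer division."""
    return index % width, index // width

# the asker's bitmap dimensions: 21k x 8k
last = to_index(20999, 7999, 21000)
```

The round-trip holds for every pixel, which is what lets you work entirely in the flat array and only reshape for display.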
I hope this will solve your problem!
I added several cubes to a Viewport3D in WPF and now I want to manipulate groups of them with the mouse.
When I click & drag over one and a half of those cubes I want the whole plane rotated in the direction that the drag was made; the rotation will be handled by RotateTransform3D so it won't be a problem.
The problem is that I don't know how I should handle the drag, more exactly:
How can I know which faces of the cubes were dragged over in order to determine what plane to rotate?
For example in the case below I'd like to know that I need to rotate the right plane of cubes with 90 degrees clockwise so the row of blue faces will be at the top instead of the white ones which will be in the back.
And in this example the top layer should be rotated 90 degrees counterclockwise:
Currently my idea is to place some sort of invisible areas over the cube, to check in which one the drag is happening with VisualTreeHelper.HitTest and then to determine which plane I should rotate, this area will match the first drag example:
But when I add all four regions then I'm back to square one because I still need to determine the direction and which face to rotate according to which areas were "touched".
I'm open to ideas.
Please note that this cube can be freely moved, so it may not be in the initial position when the user clicks and drags, this is what bothers me the most.
PS:
The drag will be implemented with a combination of MouseLeftButtonDown, MouseMove and MouseLeftButtonUp.
MouseEvents
You'll need to use VisualTreeHelper.HitTest() to pick Visual3D objects (process may be simpler if each face is a separate ModelVisual3D). Here is some help on the HitTesting in general, and here is a very useful tidbit that simplifies the picking process tremendously.
Event Culling
Let's say that you now have two ModelVisual3D objects from your picking tests (one from the MouseDown event, one from the MouseUp event). First, we should detect if they are coplanar (to avoid picks going from one face to another). One way to do this is to compare the face Normals to see if they are pointing the same direction. If you have defined the Normals in your MeshGeometry3D, that's great. If not, then we can still find it. I'd suggest adding a static class for extensions. An example of calculating a normal:
public static class GeometricExtensions3D
{
public static Vector3D FaceNormal(this MeshGeometry3D geo)
{
// get first triangle's positions
var ptA = geo.Positions[geo.TriangleIndices[0]];
var ptB = geo.Positions[geo.TriangleIndices[1]];
var ptC = geo.Positions[geo.TriangleIndices[2]];
// get specific vectors for right-hand normalization
var vecAB = ptB - ptA;
var vecBC = ptC - ptB;
// normal is cross product
var normal = Vector3D.CrossProduct(vecAB, vecBC);
// unit vector for cleanliness
normal.Normalize();
return normal;
}
}
Using this, you can compare the normals of the MeshGeometry3D from your Visual3D hits (lots of casting involved here) and see if they are pointing in the same direction. I would use a tolerance test on the X,Y,Z of the vectors as opposed to a straight equivalence, just for safety's sake. Another extension might be helpful:
public static double SSDifference(this Vector3D vectorA, Vector3D vectorB)
{
// set vectors to length = 1
vectorA.Normalize();
vectorB.Normalize();
// subtract to get difference vector
var diff = Vector3D.Subtract(vectorA, vectorB);
// sum of the squares of the difference (also happens to be difference vector squared)
return diff.LengthSquared;
}
If they are not coplanar (SSDifference > some arbitrary test value), you can return here (or give some kind of feedback).
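As a sanity check of that tolerance test, here is a hedged Python sketch of the same SSDifference logic (plain tuples standing in for Vector3D):

```python
import math

def normalize(v):
    """Scale a 3-tuple to unit length."""
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def ss_difference(a, b):
    """Sum of squared differences of the two unit vectors:
    ~0 for parallel normals, 2 for perpendicular, 4 for opposite."""
    a, b = normalize(a), normalize(b)
    return sum((x - y) ** 2 for x, y in zip(a, b))

same = ss_difference((0, 0, 1), (0, 0, 5))   # parallel normals
perp = ss_difference((0, 0, 1), (1, 0, 0))   # perpendicular faces
```

The useful property is that the value only depends on the angle between the vectors, so one scalar threshold covers all three axes at once.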
Object Selection
Now that we have determined our two faces and that they are, indeed, ripe for our desired event-handling, we must deduce a way to bang out the information from what we have. You should still have the Normals you calculated before. We're going to be using them again to pick the rest of the faces to be rotated. Another extension method can be helpful for the comparison to determine if a face should be included in the rotation:
public static bool SharedColumn(this MeshGeometry3D basis, MeshGeometry3D compareTo, Vector3D normal)
{
foreach (Point3D basePt in basis.Positions)
{
foreach (Point3D compPt in compareTo.Positions)
{
var compToBasis = basePt - compPt; // vector from compare point to basis point
if (normal.SSDifference(compToBasis) < float.Epsilon) // at least one will be same direction as
{ // as normal if they are shared in a column
return true;
}
}
}
return false;
}
You'll need to cull faces for both of your meshes (MouseDown and MouseUp), iterating over all of the faces. Save the list of Geometries that need to be rotated.
RotateTransform
Now the tricky part. An Axis-Angle rotation takes two parameters: a Vector3D representing the axis normal to the rotation (using right-hand rule) and the angle of rotation. But the midpoint of our cube may not be at (0, 0, 0), so rotations can be tricky. Ergo, first we must find the midpoint of the cube! The simplest way I can think of is to add the X, Y, and Z components of every point in the cube and then divide them by the number of points. The trick, of course, will be not to add the same point more than once! How you do that will depend on how your data is organized, but I'll assume it to be a (relatively) trivial exercise. Instead of applying transforms, you'll want to move the points themselves, so instead of creating and adding to a TransformGroup, we're going to build Matrices! A translate matrix looks like:
1, 0, 0, dx
0, 1, 0, dy
0, 0, 1, dz
0, 0, 0, 1
So, given the midpoint of your cube, your translation matrices will be:
var cp = GetCubeCenterPoint(); // user-defined method of retrieving cube's center point
// WPF matrices use row vectors (point * matrix), so translation goes in the bottom row
var matToCenter = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
-cp.X, -cp.Y, -cp.Z, 1);
var matBackToPosition = new Matrix3D(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
cp.X, cp.Y, cp.Z, 1);
Which just leaves our rotation. Do you still have reference to the two meshes we picked from the MouseEvents? Good! Let's define another extension:
public static Point3D CenterPoint(this MeshGeometry3D geo)
{
var midPt = new Point3D(0, 0, 0);
var n = geo.Positions.Count;
foreach (Point3D pt in geo.Positions)
{
midPt.Offset(pt.X, pt.Y, pt.Z);
}
midPt.X /= n; midPt.Y /= n; midPt.Z /= n;
return midPt;
}
Get the vector from the MouseDown's mesh to the MouseUp's mesh (the order is important).
var swipeVector = MouseUpMesh.CenterPoint() - MouseDownMesh.CenterPoint();
And you still have the normal for our hit faces, right? We can (basically magically) get the rotation axis by:
var rotationAxis = Vector3D.CrossProduct(swipeVector, faceNormal);
Which will make your rotation angle always +90°. Make the RotationMatrix (source):
rotationAxis.Normalize();
var cosT = Math.Cos(Math.PI/2);
var sinT = Math.Sin(Math.PI/2);
var x = rotationAxis.X;
var y = rotationAxis.Y;
var z = rotationAxis.Z;
// build the matrix; remember the row-vector layout (transpose of the usual column-vector form)
var matRotate = new Matrix3D(
cosT + x*x*(1 -cosT), y*x*(1 -cosT) + z*sinT, z*x*(1 -cosT) -y*sinT, 0,
x*y*(1 -cosT) -z*sinT, cosT + y*y*(1 -cosT), z*y*(1 -cosT) + x*sinT, 0,
x*z*(1 -cosT) + y*sinT, y*z*(1 -cosT) -x*sinT, cosT + z*z*(1 -cosT), 0,
0, 0, 0, 1);
Combine them to get the Transformation matrix, note that the order is important. We want to take the point, transform it to coordinates relative to the origin, rotate it, then transform it back to original coordinates, in that order. So:
var matTrans = Matrix3D.Multiply(Matrix3D.Multiply(matToCenter, matRotate), matBackToPosition);
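As a numeric sanity check of the translate-rotate-translate composition, here is a hedged Python sketch using the equivalent Rodrigues rotation formula instead of explicit 4x4 matrices (names are illustrative): rotating a point 90° about an axis through the cube's center.

```python
import math

def rotate_about(point, center, axis, angle):
    """Rotate point by angle (radians) around an axis through center:
    translate to the origin, apply the Rodrigues formula, translate back."""
    # normalize the rotation axis
    l = math.sqrt(sum(c * c for c in axis))
    kx, ky, kz = (c / l for c in axis)
    # translate so the rotation axis passes through the origin
    px, py, pz = (p - c for p, c in zip(point, center))
    c, s = math.cos(angle), math.sin(angle)
    # Rodrigues: v' = v cosT + (k x v) sinT + k (k . v)(1 - cosT)
    dot = kx * px + ky * py + kz * pz
    crossx = ky * pz - kz * py
    crossy = kz * px - kx * pz
    crossz = kx * py - ky * px
    rx = px * c + crossx * s + kx * dot * (1 - c)
    ry = py * c + crossy * s + ky * dot * (1 - c)
    rz = pz * c + crossz * s + kz * dot * (1 - c)
    # translate back to the original coordinate frame
    return (rx + center[0], ry + center[1], rz + center[2])

# 90 degrees about the z-axis through (1, 1, 0): (2, 1, 0) should map to (1, 2, 0)
result = rotate_about((2, 1, 0), (1, 1, 0), (0, 0, 1), math.pi / 2)
```

The same three steps (to center, rotate, back) are exactly what the matTrans product encodes; doing them in the wrong order rotates about the world origin instead of the cube's midpoint.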
Then, you're ready to move the points. Iterate through each Point3D in each MeshGeometry3D that you previously tagged for rotation, and do:
foreach (MeshGeometry3D geo in taggedGeometries)
{
for (int i = 0; i < geo.Positions.Count; i++)
{
geo.Positions[i] *= matTrans;
}
}
And then... oh wait, we're done!