No image texture on StaticModel - C#

I have a very simple scene (camera and light set up as usual). I want to apply a simple image material to the model. If I use one of the convenience models like Sphere, the object renders with the material all right:
var model = modelNode.CreateComponent<Sphere>();
model.SetMaterial(Material.FromImage("Textures/small.jpg"));
If I switch to a static model (it's the default cube from Blender), nothing renders (or, more likely, it renders invisibly). With a color material it works fine, so the model itself is not in question.
var model = modelNode.CreateComponent<StaticModel>();
model.Model = ResourceCache.GetModel("Models/Cube.mdl", false);
//model.SetMaterial(CoreAssets.Materials.DefaultGrey);
//model.SetMaterial(Material.FromColor(Color.Yellow));
model.SetMaterial(Material.FromImage("Textures/small.jpg"));
For reference, the rest of the scene is:
scene = new Scene();
octree = scene.CreateComponent<Octree>();
var cameraNode = scene.CreateChild();
cameraNode.Position = new Vector3(0, 0, -10);
cameraNode.SetDirection(new Vector3(0, 0, 0));
camera = cameraNode.CreateComponent<Camera>();
var lightNode = cameraNode.CreateChild();
lightNode.Position = new Vector3(5, 5, -5);
lightNode.SetDirection(new Vector3(0, 0, 0));
var light = lightNode.CreateComponent<Light>();
light.LightType = LightType.Directional;
light.Brightness = 1.5f;
light.CastShadows = true;
light.Color = Color.White;
light.Range = 10;

Well, it turned out the model didn't have all the necessary data exported from Blender: most likely the UV (texture) coordinates that an image material needs were missing...
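For anyone hitting the same thing: Material.FromImage builds a lit, textured material, so the mesh needs normals and, above all, UV coordinates; without UVs the texture has nothing to map to. One way to separate a missing-UV problem from a lighting problem is to try an unlit textured material (a minimal sketch, assuming the stock Techniques/DiffUnlit.xml asset and UrhoSharp's typed ResourceCache getters):

// Hand-built unlit material: if this also renders nothing, the mesh
// almost certainly has no UV coordinates and needs to be re-exported.
var material = new Material();
material.SetTechnique(0, ResourceCache.GetTechnique("Techniques/DiffUnlit.xml"), 0, 0);
material.SetTexture(TextureUnit.Diffuse, ResourceCache.GetTexture2D("Textures/small.jpg"));
model.SetMaterial(material);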

Related

Multiple 3D objects with C# and WPF

I'm trying to use C# and WPF to render two triangle mesh objects with different colors, and I can't quite figure out how to make it work.
If I set numObjects to 1, it will display a single red triangle as it should.
But when I set numObjects to 2, the first (red) triangle is not displayed; only the second (green) triangle is.
What am I doing wrong here?
Here's my code:
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;

namespace WpfApplication1
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        // Declare scene objects.
        Viewport3D myViewport3D = new Viewport3D();
        Model3DGroup myModel3DGroup = new Model3DGroup();
        GeometryModel3D myGeometryModel = new GeometryModel3D();
        ModelVisual3D myModelVisual3D = new ModelVisual3D();
        // Defines the camera used to view the 3D object. In order to view the 3D object,
        // the camera must be positioned and pointed such that the object is within view
        // of the camera.
        PerspectiveCamera myPCamera = new PerspectiveCamera();

        public MainWindow()
        {
            InitializeComponent();
            // Specify where in the 3D scene the camera is.
            myPCamera.Position = new Point3D(0, 0, 5);
            // Specify the direction that the camera is pointing.
            myPCamera.LookDirection = new Vector3D(0, 0, -1);
            // Define the camera's horizontal field of view in degrees.
            myPCamera.FieldOfView = 60;
            // Assign the camera to the viewport.
            myViewport3D.Camera = myPCamera;
            // Define the lights cast in the scene. Without light, the 3D object cannot
            // be seen. Note: to illuminate an object from additional directions, create
            // additional lights.
            DirectionalLight myDirectionalLight = new DirectionalLight();
            myDirectionalLight.Color = Colors.White;
            myDirectionalLight.Direction = new Vector3D(-0.61, -0.5, -0.61);
            myModel3DGroup.Children.Add(myDirectionalLight);

            int numObjects = 2;
            for (int i = 0; i < numObjects; i++)
            {
                BuildObject(i);
            }

            // Add the group of models to the ModelVisual3D.
            myModelVisual3D.Content = myModel3DGroup;
            myViewport3D.Children.Add(myModelVisual3D);
            // Apply the viewport to the page so it will be rendered.
            this.Content = myViewport3D;
        }

        private void BuildObject(int i)
        {
            // The geometry specifies the shape of the 3D plane. In this sample, a flat sheet
            // is created.
            MeshGeometry3D myMeshGeometry3D = new MeshGeometry3D();
            // Create a collection of normal vectors for the MeshGeometry3D.
            Vector3DCollection myNormalCollection = new Vector3DCollection();
            myNormalCollection.Add(new Vector3D(0, 0, 1));
            myNormalCollection.Add(new Vector3D(0, 0, 1));
            myNormalCollection.Add(new Vector3D(0, 0, 1));
            myMeshGeometry3D.Normals = myNormalCollection;

            double basex = 0 + i * 1;
            // Create a collection of vertex positions for the MeshGeometry3D.
            Point3DCollection myPositionCollection = new Point3DCollection();
            myPositionCollection.Add(new Point3D(basex + -0.5, -0.5, 0.5));
            myPositionCollection.Add(new Point3D(basex + 0.5, -0.5, 0.5));
            myPositionCollection.Add(new Point3D(basex + 0.5, 0.5, 0.5));
            myMeshGeometry3D.Positions = myPositionCollection;

            // Create a collection of texture coordinates for the MeshGeometry3D.
            PointCollection myTextureCoordinatesCollection = new PointCollection();
            myTextureCoordinatesCollection.Add(new Point(0, 0));
            myTextureCoordinatesCollection.Add(new Point(1, 0));
            myTextureCoordinatesCollection.Add(new Point(1, 1));
            myMeshGeometry3D.TextureCoordinates = myTextureCoordinatesCollection;

            // Create a collection of triangle indices for the MeshGeometry3D.
            Int32Collection myTriangleIndicesCollection = new Int32Collection();
            myTriangleIndicesCollection.Add(0);
            myTriangleIndicesCollection.Add(1);
            myTriangleIndicesCollection.Add(2);
            myMeshGeometry3D.TriangleIndices = myTriangleIndicesCollection;

            // Apply the mesh to the geometry model.
            myGeometryModel.Geometry = myMeshGeometry3D;

            // The material specifies what is applied to the surface of the 3D object.
            // In this sample, a solid color is used: red for the first object, green for the second.
            Color color = Color.FromArgb(255, 255, 0, 0);
            if (i == 1)
            {
                color = Color.FromArgb(255, 0, 255, 0);
            }
            SolidColorBrush solid_brush = new SolidColorBrush(color);
            DiffuseMaterial solid_material = new DiffuseMaterial(solid_brush);
            myGeometryModel.Material = solid_material;

            // Add the geometry model to the model group.
            myModel3DGroup.Children.Add(myGeometryModel);
            Console.WriteLine(myGeometryModel.ToString());
        }
    }
}
You instantiate and work only with a single GeometryModel3D object (referenced by the field myGeometryModel). So, the data of the green triangle essentially replaces the red triangle data in myGeometryModel.
To fix the issue, delete the myGeometryModel field, and create a GeometryModel3D object for each triangle within the BuildObject method:
public partial class MainWindow : Window
{
    Viewport3D myViewport3D = new Viewport3D();
    Model3DGroup myModel3DGroup = new Model3DGroup();
    ModelVisual3D myModelVisual3D = new ModelVisual3D();
    PerspectiveCamera myPCamera = new PerspectiveCamera();
    ...

    private void BuildObject(int i)
    {
        var myGeometryModel = new GeometryModel3D();
        ...
    }
}

Cannot draw primitives with SharpDX on Windows 10 Universal App (but still able to clear the background)

Sorry for my bad English. I am trying to write a very simple DirectX 11 Windows 10 app with SharpDX which draws a triangle in the middle of the window. The problem is that the triangle is not displayed, while I can still change the background color (using ClearRenderTargetView). I verified that the render function is called periodically and that my triangle is front-facing (clockwise winding). What I have tried:
Disable back-face culling
Set static width and height
Try other primitives (line, triangle strip)
Change vertex shader input from float3 to float4 and vice-versa
I found that this post has very similar symptoms, but its fix still didn't work for me!
I have posted my code on GitHub at: https://github.com/minhcly/UWP3DTest
Here is my initialization code (where I think the problem resides):
// Namespace aliases assumed by this snippet:
//   using D3D = SharpDX.Direct3D;
//   using D3D11 = SharpDX.Direct3D11;
//   using DXGI = SharpDX.DXGI;
D3D11.Device2 device;
D3D11.DeviceContext deviceContext;
DXGI.SwapChain2 swapChain;
D3D11.Texture2D backBufferTexture;
D3D11.RenderTargetView backBufferView;

private void InitializeD3D()
{
    using (D3D11.Device defaultDevice = new D3D11.Device(D3D.DriverType.Hardware, D3D11.DeviceCreationFlags.Debug))
        this.device = defaultDevice.QueryInterface<D3D11.Device2>();
    this.deviceContext = this.device.ImmediateContext2;

    DXGI.SwapChainDescription1 swapChainDescription = new DXGI.SwapChainDescription1()
    {
        AlphaMode = DXGI.AlphaMode.Ignore,
        BufferCount = 2,
        Format = DXGI.Format.R8G8B8A8_UNorm,
        Height = (int)(this.swapChainPanel.RenderSize.Height),
        Width = (int)(this.swapChainPanel.RenderSize.Width),
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Scaling = SharpDX.DXGI.Scaling.Stretch,
        Stereo = false,
        SwapEffect = DXGI.SwapEffect.FlipSequential,
        Usage = DXGI.Usage.RenderTargetOutput
    };

    using (DXGI.Device3 dxgiDevice3 = this.device.QueryInterface<DXGI.Device3>())
    using (DXGI.Factory3 dxgiFactory3 = dxgiDevice3.Adapter.GetParent<DXGI.Factory3>())
    {
        DXGI.SwapChain1 swapChain1 = new DXGI.SwapChain1(dxgiFactory3, this.device, ref swapChainDescription);
        this.swapChain = swapChain1.QueryInterface<DXGI.SwapChain2>();
    }

    using (DXGI.ISwapChainPanelNative nativeObject = ComObject.As<DXGI.ISwapChainPanelNative>(this.swapChainPanel))
        nativeObject.SwapChain = this.swapChain;

    this.backBufferTexture = this.swapChain.GetBackBuffer<D3D11.Texture2D>(0);
    this.backBufferView = new D3D11.RenderTargetView(this.device, this.backBufferTexture);
    this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);

    deviceContext.Rasterizer.State = new D3D11.RasterizerState(device, new D3D11.RasterizerStateDescription()
    {
        CullMode = D3D11.CullMode.None,
        FillMode = D3D11.FillMode.Solid,
        IsMultisampleEnabled = true
    });
    deviceContext.Rasterizer.SetViewport(0, 0, (int)swapChainPanel.Width, (int)swapChainPanel.Height);

    CompositionTarget.Rendering += CompositionTarget_Rendering;
    Application.Current.Suspending += Current_Suspending;
    isDXInitialized = true;
}
InitScene() function:
D3D11.Buffer triangleVertBuffer;
D3D11.VertexShader vs;
D3D11.PixelShader ps;
D3D11.InputLayout vertLayout;
RawVector3[] verts; // RawVector3 comes from SharpDX.Mathematics.Interop

private void InitScene()
{
    D3D11.InputElement[] inputElements = new D3D11.InputElement[]
    {
        new D3D11.InputElement("POSITION", 0, DXGI.Format.R32G32B32_Float, 0)
    };

    // ShaderBytecode and CompilationResult come from SharpDX.D3DCompiler.
    using (CompilationResult vsResult = ShaderBytecode.CompileFromFile("vs.hlsl", "main", "vs_4_0"))
    {
        vs = new D3D11.VertexShader(device, vsResult.Bytecode.Data);
        vertLayout = new D3D11.InputLayout(device, vsResult.Bytecode, inputElements);
    }
    using (CompilationResult psResult = ShaderBytecode.CompileFromFile("ps.hlsl", "main", "ps_4_0"))
        ps = new D3D11.PixelShader(device, psResult.Bytecode.Data);

    deviceContext.VertexShader.Set(vs);
    deviceContext.PixelShader.Set(ps);

    verts = new RawVector3[] {
        new RawVector3( 0.0f,  0.5f, 0.5f ),
        new RawVector3( 0.5f, -0.5f, 0.5f ),
        new RawVector3(-0.5f, -0.5f, 0.5f )
    };
    triangleVertBuffer = D3D11.Buffer.Create(device, D3D11.BindFlags.VertexBuffer, verts);

    deviceContext.InputAssembler.InputLayout = vertLayout;
    deviceContext.InputAssembler.PrimitiveTopology = D3D.PrimitiveTopology.TriangleList;
}
Render function:
private void RenderScene()
{
    this.deviceContext.ClearRenderTargetView(this.backBufferView, new RawColor4(red, green, blue, 0));
    deviceContext.InputAssembler.SetVertexBuffers(0,
        new D3D11.VertexBufferBinding(triangleVertBuffer, Utilities.SizeOf<RawVector3>(), 0));
    deviceContext.Draw(verts.Length, 0);
    this.swapChain.Present(0, DXGI.PresentFlags.None);
}
Thank you for your help.
I have solved the problem. Using the Graphics Debugger in Visual Studio, I found that the OutputMerger had no RenderTarget. So I moved the line
this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);
to the RenderScene() function, and it works. However, I can't understand why I must reset this every frame. I'm new to Direct3D, so if anyone has an answer, please comment. Thank you.
P.S.: I have pushed the working project to GitHub for anyone who encounters the same problem.
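A likely explanation (hedged, since the thread never confirms it): with flip-model swap effects such as DXGI.SwapEffect.FlipSequential, Present hands the back buffer to the compositor and unbinds it from the output-merger stage, so the render target has to be re-bound every frame. The reordered RenderScene would then be:

private void RenderScene()
{
    // Re-bind the back buffer first: flip-model presentation unbinds it
    // from the output merger after each Present.
    this.deviceContext.OutputMerger.SetRenderTargets(this.backBufferView);
    this.deviceContext.ClearRenderTargetView(this.backBufferView, new RawColor4(red, green, blue, 0));
    deviceContext.InputAssembler.SetVertexBuffers(0,
        new D3D11.VertexBufferBinding(triangleVertBuffer, Utilities.SizeOf<RawVector3>(), 0));
    deviceContext.Draw(verts.Length, 0);
    this.swapChain.Present(0, DXGI.PresentFlags.None);
}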

How to move an image pixel by pixel?

So I have two Images, and I want the first Image to move towards the other one. I have both X,Y coordinates; how should this be done if I want it moving pixel by pixel towards the target Image?
Bear in mind I'm using Windows Universal. I've tried DoubleAnimation, but my knowledge of that stuff is really bad and I don't know where to start. The Image would sometimes have to move diagonally (across) rather than moving right and then up.
How should I do this?
This is what I have so far:
private void MoveToTarget_Start()
{
    moveToTimer = new DispatcherTimer();
    moveToTimer.Tick += MoveToTarget_Tick;
    moveToTimer.Interval = new TimeSpan(0, 0, 0, 0, 1);
    moveToTimer.Start();
}

void MoveToTarget_Tick(object sender, object e)
{
}
First, you need to know how many pixels you need to move. For that, we can retrieve the absolute position of each element and compare them (there may be a more straightforward way, I just don't know of one):
private Point GetAbsolutePosition(UIElement element)
{
    var ttv = element.TransformToVisual(Window.Current.Content);
    return ttv.TransformPoint(new Point(0, 0));
}
(taken from this answer)
From there, we retrieve the point of each element and compute the difference:
var position1 = GetAbsolutePosition(Image1);
var position2 = GetAbsolutePosition(Image2);
var offsetX = position2.X - position1.X;
var offsetY = position2.Y - position1.Y;
Now we know how many pixels we'll have to move on each axis. We add a TranslateTransform to the element (it may be better to do that beforehand, directly in the XAML):
var translateTransform = new TranslateTransform();
Image1.RenderTransform = translateTransform;
Finally, we create the animations, and target the TranslateTransform. Then we group them in a Storyboard, and start it:
var animationX = new DoubleAnimation()
{
    From = 0,
    To = offsetX,
    Duration = TimeSpan.FromSeconds(2)
};
var animationY = new DoubleAnimation()
{
    From = 0,
    To = offsetY,
    Duration = TimeSpan.FromSeconds(2)
};
Storyboard.SetTarget(animationX, translateTransform);
Storyboard.SetTargetProperty(animationX, "X");
Storyboard.SetTarget(animationY, translateTransform);
Storyboard.SetTargetProperty(animationY, "Y");

var storyboard = new Storyboard();
storyboard.Children.Add(animationX);
storyboard.Children.Add(animationY);
storyboard.Begin();
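Pulled together, the whole thing might look like this as a single helper (a sketch under the same assumptions as the snippets above; the parameter names are placeholders):

private void MoveToTarget(UIElement movingImage, UIElement targetImage)
{
    // Offset between the absolute positions of the two elements.
    var from = GetAbsolutePosition(movingImage);
    var to = GetAbsolutePosition(targetImage);

    // Transform that will carry the movement.
    var translateTransform = new TranslateTransform();
    movingImage.RenderTransform = translateTransform;

    // One animation per axis, grouped in a storyboard so they run together,
    // which makes the element travel along the diagonal.
    var storyboard = new Storyboard();
    foreach (var (offset, axis) in new[] { (to.X - from.X, "X"), (to.Y - from.Y, "Y") })
    {
        var animation = new DoubleAnimation
        {
            From = 0,
            To = offset,
            Duration = TimeSpan.FromSeconds(2)
        };
        Storyboard.SetTarget(animation, translateTransform);
        Storyboard.SetTargetProperty(animation, axis);
        storyboard.Children.Add(animation);
    }
    storyboard.Begin();
}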
To be honest, the best idea is to use the DoubleAnimation and Storyboard classes.
I would use a Canvas as the container; then you can animate the image through the Canvas.Left and Canvas.Top attached properties.
First, you should create the DoubleAnimation:
DoubleAnimation da = new DoubleAnimation()
{
    SpeedRatio = 3.0,
    AutoReverse = false,
    From = 0,
    To = 100,
    BeginTime = TimeSpan.FromSeconds(x),
    // On UWP, Canvas.Top is a dependent animation target, so this is needed:
    EnableDependentAnimation = true
};
Storyboard.SetTarget(da, yourImage); // yourImage: the Image element to move
Storyboard.SetTargetProperty(da, "(Canvas.Top)");
Of course, change those properties as you like. Now we must create the Storyboard that will contain our animation:
Storyboard sb = new Storyboard();
sb.Children.Add(da);
sb.Begin();
Hope it helps!
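For completeness: the question's DispatcherTimer skeleton can also be finished literally, stepping one pixel per tick along the straight line between the two positions, which handles diagonals naturally. A minimal sketch, assuming targetX/targetY hold the offset computed with GetAbsolutePosition above and moveTransform is a TranslateTransform assigned to the moving Image:

// Assumed fields: the target offset (in pixels, relative to the image's
// starting position) and the TranslateTransform assigned to the image.
double targetX, targetY;
TranslateTransform moveTransform;

void MoveToTarget_Tick(object sender, object e)
{
    // Remaining distance from the current offset to the target offset.
    double dx = targetX - moveTransform.X;
    double dy = targetY - moveTransform.Y;
    double distance = Math.Sqrt(dx * dx + dy * dy);
    if (distance < 1)
    {
        moveToTimer.Stop();
        return;
    }
    // Step one pixel along the straight line, so diagonal movement falls out naturally.
    moveTransform.X += dx / distance;
    moveTransform.Y += dy / distance;
}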

Problems with attribute binding and shaders

Currently I'm upgrading our OpenGL engine to a new shading and attribute system that is more dynamic and no longer hard-coded in usage and programming.
For this I'm replacing the old VertexBuffer class with a new BufferRenderer, to which multiple DataBuffer (RenderDataBuffer, RenderIndexBuffer) objects holding my rendering data are assigned. The new system allows instancing with glDrawElementsInstanced as well as static rendering with glDrawElements.
The Problem
It looks like one attribute corrupts an existing position attribute and leads to unexpected results. I tested this with different settings.
Test setup
This code sets up the test data:
_Shader = new Shader(ShaderSource.FromFile("InstancingShader.xml"));
_VertexBuffer = new BufferRenderer();

RenderDataBuffer positionBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.STATIC_DRAW,
    new VertexDeclaration(DeclarationType.Float, DeclarationSize.Three, AttributeBindingType.Position));
// Set the position data of the quad
positionBuffer.BufferData(new[] { new Vector3(0, 0, 0), new Vector3(0, 0, 1), new Vector3(1, 0, 0), new Vector3(1, 0, 1) });

RenderDataBuffer instanceBuffer = new RenderDataBuffer(ArrayBufferTarget.ARRAY_BUFFER, ArrayBufferUsage.DYNAMIC_DRAW,
    new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Instancing),
    new VertexDeclaration(DeclarationType.Float, DeclarationSize.Four, AttributeBindingType.Color));
// Buffer the instance data
instanceBuffer.BufferData<InstanceTestData>(new[] {
    new InstanceTestData() { Color = Colors.Red, PRS = new Color(0.1f, 1f, 0.5f, 1) },
    new InstanceTestData() { Color = Colors.Blue, PRS = new Color(1f, 1f, 0.5f, 1) },
    new InstanceTestData() { Color = Colors.Green, PRS = new Color(0.1f, 1f, 1f, 1) },
    new InstanceTestData() { Color = Colors.Yellow, PRS = new Color(1f, 1f, 1f, 1) }
});

// Set up an index buffer for indexed rendering
RenderIndexBuffer indiciesBuffer = new RenderIndexBuffer(type: IndiciesType.UNSIGNED_BYTE);
indiciesBuffer.BufferData(new Byte[] { 2, 1, 0, 1, 2, 3 });

// Register the buffers (the second parameter is used for glVertexAttribDivisor)
_VertexBuffer.AddBuffer(positionBuffer);
_VertexBuffer.AddBuffer(instanceBuffer, 1);
_VertexBuffer.IndexBuffer = indiciesBuffer;
The vertex shader (the pixel shader just outputs the color):
uniform mat4 uModelViewProjection;

varying vec4 vColor;

attribute vec3 aPosition;         // POSITION0
attribute vec4 aColor;            // COLOR0
attribute vec4 aInstancePosition; // INSTANCING0

void main()
{
    gl_Position = uModelViewProjection * vec4(vec2((aPosition.x * 20) + (gl_InstanceID * 20), aPosition.z * 20), -3, 1);
    vColor = aColor;
}
Rendering (pseudocode to simplify reading; not final, for all the performance folks out there):
glUseProgram
foreach (parameter in shader_parameters)
    glUniformX
foreach (buffer in render_buffers)
    glBindBuffer
    foreach (declaration in buffer.vertex_declarations)
        if (shader.Exists(declaration)) // Check if the declaration exists in the shader
            glEnableVertexAttribArray(shader.attributeLocation(declaration))
            glVertexAttribPointer
            if (instanceDivisor != null)
                glVertexAttribDivisor
glBindBuffer(index_buffer)
glDrawElementsInstanced
Shader attribute binding
The shader attribute binding is done at initialization and looks like this:
_VertexAttributes = source.Attributes.ToArray();
for (uint i = 0; i < _VertexAttributes.Length; i++)
{
    ShaderAttribute attribute = _VertexAttributes[i];
    GLShaders.BindAttribLocation(_ShaderHandle, i, attribute.Name);
}
So there should be no attribute aliasing in the shader; each attribute gets a unique index (matrices are not implemented yet; I know they require more than one index, but I'm not using them as vertex attributes right now). As mentioned in the comments, I filter the attributes after linking the shader, so no location is bound that doesn't exist.
This is the code for the attribute binding:
Bind();
Int32 offset = 0;
for (UInt32 i = 0; i < _Declarations.Length; i++)
{
    VertexDeclaration data = _Declarations[i];
    ShaderAttributeLocation location;
    if (shader.GetAttributeLocation(data.Binding, out location))
    {
        GLVertexBuffer.EnableVertexAttribArray(location);
        GLVertexBuffer.VertexAttribPointer(location, (AttributeSize)data.Size, (AttributeType)data.Type, data.Normalized, _StrideSize, new IntPtr(offset));
        if (instanceDivisor != null)
            GLVertexBuffer.VertexAttribDivisor(location, instanceDivisor.Value);
    }
    offset += data.ComponentSize;
}
Test results
The results are as seen here:
Now, if I swap the bindings on the code side (AttributeBindingType.Color <-> AttributeBindingType.Instancing), it looks like this:
If I then change vColor = aColor; to vColor = aInstancePosition;, the result is simple: instead of multiple small colored quads I get one big fullscreen quad, which is red. The location of each attribute differs from the others, so technically the values should be correct, but apparently they are not. Using both attributes in the shader doesn't solve the problem either.
I'm looking for an idea or a solution to this problem.
Tracking the problem down
I've kept tracking it down; with this complex code it cost me hours, but I found something: the shader I'm using only works when I leave out attribute index 0 when calling BindAttribLocation. In other words, this is a workaround that only works for this specific shader:
foreach (attribute in vertexAttributes)
{
    if (shader == problemShader)
        // i is the index of the attribute
        glBindAttribLocation(_ShaderHandle, i + 1, attribute.Name);
    // All other shaders
    else
        glBindAttribLocation(_ShaderHandle, i, attribute.Name);
}
I guess it has something to do with either instancing or the multiple VBOs I'm using for instancing; these are the only differences from the normal shaders. The normal ones, conversely, only work when I start the attribute location index at 0; they fail when starting at 1.
I found the solution to the problem, and it is really simple: after rendering with instancing, I need to call glVertexAttribDivisor(location, 0); on the attributes that had the divisor enabled before.
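In terms of the wrapper code above, the cleanup might look like this (a sketch; it simply mirrors the binding loop, resetting the divisor on every location that had one):

// Reset the divisor after the instanced draw so these attribute locations
// behave per-vertex again when the next (non-instanced) shader binds them.
if (instanceDivisor != null)
{
    for (UInt32 i = 0; i < _Declarations.Length; i++)
    {
        ShaderAttributeLocation location;
        if (shader.GetAttributeLocation(_Declarations[i].Binding, out location))
            GLVertexBuffer.VertexAttribDivisor(location, 0);
    }
}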

WPF animations from code

There were some similar threads, but I didn't find a solution to my problem. This is my first post here.
Here's the thing:
Viewport3D viewPort3D;
GeometryModel3D geometryModel = new GeometryModel3D();
Transform3DGroup transform3DGroup = new Transform3DGroup();
...
// Rotation
RotateTransform3D rotateTransform3D = new RotateTransform3D();
AxisAngleRotation3D axisAngleRotation3d = new AxisAngleRotation3D();
axisAngleRotation3d.Axis = new Vector3D(0, 1, 0);
axisAngleRotation3d.Angle = angle;
rotateTransform3D.Rotation = axisAngleRotation3d;
transform3DGroup.Children.Add(rotateTransform3D);
// Translation
TranslateTransform3D translateTransform3D = new TranslateTransform3D();
translateTransform3D.OffsetX = offsetX;
transform3DGroup.Children.Add(translateTransform3D);
// Adding transforms
geometryModel.Transform = transform3DGroup;
Model3DGroup model3DGroup = new Model3DGroup();
model3DGroup.Children.Add( image.getGeometryModel3D() );
modelVisual3D.Content = model3DGroup;
viewPort3D.Children.Add( modelVisual3D );
And now I want to perform the translation using a storyboard (because later I want to add rotation to that storyboard as well):
Storyboard s = new Storyboard();
Transform3DGroup transform3DGroup = model3DGroup.Children.ElementAt(current).Transform as Transform3DGroup;
for (int j = 0; j < transform3DGroup.Children.Count; ++j)
{
    if (transform3DGroup.Children.ElementAt(j) is TranslateTransform3D)
    {
        TranslateTransform3D translation = transform3DGroup.Children.ElementAt(j) as TranslateTransform3D;
        DoubleAnimation doubleAnimation = new DoubleAnimation();
        doubleAnimation.From = 0;
        doubleAnimation.To = 2;
        doubleAnimation.Duration = new Duration(TimeSpan.FromSeconds(1));
        doubleAnimation.AutoReverse = true;
        doubleAnimation.RepeatBehavior = RepeatBehavior.Forever;
        s.Children.Add(doubleAnimation);
        s.Duration = new Duration(TimeSpan.FromSeconds(1));
        Storyboard.SetTarget(doubleAnimation, model3DGroup.Children.ElementAt(current));
        Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath("(Model3D.Transform).(Transform3DGroup.Children)[1].(TranslateTransform3D.OffsetX)"));
        s.Begin(); // Exception during execution.
    }
}
Exception in the last line:
'[Unknown]' property value in the path
'(Model3D.Transform).(Transform3DGroup.Children)[1].(TranslateTransform3D.OffsetX)'
points to immutable instance of
'System.Windows.Media.Media3D.TranslateTransform3D'.
I took the PropertyPath from a similar path generated in Blend 4.
Thank you for any help.
I think that because the TranslateTransform3D is an immutable instance, it has to be made addressable so that it can be changed while rendering/translation takes place. I guess we can supply a name to that immutable TranslateTransform3D object (the code-behind equivalent of x:Name) to make it targetable, and then bind to its property to animate it.
E.g. in your case
NameScope.SetNameScope(this, new NameScope());
this.RegisterName("AxisRotation", MyAxisRotation3DObject.Rotation);
this.RegisterName("TranslateTransformation", MyTranslation3DObject);
This way we give names to the AxisAngleRotation3D and TranslateTransform3D objects. In the double animations we then refer to them with Storyboard.SetTargetName(..., "AxisRotation") and Storyboard.SetTargetName(..., "TranslateTransformation"), and we can target their direct properties with Storyboard.SetTargetProperty(..., new PropertyPath("Angle")) and Storyboard.SetTargetProperty(..., new PropertyPath("OffsetX")) respectively.
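Applied to the question's code, that might look like this (a sketch; the registered name is arbitrary, and Begin must be given the element whose name scope holds the registration):

// Register the transform in the window's name scope so the storyboard
// can resolve it by name ("TranslateTransformation" is an arbitrary name).
NameScope.SetNameScope(this, new NameScope());
this.RegisterName("TranslateTransformation", translateTransform3D);

DoubleAnimation doubleAnimation = new DoubleAnimation
{
    From = 0,
    To = 2,
    Duration = new Duration(TimeSpan.FromSeconds(1)),
    AutoReverse = true,
    RepeatBehavior = RepeatBehavior.Forever
};
Storyboard.SetTargetName(doubleAnimation, "TranslateTransformation");
Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath(TranslateTransform3D.OffsetXProperty));

Storyboard s = new Storyboard();
s.Children.Add(doubleAnimation);
s.Begin(this); // "this" provides the name scope registered above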
Your error states that the TranslateTransform3D is immutable, which means it cannot be changed. You're trying to animate one of its properties, hence the error.
