Animating a MatrixTransform in WPF from code - c#

I have a Canvas whose RenderTransform property I need to animate. The start and end matrices are arbitrary, so I can't pre-write the storyboard in XAML and have to do it in code. I can't find any example of how to do this; below is my best attempt, which does not work (it compiles and runs, but the RenderTransform never changes).
Any suggestions on how this should be done?
MatrixAnimationUsingKeyFrames anim = new MatrixAnimationUsingKeyFrames();
MatrixKeyFrameCollection keyframes = new MatrixKeyFrameCollection();
DiscreteMatrixKeyFrame start = new DiscreteMatrixKeyFrame(fromMatrix, KeyTime.FromPercent(0));
DiscreteMatrixKeyFrame end = new DiscreteMatrixKeyFrame(toMatrix, KeyTime.FromPercent(1));
keyframes.Add(start);
keyframes.Add(end);
anim.KeyFrames = keyframes;
Storyboard.SetTarget(anim, World.RenderTransform);
Storyboard.SetTargetProperty(anim, new PropertyPath("Matrix"));
Storyboard sb = new Storyboard();
sb.Children.Add(anim);
sb.Duration = TimeSpan.FromSeconds(4);
sb.Begin();

I have implemented a MatrixAnimation class which supports smooth translation, scaling and rotation animations. It also supports easing functions! You can find it at http://pwlodek.blogspot.com/2010/12/matrixanimation-for-wpf.html

I bumped into this problem this morning, although the solution I used won't cope with rotations or shearing. link
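For what it's worth, here is a rough sketch of that kind of workaround (a guess at it, since only the link above is available), reusing the names World, fromMatrix and toMatrix from the question. It animates a ScaleTransform and a TranslateTransform inside a TransformGroup rather than the matrix itself, so it only covers matrices that combine scale and translation, not rotation or shear:
// Decompose the assumed scale+translate matrices into separate transforms.
ScaleTransform scale = new ScaleTransform(fromMatrix.M11, fromMatrix.M22);
TranslateTransform translate = new TranslateTransform(fromMatrix.OffsetX, fromMatrix.OffsetY);
TransformGroup group = new TransformGroup();
group.Children.Add(scale);
group.Children.Add(translate);
World.RenderTransform = group;
// Animate each component from its current value to the target value.
Duration duration = new Duration(TimeSpan.FromSeconds(4));
scale.BeginAnimation(ScaleTransform.ScaleXProperty, new DoubleAnimation(toMatrix.M11, duration));
scale.BeginAnimation(ScaleTransform.ScaleYProperty, new DoubleAnimation(toMatrix.M22, duration));
translate.BeginAnimation(TranslateTransform.XProperty, new DoubleAnimation(toMatrix.OffsetX, duration));
translate.BeginAnimation(TranslateTransform.YProperty, new DoubleAnimation(toMatrix.OffsetY, duration));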

I managed to get the MatrixTransform working by assigning it to the element's RenderTransform and calling BeginAnimation on the transform itself,
something like this:
this.matrixTransform = new MatrixTransform();
this.canvas.RenderTransform = this.matrixTransform;
MatrixAnimationUsingKeyFrames anim = new MatrixAnimationUsingKeyFrames();
anim.KeyFrames = new MatrixKeyFrameCollection();
anim.Duration = TimeSpan.FromSeconds(4);
Matrix fromMatrix = new Matrix(2, 0, 0, 2, 0, 0);
Matrix toMatrix = new Matrix(3, 0, 0, 3, 0, 0);
anim.FillBehavior = FillBehavior.HoldEnd;
DiscreteMatrixKeyFrame start = new DiscreteMatrixKeyFrame(fromMatrix, KeyTime.FromTimeSpan(TimeSpan.FromSeconds(0)));
DiscreteMatrixKeyFrame end = new DiscreteMatrixKeyFrame(toMatrix, KeyTime.FromTimeSpan(TimeSpan.FromSeconds(4)));
anim.KeyFrames.Add(start);
anim.KeyFrames.Add(end);
this.matrixTransform.BeginAnimation(MatrixTransform.MatrixProperty, anim);
Not sure exactly how I'm going to do the interpolation for all the keyframes myself though :)
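For reference, here is a rough sketch of what such an interpolating animation can look like as a custom MatrixAnimationBase subclass. The class name LinearMatrixAnimation and its From/To properties are my own naming, not from the thread, and it simply lerps the raw matrix coefficients, so a rotation encoded in the matrix will not sweep along an arc (the MatrixAnimation linked in the other answer handles rotation properly):
public class LinearMatrixAnimation : MatrixAnimationBase
{
    // From/To are dependency properties so their values survive the clone
    // WPF makes of the animation when it starts.
    public static readonly DependencyProperty FromProperty =
        DependencyProperty.Register("From", typeof(Matrix?), typeof(LinearMatrixAnimation), new PropertyMetadata(null));
    public static readonly DependencyProperty ToProperty =
        DependencyProperty.Register("To", typeof(Matrix?), typeof(LinearMatrixAnimation), new PropertyMetadata(null));

    public Matrix? From
    {
        get { return (Matrix?)GetValue(FromProperty); }
        set { SetValue(FromProperty, value); }
    }

    public Matrix? To
    {
        get { return (Matrix?)GetValue(ToProperty); }
        set { SetValue(ToProperty, value); }
    }

    protected override Freezable CreateInstanceCore()
    {
        return new LinearMatrixAnimation();
    }

    protected override Matrix GetCurrentValueCore(Matrix defaultOriginValue, Matrix defaultDestinationValue, AnimationClock animationClock)
    {
        Matrix from = From ?? defaultOriginValue;
        Matrix to = To ?? defaultDestinationValue;
        double p = animationClock.CurrentProgress ?? 0.0;

        // Linearly interpolate every coefficient of the 2D matrix.
        return new Matrix(
            from.M11 + (to.M11 - from.M11) * p,
            from.M12 + (to.M12 - from.M12) * p,
            from.M21 + (to.M21 - from.M21) * p,
            from.M22 + (to.M22 - from.M22) * p,
            from.OffsetX + (to.OffsetX - from.OffsetX) * p,
            from.OffsetY + (to.OffsetY - from.OffsetY) * p);
    }
}
It could then be used in place of the key-frame animation above:
this.matrixTransform.BeginAnimation(MatrixTransform.MatrixProperty, new LinearMatrixAnimation { From = fromMatrix, To = toMatrix, Duration = TimeSpan.FromSeconds(4) });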

Related

SharpDX/DX11 Alpha Blend

I am attempting to use alpha blending with SharpDX. If I set my blend state on the output merger, nothing renders at all, and I have absolutely no idea why. If I never set the blend state, everything works fine. Even if I set a blend state created from the default blend description, nothing renders. I figure there's some step I'm missing or something I'm doing in the wrong order, so I'll just paste what I've got and hope somebody can point something out...
I have a BlendState set up using the following code:
bs = new BlendState(Devices.Device11, new BlendStateDescription());
var blendDesc = new RenderTargetBlendDescription(
true,
BlendOption.SourceAlpha,
BlendOption.InverseSourceAlpha,
BlendOperation.Add,
BlendOption.One,
BlendOption.Zero,
BlendOperation.Add,
ColorWriteMaskFlags.All);
bs.Description.RenderTarget[0] = blendDesc;
...and here are the contents of my render loop. If all I do is comment out context.OutputMerger.SetBlendState(bs), my meshes render fine (without any blending, that is):
var context = Devices.Device11.ImmediateContext;
context.ClearDepthStencilView(DepthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
context.ClearRenderTargetView(RenderTargetView, new Color4());
context.OutputMerger.SetTargets(DepthStencilView, RenderTargetView);
context.OutputMerger.SetBlendState(bs);
context.Rasterizer.State = rs;
context.Rasterizer.SetViewport(Viewport);
context.VertexShader.SetConstantBuffer(0, viewProjBuffer);
context.UpdateSubresource(Camera.ViewProjection.ToFloatArray(), viewProjBuffer);
Dictionary<Mesh, Buffer> vBuffers = VertexBuffers.ToDictionary(k => k.Key, v => v.Value);
Dictionary<Mesh, Buffer> iBuffers = IndexBuffers.ToDictionary(k => k.Key, v => v.Value);
foreach (var mesh in vBuffers.Keys)
{
if (mesh.MeshType == MeshType.LineStrip)
{
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineStrip;
context.InputAssembler.InputLayout = Effects.LineEffect.InputLayout;
context.VertexShader.Set(Effects.LineEffect.VertexShader);
context.PixelShader.Set(Effects.LineEffect.PixelShader);
}
else
{
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.InputLayout = Effects.FaceEffect.InputLayout;
context.VertexShader.Set(Effects.FaceEffect.VertexShader);
context.PixelShader.Set(Effects.FaceEffect.PixelShader);
}
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vBuffers[mesh], GetMeshStride(mesh) * 4, 0));
context.InputAssembler.SetIndexBuffer(iBuffers[mesh], Format.R32_UInt, 0);
context.DrawIndexed(mesh.IndexUsage, 0, 0);
}
context.ResolveSubresource(RenderTarget, 0, SharedTexture, 0, Format.B8G8R8A8_UNorm);
context.Flush();
I am rendering to a texture, which is initialized using the following texture description:
Texture2DDescription colorDesc = new Texture2DDescription
{
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
Format = Format.B8G8R8A8_UNorm,
Width = width,
Height = height,
MipLevels = 1,
SampleDescription = new SampleDescription(8, 32),
Usage = ResourceUsage.Default,
OptionFlags = ResourceOptionFlags.Shared,
CpuAccessFlags = CpuAccessFlags.None,
ArraySize = 1
};
It is important for me to render to a texture and to use that format. I thought that maybe blending only works with particular formats, but I could not find any info suggesting anything of the sort.
I am also multisampling, which is why I call ResolveSubresource(...) at the end of my render method. The texture I am copying to has an identical description to RenderTarget, except with a different SampleDescription.
Speaking of multisampling, here's the RasterizerState I am using:
rs = new RasterizerState(Devices.Device11, new RasterizerStateDescription()
{
FillMode = FillMode.Solid,
CullMode = CullMode.Back,
IsFrontCounterClockwise = true,
DepthBias = 0,
DepthBiasClamp = 0,
SlopeScaledDepthBias = 0,
IsDepthClipEnabled = true,
IsScissorEnabled = false,
IsMultisampleEnabled = true,
IsAntialiasedLineEnabled = true
});
I am rendering with a shader that is basically a "Hello World" shader plus a camera matrix and basic normal-direction lighting. I've verified that my vertex alpha values are what they're supposed to be as they're written into their vertex buffers, and even if I hardcode the alpha to 1 at the end of the pixel shader I still get nothing, as long as I use that BlendState. Without the BlendState, everything renders opaque as expected, regardless of alpha values. I've even tried doing the blending in the shader itself, but that seems to have no effect whatsoever; everything renders as if no blending were defined at all.
If it matters, I'm using FeatureLevel.Level_11_0 with my Device.
Unfortunately, this is about as much as I have to go on. Like I said, this problem is a total mystery to me at this point.
Just wanted to update this for anyone who comes here in the future. Something that worked for me was quite simply:
BlendStateDescription blendStateDescription = new BlendStateDescription
{
AlphaToCoverageEnable = false,
};
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.Zero;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
this._context.OutputMerger.BlendState = new BlendState(_device, blendStateDescription);
I also handled the alpha component in my shader, in case you want to manually add transparency to a model; pseudo-code:
float4 PixelShaderMain( PixelShaderArgs pixelShaderArgs ) : SV_Target
{
    float u = pixelShaderArgs.col.x;
    float v = pixelShaderArgs.col.y;
    float4 color = ShaderTexture.Load(int3(convertUVToPixel(u, v), 0));
    return float4(color.r, color.g, color.b, 0.5); // 50% transparency
}
Hope this helps someone instead of getting a dead page that basically points nowhere. Happy coding everyone!
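For anyone comparing the two snippets: a likely explanation (my reading of the code, not something stated in the thread) is that in the original version the BlendState was constructed from a zero-initialized BlendStateDescription and the description was only filled in afterwards. D3D11 state objects are immutable once created, and a zero-initialized description leaves blending disabled and the render-target write mask empty, so no colour gets written at all, which would match the "nothing renders" symptom. A sketch of the same setup with the description built before the state object (Devices.Device11 and context are names from the thread):
var desc = new BlendStateDescription();
desc.RenderTarget[0] = new RenderTargetBlendDescription(
    true,                            // IsBlendEnabled
    BlendOption.SourceAlpha,         // SourceBlend
    BlendOption.InverseSourceAlpha,  // DestinationBlend
    BlendOperation.Add,              // BlendOperation
    BlendOption.One,                 // SourceAlphaBlend
    BlendOption.Zero,                // DestinationAlphaBlend
    BlendOperation.Add,              // AlphaBlendOperation
    ColorWriteMaskFlags.All);        // RenderTargetWriteMask
var bs = new BlendState(Devices.Device11, desc);   // the state is immutable from here on
context.OutputMerger.SetBlendState(bs);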

Transform control in WinRT

I have several boxes of type Windows.UI.Xaml.Controls.Control with different sizes. I want to transform a few of them vertically, as shown in the picture.
I'm struggling with this. I'm sure it should not be very difficult, but I don't get it...
Btw, I want to do this in code-behind, not in XAML.
Many thanks for your help.
Cheers
Daniel
edit:
DoubleAnimation scaleAnimation = new DoubleAnimation();
scaleAnimation.From = startHeight;
scaleAnimation.To = this.ClientHeight * Percentage;
scaleAnimation.Duration = TimeSpan.FromMilliseconds(500);
scaleAnimation.EasingFunction = new QuarticEase() { EasingMode = EasingMode.EaseOut };
Storyboard storyScaleX = new Storyboard();
storyScaleX.Children.Add(scaleAnimation);
Storyboard.SetTarget(storyScaleX, slice);
scaleAnimation.EnableDependentAnimation = true;
Storyboard.SetTargetProperty(storyScaleX, "Height");
You can apply a TranslateTransform to the element's RenderTransform (Windows.UI.Xaml has no LayoutTransform), e.g.
element.RenderTransform = new TranslateTransform { Y = 100 };
If the effect you require depends on the height of the element, use the element's ActualHeight as the value to translate by.
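If the move should be animated rather than applied instantly, a minimal sketch in code-behind along the lines of the code in the question (the element name box is a placeholder) could look like this:
var transform = new TranslateTransform();
box.RenderTransform = transform;

var slide = new DoubleAnimation
{
    From = 0,
    To = 100,   // pixels to move down
    Duration = TimeSpan.FromMilliseconds(500),
    EasingFunction = new QuarticEase { EasingMode = EasingMode.EaseOut }
};
Storyboard.SetTarget(slide, box);
Storyboard.SetTargetProperty(slide, "(UIElement.RenderTransform).(TranslateTransform.Y)");

var storyboard = new Storyboard();
storyboard.Children.Add(slide);
storyboard.Begin();
Because this targets a RenderTransform, it should run as an independent animation, so EnableDependentAnimation is not needed (unlike the Height animation in the edit above).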

Removing .Diffuse colours from an FBX model

I am currently working on an AR project, based around the original 'Tutorial 8 - Marker Tracking' program supplied by GoblinXNA. I've had a play around with it and replaced the models with some of my own designs, saved in .fbx format. The problem I am having, though, is that the .Diffuse property is replacing the original colours of the model with red. Altering the value only changes the colour and doesn't give me the model's original appearance, and removing the .Diffuse line of code just turns the model shades of grey and black (I'm guessing this is something to do with the CreateLights() method?).
In any case, here is the code for the object; any help would be much appreciated!
ModelLoader mLoader = new ModelLoader(); //self explanatory
Model flagModel = (Model)mLoader.Load("", "FlagModelAsset2");
flagNode = new GeometryNode("FlagModelAsset2");
flagNode.Model = flagModel;
flagNode.AddToPhysicsEngine = true;
flagNode.Physics.Shape = ShapeType.Box;
flagNode.Model.ShadowAttribute = ShadowAttribute.ReceiveCast;
flagNode.Model.Shader = new SimpleShadowShader(scene.ShadowMap);
//TransformNode flagTransNode = new TransformNode();
//flagTransNode.Translation = new Vector3(0, 0, 0); //position of flag
//flagTransNode.Scale = new Vector3(1f, 1f, 1f); //size of flag
toolbarMarkerNode = new MarkerNode(scene.MarkerTracker, "ALVARToolbar.xml");
Material flagMaterial = new Material();
flagMaterial.Diffuse = new Vector4(0.5f, 2, 0, 1); //colour of flag
flagMaterial.Specular = Color.White.ToVector4();
flagMaterial.SpecularPower = 10;
flagNode.Material = flagMaterial;
groundMarkerNode.AddChild(flagNode);
scene.RootNode.AddChild(toolbarMarkerNode);
//flagNode.AddChild(flagTransNode);
NewtonPhysics.CollisionPair pair = new NewtonPhysics.CollisionPair(flagNode.Physics, sphereNode.Physics);
((NewtonPhysics)scene.PhysicsEngine).AddCollisionCallback(pair, BoxSphereCollision);
}
It was the material: removing the material assignment and adding the code below allows the model to use the textures from the originally imported file:
((Model)flagNode.Model).UseInternalMaterials = true;

Sequence of animations in WPF with BeginAnimation

I'm trying to animate some rotations in 3D with WPF. If I trigger them manually (on click), everything is fine, but if I compute the movements that should be made on the Viewport3D, all the animations seem to go off at the same time.
The code that computes the movements is as follows:
for(int i=0; i<40; i++){
foo(i);
}
Where the foo(int i) looks like:
//compute axis, angle
AxisAngleRotation3D rotation = new AxisAngleRotation3D(axis, angle);
RotateTransform3D transform = new RotateTransform3D(rotation, new Point3D(0, 0, 0));
DoubleAnimation animation = new DoubleAnimation(0, angle, TimeSpan.FromMilliseconds(370));
rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, animation);
The computation of the axis and angle is not time-consuming, just simple assignments, so I guess the problem is that all the animations are triggered on the next frame, since the computations are already done by the time the current frame is "over".
How can I display those animations sequentially, rather than all at once, in code (not XAML)?
PS: everything is in C#, no XAML.
You may add multiple animations to a Storyboard and set each animation's BeginTime to the sum of the durations of the previous animations:
var storyboard = new Storyboard();
var totalDuration = TimeSpan.Zero;
for (...)
{
var rotation = new AxisAngleRotation3D(axis, angle);
var transform = new RotateTransform3D(rotation, new Point3D(0, 0, 0));
var duration = TimeSpan.FromMilliseconds(370);
var animation = new DoubleAnimation(0, angle, duration);
animation.BeginTime = totalDuration;
totalDuration += duration;
Storyboard.SetTarget(animation, rotation);
Storyboard.SetTargetProperty(animation, new PropertyPath(AxisAngleRotation3D.AngleProperty));
storyboard.Children.Add(animation);
}
storyboard.Begin();
Note that I haven't tested the code above, so apologies for any faults.
Alternatively, you could create your animations so that each one (starting from the second) is started in the Completed handler of the previous one.
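A rough sketch of that approach, assuming the axis/angle pairs are precomputed into arrays (names here are placeholders, not from the question):
void PlayStep(Vector3D[] axes, double[] angles, int index)
{
    if (index >= angles.Length)
        return;

    AxisAngleRotation3D rotation = new AxisAngleRotation3D(axes[index], 0);
    RotateTransform3D transform = new RotateTransform3D(rotation, new Point3D(0, 0, 0));
    // attach 'transform' to the model or visual being rotated here

    DoubleAnimation animation = new DoubleAnimation(0, angles[index], TimeSpan.FromMilliseconds(370));
    animation.Completed += (s, e) => PlayStep(axes, angles, index + 1);   // chain to the next step
    rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, animation);
}

// kick off the whole sequence
PlayStep(axes, angles, 0);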

WindowsPhone7: How to animate position of UIElement with code?

It seems like it should be so simple. I've read dozens of links and I can't get anything to animate the position. I believe the closest code I can write so far is this:
Storyboard storyboard = new Storyboard();
TranslateTransform trans = new TranslateTransform() { X = 1.0, Y = 1.0 };
myCheckbox.RenderTransformOrigin = new Point(0.5, 0.5);
myCheckbox.RenderTransform = trans;
DoubleAnimation moveAnim = new DoubleAnimation();
moveAnim.Duration = TimeSpan.FromMilliseconds(1200);
moveAnim.From = -1;
moveAnim.To = 1;
Storyboard.SetTarget(moveAnim, myCheckbox);
Storyboard.SetTargetProperty(moveAnim, new PropertyPath("(UIElement.RenderTransform).(TranslateTransform.X)"));
storyboard.Completed += new System.EventHandler(storyboard_Completed);
storyboard.Children.Add(moveAnim);
storyboard.Begin();
No errors are thrown.
The completion callback does get called.
If I animate opacity in a similar fashion it works fine.
How can I simply animate a UIElement's position with code??
The comment from xyzzer was correct. The cause of the confusion was that the coordinates for RenderTransformOrigin are normalized to a 0-1 range relative to the element, whereas the actual transforms (e.g. TranslateTransform) use pixels as units. So animating X from -1 to 1 only moves the element by 2 pixels, which is why it looked like nothing was happening.
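So the fix is simply to animate by a pixel distance rather than a fraction; for example (values are illustrative, not from the original post):
moveAnim.From = 0;
moveAnim.To = 200;   // slide the checkbox 200 pixels to the right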
