I have a problem that I have not been able to find a thread about.
I have a camera feed of a circular object which I need to rotate, but I want to detect the rotation of the object: whether it is rotated clockwise or counterclockwise. In the end I want to, for example, draw a rectangle on the object, and have that rectangle rotate in the same direction as the object in the camera feed.
The camera feed (shown in the GIF below) is processed with Otsu's algorithm, and the object is always somewhat deformed (i.e. it's not 100% round).
I looked into various motion detection algorithms; by comparing two frames you can detect the motion if the object moves across the frame, but those methods won't work to determine the rotation.
If someone could be so kind as to help me or point me in the right direction, I would be very grateful. As before, if I'm being unclear I will of course try to explain further. Thank you!
Hey team, I'm currently working on a third-person game where I would like to fire grappling hooks from the player to a point that is offset from the center of the camera.
I have a screen overlay canvas with UI images for crosshairs. When left shift is held down, the crosshairs move outward from the center along the X axis and return to the center once shift is released, a bit like crosshair spread in games, except I want to trigger the spread via the shift key. These crosshairs are meant to dictate the location where the anchors of the grappling hook land, originating from the player and hitting whatever object is directly forward of the crosshairs (imagine Attack on Titan ODM gear, if you've seen it). I am looking for a way to raycast from the player to the point forward of these crosshairs while they're offset from the center.
So far I have the grappling system set up and working, but I'm having trouble with the direction parameter when I use the crosshair spread. The spread works fine, but where the hooks land in relation to the crosshairs is obviously off, since I'm currently using angle calculations instead of targeting what is forward of these reticles.
I'm basically wondering if it is possible to use these screen overlay UI images to cast forward from, or if there's a better way to accomplish the same thing. I have my doubts because they're screen overlay, so I imagine their coordinates won't be attached to the camera as they appear.
Thanks in advance for your help.
What I would do is determine the location of the reticles on the screen, then (as Aybe suggested) use ScreenPointToRay or ViewportPointToRay (depending on whether it's easier to get a pixel position or a fractional position for each reticle) and Physics.Raycast that ray from the camera into the scene to find where the rays collide. At this point, you have the two world positions the player seems to want to shoot the hooks at:
Vector3 hookTarget1;
Vector3 hookTarget2;
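For example, one of those targets might be found like this (a rough, untested sketch; reticle1 is assumed to be the RectTransform of one crosshair image on a Screen Space - Overlay canvas, and maxRange is an assumed maximum hook distance):
// Sketch: on an overlay canvas the RectTransform's position is already in screen pixels,
// so it can be fed straight into ScreenPointToRay. reticle1 and maxRange are assumed names.
Ray camRay = Camera.main.ScreenPointToRay(reticle1.position);
if (Physics.Raycast(camRay, out RaycastHit camHit, maxRange))
{
    hookTarget1 = camHit.point;   // world position under the crosshair
}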
So, now you actually have to fire the hooks - but as you know, the hooks aren't being shot from the camera, they are being shot from the player, and they may be offset by a bit. Let's call the originating points (which may have the same value):
Vector3 hookOrigin1;
Vector3 hookOrigin2;
So, then you can create Rays that originate from the hook origins and point at the targets:
Ray hookShot1 = new Ray(hookOrigin1, hookTarget1 - hookOrigin1);
Ray hookShot2 = new Ray(hookOrigin2, hookTarget2 - hookOrigin2);
Using these rays, you can do another Physics.Raycast if you would like, to confirm that there aren't any trees or other obstacles that are between the player and the target location - and if there are, that may be where the anchor should actually sink:
Vector3 anchorPoint1;
Vector3 anchorPoint2;
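For the first hook, that confirmation step might look roughly like this (untested sketch):
// Sketch: raycast along hookShot1 only as far as the intended target; if something is
// in the way, that's where the anchor sinks instead.
float distanceToTarget1 = Vector3.Distance(hookOrigin1, hookTarget1);
if (Physics.Raycast(hookShot1, out RaycastHit blockHit, distanceToTarget1))
{
    anchorPoint1 = blockHit.point;   // an obstacle (tree, wall, ...) between player and target
}
else
{
    anchorPoint1 = hookTarget1;      // clear path, anchor at the original target
}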
The segment between the origin of these rays and these anchor points would be appropriate for rendering the cable, calculating physics for how to move the player, as well as checking for collisions with world geometry that might cause the anchor to release.
I'm making a game that must set a target number on every enemy on screen.
I can't find a function like IsInsideCameraView(gameObject).
Now I'm trying to scan the area that the camera looks at.
For this, I need the camera's horizontal and vertical view angles.
As described in this image (my reputation is not enough to show the image directly).
My question is similar to this question, "How to get angles value of perspective camera in Three.js?", but for Unity.
If you know how to calculate the camera's width & height (following the screen size when Unity starts), or whether there is a good function like IsInsideCameraView(Camera, gameObject), any answer is welcome.
[Untested, JavaScript] You can use renderer.isVisible to determine whether the object is within the camera frustum.
I can't find a function like IsInsideCameraView(gameObject)
You can use:
gameObject.GetComponent<Renderer>().isVisible
...to check if the object is visible in any camera.
To test for a specific camera, use:
Vector3 screenPos = camera.WorldToScreenPoint(target.position);
Then check whether the X and Y coordinates are between (0, 0) and (Screen.width, Screen.height).
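Put together, the helper the question asked for could look roughly like this (untested sketch; the z test simply rejects points behind the camera):
bool IsInsideCameraView(Camera camera, GameObject target)
{
    Vector3 screenPos = camera.WorldToScreenPoint(target.transform.position);
    return screenPos.z > 0f                                    // in front of the camera
        && screenPos.x >= 0f && screenPos.x <= Screen.width    // within horizontal screen bounds
        && screenPos.y >= 0f && screenPos.y <= Screen.height;  // within vertical screen bounds
}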
how to calculate the camera's width & height
That's what Camera.pixelWidth and Camera.pixelHeight are for; you can also use Screen.width and Screen.height for cameras that cover the entire screen.
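If you really do want the view angles mentioned in the question rather than pixel sizes, they can be derived from Camera.fieldOfView (the vertical angle, in degrees) and Camera.aspect; a small untested sketch:
// Vertical view angle is given directly; the horizontal angle follows from the aspect ratio.
float verticalAngle = camera.fieldOfView;   // degrees
float horizontalAngle = 2f * Mathf.Atan(Mathf.Tan(verticalAngle * 0.5f * Mathf.Deg2Rad) * camera.aspect) * Mathf.Rad2Deg;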
Anyway, that's arguably an XY problem; refer to the beginning of my answer for the direct solution.
I have an unknown amount of polygons, all facing the camera (-z) with different positions. I want to rotate each polygon around its center by different angles. Would it be faster to use glRotate and glTranslate, or to calculate the rotation myself, or to do something else?
I have a problem with AABB collision resolution.
I resolve AABB intersection by resolving the X axis first, then the Y axis.
This is done to prevent this bug: http://i.stack.imgur.com/NLg4j.png
The current method works fine when an object moves into the player and the player has to be pushed horizontally. As you can see in the .gif, the horizontal spikes push the player correctly.
When the vertical spikes move into the player, however, the X axis is still resolved first. This makes "using the spikes as a lift" impossible.
When the player moves into the vertical spikes (affected by gravity, falls into them), he's pushed on the Y axis, because there was no overlap on the X axis to begin with.
Something I tried was the method described in the first answer of this link.
However, the spikes and moving objects move by having their position changed directly (not via a velocity), and I don't calculate their next predicted position until their Update() method is called.
Needless to say this solution didn't work either. :(
I need to solve AABB collision in a way that both of the cases described above work as intended.
This is my current collision source code: http://pastebin.com/MiCi3nA1
I'd be really grateful if someone could look into this, since this bug has been present in the engine all the way back from the beginning, and I've been struggling to find a good solution, without any success. This is seriously making me spend nights looking at the collision code and preventing me from getting to the "fun part" and coding the game logic :(
I tried implementing the same collision system as in the XNA AppHub platformer demo (by copy-pasting most of the stuff). However the "jumping" bug occurs in my game, while it doesn't occur in the AppHub demo.
[ jumping bug: http://i.stack.imgur.com/NLg4j.png ]
To jump I check if the player is "onGround", then add -5 to Velocity.Y.
Since the player's Velocity.X is higher than Velocity.Y (refer to the fourth panel in the diagram), onGround is set to true when it shouldn't be, and thus lets the player jump in mid-air.
I believe this doesn't happen in the AppHub demo because the player's Velocity.X will never be higher than Velocity.Y, but I may be mistaken.
I solved this before by resolving on the X axis first, then on the Y axis. But that screws up the collision with the spikes as I stated above.
Why not resolve on the Y-axis first for vertical spikes, and on the X-axis first for horizontal spikes?
Nice graphic, by the way.
As I understand it, you're handling movement and collision something like this:
Move all objects.
For each object O, test for intersection between the player and O, and if necessary, eject the player horizontally or vertically so that it is no longer intersecting with O.
If the player is still intersecting with some object, then (something).
This means that when you come to step (2), you have forgotten which way the object O was moving, so you can't tell if it is trying to push the player upwards or sideways.
Solution: in step (1), store for each object the direction it is moving. When you find the player intersecting with an object, you can look to see whether the object is moving up, down, left or right, and that will tell you which way to perform the ejection.
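In code that could look roughly like this (untested sketch; MovingObject, Delta, Bounds and the Eject... calls are made-up names standing in for whatever your engine uses):
// Step 1: while moving the objects, remember how each one moved this frame.
foreach (MovingObject obj in movingObjects)
{
    obj.Delta = obj.Position - obj.PreviousPosition;
}

// Step 2: when an object overlaps the player, eject along the axis the object was pushing on.
foreach (MovingObject obj in movingObjects)
{
    if (!obj.Bounds.Intersects(player.Bounds))
        continue;

    if (Math.Abs(obj.Delta.Y) > Math.Abs(obj.Delta.X))
        EjectVertically(player, obj);     // e.g. vertical spikes moving up: lift the player
    else
        EjectHorizontally(player, obj);   // e.g. horizontal spikes: push the player sideways
}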
As Gareth Rees already said, the issue is that after a collision is detected, you need more information (the current location plus either the direction it came from or the last position) to perform the collision response.
It gets quite complicated if both objects are moving. Instead, choose one object to be the frame of reference and subtract its velocity from everything else.
A straightforward solution might be to create line segments for the movement/delta of the non-frame-of-reference object. Then intersect those segments with the 4 AABB edges. This gives the time of intersection and the normal at the point of intersection. Then you can apply the same response you have now.
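One way to do that segment-vs-AABB test is the classic slab method; here is a rough, untested sketch in plain C# (all names are illustrative):
// The movement segment runs from (x0, y0) to (x1, y1), expressed relative to the box
// (per the frame-of-reference idea above). The box spans [minX, maxX] x [minY, maxY].
// Returns true with the time of first contact t in [0, 1] and the collision normal,
// or false if the segment never enters the box.
static bool SegmentVsAabb(float x0, float y0, float x1, float y1,
                          float minX, float minY, float maxX, float maxY,
                          out float t, out float normalX, out float normalY)
{
    float dx = x1 - x0, dy = y1 - y0;
    float tEnter = 0f, tExit = 1f;
    t = 0f; normalX = 0f; normalY = 0f;

    // X slab
    if (Math.Abs(dx) < 1e-6f)
    {
        if (x0 < minX || x0 > maxX) return false;          // moving parallel to the slab, never inside
    }
    else
    {
        float t1 = (minX - x0) / dx, t2 = (maxX - x0) / dx;
        float near = Math.Min(t1, t2), far = Math.Max(t1, t2);
        if (near > tEnter) { tEnter = near; normalX = dx > 0 ? -1f : 1f; normalY = 0f; }
        if (far < tExit) tExit = far;
    }

    // Y slab
    if (Math.Abs(dy) < 1e-6f)
    {
        if (y0 < minY || y0 > maxY) return false;
    }
    else
    {
        float t1 = (minY - y0) / dy, t2 = (maxY - y0) / dy;
        float near = Math.Min(t1, t2), far = Math.Max(t1, t2);
        if (near > tEnter) { tEnter = near; normalX = 0f; normalY = dy > 0 ? -1f : 1f; }
        if (far < tExit) tExit = far;
    }

    if (tEnter > tExit) return false;                       // the segment misses the box
    t = tEnter;
    return true;
}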
One possible solution I found is sorting the objects before resolving based on the velocity of the player.
Expression Blend enables you to import 3D models. I want to animate a 3D object with code. I just can't figure out which property values I have to modify in order to make an object rotate. Let me show you what I mean:
So if I want to rotate this object, I could use the camera orbit tool, and if I use it I can end up with something like:
I know I can create a storyboard and create the animation by modifying the object. I need to rotate the object along the X axis with a slider. If I modify just one value it rotates in a weird way; I actually have to change several properties to do it properly. For example, when I rotate the object along the X axis with the camera orbit tool, I can see that all these properties are changing. I need to figure out what algorithm is being used to rotate the object.
The math to move the camera position around so that you appear to be rotating around the X axis is just the parametric equation of a circle: keep x fixed and set y = r * cos(t), z = r * sin(t), where t is the angle from zero to 2 pi and r is the camera's distance from the object.
Imagine you are standing on the street looking at a house. The camera's coordinates have to follow a circle around the house and the latitude and longitude are continuously changing to keep the same distance from the house. So there is no one value you can change to make it rotate.
Once you know the camera position, the direction is just the difference between the origin and the camera position.
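In code (C#, using the System.Windows.Media.Media3D types; camera, r and t are assumed to already exist) that circle math might look like this untested sketch:
// Orbit the camera around the X axis at distance r from the origin; t runs from 0 to 2 * pi.
Point3D cameraPosition = new Point3D(0, r * Math.Cos(t), r * Math.Sin(t));
camera.Position = cameraPosition;                               // camera is a PerspectiveCamera
camera.LookDirection = new Point3D(0, 0, 0) - cameraPosition;   // look back at the origin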
All this is not hard to calculate, but there is an easier way. Instead, keep the camera fixed and rotate the object. This makes animations much easier. Here is an MSDN article that contains examples of that approach, including animations:
3-D Transformations Overview
That article is meant for WPF and Visual Studio but you can easily adapt the same ideas to Expression Blend.
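For the rotate-the-object approach, a minimal untested sketch (model is your imported Model3D or ModelVisual3D and angleSlider is the slider driving the X-axis rotation; both names are assumptions):
// Set up an X-axis rotation on the imported model.
AxisAngleRotation3D xRotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
model.Transform = new RotateTransform3D(xRotation);

// Then, for example in the slider's ValueChanged handler:
xRotation.Angle = angleSlider.Value;   // angle in degrees around the X axis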