For a VR project I have a 360° panorama on a sphere, in which users need to mark something by "drawing" a rectangle with their head movement. So let's say you want to mark the person in the image below: you start drawing at, for example, the top-left corner, like so.
Then move your head to the bottom right and end up with a rectangle, something like this.
How would I go about doing this? I guess I would somehow have to project a plane onto the panorama sphere based on the camera position? Any help would be appreciated!
You need to determine the gesture/action for starting to draw, and an indicator for each corner of the rect. (You are basically creating a mesh/quad at runtime; here is a tutorial to help you with that: https://www.youtube.com/watch?v=gmuHI_wsOgI )
Next you need to find the desired vertex positions for the quad. I would raycast and use the hit.point values: collect four raycast points and assign their values to the mesh vertices.
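A minimal Unity sketch of that idea, assuming the panorama sphere has a MeshCollider and that drawing is triggered by a hypothetical "Fire1" input (swap in your own VR gesture/controller input):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: collect gaze-raycast hits as quad corners, then build a mesh.
public class GazeRectDrawer : MonoBehaviour
{
    readonly List<Vector3> corners = new List<Vector3>();

    void Update()
    {
        // Hypothetical trigger: replace with your actual VR input/gesture.
        if (Input.GetButtonDown("Fire1"))
        {
            Ray gaze = new Ray(Camera.main.transform.position,
                               Camera.main.transform.forward);
            if (Physics.Raycast(gaze, out RaycastHit hit))
                corners.Add(hit.point);

            if (corners.Count == 4)
            {
                BuildQuad();
                corners.Clear();
            }
        }
    }

    void BuildQuad()
    {
        var mesh = new Mesh();
        mesh.vertices = corners.ToArray();
        // Winding order assumes corners were added clockwise as seen by the camera.
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        mesh.RecalculateNormals();

        var go = new GameObject("MarkerQuad", typeof(MeshFilter), typeof(MeshRenderer));
        go.GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

In practice you would probably only raycast two opposite corners (start and end of the head movement) and derive the other two, but the four-point version above matches the answer as written.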
I have a board made of a bunch of square tiles, and a player which is a circle. I want to drag the player over the tiles and, on mouse-button-up, set the player's position to the center of the nearest tile.
Is there any way to handle this without a Canvas and EventSystem?
There are a few ways to do it.
First, you can store all the tiles in an array and, while dragging the player, check for the position of the closest one. This works great if you have gaps between the tiles and want the player to still snap to a tile even when it isn't directly over one, by setting up a distance threshold. It can get a bit costly, though, if you have thousands of tiles and you are iterating through them all.
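A sketch of that first approach, with the tile array and snap threshold as assumptions you would assign yourself:

```csharp
using UnityEngine;

// Sketch: snap the player to the nearest tile on mouse-up by scanning an array.
// O(n) per release: fine for hundreds of tiles, costly for thousands.
public class NearestTileSnap : MonoBehaviour
{
    public Transform[] tiles;          // assign in the inspector
    public float snapThreshold = 2f;   // max distance at which snapping occurs

    void OnMouseUp()
    {
        Transform nearest = null;
        float best = float.MaxValue;

        foreach (Transform tile in tiles)
        {
            float d = (tile.position - transform.position).sqrMagnitude;
            if (d < best) { best = d; nearest = tile; }
        }

        // Compare squared distances to avoid a square root per tile.
        if (nearest != null && best <= snapThreshold * snapThreshold)
            transform.position = nearest.position;
    }
}
```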
The second and easiest option is to cast a ray from your player straight down to the plane where your tiles are positioned; on a hit, just position the player in the middle of the tile. If you have more objects in the scene, you can also tag the tiles (for example "Tile") and check for the tag in the raycast. The use of raycasts is documented here: https://docs.unity3d.com/ScriptReference/Physics.Raycast.html
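A sketch of the raycast version, assuming the tiles have colliders and are tagged "Tile":

```csharp
using UnityEngine;

// Sketch: raycast straight down from the player onto the tile plane and
// snap to the centre of the tile that was hit.
public class RaycastTileSnap : MonoBehaviour
{
    void OnMouseUp()
    {
        if (Physics.Raycast(transform.position, Vector3.down, out RaycastHit hit)
            && hit.collider.CompareTag("Tile"))
        {
            Vector3 centre = hit.collider.bounds.center;
            // Keep the player's own height; only snap within the tile plane.
            transform.position = new Vector3(centre.x, transform.position.y, centre.z);
        }
    }
}
```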
Blue overlaps Green which overlaps Red.
Each card can be selected by moving the mouse over it. But the thing is, my hitboxes don't have a notion of depth (z-axis); it's a 2D game.
So let's say I want to select the Green card. When I put my mouse over it, both the Green and the Red are selected, because the cursor is in the Green hitbox but also in the Red hitbox.
So my question is: how should I manage this? When I have overlapping hitboxes, how do I check only the areas that are not covered?
Note: I use the Rectangle Intersect and Contains functions.
But the thing is, my hitboxes don't have a notion of depth (z-axis); it's a 2D game. ... So my question is how I should manage this.
Just because it's a 2D game (and by that I mean the camera is projecting some world from xD down to 2D) doesn't mean your scene has to be 2D. Because your cards can overlap one another, your scene has depth, and so it is 3D.
Once you realise this, hit detection of objects in your 3D scene is trivial:
i.e.
shoot a ray from the mouse, inverse projected into the scene
test to see which objects it hits
take the first object closest to the origin
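In 2D with axis-aligned hitboxes, the "ray" degenerates to a point-in-rectangle test plus a depth sort. A sketch using the Rectangle.Contains approach the question mentions, with Card and its Depth field as hypothetical stand-ins for your own types:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;

// Sketch: among all cards whose hitbox contains the mouse, select only the
// one closest to the camera (the topmost one in draw order).
public class Card
{
    public Rectangle HitBox;
    public float Depth;        // smaller = drawn on top, i.e. closer to the camera
    public bool Selected;
}

public static class CardPicker
{
    public static void UpdateSelection(List<Card> cards, Point mouse)
    {
        Card top = cards
            .Where(c => c.HitBox.Contains(mouse))
            .OrderBy(c => c.Depth)     // "first object closest to the origin"
            .FirstOrDefault();

        foreach (Card c in cards)
            c.Selected = (c == top);
    }
}
```

This way you never need to compute the uncovered sub-regions of each rectangle; the depth ordering resolves the overlap for you.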
How do I make it so that when I move the camera (with touch) it doesn't go beyond the scene borders?
How do I move the camera with touch so it moves strictly between scene parts, like slides (one swipe = first slide, another swipe = the next slide), without going beyond the borders of the scene?
The game I'm making has a camera like in the Disco Zoo game for Android. (I'm a newbie.)
Technically, the scene doesn't really have a border. You'll need to define the border somehow within your game, then constrain the camera's position with something like Mathf.Clamp(value, min, max) in an Update() function on the camera.
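A minimal sketch of the clamping itself, with the border values hard-coded as placeholders (option 1 below):

```csharp
using UnityEngine;

// Sketch: clamp the camera inside fixed borders every frame.
// The min/max values are assumptions; replace them with your scene's limits.
public class CameraClamp : MonoBehaviour
{
    public Vector2 min = new Vector2(-10f, -5f);
    public Vector2 max = new Vector2(10f, 5f);

    void Update()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Clamp(p.x, min.x, max.x);
        p.y = Mathf.Clamp(p.y, min.y, max.y);
        transform.position = p;
    }
}
```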
How can you define the border? It's up to you. Some ideas:
Hard-code the values in the script that clamps the camera. Probably the quickest option, but not flexible.
Make public parameters on the camera script that let you set min and max positions in the X and Y directions
If you have a background image: use the extents of that to define your camera's extents
Create empty objects in your scene that define the minimum and maximum extents of the scene. Put your "min" object at the bottom-left and your "max" object at the top-right. Connect them to the camera script, then use their positions to check whether you've gone too far in any given direction. The main reason to do this is that it's visual.
(Slower, but dynamic) If everything in your scene uses physics, you could search the entire scene for every Collider component, then find the furthest extents in each direction. However, this is probably going to be pretty slow (so you'll only want to do it once), it'll take a while to code, and you'll probably want to tweak the boundaries by hand anyway.
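The last idea can be sketched by encapsulating every collider's bounds once at startup; the resulting Bounds then feeds the clamp:

```csharp
using UnityEngine;

// Sketch of the "search every Collider" idea: run once at startup, grow a
// Bounds around everything with physics, and keep the result for clamping.
public class SceneExtents : MonoBehaviour
{
    public Bounds Extents { get; private set; }

    void Start()
    {
        Collider[] all = FindObjectsOfType<Collider>();
        if (all.Length == 0) return;

        Bounds b = all[0].bounds;
        foreach (Collider c in all)
            b.Encapsulate(c.bounds);   // grow to include every collider

        Extents = b;   // use Extents.min / Extents.max as the camera's limits
    }
}
```

As the answer notes, you'll likely still want to tweak the computed boundaries by hand.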
I'm coding in C# using the XNA 4.0 framework. I have noticed that when the sprite is moved up, down, left, or right (it flips to face the proper direction), the sprite is sharp and in focus, similar to how it was drawn.
Example of sprite in up direction (sharp image);
But for some reason, when I move the player in any diagonal direction, it becomes slightly blurry.
Example of sprite in diagonal up/right direction (blurry image);
I am just rotating the object around its origin point (the center of the sprite); I'm not touching any other draw attribute besides rotation and origin.
Can anyone shed some light on why this may be happening? Is this just what happens when a sprite is rotated? Is there a way I can rotate the sprite and maintain its sharpness?
---I'm not sure if this matters, but the sprite is drawn facing the up direction in my sprite sheet---
I'm not sure if this matters but the sprite is drawn facing the up direction in my sprite sheet
It does. When the sprite is facing up or down, the pixels drawn to the screen can be exactly those in the sprite sheet, simply because they line up with the buffer. When the image is rotated, calculations need to be done to decide how best to shade each pixel. This leads to some pixels being gray instead of pure black and white, causing the "blurring". You can play with the anti-aliasing settings to get something you are happy with, but this can be a bigger issue with sprites that have hard lines, like yours appears to have.
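In XNA 4.0, the relevant setting is the SamplerState passed to SpriteBatch.Begin. A sketch (playerTexture, position, rotation, and origin are assumed to be your existing fields):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch: PointClamp picks the nearest texel, giving hard, pixelated edges
// when rotated; LinearClamp blends neighbouring texels, giving the slight
// blur described above. Try both and pick the look you prefer.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  SamplerState.PointClamp, DepthStencilState.None,
                  RasterizerState.CullCounterClockwise);
spriteBatch.Draw(playerTexture, position, null, Color.White,
                 rotation, origin, 1f, SpriteEffects.None, 0f);
spriteBatch.End();
```

Neither choice makes a rotated sprite pixel-perfect; the source pixels simply no longer line up with the screen grid, so it's a trade-off between jagged and soft edges.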
Expression Blend enables you to import 3D models. I want to animate a 3D object with code, but I can't seem to figure out which property values I have to modify to make an object rotate. Let me show you what I mean:
So if I want to rotate this object, I could use the camera orbit tool, and I can end up with something like:
I know I can create a storyboard and build the animation by modifying the object, but I need to rotate the object along the x-axis with a slider. If I modify just one value, it rotates in a weird way; I actually have to change several properties to do it properly. For example, when I rotate the object along the x-axis with the camera orbit tool, I can see all of these properties changing. I need to figure out what algorithm is being used to rotate the object.
The math to move the camera position around so that you appear to be rotating around the X axis is just the parametric equation of a circle, traced in the Y-Z plane:

y = r * cos(t)
z = r * sin(t)

where t is the angle from zero to 2*pi and r is the camera's distance from the object.
Imagine you are standing on the street looking at a house. The camera's coordinates have to follow a circle around the house, and its latitude and longitude change continuously to keep the same distance from the house. So there is no single value you can change to make it rotate.
Once you know the camera position, the look direction is just the difference between the origin and the camera position.
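A sketch of that calculation for a WPF PerspectiveCamera, assuming the object sits at the origin and r is the orbit radius:

```csharp
using System;
using System.Windows.Media.Media3D;

// Sketch: orbit the camera around the X axis. The circle lies in the
// Y-Z plane; the origin as the look-at target is an assumption.
static void OrbitCamera(PerspectiveCamera camera, double t, double r)
{
    // Parametric circle: y = r*cos(t), z = r*sin(t), x unchanged.
    var position = new Point3D(camera.Position.X,
                               r * Math.Cos(t),
                               r * Math.Sin(t));
    camera.Position = position;

    // Look direction = target (origin) minus camera position.
    camera.LookDirection = new Vector3D(-position.X, -position.Y, -position.Z);
}
```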
All this is not hard to calculate, but there is an easier way: keep the camera fixed and rotate the object instead. This makes animations much easier. Here is an MSDN article that contains examples of that approach, including animations:
3-D Transformations Overview
That article is meant for WPF and Visual Studio but you can easily adapt the same ideas to Expression Blend.
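A minimal sketch of the rotate-the-object approach, where "model" stands in for your own Model3D or Visual3D and the slider is hypothetical:

```csharp
using System.Windows.Media.Media3D;

// Sketch: keep the camera still and rotate the model around the X axis.
// Now a single value (the angle, in degrees) drives the rotation, so it
// can be bound directly to a slider.
var rotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
model.Transform = new RotateTransform3D(rotation);

// e.g. in the slider's ValueChanged handler:
// rotation.Angle = slider.Value;
```

This is exactly why the article's approach is easier: instead of co-ordinating several camera properties, you change one angle.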