I'm using Unity 2020.1.17f.
For alignment correction between the VR lens and the display, I want to control the size, offset, and rotation of the screen ("screen" meaning the area of the display where the actual game scene appears; my game is letterboxed on all four sides).
I can control size and offset by changing the values of the Viewport Rect, but I cannot find a way to control the rotation of the screen (something like OpenCV's Cv2.GetRotationMatrix2D() in C#).
You might say "why don't you just rotate your camera?", but rotating the camera is not what I want. As you know, in VR the game image is distorted in various ways to match the lens shape, so even if I rotate the camera, it does not help correct the misalignment.
Google turns up many answers, but they only deal with the rotation of a mobile phone, not with rotating by an arbitrary angle (which is what I really want).
How can I control the rotation of the game screen?
You need to go to Edit > Project Settings > Player
and open the section called Resolution and Presentation.
There is a setting called Default Orientation, where you can set the orientation the build opens with.
Below that there is also a section called Allowed Orientations for Auto Rotation, where you can enable the other orientations that are allowed. In my case everything is allowed, but you can enable just the two landscape or the two portrait orientations, depending on your game and your canvas and camera orientation.
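If you need to switch orientation at runtime rather than in the build settings, Unity also exposes this through Screen.orientation; a minimal sketch (this only takes effect on mobile platforms):

using UnityEngine;

// Sketch: locking the app to one orientation from script.
public class OrientationSwitcher : MonoBehaviour
{
    void Start()
    {
        // Other values include Portrait, PortraitUpsideDown,
        // LandscapeRight and AutoRotation.
        Screen.orientation = ScreenOrientation.LandscapeLeft;
    }
}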
I am talking about the Camera settings in Unity3D.
I'm trying to figure out whether I can change (at least) the background color of the gray area in the screenshot. The limits of the camera are changed programmatically. The motivation is that the playing area has to change dynamically depending on whether a child or an adult is playing, and the screen is huge, more than 83 inches. When the playing area is rescaled, the area that is not drawn is gray and a bit ugly. I would like to know if I can at least define its color, or better still, fill it with an image.
The screenshot is a full-screen capture, so it includes all the pixels.
After this brief explanation in words and images, here are the technical details. This is how I resize the playing area:
public static void SetViewportCalibration()
{
    var camera = Camera.main;
    // Note: Rect takes (x, y, width, height), not (minX, minY, maxX, maxY).
    camera.pixelRect = new Rect(MinX, MinY, MaxX, MaxY);
}
Is it possible to set the color of that gray area outside the new Rect(MinX, MinY, MaxX, MaxY)?
There are two ways off the top of my head to accomplish this. Both use two Cameras.
The first way: create a second Camera with a Depth LESS than the dynamic camera's. This second, "background" camera can then display anything you'd like: a separate Skybox, a separate UI, other scene content, and so on.
The second way: don't resize your dynamic camera at all. Instead, render it to a Target Texture, use that texture in a material, and assign the material to a Quad mesh (the most appropriate choice). The quad can then be used in your scene like any other 3D object, which means you can not only position it, but also scale it and even rotate it. The camera you added can have its own Skybox, UI, etc.
I would opt for the second way. Partly personal preference, but also because it sounds like it might suit your situation better and be easier to implement. You can also implement many more effects for extra "wow".
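A minimal sketch of the second way, assuming a Quad already exists in the scene (gameCamera and screenQuad are illustrative names, not from the question):

using UnityEngine;

// Sketch: render the game camera into a texture and show it on a quad.
public class RenderToQuad : MonoBehaviour
{
    public Camera gameCamera;       // the camera that renders the actual game
    public MeshRenderer screenQuad; // the quad that displays the game image

    void Start()
    {
        // Render into an offscreen texture instead of the screen.
        var rt = new RenderTexture(1024, 1024, 24);
        gameCamera.targetTexture = rt;

        // The quad can now be positioned, scaled and rotated freely.
        screenQuad.material.mainTexture = rt;
    }
}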
Try creating another camera with no objects in its view and the following settings:
Clear Flags: Solid Color
Background: pick a color
Viewport Rect: X = 0, Y = 0, W = 1, H = 1
Depth: a smaller value than the other camera's (set this camera's depth to 0 and the other camera's depth to 1)
This camera will act as the background of your screen.
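If you prefer to set this up from script, the settings map one-to-one onto the Camera API; a minimal sketch (untested against your project):

using UnityEngine;

// Sketch: configuring the background camera from code.
public class BackgroundCameraSetup : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor; // Clear Flags: Solid Color
        cam.backgroundColor = Color.black;            // Background: pick a color
        cam.rect = new Rect(0f, 0f, 1f, 1f);          // Viewport Rect: full screen
        cam.depth = 0;                                // below the main camera's depth of 1
        cam.cullingMask = 0;                          // draw no objects, only the clear color
    }
}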
I hope that I understood the question :)
How to make a 3D viewport within that 3D viewport square
You can use the Normalized Viewport Rectangles approach, achieved by editing the Viewport Rect of the Camera.
The documentation explains an example of split screen for a two-player game. You can adapt the explanation to have the game in one area of the screen and the GUI in the other.
Normalized Viewport Rectangles
Normalized Viewport Rectangle is specifically for defining a certain portion of the screen that the current camera view will be drawn upon. You can put a map view in the lower-right hand corner of the screen, or a missile-tip view in the upper-left corner. With a bit of design work, you can use Viewport Rectangle to create some unique behaviors.
It's easy to create a two-player split screen effect using Normalized Viewport Rectangle. After you have created your two cameras, change both cameras' H values to 0.5, then set player one's Y value to 0.5 and player two's Y value to 0. This will make player one's camera display from halfway up the screen to the top, and player two's camera start at the bottom and stop halfway up the screen.
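In script form, the split-screen setup described above is just two viewport rects; a minimal sketch (the camera field names are illustrative):

using UnityEngine;

// Sketch of the two-player split screen from the quoted documentation.
public class SplitScreenSetup : MonoBehaviour
{
    public Camera playerOneCamera; // illustrative names, assign in the Inspector
    public Camera playerTwoCamera;

    void Start()
    {
        // Rect is (x, y, width, height) in normalized [0..1] screen coordinates.
        playerOneCamera.rect = new Rect(0f, 0.5f, 1f, 0.5f); // top half
        playerTwoCamera.rect = new Rect(0f, 0f, 1f, 0.5f);   // bottom half
    }
}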
I am trying to make a VR game with Google Cardboard in Unity, but I cannot find a way to display score text right in front of the player. When I add 2D text, it shows up on only one side, and therefore in front of only one eye, and getting the position right for two copies of the text is hard. If I use 3D text and place it in front of the player's position, I think it will clip into a wall if the player walks into one. Is there any way to display a score on Google Cardboard / Unity VR?
You can either use the native Unity Canvas UI or Google's hack to render OnGUI calls onto a texture.
I would definitely recommend Canvas, as that is where Unity is focusing its UI features, and it has much better layout capability.
To use a canvas, right-click in the Hierarchy and add UI > Text. You will automatically get a Canvas. The important part is to set the canvas to World Space (not Screen Space - Overlay). Then drag the canvas game object so it is a child of the Google Cardboard main head object. Scale it down (e.g. x: 0.001, y: 0.001, z: 0.001) because by default it will be massive. To avoid it going through walls, position it about 0.5 m in front of the camera, within any collider you may have.
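The same setup can also be done from script; a minimal sketch (the head and canvas references and the 0.5 m offset are illustrative, not Cardboard-specific API):

using UnityEngine;

// Sketch: a world-space score canvas parented to the VR head object.
public class AttachScoreCanvas : MonoBehaviour
{
    public Transform head;  // e.g. the Google Cardboard main head object
    public Canvas canvas;   // the canvas holding the score Text

    void Start()
    {
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.transform.SetParent(head, false);
        canvas.transform.localPosition = new Vector3(0f, 0f, 0.5f); // ~0.5 m in front
        canvas.transform.localScale = Vector3.one * 0.001f;         // shrink the huge default
    }
}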
There is another approach as well: place the canvas under the camera, make it world space, and then adjust it as you want. After that it will be easily visible wherever you look (as suggested by the earlier answer). As you can see in the picture below, I placed Canvas > Text under the camera I used for the Oculus / Google camera.
Inside my project, I have a sprite of a box being drawn. The camera zooms out when a key is clicked. When I zoom out, I want my box to scale its dimensions so it stays consistent even though the camera has zoomed out and "shrunk" it.
I have tried multiplying the object's dimensions by 10%, which seems to be the viewport's adjustment when zooming out, but that doesn't seem to work. Now this may sound dumb, but would scaling the sprite in the Draw call also change the sprite's dimensions?
Let's say the box is 64x64 pixels. I zoom out 10% and scale the sprite. Does the sprite still have 64x64 boundaries, or does the scaling also change its dimensions?
Scaling using SpriteBatch.Draw()'s scale argument will just draw the sprite smaller or bigger; e.g. at a scale of 0.1, a 64x64 sprite will appear as roughly 7x7 pixels (with the outer pixels alpha-blended, if blending is enabled). However, there are no size properties on the sprite itself; if you keep your own rectangle or position variables for the sprite, SpriteBatch.Draw() will of course not change those.
An alternative is to draw the sprite in 3D space; then everything is scaled when you move your camera, so the sprite will appear smaller even though it is still a 64x64 sprite.
How do you draw a sprite in 3D space? Here is a good tutorial: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php (you will need to take time to learn about using 3D viewports, cameras, etc.; see here: http://msdn.microsoft.com/en-us/library/bb197901.aspx).
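For reference, a textured quad drawn with BasicEffect looks roughly like this; a sketch only, assuming view and projection matrices from your own camera (spriteTexture, viewMatrix, and projectionMatrix are illustrative names):

// Sketch: drawing a sprite as a textured quad in 3D space (XNA 4.0).
BasicEffect effect = new BasicEffect(GraphicsDevice)
{
    TextureEnabled = true,
    Texture = spriteTexture,       // e.g. your 64x64 sprite texture
    World = Matrix.Identity,
    View = viewMatrix,             // from your camera
    Projection = projectionMatrix
};

// A unit quad as a triangle strip (two triangles).
VertexPositionTexture[] verts =
{
    new VertexPositionTexture(new Vector3(-0.5f,  0.5f, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3( 0.5f,  0.5f, 0), new Vector2(1, 0)),
    new VertexPositionTexture(new Vector3(-0.5f, -0.5f, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3( 0.5f, -0.5f, 0), new Vector2(1, 1)),
};

GraphicsDevice.RasterizerState = RasterizerState.CullNone; // sidestep winding-order issues in this sketch
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, verts, 0, 2);
}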
To change the sprite's dimensions you need to change the Rectangle parameter passed to SpriteBatch.Draw. To apply the zoom to the rectangle:
// zoom defaults to 1.0f
Rectangle scaledRect = new Rectangle(
    originalRectangle.X,
    originalRectangle.Y,
    (int)(originalRectangle.Width * zoom),
    (int)(originalRectangle.Height * zoom));
When drawing use:
spriteBatch.Draw(Texture, scaledRect, Color.White);
Now, I'm sorry to assume, but without knowing why you are doing what you are doing, I think you are going about it wrong.
You should use a camera transformation to zoom in and out. It is done like this:
var transform =
    Matrix.CreateTranslation(new Vector3(-Position.X, -Position.Y, 0)) * // camera position
    Matrix.CreateRotationZ(_rotation) *                                  // camera rotation, default 0
    Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *                     // zoom, default 1
    Matrix.CreateTranslation(new Vector3(
        Device.Viewport.Width * 0.5f,
        Device.Viewport.Height * 0.5f, 0)); // Device from your GraphicsDeviceManager; centers the camera

SpriteBatch.Begin(               // your SpriteBatch variable
    SpriteSortMode.BackToFront,  // sprite sort mode - not related
    BlendState.NonPremultiplied, // blend state - not related
    null,                        // SamplerState
    null,                        // DepthStencilState
    null,                        // RasterizerState
    null,                        // Effect
    transform);                  // apply the camera transformation
This changes how sprites are displayed inside the sprite batch; however, you must now also account for the changed mouse coordinates (if you are using mouse input). To do that, transform the mouse position by the inverse of the camera matrix:
// pos: mouse position in screen space; transform: your camera matrix
public Vector2 ViewToWorld(Vector2 pos, Matrix transform)
{
    return Vector2.Transform(pos, Matrix.Invert(transform));
}
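For example, to convert the current mouse position using the transform matrix built above:

MouseState mouse = Mouse.GetState();
Vector2 worldPos = ViewToWorld(new Vector2(mouse.X, mouse.Y), transform);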
I wrote this code without being able to test it directly, so if something does not work, feel free to ask.
This is not a direct answer to your question; if you could explain why you want to resize the sprite when zooming instead of zooming the camera, maybe I could answer it better. You should also follow markmnl's links to understand world transformations and why you seem to need one in this situation.
I'm trying to make a zoom system for a C#/XNA game I'm working on. What I have is the camera's position, the camera's current zoom (stored as a float), and the GestureSample instance.
I grab both positions of the pinch and find their center to make that my zoom-in point. Then, when the person pinches inwards or outwards, I compare the distance between the two fingers before and after the pinch drag to determine whether to zoom in or out.
This kind of works, but it feels a bit floaty. I also haven't figured out how to make it zoom towards the position the user is pinching at. I get the middle point of the pinch and try to make the camera move in that direction as the zoom gets larger, but sometimes the camera reaches that point before 100% zoom and sometimes never.
It's all algorithm issues; I suppose what I want to know is whether there is a simple, straightforward way of doing this that I don't know of?
All you need to do is give your camera a target location (i.e. the "middle point" of your pinch) and an acceleration; the camera should then, independently of the pinch gesture, move towards the target location. This way the camera just ends up at the right spot, and on top of that you have a new feature for your camera :-)
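A minimal sketch of that idea, assuming an XNA camera with a Position field like the one in the earlier answer (targetPosition and smoothing are illustrative names):

// Sketch: ease the camera toward the pinch midpoint each frame instead
// of snapping to it. Call this from your game's Update().
Vector2 targetPosition;     // set this to the pinch midpoint
const float smoothing = 5f; // illustrative tuning value

public void UpdateCamera(GameTime gameTime)
{
    float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
    // Move a fraction of the remaining distance each frame: the camera
    // decelerates smoothly and still ends up exactly at the target.
    Position = Vector2.Lerp(Position, targetPosition, MathHelper.Clamp(smoothing * dt, 0f, 1f));
}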