I made a game in Unity, but it does not support different screen sizes. I searched online, but every solution I found involves a Canvas, and I do not use a Canvas in my project.
My game works at a 16:9 landscape resolution.
What can I do about it?
Example target resolutions:
1280x720 screen resolution
2960x1440 screen resolution
You can write a script, attach it to the background, and have it resize the background using the resolution currently in use.
Example:
int width = Screen.width;
int height = Screen.height;
transform.localScale = new Vector3(width, height, transform.localScale.z);
Just adapt the code; for 2D objects you can use a Vector2 instead.
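As a fuller sketch of that idea, assuming an orthographic 2D camera and a background sprite that is 1 x 1 world units at localScale = (1, 1, 1) (the camera and sprite setup here are my assumptions, not from the question):

```csharp
using UnityEngine;

// Sketch: stretch a background sprite to cover the whole orthographic view.
// Attach to the background object; assumes the sprite is 1 x 1 world units.
public class BackgroundScaler : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        // An orthographic camera shows orthographicSize world units
        // above and below its center, so the visible height is twice that.
        float worldHeight = cam.orthographicSize * 2f;
        float worldWidth = worldHeight * (float)Screen.width / Screen.height;
        transform.localScale = new Vector3(worldWidth, worldHeight,
                                           transform.localScale.z);
    }
}
```

Because the width is derived from the actual screen aspect ratio, the same script covers 1280x720 and 2960x1440 without any Canvas.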
I am talking about the Camera settings in Unity3D.
I'm trying to figure out whether I can change (at least) the background color of the gray area in the screenshot. The limits of the camera are changed programmatically. The motivation is that the playing area has to change dynamically depending on whether a child or an adult is playing. The screen is huge, more than 83 inches. When the playing area is rescaled, the area that is not drawn is gray and a bit ugly. I would like to know if I can at least define its color, or better still, use an image.
The screenshot you see is the screen capture in fullscreen mode, so it includes all the pixels.
After this brief explanation in words and images, let's get to the technical details. This is how I resize the room design area:
public static void SetViewportCalibration()
{
var camera = Camera.main;
camera.pixelRect = new Rect(MinX, MinY, MaxX, MaxY);
}
Is it possible to set the color of that gray area outside the new Rect(MinX, MinY, MaxX, MaxY)?
There are two ways off the top of my head to accomplish this. Both use two cameras.
The first way: create a second camera with a Depth value LESS than that of the dynamic camera. This second, "background" camera can then display anything you'd like: a separate Skybox, a separate UI, other scene content, and so on.
The second way: don't actually resize your dynamic camera. Instead, render it to a Target Texture, use that texture in a material, and assign the material to a Quad mesh (the most appropriate choice). The quad can then be used in your scene like any other 3D object, which means you can not only position it, but also scale it and even rotate it. The new camera you add can have its own Skybox, UI, and so on.
I would opt for the second way. Partly personal preference, but also because it sounds like it might suit your situation better and be easier to implement. You can also implement many more effects for extra "wow".
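A minimal sketch of the second approach, assuming a 16:9 render target and a Quad placed in front of a separate background camera (the object names, texture size, and scale values are my assumptions, not from the question):

```csharp
using UnityEngine;

// Sketch: the gameplay camera renders into a RenderTexture, which is shown
// on a quad; the play area is resized by scaling the quad, not the camera.
public class RenderToQuad : MonoBehaviour
{
    public Camera gameplayCamera;   // the camera that used to be resized
    public Renderer quadRenderer;   // a Quad visible to the background camera

    void Start()
    {
        // Render the gameplay camera into an offscreen 16:9 texture.
        var rt = new RenderTexture(1920, 1080, 24);
        gameplayCamera.targetTexture = rt;

        // Show that texture on the quad.
        quadRenderer.material.mainTexture = rt;
    }

    // Shrink or grow the visible play area, e.g. for a child player.
    // Everything outside the quad shows whatever the background camera draws.
    public void SetPlayAreaScale(float scale)
    {
        quadRenderer.transform.localScale = new Vector3(16f * scale, 9f * scale, 1f);
    }
}
```

The area around the quad is then just normal scene content seen by the background camera, so it can be a color, an image on another quad, or a Skybox.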
Try to create another camera with no objects in its view and the following settings:
Clear Flags: Solid Color,
Background: Pick a color,
ViewPort Rect: X = 0, y = 0, w = 1, h = 1,
Depth: A smaller value than the other camera (Set the depth of this camera to 0 and the depth of the other camera to 1)
This camera will work as background of your screen.
I hope that I understood the question :)
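For reference, the same settings can be applied from code. This is a sketch assuming the dynamic camera is Camera.main; the object name is mine:

```csharp
using UnityEngine;

// Sketch: create a solid-color background camera behind the dynamic one.
public class BackgroundCameraSetup : MonoBehaviour
{
    void Start()
    {
        var bg = new GameObject("BackgroundCamera").AddComponent<Camera>();
        bg.clearFlags = CameraClearFlags.SolidColor;
        bg.backgroundColor = Color.black;     // pick any color
        bg.rect = new Rect(0f, 0f, 1f, 1f);   // full viewport
        bg.depth = 0;                         // rendered first
        bg.cullingMask = 0;                   // sees no objects

        Camera.main.depth = 1;                // dynamic camera renders on top
    }
}
```

Everything outside the dynamic camera's pixelRect is then filled with the background camera's clear color instead of gray.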
I was recently using the WebCamTexture API in Unity, but ran across a couple issues.
My biggest issue is orientation. When I ran the app on my phone, it didn't work properly in portrait mode: the image was oriented as landscape while the phone was in portrait. Rotating the phone didn't help.
Then I changed the default orientation of the app to landscape and it worked, but the image was mirrored: letters and such appeared backwards. Rotating the image 180 degrees on the y-axis didn't help, since it is a one-sided image.
Here's the code for the camera alone:
cam = new WebCamTexture();
camImage.texture = cam;
camImage.material.mainTexture = cam;
cam.Play();
camImage.transform.localScale = new Vector3(-1,-1,1);
where camImage is a RawImage.
How can I rotate the image so it works correctly in portrait, and un-mirror it correctly? Am I using the API incorrectly?
Important: for some years now it has really only been practical to use "NatCam" for camera work on iOS or Android in Unity3D; virtually every app that uses the camera uses it. There is no other realistic solution until Unity actually does the job and ships working camera support.
The solution is basically this:
You have to do this EVERY FRAME; you can't do it just "when the camera starts". Unity messed this up, and the actual values only arrive after a second or so. It's a well-known problem.
private void _orient()
{
    // aspect ratio of the physical camera image
    float physical = (float)wct.width / (float)wct.height;
    rawImageARF.aspectRatio = physical;

    // flip vertically if the platform reports a mirrored image
    float scaleY = wct.videoVerticallyMirrored ? -1f : 1f;
    rawImageRT.localScale = new Vector3(1f, scaleY, 1f);

    // counter-rotate by the rotation the device reports
    int orient = -wct.videoRotationAngle;
    rawImageRT.localEulerAngles = new Vector3(0f, 0f, orient);

    showOrient.text = orient.ToString();
}
Right now I'm using XNA 4.0 with the Windows Phone Developer Tools to create a textured cube, using a predefined quad class from MSDN.
The front/back/left/right faces of the cube draw fine (for every cube that I make), but the top and bottom faces won't render. The rasterizer state's cull mode is set to None, and the quad that represents the top face exists and looks as if it should draw, but for some reason it doesn't.
Is there a problem with my code, or is this not happening for some other reason?
Here's the code:
Game1.cs: http://pastebin.com/RHU7jNXA
Quad.cs & Cube.cs: http://pastebin.com/P9gz5q4C
It's because your top and bottom faces have a height to them. They should have 0 height.
Here you are passing in a value as height:
Faces[4] = new Quad(topFaceOrigin, Vector3.Normalize(Vector3.Down), Up, Size, Size);
And then here in the Quad constructor it's being used to compute incorrect LowerLeft and LowerRight values:
LowerLeft = UpperLeft - (Up * height);
LowerRight = UpperRight - (Up * height);
I would recommend changing how you create all your quads; each face really should get different parameters. Right now all your faces are being passed practically the same values.
I'm developing a UI for a school project, and I've tried scaling methods similar to those listed here, but here is the issue:
Our project is developed at 1440 x 900, so I've made my own images to fit that resolution. When we have to demo the project in class, the projector can only render up to 1024 x 768, so many things go missing from the screen. I have added window-resizing capability, and I'm doing my scaling like this: I have my own class called Button, which has a Texture2D and a Vector2 position, constructed by Button(Texture2D img, float width, float height).
My idea is to set the position of the image to a scalable percentage of the window width and height, so I set the position of the image to a number between 0 and 1 and then multiply by the window width and height to keep everything scaled properly.
(This code is not proper syntax; I'm just trying to convey the point.)
Button button = new Button(texture, 0.01f, 0.01f);
int height = (int)(GraphicsDevice.Viewport.Height * button.Position.Y);
int width = (int)(GraphicsDevice.Viewport.Width * button.Position.X);
Rectangle rect = new Rectangle(0, 0, width, height);
spriteBatch.Begin();
spriteBatch.Draw(button.Img, rect, Color.White);
spriteBatch.End();
It doesn't end up scaling anything when I draw it and resize the window by dragging the mouse around. If I hard-code a different buffer height and buffer width to begin with, the image stays around the same size regardless of resolution, except that the smaller the resolution, the more pixelated the image looks.
What is the best way to design my program to allow for dynamic Texture2D scaling?
As Hannesh said, if you run it in fullscreen you won't have these problems. However, you also have a fundamental problem with the way you are doing this. Instead of using the position of the sprite, which will not change at all during window resize, you must use the size of the sprite. I often do this using a property called Scale in my Sprite class. So instead of clamping the position of the sprite between 0 and 1, you should be clamping the Size property of the sprite between 0 and 1. Then as you rescale the window it will rescale the sprites.
In my opinion, a better way to do this is to have a default resolution, in your case 1440 x 900. Then, if the window is rescaled, just multiply all sprites' scaling factors by the ratio of the new screensize to the old screensize. This takes only 1 multiplication per resize, instead of a multiplication per update (which is what your method will do, because you have to convert from the clamped 0-1 value to the real scale every update).
Also, the effects you noticed during manual rescale of the sprites is normal. Rescaling images to arbitrary sizes causes artifacts in the rendered image because the graphics device doesn't know what to do at most sizes. A good way to get around this is by using filler art during the development process and then create the final art in the correct resolution(s). Obviously this doesn't apply in your situation because you are resizing a window to arbitrary size, but in games you will usually only be able to switch to certain fixed resolutions.
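A minimal sketch of that ratio-based approach in XNA-style C# (the field names, the resize hook, and the helper are my assumptions, not from the answer):

```csharp
// Sketch: rescale all sprites once per window resize, relative to a
// 1440 x 900 design resolution.
const float DesignWidth = 1440f;
const float DesignHeight = 900f;

float scaleX = 1f;
float scaleY = 1f;

// Call this from the game window's ClientSizeChanged handler,
// so the ratio is recomputed only when the window actually changes.
void OnResize(int newWidth, int newHeight)
{
    scaleX = newWidth / DesignWidth;
    scaleY = newHeight / DesignHeight;
}

// In Draw: lay out each sprite in design coordinates (1440 x 900),
// then multiply by the current scale factors to get screen pixels.
Rectangle ScaledRect(Rectangle designRect)
{
    return new Rectangle(
        (int)(designRect.X * scaleX),
        (int)(designRect.Y * scaleY),
        (int)(designRect.Width * scaleX),
        (int)(designRect.Height * scaleY));
}
```

With this layout, a button authored at 1440 x 900 lands in the same relative spot on the 1024 x 768 projector, and the per-frame draw code stays multiplication-free except for the cached factors.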
I declared a device + sprite in a Windows.Form like this
PresentParameters presentParameters = new PresentParameters();
presentParameters.Windowed = true;
presentParameters.SwapEffect = SwapEffect.Copy;
var device = new Device(Manager.Adapters.Default.Adapter, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, presentParameters);
var sprite = new Sprite(device);
I loaded a texture via TextureLoader.FromFile(device, "image.png");
In my Draw method I started the device scene, then the sprite scene, then I wrote
sprite.Draw2D(texture, PointF.Empty, 0, PointF.Empty, Color.White);
The drawing itself works, but it draws only a large portion of the image, scaled up to the screen (about 90% of it).
I tried it with a source rectangle of the given texture size too, but the same bug occurred.
Any suggestions?
I am experienced in C++ DirectX, but not C# DirectX, so take this with a grain of salt.
In my experience with the Sprite interface, you need to scale, rotate, and translate just as you do with 3D objects. You may be forgetting to scale. Here is the code of my Update function.
void Button::Update()
{
    Sprite->Begin(D3DXSPRITE_ALPHABLEND);

    // build world = scale * translation
    D3DXMATRIX trans;
    D3DXMATRIX scale;
    D3DXMATRIX world;
    D3DXMatrixIdentity(&world);
    D3DXMatrixTranslation(&trans, pos.x, pos.y, 0.0f);
    D3DXMatrixScaling(&scale, scaleFactor, scaleFactor, 1.0f);
    world = scale * trans;
    Sprite->SetTransform(&world);

    // draw centered by offsetting half the width/height
    D3DXVECTOR3 center(-width2, -height2, 0.0f);
    Sprite->Draw(buttonTexture, NULL, NULL, &center, whitecol);

    Sprite->End();
}
Admittedly, this isn't a very object-oriented way of doing things, but it suits my needs.
Caveat: I am not a DirectX expert, but I had the same problem.
When you load the sprite, it is expanded to a size where each dimension is a power of 2. For example, if your sprite was 200 x 65, it will be stored as 256 wide (slightly stretching the image) by 128 tall (almost doubling the height).
When you draw the image, it will therefore be almost twice the height you expected.
My solution was to modify my image file to have a power-of-2 height and width, and then draw only the portion that was the original size.
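A sketch of that workaround, using the 200 x 65 example above (I believe Managed DirectX's Sprite.Draw2D has an overload taking a source rectangle and a destination size, but treat the exact overload as an assumption):

```csharp
// The texture file has been padded to 256 x 128 (the next powers of 2
// above 200 x 65). Pass a source rectangle covering only the original
// 200 x 65 region, and a matching destination size, so the padding is
// never drawn or stretched.
Rectangle source = new Rectangle(0, 0, 200, 65);
sprite.Draw2D(texture, source, new SizeF(200, 65), PointF.Empty, Color.White);
```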