I'm making an infinite runner in Unity. I have a tile spawner/generator that creates GameObjects based on the screen height and width. I managed to make it work for the width, but when the height changes the camera doesn't follow, and I can't get that to work.
Anyway, my code isn't good; I have spent the last 6 hours on this and I'm not happy with the result.
I found out you can assign a fixed aspect ratio to the camera and it will auto-scale to that ratio, but that distorts the image and doesn't look great.
Given all that, what is the best way to auto-scale a 2D platform game (NOT CONSIDERING GUI, only GameObjects)?
I'm using this script to stretch sprites based on their size; it works for most cases. I use 5 for the camera's orthographicSize.
using UnityEngine;
using System.Collections;

#if UNITY_EDITOR
[ExecuteInEditMode]
#endif
public class SpriteStretch : MonoBehaviour
{
    public enum Stretch { Horizontal, Vertical, Both };
    public Stretch stretchDirection = Stretch.Horizontal;
    public Vector2 offset = new Vector2(0f, 0f);

    SpriteRenderer sprite;
    Transform _thisTransform;

    void Start()
    {
        _thisTransform = transform;
        sprite = GetComponent<SpriteRenderer>();
        StartCoroutine("stretch");
    }

#if UNITY_EDITOR
    void Update()
    {
        scale();
    }
#endif

    IEnumerator stretch()
    {
        yield return new WaitForEndOfFrame();
        scale();
    }

    void scale()
    {
        // World-space size of the camera view.
        float worldScreenHeight = Camera.main.orthographicSize * 2f;
        float worldScreenWidth = worldScreenHeight / Screen.height * Screen.width;

        // Scale factors needed for the sprite to fill the screen horizontally/vertically.
        float ratioScale = worldScreenWidth / sprite.sprite.bounds.size.x;
        ratioScale += offset.x;
        float h = worldScreenHeight / sprite.sprite.bounds.size.y;
        h += offset.y;

        switch (stretchDirection)
        {
            case Stretch.Horizontal:
                _thisTransform.localScale = new Vector3(ratioScale, _thisTransform.localScale.y, _thisTransform.localScale.z);
                break;
            case Stretch.Vertical:
                _thisTransform.localScale = new Vector3(_thisTransform.localScale.x, h, _thisTransform.localScale.z);
                break;
            case Stretch.Both:
                _thisTransform.localScale = new Vector3(ratioScale, h, _thisTransform.localScale.z);
                break;
            default: break;
        }
    }
}
First I want to say there are no good solutions, only less bad ones.
The easiest is to support just one aspect ratio; that way you can scale everything up and down without distortion, but that is almost never an option.
The second easiest is to keep the game in one aspect ratio and add black bars (or something) to the edges so the actual game area keeps that aspect ratio.
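A rough sketch of that black-bar idea (my own illustration, not from this answer) is to shrink the camera's viewport rect to the target aspect ratio; a second full-screen camera that just clears to black can fill the bars:

using UnityEngine;

public class ForceAspectRatio : MonoBehaviour
{
    // Illustrative target; anything outside the resulting viewport is left to a background camera.
    public float targetAspect = 16f / 9f;

    void Start()
    {
        var cam = GetComponent<Camera>();
        float windowAspect = (float)Screen.width / Screen.height;
        float scale = windowAspect / targetAspect;

        if (scale < 1f)
            // Window is taller than the target: bars on top and bottom.
            cam.rect = new Rect(0f, (1f - scale) / 2f, 1f, scale);
        else
            // Window is wider than the target: bars on the sides.
            cam.rect = new Rect((1f - 1f / scale) / 2f, 0f, 1f / scale, 1f);
    }
}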
If you still want to support different aspect ratios, the solution is to increase the game area, but that might give players with certain aspect ratios an advantage, since they can see further ahead or higher, and it might mess up your level design.
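If you go the bigger-game-area route, one hedged sketch (my own; FixedWidthCamera and targetWorldWidth are illustrative names) is to keep the visible world width constant and let the aspect ratio decide how much extra is visible vertically:

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class FixedWidthCamera : MonoBehaviour
{
    public float targetWorldWidth = 10f; // world units that should always be visible horizontally

    void Start()
    {
        var cam = GetComponent<Camera>();
        // orthographicSize is half the visible height, so derive it from the desired width and aspect.
        cam.orthographicSize = targetWorldWidth / cam.aspect * 0.5f;
    }
}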
I just posted an answer on how to make everything work automatically as long as you have a constant aspect ratio here: https://stackoverflow.com/a/25160299/2885785
Related
I am developing a VR application in Unity and I am struggling to develop a smooth UI scroll using my VR controller's joystick. So far what I have looks like this...
private void Update()
{
    float joyStickDirection = globals.menuInteraction_Scroll.GetAxis(SteamVR_Input_Sources.Any).y; // this is either 1 for up or -1 for down

    if (joyStickDirection != 0)
    {
        float multiplier = joyStickDirection * 5f;
        scrollRect.verticalNormalizedPosition = scrollRect.verticalNormalizedPosition + (multiplier * Time.deltaTime);
    }
}
...this works, but has two problems. Firstly, it scrolls at different speeds depending on how big the scrolling container is. Secondly, the scrolling is not very smooth, as it is clearly just skipping varying gaps between 0 and 1.
I think I know what's wrong but I don't have enough experience working inside Update() to figure out the correct approach. Can anyone advise?
Actually, you don't necessarily have to go through the ScrollRect component itself.
I would usually simply do something like this:
public class ScrollExample : MonoBehaviour
{
    public float speed = 5f;
    public Transform ScrollContent;

    void Update()
    {
        // this is either 1 for up or -1 for down
        var joyStickDirection = globals.menuInteraction_Scroll.GetAxis(SteamVR_Input_Sources.Any).y;

        if (joyStickDirection != 0)
        {
            var multiplier = joyStickDirection * speed;

            // You want to invert the direction since scrolling down actually means
            // moving the content up
            ScrollContent.position -= Vector3.up * multiplier * Time.deltaTime;
        }
    }
}
The ScrollRect then updates and handles the rest itself. The speed is in units per second, or in pixels per second on a Screen Space - Overlay canvas, regardless of how big the content is.
Usually you would want to adjust the elasticity of the ScrollRect or simply set the Movement Type to Clamped right away.
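If you would rather keep driving verticalNormalizedPosition as in the question, a rough sketch (my own assumption, not part of this answer; it assumes the ScrollRect's viewport is assigned) is to divide by the scrollable distance so the speed no longer depends on the content size:

// Inside Update(), after reading joyStickDirection:
float contentHeight = scrollRect.content.rect.height;
float viewportHeight = scrollRect.viewport.rect.height;
float scrollableDistance = Mathf.Max(contentHeight - viewportHeight, 0.001f);

// Normalizing by the scrollable distance keeps the scroll speed constant regardless of content size.
scrollRect.verticalNormalizedPosition += joyStickDirection * speed * Time.deltaTime / scrollableDistance;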
I have a large sprite on my screen but I want the image it displays to scroll infinitely horizontally.
I currently have this code, which does not have any effect at all.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ObjLayer : MonoBehaviour
{
    public float Speed = 0;
    public int layer = 0;

    protected Material _material;
    protected float currentscroll = 0f;

    // Start is called before the first frame update
    void Start()
    {
        _material = GetComponent<SpriteRenderer>().material;
    }

    // Update is called once per frame
    void Update()
    {
        currentscroll += Speed * Time.deltaTime;
        var currentOffset = _material.GetTextureOffset("Layer3");
        _material.SetTextureOffset("Layer3", new Vector2(currentscroll, 0));
        _material.mainTextureOffset = new Vector2(currentscroll, 0);
    }
}
Just to note that I am setting both SetTextureOffset and mainTextureOffset, as neither seems to be working.
Also currentOffset is changing as expected but the texture is not moving on the screen.
You are most likely using a Material that doesn't support texture offsetting. Unless the property is HideInInspector, you can check in the Inspector whether your material supports it by looking for the Offset X/Y input fields underneath the texture slot.
The standard Unlit/Texture shader has this property, as do any newly created unlit shaders (as seen in my example screenshot).
If you are using a custom shader, your texture may not be named "_MainTex", which is the property Unity looks for when you use mainTextureOffset, as cited from the docs:
By default, Unity considers a texture with the property name "_MainTex" to be the main texture.
Using material.mainTextureOffset works fine for me when using a shader where the Texture is called _MainTex:
public Vector2 offset;
private Material material;

private void Start()
{
    material = GetComponent<MeshRenderer>().material;
}

private void Update()
{
    material.mainTextureOffset = offset;
}
Result (Gyazo gif)
In your example
_material.SetTextureOffset("Layer3", new Vector2(currentscroll, 0));
you are looking for a property named "Layer3" in your shader. This is not a name that Unity uses by default (to my knowledge), so if you are using a custom shader, make sure your texture property is actually named Layer3 inside that shader.
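As a quick sanity check (my own sketch, not part of the original answer), you can ask the material at runtime whether it even exposes that property:

// Only offset the texture if the shader actually has a property with that name.
if (_material.HasProperty("Layer3"))
{
    _material.SetTextureOffset("Layer3", new Vector2(currentscroll, 0f));
}
else
{
    Debug.LogWarning("Material has no texture property named 'Layer3'");
}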
Update after OP's comment:
Tiling and offsetting are not available to materials on a SpriteRenderer; sprites are assumed not to be offset, as indicated by the warning Unity gives when you try to add a material that has tiling/offsetting set on its texture:
Material texture property _MainTex has offset/scale set. It is incompatible with SpriteRenderer.
Instead, use a Quad, Plane, Image or RawImage component, which does support materials with tiling/offsetting, in combination with the above code.
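For example, a minimal sketch of that suggestion (my own; it assumes the material sits on a Quad's MeshRenderer, its shader exposes _MainTex, and the texture's wrap mode is set to Repeat):

using UnityEngine;

public class ScrollQuad : MonoBehaviour
{
    public float speed = 0.5f; // UVs per second
    Material material;

    void Start()
    {
        material = GetComponent<MeshRenderer>().material;
    }

    void Update()
    {
        // Advancing the offset every frame scrolls the texture horizontally.
        material.mainTextureOffset += new Vector2(speed * Time.deltaTime, 0f);
    }
}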
I think technically it is still possible to use offsetting with sprites by throwing on a shader that supports it and ignoring the warning, but I cannot guarantee this won't break down the line, or what the performance implications are.
I need some help with my college project. I have a cylinder and need it to act as a coil: if I touch the cylinder's surface, its height should decrease (it scales in the y direction) as if I were pressing on a coil, and when I remove my hand it should return to its original size.
This is what I have so far, but I still have some problems that I can't solve.
public class Deformation : MonoBehaviour
{
    Vector3 tempPos;

    private void InteractionManager_SourceUpdated(InteractionSourceUpdatedEventArgs hand)
    {
        if (hand.state.source.kind == InteractionSourceKind.Hand)
        {
            Vector3 handPosition;
            hand.state.sourcePose.TryGetPosition(out handPosition);

            // Rough bounding box of the cylinder in world space.
            float negXRange = transform.position.x - transform.localScale.x;
            float posXRange = transform.position.x + transform.localScale.x;
            float negYRange = transform.position.y - (transform.localScale.y / 2);
            float posYRange = transform.position.y + (transform.localScale.y / 2);
            float negZRange = transform.position.z - transform.localScale.z;
            float posZRange = transform.position.z + transform.localScale.z;

            float handX = handPosition.x;
            float handY = handPosition.y;
            float handZ = handPosition.z;

            if ((negXRange <= handX) && (handX <= posXRange) && (negYRange <= handY) && (handY <= posYRange) && (negZRange <= handZ) && (handZ <= posZRange))
            {
                // Hand is inside the bounds: squash the cylinder to the hand's height.
                tempPos.y = handPosition.y;
                transform.localScale = tempPos;
            }
            else
            {
                // Hand is outside: restore the default height.
                tempPos.y = 0.3f;
                transform.localScale = tempPos;
            }
        }
    }

    // Use this for initialization
    void Start()
    {
        tempPos = transform.localScale;
        InteractionManager.InteractionSourceUpdated += InteractionManager_SourceUpdated;
    }
}
I attached two scripts to my object (the cylinder): the TapToPlace script from the HoloToolkit and the deformation script above. The problem is that when I deploy to my HoloLens to test, if I first place the cylinder where I need it and then try to deform it, it is placed but not deformed. If I try it the other way around, both work. Any ideas why the deformation script does not work after the TapToPlace one?
The cylinder, when viewed through my HoloLens, is somewhat transparent; I can see my hand through it. I need it to be more solid.
I wonder if there is something like a delay I can use, because with the deformation script above, the cylinder is scaled to my hand position and then scaled back to its default size so fast that it appears to blink.
First I place the cylinder on a surface (a table, for example), then I begin to deform it. When I commented out the else part in the deformation script above, it stayed scaled without returning to the original size, but it scales symmetrically, so its height shrinks from both top and bottom and the base of the cylinder lifts off the table. I need the base of the cylinder to stay stable and keep touching the table under it.
Note: I am using Unity 2017.3.1f1 (64-bit) - HoloToolkit-Unity-2017.2.1.3
Thank you in advance.
1) Did you see the MRTK 2017.2.1.4 release? It has some useful features such as two-handed resizing/scaling of objects. The BoundingBox code in the new MRTK release does moving and resizing in one component; it might be a better base to start from than TapToPlace, or at least show how the two types of transform can work together.
2) What colour is your object? HoloLens renders black as transparent, so try making the object bright white for testing. Also, double-check that the brightness is turned up to full (the buttons on the left-hand side of the HoloLens). Finally, check that your shader is the MRTK Standard shader (again, the 2017.2.1.4 release has new shader code you might want to try). In a room without direct sunlight it should pretty much cover up your hand.
4) I'm not sure I follow completely, but the pivot point could be important here. If it is centred in the middle of the coil (as I'd imagine it is) then when you deform the coil down it will still stay centered at that central pivot point.
If you instead set the pivot point to the bottom of the coil, touching the table, you can scale it and that point stays on the table while the top does all the moving.
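If you can't change the mesh pivot itself, one hedged workaround (my own sketch; baseY is an assumed table height, and Unity's default cylinder is 2 units tall so its world half-height equals localScale.y) is to move the cylinder up as it is scaled so the base stays put:

// Emulate a bottom pivot by repositioning the cylinder whenever its Y scale changes.
void SetCoilHeight(Transform coil, float newYScale, float baseY)
{
    Vector3 scale = coil.localScale;
    scale.y = newYScale;
    coil.localScale = scale;

    // Keep the base at baseY: centre = base + half-height.
    coil.position = new Vector3(coil.position.x, baseY + newYScale, coil.position.z);
}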
I have a 2D game in XNA with a scrolling camera. Unfortunately, when the screen moves, I can see some artifacts: mostly blur and additional lines on the screen.
I thought about changing coordinates before drawing (approximating with Ceiling() or Floor() consistently), but this seems a little inefficient. Is this the only way?
I use SpriteBatch for rendering.
This is my drawing method from Camera:
Vector2D works on doubles, Vector2 works on floats (used by XNA), and Sprite is just a class holding data for spriteBatch.Draw.
public void DrawSprite(Sprite toDraw)
{
    Vector2D drawingPostion;
    Vector2 drawingPos;

    drawingPostion = toDraw.Position - transform.Position;
    drawingPos.X = (float) drawingPostion.X * UnitToPixels;
    drawingPos.Y = (float) drawingPostion.Y * UnitToPixels;

    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth + zsortingValue);
}
My idea is to do this:
drawingPos.X = (float) Math.Floor(drawingPostion.X*UnitToPixels);
drawingPos.Y = (float) Math.Floor(drawingPostion.Y*UnitToPixels);
And it solves the problem. I think I can accept it this way. But are there any other options?
GraphicsDevice.SamplerStates[0] = SamplerState.PointWrap;
This isn't so much a problem with your camera as it is the sampler. Using a Point Sampler state tells the video card to take a single point color sample directly from the texture depending on the position. Other default modes like LinearWrap and LinearClamp will interpolate between texels (pixels on your source texture) and give it a very mushy, blurred look. If you're going for pixel-graphics, you need Point sampling.
With linear interpolation, if you have red and white next to each other in your texture, and it samples between the two (by some aspect of the camera), you will get pink. With point sampling, you get either red or white. Nothing in between.
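For example (a sketch assuming the XNA 4.0 SpriteBatch.Begin overload), the sampler state can also be set per batch rather than directly on the GraphicsDevice:

// Point sampling for every draw call in this batch; no filtering between texels.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    SamplerState.PointWrap, null, null);
// ... DrawSprite(...) calls ...
spriteBatch.End();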
Yes, it is possible... try something like this:
bool redrawSprite = false;
Sprite toDraw;

void MainRenderer()
{
    if (redrawSprite)
    {
        DrawSprite(toDraw);
        redrawSprite = false;
    }
}

void ManualRefresh()
{
    // Create or set your sprite and assign it to 'toDraw'.
    redrawSprite = true;
}
This way you let the main loop do the work as intended.
I'm just starting with physics, so I'm not always sure about what I'm doing. It's a 2D project, but I'm using 3D physics objects like SphereCollider, etc.
What I have:
Objects floating in space and affecting each other through gravity:
protected virtual IEnumerator OnTriggerStay(Collider other)
{
    yield return new WaitForFixedUpdate();

    if (other.attachedRigidbody)
    {
        Vector3 offsetVector = this.transform.position - other.transform.position;
        float distance = offsetVector.magnitude;
        float gravityForce = (other.rigidbody.mass * mass) / Mathf.Pow(distance, 2);

        // Clamp gravity.
        if (gravityForce > 1.0F)
        {
            gravityForce = 1.0F;
        }

        other.attachedRigidbody.constantForce.force = offsetVector.normalized * gravityForce;
    }
}
There are controllable objects on which the player can click and drag a line away from the object in order to give it a force (shoot) in the opposite direction.
What I want to achieve:
The player should see a rough prediction of the trajectory while aiming. That means the prediction needs to take into account the current velocity, the force that would be applied when the player releases the mouse button, and the gravity of the surrounding objects.
What I have tried so far:
For testing purposes I just save the computed/predicted positions in an array and draw those positions in OnDrawGizmos().
I wrote a method, computeGravityForPosition(Vector3 position), which returns the gravity influence for a given position.
And that's how I try to calculate the positions:
private void drawWayPrediction()
{
    Vector3 pos = this.transform.position;

    // The offsetVector for the shooting action.
    Vector3 forceVector = pos - Camera.main.ScreenToWorldPoint(Input.mousePosition);
    forceVector.z = 0.0F;

    // The predicted momentum scaled up to increase the strength.
    Vector3 force = (forceVector.normalized * forceVector.magnitude);

    // 1. I guess that this is wrong, but don't know how to do it properly.
    momentum = this.rigidbody.velocity + force;

    for (int i = 0; i < predictionPoints.Length; i++)
    {
        float t = i * Time.fixedDeltaTime;
        momentum += computeGravityForPosition(pos);
        pos += momentum * t * t;
        predictionPoints[i] = pos;
    }
}
At the beginning, when the objects are just slowly approaching each other, it looks okay. After the first shot, the prediction is completely wrong. I guess it is because of 1. in the code; just adding the force to the velocity is probably horribly wrong.
Thank you very much for your time.
EDIT:
I removed seemingly unnecessary parts.
I still think the main problem lies in 1. in the code. I just don't know how to combine the current movement of the object (of which, as far as I understand Unity's physics engine, I only have the current velocity) with the newly created force:
Vector3 forceVector = pos - Camera.main.ScreenToWorldPoint(Input.mousePosition);
Vector3 force = (forceVector.normalized * forceVector.magnitude);
So if you are using a newer version of Unity (probably 2018 or above), you can use the nice method
Physics.Simulate(dt); // delta time, dt, is the amount of time to simulate.
https://docs.unity3d.com/ScriptReference/Physics.Simulate.html
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/PhysicsScene.Simulate.html
By using this function you can manually advance the simulation.
This method should be applied to a different physics scene.
Therefore I suggest that when you click, you simulate a few physics steps (the more you simulate, the more accurate the indication the player will get);
with every step you store the position of the object, and when you are done simulating you draw a line between all the points.
In my opinion, it should run quite fast if done correctly.
The code should look something like this:
public PhysicsScene physicsScene;
GameObject actualBall;
GameObject simulatedBall;
OnClick() {
simulatedBall.SetPosition(actualBall.transform.position);
if (!physicsScene.IsValid())
return; // do nothing if the physics Scene is not valid.
for (int i=0; i < 10; i++) {
physicsScene.Simulate(Time.fixedDeltaTime);
// store the position.
myPoints.append(simulatedBall.rb.position);
}
// draw a line from the stored points.
}
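For creating the separate physics scene itself, a rough sketch (assuming the Unity 2018.3+ multi-scene physics API; names are illustrative):

using UnityEngine;
using UnityEngine.SceneManagement;
// ... inside the same component as above ...
void CreateSimulationScene()
{
    // An isolated scene with its own physics world.
    Scene simulationScene = SceneManager.CreateScene("TrajectorySimulation",
        new CreateSceneParameters(LocalPhysicsMode.Physics3D));
    physicsScene = simulationScene.GetPhysicsScene();

    // The simulated ball (and any colliders it should hit) must live in that scene.
    SceneManager.MoveGameObjectToScene(simulatedBall, simulationScene);
}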
In addition, there is this video that I hope will help. Good luck:
https://www.youtube.com/watch?v=GLu1T5Y2SSc
I hope I answered your question; if not, tell me :)
Disclaimer: unfortunately I suck at math, so I can't provide any code for the calculations.
Now that the legal stuff is out of the way :)
In my opinion you are looking at this all wrong. What you need is to calculate the curve (the path of the object's trajectory) and then simply plot the curve in OnDrawGizmos with a line renderer.
You don't need to simulate the behaviour of the object. Not only is this a LOT faster, it's also simpler in terms of TimeScale shenanigans: by changing the TimeScale you would also be affecting the TimeScale of your trajectory simulation, which would most likely look and feel weird.
By doing a basic trajectory calculation you will not have this issue.
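For reference, a rough sketch of such a calculation (my own; it assumes gravity is approximately constant over the predicted window, which the question's n-body gravity only roughly satisfies):

// Closed-form kinematics: p(t) = p0 + v0 * t + 0.5 * a * t^2
Vector3[] PredictTrajectory(Vector3 startPosition, Vector3 startVelocity, Vector3 acceleration, int steps, float timeStep)
{
    var points = new Vector3[steps];
    for (int i = 0; i < steps; i++)
    {
        float t = i * timeStep;
        points[i] = startPosition + startVelocity * t + 0.5f * acceleration * t * t;
    }
    return points;
}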
PS: This link might help.