Create a new photo at the tapped point - C#

I want to create a new image at the point I touch, and I want it to happen for every touch, so I wrote the following code inside the void Update() function.
public Canvas cv;
public Image im;
I have defined the UI elements above.
for (var i = 0; i < Input.touchCount; ++i)
{
    Touch touch = Input.GetTouch(i);
    if (touch.phase == TouchPhase.Began)
    {
        Instantiate(im, touch.position, Quaternion.identity)
            .transform.SetParent(cv.transform, false);
    }
}
And when I try it with the Unity Remote app, the picture appears about 3-4 finger-widths above the point I tap. What's the problem? Please help!
The following helper converts a world position into canvas space, compensating for the CanvasScaler:
Vector2 scaleSomething(Vector3 worldPosition)
{
    var screenPosition = Camera.main.WorldToScreenPoint(worldPosition);
    var scaler = cv.GetComponentInParent<CanvasScaler>();
    var guiScale = 1.0f;
    if (Mathf.Approximately(scaler.matchWidthOrHeight, 0.0f))
        guiScale = scaler.referenceResolution.x / (float)Screen.width;
    else if (Mathf.Approximately(scaler.matchWidthOrHeight, 1.0f))
        guiScale = scaler.referenceResolution.y / (float)Screen.height;
    return new Vector2(
        (screenPosition.x - (Screen.width * 0.5f)) * guiScale,
        (screenPosition.y - (Screen.height * 0.5f)) * guiScale);
}
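A hedged usage sketch, assuming the image's anchors sit at the canvas center (note that a touch already gives you a screen position, so the world-space round trip below exists only to match the helper's signature):
// Hypothetical usage inside the touch loop from Update().
Vector3 world = Camera.main.ScreenToWorldPoint(
    new Vector3(touch.position.x, touch.position.y, Camera.main.nearClipPlane));
Image img = Instantiate(im, cv.transform, false);
img.rectTransform.anchoredPosition = scaleSomething(world);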

Try removing the SetParent part after instantiating.
Edit: what is happening is that when you instantiate your objects as children, they get translated/scaled in local space relative to the parent. Because your canvas is being stretched (scaled up on Y), your child elements are also scaled up on Y and end up out of place.
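Another option, sketched here as an assumption rather than part of the answer above, is to let Unity do the screen-to-canvas conversion:
// Sketch: convert a screen-space touch into the canvas' local space,
// then parent the new image without keeping its world position.
// Assumes the image prefab's anchors are centered on the canvas.
Vector2 localPoint;
RectTransform canvasRect = cv.GetComponent<RectTransform>();
if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
        canvasRect, touch.position, cv.worldCamera, out localPoint))
{
    Image img = Instantiate(im, cv.transform, false);
    img.rectTransform.anchoredPosition = localPoint;
}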

Related

UI button doesn't move correctly

I have a code for a crafting system that checks if the inventory has the ingredients needed to craft an item and adds a button to craft it. The problem is when I want to position my button it goes way off the canvas. I have seen some people saying that it has something to do with rect transform. I've been stuck with it for over an hour. Any help is appreciated.
I have tried:
- removing the SetParent() function,
- using anchoredPosition,
- using localPosition.
My code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Crafting : MonoBehaviour
{
    public List<recipe> recipes = new List<recipe>();
    public GameObject base_item, parent;
    List<GameObject> items = new List<GameObject>();
    public int y = 75;
    public int x = -45;
    public Inv inv;

    private void Start()
    {
        inv = GetComponent<Inv>();
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Tab))
        {
            checkitems();
            Debug.Log("y = " + y + " x = " + (x - 40));
        }
    }

    public void checkitems()
    {
        for (int i = 0; i < recipes.Count; i++)
        {
            recipe r = recipes[i];
            for (int x = 0; x < r.ingredients.Count; x++)
            {
                if (!inv.hasitem(r.ingredients[x]))
                {
                    return;
                }
            }
            showitem(r.result);
        }
    }

    public void onClick(int _slot)
    {
        recipe r = recipes[_slot];
        for (int i = 0; i < r.ingredients.Count; i++)
        {
            inv.removeitem(inv.getitem(r.ingredients[i]));
        }
        inv.additem(inv.getFirstAvailable(), r.result, r.stack);
    }

    public void showitem(string name)
    {
        GameObject obj = Instantiate(base_item);
        if (items.Count != 0)
        {
            if ((items.Count % 3) != 0)
            {
                Debug.Log("first thing");
                obj.GetComponent<RectTransform>().position = new Vector2(x, y);
                obj.transform.SetParent(parent.transform);
                obj.SetActive(true);
                items.Add(obj);
                x = x + 40;
                Debug.Log("x + 40");
            }
            else if (((items.Count + 1) % 3) == 0)
            {
                Debug.Log("second thing");
                x = -45;
                Debug.Log("x + 40");
                y = y + 40;
                Debug.Log(" y + 40");
                obj.GetComponent<RectTransform>().position = new Vector2(x, y);
                obj.transform.SetParent(parent.transform);
                obj.SetActive(true);
                items.Add(obj);
            }
        }
        else
        {
            obj.GetComponent<RectTransform>().position = new Vector2(x, y);
            obj.transform.SetParent(parent.transform);
            obj.SetActive(true);
            items.Add(obj);
            x = x + 40;
            Debug.Log("x + 40");
        }
    }
}
Blue circle: where it spawns. Red circle: where I want it to be.
It seems you are mixing up a few concepts around the cause of your problem. First, I want to address the red X over your scroll bar. Whenever this occurs, it means the RectTransform of that UI object has been dragged from its positive vertices to negative or vice versa, causing it to almost invert. I would correct this, but it is not the reason your objects are not parenting correctly.
Generally, with UI objects, I would never use localPosition, just anchoredPosition. localPosition is a field on Transform, which RectTransform inherits from. As RectTransforms modify their position through pivots, anchors, and anchored positions, localPosition will most likely need to recalculate data to properly move the object, whereas anchoredPosition has already done these calculations.
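For illustration, a minimal sketch (reusing the question's obj and the -45/75 values from its fields):
// Sketch: place a UI element 45 px left and 75 px up from its anchor point.
RectTransform rt = obj.GetComponent<RectTransform>();
rt.anchoredPosition = new Vector2(-45f, 75f);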
I believe the issue with your current code is how you are using SetParent. SetParent has a second parameter that governs whether the object keeps its world-space position after being parented. As you are not passing a bool for this parameter, it defaults to true. Since you want your objects parented without keeping their world-space positions, you should pass false.
In your case, as it looks like you want to lay the objects out in a grid childed to this ScrollRect, I would attach a GridLayoutGroup to the Content of your scroll view and parent the new objects to that object. You can set the grid's column constraint and spacing to get the same layout you are attempting to achieve in code.
To summarize: I would remove all the hand placement you are doing in code with localPosition and anchoredPosition and just attach a GridLayoutGroup, as sketched below. To fix the current positioning of your objects relative to the parent, change all lines of obj.transform.SetParent(parent.transform); to obj.transform.SetParent(parent.transform, false);. If you want to keep positioning in code instead of using a layout element, call SetParent first and then set anchoredPosition rather than localPosition, as SetParent with false passed in will override any position you set beforehand.
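A minimal sketch of showitem under that approach, assuming a GridLayoutGroup (constrained to 3 columns, with cell size and spacing matching the old 40 px steps) is attached to the parent object:
public void showitem(string name)
{
    GameObject obj = Instantiate(base_item);
    // false: adopt the parent's local space instead of keeping world position.
    obj.transform.SetParent(parent.transform, false);
    obj.SetActive(true);
    items.Add(obj); // the GridLayoutGroup handles placement; no manual x/y math
}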

Two different Vector3.zero's, but no parent?

UPDATE: I found out that the mesh center of the mesh object is not at (0,0,0). Does that matter?
I have the following problem. I am generating a terrain from Perlin noise, and that works fine. However, as soon as I try to instantiate any objects on it, some spawn in the terrain area and some completely outside. When I reset an instantiated object's transform, it teleports to (0,0,0) as expected, but when I reset another object that was not instantiated at runtime, its (0,0,0) is at a completely different location! I have no parent set on these objects, and none set on the other object either. Below is my code for generating the objects:
private void AddRocks(Terrain terrain, int count)
{
    for (int i = 0; i < count; i++)
    {
        // 256 is my terrain size; the terrain transform is all zeros with scale 1.
        float randX = Random.Range(0, 256);
        float randZ = Random.Range(0, 256);
        GameObject newGameObject = Instantiate(rockPrefab,
            new Vector3(randX, terrain.terrainData.GetHeight((int)randX, (int)randZ), randZ),
            Quaternion.identity);
    }
}
This is my code for generating the Perlin noise terrain:
TerrainData GenerateTerrain(TerrainData terrainData)
{
    terrainData.heightmapResolution = width + 1;
    terrainData.size = new Vector3(width, depth, height);
    terrainData.SetHeights(0, 0, GenerateHeights());
    return terrainData;
}

float[,] GenerateHeights()
{
    float[,] heights = new float[width, height];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            heights[x, y] = CalculateHeight(x, y);
        }
    }
    return heights;
}

float CalculateHeight(int x, int y)
{
    float xCoord = (float)x / width * scale + offsetX;
    float yCoord = (float)y / height * scale + offsetY;
    return Mathf.PerlinNoise(xCoord, yCoord);
}
This is how I call them in Start:
terrain.terrainData = GenerateTerrain(terrain.terrainData);
AddRocks(terrain: terrain, count: 20);
This is how it looks after generating:
This is how the rocks look:
The rocks are generated from a script that lies on the main terrain itself.
"I have no parent set to these objects and no parent set to the other object as well."
Actually, you do set a parent:
GameObject newGameObject = Instantiate(rockPrefab,
    new Vector3(randX, terrain.terrainData.GetHeight((int)randX, (int)randZ), randZ),
    Quaternion.identity, rockHolder.transform);
The last parameter (rockHolder.transform) is the transform to which the instantiated object will be attached, and the position you set becomes the localPosition of the instantiated object relative to the parent (rockHolder).
But I don't see the rockHolder object in the hierarchy view screenshot. It seems rockHolder.transform is null, in other words it's not initialized. So when you call Instantiate(...) and pass rockHolder.transform as the desired parent for the rocks, it is null, and Unity spawns the objects and assigns them no parent.
I can't tell if this is the root of the problem, but it's certainly not okay anyway.
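A minimal sketch of the implied fix, assuming rockHolder is meant to be a plain container object (rockHolder comes from the answer's quoted code; spawnPosition stands in for the rock's computed position):
// Make sure the container exists before using it as a parent.
if (rockHolder == null)
{
    rockHolder = new GameObject("RockHolder"); // sits at the world origin
}
GameObject rock = Instantiate(rockPrefab, spawnPosition, Quaternion.identity, rockHolder.transform);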

Unity canvas scaler blocking GL.Vertex

So I'm drawing lines using GL in Unity. The lines are being drawn, but I only see them once I disable the CanvasScaler on the canvas holding the script that provides the data for the lines.
I tried other ways of drawing the lines, but nothing helped.
The setup is as follows: I have a canvas in my scene with a CanvasScaler on it that scales with screen size. This canvas contains a script that provides the data (positions) for the lines. When the lines need to be drawn, it fires an event with a Vector2[] holding all the data. I have another script on my main camera listening to this event, and when needed it draws the lines in OnPostRender. The lines are drawn at the correct positions, but I can only see them once I disable the CanvasScaler on the canvas. Here's some code:
This is the script in my scene (the one on the canvas) that fires the event with the data:
lock (_skeleton2D)
{
    for (int i = 0; i < skeleton.Length; ++i)
    {
        _skeleton2D[i] = new Vector2(
            skeleton[i].x * scaleX + _depthImageRect.anchoredPosition.x * 2,
            _depthImageRect.rect.height - skeleton[i].y * scaleY);
    }
    for (int i = 0; i < skeleton.Length; i++)
    {
        Vector2 vector2 = skeleton[i];
        objects[i].transform.position = vector2;
        // _skeleton2D[i] = vector2;
    }
    DrawSkeletonRenderer?.Invoke(_skeleton2D);
}
As you can see, at the end it fires DrawSkeletonRenderer with the Vector2[] _skeleton2D. I then have a script on the main (and only) camera in the scene listening to this event; when it receives the event, it updates its own Vector2[] of skeleton data and uses it to render the lines.
private void Start()
{
    _skeletonMaterial = new Material(Shader.Find("UI/Default"));
    MoveCalibrationToolManager.DrawSkeletonRenderer += receivedDrawSkeleton;
}

private void receivedDrawSkeleton(Vector2[] skeleton)
{
    _skeleton2D = skeleton;
    isSkeletonValid = true; // TODO: turn false when not needed anymore.
}

private bool isSkeletonValid;

private void OnPostRender()
{
    if (isSkeletonValid)
    {
        lock (_skeleton2D)
        {
            drawSkeleton(_skeleton2D);
            Debug.LogError("DRAW SKELETON RIGHT NOW!");
        }
    }
}
As you can see, it eventually calls drawSkeleton, which draws the lines as shown below:
GL.PushMatrix();
_skeletonMaterial.SetPass(0);
GL.LoadPixelMatrix();
drawBone(skeleton[(int)JointType.Head], skeleton[(int)JointType.Neck]);
drawBone(skeleton[(int)JointType.Neck], skeleton[(int)JointType.LShoulder]);
drawBone(skeleton[(int)JointType.Neck], skeleton[(int)JointType.RShoulder]);
drawBone(skeleton[(int)JointType.LShoulder], skeleton[(int)JointType.LElbow]);
drawBone(skeleton[(int)JointType.LElbow], skeleton[(int)JointType.LHand]);
drawBone(skeleton[(int)JointType.RShoulder], skeleton[(int)JointType.RElbow]);
drawBone(skeleton[(int)JointType.RElbow], skeleton[(int)JointType.RHand]);
drawBone(skeleton[(int)JointType.LShoulder], skeleton[(int)JointType.RWaist]);
drawSkeleton then calls drawBone, which looks as follows:
if (float.IsNaN(fromJoint.x) || float.IsNaN(fromJoint.y) ||
    float.IsNaN(toJoint.x) || float.IsNaN(toJoint.y))
{
    Debug.Log("Joint positions for lines are null");
    return;
}
GL.Begin(GL.LINES);
GL.Color(Color.red);
GL.Vertex(fromJoint);
GL.Vertex(toJoint);
GL.End();
As I said, everything is drawn, but I can only see it when I disable the CanvasScaler on the canvas that has the first script attached.
At runtime I create a texture on which I show the feed of a 3D RealSense camera, and the skeleton needs to be drawn on top of that. I create this texture the following way:
GameObject obj = new GameObject("Calibration View");
obj.transform.SetParent(gameObject.transform, false);
obj.transform.SetAsFirstSibling();
_depthImage = obj.AddComponent<RawImage>();
_depthImageRect = obj.GetComponent<RectTransform>();
_depthImageRect.anchorMin = Vector2.zero;
_depthImageRect.anchorMax = Vector2.one;
_depthImageRect.anchoredPosition = Vector2.zero;
Quaternion transformRotation = new Quaternion(180f, 0f, 0f, 0f);
_depthImageRect.transform.rotation = transformRotation;
_depthImage.material = _depthImageMaterial;
_depthTexture = new Texture2D(RsDevice.DepthWidth, RsDevice.DepthHeight, TextureFormat.BGRA32, false);
_depthImage.texture = _depthTexture;
_depthTextureBytes = new byte[RsDevice.DepthWidth * RsDevice.DepthHeight * 4];

Why is Handles.ArrowHandleCap not drawing the arrow when added to a line?

I'm using an EditorWindow; maybe that's the problem?
The idea is that, when connecting two nodes, an arrow should also be drawn at the end position to show the direction of the connection flow.
In the screenshot, when I connect two nodes, for example Window 0 to Window 1, there should be an arrow at the end of the line near Window 1, indicating that Window 0 is connected to Window 1 and the flow runs from Window 0 to Window 1.
But it's not drawing any ArrowHandleCap.
I wouldn't mind drawing another simple white arrow at the end position instead, but nothing is working for now; no arrow is drawn at all.
This is my EditorWindow code:
using UnityEngine;
using UnityEditor;
using System.Collections.Generic;
using UnityEditor.Graphs;
using UnityEngine.UI;

public class NodeEditor : EditorWindow
{
    List<Rect> windows = new List<Rect>();
    List<int> windowsToAttach = new List<int>();
    List<int> attachedWindows = new List<int>();
    int tab = 0;
    float size = 10f;

    [MenuItem("Window/Node editor")]
    static void ShowEditor()
    {
        const int width = 600;
        const int height = 600;
        var x = (Screen.currentResolution.width - width) / 2;
        var y = (Screen.currentResolution.height - height) / 2;
        GetWindow<NodeEditor>().position = new Rect(x, y, width, height);
    }

    void OnGUI()
    {
        Rect graphPosition = new Rect(0f, 0f, position.width, position.height);
        GraphBackground.DrawGraphBackground(graphPosition, graphPosition);
        int selected = 0;
        string[] options = new string[]
        {
            "Option1", "Option2", "Option3",
        };
        selected = EditorGUILayout.Popup("Label", selected, options);
        if (windowsToAttach.Count == 2)
        {
            attachedWindows.Add(windowsToAttach[0]);
            attachedWindows.Add(windowsToAttach[1]);
            windowsToAttach = new List<int>();
        }
        if (attachedWindows.Count >= 2)
        {
            for (int i = 0; i < attachedWindows.Count; i += 2)
            {
                DrawNodeCurve(windows[attachedWindows[i]], windows[attachedWindows[i + 1]]);
            }
        }
        BeginWindows();
        if (GUILayout.Button("Create Node"))
        {
            windows.Add(new Rect(10, 10, 200, 40));
        }
        for (int i = 0; i < windows.Count; i++)
        {
            windows[i] = GUI.Window(i, windows[i], DrawNodeWindow, "Window " + i);
        }
        EndWindows();
    }

    void DrawNodeWindow(int id)
    {
        if (GUILayout.Button("Attach"))
        {
            windowsToAttach.Add(id);
        }
        GUI.DragWindow();
    }

    void DrawNodeCurve(Rect start, Rect end)
    {
        Vector3 startPos = new Vector3(start.x + start.width, start.y + start.height / 2, 0);
        Vector3 endPos = new Vector3(end.x, end.y + end.height / 2, 0);
        Vector3 startTan = startPos + Vector3.right * 50;
        Vector3 endTan = endPos + Vector3.left * 50;
        Color shadowCol = new Color(255, 255, 255);
        for (int i = 0; i < 3; i++)
        {
            // Draw a shadow
            //Handles.DrawBezier(startPos, endPos, startTan, endTan, shadowCol, null, (i + 1) * 5);
        }
        Handles.DrawBezier(startPos, endPos, startTan, endTan, Color.white, null, 5);
        Handles.color = Handles.xAxisColor;
        Handles.ArrowHandleCap(0, endPos, Quaternion.LookRotation(Vector3.right), size, EventType.Repaint);
    }
}
The problem is that the arrow always ends up behind the target window (e.g. Window 1), since you always call DrawNodeWindow after DrawNodeCurve.
This happens because the arrow is always drawn starting from endPos, pointing right, with length = size, so the window drawn later always overlays it. You have to change:
// move your endPos to the left by "size"
var endPos = new Vector3(end.x - size, end.y + end.height / 2, 0);
in order to have it start size pixels to the left of the actual end.x position.
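Putting that change back into DrawNodeCurve, a sketch of the adjusted method (otherwise identical to the question's code):
void DrawNodeCurve(Rect start, Rect end)
{
    Vector3 startPos = new Vector3(start.x + start.width, start.y + start.height / 2, 0);
    // Shift the end point left by "size" so the arrow head stays visible.
    Vector3 endPos = new Vector3(end.x - size, end.y + end.height / 2, 0);
    Vector3 startTan = startPos + Vector3.right * 50;
    Vector3 endTan = endPos + Vector3.left * 50;
    Handles.DrawBezier(startPos, endPos, startTan, endTan, Color.white, null, 5);
    Handles.color = Handles.xAxisColor;
    Handles.ArrowHandleCap(0, endPos, Quaternion.LookRotation(Vector3.right), size, EventType.Repaint);
}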
However, as you can see, it is still really small, since ArrowHandleCap is usually used to display an arrow in 3D space, not in pixel coordinates. You might have to tweak it, or use something completely different.
How about, for example, simply using GUI.DrawTexture with a given arrow sprite instead?
// assign this as default reference via the Inspector for that script
[SerializeField] private Texture2D aTexture;
// ...
// since the drawTexture needs a rect which is not centered on the height anymore
// you have to use endPos.y - size / 2 for the Y start position of the texture
GUI.DrawTexture(new Rect(endPos.x, endPos.y - size / 2, size, size), aTexture, ScaleMode.StretchToFill);
As mentioned in the comments, for all serialized fields in Unity you can assign default references on the script asset itself (in contrast to doing it for each instance, as with MonoBehaviours), so with the NodeEditor script selected, simply reference a downloaded arrow texture.
If you use a white arrow as the texture, you can still change its color using:
var color = GUI.color;
GUI.color = Handles.xAxisColor;
GUI.DrawTexture(new Rect(endPos.x, endPos.y - size / 2, size, size), aTexture, ScaleMode.StretchToFill);
GUI.color = color;
Result
P.S.: Arrow icon used for the example: https://iconsplace.com/red-icons/arrow-icon-14. You can change the color directly on that page before downloading the icon ;)

Scripted GUI button doesn't appear on Android

I'm using Unity to develop a cross-platform application.
I'm using the following C# code to place a button on screen:
void OnGUI()
{
    float texWidth = m_buttonPano.normal.background.width;
    float texHeight = m_buttonPano.normal.background.height;
    float width = texWidth * Screen.width / 1920;
    float height = (width / texWidth) * texHeight;
    float y = Screen.height - height;
    float x = Screen.width - width;
    if (GUI.Button(new Rect(x, y, width, height), "", m_buttonPano))
    {
        if (this.TappedOnPanoButton != null)
        {
            this.TappedOnPanoButton();
        }
        m_guiInput = true;
    }
}
Also note that I added this script to my scene by creating an empty GameObject and attaching the script to it.
It works well on PC, but on Android the button doesn't show up. The interesting part is that if I tap at its location (bottom right corner), the functionality is preserved; therefore it's only the custom background texture I put on it that doesn't show up.
Also, here's the code that attaches the background texture:
m_buttonPano = new GUIStyle();
m_buttonPano.normal.background = Resources.Load("GUI/buttonPano") as Texture2D;
m_buttonPano.active.background = Resources.Load("GUI/buttonPano") as Texture2D;
m_buttonPano.onActive.background = Resources.Load("GUI/buttonPano") as Texture2D;
The problem was actually a Unity bug; after downloading the f4 patch, everything works correctly.
