This is basically a mathematical question; I would like to know what a good solution would be.
Problem: I have 25 images placed in one line. I want the images to fade out in order: the first image should be completely opaque and the last image completely transparent.
I have placed all these images in an order inside one parent.
My solution: I am just providing a fixed number that I increment for each child's alpha.
What I am looking for: a formula so that this "fixed" number is derived dynamically from the number of images present.
void Start () {
    int color = 10; // my fixed number
    foreach (Transform child in transform) {
        child.gameObject.GetComponent<Image>().color = new Color32(255, 255, 255, (byte)(255 - color));
        color += 10; // incrementing for the next child
    }
}
What about simply calculating the step:
void Start ()
{
    if (transform.childCount <= 1)
    {
        Debug.LogWarning("Requires at least 2 children!");
        return;
    }

    var alphaStep = 1f / (transform.childCount - 1);
    var alpha = 1f;
    foreach (Transform child in transform)
    {
        child.GetComponent<Image>().color = new Color(1f, 1f, 1f, alpha);
        alpha -= alphaStep;
    }
}
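For the 25 images from the question this works out to alphaStep = 1/24 ≈ 0.0417, so the alphas run 1, 0.958, ..., 0.042, 0: the first child fully opaque, the last fully transparent.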
Or if you want full control over the maximum and minimum alpha you could use e.g.
public float minAlpha = 0f;
public float maxAlpha = 1f;
and then
for (var i = 0; i < transform.childCount; i++)
{
    // cast to float: otherwise the integer division returns 0 for every child except the last
    var factor = (float)i / (transform.childCount - 1);
    transform.GetChild(i).GetComponent<Image>().color = new Color(1f, 1f, 1f, Mathf.Lerp(maxAlpha, minAlpha, factor));
}
I would recommend using an array to iterate through your elements more freely.
With that you could do something like... (coded here on SO, not tested)
Image[] images; // this should reference the array constructed elsewhere, where you load the images

private void Start() {
    for (int i = 0; i < images.Length; i++) {
        // float division, otherwise 255 / images.Length truncates before the ceil
        int alpha = 255 - Mathf.CeilToInt(255f / images.Length) * i;
        images[i].color = new Color32(255, 255, 255, (byte)Mathf.Clamp(alpha, 0, 255));
    }
}
That will probably do what you want.
By the way, I'm not sure why you are using Color32; working with float RGBA will rid you of that ceil and give you more precision.
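For illustration, a float-based variant of the same loop could look like this (a quick sketch, assuming the images array is populated as above):

private void Start() {
    for (int i = 0; i < images.Length; i++) {
        // normalized position along the row: 0 for the first image, 1 for the last
        float t = (float)i / (images.Length - 1);
        images[i].color = new Color(1f, 1f, 1f, 1f - t);
    }
}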
I have a gameobject that occupies the whole screen, just for testing purposes; I'm drawing a line, by the way. What I'm trying to achieve: if the mouse position hits a gameobject, it should store the vector2 coordinates in a list. But the raycast is not storing all the coordinates. Below is my code:
private void Update()
{
if (Input.GetMouseButton(0))
{
Vector2 mousePos = Input.mousePosition;
Vector2 Pos = _camera.ScreenToWorldPoint(mousePos);
if(!mousePositions.Contains(Pos))
mousePositions.Add(Pos);
if (Physics.Raycast(Camera.main.ScreenPointToRay(mousePos), out RaycastHit hit))
{
Vector2 textureCoord = hit.textureCoord;
int pixelX = (int)(textureCoord.x * _templateDirtMask.width);
int pixelY = (int)(textureCoord.y * _templateDirtMask.height);
Vector2Int paintPixelPosition = new Vector2Int(pixelX, pixelY);
if (!linePositions.Contains(paintPixelPosition))
linePositions.Add(paintPixelPosition);
foreach (Vector2Int pos in linePositions)
{
int pixelXOffset = pos.x - (_brush.width / 2);
int pixelYOffset = pos.y - (_brush.height / 2);
for (int x = 0; x < _brush.width; x++)
{
for (int y = 0; y < _brush.height; y++)
{
_templateDirtMask.SetPixel(
pixelXOffset + x,
pixelYOffset + y,
Color.black
);
}
}
}
_templateDirtMask.Apply();
}
}
}
Every time I check the element counts, mousePositions is always greater than linePositions. I don't know what's causing this.
the element count mousePositions are always greater than linePosition
Well, it is quite simple: in
int pixelX = (int)(textureCoord.x * _templateDirtMask.width);
int pixelY = (int)(textureCoord.y * _templateDirtMask.height);
you are casting to int, which cuts off any decimals after the decimal point (basically like doing Mathf.FloorToInt).
So you can totally have multiple mouse positions which result in float pixel positions like e.g.
1.2, 1.2
1.4, 1.7
1.02, 1.93
...
all these will map to
Vector2Int paintPixelPosition = new Vector2Int(1, 1);
Besides, you might want to look at better line drawing algorithms, e.g. this simple one.
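For reference, a minimal Bresenham-style sketch of such a line (the PlotLine name is just for illustration):

// Integer Bresenham line: yields every pixel between a and b, so strokes
// stay continuous even when the mouse skips several pixels between frames.
List<Vector2Int> PlotLine(Vector2Int a, Vector2Int b)
{
    var points = new List<Vector2Int>();
    int dx = Mathf.Abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
    int dy = -Mathf.Abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
    int err = dx + dy;
    int x = a.x, y = a.y;
    while (true)
    {
        points.Add(new Vector2Int(x, y));
        if (x == b.x && y == b.y) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
    }
    return points;
}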
And then note that calling SetPixel repeatedly is quite expensive. You want to do a single SetPixels call instead, e.g.
var pixels = _templateDirtMask.GetPixels();
foreach (Vector2Int pos in linePositions)
{
int pixelXOffset = pos.x - (_brush.width / 2);
int pixelYOffset = pos.y - (_brush.height / 2);
for (int x = 0; x < _brush.width; x++)
{
for (int y = 0; y < _brush.height; y++)
{
pixels[(pixelXOffset + x) + (pixelYOffset + y) * _templateDirtMask.width] = Color.black;
}
}
}
_templateDirtMask.SetPixels(pixels);
_templateDirtMask.Apply();
It happens because there really can be a case where several elements from mousePositions are associated with one element from linePositions.
Rough example: your texture resolution is only 1x1 px. In this case your linePositions will contain only one element, and this element will be associated with all elements from mousePositions.
So the relation between the number of elements in these lists depends on the relation between your texture and screen resolutions.
I have a list of vertices of size N, and a weight gradient (which can be any length) defined as:
float[] weight_distribution = { 0f, 1f, 0f };
which says that the first and last vertices will have less weight and the middle vertices will have full weight. Like a black-and-white gradient with keys defined like the array.
This is based on the Y-axis for a plane of many segments that is to be weighted for procedural rigging based on the gradient.
The list is sorted based on the vertices' Y values, so that the lowest vertices are found at the start of the list and highest last.
I don't know how to calculate the weight for a given vertex with this kind of gradient. Any pointers would be really helpful.
I tried a few different things to find values for the current vertex, but I don't know how to extract the weight from the gradient at this position.
This is probably just garbage, but I'll put it here anyway in case it can help.
// find unique Ys
List<float> ys = new List<float>();
for (int i = 0; i < list.Count; i++) {
    if (!ys.Contains(list[i].y)) { ys.Add(list[i].y); }
}

float min = ys[0];
float max = ys[ys.Count - 1];
int levels = (ys.Count - 1);
float levelStep = (gradient.Length * 1f) / (levels * 1f);
float currentY = ys[0];
int yindex = 0; // index into the unique Y levels

// set weights here
for (int i = 0; i < list.Count; i++)
{
    // find current pos/value based on gradient somehow?
    if (list[i].y > currentY) { currentY = list[i].y; yindex++; }
    float pos = (yindex * levelStep) / (levels * 1f);
    float lerped = Mathf.Lerp(list[i].y, max, pos);
    // ... calculate weight
}
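One way to sample a gradient like this (a sketch, not from the original post): treat the keys as evenly spaced stops over [0, 1], map each vertex's Y into that range with Mathf.InverseLerp, and lerp between the two neighboring keys. The EvaluateGradient helper below is hypothetical:

// Sample a piecewise-linear gradient whose keys are spread evenly over [0, 1].
float EvaluateGradient(float[] keys, float t)
{
    if (keys.Length == 1) return keys[0];
    t = Mathf.Clamp01(t);
    // scale t into key-index space: 0 .. keys.Length - 1
    float scaled = t * (keys.Length - 1);
    int index = Mathf.Min(Mathf.FloorToInt(scaled), keys.Length - 2);
    return Mathf.Lerp(keys[index], keys[index + 1], scaled - index);
}

// usage per vertex:
// float t = Mathf.InverseLerp(min, max, list[i].y);
// float weight = EvaluateGradient(weight_distribution, t);

With weight_distribution = { 0f, 1f, 0f } this gives weight 0 at the bottom and top and 1 in the vertical middle, which matches the described gradient.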
I'm using an Editor Window; maybe that's the problem?
The idea is, when connecting two nodes, to also draw an arrow at the end position that shows the direction of the connection's flow.
In the screenshot I'm connecting two nodes, for example Window 0 to Window 1.
So there should be an arrow at the end of the line near Window 1, indicating that Window 0 is connected to Window 1 and the flow is from Window 0 to Window 1.
But it's not drawing any ArrowHandleCap.
I don't mind drawing another simple white arrow at the end position instead, but that's not working at all for now; it's not drawing an arrow at all.
This is my Editor Window code:
using UnityEngine;
using UnityEditor;
using System.Collections.Generic;
using UnityEditor.Graphs;
using UnityEngine.UI;
public class NodeEditor : EditorWindow
{
List<Rect> windows = new List<Rect>();
List<int> windowsToAttach = new List<int>();
List<int> attachedWindows = new List<int>();
int tab = 0;
float size = 10f;
[MenuItem("Window/Node editor")]
static void ShowEditor()
{
const int width = 600;
const int height = 600;
var x = (Screen.currentResolution.width - width) / 2;
var y = (Screen.currentResolution.height - height) / 2;
GetWindow<NodeEditor>().position = new Rect(x, y, width, height);
}
void OnGUI()
{
Rect graphPosition = new Rect(0f, 0f, position.width, position.height);
GraphBackground.DrawGraphBackground(graphPosition, graphPosition);
int selected = 0;
string[] options = new string[]
{
"Option1", "Option2", "Option3",
};
selected = EditorGUILayout.Popup("Label", selected, options);
if (windowsToAttach.Count == 2)
{
attachedWindows.Add(windowsToAttach[0]);
attachedWindows.Add(windowsToAttach[1]);
windowsToAttach = new List<int>();
}
if (attachedWindows.Count >= 2)
{
for (int i = 0; i < attachedWindows.Count; i += 2)
{
DrawNodeCurve(windows[attachedWindows[i]], windows[attachedWindows[i + 1]]);
}
}
BeginWindows();
if (GUILayout.Button("Create Node"))
{
windows.Add(new Rect(10, 10, 200, 40));
}
for (int i = 0; i < windows.Count; i++)
{
windows[i] = GUI.Window(i, windows[i], DrawNodeWindow, "Window " + i);
}
EndWindows();
}
void DrawNodeWindow(int id)
{
if (GUILayout.Button("Attach"))
{
windowsToAttach.Add(id);
}
GUI.DragWindow();
}
void DrawNodeCurve(Rect start, Rect end)
{
Vector3 startPos = new Vector3(start.x + start.width, start.y + start.height / 2, 0);
Vector3 endPos = new Vector3(end.x, end.y + end.height / 2, 0);
Vector3 startTan = startPos + Vector3.right * 50;
Vector3 endTan = endPos + Vector3.left * 50;
Color shadowCol = new Color(255, 255, 255);
for (int i = 0; i < 3; i++)
{// Draw a shadow
//Handles.DrawBezier(startPos, endPos, startTan, endTan, shadowCol, null, (i + 1) * 5);
}
Handles.DrawBezier(startPos, endPos, startTan, endTan, Color.white, null, 5);
Handles.color = Handles.xAxisColor;
Handles.ArrowHandleCap(0, endPos, Quaternion.LookRotation(Vector3.right), size, EventType.Repaint);
}
}
The problem is that the arrow is always behind e.g. Window 0, since you always call DrawNodeWindow after DrawNodeCurve.
It happens because the arrow is always drawn starting from endPos, pointing in the right direction with length = size, so you always overlay it with the window drawn later ... you have to change
// move your endpos to the left by size
var endPos = new Vector3(end.x - size, end.y + end.height / 2 , 0);
in order to have it start size pixels to the left of the actual end.x position.
However, as you can see, it is still really small, since this handle cap is usually used to display the arrow in 3D space, not in pixel coordinates. You might have to tweak the values or use something completely different.
How about, e.g., simply using GUI.DrawTexture instead, with a given arrow sprite?
// assign this as default reference via the Inspector for that script
[SerializeField] private Texture2D aTexture;
// ...
// since the drawTexture needs a rect which is not centered on the height anymore
// you have to use endPos.y - size / 2 for the Y start position of the texture
GUI.DrawTexture(new Rect(endPos.x, endPos.y - size / 2, size, size), aTexture, ScaleMode.StretchToFill);
As mentioned in the comments, for all serialized fields in Unity you can already reference default assets for the script itself (in contrast to doing it for each instance, as for MonoBehaviours), so with the NodeEditor script selected, simply reference a downloaded arrow texture.
If using a white arrow as texture you could then still change its color using
var color = GUI.color;
GUI.color = Handles.xAxisColor;
GUI.DrawTexture(new Rect(endPos.x, endPos.y - size / 2, size, size), aTexture, ScaleMode.StretchToFill);
GUI.color = color;
Result
P.S.: Arrow icon used for the example: https://iconsplace.com/red-icons/arrow-icon-14; you can change the color directly on that page before downloading the icon ;)
How can I blend two textures into a new one?
I have a texture from the Android gallery and a logo PNG texture. I need to add this logo to the texture from the gallery and store the result as a variable, to save into the gallery as a new image.
These shaders blend between two textures based on a 0-1 value that you control. The first version is extra-fast because it does not use lighting, and the second uses the same basic ambient + diffuse calculation that I used in my Simply Lit shader.
http://wiki.unity3d.com/index.php/Blend_2_Textures
Drag a different texture onto each of the material's variable slots, and use the Blend control to mix them to taste.
Take note that the lit version requires two passes on the GPU used in the oldest iOS devices.
ShaderLab - Blend 2 Textures.shader
Shader "Blend 2 Textures" {
Properties {
_Blend ("Blend", Range (0, 1) ) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
}
}
ShaderLab - Blend 2 Textures, Simply Lit.shader
Shader "Blend 2 Textures, Simply Lit" {
Properties {
_Color ("Color", Color) = (1,1,1)
_Blend ("Blend", Range (0,1)) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
Category {
Material {
Ambient[_Color]
Diffuse[_Color]
}
// iPhone 3GS and later
SubShader {Pass {
Lighting On
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
SetTexture[_] {Combine previous * primary Double}
}}
// pre-3GS devices, including the September 2009 8GB iPod touch
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
Pass {
Lighting On
Blend DstColor SrcColor
}
}
}
}
I had a similar task with a paint tool I was making. So here's my approach:
First, import or instantiate the logo and picture textures as Texture2D in order to use the Texture2D.GetPixel() and Texture2D.SetPixel() methods.
Assuming the logo is smaller than the picture itself, store the logo pixels in a Color[] array:
Color[] logoPixels = logo.GetPixels();
We need to apply the logo on top of the picture, taking the alpha level of the logo image into account:
//Method GetPixels stores pixel colors in a 1D array
int i = 0; //logo pixel index
//iterate over the logo's dimensions, not the picture's,
//otherwise the logoPixels index runs out of bounds
for (int y = 0; y < logo.height; y++) {
    for (int x = 0; x < logo.width; x++) {
        //Get color of original pixel
        Color c = picture.GetPixel (logoPositionX + x, logoPositionY + y);
        //Lerp pixel color by the logo pixel's alpha value
        picture.SetPixel (logoPositionX + x, logoPositionY + y, Color.Lerp (c, logoPixels[i], logoPixels[i].a));
        i++;
    }
}
//Apply changes
picture.Apply();
So, if a pixel's alpha = 0, we leave it unchanged.
Get the bytes of the resulting image with picture.EncodeToPNG() (GetRawTextureData() would give you unencoded raw bytes, not a PNG) and save them in the regular way. And to use the SetPixel() and SetPixels() methods, make sure both the logo and the picture it is applied to are set to Read/Write Enabled in their import settings!
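A sketch of that saving step (EncodeToPNG is Unity's built-in PNG encoder; the file path here is just an example):

// encode the modified texture to PNG and write it to disk
byte[] png = picture.EncodeToPNG();
System.IO.File.WriteAllBytes(Application.persistentDataPath + "/result.png", png);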
It's an old question but I have another solution:
public static Texture2D merge(params Texture2D[] textures) {
if (textures == null || textures.Length == 0)
return null;
int oldQuality = QualitySettings.GetQualityLevel();
QualitySettings.SetQualityLevel(5);
RenderTexture renderTex = RenderTexture.GetTemporary(
textures[0].width,
textures[0].height,
0,
RenderTextureFormat.Default,
RenderTextureReadWrite.Linear);
Graphics.Blit(textures[0], renderTex);
RenderTexture previous = RenderTexture.active;
RenderTexture.active = renderTex;
GL.PushMatrix();
GL.LoadPixelMatrix(0, textures[0].width, textures[0].height, 0);
for (int i = 1; i < textures.Length; i++)
Graphics.DrawTexture(new Rect(0, 0, textures[0].width, textures[0].height), textures[i]);
GL.PopMatrix();
Texture2D readableText = new Texture2D(textures[0].width, textures[0].height);
readableText.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
readableText.Apply();
RenderTexture.active = previous;
RenderTexture.ReleaseTemporary(renderTex);
QualitySettings.SetQualityLevel(oldQuality);
return readableText;
}
And here is the use:
Texture2D coloredTex = ImageUtils.merge(tex,
sprites[0].texture,
sprites[1].texture,
sprites[2].texture,
sprites[3].texture);
Hope it helps
I made this solution; it works with two Texture2D objects in Unity.
public Texture2D ImageBlend(Texture2D Bottom, Texture2D Top)
{
var bData = Bottom.GetPixels();
var tData = Top.GetPixels();
int count = bData.Length;
var final = new Color[count];
int i = 0;
int iT = 0;
// assumes Top is smaller than Bottom and both are square,
// so the same centered offsets can be used on both axes
int startPos = (Bottom.width / 2) - (Top.width / 2) - 1;
int endPos = Bottom.width - startPos - 1;
for (int y = 0; y < Bottom.height; y++)
{
for (int x = 0; x < Bottom.width; x++)
{
if (y > startPos && y < endPos && x > startPos && x < endPos)
{
Color B = bData[i];
Color T = tData[iT];
Color R;
R = new Color((T.a * T.r) + ((1-T.a) * B.r),
(T.a * T.g) + ((1 - T.a) * B.g),
(T.a * T.b) + ((1 - T.a) * B.b), 1.0f);
final[i] = R;
i++;
iT++;
}
else
{
final[i] = bData[i];
i++;
}
}
}
var res = new Texture2D(Bottom.width, Bottom.height);
res.SetPixels(final);
res.Apply();
return res;
}
I'm currently working on a project involving integrating OpenCVSharp into Unity, to allow eye tracking within a game environment. I've managed to get OpenCVSharp integrated into the Unity editor and currently have eye detection (not tracking) working within a game. It can find your eyes within a webcam image, then display where it currently detects them on a texture, which I display within the scene.
However it's causing a HUGE fps drop, mainly because every frame it converts the webcam texture into an IplImage so that OpenCV can handle it. It then has to convert it back to a Texture2D to be displayed within the scene, after it has done all the eye detection. So understandably it's too much for the CPU to handle. (As far as I can tell it's only using one core on my CPU.)
Is there a way to do all the eye detection without converting the texture to an IplImage? Or any other way to fix the fps drop? Some things that I've tried include:
- Limiting the frames that it updates on. However, this just causes it to run smoothly, then stutter horribly on the frame that it has to update.
- Looking at threading, but as far as I'm aware Unity doesn't allow it.
- As far as I can tell it's only using one core on my CPU, which seems a bit silly. If there was a way to change this, it could fix the issue?
- Trying different resolutions on the camera. However, the resolution at which the game can actually run smoothly is too small for the eyes to actually be detected, let alone tracked.
I've included the code below, or if you would prefer to look at it in a code editor, here is a link to the C# file. Any suggestions or help would be greatly appreciated!
For reference, I used code from here (eye detection using OpenCvSharp).
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using OpenCvSharp;
//using System.Xml;
//using OpenCvSharp.Extensions;
//using System.Windows.Media;
//using System.Windows.Media.Imaging;
public class CaptureScript : MonoBehaviour
{
public GameObject planeObj;
public WebCamTexture webcamTexture; //Texture retrieved from the webcam
public Texture2D texImage; //Texture to apply to plane
public string deviceName;
private int devId = 1;
private int imWidth = 640; //camera width
private int imHeight = 360; //camera height
private string errorMsg = "No errors found!";
static IplImage matrix; //Ipl image of the converted webcam texture
CvColor[] colors = new CvColor[]
{
new CvColor(0,0,255),
new CvColor(0,128,255),
new CvColor(0,255,255),
new CvColor(0,255,0),
new CvColor(255,128,0),
new CvColor(255,255,0),
new CvColor(255,0,0),
new CvColor(255,0,255),
};
const double Scale = 1.25;
const double ScaleFactor = 2.5;
const int MinNeighbors = 2;
// Use this for initialization
void Start ()
{
//Webcam initialisation
WebCamDevice[] devices = WebCamTexture.devices;
Debug.Log ("num:" + devices.Length);
for (int i=0; i<devices.Length; i++) {
print (devices [i].name);
if (devices [i].name.CompareTo (deviceName) == 1) {
devId = i;
}
}
if (devId >= 0) {
planeObj = GameObject.Find ("Plane");
texImage = new Texture2D (imWidth, imHeight, TextureFormat.RGB24, false);
webcamTexture = new WebCamTexture (devices [devId].name, imWidth, imHeight, 30);
webcamTexture.Play ();
matrix = new IplImage (imWidth, imHeight, BitDepth.U8, 3);
}
}
void Update ()
{
if (devId >= 0)
{
//Convert webcam texture to iplimage
Texture2DtoIplImage();
/*DO IMAGE MANIPULATION HERE*/
//do eye detection on iplimage
EyeDetection();
/*END IMAGE MANIPULATION*/
if (webcamTexture.didUpdateThisFrame)
{
//convert iplimage to texture
IplImageToTexture2D();
}
}
else
{
Debug.Log ("Can't find camera!");
}
}
void EyeDetection()
{
using(IplImage smallImg = new IplImage(new CvSize(Cv.Round (imWidth/Scale), Cv.Round(imHeight/Scale)),BitDepth.U8, 1))
{
using(IplImage gray = new IplImage(matrix.Size, BitDepth.U8, 1))
{
Cv.CvtColor (matrix, gray, ColorConversion.BgrToGray);
Cv.Resize(gray, smallImg, Interpolation.Linear);
Cv.EqualizeHist(smallImg, smallImg);
}
using(CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using(CvMemStorage storage = new CvMemStorage())
{
storage.Clear ();
CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
for(int i = 0; i < eyes.Total; i++)
{
CvRect r = eyes[i].Value.Rect;
CvPoint center = new CvPoint{ X = Cv.Round ((r.X + r.Width * 0.5) * Scale), Y = Cv.Round((r.Y + r.Height * 0.5) * Scale) };
int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
matrix.Circle (center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
}
}
}
}
void OnGUI ()
{
GUI.Label (new Rect (200, 200, 100, 90), errorMsg);
}
void IplImageToTexture2D ()
{
int jBackwards = imHeight;
for (int i = 0; i < imHeight; i++) {
for (int j = 0; j < imWidth; j++) {
float b = (float)matrix [i, j].Val0;
float g = (float)matrix [i, j].Val1;
float r = (float)matrix [i, j].Val2;
Color color = new Color (r / 255.0f, g / 255.0f, b / 255.0f);
jBackwards = imHeight - i - 1; // notice it is jBackward and i
texImage.SetPixel (j, jBackwards, color);
}
}
texImage.Apply ();
planeObj.renderer.material.mainTexture = texImage;
}
void Texture2DtoIplImage ()
{
int jBackwards = imHeight;
for (int v=0; v<imHeight; ++v) {
for (int u=0; u<imWidth; ++u) {
CvScalar col = new CvScalar ();
col.Val0 = (double)webcamTexture.GetPixel (u, v).b * 255;
col.Val1 = (double)webcamTexture.GetPixel (u, v).g * 255;
col.Val2 = (double)webcamTexture.GetPixel (u, v).r * 255;
jBackwards = imHeight - v - 1;
matrix.Set2D (jBackwards, u, col);
//matrix [jBackwards, u] = col;
}
}
}
}
You can move these out of the per-frame update loop:
using(CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using(CvMemStorage storage = new CvMemStorage())
No reason to be building the recognizer graph each frame.
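A minimal sketch of that change (assuming the same cascade path as the question; the fields replace the per-frame using blocks, and disposal moves to OnDestroy):

// load once, reuse every frame
CvHaarClassifierCascade cascade;
CvMemStorage storage;

void Start()
{
    cascade = CvHaarClassifierCascade.FromFile(@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml");
    storage = new CvMemStorage();
    // ... existing webcam initialisation ...
}

void OnDestroy()
{
    // release the native OpenCV resources once
    storage.Dispose();
    cascade.Dispose();
}

EyeDetection() then keeps its per-frame storage.Clear() call but no longer reloads the cascade from disk.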
Threading is the logical way to go moving forward if you want real speed updates; Unity itself is not threaded, but you can fold in other threads if you're careful.
Do the texture -> IplImage conversion on the main thread, then trigger an event to fire off your thread.
The thread can do all the CV work, probably construct the Texture2D data, and then push back to the main thread to render.
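Roughly (a rough, untested sketch of that hand-off; note that Unity API calls such as texture reads and writes must stay on the main thread):

using System.Threading;

// worker thread runs the CV detection; Update() feeds it frames and polls for results
Thread worker;
readonly AutoResetEvent frameReady = new AutoResetEvent(false);
volatile bool resultReady;

void StartWorker()
{
    worker = new Thread(() =>
    {
        while (true)
        {
            frameReady.WaitOne(); // block until Update() has copied a new frame
            EyeDetection();       // heavy OpenCV work, off the main thread
            resultReady = true;   // main thread polls this and rebuilds the texture
        }
    });
    worker.IsBackground = true;   // don't keep the process alive on quit
    worker.Start();
}

// in Update(): Texture2DtoIplImage(); frameReady.Set();
// and when resultReady is true: IplImageToTexture2D(); resultReady = false;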
You should also be able to gain some performance improvements if you use:
Color32[] pixels;
pixels = new Color32[webcamTexture.width * webcamTexture.height];
webcamTexture.GetPixels32(pixels);
The Unity docs suggest that this can be quite a bit faster than calling GetPixels (and certainly faster than calling GetPixel for each pixel), and then you don't need to scale each RGB channel against 255 manually.
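For example, the question's Texture2DtoIplImage loop could read from that buffer instead of calling GetPixel per pixel (a sketch; Color32 channels are already bytes in the 0-255 range):

// one bulk read instead of imWidth * imHeight GetPixel calls
Color32[] pixels = new Color32[webcamTexture.width * webcamTexture.height];
webcamTexture.GetPixels32(pixels);

for (int v = 0; v < imHeight; ++v)
{
    for (int u = 0; u < imWidth; ++u)
    {
        // GetPixels32 is laid out left-to-right, bottom-to-top,
        // matching the GetPixel(u, v) indexing of the original loop
        Color32 c = pixels[v * imWidth + u];
        matrix.Set2D(imHeight - v - 1, u, new CvScalar(c.b, c.g, c.r));
    }
}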