Runtime Normal Map Import in Unity 5 - C#

For my project, I need to create materials at runtime. When I create the material, the normal map has no effect. I tried the two solutions linked below, but they did not work for me. Has something changed about this in Unity 5?
The links I checked:
http://answers.unity3d.com/questions/801670/runtime-loading-normal-texture.html
http://answers.unity3d.com/questions/47121/runtime-normal-map-import.html
P.S.: The weird thing is that when I switch to the Scene view inside Unity and expand the material tab in the Inspector, the normal map is applied to the object.
My Code:
....
Material mat = new Material(Shader.Find("Standard (Specular setup)"));
mat.SetTexture("_MainTex", colortex);
normaltex = getNormalTexture(source); // source: the Texture2D to convert
mat.SetTexture("_BumpMap", normaltex);
mat.SetFloat("_Glossiness", 0.1f);
mat.SetFloat("_BumpScale", 1.0f);
....
public static Texture2D getNormalTexture(Texture2D source)
{
    // Pack the source into the layout Unity's UnpackNormal() expects
    // (DXT5nm-style: X in the alpha channel, Y in the green channel).
    Texture2D normalTexture = new Texture2D(source.width, source.height, TextureFormat.ARGB32, true);
    Color theColour = new Color();
    for (int x = 0; x < source.width; x++)
    {
        for (int y = 0; y < source.height; y++)
        {
            theColour.r = 0;
            theColour.g = source.GetPixel(x, y).g;
            theColour.b = 0;
            theColour.a = source.GetPixel(x, y).r;
            normalTexture.SetPixel(x, y, theColour);
        }
    }
    normalTexture.Apply();
    return normalTexture;
}

At least in Unity 4.x, you had to modify the shader to display runtime normal maps correctly; you just needed to remove the UnpackNormal() call from the shader code.
Technical details:
http://forum.unity3d.com/threads/creating-runtime-normal-maps-using-rendertotexture.135841/#post-924587
Builtin shader sources can be downloaded from:
http://unity3d.com/get-unity/download/archive
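A likely explanation for the P.S. above: in Unity 5, the Standard shader only samples _BumpMap when the _NORMALMAP shader keyword is enabled, and inspecting the material in the editor enables that keyword for you. When building the material purely from script, enable it yourself; a minimal sketch using the variables from the question:
Material mat = new Material(Shader.Find("Standard (Specular setup)"));
mat.SetTexture("_MainTex", colortex);
mat.SetTexture("_BumpMap", normaltex);
mat.EnableKeyword("_NORMALMAP"); // without this, runtime-created Standard materials ignore the normal map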

Related

Blend 2 Textures Unity C#

How can I blend two textures into a new one?
I have a texture from the Android gallery and a logo PNG texture. I need to draw the logo onto the gallery texture and store the result in a variable, so I can save it back to the gallery as a new image.
These shaders blend between two textures based on a 0-1 value that you control. The first version is extra-fast because it does not use lighting, and the second uses the same basic ambient + diffuse calculation that I used in my Simply Lit shader.
http://wiki.unity3d.com/index.php/Blend_2_Textures
Drag a different texture onto each of the material's variable slots, and use the Blend control to mix them to taste.
Take note that the lit version requires two passes on the GPU used in the oldest iOS devices.
ShaderLab - Blend 2 Textures.shader
Shader "Blend 2 Textures" {
Properties {
_Blend ("Blend", Range (0, 1) ) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
}
}
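Both shaders expose the blend factor as a material property, so it can also be driven from a script; a small sketch, assuming the renderer's material uses one of the shaders in this answer:
using UnityEngine;

public class BlendDriver : MonoBehaviour
{
    [Range(0f, 1f)]
    public float blend = 0.5f; // 0 = Texture 1 only, 1 = Texture 2 only

    void Update()
    {
        GetComponent<Renderer>().material.SetFloat("_Blend", blend);
    }
}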
ShaderLab - Blend 2 Textures, Simply Lit.shader
Shader "Blend 2 Textures, Simply Lit" {
Properties {
_Color ("Color", Color) = (1,1,1)
_Blend ("Blend", Range (0,1)) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
Category {
Material {
Ambient[_Color]
Diffuse[_Color]
}
// iPhone 3GS and later
SubShader {Pass {
Lighting On
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
SetTexture[_] {Combine previous * primary Double}
}}
// pre-3GS devices, including the September 2009 8GB iPod touch
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
Pass {
Lighting On
Blend DstColor SrcColor
}
}
}
}
I had a similar task with a paint tool I was making. So here's my approach:
First, import or instantiate the logo and picture textures as Texture2D, in order to use the Texture2D.GetPixel() and Texture2D.SetPixel() methods.
Assuming the logo is smaller than the picture itself, store the logo pixels in a Color[] array:
Color[] logoPixels = logo.GetPixels();
We need to apply the logo on top of the picture, taking the alpha level of the logo image itself into account:
//GetPixels stores the pixel colors in a 1D array
int i = 0; //Logo pixel index
//Loop over the logo's dimensions (not the picture's), so logoPixels is never over-run
for (int y = 0; y < logo.height; y++) {
    for (int x = 0; x < logo.width; x++) {
        //Get the color of the original pixel
        Color c = picture.GetPixel (logoPositionX + x, logoPositionY + y);
        //Lerp the pixel color by the logo's alpha value
        picture.SetPixel (logoPositionX + x, logoPositionY + y, Color.Lerp (c, logoPixels[i], logoPixels[i].a));
        i++;
    }
}
//Apply changes
picture.Apply();
So if a pixel's alpha is 0, we leave it unchanged.
Get the bytes of the resulting image with picture.EncodeToPNG() and save them as a PNG in the regular way. And to use the SetPixel() and SetPixels() methods, make sure both the logo and the picture it is being applied to are set to Read/Write Enabled in their import settings!
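For example, a minimal save sketch (the file name and path are illustrative):
byte[] pngBytes = picture.EncodeToPNG();
System.IO.File.WriteAllBytes(
    System.IO.Path.Combine(Application.persistentDataPath, "blended.png"),
    pngBytes);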
It's an old question but I have another solution:
public static Texture2D merge(params Texture2D[] textures) {
    if (textures == null || textures.Length == 0)
        return null;

    int oldQuality = QualitySettings.GetQualityLevel();
    QualitySettings.SetQualityLevel(5);

    // Blit the first texture into a temporary render texture...
    RenderTexture renderTex = RenderTexture.GetTemporary(
        textures[0].width,
        textures[0].height,
        0,
        RenderTextureFormat.Default,
        RenderTextureReadWrite.Linear);
    Graphics.Blit(textures[0], renderTex);

    // ...then draw the remaining textures on top of it.
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = renderTex;
    GL.PushMatrix();
    GL.LoadPixelMatrix(0, textures[0].width, textures[0].height, 0);
    for (int i = 1; i < textures.Length; i++)
        Graphics.DrawTexture(new Rect(0, 0, textures[0].width, textures[0].height), textures[i]);
    GL.PopMatrix();

    // Read the result back into a readable Texture2D.
    Texture2D readableText = new Texture2D(textures[0].width, textures[0].height);
    readableText.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
    readableText.Apply();

    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(renderTex);
    QualitySettings.SetQualityLevel(oldQuality);
    return readableText;
}
And here is how to use it:
Texture2D coloredTex = ImageUtils.merge(tex,
    sprites[0].texture,
    sprites[1].texture,
    sprites[2].texture,
    sprites[3].texture);
Hope it helps
I made this solution; it works with two Texture2Ds in Unity.
public Texture2D ImageBlend(Texture2D Bottom, Texture2D Top)
{
    // Assumes both textures are square and that Top is centered on Bottom.
    var bData = Bottom.GetPixels();
    var tData = Top.GetPixels();
    int count = bData.Length;
    var final = new Color[count];
    int i = 0;
    int iT = 0;
    int startPos = (Bottom.width / 2) - (Top.width / 2) - 1;
    int endPos = Bottom.width - startPos - 1;
    for (int y = 0; y < Bottom.height; y++)
    {
        for (int x = 0; x < Bottom.width; x++)
        {
            if (y > startPos && y < endPos && x > startPos && x < endPos)
            {
                // Standard "over" alpha blend of the top pixel onto the bottom pixel
                Color B = bData[i];
                Color T = tData[iT];
                Color R = new Color((T.a * T.r) + ((1 - T.a) * B.r),
                                    (T.a * T.g) + ((1 - T.a) * B.g),
                                    (T.a * T.b) + ((1 - T.a) * B.b), 1.0f);
                final[i] = R;
                i++;
                iT++;
            }
            else
            {
                final[i] = bData[i];
                i++;
            }
        }
    }
    var res = new Texture2D(Bottom.width, Bottom.height);
    res.SetPixels(final);
    res.Apply();
    return res;
}
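A usage sketch under those assumptions (the texture variables are illustrative, and both textures must be Read/Write Enabled):
Texture2D combined = ImageBlend(galleryTexture, logoTexture);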

Unity: Code works great in editor but fails in build version (Windows 10 64 bit)

I'm getting a weird issue here: the grid shows beautifully in the editor while running. However, it doesn't show in my build version.
View the attached screenshot of the 2 builds below:
Build version vs Editor version
Also, I suppose it would be helpful to show you the script I wrote for it:
void GenerateGrid()
{
    Color gridColor = Color.cyan;
    Color borderColor = Color.black;
    Collider floorCollider = floor.GetComponent<Collider>();
    Vector3 floorSize = new Vector3(floorCollider.bounds.size.x, floorCollider.bounds.size.z);
    for (int x = 0; x < gridImage.width; x++)
    {
        for (int y = 0; y < gridImage.height; y++)
        {
            if (x < borderSize || x > gridImage.width - borderSize || y < borderSize || y > gridImage.height - borderSize)
            {
                gridImage.SetPixel(x, y, new Color(borderColor.r, borderColor.g, borderColor.b, 50));
            }
            else gridImage.SetPixel(x, y, new Color(gridColor.r, gridColor.g, gridColor.b, 50));
        }
        gridImage.wrapMode = TextureWrapMode.Repeat;
        gridImage.Apply();
    }
    floor.GetComponent<MeshRenderer>().material.SetTexture(1, gridImage);
    floor.GetComponent<MeshRenderer>().material.SetTextureScale(1, new Vector2(floorCollider.bounds.size.x, floorCollider.bounds.size.z));
    floor.GetComponent<MeshRenderer>().material.SetTextureOffset(1, new Vector2(.5f, .5f));
    Debug.Log(floor.GetComponent<MeshRenderer>().material.GetTexture(1));
}
Try floor.GetComponent<MeshRenderer>().material.SetTexture("_MainTex", gridImage);. If you want to use SetTexture(int nameID, Texture value), you should use Shader.PropertyToID to get the nameID:
Each name of shader property (for example, _MainTex or _Color) is assigned a unique integer number in Unity that stays the same for the whole game. The numbers will not be the same between different runs of the game or between machines, so do not store them or send them over the network.
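A minimal sketch of the nameID variant (assuming the grid texture goes into the _MainTex slot):
// Cache the ID once and reuse it; do not serialize it or send it over the network.
static readonly int MainTexId = Shader.PropertyToID("_MainTex");

void ApplyGridTexture()
{
    floor.GetComponent<MeshRenderer>().material.SetTexture(MainTexId, gridImage);
}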

Unity2D Combine Sprites

I would like to take a two-dimensional array of Sprites and turn it into one single sprite image at runtime. All sprites are square and exactly the same size, but the resulting image does not necessarily need to be square, as the width and height of the array can vary.
So far I have found this resource: Combine Array of Sprite objects into One Sprite - Unity
But I don't think it works for my purposes.
If you already have those sprites in your project, you can simply set their import settings to Advanced and check the Read/Write Enabled toggle.
Then you should be able to read your sprites content and merge them like this:
public Sprite CombineSpriteArray(Sprite[][] spritesArray)
{
    // Set those two or get them from one of the sprites you want to combine
    int spritesWidth = (int)spritesArray[0][0].rect.width;
    int spritesHeight = (int)spritesArray[0][0].rect.height;
    Texture2D combinedTexture = new Texture2D(spritesWidth * spritesArray.Length, spritesHeight * spritesArray[0].Length);
    for (int x = 0; x < spritesArray.Length; x++)
    {
        for (int y = 0; y < spritesArray[0].Length; y++)
        {
            combinedTexture.SetPixels(x * spritesArray.Length, y * spritesArray[0].Length, spritesWidth, spritesHeight,
                spritesArray[x][y].texture.GetPixels((int)spritesArray[x][y].textureRect.x, (int)spritesArray[x][y].textureRect.y,
                    (int)spritesArray[x][y].textureRect.width, (int)spritesArray[x][y].textureRect.height));
            // For a working script, use:
            // combinedTexture.SetPixels32(x * spritesWidth, y * spritesHeight, spritesWidth, spritesHeight, spritesArray[x][y].texture.GetPixels32());
        }
    }
    combinedTexture.Apply();
    return Sprite.Create(combinedTexture, new Rect(0.0f, 0.0f, combinedTexture.width, combinedTexture.height), new Vector2(0.5f, 0.5f), 100.0f);
}
Warning: code untested
Be aware that such an operation is heavy and that doing it asynchronously in a coroutine may be a good idea to avoid a freeze.
EDIT:
Since you seem new to Stack Overflow, please keep in mind it's not a script-providing service; people are here to help each other. This means the code provided won't always be perfect, but it may simply guide you to the right path (this is also why I added the "Warning: code untested" note after my code).
You claimed that the code was "completely broken" and "puts out errors all over the place". I wrote a small piece of script to test it, and the only error I got was that one (agreed, it popped up multiple times):
So after searching for it on Google (which you should have done yourself), I noticed there were GetPixels32() / SetPixels32() methods that could be used instead of GetPixels() / SetPixels() (the 3rd and 5th search results showed these methods). By simply changing this, the code now worked flawlessly.
The only problem left was that the sprites were packed together at the bottom left of the texture: my bad on this one, I made a small mistake. Not hard to find where: just change
x * spritesArray.Length, y * spritesArray[0].Length, ...
to
x * spritesWidth, y * spritesHeight, ...
inside the SetPixels method.
So please find the whole test script I wrote and feel free to use it:
using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class TestScript : MonoBehaviour
{
    public Image m_DisplayImage;
    public Sprite m_Sprite1, m_Sprite2;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            StartCoroutine(CombineSpritesCoroutine());
        }
    }

    private IEnumerator CombineSpritesCoroutine()
    {
        Sprite[][] spritesToCombine = new Sprite[4][];
        for (int i = 0; i < spritesToCombine.Length; i++)
        {
            spritesToCombine[i] = new Sprite[4];
        }
        for (int x = 0; x < spritesToCombine.Length; x++)
        {
            for (int y = 0; y < spritesToCombine[x].Length; y++)
            {
                spritesToCombine[x][y] = ((x + y) % 2 == 0 ? m_Sprite1 : m_Sprite2);
            }
        }
        Sprite finalSprite = null;
        yield return finalSprite = CombineSpriteArray(spritesToCombine);
        m_DisplayImage.sprite = finalSprite;
    }

    public Sprite CombineSpriteArray(Sprite[][] spritesArray)
    {
        // Set those two or get them from one of the sprites you want to combine
        int spritesWidth = (int)spritesArray[0][0].rect.width;
        int spritesHeight = (int)spritesArray[0][0].rect.height;
        Texture2D combinedTexture = new Texture2D(spritesWidth * spritesArray.Length, spritesHeight * spritesArray[0].Length);
        for (int x = 0; x < spritesArray.Length; x++)
        {
            for (int y = 0; y < spritesArray[0].Length; y++)
            {
                combinedTexture.SetPixels32(x * spritesWidth, y * spritesHeight, spritesWidth, spritesHeight, spritesArray[x][y].texture.GetPixels32());
            }
        }
        combinedTexture.Apply();
        return Sprite.Create(combinedTexture, new Rect(0.0f, 0.0f, combinedTexture.width, combinedTexture.height), new Vector2(0.5f, 0.5f), 100.0f);
    }
}

how to change terrain texture in code

I want to change the offset (2) of a terrain texture through code.
I have added a road image as a texture on the terrain.
I've found related code online, but I am unable to figure out the role of the renderer in this case.
More than code, I just want to know the first step that should be taken to change the texture through code (the settings, basically).
And please mention the role of the renderer.
In Unity, Terrain textures are handled by the SplatPrototype class. See the documentation.
A splat prototype is just a texture that is used by the TerrainData.
So if you want to change the Terrain's texture, you have to create a new SplatPrototype and assign it to the splatPrototypes variable of TerrainData.
There you can set the metallic, normalMap, smoothness, texture, tileSize, and tileOffset values of your choice.
You can use the following method:
private void SetTerrainSplatMap(Terrain terrain, Texture2D[] textures)
{
    var terrainData = terrain.terrainData;
    // The splat map (textures)
    SplatPrototype[] splatPrototype = new SplatPrototype[terrainData.splatPrototypes.Length];
    for (int i = 0; i < terrainData.splatPrototypes.Length; i++)
    {
        splatPrototype[i] = new SplatPrototype();
        splatPrototype[i].texture = textures[i]; // Sets the texture
        splatPrototype[i].tileSize = new Vector2(terrainData.splatPrototypes[i].tileSize.x, terrainData.splatPrototypes[i].tileSize.y); // Sets the size of the texture
        splatPrototype[i].tileOffset = new Vector2(terrainData.splatPrototypes[i].tileOffset.x, terrainData.splatPrototypes[i].tileOffset.y); // Sets the offset of the texture
    }
    terrainData.splatPrototypes = splatPrototype;
}
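A usage sketch (roadTexture is illustrative; the array must supply one texture per existing splat prototype):
SetTerrainSplatMap(Terrain.activeTerrain, new Texture2D[] { roadTexture });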
This worked for me:
splat[i].tileOffset = new Vector2(tar.splatPrototypes[i].tileOffset.x, tar.splatPrototypes[i].tileOffset.y+5f);
SplatPrototype is deprecated now; I used TerrainLayers instead to edit the tiling size of the texture:
float[,,] splatMapData = terrain.terrainData.GetAlphamaps(0, 0, 100, 100);
for (int i = 26; i < 100; i++)
{
    for (int j = 0; j < 100; j++)
    {
        splatMapData[i, j, 0] = 0;
        splatMapData[i, j, 1] = 0;
        splatMapData[i, j, 2] = 1;
    }
}
TerrainLayer[] layers = terrain.terrainData.terrainLayers;
layers[2].tileSize = new Vector2(100, 100);
terrain.terrainData.SetAlphamaps(0, 0, splatMapData);
terrain.Flush();

Eye detection using OpenCVSharp in Unity (fps issues)

I'm currently working on a project that involves integrating OpenCVSharp into Unity to allow eye tracking within a game environment. I've managed to get OpenCVSharp integrated into the Unity editor, and I currently have eye detection (not tracking) working within a game. It can find your eyes within a webcam image, then display where it has currently detected them on a texture, which I display within the scene.
However, it's causing a HUGE fps drop, mainly because every frame it converts the webcam texture into an IplImage so that OpenCV can handle it. It then has to convert it back to a Texture2D to be displayed within the scene, after it has done all the eye detection. So understandably it's too much for the CPU to handle. (As far as I can tell it's only using one core of my CPU.)
Is there a way to do all the eye detection without converting the texture to an IplImage? Or any other way to fix the fps drop? Some things that I've tried include:
- Limiting the frames that it updates on. However, this just causes it to run smoothly, then stutter horribly on the frame where it has to update.
- Looking at threading, but as far as I'm aware Unity doesn't allow it.
- As far as I can tell it's only using one core of my CPU, which seems a bit silly. If there was a way to change this, it could fix the issue?
- Trying different resolutions on the camera. However, the resolution at which the game can actually run smoothly is too small for the eyes to actually be detected, let alone tracked.
I've included the code below, or if you would prefer to look at it in a code editor, here is a link to the C# file. Any suggestions or help would be greatly appreciated!
For reference I used code from here (eye detection using opencvsharp).
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using OpenCvSharp;
//using System.Xml;
//using OpenCvSharp.Extensions;
//using System.Windows.Media;
//using System.Windows.Media.Imaging;

public class CaptureScript : MonoBehaviour
{
    public GameObject planeObj;
    public WebCamTexture webcamTexture; //Texture retrieved from the webcam
    public Texture2D texImage; //Texture to apply to plane
    public string deviceName;
    private int devId = 1;
    private int imWidth = 640; //camera width
    private int imHeight = 360; //camera height
    private string errorMsg = "No errors found!";
    static IplImage matrix; //Ipl image of the converted webcam texture
    CvColor[] colors = new CvColor[]
    {
        new CvColor(0,0,255),
        new CvColor(0,128,255),
        new CvColor(0,255,255),
        new CvColor(0,255,0),
        new CvColor(255,128,0),
        new CvColor(255,255,0),
        new CvColor(255,0,0),
        new CvColor(255,0,255),
    };
    const double Scale = 1.25;
    const double ScaleFactor = 2.5;
    const int MinNeighbors = 2;

    // Use this for initialization
    void Start ()
    {
        //Webcam initialisation
        WebCamDevice[] devices = WebCamTexture.devices;
        Debug.Log ("num:" + devices.Length);
        for (int i = 0; i < devices.Length; i++) {
            print (devices [i].name);
            if (devices [i].name.CompareTo (deviceName) == 0) { //CompareTo returns 0 on an exact match
                devId = i;
            }
        }
        if (devId >= 0) {
            planeObj = GameObject.Find ("Plane");
            texImage = new Texture2D (imWidth, imHeight, TextureFormat.RGB24, false);
            webcamTexture = new WebCamTexture (devices [devId].name, imWidth, imHeight, 30);
            webcamTexture.Play ();
            matrix = new IplImage (imWidth, imHeight, BitDepth.U8, 3);
        }
    }

    void Update ()
    {
        if (devId >= 0)
        {
            //Convert the webcam texture to an IplImage
            Texture2DtoIplImage();
            /*DO IMAGE MANIPULATION HERE*/
            //do eye detection on the IplImage
            EyeDetection();
            /*END IMAGE MANIPULATION*/
            if (webcamTexture.didUpdateThisFrame)
            {
                //convert the IplImage back to a texture
                IplImageToTexture2D();
            }
        }
        else
        {
            Debug.Log ("Can't find camera!");
        }
    }

    void EyeDetection()
    {
        using (IplImage smallImg = new IplImage(new CvSize(Cv.Round (imWidth / Scale), Cv.Round(imHeight / Scale)), BitDepth.U8, 1))
        {
            using (IplImage gray = new IplImage(matrix.Size, BitDepth.U8, 1))
            {
                Cv.CvtColor (matrix, gray, ColorConversion.BgrToGray);
                Cv.Resize(gray, smallImg, Interpolation.Linear);
                Cv.EqualizeHist(smallImg, smallImg);
            }
            using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
            using (CvMemStorage storage = new CvMemStorage())
            {
                storage.Clear ();
                CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
                for (int i = 0; i < eyes.Total; i++)
                {
                    CvRect r = eyes[i].Value.Rect;
                    CvPoint center = new CvPoint { X = Cv.Round ((r.X + r.Width * 0.5) * Scale), Y = Cv.Round((r.Y + r.Height * 0.5) * Scale) };
                    int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
                    matrix.Circle (center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
                }
            }
        }
    }

    void OnGUI ()
    {
        GUI.Label (new Rect (200, 200, 100, 90), errorMsg);
    }

    void IplImageToTexture2D ()
    {
        int jBackwards = imHeight;
        for (int i = 0; i < imHeight; i++) {
            for (int j = 0; j < imWidth; j++) {
                float b = (float)matrix [i, j].Val0;
                float g = (float)matrix [i, j].Val1;
                float r = (float)matrix [i, j].Val2;
                Color color = new Color (r / 255.0f, g / 255.0f, b / 255.0f);
                jBackwards = imHeight - i - 1; // the image is flipped vertically, so row i is written to row (imHeight - i - 1)
                texImage.SetPixel (j, jBackwards, color);
            }
        }
        texImage.Apply ();
        planeObj.renderer.material.mainTexture = texImage;
    }

    void Texture2DtoIplImage ()
    {
        int jBackwards = imHeight;
        for (int v = 0; v < imHeight; ++v) {
            for (int u = 0; u < imWidth; ++u) {
                CvScalar col = new CvScalar ();
                col.Val0 = (double)webcamTexture.GetPixel (u, v).b * 255;
                col.Val1 = (double)webcamTexture.GetPixel (u, v).g * 255;
                col.Val2 = (double)webcamTexture.GetPixel (u, v).r * 255;
                jBackwards = imHeight - v - 1;
                matrix.Set2D (jBackwards, u, col);
                //matrix [jBackwards, u] = col;
            }
        }
    }
}
You can move these out of the per-frame update loop:
using (CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using (CvMemStorage storage = new CvMemStorage())
There's no reason to rebuild the recognizer graph each frame.
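A sketch of that change, using the cascade path from the question: load once, dispose when the script is destroyed, and keep calling storage.Clear() per frame as EyeDetection() already does:
CvHaarClassifierCascade cascade;
CvMemStorage storage;

void Start()
{
    // Load the classifier once instead of once per frame.
    cascade = CvHaarClassifierCascade.FromFile(@"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml");
    storage = new CvMemStorage();
}

void OnDestroy()
{
    cascade.Dispose();
    storage.Dispose();
}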
Threading is the logical way to go if you want real speed-ups. Unity itself is not thread-safe, but you can fold in other threads if you're careful.
Do the texture -> IplImage conversion on the main thread, then trigger an event to fire off your thread.
The thread can do all the CV work, probably construct the Texture2D data, and then push back to the main thread to render.
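A rough sketch of that hand-off (the flags and structure are illustrative, not a drop-in solution): the worker runs the OpenCV detection while the game keeps rendering, and only the main thread ever touches Unity objects:
using System.Threading;

Thread cvThread;
readonly AutoResetEvent frameReady = new AutoResetEvent(false);
volatile bool workerBusy;    // a frame is currently being processed
volatile bool resultsReady;  // worker finished; main thread may read `matrix`
volatile bool running = true;

void StartWorker()
{
    cvThread = new Thread(() =>
    {
        while (running)
        {
            frameReady.WaitOne();          // wait for Update() to hand over a frame
            if (!running) break;
            EyeDetection();                // pure OpenCV work: safe off the main thread
            resultsReady = true;
        }
    });
    cvThread.Start();
}

void Update()
{
    if (resultsReady)
    {
        resultsReady = false;
        IplImageToTexture2D();             // Unity objects may only be touched here
        workerBusy = false;
    }
    if (!workerBusy)
    {
        Texture2DtoIplImage();             // copy the webcam frame while the worker is idle
        workerBusy = true;
        frameReady.Set();
    }
}

void OnDestroy()
{
    running = false;
    frameReady.Set();
    cvThread.Join();
}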
You should also be able to gain some performance improvements if you use:
Color32[] pixels;
pixels = new Color32[webcamTexture.width * webcamTexture.height];
webcamTexture.GetPixels32(pixels);
The Unity docs suggest that this can be quite a bit faster than calling GetPixels() (and certainly faster than calling GetPixel() for each pixel), and then you don't need to scale each RGB channel by 255 manually.
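For instance, Texture2DtoIplImage() above could read the whole frame in one call instead of three GetPixel() calls per pixel; a sketch, reusing the buffer across frames:
Color32[] pixels; // allocated once, reused every frame

void Texture2DtoIplImage()
{
    if (pixels == null)
        pixels = new Color32[imWidth * imHeight];
    webcamTexture.GetPixels32(pixels); // one call per frame instead of one per pixel
    for (int v = 0; v < imHeight; ++v)
    {
        int row = v * imWidth;
        int jBackwards = imHeight - v - 1; // keep the vertical flip from the original
        for (int u = 0; u < imWidth; ++u)
        {
            Color32 c = pixels[row + u];
            matrix.Set2D(jBackwards, u, new CvScalar(c.b, c.g, c.r)); // BGR order, already 0-255
        }
    }
}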
