How to change terrain texture in code - C#

I want to change the tile offset (x, y) of a terrain texture through code.
I have added a road image as a texture on the terrain.
I've found related code online, but I can't figure out the role of the renderer in this case.
More than the code itself, I just want to know the first step to take in order to change the texture through code (the settings, basically).
And please explain the role of the renderer.

In Unity, terrain textures are handled by the SplatPrototype class. See the documentation.
A SplatPrototype is just a texture that is used by the TerrainData.
So if you want to change the terrain's texture, you have to create a new SplatPrototype and assign it to the splatPrototypes property of TerrainData.
There you can set the metallic, normalMap, smoothness, texture, tileSize and tileOffset values of your choice.
As for the renderer: the code you found online was most likely for ordinary meshes, where you would change renderer.material.mainTextureOffset. A Terrain is not drawn by a normal Renderer component, so that approach does not apply here.
You can use the following method:
private void SetTerrainSplatMap(Terrain terrain, Texture2D[] textures)
{
    var terrainData = terrain.terrainData;
    // The splat map (textures)
    SplatPrototype[] splatPrototype = new SplatPrototype[terrainData.splatPrototypes.Length];
    for (int i = 0; i < terrainData.splatPrototypes.Length; i++)
    {
        splatPrototype[i] = new SplatPrototype();
        splatPrototype[i].texture = textures[i]; // Sets the texture
        splatPrototype[i].tileSize = new Vector2(terrainData.splatPrototypes[i].tileSize.x, terrainData.splatPrototypes[i].tileSize.y); // Keeps the tile size of the texture
        splatPrototype[i].tileOffset = new Vector2(terrainData.splatPrototypes[i].tileOffset.x, terrainData.splatPrototypes[i].tileOffset.y); // Keeps the tile offset of the texture
    }
    terrainData.splatPrototypes = splatPrototype;
}
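For completeness, here is one way the method above might be called. This is just a sketch: the terrain and roadTexture fields are assumptions, to be assigned in the Inspector.
// Sketch only: "terrain" and "roadTexture" are assumed fields, assigned in the Inspector.
public Terrain terrain;
public Texture2D roadTexture;

void Start()
{
    // Keep the existing prototypes' textures, but swap the first one for the road image.
    var prototypes = terrain.terrainData.splatPrototypes;
    Texture2D[] textures = new Texture2D[prototypes.Length];
    for (int i = 0; i < prototypes.Length; i++)
        textures[i] = prototypes[i].texture;
    textures[0] = roadTexture;
    SetTerrainSplatMap(terrain, textures);
}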

This worked for me:
splat[i].tileOffset = new Vector2(tar.splatPrototypes[i].tileOffset.x, tar.splatPrototypes[i].tileOffset.y+5f);

SplatPrototype is deprecated in newer Unity versions. I used TerrainLayer instead to edit the tiling size of the texture:
// Repaint part of the splat map so that layer 2 is fully weighted
float[,,] splatMapData = terrain.terrainData.GetAlphamaps(0, 0, 100, 100);
for (int i = 26; i < 100; i++)
{
    for (int j = 0; j < 100; j++)
    {
        splatMapData[i, j, 0] = 0;
        splatMapData[i, j, 1] = 0;
        splatMapData[i, j, 2] = 1;
    }
}
// Adjust the tiling of the third terrain layer
TerrainLayer[] layers = terrain.terrainData.terrainLayers;
layers[2].tileSize = new Vector2(100, 100);
terrain.terrainData.SetAlphamaps(0, 0, splatMapData);
terrain.Flush();
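Since the original question was about the offset rather than the tiling size, the same TerrainLayer approach works for that too. A minimal sketch (layer index 0 is an assumption; pick the layer that holds your road texture):
// Sketch: shift the tile offset of the first terrain layer,
// mirroring the SplatPrototype-based snippet above.
TerrainLayer[] layers = terrain.terrainData.terrainLayers;
layers[0].tileOffset = new Vector2(layers[0].tileOffset.x, layers[0].tileOffset.y + 5f);
terrain.Flush();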


How to build a customized 3D object model with the Aspose.3D lib

I'm trying to build a 3D object model, but my code has only rendered a 3D model with a single colour in the image.
How can I create a 3D object with 6 images, one for each surface, like a Rubik's cube?
This is my code, using the Aspose.3D lib and C#:
private void Form1_Load(object sender, EventArgs e)
{
    // Create an FBX file with embedded textures
    Scene scene = new Scene();
    scene.Open("BetterShirt.obj");
    // Create an embedded texture
    Texture tex = new Texture()
    {
        Content = CreateTextureContent(),
        FileName = "face.png",
        WrapModeU = Aspose.ThreeD.Shading.WrapMode.Wrap,
    };
    tex.SetProperty("TexProp", "value");
    // Create a material with a custom property
    Material mat = scene.RootNode.ChildNodes[0].Material;
    mat.SetTexture(Material.MapDiffuse, tex);
    mat.SetProperty("MyProp", 1.0);
    scene.RootNode.ChildNodes[0].Material = mat;
    // Save this to file
    scene.Save("exported.obj", FileFormat.WavefrontOBJ);
}

private static byte[] CreateTextureContent()
{
    using (var bitmap = new Bitmap(256, 256))
    {
        using (var g = Graphics.FromImage(bitmap))
        {
            g.Clear(Color.White);
            LinearGradientBrush brush = new LinearGradientBrush(new Rectangle(0, 0, 128, 128),
                Color.Moccasin, Color.Blue, 45);
            using (var font = new Font(FontFamily.GenericSerif, 40))
            {
                g.DrawString("Aspose.3D", font, brush, Point.Empty);
            }
        }
        using (var ms = new MemoryStream())
        {
            bitmap.Save(ms, ImageFormat.Png); // without this line the returned content is empty
            return ms.ToArray();
        }
    }
}
To build a 3D object model with 6 images, we have devised the code below based on your requirements. Comments have also been added for your reference. Please try it in your environment and then share your kind feedback with us.
private static void RubikCube()
{
    Bitmap[] bitmaps = CreateRubikBitmaps();
    Scene scene = new Scene();
    // Create a box and convert it to a mesh, so we can manually specify the material per face
    var box = (new Box()).ToMesh();
    // Create a material mapping; the box mesh generated from the Box primitive class contains
    // 6 polygons, so we can reference the material of each polygon (specified by
    // MappingMode.Polygon) by index (ReferenceMode.Index)
    var materials = (VertexElementMaterial)box.CreateElement(VertexElementType.Material, MappingMode.Polygon, ReferenceMode.Index);
    // Each polygon uses a different material; the indices of these materials are specified below
    materials.SetIndices(new int[] { 0, 1, 2, 3, 4, 5 });
    // Create the node and the materials (referenced above)
    var boxNode = scene.RootNode.CreateChildNode(box);
    for (int i = 0; i < bitmaps.Length; i++)
    {
        // Create a material with a texture
        var material = new LambertMaterial();
        var tex = new Texture();
        using (var ms = new MemoryStream())
        {
            bitmaps[i].Save(ms, ImageFormat.Png);
            var bytes = ms.ToArray();
            // Save it to Texture.Content as an embedded texture, so the scene with textures
            // can be exported into a single FBX file
            tex.Content = bytes;
            // Give it a name and save it to disk so it can be opened with the .obj file
            tex.FileName = string.Format("cube_{0}.png", i);
            File.WriteAllBytes(tex.FileName, bytes);
            // Dispose the bitmap since we no longer need it
            bitmaps[i].Dispose();
        }
        // The texture is used as diffuse
        material.SetTexture(Material.MapDiffuse, tex);
        // Attach it to the node that contains the box mesh
        boxNode.Materials.Add(material);
    }
    // Save it to file
    // The 3D Viewer in Windows 10 does not support multiple materials (you'll see the same
    // texture on each face), but the tools from Autodesk do
    scene.Save("test.fbx", FileFormat.FBX7500ASCII);
    // NOTE: Multiple materials per mesh are not supported by Aspose.3D's OBJ exporter yet,
    // but we can split a mesh with multiple materials into separate meshes using PolygonModifier.SplitMesh
    PolygonModifier.SplitMesh(scene, SplitMeshPolicy.CloneData);
    // The following line also generates a material library file (test.mtl) that uses the
    // textures exported by the code above
    scene.Save("test.obj", FileFormat.WavefrontOBJ);
}

private static Bitmap[] CreateRubikBitmaps()
{
    Brush[] colors = { Brushes.White, Brushes.Red, Brushes.Blue, Brushes.Yellow, Brushes.Orange, Brushes.Green };
    Bitmap[] bitmaps = new Bitmap[6];
    // Initialize the cell colors
    int[] cells = new int[6 * 9];
    for (int i = 0; i < cells.Length; i++)
    {
        cells[i] = i / 9;
    }
    // Shuffle the cells
    Random random = new Random();
    Array.Sort(cells, (a, b) => random.Next(-1, 2));
    // Paint each face
    // Size of each face is 256px
    const int size = 256;
    // Size of each cell is 80x80
    const int cellSize = 80;
    // Calculate the padding between cells
    const int paddingSize = (size - cellSize * 3) / 4;
    int cellId = 0;
    for (int i = 0; i < 6; i++)
    {
        bitmaps[i] = new Bitmap(size, size, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
        using (Graphics g = Graphics.FromImage(bitmaps[i]))
        {
            g.Clear(Color.Black);
            for (int j = 0; j < 9; j++)
            {
                // Calculate the cell's position
                int row = j / 3;
                int column = j % 3;
                int y = row * (cellSize + paddingSize) + paddingSize;
                int x = column * (cellSize + paddingSize) + paddingSize;
                Brush cellBrush = colors[cells[cellId++]];
                // Paint the cell
                g.FillRectangle(cellBrush, x, y, cellSize, cellSize);
            }
        }
    }
    return bitmaps;
}
P.S.: I work with Aspose as a Developer Evangelist.

How to create an image from byte array and display it?

I want to get data from an image file and display that information in a texture that Unity can read.
I am able to get the pixel information into a byte array, but nothing ever displays on the screen. How do I actually get the image to display?
pcxFile = File.ReadAllBytes("Assets/5_ImageParser/bagit_icon.pcx");
int startPoint = 128;
int height = 152;
int width = 152;
target = new Texture2D(height, width);
for (var y = 0; y < height; y++)
{
    for (var x = 0; x < width; x++)
    {
        timesDone++;
        pixels[x, y] = new Color(pcxFile[startPoint], pcxFile[startPoint + 1], pcxFile[startPoint + 2]);
        startPoint += 4;
        target.SetPixel(x, y, pixels[x, y]);
    }
}
target.Apply();
target.EncodeToJPG();
Well (assuming you get the pixel data correctly), you have to assign that created texture to something.
I'd use e.g. a RawImage, since it doesn't need a Sprite (as the UI.Image component would; also see the RawImage manual):
// Reference this in the Inspector
public RawImage image;

//...

pcxFile = File.ReadAllBytes("Assets/5_ImageParser/bagit_icon.pcx");
int startPoint = 128;
int height = 152;
int width = 152;
target = new Texture2D(height, width);
for (var y = 0; y < height; y++)
{
    for (var x = 0; x < width; x++)
    {
        timesDone++;
        pixels[x, y] = new Color(pcxFile[startPoint], pcxFile[startPoint + 1], pcxFile[startPoint + 2]);
        startPoint += 4;
        target.SetPixel(x, y, pixels[x, y]);
    }
}
target.Apply();

// You don't need this. Only if you are also going to save it locally
// as an actual *.jpg file or if you are going to
// e.g. upload it later via http POST
//
// In this case, however, you would have to assign the result
// to a variable in order to use it later
//var rawJpgBytes = target.EncodeToJPG();

// Assign the texture to the RawImage component
image.texture = target;
Alternatively, for use with the normal Image component, create a Sprite from your texture using Sprite.Create:
// Reference this in the Inspector
public Image image;

// ...

var sprite = Sprite.Create(target, new Rect(0.0f, 0.0f, target.width, target.height), new Vector2(0.5f, 0.5f), 100.0f);
image.sprite = sprite;
Hint 1
In order to get the correct aspect ratio, I usually use a little trick:
Create a parent object for the RawImage.
Set the desired "maximal size" in the RectTransform of the parent.
Next to the RawImage/Image (on the child object), add an AspectRatioFitter (also see the AspectRatioFitter manual) and set its AspectMode to FitInParent.
Now adjust the aspect ratio in code (you get it from the texture):
public RawImage image;
public AspectRatioFitter fitter;

//...

image.texture = target;
// Use float division here; integer division would truncate the ratio
var ratio = (float)target.width / target.height;
fitter.aspectRatio = ratio;
Hint 2
It is "cheaper" to call SetPixels once for all pixels than calling SetPixel repeatedly:
// ...
startPoint = 0;
pixels = new Color[width * height];
for (int i = 0; i < pixels.Length; i++)
{
    // Note: Color expects values in 0..1, so raw byte values may need
    // dividing by 255f (or use Color32 instead)
    pixels[i] = new Color(pcxFile[startPoint], pcxFile[startPoint + 1], pcxFile[startPoint + 2]);
    startPoint += 4;
}
target.SetPixels(pixels);
target.Apply();
// ...
(I don't know exactly how your PCX format works, but maybe you could even use LoadRawTextureData.)
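A minimal sketch of that alternative, assuming the bytes have already been decoded into a raw format Unity understands (e.g. 32-bit RGBA; PCX data would have to be decoded first):
// Sketch: rawBytes is assumed to hold exactly width * height * 4 bytes in RGBA order.
Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
tex.LoadRawTextureData(rawBytes);
tex.Apply();
image.texture = tex;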

Matrix of photos

So I have code that loads an image into my project through resources (Image foodWorld = Resources.orange), and I want to make a matrix out of this photo, so that it forms a grid of repeated images.
I have this code, but I don't know how to draw the matrix. I also don't know whether this is the right way to draw it:
this.Width = 400;
this.Height = 300;
Bitmap b = new Bitmap(this.Width, this.Height);
for (int i = 0; i < this.Height; i++)
{
    for (int j = 0; j < this.Width; j++)
    {
        // fill the matrix
    }
}
I am not too familiar with WinForms, but in WPF, I'd do it this way:
var columns = 15;
var rows = 10;
var imageWidth = 32;
var imageHeight = 32;
var grid = new Grid();
for (int i = 0; i < rows; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Get the image in your project; I'm not sure how this is done in WinForms
        var b = new Bitmap(imageWidth, imageHeight);
        // Display it
        var pictureBox = new PictureBox();
        pictureBox.Image = b;
        // Set the position (Grid.SetColumn/SetRow take the element first, then the index)
        Grid.SetColumn(pictureBox, j);
        Grid.SetRow(pictureBox, i);
        // Insert into the "matrix"
        grid.Children.Add(pictureBox);
    }
}
For moving Pacman, repeat the above, but for only one image. Store a reference to the current position and, when certain keys are pressed,
animate its margin until it appears to be in an adjacent cell (for instance, if each cell is 16 pixels wide and Pacman should be in the center of any given cell, animate the right margin by 16 pixels to move it into the cell on the right, and so forth).
Once it has moved to another cell, set the new row and column based on the direction in which it last moved.
If there is a fruit at the new position, get the fruit at that position and remove it from the Grid. You can get it by using myGrid.Children[currentRow * totalColumns + currentColumn], assuming currentRow and currentColumn are both zero-based.
Repeat for each cell it must move to.
This does mean the matrix has a fixed size, but WPF has a Viewbox, which is convenient for these types of scenarios. Also, set the z-index of Pacman to be greater than that of the fruits so it's always on top. A WinForms sketch of the grid itself follows below.
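For completeness, a minimal WinForms sketch of the same idea, drawing the resource image into a grid on a single Bitmap (Resources.orange comes from the question; the pictureBox1 control and the 15x10 grid size are assumptions):
// Sketch: tile the resource image into a grid and show it in a PictureBox.
int columns = 15, rows = 10;
Image tile = Resources.orange; // the image from the question
var b = new Bitmap(columns * tile.Width, rows * tile.Height);
using (Graphics g = Graphics.FromImage(b))
{
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++)
            g.DrawImage(tile, j * tile.Width, i * tile.Height, tile.Width, tile.Height);
}
pictureBox1.Image = b; // pictureBox1 is an assumed control on the form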

Runtime Normal Map Import in Unity 5

For my project, I need to create materials at runtime. When I create a material, the normal map has no effect. I tried the two solutions below, but they did not work for me. Has something changed about this in Unity 5?
The Links I checked :
http://answers.unity3d.com/questions/801670/runtime-loading-normal-texture.html
http://answers.unity3d.com/questions/47121/runtime-normal-map-import.html
P.S.: The weird thing is that when I switch to the Scene view inside Unity and expand the material tab in the Inspector, the normal map is applied to the object.
My Code:
....
Material mat = new Material(Shader.Find("Standard (Specular setup)"));
mat.SetTexture("_MainTex", colortex);
normaltex = getNormalTexture(source); // source is the Texture2D to convert
mat.SetTexture("_BumpMap", normaltex);
mat.SetFloat("_Glossiness", 0.1f);
mat.SetFloat("_BumpScale", 1.0f);
....
public static Texture2D getNormalTexture(Texture2D source)
{
    Texture2D normalTexture = new Texture2D(source.width, source.height, TextureFormat.ARGB32, true);
    Color theColour = new Color();
    for (int x = 0; x < source.width; x++)
    {
        for (int y = 0; y < source.height; y++)
        {
            // Swizzle the channels into the layout Unity's UnpackNormal expects
            // (x component in alpha, y component in green)
            theColour.r = 0;
            theColour.g = source.GetPixel(x, y).g;
            theColour.b = 0;
            theColour.a = source.GetPixel(x, y).r;
            normalTexture.SetPixel(x, y, theColour);
        }
    }
    normalTexture.Apply();
    return normalTexture;
}
At least with Unity 4.x, you had to modify the shader to display runtime normal maps correctly; you just needed to remove the UnpackNormal() call from the shader code.
Technical details:
http://forum.unity3d.com/threads/creating-runtime-normal-maps-using-rendertotexture.135841/#post-924587
Builtin shader sources can be downloaded from:
http://unity3d.com/get-unity/download/archive
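One more thing worth checking, and it matches the symptom of the normal map only appearing after expanding the material in the Inspector: when a Standard shader material is created at runtime, setting _BumpMap does not automatically enable the matching shader keyword, so Unity keeps using the shader variant without normal mapping until something (such as the Inspector) enables it. Enabling it manually from script is documented behaviour:
// After assigning the normal map, enable the Standard shader's normal-map variant.
mat.SetTexture("_BumpMap", normaltex);
mat.EnableKeyword("_NORMALMAP");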

Faces not getting detected from Kinect Feed

Below is the code for a method I am using to detect faces from the Kinect feed and then set the pixels from the face into a new image. It is triggered by a gesture, signalled by GestureFlag. The Detect method I am calling from the FaceDetection class is taken from the EMGU CV sample.
public string Detect(WriteableBitmap colorBitmap, int GestureFlag)
{
    if (GestureFlag != 0)
    {
        List<Rectangle> faces = new List<Rectangle>();
        Bitmap bitface = BitmapFromSource(colorBitmap);
        Image<Bgr, Byte> image = new Image<Bgr, Byte>(bitface);
        FaceDetection.Detect(image, "haarcascade_frontalface_default.xml", faces);
        Bitmap img = new Bitmap(@"C:\Users\rama\Downloads\kinect-2-background-removal-master\KinectBackgroundRemoval\Assets\emptyimage.png");
        img = ResizeImage(img, 1540, 1980);
        int high = image.Height;
        int width = image.Width;
        for (int i = 0; i < width; i++)
        {
            for (int j = 0; j < high; j++)
            {
                Bgr pixel = image[j, i];
                System.Drawing.Point p = new System.Drawing.Point(j, i);
                if (faces[0].Contains(p) && i < 1540 && j < 1980)
                {
                    img.SetPixel(i, j, System.Drawing.Color.FromArgb(255, (int)pixel.Blue, (int)pixel.Green, (int)pixel.Red));
                }
            }
        }
        count++;
        key = count.ToString() + "rich.jpg";
        image.Save(@"C:\Users\rama\Downloads\kinect-2-background-removal-master\KinectBackgroundRemoval\Assets\" + key);
        img.Save(@"C:\Users\rama\Downloads\kinect-2-background-removal-master\KinectBackgroundRemoval\" + key);
        bool status = UploadToS3("nitish2", key, @"C:\Users\rama\Downloads\kinect-2-background-removal-master\KinectBackgroundRemoval\" + key);
        var FBClient = new FacebookClient();
        var UploadToFacebook = new FacebookUpload(FBClient, key);
        UploadToFacebook.Show();
        GestureFlag = 0;
    }
    return key;
}
The problem I'm running into is that an entirely different set of pixels gets painted onto the new image I'm saving.
Basically, I think the problem is here:
FaceDetection.Detect(image, "haarcascade_frontalface_default.xml", faces);

for (int i = 0; i < width; i++)
{
    for (int j = 0; j < high; j++)
    {
        Bgr pixel = image[j, i];
        System.Drawing.Point p = new System.Drawing.Point(j, i);
        if (faces[0].Contains(p) && i < 1540 && j < 1980)
        {
            img.SetPixel(i, j, System.Drawing.Color.FromArgb(255, (int)pixel.Blue, (int)pixel.Green, (int)pixel.Red));
        }
    }
}
So can someone please point out where I'm going wrong?
Thank you very much.
EDIT: I've tried adding checks like faces[0] != null, which should verify whether FaceDetection.Detect is actually returning anything, but I'm still getting the same result.
I've also tried saving the ColorBitmap and testing it against the EMGU CV sample, and it detects the faces in the image easily.
EDIT 2: I've cross-checked the coordinates of the rectangle being painted against the coordinates of the detected face and the values of the Rectangle() being populated. They turn out to be almost the same, as far as I can see. So no luck there.
I don't know what else I can try for debugging.
If someone could point something out, that would be great.
Thanks!
OK, I finally got it to work.
The problem was new Point(j, i), which should have been new Point(i, j). My FaceDetection method also proved to be a little erratic, and I had to fix that too before I could properly debug my code.
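For reference, the corrected lines inside the pixel loop: the EMGU image indexer stays [row, column], i.e. [j, i], while Point takes (x, y), i.e. (i, j).
Bgr pixel = image[j, i]; // EMGU indexer is [row, column]
System.Drawing.Point p = new System.Drawing.Point(i, j); // Point takes (x, y)
if (faces[0].Contains(p) && i < 1540 && j < 1980)
{
    img.SetPixel(i, j, System.Drawing.Color.FromArgb(255, (int)pixel.Blue, (int)pixel.Green, (int)pixel.Red));
}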
