I am trying to add a watermark to my image, and this is the code I have for taking a screenshot. Can someone show me how to add a watermark to the image? I want a small logo in the top right-hand corner of the image.
I have also been researching whether what I have on the canvas could persist when an actual screenshot is taken, but with no luck. I would really appreciate it if someone could help me out here!
public string MakePhoto(bool openIt)
{
int resWidth = Screen.width;
int resHeight = Screen.height;
Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false); //Create new texture
RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
// hide the info-text, if any
if (infoText)
{
infoText.text = string.Empty;
}
// render background and foreground cameras
if (backroundCamera && backroundCamera.enabled)
{
backroundCamera.targetTexture = rt;
backroundCamera.Render();
backroundCamera.targetTexture = null;
}
if (backroundCamera2 && backroundCamera2.enabled)
{
backroundCamera2.targetTexture = rt;
backroundCamera2.Render();
backroundCamera2.targetTexture = null;
}
if (foreroundCamera && foreroundCamera.enabled)
{
foreroundCamera.targetTexture = rt;
foreroundCamera.Render();
foreroundCamera.targetTexture = null;
}
// get the screenshot
RenderTexture prevActiveTex = RenderTexture.active;
RenderTexture.active = rt;
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
// clean-up
RenderTexture.active = prevActiveTex;
Destroy(rt);
byte[] btScreenShot = screenShot.EncodeToJPG();
Destroy(screenShot);
#if !UNITY_WSA
// save the screenshot as jpeg file
string sDirName = Application.persistentDataPath + "/Screenshots";
if (!Directory.Exists(sDirName))
Directory.CreateDirectory (sDirName);
string sFileName = sDirName + "/" + string.Format ("{0:F0}", Time.realtimeSinceStartup * 10f) + ".jpg";
File.WriteAllBytes(sFileName, btScreenShot);
Debug.Log("Photo saved to: " + sFileName);
if (infoText)
{
infoText.text = "Saved to: " + sFileName;
}
// open file
if(openIt)
{
System.Diagnostics.Process.Start(sFileName);
}
return sFileName;
#else
// on UWP (WSA) builds nothing was saved, so there is no path to return
return string.Empty;
#endif
}
PS: I found this, which might be useful:
public Texture2D AddWatermark(Texture2D background, Texture2D watermark)
{
int startX = 0;
int startY = background.height - watermark.height;
for (int x = startX; x < Mathf.Min(startX + watermark.width, background.width); x++)
{
for (int y = startY; y < background.height; y++)
{
Color bgColor = background.GetPixel(x, y);
Color wmColor = watermark.GetPixel(x - startX, y - startY);
Color final_color = Color.Lerp(bgColor, wmColor, wmColor.a);
background.SetPixel(x, y, final_color);
}
}
background.Apply();
return background;
}
Select the imported image in the Project view, set its Texture Type to Sprite (2D and UI) in the Inspector (see the Sprites manual), and hit Apply.
add a Texture2D field for it to your class, like
public Texture2D watermark;
Reference the watermark in the Inspector
You could simply add the watermark as an overlay by adding the Color values of both textures for each pixel (assuming here that they have the same size!).
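A minimal sketch of that same-size case might look like this (AddOverlay is just an illustrative name; it assumes both textures have identical dimensions and are Read/Write enabled):

// Sketch: additively overlay 'watermark' onto 'background', weighting by the watermark's alpha.
// Assumes both textures have the same width/height and are Read/Write enabled.
private static void AddOverlay(Texture2D background, Texture2D watermark)
{
    Color[] bgPixels = background.GetPixels();
    Color[] wmPixels = watermark.GetPixels();
    for (int i = 0; i < bgPixels.Length; i++)
    {
        // fully transparent watermark pixels leave the background unchanged
        bgPixels[i] += wmPixels[i] * wmPixels[i].a;
    }
    background.SetPixels(bgPixels);
    background.Apply();
}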
If you want the watermark only in a certain rect of the texture, you have to scale it accordingly and use Texture2D.SetPixels(int x, int y, int blockWidth, int blockHeight, Color[] colors) (this assumes the watermark image is smaller in pixels than the screenShot!), like
private static void AddWaterMark(Texture2D texture, Texture2D watermarkTexture)
{
int watermarkWidth = watermarkTexture.width;
int watermarkHeight = watermarkTexture.height;
// In Unity, contrary to most expectations, the pixel coordinate
// 0,0 is not the top-left corner but the bottom-left,
// so since you want the watermark in the top-right corner do
int startx = texture.width - watermarkWidth;
// optionally you could also still leave a border of e.g. 10 pixels by using
// int startx = texture.width - watermarkWidth - 10;
// same for the y axis
int starty = texture.height - watermarkHeight;
Color[] watermarkPixels = watermarkTexture.GetPixels();
// get the texture pixels for the given rect
Color[] originalPixels = texture.GetPixels(startx, starty, watermarkWidth, watermarkHeight);
for(int i = 0; i < watermarkPixels.Length; i++)
{
var pixel = watermarkPixels[i];
// adjust the alpha value of the watermark
pixel.a *= 0.5f;
// add watermark pixel to original pixel
originalPixels[i] += pixel;
}
// write back the changed texture data
texture.SetPixels(startx, starty, watermarkWidth, watermarkHeight, originalPixels);
texture.Apply();
}
call it like
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
AddWaterMark(screenShot, watermark);
I'm new to Unity 3D and trying to split a Texture2D sprite that contains an audio waveform inside a Scroll Rect. The waveform comes from an audio source imported by the user and is added to the scroll rect horizontally, like a timeline. The script that creates the waveform works, but the width variable (which comes from another script; that is not the problem) exceeds the limits of a Texture2D. Only if I manually set a width of less than 16000 does the waveform appear, and then it does not fill the scroll rect. A 3-4 minute song typically needs a width of 55000-60000, which can't be rendered. I need to split that waveform Texture2D sprite horizontally into multiple parts (or children), lay them out side by side, and render them only when they appear on screen. How can I do that? Thank you in advance.
This creates the waveform sprite; this is also where the sprite should be split into multiple sprites laid out side by side horizontally and rendered only when they appear on screen:
public void LoadWaveform(AudioClip clip)
{
Texture2D texwav = waveformSprite.GetWaveform(clip);
Rect rect = new Rect(Vector2.zero, new Vector2(Realwidth, 180));
waveformImage.sprite = Sprite.Create(texwav, rect, Vector2.zero);
waveformImage.SetNativeSize();
}
This creates the waveform from an audio clip (code found on the internet and modified for my project):
public class WaveformSprite : MonoBehaviour
{
private int width = 16000; //This should be the variable from another script
private int height = 180;
public Color background = Color.black;
public Color foreground = Color.yellow;
private int samplesize;
private float[] samples = null;
private float[] waveform = null;
private float arrowoffsetx;
public Texture2D GetWaveform(AudioClip clip)
{
int halfheight = height / 2;
float heightscale = (float)height * 0.75f;
// get the sound data
Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
waveform = new float[width];
Debug.Log("NUMERO DE SAMPLES: " + clip.samples);
var clipSamples = clip.samples;
samplesize = clipSamples * clip.channels;
samples = new float[samplesize];
clip.GetData(samples, 0);
int packsize = (samplesize / width);
for (int w = 0; w < width; w++)
{
waveform[w] = Mathf.Abs(samples[w * packsize]);
}
// map the sound data to texture
// 1 - clear
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
tex.SetPixel(x, y, background);
}
}
// 2 - plot
for (int x = 0; x < width; x++)
{
for (int y = 0; y < waveform[x] * heightscale; y++)
{
tex.SetPixel(x, halfheight + y, foreground);
tex.SetPixel(x, halfheight - y, foreground);
}
}
tex.Apply();
return tex;
}
}
Instead of reading all the samples in one loop to populate waveform[], read only the amount needed for the current texture (utilizing an offset to track position in the array).
Calculate the number of textures your function will output.
var textureCount = Mathf.CeilToInt((float)totalWidth / maxTextureWidth); // max texture width 16,000
Create an outer loop to generate each texture.
for (int i = 0; i < textureCount; i++)
Calculate the current texture's width (used for the waveform array and drawing loops).
var textureWidth = Mathf.CeilToInt(Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth));
Utilize an offset for populating the waveform array.
for (int w = 0; w < textureWidth; w++)
{
waveform[w] = Mathf.Abs(samples[(w + offset) * packSize]);
}
With offset increasing at the end of each texture's loop by the number of waveform columns used for that texture (i.e. the texture width).
offset += textureWidth;
In the end the function will return an array of Texture2D instead of a single texture.
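Putting those steps together, a sketch of the adjusted waveform method might look like the following (GetWaveforms, totalWidth and maxTextureWidth are illustrative names; height, samples, samplesize, waveform, background and foreground are the fields from the class above):

// Sketch: split the waveform across several textures, each at most maxTextureWidth wide.
public Texture2D[] GetWaveforms(AudioClip clip, int totalWidth, int maxTextureWidth)
{
    int halfheight = height / 2;
    float heightscale = (float)height * 0.75f;

    // get the sound data once
    samplesize = clip.samples * clip.channels;
    samples = new float[samplesize];
    clip.GetData(samples, 0);
    int packsize = samplesize / totalWidth;

    // number of textures needed, e.g. maxTextureWidth = 16000
    int textureCount = Mathf.CeilToInt((float)totalWidth / maxTextureWidth);
    Texture2D[] textures = new Texture2D[textureCount];
    int offset = 0;

    for (int i = 0; i < textureCount; i++)
    {
        // width of the current slice
        int textureWidth = Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth);
        Texture2D tex = new Texture2D(textureWidth, height, TextureFormat.RGBA32, false);

        waveform = new float[textureWidth];
        for (int w = 0; w < textureWidth; w++)
        {
            waveform[w] = Mathf.Abs(samples[(w + offset) * packsize]);
        }

        // clear, then plot, as in the single-texture version
        for (int x = 0; x < textureWidth; x++)
        {
            for (int y = 0; y < height; y++)
            {
                tex.SetPixel(x, y, background);
            }
        }
        for (int x = 0; x < textureWidth; x++)
        {
            for (int y = 0; y < waveform[x] * heightscale; y++)
            {
                tex.SetPixel(x, halfheight + y, foreground);
                tex.SetPixel(x, halfheight - y, foreground);
            }
        }

        tex.Apply();
        textures[i] = tex;
        offset += textureWidth;
    }

    return textures;
}

Each resulting texture can then be turned into its own Sprite, parented under the scroll rect content, and toggled on or off depending on visibility.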
How can I blend two textures into a new one?
I have a texture from the android gallery and some logo png texture. I need to add this logo into the texture from the gallery and store this as variable to save into the gallery as a new image.
These shaders blend between two textures based on a 0-1 value that you control. The first version is extra-fast because it does not use lighting, and the second uses the same basic ambient + diffuse calculation that I used in my Simply Lit shader.
http://wiki.unity3d.com/index.php/Blend_2_Textures
Drag a different texture onto each of the material's variable slots, and use the Blend control to mix them to taste.
Take note that the lit version requires two passes on the GPU used in the oldest iOS devices.
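If you would rather drive the blend from code than from the Inspector, a minimal sketch might be (galleryTexture and logoTexture are placeholder names for your two Texture2D instances):

// Sketch: create a material from the shader below and feed it the two textures.
Material blendMat = new Material(Shader.Find("Blend 2 Textures"));
blendMat.SetTexture("_MainTex", galleryTexture); // picture from the gallery
blendMat.SetTexture("_Texture2", logoTexture);   // logo to blend in
blendMat.SetFloat("_Blend", 0.5f);               // 0 = only _MainTex, 1 = only _Texture2

To get the blended result back into a Texture2D, you can then Graphics.Blit the gallery texture through this material into a RenderTexture and ReadPixels from it, similar to the merge() helper shown further down.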
ShaderLab - Blend 2 Textures.shader
Shader "Blend 2 Textures" {
Properties {
_Blend ("Blend", Range (0, 1) ) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
}
}
ShaderLab - Blend 2 Textures, Simply Lit.shader
Shader "Blend 2 Textures, Simply Lit" {
Properties {
_Color ("Color", Color) = (1,1,1)
_Blend ("Blend", Range (0,1)) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
Category {
Material {
Ambient[_Color]
Diffuse[_Color]
}
// iPhone 3GS and later
SubShader {Pass {
Lighting On
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
SetTexture[_] {Combine previous * primary Double}
}}
// pre-3GS devices, including the September 2009 8GB iPod touch
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
Pass {
Lighting On
Blend DstColor SrcColor
}
}
}
}
I had a similar task with a paint tool I was making. So here's my approach:
First, import or instantiate logo and picture textures as Texture2D in order to use Texture2D.GetPixel() and Texture2D.SetPixel() methods.
Assuming that logo size is smaller than picture itself, store logo pixels into the Color[] array:
Color[] logoPixels = logo.GetPixels();
We need to apply logo above the picture, considering alpha level in logo image itself:
//Method GetPixels stores pixel colors in 1D array
int i = 0; //Logo pixel index
for (int y = 0; y < logo.height; y++) {
for (int x = 0; x < logo.width; x++) {
//Get color of original pixel
Color c = picture.GetPixel (logoPositionX + x, logoPositionY + y);
//Lerp pixel color by alpha value
picture.SetPixel (logoPositionX + x, logoPositionY + y, Color.Lerp (c, logoPixels[i], logoPixels[i].a));
i++;
}
}
//Apply changes
picture.Apply();
So, if a pixel's alpha is 0, we leave the picture unchanged.
Encode the resulting image with picture.EncodeToPNG() and save the bytes as a .png in the regular way. And to use the GetPixel() and SetPixel() methods, make sure both the logo and the picture it is applied to have Read/Write enabled in their import settings!
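For instance, the save step might look roughly like this (the file name is just a placeholder):

// Sketch: encode the modified picture and write it to disk (requires using System.IO).
byte[] pngBytes = picture.EncodeToPNG();
string path = Path.Combine(Application.persistentDataPath, "picture_with_logo.png");
File.WriteAllBytes(path, pngBytes);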
It's an old question but I have another solution:
public static Texture2D merge(params Texture2D[] textures) {
if (textures == null || textures.Length == 0)
return null;
int oldQuality = QualitySettings.GetQualityLevel();
QualitySettings.SetQualityLevel(5);
RenderTexture renderTex = RenderTexture.GetTemporary(
textures[0].width,
textures[0].height,
0,
RenderTextureFormat.Default,
RenderTextureReadWrite.Linear);
Graphics.Blit(textures[0], renderTex);
RenderTexture previous = RenderTexture.active;
RenderTexture.active = renderTex;
GL.PushMatrix();
GL.LoadPixelMatrix(0, textures[0].width, textures[0].height, 0);
for (int i = 1; i < textures.Length; i++)
Graphics.DrawTexture(new Rect(0, 0, textures[0].width, textures[0].height), textures[i]);
GL.PopMatrix();
Texture2D readableText = new Texture2D(textures[0].width, textures[0].height);
readableText.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
readableText.Apply();
RenderTexture.active = previous;
RenderTexture.ReleaseTemporary(renderTex);
QualitySettings.SetQualityLevel(oldQuality);
return readableText;
}
And here is the use:
Texture2D coloredTex = ImageUtils.merge(tex,
sprites[0].texture,
sprites[1].texture,
sprites[2].texture,
sprites[3].texture);
Hope it helps
I made this solution; it works with two Texture2D textures in Unity.
public Texture2D ImageBlend(Texture2D Bottom, Texture2D Top)
{
var bData = Bottom.GetPixels();
var tData = Top.GetPixels();
int count = bData.Length;
var final = new Color[count];
int i = 0;
int iT = 0;
int startPos = (Bottom.width / 2) - (Top.width / 2) -1;
int endPos = Bottom.width - startPos -1;
for (int y = 0; y < Bottom.height; y++)
{
for (int x = 0; x < Bottom.width; x++)
{
if (y > startPos && y < endPos && x > startPos && x < endPos)
{
Color B = bData[i];
Color T = tData[iT];
Color R;
R = new Color((T.a * T.r) + ((1-T.a) * B.r),
(T.a * T.g) + ((1 - T.a) * B.g),
(T.a * T.b) + ((1 - T.a) * B.b), 1.0f);
final[i] = R;
i++;
iT++;
}
else
{
final[i] = bData[i];
i++;
}
}
}
var res = new Texture2D(Bottom.width, Bottom.height);
res.SetPixels(final);
res.Apply();
return res;
}
Short Version of Problem
I am trying to access the contents of a RenderTexture in Unity which I have been drawing with an own Material using Graphics.Blit.
Graphics.Blit (null, renderTexture, material);
My material converts some yuv image to rgb successfully, which I have tested by assigning it to the texture of an UI element. The result is the correct RGB image visible on the screen.
However, I also need the raw data for a QR code scanner. I am accessing it the same way I would access it from a camera, as explained here. In a comment there, it was mentioned that the extraction is also possible from a RenderTexture that was filled with Graphics.Blit. But when I try that, my texture only contains the value 205 everywhere. This is the code I am using in the Update function, directly after the Graphics.Blit call:
RenderTexture.active = renderTexture;
texture.ReadPixels (new Rect (0, 0, width, height), 0, 0);
texture.Apply ();
RenderTexture.active = null;
When assigning this texture to the same UI element, it is gray and slightly transparent. When inspecting the image values, they are all 205.
Why does this happen? Could there be a problem with the formats of the RenderTexture and the Texture2D I am trying to fill?
Complete Code
In the following I add the whole code I am using. The variable names differ slightly from the ones used above, but they do essentially the same thing:
/**
 * This class continuously converts the y and uv textures in
* YUV color space to a RGB texture, which can be used somewhere else
*/
public class YUV2RGBConverter : MonoBehaviour {
public Material yuv2rgbMat;
// Input textures, set these when they are available
[HideInInspector]
public Texture2D yTex;
[HideInInspector]
public Texture2D uvTex;
// Output, the converted textures
[HideInInspector]
public RenderTexture rgbRenderTex;
[HideInInspector]
public Texture2D rgbTex;
[HideInInspector]
public Color32[] rawRgbData;
/// Describes how often per second the image should be transferred to the CPU
public float GPUTransferRate = 1.0f;
private float timeSinceLastGPUTransfer = 0.0f;
private int width;
private int height;
/**
* Initializes the used textures
*/
void Start () {
updateSize (width, height);
}
/**
* Depending on the sizes of the texture, creating the needed textures for this class
*/
public void updateSize(int width, int height)
{
// Generate the input textures
yTex = new Texture2D(width / 4, height, TextureFormat.RGBA32, false);
uvTex = new Texture2D ((width / 2) * 2 / 4, height / 2, TextureFormat.RGBA32, false);
// Generate the output texture
rgbRenderTex = new RenderTexture(width, height, 0);
rgbRenderTex.antiAliasing = 0;
rgbTex = new Texture2D (width, height, TextureFormat.RGBA32, false);
// Set to shader
yuv2rgbMat.SetFloat("_TexWidth", width);
yuv2rgbMat.SetFloat("_TexHeight", height);
}
/**
* Sets the y and uv textures to some dummy data
*/
public void fillYUWithDummyData()
{
// Set the y tex everywhere to the fractional part of the current time
float colorValue = (float)Time.time - (float)((int)Time.time);
for (int y = 0; y < yTex.height; y++) {
for (int x = 0; x < yTex.width; x++) {
Color yColor = new Color (colorValue, colorValue, colorValue, colorValue);
yTex.SetPixel (x, y, yColor);
}
}
yTex.Apply ();
// Set the uv tex colors
for (int y = 0; y < uvTex.height; y++) {
for (int x = 0; x < uvTex.width; x++) {
int firstXCoord = 2 * x;
int secondXCoord = 2 * x + 1;
int yCoord = y;
float firstXRatio = (float)firstXCoord / (2.0f * (float)uvTex.width);
float secondXRatio = (float)secondXCoord / (2.0f * (float)uvTex.width);
float yRatio = (float)y / (float)uvTex.height;
Color uvColor = new Color (firstXRatio, yRatio, secondXRatio, yRatio);
uvTex.SetPixel (x, y, uvColor);
}
}
uvTex.Apply ();
}
/**
* Continuously convert y and uv texture to rgb texture with custom yuv2rgb shader
*/
void Update () {
// Draw to it with the yuv2rgb shader
yuv2rgbMat.SetTexture ("_YTex", yTex);
yuv2rgbMat.SetTexture ("_UTex", uvTex);
Graphics.Blit (null, rgbRenderTex, yuv2rgbMat);
// Only scan once per second
if (timeSinceLastGPUTransfer > 1 / GPUTransferRate) {
timeSinceLastGPUTransfer = 0;
// Fetch its pixels and set it to rgb texture
RenderTexture.active = rgbRenderTex;
rgbTex.ReadPixels (new Rect (0, 0, width, height), 0, 0);
rgbTex.Apply ();
RenderTexture.active = null;
rawRgbData = rgbTex.GetPixels32 ();
} else {
timeSinceLastGPUTransfer += Time.deltaTime;
}
}
}
OK, sorry that I have to answer my own question. The solution is very simple:
The width and height fields that I was using in this line:
rgbTex.ReadPixels (new Rect (0, 0, width, height), 0, 0);
were not initialized, so they were 0.
I just had to add those lines to the updateSize function:
this.width = width;
this.height = height;
I am asking this question because the other one is two years old and was not answered accurately.
I'm looking to replicate the Photoshop effect mentioned in this article in C#. Adobe calls it a Color Halftone; it looks like some sort of rotated CMYK halftone. Either way, I don't know how I would do it.
Current code sample is below.
Any ideas?
P.S.
This isn't homework. I'm looking to upgrade the comic book effect I have in my OSS project ImageProcessor.
Progress Update.
So here's some code to show what I have done so far...
I can convert between CMYK and RGB easily and accurately enough for my needs, and I can also print out a patterned series of ellipses based on the intensity of each colour component at a series of points.
What I am stuck on just now is rotating the graphics object for each colour so that the points are laid out at the angles specified in the code. Can anyone give me some pointers on how to go about that?
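For context, the CmykColor conversion the snippet below relies on (CmykColor cmykColor = color;) isn't shown; a sketch of the standard RGB-to-CMYK formula it presumably wraps, with components in the 0-1 range, might look like this (RgbToCmyk is an illustrative name, not the actual ImageProcessor API):

// Sketch: plain RGB -> CMYK conversion, all outputs in the 0-1 range.
private static void RgbToCmyk(Color color, out float c, out float m, out float y, out float k)
{
    float r = color.R / 255f;
    float g = color.G / 255f;
    float b = color.B / 255f;

    k = 1f - Math.Max(r, Math.Max(g, b));
    if (1f - k < float.Epsilon)
    {
        // pure black: avoid dividing by zero
        c = m = y = 0f;
        return;
    }
    c = (1f - r - k) / (1f - k);
    m = (1f - g - k) / (1f - k);
    y = (1f - b - k) / (1f - k);
}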
public Image ProcessImage(ImageFactory factory)
{
Bitmap newImage = null;
Image image = factory.Image;
try
{
int width = image.Width;
int height = image.Height;
// These need to be used.
float cyanAngle = 105f;
float magentaAngle = 75f;
float yellowAngle = 90f;
float keylineAngle = 15f;
newImage = new Bitmap(width, height);
newImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);
using (Graphics graphics = Graphics.FromImage(newImage))
{
// Reduce the jagged edges.
graphics.SmoothingMode = SmoothingMode.AntiAlias;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphics.CompositingQuality = CompositingQuality.HighQuality;
graphics.Clear(Color.White);
using (FastBitmap sourceBitmap = new FastBitmap(image))
{
for (int y = 0; y < height; y += 4)
{
for (int x = 0; x < width; x += 4)
{
Color color = sourceBitmap.GetPixel(x, y);
if (color != Color.White)
{
CmykColor cmykColor = color;
float cyanBrushRadius = (cmykColor.C / 100) * 3;
graphics.FillEllipse(Brushes.Cyan, x, y, cyanBrushRadius, cyanBrushRadius);
float magentaBrushRadius = (cmykColor.M / 100) * 3;
graphics.FillEllipse(Brushes.Magenta, x, y, magentaBrushRadius, magentaBrushRadius);
float yellowBrushRadius = (cmykColor.Y / 100) * 3;
graphics.FillEllipse(Brushes.Yellow, x, y, yellowBrushRadius, yellowBrushRadius);
float blackBrushRadius = (cmykColor.K / 100) * 3;
graphics.FillEllipse(Brushes.Black, x, y, blackBrushRadius, blackBrushRadius);
}
}
}
}
}
image.Dispose();
image = newImage;
}
catch (Exception ex)
{
if (newImage != null)
{
newImage.Dispose();
}
throw new ImageProcessingException("Error processing image with " + this.GetType().Name, ex);
}
return image;
}
Input Image
Current Output
As you can see, because the drawn ellipses are not laid out at an angle, the colour output is incorrect.
So here's a working solution. It isn't pretty and it isn't fast (about 2 seconds on my laptop), but the output is good. It doesn't exactly match Photoshop's output, though; I think they perform some additional work.
Slight moiré patterns sometimes appear on different test images but descreening is out of scope for the current question.
The code performs the following steps.
Loop through the pixels of the image at a given interval.
For each colour component (C, M, Y, K), draw an ellipse at a point calculated by rotating the current point by that component's screen angle; the dimensions of the ellipse are determined by the level of that colour component at the point.
Create a new image by looping through the pixel points and adding the CMYK colour component values at each point to determine the correct colour to draw to the image.
Output image
The code
public Image ProcessImage(ImageFactory factory)
{
Bitmap cyan = null;
Bitmap magenta = null;
Bitmap yellow = null;
Bitmap keyline = null;
Bitmap newImage = null;
Image image = factory.Image;
try
{
int width = image.Width;
int height = image.Height;
// Angles taken from Wikipedia page.
float cyanAngle = 15f;
float magentaAngle = 75f;
float yellowAngle = 0f;
float keylineAngle = 45f;
int diameter = 4;
float multiplier = 4 * (float)Math.Sqrt(2);
// Cyan color sampled from Wikipedia page.
Brush cyanBrush = new SolidBrush(Color.FromArgb(0, 153, 239));
Brush magentaBrush = Brushes.Magenta;
Brush yellowBrush = Brushes.Yellow;
Brush keylineBrush;
// Create our images.
cyan = new Bitmap(width, height);
magenta = new Bitmap(width, height);
yellow = new Bitmap(width, height);
keyline = new Bitmap(width, height);
newImage = new Bitmap(width, height);
// Ensure the correct resolution is set.
cyan.SetResolution(image.HorizontalResolution, image.VerticalResolution);
magenta.SetResolution(image.HorizontalResolution, image.VerticalResolution);
yellow.SetResolution(image.HorizontalResolution, image.VerticalResolution);
keyline.SetResolution(image.HorizontalResolution, image.VerticalResolution);
newImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);
// Check bounds against this.
Rectangle rectangle = new Rectangle(0, 0, width, height);
using (Graphics graphicsCyan = Graphics.FromImage(cyan))
using (Graphics graphicsMagenta = Graphics.FromImage(magenta))
using (Graphics graphicsYellow = Graphics.FromImage(yellow))
using (Graphics graphicsKeyline = Graphics.FromImage(keyline))
{
// Ensure cleared out.
graphicsCyan.Clear(Color.Transparent);
graphicsMagenta.Clear(Color.Transparent);
graphicsYellow.Clear(Color.Transparent);
graphicsKeyline.Clear(Color.Transparent);
// This is too slow. The graphics object can't be called within a parallel
// loop so we have to do it old school. :(
using (FastBitmap sourceBitmap = new FastBitmap(image))
{
for (int y = -height * 2; y < height * 2; y += diameter)
{
for (int x = -width * 2; x < width * 2; x += diameter)
{
Color color;
CmykColor cmykColor;
float brushWidth;
// Cyan
Point rotatedPoint = RotatePoint(new Point(x, y), new Point(0, 0), cyanAngle);
int angledX = rotatedPoint.X;
int angledY = rotatedPoint.Y;
if (rectangle.Contains(new Point(angledX, angledY)))
{
color = sourceBitmap.GetPixel(angledX, angledY);
cmykColor = color;
brushWidth = diameter * (cmykColor.C / 255f) * multiplier;
graphicsCyan.FillEllipse(cyanBrush, angledX, angledY, brushWidth, brushWidth);
}
// Magenta
rotatedPoint = RotatePoint(new Point(x, y), new Point(0, 0), magentaAngle);
angledX = rotatedPoint.X;
angledY = rotatedPoint.Y;
if (rectangle.Contains(new Point(angledX, angledY)))
{
color = sourceBitmap.GetPixel(angledX, angledY);
cmykColor = color;
brushWidth = diameter * (cmykColor.M / 255f) * multiplier;
graphicsMagenta.FillEllipse(magentaBrush, angledX, angledY, brushWidth, brushWidth);
}
// Yellow
rotatedPoint = RotatePoint(new Point(x, y), new Point(0, 0), yellowAngle);
angledX = rotatedPoint.X;
angledY = rotatedPoint.Y;
if (rectangle.Contains(new Point(angledX, angledY)))
{
color = sourceBitmap.GetPixel(angledX, angledY);
cmykColor = color;
brushWidth = diameter * (cmykColor.Y / 255f) * multiplier;
graphicsYellow.FillEllipse(yellowBrush, angledX, angledY, brushWidth, brushWidth);
}
// Keyline
rotatedPoint = RotatePoint(new Point(x, y), new Point(0, 0), keylineAngle);
angledX = rotatedPoint.X;
angledY = rotatedPoint.Y;
if (rectangle.Contains(new Point(angledX, angledY)))
{
color = sourceBitmap.GetPixel(angledX, angledY);
cmykColor = color;
brushWidth = diameter * (cmykColor.K / 255f) * multiplier;
// Just using black is too dark.
keylineBrush = new SolidBrush(CmykColor.FromCmykColor(0, 0, 0, cmykColor.K));
graphicsKeyline.FillEllipse(keylineBrush, angledX, angledY, brushWidth, brushWidth);
}
}
}
}
// Set our white background.
using (Graphics graphics = Graphics.FromImage(newImage))
{
graphics.Clear(Color.White);
}
// Blend the colors now to mimic adaptive blending.
using (FastBitmap cyanBitmap = new FastBitmap(cyan))
using (FastBitmap magentaBitmap = new FastBitmap(magenta))
using (FastBitmap yellowBitmap = new FastBitmap(yellow))
using (FastBitmap keylineBitmap = new FastBitmap(keyline))
using (FastBitmap destinationBitmap = new FastBitmap(newImage))
{
Parallel.For(
0,
height,
y =>
{
for (int x = 0; x < width; x++)
{
// ReSharper disable AccessToDisposedClosure
Color cyanPixel = cyanBitmap.GetPixel(x, y);
Color magentaPixel = magentaBitmap.GetPixel(x, y);
Color yellowPixel = yellowBitmap.GetPixel(x, y);
Color keylinePixel = keylineBitmap.GetPixel(x, y);
CmykColor blended = cyanPixel.AddAsCmykColor(magentaPixel, yellowPixel, keylinePixel);
destinationBitmap.SetPixel(x, y, blended);
// ReSharper restore AccessToDisposedClosure
}
});
}
}
cyan.Dispose();
magenta.Dispose();
yellow.Dispose();
keyline.Dispose();
image.Dispose();
image = newImage;
}
catch (Exception ex)
{
if (cyan != null)
{
cyan.Dispose();
}
if (magenta != null)
{
magenta.Dispose();
}
if (yellow != null)
{
yellow.Dispose();
}
if (keyline != null)
{
keyline.Dispose();
}
if (newImage != null)
{
newImage.Dispose();
}
throw new ImageProcessingException("Error processing image with " + this.GetType().Name, ex);
}
return image;
}
Additional code for rotating the points is as follows. It can be found at Rotating a point around another point.
I've left out the colour addition code for brevity.
/// <summary>
/// Rotates one point around another
/// <see href="https://stackoverflow.com/questions/13695317/rotate-a-point-around-another-point"/>
/// </summary>
/// <param name="pointToRotate">The point to rotate.</param>
/// <param name="centerPoint">The centre point of rotation.</param>
/// <param name="angleInDegrees">The rotation angle in degrees.</param>
/// <returns>Rotated point</returns>
private static Point RotatePoint(Point pointToRotate, Point centerPoint, double angleInDegrees)
{
double angleInRadians = angleInDegrees * (Math.PI / 180);
double cosTheta = Math.Cos(angleInRadians);
double sinTheta = Math.Sin(angleInRadians);
return new Point
{
X =
(int)
((cosTheta * (pointToRotate.X - centerPoint.X)) -
(sinTheta * (pointToRotate.Y - centerPoint.Y)) + centerPoint.X),
Y =
(int)
((sinTheta * (pointToRotate.X - centerPoint.X)) +
((cosTheta * (pointToRotate.Y - centerPoint.Y)) + centerPoint.Y))
};
}
I'm a C#/XNA student and I've recently been working on an isometric tile engine; so far it works fairly well. But I'm having problems trying to figure out how to do collision. This is what my tile engine does at the moment:
Draws the world from an image, placing a tile depending on the color at each point of the image. For instance, red would draw a grass tile. (Tiles are 64x32.)
Camera following player, and my draw loop only draws what the camera sees.
This is how my game looks if that would be of any help:
I don't know what sort of collision would work best. Should I use collision points, intersection tests, or something else? I've read somewhere that you could use WorldToScreen/ScreenToWorld conversions, but I'm far too inexperienced and don't know how that works or what the code would look like.
Here is my code drawing tiles etc:
class MapRow
{
public List<MapCell> Columns = new List<MapCell>();
}
class TileMap
{
public List<MapRow> Rows = new List<MapRow>();
public static Texture2D image;
Texture2D tileset;
TileInfo[,] tileMap;
Color[] pixelColor;
public TileMap(string TextureImage, string Tileset)
{
tileset = Game1.Instance.Content.Load<Texture2D>(Tileset);
image = Game1.Instance.Content.Load<Texture2D>(TextureImage);
pixelColor = new Color[image.Width * image.Height]; // array holding every pixel in the image
image.GetData<Color>(pixelColor); // Save all the pixels in image to the array pixelColor
tileMap = new TileInfo[image.Height, image.Width];
int counter = 0;
for (int y = 0; y < image.Height; y++)
{
MapRow thisRow = new MapRow();
for (int x = 0; x < image.Width; x++)
{
tileMap[y, x] = new TileInfo();
if (pixelColor[counter] == new Color(0, 166, 81))
{
tileMap[y, x].cellValue = 1;//grass
}
if (pixelColor[counter] == new Color(0, 74, 128))
{
tileMap[y, x].cellValue = 2;//water
}
if (pixelColor[counter] == new Color(255, 255, 0))
{
tileMap[y, x].cellValue = 3;//Sand
}
tileMap[y, x].LoadInfoFromCellValue();//determine what tile it should draw depending on cellvalue
thisRow.Columns.Add(new MapCell(tileMap[y, x]));
counter++;
}
Rows.Add(thisRow);
}
}
public static int printx;
public static int printy;
public static int squaresAcross = Settings.screen.X / Tile.TileWidth;
public static int squaresDown = Settings.screen.Y / Tile.TileHeight;
int baseOffsetX = -32;
int baseOffsetY = -64;
public void draw(SpriteBatch spriteBatch)
{
printx = (int)Camera.Location.X / Tile.TileWidth;
printy = (int)Camera.Location.Y / Tile.TileHeight;
squaresAcross = (int)Camera.Location.X / Tile.TileWidth + Settings.screen.X / Tile.TileWidth;
squaresDown = 2*(int)Camera.Location.Y / Tile.TileHeight + Settings.screen.Y / Tile.TileHeight + 7;
for (printy = (int)Camera.Location.Y / Tile.TileHeight; printy < squaresDown; printy++)
{
int rowOffset = 0;
if ((printy) % 2 == 1)
rowOffset = Tile.OddRowXOffset;
for (printx = (int)Camera.Location.X / Tile.TileWidth; printx < squaresAcross; printx++)
{
if (tileMap[printy, printx].Collides(MouseCursor.mousePosition))
Console.WriteLine(tileMap[printy, printx].tileRect);
foreach (TileInfo tileID in Rows[printy].Columns[printx].BaseTiles)
{
spriteBatch.Draw(
tileset,
tileMap[printy, printx].tileRect = new Rectangle(
(printx * Tile.TileStepX) + rowOffset + baseOffsetX,
(printy * Tile.TileStepY) + baseOffsetY,
Tile.TileWidth, Tile.TileHeight),
Tile.GetSourceRectangle(tileID.cellValue),
Color.White,
0.0f,
Vector2.Zero,
SpriteEffects.None,
tileID.drawDepth);
}
}
}
}
}
Why don't you just draw things as in a normal tile-based game and then rotate the camera by 45 degrees? Your graphics would need to be a bit unusual, but the tiles would be easier to handle.
But if you prefer your approach, then I'd suggest using simple math to find the tiles around the player (or around another tile): the tile to the right, the tile to the left, the tile above and the tile below. You can simply work with your lists, and basic math, like getting the next tile, is quite simple.
Edit:
You could get the tile value at the player's next position with code something like this:
tileMap[(int)Math.Floor((player.y + playerVelocity.Y) / tileHeight),
(int)Math.Floor((player.x + playerVelocity.X) / tileWidth)]
In this code, I assume that the first tile is at 0,0 and you're drawing to the right and down. (If not, change Math.Floor to Math.Ceiling.)
This link could help you get the idea; however, it's in AS3.0, so only the syntax is different.
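As a rough illustration of that idea, here is a sketch of a tile lookup plus a walkability check. It treats the map as a plain grid in world coordinates (a true isometric hit test would have to invert the TileStepX/TileStepY and row-offset math used in draw()); CanMoveTo is an illustrative name, and treating cellValue 2 (water) as blocking is only an assumption based on the map-loading code above:

// Sketch: convert a world position to tile indices and test the target tile.
public bool CanMoveTo(Vector2 worldPosition)
{
    int tileX = (int)Math.Floor(worldPosition.X / Tile.TileWidth);
    int tileY = (int)Math.Floor(worldPosition.Y / Tile.TileHeight);

    // stay inside the map bounds
    if (tileX < 0 || tileY < 0 ||
        tileY >= tileMap.GetLength(0) || tileX >= tileMap.GetLength(1))
    {
        return false;
    }

    // block movement onto water tiles, allow everything else
    return tileMap[tileY, tileX].cellValue != 2;
}

Each frame you would then test the player's next position, e.g. if (CanMoveTo(playerPosition + playerVelocity)) { playerPosition += playerVelocity; }.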