I'm trying to write a function that can scale an image up and down with a scale factor like 1.5 or 0.5. When I scale down, the image appears correctly on the screen, but when I scale up there is some kind of screen overlay on the image. See the images below.
This is the code I wrote:
public static void ResizeImage(Bitmap inputImage, double scale)
{
int maxWidth = (int) (scale * inputImage.Width);
int maxHeight = (int) (scale * inputImage.Height);
Bitmap scaledImage = new Bitmap(maxWidth, maxHeight);
for(int x = 0; x < inputImage.Width; x++)
{
for(int y = 0; y < inputImage.Height; y++)
{
//Gets current Pixel
Color pixel = inputImage.GetPixel(x, y);
//Calculate the pixel's new position
int newPixelX = (int) (Math.Floor(x * scale));
int newPixelY = (int) (Math.Floor(y * scale));
//Sets pixel in new image
scaledImage.SetPixel(newPixelX, newPixelY, pixel);
}
}
DrawImage(Bitmap2colorm(scaledImage));
}
Does anyone know a solution to this problem, and can someone explain why this is happening?
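What is happening: the loop walks over the source pixels, so at most inputImage.Width * inputImage.Height destination pixels are ever written. When scale > 1 the destination bitmap has more pixels than that, and the ones that are never set keep the new bitmap's default (transparent/black) value, which shows up as the grid-like overlay. The usual fix is to loop over the destination pixels instead and map each one back to a source pixel. A minimal, untested nearest-neighbour sketch (the DrawImage/Bitmap2colorm calls from the original are left out):

// Iterate over the *destination* pixels and map each one back to a source
// pixel, so no destination pixel is left unset when scaling up.
public static Bitmap ResizeImageNearestNeighbour(Bitmap inputImage, double scale)
{
    int newWidth = (int)(scale * inputImage.Width);
    int newHeight = (int)(scale * inputImage.Height);
    Bitmap scaledImage = new Bitmap(newWidth, newHeight);
    for (int x = 0; x < newWidth; x++)
    {
        for (int y = 0; y < newHeight; y++)
        {
            // Reverse mapping: which source pixel does this destination pixel come from?
            int srcX = Math.Min((int)(x / scale), inputImage.Width - 1);
            int srcY = Math.Min((int)(y / scale), inputImage.Height - 1);
            scaledImage.SetPixel(x, y, inputImage.GetPixel(srcX, srcY));
        }
    }
    return scaledImage;
}

For quality and speed, Graphics.DrawImage with an InterpolationMode is normally preferable to per-pixel GetPixel/SetPixel.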
I'm new to Unity 3D and trying to split a Texture2D sprite that contains an audio waveform inside a Scroll Rect. The waveform comes from an audio source imported by the user and is added to the scroll rect horizontally, like a timeline. The script that creates the waveform works, but the width variable (which comes from another script, though that is not the problem) exceeds the limits of a Texture2D; only if I manually set a width below 16000 does the waveform appear, and then it doesn't stretch to the full scroll rect. A 3-4 minute song typically needs a width of 55000-60000, which can't be rendered as a single texture. I need to split that waveform Texture2D sprite horizontally into multiple parts (or children) laid out together, and render them only when they appear on screen. How can I do that? Thank you in advance.
This creates the waveform sprite (and should split the sprite into multiple sprites placed together horizontally, rendering them only when they appear on screen):
public void LoadWaveform(AudioClip clip)
{
Texture2D texwav = waveformSprite.GetWaveform(clip);
Rect rect = new Rect(Vector2.zero, new Vector2(Realwidth, 180));
waveformImage.sprite = Sprite.Create(texwav, rect, Vector2.zero);
waveformImage.SetNativeSize();
}
This creates the waveform from an audio clip (taken from the internet and modified for my project):
public class WaveformSprite : MonoBehaviour
{
private int width = 16000; //This should be the variable from another script
private int height = 180;
public Color background = Color.black;
public Color foreground = Color.yellow;
private int samplesize;
private float[] samples = null;
private float[] waveform = null;
private float arrowoffsetx;
public Texture2D GetWaveform(AudioClip clip)
{
int halfheight = height / 2;
float heightscale = (float)height * 0.75f;
// get the sound data
Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
waveform = new float[width];
Debug.Log("NUMBER OF SAMPLES: " + clip.samples);
var clipSamples = clip.samples;
samplesize = clipSamples * clip.channels;
samples = new float[samplesize];
clip.GetData(samples, 0);
int packsize = (samplesize / width);
for (int w = 0; w < width; w++)
{
waveform[w] = Mathf.Abs(samples[w * packsize]);
}
// map the sound data to texture
// 1 - clear
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
tex.SetPixel(x, y, background);
}
}
// 2 - plot
for (int x = 0; x < width; x++)
{
for (int y = 0; y < waveform[x] * heightscale; y++)
{
tex.SetPixel(x, halfheight + y, foreground);
tex.SetPixel(x, halfheight - y, foreground);
}
}
tex.Apply();
return tex;
}
}
Instead of reading all the samples in one loop to populate waveform[], read only the amount needed for the current texture (utilizing an offset to track position in the array).
Calculate the number of textures your function will output.
var textureCount = Mathf.CeilToInt(totalWidth / (float)maxTextureWidth); // max texture width 16,000
Create an outer loop to generate each texture.
for (int i = 0; i < textureCount; i++)
Calculate the current texture's width (used for the waveform array and drawing loops).
var textureWidth = Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth);
Utilize an offset for populating the waveform array.
for (int w = 0; w < textureWidth; w++)
{
waveform[w] = Mathf.Abs(samples[(w + offset) * packSize]);
}
With offset increasing at the end of each iteration of the texture loop by the number of waveform columns used for that texture (i.e. the texture width).
offset += textureWidth;
In the end the function will return an array of Texture2D instead of a single texture, as in the sketch below.
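Putting those steps together, a rough, untested sketch of GetWaveform reworked to return one Texture2D per slice might look like this (totalWidth and maxTextureWidth are assumed to be supplied by the caller; the other fields come from the class above):

// Rough sketch: build one texture per slice of at most maxTextureWidth columns.
public Texture2D[] GetWaveforms(AudioClip clip, int totalWidth, int maxTextureWidth = 16000)
{
    int halfheight = height / 2;
    float heightscale = height * 0.75f;

    samplesize = clip.samples * clip.channels;
    samples = new float[samplesize];
    clip.GetData(samples, 0);
    int packsize = samplesize / totalWidth;

    int textureCount = Mathf.CeilToInt(totalWidth / (float)maxTextureWidth);
    var textures = new Texture2D[textureCount];
    int offset = 0;

    for (int i = 0; i < textureCount; i++)
    {
        int textureWidth = Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth);
        var tex = new Texture2D(textureWidth, height, TextureFormat.RGBA32, false);
        var slice = new float[textureWidth];

        // Populate only the waveform columns belonging to this slice.
        for (int w = 0; w < textureWidth; w++)
            slice[w] = Mathf.Abs(samples[(w + offset) * packsize]);

        // Clear, then plot, exactly as in the single-texture version.
        for (int x = 0; x < textureWidth; x++)
            for (int y = 0; y < height; y++)
                tex.SetPixel(x, y, background);

        for (int x = 0; x < textureWidth; x++)
            for (int y = 0; y < slice[x] * heightscale; y++)
            {
                tex.SetPixel(x, halfheight + y, foreground);
                tex.SetPixel(x, halfheight - y, foreground);
            }

        tex.Apply();
        textures[i] = tex;
        offset += textureWidth; // advance by the columns consumed by this slice
    }
    return textures;
}

Each resulting texture can then be turned into its own Sprite (as in LoadWaveform) and parented under the Scroll Rect content, so only the visible ones need to be enabled.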
I am trying to get the color from a specific area in an image.
Assume this is the image, and I want to get the color inside it (the result should be the red of the image above). This color may be at a different position in each image. Because I don't know the exact position where the color starts, I can't get an exact result.
Until now, I cropped the image by manually giving x and y positions, and then took the average color of the cropped image. But I know this is not the exact color.
What I tried:
private RgbDto GetRGBvalueCroppedImage(Image croppedImage)
{
var avgRgb = new RgbDto();
var bm = new Bitmap(croppedImage);
BitmapData srcData = bm.LockBits(
new Rectangle(0, 0, bm.Width, bm.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);
int stride = srcData.Stride;
IntPtr Scan0 = srcData.Scan0;
long[] totals = new long[] { 0, 0, 0 };
int width = bm.Width;
int height = bm.Height;
unsafe
{
byte* p = (byte*)(void*)Scan0;
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
for (int color = 0; color < 3; color++)
{
int idx = (y * stride) + x * 4 + color;
totals[color] += p[idx];
}
}
}
}
avgRgb.avgB = (int)totals[0] / (width * height);
avgRgb.avgG = (int)totals[1] / (width * height);
avgRgb.avgR = (int)totals[2] / (width * height);
return avgRgb;
}
How can I get the exact position to crop? Maybe I can convert the image to a byte array, then find the different color, take its position, and crop there, but I have no clue how to do this.
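One way to find that position automatically is to scan for the bounding box of every pixel that differs from the background color (here assumed to be the color of the top-left pixel); a rough, untested sketch:

// Rough sketch: bounding box of all pixels that differ from the background color
// (assumed to be the color of the top-left pixel). Returns null if none differ.
private static Rectangle? FindColoredRegion(Bitmap bmp)
{
    Color background = bmp.GetPixel(0, 0);
    int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            if (bmp.GetPixel(x, y).ToArgb() == background.ToArgb()) continue;
            if (x < minX) minX = x;
            if (y < minY) minY = y;
            if (x > maxX) maxX = x;
            if (y > maxY) maxY = y;
        }
    }
    if (maxX < 0) return null; // the whole image is background
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}

The returned rectangle can then be used to crop, or passed straight to a method like the one in the answer below.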
You can use something like this extension method to get the dominant color (computed here as the average) in a region of an image, in case the pixels are not all the same:
public static Color GetDominantColor(this Bitmap bitmap, int startX, int startY, int width, int height) {
var maxWidth = bitmap.Width;
var maxHeight = bitmap.Height;
//TODO: validate the region being requested
//Used for tally
int r = 0;
int g = 0;
int b = 0;
int totalPixels = 0;
for (int x = startX; x < (startX + width); x++) {
for (int y = startY; y < (startY + height); y++) {
Color c = bitmap.GetPixel(x, y);
r += Convert.ToInt32(c.R);
g += Convert.ToInt32(c.G);
b += Convert.ToInt32(c.B);
totalPixels++;
}
}
r /= totalPixels;
g /= totalPixels;
b /= totalPixels;
Color color = Color.FromArgb(255, (byte)r, (byte)g, (byte)b);
return color;
}
You can then use it like
Color pixelColor = myBitmap.GetDominantColor(xPixel, yPixel, 5, 5);
There is room for improvement, like using a Point and Size, or even a Rectangle:
public static Color GetDominantColor(this Bitmap bitmap, Rectangle area) {
return bitmap.GetDominantColor(area.X, area.Y, area.Width, area.Height);
}
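And, in the same spirit, a possible Point/Size overload:

public static Color GetDominantColor(this Bitmap bitmap, Point location, Size size) {
    return bitmap.GetDominantColor(location.X, location.Y, size.Width, size.Height);
}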
Also see this link:
https://www.c-sharpcorner.com/UploadFile/0f68f2/color-detecting-in-an-image-in-C-Sharp/
If you want to get the image colors, you don't need to do any cropping at all. Just loop over the image pixels and find the two different colors (assuming you already know the image will have exactly two colors, as you said in the comments). I've written a small function that does that. However, I didn't test it in an IDE, so expect some small mistakes:
private static Color[] GetColors(Image image)
{
var bmp = new Bitmap(image);
var colors = new Color[2];
colors[0] = bmp.GetPixel(0, 0);
for (int i = 0; i < bmp.Width; i++)
{
for (int j = 0; j < bmp.Height; j++)
{
Color c = bmp.GetPixel(i, j);
if (c == colors[0]) continue;
colors[1] = c;
return colors;
}
}
return colors;
}
I have grayscale pictures in an ArrayList<System.Windows.Controls.Image> laid out horizontally on a Canvas. Their ImageSources are of type System.Windows.Media.Imaging.BitmapImage.
Is there a way to measure, in pixels, the height of each Image without counting the white, non-transparent pixels outside the colored part?
Let's say I have an Image of height 10 in which the whole top half is white and the bottom half is black; I would need to get 5 as its height. In the same way, if that Image had the top third black, the middle third white and the bottom third black, the height would be 10.
Here's a drawing that shows the desired heights (in blue) of 3 images:
I am willing to use another type for the images, but it must be possible to either get from a byte[] array to that type, or to convert Image to it.
I have read the docs on Image, ImageSource and Visual, but I really have no clue where to start.
Accessing pixel data from a BitmapImage is a bit of a hassle, but you can construct a WriteableBitmap from the BitmapImage object, which is much easier (not to mention more efficient).
WriteableBitmap bmp = new WriteableBitmap(img.Source as BitmapImage);
bmp.Lock();
unsafe
{
int width = bmp.PixelWidth;
int height = bmp.PixelHeight;
byte* ptr = (byte*)bmp.BackBuffer;
int stride = bmp.BackBufferStride;
int bpp = 4; // Assuming Bgra image format
int hms;
for (int y = 0; y < height; y++)
{
hms = y * stride;
for (int x = 0; x < width; x++)
{
int idx = hms + (x * bpp);
byte b = ptr[idx];
byte g = ptr[idx + 1];
byte r = ptr[idx + 2];
byte a = ptr[idx + 3];
// Construct your histogram
}
}
}
bmp.Unlock();
From here, you can construct a histogram from the pixel data, and analyze that histogram to find the boundaries of the non-white pixels in the images.
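As an illustration of that last step, here is a rough, untested variant of the loop above that counts non-white pixels per row and then takes the span between the first and last rows that contain any (Bgra32 layout assumed):

// Per-row count of non-white pixels, then the height spanned by the first and
// last rows that contain any non-white pixel.
int[] rowHistogram = new int[bmp.PixelHeight];
bmp.Lock();
unsafe
{
    byte* basePtr = (byte*)bmp.BackBuffer;
    int stride = bmp.BackBufferStride;
    for (int y = 0; y < bmp.PixelHeight; y++)
    {
        byte* row = basePtr + y * stride;
        for (int x = 0; x < bmp.PixelWidth; x++)
        {
            byte b = row[x * 4];
            byte g = row[x * 4 + 1];
            byte r = row[x * 4 + 2];
            if (!(r == 255 && g == 255 && b == 255))
                rowHistogram[y]++;
        }
    }
}
bmp.Unlock();
int firstRow = Array.FindIndex(rowHistogram, count => count > 0);
int lastRow = Array.FindLastIndex(rowHistogram, count => count > 0);
int coloredHeight = firstRow < 0 ? 0 : lastRow - firstRow + 1;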
EDIT: Here's a Silverlight solution:
public static int getNonWhiteHeight(this Image img)
{
    WriteableBitmap bmp = new WriteableBitmap(img.Source as BitmapImage);
    int width = bmp.PixelWidth;
    int height = bmp.PixelHeight;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // In Silverlight, Pixels holds ARGB ints; -1 (0xFFFFFFFF) is opaque white.
            int pixel = bmp.Pixels[y * width + x];
            if (pixel != -1)
            {
                // The y rows above this one are all white, so the remaining rows are the height
                // (e.g. height 10 with the top half white: first non-white row is y = 5, result 5).
                return height - y;
            }
        }
    }
    return height; // no non-white pixel found
}
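Usage on one of the Image controls from the Canvas would then be (variable name is illustrative):

// myImage is one of the System.Windows.Controls.Image elements on the Canvas.
int coloredHeight = myImage.getNonWhiteHeight();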
I have tried this code for converting a bitmap to pure black and white (not greyscale), but it gives me a pure black image.
public Bitmap blackwhite(Bitmap source)
{
Bitmap bm = new Bitmap(source.Width,source.Height);
for(int y=0;y<bm.Height;y++)
{
for(int x=0;x<bm.Width;x++)
{
if (source.GetPixel(x, y).GetBrightness() > 0.5f)
{
source.SetPixel(x,y,Color.White);
}
else
{
source.SetPixel(x,y,Color.Black);
}
}
}
return bm;
}
What can cause such a problem? Is there an alternative method?
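For what it's worth, the loop above writes the thresholded pixels into source, while the freshly created bm is returned without ever being written to, which is why the result comes out blank/black. A minimal corrected sketch with the same thresholding, just writing to the bitmap that is returned:

public Bitmap blackwhite(Bitmap source)
{
    Bitmap bm = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < bm.Height; y++)
    {
        for (int x = 0; x < bm.Width; x++)
        {
            // Threshold on brightness, writing into the bitmap that gets returned.
            bool isLight = source.GetPixel(x, y).GetBrightness() > 0.5f;
            bm.SetPixel(x, y, isLight ? Color.White : Color.Black);
        }
    }
    return bm;
}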
I know this answer is way too late but I just figured it out and hope it helps other people having this problem.
I get the average brightness of the picture and use that as the threshold for setting pixels to black or white. It isn't 100% accurate and definitely isn't optimized for time complexity but it gets the job done.
public static void GetBitmap(string file)
{
using (Bitmap img = new Bitmap(file, true))
{
// Variable for image brightness
double avgBright = 0;
for (int y = 0; y < img.Height; y++)
{
for (int x = 0; x < img.Width; x++)
{
// Get the brightness of this pixel
avgBright += img.GetPixel(x, y).GetBrightness();
}
}
// Get the average brightness and clamp it between 0.3 and 0.7
avgBright = avgBright / (img.Width * img.Height);
avgBright = avgBright < .3 ? .3 : avgBright;
avgBright = avgBright > .7 ? .7 : avgBright;
// Convert image to black and white based on average brightness
for (int y = 0; y < img.Height; y++)
{
for (int x = 0; x < img.Width; x++)
{
// Set this pixel to black or white based on threshold
if (img.GetPixel(x, y).GetBrightness() > avgBright) img.SetPixel(x, y, Color.White);
else img.SetPixel(x, y, Color.Black);
}
}
// Image is now in black and white
// Note: img is disposed at the end of this using block, so save it (e.g. img.Save(...)) or clone it here if the result is needed later.
}
}
I have two images and I want to multiply them together in C#, the way two layers are multiplied in Photoshop.
I have found the method by which layers are multiplied in Photoshop and other applications.
The following is the formula I found in the GIMP documentation. It says that
E = (M * I) / 255
where M and I are the color component (R, G, B) values of the two layers. This has to be applied to every color component; E is the resulting value for that component.
If a resulting component value is greater than 255 it should be set to white, i.e. 255, and if it is less than 0 it should be set to black, i.e. 0.
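As a concrete illustration of that formula, here is an untested per-channel sketch using GetPixel/SetPixel for clarity (the answer below does the multiplication much faster with LockBits, assuming greyscale images):

// Untested sketch: per-channel multiply blend E = (M * I) / 255, clamped to [0, 255].
public static Bitmap MultiplyBlend(Bitmap layerA, Bitmap layerB)
{
    int width = Math.Min(layerA.Width, layerB.Width);
    int height = Math.Min(layerA.Height, layerB.Height);
    Bitmap result = new Bitmap(width, height);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            Color m = layerA.GetPixel(x, y);
            Color i = layerB.GetPixel(x, y);
            int r = Math.Min(255, Math.Max(0, (m.R * i.R) / 255));
            int g = Math.Min(255, Math.Max(0, (m.G * i.G) / 255));
            int b = Math.Min(255, Math.Max(0, (m.B * i.B) / 255));
            result.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }
    return result;
}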
Here I have a suggestion - I didn't test it, so sorry for any errors - and I'm assuming that both images have the same size and are greyscale.
Basically I'm multiplying image A by the relative pixel percentage of image B.
You can try different formulas like:
int result = (int)(ptrB[0] * ((ptrA[0] / 255.0) + 1));
or
int result = (ptrB[0] * ptrA[0]) / 255;
Never forget to check for overflow (above 255)
public void Multiply(Bitmap srcA, Bitmap srcB, Rectangle roi)
{
BitmapData dataA = SetImageToProcess(srcA, roi);
BitmapData dataB = SetImageToProcess(srcB, roi);
int width = dataA.Width;
int height = dataA.Height;
int offset = dataA.Stride - width; // assumes 8bpp (one byte per pixel) greyscale data
unsafe
{
byte* ptrA = (byte*)dataA.Scan0;
byte* ptrB = (byte*)dataB.Scan0;
for (int y = 0; y < height; ++y)
{
for (int x = 0; x < width; ++x, ++ptrA, ++ptrB)
{
int result = (int)(ptrA[0] * ((ptrB[0] / 255.0) + 1));
ptrA[0] = result > 255 ? 255 : (byte)result;
}
ptrA += offset;
ptrB += offset;
}
}
srcA.UnlockBits(dataA);
srcB.UnlockBits(dataB);
}
static public BitmapData SetImageToProcess(Bitmap image, Rectangle roi)
{
if (image != null)
return image.LockBits(
roi,
ImageLockMode.ReadWrite,
image.PixelFormat);
return null;
}
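A possible call site, from inside the class that defines Multiply, assuming imageA and imageB are same-sized 8bpp greyscale bitmaps (names are illustrative):

// imageA is modified in place and ends up holding the blended result.
Rectangle roi = new Rectangle(0, 0, imageA.Width, imageA.Height);
Multiply(imageA, imageB, roi);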