Generating colours after applying opacity of black and white - C#

I don't quite know how to describe what I want, but here goes. Assume I have three textboxes. I enter a colour hex code in the first one, and I want to apply a black layer on top of it with the opacity set to 50% and get the resulting colour in the second textbox. Same thing, but with white, in the third one.
Let me explain: consider this image below:
In Photoshop, I have the base layer, which is sort of sky blue in colour. I create two layers on top of it, one black and one white, both with an opacity of 50%. Now I can use the colour picker (I) to simply select the two colours I want.
I am having to do this an insane number of times, so I was wondering if I could produce it programmatically.
I know that ideally I should have tried something first and then posted the code that isn't working, but this has stumped me enough that I don't even know where to start. The closest thing I've seen is Kuler, so I think it is possible in Flash at least, but then again, I don't know where to start.
Can you please point me in the right direction? It would be so much better if this were doable in jQuery, but I have looked around and couldn't find anything quite like it. I am not asking for a working solution, just a nudge in the right direction.
If you have any questions, please ask.
The technology is not important to me, the solution is. I am most comfortable with C# (at least I like to think I am), but I am a beginner in PHP and Flash, which I learnt as a hobby just to familiarize myself. I am good with jQuery for most things. Whatever works is good for me.
Many thanks.

So I can get close to, but not exactly, your results, which I think is a side effect of .NET using an integer in the range 0..255 for alpha when creating a color.
But nonetheless, I think this pretty much does what you want:
using System.Drawing;

public class ColorUtility
{
    private Color color;

    public ColorUtility(Color original)
    {
        this.color = original;
    }

    public Color GetTransformedColor(Color overlay)
    {
        // Paint the base colour onto a 1x1 bitmap, paint the half-opaque
        // overlay on top of it, then read back the blended pixel.
        using (var bitmap = new Bitmap(1, 1))
        using (var g = Graphics.FromImage(bitmap))
        {
            using (Brush baseBrush = new SolidBrush(this.color))
            {
                g.FillRectangle(baseBrush, 0, 0, 1, 1);
            }
            // Alpha 127 is the closest integer below 50% of 255 (127.5).
            using (Brush overlayBrush = new SolidBrush(Color.FromArgb(127, overlay)))
            {
                g.FillRectangle(overlayBrush, 0, 0, 1, 1);
            }
            return bitmap.GetPixel(0, 0);
        }
    }
}
usage:
var startColor = ColorTranslator.FromHtml("#359eff");
var util = new ColorUtility(startColor);
var blackOverlay = util.GetTransformedColor(Color.Black); // yields #9aceff
var whiteOverlay = util.GetTransformedColor(Color.White); // yields #1b4f80
Close to your desired results, but not exactly.
EDIT: If you change the alpha value to 128 in the utility you get
Black: #9acfff
White: #1a4f7f
This might be closer to what you want, but it's still not exact: 50% of 255 is 127.5, so neither 127 nor 128 gives an exact 50% opacity.

I know I am late to the party; I just wanted to show another way of doing it.
There is a jQuery color plugin for this. I have never really used it, but it has a function that looks like it does what you want: xColor is the plugin you are looking for. If you go to the combination section, you will see that it does what you want.
I just tried a sample on jsFiddle, and the results are in line with Jamie's amazing answer; it gives the same result colors as Jamie's code, so you can use either, I guess.

So... what is stopping you from writing exactly what you described, using the technology you know best? Three boxes with colours, an input for the opacity percentage, and the resulting mixed colours as output. (I could write it in Flash, but I'm not sure providing the whole program is appropriate on this site.)
If you don't know how to mix colors with opacity, this link should help:
http://www.pegtop.net/delphi/articles/blendmodes/opacity.htm
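For reference, here is a minimal sketch of the per-channel formula that article describes (result = base * (1 - opacity) + overlay * opacity), written in C#. The ColorBlend helper name is just for illustration:
using System;
using System.Drawing;

static class ColorBlend
{
    // Blends 'overlay' onto 'baseColor' at the given opacity (0.0 to 1.0),
    // channel by channel: result = base * (1 - opacity) + overlay * opacity.
    public static Color Blend(Color baseColor, Color overlay, double opacity)
    {
        byte Mix(byte b, byte o) => (byte)Math.Round(b * (1 - opacity) + o * opacity);
        return Color.FromArgb(
            Mix(baseColor.R, overlay.R),
            Mix(baseColor.G, overlay.G),
            Mix(baseColor.B, overlay.B));
    }
}
usage:
var mixed = ColorBlend.Blend(ColorTranslator.FromHtml("#359eff"), Color.Black, 0.5);
Because the opacity stays a floating-point value, 0.5 computes round((base + overlay) / 2) directly instead of going through an integer alpha of 127 or 128.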

Related

Is it possible to create isosurfaces using OxyPlot?

I'm using the OxyPlot HeatMapSeries for representing some graphical data.
For a new application I need to represent the data with isosurfaces, something looking like this:
Some ideas around this:
I know the ContourSeries can do the isolines, but I can't find any option that lets me fill the gaps between lines. Does this option exist?
I know a HeatMapSeries can be shown under the ContourSeries, so I can get a similar result, but it does not fit our needs.
Another option would be limiting the HeatMapSeries colours and eliminating the interpolation. Is this possible?
If anyone has another approach to the solution, I'd love to hear it!
Thanks in advance!
I'm evaluating whether OxyPlot will meet my needs, and this question interests me. From looking at the ContourSeries source code, it appears to only find and render the contour lines, not fill the areas between them. Looking at AreaSeries, I don't think you could just feed it contours, because it expects two sets of points which, when the ends are connected, create a simple closed polygon. The best idea I have is "rasterizing" your data so that you round each data point down to the nearest contour level, then plotting a heatmap of that rasterized data under the contour. The ContourSeries appears to calculate a level step that gives 20 levels across the data by default.
My shortcut for rasterizing based on a step value is to divide the data by the level step you want, then truncate the result with Math.Floor.
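A minimal sketch of that shortcut, assuming the data lives in a double[,] (the type HeatMapSeries.Data uses):
using System;

static double[,] Rasterize(double[,] data, double levelStep)
{
    int rows = data.GetLength(0), cols = data.GetLength(1);
    var result = new double[rows, cols];
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            // Snap each value down to the nearest multiple of the level step.
            result[i, j] = Math.Floor(data[i, j] / levelStep) * levelStep;
    return result;
}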
Looking at HeatMapSeries, it looks like you could try turning interpolation off, using the HeatMapRenderMethod.Rectangles render method, or supplying a LinearColorAxis with fewer steps and letting the rendering do the rasterization. The palettes available for a LinearColorAxis can be seen in the OxyPalettes source: BlueWhiteRed31, Hot64, Hue64, BlackWhiteRed, BlueWhiteRed, Cool, Gray, Hot, Hue, HueDistinct, Jet, and Rainbow.
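Untested, but a rough sketch of what that configuration might look like (rasterizedData standing in for the output of the Math.Floor trick above):
using OxyPlot;
using OxyPlot.Axes;
using OxyPlot.Series;

var model = new PlotModel();
model.Axes.Add(new LinearColorAxis
{
    Position = AxisPosition.Right,
    Palette = OxyPalettes.Jet(5) // few steps, so values snap to visible bands
});
model.Series.Add(new HeatMapSeries
{
    Data = rasterizedData,
    X0 = 0, X1 = 10, Y0 = 0, Y1 = 10, // plot bounds; adjust to your data
    Interpolate = false,
    RenderMethod = HeatMapRenderMethod.Rectangles
});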
I'm not currently in a position to run OxyPlot to test things, but I figured I would share what I could glean from the source code and limited documentation.

Detect and draw shape inside of available space between lines

My goal is to detect the different regions within a simple drawing constructed of various lines. Please click the following link to view a visual example of my goal, for clarification. I am of course able to get the positions of the drawn lines, but since one line can cross multiple "regions", I don't think this information alone will be sufficient.
Any ideas, suggestions or points to other websites are welcome. I am using C# in combination with WPF - I am not certain which search words might lead to an answer to this problem. I did come across this shape checker article from AForge, but it seems to focus on detecting shapes that are already there, not so much on regions that still have to be 'discovered'. As a side note, I hope to find a solution that works not only with rectangles but also with other types of shapes.
Thank you very much in advance.
Update:
foreach (Line canvasObject in DrawingCanvas.Children.OfType<Line>())
{
    LineGeometry lineGeometry1 = new LineGeometry();
    lineGeometry1.StartPoint = new Point(canvasObject.X1, canvasObject.Y1);
    lineGeometry1.EndPoint = new Point(canvasObject.X2, canvasObject.Y2);
    if (canvasObject.X1 != canvasObject.X2)
    {
        foreach (Line canvasObject2 in DrawingCanvas.Children.OfType<Line>())
        {
            // Intended to prevent the system from 'colliding' the same two
            // lines. Note: 'return' exits the whole method rather than
            // skipping this pair, and the first Y comparison checks
            // canvasObject2 against itself.
            if (canvasObject.X1 == canvasObject2.X1 && canvasObject.X2 == canvasObject2.X2 &&
                canvasObject2.Y1 == canvasObject2.Y2 && canvasObject.Y2 == canvasObject2.Y2)
            {
                return;
            }
            LineGeometry lineGeometry2 = new LineGeometry
            {
                StartPoint = new Point(canvasObject2.X1, canvasObject2.Y1),
                EndPoint = new Point(canvasObject2.X2, canvasObject2.Y2)
            };
            if (lineGeometry1.FillContainsWithDetail(lineGeometry2) != IntersectionDetail.Empty)
            {
                // collision detected
                Rectangle rectangle = new Rectangle
                {
                    Width = Math.Abs(canvasObject.X2 - canvasObject.X1),
                    Height = 20,
                    Fill = Brushes.Red
                };
                //rectangle.Height = Math.Abs(canvasObject.Y2 - canvasObject.Y1);
                DrawingCanvas2.Children.Add(rectangle);
                Canvas.SetTop(rectangle, canvasObject.Y1);
                Canvas.SetLeft(rectangle, canvasObject.X1);
            }
        }
    }
}
I have experimented with the code above to give you an impression of how I tried to tackle this problem. Initially I thought I had found a partial solution by checking for collisions between lines. Unfortunately, I had just created a second geometry for each line (which of course collided 'with itself'). After I added a simple if check (see above) this no longer occurs, but now I don't get any collisions at all, so I will probably need a new technique.
Update 2:
After some more digging and searching the internet for solutions, I have a new potential solution in mind. Hopefully this can also be of use to anyone looking for answers in the future. Using a flood-fill algorithm, I am able to 'fill' each region with a specific color, much like the paint bucket tool in an image editing application. Summarized, this is done by taking a 'screenshot' of the Canvas element, starting at a certain pixel and expanding over and over until a pixel with a different color is found (these would be the lines). It works pretty well and is able to return an image with the various regions. However, my current problem is accessing these regions as 'objects' in C#/WPF. I would like to draw the regions myself (using a polygon object or something similar?), making it possible to use them for further calculations or interactions.
I have tried saving the smallest and largest X and Y positions in the flood-fill algorithm after each pixel check, but this makes the algorithm very, very slow. If anyone has an idea, I would love to know. :)
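For illustration, a minimal sketch of such a flood fill over a raw pixel buffer; the FloodFill name and the 32bpp row-major byte[] layout are assumptions, not the actual code:
using System;
using System.Collections.Generic;

static List<(int X, int Y)> FloodFill(byte[] pixels, int width, int height, int startX, int startY)
{
    var region = new List<(int X, int Y)>();
    var visited = new bool[width * height];
    var stack = new Stack<(int X, int Y)>();
    int Index(int x, int y) => y * width + x;
    // The colour at the seed pixel; the fill expands while pixels match it.
    int seed = BitConverter.ToInt32(pixels, Index(startX, startY) * 4);

    stack.Push((startX, startY));
    while (stack.Count > 0)
    {
        var (x, y) = stack.Pop();
        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        int i = Index(x, y);
        if (visited[i] || BitConverter.ToInt32(pixels, i * 4) != seed) continue;
        visited[i] = true;
        region.Add((x, y));
        // Expand into the four neighbouring pixels.
        stack.Push((x + 1, y));
        stack.Push((x - 1, y));
        stack.Push((x, y + 1));
        stack.Push((x, y - 1));
    }
    return region;
}
Collecting the region's points first and deriving the bounding box in one pass at the end keeps the min/max bookkeeping out of the per-pixel loop.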

How do I properly achieve subtractive blending in C#, XNA?

I'm working on some kind of mod for Terraria (written in C# and using XNA), in which I need to use some blend modes. I didn't have any trouble getting additive blending to work, but the subtractive one is causing me problems.
I managed to display stuff with subtractive blending, but it doesn't really want to return to the standard mode. SpriteBatch.End and Begin don't help at all.
This is my custom BlendState:
public readonly static BlendState
    bsSubtract = new BlendState
    {
        ColorSourceBlend = Blend.SourceAlpha,
        ColorDestinationBlend = Blend.One,
        ColorBlendFunction = BlendFunction.ReverseSubtract,
        AlphaSourceBlend = Blend.SourceAlpha,
        AlphaDestinationBlend = Blend.One,
        AlphaBlendFunction = BlendFunction.ReverseSubtract
    },
Drawing code:
sb.End();
sb.Begin(SpriteSortMode.Immediate, bsSubtract);
// (...drawing drawing blah...)
sb.End();
sb.Begin(SpriteSortMode.Immediate, BlendState.Additive);
The problem is, everything that is drawn after this code seems to still use some old options (half-transparent, bland). What am I doing wrong?
I even tried calling just sb.End() and sb.Begin() before setting the blend state back, or using another custom blend state which was a standard additive one, just with BlendFunctions set to Add, to no avail.
EDIT: Seems like setting ANY custom BlendState makes it do that...
EDIT2: Seems like the problem was me splitting the drawing to 3 separate places: one for item slots, one for tiles and one for world in general. And in one of these (items) I forgot to set the SpriteBatch before using and reset it afterwards. I should have spent more time looking at my code. Still, thanks for trying to help!
(can't close the question just yet, gonna close it after StackOverflow lets me do it)
The default blending mode is BlendState.AlphaBlend.
Try replacing BlendState.Additive with BlendState.AlphaBlend in your code. Or possibly NonPremultiplied, depending on what Terraria is actually using.
Better yet, you could read out exactly the blend state that Terraria was using, as SpriteBatch sets it on the graphics card and simply leaves it there. Here is some untested code that should do exactly that:
sb.End(); // Sets blend state
BlendState previousState = GraphicsDevice.BlendState; // Retrieve it
sb.Begin(SpriteSortMode.Immediate, bsSubtract);
// (...drawing drawing blah...)
sb.End();
sb.Begin(SpriteSortMode.Immediate, previousState); // Re-use it

Do something similar to Photoshop's Auto Tone with AForge.NET or C#

I'm developing an image skin-detection app.
But there is a problem with my camera: it tries to compensate for the light, and the resulting image is bad; in most cases I get a cold or warm cast on the image.
In Photoshop there is the Auto Tone function, which normalizes an image and reduces this problem.
With AForge I wanted to use the HistogramEqualization() filter, but the result is very bad:
// create filter
HistogramEqualization filter = new HistogramEqualization();
// process image
filter.ApplyInPlace(sourceImage);
So my question is:
Is there a function in Accord or AForge that gives the same result as Photoshop's Auto Tone?
If not, is there some library or script that does this?
Thank you all.
I use the LevelsLinear filter and base it on image stats:
ImageStatistics stats = new ImageStatistics(sourceImage);
LevelsLinear levelsLinear = new LevelsLinear
{
    // Map the range holding 90% of each channel's values
    // onto the full output range.
    InRed = stats.Red.GetRange(0.90),
    InGreen = stats.Green.GetRange(0.90),
    InBlue = stats.Blue.GetRange(0.90)
};
levelsLinear.ApplyInPlace(sourceImage);
You can play with the range to tweak the result.
You probably don't want to equalize the histogram, because, as you saw, a photo that wouldn't normally have much red gets a lot of red created and ends up looking nasty. Instead, you probably want to look for a bias towards a hue that occurs almost everywhere. For example, your original photo probably had a bias towards blue in almost every pixel, and that bias probably shouldn't be there. Look for the minimum bias and remove that amount everywhere.
A more practical solution is to experiment with the white-balance setting on your camera to see what gives you the best result. Choosing the right preset will leverage an algorithm that's probably as good as what you would write by hand. But maybe you are doing this as a learning experience.

Screen Region Recognition to Find Field location on Screen

I am trying to figure out a way to use Sikuli's image recognition from within C#. I don't want to use Sikuli itself, because its scripting language is a little slow, and because I really don't want to introduce a Java bridge in the middle of my .NET C# app.
So, I have a bitmap which represents an area of my screen (I will call this region BUTTON1). The screen layout may have changed slightly, or the screen may have been moved on the desktop -- so I can't use a direct position. I have to first find where the current position of BUTTON1 is within the live screen. (I tried to post pictures of this, but I guess I can't because I am a new user... I hope the description makes it clear...)
I think that Sikuli is using OpenCV under the covers. Since it is open source, I guess I could reverse engineer it, and figure out how to do what they are doing in OpenCV, implementing it in Emgu.CV instead -- but my Java isn't very strong.
I looked for examples showing this, but all of the examples are either extremely simple (e.g., how to recognize a stop sign) or very complex (e.g., how to do facial recognition), and maybe I am just dense, but I can't seem to make the jump in logic of how to do this.
Also, I worry that all of the various image-manipulation routines are processor intensive, and I really want this to be as lightweight as possible (in reality I might have lots of buttons and fields I am trying to find on a screen...).
So, the way I am thinking about doing this instead is:
A) Convert the bitmaps to byte arrays and do a brute-force search. (I know how to do that part.) And then
B) Use the byte array position that I found to calculate its screen position (I'm really not completely sure how I do this) instead of using the image processing stuff.
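For step B, assuming the screen was captured row-major at 32 bits per pixel into the byte array (names here are hypothetical), the conversion is just integer division and modulus:
// byteOffset  = index of the match found by the brute-force search
// screenWidth = width in pixels of the captured screen
int bytesPerPixel = 4;                        // 32bpp capture
int pixelIndex = byteOffset / bytesPerPixel;  // pixel at which the match starts
int screenX = pixelIndex % screenWidth;
int screenY = pixelIndex / screenWidth;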
Is that completely crazy? Does anyone have a simple example of how one could use Aforge.Net or Emgu.CV to do this? (Or how to flesh out step B above...?)
Thanks!
Generally speaking, it sounds like you want basic object recognition. I don't have any experience with Sikuli, but there are a number of ways to do object recognition (edge-based template matching, etc.). That being said, you might be able to get by with straight histogram matching.
http://www.codeproject.com/KB/GDI-plus/Image_Processing_Lab.aspx
That page should show you how to use AForge.net to get the histogram of an image. You would just do a brute force search using something like this:
Bitmap ImageSearchingWithin = new Bitmap("Location of image"); // or just load from a screenshot or whatever
for (int x = 0; x < ImageSearchingWithin.Width - WidthOfImageSearchingFor; ++x)
{
    for (int y = 0; y < ImageSearchingWithin.Height - HeightOfImageSearchingFor; ++y)
    {
        // Crop the current search window out of the larger image.
        using (Bitmap MySmallViewOfImage = ImageSearchingWithin.Clone(
            new Rectangle(x, y, WidthOfImageSearchingFor, HeightOfImageSearchingFor),
            System.Drawing.Imaging.PixelFormat.Format24bppRgb))
        {
            // Compute this window's histogram here and compare it against
            // the histogram of the image you are searching for.
        }
    }
}
Then compare the newly created bitmap's histogram to the one you calculated for the original image; whichever area matches most closely is the one you would select as the region of BUTTON1. It's not the most elegant solution, but it might work for your needs. Otherwise you get into more difficult techniques (though of course I could be forgetting something simpler at the moment).
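For illustration, a hypothetical sketch of that comparison step using AForge's ImageStatistics (the HistogramDistance name and the sum-of-squared-differences metric are my own choices):
using System;
using System.Drawing;
using AForge.Imaging;

static double HistogramDistance(Bitmap window, ImageStatistics target)
{
    // Lower score = this window's color distribution is closer to the target's.
    var stats = new ImageStatistics(window);
    double score = 0;
    for (int i = 0; i < 256; i++)
    {
        score += Math.Pow(stats.Red.Values[i] - target.Red.Values[i], 2);
        score += Math.Pow(stats.Green.Values[i] - target.Green.Values[i], 2);
        score += Math.Pow(stats.Blue.Values[i] - target.Blue.Values[i], 2);
    }
    return score;
}
Track the (x, y) that produces the smallest score across the brute-force loop above; that position is your best guess for where BUTTON1 sits on the screen.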
