I've been working on implementing a 2D lighting system in XNA, and I've gotten the system to work--as long as my window's dimensions are powers of two. Otherwise, the program will fail at this line:
GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, Vertices, 0, 2);
The exception states that "XNA Framework Reach profile requires TextureAddressMode to be Clamp when using texture sizes that are not powers of two," and every attempt I've made to solve this problem has failed. The most common solution I've found online is to put the line GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp; directly above the line above, but that hasn't solved my problem.
I apologize if I've left out any information that could be necessary to solve this; I'll be more than happy to provide more as needed.
In your HLSL, look for the line that declares the sampler the pixel shader uses; you can set the address mode to Clamp there:
SamplerState somethingLikeThis {
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};
I have just recently started to work with OpenCV and image processing in general, so please bear with me.
I have the following image to work with:
The gray outline is the result of the tracking algorithm, which I drew in for debugging, so you can ignore that.
I am tracking glowing spheres, so it is easy to turn down the exposure of my camera and then filter out the surrounding noise that remains. So what I have to work with is always a black image with a white circle. Sometimes a little bit of noise makes it through, but generally that's not a problem.
Note that the spheres are mounted on a flat surface, so when held at a specific angle the bottom of the circle might be "cut off", but the Hough transform seems to handle that well enough.
Currently, I use the Hough Transform for getting position and size. However, it jitters a lot around the actual circle, even with very little motion. When in motion, it sometimes loses track entirely and does not detect any circles.
Also, this is in a real-time (30 fps) environment, and I have to run two Hough circle transforms, which take up 30% CPU load on a Ryzen 7 CPU.
I have tried using binary images (removing the "smooth" outline of the circle) and changing the settings of the Hough transform. With a lower dp value it seems to be less jittery, but then it is no longer real-time due to the processing needed.
This is basically my code:
ImageProcessing.ColorFilter(hsvFrame, Frame, tempFrame, ColorProfile, brightness);
ImageProcessing.Erode(Frame, 1);
ImageProcessing.SmoothGaussian(Frame, 7);
/* Args: cannyThreshold, accumulatorThreshold, dp, minDist, minRadius, maxRadius */
var circles = ImageProcessing.HoughCircles(Frame, 80, 1, 3, Frame.Width / 2, 3, 80);
if (circles.Length > 0)
...
The ImageProcessing calls are just wrappers around the OpenCV framework (EmguCV).
Is there a better, less jittery, and less performance-hungry way or algorithm to detect these kinds of (as I see it) very simple circles? I did not find an answer on the internet that matches these kinds of circles. Thank you for any help!
Edit:
This is what the image looks like straight from the camera, no processing:
It dismays me to see how often people spoil good information by jumping straight to edge detection and/or Hough transforms.
In this particular case, you have a lovely blob, which can be detected in a fraction of a millisecond and for which the centroid will yield good accuracy. The radius can be obtained just from the area.
You report that the Hough becomes jittery in case of motion; this can be because of motion blur or frame interlacing (depending on the camera). The centroid should be more robust to these effects.
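To make this concrete, here is a minimal sketch of the centroid-and-area measurement, in plain Java for illustration (the BlobCentroid name and the boolean-mask input are my own; the same loop ports directly to C# over the thresholded frame, or you can get the equivalent numbers from OpenCV's image moments):

```java
// Sketch: centroid and radius-from-area of a single bright blob in a
// thresholded frame. "mask" is a hypothetical binary mask, true = foreground.
class BlobCentroid {
    // Returns {centerX, centerY, radius}, or null if the mask is empty.
    static double[] measure(boolean[][] mask) {
        long area = 0, sumX = 0, sumY = 0;
        for (int y = 0; y < mask.length; y++) {
            for (int x = 0; x < mask[y].length; x++) {
                if (mask[y][x]) {
                    area++;
                    sumX += x;
                    sumY += y;
                }
            }
        }
        if (area == 0) return null;
        // Centroid = first moments divided by area.
        double cx = (double) sumX / area;
        double cy = (double) sumY / area;
        // For a filled disc, area = pi * r^2, hence r = sqrt(area / pi).
        double r = Math.sqrt(area / Math.PI);
        return new double[] { cx, cy, r };
    }
}
```

Because every foreground pixel contributes, the centroid averages out per-pixel noise, and a partly cut-off bottom shifts it only slightly; it is a single pass over the image instead of a Hough accumulator.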
Below is my Android code:
path.moveTo(xx, yy);
for (...) {
    path.lineTo(xx, yy);
}
canvas.drawPath(this.path, paint);
In order to remove the sharp corners, I am using
final CornerPathEffect cornerPathEffect = new CornerPathEffect(50);
paint.setPathEffect(cornerPathEffect);
When it comes to WPF, I am using the following code:
PathFigure pathFigure = new PathFigure();
pathFigure.StartPoint = new Point(xx, yy);
for (...) {
    LineSegment lineSegment = new LineSegment(new Point(xx, yy), true);
    lineSegment.IsSmoothJoin = true;
    pathFigure.Segments.Add(lineSegment);
}
PathGeometry pathGeometry = new PathGeometry(new PathFigure[] { pathFigure });
drawingContext.DrawGeometry(null, new Pen(Brushes.White, 3), pathGeometry);
I am getting the following effect.
Note that I avoid using PolyQuadraticBezierSegment or PolyBezierSegment; they tend to become unstable. Whenever I add a new incoming point to the line graph, the newly added point tends to change the old path already drawn on the screen, and as an end effect you may observe the whole line graph shaking.
May I know how I can smooth out the line segments in WPF? Although I have used lineSegment.IsSmoothJoin = true;, I can still see the sharp corners. Is there something equivalent to Android's CornerPathEffect?
I know, zombie thread post. You probably already solved this problem, but here are my thoughts years after the fact...
Since smoothing is dependent on multiple points in the line I think it will be quite difficult to find a smoothing algorithm that looks reasonable without producing some instability in the leading edge. You can probably reduce the scope of the instability, but only at risk of producing a very odd looking trace.
First option would be to use a spline algorithm that limits the projection artifacts. For instance the Catmull-Rom algorithm uses two known points on either side of the curve segment being interpolated. You can synthesize two additional points at each end of the curve or simply draw the first curve segment as a straight line. This will give a straight line as the last segment, plus a curve as the second to last segment which should change very little if at all when another point is added.
Alternatively you can run the actual data points through an initial spline calculation to multiply the points, then run those points through the spline algorithm a second time. You'll still have to update the most recent 2m points (where m is the multiplier of the first pass), or the output will look distorted.
About the only other option I can think of is to try to predict a couple of points ahead based on your prior data, which can be difficult even with a fairly regular input. I used this to synthesize Bezier control points for the ends of my curves - I was calculating CPs for all of the points and needed something to use for the end points - and had some interesting times trying to stop the final curve segments from looking horribly deformed.
Oh, one more... don't graph the final curve segment. If you terminate your curve at P(n-1), the curve should stay stable. Draw the final segment in a different style if you must show it at all. Since C-R splines only need +/- 2 known points around the interpolation, the segment from P(n-2) to P(n-1) should be stable enough.
If you don't have code for the C-R algorithm you can do basically the same thing with synthetic Bezier control points. Rather than attempt to describe the process, check out this blog post which gives a fairly good breakdown of the process. This article has code attached which may be useful.
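For reference, a uniform Catmull-Rom segment is easy to evaluate; here is a generic sketch (plain Java, applied to one coordinate at a time, so run it once for x and once for y; nothing here is WPF-specific):

```java
// Sketch of uniform Catmull-Rom interpolation. Interpolates between p1 and
// p2 using the neighboring known points p0 and p3; t runs from 0 to 1.
class CatmullRom {
    static double point(double p0, double p1, double p2, double p3, double t) {
        double t2 = t * t, t3 = t2 * t;
        // Standard uniform Catmull-Rom basis, expanded in powers of t.
        return 0.5 * ((2 * p1)
                + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }
}
```

Note that the curve passes through p1 at t = 0 and p2 at t = 1, which is why holding back the last segment (where p3 does not exist yet) keeps the already-drawn portion stable.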
I'm looking over this tutorial to mix different textures based on the types of pixels I want to pass:
http://www.crappycoding.com/tag/xna/page/2/
and so far I think I understand the whole concept, except for a couple of lines in creating the AlphaTestEffect object, as very little explanation is given and I have no clue what they are there for and why they're set up like that.
Matrix projection = Matrix.CreateOrthographicOffCenter(0, PlanetDataSize, PlanetDataSize, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
alphaTestEffect.Projection = halfPixelOffset * projection;
Could somebody please explain these lines: what they do and what they are for? I hope it won't take too much time, and that my question is not a silly one.
cheers
Lucas
Because he is using a custom effect instead of the default SpriteBatch one, he has to make sure the projection works the same way as the default (or rather, he's making it the same so that everything plays nicely together).
http://blogs.msdn.com/b/shawnhar/archive/2010/04/05/spritebatch-and-custom-shaders-in-xna-game-studio-4-0.aspx
It's explained there if you scroll down a bit:
" This code configures BasicEffect to replicate the default SpriteBatch coordinate system:"
The default SpriteBatch camera is a simple orthographic projection with a half-pixel offset to display 2D textures better. That's explained here:
http://drilian.com/2008/11/25/understanding-half-pixel-and-half-texel-offsets/
I am trying to set a constant hue value for an entire image using ColorMatrix. My goal is to make the entire image the same color without losing the brightness of any area. I found a way to shift the hue values of an image using ColorMatrix, but I couldn't find any way to set the same hue value for all pixels. I can do it by iterating over every pixel of the image, but that approach is too slow. I am not sure if it is possible with ColorMatrix, and I am open to solutions other than the ColorMatrix approach.
Input Image
Hue-shifted output image*
Desired output image**
*This can be done with color matrix
** I can do this with iterating pixels but not with ColorMatrix
PS: I am trying to do this on Android, but I believe the question is not directly related to Android, since the ColorMatrix approach is common on other platforms like Flash, C#, etc.
I'm not really familiar with this area, but I believe this link can help:
http://www.graficaobscura.com/matrix/index.html
It's C code, so you'll have to translate C -> ColorMatrix, but the last paragraph describes the operation
"Hue Rotation While Preserving Luminance"
which seems to be what you are looking for.
I don't think there is a way to do exactly what you're asking for with a ColorMatrix. It sounds like what you want to do is transform from RGB to HSL color space, set H constant across all pixels, and transform back from HSL to RGB. These color space transformations aren't linear, and can't be represented with matrices. Because of the different way these spaces parameterize color, I also suspect some degradation could occur doing RGB->HSL->RGB.
I think the closest you would be able to achieve with the ColorMatrix is to use one to convert to greyscale, and another to weight the RGB values (tint them). This kind of thing is often used to fake sepia-tone photos, but it is not what you are asking for.
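To illustrate the greyscale-then-tint idea (a hypothetical sketch in Java; the 0.299/0.587/0.114 weights are the usual Rec. 601 luminance coefficients, and the resulting 3x3 folds directly into the top-left of a 4x5 ColorMatrix):

```java
// Sketch of the greyscale-then-tint approach. The matrix first collapses
// RGB to luminance (Rec. 601 weights), then the tint scales each channel;
// both steps are linear, so they fold into one matrix.
class TintMatrix {
    // Returns a 3x3 matrix M such that M * [r g b]^T = tint * luminance.
    static double[][] build(double tintR, double tintG, double tintB) {
        double[] lum = { 0.299, 0.587, 0.114 };
        double[] tint = { tintR, tintG, tintB };
        double[][] m = new double[3][3];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                m[row][col] = tint[row] * lum[col];
        return m;
    }

    // Applies the matrix to one RGB pixel (channels in 0..1).
    static double[] apply(double[][] m, double r, double g, double b) {
        double[] in = { r, g, b };
        double[] out = new double[3];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                out[row] += m[row][col] * in[col];
        return out;
    }
}
```

Because both steps are linear, they compose into a single matrix, which is exactly what makes this expressible as one ColorMatrix pass, unlike the non-linear RGB->HSL->RGB round trip.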
I have made a quick sample in Flash (this question is tagged ActionScript); I'm not sure if this is what you are looking for:
http://megaswf.com/serve/1047061
The code:
import com.greensock.*;
import com.greensock.easing.*;
import flash.events.MouseEvent;
colorButton.addEventListener(MouseEvent.CLICK, onColor);
resetButton.addEventListener(MouseEvent.CLICK, onReset);
function onColor(e:MouseEvent):void {
TweenMax.to(mc, 1, {colorMatrixFilter:{colorize:0x0099ff, amount:1}});
}
function onReset(e:MouseEvent):void {
TweenMax.to(mc, 1, {colorMatrixFilter:{colorize:0x0099ff, amount:0}});
}
Here is a quick way to do it if you want to set the hue to, let's say, h(0.5, 0.2, 0.3):
var matrix:Array = new Array();
matrix = matrix.concat([0.5, 0.5, 0.5, 0, 0]); // red
matrix = matrix.concat([0.2, 0.2, 0.2, 0, 0]); // green
matrix = matrix.concat([0.3, 0.3, 0.3, 0, 0]); // blue
matrix = matrix.concat([0, 0, 0, 1, 0]);       // alpha
var filter:ColorMatrixFilter = new ColorMatrixFilter(matrix);
image.filters = [filter];
I'm not sure it will perfectly preserve the luminance, but it may satisfy your need!
Check out QColorMatrix, which has a ColorMatrix routine called RotateHue. Source is in C++, but is portable to other languages (I've ported part of it to .NET in the past and it worked great).
I am simulating a thermal camera effect. I have a webcam at a party pointed at people in front of a wall. I went with a background subtraction technique, and using the AForge blob counter I get blobs that I want to fill with gradient coloring. My problem: GetBlobsEdgePoints doesn't return a sorted point cloud, so I can't use it with, for example, PathGradientBrush from GDI+ to simply draw gradients.
I'm looking for:
1) A simple, fast algorithm to trace blobs into a path (it can make mistakes).
2) A way to track blobs received by the blob counter.
3) A suggestion for some other way to simulate the effect.
I took a quick look at Emgu.CV.VideoSurveillance but didn't get it to work (the examples are for v1.5 and I went with v2+), and I gave up because people on the forums say it's slow.
Thanks for reading.
Sample code of the AForge background removal:
Bitmap bmp = (Bitmap)e.VideoFrame.Clone();
if (backGroundFrame == null)
{
    backGroundFrame = (Bitmap)e.VideoFrame.Clone();
    difference.OverlayImage = backGroundFrame;
}
difference.ApplyInPlace(bmp);
bmp = grayscale.Apply(bmp);
threshold.ApplyInPlace(bmp);
Well, could you post a sample image of the result of GetBlobsEdgePoints? Then it might be easier to understand what types of image processing algorithms are needed.
1) You may try a greedy algorithm: first pick a point at random, mark that point as "taken", pick the closest point not marked as "taken", and so on.
You need to find suitable termination conditions. If there can be several disjoint paths, you need a definition of how far apart points must be to belong to disjoint paths.
3) If you have a static background, you can try creating a difference between two time-shifted images, like 200 ms apart. Just do a pixel-by-pixel difference and use abs(diff) as an index into your heat color map. That will give more of an edge-glow effect on moving objects.
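Point 1) can be sketched as follows (plain Java with hypothetical names; this is the naive O(n^2) version, and the termination/disjoint-path threshold mentioned above is left out):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the greedy ordering: start at the first point, then repeatedly
// jump to the nearest unvisited point. Points are hypothetical {x, y} pairs.
class GreedyPath {
    static List<int[]> order(List<int[]> points) {
        List<int[]> remaining = new ArrayList<>(points);
        List<int[]> path = new ArrayList<>();
        if (remaining.isEmpty()) return path;
        int[] current = remaining.remove(0); // arbitrary starting point
        path.add(current);
        while (!remaining.isEmpty()) {
            int best = 0;
            long bestDist = Long.MAX_VALUE;
            // Find the nearest point not yet "taken" (squared distance).
            for (int i = 0; i < remaining.size(); i++) {
                long dx = remaining.get(i)[0] - current[0];
                long dy = remaining.get(i)[1] - current[1];
                long d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = i; }
            }
            current = remaining.remove(best);
            path.add(current);
        }
        return path;
    }
}
```

For the few hundred edge points a blob produces, the quadratic cost is negligible; a real version would stop (or start a new path) when bestDist exceeds some threshold.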
This is the direction I'm going to take (it looks best for now):
Define a set of points on the blob using my own logic (skin-colored blobs should be warmer, etc.).
Draw gradients around those points:
GraphicsPath gp = new GraphicsPath();
var rect = new Rectangle(CircumferencePoint.X - radius, CircumferencePoint.Y - radius, radius * 2, radius * 2);
gp.AddEllipse(rect);
GradientShaper = new PathGradientBrush(gp);
GradientShaper.CenterColor = Color.White;
GradientShaper.SurroundColors = surroundingColors;
drawBmp.FillPath(GradientShaper, gp);
Mask those gradients with the blob shape:
blobCounter.ExtractBlobsImage(bmp, blob, true);
mask.OverlayImage = blob.Image;
mask.ApplyInPlace(rslt);
Colorize with color remapping.
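The color-remapping step can be done with a simple palette lookup; here is a sketch (the black -> blue -> red -> yellow -> white anchors are my own choice for a thermal look, not from the original post):

```java
// Sketch: map an 8-bit intensity to a thermal-style color by linearly
// interpolating between a few hand-picked anchor colors.
class HeatPalette {
    private static final int[][] ANCHORS = {
        {0, 0, 0}, {0, 0, 255}, {255, 0, 0}, {255, 255, 0}, {255, 255, 255}
    };

    // Returns {r, g, b} for an intensity in 0..255.
    static int[] map(int intensity) {
        double pos = (intensity / 255.0) * (ANCHORS.length - 1);
        int i = Math.min((int) pos, ANCHORS.length - 2);
        double f = pos - i; // fraction between anchor i and anchor i + 1
        int[] rgb = new int[3];
        for (int c = 0; c < 3; c++)
            rgb[c] = (int) Math.round(ANCHORS[i][c] * (1 - f) + ANCHORS[i + 1][c] * f);
        return rgb;
    }
}
```

In practice you would precompute all 256 entries into a lookup table once and index it per pixel, which keeps the remap essentially free per frame.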
Thanks for the help, @Albin.