So I'm coding a synthesizer from scratch in C# using NAudio. I've gotten it to play different frequencies, which is cool, but I notice that the higher pitches sound significantly louder than the lower ones. Is that due to this effect:
http://en.wikipedia.org/wiki/Equal-loudness_contour
Or am I doing something wrong when generating the sine wave? How would I implement an equal-loudness contour if it is indeed necessary?
Thanks
My Code:
NAudio expects a buffer filled with floating point values in the range of -1 to +1 to represent the waveform.
Generating the sine wave:
buffer[n + offset] = (float)(Amplitude * Math.Sin(angle)); // next sample, scaled to -1..+1
angle = (angle + angleIncrement) % (2 * Math.PI);          // advance and wrap the phase
Setting a frequency:
public double Frequency
{
    set
    {
        angleIncrement = 2 * Math.PI * value / sampleRate;
    }
    get
    {
        return angleIncrement * sampleRate / 2 / Math.PI;
    }
}
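For context, these fragments live inside the provider's Read callback. A minimal sketch, assuming an NAudio ISampleProvider with the Amplitude, angle, and angleIncrement members shown above:
public int Read(float[] buffer, int offset, int count)
{
    // Fill the requested region of the buffer with sine samples in -1..+1.
    for (int n = 0; n < count; n++)
    {
        buffer[n + offset] = (float)(Amplitude * Math.Sin(angle));
        angle = (angle + angleIncrement) % (2 * Math.PI);
    }
    return count;
}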
Controlling the amplitude of the audio from your synthesizer based on equal-loudness contours is probably not what you want.
In theory, you would need to know the absolute level (SPL) produced by the speakers in order to choose the appropriate contour. In practice, a bigger issue would be when you extend your synthesizer to use complex waveforms instead of merely pure tones, possibly processed by filters etc. The equal-loudness contours are based on pure tones, and when you generate complex signals (i.e. containing many frequencies) you would instead need a loudness model to estimate the loudness of your synthesized sound.
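That said, if you only want to roughly flatten the perceived level of pure tones, one crude option is the standard A-weighting curve, which approximates a single equal-loudness contour (around 40 phon). A minimal sketch, not an NAudio facility, just the published formula:
static double AWeightingDb(double f)
{
    // IEC 61672 A-weighting in dB: roughly 0 dB at 1 kHz, strongly negative at low frequencies.
    double f2 = f * f;
    double num = 12194.0 * 12194.0 * f2 * f2;
    double den = (f2 + 20.6 * 20.6)
               * Math.Sqrt((f2 + 107.7 * 107.7) * (f2 + 737.9 * 737.9))
               * (f2 + 12194.0 * 12194.0);
    return 20.0 * Math.Log10(num / den) + 2.0;
}

// To even out perceived loudness, boost the tones the ear hears poorly
// (watch for clipping when the factor exceeds 1):
// amplitude *= Math.Pow(10.0, -AWeightingDb(Frequency) / 20.0);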
Related
I need to normalize a playing audio stream using BASS. For this, I'm following these steps:
Play the stream
Create another stream from the file, and determine the peak value in a background worker
Apply DSP_Gain with the appropriate gain value to the stream that is playing.
I realize the normalization will only occur after the worker is done with the task, which can seem ugly, but that isn't the point.
The trouble is, when determining the peak value of the stream, the resulting value is an integer between 0 and 32768 (the bigger the value, the louder the sound). However, DSP_Gain has two properties for setting the amplification, neither of which is an integer. The first is Gain, a double between 0 and 1024, and the second is Gain_dBV, a double between -infinity and 60. Passing the peak value directly as the factor resulted in enormous clipping in the playing stream. My question is: how do I translate this peak value into the correct parameter for DSP_Gain? Below is the code for getting the peak value:
int strm = Bass.BASS_StreamCreateFile(filename, 0, 0, BASSFlag.BASS_STREAM_DECODE);
//initialized stream for getting peak value
int peak=0; //This value will be between 0 and 32768
while (System.Convert.ToBoolean(Bass.BASS_ChannelIsActive(strm)))
{
    // calculates peak from a 20ms frame and advances; loops till stream is over
    int level = Bass.BASS_ChannelGetLevel(strm);
    int left = Utils.LowWord32(level);   // the left level
    int right = Utils.HighWord32(level); // the right level
    if (peak < left) peak = left;
    if (peak < right) peak = right;
}
Applying the DSP_Gain:
DSPGain = new DSP_Gain();
DSPGain.ChannelHandle = stream; //this stream is the already playing one
DSPGain.Gain = *SOME VALUE*
DSPGain.Start();
Just reading the links you posted, it seems Gain is a multiplying factor applied to the signal: values below 1.0 reduce the level, values above 1.0 increase it. So you need to calculate how much you want to reduce the level by. Say you want a max peak value of 30000 and your calculated peak value is 32000; then your gain is (30000 / 32000) = 0.9375.
Gain_dBV is the gain ratio expressed in decibels. This is typically calculated as either 10 * log(power out / power in) or 20 * log(p-p volts out / p-p volts in). The dB value is converted back to the actual gain before being applied to the signal as above; in the example, the gain in dB would be 20 * log(0.9375) = -0.56.
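In code, that might look like this (a sketch; targetPeak is a ceiling you choose, and peak is the measured 0..32768 value from the loop above):
double targetPeak = 30000;
double gain = targetPeak / (double)peak;       // e.g. 30000 / 32000 = 0.9375

DSPGain.Gain = gain;                           // linear factor
// or, equivalently, in decibels:
// DSPGain.Gain_dBV = 20 * Math.Log10(gain);   // e.g. 20 * log10(0.9375) ≈ -0.56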
I'm using GDI+ to implement some simple graphics. I've taken the code from this example http://www.vcskicks.com/3d_gdiplus_drawing.php and can get it to do what I want, but I don't understand how it converts a 3D data point to a 2D data point:
//Convert 3D Points to 2D
Math3D.Point3D vec;
for (int i = 0; i < point3D.Length; i++)
{
    vec = cubePoints[i];

    if (vec.Z - camera1.Position.Z >= 0)
    {
        point3D[i].X = (int)((double)-(vec.X - camera1.Position.X) / (-0.1f) * zoom) + drawOrigin.X;
        point3D[i].Y = (int)((double)(vec.Y - camera1.Position.Y) / (-0.1f) * zoom) + drawOrigin.Y;
    }
    else
    {
        tmpOrigin.X = (int)((double)(cubeOrigin.X - camera1.Position.X) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.X;
        tmpOrigin.Y = (int)((double)-(cubeOrigin.Y - camera1.Position.Y) / (double)(cubeOrigin.Z - camera1.Position.Z) * zoom) + drawOrigin.Y;

        point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
        point3D[i].Y = (float)(-(vec.Y - camera1.Position.Y) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.Y);

        point3D[i].X = (int)point3D[i].X;
        point3D[i].Y = (int)point3D[i].Y;
    }
}
I've found a couple of resources which discuss conversion from a 3d data point to a 2d one:
https://amycoders.org/tutorials/3dbasics.html
https://en.wikipedia.org/wiki/Isometric_projection
https://en.wikipedia.org/wiki/3D_projection
However none of these resources seem to detail the maths used in the above example.
I'd be really grateful if someone could point me at the derivation for the maths and/or explain how the above code works.
The article and code are indeed a bit confusing. Before we start, let's make a couple of modifications to the rest of the code; through these, you will probably see more easily what's going on. First, let's specify a static camera position. Instead of this weird formula:
double cameraZ = -(((anchorPoint.X - cubeOrigin.X) * zoom) / cubeOrigin.X) + anchorPoint.Z;
Let's just do this:
cameraZ = 200;
zoom = 100;
And after that, we keep
camera1.Position = new Math3D.Point3D(cubeOrigin.X, cubeOrigin.Y, cameraZ);
This will position the camera at a depth of 200 such that its x/y coordinates coincide with the cube center. I'll come back to the meaning of zoom.
The camera model uses a perspective projection and a right-handed coordinate system. That means the camera looks in the negative z-direction, and things that are farther away appear smaller.
Let's take a closer look at the 3D->2D conversion code step by step:
if (vec.Z - camera1.Position.Z >= 0)
vec is the point that we want to project. A more intuitive way to write that would be:
if (vec.Z >= camera1.Position.Z)
So, this branch applies to all points that are behind the camera (remember that the camera looks in the negative z-direction). What happens in this branch is a bit hacky. It has nothing to do with real projections. What you actually want to do is cut off those points (as they are not visible). Luckily, in the example, none of the points lie behind the camera, so we don't need to care about this. I'll come back to that later.
Let's continue to the else branch.
tmpOrigin = ...
This variable is not used anywhere, so we can ignore it.
point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * zoom + drawOrigin.X);
This is the actual projection (I will only consider the X part. The same goes for the Y part). Let's take a look at the individual parts:
vec.X - camera1.Position.X
This is the vector from the camera position to the point drawn. Everything left of the camera has a negative coordinate, everything right of the camera has a positive coordinate.
vec.Z - camera1.Position.Z
This is the negative depth of the point relative to the camera. I'm not sure why the negative depth is used here; it will give you a mirrored image. What you actually want to do (since the camera looks in the negative z-direction) is
camera1.Position.Z - vec.Z
Then,
(vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z)
is the perspective divide. The difference vector is scaled by its inverse depth (i.e. far objects become smaller).
* zoom
This scales the image from world space (which is very small) to image space (convert world units to pixels). The factor is kind of arbitrary (that's why we just specified 100). More involved camera models use a field of view.
drawOrigin.X
And finally, we align the camera center to the drawOrigin. Remember that points left of the camera had a negative coordinate. With this, these will get a positive coordinate (but still be left of drawOrigin).
point3D[i].X = (int)point3D[i].X;
This is just a cast to int.
For the y-coordinate, there is an additional minus sign. This flips the y-axis (in the pixel coordinate system of the image, the y-axis points downwards).
Let's go back to the hacky if branch. You see that the formula is exactly the same, except that the part that held the negative depth of the point now has the constant (-0.1f). So these points are treated as having a constant depth of 0.1. Pretty dubious and far from an actual projection.
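Putting it all together, a cleaned-up sketch of the projection might look like this (Project is a hypothetical helper; points behind the camera are culled instead of getting the fake constant depth):
PointF Project(Math3D.Point3D vec, Math3D.Point3D cam, double zoom, Point drawOrigin)
{
    double depth = cam.Z - vec.Z;    // positive for points in front of the camera
    if (depth <= 0)
        return PointF.Empty;         // behind the camera: cull instead of faking a depth

    float x = (float)((vec.X - cam.X) / depth * zoom + drawOrigin.X);
    float y = (float)(-(vec.Y - cam.Y) / depth * zoom + drawOrigin.Y); // flip y for pixel coordinates
    return new PointF(x, y);
}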
And that's basically it. One more note: the article has a section about gimbal lock. The thing is, the properties of matrix multiplication that are described there have nothing to do with gimbal lock. So don't rely on this article too much. It's a nice practical application, but it has quite a few flaws.
I am using this equation to work out the angles of my x, y and z axes relative to gravity:
directionalVector = Math.Sqrt(Math.Pow(accelForceX, 2) + Math.Pow(accelForceY, 2) + Math.Pow(accelForceZ, 2));
accelAngleX = Math.Acos(accelForceX / directionalVector) * (180f / Math.PI);
accelAngleY = Math.Acos(accelForceY / directionalVector) * (180f / Math.PI);
accelAngleZ = Math.Acos(accelForceZ / directionalVector) * (180f / Math.PI);
accelForceN is a reading from one of the accelerometer's axes, measured in g's.
This produces results in the range 0 to 180 degrees, with no negative numbers.
How can I find the sign of the angles?
I think you are confused about what you are actually calculating here. Be aware that you are simply computing the angle from the definition of the cosine:
cos(accelAngleN*2*Math.Pi/360) = accelForceN/directionalVector
(the multiplication by 2*Pi/360 merely converts the angle to radians). Now consider that the angle sum in a triangle is 180°, so in this case a negative angle, or an angle larger than 180°, would not make any sense as a result. This is just another way of looking at the fact that cos is not injective over the reals, and thus arccos is defined as a function [-1,1] -> [0,Pi] (or [0°,180°] for that matter).
So what you are currently calculating is the angle of your x/y/z vectors to your direction vector (rather than to gravity, which would be the z-axis direction anyway, wouldn't it?), and from that standpoint your output is perfectly valid.
If you need help transforming your result any further, please provide an example (two-dimensional should be good enough for the sake of simplicity) of what you are expecting.
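In the meantime, if what you're after is a signed tilt relative to gravity (an assumption on my part), Math.Atan2 is the usual tool, because it takes both components and therefore preserves the sign. A sketch, with the caveat that axis conventions vary between devices:
double pitch = Math.Atan2(accelForceX,
                          Math.Sqrt(accelForceY * accelForceY + accelForceZ * accelForceZ))
               * (180.0 / Math.PI);   // -90 to +90 degrees
double roll = Math.Atan2(accelForceY, accelForceZ)
              * (180.0 / Math.PI);    // -180 to +180 degrees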
I need to convert meters to decimal degrees in C#. I read on Wikipedia that 1 decimal degree equals 111.32 km, but that is at the equator, so if I'm located north or south of it, won't the conversion be wrong?
I assume this is wrong:
long sRad = (long.Parse(sRadTBx.Text)) / (111.32*1000);
EDIT: I need this search radius to find nearby users
long myLatitude = 100;
long myLongitude = 100;
long searchRad = /* right formula to convert meters to decimal degrees */

long begLat = myLatitude - searchRad;
long endLat = myLatitude + searchRad;
long begLong = myLongitude - searchRad;
long endLong = myLongitude + searchRad;

List<User> FoundUsers = new List<User>();

foreach (User user in db.Users)
{
    // Check if the user in the database is within range
    if (user.usrLat >= begLat && user.usrLat <= endLat &&
        user.usrLong >= begLong && user.usrLong <= endLong)
    {
        // Add the user to the FoundUsers list
        FoundUsers.Add(user);
    }
}
Also from that very same Wikipedia article: "As one moves away from the equator towards a pole, however, one degree of longitude is multiplied by the cosine of the latitude, decreasing the distance, approaching zero at the pole."
So this would be a function of latitude:
double GetSRad(double latitude)
{
    // kilometers spanned by one degree of longitude at this latitude
    return 111.32 * Math.Cos(latitude * (Math.PI / 180));
}
or similar.
edit: So for going the other way around, converting meters to decimal degrees, you need to do this:
double MetersToDecimalDegrees(double meters, double latitude)
{
return meters / (111.32 * 1000 * Math.Cos(latitude * (Math.PI / 180)));
}
Christopher Olsson already has a good answer, but I thought I'd fill in some of the theory too.
I've always found this webpage useful for these formulas.
A quick note on the concept
Think about the actual geometry going on.
As it stands, you are currently doing nothing more than scaling the input. Imagine the classic example of a balloon. Draw two lines on the balloon that meet at the bottom and the top. These represent lines of longitude, since they go "up and down." Quotes, of course, since there aren't really such concepts, but we can imagine. Now, if you look at each line, you'll see that they vary in distance as you go up and down their lengths. Per the original specification, they meet at the top of the balloon and the bottom, but they don't meet anywhere else. The same is true of lines of longitude. Non-Euclidean geometry tells us that lines intersect exactly twice if they intersect at all, which can be hard to conceptualize. But because of that, the distance between our lines is effectively reflected across the equator.
As you can see, latitude greatly affects the distance between your longitudinal lines. They are closest together at the north and south poles and farthest apart at the equator.
Latitudinal lines are a bit easier. They do not converge. If you're holding our theoretical balloon straight up and down (with the poles pointed straight up and straight down, that is), lines of latitude will be parallel to the floor. In a more generalized sense, they will be perpendicular to the axis (a Euclidean concept) made by the poles of the longitudinal lines. Thus, the distance between latitudes is constant, regardless of your longitude.
Your implementation
Now, your implementation relies on the idea that these lines are always at a constant distance. If that were the case, you'd be able to take a simple scaling approach, as you have. If they were, in fact, parallel in the Euclidean sense, it would be not too dissimilar to converting miles per hour to kilometers per hour. However, the variance in distance makes this much more complicated.
The distance between longitudes at the north pole is zero, and at the equator, as your cited Wikipedia page states, it's 111.32 kilometers. Consequently, to get a truly accurate result, you must account for the latitude you're looking for. That's why this gets a little more complicated.
Getting Realistic Results
Now, for the formula you want: given your recent edit, it seems that you're looking to incorporate both latitude and longitude in your assessment. Given your code example, it also seems that you want to find the distance between two coordinates and that you want it to work well at short distances. Thus, I will suggest, as the website I pointed you to at the beginning of this post does, a haversine formula. That website gives lots of good information on it, but this is the formula itself. I'm copying it directly from the site, symbols and all, to make sure I don't make any stupid typos. It is, of course, JavaScript, but you can basically just change some cases and it will run in C#.
In this, φ is latitude, λ is longitude, θ is the bearing (in radians, clockwise from north), δ is the angular distance (in radians) d/R; d being the distance travelled, R the earth’s radius
var R = 6371; // km
var φ1 = lat1.toRadians();
var φ2 = lat2.toRadians();
var Δφ = (lat2-lat1).toRadians();
var Δλ = (lon2-lon1).toRadians();

var a = Math.sin(Δφ/2) * Math.sin(Δφ/2) +
        Math.cos(φ1) * Math.cos(φ2) *
        Math.sin(Δλ/2) * Math.sin(Δλ/2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c;
I think the only thing that must be noted here is that R, as declared in the first line, is the radius of the earth. As the comment suggests, we're already working in kilometers so you may or may not have to change that for your implementation. It's easy enough, fortunately, to find the (average) radius of the earth in your favorite units by doing a search online.
Of course, you'll also want to note that toRadians is simply the input multiplied by Math.PI, then divided by 180. Simple enough.
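If it helps, a direct C# transcription might look like this (a sketch; ToRad is written out because .NET has no built-in toRadians):
static double HaversineDistanceKm(double lat1, double lon1, double lat2, double lon2)
{
    const double R = 6371; // mean earth radius in km
    double ToRad(double degrees) => degrees * Math.PI / 180.0;

    double phi1 = ToRad(lat1);
    double phi2 = ToRad(lat2);
    double dPhi = ToRad(lat2 - lat1);
    double dLambda = ToRad(lon2 - lon1);

    double a = Math.Sin(dPhi / 2) * Math.Sin(dPhi / 2) +
               Math.Cos(phi1) * Math.Cos(phi2) *
               Math.Sin(dLambda / 2) * Math.Sin(dLambda / 2);
    double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    return R * c;
}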
Alternative
This doesn't really look relevant to your case, but I will include it. The aforementioned formula will give accurate results, but at the cost of speed. Obviously, it's a pretty small deal on any individual record, but as you build up to handle more and more, this might become an issue. If it does, and if you're dealing with a fairly centralized locale, you can exploit the immense size of our planet: find suitable constants for the distance covered by one degree of latitude and one degree of longitude near your area, treat the planet as "more or less Euclidean" (flat, that is), and use the Pythagorean theorem to figure the values. (I'd just find these numbers, personally, by asking Google Earth or a similar product.) Of course, that will become less and less accurate the farther you get from your original test site. But if you're dealing with a dense cluster of users, it will be way, way faster than running a flurry of calls through the Math class.
Another, more abstract alternative
You might also want to think about where you're doing this logic. Here I begin to overstep my reach a bit, but if you happen to be storing your data in SQL Server, it already has some really cool geography functionality built right in that will handle distance calculations for you. Just check out the GEOGRAPHY type.
Edit
This is a response to a comment suggesting that the desired result is really a rectangle denoting boundaries. I would advise against this, because it isn't really a search "radius", as your code may suggest.
But if you do want to stick to that method, you'll be looking at two separate distances: one for latitude and one for longitude. This is also from that webpage. φ1 is myLatitude, and λ1 is myLongitude. This formula accepts a bearing and starting coordinates, then gives the resulting position.
var φ2 = Math.asin( Math.sin(φ1)*Math.cos(d/R) + Math.cos(φ1)*Math.sin(d/R)*Math.cos(brng) );
var λ2 = λ1 + Math.atan2(Math.sin(brng)*Math.sin(d/R)*Math.cos(φ1), Math.cos(d/R)-Math.sin(φ1)*Math.sin(φ2));
You could use that to determine the boundaries of your search rectangle.
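For small radii, though, the simpler per-degree approximation from earlier in this thread gives the rectangle directly. A sketch (the position and radius here are hypothetical):
double radiusMeters = 5000;           // hypothetical search radius
double myLat = 52.0, myLon = 13.0;    // hypothetical position in decimal degrees

double dLat = radiusMeters / (111.32 * 1000);   // one degree of latitude is ~111.32 km
double dLon = radiusMeters / (111.32 * 1000 * Math.Cos(myLat * Math.PI / 180.0));

double begLat = myLat - dLat, endLat = myLat + dLat;
double begLong = myLon - dLon, endLong = myLon + dLon;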
I'm currently using what I believe is called value noise, in combination with fractal Brownian motion, to generate terrain in a 3D environment. I generate a 2D heightmap from the noise, with values ranging between -1 and +1. I typically just multiply that return value by 10 or so to set the height, and that generates rolling hills.
What I'd like to do is somehow combine calls to the algorithm to have some areas very hilly, others quite flat, while some are nearly mountainous. How do you go about something like that without having edges between areas be extremely obvious (like jutting cliffs segregating them)?
Edit: This is needed to be completely procedural in a near infinite environment.
Based on ananthonline's answer, I'm taking a low-frequency call to my noise generator as the mask. With two biome types, I take the biome's 'impact', subtract the absolute value of the mask value minus the biome's 'location' on the mask, and divide that whole value by the 'impact' again:
(Impact - abs(Mask - Location)) / Impact
That gives me a value that, when positive, I can multiply with the return value of a specific noise call with specific frequencies, amplitudes, etc. (such as rolling hills, mountains, or ocean).
The primary issue here is that if my mask returned values of 0 through 1, for example, then in a two-biome scenario I'd need to set one biome's location to .25 and the other's to .75, each with a strength ('impact') of .5, for them to blend together properly. At least, if I wanted an even distribution of each biome type AND to blend them together properly. I'd struggle quite a bit if I wanted, say, mountains to show up twice as often as rolling hills.
I do so hope that makes sense. The math works out fantastically, but I certainly don't think I'm explaining it well with my limited mathematics background. Maybe some code will help (if my cruddy uncommented code means something to someone); note that GetNoise2d returns values between 0 and 1:
float GetHeight(float X, float Z)
{
    float fRollingHills_Location = .25f, fRollingHills_Impact = .5f, fRollingHills = 0;
    float fMountains_Location = .75f, fMountains_Impact = .5f, fMountains = 0;

    // Low-frequency mask that selects the biome.
    float fMask = GetNoise2d(0, X, Z, 2, .01f, .5f, false, false);

    // Strength is positive only when the mask is within 'Impact' of the biome's 'Location'.
    float fRollingHills_Strength = (fRollingHills_Impact - Math.Abs(fMask - fRollingHills_Location)) / fRollingHills_Impact;
    float fMountains_Strength = (fMountains_Impact - Math.Abs(fMask - fMountains_Location)) / fMountains_Impact;

    if (fRollingHills_Strength > 0)
        fRollingHills = fRollingHills_Strength * (GetNoise2d(0, X, Z, 2, .05f, .5f, false, false) * 10f + 25f);

    if (fMountains_Strength > 0)
        fMountains = fMountains_Strength * (GetNoise2d(0, X, Z, 2, .1f, .5f, false, false) * 25f + 10f);

    return fRollingHills + fMountains;
}
If a problem with the above code needs to be specified, let it be that doing this with, say, 10 different biomes would require some extreme thought about the exact 'location' and 'impact' values to ensure the blending is flawless. I'd rather code it in such a way that this is already taken care of.
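One way to sidestep that hand-tuning (a sketch building on the code above): normalize the per-biome strengths so they always sum to one, no matter how many biomes overlap at a given mask value:
float GetHeightNormalized(float X, float Z)
{
    float fMask = GetNoise2d(0, X, Z, 2, .01f, .5f, false, false);

    // Same strength formula as above, clamped at zero.
    float hills = Math.Max(0f, (.5f - Math.Abs(fMask - .25f)) / .5f);
    float mountains = Math.Max(0f, (.5f - Math.Abs(fMask - .75f)) / .5f);

    float total = hills + mountains;
    if (total <= 0f)
        return 0f;   // no biome covers this mask value

    // Normalized weights sum to one, so 'location'/'impact' need not be balanced by hand.
    hills /= total;
    mountains /= total;

    return hills * (GetNoise2d(0, X, Z, 2, .05f, .5f, false, false) * 10f + 25f)
         + mountains * (GetNoise2d(0, X, Z, 2, .1f, .5f, false, false) * 25f + 10f);
}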
How about generating low-frequency noise (white to black) in a texture that is 1/8th the size of the terrain texture or smaller, blurring it, and then using this as the mask to blend the two heightmaps together (perhaps as part of the rendering algorithm itself)?
Note that you can also paint this "blending" texture by hand, allowing fine control of cliffs vs smooth transition areas (sharper edges vs blurry edges).
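In code, the blend itself can be as simple as a lerp driven by that mask. A sketch using the GetNoise2d signature from the question (the frequencies here are assumptions):
float GetBlendedHeight(float X, float Z)
{
    // Very low-frequency mask in [0,1]; its smoothness plays the role of the blur.
    float mask = GetNoise2d(0, X, Z, 2, .005f, .5f, false, false);

    float hills = GetNoise2d(0, X, Z, 2, .05f, .5f, false, false) * 10f;
    float mountains = GetNoise2d(0, X, Z, 2, .1f, .5f, false, false) * 25f;

    // Linear interpolation between the two heightmaps by the mask value.
    return hills * (1f - mask) + mountains * mask;
}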