Matlab to C# code translation and matrix/array - c#

I'm rewriting Matlab code in C#. I have no idea about programming in Matlab and I can't understand this part:
d9=[d9 d8];
d10=d9(:,2:10);
d5=[d6 d10 d7];
Variables d6, d7, d8 and d9 are 2-dimensional arrays.
Full Matlab code is here: link to codeforge.com

"I have no idea about programming in matlab and I can't understand this part"
a) d9=[d9 d8];
will concatenate the matrices d9 and d8 horizontally and store the result in d9. Put another way, it appends the columns of d8 to d9.
Example :
>> a=[1 2;3 4]
a =
1 2
3 4
>> b=[5 6;7 8]
b =
5 6
7 8
>> a=[a b]
a =
1 2 5 6
3 4 7 8
b) d10=d9(:,2:10);
: is the colon operator, used extensively for vector manipulation, subscripting, and creating loop iterators.
Here,
the second subscript 2:10 means columns number 2 3 4...10 of d9
the first subscript : means all rows of d9
So d10 is assigned all elements in columns 2 to 10, from all rows, of d9.
Example :
>> c=a(:,2:4)
c =
2 5 6
4 7 8
c) d5=[d6 d10 d7];
Again, similar to the first one: this concatenates the matrices d6, d10 and d7 and assigns the result to d5.
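Since the question is about translating this to C#, here is a minimal sketch of the same operations with plain 2-D arrays. It is written in Java (syntactically close to C#, so it should port almost line for line); the class and method names are my own invention, and it assumes all inputs have the same number of rows.

```java
import java.util.Arrays;

public class MatlabOps {
    // d9 = [d9 d8] -- horizontal concatenation of any number of matrices
    static double[][] hcat(double[][]... mats) {
        int rows = mats[0].length, cols = 0;
        for (double[][] m : mats) cols += m[0].length;
        double[][] out = new double[rows][cols];
        for (int r = 0; r < rows; r++) {
            int c = 0;
            for (double[][] m : mats)
                for (double v : m[r]) out[r][c++] = v;
        }
        return out;
    }

    // d10 = d9(:, 2:10) -- columns from..to (1-based, inclusive) of every row
    static double[][] cols(double[][] m, int from, int to) {
        double[][] out = new double[m.length][to - from + 1];
        for (int r = 0; r < m.length; r++)
            for (int c = from; c <= to; c++)
                out[r][c - from] = m[r][c - 1]; // Matlab subscripts are 1-based
        return out;
    }

    public static void main(String[] args) {
        double[][] a = { { 1, 2 }, { 3, 4 } };
        double[][] b = { { 5, 6 }, { 7, 8 } };
        System.out.println(Arrays.deepToString(hcat(a, b)));
        System.out.println(Arrays.deepToString(cols(hcat(a, b), 2, 4)));
    }
}
```

With these helpers the three Matlab lines become d9 = hcat(d9, d8); d10 = cols(d9, 2, 10); d5 = hcat(d6, d10, d7);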

Not yet able to comment directly under an answer, but I think there is a typo in P0W's answer.
It should state:
"first subscript : all rows in d9" (emphasis added) instead of "first subscript : all rows in d10".
The rest of the answer is correct, but just in case it confuses somebody unfamiliar with Matlab...


What do & 0x40 and << 7 mean?

I have a list with 10 values (bytes, in hex). The list is converted to decimal:
09 04 5A 14 4F 7D
to
9 4 90 20 79 125
After that, there is a method (parameter: List<Byte> byteList). Can anybody explain to me the following code in that method:
"Test:" + ((((UInt16)byteList[(Int32)index] & 0x40) << 1) >> 7)
Especially & 0x40, << 1 and >> 7.
0x40 is hex 40 - aka 64 in decimal, or 01000000 in binary. & is bitwise "and", so {expr} & 0x40 means "take just the 7th bit". << is left shift, and >> is right shift. So this:
takes the 7th bit
left shifts 1
right shifts 7
leaving the 7th bit in the LSB position, so the final value will be either 0 (if the 7th bit wasn't set) or 1 (if the 7th bit was set)
Frankly, it would be easier to just >> 6, or to just compare against 0. Likewise, the cast to ushort (UInt16) is not useful here.
If I wanted to test the 7th bit, I would just have done:
bool isSet = (byteList[(int)index] & 0x40) != 0;
This is a test to verify if 7th bit of value is set. Result will be 1 if bit is set, and 0 if not.
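To make the equivalence of the three approaches concrete, here is a small runnable sketch. It is in Java rather than C# (the bitwise operators behave the same way), and the sample value 0x5A is my own, chosen so that bit 6 (the 0x40 bit) is set:

```java
public class BitTest {
    public static void main(String[] args) {
        int value = 0x5A; // 0101 1010 in binary; the 0x40 bit is set

        int viaOriginal  = ((value & 0x40) << 1) >> 7; // the expression from the question
        int viaOneShift  = (value & 0x40) >> 6;        // simpler: a single right shift
        boolean isSet    = (value & 0x40) != 0;        // simplest: compare against zero

        // All three agree: 1, 1, true
        System.out.println(viaOriginal + " " + viaOneShift + " " + isSet);
    }
}
```

For a value whose 0x40 bit is clear (e.g. 0x09), all three yield 0/false instead.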

How is an integer stored in memory?

This is most probably the dumbest question anyone would ask, but regardless I hope I will find a clear answer for this.
My question is - How is an integer stored in computer memory?
In C# an integer is of size 32 bits. MSDN says we can store numbers from -2,147,483,648 to 2,147,483,647 in an integer variable.
As per my understanding, a bit can store only 2 values, i.e. 0 and 1. If I can store only 0 or 1 in a bit, how will I be able to store the numbers 2 to 9 inside a bit?
More precisely, say I have this code int x = 5; How will this be represented in memory or in other words how is 5 converted into 0's and 1's, and what is the convention behind it?
It's represented in binary (base 2). Read more about number bases. In base 2 you only need 2 different symbols to represent a number. We usually use the symbols 0 and 1. In our usual base we use 10 different symbols to represent all the numbers, 0, 1, 2, ... 8, and 9.
For comparison, think about a number that doesn't fit in our usual system. Like 14. We don't have a symbol for 14, so how do we represent it? Easy, we just combine two of our symbols, 1 and 4. 14 in base 10 means 1*10^1 + 4*10^0.
1110 in base 2 (binary) means 1*2^3 + 1*2^2 + 1*2^1 + 0*2^0 = 8 + 4 + 2 + 0 = 14. So despite not having enough symbols in either base to represent 14 with a single symbol, we can still represent it in both bases.
In another commonly used base, base 16, which is also known as hexadecimal, we have enough symbols to represent 14 using only one of them. You'll usually see 14 written using the symbol e in hexadecimal.
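The same number rendered in all three bases can be checked directly; here is a sketch in Java (C#'s Convert.ToString(value, base) plays the same role as Integer.toString(value, radix)):

```java
public class Bases {
    public static void main(String[] args) {
        // The number fourteen in base 2, base 10 and base 16:
        System.out.println(Integer.toString(14, 2));  // 1110
        System.out.println(Integer.toString(14, 10)); // 14
        System.out.println(Integer.toString(14, 16)); // e
    }
}
```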
For negative integers we use a convenient representation called two's complement, which is the complement of the number (all 1s flipped to 0s and all 0s flipped to 1s) with one added to it.
There are two main reasons this is so convenient:
We know immediately if a number is positive or negative by looking at a single bit, the most significant bit out of the 32 we use.
It's mathematically correct in that x - y = x + -y using regular addition the same way you learnt in grade school. This means that processors don't need to do anything special to implement subtraction if they already have addition. They can simply find the twos-complement of y (recall, flip the bits and add one) and then add x and y using the addition circuit they already have, rather than having a special circuit for subtraction.
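The "flip the bits and add one" trick is easy to verify. A minimal Java sketch (the integers behave identically to C#'s int; the values 100 and 42 are arbitrary examples of mine):

```java
public class TwosComplement {
    public static void main(String[] args) {
        int x = 100, y = 42;
        // Subtraction done with the addition circuit: flip the bits of y,
        // add one (that is the two's complement of y), then add as usual.
        int viaMinus      = x - y;
        int viaComplement = x + (~y + 1);
        System.out.println(viaMinus + " == " + viaComplement); // both 58
    }
}
```

The same identity holds when the result is negative, e.g. 5 + (~7 + 1) gives -2.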
This is not a dumb question at all.
Let's start with uint because it's slightly easier. The convention is:
You have 32 bits in a uint. Each bit is assigned a number ranging from 0 to 31. By convention the rightmost bit is 0 and the leftmost bit is 31.
Take each bit number and raise 2 to that power, and then multiply it by the value of the bit. So if bit number three is one, that's 1 x 2^3. If bit number twelve is zero, that's 0 x 2^12.
Add up all those numbers. That's the value.
So five would be 00000000000000000000000000000101, because 5 = 1 x 2^0 + 0 x 2^1 + 1 x 2^2 + ... the rest are all zero.
That's a uint. The convention for ints is:
Compute the value as a uint.
If the value is greater than or equal to 0 and strictly less than 2^31 then you're done. The int and uint values are the same.
Otherwise, subtract 2^32 from the uint value and that's the int value.
This might seem like an odd convention. We use it because it turns out that it is easy to build chips that perform arithmetic in this format extremely quickly.
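The convention above can be checked directly. A sketch in Java (same 32-bit two's-complement ints as C#); the value 4294967291 is my example, chosen as 2^32 - 5:

```java
public class IntConvention {
    public static void main(String[] args) {
        long asUint = 4_294_967_291L;   // 2^32 - 5: a "uint" value >= 2^31
        int  asInt  = (int) asUint;     // same 32 bits read as int: 4294967291 - 2^32 = -5

        System.out.println(asInt);                         // -5
        System.out.println(Integer.toBinaryString(asInt)); // the 32-bit pattern of -5
    }
}
```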
Binary works as follows (for your 32 bits):

bit position: 31  30   29   28  ...  3   2   1   0
weight:        s 2^30 2^29 2^28 ... 2^3 2^2 2^1 2^0

s = sign bit (if 1 then negative number, if 0 then positive)
So the highest number is 0111111111............1 (all ones except the sign bit), which is 2^30 + 2^29 + 2^28 + ... + 2^1 + 2^0, or 2,147,483,647.
The lowest is 1000000.........0, meaning -2^31, or -2,147,483,648.
Is this what high level languages lead to!? Eeek!
As other people have said it's a base 2 counting system. Humans are naturally base 10 counters mostly, though time for some reason is base 60, and 6 x 9 = 42 in base 13. Alan Turing was apparently adept at base 17 mental arithmetic.
Computers operate in base 2 because it's easy for the electronics to be either on or off - representing 1 and 0, which is all you need for base 2. You could build the electronics in such a way that it was on, off or somewhere in between. That'd be 3 states, allowing you to do ternary math (as opposed to binary math). However, the reliability is reduced because it's harder to tell the difference between those three states, and the electronics are much more complicated. Even more levels leads to worse reliability.
Despite that it is done in multi level cell flash memory. In these each memory cell represents on, off and a number of intermediate values. This improves the capacity (each cell can store several bits), but it is bad news for reliability. This sort of chip is used in solid state drives, and these operate on the very edge of total unreliability in order to maximise capacity.

Generating random numbers for solar energy harvesting using Markov models

How do I generate random numbers using a Markov model in C#? I noticed here that almost all of the applications of the Markov algorithm are for randomly writing text. Is there source code somewhere, or a tutorial, where I can fully understand how this works? My goal actually is to generate random numbers to simulate solar energy harvesting.
First decide how deep your Markov model is going. Do you look at the previous number? The previous two numbers? The previous three numbers? Perhaps deeper?
Second, look through some actual solar energy data and extract the probabilities for what follows a group of 1, 2 or 3 numbers. Ideally you will be able to get complete coverage, but there may well be gaps. For those either extrapolate, or put in some average/random value.
All this so far is data.
Third generate the first 1, 2 or 3 numbers. From your database pick the correct combination and randomly select one of the possible followers. When I do this, I have a low probability random element possible as well so things don't get stuck in a rut.
Drop the earliest element of your 1, 2 or 3 numbers. Shift the others down and add the new number at the end. Repeat until you have enough data.
Here is a short extract from my 1-deep Markov word generator showing part of the data table:
// The line addEntry('h', "e 50 a 23 i 12 o 7 # 100") shows that the letter
// 'h' is followed by 'e' 50% of the time, 'a' 23% of the time, 'i' 12% of
// the time, 'o' 7% of the time and otherwise some other letter, '#'.
//
// Figures are taken from Gaines and tweaked. (see 'q')
private void initMarkovTable() {
    mMarkovTable = new HashMap<Character, List<CFPair>>(26);
    addEntry('a', "n 21 t 17 s 12 r 10 l 8 d 5 c 4 m 4 # 100");
    addEntry('b', "e 34 l 17 u 11 o 9 a 7 y 5 b 4 r 4 # 100");
    addEntry('c', "h 19 o 19 e 17 a 13 i 7 t 6 r 4 l 4 k 4 # 100");
    addEntry('d', "e 16 i 14 a 14 o 10 y 8 s 6 u 5 # 100");
    addEntry('e', "r 15 d 10 s 9 n 8 a 7 t 6 m 5 e 4 c 4 o 4 w 4 # 100");
    addEntry('f', "t 22 o 21 e 10 i 9 a 7 r 5 f 5 u 4 # 100");
    addEntry('g', "e 14 h 14 o 12 r 10 a 8 t 6 f 5 w 4 i 4 s 4 # 100");
    addEntry('h', "e 50 a 23 i 12 o 7 # 100");
    // ...
}
The data is organised as letter-frequency pairs. I used the '#' character to indicate "pick any letter here". Your data will be number-frequency pairs instead.
To pick an output, I read the appropriate line of the data and generate a random percentage. Scan along the data, accumulating the frequencies, until the accumulated frequency exceeds the random percentage. That is the letter (or number, in your case) you pick.
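The scan-and-accumulate step can be sketched as follows. This is my own minimal reconstruction, not the author's actual code: it hard-codes the 'h' line from the table above, treats the listed numbers as individual percentages, and uses the '#' entry as the catch-all bucket.

```java
public class MarkovPick {
    // Followers of 'h' from the table above, with their percentages.
    static final char[] LETTERS = { 'e', 'a', 'i', 'o', '#' };
    static final int[]  FREQS   = { 50, 23, 12, 7, 100 }; // '#' catches the rest

    // Scan along the data, accumulating frequencies, until the accumulated
    // frequency exceeds the random percentage (0..99 inclusive).
    static char pick(int percent) {
        int accumulated = 0;
        for (int i = 0; i < LETTERS.length; i++) {
            accumulated += FREQS[i];
            if (percent < accumulated) return LETTERS[i];
        }
        return '#'; // unreachable while the last bucket is the catch-all
    }

    public static void main(String[] args) {
        int percent = new java.util.Random().nextInt(100);
        System.out.println("h -> " + pick(percent));
    }
}
```

For number generation, LETTERS would hold numbers (or bins of numbers) instead of characters, but the scan is identical.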

How are byte values obtained (XOR Example)

I have been reading this page here from MSDN regarding the XOR operator and its usage.
Half way down the page I read the following code:
// Bitwise exclusive-OR of 10 (2) and 11 (3) returns 01 (1).
Console.WriteLine("Bitwise result: {0}", Convert.ToString(0x2 ^ 0x3, 2));
Now, I cannot figure out for the life of me how 10 equates to 2, or how 11 equates to 3. Would anyone mind explaining this in simple terms so that I can clearly understand the concept here?
Thank you,
Evan
The "10" and "11" in the text are simply binary representations of numbers. So "10" in binary equals "2" in decimal, and "11" in binary equals "3" in decimal.
It's not very clear though, I admit...
(If that doesn't help, please comment saying what else is confusing. I suspect this is enough though.)
10 in binary is a 2 in decimal,
11 in binary is a 3
(10)_2 = 1*2^1 + 0*2^0 = 2
(11)_2 = 1*2^1 + 1*2^0 = 3
10 XOR 11 = 01
  10
^ 11
----
  01
Exclusive means exactly one of the bits has to be a '1' to get a '1'; in all other cases, you get a '0'.
The issue here is one of base conversion. In base 2 (or binary) we represent a number as a series of zeros and ones. Take a look at http://en.wikipedia.org/wiki/Binary_numeral_system
It's showing you in binary that hexadecimal (0x2) equals 00000010 and (0x3) equals 00000011.
Therefore in XOR that is
00000010
00000011
--------
00000001
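The MSDN snippet from the question can be reproduced almost verbatim; here is a sketch in Java, where Integer.toBinaryString plays the role of C#'s Convert.ToString(value, 2):

```java
public class XorDemo {
    public static void main(String[] args) {
        // Bitwise exclusive-OR of 10 (2) and 11 (3) returns 01 (1).
        int result = 0x2 ^ 0x3;
        System.out.println("Bitwise result: " + Integer.toBinaryString(result));
    }
}
```

This prints "Bitwise result: 1".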

Small problem in shifting a range, please help me out

I have a small conceptual problem.
I am working on a graph.
Consider that I have a point 32 between the range 28 and 35,
and I have to bring all the points between the range 28 and 35 within the range 1 and 2.
How do I calculate it?
Actually, I will have the point 32,
and I have to shift it between 1 and 2.
Please help me out.
In other words,
if 32 is between 28 and 35
what is 32 in range 1 and 2
I think it is: 1 + [(number - 28) / (35 - 28)], for example for 32 is (1 + 4/7) = 1.57...
and in general if you want move it within [a,b]:
a + (b-a) * [(number - 28) / (35 - 28)]
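The general formula can be wrapped in a small helper; a Java sketch (class and parameter names are my own):

```java
public class Rescale {
    // Linearly map `number` from the range [lo, hi] to the range [a, b].
    static double rescale(double number, double lo, double hi, double a, double b) {
        return a + (b - a) * (number - lo) / (hi - lo);
    }

    public static void main(String[] args) {
        // 32 in [28, 35] maps to 1 + 4/7 = 1.571... in [1, 2]
        System.out.println(rescale(32, 28, 35, 1, 2));
    }
}
```

The endpoints map exactly: rescale(28, ...) gives 1 and rescale(35, ...) gives 2.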
