Convert value stored as hex to date - c#

I have data stored in a file in HEX format and I know some examples of what the corresponding date should be.
However I am unable to determine how to calculate it.
83 61 94 04/08/2015
83 61 75 16/07/2015
83 61 97 07/08/2015
83 0 135 01/01/1999
83 51 64 08/10/2012
I don't know how the date was encoded originally and have no way of finding out as the data file is old and no longer supported by anyone.
Any suggestions as to how to convert the hex to the corresponding date?

2015-08-04 - **94** days = 2015-05-02
2015-07-16 - **75** days = 2015-05-02
2015-08-07 - **97** days = 2015-05-02
So the third byte, in its "hexadecimal form", must have to do with the "number of days", and 2015-05-02 should be encoded as 83 61 00, I guess. However, you should provide some more examples of other dates with different years so as to understand the other bytes.
EDIT: Your new data tells us the following:
83 51 64 --> 2012-10-08 - **64** days = 2012-08-05 (83 51 00)
83 01 35 --> 1999-01-01 - 35 days = 1998-11-27 (83 01 00)
61 - 51 = 10, and there are 1,000 days between 2012-08-05 and 2015-05-02 (83 61 00). That means the second "byte" also gives us "days", but with a x100 multiplier.
Try it on the second case (1998-11-27). There are 6,000 days from that date to 2015-05-02 (83 61 00): 61 - 01 = 60, and 60 * 100 = 6000.
I believe your formula should be something like:
Date in "hex" (AA BB CC)
Date = (AA * 1000 + BB * 100 + CC) days + Date_0
And I let you calculate yourself Date_0 as you can use any of the date examples that you have provided.
Note I am assuming AA is multiplied by 1000 as you have not provided any example where the first byte was changed.
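For illustration, here is a minimal C# sketch of that formula. It assumes the three fields are read as the plain decimal numbers shown above (which is how the arithmetic works out), and it derives Date_0 from one of the known samples instead of hardcoding it; the class and method names are just placeholders:
using System;

class HexDateDecoder
{
    // Day count encoded by the three fields AA BB CC.
    static int Days(int aa, int bb, int cc) => aa * 1000 + bb * 100 + cc;

    static void Main()
    {
        // Derive Date_0 from one known pair: 83 61 94 <-> 04/08/2015.
        DateTime date0 = new DateTime(2015, 8, 4).AddDays(-Days(83, 61, 94));

        // Decode the other samples; they should match the dates in the question.
        Console.WriteLine(date0.AddDays(Days(83, 61, 75)).ToString("dd/MM/yyyy")); // 16/07/2015
        Console.WriteLine(date0.AddDays(Days(83, 51, 64)).ToString("dd/MM/yyyy")); // 08/10/2012
        Console.WriteLine(date0.AddDays(Days(83, 1, 35)).ToString("dd/MM/yyyy"));  // 01/01/1999
    }
}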


Why does Assert.AreEqual() fail for string and DateTimeFormatter?

I have written the following unit test to test date time formatting:
using System;
using Windows.Globalization.DateTimeFormatting;
using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;
namespace MyTests
{
    [TestClass]
    public class DateTimeFormatterTests
    {
        [DataTestMethod]
        [DataRow(2, 3, 2017, "en", "Thursday, March 2")]
        [DataRow(2, 3, 2017, "de", "Donnerstag, 2. März")]
        public void Long_date_without_year_should_match_expected(int day, int month, int year, string regionCode, string expected)
        {
            DateTimeFormatter formatter = new DateTimeFormatter("dayofweek month day", new[] { regionCode });
            string actual = formatter.Format(new DateTime(year, month, day));
            Assert.AreEqual(expected, actual);
        }
    }
}
I don't understand why the assertion fails with the following error:
{"Assert.AreEqual failed. Expected:<Thursday, March 2>. Actual:<‎Thursday‎, ‎March‎ ‎2>. "}
Is this because the strings have different encoding?
After converting both strings into byte arrays using UTF8 encoding the content of the byte arrays looks like this:
actual:
e2 80 8e 54 68 75 72 73 64 61 79 e2 80 8e 2c 20 e2 80 8e 4d 61 72 63 68 e2 80 8e 20 e2 80 8e 32
expected:
54 68 75 72 73 64 61 79 2c 20 4d 61 72 63 68 20 32
The octets e2 80 8e show that you have several U+200E characters in the actual string. U+200E is a control character for overriding the bi-directional text algorithm and insisting that what follows be written left-to-right, even if it's a case (such as Hebrew or Arabic characters) that would normally be written right-to-left.
The expected string does not have them.
Presumably that control character got copied into either your test data or into the actual source of the formatter you are testing. In the latter case, be glad the testing caught it. (Alternatively, maybe it's meant to be there for some reason).
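If the marks turn out to be expected output from the formatter and you only want the comparison to ignore them, one option (just a sketch; StripDirectionalMarks is a helper name made up here) is to remove U+200E/U+200F before asserting:
// Hypothetical helper that removes LEFT-TO-RIGHT MARK / RIGHT-TO-LEFT MARK
// characters before the comparison.
private static string StripDirectionalMarks(string s)
{
    return s.Replace("\u200E", string.Empty).Replace("\u200F", string.Empty);
}

// In the test:
Assert.AreEqual(expected, StripDirectionalMarks(actual));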

Why does DateTime.Now set the highest bit in DateTime's usual binary representation?

As far as I know, the binary representations of the DateTime and TimeSpan structures are 8-byte tick counts (1 millisecond = 10,000 ticks, per TimeSpan.TicksPerMillisecond). The values of the Days, Hours, Minutes, etc. properties are obtained by integer division by the TicksPerDay, TicksPerHour, TicksPerSecond, etc. constants of TimeSpan.
For example, if you run this code:
TimeSpan s1 = new TimeSpan(3, 5, 7, 9, 11).Add(TimeSpan.FromTicks(13));
long t1 = s1.Ticks;
you can get (if you use Visual Studio) something like this in your Memory window:
0x061BE4D0 3d 2a c9 67 86 02 00 00
0x061BE4E0 3d 2a c9 67 86 02 00 00
where 0x061BE4D0 and 0x061BE4E0 are addresses of s1 and t1 respectively.
(Actually, you should enter 's1' and '&t1' rather than just 't1' in the Address field of the Memory window.)
Now if you run another snippet of code:
DateTime d1 = new DateTime(1, 1, 3, 5, 7, 9, 11).AddTicks(13);
long t1 = d1.Ticks;
DateTime d2 = DateTime.Now;
long t2 = d2.Ticks;
you'll see for 'd1', '&t1', 'd2', '&t2' respectively data like this:
0x061AE438 3d 6a 5f 3d bd 01 00 00
0x061AE430 3d 6a 5f 3d bd 01 00 00
0x061AE424 bd 71 d5 02 3f 9d d0 88
0x061AE41C bd 71 d5 02 3f 9d d0 08
Why does DateTime.Now set the highest bit (0x 80 00 00 00 00 00 00 00) in its binary representation?
If you look at the source code here (not sure exactly which version this is, but you get the idea):
http://www.dotnetframework.org/default.aspx/DotNET/DotNET/8#0/untmp/whidbey/REDBITS/ndp/clr/src/BCL/System/DateTime#cs/1/DateTime#cs
you can see that high bits are applied depending on whether the time is local or not.
From a quick glance over the code, there's a const member called:
private const UInt64 KindLocal = 0x8000000000000000;
which looks as if it's used in the conversion. I'd suspect this is happening because you're using DateTime.Now, which is a local 'Now'. If you used 'UtcNow' it probably would set a different bit.
private const UInt64 KindUtc = 0x4000000000000000;
However, when you get it as 'ticks', it probably returns the unspecified form, which has no top bit set.
private const UInt64 KindUnspecified = 0x0000000000000000;
Basically, you're getting hung up on the inner workings of the struct. If you really want to understand it, then I'd suggest digging through the code. Otherwise, just use it as per the instructions, and it'll work fine for you!
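A small sketch to see this without the memory window; the Kind* masks are the constants quoted above, and OR-ing one into the ticks is an assumption about how the in-memory value you saw is composed:
using System;

class KindBitsDemo
{
    // Constants as quoted from the reference source above.
    const ulong KindUtc   = 0x4000000000000000;
    const ulong KindLocal = 0x8000000000000000;

    static void Main()
    {
        DateTime now = DateTime.Now;        // Kind == DateTimeKind.Local
        DateTime utc = DateTime.UtcNow;     // Kind == DateTimeKind.Utc

        // The public Ticks value never carries the kind flags.
        Console.WriteLine(((ulong)now.Ticks).ToString("x16"));

        // The in-memory value is roughly (ticks | kind flag), which is why
        // DateTime.Now shows the 0x80... bit in the memory window.
        Console.WriteLine(((ulong)now.Ticks | KindLocal).ToString("x16"));
        Console.WriteLine(((ulong)utc.Ticks | KindUtc).ToString("x16"));

        Console.WriteLine(now.Kind);        // Local
    }
}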

Encoding/Decoding hex packet

I want to send this hex packet:
00 38 60 dc 00 00 04 33 30 3c 00 00 00 20 63 62
39 62 33 61 36 37 34 64 31 36 66 32 31 39 30 64
30 34 30 63 30 39 32 66 34 66 38 38 32 62 00 06
35 2e 31 33 2e 31 00 00 02 3c
So I build the string:
string packet = "003860dc0000" + textbox1.Text + "00000020" + textbox2.Text + "0006" + textbox3.Text;
Then I "convert" it to ASCII:
conn_str = HexString2Ascii(packet);
Then I send the packet... but this is what I get:
00 38 60 **c3 9c** 00 00 04 33 30 3c 00 00 00 20 63
62 39 62 33 61 36 37 34 64 31 36 66 32 31 39 30
64 30 34 30 63 30 39 32 66 34 66 38 38 32 62 00
06 35 2e 31 33 2e 31 00 00 02 3c **0a**
Why?
Thank you!
P.S.
The function is:
private string HexString2Ascii(string hexString)
{
    // Convert each pair of hex digits to a byte, then map the bytes to text
    // using code page 1252.
    byte[] tmp = new byte[hexString.Length / 2];
    int j = 0;
    int length = hexString.Length - 2;
    for (int i = 0; i <= length; i += 2)
    {
        tmp[j] = (byte)Convert.ToChar(Int32.Parse(hexString.Substring(i, 2), System.Globalization.NumberStyles.HexNumber));
        j++;
    }
    return Encoding.GetEncoding(1252).GetString(tmp);
}
EDIT:
If I convert directly to bytes, the hex packet is sent encoded as a string:
00000000 30 30 33 38 36 30 64 63 30 30 30 30 30 34 33 33 003860dc 00000433
00000010 33 30 33 43 30 30 30 30 30 30 32 30 33 34 33 32 303C0000 00203432
00000020 36 33 36 33 33 35 33 39 33 32 33 34 36 36 33 39 63633539 32346639
00000030 36 33 33 39 33 31 33 39 33 30 33 36 33 33 36 35 63393139 30363365
00000040 33 35 36 33 36 35 36 35 36 35 33 31 33 39 33 38 35636565 65313938
00000050 36 33 33 31 36 34 33 34 36 33 33 30 30 30 30 36 63316434 63300006
00000060 33 35 32 65 33 31 33 33 32 65 33 31 30 30 30 30 352e3133 2e310000
00000070 30 32 33 43 023C
You cannot convert raw binary data to string data and expect things to just work. They are not the same. This is especially true when you mix up your character encodings.
C# characters are not ASCII characters. They are Unicode characters, represented by Unicode code points. When you then turn around and write those characters out, you need to specify what kind of data to write out. When you read your byte array into a string, using Encoding.GetEncoding(1252), you are getting the characters corresponding to code page 1252, in which 0xdc is a Ü.
But when your string is being converted back into bytes to send over the network, it is being written out as UTF-8. In UTF-8, U+00DC cannot be encoded as a single byte, since that byte value is used to indicate the start of a multi-byte sequence. Instead, it's encoded as the multi-byte sequence 0xc3 0x9c. As far as C# is concerned, those two representations are the same character. (I don't know where that extra 0x0a is coming from, but my guess is an errant line feed from one of your text boxes and/or some other part of your process.)
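You can reproduce the mangling in isolation (a small sketch using only the encodings already discussed):
using System.Text;

byte[] original = { 0xdc };
string asCp1252 = Encoding.GetEncoding(1252).GetString(original);   // "Ü"
byte[] roundTripped = Encoding.UTF8.GetBytes(asCp1252);
// roundTripped is { 0xc3, 0x9c }: the single 0xdc byte comes back as the
// two bytes seen in your capture.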
It's not clear what exactly you're trying to do, but I suspect you are converting way too many times for it to work out correctly. If you know the byte sequence you want to send, why not just build that as a byte[] directly? For example, use a MemoryStream and write the constant bytes you need into it.
To get the values out of your text boxes, your original code to "convert" the string of hex digits into a string of ASCII characters had the right idea. You just need to stop at the point where you have a byte array, since ultimately the byte array is what you want.
// Requires: using System.Globalization;
public byte[] GetBytesFrom(string hex)
{
    var length = hex.Length / 2;
    var result = new byte[length];
    for (var i = 0; i < length; i++)
    {
        // Each byte is two hex characters, so read from offset i * 2.
        result[i] = byte.Parse(hex.Substring(i * 2, 2), NumberStyles.HexNumber);
    }
    return result;
}
// Variable portions of the packet structure.
byte[] segment2 = GetBytesFrom(textbox1.Text);
byte[] segment4 = GetBytesFrom(textbox2.Text);
byte[] segment6 = GetBytesFrom(textbox3.Text);

MemoryStream output = new MemoryStream();
output.Write(new byte[] { 0x00, 0x38, 0x60, 0xdc, 0x00, 0x00 }, 0, 6);
output.Write(segment2, 0, segment2.Length);
output.Write(new byte[] { 0x00, 0x00, 0x00, 0x20 }, 0, 4);
output.Write(segment4, 0, segment4.Length);
output.Write(new byte[] { 0x00, 0x06 }, 0, 2);
output.Write(segment6, 0, segment6.Length);
From here, you could use MemoryStream.CopyTo() to copy it to another stream, or MemoryStream.Read() to read the entire packet into a new byte array, or MemoryStream.GetBuffer() to get the underlying buffer (though that last one is rarely what you want -- it includes unused padding bytes).
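Another option, not mentioned above, is MemoryStream.ToArray(), which copies just the bytes that were written:
byte[] packet = output.ToArray();   // only the bytes written above, no padding
// packet can now be sent over the socket as raw binary, with no string conversion.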

Share Fruits Fairly (Dynamic Programming)

I'm having a very hard time trying to figure out how to solve this problem efficiently. Let me describe how it goes:
"A hard working mom bought several fruits with different nutritional values for her 3 kids, Amelia, Jessica and Bruno. Both girls are overweight, and they are very vicious and always leave poor Bruno with nothing, so their mother decided to share the food in the following manner:
Amelia being the heaviest one gets the most amount of Nutritional Value
Jessica gets an amount equal or less than Amelia
Bruno gets an amount equal or less than Jessica, but you need to find a way to give him the highest possible nutritional value while respecting the rule ( A >= J >= B )"
Note: The original problem is described differently but the idea is the same, I don't want my classmates to find this post when they google for help hehe.
One of the test cases given by my teacher is the following:
The fruit list has the following values { 4, 2, 1, 8, 11, 5, 1}
Input:
7 -----> Number of Fruits
4 2 1 8 11 5 1 ----> Fruits Nutritional Values
Output:
1 11 ----> One fruit, their nutritional values sum for Amelia
5 ----> Position of the fruit in the list
3 11 ----> Three fruits, their nutritional values sum for Jessica
1 2 6 ----> Position of the fruits in the list
3 10 ----> Three fruits, their nutritional values sum for Bruno
3 4 7 ----> Position of the fruits in the list
Note: I am aware that there are several ways of dividing the fruits among the kids, but it doesn't really matter as long as it follows the rule A >= J >= B.
At first I made an algorithm that generated all the subsets, each one with its sum and the positions in use. That method was quickly discarded because the list of fruits can have up to 50 fruits and the subset algorithm is O(2^n); I ran out of memory.
The second idea that I have is to use Dynamic Programming. In the column headers I would have the positions of the fruit list, and the same in the row headers. It's kind of hard to explain with letters, so I'll go ahead and do the table for the previous example { 4, 2, 1, 8, 11, 5, 1 }.
00 01 02 03 04 05 06 07
00
01
02
03
04
05
06
07
Each time we advance to the row below we add the positions 1,2,3,...,7
00 01 02 03 04 05 06 07
00 00 <---No positions in use
01 04 <---RowPosition 1 + Column Position(Column 0) (4+0)
02 06 <---RowPosition 1 + RowPosition 2 + Column Position (4+2+0)
03 07 <---RP(1) + RP(2) + RP(3) + CP(0) (4+2+1+0)
04 15 <--- (4+2+1+8+0)
05 26
06 31
07 32 <--- (4+2+1+8+11+5+1+0)
Now that you know how it goes, let's add the first row:
00 01 02 03 04 05 06 07
00 00 04 02 01 08 11 05 01 <-- Sum of RP + CP
01 04 00 06 05 12 15 09 05 <-- Sum of RP(0..1) + CP
02 06
03 07
04 15
05 26
06 31
07 32
I put the 00 because the 1st position is already in use. The completed table would look like this.
00 01 02 03 04 05 06 07
00 00 04 02 01 08 11 05 01
01 04 00 06 05 12 15 09 05
02 06 00 00 07 14 17 11 07
03 07 00 00 00 15 18 12 08
04 15 00 00 00 00 26 20 16
05 26 00 00 00 00 00 31 27
06 31 00 00 00 00 00 00 32
07 32 00 00 00 00 00 00 00
Now that we have the table, I divide the sum of the nutritional values by the number of kids: 32 / 3 = 10.6667, so the ceiling is 11. I check whether 11 is in the table; if it is, I choose it and mark the corresponding row and column positions as used. Then I check for 11 again; if it's in the table I choose it, otherwise I look for 10, or 9, etc. until I find a value. Afterwards I mark the respective positions as used and sum the unused positions to get Bruno's fruits.
I know that there has to be a better way to do this, because I found a flaw in my method: the table only holds the sums of a few subsets, so that may be detrimental in a few test cases. Maybe a 3D memoization cube? I think it would consume too much memory, and I have a limit of 256 MB.
Wow, I didn't realize I typed this much x.X. I hope I don't get a lot of tl; dr. Any help / guide would be greatly appreciated :D
EDIT: I made the code that generates the table in case anyone wants to try it out.
static void TableGen(int[] Fruits)
{
int n = Fruits.Length + 1;
int[,] memo = new int[n, n];
for (int i = 1; i < n; i++)
{
memo[0, i] = Fruits[i - 1];
memo[i, 0] = memo[i - 1, 0] + Fruits[i - 1];
for (int j = i + 1; j < n; j++)
memo[i, j] = memo[i, 0] + Fruits[j - 1];
}
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n; j++)
Console.Write("{0:00} ", memo[i, j]);
Console.WriteLine();
}
}
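To reproduce the table above, the method can be called with the sample values from the question:
TableGen(new int[] { 4, 2, 1, 8, 11, 5, 1 });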
A slightly computationally intensive way would be to assign the fruit in a round-robin way, starting with the highest nutritional value for Amelia.
From there, progressively loop through the fruit from the lowest nutritional value held by Amelia, and try each combination of either (a) giving the fruit to Jessica, or (b) swapping the fruit with one held by Jessica, while still satisfying the rule.
Then apply the same method to Jessica and Bruno. Repeat these two loops until no more swaps or gives are possible.
Slightly trickier, but potentially faster, would be to simultaneously give/swap to Jessica/Bruno. For each two pieces of fruit that A holds, you would have four options to try, with more if you at the same time try to balance J and B.
For a faster algorithm, you could try asking at the mathematics stack exchange site, as this is very much a set-theory problem.
for (int i = 0; i < count; i++)
{
    int currentFruit = Fruits.Max();
    if (Amelia.Sum() + currentFruit < Jessica.Sum() + currentFruit)
    {
        Amelia.Add(currentFruit);
        Fruits.Remove(currentFruit);
        continue;
    }
    if (Jessica.Sum() + currentFruit < Bruno.Sum() + currentFruit)
    {
        Jessica.Add(currentFruit);
        Fruits.Remove(currentFruit);
        continue;
    }
    Bruno.Add(currentFruit);
    Fruits.Remove(currentFruit);
}
This works for fruits with relatively similar values. If you add a fruit whose value is greater than all other fruits combined (which I did once by accident) it breaks down a bit.

Project Euler 18

Hey, I've been working on Project Euler, and this one is giving me some problems.
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom of the triangle below:
...
NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)
Here's the algorithm I've used to solve it:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace Problem18
{
class Program
{
static void Main(string[] args)
{
string triangle = @"75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23";
string[] rows = triangle.Split('\n');
int currindex = 1;
int total = int.Parse(rows[0]);
Console.WriteLine(rows[0]);
for (int i = 1; i < rows.Length; i++)
{
string[] array1 = rows[i].Split(' ');
if (array1.Length > 1)
{
if (int.Parse(array1[currindex - 1]) > int.Parse(array1[currindex]))
{
Console.WriteLine(array1[currindex - 1]);
total += int.Parse(array1[currindex - 1]);
}
else
{
Console.WriteLine(array1[currindex]);
total += int.Parse(array1[currindex]);
currindex++;
}
}
}
Console.WriteLine("Total: " + total);
Console.ReadKey();
}
}
}
Now whenever I run it, it comes up with 1064, only 10 less than the solution -- 1074.
I haven't found any problems with the algorithm, and I did the problem by hand and also came up with 1064. Does anyone know if the solution is wrong, if I'm interpreting the question wrong, or if there's just a flaw in the algorithm?
Here's what the bottom-up method belisarius describes looks like, using the trivial triangle given in Problem 18, just in case the image in his post is confusing to anyone else:
03
07 04
02 04 06
08 05 09 03
03
07 04
02 04 06
08 05 09 03
^^^^^^
03
07 04
10 04 06
08 05 09 03
^^^^^^
03
07 04
10 13 06
08 05 09 03
^^^^^^
03
07 04
10 13 15
^^^^^^
08 05 09 03
03
20 04
10 13 15
^^^^^^
08 05 09 03
03
20 19
^^^^^^
10 13 15
08 05 09 03
23
^^
20 19
10 13 15
08 05 09 03
Your problem is that your algorithm is a greedy algorithm, always finding local maxima. Unfortunately that causes it to miss higher numbers down below because they are directly below lower numbers. For example, if the triangle were only 3 levels, your algorithm would pick 75 + 95 + 47 = 217, while the correct answer is 75 + 64 + 82 = 221.
The correct algorithm will either try every path and choose the one with the highest total, or compute paths from the bottom up (which allows you to avoid trying every one, thus being much faster). I should add that working from the bottom-up is not only much faster (O(n^2) instead of O(2^n)!), but also much easier to write (I did it in about 3 lines of code).
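For reference, here is a sketch of that bottom-up approach in C#. It assumes the triangle has already been parsed into a jagged int[][] (one array per row), which is not something the original code above does:
// Collapse the triangle from the second-to-last row upward: each cell
// becomes itself plus the larger of its two children.
static int MaxPathTotal(int[][] rows)
{
    for (int i = rows.Length - 2; i >= 0; i--)
        for (int j = 0; j < rows[i].Length; j++)
            rows[i][j] += Math.Max(rows[i + 1][j], rows[i + 1][j + 1]);
    return rows[0][0];
}
With the small triangle from the problem statement, MaxPathTotal(new[] { new[] { 3 }, new[] { 7, 4 }, new[] { 2, 4, 6 }, new[] { 8, 5, 9, 3 } }) returns 23, and with the 15-row triangle it returns 1074.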
You've written a greedy algorithm, which I don't think fits the requirements here. Here's a quick example to demonstrate that point:
1
2 1
1 1 100
Using your algorithm you'll reach a sum of 4, although the optimal solution is 102.
This is a good question based on dynamic programming. You need to create a 2D data structure (like a vector in C++) and then follow the bottom-up approach of DP.
The recurrence is dp[i][j] += max(dp[i + 1][j], dp[i + 1][j + 1]). Try coding it on your own; if you get stuck at some point, see my solution.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int n;                               // number of rows in the triangle
    cin >> n;
    vector< vector<int> > dp(n);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= i; j++) {
            int val;
            cin >> val;
            dp[i].push_back(val);
        }
    }
    // Bottom-up: fold each row into the one above it.
    for (int i = n - 2; i >= 0; i--) {
        for (int j = 0; j <= i; j++)
            dp[i][j] += max(dp[i + 1][j], dp[i + 1][j + 1]);
    }
    cout << dp[0][0] << endl;
    return 0;
}
Input:
3
2
4 5
6 8 9
Output:
16
Recursive (not necessarily the best) approach:
static int q18(){
    int matrix[][] = { /* BIG MATRIX */ };
    return getMaxPath(matrix, 0, 0, 0);
}

static int getMaxPath(int matrix[][], int sum, int row, int col){
    if(row == matrix.length) return sum;
    return Math.max(getMaxPath(matrix, sum + matrix[row][col], row + 1, col),
                    getMaxPath(matrix, sum + matrix[row][col], row + 1, col + 1));
}
