I tried using the BigInteger implementation for Unity, but it still overflows (or maybe I used it the wrong way? I'm not sure; it also seems to only accept about 20 digits, which is roughly 64 bits).... This is how my code works: I have a hexadecimal string that is 64 characters long, and to do arithmetic with it I want to first convert it into decimal form and store it in a variable.
public BigInteger x = 0;
and then here is where it overflows... HexToDecimal is a function that takes a hexadecimal string and returns its decimal form.
x = HexToDecimal(hex);
a sample output of HexToDecimal is
105627842363267744400190144423808258002852957479547731009248450467191077417570
that's the ideal size of a number I want to store.
It works if I use very small numbers, like hundreds of thousands, but BigInteger seems to limit me to about 20 digits. I found where the limit is by declaring a variable like this:
public BigInteger x = 10000000000000000000;
When I add another "0" there, it throws an error stating that the integral constant is too large.
The way you've instantiated your BigInteger is a convenience method for smaller numbers (see "instantiating a BigInteger" -- I can't seem to link to it directly). This means that you're actually creating an int32 or an int64 and then converting it to a BigInteger (so it has to be able to fit into the limited size of those types).
To truly take advantage of BigInteger's arbitrary size, you probably want to use BigInteger.Parse(String). This method returns a BigInteger for a numeric string (and it must be a numeric string as defined by the current system culture -- nothing else except a possible leading negative sign). This method should work perfectly fine in Unity, as it's part of the C# standard library.
So, for your HexToDecimal example, assuming it returns a string, you'd use it like
x = BigInteger.Parse(HexToDecimal(hex));
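For completeness, here is a minimal, self-contained sketch of that approach (the 64-character hex string is a placeholder, and HexToDecimal is assumed to return the decimal digits as a string). As a side note, BigInteger.Parse can also read the hex string directly via NumberStyles.HexNumber; prefixing a "0" stops the top bit from being read as a sign bit, which would let you skip the hex-to-decimal conversion entirely.
using System;
using System.Globalization;
using System.Numerics;

class BigIntegerDemo
{
    static void Main()
    {
        // Route 1: parse the decimal string produced by a HexToDecimal-style helper.
        string dec = "105627842363267744400190144423808258002852957479547731009248450467191077417570";
        BigInteger x = BigInteger.Parse(dec);

        // Route 2: parse the 64-character hex string directly.
        // The leading "0" keeps the result positive (HexNumber treats the top bit as a sign bit).
        string hex = new string('A', 64);   // placeholder input (assumption)
        BigInteger y = BigInteger.Parse("0" + hex, NumberStyles.HexNumber);

        Console.WriteLine(x);
        Console.WriteLine(y);
    }
}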
I am programming a messaging app that converts strings to and from Unicode in order to later encrypt those strings.
Example from my code:
g = g + Char.ConvertFromUtf32(Convert.ToInt32(d));
This line works just fine, but it only accepts an Int32 as input. That's a problem because, depending on user input, the conversion to Int32 will sometimes fail due to the size limitations of Int32.
One solution I see is to limit user input, but that would compromise message security which I would rather avoid.
Any ideas on how to solve my problem?
The method Char.ConvertFromUtf32(Int32) does not convert just any Int32 into a string representing the Unicode code point, but only values in the defined Unicode range:
Exceptions:
ArgumentOutOfRangeException -- utf32 is not a valid 21-bit Unicode code point ranging from U+0 through U+10FFFF, excluding the surrogate pair range from U+D800 through U+DFFF.
Also, it's not clear what d is, and where it comes from.
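If d can hold values outside that range, one option is to validate before converting. A minimal sketch, assuming d is a numeric string (the question doesn't say what it actually is):
int codePoint;
if (int.TryParse(d, out codePoint)
    && codePoint >= 0 && codePoint <= 0x10FFFF
    && (codePoint < 0xD800 || codePoint > 0xDFFF))
{
    g = g + Char.ConvertFromUtf32(codePoint);
}
else
{
    // Outside the Unicode code point range: this value cannot become a single
    // character, so it needs different handling (e.g. split it into smaller values).
}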
I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knew the cause. Note that fixing the issue is easy enough. I just can't figure out why it is happening in the first place.
I have a WinForms program written in VB.NET that displays a subset of data. It contains a few labels that show numeric values (the .Text property of each label is assigned directly from a Decimal value). These numbers are returned by a DLL I wrote in C#. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals), then returns that object back to the WinForm program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening that could modify these properties.
So, the short version is:
WinForm requests a new Foo from the DLL.
DLL creates object Foo.
DLL calls webservice, which returns SomeOtherFoo.
//Both Foo.Bar1 and Foo.Bar2 are decimals
Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); //SomeOtherFoo.Bar1 is a string equal to "2.9000"
Foo.Bar2 = SomeOtherFoo.Bar2; //SomeOtherFoo.Bar2 is a decimal equal to 2.9D
DLL returns Foo to WinForm.
WinForm.lblMockLabelName1.Text = Foo.Bar1 //Inspecting Foo.Bar1 indicates my value is 2.9D
WinForm.lblMockLabelName2.Text = Foo.Bar2 //Inspecting Foo.Bar2 also indicates I'm 2.9D
So, what's the quirk?
WinForm.lblMockLabelName1.Text displays as "2.9000", whereas WinForm.lblMockLabelName2.Text displays as "2.9".
Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that decimal.Parse(someDecimalString).ToString() would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite...how to keep the formatting from the initial parsing).
At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place.
It's because the scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.
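A quick way to see this behaviour is to compare a parsed value with a literal of the same value (a minimal sketch, not from the original post; needs using System and System.Globalization):
decimal fromString  = decimal.Parse("2.9000", CultureInfo.InvariantCulture);
decimal fromLiteral = 2.9m;

Console.WriteLine(fromString);                  // "2.9000" -- Parse kept the scale of 4
Console.WriteLine(fromLiteral);                 // "2.9"
Console.WriteLine(fromString == fromLiteral);   // True -- same value, different internal scale
Console.WriteLine(fromString.ToString("0.#"));  // "2.9" -- a format string drops the trailing zeros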
I'm working in C#. I have an unsigned 32-bit integer i that is incremented gradually in response to an outside, user-controlled event. The number is displayed in hexadecimal as a unique ID that the user can enter and look up later. I need i to display a very different 8-character string if it is incremented, or if two integers are otherwise close together in value (say, distance < 256). So for example, if i = 5 and j = 6 then:
string a = Encoded(i); // = "AF293E5B"
string b = Encoded(j); // = "CD2429A4"
The limitations on this are:
I don't want an obvious pattern in how the string changes in each increment.
The process needs to be reversible, so if given the string I can generate the original number.
Each generated string needs to be unique for the entire range of a 32-bit unsigned integers, so that two numbers don't ever produce the same string.
The algorithm to produce the string should be fairly easy to implement and maintain for both encoding and decoding (maybe 30 lines each or less).
However:
The algorithm does not need to be cryptographically secure. The goal is obfuscation more than encryption. The number itself is not secret, it just needs to not obviously be an incrementing number.
It is alright if looking at a large list of incremented numbers a human can discern a pattern in how the strings are changing. I just don't want it to be obvious if they are "close".
I recognize that a Minimal Perfect Hash Function meets these requirements, but I haven't been able to find one that will do what I need or learn how to derive one that will.
I have seen this question, and while it is along similar lines, I believe my question is more specific and precise in its requirements. The answer given for that question (as of this writing) references 3 links for possible implementations, but not being familiar with Ruby I'm not sure how to get at the code for the "obfuscate_id" (first link), Skipjack feels like overkill for what I need (2nd link), and Base64 does not use the character set I'm interested in (hex).
y = p * x mod q is reversible if p and q are coprime. In particular, mod 2^32 is easy, and any odd number is coprime with 2^32. With p = 17 the outputs 17, 34, 51, ... are a bit too easy to spot, but the pattern is far less obvious for 2^31 < p < 2^32 - 2^30 (0x80000001 through 0xBFFFFFFF).
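A sketch of that idea in C# (the constant P below is an arbitrary odd number from the suggested range, not a prescribed value; its inverse is computed with Newton's iteration, which doubles the number of correct low bits on each round):
using System;
using System.Globalization;

static class ObfuscatedId
{
    const uint P = 0x9E3779B1;                 // any odd constant works; odd means coprime with 2^32
    static readonly uint PInv = ModInverse(P); // multiplicative inverse of P mod 2^32

    static uint ModInverse(uint a)
    {
        uint x = a;                            // correct to 3 bits, since a*a == 1 (mod 8) for odd a
        for (int i = 0; i < 5; i++)
            x = unchecked(x * (2u - a * x));   // correct bits double each pass: 3 -> 6 -> 12 -> 24 -> 48
        return x;
    }

    public static string Encode(uint i)
    {
        return unchecked(P * i).ToString("X8");
    }

    public static uint Decode(string s)
    {
        return unchecked(PInv * uint.Parse(s, NumberStyles.HexNumber));
    }
}
Encode(5) and Encode(6) then produce unrelated-looking 8-character hex strings, and Decode(Encode(i)) returns i for every uint because P * PInv == 1 (mod 2^32).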
I'm trying to read a bunch of unsigned integers from a configuration file into a class. These numbers may be specified in either base 10 (e.g. 1234) or base 16 (e.g. 0xAB31). I'm therefore looking for the strtoul equivalent in C# 2.0.
More specifically, I'm interested in a C# function which mimics the behaviour of this function when the argument indicating the base or radix is passed in as zero. (Under C++, strtoul will attempt to 'guess' the base or radix based on the first couple of characters in the string and then proceed to convert the number accordingly.)
Currently I'm manually checking the first two characters of the string (using the string.Substring() method) and then calling Convert.ToUInt32(hex, 10) or Convert.ToUInt32(hex, 16) as needed.
I'm sure there has to be a better way to deal with this problem, hence this post. More elegant ideas, solutions, or work-arounds would be a great help.
Well, you don't need to use Substring unless it's in hex, but it sounds like you're basically doing it the right way:
return text.StartsWith("0x") ? Convert.ToUInt32(text.Substring(2), 16)
: Convert.ToUInt32(text, 10);
Obviously this will create an extra object for the Substring call, and you could write your own hex parsing code to cope with this - but unless you've actually run into performance problems with this approach, I'd keep it simple.
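Wrapped up as a helper, that might look like this (a small sketch; the case-insensitive check, so that an uppercase "0X" prefix also works, is an addition beyond the snippet above):
static uint ParseUInt32Auto(string text)
{
    // Mimics strtoul with base 0: a "0x"/"0X" prefix selects hex, otherwise decimal.
    return text.StartsWith("0x", StringComparison.OrdinalIgnoreCase)
        ? Convert.ToUInt32(text.Substring(2), 16)
        : Convert.ToUInt32(text, 10);
}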
I am working on a project in which I need to store the user's key in the initial configuration of a machine; I want to write it in C#.
I have an initial configuration which consists of two numbers, R and X0, with R = 3.9988 and X0 = 0.5. I want to add the user key to these numbers. For example:
Key: hos110 =>
R = 3.9988104111115049049048
X0 = 0.5104111115049049048
104111115049049048 is the concatenation of the ASCII codes of the key's characters.
How can I store these numbers?
Is there a better method for doing this?
Update: How about MATLAB?
You're not really "adding" numbers. You are concatenating strings.
Store them as strings. You can't get much more precise than that.
If you need to perform any arithmetic operations, it is easy enough to convert them to a decimal number on the fly.
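A minimal sketch of that idea (the helper name is made up for illustration; needs using System.Text and System.Globalization):
// Append the key's ASCII codes to the base value, all as strings.
static string AppendKey(string baseValue, string key)
{
    StringBuilder sb = new StringBuilder(baseValue);
    foreach (char c in key)
        sb.Append((int)c);                       // 'h' -> 104, 'o' -> 111, 's' -> 115, ...
    return sb.ToString();
}

string r  = AppendKey("3.9988", "hos110");       // "3.9988104111115049049048"
string x0 = AppendKey("0.5",    "hos110");       // "0.5104111115049049048"

// Convert on the fly only when arithmetic is actually needed; both values fit
// within decimal's 28-29 significant digits.
decimal rValue = decimal.Parse(r, CultureInfo.InvariantCulture);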
I don't really follow why you're using a key as part of a number, but leaving that aside... System.Decimal (aka decimal) seems like the right tool for the job here.
If you need arbitrary precision you need something called BigInteger. However, these classes are usually only used for scientific calculations (and are usually unsuited for storing data), which doesn't really seem to match your code sample. If you only need to do general calculations, use strings and then convert them to Decimal for the calculations.
However, if you are looking for such a BigInteger class, you can find one here.
.NET 4.0 will have a built-in BigInteger class in the class libraries: System.Numerics.BigInteger.
Well, depending on the precision you are trying to achieve, you can probably save these as a pair of decimal values.
However, if this is an ASCII code, you may just want to save these as strings directly. This will avoid the numerical precision issues, especially if you're going to pull the 104111... part back off prior to using this information.
It seems that you are storing a "key", so why not use a String then?
Floating point numbers are inherently imprecise. I'm not sure what this 'initial configuration' is or why it's a float, but you're not going to be able to tack on a 'user key' (whatever that may be) and recover it later. Store the user key separately, in a string or something.
If these 'numbers' have no numeric value, i.e. you will not use them for mathematical computation, then there is no need to store them in a numeric datatype. You can store them as strings.