Storing an int in SQL but keeping leading zeros - C#

I have a field in my SQL Server 2012 table defined as int, but when I get the value from a textbox in C# using Convert.ToInt32(textBox.Text), the conversion drops leading zeros. For example, if the textbox contains the number 0032, it is saved to the database as 32, removing the 00.
Any solutions other than changing the field's data type?

Numeric data types do not retain leading zeros, as they are insignificant to the number being stored. char or varchar is more appropriate; you could add a constraint to ensure only numeric characters are stored.
If you absolutely cannot change the data type, another alternative is to store the number of leading zeros in a second int field.
So in your example you would store:
Value : 32
Leading zeros : 2
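A minimal sketch of that two-field scheme; the method and variable names here are illustrative, not from the original question:

```csharp
using System;

class LeadingZeroDemo
{
    // Split the textbox input into the numeric value and a leading-zero count.
    static (int Value, int LeadingZeros) Split(string input)
    {
        int leadingZeros = input.Length - input.TrimStart('0').Length;
        return (int.Parse(input), leadingZeros);
    }

    // Rebuild the original display string from the two stored fields.
    // The value-zero case is special: "0000" counts 4 leading zeros and stores 0.
    static string Rebuild(int value, int leadingZeros)
        => value == 0 ? new string('0', leadingZeros)
                      : new string('0', leadingZeros) + value;

    static void Main()
    {
        var (value, zeros) = Split("0032");
        Console.WriteLine($"{value} / {zeros}");  // 32 / 2
        Console.WriteLine(Rebuild(value, zeros)); // 0032
    }
}
```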

So save to the db in a numeric format, ignoring the leading zeros (as the others have mentioned), and then format it like this example:
int i = 32;
string format = "0000";
Console.WriteLine(i.ToString(format)); // prints "0032"

A data type is defined by a set of possible values and the operations that can be done on them.
So the question here is: what operations will you perform on those values? Adding, multiplying...?
If the answer is 'no', change the column's type to varchar() or char() and store the value as-is, with the leading zeros.
If it's 'yes', store a proper number and leave the formatting to the client.
In any case, always try to use a proper data type in the database; domain integrity is a nice thing to have.


Which data type should I use to handle 9-digit account numbers and why?
varchar(9), int, decimal, or something else?
I'm talking from a database perspective — and the DBMS is Informix.
TL;DR Use CHAR(9).
You have a number of options, most of them mentioned in the comments. The options have different trade-offs. They include:
CHAR(9). This uses 9 bytes of storage, but can store leading zeros and that can save on formatting in the applications. You can write a check constraint that ensures that the value always contains 9 digits. If you later need to use longer numbers, you can extend the type easily to CHAR(13) or CHAR(16) or whatever.
INTEGER. This uses 4 bytes of storage. If you need leading zeros, you will have to format them yourself. If you later need more digits, you will need to change the type to BIGINT.
SERIAL. This could be used on one table and would automatically generate new values when you insert a zero into the column. Cross-referencing tables would use the INTEGER type.
DECIMAL(9,0). This uses 5 bytes of storage, and does not store leading zeros so you will have to format them yourself. If you later need more digits, you can change the type to DECIMAL(13,0) or DECIMAL(16,0) or whatever.
BIGINT and BIGSERIAL. These are 8-byte integers that can take you to 16 digits without problem. You have to provide leading zeros yourself.
INT8 and SERIAL8 — do not use these types.
VARCHAR(9). Not really appropriate since the length is not variable. It would require 10 bytes on disk where 9 is sufficient.
LVARCHAR(9). This is even less appropriate than VARCHAR(9).
NCHAR(9). This could be used as essentially equivalent to CHAR(9), but if you're only going to store digits, you may as well use CHAR(9).
NVARCHAR(9). Not appropriate for the same reasons that VARCHAR(9) and NCHAR(9) are not appropriate.
MONEY(9,0). Basically equivalent to DECIMAL(9,0) but might attract currency symbols — it would be better to use DECIMAL(9,0).
Any other type is rather quickly inappropriate, unless you design an extended type that uses INTEGER for storage but provides a conversion function to CHAR(9) that adds the leading zeros.
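If you go with INTEGER (or DECIMAL) and format client-side, the padding itself is a one-liner; a sketch in C#, for example:

```csharp
using System;

class AccountFormat
{
    static void Main()
    {
        int account = 12345;
        // "D9" pads an integer with leading zeros to 9 digits.
        Console.WriteLine(account.ToString("D9")); // 000012345
    }
}
```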

Is it possible to keep original trailing zeroes from C# decimal type when saving in SQL Server?

I would like to find a simple way to keep trailing zeroes from C# decimal type when saving in SQL Server.
Example:
Enter 5.3 and save: the system should display 5.3 after reloading.
Enter 05.30 and save: the system should display 5.30.
Enter 5.300 and save: the system should display 5.300 after reloading.
The C# decimal type seems to handle this well, but the SQL Server decimal type does not.
For example, if I define the SQL Server column as decimal(9,3), all three values are saved as 5.300.
Of course, I could convert to string but I just wonder if there is any more elegant solution if any computing is needed on this field.
I think it is not a good idea to mix the DB and UI layers. How SQL stores the data is the DB's problem; how to show it to the user is a UI problem.
C# stores Decimal in a base-10 format with an explicit scale:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
Internal representation:
1m : 0x00000001 0x00000000 0x00000000 0x00000000
1.00000m : 0x000186a0 0x00000000 0x00000000 0x00050000
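You can check this representation yourself with decimal.GetBits: the scale lives in bits 16-23 of the fourth element. A small sketch:

```csharp
using System;

class DecimalScale
{
    // Extract the scale (digits after the decimal point) from the
    // 128-bit decimal representation returned by decimal.GetBits.
    static int Scale(decimal d) => (decimal.GetBits(d)[3] >> 16) & 0xFF;

    static void Main()
    {
        Console.WriteLine(decimal.GetBits(1.00000m)[0]); // 100000 (0x000186a0)
        Console.WriteLine(Scale(1m));                    // 0
        Console.WriteLine(Scale(1.00000m));              // 5
    }
}
```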
AFAIK, the internal representation of decimal in MSSQL is not documented, and if they are using the IEEE 754 floating-point format (http://en.wikipedia.org/wiki/IEEE_754-1985), then it is impossible.
However, the decimal type in MSSQL does have precision and scale parameters. One can try to use ADO.NET to manipulate these parameters in code like this:
var cmd = new SqlCommand("command", new SqlConnection("connection"));
cmd.Parameters.Add("@p1", SqlDbType.Decimal, 18);
cmd.Parameters["@p1"].Precision = 18;
cmd.Parameters["@p1"].Scale = 8;
Then it might be possible, but it is a really hacky method, and you should not use it in production.
If you want the number of significant figures (or precision, if that's all you're interested in) for a numeric value to vary on a row-by-row basis, you'll need a separate column to store that number, since a single numeric column isn't going to store that information.
If the number of significant figures (or precision) is consistent for all of the rows, then you can simply store the data in the database with as much precision as the database supports and then convert it back to what it should be within your application before presenting it to the user.
You can define the column type in SQL Server as sql_variant.
When you set a C# decimal value on that column via SqlParameter, SQL Server keeps the value's metadata, including its scale.

Database to DataGridView floating point error or non-trimmed trailing zeros

My current project requires displaying some numbers in a DataGridView column in a Windows Forms front end. The numbers can then be edited and are sent back to the database to be updated. All values are between 0 and 0.5. For the user's sake, I want all trailing zeros removed, but accuracy is also important, so a value of 0.123456789 should be stored at full precision.
I had been using a SQL float to store the numbers, passing them to doubles in C#. This produced the correct output, with trailing zeros removed (e.g. 0.2, 0.123, 0.432105). The problem was that some values were being passed inaccurately (e.g. 0.208 was returned as 0.2080000002) due to floating-point error.
To solve this, I changed to decimal data types in both the database and the front end. However, all values are now displayed to the full number of defined decimal places (e.g. 0.200000, 0.123000, 0.432105).
The possible solutions I can see are:
Removing trailing zeros from the decimal in the DataGridView.
Though,
dataGridView.Columns[0].DefaultCellStyle.Format = "0.#"
doesn't appear to work.
Passing the number accurately to a C# double, which would then automatically display the number in my preferred format.
However, I am unable to achieve either of these.
Is anyone able to assist me with this problem?
// Note: TrimEnd('0') is safe here only because every value contains a decimal
// point; applied to a whole number such as "200" it would strip significant zeros.
private char[] _noZero = new char[] {'0'};
textBox1.Text = l.fieldOfDecimalType.ToString().TrimEnd(_noZero);
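An alternative that avoids string trimming altogether is decimal's "G29" format, which drops trailing zeros without the whole-number pitfall of TrimEnd('0'); a small sketch:

```csharp
using System;
using System.Globalization;

class TrimZeros
{
    static void Main()
    {
        // "G29" renders a decimal with its trailing zeros removed.
        Console.WriteLine(0.208000m.ToString("G29", CultureInfo.InvariantCulture)); // 0.208
        Console.WriteLine(0.500m.ToString("G29", CultureInfo.InvariantCulture));    // 0.5
        // TrimEnd('0') would turn "200" into "2"; "G29" leaves it alone.
        Console.WriteLine(200m.ToString("G29", CultureInfo.InvariantCulture));      // 200
    }
}
```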

I want to store a large chunk of data in true/false format using bits

Consider an example where I have many types (types = some sections). For each type there are multiple values, and out of the available values only a few are actually useful.
Each type will store 30 values. Not all 30 values are applicable, but I need to store them in 1/0 format. Consuming a whole byte per value is also too costly here.
Please guide me on this.
Consider using the BitArray class.
You can define either an int column (if you have 32 or fewer values) or a bigint column (long in C#, if you have 64 or fewer values) alongside each type, and then treat each bit of the column as one value of the type.
For example, suppose each type can have the values Physics, Maths, Chemistry, English and others, up to 32. Now we have a type "Class" which has only three values: Physics, Maths and English; the rest are not applicable. So the stored value would be 0000000000000000000000000001011 (binary) = 11.
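One idiomatic way to do this in C# is a [Flags] enum mapped onto the int column; the subject names follow the example above (bits 0, 1 and 3 give binary 1011 = 11):

```csharp
using System;

[Flags]
enum Subjects
{
    None      = 0,
    Physics   = 1 << 0, // bit 0
    Maths     = 1 << 1, // bit 1
    Chemistry = 1 << 2, // bit 2
    English   = 1 << 3  // bit 3
}

class FlagsDemo
{
    static void Main()
    {
        // "Class" offers Physics, Maths and English: bits 0, 1 and 3.
        var cls = Subjects.Physics | Subjects.Maths | Subjects.English;
        Console.WriteLine((int)cls);                        // 11 (binary 1011)
        Console.WriteLine(cls.HasFlag(Subjects.English));   // True
        Console.WriteLine(cls.HasFlag(Subjects.Chemistry)); // False
    }
}
```

The int value of the combined flags is what gets stored in the database column; membership tests on the way back are single bitwise operations.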

What type should I use to store a 12-digit value in a SQL DB: decimal or nvarchar?

I need to store a card ID number in the database. There is no calculation, just a search for the ID and putting the value into the Session as a property of a class.
The ID is always numeric and is 12 digits long,
e.g. 123456789012, and I would like to show it on screen in this format: 123.456.789.012 (a dot every 3 digits).
I tried a test and defined Decimal(12,0) in database and I have put this value in database: 555666777888
then I try to display on the screen I used this code (CardID is decimal):
lblCardID.Text = ent.CardID.ToString("0:#,###")
but it shows on the screen like this: 555,666,77:7,888
where is the colon (:) coming from?
question additional:
- What type should I use in MS SQL to store this value in the database: decimal(12,0) or nvarchar(12)?
nvarchar is definitely not needed. If it's always 12 digits, char(12) would be fine, but I think a 64-bit integer would be most appropriate.
Try writing
lblCardID.Text = ent.CardID.ToString("#,###")
You can use decimal(12,0) or the bigint data type. bigint requires one byte less (8 bytes total) per stored value.
The colon is coming from the colon in your format string. The "0:" at the beginning of the format string is needed when you are using string.Format(), as a placeholder to identify which of the arguments to format, but not if you are using ToString() (since there's only one value being formatted).
I would use bigint because it needs only 8 bytes per value.
decimal(12,0) needs 9 bytes and varchar or nvarchar even more (12 or 24 bytes respectively in case of storing 12 digits).
A smaller column size makes indexes smaller, which makes them faster in use.
Formatting numbers can be done in application.
It's also much easier to change formatting in app in case of requirements change.
If you need to store the formatting and it's just a numeric value, use varchar. Don't waste space with nvarchar: it doubles your storage size and won't do you any good unless you expect special (international) characters.
If it's never going to be calculated on, I would store it as char(12).
Then in your code, split it with something like this and use the replace function to convert commas to dots:
lblCardID.Text = ent.CardID.ToString("#,###").Replace(",", ".")
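The Replace call works, but it silently depends on the current culture using ',' as the group separator. A culture-proof variant sets the separator explicitly via NumberFormatInfo; a sketch:

```csharp
using System;
using System.Globalization;

class CardIdFormat
{
    static void Main()
    {
        long cardId = 555666777888;
        // Use '.' as the group separator directly instead of replacing ','.
        var nfi = new NumberFormatInfo { NumberGroupSeparator = "." };
        Console.WriteLine(cardId.ToString("#,###", nfi)); // 555.666.777.888
    }
}
```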
If it's an ID number, store it as a string data type: you're not going to be doing sums on it, and you also won't have problems losing leading zeros. You could even store the card ID with the embedded dots, sorting out your formatting problems.
Does your identifier's domain have mathematical properties other than being composed of digits? If not, your value is fixed-width, so use CHAR(12). Do not forget to add appropriate domain checks (no characters other than digits, no leading zero, etc.), e.g.
CREATE TABLE Cards
(
    card_ID CHAR(12) NOT NULL UNIQUE,
    CONSTRAINT card_ID__all_digits
        CHECK (card_ID NOT LIKE '%[^0-9]%'),
    CONSTRAINT card_ID__no_leading_zero
        CHECK (card_ID NOT LIKE '0%')
);
