I have this enum:
public enum SupportedISOCurrencySymbol { DKK = 208, EUR = 978, NOK = 578, SEK = 752 }
I save the value for an order in the Currency field of the approved_orders table.
I populate the SQL insert query with a parameter like this:
cmd.Parameters.AddWithValue("?Currency", this.Currency);
If I debug on the above line, I can clearly see that this.Currency has the value DKK.
Why, then, does it insert 208 into the DB?
Do you have any idea?
Thanks.
SupportedISOCurrencySymbol is an enum with an integer underlying type; "DKK" represents the number 208.
What you are seeing in the debug window is the value of .ToString() on your enum, which is defined to return the name of the enum, not the value. If you want to send the string "DKK" to your database, you could simply state it explicitly:
cmd.Parameters.AddWithValue("?Currency", this.Currency.ToString());
Of course, the columns involved in the database would then need to be of a compatible text type.
"DKK" is a friendly name for the value "208" for you (as programmer) to use in your code.
When the code is compiled all occurrences of "DKK" are replaced by "208" which will include the code that inserts the value into the database.
You could change your database schema to hold a string rather than an integer and store the name of the enum rather than it's value - but you would need to parse the string into an enum when reading it back out of the database into your code.
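To make that round trip concrete, here is a minimal sketch under the same assumptions as above (a text Currency column, and the cmd / reader objects coming from your existing data-access code):
// Writing: store the enum name ("DKK") rather than its number (208).
cmd.Parameters.AddWithValue("?Currency", this.Currency.ToString());

// Reading back: parse the stored name into the enum again.
string stored = reader.GetString(reader.GetOrdinal("Currency"));
var parsed = (SupportedISOCurrencySymbol)Enum.Parse(typeof(SupportedISOCurrencySymbol), stored);

// Or the non-throwing variant, if unexpected values are possible:
if (Enum.TryParse(stored, out SupportedISOCurrencySymbol parsedSafe))
{
    // use parsedSafe
}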
Do you need to add this.Currency.ToString() so as to get the text rather than the numeric value of the enum?
I have configured various entities in my DbContext class to convert the enum value to a string when saving to the database, e.g.
modelBuilder.Entity<TestObject>(builder =>
{
    builder.ToTable("TestObject", "test");
    builder.Property(x => x.MyEnum).HasConversion<string>();
});
However, all of these enum values are being saved as the numeric value behind the enum.
I've checked that the strings aren't being truncated by the max lengths on the columns in the database, but the enum string values are definitely all valid lengths.
At this point I'm not sure where else to check; any help is appreciated!
The issue was that I was using a stored procedure for the insert and didn't realise I was passing the enum value to the SQL parameter as MyTestEnum.TestValue.
Passing in MyTestEnum.TestValue.ToString() as the SQL parameter fixed the issue.
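For anyone hitting the same thing: the conversion configured with HasConversion<string>() only applies to SQL that EF Core generates itself; it does not touch parameters you build by hand for a stored procedure call. A minimal sketch of the difference (the procedure and parameter names here are made up for illustration):
using System.Data;
using System.Data.SqlClient;

// connection is an open SqlConnection from elsewhere.
var cmd = new SqlCommand("test.InsertTestObject", connection)
{
    CommandType = CommandType.StoredProcedure
};

// Wrong: the enum is boxed and sent as its underlying integer value.
// cmd.Parameters.AddWithValue("@MyEnum", MyTestEnum.TestValue);

// Right: convert to the enum name explicitly before handing it to ADO.NET.
cmd.Parameters.AddWithValue("@MyEnum", MyTestEnum.TestValue.ToString());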
I have a .NET Core application which communicates with a Postgres database using Entity Framework.
There was an enum used in the model which created a Postgres enum type in Postgres.
Now I need a migration that will change the enum into a flags enum, so I need to change the enum's integer values in C#.
I feel I should also change the underlying enum values in Postgres, but I don't see any numbers there; I only see the enum names in the type definition.
How does Postgres store enums internally?
How can I migrate it if the integer values of the enum change in C#?
I am not aware of how Postgres stores enums internally and prefer not to rely on it.
Assuming that your enum type is called _enumtype and the column is called _enumcolumn, you can alter the table structure (enum -> integer) and keep the data in it like this:
alter table _mytable
alter column _enumcolumn type integer
using array_position(enum_range(null::_enumtype), _enumcolumn) - 1;
The enum labels will be replaced with 0, 1, 2, ... respectively.
So you are free to make whatever changes and updates you need after that.
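If you want to run that conversion from an EF Core migration rather than by hand, something like the following sketch should do it; the table, column, and type names are the placeholders from above, and the DROP TYPE step is optional and assumes nothing else still uses the enum type:
using Microsoft.EntityFrameworkCore.Migrations;

public partial class ConvertEnumColumnToInteger : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Replace each enum label with its zero-based position in the type.
        migrationBuilder.Sql(@"
alter table _mytable
alter column _enumcolumn type integer
using array_position(enum_range(null::_enumtype), _enumcolumn) - 1;");

        // Optional: once no column references it, the enum type can be dropped.
        migrationBuilder.Sql("drop type _enumtype;");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Reversing this would require recreating the enum type and mapping
        // the integers back to labels; omitted here.
    }
}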
This probably does not answer your question directly, but it is useful for understanding what happens under the hood. Postgres stores enums with both numeric and string components. You use the string component for all references, both when setting and testing; referencing the numeric component generates an exception. Internally, Postgres uses the numeric component for sorting (and perhaps other uses I am not aware of). This numeric component is (fyi) stored as a floating point value. Adding value(s) to an enum is a simple ALTER TYPE. Removing them cannot be done directly - it involves deleting and recreating the type, along with managing all dependencies. You can see the component values, both numeric and string, with the following query (see Demo):
select * from pg_catalog.pg_enum;
That gives you all values for all enums; unfortunately, getting a specific one is more complicated.
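If you do need a single enum, here is a sketch of one way to narrow it down from C# with Npgsql (the connection string and 'my_enum_type' are placeholders):
using System;
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret");
conn.Open();

using var cmd = new NpgsqlCommand(@"
select e.enumsortorder, e.enumlabel
from pg_catalog.pg_enum e
join pg_catalog.pg_type t on t.oid = e.enumtypid
where t.typname = @typname
order by e.enumsortorder;", conn);
cmd.Parameters.AddWithValue("typname", "my_enum_type");

using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    // enumsortorder is the internal (floating point) sort value, enumlabel the string label.
    Console.WriteLine($"{reader.GetFloat(0)}: {reader.GetString(1)}");
}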
I have a field in a SQLite database, we'll call it field1, which I'm trying to iterate over record by record (there are over a thousand records). The field type is string. The values of field1 in the first four rows are as follows:
DEPARTMENT
09:40:24
PARAM
350297
Here is some simple code I use to iterate over each row and display the value:
while (sqlite_datareader.Read())
{
    strVal = sqlite_datareader.GetString(0);
    Console.WriteLine(strVal);
}
The first 3 values display correctly. However, when it gets to the numerical entry 350297, it errors out with the following exception on the .GetString() method:
An unhandled exception of type 'System.InvalidCastException' occurred in System.Data.SQLite.dll
I've tried casting to a string, and a bunch of other stuff, but I can't get to the bottom of why this is happening. For now, I'm forced to use GetValue, which returns an object, and then convert back to a string. But I'd like to figure out why GetString() isn't working here.
Any ideas?
EDIT: Here's how I currently deal with the problem:
object objVal; // This is declared before the loop starts...
objVal = sqlite_datareader.IsDBNull(i) ? "" : sqlite_datareader.GetValue(i);
if (!Equals(objVal, ""))
{
    strVal = objVal.ToString(); // ToString() works for any returned type, unlike a direct (string) cast
}
What the question should have included is:
The table schema, preferably the CREATE TABLE statement used to define the table.
The SQL statement used in opening the sqlite_datareader.
Any time you're dealing with data type issues from a database, it is prudent to include such information. Otherwise there is much unnecessary guessing and floundering (as is apparent in the comments), when very useful, crucial information is explicitly defined in the schema DDL. The underlying query for getting the data is perhaps less critical, but it could very well be part of the issue if there are CAST statements and/or other expressions that might be affecting the returned types. If I were debugging the issue on my own system, these are the first things I would have checked!
The comments contain good discussion, but the best solution will come from understanding how SQLite handles data types, straight from the official docs. The key takeaway is that SQLite defines type affinities on a column and then stores actual values according to a limited set of storage classes. A type affinity is a type to which data will attempt to be converted before storing. But (from the docs) ...
The important idea here is that the type is recommended, not required. Any column can still store any type of data.
But now consider...
A column with TEXT affinity stores all data using storage classes NULL, TEXT or BLOB. If numerical data is inserted into a column with TEXT affinity it is converted into text form before being stored.
So even though values of any storage class can be stored in any column, the default behavior should have been to convert any numeric value, like 350297, to a string before storing it... if the column had been properly declared as a TEXT type.
But if you read carefully enough, you'll eventually come to the following at the end of section 3.1.1. Affinity Name Examples:
And the declared type of "STRING" has an affinity of NUMERIC, not TEXT.
So if the question details are taken literally and field1 was defined like field1 STRING, then technically it has NUMERIC affinity, and so a value like 350297 would have been stored as an integer, not a string. The behavior described in the question is then precisely what one would expect when retrieving data into a strictly-typed data model like System.Data.SQLite.
It is very easy to cuss at such an unintuitive design decision, and I won't defend the behavior, but
at least the results of "STRING" type are clearly stated so that the column can be redefined to TEXT in order to fix the problem, and
"STRING" is actually not a standard SQL data type. SQL strings are instead defined with TEXT, NTEXT, CHAR, NCHAR, VARCHAR, NVARCHAR, etc.
The solution is either to use the code as currently implemented: get all values as objects and then convert them to string values, which should be universally possible with .NET objects since they all have a ToString() method defined (a quick sketch follows).
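A minimal sketch of that first option, assuming the same System.Data.SQLite reader as in the question:
while (sqlite_datareader.Read())
{
    // Works regardless of the storage class (INTEGER, REAL, TEXT, ...):
    // take the value as an object and let .NET turn it into a string.
    string strVal = sqlite_datareader.IsDBNull(0)
        ? ""
        : Convert.ToString(sqlite_datareader.GetValue(0));
    Console.WriteLine(strVal);
}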
Or, redefine the column to have TEXT affinity like
CREATE TABLE myTable (
...
field1 TEXT,
...
)
Exactly how to redefine an existing column filled with data is another question altogether. However, at least when doing the conversion from the original column to the new column, remember to use CAST(field1 AS TEXT) to ensure the storage class is changed for the existing data. (I'm not certain whether type affinity is "enforced" when simply copying/inserting data from an existing table into another, or whether the original storage class is preserved by default; that's why I suggest the cast, to force it to a text value.)
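For what it's worth, one common way to do that rebuild from C# is sketched below; the table is reduced to just field1 here for illustration (a real rebuild would carry over all of the table's columns), and System.Data.SQLite should execute the batched statements in a single command:
using System.Data.SQLite;

using var conn = new SQLiteConnection("Data Source=mydatabase.db");
conn.Open();

var sql = @"
BEGIN TRANSACTION;
CREATE TABLE myTable_new (field1 TEXT);
INSERT INTO myTable_new (field1) SELECT CAST(field1 AS TEXT) FROM myTable;
DROP TABLE myTable;
ALTER TABLE myTable_new RENAME TO myTable;
COMMIT;";

using var cmd = new SQLiteCommand(sql, conn);
cmd.ExecuteNonQuery();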
I have a SQL table in a database which has a column with the bit datatype. I'm trying to define a method in my C# application which takes two of the columns from the table and uses them as parameters.
Here is the code
public void Edit_NAA_ApplicationsFirm(int ApplicationId, string UniversityOffer, bit Firm)
{
    // This line declares a variable in which we store the actual values from NAA_Applications based on the associated ID
    NAA_Applications AppFirm = Get_Applicant_Application(ApplicationId);
    // Here we tell the application that the values edited are equal to the values within the table
    AppFirm.Firm = Firm;
    // Any of these changes are then saved.
    _context.SaveChanges();
}
The only issue is that the program keeps trying to convert bit to BitConverter. When I change it to bit, it has issues accepting it as a datatype.
It's worth noting that I'm building the application in an ASP.NET Framework solution.
Could anyone tell me what it is I'm doing wrong? Am I just referring to the datatype wrong?
Looks like you're using Entity Framework.
It'll use a .NET bool to represent a T-SQL bit. That would be a sensible way to do it for any other data access method as well. A bool value of true will convert to 1 in the bit field, and false to 0.
In fact it's even documented, more than once, that this is the correct .NET CLR type to use. See https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/linq/sql-clr-type-mapping , https://msdn.microsoft.com/en-us/library/cc716729(v=vs.100).aspx and probably others.
So in your case bool Firm would be appropriate.
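For example, a minimal sketch of what the entity might look like (the exact shape of NAA_Applications is assumed from the question):
public class NAA_Applications
{
    public int ApplicationId { get; set; }
    public string UniversityOffer { get; set; }

    // Maps to the bit column: true is stored as 1, false as 0.
    public bool Firm { get; set; }
}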
The correct datatype for bit in C# is bool, so:
public void Edit_NAA_ApplicationsFirm(int ApplicationId, string UniversityOffer, bool Firm)
If I understand correctly, try using bool Firm.
In C#, the value 1 from the database is converted to true, and true from code is converted to 1 in the database.
I am decorating one of the DateTime properties in my class with a Representation = BsonType.Int64 attribute so that it gets stored in the database as an Int64 representation of the date.
When I stored that property as a normal C# DateTime and did not set the value to anything, it would store DateTime.MinValue in the database. That was fine, because I was reading from the database and doing a Query.LT operation on it like the following:
Query.LT("MyField", DateTime.Now)
And it used to return all the values fine.
Now that I have started storing it as BsonType.Int64, and the equivalent of DateTime.MinValue as an Int64 is 0, my Query.LT("MyField", DateTime.Now) fails on all the dates that are stored as DateTime.MinValue.
Any idea on how to solve this?
The problem is that, during the query, the MongoDB driver doesn't know that you chose an alternate representation.
Hence, you need to query for an Int64 explicitly:
Query.LT("MyField", DateTime.Now.Ticks));
This will work as expected (tested w/ MongoDB 2.1.1, C# driver 1.4.2.4500)
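For reference, here is a sketch of how the mapping and the query line up, assuming the BsonDateTimeOptions attribute is what the question refers to and using the legacy MongoDB.Driver.Builders API from the answer (the class name and collection variable are illustrative):
using System;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver.Builders;

public class MyDocument
{
    // Stored as ticks (an Int64) instead of a BSON date.
    [BsonDateTimeOptions(Representation = BsonType.Int64)]
    public DateTime MyField { get; set; }
}

// collection is assumed to be a MongoCollection<MyDocument> obtained elsewhere.
// Because MyField is stored as ticks, compare against ticks, not a DateTime:
var query = Query.LT("MyField", DateTime.Now.Ticks);
var results = collection.Find(query);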