C# Access SQL ADD COLUMN

I have a problem with an Access SQL query. My code is as follows:
string query = "ALTER TABLE Student ADD COLUMN Surname MEMO AFTER 'Name'";
Why does it always insert the column at the end of the table? Is there any method to insert a new column at a specific position?

First of all, I don't see any reason to add your column at a specific position. You can always choose whatever column order you want in a SELECT statement, for example.
Why does it always insert the column at the end of the table?
Because it is designed like that?
Is there a method to insert a new column at a specific position?
As far as I know, there is no way to do it without rebuilding your table.
From ALTER TABLE syntax for changing column order:
Today when you use ALTER TABLE ADD to add a column, the new column is
always placed last. This is often far from desirable.
Developers and database designers often want to keep some logic in the
column order, so that related columns are close to each other. A
standard rule we keep in the system I work with is to always have
auditing columns at the end. Furthermore many graphical design tools
encourage this kind of design, both bottom-end tools like the Table
Designer in SSMS as well as high-end data-modelling tools such as
Power Designer.
Today, if you want to maintain column order you have no choice but to
go the long way: create a new version of the table and copy over. It
takes time, and if not implemented correctly, things can go very
wrong.


Insert data manually in a safe way?

I have a trigger which needs to fill a table with hundreds of rows, I need to type every single insert manually (it is a kind of pre-config table).
This table has an Int FK to an Enum Table. The Enum Table uses an int as a PK and a varchar (which is UNIQUE).
While typing the insert statements I need to be very careful that the integer FK is the correct one.
I would rather like to insert the data by the varchar of the enum.
So I do something like this now:
INSERT INTO MyTable(ColorId)
VALUES(1)
And I would like to do something like this:
INSERT INTO MyTable(ColorStr)
VALUES('Red')
The reason why the Enum has an int PK is because of performance issues (fast queries), but I don't know if it is a good idea now. What do you think?
Is there a safe way to do it? Is it possible to insert data into a Table View?
Sure. Do not insert.
No joke.
First, you do not need one INSERT statement PER ROW - look at the syntax: a single INSERT statement can add many rows.
Second, nothing in the world says you cannot do processing (like a SELECT with a join) on the inserted data.
I generally use a table definition like this (with a MERGE statement) for all my static lookup data (like country lists): full automatic maintenance on every change, with inserts and updates happening on demand.
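The two points above - multi-row VALUES and resolving the enum's varchar to its int FK with an INSERT ... SELECT - can be sketched with SQLite via Python. The table and column names (`Color`, `MyTable`, `ColorId`) are assumptions based on the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical enum table: int PK plus a unique varchar name.
cur.execute("CREATE TABLE Color (Id INTEGER PRIMARY KEY, Name TEXT UNIQUE)")
cur.execute("CREATE TABLE MyTable (ColorId INTEGER REFERENCES Color(Id))")

# One INSERT statement carrying several rows.
cur.execute("INSERT INTO Color (Id, Name) VALUES (1, 'Red'), (2, 'Green'), (3, 'Blue')")

# Insert by the readable name: a SELECT resolves the int FK, so a typo
# in the name inserts zero rows instead of silently storing a wrong id.
cur.execute("INSERT INTO MyTable (ColorId) SELECT Id FROM Color WHERE Name = 'Red'")
conn.commit()

rows = cur.execute(
    "SELECT c.Name FROM MyTable t JOIN Color c ON c.Id = t.ColorId"
).fetchall()
print(rows)  # → [('Red',)]
```

The int PK stays in place for join performance; only the insert scripts are written against the readable name.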

How to specify a caption for a column in Oracle and read it from C# through ODP.Net?

With the ODP.Net package we fill a simple query result into a DataSet via Oracle.ManagedDataAccess.Client.OracleDataAdapter. I want to have a bit more description for the resulting columns. It would be nice if I could define a caption for columns in Oracle and get them in the resulting DataSet.
I found a way to add a comment on column in Oracle :
COMMENT ON COLUMN my_table.my_columns IS 'MY_CUSTOM_CAPTION'
but I don't know how we can get it.
On the other hand, I found two options (Caption & Extended properties) on the resulting DataSet which I guessed were what I was looking for, but it seems I was wrong.
Does anyone know a way to put a description or alternative caption on columns in an Oracle DB and read it in the application through ODP.Net?
I would recommend creating a view over your table like this:
CREATE OR REPLACE VIEW V_my_table AS
SELECT my_column AS "My Caption even with spaces"
FROM my_table;
You can also perform DML operations like DELETE, UPDATE or INSERT through such views.
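The effect of the view's column alias on the client side can be sketched with SQLite via Python: the caption arrives as the result column name, much as a data adapter would see it (table and view names follow the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (my_column TEXT)")
cur.execute("INSERT INTO my_table VALUES ('x')")

# The view renames the column to a human-readable caption.
cur.execute('''CREATE VIEW V_my_table AS
               SELECT my_column AS "My Caption even with spaces"
               FROM my_table''')

# The client sees the alias as the result column name.
cur.execute("SELECT * FROM V_my_table")
names = [d[0] for d in cur.description]
print(names)  # → ['My Caption even with spaces']
```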
I think there are two possible ways you can go:
Read the column comments from Oracle, as you described in your question.
Create your own table where you store the extra information.
In both cases you need to bring an extra table into your query. If you want to read the column comments, take a look at DBA_COL_COMMENTS. This view exposes all column comments defined in your Oracle database, so you can read them with SQL. But be careful not to fill column comments with extra information of this kind: your DBA and others will expect a column comment to describe what the column is for or what it stores, so don't write any other information into it.
The second approach is to build your own table where you store extra information for your columns. You just need three columns, for table, column and information, and you can join this table into your queries.
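The second approach can be sketched with SQLite via Python. The metadata table name (`column_captions`) and its layout are assumptions; the idea is to fetch the captions once and apply them to the result columns by name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (my_column TEXT)")
cur.execute("INSERT INTO my_table VALUES ('x')")

# Hypothetical metadata table: one row per (table, column) with its caption.
cur.execute("""CREATE TABLE column_captions (
                   table_name  TEXT,
                   column_name TEXT,
                   caption     TEXT,
                   PRIMARY KEY (table_name, column_name))""")
cur.execute("INSERT INTO column_captions VALUES ('my_table', 'my_column', 'MY_CUSTOM_CAPTION')")

# Fetch the captions once, then map the result column names through them.
captions = dict(cur.execute(
    "SELECT column_name, caption FROM column_captions WHERE table_name = 'my_table'"
).fetchall())
cur.execute("SELECT my_column FROM my_table")
headers = [captions.get(d[0], d[0]) for d in cur.description]
print(headers)  # → ['MY_CUSTOM_CAPTION']
```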

Update row data with a new value per row using fluentmigrator

I am using fluentmigrator to add a new column to a table. I then want to update each row in the table with a unique value for that column.
Currently when I use:
Update.Table("Foo").InSchema("dbo").Set(new { Bar = Bar.Generate() }).AllRows();
It gives the same value for all the rows.
How do I ensure it calls that method for each row?
I'm not sure what Bar.Generate does but I am guessing it creates a GUID or unique id.
If so then you could use:
Execute.Sql("update dbo.Foo set Bar = NEWID()");
Or if you want sequential guids then you could use NEWSEQUENTIALID().
If you are adding a new column for this unique identifier, then all you would need to do is specify the new column as .AsGuid()
EDIT: FluentMigrator is a small fluent DSL and is not meant to cover a complicated case like this. There is no way (as far as I know) to do this with one SQL UPDATE, and therefore no easy way to do it with FluentMigrator. You'll have to fetch the rows with ADO.NET or an ORM (Dapper/NHibernate) and then loop through each row, updating the Bar column with the custom unique identifier. So if you have one million rows, you will make one million SQL updates. If you can rewrite your Bar.Generate() method as an SQL function based on the NEWID() function, then you could do it as one UPDATE statement and call it with FluentMigrator's Execute.Sql method.
You haven't mentioned which database you are working with. But some like Postgres have non-standard features that could help you.
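The row-by-row approach described in the edit can be sketched with SQLite via Python, using `uuid.uuid4()` as a stand-in for `Bar.Generate()` (table and column names follow the question; the single-statement UPDATE is shown failing to produce distinct values only in the sense that a Python-side generator is evaluated once per statement, not per row):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Foo (Id INTEGER PRIMARY KEY, Bar TEXT)")
cur.executemany("INSERT INTO Foo (Bar) VALUES (?)", [(None,)] * 3)

# A single UPDATE binds the generated value once, so every row would get
# the same Bar. To call the generator per row, loop over the keys and
# update one row at a time (what a migration would do via ADO.NET/Dapper).
for (row_id,) in cur.execute("SELECT Id FROM Foo").fetchall():
    cur.execute("UPDATE Foo SET Bar = ? WHERE Id = ?",
                (str(uuid.uuid4()), row_id))
conn.commit()

values = [b for (b,) in cur.execute("SELECT Bar FROM Foo")]
print(len(set(values)))  # → 3 (all rows distinct)
```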

SQL: Changing Table's Data Column - Preserving Column Order

I've been assigned the task of changing some data columns in SQL tables (using Sql CE Server 3.5, if that matters).
The tables are populated from hundreds of Comma Separated Excel text documents.
The code makes a stab at determining the data type of the column and the table is created.
Later, I need the ability to come back in and say, "No, this column with 'Y' and 'N' needs to be changed to a Boolean type instead of a Character type."
I have found information on how to Alter the Table (drop a column and insert the new one), but would I be able to get the table's column back to the same Column Index value that it had before, like "Insert At Index=X"?
There is no way to add a column at a specific index through ALTER TABLE. Tools like Sql Server Management Studio and Visual Studio Premium with Database tools can do it. But at least Visual Studio does it through a workaround:
Drop any constraints relating to the table, including FKs pointing at it.
Create a table with the new layout under temp name.
Move all the data over (possibly using SET IDENTITY_INSERT to preserve an IDENTITY column).
Drop the original table.
Rename the temp table to the original name.
Recreate the constraints.
If you have the possibility, I deeply recommend Visual Studio Premium's DB project. Its deploy engine can handle this automatically for you.
You can just alter the column in place, then you won't have to worry about ordering it.
ALTER TABLE myTable
ALTER COLUMN myColumn bit
There are a couple of ways to deal with this:
Do as Anders noted, which is to recreate the table from scratch.
Don't rely on the table's column order; instead use a layer of abstraction, for example views (SQL views or a .NET object view).
Don't drop and recreate the column, but alter the column instead.
That 3rd option is tricky because you'd have to update the values before the alter.
For example
Create table #temp (foo char(1), bar int)
Insert into #temp VALUES ('Y', 0)
Insert into #temp VALUES ('N', 1)
UPDATE #temp
SET foo = CASE WHEN foo = 'Y' THEN 1 ELSE 0 END
ALTER table #temp alter column foo bit
SELECT * FROM #temp
This is easy in this case; converting a varchar(50) to a datetime, for example, would be a bit more difficult.
There is "dirty tricks" like this:
Reset Identity Column Index
But I, honestly, never did it, cause why you ever need to care about next index implied by DB. May be I'm missing something, but why do not construct your DB relationships on your own IDs, on which you can have total control.
Regards.

TSQL: UPDATE with INSERT INTO SELECT FROM

So I have an old database that I'm migrating to a new one. The new one has a slightly different but mostly compatible schema. Additionally, I want to renumber all tables from zero.
Currently I have been using a tool I wrote that manually retrieves the old record, inserts it into the new database, and updates a v2 ID field in the old database to show its corresponding ID location in the new database.
For example, I'm selecting from MV5.Posts and inserting into MV6.Posts. Upon the insert, I retrieve the ID of the new row in MV6.Posts and update it in the old MV5.Posts.MV6ID field.
Is there a way to do this UPDATE via INSERT INTO SELECT FROM so I don't have to process every record manually? I'm using SQL Server 2005, dev edition.
The key with migration is to do several things:
First, do not do anything without a current backup.
Second, if the keys will be changing, you need to store both the old and new in the new structure at least temporarily (Permanently if the key field is exposed to the users because they may be searching by it to get old records).
Next you need to have a thorough understanding of the relationships to child tables. If you change the key field all related tables must change as well. This is where having both old and new key stored comes in handy. If you forget to change any of them, the data will no longer be correct and will be useless. So this is a critical step.
Pick out some test cases of particularly complex data making sure to include one or more test cases for each related table. Store the existing values in work tables.
To start the migration you insert into the new table using a select from the old table. Depending on the amount of records, you may want to loop through batches (not one record at a time) to improve performance. If the new key is an identity, you simply put the value of the old key in its field and let the database create the new keys.
Then do the same with the related tables. Then use the old key value in the table to update the foreign key fields with something like:
Update t2
set t2.fkfield = t1.newkey
from table2 t2
join table1 t1 on t1.oldkey = t2.fkfield
Test your migration by running the test cases and comparing the data with what you stored from before the migration. It is utterly critical to thoroughly test migration data or you can't be sure the data is consistent with the old structure. Migration is a very complex action; it pays to take your time and do it very methodically and thoroughly.
Probably the simplest way would be to add a column on MV6.Posts for oldId, then insert all the records from the old table into the new table. Last, update the old table matching on oldId in the new table with something like:
UPDATE o
SET newid = n.id
FROM mv5.posts o
JOIN mv6.posts n ON o.id = n.oldid
You could clean up and drop the oldId column afterwards if you wanted to.
The best you can do, as far as I know, is with the OUTPUT clause, assuming you have SQL 2005 or 2008.
USE AdventureWorks;
GO
DECLARE @MyTableVar table( ScrapReasonID smallint,
Name varchar(50),
ModifiedDate datetime);
INSERT Production.ScrapReason
OUTPUT INSERTED.ScrapReasonID, INSERTED.Name, INSERTED.ModifiedDate
INTO @MyTableVar
VALUES (N'Operator error', GETDATE());
It still would require a second pass to update the original table; however, it might help make your logic simpler. Do you need to update the source table? You could just store the new id's in a third cross reference table.
Heh. I remember doing this in a migration.
Putting the old_id in the new table makes both the update easier -- you can just do an insert into newtable select ... from oldtable -- and the subsequent "stitching" of records easier. In the "stitch" you'll either update the child tables' foreign keys during the insert, by doing a subselect on the new parent (insert into newchild select ... (select id from new_parent where old_id = oldchild.fk) as fk, ... from oldchild), or you'll insert the children and do a separate update to fix the foreign keys.
Doing it in one insert is faster; doing it in a separate step means that your inserts aren't order-dependent and can be re-done if necessary.
After the migration, you can either drop the old_id columns, or, if you have a case where the legacy system exposed the ids and so users used the keys as data, you can keep them to allow use lookup based on the old_id.
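The stitching subselect described above can be sketched with SQLite via Python (the `old_parent`/`new_parent` names echo the answer's pseudocode; the new tables renumber from 1 while `old_id` carries the mapping):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE old_parent (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE old_child  (id INTEGER PRIMARY KEY, parent_fk INTEGER, val TEXT);
    INSERT INTO old_parent VALUES (10, 'p1'), (20, 'p2');
    INSERT INTO old_child  VALUES (7, 10, 'a'), (8, 20, 'b');

    -- New tables renumber from 1 but keep old_id for stitching.
    CREATE TABLE new_parent (id INTEGER PRIMARY KEY, old_id INTEGER, name TEXT);
    CREATE TABLE new_child  (id INTEGER PRIMARY KEY, parent_fk INTEGER, val TEXT);

    INSERT INTO new_parent (old_id, name)
        SELECT id, name FROM old_parent;

    -- Stitch in one insert: the subselect maps the old FK to the new parent id.
    INSERT INTO new_child (parent_fk, val)
        SELECT (SELECT np.id FROM new_parent np WHERE np.old_id = oc.parent_fk),
               oc.val
        FROM old_child oc;
""")
rows = conn.execute("""
    SELECT nc.val, np.name FROM new_child nc
    JOIN new_parent np ON np.id = nc.parent_fk ORDER BY nc.val
""").fetchall()
print(rows)  # → [('a', 'p1'), ('b', 'p2')]
```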
Indeed, if you have the foreign keys correctly defined, you can use systables/information-schema to generate your insert statements.
Is there a way to do this UPDATE via INSERT INTO SELECT FROM so I don't have to process every record manually?
Since you wouldn't want to do it manually but automatically, create a trigger on MV6.Posts so that the UPDATE on MV5.Posts occurs automatically whenever you insert into MV6.Posts.
And your trigger might look something like,
create trigger trg_MV6Posts
on MV6.Posts
after insert
as
begin
-- assumes MV6.Posts carries an OldId column holding the corresponding MV5 key
update p
set p.MV6ID = I.ID
from MV5.Posts p
join inserted I on I.OldId = p.ID
end
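A runnable sketch of such a stitching trigger, using SQLite via Python (the `OldId` column on the new table is an assumption, since the original schema isn't shown; T-SQL trigger syntax differs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MV5_Posts (ID INTEGER PRIMARY KEY, Body TEXT, MV6ID INTEGER);
    INSERT INTO MV5_Posts (ID, Body) VALUES (100, 'hello'), (200, 'world');

    -- The new table carries OldId so the trigger can correlate the rows.
    CREATE TABLE MV6_Posts (ID INTEGER PRIMARY KEY, OldId INTEGER, Body TEXT);

    -- After each insert into the new table, write the new ID back
    -- into the old table's MV6ID field.
    CREATE TRIGGER trg_MV6Posts AFTER INSERT ON MV6_Posts
    BEGIN
        UPDATE MV5_Posts SET MV6ID = NEW.ID WHERE ID = NEW.OldId;
    END;

    INSERT INTO MV6_Posts (OldId, Body) SELECT ID, Body FROM MV5_Posts;
""")
stitched = conn.execute(
    "SELECT ID, MV6ID FROM MV5_Posts ORDER BY ID"
).fetchall()
print(stitched)  # → [(100, 1), (200, 2)]
```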
AFAIK, you cannot update two different tables with a single sql statement
You can however use triggers to achieve what you want to do.
Add a column MV6.Posts.OldMV5Id.
Then do an
insert into MV6.Posts
select .. from MV5.Posts
and then update MV5.Posts.MV6ID by joining on OldMV5Id.
