I have a C# WinForms application that accesses data from a database (SQL Server CE), manipulates the data and stores it back. I have been doing something rather "noobish": my program retrieves data from the .sdf file every time I need it. Instead, I think it would be better to create a DataSet that is populated from the database when the application is launched. The changes can be made to the DataSet, and then its data can be stored back into the database when the application closes.
I want to know if this is the right way to go, or am I missing something. Note that the dataset exists only as long as the application is running.
I would suggest using SqlCeResultSet. It is suitable for displaying as well as updating data.
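A minimal sketch of what working with a SqlCeResultSet looks like (the file, table and column names here are invented):

using System.Data.SqlServerCe;

using (var conn = new SqlCeConnection(@"Data Source=MyData.sdf"))
{
    conn.Open();
    var cmd = new SqlCeCommand("SELECT Id, Name FROM Customers", conn);

    // Updatable + Scrollable gives an updatable, scrollable cursor over the rows.
    SqlCeResultSet rs = cmd.ExecuteResultSet(
        ResultSetOptions.Updatable | ResultSetOptions.Scrollable);

    // Update an existing row in place - no separate UPDATE statement needed.
    if (rs.ReadFirst())
    {
        rs.SetString(rs.GetOrdinal("Name"), "New name");
        rs.Update();
    }

    // Insert a new row through the same result set.
    SqlCeUpdatableRecord rec = rs.CreateRecord();
    rec.SetString(rs.GetOrdinal("Name"), "Another customer");
    rs.Insert(rec);
}

It also supports data binding, which covers the display side.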
In the .NET Framework, in order to save data into a database item, one has to use:
Me.Validate()
Me.CustomersBindingSource.EndEdit()
Me.TableAdapterManager.UpdateAll(Me.CustomerDataSet)
Can someone explain to me why? What's going on behind the scenes? If .EndEdit() "applies changes to the underlying data source", why isn't that enough to apply those changes?
It is enough to "apply those changes"... to the data source. The data source is a DataTable, which is an object in your application. The UpdateAll call is what saves the changes from that DataTable - in fact, all DataTables in your DataSet - to the database.
ADO.NET is based on a disconnected model. That means that your application is not directly connected to your database. Using ADO in VB6, changes you made to a Recordset were made directly to the database. Not so in ADO.NET. When you call Fill, a connection to the database is opened, data is copied from the database into a DataTable and then the connection is closed. Any changes you make locally affect only that local copy. When you call Update or UpdateAll, the connection is opened again and the local changes saved to the database.
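To make that cycle concrete, here it is with a plain data adapter instead of the designer-generated TableAdapter (a minimal sketch; the connection string, table and column names are placeholders):

using System.Data;
using System.Data.SqlClient;

var adapter = new SqlDataAdapter(
    "SELECT Id, Name FROM Customers",
    @"Server=.;Database=MyShop;Integrated Security=true");
// The command builder derives INSERT/UPDATE/DELETE commands from the SELECT.
var builder = new SqlCommandBuilder(adapter);

var table = new DataTable();
adapter.Fill(table);     // connection opens, rows are copied locally, connection closes

table.Rows[0]["Name"] = "Changed locally";   // affects only the in-memory copy

adapter.Update(table);   // connection reopens, pending changes are written to the database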
I have a SQL Server database where I process some data using a .NET application. The processing takes quite some time; it basically pulls data down from a service/API and stores it in this database of mine.
However, I need to keep the data in the database in sync with the data fetched from the service/API. So I was thinking that maybe I could have two databases.
The idea:
Database1 is always cleared by my .NET application, and then I simply pull down all the data from the service/API again and rebuild the data. This is done daily (it usually takes about 4.5 hours).
After 6 hours I copy/restore all the data from Database1 to Database2
My data-consuming app/frontend consumes the data in Database2
This way I would not have the need of "syncing" data in a single database.
However, I'm not sure how to set up a daily restore in SQL Server - any ideas on how to do this?
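To be concrete, the copy/restore step I have in mind is something like this, run from the .NET application (the database names, paths and connection string are placeholders, and I'm assuming nothing else is connected to Database2 at that point):

using System.Data.SqlClient;

// Connect to master: Database2 cannot be in use while it is being restored.
using (var conn = new SqlConnection(@"Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();

    var backup = new SqlCommand(
        @"BACKUP DATABASE Database1 TO DISK = 'C:\Backups\Database1.bak' WITH INIT", conn);
    backup.CommandTimeout = 0;   // backups and restores can run long
    backup.ExecuteNonQuery();

    // REPLACE overwrites Database2; the MOVE clauses must match Database1's logical file names.
    var restore = new SqlCommand(
        @"RESTORE DATABASE Database2 FROM DISK = 'C:\Backups\Database1.bak'
          WITH REPLACE,
               MOVE 'Database1' TO 'C:\Data\Database2.mdf',
               MOVE 'Database1_log' TO 'C:\Data\Database2_log.ldf'", conn);
    restore.CommandTimeout = 0;
    restore.ExecuteNonQuery();
}

What I'm missing is how to run this on a daily schedule.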
Also, sorry if this is the wrong forum for db questions.
Br,
Inx
I'm trying to learn to use SQLite, but I'm very frustrated and confused. I've gotten as far as finding System.Data.SQLite, which is apparently the thing to use for SQLite in C#.
The website has no documentation whatsoever. The "original website", which has apparently been obsolete since 2010, has no documentation either. I could find a few blog tutorials, but from what I can tell their method of operation is basically:
Initialize a database connection.
Feed SQL statements into the connection.
Take out stuff that comes out of the connection.
Close connection.
I don't want to write SQL statements in my C# code; they're ugly, and I get no assistance from the IDE because I have to put the SQL code in strings.
Can't I just:
Create a DataSet.
Tell the DataSet that it should correspond to the SQLite database MyDB.sqlite.
Manipulate the DataSet using its member functions.
Not worry about SQLite because the DataSet automatically keeps itself in sync with the SQLite database on disc.
I know that I can fill a DataSet with the contents of a database, but if I want access to the entire database I will have to fill the DataSet with all of its contents. If my database is 1 GB, I have just used up 1 GB of RAM (not to mention the time needed to read all of it at once).
Can't I simply take a SQLite database connection and pretend it's just an ordinary DataSet (that perhaps needs to be asked occasionally if it's done syncing yet)?
The answer to the question is no: you cannot simply take a SQLite connection and pretend it's just a DataSet.
If you don't want to code SQL statements, then consider Entity Framework:
Using SQLite Embedded Database with Entity Framework and Linq-to-SQL
You shouldn't treat a DataSet as a database. It's just a result of a query.
You query the database to get a subset of data (you never want ALL the data from your DB) and this subset is used to populate your DataSet.
You are required to synchronize your changes manually because the DataSet doesn't know which updates should be part of which transaction; that knowledge lives in your system, not in the DataSet.
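To make that concrete, here is roughly what the query-a-subset-then-sync-manually cycle looks like with System.Data.SQLite (a sketch; the table and column names are invented):

using System.Data;
using System.Data.SQLite;

using (var conn = new SQLiteConnection("Data Source=MyDB.sqlite"))
{
    // Pull down only the subset you need, never the whole database.
    var adapter = new SQLiteDataAdapter(
        "SELECT Id, Name FROM Customers WHERE Region = 'EU'", conn);
    var builder = new SQLiteCommandBuilder(adapter);   // generates the write-back SQL

    var table = new DataTable();
    adapter.Fill(table);

    table.Rows[0]["Name"] = "Changed";   // in-memory only

    adapter.Update(table);   // you decide when this runs, and which changes it includes
}

There is still one SQL string, but only the SELECT; the command builder derives the write-back statements from it.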
The DataSet is an in-memory cache and will only synchronize to the underlying data store when the developer allows it. You could put a timer wrapper around it and do it on a schedule, but you would still need to keep the DataSet and the data store synchronized manually.
Storing 1 GB+ of data in memory is really not recommended, as the memory usage would be very high and the performance very low. You also don't want to be sending that amount of data over a network or, god forbid, an internet connection.
Why would you want to keep 1GB of data in memory?
When I load from the database I use one stored procedure which loads the DataItem and any Data associated with it. This comes back in one DataSet with two tables: the first table has one row and describes the DataItem, and each row in the second table describes the related Data.
This DataSet is then used to populate my objects.
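For reference, that load can be expressed with a single adapter and table mappings (a sketch: the stored procedure name dbo.GetDataItem, its parameter, and the connectionString/dataItemId variables are placeholders):

using System.Data;
using System.Data.SqlClient;

var ds = new DataSet();
using (var conn = new SqlConnection(connectionString))
{
    var cmd = new SqlCommand("dbo.GetDataItem", conn) { CommandType = CommandType.StoredProcedure };
    cmd.Parameters.AddWithValue("@DataItemId", dataItemId);

    var adapter = new SqlDataAdapter(cmd);
    // The procedure returns two result sets; map their default names to friendlier ones.
    adapter.TableMappings.Add("Table", "DataItem");
    adapter.TableMappings.Add("Table1", "Data");
    adapter.Fill(ds);   // Fill opens and closes the connection itself
}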
My problem comes when I have to save the objects back to the database. I am currently saving the DataItem and then looping through all of my Data and performing a save on each one. Completely horrible way to go about doing it, I know. It's both slow and it's not transactional.
So what I'd ideally like to do is convert my objects back into my DataSet and then save it all back to the database in one efficient, transactional operation. What code do I need on the C# side to make this transactional and to allow me to pass back a DataSet? I presume this will involve using a TableAdapter, but given that I have two tables, how will this work? What do I use on the SQL side - can I use stored procedures? (I would like to avoid having SQL in my C# project.) Would I need to write something that cycles through a DataTable to save each record?
What's the best way to go about doing all this? This will form the lynchpin of a project I'm working on so I want it to be as fast and efficient as it can be!
(.NET 4.0 and SQL 2005)
Did not use TableAdapter in the end as it was more effort than it was worth.
From the comments:
http://msdn.microsoft.com/en-us/library/4esb49b4.aspx
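For completeness, one way to do the save transactionally without TableAdapters is a pair of plain SqlDataAdapters sharing one SqlTransaction. The sketch below is only illustrative: the stored procedure names (dbo.SaveDataItem, dbo.SaveData), their parameters, the column names, and the connectionString/ds variables are all invented, and it assumes the rows being saved are flagged as Modified (inserts and deletes would need InsertCommand/DeleteCommand wired up the same way).

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        // Both commands are enlisted in the same transaction.
        var itemCmd = new SqlCommand("dbo.SaveDataItem", conn, tx) { CommandType = CommandType.StoredProcedure };
        itemCmd.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
        itemCmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
        var itemAdapter = new SqlDataAdapter { UpdateCommand = itemCmd };

        var dataCmd = new SqlCommand("dbo.SaveData", conn, tx) { CommandType = CommandType.StoredProcedure };
        dataCmd.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
        dataCmd.Parameters.Add("@DataItemId", SqlDbType.Int, 0, "DataItemId");
        dataCmd.Parameters.Add("@Value", SqlDbType.NVarChar, 255, "Value");
        var dataAdapter = new SqlDataAdapter { UpdateCommand = dataCmd };

        try
        {
            // Update() cycles through the changed rows itself - no manual looping.
            itemAdapter.Update(ds.Tables["DataItem"]);
            dataAdapter.Update(ds.Tables["Data"]);
            tx.Commit();   // both tables succeed or neither does
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}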
I am trying to replace a DTS Access exporter package with an exe we can call from our stored procedures (using xp_cmdshell).
We are in the middle of a transition between SQL 2000 and SQL 2005, and for the moment not using DTS or SSIS would be the best option.
I believe I have the following options:
Using a SqlDataReader to read SQL records, and using ADO.NET to insert the read records into Access.
I have implemented this and it is WAY too slow. This is not an option.
Setting up linked tables in Access, then getting Access to pull the data out of SQL Server.
If anyone has any experience in doing this, I would be grateful for some code examples or pointers to some resources.
If there are any other options for transferring large amounts of data from SQL Server into an Access database, that would be awesome, but performance is a big issue as we can be dealing with up to 1 million records per table.
Have you tried this?
Why not create a linked table in Access and pull the data from SQL Server, instead of pushing from SQL Server to Access?
I've done plenty of cases where I start with an Access database, attach to SQL Server, create a Create Table or Insert Querydef, and write some code to execute the querydef, possibly with arguments. But there are a lot of assumptions I would need to make about your problem and your familiarity with Access to go into more detail. How far can you get with that description?
I ended up using Access interop - thanks to le dorfier for pointing me in the direction of the import function, which seems to be the simplest way.
I now have something along these lines:
using Access = Microsoft.Office.Interop.Access;

// Start a new Access instance and create the target .mdb (path comes from our command-line args).
Access.ApplicationClass app = new Access.ApplicationClass();
app.NewCurrentDatabase(_args.Single("o"));
Access.DoCmd doCmd = app.DoCmd;

// Create a view on the server temporarily with the query I want to export.

// Pull the view's rows over ODBC into a local table.
doCmd.TransferDatabase(Access.AcDataTransferType.acImport,
    "ODBC Database",
    string.Format("ODBC;DRIVER=SQL Server;Trusted_Connection=Yes;SERVER={0};Database={1}",
        _args.Single("s"), _args.Single("d")),
    Access.AcObjectType.acTable,
    viewName,          // source object on the server
    exportDetails[0],  // destination table name in the .mdb
    false,             // StructureOnly: copy the data, not just the schema
    false);            // StoreLogin

// Drop the view on the server.
// Release the COM objects and exit properly.
Have you looked at bcp? It's a command-line utility that's supposed to work well for importing and exporting large amounts of data. I've never tried to make it play nicely with Access, but it's a great lightweight alternative to DTS and/or SSIS.
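For example, exporting a table to a native-format file looks like this (the server, database and table names are placeholders):

bcp MyDatabase.dbo.MyTable out C:\Export\MyTable.dat -S MYSERVER -T -n

You would still need a step on the Access side to pull the file in, though, since bcp itself only reads and writes flat files.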
Like others have said, the easiest way I know to get data into an Access mdb is to set things up in Access to begin with. Roughly speaking:
Create linked tables to the SQL Server data you want to export (in Access: File --> Get External Data --> Link Tables). This just gives you a connection to SQL Server.
Create a local table that represents the schema of the data you want to export (on the Tables tab, click the "New" button and follow your nose).
Create an append query that selects data from the linked tables (SQL Server) and appends rows to the local table (Access mdb); see the sketch after this list.
On the Macros tab, create a new macro that executes the query you just created above (I can't recall the exact "action" to use, but it's something like OpenQuery or RunQuery); name the macro "autoexec", which will cause it to run automatically when the mdb is opened.
Use a script (or whatever) to copy and open the mdb when appropriate; the autoexec macro will kick things off and the query will copy data from SQL Server to the mdb.
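The append query in step 3 boils down to ordinary Access SQL along these lines (table and column names are placeholders; Access names linked tables dbo_TableName by default):

INSERT INTO LocalCustomers ( Id, Name )
SELECT Id, Name
FROM dbo_Customers;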