I've got a database which will be accessed by multiple users.
For example, User 1 retrieves a list of all available datasets in the table "Test". User 2 does the same thing, and afterwards both users have the same datasets.
Now, if User 1 wants to write ABC to the dataset with Index 1, he can do so, and the change is persisted.
But if User 2 NOW wants to write ABB to the dataset with Index 1, how can he know that the dataset has already been updated?
Is there a pattern for multi-user database access, or can I just use hashing algorithms to detect whether a dataset has been updated?
Or are there any other approaches?
There are a number of possible answers here and a lot depends on your application architecture.
Are two users going to be working on the same row at the same time? And if so, if they have different ideas about what data should be there, this sounds like a business problem that needs to be considered and resolved.
That said, if you have a front end that is receiving data and then trying to write back, using either a timestamp or a checksum as part of validation is a common and useful way of handling this situation.
If I were implementing this, I would use a stored procedure and force my application to pass the checksum back to the proc. The procedure would check whether the checksum is still accurate, and the write would fail if it wasn't.
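To make that concrete, here is a minimal C# sketch of the idea against SQL Server. The table, column, and procedure names are all invented for illustration; it assumes a rowversion column on the row being updated.

```csharp
// Sketch only: the table, column, and procedure names are invented. The proc
// would look something like:
//   CREATE PROCEDURE UpdateDataset
//       @Index INT, @Value NVARCHAR(100), @RowVersion BINARY(8)
//   AS
//       UPDATE Test SET Value = @Value
//       WHERE [Index] = @Index AND RowVersion = @RowVersion;
//       RETURN @@ROWCOUNT; -- 0 means someone else updated the row first
using System.Data;
using System.Data.SqlClient;

class OptimisticUpdate
{
    // Returns false when the row was changed by another user since it was read.
    static bool TryUpdate(string connStr, int index, string value, byte[] rowVersion)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("UpdateDataset", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Index", index);
            cmd.Parameters.AddWithValue("@Value", value);
            cmd.Parameters.AddWithValue("@RowVersion", rowVersion);
            var ret = new SqlParameter("@ret", SqlDbType.Int)
                { Direction = ParameterDirection.ReturnValue };
            cmd.Parameters.Add(ret);

            conn.Open();
            cmd.ExecuteNonQuery();
            return (int)ret.Value > 0;
        }
    }
}
```

Each reader keeps the RowVersion it originally fetched; a failed update means re-read and retry, or surface the conflict to the user.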
I'm still learning MVC and have gone through several online tutorials. But I'm missing something sort of vital to my application, so this is a general question not necessarily requiring code examples to answer. If you can just steer me in the right direction in conceptual terms...
My application is completely read-only to the database; I don't need or want to write back. I need to pull data from multiple tables in one database, which all share the exact same schema, into what I think would be a single model that I can then filter, then display the results. To complicate things somewhat, the table names need to be variables; these tables are built upstream on the fly using the date as part of the table name.
The tables are television automation schedules, a different table for each day, but each contains a number of fields for scheduled time, house ID, title, etc. I need to get several days into one model (I think), and then I'm going to query a different database that will tell me, for each row in the table, whether the House ID exists on a video server or not. I want to then display the list of rows that do not exist on the video server.
I have an example in VB but feel like I should tackle this in C# as it seems to be more universally supported.
I don't think I can use VS tools to create a model from the database table since the table name is different every day.
So is the proper plan of attack to load the multiple table data into one model?
Maybe I don't even need a model in the true sense of the word; there's no binding required, since I don't need to write the data back to the db. Essentially I just need to load the table data into an array that doesn't need to stay bound to the db, which I can then analyze to figure out which of these items don't exist on the server.
Thanks!
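A minimal sketch of that load-into-a-list approach, with invented table and column names (Schedule_yyyyMMdd, ScheduledTime, HouseID, Title); since the table name changes daily, it has to be composed into the SQL text rather than passed as a parameter:

```csharp
// Sketch only: table/column names are invented. The date-stamped table name is
// built into the SQL string (table names can't be parameterized), so it must
// come from a trusted source such as the current date, never from user input.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class ScheduleRow
{
    public DateTime ScheduledTime;
    public string HouseId;
    public string Title;
}

class ScheduleLoader
{
    static List<ScheduleRow> LoadDays(string connStr, DateTime start, int days)
    {
        var rows = new List<ScheduleRow>();
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            for (int i = 0; i < days; i++)
            {
                string table = "Schedule_" + start.AddDays(i).ToString("yyyyMMdd");
                using (var cmd = new SqlCommand(
                    "SELECT ScheduledTime, HouseID, Title FROM [" + table + "]", conn))
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        rows.Add(new ScheduleRow
                        {
                            ScheduledTime = rdr.GetDateTime(0),
                            HouseId = rdr.GetString(1),
                            Title = rdr.GetString(2)
                        });
                    }
                }
            }
        }
        return rows;
    }
}
```

From there the list can be checked row by row against the video server's database, with no binding back to the source tables.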
All,
I have a test program that will assign serial ID numbers to test subjects for some research sessions. I'll be running the program at different times, so I need this data to persist. It will be a simple ID number in 001 format (two leading zeros until 10, one leading zero until 100) and will max out at 999. How would I accomplish this in C#? Ideally, it starts up, reads the persistent data, then starts registering new test subjects at the latest number. This number will then be used as a primary key to recognize the test subjects in a database. I've never done anything remotely like this before, so I'm clueless as to what I should do.
EDIT:
I probably should have clarified... there are multiple databases. One is a local SQLite file that holds the test subject's trial data (the specific data from each test). The other is a much larger MySQL database that holds more general information (things about the test subject relevant to the study). The MySQL database is remote, and data from the application is not directly submitted to it... that's handled by another application that takes the SQLite file and submits that data to the MySQL database. The test environment is variable and may not have a connection to the MySQL database. As such, it's not a viable candidate for holding this data, as I need the ID numbers each time I start the program, regardless of the connection state to the MySQL database. The SQLite files are written after program execution from a text file (CSV) and need to contain the ID number to be used as a primary key, so the SQLite database might not be the best candidate for storing the persistent data either. Sorry I didn't explain this earlier... it's still early in the day :P
If these numbers are used in a database as the index, why not check the database for the next number? If 5 subjects have been registered already, next time just check the database, get the max for the index and add 1 for the next subject. Once you insert that subject, you can add 1 again.
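A minimal sketch of that max-plus-one idea, assuming (purely for illustration) that the IDs end up in a local SQLite table named Subjects with an integer Id column, via the System.Data.SQLite package:

```csharp
// Sketch only: "Subjects" and its integer Id column are invented names.
using System.Data.SQLite; // System.Data.SQLite NuGet package

class SubjectIds
{
    static string NextId(string connStr)
    {
        using (SQLiteConnection conn = new SQLiteConnection(connStr))
        using (SQLiteCommand cmd = new SQLiteCommand(
            "SELECT IFNULL(MAX(Id), 0) + 1 FROM Subjects", conn))
        {
            conn.Open();
            long next = (long)cmd.ExecuteScalar();
            // "D3" pads to three digits: 1 -> "001", 42 -> "042", 999 -> "999"
            return next.ToString("D3");
        }
    }
}
```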
We are using MySQL to get data from the database, match it, and send the matched data back to the user. The MySQL DB contains 10 tables; 9 of them hold relatively little data, which needs to be matched against the 10th table, which has 25 million records and is still growing. I need to create a C# application to match the data and send it to the user. Every minute, new data arrives in the other 9 tables, and the old data is deleted after being compared. I currently have all 10 tables' data in C# memory, but it sometimes runs out of memory. I'm thinking of dividing the C# application into 5-6 parts to handle the data and then do the rest of the logic, but I need some good suggestions on how to start my work.
Thanks
APS
I think you are approaching your problem incorrectly. From your post, it sounds like you are trying to load massive quantities of highly volatile data into memory. By doing that, you are entirely defeating the point of having a database server like MySQL. Don't preload all of the data into memory... let your users query the data they need from the database via your C# application. That is exactly what database servers are for, and they are going to do a considerably better job at providing optimized, performant access to data than you can do yourself.
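For instance, a single match check can be pushed entirely to the server as a parameterized query; the table and column names here are invented:

```csharp
// Sketch only: "big_table" and "match_key" are invented names. The server does
// the lookup; the application never holds the 25M-row table in memory.
using System;
using MySql.Data.MySqlClient;

class MatchLookup
{
    static bool HasMatch(string connStr, string key)
    {
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand(
            "SELECT EXISTS(SELECT 1 FROM big_table WHERE match_key = @key)", conn))
        {
            cmd.Parameters.AddWithValue("@key", key);
            conn.Open();
            return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
        }
    }
}
```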
You should probably think about your algorithms and decide if there is any way to split the problem into smaller chunks, for example to work on small partitions of the data at a time.
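One way to do that, sketched with invented names: walk the big table in fixed-size chunks keyed on an indexed id, so only one chunk is ever in memory at a time.

```csharp
// Sketch only: table/column names are invented; assumes an indexed numeric id.
using MySql.Data.MySqlClient;

class ChunkedScan
{
    static void ProcessInChunks(string connStr, int chunkSize)
    {
        long lastId = 0;
        using (var conn = new MySqlConnection(connStr))
        {
            conn.Open();
            while (true)
            {
                var cmd = new MySqlCommand(
                    "SELECT id, match_key FROM big_table WHERE id > @lastId " +
                    "ORDER BY id LIMIT " + chunkSize, conn);
                cmd.Parameters.AddWithValue("@lastId", lastId);

                int count = 0;
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        lastId = rdr.GetInt64(0);
                        // ... match rdr.GetString(1) against the 9 small tables ...
                        count++;
                    }
                }
                if (count < chunkSize) break; // last chunk processed
            }
        }
    }
}
```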
32-bit .NET processes have a memory limit of 2 GB. Perhaps you are hitting this limit, hence the out-of-memory errors? If so, two things you could do are:
Have multiple processes running, each dealing with a subset of the data
Move to a 64-bit OS and recompile your code into a 64-bit executable
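A quick way to check which situation you are in (note Environment.Is64BitProcess requires .NET 4.0 or later):

```csharp
using System;

class MemCheck
{
    static void Main()
    {
        // Environment.Is64BitProcess requires .NET 4.0+.
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("Managed heap:   "
            + GC.GetTotalMemory(false) / (1024 * 1024) + " MB");
    }
}
```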
Please do not say you have a lot of data. 25 million rows is not exactly a lot by today's standards.
Where does C# enter here? From your explanation, this looks 100% like something that should be done entirely on the server side with SQL.
I don't use MySQL, but I would suggest using a stored procedure to sort through the data first. It depends on how complex or CPU-expensive your computation is and how big the dataset is that you're going to send over your network, but normally I'd try to let the server handle it. That way you don't end up sending all your data over the network, plus you avoid trouble when your data model changes: you don't have to recompile and distribute your C# app. You change one stored procedure and you're ready.
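Calling such a procedure from C# is then trivial; "match_data" is an invented name here:

```csharp
// Sketch only: "match_data" is an invented stored procedure that is assumed to
// do the matching server-side and return only the matched rows.
using System.Data;
using MySql.Data.MySqlClient;

class ServerSideMatch
{
    static void RunMatch(string connStr)
    {
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand("match_data", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // ... send the matched row on to the user ...
                }
            }
        }
    }
}
```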
I am working on a price list management program for my business in C# (the prototype is in WinForms, but I am thinking of using WPF for the final app as an MVVM learning exercise).
Our EMS system is based on a COBOL back end and will remain that way for at least 3 years, so I cannot really access its data directly. I want to pull data from the EMS system periodically to ensure that pricing remains in sync (and to provide some other information to users in a non-editable manner, such as bin locations). What I am looking at doing is...
Use WinBatch to automatically run a report nightly, then use Monarch to convert the text report to a flat file (.xls?)
Drop the file into a folder and write a small app to read it in and add it to the database
How should I add this to the database? (SQL Express) I could have a table that is just replaced completely each time, but I am a beginner at most of this and I am concerned about what would happen if an entire table were replaced while the database was being used by the price list app.
Mike
If you truncate and refill a whole table, you should do it in a single transaction and take a full table lock. This is safer and faster.
You could also update all changed rows, then insert new (missing) rows, and then delete any rows that weren't touched in this run (store some kind of version number in each row to determine this).
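A minimal sketch of the truncate-and-refill variant in C#, with an invented PriceList table; the truncate and the bulk load share one transaction and a table lock, so readers only ever see the old or the new data:

```csharp
// Sketch only: "PriceList" is an invented table name.
using System.Data;
using System.Data.SqlClient;

class PriceListRefresh
{
    static void Refill(string connStr, DataTable newRows)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // TRUNCATE is transactional in SQL Server and rolls back on failure.
                new SqlCommand("TRUNCATE TABLE PriceList", conn, tx).ExecuteNonQuery();

                using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, tx))
                {
                    bulk.DestinationTableName = "PriceList";
                    bulk.WriteToServer(newRows);
                }
                tx.Commit(); // readers see either the old data or the new, never half
            }
        }
    }
}
```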
First create a .txt file from the legacy application. Then use a batch insert to pull it into a work table for whatever cleanup you need to make. Do the cleanup using T-SQL. Then run T-SQL to insert new data into the proper tables and/or to update rows where data has changed. If there are too many records, do the inserting and updating in batches. Schedule all of this as a job to run during hours when the database is not busy.
You can of course do all of this best in SSIS, but I don't know if that is available with Express.
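Without SSIS, the same pipeline can be driven from the small C# app as plain T-SQL; the file path and table names below are invented, and BULK INSERT needs the file to be reachable from the SQL Server machine:

```csharp
// Sketch only: file path, staging table, and target table names are invented.
using System.Data.SqlClient;

class NightlyImport
{
    static void Run(string connStr)
    {
        string[] steps =
        {
            // 1. Pull the legacy text file into a staging (work) table.
            @"BULK INSERT StagingPrices FROM 'C:\imports\prices.txt'
              WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')",
            // 2. Clean up in T-SQL (trim padding, fix types, etc.).
            "UPDATE StagingPrices SET ItemCode = LTRIM(RTRIM(ItemCode))",
            // 3. Update changed rows, then insert new ones.
            @"UPDATE p SET p.Price = s.Price
              FROM PriceList p JOIN StagingPrices s ON p.ItemCode = s.ItemCode
              WHERE p.Price <> s.Price",
            @"INSERT INTO PriceList (ItemCode, Price)
              SELECT s.ItemCode, s.Price FROM StagingPrices s
              WHERE NOT EXISTS (SELECT 1 FROM PriceList p
                                WHERE p.ItemCode = s.ItemCode)"
        };

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (string sql in steps)
                new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}
```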
Are there any fields/tables available to tell you when the price was last updated? If so, you can just pull the recently updated rows and update those in your database... assuming you have a readily available unique primary key in your COBOL app's datastore.
This wouldn't be fully up to date, though, because you're running it as a nightly script to update the database used by the new app. You could maybe create a .NET script to query the COBOL datastore specifically for whatever price the user is looking for, and if the COBOL datastore's update time is more recent than what you have logged, update the SQL Server record(s).
(I'm not familiar with COBOL at all, just throwing ideas out there.)
I have an Oracle DB and an ASP page with many ListBoxes and DropDownLists. These controls let the user enter data to get filtered results on the next page.
Once the user clicks search, a string (query) is generated based on the selections of the user. The results page has a datagrid that takes this string and uses it to get data for the grid from the database.
Also, I want to use a separate class with methods to create the string.
My datagrid is working fine with queries that I type on my own, but what I need is a class to generate that query using all the user input.
What would be the best approach?
(I am using ASP.NET 2.0 and C#)
For such a broad question you're going to require multiple sets of information.
You'll want to start by hooking into your Oracle database and making the query (Step 1). The next step is to display the results on your forms (Step 2). Once you've got that working, you can start parameterizing your queries (Step 3). Here is a collection of topics to get you started. You ought to be able to piece things together from there.
Step 1 :: Connecting to an Oracle DB in ASP.NET
Step 2 :: ASP.NET GridView Databinding
Step 3 :: Parameterized Queries
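The three steps combined into one hedged sketch, using the System.Data.OracleClient provider that shipped with .NET 2.0 (since deprecated); the table, columns, and helper class are invented:

```csharp
// Sketch only: "items", its columns, and SearchHelper are invented names.
using System.Data;
using System.Data.OracleClient; // shipped with .NET 2.0, deprecated later
using System.Web.UI.WebControls;

public static class SearchHelper
{
    public static void BindResults(GridView grid, string connStr,
                                   string category, string maxPrice)
    {
        using (OracleConnection conn = new OracleConnection(connStr))
        using (OracleCommand cmd = new OracleCommand(
            "SELECT * FROM items WHERE category = :cat AND price <= :maxPrice",
            conn))
        {
            // Step 3: user selections become bind parameters, never concatenated SQL.
            cmd.Parameters.Add(new OracleParameter("cat", category));
            cmd.Parameters.Add(new OracleParameter("maxPrice", maxPrice));

            DataTable table = new DataTable();
            new OracleDataAdapter(cmd).Fill(table); // Step 1: connect and query

            grid.DataSource = table;                // Step 2: bind the GridView
            grid.DataBind();
        }
    }
}
```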
We've done similar things in that we have a massive Criteria page where the user can pick from ~400 points of data. Then we use all that data to formulate some kind of query into the database. We found it very useful to roll all that Criteria data into a serializable structure; we used a complex object that could serialize to XML. It made testing that whole system a thousand times easier. It also opened the door for us to add saved searches to the system.
Use a separate class for the transform-object-to-SQL code.
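A tiny sketch of that shape, all names invented: a serializable criteria object plus a separate class that turns it into parameterized SQL.

```csharp
// Sketch only: SearchCriteria's fields and the SQL are invented. The object can
// be serialized to XML (for tests and saved searches); the builder emits SQL
// with bind-parameter placeholders plus the matching values.
using System;
using System.Collections.Generic;

[Serializable]
public class SearchCriteria
{
    public string Category;
    public decimal? MaxPrice;
}

public static class CriteriaToSql
{
    public static string Build(SearchCriteria c, IDictionary<string, object> parms)
    {
        List<string> where = new List<string>();
        if (!string.IsNullOrEmpty(c.Category))
        {
            where.Add("category = :cat");
            parms["cat"] = c.Category;
        }
        if (c.MaxPrice.HasValue)
        {
            where.Add("price <= :maxPrice");
            parms["maxPrice"] = c.MaxPrice.Value;
        }
        return "SELECT * FROM items"
            + (where.Count > 0 ? " WHERE " + string.Join(" AND ", where.ToArray()) : "");
    }
}
```

Because the criteria object is plain data, unit tests can assert on the generated SQL and parameters without touching the database at all.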