I have never written a service before. This is my situation.
I have a MySQL table that I need to migrate data from.
I have an MSSQL database that I need to transfer the data to.
This needs to happen at least once every day.
Can anyone show me how to achieve this? Thanks in advance.
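For a once-a-day job you may not need a true Windows service at all; a small console app run by Windows Task Scheduler is often enough. Here is a minimal sketch of the transfer itself, assuming the MySql.Data connector and a made-up Products table (connection strings and column names are placeholders):

```csharp
using System.Data.SqlClient;           // SQL Server
using MySql.Data.MySqlClient;          // MySQL (MySql.Data NuGet package)

class NightlyTransfer
{
    static void Main()
    {
        // Placeholder connection strings -- adjust for your servers.
        const string mySqlCs = "server=localhost;database=src;uid=user;pwd=pass;";
        const string msSqlCs = "Server=.;Database=dest;Trusted_Connection=True;";

        using (var source = new MySqlConnection(mySqlCs))
        using (var target = new SqlConnection(msSqlCs))
        {
            source.Open();
            target.Open();

            // Clear yesterday's copy (skip this if you append instead).
            new SqlCommand("TRUNCATE TABLE dbo.Products", target).ExecuteNonQuery();

            // Stream the MySQL rows straight into the SQL Server table.
            var cmd = new MySqlCommand("SELECT Id, Name, Price FROM Products", source);
            using (var reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(target))
            {
                bulk.DestinationTableName = "dbo.Products";
                bulk.WriteToServer(reader);
            }
        }
    }
}
```

Schedule the executable to run nightly in Task Scheduler; if you later need retries and logging, promoting it to a real service (or using SSIS) is straightforward.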
I have an Oracle database with a table Customers to which I only have read-only access. I would like to copy some columns from this table and insert them into a SQL Server 2008 R2 table called Customers, only when the ID does not exist in the destination table.
I'm new to C#. I can open and read from Oracle successfully using Oracle.DataAccess, and I can write to SQL Server, but I am lost on how to read from Oracle and then write into SQL Server. All the examples I could find are for databases of the same type.
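There is no cross-database copy command here; you hold two open connections and move the rows through your app. A rough sketch, with hypothetical column names and connection strings (row-by-row inserts are the simplest approach but, as the follow-up below notes, may not scale to very large tables):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using Oracle.DataAccess.Client;   // ODP.NET, which you're already using

class CopyMissingCustomers
{
    static void Main()
    {
        using (var ora = new OracleConnection("Data Source=ORA;User Id=u;Password=p;"))
        using (var sql = new SqlConnection("Server=.;Database=Dest;Trusted_Connection=True;"))
        {
            ora.Open();
            sql.Open();

            var read = new OracleCommand("SELECT ID, NAME, CITY FROM CUSTOMERS", ora);

            // Guarded insert: only rows whose ID isn't in the destination yet.
            var write = new SqlCommand(
                @"IF NOT EXISTS (SELECT 1 FROM dbo.Customers WHERE ID = @id)
                      INSERT INTO dbo.Customers (ID, Name, City)
                      VALUES (@id, @name, @city);", sql);
            write.Parameters.Add("@id", SqlDbType.Int);
            write.Parameters.Add("@name", SqlDbType.NVarChar, 100);
            write.Parameters.Add("@city", SqlDbType.NVarChar, 100);

            using (var r = read.ExecuteReader())
            {
                while (r.Read())
                {
                    write.Parameters["@id"].Value = Convert.ToInt32(r["ID"]);
                    write.Parameters["@name"].Value = r["NAME"];
                    write.Parameters["@city"].Value = r["CITY"];
                    write.ExecuteNonQuery();   // one round trip per row
                }
            }
        }
    }
}
```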
I wanted to follow up on this to share that, despite many creative solutions offered here and elsewhere, there was not an efficient enough way to accomplish this task. In almost every case, I would have to read the entire contents of two very large tables and then check for missing records without the performance advantage of a true join. I decided to go with a hybrid model where the app obtains basic information from the main Oracle database and then reads and writes the supplemental information in and out of SQL Server.
Suppose I load a user's data from my database into Redis. Say 5 seconds go by, and the user information in the database is updated.
What's the best way to check whether the data in Redis is out of sync? Do you periodically query the database and check that you have the same data as what's stored there, or do you check when you are about to commit some data to the database?
I am using StackExchange.Redis as the client, if that makes any difference.
The answer to your question depends on your software architecture and the style in which your application works.
If you have only one application with one component (the monolith way), the data sync should happen within the application: the user is updated through the application, and the record is written to both the database and Redis.
If you have multiple applications/components (the microservice way), this gets a bit harder. One application updates the user information in the database, but the other applications do not know about it. So you either:
invalidate the data in Redis, so the next application fetching user details will first see that there is nothing stored in Redis, then go to the database, fetch the record, and store it in Redis (see the sketch below),
or: Use TTL (worst option ever).
If the records are written "somehow" into the database and you do not have control over that application, that's the worst case. I would go for TTL in this case.
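Since you're on StackExchange.Redis, the invalidate-on-write path from the first option looks roughly like this (a sketch; User, SaveUserToDatabase, and the key format are placeholders):

```csharp
using StackExchange.Redis;

public class User { public int Id; public string Name; }

public class UserStore
{
    // Share one ConnectionMultiplexer across the whole app; it is expensive to create.
    private static readonly ConnectionMultiplexer Mux =
        ConnectionMultiplexer.Connect("localhost");

    public void UpdateUser(User user)
    {
        SaveUserToDatabase(user);               // your existing DB write

        IDatabase redis = Mux.GetDatabase();

        // Option 1: invalidate, so the next reader repopulates from the DB.
        redis.KeyDelete("user:" + user.Id);

        // Option 2 (instead of deleting): write the fresh value back immediately.
        // redis.StringSet("user:" + user.Id, SerializeUser(user));
    }

    private void SaveUserToDatabase(User user) { /* ... */ }
}
```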
HTH, Mark
I am working on a price list management program for my business in C# (the prototype is in WinForms, but I am thinking of using WPF for the final app as an MVVM learning exercise).
Our EMS system is based on a COBOL back end and will remain that way for at least 3 years, so I cannot really access its data directly. I want to pull data from the EMS system periodically to ensure that pricing remains in sync (and to provide some other information to users in a non-editable manner, such as bin locations). What I am looking at doing is...
Use WinBatch to automatically run a report nightly, then use Monarch to convert the text report to a flat file (.xls?)
Drop the file into a folder and write a small app to read it in and add it to the database
How should I add this to the database? (SQL Express) I could have a table that is just replaced completely each time, but I am a beginner at most of this, and I am concerned about what would happen if an entire table were replaced while the database was being used by the price list app.
Mike
If you truncate and refill a whole table, you should do it in one single transaction and take a full table lock. This is safer and faster.
You could also update all changed rows, then insert new (missing) rows, and then delete all rows that weren't updated in this run (stamp each row with some kind of version number to determine which those are).
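From C#, the truncate-and-refill-in-one-transaction variant might look like this (a sketch; the table name and the DataTable source are made up):

```csharp
using System.Data;
using System.Data.SqlClient;

static class PriceListRefresher
{
    public static void Refresh(string connectionString, DataTable newRows)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                // Empty the table inside the transaction...
                new SqlCommand("TRUNCATE TABLE dbo.PriceList", conn, tx).ExecuteNonQuery();

                // ...and refill it under a table lock. Readers are blocked until
                // the commit, so they never see a half-empty price list.
                using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, tx))
                {
                    bulk.DestinationTableName = "dbo.PriceList";
                    bulk.WriteToServer(newRows);
                }

                tx.Commit();
            }
        }
    }
}
```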
First create a .txt file from the legacy application. Then use a bulk insert to pull it into a work table for whatever cleanup you need to do. Do the cleanup using T-SQL. Then run T-SQL to insert new data into the proper tables and/or to update rows where data has changed. If there are too many records, do the inserting and updating in batches. Schedule all this as a job to run during hours when the database is not busy.
You can of course do all of this best in SSIS, but I don't know if that is available with Express.
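Roughly, that pipeline could be driven like this (the file path, staging table, and matching column are assumptions). Since Express has no SQL Agent, Windows Task Scheduler can stand in as the job scheduler:

```csharp
using System.Data.SqlClient;

static class NightlyImport
{
    public static void Run(string connectionString)
    {
        // Each statement matches a step above; schedule Run() nightly.
        string[] steps =
        {
            // 1. Pull the flat file into a work table.
            @"BULK INSERT dbo.PriceStaging FROM 'C:\drop\prices.txt'
              WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)",

            // 2. Clean up in T-SQL (drop blank/bad lines, trim text, etc.).
            "DELETE FROM dbo.PriceStaging WHERE ItemCode IS NULL",

            // 3. Update changed rows, then insert new ones.
            @"UPDATE p SET p.Price = s.Price
              FROM dbo.PriceList p JOIN dbo.PriceStaging s ON p.ItemCode = s.ItemCode",
            @"INSERT INTO dbo.PriceList (ItemCode, Price)
              SELECT s.ItemCode, s.Price FROM dbo.PriceStaging s
              WHERE NOT EXISTS (SELECT 1 FROM dbo.PriceList p
                                WHERE p.ItemCode = s.ItemCode)"
        };

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (string sql in steps)
                new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}
```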
Are there any fields/tables available to tell you when the price was last updated? If so, you can just pull the recently updated rows and update those in your database... assuming you have a readily available unique primary key in your COBOL app's datastore.
This wouldn't be fully up to date, though, because you're running it as a nightly script to update the database used by the new app. You could maybe create a .NET script to query the COBOL datastore specifically for whatever price the user is looking for, and if the COBOL datastore's update time is more recent than what you have logged, update the SQL Server record(s).
(I'm not familiar with COBOL at all, just throwing ideas out there.)
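If such a timestamp does exist, the nightly update shrinks to something like this (a sketch; it assumes the staging table carries a LastUpdated column and that you persist the previous run time somewhere):

```csharp
using System;
using System.Data.SqlClient;

static class DeltaSync
{
    // Applies only the rows the legacy side changed since the previous run.
    public static void Apply(SqlConnection conn, DateTime lastRun)
    {
        var cmd = new SqlCommand(
            @"UPDATE p SET p.Price = s.Price
              FROM dbo.PriceList p
              JOIN dbo.PriceStaging s ON p.ItemCode = s.ItemCode
              WHERE s.LastUpdated > @lastRun", conn);
        cmd.Parameters.AddWithValue("@lastRun", lastRun);
        cmd.ExecuteNonQuery();

        // Persist DateTime.UtcNow (a one-row settings table works) as the next lastRun.
    }
}
```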
I have a client who has a product-based website with hundreds of static product pages that are generated by Microsoft Access reports and pushed up to the ISP via FTP (it is an old design). We are thinking about getting a little more sophisticated and creating a data-driven website, probably using ASP.NET MVC.
Here's my question. Since this is a very small business (a handful of employees), I'd like to avoid enterprise patterns like web services if I can. How does one push updated product information to the website, batch-style? In a SQL Server environment, you can't just push up a new copy of the database, can you?
Clarification: The client already has a system at his facility where he keeps all of his product information and specifications. I would like to refresh the database at the ISP with this information.
You don't mention what exactly the data source is, but the implication is that it's not already in SQL Server. If that's the case, have a look at SSIS.
If the source data is in SQL Server, then I think you'd want to be looking at either transactional replication or log shipping to sync the two databases.
If you are modernizing, and it is a handful of employees, why would you push the product info out batch style?
I don't know exactly what you mean by "data driven", but why not allow the ASP.NET app to query the SQL Server product catalog database directly? Why generate static pages at all?
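In ASP.NET MVC that is just a controller action reading the live catalog (a sketch; the connection string name, table, and view are placeholders):

```csharp
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // One page per product, rendered on demand from the live catalog --
    // nothing static to regenerate and FTP.
    public ActionResult Detail(int id)
    {
        string cs = ConfigurationManager.ConnectionStrings["Catalog"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT Name, Price FROM dbo.Products WHERE Id = @id", conn);
            cmd.Parameters.AddWithValue("@id", id);
            using (var r = cmd.ExecuteReader())
            {
                if (!r.Read()) return HttpNotFound();
                ViewBag.Name = r.GetString(0);
                ViewBag.Price = r.GetDecimal(1);
                return View();
            }
        }
    }
}
```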
UPDATE: OK, I see; the real question is how to update the SQL database running at the ISP.
You create an admin panel so the client can edit the data directly on the server. It is perfectly reasonable to have the client keep all their records on the server as long as the server is backed up nightly. Many cloud and virtual services offer easy ways to do replicated backups.
The additional benefit of this model is that more than one user can be adding or updating records at a time, making the workforce a lot more scalable. Likewise, the users can log in from anywhere they have a web browser to add new records, fix mistakes made in old records, etc.
EDIT: This approach assumes you can convince the client to abandon their current data entry system in favor of a centralized web-based management panel. Even if this isn't the case, the SQL database can be hosted on the server and the client's application could be made to talk to that so you're only ever using one database. From the sounds of it, it's a set of Access forms and macros which you should have source access to.
Assuming that there is no way to sync the data directly between your legacy system DB (is it in Access, or is Access just running the reports) and the SQL Server DB on the website (I'm not aware of any):
The problem with "pushing" the data directly into the SQL server will be that "old" (already in the DB) records won't be updated, but instead removed and then recreated. This is a big problem with foreign keys. Plus, I really don't like the idea of giving the client any access to the db at all.
So considering that, I find the best approach is to write a relatively simple page that takes an uploaded file and updates the database. The file will likely be CSV, possibly XML. After a few iterations of writing these pages over the years, here's what I've come up with (a rough sketch follows the list):
1. Show file upload box.
2. On next page load, save file to temp location.
3. Loop through each line (element in XML) and validate all the data. Foreign keys, especially, but also business validations. You can also validate that the header row exists, etc. Don't update the database.
3a. If invalid data exists, save an error message to an array.
4. At the end of the looping, show the view.
4a. If there were errors, show the list of error messages and tell them to re-upload the file.
4b. If there were no errors, create a link that has the file location from #2 and a confirmation flag.
5. After the file location and confirm flag have been submitted, run the loop in #3 again, but there's an if (confirmed) {} statement that actually makes the updates to the db.
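Condensed into an MVC controller, the pattern looks something like this (a sketch only; the CSV layout, the validation rule, and SaveRow are placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class ImportController : Controller
{
    // Steps 1-2: receive the upload and park it in a temp location.
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        string tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        file.SaveAs(tempPath);
        return Validate(tempPath);
    }

    // Steps 3-4: dry run -- same loop as the commit, but nothing is written.
    public ActionResult Validate(string path)
    {
        List<string> errors = ProcessFile(path, confirmed: false);
        if (errors.Count > 0)
            return View("Errors", errors);        // 4a: show messages, ask for re-upload
        return View("Confirm", (object)path);     // 4b: link carrying path + confirm flag
    }

    // Step 5: the user confirmed; run the same loop, writing this time.
    public ActionResult Commit(string path)
    {
        ProcessFile(path, confirmed: true);
        return View("Done");
    }

    private List<string> ProcessFile(string path, bool confirmed)
    {
        var errors = new List<string>();
        string[] lines = System.IO.File.ReadAllLines(path);  // qualified: Controller.File exists
        for (int i = 1; i < lines.Length; i++)               // skip the header row
        {
            string[] fields = lines[i].Split(',');
            if (fields.Length != 3)                          // FK/business checks go here
            {
                errors.Add("Line " + (i + 1) + ": expected 3 fields");   // step 3a
                continue;
            }
            if (confirmed)
                SaveRow(fields);          // the only place the database is touched
        }
        return errors;
    }

    private void SaveRow(string[] fields) { /* INSERT/UPDATE via ADO.NET */ }
}
```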
EDIT: I saw your other post. One of the assumptions I made is that the databases won't be the same; i.e., the legacy app will have a table or two, maybe just products, but the new app will have orders, products, categories, etc. This will complicate "just uploading the file".
Why do you need to push anything?
You just need to create a product-management portion of the website and, secondly, a public-facing portion. Both portions would touch the same SQL Server database.
.NET has the ability to monitor a database and check for updates; then you can run a query to [push] the data elsewhere.
Or use SQL to push the data with a trigger on the table(s) in question.
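(I'm thinking of SqlDependency here, which requires Service Broker to be enabled on the database. A minimal sketch:)

```csharp
using System;
using System.Data.SqlClient;

class ProductWatcher
{
    const string Cs = "Server=.;Database=Catalog;Trusted_Connection=True;";

    static void Main()
    {
        // Service Broker must be on: ALTER DATABASE Catalog SET ENABLE_BROKER;
        SqlDependency.Start(Cs);
        Subscribe();
        Console.ReadLine();                        // keep the process alive
        SqlDependency.Stop(Cs);
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "SELECT Id, Price FROM dbo.Products", conn))   // two-part name, no SELECT *
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) =>
            {
                Subscribe();                       // notifications fire only once; re-arm
                PushDataElsewhere();               // your query-and-push logic
            };
            conn.Open();
            cmd.ExecuteReader().Dispose();         // executing the query registers it
        }
    }

    static void PushDataElsewhere() { /* ... */ }
}
```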
Is this what you were looking for?
You can try an ASP.NET Dynamic Data web application.
You should have a service that regularly updates the data in the target DB. It will probably run on your source data machine (where the Access DB is).
The service can use SSIS or ADO.NET to write the data. You can do this over the web, because I assume you have TCP/IP access to the server.
Check when the updates run and how long they take. If you can do the updates during the night, you are fine. If not, you should check whether the site is still accessible during the import; that is sometimes not the case.
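A bare-bones shape for that service, assuming .NET Framework and a SyncNow() placeholder that does the actual SSIS/ADO.NET work:

```csharp
using System.ServiceProcess;
using System.Timers;

public class SyncService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Fire once a day; adjust the interval (milliseconds) to taste.
        _timer = new Timer(24 * 60 * 60 * 1000);
        _timer.Elapsed += (s, e) => SyncNow();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void SyncNow()
    {
        // Open the local source (the Access DB), read the rows, and write them
        // to the remote SQL Server over TCP/IP -- e.g. with SqlBulkCopy.
    }

    public static void Main()
    {
        ServiceBase.Run(new SyncService());
    }
}
```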
Use wget to push the new data file to the MVC app; once the data is received by the action, the MVC app invokes the processing/importing of the data (maybe in a worker process if you don't want long-running requests).
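The receiving action could look like this (a sketch; the route and the background hand-off are assumptions, and wget's --post-file flag does the push):

```csharp
using System.IO;
using System.Web.Mvc;

public class ReceiveController : Controller
{
    // Pushed from the client with: wget --post-file=prices.csv http://site/receive/data
    [HttpPost]
    public ActionResult Data()
    {
        string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        using (var fs = System.IO.File.Create(path))
            Request.InputStream.CopyTo(fs);

        // Hand off to a worker so the request returns quickly (a proper queue,
        // e.g. Hangfire, is safer in IIS than a bare Task).
        System.Threading.Tasks.Task.Run(() => ImportFile(path));
        return new HttpStatusCodeResult(202);   // accepted for processing
    }

    private void ImportFile(string path) { /* parse + load into the DB */ }
}
```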