Sybase: create a database from a backup file - c#

Using SQL statements (Sybase), how can I create or restore a full database given the compressed backup file (myDatabase.cmp.bak)?
The reason I'm saying create or restore is that the DB already exists, but I can drop it before creating it if that's easier.
Do I need to worry about device files? For example, if the backup used 3 database files, each a certain size, do I need to create the empty device files first, or will that be taken care of during the restore?
I'm doing this from a C# app.
Cheers
Damien

First, a word of warning. You are attempting DBA tasks without reasonable understanding, and without a decent connection to the server.
1 Forget your C# app, and use isql or DBSQL or SQL Advantage (comes with the CD). You will find it much easier.
2 Read up on the commands you expect to use. You need a handle on the task, not just the command syntax.
--
No, you do not have to change the Device files at all.
Yes, you do need to know the compression and the database allocations of the source (dumped) database. Usually when we transfer dump_files, we know to send the database create statement and the compression ratio with it. The number of stripes is also required, but that is easy to ascertain.
If your target db is reasonably similar to the source db, as in, it was once synchronised, you do not need to DROP/CREATE the db, just add the new allocations. But if it isn't, you will need to. The target db must be created with the exact same CREATE/ALTER DATABASE sequence as the source db. Otherwise, you will end up with mixed data/log segments, which prevents log dumps and renders the db unrecoverable (from the log). That sequence can be gleaned from the dump_file, but the compression has to be known. Hence it is much easier for the target DBA if the source DBA ships the CREATE/ALTER DATABASE and dump commands along with the dump_file.
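Either way (isql or a C# app), the command sequence looks roughly like the sketch below. This is only an illustration: the DSN, credentials, device names, sizes, compression/stripe details and file path are all made up and must be replaced with the source database's actual CREATE/ALTER sequence and dump options. It is shown as C# over ODBC purely because the question mentions a C# app.

// Hedged sketch only: recreate the target db with the SAME allocation sequence
// as the source, then load the compressed dump and bring it online.
// Run this connected to master; the target db must not be in use.
using System.Data.Odbc;

class SybaseRestoreSketch
{
    static void Main()
    {
        using (var conn = new OdbcConnection("DSN=ASE_SERVER;UID=sa;PWD=secret"))
        {
            conn.Open();
            // Same CREATE/ALTER sequence as the source database (sizes in MB).
            Exec(conn, "CREATE DATABASE myDatabase ON data_dev1 = 2000 LOG ON log_dev1 = 500 FOR LOAD");
            Exec(conn, "ALTER DATABASE myDatabase ON data_dev2 = 2000");
            // For a dump taken with the compress:: directive; a dump taken WITH
            // COMPRESSION is loaded from a plain file path instead. Add STRIPE ON
            // clauses if the dump was striped.
            Exec(conn, "LOAD DATABASE myDatabase FROM 'compress::/backups/myDatabase.cmp.bak'");
            Exec(conn, "ONLINE DATABASE myDatabase");
        }
    }

    static void Exec(OdbcConnection conn, string sql)
    {
        // No timeout: create/load can take a long time on a large database.
        using (var cmd = new OdbcCommand(sql, conn)) { cmd.CommandTimeout = 0; cmd.ExecuteNonQuery(); }
    }
}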

Is it appropriate to use a shell command to download MySQL tables from within a C# program?

I am writing a C# program that needs to obtain data from a MySQL database in a REMOTE server. The internet connections that it will be using are extremely slow and unreliable, so I want to minimize the data that is being transferred.
The following shell command gets MySQL to store data from a certain table as a *.txt file in the LOCAL machine:
mysql.exe -u USERNAME -pPASSWORD -h REMOTE_SERVER_IP DB_NAME -e "SELECT * FROM table1" > C:/folder/file_name.txt
As of now, I am writing a C# program that will execute this command. HOWEVER, when executing this command from the Windows Command Prompt, I get a Warning that says "Using a password on the command line interface can be insecure." I have a few questions:
1- What kind of security risk is it referring to?
2- Does this risk still exist if you execute it from within another program?
3- Would any of y'all use the same approach? How does this compare with using a straight MySqlConnection and calling in SP's to store all of the data in RAM (and inserting it into the local database later), in terms of amounts of data transferred, speed and RAM usage? (In theory, of course, I don't expect anyone to have tried this specific comparison already)
4- Is the code on the following link the best for this? Isn't there something in the MySql library (.Net Framework) that will make it easier?
How to use mysql.exe from C#
I am also open to suggestions on changing my approach altogether, just in case...
EDIT: The alternate method I referred to in 3 uses the MySqlDataAdapter class, which stores the data in DataSets.
1 & 2
As you're passing the password as a CLI argument, it can be displayed on screen, and command-line arguments are also visible in the shell history and the process list, so anyone with access to the machine can see your password. As easy as that.
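If you do stick with shelling out to mysql.exe, one way to keep the password out of the argument list is to hand it over through the MYSQL_PWD environment variable and redirect the output yourself. A rough sketch only; the host, credentials and paths are the placeholders from the question:

// Sketch: run mysql.exe without the password on the command line by passing it
// via the MYSQL_PWD environment variable, and write stdout to the local file.
using System.Diagnostics;
using System.IO;

class RunMysqlWithoutCliPassword
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "mysql.exe",
            Arguments = "-u USERNAME -h REMOTE_SERVER_IP DB_NAME -e \"SELECT * FROM table1\"",
            UseShellExecute = false,          // required for env vars and redirection
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };
        psi.EnvironmentVariables["MYSQL_PWD"] = "PASSWORD";   // read by the mysql client

        using (var proc = Process.Start(psi))
        using (var output = new StreamWriter(@"C:\folder\file_name.txt"))
        {
            output.Write(proc.StandardOutput.ReadToEnd());   // read before WaitForExit to avoid deadlock
            proc.WaitForExit();
        }
    }
}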
Rest of points
It's not true that you would pull all records into memory. If you use MySQL's IDataReader implementation, MySqlDataReader (i.e. you'll need to call the MySqlCommand.ExecuteReader method), you can retrieve results from the database sequentially, like a stream: you read each row of the result set one by one and write it out to a file using a FileStream or StreamWriter.
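A minimal sketch of that streaming approach (the connection details, table and file names are taken from the question, so adjust as needed; it writes a tab-separated dump with no escaping, purely for illustration):

// Stream rows one at a time with MySqlDataReader and append them to a local
// text file, so the full result set never has to sit in RAM.
// Assumes the MySql.Data (Connector/NET) package.
using System.IO;
using MySql.Data.MySqlClient;

class StreamTableToFile
{
    static void Main()
    {
        var connStr = "Server=REMOTE_SERVER_IP;Database=DB_NAME;Uid=USERNAME;Pwd=PASSWORD;";
        using (var conn = new MySqlConnection(connStr))
        using (var writer = new StreamWriter(@"C:\folder\file_name.txt"))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SELECT * FROM table1", conn))
            using (var reader = cmd.ExecuteReader())   // forward-only, row-at-a-time
            {
                while (reader.Read())
                {
                    var fields = new string[reader.FieldCount];
                    for (int i = 0; i < reader.FieldCount; i++)
                        fields[i] = reader.IsDBNull(i) ? "NULL" : reader.GetValue(i).ToString();
                    writer.WriteLine(string.Join("\t", fields));
                }
            }
        }
    }
}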
It will show your password in plain text on the screen, in the console output, or in memory.
Yes, since you still need to store the password in plain text, either on disk or in memory.
If you are not that concerned about someone gaining access to your remote machine and stealing the password without you knowing it, then it's fine.
You can try the Windows Native Authentication Plugin, with which you wouldn't need to store the password at all; instead it uses your current Windows login information to authenticate (unless you are on Linux, in which case forget about it).
It is pretty much the same idea as typing your password on any website without a mask (dots or *). Whether or not that is a concern for you is for you to decide.
Why not connect to the DB the standard way (from within .NET, like you can connect to an Oracle db, for example) using MySqlConnection, as shown here: MySql Connection. Once you do that, you have no password concerns, as this is in code. Then I would handle the problem in a similar fashion (incrementally fetching data and storing it locally, to get around the internet issue).
So, I finally got around to properly coding and testing both methods (the shell command and the IDataReader), and the results were pretty interesting. In total, I tested a sample of my 4 heaviest tables six times for each method. The shell command method took 1:00.3 on average, while the DataReader took 0:56.17, so I'll go with the latter because of its overall (and pretty consistent) advantage of about 4s.
If you look at the breakdown per step though, it seems that C# needed a full 8s to connect to the database (48.3s for downloading the tables vs the previous total). If you consider that the shell command was most likely establishing and closing a new connection for each table that was being downloaded, it seems to me that something in that process is actually quicker for connecting to the remote database. Also, for one of the tables, the shell command was actually faster by 2.9 seconds. For the rest of the tables, only one was more than 8 seconds slower under the shell command.
My recommendation for anyone in the future is to use the shell command if you're only obtaining a single, large table. For downloading multiple tables, the IDataReader is the more efficient choice, probably because it only establishes the connection once.

How does an MS-Access SQL query run on a remotely located .mdb file?

I'm trying to understand how making a query to a .mdb file works.
Suppose the file is located on a share drive on PC2, and I open it programmatically from PC1.
When I make a connection to a .mdb file, I assume no "instance" of MS Access is started on PC2 (since it's a simple file server). Is this correct?
When I make a SQL query, does it have to copy the table locally, run the query, then return my results and toss away the table and any excess data?
What happens if I "order by" on a query? Is the entire result set returned and then ordered locally, or is it somehow ordered remotely?
I'm sure I have other questions, but I'm trying to understand how connecting to an MDB file works from a remote location. (We have a decent amount of latency where I am located, so a particular query can take 9 seconds, which in my case is unacceptable; I'm trying to understand how this works and whether it can be improved.)
I'm running with C# in this case; I don't expect that should make much difference, but it may affect your response.
When I make a connection to a .mdb file, I assume no "instance" of MS Access is started on the [remote machine] (since it's a simple file server). Is this correct?
Yes. The application will be interacting with a copy of the Access Database Engine on the local machine, which in turn retrieves the information from the database file on the remote machine.
When I make a SQL query, does it have to copy the table locally, run the query, then return my results and toss away the table and any excess data?
Not necessarily. Depending on the indexing scheme of the table(s) involved, the Access Database Engine may only need to retrieve the relevant indexes and then determine the specific pages in the data file that contain the records to be retrieved. In some cases it may need to retrieve the entire table (e.g., when a full table scan is required), but that is not always the case.
What happens if I "order by" on a query? Is the entire result set returned and then ordered locally, or is it somehow ordered remotely?
The Access documentation says that indexes will speed up sort operations (ref: here), suggesting that the Access Database Engine can retrieve the required rows from the remote file in sorted order.
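For reference, a minimal sketch of what the client side looks like: the local Jet/ACE engine opens the .mdb over the share and, with an index on the filtered column, only has to pull the relevant index and data pages across the wire. The UNC path, provider version, table and column names here are illustrative, not from the question:

// Query a remote .mdb over a share; all SQL processing happens in the local
// Jet engine. The Jet 4.0 provider is 32-bit only (use Microsoft.ACE.OLEDB.12.0
// for 64-bit processes).
using System.Data.OleDb;

class RemoteMdbQuery
{
    static void Main()
    {
        var connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\PC2\Share\MyData.mdb;";
        using (var conn = new OleDbConnection(connStr))
        using (var cmd = new OleDbCommand(
            "SELECT * FROM Orders WHERE CustomerId = ? ORDER BY OrderDate", conn))
        {
            cmd.Parameters.AddWithValue("?", 42);   // OLE DB parameters are positional
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process the row...
                }
            }
        }
    }
}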
Your instincts are correct, mdb/mde dbs are just glorified text files which must be processed locally. Here are some tips on network performance: http://www.granite.ab.ca/access/performancefaq.htm
But since SQL Server Express is free, there is almost no excuse for not migrating, especially since Access has a tool to manage that for you. In a low volume multi-user environment (2-10 for example), MS Access can work ok, but for enterprise solutions where a higher volume of users and/or transactions is in any way possible, you are dicing with disaster.
To add to Gord's answer...
Access databases are accessed through Windows file page locks. My understanding was that Microsoft added this page locking specifically for use by MS Access (but it is also available for any file through the Windows API).
Because the engine instance is local, and collisions and conflicts are handled through file page locks, client-server contention is an issue. Access has known issues here. It's why one should switch to SQL Server Express (also free) whenever possible. But, yes, MS Access has a certain level of convenience; SSE has a bigger footprint and a far less friendly GUI.
All desktop databases have client/server issues. Gord's answer matches my knowledge. The point of indices is to reduce the amount of table data that needs to be pulled locally. Pulling the index is a relatively small task in comparison to the table data. This is standard index optimisation, although I would say it is even more important for desktop databases due to the remote data and, ugh, file paging.
In general the Access (JET) engine does NOTHING remotely. It's all file data grabs, executed locally in the local MSA/Jet engine. You know this because the engine is installed locally and doesn't have to be installed on the file host. It is, however, a convenient quick-and-dirty way of dispersing processing loads. :)

Programmatically saving a SQL Server database to xml files and restoring it again

I want to save a whole MS SQL 2008 database into XML files... using ASP.NET.
Now I am a bit lost here... what would be the best method to achieve this? DataSets?
And I need to restore the database again later... using these XML files. I am thinking about using DataSets for reading the tables and writing to XML, and using the SqlBulkCopy class to restore the database again. But I am not sure whether this would be the right approach...
Any clues and tips for me?
If you need to restore it on the same server type (I mean SQL Server 2008 or higher) and don't care about the ability to see the actual data inside the XML, do the following:
Programmatically backup the DB using "BACKUP DATABASE" T-SQL
Compress the backup
Convert the backup to Base64
Place the backup as the content of the XML file (like: <database name="..." compressionmethod="..." compressionlevel="...">the Base64 content here</database>)
On the server where you need to restore it, download the XML, extract the Base64 content, use the attributes to know what compression was used. Decompress and restore using T-SQL "RESTORE" command.
Would that approach work?
For sure, if you need to see the content of the database, you would need to develop the XML schema, go through each table, etc. But then you won't have SPs/views and other items backed up.
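A rough sketch of that backup / compress / Base64 / wrap-in-XML idea, for illustration only; the connection string, database name and paths are placeholders, and the restore side simply reverses the steps (read the element, decode, decompress, then run RESTORE DATABASE):

// Back up the database, gzip the .bak, and embed it as Base64 inside an XML element.
using System;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;
using System.Xml.Linq;

class BackupToXml
{
    static void Main()
    {
        const string bakPath = @"C:\temp\MyDb.bak";
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        using (var cmd = new SqlCommand("BACKUP DATABASE [MyDb] TO DISK = N'" + bakPath + "' WITH INIT", conn))
        {
            cmd.CommandTimeout = 0;            // backups can take a while
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // Compress the .bak and wrap it as Base64 inside an XML element.
        byte[] raw = File.ReadAllBytes(bakPath);
        using (var ms = new MemoryStream())
        {
            using (var gz = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
                gz.Write(raw, 0, raw.Length);

            var doc = new XElement("database",
                new XAttribute("name", "MyDb"),
                new XAttribute("compressionmethod", "gzip"),
                Convert.ToBase64String(ms.ToArray()));
            doc.Save(@"C:\temp\MyDb.backup.xml");
        }
    }
}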
Because you are talking about a CMS, I'm going to assume you are deploying into hosted environments where you might not have command line access.
Now, before I give you the link I want to state that this is a BAD idea. XML is way too verbose to transfer large amounts of data. Further, although it is relatively easy to pull data out, putting it back in will be difficult and a very time consuming development project in itself.
Next alert: as Denis suggested, you are going to miss all of your stored procedures, functions, etc. Your best bet is to use the normal sql server backup / restore process. (Incidentally, I upvoted his answer).
Finally, the last time I dealt with XML and SQL Server we noticed interesting issues that cropped up when data exceeded a 64KB boundary. Basically, at 63.5KB, the queries ran very quickly (200ms). At 64KB, the query times jumped to over a minute and sometimes quite a bit longer. We didn't bother testing anything over 100KB as that was taking 5 minutes on a fast/dedicated server with zero load.
http://msdn.microsoft.com/en-us/library/ms188273.aspx
See this for putting it back in:
How to insert FOR AUTO XML result into table?
For kicks, here is a link talking about pulling the data out as json objects: http://weblogs.asp.net/thiagosantos/archive/2008/11/17/get-json-from-sql-server.aspx
you should also read (not for the faint of heart): http://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/
Of course, the commentors all recommend building something using a CLR approach, but that's probably not available to you in a shared database hosting environment.
At the end of the day, if you are truly insistent on this madness, you might be better served by simply iterating through your table list and exporting all the data to standard CSV files, then iterating over the CSV files to load the data back in, a la C# - is there a way to stream a csv file into database?
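For the export half, a naive sketch along those lines: it enumerates the user tables via INFORMATION_SCHEMA and streams each one out with a SqlDataReader. There is no quoting or escaping of embedded commas, and the connection string and output folder are placeholders; the reload half would be SqlBulkCopy territory.

// Dump every user table to a comma-separated text file, one row at a time.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;

class ExportTablesToCsv
{
    static void Export(string connStr, string outDir)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Collect table names first so the reader is closed before the next query.
            var tables = new List<string>();
            using (var cmd = new SqlCommand(
                "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'", conn))
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read()) tables.Add(rdr.GetString(0));

            foreach (var table in tables)
            {
                using (var cmd = new SqlCommand("SELECT * FROM [" + table + "]", conn))
                using (var rdr = cmd.ExecuteReader())
                using (var writer = new StreamWriter(Path.Combine(outDir, table + ".csv")))
                {
                    while (rdr.Read())
                    {
                        var values = new string[rdr.FieldCount];
                        for (int i = 0; i < rdr.FieldCount; i++)
                            values[i] = rdr.IsDBNull(i) ? "" : rdr.GetValue(i).ToString();
                        writer.WriteLine(string.Join(",", values));
                    }
                }
            }
        }
    }
}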
Bear in mind that ALL of the above methods suffer from
long processing times due to the data overhead, which leads to
a high potential for failure due to the various timeouts (page processing, command, connection, etc.); and,
if your data model changes between the time it was exported and reimported then you're back to writing custom translation code and ultimately screwed anyway.
So, only do this if you really really have to and are at least somewhat of a masochist at heart. If the purpose is simply to transfer some data from one installation to another, you might consider using one of the tools like SQL Compare and SQL Data Compare from RedGate to handle the transfer.
I don't care how much (or little) you make, the $1500 investment in their developer bundle is much cheaper than the months of time you are going to spend doing this, fixing it, redoing it, fixing it again, etc. (for the record I do NOT work for them. Their products are just top notch.)
Red Gate's SQL Packager lets you package a database into an exe or to a VS project, so you might want to take a look at that. You can specify which tables you want to consider for data.
Is there any specific reason you want to do this using xml?

Backup and Restore Filtered Data from SQL Server database using C#

I wish to implement a backup and restore feature for my application. Here I want to back up filtered data (not the whole database).
For example, Select * from Sales where CompanyId=1 for every table in the database, writing this data to a .bak file which I can later use for restore purposes.
My question here is: is there any way to implement this feature using SMO? If you have any other suggestion about how to implement this, I am very happy to hear it.
Please help me, friends.
There is no native way in which you are going to achieve this backup, but there are some awkward workarounds you can do to try to get this functionality.
If every table includes the CompanyId field, you could create a partition scheme / function based on the CompanyId, and specifically place each partition of the scheme on a separate filegroup. This splits the data for each CompanyId onto a different filegroup, which is the key, since SQL Server can perform a file / filegroup level backup instead of backing up the entire database.
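A hedged sketch of what that looks like, with made-up object names and paths; note that the filegroups referenced by the scheme must already have been added to the database (ALTER DATABASE ... ADD FILEGROUP / ADD FILE), and the tables must be created on the partition scheme:

// Illustrative only: partition on CompanyId, map partitions to filegroups,
// then back up a single company's filegroup.
using System.Data.SqlClient;

class FilegroupBackupSketch
{
    static void Run(string connStr)
    {
        string[] statements =
        {
            "CREATE PARTITION FUNCTION pfCompany (int) AS RANGE LEFT FOR VALUES (1, 2, 3)",
            "CREATE PARTITION SCHEME psCompany AS PARTITION pfCompany " +
                "TO (fgCompany1, fgCompany2, fgCompany3, fgOther)",
            // ...tables are then created ON psCompany(CompanyId)...
            "BACKUP DATABASE MyDb FILEGROUP = 'fgCompany1' " +
                "TO DISK = 'C:\\Backups\\Company1.bak' WITH INIT"
        };

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (var sql in statements)
                using (var cmd = new SqlCommand(sql, conn)) { cmd.ExecuteNonQuery(); }
        }
    }
}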
I wouldn't do this unless it was the last option; I would first work out exactly what the backup / restore requirements really are, and check whether there are better options.

Determining the start and end range of bytes changed in a file

I'm working on a little experimental utility to use within our company that indexes notes stored in our custom CRM software for full-text searching. These notes are stored in a Btrieve database (a file called NOTES.DAT). It's possible to connect to the database and retrieve the notes for indexing by using Pervasive's ADO.NET provider. However, the indexer currently loops through each note and re-indexes it every 5 minutes. This seems grossly inefficient.
Unfortunately, there's no way for our CRM software to signal to the indexing service that a note has been changed, because it's possible for the database to exist on a remote machine (and the developers aren't going to write a procedure to communicate with my service over a network, since it's just a hobby project for now).
Rather than give up, I'd like to take this opportunity to learn a little more about raw Btrieve databases. So, here's my plan...
The NOTES.DAT file has to be shared, since our CRM software uses the Btrieve API rather than the ODBC driver (which means client installations have to be able to see the file itself on the network). I would like to monitor this file (using something like FileSystemWatcher?) and then determine the bytes that were changed. Using that information, I'll try to calculate the record at that position and get its primary key. Then the indexer will update only that record using Pervasive's ADO.NET provider.
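Roughly what I have in mind for the watching part is something like the sketch below (the share path is a placeholder); the hard part, mapping a change notification to the byte ranges that actually changed, is exactly what I'm asking about.

// Watch NOTES.DAT on the share and react to write events.
using System;
using System.IO;

class NotesWatcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"\\crm-server\data", "NOTES.DAT")
        {
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size,
            EnableRaisingEvents = true
        };
        watcher.Changed += (sender, e) =>
        {
            // TODO: figure out WHICH bytes changed so only the affected
            // record(s) get re-indexed, rather than the whole file.
            Console.WriteLine($"{DateTime.Now}: {e.FullPath} changed");
        };
        Console.ReadLine();   // keep the process alive while watching
    }
}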
The problem (besides the fact that I don't quite know the structure of Btrieve files yet or if determining the primary key from the raw data is possible) is that I don't know how to determine the start and end range of bytes that were changed in NOTES.DAT.
I could diff two versions, but that would mean storing a copy of NOTES.DAT somewhere (and it can be quite large, hence the reason for a full-text indexing service).
What's the most efficient way to do this?
Thanks!
EDIT: It's possible for more than one note to be added, edited, or deleted in one transaction, so if possible, the method needs to be able to determine multiple separate byte ranges.
If your NOTES.DAT file is stored on an NTFS partition, then you should be able to perform one of the following:
use the USN journal to identify changes to your file (preferred)
use the volume shadow copy service to track changes to your file by taking periodic snapshots through VSS (very fast), and then either:
diffing versions N and N-1 (probably not as slow as reindexing, but still slow), or
delving deeper and attempting to diff the $Mft to determine which blocks changed at which offsets for the file(s) of interest (much more complex, but also much faster - yet still not as fast, reliable and simple as using the USN journal)
Using the USN journal should be your preferred method. You can use the FSUTIL utility to create and truncate the USN journal.
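For the journal setup side, a small sketch of driving FSUTIL from C#; the volume letter and sizes are placeholders, and reading the journal records themselves (e.g. FSCTL_READ_USN_JOURNAL via P/Invoke) is a bigger job that is not shown here. Note that fsutil requires an elevated (administrator) process.

// Create and inspect the USN change journal on the volume holding NOTES.DAT.
using System;
using System.Diagnostics;

class UsnJournalSetup
{
    static void RunFsutil(string args)
    {
        var psi = new ProcessStartInfo("fsutil", args)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var proc = Process.Start(psi))
        {
            Console.WriteLine(proc.StandardOutput.ReadToEnd());
            proc.WaitForExit();
        }
    }

    static void Main()
    {
        // Create a journal with a 32 MB maximum size and an 8 MB allocation delta.
        RunFsutil("usn createjournal m=33554432 a=8388608 D:");
        // Confirm it exists / inspect its current state.
        RunFsutil("usn queryjournal D:");
    }
}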
