MySQL query to copy new data from one DB to another

I have a situation. While we were upgrading our server drives, we took a backup of our DB, imported it again, and put the site live. A week later we realized the import had not copied the entire backup from the dump file: when we checked, it was only 13 GB when it should have been 60 GB. The interesting part is that one big table was copied in a strange way: it is huge, but only the first few thousand records (say 2,000-5,000) and the last records (say 400,000-500,000) were copied, with nothing in between. Isn't that weird? While importing we only checked the first and last entries, so we assumed everything had been copied. We then created a new DB and imported the dump again, and now it seems to be OK, but the live DB (the 13 GB one) has since accumulated new entries, so we have to copy those into the newly imported DB. What would be the ideal way to copy those new records into the recovered DB? I mean some query that searches the new DB and adds only the records that are missing. Do I have to take a dump of the 13 GB database and then import it, or is there a way to copy directly from the live DB?

You can use mysqldump with the following option: --insert-ignore, which writes INSERT IGNORE statements rather than INSERT statements (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_insert-ignore).
With the IGNORE keyword, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table is not inserted, and no duplicate-key error is issued.
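For example, if the live database and the freshly restored database sit on the same MySQL server, a single INSERT IGNORE ... SELECT will pull over only the rows whose keys are not already present. This is just a sketch; live_db, recovered_db and big_table are placeholder names for your live database, the recovered database, and the affected table:
-- Copy rows from the live DB into the recovered DB, silently skipping
-- any row whose PRIMARY KEY / UNIQUE value already exists there.
INSERT IGNORE INTO recovered_db.big_table
SELECT * FROM live_db.big_table;
Otherwise, something like mysqldump --insert-ignore --no-create-info live_db | mysql recovered_db replays the live data as INSERT IGNORE statements against the recovered copy, skipping every row whose key already exists there.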

Related

T SQL insert statement results in object reference being inserted into empty data field

I've run into something very strange in SSMS that I can't seem to find anything about online. I have a package that runs nightly on a database, inserting millions of records. It's a very straightforward script that simply imports data into a temp table and then inserts the data from the temp table into another table. One of the fields in the import file has an ASCII blank space character for every single row, which normally just populates the temp table with a blank and nothing else.
Recently, some of the rows of the temp & final destination table are being populated with a string that looks like this:
label.ComponentCustomColumnRefValDesc.a41d368515c2198830d97c5528fc58c2
Does anyone have any idea where that could be coming from? Is it an object customization specification? If so, why would it be showing up as data in the table?
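For reference, the nightly workflow described above boils down to a pattern like the following (the table names, columns, and file path here are hypothetical, not the poster's actual schema):
-- Stage the import file, then move the staged rows into the destination table.
CREATE TABLE #staging (Field1 varchar(255) NULL, Field2 varchar(255) NULL);
BULK INSERT #staging FROM 'C:\imports\nightly.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');
INSERT INTO dbo.FinalTable (Field1, Field2)
SELECT Field1, Field2 FROM #staging;
Nothing in a pattern like this transforms the data on the way through, so the interesting question is at which of the two steps the extra string first appears.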

Import textfiles into linked SQL Server tables in Access

I have an Access (2010) database (front-end) with linked SQL Server tables (back-end), and I need to import text files into these tables. These text files are very large (some have more than 200,000 records and approx. 20 fields).
The problem is that I can't import the text files directly into the SQL tables. Some files contain empty lines at the start, and other lines that I don't want to import into the tables.
So here's what I did in my Access database:
1) I created a link to the text files.
2) I also have a link to the SQL Server tables
3a) I created an append query that copies the records from the linked text file to the linked SQL Server table.
3b) I wrote VBA code that opens both tables and copies the records from the text file into the SQL Server table, record by record. (I tried it in different ways: with DAO and with ADODB.)
[Steps 3a and 3b are two different ways in which I tried to import the data. I use one of them, not both. I prefer option 3b, because I can run a counter in the status bar to see how many records still need to be imported at any moment; I can see how far along it is.]
The problem is that it takes a lot of time to run... and I mean a LOT of time: 3 hours for a file with 70,000 records and 20 fields!
When I do the same with an Access table (from TXT to Access), it's much faster.
I have 15 tables like this (with even more records), and I need to do these imports every day. I run this procedure automatically every night (between 20:00 and 06:00).
Is there an easier way to do this?
What is the best way to do this?
This feels like a good case for SSIS to me.
You can create a data flow from a flat file (as the data source) to a SQL DB (as the destination).
You can add some validation or selection steps in between.
You can easily find tutorials like this one online.
Alternatively, you can do what Gord mentioned: import the data from the text file into a local Access table, and then use a single INSERT INTO LinkedTable SELECT * FROM LocalTable to copy the data to the SQL Server table.
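A minimal sketch of that second approach, with hypothetical names (LocalStaging for the local Access table that receives the raw text import, dbo_MyTable for the linked SQL Server table, Field1 for a column used to filter out the unwanted lines): after loading the text file into LocalStaging, one append query pushes everything across in a single batch:
INSERT INTO dbo_MyTable
SELECT *
FROM LocalStaging
WHERE Len(Trim(Field1 & "")) > 0;
Because this is one set-based statement instead of a round trip per record, it avoids most of the overhead that makes the record-by-record DAO/ADODB loop take hours.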

Combining several sqlite databases (one table per file) into one big sqlite database

How do you combine several sqlite databases (one table per file) into one big sqlite database containing all the tables? E.g., you have database files db1.dat, db2.dat, db3.dat, ... and you want to create one file dbNew.dat which contains the tables from all of db1, db2, ...
Several similar questions have been asked on various forums. I posted this question (with an answer) for a particular reason: when you are dealing with several tables and have indexed many fields, it causes unnecessary confusion to recreate the indexes properly in the destination database's tables. It is easy to miss one or two indexes, and that is just annoying. The method given here can also deal with large amounts of data, i.e. when you really have GBs of tables. Here are the steps (a scriptable SQL alternative is sketched after them):
1) Download SQLite Expert: http://www.sqliteexpert.com/download.html
2) Create a new database dbNew: File -> New Database.
3) Load the first sqlite database db1 (containing a single table): File -> Open Database.
4) Click on the 'DDL' option. It gives you the list of commands needed to create the particular sqlite table CONTENT.
5) Copy these commands, select the 'SQL' option and paste them there. Change the name of the destination table DEST (from the default name CONTENT) to whatever you want.
6) Click on 'Execute SQL'. This should give you a copy of the table CONTENT from db1 under the name DEST. The main benefit of doing it this way is that all the indexes are also created on the DEST table exactly as they were on the CONTENT table.
7) Now just click and drag the DEST table from the database db1 to the database dbNew.
8) Now just delete the database db1.
9) Go back to step 3 and repeat with the next database db2, etc.
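If you prefer to script this instead of using the GUI, sqlite3's ATTACH DATABASE can do the copy; the catch, as the steps above suggest, is that CREATE TABLE ... AS SELECT does not carry the indexes over, so they have to be recreated from the source schema. A sketch, reusing the example names db1.dat, CONTENT and DEST from the steps:
-- Run inside dbNew.dat with the sqlite3 shell.
ATTACH DATABASE 'db1.dat' AS src;
CREATE TABLE DEST AS SELECT * FROM src.CONTENT;  -- copies the data, but not the indexes
SELECT sql FROM src.sqlite_master
 WHERE type = 'index' AND tbl_name = 'CONTENT';  -- prints the original CREATE INDEX statements
DETACH DATABASE src;
The printed CREATE INDEX statements then need to be edited to point at DEST and executed, which is exactly the bookkeeping the DDL-copying steps above handle for you.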

PostgreSQL COPY FROM Command Help

I have a CSV file which is quite large (a few hundred MBs) that I am trying to import into a Postgres table. The problem arises when there is a primary key violation (a duplicate record in the CSV file).
If it were a one-off I could manually filter out those records, but these files are generated by a program that produces such data every hour, and my script has to import them into the database automatically.
My question is: is there some flag I can set in the COPY command, or in Postgres, so it can skip the duplicate records and continue importing the file into the table?
My thought would be to approach this in two ways:
Use a utility that can help create an "exception report" of duplicate rows, such as this one during the COPY process.
Change your workflow by loading the data into a temp table first, massaging it for duplicates (maybe JOIN with your target table and mark all existing in the temp with a dup flag), and then only import the missing records and send the dups to an exception table.
I personally prefer the second approach, but that's a matter of specific workflow in your case.
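A minimal sketch of that second approach, assuming a hypothetical target table events with primary key id and an hourly file at /data/hourly.csv (adjust the names, columns and COPY options to your schema):
-- Load into a staging table shaped like the target, then insert only the missing keys.
CREATE TEMP TABLE staging (LIKE events INCLUDING DEFAULTS);
COPY staging FROM '/data/hourly.csv' WITH CSV;  -- or \copy from psql if the file is on the client
INSERT INTO events
SELECT DISTINCT ON (s.id) s.*
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = s.id);
The duplicates (rows in staging whose id already existed in events) can then be copied to an exception table for review. On PostgreSQL 9.5 and later, INSERT INTO events SELECT * FROM staging ON CONFLICT (id) DO NOTHING achieves the same skip-the-duplicates behaviour in one statement.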

What FoxPro data tools can I use to find corrupted data?

I have some SQL Server DTS packages that import data from a FoxPro database. This was working fine until recently. Now the script that imports data from one of the FoxPro tables bombs out about 470,000 records into the import. I'm just pulling the data into a table with nullable varchar fields so I'm thinking it must be a weird/corrupt data problem.
What tools would you use to track down a problem like this?
FYI, this is the error I'm getting:
Data for source column 1 ('field1') is not available. Your provider may require that all Blob columns be rightmost in the source result set.
There should not be any blob columns in this table.
Thanks for the suggestions. I don't know if it is a corruption problem for sure. I just started downloading FoxPro from my MSDN subscription, so I'll see if I can open the table. SSRS opens the table; it just chokes before running through all the records. I'm just trying to figure out which record it's having a problem with.
Cmrepair is an excellent freeware utility to repair corrupted .DBF files.
Have you tried writing a small program that just copies the existing data to a new table?
Also,
http://fox.wikis.com/wc.dll?Wiki~TableCorruptionRepairTools~VFP
My company uses FoxPro to store quite a bit of data... In my experience, data corruption is very obvious, with the table failing to open in the first place. Do you have a copy of FoxPro to open the table with?
At 470,000 records you might want to check to see if you're approaching the 2 gigabyte limit on FoxPro table size. As I understand it, the records can still be there, but become inaccessible after the 2 gig point.
#Lance:
if you have access to Visual FoxPro command line window, type:
SET TABLEVALIDATE 11
USE "YourTable" EXCLUSIVE && If the table is damaged, VFP should display an error here
PACK && Removes records marked for deletion and rebuilds the table and its indexes
PACK MEMO && Repacks the memo (.FPT) file, if the table has memo fields
After doing that, the structure of the table should be valid. If you want to find fields with invalid data, you can try:
SELECT * FROM YourTable WHERE EMPTY(YourField) && All records where YourField is empty
SELECT * FROM YourTable WHERE LEN(YourMemoField) > 200 && All records with a long memo field; these may contain corrupted data
etc.
Use Repair Databases from my site (www.shershahsoft.com) for FREE (and it will always be FREE).
I have designed this program to repair damaged FoxPro/FoxBase/dBase files. The program is very quick: it will repair a 1 GB table in less than a minute.
You can assign files and folders to the program. When you start the program it will mark all the corrupted files, and clicking the Repair or Check and Repair button will repair them all. Moreover, it will create a "CorruptData" folder in the folders where the actual data exists and will keep copies of the corrupt files there.
One thing to keep in mind: always run Windows ChkDsk on the drives where you store the files, because when records are being copied to a table and a power failure occurs, lost clusters can result, which Windows converts to files during ChkDsk. After that, Repair Databases will do the job for you.
I have used many paid and free programs that repair tables, but all such programs leave extra records with garbage characters in the tables (and they are time-consuming too). The programmer then needs to find and delete such records manually. Repair Databases actually recovers the original records; the only further action you need is reindexing your files.
During the repair process a File Open dialog sometimes appears, asking you to locate the compound index file for a table with indexes. You may cancel the dialog at that point; the table will still be repaired, but you will need to reindex the file later. (This dialog may appear several times, depending on the number of corrupted indexes.)
