I am using Bucardo to replicate data in a database. I have one database, called mydb, and another called mydb2. They both contain identical tables, called "data" in both cases. Following the steps on this website, I have installed Bucardo and added the two databases:
bucardo_ctl add database mydb
bucardo_ctl add database mydb2
and added the tables:
bucardo_ctl add all tables
Now when I try to add a sync using the following command:
bucardo_ctl add sync testfc source=mydb targetdb=mydb2 type=pushdelta tables=data
I get the following error:
DBD::Pg::st execute failed: ERROR: error from Perl function "herdcheck": Cannot have goats from different databases in the same herd (1) at line 17. at /usr/bin/bucardo_ctl line 3346.
Anyone have any suggestions? Any would be appreciated.
So, in the source option you should put the name of the herd (which, as far as I know, is a named group of tables).
Then, instead of:
bucardo_ctl add all tables
use
bucardo_ctl add all tables --herd=foobar
And instead of using
bucardo_ctl add sync testfc source=mydb targetdb=mydb2 type=pushdelta tables=data
use
bucardo_ctl add sync testfc source=foobar targetdb=mydb2 type=pushdelta tables=data
The thing is that the source option is not the place where you put the source database, but the herd of tables.
Remember that pushdelta syncs are for tables with primary keys, while fullcopy syncs work whether or not the tables have a PK.
Hope that helps.
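Putting the answer together, the full sequence would look something like this (a sketch assuming the old bucardo_ctl interface used in the question; foobar is just an arbitrary herd name):

```shell
# Register both databases with Bucardo
bucardo_ctl add database mydb
bucardo_ctl add database mydb2
# Put all tables into a named herd instead of leaving them herd-less
bucardo_ctl add all tables --herd=foobar
# The sync's source is the herd, not the source database
bucardo_ctl add sync testfc source=foobar targetdb=mydb2 type=pushdelta
bucardo_ctl start
```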
I imported data from Power BI into SQL Server.
Additionally I created own database with commands below:
CREATE DATABASE MY_DW
GO
USE MY_DW
GO
Now I want to copy all these tables into my database named MY_DW. Can anybody help me solve this problem and copy all the tables into my database?
Please check https://www.sqlshack.com/how-to-copy-tables-from-one-database-to-another-in-sql-server/.
This link suggests various methods to copy the data tables from one database to another.
The following approach could resolve your issue:
Right-click the imported database and choose Tasks -> Generate Scripts
On the Introduction page, click Next
Select the database objects (tables, in your case) to script, then click Next
On the "Specify how scripts should be saved" page, go to Advanced -> Types of data to script -> Schema and data, then click Next
Review your selections and click Next
Script generation then takes place; run the saved script against the database you created, MY_DW
Another approach:
Assuming that the databases are on the same server:
The query below will create the table in your database and copy the data into it (without constraints):
SELECT * INTO MY_DW.dbo.Table_Name
FROM ImportedDB.dbo.Table_Name
And if the destination table already exists, the query below will insert the data into it:
INSERT INTO MY_DW.dbo.Table_Name
SELECT * FROM ImportedDB.dbo.Table_Name
Final approach:
Assuming that the source database is on a linked server:
In the case of linked servers, the four-part object naming convention applies, as below. Note that SELECT ... INTO cannot target a linked server, so run it on the destination server with a local name for the new table:
SELECT * INTO [MY_DW].[dbo].[Table_Name]
FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
And the query below will insert the data into an existing table, which can itself be addressed through a linked server:
INSERT INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
SELECT * FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
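Since the goal is to copy all the tables, here is a hedged T-SQL sketch that generates one SELECT ... INTO statement per user table from the source database's catalog views (same-server case; ImportedDB stands in for the actual name of the imported database):

```sql
-- Build one SELECT ... INTO per user table in the source database.
-- Review the PRINTed script before running it; constraints and
-- indexes are NOT copied by SELECT ... INTO.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'SELECT * INTO MY_DW.' + s.name + N'.' + t.name +
               N' FROM ImportedDB.' + s.name + N'.' + t.name + N';' + CHAR(10)
FROM ImportedDB.sys.tables AS t
JOIN ImportedDB.sys.schemas AS s ON t.schema_id = s.schema_id;
PRINT @sql;  -- inspect first, then: EXEC sp_executesql @sql;
```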
I have a PostgreSQL database for code (Flask, Ember) that is being developed. I did a pg_dump to back up the existing data. Then I added a column in the code, so I have to create the database again for the new column to exist in the database. When I try to restore the data with psql -d dbname -f dumpfile I get many errors such as 'relation "xx" already exists' and 'violates foreign key constraint'.
I'm new to this. Is there a way to restore old data to a new empty database that has all the relationships set up already? Or do I have to add a column "by hand" to the database whenever I add a column in the code, to keep the data?
The correct way to proceed is to use ALTER TABLE to add a column to the table.
When you upgrade code, you can simply replace the old code with the new. Not so with a database, because it holds state: you have to provide SQL statements that modify the existing database so that it changes to the desired new state.
To keep this manageable, use specialized software like Flyway or Liquibase.
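For example, if the new code adds a column to an existing table, the migration is a single ALTER TABLE statement (the table and column names here are made up for illustration); with Flyway it would live in a versioned file such as V2__add_middle_name.sql:

```sql
-- Migration: bring an existing database up to the new schema
-- without losing its data.
ALTER TABLE users ADD COLUMN middle_name text;
```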
When you did the pg_dump, you dumped the data and table structure, but the dump does not drop any tables. Now you are trying to restore the dump, and that will attempt to re-create the tables.
You have a couple options (the first is what I'd recommend):
Add --clean to your pg_dump command -- this will emit DROP statements so all the tables are dropped when you go to restore the dump file.
You can also add --data-only to your pg_dump command -- this will dump only the existing data and will not attempt to re-create the tables. However, you will have to truncate your tables (or delete the data from them) first, so as not to encounter any FK errors or PK collisions.
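As a sketch, the two options look like this (dbname and the file names are placeholders):

```shell
# Option 1: dump with --clean so the restore drops and re-creates each table first
pg_dump --clean -f dumpfile.sql dbname
psql -d dbname -f dumpfile.sql

# Option 2: dump data only; truncate the tables yourself before restoring
pg_dump --data-only -f data.sql dbname
```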
We have duplicate data in entities in Master Data Services (not in the staging tables). How can we delete these? We cannot delete each row by hand, because there are more than 100.
Did you create a view for this entity? see: https://msdn.microsoft.com/en-us/library/ff487013.aspx
Do you have access to the database via SQL Server Management Studio?
If so:
Write a query against the view that returns the value of the Code field for each record you want to delete.
Write a query that inserts the following into the staging table for that entity: the Code (from step 1), a BatchTag, and an ImportType of 4 (delete)
Run the import stored proc EXEC [stg].[udp_YourEntityName_Leaf] See: https://msdn.microsoft.com/en-us/library/hh231028.aspx
Run the validation stored proc see: https://msdn.microsoft.com/en-us/library/hh231023.aspx
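A hedged T-SQL sketch of steps 1-3, assuming a leaf entity named Product with staging table stg.Product_Leaf and a subscription view named vw_Product (all of these names are hypothetical; substitute your own entity and view names):

```sql
-- Step 2: stage the members to delete (ImportType 4 = delete)
INSERT INTO stg.Product_Leaf (ImportType, BatchTag, Code)
SELECT 4, 'DeleteDups', Code
FROM mdm.vw_Product                 -- subscription view from step 1
WHERE Name LIKE '%dup%';            -- whatever condition identifies the duplicates

-- Step 3: process the staged batch
EXEC stg.udp_Product_Leaf
     @VersionName = 'VERSION_1',
     @LogFlag = 1,
     @BatchTag = 'DeleteDups';
```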
Use ImportType 6 instead of 4: with ImportType 4 the deletion will fail if the Code you are trying to delete is referenced by a domain-based attribute in another entity. All the other steps remain the same as Daniel described.
I deleted the duplicate data from the transaction tables, which cleared the duplicates from the UI as well.
MDS comes out-of-the-box with two front-end UIs:
Web UI
Excel plugin
You can use both of them to easily delete multiple records. I'd suggest using the excel plugin.
Are there any Domain-based attributes linked to the entity you're deleting values from? If so, if the values are related to child entity members, you'll have to delete those values first.
We're using Magento 1.4.0.1 and want to use an extension from a 3rd party developer. The extension does not work, because of a join to the table "sales_flat_shipment_grid":
$collection = $model->getCollection()->join('sales/shipment_grid', 'increment_id=shipment', array('order_increment_id'=>'order_increment_id', 'shipping_name' =>'shipping_name'), null,'left');
Unfortunately this table does not exist in our database, so the error "Can't retrieve entity config: sales/shipment_grid" appears. If I comment this part out, the extension works, but I guess it does not work properly.
Does anybody know something about this table? There is a backend option for the catalog to use "flat tables", but that applies only to the catalog, and those tables already exist no matter which option is checked.
As is obvious from the table name, this table contains information about shipments and is used for the grid in the backend. The problem is that this table was introduced in 1.4.1.1, so you won't find it in your store.
I see 3 ways of solving the problem:
You can create this table and write a script that fills it with the necessary data via cron
You can rewrite the SQL query in that 3rd-party extension so that it takes the necessary data from other sources
You can upgrade your Magento to at least 1.4.1.1 (highly recommended)
How do you combine several SQLite databases (one table per file) into one big SQLite database containing all the tables? E.g. you have database files db1.dat, db2.dat, db3.dat... and you want to create one file dbNew.dat which contains the tables from all of db1, db2...
Several similar questions have been asked on various forums. I posted this question (with an answer) for a particular reason: when you are dealing with several tables and have indexed many fields, it causes unnecessary confusion to re-create the indexes properly in the destination database tables. You can easily miss an index or two, which is just annoying. The method given here also copies the indexes, and it can deal with large amounts of data, i.e. when you really have GBs of tables. Here are the steps:
Download sqlite expert: http://www.sqliteexpert.com/download.html
Create a new database dbNew: File-> New Database
Load the 1st sqlite database db1 (containing a single table): File-> Open Database
Click on the 'DDL' option. It gives you the list of commands needed to create the particular SQLite table CONTENT.
Copy these commands, then select the 'SQL' option and paste them there. Change the name of the destination table from the default CONTENT to whatever you want, e.g. DEST.
Click on 'Execute SQL'. This should give you a copy of the table CONTENT from db1 under the name DEST. The main benefit of doing it this way is that all the indexes of the CONTENT table are also created on the DEST table, just as they were on CONTENT.
Now just click and drag the DEST table from database db1 to database dbNew.
Now just delete the database db1.
Go back to step 3 and repeat with another database, db2, etc.
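The same merge can also be scripted without SQLite Expert, using SQLite's ATTACH DATABASE. Below is a minimal Python sketch of one round of the steps above (file and table names follow the question; the CONTENT-to-DEST renaming is skipped, since each table simply keeps its name in the new file):

```python
import sqlite3

def merge_table(dest_path, src_path, table):
    """Copy one table (schema, rows, and indexes) from src_path into dest_path."""
    con = sqlite3.connect(dest_path)
    con.execute("ATTACH DATABASE ? AS src", (src_path,))
    # Re-create the table in the destination using the source's own DDL,
    # so column types and constraints are preserved.
    (ddl,) = con.execute(
        "SELECT sql FROM src.sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()
    con.execute(ddl)
    con.execute(f"INSERT INTO main.{table} SELECT * FROM src.{table}")
    # Copy explicit indexes on that table too (auto-indexes have sql = NULL).
    for (idx_sql,) in con.execute(
        "SELECT sql FROM src.sqlite_master "
        "WHERE type='index' AND tbl_name=? AND sql IS NOT NULL",
        (table,),
    ).fetchall():
        con.execute(idx_sql)
    con.commit()
    con.execute("DETACH DATABASE src")
    con.close()
```

Call merge_table once per source file (db1.dat, db2.dat, ...) against dbNew.dat; note that index names must be unique across the whole destination database, so rename any that collide.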