PostgreSQL database sync

I'm new to working with databases and I'm trying to do the following:
Copy all databaseA schemas (each of which has several tables and permissions attached), without any data, to my
existing databaseB (which currently contains only one schema, with
a few tables and permissions attached).
databaseA is an Amazon Redshift database and databaseB is an Amazon RDS database. I'm connecting to both using DBeaver, with a Redshift driver for databaseA and a PostgreSQL driver for databaseB.
After the initial copy I want to run a daily cron job that does the following:
a. Compare databaseA to databaseB
b. If databaseA does not match databaseB (in terms of schemas and table permissions)
c. Then switch all permissions to match databaseB
Any feedback on how to approach this would be appreciated!

You could create a Python script that connects to both databases, and set up a cron job that spots the differences daily and updates the database.
You can use a query like this for PG:
SELECT table_schema,table_name
FROM information_schema.tables
ORDER BY table_schema,table_name;
And something like this for Redshift:
SELECT schemaname, tablename
FROM PG_TABLE_DEF;
From there it's just a matter of comparing the two and deciding if you want to update certain tables. Good luck.
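A minimal sketch of that comparison step, assuming each query's result has already been fetched into a set of (schema, table) tuples (the connection handling is omitted, and the function name here is my own invention):

```python
# Sketch of the daily comparison. Assumes the two queries above have
# each been loaded into a set of (schema, table) tuples; the actual
# database connections (Redshift and Postgres drivers) are out of scope.
def diff_tables(redshift_tables: set, rds_tables: set) -> dict:
    """Report tables present in one database but not the other."""
    return {
        "missing_in_rds": sorted(redshift_tables - rds_tables),
        "extra_in_rds": sorted(rds_tables - redshift_tables),
    }

# Example with hard-coded results standing in for the two queries:
a = {("public", "orders"), ("sales", "leads")}
b = {("public", "orders")}
print(diff_tables(a, b))
# {'missing_in_rds': [('sales', 'leads')], 'extra_in_rds': []}
```

The same shape of diff can be extended to permissions by fetching grants (e.g. from information_schema.table_privileges) into comparable tuples.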

I don't have experience with AWS; I'm translating the little knowledge I have from OCS, which is a younger solution than AWS.
First, Amazon Redshift is tailored for data warehousing, while RDS is a cloud relational database. I'm not sure what your aim is in copying from Redshift to RDS. It would be more natural to copy/clone the DB (or multiple DBs) to the data warehouse, unless this is some form of backup. You might need to look into the architecture of your solution.
Oracle Cloud, which is fairly new, provides a service for copying. Amazon should have a similar solution, as they have been in the cloud business longer.
I have had a look at the Amazon documentation. Your challenge has its solution "backwards" here.
After copying, my assumption is that the two DBs would be structurally similar. What is causing the changes on dbA? It feels like you don't want to use the permissions on dbA; maybe it's compromised.
My suggestion is to use permissions to prevent changes to dbA. Look at the IAM documentation and check the logs for dbA. If you really need to develop a solution, use the API or CLI to interface with the DB.

Related

.NET Core Identity - migrate from PostgreSQL to SQL Server

I have a .NET Core 3.0 MVC website with Identity. The database is a PostgreSQL database (mainly because of better performance with geographic data). But since the only other person who can work with PostgreSQL has quit, I have to migrate to SQL Server (because of internal policy).
But there isn't much information on the big web on this specific migration.
I got a few ideas but since setting up a test takes quite some time, I wanted to check here first.
- Is it just a matter of copying all tables between the databases (copy/export data, change the connection string, people won't even notice the change)?
- Write a small script using Entity Framework, copying all users to the new database with a default password; users have to change their password on first login.
- People have to re-register.
- A combination of the above.
EDIT: the problem is not the tables and data types; my concern is the passwords and the hashes. Can I just copy all the values to the SQL Server database and can people just log in?
There is a password hash in the table, and I was thinking it may have used other variables, like the database engine, to create the hash.
If the application is built on the same stack, let's say in your case .NET Core with ASP.NET Identity, then the hashes can be migrated with no issue at all. Everything is handled by .NET and is not bound to the underlying datastore.
Create the schema, populate it, and you will be good to go. No need to rehash or make your users change their passwords. Just move the data.
You will need to figure out which data types you are using in PostgreSQL and what their equivalents are in MSSQL. Most data types are the same or similar, though there may be a few with no direct equivalent.
There are lots of ways to move data between databases. One simple option in this case is to dump your Postgres DB using pg_dump. This will get you a text file with SQL statements to recreate the database. You can then modify those SQL statements as necessary to work on your MSSQL database.
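As a rough illustration of that type-mapping step, a small translation table could drive a script that rewrites the dumped DDL. The mapping below is a partial sketch of common equivalences, not an authoritative table; verify every type you actually use against both engines' documentation:

```python
# Hypothetical sketch: mapping common PostgreSQL column types to
# SQL Server equivalents before replaying a pg_dump script on MSSQL.
# Partial and illustrative only; check each type you rely on.
PG_TO_MSSQL = {
    "serial": "int identity(1,1)",
    "bigserial": "bigint identity(1,1)",
    "text": "nvarchar(max)",
    "bytea": "varbinary(max)",
    "boolean": "bit",
    "timestamptz": "datetimeoffset",
    "timestamp": "datetime2",
    "uuid": "uniqueidentifier",
    "double precision": "float",
}

def translate_type(pg_type: str) -> str:
    """Return the MSSQL equivalent for a PostgreSQL type name, or the
    original name when the two engines already agree (e.g. int, date)."""
    return PG_TO_MSSQL.get(pg_type.lower().strip(), pg_type)
```

A rewrite pass over the dump would then apply translate_type to each column definition it finds.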

Mirror database between different Azure Accounts

We have two different Azure Accounts with each having a set of databases.
Now, there is one database that we need to access in both accounts, with joins involving local tables (especially one huge local table).
We tried distributed queries, but the performance dips with joins. We therefore want to mirror a table from the other Azure account into this account's DB so that it becomes a local table and joins work faster.
We only need read-only access to the mirrored table; the changes need to be reflected almost instantaneously, though the frequency of updates is not that high.
What is the way of achieving this?
It seems like you are using Azure SQL. Please mention whether your DB is Azure SQL or MS SQL on a VM.
If it is Azure SQL, you can simply replicate your DB with the sync tool available in Azure.
Please follow this link:
https://azure.microsoft.com/en-in/documentation/articles/sql-database-get-started-sql-data-sync/
You can use one-way sync, so operations will be faster.
HTH.

SQL Server move data between databases

We have a requirement where we will have to move data between different database instances on a regular basis (e.g. some customers are willing to pay more for better performance), so this is not going to be a one-off.
The database tables have referential integrity. Is there a way in which this can be done without rewriting a SQL script (or using some other method) every time we migrate a customer's data?
I came across this: How to move data between multiple database's table while maintaining foreign-key relationships/referential integrity?. However, it appears that we have to write a script every time we migrate data (please correct me if I misunderstood the answer on that thread).
Thanks
Edit:
Both servers are using SQL Server 2012 (same version). It's an Azure SQL Server database.
They are not necessarily linked (no firewall between them).
We are only transferring some data, not the whole database. This is only for certain customers who opted to pay more.
The schemas are exactly the same in both databases.
Preyash - please see the documentation on the Split-Merge tool. The Split-Merge tool enables you to move data between databases, as you have described, based on a sharding key (e.g., customer ID). One modification that you will need for your application is to add a shard map (i.e., a database that understands the global state of which customers reside in which databases).
Have a look at Azure Data Sync. It is much more aligned with your requirements, though you may end up having another SQL Azure DB to maintain as a hub. Azure Data Sync follows a hub-and-spoke pattern and will let you do flexible directional syncs with a syncing gap of a few minutes. It is simpler, and you can set it up very quickly without any scripts, as you wanted.

Continuous Deployment in Cloud

I am assigned the task of continuous deployment from a development server to a production server.
In my development server all the database objects are created under the 'dbo' schema. But in the production server there will be different schemas based on each tenant's company list.
For e.g., in my development server tables are created like
dbo.ABC
dbo.XYZ
And when I create a tenant (Omkar: db; Sarkur, Mathur: schemas), the database objects will be like
Sarkur.ABC, Sarkur.XYZ
Mathur.ABC, Mathur.XYZ
Now, I have to compare these two databases to check whether there are any changes in the structure of the database objects, or any addition/deletion of database objects. If so, those changes have to be synchronized in the production database.
If anyone knows how to compare these two different schemas' objects, please let me know.
One option that I know of looks suitable:
Flyway:
It is easy to set up and simple to master. Flyway lets you regain control of your database migrations with pleasure and plain SQL.
It solves only one problem and solves it well. Flyway migrates your database, so you don't have to worry about it anymore.
Made for continuous delivery. Let Flyway migrate your database on application startup. Releases have never been this easy.
Big plus: it's an open-source framework!
http://flywaydb.org/

Generic Database Monitoring Tool

It seems like something like this should exist, but I have never heard of it and would find such a utility incredibly useful. Many times, I develop applications that have a database backend - SQL Server or Oracle. During development, end users of the app are encouraged to test the site - I can verify this by looking for entries in the database... if there are entries, they have been testing; if not, they haven't.
What I would like is a tool/utility that would do this checking for me. I would specify the database and connection parameters, and the tool would poll the database periodically (based on values that I specify) and alert me if there was any new activity in the database (perhaps with a notification in the system tray). I could also specify multiple database scenarios to monitor in the tool. If such an app existed, I wouldn't have to manually run queries against databases looking for new activity. I'm aware of SQL Profiler, but when I reviewed it, it seemed like overkill for what I wanted to do (and it also wouldn't do Oracle DB monitoring). Also, to use SQL Profiler you have to be an admin of the database, and I would need to monitor databases where I only have a read-only account.
Does someone know if such a tool exists?
Sounds like something really easy to write yourself. Just query the database schema, then do a select count(*) or select max(lastUpdateTime) query on each table and save the result. If something is different, send yourself an email. JDBC in Java gives you access to the schema information in a cross-database manner. I don't know about ADO.
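A rough sketch of that count-and-compare idea (using Python's sqlite3 module here purely to keep the example self-contained; swap in your own read-only driver, and put an email or tray notification where the changed-table list comes back):

```python
import sqlite3

def snapshot_counts(conn) -> dict:
    """Return {table_name: row_count} for every user table."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    # Table names come from the schema catalog itself, so the
    # f-string interpolation below stays within known identifiers.
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

def check_activity(conn, previous: dict) -> list:
    """Compare a fresh snapshot against the saved one and return the
    tables whose row counts changed (where you would alert instead)."""
    current = snapshot_counts(conn)
    return [t for t, n in current.items() if previous.get(t) != n]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (id INTEGER)")
before = snapshot_counts(conn)          # {'visits': 0}
conn.execute("INSERT INTO visits VALUES (1)")
print(check_activity(conn, before))     # ['visits']
```

A cron job or simple timer loop could call check_activity on a schedule and persist the latest snapshot between runs; only read access is needed.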
