Deploy stored procedure scripts to multiple databases and instances in DB2

I am new to DB2/UDB. I have 3 instances, each with a different number of databases:
instance_1 - 20 databases
instance_2 - 18 databases
instance_3 - 16 databases
In total I have 54 databases, and all of them are currently in sync. I have developed a stored procedure for a requirement and have to deploy it to all 54 databases. I am doing this manually (54 times), which is very tedious and prone to human error.
Can anyone suggest a tool or approach to automate this process?
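One common approach is a small shell script that drives the DB2 command-line processor once per database. A minimal sketch, assuming the `db2` CLP is on PATH, instances are selected via the `DB2INSTANCE` environment variable, and the statement terminator in the script is `@`; the instance/database names below are placeholders standing in for your 54 databases, and the script only prints the commands unless `DEPLOY=1` is set:

```shell
#!/bin/sh
# Sketch: deploy one stored-procedure script to every database.
# All instance and database names below are placeholders.
SQL_FILE=create_my_proc.sql     # your stored-procedure DDL, @ as terminator

deploy() {                      # $1 = instance name, $2 = database name
    if [ "$DEPLOY" = "1" ]; then
        export DB2INSTANCE="$1"
        db2 connect to "$2" && db2 -td@ -svf "$SQL_FILE"
        db2 connect reset
    else
        # dry run: show what would be executed
        echo "would run: DB2INSTANCE=$1 db2 -td@ -svf $SQL_FILE against $2"
    fi
}

# One loop per instance; replace the lists with your real database names.
for db in SALESDB HRDB;  do deploy instance_1 "$db"; done
for db in INVDB;         do deploy instance_2 "$db"; done
for db in AUDITDB;       do deploy instance_3 "$db"; done
```

Run it once without `DEPLOY=1` to review the command list, then with `DEPLOY=1` to apply the script everywhere; logging each database's output to a file makes it easy to spot the one that failed.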

Related

How to achieve real time reporting in Azure SQL server in one database?

We have a stored procedure in an Azure SQL database (Premium pricing tier with 250 DTU) which processes around 1.3 billion records and inserts the results into tables that we display on a reporting page. The stored procedure takes around 15 minutes to run, and we have scheduled it weekly as an Azure WebJob because we use the same database for writing actual user logs.
But now we want near-real-time reporting (at most 5 minutes of lag), and if I schedule the WebJob to execute the stored procedure every 5 minutes, my application will shut down.
Is there any other approach to achieve real time reporting?
Is there any Azure services available for it?
Can I use azure databricks to execute the stored procedure? Will it help?
Yes, you can run read queries against Premium replica databases by adding this to your connection string:
ApplicationIntent=ReadOnly;
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-read-scale-out
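For ad-hoc checks you can express the same read intent from the command line; a minimal sketch, assuming the `sqlcmd` utility (whose `-K` flag sets the application intent, the equivalent of `ApplicationIntent=ReadOnly` in a connection string) and placeholder server, database, and credential values; the function prints the command rather than executing it:

```shell
#!/bin/sh
# Sketch: route a query to a read-only replica via sqlcmd.
# Server, database and credentials are placeholders.
SERVER=myserver.database.windows.net
DB=MyDatabase

readonly_query() {   # $1 = query text; prints the command instead of running it
    echo "sqlcmd -S $SERVER -d $DB -U user -P *** -K ReadOnly -Q \"$1\""
}

readonly_query "SELECT COUNT(*) FROM dbo.ReportResults"
```

Pointing the reporting workload at the read-only replica this way keeps the heavy aggregation query off the primary that handles your user-log writes.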

SQL Azure migration wizard taking long time

I am using the SQL Azure migration wizard to migrate one of my databases to a different instance. It literally took more than 12 hours for the BCP out alone. The only change I have made is to increase the packet size from 4096 to 65535 (the maximum). Is that wrong? I am doing this from an AWS server that is in the same subnet where the SQL Server RDS instance is hosted.
Analysis completed at 7/16/2016 1:53:31 AM -- UTC -> 7/16/2016 1:53:31 AM
Any issues discovered will be reported above.
Total processing time: 12 hours, 3 minutes and 14 seconds
There is a blog post from the SQL Server Customer Advisory Team (CAT) that goes into a few details about optimal settings to get data into and out of Azure SQL databases.
Best Practices for loading data to SQL Azure
When loading data to SQL Azure, it is advisable to split your data into multiple concurrent streams to achieve the best performance.
Vary the BCP batch size option to determine the best setting for your network and dataset.
Add non-clustered indexes after loading data to SQL Azure.
If, while building large indexes, you see a throttling-related error message, retry using the online option.
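The "multiple concurrent streams" advice above can be sketched as one `bcp` stream per table, run in parallel, with the `-b` (batch size) and `-a` (packet size) flags being the knobs to tune. Server, database, table names, credentials, and the two sizes below are all placeholders; the script prints the commands rather than executing them:

```shell
#!/bin/sh
# Sketch: load data with one concurrent bcp stream per table.
# All names and sizes below are placeholders for illustration.
SERVER=myserver.database.windows.net
DB=MyDatabase
BATCH=10000        # vary per the advice above to find the sweet spot
PACKET=16384       # network packet size; 65535 is not always optimal

bcp_cmd() {        # $1 = table name; prints the command instead of running it
    echo "bcp $DB.dbo.$1 in $1.dat -n -S $SERVER -U user -P *** -b $BATCH -a $PACKET"
}

for t in Orders Customers Invoices; do
    bcp_cmd "$t" &   # '&' launches each stream concurrently
done
wait                 # block until all streams finish
```

Splitting very large tables into several data files, each loaded by its own stream, follows the same pattern and is usually where most of the speedup comes from.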

Will clustering Oracle Database into multiple PCs enhance performance?

There are several packages in my Oracle DBMS which sometimes fail to complete, or take more than 6 to 7 hours to finish, even though the procedures and packages are error-free.
This only occurs when multiple clients run different packages from their systems at the same time against the single server PC that hosts the database, as in the picture shown.
There are 20 client systems and 1 server PC in my office.
My question: if I use 2 or more PCs to cluster my Oracle DB, will that enhance performance, i.e. will it reduce package run time and reduce the chance of a package run failing?

Master-master database replication on 5 servers

My current project requires immediate replication between 5 databases. The servers are physically distributed across the globe. Currently we use Redis with one master installed on a separate 6th server. All database writes on any of the 5 servers are performed against this 6th master server, and the other 5 are slaves of it. This approach has a lot of flaws, and I'm looking for a solution to replace it. Any suggestions, please?

Can we mirror SQL Server tables from one DB into another with acceptable performance?

We have two servers, A and B:
on server A we have DB OPS_001 with a number of tables;
on server B we have DB XYZ with a number of tables.
We are considering a project to integrate both systems and start pointing the resulting system at the tables on server B (for foreign keys, etc.). We face technical difficulties physically moving all tables from A.OPS_001 into B.XYZ because legacy applications would need their connections rewritten and recompiled.
Is there a way to mirror server A's OPS_001 tables in B.XYZ such that performance is still acceptable (e.g. not taking 1-2 seconds for a SELECT on a primary key)? I know "acceptable" is a very generic term, but take into consideration that around 150 users rely on these 2 databases from 9am to 5pm.
I've tested linked-server views, but they are very slow.
Just so you know, A is SQL Server 2000 and B is SQL Server 2008.
EDIT:
The source DB has 220 tables and the data file itself is around 14 GB.
The solution was to migrate the database to the same server; since the databases were extremely tightly coupled, they were merged into a single database with relational integrity.
