BAM Archive database - Reduce size - sql-server

Is there a way to reduce the size of the BAM Archive database? We are currently moving data from the BAM Primary DB to the BAM Archive DB, but the archive now contains a large amount of data and takes up a huge amount of space. I tried using BizTalk Terminator for this, but the database will not connect in BT.

Related

Multiple data files for a single database in different drives

We are receiving thousands of records daily in our database, let's call it 'abc', and the size of the mdf file has now grown to 800 GB. The drive where the primary mdf file is kept is now full because of that file, and we cannot add more storage to that drive due to cost constraints.
Can we have multiple data files for a database on different drives? Would there be any issue in doing this?
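Yes, SQL Server supports multiple data files spread across drives. A minimal sketch of what adding a secondary file might look like, assuming a hypothetical database named abc, a new filegroup, and an example path on an E: drive with free space:

-- Add a filegroup to hold data on the other drive (filegroup and file names are illustrative)
ALTER DATABASE abc ADD FILEGROUP SECONDARY_FG;

-- Add a data file that lives on the other drive
ALTER DATABASE abc
ADD FILE (
    NAME = abc_data2,                      -- logical file name
    FILENAME = 'E:\SQLData\abc_data2.ndf', -- physical path on the second drive
    SIZE = 100GB,
    FILEGROWTH = 10GB
) TO FILEGROUP SECONDARY_FG;

The new file could equally be added to the PRIMARY filegroup; a separate filegroup just makes it easier to place new or rebuilt objects on the new drive, since existing data does not move by itself.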

SQL Server Query Failing

I have two servers, Server A and Server B. Both have SQL Server 2016 SP2-CU7 installed, with 64 GB of memory and the same amount of disk space for tempdb. I allocated the same amount of memory (48 GB) to both servers.
When I run a script, it fails on Server A with this error:
Could not allocate space for object 'dbo.SORT temporary run storage: 140737993048064' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
But on Server B it runs in 3 minutes without any issues. I requested more space for the tempdb drive on Server A, but the script still fails.
How can I find out what is wrong with Server A?
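One way to start diagnosing (a sketch, not specific to this workload) is to compare the tempdb file layout and growth settings of the two servers, and to watch what is consuming tempdb while the script runs:

-- Compare tempdb file configuration (size, max size, growth) between the two servers
SELECT name, physical_name,
       size * 8 / 1024 AS size_mb,
       max_size,            -- -1 = unlimited, 0 = no growth allowed
       growth, is_percent_growth
FROM tempdb.sys.database_files;

-- While the failing script runs, see what is using tempdb space
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sorts, hashes, spools
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;

If the internal-object figure balloons only on Server A, comparing the two execution plans is a reasonable next step, since the SORT error points to a spill in tempdb.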

Huge log file in SQL Server

I have a SQL database that I have sent to another location for further development (there may or may not be changes in the schema). I created a backup file and restored it on another SQL Server.
I found that the log file is very large (14 GB) even though I only have 4 tables with 1000 rows, and the database will not grow much. I ran a query and found the following (sizes are in MB).
data_size   data_used_size   log_size   log_used_size
801.00      2.06             14220.75   55.63
What I did:
I shrunk the log file using SQL Server Management Studio and also switched the recovery model to SIMPLE, as we have only a few updates to this database, which can be redone if a transaction fails at any point. I created a backup, restored it, and found that the log file size has decreased considerably, as shown below.
total_size   data_size   data_used_size   log_size   log_used_size
802.00       801.00      2.06             1.00       0.46
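For reference, roughly the equivalent T-SQL for the shrink described above, assuming a hypothetical database MyDb whose log file has the logical name MyDb_log:

USE MyDb;

-- Switch to SIMPLE recovery so the log is truncated at checkpoints instead of waiting for log backups
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Shrink the log file (target size in MB)
DBCC SHRINKFILE (MyDb_log, 1);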
Question 1: Since the database size is very small, should we decrease the initial size of the database?
Question 2: Is it OK now to send this .bak file for restoring the database at another location?
Answer to Q1:
It is always a good idea to estimate your data growth and set an initial size for your database. The reason is simply to prevent the SQL data file from performing auto-growth operations, which are very expensive. If you are not expecting any data growth, then it does not matter whether or not you set an initial size.
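If growth is expected, the initial size and a fixed growth increment can be set up front; a minimal sketch, assuming a hypothetical database MyDb with a data file whose logical name is MyDb_data:

-- Pre-size the data file and use a fixed increment instead of percentage auto-growth
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_data, SIZE = 2GB, FILEGROWTH = 512MB);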
Answer to Q2:
You can send the backup file to any location, as long as the SQL Server version on which you restore it is the same or higher. The only point to note is the data in the backup file itself: consider encryption if you have sensitive data.
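If the data is sensitive, the backup itself can be compressed and encrypted (SQL Server 2014 and later); a rough sketch, assuming a hypothetical database MyDb and an already-created server certificate named BackupCert:

BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);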

Automating PostgreSQL database backup from a small SSD to multiple hard drives daily

Relative newcomer to SQL and Postgres here, so this is a relatively open question regarding backing up daily data from a stream. Specific commands/scripts would be appreciated if it's simple; otherwise I'm happy to be directed to more specific articles/tutorials on how to implement what needs to be done.
Situation
I'm logging various data streams from some external servers, amounting to a few GB per day. I want to be able to store this data on larger hard drives, which will then be used to pull information from for analysis at a later date.
Hardware
x1 SSD (128GB) (OS + application)
x2 HDD (4TB each) (storage, 2nd drive for redundancy)
What needs to be done
The current plan is to have the SSD store a temporary database consisting of the daily logged data. When server load is low (early morning), dump the entire temporary database onto two separate backup instances, one on each of the two storage disks. The motivation for storing a temporary database is to reduce the load on the hard drives. Furthermore, the daily data is small enough that it can be copied over to the storage drives before server load picks up.
Questions
Is this an acceptable method?
Is it better/safer to just push data directly to one of the storage drives, treat that as the primary database, and automate a scheduled backup from that drive to the second storage drive?
What specific commands would be required to do this while ensuring data integrity (i.e. new data will still be being logged while a backup is in progress)?
At a later date when budget allows the hardware will be upgraded but the above is what's in place for now.
thanks!
First rule when building a backup system - do the simplest thing that works for you.
Running pg_dump will ensure data integrity. You will want to pay attention to what the last item backed up is to make sure you don't delete anything newer than that. After deleting the data you may well want to run a CLUSTER or VACUUM FULL on various tables if you can afford the logging.
Another option would be to have an empty template database and do something like:
Halt application + disconnect
Rename database from "current_db" to "old_db"
CREATE DATABASE current_db TEMPLATE my_template_db
Copy over any other bits you need (sequence numbers etc)
Reconnect the application
Dump old_db + copy backups to other disks.
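In SQL terms, the rename-and-recreate steps above would look roughly like this (using the database names from the list; the rename requires that no sessions are connected to current_db):

-- Step 2: rename the live database
ALTER DATABASE current_db RENAME TO old_db;

-- Step 3: create a fresh, empty database from the prepared template
CREATE DATABASE current_db TEMPLATE my_template_db;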
If what you actually want is two separate live databases, one small quick one and one larger for long-running queries, then investigate tablespaces. Create two tablespaces: the default on the big disks and the "small" one on your SSD. Put your small DB on the SSD. Then you can just copy from one table to another with foreign data wrappers (FDW), or dump/restore, etc.
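A minimal sketch of that tablespace layout, assuming hypothetical, empty directories /mnt/ssd/pgdata and /mnt/bigdisk/pgdata owned by the postgres OS user:

-- One tablespace on the SSD, one on the large disks
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE bulk_hdd LOCATION '/mnt/bigdisk/pgdata';

-- Small, quick database on the SSD; the long-term database on the big disks
CREATE DATABASE small_db TABLESPACE fast_ssd;
CREATE DATABASE archive_db TABLESPACE bulk_hdd;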

Copying 6000 tables and data from SQL Server to Oracle ==> fastest method?

I need to copy the tables and data (about 5 years of data, 6200 tables) stored in SQL Server. I am using DataStage and an ODBC connection, and DataStage automatically creates the tables with data, but it is taking 2-3 hours per table as the tables are very large (0.5 GB, 300+ columns, and about 400k rows).
How can I achieve this faster? At this rate I am only able to copy 5 tables per day, but I need to move all 6000 tables within 30 days.
6000 tables at 0.5 GB each would be about 3 terabytes. Plus indexes.
I probably wouldn't go for an ODBC connection, but the question is where is the bottleneck.
You have an extract stage from SQL Server. You have the transport from the SQL Server box to the Oracle box. You have the load.
If the network is the limiting capability, you are probably best off extracting to a file, compressing it, transferring the compressed file, uncompressing it, and then loading it. External tables in Oracle are the fastest way to load data from flat files (delimited or fixed length), preferably spread over multiple physical disks to spread the load, and without logging.
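For illustration, an Oracle external table over a delimited flat file might look roughly like this (the directory object, file name, columns, and target table orders are all hypothetical):

-- Directory object pointing at the folder holding the extracted files
CREATE DIRECTORY data_dir AS '/u01/loads';

-- External table that reads the pipe-delimited extract
CREATE TABLE orders_ext (
    order_id NUMBER,
    customer VARCHAR2(100),
    amount   NUMBER(12,2)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
    )
    LOCATION ('orders.dat')
);

-- Direct-path load into the real table; combine with NOLOGGING on the target to minimise redo
INSERT /*+ APPEND */ INTO orders SELECT * FROM orders_ext;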
Unless there's a significant transformation happening, I'd forget datastage. Anything that isn't extracting or loading is excess to be minimised.
Can you do the transfer of separate tables simultaneously in parallel?
We regularly transfer large flat files into SQL Server and I run them in parallel - it uses more bandwidth on the network and SQL Server, but they complete together faster than in series.
Have you thought about scripting out the table schemas, creating them in Oracle, and then using SSIS to bulk-copy the data into Oracle? Another alternative would be to use linked servers and a series of "SELECT * INTO xxx" statements to copy the schema and data over (minus key constraints), but I think the performance would be quite pitiful with 6000 tables.
