Database becomes very slow during Oracle export

I have an Oracle 10g database. While the export command is running to take the scheduled backup, the database becomes extremely slow. A report that normally runs in 4 minutes (which is already high) takes 30-40 minutes, and it is killing us. At first we ran the exp command on the server where the database is installed, but we later moved it to another server to reduce the load on the production server; there was no improvement, and if anything it became slower. We average around 159 sessions on the database server. I think there is something wrong with my SGA/PGA memory allocation. Here is my initXZ10.ora file:
XZ10.__db_cache_size=512876288
XZ10.__java_pool_size=33554432
XZ10.__large_pool_size=16777216
XZ10.__shared_pool_size=436207616
XZ10.__streams_pool_size=0
*.db_block_size=8192
*.job_queue_processes=10
*.open_cursors=5000
*.open_links=20
*.open_links_per_instance=20
*.pga_aggregate_target=4865416704
*.processes=500
*.sessions=2000
*.sga_max_size=11516192768
*.sga_target=11516192768
*.transactions=500
*.db_cache_size=512876288
*.java_pool_size=33554432
*.large_pool_size=16777216
*.shared_pool_size=436207616
*.streams_pool_size=0
The server has 16 GB RAM and runs 64-bit Oracle 10.2.0.4.0.
Can somebody please help me optimize the init.ora file, or suggest any parameters to speed up the database?
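For reference, the advisory views in 10.2 can sanity-check these targets; a minimal sketch of the queries (note that sga_max_size plus pga_aggregate_target above already adds up to roughly the full 16 GB of physical RAM, leaving little for the OS and the export process):

-- Ask Oracle's own advisors how the current SGA and PGA targets are performing.
-- Column names are the documented ones for 10.2; verify on your release.
SELECT sga_size, sga_size_factor, estd_db_time_factor
FROM v$sga_target_advice
ORDER BY sga_size;

SELECT pga_target_for_estimate, pga_target_factor, estd_pga_cache_hit_percentage
FROM v$pga_target_advice
ORDER BY pga_target_for_estimate;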
Thanks everyone.

Related

Large USERSTORE_OBJPERM in SQL Server

I'm trying to make our SQL Server go faster and have noticed that no stored procedures are staying in the plan cache for any length of time. Most of the plans have been created in the last hour or so.
Running the script below, I see that USERSTORE_OBJPERM is around 3 GB and is the second-biggest memory cache on the server after the SQL buffer pool.
SELECT TOP 100 *
FROM sys.dm_os_memory_clerks
WHERE [type] = 'USERSTORE_OBJPERM';
I've run the same script on a few of our other production servers, and none of their USERSTORE_OBJPERM caches are anywhere near as large; they sit at around 200 MB.
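For comparison, a query along these lines (assuming SQL Server 2012 or later, where pages_kb is exposed in sys.dm_os_memory_clerks) ranks all memory clerks by size:

SELECT [type],
       SUM(pages_kb) / 1024 AS size_mb  -- total memory per clerk type, in MB
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;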
My question is: has anyone seen a USERSTORE_OBJPERM cache at around 3 GB, and what might have caused it?
I ran the following to try to clear the cache; it went down by 100 MB or so and instantly started rising again.
DBCC FREESYSTEMCACHE ('ObjPerm - DatabaseName')
SQL Server version is 2017 Enterprise with CU22 applied.
Many Thanks in advance for any tips or advice provided
Cheers Mat
Fixed.
It seems the issue was caused by an application using service broker.
The application was running a script to check permissions every 30 seconds.
Fortunately there was an option to switch the permission check off.
The USERSTORE_OBJPERM cache size is now 200 MB instead of 3 GB, and stored procedure plans are staying in the cache.

SQL Server 2008R2 super high memory consumption - many stolen pages and huge log file

We have a SQL Server 2008 R2 instance running on a Windows Server 2008 R2 machine. The SQL instance has 12 GB RAM allocated to it. Currently (and probably typically) we are sitting at 47 concurrent connections. The server has a handful of databases residing on it, but only one is really used. This database is 33 GB with a log size of 89 GB.
The server's physical memory usage is steady at 98%, and our application response time is bad. Most of the memory used by SQL Server is stolen pages. I'm not sure how to correct this. Our indexes and statistics are all basically brand new/recently rebuilt.
I'm at a bit of a loss as to how stolen pages are occurring, why they remain so high, and how to deal with this. Is the log related? It's nearly 3 times the size of the DB. We're reaching a critical point, so any and all help would be much appreciated. Thanks!
By default, SQL Server will take up all available memory unless you tell it not to in the server properties in SSMS (right-click the server, choose Properties, go to Memory on the left, and set the maximum).
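The same cap can be set in T-SQL; a minimal sketch (the 10240 MB value is illustrative only, pick one that leaves headroom for the OS):

-- Cap SQL Server's memory; the 10240 MB figure is illustrative only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 10240;
RECONFIGURE;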
For the log file size, it depends on how the database is being logged. Right-click the database, choose Options, and look at the recovery model; it is worth reading up on the different models to match one to your database's needs. If it is in the FULL recovery model and you are not taking transaction log backups, the log will just grow and grow. In that case, look at implementing Ola Hallengren's widely used backup scripts: https://ola.hallengren.com/
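A quick, hedged way to check the recovery model and take a log backup (database name and path below are placeholders):

-- Check the recovery model of every database.
SELECT name, recovery_model_desc
FROM sys.databases;

-- In FULL recovery, regular log backups are what keep the log from growing.
-- The database name and path are placeholders.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_log.trn';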

Oracle data pump export performance query

I need to export two schemas in my Oracle 11.2.0.4 database that total 500 GB in size. My problem is the duration of the export: it was still running 14 hours later before crashing because it ran out of disk space. This is a Standard Edition database, so I don't have the option to use the PARALLEL parameter. I'm not an Oracle guru and would really appreciate some tips on how to speed this up. It seems to be spending a huge amount of time on the statistics. Can I exclude them and regenerate them after the import (see the sketch after the parameter values)? Please find some database parameter values below:
SGA Target: 6G
PGA_aggregate_target: 3G
db_cache_size: 0
streams_pool_size: 0
NOARCHIVELOG mode: enabled
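If excluding statistics is viable, a minimal sketch of the idea, assuming the schemas are exported with the Data Pump parameter EXCLUDE=STATISTICS and the statistics are regathered after the import (schema names are placeholders):

-- On the expdp command line (not SQL): add EXCLUDE=STATISTICS
-- After the import finishes, regather statistics per schema.
-- The schema names below are placeholders.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCHEMA1', cascade => TRUE);
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCHEMA2', cascade => TRUE);
END;
/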
Thanks

SQL Server 2014 timeouts while running backups

We have a SQL Server 2014 database running in a virtual machine within Azure.
The virtual machine has these specs: D13, 8 cores, 56 GB RAM, which is one of the top options available from Azure.
While backups are running, the database causes timeout errors in the web application (roughly 100 concurrent users).
I would expect top-notch hardware from Azure to handle backups and serve user requests without struggling.
Is this acceptable? What alternatives could I implement to avoid this?
Try implementing backups using a split (striped) backup mechanism, which writes the backup to multiple files. This uses multiple threads to back up your database, so the time taken to complete the backup is lower; a sketch follows below.
Reference: http://www.sqlideas.com/2011/08/split-database-full-backup-to-mupltiple.html
For huge databases this is the best approach, in my opinion.
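A minimal sketch of a striped backup, assuming placeholder file paths and database name; adding COMPRESSION also tends to shorten backup times on 2014:

-- Stripe the backup across several files; names and paths are placeholders.
BACKUP DATABASE [YourDatabase]
TO DISK = N'F:\Backups\YourDatabase_1.bak',
   DISK = N'F:\Backups\YourDatabase_2.bak',
   DISK = N'F:\Backups\YourDatabase_3.bak',
   DISK = N'F:\Backups\YourDatabase_4.bak'
WITH COMPRESSION, STATS = 10;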

SQL Server Express Performance breaks with large Logfiles

We have been running a small application on SQL Server 2005 Express Edition for two years. The database has grown from 75 MB up to nearly 400 MB in this time, so there isn't a big amount of data.
But the log file has now reached 3.7 GB. Without changing hardware, table structure, or program code, we have noticed that the import processes which used to take 10-15 minutes now take a couple of hours.
Any idea where the problem could be? Might it depend on the log file? Does the 4 GB limit of Express Edition apply only to data files, or also to log files?
Additional information: there isn't any RAID on the DB server, and there are no concurrent users (only one user is logged in while the import process runs).
Thanks in Advance
Johannes
That the log file is so large is completely normal behavior: over the two years you have been running, SQL Server has been keeping track of the events that happen in the database as it goes about its business.
Normally you would clear these logs when you take a backup (as you most likely don't need them anyway). If you are taking backups, you need to change the SQL script to checkpoint and truncate the log file (it's covered in Books Online); depending on how you back up, your mileage may vary.
To clear it down immediately, make sure no one is using the database; open Management Studio Express, find the database, and run:
BACKUP LOG database_name WITH TRUNCATE_ONLY
GO
DBCC SHRINKDATABASE ('database_name')
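To see how full the log actually is before and after, a quick hedged check that also works on 2005:

-- Reports log file size and the percentage currently in use for every database.
DBCC SQLPERF (LOGSPACE);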
From MSDN:
"The 4 GB database size limit applies only to data files and not to log files. "
SQL Server Express is also limited in that it can only use 1 processor and 1GB of memory. Have you tried monitoring the processor/memory usage while the import is running to see if this is causing a bottleneck?
