SQL Server MDF file is smaller than expected - sql-server

I'm executing very large SQL files (100-200 GB) to import data into SQL Server on a Windows 10 computer. Execution is going well with no errors so far, but disk usage is much lower than I expect it to be.
For example, I executed 3 SQL files full of INSERTs totaling about 50 GB, but SQL Server shows that only 6.5 GB is used by the database. Is this normal, or is there something funny going on?
The disk is a 1 TB NVMe M.2 SSD.
Thanks in advance.
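For reference, a minimal way to compare the space used inside the database with the size of its files on disk (a sketch; run it in the imported database):
-- Logical space used by tables and indexes vs. total/unallocated database size.
EXEC sp_spaceused;
-- Physical file sizes (size is in 8-KB pages, so convert to MB).
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files;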

Related

SQL Server 2008 R2 super high memory consumption - many stolen pages and huge log file

We have a SQL Server 2008 R2 instance running on a Windows Server 2008 R2 machine. The SQL instance has 12 GB RAM allocated to it. Currently (and probably typically) we are sitting at 47 concurrent connections. The server has a handful of DBs residing on it, but only one is really used. This DB is 33 GB with a log size of 89 GB.
The server's physical memory usage is steady at 98%, and our application response time is bad. Most of the memory used by SQL Server is stolen pages. I'm not sure how to correct this. Our indexes and statistics are all basically brand new/recently rebuilt.
I'm at a bit of a loss as to how stolen pages are occurring, why they remain so high, and how to deal with this. Is the log related? It's nearly 3 times the size of the DB. We're reaching a critical point, so any and all help would be much appreciated. Thanks!
SQL Server, by default, will take up all available memory unless you tell it not to in the server properties in SSMS (right-click the server and choose Properties, go to the Memory page, and set the max).
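Roughly the T-SQL equivalent of that SSMS setting (the 8192 MB value here is only an example; size it to leave room for the OS and other processes):
-- Cap SQL Server's memory so the OS keeps some RAM for itself.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;  -- example value only
RECONFIGURE;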
For the log file size, what matters is the database's recovery model. Right-click the database, choose Options, and look at the recovery model. I would Google the different types to match the type to your database's needs. If it is FULL and you are not taking transaction log backups, the log will just grow and grow. In that case, look at implementing Ola Hallengren's widely used backup scripts: https://ola.hallengren.com/
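If the database is in FULL recovery and the log is never backed up, it will keep growing; a minimal sketch of checking the recovery model and taking a log backup (database name and backup path are placeholders; Ola Hallengren's scripts wrap this up for scheduling):
-- Check the current recovery model.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'YourDatabase';
-- Back up the transaction log regularly so the space inside it can be reused.
BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase_log.trn';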

During Oracle export, the database becomes very slow

I have an Oracle 10g database. While the export command is running to take a scheduled backup, the database becomes very, very slow. A report that normally runs in 4 minutes (which is already high) takes 30-40 minutes, and it is killing us. At first we ran the exp command on the server where the database is installed, then moved it to another server to reduce load on the production server, but there was no change; if anything it became slower. We average about 159 sessions on the database server. I think something is wrong with my SGA/PGA memory allocation; here is my initXZ10.ora file.
XZ10.__db_cache_size=512876288
XZ10.__java_pool_size=33554432
XZ10.__large_pool_size=16777216
XZ10.__shared_pool_size=436207616
XZ10.__streams_pool_size=0
*.db_block_size=8192
*.job_queue_processes=10
*.open_cursors=5000
*.open_links=20
*.open_links_per_instance=20
*.pga_aggregate_target=4865416704
*.processes=500
*.sessions=2000
*.sga_max_size=11516192768
*.sga_target=11516192768
*.transactions=500
*.db_cache_size=512876288
*.java_pool_size=33554432
*.large_pool_size=16777216
*.shared_pool_size=436207616
*.streams_pool_size=0
I have 16 GB RAM on the server and 64-bit Oracle 10.2.0.4.0.
Can somebody please help me optimize the init.ora file and any other parameters to speed up the database?
Thanks everyone.
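For orientation: sga_target (11516192768 bytes, roughly 10.7 GB) plus pga_aggregate_target (4865416704 bytes, roughly 4.5 GB) already accounts for about 15 GB of the 16 GB of RAM before the OS and the exp process get anything. One hedged way to sanity-check those targets against the workload is Oracle's advisory views; a minimal sketch for 10g:
-- Estimated effect of different SGA sizes on DB time and physical reads.
SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
FROM v$sga_target_advice
ORDER BY sga_size;
-- Estimated PGA over-allocations at different PGA targets.
SELECT pga_target_for_estimate, pga_target_factor, estd_overalloc_count
FROM v$pga_target_advice
ORDER BY pga_target_for_estimate;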

SQL Server 2014 Standard Edition slows the machine when database size grows

I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few hours the machine is still usable, but once the database size reaches ~20 GB the machine becomes unusable.
I saw some forum topics/answers/blogs suggesting limiting SQL Server's max memory usage. Any thoughts on this?
Btw, I am using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size, and consider a bigger autogrowth increment.
You will want to minimize how often your data and log files need to grow (see the sketch below).
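A minimal sketch of pre-sizing the files with a fixed growth increment (database and logical file names are placeholders; check yours in sys.database_files):
-- Pre-grow the data and log files so the load doesn't pay for autogrowth events.
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, SIZE = 50GB, FILEGROWTH = 1GB);
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Log, SIZE = 10GB, FILEGROWTH = 1GB);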
2 - Server settings:
In your SQL Server settings I would recommend removing one logical processor from SQL Server's affinity. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
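In SSMS this lives under Server Properties > Processors; roughly the T-SQL equivalent, assuming an 8-CPU machine (adjust the CPU range to your core count):
-- Bind SQL Server to CPUs 0-6 and leave CPU 7 for the OS (example for 8 logical CPUs).
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;
-- Revert to the default (let SQL Server use all CPUs):
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;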

SSIS performance

I am working on an Intel Core i3, 64-bit machine with just 4 GB RAM. The OS is Windows 7, and SQL Server 2012 Evaluation is installed.
I am trying to do some SSIS development on it. I need to load a flat file with 0.5 million records (156 columns, approximately 3,500 bytes per row in total). The SQL engine and the SSIS engine are running on the same machine.
As I am using a small PC, I don't expect high performance from this machine. See the screenshots below.
Once my package starts running, the memory usage reaches its maximum in no time.
See the Processes tab: the CPU usage is just 3%, while memory is at about 96%.
1. Even after closing SSDT and SQL Server Management Studio, the memory still remains at 95% until I restart the MSSQL service. Why is it behaving so?
2. How can I measure the I/O efficiency?
Thanks in advance.
I think you need to set your flat file input columns to Fast Parse.
This is a better guide:
http://www.bidn.com/blogs/BrianKnight/ssis/780/loading-flat-files-faster-in-ssis-with-fastparse
Older guide:
http://henkvandervalk.com/speeding-up-ssis-bulk-inserts-into-sql-server
For your problem, you should consider decreasing the buffer size or the batch size; for example, decrease DefaultBufferSize and DefaultBufferMaxRows on the Data Flow Task.
Instead of using fast load in the OLE DB destination or the SQL Server destination, you can use the ODBC destination with an appropriate batch size.
Given your machine's limitations, tune any parallelism in the package carefully.
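Regarding question 2, one rough way to gauge I/O efficiency from T-SQL is the file-level I/O statistics DMV (a sketch; capture it before and after the load and compare the deltas):
-- Per-file I/O volumes and stall times since the instance started.
-- Large io_stall_*_ms values relative to the read/write counts point at a disk bottleneck.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;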

SQL Server Express performance breaks with large log files

We have been running a small application on SQL Server 2005 Express Edition for 2 years. The database has grown from 75 MB up to nearly 400 MB in this time, so there isn't a big amount of data.
But the log file has now reached 3.7 GB. Without changing hardware, table structure, or program code, we noticed that the import processes which used to take 10-15 minutes now take a couple of hours.
Any idea where the problem could be? Might it depend on the log file? Does the 4 GB limit of Express Edition apply only to data files, or also to log files?
Additional information: there isn't any RAID on the DB server, and there are no concurrent users (only one user is logged in during the import process).
Thanks in Advance
Johannes
That the log file is so large is completely normal behavior: in the two years you have been running, SQL Server has been keeping track of every change that happens in the database as it goes about its business.
Normally you would clear this log out when you take a backup (as you most likely don't need it anyway). If you are already backing up, you need to change the SQL script to checkpoint/truncate the log file (it's in Books Online); depending on how you are backing up, your mileage may vary.
To clear it down immediately, make sure no one is using the database; open Management Studio Express, find the database, and run:
backup log database_name with truncate_only
go
dbcc shrinkdatabase('database_name')
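Note: WITH TRUNCATE_ONLY works on SQL Server 2005 but was removed in SQL Server 2008 and later; on newer versions the rough equivalent is to switch to SIMPLE recovery (or take a log backup) and then shrink just the log file, for example:
-- SQL Server 2008+ sketch; 'database_name_log' is the logical log file name from sys.database_files.
ALTER DATABASE database_name SET RECOVERY SIMPLE;
DBCC SHRINKFILE ('database_name_log', 100);  -- target size in MB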
From MSDN:
"The 4 GB database size limit applies only to data files and not to log files. "
SQL Server Express is also limited in that it can only use 1 processor and 1 GB of memory. Have you tried monitoring the processor/memory usage while the import is running to see if this is causing a bottleneck?
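A couple of quick server-side checks while the import runs (a sketch; both work on SQL Server 2005 Express):
-- How full each database's transaction log is.
DBCC SQLPERF (LOGSPACE);
-- What the currently running requests are waiting on (I/O, locks, memory, ...).
SELECT session_id, status, command, wait_type, wait_time, cpu_time, reads, writes
FROM sys.dm_exec_requests
WHERE session_id > 50;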
