I'm doing a database migration from Oracle tables to SQL Server tables. Two of the three tables migrated successfully on the first try, mostly because they didn't have as many rows as the third (about 3.5 million rows with around 30 columns). It took me around 15 attempts to complete the migration, because DBeaver used all the available RAM (around 14 GB).
Migrating with 10,000/100,000-row segments, CPU usage hit 100% for many minutes and DBeaver crashed because the JVM used all of its assigned memory.
After increasing the JVM memory to 14 GB, the migration crashed because the system had no more RAM available.
I changed the segment size many times with no results. I ended up using the 'direct query' option, and after 1.5 hours it finished successfully.
The question is: why does DBeaver keep using RAM without the GC cleaning it up?
How can I change the behaviour of the GC to be more 'eager'?
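For reference, DBeaver's JVM options live in the dbeaver.ini file next to the executable, in the section after -vmargs. A minimal sketch of a tighter configuration (the heap sizes and pause goal are illustrative assumptions, not DBeaver recommendations):

```
-vmargs
-Xms1g
-Xmx4g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
```

Note that dbeaver.ini takes one JVM argument per line and has no comment syntax. Counter-intuitively, a smaller -Xmx often helps here: with a huge heap, the JVM feels little pressure to collect until memory is nearly exhausted, which looks exactly like "the GC never cleans up".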
Thanks.
I am trying to run a large query that writes to a custom table in my database. The databases I am querying have about 6-7 million rows per table. The query includes joins and sub-queries.
Previously, when I ran this query, my raw-data databases were significantly smaller, in the range of 1-2 million rows per table. Everything was working until I imported more data. I've run into this error a few times before this addition of data, and clearing the working memory caches worked well.
The host computer currently has 8 GB of RAM. That computer is used solely for uploading data to the server and hosting the server. The laptop I am issuing the commands from only has 4 GB of RAM.
1) Do I have enough memory on my server computer and my laptop? When I run a large query, is it using the memory on my laptop or on my host computer? When answering, can you please explain how the computer's working memory works in conjunction with SQL Server?
2) If I do not need to add memory, how do I configure my server to prevent this error from occurring?
Additional Information from Server Properties:
Index creation memory is set to 0 (dynamic memory)
Minimum memory per query = 1024 KB
Maximum server memory = maxed out at the default, 2147483647 MB
Minimum Server Memory = 10 MB
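That "maximum server memory" value is the installer default, i.e. effectively unlimited. On a small host it is common to cap it so the OS keeps some headroom; a hedged T-SQL sketch (6144 MB is an illustrative figure for an 8 GB box, not a tuned recommendation):

```sql
-- Cap SQL Server's buffer pool, leaving ~2 GB to the OS
-- on an 8 GB host (illustrative value, adjust to taste).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
```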
We work for a small company that cannot afford to pay for a SQL DBA or consultancy.
What started as a small project has now become a full-scale system with a lot of data.
I need someone to help me sort out performance improvements. I realise no-one will be able to help directly and nail this issue completely, but I just want to make sure I have covered all the bases.
OK, the problem is basically that we are experiencing time-outs with our queries on cached data. I have increased the time-out in the C# code, but I can only go so far before it becomes ridiculous.
The current setup is a database that has data inserted every 5-10 seconds, constantly. During this process we populate tables from CSV files. Overnight we run data-caching processes that reduce the load on the "inserted" tables. Originally we were able to condense 10+ million rows into, say, 400,000 rows, but as users want more filtering we had to include more data rows, which of course grew the data-cached tables from 400,000 to 1-3 million rows.
On my SQL development server (which does not have data inserted every 5 seconds), it used to take 30 seconds to run queries on a data-cache table with 5 million rows; with indexing and some improvements it's now 17 seconds. The live server runs SQL Server Standard and used to take 57 seconds, now 40 seconds.
We have 15+ instances running with same number of databases.
So far we have outlined the following ways of improving the system:
Indexing on some of the data-cached tables - the database is now bloated and slows down the overnight processes.
Increased CommandTimeout
Moved databases to SSD
Likely upcoming improvements:
Realised we will have to move the CSV files to another hard disk, not the same SSD the SQL Server databases reside on.
Possibly use filegroups for indexes or cached tables - not sure if SQL Server Standard supports this.
Enterprise edition and table partitioning - the customer may pay for this, but we certainly can't afford it.
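As a concrete (though entirely hypothetical) illustration of the indexing point: a covering index on a cached table lets the filtering queries avoid scanning all 1-3 million rows. The table and column names below are invented for the example:

```sql
-- Hypothetical cached table: index the filter column and
-- INCLUDE the selected columns, so the query is answered
-- from the index alone (a "covering" index).
CREATE NONCLUSTERED INDEX IX_CachedSales_FilterDate
    ON dbo.CachedSales (FilterDate)
    INCLUDE (CustomerId, TotalAmount);
```

The trade-off noted above is real: every extra index slows the overnight insert/refresh processes, so each one has to pay for itself on the read side.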
As I said, I'm looking for rough guidelines and realise no-one may be able to fix this issue completely. We're a small team and no-one has extensive SQL Server experience. The customer wants answers and we've tried everything we know. Incidentally, they had a small-scale version in Excel and said they found no issues, so why are we?!
Hope someone can help.
I'm running some stored procedures in SQL Server 2012 under Windows Server 2012, on a dedicated server with 32 GB of RAM and 8 CPU cores. CPU usage is always below 10%, and RAM usage is at 80% because SQL Server has 20 GB (of 32 GB) assigned.
There are some stored procedures that take 4 hours on some days and, on other days with almost the same data, 7 or 8 hours.
I'm using the least restrictive isolation level, so I think this should not be a locking problem. The database size is around 100 GB, and the biggest table has around 5 million records.
The processes do bulk inserts, updates and deletes (in some cases I can use TRUNCATE to avoid generating log records and save some time). I'm also running some full-text search queries on one table.
I have full control of the server so I can change any configuration parameter.
I have a few questions:
Is it possible to improve the performance of the queries using parallelism?
Why is the CPU usage so low?
What are the best practices for configuring SQL Server?
What are the best free tools for auditing the server? I tried one from Microsoft called SQL Server 2012 BPA, but the report is always empty with no warnings.
EDIT:
I checked the log and I found this:
03/18/2015 11:09:25,spid26s,Unknown,SQL Server has encountered 82 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL11.HLSQLSERVER\MSSQL\DATA\templog.ldf] in database [tempdb] (2). The OS file handle is 0x0000000000000BF8. The offset of the latest long I/O is: 0x00000001fe4000
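To put numbers on that warning, per-file I/O stall times can be read from the sys.dm_io_virtual_file_stats DMV; the query below is a standard diagnostic pattern (averages over 20 ms or so for data files are a common rule of thumb for trouble, not a hard limit):

```sql
-- Average read/write latency per database file since instance start.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_ms DESC;
```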
Bump up max server memory to 24 GB.
Move tempdb off the C: drive and consider multiple tempdb files, with autogrowth of at least 128 MB or 256 MB.
Install the performance dashboard and run its reports to see what queries are running and check waits.
If you are using 10% autogrowth on user data and log files, change that to something similar to the tempdb growth above.
Using the performance dashboard, check for obvious missing indexes that predict a 95% or higher improvement impact.
Disregard all the naysayers who say not to do what I'm suggesting. If you do these five things and you're still having trouble, post some of the results from the performance dashboard, which by the way is free.
One more thing that may be helpful: download and install the sp_WhoIsActive stored procedure, run it, and see what processes are running. Research the queries you find after running it.
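The tempdb and autogrowth suggestions above can be sketched in T-SQL; the file names, path and sizes are examples to adapt, not prescriptions:

```sql
-- Fixed-size autogrowth instead of percentage growth
-- (tempdb's default log file is named 'templog').
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILEGROWTH = 256MB);

-- Additional tempdb data file on a non-C: drive
-- (the D:\ path here is purely an example).
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'D:\SQLData\tempdev2.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
```

Added tempdb files take effect after the next service restart, since tempdb is rebuilt at startup.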
query taking hours but using low CPU
You say that as if CPU mattered for most database operations. HINT: it does not.
Databases need IO. RAM in some cases helps mitigate this, but in the end it comes down to IO.
And you know what I see in your question? CPU, memory (somehow assuming 32 GB is impressive), but NO WORD on disc layout.
And that is what matters. Discs, and the distribution of files to spread the load.
If you look at the performance counters, you will see latency being super high on the discs, because whatever "pathetic" (in SQL Server terms) disc layout you have there, it simply is not up to the task.
Time to start buying. SSDs are a LOT cheaper than discs. You may say, "Oh, how are they cheaper?" Well, you do not buy GB, you buy IO. And the last time I checked, SSDs did not cost 100 times the price of discs, but they have 100 times or more the IO. And we are always talking about random IO.
Then isolate tempdb on a separate SSD - tempdb either does not do a lot or does a TON, and you want to see which.
Then isolate the log file.
Make multiple data files, for both the database and tempdb (particularly tempdb: as many files as you have cores).
And yes, this will cost money. But in the end you need IO, and like most developers you bought CPU. Bad for a database.
I have an Oracle 10g database. While the export command is running to take a scheduled backup, the database becomes very, very slow. A report which normally runs for 4 minutes (which is already high) takes 30-40 minutes; it is killing us. At first we ran the exp command on the server where the database is installed, but later moved it to another server to reduce load on the production server. There was no improvement; if anything, it became slower. We average around 159 sessions on the database server. I think there's something wrong with my SGA/PGA memory allocation. Here is my initXZ10.ora file:
XZ10.__db_cache_size=512876288
XZ10.__java_pool_size=33554432
XZ10.__large_pool_size=16777216
XZ10.__shared_pool_size=436207616
XZ10.__streams_pool_size=0
*.db_block_size=8192
*.job_queue_processes=10
*.open_cursors=5000
*.open_links=20
*.open_links_per_instance=20
*.pga_aggregate_target=4865416704
*.processes=500
*.sessions=2000
*.sga_max_size=11516192768
*.sga_target=11516192768
*.transactions=500
*.db_cache_size=512876288
*.java_pool_size=33554432
*.large_pool_size=16777216
*.shared_pool_size=436207616
*.streams_pool_size=0
I have 16 GB of RAM on the server and 64-bit Oracle 10.2.0.4.0.
Can somebody please help me optimize the init.ora file and any other parameters to speed up the database?
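One thing that stands out from the file itself: sga_max_size (~10.7 GB) plus pga_aggregate_target (~4.5 GB) already commits roughly 15.2 GB of the 16 GB host, leaving almost nothing for the OS and the server processes behind those ~159 sessions, so a heavy export can push the box into swapping. A hedged sketch of rebalanced pfile values (the sizes are illustrative starting points, not tuned recommendations):

```
# Illustrative rebalance for a 16 GB host: leave a few GB
# for the OS and per-session process memory.
*.sga_target=8589934592            # 8 GB, down from ~10.7 GB
*.sga_max_size=8589934592
*.pga_aggregate_target=3221225472  # 3 GB, down from ~4.5 GB
```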
Thanks everyone.
I've asked for more RAM for our SQL Server (currently the server has 4 GB of RAM), but our administrator told me he would approve it only if I can show him better performance with more memory available, because he has checked the server logs and SQL Server is using only 2.5 GB.
Can someone tell me how I can demonstrate to him the effect of more available memory (for example, on a performance issue with a query)?
Leaving aside the fact that you don't appear to have memory issues...
Some basic checks to run:
Check the Page Life Expectancy counter: this is how long a page will stay in memory
Target Server Memory is how much RAM SQL Server wants to use
Note on PLE:
"300 seconds" is the often-quoted threshold, but our busy server has a PLE of 80k+, which is about a week. And that is with databases 15x the size of RAM, peaks of 3k new rows per second, and a lot of read aggregations.
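Both counters mentioned above can be read straight from the DMVs rather than Performance Monitor; this is the standard query shape:

```sql
-- Page Life Expectancy plus target/total memory counters.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%Buffer Manager%'
       AND counter_name = 'Page life expectancy')
   OR (object_name LIKE '%Memory Manager%'
       AND counter_name IN ('Target Server Memory (KB)',
                            'Total Server Memory (KB)'));
```

If Total Server Memory sits pinned at Target Server Memory while PLE stays low, that is the kind of evidence an administrator can act on.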
Edit, Oct 2011
I found this article on PLE by Jonathan Kehayias: http://www.sqlskills.com/blogs/jonathan/post/Finding-what-queries-in-the-plan-cache-use-a-specific-index.aspx
The comments have many of the usual SQL Server suspects commenting.