I am working on an Intel Core i3, 64-bit machine with just 4 GB RAM. The OS is Windows 7, and SQL Server 2012 Evaluation is installed.
I am trying to do some SSIS development on it. I need to load a flat file with 0.5 million records (156 columns, with an approximate total row length of 3500 bytes). The SQL engine and the SSIS engine are running on the same machine.
As I am using a small PC, I don't expect high performance from it. See the screenshots below.
Once my package starts running, the memory usage reaches its maximum in no time.
In the Processes tab, the CPU usage is just 3% while memory usage is around 96%.
1. Even after closing SSDT and SQL Server Management Studio, the memory usage still remains at 95% until I restart the MSSQL service. Why is it behaving so?
2. How can I measure the I/O efficiency?
Thanks in advance.
I think you need to enable the FastParse option on your Flat File Source's output columns.
This is a better guide.
http://www.bidn.com/blogs/BrianKnight/ssis/780/loading-flat-files-faster-in-ssis-with-fastparse
Old guide
http://henkvandervalk.com/speeding-up-ssis-bulk-inserts-into-sql-server
For your problem specifically, you should consider decreasing the buffer size or the batch size, for example by lowering the DefaultBufferSize and DefaultBufferMaxRows properties of the data flow task.
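As a rough sanity check (assuming the default 10 MB DefaultBufferSize and the roughly 3500-byte rows mentioned in the question; both numbers are estimates), you can work out how many rows fit in one buffer:

    -- Back-of-the-envelope buffer math; all numbers are assumptions.
    -- Default DefaultBufferSize = 10 MB, estimated row width = 3500 bytes.
    SELECT (10 * 1024 * 1024) / 3500 AS rows_per_buffer;  -- roughly 2995 rows

So setting DefaultBufferMaxRows much above ~3000 buys nothing with rows this wide, and on a 4 GB machine a smaller DefaultBufferSize may actually relieve memory pressure.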
Instead of using fast load in the OLE DB destination or the SQL Server destination, you can use an ODBC destination with an appropriate batch size.
Given the limitations of your machine, use whatever parallelism is available judiciously.
Related
I have a scenario where an application server saves 15k rows per second to a SQL Server database. For the first few hours the machine is still usable, but once the database size grows to around 20 GB, the machine seems to become unusable.
I saw some topics/forums/answers/blogs suggesting limiting the maximum memory usage of SQL Server. Any thoughts on this?
By the way, I am using SqlBulkCopy to insert rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size, and consider a bigger autogrowth increment (percentage or fixed size). You will want to minimize the number of times your filegroups need to grow.
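A minimal sketch of what that could look like (the database name, file paths, and sizes are placeholders to adapt to your hardware):

    -- Create the database with a large initial size and large, fixed-size
    -- autogrowth increments, so files rarely grow during the heavy insert load.
    CREATE DATABASE HighVolumeDb
    ON PRIMARY
        (NAME = HighVolumeDb_data,
         FILENAME = 'D:\Data\HighVolumeDb.mdf',
         SIZE = 40GB,
         FILEGROWTH = 4GB)
    LOG ON
        (NAME = HighVolumeDb_log,
         FILENAME = 'E:\Logs\HighVolumeDb.ldf',
         SIZE = 10GB,
         FILEGROWTH = 2GB);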
2 - Server settings:
In your SQL Server settings I would recommend removing one logical processor from SQL Server's affinity. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
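One way to do that (a sketch that assumes SQL Server 2008 R2 or later and an 8-core box; adjust the CPU range to your hardware):

    -- Bind SQL Server to CPUs 0-6, leaving CPU 7 free for the OS.
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;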
I just want to know how I can allocate more memory to SQL Server Management Studio so that long queries take less time to run, because I have to import and manipulate large amounts of data from Access files and other sources, and I have 32 GB of RAM for this purpose. Please suggest solutions.
Thanks
SQL Server Management Studio (SSMS) is just a client tool.
If queries are running too long, the issue will be at the actual SQL Server instance. You can assign extra memory there by using SSMS to connect to the server, right-clicking the server, and selecting Properties.
On the Memory page, you can configure the maximum amount of RAM you want to assign to SQL Server. In your situation, I would not assign more than 30 GB; leave some for the OS, and if SQL Server is not the only application running on the machine, assign even less.
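The same setting can be applied with T-SQL (a sketch; 30720 MB is the 30 GB suggested above, so adjust the value to your situation):

    -- 'max server memory' is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 30720;
    RECONFIGURE;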
If you're dealing with large amounts of data that need to be imported, RAM might not be the issue, though. Most likely the bottleneck will be the disk system. Use Performance Monitor to try and get a clue as to where the real bottleneck is.
One way to enhance disk-system performance is to ensure your drives are configured properly; the general rule of thumb is to place transaction logs on RAID 1 partitions and data on RAID 10 (or 5). If you can afford it, place indexes on separate RAID partitions. Also make sure the database and drives are regularly defragmented.
We have an 8-core, 16 GB RAM server running SQL Server 2008. When we perform large queries on millions of rows, the RAM usage goes up to 15.7 GB, and then even file browsing, opening Excel, etc. get really slow.
So does SQL Server really release memory when another process needs it, or do I have another issue? We don't have any other major programs running on this server.
We've set a max memory usage of 14GB for SQL Server.
Thanks all for any enlightenment or troubleshooting ideas.
Yes, it does. See SQLOS's memory manager: responding to memory pressure for details on how this works. But what exactly counts as 'memory pressure' varies from machine to machine and from OS version to OS version; see Q & A: Does SQL Server always respond to memory pressure?. If you want to reserve more memory for applications (I won't even bother to ask why you browse files and use Excel on a machine dedicated to SQL Server...), then you should lower the max server memory setting until it leaves enough for your entertainment.
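If you want to watch how the instance actually responds, one option (a sketch, assuming SQL Server 2008 as in the question) is to poll the process-memory DMV:

    -- How much memory the instance holds, and whether Windows has
    -- signaled low physical memory to the process (SQL Server 2008+).
    SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
           memory_utilization_percentage,
           process_physical_memory_low
    FROM sys.dm_os_process_memory;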
SQL Server does NOT release memory. It takes all the memory it can get, up to the max server memory setting, and it stays there.
Our primary database server is an 8-core box with 8 GB of RAM. The CPU is a Xeon E7330 @ 2.4 GHz. It runs Windows Server 2003 R2 (x64 edition) and SQL Server 2005.
I wanted to do some testing, so I set up SQL Server 2005 on another brand-new server: an 8-core box with 4 GB of RAM and a Xeon X5460 @ 3.16 GHz, running Windows Server 2003 R2 Standard. I installed SQL Server 2005 out of the box, restored a backup of the primary database onto it, and ran UPDATE STATISTICS on all the tables.
The process I was testing executes the same stored proc many times. I was astounded to find from the Profiler that this proc, which executes with duration = 0 or 1 on the primary server, was consistently executing with durations in excess of 130. This essentially makes the secondary server useless for testing, because it's just too slow.
No other apps run on either of these two boxes, just SQL Server. And unlike the primary database server, the test server only had me accessing it.
I can't believe the difference in spec between these two machines explains this colossal difference in performance. Can anybody suggest any settings I may need to change?
Updates in response to questions:
The second server runs 32-bit Windows.
I'm inquiring now about the disk arrays and how comparable they are.
On the primary server, the data and logs are on the same drive (!) and it works fine.
Looking at Task Manager on the test server, the CPU is running at about 10%, with only one core even showing activity.
Task Manager on the test server (4 GB RAM) shows "PF Usage 2.01GB" with SQL Server running. On the primary server (8 GB RAM) it shows "PF Usage 6.67GB". How would I make SQL Server on the test box use more of the RAM? Maybe that would make a difference.
Another update:
The primary server has a RAID-5 with 15,000 RPM drives. The test box has a RAID-5 with 10,000 RPM drives.
A 32-bit OS means a 2 GB virtual address space for your processes. A Standard edition OS means no AWE extensions either. So your test machine will be severely RAM-deprived compared with the production one. Your buffer pool will suffer from premature eviction of pages, your execution plans will not have the option to choose hash joins for a lot of queries, and so on and so forth. I doubt this explains the entire difference; I'm sure there must be something more at play. You say only 10% CPU usage during the query; is your MAXDOP setting 1 by any chance on the test server? Have you compared the output of sp_configure on the two machines? (Make sure you enable 'advanced options' too.)
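A quick way to do that comparison (a sketch; run it on both servers and diff the results):

    -- Expose all options, then dump them for comparison.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure;
    -- Check the MAXDOP suspicion directly:
    EXEC sp_configure 'max degree of parallelism';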
Can you run the same problem query on the two machines, from an SSMS query window, with SET STATISTICS IO ON and SET STATISTICS TIME ON? Run it 2-3 times on each and write down the results. Does it show the same number of logical reads but a vastly different number of physical reads? That would point to the RAM being insufficient to cache the needed pages. Is the number of logical reads very different? That probably means you get a bad execution plan on the test server.
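For example (dbo.YourProblemProc is a placeholder for whatever procedure you were profiling):

    -- Run on both servers; compare logical vs. physical reads
    -- and CPU/elapsed times across the runs.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    EXEC dbo.YourProblemProc;
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;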
Is the query write-intensive by any chance? If so, did you pre-grow the test database, or is your execution blocked by log-growth and database-growth events?
There are plenty of places to look to narrow down the issue, like the SQL performance counters and sys.dm_os_wait_stats; also check the wait_type and wait_resource in sys.dm_exec_requests.
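Two quick starting points (a sketch to run while the slow proc is executing):

    -- Top accumulated waits on the instance.
    SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;

    -- What active requests are currently waiting on.
    SELECT session_id, status, wait_type, wait_resource
    FROM sys.dm_exec_requests
    WHERE session_id > 50;  -- rough filter to skip most system sessions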
Was the data in the memory cache yet, or was it all read from disk?
You either have a different plan being generated or some hardware differences. For hardware, you can check the disk seconds/read and disk seconds/write counters (edit to clarify: you do this in PerfMon) and see if you have some massive differences from caching (e.g., a high-performance RAID controller).
For the plan difference, just check out the execution plans.
Also run SET STATISTICS IO ON and see if you are getting physical reads instead of logical reads. Maybe the memory difference is keeping your dataset from fitting in memory on the secondary machine but not on the primary one.
Although you may not be able to use AWE on your 32-bit server, you can provide SQL Server with a little more memory by adding the /3GB switch to the boot.ini file. Check out Books Online; it should give you more information.
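For reference, the switch just gets appended to the OS entry in boot.ini (a sketch; the ARC device path and OS description are placeholders for whatever your boot.ini already contains):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 R2" /fastdetect /3GB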
I have a development VM which runs SQL Server as well as some other apps for my stack, and I found that the other apps were performing awfully. After doing some digging, I found SQL Server was hogging the memory. After a quick web search I discovered that, by default, it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough; apparently my situation is a common problem.
There is, however, a way to limit the memory SQL Server is allowed to have. My question is: how should I set this limit? Obviously I'm going to need to do some guess-and-check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 GB of memory, so I'd like to be able to run the VM on 768 MB or less if possible. This VM will only be used for local dev and testing, so the load will be very minimal. After code has been tested locally, it goes to another environment where the SQL Server box is dedicated. What I'm really looking for here are recommendations on minimums.
Extracted from the SQL Server documentation:
Maximum server memory (in MB)

Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
The recommendation on the minimum is: no such thing. The more memory the better. SQL Server needs as much memory as it can get, or it will thrash your I/O.
Stop SQL Server. Run your other applications and note the amount of memory they need. Subtract that from your total available RAM, and use that number for the max server memory setting in SQL Server.
Since this is a development environment, I agree with Greg: just use trial and error. It's not that crucial to get it perfectly right.
But if you do a lot of work in the VM, why not give it at least half of the 2GB?
so I'd like to be able to run the VM on 768 MB or less if possible
That will depend on your data and the size of your database, but I usually like to give SQL Server at least a GB.
It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.
For production machines, of course, it is best to give control of the machine to SQL Server (Processors -> Boost SQL Server Priority) and let it have all the RAM it wants.
Since you are using VMs, maybe you could create a dedicated one just for SQL Server and run everything else on a different VM.