Sqlserver.exe is showing memory usage greater than the max memory limit, even with Lock Pages in Memory enabled. It's confusing.
As stated here:
SQL Server's max memory setting defines the limit for buffer pool usage only. There will be variable but significant allocations required over and above that limit.
Jonathan Kehayias, Christian Bolton, and John Samson have level 300/400 posts on the topic. Brent Ozar has an easier-to-read article that might be a better place to start.
Also related: SQL Server 2008 R2 “Ghost Memory”
Min & Max Server Memory
Microsoft SQL Server Management Studio → Right-click the server → Properties → Memory → Server Memory Options → Minimum server memory (in MB): 0 and Maximum server memory (in MB): 2147483647
Configure this memory allocation based on the RAM installed in the database server.
For example:
If the DB server has 6 GB of RAM, keep about 20% of it as headroom for the OS installed on the server.
For 6 GB of RAM, the Maximum server memory (in MB) for SQL Server will therefore be about 4915.
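To make the headroom rule concrete, here is a minimal Python sketch of the arithmetic. The 6 GB total and 20% OS headroom are just the example figures from above, not a universal recommendation:

```python
def recommended_max_server_memory_mb(total_ram_gb, os_headroom_fraction=0.20):
    """Suggest a 'max server memory (MB)' value, leaving headroom for the OS."""
    total_mb = total_ram_gb * 1024
    return int(total_mb * (1 - os_headroom_fraction))

# For a server with 6 GB of RAM and ~20% reserved for the OS:
print(recommended_max_server_memory_mb(6))  # 4915
```

On larger servers the OS typically needs a smaller fraction than 20%, so treat the fraction as a starting point and adjust after observing actual OS memory pressure.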
Right-click the server → Properties → Security → Login auditing → enable Failed logins only. This avoids logging every successful login, saving log writes and log space.
Related
I have an Azure VM with the following:
Windows DataCenter 2019
SQL Server 2017 Developer
Virtual drive of 6TB (built up of 12 512GB Premium SSD disks)
112 GB of RAM
16 VCPUS
I have a DB with a data file of approximately 5 TB (2 TB empty) and a log file of approximately 1 TB (99% empty).
I have backed this up to Azure blob storage (64 block blobs).
When I restore to my SQL Server with Instant File Initialisation enabled, it takes ~40 hours.
I can see that network and disk throughput are really low.
When I disable Instant File Initialisation, it takes ~3 hours to zero out the files, and then I get good performance on the restore itself (~1.5 hours, on top of the ~3 hours of zeroing).
Does anyone know why this could be?
My restore code:
restore database [<db_name>] from
url = 'https://.....url_1.bak',
...
url = 'https://.....url_64.bak',
move 'db_log' to 'new log location' -- i am only moving the log file, as the data file's location doesn't change
stats = 1, norecovery;
I have a machine running Windows Server 2012 R2 with SQL Server 2014. For an unknown reason, the max memory of the SQL Server instance decreased automatically from 18024 MB to 1024 MB, which is causing slowness in the system, and we have to set the value back to 18024 MB manually.
I'm not sure why that happened ("max memory of SQL Server decreased automatically from 18024 MB to 1024 MB").
But if you want to correct it, you can do so instantly, without a restart:
1. Increase Max memory
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 18024;
GO
RECONFIGURE;
GO
The output will look like:
Configuration option 'max server memory (MB)' changed from 1024
to 18024. Run the RECONFIGURE statement to install.
2. Determine current memory allocation
SELECT
physical_memory_in_use_kb/1024 AS sql_physical_memory_in_use_MB,
large_page_allocations_kb/1024 AS sql_large_page_allocations_MB,
locked_page_allocations_kb/1024 AS sql_locked_page_allocations_MB,
virtual_address_space_reserved_kb/1024 AS sql_VAS_reserved_MB,
virtual_address_space_committed_kb/1024 AS sql_VAS_committed_MB,
virtual_address_space_available_kb/1024 AS sql_VAS_available_MB,
page_fault_count AS sql_page_fault_count,
memory_utilization_percentage AS sql_memory_utilization_percentage,
process_physical_memory_low AS sql_process_physical_memory_low,
process_virtual_memory_low AS sql_process_virtual_memory_low
FROM sys.dm_os_process_memory;
3. Determine the current value of 'max server memory (MB)'
SELECT c.value, c.value_in_use
FROM sys.configurations c WHERE c.[name] = 'max server memory (MB)'
Increasing this setting from a low value does not require a server restart or stop. Just make sure the OS has enough memory left for itself and other processes, so that everything keeps running smoothly afterwards.
For details, refer to the Microsoft SQL Server configuration options documentation:
https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-ver15
Since you said in your comments that this problem happens on weekends, there may be some management scripts on your server that perform cleaning and configuration tasks. Please check the SQL Server Agent jobs and maintenance plans on the server.
This is my first experience with SSIS so bear with me...
I am using SSIS to migrate tables from Oracle to SQL Server, and some of the tables I am trying to transfer are very large (50+ million rows). SSIS now completely freezes and restarts Visual Studio when I merely try to save the package (not even run it). It keeps returning insufficient-memory errors; however, I am working on a remote server that has well over the RAM it takes to run this package.
Error Message when trying to save
The only thing I can think of is when this package is attempting to run, my Ethernet Kbps are through the roof right as the package starts. Maybe need to update my pipeline?
Ethernet Graph
Also, my largest table fails on import due to BYTE sizes (again, nowhere near using all the memory on the server). We are using an ODBC Source, as this was the only way we were able to get other large tables to load more than 1 million rows.
I have tried creating a temporary buffer file to help with memory pressure, but that changed nothing. I have set AutoAdjustBufferSize to True with no change in results, and also changed DefaultBufferMaxRows and DefaultBufferSize. No change.
ERRORS WHEN RUNNING LARGE TABLE:
Information: 0x4004300C at SRC_STG_TABLENAME, SSIS.Pipeline: Execute
phase is beginning.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Information: 0x4004800D at SRC_STG_TABLENAME: The buffer manager
failed a memory allocation call for 810400000 bytes, but was unable
to swap out any buffers to relieve memory pressure. 2 buffers were
considered and 2 were locked.
Either not enough memory is available to the pipeline because not
enough are installed, other processes were using it, or too many
buffers are locked.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Error: 0xC0047012 at SRC_STG_TABLENAME: A buffer failed while
allocating 810400000 bytes.
Error: 0xC0047011 at SRC_STG_TABLENAME: The system reports 26
percent memory load. There are 68718940160 bytes of physical memory
with 50752466944 bytes free. There are 4294836224 bytes of virtual
memory with 914223104 bytes free. The paging file has 84825067520
bytes with 61915041792 bytes free.
Information: 0x4004800F at SRC_STG_TABLENAME: Buffer manager
allocated 1548 megabyte(s) in 2 physical buffer(s).
Information: 0x40048010 at SRC_STG_TABLENAME: Component "ODBC
Source" (60) owns 775 megabyte(s) physical buffer.
Error: 0x279 at SRC_STG_TABLENAME, ODBC Source [60]: Failed to add
row to output buffer.
Error: 0x384 at SRC_STG_TABLENAME, ODBC Source [60]: Open Database
Connectivity (ODBC) error occurred.
Error: 0xC0047038 at SRC_STG_TABLENAME, SSIS.Pipeline: SSIS Error
Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ODBC Source
returned error code 0x80004005. The component returned a failure code
when the pipeline engine called PrimeOutput(). The meaning of the
failure code is defined by the component, but the error is fatal and
the pipeline stopped executing. There may be error messages posted
before this with more information about the failure.
This is really holding up my work. HELP!
I suggest reading the data in chunks:
Instead of loading the whole table, try to split the data into chunks and import them into SQL Server one chunk at a time. A while ago I answered a similar question related to SQLite; I will try to adapt it to Oracle syntax:
Step-by-step guide
In this example each chunk contains 10000 rows.
Declare two variables of type Int32 (#[User::RowCount] and #[User::IncrementValue])
Add an Execute SQL Task that executes a SELECT COUNT(*) command and stores the result set in the variable #[User::RowCount]
Add a For Loop container with the following preferences:
Inside the for loop container add a Data flow task
Inside the dataflow task add an ODBC Source and OLEDB Destination
In the ODBC Source, select the SQL Command option and write a SELECT * FROM MYTABLE query (to retrieve metadata only)
Map the columns between source and destination
Go back to the Control flow and click on the Data flow task and hit F4 to view the properties window
In the properties window, go to Expressions and assign the following expression to the [ODBC Source].[SQLCommand] property (for more info refer to How to pass SSIS variables in ODBC SQLCommand expression?):
"SELECT * FROM MYTABLE ORDER BY ID_COLUMN OFFSET " + (DT_WSTR,50)#[User::IncrementValue] + " ROWS FETCH NEXT 10000 ROWS ONLY"
Where MYTABLE is the source table name and ID_COLUMN is your primary key or identity column.
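The paging logic the For Loop drives can be sketched outside SSIS too. This is a hypothetical Python illustration of how the increment variable steps through the table in 10000-row chunks; the table and column names are placeholders, and the strings it builds are just the queries each loop iteration would send:

```python
CHUNK_SIZE = 10000  # rows per chunk, matching the example above

def chunk_queries(row_count, table="MYTABLE", order_col="ID_COLUMN"):
    """Build one paged query per chunk, mirroring the For Loop expression."""
    queries = []
    # 'offset' plays the role of #[User::IncrementValue] in the SSIS package
    for offset in range(0, row_count, CHUNK_SIZE):
        queries.append(
            f"SELECT * FROM {table} ORDER BY {order_col} "
            f"OFFSET {offset} ROWS FETCH NEXT {CHUNK_SIZE} ROWS ONLY"
        )
    return queries

# 25000 rows split into chunks of 10000 -> offsets 0, 10000, 20000
for q in chunk_queries(25000):
    print(q)
```

The ORDER BY on a unique key matters: without a stable ordering, OFFSET/FETCH paging can skip or duplicate rows between chunks.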
Control Flow Screenshot
References
ODBC Source - SQL Server
How to pass SSIS variables in ODBC SQLCommand expression?
HOW TO USE SSIS ODBC SOURCE AND DIFFERENCE BETWEEN OLE DB AND ODBC?
How do I limit the number of rows returned by an Oracle query after ordering?
Getting top n to n rows from db2
Update 1 - Other possible workarounds
While searching for similar issues, I found some additional workarounds that you can try:
(1) Change the SQL Server max memory
SSIS: The Buffer Manager Failed a Memory Allocation Call
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
(2) Enable Named pipes
[Fixed] The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers
Go to Control Panel → Administrative Tools → Computer Management
Under Protocols for the SQL instance → set Named Pipes = Enabled
Restart the SQL instance service
After that, try to import the data; it will now be fetched in chunks instead of all at once. Hope that works for you and saves you some time.
(3) If using SQL Server 2008 install hotfixes
The SSIS 2008 runtime process crashes when you run the SSIS 2008 package under a low-memory condition
Update 2 - Understanding the error
In the following MSDN link, the cause of the error is described as follows:
Virtual memory is a superset of physical memory. Processes in Windows typically do not specify which they are to use, as that would (greatly) inhibit how Windows can multitask. SSIS allocates virtual memory. If Windows is able to, all of these allocations are held in physical memory, where access is faster. However, if SSIS requests more memory than is physically available, then that virtual memory spills to disk, making the package operate orders of magnitude slower. And in worst cases, if there is not enough virtual memory in the system, then the package will fail.
Are you running your packages in parallel? If yes, change to serial.
You can also try to divide this big table into subsets using an operation like modulo. See this example:
http://henkvandervalk.com/reading-as-fast-as-possible-from-a-table-with-ssis-part-ii
(in the example he runs in parallel, but you can run it serially)
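To illustrate the modulo idea: assuming a numeric key column, the table can be split into N disjoint subsets with predicates like MOD(ID_COLUMN, N) = i, one per data flow. A small Python sketch of the partitioning (the table and column names are placeholders, not from the linked article):

```python
def modulo_partitions(n_subsets, table="BIG_TABLE", key="ID_COLUMN"):
    """Build one WHERE clause per subset; together they cover every row exactly once."""
    return [
        f"SELECT * FROM {table} WHERE MOD({key}, {n_subsets}) = {i}"
        for i in range(n_subsets)
    ]

# Sanity-check on sample integer keys: 4 subsets are disjoint and complete.
keys = list(range(100))
subsets = [[k for k in keys if k % 4 == i] for i in range(4)]
assert sum(len(s) for s in subsets) == len(keys)
print(modulo_partitions(4)[3])  # SELECT * FROM BIG_TABLE WHERE MOD(ID_COLUMN, 4) = 3
```

Run serially, each subset query moves a fraction of the table, keeping any single buffer allocation small.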
Also, if you are running the SSIS package on a computer that is running an instance of SQL Server, set the Maximum server memory option for that SQL Server instance to a smaller value while the package runs.
That increases the memory available to SSIS.
When I try to connect to a PostgreSQL 9.0 server on Linux, I get "sorry, too many clients already". I tried increasing max_connections from 100 to 200 and restarting the server, but it doesn't pick up the new max_connections. What should I change on the Linux server?
Eclipse LogCat
Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
This is a bit of a FAQ and is discussed in Number of Database Connections on the PostgreSQL wiki.
The only way to increase max_connections and persist that value is to modify the postgresql.conf file, so first of all check whether the value has actually changed (after restarting the server):
show max_connections;
If the value did NOT change, there is something wrong with your procedure (file permissions, maybe?). If the value DID change, you might try an even higher value (weird, but it might depend on your application's connection requirements, or indicate a connection leak).
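If you want to double-check what the config file actually says (as opposed to what the running server reports via show max_connections), here is a rough Python sketch that pulls max_connections out of postgresql.conf-style text. It is deliberately simplified: it ignores include directives and quoting edge cases, and the sample text is made up:

```python
import re

def read_max_connections(conf_text):
    """Return the last uncommented max_connections setting, or None."""
    value = None
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments, including '#max_connections'
        m = re.match(r"max_connections\s*=?\s*(\d+)", line)
        if m:
            value = int(m.group(1))            # later settings override earlier ones
    return value

sample = """
# - Connection Settings -
#max_connections = 100
max_connections = 200   # increased from 100
"""
print(read_max_connections(sample))  # 200
```

A mismatch between this value and what the server reports usually means you edited the wrong file, or the server never actually restarted with the new config.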
I have a website running on a Windows 2008 R2 server, using a SQL Server Express 2008 R2 database, and I have been experiencing some pretty nasty memory issues the last several days.
Here's the server stats:
It's a Rackspace Cloud Server
Windows Server 2008 R2 Enterprise SP1 x64
Quad-Core AMD Opteron 2.34GHz
2GB RAM
SQL Server 2008 R2 Express Edition with Advanced Services x64
Full text indexing is being used
The website has been running well for a few months now, but all of a sudden I've been seeing errors related to SQL Server running out of memory. Here are the most common exceptions I've seen:
Warning: Fatal error 9001 occurred at Oct 22 2011 5:02AM. Note the error and time, and contact your system administrator.
There is insufficient system memory in resource pool 'internal' to run this query.
A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)
Error: 4060, Severity: 11, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped. Error: 18456, Severity: 14, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Error: 3980, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
The website still seems to respond for the most part. The pages that seem to be affected the most are the ones that write to the database.
I have tried restarting the sqlexpress service as well as restarting the server. Both fix the symptoms, but the problem comes back within about 10 to 15 hours.
When these errors occur, Task Manager reports that around 1.8 GB of memory is in use. After I restart the service, the used memory drops back down to about 600 MB and very slowly climbs back up until the exceptions start showing up again.
All help will be greatly appreciated... thanks!
The ERRORLOG will contain some important information about the memory allocation pattern at the time things started to degrade. The lines will look similar to this:
MEMORYCLERK_SQLGENERAL (node 1) KB
---------------------------------------------------------------- --------------------
VM Reserved 0
VM Committed 0
AWE Allocated 0
SM Reserved 0
SM Commited 0
SinglePage Allocator 136
MultiPage Allocator 0
(7 row(s) affected)
Search your ERRORLOG files for such occurrences, then follow the guidance from How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005, since the output of DBCC MEMORYSTATUS and the output written to the ERRORLOG on an out-of-memory condition are quite similar.
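To speed up that search across a set of rotated log files, a hedged Python sketch that scans a log directory for memory-clerk dump lines (the clerk-name prefixes are examples of what appears in such dumps; the directory path is whatever your instance uses):

```python
import glob
import io
import os

def find_memory_clerk_lines(log_dir):
    """Collect (file, line number, text) for memory-clerk dump lines in ERRORLOG files."""
    hits = []
    for path in sorted(glob.glob(os.path.join(log_dir, "ERRORLOG*"))):
        # errors="replace" tolerates the mixed encodings ERRORLOG files can contain
        with io.open(path, "r", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if "MEMORYCLERK_" in line or "CACHESTORE_" in line:
                    hits.append((os.path.basename(path), lineno, line.rstrip()))
    return hits
```

The line numbers let you jump straight to the surrounding dump block, where the VM Reserved/Committed figures for each clerk appear.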