I need to run a "large" script on SQL Server 2008 R2 Express and it is failing with
There is insufficient system memory in resource pool 'internal' to run this query.
The script is around 10MB saved to disk, contains about 54000 top-level statements (insert/delete/update) and declares about 5000 variables (of type BIGINT).
I am running SQL Server 2008 R2 Express 64-bit, 10.5.1746. There is 3 GB allocated to the VM, 1 GB allocated to SQL Server, and 512 KB minimum memory per query. The results of DBCC MEMORYSTATUS can be found at this link.
The script is merely a restoration of a (lightweight) production database which was exported as SQL statements (data only, no schema).
If it's not possible to do this, I am shocked that SQL Server cannot handle such a basic scenario. I've tested the equivalent scenario on Firebird and SQLite and it worked just fine (and they are open-source products)!
NOTE: it is not possible to break the script up, as variables declared at the beginning are referenced at the end of the script.
NOTE: Before rushing to flag this as a "duplicate", please note that the other similar threads do not address the specific issue "How to run very large script in SQL Server 2008".
SQL Server Express is limited in the amount of memory it can use, and only a portion of that memory can be used for executing queries. You can try setting forced parameterization on the database, as it may reduce the memory required for the plan, which would leave more for query execution (it depends on your specific queries).
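A minimal sketch of enabling that, assuming a database named RestoreTarget (a placeholder name, not from the question):
-- Placeholder database name; substitute your own.
ALTER DATABASE RestoreTarget SET PARAMETERIZATION FORCED;
-- Confirm the setting took effect:
SELECT name, is_parameterization_forced FROM sys.databases WHERE name = 'RestoreTarget';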
The best option is to use an edition of SQL Server that supports more memory. Developer edition is affordable but can't be used for production use. Standard edition would be your next best bet.
The only thing that's worked thus far is upgrading to SQL Server 2012 Express. The query took several minutes to execute, but it ran to completion without error.
Related
We have an RDP server that was running the 2008 versions of the OS and SSMS. Recently we migrated this server to the 2016 versions of both the OS and SSMS.
The newly configured machine (with SSMS 2016) has the same system configuration as the old one (with SSMS 2008): a 64-bit OS with an x64-based processor, 64.0 GB of RAM, and a 2.39 GHz CPU (32 processors).
We are facing severe performance issues while running stored procedures on the SSMS 2016 server, even though the same code base was migrated from SQL Server 2008. We load data into these servers using SSIS ETL tools.
For example, a stored procedure that takes 1 hour to complete on the old server (SSMS 2008) takes 10 hours, sometimes even more, on the new server (SSMS 2016).
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After migration we changed the compatibility level of the database from 2008 to 2016 (see the sketch after this list).
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Reconfigured/thick-provisioned the disks of the new Windows server's drives.
• Ran SQL Server Profiler in parallel while running the time-consuming stored procedures on the new server (SSMS 2016), but couldn't find anything.
• Ran the same query in parallel on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
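For reference, a minimal sketch of the compatibility-level, recompile, and statistics steps mentioned above; the database and procedure names are placeholders, not from the question:
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 130;   -- 130 = SQL Server 2016; placeholder database name
EXEC sp_recompile N'dbo.usp_LoadSales';                 -- placeholder procedure name; forces a new plan on next run
USE SalesDB;
EXEC sp_updatestats;                                    -- refresh statistics across the database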
Note: Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version would not make any difference.
It's impossible to tell for sure, of course, but query execution times can be drastically affected by the new cardinality estimator used in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this line to the stored procedure to run with the old 2008 CE and see if it makes any difference.
OPTION(QUERYTRACEON 9481);
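For illustration only (the query below is a placeholder, not from the original procedure), the OPTION clause is appended to the individual statement inside the stored procedure:
SELECT c.CustomerID, SUM(o.TotalDue) AS TotalDue
FROM dbo.Orders AS o                                     -- placeholder tables and columns
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481);                              -- use the pre-2014 ("legacy") cardinality estimator for this statement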
This problem may have two causes:
1 - Check your SQL Server settings. Specifically, limit maximum server memory to about 60% of your RAM and increase the number of tempdb (system database) data files to match your CPU core count (see the sketch after this list).
2 - Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
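A minimal sketch of the first suggestion, assuming the 64 GB server described in the question (the memory value and the tempdb file path/sizes are illustrative):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 39322;   -- roughly 60% of 64 GB
RECONFIGURE;

-- Add tempdb data files until the count matches the CPU core count (path and sizes are placeholders).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 1GB);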
I've got SQL Server Express 2017 (RTM) 14.0.1000.169 installed on a low-powered W2019 server on AWS. It has 2 GB of RAM and a 2.40 GHz processor (t2.small).
I've had the same setup on other AWS machines with less power than this and they don't seem to have this problem, but those machines had SQL Server 2014 and Windows 2008 R2.
The problem is that specific queries are taking huge amounts of time when I run them. In my applications, I have a routine that sniffs the parameters of the stored procedure being called so that the .NET code can provide parameters as needed, without me having to code every single procedure separately.
To do this, I have been running
[sys].[sp_procedure_params_100_managed] @procedure_name = @PRC
but on this new machine this routine is taking up to 30 seconds to run.
So I've manually created something simpler to see if I can view any issues with the execution plan, but it still runs extremely slowly:
select o.name, prm.*
from sys.parameters prm
inner join sys.objects o on prm.object_id=o.object_id
where o.type='P'
and o.name='prc_THEPROCNAME'
To be clear, there are only 30 stored procedures in the whole database. But the query above takes 2860ms to run in SSMS and is showing 1285 reads.
When I run that on my little win10 machine with SQL Server Express 2014, it takes 12ms with 568 reads on a similar (but larger) database.
So my question is this: what is the issue with this environment? Is a t2.small too underpowered for Win2019? Does SQL Server 2017 have that much more overhead than SQL Server 2014? Or is it possible that I have a bad configuration somewhere in my SQL Server setup?
I am using SQL Server on a very similar AWS platform (t2.small) and have no performance issues with your query (< 1s). You haven't mentioned whether any other queries are suffering performance problems, but that would be helpful. My answer then would be that the t2.small is not underpowered and it is highly unlikely that SQL '17 would have greater difficulty than SQL '14 when executing this query.
I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few hours the machine is still usable, but once the database size reaches ~20 GB the machine seems to become unusable.
I saw some topics/forums/answers/blogs suggesting limiting the max memory usage of SQL Server. Any thoughts on this?
Btw, I'm using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size and consider a bigger autogrowth increment.
You want to minimize how often your filegroups need to grow.
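A minimal sketch of what that could look like at creation time (the database name, file paths, and sizes are all placeholders; pick values that fit your expected data volume):
CREATE DATABASE IngestDb
ON PRIMARY
    (NAME = IngestDb_data, FILENAME = 'D:\Data\IngestDb.mdf', SIZE = 20GB, FILEGROWTH = 4GB)
LOG ON
    (NAME = IngestDb_log, FILENAME = 'L:\Logs\IngestDb_log.ldf', SIZE = 8GB, FILEGROWTH = 2GB);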
2 - Server settings:
In your SQL Server settings I would recommend removing one logical processor from SQL Server. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
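One way to do that is with processor affinity; a minimal sketch, assuming a hypothetical 4-core machine (CPU numbering starts at 0, so this leaves CPU 3 for the OS):
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 2;   -- bind SQL Server to CPUs 0-2
-- To revert to the default, where SQL Server may use all CPUs:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;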
I want to install SQL Server 2008 Express on my laptop, which has 1 GB of memory, but my database contains lots of binary data and I don't want it eating up all my RAM. I would much rather sacrifice SQL Server performance (make it page) in favor of other applications.
Is it possible to limit the memory footprint of SQL Server?
look here and here
Essentially sp_configure 'max server memory', I think.
I've only got SQL Server 2005 Express, not 2008, but from SQL Server Management Studio Express, if I right-click on the root node in the tree (the server node) and select Properties, there's a "Memory" page with both minimum and maximum amounts of memory available to be set.
From the docs for these options:
Minimum server memory (in MB)
Specifies that SQL Server should start with at least the minimum amount of allocated memory and not release memory below this value. Set this value based on the size and activity of your instance of SQL Server. Always set the option to a reasonable value to ensure that the operating system does not request too much memory from SQL Server and inhibit Windows performance.
Maximum server memory (in MB)
Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
I'd be surprised if these options weren't in 2008, but you could always just install it and try.
You can do it w/ osql:
http://kb.hs-lab.com/content/7/113/en/how-to-limit-ram-usage-for-sql-2005-express-database.html
osql -E -S YOURSERVERNAME\PRINTLOGGER
sp_configure 'show advanced options',1
RECONFIGURE WITH OVERRIDE
GO
then
sp_configure 'max server memory', 70
RECONFIGURE WITH OVERRIDE
GO
You might also try giving cpu priority to your favored applications and letting SQL manage memory dynamically. It will release memory as needed by other apps, regardless of priority.
Hopefully you're not trying to run Visual Studio on that machine. It won't be much fun.
Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?
I think I read somewhere that the SQL Server 2005 database engine should be about 30% faster than the SQL Server 2000 engine. It might be that you have to run your database in compatibility mode 90 to get these benefits.
But I have stumbled on two scenarios where performance can drop dramatically when using MSSQL 2005 compared to MSSQL 2000:
Parameter sniffing: When using a stored procedure, SQL Server calculates exactly one execution plan at the time you first call the procedure. The execution plan depends on the parameter values given for that call. In our case, procedures which normally took about 10 seconds were running for hours under MSSQL 2005. Take a look here and here.
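A common mitigation (a standard pattern, not taken from this answer) is to stop the optimizer from building the plan around the first caller's values, for example by copying parameters into local variables; a minimal sketch with placeholder names, written against SQL Server 2005 syntax:
CREATE PROCEDURE dbo.usp_GetOrders       -- placeholder procedure
    @CustomerID INT
AS
BEGIN
    -- Copy the parameter into a local variable so the plan is built for an
    -- "unknown" value instead of the sniffed one.
    DECLARE @LocalCustomerID INT;
    SET @LocalCustomerID = @CustomerID;

    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders                       -- placeholder table
    WHERE CustomerID = @LocalCustomerID;

    -- Alternative: recompile the statement on every execution instead:
    -- SELECT ... FROM dbo.Orders WHERE CustomerID = @CustomerID OPTION (RECOMPILE);
END;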
When using distributed queries, MSSQL 2005 behaves differently with regard to assumptions about the sort order on the remote server. The default behavior is that the server copies the whole remote tables involved in a query to the local tempdb and then executes the joins locally. The workaround is to use OPENQUERY, where you can control exactly which result set is transferred from the remote server.
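A minimal sketch of the OPENQUERY workaround (the linked server, tables, and filter are placeholders):
-- Instead of a four-part-name join that can pull the whole remote table into tempdb,
-- push the filtering to the remote server and join only the reduced result set locally.
SELECT l.OrderID, r.CustomerName
FROM dbo.Orders AS l                      -- placeholder local table
INNER JOIN OPENQUERY(RemoteServer,
    'SELECT CustomerID, CustomerName FROM RemoteDb.dbo.Customers WHERE Region = ''EU''') AS r
    ON r.CustomerID = l.CustomerID;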
After you moved the DBs over to 2005, did you
• update the stats with a full scan?
• rebuild the indexes?
First try that and then check performance again (a minimal sketch of both commands follows).
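A minimal sketch of both steps for a single table (the table name is a placeholder; repeat per table or script it across the database):
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;   -- placeholder table name
ALTER INDEX ALL ON dbo.Orders REBUILD;        -- rebuild every index on that table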
Also, FYI: if you run compatibility level 90, some things are no longer supported, like old-style outer joins (*= and =*).
Are you using subselects in your queries?
From my experience, a SELECT statement with subselects that runs fine on SQL Server 2000 can crawl on SQL Server 2005 (it can be like 10x slower!).
Try an experiment: rewrite one query to eliminate the subselects and see how its performance changes.
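For example, a minimal sketch of the kind of rewrite meant here, with placeholder tables and columns, turning a correlated subselect into a join:
-- Before: correlated subselect evaluated per row.
SELECT o.OrderID, o.CustomerID,
       (SELECT c.CustomerName FROM dbo.Customers c WHERE c.CustomerID = o.CustomerID) AS CustomerName
FROM dbo.Orders o;

-- After: the same result expressed as a join (LEFT JOIN preserves orders without a matching customer).
SELECT o.OrderID, o.CustomerID, c.CustomerName
FROM dbo.Orders o
LEFT JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;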