Why is my SQL Express database 4 times bigger than its SQL CE counterpart? - sql-server

I have been using SQL CE as my database system, but for additional functionality I am now switching to SQL Express. While running the first test, I found that the SQL Express 2005 database reached 4 GB within one day, whereas a SQL CE database holding a similar amount of data is only around 1 GB.
I later tested on another system with SQL Express 2008, where the database was still bigger than the CE version, but not by as much as above.
I tried shrinking the database using SQL Server Management Studio, but it only reduced it from 4096 MB to 4095.55 MB. I have learned that SQL Express databases require extra space for performing their operations, but I don't think that should be 4 times the data size, and in one day. What should I look for?

Check the minimum size you specified when you created your databases, and check your database growth settings. The minimum size of a database is the size specified when the database was originally created, or the last size set by a file-size-changing operation such as DBCC SHRINKFILE. For example, if a database was originally created with a size of 4 GB and grew to 4.1 GB, the smallest size the database could be reduced to is 4 GB, even if all the data in the database has been deleted.
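As a starting point, here is a minimal sketch for inspecting the current file sizes and growth settings, and for shrinking a file to a new, smaller minimum. The logical file name MyDb_Data is a placeholder; sizes in sys.database_files are reported in 8 KB pages.

-- Inspect file sizes and growth settings for the current database
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Unlike DBCC SHRINKDATABASE, DBCC SHRINKFILE can shrink a file below its
-- original creation size; the target size is given in MB.
DBCC SHRINKFILE (MyDb_Data, 1024);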

Related

Facing slowness in database server after migrating from SSMS 2008 to SSMS 2016

We have an RDP server that was running the 2008 versions of SSMS and the OS. Recently we migrated this server to the 2016 versions of both the OS (2016) and SSMS (2016).
The newly configured machine (with SSMS 2016) is identical to the old one (with SSMS 2008) in terms of system configuration. It has a 64-bit OS with an x64-based processor, 64.0 GB of RAM, and a 2.39 GHz CPU (32 processors).
We are facing a severe performance issue while running stored procedures on the 2016 version, as the same code base was migrated from SQL Server 2008. We are loading data into these servers using SSIS ETL tools.
For example, a stored procedure that takes 1 hour to complete on the old server (SSMS 2008) takes 10 hours, sometimes even more, on the new server (SSMS 2016).
To identify the root cause, we have tried the approaches below, but so far nothing has worked.
• After migration, we changed the compatibility level of SQL Server from 2008 to 2016.
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Reconfigured/thick-provisioned the drives of the new Windows server.
• While running time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query in parallel on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008), execution completed much faster than on the new server (SSMS 2016).
Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version would not make any difference.
It's impossible to tell for certain, of course, but query execution times can be drastically affected by the use of the new cardinality estimator (CE) in SQL Server 2014 and above. 99% of the time, things run faster, but once in a while, the new CE gets crushed. Add this line to the stored procedure to run it with the old 2008 CE and see if it makes any difference:
OPTION(QUERYTRACEON 9481);
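For illustration, a sketch of where the hint goes (the table and parameter names below are made up); it applies per statement, so add it to each query you want to test:

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o                -- placeholder table
WHERE o.CustomerID = @CustomerID    -- placeholder parameter
OPTION (QUERYTRACEON 9481);         -- use the legacy (pre-2014) CE for this statement

Note that QUERYTRACEON requires sysadmin membership; on SQL Server 2016 SP1 and later, OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION')) achieves the same effect without that requirement.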
This problem may have two causes:
1- Check the settings of your SQL Server. Specifically, limit maximum server memory to 60% of your RAM, and increase the number of tempdb (system database) data files to match your number of CPU cores.
2- Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
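A minimal sketch of the first suggestion, assuming 64 GB of RAM (the value is illustrative):

-- Cap max server memory at roughly 60% of 64 GB (value in MB)
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 39322;
RECONFIGURE;

And the second suggestion in miniature, with a made-up column list:

DECLARE @Staging TABLE (id int, amount money);  -- table variable
CREATE TABLE #Staging (id int, amount money);   -- temp table replacement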

SQL Server 2008 R2 super high memory consumption - many stolen pages and huge log file

We have a SQL Server 2008 R2 instance running on a Windows Server 2008 R2 machine. The SQL instance has 12 GB of RAM allocated to it. Currently (and probably typically) we are sitting at 47 concurrent connections. The server has a handful of DBs residing on it, but only one is really used. This DB is 33 GB, with a log size of 89 GB.
The server's physical memory usage is steady at 98%, and our application response time is bad. Most of the memory used by SQL Server is stolen pages. I'm not sure how to correct this. Our indexes and statistics are all basically brand new/recently rebuilt.
I'm at a bit of a loss as to how stolen pages are occurring, why they remain so high, and how to deal with this. Is the log related? It's nearly 3 times the size of the DB. We're reaching a critical point, so any and all help would be much appreciated. Thanks!
SQL Server, by default, will take up all available memory unless you tell it not to in the server's properties in SSMS (right-click the server, choose Properties, go to Memory on the left, and set the max).
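To quantify the stolen memory, one option on 2008 R2 is the Buffer Manager performance counters (counter names differ on later versions); a sketch:

-- Stolen pages = buffer pool memory used for things other than data pages
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Stolen pages', 'Total pages', 'Target pages');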
For the log file size, it matters how the database is logging. Right-click the database, choose Properties, and look at the recovery model on the Options page. I would Google the different recovery models to match the type to your database's needs. If it is in FULL recovery and you are not taking transaction log backups, the log will just grow and grow. In that case, look at implementing Ola Hallengren's widely used backup scripts: https://ola.hallengren.com/
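A quick sketch of checking the recovery model and, if it is FULL, starting log backups so the log space can be reused (the database name and path are placeholders):

-- Check the recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDb';

-- Back up the transaction log; repeat on a schedule to keep the log in check
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';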

Maximum number of rows supported by SQL Server Management Studio

We need to process 100 million rows, with a total size of 20-25 GB.
We have planned to use SQL Server Management Studio for the processing, but we don't know its capacity. What is the maximum number of rows supported by SQL Server Management Studio?
Much bigger than that: SSMS is just a client tool, so the limit that matters is the database engine's maximum database size, which is 524,272 terabytes. The main question would be one of disk space, max file size, filegrowth options, etc. I work with databases well in excess of a terabyte, so 25 GB shouldn't make SQL Server flinch (unless you're using Express edition, where the cap is 10 GB, or 4 GB before 2008 R2).
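For processing that many rows, a batched pattern keeps each transaction (and therefore the log) small; a sketch, with a made-up table and flag column:

-- Process in 50k-row chunks until nothing is left to do
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.BigTable   -- placeholder table
    SET processed = 1                 -- placeholder work
    WHERE processed = 0;
    SET @rows = @@ROWCOUNT;
END;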

SQL Server 2014 standard edition slows the machine when Database size grows

I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few hours the machine is still usable, but once the database size grows past ~20 GB, the machine seems to become unusable.
I saw some topics/forums/answers/blogs suggesting limiting SQL Server's max memory usage. Any thoughts on this?
By the way, I am using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, try to use a large initial size, and consider a bigger autogrowth increment.
You will want to minimize the number of times your filegroups need to grow.
2 - Server settings:
In your SQL Server settings, I would recommend that you remove one logical processor from SQL Server's affinity. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
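A sketch of the first suggestion (the database name, paths, and sizes are all placeholders to be tuned for your workload):

-- Large initial size plus fixed-size growth increments, so autogrowth
-- events are rare and predictable during sustained bulk inserts
CREATE DATABASE IngestDb
ON PRIMARY
    (NAME = IngestDb_Data, FILENAME = N'D:\Data\IngestDb.mdf',
     SIZE = 50GB, FILEGROWTH = 5GB)
LOG ON
    (NAME = IngestDb_Log, FILENAME = N'E:\Log\IngestDb.ldf',
     SIZE = 10GB, FILEGROWTH = 1GB);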

Large script failing on SQL Server 2008 R2 Express

I need to run a "large" script on SQL Server 2008 R2 Express and it is failing with
There is insufficient system memory in resource pool 'internal' to run this query.
The script is around 10 MB saved to disk, contains about 54,000 top-level statements (insert/delete/update), and declares about 5,000 variables (of type BIGINT).
I am running SQL Server 2008 R2 Express 64-bit 10.5.1746. There are 3 GB allocated to the VM, 1 GB allocated to SQL Server, and 512 KB minimum memory per query. The results of DBCC MEMORYSTATUS can be found at this link.
The script is merely a restoration of a (lightweight) production database that was exported as SQL statements (data only, no schema).
If it's not possible to do this, I am shocked that SQL Server cannot handle such a basic scenario. I've tested the equivalent scenario on Firebird and SQLite and it worked just fine (and they are open-source products)!
NOTE: It is not possible to break the script up, as variables declared at the beginning are referenced at the end of the script.
NOTE: Before rushing to flag this as a "duplicate", please note that the other similar threads do not address the specific issue "How to run a very large script in SQL Server 2008".
SQL Server Express is limited in the amount of memory it can use, and only a portion of that memory can be used for executing queries. You can try setting forced parameterization on the database, as it may reduce the memory required for the plan, which would leave more for query execution (this depends on your specific queries).
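The forced parameterization suggestion is a one-line change ('MyDb' is a placeholder name):

-- Parameterize literals in all queries, so similar statements share one plan
ALTER DATABASE MyDb SET PARAMETERIZATION FORCED;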
The best option is to use an edition of SQL Server that supports more memory. Developer Edition is affordable but can't be used in production. Standard Edition would be your next best bet.
The only thing that has worked thus far is upgrading to SQL Server 2012 Express. The query took several minutes to execute, but completed fully and without error.
