I have an odd issue on a Microsoft SQL Server that I manage. Two of the largest tables in a database are not visible in the Object Explorer.
When I right-click the database and go to Tasks > Shrink > Files, the data file shows as 99% unused. However, the following screenshot makes it clear that over 500 GB is in use:
The Disk Usage by Top Tables report shows that these two tables have over 1 billion records and account for the majority of the space reserved in the data file.
However, when looking in Object Explorer, the tables do not appear:
I know the tables exist because I am able to run SELECT queries against them. The SQL Server version is Microsoft SQL Server 2019 (RTM-GDR) Standard Edition (64-bit). I am also using a sysadmin account, and I have confirmed that they are not views.
Any idea what could be causing this?
Cheers,
It looks to me like you have temporal tables in your environment. The history table shows up underneath its base table in SSMS, not as a top-level node. Here's a screenshot from the WideWorldImporters sample database from Microsoft:
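If you want to confirm this from T-SQL rather than SSMS, a quick catalog query (a sketch, relying on the temporal metadata columns that exist in SQL Server 2016 and later) lists system-versioned tables alongside their history tables:

```sql
-- List system-versioned temporal tables and their history tables.
-- History tables are the ones nested under the base table in SSMS.
SELECT t.name AS base_table,
       h.name AS history_table
FROM sys.tables AS t
JOIN sys.tables AS h
    ON h.object_id = t.history_table_id
WHERE t.temporal_type = 2;  -- 2 = SYSTEM_VERSIONED_TEMPORAL_TABLE
```

If your two large tables show up in the `history_table` column, that would explain why they are not listed as top-level tables.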
I am having problems moving a database to another computer, having tried both a manual file copy with attach in SSMS and backup/restore in SSMS (Moving SQL Server 2019 database to another computer).
However, the entire database is only a few tables and 15 MB.
So I am thinking: why not simply export to plain SQL? That would work for me in this case. So in SSMS I right-click the database, choose Tasks, and select Export Data...
It appears the closest I can get in SSMS is picking "Flat File", which is essentially a CSV file that can contain a single table. I want a SQL file that can create the database and tables, add the data, and so on.
What am I overlooking? Is this not possible in SSMS?
I am trying to download my SQL Server database (which is more than 40 GB) from the production server to my local machine. I need only the schema & some of the data, as downloading a 40 GB backup file & restoring it is a really tough task for me.
I have tried using Generate Scripts to obtain the schema, and this was successful. But for getting the data (say, approximately the first 500 rows) of all tables, I am not sure how to approach that.
Please let me know if there is any other way to achieve this.
I am using Microsoft's SQL Server Version 12.0.xxx.
Thanks
SQL Server Management Studio provides a wizard that enables you to generate scripts not only for the metadata (schema) but also for the data within the database.
Please refer to Script Data in SQL Server
But if your database is large, the script file will be huge.
This wizard does not provide an option to script only the first 500 rows of each table.
Besides, if you have foreign keys and constraints in your table definitions, the first 500 rows might not be enough. You need every referenced lookup row in your database in order to insert data into your transactional tables, and you need the parent rows for the child data.
This forces you to create a smarter script for data extraction.
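As a rough illustration of where such a script could start, this sketch generates a `TOP (500)` extract statement per table from the catalog views (the `Extract` target schema is a hypothetical name, and this does nothing about foreign-key ordering, so referenced lookup rows still need separate handling):

```sql
-- Generate one "SELECT TOP (500) ... INTO ..." statement per user table.
-- The Extract schema is a placeholder; create it first or change the target.
SELECT 'SELECT TOP (500) * INTO Extract.' + QUOTENAME(t.name) +
       ' FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s
    ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```

You would then review the generated statements, reorder or extend them to satisfy foreign keys, and run them against a staging database to script out.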
I have a sample SQL Server database backup that is ~12 GB. Too large for me to restore in SQL Server Express (10 GB limit). I downloaded the Developer edition of SQL Server (on my home PC) and was able to restore the database there, but the machine I need to test this database on is 32-bit and I can't find a 32-bit SQL Server Developer edition.
Is there an easy way for me to reduce the database size by a couple of GB by truncating some tables I'm not using, in order to get under the 10 GB limit so I can restore it to a 32-bit version of SQL Server Express?
I've already cleared out the data I don't need, but when I make a full backup, it's still ~50 GB! Is there a way to get a smaller backup after removing some data? What settings should I use for the backup?
This data is just for testing with a Windows Forms app I'm creating, so it's most important that I get the schema and stored procedures over anyway. I've tried generating scripts and running them on the new SQL Server instance, but there are dependencies on some .mdf file that I can't move, which causes it to fail.
Let me know if I can provide any additional helpful information.
Any help would be appreciated - thanks!
In Developer Edition, right-click Database Name, select Reports > Standard Reports > Disk Usage by Top Tables. That will show which tables are taking up the most room.
Remove what you can, then shrink your data and log files:
Right-click Database Name > Tasks > Shrink > Files.
Do this twice: once with File Type: Data and again with File Type: Log.
Create a new backup and see if it's within the limit.
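If you prefer T-SQL over the SSMS dialogs, the same steps can be sketched like this (the database and logical file names below are placeholders; check your actual logical names with `sys.database_files`). `WITH COMPRESSION` is available in Developer Edition, and a compressed backup is often considerably smaller:

```sql
USE YourDb;  -- placeholder database name

-- Shrink the data file, then the log file (logical names are placeholders):
DBCC SHRINKFILE (N'YourDb_Data');
DBCC SHRINKFILE (N'YourDb_Log');

-- Take a compressed full backup:
BACKUP DATABASE YourDb
TO DISK = N'C:\Backups\YourDb.bak'
WITH COMPRESSION, INIT;
```

Note that compression only shrinks the backup file itself; the 10 GB Express limit applies to the restored data file size, so the truncation step is still what matters for getting under the limit.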
I have Server_A with DB_A and Server_B with DB_B, both of these are remote servers that I have no root access to.
Server_A is SQL 2012 and DB_A is set in Compatibility level 2008
Server_B is SQL 2008 and DB_B is set in Compatibility level 2008, of course.
I need to copy the data, including relationships and keys from DB_A to DB_B. How can this be done?
Using the Import/Export wizard I've only been able to move the data, and all the relationships are lost. Please give a guy a hand and teach him something!
One way is to right-click on the source database in SSMS and select Tasks > Generate Scripts. This will show a wizard which when completed will produce a text file with all the SQL statements needed to replicate the database on a new system.
The neat thing about the wizard is that it gives you a set of options so you can decide what goes into the final output. You get to pick whether you want just the data, just the table structures, or both.
The final result might not be a good option for large databases but it is very portable.
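Once the generated script has been run on the target server, a quick sanity check (just a sketch, using standard catalog views) is to compare object counts between the source and destination databases:

```sql
-- Run this against both DB_A and DB_B; the counts should match if the
-- wizard scripted the tables and foreign-key relationships across.
SELECT (SELECT COUNT(*) FROM sys.tables)       AS table_count,
       (SELECT COUNT(*) FROM sys.foreign_keys) AS foreign_key_count;
```

In your case, also make sure the wizard's Advanced options target the older server version on the destination, since scripts generated for a newer version may use syntax the 2008 server rejects.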
Actually, you can back up to a .bak file.
Where you restore it is totally up to you.
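A minimal sketch of that backup/restore, with placeholder paths and logical file names (run `RESTORE FILELISTONLY` against the .bak first to find the real logical names):

```sql
-- On the source server:
BACKUP DATABASE DB_A
TO DISK = N'C:\Temp\DB_A.bak'
WITH INIT;

-- On the destination server (logical and physical names are placeholders):
RESTORE DATABASE DB_B
FROM DISK = N'C:\Temp\DB_A.bak'
WITH MOVE N'DB_A'     TO N'C:\Data\DB_B.mdf',
     MOVE N'DB_A_log' TO N'C:\Data\DB_B_log.ldf';
```

Be aware that a backup taken on a newer SQL Server version cannot be restored on an older one, so check the versions of the two servers before relying on this route.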
If that doesn't work for some reason (it should work fine), you can always try to copy your database as follows.
These two steps will copy the database entirely, including the relationships and keys.
This is the only alternative I've found for your problem, since you don't have sysadmin rights.
I have 2 linked tables in an MS Access database. One linked table points to a table in a Sybase database and the other to a SQL Server database.
The table structures are the same, and they hold the same data too, barring a few rows.
I tried the 'Find Unmatched Query Wizard' to compare the two tables and find the number of rows that are the same (and different). But this makes MS Access hang for huge tables (10 million+ rows).
Are there any settings that I can tweak so that Access does not hang? I am using ODBC connections to Sybase and SQL Server.
One more thing I noticed: when I right-click the SQL Server linked table and click Open, it shows all the rows from the table. When I do the same for the Sybase one, it hangs and I have to close Access through Task Manager.
Some details:
Sybase version - 12.5.3
SQL Server version - 2008 R2
MS Access Version - 2003
On the face of it, I would say the problem is that Access is trying to do this query locally and is pulling most of each table down the wire. This is where the myth comes from that Access does this all the time, when in fact it only does it in certain edge cases. Is there any way you could narrow down the data you are comparing? Maybe the table is a list of product sales and you could do one product line at a time, or something like that?
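One way to keep the heavy lifting server-side is a pass-through query per slice, so that only a count and a checksum travel over the wire instead of millions of rows. A sketch for the SQL Server side, with hypothetical table and column names (Sybase 12.5 would need its own equivalent of the checksum function):

```sql
-- Hypothetical table/column names; compare one product line at a time.
-- Run as a pass-through query so Access sends it to the server as-is.
SELECT COUNT(*)                         AS row_count,
       SUM(CAST(CHECKSUM(*) AS BIGINT)) AS slice_checksum
FROM dbo.ProductSales
WHERE ProductLine = 'A';
```

Comparing the count and checksum for each slice from both servers quickly narrows down which slices differ, and only those need a row-by-row comparison.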