I am using SSMS and need to create a .bak file containing the content of a single table in one of my databases.
The reason is that I might need to populate the database with this data again at a later time (it is test data), and generating it again with the script I wrote takes too much time.
How do I do this in SSMS?
Taking your response to WEI_DBA very literally and accepting that you prefer a .bak file, I suggest you create a second database and copy the content of this specific table into it (e.g. with SELECT INTO). You can then back up the second database in the regular way.
create database foo
go

-- copy the table's content into the new database
select *
into foo.dbo.table1
from dbo.table1
go

-- back up the new database, which now contains only this table
backup database foo
to disk = 'c:\temp.foo.bak'
with format
, medianame = 'foobak'
, name = 'Full backup foo'
The thing is that your instinct tells you that you need a backup (.bak) file. This is not the best way to share the contents of a table: users would have to restore it to a dummy database and then copy the data out, just as you would have to do now.
Sharing the table content via CSV or XML is, in my opinion, a better way (and for that you can indeed use the Import/Export wizard, as mentioned earlier).
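For example, a minimal sketch of a CSV round-trip with the bcp utility (the database, table and file path are placeholders for illustration, and it assumes the data contains no commas or line breaks; -c writes character data, -t sets the field terminator, -T uses a trusted connection):

bcp "select * from MyTestDb.dbo.table1" queryout "c:\temp\table1.csv" -c -t, -T -S localhost
bcp MyTestDb.dbo.table1 in "c:\temp\table1.csv" -c -t, -T -S localhost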
Related
I imported data from Power BI into SQL Server. You can see what the imported data looks like.
Additionally, I created my own database with the commands below:
CREATE DATABASE MY_DW
GO
USE MY_DW
GO
Now I want to copy all of these tables into my database named MY_DW. Can anybody help me solve this problem and copy all the tables into my database?
Please check https://www.sqlshack.com/how-to-copy-tables-from-one-database-to-another-in-sql-server/.
This link suggests various methods to copy the data tables from one database to another.
Thanks,
Rajan
The following approach could resolve your issue:
Right-click the imported database and choose Tasks -> Generate Scripts
On the Introduction page, click the Next button
Select the database objects (tables, in your case) to script, then click the Next button
On the "Specify how scripts should be saved" page, open Advanced -> Types of data to script -> Schema and data, then click the Next button
Review your selections and click the Next button
Script generation then takes place and the script is saved; run it against the database you created, MY_DW
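One way to run the saved script file against MY_DW is the sqlcmd utility; a minimal sketch (the instance name and file path are placeholders):

sqlcmd -S .\SQLEXPRESS -d MY_DW -E -i "C:\scripts\imported_tables.sql"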
Another approach:
Assuming that the databases are on the same server.
The query below will create the table in your database and copy the data into it (without constraints):
SELECT * INTO MY_DW.dbo.Table_Name
FROM ImportedDB.dbo.Table_Name
And if the table already exists, the query below will insert the data into it:
INSERT INTO MY_DW.dbo.Table_Name
SELECT * FROM ImportedDB.dbo.Table_Name
Final approach:
Assuming that the source database is on a linked server.
In the case of a linked server, the four-part object naming convention applies, as shown below.
The query below, run on the destination server, will create the table in your database and copy the data (without constraints); note that SELECT ... INTO cannot create a table through a linked server, so the destination name stays local:
SELECT * INTO [MY_DW].[dbo].[Table_Name]
FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
And if the table already exists, the query below will insert the data into your database table (here both servers are referenced through linked server names):
INSERT INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
SELECT * FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
I'm looking for a way to automate a copy of a database each day on the same SQL Server.
For example, I have a database MyDB. Each day, I would like to make a copy of MyDB into MyDB_TEST on the same server.
Is there any simple script to do this "simple" task?
I found this script:
backup database OriginalDB
to disk = 'D:\backup\OriginalDB_full.bak'
with init, stats =10;
restore database new_db_name
from disk = 'D:\backup\OriginalDB_full.bak'
with stats =10, recovery,
move 'logical_Data_file' to 'D:\data\new_db_name.mdf',
move 'logical_log_file' to 'L:\log\new_db_name_log.ldf'
But I don't understand what to replace 'logical_Data_file' and 'logical_log_file' with.
It's a move, and I want a copy of my database... Why are these last two lines "move"?
I think I misunderstand this script... could anyone help me please?
EDIT:
I just edited my code like this:
backup database MY_DB
to disk = 'D:\BACKUP\MY_DB.bak'
with init, stats =10;
restore database MY_DB_NEW
from disk = 'D:\BACKUP\MY_DB.bak'
with stats =10, recovery,
move 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB.mdf' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new.mdf',
move 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_log.mdf' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new_log.ldf'
And I sadly get an error telling me that the logical file "MY_DB.mdf" is not part of the MY_DB_NEW database... use RESTORE FILELISTONLY to list the logical file names.
I don't understand where my mistake is in this script, any input?
When you RESTORE a database, unless you specify otherwise it will create the same files the database had previously: same name, same path. As you want a copy, that would mean overwriting the existing files. That would, obviously, fail, as those files are in use by your original database.
Therefore you need to tell the instance to put the database files ("move" them) in a different path, hence the MOVE clause. This means the 2 databases then don't conflict by trying to use or write over each other's files.
Side note: this type of requirement does, however, normally tend to suggest an XY problem, though that is a different question.
CORRECTION:
I wasn't putting in the logical file name but the physical file name instead.
Just put the logical file name without the path!
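For reference, a sketch of the corrected sequence (the logical names MY_DB and MY_DB_log below are assumptions; RESTORE FILELISTONLY shows the actual ones):

restore filelistonly
from disk = 'D:\BACKUP\MY_DB.bak';

-- use the LogicalName values returned above in the MOVE clauses
restore database MY_DB_NEW
from disk = 'D:\BACKUP\MY_DB.bak'
with stats = 10, recovery,
move 'MY_DB' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new.mdf',
move 'MY_DB_log' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new_log.ldf'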
I have one database with an image table that contains just over 37,000 records. Each record contains an image in the form of binary data. I need to get all of those 37,000 records into another database containing the same table and schema that has about 12,500 records. I need to insert these images into the database with an IF NOT EXISTS approach to make sure that there are no duplicates when I am done.
I tried exporting the data into Excel and formatting it into a script. (I have done this before with other tables.) The thing is, Excel does not support binary data.
I also tried the "generate scripts" wizard in SSMS which did not work because the .sql file was well over 18GB and my PC could not handle it.
Is there some other SQL tool to be able to do this? I have Googled for hours but to no avail. Thanks for your help!
I have used SQL Workbench/J for this.
You can either use WbExport and WbImport through text files (the binary data will be written as separate files and the text file contains the filename).
Or you can use WbCopy to copy the data directly without intermediate files.
To achieve your "if not exists" approach you could use the update/insert mode, although that would change existing rows.
I don't think there is an "insert only if it does not exist" mode, but you should be able to achieve this by defining a unique index and ignoring errors (that wouldn't be really fast, but should be OK for that small number of rows).
If the "exists" check is more complicated, you could copy the data into a staging table in the target database, and then use SQL to merge that into the real table.
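A minimal sketch of that staging-table idea in T-SQL (table and column names are illustrative assumptions: dbo.images is the real table, dbo.images_staging the staged copy, and ImageId the key used for the "exists" check):

INSERT INTO dbo.images (ImageId, ImageData)
SELECT s.ImageId, s.ImageData
FROM dbo.images_staging AS s
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.images AS i
    WHERE i.ImageId = s.ImageId
);
-- drop the staging table once the merge is done
DROP TABLE dbo.images_staging;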
Why don't you try the 'Export data' feature? This should work.
Right click on the source database, select 'Tasks' and then 'Export data'. Then follow the instructions. You can also save the settings and execute the task on a regular basis.
Also, the bcp.exe utility could work to read data from one database and insert into another.
However, I would recommend using the first method.
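If you do go the bcp route, a sketch could look like the two commands below (database, table and file names are placeholders; native mode, -n, keeps the binary image data intact, and -T uses a trusted connection). Note that bcp alone does not handle the "if not exists" requirement, so duplicates would still have to be dealt with afterwards:

bcp SourceDB.dbo.image_table out "c:\temp\image_table.dat" -n -T -S localhost
bcp TargetDB.dbo.image_table in "c:\temp\image_table.dat" -n -T -S localhost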
Update: In order to avoid duplicates you have to be able to compare the images. Unfortunately, you cannot compare image columns directly, but you could cast them to varbinary(max) for comparison.
So here's my advice:
1. Copy the table to the new database under the name tmp_images
2. Insert only the new images, either with the MERGE command or with a filtered INSERT like the one below.
INSERT INTO DB1.dbo.table_name
SELECT * FROM DB2.dbo.table_name
WHERE column_name NOT IN
(
SELECT column_name FROM DB1.dbo.table_name
)
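If the only reliable duplicate check is the image content itself, a sketch of that comparison using the varbinary(max) cast mentioned above (column names are assumptions, and comparing full image contents over 37,000 rows will be slow):

INSERT INTO DB1.dbo.image_table (image_data)
SELECT src.image_data
FROM DB2.dbo.image_table AS src
WHERE NOT EXISTS (
    SELECT 1
    FROM DB1.dbo.image_table AS dst
    WHERE CAST(dst.image_data AS varbinary(max)) = CAST(src.image_data AS varbinary(max))
);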
How do I combine several SQLite databases (one table per file) into one big SQLite database containing all the tables? E.g. you have the database files db1.dat, db2.dat, db3.dat... and you want to create one file, dbNew.dat, which contains the tables from db1, db2, ...
Several similar questions have been asked on various forums. I posted this question (with the answer) for a particular reason: when you are dealing with several tables and have indexed many fields in them, it causes unnecessary confusion to recreate the indexes properly in the destination database tables. You may miss one or two indexes, and that is just annoying. The given method can also deal with large amounts of data, i.e. when you really have GBs of tables. These are the steps:
Download sqlite expert: http://www.sqliteexpert.com/download.html
Create a new database dbNew: File-> New Database
Load the 1st sqlite database db1 (containing a single table): File-> Open Database
Click on the 'DDL' option. It gives you the list of commands needed to create the particular SQLite table CONTENT.
Copy these commands and select the 'SQL' option. Paste the commands there and change the name of the destination table DEST (from the default name CONTENT) to whatever you want.
Click on 'Execute SQL'. This should give you a copy of the table CONTENT from db1 under the name DEST. The main benefit of doing it this way is that all the indexes that existed on the CONTENT table are also created on the DEST table.
Now just click and drag the DEST table from the database db1 to the database dbNew.
Now just delete the database db1.
Go back to step 3 and repeat with another database, db2, etc.
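If you would rather script this than use SQLite Expert, a minimal sketch with SQLite's ATTACH command is shown below (assuming db1.dat contains a single table named CONTENT). Note that, unlike the DDL-copy method above, CREATE TABLE ... AS SELECT does not copy indexes, so those would have to be recreated separately:

-- run inside the sqlite3 shell, opened on dbNew.dat
ATTACH DATABASE 'db1.dat' AS src;
CREATE TABLE DEST AS SELECT * FROM src.CONTENT;
DETACH DATABASE src;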
Currently, I would like to provide this as an option to the user when storing data to the database:
save the data to a file and use a background thread to read the data from the text file into SQL Server.
Flow of my program:
- A stream of data comes in from a server constantly (100 items per second).
- As another user option, I want to store the data in a text file and use a background thread to constantly copy the data from the text file back into the SQL database.
Has this been done before?
Cheers.
Your question is indeed a bit confusing.
I'm guessing you mean that:
100 rows per second come from a certain source or server (e.g. log entries)
One option for the user is textfile caching: the rows are stored in a textfile, and periodically the contents of the textfile are incrementally copied into (an) SQL Server table(s).
Another option for the user is direct insert: the data is stored directly in the database as it comes in, with no textfile in between.
Am I right?
If yes, then you should do something along the lines of:
Create a trigger on an INSERT action to the table
In that trigger, check which user is inserting. If the user has textfile caching disabled, then the insert can go on. Otherwise, the data is redirected to a textfile (or a caching table)
Create a stored procedure that checks the caching table or text file for new data, copies the new data into the real table, and deletes the cached data.
Create an SQL Server Agent job that runs the above stored procedure every minute, hour, day...
Since the interface from T-SQL to textfiles is not very flexible, I would recommend using a caching table instead. Why a textfile?
And for that matter, why cache the data before inserting it into the table? Perhaps we can suggest a better solution, if you explain the context of your question.
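If you do go with a caching table, a minimal sketch of steps 1-3 could look like the following (all table, column and object names are illustrative assumptions, not taken from your schema):

-- illustrative tables:
--   dbo.Readings(SensorId, Value, ReadAt)              the real table
--   dbo.Readings_Cache                                  same columns, used as the cache
--   dbo.UserCachePreference(UserName, CachingEnabled)   per-user option
CREATE TRIGGER dbo.trg_Readings_Insert
ON dbo.Readings
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.UserCachePreference
               WHERE UserName = SUSER_SNAME() AND CachingEnabled = 1)
        -- caching enabled for this user: redirect the rows to the cache
        INSERT INTO dbo.Readings_Cache (SensorId, Value, ReadAt)
        SELECT SensorId, Value, ReadAt FROM inserted;
    ELSE
        -- caching disabled: insert straight into the real table
        -- (an INSTEAD OF trigger is not fired recursively by its own insert)
        INSERT INTO dbo.Readings (SensorId, Value, ReadAt)
        SELECT SensorId, Value, ReadAt FROM inserted;
END;
GO

-- run by the Agent job; the Agent account should have caching disabled
CREATE PROCEDURE dbo.FlushReadingsCache
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        INSERT INTO dbo.Readings (SensorId, Value, ReadAt)
        SELECT SensorId, Value, ReadAt FROM dbo.Readings_Cache;
        DELETE FROM dbo.Readings_Cache;
    COMMIT TRANSACTION;
END;
GO

An SQL Server Agent job would then simply call dbo.FlushReadingsCache on whatever schedule suits you.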