I imported data from Power BI into SQL Server. You can see what the imported data looks like.
Additionally, I created my own database with the commands below:
CREATE DATABASE MY_DW
GO
USE MY_DW
GO
Now I want to copy all of these tables into my database named MY_DW. Can anybody help me solve this problem and copy all the tables into my database?
Please check https://www.sqlshack.com/how-to-copy-tables-from-one-database-to-another-in-sql-server/.
This link suggests various methods to copy the data tables from one database to another.
Thanks,
Rajan
The following approach could resolve your issue:
Right-click the imported database -> Tasks -> Generate Scripts
Introduction
Next button
Select the database objects (Tables in your case) to script
Next button
Specify how scripts should be saved
Advanced -> Types of data to script -> Schema and data
Next button
Review your selections
Next button
The script will be generated and saved; run it against the database you created, MY_DW.
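For reference, the generated script will look roughly like the sketch below for each table (the table and column names here are only placeholders, not your actual objects); run it in a query window connected to MY_DW:
USE MY_DW;
GO
-- one CREATE TABLE per scripted object
CREATE TABLE dbo.SampleTable (
    Id   INT          NOT NULL,
    Name NVARCHAR(50) NULL
);
GO
-- followed by INSERT statements carrying the data
INSERT INTO dbo.SampleTable (Id, Name) VALUES (1, N'First row');
INSERT INTO dbo.SampleTable (Id, Name) VALUES (2, N'Second row');
GO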
Another approach:
Assuming the databases are on the same server, the query below will create the table in your database (without constraints).
SELECT * INTO MY_DW.dbo.Table_Name
FROM ImportedDB.dbo.Table_Name
And the query below will insert the data into an existing table in your database.
INSERT INTO MY_DW.dbo.Table_Name
SELECT * FROM ImportedDB.dbo.Table_Name
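Since you want to copy every table, one option is to generate those SELECT ... INTO statements from the source database's catalog and then run the output. A minimal sketch, assuming all source tables live in the dbo schema and the imported database is named ImportedDB as above:
-- Prints one copy statement per table in the imported database; review the output before executing it.
SELECT 'SELECT * INTO MY_DW.dbo.' + QUOTENAME(t.name) +
       ' FROM ImportedDB.dbo.' + QUOTENAME(t.name) + ';'
FROM ImportedDB.sys.tables AS t;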
Final approach:
Assuming the databases are on linked servers.
In the case of linked servers, the four-part object naming convention is used to reference objects on the other server, as shown below.
The query below will create the table in your database (without constraints).
-- run this from the destination server; SELECT INTO cannot create a table through a linked-server name
SELECT * INTO [MY_DW].[dbo].[Table_Name]
FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
And the query below will insert the data into your database table.
INSERT INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
SELECT * FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
Related
We have duplicate data in entities in Master Data Services, but not in the staging tables. How can we delete these? We cannot delete each row manually because there are more than 100.
Did you create a view for this entity? See: https://msdn.microsoft.com/en-us/library/ff487013.aspx
Do you have access to the database via SQL Server Management Studio?
If so:
Write a query against the view that returns the value of the Code field for each record you want to delete.
Write a query that inserts the following into the staging table for that entity: the Code (from step 1), a BatchTag, and an ImportType of 4 (delete).
Run the import stored proc EXEC [stg].[udp_YourEntityName_Leaf]. See: https://msdn.microsoft.com/en-us/library/hh231028.aspx
Run the validation stored proc. See: https://msdn.microsoft.com/en-us/library/hh231023.aspx (a sketch of these steps follows below)
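A minimal sketch of steps 2-4 for a hypothetical entity named Product (the staging table stg.Product_Leaf and proc stg.udp_Product_Leaf follow the standard MDS naming pattern; the subscription view name, the Codes, and the validation parameter values are assumptions):
DECLARE @Batch nvarchar(50) = N'DeleteDupes_Batch1';

-- Step 2: stage the Codes to delete with ImportType 4 (or 6, per the note below)
INSERT INTO stg.Product_Leaf (ImportType, ImportStatus_ID, BatchTag, Code)
SELECT 4, 0, @Batch, v.Code
FROM mdm.v_Product AS v                -- subscription view from step 1 (name is hypothetical)
WHERE v.Code IN (N'C001', N'C002');    -- the duplicate Codes identified in step 1

-- Step 3: run the entity's import procedure for that batch
EXEC stg.udp_Product_Leaf @VersionName = N'VERSION_1', @LogFlag = 1, @BatchTag = @Batch;

-- Step 4: validate the model (the ID values are placeholders)
EXEC mdm.udpValidateModel @User_ID = 1, @Model_ID = 1, @Version_ID = 1, @Status_Flag = 1;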
Use ImportType 6 instead of 4: with ImportType 4 the deletion will fail if the Code you are trying to delete is referenced by a domain-based attribute in another entity. All the other steps remain the same as Daniel described.
I deleted the duplicate data from the transaction tables which cleared the duplicates from the UI also.
MDS comes out-of-the-box with two front-end UIs:
Web UI
Excel plugin
You can use both of them to easily delete multiple records. I'd suggest using the Excel plugin.
Are there any Domain-based attributes linked to the entity you're deleting values from? If so, if the values are related to child entity members, you'll have to delete those values first.
I have a little experience using SQL Server 2012.
All I know about importing an Excel file into the database is the following:
open SQL Server Management Studio
right click on the "table" folder -> Tasks -> Import Data
set data source to MS Excel.
It seems that only one Excel file is accepted at a time.
But I want to concatenate 6 Excel files (all with same column layouts) to form a single table in SQL Server.
P.S. No need to tell me to concatenate the Excel files manually by copy and paste, because each individual Excel file has roughly 50,000 records.
Any ideas / solutions by using sql scripts or any other programming methods?
Thanks a lot.
There's a range of ways to do this, but I'll give you the simplest that comes to mind without requiring any deep technical knowledge on your part.
Given that you're using the wizard, firstly on the 'Select Table Sources and Views' page, change the 'Destination' to be the name of the table you've previously created.
Then, under the 'Edit Mappings' menu when selecting your sheets, ensure you have 'Append rows to the destination table' selected, rather than Create/Delete. Within reason, this will achieve your goal.
There is a risk with flat-file loading like this that SQL Server will create your table with unsuitable types (e.g. a column should be a text column, but it only contained numbers in the first file, so it was created as an INT and won't accept the other files). You'll need to create the tables from scratch with the right structure, or work with the mappings page to fix this.
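For example, you could pre-create the destination table with the column types you actually need before running the wizard in append mode (the table and column names below are assumptions for illustration only):
CREATE TABLE dbo.CombinedImport (
    OrderId     INT            NOT NULL,
    CustomerRef NVARCHAR(50)   NOT NULL,  -- text column, even if the first file happens to contain only numbers
    Amount      DECIMAL(18, 2) NULL,
    OrderDate   DATE           NULL
);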
Another way, for the semi-technical type: as long as the data is equivalent between files, you can simply import each file into a separate table:
Table1
Table2
Table3
...
Then do a
INSERT INTO Table1
SELECT * FROM Table2
UNION ALL
SELECT * FROM Table3
-- ... add further tables here
You can then use DROP TABLE to remove the extras.
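For example, once the combined insert above has run, the cleanup might look like this:
DROP TABLE Table2;
DROP TABLE Table3;
-- ...and so on for each extra table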
I am looking for some ideas on whether we can generate a script for just one view and run it on another database to create that view with its data intact. Please help, thank you.
If your destination server is not linked with the source, getting this data out will take a few more steps. I am assuming that you only want to transport the data from the view, but the steps below could be applied to the source table(s), making this view instantiation part unnecessary.
First, since a view does not store data (it only references data), you will need to instantiate the view into a table.
Select *
INTO tblNewTable --this creates a new table from the data selected from the view
FROM dbTest.dbo.Tester;
Next, open SSMS. Right-click the database, select Tasks, then Generate Scripts.
Then select the newly created table and click Next.
You will need to select Advanced and change 'Types of data to script' to 'Schema and data' (it is 'Schema only' by default). Select Next and Finish.
SSMS will export a file, or load a new query window, with the code to create the new table, and it will also contain the insert statements to load the new table exactly as it was on the source server.
Use the following as an example:
use dbNew;
go
create view dbo.ViewTest as
select * from dbTest.dbo.Tester;
The following code will create a table from another table. The new table will contain all the data of the previous table.
SELECT * INTO DBName1.SchemaName.NewTableName FROM DBName2.SchemaName.PreviousTableName
You can use this query to create a new table in any database and schema.
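For example, with hypothetical database, schema, and table names:
SELECT * INTO ReportingDB.dbo.CustomersCopy FROM SalesDB.dbo.Customers;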
Basically I have two databases in SQL Developer. I want to take the table data FOR A PARTICULAR RECORD from one database and copy it to another database's table. What should the query be? I don't want to use a restore, to avoid data loss... Any ideas?
I got a query from Google:
INSERT INTO dbo.ELLIPSE_PFPI.T_ANTENNE
(COLUMNS)
SELECT COLUMNS_IN_SAME_ORDER FROM dbo.ELLIPSE_PFPI.T_ANTENNE
What should be written in the query instead of dbo?
Try this. I haven't tested it, but I think it works:
select * into [databaseTo].dbo.tablename from [databaseFrom].dbo.tablename
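If you only want a particular record (as in the question), a sketch using three-part names and a hypothetical key column might look like this; the database name goes where dbo sat in the original query:
INSERT INTO DestDB.dbo.T_ANTENNE (COLUMNS)
SELECT COLUMNS_IN_SAME_ORDER
FROM SourceDB.dbo.T_ANTENNE
WHERE AntenneId = 42;   -- DestDB, SourceDB, and AntenneId are placeholder names for illustration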
I have a desktop application through which data is entered, and it is captured in an MS Access DB. The application is used by multiple users (at different locations). The idea is to download the data entered for that particular day into an Excel sheet and load it into a centralized server, which is an MSSQL Server instance.
That is, data (in the form of Excel sheets) will come from multiple locations and be saved into a shared folder on the server, and it then needs to be loaded into SQL Server.
There is an ID column with IDENTITY in the MSSQL Server table, which is the primary key column, and there are no other columns in the table that contain unique values. Though the data is coming from multiple sources, we need to maintain a single auto-incrementing series (IDENTITY).
Suppose, if there are 2 sources,
Source1: Has 100 records entered for the day.
Source2: Has 200 records entered for the day.
When they get loaded into the destination (SQL Server), the table should have 300 records, with ID column values from 1 to 300.
Also, for the next day, when the data comes from the sources, the destination has to load data starting from ID 301.
The issue is that there may be requests to change data at the source which has already been loaded into the central server. So how do we update that row in the central server, given that the ID column value will not be the same in the source and the destination? As mentioned earlier, ID is the only unique-value column in the table.
Please suggest some ideas for doing this, or whether I have to take a different approach to accomplish this task.
Thanks in advance!
Krishna
Okay, so first I would suggest .NET, doing it through a file stream reader and dumping the data into the disconnected layer of ADO.NET: a DataSet with multiple DataTables from the different sources. But... you mentioned SSIS, so I will go that route.
Create an SSIS project in Business Intelligence Development Studio (BIDS).
If you know for a fact you are just doing a bunch of Excel file imports, I would create either many 'Data Flow Tasks' or many source-to-destination flows in a single 'Data Flow Task'; up to you.
a. Personally, I would create a table in the database for each Excel file location and have their columns map up. I will explain why later.
b. In a Data Flow Task, select 'Excel Source' as the source. Point it at the appropriate file via 'New connection' by double-clicking the Excel Source.
c. Choose an 'ADO NET Destination' and drag the blue line from the Excel Source to this endpoint.
d. Map your destination to the SQL table you created for that source.
e. Repeat as needed for each Excel source.
Set up the SSIS package to run automatically from SQL Server through SQL Server Management Studio. Remember to connect to an Integration Services instance, not a database instance.
Okay, now you have a bunch of tables instead of one big one, right? I did that for a reason: these should be entry points, and I would leave the logic for determining dupes and import time to another table.
I would set up another two tables for that combining logic and for auditing later.
a. Create a table like 'Imports' or similar. Have its columns be the same, plus three more: an identity column as the first column, seeded at the default of (1,1) and assigned as the primary key, followed by 'ExcelFileLocation' and 'DateImported'.
b. Create a second table like 'ImportDupes' or similar, repeating the column setup above.
c. Create a unique constraint on the first table on the value or set of values that make an import unique.
d. Write a procedure in SQL that inserts from the MANY tables matching the Excel files into the ONE 'Imports' location. For each of the many inserts, do something similar to:
BEGIN TRY
    INSERT INTO Imports (datacol1, datacol2, ExcelFileLocation, DateImported)
    SELECT datacol1, datacol2, 'location of file', GETDATE()
    FROM TableExcel1;
END TRY
-- if a row breaks the unique constraint, put the batch into the second table
BEGIN CATCH
    INSERT INTO ImportDupes (datacol1, datacol2, ExcelFileLocation, DateImported)
    SELECT datacol1, datacol2, 'location of file', GETDATE()
    FROM TableExcel1;
END CATCH

-- repeat the above for EACH Excel staging table
-- then clean up the individual staging tables for the next import cycle
TRUNCATE TABLE TableExcel1;
e. Automate the procedure to run on a schedule.
You now have two tables, one for successful imports and one for duplicates.
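For reference, the two tables from steps 2a and 2b might look roughly like this (the data columns and types are assumptions; only the identity column, ExcelFileLocation, DateImported, and the unique constraint come from the steps above):
CREATE TABLE dbo.Imports (
    ImportId          INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    datacol1          NVARCHAR(100)     NOT NULL,
    datacol2          NVARCHAR(100)     NULL,
    ExcelFileLocation NVARCHAR(260)     NOT NULL,
    DateImported      DATETIME          NOT NULL,
    CONSTRAINT UQ_Imports_Data UNIQUE (datacol1, datacol2)   -- step 2c: the value(s) that make an import unique
);

CREATE TABLE dbo.ImportDupes (
    ImportDupeId      INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    datacol1          NVARCHAR(100)     NOT NULL,
    datacol2          NVARCHAR(100)     NULL,
    ExcelFileLocation NVARCHAR(260)     NOT NULL,
    DateImported      DATETIME          NOT NULL
);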
The reason I did what I did is twofold:
You often need to know more than just the data itself: when it came in, what source it came from, whether it was a duplicate, and, if you do this for millions of rows, whether it can be indexed easily.
This model is easier to take apart and automate. It may be more work to set up, but if a piece breaks you can see where, and you can easily stop the import for one location by turning off the code in that section.