I created a table with no partitions in SQL Server 2014 (v12.0, dedicated SQL pool) using SQL Server Management Studio v16.0.
Then, using an automated script, I created RANGE RIGHT hourly partitions covering 4 months, i.e., 2,880 partitions.
Now I would like to view the create script for it.
I tried right-clicking on the table > Script Table as > CREATE To > New script, but this approach takes forever to return a script.
Instead of a new script, I tried saving it to a file on the local disk, but even that takes a long time.
Why does this take so long, and what is a better approach to get the script faster?
PS: the DW version doesn't support partition functions and partition schemes, so I had to create the partitions directly on the table.
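For context, here is a hedged sketch of what such a table definition looks like on a dedicated SQL pool, where the boundary values are declared inline in the CREATE TABLE statement. All table, column, server, and credential names are placeholders, it assumes the SqlServer PowerShell module, and the loop generates the 2,880 hourly boundaries rather than writing them out by hand:

# Hypothetical sketch: create an hourly RANGE RIGHT partitioned table on a
# dedicated SQL pool. Names, dates, and credentials are placeholders.
$start = Get-Date '2017-01-01 00:00'
$boundaries = (0..2879 | ForEach-Object {
    "'{0:yyyy-MM-dd HH:mm}'" -f $start.AddHours($_)
}) -join ', '

$ddl = @"
CREATE TABLE dbo.FactEvents
(
    EventId   BIGINT NOT NULL,
    EventTime DATETIME2 NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(EventId),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( EventTime RANGE RIGHT FOR VALUES ($boundaries) )
);
"@

Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'MyPool' `
    -Username 'sqladmin' -Password '<password>' -Query $ddl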
Here are some workarounds:
1. Configure a proxy server to allow access to crl.microsoft.com from your server.
2. Configure your firewall to return a failure status quickly when it blocks access to the crl.microsoft.com website.
3. Disable the checks for certificate revocation: from your browser, open the Internet Options dialog, go to the Advanced page, and un-check the "Check for publisher's certificate revocation" checkbox.
Also, generating the script to the clipboard will take time; instead, try the New Query Editor Window option. You can also generate the script from Object Explorer.
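If all you need are the boundary values rather than the full CREATE script, querying the catalog views is usually far faster than SSMS scripting. This is a hedged sketch against the standard partition catalog views; the table, server, and credential values are placeholders, and you should verify these views are populated on your DW version:

# List the partition boundary values for one table via the catalog views.
$query = @"
SELECT prv.boundary_id, prv.value
FROM sys.tables t
JOIN sys.indexes i              ON i.object_id = t.object_id AND i.index_id <= 1
JOIN sys.partition_schemes ps   ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.partition_range_values prv ON prv.function_id = pf.function_id
WHERE t.name = 'FactEvents'
ORDER BY prv.boundary_id;
"@
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'MyPool' `
    -Username 'sqladmin' -Password '<password>' -Query $query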
Related
Using Microsoft Access with a SQL Server connection: for years I have been able to run batch files that start up code within MS Access to run ETL. However, it now requires a SQL Server sign-in. How can I have MS Access remember my credentials so I don't have to log in manually, allowing the process to be automated?
The easy fix would be to delete the table links and then re-link; when you re-link, make sure you check the Save password box.
That box does NOT re-appear when refreshing table links.
You could also consider caching the password and not saving the UID/password in the table links, but for now, to eliminate the prompts you are receiving, you need to re-create the table links. If you have a lot of tables, you might want to save the list of tables to a local table first, but you will have to delete the links to get the Save password prompt during re-linking; using refresh links will not fix this issue (though VBA code can).
When I extracted a data-tier application from a Microsoft Azure SQL database that has a Master Key, I was unable to import it into SQL Server on my local PC.
You will find others who had this issue here: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
However, the steps provided in the answer did not work on my installation.
The steps are:
1. Disable auditing on the server (or database).
2. Drop the database master key with the DROP MASTER KEY command.
Microsoft Tech Support verified that this solution did not work on my installation of SQL Server, and after actually taking remote control of my PC and troubleshooting, they were unable to determine why this was occurring.
I needed to find a way to remove the Master Key from the bacpac file. I have a PowerShell script that removes the Master Key from the bacpac file, but it requires extracting files, renaming them, and running scripts from Windows PowerShell to get the DB imported.
Does anyone have a program or set of scripts which would automate the process of removing the Master Key and importing a SQL DB from Azure with a single command?
I am new to this forum. Please do not be harsh with this post. I am trying to do the best I can to help others save the many hours I spent coming up with this.
I have cobbled together a T-SQL script which calls a Windows PowerShell script (also cobbled together from multiple sources) to extract a data-tier application (database) from a Microsoft Azure SQL database and import it into a database on my local SQL Server by running ONE command. Over the months, I found some of the code in my scripts on other blogs, etc. I am not able to give credit to those folks because I didn't keep track of where I got the info. If you are reading this and you see your code, please take credit; I apologize for not being able to credit your work.
There may be configuration settings on your PC and your local SQL server that need adjustments as this entire solution requires pretty much full access to your computer. If you run into trouble with compatibility, let me know and I will do the best I can to let you know how my system is configured in case it will help you.
I am using Windows 10 Pro and Microsoft SQL Server Developer (64-bit) v12.0.5207.0
I have placed the two files that do all the work on GitHub here: https://github.com/Wingloader/Auto-Azure-BACPAC-Download.git
GetNewBacpac-forGitHub.sql
GetAzureDB-forGitHub.ps1
WARNING: The Powershell script file will store your SQL sa password and your Azure SQL login in clear text!
If you don't want to do this, don't use this solution.
My computer is owned and controlled solely by me so I am able to open up the security in my system and I am willing to assume the responsibility of safeguarding it.
The basic steps of my solution are as follows (steps 1 and 2 are optional, as I like to keep a point-in-time copy of the DB I am working with whenever I pull down a clean production copy of my Azure DB):
1. Back up the current DB as MyLocalDB.bak.
2. Restore that backup from step 1 to a new DB with the previous day's date stamped at the end of the DB name (e.g., MyLocalDB20171231).
3. Delete the original MyLocalDB database (needed so we can recreate the DB with the original name later on).
4. Pull down the production database from Azure and create a new database with the name MyLocalDB.
The original DB is deleted in step 3 so that the restored DB can use the original name (important when you have data connections referring to that DB name).
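For reference, steps 1 through 3 boil down to a few T-SQL commands. This is only a hedged sketch, not the GitHub script itself; the paths, database names, and logical file names in the MOVE clauses are placeholders you would need to adjust:

# Hedged sketch of steps 1-3; assumes no open connections to MyLocalDB.
$sql = @"
BACKUP DATABASE MyLocalDB TO DISK = N'C:\Backups\MyLocalDB.bak' WITH INIT;

RESTORE DATABASE MyLocalDB20171231
FROM DISK = N'C:\Backups\MyLocalDB.bak'
WITH MOVE N'MyLocalDB'     TO N'C:\Data\MyLocalDB20171231.mdf',
     MOVE N'MyLocalDB_log' TO N'C:\Data\MyLocalDB20171231_log.ldf';

DROP DATABASE MyLocalDB;
"@
Invoke-Sqlcmd -ServerInstance 'MyLocalSQLServer' -Query $sql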
In Step 4, the work of extracting the data-tier application DB from Azure is initiated by this line in the T-SQL:
EXEC MASTER..xp_cmdshell '%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Git\GetUpdatedAzureDB\GetAzureDB.ps1"'
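Note that xp_cmdshell is disabled by default, so the local server will likely need it enabled once before this line can work. This is the standard sp_configure sequence, not part of the author's scripts:

# One-time setup: enable xp_cmdshell on the local server (requires sysadmin).
$sql = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance 'MyLocalSQLServer' -Query $sql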
The PowerShell script does the following:
The target for the extract is a file named today.bacpac (hardcoded), so the first thing the script does is delete that file if it already exists.
Extract the DB from Azure into the today.bacpac file.
Note: my DB on Azure has a Master Key for encryption. This will need to be removed from the files before the bacpac is imported into your local DB, or the import will fail (this may not be required in SQL 2017, according to my previous conversations with MS Support). If you do not use a Master Key, you can either strip out the code that does this step or just leave it alone; it won't remove anything that isn't there, and it would just add a little overhead to the program.
Open the today.bacpac file (it is just a zip file) and remove the MasterKeys node from the model.xml file.
Update the SHA-256 checksum for model.xml that is stored in the Origin.xml file. This is required so the file does not appear to have been tampered with when SQL Server opens the bacpac (see the sketch after the SqlPackage line below).
Re-zips the files back into a new file, today-patched.bacpac.
Runs this line of code (from PowerShell) to import the bacpac file into SQL Server:
& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Import /SourceFile:"C:\Git\GetUpdatedAzureDB\today-patched.bacpac" /TargetConnectionString:"Data Source=MyLocalSQLServer;User ID=sa;Password=MySAPassword;Initial Catalog=MyLocalDB;Integrated Security=false;"
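For anyone who wants to see the shape of the patch step without downloading the GitHub script, here is a hedged sketch following the commonly documented bacpac fix. The element name, checksum format, and paths are assumptions to verify against your own files; this is not the exact code from my script:

# Unzip the bacpac, strip the MasterKeys element from model.xml, fix the
# checksum in Origin.xml, and re-zip as today-patched.bacpac.
Add-Type -AssemblyName System.IO.Compression.FileSystem
$work = 'C:\Git\GetUpdatedAzureDB\bacpac-work'
Remove-Item $work -Recurse -Force -ErrorAction SilentlyContinue
[IO.Compression.ZipFile]::ExtractToDirectory('C:\Git\GetUpdatedAzureDB\today.bacpac', $work)

# Remove the MasterKeys node (element name is an assumption; check your model.xml).
$modelPath = Join-Path $work 'model.xml'
[xml]$model = Get-Content $modelPath
$model.SelectNodes("//*[local-name()='MasterKeys']") |
    ForEach-Object { [void]$_.ParentNode.RemoveChild($_) }
$model.Save($modelPath)

# Recompute the SHA-256 checksum of model.xml and store it in Origin.xml.
$sha = [BitConverter]::ToString(
    [Security.Cryptography.SHA256]::Create().ComputeHash(
        [IO.File]::ReadAllBytes($modelPath))) -replace '-', ''
$originPath = Join-Path $work 'Origin.xml'
(Get-Content $originPath -Raw) -replace '<Checksum Uri="/model.xml">[0-9A-Fa-f]+</Checksum>',
    ('<Checksum Uri="/model.xml">' + $sha + '</Checksum>') |
    Set-Content $originPath

Remove-Item 'C:\Git\GetUpdatedAzureDB\today-patched.bacpac' -ErrorAction SilentlyContinue
[IO.Compression.ZipFile]::CreateFromDirectory($work, 'C:\Git\GetUpdatedAzureDB\today-patched.bacpac')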
After editing the two files to provide updated paths, usernames, and passwords, run the SQL script. You do not need to edit the scripts again; you can run the SQL script again without modification and it will create a new copy of your Azure DB.
Done!
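As a usage example, the single command can be as simple as running the T-SQL driver script through sqlcmd (the server name and path are placeholders):

# One command to refresh the local copy of the Azure DB.
sqlcmd -S MyLocalSQLServer -d master -E -i C:\Git\GetUpdatedAzureDB\GetNewBacpac-forGitHub.sql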
I'm migrating a database from Azure VM to Azure SQL Database. I tried to use the "Deploy Database to Azure SQL Database" function in SSMS but it failed several times, seemingly due to the size of the database (110 GB). So I made a copy of the source database on the same source server, truncated the table with the majority of the data in it, then tried the deploy again. Success.
Now I need to get that data from the original source table into the destination table. I've tried two different approaches, and both gave errors.
1. In SSMS, I connected to both SQL Servers and ran the below while connected to the destination database:
INSERT INTO dbo.DestinationTable
SELECT *
FROM [SourceServer].[SourceDatabase].dbo.SourceTable
With that I was given the error:
Reference to database and/or server name in
'SourceServer.SourceDatabase.dbo.SourceTable' is not supported in this
version of SQL Server.
2. In SSMS, I used the Export Data Wizard from the source table. When trying to start that job, I received this error during the validation phase:
Error 0xc0202049: Data Flow Task 1: Failure inserting into the
read-only column "CaptureId"
How can I accomplish what should be this seemingly simple task?
Instead of using Deploy Database to Azure SQL Database directly for large databases, you could try the steps below:
1. Extract a bacpac on your local machine.
2. Copy the bacpac to blob storage.
3. When creating the database, use the bacpac in blob storage as the source (see the sketch below).
This approach worked for us and is very fast. You may also have to ensure that the blob is in the same region as the Azure SQL server.
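For step 3, this is roughly what the import looks like with the Az PowerShell module. It is a hedged sketch, and every name, key, and size below is a placeholder:

# Import a bacpac from blob storage into a new Azure SQL database.
New-AzSqlDatabaseImport `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'mysqlserver' `
    -DatabaseName 'MyImportedDb' `
    -StorageKeyType 'StorageAccessKey' `
    -StorageKey '<storage-account-key>' `
    -StorageUri 'https://mystorage.blob.core.windows.net/bacpacs/mydb.bacpac' `
    -AdministratorLogin 'sqladmin' `
    -AdministratorLoginPassword (Read-Host -AsSecureString 'SQL admin password') `
    -Edition 'Standard' `
    -ServiceObjectiveName 'S3' `
    -DatabaseMaxSizeBytes 268435456000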
I've solved the issue. Using option #2, you simply need to tick the "Enable Identity Insert" checkbox and it works with no errors. This checkbox is inside the Edit Mappings sub-menu of the Export Data Wizard.
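For completeness, the same read-only column error explains why the plain INSERT in option #1 would also fail once the cross-database reference was resolved: CaptureId is an identity column. The manual T-SQL equivalent of that checkbox is SET IDENTITY_INSERT with an explicit column list. This is a hedged sketch with placeholder column names, run against a staged local copy of the source data since cross-database names are not supported:

# Manual alternative to the wizard checkbox (column names are placeholders).
$sql = @"
SET IDENTITY_INSERT dbo.DestinationTable ON;

INSERT INTO dbo.DestinationTable (CaptureId, Col1, Col2)
SELECT CaptureId, Col1, Col2
FROM dbo.SourceTable_Staging;

SET IDENTITY_INSERT dbo.DestinationTable OFF;
"@
Invoke-Sqlcmd -ServerInstance 'mysqlserver.database.windows.net' -Database 'DestinationDb' `
    -Username 'sqladmin' -Password '<password>' -Query $sql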
I have the following problem: when I try to deploy my SSAS project (with cube, dimensions, and all that jazz) to SQL Server, it throws an error saying:
You cannot deploy the model because the DB deployment server is not running in multidimensional mode.
I'm new to this, so it might be a dumb question, but how do I change database mode from tabular to multidimensional?
It is possible to stop SSAS, edit msmdsrv.ini, and change DeploymentMode from 2 to 0, then empty the DataDir folder and start SSAS again. This will change the instance from Tabular mode to Multidimensional mode. It will not convert models.
Cathy Dumas describes the reverse here.
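Here is a hedged sketch of those steps for a default SQL Server 2017 SSAS instance; the service name, ini path, and DataDir path vary by version and instance, so treat all three as assumptions:

# Switch a default SSAS 2017 instance from Tabular (2) to Multidimensional (0).
# Run from an elevated PowerShell session.
$ini = 'C:\Program Files\Microsoft SQL Server\MSAS14.MSSQLSERVER\OLAP\Config\msmdsrv.ini'

Stop-Service -Name 'MSSQLServerOLAPService'

(Get-Content $ini -Raw) -replace '<DeploymentMode>2</DeploymentMode>',
    '<DeploymentMode>0</DeploymentMode>' | Set-Content $ini

# Empty the DataDir folder (path comes from the <DataDir> element in msmdsrv.ini).
Remove-Item 'C:\Program Files\Microsoft SQL Server\MSAS14.MSSQLSERVER\OLAP\Data\*' -Recurse -Force

Start-Service -Name 'MSSQLServerOLAPService'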
Tabular and Multidimensional are completely different things.
When you install SQL Server, you have to choose which one you are going to install.
So, if you create a Tabular model, you can only deploy it to a Tabular installation of SSAS, and the same goes for Multidimensional.
You cannot convert models from one to the other.
The best recommendation is to reinstall only the SQL Server Analysis Services feature without disrupting other features/components like the SQL Server engine. During reinstallation of the feature, you can change the configuration of Analysis Services to Multidimensional and Data Mining mode. The whole reinstall process takes less than 10 minutes, so this approach is easy and quick.
Here are all the steps for a SQL Server 2017 installation:
1. Go to the Add/Remove Programs window in Control Panel, or run the appwiz.cpl command from the Windows Run prompt.
2. Select the row for Microsoft SQL Server 2017 (64-bit) and click Uninstall/Change.
3. In the SQL Server 2017 change wizard that opens, click the Remove link.
4. Select the Analysis Services checkbox on the Select Features step of the uninstallation wizard.
5. Complete the remove action by following the remaining steps in the guided wizard; they are self-explanatory.
6. Restart from step 1, but this time click the Add link in the change wizard to start the installation wizard.
7. During feature addition, it will ask for the location of the SQL Server setup files. The setup files can be in a folder on your hard disk, on a CD, or on a virtual drive mounted from an ISO image file.
8. Reinstall the SQL Server Analysis Services feature. On the Analysis Services Configuration step of the installation wizard, go to the Server Configuration tab and select the Multidimensional and Data Mining Mode option button.
9. Click Next and complete the installation by following the remaining steps in the guided wizard.
All you have to do is edit msmdsrv.ini and change the deployment mode (0, 1, or 2) depending on what you are trying to use. Also remember that you have to restart SQL Server Management Studio afterwards.
I've got a project where I'm attempting to use SQLite via System.Data.SQLite. In my attempt to keep the database under version control, I went ahead and created a Database Project in VS2008. Sounds fine, right?
I created my first table-creation script and tried to run it using right-click > Run on the script, and I get this error message:
This operation is not supported for the provider or data source you are using.
Does anyone know if there's an automatic way to run scripts that are part of a Database Project against SQLite databases referenced by the project, using the provider supplied by the System.Data.SQLite install?
I've tried every variation I can think of in an attempt to get the script to run using the default Run or Run On... commands. Here's the script in its most verbose and probably incorrect form:
USE Characters
GO
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = 'Skills')
BEGIN
DROP Table Skills
END
GO
CREATE TABLE Skills
(
SkillID INTEGER PRIMARY KEY AUTOINCREMENT,
SkillName TEXT,
Description TEXT
)
GO
Please note, this is my first attempt at using a database, and also the first time I've ever touched SQLite. In my attempts to get it to run, I've stripped everything out except for the CREATE TABLE command.
UPDATE: OK, so as Robert Harvey points out below, this looks like a SQL Server stored procedure. I went into the Server Explorer and used my connection (from the Database Project) to do what he suggested regarding creating a table. I can generate SQL from it to create the table, and it comes out like this:
CREATE TABLE [Skills] (
[SkillID] integer PRIMARY KEY NOT NULL,
[SkillName] text NOT NULL,
[Description] text NOT NULL
);
I can easily copy this and add it to the project (or add it to another project that handles the rest of my data access), but is there any way to automate this on build? I suppose, since SQLite is a single file in this case, I could also keep the built database under version control as well.
Thoughts? Best practices for this instance?
UPDATE: I'm thinking that, since I plan on using Fluent NHibernate, I may just use its auto-persistence model to keep my database up to snuff and effectively in source control. Thoughts? Pitfalls? I think I'll have to keep the initial population inserts in source control separately, but it should work.
I built my database using an SQLite SQL script and then fed that into the sqlite3.exe console program like this:
c:\sqlite3.exe mydatabase.db < FileContainingSQLiteSQLCommands
John
Well, your script looks like a SQL Server stored procedure. SQLite most likely doesn't support it, because:
1. It doesn't support stored procedures, and
2. It doesn't understand SQL Server T-SQL.
SQL is actually a pseudo-standard. It differs between vendors and sometimes even between different versions of a product within the same vendor.
That said, I don't see any reason why you can't run any (SQLite-compatible) SQL statement against the SQLite database by opening up connection and command objects, just as you would with SQL Server.
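To make that concrete, here is a hedged sketch of the connection-and-command pattern using the System.Data.SQLite provider. It is shown from PowerShell for brevity (the same ADO.NET calls work from C#), and the assembly path and file names are assumptions:

# Run a SQLite-compatible statement through connection and command objects.
Add-Type -Path 'C:\Program Files\System.Data.SQLite\bin\System.Data.SQLite.dll'

$conn = New-Object System.Data.SQLite.SQLiteConnection('Data Source=characters.db')
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
CREATE TABLE IF NOT EXISTS Skills (
    SkillID     INTEGER PRIMARY KEY AUTOINCREMENT,
    SkillName   TEXT,
    Description TEXT
);
"@
[void]$cmd.ExecuteNonQuery()

$conn.Close()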
Since, however, you are new to databases and SQLite, here is how you should start (I assume you already have SQLite installed):
1. Create a new Windows Application in Visual Studio 2008. The Database Project will be of no use to you.
2. Open the Server Explorer by pulling down the View menu and selecting Server Explorer.
3. Create a new connection by right-clicking on the Data Connections node in Server Explorer and clicking on Add New Connection...
4. Click the Change button.
5. Select the SQLite provider.
6. Give your database a file name.
7. Click OK.
A new Data Connection should appear in the Server Explorer. You can create your first table by right-clicking on the Tables node and selecting Add New Table.