Creating a SQL memory-optimised table from Visual Studio - sql-server

I'm trying to set up a SQL database project (using Visual Studio 2017).
I want it to produce a dacpac that can be run on either SQL Server or Azure SQL and that creates the database from scratch in a single publish.
However, I'm running into a couple of issues:
I can't create a memory-optimised table if there is no existing database (and tables). When running this (SQL Server):
CREATE TABLE [dbo].[TicketReservationDetail] (
    TicketReservationID BIGINT NOT NULL,
    TicketReservationDetailID BIGINT IDENTITY NOT NULL,
    Quantity INT NOT NULL,
    FlightID INT NOT NULL,
    Comment NVARCHAR (1000),
    CONSTRAINT [PK_TicketReservationDetail] PRIMARY KEY NONCLUSTERED HASH
        ([TicketReservationDetailID]) WITH (BUCKET_COUNT=10000000)
) WITH (MEMORY_OPTIMIZED=ON);
GO
ALTER DATABASE [$(DatabaseName)] ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA
It gives me this:
(63,1): SQL72014: .Net SqlClient Data Provider: Msg 10794, Level 16, State 125, Line 1 The operation 'AUTO_CLOSE' is not supported with databases that have a MEMORY_OPTIMIZED_DATA filegroup.
(63,0): SQL72045: Script execution error. The executed script:
ALTER DATABASE [$(DatabaseName)]
ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA;
An error occurred while the batch was being executed.
However, if I publish a disk-based table first, the SQL above works just fine.
I've tried using a pre-deployment script in which I create a disk-based table schema; it publishes the disk-based table, but then fails when it tries to publish the actual memory-optimised table schema.
I've used the sample database from: Ticket-reservations
Any ideas or suggestions would be appreciated.
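Based on the Msg 10794 text, the target database's AUTO_CLOSE option appears to be the blocker, so a pre-deployment script along these lines might clear it before the filegroup is added (an untested sketch; [$(DatabaseName)] is the same SQLCMD variable used above):
-- Pre-deployment sketch: Msg 10794 says AUTO_CLOSE is not supported once a
-- MEMORY_OPTIMIZED_DATA filegroup exists, so turn it off first.
ALTER DATABASE [$(DatabaseName)] SET AUTO_CLOSE OFF;
GO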

Related

Cannot import SQL Azure bacpac to 2016 CTP

I'm very familiar with the process of exporting from Azure SQL V12 down to my dev box and then importing to my local sql (2014) instance. I'm spinning up a new Win10 box and have installed the SQL 2016 CTP. I'm connecting to that same Azure instance and can operate against it -- and can export a .bacpac just as with 2014.
But when I try to import to local I'm getting:
Could not import package.
Warning SQL72012: The object [FOO33_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [FOO33_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Error SQL72014: .Net SqlClient Data Provider: Msg 547, Level 16, State 0, Line 3 The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_dbo.Address_dbo.User_idUser". The conflict occurred in database "FOO33", table "dbo.User", column 'idUser'.
Error SQL72045: Script execution error. The executed script:
PRINT N'Checking constraint: FK_dbo.Address_dbo.User_idUser [dbo].[Address]';
ALTER TABLE [dbo].[Address] WITH CHECK CHECK CONSTRAINT [FK_dbo.Address_dbo.User_idUser];
Since this question was also asked and answered on MSDN, I will share the answer here.
https://social.msdn.microsoft.com/Forums/azure/en-US/0b025206-5ea4-4ecb-b475-c7fabdb6df64/cannot-import-sql-azure-bacpac-to-2016-ctp?forum=ssdsgetstarted
Text from linked answer:
I suspect what's going wrong here is that the export operation was performed using a DB instance that was changing while the export was on-going. This can cause the exported table data to be inconsistent because, unlike SQL Server's physical backup/restore, exports do not guarantee transactional consistency. Instead, they're essentially performed by connecting to each table in the database in turn and running select *. When a foreign key relationship exists between two tables and the read table data is inconsistent, it results in an error during import after the data is written to the database and the import code attempts to re-enable the foreign key. We suggest using the database copy mechanism (create database copyDb as copy of originalDb), which guarantees a copy with transactional consistency, and then exporting from the non-changing database copy.
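In T-SQL, the suggested copy step looks like this (the database names copyDb and originalDb come from the answer above; on Azure SQL this is run against the logical server's master database):
-- Create a transactionally consistent copy of the live database, then export
-- the .bacpac from the copy instead of the changing original.
CREATE DATABASE [copyDb] AS COPY OF [originalDb];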

sqlpackage breaks when trying to alter a FILESTREAM column

I have a SQL Database project in which I recently changed a FILESTREAM column to allow NULLs. I am trying to publish the project's dacpac to a database where the column already exists, as a FILESTREAM column, but is currently NOT NULL.
I am performing the publish using the sqlpackage.exe command line tool (version 12.0.2882.1). However, it produces the error:
Error SQL72014: .Net SqlClient Data Provider: Msg 4990, Level 16, State 1, Line 1 Cannot alter column 'document' in table 'Document' to add or remove the FILESTREAM column attribute.
Error SQL72045: Script execution error. The executed script:
ALTER TABLE [dbo].[Document] ALTER COLUMN [document] VARBINARY (MAX) NULL;
The SQL in the error suggests the generated script is trying to remove the FILESTREAM attribute from the column, which I guess is why SQL Server is complaining... but why is it doing this? My DB project still has the column marked as FILESTREAM.
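For context, the column definition in the project after the change looks roughly like this (a simplified sketch; only the [document] column, its type, and the new nullability come from the post, the rowguid column and its name are illustrative, and the database is assumed to have a FILESTREAM filegroup):
CREATE TABLE [dbo].[Document] (
    -- FILESTREAM tables require a ROWGUIDCOL column; this name is illustrative.
    [DocumentGuid] UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    -- The column that was changed from NOT NULL to NULL, still marked FILESTREAM.
    [document] VARBINARY (MAX) FILESTREAM NULL
);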
Fixed by upgrading sqlpackage.exe and associated libraries from version 12.0.2882.1 to 12.0.3021.1.

Error when exporting SQL Server database

I have SQL Server and SQL Server Management Studio 2012. After I create tables and export the database to the "online" server, every "primary and auto-incremental" field becomes a normal field, so when I try to add a row to the database I get the following error:
"Cannot insert the value NULL into column 'taskID', table 'lawyersDB.dbo.tasks'; column does not allow nulls. INSERT fails.
The statement has been terminated."
To solve this problem I have to manage the database "online" and go to each table to set the primary and auto-incremental fields again. Since I have many tables, this process takes a lot of time.
Any ideas on how to solve this problem?
Note: my hosting is on an Arvix company server.
This is a SQL Server Management Studio issue, not a problem with your hosting account.
Try this:
You need to set this; otherwise, if you have a non-nullable column with no default and you provide no value, the insert will fail with an error.
To set up auto-increment in SQL Server Management Studio (a T-SQL equivalent is sketched after these steps):
Open your table in Design
Select your column and go to Column Properties
Under Identity Specification, set (Is Identity) = Yes and Identity Increment = 1
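Equivalently, defining the key column with IDENTITY up front in the script you deploy to the online server avoids fixing each table in the designer afterwards. A minimal sketch (the table and column names follow the error message above; the other column is a placeholder):
CREATE TABLE [dbo].[tasks] (
    -- IDENTITY makes taskID auto-incrementing; the constraint keeps it the primary key.
    [taskID] INT IDENTITY (1, 1) NOT NULL,
    [title]  NVARCHAR (200) NULL,  -- placeholder column
    CONSTRAINT [PK_tasks] PRIMARY KEY CLUSTERED ([taskID])
);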

Can Access pass-through queries see global temp tables on SQL Server created using ADO and/or SSMS?

According to Microsoft's definition, "Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server."
Why is it that the following work:
Create a global temp table (##SomeTempTableName) using SSMS and run a separate query using SSMS a few seconds or minutes later and the second query can see and use the temp table.
Create a global temp table using ADO and run a separate ADO query a few seconds or minutes later and the second query can see and use the temp table.
Create a global temp table using an Access pass-through query and run a separate Access pass-through query a few seconds or minutes later and the second query can see and use the temp table.
But if I create a global temp table in SSMS, ADO, or via an Access pass-through query, why can none of them see a temp table created by any of the other methods? EDIT: I just created a temp table using an Access pass-through query that SSMS could see.
Note that I am not closing SSMS, closing the ADO connection or closing Access between any of these events.
In reference to the title question...
Can Access pass-through queries see global temp tables on SQL Server created using ADO and/or SSMS?
...my testing shows that the answer is "yes". Here's what I did:
After restarting the SQL Server machine I logged in to Windows as Administrator and then logged in to SQL Server locally via SSMS (Windows Authentication). In SSMS I ran the script...
USE [Accounting]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE ##gord(
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](50) NULL,
CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
...to create the global temp table, then I ran the script...
INSERT INTO ##gord ([Name]) VALUES ('bar')
...to give it a row. Then, with SSMS still running I launched Access and ran the pass-through query...
SELECT * FROM ##gord
...and it returned the row from the global temp table. I could keep running the pass-through query as long as SSMS was running, but as soon as I shut down SSMS the next attempt to run the Access pass-through query resulted in
ODBC--call failed
[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name '##gord'. (#208)
In both the SSMS and the Access sessions I was connecting to the SQL Server as an Administrator (Server Role: sysadmin) so I'm wondering if perhaps you may have a SQL Server permissions issue that is preventing other sessions from seeing the global temp table.
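A quick way to check whether a global temp table still exists on the server, regardless of which session created it, is to query tempdb's catalog; a small sketch using the ##gord table from above:
-- Global temp tables live in tempdb until the last referencing session closes.
-- A row here means the table still exists, so visibility problems are more
-- likely a permissions or connection issue than a lifetime issue.
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name = N'##gord';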

ALTER DATABASE returning Major Error 0x80040E14, Minor Error 25501

I am attempting to recreate a SQL Server Compact database from a script. I started off creating it like this:
CREATE DATABASE [MyDatabase]
GO
and that seemed to work. The next commands in the script are these:
ALTER DATABASE [MyDatabase] SET ANSI_NULL_DEFAULT OFF
GO
and about two dozen similar commands. I have tried a representative sample of these and they all return the error:
Major Error 0x80040E14, Minor Error 25501
ALTER DATABASE [MyDatabase] SET ANSI_NULLS OFF
There was an error parsing the query. [ Token line number = 1,Token line offset = 7,Token in error = DATABASE ]
Does anyone know what the matter is?
I'm using Microsoft SQL Server Management Studio 2008 R2 on Windows XP.
If you check here - http://msdn.microsoft.com/en-US/library/ms174454(v=sql.90).aspx - I don't think you can do ALTER DATABASE in SQL Server CE.
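SQL Server Compact supports only a small subset of T-SQL, so when recreating a Compact database from a script the usual approach is to keep the CREATE statements and leave out the ALTER DATABASE ... SET lines that SSMS generates for full SQL Server. A minimal sketch (the table and column names are placeholders, not from the original script):
-- SQL Server Compact accepts plain CREATE TABLE statements, but not the
-- ALTER DATABASE ... SET options shown in the question.
CREATE TABLE [MyTable] (
    [Id]   INT IDENTITY (1, 1) NOT NULL,
    [Name] NVARCHAR (100) NULL,
    CONSTRAINT [PK_MyTable] PRIMARY KEY ([Id])
);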
