Update compatibility level - Azure SSAS

I need to update the compatibility level of my Azure Analysis Services (SSAS) model from 1200 to 1400. When I click on Model.bim and go to Properties inside Visual Studio 2017, there is no option to select under the "Compatibility Level" property.
I am currently on VS 2017 (version 15.9.9) and .NET Framework 4.7.03062.
I followed this article but still don't see the option to change it:
https://azure.microsoft.com/en-au/blog/1400-models-in-azure-as/
My solution is also in Source Control.

There is actually a bug in the server properties for SSAS in SSMS (at least as of SSMS 17.x). It probably applies to your case too.
The actual compatibility level used by your model is set in the model itself.
The server has two relevant properties:
DefaultCompatibilityLevel - probably used only when a Create is executed without an explicit compatibility level
SupportedCompatibilityLevels - the list of levels the server supports
Execute the following XMLA in an XMLA query window in SSMS (taken from social.msdn):
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
<RequestType>DISCOVER_XML_METADATA</RequestType>
<Restrictions>
<RestrictionList>
<ObjectExpansion>ObjectProperties</ObjectExpansion>
</RestrictionList>
</Restrictions>
<Properties>
<PropertyList>
</PropertyList>
</Properties>
</Discover>
Search the results for "Compatibility". For an SSAS 2017 server you should be able to see:
<ddl400:DefaultCompatibilityLevel>1200</ddl400:DefaultCompatibilityLevel>
<ddl600:SupportedCompatibilityLevels>1100,1103,1200,1400</ddl600:SupportedCompatibilityLevels>
The DefaultCompatibilityLevel should match the compatibility level you are requesting, in your case 1400. You should be able to confirm the level the model is actually running at by checking its properties in SSMS. You can, of course, change only to one of the SupportedCompatibilityLevels; should your required compatibility level not be listed, you are out of luck.
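If you want to double-check what the deployed model itself reports, a DMV query run in an MDX/DMX query window connected to the Analysis Services instance should also work (COMPATIBILITY_LEVEL is a documented column of the DBSCHEMA_CATALOGS rowset):
SELECT [CATALOG_NAME], [COMPATIBILITY_LEVEL]
FROM $SYSTEM.DBSCHEMA_CATALOGS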

@tukan Thanks. I changed it to 1400 in Visual Studio and I can see 1400 on the server:
<ddl600:SupportedCompatibilityLevels>1100,1103,1200,1400</ddl600:SupportedCompatibilityLevels>
However, when I try to deploy I get the following error:
The JSON DDL request failed with the following error: Failed to execute XMLA. Error returned: 'The operation cannot be performed because it references an object or property that is unavailable in the current edition of the server or the compatibility level of the database.
It feels like the level is still 1200 and it can't deploy 1400, even though it is supported.

Related

Trusted Assemblies feature broken after upgrade to SQL Server 2017 from 2014

We had several issues during the in-place upgrade from 2014 to 2017, notably the trusted assemblies CLR feature, which interfered with the successful installation of SSIS at the time. I have since gotten SSIS installed and working, but the feature is still broken.
The error I received at the time, and that I still receive when I query the system table directly, is: "Internal table access error: failed to access the Trusted Assemblies internal table". The system view seems to run an OPENROWSET on the "table" TRUSTED_ASSEMBLIES. I don't see a lot of chatter on the internet about people dealing with this problem.
I don't plan on using the feature, but I am fearful that it may cause issues in the future with updates or with outside vendors. Another symptom: in order to fix some of the issues with SSIS package execution, I had to manually assign execute permissions to low-level procedures, something that is usually done for you (fortunately, the documentation indicated which built-in groups should have access to the procedures).
If anyone has any insight into the issue, it would be appreciated; I'm guessing a tear-down and complete rebuild might be in order.
Have you applied the CUs (Cumulative Updates) for SQL Server 2017? If not, you probably should.
Yes, the "CLR strict security" / "trusted assemblies" "feature" is quite the dumpster fire. Please see my answer to the following question (also here on S.O.) regarding the proper ways to work around the new (as of SQL Server 2017) restrictions (the final paragraph in that answer deals with your situation: pre-existing, unsigned assemblies):
CLR Strict Security on SQL Server 2017
Using module signing you should be able to get everything working without assigning any permissions directly.
As for that particular "Internal table access error" error, that's new to me. I assume you are executing SELECT * FROM sys.trusted_assemblies as sa or some other login that is a member of the sysadmin fixed server role? If not, you would get a permissions error stating:
Msg 300, Level 14, State 1, Line XXXXX
VIEW SERVER STATE permission was denied on object 'server', database 'master'.
Msg 297, Level 16, State 1, Line XXXXX
The user does not have permission to perform this action.
Since you aren't getting the permissions error, it's possible that some component didn't upgrade correctly / completely (hence making sure you have installed the latest CU might check / fix that).
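For reference, the check described above, run as sa or another member of the sysadmin role (column names as documented for sys.trusted_assemblies), is simply:
SELECT [hash], [description], [create_date], [created_by]
FROM sys.trusted_assemblies;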

Schema compare - Unexpected exception caught during population of source model: Object reference not set to instance object

I've been running a schema compare in a database project in Visual Studio 2017; when I do this I get the following message in the Error List tab:
Unexpected exception caught during population of source model: Object
reference not set to instance object.
I found this blog, which appears to be the same issue, but the suggested solution (of removing the entry using the Select Target Schema window) has not worked despite trying it a few times.
The compare has worked (and still works) fine with the same project and database in Visual Studio 2013, so I have a workaround, but it would be nice to know what is causing the problem (and leave VS2013 behind!)
I found a solution to this: for database projects there is a 'Target platform' setting in the project properties. I set this to SQL Server 2017 and the compare now works.
The target platform required appears to depend on the compatibility level of the database (see https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-compatibility-level); while I required SQL Server 2017 when I initially encountered this problem, a recurrence (against a database with a compatibility level of 120) needed SQL Server 2014 to be selected.
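If it helps in picking the right target platform, a quick way to check the database's compatibility level (N'YourDatabase' is a placeholder) is:
SELECT [name], [compatibility_level]
FROM sys.databases
WHERE [name] = N'YourDatabase';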
Oddly, I have now seen that just switching the target platform back and forth can solve the problem. For example, I have a database project with SQL commands that were not present in SQL Server 2014: I ran the compare with a target of 2017 and it failed with the above error; I ran it with a 2014 target and it errored (as you would expect, since that target does not understand the newer SQL functions); I then switched back to the 2017 target and the compare now works fine!
Edit: different job, different DB version (2019). It had all been working fine for months, then this error cropped up. The above didn't work this time, so just in case anyone finds the same, the tried and tested 'close and reopen Visual Studio' sorted it!

JetBrains PhpStorm / DataGrip introspecting Oracle database error

I am on PhpStorm version 2017.1.3, but I think this error is present in any JetBrains IDE with database support.
When I choose to synchronize an Oracle schema, some objects such as triggers are not shown in the database view, and I found an error in the log.
I could not find any reason for it, and it was working in older PhpStorm / DataGrip versions (before 2016.1).
In the Options tab I've added an object filter; without it there are 5000+ tables. Even after removing the regular expression from the object filter I still get the same error.
(Screenshots of the Options and Advanced Options tabs omitted.) The Oracle driver in use is Thin.
In your connection properties, check whether you are using the Thin driver and change it to OCI.

SQL Azure V12 BACPAC import error: "The internal target platform type SqlAzureV12DatabaseSchemaProvider does not support schema file version '3.3'"

Until a few days ago I was able to import a V12 BACPAC from Azure to my local server with SQL Server 2014 SP1 CU6 (12.0.4449.0).
But now, when I try to import the BACPAC, my SQL Server Management Studio 2014 says:
"Internal Error. The internal target platform type SqlAzureV12DatabaseSchemaProvider does not support schema file version '3.3'. (File: D:\MyDB.bacpac) (Microsoft.Data.Tools.Schema.Sql)"
I think I have the latest SQL Server 2014 SP1 version with all the latest updates (build 12.0.4449.0), but I still get this error.
Please help!
Thanks
Fix: To resolve, use the latest SSMS Preview, which installs the most up-to-date DacFx version. This understands how to process the latest features, notably Database Scoped Configuration options. Once this is installed you can import inside SSMS, or use SqlPackage from the "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin" location if you prefer command-line tools.
Alternatively, execute the following command on the Azure DB to set the MaxDop value back to its default, since it appears the issue is that it has been changed to 1. Future exports should then produce bacpacs that can be understood by the 2014 client tools, assuming no other new Azure features have been added to the DB.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0
Root cause / why does this happen: The root cause is that your database has non-default values for one or more Database Scoped Configuration options. As these were only added very recently, older versions of the tools do not understand how to deploy them, and so DacFx blocks the import. These are the only properties/objects with that high a schema version. Basically, any time you see an error like "does not support schema file version '3.3'" it means you need to upgrade. One possible cause is the database having been migrated from Azure V1 to Azure V12, which sets the MaxDop option to 1 from its default of 0.
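A quick way to see what the scoped configurations currently look like on the Azure DB (the longer per-option query further down in this thread returns the same information) is:
SELECT [name], [value], [value_for_secondary]
FROM sys.database_scoped_configurations;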
Notes: It's strongly recommended that you use the latest SSMS, and keep it up to date via the built-in update notifications, if you're working with Azure. This will ensure that you avoid running into issues like this one. Generally, if you only use the SQL Server 2014 surface area you should be able to use older tools when re-importing, but with the huge number of recent advancements in Azure SQL DB, cases like this where the new tools are required will crop up more and more often.
For reference, I'm including the Database Scoped Configuration options and their default values below. If any of these properties are non-default on the DB when exporting, the schema version gets bumped so that old tools do not break.
<!-- Database Scoped Configurations-->
<Property Name="MaxDop" Type="System.Int32" DefaultValue="0" />
<Property Name="MaxDopForSecondary" Type="System.Int32?" DefaultValue="null"/>
<Property Name="LegacyCardinalityEstimation" Type="System.Boolean" DefaultValue="false" />
<Property Name="LegacyCardinalityEstimationForSecondary" Type="System.Boolean?" DefaultValue="null" />
<Property Name="ParameterSniffing" Type="System.Boolean" DefaultValue="true" />
<Property Name="ParameterSniffingForSecondary" Type="System.Boolean?" DefaultValue="null" />
<Property Name="QueryOptimizerHotfixes" Type="System.Boolean" DefaultValue="false" />
<Property Name="QueryOptimizerHotfixesForSecondary" Type="System.Boolean?" DefaultValue="null" />
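If you prefer to put every option in the list above back to its default before exporting (rather than just MAXDOP), a sketch using the ALTER DATABASE SCOPED CONFIGURATION syntax, run in the affected database, would be:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = ON;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = OFF;
-- The secondary-replica settings default to following the primary:
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = PRIMARY;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION = PRIMARY;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING = PRIMARY;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET QUERY_OPTIMIZER_HOTFIXES = PRIMARY;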
The simple ALTER solution given by Kevin (ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0) seems to be the fastest way to resolve the crisis for anyone with customer-down issues. Never mind installing the latest DacFx or SQL Server 2016; it's not necessary for resolving the immediate issue, plus all of that is in preview (beta) status, hardly something you want to introduce into a production environment right now.
This apparently only happened to us for v11 databases that were pending the automatic upgrade by Microsoft scheduled for this last weekend. For the database upgrades we canceled and applied ourselves, the Max Degree Of Parallelism setting appears not to have been set back to 0, and this error occurred. We have about 300 databases and noticed this as the pattern.
FYI: you can check for the problem values with this SQL query:
SELECT [dbscm].[value] AS [MaxDop],
       [dbscm].[value_for_secondary] AS [MaxDopForSecondary],
       [dbscl].[value] AS [LegacyCardinalityEstimation],
       [dbscl].[value_for_secondary] AS [LegacyCardinalityEstimationForSecondary],
       [dbscp].[value] AS [ParameterSniffing],
       [dbscp].[value_for_secondary] AS [ParameterSniffingForSecondary],
       [dbscq].[value] AS [QueryOptimizerHotfixes],
       [dbscq].[value_for_secondary] AS [QueryOptimizerHotfixesForSecondary]
FROM [sys].[databases] [db] WITH (NOLOCK)
LEFT JOIN [sys].[database_scoped_configurations] AS [dbscm] WITH (NOLOCK)
       ON [dbscm].[name] = N'MAXDOP'
LEFT JOIN [sys].[database_scoped_configurations] AS [dbscl] WITH (NOLOCK)
       ON [dbscl].[name] = N'LEGACY_CARDINALITY_ESTIMATION'
LEFT JOIN [sys].[database_scoped_configurations] AS [dbscp] WITH (NOLOCK)
       ON [dbscp].[name] = N'PARAMETER_SNIFFING'
LEFT JOIN [sys].[database_scoped_configurations] AS [dbscq] WITH (NOLOCK)
       ON [dbscq].[name] = N'QUERY_OPTIMIZER_HOTFIXES'
WHERE [db].[name] = DB_NAME();
I was facing the same issue while importing an export from Azure to my local MSSQLLocalDB instance (for local debugging).
I did not want to touch the Azure DB, nor did I want to download the latest preview.
So what I did on my local DB was as follows:
Executed the ALTER query setting the value of MAXDOP to 1:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1
Imported the bacpac, which ran successfully.
Reset the value of MAXDOP to 0:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0
Hope it helps somebody with a similar use case.

Can't disable Vardecimal Storage Format

I recently moved a database from a SQL Server 2005 SP1 instance to SQL Server 2008 SP1 (using detach/attach). I now need to move it back, but it fails with the error:
The database 'MyDB' cannot be opened because it is version 655. This server supports version 612 and earlier. A downgrade path is not supported.
After a bit of research I believe this is related to the new database option 'Vardecimal Storage Format' which has somehow been set ON for all my databases. I did not set this on myself, but if I check the database options in Management Studio (2008) I can see it is set to 'True' for all my databases. Also, this particular option is disabled in the UI, so I cannot turn it off.
I then tried the following to turn it off:
exec sp_db_vardecimal_storage_format 'MyDB', 'OFF'
go
which reported success, but when I check the options it is still ON.
I then read this very detailed article: http://msdn.microsoft.com/en-us/library/bb508963.aspx, which states the following requirements to turn this option off:
Ensure no tables use vardecimal storage. Confirmed.
Set the recovery model to simple and do a full backup. I did this.
But none of this makes any difference either. The option is still on and I can't change it.
Both instances of SQL Server are Express Edition (which isn't supposed to support Vardecimal Storage Format anyway).
Any ideas on how to turn this option off?
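For what it's worth, a quick way to double-check the first requirement above (the property name is taken from the vardecimal documentation) is to list any tables in the database that have the format enabled:
SELECT [name]
FROM sys.tables
WHERE OBJECTPROPERTY([object_id], 'TableHasVarDecimalStorageFormat') = 1;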
The vardecimal setting is a red herring, because you can't downgrade a database regardless of whether this setting is true, false, or non-existent. It has been asked before in other questions here.
Vardecimal is deprecated in SQL Server 2008 and has been replaced by row/page compression.
You could try exporting your data to a script for an earlier version of SQL Server.

Resources