We have a tabular cube. Processing the database (Full) in SSMS works fine, but when it is processed from a SQL Server Agent job, it throws the following error.
<return xmlns="urn:schemas-microsoft-com:xml-analysis">
  <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
    <Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
      <Warning WarningCode="1092550744" Description="Cannot order ''[] by [] because at least one value in [] has multiple distinct values in []. For example, you can sort [City] by [Region] because there is only one region for each city, but you cannot sort [Region] by [City] because there are multiple cities for each region." Source="Microsoft SQL Server 2016 Analysis Services Managed Code Module" HelpFile="" />
    </Messages>
  </root>
</return>
Here is the script used from the SQL Server Agent job:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "DBName"
      }
    ]
  }
}
Can anyone suggest how to eliminate or ignore this error/warning?
Thanks,
I had the same issue: tabular model in VS 2015, cube in SSAS. It builds fine when I process the database, but the SQL Server Agent job was raising this error. A couple of forums mention the error but give no steps for deeper investigation and resolution, which is particularly difficult when the 'Cannot order' placeholders are blank. I opened the model in VS, selected every column in turn, and looked for any sorting operation in either the filter or the 'Sort By Column' setting, which is easy to miss. I removed all the sorts and it processed fine. Take a note of the ones you remove, as you may have a data issue.
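If clicking through every column is tedious, a rough shortcut (assuming a tabular model at compatibility level 1200 or higher; the exact predicate may need adjusting on your build) is to query the TMSCHEMA DMV from an MDX query window in SSMS and list only the columns that have a Sort By Column configured:
-- Rough sketch: list columns with a Sort By Column set, so each one can
-- be reviewed instead of clicking through the whole model by hand.
-- Run in an MDX/DMX query window connected to the Tabular instance.
SELECT [TableID], [ExplicitName], [SortByColumnID]
FROM $SYSTEM.TMSCHEMA_COLUMNS
WHERE [SortByColumnID] > 0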
Use SQL Server Integration Services (SSIS) for processing instead. Just create a package with an "Analysis Services Processing Task"; this task processes the model the same way SSMS does.
The error message correctly explains the problem but unhelpfully doesn't tell you which attribute is the offending one. I was sorting account names by account number, but because there were a few accounts with the same name but different numbers, I got this same error. Setting keepUniqueRows didn't help.
Removing the offending sortBy fixes the problem when processing with a SQL Server Agent job. What's interesting is that with the sortBy in place, processing the model from SSMS sorted the accounts as expected. This leads me to think the SQL Agent job interprets the warning as an error and rolls back, while SSMS ignores it. The SSIS task probably ignores the warning just like SSMS, so processing succeeds.
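To hunt for the offending attribute in the source data, a check along these lines helps (table and column names are hypothetical stand-ins for my accounts case): list sorted values that map to more than one sort-by value.
-- Hypothetical names: find AccountName values that map to more than one
-- AccountNumber. Any such name breaks the one-to-one requirement that
-- Sort By Column imposes (one region per city, in the error's example).
SELECT AccountName,
       COUNT(DISTINCT AccountNumber) AS DistinctSortValues
FROM dbo.Account
GROUP BY AccountName
HAVING COUNT(DISTINCT AccountNumber) > 1;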
Try this:
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>DBName</DatabaseID>
  </Object>
</Process>
I also faced the same problem. I just changed the type from "full" to "automatic" and it started working. (Note that "automatic" only refreshes and recalculates objects as needed, whereas "full" discards and reloads all data, so this works around the error rather than reproducing a full process.)
{
  "refresh": {
    "type": "automatic",
    "objects": [
      {
        "database": "AU MY Model"
      }
    ]
  }
}
Stream Analytics job (IoT Hub to Cosmos DB output): the "Start" command is failing with the following error.
[12:49:30 PM] Source 'cosmosiot' had 1 occurrences of kind
'OutputDataConversionError.RequiredColumnMissing' between processing
times '2019-04-17T02:49:30.2736530Z' and
'2019-04-17T02:49:30.2736530Z'.
I followed the instructions and am not sure what is causing this error.
Any suggestions, please? Here is the Stream Analytics query that writes to the Cosmos DB output:
SELECT
[bearings temperature],
[windings temperature],
[tower sway],
[position sensor],
[blade strain gauge],
[main shaft strain gauge],
[shroud accelerometer],
[gearbox fluid levels],
[power generation],
[EventProcessedUtcTime],
[EventEnqueuedUtcTime],
[IoTHub].[CorrelationId],
[IoTHub].[ConnectionDeviceId]
INTO
cosmosiot
FROM
TurbineData
If you're specifying fields in your query (i.e. SELECT Name, ModelNumber ...) rather than just using SELECT * ..., the field names are converted to lowercase by default under Compatibility Level 1.0, which throws off Cosmos DB. In the portal, open your Stream Analytics job, go to 'Compatibility level' under the 'Configure' section, and select v1.1 or higher; that should fix the issue. You can read more about compatibility levels in the Stream Analytics documentation here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-compatibility-level
Apache Drill 1.2 adds the exciting feature of including JDBC relational sources in your query. I would like to include Microsoft SQL Server.
So, following the docs, I copied the SQL Server jar sqljdbc42.jar (the most recent MS JDBC driver) into the proper third-party directory.
I successfully added the storage.
The configuration is:
{
  "type": "jdbc",
  "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
  "url": "jdbc:sqlserver://myservername",
  "username": "myusername",
  "password": "mypassword",
  "enabled": true
}
as "mysqlserverstorage"
However, running queries fails. I've tried:
select * from mysqlserverstorage.databasename.schemaname.tablename
(of course I've used real existing tables instead of the placeholders here)
Error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: From line 2, column 6 to line 2, column 17: Table 'mysqlserverstorage.databasename.schemaname.tablename' not found [Error Id: f5b68a73-973f-4292-bdbf-54c2b6d5d21e on PC1234:31010]
and
select * from mysqlserverstorage.`databasename.schemaname.tablename`
Error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: Exception while reading tables [Error Id: 213772b8-0bc7-4426-93d5-d9fcdd60ace8 on PC1234:31010]
Has anyone had success in configuring and using this new feature?
Success has been reported using a storage plugin configuration, such as
{
  "type": "jdbc",
  "enabled": true,
  "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
  "url": "jdbc:sqlserver://172.31.36.88:1433;databaseName=msdb",
  "username": "root",
  "password": "<password>"
}
on pre-release Drill 1.3 and using sqljdbc41.4.2.6420.100.jar.
Construct your query as:
select * from storagename.schemaname.tablename
This works with sqljdbc4.X, as it does for me.
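If the table still isn't found, one sanity check is to ask Drill which schemas it actually registered and then qualify the table exactly as listed (the schema layout below is an assumption; it depends on your connection URL):
-- Show every schema Drill knows about; the JDBC plugin's schemas should
-- appear prefixed with the storage name, e.g. mysqlserverstorage.dbo.
SHOW DATABASES;

-- Then qualify the table with a schema taken from that list:
SELECT *
FROM mysqlserverstorage.dbo.tablename
LIMIT 10;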
I am currently trying to integrate Azure Search with my Azure SQL Database in order to enable spatial searching. In my index there is a field of type Edm.GeographyPoint. What should the SQL database column's type be? The geography type did not work.
Additionally, my data source's change detection policies look like this:
"dataChangeDetectionPolicy" : {
"#odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
"highWaterMarkColumnName" : "RowLastVersion"
},
"dataDeletionDetectionPolicy" : {
"#odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
"softDeleteColumnName" : "Deleted",
"softDeleteMarkerValue" : "0"
}
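For reference, the table shape these policies assume looks roughly like this (a simplified sketch; the table and extra column names are illustrative, and a rowversion column is what the high-water-mark policy is normally pointed at):
-- Simplified sketch: RowLastVersion is a rowversion column for the
-- high-water-mark policy, and Deleted is the soft-delete flag (a value
-- of 0 marks a row as deleted, matching softDeleteMarkerValue above).
CREATE TABLE dbo.Places
(
    Id             int IDENTITY PRIMARY KEY,
    Name           nvarchar(100) NOT NULL,
    Location       geography     NULL,
    Deleted        bit           NOT NULL DEFAULT 1,
    RowLastVersion rowversion
);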
Is there anything additional I have to do for automatic indexing? Because these are also not working.
Azure Search is a great API, but there is a lack of documentation to draw on.
It turned out the indexer schedule's startTime was set in the future. Invoking the indexer using /run indexed as expected.
I am totally new to OLAP servers. I have an OLAP query that works fine; I just want to know which tables are joined to produce the result, and how (I mean with which joins). Here is the query:
WITH
MEMBER [Measures].[ThisYearMonthToDate] AS
  'Sum({[Time].[All Time].[2013].[Q1].[January],
        [Time].[All Time].[2013].[Q1].[February],
        [Time].[All Time].[2013].[Q1].[March],
        [Time].[All Time].[2013].[Q2].[April],
        [Time].[All Time].[2013].[Q2].[May]},
       [Measures].[Main Temp Id])'
MEMBER [Measures].[LastYearMonthToDate] AS
  'Sum({[Time].[All Time].[2012].[Q1].[January],
        [Time].[All Time].[2012].[Q1].[February],
        [Time].[All Time].[2012].[Q1].[March],
        [Time].[All Time].[2012].[Q2].[April],
        [Time].[All Time].[2012].[Q2].[May]},
       [Measures].[Main Temp Id])'
SELECT
  {[Measures].[LastYearMonthToDate], [Measures].[ThisYearMonthToDate]} ON COLUMNS,
  {([PublicRegion].[All Regions].[USA]),
   ([PublicRegion].[All Regions].[USA].[Northeast]),
   ([PublicRegion].[All Regions].[USA].[Midwest]),
   ([PublicRegion].[All Regions].[USA].[Southeast]),
   ([PublicRegion].[All Regions].[USA].[Southwest]),
   ([PublicRegion].[All Regions].[USA].[West Coast]),
   ([PublicRegion].[All Regions].[USA].[Misc]),
   ([PublicRegion].[All Regions].[Europe]),
   ([PublicRegion].[All Regions].[Europe].[UK]),
   ([PublicRegion].[All Regions].[Europe].[France]),
   ([PublicRegion].[All Regions].[Europe].[Italy]),
   ([PublicRegion].[All Regions].[Europe].[Germany]),
   ([PublicRegion].[All Regions].[Europe].[Spain]),
   ([PublicRegion].[All Regions].[Canada]),
   ([PublicRegion].[All Regions].[Other])} ON ROWS
FROM Public
I don't understand how to decode this query. Please help me.
There are two pretty easy ways to find out:
Log of your OLAP server: I'm almost sure that all leading OLAP tools log the SQL queries they send to the database server.
Log of your database server: set your database to log all queries from all users. By execution time and the user name you declared in the metadata file, you can easily filter the queries sent by the OLAP tool; a sketch for SQL Server follows below.
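If your backing database happens to be SQL Server, here is a minimal Extended Events sketch (the session and target names are made up) that captures each completed batch together with the login and application that issued it:
-- Minimal sketch, assuming the warehouse is SQL Server: capture completed
-- batches along with the login and client application that issued them,
-- so the OLAP tool's queries can be filtered out afterwards.
CREATE EVENT SESSION [olap_query_trace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.server_principal_name, sqlserver.client_app_name)
)
ADD TARGET package0.event_file (SET filename = N'olap_query_trace');

ALTER EVENT SESSION [olap_query_trace] ON SERVER STATE = START;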
Hope this helps,
Best regards
We use DBAmp for integrating Salesforce.com with SQL Server (it basically adds a linked server), and we run queries against our Salesforce data using OPENQUERY.
I'm trying to do some reporting against opportunities and want to return the created date of each opportunity in the opportunity owner's local date time (i.e. the date time the user sees in Salesforce).
Our DBAmp configuration forces the dates to UTC.
I stumbled across a date function in the Salesforce documentation that I thought might help, but I get an error when I try to use it, so I can't prove it. Below is the example usage of the convertTimezone function:
SELECT HOUR_IN_DAY(convertTimezone(CreatedDate)), SUM(Amount)
FROM Opportunity
GROUP BY HOUR_IN_DAY(convertTimezone(CreatedDate))
Below is the error returned:
OLE DB provider "DBAmp.DBAmp" for linked server "SALESFORCE" returned message "Error 13005 : Error translating SQL statement: line 1:37: expecting "from", found '('".
Msg 7350, Level 16, State 2, Line 1
Cannot get the column information from OLE DB provider "DBAmp.DBAmp" for linked server "SALESFORCE".
Can you not use SOQL functions in OPENQUERY, as below?
SELECT
*
FROM
OPENQUERY(SALESFORCE,'
SELECT HOUR_IN_DAY(convertTimezone(CreatedDate)), SUM(Amount)
FROM Opportunity
GROUP BY HOUR_IN_DAY(convertTimezone(CreatedDate))')
UPDATE:
I've just had some correspondence with Bill Emerson (I believe he is the creator of the DBAmp Integration Tool):
You should be able to use SOQL functions so I am not sure why you are
getting the parsing failure. I'll setup a test case and report back.
I'll update the post again when I hear back. Thanks
A new version of DBAmp (2.14.4) has just been released that fixes the issue with using ConvertTimezone in openquery.
Version 2.14.4
Code modified for better memory utilization
Added support for API 24.0 (SPRING 12)
Fixed issue with embedded question marks in string literals
Fixed issue with using ConvertTimezone in openquery
Fixed issue with "Invalid Numeric" when using aggregate functions in openquery
I'm fairly sure that because DBAmp uses SQL and not SOQL, SOQL functions are not available, sorry.
You would need to expose this data some other way. Perhaps it's possible with a Salesforce report, a web service, or by compiling the data in the program you use to access the (DBAmp) SQL Server; a server-side sketch follows below.
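Alternatively, since DBAmp hands SQL Server plain UTC datetimes, a hedged server-side sketch (it needs SQL Server 2016+ for AT TIME ZONE, uses one fixed time zone rather than each owner's, and assumes a locally replicated Opportunity table):
-- Convert DBAmp's UTC CreatedDate to a fixed time zone and group by hour.
-- dbo.Opportunity and the zone name are assumptions; per-user time zones
-- would need a lookup of each owner's zone.
SELECT DATEPART(HOUR, CreatedDate AT TIME ZONE 'UTC'
                                  AT TIME ZONE 'Eastern Standard Time') AS HourInDay,
       SUM(Amount) AS TotalAmount
FROM dbo.Opportunity
GROUP BY DATEPART(HOUR, CreatedDate AT TIME ZONE 'UTC'
                                    AT TIME ZONE 'Eastern Standard Time');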
If you were to create a Salesforce web service, the following example might be helpful.
global class MyWebService
{
    webservice static List<AggregateResult> MyWebServiceMethod()
    {
        // Group opportunities by the hour of creation in the running
        // user's time zone; the query returns one row per hour, so the
        // result must be a list rather than a single AggregateResult.
        List<AggregateResult> ar = [
            SELECT
                HOUR_IN_DAY(convertTimezone(CreatedDate)) Hour,
                SUM(Amount) Amount
            FROM Opportunity
            GROUP BY HOUR_IN_DAY(convertTimezone(CreatedDate))];
        System.debug(ar);
        return ar;
    }
}