I've had an SSAS tabular model deployed to a server running SQL Server 2016 for about a month and it has been running fine. All of a sudden, today it is throwing random errors when I try to query it. I just tried to run the same query 8 times and got the following 8 error messages:
1) An unexpected error occurred (file 'xmvsquery.cpp', line 3184, function 'XMVSColumn::Bind').
2) An unexpected exception occurred.
3) Query (7, 46) A date column containing duplicate dates was specified in the call to function 'DATESYTD'. This is not supported.
4) Memory error: Allocation failure . If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
5) Column 'RowNumber-2662979B-1795-4F74-8F37-6A1BA8059B61' in table 'table name' cannot be found or may not be used in this expression.
6) An unexpected error occurred (file 'tmmdmodeltm.cpp', line 2404, function 'MDModelTM::ResolveIMBIColumnId').
7) MdxScript(Model) (1, 66) Calculation error in measure 'measure name': A date column containing duplicate dates was specified in the call to function 'DATESYTD'. This is not supported.
8) Column 'RowNumber-2662979B-1795-4F74-8F37-6A1BA8059B61' in table 'table name' cannot be found or may not be used in this expression.
Looking in the application log on the server yields no further information - The description for Event ID 22 from source MSSQLServerOLAPService cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
A couple of interesting things: the errors only happen when I run an MDX query against the model; an equivalent DAX query runs fine. If I clear the SSAS cache, I can run both the MDX and DAX queries against the model for a short period before the errors start happening again.
This model is currently deployed to Microsoft SQL Server 2016 (SP1-GDR) (KB3207512) - 13.0.4199.0 (X64) and the server is running Windows Server 2016.
I've tried the following so far; after each one it works for a short period of time and then the errors start again:
Redeploy the model
Delete the database completely and redeploy the model
Do a full process of the model
Clear the SSAS cache
Any tips would be greatly appreciated!!!
It sounds like you are experiencing the same defect I ran into with 2016 SP1-RTM - a seemingly random pattern of unexpected exceptions, queries that would sometimes run, sometimes not, or that would even render the database unprocessed(!).
SP1-CU2 resolves a number of defects which can cause symptoms like you are seeing (see link for full list), so if you've now got the latest update you are probably OK.
The only defect I have seen which SP1-CU2 does not resolve is that SELECTCOLUMNS() does not play nicely with UNION().
I'm having issues using Visual Studio 2019 to publish a database project to a target server where the database does not yet exist. During the publish process, the following error happens:
(46075,1): SQL72014: .Net SqlClient Data Provider: Msg 8623, Level 16, State 1, Line 14 The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.
(46062,0): SQL72045: Script execution error. The executed script:
The error does not appear to be related to any specific SQL, as the error message would suggest: if I comment out the script that generates the error, the error shows up in the next script in the sequence. Overall, the publish script produced by VS2019 is approximately 72k lines, and the error pops up after approximately 46k lines.
EDIT:
Server details:
SQL Server 2019 Developer Edition (15.0.2000.5), 4 processors, 16 GB memory
This was my system having the problem that Anthony posted about for me. We've now figured out the cause: the root cause was a post-deployment script that populates a table with initial values. The script used a reasonably complex MERGE statement and tried to insert about 18,500 rows of data.
We were thrown off initially because the error output pointed to a different script in the set, not the one causing the problem - evidently just whatever was in the error buffer when it burped.
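If you run into the same limit, one possible workaround (just a sketch - the staging and target table names here are hypothetical, not from our project) is to load the seed data in small batches instead of one huge MERGE, so the query processor never has to build a plan for a single enormous statement:

-- Hypothetical tables: copy seed rows across in batches of 1,000 so no
-- single statement is large enough to exhaust query-plan resources.
DECLARE @BatchSize int = 1000;

WHILE EXISTS (SELECT 1
              FROM dbo.SeedStaging AS s
              WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t
                                WHERE t.Id = s.Id))
BEGIN
    INSERT INTO dbo.TargetTable (Id, Name)
    SELECT TOP (@BatchSize) s.Id, s.Name
    FROM dbo.SeedStaging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t
                      WHERE t.Id = s.Id)
    ORDER BY s.Id;
END;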
We recently upgraded our SQL Server to 2016 and I turned on Query Store to do the analysis it provides. I'm encountering a problem where, even if the time period of the report is "Last hour", it generates a message that says "Couldn't connect to database", even when running it on the database server itself. Sometimes if I keep refreshing the report it will eventually display some data, but it's intermittent at best. I'm running SSMS 17.5 against a SQL Server 2016 server.
We are having a somewhat similar issue with another program that connects to the database, where it will sometimes not be able to connect, but every time I run my queries in SSMS, run reports in SSRS, or even use Activity Monitor, I never see any connection drops, so I'm not sure if it is related.
Thank you in advance for any help!
I find it works fine with the statistic set to Avg, StdDev, or Total. Max and Min give the error.
I found this happens when the Query Store runs out of space and goes into cleanup mode.
In the database properties in SSMS, try adjusting the Query Store settings: how many days it keeps query stats and whether it goes into size-based cleanup mode. More info on how to keep it tuned: https://learn.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-ver15#Configure
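The same settings can also be changed with T-SQL; a minimal sketch (the database name and the size/retention values are just placeholders to adjust for your workload):

-- Placeholder values: raise the Query Store size cap and shorten retention
-- so it is less likely to fill up and drop into size-based cleanup.
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 14),
    SIZE_BASED_CLEANUP_MODE = AUTO
);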
I was trying to pull data from Oracle into a SQL Server database using a linked server.
select * from [LINK_NAME]..SCHEMA.TABLE;
But it was failing with the below error:
The OLE DB provider "OraOLEDB.Oracle" for linked server "LINK_NAME"
supplied inconsistent metadata for a column. The column "COLUMN_NAME"
(compile-time ordinal 6) of object ""SCHEMA"."TABLE"" was reported to
have a "LENGTH" of 100 at compile time and 200 at run time.
I also need to pass an argument at run time in the WHERE condition. I found OPENQUERY as a possible solution, but it does not support arguments at run time.
Try using the OPENQUERY syntax to see whether that helps:
SELECT * FROM OPENQUERY(LINK_NAME, 'SELECT * FROM SCHEMA.TABLE')
More about OPENQUERY ...
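If you need to pass an argument at run time, one option (sketched with the placeholder names from the question; the date column and variable are hypothetical) is EXECUTE ... AT, which sends a pass-through query to the linked server and supports ? parameter markers:

-- Hypothetical filter column; LINK_NAME/SCHEMA/TABLE are the placeholders
-- from the question. The local variable is bound to the ? marker at run time.
DECLARE @cutoff date = '2017-01-01';

EXEC ('SELECT * FROM SCHEMA.TABLE WHERE SOME_DATE_COLUMN >= ?', @cutoff)
    AT [LINK_NAME];

Note that EXECUTE ... AT requires the "RPC Out" option to be enabled on the linked server.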
I found the solution:
The error was caused by a column data type mismatch.
Oracle was using NVARCHAR for the data type, but on the SQL Server side it was VARCHAR.
Since NVARCHAR takes double the bytes of VARCHAR, that is why it was reporting the length mismatch (100 at compile time vs. 200 at run time).
Changing the data types to match worked for me.
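For example (just a sketch - the table and column names are placeholders, and which side you change depends on which definition is correct for your data), aligning a SQL Server column with the Oracle NVARCHAR definition would look like:

-- Hypothetical names: retype the local column so its declared length matches
-- what the Oracle OLE DB provider reports at run time.
ALTER TABLE dbo.LocalCopyOfTable
    ALTER COLUMN COLUMN_NAME NVARCHAR(100);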
I have found a solution posted by this blogger. Try it out!
This tool (Procmon) from Sysinternals/Mark Russinovich is the best, and my only regret that day was not launching it earlier instead of scouring Google and going insane. I've limited Procmon to just sqlservr.exe, as it's the SQL Server service itself that loads and handles the providers, not ssms.exe. Also of note is that sqlservr.exe is a 64-bit process while Management Studio is still just 32-bit. As the server service is loading the provider, and the service process is 64-bit, the provider must also be available in 64-bit format.
The ODAC112021Xcopy_x64.zip was installed to C:\Oracle. What Procmon showed me, however, is that sqlservr is attempting to find the oci.dll in every folder but that one! (It iterates through the %Path% system variable.) When it finally gives up on finding the DLL, the SQL Server service is in an unstable state and the only way to stop it was to kill it via taskmgr/procexp. Clearly the "xcopy" deployment - while not giving me any error messages - also did not set the PATH variable! And this is what this post is really about... adding C:\Oracle and C:\Oracle\Bin to the Path variable, or maybe it's about employing investigative tools earlier in the process instead of relying on your search engine skills.
sqlservr.exe can now find the relevant DLLs: the OCI.DLL in the root and the OraOLEDB11.DLL in the Bin subfolder. At this point I could query the database! If you did my steps as above and you still get the same error, I strongly suggest using Procmon.exe as I have instead of jumping to the next search result.
Full post is here with more details.
I have N tables in my database, which hold around 0.6 million records. I've created a SQL script that copies this data back into the same tables (basically it's a script to generate more data). I've tested the script and it runs fine for small amounts of data (10k records). When I tried to copy all the data, it threw an error:
An error occurred while executing batch. Error message is: Error creating window handle.
1. What is the meaning of this error in SQL Server?
2. Does it have anything to do with the SQL in my script, or is it caused by some other component of SQL Server?
Handles are Windows objects used to manage OS resources. When some app on your machine has a leak, you can run out of handles and this error occurs. The current state of handles can be seen in Task Manager (the Handle Count column).
As said in the comments, it's a client-side issue. For example, a large result set or query output to the grid may end up causing this error.
Solution: reboot your PC and minimize the output of the query. You can also try launching the script via SQLCMD.
You can read more about it here.
Some explanation here.
I am using SSIS with VS2010 (shell) and databases going from SQL Server 2005 (32-bit) to SQL Server 2012 (64-bit). I am developing directly on the destination server (not optimal, but it works).
When I try to use the Transfer database task, it gives me an error message as follows:
"Error: The Execute method on the task returned error code 0x80131500 (An error occurred while transferring data. See the inner exception for details.). The Execute method must succeed, and indicate the result using an "out" parameter."
Here is the problem... how do I view an "inner exception"? It is a GUI interface with no way to step through the code! I even tried setting up logging - it just logs the same useless error message.
Microsoft has no information for this error code in their reference docs (that I could find).
After googling the error code, I saw others getting it along with messages having to do with users, roles, and creating them.
I double-checked that I have sysadmin rights on both servers, and logins on both.
I tried the same Transfer Database task from each server to itself (changing the database name) and that worked fine for both by themselves.
I tried both DatabaseOnline and DatabaseOffline options. (same error both ways)
I tried doing a "Transfer Logins" task before doing the Transfer Database task; the logins task worked, but the Transfer Database task did not. Then it started throwing errors saying that the databases don't exist - which implies that I need to transfer logins AFTER I transfer databases.
Here are my settings:
What am I doing wrong? OR how can I get the "inner exception" message?
Also, follow my post to Microsoft's forums here:
http://social.technet.microsoft.com/Forums/en-US/sqlintegrationservices/thread/cda53c80-8da6-4ed1-898a-9f3ff8464ae2
This answer makes me sick to my stomach... I hope I save someone else this hassle. The problem was this:
First and foremost: the error message was not descriptive enough. The underlying error should be surfaced in the interface.
Under "edit" on a "Transfer Database" task, the destination file paths are "auto-populated" with the file paths of the source database. They look right at first (and second, and third...) cursory glance. Upon further inspection the file paths were wrong. This makes sense if you are going from version to version - the folders are named with subtle differences according to version (MSSQL.1 vs. MSSQL11.<instanceName>).
In summary, the error was caused by the folder not existing because the path was set wrong. I imagine other low-level exceptions like this are also eaten by the interface with the same cryptic error message.
This is old, but I bumped into the same cryptic message with SSMS 17.2. I tried and checked all the suggestions above to no avail.
In my case the issue was related to the TargetServerVersion property of the SSIS project in Visual Studio 2017. By default this was set to SQL Server 2017, while my local server was SQL Server 2014 - once I changed it to the same version, everything went smoothly.
We ran into this when someone told us a valid date would always exist in a column in a MySQL database, and we found out later that there were dates like '0000-00-00 00:00:00' and '0001-01-01 00:00:00'.
We handled it in the query that pulls in the data, using a CASE statement to convert the bad date into a date SSIS can use:
CASE WHEN Product.PurchaseDate < '1900-01-01 00:00:00' THEN '1900-01-01 00:00:00' ELSE Product.PurchaseDate END AS PurchaseDate
Of course, you can set it to null also, your choice.
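The NULL variant of the same expression (still assuming the Product.PurchaseDate column from the query above) would be:

-- Same idea, but treat placeholder dates as missing instead of 1900-01-01.
CASE WHEN Product.PurchaseDate < '1900-01-01 00:00:00'
     THEN NULL
     ELSE Product.PurchaseDate
END AS PurchaseDate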
I have also had this same issue and it turned out to be an access issue. Try giving these accounts access to the folder where the mdf and ldf files will be landing: NT Service\MSSQLSERVER, CREATOR OWNER, SYSTEM.
"which implies that I need to transfer logins AFTER I transfer
databases."
not really, logins are on a server (instance) level so you can transfer logins and then the database. You would need to worry about users later, of course
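For that "later" part, a minimal sketch (the database and user/login names are hypothetical) of re-mapping an orphaned database user to the transferred login once both have been moved:

-- Hypothetical names: re-link the database user to the server login of the
-- same name so the transferred user can authenticate again.
USE [YourTransferredDatabase];
GO
ALTER USER [AppUser] WITH LOGIN = [AppUser];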
A point here: I don't think SSIS would be prepared to transfer 2005 -> 2012. I mean, it wouldn't make sense to "skip" a version. You said you are using VS 2012, so it would be SSIS 2012, and I think it can only read 2008 databases. The fact that you tested on the same server and it worked also makes this point stronger.