SSAS Cube Errors, Not Deploying - sql-server

I'm getting two errors which are very vague.
Internal error: The operation terminated unsuccessfully.
Server: The operation has been cancelled.
This occurs when I try to deploy the cube to the server.
Does anyone know how to fix this error?
Thanks,
Ethan
EDIT:
I just processed each dimension individually and all but one processed successfully.
I also have this one warning message that looks suspicious:
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'dbo_DimPractice', Column: 'Phone', Value: '(111) 111-1111'. The attribute is 'Phone'.

So the problem was that the same phone number appeared on more than one row, so the attribute key was not unique. I fixed this by changing the key on the attribute to a composite key that includes the other attribute columns.
After fixing that, the same warning came up for my time dimension. I did the same thing for it, and then the cube deployed successfully.
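For anyone hitting the same warning, a quick way to confirm the duplicates before touching the attribute key is to group on the offending column in the underlying table. This is only a sketch: the table and column names come from the warning above, and nothing else is known about the schema.
-- Sketch: find phone numbers that appear on more than one row of the practice
-- dimension; these are the values that force a composite attribute key in SSAS.
-- dbo.DimPractice and Phone come from the warning; everything else is assumed.
SELECT Phone, COUNT(*) AS RowsWithThisPhone
FROM dbo.DimPractice
GROUP BY Phone
HAVING COUNT(*) > 1
ORDER BY RowsWithThisPhone DESC;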

If it was working earlier, re-apply the service account and restart the SSAS service; it should start working again.

Related

Azure Data Factory Debugging Failure

I've been developing a pipeline in ADF with one simple copy activity taking data from an on-premises SQL Server up to an Azure SQL Database, and yesterday I came across an issue.
My pipeline kept failing to debug at the same place with the same error:
{
"errorCode": "BadRequest",
"message": "The integration runtime 'Integration-Runtime-Name' under data factory 'Data-Factory-Name' does not exist. ",
"failureType": "UserError",
"target": "PiplineActivity"
}
The day before it had worked without any issues, and I realised although the debug run failed, if I kicked off a trigger run the pipeline would succeed.
I tested this out with a couple of different runtime environments and a completely new pipeline, but got the same result. I even stripped the one copy task it was trying to do down to a simple test table with one column and one row.
Can anyone else verify whether they are seeing the same or different behavior, or point out if I'm doing anything wrong? I would like to emphasise that it was working fine the day before yesterday.
Many thanks
The issue has been fixed. Please try again. Thanks.

How to deal with a large dataset using Eloquent?

My table has more than 5,600,000 records. When I try to get records using paginate(15), after a very long processing time the server responds with an Internal Server Error: "The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, contact#example.com and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request." Please help, thanks in advance.
It's probably caused by your php.ini configuration.
You can check your error logs for more info (the PHP error log or Laravel's /storage/logs).
It could be memory_limit, max_execution_time or some other setting in php.ini.
It's a problem with paginate(). paginate() runs an extra COUNT(*) query over the whole table to work out the total number of pages. Since you have 5,600,000 rows in the table, that count query (on top of fetching the page itself) takes long enough to hit max_execution_time or memory_limit.
I suggest using simplePaginate() instead, since it skips the count query; see the sketch below.
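To make the difference concrete, these are roughly the queries the two paginators issue; the table name records and the page size of 15 are placeholders, and the exact SQL depends on your model and database driver.
-- paginate(15): an extra COUNT over the whole table, plus the page itself.
-- On ~5.6 million rows the COUNT is usually what blows max_execution_time.
SELECT COUNT(*) AS aggregate FROM records;
SELECT * FROM records LIMIT 15 OFFSET 0;
-- simplePaginate(15): no COUNT; it fetches one extra row (16 instead of 15)
-- just to know whether a "next page" link is needed.
SELECT * FROM records LIMIT 16 OFFSET 0;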

Broken indexer on Azure-Search (error: multiple columns with the same name)

We are experiencing a sudden and strange issue with our Azure Search indexer. We had an index (2015-02-28-preview version) with corresponding datasource and indexer based on a table of a SQL Azure v12 database. Change tracking was enabled and changes were properly forwarded in the index. A couple of days ago, our attention was drawn by the fact that last changes in the database were no more properly replicated to the index. Being in a development phase, this index was frequently rebuilt by developers and nobody has noticed when exactly things started to go wrong.
In the Azure portal, the index is displayed in red with an error message stating we have a duplicate column in the datasource ("Datasource contains multiple columns with the same name 'ProductId'"), which is false. We cleaned the database and tried several things but could not find any duplicate column. As of today, the situation is the following:
1/ After deleting and recreating everything (index, indexer and datasource) the index is filled with the 2000 documents present in the SQL table
2/ The index is full and can be queried without any issue, though it still shows up in red with the "duplicate column" error message
3/ Due to this error, we cannot manually force a new indexing run from the Azure portal
4/ In order to reflect changes to the indexed table, we have to re-run the script which deletes the index, indexer and datasource and re-creates everything. After running this script we're back at step 1 above (index queryable, but in error state and cannot be updated without drop/recreate).
This problem seems to have occurred all of a sudden without any change on our side, as if there had been a server-side version change. Is there any newer release of the Azure Search REST APIs available? Has anyone ever encountered the same issue, or does anyone have hints on things we could check?
Thanks for your help shedding some light on what may be broken here,
Problem fixed, thanks to Eugene's investigation. He discovered a bug in the C# code used to generate the datasource: a casing difference between a "ProductId" column in the database and a "ProductID" field in the index.
We fixed the casing and the issue is gone. Microsoft support said that they'll "fix the issue in the coming weeks". The same code used to work properly (and still works properly on the first run), so it looks like the indexing process has somehow become more case-sensitive than before.
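If you need to chase down a similar mismatch, one low-tech check is to list the exact column names of the source table and compare them character for character against the field names in the index definition. A minimal sketch, assuming the indexed table is called Products (the real table name isn't given above):
-- Sketch: dump the exact column names (including casing) of the indexed table
-- so they can be compared against the Azure Search index field names.
-- 'Products' is an assumed table name.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Products'
ORDER BY ORDINAL_POSITION;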

Umbraco content tab error / mixed up document types

Using Umbraco 6.0.0 – I have no idea how this happened, but I'm trying to get to the bottom of it so I can either fix it or chalk it up as a learning experience.
Most of the content nodes are giving me this error:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_cmsPropertyData_cmsPropertyType_id". The conflict occurred in database "UmLLWebDev", table "dbo.cmsPropertyType", column 'id'.
The statement has been terminated.
Screenshot of the error: http://cl.ly/image/2t373f0r163I
The other weird thing that happened was that all the properties from one of my child document types have 'moved' to the parent document type. I have no idea how this happened, i.e. I had several fields in a child document type called 'samples' and now these fields appear in its parent 'Master'.
Looking for suggestions on how to even go about investigating the problem because at this point I feel like I need to start over. My only lead for a cause is a batch sql script I use that backs up / restores the Umbraco database for deployment purposes.
We have just experienced this problem too.
We solved it by deleting the umbraco.config file in the app_data folder. You may also need to recycle the app pool in IIS or modify the web.config file (which forces a restart).
Adding to my earlier comment: "This happened to me after renaming document properties."
Deleting the renamed properties and re-adding them solved the problem. I can now see the nodes in Umbraco.
We've faced this problem too and SOLVED it by just re-saving the modified document types.
Note:
We did NOT remove umbraco.config and did NOT recycle the app pool.

msg: the etag value in the request header does not match with the current etag value

I have a WPF program that interacts with SQL Server 2008 R2 on a remote server via an OData interface.
The program just started catching the error "the etag value in the request header does not match with the current etag value". I suspect this has something to do with possible changes to the table on the server.
The closest I came to anything on the web is a post dealing with insert triggers. This table does not have any triggers.
Has anyone else run across this and do you have any ideas on how to go about debugging this?
I discovered what the issue was and how to work around it in my specific case. The table in question had an index on two columns forming a concatenated key. When the index was non-unique there was no problem.
When the index was changed to unique with IGNORE_DUP_KEY (ignore duplicates), this error started occurring. Changing the index back to non-unique made the problem go away.
I hope this helps someone. I still don't understand why this occurs, how to debug it, or how to fix it.
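For reference, this is roughly what the two index variants described above look like in T-SQL; the table, column and index names are placeholders since the question doesn't give them, and only the unique-with-ignore-duplicates option is the detail reported to coincide with the error.
-- Sketch of the two index variants; all names are placeholders.
-- The variant that coincided with the etag mismatch errors:
CREATE UNIQUE INDEX IX_MyTable_KeyA_KeyB
    ON dbo.MyTable (KeyA, KeyB)
    WITH (IGNORE_DUP_KEY = ON);
-- Reverting to a plain non-unique index made the error go away:
DROP INDEX IX_MyTable_KeyA_KeyB ON dbo.MyTable;
CREATE INDEX IX_MyTable_KeyA_KeyB
    ON dbo.MyTable (KeyA, KeyB);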
