We have a Power BI report that was connected to an on-premises SQL Server. That server was moved to Azure, so we changed the report to use DirectQuery instead of Import.
To do this, I opened the existing PBIX and changed the data source settings so that the report's dataset is now a DirectQuery one pointing to an Azure SQL server.
After that, I imported the PBIX into my Azure Power BI workspace using "powerbi import". Since the dataset is DirectQuery and the credentials need to be updated, I then ran "powerbi update-connection".
All of these steps are successful.
I can then proceed to get my reports with "powerbi get-reports" and get access with "powerbi create-embed-token". This also works.
The report loads in the embedded setup, but it stays white; there seems to be no data or no connection.
Power BI generates an unusual error that we don't generally see:
GET https://wabi-us-north-central-redirect.analysis.windows.net/powerbi/metadata/models/xxxxxxx/?modelOptions=Default&packageId=xxxxxxx 403 (Forbidden)
We have a lot of other reports running DirectQuery against other Azure SQL servers, but none succeed against this SQL Server that migrated to Azure.
I'm also talking with Microsoft as soon as possible.
Update: Microsoft is looking at the problem. It seems my data source object got into a corrupt state due to an initial database property that was set (basically a bug). Will keep this post updated.
Update 2: It seems Power BI workspaces in Azure created before April 2017 do not support connecting to more than one SQL database. The solution would then be to create a different workspace, but Power BI workspaces created via Azure are now deprecated, so the real solution is to migrate everything to the Power BI service (app.powerbi.com). A lot of rework ahead.
I finally fixed this problem. It took a month.
The problem was twofold:
The Power BI Azure workspace collection that I had been using was created before April 2017, which meant it couldn't support connecting to multiple SQL Server instances. As far as I know, this is undocumented.
The PowerBI CLI command that I use to call the Power BI API had been modified so that my connection string's format was now deprecated. The Microsoft product team gave me the proper format. They didn't say when this changed, but it did.
My version:
powerbi update-connection -c [Workspace Collection Name] -k [Access Key] -w [Workspace Id] -d [Dataset Id] -s "Data Source=[Server];Initial Catalog=[DB];User ID=[user];Password=[pwd]"
The version that works:
powerbi update-connection -c [Workspace Collection Name] -k [Access Key] -w [Workspace Id] -d [Dataset Id] -s "Data Source=[server];Initial Catalog=[db];" -u "[user]" -p "[pwd]"
That error is related to authentication. As a workaround, try modifying the authentication to prompt for credentials.
Hope this helps.
I'm using SSRS (SQL Server Reporting Services) to display reports; my data source is Snowflake.
I have installed the Snowflake ODBC driver and configured it properly.
I have created a shared data source on the SSRS server (via Report Manager) and put in my own credentials, and the connection works fine.
I'm able to build the SSRS report without any issues. When I run the report, everything works fine; I can publish the report to the server, and it renders perfectly fine in the browser.
The issue is that when I go back to the report the next day, I'm presented with an error:
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset
'insert_name_of_my_dataset_here'. (rsErrorExecutingCommand)
ERROR [57P03] No active warehouse selected in the current session.
Select an active warehouse with the 'use warehouse' command.
So this also means that the following don't work either:
Subscriptions
Cache refresh
Snapshots
The only thing that works is to open my report in SSRS Report Builder, right-click EACH of my datasets ("each" is very important; it doesn't work if I don't do all of them), and run the queries manually for each of them. Then the "connection" or "session" is "re-activated" and the report runs fine, both locally AND on the server. Note that I do not have to re-publish the report to the server for it to run.
Steps I have taken to address the issue (none of which yielded a resolution):
I have tried putting the "use warehouse WAREHOUSE_NAME;" command before each dataset's SQL script, but Snowflake's API doesn't allow multiple SQL commands to be sent in a single request. I saw that this functionality was in the development pipeline for Snowflake and found this link: https://github.com/snowflakedb/snowflake-connector-net/issues/33 - the work was started in 2018, and the last update, from April 2019, says they are starting to address the JDBC driver...no mention of the ODBC driver yet.
I have set the Snowflake parameter CLIENT_SESSION_KEEP_ALIVE to true (https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive), but according to the community portal, a similar "keep alive" parameter is not currently available for the ODBC driver; instead, you could issue a dummy query every few hours to keep the connection alive (https://community.snowflake.com/s/article/faq-how-long-can-my-jdbcodbc-connection-remain-idle) - a sketch of such a dummy query is further below.
I have tried to create a cache refresh plan or a snapshot schedule that caches the report or creates a snapshot every 3 hours; it works for the first scheduled run but fails with the same error for the subsequent ones.
The only thing I didn't try is to have Snowflake never close the connection and keep the warehouse in the "started" state indefinitely...but this would increase my cost, and I'm pretty sure it won't work anyway, since the session would end after 4 hours regardless...
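For reference, the dummy keep-alive query mentioned above could be as simple as this single statement (a minimal sketch; how it gets scheduled every few hours is up to you):
-- Single-statement dummy query to keep the ODBC session from idling out
SELECT CURRENT_WAREHOUSE(), CURRENT_TIMESTAMP();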
Any assistance is welcome!
Thanks
Specs:
SSRS 2014
Snowflake X-small
ODBC 64-bit driver, installed from the Snowflake driver repository (tested with 32-bit also, but 64-bit is the one that is visible to SSRS)
I faced the same kind of issue and fixed it by granting the corresponding role access to the warehouse.
On the warehouse, grant the role the USAGE privilege.
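A minimal sketch, assuming a warehouse named CONSUMER_WH and a role named REPORTING_ROLE (both placeholders for your own names):
-- Let the role used by the SSRS connection run queries on the warehouse
GRANT USAGE ON WAREHOUSE CONSUMER_WH TO ROLE REPORTING_ROLE;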
Could it be related to the warehouse name (in the ODBC settings)? Is there a typo - COSNUMER_WH vs. CONSUMER_WH?
I strongly recommend setting default "context" configurations for situations like this: set the default role, warehouse, database, and schema with commands such as this:
ALTER USER xyz SET DEFAULT_WAREHOUSE = 'WH_NAME_HERE';
https://docs.snowflake.com/en/sql-reference/sql/alter-user.html
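To set the full context, a sketch covering all four defaults (the role, warehouse, database, and schema names are placeholders):
-- These defaults apply to every new session the user opens
ALTER USER xyz SET DEFAULT_ROLE = 'REPORTING_ROLE';
ALTER USER xyz SET DEFAULT_WAREHOUSE = 'CONSUMER_WH';
ALTER USER xyz SET DEFAULT_NAMESPACE = 'MY_DB.MY_SCHEMA';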
Preface: I installed SQL Server 2016 RC0 and installed and configured its Reporting Services with no problems. Could this be a conflict with existing SSRS instances?
The issue is with SQL Server 2012 Reporting Services: every time I navigate through the Configuration Manager to either the Web Service URL or the Report Manager URL, I get the following errors:
Report Manager URL returns
HTTP 500 error
Console when opening the Report Manager URL returns
SCRIPT16389: Unspecified error.
http_500.htm (1,1)
HTML1524: Invalid HTML5 DOCTYPE. Consider using the interoperable form
<!DOCTYPE html>
http_500.htm (1,1)
Web Service URL returns
The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'C.0.9.45'. The expected version is '162'. (rsInvalidReportServerDatabase)
Console when opening the Web Service URL returns:
GET http://localhost:8085/ReportServer_MYREPORTS 500 (Server Error)
To try and resolve this, I've already tried adding RSExecRole to RoleMembers under both ReportServer$MyReports and ReportServer$MyReportsTempDB.
I also tried recreating the ReportServer database under the Configuration Manager, but I get the following error:
Generating Database Script - Error:
Microsoft.ReportingServices.WmiProvider.WMIProviderException: An error occurred when attempting to connect to the report server remote procedure call (RPC) end point.
I can confirm that both the RPC and SSRS services are running with no problems, and I have also restarted these services multiple times.
I have realised that my ReportServer$MyReports database is missing the DBUpgradeHistory table.
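A catalog query like the following (a minimal sketch, run in the ReportServer$MyReports database) is enough to confirm whether the table exists:
-- Lists any upgrade-history tables present in the report server database
SELECT name FROM sys.tables WHERE name LIKE '%UpgradeHistory%';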
Any thoughts?
This will fix the issue right away:
DELETE FROM dbo.ServerUpgradeHistory WHERE ServerVersion = [the wrong or upgraded version];
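If you'd rather verify before deleting, a quick sketch (run against the ReportServer database; the version string is just the example value from the question above):
-- Inspect the recorded versions first
SELECT * FROM dbo.ServerUpgradeHistory;
-- Then remove only the offending row, e.g.:
DELETE FROM dbo.ServerUpgradeHistory WHERE ServerVersion = 'C.0.9.45';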
In my case, there was no "ServerUpgradeHistory" table. I needed to install SSRS 2012 on a machine that already had SSRS 2016 to do regression testing for a client that doesn't want to upgrade their system.
What I discovered is that SSRS installs SharePoint settings even if you don't install SharePoint or the SharePoint plugins on your box.
Here's what fixed the problem for me:
1. In File Explorer, navigate to C:\Windows\assembly\GAC_MSIL\. There are many directories in here that have nothing to do with SSRS; we will focus on seven directories that appear to be related to this problem.
2. Navigate to Policy.11.0.Microsoft.ReportingServices.Alerting.
3. Enter the assembly directory. Its name is usually a series of digits followed by a hexadecimal identifier, e.g. 13.0.0.0__89845dcd8080cc91 (the exact path will vary based on what is installed on the machine).
4. Back up the config file in case you need to roll back changes, e.g. from a command prompt: COPY *.config *.bak
5. Open the config file.
6. Look for the text in the newVersion attribute of the bindingRedirect tag:
<bindingRedirect oldVersion="11.0.0.0" newVersion="13.0.0.0" />
7. Modify it to use "11.0.0.0":
<bindingRedirect oldVersion="11.0.0.0" newVersion="11.0.0.0" />
8. Save your changes.
9. Repeat steps 2 through 8 with the following subpaths:
Policy.11.0.Microsoft.ReportingServices.SharePoint.Common
Policy.11.0.Microsoft.ReportingServices.SharePoint.ObjectModel
Policy.11.0.Microsoft.ReportingServices.SharePoint.Server
Policy.11.0.Microsoft.ReportingServices.SharePoint.SharedService
Policy.11.0.Microsoft.ReportingServices.SharePoint12.Server
Policy.11.0.Microsoft.ReportingServices.SharePoint14.Server
10. Reboot your computer.
I just followed the steps in https://msdn.microsoft.com/en-us/library/ms143724.aspx to migrate a Reporting Services installation onto a new server (from and to SQL Server 2012 Standard Edition).
But when I'm ready to verify my deployment using the Report Manager web interface, I get the error:
The feature: "Scale-out deployment" is not supported in this edition of Reporting Services. (rsOperationNotSupported)
Indeed, when I go back to the Reporting Services Configuration Manager, under Scale-out Deployment I have two servers: one on the local server (the new machine) and a reference to the old server, which has a different name. The problem is that when I try to remove it, I'm told the task has failed:
Microsoft.ReportingServices.WmiProvider.WMIProviderException: Unable to connect to the Report Server . ---> System.Runtime.InteropServices.COMException (0x800706BA): The RPC server is unavailable
I can understand why it's unavailable, as it is on a different network altogether. So my question is: how can I get rid of it so everything can finally work?
Found it. The way to remove the ghost server is to connect to the ReportServer database and remove the old server from the dbo.Keys table.
After a restart of Reporting Services, the old server isn't in the list anymore.
USE ReportServer;
GO

SELECT * FROM dbo.Keys;

-- For safety, the DELETE only removes the ghost machine if it has no executions in the last 30 days.
DELETE FROM dbo.Keys
WHERE MachineName = 'YourGhostServer' -- replace with your old server name; if there are multiple, run one by one
  AND MachineName NOT IN (SELECT SUBSTRING(InstanceName, 0, CHARINDEX('\', InstanceName, 0))
                          FROM dbo.ExecutionLog
                          WHERE TimeStart > GETDATE() - 30
                          GROUP BY InstanceName);
CAREFUL: run the first part (the SELECT) on its own, analyze the output, then copy the specific MachineName value (the old server name) you wish to delete into the WHERE clause of the DELETE statement, replacing the YourGhostServer placeholder.
Note that the Keys table may contain legitimate machines that are network-reachable and online. You can verify this by simply pinging them or checking whether they run the SSRS service. Don't just delete a server that's actually online from the table; instead, use the Reporting Services Configuration Manager to remove a server that's online.
Deleting from the Keys table should only be done if the old machine is truly unreachable or has been decommissioned. At least, that's what I would do in my case. :)
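If you want a quick look at recent activity per machine before touching dbo.Keys, a minimal sketch using the same ExecutionLog view as the query above:
-- Last recorded execution per machine, derived from the instance name
SELECT SUBSTRING(InstanceName, 0, CHARINDEX('\', InstanceName, 0)) AS MachineName,
       MAX(TimeStart) AS LastExecution
FROM dbo.ExecutionLog
GROUP BY InstanceName;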
I'm using the AWS Toolkit in Visual Studio 2013 to attempt to launch a new instance on Amazon RDS. I get through the wizard for creating the new instance, and after clicking Finish, there is a delay, and then a message appears saying:
Error launching DB instance: DB Security Groups can only be associated with VPC DB Instances using API version 2012-01-15 through 2012-09-17.
Launching different types of instances (SQL Server SE vs. MySQL) doesn't seem to help, nor does selecting different versions of the platforms (SQL Server 2008 vs. 2012). The only thing that gets it to go through is unchecking the "default" box in the DB Security Groups area. However, I feel like something is going on here that shouldn't be happening.
Can anyone explain why this is happening and how I can resolve it, other than by not setting a default security group? Thank you.
If you created your AWS account recently, you will be using a VPC by default.
It sounds like the API version the plugin is trying to use hasn't been updated. The latest version of the toolkit is 1.5.6, and looking at the history, it seems some of these features were added in 1.5.0.
I finally solved it! Since I couldn't use the API that the VS 2013 plugin uses, I had to manually add my IP to the security group created for my Elastic Beanstalk environment:
Go to the console, to EC2's Security Groups configuration.
Find the one whose description matches your Beanstalk environment (e.g., "Security Group created for Beanstalk Environment to give access to RDS instances").
Hit Inbound, then Edit, and add a new rule for All Traffic (I guess HTTP should be enough, but just in case).
In Source, select My IP and save.
I have an MS SQL Server database in-house (but on a different network) that I am trying to connect to via ColdFusion. However, I keep getting this error: "Datasource could not be found."
Remote connections are enabled, including TCP/IP.
The account name is correct; I can log in with the test account without any problems.
Here is my code:
<cfquery
name="getIT"
datasource="RemoteServerName_OR_ipAddress.DATABASE_NAME.dbo"
username="test"
password="test">
<!--- query body omitted --->
</cfquery>
It looks like you need to set up the data source in the ColdFusion Administrator.
http://www.quackit.com/coldfusion/tutorial/coldfusion_datasource.cfm
You need to first set up the data source in the ColdFusion Administrator, then reference it by its datasource name - see the documentation on adding data sources.
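For example, once the data source is registered in the Administrator (the name myDatasource below is a placeholder, as is the table name), the query references it by that name; the credentials stored with the datasource definition are then used, so they don't need to appear inline:
<cfquery name="getIT" datasource="myDatasource">
    SELECT TOP 1 * FROM yourTable
</cfquery>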