I restored TFS 2012 from an old backup after a new installation. Everything works fine except that I cannot create a new project, because of the following error. I suspect stale credential information somewhere in the TFS or ReportServer databases that doesn't match the new Report Server credentials.
Does anybody know where to change this information?
http://abc:7979/ReportServer -> ok
http://abc:7979/Reports -> ok
> Module: Engine Event Description: TF30162: Task "Populate Reports"
> from Group "Reporting" failed Exception Type:
> Microsoft.TeamFoundation.Client.PcwException Exception Message: The
> Project Creation Wizard encountered an error while creating reports to
> the SQL Server Reporting Services on
> http://server:7979/ReportServer/ReportService2005.asmx. Exception
> Details: The Project Creation Wizard encountered a problem while
> creating reports on the SQL Server Reporting Services on
> http://server:7979/ReportServer/ReportService2005.asmx. The reason
> for the failure cannot be determined at this time. Because the
> operation failed, the wizard was not able to finish creating the SQL
> Server Reporting Services site. Stack Trace: at
> Microsoft.VisualStudio.TeamFoundation.PCW.RosettaReportUploader.Execute(ProjectCreationContext
> context, XmlNode taskXml) at
> Microsoft.VisualStudio.TeamFoundation.PCW.ProjectCreationEngine.TaskExecutor.PerformTask(IProjectComponentCreator
> componentCreator, ProjectCreationContext context, XmlNode taskXml)
> at
> Microsoft.VisualStudio.TeamFoundation.PCW.ProjectCreationEngine.RunTask(Object
> taskObj)
> -- Inner Exception -- Exception Message: TF30225: Error uploading report 'Backlog Overview':
> System.Web.Services.Protocols.SoapException: The current action cannot
> be completed. The user data source credentials do not meet the
> requirements to run this report or shared dataset. Either the user
> data source credentials are not stored in the report server database,
> or the user data source is configured not to require credentials but
> the unattended execution account is not specified. --->
> Microsoft.ReportingServices.Diagnostics.Utilities.InvalidDataSourceCredentialSettingException:
> The current action cannot be completed. The user data source
> credentials do not meet the requirements to run this report or shared
> dataset. Either the user data source credentials are not stored in the
> report server database, or the user data source is configured not to
> require credentials but the unattended execution account is not
> specified. at
> Microsoft.ReportingServices.Library.ReportingService2005Impl.SetCacheOptions(String
> Report, Boolean CacheReport, ExpirationDefinition Expiration, Guid
> batchId) at
> Microsoft.ReportingServices.WebServer.ReportingService2005.SetCacheOptions(String
> Report, Boolean CacheReport, ExpirationDefinition Expiration) (type
> ReportingUploaderException) Exception Stack Trace: at
> Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.UploadReport(XmlNode
> report) at
> Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.HandleCreateReports(XmlNode
> node) at
> Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.Run()
> at
> Microsoft.VisualStudio.TeamFoundation.PCW.RosettaReportUploader.Execute(ProjectCreationContext
> context, XmlNode taskXml)
>
> Inner Exception Details:
>
> Exception Message: System.Web.Services.Protocols.SoapException: The
> current action cannot be completed. The user data source credentials
> do not meet the requirements to run this report or shared dataset.
> Either the user data source credentials are not stored in the report
> server database, or the user data source is configured not to require
> credentials but the unattended execution account is not specified.
> ---> Microsoft.ReportingServices.Diagnostics.Utilities.InvalidDataSourceCredentialSettingException:
> The current action cannot be completed. The user data source
> credentials do not meet the requirements to run this report or shared
> dataset. Either the user data source credentials are not stored in the
> report server database, or the user data source is configured not to
> require credentials but the unattended execution account is not
> specified. at
> Microsoft.ReportingServices.Library.ReportingService2005Impl.SetCacheOptions(String
> Report, Boolean CacheReport, ExpirationDefinition Expiration, Guid
> batchId) at
> Microsoft.ReportingServices.WebServer.ReportingService2005.SetCacheOptions(String
> Report, Boolean CacheReport, ExpirationDefinition Expiration) (type
> SoapException)SoapException Details: <detail><ErrorCode
> xmlns="http://www.microsoft.com/sql/reportingservices">rsInvalidDataSourceCredentialSetting</ErrorCode><HttpStatus
> xmlns="http://www.microsoft.com/sql/reportingservices">400</HttpStatus><Message
> xmlns="http://www.microsoft.com/sql/reportingservices">The current
> action cannot be completed. The user data source credentials do not
> meet the requirements to run this report or shared dataset. Either the
> user data source credentials are not stored in the report server
> database, or the user data source is configured not to require
> credentials but the unattended execution account is not
> specified.</Message><HelpLink
> xmlns="http://www.microsoft.com/sql/reportingservices">http://go.microsoft.com/fwlink/?LinkId=20476&EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&EvtID=rsInvalidDataSourceCredentialSetting&ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&ProdVer=11.0.2100.60</HelpLink><ProductName
> xmlns="http://www.microsoft.com/sql/reportingservices">Microsoft SQL
> Server Reporting Services</ProductName><ProductVersion
> xmlns="http://www.microsoft.com/sql/reportingservices">11.0.2100.60</ProductVersion><ProductLocaleId
> xmlns="http://www.microsoft.com/sql/reportingservices">127</ProductLocaleId><OperatingSystem
> xmlns="http://www.microsoft.com/sql/reportingservices">OsIndependent</OperatingSystem><CountryLocaleId
> xmlns="http://www.microsoft.com/sql/reportingservices">1033</CountryLocaleId><MoreInformation
> xmlns="http://www.microsoft.com/sql/reportingservices"><Source>ReportingServicesLibrary</Source><Message
> msrs:ErrorCode="rsInvalidDataSourceCredentialSetting"
> msrs:HelpLink="http://go.microsoft.com/fwlink/?LinkId=20476&EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&EvtID=rsInvalidDataSourceCredentialSetting&ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&ProdVer=11.0.2100.60"
> xmlns:msrs="http://www.microsoft.com/sql/reportingservices">The
> current action cannot be completed. The user data source credentials
> do not meet the requirements to run this report or shared dataset.
> Either the user data source credentials are not stored in the report
> server database, or the user data source is configured not to require
> credentials but the unattended execution account is not
> specified.</Message></MoreInformation><Warnings
> xmlns="http://www.microsoft.com/sql/reportingservices" /></detail>
> Exception Stack Trace: at
> Microsoft.TeamFoundation.Client.Channels.TfsHttpClientBase.HandleReply(TfsClientOperation
> operation, TfsMessage message, Object[]& outputs) at
> Microsoft.TeamFoundation.Client.Channels.TfsHttpClientBase.Invoke(TfsClientOperation
> operation, Object[] parameters, TimeSpan timeout, Object[]& outputs)
> at
> Microsoft.TeamFoundation.Client.Reporting.ReportingService.Invoke(TfsClientOperation
> operation, Object[] outputs) at
> Microsoft.TeamFoundation.Client.Reporting.ReportingService.SetCacheOptions(String
> Report, Boolean CacheReport, ExpirationDefinition Expiration) at
> Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.UploadReport(XmlNode
> report)
UPDATE: I fixed it.
When you open http:///Reports/Pages/Folder.aspx, you may find the data sources Tfs2010OlapReportDS and Tfs2010ReportDS. Open each data source and make sure it uses "Credentials stored securely in the report server" and that "Use as Windows credentials when connecting to the data source" is checked.
Previously I had used Windows integrated security, so the web service might not have been able to connect.
I'm not sure about the 2012 version, but in 2010 this kind of information is set in the Team Foundation Administration Console. Give it a try – it shouldn't be much different in 2012.
I am using the Spark dataframe reader to pull data from a SQL Server database, make some minor changes such as column renaming and data-type casting, and save the dataframe to S3 in Delta Lake format. The job is triggered from Airflow using LivyOperator.
The code for the dataframe reader is as follows:
val table_source = spark.read
.format("com.microsoft.sqlserver.jdbc.spark")
.option("Driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.option("dbtable", select_query)
.option("inferschema", "true")
.option("url", ds_url)
.option("user", ds_user)
.option("password", ds_pass)
.option("numPartitions", num_partitions)
.option("partitionColumn", "RNO")
.option("lowerBound", 0)
.option("upperBound", rowCount).load()
Then I create a temporary view on top of this data, add a few more standard columns such as client_id, timestamp, etc., and return a dataframe. The dataframe is then saved as Parquet files using the 'delta' format.
table_source.createOrReplaceTempView("table_name")
val table_mapping = spark.sql(mapping_query)
table_mapping
.write.format("delta")
.mode("append")
.save(path)
So, the issue is that for tables with around 50k rows or more, the job always gets stuck at this 'save' stage. Say I provide numPartitions=8; in the Spark UI I can see that Spark generates 8 tasks for this stage. Some of these tasks finish successfully within a few minutes. The remaining tasks get stuck for more than 2 hours and fail with a 'connection reset' error caused by a SQLServerException. Spark then retries the tasks; some complete right away, and again some get stuck for two more hours, and so on, until the stage eventually completes.
Note: There is no limit on the maximum concurrent connections on the source server.
In the end, the job takes around 2+, 4+ or 6+ hours to complete. Enabling Spark speculation brought the completion time down to about 1 hour, but that is still far too long for the volume of data involved. For comparison, we tested fetching the same data from the same environment as the EMR cluster using SSIS; it took just 20 minutes.
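As background on how those 8 tasks are carved out: Spark's JDBC reader splits the partition column into equal strides between lowerBound and upperBound, so if RNO values are not spread evenly across that range, some tasks read far more rows than others. A simplified sketch of that splitting logic (an illustration only, not Spark's actual code; the helper name is made up):

```python
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Approximate the WHERE clauses Spark's JDBC reader builds when
    partitionColumn/lowerBound/upperBound/numPartitions are set.
    Hypothetical helper for illustration only."""
    stride = (upper - lower) // num_partitions
    predicates = []
    bound = lower
    for i in range(num_partitions):
        if i == 0:
            # first partition also picks up everything below lowerBound (and NULLs)
            predicates.append(f"{column} < {bound + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # last partition picks up everything from its lower bound upward
            predicates.append(f"{column} >= {bound}")
        else:
            predicates.append(f"{column} >= {bound} AND {column} < {bound + stride}")
        bound += stride
    return predicates

# e.g. numPartitions=8 over RNO in [0, 50000)
preds = jdbc_partition_predicates("RNO", 0, 50000, 8)
```

If RNO is a dense row number, the strides are balanced; if it has gaps or skew, a few partitions carry most of the data, which matches the pattern of some tasks finishing quickly while others drag on.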
When the tasks are stuck we observed the following thread lock in the executor thread dump.
Thread ID: ##
Thread Name: Executor task launch worker for task 1.0 in stage 17.0 (TID 17)
Thread State: RUNNABLE
Thread Locks: Lock(java.util.concurrent.ThreadPoolExecutor$Worker@881566968),
Monitor(com.microsoft.sqlserver.jdbc.TDSReader@1700342603)
When I expand this thread, I see the following trace.
> java.net.SocketInputStream.socketRead0(Native Method)
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:1819)
> com.microsoft.sqlserver.jdbc.TDSReader.readPacket(IOBuffer.java:5461)
> => holding Monitor(com.microsoft.sqlserver.jdbc.TDSReader@648027762)
> com.microsoft.sqlserver.jdbc.TDSReader.nextPacket(IOBuffer.java:5371)
> com.microsoft.sqlserver.jdbc.TDSReader.ensurePayload(IOBuffer.java:5347)
> com.microsoft.sqlserver.jdbc.TDSReader.readBytes(IOBuffer.java:5640)
> com.microsoft.sqlserver.jdbc.TDSReader.readWrappedBytes(IOBuffer.java:5662)
> com.microsoft.sqlserver.jdbc.TDSReader.readInt(IOBuffer.java:5603)
> com.microsoft.sqlserver.jdbc.TDSReader.readUnsignedInt(IOBuffer.java:5620)
> com.microsoft.sqlserver.jdbc.PLPInputStream.readBytesInternal(PLPInputStream.java:313)
> com.microsoft.sqlserver.jdbc.PLPInputStream.getBytes(PLPInputStream.java:129)
> com.microsoft.sqlserver.jdbc.DDC.convertStreamToObject(DDC.java:438)
> com.microsoft.sqlserver.jdbc.ServerDTVImpl.getValue(dtv.java:2965)
> com.microsoft.sqlserver.jdbc.DTV.getValue(dtv.java:206)
> com.microsoft.sqlserver.jdbc.Column.getValue(Column.java:130)
> com.microsoft.sqlserver.jdbc.SQLServerResultSet.getValue(SQLServerResultSet.java:2087)
> com.microsoft.sqlserver.jdbc.SQLServerResultSet.getValue(SQLServerResultSet.java:2072)
> com.microsoft.sqlserver.jdbc.SQLServerResultSet.getString(SQLServerResultSet.java:2413)
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$12(JdbcUtils.scala:444)
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$12$adapted(JdbcUtils.scala:442)
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$$Lambda$1086/1697796400.apply(Unknown
> Source)
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:352)
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:334)
> org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
> org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
> Source)
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:35)
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:832)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$1243/1672950527.apply(Unknown
> Source)
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$1085/1036621064.apply(Unknown
> Source)
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> org.apache.spark.scheduler.Task.run(Task.scala:131)
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
> org.apache.spark.executor.Executor$TaskRunner$$Lambda$465/565856309.apply(Unknown
> Source)
> org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:750)
>
>
> Version Information
> EMR: 6.40
> Spark: 3.1.2
> mssql-jdbc: 9.2.1.jre8
> spark-mssql-connector_2.12: 1.2.0
> delta-core: 1.0.0
We have tried setting queryTimeout on the Spark JDBC reader, to no avail. We tried increasing executor and driver cores/memory, and also using dynamic allocation, but ended up with the same results. We also tried removing the partitioning options; same issue. We have been at this for weeks; I would highly appreciate any pointers on solving this problem.
After I triggered and refreshed the DAG task, it went from running to delayed to failed. The Airflow error log told me to check the error on the SQL Server side, where I found "Failed to start system task System Task" in the logs of my SQL Server Docker container. I'm not sure whether I need to specify a schema, but the rest of the connection parameters are correct.
[entrypoint.sh]
"${AIRFLOW_CONN_MY_SRC_DB:=mssql+pyodbc://SA:P#SSW0RD@mssqlcsc380:1433/?driver=ODBC+Driver+17+for+SQL+Server}"
[dag.py]
with DAG (
'mssql_380_dag',
start_date=days_ago(1),
schedule_interval=None,
catchup=False,
default_args={
'owner' : 'me',
'retries' : 1,
'retry_delay' : dt.timedelta(minutes=5)
}
) as dag:
get_requests = MsSqlOperator(
task_id = 'get_requests',
mssql_conn_id = 'my_src_db',
sql = 'select * from Request',
dag = dag
)
The issue was just that it couldn't find the table, so I specified the database, which fixed the issue, even though the database should have been recognized since I passed it in the connection string.
sql = 'use csc380db; select * from Request',
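One thing worth checking in the entrypoint URI above: the password contains a '#', which a URI parser can treat as the start of a fragment, cutting off everything after it, including the host and port. A small sketch of percent-encoding the credentials before building the connection string (the values here are just the ones from the question, reproduced for illustration):

```python
from urllib.parse import quote_plus

user = "SA"
password = "P#SSW0RD"   # '#' must not appear raw in a URI
host = "mssqlcsc380"
port = 1433

# quote_plus turns '#' into '%23' so the parser sees the real host:port
uri = (
    f"mssql+pyodbc://{user}:{quote_plus(password)}@{host}:{port}/"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
```

With the password encoded, SQLAlchemy-style URI parsing resolves the host and port correctly instead of treating them as part of a fragment.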
New to SSIS. I'm trying to connect an OData Source [SharePoint] and just want to view it in the SSIS package. When I run my solution, it comes up with the following error message:
> TITLE: Package Validation Error
> ------------------------------
>
> Package Validation Error
>
> ------------------------------ ADDITIONAL INFORMATION:
>
> Error at Data Flow Task [Row Count [50]]: The variable "(null)"
> specified by VariableName property is not a valid variable. Need a
> valid variable name to write to.
>
> Error at Data Flow Task [SSIS.Pipeline]: "Row Count" failed validation
> and returned validation status "VS_ISBROKEN".
>
> Error at Data Flow Task [SSIS.Pipeline]: One or more component failed
> validation.
>
> Error at Data Flow Task: There were errors during task validation.
>
> (Microsoft.DataTransformationServices.VsIntegration)
>
> ------------------------------ BUTTONS:
>
> OK
> ------------------------------
I have tried following this suggested solution but with the same results:
SSIS Row Count: Getting a null variable error where there is clearly a selected variable
Can someone please tell me what this error message means and what I can do to fix it?
Thank you.
That seems to indicate that the Row Count component doesn't have a package variable set to which to write the result of its count. Double-click the component and set the variable. Watch out: SSIS variable names are case-sensitive.
You can also have a valid-looking variable name in the Row Count box that in fact no longer exists in the package. So click the drop-down, even if a variable name is already there, to check that it's actually an existing package variable.
How do I switch between databases? Or, more specifically, if multiple databases are open, how would I specify which database to run a query against?
thufir@dur:~/basex$
thufir@dur:~/basex$ basex
[warning] /usr/bin/basex: Unable to locate /usr/share/java/jing.jar in /usr/share/java
BaseX 9.0.1 [Standalone]
Try 'help' to get more information.
>
> LIST
Name Resources Size Input Path
----------------------------------------------------------
books99 1 61253 /home/thufir/basex/db.books.xml
foo 1 61253 /home/thufir/basex/db.books.xml
new 1 61253 /home/thufir/basex/db.books.xml
3 database(s).
>
> OPEN foo
Database 'foo' was opened in 72.11 ms.
>
> OPEN new
Database 'new' was opened in 16.43 ms.
>
> CLOSE foo
Stopped at , 1/6:
Syntax: CLOSE
Close current database.
Closes the currently opened database.
>
> CLOSE
Database 'new' was closed.
>
> exit
Enjoy life.
thufir@dur:~/basex$
Mostly I'm just running XQUERY / from the BaseX console at the moment, establishing the existence of data.
Perhaps not a complete answer, but to run a query against a specific database, open that database and then run the query:
thufir@dur:~/basex$
thufir@dur:~/basex$ basex
[warning] /usr/bin/basex: Unable to locate /usr/share/java/jing.jar in /usr/share/java
BaseX 9.0.1 [Standalone]
Try 'help' to get more information.
>
> LIST
Name Resources Size Input Path
----------------------------------------------------------------
books 1 61253 /home/thufir/basex/db.books.xml
bookstore 1 6164 /home/thufir/basex/db.bookstore.xml
2 database(s).
>
> OPEN books
Database 'books' was opened in 67.74 ms.
>
> XQUERY /bookstore/book/title
Query executed in 217.72 ms.
>
> OPEN bookstore
Database 'bookstore' was opened in 2.2 ms.
>
> XQUERY /bookstore/book/title
<title lang="en">Everyday Italian</title>
<title lang="en">Harry Potter</title>
<title lang="en">XQuery Kick Start</title>
<title lang="en">Learning XML</title>
Query executed in 6.08 ms.
>
> OPEN books
Database 'books' was opened in 2.48 ms.
>
> XQUERY /bookstore/book/title
Query executed in 1.09 ms.
>
> exit
Have a nice day.
thufir@dur:~/basex$
Notably, I never closed a database. I suppose the context is whichever database was most recently opened?
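Yes, commands like XQUERY run against the most recently opened database. If the query should not depend on that context, BaseX's db module lets it name its database explicitly (a sketch; the db module is part of standard BaseX distributions, so this should work in the 9.x standalone console):

```
> XQUERY db:open("bookstore")/bookstore/book/title
```

Here db:open("bookstore") addresses that database directly, regardless of which one is currently open.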
I'm trying to assign securables against a user, but every time I try and grant permissions on a table-valued function, I get the error:
Key cannot be null.
Parameter name: key (mscorlib)
There is no error when adding stored procedures, views or tables, and granting the permissions via T-SQL works fine. I've tried this on different databases on the same server and on two instances of SQL Server 2016, with exactly the same errors.
SQL Server 2016 v13.0.4202.2
SSMS v16 build 13.0.15900.1
Edit: Upgrading to SSMS 17.1 build 14.0.17119.0 didn't help.
As a side note (possibly related), I'm also getting an error when trying to list all objects belonging to a schema via User > Securables > Search > All objects belonging to the schema. It doesn't matter which schema I pick, the dialog closes with the error:
Value does not fall within the expected range. (SqlMgmt)
More information:
Stack trace from the "Technical details" button:
===================================
Key cannot be null.
Parameter name: key (mscorlib)
------------------------------
Program Location:
at System.Collections.Hashtable.ContainsKey(Object key)
at System.Collections.Hashtable.Contains(Object key)
at System.Collections.Specialized.HybridDictionary.Contains(Object key)
at Microsoft.SqlServer.Management.SqlMgmt.PermissionsData.SecurableColumnParent.ApplyRevokes(SqlSmoObject obj)
at Microsoft.SqlServer.Management.SqlMgmt.PermissionsData.Principal.ApplyChanges(String principalName, Server server)
at Microsoft.SqlServer.Management.SqlMgmt.PermissionsDatabasePrincipal.OnRunNow(Object sender)
at Microsoft.SqlServer.Management.SqlMgmt.PanelExecutionHandler.Run(RunType runType, Object sender)
at Microsoft.SqlServer.Management.SqlMgmt.SqlMgmtTreeViewControl.DoPreProcessExecutionAndRunViews(RunType runType)
at Microsoft.SqlServer.Management.SqlMgmt.SqlMgmtTreeViewControl.ExecuteForSql(PreProcessExecutionInfo executionInfo, ExecutionMode& executionResult)
at Microsoft.SqlServer.Management.SqlMgmt.SqlMgmtTreeViewControl.Microsoft.SqlServer.Management.SqlMgmt.IExecutionAwareSqlControlCollection.PreProcessExecution(PreProcessExecutionInfo executionInfo, ExecutionMode& executionResult)
at Microsoft.SqlServer.Management.SqlMgmt.ViewSwitcherControlsManager.RunNow(RunType runType, Object sender)