Connector Manifest not Validating in DataStudio - google-data-studio

I am trying to test a connector I am building. I have created the manifest file (appsscript.json) and am trying to add the connector to Data Studio by Deployment ID. I keep getting an error that states:
The connector manifest could not be retrieved or is invalid. Check the connector and try again.
Here is a copy/paste of my appsscript.json file:
{
  "timeZone": "America/New_York",
  "dependencies": {
    "libraries": [
      {
        "userSymbol": "OAuth2",
        "libraryId": "1B7FSrk5Zi6L1rSxxTDgDEUsPzlukDsi4KGuTMorsTQHhGBzBkMun4iDF",
        "version": "24"
      }
    ]
  },
  "dataStudio": {
    "name": "VALID NAME",
    "company": "VALID NAME",
    "logoUrl": "VALID LOGO",
    "addonUrl": "VALID URL",
    "supportUrl": "VALID URL",
    "description": "VALID DESCRIPTION"
  }
}
I would expect this to enable the connector so that I could then test the OAuth flow.
Instead, I get this error:
The connector manifest could not be retrieved or is invalid. Check the connector and try again.
Can anyone advise why this is failing to validate? I have followed these steps:
https://developers.google.com/datastudio/connector/use
Thanks!

Double-check the permissions of your script. Especially if you are using a corporate account (not your @gmail.com address), the default sharing is corporate-only, so your script (and its manifest) is not accessible from outside.

Before selecting the link, you need to select Install add-on first. It worked for me.

Check the installation URL if you are signed into multiple Google accounts.
You may have to change the u/0 section to u/1, or whichever number corresponds to the relevant Google account:
https://datastudio.google.com/**u/0**/datasources/create?connectorId=...
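Before chasing account or sharing issues, it can help to rule out a plain syntax problem. A minimal local sanity check (not an official validation step; the manifest below is a stand-in copy, and `python3` is assumed to be available) might be:

```shell
# Write a stand-in copy of the manifest, then confirm it is
# syntactically valid JSON before re-adding the connector by
# Deployment ID.
cat > appsscript.json <<'EOF'
{
  "timeZone": "America/New_York",
  "dataStudio": {
    "name": "VALID NAME"
  }
}
EOF
python3 -m json.tool appsscript.json > /dev/null && echo "manifest parses"
```

A stray comma or an unquoted value will make the check fail, and would also make the manifest unretrievable on Google's side.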

Related

How to add SFTP filepath dynamically in Azure Logic App Workflow from Azure Functions

I have a Logic App workflow that takes an SFTP file path as input from an Azure Function. I can read the file when I specify its path manually, but the same path does not work when I pass it from the Azure Function. It always returns a 404 error:
Error Message:
"status": 404,
"message": "A reference was made to a file or folder which does not exist.\r\nclientRequestId: 20e05109-7277-4476-924b-ff69715a9134",
"source": "sftp-logic-cp-westeurope.logic-ase-westeurope.p.azurewebsites.net"

Setting up VSCode with xdebug: pathMapping

I am trying to set up debugging in VS Code and have run into a bit of a challenge. I typed the path into localSourceRoot, but IntelliSense tells me that it is deprecated and I should use pathMappings instead.
I am a newbie and don't know how to set that up properly. If someone could explain the variables and/or attributes pathMappings expects, I would be forever in your debt.
My system info is as follows:
PHP version: 5.5.24
xdebug version: 2.2.5
OS Windows 8.1
Using Desktop Server version: 3.8.5
I checked phpinfo() and it shows Xdebug, so I know that it is installed. The launch.json file is pretty basic, with port 9000 and all of that. I just need to get that darned pathMapping thing done.
Thanks for any and all help.
I guess you're using the PHP Debug extension?
https://github.com/felixfbecker/vscode-php-debug
The README.md says the following:
Remote Host Debugging
To debug a running application on a remote host, you need to tell XDebug to connect to a different IP than localhost. This can either be done by setting xdebug.remote_host to your IP or by setting xdebug.remote_connect_back = 1 to make XDebug always connect back to the machine which did the web request. The latter is the only setting that supports multiple users debugging the same server and "just works" for web projects. Again, please see the XDebug documentation on the subject for more information.
To make VS Code map the files on the server to the right files on your local machine, you have to set the pathMappings settings in your launch.json. Example:
// server -> local
"pathMappings": {
  "/var/www/html": "${workspaceRoot}/www",
  "/app": "${workspaceRoot}/app"
}
Please also note that setting any of the CLI debugging options will not work with remote host debugging, because the script is always launched locally. If you want to debug a CLI script on a remote host, you need to launch it manually from the command line.
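For the manual CLI launch the README describes, a hedged sketch with Xdebug 2.x would be to set the XDEBUG_CONFIG environment variable on the remote host before running the script (the IP, port, and idekey here are assumptions, not values from the question):

```shell
# Point Xdebug 2.x back at the machine running the VS Code listener.
# XDEBUG_CONFIG overrides the corresponding xdebug.remote_* ini settings
# for this process only.
export XDEBUG_CONFIG="remote_host=192.0.2.10 remote_port=9000 idekey=VSCODE"
# then start the script as usual; Xdebug connects back to the listener:
# php /var/www/html/myscript.php
```

Start the "Listen for XDebug" configuration in VS Code before launching the script, or the connection attempt will simply time out.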
This is as much a reference for myself as for others who might find it helpful. I am running VS Code with Xdebug and DrupalVM, and the following works for me after setting the following in php.ini
php_xdebug_idekey: VSCODE
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for XDebug",
      "type": "php",
      "request": "launch",
      "port": 9000,
      "pathMappings": {
        "/var/www/drupalvm/drupal": "${workspaceRoot}/drupal"
      },
      "log": true
    },
    {
      "name": "Launch currently open script",
      "type": "php",
      "request": "launch",
      "program": "${file}",
      "cwd": "${fileDirname}",
      "port": 9000
    }
  ]
}

Google App Engine Deploy error 13 - Deployment Manager operation failed

I am trying to deploy an app for the first time, and I get this at the very end of the 'gcloud app deploy' operation.
ERROR: (gcloud.app.deploy) Error Response: [13] Deployment Manager operation failed, name: operation-1522364367335-5689513556e59-0732922b-1662dc1e, error:
[
  {
    "code": "RESOURCE_ERROR",
    "location": "/deployments/aef-default-20180329t155754/resources/aef-default-20180329t155754-hcfw",
    "message": {
      "ResourceType": "compute.v1.firewall",
      "ResourceErrorCode": "404",
      "ResourceErrorMessage": {
        "code": 404,
        "errors": [
          {
            "domain": "global",
            "message": "The resource 'projects/kubernetes-staging/global/networks/default' was not found",
            "reason": "notFound"
          }
        ],
        "message": "The resource 'projects/kubernetes-staging/global/networks/default' was not found",
        "statusMessage": "Not Found",
        "requestPath": "https://www.googleapis.com/compute/v1/projects/kubernetes-staging/global/firewalls",
        "httpMethod": "POST"
      }
    }
  },
  {
    "code": "RESOURCE_ERROR",
    "location": "/deployments/aef-default-20180329t155754/resources/aef-default-20180329t155754-00it",
    "message": {
      "ResourceType": "compute.v1.instanceTemplate",
      "ResourceErrorCode": "404",
      "ResourceErrorMessage": {
        "code": 404,
        "errors": [
          {
            "domain": "global",
            "message": "The resource 'projects/kubernetes-staging/global/networks/default' was not found",
            "reason": "notFound"
          }
        ],
        "message": "The resource 'projects/kubernetes-staging/global/networks/default' was not found",
        "statusMessage": "Not Found",
        "requestPath": "https://www.googleapis.com/compute/v1/projects/kubernetes-staging/global/instanceTemplates",
        "httpMethod": "POST"
      }
    }
  }
]
As the error states, the default global network (named 'default') wasn't found, which appears to be the reason why your app isn't deploying.
You can create a global network by executing this command in Cloud Shell before deploying your app:
$ gcloud compute networks create default --subnet-mode auto
If you didn't modify the original default network, this would suggest that the issue is related to the network settings in your app.yaml configuration file; see the app.yaml reference for information on configuring these settings.
If you follow the steps above but are still having trouble, I suggest that you create a new issue in the Public Issue Tracker and provide us with the contents of your app.yaml file as well as your Project ID, where I'd be happy to investigate further.
(Disclaimer: I work for Google Cloud Platform Support)
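Putting the answer's suggestion together, a hedged command sequence might look like the following. It must run against an authenticated project with the Compute Engine API enabled, so treat it as a sketch rather than something to paste blindly:

```shell
# Recreate the default auto-mode network, confirm it now exists,
# then retry the deployment.
gcloud compute networks create default --subnet-mode=auto
gcloud compute networks list --filter="name=default"
gcloud app deploy
```

If the list command shows the network but the deploy still fails with the same 404, the app.yaml network settings mentioned above are the next thing to check.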

How do I persist data within a Codenvy/Che Workspace?

I have a workspace with the following config
{
  "environments": {
    "default": {
      "machines": {
        "db": {
          "attributes": {
            "memoryLimitBytes": "536870912"
          },
          "servers": {},
          "agents": [
            "org.eclipse.che.terminal",
            "org.eclipse.che.exec"
          ]
        },
        "dev-machine": {
          "attributes": {
            "memoryLimitBytes": "2684354560"
          },
          "servers": {},
          "agents": [
            "org.eclipse.che.ssh",
            "org.eclipse.che.ws-agent",
            "org.eclipse.che.terminal",
            "org.eclipse.che.exec"
          ]
        }
      },
      "recipe": {
        "type": "compose",
        "content": "services:\n db:\n image: 'terrywbrady/dspacedb:latest'\n mem_limit: 1073741824\n dev-machine:\n image: 'terrywbrady/dspace:latest'\n mem_limit: 2147483648\n depends_on:\n - db\n",
        "contentType": "application/x-yaml"
      }
    }
  },
  ...
}
I can start my workspace, build code, and deploy to tomcat. Data is written to postgres.
When I halt my workspace and then restart it, all of my built content is gone.
How can I declare volumes that will persist from workspace session to workspace session?
It really depends on the Che flavor and version you are using.
Is it local Che?
Which version of Che?
Is it hosted at codenvy.com?
Is it on Docker, OpenShift, or Kubernetes?
Depending on this, I can help you figure out what to do.
It looks like a couple of people run different flavors of Che, and other flavors may be of interest to other people.
For Codenvy (which runs an enterprise-grade modification of Che 5) there are two solutions:
- snapshot workspaces
- configure software to persist data in the /projects folder, which is automatically synced
For local Che 6 (which has all the enterprise features and more out of the box) it is better to follow the thread on GitHub. There is no snapshotting functionality, but it allows you to configure volumes for custom paths.
Depending on the platform that runs Che 6 (Docker, Kubernetes, OpenShift), you might need to additionally configure Che to achieve persistence in the way that fits your needs best. To get more info, it is better to ask on GitHub, since all the maintainers track it.
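As a concrete illustration of the volumes approach, a hypothetical version of the compose recipe from the question could declare a named volume for the database. The volume name and the Postgres data path here are assumptions, and whether named volumes survive restarts depends on the Che flavor, as noted above:

```yaml
services:
  db:
    image: 'terrywbrady/dspacedb:latest'
    mem_limit: 1073741824
    volumes:
      # assumed Postgres data directory inside the container
      - dspacedb-data:/var/lib/postgresql/data
volumes:
  dspacedb-data: {}
```

Built artifacts outside /projects (such as a deployed Tomcat webapps directory) would need their own volume entries in the same way.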

Cannot connect Strongloop / Loopback datasource to a SQL Server Express database

Within a newly created LoopBack project running slc arc, I am attempting to connect to an existing SQL Server Express database at ALEX\SQLEXPRESS (I've also tried variations like LOCALHOST\SQLEXPRESS).
But I am getting the error message:
Oops! Something is wrong
Failed to connect to ALEX:undefined in 15000ms
I've also tried ALEX\\SQLEXPRESS, since it looks like the undefined might be caused by the backslash.
Unfortunately, no luck. Does anyone know how to make this work?
This is the datasource configuration that gets created:
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "mssql": {
    "host": "ALEX\\SQLEXPRESS",
    "database": "bbdb-dev",
    "password": "********",
    "name": "mssql",
    "user": "sa",
    "connector": "mssql"
  }
}
It appears that the SQL Server instance must be configured to accept remote connections. I used these instructions: http://blog.citrix24.com/configure-sql-express-to-accept-remote-connections/ and then connected via IP address instead of a named instance with a backslash in it.
Make sure that SQL Server is configured correctly.
Install the connector globally: sudo npm install -g loopback-connector-mssql
Then enter the credentials in Arc.
Hope it helps! Works for me :)
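Following the connect-by-IP approach from the first answer, the mssql entry in the datasources file might look like this. The address is a placeholder, and 1433 is the conventional SQL Server port (a named instance may listen on a different, dynamically assigned port, which you can find in SQL Server Configuration Manager):

```json
{
  "mssql": {
    "name": "mssql",
    "connector": "mssql",
    "host": "192.168.0.10",
    "port": 1433,
    "database": "bbdb-dev",
    "user": "sa",
    "password": "********"
  }
}
```

Using an explicit host and port also sidesteps the backslash-escaping problem that produced the "ALEX:undefined" in the error message.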
