OpenWhisk: how to include new/updated document in Cloudant change feed

Following the steps outlined in http://heidloff.net/article/how-to-trigger-openwhisk-cloudant, the triggered action does not receive the JSON document as a parameter, even though the trigger definition sets includeDocs to true.
Details:
Define an action (displayEvent.js) that simply returns the parameters:
function main(params) {
    return { payload: params };
}
Create action:
$ wsk action update displayEvent displayEvent.js
ok: updated action displayEvent
Bind Cloudant package:
$ wsk package bind /whisk.system/cloudant srcCloudant --param bluemixServiceName cloudant_for_openwhisk --param dbname src --param host *** --param overwrite false --param username *** --param password ***
Create trigger for Cloudant change feed:
$ wsk trigger create newTrackingEvent --feed /ptitzler_org_dev/srcCloudant/changes --param includeDocs true
ok: invoked /...
(I also tried the parameter include_docs, but the results are the same.)
Activity log entries:
"2017-01-25T16:12:34.711142625Z stdout: cloudant trigger feed: { bluemixServiceName: 'cloudant_for_openwhisk',",
"2017-01-25T16:12:34.711173464Z stdout: authKey: '***',",
"2017-01-25T16:12:34.711184757Z stdout: username: '***',",
"2017-01-25T16:12:34.711194489Z stdout: host: '***',",
"2017-01-25T16:12:34.711205213Z stdout: dbname: 'src',",
"2017-01-25T16:12:34.711214336Z stdout: includeDocs: true,",
"2017-01-25T16:12:34.711223392Z stdout: overwrite: false,",
"2017-01-25T16:12:34.711233436Z stdout: package_endpoint: '10.143.15.111:11000',",
"2017-01-25T16:12:34.711242728Z stdout: lifecycleEvent: 'CREATE',",
"2017-01-25T16:12:34.711251853Z stdout: triggerName: '/_/newTrackingEvent',",
"2017-01-25T16:12:34.711261256Z stdout: password: '***' }",
"2017-01-25T16:12:34.810688295Z stdout: cloudant trigger feed: done http request",
"2017-01-25T16:12:34.811401516Z stdout: { id: ':ptitzler_org_dev:newTrackingEvent',",
"2017-01-25T16:12:34.811430393Z stdout: accounturl: 'https://***-bluemix.cloudant.com',",
"2017-01-25T16:12:34.811441235Z stdout: dbname: 'src',",
"2017-01-25T16:12:34.811451032Z stdout: user: '***',",
"2017-01-25T16:12:34.811460676Z stdout: pass: '***',",
"2017-01-25T16:12:34.81147041Z stdout: host: '***-bluemix.cloudant.com',",
"2017-01-25T16:12:34.811490203Z stdout: protocol: 'https',",
"2017-01-25T16:12:34.811500637Z stdout: apikey: '***',",
"2017-01-25T16:12:34.811518951Z stdout: callback: { action: { name: '/ptitzler_org_dev/newTrackingEvent' } },",
"2017-01-25T16:12:34.811528767Z stdout: maxTriggers: -1,",
"2017-01-25T16:12:34.811537989Z stdout: triggersLeft: -1,",
"2017-01-25T16:12:34.811558803Z stdout: retriesLeft: 10 }"
Create rule:
$ wsk rule create logEvent newTrackingEvent displayEvent
ok: created rule logEvent
Create a new document in Cloudant:
{
    "_id": "0689f19a88edd98512a33df24ab084a4",
    "myproperty": 123
}
The resulting displayEvent activity log output includes only the document metadata, not the document itself:
{
    "duration": 64,
    "name": "displayEvent",
    "subject": "***",
    "activationId": "8d81abfb258e4752bbe74d79601c5a7e",
    "publish": false,
    "annotations": [
        {
            "key": "limits",
            "value": {
                "timeout": 60000,
                "memory": 256,
                "logs": 10
            }
        },
        {
            "key": "path",
            "value": "ptitzler_org_dev/displayEvent"
        }
    ],
    "version": "0.0.4",
    "response": {
        "result": {
            "payload": {
                "seq": "10-g1...4",
                "id": "0689f19a88edd98512a33df24ab084a4",
                "changes": [
                    {
                        "rev": "1-bf5fdf2758669105decf1fa0c6853626"
                    }
                ],
                "dbname": "src"
            }
        },
        "success": true,
        "status": "success"
    },
    "end": 1485361317416,
    "logs": [ ],
    "start": 1485361317352,
    "namespace": "ptitzler_org_dev"
}
What am I missing?

includeDocs is no longer supported. Please see the following: https://developer.ibm.com/openwhisk/2016/12/05/cloudant-feed-change-no-longer-support-includedocs/
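Since the trigger payload still contains the document id, one common workaround is to fetch the document inside the invocation itself, by putting the bound Cloudant package's read action in front of the handler in a sequence. A hedged sketch (fetchAndDisplay is an illustrative name; it assumes the srcCloudant binding and logEvent rule from the question):

```shell
# Create a sequence: the read action receives the `id` from the changes
# trigger and returns the full document, which is then fed to displayEvent.
wsk action create fetchAndDisplay --sequence /ptitzler_org_dev/srcCloudant/read,displayEvent

# Point the rule at the sequence instead of the bare action.
wsk rule update logEvent newTrackingEvent fetchAndDisplay
```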

Related

Unable to fetch existing file. Error: "the remote file does not exist, not transferring, ignored"

I want Ansible to read a CSV file on the test server (10.15.170.22), but I keep getting this error: "the remote file does not exist, not transferring, ignored".
This is how I structured my playbooks:
- name: Find files in sub-folders
  hosts: testserverhost
  tasks:
    - name: Search for files in folder
      win_find:
        paths: C:\folder\data\
      register: file

    - name: set_fact
      set_fact:
        filename: "{{ file | json_query('files[0].filename') }}"
Since there seems to be no way for Ansible to read the file on test server 10.15.170.22 directly, I added a fetch task before the read_csv task.
- name: Fetching file and reading data
  hosts: localhost
  tasks:
    - name: Set fact
      set_fact:
        filename: "{{ hostvars['10.15.170.22']['filename'] }}"

    - name: Fetch file from test server to ansible controller server
      fetch:
        src: C:\folder\data\{{ filename }}
        dest: /var/lib/awx/projects/ansibleproject/testproject/
        flat: yes

    - name: Read file
      read_csv:
        path: /var/lib/awx/projects/ansibleproject/testproject/{{ filename }}
        key: FirstName
        fieldnames: FirstName,LastName
        delimiter: ','
      register: userdata
Output of playbooks:
TASK [Search for files in folder] ********************************************************************************************
ok: [10.15.170.22] => {"changed": false, "examined": 1, "files": [{"attributes": "Archive", "checksum": "b86f10b0f2305c6c97a1464c19d9f53dce6b0367", "creationtime": 1676388747.9286883, "exists": true, "extension": ".csv", "filename": "testdata.csv", "hlnk_targets": [], "isarchive": true, "isdir": false, "ishidden": false, "isjunction": false, "islnk": false, "isreadonly": false, "isreg": true, "isshared": false, "lastaccesstime": 1676388747.9504724, "lastwritetime": 1676301040, "lnk_source": null, "lnk_target": null, "nlink": 1, "owner": "BUILTIN\\Administrators", "path": "C:\\folder\\data\\testdata.csv", "sharename": null, "size": 1225}], "matched": 1}
TASK [set_fact] ********************************************************************************************
ok: [10.15.170.22] => {
"ansible_facts": {
"filename": "testdata.csv"
},
"changed": false
}
TASK [Set fact] ********************************************************************************************
ok: [localhost] => {
"ansible_facts": {
"filename": "testdata.csv"
},
"changed": false
}
TASK [Fetch file from test server to ansible controller server] ********************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"src": "C:\\folder\\data\\testdata.csv"
}
},
"msg": "the remote file does not exist, not transferring, ignored"
}
I tried switching testserverhost to localhost, and also tried C:\folder\data\testdata.csv and C:/folder/data/testdata.csv for src, but I still get errors. I don't understand where it went wrong.
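For what it's worth, fetch resolves src on the host the play targets, so in a hosts: localhost play it looks for C:\folder\data\... on the controller itself, which is exactly the error shown. Fetch always copies from the targeted host down to the controller, so a hedged sketch would be to run it in the play that targets the test server (the task name is illustrative):

```
- name: Fetch file from test server to the controller
  hosts: testserverhost
  tasks:
    - name: Pull the CSV down to the controller
      fetch:
        src: C:\folder\data\{{ filename }}
        dest: /var/lib/awx/projects/ansibleproject/testproject/
        flat: yes
```

The read_csv task can then stay in the hosts: localhost play, since the file now exists on the controller.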

Using Volttron aggregation agent

I'm trying to get the aggregate agent to work with TimescaleDB.
https://volttron.readthedocs.io/en/main/developing-volttron/developing-agents/specifications/aggregate.html
I have a file aggregation.config
{
    "connection": {
        "type": "postgresql",
        "params": {
            "dbname": "volttrondb",
            "host": "127.0.0.1",
            "port": 5432,
            "user": "user",
            "password": "password",
            "timescale_dialect": true
        }
    },
    "aggregations": [
        # list of aggregation groups, each with a unique aggregation_period and
        # a list of points that need to be collected
        {
            "aggregation_period": "1h",
            "use_calendar_time_periods": true,
            "utc_collection_start_time": "2016-03-01T01:15:01.000000",
            "points": [
                {
                    "topic_names": ["campus/building/fake/EKG_Cos", "campus/building/fake/EKG_Sin"],
                    "aggregation_topic_name": "campus/building/fake/avg_of_EKG_Cos_EKG_Sin",
                    "aggregation_type": "avg",
                    "min_count": 2
                }
            ]
        }
    ]
}
Then I run the following command:
vctl install services/core/SQLAggregateHistorian/ --tag aggregate-historian -c config/aggregation.config --start
It starts correctly: vctl status shows it running, and there are no errors in the log.
I do not see the point campus/building/fake/avg_of_EKG_Cos_EKG_Sin in the topics table.
Any suggestions?

Xdebug is not launched in WSL2 in a docker CakePHP 3 application

I am struggling with Xdebug + WSL2 + CakePHP 3 + VSCode. Judging by the debug console, Xdebug seems to be running correctly, but when I run a script in the browser, Xdebug is not launched. This is the code:
Note: I forgot to mention that I am working in Docker; that's why the "0.0.0.0" in the hostname parameter.
This is the xdebug.ini (the original file repeated zend_extension=xdebug and xdebug.start_with_request=yes; shown here deduplicated):
[xdebug]
zend_extension=xdebug
xdebug.mode=develop,debug
xdebug.client_host='host.docker.internal'
xdebug.start_with_request=yes
xdebug.client_port=9003
xdebug.log=/var/log/xdebug/xdebug.log
xdebug.connect_timeout_ms=2000
launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for Xdebug",
            "type": "php",
            "request": "launch",
            "port": 9003,
            "hostname": "0.0.0.0",
            "pathMappings": {
                "/webroot": "${workspaceRoot}"
            },
            "log": true
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 0,
            "runtimeArgs": [
                "-dxdebug.start_with_request=yes"
            ],
            "env": {
                "XDEBUG_MODE": "debug,develop",
                "XDEBUG_CONFIG": "client_port=${port}"
            }
        },
        {
            "name": "Launch Built-in web server",
            "type": "php",
            "request": "launch",
            "runtimeArgs": [
                "-dxdebug.mode=debug",
                "-dxdebug.start_with_request=yes",
                "-S",
                "localhost:0"
            ],
            "program": "",
            "cwd": "${workspaceRoot}",
            "port": 9003,
            "serverReadyAction": {
                "pattern": "Development Server \\(http://localhost:([0-9]+)\\) started",
                "uriFormat": "http://localhost:%s",
                "action": "openExternally"
            }
        }
    ]
}
This is the debug console:
Listening on { address: '0.0.0.0', family: 'IPv4', port: 9003 }
<- launchResponse
Response {
    seq: 0,
    type: 'response',
    request_seq: 2,
    command: 'launch',
    success: true
}
<- initializedEvent
InitializedEvent { seq: 0, type: 'event', event: 'initialized' }
-> setBreakpointsRequest
{
    command: 'setBreakpoints',
    arguments: {
        source: {
            name: 'index.php',
            path: '/root/server/webroot/index.php'
        },
        lines: [ 40 ],
        breakpoints: [ { line: 40 } ],
        sourceModified: false
    },
    type: 'request',
    seq: 3
}
<- setBreakpointsResponse
Response {
    seq: 0,
    type: 'response',
    request_seq: 3,
    command: 'setBreakpoints',
    success: true,
    body: {
        breakpoints: [
            {
                verified: true,
                line: 40,
                source: {
                    name: 'index.php',
                    path: '/root/server/webroot/index.php'
                },
                id: 1
            }
        ]
    }
}
The xdebug.log file
[20] Log opened at 2022-05-16 04:42:03.776649
[20] [Step Debug] INFO: Connecting to configured address/port: host.docker.internal:9003.
[20] [Step Debug] INFO: Connected to debugging client: host.docker.internal:9003 (through xdebug.client_host/xdebug.client_port). :-)
[20] [Step Debug] -> <init xmlns="urn:debugger_protocol_v1" xmlns:xdebug="https://xdebug.org/dbgp/xdebug" fileuri="file:///var/www/html/webroot/info.php" language="PHP" xdebug:language_version="7.4.19" protocol_version="1.0" appid="20"><engine version="3.1.2"><![CDATA[Xdebug]]></engine><author><![CDATA[Derick Rethans]]></author><url><![CDATA[https://xdebug.org]]></url><copyright><![CDATA[Copyright (c) 2002-2021 by Derick Rethans]]></copyright></init>
[20] [Step Debug] <- feature_set -i 1 -n resolved_breakpoints -v 1
[20] [Step Debug] -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="https://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="1" feature="resolved_breakpoints" success="1"></response>
[20] [Step Debug] <- run -i 12
[20] [Step Debug] -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="https://xdebug.org/dbgp/xdebug" command="run" transaction_id="12" status="stopping" reason="ok"></response>
[20] [Step Debug] <- stop -i 13
[20] [Step Debug] -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="https://xdebug.org/dbgp/xdebug" command="stop" transaction_id="13" status="stopped" reason="ok"></response>
[20] Log closed at 2022-05-16 04:42:03.812679
UPDATE: Following a suggestion by HolyGonzo that I found in this Reddit thread: https://www.reddit.com/r/PHPhelp/comments/rqiw4h/need_help_troubleshooting_xdebug_configuration/ I added xdebug_break(); to my code, and the debugger started working. So the issue is clearly in the VSCode configuration, not in Xdebug.
SOLUTION:
After fighting with this for a few days, I finally found the issue:
In the launch.json in VSCode I updated this line and it works (my path was wrong: I had "/var/www/webroot" instead of "/var/www/html/webroot"):
"pathMappings": {
    "/var/www/html/webroot": "${workspaceFolder}/webroot"
},
Update:
To allow Xdebug to look into the vendor folder and the other folders outside /webroot, the configuration needs to be updated as follows (in my case, matching my server paths):
{
    "name": "Listen for Xdebug",
    "type": "php",
    "request": "launch",
    "port": 5902,
    "hostname": "localhost",
    "pathMappings": {
        "/var/www/html": "${workspaceFolder}"
    },
    "log": true
}

Starting and stopping services in vespa

The benchmarking page https://docs.vespa.ai/en/performance/vespa-benchmarking.html says that we need to restart the services after increasing the per-search thread count, using the commands vespa-stop-services and vespa-start-services.
Could you tell us whether we need to do this on all the content nodes or just the config nodes?
When deploying a change that requires a restart, the deploy command will list the actions you need to take. For example, when changing the global per-search thread setting from 2 to 5 as in the example below:
curl --header Content-Type:application/zip --data-binary @target/application.zip localhost:19071/application/v2/tenant/default/prepareandactivate | jq .
{
    "log": [
        {
            "time": 1645036778830,
            "level": "WARNING",
            "message": "Change(s) between active and new application that require restart:\nIn cluster 'mycluster' of type 'search':\n Restart services of type 'searchnode' because:\n 1) # Number of threads used per search\nproton.numthreadspersearch has changed from 2 to 5\n"
        }
    ],
    "tenant": "default",
    "url": "http://localhost:19071/application/v2/tenant/default/application/default/environment/prod/region/default/instance/default",
    "message": "Session 8 for tenant 'default' prepared and activated.",
    "configChangeActions": {
        "restart": [
            {
                "clusterName": "mycluster",
                "clusterType": "search",
                "serviceType": "searchnode",
                "messages": [
                    "# Number of threads used per search\nproton.numthreadspersearch has changed from 2 to 5"
                ],
                "services": [
                    {
                        "serviceName": "searchnode",
                        "serviceType": "searchnode",
                        "configId": "mycluster/search/cluster.mycluster/0",
                        "hostName": "vespa-container"
                    }
                ]
            }
        ],
        "refeed": [],
        "reindex": []
    }
}
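In this output only services of type 'searchnode' are listed, i.e. the content nodes, not the config nodes. A hedged sketch of acting on it, assuming shell access to each host named under configChangeActions.restart[].services[].hostName (here vespa-container):

```shell
# Run on every host listed under configChangeActions.restart[].services[]
# (the content nodes running 'searchnode' in this example):
vespa-stop-services
vespa-start-services
```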

JSON Input Transformer Path Specification

I am trying to transform the following JSON log (AWS CloudWatch/CloudTrail, if it matters):
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "xxx",
        "arn": "arn:aws:iam::xxx",
        "accountId": "xxx",
        "accessKeyId": "xxx",
        "userName": "xxx",
        "sessionContext": {
            "sessionIssuer": {},
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "true",
                "creationDate": "2021-01-07T13:50:07Z"
            }
        }
    },
    "eventTime": "2021-01-07T14:55:03Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "AuthorizeSecurityGroupIngress",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "xxx",
    "userAgent": "console.ec2.amazonaws.com",
    "requestParameters": {
        "groupId": "sg-xxx",
        "ipPermissions": {
            "items": [
                {
                    "ipProtocol": "tcp",
                    "fromPort": 22,
                    "toPort": 22,
                    "groups": {},
                    "ipRanges": {
                        "items": [
                            {
                                "cidrIp": "x.x.x.x/32",
                                "description": "x"
                            }
                        ]
                    },
                    "ipv6Ranges": {},
                    "prefixListIds": {}
                }
            ]
        }
    },
    "responseElements": {
        "requestId": "xxx",
        "_return": true
    },
    "requestID": "xxx",
    "eventID": "xxx",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "xxx"
}
To the following output:
"AuthorizeSecurityGroupIngress made against sg-xxx on [accountname] from [user#x.x.x.x]"
"Port range: 22"
"Source IP: x.x.x.x"
"Description: x"
Currently, by passing these 2 blocks into the CloudWatch Input Transformer:
{
    "event": "$.detail.eventName",
    "sg": "$.detail.requestParameters.groupId",
    "user": "$.detail.userIdentity.userName",
    "sourceip": "$.detail.sourceIPAddress",
    "dsc": "$.detail.requestParameters.ipPermissions.items"
}
"<event> made against <sg> on [accountname] from [<user>#<sourceip>]"
"Details: <dsc>"
I am able to create the following output:
"AuthorizeSecurityGroupIngress made against sg-xxx on [accountname] from [x#x.x.x.x]"
"Details: {items:[{ipProtocol:tcp,fromPort:22,toPort:22,groups:{},ipRanges:{items:[{cidrIp:x.x.x.x/32,description:x}]},ipv6Ranges:{},prefixListIds:{}}]}"
However, when I attempt to specify the input path even further by passing more specific placeholders:
{
    "event": "$.detail.eventName",
    "sg": "$.detail.requestParameters.groupId",
    "user": "$.detail.userIdentity.userName",
    "sourceip": "$.detail.sourceIPAddress",
    "prt": "$.detail.requestParameters.ipPermissions.items.toPort",
    "src": "$.detail.requestParameters.ipPermissions.items.ipRanges.items.cidrIp",
    "dsc": "$.detail.requestParameters.ipPermissions.items.ipRanges.items.description"
}
"<event> made against <sg> on [accountname] from [<user>#<sourceip>]"
"Port Range: <prt>"
"Source IP: <src>"
"Description: <dsc>"
The output is blank for the placeholders' (prt,src,dsc) values:
"AuthorizeSecurityGroupIngress made against sg-xxx on [accountname] from [user#x.x.x.x]"
"Port range: "
"Source IP: "
"Description: "
VS. expected
"AuthorizeSecurityGroupIngress made against sg-xxx on [accountname] from [user#x.x.x.x]"
"Port range: 22"
"Source IP: x.x.x.x"
"Description: x"
Where am I messing up in the input path?
Is it the '[]' brackets causing the issue?
In two places your JSON has an items array, but your paths treat them as objects. You need to call out the array element you want to pluck properties from:
"prt":"$.detail.requestParameters.ipPermissions.items[0].toPort",
"src":"$.detail.requestParameters.ipPermissions.items[0].ipRanges.items[0].cidrIp",
"dsc":"$.detail.requestParameters.ipPermissions.items[0].ipRanges.items[0].description"
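The underlying issue can be reproduced in plain Python (a hedged sketch against a trimmed-down copy of the event): a key lookup on a list fails, while indexing the element first works, which is what the [0] in the corrected paths expresses.

```python
# Trimmed-down version of the CloudTrail event from the question.
event = {
    "requestParameters": {
        "ipPermissions": {
            "items": [
                {
                    "toPort": 22,
                    "ipRanges": {
                        "items": [{"cidrIp": "x.x.x.x/32", "description": "x"}]
                    },
                }
            ]
        }
    }
}

# `items` is a list, not a dict: a lookup like items["toPort"] would raise
# TypeError. Index the element first, mirroring $.…ipPermissions.items[0].toPort:
items = event["requestParameters"]["ipPermissions"]["items"]
print(items[0]["toPort"])                          # 22
print(items[0]["ipRanges"]["items"][0]["cidrIp"])  # x.x.x.x/32
```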
