How to call a function when "RPC.allow" is applied? - volttron

The volttron/platform/store.py file contains:
@RPC.export
@RPC.allow('edit_config_store')
def manage_store(self, identity, config_name, raw_contents, config_type="raw"):
    contents = process_raw_config(raw_contents, config_type)
    self._add_config_to_store(identity, config_name, raw_contents, contents, config_type,
                              trigger_callback=True)
To call this function from outside, I wrote the code below.
self.vip.rpc.call(CONFIGURATION_STORE, "manage_store", 'platform.driver', config_name, raw_contents, 'json')
The error code is as follows.
volttron.platform.jsonrpc.Error: method "manage_store" requires capabilities {'edit_config_store'}, but capability [] was provided for user pnp
The auth entry is registered as below.
INDEX: 8
{
  "domain": null,
  "address": null,
  "mechanism": "CURVE",
  "credentials": "6vjPXC8ctO8oWkeMXAOe5FsAM9vZD_sg0vkLrstnVFs",
  "groups": [],
  "roles": [],
  "capabilities": {
    "edit_config_store": {
      "identity": "pnp.b"
    }
  },
  "comments": "Automatically added on agent install",
  "user_id": "pnp.b",
  "enabled": true
}
How do I fix the capability?

This is a security feature. By default, an agent can only update its own config store. So the agent with identity pnp.b can only edit its own config store and not that of platform.driver. But you (or whoever has access to run the vctl auth command or to directly edit the $VOLTTRON_HOME/auth.json file) can grant the pnp.b agent the capability to edit the config store of platform.driver.
The capabilities entry for the agent can be changed to a regular expression that allows pnp.b or platform.driver (or any other pattern you want). Regular expressions should be enclosed in /. For example:
{
  "domain": null,
  "address": null,
  "mechanism": "CURVE",
  "credentials": "6vjPXC8ctO8oWkeMXAOe5FsAM9vZD_sg0vkLrstnVFs",
  "groups": [],
  "roles": [],
  "capabilities": {
    "edit_config_store": {
      "identity": "/pnp.b|platform.driver/"
    }
  },
  "comments": "Automatically added on agent install",
  "user_id": "pnp.b",
  "enabled": true
}
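The pattern matching itself is ordinary regular-expression matching. As a minimal Python sketch of how a /…/-wrapped value behaves (the identity_allowed helper is hypothetical, written only for illustration; it is not VOLTTRON's actual implementation):

```python
import re

def identity_allowed(pattern: str, identity: str) -> bool:
    """Illustrative check of a capability's "identity" value.

    Assumption (mirroring the answer above): a value enclosed in '/' is
    treated as a regular expression, anything else as an exact match.
    """
    if pattern.startswith("/") and pattern.endswith("/"):
        # Note the '.' in "pnp.b" is a regex wildcard here.
        return re.fullmatch(pattern[1:-1], identity) is not None
    return pattern == identity

print(identity_allowed("/pnp.b|platform.driver/", "platform.driver"))  # True
print(identity_allowed("pnp.b", "platform.driver"))                    # False
```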

Thank you very much for your answer.
Referring to your answer, I corrected the auth entry's capability.
INDEX: 8
{
  "domain": null,
  "address": null,
  "mechanism": "CURVE",
  "credentials": "TG3z7cEa1FnMp_642srvNLyd6HsxTq18xMOg20FFWjE",
  "groups": [],
  "roles": [],
  "capabilities": {
    "edit_config_store": {
      "identity": "/pnp.b|platform.driver/"
    }
  },
  "comments": "Automatically added on agent install",
  "user_id": "pnp.b",
  "enabled": true
}
However, the log below still shows that the agent is not authorized.
Did I make a mistake during the correction?
Do you have any comments on this?
Note: I use the volttron 7.0rc branch.
2020-04-07 09:09:37,467 () volttron.platform.vip.agent.subsystems.rpc ERROR: unhandled exception in JSON-RPC method 'manage_store':
Traceback (most recent call last):
File "/volttron7_200331/volttron/platform/vip/agent/subsystems/rpc.py", line 158, in method
return method(*args, **kwargs)
File "/volttron7_200331/volttron/platform/vip/agent/subsystems/rpc.py", line 283, in checked_method
raise jsonrpc.exception_from_json(jsonrpc.UNAUTHORIZED, msg)
volttron.platform.jsonrpc.Error: method "manage_store" requires capabilities {'edit_config_store'}, but capability [] was provided for user pnp

Related

Not able to update trust policy for a role

I am trying to create a feature group using the SageMaker API on an EC2 instance.
I got the below error while running the Python script which creates the feature group.
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the CreateFeatureGroup operation: The execution role ARN is invalid. Please ensure that the role exists and that its trust relationship policy allows the action 'sts:AssumeRole' for the service principal 'sagemaker.amazonaws.com'.
I observed that the role I am using doesn't have "sagemaker.amazonaws.com" as a trusted entity, so I tried to add it; however, I'm getting the error "user: arn:aws:sts::xxxxxx11:assumed-role/engineer/abcUser is not authorized to perform: iam:UpdateAssumeRolePolicy on resource: role app-role-12345 with an explicit deny in an identity-based policy"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": ["ec2.amazonaws.com", "sagemaker.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
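For reference, the trust relationship the first error asks for can be mirrored in a few lines of Python. The trusts_service helper below is hypothetical (illustration only), and the policy dict repeats the JSON above:

```python
def trusts_service(policy: dict, service: str) -> bool:
    """Return True if any Allow statement lets `service` call sts:AssumeRole."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]        # IAM allows a string or a list here
        if "sts:AssumeRole" not in actions:
            continue
        principals = stmt.get("Principal", {}).get("Service", [])
        if isinstance(principals, str):
            principals = [principals]
        if service in principals:
            return True
    return False

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["ec2.amazonaws.com", "sagemaker.amazonaws.com"]},
        "Action": "sts:AssumeRole",
    }],
}
print(trusts_service(policy, "sagemaker.amazonaws.com"))  # True
```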
I tried through Terraform as well:
data "aws_iam_policy_document" "instance-assume-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com", "sagemaker.amazonaws.com"]
    }
  }
}
resource "aws_iam_role" "instance" {
  name  = "engineer-12345"
  assume_role_policy = data.aws_iam_policy_document.instance-assume-role-policy.json
}
However, it's not working; I got an access denied error.
Can anyone help to resolve this?
Code used:
import pandas as pd
import sagemaker
from time import gmtime, strftime, sleep
from sagemaker.feature_store.feature_group import FeatureGroup
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()
print("role : ", role)
print("start")
try:
    customer_data = pd.read_csv("data.csv", dtype={'customer_id': int, 'city_code': int,
                                                   'state_code': int, 'country_code': int,
                                                   'eventtime': float})
    customers_feature_group_name = "customers-fg-01"
    customers_feature_group = FeatureGroup(name=customers_feature_group_name,
                                           sagemaker_session=sagemaker_session)
    current_time_sec = int(round(time.time()))
    record_identifier_feature_name = "customer_id"
    customers_feature_group.load_feature_definitions(data_frame=customer_data)
    customers_feature_group.create(
        s3_uri="s3://xxxx/sagemaker-featurestore/",
        record_identifier_name=record_identifier_feature_name,
        event_time_feature_name="eventtime",
        role_arn='arn:aws:iam::1234:role/role-1234',
        enable_online_store=True,
        online_store_kms_key_id='arn:aws:kms:us-east-1:1234:key/1111'
    )
except Exception as e:
    print(str(e))

Referencing lambda environment variable in Cloudformation template throws "circular dependency" error

I have an application that uses an AWS Lambda function triggered by an implicit HTTP API event, which gets POST/GET requests from a React app:
"mylambda": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "InlineCode": "exports.handler = async (event, context)",
    "MemorySize": {
      "Ref": "LambdaMemorySize"
    },
    "Handler": "backend/index.handler",
    "Role": {
      "Fn::GetAtt": ["myrole", "Arn"]
    },
    "Timeout": {
      "Ref": "LambdaTimeout"
    },
    "Runtime": "nodejs12.x",
    "FunctionName": "myFunction",
    "Events": {
      "httpapi": {
        "Type": "HttpApi",
        "Properties": {
          "Path": "/myApi",
          "Method": "ANY"
        }
      }
    }
  }
}
},
I want to add an environment variable inside the Lambda function for the HTTP API endpoint so that I can use it in the Lambda's handler:
"Environment": {
  "Variables": {
    "apiEndpoint": {
      "Fn::Join": [
        "",
        [
          "https://",
          { "Ref": "ServerlessHttpApi" },
          ".execute-api.",
          { "Ref": "AWS::Region" },
          ".amazonaws.com"
        ]
      ]
    }
  }
}
The problem is that this throws a circular dependency error, and I can see why (the Lambda relies on the API Gateway, and the API Gateway relies on the Lambda).
I tried to create the HTTP API separately, but then there seemed to be no way of referencing the API inside the Lambda as a trigger; i.e., when I deploy, the template creates a Lambda function that doesn't have my API as a trigger, while my API is created separately.
I know the whole environment-variable thing can be done easily from the console, but my deployment model assumes everything should be done in the CF template.
An answer here suggested:
"You could easily extract the information you need from that event instead of having it as an environment variable, but this depends on your exact use case."
I could do this in the body of the Lambda's handler in index.js:
module.exports.apiendpoint = event.requestContext.domainName;
But this collides with the fact that 1) I can't use index.js variables that are outside React's src folder, and 2) I'm not sure how the app will run for the first time, since it'll require the GET/POST request first to trigger the Lambda.
I think my main problem is simple: I don't know how I can reference the HTTP API endpoint in the Lambda's environment variables without throwing an error. Is the whole structure of my app wrong?
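To make the quoted suggestion concrete: an HTTP API event carries the deployed domain in requestContext.domainName, so the handler can rebuild the endpoint per request with no template reference at all. A minimal sketch, written in Python purely for illustration (the question's runtime is Node.js, where the same field is event.requestContext.domainName; the sample event fragment is hypothetical):

```python
def handler(event, context=None):
    # The HTTP API event includes the deployed domain, so no environment
    # variable (and no circular Ref back to the API) is needed.
    domain = event["requestContext"]["domainName"]
    return {"statusCode": 200, "body": f"https://{domain}"}

# Hypothetical event fragment for illustration only.
sample_event = {"requestContext": {"domainName": "abc123.execute-api.us-east-1.amazonaws.com"}}
print(handler(sample_event)["body"])  # https://abc123.execute-api.us-east-1.amazonaws.com
```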

Sequence contains no elements error once I go from Service provider Project to Identity Project

I'm using ITFoxtec SAML 2.0, where I have started multiple projects: TestIdpCore and TestWebAppCore. Once I click the TestWebAppCore login link, I get the error "Sequence contains no elements".
The error occurs because the identity provider TestIdpCore cannot find the relying party TestWebAppCore.
I think maybe the TestWebAppCore endpoint has changed or the application is not answering.
The relying party TestWebAppCore is by default exposed on https://localhost:44306/. And the relying party is configured in the identity provider TestIdpCore appsettings.json with the metadata endpoint "https://localhost:44306/metadata".
"Settings": {
  "RelyingParties": [
    {
      "Metadata": "https://localhost:44327/metadata"
    },
    {
      "Metadata": "https://localhost:44306/metadata"
    },
    {
      "Metadata": "https://localhost:44307/metadata"
    },
    {
      "Metadata": "https://localhost:44308/metadata"
    },
    {
      "Metadata": "https://localhost:44309/metadata"
    }
  ]
}
If the TestWebAppCore endpoint has changed you need to change the identity provider configuration.
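"Sequence contains no elements" is the standard exception LINQ's First() throws when a query comes back empty, which fits a relying-party lookup that matches nothing. A hypothetical Python sketch of that lookup (the find_relying_party helper and the trimmed settings are illustrative only, not ITFoxtec's code):

```python
import json

# Trimmed copy of the identity provider's appsettings.json, for illustration.
settings_json = """
{
  "Settings": {
    "RelyingParties": [
      { "Metadata": "https://localhost:44327/metadata" },
      { "Metadata": "https://localhost:44306/metadata" }
    ]
  }
}
"""

def find_relying_party(settings: dict, metadata_url: str) -> dict:
    # Mirrors a First()-style lookup: raises when no party matches,
    # which is what surfaces as "Sequence contains no elements" in LINQ.
    matches = [rp for rp in settings["Settings"]["RelyingParties"]
               if rp["Metadata"] == metadata_url]
    if not matches:
        raise LookupError("Sequence contains no elements")
    return matches[0]

cfg = json.loads(settings_json)
print(find_relying_party(cfg, "https://localhost:44306/metadata")["Metadata"])
```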

Auth0 returns a 401 on token request. Auth0 logs show login is successful

I'm integrating Auth0 from the tutorial into my own application and have encountered a couple of problems with authentication, reflected in the Auth0 logs.
This occurs on hitting my React login button:
Login.js
import React from "react";
import { useAuth0 } from "@auth0/auth0-react";
import '../components/App.css';

const LoginButton = () => {
  const { loginWithRedirect } = useAuth0();
  return <button className="btn btn-primary" onClick={() => loginWithRedirect()}>Log In</button>;
};

export default LoginButton;
However, in the Auth0 Application logs I see that I am successfully authenticated, and I also get a Failed Exchange, a Successful Login, and a Warning During Login.
Fixed Log: Warning During Login
Here's the text of the log for Warning During Login:
You are using Auth0 development keys which are only intended for use
in development and testing. This connection (google-oauth2) should be
configured with your own Development Keys to enable the consent page
to show your logo instead of Auth0's and to enable SSO for this
connection. AUTH0 DEVELOPMENT KEYS ARE NOT RECOMMENDED FOR PRODUCTION
ENVIRONMENTS. To learn more about Development Keys please refer to
https://auth0.com/docs/connections/social/devkeys.
This was fixed by following these instructions on the Auth0 website. Essentially:
Creating a google project and OAuth credentials
Adding the credentials inside my Auth0 connected apps
Broken: Login Successful
The log shows that it was a successful login. However, in my application, when I click the Login button the expected Auth0 modal does not appear.
{
  "date": "2020-10-14T09:14:06.549Z",
  "type": "s",
  "connection_id": "",
  "client_id": "<MyClientId>",
  "client_name": "<MyClientName>",
  "ip": "<MyIP>",
  "user_agent": "Safari 13.1.2 / Mac OS X 10.15.6",
  "details": {
    "prompts": [],
    "completedAt": 1602666846548,
    "elapsedTime": null,
    "session_id": "m0AeJer-FhZ0rb9UFPWgvDkvN7MW36h_"
  },
  "hostname": "<MyHost>",
  "user_id": "<MyUserID>",
  "user_name": "<MyUserName>",
  "auth0_client": {
    "name": "auth0-react",
    "version": "1.1.0"
  },
  "log_id": "90020201014091409270008789595401783120816526823843168290",
  "_id": "90020201014091409270008789595401783120816526823843168290",
  "isMobile": false,
  "description": "Successful login"
}
And looking at the response headers in Safari, the token request has 401'd:
URL: https://<testdomain>.auth0.com/oauth/token
Status: 401
Source: Network
Address: <testaddress>
Initiator:
auth0-spa-js.production.esm.js:15
Fixed Log: Failed Exchange
After ensuring that I was connecting to Google properly, I saw that the issue persisted. Looking at the log, I get the following under the heading of a Failed Exchange.
{
  "date": "2020-10-14T09:14:07.304Z",
  "type": "feacft",
  "description": "Unauthorized",
  "connection_id": "",
  "client_id": "<MyClientId>",
  "client_name": null,
  "ip": "<TheIP>",
  "user_agent": "Safari 13.1.2 / Mac OS X 10.15.6",
  "details": {
    "code": "*************Rw7"
  },
  "hostname": "<MyHostName>",
  "user_id": "",
  "user_name": "",
  "log_id": "90020201014091410270002070951766882711015226887425228818",
  "_id": "90020201014091410270002070951766882711015226887425228818",
  "isMobile": false
}
This question fixed the Failed Exchange issue for me. Change your Auth0 application settings to:
Application Type: Regular Web Application
Token Endpoint Authentication Method: None
This, however, unearthed a new issue...
Broken Log: Failed Silent Auth
There are a number of fixes I did here, so I'll document them in the answer.
Warning During Login
This was fixed by ensuring that my credentials provider had been properly set up, in this case Google. For instructions on how to add Google as a credentials provider, see here.
Failed Exchange
This was fixed by going to the auth0 dashboard application settings and modifying the setting Application Type to Regular Web Application and the setting Token Endpoint Authentication Method to None.
Login Successful (but not really)
This disappeared once I fixed the Failed Exchange above.
Failed Silent Auth
This was never "fixed" and the error still appears in the log. However, the comment on this question prompted me to revisit my Allowed Web Origins and Allowed Origins (CORS) settings in Auth0, changing them to:
https://<mydomain>.eu.auth0.com, http://localhost:3000
This was the last issue in the chain, and I could now log in and log out as expected.

Google Compute Engine: Can't authorize request to Task Queue API

Hi everyone,
I'm having trouble trying to authorize my Compute Engine instance to lease tasks from a Task Queue queue.
I've included the necessary scopes (I think) in the instance creation config:
"metadata": {
  "kind": "compute#metadata",
  "items": [
    {
      "key": "startup-script-url",
      "value": "[MY-STARTUP-SCRIPT]"
    },
    {
      "key": "service_account_scopes",
      "value": "https://www.googleapis.com/auth/cloud-platform"
    }
  ]
},
"serviceAccounts": [
  {
    "email": "[MY-SERVICE-ACCOUNT]",
    "scopes": [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/taskqueue",
      "https://www.googleapis.com/auth/cloud-platform",
      "https://www.googleapis.com/auth/compute"
    ]
  }
]
Also in my queue.yaml, I have added the same service account to the acl directive with the "user_email" attribute:
queue:
- name: [MY-QUEUELIST]
  mode: pull
  retry_parameters:
    task_retry_limit: 5
  acl:
  - user_email: [MY-COMPUTE-ENGINE-SERVICE-ACCOUNT]
Finally, the script that I run on my instance uses the GoogleCredentials.get_application_default() function to obtain the credentials. These credentials are passed as an argument to the build() method (as stated here: https://cloud.google.com/compute/docs/authentication).
The end result is that when I try to list the tasks of the given task queue, I get this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/tasks/v1/lists/documentation-compiler-queue/tasks?alt=json
returned "Insufficient Permission">
What am I missing?!
Thanks in advance.
I found my own mistake!
Just ignore this question. I was using:
from googleapiclient.discovery import build
taskqueue_service = build('task', 'v1beta2', credentials=credentials)
instead of:
from googleapiclient.discovery import build
taskqueue_service = build('taskqueue', 'v1beta2', credentials=credentials)
Note the [API name] string in the build() method.
