As per the Camel documentation for Consul (camel.apache.org/consul-component.html), the supported HTTP APIs are kv, event and agent. There are examples for kv (key/value store) which work fine, but there is no such example for the agent API. I went through the Consul documentation [www.consul.io/docs/agent/http/agent.html] and the corresponding Java client [github.com/OrbitzWorldwide/consul-client] as well and tried to figure out how the consul:agent component should work, but I found nothing simple there.
main.getCamelTemplate().sendBodyAndHeader(
        "consul:agent?url=http://localhost:8500/v1/agent/service/register",
        payload,
        ConsulConstants.CONSUL_ACTION, ConsulAgentActions.AGENT); // also tried with ConsulAgentActions.SERVICES, but no luck
I also checked the test cases mentioned at https://github.com/apache/camel/tree/master/components/camel-consul/src/test/java/org/apache/camel/component/consul but was unable to find anything related to the agent API.
So my question is: how do I use the consul:agent component?
UPDATE: I tried the code below and was able to get the services.
Object res = main.getCamelTemplate().requestBodyAndHeader("consul:agent", "", ConsulConstants.CONSUL_ACTION, ConsulAgentActions.SERVICES);
It seems that the consul component only works for the GET operations of the HTTP agent API. But in that case, how do I register a new service (like /v1/agent/service/register: Registers a new local service) with the consul component?
This code works for me:
ImmutableService service =
        ImmutableService.builder()
                .id("service-1")
                .service("service")
                .addTags("camel", "service-call")
                .address("127.0.0.1")
                .port(9011)
                .build();

ImmutableCatalogRegistration registration =
        ImmutableCatalogRegistration.builder()
                .datacenter("dc1")
                .node("node1")
                .address("127.0.0.1")
                .service(service)
                .build();
ProducerTemplate template = main.getCamelTemplate();
Object res = template.requestBodyAndHeader("consul:catalog", registration, ConsulConstants.CONSUL_ACTION, ConsulCatalogActions.REGISTER);
But this looks somewhat inelegant (like a workaround), and I think there are other solutions.
One can use
.to("consul:agent?action=SERVICES")
to retrieve the registered services as a Map<String, Service>, keyed by service id.
And
.to("consul:catalog?action=REGISTER")
to write registrations, expecting an ImmutableCatalogRegistration as the body.
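For example, a minimal RouteBuilder sketch combining both (the direct: endpoint names are just placeholders):

import org.apache.camel.builder.RouteBuilder;

public class ConsulRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // expects an ImmutableCatalogRegistration as the message body
        from("direct:register")
            .to("consul:catalog?action=REGISTER");

        // returns Map<String, Service>, keyed by service id
        from("direct:services")
            .to("consul:agent?action=SERVICES");
    }
}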
Note that you can employ a CamelServiceRegistrationRoutePolicy to register Camel routes as services automatically.
Related
I am currently trying to use Camunda Platform, and as part of this I am building a React application that calls a GraphQL API and performs some actions. So far I have used the API with Postman and it does what I want. The GraphQL mutation is the following:
mutation claimTask($taskId: String!, $assignee: String) {
  claimTask(taskId: $taskId, assignee: $assignee) {
    id
    name
    taskDefinitionId
    processName
    creationTime
    completionTime
    assignee
    variables {
      id
      name
      value
      previewValue
      isValueTruncated
    }
    taskState
    sortValues
    isFirst
    formKey
    processDefinitionId
    candidateGroups
  }
}
And the endpoint is
http://{my_ip}:8082/graphql
which is hosted on a personal VM. What I am trying to do now is make the same request through the React app (Apollo Client). So far, I am getting a CORS policy error:
Access to fetch at 'http://{my_ip}:8082/graphql' from origin 'http://localhost:3000' has been blocked by CORS policy
I understand that I have to somehow configure which origins the server accepts. My question is: since I am using an existing API, should I do this from the Express server (Apollo Server) configuration? So far, every solution I have found talks about implementing the API from scratch, including defining the schemas.
I have concluded that I should use the Express server to create a kind of proxy, so that the React app hits the API through it, but I cannot figure out how exactly this is implemented.
I know that this is a vague question, but any suggestion could be very useful.
Thank you!!
It is a best practice not to hit the GraphQL API directly, but to create your own facade, which exposes the functionality your front-end needs, possibly in a more use-case-specific way. This means connectivity only needs to be allowed server-to-server between the back-ends. It is more secure, as you don't need to open the API to the public, and it also solves the cross-domain challenge you have. Your facade will be exposed under your own domain.
Here is an example NestJS client, "Generating the Tasklist service":
https://docs.camunda.io/docs/apis-clients/tasklist-api/tasklist-api-tutorial/#generating-the-tasklist-service
On your express backend you would do something similar.
(This example uses a Java back-end with React, but I am guessing you want JS: https://github.com/camunda-community-hub/camunda-8-lowcode-ui-template/blob/main/src/main/java/org/example/camunda/process/solution/facade/TaskController.java .)
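As a rough sketch of what such a facade endpoint could look like on an Express back-end (the route path and names are placeholders, and it assumes Node 18+ so fetch is available globally):

const express = require('express');
const app = express();
app.use(express.json());

// narrow, use-case-specific endpoint the React app calls instead of the GraphQL API
app.post('/api/tasks/:taskId/claim', async (req, res) => {
  // server-to-server call to the Tasklist GraphQL API
  const response = await fetch('http://{my_ip}:8082/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `mutation claimTask($taskId: String!, $assignee: String) {
        claimTask(taskId: $taskId, assignee: $assignee) { id assignee taskState }
      }`,
      variables: { taskId: req.params.taskId, assignee: req.body.assignee },
    }),
  });
  res.json(await response.json());
});

app.listen(3001);

During development you can point the React dev server at this facade (or enable CORS on the facade only), so the Camunda API itself never needs to be opened to the browser.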
I am working on something which includes LWC with the Tooling API. I wrote the method below, which makes a callout, but when I call this method from LWC I am unable to get a session id; when I call the same method from the Developer Console, it works fine.
Apex Code:
@AuraEnabled
public static String getList(String fieldName) {
    HttpRequest req = new HttpRequest();
    req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
    System.debug('res------>' + UserInfo.getSessionId());
    req.setHeader('Content-Type', 'application/json');
    req.setEndpoint('callout:Tooling_Query/query/?q=Select+id,Namespaceprefix,developername,TableEnumOrId+FROM+customfield+Where+developername+LIKE\'' + fieldName + '\'');
    req.setMethod('GET');
    Http h = new Http();
    HttpResponse res = h.send(req);
    System.debug('res------>' + res.getBody());
    return res.getBody();
}
When I call it from lwc it returns this
[{"message":"This session is not valid for use with the REST API","errorCode":"INVALID_SESSION_ID"}]
So, how can I get a session id from LWC? I have already set up a Connected App and a Named Credential named Tooling_Query, and added the URL to Remote Site Settings.
Please help me here.
You can't. Your Apex code called in a Lightning Web Components context cannot get an API-enabled Session Id. This is documented in the Lightning Web Components Dev Guide:
By security policy, sessions created by Lightning components aren’t enabled for API access. This restriction prevents even your Apex code from making API calls to Salesforce. Using a named credential for specific API calls allows you to carefully and selectively bypass this security restriction.
The restrictions on API-enabled sessions aren’t accidental. Carefully review any code that uses a named credential to ensure you’re not creating a vulnerability.
The only supported approach is to use a Named Credential authenticated as a specific user.
There is a hack floating around that exploits a Visualforce page to obtain a Session Id from such an Apex context. I do not recommend doing this, especially if you need to access the privileged Tooling API. Use the correct solution and build a Named Credential.
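For illustration, a minimal sketch of what the callout could look like once the Named Credential (the asker's Tooling_Query) is configured with its own authentication (named principal or per-user OAuth), so that no session id is handled in Apex at all:

@AuraEnabled
public static String getList(String fieldName) {
    // the Named Credential supplies the Authorization header;
    // no UserInfo.getSessionId() is required
    HttpRequest req = new HttpRequest();
    req.setEndpoint('callout:Tooling_Query/query/?q=Select+id,Namespaceprefix,developername,TableEnumOrId+FROM+customfield+Where+developername+LIKE\'' + fieldName + '\'');
    req.setMethod('GET');
    req.setHeader('Content-Type', 'application/json');
    return new Http().send(req).getBody();
}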
I am moving my app from a Svelte SPA (original) to a SvelteKit multi-page app (new).
In the original app, I configure an http client up top and put it in context using:
setContext(HTTP_CLIENT, httpClient)
Now the entire app can get that http client using
const httpClient = getContext(HTTP_CLIENT)
I do this because my app can be started with debug parameters that turn on http request logging.
I'm not clear on how to do something similar in SvelteKit, because it seems that pages do not share a context.
I tried sticking the http client in the session like this:
import { session } from "$app/stores";
$session.httpClient = httpClient
and I got:
Error: Failed to serialize session data: Cannot stringify arbitrary non-POJOs
So $session is meant to be serialized, ok. Does that mean that I need to put whatever debug parameters a user supplied in $session, and each page needs to freshly instantiate its own http client? Or is there some other idiomatic SvelteKit way of doing this?
PS: I know SvelteKit has its own fetch, so you might want to say "don't use your own http client", but my app uses many different service objects (a GraphQL client, for example) that can be configured in debug (and other) modes, so please don't zero in on the fact that my example is an http client.
One way around this could be to send the configuration down in the top __layout file, create the http client there, and keep it in a store. Since stores are shared across all pages, the client can then be used freely from this store.
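A minimal sketch of that idea (the file names are illustrative, and createHttpClient stands in for however you construct and configure your client):

// src/lib/httpClient.js
import { writable } from 'svelte/store';
export const httpClient = writable(null);

<!-- src/routes/__layout.svelte -->
<script>
  import { httpClient } from '$lib/httpClient';
  import { createHttpClient } from '$lib/createHttpClient'; // your own factory
  // read the debug configuration once, build the client, share it via the store
  httpClient.set(createHttpClient({ debug: true }));
</script>
<slot />

<!-- any page -->
<script>
  import { httpClient } from '$lib/httpClient';
  const client = $httpClient; // same configured instance everywhere
</script>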
The API is a backend to a mobile app. I don't need user authentication. I simply need a way to secure access to this API. Currently, my backend is exposed.
The documentation seems to only talk about user authentication and authorization, which is not what I need here. I just need to ensure only my mobile app can talk to this backend and no one else.
Yes, you can do that: use authentication to secure your endpoints without doing user authentication.
I have found that this way of doing it is not well documented. I haven't actually done it myself, but I intend to, so I paid attention when I saw it being discussed in some of the IO13 videos (I think that's where I saw it).
Here's my understanding of what's involved:
Create a Google API project (though this doesn't really involve their APIs, other than authentication itself).
Create OAuth client IDs that are tied to your app via its package name and the SHA1 fingerprint of the certificate that you will sign the app with.
You will add these client IDs to the list of acceptable IDs for your endpoints. You will also add a User parameter to your endpoints, but it will be null since no user is specified.
@ApiMethod(
    name = "sendInfo",
    clientIds = { Config.WEB_CLIENT_ID, Config.MY_APP_CLIENT_ID, Config.MY_DEBUG_CLIENT_ID },
    audiences = { Config.WEB_CLIENT_ID }
    // Yes, you specify a 'web' ID even if this isn't a Web client.
)
public void sendInfo(User user, Info greeting) {
There is some decent documentation about the above, here:
https://developers.google.com/appengine/docs/java/endpoints/auth
Your client app will specify these client IDs when formulating the endpoint service call. All the OAuth details will be taken care of behind the scenes on your client device, such that your client IDs are translated into authentication tokens.
HttpTransport transport = AndroidHttp.newCompatibleTransport();
JsonFactory jsonFactory = new JacksonFactory();
GoogleAccountCredential credential = GoogleAccountCredential.usingAudience(ctx, Config.WEB_CLIENT_ID);
// credential.setSelectedAccountName( user ); // do not specify a user
Myendpoint.Builder builder = new Myendpoint.Builder(transport, jsonFactory, credential);
This client code is just my best guess - sorry. If anyone else has a reference for exactly what the client code should look like then I too would be interested.
I'm sorry to say that Google doesn't provide a solution for your problem (which is my problem too).
You can use their API key mechanism (see https://developers.google.com/console/help/new/#usingkeys), but there is a huge hole in this strategy, courtesy of Google's own API Explorer (see https://developers.google.com/apis-explorer/#p/), which is a great development tool for testing APIs but exposes all Cloud Endpoints APIs, not just Google's service APIs. This means anyone with the name of your project can browse and call your API at their leisure, since the API Explorer circumvents the API key security.
I found a workaround (based on bossylobster's great response to this post: Simple Access API (Developer Key) with Google Cloud Endpoint (Python)), which is to pass a request field that is not part of the message request definition in your client API and then read it in your API server. If you don't find the undocumented field, you raise an unauthorized exception. This plugs the hole created by the API Explorer.
In iOS (which I'm using for my app), you add a property to each request class (the ones created by Google's API generator tool) like so:
@property (copy) NSString *hiddenProperty;
and set its value to a key that you choose. In your server code (Python in my case) you check for its existence and barf if you don't see it, or if it's not set to the value that your server and client have agreed on:
mykey, keytype = request.get_unrecognized_field_info('hiddenProperty')
if mykey != 'my_supersecret_key':
    raise endpoints.UnauthorizedException('No, you dont!')
Hope this puts you on the right track
The documentation is only for the client. What I need is documentation
on how to provide Service Account functionality on the server side.
This could mean a couple of different things, but I'd like to address what I think the question is asking. If you only want your service account to access your service, then you can just add the service account's clientId to your @Api/@ApiMethod annotations, build a GoogleCredential, and invoke your service as you normally would. Specifically...
In the Google Developers Console, create a new service account. This will create a .p12 file which is automatically downloaded. This is used by the client in the documentation you linked to. If you can't keep the .p12 secure, then this isn't much more secure than a password. I'm guessing that's why this isn't explicitly laid out in the Cloud Endpoints documentation.
Add the CLIENT ID displayed in the Google Developers Console to the clientIds in your @Api or @ApiMethod annotation:
import com.google.appengine.api.users.User;

@ApiMethod(name = "doIt", scopes = { Constants.EMAIL_SCOPE },
        clientIds = { "12345678901-12acg1ez8lf51spfl06lznd1dsasdfj.apps.googleusercontent.com" })
public void doIt(User user) { // by convention, add a User parameter to the existing params
    // if no client id is passed, or the oauth2 token doesn't
    // match your clientId, then user will be null and the dev server
    // will print a warning message like this:
    // WARNING: getCurrentUser: clientId 1234654321.apps.googleusercontent.com not allowed
    // ...
}
You build the client the same way you would with the unsecured version, the only difference being that you create a GoogleCredential object to pass to your service's MyService.Builder.
HttpTransport httpTransport = new NetHttpTransport(); // or build AndroidHttpClient on Android however you wish
JsonFactory jsonFactory = new JacksonFactory();

// assuming you put the .p12 for your service account
// (from the developer's console) on the classpath;
// when you deploy you'll have to figure out where you are really
// going to put this and load it in the appropriate manner
URL url = getClass().getResource("/YOURAPP-b12345677654.p12");
File p12file = new File(url.toURI());

GoogleCredential.Builder credentialBuilder = new GoogleCredential.Builder();
credentialBuilder.setTransport(httpTransport);
credentialBuilder.setJsonFactory(jsonFactory);
// NOTE: use the service account EMAIL (not the client id)
credentialBuilder.setServiceAccountId("12345678901-12acg1ez8lf51spfl06lznd1dsasdfj@developer.gserviceaccount.com");
credentialBuilder.setServiceAccountScopes(Collections.singleton("https://www.googleapis.com/auth/userinfo.email"));
credentialBuilder.setServiceAccountPrivateKeyFromP12File(p12file);
GoogleCredential credential = credentialBuilder.build();

// Now invoke your generated client the same way you would the unsecured version,
// except the builder takes our google credential from above as the last argument.
MyService.Builder builder = new MyService.Builder(httpTransport, jsonFactory, credential);
builder.setApplicationName("APP NAME");
builder.setRootUrl("http://localhost:8080/_ah/api");
final MyService service = builder.build();

// invoke the service the same way as the unsecured version
I am implementing Cloud Endpoints with a Python app that uses custom authentication (GAE Sessions) instead of Google Accounts. I need to authenticate the requests coming from the Javascript client, so I would like to have access to the cookie information.
Reading this other question leads me to believe that it is possible, but perhaps not documented. I'm not familiar with the Java side of App Engine, so I'm not quite sure how to translate that snippet into Python. Here is an example of one of my methods:
class EndpointsAPI(remote.Service):

    @endpoints.method(Query_In, Donations_Out, path='get/donations',
                      http_method='GET', name='get.donations')
    def get_donations(self, req):
        # Authenticate request via cookie
where Query_In and Donations_Out are both ProtoRPC messages (messages.Message). The parameter req in the function is just an instance of Query_In, and I didn't find any properties related to HTTP data; however, I could be wrong.
First, I would encourage you to try to use OAuth 2.0 from your client as is done in the Tic Tac Toe sample.
Cookies are sent to the server in the Cookie header, and these values are typically set in the WSGI environment under keys 'HTTP_...', where ... corresponds to the header name:
http = {key: value for key, value in os.environ.iteritems()
        if key.lower().startswith('http')}
For cookies, os.getenv('HTTP_COOKIE') will give you the header value you seek. Unfortunately, this doesn't get passed along through Google's API Infrastructure by default.
UPDATE: This has been enabled for Python applications as of version 1.8.0. To send cookies through, specify the following:
from google.appengine.ext.endpoints import api_config

AUTH_CONFIG = api_config.ApiAuth(allow_cookie_auth=True)

@endpoints.api(name='myapi', version='v1', auth=AUTH_CONFIG, ...)
class MyApi(remote.Service):
    ...
This is a (not necessarily comprehensive) list of headers that make it through:
HTTP_AUTHORIZATION
HTTP_REFERER
HTTP_X_APPENGINE_COUNTRY
HTTP_X_APPENGINE_CITYLATLONG
HTTP_ORIGIN
HTTP_ACCEPT_CHARSET
HTTP_ORIGINALMETHOD
HTTP_X_APPENGINE_REGION
HTTP_X_ORIGIN
HTTP_X_REFERER
HTTP_X_JAVASCRIPT_USER_AGENT
HTTP_METHOD
HTTP_HOST
HTTP_CONTENT_TYPE
HTTP_CONTENT_LENGTH
HTTP_X_APPENGINE_PEER
HTTP_ACCEPT
HTTP_USER_AGENT
HTTP_X_APPENGINE_CITY
HTTP_X_CLIENTDETAILS
HTTP_ACCEPT_LANGUAGE
For the Java people who land here: you need to add the following annotation in order to use cookies in endpoints:
@Api(auth = @ApiAuth(allowCookieAuth = AnnotationBoolean.TRUE))
source
(Without that it will work on the local dev server but not on the real GAE instance.)