We have the Global Variable concept in TIBCO. Where do we have the same concept in MuleSoft?
That is, setting a global variable so that values can be resolved at runtime or at deployment based on the environment. Is there anything similar in MuleSoft?
There are global properties that can be set per Mule app, and also environment variables that can be set to override them and supply environment-specific properties. These can be configured individually, or environment variables can be used to load a specific property file per environment. All the details are in the documentation here: https://docs.mulesoft.com/mule-runtime/4.2/configuring-properties
For every Mule app, global properties can be configured accordingly. In Mule 4 these are called "property placeholders", and their values can be supplied in a variety of ways.
So we can use the global property syntax to reference .yaml or .properties files, and create new global properties that depend on the configuration properties.
For more info, refer to:
https://docs.mulesoft.com/mule-runtime/4.2/configuring-properties
https://www.appnovation.com/blog/centralized-configuration-management-mule-applications
https://blogs.perficient.com/2017/02/02/mule-variable-scopes-and-passing-global-values-with-mule-registry/
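As a sketch of the global property syntax described above (the file name and property keys here are assumptions for illustration, not from the original posts), a Mule 4 app can load a properties file and build a new global property on top of the values it contains:

```xml
<!-- config.properties is an assumed file under src/main/resources -->
<configuration-properties file="config.properties" doc:name="Configuration properties" />
<!-- a new global property composed from the loaded configuration properties -->
<global-property name="api.baseUrl" value="http://${api.host}:${api.port}/api" />
```

Anywhere else in the app, `${api.baseUrl}` then resolves using the values from the loaded file.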
If it is global variables you are asking about, then that has more to do with Mule 3. In Mule 3 we have session, local, and global variables to work with, based on the scope of use.
In Mule 4 there is no concept of global and local variables. All variables declared in Mule 4 have global scope. This means you can access them across all flows, sub-flows, and XML files.
Note: The scope of the Mule message payload is not global.
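As an illustrative sketch of that claim (flow and variable names here are made up), a variable set in one flow remains visible in another flow invoked via flow-ref:

```xml
<flow name="mainFlow">
    <set-variable variableName="myVar" value="#['hello']" doc:name="Set myVar" />
    <flow-ref name="otherFlow" doc:name="Call otherFlow" />
</flow>
<flow name="otherFlow">
    <!-- myVar was set in mainFlow but is still accessible here -->
    <logger message="#[vars.myVar]" doc:name="Log myVar" />
</flow>
```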
All the other answers talk about property parameterization using property placeholders (a Mule 3 concept) and global configurations in global elements.
I believe you should rephrase this question. Don't call the values you pass at runtime "global variables"; global variables are an entirely different concept.
Mule 4 has carried over global variables, more generally referred to as "property placeholders", from earlier runtime versions.
This aspect of Mule ESB is used mainly for supplying values to environment-specific variables, and frequently to maintain abstraction and security:
Property Placeholders:
<smtp:outbound-endpoint user="${smtp.username}" password="${smtp.password}"/>
Global Properties:
<global-property name="smtp.host" value="smtp.mail.com"/>
<global-property name="smtp.subject" value="Subject of Email"/>
Properties files:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.1.xsd">
<context:property-placeholder location="smtp.properties"/>
<flow name="myProject_flow1">
    <logger message="${propertyFromFile}" doc:name="System Property Set in Property File"/>
</flow>
</mule>
To hold multiple properties file:
<context:property-placeholder location="email.properties,http.properties,system.properties"/>
Message Properties:
#[message.inboundProperties['Content-Type']]
System Properties:
Environment variables, from the OS or in the general case:
General: ${variableName}
From OS: <logger message="${USER}" doc:name="Environment Property Set in OS" />
Properties for globally referenced variables in CloudHub:
1. Log in to your Anypoint Platform account.
2. Go to CloudHub.
3. Either click Deploy Application to deploy a new application, or select a running application and click Manage Application.
4. Select the Properties tab in the Settings section.
Reference: https://docs.mulesoft.com/mule-runtime/{Runtime-version}/configuring-properties
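Once a property is defined on that Properties tab, it can be referenced in the app like any other placeholder. A small sketch (the property name api.env is hypothetical):

```xml
<!-- api.env is assumed to be defined on the CloudHub Properties tab -->
<logger message="Deployed environment: ${api.env}" doc:name="Log environment" />
```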
In MuleSoft we call this a runtime variable. We create files in MuleSoft called properties files, which are environment-specific. For example,
if we are deploying our MuleSoft API to the dev environment, we create a file named
"dev.properties" and add to it all the properties we need at deployment time, as below:
api.host=abc
api.port=8081
Now in your main interface, add a global configuration to read this file, as below:
<configuration-properties
doc:name="env file properties configuration"
doc:id="010e36f9-1ef3-4104-b42f-21d2d4012ef7"
file="properties/${mule_env}.properties"
doc:description="Global configuration to specify environment property files" />
Here mule_env is the environment name (dev in this case) and is read from your API deployment process (with the help of DevOps you can set it in scripts).
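The file-selection step above can be sketched in shell (the property file naming follows the dev.properties convention from this answer; how mule_env is injected depends on your deployment scripts):

```shell
# mule_env would normally be supplied by the deployment process, e.g. -Dmule_env=dev
mule_env=dev
echo "properties/${mule_env}.properties"   # → properties/dev.properties
```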
There is a statement in
https://create-react-app.dev/docs/adding-custom-environment-variables/ :
Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name.
What is meant here by exposing a private key on the machine that could have the same name?
I could get nothing from this sentence. Could you please explain this statement with an example?
Thank you in advance.
As mentioned in the docs, a React app will include the environment variables in the source code when you build the application.
Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.
Imagine that you have a React frontend and a backend service, both hosted on the same machine. Also imagine that you accidentally refer to an environment variable in your React app which contains some secret used by the backend service. Now that secret will be exposed to the world in the frontend source.
That is why an "intentionally verbose" prefix is added to the environment variables used by the React app. It forces you to be explicit about what you want exposed in the frontend.
It is intentionally verbose. Otherwise there is a risk of accidentally exposing a private key on the machine that happens to have the same name.
https://github.com/facebook/create-react-app/issues/865#issuecomment-252199527
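The filtering rule can be sketched in shell (the variable names here are examples, not from the post): only names starting with REACT_APP_, plus NODE_ENV, are embedded into the build, and everything else is dropped:

```shell
for name in REACT_APP_API_URL AWS_SECRET_ACCESS_KEY NODE_ENV; do
  case "$name" in
    # the verbose prefix is the opt-in; anything else never reaches the bundle
    REACT_APP_*|NODE_ENV) echo "embedded: $name" ;;
    *) echo "ignored: $name" ;;
  esac
done
```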
In my serverless.yml file, I'm trying to add an environment variable called GOOGLE_APPLICATION_CREDENTIALS which points to my service account credentials JSON file, but when it is added to the serverless file I get the error: Environment variable GOOGLE_APPLICATION_CREDENTIALS must contain string.
I tried adding the environment variable GOOGLE_APPLICATION_CREDENTIALS using the AWS CLI and it worked fine, but I want to add the environment variable from the serverless file.
I've tried the methods below, but none of them seem to work:
environment:
  GOOGLE_APPLICATION_CREDENTIALS: '${file(./serviceAccountCreds.json)}'
environment:
  GOOGLE_APPLICATION_CREDENTIALS: "${file(./serviceAccountCreds.json)}"
environment:
  GOOGLE_APPLICATION_CREDENTIALS: ${file(./serviceAccountCreds.json)}
My use case is that I need to load the Google application credentials to call GCP APIs from an AWS Lambda. I've read answers about setting the environment variable for Google Cloud Functions, but they don't seem to help with AWS functions; I'm not sure whether that support is generic or specific to GCP functions.
Edited: I also tried setting the environment variable at runtime via process.env.GOOGLE_APPLICATION_CREDENTIALS and that worked. But this still leaves me with the question of whether Serverless supports setting environment variables to JSON files as a whole.
Links I followed:
https://www.serverless.com/framework/docs/providers/aws/guide/variables
https://github.com/serverless/serverless-google-cloudfunctions/issues/122
https://github.com/serverless/serverless-google-cloudfunctions/pull/123
Try setting a variable like this:
GOOGLE_APPLICATION_CREDENTIALS=$(cat ./serviceAccountCreds.json)
which will set the value of the variable to whatever content is in your credentials JSON file.
If the value has to contain only the path to a JSON file, then try this:
GOOGLE_APPLICATION_CREDENTIALS=./serviceAccountCreds.json
You may also find this question interesting (a very similar case).
And here's some discussion on how to pass a variable from a file in Bash.
Lastly - some very basic examples on how to work with variables.
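If the variable only needs to hold a path (the second option above), a quick shell sanity check that the path resolves to a readable file can look like this (the /tmp path and file contents are assumptions for illustration):

```shell
# stand-in credentials file for the sketch
printf '{"type":"service_account"}' > /tmp/serviceAccountCreds.json
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/serviceAccountCreds.json
# confirm the library would be able to read the file at that path
test -r "$GOOGLE_APPLICATION_CREDENTIALS" && echo "credentials file found"
```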
I have following working database connection setup for my Rocket app:
main.rs:
#[database("my_db")]
pub struct DbConn(diesel::PgConnection);
Rocket.toml:
[global.databases]
my_db = { url = "postgres://user:pass@localhost/my_db" }
I would like to set the username, password, and database name from the environment. I expected it to be something like ROCKET_MY_DB=postgres://user:pass@localhost/my_db, but it didn't work, and I was unable to find a relevant database example for Rocket.
After a lot of experiments (as there are no specific instructions for the database, and I expected something closer to the standard approach of ENV_PARAM=conn_string, as in Diesel), I finally figured out that I need to place a complex object into the environment.
The solution is this ugly string:
ROCKET_DATABASES={my_db={url="postgres://user:pass@localhost/my_db"}}
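When setting this from a shell, single quotes keep the braces and the inner double quotes intact (the connection string below uses placeholder credentials):

```shell
# single quotes stop the shell from interpreting the braces and inner quotes
export ROCKET_DATABASES='{my_db={url="postgres://user:pass@localhost/my_db"}}'
echo "$ROCKET_DATABASES"   # → {my_db={url="postgres://user:pass@localhost/my_db"}}
```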
I would like to set username, password and a database name from the environment. Didn't find relevant example for Rocket.
From the front page of the docs:
Rocket and Rocket libraries are configured via the Rocket.toml file and/or ROCKET_{PARAM} environment variables. For more information on how to configure Rocket, see the configuration section of the guide as well as the config module documentation.
Example (just follow the link):
All configuration parameters, including extras, can be overridden through environment variables. To override the configuration parameter {param}, use an environment variable named ROCKET_{PARAM}. For instance, to override the "port" configuration parameter, you can run your application with:
ROCKET_PORT=3721 ./your_application
🔧 Configured for development.
=> ...
=> port: 3721
In our Apache Camel project, we are consuming a rest service which requires a .jks file.
Currently we are storing the .jks file in a physical location and referring to that in the Camel project. But that can't always be used, as we may have access only to the Fuse Management Console and not to a physical location accessible from the management console.
Another option is to store the key file within the bundle, but that can't be employed because the certificate may change based on the environment.
In this scenario, what would be a better place to store the key file?
Note
One option I thought about was storing the .jks file within a Fabric profile, but I couldn't find any way to do that. Is it possible to store a file in a Fabric profile?
What about storing the .jks in a Java package and reading it as a resource?
Your bundle imports org.niyasc.jks and loads the file from there. The bundle need not change between environments.
Then you write two bundles that provide the same package org.niyasc.jks, one with the production file and one with the test file.
Production env:
RestConsumerBundle + ProductionJksProviderBundle
Test env:
RestConsumerBundle + TestJksProviderBundle
Mind that deploying both of them may be possible, in which case RestConsumerBundle will be bound to whichever was deployed first. You can play with OSGi directives to give priority to one of them.
EDIT:
A more elegant solution would be to create an OSGi service which exposes the .jks as an InputStream or byte[]. You can even use JNDI if you feel like it.
From Blueprint declare the dependency as mandatory, so your bundle will not start if the service is not available.
<!-- RestConsumerBundle -->
<reference id="jksProvider"
interface="org.niyasc.jks.Provider"
availability="mandatory"/>
Storing the JKS files in the Fuse profile could be a good idea.
If you have a broker profile created, such as "mq-broker-Group.BrokerName", take a look at it via the Fuse Web Console.
You can then access the jks file as a resource in the property file, as in "truststore.file=profile:truststore.jks"
And also check the "Customizing the SSL keystore.jks and truststore.jks file" section of this chapter:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/mq#MQ-BrokerConfig
It has some good pointers.
Regarding how to add files to a Fabric profile, you can store any resources under src/main/fabric8 and use the fabric8 Maven plugin. For more, see:
https://fabric8.io/gitbook/mavenPlugin.html
-Codrin
I am using the Spring SAML extension with WSO2 IS as the IdP. Currently I set the entityBaseURL property for the MetadataGenerator inside the Spring XML config. For now, this works fine going against a single server since the entityBaseURL matches the servername. Since I have several environments (dev, test, and UAT) I need to programmatically set the entityBaseURL because each environment has a different server name and that servername won't match the entityBaseURL prop. It is undesirable to rebuild the WAR artifact for every environment. We keep our config for each environment in a database. So settings and properties specific to a particular stack of machines can be read at runtime. I would like to read the servername for the entityBaseURL property from our DB and set it programmatically. Should I replace the MetadataGenerator with my own class? It is unclear to me where the entityBaseURL property is initialized.
I have found a workable path to solve this. I ended up extending the MetadataGeneratorFilter class and overriding the getDefaultBaseURL method. The default implementation of getDefaultBaseURL computes the value from properties of the HTTP request. I changed this behavior to do a DB lookup and return the value stored in the database. I could be short-sighted here, but this does work. I was able to verify that the AssertionConsumerServiceURL attribute of the SAML AuthnRequest is getting set properly. The generated metadata is also correct.
Note: the entityBaseURL property can still be set manually in the Spring config. If it is, then the value returned from the getDefaultBaseURL method is not used.