How to set up a database connection from environment in Rocket?

I have the following working database connection setup for my Rocket app:
main.rs:
#[database("my_db")]
pub struct DbConn(diesel::PgConnection);
Rocket.toml:
[global.databases]
my_db = { url = "postgres://user:pass@localhost/my_db" }
I would like to set the username, password and database name from the environment. I expected it to be something like ROCKET_MY_DB=postgres://user:pass@localhost/my_db, but it didn't work. I was unable to find a relevant database example for Rocket.

After a lot of experimenting (there are no specific instructions for databases, and I expected something closer to the standard approach of ENV_PARAM=conn_string, e.g. as in Diesel) I finally figured out that I need to place a complex object into the environment.
The solution is this ugly string:
ROCKET_DATABASES={my_db={url="postgres://user:pass@localhost/my_db"}}
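For instance, in a shell you might export it before launching the app (single quotes keep the braces and the inner double quotes intact; the credentials are the placeholders from above):
export ROCKET_DATABASES='{my_db={url="postgres://user:pass@localhost/my_db"}}'
./your_application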

I would like to set the username, password and database name from the environment. I didn't find a relevant example for Rocket.
From the front page of the docs:
Rocket and Rocket libraries are configured via the Rocket.toml file and/or ROCKET_{PARAM} environment variables. For more information on how to configure Rocket, see the configuration section of the guide as well as the config module documentation.
The example, just following the link:
All configuration parameters, including extras, can be overridden through environment variables. To override the configuration parameter {param}, use an environment variable named ROCKET_{PARAM}. For instance, to override the "port" configuration parameter, you can run your application with:
ROCKET_PORT=3721 ./your_application
🔧 Configured for development.
=> ...
=> port: 3721

H2 database "if" query grammar [duplicate]

I have some problems using a schema.sql file to create my SQL schema when executing a JUnit test, because the schema contains MySQL-specific expressions. I have to add MODE=MYSQL to the H2 URL.
For example, something like this:
jdbc:h2:mem:testd;MODE=MYSQL
But Spring Boot automatically uses the URL defined in the enum
org.springframework.boot.autoconfigure.jdbc.EmbeddedDatabaseConnection, which is
jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE.
I have tried similar approaches to get this to work, but Spring does not take the spring.datasource.url=jdbc:h2:mem:testdb;MODE=MYSQL from my test-application.properties. All other settings from my test-application.properties have been read successfully.
If I let Spring/Hibernate create the schema (without the schema.sql file) from the javax.persistence annotations in my entities, everything works fine.
Is there a simple way to add a mode?
Set
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MYSQL
in application-test.properties, plus
@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@ActiveProfiles("test")
on the test class
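For reference, a minimal sketch of a test class wired up this way (the class and test names are made up; only the annotations and the properties above come from the answer):
import static org.junit.Assert.assertNotNull;

import javax.sql.DataSource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@ActiveProfiles("test")
public class SchemaSqlTest {

    @Autowired
    private DataSource dataSource;

    @Test
    public void schemaSqlAppliedInMySqlMode() {
        // If MODE=MYSQL were missing, a schema.sql with MySQL-specific
        // statements would make the context (and this test) fail to start.
        assertNotNull(dataSource);
    }
}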
I was having this same issue. It would not pick up the URL when running tests. I'm using Flyway to manage my scripts. I was able to get all of these working together by following these few steps.
Created a V1__init.sql script in src/test/resources/db/migration so that it is the first script run by Flyway.
SET MODE MYSQL; /* another h2 way to set mode */
CREATE SCHEMA IF NOT EXISTS "public"; /* required due to issue with flyway --> https://stackoverflow.com/a/19115417/1224584*/
Updated application-test.yaml to include the schema name public:
flyway:
  schemas: public
Ensure the test specifies the profile: @ActiveProfiles("test")
I have tried similar approaches to get this to work, but Spring does not take the spring.datasource.url=jdbc:h2:mem:testdb;MODE=MYSQL from my test-application.properties
Did you try appending these parameters instead of rewriting the existing ones?
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MYSQL
All other settings from my test-application.properties have been read successfully.
I thought that file should be named application-test.properties.
I was able to run it with this config:
# for integration tests use H2 in MySQL mode
spring.datasource.url=jdbc:h2:mem:testdb;DATABASE_TO_LOWER=TRUE;MODE=MySQL;
spring.jpa.database-platform=org.hibernate.dialect.MariaDBDialect
The main trick here is to force Hibernate to generate SQL for the MariaDB dialect, because otherwise Hibernate tries to use the H2 dialect while H2 is already expecting MySQL-like commands.
I also tried the newer MariaDB103Dialect for MariaDB 10.3, but it didn't work properly.
You need to set MySQL mode on H2 and disable replacement of the datasource URL for the embedded database:
Modify application-test.yaml
spring:
  datasource:
    url: jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false;MODE=MYSQL
  test:
    database:
      replace: NONE

Shiro how to secure the data source password

I have been exploring Apache Shiro with Zeppelin and so far have been able to make authentication work with JdbcRealm, but one thing that is not going well is having to give the data source password as plain text.
Is there a way to avoid that?
My shiro.ini looks like:
[main]
dataSource = org.postgresql.ds.PGPoolingDataSource
dataSource.serverName = localhost
dataSource.databaseName = dp
dataSource.user = dp_test
dataSource.password = Password123
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
pm.passwordService = $ps
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealmCredentialsMatcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
jdbcRealm.dataSource = $dataSource
jdbcRealm.credentialsMatcher = $pm
shiro.loginUrl = /api/login
[roles]
admin = *
[urls]
/** = authc
Is there a way to avoid giving the data source password as plain text, i.e.
dataSource.password = Password123?
I would like to give something like:
$shiro1$SHA-256$500000$YdUEhfDpsx9KLGeyshFegQ==$m+4wcq4bJZo1HqDAGECx50LcEkRZI0zCyq99gtRqZDk=
Yes, there is a way, but there will still be a password lying around somewhere, because by its nature Shiro needs to know the password.
Why Hashing does not work
You posted
something like: $shiro1$SHA-256[…]
This is a hash, and thus it is not reversible. There is no way Shiro could log into the datasource using this string.
Container managed datasources
The best approach I can recommend at this point is to have a container managed resource. A container here refers to a (web) application server, such as Tomcat, OpenLiberty or WildFly.
For your use case, try looking into the following:
Extend org.apache.shiro.realm.jdbc.JdbcRealm or AuthorizingRealm
Add the JPA API to your module and inject a persistence context like so:
@PersistenceContext
EntityManager entityManager;
Override the methods
protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token)
protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals)
… to load from your managed datasource instead.
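A rough sketch of what such a realm might look like, assuming JPA with a container-managed persistence unit; the table and column names (users, user_roles, password, role_name) are placeholders for your own schema, not Shiro APIs:
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.AuthenticationInfo;
import org.apache.shiro.authc.AuthenticationToken;
import org.apache.shiro.authc.SimpleAuthenticationInfo;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.authz.SimpleAuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.subject.PrincipalCollection;

public class ContainerManagedRealm extends AuthorizingRealm {

    // Injected by the container from a persistence unit that points at the
    // server-managed datasource, so no password appears in shiro.ini.
    @PersistenceContext
    EntityManager entityManager;

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token)
            throws AuthenticationException {
        String username = ((UsernamePasswordToken) token).getUsername();
        String storedCredentials = (String) entityManager
                .createNativeQuery("SELECT password FROM users WHERE username = :name")
                .setParameter("name", username)
                .getSingleResult();
        return new SimpleAuthenticationInfo(username, storedCredentials, getName());
    }

    @Override
    @SuppressWarnings("unchecked")
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
        String username = (String) principals.getPrimaryPrincipal();
        List<String> roles = entityManager
                .createNativeQuery("SELECT role_name FROM user_roles WHERE username = :name")
                .setParameter("name", username)
                .getResultList();
        SimpleAuthorizationInfo info = new SimpleAuthorizationInfo();
        roles.forEach(info::addRole);
        return info;
    }
}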
Drawbacks of this approach:
You just delegated datasource login to your container / application server. The server is still facing the same problems. E.g. with OpenLiberty, you will still need to store a master key for an encrypted (not hashed) password somewhere, and thus Liberty will do exactly this.
Use another configuration source
Instead of using a shiro.ini file, you could also write your own environment loader. You could request the file from an IP-restricted web service or a cryptographic hardware device.
Always a goal: restrict the environment
You should always restrict the environment.
E.g. create a user who can install, but not run, your application and who cannot read the logs (call it setup-user or so).
Create another user who can start the application, read (but not modify) configuration files, and write logs, called run-user.
Restrict access to configurations and logs for all other users on that system.
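One way this could be sketched on a Linux host (user names and paths are only examples):
useradd --system setup-user
useradd --system run-user
chown -R setup-user:run-user /opt/myapp    # setup-user owns the installation
chmod -R 750 /opt/myapp                    # run-user may read/execute, everyone else gets nothing
install -d -o run-user -g run-user -m 700 /var/log/myapp   # only run-user can write and read logs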
Getting involved
If you have other needs, feel welcome to discuss other solutions on the shiro mailing lists.

Environment variable GOOGLE_APPLICATION_CREDENTIALS must contain string

In my serverless.yml file, I'm trying to add an environment variable called GOOGLE_APPLICATION_CREDENTIALS which points to my service account credentials JSON file, but when it is added to the serverless file I get the error: Environment variable GOOGLE_APPLICATION_CREDENTIALS must contain string.
I tried adding the environment variable GOOGLE_APPLICATION_CREDENTIALS using the AWS CLI and it worked fine. But I want to add the environment variable from the serverless file.
I've tried the methods below, but none of them seems to work:
environment:
  GOOGLE_APPLICATION_CREDENTIALS: '${file(./serviceAccountCreds.json)}'
environment:
  GOOGLE_APPLICATION_CREDENTIALS: "${file(./serviceAccountCreds.json)}"
environment:
  GOOGLE_APPLICATION_CREDENTIALS: ${file(./serviceAccountCreds.json)}
My use case is that I need to load the Google application credentials to call GCP APIs from an AWS Lambda. I've read answers about setting the environment variable for Google Cloud Functions, but they don't seem to help with AWS functions. I'm not sure whether that support is generic or applies only to GCP functions.
Edit: I also tried setting the environment variable at run time via process.env.GOOGLE_APPLICATION_CREDENTIALS and that worked. But this still leaves me with the question of whether Serverless supports setting environment variables to JSON files as a whole.
Links I followed:
https://www.serverless.com/framework/docs/providers/aws/guide/variables
https://github.com/serverless/serverless-google-cloudfunctions/issues/122
https://github.com/serverless/serverless-google-cloudfunctions/pull/123
Try setting a variable like this:
GOOGLE_APPLICATION_CREDENTIALS=$(cat ./serviceAccountCreds.json)
which will set the value of the variable to whatever content is in your credentials JSON file.
If the value has to contain only the path to a JSON file, then try this:
GOOGLE_APPLICATION_CREDENTIALS=./serviceAccountCreds.json
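An untested sketch of how that path variant might look in serverless.yml; the important point is that the JSON file itself must be packaged with the function so the path resolves inside the Lambda environment (the file name is from the question, everything else is illustrative):
provider:
  name: aws
  environment:
    GOOGLE_APPLICATION_CREDENTIALS: ./serviceAccountCreds.json
package:
  include:
    - serviceAccountCreds.json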
You may also find this question interesting (very similar case).
And here's some discussion on how to pass a variable from a file in Bash.
Lastly - some very basic examples on how to work with variables.

Is it possible to configure `.devcontainer` settings globally?

My development workflow uses one docker container spun up through Docker Compose for all projects.
As best I can tell, the Remote - Containers extension only allows a configuration file to be created per project.
Having the ability to set up the extension either through a global devcontainer.json or in the User settings would be ideal so that:
It can use a docker-compose.yml file external to the project files when attaching to a running container
A user other than root can be set when attaching to the container through the extension
It can take advantage of the fact that Docker Compose allows you to attach using the container name instead of the ID, as I rebuild the containers often, which causes previous workspaces to fail when attempting to reconnect to the remote.
Perhaps I'm missing something obvious, but I've read through the documentation and also went to the User settings to look through the autocomplete options. I kind of assumed that since devcontainer.json is similar in nature to launch.json, it too could be set in User settings, but it was not an available option like "launch" is.

Exporting a Typo3 site bit by bit

(edit: I'm leaving all the mistaken assumptions in just in case someone else makes the same mistakes)
I have an ancient Typo3 3.8.1 site on a remote server. I don't have access to that server, and the team in charge of maintaining the site doesn't know who to contact to get access to the server. I do have the admin rights on that site, though. (edit: no I don't. oops.)
This is what I see in the (not) admin menu:
I'm not sure if this version supports extensions, I can't find an extension manager anywhere. (because I'm not an admin)
I want to export the site so I can host it on a server on my own domain instead. The problem is the export file is too large, I can't download it. Will I destroy the directory structure if I export a bunch of pages at a time?
If you have admin access to the backend you can try to install Quixplorer, a file manager. Using it you can try to zip the folders in the main directory (typo3, typo3conf, fileadmin etc.) one by one and download them via the browser.
It's important to download and remove typo3conf.zip from the server as soon as possible, because it contains sensitive data.
Additionally, you can install the phpMyAdmin extension (search the repository) if you don't have another MySQL client.
Edit:
If you can't use Quixplorer, the only way is... to write your own extension and upload it via the Extension Manager; there you'll need to try to perform primitive file system operations like:
(PHP)
// zip the typo3conf directory recursively so it can be downloaded in one piece
system('zip -r t3c.zip typo3conf/');
Sometimes the server allows more memory and execution time than the T3D export requests. So, if you can change PHP files on that server, try changing typo3/sysext/impexp/class.tx_impexp.php - search for ini_set and change those settings. If the server allows it, you can then create bigger t3d files.
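Illustrative only - the exact lines in class.tx_impexp.php vary by version; the idea is to raise the limits where the export code calls ini_set(), e.g.:
ini_set('memory_limit', '512M');        // allow the export to use more memory
ini_set('max_execution_time', '3600');  // give the export more time to run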
And you could try some shell extensions to get your hands on that server:
http://typo3.org/extensions/repository/view/phpshell
http://typo3.org/extensions/repository/view/mw_shell
http://typo3.org/extensions/repository/view/shell
But to answer your initial question: you can create a couple of T3D files and import them again. Just force the UIDs when you import them - and install all needed extensions first!
