How to set up CouchDB with JWT auth?

I'm trying to set up JWT authentication for CouchDB, so I can use my current API tokens to authenticate against CouchDB.
But the docs don't seem to provide enough information to set this up.
I don't understand what the payload of the JWT needs to contain to identify the user, or how to configure the JWT secret.
Is there a simple example or tutorial out there showing how to do this correctly?

This thread may be helpful:
https://github.com/apache/couchdb/discussions/2947
Edit:
OK, I found a configuration that worked for me. This solution won't be suitable for anything more than a test CouchDB instance.
1. Configure local.ini (Docker image)
1.1 -> shell into your Docker container: docker exec -it <container_name> bash
1.2 -> install vim for convenience and disable visual mode:
$ apt-get update
$ apt-get install vim
$ echo "set mouse-=a" >> ~/.vimrc
1.3 -> update local.ini
$ vi /opt/couchdb/etc/local.ini
In the [chttpd] section, add the line:
authentication_handlers = {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
At the very end of the file, add the [jwt_keys] config (the values are base64-encoded HMAC secrets):
[jwt_keys]
hmac:_default = aGVsbG8=
hmac:foo = aGVsbG8y
Restart your container.
2. Configure Postman
The bearer token for the _default HMAC key is:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1c2VyXzEiLCJleHAiOjE1OTI2MTEyMDB9.Y9jNgSeSBl54V2MHg1hXhivyZsdXTeiAVJR2DSlF6LQ
Put it into Postman as a bearer token and issue the following GET request:
http://localhost:5984/_session
You should see something like:
{"ok":true,"userCtx":{"name":"user_1","roles":[]},"info":{"authentication_handlers":["jwt","cookie","default"],"authenticated":"jwt"}}
Now go to the CouchDB UI in your browser and log in as admin:
http://127.0.0.1:5984/_utils/#login
Then click the "lock" icon next to the database you want user_1 to have privileges on, and update the permissions accordingly.
3. Check
Check whether user_1, authenticating via JWT, has permissions on the database by issuing an appropriate request, for example:
http://localhost:5984/campaigns
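If you prefer the command line to Postman for this check, a minimal curl equivalent (using the same bearer token and database name as above):
$ TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1c2VyXzEiLCJleHAiOjE1OTI2MTEyMDB9.Y9jNgSeSBl54V2MHg1hXhivyZsdXTeiAVJR2DSlF6LQ"
$ curl -H "Authorization: Bearer $TOKEN" http://localhost:5984/_session
$ curl -H "Authorization: Bearer $TOKEN" http://localhost:5984/campaigns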

Related

Deploy ReactJs Azure Static Web App based on Azure DevOps git repo via Azure CLI

I'm working on a continuous delivery pipeline for a UI project built in ReactJS, using Azure Static Web Apps.
I want to create and deploy the static web app to Azure based on the git repo located in Azure DevOps.
The reason behind this is that I see a huge opportunity to create a pull request environment pipeline for the system I work on every day using static web apps, which seem to be a super cheap and fast solution. The pipeline would then allow testing pull request changes in isolation before releasing to the DEV, QA, ... and PROD environments.
Anyway, straight to the point.
The official Microsoft documentation provides only an example of how to do this for GitHub repo but I cannot find any info on how this can be achieved when using Azure DevOps git repo:
az staticwebapp create \
-n my-first-static-web-app \
-g <RESOURCE_GROUP_NAME> \
-s https://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/my-first-static-web-app \
-l <LOCATION> \
-b main \
--app-artifact-location "build" \
--token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN>
I assumed that az staticwebapp create would work analogously for Azure DevOps.
I also assumed that the equivalent of YOUR_GITHUB_PERSONAL_ACCESS_TOKEN for Azure DevOps would be a personal access token generated in Azure DevOps.
When I try to run the following command:
az staticwebapp create -l westus2 -n appNameTest1 -g TestPrEnvResourceGroup -s "https://dev.azure.com/myOrganisationName/myProjectName/_git/myRepoName" -b "main" --token "generatedTokenValuefwfsdgsgsd"
I'm getting the following exception:
Command group 'staticwebapp' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Operation returned an invalid status 'Bad Request'
Also, I don't think it should matter, but the TestPrEnvResourceGroup resource group was created in the UK West location.
The error doesn't tell me much, e.g. whether the token or some other parameter is wrong...
Any ideas?
Cheers
Have you tried this way?
You need to generate a deployment token from the Azure Static Web App and reference it in the pipeline YAML.
I tried the same for an Angular app and it worked pretty well:
azure_static_web_apps_api_token: $(deployment_token)
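If you want to fetch that deployment token from the CLI rather than the portal, here is a minimal sketch; the app and resource group names are the ones from the question, and the --query path is based on the output shape I have seen, so it may differ across CLI versions:
$ az staticwebapp secrets list \
    --name appNameTest1 \
    --resource-group TestPrEnvResourceGroup \
    --query "properties.apiKey" -o tsv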
UPDATE: As per the GitHub issue, this is currently not supported:
"We currently don't support automatically creating Azure DevOps pipelines. This is the supported path for using Azure DevOps: https://learn.microsoft.com/en-us/azure/static-web-apps/publish-devops"
You can vote for the feature here.

Not able to install the GUI package (odl-dlux-all) for the OpenDaylight Carbon version

I am not able to install the package odl-dlux-all on the Ubuntu 16.04 machine. Following is the error message:
Error executing command: Can't install feature odl-dlux-all/0.0.0:
null
VM : Ubuntu 16.04
Opendaylight version : Carbon
What is the issue?
Should I install gnome-desktop for this?
Prat,
This is what I have found. It looks like you and I were in the same boat; I ran into this issue as well. After additional searching, I found that ODL's website has a guide for the DLUX features.
These are the features I installed, and they got me where I needed to be:
odl-dlux-core
odl-dluxapps-nodes
odl-dluxapps-topology
odl-dluxapps-yangui
odl-dluxapps-yangvisualizer
odl-dluxapps-yangman
Be sure to enter them as separate commands, with feature:install before each of them, as shown below.
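For example, from the Karaf shell (same prompt as shown further down; the feature names are the ones listed above):
opendaylight-user@root>feature:install odl-dlux-core
opendaylight-user@root>feature:install odl-dluxapps-nodes
opendaylight-user@root>feature:install odl-dluxapps-topology
opendaylight-user@root>feature:install odl-dluxapps-yangui
opendaylight-user@root>feature:install odl-dluxapps-yangvisualizer
opendaylight-user@root>feature:install odl-dluxapps-yangman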
I found the guide on ODL's website HERE.
I hope this helps!! :)
The way OpenDaylight's DLUX features are structured was changed in Carbon. Application-specific logic was broken out into odl-dluxapps-* Karaf features for easier maintenance.
Install and start OpenDaylight:
sudo dnf install -y http://cbs.centos.org/repos/nfv7-opendaylight-70-release/x86_64/os/Packages/opendaylight-7.0.0-1.el7.noarch.rpm
sudo systemctl start opendaylight
Connect to the Karaf shell (it may take a moment for Karaf's SSH server to come up):
ssh -p 8101 karaf@localhost
# password: karaf
See the available DLUX features:
opendaylight-user@root>feature:list | grep dluxapps
odl-dluxapps-yangutils
odl-dluxapps-yangui
odl-dluxapps-topology
odl-dluxapps-yangvisualizer
odl-dluxapps-applications
odl-dluxapps-yangman
odl-dluxapps-nodes
features-dluxapps
Install the ones you're interested in:
opendaylight-user@root>feature:install odl-dluxapps-topology
In a browser on the same machine:
http://localhost:8181/index.html#/yangui/index
Log in with admin/admin and things should work.
Here are the DLUX docs.
Note that DLUX isn't widely used by ODL developers, and isn't packaged as a product by vendors. Most people use the REST API directly to query OpenDaylight. There are REST API examples in the NetVirt Postman Collection, as an example.
It is true: you have to install all DLUX features manually.
The change compared to ODL Boron is that Carbon removed the odl-dlux-all feature, and in Carbon odl-dlux-core installs only the core and nothing more. I always got a gray login page in the DLUX web login; there was nothing, only a blank gray page.
I suggest you use the command: feature:list | grep dlux
This will give you the complete list of available DLUX features, and you have to install all of them.
After you finish installing DLUX, use the same command with the -i parameter, which will show you only the successfully installed features, so you can see the result:
feature:list -i | grep dlux
Don't forget that after installation DLUX needs a few minutes to be fully ready. If you try to log in to DLUX during this time, you can get a 403 error, and the login page will not accept the credentials even if they are correct. So be patient and wait.
+----------------------------------------------------------------------+
DOCUMENTATION OF OPENDAYLIGHT
IS HORRIBLE AND SOMETIMES SIMPLY WRONG
+----------------------------------------------------------------------+

Mesosphere installation PermissionError: /genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
Then it output:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use.Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root). Then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any ideas, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
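A minimal sketch of that edit from the shell (assuming the standard /etc/selinux/config layout):
# set SELINUX=disabled in the config file, then reboot
$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
$ sudo reboot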
Ensure SELinux is disabled with the getenforce command:
$ getenforce
Disabled
zhe.
Correctly installing the enterprise edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you some pointers to succeed in your current task.
Run the script as root, or as a regular user via sudo: sudo bash dcos_generate_config.ee.sh
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put it in there before running the script. You should change the values inside <> to your specific configuration. If you need more help for your specific case, send me an email at infofs2 at gmail.com.

GAE Upload Download Data / Import Data to localhost for testing on my dev server

I needed to test some changes on my local dev server before pushing to production. Doing so required having the full dataset on my local machine.
A colleague directed me to:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1
I downloaded the data using an administrator's username and password, but unfortunately, I was unable to upload the data to my localhost "dev" app engine server.
I ran this command from the command line:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
Where:
- 9080 was my app port on my localhost copy of the app
- I was running this command from my app directory
- the downloaded data was stored in the relative directory ../data/data1.dat
Received this error:
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: app "dev~appname" cannot access app "appname"'s data
UPDATE: It seems that the answer was as simple as adding the following to my upload_data call:
--application="dev~appname"
Thanks @DavidBennett.
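For reference, the full command with that flag added to the original call should look like this:
appcfg.py upload_data --application="dev~appname" --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./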
ORIGINAL ANSWER: (which also works)
After a ton of searching on SO and code.google.com, the solution I found that worked was a comment on this question:
devappserver2, remote_api, and --default_partition
I used my original command as described in the question:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
The username and password I entered when prompted were my app's username (in my case, my email) and the corresponding password. (If that doesn't work you might want to try blank or test@example.com based on other comments I've read, but I have not tested that theory.)
I also restarted my app engine with the following flag: (don’t forget to remove the flag the next time you restart the server) (You might want to try without using this flag, since I can’t confirm that it affects anything - I’m including it here, since it was a setting that I used.)
--clear_datastore=yes
The commenter recommends deleting “dev~” in your local server code on line 84 of this file:
google/appengine/tools/devappserver2/application_configuration.py, line 84
Where:
that base directory 'google' is located inside of:
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/
assuming your GoogleAppEngineLauncher.app directory is in your Applications directory on your Mac
IMPORTANT: Restart your local app engine server for the changes to take effect.
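An alternative to editing the SDK source, based on the flag named in the linked question (I have not verified it on every SDK version), is to start the dev server with an empty default partition:
# <path-to-your-app> is a placeholder for your application directory
dev_appserver.py --default_partition="" <path-to-your-app>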

Easy way to push postgres db to heroku in Win7? problems with db:pull and pg:transfer

Using Rails 3.2.2, finishing up my migration from sqlite to postgres 9.2.
I used the answer in this tutorial as a guide to install Postgres and got stuck on step 11, where it asks to run heroku db:pull, and I get:
Failed to connect to database: Sequel::AdapterNotFound -> LoadError: cannot load such file -- pg
I dug deeper and found that db:pull (the taps gem) is deprecated, and came across a few recommendations for pg:transfer. I installed pg:transfer, but I get the impression it may be *nix-only(?), as when I run heroku pg:transfer it returns:
Heroku client internal error. No such file or directory - .env (Errno:ENOENT)
If I do pg:transfer with -f and -t, it gives me:
'env' is not recognized as an internal or external command, operable program or batch file
which means it isn't on the path or doesn't exist as a command in Windows.
Any thoughts on above errors?
Resolved by using the pgbackups add-on, which was recommended as the replacement for taps in the Heroku docs. I used this guide and uploaded my dump to Dropbox for Heroku to pick it up.
Here's my exact list of steps and cmds:
Added pgbackups from heroku.com add-ons to my instance.
heroku pgbackups:capture DATABASE (this just backs up your heroku db)
pg_dump -h localhost -U <pg username> -Fc dbname > dbname.dump
Moved dbname.dump into a folder on my dropbox
In Dropbox, right-click on dbname.dump => "Share link"
Cancel the sharing dialogue pop-up, right-click on "Download button", Copy Link Address (Chrome)
heroku pgbackups:restore DATABASE <paste dropbox download link here>
Dropbox trickiness: don't use the file link provided by Dropbox since it's an html redirect and will cause pg:restore to fail, even though the extension ends in .dump
Instead, navigate to your dropbox page and "right-click copy link address" on the Download button. That's the address you use in your pgbackups:restore (should be something like db.dump?token=<long random string>)
A bit clunky, but got the job done. If you know a better way please let me know!
You need to make a .env file containing something like:
DATABASE_URL=postgres://localhost/myapp_development
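With that .env in the app's root, a minimal sketch of installing and running the plugin (assuming the old Heroku Toolbelt, which installs plugins from a git URL; see the first reference below):
$ heroku plugins:install https://github.com/ddollar/heroku-pg-transfer
$ heroku pg:transfer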
References:
https://github.com/ddollar/heroku-pg-transfer
https://devcenter.heroku.com/articles/config-vars#local-setup
