How to implement authentication (user/password) for the swupdate web interface

I need a way to implement some sort of authentication (user/password) for the swupdate web interface, in order to allow firmware updates by authorized users only.
I tried to place an .htaccess file in the root folder of the web interface (namely in the /www directory), but it seems to be ignored.
Does anybody have a working example of this?
Also, in the configuration file swupdate.cfg I found the following parameter for the embedded webserver:
global-auth-file
but I can't find what content this file must have, or in which format.
Thanks in advance

Create an htdigest file using Apache's htdigest tool, for example:
htdigest -c .htdigest myrealm someuser
Then run swupdate, passing the following mongoose arguments: --auth-domain myrealm --global-auth-file /path_to_your_htdigest/.htdigest
A full example:
/usr/bin/swupdate -v -H "my_hardware:1.0" -f /etc/swupdate.cfg -w "--auth-domain myrealm --global-auth-file /www/.htdigest" -p 'reboot'
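To answer the format question: the file htdigest generates is plain text with one line per user, in the form user:realm:MD5(user:realm:password). For the command above it would contain a single line like the following (the hash shown here is illustrative, not a real digest):
someuser:myrealm:6f2b0c2c4d5e8a1b9c3d7e0f1a2b3c4d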

Related

How to setup CouchDB with JWT auth?

I'm trying to set up JWT authentication for CouchDB, so I can use my current API tokens to authenticate against CouchDB.
But the docs don't seem to provide enough information to set this up.
I don't understand what the payload of the JWT needs to contain to identify the user. How do I configure the JWT secret?
Is there any simple example or tutorial out there showing how to do this correctly?
This thread may be helpful:
https://github.com/apache/couchdb/discussions/2947
Edit:
OK, I found a configuration that worked for me. This solution won't be suitable for anything more than a testing CouchDB instance.
1. configure local.ini (docker image)
1.1 -> open a shell in your Docker container: docker exec -it <container_name> bash
1.2 -> install vim for convenience and disable visual mode:
$ apt-get update
$ apt-get install vim
$ echo "set mouse-=a" >> ~/.vimrc
1.3 -> update local.ini
$ vi /opt/couchdb/etc/local.ini
in the [chttpd] section, add the line:
authentication_handlers = {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
At the very end of the file, add the [jwt_keys] config:
[jwt_keys]
hmac:_default = aGVsbG8=
hmac:foo = aGVsbG8y
2. restart your container
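A note on the key values above: they are base64-encoded HMAC secrets (aGVsbG8= decodes to "hello" and aGVsbG8y to "hello2", which is part of why this setup is only suitable for testing). To encode your own secret:
$ echo -n 'my-stronger-secret' | base64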
3. configure Postman:
the bearer token for the _default hmac key is:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1c2VyXzEiLCJleHAiOjE1OTI2MTEyMDB9.Y9jNgSeSBl54V2MHg1hXhivyZsdXTeiAVJR2DSlF6LQ
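Decoding that token shows what the payload needs to contain to identify the user: CouchDB takes the username from the sub claim. The header and payload of the token above decode to the following (the signature is presumably an HS256 MAC computed with the _default key; note the exp value is in the past, so mint a fresh token for your own tests):
{"alg":"HS256","typ":"JWT"}
{"sub":"user_1","exp":1592611200}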
Put it into Postman as a bearer token and issue the following GET request:
http://localhost:5984/_session
You should see something like:
{"ok":true,"userCtx":{"name":"user_1","roles":[]},"info":{"authentication_handlers":["jwt","cookie","default"],"authenticated":"jwt"}}
Now go to the CouchDB UI in your browser and log in as admin:
http://127.0.0.1:5984/_utils/#login
Then click the "lock" icon next to the database you want user_1 to have privileges on, and update the permissions accordingly.
4. check
Verify that user_1, authenticating via JWT, has permissions on the database by issuing an appropriate request, for example:
http://localhost:5984/campaigns

cURL error 60: SSL: no alternative certificate subject name matches target host name (inter-project communication)

So I'm still in the process of updating a Drupal 7 site to 8 using drush and ddev.
After running the import, I get an error with upgrade_d7_file.
I've tried to install a certificate using this article:
https://www.ddev.com/ddev-local/ddev-local-trusted-https-certificates/
However, I still get the error. Any ideas?
ddev exec drush migrate-import --all
ddev exec drush mmsg upgrade_d7_file
cURL error 60: SSL: no alternative certificate subject name matches target host name
'drupal7migration2.ddev.site'
(see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
(https://drupal7migration2.ddev.site//sites/default/files/Virtual%20Challenges%20%28Results%20and%20PBs%29%2020200709.xlsx)
When you want one DDEV-Local project to talk to another using https, curl on the client side has to trust the server side that you're talking to. There are two ways to do this:
1. (built-in, no changes needed): use ddev-<projectname>-web (the container name) as the target hostname in the URL. For example, in your case, use curl https://ddev-drupal7migration2-web. This hostname is already trusted among the various ddev projects.
2. (requires a docker-compose.*.yaml): if you want to use the real full FQDN of the target project (https://drupal7migration2.ddev.site in your case), then you'll need to add it as an external_link in the client project's .ddev. So add a file named .ddev/docker-compose.external_links.yaml in the client-side (migration1?) project, with these contents:
version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:drupal7migration2.ddev.site"
That will tell Docker to route requests to "drupal7migration2.ddev.site" to the ddev-router, and your container and curl trust it (it has that name in its cert list).
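After adding the file, restart the client project; you can then verify the connection from inside its web container before re-running the drush commands:
$ ddev restart
$ ddev exec curl -sI https://drupal7migration2.ddev.site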

Solr Error: Unable to create core [mycore] Caused by solr.ICUCollationField

I am trying to create a Solr core; I am using drupalvm with Vagrant and VirtualBox.
When setting up solr with this command:
sudo su - solr -c "/opt/solr/bin/solr create -c m4m -d /tmp/search_api_solr/solr-conf/7.x/"
I am getting this error:
INFO - 2018-11-05 19:21:45.804; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
ERROR: Error CREATEing SolrCore 'mycore': Unable to create core [mycore] Caused by: solr.ICUCollationField
Creating a core without specifying the -d <confdir> option is successful, but gives me some really weird errors in the Solr dashboard and the Drupal UI, which research indicates have something to do with a corrupted core.
Any help with why I am getting this error would be much appreciated. Other developers using the same Vagrant installation are running without issue.
If you create the core without the config directory, Solr will use its default configuration,
which in turn has none of the field definitions Drupal needs, and so forth.
What you need to do, if you know a little bit about Solr's structure, and if you use Solr > version 7, is:
1. go to the webapp lib directory of your Solr installation:
cd /PATH_TO_SOLR/server/solr-webapp/webapp/WEB-INF/lib
2. copy all jars from the analysis-extras folder into that WEB-INF/lib folder:
cp /PATH_TO_SOLR/contrib/analysis-extras/lib/*.jar ./
3. restart Solr the way you normally do, then create the core again, specifying your -d config directory. That's important.
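With the jars in place and Solr restarted, the create command from the question should then succeed:
sudo su - solr -c "/opt/solr/bin/solr create -c m4m -d /tmp/search_api_solr/solr-conf/7.x/"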
Hope this helps.
OR...
Save yourself the hassle and let the pros handle all of this for you with a SaaS such as https://opensolr.com.
You can create your Solr index with one click, and you need two more clicks to upload your config files, and you're done.
In my case, jars from two directories were needed:
cd /PATH_TO_SOLR
cp solr/contrib/analysis-extras/lib/*.jar solr/server/solr-webapp/webapp/WEB-INF/lib/
cp solr/contrib/analysis-extras/lucene-libs/*.jar solr/server/solr-webapp/webapp/WEB-INF/lib/
see solr/contrib/analysis-extras/README.txt
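As a quick sanity check that the ICU jars actually landed on the webapp classpath (exact filenames vary by Solr version):
ls solr/server/solr-webapp/webapp/WEB-INF/lib | grep -i icu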

issues making default collection for solr

I am trying to install Solr so that my data catalog can use it. To do so, I followed these steps, which are mentioned in the documentation of my data portal:
cd solr/solr-config
wget http://apache.crihan.fr/dist/lucene/solr/6.0.0/solr-6.0.0.tgz
tar xvfz solr-6.0.0.tgz
solr-6.0.0/bin/solr start -c -p 8984
solr-6.0.0/bin/solr create -p 8984 -c catalog_srv -d src/main/solr-cores/catalog
According to these instructions I made a directory /opt/solr/solr-config, then downloaded and unzipped Solr, and started it on port 8984. Now I don't understand how the last command works. What should the second path, src/main/solr-cores/catalog, be? I thought I should create the directories src, main, solr-cores and catalog inside my solr-config directory and then run the command, but I got errors that solrconfig.xml could not be found. After adding solrconfig.xml to /opt/solr/solr-config/src/main/solr-cores/catalog, I now get this error:
ERROR: Failed to create collection 'catalog_srv' due to: {127.0.1.1:8984_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://127.0.1.1:8984/solr: Error CREATEing SolrCore 'catalog_srv_shard1_replica1': Unable to create core [catalog_srv_shard1_replica1] Caused by: Can't find resource 'schema.xml' in classpath or '/configs/catalog_srv', cwd=/opt/solr/solr-config/solr-6.6.0/server}
What is schema.xml? Is it something about the data of my data portal? Could you please explain the issue, and how I should determine the path src/main/solr-cores/catalog to avoid these errors? What exactly does a default collection of Solr do?
More info: my data portal is an open-source application called GeoNetwork, and its documentation about Solr is here: http://geonetwork-opensource.org/manuals/trunk/eng/users/maintainer-guide/installing/installing-solr.html?highlight=solr
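For reference, the directory passed to -d must be a complete core configuration, not an empty directory you create by hand; the path src/main/solr-cores/catalog in the instructions presumably refers to a config directory shipped inside the data portal's source tree. A minimal sketch of the layout Solr 6 expects there:
src/main/solr-cores/catalog/
    solrconfig.xml
    schema.xml        (or managed-schema)
The CREATE fails because Solr finds neither schema.xml nor managed-schema in the directory you pointed it at.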

How to upload file using loopback api explorer?

I'm using the LoopBack API Explorer and I need to upload a file through the explorer. How can I do that? I don't find any option to upload a file; please refer to the screenshot.
Simply put, the answer is that you can't. Uploading a file requires multipart form data, which isn't currently possible via the loopback-component-explorer. You should check out loopback-component-storage instead. There is an example here; I recommend using the example-2.0.
You can test it with something like Postman. But the only thing you need is the path of the file, not the file itself.
Simpler than using Postman would be using curl directly in the terminal.
Here is the command I use when needed (I work with some services using loopback/explorer as well):
curl -i -X POST -H "Content-Type: multipart/form-data" -F "blob=@/path/to/your/file.jpg" -v http://HOST:PORT/pathToYourEndpoint?access_token=xxxxxxxxxxx
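If you are using loopback-component-storage, the endpoint is typically the container upload route, so the command would look something like this (the container name mycontainer and the form field name file are hypothetical; adjust to your setup):
curl -i -X POST -H "Content-Type: multipart/form-data" -F "file=@/path/to/your/file.jpg" http://HOST:PORT/api/containers/mycontainer/upload?access_token=xxxxxxxxxxx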
