Invalid value "zookeeper" for flag -a: valid streams are STDIN, STDOUT and STDERR - solr

I am trying to follow this blog to set up SolrCloud with Docker:
https://lucidworks.com/blog/solrcloud-on-docker/
I was able to create the zookeeper image successfully. The docker images command lists the image too.
However, when I try to create and run the zookeeper container with the following command, it errors out:
docker run -name zookeeper -p 2181 -p 2888 -p 3888 myusername/zookeeper:3.4.6
Error:
Warning: '-n' is deprecated, it will be removed soon. See usage.
invalid value "zookeeper" for flag -a: valid streams are STDIN, STDOUT and STDERR
See 'docker run --help'.
flag provided but not defined: -name
See 'docker run --help'.
What am I missing here?

The single-dash -name flag is no longer supported; please use --name instead.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
-a, --attach=[] Attach to STDIN, STDOUT or STDERR
--add-host=[] Add a custom host-to-IP mapping (host:ip)
--blkio-weight=0 Block IO weight (relative weight)
-c, --cpu-shares=0 CPU shares (relative weight)
--cap-add=[] Add Linux capabilities
--cap-drop=[] Drop Linux capabilities
--cgroup-parent="" Optional parent cgroup for the container
--cidfile="" Write the container ID to the file
--cpu-period=0 Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota=0 Limit CPU CFS (Completely Fair Scheduler) quota
--cpuset-cpus="" CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems="" Memory nodes (MEMs) in which to allow execution (0-3, 0,1)
-d, --detach=false Run container in background and print container ID
--device=[] Add a host device to the container
--dns=[] Set custom DNS servers
--dns-search=[] Set custom DNS search domains
-e, --env=[] Set environment variables
--entrypoint="" Overwrite the default ENTRYPOINT of the image
--env-file=[] Read in a file of environment variables
--expose=[] Expose a port or a range of ports
--group-add=[] Add additional groups to run as
-h, --hostname="" Container host name
--help=false Print usage
-i, --interactive=false Keep STDIN open even if not attached
--ipc="" IPC namespace to use
-l, --label=[] Set metadata on the container (e.g., --label=com.example.key=value)
--label-file=[] Read in a file of labels (EOL delimited)
--link=[] Add link to another container
--log-driver="" Logging driver for container
--log-opt=[] Log driver specific options
--lxc-conf=[] Add custom lxc options
-m, --memory="" Memory limit
--mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33)
--memory-swap="" Total memory (memory + swap), '-1' to disable swap
--memory-swappiness="" Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
--name="" Assign a name to the container
--net="bridge" Set the Network mode for the container
--oom-kill-disable=false Whether to disable OOM Killer for the container or not
-P, --publish-all=false Publish all exposed ports to random ports
-p, --publish=[] Publish a container's port(s) to the host
--pid="" PID namespace to use
--privileged=false Give extended privileges to this container
--read-only=false Mount the container's root filesystem as read only
--restart="no" Restart policy (no, on-failure[:max-retry], always)
--rm=false Automatically remove the container when it exits
--security-opt=[] Security Options
--sig-proxy=true Proxy received signals to the process
-t, --tty=false Allocate a pseudo-TTY
-u, --user="" Username or UID (format: <name|uid>[:<group|gid>])
--ulimit=[] Ulimit options
--disable-content-trust=true Skip image verification
--uts="" UTS namespace to use
-v, --volume=[] Bind mount a volume
--volumes-from=[] Mount volumes from the specified container(s)
-w, --workdir="" Working directory inside the container
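So the original command should work once the flag is spelled with two dashes (same image tag and ports as in the question; add -d if you want it to run in the background):
docker run --name zookeeper -p 2181 -p 2888 -p 3888 myusername/zookeeper:3.4.6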

Related

Access user keyring from systemd hook or crontab

I'm trying to implement a systemd hook (systemd-sleep) to connect to and disconnect from ProtonVPN. However, these scripts are executed as root and do not have access to the keyring. The backend is kwallet, since I am on Fedora KDE Plasma. When I execute protonvpn-cli c --sc, I get the following error:
[...]
    keyring_data_user = ExecutionEnvironment().keyring[
  File "/usr/lib/python3.10/site-packages/protonvpn_nm_lib/core/keyring/linuxkeyring.py", line 32, in __getitem__
    raise exceptions.KeyringError(e)
protonvpn_nm_lib.exceptions.KeyringError: Environment variable DBUS_SESSION_BUS_ADDRESS is unset
I tried several things, none of them worked:
running sudo -E -u myuser protonvpn-cli c --sc, replacing -E with -i, or using neither.
importing the DBus session of myuser (who is currently logged in and has kwallet opened), roughly as in the sketch at the end of this question. In this case, I get the error:
[...]
    bus = secretstorage.dbus_init()
  File "/usr/lib/python3.10/site-packages/secretstorage/__init__.py", line 80, in dbus_init
    raise SecretServiceNotAvailableException(str(ex)) from ex
secretstorage.exceptions.SecretServiceNotAvailableException: [Errno 32] Broken pipe
I tried systemd --user services, but I've found no way to hook such a service into suspend.target or hibernate.target, since those are system targets and are executed in a separate process.
Is there a way to give this hook script access to an already opened kwallet?
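For reference, the DBus import in the second attempt above was along these lines (a rough sketch, assuming a systemd user session for myuser):
# point the root hook at myuser's session bus, then run the client as myuser
USER_ID=$(id -u myuser)
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/${USER_ID}/bus"
sudo -E -u myuser protonvpn-cli c --sc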

Confirmation that the message was sent to the CAN bus using socketCAN

I would like to confirm that my message has actually been transmitted on the CAN bus, using the SocketCAN library.
The SocketCAN documentation describes this possibility using the recvmsg() function, but I am having trouble implementing it.
What I want to achieve is confirmation that my message won the arbitration process.
I think that by mentioning recvmsg(2) you are referring to the following paragraph of the SocketCAN docs:
MSG_CONFIRM: set when the frame was sent via the socket it is received on.
This flag can be interpreted as a 'transmission confirmation' when the
CAN driver supports the echo of frames on driver level, see 3.2 and 6.2.
In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set.
The key words here are "when the CAN driver supports the echo of frames on driver level", so you have to ensure that first. Next, you need to enable the corresponding socket option (CAN_RAW_RECV_OWN_MSGS, set with setsockopt() on the raw CAN socket). Finally, such confirmation has nothing to do with arbitration: when a frame loses arbitration, the controller simply retries the transmission as soon as the bus becomes free.
I think you can use the command candump can0 (or can1) on your PC; it will show the CAN packets received on the given CAN interface (a short example follows the option list below).
Usage: candump [options] <CAN interface>+
(use CTRL-C to terminate candump)
Options: -t <type> (timestamp: (a)bsolute/(d)elta/(z)ero/(A)bsolute w date)
-c (increment color mode level)
-i (binary output - may exceed 80 chars/line)
-a (enable additional ASCII output)
-b <can> (bridge mode - send received frames to <can>)
-B <can> (bridge mode - like '-b' with disabled loopback)
-u <usecs> (delay bridge forwarding by <usecs> microseconds)
-l (log CAN-frames into file. Sets '-s 2' by default)
-L (use log file format on stdout)
-n <count> (terminate after receiption of <count> CAN frames)
-r <size> (set socket receive buffer to <size>)
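For a quick local test, here is a sketch assuming the can-utils tools and a virtual CAN interface (replace vcan0 with your real can0/can1):
sudo ip link add dev vcan0 type vcan   # create a virtual CAN interface
sudo ip link set vcan0 up
candump vcan0 &                        # print every frame seen on vcan0
cansend vcan0 123#DEADBEEF             # the sent frame shows up in candump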

Rebalancing rate when new node is added

When a new node is added, we see that it starts to receive new tablets (on the http://:7000/tablet-servers page) and the system rebalances. But the default rate seems low. Are there any knobs to control this rate?
The rebalance in YugaByte DB is rate limited.
One of the parameters that governs this behavior is the yb-tserver gflag remote_bootstrap_rate_limit_bytes_per_sec, which defaults to 256 MB/sec and caps the rebalance-related transmission rate (inbound + outbound) of any one server (yb-tserver).
To inspect the current setting on a yb-tserver you can try this:
$ curl -s 10.150.0.20:9000/varz | grep remote_bootstrap_rate
--remote_bootstrap_rate_limit_bytes_per_sec=268435456
This particular param can also be changed on the fly, without needing a yb-tserver restart. For example, to set the rate to 512 MB/sec:
bin/yb-ts-cli --server_address=$TSERVER_IP:9100 set_flag --force remote_bootstrap_rate_limit_bytes_per_sec 536870912
A second aspect of this is the cluster-wide settings for how many tablet rebalances can happen simultaneously in the system. These are governed by a few yb-master gflags, for example:
$ bin/yb-ts-cli --server_address=$MASTER_IP:7100 set_flag -force load_balancer_max_concurrent_adds 3
$ bin/yb-ts-cli --server_address=$MASTER_IP:7100 set_flag -force load_balancer_max_over_replicated_tablets 3
$ bin/yb-ts-cli --server_address=$MASTER_IP:7100 set_flag -force load_balancer_max_concurrent_tablet_remote_bootstraps 3
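To inspect the current values of these, the same /varz trick as above should work against the yb-master web server (a sketch, assuming the default master web port 7000 mentioned in the question):
$ curl -s $MASTER_IP:7000/varz | grep load_balancer_max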

How to change keycloak jvm arguments via CLI in standalone configuration

Is there any way to change JVM arguments via command line interface?
I connected to the CLI using /opt/keycloak/bin/jboss-cli.sh -c controller=127.0.0.1:9990
but was not able to set the JVM arguments. Via ps -aef | grep keycloak I can see that the default initial and max heap sizes are -Xms64m -Xmx512m.
You can set JAVA_OPTS in standalone.conf, or set the JAVA_OPTS environment variable before calling standalone.sh. But be aware that setting it in the environment overwrites all of the default settings.
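As a sketch (the heap values below are only examples): the stock bin/standalone.conf guards its defaults with a check like the one shown here, so you can either edit the values in that file or export JAVA_OPTS before starting and restate everything you still need.
# Option 1: edit bin/standalone.conf, whose stock default block looks like this
if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms512m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m"
fi
# Option 2: set the variable in the environment instead (replaces ALL defaults)
export JAVA_OPTS="-Xms512m -Xmx2048m -Djava.net.preferIPv4Stack=true"
/opt/keycloak/bin/standalone.sh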
The common.sh script is sourced by standalone.sh, so it is better to add extra JVM options in common.sh without changing standalone.sh.
Add the entry below to common.sh:
DEFAULT_MODULAR_JVM_OPTIONS="$DEFAULT_MODULAR_JVM_OPTIONS -Dkeycloak.profile.feature.upload_scripts=enabled"

Keep user env variables executing gksu

I have a program in C/GTK which is launched with gksu. The problem is that when I read the environment variable $HOME with getenv("HOME"), it obviously returns root's value. I would like to know if there is a way to find out which user executed gksu, or a way to get that user's environment variables.
Thanks in advance!
See the man page. Use gksu -k command... to preserve the environment (in particular, PATH and HOME).
Or, like Lewis Richard Phillip C indicated, you can use gksu env PATH="$PATH" HOME="$HOME" command... to reset the environment variables for the command. (The logic is that the parent shell, the one run with user privileges, substitutes the variables, and env re-sets them when superuser privileges have been attained.)
If your application should only be run with root privileges, you can write a launcher script -- just like many other applications do. The script itself is basically
#!/bin/sh
exec gksu -k /path/to/your/application "$@"
or
#!/bin/sh
exec gksu env PATH="$PATH" HOME="$HOME" /path/to/your/application "$@"
The script is installed in /usr/bin, and your application as /usr/bin/yourapp-bin or /usr/lib/yourapp/yourapp. The exec means that the command replaces the shell; i.e. nothing after the exec command will ever be executed (unless the application or command cannot be executed at all) -- and most importantly, there will not be an extra shell in memory while your application is being executed.
While Linux and other POSIX-like systems do have a notion of effective identity (defining the operations an application may do) and real identity (defining the user that is doing the operation), gksu modifies all identities. In particular, while getuid() returns the real user ID for example for set-UID binaries, it will return zero ("root") when gksu is used.
Therefore, the above launch script is the recommended method to solve your problem. It is also a common one; run
file -L /usr/bin/* /usr/sbin/* | sed -ne '/shell/ s|:.*$||p' | xargs -r grep -lie launcher -e '^exec /'
to see which commands are (or declare themselves to be) launcher scripts on your system, or
file -L /bin/* /sbin/* /usr/bin/* /usr/sbin/* | sed -ne '/shell/ s|:.*$||p' | xargs -r grep -lie gksu
to see which ones use gksu explicitly. There is no harm in adopting a known good approach. ;)
You could assign the values of these environment variables to ordinary shell variables, then have gksu re-export them once privileges have been raised, chaining the export and your program together with && so that you essentially run with cloned environment variables.
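A rough, untested sketch of that idea (the application path is a placeholder, as in the launcher scripts above): the outer, user-privileged shell expands $HOME and $USER, and the elevated shell re-exports them before starting the program.
gksu sh -c "export HOME='$HOME' USER='$USER' && exec /path/to/your/application"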
A better question is why do this at all? I realize you want to keep some folders, but any files created as root would have to be made globally writable (probably via umask), or you would have to manually fix permissions or change ownership afterwards. This is a bad idea!
Please check out https://superuser.com/questions/232231/how-do-i-make-sudo-preserve-my-environment-variables, http://www.cyberciti.biz/faq/linux-unix-shell-export-command/ and https://serverfault.com/questions/62178/how-to-specify-roots-environment-variable
