ArangoDB data definition execution issue - database

I am trying to execute the below command in an ArangoDB Kubernetes-based container, where ArangoDB "Version 0.13.7, build bdac926" is running.
The command I tried from my end:
arangodb-ddl-exec-cli.jar -input_file arangodb_ddl.json -db_url "http://11.22.3.5:32532" -username root -password XXXX**
While executing the above command I get the below error:
bash: arangodb-ddl-exec-cli-1.0.0-RELEASE-standalone.jar: command not found
My specification:
Docker - 18.03.1-ce
Kubernetes - v1.12.0
Kubernetes ArangoDB service port (Type: NodePort) - 8529:32532/TCP
Please let me know how to resolve it.
Thanks in advance.

If I understand correctly, you are trying to run a JAR file inside the container?
If so, you should first install a JRE in the container (for example by adding it to the Dockerfile, or by installing it from inside the running container).
First check whether Java is installed using java --version.
If it is not there, install it. Once you have it, run:
java -jar arangodb-ddl-exec-cli.jar followed by the rest of your arguments.
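As a rough sketch of the whole sequence inside the container (assuming a Debian/Ubuntu based image; the package manager and package name will differ on other bases):
apt-get update && apt-get install -y default-jre
java -version
java -jar arangodb-ddl-exec-cli.jar -input_file arangodb_ddl.json -db_url "http://11.22.3.5:32532" -username root -password XXXX**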
Hope this helps.

Related

SageMaker lifecycle config: could not find conda environment conda_python3, and no other environment is found

#!/bin/bash
set -e
ENVIRONMENT=python3
NOTEBOOK_FILE="/home/ec2-user/SageMaker/untitled.ipynb"
source activate python3
nohup jupyter nbconvert --to notebook --ExecutePreprocessor.timeout=600 --ExecutePreprocessor.kernel_name=conda_python3 --execute "$NOTEBOOK_FILE" &
The above script is what I use to start my notebook file "untitled.ipynb" when my SageMaker notebook instance starts.
But when I start my notebook instance I get the error "Could not find conda environment: python3".
If anyone knows the solution, please post.
When you run a lifecycle config, you need to specify the full path to the activate script where Anaconda resides (which is typically also where the environments live).
Then just modify that part of the script like this:
ENVIRONMENT=python3
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT";
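Putting it together, the on-start script would look roughly like this (assuming the /home/ec2-user/anaconda3 path used above):
#!/bin/bash
set -e
ENVIRONMENT=python3
NOTEBOOK_FILE="/home/ec2-user/SageMaker/untitled.ipynb"
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT"
nohup jupyter nbconvert --to notebook --ExecutePreprocessor.timeout=600 --ExecutePreprocessor.kernel_name=conda_python3 --execute "$NOTEBOOK_FILE" &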
You can also see official examples of various configurations in this repo:
amazon-sagemaker-notebook-instance-lifecycle-config-samples

SSH Agent Plugin v1.17 with Jenkins Declarative Pipeline not working on Windows

I have been having issues getting my multibranch pipeline to perform git commands with an SSH key via the SSH Agent plugin on Windows.
I am able to successfully perform a git clone over SSH from Git Bash on the Windows server that is running Jenkins.
In my pipeline log I am getting the following error when trying to use the sshagent plugin:
[ssh-agent] Looking for ssh-agent implementation...
Could not find ssh-agent: IOException: Cannot run program "ssh-agent": CreateProcess error=2, The system cannot find the file specified
Check if ssh-agent is installed and in PATH
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
I have seen that installing Apache Tomcat Native libraries has helped some people, but the steps for doing so are not very descriptive.
Any help is appreciated. Thanks!
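(For reference, the error means the Jenkins process cannot find an ssh-agent executable on its PATH. On Windows, ssh-agent.exe usually comes from Git for Windows under its usr\bin folder, so a quick sanity check from a command prompt running under the same account as Jenkins is something like the following; the install path shown is just the typical default, not something confirmed by this post:)
where ssh-agent
REM typically: C:\Program Files\Git\usr\bin\ssh-agent.exe
REM if nothing is found, add that folder to the system PATH and restart the Jenkins service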

Start Appium server from a bat file with version 1.6.4

With the previous version of Appium, I could start an Appium server from the command line in a batch file like this:
START node.exe node_modules\appium\bin\appium.js --port 4723
But now, since the new release 1.0.0 (or 1.6.4, depending on how you call it), there is no node.exe anymore, and no appium.js either.
Can someone tell me where I can find these files, or the "new" way of doing it?
Thanks a lot!
With Appium 1.6.4, there are two ways of installing/using it:
We can do it the old-fashioned way with Node. Installation happens via npm with the command npm install -g appium. Then you can run it much the same way you mentioned in your post.
If we install it via Appium Desktop (v1.0.0), then we don't need Node and the server can be started using the Appium Desktop UI.
I have also used appium -a 127.0.0.1 -p 4723 in the command prompt and it works fine. And if you want to look at Java code to start Appium, you can check this out as well: Start Appium server with Java
You can use main.js in place of appium.js to start Appium; that is the change we need to incorporate from Appium 1.6.4.
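So the batch line from the question would become something along these lines (the path assumes a local npm install; in the 1.6.x layout the entry point lives under build\lib rather than bin):
START node.exe node_modules\appium\build\lib\main.js --port 4723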

Starting Jetty fails on Ubuntu 14

I installed the solr-jetty package in an Ubuntu 14 container running in a Cloud9 workspace.
To install the package I ran the following command:
sudo apt-get install solr-jetty
The installation doesn't return any errors.
Then I try to start Solr with the following command:
sudo service jetty start
But I receive the following error:
* Starting Jetty servlet engine. jetty
* Jetty servlet engine started, reachable on http://host-solr-3694477:8983/. jetty
...fail!
In the Jetty log file I get the following message:
failed setting default capabilities.
set_caps(CAPS) failed for user 'jetty'
Service exit with a return value of 4
How can I resolve this issue?
To resolve the problem I had to change the user that runs Jetty from jetty to root.
This can be configured by editing the /etc/default/jetty file.
I think it is not the most correct solution because it can add security problems. If anyone has a better solution ...
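For reference, the change in /etc/default/jetty is roughly just the user variable (assuming the stock init script reads JETTY_USER from that file, and with the security caveat just mentioned):
JETTY_USER=root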
Docker user here, same problem, but this worked for me (and it is as inadvisable as changing the user to 'root', suggested above):
https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities
Set the following on your 'docker run' command when creating a container:
--privileged=true
I'm just using Docker for development, so I'm not overly concerned yet with the security implications of this.
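For example (the image name here is just a placeholder for whatever image you run Jetty/Solr in):
docker run --privileged=true -p 8983:8983 <your-solr-jetty-image>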

Starting and stopping App Engine instances with Docker

I've set up an App Engine project locally using Docker (on OS X), and have been running a server using the usual "gcloud preview app run app.yaml" command. From what I can tell, this keeps creating new images over and over again. After an hour or so of work I end up with something like 30 Docker images, each taking 130 MB.
Eventually I'm told I can no longer bind to localhost:8080. I tried killing all containers and images, but still cannot use localhost:8080 until I reboot.
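(For reference, cleaning up all containers and images typically looks like the following; the exact commands used here are not specified in the question:)
$ docker rm -f $(docker ps -aq)
$ docker rmi $(docker images -q)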
Seems like I'm not using Docker/gcloud correctly. Does anyone have an idea what I might be doing wrong? Is there another way I should be restarting App Engine instances other than hitting Ctrl-C and running the "run" command again?
UPDATE: After looking closer, I noticed I'm getting this message when I run an app locally and a container is created: "http: Hijack is incompatible with use of CloseNotifier". I'm not familiar enough with Docker to understand what's going on here. All searches seem to point to Go, which I am not using.
UPDATE 2: Here is the trace:
Creating container...
INFO 2015-05-05 02:23:28,293 containers.py:560] Container 1564ce4344957114312d6d1dc696ffbb4176b40ace6dcff5e4239e13ee04a8f6 created.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Users/judeosborn/google-cloud-sdk/platform/google_appengine/google/appengine/tools/docker/containers.py", line 643, in _ListenToLogs
for line in log_lines:
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 225, in _multiplexed_response_stream_helper
socket = self._get_raw_response_socket(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 167, in _get_raw_response_socket
self._raise_for_status(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 119, in _raise_for_status
raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
INFO 2015-05-05 02:23:28,606 module.py:1745] New instance for module "default" serving on:
http://localhost:8080
There's an ongoing issue with Docker 1.6.x [reference] that prevents gcloud from working well with Managed VMs (which you seem to be using). The easiest workaround until it gets fixed is to downgrade Docker on your development machine to version 1.5.0, which is the latest version known to work.
For Ubuntu, you can do something like:
$ curl -sSL https://get.docker.com/ubuntu | sed 's/lxc-docker/lxc-docker-1.5.0/' | sudo sh
For other Linux distros, you might have to modify that sed pattern, though.
On the other hand, if you're using Boot2Docker under Mac OS X, follow these steps:
Fully uninstall your previous Boot2Docker/Docker setup; there is a nice guide here
Reinstall Boot2Docker/Docker following the instructions here. IMPORTANT: You MUST stop right after completing the "Install Boot2Docker" step and before "Start the Boot2Docker Application". Once you get there, open up a terminal and execute the following commands:
$ mkdir ~/.boot2docker
$ echo 'ISOURL="https://github.com/boot2docker/boot2docker/releases/download/v1.5.0/boot2docker.iso"' > ~/.boot2docker/profile
At this point, you can continue with the "Start the Boot2Docker Application" section and finish the installation. You should now have a valid Docker launchpad with which to start Managed VMs. It'd be nice to double-check that you have the right versions installed by issuing:
$ boot2docker ssh docker version | egrep "(Client|Server) version"
The output should look like:
Client version: 1.5.0
Server version: 1.5.0
Now you can try again your original command:
$ gcloud preview app run app.yaml
Try running:
$ ps uax | egrep "gcloud|appserver"
If you see anything running, kill it... you may even need to kill -9 it.
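For instance (<pid> is just a placeholder for whatever process ID shows up):
$ ps uax | egrep "gcloud|appserver"
$ kill -9 <pid>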
