vespa deploy --wait 300 app-1-getting-started returns error-code "METHOD_NOT_ALLOWED"

When I follow this article to practice Vespa:
https://docs.vespa.ai/en/tutorials/news-1-getting-started.html
and run this step:
vespa deploy --wait 300 app-1-getting-started
I get this error:
{
"error-code": "METHOD_NOT_ALLOWED",
"message": "Method 'POST' is not supported"
}
Why does this happen, and how can I fix it?

I am unable to reproduce this; I just ran through the steps. I suggest you submit an issue at https://github.com/vespa-engine/vespa/issues with your environment details, and also include vespa.log so the Vespa Team can have a look.

Vespa deploys to http://localhost:19071/, so if the service running on that port is not the Vespa configuration service but a different HTTP server that returns 405, that would explain the behavior you observe. The tutorial starts the Vespa container image with three port bindings, two of which matter here:
8080:8080 is the Vespa container (data plane, read and write)
19071:19071 is the Vespa configuration service, which accepts the application package (control plane)
docker run -m 10G --detach --name vespa --hostname vespa-tutorial \
--publish 8080:8080 --publish 19071:19071 --publish 19092:19092 \
vespaengine/vespa
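To check whether it is really the Vespa configuration service answering on 19071, you can probe the port before deploying. A quick diagnostic sketch, using the config server's /state/v1/health endpoint (the lsof invocation assumes a Unix-like host):
# Should return Vespa health JSON if the config server owns the port
curl -s http://localhost:19071/state/v1/health
# Otherwise, see which process is actually listening there
lsof -iTCP:19071 -sTCP:LISTEN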

Related

How do you run pyflink scripts on AWS EMR?

I am struggling to run the basic word_count.py PyFlink example that ships with Apache Flink on AWS EMR.
Steps taken:
Successfully created an AWS EMR 6.5.0 cluster with the following applications [Flink, Zookeeper] - verified that there are flink and flink-yarn-session binaries in $PATH. AWS says it installed v1.14.
Ran the Java version successfully by doing the following:
sudo flink-yarn-session
sudo flink run -m yarn-cluster -yid <application_id> /usr/lib/flink/examples/batch/WordCount.jar
Tried running the same with Python, but no dice:
sudo flink run -m yarn-cluster -yid <application_id> -py /usr/lib/flink/examples/python/table/word_count.py
This fails, but the error makes it obvious that it is picking up Python 2.7 even though Python 3 is the default.
Fixed the issue by loosely following this link. Then tried a simple example that prints sys.version; this confirmed that it now picks up my Python version.
Tried again with a venv:
sudo flink run -m yarn-cluster -yid <application_id> -pyarch file:///home/hadoop/venv.zip -pyclientexec venv.zip/venv/bin/python3 -py /usr/lib/flink/examples/python/table/word_count.py
At this point, I started seeing various issues, ranging from "no file found" to the mysterious:
pyflink.util.exceptions.TableException: org.apache.flink.table.api.TableException: Failed to execute sql
I have run various permutations with and without the YARN cluster, but no progress so far.
I am thinking my issues are either environment-related (why AWS doesn't take care of the proper Python version is beyond me) or down to my inexperience with YARN/PyFlink.
Any pointers would be greatly appreciated.
This is what you do. To make a cluster:
aws emr create-cluster --release-label emr-6.5.0 \
  --applications Name=Flink \
  --configurations file://./config.json \
  --region us-west-2 \
  --log-uri s3://SOMEBUCKET \
  --instance-type m5.xlarge \
  --instance-count 2 \
  --service-role EMR_DefaultRole \
  --ec2-attributes KeyName=YOURKEYNAME,InstanceProfile=EMR_EC2_DefaultRole \
  --steps Type=CUSTOM_JAR,Jar=command-runner.jar,Name=Flink_Long_Running_Session,Args=flink-yarn-session,-d
Contents of config.json:
[
  {
    "Classification": "flink-conf",
    "Properties": {
      "python.executable": "python3",
      "python.client.executable": "python3"
    },
    "Configurations": []
  }
]
Then, once you are in (SSH'd to the master node), try this:
sudo flink run -m yarn-cluster -yid YID -py /usr/lib/flink/examples/python/table/batch/word_count.py
You can find the YID in the AWS EMR console under application user interfaces.
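To confirm the flink-conf Python settings actually took effect, a quick sanity check (mirroring the sys.version experiment from the question; check_version.py is a hypothetical file name) is to submit a trivial script the same way:
# Create a trivial job (hypothetical file name) and submit it as above;
# it should print the Python 3 interpreter configured in config.json
printf 'import sys\nprint(sys.version)\n' > /home/hadoop/check_version.py
sudo flink run -m yarn-cluster -yid YID -py /home/hadoop/check_version.py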

Connecting to an Apache Atlas + HBase + Solr setup with the Gremlin CLI

I am new to Atlas and JanusGraph. I have a local setup of Atlas with HBase and Solr as the backends, populated with dummy data.
I would like to use the Gremlin CLI + Gremlin Server to connect to the existing data in HBase, i.e. view and traverse the dummy Atlas metadata objects.
This is what I have done so far:
Run the Atlas server + HBase + Solr, and inserted dummy entities.
Run the Gremlin Server with the right configuration: I have set graph: { ConfigurationManagementGraph: .. } to janusgraph-hbase-solr.properties.
Run the Gremlin CLI and connect with :remote connect tinkerpop.server conf/remote.yaml session, which connects to the Gremlin Server just fine.
I do graph = JanusGraphFactory.open(..../janusgraph-hbase-solr.properties) and create g = graph.traversal().
I am able to create my own vertices and edges and list them, but I am not able to list anything related to Atlas, i.e. entities etc.
What am I missing? I want to connect to the existing Atlas setup and traverse the graph with the Gremlin CLI.
Thanks
To be able to access Atlas artifacts from the Gremlin CLI, you will have to add the Atlas dependency jars to JanusGraph's lib directory.
You can get the jars from the Atlas Maven repo or from your local build.
$ cp atlas-* janusgraph-0.3.1-hadoop2/lib/
List of JARs:
atlas-common-1.1.0.jar
atlas-graphdb-api-1.1.0.jar
atlas-graphdb-common-1.1.0.jar
atlas-graphdb-janus-1.1.0.jar
atlas-intg-1.1.0.jar
atlas-repository-1.1.0.jar
A sample query could be:
gremlin> :> g.V().has('__typeName','hive_table').count()
==>10
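Once the Atlas jars are in place, other traversals over Atlas's internal vertex properties work the same way; for example (the __guid property key is an assumption based on Atlas's graph schema, so verify it against your version):
gremlin> :> g.V().has('__typeName','hive_table').values('__guid').limit(5)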
As ThiagoAlvez mentioned, the Atlas Docker image can be used, since TinkerPop Gremlin support is now built into it and can easily be used to play with JanusGraph and Atlas artifacts using the Gremlin CLI:
Pull the image:
docker pull sburn/apache-atlas
Start Apache Atlas in a container exposing Web-UI port 21000:
docker run -d \
-p 21000:21000 \
--name atlas \
sburn/apache-atlas \
/opt/apache-atlas-2.1.0/bin/atlas_start.py
Install gremlin-server and gremlin-console into the container by running included automation script:
docker exec -ti atlas /opt/gremlin/install-gremlin.sh
Start gremlin-server in the same container:
docker exec -d atlas /opt/gremlin/start-gremlin-server.sh
Finally, run gremlin-console interactively:
docker exec -ti atlas /opt/gremlin/run-gremlin-console.sh
I had this same issue when trying to connect to the Apache Atlas JanusGraph database (org.janusgraph.diskstorage.solr.Solr6Index).
I got it solved after moving the Atlas jars to the JanusGraph lib folder, as anand said, and then configuring janusgraph-hbase-solr.properties.
These are the configurations I set in janusgraph-hbase-solr.properties:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=hbase
storage.hostname=localhost
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
index.search.backend=solr
index.search.solr.mode=http
index.search.solr.http-urls=http://localhost:9838/solr
index.search.solr.zookeeper-url=localhost:2181
index.search.solr.configset=_default
atlas.graph.storage.hbase.table=apache_atlas_janus
storage.hbase.table=apache_atlas_janus
I'm running Atlas using this docker image: https://github.com/sburn/docker-apache-atlas
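With those properties in place, a minimal console session (a sketch mirroring the steps from the question; adjust the properties path to wherever you saved the file) looks like:
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase-solr.properties')
gremlin> g = graph.traversal()
gremlin> g.V().has('__typeName','hive_table').count()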

IBM Cloud Private Community Edition - Waiting for cloudant database initialization

I tried the below command:
docker run --rm -t -e LICENSE=accept --net=host -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install
and the response is:
Waiting for cloudant initialization
I entered the command and received the logs shown in the image; no error is shown. Please suggest a solution.
From the error message, the Cloudant database initialization issue may be caused by the Cloudant Docker image still being pulled from Docker Hub during ICP installation. The Cloudant Docker image is big; you can run the command below to check whether the image is ready in your environment.
$ docker images | grep icp-datastore
If the Cloudant Docker image is ready in your environment and the ICP installation still has the Cloudant database initialization issue, you can try installing the latest ICP 2.1.0.3 Community Edition. As of 2.1.0.3, ICP removes the Cloudant database. The ICP 2.1.0.3 installation documentation:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/installing/install_containers_CE.html
If you still want to debug the Cloudant database initialization issue in an ICP 2.1.0.1 environment, you can:
Ensure your ICP nodes match the system and hardware requirements first:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/system_reqs.html
Let us know the ICP installation configuration; you can check the contents of the config.yaml and hosts files.
Check the system logs (in the /var/log/messages or /var/log/syslog file) to find the relevant errors.
Run 'docker logs <container>' to check the container logs for errors.
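A minimal triage sequence along those lines (a sketch; the grep patterns and container name are placeholders, so adjust them to what docker ps actually shows):
# Is the Cloudant/datastore image fully pulled?
docker images | grep icp-datastore
# Did its container start, and what do its logs say?
docker ps -a | grep datastore
docker logs <datastore-container-id>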

FLINK : Deployment took more than 60 seconds

I am new to Flink and trying to deploy my jar on an EMR cluster. I am using a 3-node cluster (1 master and 2 slaves) and have not made any configuration changes, sticking with the defaults. On running the following command on my master node:
flink run -m yarn-cluster -yn 2 -c Main /home/hadoop/myjar-0.1.jar
I am getting the following error:
INFO org.apache.flink.yarn.YarnClusterDescriptor- Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
Can anyone please explain what could be the possible reason for this error?
As you didn't specify any resources (memory, CPU cores), I guess it's because the YARN cluster does not have the desired resources available, especially memory.
Try submitting your jar file using the following type of commands:
flink run -m yarn-cluster -yn 5 -yjm 768 -ytm 1400 -ys 2 -yqu streamQ my_program.jar
You can find more information about the command here
You can check application logs in YARN WebUI to see what's the problem exactly.
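If you prefer the command line over the WebUI, the standard YARN CLI exposes the same information (assuming log aggregation is enabled on the cluster):
# Find the application id of the stuck deployment
yarn application -list
# Fetch its aggregated logs
yarn logs -applicationId <application_id>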

Possible? How to set up VNC in a Google Managed VM environment

I'm using Java but this isn't necessarily a Java question. Google's "java-compat" image is Debian (3.16.7-ckt20-1+deb8u3~bpo70+1 (2016-01-19)).
Here is my Dockerfile:
FROM gcr.io/google_appengine/java-compat
RUN apt-get -qqy update && apt-get -qqy install curl xvfb x11vnc
RUN mkdir -p ~/.vnc
RUN x11vnc -storepasswd xxxxxxxx ~/.vnc/passwd
EXPOSE 5900
ADD . /app
And in the Admin Console I created a firewall rule to open up 5900. And lastly I am calling the vnc server itself in the "_ah/start" startup hook with this command:
x11vnc -forever -usepw -create
All seems to be set up correctly, but I'm unable to connect with TightVNC. I use the public (ephemeral) IP address for the instance, which I find in the Admin Console, followed by ::5900 (TightVNC requires two colons for some reason). I get a message that the server refused the connection, and indeed, when I try to telnet to port 5900, it's blocked.
Next I SSH into the container machine, and when I test the port on the container with wget xxx.xxx.xxx.xxx:5900, I get a connection. So it seems to me the container is not accepting connections on port 5900. Am I getting this right? Is it possible to open up ports and route my VNC client into the Docker container? Any help appreciated.
Why I can't use Compute Engine: just to preempt some comments about using Google's Compute Engine environment instead of Managed VMs - I make heavy use of the Datastore and Task Queues in my code, and I don't think those can run (or run natively/efficiently) on Compute Engine. But I may pose that as a separate question.
Update: Per Paul in the comments... having learned some of the Docker terminology: can I publish a port on the container in Google's environment?
Out of curiosity - why are you trying to VNC into your instances? If it's just for management purposes, you can SSH into Managed VM instances.
That having been said - you can use the network/forwarded_ports config to route traffic from the VM to the application container:
network:
  forwarded_ports:
    - 5900
  instance_tag: vnc
Put that in your app.yaml, and re-deploy your app. You'll also need to open the port in your firewall (if you intend on accessing this from the public internet):
gcloud compute firewall-rules create default-allow-vnc \
--allow tcp:5900 \
--target-tags vnc \
--description "Allow vnc traffic on port 5900"
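After redeploying, you can verify reachability from the outside before retrying TightVNC (a quick check; <instance-ip> stands in for the instance's ephemeral address):
# Should succeed once the firewall rule and forwarded port are active
nc -vz <instance-ip> 5900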
Hope this helps!
