How to pass classpath via REST API in Apache Flink 1.9 - apache-flink

We successfully start jobs via the CLI, something like:
./bin/flink run -p 1 -C file://tmp/test-fatjar.jar -c ru.test.TestApps test.jar * some arguments*
We can also successfully run this job via the API if we register the fat jar; the JSON looks like:
{
  "entryClass": "ru.test.TestApps",
  "parallelism": "1",
  "programArgsList": [ *** cut *** ]
}
How do we pass the classpath (the -C argument) via the API?
Thank you.

There is no equivalent to the CLI's general classpath option. The REST API always expects you to use a fat jar. Since your example also uses a fat jar, here is the general flow:
Upload your fat jar with /jars/upload. The response contains filename (=jarid).
Post to /jars/:jarid/run to start your job. The response contains jobid, which you can use to query the status and cancel.
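For reference, here is a minimal sketch of that flow with curl, assuming the default REST endpoint on localhost:8081 and the jar path from the question (the program arguments are placeholders):
# 1) Upload the fat jar; the "filename" field of the JSON response is the jar id
curl -X POST -H "Expect:" -F "jarfile=@/tmp/test-fatjar.jar" http://localhost:8081/jars/upload
# 2) Start the job, substituting the jar id returned above
curl -X POST http://localhost:8081/jars/<jarid>/run \
  -H "Content-Type: application/json" \
  -d '{"entryClass": "ru.test.TestApps", "parallelism": "1", "programArgsList": ["--arg1", "value1"]}'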

Related

How to implement authentication (user/password) to swupdate Web Interface

I need a way to implement some sort of authentication (user/password) to the swupdate web interface, in order to allow firmware updates to authorized users only.
I tried to place an .htaccess file in the root folder of the web interface (namely in the /www directory), but it seems to be ignored.
Does anybody have a working example for this requirement?
And also: In the configuration file swupdate.cfg I found the following parameter:
global-auth-file
for the embedded webserver, but I can't find what content (and in what format) this file must contain.
Thanks in advance
Create an htdigest file using Apache's htdigest tool. For example: htdigest -c .htdigest myrealm someuser
Then run swupdate, adding the following mongoose arguments: --auth-domain myrealm --global-auth-file /path_to_your_htdigest/.htdigest.
A full example: /usr/bin/swupdate -v -H "my_hardware:1.0" -f /etc/swupdate.cfg -w "--auth-domain myrealm --global-auth-file /www/.htdigest" -p 'reboot'
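For context, the .htdigest file produced above holds one user:realm:hash line per user, where the hash is the MD5 of "user:realm:password". An illustrative (not real) line looks like:
someuser:myrealm:0a1b2c3d4e5f60718293a4b5c6d7e8f9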

Apache Flink Dynamically setting JVM_OPT env.java.opts

Is it possible to set the custom JVM Options env.java.opts when submitting a job without specifying it in the conf/flink-conf.yaml file?
The reason I am asking is I want to use some custom variables in my log4j. I am also running my job on YARN.
I have tried the following command using the CLI, and it strips everything from the = sign onwards:
$ flink run -m yarn-cluster -yn 2 -yst -yD env.java.opts="-DappName=myapp -DcId=mycId"
At the moment this is not possible due to the way Flink parses the dynamic properties. Flink assumes that dynamic properties have the form -D<KEY>=<VALUE> and that <VALUE> does not contain any =, which is clearly wrong. Thus, for the moment, you have to specify env.java.opts via flink-conf.yaml.
I've opened a JIRA issue to fix this problem.
Update
The problem has been fixed in Flink >= 1.2.2 and >= 1.3.0.
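To illustrate: before the fix, the value was cut at its embedded = signs, so env.java.opts arrived as something like "-DappName" (illustrative, not actual Flink output). On a fixed version, the quoted value should survive parsing intact, so the command from the question works as written:
$ flink run -m yarn-cluster -yn 2 -yst -yD env.java.opts="-DappName=myapp -DcId=mycId"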
A simple solution I tried was passing the configuration parameters from application.properties as arguments, like below:
~/flink/bin/flink run app.jar --Brokers=Broker1:9093 --TopicName=some-topic
You can also pass the parameters as a properties file:
~/flink/bin/flink run app.jar -Dspring.config.name=<full-path>/application.properties
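For context, a hypothetical application.properties matching the arguments above might contain:
Brokers=Broker1:9093
TopicName=some-topic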

How to export fossil-scm timeline to another format

I'm using Fossil SCM as my only solution for version control and tickets. So far, so good. Its self-contained and minimalist approach suits my needs. But I would like to start doing some analysis of project history and development, and a good source for that is the project timeline. I could try some HTML parsing to convert the Fossil timeline output into something else, but I would like to know if there is any option to export that info in another structured format (e.g., JSON or similar). Web searches have not produced any useful findings on this issue. Any pointers to a solution?
Thanks,
Offray
Have you tried fossil json timeline branch trunk?
fossil help json
Usage: fossil json SUBCOMMAND ?OPTIONS?
In CLI mode, the -R REPO common option is supported. Due to limitations
in the argument dispatching code, any -FLAGS must come after the final
sub- (or subsub-) command.
The commands include:
anonymousPassword
artifact
branch
cap
config
diff
dir
g
login
logout
query
rebuild
report
resultCodes
stat
tag
timeline
user
version (alias: HAI)
whoami
wiki
Run 'fossil json' without any subcommand to see the full list (but be
aware that some listed might not yet be fully implemented).
Compile in JSON support when you build from source:
./configure --json
The key to getting this working is to enable JSON support in Fossil by compiling it from source. The current version has it disabled, so looking for any clue about it in the command-line help originally got me nothing. Thanks to user 2612611 for the initial clue. Here is the procedure I followed:
Go to https://www.fossil-scm.org/download.html and download the source tarball package.
Uncompress the downloaded package.
Go to the folder where you uncompressed the package (let's call it /uncompress-folder).
Run ./configure --json
Run make.
Optional: Put your newly created fossil binary in your path or where the old one was installed (something like sudo mv /uncompress-folder/fossil /usr/bin/fossil).
Open the fossil repository whose history you want to export and launch the fossil web interface (fossil ui).
Go to http://localhost:8080/json/timeline/checkin?limit=0, where http://localhost:8080 is your local machine's interface for fossil ui, and json/timeline/checkin?limit=0 is the JSON API call saying: JSON export of the timeline (/json/timeline) check-ins (/checkin) for all history (?limit=0). If instead of the 0 at the end of the URL you put another integer n, you will get the last n check-ins.
From the command prompt you should be able to get the same result by running fossil json timeline checkin --limit=0 > timeline.json, storing the output in the file timeline.json instead of using the web browser, but in a local test it didn't work.
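Alternatively, while fossil ui is serving on http://localhost:8080 as above, the same export can be scripted with curl (a minimal sketch):
# Fetch the full check-in timeline as JSON and save it to a file
curl -s 'http://localhost:8080/json/timeline/checkin?limit=0' > timeline.json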
The API is still a moving target, but you can find documentation for this excellent project at [1] and a demo interface to test the parameters at [2].
[1] https://docs.google.com/document/d/1fXViveNhDbiXgCuE7QDXQOKeFzf2qNUkBEgiUvoqFN4/view?pli=1#
[2] http://fossil.wanderinghorse.net/repos/fossil-sgb/json/

Google Compute Engine returned 399 internal server error

The question "Google compute engine console return 399 error code" already asks my question, but the solution is not as suggested there. Since that thread is a little old, I am starting a new one.
I am trying to do a wget using:
wget https://console.developers.google.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I see the error:
Resolving console.developers.google.com (console.developers.google.com)... 216.239.32.27
Connecting to console.developers.google.com (console.developers.google.com)|216.239.32.27|:443... connected.
HTTP request sent, awaiting response... 399 Internal Server Error
2014-08-26 20:02:18 ERROR 399: Internal Server Error.
I am new to Linux commands, so I wanted to know if I am missing something obvious.
The address works when I use the Chrome downloader, but it fails with wget for me as well.
I have never seen this behaviour before
You can also use cURL to download files. I used the -v switch and got a DNS error (no idea why):
curl -v http://console.developers.googlO.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
We cannot download with traditional tools; we have to use the gsutil utility provided by Google, with which automation is possible.
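For example, a minimal sketch with gsutil, assuming the object lives at the same bucket/path that appears in the question's URL:
# Copy the object from the m-lab bucket to the current directory
gsutil cp gs://m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz .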
You need to use the following URI pattern:
http://storage.googleapis.com/<bucket>/<object>
In this case, you can download that file using the command:
wget http://storage.googleapis.com/m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz

Google App Engine endpointscfg.py command starting 1.8.6 does not accept argument -f

This problem just started in Google App Engine version 1.8.6:
When executing the command (based on the instructions at https://developers.google.com/appengine/docs/python/endpoints/gen_clients):
endpointscfg.py get_client_lib java -o . -f rest your_module.YourApi
We get the error:
endpointscfg.py: error: unrecognized arguments: -f
The command with the -f argument executes without any issue on Google App Engine version 1.8.5.
With 1.8.6, I don't know how to generate the client endpoint library because of this error. If you have a workaround, please help.
When you use get_client_lib to generate a client library, the REST format is the only option. So if you intend to generate a REST client library, simply remove the "-f rest" option, and you will get your REST client without any problem.
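That is, the command from the question becomes simply:
endpointscfg.py get_client_lib java -o . your_module.YourApi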
If you want to use an RPC client (which is currently only supported by the iOS client), please refer to https://developers.google.com/appengine/docs/python/endpoints/consume_ios for instructions.
I think one piece might be missing from the documentation above. In order to get api-v1-rpc.discovery, you need to run the get_discovery_doc command like the following:
endpointscfg.py get_discovery_doc -o . -f rpc your_module.YourApi
Hope it helps.
