Apache Zeppelin Flink interpreter cannot connect to Flink 1.5.2 - apache-flink

I tried to connect to Flink 1.5.2 from Zeppelin.
The Flink cluster is started on the local host, and flink-conf.yaml is left at its defaults:
/opt/flink1.5.2/current/bin/start-cluster.sh
Zeppelin was cloned from git and built as follows:
git clone https://github.com/apache/zeppelin.git
cd zeppelin
mvn clean package -DskipTests -Dflink.version=1.5.2 -Pscala-2.11
bin/zeppelin-daemon start
The Flink interpreter was configured in the Zeppelin UI (screenshot omitted).
Then I ran the sample Flink notebook:
%flink // let Zeppelin know what interpreter to use.
val text = benv.fromElements("In the time of chimpanzees, I was a monkey", // some lines of text to analyze
"Butane in my veins and I'm out to cut the junkie",
"With the plastic eyeballs, spray paint the vegetables",
"Dog food stalls with the beefcake pantyhose",
"Kill the headlights and put it in neutral",
"Stock car flamin' with a loser in the cruise control",
"Baby's in Reno with the Vitamin D",
"Got a couple of couches, sleep on the love seat",
"Someone came in sayin' I'm insane to complain",
"About a shotgun wedding and a stain on my shirt",
"Don't believe everything that you breathe",
"You get a parking violation and a maggot on your sleeve",
"So shave your face with some mace in the dark",
"Savin' all your food stamps and burnin' down the trailer park",
"Yo, cut it")
/* The meat and potatoes:
   this tells Flink to iterate through the elements (in this case strings),
   transform each string to lower case, split it at whitespace into individual words,
   and finally aggregate the occurrences of each word.
   This creates the counts variable, which is a list of tuples of the form (word, occurrences). */
val counts = text.flatMap{ _.toLowerCase.split("\\W+") }.map { (_,1) }.groupBy(0).sum(1)
counts.collect().foreach(println(_)) // execute the script and print each element in the counts list
The paragraph fails with this error:
ERROR [2018-09-11 19:21:18,004] ({pool-2-thread-2} Job.java[run]:174) - Job failed
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
INFO [2018-09-11 19:21:18,034] ({pool-2-thread-2} SchedulerFactory.java[jobFinished]:115) - Job paragraph_1536657575367_1976000602 finished by scheduler interpreter_1496279000
Why is the address already in use? I do not want Zeppelin to start its own Flink cluster for the job; I want to run the job on the Flink cluster that is already running.
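One way to confirm the conflict is to check which process is already bound to the Flink ports before pointing the interpreter at the existing cluster. A minimal diagnostic sketch, assuming the default JobManager RPC port 6123 and web UI port 8081 from a stock flink-conf.yaml:

# Check which processes hold the default Flink ports
# (6123: JobManager RPC, 8081: web UI) - these port numbers are
# assumptions based on an unmodified flink-conf.yaml.
lsof -i :6123
lsof -i :8081

If the standalone cluster already owns these ports, a plausible fix is pointing the Zeppelin Flink interpreter's host and port properties at the running cluster instead of leaving the default local mode, which starts an embedded mini-cluster that competes for the same ports.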

Related

How to deploy a smart contract on KADENA

How do I deploy a smart contract to the testnet or mainnet WITHOUT the Chainweaver web UI? I know I need a YAML file for that, but what do I do with it and where exactly do I send it?
Do I need to run a pact server, the chainweb API, or something else? I couldn't find any guide for that.
Step 0: Install the Prerequisites
Install Pact
Step 1: Create the Pact Module
We will be deploying the following Pact module. For simplicity's sake, the Pact code we are deploying does not use a transaction's data field (read-keyset is one Pact function that makes use of this field); otherwise, the accompanying YAML file would have to change. We also assume that this Pact code is saved as test.pact.
(namespace 'free)
(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2)
  )
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the pact we want to upload is: /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zeroth chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX epoch time is 1585056536 seconds. This value MUST CHANGE, so calculate it by running date +%s on the command line or by using an online epoch time converter.
That "testing" is the account paying for gas (a.k.a. the "sender") on the Testnet network. To create a Testnet account and fund it with some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED25519 public-private key pair to use the faucet. You can use pact -g to generate this key pair (see the sketch after this list). Make sure to save it somewhere safe.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas payer account, which in this example is "testing". This must change from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
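As mentioned above, pact -g generates the ED25519 key pair for the gas payer account. A minimal sketch; the output file name is our own choice for illustration:

# Generate an ED25519 public/secret key pair as YAML on stdout
# and save it somewhere safe (the secret must never be shared).
pact -g > testnet-keypair.yaml
cat testnet-keypair.yaml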
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the YAML file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submitted the transaction successfully we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible for a transaction to fail at the Pact level but still get added to the blockchain, with gas charged. If this happens, the result.status field will be failure.
If a transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
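A small retry loop can automate the polling; a sketch using the same node, network id, chain id, and request key assumed above:

# Poll every 15 seconds until /poll stops returning the empty object {}.
while true; do
  result=$(curl -s -H "Content-Type: application/json" \
    -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' \
    -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll)
  if [ "$result" != "{}" ]; then
    echo "$result"
    break
  fi
  sleep 15
done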
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d

Broadcast an Intent from Historical Broadcast

The story behind it: I'm trying to control an Android box (with a proprietary launcher) that also manages TV channels. To enter the channel section it is not sufficient to type a number; a specific key must be pressed. I want to find a way to replicate that key press and then use the command from Home Assistant. I could try a Bluetooth sniffer, but only after my current attempt fails.
I ran this adb command after pressing the specific key for TV channels:
adb.exe shell dumpsys activity broadcasts history
And the last broadcast in history is this (timvision is the name of the box):
Historical Broadcast foreground #0:
  BroadcastRecord{560a0f4 u0 android.intent.action.GLOBAL_BUTTON} to user 0
  Intent { act=android.intent.action.GLOBAL_BUTTON flg=0x10000010 (has extras) }
    targetComp: {timvision.launcher/timvision.launcher.TimVisionKeyReceiver}
    extras: Bundle[{android.intent.extra.KEY_EVENT=KeyEvent { action=ACTION_UP, keyCode=KEYCODE_LAST_CHANNEL, scanCode=377, metaState=0, flags=0x8, repeatCount=0, eventTime=10092564, downTime=10092515, deviceId=27, source=0x301, displayId=-1 }}]
  caller=android 3514:system/1000 pid=3514 uid=1000
  enqueueClockTime=2022-04-16 14:43:54.577 dispatchClockTime=2022-04-16 14:43:54.578
  dispatchTime=-3s691ms (+1ms since enq) finishTime=-3s502ms (+189ms since disp)
  resultTo=null resultCode=0 resultData=null
  nextReceiver=1 receiver=null
  Deliver +189ms #0: (manifest)
    priority=0 preferredOrder=0 match=0x0 specificIndex=-1 isDefault=false
    ActivityInfo:
      name=timvision.launcher.TimVisionKeyReceiver
      packageName=timvision.launcher
      enabled=true exported=true directBootAware=false
      resizeMode=RESIZE_MODE_RESIZEABLE
Is it possible to replicate this broadcast? I tried the following (with the key event extras too), but it seems it is not allowed:
adb.exe shell am broadcast -a android.intent.action.GLOBAL_BUTTON -n timvision.launcher/timvision.launcher.TimVisionKeyReceiver
Error:
java.lang.SecurityException: Permission Denial: not allowed to send broadcast android.intent.action.GLOBAL_BUTTON from pid=32487, uid=2000
Alternatives or ideas are welcome.
Thanks
This is not a solution to the question as asked, but it solved my problem, so I am posting it.
The key to enter the TV channels section is listed in the official Android documentation: it is KEYCODE_LAST_CHANNEL, with code 229.
On Home Assistant the service to launch is this:
service: androidtv.adb_command
data:
  command: input keyevent 229
target:
  entity_id: media_player.your_android_tv_entity
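For testing outside Home Assistant, the same key event can be injected directly over adb; a sketch using the keycode from the answer above:

# Inject KEYCODE_LAST_CHANNEL (229) directly on the box.
adb shell input keyevent 229
# Equivalent, by key name:
adb shell input keyevent KEYCODE_LAST_CHANNEL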

Unable to stream with Flink Streaming

I am new to the Flink Streaming framework, and I am trying to understand the components and the flow. I am trying to run the basic word-count example using the DataStream API in my IDE. The code runs with no issues when I feed data from a collection:
DataStream<String> text = env.fromElements(
"To be, or not to be,--that is the question:--",
"Whether 'tis nobler in the mind to suffer",
"The slings and arrows of outrageous fortune",z
"Or to take arms against a sea of troubles,"
);
But it fails every time I try to read data either from a socket or from a file, as below:
DataStream<String> text = env.socketTextStream("localhost", 9999);
or
DataStream<String> text = env.readTextFile("sample_file.txt");
For both the socket and the text file, I am getting the following error:
Exception in thread "main" java.lang.reflect.InaccessibleObjectException: Unable to make field private final byte[] java.lang.String.value accessible: module java.base does not "opens java.lang" to unnamed module
Flink currently (as of Flink 1.14) only supports Java 8 and Java 11. Install a suitable JDK and try again.
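This InaccessibleObjectException is characteristic of newer JDKs (16+), where the module system blocks Flink's reflective access to java.lang.String internals. A quick check, assuming a standard JDK installation (the JAVA_HOME path below is illustrative):

# Print the JDK the application runs under; Flink 1.14 expects 8 or 11.
java -version
# If several JDKs are installed, point JAVA_HOME at a supported one
# (the path is an assumption - adjust to your install).
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"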

Flink - unable to recover after yarn node termination

We are running Flink on YARN. We were performing disaster recovery testing, and as part of that we manually terminated one of the nodes that had a Flink application running. Once the instance was brought back up, the application went through multiple attempts, and each attempt had the following error:
AM Container for appattempt_1602902099413_0006_000027 exited with exitCode: -1000
Failing this attempt. Diagnostics: Could not obtain block: BP-986419965-xx.xx.xx.xx-1602902058651:blk_1073743332_2508
file=/user/hadoop/.flink/application_1602902099413_0006/application_1602902099413_0006-flink-conf.yaml1528536851005494481.tmp
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-986419965-10.61.71.85-1602902058651:blk_1073743332_2508 file=/user/hadoop/.flink/application_1602902099413_0006/application_1602902099413_0006-flink-conf.yaml1528536851005494481.tmp
    at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:1053)
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1036)
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1015)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:647)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:926)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:982)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:90)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:64)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:125)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:267)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://<>.compute.internal:8088/cluster/app/application_1602902099413_0006 and then click on the links to the logs of each attempt.
Could someone let us know what content is being stored in HDFS and if this could be redirected to S3?
Adding the checkpoint-related settings:
StateBackend rocksDbStateBackend = new RocksDBStateBackend("s3://Path", true);
streamExecutionEnvironment.setStateBackend(rocksDbStateBackend);
streamExecutionEnvironment.enableCheckpointing(10000);
streamExecutionEnvironment.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
streamExecutionEnvironment.getCheckpointConfig().setMinPauseBetweenCheckpoints(5000);
streamExecutionEnvironment.getCheckpointConfig().setCheckpointTimeout(60000);
streamExecutionEnvironment.getCheckpointConfig().setMaxConcurrentCheckpoints(60000);
streamExecutionEnvironment.getCheckpointConfig().enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
streamExecutionEnvironment.getCheckpointConfig().setPreferCheckpointForRecovery(true);
I had the same issue. I fixed it with the following hdfs-site settings:
{
  "Classification": "hdfs-site",
  "Properties": {
    "dfs.client.use.datanode.hostname": "true",
    "dfs.replication": "2",
    "dfs.namenode.replication.min": "2",
    "dfs.namenode.maintenance.replication.min": "2"
  }
}
I think HDFS lost data when the node was terminated, so I now replicate the data across multiple nodes.
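To confirm whether blocks actually went missing after the node termination, HDFS's own health check can be run against the Flink staging directory; a sketch, using the path from the error message above:

# Report missing or under-replicated blocks under the Flink staging directory.
hdfs fsck /user/hadoop/.flink -files -blocks -locations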

How can I read the blockfile_xxxx

I tried to read the Fabric-produced blockfile_000000, which is in the directory /var/hyperledger/production/ledgersData/chains/chains/mychannel on the peer node.
But I can't read it with methods like:
configtxgen -profile TwoOrgsChannel -inspectBlock ./channel-artifacts/blockfile_000000
The error is:
[common/tools/configtxgen] main -> CRIT 004 Error on inspectBlock: Could not read block ./channel-artifacts/blockfile_000000
Using configtxlator:
configtxlator proto_decode --input ./channel-artifacts/blockfile_000000 --type common.Block
The error is:
configtxlator: error: Error decoding: error unmarshaling: proto: can't skip unknown wire type 6 for common.Block
I know the blockfile is actually a chunk, i.e. a collection of blocks. How do I handle it?
configtxlator version
configtxlator:
  Version: 1.2.0
  Commit SHA: f6e72eb
  Go version: go1.10
  OS/Arch: linux/amd64
Any help would be greatly appreciated.
I used docker exec to get into the peer node and fetched a block with peer channel fetch. Then I read the block with configtxlator. But how do I read the transaction information?
Part of the output is (block 6):
"header": {
"data_hash": "kVFRQLFjY7+6l6QsL+jOgt5ICoCUlRG4VedgmBXv/mE=",
"number": "6",
"previous_hash": "GQ4w7x7MQB+Jvsa3neJcTNdU7aXdKVHySA7Va3SktOs="
},
There are APIs which can be used to query blocks for any given channel:
GetChainInfo returns the current block height for a given channel
GetBlockByNumber returns individual blocks by number (you can get the latest block height from the GetChainInfo API and work backwards from there)
All of the SDKs have methods to invoke these APIs
configtxgen and configtxlator are meant for generating configuration transactions and for translating configuration transactions into a readable format, respectively. Configuration transactions include creating a channel, updating anchor peers in the channel, setting the readers/writers of a channel, etc. They are not meant for normal transactions, which get stored in blockfile_xxx.
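A sketch of the fetch-and-decode workflow mentioned in the question, assuming a channel named mychannel and an already-configured peer CLI environment (the file names are ours for illustration):

# Fetch block number 6 from the channel into a protobuf file.
peer channel fetch 6 block_6.pb -c mychannel
# Decode that single block into readable JSON; this works because the file
# contains exactly one common.Block, unlike the raw blockfile_000000.
configtxlator proto_decode --input block_6.pb --type common.Block > block_6.json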
You can use Hyperledger Explorer to view your block data.
