How can I read the blockfile_xxxx database?

I tried to read the Fabric-produced blockfile_000000, which is in the directory /var/hyperledger/production/ledgersData/chains/chains/mychannel on the peer node.
But I can't read it with methods like:
configtxgen -profile TwoOrgsChannel -inspectBlock ./channel-artifacts/blockfile_000000
the error is
[common/tools/configtxgen] main -> CRIT 004 Error on inspectBlock: Could not read block ./channel-artifacts/blockfile_000000
Using configtxlator:
configtxlator proto_decode --input ./channel-artifacts/blockfile_000000 --type common.Block
the error is
configtxlator: error: Error decoding: error unmarshaling: proto: can't skip unknown wire type 6 for common.Block
I know the blockfile is actually a chunk, i.e., a collection of blocks. How do I handle it?
configtxlator version
configtxlator:
Version: 1.2.0
Commit SHA: f6e72eb
Go version: go1.10
OS/Arch: linux/amd64
Any help would be greatly appreciated.
I used docker exec to get into the peer node and fetched the block with peer channel fetch. Then I read the block with configtxlator. But how do I read the transaction information?
Part of the log is (block 6):
"header": {
"data_hash": "kVFRQLFjY7+6l6QsL+jOgt5ICoCUlRG4VedgmBXv/mE=",
"number": "6",
"previous_hash": "GQ4w7x7MQB+Jvsa3neJcTNdU7aXdKVHySA7Va3SktOs="
},
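For reference, the fetch-and-decode workflow looks roughly like this; the channel name, block number, orderer address, and the jq filter for pulling transactions out of the decoded block are illustrative assumptions:
peer channel fetch 6 block_6.pb -c mychannel -o orderer.example.com:7050   # fetch block 6
configtxlator proto_decode --input block_6.pb --type common.Block > block_6.json
jq '.data.data[].payload' block_6.json   # transactions live under .data.data[]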

There are APIs which can be used to query blocks for any given channel:
GetChainInfo returns the current block height for a given channel
GetBlockByNumber returns individual blocks by number (you can get the latest block height from the GetChainInfo API and work backwards from there)
All of the SDKs have methods to invoke these APIs
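For reference, these qscc system-chaincode functions can also be invoked from a peer CLI; the channel name and block number here are assumptions:
peer chaincode query -C mychannel -n qscc -c '{"Args":["GetChainInfo","mychannel"]}'
peer chaincode query -C mychannel -n qscc -c '{"Args":["GetBlockByNumber","mychannel","6"]}'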

configtxgen and configtxlator are meant for generating configuration transactions and translating configuration transactions into a readable format, respectively. Configuration transactions include creating a channel, updating anchor peers in the channel, setting the readers/writers of a channel, etc. They are not meant for the normal transactions which get stored in blockfile_xxx.
You can use Hyperledger Explorer to view your block data.

Related

How to deploy a smart contract on KADENA

Please, how do I deploy a smart contract to the testnet or mainnet WITHOUT the Chainweaver web UI? I know I need a YAML file for that, but what do I do with it and where exactly do I send it?
Do I need to run a pact server, the chainweb API, or...? I couldn't find any guide for that.
Step 0: Install the Prerequisites
Install Pact
Step 1: Create the Pact Module
We will be deploying the following Pact module. For simplicity's sake, the Pact code we are deploying does not use a transaction's data field (read-keyset is one such Pact function that makes use of this field); otherwise, the accompanying YAML file would have to change. We also assume that this Pact code is saved as test.pact.
(namespace 'free)

(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2))
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the Pact module we want to upload is /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zeroth chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX epoch time is 1585056536 seconds. This value MUST CHANGE, so calculate it by running date +%s on the command line (or any UNIX-epoch clock website); see the sketch after this list.
That "testing" is the account paying for gas (aka the "sender") on the Testnet network. To create a Testnet account and fund it with some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED25519 public-private key pair to use the faucet. You can use pact -g to generate this key pair (see the sketch after this list). Make sure to save it somewhere safe.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas-payer account, which in this example is "testing". This must change from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
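The helper commands referenced in the list above, as a minimal sketch (assuming pact is on your PATH and prints the generated key pair to stdout):
pact -g     # generate an ED25519 key pair; save the output somewhere safe
date +%s    # current UNIX epoch time, for the creationTime field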
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the yaml file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submitted the transaction successfully we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible for a transaction to fail at the Pact level but still get added to the blockchain, with gas charged. If this happens, the result.status field will be failure.
If a transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
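A simple way to keep retrying, sketched as a shell loop (the 30-second interval is an arbitrary choice):
while true; do
  resp=$(curl -s -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll)
  if [ "$resp" != "{}" ]; then echo "$resp"; break; fi  # result is available
  sleep 30
done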
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d

[Flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows the number of task managers is 0.
I checked the log and it seems the task manager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user configured, which is why it isn't documented.
The windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
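For example, from a bash-capable shell on Windows (Git Bash, WSL, or Cygwin; the Flink directory name is illustrative):
cd flink-1.10.0
./bin/start-cluster.sh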
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E

Using JanusGraph with Solr

Setting up JanusGraph, I noticed the following in the console:
09:04:12,175 INFO ReflectiveConfigOptionLoader:173 - Loaded and initialized config classes: 10 OK out of 12 attempts in PT0.023S
09:04:12,230 INFO Reflections:224 - Reflections took 28 ms to scan 1 urls, producing 2 keys and 2 values
09:04:12,291 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.index-name=entity (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,294 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.backend=solr (Type: GLOBAL_OFFLINE) is overridden by globally managed value (elasticsearch). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,300 INFO CassandraThriftStoreManager:628 - Closed Thrift connection pooler.
And then I see the following:
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
How do I stop using Elasticsearch and switch to Solr?
My properties file is as follows:
index.search.backend=solr
index.search.directory=/path/to/directory/for/solr/index/something
index.search.index-name=something
index.search.solr.mode=http
index.search.solr.http-urls=http://127.0.0.1:8983/solr
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
The answer to this is basically the same as this one for Titan. JanusGraph was forked from Titan.
You are probably trying to connect to an existing graph that was previously configured to use Elasticsearch. By default, the keyspace is named janusgraph.
1) You could connect to a different keyspace by updating conf/janusgraph-cassandra.properties:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.cassandra.keyspace=mygraph
2) You could drop the existing keyspace. If you used bin/janusgraph.sh start from the quick start directions (which starts a single-node Cassandra and a single-node Elasticsearch):
bin/janusgraph.sh clean
Or if you have a standalone Cassandra installation:
$CASSANDRA_HOME/bin/cqlsh -e 'drop keyspace if exists janusgraph'
Then you would be able to connect with the default conf/janusgraph-cassandra.properties.

How to send CSV to a file in streaming mode in Mule ESB, where I get input in "java.io.PipedInputStream#1bca8e6" format?

My requirement is: I get input XML, and based on some condition checking (using choice) I need to write to two different files. As I'm getting big files (100-400 MB), I'm sending them in streaming mode (by enabling streaming in the file and datamapper components).
It works fine for small input XML (10-20 MB). But when I give it large input XML, the condition checking and XML-to-CSV conversion work fine, but while writing the CSV data I get this error:
INFO 2015-09-08 12:03:49,227 [[simplebatch_1].simplebatchFlow.stage1.02] org.mule.api.processor.LoggerMessageProcessor: default logger java.io.PipedInputStream#1bca8e6
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.lifecycle.AbstractLifecycleManager: Initialising: 'File1.dispatcher.29118412'. Object is: FileMessageDispatcher
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.lifecycle.AbstractLifecycleManager: Starting: 'File1.dispatcher.29118412'. Object is: FileMessageDispatcher
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.transport.file.FileConnector: Writing file to: D:\MulePOC's\output\myoutput1
ERROR 2015-09-08 12:03:54,999 [XML_READER0_0] org.jetel.graph.Node: java.lang.OutOfMemoryError: Java heap space
ERROR 2015-09-08 12:03:55,000 [WatchDog_0] org.jetel.graph.runtime.WatchDog: Component [XML READER:XML_READER0] finished with status ERROR.
Java heap space
Please suggest a fix. Thanks.
You need to increase the JVM memory for Mule. You can find the config file at $MULE_HOME/conf/wrapper.conf.
You will find something like this:
# Increase Permanent Generation Size from default of 64m
# Increase this value if you get "Java.lang.OutOfMemoryError: PermGen space error"
# This property is not used when running java 8 and may cause a warning.
wrapper.java.additional.7=-XX:PermSize=256m
wrapper.java.additional.8=-XX:MaxPermSize=256m
# GC settings
wrapper.java.additional.9=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.10=-XX:+AlwaysPreTouch
wrapper.java.additional.11=-XX:+UseParNewGC
wrapper.java.additional.12=-XX:NewSize=512m
wrapper.java.additional.13=-XX:MaxNewSize=512m
wrapper.java.additional.14=-XX:MaxTenuringThreshold=8
You can change these configurations as you like.
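Note that the error above ("java.lang.OutOfMemoryError: Java heap space") is a heap error rather than PermGen, so the heap settings are the relevant ones. A minimal sketch, assuming the standard Tanuki wrapper properties (the sizes in MB are illustrative):
# Initial and maximum JVM heap size, in MB
wrapper.java.initmemory=1024
wrapper.java.maxmemory=2048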

About Qpid exchanges and queues

Everyone:
I am new to Qpid and have run into a problem: the exchange I created can't route messages to the queue. Details follow.
First I create a durable queue "test-queue-1" in Qpid using the qpid-config command:
qpid-config add queue test-queue-1 --durable
Next I create a durable direct exchange "test-exchange-1" in Qpid, also using the qpid-config command:
qpid-config add exchange direct test-exchange-1 --durable
Last, I bind them with the following command:
qpid-config bind test-exchange-1 test-queue-1 test-queue-1
Everything seems OK in qpid-tool:
Object Summary:
ID Created Destroyed Index
========================================================================================
128 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-hb-iZ23c6sri0pZ.12680.1
129 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-iZ23c6sri0pZ.12680.1
130 12:28:28 - org.apache.qpid.broker:queue:qmfc-v2-ui-iZ23c6sri0pZ.12680.1
131 12:28:28 - org.apache.qpid.broker:queue:reply-iZ23c6sri0pZ.12680.1
132 12:24:17 - org.apache.qpid.broker:queue:test-queue-1
133 12:28:28 - org.apache.qpid.broker:queue:topic-iZ23c6sri0pZ.12680.1
116 12:27:20 -
and
org.apache.qpid.broker:binding:org.apache.qpid.broker:exchange:test-exchange-1,org.apache.qpid.broker:queue:test-queue-1,test-queue-1
Now I am ready to test them. I start the recv/send demo programs:
[devel#iZ23c6sri0pZ build]$ ./recv amqp://127.0.0.1/test-queue-1
Send the message:
[devel#iZ23c6sri0pZ build]$ ./send -a amqp://127.0.0.1/test-exchange-1 hi,everyone
but the "recv program” can’t recv any message.
if i send message like this :
[devel#iZ23c6sri0pZ build]$ ./send -a amqp://127.0.0.1/test-queue-1 hi,everyone
the “recv program” can recv the message:
Address: amqp://127.0.0.1/test-queue-1
Subject: Hello Subject
Content: "hi,everyone"
Who can tell me why? I've read the AMQP protocol; maybe the routing key in the message doesn't match the binding key, but if so, how do I set the routing key?
My recv/send programs are written with proton-c, version 0.8. qpidd is version 0.32.
When you send a message to a qpid direct exchange, it gets routed to a bound queue based on the routing-key of the message. In proton-c you can set the routing key by setting the message-subject using the function
PN_EXTERN int pn_message_set_subject(pn_message_t *msg, const char *subject)
Unfortunately this is not implemented in the example send.c that is shipped with proton-c v0.8. You can insert the following line (after the message object is created and before it is sent) and rebuild your send executable:
pn_message_set_subject(message, "my-routing-key");
You can also, with some effort, add a new command-line option to ./send that accepts and uses a routing key.
The Java example implements a -s option to set the message subject.
I too think it's a binding issue.
Try binding with the following:
qpid-config bind test-exchange-1 test-queue-1 test-exchange-1
@Feng Fang: "test-exchange-1" is then the routing key you use while sending a message. If that does not work, give "test-exchange-1/test-exchange-1" a try as the send address.
Keep the rest as-is and give it a try.
I hope this helps!
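Putting the two answers together, a minimal retest sketch; the routing key my-routing-key is an illustrative assumption, and the exchange/subject address form follows the suggestion above:
qpid-config bind test-exchange-1 test-queue-1 my-routing-key
./recv amqp://127.0.0.1/test-queue-1 &
./send -a amqp://127.0.0.1/test-exchange-1/my-routing-key hi,everyone   # the address suffix becomes the subject/routing key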
