TDengine database 2.6 cluster failed to build

The first node already has data. I am now trying to add a second node, tdengine-server-b, but its status shows "not received" and it cannot be added to the cluster.
How can I add the second node tdengine-server-b?
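For reference (not from the original post), a rough sketch of the usual way to add a dnode in TDengine 2.x, with placeholder hostnames and the default port, assuming the FQDNs resolve from every node:
# /etc/taos/taos.cfg on tdengine-server-b (example values)
# firstEp must point at a node already in the cluster
firstEp    tdengine-server-a:6030
# fqdn must match the name other nodes resolve for this host
fqdn       tdengine-server-b
serverPort 6030
Then, from the taos shell on the first node:
CREATE DNODE "tdengine-server-b:6030";
SHOW DNODES;
If the new dnode stays offline in SHOW DNODES, mismatched or unresolvable FQDNs between the hosts are the most common cause.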

Related

How to change the WHERE clause dynamically in SOQL in the Salesforce node in Node-RED

How do I take a parameter from a POST API and use it in the WHERE clause of a SOQL query in Node-RED?
In my case I am passing 'name' from Postman, and in Node-RED I have connected the Salesforce SOQL node to an http (POST) node. I am trying to change the WHERE clause to where name = '{msg.payload.name}', but it is not working.
How do I do it?
Assuming this is for the SOQL node in the node-red-contrib-salesforce collection of nodes.
The doc for this node explains that the query can be passed in via the msg.query parameter.
The query can be configured in the node, however if left blank, the query should be set in an incoming message on msg.query. See the Salesforce SOQL documentation for more information.
So to do what you want you will have to build the query in a function node or a change node before passing it into the SOQL node.
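A minimal sketch of such a function node, assuming the records live in a hypothetical Contact object (the object and field names are examples, not from the original question):
// Function node wired between the http in (POST) node and the SOQL node
const name = String(msg.payload.name || '');
// escape single quotes so the value cannot break out of the SOQL string literal
const safeName = name.replace(/'/g, "\\'");
msg.query = "SELECT Id, Name FROM Contact WHERE Name = '" + safeName + "'";
return msg;
The SOQL node then executes whatever arrives on msg.query, so the http response node can be wired after it as usual.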

Unable to setup replication in ClickHouse using Zookeeper

I've spent the past two days trying to set up replication in ClickHouse, but whatever configuration I try I end up with the same behavior.
I'm able to create a ReplicatedMergeTree table on the first node and insert data into it. Then I create a replica on the second node. The data gets replicated and I can see it when querying the second node. But when I insert data into the second node, the weird behavior starts: the data is not copied to the first node, which logs the following error:
2017.11.14 11:16:43.464565 [ 30 ] <Error> DB::StorageReplicatedMergeTree::queueTask()::<lambda(DB::StorageReplicatedMergeTree::LogEntryPtr&)>: Code: 33, e.displayText() = DB::Exception: Cannot read all data, e.what() = DB::Exception,
It is very similar to this issue on GitHub.
When I restart the first node, it is able to load the new data inserted on the second node and seems to work. However, inserting some more data brings up the same error again.
The most recent setup I tried:
Following the tutorial, I have a three-node Zookeeper cluster with the following config:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zoo2/data
clientPort=12181
server.1=10.201.1.4:2888:3888
server.2=0.0.0.0:12888:13888
server.3=10.201.1.4:22888:23888
The Zookeeper config for ClickHouse looks like this:
<?xml version="1.0"?>
<yandex>
    <zookeeper>
        <node>
            <host>10.201.1.4</host>
            <port>2181</port>
        </node>
        <node>
            <host>10.201.1.4</host>
            <port>12181</port>
        </node>
        <node>
            <host>10.201.1.4</host>
            <port>22181</port>
        </node>
    </zookeeper>
</yandex>
I create all tables like this:
CREATE TABLE t_r (
    id UInt32,
    d Date
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/t_r', '03', d, (d, id), 8192);
The only difference across all replicas is the replica id '03', which is set accordingly on each node.
Thanks for any advice!
Actually, I figured out the issue myself. Thanks to @egorlitvinenko I went through all the configs again and noticed that I had set the same interserver_http_port for all three nodes. It would not be a problem if the nodes were running on separate machines, but in my test scenario they run side by side on the same OS.
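For anyone hitting the same thing: interserver_http_port is set in each instance's config.xml and must be unique per instance when several ClickHouse servers share one host. A sketch with arbitrary example ports:
<!-- config.xml of the first instance -->
<interserver_http_port>9009</interserver_http_port>
<!-- config.xml of the second instance -->
<interserver_http_port>9010</interserver_http_port>
<!-- config.xml of the third instance -->
<interserver_http_port>9011</interserver_http_port>
Each instance also needs its own tcp_port and http_port for the same reason.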
ReplicatedMergeTree('/clickhouse/tables/t_r', '03', d, (d, id), 8192);
You should configure a unique Zookeeper replica id for each replica. Currently you use '03' on every node, which is not correct.
In the tutorial, {replica} refers to a macro, which is configured in the ClickHouse config file on each node.
See https://clickhouse.yandex/docs/en/table_engines/replication.html#replicatedmergetree
P.S. For further help, please provide the configs of all nodes.
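To illustrate (the replica value below is just an example), each node's config would carry its own macros section:
<yandex>
    <macros>
        <replica>replica_01</replica>
    </macros>
</yandex>
and the table would then be created on every node with the macro instead of a hard-coded id:
CREATE TABLE t_r (
    id UInt32,
    d Date
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/t_r', '{replica}', d, (d, id), 8192);
With that in place, the Zookeeper path stays the same across replicas while the replica name differs on each node.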

SymmetricDS Could Not Find Batch to Acknowledge as OK

I'm doing a bi-directional push across three tiers of nodes.
Why are the 1st and 2nd tier nodes spamming errors like this?
The 1st tier node is logging:
"IncomingBatchService - Skipping batch x"
"DataLoaderService - x data and x batches loaded during push request from 2nd tier. There were x batches in error."
The 2nd tier node is logging:
"PushService - Push data sent to 3rd tier"
"AcknowledgeService - Could not find batch to acknowledge as OK"
"PushService - Pushed data to 3rd tier. x data and x batches were processed"
After checking the DBs:
On the 2nd tier node, the batch points to the 3rd tier node with LD status on the reload channel. There is no batch with the same id pointing to the 1st tier node.
On the 1st tier node, the batch points to the 2nd tier node with OK status on the reload channel.
Help, thank you.
There must be logs on the target nodes with the exceptions thrown by the data loader while trying to load the batches in error. Find them and they'll tell you what's wrong.
There was a mistake in the 3rd tier node's configuration: sync.url should be http://<3rd_tier_node_IP>/sync/<engine.name>
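As an illustration (host, port, and engine name are placeholders), the relevant lines in the 3rd tier node's engine .properties file would look like:
# engine properties file of the 3rd tier node (placeholder values)
engine.name=tier3-001
# URL that other nodes use to reach this node
sync.url=http://10.0.0.3:31415/sync/tier3-001
The path segment after /sync/ must match that node's engine.name.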

Drupal 7: How can I see the number of views of a node?

I'm using Drupal 7. After enabling the Statistics module, I see, under each node, how many times it has been read (e.g. "4 reads").
I need to know in which database table these views (e.g. "4 reads") are saved.
I need to know where this is stored so that I can use it in my SQL.
This data is stored in the database table node_counter, in the totalcount field.
You can use function statistics_get() to get the total number of times the node has been viewed.
Example:
// the node nid
$nid = 1;
// get the node statistics
$node_stats = statistics_get($nid);
// get the count of the node reads
$node_reads = $node_stats['totalcount'];
Or, if you need to access it directly with SQL code,
SELECT totalcount FROM node_counter WHERE nid = 1;

New Solr node in "Active - Joining" state for several days

We are trying to add a new Solr node to our cluster:
DC Cassandra
Cassandra node 1
DC Solr
Solr node 1 <-- new node (actually, a replacement for an old node; we followed the steps for "replacing a dead node")
Solr node 2
Solr node 3
Solr node 4
Solr node 5
Our Cassandra data is approximately 962 GB. The replication factor is 1 for both DCs. Is it normal for the new node to be in the "Active - Joining" state for several days? Is there a way to track the progress?
Last week there was a time when we had to kill and restart the DSE process because it began throwing "too many open files" exceptions. Right now the system log is full of messages about completed compaction/flushing tasks (no errors so far).
EDIT:
The node is still in the "Active - Joining" state as of this moment. It's been exactly a week since we restarted the DSE process on that node. I started monitoring the size of the solr.data directory yesterday, and so far I haven't seen an increase. The system.log is still filled with compaction/flushing messages.
One thing that bothers me is that in OpsCenter Nodes screen (ring/list view), the node is shown under the "Cassandra" DC even though the node is a Solr node. In nodetool status, nodetool ring, and dsetool ring, the node is listed under the correct DC.
EDIT:
We decided to restart the bootstrap process from scratch by deleting the data and commitlog directories. Unfortunately, during the subsequent bootstrap attempt:
The stream from node 3 to node 1 (the new node) failed with an exception: ERROR [STREAM-OUT-/] 2014-04-01 01:14:40,887 CassandraDaemon.java (line 196) Exception in thread Thread[STREAM-OUT-/,5,main]
The stream from node 4 to node 1 never started. The last relevant line in node 4's system.log is: StreamResultFuture.java (line 116) Received streaming plan for Bootstrap. It should have been followed by: Prepare completed. Receiving 0 files(0 bytes), sending x files(y bytes)
How can I force those streams to be retried?
