I want to test failure scenarios for the ADB Adapter in TIBCO. The states are C, N, F, and P:
P = pending acknowledgement.
N = a new message has arrived but has not yet been published.
C = complete.
F = failed.
Can someone tell me in which scenarios I can get status F or P?
AFAIK, C stands for "confirmed", not "complete".
The message goes into the C state after a subscriber has confirmed its reception using the activity "Confirm".
So you can obtain a P state, for instance, by turning off the subscriber process while the publisher process is producing. All the messages sent but not yet consumed are in "P" (and you should find them in the ledger file).
I'm sorry, I've never seen the F state :-)
I'm currently trying to get data from the BSC pending transactions, so I have been using these lines of code to watch for changes in the mempool:
web3_filter = web3.eth.filter('pending')
transaction_hashes = web3.eth.getFilterChanges(web3_filter.filter_id)
for tx in transaction_hashes:
    Datatx = web3.eth.getTransaction(tx)
It seems to work: I can see new pending transactions being added to the pool, refreshed in a while loop. But when I execute a "swapExactTokensForETH" to test it, my tx doesn't appear in the mempool. What am I doing wrong? Is there anything I'm missing?
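For reference, the "refreshed in a while loop" polling described above might be structured like this. The helper function and its parameters are my own sketch, not part of the web3 API; it only wraps the same `filter` / `getFilterChanges` / `getTransaction` calls shown in the question:

```python
import time

def watch_pending(web3, handle_tx, polls=None, interval=2):
    """Poll a 'pending' filter and hand each new transaction to handle_tx.

    polls=None runs forever; an integer bounds the loop (useful for testing).
    """
    web3_filter = web3.eth.filter('pending')
    n = 0
    while polls is None or n < polls:
        # getFilterChanges returns only the hashes added since the last poll
        for tx_hash in web3.eth.getFilterChanges(web3_filter.filter_id):
            handle_tx(web3.eth.getTransaction(tx_hash))
        time.sleep(interval)
        n += 1
```

Running this against a real node requires a connected `web3` instance; the structure itself can be exercised with a stub.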
I have a BACnet system for HVAC controls where I am using the VOLTTRON actuator agent to write at BACnet priority 10 with a value of 2, which works well.
result = self.vip.rpc.call('platform.actuator', 'set_multiple_points', self.core.identity, set_multi_topic_values_master).get(timeout=20)
_log.debug(f'*** [Setter Agent INFO] *** - set_multiple_points ON ALL VAVs WRITE SUCCESS!')
Then the system sleeps for some time period for testing purposes:
_log.debug(f'*** [Setter Agent INFO] *** - SETTING UP GEVENT SLEEP!')
gevent.sleep(120)
_log.debug(f'*** [Setter Agent INFO] *** - GEVENT SLEEP DONE!')
After the gevent sleep, I run into an issue with the revert point not working. The code below executes just fine, but a BACnet scanning tool shows the priority 10 value of 2 still present on the HVAC controls, as if the revert point isn't doing anything.
for device in revert_topic_devices_jci:
    response = self.vip.rpc.call('platform.actuator', 'revert_point', self.core.identity, topic_jci, self.jci_setpoint_topic).get(timeout=20)
    _log.debug(f'*** [Setter Agent INFO] *** - REVERT POINTS ON {device} SUCCESS!')
_log.debug(f'*** [Setter Agent INFO] *** - REVERT POINTS JCI DONE DEAL SUCCESS!')
One thing I notice is that the building automation system writes occupancy/unoccupancy to the HVAC controls at BACnet priority 12. It's always either a 1 for occupied or a 2 for unoccupied.
What I am trying to do with VOLTTRON is write a value of 2 at BACnet priority 10, and then release it to nothing on the revert. Could it be that the revert isn't doing anything because there was nothing to revert to? I was hoping that VOLTTRON could write at BACnet priority 10 and then just release. With a BACnet scan tool I can do exactly that: write at priority 10, then release priority 10 by writing a null at priority 10.
Should I just be writing at priority 12, the same as the building automation system, so VOLTTRON can revert back to whatever the building automation was doing?
I have a few observations:
In your revert loop (the third code block above), you're not actually
changing the topic being passed to the RPC call. Each call uses
`topic_jci` and `self.jci_setpoint_topic`, neither of which is defined
inside the block, and neither of which appears to change between
iterations. It is likely worth setting some breakpoints and/or
debug statements here to be sure that you're passing the correct
topics to revert on.
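As a sketch, rebuilding the topic per device inside the loop might look like this. The topic prefix, point name, and device names here are placeholders, not values from the actual config, and the RPC call is left commented out:

```python
# Hypothetical values -- substitute your own building topic and point name
building_topic = 'devices/campus/building'
setpoint_topic = 'occupancy-request'
devices = ['VMA-2-6', 'VMA-2-4', 'VMA-2-7']

revert_topics = []
for device in devices:
    # Rebuild the full topic for THIS device on every iteration
    full_topic = '/'.join([building_topic, device, setpoint_topic])
    revert_topics.append(full_topic)
    # self.vip.rpc.call('platform.actuator', 'revert_point',
    #                   self.core.identity, full_topic).get(timeout=20)
```

The key point is that the topic string handed to each RPC call must incorporate the loop variable.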
Your use of priority appears to be consistent
with the BACnet protocol specification and with the VOLTTRON BACnet
driver implementation. We would not recommend that you attempt to
write at the same priority as an existing building automation
system.
The BACnet driver code sends a NULL (None) value in a "writeProperty"
service request when the "revert_point" function is called by the
Platform Driver. I am frankly not terribly familiar with this
functionality, but given that your scan tool performs the expected
revert when passed a NULL value, I suspect this is the expected
way of performing a "revert to previous value" operation in the BACnet protocol.
I do not have reason to believe that the behavior you're experiencing is the
result of a bug in the driver code base.
Overall, I suggest debugging the topics being passed in the "revert_point" RPC call.
I had good luck reverting points by using set_multiple_points with None values.
Something like this:
self.jci_device_map = {
    'VMA-2-6': '27',
    'VMA-2-4': '29',
    'VMA-2-7': '30',
    'VMA-1-8': '6',
}
revert_multi_topic_values_master = []
set_multi_topic_values_master = []
for device in self.jci_device_map.values():
    topic_jci = '/'.join([self.building_topic, device])
    final_topic_jci = '/'.join([topic_jci, self.jci_setpoint_topic])
    # BACnet enum point for VAV occupancy: 1 == occupied, 2 == unoccupied
    # create a (topic, value) tuple and add it to our topic lists
    set_multi_topic_values_master.append((final_topic_jci, self.unnoccupied_value))  # to set unoccupied
    revert_multi_topic_values_master.append((final_topic_jci, None))  # None reverts the point

result = self.vip.rpc.call('platform.actuator', 'set_multiple_points', self.core.identity, revert_multi_topic_values_master).get(timeout=20)
I would like some information about configuring the publisher in GCP Pub/Sub. I want to enqueue messages that will be consumed by a Cloud Function. To achieve this, publishing should trigger when a certain number of messages is reached or after a certain time.
I set the topic as follows :
topic.PublishSettings = pubsub.PublishSettings{
    ByteThreshold:  1e6,              // Publish a batch when its size in bytes reaches this value (1e6 = 1 MB).
    CountThreshold: 100,              // Publish a batch when it has this many messages.
    DelayThreshold: 10 * time.Second, // Publish a non-empty batch after this delay has passed.
}
When I call the Publish function, there is a 10-second delay on each call. Messages are not being added to the queue ...
for _, v := range list {
    ctx := context.Background()
    res := a.Topic.Publish(ctx, &pubsub.Message{Data: v})
    // Block until the result is returned and a server-generated
    // ID is returned for the published message.
    serverID, err = res.Get(ctx)
    if err != nil {
        return "", err
    }
}
Can someone help me?
Cheers
Batching on the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being charged for 2KB of data (1KB for each). If one were to batch them into a single Publish request, it would be charged as 1KB of data.
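That billing arithmetic can be sketched as follows; this is an illustration of the 1 KB minimum-unit rounding described above, not the official pricing formula:

```python
import math

MIN_BILLING_UNIT_KB = 1

def billed_kb(message_sizes_kb, batched):
    """Approximate billed KB: each Publish request is rounded up to 1 KB."""
    if batched:
        # One Publish request carrying the whole batch
        return max(MIN_BILLING_UNIT_KB, math.ceil(sum(message_sizes_kb)))
    # One Publish request per message
    return sum(max(MIN_BILLING_UNIT_KB, math.ceil(s)) for s in message_sizes_kb)

# Two 0.5 KB messages: 2 KB billed separately, 1 KB billed as one batch
assert billed_kb([0.5, 0.5], batched=False) == 2
assert billed_kb([0.5, 0.5], batched=True) == 1
```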
The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.
As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take a while to receive 100 messages to send as a batch. This means that the latency for messages in that batch will be higher because they are sitting in the client waiting to be sent. With DelayThreshold set to 10 seconds, a batch would be sent if it had 100 messages in it or if the first message in the batch was received at least 10 seconds ago. Therefore, this puts a limit on the amount of latency introduced in order to have more data in an individual batch.
The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:
1. A call to Publish puts a message in a batch to publish.
2. That batch waits to receive 99 more messages or for 10 seconds to pass before being sent to the server.
3. The code waits for this message to be sent to the server and to return with a serverID.
4. Given the code doesn't call Publish again until res.Get(ctx) returns, the batch waits 10 seconds before it is sent.
5. res.Get(ctx) returns with a serverID for the single message.
6. Go back to 1.
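The net effect on throughput is easy to quantify: with one-message batches each flushed only after DelayThreshold, total publish time grows linearly with the number of messages. A rough model, ignoring network round-trip time:

```python
DELAY_THRESHOLD_S = 10  # matches the DelayThreshold in the settings above

def sequential_publish_seconds(n_messages):
    # Each res.Get blocks until its single-message batch flushes after
    # DelayThreshold, so the loop pays the full delay once per message.
    return n_messages * DELAY_THRESHOLD_S

assert sequential_publish_seconds(100) == 1000  # nearly 17 minutes for 100 messages
```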
If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want either to call Publish inside a goroutine (one goroutine per message) or to amass the res objects in a list and call Get on them outside the loop, e.g.:
var res []*pubsub.PublishResult
ctx := context.Background()
for _, v := range list {
    res = append(res, a.Topic.Publish(ctx, &pubsub.Message{Data: v}))
}
for _, r := range res {
    serverID, err = r.Get(ctx)
    if err != nil {
        return "", err
    }
}
Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.
Inspired by this, I wrote a simple mutex on Cassandra 2.1.4.
Here is how the lock/unlock (pseudo)code looks:
public boolean lock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("INSERT INTO LOCK (id) VALUES (?) IF NOT EXISTS", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        ResultSet rs = session.execute(stmt);
        if (rs.wasApplied()) {
            return true;
        }
    } catch (Throwable t) {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt); // DATA DELETED HERE REAPPEARS!
    }
    return false;
}
public void unlock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt);
    } catch (Throwable t) {
    }
}
Now, I am able to recreate at will a situation where a WriteTimeoutException is thrown in lock() under a high-load test. This means the data may or may not have been written. After this my code deletes the lock, and again a WriteTimeoutException is thrown. However, the lock remains (or reappears).
Why is this?
Now I know I can easily put a TTL on this table (for this use case), but how do I reliably delete that row?
My guess on seeing this code is that it makes a common error in distributed-systems programming: the assumption that, in case of failure, your attempt to correct the failure will succeed.
In the code above you check that the initial write is successful, but you don't make sure that the "rollback" is also successful. This can lead to a variety of unwanted states.
Let's imagine a few scenarios with Replicas A, B and C.
Client creates the Lock but an error is thrown. The lock is present on all replicas, but the client gets a timeout because the connection is lost or broken.
State of System
A[Lock], B[Lock], C[Lock]
We have an exception on the client and attempt to undo the lock by issuing a delete but this fails with an exception back at the client. This means the system can be in a variety of states.
0 Successful Writes of the Delete
A[Lock], B[Lock], C[Lock]
All quorum requests will see the Lock. There exists no combination of replicas which would show us the Lock has been removed.
1 Successful Writes of the Delete
A[Lock], B[Lock], C[]
In this case we are still vulnerable. Any quorum that excludes C will miss the deletion: if only A and B are polled, then we'll still see the lock as existing.
2 or 3 Successful Writes of the Delete (Quorum CL Is Met)
A[Lock or empty], B[], C[]
In this case we have once more lost the connection to the driver, but internally the delete succeeded in replicating to at least a quorum. These are the only scenarios in which we are actually safe: every quorum read now includes at least one replica that saw the delete, so future reads will not see the Lock.
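The quorum arithmetic in the scenarios above can be checked with a small simulation: 3 replicas, read quorum of 2, where a quorum read misses the delete only if none of the polled replicas received it:

```python
from itertools import combinations

REPLICAS = {'A', 'B', 'C'}
QUORUM = 2  # read quorum size for replication factor 3

def still_vulnerable(deleted):
    """True if some quorum read can miss the delete entirely.

    A quorum read observes the delete as long as at least one polled
    replica received it, since the delete's newer timestamp wins on read.
    """
    return any(not (set(q) & deleted) for q in combinations(REPLICAS, QUORUM))

assert still_vulnerable(set())             # 0 deletes: every quorum sees the lock
assert still_vulnerable({'C'})             # 1 delete: polling A+B misses it
assert not still_vulnerable({'B', 'C'})    # quorum of deletes: safe
assert not still_vulnerable({'A', 'B', 'C'})
```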
Conclusion
One of the tricky things with situations like this is that if you fail to make your lock correctly because of network instability, it is also unlikely that your correction will succeed, since it has to work in the exact same environment.
This may be an instance where CAS operations can be beneficial. But in most cases it is better not to attempt distributed locking if at all possible.
In my program, I have several threads in a pool that each try to write to the DB. The number of threads created is dynamic. When only one thread is created, all works fine. However, when multiple threads are executing, I get the error:
org.apache.ddlutils.DatabaseOperationException: org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
I'm guessing that, since the threads execute in parallel, two threads are trying to write at the same time, which produces this error.
Do you think this is the case? If not, what could be causing this error?
Otherwise, if what I said is the problem, what can I do to fix it?
In your JDBC code, you should turn off autocommit as soon as you fetch the connection. Something like this:
DataSource datasource = getDatasource(); // fetch your datasource somehow
Connection c = null;
try {
    c = datasource.getConnection();
    c.setAutoCommit(false);
    // ... execute your statements ...
    c.commit(); // or c.rollback() on error
} finally {
    if (c != null) {
        c.close(); // may throw SQLException; declare or handle it
    }
}