How can I see my pending transactions in the BSC pending pool? - web3js

I'm currently trying to get data about BSC pending transactions, so I have been using these lines of code to watch for changes in the mempool:
# web3 is assumed to be a Web3 instance already connected to a BSC node
web3_filter = web3.eth.filter('pending')
transaction_hashes = web3.eth.getFilterChanges(web3_filter.filter_id)
for tx in transaction_hashes:
    Datatx = web3.eth.getTransaction(tx)
It seems to work: running this in a while loop, I can see new pending transactions as they are added to the pool. But when I execute a "swapExactTokensForETH" to test whether my own transaction shows up in the mempool, it doesn't appear. What am I doing wrong? Is there anything I have been missing?
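For reference, here is a minimal, self-contained version of the polling loop described above (a sketch only: BSC_NODE_URL is a placeholder for your own node endpoint, and the camelCase calls assume the same web3.py v5-style API used in the question):

import time
from web3 import Web3
from web3.exceptions import TransactionNotFound

BSC_NODE_URL = "http://127.0.0.1:8545"  # placeholder: point this at your own BSC node
web3 = Web3(Web3.HTTPProvider(BSC_NODE_URL))

web3_filter = web3.eth.filter('pending')
while True:
    # getFilterChanges returns only the hashes added since the last poll
    for tx_hash in web3.eth.getFilterChanges(web3_filter.filter_id):
        try:
            tx = web3.eth.getTransaction(tx_hash)
            print(tx['hash'].hex(), tx['from'], tx['to'], tx['value'])
        except TransactionNotFound:
            pass  # the transaction was dropped or mined between polls
    time.sleep(1)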

Related

Azure SQL DB, Extended Events using Ring Buffer stop on their own after a while - why?

I run a fairly basic Extended Events session on Azure SQL DB, using a ring buffer target, that looks for errors with severity over 10, based on a post Brent Ozar made. However, the session stops on its own, and I'm not sure why. I suspect it's filling up, even though all the documentation says the ring buffer is FIFO and will drop the oldest events. Am I missing something? Do I need to set something differently? Thanks!
Update: weirdly, I made a much smaller test, with max_memory = 200 and a much bigger amount of data in the failing code, to try and force it to stop/die, but I can see it cycling through events as I expected. So I'm still confused about why it's stopping, but it doesn't seem to be because it's filling up.
CREATE EVENT SESSION severity_10plus_errors_XE
ON DATABASE
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id,
            sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.username)
    WHERE ([severity] > 10)
)
ADD TARGET package0.ring_buffer
(
    SET max_memory = 10000   -- units of KB
)
WITH (MAX_DISPATCH_LATENCY = 60 SECONDS);
GO

ALTER EVENT SESSION severity_10plus_errors_XE
ON DATABASE
STATE = START;
GO
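When the session seems to stop on its own, it helps to first confirm whether it is still defined and actually running, and to peek at what the ring buffer currently holds. A quick check along these lines (a sketch using Azure SQL DB's database-scoped XE views; the session name matches the one above):

-- Is the session defined, and is it currently running?
SELECT s.name,
       CASE WHEN r.name IS NULL THEN 'stopped' ELSE 'running' END AS state
FROM sys.database_event_sessions AS s
LEFT JOIN sys.dm_xe_database_sessions AS r
       ON r.name = s.name
WHERE s.name = 'severity_10plus_errors_XE';

-- Peek at what the ring buffer currently holds.
SELECT CAST(t.target_data AS XML) AS ring_buffer_contents
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
     ON t.event_session_address = s.address
WHERE s.name = 'severity_10plus_errors_XE';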

How to fix System.LimitException: Apex CPU time limit exceeded caused by workflows with email alert?

I'm trying to execute a batched test method on 100 records and I get the CPU time limit error.
I placed the Limits.getCpuTime() method in my code and noticed that my code without the workflow segment takes 3148 ms to complete. However, when I activate two workflows that each send an email to one user, I get the CPU time limit error. In total, my process takes around 10 seconds to complete without those two workflows and around 20 seconds with them activated.
@IsTest
static void returnIncClientAddress(){
    // Select required records
    User incidentClient = [SELECT Id FROM User WHERE Username = 'bbaggins#shire.qa.com' LIMIT 1];
    BMCServiceDesk__Category__c category = [SELECT Id FROM BMCServiceDesk__Category__c WHERE Name = 'TestCategory'];
    BMCServiceDesk__BMC_BaseElement__c service = [SELECT Id FROM BMCServiceDesk__BMC_BaseElement__c WHERE Name = 'TestService'];
    BMCServiceDesk__BMC_BaseElement__c serviceOffering = [SELECT Id FROM BMCServiceDesk__BMC_BaseElement__c WHERE Name = 'TestServiceOffering'];

    // Create 100 incidents
    // (awaiting_for_handling is assumed to hold the Id of the relevant status record, set up elsewhere in the test class)
    List<BMCServiceDesk__Incident__c> incidents = new List<BMCServiceDesk__Incident__c>();
    for (Integer i = 0; i < 100; i++) {
        BMCServiceDesk__Incident__c incident = new BMCServiceDesk__Incident__c(
            BMCServiceDesk__FKClient__c = incidentClient.Id,
            BMCServiceDesk__FKCategory__c = category.Id,
            BMCServiceDesk__FKServiceOffering__c = serviceOffering.Id,
            BMCServiceDesk__FKBusinessService__c = service.Id,
            BMCServiceDesk__FKStatus__c = awaiting_for_handling
        );
        incidents.add(incident);
    }

    Test.startTest();
    insert incidents;
    Test.stopTest();
}
I expected the email workflows and alerts to be processed in batch and sent without being so expensive in CPU time, but it seems that Salesforce takes a lot of time both checking the workflows rules and executing on them when needed. The majority of the process' time seems to be spent on sending the workflows' emails (which it doesn't actually do because it's a test method).
There's not much you can do to control the execution time of Workflow Rules. You could try converting them into Apex and benchmarking to see whether that results in improvement in time consumed, but I suspect the real solution is that you're going to have to dial down your bulk test.
The CPU limit for a transaction is 10 seconds. If your unit test code is already taking approximately 10 seconds to complete without Workflows (I'm not sure exactly what bounds your 3148 ms and 10 s refer to), you've really got only two choices:
Make the sum total of automation running on insert of this object faster;
Reduce the quantity of data you're processing in this unit test.
It's not clear what you're actually testing here, but if it's an Apex trigger, you should make sure that it's properly bulkified and does not consume unnecessary CPU time, including through trigger recursion. Reviewing the call stack in your logs (or simply adding System.debug() statements) may help with that.
Lastly - make sure you write assertions in your test method. Test methods without assertions are close to worthless.
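As a rough illustration of the benchmarking and assertions suggested above (a sketch only; the debug message and the count assertion are illustrative, not part of the original test):

// Measure how much CPU time the insert (plus any triggers/workflows it fires) consumes.
Test.startTest();
Integer cpuBefore = Limits.getCpuTime();
insert incidents;
System.debug('CPU ms consumed by insert: ' + (Limits.getCpuTime() - cpuBefore));
Test.stopTest();

// Assert on the outcome so the test actually verifies something.
System.assertEquals(100, [SELECT COUNT() FROM BMCServiceDesk__Incident__c]);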
Are there triggers on BMCServiceDesk__Incident__c or on objects modified by the workflows? Triggers firing on those updates could cause the code to execute multiple times in the same execution context, pushing you over the CPU limit. Consider preventing re-entry into triggers, or adding a check so the trigger logic only runs when specific criteria are met, as sketched below.
Otherwise, consider refactoring the code so that work is done within a single loop where possible, since loops (and especially nested loops) drive up CPU usage. Workflows on their own usually don't push you toward the CPU limit unless triggers are executed again because of workflow field updates.
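A minimal re-entry guard might look like this (the class and trigger names are illustrative, not from the original org):

// Illustrative helper class: a static flag lives for the whole transaction,
// so the trigger body runs only once per execution context.
public class IncidentTriggerGuard {
    public static Boolean alreadyRan = false;
}

// Illustrative trigger using the guard.
trigger IncidentTrigger on BMCServiceDesk__Incident__c (before insert, before update) {
    if (IncidentTriggerGuard.alreadyRan) {
        return; // skip re-entry caused by workflow field updates
    }
    IncidentTriggerGuard.alreadyRan = true;
    // ... bulkified trigger logic here ...
}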

sql waits for sp/rpc/stmt completed and other events

In SQL Server Extended Events, for the sp/rpc/stmt completed events, it would be great if we could also include wait types like IO/network waits etc. Just as we can see reads/CPU/duration, if we could also get the other resource waits, we would have a good idea of why SQL is slow in scenarios where duration/CPU is high and reads are low.
You can actually track the wait statistics of a query, but the other way around: by tracking the waits themselves and tying them back to the query. Take a look at the following snippet:
CREATE EVENT SESSION [Wait Statistics] ON SERVER
ADD EVENT sqlos.wait_info
(
ACTION(sqlserver.database_name,sqlserver.session_id,sqlserver.sql_text)
WHERE (
opcode = N'End'
AND duration > 0
)
),
ADD EVENT sqlos.wait_info_external
(
ACTION(sqlserver.database_name,sqlserver.session_id,sqlserver.sql_text)
WHERE (
opcode = N'End'
AND duration > 0
)
)
Here we capture the end of every wait that has a duration greater than 0 (we capture the end because at that point SQL Server knows the wait's duration, so it can be included in the result). In the ACTION part we retrieve the database name, the text of the query that caused the wait, and the session id of the query.
Beware, though: tracking wait statistics through Extended Events (as opposed to sys.dm_os_wait_stats, which collects aggregated data) can generate a ton of data overwhelmingly fast. Should you choose this method, define very carefully which wait types you want to keep track of and from what duration onward a wait actually causes you a problem.
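Note that the snippet above only defines the events; to actually keep and query the captured waits you also need a target, and the session has to be started. A minimal follow-up sketch (the ring_buffer target and its size are illustrative choices, not part of the original answer):

ALTER EVENT SESSION [Wait Statistics] ON SERVER
ADD TARGET package0.ring_buffer (SET max_memory = 4096);  -- units of KB; keep it small

ALTER EVENT SESSION [Wait Statistics] ON SERVER
STATE = START;

-- Read back the captured waits as XML.
SELECT CAST(t.target_data AS XML) AS wait_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'Wait Statistics';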

Cassandra data reappearing

Inspired by this, I wrote a simple mutex on Cassandra 2.1.4.
Here is how the lock/unlock (pseudo)code looks:
public boolean lock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("INSERT INTO LOCK (id) VALUES (?) IF NOT EXISTS", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        ResultSet rs = session.execute(stmt);
        if (rs.wasApplied()) {
            return true;
        }
    } catch (Throwable t) {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt); // DATA DELETED HERE REAPPEARS!
    }
    return false;
}

public void unlock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt);
    } catch (Throwable t) {
    }
}
Now, in a high-load test I am able to recreate at will a situation where a WriteTimeoutException is thrown in lock(). This means the data may or may not have been written. After this my code deletes the lock, and again a WriteTimeoutException is thrown. However, the lock remains (or reappears).
Why is this?
Now, I know I can easily put a TTL on this table (for this use case), but how do I reliably delete that row?
My guess on seeing this code is that it makes a common error in distributed-systems programming: the assumption that, in case of failure, your attempt to correct the failure will succeed.
In the code above you check that the initial write is successful, but you don't make sure that the "rollback" is also successful. This can lead to a variety of unwanted states.
Let's imagine a few scenarios with Replicas A, B and C.
Client creates Lock but an error is thrown. The lock is present on all replicas but the client gets a timeout because that connection is lost or broken.
State of System
A[Lock], B[Lock], C[Lock]
We have an exception on the client and attempt to undo the lock by issuing a delete but this fails with an exception back at the client. This means the system can be in a variety of states.
0 Successful Writes of the Delete
A[Lock], B[Lock], C[Lock]
All quorum requests will see the Lock. There exists no combination of replicas which would show us the Lock has been removed.
1 Successful Writes of the Delete
A[Lock], B[Lock], C[]
In this case we are still vulnerable. Any quorum that excludes C will miss the deletion: if only A and B are polled, then we will still see the lock as existing.
2/3 Successful Writes of the Delete (Quorum CL Is Met)
A[Lock] (or A[]), B[], C[]
In this case we have once more lost the connection at the driver, but the delete nevertheless reached a quorum of replicas internally. These are the only scenarios in which we are actually safe and in which future quorum reads will not see the Lock.
Conclusion
One of the tricky things about situations like this is that if you failed to write your lock correctly because of network instability, it is also unlikely that your correction will succeed, since it has to work in the exact same environment.
This may be an instance where CAS operations can be beneficial, but in most cases it is better not to attempt distributed locking at all if you can avoid it.
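For completeness, one pragmatic mitigation, combined with the TTL the question already mentions, is to retry the delete a few times until a quorum acknowledges it. A sketch only, using the same 2.1-era DataStax Java driver API as the question (com.datastax.driver.core and its WriteTimeoutException); the retry count and backoff are illustrative:

// Best-effort cleanup: retry the delete; if it keeps timing out,
// the row will eventually expire via the table's TTL.
private void deleteLockWithRetry(Session session, String uuid) {
    for (int attempt = 0; attempt < 5; attempt++) {
        try {
            Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
            stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(stmt);
            return; // acknowledged by a quorum of replicas
        } catch (WriteTimeoutException e) {
            try {
                Thread.sleep(100L * (attempt + 1)); // simple backoff before retrying
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}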

What does the "Cannot commit when autoCommit is enabled" error mean?

In my program, I've got several threads in a pool that each try to write to the DB. The number of threads created is dynamic. When only one thread is created, everything works fine. However, when multiple threads are executing, I get the error:
org.apache.ddlutils.DatabaseOperationException: org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
I'm guessing that, since the threads execute in parallel, two of them are trying to write at the same time and that causes this error.
Do you think this is the case? If not, what could be causing this error?
And if what I described is the problem, what can I do to fix it?
In your JDBC code, turn off autocommit as soon as you obtain the connection, and commit (or roll back) explicitly when you are done. Something like this:
DataSource datasource = getDatasource(); // obtain your DataSource somehow
Connection c = null;
try {
    c = datasource.getConnection();
    c.setAutoCommit(false); // disable autocommit before any statements run
    // ... execute your inserts/updates here, then:
    c.commit();
} finally {
    if (c != null) c.close();
}
