Solidity try/catch showing "execution reverted" on internal tx on Etherscan

I have a function to check if a given token has a Uniswap factory address:
address possibleFactoryAddress;
try IUniswapV2Pair(token).factory() returns (address factory) {
    possibleFactoryAddress = factory;
} catch {}
Now this code is working perfectly: the catch block is executed if the given token does not return an address through its factory() function (probably because it does not have that function). However, when the tx shows up on Etherscan, it has a big yellow exclamation mark on it and a scary warning:
Although one or more Error Occurred [execution reverted] Contract Execution Completed
I know that there is nothing wrong with the code - the tx completed without reverting and all state changes are stored properly - but is it possible for Etherscan to NOT show this big warning? I'm afraid it'd confuse a lot of non-technical people and scare them away.
Thank you.

Related

Cypress not resolving localStorage item, although it shows in local storage

I'm trying to test my React application. Some data is saved in local storage, and I want to make sure that this works. So I started writing tests with Cypress. Really cool library - it was so fun that I already wrote ~50 tests. But it started showing problems with local storage.
My code:
describe("delete account",()=>{
it("delete",()=>{
cy.visit("/")
assert.equal(localStorage.getItem("---current---"), null)
const username = 'abcd1234'
cy.get('[data-cy=username-input]').click().type(username)
cy.get('[data-cy=login-button]').click()
cy.wait(4*1000)
cy.log({...localStorage})
// assert.equal(localStorage.getItem(currentPlayerLS), username)
})
})
My goal was to check that when the user logs in, the test reads local storage and checks whether the ---current--- player value matches or not. But it raises an exception saying expected null to equal 'abcd1234'.
I guessed there was some issue with resolving, so I even added a delay of 4 seconds.
I also logged the whole {...localStorage}; it says it is an empty object. But the browser's local storage shows there is a value stored there.
I'm not sure how to handle it!! Can anyone help me?
Here is the snap:
The log at cy.log({...localStorage}) takes its value before the test runs.
You should use this to get the value after the login.
cy.then(() => cy.log({...localStorage}))
As for the final assert, try directly using the key ---current--- in case currentPlayerLS is something else (there is some indication that it is).
assert.equal(localStorage.getItem('---current---'), username)
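To have both the log and the assertion read local storage after the login has actually happened, the whole check can be moved into the command chain. A minimal sketch of that idea, reusing the selectors and the ---current--- key from the question (your app's actual timing may differ); this also avoids the fixed cy.wait by letting the assertion retry:
describe("delete account", () => {
  it("stores the current player after login", () => {
    cy.visit("/")
    const username = 'abcd1234'
    cy.get('[data-cy=username-input]').click().type(username)
    cy.get('[data-cy=login-button]').click()
    // .should() with a callback retries until it passes (or times out),
    // so localStorage is read after the login has actually completed.
    cy.window().should((win) => {
      expect(win.localStorage.getItem('---current---')).to.equal(username)
    })
    // Log the storage contents at this point in the command chain.
    cy.then(() => cy.log(JSON.stringify({ ...localStorage })))
  })
})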

Weird (intermittent) "Can't set headers after they are sent" error

I have a very strange intermittent problem, and one that I just cannot understand at all. I'm not certain whether it is code related, a bug in Express, or just me missing something.
I have an app, all in MEAN, that like so many other apps does a bunch of API calls, some even in parallel.
It all worked perfectly this morning (actually, see "where it gets REALLY weird" below), but then all of a sudden it stopped working and the server started falling over with the error below:
GET /api/skillList 304 45.345 ms - -
_http_outgoing.js:335
throw new Error('Can\'t set headers after they are sent.');
^
Error: Can't set headers after they are sent.
at ServerResponse.OutgoingMessage.setHeader (_http_outgoing.js:335:11)
at ServerResponse.header (/Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/express/lib/response.js:695:10)
at ServerResponse.json (/Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/express/lib/response.js:232:10)
at /Users/bengtbjorkberg/WebstormProjects/ResourceEdge/routes/api.js:78:9
at /Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/mongoose/node_modules/kareem/index.js:160:11
at Query._findOne (/Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/mongoose/lib/query.js:1145:12)
at /Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/mongoose/node_modules/kareem/index.js:156:8
at /Users/bengtbjorkberg/WebstormProjects/ResourceEdge/node_modules/mongoose/node_modules/kareem/index.js:18:7
at process._tickCallback (node.js:355:11)
Process finished with exit code 1
What I have checked:
I threw in tons of log messages; it looks like it falls over at different places, or I am missing one of them.
I spent a lot of time looking at the last call (/api/skilllist), which seems to be working quite OK.
Where it gets REALLY weird
If I start the developer console in Chrome, the problem DOES NOT OCCUR, which is probably why it worked all day but then stopped working when I wanted to show it to someone...
While I was writing this, I realised that I could use Safari, and it falls over with the JavaScript console on, and it is the same line server side. And it's the database call below. But if I turn the console on in Chrome, it starts working... What am I missing?
exports.canlist = function (req, res) {
    // use mongoose to get all profiles in the database
    console.log("Canlist called");
    Profile.find({}, {'_id': 1, 'alias': 1, 'img': 1, 'summary': 1, 'keys': 1}, function (err, profiles) {
        // if there is an error retrieving, send the error. nothing after res.send(err) will execute
        if (err) {
            console.log("Error " + err)
            res.send(err)
        }
        console.log("Sending back " + profiles.length + " profiles")
        res.json(profiles); // return all todos in JSON format
    });
};
So, after much faffing about, I figured it out. Not why it works when the JavaScript console is open in Chrome, though.
Basically, the res.send(err) command is wrong.
First, it's missing a return.
Secondly, I THINK you are supposed to use the longer form, like this:
return res.status(500).send(err);
return res.json(profiles);
I think the problem was that it was confusing Express, which probably does some pre-processing, which may explain why it "seems" to blow up at all kinds of times.
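Put together, the handler from the question would then look roughly like this - a sketch under the same Mongoose setup, not a drop-in tested fix:
exports.canlist = function (req, res) {
    console.log("Canlist called");
    Profile.find({}, {'_id': 1, 'alias': 1, 'img': 1, 'summary': 1, 'keys': 1}, function (err, profiles) {
        if (err) {
            console.log("Error " + err);
            // return here so execution never falls through and sends a second response
            return res.status(500).send(err);
        }
        console.log("Sending back " + profiles.length + " profiles");
        return res.json(profiles);
    });
};
Without the return, the error branch sends a response and then falls through to res.json(profiles), which is exactly the "headers after they are sent" situation.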
(Would have kept quiet about this, as it is a bit of a ridiculous mistake, but if it saves someone a few hours of staring at the screen I might get a Christmas card this year - or at least get to tell Santa that I've been a good boy.)

SignalR server doesn't consistently call methods on client

I have an AngularJS application that I intend to have receive communications via SignalR from the server, most notably when data changes and I want the client to refresh itself.
The following is my hub logic:
[HubName("update")]
public class SignalRHub : Hub
{
public static void SendDataChangedMessage(string changeType)
{
var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHub>();
context.Clients.All.ReceiveDataChangedMessage(changeType);
}
}
I use the following within my API after the data operation has successfully occurred to send the message to the clients:
SignalRHub.SendDataChangedMessage("newdata");
Within my AngularJS application, I create a service for SignalR with the following JavaScript that's referenced in the HTML page:
angular.module('MyApp').value('signalr', $.connection.update);
Within the root for the AngularJS module, I set this up with the following so that it starts and I can see the debug output:
$(function () {
    $.connection.hub.logging = true;
    $.connection.hub.start();
});
$.connection.hub.error(function (err) {
    console.log('An error occurred: ' + err);
});
Then I've got my controller. It's got all sorts of wonderful things in it, but I'll show the basics as relate to this issue:
angular.module('MyApp').controller('MyController', function ($scope, signalr) {
    signalr.client.ReceiveDataChangedMessage = function dataReceived(changeType) {
        console.log('DataChangedUpdate: ' + changeType);
    };
});
Unfortunately, when I set a breakpoint in the JavaScript, this never executes, though the rest of the program works fine (including performing the operation in the API).
Some additional (hopefully) helpful information:
If I set a breakpoint in the SignalRHub class, the method is successfully called as expected and throws no exceptions.
If I look at Fiddler, I can see the polling operations but never see any sign of the call being sent to the client.
The Chrome console shows that the AngularJS client negotiates the websocket endpoint, it opens it, initiates the start request, transitions to the connected state, and monitors the keep alive with a warning and connection lost timeout. There's no indication that the client ever disconnects from the server.
I reference the proxy script available at http://localhost:port/signalr/hubs in my HTML file so I disregard the first error I receive stating that no hubs have been subscribed to. Partly because the very next message in the console is the negotiation with the server and if I later use '$.connection.hub' in the console, I'll see the populated object.
I appreciate any help you can provide. Thanks!
It's not easy to reproduce here, but it's likely that the controller function is invoked after the connection has started. You can verify this with a couple of breakpoints on the first line of the controller and on the start call. If I'm right, that's why you are never called back: the callback on the client member must be defined before starting the connection. Try restructuring your code a bit to ensure the right order.
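A minimal sketch of that ordering, using the generated proxy and the update hub name from the question (how you fold this into the Angular service/run block is up to you):
// Register the client-side handler BEFORE the connection is started,
// otherwise the server has no client method to invoke.
$.connection.update.client.ReceiveDataChangedMessage = function (changeType) {
    console.log('DataChangedUpdate: ' + changeType);
};

$.connection.hub.logging = true;
$.connection.hub.start()
    .done(function () {
        console.log('Connected, id: ' + $.connection.hub.id);
    })
    .fail(function (err) {
        console.log('Could not connect: ' + err);
    });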

store all the errors occurred on eval() in PHP

Stack community.
I'm using the eval() function in PHP so my users can execute their own code on my website (yes, I know it is a dangerous function, but that's not the point).
I want to store all the PHP errors that occur during the interpretation of the code. Is there a way to fetch all of them? I want to get them and register them in a table of my database.
error_get_last() gets only the last error, but I want all of them.
Help me, please. Is it even possible?
General
You cannot use eval() for this, as the evaled code will run in the current context, meaning that the evaled code can overwrite all vars in your context. Aside from security considerations, this could/would break functionality. Check this imaginary example:
$mode = 'execute';
// here comes a common code example, it will overwrite `$mode`
eval('
    $mode = "test";
    if (...) { ...
');
// here comes your code again, will fail
switch ($mode) {
    ...
}
Error Tracking
You cannot track the errors this way. One method would be to use set_error_handler() to register a custom error handler which stores the errors to the DB. This would work, but what if the user tampers with the error handler in their own code? Check the following examples:
set_error_handler('my_handler');

function my_handler($errno, $errstr, $errfile, $errline) {
    db_save($errstr, ...);
}

eval('
    $a = 1 / 0; // will trigger a warning
    echo $b;    // variable not defined
');
This would work. But problems will arise if you have evaled code like this:
eval('
    restore_error_handler();
    $a = 1 / 0; // will trigger a warning
    echo $b;    // variable not defined
');
Solution
A common solution to make it possible for others to execute code on your servers is:
store the user code in a temporary file
disable critical functions like fopen() ... in the php.ini (disable_functions)
execute the temporary PHP file with php-cli and display the output (and errors) to the user
if you separate stderr from stdout when calling php-cli, you can parse the error messages and store them in a DB (see the sketch below)
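The exact wiring depends on your stack; purely as an illustration of the stdout/stderr separation (sketched here with Node's child_process rather than PHP's proc_open, and assuming the php binary is on the PATH):
const { execFile } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

// Write the untrusted user code to a temporary file.
const tmpFile = path.join(os.tmpdir(), 'user-code.php');
fs.writeFileSync(tmpFile, '<?php echo $undefined; echo "done";');

// Run it in a separate php-cli process; program output arrives on stdout,
// while display_errors=stderr routes the error messages to stderr.
execFile('php', ['-d', 'display_errors=stderr', tmpFile], (err, stdout, stderr) => {
    console.log('program output:', stdout);   // show this to the user
    console.log('collected errors:', stderr); // parse/store these in the DB
});
The same separation can be done from PHP itself with proc_open() and separate pipes.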
According to the documentation, you just can't:
If there is a parse error in the evaluated code, eval() returns FALSE and execution of the following code continues normally. It is not possible to catch a parse error in eval() using set_error_handler().
EDIT: you can't do it with eval(), but you apparently can with the php_check_syntax() function. You have to write the code to a file in order to check its syntax.

Indexeddb: Differences between onsuccess and oncomplete?

I use two different events for the callback that responds when an IndexedDB transaction finishes or is successful:
Let's say... db : IDBDatabase object, tr : IDBTransaction object, os : IDBObjectStore object
tr = db.transaction(os_name, 'readwrite');
os = tr.objectStore(os_name);
case 1:
r = os.openCursor();
r.onsuccess = function () {
    if (r.result) {
        callback_for_result_fetched();
        r.result.continue();
    } else callback_for_transaction_finish();
}
case 2:
tr.oncomplete = callback_for_transaction_finish;
It is a waste if both of them work similarly. So can you tell me, is there any difference between them?
Sorry for raising quite an old thread, but its question is a good starting point...
I've looked for a similar question with a slightly different use case and actually found no good answers, or even misleading ones.
Think of a use case where you need to make several writes into an objectStore, or even into several stores. You definitely don't want to manage each single write and its own success and error events. That is the point of a transaction, and this is the (proper) way to use it with IndexedDB:
var trx = dbInstance.transaction([storeIdA, storeIdB], 'readwrite'),
    storeA = trx.objectStore(storeIdA),
    storeB = trx.objectStore(storeIdB);

trx.oncomplete = function (event) {
    // this code will run only when ALL of the following requests succeed
    // and only AFTER ALL of them were processed
};

trx.onerror = function (error) {
    // this code will run if ANY of the following requests fails
    // and only AFTER ALL of them were processed
};

storeA.put({ key: keyA, value: valueA });
storeA.put({ key: keyB, value: valueB });
storeB.put({ key: keyA, value: valueA });
storeB.put({ key: keyB, value: valueB });
The clue to this understanding is found in the following statement of the W3C spec:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
While it's true that these callbacks function similarly, they are not the same: the difference between onsuccess and oncomplete is that transactions complete, whereas requests, which are made on those transactions, are successful.
oncomplete is only defined in the spec as related to a transaction. A transaction doesn't have an onsuccess callback.
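To make the distinction concrete, here is a small sketch (the 'players' store and its keyPath are made-up names, and db is assumed to be an already-open IDBDatabase): each request gets its own onsuccess as soon as it has a result, while the transaction's oncomplete fires once, after every queued request has been processed.
const tx = db.transaction('players', 'readwrite');
const store = tx.objectStore('players');

const req = store.put({ id: 1, name: 'abcd1234' });

// Fires once for this particular request, as soon as it has a result.
req.onsuccess = () => console.log('request succeeded, key =', req.result);
req.onerror = () => console.log('request failed:', req.error);

// Fires once, after every request queued on this transaction has been
// processed; only then is the whole transaction known to have committed.
tx.oncomplete = () => console.log('transaction complete');
tx.onerror = () => console.log('transaction failed:', tx.error);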
I would only caution that there is no guarantee that getting a successful trx.oncomplete means the data was written to the disk/database:
We are seeing a problem with trx.oncomplete where the data is not being written to the db on disk. Firefox has an explanation of what they did that is causing this problem here: https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction/oncomplete
It seems that Windows/Edge is also having the same issue. Basically, there is no guarantee that your app will have data written to the database if/when the user decides to kill or power down the device. We've even tried waiting up to 15 minutes before shutting down in some cases and haven't seen the data written. For me, I'd always want to ensure that a data write completes and is committed.
Are there other solutions for a real persistent database, or enhancements to the IndexedDB beyond FF experimental add...
