How to view multiple events with bot.wait_for? - discord

I want to listen for multiple events with bot.wait_for rather than a single one. When I arrange the calls back to back, the first wait_for blocks until its event fires, so the second is never reached.

done, pending = await asyncio.wait([
    bot.loop.create_task(bot.wait_for('message')),
    bot.loop.create_task(bot.wait_for('reaction_add'))
], return_when=asyncio.FIRST_COMPLETED)
try:
    stuff = done.pop().result()
except Exception:
    # If the first finished task died for any reason,
    # the exception will be replayed here; handle it as needed.
    pass
for future in done:
    # If any exception happened in any other done tasks,
    # we don't care about it, but we don't want the noise of
    # non-retrieved exceptions either.
    future.exception()
for future in pending:
    future.cancel()  # we don't need these anymore
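For a fuller picture, here is a hedged sketch of how that pattern might sit inside a command, with checks and a timeout added. This is a minimal example, not from the original post: the command name, the author checks, and the 30-second timeout are all illustrative assumptions.

import asyncio

import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="!", intents=discord.Intents.default())

@bot.command()
async def ask(ctx):  # hypothetical command, for illustration only
    # Race the two waits: whichever event arrives first wins.
    tasks = [
        asyncio.create_task(
            bot.wait_for("message", check=lambda m: m.author == ctx.author)
        ),
        asyncio.create_task(
            bot.wait_for("reaction_add", check=lambda r, u: u == ctx.author)
        ),
    ]
    done, pending = await asyncio.wait(
        tasks, timeout=30.0, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()  # drop whichever branch didn't fire
    if not done:
        await ctx.send("You took too long.")  # timeout: neither event came
        return
    result = done.pop().result()  # a Message, or a (Reaction, User) tuple
    await ctx.send("Got your input: {}".format(result))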

Related

How do I create a timer in Godot?

How do I create a timer in Godot which destroys the script's object after a given amount of time? I am looking to remove bullets from a game after a while to reduce lag.
There is a Timer node you can use. You can add it as a child, set the wait time (in seconds) - you probably want to set it as one-shot and auto-start - connect the "timeout" signal to your script, and in the connected method call queue_free to have the node (and its children, which include the Timer) freed safely.
You can do that from code too, if that is what you prefer. So, let us go over what I just said, but instead of doing it from the editor, let us see the equivalent code:
Create a Timer, add it as a child:
var timer := Timer.new()
add_child(timer)
Set the wait time (in seconds):
timer.wait_time = 1.0
Set as oneshot:
timer.one_shot = true
Instead of setting it to auto start (which would be timer.autostart = true), let us start it:
timer.start()
Connect the "timeout" signal to a method. In this case, I'll call the method "_on_timer_timeout":
timer.connect("timeout", self, "_on_timer_timeout")
func _on_timer_timeout() -> void:
    pass
Then, in that _on_timer_timeout method, call queue_free:
func _on_timer_timeout() -> void:
    queue_free()
You may want to use the SceneTreeTimer, like in the following code:
func die(delay: float):
    yield(get_tree().create_timer(delay), "timeout")
    queue_free()
Please refer to Godot Engine's documentation.
In Godot 4, there's an easier way to do this:
# Do some action
await get_tree().create_timer(1.0).timeout # waits for 1 second
# Do something afterwards
queue_free() # Deletes this node (self) at the end of the frame
However, if you do this in the _process() or _physics_process() functions, a new timer gets created every frame, so several awaited runs end up in flight at once, each resuming its follow-up code when its own timer fires. To handle this, simply track whether a timed event is already in progress.
Example in the _process() with simple attack logic:
var attack_started = false

func _process(delta):
    if attack_started:
        print("Not attacking, attack code running in background")
        return
    else:
        attack_started = true
        prepare_attack()
        await get_tree().create_timer(1.0).timeout # wait for 1 second
        begin_attack()
        attack_started = false
This await keyword works with everything that emits signals, including collision events!
FYI: yield was replaced with await in Godot 4, and await really just waits for a signal/callback to complete:
await object.signal
get_tree().create_timer(5.0) will create a timer that runs for 5 seconds, and then has a timeout callback/signal you can tap into.

DEBUG:snowflake.connector.connection:Rest object has been destroyed, cannot close session

Can someone please explain the technicality behind "DEBUG:snowflake.connector.connection:Rest object has been destroyed, cannot close session"?
The following Python was executed successfully:
try:
    time_start = pd.Timestamp.now()
    connection.execute(SQL)
    df = pd.read_sql_query(SQL, engine)
    time_end = pd.Timestamp.now()
    timer = pd.Timedelta(time_end - time_start).microseconds / 1000
    print(timer)
except ProgrammingError as e:
    if e.errno == 604:
        print("timeout")
        connection.cursor().execute("rollback")
    else:
        raise e
else:
    connection.cursor().execute("commit")
finally:
    connection.close()
    engine.dispose()
    logging.debug('-------- Finished --------')

if to_csv:
    col_names = df.columns.tolist()
    if col_names_upper:
        col_names = [x.upper() for x in col_names]
    csv_file_name = 'data.csv'
    csv_path = os.path.join(dir_path, csv_file_name)
    if append:
        mode = 'a'
    else:
        mode = 'w'
    df.to_csv(csv_path, index=False, mode=mode, header=col_names)
    return None
else:
    return df.to_dict()
But when I checked the log file, I found the following at the end of the log:
DEBUG:snowflake.connector.network:SUCCESS
DEBUG:snowflake.connector.network:Active requests sessions: 0, idle: 4
DEBUG:snowflake.connector.network:ret[code] = None, after post request
DEBUG:snowflake.connector.connection:Session is closed
DEBUG:root:-------- Finished --------
DEBUG:snowflake.connector.connection:Rest object has been destroyed, cannot close session
DEBUG:snowflake.connector.connection:Rest object has been destroyed, cannot close session
I don't understand what is meant by "DEBUG:snowflake.connector.connection:Rest object has been destroyed, cannot close session".
The message Rest object has been destroyed, cannot close session is printed by the Snowflake Python Connector's connection object, typically when an attempt is made to close the connection more than once.
This is normal to observe when using a connection pool manager: the SQLAlchemy-based engine object will attempt to close all of the connection objects it manages when engine.dispose() is called, and Python's garbage collector additionally calls connection.__del__() on objects whose reference count reaches zero.
The message is logged at DEBUG level precisely so that these repeated cleanup attempts by the runtime and the frameworks in use don't alarm users. It is safe to ignore, since it appears only after the connection was already closed successfully (indicated by the Session is closed message preceding it).
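To illustrate the double-close scenario, here is a minimal sketch, assuming the snowflake-connector-python package and placeholder credentials; the second close() finds no live session left to tear down, which is exactly the situation the DEBUG message describes:

import logging

import snowflake.connector

logging.basicConfig(level=logging.DEBUG)

# Placeholder credentials, for illustration only.
conn = snowflake.connector.connect(
    account="my_account",    # hypothetical
    user="my_user",          # hypothetical
    password="my_password",  # hypothetical
)

conn.close()  # first close tears down the session ("Session is closed")
conn.close()  # second close is a harmless no-op, reported at DEBUG level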

How do I catch PermanentTaskFailure

Is this the correct way to catch PermanentTaskFailure? (https://cloud.google.com/appengine/articles/deferred)
def do_something_with_key(k):
    entity = k.get()
    # Do something with entity
    entity.put()

k = ndb.Key('MyModel', 123)
try:
    deferred.defer(do_something_with_key, k, _countdown=60)
except PermanentTaskFailure:
    pass  # catch here
Or do I need to put the try/except inside the do_something_with_key function?
The PermanentTaskFailure exception is typically raised when the task executes (or attempts to execute), so you won't catch it at the point where you create the task. Unless, maybe, you do that from inside another task's execution handler, but in that case it would be raised for the enqueueing task, not for the task being enqueued. Or perhaps if enqueueing itself runs into trouble? I'm not sure - I never saw it in that case.
So, at best, I think you might be able to catch it from do_something_with_key(). But you won't be able to catch it in all cases - for example, if the task code fails to execute at all, the exception is caught by the deferred library code itself; see an example in Issue with appengine deferred tasks, execution throws unknown error.
I was able to catch it (again, probably not in all cases), but that was after I switched from the deferred library to using push tasks directly (which is what the deferred library uses under the hood).
The article you referenced discusses PermanentTaskFailure in the context of your handler code (intentionally) raising the exception to signal to the deferred library that it should not enqueue yet another copy of the task - which is what the library does by default if the task execution fails (based on the request's return code), until the maximum number of retries is reached.
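For illustration, here is a minimal sketch of that pattern, adapting the question's do_something_with_key; the missing-entity condition is an assumed example of an unrecoverable failure:

from google.appengine.ext import deferred, ndb

def do_something_with_key(k):
    entity = k.get()
    if entity is None:
        # Tells the deferred library that retrying is pointless:
        # the task is logged as failed and is NOT re-enqueued.
        raise deferred.PermanentTaskFailure("entity %s no longer exists" % k)
    # Do something with entity; any other exception raised here would make
    # the deferred library re-enqueue the task until the retry limit is hit.
    entity.put()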

configurable timer

I'm trying to build a security system that is configurable via a config file, perhaps XML. One of the configurable options is the hours of the day/night when the system should be recording. There will be one or more entries for each day of the week. I'm trying to implement this in Python, which is not a language I know well. My main problem is how to implement this cleanly. Basically, the program will look something like this:
def check_if_on():
    """
    Check if the system should be on, based on the current time and the config file.
    Returns True iff the system should be on, False otherwise.
    """
    ...

# main loop
while True:
    # do something
    # do something else
    if check_if_on():
        pass  # do something
    else:
        pass  # do something else, or nothing
    time.sleep(10)
The config file will look something like:
<on-times>
    <on day="1" time="1900"/>
    <off day="2" time="0700"/>
    <on day="2" time="1800"/>
    <off day="3" time="0900"/>
</on-times>
A friend with a lot more experience than I have said to implement the on/off times as timed events in a queue, but I'm not sure whether that's a good idea, or even how to do it.
If super-precise timing is not necessary, you could run the check every minute as a cron job to save some CPU cycles: if the current time has passed a configured switch point, do something; otherwise do nothing.
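Here is a minimal sketch of what check_if_on() could look like against the XML format shown above. It assumes day="1" means Monday and that the state before the week's first switch point wraps around from its last one; both are assumptions, not from the original post:

import datetime
import xml.etree.ElementTree as ET

def load_switch_points(path):
    # Parse <on-times> into a sorted list of (day, minutes, state) tuples.
    root = ET.parse(path).getroot()
    points = []
    for el in root:
        state = (el.tag == "on")  # <on .../> turns on, <off .../> turns off
        day = int(el.get("day"))  # assumed: 1 = Monday ... 7 = Sunday
        hhmm = el.get("time")     # e.g. "1900"
        minutes = int(hhmm[:2]) * 60 + int(hhmm[2:])
        points.append((day, minutes, state))
    return sorted(points)

def check_if_on(points):
    # The most recent switch point at or before "now" decides the state.
    now = datetime.datetime.now()
    key = (now.isoweekday(), now.hour * 60 + now.minute)
    state = points[-1][2]  # before the first point, last week's final state applies
    for day, minutes, s in points:
        if (day, minutes) <= key:
            state = s
    return state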

Datastore write limit tests - trying to break App Engine, but it won't break ;)

We're trying to test for the write limit exceptions, documented to be about 1 write/second, to prep our code for them (https://developers.google.com/appengine/docs/python/datastore/exceptions -> Timeout).
So I'm creating an item and updating it with the loop count, 10k times via tasks and 10k times via a loop... It doesn't seem to trigger an exception, although the writes per second should be high enough (I remember something like more than one write per second getting critical).
Always the same: things don't break when you want them to ;).
class Message(ndb.Model):
    text = ndb.StringProperty()
    count = ndb.IntegerProperty()

# defined in a separate file
class DeferredClass(object):
    def put(self, id, x):
        msg = Message.get_by_id(id)
        msg.count = x
        try:
            msg.put()
        except:
            logging.error("error putting the Message")
            logging.error(sys.exc_info()[0])

msg = Message(text="TestGreeting", count=0)
key = msg.put()
id = key.id()
test = DeferredClass()
for x in range(10000):
    deferred.defer(test.put, id, x)
for x in range(10000):
    msg.count = x
    try:
        msg.put()
    except:
        logging.error("error putting the Message")
        logging.error(sys.exc_info()[0])
self.response.out.write("done")
PS: We're aware that the docs are for db while the code uses ndb... the basic limitations should still exist... Also: docs on ndb exceptions would be great! Anyone?
Using a non-default task queue with an increased rate limit of 350 tasks/sec led to 20 instances being fired up and plenty of Timeout exceptions... Thanks, Mr. Steinrücken!
The exception is google.appengine.api.datastore_errors.Timeout, which is the same as documented for the db package - so no ndb extras there.
PS: Our idea is to catch the exception in our cache-handling class as a sign of datastore overload and automatically set up sharding for that item... monitoring the requests for a minute and disabling sharding again when it is no longer needed...
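For reference, a minimal sketch of the two pieces involved, adapting the question's code. The queue name "flood" and its rate are assumptions, not from the original post; the queue itself would be declared in queue.yaml:

import logging

from google.appengine.api import datastore_errors
from google.appengine.ext import deferred

# Assumes queue.yaml declares a non-default queue, e.g.:
#   queue:
#   - name: flood        # hypothetical queue name
#     rate: 350/s

def flood_puts(id):
    # Reuses DeferredClass from the question; _queue routes the tasks
    # through the higher-rate queue instead of the default one.
    test = DeferredClass()
    for x in range(10000):
        deferred.defer(test.put, id, x, _queue="flood")

def safe_put(msg):
    # Catch the documented exception rather than a bare except.
    try:
        msg.put()
    except datastore_errors.Timeout:
        logging.error("datastore overloaded while putting the Message")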
