I have an Erlang module with the gen_server behaviour.
Now, I have:
init(_Args) ->
    erlang:send_after(?PROCESS_STATE_INTERVAL, self(), processState),
    {ok, []}.
and
handle_info(processState, _State) ->
    {ok, NewState} = gen_server:call(self(), {updateLvls}), % works fine, tested
    timer:send_after(?PROCESS_STATE_INTERVAL, self(), processState),
    {noreply, NewState}.
When I start it with something like
{ok, Test} = gen_server:start_link({local, challenge_manager}, challenge_manager, [], []).
after a few seconds I get:
** exception error: {timeout,{gen_server,call,[<0.329.0>,{updateLvls}]}}
Am I doing something wrong?
You cannot call your own gen_server from within itself. That results in a deadlock (which is what you see): the server process is busy handling your first request (since you haven't returned yet) and will queue the second request (which is made while handling the first), hence the deadlock.
To solve this, either create a library function which both handle_call and handle_info use, or take a look at gen_server:reply/2, which lets you send asynchronous replies (if you return {noreply, ...} from your handle_call).
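For the first option, a minimal sketch, assuming the update logic can be factored into a plain helper (update_lvls/1 here is a hypothetical name for whatever your {updateLvls} handler currently does):

update_lvls(State) ->
    %% the level-update logic that currently lives in your handle_call clause
    State.

handle_call({updateLvls}, _From, State) ->
    NewState = update_lvls(State),
    {reply, {ok, NewState}, NewState}.

handle_info(processState, State) ->
    NewState = update_lvls(State),  %% plain function call, no gen_server:call to self()
    erlang:send_after(?PROCESS_STATE_INTERVAL, self(), processState),
    {noreply, NewState}.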
How do I create a timer in Godot which destroys the script's object after a given amount of time? I am looking to remove bullets from a game after a while to reduce lag.
There is a Timer node that you can use. You can add it as a child, set the wait time (in seconds) - you probably want to set it as one shot and auto start - connect the "timeout" signal to your script, and in the connected method call queue_free to have the Node (and its children, which includes the Timer) freed safely.
You can do that from code too, if that is what you prefer. So, let us go over what I just said, but instead of doing it from the editor, let us see the equivalent code:
Create a Timer, add it as a child:
var timer := Timer.new()
add_child(timer)
Set the wait time (in seconds):
timer.wait_time = 1.0
Set as oneshot:
timer.one_shot = true
Instead of setting it to auto start (which would be timer.autostart = true), let us start it:
timer.start()
Connect the "timeout" signal to a method. In this case, I'll call the method "_on_timer_timeout":
timer.connect("timeout", self, "_on_timer_timeout")
func _on_timer_timeout() -> void:
pass
Then, in that _on_timer_timeout method, call queue_free:
func _on_timer_timeout() -> void:
    queue_free()
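Putting those steps together, a minimal sketch (Godot 3 syntax, matching the connect call above; the one-second wait is just an example):

# Create a one-shot Timer in _ready() and free this node when it times out.
func _ready() -> void:
    var timer := Timer.new()
    timer.wait_time = 1.0
    timer.one_shot = true
    timer.connect("timeout", self, "_on_timer_timeout")
    add_child(timer)   # the Timer must be in the scene tree before starting it
    timer.start()

func _on_timer_timeout() -> void:
    queue_free()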
You may want to use the SceneTreeTimer, like in the following code:
func die(delay: float):
    yield(get_tree().create_timer(delay), "timeout")
    queue_free()
Please refer to Godot Engine's documentation.
In Godot 4, there's an easier way to do this:
# Do some action
await get_tree().create_timer(1.0).timeout # waits for 1 second
# Do something afterwards
queue_free() # Deletes this node (self) at the end of the frame
However, if you do this in _process() or _physics_process(), a new timer gets created every frame, so several of them time out at roughly the same moment and the code after the await runs many times at once. To handle this, simply track whether a timed event is already in progress.
Example in _process() with simple attack logic:
var attack_started = false

func _process(delta):
    if attack_started:
        print("Not attacking, attack code running in background")
        return
    else:
        attack_started = true
        prepare_attack()
        await get_tree().create_timer(1.0).timeout # wait for 1 second
        begin_attack()
        attack_started = false
This await keyword works with everything that emits signals, including collision events!
FYI: yield was replaced with await in Godot 4, and await really just waits for a signal/callback to complete:
await object.signal
get_tree().create_timer(5.0) will create a timer that runs for 5 seconds, and then has a timeout callback/signal you can tap into.
I checked the docs and Stack Overflow but didn't find a suitable approach.
E.g. this post seems very close: Dispatch a blocking service in a Reactive REST GET endpoint with Quarkus/Mutiny
However, I don't want that much unnecessary boilerplate code in my service; ideally, no service code change at all.
I basically just want to call a service method which uses the entity manager, and is therefore a blocking action, but return a string like "query started" to the caller immediately. I don't need a callback object; it's just a fire-and-forget approach.
I tried something like this:
@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom()
        .item("query started")
        .call(() -> service.startLongRunningQuery());
}
But it's not working. The error message returned to the caller is:
You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.
I actually expected Quarkus to take care of distributing the work accordingly, that is, the REST call on an IO thread and the blocking entity manager operations on a worker thread.
So I must be using it wrong.
UPDATE:
I also tried a proposed workaround that I found in https://github.com/quarkusio/quarkus/issues/11535, changing the method body to:
return Uni.createFrom()
    .item("query started")
    .emitOn(Infrastructure.getDefaultWorkerPool())
    .invoke(() -> service.startLongRunningQuery());
Now I don't get an error, but service.startLongRunningQuery() is not invoked, so there are no logs and no query is actually sent to the DB.
Same with (How to call long running blocking void returning method with Mutiny reactive programming?):
return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
Same with (How to run blocking codes on another thread and make http request return immediately):
ExecutorService executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));

return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(executor);
Any idea why service.startLongRunningQuery() is not called at all, and how to achieve fire-and-forget behaviour, with the REST call handled on an IO thread and the service call handled by a worker thread?
It depends on whether you want to return immediately (before your startLongRunningQuery operation has actually executed) or wait until the operation completes.
In the first case, use something like:
@Inject EventBus bus;

@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public void triggerQuery() {
    bus.send("some-address", "my payload");
}

@Blocking // Will be called on a worker thread
@ConsumeEvent("some-address")
public void executeQuery(String payload) {
    service.startLongRunningQuery();
}
In the second case, you need to execute the query on a worker thread.
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom().item(() -> service.startLongRunningQuery())
        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
}
Note that you need RESTEasy Reactive for this to work (and not classic RESTEasy). If you use classic RESTEasy, you would need the quarkus-resteasy-mutiny extension (but I would recommend using RESTEasy Reactive, it will be way more efficient).
Use the EventBus for that: https://quarkus.io/guides/reactive-event-bus
Send-and-forget is the way to go.
I am a newbie with CANopen. I wrote a program that reads the actual position via PDO1 (the default mapping is statusword + actual position).
void canopen_init() {
    // code 1: set up PDO mapping
    nmtPreOperation();
    disablePDO(PDO_TX1_CONFIG_COMM);
    setTransmissionTypePDO(PDO_TX1_CONFIG_COMM, 1);
    setInhibitTimePDO(PDO_TX1_CONFIG_COMM, 0);
    setEventTimePDO(PDO_TX1_CONFIG_COMM, 0);
    enablePDO(PDO_TX1_CONFIG_COMM);
    setCyclePeriod(1000);
    setSyncWindow(100);

    // code 2: enable operation
    readyToSwitchOn();
    switchOn();
    enableOperation();
    motionStart();

    // code 3
    nmtActiveNode();
}
int main(void) {
    canopen_init();
    while (1) {
        delay_ms(1);
        send_sync();
    }
}
If I remove "code 2" (the servo is in Switch_on_disable status), i can read position each time sync send. But if i use "code 2", the driver has error "sync frame timeout". I dont know driver has problem or my code has problem. Does my code has problem? thank you!
I don't know what protocol stack this is or how it works, but these:
setCyclePeriod(1000);
setSyncWindow(100);
likely correspond to these OD entries:
Object 1006h: Communication cycle period (CiA 301 7.5.2.6)
Object 1007h: Synchronous window length (CiA 301 7.5.2.7)
They set the SYNC interval and time window for synchronous PDOs respectively. The latter is described by the standard as:
If the synchronous window length expires all synchronous TPDOs may be discarded and an EMCY message may be transmitted; all synchronous RPDOs may be discarded until the next SYNC message is received. Synchronous RPDO processing is resumed with the next SYNC message.
Now if you set this sync time window to 100 µs but have a sloppy busy-wait delay delay_ms(1), then that doesn't add up. If you write zero to Object 1007h, you disable the sync window feature. I suppose setSyncWindow(0); might do that. You can try that to see if it's the issue. If so, you have to drop your busy-wait in favour of proper hardware timers, one for the SYNC period and one for PDO timeout (if you must use that feature).
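As a quick experiment, a sketch only, reusing the stack's own helpers from your question and assuming setCyclePeriod()/setSyncWindow() write objects 1006h/1007h as their names suggest:

/* Keep the 1 ms SYNC period but disable the synchronous window check. */
setCyclePeriod(1000);   /* object 1006h: SYNC interval, in microseconds */
setSyncWindow(0);       /* object 1007h: 0 = synchronous window not used */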
Problem fixed. Due to heavy EMI from the servo, my controller didn't work properly. After isolating it, everything worked very well :)!
I'm trying to implement a very simple service connected to an AMQP broker with Alpakka. I just want it to consume messages from its queue as a stream at the moment they are pushed on a given exchange/topic.
Everything seemed to work fine in my tests, but when I started my service, I realized that my stream only consumed the messages once and then exited.
Basically I'm using the code from the Alpakka documentation:
def consume() = {
  val amqpSource = AmqpSource.committableSource(
    TemporaryQueueSourceSettings(connectionProvider, exchangeName)
      .withDeclaration(exchangeDeclaration)
      .withRoutingKey(topic),
    bufferSize = prefetchCount
  )
  val amqpSink = AmqpSink.replyTo(AmqpReplyToSinkSettings(connectionProvider))
  amqpSource.mapAsync(4)(msg => onMessage(msg)).runWith(amqpSink)
}
I tried to schedule the consume() execution every second, but I experienced OutOfMemoryException issues.
Is there any proper way to make this code run as an infinite loop?
If you want to have a Source restarted when it fails or is cancelled, wrap it with RestartSource.withBackoff.
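A minimal sketch, reusing the pieces from your consume(); the backoff values are placeholders, and on newer Akka versions withBackoff takes a RestartSettings object instead of the three parameters:

import scala.concurrent.duration._
import akka.stream.scaladsl.RestartSource

def consume() = {
  val amqpSink = AmqpSink.replyTo(AmqpReplyToSinkSettings(connectionProvider))

  // The factory is re-invoked (with exponential backoff) whenever the
  // wrapped source fails or completes, so consumption keeps going.
  val restartingSource = RestartSource.withBackoff(
    minBackoff = 1.second,
    maxBackoff = 30.seconds,
    randomFactor = 0.2
  ) { () =>
    AmqpSource.committableSource(
      TemporaryQueueSourceSettings(connectionProvider, exchangeName)
        .withDeclaration(exchangeDeclaration)
        .withRoutingKey(topic),
      bufferSize = prefetchCount
    )
  }

  restartingSource.mapAsync(4)(msg => onMessage(msg)).runWith(amqpSink)
}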
Is this the correct way to catch PermanentTaskFailure? (https://cloud.google.com/appengine/articles/deferred)
def do_something_with_key(k):
    entity = k.get()
    # Do something with entity
    entity.put()

k = ndb.Key('MyModel', 123)
try:
    deferred.defer(do_something_with_key, k, _countdown=60)
except PermanentTaskFailure:
    # catch here
    pass
Or do I need to put the try/except inside the do_something_with_key function?
The PermanentTaskFailure exception is typically raised when the task executes or attempts to execute, so you won't catch it when you create the task. Unless, maybe, you do that from another task's execution handler, but in that case it would be for the enqueueing task, not for the task being enqueued. Or maybe if enqueueing itself runs into trouble? Not sure; I never hit it in such a case.
So, at best, I think you might be able to catch it from do_something_with_key(). But you won't be able to catch it in all cases - for example, if the task code fails to execute, the exception is caught by the deferred library code itself; see an example in Issue with appengine deferred tasks, execution throws unknown error.
I was able to catch it (again, probably not in all cases), but that was after I switched from the deferred library to using push tasks directly (which is what the deferred library uses under the hood).
The article you referenced discusses PermanentTaskFailure in the context of your handler code (intentionally) raising the exception to signal to the deferred library that it shouldn't enqueue yet another copy of the task - which is what it does by default if the task execution fails (based on its return code for the request), until the maximum number of retries is reached.
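As a sketch of that last point: the signalling happens inside the task function itself, by raising deferred.PermanentTaskFailure (the "entity is gone" condition below is just an illustrative example, not part of your code):

from google.appengine.ext import deferred

def do_something_with_key(k):
    entity = k.get()
    if entity is None:
        # Tell the deferred library this task can never succeed:
        # it logs the error and drops the task instead of retrying it.
        raise deferred.PermanentTaskFailure('entity %r is gone' % k)
    # Do something with entity
    entity.put()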