I'm using DroneKit with event listeners to keep track of the camera's video recording status, because I didn't find a direct way to query the recording state. Instead, I keep track of the commands I send and change modes when they succeed.
However, I've observed that my listener does not receive all events. Is this a common issue? Can it be fixed? Is there a frequency setting I need to change?
@vehicle.on_message('GOPRO_SET_RESPONSE')
def listener(self, name, message):
    global mode, recording, way_points, nadir_taken
    if message.cmd_id == 2:
        log.debug('Shutter:%s' % message)
        if message.status == 0:
            if mode == MODE_VIDEO:
                if recording:
                    recording = False
                    log.info("Stopped video")
                    # message_handler.set(message_handler.get() + " Stopped Recording.")
                    record_handler.set(NO_STRING)
                    plot.info(STOP_STRING_VIDEO)
                    note.info(STOP_STRING_VIDEO)
                    thread.start_new(speak, (VIDEO_RECORD_ON_MSG,))
                else:
                    recording = True
                    log.info("started recording video")
                    # message_handler.set(message_handler.get() + "\n Started Recording.")
                    record_handler.set(YES_STRING)
                    plot.info(START_STRING_VIDEO)
                    note.info(START_STRING_VIDEO)
                    thread.start_new(speak, (VIDEO_RECORD_OFF_MSG,))
            else:
                log.info("Image Captured at %s", str(loc))
    else:
        log.info('Unidentified Message:%s' % message)
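Since MAVLink over a telemetry link can drop individual messages, some GOPRO_SET_RESPONSE messages may simply never arrive. Here is a minimal sketch of the acknowledge-and-retry pattern I'm aiming for; it assumes the listener above additionally sets a threading.Event when a matching response arrives, and send_shutter_command, the retry count and the timeout are placeholders, not part of my current code:
import threading

# Set by the GOPRO_SET_RESPONSE listener above once a response is seen.
response_received = threading.Event()

def send_with_retry(send_shutter_command, retries=3, timeout=2.0):
    """Send a camera command and wait for the listener to confirm it, retrying on silence."""
    for _ in range(retries):
        response_received.clear()
        send_shutter_command()               # placeholder: whatever issues the MAVLink command
        if response_received.wait(timeout):  # listener saw a GOPRO_SET_RESPONSE in time
            return True
    return False                             # no response after all retries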
I have a bot written in Python and I want to incorporate a number-guessing game into it. The game code is below (nl is a variable set to os.linesep).
import os
import random

nl = os.linesep  # as noted above

secret_number = random.randint(0, 100)
guess_count = 0
guess_limit = 5
print(f'Welcome to the number guessing game! The range is 0 - 100 and you have 5 attempts to guess the correct number.')
while guess_count < guess_limit:
    guess = int(input('Guess: '))
    guess_count += 1
    if guess > secret_number:
        print('Too High.', nl, f"You have {guess_limit - guess_count} attempts remaining.")
    elif guess < secret_number:
        print('Too Low.', nl, f"You have {guess_limit - guess_count} attempts remaining.")
    elif guess == secret_number:
        print("That's correct, you won.")
        break
else:
    print("Sorry, you failed.")
    print(f'The correct number was {secret_number}.')
I want to be able to use that through Discord's messaging system. My issue is that I need the bot to watch the most recent messages for a number from the specific user who initiated the game. How could I do that?
To simplify my question, how can I express this:
if message from same message.author = int():
guess = that^
The solution here would be to wait_for another message from that user. Note that check must be a callable and that msg.content is a string, so convert it before using it as the guess:
msg = await client.wait_for(
    'message',
    check=lambda m: m.author == PREVIOUS_SAVED_MSG_AUTHOR_AS_VARIABLE,
    timeout=TIMEOUT)
# You don't really need a timeout, but if you add one you also need to
# handle asyncio.TimeoutError.
if msg.content.isdigit():
    guess = int(msg.content)
else:
    ...
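Putting it together, here is a minimal sketch of the whole game as a bot command. It assumes a recent discord.py (2.x) with the commands extension; the prefix, command name and token are placeholders, not something your code has to match:
import asyncio
import random

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text in discord.py 2.x
bot = commands.Bot(command_prefix='!', intents=intents)

@bot.command(name='numbergame')
async def numbergame(ctx):
    secret_number = random.randint(0, 100)
    guess_limit = 5
    guess_count = 0
    await ctx.send('Welcome to the number guessing game! The range is 0 - 100 '
                   'and you have 5 attempts to guess the correct number.')
    while guess_count < guess_limit:
        try:
            # Only react to messages from the player who started the game,
            # in the same channel.
            msg = await bot.wait_for(
                'message',
                check=lambda m: m.author == ctx.author and m.channel == ctx.channel,
                timeout=60)
        except asyncio.TimeoutError:
            await ctx.send('Game timed out.')
            return
        if not msg.content.isdigit():
            await ctx.send('Please send a whole number.')
            continue
        guess = int(msg.content)
        guess_count += 1
        if guess == secret_number:
            await ctx.send("That's correct, you won.")
            return
        hint = 'Too High.' if guess > secret_number else 'Too Low.'
        await ctx.send(f'{hint} You have {guess_limit - guess_count} attempts remaining.')
    await ctx.send(f'Sorry, you failed. The correct number was {secret_number}.')

bot.run('YOUR_BOT_TOKEN')  # placeholder token
The same wait_for/check pattern works with a plain discord.Client as in the snippet above; the commands extension just provides ctx.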
One of the goals of the project I'm working on is to make the Pepper robot patrol hospital wards "autonomously". So I downloaded some basic applications to start with navigation (https://github.com/aldebaran/naoqi_navigation_samples). The "explore" application is critical, since it is needed by the other two (places and patrol). I tried to launch "explore" from Choregraphe, but the robot does not move (so it neither explores nor creates a map) and the application ends by saying its final sentence. In particular, the "Get map" block gives an error. So the application starts correctly but does not work properly.
I saved "explore" as a robot application and tried it both with autonomous life on and off.
I cannot figure out what I'm doing wrong: could you help me please?
Make sure the charging flap is not open when you run it. Also try this code; it will create a map:
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""Example: Use explore method."""

import qi
import argparse
import sys
import numpy
from PIL import Image


def main(session):
    """
    This example uses the explore method.
    """
    # Get the services ALNavigation and ALMotion.
    navigation_service = session.service("ALNavigation")
    motion_service = session.service("ALMotion")

    # Wake up robot
    motion_service.wakeUp()

    # Explore the environment, in a radius of 5 m.
    radius = 5.0
    error_code = navigation_service.explore(radius)
    if error_code != 0:
        print("Exploration failed.")
        return

    # Saves the exploration on disk
    path = navigation_service.saveExploration()
    print("Exploration saved at path: \"" + path + "\"")

    # Start localization to navigate in map
    navigation_service.startLocalization()

    # Come back to initial position
    navigation_service.navigateToInMap([0., 0., 0.])

    # Stop localization
    navigation_service.stopLocalization()

    # Retrieve and display the map built by the robot
    result_map = navigation_service.getMetricalMap()
    map_width = result_map[1]
    map_height = result_map[2]
    img = numpy.array(result_map[4]).reshape(map_width, map_height)
    img = (100 - img) * 2.55  # from 0..100 to 255..0
    img = numpy.array(img, numpy.uint8)
    Image.frombuffer('L', (map_width, map_height), img, 'raw', 'L', 0, 1).show()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot IP address. On robot or Local Naoqi: use '127.0.0.1'.")
    parser.add_argument("--port", type=int, default=9559,
                        help="Naoqi port number")
    args = parser.parse_args()
    session = qi.Session()
    try:
        session.connect("tcp://" + args.ip + ":" + str(args.port))
    except RuntimeError:
        print("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) + ".\n"
              "Please check your script arguments. Run with -h option for help.")
        sys.exit(1)
    main(session)
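Once the exploration has been saved, the other applications (places, patrol) should not need to explore again; they can reload the saved map. A minimal sketch, assuming the same session object and the .explo path printed by the script above (the path shown here is only a placeholder):
navigation_service = session.service("ALNavigation")
# Reload a previously saved exploration (placeholder path).
navigation_service.loadExploration("/home/nao/.local/share/Explorer/my_map.explo")
# Give the robot a rough estimate of where it is in that map, then localize.
navigation_service.relocalizeInMap([0., 0., 0.])
navigation_service.startLocalization()
# Navigate to a pose expressed in map coordinates (x, y, theta).
navigation_service.navigateToInMap([1.0, 0., 0.])
navigation_service.stopLocalization()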
I am currently working on SoftBank's Pepper robot and I am trying to use the Watson speech-to-text service on Pepper's audio buffers, streamed remotely over the WebSocket protocol.
I used the answer to the earlier question NAO robot remote audio problems to find a way to access Pepper's audio buffers remotely, and the project https://github.com/ibm-dev/watson-streaming-stt to learn how to use the WebSocket protocol with Watson streaming STT.
However, after I open my WebSocket application and start sending buffers to Watson, after a few sends I receive the error: 'Unable to transcode from audio/l16;rate=48000;channel=1 to one of: audio/x-float-array; rate=16000; channels=1'
Every time I try to send one of Pepper's audio buffers to Watson, it is unable to understand it.
I compared the data I send with the data sent by the watson-streaming-stt example (which uses pyaudio streaming from the microphone instead of Pepper's buffer streaming) and I don't see any difference. In both cases I'm fairly sure that I am sending a string containing raw chunks of bytes, which is what Watson asks for in its documentation.
I send chunks of 8192 bytes with a sample rate of 48 kHz, and I can easily convert Pepper's audio buffer to hex, so I don't understand why Watson can't transcode it.
Here is my code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import argparse
import base64
import configparser
import json
import threading
import time
from optparse import OptionParser

import naoqi
import numpy as np
import sys
from threading import Thread
import ssl
import websocket
from websocket._abnf import ABNF

CHANNELS = 1
NAO_IP = "172.20.10.12"


class SoundReceiverModule(naoqi.ALModule):
    """
    Use this object to get call back from the ALMemory of the naoqi world.
    Your callback needs to be a method with two parameter (variable name, value).
    """

    def __init__(self, strModuleName, strNaoIp):
        try:
            naoqi.ALModule.__init__(self, strModuleName)
            self.BIND_PYTHON(self.getName(), "callback")
            self.strNaoIp = strNaoIp
            self.outfile = None
            self.aOutfile = [None] * (4 - 1)  # ASSUME max nbr channels = 4
            self.FINALS = []
            self.RECORD_SECONDS = 20
            self.ws_open = False
            self.ws_listening = ""
            # init data for websocket interfaces
            self.headers = {}
            self.userpass = ""  # username and password
            self.headers["Authorization"] = "Basic " + base64.b64encode(
                self.userpass.encode()).decode()
            self.url = ("wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
                        "?model=fr-FR_BroadbandModel")
        except BaseException, err:
            print("ERR: abcdk.naoqitools.SoundReceiverModule: loading error: %s" % str(err))
    # __init__ - end

    def __del__(self):
        print("INF: abcdk.SoundReceiverModule.__del__: cleaning everything")
        self.stop()

    def start(self):
        audio = naoqi.ALProxy("ALAudioDevice", self.strNaoIp, 9559)
        self.nNbrChannelFlag = 3  # ALL_Channels: 0, AL::LEFTCHANNEL: 1, AL::RIGHTCHANNEL: 2, AL::FRONTCHANNEL: 3 or AL::REARCHANNEL: 4.
        self.nDeinterleave = 0
        self.nSampleRate = 48000
        audio.setClientPreferences(self.getName(), self.nSampleRate, self.nNbrChannelFlag, self.nDeinterleave)  # setting same as default generates a bug !?!
        audio.subscribe(self.getName())
        # opening websocket app
        self._ws = websocket.WebSocketApp(self.url,
                                          header=self.headers,
                                          on_open=self.on_open,
                                          on_message=self.on_message,
                                          on_error=self.on_error,
                                          on_close=self.on_close)
        sslopt = {"cert_reqs": ssl.CERT_NONE}
        threading.Thread(target=self._ws.run_forever, kwargs={'sslopt': sslopt}).start()
        print("INF: SoundReceiver: started!")

    def stop(self):
        print("INF: SoundReceiver: stopping...")
        audio = naoqi.ALProxy("ALAudioDevice", self.strNaoIp, 9559)
        audio.unsubscribe(self.getName())
        print("INF: SoundReceiver: stopped!")
        print "INF: WebSocket: closing..."
        data = {"action": "stop"}
        self._ws.send(json.dumps(data).encode('utf8'))
        # ... which we need to wait for before we shut down the websocket
        time.sleep(1)
        self._ws.close()
        print "INF: WebSocket: closed"
        if self.outfile != None:
            self.outfile.close()

    def processRemote(self, nbOfChannels, nbrOfSamplesByChannel, aTimeStamp, buffer):
        """
        This is THE method that receives all the sound buffers from the "ALAudioDevice" module"""
        print "receiving buffer"
        # self.data_to_send = self.data_to_send + buffer
        # print len(self.data_to_send)
        # self.data_to_send = ''.join( [ "%02X " % ord( x ) for x in buffer ] ).strip()
        self.data_to_send = buffer
        # print("buffer type :", type(data))
        # print("buffer :", buffer)
        # ~ print( "process!" )
        print("processRemote: %s, %s, %s, lendata: %s, data0: %s (0x%x), data1: %s (0x%x)" % (nbOfChannels, nbrOfSamplesByChannel, aTimeStamp, len(buffer), buffer[0], ord(buffer[0]), buffer[1], ord(buffer[1])))
        if self.ws_open == True and self.ws_listening == True:
            print "sending data"
            self._ws.send(self.data_to_send, ABNF.OPCODE_BINARY)
            print "data sent"
            # print self.data_to_send
        aSoundDataInterlaced = np.fromstring(str(buffer), dtype=np.int16)
        aSoundData = np.reshape(aSoundDataInterlaced, (nbOfChannels, nbrOfSamplesByChannel), 'F')
        # print "processRemote over"
    # processRemote - end

    def on_message(self, ws, msg):
        print("message")
        data = json.loads(msg)
        print data
        if "state" in data:
            if data["state"] == "listening":
                self.ws_listening = True
        if "results" in data:
            if data["results"][0]["final"]:
                self.FINALS.append(data)
            # This prints out the current fragment that we are working on
            print(data['results'][0]['alternatives'][0]['transcript'])

    def on_error(self, ws, error):
        """Print any errors."""
        print(error)

    def on_close(self, ws):
        """Upon close, print the complete and final transcript."""
        transcript = "".join([x['results'][0]['alternatives'][0]['transcript']
                              for x in self.FINALS])
        print("transcript :", transcript)
        self.ws_open = False

    def on_open(self, ws):
        """Triggered as soon as we have an active connection."""
        # args = self._ws.args
        print "INF: WebSocket: opening"
        data = {
            "action": "start",
            # this means we get to send it straight raw sampling
            "content-type": "audio/l16;rate=%d;channel=1" % self.nSampleRate,
            "continuous": True,
            "interim_results": True,
            # "inactivity_timeout": 5, # in order to use this effectively
            # you need other tests to handle what happens if the socket is
            # closed by the server.
            "word_confidence": True,
            "timestamps": True,
            "max_alternatives": 3
        }
        # Send the initial control message which sets expectations for the
        # binary stream that follows:
        self._ws.send(json.dumps(data).encode('utf8'))
        # Spin off a dedicated thread where we are going to read and
        # stream out audio.
        print "INF: WebSocket: opened"
        self.ws_open = True

    def version(self):
        return "0.6"


def main():
    """Initialisation."""
    parser = OptionParser()
    parser.add_option("--pip",
                      help="Parent broker IP. The IP address of your robot",
                      dest="pip")
    parser.add_option("--pport",
                      help="Parent broker port. The port NAOqi is listening to",
                      dest="pport",
                      type="int")
    parser.set_defaults(
        pip=NAO_IP,
        pport=9559)
    (opts, args_) = parser.parse_args()
    pip = opts.pip
    pport = opts.pport

    # We need this broker to be able to construct
    # NAOqi modules and subscribe to other modules.
    # The broker must stay alive until the program exits.
    myBroker = naoqi.ALBroker("myBroker",
                              "0.0.0.0",  # listen to anyone
                              0,          # find a free port and use it
                              pip,        # parent broker IP
                              pport)      # parent broker port
    # end of initialisation

    global SoundReceiver
    SoundReceiver = SoundReceiverModule("SoundReceiver", pip)  # thread1
    SoundReceiver.start()
    try:
        while True:
            time.sleep(1)
            print "hello"
    except KeyboardInterrupt:
        print "Interrupted by user, shutting down"
        myBroker.shutdown()
        SoundReceiver.stop()
        sys.exit(0)


if __name__ == "__main__":
    main()
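For reference, the buffers handed to processRemote are interleaved signed 16-bit samples (that is what the np.fromstring call above assumes). A quick, purely illustrative way to eyeball the first few samples of a buffer (the helper name is my own, not part of any API):
import struct

def dump_first_samples(buffer, n=8):
    # Interpret the first n samples as signed 16-bit little-endian integers,
    # matching the np.fromstring(..., dtype=np.int16) call above.
    samples = struct.unpack("<%dh" % n, buffer[:2 * n])
    print("first %d samples: %s" % (n, samples))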
I would be thankful if anyone had any idea of how to get past that error, or of what to try in order to get useful debugging information. I first believed that I was sending "wrong" data to Watson, but after many attempts I still have no clue how to fix the problem.
Thank you a lot,
Alex
I want to perform some tasks after the Flink job is completed. I have no issues when I run the code in IntelliJ, but there are issues when I run the Flink jar from a shell script. I am using the line below to make sure that execution of the Flink program is complete:
//start the execution
JobExecutionResult jobExecutionResult = environment.execute(" Started the execution ");
is_job_finished = jobExecutionResult.isJobExecutionResult();
I am not sure if the above check is correct or not.
Then I use that variable in the block below to perform some tasks:
if (print_mode && is_job_finished) {
    System.out.println(" \n \n -- System related variables -- \n");
    System.out.println(" Stream_join Window length = " + WindowLength_join__ms + " milliseconds");
    System.out.println(" Input rate for stream RR = " + input_rate_rr_S + " events/second");
    System.out.println("Stream RR Runtime = " + Stream_RR_RunTime_S + " seconds");
    System.out.println(" # raw events in stream RR = " + Total_Number_Of_Events_in_RR + "\n");
}
Any suggestions?
You can register a job listener on the execution environment.
For example:
env.registerJobListener(new JobListener {
  // Callback on job submission.
  override def onJobSubmitted(jobClient: JobClient, throwable: Throwable): Unit = {
    if (throwable == null) {
      log.info("SUBMIT SUCCESS")
    } else {
      log.info("FAIL")
    }
  }

  // Callback on job execution finished, successfully or unsuccessfully.
  override def onJobExecuted(jobExecutionResult: JobExecutionResult, throwable: Throwable): Unit = {
    if (throwable == null) {
      log.info("SUCCESS")
    } else {
      log.info("FAIL")
    }
  }
})
Register a JobListener on your StreamExecutionEnvironment.
A JobListener works well as long as you are not using the SQL API; if you use the SQL API, onJobExecuted will never be called. In that case I have an idea you can adapt: it assumes the source is Kafka, while the sink can be of any type.
Let me explain it:
EndSign: a sentinel record appended after the last real data. When your Flink job has consumed it, the rest of that partition is empty.
Close logic:
When your Flink job processes an EndSign, it calls a JobController, which increments a counter.
Once the JobController's counter equals the partition count, the JobController checks the consumer group lag to make sure the Flink job has received all the data (see the sketch below).
At that point we know the job is finished.
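A rough sketch of the lag check such a JobController could perform, assuming a Kafka source and the kafka-python client; the topic, group id and bootstrap servers are placeholders:
from kafka import KafkaAdminClient, KafkaConsumer, TopicPartition

def job_consumed_everything(bootstrap_servers, topic, group_id):
    """Return True once the consumer group has caught up with every partition."""
    admin = KafkaAdminClient(bootstrap_servers=bootstrap_servers)
    consumer = KafkaConsumer(bootstrap_servers=bootstrap_servers)
    partitions = [TopicPartition(topic, p)
                  for p in consumer.partitions_for_topic(topic)]
    end_offsets = consumer.end_offsets(partitions)            # latest offset per partition
    committed = admin.list_consumer_group_offsets(group_id)   # offsets committed by the job
    # Zero lag on every partition means the job has seen all the data
    # (including the EndSign records).
    return all(tp in committed and committed[tp].offset >= end_offsets[tp]
               for tp in partitions)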
I developed a simple app in MIT App Inventor that controls a heat pump water heater.
The app sends a different string for each button pressed, and the NodeMCU checks which button was pressed, as you can see in the code below.
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
    conn:on("receive", function(client, request)
        local buf = ""
        local _, _, method, path, vars = string.find(request, "([A-Z]+) (.+)?(.+) HTTP")
        if (method == nil) then
            _, _, method, path = string.find(request, "([A-Z]+) (.+) HTTP")
        end
        local _GET = {}
        if (vars ~= nil) then
            for k, v in string.gmatch(vars, "(%w+)=(%w+)&*") do
                _GET[k] = v
            end
        end
        local _on, _off = "", ""
        if (_GET.pin == "ip") then
            local ip = wifi.sta.getip()
            local ler_ip = string.sub(ip, 1, 13)
            dofile("novolcd.lua").cls()
            dofile("novolcd.lua").lcdprint("IP=" .. ler_ip)
        elseif (_GET.pin == "05a60") then
            sp_temperaturas[5] = 60
        elseif (_GET.pin == "06a60") then
            sp_temperaturas[6] = 60
        elseif (_GET.pin == "Ferias") then
            dofile("novolcd.lua").cls()
            dofile("novolcd.lua").lcdprint(" Modo ferias ")
            modo_ferias()
        elseif (_GET.pin == "Parar2") then
            dofile("novolcd.lua").cls()
            dofile("novolcd.lua").lcdprint(" Parar")
        end
    end)
    conn:on("sent", function(c) c:close() end)
end)
When the "Ferias" button is pressed, the system starts the heating process by calling the function modo_ferias().
function aquecer()
    local kp = 100 / 10
    local ki = 5
    local kd = 5
    local tempo_on = 0
    local tempo_off = 0
    i = 0
    local duty = 0
    erro = sp_temp - t_atual
    soma_erro = soma_erro + erro / 5
    dif_erro = (erro - erro_ant) / 5
    erro_ant = erro
    print(erro .. " " .. soma_erro .. " " .. dif_erro)
    duty = kp * erro + soma_erro / ki + dif_erro * kd
    print(duty)
    tempo_on = duty * 50
    if (tempo_on > 5000) then
        tempo_on = 5000
    end
    tempo_off = 5000 - tempo_on
    repeat
        i = i + 1
        tmr.delay(1000)
    until (i >= tempo_off)
    gpio.write(8, gpio.HIGH)
    repeat
        i = i + 1
        tmr.delay(1000)
    until (i == 5000)
    gpio.mode(8, gpio.INPUT)
end

function modo_ferias()
    repeat
        sair_ferias = 0
        pressao()
        if (pressao_psi <= 3) then
            sair_ferias = 1
        end
        t_atual = ler_temperatura()
        local int = string.format("%d", t_atual)           -- integer part
        local dec = string.format("%.1d", t_atual % 10000) -- decimal part with one digit
        t_display = int .. "." .. dec
        rtc()
        dofile("novolcd.lua").cls()
        dofile("novolcd.lua").lcdprint("Temp=" .. t_display .. " ST=" .. sp_temp .. data_horas)
        sp_temp = 40
        local horas_ferias = hour
        if (horas_ferias == 0 or horas_ferias == 1 or horas_ferias == 2 or horas_ferias == 3 or horas_ferias == 4 or horas_ferias == 5 or horas_ferias == 6) then
            sp_temp = 70
        end
        if (sp_temp > t_atual) then
            aquecer()
        end
    until (sair_ferias == 1)
end
And here is where my problem appears. If I press a button in the app after the "Ferias" button has been pressed, the NodeMCU won't notice it, because the program is inside the heating functions and is not checking whether the app sent any instruction.
Is there any way to listen for the app's commands and run the heating process at the same time?
There are a few things to consider here.
Global state
because the program is inside the heating functions and is not checking whether the app sent any instruction
If the commands triggered by pressing the various buttons cannot run independently of each other, then you need some form of global state to make sure they don't interfere.
Busy loop
repeat
    i = i + 1
    tmr.delay(1000)
until (i == 5000)
This is a no-go with NodeMCU as it's essentially a stop-the-world busy loop. Besides, tmr.delay is scheduled to be removed because it's abused so often.
Task posting
node.task.post is a possible solution for scheduling tasks for execution "in the near future". You could use it to post tasks from the on-receive callback rather than executing them synchronously.