I create a timer with the :timer.apply_after/4 function:
{:ok, ref} = :timer.apply_after(60000, Module, :function, [args])
The ref returned here is a tuple like this:
{-6451215415, #Reference<0.265269.0.1654156.8484>}
and I am trying to cancel this timer before it fires. Is there a way? I tried
:timer.cancel(ref)
but it returned an argument error. How do I cancel this timer before the time ends?
This is surely possible and works as expected:
iex|1 ▶ {:ok, ref} = :timer.apply_after(1_000, IO, :inspect, ["¡Hola!"])
#⇒ {:ok, {-576460467744153, #Reference<0.2854054855.1602748417.47727>}}
# ... # after 1 second:
#⇒ "¡Hola!"
iex|2 ▶ with {:ok, ref} <-
...|2 ▶ :timer.apply_after(1_000, IO, :inspect, ["¡Hola!"]),
...|2 ▶ do: :timer.cancel(ref)
#⇒ {:ok, :cancel}
I want to know when the merge() method on AggregateFunction gets called. From what I've understood from the answers here and here, it is applicable to session windows only and runs for every event that can be merged with the previous window, since every event in a session window creates a new window. I'm using PyFlink and would appreciate any help, ideally with an example.
Let's take an example that I put together from the documentation for the AverageAggregate function and some custom code:
from typing import Tuple

from pyflink.common import Types, WatermarkStrategy
from pyflink.common.time import Time
from pyflink.common.watermark_strategy import TimestampAssigner
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.functions import AggregateFunction
from pyflink.datastream.window import EventTimeSessionWindows

class MyTimestampAssigner(TimestampAssigner):
    def extract_timestamp(self, value, record_timestamp) -> int:
        return int(value[1])

class AverageAggregate(AggregateFunction):
    def create_accumulator(self) -> Tuple[int, int]:
        return 0, 0

    def add(self, value: Tuple[str, int], accumulator: Tuple[int, int]) -> Tuple[int, int]:
        return accumulator[0] + value[1], accumulator[1] + 1

    def get_result(self, accumulator: Tuple[int, int]) -> float:
        return accumulator[0] / accumulator[1]

    def merge(self, a: Tuple[int, int], b: Tuple[int, int]) -> Tuple[int, int]:
        return a[0] + b[0], a[1] + b[1]
if __name__ == '__main__':
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)

    # define the source
    data_stream = env.from_collection([
        ('hi', 1), ('hi', 2), ('hi', 3), ('hi', 4), ('hi', 8), ('hi', 9), ('hi', 15)],
        type_info=Types.TUPLE([Types.STRING(), Types.INT()]))

    # define the watermark strategy
    watermark_strategy = WatermarkStrategy.for_monotonous_timestamps() \
        .with_timestamp_assigner(MyTimestampAssigner())

    ds = (
        data_stream
        .assign_timestamps_and_watermarks(watermark_strategy)
        .key_by(lambda x: x[0], key_type=Types.STRING())
        .window(EventTimeSessionWindows.with_gap(Time.milliseconds(3)))
        .aggregate(AverageAggregate())
    )

    # print the results
    ds.print()

    # submit for execution
    env.execute()
From my understanding, the merge() method should have run on the second event ('hi', 2), since that is within the window gap of 3 ms, and then again for the input ('hi', 4), and so on. But while executing the code, the merge() method doesn't fire even once. So if anyone could modify the sample code above to show merge() being executed and explain how it works, that would be greatly appreciated.
While it's not a direct PyFlink example, you can have a look at the DataStream API recipe at https://docs.immerok.cloud/docs/how-to-guides/development/using-session-windows/#merging-data-in-one-session-window for info on the merge() method.
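The key point is that merge() only fires when two session windows that each already hold an accumulator are merged, and that requires an out-of-order event bridging the gap between them. With for_monotonous_timestamps and in-order input, each new element's one-element window is merged into the existing session before that element's accumulator exists, so there is never a second accumulator to combine. As a hedged sketch (I haven't run this exact snippet), reusing the MyTimestampAssigner and AverageAggregate classes from the question, an out-of-order element that connects two already-populated sessions should make merge() fire:
# Sketch: MyTimestampAssigner and AverageAggregate as defined in the question.
from pyflink.common import Duration, Types, WatermarkStrategy
from pyflink.common.time import Time
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.window import EventTimeSessionWindows

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# ('hi', 5) arrives out of order and bridges the two sessions built by
# timestamps (1, 2) and (8, 9); both sessions already hold accumulators,
# so merging them should call AverageAggregate.merge().
data_stream = env.from_collection(
    [('hi', 1), ('hi', 2), ('hi', 8), ('hi', 9), ('hi', 5)],
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]))

# Allow out-of-order events; with for_monotonous_timestamps the late
# ('hi', 5) would be dropped as late instead of merging the windows.
watermark_strategy = (
    WatermarkStrategy
    .for_bounded_out_of_orderness(Duration.of_millis(20))
    .with_timestamp_assigner(MyTimestampAssigner()))

ds = (
    data_stream
    .assign_timestamps_and_watermarks(watermark_strategy)
    .key_by(lambda x: x[0], key_type=Types.STRING())
    .window(EventTimeSessionWindows.with_gap(Time.milliseconds(3)))
    .aggregate(AverageAggregate()))

ds.print()
env.execute()
Here ('hi', 5) bridges the sessions around timestamps 1-2 and 8-9, so the accumulators (3, 2) and (17, 2) should be combined by merge() before the late element itself is added.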
Disclaimer: I work for Immerok
I'm trying to code a simple program for an ESP32 board.
My main program is fairly simple and has to run in a loop.
On the side, the device also needs to be able to respond to HTTP requests with a very simple response.
This is my attempt (a rework of https://randomnerdtutorials.com/micropython-esp32-esp8266-bme280-web-server/):
try:
    import usocket as socket
except:
    import socket

from micropython import const
import time

REFRESH_DELAY = const(60000)  # milliseconds

def do_connect():
    import network
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    if not wlan.isconnected():
        print('connecting to network...')
        wlan.config(dhcp_hostname=HOST)
        wlan.connect('SSID', 'PSWD')
        while not wlan.isconnected():
            pass
    print('network config:', wlan.ifconfig())

import json
import esp
esp.osdebug(None)
import gc
gc.collect()

do_connect()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, SENSOR_SCKT_PORT))
s.listen(5)

prevRun = 0
i = 0
while True:
    print("iteration #" + str(i))
    i += 1

    # run every 60 seconds
    curRun = int(round(time.time() * 1000))
    if curRun - prevRun >= REFRESH_DELAY:
        prevRun = curRun
        # MAIN PROGRAM
        # ......
        # whole bunch of code
        # ....

    # run continuously:
    try:
        if gc.mem_free() < 102000:
            gc.collect()
        conn, addr = s.accept()
        conn.settimeout(3.0)
        print('Got a connection from %s' % str(addr))
        request = conn.recv(1024)
        conn.settimeout(None)
        request = str(request)
        # print('Content = %s' % request)
        measurements = 'some json stuff'
        conn.send('HTTP/1.1 200 OK\n')
        conn.send('Content-Type: text/html\n')
        conn.send('Connection: close\n\n')
        conn.send(measurements)
        conn.close()
    except OSError as e:
        conn.close()
        print('Connection closed')
What happens is I only get iteration #0, and then the while True loop halts.
If I ping this server with an HTTP request, I get a correct response, AND the loop advances to iteration #1 and #2 (no idea why it thinks I pinged it with 2 requests).
So it seems that socket.listen(5) is halting the while loop.
Is there any way to avoid this?
Any other solution?
I don't think that threading is an option here.
The problem is that s.accept() is a blocking call: it won't return until it receives a connection. This is why it pauses your loop.
The easiest solution is probably to check whether or not a connection is waiting before calling s.accept(); you can do this using either select.select or select.poll. I prefer the select.poll API, which would end up looking something like this:
import esp
import gc
import json
import machine
import network
import select
import socket
import time

from micropython import const

HOST = '0.0.0.0'
SENSOR_SCKT_PORT = const(1234)
REFRESH_DELAY = const(60000)  # milliseconds

def wait_for_connection():
    print('waiting for connection...')
    wlan = network.WLAN(network.STA_IF)
    while not wlan.isconnected():
        machine.idle()
    print('...connected. network config:', wlan.ifconfig())

esp.osdebug(None)
gc.collect()

wait_for_connection()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, SENSOR_SCKT_PORT))
s.listen(5)

poll = select.poll()
poll.register(s, select.POLLIN)

prevRun = 0
i = 0
while True:
    print("iteration #" + str(i))
    i += 1

    # run every 60 seconds
    curRun = int(round(time.time() * 1000))
    if curRun - prevRun >= REFRESH_DELAY:
        prevRun = curRun
        # MAIN PROGRAM
        # ......
        # whole bunch of code
        # ....

    # run continuously:
    try:
        if gc.mem_free() < 102000:
            gc.collect()

        events = poll.poll(100)
        if events:
            conn, addr = s.accept()
            conn.settimeout(3.0)
            print('Got a connection from %s' % str(addr))
            request = conn.recv(1024)
            conn.settimeout(None)
            request = str(request)
            # print('Content = %s' % request)
            measurements = 'some json stuff'
            conn.send('HTTP/1.1 200 OK\n')
            conn.send('Content-Type: text/html\n')
            conn.send('Connection: close\n\n')
            conn.send(measurements)
            conn.close()
    except OSError:
        conn.close()
        print('Connection closed')
You'll note that I've taken a few liberties with your code to get it running on my device and to appease my sense of style; primarily, I've excised most of your do_connect method and put all the imports at the top of the file.
The only real changes are:
We create a select.poll() object:
poll = select.poll()
We ask it to monitor the s variable for POLLIN events:
poll.register(s, select.POLLIN)
We check if any connections are pending before attempting to handle a connection:
events = poll.poll(100)
if events:
    conn, addr = s.accept()
    conn.settimeout(3.0)
    [...]
With these changes in place, running your code and making a request looks something like this:
iteration #0
iteration #1
iteration #2
iteration #3
iteration #4
iteration #5
iteration #6
Got a connection from ('192.168.1.169', 54392)
iteration #7
iteration #8
iteration #9
iteration #10
Note that as written here, your loop will iterate at least once every 100ms (and you can control that by changing the timeout on our call to poll.poll()).
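For example, a zero timeout makes the check non-blocking, so the loop never waits on the socket at all:
# 0 ms timeout: poll() returns immediately with whatever events are pending,
# instead of waiting up to 100 ms for a connection to arrive.
events = poll.poll(0)
if events:
    conn, addr = s.accept()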
Note: the above was tested on an esp8266 device (a Wemos D1 clone) running MicroPython v1.13-268-gf7aafc062.
I am learning to use the lldb Python module and am trying to run the following example I found on http://lldb.llvm.org/python-reference. I already added lldb.so to PYTHONPATH. Here is the result I got:
Creating a target for './a.out'
a.out
SBBreakpoint: id = 1, name = 'main', module = a.out, locations = 1
SBProcess: pid = 0, state = launching, threads = 0, executable = a.out
It seems like the program doesn't get started; the state of the process is always 'launching'. Are there any configuration problems or any missing code?
import lldb
import os

def disassemble_instructions(insts):
    for i in insts:
        print i

# Set the path to the executable to debug
exe = "./a.out"

# Create a new debugger instance
debugger = lldb.SBDebugger.Create()

# When we step or continue, don't return from the function until the process
# stops. Otherwise we would have to handle the process events ourselves, which,
# while doable, is a little tricky. We do this by setting the async mode to false.
debugger.SetAsync(False)

# Create a target from a file and arch
print "Creating a target for '%s'" % exe
target = debugger.CreateTargetWithFileAndArch(exe, lldb.LLDB_ARCH_DEFAULT)

if target:
    # If the target is valid, set a breakpoint at main
    main_bp = target.BreakpointCreateByName("main", target.GetExecutable().GetFilename())
    print main_bp

    # Launch the process. Since we specified synchronous mode, we won't return
    # from this function until we hit the breakpoint at main
    process = target.LaunchSimple(["./story.txt"], None, os.getcwd())

    # Make sure the launch went ok
    if process:
        # Print some simple process info
        state = process.GetState()
        print process
        if state == lldb.eStateStopped:
            # Get the first thread
            thread = process.GetThreadAtIndex(0)
            if thread:
                # Print some simple thread info
                print thread
                # Get the first frame
                frame = thread.GetFrameAtIndex(0)
                if frame:
                    # Print some simple frame info
                    print frame
                    function = frame.GetFunction()
                    # See if we have debug info (a function)
                    if function:
                        # We do have a function, print some info for the function
                        print function
                        # Now get all instructions for this function and print them
                        insts = function.GetInstructions(target)
                        disassemble_instructions(insts)
                    else:
                        # See if we have a symbol in the symbol table for where we stopped
                        symbol = frame.GetSymbol()
                        if symbol:
                            # We do have a symbol, print some info for the symbol
                            print symbol
In a ModelForm I can write a clean_<field_name> member function to automatically validate and clean up data entered by a user, but what can I do about dirty json or csv files (fixtures) during a manage.py loaddata?
Fixtures loaded with loaddata are assumed to contain clean data that doesn't need validation (usually as the inverse of a prior dumpdata), so the short answer is that loaddata isn't the approach you want if you need to clean your inputs.
However, you can probably use some of the underpinnings of loaddata while implementing your custom data-cleaning code: you can easily script something with the Django serialization libs that reads your existing data files in and then saves the resulting objects normally after the data has been cleaned up.
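For instance, here is a minimal sketch of that approach, assuming a JSON fixture; load_and_clean and the use of full_clean() for the cleanup step are placeholders to adapt to your models:
# Sketch: load a fixture through Django's serialization framework instead of
# loaddata, clean each instance, then save it through the normal ORM path.
# Assumes DJANGO_SETTINGS_MODULE is already configured.
from django.core import serializers

def load_and_clean(fixture_path):  # hypothetical helper, not a Django API
    with open(fixture_path) as f:
        for deserialized in serializers.deserialize("json", f):
            obj = deserialized.object
            obj.full_clean()  # run field/model validation; raises on bad data
            obj.save()        # a normal save, unlike loaddata's raw insert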
In case others want to do something similar, I defined a model method to do the cleaning (so it can be called from ModelForms):
import re

MAX_ZIPCODE_DIGITS = 9
MIN_ZIPCODE_DIGITS = 5

def clean_zip_code(self, s=None):
    # s = str(s or self.zip_code)
    if not s:
        return None
    s = re.sub(r"\D", "", s)
    if len(s) > self.MAX_ZIPCODE_DIGITS:
        s = s[:self.MAX_ZIPCODE_DIGITS]
    if len(s) in (self.MIN_ZIPCODE_DIGITS - 1, self.MAX_ZIPCODE_DIGITS - 1):
        s = '0' + s  # FIXME: deal with other intermediate lengths
    if len(s) >= self.MAX_ZIPCODE_DIGITS:
        s = s[:self.MIN_ZIPCODE_DIGITS] + '-' + s[self.MIN_ZIPCODE_DIGITS:]
    return s
Then I wrote a standalone Python script to clean up my legacy JSON files using any clean_ methods found among the models:
import os, json

def clean_json(app='XYZapp', model='Entity', fields='zip_code', cleaner_prefix='clean_'):
    # Set the DJANGO_SETTINGS_MODULE environment variable.
    os.environ['DJANGO_SETTINGS_MODULE'] = app + ".settings"
    settings = __import__(app + '.settings').settings
    models = __import__(app + '.models').models
    fpath = os.path.join(settings.SITE_PROJECT_PATH, 'fixtures', model + '.json')
    if isinstance(fields, (str, unicode)):
        fields = [fields]
    Ns = []
    for field in fields:
        try:
            instance = getattr(models, model)()
        except AttributeError:
            print 'No model named %s could be found' % (model,)
            continue
        try:
            cleaner = getattr(instance, cleaner_prefix + field)
        except AttributeError:
            print 'No cleaner method named %s.%s could be found' % (model, cleaner_prefix + field)
            continue
        print 'Cleaning %s using %s.%s...' % (fpath, model, cleaner.__name__)
        fin = open(fpath, 'r')
        if fin:
            l = json.load(fin)
            before = len(l)
            cleans = 0
            for i in range(len(l)):
                if 'fields' in l[i] and field in l[i]['fields']:
                    l[i]['fields'][field] = cleaner(l[i]['fields'][field])  # cleaner returns None to delete records
                    cleans += 1
            fin.close()
            after = len(l)
            assert after > .5 * before
            Ns += [(before, after, cleans)]
            print 'Writing %d/%d (new/old) records after %d cleanups...' % Ns[-1]
            with open(fpath, 'w') as fout:
                fout.write(json.dumps(l, indent=2, sort_keys=True))
    return Ns

if __name__ == '__main__':
    clean_json()
I would like to create a timer using Lua, in such a way that I could specify a callback function to be triggered after X seconds have passed.
What would be the best way to achieve this? (I need to download some data from a webserver that will be parsed once or twice an hour.)
Cheers.
If millisecond accuracy is not needed, you could just go for a coroutine-based solution which you resume periodically, for example at the end of your main loop. Like this:
require 'socket' -- for a sleep function (could also use os.execute('sleep 10'))

timer = function (time)
    local init = os.time()
    local diff = os.difftime(os.time(), init)
    while diff < time do
        coroutine.yield(diff)
        diff = os.difftime(os.time(), init)
    end
    print('Timer timed out at '..time..' seconds!')
end

co = coroutine.create(timer)
coroutine.resume(co, 30) -- timer starts here!

while coroutine.status(co) ~= "dead" do
    print("time passed", select(2, coroutine.resume(co)))
    print('', coroutine.status(co))
    socket.sleep(5)
end
This uses the sleep function from LuaSocket; you could use any of the other alternatives suggested on the Lua-users wiki.
Try lalarm, here:
http://www.tecgraf.puc-rio.br/~lhf/ftp/lua/
Example (based on src/test.lua):
-- alarm([secs,[func]])
alarm(1, function() print(2) end); print(1)
Output:
1
2
If it's acceptable for you, you can try LuaNode. The following code sets a timer:
setInterval(function()
    console.log("I run once a minute")
end, 60000)
process:loop()
Use Script.SetTimer(interval, callbackFunction).
After reading this thread and others, I decided to go with the luv library. Here is my solution:
uv = require('luv') -- luarocks install luv

function set_timeout(timeout, callback)
    local timer = uv.new_timer()
    local function ontimeout()
        uv.timer_stop(timer)
        uv.close(timer)
        callback()
    end
    uv.timer_start(timer, timeout, 0, ontimeout)
    return timer
end

set_timeout(1000, function() print('ok') end) -- time in ms

uv.run() -- it will block here until every timer has finished
On my Debian system I've installed the lua-lgi package to get access to the GObject-based libraries.
The following code shows a usage example demonstrating that you can have several asynchronous callbacks:
local lgi = require 'lgi'
local GLib = lgi.GLib

-- Get the main loop object that handles all the events
local main_loop = GLib.MainLoop()

cnt = 0
function tictac()
    cnt = cnt + 1
    print("tic")
    -- This callback will be called again as long as it returns true
    return cnt < 10
end

-- Call the tictac function every 2 seconds
GLib.timeout_add_seconds(GLib.PRIORITY_DEFAULT, 2, tictac)

-- You can also use an anonymous function, like this
GLib.timeout_add_seconds(GLib.PRIORITY_DEFAULT, 1,
    function()
        print("There have been", cnt, "tics")
        -- This callback will never stop
        return true
    end)

-- Once everything is set up, you can start the main loop
main_loop:run()

-- The next instructions will still be interpreted
print("Main loop is running")
You can find more documentation about LGI here