volttron CSV Agent: add a time delay to onstart

On the CSV driver agent script, how could I add a 300-second delay to the onstart function? I'm hoping to delay the self.actuate_point method one time only, by 300 seconds, when the agent starts after installation.
# csv driver agent start
@Core.receiver("onstart")
def onstart(self, sender, **kwargs):
    """
    This method is called once the Agent has successfully connected to the platform.
    This is a good place to set up subscriptions if they are not dynamic, or to
    do any other startup activities that require a connection to the message bus.
    Called after any configuration methods that are called at startup.
    Usually not needed if using the configuration store.
    """
    # ADD time.sleep?
    time.sleep(300)
    # Every time_delay_normal seconds, ask the core agent loop to run the
    # actuate_point method with no parameters
    self.core.periodic(self.time_delay_normal, self.actuate_point)

@HenryHub you shouldn't use the main time library in most of VOLTTRON, because time.sleep blocks the gevent event loop the agent runs on. Instead you should use gevent.sleep:
import gevent
gevent.sleep(300)  # yields to other greenlets for 300 seconds instead of blocking them
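If you'd rather not block onstart at all, you can schedule the delayed start instead. A minimal sketch, assuming your agent's core exposes the gevent-backed spawn_later method (check your VOLTTRON version):
@Core.receiver("onstart")
def onstart(self, sender, **kwargs):
    # Run _start_periodic once, 300 seconds from now, without holding
    # up the rest of agent startup. (Sketch; spawn_later is assumed to
    # be available on self.core, as in the VOLTTRON versions I've seen.)
    self.core.spawn_later(300, self._start_periodic)

def _start_periodic(self):
    # From here on, call actuate_point every time_delay_normal seconds.
    self.core.periodic(self.time_delay_normal, self.actuate_point)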

Related

Deploying a python bot script on Google Cloud Run (GCR)

I have been racking my brains on this for a few weeks now, trying different variations from Google Cloud service offerings but can't seem to find the proper one.
I have a python script with dependencies etc. that I have containerized, pushed, and deployed to GCR.
The script is a bot that connects to an external websocket receiving signals perpetually to then do other processing via API against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement). However, since the script (my entry point) doesn't return after launch unless it quits, the Jobs service only lets it run for a minute or so until the Jobs API declares it 'failed'; basically, it launches my entry point, which executes the script as if it were running locally, and my script isn't meant to return anything.
To try to get around this, I went Google's recommended way and built a main.py with their standard boilerplate as a wrapper to act as a launcher for the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
                  + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would allow the job to execute my script and mark the job as completed while the actual script remains running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, I assumed the output would get caught by Google logging and I would see it.
The job runs and is marked as succeeded; however, I see no indication of the actual script executing anywhere.
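Side note in case it matters: with stdout=subprocess.PIPE the child's output is captured by the wrapper rather than inherited, so if the wrapper never reads the pipe, presumably nothing ever reaches the container log. A variant I could try (a sketch, still launching my script as myscript.py), letting the child inherit stdout/stderr and waiting for it:
import subprocess
import sys

# Let the child inherit stdout/stderr (the default) so its prints land
# in the container log, and wait for it so the Job's status reflects
# how the script actually exited. (Sketch; this addresses the missing
# logs, not the Job lifetime question.)
completed = subprocess.run(["python3", "myscript.py"])
sys.exit(completed.returncode)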
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option since I can run them on a scheduler to make sure the script launches, say, every hour (my script uses a lock file so it doesn't run again if it is already running).
Am I just missing the point on how these cloud services run?
Are offerings like Google Cloud Run Jobs/Functions etc. meant to execute jobs that return and quit until launched again on whatever schedule?
Do I need to consider Google Compute Engine for this use case, i.e. a full running VM instead of stateless/serverless options?
I am trying to run this in a containerized, scale-as-needed fashion, both to make my project portable and to minimize costs as much as possible given the always-running nature of the job.
Lastly, I know services like PythonAnywhere, and I'm sure others, make this kind of thing easier, but I would like to learn how to do this via standard cloud offerings like GCR, AWS, etc.
thanks for any insight / advice!
Cloud Run's best fit is serving stateless HTTP REST APIs. There are also Jobs, currently in beta.
One of the top features of Run is that it scales to zero when there are no requests to your service (your service instance gets destroyed entirely).
If your bot needs to stay alive forever, Run is not for you (even though you can configure Run to keep at least one instance alive).
I would instead consider App Engine or Compute Engine.

call VOLTTRON agent function from external app via rpc

Is it possible to call a VOLTTRON agent method running on an edge device via rpc from a Python web app running on a different PC on the LAN?
Sorry if this sounds like a silly question, but the raise_setpoints_up function is what I would want to call on my VOLTTRON edge device, if it's even possible.
@RPC.export
def rpc_method(self):
    """
    RPC method

    May be called from another agent via self.core.rpc.call
    """

def raise_setpoints_up(self):
    _log.debug('*** [Setter Agent INFO] *** - STARTING raise_setpoints_up FUNCTION!')
Is there any documentation to read up on for this process? I am hoping to find something I can use with aiohttp, along the lines of the snippet below; any tips or links to read up on are greatly appreciated.
import aiohttp
import asyncio
from jsonrpcclient.clients.aiohttp_client import aiohttpClient

async def main(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        client = aiohttpClient(session, "http://xxx.xxx.xxx.xxx:5000/")  # VOLTTRON edge?
        response = await client.request("agent.core.rpc.call")  # agent method?
        print(response.data.result)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
Would it matter if the VOLTTRON edge device is single or part of a multi-platform to call an agent via rpc?
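For reference, my understanding is that the agent-to-agent form inside the platform looks roughly like this; a sketch, assuming raise_setpoints_up itself carries the @RPC.export decorator and the Setter Agent was installed under the hypothetical VIP identity 'setteragent':
# Hedged sketch: call the exported method from another VOLTTRON agent.
# 'setteragent' is a hypothetical VIP identity; use whatever identity
# the Setter Agent was actually installed with.
result = self.vip.rpc.call(
    'setteragent',
    'raise_setpoints_up',
).get(timeout=10)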

Add a new page to Volttron Central

I have a standalone HTML page with jQuery. The jQuery is used to do AJAX call to the Python backend. I need to integrate it with Volttron Central. I have looked at the documentation but there is no section talking about this. I think it would be nice to have this kind of info in the doc.
My current approach is to convert the backend Python to be a Volttron agent but I don't know how to integrate the front end HTML page with VC.
Any suggestion where to start? Thanks.
When you have an agent that is going to register its own endpoint, you should do that during the onstart signal. The following was extracted from the Volttron Central agent. It shows how to register an endpoint that is dynamic (uses VOLTTRON RPC as the endpoint) as well as static (where the HTML is simply served). I have removed the unnecessary bits for this example.
onstart volttron central code
For clarity: MASTER_WEB and VOLTTRON_CENTRAL are the unique identities of those specific agents running on the VOLTTRON instance.
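For reference, those identities come from the platform's known-identities module; hedged, since the exact import path can vary between VOLTTRON releases:
# Hedged: import path as in recent VOLTTRON releases; check your version.
from volttron.platform.agent.known_identities import MASTER_WEB, VOLTTRON_CENTRAL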
@Core.receiver('onstart')
def _starting(self, sender, **kwargs):
    """ Starting of the platform

    :param sender:
    :param kwargs:
    :return:
    """
    ...
    # Registers dynamic route.
    self.vip.rpc.call(MASTER_WEB, 'register_agent_route',
                      r'^/jsonrpc.*',
                      self.core.identity,
                      'jsonrpc').get(timeout=30)

    # Registers static route.
    self.vip.rpc.call(MASTER_WEB, 'register_path_route', VOLTTRON_CENTRAL,
                      r'^/.*', self._webroot).get(timeout=30)
Since you added the route during onstart, you should also remove it when the agent is stopped. Referenced onstop code:
#Core.receiver("onstop")
def stopping(self, sender, **kwargs):
'''
Release subscription to the message bus because we are no longer able
to respond to messages now.
'''
try:
# unsubscribes to all topics that we are subscribed to.
self.vip.pubsub.unsubscribe(peer='pubsub', prefix=None, callback=None)
except KeyError:
# means that the agent didn't start up properly so the pubsub
# subscriptions never got finished.
pass
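To drop the web routes themselves on shutdown, something like the following may work; a sketch, assuming MASTER_WEB exposes an unregister_all_agent_routes RPC method (verify the name against your VOLTTRON version):
# Hedged sketch: ask the web agent to drop every route this agent
# registered. Check that this RPC exists in your platform version.
self.vip.rpc.call(MASTER_WEB, 'unregister_all_agent_routes').get(timeout=30)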

GAE behavior when relocating an application to another server

Two questions:
Does Google App Engine send any kind of message to an application just before relocating it to another server?
If so, what is that message?
No, it doesn't. It doesn't relocate either: old instances keep running (and eventually stop when idle for long enough) while new ones are spawned.
There are times when App Engine needs to move your instance to a different machine to improve load distribution.

When App Engine needs to turn down a manual scaling instance it first notifies the instance. There are two ways to receive this notification. First, the is_shutting_down() method from google.appengine.api.runtime begins returning true. Second, if you have registered a shutdown hook, it will be called. It's a good idea to register a shutdown hook in your start request. After the notification is issued, existing requests are given 30 seconds to complete, and new requests immediately return 404.

If an instance is handling a request, App Engine pauses the request and runs the shutdown hook. If there is no active request, App Engine sends an /_ah/stop request, which runs the shutdown hook. The /_ah/stop request bypasses normal handling logic and cannot be handled by user code; its sole purpose is to invoke the shutdown hook. If you raise an exception in your shutdown hook while handling another request, it will bubble up into the request, where you can catch it.
The following code sample demonstrates a basic shutdown hook:
from google.appengine.api import apiproxy_stub_map
from google.appengine.api import runtime

def my_shutdown_hook():
    apiproxy_stub_map.apiproxy.CancelApiCalls()
    save_state()
    # May want to raise an exception

runtime.set_shutdown_hook(my_shutdown_hook)
Alternatively, the following sample demonstrates how to use the is_shutting_down() method:
while more_work_to_do and not runtime.is_shutting_down():
    do_some_work()
    save_state()
More details here: https://developers.google.com/appengine/docs/python/modules/#Python_Instance_states

Creating Web-service for modifying XPO objects by timer

I have several clients that create new objects. When a new object is created I need to start a timer that will change some object properties when the time has elapsed (each object is visible only to defined client groups for a certain time).
I want to use a web service for this purpose and wrote a method that starts the timer.
For example, I need to set the timer to 5 minutes. Are there any restrictions on execution time? Will a timer keep my web service alive?
Perhaps I don't understand your task completely, but your idea about Web Service usage looks strange to me. Web Services are usually used to process requests from remote clients, i.e. a client calls a method of a Web Service and the Web Service returns a result to this client.
I think I got your idea :). If you just need to change data in the DB, I think the better solution is to create a Windows service which will ping the web service when needed.
