call VOLTTRON agent function from external app via rpc - volttron

Is it possible to call a VOLTTRON agent method running on an edge device via rpc from a Python web app running on a different PC on the LAN?
Sorry if this sounds like a silly question, but the raise_setpoints_up function is what I would want to call on my VOLTTRON edge device if its even possible.
@RPC.export
def rpc_method(self):
    """
    RPC method
    May be called from another agent via self.core.rpc.call
    """

def raise_setpoints_up(self):
    _log.debug(f'*** [Setter Agent INFO] *** - STARTING raise_setpoints_up FUNCTION!')
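(For context: within the platform, one agent calls another agent's exported method through the VIP RPC subsystem, roughly as in the sketch below; the same pattern appears in the VOLTTRON Central answer further down this page. The peer identity 'setter.agent' and the 10-second timeout are illustrative assumptions, not values from the actual setup.)
# Hedged sketch: calling the exported method from another agent on the same platform.
# 'setter.agent' is an assumed VIP identity for the agent exporting raise_setpoints_up.
result = self.vip.rpc.call(
    'setter.agent',          # peer: VIP identity of the target agent (assumption)
    'raise_setpoints_up'     # name of the RPC-exported method to invoke
).get(timeout=10)            # block until the call returns or times out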
Is there any documentation to read up on for this process? I am hoping to find something I can use with aiohttp, something like the snippet below; any tips or links to read up on are greatly appreciated.
import aiohttp
import asyncio
from jsonrpcclient.clients.aiohttp_client import AiohttpClient

async def main(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        client = AiohttpClient(session, "http://xxx.xxx.xxx.xxx:5000/")  # VOLTTRON edge?
        response = await client.request("agent.core.rpc.call")  # agent method?
        print(response.data.result)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
Would it matter whether the VOLTTRON edge device is a single instance or part of a multi-platform setup when calling an agent via RPC?

Related

Deploying a python bot script on Google Cloud Run (GCR)

I have been racking my brains on this for a few weeks now, trying different variations from Google Cloud service offerings but can't seem to find the proper one.
I have a python script with dependencies etc, that I have containerized, pushed, and deploy to GCR.
The script is a bot that connects to an external websocket receiving signals perpetually to then do other processing via API against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so that's a no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement); however, since the script (my entry point) doesn't return unless it quits, the job only lets it run for a minute or so before the Jobs API declares it 'failed'. Basically, it launches my entry point, which executes the script as if it were running locally, and my script isn't meant to return anything.
To try and get around this, I went Google's recommended way and built a main.py with their standard boilerplate, as a wrapper to act as a launcher for the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
                  + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would allow the job to execute my script and mark the job as completed while the actual script keeps running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, I assumed the output would get caught by Google's logging and I would see it.
The job runs and is marked as succeeded; however, I see no indication of the actual script executing anywhere.
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option since I can run them on a scheduler to make sure the script launches, say, every hour (my script uses a lock file so it doesn't run again if already running).
Am I just missing the point on how these cloud services run?
Are offerings like Google Cloud Run Jobs/Functions, etc. meant to execute jobs that return and quit until launched again on whatever schedule?
Do I need to consider Google Compute Engine as an option for this use case, that is, a full running VM instead of stateless/serverless options?
I am trying to run this in a containerized, scalable-as-needed fashion to both make my project portable and minimize costs as much as possible, given the always-running nature of the job.
Lastly, I know services like PythonAnywhere, and I am sure others, make this kind of thing easier, but I would like to learn how to do this via standard cloud offerings like GCR, AWS, etc.
thanks for any insight / advice!
Cloud Run's best fit is serving HTTP REST APIs (stateless services). There are also Jobs, in beta.
One of the top features of Run is that it scales to 0 when there are no requests to your service (your service instance gets totally destroyed).
If your bot needs to stay alive forever, Run is not for you... (even though you can configure Run to keep at least one instance alive).
I would instead consider App Engine or Compute Engine.
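As an aside on the wrapper shown in the question: subprocess.Popen returns immediately and nothing ever reads the piped output, so the job finishes while the child's output is silently discarded. A hedged sketch of a wrapper that waits for the child and forwards its output to stdout (which Cloud Run's logging does capture) could look like this; myscript.py is the script name from the question:
import subprocess
import sys

def run_and_forward():
    # Launch the worker script and stream its combined output line by line.
    proc = subprocess.Popen(
        ["python3", "myscript.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    for line in proc.stdout:
        print(line, end="", flush=True)  # forwarded lines show up in the job's logs
    return proc.wait()  # block until the script exits; reuse its exit code

if __name__ == "__main__":
    sys.exit(run_and_forward())
Note this keeps the job "running" for as long as the bot runs, which is exactly the always-on behaviour the answer above says Run is not designed for.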

Total cpu utilization of running app engine flexible instances

I need to make decisions in an external system based on the current CPU utilization of my App Engine Flexible service. I can see the exact values / metrics I need to use in the dashboard charting in my Google Cloud Console, but I don't see a direct, easy way to get this information from something like a gcloud command.
I also need to know the count of running instances, but I think I can use gcloud app instances list -s default to get a list of my running instances in the default service, and then I can use a count of lines approach to get this info easily. I intend to make a python function which returns a tuple like (instance_count, cpu_utilization).
I'd appreciate if anyone can direct me to an easy way to get this. I am currently exploring the StackDriver Monitoring service to get this same information, but as of now it is looking super-complicated to me.
You can use the gcloud app instances list -s default command to get the running instances list, as you said. To retrieve CPU utilization, have a look at the Python Client for Stackdriver Monitoring. To list available metric types:
from google.cloud import monitoring

client = monitoring.Client()
for descriptor in client.list_metric_descriptors():
    print(descriptor.type)
Metric descriptors are described here. To display utilization across your GCE instances during the last five minutes:
metric = 'compute.googleapis.com/instance/cpu/utilization'
query = client.query(metric, minutes=5)
print(query.as_dataframe())
Do not forget to add google-cloud-monitoring==0.28.1 to “requirements.txt” before installing it.
Check this code that locally runs for me:
import logging

from flask import Flask
from google.cloud import monitoring as mon

app = Flask(__name__)

@app.route('/')
def list_metric_descriptors():
    """Return all metric descriptors"""
    # Instantiate client
    client = mon.Client()
    for descriptor in client.list_metric_descriptors():
        print(descriptor.type)
    return descriptor.type

@app.route('/CPU')
def cpuUtilization():
    """Return CPU utilization"""
    client = mon.Client()
    metric = 'compute.googleapis.com/instance/cpu/utilization'
    query = client.query(metric, minutes=5)
    print(type(query.as_dataframe()))
    print(query.as_dataframe())
    data = str(query.as_dataframe())
    return data

@app.errorhandler(500)
def server_error(e):
    logging.exception('An error occurred during a request.')
    return """
    An internal error occurred: <pre>{}</pre>
    See logs for full stacktrace.
    """.format(e), 500

if __name__ == '__main__':
    # This is used when running locally. Gunicorn is used to run the
    # application on Google App Engine. See entrypoint in app.yaml.
    app.run(host='127.0.0.1', port=8080, debug=True)
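Putting the pieces together, a rough sketch of the (instance_count, cpu_utilization) helper described in the question might look like the one below. It shells out to the gcloud command mentioned above and averages the utilization values returned by the same monitoring query, so the column handling and the 'default' service name are assumptions to adapt:
import subprocess

from google.cloud import monitoring

def instance_count_and_cpu(service='default', minutes=5):
    # Count running instances: gcloud prints one line per instance plus a header line.
    listing = subprocess.check_output(
        ['gcloud', 'app', 'instances', 'list', '-s', service], text=True
    ).strip().splitlines()
    instance_count = max(len(listing) - 1, 0)

    # Average CPU utilization over the last few minutes via Stackdriver Monitoring.
    client = monitoring.Client()
    query = client.query('compute.googleapis.com/instance/cpu/utilization', minutes=minutes)
    df = query.as_dataframe()
    cpu_utilization = float(df.mean().mean()) if not df.empty else 0.0  # mean over instances and samples

    return instance_count, cpu_utilization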

FTP to Google Storage

Some files get uploaded on a daily basis to an FTP server and I need those files under Google Cloud Storage. I don't want to bug the users who upload the files into installing any additional software; I'd rather just let them keep using their FTP client.
Is there a way to use GCS as an FTP server? If not, how can I create a job that periodically picks up the files from an FTP location and puts them in GCS?
In other words: what's the best and simplest way to do it?
You could write yourself an FTP server which uploads to GCS, for example based on pyftpdlib.
Define a custom handler which stores to GCS when a file is received:
import os

from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer
from pyftpdlib.authorizers import DummyAuthorizer
from google.cloud import storage

class MyHandler(FTPHandler):
    def on_file_received(self, file):
        storage_client = storage.Client()
        bucket = storage_client.get_bucket('your_gcs_bucket')
        blob = bucket.blob(file[5:])  # strip leading /tmp/
        blob.upload_from_filename(file)
        os.remove(file)
    # def on_... implement other events as needed

def main():
    authorizer = DummyAuthorizer()
    authorizer.add_user('user', 'password', homedir='/tmp', perm='elradfmw')

    handler = MyHandler
    handler.authorizer = authorizer
    handler.masquerade_address = 'add.your.public.ip'
    handler.passive_ports = range(60000, 60999)

    server = FTPServer(("127.0.0.1", 21), handler)
    server.serve_forever()

if __name__ == "__main__":
    main()
I've successfully run this on Google Container Engine (it requires some effort getting passive FTP working properly) but it should be pretty simple to do on Compute Engine. According to the above configuration, open port 21 and ports 60000 - 60999 on the firewall.
To run it, python my_ftp_server.py - if you want to listen on port 21 you'll need root privileges.
You could set up a cron job and rsync between the FTP server and Google Cloud Storage using gsutil rsync or the open source rclone tool.
If you can't run those commands on the FTP server periodically, you could mount the FTP server as a local filesystem or drive (Linux, Windows) and rsync from there.
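If a small script is easier to schedule than a raw crontab entry, a hedged sketch of that sync step in Python could look like this; the directory and bucket names are placeholders:
import subprocess

def sync_ftp_uploads(local_dir='/srv/ftp/uploads', bucket='gs://your_gcs_bucket'):
    # Mirror the FTP upload directory into the bucket; -m parallelizes, -r recurses.
    subprocess.run(['gsutil', '-m', 'rsync', '-r', local_dir, bucket], check=True)

if __name__ == '__main__':
    sync_ftp_uploads()  # run from cron, e.g. once per hour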
I have successfully set up an FTP proxy to GCS using gcsfs in a VM in Google Compute (mentioned by jkff in the comment to my question), with these instructions:
http://ilyapimenov.com/blog/2015/01/19/ftp-proxy-to-gcs.html
Some changes are needed though:
In /etc/vsftpd.conf change #write_enable=YES to
write_enable=YES
Add firewall rules in your GC project to allow
access to ports 21 and passive ports 15393 to 15592 (https://console.cloud.google.com/networking/firewalls/list)
Some possible problems:
If you can access the FTP server using the local ip, but not the remote ip, it's probably because you haven't set up the firewall rules
If you can access the ftp server, but are unable to write, it's probably because you need the write_enable=YES
If you are trying to read the folder you created on /mnt but get an I/O error, it's probably because the bucket in gcsfs_config is not right.
Also, your ftp client needs to use the transfer mode set to "passive".
Set up a VM in Google Cloud using some *nix flavor. Set up FTP on it, and point it to a folder abc. Use gcsfuse to mount abc as a GCS bucket. Voila - back and forth between GCS / FTP without writing any software.
(Small print: fuse rolls up and dies if you push too much data, so bounce it periodically, once a week or once a day; also you might need to set the mount or fuse to allow permissions for all users)

Add a new page to Volttron Central

I have a standalone HTML page with jQuery. The jQuery is used to do AJAX call to the Python backend. I need to integrate it with Volttron Central. I have looked at the documentation but there is no section talking about this. I think it would be nice to have this kind of info in the doc.
My current approach is to convert the backend Python to be a Volttron agent but I don't know how to integrate the front end HTML page with VC.
Any suggestion where to start? Thanks.
When you have an agent that is going to register its own endpoint, you should do that during the onstart signal. The following was extracted from the VOLTTRON Central agent. It shows how to register an endpoint that is dynamic (uses VOLTTRON RPC as the endpoint) as well as static (where the HTML is simply served). I have removed the unnecessary bits for this example.
onstart volttron central code
For clarity, MASTER_WEB and VOLTTRON_CENTRAL are unique identifiers for those specific agents running on the VOLTTRON instance.
@Core.receiver('onstart')
def _starting(self, sender, **kwargs):
    """ Starting of the platform
    :param sender:
    :param kwargs:
    :return:
    """
    ...
    # Registers dynamic route.
    self.vip.rpc.call(MASTER_WEB, 'register_agent_route',
                      r'^/jsonrpc.*',
                      self.core.identity,
                      'jsonrpc').get(timeout=30)

    # Registers static route.
    self.vip.rpc.call(MASTER_WEB, 'register_path_route', VOLTTRON_CENTRAL,
                      r'^/.*', self._webroot).get(timeout=30)
Since you added the route during onstart, you should also remove it when the agent is stopped. onstop referenced code
#Core.receiver("onstop")
def stopping(self, sender, **kwargs):
'''
Release subscription to the message bus because we are no longer able
to respond to messages now.
'''
try:
# unsubscribes to all topics that we are subscribed to.
self.vip.pubsub.unsubscribe(peer='pubsub', prefix=None, callback=None)
except KeyError:
# means that the agent didn't start up properly so the pubsub
# subscriptions never got finished.
pass
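To mirror the onstart registration above, the registered routes can also be dropped when stopping; a hedged sketch, assuming the MASTER_WEB service on your platform version exposes an unregister_all_agent_routes RPC method, would add one call to the stopping() handler shown above:
# Inside stopping(): ask the web service to drop every route this agent registered.
# Assumes MASTER_WEB exposes 'unregister_all_agent_routes'; verify against your platform version.
self.vip.rpc.call(MASTER_WEB, 'unregister_all_agent_routes').get(timeout=30)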

GAE behavior when relocating an application to another server

Two questions:
Does Google App Engine send any kind of message to an application just before relocating it to another server?
If so, what is that message?
No, it doesn't. It doesn't relocate either; old instances keep running (and eventually stop when idle for long enough) while new ones are spawned.
There are times when App Engine needs to move your instance to a different machine to improve load distribution.
When App Engine needs to turn down a manual scaling instance it first notifies the instance. There are two ways to receive this notification. First, the is_shutting_down() method from google.appengine.api.runtime begins returning true. Second, if you have registered a shutdown hook, it will be called. It's a good idea to register a shutdown hook in your start request. After the notification is issued, existing requests are given 30 seconds to complete, and new requests immediately return 404.
If an instance is handling a request, App Engine pauses the request and runs the shutdown hook. If there is no active request, App Engine sends an /_ah/stop request, which runs the shutdown hook. The /_ah/stop request bypasses normal handling logic and cannot be handled by user code; its sole purpose is to invoke the shutdown hook. If you raise an exception in your shutdown hook while handling another request, it will bubble up into the request, where you can catch it.
The following code sample demonstrates a basic shutdown hook:
from google.appengine.api import apiproxy_stub_map
from google.appengine.api import runtime

def my_shutdown_hook():
    apiproxy_stub_map.apiproxy.CancelApiCalls()
    save_state()
    # May want to raise an exception

runtime.set_shutdown_hook(my_shutdown_hook)
Alternatively, the following sample demonstrates how to use the is_shutting_down() method:
while more_work_to_do and not runtime.is_shutting_down():
    do_some_work()
    save_state()
More details here: https://developers.google.com/appengine/docs/python/modules/#Python_Instance_states
