We have a requirement to run AWS Greengrass on a VM in our own data center, which will not have connectivity to AWS IoT. The idea is to have a similar solution in the cloud and on-premises (behind a firewall). We want to reuse the cloud solution, which has Lambda functions running in AWS Greengrass. Is it possible to run AWS Greengrass without AWS IoT? Any insight would be of great help.
You should consider the hardware requirements of Greengrass; you can find more information here.
To define the Greengrass configuration for the first time, you need to connect your Greengrass core to AWS IoT and run a deployment; after that, you can run Greengrass without any AWS IoT connectivity. But consider that the deployment of serverless functions (Lambda functions) is done through AWS IoT. You can find more information about Greengrass here.
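To illustrate what keeps working once the deployment has reached the device, here is a minimal sketch (not from the question; the topic name is just an illustration) of a long-lived Greengrass V1 Lambda that publishes to a local topic through the core's message router without needing any cloud connection, following the pattern of the AWS "hello world" sample:

```python
# A minimal sketch, assuming a Greengrass V1 long-lived Lambda and an
# illustrative local topic name. Once deployed, it keeps publishing
# locally even if the core has no connectivity to AWS IoT.
import json
import time
from threading import Timer

import greengrasssdk  # Greengrass Core SDK, packaged with the Lambda

client = greengrasssdk.client("iot-data")

def publish_heartbeat():
    # Publish to a local topic on the core; group subscriptions decide
    # whether it is routed to other Lambdas, local devices, or the cloud.
    client.publish(
        topic="local/heartbeat",  # hypothetical topic
        payload=json.dumps({"status": "ok", "ts": time.time()}),
    )
    # Re-schedule every 30 seconds (long-lived/pinned function).
    Timer(30, publish_heartbeat).start()

publish_heartbeat()

def lambda_handler(event, context):
    # Required entry point; unused for a long-lived function.
    return
```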
I have a Jetson set up as a core device. It has Greengrass installed on it (including deployments, components, etc.).
I want to set up an AWS SSH tunnel. I installed and configured the aws-iot-device-client, but it keeps getting disconnected. In the AWS console, in the MQTT test client, I get the error message DUPLICATE_CLIENT_ID.
Any thoughts or ideas are highly appreciated.
If you are using AWS IoT Greengrass V2, you can deploy the AWS-provided Secure Tunneling component to install and configure the AWS IoT Device Client for you, so you can create SSH tunnels. This component uses the Greengrass nucleus' MQTT connections, so it avoids the duplicate ID error.
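If you manage deployments programmatically, a hedged sketch of deploying that component with boto3 might look like the following; the target ARN and component version are placeholders, so check the console or docs for the latest published version:

```python
# Sketch: deploy the AWS-provided Secure Tunneling component to a
# Greengrass V2 core device using boto3. Target ARN and version are
# placeholders, not values from the question.
import boto3

ggv2 = boto3.client("greengrassv2")

response = ggv2.create_deployment(
    targetArn="arn:aws:iot:eu-west-1:123456789012:thing/MyJetsonCore",  # placeholder
    deploymentName="secure-tunneling-deployment",
    components={
        "aws.greengrass.SecureTunneling": {
            "componentVersion": "1.0.0",  # placeholder; use the latest available
        },
    },
)
print(response["deploymentId"])
```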
The AWS IoT Greengrass V2 developer guide, in "Move from AWS IoT Greengrass Version 1", says that AWS IoT Greengrass V2 currently doesn't support connected devices, also called Greengrass devices.
I wonder what the recommended way is (for the time being) to connect edge devices, such as ESP devices, to a core device running Greengrass version 2.
I found an explanation on the AWS IoT forum from Farad at AWS, from February 2021:
Greengrass V2 unfortunately does not provide GGAD (Greengrass Aware Device) yet. They are actively working on addressing this gap and hope to release it in the coming months. Until that time, you can use Greengrass version 1 or you could use this example MQTT Bridge code for Greengrass V2 as a skeleton for implementing your own MQTT broker on V2 in order to support remote devices.
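To make that workaround concrete, here is a hedged sketch (not official AWS code) of the device side: run your own MQTT broker or bridge component on the core, and have edge devices publish to it directly. The broker address, port, and topic below are assumptions, and a real setup should use TLS:

```python
# Sketch of an edge device (or a test script standing in for an ESP board)
# publishing to a broker you run on the Greengrass V2 core device.
# Host, port, and topic are hypothetical; paho-mqtt 1.x constructor style.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "greengrass-core.local"   # hypothetical address of the core
BROKER_PORT = 1883                      # plain MQTT here; use TLS in practice
TOPIC = "sensors/esp32/telemetry"       # hypothetical topic your bridge forwards

client = mqtt.Client(client_id="esp32-test-device")
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

while True:
    client.publish(TOPIC, json.dumps({"temperature": 21.5, "ts": time.time()}))
    time.sleep(10)
```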
Is there a way to use the AWS IoT rules engine locally on AWS IoT Greengrass?
Is the rules engine a feature of AWS IoT Core (cloud) only?
Is a Lambda function deployed to the local AWS IoT Greengrass core that subscribes to a topic and takes an action the equivalent of the AWS IoT Core rules engine?
Even though Greengrass is an extension of IoT Core, there is no native rules engine component in the Greengrass service. Greengrass ultimately serves to forward data to IoT Core, where you can use the rules engine to trigger other cloud services.
If you're specifically looking at Lambdas that run on Greengrass, these Lambdas run on your hardware, not in the cloud, and therefore have to be handled by you, either through subscriptions or active invocations (invoking them from code).
This is because when you create a deployment from the cloud, the Greengrass service containerizes the Greengrass group you configured and deploys it to the GG core device. Once the container reaches the core device, it cannot be altered or managed from the cloud unless you make another deployment with your modifications.
Also, there are two types of Lambdas: long-lived (think of them as daemon processes) and on-demand (think of them as code that has to be triggered explicitly). The only way to trigger an on-demand Lambda is either a subscription or an active invocation. There is no native feature that triggers on-demand Lambdas; it has to be in your code logic.
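As a hedged sketch of that "rules engine in your own code" idea: a local long-lived or subscribed Lambda can evaluate the condition itself and then actively invoke an on-demand Lambda through the Greengrass Core SDK. The function ARN and threshold below are placeholders, not anything from the question:

```python
# Sketch of the active-invocation path: one local Lambda applying its own
# rule logic and then invoking an on-demand Lambda on the same core.
import json

import greengrasssdk

lambda_client = greengrasssdk.client("lambda")

def lambda_handler(event, context):
    # Hypothetical rule: alert if the reported temperature is too high.
    if event.get("temperature", 0) > 50:
        lambda_client.invoke(
            FunctionName="arn:aws:lambda:eu-west-1:123456789012:function:Alarm:1",  # placeholder
            InvocationType="Event",  # fire-and-forget
            Payload=json.dumps(event),
        )
    return {"handled": True}
```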
I am a big fan of particle.io and was very excited when they added a Google Cloud Platform (GCP) integration, so I can save my IoT data into GCP Datastore.
I've followed their tutorial and got it working but I need some advice on implementing this so it can scale on GCP.
My current implementation is like so:
https://docs.particle.io/tutorials/integrations/google-cloud-platform/#example-use-cases
Basically, I have a GCP Compute Engine instance that runs a Node.js script which listens for the Pub/Sub events (sent by my IoT devices) and saves them to Datastore.
Now because I want it to scale, ideally this node.js script should run on a managed service that can respond to spikes automatically. But GCP does not seem to have anything like this.
In AWS I could do this:
IoT Data -> Particle.io AWS WebHook -> AWS API Gateway Endpoint -> AWS Lambda -> AWS DynamoDB
All the AWS points are managed.
What's the best way to run that Node.js script in a fully managed, always-available way on GCP, so that it listens for Pub/Sub events, saves them to Datastore, and automatically scales as load increases?
Any help/advice will be appreciated.
Thanks very much,
Mark
You have a number of options:
1- As someone else mentioned, there is Cloud Functions. It's basically a Node.js function you deploy, and Google Cloud takes care of scaling it up and down for you (see the sketch after this list).
2- You can deploy your Node.js app to App Engine Flex which has autoscaling enabled by default.
3- If you want to stay on Compute Engine, you can configure autoscaling yourself on Compute Engine (via a managed instance group).
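To make option 1 concrete, here is a hedged sketch of a Pub/Sub-triggered Cloud Function that writes each Particle event into Datastore. Your tutorial uses a Node.js script; this is the same idea in Python, and the entity kind and attribute names are placeholders:

```python
# Sketch: background Cloud Function with a Pub/Sub trigger that stores
# each incoming Particle event in Datastore. Kind/attribute names are
# placeholders, not from the original tutorial.
import base64

from google.cloud import datastore

ds = datastore.Client()

def handle_particle_event(event, context):
    """Entry point for the Pub/Sub trigger; event['data'] is base64-encoded."""
    data = base64.b64decode(event["data"]).decode("utf-8")

    entity = datastore.Entity(key=ds.key("ParticleEvent"))  # placeholder kind
    entity.update({
        "device_id": event.get("attributes", {}).get("device_id"),
        "published_at": context.timestamp,
        "data": data,
    })
    ds.put(entity)
```

Because Cloud Functions scales instances with the Pub/Sub backlog, this removes the need to keep a Compute Engine VM running just to listen for events.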
I am looking for a back-end web service interface for our mobile apps, but I also need to host an Ubuntu server behind this. Google seems to have an ideal solution for this with their Compute Engine and App Engine; how do I do this with Amazon AWS? The reason is that Amazon EC2 has a free tier.
Thanks