What is the purpose of having a local-only shadow in AWS IoT Greengrass?

AWS IoT Greengrass works over a local network, so an internet connection is not required. Why, then, does AWS IoT Greengrass provide a local-only shadow concept?
A local shadow synced with the cloud makes sense: when AWS IoT Core tries to send a message to a Greengrass device while there is no internet connectivity, the message is not lost; it is written to the IoT Core shadow, and the Greengrass device picks it up once connectivity is available.
But other than this, what is the reason for a local-only Greengrass shadow?

I am guessing here and want to be corrected if wrong. The purpose of a local-only shadow: suppose I am a mining client with no need for (or no possibility of) internet connectivity, in some remote part of the world. But I have many local systems/devices on-premises performing state changes on my device. I want to sequence the state-change calls and not lose them if my device needs to reboot. In that context I would need a local shadow.
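For illustration only, here is a minimal sketch of how a local client device might update such a local shadow through the Greengrass core's local MQTT broker, with no cloud connectivity involved. The hostname, thing name, certificate paths, and topics below are placeholders assumed for the sketch (classic shadow topic syntax, paho-mqtt 1.x callback API), not something confirmed by the question.

```python
# Sketch: update a local shadow via the Greengrass core's local MQTT broker.
# All names and paths are placeholders; assumes mutual-TLS client connections
# and the classic shadow topics, with no route to the cloud required.
import json
import ssl

import paho.mqtt.client as mqtt

GG_CORE_HOST = "greengrass.local"   # hypothetical local core address
THING_NAME = "MiningSensor01"       # hypothetical client device name

UPDATE_TOPIC = f"$aws/things/{THING_NAME}/shadow/update"
ACCEPTED_TOPIC = f"$aws/things/{THING_NAME}/shadow/update/accepted"


def on_connect(client, userdata, flags, rc):
    # Subscribe so we see the core's confirmation of the shadow update.
    client.subscribe(ACCEPTED_TOPIC)


def on_message(client, userdata, msg):
    print("shadow update accepted:", msg.payload.decode())


client = mqtt.Client(client_id=THING_NAME)
client.tls_set(ca_certs="root-ca.pem",
               certfile="device.cert.pem",
               keyfile="device.private.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(GG_CORE_HOST, 8883)

# Report the device's current state; the core persists it locally.
reported_state = {"state": {"reported": {"pump": "on", "rpm": 1200}}}
client.publish(UPDATE_TOPIC, json.dumps(reported_state))
client.loop_forever()
```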

Related

Connecting a mobile device to an IoT system in a home

I am interested in building a device that will be connected to my home network and that I will be able to control remotely via an app on my cell phone.
My original thought is this:
Both systems will pull data from some cloud service
My mobile device will modify a data entry in the cloud service
The device will constantly poll this data and update the necessary settings when they change
Concerns with the original thought:
Constant polling by the device to see when data has changed (inefficient)
No way to communicate from the device to the phone (feature limiting)
Question:
What is the best way to create the link between my mobile device and the device that will remain in my home?
Potential similar architectures include Nest, Ring, etc.
The concept is correct: typically, IoT devices and mobile apps communicate through the cloud. There are cases when you want to connect your app directly to a device (e.g. through Bluetooth, LAN, or sometimes even the internet), but that is usually done for a specific reason, e.g.:
IP cameras generate a lot of traffic, and putting it through a centralized server is very expensive, so cheap home products do P2P from the mobile app (a centralized server could help you obtain the camera's IP address)
A TV with the phone as a remote control. Using a cloud service here would be awkward, and limiting access to the LAN simplifies the setup
In general cases (Nest, Ring) you can be pretty sure the IoT devices talk to the cloud rather than to the mobile app directly. This offers more features (as data can be processed in bulk in the cloud), but there are also benefits to using the cloud even if it is only used to relay data.
In many cases, you also want to limit traffic (e.g. to save battery or data plan). Two key aspects make that possible:
Don't waste time setting up a connection. That means connecting to a stable cloud service (your mobile might not be online) and using an optimized network (e.g. LAN with a static IP address, NB-IoT, or LTE-M)
Choose an optimal M2M protocol. Two popular M2M protocols are CoAP (a "lightweight HTTP") and MQTT (a messaging protocol)
Once you've set up how both your device and your mobile app exchange data with the server, it's up to you to design how to use that. Note that there is nothing stopping the device from also pushing data to the server: every link (IoT device <-> cloud <-> mobile app) can use a two-way protocol.
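To make the relay pattern concrete, here is a minimal sketch with MQTT (paho-mqtt 1.x API): the home device reports its state as a retained message and listens for commands, while the phone app reads that state and sends a command back. The broker hostname, topics, and client IDs are placeholders; a managed broker (e.g. AWS IoT Core) or a self-hosted Mosquitto instance would play the cloud role.

```python
# Sketch of the cloud-relay pattern over MQTT; all names are placeholders.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical cloud broker

# --- device side: listen for commands, report state as a retained message ---
device = mqtt.Client(client_id="home-device")
device.on_message = lambda c, u, m: print("device got command:", m.payload.decode())
device.connect(BROKER, 1883)
device.subscribe("home/device1/commands")
device.loop_start()
device.publish("home/device1/state", json.dumps({"temp": 21.5}), retain=True)
time.sleep(1)  # give the retained state a moment to reach the broker

# --- app side: read the last known state, then send a command ---------------
app = mqtt.Client(client_id="mobile-app")
app.on_message = lambda c, u, m: print("app sees state:", m.payload.decode())
app.connect(BROKER, 1883)
app.subscribe("home/device1/state")   # the retained message arrives immediately
app.loop_start()
app.publish("home/device1/commands", json.dumps({"set_temp": 22.0}))

time.sleep(2)  # let callbacks fire before this toy example exits
```

Using a retained state message means the app does not need to poll: it gets the last known state as soon as it subscribes, and live updates afterwards.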

Using Amazon MQ or AWS IoT for self-managed IoT devices

I tried Amazon MQ today and found that it is very easy to set up, and we can quickly integrate the pub/sub feature on our IoT device side. But unfortunately, when I checked the limits of Amazon MQ, the maximum number of connections per instance is only 1,000.
(This limit comes from a screenshot of the Amazon MQ documentation.)
This is too low, and I don't see a quota increase option for it. Plus, I am only allowed a maximum of 20 brokers per region, so basically a maximum of 20k devices. And the cost of creating 20 brokers is too high compared to setting up an MQTT broker myself on an EC2 instance with 8 GB of memory and 2 CPUs, which can handle up to 50k connections.
Then I saw another option, which is to use AWS IoT for device management. It supports up to 500k devices. But the downside is that I have to register all my devices as "things" and get a certificate for each device. I really don't need Amazon to manage my devices and keep track of their state; we already have that done. Plus, we would have to familiarize ourselves with how devices are managed in the AWS IoT device management console. Therefore, using the AWS IoT service as a message broker seems more time-consuming to implement than using Amazon MQ.
So, my question is: is Amazon MQ really not designed for IoT devices? Is there any way to use just the MQTT broker service of AWS IoT device management without using its management features (I don't think this is possible)?
Amazon MQ is a cloud managed service for Apache ActiveMQ. One of its aims is to make it easy to migrate an existing product using the protocols that ActiveMQ supports to a cloud managed solution.
So, my question is, is Amazon MQ really not designed for IoT devices?
Your question presumes that there is a black-and-white answer. Amazon MQ may be entirely suitable for an existing product that needs a managed cloud broker; for another product with different requirements it may not be.
Is there anyway to use just the MQTT broker service alone of AWS IoT device management without using its management features(I don't think this is possible)?
Yes, it is possible to use the AWS IoT broker without using the 'thing' management features. From https://docs.aws.amazon.com/iot/latest/developerguide/iot-thing-management.html
You do not need to create a thing in the registry to connect a device to AWS IoT.
You can connect a client device to the AWS IoT MQTT broker using just a certificate, without creating a thing. Typically, though, each device has its own certificate, and the thing registry is a means of managing the relationship between a device and a certificate.
There are also alternate means for clients to authenticate.
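As a rough illustration, a client can connect to the AWS IoT Core MQTT broker with nothing but its certificate, private key, and the Amazon root CA; no thing needs to exist. The endpoint, client ID, topic, and file names below are placeholders, and the certificate must have an IoT policy attached that allows connecting and publishing.

```python
# Sketch: connect to AWS IoT Core over MQTT with mutual TLS only (no thing).
# Endpoint, file paths, client ID, and topic are placeholders.
import ssl

import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # your account's IoT endpoint

client = mqtt.Client(client_id="sensor-001")
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="certificate.pem.crt",
               keyfile="private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, 8883)
client.loop_start()

# Publish one telemetry message and wait until it is actually sent.
info = client.publish("sensors/001/telemetry", '{"temp": 21.5}', qos=1)
info.wait_for_publish()
client.disconnect()
```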
Amazon also now offers managed RabbitMQ (Amazon MQ for RabbitMQ), which has no fixed limit on the number of connections; it just depends on the size of your instances.
EDIT: the MQTT plugin is not supported for the moment, so this works for AMQP, but not for MQTT.
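For completeness, talking to an Amazon MQ for RabbitMQ broker over AMQP from Python could look roughly like the sketch below (using pika). The broker URL, credentials, and queue name are placeholders.

```python
# Sketch: publish to an Amazon MQ for RabbitMQ broker over AMQPS with pika.
# URL, credentials, and queue name are placeholders.
import pika

params = pika.URLParameters(
    "amqps://app_user:app_password@b-1234-example.mq.us-east-1.amazonaws.com:5671/%2f"
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="device-telemetry", durable=True)
channel.basic_publish(exchange="",
                      routing_key="device-telemetry",
                      body='{"device": "sensor-001", "temp": 21.5}')
connection.close()
```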

Connect to an on-premises device through GCP App Engine

I would like to know how to send a TCP request from the App Engine flexible environment (a Python application) to TCP port 9701 on an on-premises device and get data back from the device.
Option 1: Set up Cloud VPN and put firewall hardware in front of the existing on-premises router (if the router does not support IPsec VPN).
Option 2: Put the on-premises router in DMZ mode with IP mapping and port forwarding.
Has anyone tried this? Could you give me an idea of how it works, and of any hardware firewall that has worked with the GCP VPN?
Thanks in advance!
Your question is actually very complex. I will briefly touch upon both of your options.
Option 1: Set up Cloud VPN and put firewall hardware in front of the existing on-premises router (if the router does not support IPsec VPN).
Setting up Google Cloud VPN requires hardware routers on your side that support Google's requirements. Most cheap routers will not meet the minimum requirements.
This method is called site-to-site and you are basically connecting your internal network to your Google Cloud networks (VPCs). This requires a good understanding of VPNs and routing. The benefit is that all your traffic is secure and encrypted. Your internal systems can access your Google systems using their private IP addresses.
Your router must have a reliable, static public IPv4 address.
Your internal network addressing cannot overlap with your VPCs.
If you put a firewall in front of your VPN router, the firewall must support passing through ESP (IPsec) and IKE traffic.
Your router must support prefragmentation.
Dynamic routing (BGP) is preferred; static routing is supported.
Option 2: Put the on-premises router in DMZ mode with IP mapping and port forwarding.
This method does not involve Cloud VPN. Your side is public, and your Google resources (App Engine) simply access your public IP address. There is no added encryption or security in this configuration unless you add it yourself. For low-cost setups that do not require traffic security beyond HTTPS, this is usually OK. However, you have not provided your network map, services, etc., so I cannot review how you should NAT/PAT and secure your traffic.
A word about DMZs. Most people assume they are secure. They are not, unless you also have an intelligent firewall in front of the DMZ. A DMZ just passes traffic blindly from port A on the public side to port B on the private side. Many a system has been hacked because the admin thought that DMZ translated to security. Any system connected through a DMZ should be considered at high risk of being attacked and breached.
What is the best solution? Redesign your requirements so that App Engine does not need to reach into your internal network.
Complementing the previous answer:
If you cannot buy, or do not know, hardware that is supported by the Cloud VPN service, you can use a VM or an on-premises server as a firewall running pfSense. pfSense is a FreeBSD distribution that can manage your network security like an NGFW and can be installed on bare metal or in a VM.
That said, to configure a site-to-site VPN connection between your own network and your GCP network, you can follow this tutorial, which explains step by step the configuration you need to perform in GCP and in pfSense.
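Whichever option you end up with, once routing is in place the App Engine flexible service reaches the device like any ordinary TCP client. A minimal sketch, assuming a placeholder device address reachable from the VPC (a private IP over the VPN, or the router's public IP with port forwarding) and a hypothetical request string the device understands:

```python
# Sketch: plain TCP request from App Engine flexible to the on-prem device.
# The address and the request bytes are placeholders/assumptions.
import socket

DEVICE_HOST = "10.10.0.5"   # private IP over the VPN, or public IP with port forwarding
DEVICE_PORT = 9701

with socket.create_connection((DEVICE_HOST, DEVICE_PORT), timeout=5) as sock:
    sock.sendall(b"STATUS\r\n")      # hypothetical request the device understands
    response = sock.recv(4096)

print("device replied:", response)
```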

How to change the MQTT hostname for Google IoT Core

I am working on an IoT device using Google Cloud IoT Core, but I would like to allow for third-party support, so I want to change the hostname of the MQTT endpoint. How can that be done?
Thanks in advance.
If you change that endpoint, then you're no longer talking to IoT Core. It's not configurable; that's the endpoint for talking to the service.
You can set up your own MQTT server somewhere else (you could do it in GCE or GKE with a custom container), make the hostname whatever you want, and then set up your own broker to take the MQTT payloads and create Pub/Sub messages, or even act as a forwarding proxy to IoT Core itself, I suppose (although security and auth might get a bit odd).
Or you could even just go directly to Pub/Sub. It all just depends on your need.
As I mentioned, changing the hostname for IoT Core (mqtt.googleapis.com) means that you aren't using IoT Core any longer. There's no other way to access the communication broker piece of IoT Core that does the Pub/Sub message creation, etc. If you don't use the IoT Core endpoint (hostname), you are on your own for creating the Pub/Sub messages from the IoT device data.
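If you do run your own broker, the bridging described above could look roughly like this: a small process subscribes to your own MQTT broker (whose hostname you control) and republishes each message to Pub/Sub, which is approximately what IoT Core's managed broker would otherwise do for you. The broker hostname, GCP project, Pub/Sub topic, and MQTT topic filter are all placeholders.

```python
# Sketch: MQTT-to-Pub/Sub bridge for a self-hosted broker. All names are placeholders.
import paho.mqtt.client as mqtt
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "device-telemetry")


def on_message(client, userdata, msg):
    # Forward the raw MQTT payload; attach the MQTT topic as a message attribute.
    publisher.publish(topic_path, data=msg.payload, mqtt_topic=msg.topic)


client = mqtt.Client(client_id="mqtt-pubsub-bridge")
client.on_message = on_message
client.connect("mqtt.mydomain.example", 1883)   # your own broker hostname
client.subscribe("devices/+/events")
client.loop_forever()
```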

What measures does Google Cloud take to protect instances from IP spoofing?

I am running my server on Google App Engine, and all of my services (e.g. MongoDB, Redis, Elasticsearch) are deployed on Compute Engine. I wanted to connect to my Compute Engine instances from App Engine only, so I deleted all of the firewall rules on my Compute Engine instances that allowed connections from external IPs; now only instances within the internal network of my Google Cloud project can connect to them. I am now wondering about IP spoofing: since nobody from outside my internal network can connect to my instances, can they fake their IP by telling my firewall that their IP is the IP of one of my instances? If that can happen, my whole security would be breached.
One more question: does the Google Cloud project's firewall implement any measures to secure our instances from IP spoofing, or do we have to set something up ourselves to avoid it?
If any of you have any idea about this, please enlighten me.
Thanks
It's not quite clear which spoofing scenario you are concerned about. These two come to mind:
An external party spoofing packets for your internal network, i.e. the 10.0.0.0/8 range. This is not possible, as packets inside your network can only come from VMs and VPNs in that private network.
Spoofing packets from other Google/GCE IP ranges, e.g. the ones used for external addresses: this should be caught by Google's network ACLs.
I would, however, not recommend authenticating based on IP address. For example, if you are communicating over external IP addresses between GCE/GAE entities, it is easy to be too broad and also allow other GCE/GAE customers. Even if you only whitelist single IP addresses, there is a risk that your setup becomes more complex over time. Imagine, for example, that an employee deletes a GCE instance without also removing its IP from the whitelist. The IP would be released and become available to other GCE customers, who could then access your service.
Therefore, it's usually safer to use an application-level authentication mechanism such as SSL client certificates.
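As a sketch of that recommendation, a service could require mutual TLS so that only clients presenting a certificate signed by your own internal CA can connect, regardless of their source IP. The certificate file names and port below are placeholders.

```python
# Sketch: a TCP service that authenticates clients with mutual TLS instead of
# relying on source IP addresses. File names and port are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="internal-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake fails for unauthenticated peers
        print("authenticated client:", conn.getpeercert().get("subject"))
        conn.close()
```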
