I have created a 4-node network in which node-A and node-B are routed to send data to node-D via node-C. I sent one data packet from node-A to node-D via node-C. While analyzing the trace.json file, I found that two transmission events from node-A are logged. I tried turning off reliability, but the same thing happens. What could be the reason? Please help.
I have an app that, when activated, uploads location data. Currently it sends the data to the server via REST; however, I would like to save on server costs and send the data via IoT Core instead.
Previously, I would queue location updates and only send them about once every few minutes. This way the phone would only turn on its data connection every few minutes rather than keeping it on constantly, which saved battery life.
Is there a way to get similar battery savings when uploading to AWS IoT Core? I haven't run tests, but I assume that constantly sending messages via MQTT, WebSockets, or HTTP is just as battery-draining as regular REST messages.
This is somewhat related to AWS IoT Message Delivery.
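For illustration only, here is a minimal sketch of the same queue-and-flush idea applied to MQTT, assuming the paho-mqtt client and a certificate-based connection to AWS IoT Core; the endpoint, topic, and certificate paths are placeholders, and the real app would of course do this from its mobile platform rather than Python:

```python
# Sketch only: buffer location fixes locally and flush them in a single MQTT publish
# every few minutes, so the connection (and radio) is only active briefly.
# Assumes the paho-mqtt client (1.x API; on paho-mqtt 2.x pass a CallbackAPIVersion
# to Client()). Endpoint, topic, and certificate paths are placeholders.
import json
import time
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"   # placeholder AWS IoT endpoint
TOPIC = "devices/location-batch"                        # placeholder topic

buffered_fixes = []

def queue_fix(lat, lon):
    buffered_fixes.append({"lat": lat, "lon": lon, "ts": time.time()})

def flush_batch():
    # Called every few minutes: connect, publish the whole batch once, disconnect.
    if not buffered_fixes:
        return
    client = mqtt.Client()
    client.tls_set(ca_certs="AmazonRootCA1.pem",
                   certfile="device.pem.crt", keyfile="private.pem.key")
    client.connect(ENDPOINT, 8883)
    client.loop_start()
    client.publish(TOPIC, json.dumps(buffered_fixes), qos=1).wait_for_publish()
    client.loop_stop()
    client.disconnect()
    buffered_fixes.clear()
```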
Scenario: a WPF application sends a file to a server, and the server in turn performs a series of validations that can take several minutes. During this time the server sends a series of messages (via SignalR) to tell the application what it is doing, and at the end of everything it notifies the application that the process has finished.
What I'm doing: every time the application sends a new file, I start the hub with await hubConnection.Start(), and at the end of it all I stop it.
The process consists of: doing some local validations, sending the file to the server, monitoring the processing (SignalR), and at the end, if applicable, downloading a file with the errors.
It's all working fine, but I'm afraid something might go wrong along the way, such as messages failing to be sent, etc.
My question: what is the correct way to do this? When should I connect the hub (await hubConnection.Start())? Should I do this when starting the app? How do I receive the messages later (each message carries an identifier of the file I am working on)?
I am trying to make a breadcrumb network using Raspberry Pis and XBees. Please tell me what destination address I should put in the coordinator node so that it receives data only from a single router. I am using all XBees in API mode. Thanks in advance. [Image of the addresses of the XBees I am using.]
Router3 -> Router2 -> Router1 -> Coordinator
Please suggest what addresses I should put so that Router3 sends data to Router2, Router2 to Router1, and Router1 to the Coordinator.
So if Router3 sends some data, it should first go to Router2, then to Router1, and then reach the Coordinator.
I am connecting a GPS to each router's Raspberry Pi and trying to send the GPS readings to the coordinator node.
With a mesh network, you direct your message to the final destination and the nodes take care of relaying it as necessary to reach the destination.
So there is nothing to do on the coordinator, and each router uses a destination address of 0 in its API frames to send to the coordinator.
If you really want to force the messages to hop from router to router, use the next router's 64-bit address (the ATSH and ATSL values) in the API frames you're using to send your data. When a node receives a frame, replace the destination address with the next hop's address and resend it.
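As a rough illustration of that forwarding step, here is a minimal sketch assuming the digi-xbee Python library on each router's Raspberry Pi; the serial port and the next-hop address are placeholders you would replace with your own ATSH/ATSL values:

```python
# Minimal forwarding-node sketch (assumes the digi-xbee library: pip install digi-xbee).
# NEXT_HOP is a placeholder 64-bit address; use the ATSH + ATSL of the next router
# in the chain (or 0000000000000000 on Router1 to reach the coordinator).
from digi.xbee.devices import XBeeDevice, RemoteXBeeDevice
from digi.xbee.models.address import XBee64BitAddress

NEXT_HOP = "0013A20012345678"              # placeholder: next router's SH + SL

local = XBeeDevice("/dev/ttyUSB0", 9600)   # placeholder serial port and baud rate
local.open()
next_hop = RemoteXBeeDevice(local, XBee64BitAddress.from_hex_string(NEXT_HOP))

def forward(xbee_message):
    # Take whatever payload arrived and re-send it, addressed to the next hop.
    local.send_data(next_hop, xbee_message.data)

local.add_data_received_callback(forward)
input("Forwarding frames; press Enter to stop.\n")
local.close()
```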
I created a web service and a mobile application that communicate with each other. When everything is working, it works great. When the server doesn't respond, things start to break down.
The mobile device sends a message to the server with a bunch of records. Getting the records on the server never seems to be a problem. It gets the records and then sends a response back to the mobile device that the update was received. The PROBLEM is that the mobile device doesn't always get the response, so it doesn't know it shouldn't send those records again for updating.
Next time it sends the records again and now I have duplicate records. How can I solve this?
Idea 1) Create a transaction number, unique on the mobile device, that the server can check to see whether the records were already uploaded. If they were, just don't write those records again and attempt to send back the response that they were written.
Idea 2) Send the records to the server, but before writing them, respond to the mobile device that they were received. This way the mobile device can tag them and then send another request to the server telling it to write them. At that point the mobile device almost doesn't care whether it gets a response. The only thing is, you don't know whether the server ever got that second message.
I'm looking for ideas on how to handle this, either confirming one of these approaches or suggesting a completely different one.
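For what it's worth, a minimal sketch of Idea 1, with a client-generated transaction id the server checks before writing; the table and column names here are made up for illustration, not taken from the question:

```python
# Idempotent upload handler sketch: records are written only if the batch's
# transaction id has not been seen before, so retried uploads don't duplicate rows.
import sqlite3

def init(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS uploads (transaction_id TEXT PRIMARY KEY)")
    conn.execute("CREATE TABLE IF NOT EXISTS records (transaction_id TEXT, payload TEXT)")

def save_records(conn: sqlite3.Connection, transaction_id: str, records: list) -> str:
    if conn.execute("SELECT 1 FROM uploads WHERE transaction_id = ?",
                    (transaction_id,)).fetchone():
        # Already processed: skip the write but still acknowledge,
        # so the device knows it can stop resending this batch.
        return "already-received"
    conn.executemany("INSERT INTO records (transaction_id, payload) VALUES (?, ?)",
                     [(transaction_id, str(r)) for r in records])
    conn.execute("INSERT INTO uploads (transaction_id) VALUES (?)", (transaction_id,))
    conn.commit()
    return "received"
```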
I ended up creating logs that the device attempts to resolve when it gets back successful responses from the server.
I tag items as a batch of lines and send them up to the server. Once they are up there, I create a log about the success or failure of each line item in a batch of items and then save the log to the file system.
When the mobile device is unsuccessful in hearing back a response from the server, which is rare, it asks the server about that batch number. If the server doesn't respond with a status for that batch, the device assumes the server never received it and re-marks those items for another upload attempt. If it does hear back, it processes the successes and failures line by line and then marks the items on the mobile device accordingly. If the mobile device doesn't ask about the log in the next upload, the server assumes the batch's lifecycle is complete and it no longer needs to maintain that log, which is then deleted.
The server doesn't delete a log until it has a successful request from the specific device no longer asking to hear about the log. So if I have log 1 on the server and the device doesn't ask in the next upload to hear back about that log, the server then removes that log assuming the device got the response it wanted or doesn't care about it anymore.
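A compressed sketch of that handshake, using in-memory dictionaries in place of the file-system logs; every name here is illustrative, not from the original answer:

```python
# Server-side sketch of the batch-log handshake: each upload carries a batch id,
# the line items, and the list of batch ids the device still wants status for.
batch_logs = {}                     # batch_id -> {line_id: succeeded?}

def process_line(text: str) -> bool:
    return bool(text)               # placeholder for the real per-line validation/write

def handle_upload(batch_id: str, lines: dict, wants_status_for: list) -> dict:
    # Write the new batch and keep a per-line success/failure log for it.
    batch_logs[batch_id] = {line_id: process_line(text) for line_id, text in lines.items()}
    # Replay any older logs the device says it never heard back about.
    replayed = {b: batch_logs[b] for b in wants_status_for if b in batch_logs}
    # Logs the device did NOT ask about are assumed delivered and can be dropped.
    for old in list(batch_logs):
        if old != batch_id and old not in wants_status_for:
            del batch_logs[old]
    return {"batch": batch_id, "log": batch_logs[batch_id], "replayed": replayed}
```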
I have a server that sends data over a socket as fast as it can produce it. The server uses a queue, with a producer thread and a consumer thread that sends the produced data out the socket to the client.
The problem is reading the data on the client side. How do I design a client to handle the data without it being out of sync?
If I send an acknowledgement from the client to the server I lose the concurrency speed on the server side. How can I write/design a client to handle the incoming data fast enough?
Do I need to implement a queue on the client side?
Unless you have a requirement that you must use something other than TCP, just let TCP do the job of flow control for you. Let the client consume the data as fast as it wants to, and the server will block after it sends more data than the client is prepared to consume and it fills up the TCP window.
TCP will never get out of sync in the sense that data on the socket will always be delivered in order. But the server may certainly have sent out more data than the client has consumed and so it may have moved on to sending the next batch of data while the client is still consuming the previous one. Is this what you mean by out of sync?
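To make that concrete, here is a minimal sketch of relying on TCP's own flow control; the host, port, and the producer/consumer placeholders are illustrative:

```python
# Sketch: let TCP backpressure pace the server. sendall() blocks once the client's
# receive window (plus local buffers) is full, so no application-level ACKs are needed.
import socket

HOST, PORT = "127.0.0.1", 5000          # placeholders

def produce_chunks():
    # Placeholder producer; in the real server this would be fed by the producer thread.
    for i in range(1000):
        yield f"record {i}\n".encode()

def serve() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for chunk in produce_chunks():
                conn.sendall(chunk)      # blocks when the TCP window fills up

def consume() -> None:
    with socket.create_connection((HOST, PORT)) as sock:
        while data := sock.recv(64 * 1024):   # read at whatever pace the client manages
            pass                              # placeholder for real processing
```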
You don't want to make the client send an acknowledgement before the server starts on the next task, because that costs an RTT (round-trip time, i.e. the time for the last of one batch of data to arrive at the client plus the time for the acknowledgement to go back), which will slow down your protocol on a high-latency link.
If you don't want to pay this RTT price, you will inevitably have to allow one of the following:
for the client to request more than one batch at a time. You can use a tagged protocol like IMAP's for this: the client submits several jobs at once on one socket, each with its own tag. The server responds to each request, echoing the tag in the header of each response so the client knows which response goes with which request. When the client has seen "enough" responses, it submits more requests. The client controls how many requests can be outstanding at the same time; if it allows only one at a time, this degenerates to the simple ACK case with the RTT cost. (A sketch of this tagged approach follows after the list.)
for the server to work a little ahead of the client, sending several responses to the client before the client has acknowledged the first one. After the pipe is filled to the maximum number of unacknowledged jobs the server is willing to allow, it waits for acknowledgements and sends one additional job response for each acknowledgement it receives from the client. If the server allows only one outstanding job, this degenerates to the simple ACK case as above. If the server allows too many unacknowledged jobs at a time, it degenerates to just filling up TCP's buffers and counting on TCP flow control to block the server until the client is ready to accept more data.
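Here is a rough sketch of the first option, a tagged and pipelined client; the newline-delimited JSON wire format and the MAX_IN_FLIGHT limit are assumptions for illustration, not part of any existing protocol here:

```python
# Tagged, pipelined request/response client sketch (IMAP-style tags).
# The client keeps up to MAX_IN_FLIGHT requests outstanding and matches each
# response to its request by tag, so it never pays a full RTT per job.
import json
import socket

MAX_IN_FLIGHT = 4                        # assumed window size

def run_client(host: str, port: int, jobs: list) -> None:
    pending = {}                         # tag -> job still awaiting a response
    next_tag = 0
    job_iter = iter(jobs)
    with socket.create_connection((host, port)) as sock:
        rfile = sock.makefile("r", encoding="utf-8")
        wfile = sock.makefile("w", encoding="utf-8")

        def submit() -> bool:
            nonlocal next_tag
            job = next(job_iter, None)
            if job is None:
                return False
            pending[next_tag] = job
            wfile.write(json.dumps({"tag": next_tag, "job": job}) + "\n")
            wfile.flush()
            next_tag += 1
            return True

        # Fill the pipe first, then submit one new request per response received.
        while len(pending) < MAX_IN_FLIGHT and submit():
            pass
        while pending:
            response = json.loads(rfile.readline())
            job = pending.pop(response["tag"])      # match response to request by tag
            print(f"{job!r} -> {response.get('result')}")
            submit()
```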