How to force an XBee S2 end device to select a particular parent using API or AT mode?

I want to implement the XMesh protocol with XBee Series 2 modules.
I am implementing it with one coordinator, two routers and two end devices.
According to this protocol, an end device should select its parent based on link cost (link cost = 1 / link quality).
The link quality is measured by sending a known number of messages (the expected messages) from the two end devices to the two routers. From the transmit status responses I can count the delivered messages using the Arduino XBee library in API mode
(link quality = received messages / expected messages).
Each end device should then select one of the two routers as its parent.
The problem is that XBee Series 2 modules already run a built-in protocol that forms an ad hoc network on the fly, and end devices choose their parent based on whichever router provides the best network coverage on the spot.
So how can I force an end device to select a particular router as its parent, based on the minimum link cost explained above, using API mode (I am using the Arduino XBee library)?
Below is my network diagram:
BS-> Base station (Coordinator)
0,1-> routers
2,3-> end devices
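As an illustration, the link-quality probe described in the question might look roughly like the following with the Arduino XBee library in API mode; the router's 64-bit address, probe count and timeouts below are placeholder assumptions, not values from the question.

// Sketch: send EXPECTED probe frames to one router and count how many
// transmit status frames report successful delivery.
#include <XBee.h>

XBee xbee = XBee();

// Placeholder 64-bit address of the router under test.
XBeeAddress64 routerAddr = XBeeAddress64(0x0013A200, 0x40AABBCC);

const uint8_t EXPECTED = 20;     // number of probe frames to send
uint8_t payload[] = { 'P' };     // dummy probe payload

void setup() {
  Serial.begin(9600);
  xbee.setSerial(Serial);
}

float measureLinkQuality() {
  uint8_t received = 0;
  for (uint8_t i = 0; i < EXPECTED; i++) {
    ZBTxRequest tx = ZBTxRequest(routerAddr, payload, sizeof(payload));
    xbee.send(tx);
    // Wait up to 500 ms for the transmit status frame for this probe.
    if (xbee.readPacket(500) &&
        xbee.getResponse().getApiId() == ZB_TX_STATUS_RESPONSE) {
      ZBTxStatusResponse txStatus;
      xbee.getResponse().getZBTxStatusResponse(txStatus);
      if (txStatus.getDeliveryStatus() == SUCCESS) {
        received++;              // delivered probe counts towards link quality
      }
    }
    delay(50);
  }
  // link quality = received / expected; link cost = 1 / link quality
  return (float)received / EXPECTED;
}

void loop() {
  float quality = measureLinkQuality();
  // ... repeat for the second router, then compare the two link costs ...
  delay(10000);
}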

I don't believe that will be possible -- the Series 2 modules will form a ZigBee mesh network following the ZigBee specification for choosing a parent.
If you are trying to form your own mesh network with different priorities (overall link cost to a base station, instead of best link quality of available routers), you might want to consider the XBee Series 1 modules, which don't have built-in mesh networking.
Is there a reason you feel that your method of choosing the parent is better than the methods currently used by the Series 2?

Embedded system: How to identify required tasks/threads?

I'm studying embedded programming, so I'm new to this field.
Can someone explain how to identify tasks/threads from a given system description? Also, how can I estimate timing constraints, execution times, and so on? I'm really stuck.
Here is the system description I'm working on:
Implement a 2-degree-of-freedom servo motor system. A two-axis joystick is used for controlling the servo motors. Additionally, enable recording and reproducing the user's joystick path, so that an identical movement can be replicated multiple times. It is necessary to support the recording of 3 motion profiles with a length of at least 5 minutes. The profiles need to be recorded to non-volatile memory, with recording/playback controlled via the joystick button. Provide appropriate signalling for the currently selected profile and operating mode (recording/playback) using one LED per profile. In the RT-Thread system, realize the necessary drivers on the Raspberry Pi Pico platform as support for the devices used, as well as the application itself, which implements the described system with clearly separated threads for each of the observed functionalities.
It is tempting to partition functionally, but in practice you should partition based on deadlines, determinism and update rates. For simple systems, that may turn out to be the same as a functional partitioning. The part:
clearly separated threads for each of the observed functionalities
may lead you to an inappropriate partitioning. However, that may be the partitioning your tutor expects, even if it is sub-optimal.
There is probably no single solution, but obvious candidates for tasks are:
Joystick reader,
Servo controller,
Recorder,
Replayer.
Now considering these candidates, it can be seen that the joystick control and the replay control are mutually exclusive, and also that replay itself is selected through the joystick button. Therefore it makes sense to combine them into a single task, not least because the replayer will communicate with the servo controller in the same way as the joystick does. So you might have:
Joystick reader / replayer,
Servo controller,
Recorder.
The recorder necessarily needs to be a separate thread because access to NV memory may be slow and non-deterministic. You would need to feed time and x/y position data through a message queue of sufficient length to ensure that the recording does not affect the timing of the motion control.
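As a rough sketch of that decoupling under RT-Thread (the queue depth, sample type and flash call below are placeholder assumptions, not part of the assignment):

#include <rtthread.h>

/* One recorded sample: timestamp plus x/y position. */
struct motion_sample {
    rt_tick_t tick;
    int16_t   x;
    int16_t   y;
};

static rt_mq_t record_mq;

/* Producer side: called from the joystick/replayer task at its normal rate.
   Non-blocking, so a full queue drops a sample instead of stalling motion control. */
static void record_sample(int16_t x, int16_t y)
{
    struct motion_sample s = { rt_tick_get(), x, y };
    rt_mq_send(record_mq, &s, sizeof(s));
}

/* Recorder thread: drains the queue and does the slow, possibly
   non-deterministic writes to non-volatile memory. */
static void recorder_entry(void *param)
{
    struct motion_sample s;
    while (1) {
        /* Success is RT_EOK (0) in RT-Thread 4.x, or the received length in newer versions. */
        if (rt_mq_recv(record_mq, &s, sizeof(s), RT_WAITING_FOREVER) >= 0) {
            /* flash_append(&s, sizeof(s));  -- placeholder for your NV-memory driver */
        }
    }
}

int recorder_init(void)
{
    /* Queue deep enough to ride out worst-case flash latency (an assumption to be measured). */
    record_mq = rt_mq_create("rec", sizeof(struct motion_sample), 128, RT_IPC_FLAG_FIFO);

    rt_thread_t t = rt_thread_create("recorder", recorder_entry, RT_NULL, 1024, 20, 10);
    if (t != RT_NULL)
        rt_thread_startup(t);
    return 0;
}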
It is not clear what kind of servo you are using, or whether your application is responsible for the PID motion control or simply sends a position signal to a servo controller. If the latter, there may be no reason to separate the servo control from the reader/replayer, in which case you would have:
Joystick reader / replayer / Servo controller,
Recorder.
Other solutions are possible. For example, the recorder might record the servo position over time rather than the joystick position, and have the servo controller handle the replay:
Joystick reader,
Servo controller / replayer,
Recorder.
That makes sense if the joystick polling and servo update rates differ, because you'd want to replay what the servo did, not what the joystick did.

Using Broadcast State To Force Window Closure Using Fake Messages

Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.) and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers which are set will trigger deterministically after a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10-minute aggregation period but not receive its next point until approximately 40 minutes later. Although the aggregation would eventually be completed, it would output my desired result extremely late.
So my workaround for this is to create an additional external source that does nothing other than pump out fake messages. By having these fake messages pumped out in alignment with my 10-minute aggregation period, the event-time windows would have something to force them closed even if a device hadn't sent any data. The critical part here is that all parallel instances/operators have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this goal, given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state, can this fake message then be used to advance the event-time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
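A rough sketch of such a trigger is below. EventTimeTrigger's constructor is private in recent Flink versions, so this re-implements its event-time behaviour in a standalone Trigger; the class name and timeout are mine, and housekeeping such as deleting the fallback processing-time timer in clear() is omitted.

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class EventTimeTriggerWithTimeout extends Trigger<Object, TimeWindow> {

    private final long timeoutMs;   // processing-time fallback delay (assumed value)

    public EventTimeTriggerWithTimeout(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        // Same event-time timer the built-in EventTimeTrigger registers ...
        ctx.registerEventTimeTimer(window.maxTimestamp());
        // ... plus a processing-time fallback in case the watermark stalls.
        ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + timeoutMs);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        // The window still exists, so the watermark never reached maxTimestamp(): fire anyway.
        return TriggerResult.FIRE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}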
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.

What scheduling should I choose for my program on a FreeRTOS system?

I have a project (a 2-player game) built on FreeRTOS. The game has 3 tasks
(game render, joystick, and PC serial communication).
Shared resources include:
Player 1 and Player 2 locations/coordinates. They are written by the serial task and the joystick task respectively, and the game render task reads both locations and displays them (Player 1's location is shared between the serial task and the game render, Player 2's between the joystick task and the game render).
A queue that is shared between the game render and the serial task (sending data and getting ACKs); the queue has been protected with a mutex on all write operations.
My question is which of these two scheduling policies is more suitable for this project: rate monotonic or deadline monotonic?
The tasks are not independent, in the sense that the serial communication uses ACKs. I think it should be deadline monotonic, but I'm not entirely sure.
To choose between DMS and RMS you need to know the periods and deadlines of each task. From my experience it is better to focus on good overall design first and then measure and tweak the priorities to achieve the best response times.
One of the best summaries of good design principles I've encountered is this. In your case I would represent the two players as 'active objects' with their own input event queues. Send events to the players from the serial task, or even directly from the ISR. The game render would then also be an AO receiving events from the players, or a mutex-protected resource; it depends on what the render output is (mostly, how long it takes). Serial input and serial output should be considered two separate things; in most cases it doesn't make sense to conflate the two.
Here is another link that might be useful - look at '1.4 The Design of the “Fly ‘n’ Shoot” Game'
Also, you don't need a mutex around xQueueSend, and for sending from an ISR you only need to use xQueueSendFromISR.
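For illustration, queue use without a mutex might look like the sketch below; the message type, queue depth and function names are placeholders, not your project's actual code.

#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

/* Placeholder message carrying one player position update. */
typedef struct {
    int16_t x;
    int16_t y;
} position_msg_t;

static QueueHandle_t positionQueue;

void vQueuesInit(void)
{
    positionQueue = xQueueCreate(8, sizeof(position_msg_t));
}

/* Task context: queue operations are already thread-safe,
   so xQueueSend() needs no mutex around it. */
void vSerialTask(void *pvParameters)
{
    position_msg_t msg = { 0, 0 };
    for (;;) {
        /* ... parse a Player 1 position from the PC link into msg ... */
        xQueueSend(positionQueue, &msg, pdMS_TO_TICKS(10));
        vTaskDelay(pdMS_TO_TICKS(20));
    }
}

/* ISR context: use the FromISR variant instead of taking any lock. */
void vOnSerialRxISR(void)
{
    position_msg_t msg = { 0, 0 };
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* ... fill msg from the received bytes ... */
    xQueueSendFromISR(positionQueue, &msg, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

/* Consumer: the game render task blocks until a position arrives. */
void vGameRenderTask(void *pvParameters)
{
    position_msg_t msg;
    for (;;) {
        if (xQueueReceive(positionQueue, &msg, portMAX_DELAY) == pdPASS) {
            /* ... draw Player 1 at msg.x, msg.y ... */
        }
    }
}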

Multiple identical I2C sensors with the vl53L0x API (ST Microelectronics)

In a professional context, I have to use the vl53L0x. This sensor was released recently, along with its API, meaning that there's no help on the internet yet:
http://www.st.com/content/st_com/en/products/embedded-software/proximity-sensors-software/stsw-img005.html
This API contains some source and header files that I compiled with gcc. It works fine, despite clearly lacking comments. I flash it to the memory of an STM32 (NUCLEO-F401RE), which controls a vl53L0x sensor via an I2C bus. I now want to add more vl53L0x sensors on the same I2C bus, and I refer to this document (if you want to read it, go directly to the bottom half of page 5; the wiring is already done):
http://www.st.com/content/ccc/resource/technical/document/application_note/group0/0e/0a/96/1b/82/19/4f/c2/DM00280486/files/DM00280486.pdf/jcr:content/translations/en.DM00280486.pdf
The principle, which I have already applied to other sensors, is that they all start with the same address. You then have to activate one, change its address, then activate the next one, change its address, and so on.
Unfortunately, ST Microelectronics didn't publish the list of the I2C registers, so I have to use their API to control multiple sensors. The document linked above explains how to do so. Among other things, it specifies:
In the vl53L0x_platform.h API file:
• Set the VL53L0x_SINGLE_DEVICE_DRIVER macro to 0 so that the API implementation will be automatically adapted to a multi-device context.
I looked everywhere in the API folder, but I was not able to find any reference to a VL53L0x_SINGLE_DEVICE_DRIVER macro. Setting it to 0 won't change anything, as this string is not present anywhere in the API files. Did anyone run into a similar problem?
I'm working on the same thing. It seems that you're further ahead than I am. However, putting this in my while(1) loop seems to make both the sensors work.
ResetAndDetectSensor(0);
TimeStamp_Reset();
The guide says that in order to use all the sensors simultaneously, you need to pull the XSHUT pin high for all the sensors, reset the timestamp and then pick the sensor which actually detects something.
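For what it's worth, the bring-up sequence from the application note might be sketched like this with the ST API (VL53L0X_SetDeviceAddress comes from vl53l0x_api.h); the XSHUT pin mapping, delays and new addresses are placeholders, and the per-sensor ranging init that follows is omitted.

#include "vl53l0x_api.h"
#include "stm32f4xx_hal.h"

#define N_SENSORS 2

/* Placeholder XSHUT wiring: one GPIO per sensor, adjust to your board. */
static GPIO_TypeDef *xshut_port[N_SENSORS] = { GPIOA, GPIOA };
static const uint16_t xshut_pin[N_SENSORS] = { GPIO_PIN_0, GPIO_PIN_1 };

static VL53L0X_Dev_t sensors[N_SENSORS];

void vl53l0x_assign_addresses(void)
{
    /* Hold every sensor in reset (XSHUT low) so none answers on the default address. */
    for (int i = 0; i < N_SENSORS; i++)
        HAL_GPIO_WritePin(xshut_port[i], xshut_pin[i], GPIO_PIN_RESET);
    HAL_Delay(10);

    /* Wake the sensors one at a time and move each to a unique address. */
    for (int i = 0; i < N_SENSORS; i++) {
        HAL_GPIO_WritePin(xshut_port[i], xshut_pin[i], GPIO_PIN_SET);
        HAL_Delay(10);                        /* boot delay after releasing XSHUT */

        sensors[i].I2cDevAddr = 0x52;         /* every part boots at the default 8-bit address */

        uint8_t new_addr = 0x54 + 2 * i;      /* arbitrary, non-conflicting 8-bit addresses */
        VL53L0X_SetDeviceAddress(&sensors[i], new_addr);
        sensors[i].I2cDevAddr = new_addr;     /* talk to it at the new address from now on */

        /* VL53L0X_DataInit(), VL53L0X_StaticInit(), calibration, etc. follow
           here for each sensor, as described in the API documentation. */
    }
}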

ArduPilot, Dronekit-Python, Mavproxy and Mavlink - Hunt for the Bottleneck

I have ArduPilot on a plane, using a 3DR radio back to a Raspberry Pi on the ground that does some advanced geo- and attitude-based maths and provides audio feedback to the pilot (rather than them looking at a screen).
I am using DroneKit-Python, which in turn uses MAVProxy and MAVLink. What I am finding is that I am only getting new attitude data to the Pi at about 3 Hz, and I am not sure where the bottleneck is:
The 3DR radio is running at 57.6 kbps and is all happy.
I have turned off the automatic push of logs from ArduPilot down to the Pi (part of MAVProxy).
The Pi can ask for attitude data (roll, yaw, etc.) through the DroneKit-Python API as often as it likes, but only gets new data (i.e. a change in value) about every 1/3 of a second.
I am not deep enough inside the underlying architecture to understand what the bottleneck may be -- can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from MAVLink from what I have read)? Or latency across the combination of MAVProxy, MAVLink and DroneKit? Or is there some setting inside ArduPilot or the telemetry that could be driving this?
I am aware this isn't necessarily a DroneKit issue, but I'm not really sure where it belongs, as it spans quite a few components.
Requesting individual packets should work, but that mechanism was never meant to be used many times per second.
In order to get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second and will then automatically send whichever packets are associated with it. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1. It defines the number of ATTITUDE packets sent per second. The default is 4. Try increasing that parameter to 10.
