I'm currently developing a low-power IoT node based on Contiki-NG running on a TI CC1350 LaunchPad board. My problem is that my power consumption is always >6 mA.
Compiling and running the energest example, I can see that the radio is always listening, no matter whether I compile with MAKE_MAC = MAKE_MAC_NULLMAC and MAKE_NET = MAKE_NET_NULLNET. Running with
MAKE_MAC = MAKE_MAC_TSCH or MAKE_MAC = MAKE_MAC_CSMA increases consumption by around 2 mA, as the CPU is always active, and the radio is never duty-cycled.
Is there a way of reducing current consumption for Contiki-NG on this platform?
With Contiki-NG, you have two options:
Use CSMA or NullMAC and turn off the radio from the application code with NETSTACK_RADIO.off(), as in the sketch below.
Use TSCH and make sure the schedule has some idle slots. The radio will turn off automatically once the node has joined a TSCH network.
If you use the latter and still see high consumption, and you're sure about your code, submit an issue on the Contiki-NG GitHub; there may be an energy consumption bug in the OS specific to the CC1350 board.
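Here is a minimal sketch of the first option, assuming the NullMAC/NullNet build described in the question; the on/off periods are arbitrary placeholders to tune for your application:

#include "contiki.h"
#include "net/netstack.h"

PROCESS(duty_cycle_process, "Manual radio duty cycling");
AUTOSTART_PROCESSES(&duty_cycle_process);

PROCESS_THREAD(duty_cycle_process, ev, data)
{
  static struct etimer et;

  PROCESS_BEGIN();
  while(1) {
    NETSTACK_RADIO.on();                 /* short listen/transmit window */
    etimer_set(&et, CLOCK_SECOND / 10);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
    /* ... send or receive application data here ... */
    NETSTACK_RADIO.off();                /* radio sleeps the rest of the period */
    etimer_set(&et, CLOCK_SECOND);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
  }
  PROCESS_END();
}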
I'm studying embedded programming, so I'm new to this field.
Can someone explain how to identify tasks/threads from a given system description? Also, how can I estimate timing constraints, execution times, and so on? I'm really stuck.
Here is the system description I'm working on:
Implement a 2-degree-of-freedom servo motor system. A two-axis joystick is used for controlling the servo motors. Additionally, enable recording and reproducing the user's joystick path, so that an identical movement can be replicated multiple times. It is necessary to support the recording of 3 motion profiles with a length of at least 5 minutes. The profiles need to be recorded to non-volatile memory, with recording/playback controlled via the joystick button. Provide appropriate signalling for the currently selected profile and operating mode (recording/playback) using one LED for each profile. In the RT-Thread system, realize the necessary drivers on the Raspberry Pi Pico platform as support for the devices used, and the application itself that implements the described system with clearly separated threads for each of the observed functionalities.
It is tempting to partition functionally, but in practice you should partition based on deadlines, determinism and update rates. For simple systems, that may turn out to be the same as a functional partitioning. The part:
clear separated threads for each of the observed functionalities
may lead you to an inappropriate partitioning. However, that may be the partitioning your tutor expects, even if it is sub-optimal.
There is probably no single solution, but obvious candidates for tasks are:
Joystick reader,
Servo controller,
Recorder,
Replayer.
Now considering these candidates, it can be seen that joystick control and replay control are mutually exclusive, and also that replay itself is selected through the joystick button. Therefore it makes sense to make those a single task, not least because the replayer will communicate with the servo controller in the same way as the joystick reader. So you might have:
Joystick reader / replayer,
Servo controller,
Recorder.
The recorder is necessarily a separate thread because access to NV memory may be slow and non-deterministic. You would need to feed time and x/y position data through a message queue of sufficient length to ensure the recording does not affect the timing of the motion control; a sketch of such a queue is shown below.
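A minimal RT-Thread sketch of that decoupling, assuming hypothetical read_joystick_x()/read_joystick_y() ADC helpers and a write_sample_to_flash() NV-memory routine (names invented for illustration):

#include <rtthread.h>

/* One motion sample: timestamp plus joystick x/y position */
typedef struct {
    rt_tick_t  tick;
    rt_int16_t x;
    rt_int16_t y;
} motion_sample_t;

extern rt_int16_t read_joystick_x(void);           /* hypothetical ADC helper */
extern rt_int16_t read_joystick_y(void);           /* hypothetical ADC helper */
extern void write_sample_to_flash(const motion_sample_t *s);  /* may be slow */

static rt_mq_t motion_mq;

/* Joystick reader: posts one sample per poll period, never blocks on flash */
static void joystick_thread(void *param)
{
    motion_sample_t s;
    while (1) {
        s.tick = rt_tick_get();
        s.x = read_joystick_x();
        s.y = read_joystick_y();
        rt_mq_send(motion_mq, &s, sizeof(s));      /* non-blocking enqueue */
        rt_thread_mdelay(20);                      /* e.g. 50 Hz poll rate */
    }
}

/* Recorder: drains the queue and writes to flash at its own, slower pace */
static void recorder_thread(void *param)
{
    motion_sample_t s;
    while (1) {
        if (rt_mq_recv(motion_mq, &s, sizeof(s), RT_WAITING_FOREVER) >= 0) {
            write_sample_to_flash(&s);
        }
    }
}

int motion_app_init(void)
{
    /* a 64-entry queue absorbs flash latency without stalling the reader */
    motion_mq = rt_mq_create("motion", sizeof(motion_sample_t),
                             64, RT_IPC_FLAG_FIFO);
    /* thread creation via rt_thread_create()/rt_thread_startup() omitted */
    return 0;
}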
It is not clear what kind of servo you are using, or whether your application is responsible for the PID motion control or simply sends a position signal to a servo controller. If the latter, there may be no reason to separate the servo control from the reader/replayer, in which case you would have:
Joystick reader / replayer / Servo controller,
Recorder.
Other solutions are possible. For example, the recorder might record the servo position over time rather than the joystick position, and have the servo controller handle replay:
Joystick reader,
Servo controller / replayer,
Recorder.
That makes sense if the joystick polling and servo update rates differ, because you'd want to replay what the servo did, not what the joystick did.
I am currently working on a project using the gem5 simulator to simulate an ARM big.LITTLE configuration where the big CPU cluster has an L2 cache but the little CPU cluster does not. That is, I would like to simulate a system in which the little cores are even simpler than their default configuration. I am running the project using the full-system big.LITTLE file (i.e., gem5/configs/example/fs_bigLITTLE.py). Is it possible to configure the system in this way?
My initial thought was to modify the python file so that the little cluster configuration is composed of the following:
class LittleCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock, cpu_voltage="1.0V"):
        cpu_config = [ObjectList.cpu_list.get("MinorCPU"), devices.L1I,
                      devices.L1D, devices.WalkCache, None]
        super(LittleCluster, self).__init__(system, num_cpus, cpu_clock,
                                            cpu_voltage, *cpu_config)
or, in layman's terms, provide None as the SimObject class name for the L2. Unfortunately, as one might expect, this causes the system to crash as gem5 expects an object to connect the ports.
My next idea was to write a new SimObject called EmptyCache that inherits from gem5's Cache class but does nothing. That is, on every access call this object would return false, and it would be configured to have no tag, data, or response latency. However, this caused coherence issues with the L1 caches in the little cluster, so I then changed it so that it evicts any cache block that it "hits" before returning false (the following is based on a prior post to the gem5-users mailing list: https://www.mail-archive.com/gem5-users@gem5.org/msg16882.html):
bool
EmptyCache::access(PacketPtr pkt, CacheBlk *&blk, Cycles &lat,
                   PacketList &writebacks)
{
    if (Cache::access(pkt, blk, lat, writebacks)) {
        Cache::evictBlock(blk);
    }
    return false;
}
This seemed to solve the coherence issues with the L1 caches, but it then caused coherence issues in the memory bus (which implements the SystemXBar class, which in turn implements the CoherentXBar class).
At this point, I feel pretty much stuck and would appreciate any advice that you could provide! Thank you!
Edit: I've continued to try to make headway with this project despite the setbacks in this direction by modifying the gem5/configs/example/arm/devices.py file. I noted that one way to approach this issue would be to add private caches to the MinorCPU cluster and then connect that cluster directly to the memory bus, while connecting the big CPU cluster to a cache hierarchy; the issue in this direction, however, is the incoherence between packets going to the little and big clusters.
Namely, there are several assertions and panic_if statements in the caches that expect a particular MemCmd enumeration and/or that the packet "hasSharers". Naturally, I assume that this issue has to do with my simulation setup, because the simulator actually runs without the modification, but is there a way to configure the simulator in this direction so that there is some semblance of coherence between the big and little CPU clusters?
Again, thank you for your help!
I have ArduPilot on a plane, using a 3DR radio link back to a Raspberry Pi on the ground that does some advanced geo- and attitude-based maths and provides audio feedback to the pilot (rather than making them look at a screen).
I am using DroneKit-Python, which in turn uses MAVProxy and MAVLink. What I am finding is that I am only getting new attitude data to the Pi at about 3 Hz, and I am not sure where the bottleneck is:
The 3DR radio is running at 57.6 kbaud and all is happy.
I have turned off the automatic push of logs from ArduPilot down to the Pi (part of MAVProxy).
The Pi can ask for attitude data (roll, yaw, etc.) through the DroneKit-Python API as often as it likes, but only gets new data (i.e., a change in value) about every 1/3 of a second.
I am not deep enough inside the underlying architecture to understand where the bottleneck may be -- can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from MAVLink, from what I have read)? Or latency across the combination of MAVProxy, MAVLink and DroneKit? Or is there some setting inside ArduPilot or the telemetry that could be driving this?
I am aware this isn't necessarily a DroneKit issue, but not really sure where it goes as it spans quite a few components.
Requesting individual packets should work, but that mechanism was never meant to be used many times per second.
To get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second and will then automatically send whichever packets are associated with it. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1; it defines the number of ATTITUDE packets sent per second. The default is 4, so try increasing that parameter to 10.
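For example, with MAVProxy attached you can usually change it on the fly from its console with param set SR0_EXTRA1 10, or through your ground station's parameter editor. Note that the SRn_ prefix corresponds to the autopilot's telemetry port, so if your 3DR radio is attached to a different serial port, the relevant parameter may be SR1_EXTRA1 instead.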
In all of today's electronic devices, such as mobile phones, we see a visual battery charge indicator: a graphical container composed of bars that increase one by one while the battery charges, and decrease one by one as the device is used.
I see the same thing on laptops in every GUI operating system, like Windows and Linux.
I am not sure whether I am posting in the right place, because this question spans systems programming and electrical engineering.
A visual view of my question is here:
http://gickr.com/results4/anim_eaccb534-1b58-ec74-697d-cd082c367a25.gif
I have wondered for a long time what logic this works on: how does the program manage to monitor the battery?
I made a simple calculation based on amp-hours to determine how long the bar should take to increase while the battery is charging, but that does not work well for me.
I also read the source code of a friend's Android battery indicator application, but the functions he used were system calls into the Android (Linux) kernel.
I need to build this from scratch, because I am working on an operating system kernel project which will later need a battery charge monitor. The thing I will implement right now, though, is just showing the percentage on the console screen.
Please give me an idea of how I can do it. Thanks in advance.
Integrating amps over time is not a reliable way to code a battery meter. Use voltage instead.
Refer to the battery's datasheet for a graph of (approximate) voltage vs. charge level.
Obtain an analog input to your CPU. If it's a microcontroller with a built-in ADC, then hopefully that's sufficient.
Plug a reference voltage (e.g. a Zener diode) into the analog input. As the power supply voltage decreases, the reference will appear to increase, because the ADC only measures voltage proportionally. The CPU may include a built-in reference voltage generator that you can mux to the ADC, or the ADC might always measure relative to a fixed reference instead of rail-to-rail. Consult the ADC manual (or the ADC section of the microcontroller manual) for details.
Ensure that the ADC provides sufficient accuracy.
Sample the battery level and run a simple low-pass filter to eliminate noise, like displayed_level = (displayed_level * 0.95) + (measured_level * 0.05). Run that through an approximate function mapping the apparent reference voltage to the charge level (a sketch is shown after this list).
Display the charge level.
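A minimal C sketch of the filtering and mapping steps above, assuming a hypothetical read_battery_mv() ADC routine; the voltage/percent pairs below are placeholders that you must replace with points read off your battery's datasheet graph:

#include <stdint.h>

/* Placeholder discharge curve: millivolts -> percent.
   Replace with points taken from the battery datasheet. */
static const struct { uint16_t mv; uint8_t pct; } curve[] = {
    { 3300,   0 }, { 3600,  20 }, { 3700,  50 }, { 3900,  80 }, { 4200, 100 },
};
#define CURVE_LEN (sizeof curve / sizeof curve[0])

extern uint16_t read_battery_mv(void);  /* hypothetical ADC read, in mV */

/* Piecewise-linear interpolation over the datasheet curve */
static uint8_t voltage_to_percent(uint16_t mv)
{
    if (mv <= curve[0].mv) return 0;
    for (unsigned i = 1; i < CURVE_LEN; i++) {
        if (mv < curve[i].mv) {
            uint16_t span_mv  = curve[i].mv  - curve[i - 1].mv;
            uint8_t  span_pct = curve[i].pct - curve[i - 1].pct;
            return curve[i - 1].pct
                 + (uint32_t)(mv - curve[i - 1].mv) * span_pct / span_mv;
        }
    }
    return 100;
}

/* Integer version of the 95%/5% low-pass filter from the steps above */
static uint16_t filtered_mv;

uint8_t battery_percent(void)
{
    uint16_t sample = read_battery_mv();
    if (filtered_mv == 0)                /* first call: seed the filter */
        filtered_mv = sample;
    filtered_mv = (uint16_t)(((uint32_t)filtered_mv * 95
                            + (uint32_t)sample * 5) / 100);
    return voltage_to_percent(filtered_mv);
}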
Similar to Tetris on Facebook, where if you're at 100 energy, it goes down 5 units after usage (playing a game), and then recharges 1 unit every 10 minutes. I'm curious how to handle the polling, and possibly make it server-side so that there's no "time manipulation" (e.g. setting the clock forward) to circumvent the measures and receive energy early. Thanks in advance!
You will need some kind of persistent server-side storage, and ideally the client should run a check against the server; just have it return a JSON string with the current value whenever the game loads. Also, don't use the client clock, because it can differ across machines (and be set forward), which would not yield the ideal results. A sketch of the server-side calculation is below.
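A minimal sketch of that server-side, lazy recalculation in C; the record layout and constants are illustrative, and the point is that energy is derived from the server's clock rather than polled or trusted from the client:

#include <time.h>

#define MAX_ENERGY    100
#define REGEN_SECONDS 600   /* 1 unit every 10 minutes */

/* Illustrative persisted record for one player */
typedef struct {
    int    energy;       /* energy as of last_update */
    time_t last_update;  /* server time of the last recalculation */
} player_energy_t;

/* Called whenever the client asks for its energy: recompute lazily
   from server time, so no periodic polling or timers are needed. */
int current_energy(player_energy_t *p, time_t now)
{
    long units = (long)(now - p->last_update) / REGEN_SECONDS;
    if (units > 0) {
        p->energy += (int)units;
        if (p->energy > MAX_ENERGY)
            p->energy = MAX_ENERGY;
        /* advance by whole intervals only, keeping the remainder */
        p->last_update += units * REGEN_SECONDS;
    }
    return p->energy;
}

/* Spending energy also uses server time, never the client's clock */
int spend_energy(player_energy_t *p, int cost, time_t now)
{
    if (current_energy(p, now) < cost)
        return 0;            /* not enough energy */
    p->energy -= cost;
    return 1;
}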