dronekit set_position_target_local_ned_encode - dronekit-python

Running dronekit-python with ArduCopter as SITL. When specifying a velocity (only) via set_position_target_local_ned_encode, the drone moves for a few seconds and then stops.
This happens both with the example code (guided_set_speed_yaw.py) and with a very small test program that ONLY sends the position target after the appropriate init. All other parts of all the examples seem to work fine.
All running on Fedora. I don't see this listed as a bug, or any issues related to this. Any ideas or pointers are appreciated.

ArduCopter 3.3-rc9 added a 3-second velocity timeout. This is to prevent a lost connection from causing a flyaway. To continue flying in the same direction, just re-send the same packet repeatedly (see the example below).

For future readers, the exact wording from the ArduPilot documentation:
Starting in Copter 3.3, velocity commands should be resent every second (the vehicle will stop after a few seconds if no command is received). Prior to Copter 3.3 the command was persistent, and would only be interrupted when the next movement command was received.
And from the DroneKit documentation:
From Copter 3.3 the vehicle will stop moving if a new message is not received in approximately 3 seconds. Prior to Copter 3.3 the message only needs to be sent once, and the velocity remains active until the next movement command is received. The example code works for both cases!
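For reference, a minimal sketch of the resend approach, modelled on the guided_set_speed_yaw.py example from the DroneKit docs. It assumes `vehicle` was already obtained via dronekit.connect() and is armed in GUIDED mode:

import time
from pymavlink import mavutil

def send_ned_velocity(vehicle, vx, vy, vz, duration):
    # Move the vehicle at the given NED velocity (m/s) for `duration` seconds.
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0,                   # time_boot_ms (not used)
        0, 0,                # target system, target component
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,  # frame
        0b0000111111000111,  # type_mask (only velocities enabled)
        0, 0, 0,             # x, y, z positions (ignored)
        vx, vy, vz,          # x, y, z velocity in m/s
        0, 0, 0,             # accelerations (not supported)
        0, 0)                # yaw, yaw_rate (not supported)

    # Re-send the same message once per second so a Copter >= 3.3
    # vehicle never hits its velocity timeout and stops.
    for _ in range(duration):
        vehicle.send_mavlink(msg)
        time.sleep(1)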

Related

Glitchy audio output, no underruns

When using snd_pcm_writei() in non-blocking mode everything works perfectly for a while, but eventually the audio gets choppy. It sounds like the ring buffer pointers are getting out of sync (i.e. sometimes I can tell that the audio is playing out of order). How long it takes for the problem to start is hardware dependent. On a Gentoo box on real hardware it seldom happens, but on a buildroot system running on QEMU it happens after about 5 minutes. In both cases draining the pcm stream fixes the problem. I have verified that I'm writing the samples correctly by also writing them to a file and playing them with aplay.
Currently I'm setting avail_min to the period size (1024 frames) and calling snd_pcm_wait() before writing chunks of the period size. I have tried a number of variations (different chunk sizes, checking avail myself and using pthread_cond_timedwait() instead of snd_pcm_wait(), etc.), but the only thing that works reliably is blocking mode, and I cannot use that.
You can see the current source code here: https://bitbucket.org/frodzdev/mediabox/src/5a6471316c7ae481b329e7e0d4af1bb68a32e71d/src/audio.c?at=staging&fileviewer=file-view-default (it needs a little cleanup since I'm trying all kinds of things). The code that does the actual IO starts at line 375.
Edit:
I think I have a solution, but I don't understand why it seems to work. It seems that it does not matter whether I'm using non-blocking mode; the problem appears whenever I wait to make sure there's room in the buffer (whether through snd_pcm_wait(), pthread_cond_timedwait(), or usleep()).
The version that seems to work is here: https://bitbucket.org/frodzdev/mediabox/src/c3eb290087d9bbe0d5f37653a33a1ba88ef0628b/src/audio.c?fileviewer=file-view-default. I switched to blocking mode while still waiting before calling snd_pcm_writei() and it didn't make a difference. Then I added a call to snd_pcm_avail() before calling snd_pcm_status() in avbox_audiostream_gettime(). This function is called constantly by another thread to get the stream clock, and it only uses snd_pcm_status() to get the timestamps. Now it seems to work (or at least it is much less likely to happen), but I don't understand exactly why. I understand that snd_pcm_avail() will synchronize the pointers with the kernel, but I don't really understand when it needs to be called, or the difference between snd_pcm_state() et al. and snd_pcm_status(). Does snd_pcm_status() also synchronize anything? It seems not, because sometimes snd_pcm_status_get_state() will return RUNNING when snd_pcm_avail() returns -EPIPE. The ALSA documentation is really vague. Perhaps understanding these things will help me understand my problem?
Now, when I say that it seems to be working I mean that I cannot reproduce it on real hardware; it still happens on QEMU, though far less often. But considering that on the next commit I switched to blocking mode without waiting (which I've used in the past and never had a problem with on real hardware) and it still happens in QEMU, and that this is a common issue with QEMU, I'm starting to think that I may have fixed the issue on my end and what remains is just a QEMU problem. Is there any way to determine whether the problem is a bug on my end that is simply easier to trigger on the emulator, or just an emulator problem?
Edit: I realize that I should fill the buffer before waiting, but at this point my concern is not to prevent underruns but to make sure that my code can handle them when they happen. Besides, the buffer does fill up after a few iterations. I confirmed this by outputting avail, buffer_size, etc. before writing each packet, and the numbers I get don't make perfect sense; they show an error of 1 or 2 periods about every 8th period. Also (and this is the main problem) I'm not detecting any underruns: the audio gets choppy but all writes succeed. In fact, if the problem starts happening and I trigger an underrun by overloading the CPU, it corrects itself when the pcm is reset.
In line 505: you're using time as the argument to malloc.
In line 568: weren't you playing audio? In that case you should wait only after you have written the frames. Let's think it through...
The audio device generates an interrupt when it finishes processing a period.
| period A | period B |
           ^          ^
          irq        irq
Before you start the pcm, the audio device doesn't generate any interrupts. Notice that here you are waiting even though you haven't started the pcm yet; you only start it when you call snd_pcm_writei().
When you wait for audio data you'll only be woken once the current period has been fully processed -- and at your first wait the first period hadn't even been written -- so in a comfortable situation you should write the whole buffer, wait for the first interrupt, then write the just-processed period, and so on.
Initially, buffer is empty:
|            |            |
write():
|############|############|
wait():
..............
When we wake up:
|            |############|
write():
|############|############|
The problem I found is that you're writing audio just before it is due to be played, so sometimes it arrives in the buffer too late.
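To illustrate that pattern, here is a minimal, self-contained sketch (not taken from the code above): prime the whole ring buffer first, then wait and top up one period per wakeup, recovering from -EPIPE with snd_pcm_prepare(). The device name, rate, and period/buffer sizes are placeholders, and snd_pcm_set_params() may pick a slightly different period size than requested:

#include <alsa/asoundlib.h>
#include <errno.h>
#include <string.h>

#define PERIOD_FRAMES 1024
#define N_PERIODS     4

int main(void)
{
    snd_pcm_t *pcm;
    int16_t period[PERIOD_FRAMES * 2];          /* one period of stereo S16 (silence) */
    memset(period, 0, sizeof(period));

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1,
                           (unsigned)((PERIOD_FRAMES * N_PERIODS * 1000000LL) / 48000)) < 0)
        return 1;

    /* 1) Prime the whole buffer before the first wait. */
    for (int i = 0; i < N_PERIODS; i++)
        snd_pcm_writei(pcm, period, PERIOD_FRAMES);

    /* 2) Steady state: wait until at least one period has drained, then refill it. */
    for (;;) {
        snd_pcm_wait(pcm, 1000);
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, period, PERIOD_FRAMES);
        if (n == -EPIPE)
            snd_pcm_prepare(pcm);               /* underrun: recover (a real player would also re-prime) */
        else if (n < 0)
            break;
    }
    snd_pcm_close(pcm);
    return 0;
}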

Increment an output signal in labview

I have a high voltage control VI and I'd like it to increase the output voltage by a user-set increment every x seconds. At the moment I have a timed sequence outside the main while loop, but it never starts. When it's inside the while loop it delays all other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code, I am guessing that you have the two loops in series (i.e. the start of the while loop depends upon an output of the timed loop; this is the only way one loop could block another). If that is the case, decouple the two loops so that they are not directly dependent on each other.
If the while loop depends on user input, use an event structure and pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines and it can disrupt the deterministic features of a real-time system. Given that you are sending out a signal on the order of seconds, it is absolutely not necessary.
Anyway, if I am off base, please post the code in question so that we can review it.
Cheers, Matt

Why is there a long delay between pcap_loop() and getting a packet?

I'm writing a sniffer using libpcap. My problem is that there's a 7-10 second delay between calling pcap_loop() or pcap_next() and actually getting a packet (the callback function being called). However, if I use Wireshark with the same filter on the same device, there is no such delay after I hit the "start" button. Why is there a delay in my program, and is there a way to fix it?
I'm working with Atheros wifi chips. The device is set to monitor mode using
airmon-ng start wlan0
I'm sure there's plenty of traffic to listen to, for I can see the packets in Wireshark.
Thank you.
I'm using 10000
The to_ms argument to pcap_open_live() and pcap_set_timeout() is in milliseconds.
10000 milliseconds is 10 seconds.
Try using 1000, which is the value tcpdump uses - that'll reduce the delay to 1 second - or using 100, which is the value Wireshark uses - that'll reduce the delay to 1/10 second.
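For illustration, a minimal capture loop using a 100 ms read timeout; the interface name is a placeholder for whatever monitor interface airmon-ng created on your system:

#include <pcap/pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("got packet, %u bytes\n", hdr->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* to_ms = 100: hand buffered packets to the callback at least every
       100 ms instead of waiting for a bufferful (10000 = up to 10 s). */
    pcap_t *p = pcap_open_live("mon0", 65535, 1, 100, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, on_packet, NULL);
    pcap_close(p);
    return 0;
}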
I read in a tutorial about this field: "on at least some platforms, this means that you may wait until a sufficient number of packets arrive before seeing any packets, so you should use a non-zero timeout"
The tutorial in question is the tcpdump.org "How to use libpcap" tutorial, and the passage in question was added in this CVS commit:
revision 1.8
date: 2005/08/27 23:58:39; author: guy; state: Exp; lines: +34 -31
Use a non-zero timeout in pcap_open_live(), so you don't wait for a
bufferful of packets before any are processed.
Correctly explain the difference between pcap_loop() and
pcap_dispatch().
In sniffex.c, don't print the payload if there isn't any.
so I'm familiar with it. :-)
I'd have to spend some time looking at the Linux kernel code (again) to see what effect a timeout value of 0 would have on newer kernels. However, when writing code that uses libpcap/WinPcap to do live captures, you should always act as if you're writing code for such a platform; your code will then be more portable to other platforms and will not break if the behavior of a zero timeout changes.

Spi interrupt handler works when a printf() is used

I am trying to set up SPI communication between an OMAP processor and a SAM4L. I have configured the SPI protocol and the OMAP is the master. What I see is that the test data I am sending reaches the SAM4L correctly, and the ISR prints that data. Adding more printfs here and there in the ISR makes the operation happen, but if I remove all the printfs I can't see any operation happening at all. What can be the cause of this anomaly? Is it the usual case of wrong frequency settings or something similar?
If code is needed I will post that too, but it's big.
Thanks
I think you are trying to print messages in the driver.
Since printing messages to the console slows your driver down, it runs more slowly and happens to work.
Use pr_info() for debugging and stop messages from going to the console by editing /proc/sys/kernel/printk to 4 4 1 7:
-> Debug messages will be stored in the kernel buffer.
-> The driver is not slowed down by printing messages on the screen.
-> You can read them later with the dmesg command.
Then find the original problem that is causing the error.
If a routine works with printf calls "here and there" and not otherwise, it is almost certainly a timing issue. As a trivial example, say you write to an SPI flash and then check its contents. The flash write takes some time, so if you check immediately the data will not yet be valid; but if you insert a printf call in between, it may take long enough that the read-back is now valid.
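To make the timing point concrete, here is a small self-contained sketch; the flash helpers below are simulated stand-ins, not a real driver API. The idea is to poll the device's busy/ready status before reading back, instead of relying on the delay a stray printf happens to introduce:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define STATUS_BUSY 0x01            /* hypothetical "write in progress" bit */

/* Tiny simulation standing in for real SPI flash transfers: the data
   only becomes readable once the busy flag has cleared. */
static uint8_t flash[256], pending[256];
static uint32_t pending_addr, pending_len;
static int busy_polls;

static void flash_program(uint32_t addr, const uint8_t *d, uint32_t n)
{
    memcpy(pending, d, n);
    pending_addr = addr;
    pending_len = n;
    busy_polls = 5;                 /* write "completes" after a few polls */
}

static uint8_t flash_status(void)
{
    if (busy_polls > 0 && --busy_polls == 0)
        memcpy(&flash[pending_addr], pending, pending_len);
    return busy_polls ? STATUS_BUSY : 0;
}

int main(void)
{
    const uint8_t data[4] = { 1, 2, 3, 4 };

    flash_program(0, data, sizeof(data));

    /* Reading back right away would return stale data -- which is what a
       stray printf was papering over. Poll the busy flag instead. */
    while (flash_status() & STATUS_BUSY)
        ;                           /* sleep/yield here on real hardware */

    printf("readback ok: %d\n", memcmp(flash, data, sizeof(data)) == 0);
    return 0;
}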

GPS - Speed doesn't update as should - EM408 & Arduino Mega & GSM

I'm developing a system that will get the GPS signal and send, through the GSM, information about position, speed, and temperature from some digital sensors.
Currently I'm using the EM408 GPS, the Arduino Mega, plus the GSM board (the official one).
The problem is that the GPS (through the TinyGPSPlus library) gives me the same speed for a long time, or sometimes gives me 0 km/h.
The sketch works like this:
loop()
{
getGPSData() - ~1 sec to execute; takes one reading from the GPS.
getSensors() - ~1 sec to execute; takes one reading from the digital sensors.
sendData() - ~6 to 10 secs to send the data over the internet.
}
The whole process takes around 10-15 secs to complete.
If I remove sendData(), so the system reads the GPS information every second, the speed value works perfectly; but if I read the GPS every ~12 secs (because of the GSM delay), the speed doesn't work as expected.
I understand that the problem is that the TinyGPSPlus library calculates the speed between two points, and getGPSData() only takes one reading per loop, so the next point is 15 secs later.
I've added a "for(i=0;i<=4;i++)" loop to getGPSData(), forcing it to read the position at least 4 times before the GSM sends it over the internet. It works better now, but I still get wrong values, and sometimes it freezes at the same speed for a long time.
I've tried adding a second board and having the two communicate over I2C, making it "dual core": one board reads the data from the GPS every second and the other sends the data every 15 secs, but the GSM freezes sometimes when the I2C is connected :(.
Does anyone have any clue how to do this?
It is not good to add a "for(i=0;i<=4;i++)" loop as you tried, and adding a duplicate device takes you even further from the solution. Instead you should unbind getGPSData() from sendData(); in other words, separate their calls into different tasks.
I imagine what that library provides is simple round-robin scheduling, not a complete RTOS, isn't it? Nevertheless, put them into different tasks, loops, or whatever you have. You will probably need a buffer to collect GPS data into, and then send from that buffer in a different loop.
Hope it helps.
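To make that concrete, here is a rough Arduino-style sketch of the decoupling, assuming the EM408 is on Serial1 of the Mega; sendData() and the 15-second interval are placeholders for the original GSM upload. TinyGPSPlus just needs to be fed NMEA characters continuously for its speed reading to stay fresh, while only the upload runs on the slow schedule:

#include <TinyGPS++.h>

TinyGPSPlus gps;

const unsigned long SEND_INTERVAL_MS = 15000;   // placeholder send period
unsigned long lastSend = 0;

// Latest fix, buffered for the slow GSM upload.
double lastLat = 0, lastLng = 0, lastKmph = 0;

void sendData(double lat, double lng, double kmph) {
  // Placeholder for the original (slow) GSM upload.
}

void setup() {
  Serial1.begin(4800);          // assumes the EM408 default of 4800 baud
}

void loop() {
  // Feed the parser on every pass so NMEA sentences are not missed
  // while waiting for the next upload.
  while (Serial1.available() > 0)
    gps.encode(Serial1.read());

  if (gps.location.isUpdated()) {
    lastLat  = gps.location.lat();
    lastLng  = gps.location.lng();
    lastKmph = gps.speed.kmph();
  }

  // Only the upload runs every SEND_INTERVAL_MS.
  if (millis() - lastSend >= SEND_INTERVAL_MS) {
    lastSend = millis();
    sendData(lastLat, lastLng, lastKmph);
  }
}

The upload itself still blocks for its 6-10 seconds, so ideally it would also be broken into non-blocking steps, but the speed reading no longer depends on how often sendData() runs.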
