Implementing a collision detection Python script on DroneKit - dronekit-python

We're building off of the Tower app, which was built with DroneKit-Android, and flying the 3DR Solo with it. We're thinking about adding some sort of collision detection to it.
Is it feasible to run a Python script on the drone that reads an IR or ultrasound sensor via the accessory bay and yells at the Android tablet when it detects something? That way, the tablet could tell the drone to fly backwards or something.
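Roughly, what I'm imagining on the drone side is something like this (just a sketch; read_rangefinder_cm() is a stand-in for whatever sensor driver we end up with, and the connection string is a placeholder):

import time
from dronekit import connect
from pymavlink import mavutil

# Placeholder connection string; on the Solo the script would talk to the
# autopilot over the companion computer's local link.
vehicle = connect('udpin:0.0.0.0:14550', wait_ready=True)

def read_rangefinder_cm():
    # Stand-in for the real IR/ultrasound driver on the accessory bay.
    raise NotImplementedError

while True:
    distance = read_rangefinder_cm()
    if distance < 100:  # anything closer than ~1 m
        # "Yell" at the ground station / tablet with a STATUSTEXT message.
        msg = vehicle.message_factory.statustext_encode(
            mavutil.mavlink.MAV_SEVERITY_WARNING,
            ("Obstacle at %d cm" % distance).encode())
        vehicle.send_mavlink(msg)
    time.sleep(0.1)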
Otherwise, would we use the dronekit-python libs to do that? How would we use a tablet / computer to get Tower-like functionality with that?
Thanks a bunch.

Related

Quartus 18.0 Lite MAX10 device board model number not listed in programmer menu

I have an assignment at my university which involves using Quartus - they use Quartus 18.0 Lite.
The board is a Terasic DE10-Lite, which uses the 10M50DAF484C7G chip.
I have installed this on both my Windows and Linux machines with the same issue.
I downloaded Quartus with the MAX10 device .qdz file, so it should be in there.
Note: when creating a project I can set the device to 10M50DAF484C7G, but when it comes to uploading my logic circuit design to the board it is not listed in the choices. I have attached a screenshot for clarity.
If anyone is able to help it would be greatly appreciated, as this means I cannot test my work on the weekends; our electronics lab is only open 9am to 5pm on weekdays for obvious health and safety reasons.
I see "10M50DAF484" is listed, you can probably use that, since the C7G just indicates the operating temperature (basic), fabric speed grade (medium), and version (production).

How to run C code (from Code::Blocks) without a desktop on the Raspberry Pi? (Like omxplayer does)

Sorry for my bad English. I have been working on 3D shapes with OpenGL on a Raspberry Pi 3 (Debian) for a while. I want to run my code without using the desktop (or a window). I searched, but it puzzled my mind. In a nutshell, I want to run my code as in the attached image.
When I searched this topic I came across the EGL library, but I don't know if I can use it.
If you have used the OpenMAX library before, you know OpenMAX doesn't use a window: all images and video can run in console mode, and you don't need any desktop. I wonder, is there a way I can use OpenGL in this way? (Can OpenGL run like the OpenMAX library or not?) If there is a way, how should I build my code? I want to render my image without the desktop, in console mode.
Thanks for your time. Best regards.
The most straightforward solution would be to just create a fullscreen window that has no border and no decorations (titlebar, buttons, etc.). If you want actual graphics output, there's nothing wrong with using X11. Despite some hearsay thrown around on the Internet, the Xorg X11 server is actually pretty lightweight.
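For example, a minimal sketch in Python with pygame, purely to illustrate the idea (the same flags exist in SDL or GLFW if you are working in C):

import pygame
from pygame.locals import DOUBLEBUF, FULLSCREEN, OPENGL

pygame.init()
# (0, 0) means "use the current display resolution"; FULLSCREEN gives a
# borderless window covering the whole screen, OPENGL attaches a GL context.
pygame.display.set_mode((0, 0), FULLSCREEN | OPENGL | DOUBLEBUF)

running = True
while running:
    for event in pygame.event.get():
        if event.type in (pygame.QUIT, pygame.KEYDOWN):
            running = False
    # ... your OpenGL drawing calls go here ...
    pygame.display.flip()

pygame.quit()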
If you really want to go without X11, then you should look at things like the kmscube demo https://cgit.freedesktop.org/mesa/kmscube/tree/ which does OpenGL directly to the display, without a graphics server or windowing system in between.
If you want it to be a little more abstracted, then have a look at how Wayland compositors talk to the display. The developers of the Sway Wayland compositor developed a nice abstraction library for this: https://github.com/swaywm/wlroots
You need to start the display server first.
What you need could work with "xinit", which manually starts the Xorg server; after that I suspect you should start "openbox", which is a window manager. This way your desktop application should run as is, no changes needed.
Best practice is to create a shell script for starting your application, which could look like this:
#!/bin/sh
set -e

# Disable screen blanking and power management
xset s off
xset -dpms
xset s noblank

# Start the window manager in the background
openbox &

cd /home/your_application_directory
your_executable 2>/dev/null >/dev/null
Save this script and mark it executable with
chmod +x /full_path_to_above_script
Then try to run this:
xinit /full_path_to_above_script
Hope this helps a bit... :)
Qt has a platform backend called eglfs, which lets your application run fullscreen on a single screen using EGL and KMS with very little overhead. It should work nicely with whatever OpenGL stuff you want to do.
You would just program a Qt application as normal and launch it with ./myapp -platform eglfs from a tty.
http://doc.qt.io/qt-5/embedded-linux.html#eglfs
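If you happen to be working from Python, a minimal sketch with PyQt5 shows the same launch pattern (this assumes PyQt5 is installed and your Qt build ships the eglfs plugin, as Raspberry Pi images usually do):

import sys
from PyQt5.QtWidgets import QApplication, QLabel

# QApplication picks "-platform eglfs" off the command line, e.g.
#   python3 myapp.py -platform eglfs
app = QApplication(sys.argv)
label = QLabel("Hello from eglfs")
label.showFullScreen()  # an eglfs app owns the whole screen anyway
sys.exit(app.exec_())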

Controlling movement without GPS

I was trying to use the DroneKit-Python API to control the movement of a drone. I've been reading what is in that link, but I can't find what I need. I want to be able to run the code with the drone indoors (and of course outdoors), so I can't rely on GPS. I've tried to eliminate that part and use only the send_ned_velocity() method (with the propellers off), but I couldn't hear a significant change in the motors.
The only way I can think of is using channel_override, but that doesn't seem to be the best choice. Can anyone help me?
Thank you in advance.
send_ned_velocity() will only work if you are in GUIDED mode. With ArduCopter 3.3, you can only be in GUIDED mode if you have a GPS lock, so you aren't going to be able to use this command indoors.
You'll have to wait for 3.4 to be released; then GUIDED mode will be supported without GPS, but instead of GPS you will need an optical flow module and a rangefinder installed and configured.
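For reference, the usual send_ned_velocity() helper (adapted from the DroneKit-Python guide, so treat it as a sketch) builds a SET_POSITION_TARGET_LOCAL_NED message and resends it every second; the autopilot only acts on it while in GUIDED mode:

import time
from pymavlink import mavutil

def send_ned_velocity(vehicle, velocity_x, velocity_y, velocity_z, duration):
    # Velocities are in m/s in the NED frame (north, east, down).
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0,                   # time_boot_ms (not used)
        0, 0,                # target system, target component
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        0b0000111111000111,  # type_mask: only the velocity fields are enabled
        0, 0, 0,             # x, y, z positions (not used)
        velocity_x, velocity_y, velocity_z,
        0, 0, 0,             # accelerations (not supported)
        0, 0)                # yaw, yaw_rate (not supported)
    # Velocity commands time out after a few seconds, so resend once a second.
    for _ in range(duration):
        vehicle.send_mavlink(msg)
        time.sleep(1)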

How to send waypoints programmatically to a drone?

I am very new at this and trying to get an understanding of it. I have read a lot on the DroneKit-Python site trying to figure out exactly how I am able to communicate with the drone.
The drone I am currently using is an Iris+.
I have looked around and there is software that already provides this, but I want to be able to control it and more.
I want to set waypoints, tell it to fly to the given waypoints, and have it keep going to them. Also, I want it to be able to arm itself, which is in the example, and override the safety mechanism.
Here is the basic idea of what I am trying to use it for: have it fly up at a certain time, go to waypoints 1, 2, 3, 1, etc., then after X amount of time or on low battery go back to the launch point and land.
I have found plenty of code that does what I need, though I don't know if it will work and, more importantly, I don't even know how to start programming for this. Maybe I have the wrong approach?
I kind of want this to be a light API, so that in the future I can make a simple UI on my phone and insert some coordinates to give it waypoints, and that is it. I know there is software out there already that does this, but I want to remove the need for touching the drone. I want it to start and end autonomously.
If anyone could help provide some info it would be greatly appreciated.
Assuming you have no companion computer (the Iris+ does not by default), that you are OK with running a ground station app (so you won't be out of range to send commands like "end mission on time expiry"), and that driving the behaviour from your phone is important, I would be looking at DroneKit Android.
Some notes:
You're going to have to touch the drone at some point to attach the batteries.
You can arm the device from DroneKit (see the rough mission sketch after these notes).
You can override the safety mechanism from a script. I hope you have a lot of money to pay for the new drones you're going to have to buy when they crash, and for all the litigation from damaged people and property (in other words, "don't do it").
The default behaviour is to return the device to launch (RTL) on low battery. This is configurable.
Setting a time is more "problematic". You can have a timer in a script that then sends return-to-launch, but the script needs to be connected to the UAV. This means that either you have to be running in a connected ground station (which might potentially be out of range) or on a companion computer.
The Iris+ does not have a companion computer. You have to install one or connect from a ground control station.
DroneKit-Python runs on Linux, Mac OS X or Windows. You can't just run it on an ordinary phone, though you could find some other mechanism to send messages/scripts to it running on a companion computer.
DroneKit Android runs on Android. We do have a planned iOS version too. In theory these could run on a companion computer, but in practice they are currently only used as ground stations.
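To give an idea of the scripting side, here is a rough DroneKit-Python sketch (the connection string and coordinates are placeholders, not something to fly as-is) that uploads a takeoff/waypoint/RTL mission, arms, starts it, and falls back to RTL after a timer; as noted above, it has to run on a connected ground station or a companion computer:

import time
from dronekit import connect, Command, VehicleMode
from pymavlink import mavutil

vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)  # placeholder link

# Build a simple mission: take off to 10 m, visit one waypoint, return to launch.
cmds = vehicle.commands
cmds.clear()
cmds.add(Command(0, 0, 0, mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                 mavutil.mavlink.MAV_CMD_NAV_TAKEOFF, 0, 0,
                 0, 0, 0, 0, 0, 0, 10))
cmds.add(Command(0, 0, 0, mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                 mavutil.mavlink.MAV_CMD_NAV_WAYPOINT, 0, 0,
                 0, 0, 0, 0, -35.3632, 149.1652, 20))   # made-up coordinates
cmds.add(Command(0, 0, 0, mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                 mavutil.mavlink.MAV_CMD_NAV_RETURN_TO_LAUNCH, 0, 0,
                 0, 0, 0, 0, 0, 0, 0))
cmds.upload()

# Arm and start the mission (the normal pre-arm checks still apply).
vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:
    time.sleep(1)
vehicle.mode = VehicleMode("AUTO")

# Crude "end mission after X time": force RTL after 10 minutes.
time.sleep(600)
vehicle.mode = VehicleMode("RTL")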

Is it a good idea to use a screensaver on a Raspberry Pi as digital signage?

I asked this question in the Raspberry Pi section, so please forgive me for posting it here again. It's just that it doesn't seem to be as active as this section of the forum. So, on to my question...
I have an idea and I'm working on it right now. I just wanted to see what the community's thoughts were on using a screensaver as digital signage. Every tutorial I've read shows someone using Chromium in kiosk mode, and while that's fine and works well for some uses, it doesn't work for what I need. I have successfully completed a Chromium kiosk, and it was cool. But the signage that I need to create now has to work without internet. I've thought about installing LAMP locally on the Pi and still using Chromium; I still may have to if this idea doesn't pan out.
All I need from the signage is a title message in the top center and a message body underneath it, with roughly a 300-400 character limit. My idea is to write a screensaver module, in C, that will work with a screensaver such as xscreensaver. The module would need to be able to load messages from a directory on the Pi. Then, for my clients to update their signage text, I would write a simple client that sends commands as well as the text via SSH to the Pi (a rough sketch of what I mean is below).
I want to know what other people think about this. Is it a good idea? Bad idea? Should I "waste" my time doing something like this?
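The update client could be as simple as something like this (a hypothetical sketch only; the hostname and paths are made up, and it assumes SSH keys are already set up):

import subprocess
import tempfile

def push_message(title, body, host="pi@signage.local",
                 remote_dir="/home/pi/messages"):
    # Write the title and body to a temp file, copy it over with scp,
    # then touch a flag file so the display side knows to reload.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(title + "\n" + body + "\n")
        local_path = f.name
    subprocess.run(["scp", local_path, "%s:%s/current.txt" % (host, remote_dir)],
                   check=True)
    subprocess.run(["ssh", host, "touch", remote_dir + "/reload"], check=True)

push_message("Welcome", "Lab meeting moved to 3pm.")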
Thanks in advance.
I have already been using an rPi as digital signage for just over a year. I am using two different setups:
Version 1 uses Raspbian, loading the X desktop and the qiv image viewer to cycle images stored on the Pi itself, synchronized with a remote server. The problem I found was power and SD stability: the power will fail no matter what, the only question is when, and the SD card can become corrupt due to all the writing that Raspbian does all the time. It certainly does not really need to write to the SD.
Version 2 uses a read-only filesystem and a command-line image tool. It uses the same process to show images from local storage and sync with the server, but a power failure causes no ill effects.
I am not using a screensaver to display the images; that seemed redundant to me, and there is no need to wait for the screensaver to start just to display them.
Some of the images are created using ImageMagick, which is nicely dynamic where needed.
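For instance, a sign with a title and a short body can be rendered to a PNG with something along these lines (a sketch only; the sizes, colours and paths are made up):

import subprocess

def render_sign(title, body, out_path="/home/pi/signage/current.png"):
    # Black 1920x1080 canvas, white text: big title near the top,
    # smaller body text below it.
    subprocess.run([
        "convert", "-size", "1920x1080", "xc:black",
        "-fill", "white", "-gravity", "north",
        "-pointsize", "96", "-annotate", "+0+60", title,
        "-pointsize", "48", "-annotate", "+0+300", body,
        out_path,
    ], check=True)

render_sign("Notice", "The lab is closed on Friday afternoon.")

For longer bodies that need word wrapping, ImageMagick's caption: input is handy instead of -annotate.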
