Problem modelling from Pixhawk log files with data-driven-dynamics from ethz-asl - data-modeling

I am having trouble with modeling from the log files of Pixhawk. I am following the data-driven-dynamics GitHub page. I can successfully generate a model, but I can't check the model without simulating it. When I simulate, it takes off nicely, starts to oscillate before reaching the desired altitude, then goes out of control and crashes. Any suggestions?
Thank you very much

Related

Errors with Watson Visual Recognition, training not possible

I am trying to train two models on Watson VR. One is for object (details) recognition within a picture. The other is to estimate the class of the object.
I have been able to prepare the object classes for both models.
However, it seems I have multiple issues with training and I am now stuck. I have found a similar post on Stack Overflow, but it relates to data size and type; my data are all in .jpg format and the whole dataset is below 250 MB.
Classifier:
The classifier is the one that gives me the most issues.
First, I tried to train the model, but then the server went down. The day after, I found the model marked as "trained" but with errors. I basically restarted by preparing the classes again.
All classes have at least 10-12 pictures (10 is the minimum required). When I click on "Train Model" I receive the error shown in the first screenshot.
In the dashboard I am given an explanation of the failed training (second screenshot).
The data size was originally about 241/250 MB; now it is 18.4/250 MB. I am not sure what brought about the change.
Thank you for the help!
Thanks for providing the screenshots, that is very helpful!
It says your "DrinksClassifier" is in a failed state. It's best to delete that collection from Studio, and start over. Make sure you have at least 10 examples of each class... the lower screenshot seems to show it didn't find any examples for "AgedCoffee".
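Before hitting "Train Model" again, it can also help to sanity-check the dataset locally. Here is just a rough Python sketch; it assumes one sub-folder per class under a folder called dataset (that layout is my assumption, not a Watson requirement), flags any class with fewer than the 10 required examples, and reports the total size against the 250 MB limit:

    # Rough pre-flight check; one sub-folder per class is an assumed (hypothetical) layout.
    # Flags classes with fewer than 10 .jpg examples and reports total dataset size.
    from pathlib import Path

    DATASET = Path("dataset")        # hypothetical root folder, one sub-folder per class
    MIN_IMAGES = 10                  # minimum examples per class mentioned above
    MAX_TOTAL_MB = 250               # dataset size limit mentioned above

    total_bytes = 0
    for class_dir in sorted(p for p in DATASET.iterdir() if p.is_dir()):
        images = list(class_dir.glob("*.jpg"))
        total_bytes += sum(img.stat().st_size for img in images)
        status = "OK" if len(images) >= MIN_IMAGES else "TOO FEW EXAMPLES"
        print(f"{class_dir.name}: {len(images)} images -> {status}")

    total_mb = total_bytes / (1024 * 1024)
    print(f"Total size: {total_mb:.1f} MB "
          f"({'within' if total_mb <= MAX_TOTAL_MB else 'over'} the {MAX_TOTAL_MB} MB limit)")

A class like "AgedCoffee" with zero examples would show up immediately in that listing.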

How to get Android Bug Reports Data?

Since Android is open source, I guess there won't be a problem if I use the data for my project. I want to do a project where I need a dataset that contains bug reports that many users have sent from across the world, and I want to predict the severity of a bug by analyzing the report, either by performing text mining on the description or by analyzing the stack trace. But the main problem is the dataset. Can someone tell me how I can get a dataset for this project? Also, if you have any ideas on this project, help me out.
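To make the text-mining part concrete, this is the kind of baseline I have in mind; a rough sketch only, where the CSV file name and the "description"/"severity" columns are placeholders for whatever dataset I manage to find:

    # Sketch of a severity-prediction baseline: TF-IDF over the report description
    # plus a linear classifier. "bug_reports.csv" and its columns are hypothetical.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    df = pd.read_csv("bug_reports.csv")          # hypothetical dataset
    X_train, X_test, y_train, y_test = train_test_split(
        df["description"], df["severity"], test_size=0.2, random_state=42)

    model = make_pipeline(
        TfidfVectorizer(stop_words="english", max_features=20000),
        LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))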

How would I add live video streaming code to the graphite dashboard?

I am running an Apache2 Graphite host on an Orange Pi One, having written a service to translate and send data from the GPIO sensors to the carbon line receiver. My project is to incorporate all the I/O from the device into a dashboard.
There are loads of graphite dashboards, but I can't find one that has a simple video stream applet/plugin.
I have searched the graphite-web GitHub repository and can easily adapt dashboard.html, but I am not sure whether the entire file is a placeholder, and whether any additions would render properly after all the JavaScript has run and built the page. It seems I might need to reverse engineer the JavaScript, which is quite an effort for the simple task I have in mind.
If I can figure out the video stream code for the CSI camera, then I can adapt it to modify the dashboard with all the other data I want to display.
So, I am really looking for some guidance on getting started with modifying the dashboard code.
You can use the text panel to add HTML content to the dashboard.
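For the camera side, one common pattern is to serve an MJPEG stream from the Orange Pi and point an <img> tag inside that HTML at the stream URL. A rough Python sketch, assuming Flask and OpenCV are installed and the CSI camera shows up as /dev/video0 (both of these are assumptions):

    # Minimal MJPEG streaming sketch; the stream at /stream can then be embedded
    # in the dashboard's HTML with an <img> tag. Device index 0 is an assumption.
    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    camera = cv2.VideoCapture(0)   # assumes the CSI camera is exposed as /dev/video0

    def frames():
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

    @app.route("/stream")
    def stream():
        return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8081)   # hypothetical port

The panel's HTML would then only need something like <img src="http://<orange-pi-address>:8081/stream">.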

App Engine backup never finishes only clue is failure in map reduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The namespace where the error occurs has the most data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings/configuration files to give us more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
From issue 8363, the App Engine team indicated that from their logs they could see that the MapReduce job failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when MapReduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this in the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap class member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, presenting a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external sites' servers (via an API or some other means) and keep the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to find out how, or whether, this can be done, but that has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping": downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. That sounds complicated at best, and my worry is that the info on these sites is so static that the data isn't even kept in a database to build the page; the page itself may simply be updated (hard-coded) whenever something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you want to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot. When it does, the job sends you an email so you can update the data.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
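A rough sketch of that snapshot-and-alert idea (the URL, snapshot file, and mail settings below are all placeholders):

    # Hash the page, compare with the previously stored hash, and email yourself
    # when the content changes. All names below are placeholders.
    import hashlib
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    import requests

    URL = "http://example.com/train-schedule"      # placeholder schedule page
    SNAPSHOT = Path("schedule_page.sha256")

    def page_hash(url):
        html = requests.get(url, timeout=30).text
        return hashlib.sha256(html.encode("utf-8")).hexdigest()

    def notify(subject, body):
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = "monitor@example.com"        # placeholder addresses
        msg["To"] = "you@example.com"
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:    # assumes a local mail relay
            smtp.send_message(msg)

    current = page_hash(URL)
    previous = SNAPSHOT.read_text().strip() if SNAPSHOT.exists() else None
    if previous is not None and current != previous:
        notify("Schedule page changed", f"{URL} no longer matches the saved snapshot.")
    SNAPSHOT.write_text(current)

Run it from cron (say, once a day) and the manual transcription only has to happen when the email arrives.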
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site I use a two-day window, so I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
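Not the buscatchers code itself, but a minimal XPath sketch of the general approach; the URL, the table class, and the column meanings are placeholders that would need to match the real site's markup:

    # Fetch a schedule page and pull rows out of a table with XPath.
    # The URL and XPath expressions are hypothetical.
    import requests
    from lxml import html

    URL = "http://example.com/train-schedule"                    # placeholder
    page = html.fromstring(requests.get(URL, timeout=30).content)

    rows = []
    for tr in page.xpath('//table[@class="schedule"]//tr')[1:]:  # hypothetical table class; skip header
        cells = [td.text_content().strip() for td in tr.xpath("./td")]
        if cells:
            rows.append(cells)   # e.g. [origin, destination, departure, arrival]

    for row in rows:
        print(row)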
