YOLOv5 can't detect moss

I use Colab to train on my moss dataset, but the trained model can't detect anything in the picture. How do I solve this problem? Should I train on more pictures? Also, while labeling the moss with labelImg, I only labeled the moss instances that I happened to see in the picture, but there are too many of them! Is it appropriate to do that?

You need to label every object in the training data that needs to be detected. If you leave instances unlabeled, the model gets conflicting signals about whether it should predict them or not: an unlabeled moss patch is treated as background, so you end up penalising the model when it makes a correct prediction on it.
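One practical way to audit your labeling is to check that every training image has a label file and count the annotations per image, so unlabeled or sparsely labeled images stand out. Below is a minimal sketch, assuming YOLOv5's standard layout (parallel `images/` and `labels/` folders, one `.txt` per image with `class x_center y_center width height` per line); the paths are hypothetical and should be adjusted to your Colab setup.

```python
from pathlib import Path

# Hypothetical dataset location; adjust to your own paths.
images_dir = Path("dataset/images/train")
labels_dir = Path("dataset/labels/train")

for img in sorted(images_dir.glob("*.jpg")):
    label_file = labels_dir / (img.stem + ".txt")
    if not label_file.exists():
        # An image with no label file is treated as pure background.
        print(f"{img.name}: NO LABEL FILE")
        continue
    lines = [l for l in label_file.read_text().splitlines() if l.strip()]
    # Each line is: class_id x_center y_center width height (normalized 0-1)
    print(f"{img.name}: {len(lines)} moss boxes")
```

Images that report suspiciously few boxes relative to how much moss is visible are the ones most likely to confuse training.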

Related

How to sort the Activity Timeline by Profile/User on an Account in Salesforce

I am a relative novice with the Salesforce interface and platform, so if I misuse some terminology, I apologize. In my current role I am learning Apex, and one of the tasks I have been assigned is figuring out how to sort/filter the Activity Timeline by Profile/User on an Account's page. I have been reading up on this topic but haven't found anything concrete. The closest I have come to an answer is the following link:
https://trailblazer.salesforce.com/ideaView?id=0873A0000003XdlQAE
however, based on the conversation, I believe the post refers to the desire for an additional built-in filter beyond Date Range, Activities, and Activity Type. With that said, I was wondering if it is possible to:
Filter the Activity Timeline by User, and if so,
How I can complete this task, whether through Apex or some other method.
The following image shows the Activity Timeline in question; the names highlighted in yellow are the Users/Profiles I mean. My objective is to sort the display by these names instead of the default chronological order per month. Thank you in advance!
A bit late to this, but as far as I know there's no way to filter the standard component, which is why I built TimelinePlus (you can find it on the AppExchange). It's kind of a complicated build once you get past the basics, but it's certainly possible to build this from scratch; there are also SLDS components to help with the styling.
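If you do build a custom component, the data side is straightforward: activities on an Account are Task (and Event) records whose WhatId points at the Account, and you can order or group them by owner. Here is a minimal sketch of that query using Python's simple_salesforce library purely as an illustration; a real custom timeline would run the equivalent SOQL in Apex or a Lightning Web Component, and the credentials and Account Id below are placeholders.

```python
from collections import defaultdict
from simple_salesforce import Salesforce

# Placeholder credentials; in a real component this SOQL runs in Apex.
sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

account_id = "001XXXXXXXXXXXXXXX"  # hypothetical Account Id

result = sf.query(
    "SELECT Id, Subject, ActivityDate, Owner.Name "
    f"FROM Task WHERE WhatId = '{account_id}' "
    "ORDER BY Owner.Name, ActivityDate DESC"
)

# Group the timeline entries by owning User instead of by month.
by_owner = defaultdict(list)
for record in result["records"]:
    by_owner[record["Owner"]["Name"]].append(record["Subject"])

for owner, subjects in by_owner.items():
    print(owner, subjects)
```

The grouping step is what replaces the standard component's month-based sections with per-user sections.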

Errors with Watson Visual Recognition, training not possible

I am trying to train two models on Watson VR. One is for recognizing objects (details) within a picture. The other is for estimating the object's class.
I have been able to prepare the classes of object for both models.
However, it seems I have multiple issues with training and I am now stuck. I found a similar post on Stack Overflow, but it relates to data size and type; my data are all in .jpg format and the whole dataset is below 250 MB.
Classifier:
The classifier is the one that gives me more issues.
Firstly, I tried to train the model, but then the server went down. The day after, I found the model marked as "trained" but with errors. I basically restarted by preparing the classes again.
All classes have at least 10-12 pictures (10 is the minimum required). When I click on "Train Model" I receive the following error:
In the dashboard I am given an explanation of the failed training:
The data size was originally about 241/250 MB; now it is 18.4/250 MB. I am not sure what caused the change.
Thank you for the help!
Thanks for providing the screenshots, that is very helpful!
It says your "DrinksClassifier" is in a failed state. It's best to delete that collection from Studio, and start over. Make sure you have at least 10 examples of each class... the lower screenshot seems to show it didn't find any examples for "AgedCoffee".
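Before re-uploading, it can save a failed round to sanity-check the training data locally, since Watson VR needs at least 10 examples per class and a class that ends up empty (like "AgedCoffee" above) will fail the run. A minimal sketch, assuming a hypothetical layout with one folder per class of .jpg files:

```python
from pathlib import Path

# Hypothetical layout: one sub-folder per class, containing .jpg examples.
training_root = Path("training_data")
MIN_EXAMPLES = 10     # minimum per class mentioned in the question
MAX_TOTAL_MB = 250    # upload limit mentioned in the question

total_bytes = 0
for class_dir in sorted(p for p in training_root.iterdir() if p.is_dir()):
    images = list(class_dir.glob("*.jpg"))
    total_bytes += sum(img.stat().st_size for img in images)
    status = "OK" if len(images) >= MIN_EXAMPLES else "TOO FEW"
    print(f"{class_dir.name}: {len(images)} images ({status})")

print(f"Total size: {total_bytes / 1024 / 1024:.1f} MB "
      f"(limit {MAX_TOTAL_MB} MB)")
```

This also explains mysterious size changes: if a class folder is empty or files failed to upload, the reported dataset size will drop well below what you prepared.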

When exactly does a site use its database for retrieval?

My knowledge of how a database works is close to zero, and I'm trying to understand when exactly a site uses its database to retrieve information. For example, does the site retrieve all the information the moment I load it (so that when I choose, say, "funny pictures", it doesn't have to retrieve anything from the database), or does it retrieve information only when I make a specific choice? I hope you understand my question; I'm sorry for my bad English.
It depends on how it has been implemented and with which technology.
Most of the time, it loads only the specific set of information relevant to the current context.
It depends on the software technology used, the settings, and the site's code. Normally a site is set up to fetch something only when needed, but it is possible to change this behaviour.
There is one more possibility you haven't noticed, but it is often used: for example, you have a set of pictures. You see all their small icons on the page, but the whole picture only when you click on an icon. Then, while you are looking at the icons or at one of the pictures, the other pictures are downloaded in the background, to be shown quickly when needed. Sometimes even a primitive prediction system is used to guess what you'll look at next.
And all these behaviours apply not only to data from databases, but to all data on the pages.
Again, the main point: which behaviour to use is the site developer's choice.
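To make the "retrieve only on a specific choice" pattern concrete, here is a minimal sketch in Python using Flask and SQLite; the table, routes, and database file are hypothetical. The page itself is served without touching any data; the database is queried only when the visitor picks a category such as "funny pictures".

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Serving the page itself touches no data: just static HTML.
    return "<html><body><a href='/pictures/funny'>funny pictures</a></body></html>"

@app.route("/pictures/<category>")
def pictures(category):
    # The database is queried only now, when the visitor makes a choice.
    conn = sqlite3.connect("site.db")  # hypothetical database file
    rows = conn.execute(
        "SELECT title, url FROM pictures WHERE category = ?", (category,)
    ).fetchall()
    conn.close()
    return jsonify([{"title": t, "url": u} for t, u in rows])
```

The alternative pattern would query everything inside `index()` and embed it in the page, trading a heavier initial load for instant clicks afterwards.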

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external website's information, how they can best get there, being provided a route with schedule times for the different modes of chosen transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external site's server (via an API or some other means) and retain it in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these mainly static sites is so static that the data isn't even kept in a database to build the page, and the page itself is hard-coded and edited by hand when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot. When it has, it sends you an email so you can update the data.
The above method could also be used in conjunction with some sort of screen scraper, falling back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
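A minimal sketch of that change-detection job in Python, assuming a hypothetical schedule URL, placeholder email addresses, and a local mail relay; it hashes the page body and emails you when the hash differs from the stored snapshot.

```python
import hashlib
import smtplib
from email.message import EmailMessage
from pathlib import Path

import requests

URL = "http://example.com/train-schedule"   # hypothetical schedule page
SNAPSHOT = Path("schedule.hash")            # hash saved at last transcription

def page_hash(url: str) -> str:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return hashlib.sha256(response.content).hexdigest()

current = page_hash(URL)
previous = SNAPSHOT.read_text().strip() if SNAPSHOT.exists() else None

if previous is not None and current != previous:
    # Page changed since we last transcribed it: notify a human.
    msg = EmailMessage()
    msg["Subject"] = "Schedule page changed, please re-check the data"
    msg["From"] = "monitor@example.com"      # placeholder addresses
    msg["To"] = "you@example.com"
    msg.set_content(f"{URL} no longer matches the stored snapshot.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

SNAPSHOT.write_text(current)
```

A cron entry or scheduled task running this daily would be enough for schedules that rarely change.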
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web-scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
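For the XPath approach mentioned above, here is a minimal sketch with Python's requests and lxml; the URL and the table structure in the expressions are hypothetical and would need to be adapted to the real schedule page.

```python
import requests
from lxml import html

URL = "http://example.com/train-schedule"   # hypothetical schedule page

response = requests.get(URL, timeout=30)
response.raise_for_status()
tree = html.fromstring(response.content)

# Hypothetical structure: one <tr> per departure, with origin,
# destination, and time in separate <td> cells.
for row in tree.xpath("//table[@id='schedule']//tr[td]"):
    cells = [cell.text_content().strip() for cell in row.xpath("./td")]
    origin, destination, departure = cells[:3]
    print(origin, "->", destination, "at", departure)
    # From here, insert each tuple into your own database for querying.
```

Pairing this with the email-on-failure idea above keeps you from silently serving stale routes when the page layout changes.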

How do I plot points (data) from the database onto OpenStreetMap?

I wonder how the mapping works in OpenStreetMap. I'm building an app that uses my own database (which I will build from OSM dumps using Osmosis, the same as on the OpenStreetMap website). I really have no idea how it works. Thanks in advance! :D
http://wiki.openstreetmap.org/wiki/Develop is your friend for these kinds of questions. It explains (depending on the page) in pretty good detail how things work.
I don't know how Osmosis does things, since we are working with osm2pgsql, but I assume they are fairly similar: it basically looks for certain tags (since everything is "tagged" by the OSM community) and stores them in the database. So if you have a supermarket POI "some_supermarket" that has a tag "supermarket", an entry in the database will reflect this info and the coordinates. Streets, buildings and so on are only coordinates that get connected when rendering or processing them.
If you are asking about the rendering of the tiles/geo-images, there are renderers available that do these tasks. The wiki above will give you lots of answers; just search for "renderer". They retrieve the information (depending on zoom level and your settings) from the database for a certain bounding box and interpret the data from the database, e.g. they know that a street is connected and needs to be coloured in grey.
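If the goal is simply to plot your own points on top of standard OSM tiles (rather than render tiles yourself), a minimal sketch in Python with the folium library, assuming a hypothetical SQLite table of POIs with lat/lon columns filled from your Osmosis extract:

```python
import sqlite3
import folium

# Hypothetical table: pois(name TEXT, lat REAL, lon REAL),
# e.g. populated from an Osmosis extract.
conn = sqlite3.connect("osm_points.db")
points = conn.execute("SELECT name, lat, lon FROM pois").fetchall()
conn.close()

# Folium draws markers on top of standard OpenStreetMap tiles.
m = folium.Map(location=[13.7563, 100.5018], zoom_start=12)
for name, lat, lon in points:
    folium.Marker([lat, lon], popup=name).add_to(m)

m.save("map.html")  # open in a browser to see the markers
```

This keeps the heavy tile rendering on OSM's side; your database only has to supply coordinates and labels.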
