How to create and train your own model with your own dataset? - tensorflow.js

I want to create and train my own model. If creating my own is not possible, is there any way to train existing models (YOLO, MobileNet, COCO)?
There are some requirements:
- I only know JS (I tried Python; it may be the best language, but I just can't, sorry)
- Performance: at least 24 FPS, i.e. real-time detection
- Freedom to use my own dataset (e.g. files laid out like /dogs/pitbull/01.png)
After 10 years of experience with JS, I just can't bring myself to work in Python.
Thanks to everyone for your help.

This answer is written by an IBMer.
If you want to build a model like the one described (image classification/object detection) without having to deal with Python, and you want to use it with JavaScript in a browser, you can try the tooling available at https://cloud.annotations.ai/. It works with an IBM Cloud account, but you can stay on the free tier, so you just need to register, at least for your first experiments.
At https://github.com/cloud-annotations you'll find code and boilerplate to use your model on different platforms.
It is not an advanced tool, but it lets you get hands-on with the topic.
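For a pure-JavaScript route, TensorFlow.js can also retrain MobileNet via transfer learning. A minimal sketch, assuming the @tensorflow/tfjs and @tensorflow-models/mobilenet packages (Node-style requires; use script tags or ES imports in the browser), and assuming you have already loaded your /dogs/... images as tensors or image elements and built an integer label array:

```js
// Transfer learning: MobileNet as a frozen feature extractor + a small trainable head.
const tf = require('@tensorflow/tfjs');
const mobilenet = require('@tensorflow-models/mobilenet');

async function trainClassifier(images, labels, numClasses) {
  const base = await mobilenet.load();

  // infer(img, true) returns the internal embedding (1024-dim for the default
  // MobileNet) instead of class scores.
  const xs = tf.concat(images.map(img => base.infer(img, true)));
  const ys = tf.oneHot(tf.tensor1d(labels, 'int32'), numClasses);

  const head = tf.sequential({
    layers: [
      tf.layers.dense({ inputShape: [1024], units: 100, activation: 'relu' }),
      tf.layers.dense({ units: numClasses, activation: 'softmax' }),
    ],
  });
  head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

  await head.fit(xs, ys, { epochs: 20, shuffle: true });
  return { base, head };
}
```

Because only the small head is trained, training is feasible client-side, and MobileNet inference can reach real-time frame rates in a WebGL-backed browser.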

Related

SCORM authoring tool

I'm starting a new project. The aim of the project is to create an e-authoring tool for building SCORM-compliant courses. I'm new to this domain and have little idea about it. I have looked at the authoring tool from Articulate, which my customer requires me to match. I understand the content creation, but I am trying to understand how I can export this as a SCORM-compliant course. Along the way I learned about xAPI as well, and understood it to be a kind of enhanced SCORM.
Could anyone guide me through this?
1) How can I create content in my custom authoring tool and export it as SCORM-compliant?
2) Is it better to use xAPI or SCORM?
3) How does the SCORM package communicate with my custom-made LMS?
4) I've also heard about LRS.
My custom authoring tool will be made in React and the store would be MongoDB.
Any help would be greatly appreciated. Thank you!
That is a lot to take on, particularly all at once.
1) The SCORM spec is made up of multiple parts. There is a packaging portion and a runtime portion. The basics are that your package needs to be a zip file, and that zip needs to include specific files that indicate to the LMS what type of standard it is along with other metadata about the package. For SCORM this will be called an imsmanifest.xml file. For xAPI you are most likely going to use a cmi5.xml (see cmi5) or a tincan.xml file (what Articulate Storyline exports when it says "xAPI"). The other parts of the package will depend on what standard and version of that standard (for SCORM 1.2, 2004 2nd, 3rd, or 4th edition) you are targeting, realizing that different LMSs support different standards and different degrees of those standards.
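Since your authoring tool is React/Node-based, the packaging side can start as simple string templating. A rough sketch of generating a minimal single-SCO SCORM 1.2 imsmanifest.xml (the identifiers and titles are placeholders; a real manifest usually carries more metadata):

```js
// Generate a minimal SCORM 1.2 imsmanifest.xml for a single-SCO package.
// All identifiers below are placeholders for illustration.
function buildManifest({ courseId, title, launchFile }) {
  return `<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="${courseId}" version="1.0"
    xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
    xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>${title}</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>${title}</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent" adlcp:scormtype="sco" href="${launchFile}">
      <file href="${launchFile}"/>
    </resource>
  </resources>
</manifest>`;
}

// Usage: write this file to the root of the zip, alongside the content files.
console.log(buildManifest({ courseId: 'com.example.course', title: 'My Course', launchFile: 'index.html' }));
```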
Once you have a package constructed that will import, the content itself (usually an HTML file) will need to locate the JavaScript API provided by the SCORM player (from the LMS) and make specific calls depending on what the content needs to store or read; this is the runtime portion. The calls will again depend on the standard and version. For xAPI-based packages (either tincan.xml packages or cmi5 packages) the content will communicate directly with the LRS based on the information provided on the URL at launch time (there is no built-in JavaScript API).
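Concretely, the SCORM 1.2 runtime conversation from the content side looks roughly like this (SCORM 2004 follows the same pattern with an API_1484_11 object and Initialize/GetValue/SetValue/Commit/Terminate names); a sketch, not a hardened implementation:

```js
// Walk up the frame hierarchy to find the "API" object the LMS exposes.
function findAPI(win) {
  let hops = 0;
  while (!win.API && win.parent && win.parent !== win && hops++ < 10) {
    win = win.parent;
  }
  return win.API || null;
}

const api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
if (api) {
  api.LMSInitialize('');                                    // begin the session
  const status = api.LMSGetValue('cmi.core.lesson_status'); // read stored state
  api.LMSSetValue('cmi.core.score.raw', '85');              // write a score
  api.LMSSetValue('cmi.core.lesson_status', 'passed');
  api.LMSCommit('');                                        // ask the LMS to persist
  api.LMSFinish('');                                        // end the session
}
```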
2) This entirely depends on what your customer base looks like and the types of data that you intend to capture. SCORM is a more mature landscape, has wider adoption, and is more heavily specified; if the information you need to capture fits into its limited information model, then it is still an excellent choice. If you need significant data portability, and/or the information you need to capture goes beyond compliance data (pass/fail, complete, and score) and/or interaction data (questions + answers), then you should consider xAPI, specifically via cmi5.
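For comparison, this is roughly what "communicating directly with the LRS" looks like for xAPI content; the endpoint, credentials, and activity IDs below are hypothetical, and in a cmi5 or tincan launch they arrive via the launch URL:

```js
// POST one xAPI statement to an LRS (all endpoint/credential values are placeholders).
const endpoint = 'https://lrs.example.com/xapi';
const auth = 'Basic ' + btoa('key:secret');

const statement = {
  actor: { mbox: 'mailto:learner@example.com', name: 'A Learner' },
  verb: { id: 'http://adlnet.gov/expapi/verbs/passed', display: { 'en-US': 'passed' } },
  object: { id: 'http://example.com/activities/my-course' },
  result: { score: { scaled: 0.85 }, success: true },
};

fetch(endpoint + '/statements', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Experience-API-Version': '1.0.3',  // required header per the xAPI spec
    Authorization: auth,
  },
  body: JSON.stringify(statement),
}).then(res => console.log('LRS responded', res.status));
```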
3) The LMS must provide a JavaScript API (specified by the SCORM runtime) which the content will use as its interface. The storage/retrieval of data is implementation specific for the LMS beyond what is included in the specification for the JavaScript API.
4) You didn't really include a question here.
I would suggest familiarizing yourself with the two sets of standards via http://scorm.com and http://xapi.com. And although it is a plug for my company's product, you may want to consider the Rustici Driver as it is a product (library) specifically designed to make it easy for an authoring tool to export content as SCORM 1.2, 2004, AICC, cmi5 or Tin Can (the latter two being xAPI). Once you have your tool up and running with minimal standards support you should consider testing it on Rustici's SCORM Cloud (it is free for this purpose), see http://cloud.scorm.com.
The format is huge, and there are no quick reference guides. Different authoring tools also have different depths of SCORM support. You should probably start with this document
Sounds like you're talking about designing editable content, and the content "framework" itself.
This is a massive effort, and massive ongoing support! That said, people do it.
Having built a CMS system supporting many subject matters, I had to divide and conquer this task.
A few ways I'd think to digest this beast: data, data, data.
Requirements on Activities (Interaction types)
Design (static/dynamic) of these interactions
The view/facade doing the displaying can change; tech moves at the speed of light. You need to come up with a super solid data model.
I'd think about how these can be generic, and how they can be extended to meet the customer's goals/needs. It all depends on how much customization (if any) can happen.
I'd start mapping all this to SCORM CMI object-level calls: scoring, progress, interactions, objectives, etc.
Get yourself a wicked SCORM content API library, or write one yourself. You'll be re-using a lot of these calls; there's no sense baking them into all your interactions.
Get up to speed on SCORM packaging; much of this has to be defined at author time. There's lots of reading, and a lot of features to pick through to see whether your customers even use them. Don't develop in places that have 0.1% market need; the low-hanging fruit gets you to market.
Surround yourself with passionate great people. You'll need them.
As far as the standards go, it's all about portability. SCORM works directly with an LMS, if that's where your customer goes. Others use an LRS, which is coded to work with one they set at author time. You can even do both.
Aside from React and MongoDB, you'll need something that can do the lift and shift of all this content.

How to collect data from a website

Preface: I have broad, college-level knowledge of a handful of languages (C++, VB, C#, Java, many web languages), so go with whichever you like.
I want to make an Android app that compares numbers, but in order to do that I need a database. I'm a one-man team, and the numbers get updated biweekly, so I want to grab them off of a wiki that gets updated as well.
So my question is: how can I access information from a website using one of the languages above?
What I understand the problem to be: Some entity generates a data set (i.e. numbers) every other week and you have a need to download that data set for treatment (e.g. sorting).
Ideally, the web site maintaining the wiki would provide a Service, like a RESTful interface, to easily gather the data. If that were the case, I'd go with any language that provides easy manipulation of HTTP request & response, and makes your data manipulation easy. As a previous poster said, Java would work well.
If you are stuck with the wiki page, you have a couple of options. You can parse the HTML your browser receives (Perl comes to mind as a decent language for that). Or you can use tools built for that purpose such as the aforementioned Jsoup.
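Since "many web languages" are on the table, the same scrape is only a few lines in Node.js too. A sketch using the cheerio parser (a Jsoup-like HTML library; the URL and the table selector are placeholders you would adapt to the wiki's actual markup):

```js
// Fetch a wiki page and pull numeric values out of its table cells.
// Assumes Node 18+ (built-in fetch) and `npm install cheerio`.
const cheerio = require('cheerio');

async function scrapeNumbers(url) {
  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);

  return $('table td')                        // every table cell on the page
    .map((i, el) => parseFloat($(el).text())) // try to read each cell as a number
    .get()
    .filter(n => !Number.isNaN(n));           // keep only the numeric cells
}

scrapeNumbers('https://example.com/wiki/numbers').then(console.log);
```

From there, loading the values into the Android app's database is the straightforward part.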
Your question also mentions some implementation details such as needing a database. Evidently, there isn't enough contextual information for me to know whether that's optimal, so I won't address this aspect of the problem.
http://jsoup.org/ is a great Java tool for accessing content on HTML pages.
Consider https://scraperwiki.com/ - it's a site where users can contribute scrapers. It's free as long as you let your scraper be public. The results of your scraper are exposed as CSV and JSON.
If you don't know what a "scraper" is, google "screen scraping" - it's a long and frustrating tradition for coders, who have dealt with the same problem you have since the beginning of networked computing.
You could check out http://web-harvest.sourceforge.net/
For Python, BeautifulSoup is one of the most tolerant HTML parsers out there. The documentation also lists similar libraries in Ruby and Java, so you'll probably find something relevant there.

Feature tracking WinForms

I would like to extend my WinForms app with a feature that allows me to monitor which functions are used by the users.
The idea is to count how many times e.g. a button has been clicked, or a popup was opened.
I want to know which features are used more or less often by the users.
Any ideas how this can be done? (Or even if somebody solved this problem already)
tia,
Martin
The only mechanism I can think of to do what you're looking for is to use a logger like log4net / Log4PostSharp to log details to a log file on the machine; this would give you details on usage for that particular client. You would have to create a custom attribute that you could decorate your methods with so that something gets written out to the log file; otherwise your code would end up littered with code to implement the logging!
Have a look at this article too; it uses Log4PostSharp with AOP (Aspect-Oriented Programming), which makes the implementation of the logging much cleaner (it uses attributes).
http://www.codeproject.com/KB/dotnet/log4postsharp-intro.aspx
You can find some if you google for the term "application analytics" instead of "feature tracking".
I have found the following products:
includeapp.com
Software Statistics Service
Dotfuscator for .NET, DashO for Java
FusionAnalytics
Flurry Analytics
OpenSpan Desktop Analytics
DeskMetrics
EQATEC Analytics
Rapidengines
I might say that I also plan to create such a product. When it reaches beta, I will add it to the list.

How flexible is Elgg?

I know it has great out-of-the-box features, but is it easy to customize?
For example, when I query stuff from the database or change CSS layouts.
Is it faster to create my own modules for it, or to just write everything from scratch using a framework like Cake?
I'm currently working on an Elgg-based site and I absolutely hate it. The project was near completion when I stepped in, but the people who created it were no longer available, so I took it over as a freelancer.
As a personal impression, you are much better off writing the app from scratch in a framework. I don't know if the people before me butchered it, but the code looks awful, the entity-based relationship model is weird to say the least, and debugging is horrendous. Also, from my point of view, it doesn't scale very well. If you were to have a sizeable user base, I'd be really worried.
It keeps two global objects ($vars and $CONFIG) that have more than 5000(!) members loaded in memory on each page. That is a crap indicator.
I've worked extensively with Cake, and with Elgg for about a month, on a project that is in the QA stage right now.
My advice is: if you need something quick with a lot of features and you only need to customize a little, go with Elgg.
If you're going to customize a lot and you can afford the development of all the forums, friends, invites, etc. features, go with Cake or any other MVC framework.
I have been working on an Elgg site for the past month or so. Its code is horrible, though it's not the worst I've seen :D. It's not built for programmers the way Drupal is :D, but it's not too bad. Once I got a handle on the metadata functions and read most of the code, I was able to navigate it well and create custom modules and such.
What would help immensely would be some real documentation and explanation of the Elgg system. I don't think that's going to happen, though :).
Out of the box there are a few problems; there are some bugs that haven't been fixed for a while, and I've had to go in and fix them myself. Overall, you can make it pretty and it has some cool functions, but I wouldn't dive in until I had read the main core code to get a handle on what's happening on the backend.
Oh, and there's massive use of storing values in globals, and a crap ton of DB calls (same with Drupal, though).
I wonder whether storing everything, and I mean everything, for your site in globals will really hinder the server if you have a massive user load.
If you want to build a product based on a social networking platform/framework, then Elgg is definitely a good way to go. The code is not that bad if you actually look before leaping and do what Elgg expects. Go against its processes and structures and it will leave you beaten by the side of the road.
Developing modules/plugins or editing CSS is easy, and Elgg gives you great flexibility to basically build your own product on top of it. Dolphin, by comparison, does not allow you to do anything outside of what it expects you to do.
If, however, you just need a framework (not primarily for social networking, etc.) with some user-based functionality, then I suggest Cake, or if your project is HUGE then maybe Symfony or Zend. They all have plugins you can download and use/hack, which would be easier to adjust for personalized needs.
To show what you can do with Elgg, here is a site, Mobilitate, we built with Elgg 1.7. This is a very complicated website and was built on top of Elgg.
We are starting a new project with Elgg 1.8. The new version is a major improvement: they have made a lot of elements easier, incorporated better JS and CSS implementation/structure, and commented their own code better.
Elgg's database schema is horrific. They've essentially implemented a NoSQL database in SQL. It completely defeats the purpose of using a relational table structure.
If you can ignore this, and aren't doing much customization, you might be OK with Elgg. If not, STAY AWAY.
I've been working with Elgg for over a year. It is easier to customize than it would be to build something from scratch using a framework like CakePHP. I tried CakePHP and found it even more complicated than Elgg.
It is difficult to query the database directly due to the entity-based relationship model. You should use the built-in methods for accessing data. However, I have written many queries to double-check what is actually stored in the database.
You cannot change layouts using CSS alone. You have to deal with the various Elgg views. But CakePHP uses the same Model/View/Controller (MVC) concept, so that would be just as difficult.

Simplest way to create a tiny database app in linux

I'm looking to create a very small cataloguing app for personal use (although I'd open source it if I thought anyone else would use it). I don't want a web app as it seems like overkill to have an application server just for this - plus I like the idea of it being standalone and sticking it on a USB stick.
My criteria:
The interface must be simple to program. It can be curses-style if that makes it easier to code. My experience with ncurses would suggest otherwise, but I'd actually quite like a command-line UI.
Language doesn't really matter. My rough order of preference (highest first):
Python
C
C++
Java
I'll consider anything linux-friendly
I'm thinking sqlite for storage, but other (embeddable) suggestions welcome.
Has anyone done this sort of thing in the past? Any suggestions? Pitfalls to avoid?
EDIT:
Ok, it looks like python+sqlite is the early winner. That just leaves the question of which UI library. I know you get tkinter for free in Python, but it's just so ugly (I'd rather have a curses interface). I've done some GTK in C, but it looks fairly unnatural in Python. I had a very brief dabble with wxWidgets, but the documentation's pretty atrocious IIRC (they renamed the module at some point, I think, and it's all a bit confused).
So that leaves me with PyQt4, or some sort of console library. Or maybe GTK. Thoughts? Or have I been too hasty in writing off one of the above?
I would definitely recommend (or second, if you're already thinking it) Python with sqlite3. It's simple and portable, with no big DB drivers. I wrote a similar app for my own cataloguing purposes and it's doing just fine.
I vote for PyQt or wx for the GUI. (And I second the Python+sqlite votes to answer the original question.)
I second (or third) python and sqlite.
As far as suggestions are concerned:
If you're feeling minimally ambitious, I'd suggest building a very simple web service to synchronize your catalog to a server. I've done this (ashamedly, a few times) for similar purposes in the past.
With sqlite, backups can literally be as simple as uploading or downloading the latest database file, depending on the file's timestamp.
Then, if you lose or break your flash drive (smashed to pieces, in my case), your catalog isn't lost. You gain more portability, at least 1 backup, and some peace of mind.
