I'm starting a new project. The aim is to create an e-authoring tool for building SCORM-compliant courses. I'm new to this domain and have little idea about it. I have looked at an authoring tool, Articulate, which my customer wants me to match. I understood the content creation, but I am trying to understand how I can export the result as a SCORM-compliant course. Along the way I learned about xAPI as well, and understood it to be a kind of enhanced SCORM.
Could anyone guide me through the following?
1) How can I create content in my custom authoring tool and export it as SCORM-compliant?
2) Is it better to use xAPI or SCORM?
3) How does the SCORM package communicate with my custom-made LMS?
4) I have also heard about LRS.
My custom authoring tool will be built in React, and the store will be MongoDB.
Any help would be greatly appreciated. Thank you!
That is a lot to take on, particularly all at once.
1) The SCORM spec is made up of multiple parts. There is a packaging portion and a runtime portion. The basics are that your package needs to be a zip file, and that zip needs to include specific files that indicate to the LMS what type of standard it is along with other metadata about the package. For SCORM this will be called an imsmanifest.xml file. For xAPI you are most likely going to use a cmi5.xml (see cmi5) or a tincan.xml file (what Articulate Storyline exports when it says "xAPI"). The other parts of the package will depend on what standard and version of that standard (for SCORM 1.2, 2004 2nd, 3rd, or 4th edition) you are targeting, realizing that different LMSs support different standards and different degrees of those standards.
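To make the packaging half concrete, here is a rough sketch of a SCORM 1.2 export step. JSZip is just one way to build the zip, and all identifiers, titles, and the index.html content are placeholders your exporter would fill in; a real manifest has to validate against the IMS CP / ADL schemas for whichever version you target.

```javascript
// Sketch: export a minimal SCORM 1.2 package from an authoring tool (Node.js).
// JSZip is one option for building the zip; identifiers, titles, and the
// index.html content are placeholders.
const JSZip = require("jszip");
const fs = require("fs");

function buildManifest(title) {
  // Minimal SCORM 1.2 imsmanifest.xml; a real one must validate against
  // the schemas for the version you target.
  return `<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.course" version="1.0"
    xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
    xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="org1">
    <organization identifier="org1">
      <title>${title}</title>
      <item identifier="item1" identifierref="res1">
        <title>${title}</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent" adlcp:scormtype="sco" href="index.html">
      <file href="index.html"/>
    </resource>
  </resources>
</manifest>`;
}

async function exportScormPackage(title, html) {
  const zip = new JSZip();
  zip.file("imsmanifest.xml", buildManifest(title)); // must live at the zip root
  zip.file("index.html", html);                      // the launchable SCO
  fs.writeFileSync("course.zip", await zip.generateAsync({ type: "nodebuffer" }));
}

exportScormPackage("My Course", "<html><body>Hello, SCORM</body></html>");
```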
Once you have a package constructed that will import, the content itself (usually an HTML file) will need to locate the JavaScript API provided by the SCORM player (from the LMS) and make specific calls depending on what the content needs to store or read; this is the runtime portion. The calls will again depend on the standard and version. For xAPI-based packages (either tincan.xml packages or cmi5 packages) the content will communicate directly with the LRS based on the information provided on the URL at launch time (there is no built-in JavaScript API).
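The discovery part is a well-known pattern for SCORM 1.2: climb the frame hierarchy until you find the `API` object the LMS exposed (SCORM 2004 names it `API_1484_11` and renames the calls to Initialize/SetValue/Commit/Terminate). A minimal sketch:

```javascript
// Sketch: the SCORM 1.2 runtime handshake from inside the content.
// The LMS exposes an object named "API" somewhere up the frame/opener chain.
function findAPI(win) {
  let hops = 0;
  while (!win.API && win.parent && win.parent !== win && hops++ < 10) {
    win = win.parent;
  }
  return win.API || (win.opener ? findAPI(win.opener) : null);
}

const api = findAPI(window);
if (api) {
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSSetValue("cmi.core.score.raw", "85");
  api.LMSCommit("");  // ask the LMS to persist what we set
  api.LMSFinish("");  // end the session
}
```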
2) This entirely depends on what your customer base looks like and the types of data that you intend to capture. SCORM is a more mature landscape, has wider adoption, and is more heavily specified; if the information you need to capture fits into its limited information model, then it is still an excellent choice. If you need significant data portability, and/or the information you need to capture goes beyond compliance data (pass/fail, completion, and score) and/or interaction data (questions + answers), then you should consider xAPI, specifically via cmi5.
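For a feel of the xAPI side: tracking data is just JSON "statements" (actor/verb/object) POSTed to an LRS. A sketch, with a placeholder endpoint and credentials (in tincan/cmi5 launches these arrive via the launch URL's query parameters):

```javascript
// Sketch: sending one xAPI statement to an LRS. Endpoint and Authorization
// values are placeholders.
const statement = {
  actor:  { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb:   { id: "http://adlnet.gov/expapi/verbs/completed",
            display: { "en-US": "completed" } },
  object: { id: "http://example.com/courses/intro",
            definition: { name: { "en-US": "Intro Course" } } },
  result: { success: true, score: { scaled: 0.85 } }
};

fetch("https://lrs.example.com/xapi/statements", {    // placeholder LRS
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",              // required by the spec
    "Authorization": "Basic <credentials-from-launch>"
  },
  body: JSON.stringify(statement)
});
```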
3) The LMS must provide a JavaScript API (specified by the SCORM runtime) which the content will use as its interface. The storage/retrieval of data is implementation specific for the LMS beyond what is included in the specification for the JavaScript API.
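Seen from the LMS side, "provide a JavaScript API" just means making an object with the spec-defined names visible to the content's frame before launch; what happens behind those functions (e.g. persisting to your MongoDB backend) is entirely up to you. A bare-bones sketch, with a placeholder commit endpoint:

```javascript
// Sketch: the object a SCORM 1.2 player could expose to the content frame.
// The eight function names are fixed by the spec; the persistence behind
// them (here, a placeholder route that could write to MongoDB) is yours.
const cmiData = {}; // in a real LMS, preloaded from the learner's record

window.API = {
  LMSInitialize:   () => "true",
  LMSFinish:       () => "true",
  LMSGetValue:     (key) => cmiData[key] || "",
  LMSSetValue:     (key, value) => { cmiData[key] = String(value); return "true"; },
  LMSCommit:       () => {
    fetch("/api/scorm/commit", {          // placeholder backend route
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(cmiData)
    });
    return "true";
  },
  LMSGetLastError:   () => "0",
  LMSGetErrorString: (code) => "No error",
  LMSGetDiagnostic:  (code) => ""
};
```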
4) You didn't really include a question here.
I would suggest familiarizing yourself with the two sets of standards via http://scorm.com and http://xapi.com. And although it is a plug for my company's product, you may want to consider the Rustici Driver as it is a product (library) specifically designed to make it easy for an authoring tool to export content as SCORM 1.2, 2004, AICC, cmi5 or Tin Can (the latter two being xAPI). Once you have your tool up and running with minimal standards support you should consider testing it on Rustici's SCORM Cloud (it is free for this purpose), see http://cloud.scorm.com.
The format is huge, and there are no quick reference guides. Different authoring tools also support SCORM to different depths. You should probably start with this document.
Sounds like you're talking about designing editable content, and the content "framework" itself.
This is a massive effort, and massive ongoing support! That said, people do it.
Having built a CMS system supporting many subject matters, I had to divide and conquer this task.
A few ways I'd think to digest this beast: data, data, data.
- Requirements on activities (interaction types)
- Design (static/dynamic) of these interactions
The view/facade doing the displaying can change; tech moves at the speed of light. You need to come up with a super solid data model.
I'd think about how these can be generic, and how they can be extended to meet the customer's goals/needs. It all depends on how much customization (if any) can happen.
I'd start mapping all this to SCORM CMI object-level calls: scoring, progress, interactions, objectives, etc.
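For instance (a made-up record shape, just to illustrate), a generic interaction in your MongoDB store might carry everything needed to emit those CMI calls later:

```javascript
// Hypothetical authoring-time record for one interaction, as it might sit
// in MongoDB. Field names are invented; the comments show the SCORM 1.2
// runtime elements each would eventually feed.
const interaction = {
  _id: "q-001",
  type: "choice",                      // -> cmi.interactions.n.type
  prompt: "Which file must sit at the root of a SCORM zip?",
  choices: ["index.html", "imsmanifest.xml", "tincan.xml"],
  correctResponse: "imsmanifest.xml",  // -> cmi.interactions.n.correct_responses.0.pattern
  weighting: 1,                        // -> cmi.interactions.n.weighting
  objectiveId: "obj-packaging"         // -> cmi.interactions.n.objectives.0.id
};
```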
Get yourself a wicked SCORM content API library, or write one yourself. You'll be re-using a lot of these calls; there's no sense baking them into all your interactions.
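A hypothetical sketch of what such a wrapper might look like against the SCORM 1.2 calls (the class and method names here are made up):

```javascript
// Hypothetical wrapper so individual interactions never touch the raw API.
// Assumes a located SCORM 1.2 API object (see the discovery pattern in the
// earlier answer).
class ScormTracker {
  constructor(api) {
    this.api = api;
    this.n = 0;                       // next free interaction index
    this.api.LMSInitialize("");
  }
  score(raw, max = 100) {
    this.api.LMSSetValue("cmi.core.score.raw", String(raw));
    this.api.LMSSetValue("cmi.core.score.max", String(max));
  }
  interaction(id, type, response, result) {
    const i = this.n++;
    this.api.LMSSetValue(`cmi.interactions.${i}.id`, id);
    this.api.LMSSetValue(`cmi.interactions.${i}.type`, type);               // e.g. "choice"
    this.api.LMSSetValue(`cmi.interactions.${i}.student_response`, response);
    this.api.LMSSetValue(`cmi.interactions.${i}.result`, result);           // "correct"/"wrong"
    this.api.LMSCommit("");
  }
  complete() {
    this.api.LMSSetValue("cmi.core.lesson_status", "completed");
    this.api.LMSCommit("");
    this.api.LMSFinish("");
  }
}
```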
Get up to speed on SCORM packaging; much of this has to be defined at author time. There's lots of reading, and a lot of features you need to pick through to see whether your customers even use them. Don't develop in places that have 0.1% market need; the low-hanging fruit gets you to market.
Surround yourself with passionate great people. You'll need them.
As far as the standards go, it's all about portability. SCORM works directly with an LMS, if that's where your customer goes. Others use an LRS, which the content is coded to work with, set at author time. You can even do both.
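If "both" is on the table, one hypothetical structure is a single tracking call site with one backend per standard, selected when the course is published. Everything here (names, endpoints) is illustrative only:

```javascript
// Hypothetical shape for targeting both standards: one tracking interface,
// one backend per standard, chosen at publish time.
class ScormBackend {
  constructor(api) { this.api = api; this.api.LMSInitialize(""); }
  completed(score) {
    this.api.LMSSetValue("cmi.core.score.raw", String(score));
    this.api.LMSSetValue("cmi.core.lesson_status", "completed");
    this.api.LMSCommit("");
  }
}

class LrsBackend {
  constructor(endpoint, auth, actor) { Object.assign(this, { endpoint, auth, actor }); }
  completed(score) {
    return fetch(`${this.endpoint}/statements`, {
      method: "POST",
      headers: { "Content-Type": "application/json",
                 "X-Experience-API-Version": "1.0.3",
                 "Authorization": this.auth },
      body: JSON.stringify({
        actor: this.actor,
        verb: { id: "http://adlnet.gov/expapi/verbs/completed" },
        object: { id: "http://example.com/course" },
        result: { score: { raw: score } }
      })
    });
  }
}

// The publish step bakes in whichever backend the chosen standard needs:
//   const tracker = new ScormBackend(findAPI(window));     // SCORM export
//   const tracker = new LrsBackend(lrsUrl, token, actor);  // xAPI export
//   tracker.completed(85);
```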
Aside from React and MongoDB, you'll need something that can do the lift and shift of all this content.
I want to create and train my own model. If creating my own is not possible, is there any way to train other models (YOLO, MobileNet, COCO)?
There are some requirements:
- I know only JS (I tried Python; no, I can't continue with it. Python may be the best language, but I can't, sorry)
- Performance: at least 24 FPS, i.e. real-time detection
- Freedom over my own dataset (e.g. files like /dogs/pitbull/01.png)
I tried Python, but my 10 years of experience with JS just won't let me work in it.
Thanks to everyone for the help!
This answer is written by an IBMer.
If you want to build a model like the one described (image classification/object detection) without having to deal with Python, and you want to use it with JavaScript in a browser, you can try the tooling available at https://cloud.annotations.ai/. It works with an IBM Cloud account, but you can stay on the free tier; you just need to register, at least for your first experiments.
At https://github.com/cloud-annotations you'll find code and boilerplate for using your model on different platforms.
It is not an advanced tool, but it lets you get hands-on with the topic.
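Since you want to stay in JS: TensorFlow.js also ships pre-trained detection models that run in the browser today. A sketch with the off-the-shelf coco-ssd model is below; detecting your own classes (e.g. dog breeds) requires training and exporting a custom model first, which is what the annotation tooling above helps with.

```javascript
// Browser sketch: real-time detection in pure JS with TensorFlow.js and
// the pre-trained coco-ssd model. Assumes a <video id="webcam"> element
// already streaming from getUserMedia.
import "@tensorflow/tfjs";
import * as cocoSsd from "@tensorflow-models/coco-ssd";

async function run() {
  const video = document.getElementById("webcam");
  const model = await cocoSsd.load();

  async function loop() {
    const predictions = await model.detect(video);
    // each prediction: { bbox: [x, y, w, h], class: "dog", score: 0.97 }
    console.log(predictions);
    requestAnimationFrame(loop); // rough real-time loop on the WebGL backend
  }
  loop();
}

run();
```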
I'll try to explain as briefly as possible:
A C# Windows application for managing categories and descriptions of files.
Windows Forms - for the user interface
A library I want to keep for future use - I have nice algorithms for tasks involving XML, files, and strings. In this case they serve the Windows Forms app, but I don't want to keep them in the Form classes. I want to have them as a separate library with namespaces and classes in it. But I don't know what type of project, or addition to the whole VS "Solution", that has to be.
Windows Service - gets notified of file changes and updates the same DB the Windows Forms app is using.
LINQ to SQL - for the data access
WCF - I am just throwing that in here, because it seems that I need to use it (answer from a previous related topic): https://stackoverflow.com/a/15998122/1356685
So... yeah... architecture, architecture. Any guideline for a good architecture in my case is welcome. Now, I know in these conversations people start throwing around terms like "business logic", "persistence layer", "model layer" and whatnot. However, I don't quite understand them, so please be specific.
Thanks in advance for the help!
Microsoft has a pretty extensive application architecture guide and their patterns and practices website has a lot of information and code samples showing you how to structure applications.
As for your 'library to be saved for future usage'... you can create a C# Class Library project and add it as a Reference in whatever applications you'd like to use it in.
Preface: I have broad, college-level knowledge of a handful of languages (C++, VB, C#, Java, many web languages), so go with whichever you like.
I want to make an Android app that compares numbers, but in order to do that I need a database. I'm a one-man team, and the numbers get updated biweekly, so I want to grab them off a wiki that gets updated as well.
So my question is: how can I access information from a website using one of the languages above?
What I understand the problem to be: some entity generates a data set (in this case, numbers) every other week, and you need to download that data set for processing (e.g. sorting).
Ideally, the web site maintaining the wiki would provide a Service, like a RESTful interface, to easily gather the data. If that were the case, I'd go with any language that provides easy manipulation of HTTP request & response, and makes your data manipulation easy. As a previous poster said, Java would work well.
If you are stuck with the wiki page, you have a couple of options. You can parse the HTML your browser receives (Perl comes to mind as a decent language for that). Or you can use tools built for that purpose such as the aforementioned Jsoup.
Your question also mentions some implementation details such as needing a database. Evidently, there isn't enough contextual information for me to know whether that's optimal, so I won't address this aspect of the problem.
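If you do end up going the JavaScript route (you said web languages are on the table), the general fetch-and-parse flow might look like the sketch below. It assumes Node 18+ for the built-in fetch and uses cheerio as the HTML parser; the wiki URL and CSS selector are placeholders for the real page.

```javascript
// Sketch of the fetch-and-parse flow in Node.js (18+, for built-in fetch),
// with cheerio for jQuery-style HTML traversal. URL and selector are
// placeholders.
const cheerio = require("cheerio");

async function scrapeNumbers() {
  const res = await fetch("https://example-wiki.org/wiki/NumbersPage");
  const $ = cheerio.load(await res.text());

  const rows = [];
  $("table.wikitable tr").each((_, tr) => {
    const cells = $(tr).find("td").map((_, td) => $(td).text().trim()).get();
    if (cells.length) rows.push(cells);
  });
  return rows; // e.g. insert into your database on the biweekly schedule
}

scrapeNumbers().then(console.log);
```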
http://jsoup.org/ is a great Java tool for accessing content on HTML pages.
Consider https://scraperwiki.com/ - it's a site where users can contribute scrapers. It's free as long as you let your scraper be public. The results of your scraper are exposed as csv and JSON.
If you don't know what a "scraper" is, google "screen scraping" - it's a long and frustrating tradition for coders, who have dealt with the same problem you have since the beginning of networked computing.
You could check out http://web-harvest.sourceforge.net/
For Python, BeautifulSoup is one of the most tolerant HTML parsers out there. The documentation also lists similar libraries in Ruby and Java, so you'll probably find something relevant there.
I'm communicating with a logic analyzer (HP 1660A) over RS-232. I issue a command which tells the analyzer to print-screen its display and send it over to the controller (my PC) through serial communication. I'm saving the result (which is usually about 25 kB) to my computer, and I would like to view it as a TIFF or another format. The problem is that the response from the analyzer comes in PCL format, therefore suitable to be sent to a printer and printed directly, but not to be opened as an image. I have tried a few PCL-to-image converters to do the job; I found one which does it properly, but I've only used the trial version and I am reluctant to purchase it. That's the background of my labour. I would appreciate any kind of help: a reference to the commands in PCL 1, and what I should do in order to extract the data from the PCL file and format it properly. I have no experience with PCL or image processing whatsoever, so please give me a hand here. Thank you.
P.S. I've obtained the PCL file from the analyzer in both C# and MATLAB... I have one slight problem in C# with the serial port control: some images end up with uninterpreted characters when run through the above converters. I mention all this because I need an algorithm or some indications, no matter the programming language, so please feel free to post.
PCL is complex to read. There are only a handful of tools out there that do a good job of this. We have lots of PCL expertise and still often look to others to supply conversion to PDF and other formats. If the PCL is quite simple, that is, just text, a few fonts, and a graphic or two, a couple of regex commands could deal with the extraction of the text, and then you could mock up a new document using whatever tools you wish.
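To give a flavour of that regex approach, here is a rough sketch (in JavaScript, since any language will do per the question) that strips the common escape-sequence forms from a simple, text-only PCL stream. Anything carrying binary raster data or macros, which a print screen will, needs a real converter rather than this.

```javascript
// Very rough sketch: strip PCL escape sequences to recover printable text
// from a simple, text-only stream. Handles two-character sequences
// (ESC + one char) and parameterized ones (ESC, char in '!'..'/', optional
// group char, value bytes, uppercase terminator). It will NOT cope with
// embedded binary raster data or macros.
const fs = require("fs");

function stripPcl(buf) {
  return buf.toString("latin1")
    // parameterized sequences, e.g. ESC &a10L or ESC (s3B
    .replace(/\x1b[\x21-\x2f][\x60-\x7e]?[\x20-\x3f\x60-\x7e]*[\x40-\x5e]?/g, "")
    // two-character sequences, e.g. ESC E (printer reset)
    .replace(/\x1b[\x30-\x7e]/g, "")
    // remaining control characters except CR/LF
    .replace(/[\x00-\x09\x0b\x0c\x0e-\x1f]/g, "");
}

console.log(stripPcl(fs.readFileSync("screen.pcl")));
```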
Looking at these files on Stack Overflow might be tough. If you can get them on an FTP site and post a link, I can take a quick look and post my findings/thoughts here. The other option is to look to an outside tool. There are a few we've had success with. Our needs are broad, so I've settled on the one that works best with many different PCL streams (some PCL coding is better than others). As you are dealing with a known quantity of PCL, you may have a few options. Here are a few tools we've used and had some success with (in order of usefulness to us):
PCLWorks by PageTech (they have a GUI viewer and complete SDK)
VeryPDF PCL Converter (command line tool)
SwiftView
There are others, and even an open-source variant of Ghostscript that handles PCL: GhostPCL (we've never had much luck with it, as the PCL we use often contains very custom fonts, symbol sets, and tons of macros, which seem to choke it).
EDIT: Most recently we've been working with LincPDF (http://www.lincolnco.com/). This is also an excellent product, which has one big benefit: deployment is simple. Some of the other tools have complex software installations. This solution is very easy for us to deploy as a feature in an application. It's also faster than any tool we've tested to date (at least with the PCL that we generate from our apps, which is quite complex, as it includes specialized fonts and macros).
According to the spec sheet for the HP 1660 series (PDF), it can send TIFF, PCX, and PostScript.
Wouldn't it be easier to use TIFF?
The project was put on hold for a while, but I would like to offer a complete and usable solution.
@Adrian
You can save the image to a floppy disk; I've done that, saved it as TIFF, and everything worked fine. Unfortunately, it sends only PCL through RS-232. The idea of saving the print screen over serial communication was to avoid wearing out the floppy drive, which the device uses in order to boot.
@Douglas
Thank you for your elaborate answer. I'll take a look at the tools you mentioned; however, my desire is to offer a complete front-end solution which directly yields the graphic. I've put some files from my tests here so you can see the complexity of the PCL constructions. Do you have any knowledge of a possible API that I could integrate into my application, which can parse the file and interpret the PCL?
Regards,
Cosmin
We capture the serial input via a serial spooler that watches COM1:; it's called SSpool.exe. It redirects the PCL as input to PCLXForm, which converts it into any raster format (TIFF, JPG, PDF, BMP, etc.). However, we can also extract the text during the conversion, and we can extract individual raster objects from the PCL for re-arrangement in the downstream application. Our pricing model is positioned for licensees that need to convert up to 50,000 pages of invoices into indexed PDFs per month. However, this type of application normally requires a custom license in order to get our pricing down to the level required. To do so, we often have to restrict our product to convert unlimited files, but only up to the 20th page within any one PCL print file. That provides enough page volume and gives us the ability to reduce the pricing per unit. To demo, you would need the PCLTool SDK.
I'm implementing an Apache 2.0.x module in C, to interface with an existing product we have. I need to handle FORM data, most likely using POST but I want to handle the GET case as well.
Nick Kew's Apache Modules book has a section on handling form data. It provides code examples for POST and GET, which return an apr_hash_t of the key+value pairs in the form. parse_form_from_POST marshals the bucket brigade and flattens it into a buffer, while parse_form_from_GET can simply reference the URL. Both routines rely on a parse_form_from_string routine to walk through each delimited field and extract the information into the hash table.
That would be fine, but it seems like there should be an easier way to do this than adding a couple hundred lines of code to my module. Is there an existing module or routines within apache, apr, or apr-util to extract the field names and associated data from a GET or POST FORM into a structure which C code can more easily access? I cannot find anything relevant, but this seems like a common need for which there should be a solution.
I switched to G-WAN, which offers a transparent ANSI C script interface for GET and POST forms (and many other goodies, like charts, GIF I/O, etc.).
A couple of AJAX examples are available on the G-WAN developer page.
Hope it helps!
While, on its surface, this may seem common, CGI-style content handlers in C on Apache are pretty rare. Most people just use CGI, FastCGI, or one of the myriad frameworks such as mod_perl.
Most of the C Apache modules that I've written are aimed at modifying particular behaviors of the web server in specific, targeted ways that apply to every request.
If it's at all possible to write your handler outside of an apache module, I would encourage you to pursue that strategy.
I have not yet tried any solution, since I found this SO question as a result of my own frustration with the example in the "Apache Modules" book as well. But here's what I've found, so far. I will update this answer when I have researched more.
Luckily, it looks like this is now a solved problem in Apache 2.4, using the ap_parse_form_data function.
No idea how well this works compared to your example, but here is a much more concise read_post function.
It is also possible that mod_form could be of value.