Is there a definite list of MIME types? - mime-types

I understand that you can configure servers to associate a specific extension with a particular MIME type. Who defines those MIME types? Can I create my own custom type (like xyz/dfds) and if so, what's the point of it? Say I decide to do so, how will I tell browsers like Chrome etc how to interpret it?

The authoritative list of MIME types is the IANA Media Types registry at http://www.iana.org/assignments/media-types/media-types.xhtml, but it is by no means exhaustive.
As you surmise, there are valid reasons for implementors to use unofficial types that are not part of the registry, including, but not limited to:
limited private use
specialized needs not adequately addressed by IANA
opposition to, or alternatives to, IANA-specified standard types
testing or preparation for official IANA standardization
A prominent example is the plethora of different MIME type identifiers applied to the PKZip archive format; you see application/zip, application/x-zip-compressed, and just plain application/octet-stream, among the more popular ones.
For your specific question about browser support for ad-hoc types, a common approach is to create a plug-in for the browser(s) you want to support. Do note that MIME is not limited to the WWW; it originated with email, and has uses in many other applications as well.
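As an illustration of the server-side association between extensions and types, here is a hypothetical mapping in Node.js. The type application/x-acme-data and the .acme extension are invented for this sketch; a browser that does not recognize a type will typically fall back to offering the file as a download.

```javascript
// Hypothetical extension-to-MIME-type table.
// "application/zip" is IANA-registered; "application/x-acme-data" is a
// made-up, unregistered type of the kind discussed above.
const mimeByExtension = {
  ".zip": "application/zip",
  ".acme": "application/x-acme-data",
};

// Pick a Content-Type for a path, falling back to the generic binary type.
function contentTypeFor(path) {
  const i = path.lastIndexOf(".");
  const ext = i >= 0 ? path.slice(i) : "";
  return mimeByExtension[ext] || "application/octet-stream";
}

console.log(contentTypeFor("report.acme")); // the custom type
console.log(contentTypeFor("notes.txt"));   // unknown extension: generic fallback
```

A real server (Apache, nginx, etc.) does the same lookup from its own configuration files before emitting the Content-Type header.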

Related

how to use webcal protocol

I want to create a file, that will be accessed by using the webcal:// protocol.
The final goal is to let the user subscribe to a shared calendar, and I know that this can be done in a million different ways, and that webcal has disadvantages, but please treat this question as a technical question about webcal and don't offer alternatives.
What should be its content, if I want it to allow a user to subscribe to a shared calendar?
How should I host such a file? Most of the servers I know support only http/s queries.
Thx!
Please refer to the RFC 5545 iCalendar (ICS) specification at https://www.rfc-editor.org/rfc/rfc5545 for the format of the contents of .ics calendar files.
Note that webcal is an unofficial Apple protocol for ICS calendar files. Google and other calendar providers use HTTPS for their shared calendars, which can be hosted on most servers; an ICS URL is literally just a file (or a dynamically generated one). If you insist on using the webcal protocol only, some calendar applications may not accept it, so I strongly suggest the official 'alternative'.
From page 5 of the specification
"The iCalendar format is suitable as an exchange format between
applications or systems. The format is defined in terms of a MIME
content type. This will enable the object to be exchanged using
several transports, including but not limited to SMTP, HTTP,....."
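To make the file contents concrete, a minimal calendar with a single event, following the required properties of RFC 5545, might look like the sketch below. All identifiers, dates, and names are invented for illustration; serve the file with Content-Type: text/calendar, and the webcal:// scheme in the subscription URL simply stands in for http(s)://.

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example Corp//Example Calendar//EN
BEGIN:VEVENT
UID:20240105T090000Z-0001@example.com
DTSTAMP:20240101T120000Z
DTSTART:20240105T090000Z
DTEND:20240105T100000Z
SUMMARY:Team meeting
END:VEVENT
END:VCALENDAR
```

Calendar clients re-fetch this URL periodically, which is what makes the "subscription" update.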

SCORM authoring tool

I'm starting a new project. The aim of the project is to create an e-authoring tool for building SCORM-compliant courses. I'm new to this domain and have little idea about it. I have looked at an authoring tool, Articulate, which my customer requires me to match. I understand the content creation, but I am trying to understand how I can export it as a SCORM-compliant course. Along the way I learned about xAPI as well, and understood it to be a kind of enhanced SCORM.
Could anyone guide me to understand this?
1) How can I create content in my custom authoring tool and export it as SCORM compliant?
2) Is it better to use xAPI or SCORM?
3) How does the SCORM package communicate with my custom-made LMS?
4) Heard about LRS.
My custom authoring tool will be made in React and the store will be MongoDB.
Any help would be greatly appreciated. Thank you!
That is a lot to take on, particularly all at once.
1) The SCORM spec is made up of multiple parts. There is a packaging portion and a runtime portion. The basics are that your package needs to be a zip file, and that zip needs to include specific files that indicate to the LMS what type of standard it is along with other metadata about the package. For SCORM this will be called an imsmanifest.xml file. For xAPI you are most likely going to use a cmi5.xml (see cmi5) or a tincan.xml file (what Articulate Storyline exports when it says "xAPI"). The other parts of the package will depend on what standard and version of that standard (for SCORM 1.2, 2004 2nd, 3rd, or 4th edition) you are targeting, realizing that different LMSs support different standards and different degrees of those standards.
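For a feel of the packaging portion, a skeletal SCORM 1.2 imsmanifest.xml might look like the following. All identifiers, titles, and file names here are invented; a real package typically also ships the SCORM control schema files alongside the manifest at the zip root.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="org1">
    <organization identifier="org1">
      <title>Example Course</title>
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent"
              adlcp:scormtype="sco" href="index.html">
      <file href="index.html"/>
    </resource>
  </resources>
</manifest>
```

The LMS reads this file at import time to learn the standard, the course structure, and which HTML file to launch.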
Once you have a package constructed that will import, the content itself (usually an HTML file) will need to locate the JavaScript API provided by the SCORM player (from the LMS) and make specific calls depending on what the content needs to store or read; this is the runtime portion. The calls will again depend on the standard and version. For xAPI-based packages (either tincan.xml packages or cmi5 packages) the content will communicate directly with the LRS based on the information provided on the URL at launch time (there is no built-in JavaScript API).
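The runtime portion can be sketched as follows for SCORM 1.2. The object name ("API") and the LMS* call names come from the SCORM 1.2 runtime specification; the window-walking discovery loop is the conventional pattern, simplified here for illustration.

```javascript
// Walk up the frame hierarchy looking for the LMS-provided API object.
// SCORM 1.2 exposes it as "API" on the launching window or an ancestor frame.
function findAPI(win) {
  let tries = 0;
  while (!win.API && win.parent && win.parent !== win && tries < 7) {
    win = win.parent;
    tries += 1;
  }
  return win.API || (win.opener ? findAPI(win.opener) : null);
}

// Inside a browser-launched SCO this runs against the real window object:
if (typeof window !== "undefined") {
  const API = findAPI(window);
  if (API && API.LMSInitialize("") === "true") {
    // Data model keys such as cmi.core.lesson_status are defined by the spec.
    API.LMSSetValue("cmi.core.lesson_status", "completed");
    API.LMSSetValue("cmi.core.score.raw", "85");
    API.LMSCommit("");
    API.LMSFinish("");
  }
}
```

SCORM 2004 uses the same pattern with different names (API_1484_11, Initialize, SetValue, Terminate).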
2) This entirely depends on what your customer base looks like and the types of data that you intend to capture. SCORM is a more mature landscape, has wider adoption, and is more heavily specified; if the information you need to capture fits into its limited information model, then it is still an excellent choice. If you need significant data portability and/or the information you need to capture goes beyond compliance data (pass/fail, complete, and score) and/or interaction data (questions + answers), then you should consider xAPI, specifically via cmi5.
3) The LMS must provide a JavaScript API (specified by the SCORM runtime) which the content will use as its interface. The storage/retrieval of data is implementation specific for the LMS beyond what is included in the specification for the JavaScript API.
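On the LMS side, the required interface can be sketched as a plain object attached to the launching window. This is a hypothetical, minimal SCORM 1.2 stub: the method names are from the spec, but the in-memory storage is purely illustrative, since persistence is implementation-specific.

```javascript
// Minimal sketch of the SCORM 1.2 API object an LMS must expose to content.
// All LMS* methods return SCORM's string booleans ("true"/"false").
function createScormApi() {
  const data = {}; // stand-in for the LMS's real persistence layer
  return {
    LMSInitialize: () => "true",
    LMSFinish: () => "true",
    LMSGetValue: (key) => data[key] ?? "",
    LMSSetValue: (key, value) => { data[key] = String(value); return "true"; },
    LMSCommit: () => "true",
    LMSGetLastError: () => "0",
    LMSGetErrorString: () => "No error",
    LMSGetDiagnostic: () => "",
  };
}

// The LMS attaches it where the content's discovery code will find it:
// window.API = createScormApi();
```

A real implementation also validates the data model keys and enforces the initialize/terminate state machine, which this sketch omits.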
4) You didn't really include a question here.
I would suggest familiarizing yourself with the two sets of standards via http://scorm.com and http://xapi.com. And although it is a plug for my company's product, you may want to consider the Rustici Driver as it is a product (library) specifically designed to make it easy for an authoring tool to export content as SCORM 1.2, 2004, AICC, cmi5 or Tin Can (the latter two being xAPI). Once you have your tool up and running with minimal standards support you should consider testing it on Rustici's SCORM Cloud (it is free for this purpose), see http://cloud.scorm.com.
The format is huge; there are no quick reference guides. And different authoring tools have different depths of SCORM support. You should probably start with this document
Sounds like you're talking about designing editable content, and the content "framework" itself.
This is a massive effort, and massive support! That said, people do it.
Having built a CMS system supporting many subject matters, I had to divide and conquer this task.
A few ways I'd think to digest this beast: data, data, data
Requirements on activities (interaction types)
Design (static/dynamic) of these interactions
The view/facade displaying them can change. Tech moves at the speed of light. You need to come up with a super solid data model.
I'd think about how these can be generic, and how they can be extended to meet the customer's goals/needs. It all depends on how much customization (if any) can happen.
I'd start mapping all this to SCORM CMI object-level calls: scoring, progress, interactions, objectives, etc.
Get yourself a wicked SCORM content API library, or write one yourself. You'll be re-using a lot of these calls; no sense baking them into all your interactions.
Get up to speed on SCORM packaging... much of this has to be defined at author time. Lots of reading, and a lot of features you need to pick through to see whether your customers even use them. Don't develop in places that have 0.1% market need. The low-hanging fruit gets you to market.
Surround yourself with passionate, great people. You'll need them.
As far as the standards go, it's all about portability. SCORM works directly with an LMS, if that's where your customer goes. Others use an LRS, which is coded to work with the one set at author time. You can even do both.
Aside from React and MongoDB, you'll need something that can do the lift and shift of all this content.

Usage for profile parameter for JSON-LD requests

The documentation of JSON-LD mentions that clients can provide a profile parameter in the Accept header to control the representation. It defines three default profiles for requesting compacted, expanded, or flattened JSON-LD documents. It does also say that
If the profile parameter is given, a server should return a document that honors the profiles in the list which are recognized by the server.
It does not, however, explain whether there are any specific rules the server should follow. Is it completely up to the server to decide what the behavior is for custom profile URIs? Are there any discussions on that subject?
Would the examples below be correct?
Example 1
The client requests with
Accept: application/ld+json;
profile="http://www.w3.org/ns/json-ld#compacted http://schema.org"
And the server returns a compacted document with http://schema.org as @context?
Example 2
The client requests with
Accept: application/ld+json; profile="http://schema.org"
And the server returns a compacted document with http://schema.org as @context?
The JSON-LD 1.0 spec defines profile in its IANA Considerations section. This defines the profile identifiers, such as compacted, that you identified above. It doesn't provide a way to specify a specific context to use, and the semantics of profile would make it difficult to know what is meant by a different profile URI, as there is no way (AFAIK) to register this meaning elsewhere.
That said, I think it would be useful to be able to specify a context to use for compacted or expanded, and if/when we support framing, a frame to use. I think this might take the form of a type-specific Accept parameter context and/or frame, which would be used to specify the requested context or frame to be used when serializing the document. However, as with other profiles, these are SHOULD, not MUST; a client needs to be able to cope with getting back a document that is not so serialized, perhaps using a local jsonld.js instance to re-encode the returned document. It might also be useful to recommend that the same parameters be used with Content-Type in the response, so the server can communicate the profile/context/frame used.
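To make the negotiation concrete, here is a hypothetical sketch of how a server might pull the profile list out of the Accept header before deciding how to serialize. The parsing is deliberately simplified (a real server should use a proper media-type parameter parser), and the "honor what you recognize" behavior reflects the SHOULD in the spec.

```javascript
// Extract the space-separated profile URIs from an Accept header such as:
//   application/ld+json;profile="http://www.w3.org/ns/json-ld#compacted"
// Simplified regex-based parsing, for illustration only.
function parseProfiles(acceptHeader) {
  const match = /profile="([^"]*)"/.exec(acceptHeader || "");
  return match ? match[1].trim().split(/\s+/) : [];
}

const COMPACTED = "http://www.w3.org/ns/json-ld#compacted";

// Honor only the profiles the server recognizes (a SHOULD, not a MUST);
// unrecognized URIs such as http://schema.org are simply ignored here.
function chooseSerialization(acceptHeader) {
  const profiles = parseProfiles(acceptHeader);
  return profiles.includes(COMPACTED) ? "compacted" : "expanded";
}
```

The chosen form would then be produced with a JSON-LD processor (e.g. jsonld.js compact/expand) before responding.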
Please consider raising an issue at https://github.com/json-ld/json-ld.org/issues, as we're starting to look at new Community Group (i.e., not W3C Recommendations) drafts of the specs to address long outstanding community feature requests.

ASN.1 encoded file

I need to create an ASN.1 BER-encoded file with multiple records. I've been searching for a tool (OSS, asn1c, etc.), but I can't find one that suits me, with a full example of how multiple records can be encoded in one file.
Does anyone know a good tool?
Thanks
The tools won't really help you design your file-format, or protocol; that is a manual task that you must perform. You will need to design the rules of how data is stored and in what form each element will take.
The tools will help with implementation, allowing you to take your protocol definition and generating C or C++ code that is capable of decoding and encoding files that conform to that protocol.
The company I work for uses OSS Nokalva, which is the best, but expensive. I have also used asn1c, for personal projects, with success.
You can use asn1c and define multiple records with:
MultipleRecords ::= SEQUENCE OF SingleRecord
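To illustrate what "multiple records in one file" means at the byte level, here is a hypothetical sketch of hand-encoding such a SEQUENCE OF as nested BER TLVs. The record shape (two INTEGERs) and the short-form-length-only encoder are invented for illustration; a real system would use the code generated by asn1c or OSS instead.

```javascript
// Minimal BER TLV builder: tag byte, short-form length byte, then the value.
function tlv(tag, value) {
  if (value.length > 127) throw new Error("short-form lengths only in this sketch");
  return Buffer.concat([Buffer.from([tag, value.length]), value]);
}

const INTEGER = 0x02;
const SEQUENCE = 0x30; // constructed SEQUENCE / SEQUENCE OF

// Single-byte non-negative integers only, for illustration.
function encodeInt(n) {
  return tlv(INTEGER, Buffer.from([n]));
}

// One record: SingleRecord ::= SEQUENCE { a INTEGER, b INTEGER }
function encodeRecord(a, b) {
  return tlv(SEQUENCE, Buffer.concat([encodeInt(a), encodeInt(b)]));
}

// MultipleRecords ::= SEQUENCE OF SingleRecord is just one outer SEQUENCE
// wrapping the concatenated record TLVs; that byte string is the file content.
const records = Buffer.concat([encodeRecord(1, 2), encodeRecord(3, 4)]);
const file = tlv(SEQUENCE, records);
console.log(file.toString("hex"));
```

BER also permits simply concatenating top-level record TLVs without an outer wrapper; which layout you choose is part of the file-format design the tools leave to you.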

What is necessary from a language implementation point of view to implement type providers like in F# 3.0?

F# 3.0 adds type providers, which make it basically unnecessary to manually write or generate mappings between a DB (or another data provider) and the language/type system, because the language can query structural information from the database itself directly with type providers.
What is necessary from a language implementation point of view to add such a feature to a language?
Does it require a fully pluggable type system? Or is it more like some hidden code generator integrated into the compiler?
What's necessary to implement a new type provider for F#?
Technically, you can think of F# type providers as "plugins" for the compiler. Instead of generating mappings, the compiler asks the type provider "What types do you know?" or "Do you know this type?" (depending on the context).
The plugin (type provider) answers and specifies what the type looks like (abstractly, without actually generating it). The compiler then works with this information and later asks the type provider to provide code that should be used when compiling code that uses these "fake" types. It is also possible to actually generate code (some samples do this, because they just use tools that are already there).
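That dialogue can be sketched language-agnostically (here in JavaScript). Everything below is illustrative: the method names, the schema object, and the "DB" are invented stand-ins, not the actual F# ITypeProvider API.

```javascript
// Hypothetical model of the compiler/provider dialogue described above.
const dbTypeProvider = {
  // "What types do you know?" — answered lazily from an external schema.
  getTypes: (schema) => Object.keys(schema.tables),
  // "What does this type look like?" — shape only; nothing is generated yet.
  describeType: (schema, name) => ({
    name,
    properties: schema.tables[name].columns, // e.g. columns become properties
  }),
};

// Stand-in for structural information queried from a live database.
const schema = { tables: { Customer: { columns: ["Id", "Name"] } } };

console.log(dbTypeProvider.getTypes(schema));
console.log(dbTypeProvider.describeType(schema, "Customer").properties);
```

The compiler type-checks against these described shapes, then later asks the provider for the code to emit wherever the "fake" types are used.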
So yes, you can implement your own type provider. I said a few things about it in the GOTO Copenhagen talk which has been recorded and Don Syme said a few things in his earlier talks (I didn't see his BUILD talk yet).
The API docs show that the 'type provider interface' is surprisingly small, see ITypeProvider and IProvidedNamespace, as well as the whole API namespace it is in. Tomas' answer gives an overview, and the API docs show the specific interfaces.
As this page exists, it will probably be possible. But you're referring to things that are currently in beta, so things might change.
As I understand the available documentation, the inferred types will be strongly typed, so I assume it's more a compiler thing than a language thing ( besides maybe some syntax ).
Looking at the MSDN docs for ITypeProvider and IProvidedNamespace, I can't see documentation for the actual methods you would use to define the types, ProvidedTypeDefinition(x,y,z) and ProvidedPropertyDefinition(x,y,z); perhaps it's this: http://fsharp3sample.codeplex.com/SourceControl/changeset/view/8670#195262
I can see from the examples that you can specify that a provided type derives from a known base class, but is it possible to specify that a provided type implements one or more existing interfaces? Seems like something you would want to be able to do, provided you can also provide the method body for the implementation.
