I'm working on a Hydra documentation generator for Golang. I've been using the demo as an example, and I was wondering about the ambiguity of some Hydra terms.
What's the difference between hydra:title and rdfs:label? rdfs:label is used in vocab:User, but hydra:title is used for Resource and Collection, as well as in properties.
Speaking of Resource and Collection, why are they re-described in this ApiDocumentation? Shouldn't they be part of hydra/core?
In many properties, there's both a hydra:title + hydra:description and label + description that contain the same information. Why is that? Can I ignore one and be fine?
Apologies in advance if I failed to spot that in the spec, but I've only recently gained an interest in hypermedia APIs and many concepts are still a bit hazy.
• What's the difference between hydra:title and rdfs:label?
rdfs:label is used for the vocabulary definition itself. hydra:title is used to override that label in Hydra clients (which use it, for instance, to render forms). This was the first issue opened when Hydra's further development was moved into a W3C Community Group: Hydra ISSUE-1
• Speaking of Resource and Collection, why are they re-described in this ApiDocumentation? Shouldn't they be part of hydra/core?
They are part of the Hydra Core Vocabulary. As such, it isn't necessary to re-describe them. It was an implementation shortcut I took.
• In many properties, there's both a hydra:title + hydra:description and label + description that contain the same information. Why is that? Can I ignore one and be fine?
See the answer to the first question. In general, you should prefer the Hydra versions in a Hydra-specific tool, but fall back to the rdfs properties when the Hydra ones are absent.
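For a documentation generator, that preference order is just a fallback lookup. A minimal sketch, shown here in C# (the dictionary-based node shape is an assumption, not part of any Hydra library; the same one-liner translates directly to Go):

using System.Collections.Generic;

static class HydraLabels
{
    // Prefer hydra:title, fall back to rdfs:label. The node is assumed to
    // be a flattened JSON-LD node already parsed into a key/value map.
    public static string GetLabel(IDictionary<string, string> node) =>
        node.TryGetValue("hydra:title", out var title) ? title
        : node.TryGetValue("rdfs:label", out var label) ? label
        : null;
}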
By the way, there's a dedicated mailing list for Hydra. Join the W3C Community Group if you are interested in influencing the future development of Hydra. You should definitely announce your documentation generator there as well.
I’m trying to use the Elsapy module to extract the abstracts of documents on certain topics.
I am able to do this but, unfortunately, only for a fraction of the documents found.
For example, a particular search returns 16 documents but I am only able to extract the information (e.g. abstracts) from 4 of them.
Upon further inspection, it seems that the documents I can't get the abstracts of:
-Don't have a PII
-Have DOIs that don't work.
I have tested the DOIs in the article retrieval interactive API guide
-The ones that returned abstracts worked fine
-The other ones return the error:
RESOURCE_NOT_FOUND: The resource specified cannot be found.
Even though I have found the original articles and checked their DOI is correct.
An example of one that didn’t work is:
Sengupta, N. K., & Sibley, C. G. (2019). The political attitudes and subjective wellbeing of the one percent. Journal of Happiness Studies, 20(7), 2125-2140. doi:10.1007/s10902-018-0038-4
I have found that the ones that do ‘work’ all have the general form:
10.1016/j.ssmph.2019.100471
10.1016/j.apacoust.2015.03.004
Please let me know if you know why this is and how I can fix it.
Thanks for your help :)
The Article Retrieval API works for Elsevier content hosted on sciencedirect.com; all Elsevier articles have PII identifiers. The example DOI 10.1007/s10902-018-0038-4 does not work because it is published by Springer and, consequently, not available on ScienceDirect.
Kindly note that this is not a bug and everything is working as expected.
I'm writing a program that, given an OWL ontology, retrieves all the explanations for a query by using Pellet as reasoner.
To do that, the OWLAPI provides a class named HSTExplanationGenerator that implements the Hitting Set Tree algorithm to find all the explanations.
To create an instance of HSTExplanationGenerator, I need to supply a class that implements the TransactionAwareSingleExpGen interface; a class implementing this interface provides a method to compute a single explanation.
Now, OWLAPI provides two classes which implement this interface: BlackBoxExplanation and GlassBoxExplanation. I have read the code of the two classes. GlassBoxExplanation gets the explanation from Pellet, prunes it, and then converts it into a set of OWLAxiom. However, I found it hard to understand what BlackBoxExplanation does. The questions are: which one should I use? What are the main differences between these two classes?
GlassBoxExplanation is, as far as I can tell, provided by Pellet, not OWLAPI.
The main difference between a black box explanation and a glass box explanation is that the black box explanation cannot know the reasoner's internals - it is limited to what is available through the OWLReasoner interface. In this respect, the definition is no different from black box testing and white box testing in software engineering.
That said, you might want to use the owlexplanation project instead. It is based on laconic explanations, which are a more recent development in OWL entailment explanation than what is available in both OWLAPI and (old versions of) Pellet.
https://github.com/matthewhorridge/owlexplanation
Working on testing a React component, I was reading the docs and found scryRenderedDOMComponentsWithClass. I'm having trouble understanding this method because its name is unpronounceable, so I can't see how its naming maps to a mental model of what it's doing. (There are a number of related names, such as scryRenderedDOMComponentsWithTag.)
What does the scry part of this method name refer to? Scary? Scurry? What concept is this name trying to illustrate?
Short answer
"Scry" in this context just means "find all". See this comment on ReactTestUtils.scryRenderedComponentsWithClass. It's a single word, not an abbreviation, and it's pronounced like "cry" but with an "s" at the beginning.
Longer (and nerdier) answer
Elsewhere in that same file, you'll see a reference to DOM.scry:
/**
* Todo: Support the entire DOM.scry query syntax. For now, these simple
* utilities will suffice for testing purposes.
* @lends ReactTestUtils
*/
zpao explains in a comment on a GitHub issue:
That's a reference to an internal Facebook module. It's basically querySelectorAll with fallback behavior for handling old browsers and special cases. It is pretty unremarkable and doesn't actually translate super well here (except maybe a scryRenderedDOMComponentsWithQSA or something, but meh). We're working on improving the testing in other ways so I don't think there's anything we really want to do with this right now.
jimfb takes it a bit further in another GitHub issue, explaining that the name is a reference to Dungeons & Dragons:
Back in the day, we had a bunch of D&D fans on the team.
For reference:
http://www.dandwiki.com/wiki/SRD:Scrying
http://www.dandwiki.com/wiki/SRD3e:Scry_Skill
https://en.wikipedia.org/wiki/Scrying
Historically, we've used scry to indicate a helper that finds a set of results. As the framework matures, we should start choosing function names based on what the functions actually do instead of fantasy words that have very little meaning to the typical developer.
Though I would agree that the word has very little meaning to most, it's worth noting that "scry" is a real English word:
scry
[skrahy]
verb (used without object), scried, scrying.
to use divination to discover hidden knowledge or future events, especially by means of a crystal ball.
Interestingly, according to the data from Google's Ngram Viewer, it seems that the word fell out of normal usage in the early 19th century and then wallowed in obscurity until the 1980s, presumably after D&D gained popularity:
So I can't say I object to jimfb calling it a "fantasy word", especially considering the kind of imagery my imagination conjures up when I hear it.
I run a website that allows users to write blog posts. I would really like to summarize the written content and use it to fill the <meta name="description" .../> tag, for example.
What methods can I employ to automatically summarize/describe the contents of user generated content?
Are there any (preferably free) methods out there that have solved this problem?
(I've seen other websites just copy the first 100 or so words but this strikes me as a sub-optimal solution.)
Think of the task of summarization as a challenge to 'select the most important sentences' from the document.
The method described in The Automatic Creation of Literature Abstracts by H.P. Luhn (1958) describes a naive method that actually performs quite well. Try giving it a shot.
If your website is in Python, coding this algorithm using NLTK (the Natural Language Toolkit) is a fun task.
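To make the idea concrete before reaching for a toolkit, here is a minimal sketch of Luhn's approach (written in C# purely for illustration; the stopword list is a toy and the sentence splitter is naive): treat the document's most frequent content words as "significant" and keep the sentences that pack them most densely.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class LuhnSummarizer
{
    // A toy stopword list; a real one (e.g. NLTK's) is much longer.
    static readonly HashSet<string> Stopwords = new HashSet<string>
        { "the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "for" };

    public static IEnumerable<string> Summarize(string text, int sentenceCount = 2)
    {
        // Naive sentence split on terminal punctuation.
        var sentences = Regex.Split(text, @"(?<=[.!?])\s+")
                             .Where(s => s.Length > 0)
                             .ToList();

        // Document-wide frequency of content (non-stopword) words.
        var freq = new Dictionary<string, int>();
        foreach (var word in Tokenize(text).Where(w => !Stopwords.Contains(w)))
            freq[word] = freq.TryGetValue(word, out var n) ? n + 1 : 1;

        // Luhn's idea: the most frequent content words are "significant".
        var significant = new HashSet<string>(
            freq.OrderByDescending(kv => kv.Value).Take(10).Select(kv => kv.Key));

        // Score sentences by their density of significant words and return
        // the best ones in their original order.
        return sentences
            .Select((s, i) => new { Index = i, Text = s, Score = Density(s, significant) })
            .OrderByDescending(x => x.Score)
            .Take(sentenceCount)
            .OrderBy(x => x.Index)
            .Select(x => x.Text);
    }

    static double Density(string sentence, HashSet<string> significant)
    {
        var words = Tokenize(sentence).ToList();
        return words.Count == 0 ? 0 : words.Count(significant.Contains) / (double)words.Count;
    }

    static IEnumerable<string> Tokenize(string s) =>
        Regex.Matches(s, @"[A-Za-z']+").Cast<Match>().Select(m => m.Value.ToLowerInvariant());
}

Calling LuhnSummarizer.Summarize(postBody) would then give you a couple of candidate sentences for the meta description.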
Make it predictable.
From a user's perspective, simply using the first paragraph is not bad at all.
Any automation is bound to fall flat in some cases, so I suggest displaying
the first paragraph (perhaps truncated at some point) as the summary and offering the ability to override it with an optional field.
I might try using Mechanical Turk or any number of other crowdsourcing options.
Another item to check out is AutoSummary Semantic Analysis Engine, a SourceForge project.
Not a trivial task... You should look for articles or books on "extractive summarization"
A few starters could be:
Books:
Natural Language Processing with Python
Foundations of Statistical Natural Language Processing
Articles:
Language independent extractive summarization
Extractive summarization: how to identify the gist of a text
Extractive Summarization using Inter- and Intra- Event Relevance
Yahoo has a free API for this:
http://developer.yahoo.com/search/content/V1/termExtraction.html
Apple's patent 6424362 - Auto-summary of document content contains sample code which might be useful...
This borders on artificial intelligence so there's not going to be an "easy" solution out there, but there are products that target this problem.
Check out Copernic Summarizer, for one.
Noun phrases typically tend to be important elements of a sentence. Picking sentence(s) with a high density of noun phrases could yield a good summary. You could get noun phrases using a POS tagger.
For a good summary, it is desirable that it be a meaningful sentence; reading a broken sentence is slightly jarring.
Alternatively, when the author posts the article, the author can highlight what are the keywords that can be used in the description which can then be automatically put in the meta description tag.
So we are sure that we will be taking our product international and will eventually need to internationalize it. How much internationalizing would you recommend we do as we go along?
I guess in other words, is there any internationalization that is easy now but can be much worse if we let the code base mature and that won't slow us down very much if we choose to start doing it now?
Tech used: C#, WPF, WinForms
Prepare it now, before you write all the strings in the codebase itself.
Everything after now will be too late. It's now or never!
It's true that it is a bit of extra effort to prepare well now, but not doing it will end up being a lot more expensive.
If you won't follow all the guidelines in the links below, at least heed points 1, 2 and 7 of the summary, which are very cheap to do now and which cause the most pain afterwards, in my experience.
Check these guidelines and see for yourself why it's better to start now and get everything prepared.
Developing world ready applications
Best practices for developing world ready applications
Little extract:
Move all localizable resources to separate resource-only DLLs. Localizable resources include user interface elements such as strings, error messages, dialog boxes, menus, and embedded object resources. (Moving the resources to a DLL afterwards will be a pain)
Do not hardcode strings or user interface resources. (If you don't prepare, you know you will hardcode strings)
Do not put nonlocalizable resources into the resource-only DLLs. This causes confusion for translators.
Do not use composite strings that are built at run time from concatenated phrases. Composite strings are difficult to localize because they often assume an English grammatical order that does not apply to all languages. (After the interface design, changing phrases gets harder)
Avoid ambiguous constructs such as "Empty Folder" where the strings can be translated differently depending on the grammatical roles of the strings' components. For example, "empty" can be either a verb or an adjective, and this can lead to different translations in languages such as Italian or French. (Same issue)
Avoid using images and icons that contain text in your application. They are expensive to localize. (Use text rendered over the image)
Allow plenty of room for the length of strings to expand in the user interface. In some languages, phrases can require 50-75 percent more space. (Same issue, if you don't plan for it now, redesign is more expensive)
Use the System.Resources.ResourceManager class to retrieve resources based on culture (see the sketch after this list).
Use Microsoft Visual Studio .NET to create Windows Forms dialog boxes, so they can be localized using the Windows Forms Resource Editor (Winres.exe). Do not code Windows Forms dialog boxes by hand.
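To make the ResourceManager point concrete, here is a minimal sketch of a culture-aware lookup; the base name "MyApp.Strings" and the class are illustrative, assuming a Strings.resx compiled into the assembly:

using System.Globalization;
using System.Resources;

class Labels
{
    // Assumes a Strings.resx (plus Strings.de.resx, Strings.fr.resx, ...)
    // compiled into this assembly; the names are illustrative.
    static readonly ResourceManager Resources =
        new ResourceManager("MyApp.Strings", typeof(Labels).Assembly);

    // Returns the translation for the current UI culture, falling back to
    // the neutral (default) resources when no localized version exists.
    public static string Get(string key) =>
        Resources.GetString(key, CultureInfo.CurrentUICulture);
}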
IMHO, claiming something is going to happen "in a few years" literally translates to "we hope one day", which really means "never". Although I would still skim over various tutorials to make sure you don't make any horrendous mistakes. Doing correct internationalization support now will mean less work in the future, and once you get used to it, it won't have any real effect on today's productivity. But if you can measure the goal in years, maybe it's not worth doing at all right now.
I have worked on two projects that did internationalization: a C# ASP.NET (existed before I joined the project) app and a PHP app (homebrewed my own method using a free Internationalization control and my own management app).
You should store all the text (labels, button text, etc.) as data inside a database. Reference these with keys (I prefer to use the first four words, made uppercase, spaces converted to underscores, and non-alphanumerics stripped out), and when you have a duplicate, append a number to the end. The benefit of this key method is that the programmer has a pretty strong understanding of the content of the text just by looking at the key.
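A sketch of that key scheme (a hypothetical helper in C#, implementing exactly the rules described above):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class TranslationKeys
{
    static readonly HashSet<string> Used = new HashSet<string>();

    // First four words, uppercased, spaces to underscores,
    // non-alphanumerics stripped, numeric suffix on duplicates.
    public static string For(string englishText)
    {
        var words = englishText.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        var baseKey = Regex.Replace(
            string.Join("_", words.Take(4)).ToUpperInvariant(), "[^A-Z0-9_]", "");

        var key = baseKey;
        for (int i = 2; !Used.Add(key); i++)
            key = baseKey + i;
        return key;
    }
}

So "Are you sure you want to delete this?" becomes ARE_YOU_SURE_YOU, and a second string starting with the same words becomes ARE_YOU_SURE_YOU2.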
Write a utility to extract the data and build .NET resource files that you add into your project for compilation. Create a separate resource file for each language. In your code, use the key to point to the proper entry.
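For the extraction utility, something along these lines could emit one .resx per language from your database rows (ResXResourceWriter lives in the System.Windows.Forms assembly; the file name and table layout here are hypothetical):

using System.Collections.Generic;
using System.Resources;

static class ResxExporter
{
    // entries: key -> translated text for one language, e.g. loaded from
    // your translations table; "da" would produce Strings.da.resx.
    public static void Export(string languageCode, IDictionary<string, string> entries)
    {
        using (var writer = new ResXResourceWriter($"Strings.{languageCode}.resx"))
        {
            foreach (var entry in entries)
                writer.AddResource(entry.Key, entry.Value);
        }   // Dispose flushes and finalizes the XML.
    }
}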
I would skim over the MS documents on the subject:
http://www.microsoft.com/globaldev/getwr/dotneti18n.mspx
Some basic things to avoid:
never ever ever use translation software; hire a pro, or an intern taking that language at a local college
never try to create text by appending two existing entries, because grammar differs greatly between languages; this will never work. So if you have a string that says "Click" and want one that says "Click Now", do not try to create a setup that merges the two entries, or, during translation, copy the word for "click" and translate the word "now". Treat every string as a totally new translation from scratch
I will add: store and manipulate string data as Unicode (NVARCHAR in MS SQL).
Some questions to think about…
How much can you afford to delay the shipment of the English version of your application to save a bit of the cost of internationalizing later?
Will you still be trading if you don’t get the cash flow from shipping the English version quickly?
How will you get the UI right, if you don’t get feedback quickly from some customers about it?
How often will you rewrite the UI before you have to internationalize it?
Do your English customers wish to be able to customize strings in the UI? E.g., not everyone calls a "shipping note" the same thing.
As a large part of the pain of internationalizing is making sure you don't break the English version, is automated system testing of the UI a better investment?
The only thing I think I will always do is: "Do not use composite strings that are built at run time from concatenated phrases", and if you do so, don't spread the code that builds up a single string over lots of methods.
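To illustrate the composite-string point, a small before/after sketch (Translate here is a stand-in for whatever resource lookup you use, not a real API):

using System;

static class CompositeStringsExample
{
    // Stand-in for your real resource lookup.
    static string Translate(string text) => text;

    static string Notification(int count)
    {
        // Bad: concatenated fragments freeze English word order, and the
        // translator never sees the whole sentence:
        //   Translate("You have ") + count + Translate(" new messages")

        // Better: one complete, translatable sentence with a placeholder, so
        // each language can put the number wherever its grammar requires.
        return string.Format(Translate("You have {0} new messages"), count);
    }
}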
Having your UI automatically resize (and lay out) to cope with the length of labels etc. will save you lots of time over the years if you can do it cheaply. There are lots of third-party control sets for Windows Forms that let you label text boxes etc. without having to put the labels on as separate controls.
I am just starting to internationalize a WinForms application; we hope to mostly be able to use the "name" of each control as the lookup key, without having to move lots into resource files etc. It is not always as hard as you think at first…
You could use NGettext.Wpf (it can be installed from NuGet, and yes I am the author, but I made it out of the frustrations listed in the other answers).
It is hosted in this GitHub repository, and here is the getting started section at the time of writing:
NGettext.Wpf is intended to work with dependency injection. You need to call the following at the entry point of your application:
NGettext.Wpf.CompositionRoot.Compose("ExampleDomainName");
The "ExampleDomainName" string is the domain name. This means that when the current culture is set to "da-DK" translations will be loaded from "Locale\da-DK\LC_MESSAGES\ExampleDomainName.mo" relative to where your WPF app is running (You must include the .mo files in your application and make sure they are copied to the output directory).
Now you can do something like this in XAML:
<Button CommandParameter="en-US"
Command="{StaticResource ChangeCultureCommand}"
Content="{wpf:Gettext English}" />
Which demonstrates two features of this library. The most important is the Gettext markup extension which will make sure the Content is set to the translation of "English" with respect to the current culture, and update it when the current culture is changed. The other feature it demonstrates is the ChangeCultureCommand which changes the current culture to the given culture, in this case "en-US".
I also highly recommend reading Preparing Strings from the gettext utilities manual.
Internationalization will let your product be usable in other countries. It's easy and should be done from the start (that way English-speaking people all over the world can use your software). These three rules will get you most of the way there:
Support international characters - use only Unicode data types in files and databases.
Support international date, time and number formats - use CultureInfo.InvariantCulture when storing data to file or computer-readable storage, use CultureInfo.CurrentCulture when displaying data or parsing user input, never do your own parsing, never use any other culture objects (see the sketch after these rules).
Textual data entered by the user should be considered a black box; don't try to break it up into words or letters, especially when displaying it to the user - different languages have different rules, and the OS knows how to display text (including right-to-left scripts), you don't.
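A minimal sketch of rule 2 in practice (the values and cultures are illustrative):

using System;
using System.Globalization;

static class CultureRules
{
    // Store machine-readable data culture-invariantly...
    public static string Serialize(double value) =>
        value.ToString(CultureInfo.InvariantCulture);    // always "3.14"

    // ...but parse user input and display values with the user's culture.
    public static double ParseUserInput(string input) =>
        double.Parse(input, CultureInfo.CurrentCulture); // accepts "3,14" under da-DK

    public static string Display(DateTime when) =>
        when.ToString(CultureInfo.CurrentCulture);       // user's date format
}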
Localization is translating the software into different languages. This is difficult and expensive; a good start is to never hardcode strings and never build sentences out of smaller strings.
If you use test data, use non-English (e.g.: Russian, Polish, Norwegian etc) strings.
Encoding rears its ugly little head at every corner. If not in your own libraries, then in external ones.
I personally favor Russian because, although I don't speak a word of Russian (despite my name's origin), it has foreign characters in it and takes way more space than English, and therefore tests your spacing too.
I don't know if that is something language-specific, or just because our Russian translator likes verbose strings.