Google doesn't recognise #type in JSON-LD

Why does the Google Structured Data Testing Tool show an error in this case?
How can I resolve it?

Google’s Structured Data Testing Tool is not a general structured data validator. It only recognizes terms from vocabularies which Google makes use of (e.g., Schema.org and the deprecated Data-Vocabulary.org).
You are using the GS1 vocabulary, which doesn’t seem to be one of the vocabularies supported by Google.
All terms from other vocabularies produce this error. It’s perfectly fine to use such terms, so simply ignore these errors.
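For illustration, here is a minimal JSON-LD sketch mixing a schema.org property with a GS1 type (the GS1 context URL and term names here are only illustrative). Google's tool flags the gs1: type, while a vocabulary-agnostic JSON-LD processor accepts it:
{
  "@context": {
    "schema": "http://schema.org/",
    "gs1": "http://gs1.org/voc/"
  },
  "@type": "gs1:Product",
  "schema:name": "Example product"
}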

You might try it on the Structured Data Linter, which is not so tightly bound to Schema.org or Data-Vocabulary.org. IIRC, it doesn't have built-in knowledge of GS1, but that could be added fairly easily. Issues and pull requests are welcome at http://github.com/structured-data/linter.


Does Cloudant support rewrites as functions?

I have a Cloudant database, and I want to make pretty URLs for my slash-containing documents. So I define a rewrite function like so:
{
  "_id": "_design/myRewrites",
  "rewrites": "function (req2) {\n return {path: \"../../../\" + req2.path.slice(4).join(\"%2F\")};\n}"
}
Rewrite function formatted more nicely:
function (req2) {
  return {path: "../../../" + req2.path.slice(4).join("%2F")};
}
According to the CouchDB docs, CouchDB has supported this kind of rewriting (as stringified functions) since CouchDB 1.7, but Cloudant's documentation doesn't speak about this particular functionality (only rewrites from arrays).
This is reflected in my experience: when I try it out at https://myAccount.cloudant.com/myDb/_design/myRewrites/_rewrite/hello/world/, I get the following response:
{"error":"unknown_command","reason":"unknown ddoc command 'rewrites'"}
However, I read somewhere that Cloudant and CouchDB have shared the same source code since version 2.0, so I would expect Cloudant to support all CouchDB features. What's the deal?
Also see the following tweet about this, in which IBM asks me to post a question on Stack Overflow and suggests I might be on an outdated cluster: https://twitter.com/digitalheir/status/845910843934085120
My data location says "Porter, London". Would it help if I changed this?
tl;dr: Sorry, but no. Cloudant doesn't support rewrites as functions :(
We tried your example and got the same results. Digging deeper, I can now confirm that Cloudant does not support URL rewrites via stringified functions. The service only supports rewrites using the array approach.
I can't say for sure, but I suspect the team overlooked this feature. That said, it's unlikely that Cloudant will support rewrites as JS functions anytime soon, because that approach does not scale well: it can bog down the database if views are updated frequently. It's similar to the reason Cloudant recommends using the built-in reduce functions (which are implemented in Erlang) rather than writing custom JavaScript reduces.
Rewrites as arrays, however, do scale. But that approach obviously won't work if you're generating URLs dynamically. In that case, we suggest moving the URL rewrite functionality to an app server. Unfortunately, this all might be a moot point if you're building a CouchApp :/
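For comparison, here is a minimal sketch of the array-based rewrite syntax that Cloudant does accept. The rule below is a static, purely illustrative mapping; array rules support :variable and * placeholders, but not arbitrary string manipulation like the join("%2F") in your function:
{
  "_id": "_design/myRewrites",
  "rewrites": [
    {
      "from": "/hello",
      "to": "../../../hello%2Fworld",
      "method": "GET",
      "query": {}
    }
  ]
}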
This was confusing, so thank you for pointing it out. I'm going to ask the Cloudant team to note this difference in the documentation. Hope this at least helps provide some closure. You weren't wrong for expecting it to work.

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've been looking into this problem a lot. I tried both Nancy.Swagger and its attribute-based annotations package.
I quickly discarded Nancy.Swagger, because it didn't feel right to me to have to create a separate, documentation-only class for each Nancy module. The attributes solution was a bit "cleaner" - at least the code and the documentation lived in one place. But it quickly became unmaintainable: the module code is unreadable because of all the attributes, and nothing is generated automatically - you have to specify the path, every parameter, and even the HTTP method as attributes, which is a huge duplication of effort. Problems appeared very quickly; a few examples:
I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
I added a parameter but not the attribute for it.
I changed a parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone a separate documentation module), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country, and they had some trouble using the APIs because the documentation just wasn't up to date.
My solution? Don't mix code and documentation. Generating documentation from code (like Swashbuckle does) IS OK, but writing documentation in code and trying to duplicate the code in the documentation is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger documentation, just do it the Swagger way.
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not. (A minimal sketch of such a definition follows this list.)
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate the process: write a couple of batch scripts to generate your modules and models from the YAML and copy them into your repository.
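As a rough illustration of the first step, a minimal Swagger 2.0 definition might start like this (the path, parameter and response below are made up):
swagger: "2.0"
info:
  title: Example API   # illustrative title and version
  version: "1.0.0"
paths:
  /documents/{id}:
    get:
      summary: Get a single document
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested document
Swagger.Codegen can then generate the Nancy modules, models and client code from a file like this, and a small batch script can copy the output into your repository.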
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something somewhere is different, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case Android, iOS and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to the projects in one batch. I just have to tell the teams that something was updated. They don't have to dig through documents to find the change; they just have their code regenerated and probably see some compile errors in case of breaking changes.
Do I still use Nancy.Swagger(.Annotations)?
Yes, I use it in another project that has just one endpoint with a couple of methods, and they don't change often. There it isn't worth the effort to set everything up, and I get my Swagger documentation up and running quickly. But if your project is big, the API keeps changing, and you have multiple code bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.

Custom Request Handlers/Components with Solr-4.x

I was referring to this post, http://sujitpal.blogspot.in/2011/02/solr-custom-search-requesthandler.html, for writing custom handlers in Solr. It is pretty nice, but it conforms to the old APIs. Is there a similar example I can refer to for Solr 4.3.0?
Your best bet is to download the Solr source and check the existing implementations of RequestHandler. There are some that are quite simplistic and can give you a good grasp of the API and a starting point. PingRequestHandler comes to mind first, if your goal is to do something very simple. For more complex scenarios, look at the ones that make use of components, and check solrconfig.xml for their initialization parameters.
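To give an idea of the shape of the API, here is a minimal, untested sketch of a custom handler for Solr 4.x. The class, package and parameter names are made up, and the exact set of abstract methods on RequestHandlerBase differs slightly between 4.x releases:
package com.example;

import org.apache.solr.handler.RequestHandlerBase;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

public class MyRequestHandler extends RequestHandlerBase {

    // Called for every request routed to this handler.
    @Override
    public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
        // Read a request parameter and add a value to the response,
        // in the spirit of PingRequestHandler.
        String name = req.getParams().get("name", "world");
        rsp.add("greeting", "Hello, " + name);
    }

    @Override
    public String getDescription() {
        return "Minimal example request handler";
    }

    @Override
    public String getSource() {
        return "";
    }
}
You would then register it in solrconfig.xml with something like <requestHandler name="/myhandler" class="com.example.MyRequestHandler" /> and call it at /solr/<core>/myhandler.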
Best of luck!

Apache module FORM handling in C

I'm implementing an Apache 2.0.x module in C, to interface with an existing product we have. I need to handle FORM data, most likely using POST but I want to handle the GET case as well.
Nick Kew's Apache Modules book has a section on handling form data. It provides code examples for POST and GET, which return an apr_hash_t of the key+value pairs in the form. parse_form_from_POST marshalls the bucket brigade and flattens it into a buffer, while parse_form_from_GET can simply reference the URL. Both routines rely on a parse_form_from_string routine to walk through each delimited field and extract the information into the hash table.
That would be fine, but it seems like there should be an easier way to do this than adding a couple hundred lines of code to my module. Is there an existing module or routines within apache, apr, or apr-util to extract the field names and associated data from a GET or POST FORM into a structure which C code can more easily access? I cannot find anything relevant, but this seems like a common need for which there should be a solution.
I switched to G-WAN, which offers a transparent ANSI C scripting interface for GET and POST forms (and many other goodies like charts, GIF I/O, etc.).
A couple of AJAX examples are available on the G-WAN developer page.
Hope it helps!
While, on its surface, this may seem common, CGI-style content handlers written in C for Apache are pretty rare. Most people just use CGI, FastCGI, or one of the myriad frameworks such as mod_perl.
Most of the C apache modules that I've written are targeted at modifying the particular behavior of the web server in specific, targeted ways that are applicable to every request.
If it's at all possible to write your handler outside of an apache module, I would encourage you to pursue that strategy.
I have not yet tried any solution, since I found this SO question as a result of my own frustration with the example in the "Apache Modules" book as well. But here's what I've found so far; I will update this answer when I have researched more.
Luckily, it looks like this is now a solved problem in Apache 2.4, using the ap_parse_form_data function.
No idea how well this works compared to your example, but here is a much more concise read_post function.
It is also possible that mod_form could be of value.

What is a DSL and where should I use it?

I'm hearing more and more about domain-specific languages being thrown about and how they change the way you treat business logic, and I've seen Ayende's blog posts and things, but I've never really understood exactly why I would take my business logic away from the methods and situations I'm using in my provider.
If you've got some background using these things, any chance you could put it in real layman's terms:
What exactly does building a DSL mean?
What languages are you using?
Where does using a DSL make sense?
What is the benefit of using DSLs?
DSLs are good in situations where you need to hand control over some aspect of the system to someone else. I've used them in rules engines, where you create a simple language that is easier for less technical folks to use to express themselves - particularly in workflows.
In other words, instead of making them learn Java:
DocumentDAO myDocumentDAO = ServiceLocator.getDocumentDAO();
for (int id : documentIDs) {
    Document myDoc = myDocumentDAO.loadDoc(id);
    if (myDoc.getDocumentStatus().equals(DocumentStatus.UNREAD)) {
        ReminderService.sendUnreadReminder(myDoc);
    }
}
I can write a DSL that lets me say:
for (document : documents) {
    if (document is unread) {
        document.sendReminder
    }
}
There are other situations, but basically, anywhere you might want to use a macro language, script a workflow, or allow after-market customization - these are all candidates for DSLs.
DSL stands for Domain-Specific Language, i.e. a language designed specifically for solving problems in a given area.
For example, Markdown (the markup language used to edit posts on SO) can be considered a DSL.
Personally, I find a place for a DSL in almost every large project I work on. Most often I need some kind of SQL-like query language. Another common use is rule-based systems, where you need some kind of language to specify the rules/conditions.
A DSL makes sense in contexts where it's difficult to describe or solve the problem by traditional means.
If you use Microsoft Visual Studio, you are already using multiple DSLs -- the design surface for web forms, winforms, etc. is a DSL. The Class Designer is another example.
A DSL is just a set of tools that (at least in theory) make development in a specific "domain" (i.e. visual layout) easier, more intuitive, and more productive.
As far as building a DSL goes, some of the stuff people like Ayende have written about relates to "text parsing" DSLs: letting developers (or end users) enter "natural text" into an application, which parses the text and generates some sort of code or output based on it.
You could use any language to build your own DSL. Microsoft Visual Studio has a lot of extensibility points, and the patterns & practices "Guidance Automation Toolkit" and Visual Studio SDK can assist you in adding DSL functionality to Visual Studio.
DSLs are basically compilers for custom languages. A good free and open tool for developing them is ANTLR. Recently, I've been looking at this DSL for a state-machine language to use on a new project. I agree with Tim Howland above that they can be a good way to let someone else customize your application.
FYI, a book on DSLs is in the pipeline as part of Martin Fowler's signature series.
If it's of the same standard as the other books in the series, it should be a good read.
More information here
DSL is just a fancy name and can mean different things:
Rails (the Ruby thing) is sometimes called a DSL because it adds special methods (and overwrites some built-in ones too) for talking about web applications
ANT, Makefile syntax etc. are also DSLs, but have their own syntax. This is what I would call a DSL.
One important aspect of this hype: It does make sense to think of your application in terms of a language. What do you want to talk about in your app? These should then be your classes and methods:
Define a "language" (either a real syntax as proposed by others on this page or a class hierarchy for your favorite language) that is capable of expressing your problem.
Solve your problem in terms of that language.
A DSL is basically your own small sublanguage, created to solve a specific domain problem. In this style it is usually built with method chaining. Languages where dots and parentheses are optional help make these expressions seem more natural. It can also be similar to a builder pattern.
DSLs of this kind aren't languages themselves, but rather a pattern that you apply to your API to make the calls more self-explanatory.
One example is Guice; the Guice Users Guide (http://docs.google.com/View?docid=dd2fhx4z_5df5hw8) has a description further down of how interfaces are bound to implementations, and in what contexts.
Another common example is query languages. For example:
NewsDAO.writtenBy("someUser").before("someDate").updateStatus("Deleted")
In the implementation, imagine each method returning either a new Query object or just this, updating itself internally. At any point you can terminate the chain, for example by calling rows() to get all the rows, or updateStatus as I have done above. Both would return a result object.
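To make the mechanics concrete, here is a minimal sketch of how such a chain might be implemented; the NewsQuery class and its methods are made up for illustration and not taken from any particular library:
import java.util.ArrayList;
import java.util.List;

// A tiny fluent query builder in the spirit of the chain above:
// each filter method records a condition and returns this, and a
// terminating method such as updateStatus(...) "executes" the query.
public class NewsQuery {

    private final List<String> filters = new ArrayList<String>();

    public NewsQuery writtenBy(String user) {
        filters.add("author = '" + user + "'");
        return this;
    }

    public NewsQuery before(String date) {
        filters.add("published < '" + date + "'");
        return this;
    }

    public int updateStatus(String status) {
        // A real implementation would build and run an UPDATE (or a view query)
        // from the collected filters and return the number of affected rows.
        System.out.println("UPDATE news SET status = '" + status
                + "' WHERE " + String.join(" AND ", filters));
        return 0;
    }
}
Usage: new NewsQuery().writtenBy("someUser").before("someDate").updateStatus("Deleted");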
I would recommend taking a look at the Guice example above as well, as each call there returns a new type with new options on them. A good IDE will allow you to complete, making it clear which options you have at each point.
Edit: it seems many consider DSLs to be new, simple, single-purpose languages with their own parsers. I always think of a DSL as using method chaining as a convention to express operations.
