I want to validate instances against the minimum cardinality restrictions of my ontology. To do so, I have to perform closed-world reasoning.
According to several posts, Protégé supports such reasoning with the Pellet reasoner. However, I could not find any tutorial on how to do it.
My question is: how do I configure Protégé (or Pellet in Protégé) to perform closed-world reasoning?
Thank you in advance!
I've been assigned to a legacy project which runs on Ext JS 4.2. I know JavaScript but I'm totally unaware of Ext JS and I'm having trouble understanding it. Can someone please guide me on how to learn Ext JS: what approach should be followed, which topics are important, and in what sequence should they be covered?
I have been using ExtJS (7.2.0) in a corporate project for six months; here are some tips I wish I had had some time ago.
You should start from the official docs and examples given by Sencha:
guide --> the main topics are The Class System, MVC Application Architecture and Components, which are the basis of ExtJS;
examples --> I find the KitchenSink example very useful, since you can quickly get an overview of all the components available in the system - you can also take a look at the MVC examples, in order to see more complex architectures;
forum --> you can also check out the Sencha forum, which has many interesting topics (many more than you can find here on Stack Overflow).
Since you are using an older version of ExtJS, you will find plenty of material on the internet, because the framework was widely used several years ago, while nowadays it is difficult to find updated sources.
You can also check out Saki's website or the Sencha Fiddle explorer, sorting by created date ascending.
I used to be a trainer for Sencha, and I left the company in 2013 right around the time when ExtJS 4.2 was the main version.
The fastest way to get up to speed on the framework is to take a training class from Sencha. I haven't worked for Sencha for 8 years, and many of my colleagues (who I respected highly as trainers) aren't there any more, so this is not a plug for their services, but it's the fastest way. You will learn shortcuts that would take you much longer to discover on your own. The framework is huge and complicated, and it's nice to get an overview of how it works from an experienced guide.
Before I was hired as a trainer, I took both the ExtJS and Sencha Touch classes that they had available, and the difference between the "before" and "after" in my understanding was huge. Yes, it's a week of your time, and yes, it's $2500, so your manager may not agree with my recommendation, but like I said, it's the fastest way to get up to speed.
If you do decide to take the class, spend some time with your legacy app and write down where you're getting stuck, and ask those questions in class. Part of the class value is that you can get some free light consulting for any issues you may be having.
The fact that you know JavaScript is a big plus. I've had people in my classes who were new to JavaScript, and that was another hurdle that they had to get over.
Good luck with your app!
PostgreSQL makes use of intraoperation parallelism, and that is of interest to me (for my undergrad final year research project). I would like to know how operations like selection, projection, join, etc. are parallelized, but when I tried to look at the source code, I got extremely overwhelmed. Is there a high-level PostgreSQL "map"?
I tried looking for books that discuss and explore the algorithms and implementations used in PostgreSQL, but unfortunately didn't find any. Though feel free to refer me to such a book if you know about one.
If the only option I have is to dig into the source code, how long would it take me to find the information I want? And if any of you have gone through the source code, what advice would you give me?
The nice thing about open source is that there is no clear border between the source code and the documentation, since both are public. As soon as you get deeper into the implementation details, you will start reading the code. Fortunately the PostgreSQL code is well written and quite readable.
The first stop on your way into the source are the README files. These describe implementation principles, algorithms and code rules at a higher level. In your case, you should start with src/backend/access/transam/README.parallel.
Another good approach is to read the patches that introduced the feature, like 924bcf4f16d, 7aea8e4f2daa, d1b7c1ffe72, f0661c4e8c44 and 80558c1f5aa1. That introduces you to the places in the code that are concerned with parallel query and gives you an idea how it all works.
I have a homework assignment in artificial intelligence.
I need to make a robot go from room A to room B, and there are obstacles between the rooms.
The professor asked me to use STRIPS (Stanford Research Institute Problem Solver), but I cannot understand how STRIPS works.
Can someone give me a good explanation and examples of what STRIPS is and how it works?
Thank you.
[Please note, this is based on what I half-remember from nearly a year ago]
These days, I would expect that when the Prof. says STRIPS, they are talking about the problem-encoding 'language' rather than the planner - check for example the Wikipedia page: STRIPS. I would imagine that your Prof. likely has a particular solver (and quite possibly algorithm too) in mind, and wants you to encode the domain and specific problem to run on that solver. Without knowing more details of the assignment, I can't be sure what you need. If you're looking for a planner, as I understand it, Fast Downward is quite popular among researchers currently. The website has some instructions on how to use it, and IIRC it comes with a bunch of domains and problems for those domains. I would thoroughly recommend looking at those; they're pretty much what I learnt with. I also just found this and this.
STRIPS is essentially a way of encoding information about the nature of the problem you want the computer to find a solution to. Typically you encode a domain, which provides information about the problem overall, such as what objects may be involved, what states they can be in, and what actions can be taken. Then, you also encode a particular problem, which (generally) specifies the starting state and what the goal state should look like. Both files are fed into a solver, which takes them and finds a solution to the problem. Note that this won't necessarily be an optimal solution - that depends on what algorithm you use, and how you have told the solver what should be optimised (which I think you can generally do in the problem, though I can't remember for sure now).
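To make the encoding idea concrete, here is a toy sketch in Python of the robot-between-rooms problem in a STRIPS style: each action has preconditions, an add list and a delete list, and a breadth-first search over world states stands in for a real planner. The action and fact names are made up for illustration; a real assignment would normally encode this in PDDL and feed it to a solver like Fast Downward.

    from collections import deque

    # Each action: (name, preconditions, add list, delete list).
    ACTIONS = [
        ("open_door(A,B)", {"at(A)"}, {"door_open(A,B)"}, set()),
        ("move(A,B)", {"at(A)", "door_open(A,B)"}, {"at(B)"}, {"at(A)"}),
    ]

    def plan(initial, goal):
        """Breadth-first search over states; returns a list of action names."""
        frontier = deque([(frozenset(initial), [])])
        seen = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, pre, add, delete in ACTIONS:
                if pre <= state:  # action is applicable in this state
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    print(plan({"at(A)"}, {"at(B)"}))  # ['open_door(A,B)', 'move(A,B)']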
I suggest you have a look at those links, and see what you can find out. That should hopefully give you a better idea of what gaps you need filled in your knowledge, and then you can narrow in on exact specifics. If this is a taught course assignment, then I would expect that surely the Prof. would have gone over some of this in lectures (do you have lecture slides available?), or at least pointed everyone towards a recommended planner and material to read up on. If you're still struggling, your best bet is to go back and see the Prof. in office hours.
I'm building a calendar server in .NET. I want the first version of the system to be functional and interoperable with any calendar client. The system is for my college and is my thesis, which is why I don't have enough time to implement all the protocols that such a system should implement.
What protocols are REQUIRED in a calendar server in order to be functional for the clients? So far it implements RFC 5545 (iCalendar); I'm finishing RFC 4791 (CalDAV) and some of the WebDAV extensions, and after this I'm going to implement RFC 3744 (ACL).
Should I implement the RFC 6638 (Scheduling Extensions to CalDAV), RFC 3253 (Versioning Extensions to WebDAV) or any other?
In the future I want to implement all these protocols but I have no time now.
Despite the "close votes", I think this is a valid question. There's a lot of standards out there, and a lot of dependencies. And you certainly don't need everything.
The truth is that you need only a subset, and almost no one implements the entire spec.
What's required for you depends on which clients and which features you want to support. So let's say that you want to support iCal and Thunderbird.
Then at the very least you need large chunks of CalDAV (RFC4791). You don't need every REPORT, but at least calendar-multiget and calendar-query. The freebusy stuff is not used. But for the calendar-query report, there's a small subset of actual queries that clients do.
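As an illustration of what clients actually send, here is a rough Python sketch of a calendar-query REPORT asking for every VEVENT in a collection; the XML body follows the examples in RFC 4791, while the URL and credentials are placeholders:

    import requests

    COLLECTION = "https://calendar.example.com/calendars/alice/default/"

    # Minimal calendar-query body (RFC 4791, section 7.8):
    # return the ETag and calendar data of every VEVENT.
    body = """<?xml version="1.0" encoding="utf-8" ?>
    <c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
      <d:prop>
        <d:getetag/>
        <c:calendar-data/>
      </d:prop>
      <c:filter>
        <c:comp-filter name="VCALENDAR">
          <c:comp-filter name="VEVENT"/>
        </c:comp-filter>
      </c:filter>
    </c:calendar-query>"""

    response = requests.request(
        "REPORT", COLLECTION, data=body,
        headers={"Depth": "1", "Content-Type": "application/xml; charset=utf-8"},
        auth=("alice", "secret"),
    )
    print(response.status_code)  # 207 Multi-Status on success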
You need big parts of RFC3744. You can skip most of the REPORTs, but you need a principals system and access-control related WebDAV properties. You also don't need the ACL method. WebDAV ACL is primarily used for principals and reporting access information (but not altering it).
Nobody uses RFC3253 (versioning).
You probably need current-user-principal-URL (rfc5397).
You don't need scheduling (RFC6638). Without scheduling, clients will sync just fine.
Lastly, it's really useful to have support for WebDAV Sync (rfc6578). Clients should be able to live without it, but in reality they tend to misbehave. Without support for that spec, you can fall back on the proprietary ctag, which is widely supported. It's simpler, does the job, but is non-standard.
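For completeness, here is what those two synchronization options look like at the HTTP level, again sketched with Python's requests against a placeholder URL: a sync-collection REPORT per RFC 6578, and a PROPFIND for the non-standard getctag property as the fallback:

    import requests

    COLLECTION = "https://calendar.example.com/calendars/alice/default/"
    AUTH = ("alice", "secret")

    # RFC 6578 sync-collection REPORT: an empty <sync-token/> requests an
    # initial sync; later requests echo the token the server returned.
    sync_body = """<?xml version="1.0" encoding="utf-8" ?>
    <d:sync-collection xmlns:d="DAV:">
      <d:sync-token/>
      <d:sync-level>1</d:sync-level>
      <d:prop><d:getetag/></d:prop>
    </d:sync-collection>"""
    r = requests.request("REPORT", COLLECTION, data=sync_body,
                         headers={"Content-Type": "application/xml"}, auth=AUTH)

    # Fallback: poll the proprietary ctag; if its value changed since the
    # last poll, the client re-fetches the collection's contents.
    ctag_body = """<?xml version="1.0" encoding="utf-8" ?>
    <d:propfind xmlns:d="DAV:" xmlns:cs="http://calendarserver.org/ns/">
      <d:prop><cs:getctag/></d:prop>
    </d:propfind>"""
    r = requests.request("PROPFIND", COLLECTION, data=ctag_body,
                         headers={"Depth": "0", "Content-Type": "application/xml"},
                         auth=AUTH)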
I would consider this answer a non-exhaustive list though. It's an overview to help you get started. If you have more specific questions about the specs I mentioned, comment here, I'm happy to further expand on this answer.
When you are watching for news of a particular Wikipedia article via its RSS channel, it's annoying without filtering the information, because most of the edits are spam, vandalism, minor edits, etc.
My approach is to create filters. I decided to remove all edits that don't contain a nickname of the contributor but are identified only by the IP address of the contributor, because most such edits are spam (though there are some good contributions). This was easy to do with regular expressions.
I also removed edits that contained vulgarisms and other typical spam keywords.
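For reference, both filters can be sketched in a few lines of Python; the spam keyword list here is illustrative only, and a real one would be much longer:

    import re

    IP_EDITOR = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")  # IPv4-only usernames
    SPAM_WORDS = re.compile(r"\b(viagra|casino|xxx)\b", re.IGNORECASE)

    def keep_edit(editor_name, diff_text):
        """Return True if the edit passes both filters."""
        if IP_EDITOR.match(editor_name):
            return False  # anonymous, IP-identified contributor
        if SPAM_WORDS.search(diff_text):
            return False  # contains typical spam keywords
        return True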
Do you know some better approach utilizing algorithms or heuristics with regular expressions, AI, text-processing techniques etc.? The approach should be able to detect bad posts (minor edits or vandalisms) and should be able to incrementally learn what is good/bad contribution and update its database.
Thank you.
There are many different approaches you can take here, but traditionally spam filters with incremental learning have been implemented using naive Bayesian classifiers. Personally, I prefer the even easier-to-implement Winnow2 algorithm (details can be found in this paper).
First you need to extract features from the text you want to classify. Unfortunately the Wikipedia RSS feeds don't seem to be particularly machine readable, so you probably need to do some preprocessing. Alternatively you could directly use the Mediawiki API or see if one of the bot frameworks linked at the bottom of this page is of help to you.
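If you go the API route, a rough sketch of pulling an article's recent revisions (with the kind of metadata described below) looks like this; the article title is just an example:

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": "Alan Turing",  # example article
        "rvprop": "user|comment|flags|size|timestamp",
        "rvlimit": 20,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    for rev in page["revisions"]:
        # 'minor' and 'anon' appear as keys on revisions where they apply
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))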
Ideally you would end up with a list of words that were added, words that were removed, various statistics you can compute from that, and the metadata of the edit. I imagine the list of features would look something like this:
editComment: wordA (wordA appears in edit comment)
-wordB (wordB removed from article)
+wordC (wordC added to article)
numWordsAdded: 17
numWordsRemoved: 22
editIsMinor: Yes
editByAnIP: No
editorUsername: Foo
etc.
Anything you think might be helpful in distinguishing good from bad edits.
Once you have extracted your features, it is fairly simple to use them to train the Winnow/Bayesian classifier.
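To make that last step concrete, here is a minimal sketch of Winnow2 over binary features like the ones listed above; the feature strings, the promotion factor and the threshold are all illustrative choices, so check the paper for the exact scheme:

    class Winnow2:
        def __init__(self, alpha=2.0, threshold=3.0):
            self.alpha = alpha          # promotion/demotion factor (> 1)
            self.threshold = threshold  # often set relative to feature count
            self.weights = {}           # feature name -> weight, default 1.0

        def score(self, features):
            return sum(self.weights.get(f, 1.0) for f in features)

        def predict(self, features):
            # Predict "bad edit" when the weighted sum of the features
            # present in this edit exceeds the threshold.
            return self.score(features) > self.threshold

        def train(self, features, is_bad):
            if self.predict(features) == is_bad:
                return  # correct prediction: mistake-driven, so no update
            for f in features:
                w = self.weights.get(f, 1.0)
                # False negative: promote the active features.
                # False positive: demote the active features.
                self.weights[f] = w * self.alpha if is_bad else w / self.alpha

    # Hypothetical features extracted from two labelled edits:
    clf = Winnow2()
    clf.train({"editByAnIP:Yes", "+viagra", "editIsMinor:Yes"}, is_bad=True)
    clf.train({"editorUsername:Foo", "+algorithm", "editIsMinor:No"}, is_bad=False)
    print(clf.predict({"editByAnIP:Yes", "+viagra"}))  # True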