I have built several domains and problems in PDDL and now I'm looking for a way to visualize my domains, problems, and solutions. My questions are as follows:
Are there any tools for a rather simple graphical representation of plans (e.g. nodes and connections)?
Are there tools that can display the current world state of my domain at any time (i.e. the values of all variables in my domain, after each step in my plan) in plain text?
How do I integrate PDDL into the system architecture? If I want to use a 2D or 3D representation of my world (e.g. Gazebo), how can I "connect" Gazebo and PDDL?
Thanks!
Going over your questions one by one:
None that have been made publicly available, but I am aware of several students of members of the Planning community working on tools like that. I'd suggest you ask the question on this mailing list: planning-list#googlegroups.com
For that you'd need to modify one of the many existing planners and work it out yourself from their data structures, or extend whatever debugging methods the planners' authors have built into their systems. If you have the time and good C++ skills, I'd suggest looking into either Fast Downward (http://www.fast-downward.org/) or the less mature and simpler lwaptk (https://github.com/miquelramirez/lwaptk). If you are after some fast prototyping, and you are happy to compute the plans off-line, then I'd recommend looking at Pyperplan (https://bitbucket.org/malte/pyperplan), a native Python planner which should be quite easy to interface with pretty much anything (it just won't be very efficient when it comes to computing plans).
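If you go the Pyperplan route, the integration can be as thin as a subprocess call. A minimal sketch, assuming a pip-installed Pyperplan exposing its command-line entry point and writing the plan to a .soln file next to the problem file (the file names here are hypothetical):

```python
# Sketch: compute a plan off-line with Pyperplan and read the steps back.
# Assumes `pip install pyperplan` provides the `pyperplan` CLI and that it
# writes the plan to "<problem>.soln", per its documentation.
import subprocess
from pathlib import Path

domain = "blocks-domain.pddl"      # hypothetical file names
problem = "blocks-problem-01.pddl"

subprocess.run(["pyperplan", domain, problem], check=True)

for step, action in enumerate(Path(problem + ".soln").read_text().splitlines(), 1):
    print(f"{step}: {action}")     # e.g. "1: (unstack b a)"
```

Printing the plan step by step like this also gives you a natural hook for the plain-text world-state display you asked about in your second question.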
You'll have to work out an interface between the model of your "Gazebo" world and the planning back-end yourself, and the details depend on a) what you are modeling in the "front-end" and b) what exactly you want the planner to do for you. For an example of how to expose a relatively complex object model, I'd suggest looking at this little demo I put together a while back (https://github.com/miquelramirez/pr-as-planning-demo).
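To make the shape of that interface concrete, here is a minimal sketch of the glue loop: regenerate a PDDL problem from the current world state, call the planner, and map plan steps back onto simulator commands. Everything in it (the world dict layout, run_planner, send_to_simulator) is a hypothetical stand-in for whatever your front-end and planner actually expose:

```python
# Hypothetical glue between a simulated world and a PDDL planner.

def make_problem(world: dict) -> str:
    # Serialize the simulator's state into a PDDL problem.
    objects = " ".join(world["locations"])
    init = " ".join(f"(at {r} {loc})" for r, loc in world["robots"].items())
    return (f"(define (problem nav-1) (:domain navigation)\n"
            f"  (:objects {objects})\n"
            f"  (:init {init})\n"
            f"  (:goal (at robot1 room-b)))")

def run_planner(domain_file: str, problem: str) -> list:
    # Stub: invoke your planner here (e.g. the Pyperplan call sketched above).
    return ["(move robot1 room-a room-b)"]

def send_to_simulator(action: str) -> None:
    # Stub: translate a ground action into Gazebo motion/actuation commands.
    print("executing:", action)

world = {"locations": ["room-a", "room-b"], "robots": {"robot1": "room-a"}}
for step in run_planner("navigation-domain.pddl", make_problem(world)):
    send_to_simulator(step)
```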
Read through this link; it might be useful for you:
https://www.ida.liu.se/~TDDD48/labs/2015/planners.en.shtml
In short, I am looking to create a POC (proof of concept) for an interface I have designed for aggregating data from several different sources. Currently I am just using flat tables in SQL with no relationships (although there are fields in the data for that purpose) to store the data, which is being gathered by various means (mostly PowerShell scripts). I also want it to be modular enough that I can add hooks for REST calls to other sources. So, if I make a call to get data about a certain asset, I don't have to know where that data is at that level. The middleware will know where to find it and use the appropriate method to get it (SQL, REST, etc.).
I have been in IT for over 31 years; I have experience with SQL (and was a professional DBA at one point) and a few scripting languages, and I wrote a similar app in C# at one point, but I was given a template interface from a professional developer and just ran with that and changed it as I went.
I want a high-level view with colors indicating the general health of a location, and the ability to search the data for assets (this could be anything from a VM, BM server, ethernet switches, FC switches, connected SAN, hypervisors, vCenters, Nutanix clusters, etc.) to find their location, health, and most importantly, their connections to other assets.
I know a little about everything: enough to be dangerous in most areas, and a master of some. What direction should I go here? I know BASIC, and PowerShell well enough to call myself an expert, though more than likely I'm just a hacker. I know the basic concepts of coding well enough (I was an adjunct at a CC 20+ years ago) to teach them to others. I want to use a language I will be able to hand over to a professional to maintain once we have a new DevOps member on the team.
EDIT: I wrote a very small program earlier this year using React. That was broken into the interface and the 'middleware' as two separate projects. Java is not an intuitive language for me, and I could go this direction again, but I am looking for possible alternatives.
I'm not sure there's a 'non-developer' way of doing this, unfortunately. If you really want to do this yourself, I'd suggest a C# API since you have some experience in that (even if it's minimal, it's still comfort), and luckily for you they just released minimal APIs, which should make your life much easier (https://dotnetcoretutorials.com/2021/07/16/building-minimal-apis-in-net-6/).
In terms of actually displaying the data, if it's a really simple UI I'd keep it simple and use JavaScript and HTML/CSS (i.e. no frameworks such as React). It's a POC, and if someone comes in and wants to move over to a framework because there's more work to be done on it, it's not that hard to take your existing code and move it into something more modern. But if it's small, a framework would be over-engineering, especially if you need to learn it all.
For smaller projects like this, you can kind of learn the code you need for the project without needing to truly learn to code. Get your components from a library like Bootstrap (it's basically premade elements like dropdowns, progress bars, etc., and their documentation is pretty good), and keep it as simple as possible. Good luck!
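Whatever language you pick, the dispatch idea from your question stays the same: the caller asks for an asset, and the middleware decides which backend to query. A language-agnostic sketch of that pattern (in Python here just for brevity; all source names and lookup functions are hypothetical):

```python
# Sketch of the middleware dispatch described in the question: callers ask
# for an asset; this layer decides whether it lives in SQL, behind REST, etc.

def lookup_sql(asset_id: str) -> dict:
    return {"id": asset_id, "source": "sql"}   # stub: query your flat tables

def lookup_rest(asset_id: str) -> dict:
    return {"id": asset_id, "source": "rest"}  # stub: call the remote API

# Adding a new backend is one more entry here; callers never change.
SOURCES = {"vm": lookup_sql, "switch": lookup_sql, "nutanix": lookup_rest}

def get_asset(asset_type: str, asset_id: str) -> dict:
    return SOURCES[asset_type](asset_id)

print(get_asset("vm", "vm-0042"))
```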
I'm trying to create a small web application for a "personal information manager" / wiki kind of tool where I can take notes in the form of HTML snippets (or maybe Markdown), annotate them with some https://schema.org/ microdata and store both the snippet and the metadata somewhere for querying.
My understanding so far is that most semantic data stores (triple/quad stores, or databases supporting RDF) are better suited for storing and querying mainly the metadata. So I'll probably also want a traditional store of some sort (relational, document store, key-value, or even a non-RDF graph db) where I can store the full text of each note and maybe some other bits like time of last access, user-id that owns the note, etc., and also perform traditional (non-semantic) full-text queries.
I started looking for stores that would allow me to store both data and metadata in a single place. I found a few: Ontotext GraphDB, Stardog, MarkLogic, etc. All of these seem to do exactly what I want, but have some pretty limiting free license terms that really discourage me from studying them in depth: I prefer to study open technologies that I could potentially use on a real product.
Before digging deeper, I was wondering:
Whether my assumption is correct: that I'll need to use one store for the data and another for the metadata.
Whether there's any setup involving free/open-source software that developers with experience in RDF/SPARQL can recommend, given the problem I describe.
Right now I'm just leaning towards using Apache Jena for the RDF store and SPARQL queries, and something completely independent for the rest of the data (PostgreSQL most likely).
Before digging deeper, I was wondering:
Whether my assumption is correct: that I'll need to use one store for the data and another for the metadata.
Not necessarily, no, though there certainly are some cases in which that distinction may be useful. But most RDF databases offer scalable storage for both data and metadata. The only requirement is that your (meta)data is represented as RDF. If you're worried about performance of things like text queries, most of them offer support for full-text indexing through Lucene, Solr, or Elasticsearch.
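As an illustration, here is a minimal sketch with rdflib (a Python library; the note content and identifiers are made up) in which the note text and its schema.org metadata live in the same graph and are queried together with SPARQL:

```python
# Sketch: one RDF graph holding both the note body and its metadata.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")

g = Graph()
note = URIRef("urn:note:1")                       # hypothetical note id
g.add((note, RDF.type, SCHEMA.NoteDigitalDocument))
g.add((note, SCHEMA.text, Literal("Remember: feed the cat at 8am")))
g.add((note, SCHEMA.author, Literal("user-42")))

# Query metadata and full text in a single SPARQL query.
q = """
PREFIX schema: <https://schema.org/>
SELECT ?note ?text WHERE {
  ?note a schema:NoteDigitalDocument ;
        schema:text ?text .
  FILTER(CONTAINS(LCASE(STR(?text)), "cat"))
}
"""
for row in g.query(q):
    print(row.note, row.text)
```

In a production setup you'd swap the in-memory graph for a persistent store (and its full-text index) rather than a SPARQL FILTER, but the data model stays the same.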
Whether there's any setup involving free/open-source software that developers with experience in RDF/SPARQL can recommend, given the problem I describe.
This is really not the right place to ask that question: tool recommendations are considered off-topic on Stack Overflow since they attract biased answers. But as said, there are plenty of tools, both open-source/free and commercial, that you can look into. I suggest you pick one you like the look of, experiment a bit, and perhaps talk to that particular tool's community to explain what you're trying to do. Apache Jena and Eclipse RDF4J are two popular open-source projects, but there are plenty of others.
I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server has been developed in Node.js and the client in Angular 1 (and I am pretty new to both).
I must use a consensus algorithm so that it always shows the same state to all users.
I'm seriously in trouble with it, since I cannot find a proper tutorial on its use. I have been looking at and studying Paxos implementations, but it seems like Raft is much more used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task [1], so I'd recommend using an existing strongly consistent one instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them with Node.js though, so I won't comment on the maturity of the clients.
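To show how little code the "reuse an existing store" route takes, here is a sketch of the idea using the Python etcd3 client (the Node.js clients linked above expose equivalent put/get operations). It assumes an etcd server running on localhost, and the key layout and document shape are made up:

```python
# Sketch: etcd as the single, strongly consistent home for the canvas state.
# Assumes a local etcd server and `pip install etcd3`; the key name and the
# stored document shape are hypothetical.
import json
import etcd3

client = etcd3.client(host="localhost", port=2379)

def save_canvas(shapes: list) -> None:
    # Last write wins: the whole canvas is serialized under a single key.
    client.put("/canvas/state", json.dumps(shapes))

def load_canvas() -> list:
    value, _meta = client.get("/canvas/state")
    return json.loads(value) if value else []

save_canvas([{"id": "s1", "kind": "rect", "x": 0, "y": 0}])
print(load_canvas())
```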
If you insist on implementing consensus on your own, then Raft should be easier to understand; the paper is surprisingly accessible: https://raft.github.io/raft.pdf. There are also some Node.js implementations, but again, I haven't used them, so it is hard to recommend any particular one. The Gaggle README contains an example, and Skiff has an integration test which documents its usage.
Taking a step back, I'm not sure that distributed consensus is what you need here. It seems like you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other as they are received by the server, in FIFO order (imagine multiple people writing on the same whiteboard; the last one wins). The challenge is with concurrent modifications of existing shapes, but maybe you can fall back to last-change-wins or first-change-wins or something like that.
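A minimal sketch of that centralized approach (the shape fields are hypothetical): the server is the single point that orders operations, so any client replaying the log sees the same picture, and concurrent edits to the same shape resolve as last-write-wins:

```python
# Sketch: the server totally orders operations; no distributed consensus.

class Whiteboard:
    def __init__(self):
        self.log = []        # server-ordered history (FIFO)
        self.shapes = {}     # shape id -> latest version (last write wins)

    def apply(self, op: dict) -> None:
        self.log.append(op)  # arrival order at the server IS the order
        self.shapes[op["id"]] = op

board = Whiteboard()
board.apply({"id": "s1", "kind": "rect", "x": 0, "y": 0})
board.apply({"id": "s1", "kind": "rect", "x": 10, "y": 5})  # concurrent edit: last one wins
print(board.shapes["s1"])    # {'id': 's1', 'kind': 'rect', 'x': 10, 'y': 5}
```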
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
Hope this helps.
[1] Take a look at the Jepsen series (https://jepsen.io/analyses), where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down with an overabundance of accumulated history, which generates a new set of problems to deal with beyond simple consistency. An initial prototype could easily handle sending the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could be made to reduce network overhead.
My lab is doing a lot of sequencing, but the way the sequences are documented makes it difficult to retrieve them or keep track of the data. I would like to create a database that has the following features:
-A graphical user interface that allows one to upload/retrieve/view data, and can incorporate links to quickly BLAST or analyse the sequences with other online tools.
-Allows one to access it from the command line.
-Has another section on the GUI with records of what's in the lab, what needs to be ordered, etc.
I wanted to know if there are general database templates I can adopt and modify to suit my lab's needs. I have no experience in database design but have read about MySQL.
What are the first steps I should take in embarking on this project?
Thank you!
This is an interesting question and problem domain (one I now have experience with, by the way). Your first step is to decide on a general architecture and then select technologies for it.
For the web/graphical side, there are lots of off-the-shelf components (I assume you are aware of tools like antiSMASH, JBrowse, etc.), but you will need to evaluate these yourself. That is way outside the scope of the db side, however.
On the database side, PostgreSQL performs admirably here. I have worked on a heavily loaded 10+ TB db which specifically stored sequencing data, BLAST reports, and so forth. If you add something like PostBIS on top of that, you get something quite functional.
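As a concrete first step, here is a sketch of the kind of starting schema this implies. It uses sqlite3 from the Python standard library only so the example runs with no setup; the SQL carries over to PostgreSQL, and the columns are hypothetical placeholders for your lab's actual metadata:

```python
# Sketch: a minimal sequences table plus a lookup, as a starting point.
import sqlite3

conn = sqlite3.connect(":memory:")   # swap for a PostgreSQL connection later
conn.executescript("""
    CREATE TABLE sequences (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        organism TEXT,
        sequence TEXT NOT NULL,
        added_on TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE INDEX idx_sequences_name ON sequences(name);
""")
conn.execute("INSERT INTO sequences (name, organism, sequence) VALUES (?, ?, ?)",
             ("16S_rRNA_sample1", "E. coli", "AGAGTTTGATCCTGGCTCAG"))

# Both the GUI and the command line can share queries like this one.
for row in conn.execute("SELECT name, organism FROM sequences WHERE name LIKE ?",
                        ("%16S%",)):
    print(row)
```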
A lot of the heavier portions of the industry, however, are using Hadoop, because the quantity of available data is increasing very rapidly; the amount of expertise required to make that work is correspondingly higher.
Given that a lot of people use content repositories, there must be a good reason. I'm building a new web application that will need to store content. Can someone help me understand this?
What are the advantages of using a content repository like Apache Jackrabbit, as opposed to writing your own code/API to store images or text pages? Writing your own takes time, etc., but so does implementing and learning a new framework like the content repository API. A benefit of rolling your own, it seems to me, is that you know your code and have immediate expertise if you need to enhance or fix it. With another framework you need to learn its foibles, and it is always easier to modify code you know than code you don't, i.e. you don't know that underlying framework code as well as your own.
As I said, a lot of people use them, so there must be a reason. I can't see it as being just another "everyone is using them, so we should too." At least I hope it isn't that. :)
A JCR repository allows you to store all your content (from structured database-type data to large multimedia files) in a single place and with a single API, which is extremely convenient and makes your code simpler, avoiding the impedance mismatch between files and data that you usually have in content-based systems.
JCR also provides a lot of infrastructure functionality that you won't have to build or assemble yourself: search (including full-text), observation (callbacks when something changes), versioning, data types including multi-value, ordered nodes, etc.
If you allow a shameless plug, my "JCR - best of both worlds" article at http://java.dzone.com/articles/java-content-repository-best describes this in more detail and also provides a reading list for the JCR spec, which should allow you to get a good overview without reading the whole thing.
The article uses Apache Sling for its examples, which combined with a JCR repository provides a very nice (IMO, but as a Sling committer I'm biased ;-) platform for content-based applications.
My most recent projects have involved both choices: a custom-built data store (MySQL and image files) with a multi-level caching mechanism, and a JCR-based commercial repository.
A few thoughts:
In the short run, a DIY solution offers reduced complexity: you only have to build and learn what you need. And there is at least the opportunity to optimize the data store for your particular application's needs; more than likely speed of retrieval, but possibly storage footprint, security, or reliability concerns are foremost for you.
However, in the long run, you're looking at a significant increment of work to extend the home-grown system to a new content type (video, for example) or to provide new functionality (versioning, maybe).
Also, it's difficult to separate the choice of a data store approach from the choice of tools that content providers will use to populate and maintain the data store. You'll have to give your authors something more than an HTML form with a textarea and a submit button.
This is related to the advantages of standardization: compatibility and interchangeability. If everybody writes their own library and API, there is no compatibility or interchangeability, which leads to higher cost.