Not able to create an entity with db.type/tuple in Datomic

Datomic newbie here. I'm playing with the various valueTypes and can't get the tuple data type to work.
Here's how I defined the schema:
(d/transact conn {:tx-data [{:db/ident       :df/Errors
                             :db/valueType   :db.type/tuple
                             :db/tupleType   :db.type/string
                             :db/cardinality :db.cardinality/many}]})
This worked. However, I can't figure out how to enter sample data. I tried:
(d/transact conn {:tx-data [{:df/Errors ["Error-code" "sample error message"]}]})
But it gives an error:
Invalid tuple value
As per the docs, a tuple value is a vector of 2 to 8 elements, so I'm not sure what I'm doing wrong. Please help.

Since the cardinality is many, the sample data code should look like this:
(d/transact conn {:tx-data [{:df/Errors [["Error-code" "sample error message"]]}]})
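With :db.cardinality/many, the value position holds a collection of values, and each value of a :db.type/tuple attribute is itself a vector; hence the extra level of nesting. A small sketch asserting two tuples at once (the string tempid and error texts are made up):

(d/transact conn
  {:tx-data [{:db/id     "err-entity" ; hypothetical string tempid
              :df/Errors [["E-100" "first sample error"]
                          ["E-200" "second sample error"]]}]})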

Related

Is there a way of querying a jsonb column with conditions on a "value array" in PostgreSQL?

I'm a beginner with PostgreSQL, and I'm looking for a way to write a query that meets the condition below.
I've tried to write a query that finds data in a jsonb column, but failed. Below is my sample case.
column name: data
column type: jsonb
sample data
(1) {"id": "1", "ips": [3232238180, 3232238181]}
(2) {"id": "1", "ips": [{"ip": 3232238180}, {"ip": 3232238181}]}
The IPs are stored as integers in the "ips" array.
I want to find rows that contain specific integer IPs using a range query (e.g. "ip" in 192.168.10.0/24).
I know I can find the rows if the data has form (2):
select *
from table_name,
JSONB_ARRAY_ELEMENTS(data->'ips') ips
where (ips->>'ip')::NUMERIC > xxxxxx
and (ips->>'ip')::NUMERIC < xxxxxx
Nevertheless, I'm wondering whether I can find it when the data has form (1), since I think form (1) has the benefit of saving storage.
Does anyone know how to do this?
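For reference, a hedged sketch of the analogous query for form (1): since the array elements are plain numbers rather than objects, PostgreSQL's jsonb_array_elements_text can unwrap them directly (table_name and the xxxxxx bounds are the question's placeholders):

-- Unnest the scalar array and cast each element to numeric.
select *
from table_name,
     jsonb_array_elements_text(data->'ips') ip
where ip::numeric > xxxxxx
  and ip::numeric < xxxxxx;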

How to create a Datomic partition without using db.part

In the official docs for Datomic (http://docs.datomic.com/schema.html) under the heading 'Creating new partitions' it says that a new partition (communities) can be created like this:
{:db/id #db/id[:db.part/db]
 :db/ident :communities}
Here ':communities' is not written as ':db.part/communities'.
I cannot install a new partition this way; for me it has to have the leading 'db.part/'. Is the documentation wrong, or am I not seeing the bigger picture?
If you read further in the documentation, you'll see that you're missing another datom required for that transaction (labeled with "Here is the complete transaction..."). That datom is (the user-assigned tempid -1 being optional):
[:db/add :db.part/db :db.install/partition #db/id[:db.part/db -1]]
Anything transacted with a tempid that resolves to the system partition (:db.part/db) must also include a datom marking the installation, via :db.install/partition or :db.install/attribute (for attributes, the reverse-ref version included directly in the map is more common).
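As an aside, the same reverse-ref shorthand works for partitions; this map form is equivalent to the explicit :db/add datom below:

;; Install the partition via the reverse ref, inside the entity map itself.
{:db/id #db/id[:db.part/db]
 :db/ident :communities
 :db.install/_partition :db.part/db}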
Transacting the full example from the docs works fine:
(def tx [{:db/id #db/id[:db.part/db -1]
          :db/ident :communities}
         [:db/add :db.part/db :db.install/partition #db/id[:db.part/db -1]]])

@(d/transact conn tx)
;; returns a successful tx map
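Once installed, the new partition can be used when minting tempids; a small usage sketch with the peer API:

;; Mint a tempid in the new partition and create an entity there.
@(d/transact conn [{:db/id  (d/tempid :communities)
                    :db/doc "entity created in the :communities partition"}])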

Npgsql DataReader Invalid attempt to read when no data is present

I'm getting angry with this; I can't find a solution because the error handler gives very little information.
I have a PostgreSQL database and I'm using Npgsql.dll to connect. Everything works normally so far, except for things like this:
I have a "test" table with 2 Columns (name, description) and 4 rows.
Im using "dr As NpgsqlDataReader" to execute a simple command
"SELECT * FROM test"
But with no reason it doesnt work at all, there are almost no extra info or help for this, it seems to be something related to npgsql.dll when i get the error.
dr = comando.ExecuteReader()
dr.Read()
Debug.Print("{0}", dr.Item(1))      ' Works, gives "description" value data
Debug.Print("{0}", dr.Item(0))      ' Works, gives "name" value data
Debug.Print("{0}", dr.GetName(0))   ' Works, gives column name "name"
Debug.Print("{0}", dr.GetString(0)) ' Works, again, same "name" value data
Debug.Print("{0}", dr.GetValue(0))  ' Works, same "name" value data
While dr.HasRows
    For i As Integer = 0 To (dr.FieldCount - 1) ' in this case, (2 - 1)
        Debug.Print("{0}", dr.GetName(i)) ' works, gives column name "name"
        Debug.Print("{0}", dr.Item(i))    ' FAILS
        'MsgBox(String.Format("{0}: {1}", dr.GetName(i), dr.Item(i))) ' FAILS
    Next
    dr.Read()
End While
dr.Close()
After the failure, all I get from the error handler is: "Invalid attempt to read when no data is present".
I would be very grateful for some extra opinions here. Thanks in advance.
I found the problem; it wasn't directly related to the
Invalid attempt to read when no data is present
error.
Using While dr.HasRows as the loop condition is the mistake.
The old documentation doesn't explicitly explain this boolean property: for example, whether true means that "dr" has rows IN GENERAL for the query, or rows remaining AFTER the current cursor position (i.e. after each dr.Read()).
The correct answer is obviously (though not to me): "in general". It is always TRUE or FALSE for the whole result set, so the loop never ends on its own; Read() advances past the last row, and the next column access throws the error above, far away from the real problem, which is what confused me.
The best use of this is:
If dr.HasRows Then
    While dr.Read()
        For i As Integer = 0 To (dr.FieldCount - 1)
            Debug.Print("{0}", dr.GetName(i))
            Debug.Print("{0}", dr.GetString(i))
            MsgBox(String.Format("{0}: {1}", dr.GetName(i), dr.Item(i)))
        Next
    End While
    dr.Close()
End If
I have to say that I found similar answers to other questions here that helped me reach my particular solution. Thanks to all of them, or to you, if you are one of them.

Selecting entities with the highest value of some attribute

Suppose I have one million article entities in my backend with an inst attribute called date, or one million player entities with an int attribute called points. What's a good way to select the 10 latest articles or top-scoring players?
Do I need to fetch the whole million to the peer and then sort and drop from them?
Until getting hold of the reverse index becomes a Datomic feature, you could manually define one.
e.g. for a :db.type/instant, create an additional attribute of type :db.type/long which you would fill with
(- Long/MAX_VALUE (.getTime date))
and the latest 10 articles could be fetched with
(take 10 (d/index-range db reverse-attr nil nil))
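Spelled out, a hedged sketch (the attribute name :article/date-reversed is made up; note it must be declared with :db/index true, since d/index-range walks the AVET index):

;; Hypothetical reversed-date attribute, declared indexed.
{:db/id #db/id[:db.part/db]
 :db/ident :article/date-reversed
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/index true
 :db.install/_attribute :db.part/db}

;; index-range returns datoms in ascending value order, so the smallest
;; reversed values (the newest dates) come first.
(->> (d/index-range db :article/date-reversed nil nil)
     (take 10)
     (map #(d/entity db (:e %))))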
Yes, you would need to fetch all the data, since there's no index that would help you out here.
I would have created my own "index" and normalized this data. You can have a separate set of N entities where you keep as many as you'd like. You could start with 10, or consider storing 100 to trade some (possibly negligible) speed for more flexibility. This index can be stored on a separate "singleton" entity that you add as part of your schema.
;; The attribute that stores the index
{:db/id #db/id[:db.part/db]
 :db/ident :indexed-articles
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/many
 :db.install/_attribute :db.part/db}

;; The named index entity.
{:db/id #db/id[:db.part/db]
 :db/ident :articles-index}
You can have a database function that does this. Every time you insert a new entity that you want to "index", call this function.
[[:db/add tempid :article/title "Foo"]
 [:db/add tempid :article/date ....]
 [:index-article tempid 10]]
The implementation of index-article could look like this:
{:db/id #db/id[:db.part/user]
 :db/ident :index-article
 :db/fn #db/fn {:lang "clojure"
                :params [db article-id idx-size]
                :code (concat
                       ;; Retract everything but the newest (idx-size - 1)
                       ;; articles already in the set...
                       (map
                        (fn [article]
                          [:db/retract
                           (datomic.api/entid db :articles-index)
                           :indexed-articles
                           (:db/id article)])
                        (->> (:indexed-articles
                              (datomic.api/entity db :articles-index))
                             (sort-by :article/date)
                             reverse
                             (drop (dec idx-size))))
                       ;; ...then add the new article to the index.
                       [[:db/add (datomic.api/entid db :articles-index)
                         :indexed-articles article-id]])}}
Disclaimer: I haven't actually tested this function, so it probably contains errors :) The general idea is that we remove any "overflow" entities from the set, and add the new one. When idx-size is 10, we want to ensure that only 9 items are in the set, and we add our new item to it.
Now you have an entity you can look up by its ident, :articles-index, and the 10 most recent articles can be read from it (all refs are indexed) without causing a full database read.
;; "indexed" set of articles.
(d/entity db :articles-index)
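The refs themselves come back as a set of entities, e.g.:

;; The (up to) ten most recently indexed articles, as entities.
(:indexed-articles (d/entity db :articles-index))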
I've been looking into this and think I have a slightly more elegant answer.
Declare your attribute as indexed with :db/index true
{:db/id #db/id[:db.part/db -1]
 :db/ident :ocelot/number
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/doc "An ocelot number"
 :db/index true
 :db.install/_attribute :db.part/db}
This ensures that the attribute is included in the AVET index.
Then the following gives you access to the "top ten", albeit using the low-level datoms call.
(take-last 10 (d/datoms (d/db conn) :avet :ocelot/number))
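Since d/datoms walks AVET in ascending value order, take-last picks the ten largest. A small sketch to pull just the values (assuming the usual d alias for datomic.api):

;; The ten highest :ocelot/number values.
(->> (d/datoms (d/db conn) :avet :ocelot/number)
     (take-last 10)
     (map :v))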
Obviously, if you need to do any further filtering ("who are the top ten scorers in this club?") then this approach won't work, but at that point you have a much smaller amount of data in hand and shouldn't need to worry about indexing.
I did look extensively at the aggregation functions available in Datalog and am having trouble getting my head around them; I'm also uncertain whether e.g. max would use this index rather than a full scan of the data. Similarly, the (index-range ...) function almost certainly does use this index, but it requires you to know the start and/or end values.

Finding certain model fields by hasMany model conditions - CakePHP

Trips hasMany Legs
I'm trying to find only trips that have a certain destination. I have consulted the following question ad infinitum, to no avail: In cakephp how can I do a find with conditions on a related field?
I get:
Query failed: ERROR: missing FROM-clause entry for table "Leg"
LINE 1: ...p__origin_airport" FROM "trips" AS "Trip" WHERE "Leg"."des...
and:
SQL Error: ERROR: missing FROM-clause entry for table "Leg"
LINE 1: ...p__origin_airport" FROM "trips" AS "Trip"
I have tried setting up the find in all of the ways suggested in the question above and can't seem to figure this out, to the point where I'm beginning to think there's some other problem. Can someone help me find a model's records using conditions on a hasMany model?
Below is the version of the find that throws the errors above. The other versions of the find() all return similar pg.query errors (no FROM clause).
Thanks!
$this->find('first', array(
    'conditions' => array('Leg.destination' => 'XXX'),
    'contain' => array('Leg'),
    'order' => 'Trip.price ASC'
));
$this->find('first', array(
    'conditions' => array('Leg.destination' => 'XXX'),
    'order' => 'Trip.price ASC'
));
This should work. Can you paste the SQL dump at the bottom of the page and maybe explain the schema of your tables in greater detail?
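For completeness, a hedged sketch of the explicit-join approach that is the usual fix for the missing FROM-clause error here: CakePHP fetches hasMany associations in a separate query, so Leg is never joined into the main SELECT, and conditions on Leg need an explicit join (the table and foreign-key names below follow Cake conventions but are guesses from the question):

// Hypothetical join-based find; 'legs' / 'trip_id' are assumed names.
$this->find('first', array(
    'joins' => array(array(
        'table' => 'legs',
        'alias' => 'Leg',
        'type' => 'INNER',
        'conditions' => array('Leg.trip_id = Trip.id'),
    )),
    'conditions' => array('Leg.destination' => 'XXX'),
    'order' => 'Trip.price ASC',
));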