Query using bigint attribute returns empty for certain values - Datomic

I created a minimal entity with one attribute of bigint type. My problem is that the query fails for certain values; this is the schema:
[{:db/ident :home/area,
  :db/valueType :db.type/bigint,
  :db/cardinality :db.cardinality/one,
  :db/doc "the doc",
  :db.install/_attribute :db.part/db,
  :db/id #db/id[:db.part/db -1000013]}]
I inserted a sample value:
(d/transact (d/connect uri2)
            [{:db/id #db/id[:db.part/user]
              :home/area 123456789000000N}])
I confirmed that it was created by using the Datomic console. However, the following query does not return the previously inserted entity, as I expected it would:
(d/q '[:find ?e
       :in $ ?h
       :where [?e :home/area ?h]]
     (d/db (d/connect uri2))
     123456789000000N)
;;--- #{}
Maybe I’m missing something in the way the value is expressed. Another test using a different value like 100N for the attribute :home/area returns the correct answer:
(d/transact (d/connect uri2)
            [{:db/id #db/id[:db.part/user]
              :home/area 100N}])
(d/q '[:find ?e
       :in $ ?h
       :where [?e :home/area ?h]]
     (d/db (d/connect uri2))
     100N)
;;-- #{[17592186045451]}
It also works fine with the value 111111111111111111111111111111111111N, which is confusing to me.
Datomic version: "0.9.5390", Java version "1.8.0_05", Java(TM) SE Runtime Environment (build 1.8.0_05-b13), Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode), MySQL as the storage service.
Thanks in advance for any suggestions.

For Clojure users, the name :db.type/bigint can be misleading, since it actually maps to java.math.BigInteger, not clojure.lang.BigInt.
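As a quick illustration (a minimal sketch at a plain Clojure REPL, independent of Datomic), you can see the type difference and coerce a BigInt literal explicitly, either with clojure.core's biginteger or with BigInt's .toBigInteger method:
(type 123456789000000N)                  ;=> clojure.lang.BigInt
(type (biginteger 123456789000000N))     ;=> java.math.BigInteger
(type (.toBigInteger 123456789000000N))  ;=> java.math.BigInteger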
I reproduced the same steps and I can't tell you why the Datalog query fails on 123456789000000N but not 100N and 111111111111111111111111111111111111N. It seems however that the following always works:
(d/q '[:find ?e
       :in $ ?h
       :where [?e :home/area ?h]]
     (d/db (d/connect uri2))
     (.toBigInteger 100N))
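In your case that would mean binding (.toBigInteger 123456789000000N) (or, equivalently, (biginteger 123456789000000N)) as the query input; once the input is a java.math.BigInteger it should match the stored value.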

I ran your example and got different results (it worked in all cases). I am not sure why, but maybe adding my example will help. The only changes I made were to use uri instead of uri2, to slurp the schema, and to run (d/create-database uri) followed by (def conn (d/connect uri)). I assume you performed similar steps, which is why I don't know why my example worked and yours didn't:
Clojure 1.8.0
user=> (use '[datomic.api :only [q db] :as d])
nil
user=> (use 'clojure.pprint)
nil
user=> (def uri "datomic:mem://bigint")
#'user/uri
user=> (d/create-database uri)
true
user=> (def conn (d/connect uri))
#'user/conn
user=> (def schema-tx (read-string (slurp "path/to/the/schema.edn")))
#'user/schema-tx
user=> @(d/transact conn schema-tx)
{:db-before datomic.db.Db#b8774875,
:db-after datomic.db.Db#321a2712,
:tx-data [#datom[13194139534312 50 #inst "2016-08-14T18:53:23.158-00:00" 13194139534312 true]
#datom[63 10 :home/area 13194139534312 true] #datom[63 40 60 13194139534312 true]
#datom[63 41 35 13194139534312 true] #datom[63 62 "the doc" 13194139534312 true]
#datom[0 13 63 13194139534312 true]],
:tempids {-9223367638809264717 63}}
(d/transact (d/connect uri)
            [{:db/id #db/id[:db.part/user]
              :home/area 123456789000000N}])
#object[datomic.promise$settable_future$reify__6480 0x5634d0f4
{:status :ready, :val {:db-before datomic.db.Db#321a2712,
:db-after datomic.db.Db#f6ef3cd8,
:tx-data [#datom[13194139534313 50 #inst "2016-08-14T18:53:34.325-00:00" 13194139534313 true]
#datom[17592186045418 63 123456789000000N 13194139534313 true]],
:tempids {-9223350046623220288 17592186045418}}}]
(d/q '[:find ?e
       :in $ ?h
       :where [?e :home/area ?h]]
     (d/db (d/connect uri))
     123456789000000N)
#{[17592186045418]}
(d/transact (d/connect uri)
            [{:db/id #db/id[:db.part/user]
              :home/area 100N}])
#object[datomic.promise$settable_future$reify__6480 0x3b27b497
{:status :ready, :val {:db-before datomic.db.Db#f6ef3cd8,
:db-after datomic.db.Db#2385c058,
:tx-data [#datom[13194139534315 50 #inst "2016-08-14T18:54:13.347-00:00" 13194139534315 true]
#datom[17592186045420 63 100N 13194139534315 true]],
:tempids {-9223350046623220289 17592186045420}}}]
(d/q '[:find ?e
       :in $ ?h
       :where [?e :home/area ?h]]
     (d/db (d/connect uri))
     100N)
#{[17592186045420]}
user=>
Can you run (first schema-tx) at the REPL to confirm your schema transacted? I noticed you were using the console, and I am wondering whether the bigint attribute did not get defined, or whether you were looking at the wrong URI (since yours is named uri2, I am assuming you have multiple examples).
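For reference, assuming schema-tx was read from the schema file shown above, (first schema-tx) should print a map along the lines of:
{:db/ident :home/area, :db/valueType :db.type/bigint, :db/cardinality :db.cardinality/one, :db/doc "the doc", :db.install/_attribute :db.part/db, :db/id #db/id[:db.part/db -1000013]}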

Related

Datomic query - find all records (entities) with value

Query:
(d/q '[:find [?e ...]
       :in $ ?value
       :where [?e _ ?value]]
     db "Germany")
returns nothing, while:
(d/q '[:find [?e ...]
       :in $ ?value
       :where [?e :country/name ?value]]
     db "Germany")
returns list of entities as expected.
Shouldn't the _ serve as a wildcard for any attribute name and return every entity that holds the value?
I read this Datomic query: find all entities with some value, but I can't figure out how to pass an actual value as a parameter.
Datomic version: datomic-pro-0.9.5966
I figured out this dirty, time-consuming method, but it does the job:
(defn all-by-value
  [db value]
  (reduce
    (fn [res ident]
      (try
        (->> (d/q '[:find [?e ...] :in $ ?a ?v :where [?e ?a ?v]] db ident value)
             (map #(d/entity db %))
             (concat res))
        (catch Exception _ res)))
    []
    (d/q '[:find [?e ...] :where [?e :db/ident]] db)))
Hope some of you will find it useful.
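For example (hypothetical usage, assuming db is a database value obtained with d/db), the call from the question would look like this:
(all-by-value db "Germany")
;; => a seq of entities that have "Germany" as the value of some attribute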

Working with Python in Azure Databricks to Write DF to SQL Server

We just switched away from Scala and moved over to Python. I've got a dataframe that I need to push into SQL Server. I did this multiple times before, using the Scala code below.
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)
val bulkCopyConfig = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user" -> "username",
  "password" -> "*********",
  "dbTable" -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout" -> "600"
))
df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
That's documented here.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-spark-connector
I'm looking for an equivalent Python script to do the same job. I searched for the same, but didn't come across anything. Does someone here have something that would do the job? Thanks.
Please refer to the official PySpark document JDBC To Other Databases to directly write a PySpark dataframe to SQL Server via the Microsoft SQL Server JDBC driver.
Here is the sample code.
spark_jdbcDF.write \
    .format("jdbc") \
    .option("url", "jdbc:sqlserver://yourserver.database.windows.net:1433") \
    .option("dbtable", "<your table name>") \
    .option("user", "username") \
    .option("password", "password") \
    .save()
Or
jdbcUrl = "jdbc:mysql://{0}:{1}/{2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": jdbcUsername,
    "password": jdbcPassword,
    "driver": "com.mysql.jdbc.Driver"
}
spark_jdbcDF.write \
    .jdbc(url=jdbcUrl, table="<your table name>",
          properties=connectionProperties)
Hope it helps.
Here is the complete PySpark code to write a Spark DataFrame to a SQL Server database, including where to put the database name and schema name:
df.write \
.format("jdbc")\
.option("url", "jdbc:sqlserver://<servername>:1433;databaseName=<databasename>")\
.option("dbtable", "[<optional_schema_name>].<table_name>")\
.option("user", "<user_name>")\
.option("password", "<password>")\
.save()

2 SQL Server Databases in Django Project

I have a problem loading data into database1 (default). The system is only supposed to load data that already exists in database2 (source). The system works on my professor's machine, but there it runs with two different ports and uses Docker, whereas I have SQL Server installed locally. The system starts, but the problem is that when I try to load a record into database1 it tells me the record does not exist in database2, so it does not load. However, if I try to load data that is not in database2, it loads correctly. I searched for how to change the SQL Server ports but could not work it out. Can anyone help me?
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'database1',
        'HOST': 'name\\name',
        'PORT': '',
        'USER': 'user1',
        'PASSWORD': 'password1',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
        }
    },
    'source': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'database2',
        'HOST': 'name\\name',
        'PORT': '',
        'USER': 'user2',
        'PASSWORD': 'password2',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
        }
    }
}
This is the database router configuration:
def decide_on_model(model):
    """Small helper function to pipe all DB operations of a worlddata model to the world_data DB"""
    return 'source' if model._meta.app_label == 'source' else None


class TutoriasRouter:
    """
    Implements a database router so that:
    * Django related data - DB alias `default` - MySQL DB `world_django`
    * Legacy "world" database data (everything "non-Django") - DB alias `world_data` - MySQL DB `world`
    """
    def db_for_read(self, model, **hints):
        return decide_on_model(model)

    # def db_for_write(self, model, **hints):
    #     return decide_on_model(model)

    def db_for_write(self, model, **hints):
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        # Allow any relation if both models are part of the worlddata app
        # if obj1._meta.app_label == 'source' and obj2._meta.app_label == 'source':
        #     return True
        # # Allow if neither is part of worlddata app
        # elif 'source' not in [obj1._meta.app_label, obj2._meta.app_label]:
        #     return True
        # # by default return None - "undecided"
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # allow migrations on the "default" (django related data) DB
        if db == 'default' and app_label != 'source':
            return True
        # allow migrations on the legacy database too:
        # this will enable to actually alter the database schema of the legacy DB!
        # if db == 'source' and app_label == "source":
        #     return True
        return False

ADODB SQLServer connection with Seek and Index

In my program I want to connect to SQL Server using ADODB.Connection, then read the data using ADODB.RecordSet, and use Index and Seek to find a particular record.
I tried using the connection string Provider=SQLNCLI11;;Data Source=localhost;Initial Catalog=TestDB;Integrated Security=SSPI;Persist Security Info=False;, with adOpenDynamic as the CursorType, adLockOptimistic as the LockType, and adCmdTable as the CommandType.
But when I call the Supports method on the recordset with adSeek or adIndex, it returns false.
Is there a way to connect to SQL Server and open a Recordset with support for Seek and Index?
Edit: here is the code:
LOCAL oCn, nCursor, nLock,oRS
oCn := CreateObject( "ADODB.Connection" )
oCn:ConnectionString := "Provider=SQLNCLI11;;Data Source=localhost;Initial Catalog=TLPosWin;Integrated Security=SSPI;Persist Security Info=False;"
oRS := CreateObject( "ADODB.RecordSet" )
nCursor := adOpenDynamic
nLock := adLockOptimistic
oRS:CursorLocation := adUseServer
oRS:Open("Articoli",oCn, nCursor, nLock, adCmdTable)
? "seek",oRS:Supports(0x200000),"index",oRS:Supports(0x100000) //both false
oRS:Index := "Articoli_ARTART" //Error
oRS:Seek('=',1,'000611') //Error

How do I pull all entities linked from another entity in Datomic?

I don't know how to word my question.
:host/id has a link to :server/id. I want to pull all servers linked to a specific host.
I've tried several approaches but I get either an empty result, all results or an IllegalArgumentExceptionInfo :db.error/not-a-keyword Cannot interpret as a keyword.
I tried following the documentation but I keep getting lost. Here are my attempts so far:
All hosts
(d/q '[:find (pull ?server [{:host/id [:host/hostname]}])
       :in $ ?hostname
       :where
       [?host :host/hostname ?hostname]
       [?server :server/name]]
     db "myhost")
IllegalArgumentExceptionInfo
(d/q '[:find (pull ?server [{:host/id [:host/hostname]}])
       :in $ ?hostname
       :where
       [?server :server/name ?host]
       [?host :host/hostname ?hostname]]
     db "myhost")
[]
(d/q '[:find (pull ?host [{:host/id [:host/hostname]}])
       :in $ ?hostname
       :where
       [?host :host/hostname ?hostname]
       [?host :server/name]]
     db "myhost")
Assuming you have these entities in Datomic:
(d/transact conn [{:host/name "host1"}])
(d/transact conn [{:server/name "db1"
                   :server/host [:host/name "host1"]}
                  {:server/name "web1"
                   :server/host [:host/name "host1"]}])
And assuming each server has a reference to host (please see schema below), in order to query which servers are linked to a host, use the reverse relation syntax '_':
(d/q '[:find (pull ?h [* {:server/_host [:server/name]}])
       :in $ ?hostname
       :where [?h :host/name ?hostname]]
     (d/db conn)
     "host1")
will give you:
[[{:db/id 17592186045418,
   :host/name "host1",
   :server/_host [#:server{:name "db1"} #:server{:name "web1"}]}]]
Here is the sample schema for your reference:
(def uri "datomic:free://localhost:4334/svr")
(d/delete-database uri)
(d/create-database uri)
(def conn (d/connect uri))
(d/transact conn [{:db/ident :server/name
                   :db/cardinality :db.cardinality/one
                   :db/unique :db.unique/identity
                   :db/valueType :db.type/string}
                  {:db/ident :server/host
                   :db/cardinality :db.cardinality/one
                   :db/valueType :db.type/ref}
                  {:db/ident :host/name
                   :db/cardinality :db.cardinality/one
                   :db/unique :db.unique/identity
                   :db/valueType :db.type/string}])
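As a side note, since :host/name is :db.unique/identity in this schema, a small sketch of the same lookup without a query is to call d/pull directly with a lookup ref (same schema and data assumptions as above):
(d/pull (d/db conn)
        '[:host/name {:server/_host [:server/name]}]
        [:host/name "host1"])
;; => roughly {:host/name "host1", :server/_host [{:server/name "db1"} {:server/name "web1"}]}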
