I'm trying to set a datetime when calling backend.properties() in Qiskit.
This is my code:
from qiskit import *
from qiskit.providers.ibmq import *
provider = IBMQ.get_provider(hub='ibm-q')
provider.backends(simulator=False)
backend = provider.get_backend(name)
prp = backend.properties(datatime=datatime).to_dict()
I'm getting this error: TypeError: properties() got an unexpected keyword argument 'datatime'.
If I pass refresh=True and leave out datatime in properties(), the code works.
The keyword is datetime, not datatime.
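For example, a minimal sketch with the corrected keyword (the backend name and date here are placeholders):
from datetime import datetime
backend = provider.get_backend('ibmq_lima')  # placeholder backend name
# pass the correctly spelled 'datetime' keyword to get the properties as of that moment
prp = backend.properties(datetime=datetime(2021, 1, 1)).to_dict()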
It's possible to get the Snowflake Query Id when using the snowflake-connector-python, i.e. the sfqid attribute from the cursor object.
Is it possible to get that attribute when using Snowflake's SQLAlchemy Toolkit? The doc page doesn't mention it - https://docs.snowflake.com/en/user-guide/sqlalchemy.html.
Thanks,
Eric
You can use sqlalchemy's execute method and get a reference to the SnowflakeCursor like this:
import os
from sqlalchemy import create_engine
from dotenv import load_dotenv
load_dotenv()
snowflake_username = os.getenv('SNOWFLAKE_USERNAME')
snowflake_password = os.getenv('SNOWFLAKE_PASSWORD')
snowflake_account = os.getenv('SNOWFLAKE_ACCOUNT')
snowflake_warehouse = os.getenv('SNOWFLAKE_WAREHOUSE')
snowflake_database = 'simon_db'
snowflake_schema = 'public'
if __name__ == '__main__':
    engine = create_engine(
        'snowflake://{user}:{password}@{account}/{db}/{schema}?warehouse={warehouse}'.format(
            user=snowflake_username,
            password=snowflake_password,
            account=snowflake_account,
            db=snowflake_database,
            schema=snowflake_schema,
            warehouse=snowflake_warehouse,
        )
    )
    connection = engine.connect()
    results = connection.execute("SELECT * FROM TEST_TABLE")
    queryId = results.cursor.sfqid
    print(f"queryId = {queryId}")
    print(f"results: {results.fetchone()}")
    connection.close()
    engine.dispose()
This prints out:
queryId = 019edbef-0000-114f-0000-0f9500612j23
results: ('n/a', '2021-01-01')
One way I found was using the function LAST_QUERY_ID, something like this:
results = connection.execute("SELECT * FROM CITIBIKE_TRIPS LIMIT 1").fetchone()
query_id = connection.execute("SELECT LAST_QUERY_ID()").fetchone()
print(query_id)
I get back something like:
$ python test_sqlalchemy.py
('019edb4b-0502-8a31-0000-16490cd95072',)
Might not be the ideal way. Note that LAST_QUERY_ID() is session-scoped, so it must run on the same connection as the query whose ID you want.
I would like to use Slick (3.2.3) to connect to an MSSQL database.
Currently, my project is the following.
In application.conf, I have
somedbname = {
  driver = "slick.jdbc.SQLServerProfile$"
  db {
    host = "somehost"
    port = "someport"
    databaseName = "Recupel.Datawarehouse"
    url = "jdbc:sqlserver://"${somedbname.db.host}":"${somedbname.db.port}";databaseName="${somedbname.db.databaseName}";"
    user = "someuser"
    password = "somepassword"
  }
}
The "somehost" looks like XX.X.XX.XX where X's are numbers.
My build.sbt contains
name := "test-slick"
version := "0.1"
scalaVersion in ThisBuild := "2.12.7"
libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.2.3",
  "com.typesafe.slick" %% "slick-hikaricp" % "3.2.3",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "com.microsoft.sqlserver" % "mssql-jdbc" % "7.0.0.jre10"
)
The file with the "main" object contains
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile
import slick.jdbc.SQLServerProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration._

object Main {
  val dbConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig("somedbname")
  val db: JdbcProfile#Backend#Database = dbConfig.db

  def main(args: Array[String]): Unit = {
    try {
      val future = db.run(sql"SELECT * FROM somettable".as[(Int, String, String, String, String,
        String, String, String, String, String, String, String)])
      println(Await.result(future, 10.seconds))
    } finally {
      db.close()
    }
  }
}
This, according to all the documentation that I could find, should be enough to connect to the database. However, when I run this, I get
[error] (run-main-0) java.sql.SQLTransientConnectionException: somedbname.db - Connection is not available, request timed out after 1004ms.
[error] java.sql.SQLTransientConnectionException: somedbname.db - Connection is not available, request timed out after 1004ms.
[error] at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:548)
[error] at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
[error] at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:145)
[error] at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:83)
[error] at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:14)
[error] at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
[error] at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
[error] at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
[error] at java.base/java.lang.Thread.run(Thread.java:844)
[error] Nonzero exit code: 1
Perhaps related and also annoying, when I run this code for the second (and subsequent) times, I get the following error instead:
Failed to get driver instance for jdbcUrl=jdbc:sqlserver://[...]
which forces me to kill and reload sbt each time.
What am I doing wrong? Worth noting: I can connect to the database with the same credentials from software like Valentina.
As suggested by @MarkRotteveel, and following this link, I found a solution.
First, I explicitly set the driver, adding the line
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
in the db dictionary, after password = "somepassword".
Secondly, the default timeout (after one second) appears to be too short for my purposes, and therefore I added the line
connectionTimeout = "30 seconds"
after the previous driver line, still in the db dictionary.
Now it works.
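For reference, the resulting db block (keeping the placeholders from the question) looks like this:
db {
  host = "somehost"
  port = "someport"
  databaseName = "Recupel.Datawarehouse"
  url = "jdbc:sqlserver://"${somedbname.db.host}":"${somedbname.db.port}";databaseName="${somedbname.db.databaseName}";"
  user = "someuser"
  password = "somepassword"
  driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
  connectionTimeout = "30 seconds"
}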
I want to create a for loop where the loop variable is substituted into a SQL query string.
My code is:
for i in range(1,32):
    DEP_RATE = pd.read_sql_query('select DAYNUM,YYYYMM,sum(DEP_RATE) from ASPM.aspm_qtr where LOCID="ABQ" and DAYNUM = i:"{i}" group by YYYYMM',{i:str(i)},rconn)
the error is:
'dict' object has no attribute 'cursor'
I used this code:
for i in range(1,32):
    DEP_RATE = pd.read_sql_query('select DAYNUM,YYYYMM,sum(DEP_RATE) from ASPM.aspm_qtr where LOCID="ABQ" and DAYNUM = "i" group by YYYYMM',rconn, params={i:str(i)})
It raises no error but doesn't work. The problem is:
("Truncated incorrect DOUBLE value: 'i'")
This appears to be a pandas question, yes? I believe it's an argument mismatch with the read_sql_query API. The function signature is:
pandas.read_sql_query(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, chunksize=None)
Which roughly says that the first two positional arguments are required, and everything else is defaulted or optional.
To make it more obvious, consider putting your core SQL string on a separate line:
sql = 'SELECT daynum, ... AND daynum = :i'
for i in range(1, 32):
    DEP_RATE = pd.read_sql_query(sql, {i: str(i)}, rconn)
With this formulation, it's more obvious that your arguments do not match the API: your second argument is off. Perhaps you want (note the additional keyword argument params):
DEP_RATE = pd.read_sql_query(sql, rconn, params={"i": str(i)})
I corrected the code. The correct answer is:
DEP_RATE = list(np.zeros((1,1)))*31
for i in range(1,32):
    DEP_RATE[i-1] = np.array(pd.read_sql_query('select DAYNUM,YYYYMM,sum(DEP_RATE) from ASPM.aspm_qtr where LOCID="ABQ" and DAYNUM = '+str(i)+' group by YYYYMM;', rconn, params={i:str(i)}))
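That said, building SQL by string concatenation is fragile and open to injection. A safer sketch, assuming rconn is a MySQL DBAPI connection (those drivers use the %(name)s paramstyle):
import pandas as pd

sql = ('select DAYNUM, YYYYMM, sum(DEP_RATE) '
       'from ASPM.aspm_qtr '
       'where LOCID = "ABQ" and DAYNUM = %(day)s '
       'group by YYYYMM')

dep_rate = []
for i in range(1, 32):
    # the driver substitutes and quotes the value; no string concatenation needed
    dep_rate.append(pd.read_sql_query(sql, rconn, params={'day': i}))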
I am getting an error executing the code below. I have a Datastore entity which has a property of type Date. An example date property value stored in an entity for a particular row is 2016-01-03 (19:00:00.000) EDT.
The code I am executing filters the entity values on date greater than 2016-01-01. Any idea what is wrong with the code?
Error
ValueError: Unknown protobuf attr type <type 'datetime.date'>
Code
import pandas as pd
import numpy as np
from datetime import datetime
from google.cloud import datastore
from flask import Flask, Blueprint

app = Flask(__name__)
computation_cron = Blueprint('cron.stock_data_transformation', __name__)

@computation_cron.route('/cron/stock_data_transformation')
def cron():
    ds = datastore.Client(project="earningspredictor-173913")
    query = ds.query(kind='StockPrice')
    query.add_filter('date', '>', datetime.strptime("2016-01-01", '%Y-%m-%d').date())
    dataframe_data = []
    temp_dict = {}
    for q in query.fetch():
        temp_dict = {}  # new dict per row, otherwise every appended entry is the same object
        temp_dict["stock_code"] = q["stock_code"]
        temp_dict["date"] = q["date"]
        temp_dict["ex_dividend"] = q["ex_dividend"]
        temp_dict["split_ratio"] = q["split_ratio"]
        temp_dict["adj_open"] = q["adj_open"]
        temp_dict["adj_high"] = q["adj_high"]
        temp_dict["adj_low"] = q["adj_low"]
        temp_dict["adj_close"] = q["adj_close"]
        temp_dict["adj_volume"] = q["adj_volume"]
        dataframe_data.append(temp_dict)
    sph = pd.DataFrame(data=dataframe_data, columns=temp_dict.keys())
    # print sph.to_string()
    query = ds.query(kind='EarningsSurprise')
    query.add_filter('act_rpt_date', '>', datetime.strptime("2016-01-01", '%Y-%m-%d').date())
    dataframe_data = []
    temp_dict = {}
    for q in query.fetch():
        temp_dict = {}
        temp_dict["stock_code"] = q["stock_code"]
        temp_dict["eps_amount_diff"] = q["eps_amount_diff"]
        temp_dict["eps_actual"] = q["eps_actual"]
        temp_dict["act_rpt_date"] = q["act_rpt_date"]
        temp_dict["act_rpt_code"] = q["act_rpt_code"]
        temp_dict["eps_percent_diff"] = q["eps_percent_diff"]
        dataframe_data.append(temp_dict)
    es = pd.DataFrame(data=dataframe_data, columns=temp_dict.keys())
You seem to be using the generic google-cloud-datastore client library, not the NDB Client Library.
For google-cloud-datastore all date and/or time properties have the same format. From Date and time:
JSON
  field name: timestampValue
  type: string (RFC 3339 formatted, with milliseconds, for instance 2013-05-14T00:01:00.234Z)
Protocol buffer
  field name: timestamp_value
  type: Timestamp
Sort order: Chronological
Notes: When stored in Cloud Datastore, precise only to microseconds; any additional precision is rounded down.
So when setting/comparing such properties, try to use strings formatted as specified (or integers for the protobuf Timestamp?), not objects from the datetime module directly (those work with the NDB library). The same might be true for queries as well.
Note: this is based on documentation only, I didn't use the generic library myself.
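If sticking with Python objects, one likely fix along those lines (an untested sketch) is to pass a full datetime.datetime, which can serialize as a Timestamp, instead of the unsupported datetime.date:
from datetime import datetime

# a full timestamp rather than a bare date; the datetime.date object is what triggers the error
cutoff = datetime.strptime("2016-01-01", '%Y-%m-%d')  # note: no .date()
query.add_filter('date', '>', cutoff)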
I am developing a project with Scala, Play Framework 2.3.8 and Anorm, working against an MS SQL database.
Here is the code:
val repeatedCardsQuery = SQL("Execute Forms.getListOfRepeatCalls {user}, {cardId}")

DB.withConnection { implicit c =>
  val result = repeatedCardsQuery.on("user" -> user.name, "cardId" -> id)().map(row =>
    Json.obj(
      "Unified_CardNumber" -> row[Long]("scId"),
      "ContentSituation_TypeSituationName" -> row[String]("typeOfEventsName"),
      "Unified_Date" -> row[Date]("creationDate"),
      "InfoPlaceSituation_Address" -> row[String]("address"),
      "DescriptionSituation_DescriptionSituation" -> row[Option[String]]("description")
    )
  ).toList
  Ok(response(id, "repeats", result))
}
And it gives me a runtime error:
play.api.Application$$anon$1: Execution exception[[RuntimeException: Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.10-2.3.8.jar:2.3.8]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.10-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [play_2.10-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [play_2.10-2.3.8.jar:2.3.8]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library.jar:na]
Caused by: java.lang.RuntimeException: Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))
at anorm.MayErr$$anonfun$get$1.apply(MayErr.scala:34) ~[anorm_2.10-2.3.8.jar:2.3.8]
at anorm.MayErr$$anonfun$get$1.apply(MayErr.scala:33) ~[anorm_2.10-2.3.8.jar:2.3.8]
at scala.util.Either.fold(Either.scala:97) ~[scala-library.jar:na]
at anorm.MayErr.get(MayErr.scala:33) ~[anorm_2.10-2.3.8.jar:2.3.8]
at anorm.Row$class.apply(Row.scala:57) ~[anorm_2.10-2.3.8.jar:2.3.8]
[error] application - Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))
Actually, the error occurs on this line:
"Unified_Date" -> row[Date]("creationDate"),
How can I solve that problem and convert microsoft.sql.DateTimeOffset to Date in this case?
A custom column conversion can be added next to your Anorm call (defined or imported in the same class/object that needs to support this type).
import java.util.Date

import microsoft.sql.DateTimeOffset

import anorm.{ Column, MetaDataItem, TypeDoesNotMatch }

implicit def columnToDate: Column[Date] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case ms: DateTimeOffset =>
      // DateTimeOffset#getTimestamp returns a java.sql.Timestamp, a java.util.Date subclass
      Right(ms.getTimestamp)
    case _ =>
      Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to Date for column $qualified"))
  }
}
Btw it would be much better to work with DB values represented as JDBC dates (java.sql.Date types supported by Anorm).