I get this error when using Npgsql binary COPY (see my implementation at the bottom):
invalid sign in external "numeric" value
The RetailCost column on which it fails has the following PostgreSQL properties:
type: numeric(19,2)
nullable: not null
storage: main
default: 0.00
The PostgreSQL log looks like this:
2019-07-15 13:24:05.131 ACST [17856] ERROR: invalid sign in external "numeric" value
2019-07-15 13:24:05.131 ACST [17856] CONTEXT: COPY products_temp, line 1, column RetailCost
2019-07-15 13:24:05.131 ACST [17856] STATEMENT: copy products_temp (...) from stdin (format binary)
I don't think it should matter, but there are only zero or positive RetailCost values (no negative or null ones).
My implementation looks like this:
using (var importer = conn.BeginBinaryImport($"copy {tempTableName} ({dataColumns}) from stdin (format binary)"))
{
    foreach (var product in products)
    {
        importer.StartRow();
        importer.Write(product.ManufacturerNumber, NpgsqlDbType.Text);
        if (product.LastCostDateTime == null)
            importer.WriteNull();
        else
            importer.Write((DateTime)product.LastCostDateTime, NpgsqlDbType.Timestamp);
        importer.Write(product.LastCost, NpgsqlDbType.Numeric);
        importer.Write(product.AverageCost, NpgsqlDbType.Numeric);
        importer.Write(product.RetailCost, NpgsqlDbType.Numeric);
        if (product.TaxPercent == null)
            importer.WriteNull();
        else
            importer.Write((decimal)product.TaxPercent, NpgsqlDbType.Numeric);
        importer.Write(product.Active, NpgsqlDbType.Boolean);
        importer.Write(product.NumberInStock, NpgsqlDbType.Bigint);
    }
    importer.Complete();
}
Any suggestions would be welcome
I caused this problem when using MapBigInt, not realizing the column was accidentally numeric(18). Changing the column to bigint fixed my problem.
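For reference, a minimal sketch of that column change (the column name here is hypothetical; substitute whichever column was accidentally numeric(18)):

ALTER TABLE products_temp ALTER COLUMN "NumberInStock" TYPE bigint;  -- align the column type with the bigint value being written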
The cause of the problem was that the importer.Write statements were not in the same order as the dataColumns.
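To illustrate, a minimal sketch (with a hypothetical subset of the column list): binary COPY interprets the incoming fields strictly in the order dataColumns declares them, so each Write call must line up positionally.

// Hypothetical subset of the column list; the Write calls must follow it one-for-one.
var dataColumns = "\"LastCost\", \"AverageCost\", \"RetailCost\", \"NumberInStock\"";
importer.StartRow();
importer.Write(product.LastCost, NpgsqlDbType.Numeric);      // 1: LastCost
importer.Write(product.AverageCost, NpgsqlDbType.Numeric);   // 2: AverageCost
importer.Write(product.RetailCost, NpgsqlDbType.Numeric);    // 3: RetailCost
importer.Write(product.NumberInStock, NpgsqlDbType.Bigint);  // 4: NumberInStock
// Swapping any two lines makes PostgreSQL read one type's bytes as another's,
// which surfaces as errors like: invalid sign in external "numeric" value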
Hi all, ijson newbie here. I have a very large .json file (168 GB) and I want to get all possible keys, but some values in the file are written as NaN. ijson creates a generator and outputs dictionaries (value in my code below); when a specific item is returned, it throws an error. How can I get a string instead of a dictionary for value? I tried parser = ijson.items(input_file, '', multiple_values=True, map_type=str), but it didn't help.
import json
import ijson

def parse_json(json_filename):
    with open('max_data_error.txt', 'w') as outfile:
        with open(json_filename, 'r') as input_file:
            parser = ijson.items(input_file, '', multiple_values=True)
            max_keys_list = list()
            for value in parser:
                # round-trip through json.dumps/loads to coerce values to strings
                for i in json.loads(json.dumps(value, ensure_ascii=False, default=str)):
                    if i not in max_keys_list:
                        max_keys_list.append(i)
                print(value)
            print(max_keys_list)
            for keys_item in max_keys_list:
                outfile.write(keys_item + '\n')

if __name__ == '__main__':
    parse_json('./email/emailrecords.bson.json')
Traceback (most recent call last):
File "panda read.py", line 29, in <module>
parse_json('./email/emailrecords.bson.json')
File "panda read.py", line 17, in parse_json
for value in parser:
ijson.common.IncompleteJSONError: lexical error: invalid char in json text.
litecashwire.com","lastname":NaN,"firstname":"Mia","zip":"87
(right here) ------^
Your file is not valid JSON (NaN is not a valid JSON value); therefore any JSON parsing library will complain about it, one way or another, unless it has an extension to handle this non-standard content.
The ijson FAQ found in the project description has a question about invalid UTF-8 characters and how to deal with them. Those same answers apply here, so I would suggest you go and try one of those.
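For instance, adapting that idea to the NaN problem as an untested sketch (NaNCleaner is a made-up helper, not part of ijson): wrap the file object so the invalid NaN tokens are rewritten to null before ijson reads them.

import ijson

class NaNCleaner:
    # File-like wrapper that rewrites the non-standard NaN token as null.
    # Rough sketch only: a NaN split across two read() chunks would slip
    # through; a robust version would buffer across chunk boundaries.
    def __init__(self, f):
        self.f = f
    def read(self, size=-1):
        return self.f.read(size).replace('NaN', 'null')

with open('./email/emailrecords.bson.json', 'r') as input_file:
    for value in ijson.items(NaNCleaner(input_file), '', multiple_values=True):
        print(value)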
This question already has answers here:
Issue querying from Access database: "could not convert string to float: E+6"
(3 answers)
Closed 7 years ago.
I am trying to access some of the columns in a Microsoft Access database table which contain numbers of type double, but I am getting the error mentioned in the title. The code used for querying the database is below; the error occurs on the line where the cur.execute(...) command is executed. Basically I am trying to filter out data captured in a particular time interval. If I exclude the columns CM3_Up, CG3_Up, CM3_Down and CG3_Down, which contain the double data type, from the cur.execute(...) command, I don't get the error. The same logic was used to access double data from other tables and it worked fine, so I am not sure what is going wrong.
Code:
start = datetime.datetime(2015,03,28,00,00)
a = start
b = start + datetime.timedelta(0,240)
r = 7
while a < (start + datetime.timedelta(1)):
    params = (a,b)
    sql = "SELECT Date_Time, CM3_Up, CG3_Up, CM3_Down, CG3_Down FROM Lysimeter_Facility_Data_5 WHERE Date_Time >= ? AND Date_Time <= ?"
    for row in cur.execute(sql,params):
        if row is None:
            continue
        r = r+1
        ws.cell(row = r,column=12).value = row.get('CM3_Up')
        ws.cell(row = r,column=13).value = row.get('CG3_Up')
        ws.cell(row = r,column=14).value = row.get('CM3_Down')
        ws.cell(row = r,column=15).value = row.get('CG3_Down')
    a = a+five_min
    b = b+five_min
wb.save('..\SE_SW_Lysimeters_Weather_Mass_Experiment-02_03_26_2015.xlsx')
Complete error report:
Traceback (most recent call last):
File "C:\DB_PY\access_mdb\db_to_xl.py", line 318, in <module>
for row in cur.execute(sql,params):
File "build\bdist.win32\egg\pypyodbc.py", line 1920, in next
row = self.fetchone()
File "build\bdist.win32\egg\pypyodbc.py", line 1871, in fetchone
value_list.append(buf_cvt_func(alloc_buffer.value))
ValueError: could not convert string to float: E-3
According to this discussion:
Python: trouble reading number format
the trouble could be that e should be d, like:
float(row.get('CM3_Up').replace('E', 'D'))
It sounds weird to me, but I know only a little Python.
It sounds like you receive strings like '2.34E-3', so try a conversion. I don't know Python, but in C# it could be like:
ws.cell(row = r,column=12).value = Convert.ToDouble(row.get('CM3_Up'))
ws.cell(row = r,column=13).value = Convert.ToDouble(row.get('CG3_Up'))
ws.cell(row = r,column=14).value = Convert.ToDouble(row.get('CM3_Down'))
ws.cell(row = r,column=15).value = Convert.ToDouble(row.get('CG3_Down'))
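In Python the same idea would be (a sketch, assuming the driver hands the values back as strings such as '2.34E-3', which float() parses directly):

ws.cell(row = r,column=12).value = float(row.get('CM3_Up'))
ws.cell(row = r,column=13).value = float(row.get('CG3_Up'))
ws.cell(row = r,column=14).value = float(row.get('CM3_Down'))
ws.cell(row = r,column=15).value = float(row.get('CG3_Down'))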
I am developing a project with Scala, Play Framework 2.3.8 and Anorm, working with an MS SQL database.
Here is the code:
val repeatedCardsQuery = SQL("Execute Forms.getListOfRepeatCalls {user}, {cardId}")
DB.withConnection { implicit c =>
  val result = repeatedCardsQuery.on("user" -> user.name, "cardId" -> id)().map(row =>
    Json.obj(
      "Unified_CardNumber" -> row[Long]("scId"),
      "ContentSituation_TypeSituationName" -> row[String]("typeOfEventsName"),
      "Unified_Date" -> row[Date]("creationDate"),
      "InfoPlaceSituation_Address" -> row[String]("address"),
      "DescriptionSituation_DescriptionSituation" -> row[Option[String]]("description")
    )
  ).toList
  Ok(response(id, "repeats", result))
}
And it gives me a runtime error:
play.api.Application$$anon$1: Execution exception[[RuntimeException: Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.10-2.3.8.jar:2.3.8]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.10-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [play_2.10-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [play_2.10-2.3.8.jar:2.3.8]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library.jar:na]
Caused by: java.lang.RuntimeException: Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))
at anorm.MayErr$$anonfun$get$1.apply(MayErr.scala:34) ~[anorm_2.10-2.3.8.jar:2.3.8]
at anorm.MayErr$$anonfun$get$1.apply(MayErr.scala:33) ~[anorm_2.10-2.3.8.jar:2.3.8]
at scala.util.Either.fold(Either.scala:97) ~[scala-library.jar:na]
at anorm.MayErr.get(MayErr.scala:33) ~[anorm_2.10-2.3.8.jar:2.3.8]
at anorm.Row$class.apply(Row.scala:57) ~[anorm_2.10-2.3.8.jar:2.3.8]
[error] application - Left(TypeDoesNotMatch(Cannot convert 2015-02-14 15:38:15.4089363 +03:00: class microsoft.sql.DateTimeOffset to Date for column ColumnName(.creationDate,Some(creationDate))))
Actually, the error occurs in this line:
"Unified_Date" -> row[Date]("creationDate"),
How can I solve that problem and convert microsoft.sql.DateTimeOffset to Date in this case?
A custom column conversion can be added next to your Anorm call (defined or imported in the same class/object that needs to support such a type).
import java.util.Date
import microsoft.sql.DateTimeOffset
import anorm.{Column, MetaDataItem, TypeDoesNotMatch}

implicit def columnToDate: Column[Date] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case ms: DateTimeOffset =>
      // getTimestamp returns a java.sql.Timestamp, which is a java.util.Date
      Right(ms.getTimestamp)
    case _ =>
      Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to Date for column $qualified"))
  }
}
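With columnToDate in scope in the controller, the original lookup should then work unchanged:

// The implicit Column[Date] now converts the DateTimeOffset value:
"Unified_Date" -> row[Date]("creationDate"),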
Btw it would be much better to work with DB values represented as JDBC dates (java.sql.Date types supported by Anorm).
I have many data.frames that I am trying to send to a MySQL database via RMySQL.
# Sends data frame to database without a problem
dbWriteTable(con3, name="SPY", value=SPY , append=T)
# stock1 contains a character vector of stock names...
stock1 <- c("SPY.A")
But when I try to loop it:
i = 1
while (i <= length(stock1)) {
  # converts "SPY.A" into SPY
  name <- print(paste0(str_sub(stock1, start = 1, end = -3))[i], quote=F)
  # sends data.frame to database
  dbWriteTable(con3, paste0(str_sub(stock1, start = 1, end = -3))[i], value=name, append=T)
  i <- 1+i
}
The following warning is returned and nothing is sent to the database:
In addition: Warning message:
In file(fn, open = "r") :
cannot open file './SPY': No such file or directory
However, I believe the problem is with passing value into dbWriteTable(), since dbWriteTable(con3, "SPY", SPY, append=T) works but dbWriteTable(con3, "SPY", name, append=T) does not...
You are probably using a non-base package for str_sub and I'm guessing you get the same behavior with substr. Does this succeed?
dbWriteTable(con3, substr( stock1, 1,3) , get(stock1), append=T)
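Applied to the whole loop, the same idea as a sketch (assuming each data.frame exists in the workspace under its shortened name): pass the object itself via get() instead of its name as a string, since RMySQL treats a character value as a path to a file to load, which is exactly the cannot open file './SPY' warning above.

library(stringr)  # for str_sub

for (s in stock1) {
  tbl <- str_sub(s, start = 1, end = -3)  # "SPY.A" -> "SPY"
  dbWriteTable(con3, name = tbl, value = get(tbl), append = TRUE)
}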
I'd like to know if a record is present in a database, using a field different from the key field.
I've tried the following code:
function start()
{
  jlog("start db query")
  myType d1 = {A:"rabbit", B:"poney"};
  /myDataBase/data[A == d1.A] = d1
  jlog("db write done")
  option opt = ?/myDataBase/data[B == "rabit"]
  jlog("db query done")
  match (opt)
  {
    case {none} : <>Nothing in db</>
    case {some:data} : <>{data} in database</>
  }
}

Server.start(
  {port:8092, netmask:0.0.0.0, encryption: {no_encryption}, name:"test"},
  [
    {page: start, title: "test" }
  ]
)
But the server hangs and never gets to the line jlog("db query done"). I misspelled "rabit" on purpose. What should I have done?
Thanks
Indeed it fails with your example, I have a "Match failure 8859742" exception, don't you see it?
But do not use ?/myDataBase/data[B == "rabit"] (which should have been rejected at compile time; bug report sent) but /myDataBase/data[B == "rabit"], which is a DbSet. The reason is that when you don't query by the primary key, you can get more than one value back, i.e. a set of values.
You can convert a dbset to an iter with DbSet.iterator. Then manipulate it with Iter: http://doc.opalang.org/module/stdlib.core.iter/Iter
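A rough, untested sketch of that (assuming the stdlib iter record, whose next function returns an option holding the element and the rest of the iterator):

it = DbSet.iterator(/myDataBase/data[B == "rabit"])
match (it.next())
{
  case {none} : <>Nothing in db</>
  case {some: (data, _)} : <>{data} in database</>
}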