I want to filter objects based on a field being NULL in the database. I am using Django 1.7.1 and the following query:
frames = Frame.objects.filter(asset_id=None)
print frames
I get one frame whose asset_id field is NULL, but for the other frames I get the following error:
coercing to Unicode: need string or buffer, NoneType found
What am I missing?
Frame.objects.filter(asset_id__isnull=True)
https://docs.djangoproject.com/en/2.2/ref/models/querysets/#isnull
When I try to run a query in Superset that contains Elasticsearch array values, I get the following error:
elasticsearch error: TransportError(500, 'ql_illegal_argument_exception', 'Arrays (returned by [decision.certification.comments.user.sn]) are not supported')
My query is:
SELECT "column_1" from "table_1";
Here column_1 contains the array values. I am trying to get those values into a table in Superset, but the array values are not populated.
This is a known limitation of the Superset elasticsearch-dbapi driver. The Elasticsearch documentation for its own SQL API explains the reason:
Array fields are not supported due to the "invisible" way in which Elasticsearch handles an array of values: the mapping doesn’t indicate whether a field is an array (has multiple values) or not, so without reading all the data, Elasticsearch SQL cannot know whether a field is a single or multi value.
I have a table in SQL Server with a field named annotation of the text data type.
I used Spring JdbcTemplate to fetch the annotation text field and the following API (BaseRowMapper) to map the table column to a Java POJO.
While retrieving the data I get the exception below:
org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type 'net.sourceforge.jtds.jdbc.ClobImpl' to required type 'java.lang.String' for property 'annotation'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [net.sourceforge.jtds.jdbc.ClobImpl] to required type [java.lang.String] for property 'annotation': no matching editors or conversion strategy found
at org.springframework.beans.BeanWrapperImpl.convertIfNecessary(BeanWrapperImpl.java:464)
at org.springframework.beans.BeanWrapperImpl.convertForProperty(BeanWrapperImpl.java:495)
at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:1099)
at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:884)
at com.ecw.vascular.model.BaseRowMapper.mapRow(BaseRowMapper.java:39)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60)
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:651)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:589)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:639)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:664)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:704)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:179)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:185)
at com.ecw.vascular.dao.BaseDao.executeQuery(BaseDao.java:113)
at com.ecw.vascular.dao.ObservationDao.findByPatientAndEncounter(ObservationDao.java:64)
The problem lies in the datatype used in the SQL Server database to store a string of at most 16 bytes.
A text column can store up to 2 GB of variable-width character data, so JdbcTemplate retrieves that column as a CLOB.
Since the maximum length is 16, one solution is to change the datatype in the database to a more appropriate varchar.
If this is not an option, and since the error mentions the jTDS implementation of CLOB, you can try changing the JDBC connection string to:
jdbc:jtds:sqlserver://ServerName;useLOBs=false;DatabaseName=xxx;instance=xxx
A third, highly discouraged, option would be to use a Clob instead of a String inside the Java bean, with all the related changes needed to deal with database LOBs.
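If neither of the first two options is possible, you could also keep the String property and convert the CLOB explicitly inside the row mapper, instead of relying on the bean property conversion that fails. A minimal sketch (the bean and mapper class names below are illustrative, not the classes from the question):

import java.sql.Clob;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;

// Hypothetical bean with the 'annotation' property mentioned in the stack trace.
class AnnotationRecord {
    private String annotation;
    public String getAnnotation() { return annotation; }
    public void setAnnotation(String annotation) { this.annotation = annotation; }
}

class AnnotationRowMapper implements RowMapper<AnnotationRecord> {
    @Override
    public AnnotationRecord mapRow(ResultSet rs, int rowNum) throws SQLException {
        AnnotationRecord record = new AnnotationRecord();
        // Read the column as a CLOB and materialize it as a String ourselves,
        // so no Clob-to-String property editor is needed.
        Clob clob = rs.getClob("annotation");
        record.setAnnotation(clob != null ? clob.getSubString(1, (int) clob.length()) : null);
        return record;
    }
}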
I am working in JDeveloper 12c and facing an issue: I can create a new record using the BC4J tester, but when I try to change (update) existing data it throws the exception: Invalid Number. Error while selecting entity for CustmerInfo: ORA-01722: invalid number.
I have searched for this error but have not found a solution. To give more information: I have one master and two child tables. In the master table I have two columns that use DBSequence (a sequence and trigger in the database) and one mandatory date field (timestamp).
I have found the reason: the customernumber column is varchar because I concatenate the sequence with a prefix and then store it. The problem is that as soon as I change the entity attribute to DBSequence, it throws the invalid number error on update.
DBSequence should only be used if the value is populated from a database sequence, which would be a number.
If you are populating that field manually, use a regular String type for the attribute instead.
I am using Flink to load a CSV file into a DataSet of POJOs, defined through a Scala case class, using the readCsvFile method, and I have a problem that I cannot solve.
When the CSV contains a record with a format error in any of its fields, the record is discarded, and I assume the only way to keep such records is to type all fields as String and do the validation myself.
The problem is that if the last field after the delimiter is empty, the record is discarded by default, presumably because it is considered not to have the expected number of fields, and there is no way to handle this record error. If the empty value is in any of the earlier fields, there is no problem.
Example
field1|field2|field3
a||c
a|b|
In this example, the first record is returned by readCsvFile but not the second.
Is this behaviour correct? And is there any workaround to get the record?
Thanks
Case classes and tuples in Flink do not support null values. Therefore, a||c is invalid if the empty field is not a String. I recommend using the RowCsvInputFormat in this case. It supports nulls, and the generic rows can be converted to any other class in a subsequent map operator.
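A minimal Java sketch of that approach (the Scala API is analogous; the file path is a placeholder and the exact representation of empty columns can depend on the Flink version):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.core.fs.Path;
import org.apache.flink.types.Row;

public class PipeDelimitedCsvJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read every column as String so malformed or empty values are kept
        // and can be validated later instead of being silently dropped.
        TypeInformation<?>[] fieldTypes = new TypeInformation<?>[] {
                Types.STRING, Types.STRING, Types.STRING
        };

        RowCsvInputFormat format = new RowCsvInputFormat(
                new Path("/path/to/input.csv"), fieldTypes);  // placeholder path
        format.setFieldDelimiter("|");

        DataSet<Row> rows = env.createInput(format, new RowTypeInfo(fieldTypes));

        // Rows keep their full arity even when the trailing field is empty;
        // convert them to the case class / POJO in a following map operator.
        rows.print();
    }
}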
The problem is that, as you say, if the field is a String, the record should be valid even if it is null, and this doesn't happen when the null value is in the last field. The behaviour is different depending on the position.
I will try also with RowCsvInputFormat as you recommend.
Thanks
I have a legacy database that I connect to, and I am not sure why I get this result when I parse the JSON object in my client. The Character column customersMname is defined in the domain's static constraints as:
customersMname nullable: true, maxSize: 1
The result I get back in the JSON object when the field is null is:
<jsonname2>customersMname</jsonname2>
<jsonvalue2>{"class":"java.lang.Character"}</jsonvalue2>
There is actual data in the database column and it should be P. This seems to occur with single-character columns in a MySQL database when the datatype is defined as CHAR(1) or VARCHAR(1). Any ideas?
Apparently this is a bug. The workaround is to change the domain type to String and be done with it.