Binary Column is translated to null Java type - sql-server

I created a domain in JasperReports Server. I have a table that stores binary data. When I use it in my domain I get the following error:
java.lang.IllegalArgumentException: getObjectType for javaType: null returned null
I exported the schema and found the following:
<field id="Id" type="java.lang.Integer" />
<field id="FileData" type="null" />
As you can see, null is used in the type field. I tried changing this to java.io.InputStream, which is the type it maps to when I connect to the data source directly, and got the same error:
java.lang.IllegalArgumentException: getObjectType for javaType:
java.io.InputStream returned null at
com.jaspersoft.commons.dataset.expr.ObjectTypeMapper.getObjectType(ObjectTypeMapper.java:69)
Any report using that domain fails to run until I remove the binary column. When I try to create a Domain report in iReport, it cannot retrieve the domain fields. When I try to use the ad hoc reporting tool I get the error above.
I am using SQL Server 2005, and the type of the data is 'image'. I cast the column to varbinary in my view to see if JasperReports would recognize it, and I still get the same error.
Has anyone successfully used binary data types in JasperReports Server domains?
Update: I configured the bean "jdbcMetaConfiguration" in applicationContext-semanticLayer.xml to map the binary column to java.io.InputStream and I still get the same error. The mapping itself worked: when I view the XML file, "null" is replaced with "java.io.InputStream", but I still get the IllegalArgumentException.

EDIT: Nope, it cannot be done. Sorry.
Original [overly-optimistic] answer:
The Ad Hoc editor cannot handle images (or other binary data types). It would be nice if it ignored them more gracefully... but it's no surprise that you cannot use them there.
But it should be possible to define the field as some sort of binary type (an image, an array of bytes, or just an Object) and then use it in iReport.
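For instance, the kind of hand edit to the exported schema I had in mind was along these lines (untested, and per the edit above it did not pan out):
<field id="FileData" type="java.lang.Object" />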

Related

Solr DateFormatTransformer

I've set up and configured this DateFormatTransformer:
<field column="data_ini" name="data_inici_dt" dateTimeFormat="dd-MM-yyyy" locale="ca-ES"/>
In the database, data_ini is a varchar whose data is formatted like dd-MM-yyyy.
It works well. Nevertheless, in some cases the database contains strings like "1 -12-1982" or "23- 1-1987".
Some ideas come to mind:
Try to add more compatible formats to the transformer; I don't know if that's possible.
When a format is not recognized, add a fixed date value instead.
Is there any way to handle this issue?
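One untested idea along these lines: chain a RegexTransformer in front of the DateFormatTransformer so the stray spaces are stripped before the date is parsed. The entity name and query below are placeholders; once the blanks are gone, dd-MM-yyyy should still accept single-digit days and months, since SimpleDateFormat is not strict about field width when parsing.
<entity name="dates" query="select data_ini from mytable"
        transformer="RegexTransformer,DateFormatTransformer">
  <!-- strip blanks so '1 -12-1982' becomes '1-12-1982' before parsing -->
  <field column="data_ini" regex="\s+" replaceWith=""/>
  <field column="data_ini" name="data_inici_dt" dateTimeFormat="dd-MM-yyyy" locale="ca-ES"/>
</entity>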

TYPO3 Exception: Could not determine pid

While trying to add a new fe_users record, on save I get
(1/1) Exception
Could not determine pid
It's TYPO3 9.5.20.
We already have a lot of entries in multiple folders which could be edited without problems.
But those records were imported (by EXT:ig_ldap_sso_auth or via the mysql terminal).
These records are only used for display (no login is used).
What configuration is missing or could be wrong?
EDIT:
As @biesior mentioned: the error message does not come from the core but from an extension, EXT:solrfal (in version 7.0.0).
The real error was not in EXT:solrfal; this extension just hides the underlying error behind a misleading message.
The real reason was a wrong database configuration for the table fe_users. Although it is not possible in SQL to declare a default value for fields of type text (any given value is ignored), TYPO3 expects a default value if one is configured. As none is returned from the database, TYPO3 assumes an error, and EXT:solrfal hooks into the error handling and reports a misleading error.
I just ran into the same problem.
The error message was raised in solrfal's ConsistencyAspect::getRecordPageId(), which was called by ConsistencyAspect::getDetectorsForSiteExclusiveRecord(). I remembered that I had added various table names to siteExclusiveRecordTables in the extension settings of solrfal, and there was indeed one table without a pid. After removing this table from the list, deleting files works again.

Is there a way to find out the details of a data type error in Snowflake?

I am pretty new to the Snowflake cloud offering and was just trying to load a simple .csv file from an AWS S3 staging area to a table in Snowflake using the COPY command.
Here is the command I used:
copy into "database name"."schema"."table name"
from @S3_ACCESS
file_format = (format_name = format name);
When I run the above command, I get the following error: Numeric value '63' is not recognized
Please see the attached image. I'm not sure what this error means, and I'm not able to find any lead in the Snowflake UI itself to find out what could be wrong with the value.
Thanks in Advance!
The error says it was expecting a numeric value but got '63', and that value cannot be converted to a number.
From the image you shared, I can see that there are some weird characters around the 6 and the 3. There could be an issue with the file encoding, or the data is corrupted.
Please check the ENCODING option for the file format:
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#format-type-options-formattypeoptions
By the way, I recommend always using UTF-8.
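For instance, a file format that pins the encoding explicitly could look like this (a sketch only; the format name and options are illustrative, so adjust them to your file):
CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = CSV
  ENCODING = 'UTF8'
  SKIP_HEADER = 1;

copy into "database name"."schema"."table name"
from @S3_ACCESS
file_format = (format_name = 'my_csv_format');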

Failed to convert property value of type 'net.sourceforge.jtds.jdbc.ClobImpl' to required type 'java.lang.String'

I have a table in SQL Server with one field, annotation, whose data type is text.
I used Spring's JdbcTemplate to get the annotation field's data, and then used the following API (BaseRowMapper) to map the table column to a Java POJO.
While retrieving the data I get the exception below.
org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type 'net.sourceforge.jtds.jdbc.ClobImpl' to required type 'java.lang.String' for property 'annotation'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [net.sourceforge.jtds.jdbc.ClobImpl] to required type [java.lang.String] for property 'annotation': no matching editors or conversion strategy found
at org.springframework.beans.BeanWrapperImpl.convertIfNecessary(BeanWrapperImpl.java:464)
at org.springframework.beans.BeanWrapperImpl.convertForProperty(BeanWrapperImpl.java:495)
at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:1099)
at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:884)
at com.ecw.vascular.model.BaseRowMapper.mapRow(BaseRowMapper.java:39)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60)
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:651)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:589)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:639)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:664)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:704)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:179)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:185)
at com.ecw.vascular.dao.BaseDao.executeQuery(BaseDao.java:113)
at com.ecw.vascular.dao.ObservationDao.findByPatientAndEncounter(ObservationDao.java:64)
The problem lies in the datatype used in the SQL Server database to store a string with a maximum length of 16 bytes.
The text type can store up to 2 GB of variable-width character data, so JdbcTemplate uses a CLOB to retrieve the data from that column.
Since the maximum length is 16, one solution is to change the datatype on the database to a more appropriate varchar.
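For example (the table name here is a guess based on the DAO in the stack trace; use the real table and the real maximum length):
ALTER TABLE observation ALTER COLUMN annotation VARCHAR(16);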
If this is not an option, then, since the error mentions a jtds implementation of CLOB, you can try changing the JDBC connection string to
jdbc:jtds:sqlserver://ServerName;useLOBs=false;DatabaseName=xxx;instance=xxx
A third option, highly not recommended, would be to use a Clob instead of a String inside the Java bean, with all the related changes needed to deal with database LOBs.
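If neither the schema change nor the connection string change is possible, the row mapper itself can read the column as a String. A minimal sketch, assuming a simple POJO; the class and setter names are illustrative, not the actual BaseRowMapper code:
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

// Illustrative POJO; the real bean is whatever BaseRowMapper populates.
class Observation {
    private String annotation;
    public void setAnnotation(String annotation) { this.annotation = annotation; }
    public String getAnnotation() { return annotation; }
}

public class ObservationRowMapper implements RowMapper<Observation> {
    @Override
    public Observation mapRow(ResultSet rs, int rowNum) throws SQLException {
        Observation o = new Observation();
        // ResultSet.getString() lets the driver materialize the text/CLOB column
        // directly as a String, bypassing the bean property conversion that fails.
        o.setAnnotation(rs.getString("annotation"));
        return o;
    }
}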

TClientDataset Widestring field doubles in size after reading NVARCHAR from database

I'm converting one of our Delphi 7 projects to Delphi XE3 because we want to support Unicode. We're using MS SQL Server 2008/R2 as our database server. After changing some database fields from VARCHAR to NVARCHAR (and the corresponding fields in the accompanying ClientDatasets to ftWideString), random crashes started to occur. While debugging I noticed some unexpected behaviour in TClientDataset/DbExpress:
For an NVARCHAR(10) database column I manually create a TWideStringField in a ClientDataset and set its 'Size' property to 10. The 'DataSize' property of the field tells me 22 bytes are needed, which is expected since TWideStringField's encoding is UTF-16: two bytes per character plus some space for storing the length (10 × 2 + 2 = 22). Now when I call 'CreateDataset' on the ClientDataset and write the dataset to XML (using .SaveToFile), the field is defined in the XML file as
<FIELD WIDTH="20" fieldtype="string.uni" attrname="TEST"/>
which looks ok to me.
Now, instead of calling .CreateDataset, I call .Open on the TClientDataset so that it gets its data through the linked components -> TDatasetProvider -> TSQLDataset (.CommandText = a simple select * from table) -> TSQLConnection. When I inspect the properties of the field in my watch list, Size is still 10 and DataSize is still 22. After saving to an XML file, however, the field is defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
...the width has doubled?
Finally, if I call .Open on the TClientDataset without creating any field definitions in advance at all, the Size of the field will afterwards be 20 (incorrect!) and DataSize 42. After saving to XML, the field is still defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
Does anyone have any idea what is going wrong here?
Check the field type and its size at the SQLCommand component (the one before the DatasetProvider).
Size doubling may be the result of two implicit "conversions": first, the server provides NVARCHAR data which is stored into an ANSI string field, so every byte becomes a separate character; second, that value is stored into the ClientDataset's field of type WideString, where each character becomes 2 bytes and the size doubles. That would match what you see: 10 NVARCHAR characters arrive as 20 single-byte characters, which then need a width of 40 bytes as UTF-16.
Note that in prior versions of Delphi a string field size mismatch between the ClientDataset's field and the corresponding Query/Command field did not result in an exception, but starting from one of the XE versions it often results in an access violation. So you have to check string field sizes carefully during migration.
Sounds like the change of the column datatype has created unexpected issues for you. My suggestion is to:
1. Back up the table (there are multiple ways of doing this; pick your poison, figuratively speaking).
2. Delete the table.
3. Recreate the table.
4. Import the data from the old table into the newly created table. See if that helps.
SQL tables do not like it when column datatypes get changed, and unexpected issues may arise from doing just that. So try that; worst case, you have wasted maybe ten minutes of your time on a possible solution.
