Add TDengine as a data source in IDEA - Database

I added TDengine as an IDEA data source, but I found some problems.
1. When I executed a query and got the result, IDEA still kept executing the query. With MySQL, by comparison, execution terminates as soon as the result is returned.
2. Also, compared with MySQL, I can only get information about databases and tables; there is no information about the columns.
The exception from the second screenshot is:
<failed to load>
java.sql.SQLException: ERROR (2315): unknown taos type in tdengine
at com.taosdata.jdbc.TSDBError.createSQLException(TSDBError.java:69)
at com.taosdata.jdbc.TSDBError.createSQLException(TSDBError.java:56)
at com.taosdata.jdbc.TSDBConstants.taosType2JdbcType(TSDBConstants.java:131)
at com.taosdata.jdbc.TSDBResultSetMetaData.getColumnType(TSDBResultSetMetaData.java:151)
in RemoteResultSetMetaDataImpl.getFixedColumnType(RemoteResultSetMetaDataImpl.java:105)

Using taos-jdbcdriver-2.0.34 solves this problem.
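To verify the fix, here is a minimal sketch (not from the original post) that exercises the metadata call which was throwing; it assumes a local TDengine instance with the default credentials, and the table name is a placeholder:
import java.sql.*;

public class TaosMetadataCheck {
    public static void main(String[] args) throws SQLException {
        // Host, database, table and credentials are placeholders; adjust to your setup
        try (Connection conn = DriverManager.getConnection(
                "jdbc:TAOS://localhost:6030/test", "root", "taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from my_table limit 1")) {
            ResultSetMetaData meta = rs.getMetaData();
            // With 2.0.34, getColumnType no longer throws "unknown taos type in tdengine"
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                System.out.println(meta.getColumnName(i) + " -> " + meta.getColumnType(i));
            }
        }
    }
}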

Is there a way to find out details of a data type error in Snowflake?

I am pretty new to the Snowflake Cloud offering and was just trying to load a simple .csv file from an AWS S3 staging area into a table in Snowflake using the COPY command.
Here is the command I used:
copy into "database name"."schema"."table name"
from @S3_ACCESS
file_format = (format_name = format name);
When I run the above code, I get the following error: Numeric value '63' is not recognized
Please see the attached image. I'm not sure what this error is, and I'm not able to find any lead in the Snowflake UI itself as to what could be wrong with the value.
Thanks in advance!
The error says it was expecting a numeric value, but it got "63", and this value cannot be converted to a numeric value.
From the image you shared, I can see that there are some weird characters around the 6 and the 3. There could be an issue with the file encoding, or the data is corrupted.
Please check the encoding option for the file format:
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#format-type-options-formattypeoptions
By the way, I recommend always using UTF-8.
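A minimal sketch of pinning the encoding in a named file format and using it in the COPY; the format name and the CSV options are illustrative, not taken from the original post:
create or replace file format my_utf8_csv
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  encoding = 'UTF8';  -- decode the staged file as UTF-8

copy into "database name"."schema"."table name"
from @S3_ACCESS
file_format = (format_name = 'my_utf8_csv');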

Potential Loss of Data reading from CSV with decimal

I have read a large number of questions and answers on this and I still can't get it to work.
I have a csv like the following:
Field1;Field2;Field3
CCC;DDD;0.03464
EEE;FFF;0.08432
...
When I attach a Flat File Source in SSIS, it gives me the following:
[Sample CSV [2]] Error: Data conversion failed. The data conversion
for column "Field3" returned status value 2 and status text "The value
could not be converted because of a potential loss of data.".
I have already changed the output to DT_DECIMAL, with 5 as the scale value, in the advanced properties, but I still get the same error.
Any clue on this?
It seems like a simple solution that I am somehow overlooking.
Thanks!
There may be values that cannot be converted to DT_DECIMAL. You can detect the values that cause this error by using the Flat File Source's error output, which redirects the rows that cause errors when loading data.
Helpful Links
ERROR HANDLING IN SSIS WITH AN EXAMPLE STEP BY STEP
SSIS error when loading data from flat files

SSIS Package error- SSIS Error Code DTS_E_PROCESSINPUTFAILED

An SSIS job has failed, posting the error below:
[Product Sales [749]] Error: An exception has occurred during data insertion, the message returned from the provider is: The given value of type String from the data source cannot be converted to type float of the specified target column.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Product Sales" (749) failed with error code 0xC020844B while processing input "ADO NET Destination Input" (752). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
Can someone please advise if you have come across this kind of error?
Thank you
Your error message is explaining the issue to you: "The given value of type String from the data source cannot be converted to type float of the specified target column."
Open the component that is failing and review the metadata. You have a float column somewhere and you are passing this column a string that can't be converted to a float, such as empty space or an alphanumeric value.
If you want to ensure these values are floats, you can add a script component above the one that is failing and write some code to ensure the value is properly sanitized:
string input = "1.1"; //Replace with your input buffer value
float result;
float.TryParse(input, out result); //Result = 0.0 if value was not parsed
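In context, that check would live in the script component's per-row method. A sketch, assuming a hypothetical input column SalesAmount (string) and output column SalesAmountFloat, neither of which is from the original post:
// Runs once per row in an SSIS script component (transformation);
// the column names are hypothetical - replace them with your buffer's columns
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    float result;
    if (!Row.SalesAmount_IsNull && float.TryParse(Row.SalesAmount, out result))
        Row.SalesAmountFloat = result;   // clean numeric text
    else
        Row.SalesAmountFloat = 0.0f;     // blanks or alphanumeric values fall back to 0
}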
Please add a Data Conversion task between the source and destination to change the data type from string to float; it should resolve your issue.
If you are still facing the issue, let me know the exact problem, what the source is, and which SSIS task you are using.
Use an OLE DB source and destination instead of ODBC, try to shorten the column names, avoid parentheses in column names, and use the table with fast load option; this should solve it. I had the same problem when loading from an Analysis Services cube through a DAX query into a SQL table on my local machine.

pgadmin4 - Download Query result as CSV

I wrote a query using the Query Tool in pgAdmin 4. Now I want to download the results as a CSV. I've got two problems with that.
The 'Download as CSV' button sometimes does not work, especially when the result contains 1000+ rows.
When I finally have a CSV and I want to open it, this message is all I see:
"'ascii' codec can't encode character u'\xbb' in position 26: ordinal not in range(128)"
Since I'm fairly new to all of this, could someone enlighten me as to what is wrong?
On your questions:
The broken CSV download was a known bug that was fixed in pgAdmin v1.5 (bug summary at the login-required https://redmine.postgresql.org/issues/2253; the gist is that there were multiple issues with exporting JSON data and Unicode). If you're not on that version, try updating and see whether the issue persists.
You didn't specify where you're seeing that encoding message, but the character referenced in the error is a "Right-Pointing Double Angle Quotation Mark" (») (http://www.codetable.net/hex/bb).
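If the download button keeps failing, one workaround (not from the original answer) is to export from psql instead of pgAdmin; a sketch, with the query and file name as placeholders:
-- \copy runs client-side, writing the result to a local UTF-8 encoded CSV
\copy (SELECT * FROM your_table) TO 'result.csv' WITH (FORMAT csv, HEADER, ENCODING 'UTF8')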

Access database in Physionet's ptbdb by Matlab

I set up the system first with:
% Remove any old copy of the toolbox from the path, then download and install it
[old_path] = which('rdsamp');
if (~isempty(old_path)), rmpath(old_path(1:end-8)); end
wfdb_url = 'http://physionet.org/physiotools/matlab/wfdb-app-matlab/wfdb-app-toolbox-0-9-3.zip';
[filestr, status] = urlwrite(wfdb_url, 'wfdb-app-toolbox-0-9-3.zip');
unzip('wfdb-app-toolbox-0-9-3.zip');
cd mcode
addpath(pwd); savepath
I am trying to read databases from Physionet.
I have successfully accessed the database mitdb with
[tm,sig]=rdsamp('mitdb/100',1)
but when I try to access the database ptbdb with
[tm,sig]=rdsamp('ptbdb/100',1)
I get the error
Warning: Could not get signal information. Attempting to read signal without buffering.
> In rdsamp at 107
Error: Cannot convert to double:
init: can't open header for record ptbdb/100
Error using rdsamp (line 145)
Java exception occurred:
java.lang.NumberFormatException: Cannot convert
at org.physionet.wfdb.Wfdbexec.execToDoubleArray(Unknown Source)
The first error message refers to these lines in rdsamp.m:
if (isempty(N))
    [siginfo, ~] = wfdbdesc(recordName);
    if (~isempty(siginfo))
        N = siginfo(1).LengthSamples;
    else
        warning('Could not get signal information. Attempting to read signal without buffering.')
    end
end
The condition if(~isempty(siginfo)) being false means that siginfo is empty, i.e. there is no signal information. Why? I think there is no access to the database.
I think the other errors follow from that.
So the error must come from this line:
[siginfo,~]=wfdbdesc(recordName);
What does the tilde (~) inside the brackets mean?
How can you get data from ptbdb with Matlab?
So: does this error mean that the connection to the database cannot be established, or that the data does not exist in the database?
It would be very nice to know how to check whether you have a connection to the database, as you can in Postgres. That would make debugging much easier.
If you run physionetdb('ptbdb',1) it will download the files to your computer. You will then be able to see the available records in <current-dir>/ptbdb/.
Source: the physionetdb function documentation. You are interested in the DoBatchDownload parameter.
After downloading, I believe every command in the toolbox checks whether you have the files locally before fetching from the server (as long as you give the function the correct path to the local files).
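A sketch of that workflow; the record name is just one of the ptbdb records (they appear in the listing further down), so adjust as needed:
% Batch-download the ptbdb records (the second argument enables DoBatchDownload),
% then read the first signal of one record
physionetdb('ptbdb', 1);
[tm, sig] = rdsamp('ptbdb/patient001/s0014lre', 1);
plot(tm, sig);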
The problem is that the record "100" does not exist in the database ptbdb.
I finally ran the following successfully, after waiting 35 minutes on 100 Mb cable broadband:
db_list = physionetdb('ptbdb')
and so far have incomplete data, up to patient 54; there should be 294 patients.
'ptbdb/patient001/s0014lre' 'ptbdb/patient001/s0014lre' ... cut ...
The main developer Ikaro's answer helped me keep waiting:
The WFDB Toolbox connects to PhysioNet's file server. The databases accessible through the WFDB Toolbox are not SQL databases; they consist of flat files. The error message that you are getting regarding ptbdb/100 is because you are attempting to get a record that does not exist in the database.
For more information on a particular database or record in PhysioNet, please type:
help physionetdb
and
physionetdb('ptbdb')
This flat-file system is really a bottleneck in the system.
It would be a good time to change to SQL.
