Accessing PhysioNet's ptbdb database with MATLAB

I set up the system first by
[old_path]=which('rdsamp');if(~isempty(old_path)) rmpath(old_path(1:end-8)); end
wfdb_url='http://physionet.org/physiotools/matlab/wfdb-app-matlab/wfdb-app-toolbox-0-9-3.zip';
[filestr,status] = urlwrite(wfdb_url,'wfdb-app-toolbox-0-9-3.zip');
unzip('wfdb-app-toolbox-0-9-3.zip');
cd mcode
addpath(pwd);savepath
I am trying to read databases from Physionet.
I have successfully read from one database, mitdb, with
[tm,sig]=rdsamp('mitdb/100',1)
but when I try to read from the database ptbdb with
[tm,sig]=rdsamp('ptbdb/100',1)
I get the error
Warning: Could not get signal information. Attempting to read signal without buffering.
> In rdsamp at 107
Error: Cannot convert to double:
init: can't open header for record ptbdb/100
Error using rdsamp (line 145)
Java exception occurred:
java.lang.NumberFormatException: Cannot convert
at org.physionet.wfdb.Wfdbexec.execToDoubleArray(Unknown Source)
The first error message refers to these lines in rdsamp.m:
if(isempty(N))
    [siginfo,~]=wfdbdesc(recordName);
    if(~isempty(siginfo))
        N=siginfo(1).LengthSamples;
    else
        warning('Could not get signal information. Attempting to read signal without buffering.')
    end
end
The condition if(~isempty(siginfo)) evaluating to false means that siginfo is empty, i.e. no signal information was returned. Why? No access to the database, I think.
I think the other errors follow from that.
So the error must follow from this line
[siginfo,~]=wfdbdesc(recordName);
What does the tilde (~) inside the brackets mean?
How can you get data from ptbdb with MATLAB?
So
Does this error mean that the connection cannot be established to the database?
or
that such data does not exist in the database?
It would be very nice to know how to check whether you have a connection to the database, as you can in Postgres. That would make debugging much easier.

If you run physionetdb('ptbdb',1) it will download the files to your computer. You will then be able to see the available records in <current-dir>/ptbdb/
Source: physionetdb function documentation. You are interested in the DoBatchDownload parameter.
After downloading it, I believe every command from the toolbox will check if you have the files locally before fetching from the server (as long as you give the function the correct path to the local files).

The problem is that the record "100" does not exist in the database ptbdb.
I finally ran this successfully, after waiting 35 minutes on a 100 Mb cable connection:
db_list = physionetdb('ptbdb')
and got incomplete data, reaching only patient 54; there should be 294 patients.
'ptbdb/patient001/s0014lre' 'ptbdb/patient001/s0014lre' ... cut ...
The answer from Ikaro, the main developer, helped me be patient enough to wait:
The WFDB Toolbox connects to PhysioNet's file server. The databases
accessible through the WFDB Toolbox are not SQL databases; they consist
of flat files. The error message that you are getting regarding the
record ptbdb/100 is because you are attempting to get a record that
does not exist in the database.
For more information on a particular database or record in PhysioNet
please type:
help physionetdb
and
physionetdb('ptbdb')
This flat-file system is a real bottleneck.
It would be a good time to change to SQL.
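Since the toolbox just fetches flat files over HTTP, a rough connectivity and existence check is possible outside MATLAB. A minimal sketch with curl; the base path below follows the classic PhysioBank layout and is an assumption, so adjust it if the server layout has changed:

```shell
# Build the URL of a record's header (.hea) file and probe it.
base="https://physionet.org/physiobank/database"   # assumed classic PhysioBank path
record="ptbdb/patient001/s0014lre"                 # a record listed by physionetdb('ptbdb')
url="$base/$record.hea"
echo "$url"

# Probe it (requires network access, so commented out here):
# curl -s -o /dev/null -w '%{http_code}\n' "$url"   # 200 = record found, 404 = no such record
```

A 404 on the header file would distinguish "record does not exist" from "no connection at all" (connection failures make curl exit non-zero instead).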

TYPO3 Exception: Could not determine pid

While trying to add a new fe_users record, on save I get
(1/1) Exception
Could not determine pid
It's TYPO3 9.5.20.
We already have a lot of entries in multiple folders which could be edited without problems.
But those records were imported (by EXT:ig_ldap_sso_auth or with the mysql terminal).
These records are only displayed (no login is used).
What configuration is missing or could be wrong?
EDIT:
As @biesior mentioned: the error message does not come from the core but from an extension. It's EXT:solrfal (in version 7.0.0).
The real error was not in EXT:solrfal; this extension just hides the error behind a misleading message.
The real cause was a wrong database configuration for the table fe_users. Although it is not possible in SQL to set a default value for columns of type text (any given value is ignored), TYPO3 expects a default value if one is configured. As this is not returned from the database, TYPO3 assumes an error, and EXT:solrfal hooks into the error handling and reports a wrong error.
Hi, I just got the same problem.
The error message was raised in solrfal's ConsistencyAspect::getRecordPageId(), which was called by ConsistencyAspect::getDetectorsForSiteExclusiveRecord(). I remembered that I had added various table names to siteExclusiveRecordTables in the Extension Settings of solrfal, and indeed there was one table without a pid. After removing this table from the list, deleting files works again.

AWS RDS MYSQL import db Access Denied

I cannot import a database into AWS RDS because of these commands in my SQL file:
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
Are they important? Without them there is no error.
The log_bin_trust_function_creators parameter is set to 1 in a custom parameter group.
FYI: MySQL 5.7 and 8 give the same error:
ERROR 1227 (42000) at line 20: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation
SET @@SESSION.SQL_LOG_BIN=0;
This is telling MySQL not to put these INSERT statements into the binary log. If you do not have binary logs enabled (not replicating), then it's not an issue to remove it. As far as I know, there is no way to enable this in RDS; I'm actually trying to figure out a way, which is how I found your question.
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
Did you execute a RESET MASTER on the database where the dump originated from? Check here for an explanation of this value: gtid_purged
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
This sets the @@SESSION.SQL_LOG_BIN variable back to its original value; you should've seen another line earlier in the dump like: SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
If you're simply recovering this table into a new database that isn't writing to a binary log (for replication), it's safe to remove these lines. Hope this helps!
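If you just want the import to go through, those statements can be stripped from the dump before loading it. A minimal sed sketch; dump.sql and dump_rds.sql are placeholder file names, and the heredoc below fakes a tiny dump fragment for demonstration:

```shell
# Sample mysqldump fragment (placeholder) containing the offending lines plus real data.
cat > dump.sql <<'EOF'
SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
INSERT INTO t VALUES (1);
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
EOF

# Drop the replication/GTID SET statements that RDS rejects;
# everything else (the actual data) passes through untouched.
sed -E '/^SET @@(SESSION\.SQL_LOG_BIN|GLOBAL\.GTID_PURGED)/d; /@MYSQLDUMP_TEMP_LOG_BIN/d' dump.sql > dump_rds.sql
cat dump_rds.sql
```

Alternatively, re-exporting with mysqldump's --set-gtid-purged=OFF option avoids emitting the GTID lines in the first place.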

SQL Server 2016 R Services: sp_execute_external_script returns 0x80004005 error

I run some R code after querying 100M records and get the following error after the process runs for over 6 hours:
Msg 39004, Level 16, State 19, Line 300
A 'R' script error occurred during execution of 'sp_execute_external_script'
with HRESULT 0x80004005.
HRESULT 0x80004005 appears to be associated in Windows with Connectivity, Permissions or an "Unspecified" error.
I know from logging in my R code that the process never reaches the R script at all. I also know that the entire procedure completes after 4 minutes on a smaller number of records, for example, 1M. This leads me to believe that this is a scaling problem or some issue with the data, rather than a bug in my R code. I have not included the R code or the full query for proprietary reasons.
However, I would expect a disk or memory error to display a 0x80004004 Out of memory error if that were the case.
One clue I noticed in the SQL ERRORLOG is the following:
SQL Server received abort message and abort execution for major error : 18
and minor error : 42
However the time of this log line does not coincide with the interruption of the process, although it does occur after it started. Unfortunately, there is precious little on the web about "major error 18".
A SQL Trace when running from SSMS shows the client logging in and logging out every 6 minutes or so, but I can only assume this is normal keepalive behaviour.
The sanitized sp_execute_external_script call:
EXEC sp_execute_external_script
  @language = N'R'
, @script = N'#We never get here
#returns name of output data file'
, @input_data_1 = N'SELECT TOP 100000000 * FROM DATA'
, @input_data_1_name = N'x'
, @output_data_1_name = N'output_file_df'
WITH RESULT SETS ((output_file varchar(100) not null))
Server Specs:
8 cores
256 GB RAM
SQL Server 2016 CTP 3
Any ideas, suggestions or debugging hints would be greatly appreciated!
UPDATE:
I set TRACE_LEVEL=3 in rlauncher.config to turn on a higher level of logging and re-ran the process. The log reveals a cleanup process that ran, removing session files, at the time the entire process failed after 6.5 hours.
[2016-05-30 01:35:34.419][00002070][00001EC4][Info] SQLSatellite_LaunchSatellite(1, A187BC64-C349-410B-861E-BFDC714C8017, 1, 49232, nullptr) completed: 00000000
[2016-05-30 01:35:34.420][00002070][00001EC4][Info] < SQLSatellite_LaunchSatellite, dllmain.cpp, 223
[2016-05-30 08:04:02.443][00002070][00001EC4][Info] > SQLSatellite_LauncherCleanUp, dllmain.cpp, 309
[2016-05-30 08:04:07.443][00002070][00001EC4][Warning] Session A187BC64-C349-410B-861E-BFDC714C8017 cleanup wait failed with 258 and error 0
[2016-05-30 08:04:07.444][00002070][00001EC4][Info] Session(A187BC64-C349-410B-861E-BFDC714C8017) logged 2 output files
[2016-05-30 08:04:07.444][00002070][00001EC4][Warning] TryDeleteSingleFile(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017\Rscript1878455a2528) failed with 32
[2016-05-30 08:04:07.445][00002070][00001EC4][Warning] TryDeleteSingleDirectory(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017) failed with 32
[2016-05-30 08:04:08.446][00002070][00001EC4][Info] Session A187BC64-C349-410B-861E-BFDC714C8017 removed from MSSQLSERVER06 user
[2016-05-30 08:04:08.447][00002070][00001EC4][Info] SQLSatellite_LauncherCleanUp(A187BC64-C349-410B-861E-BFDC714C8017) completed: 00000000
It appears the only way to allow my long-running process to continue is to:
a) Extend the Job Cleanup wait time to allow the job to finish
b) Disable the Job Cleanup process
I have thus far been unable to find the value that sets the Job Cleanup wait time in the MSSQLLaunchpad service.
While a JOB_CLEANUP_ON_EXIT flag exists in rlauncher.config, setting it to 0 has no effect. The service seems to reset it to 1 when it is restarted.
Again, any suggestions or assistance would be much appreciated!
By default, SQL Server reads all the data into R memory as a data frame before starting execution of the R script. Given that the script works with 1M rows but fails to start with 100M rows, this could be an out-of-memory error. To resolve memory issues (other than increasing memory on the machine or reducing the data size), you can try one of these solutions:
Increase the memory allocation for R process execution using the sys.resource_governor_external_resource_pools max_memory_percent setting. By default, SQL Server limits R process execution to 20% of memory.
Use streaming execution for the R script instead of loading all data into memory. Note that streaming can only be used in cases where the output of the R script doesn't depend on reading or looking at the entire set of rows.
The warnings in RLauncher.log about data cleanup happen after the R script execution; they can be safely ignored and are probably not the root cause of the failures you are seeing.
Unable to resolve this issue in SQL, I simply avoided the SQL Server Launchpad service, which was interrupting the processing, and pulled the data from SQL using the R RODBC library. The pull took just over 3 hours (instead of 6+ using sp_execute_external_script).
This implicates the SQL Launchpad service, and suggests that memory was not the issue.
Please try your scenario on SQL Server 2016 RTM. There have been many functional and performance fixes made since CTP3.
For more information on how to get SQL Server 2016 RTM, check out the "SQL Server 2016 is generally available today" blog post.
I had almost the same issue with SQL Server 2016 RTM-CU1. My query failed with error 0x80004004 instead of 0x80004005, and it failed beginning at 10,000,000 records; that could be related to my having only 16 GB of memory and/or different data.
I got around it by using a field list instead of "*". Even if the field list contains all the fields from the data source (a rather complicated view in my case), a query with an explicit field list always succeeds, while "SELECT TOP x * FROM ..." always fails for some large x.
I've had a similar error (0x80004004), and the problem was that one of the rows in one of the columns contained a "very" special character (I say "very" because other special characters did not cause this error).
When I replaced 'Folkelånet Telefinans' with 'Folkelanet Telefinans', the problem went away.
In your case, maybe at least one of the values in the last 99M rows contains a character like that, and you just have to replace it. I hope Microsoft will resolve this issue at some point.
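To locate such rows before the transfer, one option is to export a sample of the suspect column to a text file and scan it for non-ASCII bytes. A sketch using GNU grep; sample.txt is a placeholder for your exported data, seeded here with the value from the answer above:

```shell
# Tiny sample file: line 1 contains the problematic value, line 2 the cleaned one.
printf 'Folkelånet Telefinans\nFolkelanet Telefinans\n' > sample.txt

# Report line numbers containing any byte outside the 7-bit ASCII range
# (-P requires GNU grep's PCRE support).
grep -nP '[^\x00-\x7F]' sample.txt
```

Only line 1 is reported, since 'å' is encoded as multi-byte UTF-8 with bytes above 0x7F.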

MATLAB - Database Handle is Empty

I have the following problem. At work we have personal computers running Windows 7 with MATLAB (including the Database Toolbox), Oracle and so on. I took over a process which involves a MATLAB script that connects to the Oracle database. The script works fine on every computer in the department except mine. Sadly, IT told me that every PC is configured identically and that I have to find the mistake on my own.
So I started "debugging" by checking the connection struct MATLAB creates when it connects via
conn = database(instance,username,password)
It appears that the content of the structure is the same as everyone else's, except that the handle is empty:
val =
Instance: '***'
UserName: '*'
Driver: []
URL: []
Constructor: [1x1 com.mathworks.toolbox.database.databaseConnect]
Message: [1x128 char]
Handle: 0
TimeOut: 0
AutoCommit: 'off'
Type: 'Database Object'
on all other systems the handle is set to:
sun.jdbc.odbc.JdbcOdbcConnection
So my question is: do I have to configure MATLAB, or is the JDBC/ODBC driver missing? I already checked System Preferences/Administration/ODBC Sources, but it seems to be the same as everywhere else.
Does someone know how I can track down the source of this issue? Any help or hint is highly appreciated.
Thanks and best regards
stephan
After some research, here is the way one can figure it out. It is actually very easy, but I did not try the obvious...
First, if the connection
connection = database(...)
cannot be created, type
connection.message
in the MATLAB console. This message will give you additional feedback on the error. In my case, the DSN entries for the Oracle databases were empty. After adding them through System Preferences it worked as expected.

How do I get more info for 'invalid format' error with onpladm on Windows?

This is my first time trying to use Informix. I have around 160 tables to load, using pipe-delimited text files. We have an older series of batch files that a previous developer wrote to load Informix data, but they're not working with the new version of Informix (11.5) that I installed. I'm running it on a Windows 2003 server.
I've modified the batch file to execute the onpladm commands for one file, so this batch file looks like this:
onpladm create project dif31US-1-table-Load
onpladm create object -F diffdbagidaxsid.dev
onpladm create object -F diffdbagidaxsid.fmt
onpladm create object -F diffdbagidaxsid.map
onpladm create object -F diffdbagidaxsid.job
When I run this, it successfully creates the project and device array,
but I get an error creating the format. The only error I get is:
Create object DELIMITEDFORMAT diffile1fmt failed!
Invalid format!
The diffdbagidaxsid.fmt file is as follows:
BEGIN OBJECT DELIMITEDFORMAT diffile1fmt
PROJECT dif31US-1-table-Load
CHARACTERSET ASCII
RECORDSTART
RECORDEND
FIELDSTART
FIELDEND
FIELDSEPARATOR |
BEGIN SEQUENCE
FIELDNAME agid
FIELDTYPE Chars
END SEQUENCE
BEGIN SEQUENCE
FIELDNAME axsid
FIELDTYPE Chars
END SEQUENCE
END OBJECT
As you can see, it is only 2 columns. It originally had nothing following the CHARACTERSET. I've tried it with ASCII, and with the numeric code for ASCII, and still get the same error.
Is there any way to get a more verbose error message?
Also, can anyone recommend a decent (meaning active community) forum for Informix? I've tried the old comp.databases.informix newsgroup, http://www.dbforums.com, the 'official' forum on IBM DeveloperWorks, and here of course. None has very much activity. We have to do this testing because we have customers (or maybe just one big one) who use it, so we have to test our data and API against it.
Succinctly, I don't think there is a way to get much more information out of onpladm.
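Absent a verbose mode, one low-tech sanity check is to confirm that the number of FIELDNAME entries declared in the .fmt file matches the number of pipe-delimited fields actually present in the data file. A sketch; the format file is abbreviated from the question and data.unl is a placeholder name for the data file:

```shell
# Minimal stand-ins for the format and data files (abbreviated/placeholder).
cat > diffdbagidaxsid.fmt <<'EOF'
BEGIN OBJECT DELIMITEDFORMAT diffile1fmt
BEGIN SEQUENCE
FIELDNAME agid
FIELDTYPE Chars
END SEQUENCE
BEGIN SEQUENCE
FIELDNAME axsid
FIELDTYPE Chars
END SEQUENCE
END OBJECT
EOF
printf '123|456\n' > data.unl

# Compare the declared field count with the actual per-line field count.
fmt_fields=$(grep -c '^FIELDNAME' diffdbagidaxsid.fmt)
data_fields=$(head -1 data.unl | awk -F'|' '{print NF}')
echo "fmt=$fmt_fields data=$data_fields"
```

A mismatch between the two counts is one plausible cause of an unexplained "Invalid format!" rejection, and this check costs nothing to run.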