TYPO3 Exception: Could not determine pid - solr

While trying to add a new fe_users record, on save I get
(1/1) Exception
Could not determine pid
It's TYPO3 9.5.20.
We already have a lot of entries in multiple folders which could be edited without problems.
But those records were imported (by EXT:ig_ldap_sso_auth or via the MySQL terminal).
These records are only displayed (no login is used).
What configuration is missing or could be wrong?
EDIT:
As @biesior mentioned: the error message does not come from the core but from an extension. It's EXT:solrfal (in version 7.0.0).

The real error was not in EXT:solrfal. This extension just hides the error behind a misleading message.
The real reason was a wrong database configuration for the table fe_users. Although SQL does not allow a default value for columns of type TEXT (any value given is ignored), TYPO3 expects a default value if one is configured. As none is returned from the database, TYPO3 assumes an error, and EXT:solrfal hooks into the error handling and reports the wrong one.
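As a hedged illustration of that mismatch (table and column names here are invented, not from the original report): MySQL rejects, or depending on version and mode silently drops, a default value on TEXT columns, so TYPO3 can never read one back from the database.

-- Hypothetical sketch: defaults work on INT but not on TEXT columns;
-- MySQL complains "BLOB/TEXT column can't have a default value"
CREATE TABLE fe_users_demo (
    uid  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    pid  INT UNSIGNED DEFAULT 0 NOT NULL,
    note TEXT DEFAULT '',
    PRIMARY KEY (uid)
);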

Hi, I just got the same problem.
The error message was raised in solrfal's ConsistencyAspect::getRecordPageId(), which was called by ConsistencyAspect::getDetectorsForSiteExclusiveRecord(). I remembered that I had added various table names to siteExclusiveRecordTables in the extension settings of solrfal. And yes, there was one table without a pid. After removing this table from the list, deleting files works again.
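For reference, a hedged sketch of where that setting lives (TYPO3 9.5 keeps extension settings in LocalConfiguration.php; the table names and the comma-separated format below are assumptions for illustration, not taken from the original setup):

// LocalConfiguration.php (hypothetical excerpt): every table listed in
// siteExclusiveRecordTables must have a pid column, or solrfal fails as above
$GLOBALS['TYPO3_CONF_VARS']['EXTENSIONS']['solrfal']['siteExclusiveRecordTables']
    = 'tx_myext_domain_model_item,fe_users';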


SSIS script component *bug* – Column data type DT_DBDATE is not supported by the PipelineBuffer class

Does anyone know exactly why these kinds of script component issues can be "fixed" by deleting the component and re-adding the same code? Why would metadata change when you delete and re-add code? What happens inside the engine when this happens? What kind of issue could deleting a script component, copying the same code back in, and rewiring it ever fix?
I can reproduce at will with the following steps:
Took a working package with a script component and two output buffers. The script component has additional input columns and output columns set up for the second output buffer that are not yet populated by the source query (OLE DB source SQL command). Only one column in the second output buffer is populated from the source query.
Copied in a new source query with additional columns for the second output buffer.
Ran the package. Got the error message: Column data type DT_DBDATE is not supported by the PipelineBuffer class.
Commented out the two lines for the second output buffer and ran the package; the package ran successfully:
RedactedOutputBuffer.AddRow();
RedactedOutputBuffer.RedactedColumnName = Row.RedactedColumnName;
Uncommented the same two lines. The package still works, so it is now exactly the same as when it did not work.
Well, no, it's not really a bug; it's more that SSIS doesn't try to be clever and fit square pegs into round holes.
I mean, the error message is pretty clear, innit? The PipelineBuffer class doesn't have any methods to handle the DT_DBDATE data type, so it throws you an UnsupportedBufferDataTypeException:
The exception that is thrown when assigning a value to a buffer column
that contains the incorrect data type.
Anyway, since you didn't post your full error stack, it's hard to say exactly, but my guess is it tried to call SetDateTime (or GetDateTime) on your busted column. When you set your source query, it sets the pipeline buffer's data type to DT_DBDATE; but when you comment the lines out, let it run, and then uncomment them, it has converted the pipeline buffer's data type to DT_DBTIMESTAMP, which is compatible with SetDateTime (or whatever method from the PipelineBuffer class is throwing your error).
Anyway, this MSDN post should give you a little more flavor on what you're seeing, but the bottom line is: make sure the field coming out of your source query is correctly identified as the type you want it to be in your destination. If that's a SQL Server datetime field, you either need to cast it as datetime in your source query or use a Data Conversion component to explicitly cast it for your script component.
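For illustration, a hedged sketch of the source-query fix (table and column names are made up):

-- cast the date column so SSIS types it as DT_DBTIMESTAMP (datetime)
-- rather than DT_DBDATE (date-only)
SELECT
    OrderId,
    CAST(OrderDate AS datetime) AS OrderDate
FROM dbo.Orders;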

Access database in Physionet's ptbdb by Matlab

I set up the system first by
% remove any previously installed copy of the toolbox from the Matlab path
[old_path] = which('rdsamp');
if(~isempty(old_path)) rmpath(old_path(1:end-8)); end
% download and unpack the WFDB toolbox, then put its mcode folder on the path
wfdb_url = 'http://physionet.org/physiotools/matlab/wfdb-app-matlab/wfdb-app-toolbox-0-9-3.zip';
[filestr, status] = urlwrite(wfdb_url, 'wfdb-app-toolbox-0-9-3.zip');
unzip('wfdb-app-toolbox-0-9-3.zip');
cd mcode
addpath(pwd); savepath
I am trying to read databases from Physionet.
I have successfully read one database, mitdb, by
[tm,sig]=rdsamp('mitdb/100',1)
but I try unsuccessfully to reach the database ptbdb by
[tm,sig]=rdsamp('ptbdb/100',1)
and get the error
Warning: Could not get signal information. Attempting to read signal without buffering.
> In rdsamp at 107
Error: Cannot convert to double:
init: can't open header for record ptbdb/100
Error using rdsamp (line 145)
Java exception occurred:
java.lang.NumberFormatException: Cannot convert
at org.physionet.wfdb.Wfdbexec.execToDoubleArray(Unknown Source)
The first error message refers to these lines in rdsamp.m:
if(isempty(N))
    [siginfo,~]=wfdbdesc(recordName);
    if(~isempty(siginfo))
        N=siginfo(1).LengthSamples;
    else
        warning('Could not get signal information. Attempting to read signal without buffering.')
    end
end
The line if(~isempty(siginfo)) being false means that siginfo is empty, i.e. there is no signal information. Why? No access to the database, I think.
I think the other errors follow from that.
So the error must come from this line:
[siginfo,~]=wfdbdesc(recordName);
What does the tilde (~) mean here inside the brackets?
How can you get data from ptbdb with Matlab?
So:
Does this error mean that a connection to the database cannot be established,
or
that no such data exists in the database?
It would be very nice to know how to check whether you have a connection to the database, like in Postgres. That would make debugging much easier.
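A side note on the tilde question above: in Matlab, the ~ in an output argument list simply discards that output, so [siginfo,~]=wfdbdesc(recordName) keeps the signal information and throws away wfdbdesc's second output. A minimal sketch:

% keep the index of the maximum, discard the maximum value itself
[~, idx] = max([3 1 4]);   % idx is 3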
If you run physionetdb('ptbdb',1) it will download the files to your computer. You will then be able to see the available records in <current-dir>/ptbdb/.
Source: the physionetdb function documentation. You are interested in the DoBatchDownload parameter.
After downloading, I believe every command in the toolbox checks whether you have the files locally before fetching from the server (as long as you give the function the correct path to the local files).
The problem is that the record "100" does not exist in the database ptbdb.
I finally ran this successfully, after waiting 35 minutes on 100 Mb cable broadband:
db_list = physionetdb('ptbdb')
and finally got incomplete data, only up to patient 54; there should be 294 patients:
'ptbdb/patient001/s0014lre' 'ptbdb/patient001/s0014lre' ... cut ...
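Once the record names are known, an existing record can be read with the same rdsamp pattern used for mitdb above (a sketch reusing a record name from this listing):

% read the first signal of a ptbdb record that actually exists
[tm, sig] = rdsamp('ptbdb/patient001/s0014lre', 1);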
The main developer Ikaro's answer helped me be patient enough to wait that long:
The WFDB Toolbox connects to PhysioNet's file server. The databases
accessible through the WFDB Toolbox are not SQL databases; they consist
of flat files. The error message that you are getting regarding
ptbdb/100 is because you are attempting to get a record that
does not exist in the database.
For more information on a particular database or record in PhysioNet
please type:
help physionetdb
and
physionetdb('ptbdb')
This flat-file system is really a bottleneck.
It would be a good time to change to SQL.

Binary Column is translated to null Java type

I created a domain in JasperReports Server. I have a table that stores binary data. When I use it in my domain I get the following error:
java.lang.IllegalArgumentException: getObjectType for javaType: null
returned null
I exported the schema and found the following:
<field id="Id" type="java.lang.Integer" />
<field id="FileData" type="null" />
As you can see, null is used in the type field. I tried changing this to java.io.InputStream, which is the type it maps to when I connect to the data source directly, and got the same error:
java.lang.IllegalArgumentException: getObjectType for javaType:
java.io.InputStream returned null at
com.jaspersoft.commons.dataset.expr.ObjectTypeMapper.getObjectType(ObjectTypeMapper.java:69)
Any reports using that domain fail to run until I remove the binary column. When I try to create a domain report in iReport, it cannot retrieve the domain fields. When I try to use the ad hoc reporting tool I get the error above.
I am using SQL Server 2005; the type of the data is 'image'. I cast the column to varbinary in my view to see if JasperReports would recognize it, and I still get the same error.
Has anyone successfully used binary data types in JasperReports Server domains?
Update: I configured the bean "jdbcMetaConfiguration" in applicationContext-semanticLayer.xml to map the binary column to java.io.InputStream and I still get the same error. The mapping worked: when I view the XML file, "null" is replaced with "java.io.InputStream", but I still get the IllegalArgumentException.
EDIT: Nope, it cannot be done. Sorry.
Original [overly-optimistic] answer:
The Ad Hoc editor cannot handle images (or other binary data types). It would be nice if it ignored them more gracefully... but it's no surprise that you cannot use them there.
But it should be possible to define the field as some sort of binary type (an image, an array of bytes, or just an Object) and then use it in iReport.

Cell.cross() returns error in Google Refine projects

I'm trying to create a new column based on my main project's Date column that pulls timeline events from another Google Refine project:
cell.cross("Clean5 Timeline", "TimelineDate").cells["TimelineEvent"].value[0]
The dates are in the same format in both Google Refine projects. But it fills no cells, and I get this error:
Error: Cannot retrieve field from null
This — 
cell.cross("Clean5 Timeline", "TimelineDate")
— returns [ ] for rows where there should be a match.
And this —
cell.cross("Clean5 Timeline", "TimelineDate").cells["TimelineEvent"]
— returns null for those rows.
I copied the syntax directly from the GREL help files: http://code.google.com/p/google-refine/wiki/GRELOtherFunctions. Can anyone suggest what I may be overlooking?
Thanks.
Without access to your projects it's going to be difficult to answer this, but the first thing I'd suggest is that you trim back your expression to find out exactly where the null is coming from.
Since
cell.cross("Clean5 Timeline", "TimelineDate")
is returning an empty array ([]), nothing based on that result is going to work.
There are four possible problems that I can think of: 1) the project name is wrong, 2) the column name is wrong, 3) the data values don't match (or Refine doesn't think they do), or 4) you are running into a caching bug with cross() that exists in Refine 2.5.
Restarting the Refine server should clear the cache if you're running into the bug, and it's also fixed in the current source repository. The fix will be included in OpenRefine 2.6.
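Note also that cross() returns an array of matching rows, so the array has to be indexed before .cells is accessed. A guarded expression along these lines (a sketch, assuming the project and column names from the question) avoids indexing into an empty array on rows without a match:

if(cell.cross("Clean5 Timeline", "TimelineDate").length() > 0,
   cell.cross("Clean5 Timeline", "TimelineDate")[0].cells["TimelineEvent"].value,
   "no match")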

How do I get more info for 'invalid format' error with onpladm on Windows?

This is my first time trying to use Informix. I have around 160 tables to load, using pipe-delimited text files. We have an older series of batch files that a previous developer wrote to load Informix data, but they're not working with the new version of Informix (11.5) that I installed. I'm running it on a Windows 2003 server.
I've modified the batch file to execute the onpladm commands for one file, so this batch file looks like this:
onpladm create project dif31US-1-table-Load
onpladm create object -F diffdbagidaxsid.dev
onpladm create object -F diffdbagidaxsid.fmt
onpladm create object -F diffdbagidaxsid.map
onpladm create object -F diffdbagidaxsid.job
When I run this, it successfully creates the project and device array, but I get an error creating the format. The only error I get is:
Create object DELIMITEDFORMAT diffile1fmt failed!
Invalid format!
The diffdbagidaxsid.fmt file is as follows:
BEGIN OBJECT DELIMITEDFORMAT diffile1fmt
PROJECT dif31US-1-table-Load
CHARACTERSET ASCII
RECORDSTART
RECORDEND
FIELDSTART
FIELDEND
FIELDSEPARATOR |
BEGIN SEQUENCE
FIELDNAME agid
FIELDTYPE Chars
END SEQUENCE
BEGIN SEQUENCE
FIELDNAME axsid
FIELDTYPE Chars
END SEQUENCE
END OBJECT
As you can see, it has only 2 columns. Originally there was nothing following CHARACTERSET. I've tried it with ASCII, and with the numeric code for ASCII, and still get the same error.
Is there any way to get a more verbose error message?
Also, can anyone recommend a decent (meaning active community) forum for Informix? I've tried the old comp.databases.informix newsgroup, http://www.dbforums.com, the 'official' forum on IBM developerWorks, and here of course. None have very much activity. We have to do this testing because we have customers (or maybe just one big one) who use it, so we have to test our data and API against it.
Succinctly, I don't think there is a way to get much more information out of onpladm.
