Background
I am attempting to set up a constraint on neo4j using the browser.
First of all, I ran the following to ensure there are no duplicates:
MATCH (m:uri), (b:uri)
WHERE m.value = b.value
AND id(m) <> id(b)
RETURN m, b
ORDER BY id(m)
For clarity, I also checked that the following returned 0:
MATCH (b:uri)
WITH b.value AS value, COLLECT(b) AS branches
WHERE SIZE(branches) > 1
RETURN branches
What I tried
CREATE CONSTRAINT constraint_urivalue IF NOT EXISTS ON (u:uri) ASSERT u.value IS UNIQUE
AND
CREATE CONSTRAINT constraint_urivalue ON (u:uri) ASSERT u.value IS UNIQUE
The Error
Neo.DatabaseError.Schema.ConstraintCreationFailed
Unable to create Constraint( name='constraint_urivalue', type='UNIQUENESS', schema=(:uri {value}) ):
What else
I have managed to create other node/attribute constraints using the same syntax as above.
Also, I have a development environment where this exact "uri constraint" has been implemented without any issue.
Both environments are running the exact same version of Neo4j.
Is there any way to see more detail about the error?
Any queries worth running to re-check my data?
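One more thing I am planning to check is whether an existing index or constraint with the same name or on the same schema is getting in the way; I am not sure that is the cause. The exact command depends on the Neo4j version (SHOW needs 4.2+; the procedure calls work on older releases):
SHOW CONSTRAINTS
SHOW INDEXES
// on versions before 4.2:
CALL db.constraints()
CALL db.indexes()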
Thanks!
Related
I've been installing one of our products on a new test server and it's not working due to error 264 - the column name 'kod_novy' is specified more than once in the SET clause.
I know where the problem is and have reported it to development for a fix. But we have the same application deployed elsewhere and it works just fine.
In the code below you can see that 'kod_novy' is used twice in the insert. My question is: does anyone know how our customer managed to ignore this error and run the T-SQL successfully? Is that a server setting?
Thanks,
Z
insert into [server].db.dbo.prenos_c_banky (
id_prenos,
kod,
kod_novy,
ext_kod,
iud_job,
kod_banky,
kod_novy,
nazev,
znacka)
select
cast('2B06FB0A-2664-4714-91F6-A6D39BDE5B5F' as UNIQUEIDENTIFIER),
kod,
kod_novy,
ext_kod,
iud_job,
kod_banky,
kod_novy,
nazev,
znacka
from #c_banky
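For reference, the fix I reported is simply to drop the repeated kod_novy from both the column list and the select list, i.e. something along these lines:
insert into [server].db.dbo.prenos_c_banky (
id_prenos,
kod,
kod_novy,
ext_kod,
iud_job,
kod_banky,
nazev,
znacka)
select
cast('2B06FB0A-2664-4714-91F6-A6D39BDE5B5F' as UNIQUEIDENTIFIER),
kod,
kod_novy,
ext_kod,
iud_job,
kod_banky,
nazev,
znacka
from #c_banky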
One really neat feature of Postgres that I have only just discovered is the ability to define composite types - also referred to in the docs as row types and as records. Consider the following example:
CREATE TYPE dow_id AS
(
tslot smallint,
day smallint
);
Now consider the following tables:
CREATE SEQUENCE test_id_seq INCREMENT 1 MINVALUE 1 MAXVALUE 2147483647 START 1 CACHE 1;
CREATE TABLE test_simple_array
(
id integer DEFAULT nextval('test_id_seq') NOT NULL,
dx integer []
);
CREATE TABLE test_composite_simple
(
id integer DEFAULT nextval('test_id_seq') NOT NULL,
dx dow_id
);
CREATE TABLE test_composite_array
(
id integer DEFAULT nextval('test_id_seq') NOT NULL,
dx dow_id[]
);
CRUD operations on the first two tables are relatively straightforward. For example:
INSERT INTO test_simple_array (dx) VALUES ('{1,1}');
INSERT INTO test_composite_simple (dx) VALUES (ROW(1,1));
However, I have not been able to figure out how to perform CRUD ops when the table has an array of records/composite types as in test_composite_array. I have tried
INSERT INTO test_composite_array (dx) VALUES(ARRAY(ROW(1,1),ROW(1,2)));
which fails with the message
ERROR: syntax error at or near "ROW"
and
INSERT INTO test_composite_array (dx) VALUES("{(1,1),(1,2)}");
which fails with the message
ERROR: column "{(1,1),(1,2)}" does not exist
and
INSERT INTO test_composite_array (dx) VALUES('{"(1,1)","(1,2)"}');
which appears to work, though it leaves me feeling confused, since a subsequent
SELECT dx FROM test_composite_array
returns what appears to be a string result, {"(1,1)","(1,2)"}, although a further query such as
SELECT id FROM test_composite_array WHERE (dx[1]).tslot = 1;
works. I also tried the following
SELECT (dx[1]).day FROM test_composite_array;
UPDATE test_composite_array SET dx[1].day = 99 WHERE (dx[1]).tslot = 1;
SELECT (dx[1]).day FROM test_composite_array;
which works while
UPDATE test_composite_array SET (dx[1]).day = 99 WHERE (dx[1]).tslot = 1;
fails. I find that I am figuring out how to manipulate arrays of records/composite types in Postgres by trial and error, and - although the Postgres documentation is generally excellent - there appears to be no clear discussion of this topic in it. I'd be much obliged to anyone who can point me to an authoritative discussion of how to manipulate arrays of composite types in Postgres.
That apart are there any unexpected gotchas when working with such arrays?
You need square brackets with ARRAY:
ARRAY[ROW(1,1)::dow_id,ROW(1,2)::dow_id]
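Applied to the table from the question, the insert then becomes:
INSERT INTO test_composite_array (dx) VALUES (ARRAY[ROW(1,1)::dow_id, ROW(1,2)::dow_id]);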
A warning: composite types are a great feature, but you will make your life harder if you overuse them. As soon as you want to use elements of a composite type in WHERE or JOIN conditions, you are doing something wrong, and you are going to suffer. There are good reasons for normalizing relational data.
I have two tables in SQL Server, one of which is the source for a MERGE operation into the other.
The source table has 30 million records.
The target table has 180 million records. Both tables have 227 columns.
I do have SSIS, but I'm told that in this case a MERGE statement is the better option. Below is a shortened version of it:
;WITH MySource as (
SELECT * FROM [STAGE].[dbo].[STAGE_TABLE]
)
MERGE [EDW].[dbo].[TARGET_TABLE] AS MyTarget
USING MySource
ON MySource.[ID_FIELD] = MyTarget.[ID_FIELD]
AND MySource.[LoadDate] >= MyTarget.[LoadDate]
WHEN MATCHED THEN UPDATE SET
<<Target Column>> = MySource.<<Source Columns>> --227 columns
WHEN NOT MATCHED THEN INSERT
(
[ID_FIELD],
[LoadDate],
<<225 Other Columns>>
)
VALUES (
MySource.[ID_FIELD],
MySource.[LoadDate],
MySource.<<225 other columns>>
);
The only change I made to the script above is truncating the list of columns to keep the code block short.
My problem is that the execution hangs. The profiler screen shows a CXPACKET suspension with the wait description: cwaitpipenewrow, node=2.
How do I troubleshoot this? Thank you.
It seems that CXPACKET together with the suspended state means that some threads have completed their work and are waiting for other threads that have not completed yet.
Please check the links below. The query needs to update up to 1 billion values in the table, hence it will be a slow-running query.
https://dba.stackexchange.com/questions/96346/cxpacket-suspended-and-null-wait-type
https://www.sqlshack.com/troubleshooting-the-cxpacket-wait-type-in-sql-server/
Hope these articles help you debug.
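If you want to see what each parallel worker of the MERGE is waiting on while the session sits suspended, a query along these lines against sys.dm_os_waiting_tasks should show it (session_id 55 below is only a placeholder for the spid that is running the MERGE):
SELECT wt.session_id,
       wt.exec_context_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id = 55; -- placeholder: the session running the MERGE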
I have created a lookup for a list of IDs and a subsequent Foreach loop to run an SQL statement for each ID.
My variable for catching the list of IDs is called MissingRecordIDs and is of type Object. In the Foreach container I map each value to a variable called RecordID of type Int32. No fancy scripts - I followed these instructions: https://www.simple-talk.com/sql/ssis/implementing-foreach-looping-logic-in-ssis-/ (without the file-loading part - I am just running an SQL statement).
It runs fine from within SSIS, but when I deploy it to my Integration Services Catalogue in MSSQL it fails.
This is the error I get when running from SQL Mgt Studio:
I thought I could just put a Precedence Constraint after MissingRecordIDs gets filled, to check for NULL and skip the Foreach loop if necessary - but I can't figure out how to check for NULL in an Object variable.
Here is the Variable declaration and Object getting enumerated:
And here is the Variable mapping:
The SQL statement in 'Lookup missing Orders':
select distinct cast(od.order_id as int) as order_id
from invman_staging.staging.invman_OrderDetails_cdc od
LEFT OUTER JOIN invman_staging.staging.invman_Orders_cdc o
on o.order_id = od.order_id and o.BatchID = ?
where od.BatchID = ?
and o.order_id is null
and od.order_id is not null
In the current environment this query returns nothing - there are no missing Orders, so I don't want to go into the 'Foreach Order Loop' at all.
This is a known issue Microsoft is aware of: https://connect.microsoft.com/SQLServer/feedback/details/742282/ssis-2012-wont-assign-null-values-from-sql-queries-to-variables-of-type-string-inside-a-foreach-loop
I would suggest adding an ISNULL(RecordID, 0) to the query, as well as setting an expression on the "Load missing Orders" component so that it is enabled only when RecordID != 0.
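Applied to the lookup query from the question (where the ID column is order_id), that would be something like:
select distinct isnull(cast(od.order_id as int), 0) as order_id
from invman_staging.staging.invman_OrderDetails_cdc od
LEFT OUTER JOIN invman_staging.staging.invman_Orders_cdc o
on o.order_id = od.order_id and o.BatchID = ?
where od.BatchID = ?
and o.order_id is null
and od.order_id is not null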
In my case it wasn't NULL causing the problem: the ID value I loaded from the database was stored as nvarchar(50), even though it was an integer. I attempted to use it as an integer in SSIS and it kept giving me the same error message. This worked for me:
SELECT CAST(id as INT) FROM dbo.Table
I'm learning DB2 and I ran into a problem while testing some options in my database.
I have 2 tables like this:
Country
=========
IdCountry -- PK
Name
State
=========
IdState -- PK
IdCountry -- FK to Country.IdCountry
Name
Code
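In case it helps, this is roughly the DDL behind those two tables (the column types here are assumptions on my part, so they may differ from the real ones):
-- sketch only: types and lengths are guesses
CREATE TABLE Tables.Country (
  IdCountry INTEGER NOT NULL PRIMARY KEY,
  Name      VARCHAR(100)
);
CREATE TABLE Tables.State (
  IdState   INTEGER NOT NULL PRIMARY KEY,
  IdCountry INTEGER NOT NULL REFERENCES Tables.Country (IdCountry),
  Name      VARCHAR(100),
  Code      VARCHAR(10)
);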
And I am using queries like:
SELECT IdState, Name
FROM Tables.State
WHERE IdCountry = ?
where ? is any valid IdCountry, and everything worked fine.
Then I ran SET INTEGRITY from the DB2 Control Center using the default options; the process was successful, but now my query isn't returning any results.
I tried using:
SELECT *
FROM Tables.State
Where IdCountry = ?
and it gives me back results.
While testing the table, I tried adding new States, and they do appear in the query that uses column names instead of *, but the old records are still missing.
I have no clue what's happening; does anyone have an idea?
Thanks in advance, and sorry for my poor English.
I'm assuming here that you're on Linux/Unix/Windows DB2, since z/OS doesn't have a SET INTEGRITY command, and I couldn't find anything about it with a quick search on the iSeries Info Center.
It's possible that your table is still in "set integrity pending" state (previously known as CHECK PENDING). You could test this theory by checking SYSCAT.TABLES using this query:
SELECT TRIM(TABSCHEMA) || '.' || TRIM(TABNAME) AS tbl,
CASE STATUS
WHEN 'C' THEN 'Integrity Check Pending'
WHEN 'N' THEN 'Normal'
WHEN 'X' THEN 'Inoperative'
END As TblStatus
FROM SYSCAT.TABLES
WHERE TABSCHEMA NOT LIKE 'SYS%'
If your table shows up with a status of 'Integrity Check Pending', you will need to use the SET INTEGRITY command to bring it back into checked status:
SET INTEGRITY FOR Tables.State IMMEDIATE CHECKED
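Afterwards, you can re-run the catalog query above (or the simpler check below) to confirm the STATUS has gone back to 'N'. This assumes the schema and table were created with unquoted identifiers, so they are stored in uppercase in the catalog:
SELECT STATUS
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'TABLES' -- unquoted identifiers fold to uppercase
  AND TABNAME = 'STATE'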