Is it possible to repair label data in Neo4j?

At some point, based on the logs, my Neo4j server got wedged and was behaving erratically until I restarted it. I don't have logs showing what caused the issue, but the shutdown sequence dumped several exceptions into the logs, mainly:
2015-01-29 22:10:04.204+0000 INFO [API] Neo4j Server shutdown initiated by request
...
22:10:20.911 [Thread-13] WARN o.e.j.util.thread.QueuedThreadPool - qtp313783031{STOPPING,2<=1<=20,i=0,q=0} Couldn't stop Thread[qtp313783031-22088,5,main]
2015-01-29 22:10:20.923+0000 INFO [API] Successfully shutdown Neo4j Server.
2015-01-29 22:10:20.936+0000 ERROR [org.neo4j]: Exception when stopping org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource#68698d61 Failed to flush file channel /var/lib/neo4j/data/graph.db/neostore.propertystore.db
org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: Failed to flush file channel /var/lib/neo4j/data/graph.db/neostore.propertystore.db
... stack dump of exception ...
During the three-day period before this restart, some nodes and relationships were created that can be found in the system but don't have a properly functioning label attached to them. See the queries below for an example of the problem.
neo4j-sh (?)$ match (b:Book {book_id: 25937}) return b;
+---+
| b |
+---+
+---+
0 row
105 ms
neo4j-sh (?)$ match (b {book_id: 25937}) return b;
+-------------------------------------------------------------+
| b |
+-------------------------------------------------------------+
| Node[97574]{book_id:25937,title:"Writing an autobiography"} |
+-------------------------------------------------------------+
1 row
189 ms
neo4j-sh (?)$ match (b {book_id: 25937}) return id(b), labels(b);
+----------------------+
| id(b) | labels(b) |
+----------------------+
| 97574 | ["Book"] |
+----------------------+
1 row
165 ms
Even if I explicitly add the label and then query for :Book again, the node still isn't returned.
neo4j-sh (?)$ match (b {book_id: 25937}) set b :Book return b;
+-------------------------------------------------------------+
| b |
+-------------------------------------------------------------+
| Node[97574]{book_id:25937,title:"Writing an autobiography"} |
+-------------------------------------------------------------+
1 row
143 ms
neo4j-sh (?)$ match (b:Book {book_id: 25937}) return b;
+---+
| b |
+---+
+---+
0 row
48 ms
Dropping and recreating the index(es) I have on these nodes doesn't help either.
Because these labels aren't functioning properly, my code has begun creating second, "duplicate" nodes (exactly the same except with working labels) due to my on-demand architecture.
I'm starting to believe these nodes are never going to work. Before I dive into reproducing ALL the nodes and relationships created during that three-day time span, is it possible to repair what already exists?
-- EDIT --
It seems that dropping and recreating the index DOES solve this problem. I was testing immediately after dropping and immediately after recreating, forgetting that the indexes may take time to build. The system was still misbehaving while operating with NO index, so I thought it wasn't working at all.
TL;DR: Try creating an index on the broken nodes.
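For reference, a minimal sketch of the drop-and-recreate cycle in the 2.x neo4j-shell, assuming the index is on :Book(book_id) (adjust the label and property to your schema). Schema indexes are built asynchronously, so wait for them to come online before re-testing:
neo4j-sh (?)$ drop index on :Book(book_id);
neo4j-sh (?)$ create index on :Book(book_id);
neo4j-sh (?)$ schema await
neo4j-sh (?)$ schema ls
Once the index shows as ONLINE, the :Book match above should start returning the node again.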

Related

Chaining rows in a SQL Server table in a distributed system

Let's say that I have the following SQL table where each value has a reference to the previous one:
ChainedTable
+------------------+--------------------------------------+------------+--------------------------------------+
| SequentialNumber | GUID | CustomData | LastGUID |
+------------------+--------------------------------------+------------+--------------------------------------+
| 1 | 792c9583-12a1-4c95-93a4-3206855d284f | OtherData1 | 0 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 2 | 1022ffd3-afda-4e20-9d45-eec884bc2a50 | OtherData2 | 792c9583-12a1-4c95-93a4-3206855d284f |
+------------------+--------------------------------------+------------+--------------------------------------+
| 3 | 83729ad4-2564-4146-b451-00d82585bd96 | OtherData3 | 1022ffd3-afda-4e20-9d45-eec884bc2a50 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 4 | d7197e87-d7d6-4175-8172-12656043a69d | OtherData4 | 83729ad4-2564-4146-b451-00d82585bd96 |
+------------------+--------------------------------------+------------+--------------------------------------+
| 5 | c1d3d751-ef34-4079-a73c-8952f93d17db | OtherData5 | d7197e87-d7d6-4175-8172-12656043a69d |
+------------------+--------------------------------------+------------+--------------------------------------+
If I were to insert the sixth row, I would retrieve the data of the last row using a query like this:
SELECT TOP 1 SequentialNumber, GUID FROM ChainedTable ORDER BY SequentialNumber DESC;
After that selection and before the insertion of the next row, an operation outside the database will take place.
That would suffice if it were guaranteed that only one entity uses the table at a time. However, if more entities can perform this same operation, there is a risk of a race condition: one entity can request the information of the last row, and a second entity can do the same before the first has performed its insert.
At first, I thought of creating a new table with a value that indicates whether the table is in use (the value can be null or the identifier of the process that has access to the table). In that solution, an entity won't start reading the last row if the value indicates that the table is being used by another process. However, one of the things that can happen in this scenario is that the process using the table dies without releasing it, blocking the whole system.
I'm sure this is a "typical" computer science problem and that there are well known solutions to implement this. Can anyone point me in the right direction, please?
I think using a transaction in SQL may solve the problem. For example, if you create a transaction that adds a new row, no one else will be able to run the same operation until the first transaction is completed.
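To make that concrete, here is a hedged sketch of serializing the read-then-insert in T-SQL by taking an update/range lock on the tail row inside the transaction, so a competing writer blocks until the first insert commits (assuming SequentialNumber is an identity column; other names follow the question, and the sixth-row values are illustrative):
BEGIN TRANSACTION;

-- Lock the tail of the chain so concurrent writers wait here.
DECLARE @LastGuid uniqueidentifier;
SELECT TOP (1) @LastGuid = GUID
FROM ChainedTable WITH (UPDLOCK, HOLDLOCK)
ORDER BY SequentialNumber DESC;

-- ... perform the external operation ...

INSERT INTO ChainedTable (GUID, CustomData, LastGUID)
VALUES (NEWID(), 'OtherData6', @LastGuid);

COMMIT TRANSACTION;
Note that the external operation now runs while locks are held, which hurts concurrency; if that is unacceptable, an optimistic alternative is to re-check the tail (or rely on a unique constraint on LastGUID) at insert time and retry on conflict.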

ad-hoc slowly-changing dimensions materialization from external table of timestamped csvs in a data lake

Question
main question
How can I ephemerally materialize a slowly changing dimension type 2 from a folder of daily extracts, where each csv is one full extract of a table from a source system?
rationale
We're designing ephemeral data warehouses as data marts for end users that can be spun up and burned down without consequence. This requires we have all data in a lake/blob/bucket.
We're ripping daily full extracts because:
we couldn't reliably extract just the changeset (for reasons out of our control), and
we'd like to maintain a data lake with the "rawest" possible data.
challenge question
Is there a solution that could give me the state as of a specific date and not just the "newest" state?
existential question
Am I thinking about this completely backwards and there's a much easier way to do this?
Possible Approaches
custom dbt materialization
There's an insert_by_period dbt materialization in the dbt-utils package that I think might be exactly what I'm looking for? But I'm confused, as it would effectively be a dbt snapshot that:
runs dbt snapshot for each file incrementally, all at once; and,
is built directly off of an external table?
Delta Lake
I don't know much about Databricks's Delta Lake, but it seems like it should be possible with Delta Tables?
Fix the extraction job
Is our problem solved if we can make our extracts contain only what has changed since the previous extract?
Example
Suppose the following three files are in a folder of a data lake. (Gist with the 3 csvs and desired table outcome as csv).
I added the Extracted column in case parsing the timestamp from the filename is too tricky.
2020-09-14_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 |
| 2 | B | 3 - Propose | | 9/12 | 9/14 |
2020-09-15_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 |
| 3 | C | 1 - Lead | | 9/14 | 9/15 |
2020-09-16_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/16 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 |
End Result
Below is the SCD-II result for the three files as of 9/16. The SCD-II as of 9/15 would be the same, except OppId=3 would have only one row, with valid_from=9/15 and valid_to=null.
| OppId | CustId | Stage | Won | LastModified | valid_from | valid_to |
|-------|--------|-------------|-----|--------------|------------|----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 | null |
| 2 | B | 3 - Propose | | 9/12 | 9/14 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 | null |
| 3 | C | 1 - Lead | | 9/14 | 9/15 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 | null |
Interesting concept, and of course it would take a longer conversation than is possible in this forum to fully understand your business, stakeholders, data, etc. I can see that it might work if you had a relatively small volume of data, your source systems rarely changed, your reporting requirements (and hence, datamarts) also rarely changed, and you only needed to spin up these datamarts very infrequently.
My concerns would be:
If your source or target requirements change, how are you going to handle this? You will need to spin up your datamart, do full regression testing on it, apply your changes and then test them. If you do this as/when the changes are known, then it's a lot of effort for a datamart that's not being used - especially if you need to do this multiple times between uses; if you do this when the datamart is needed, then you're not meeting your objective of having the datamart available for "instant" use.
I'm not sure your statement "we have a DW as code that can be deleted, updated, and recreated without the complexity that goes along with traditional DW change management" is true. How are you going to test updates to your code without spinning up the datamart(s) and going through a standard test cycle with data - and then how is this different from traditional DW change management?
What happens if there is corrupt/unexpected data in your source systems? In a "normal" DW where you are loading data daily, this would normally be noticed and fixed on the day. In your solution the dodgy data might have occurred days/weeks ago and, assuming it loaded into your datamart rather than erroring on load, you would need processes in place to spot it and then potentially have to unravel days of SCD records to fix the problem.
(Only relevant if you have a significant volume of data) Given the low cost of storage, I'm not sure I see the benefit of spinning up a datamart when needed as opposed to just holding the data so it's ready for use. Loading large volumes of data every time you spin up a datamart is going to be time-consuming and expensive. A possible hybrid approach might be to only run incremental loads when the datamart is needed, rather than running them every day - so you have the data from when the datamart was last used ready to go at all times and you just add the records created/updated since the last load.
I don't know whether this is the best or not, but I've seen it done. When you build your initial SCD-II table, add a column that is a stored HASH() value of all of the values of the record (you can exclude the primary key). Then, you can create an External Table over your incoming full data set each day, which includes the same HASH() function. Now, you can execute a MERGE or INSERT/UPDATE against your SCD-II based on primary key and whether the HASH value has changed.
Your main advantage of doing things this way is that you avoid loading all of the data into Snowflake each day to do the comparison, but it will be slower to execute this way. You could also load to a temp table with the HASH() function included in your COPY INTO statement, update your SCD-II from that, and then drop the temp table, which could actually be faster.
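A rough sketch of what that hash-based MERGE could look like in Snowflake, assuming an external table EXT_CRM_EXTRACT over one day's csv and an SCD-II target DIM_OPPORTUNITY that stores a row_hash plus valid_from/valid_to (all object and column names here are illustrative):
MERGE INTO dim_opportunity AS tgt
USING (
    SELECT oppid, custid, stage, won, lastmodified, extracted,
           HASH(custid, stage, won, lastmodified) AS row_hash
    FROM ext_crm_extract
) AS src
  ON tgt.oppid = src.oppid AND tgt.valid_to IS NULL
WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN
  UPDATE SET tgt.valid_to = src.extracted        -- close the current version
WHEN NOT MATCHED THEN
  INSERT (oppid, custid, stage, won, lastmodified, row_hash, valid_from, valid_to)
  VALUES (src.oppid, src.custid, src.stage, src.won, src.lastmodified,
          src.row_hash, src.extracted, NULL);
Because a single MERGE can't both close the old row and insert its replacement, a second INSERT that selects the source rows whose hash changed usually follows; that two-step pattern is roughly what dbt's snapshot materialization generates for you.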

SQL Server NEWSEQUENTIALID() - clarification for super fast .net core implementation

Currently I'm trying to implement SQL Server's NEWSEQUENTIALID() in .NET Core 2.2. It should run really fast and allocate the minimum possible amount of memory, but I need clarification on how to calculate the uuid version and where to place it (which byte, or what bit shift is needed). So far I have generated a timestamp, retrieved the mac address, and copied bytes 8 and 9 from a base randomly generated guid, but surely I'm missing something because the results don't match the output of the original algorithm.
// macBytes, ticksDiffBytes and baseGuidBytes are assumed to be prepared earlier
var guidArray = new byte[16];
// mac
guidArray[15] = macBytes[5];
guidArray[14] = macBytes[4];
guidArray[13] = macBytes[3];
guidArray[12] = macBytes[2];
guidArray[11] = macBytes[1];
guidArray[10] = macBytes[0];
// base guid
guidArray[9] = baseGuidBytes[9];
guidArray[8] = baseGuidBytes[8];
// time
guidArray[7] = ticksDiffBytes[0];
guidArray[6] = ticksDiffBytes[1];
guidArray[5] = ticksDiffBytes[2];
guidArray[4] = ticksDiffBytes[3];
guidArray[3] = ticksDiffBytes[4];
guidArray[2] = ticksDiffBytes[5];
guidArray[1] = ticksDiffBytes[6];
guidArray[0] = ticksDiffBytes[7];
var guid = new Guid(guidArray);
Current benchmark results:
| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|--------------------------- |----------:|---------:|---------:|------:|--------:|-------:|------:|------:|----------:|
| SqlServerNewSequentialGuid | 37.31 ns | 0.680 ns | 0.636 ns | 1.00 | 0.00 | 0.0127 | - | - | 80 B |
| Guid_Standard | 63.29 ns | 0.435 ns | 0.386 ns | 1.70 | 0.03 | - | - | - | - |
| Guid_Comb | 299.57 ns | 2.902 ns | 2.715 ns | 8.03 | 0.13 | 0.0162 | - | - | 104 B |
| Guid_Comb_New | 266.92 ns | 3.173 ns | 2.813 ns | 7.16 | 0.11 | 0.0162 | - | - | 104 B |
| MyFastGuid | 70.08 ns | 1.011 ns | 0.946 ns | 1.88 | 0.05 | 0.0050 | - | - | 32 B |
Update:
Here are the latest results of benchmarking common id generators written in .net core.
As you can see, my implementation NewSequentialGuid_PureNetCore is at most 2x slower than the wrapper around rpcrt4.dll (which was my baseline), but my implementation uses less memory (30 B).
Here is a sequence of the first 10 sample guids:
492bea01-456f-3166-0001-e0d55e8cb96a
492bea01-456f-37a5-0002-e0d55e8cb96a
492bea01-456f-aca5-0003-e0d55e8cb96a
492bea01-456f-bba5-0004-e0d55e8cb96a
492bea01-456f-c5a5-0005-e0d55e8cb96a
492bea01-456f-cea5-0006-e0d55e8cb96a
492bea01-456f-d7a5-0007-e0d55e8cb96a
492bea01-456f-dfa5-0008-e0d55e8cb96a
492bea01-456f-e8a5-0009-e0d55e8cb96a
492bea01-456f-f1a5-000a-e0d55e8cb96a
If you want the code then give me a sign ;)
The official documentation states it quite clearly:
NEWSEQUENTIALID is a wrapper over the Windows UuidCreateSequential
function, with some byte shuffling applied.
There are also links in the quoted paragraph which might be of interest for you. However, considering that the original code is written in C / C++, I somehow doubt that .NET can outperform it, so reusing the same approach might be a more prudent choice (even though it would involve unmanaged calls).
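For reference, a minimal sketch of that unmanaged route, i.e. wrapping UuidCreateSequential from rpcrt4.dll via P/Invoke (Windows-only; the extra byte shuffling SQL Server applies on top is not reproduced here):
using System;
using System.Runtime.InteropServices;

static class SequentialGuid
{
    // rpcrt4.dll is the Windows RPC runtime that exposes UuidCreateSequential.
    [DllImport("rpcrt4.dll")]
    private static extern int UuidCreateSequential(out Guid guid);

    public static Guid Next()
    {
        // A non-zero status means the id is only locally unique or an error occurred.
        int status = UuidCreateSequential(out Guid guid);
        if (status != 0)
            throw new InvalidOperationException($"UuidCreateSequential returned {status}");
        return guid;
    }
}
This is essentially what the SqlServerNewSequentialGuid baseline in the benchmark above appears to measure.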
Having said that, I sincerely hope that you have researched the behaviour of this function and considered all its side effects before deciding to pursue this approach. And I certainly hope you aren't going to use this output as a clustered index for your table(s). The reason for this is also mentioned in the docs (as a warning, no less):
The UuidCreateSequential function has hardware dependencies. On SQL Server, clusters of sequential values can develop when databases (such as contained databases) are moved to other computers. When using Always On and on SQL Database, clusters of sequential values can develop if the database fails over to a different computer.
Basically, the function generates a monotonic sequence only while the database stays in the same hosting environment. When:
a network card gets changed on the bare metal (or whatever else the function depends upon), or
a backup is restored someplace else (think Prod-to-Dev refresh, or simply prod migration / upgrade), or
a failover happens, whether in a cluster or in an AlwaysOn configuration
, the new SQL Server instance will have its own range of generated values, which is supposed not to overlap the ranges of other instances on other machines. If that new range comes "before" the existing values, you'll end up with fragmentation issues for absolutely no good reason. Oh, and top (1) to get the latest value won't work anymore.
Indeed, if all you need is a non-exhaustible monotonic sequence, follow Greg Low's advice and just stick to bigint. It's half as wide, and no, you can't possibly exhaust it.
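If you go that route, a minimal T-SQL sketch (the table and sequence names are illustrative):
CREATE SEQUENCE dbo.OrderIdSeq AS bigint START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.Orders
(
    OrderId bigint NOT NULL
        CONSTRAINT DF_Orders_OrderId DEFAULT (NEXT VALUE FOR dbo.OrderIdSeq)
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    Payload nvarchar(200) NOT NULL
);

-- Monotonically increasing keys keep the clustered index append-only.
INSERT INTO dbo.Orders (Payload) VALUES (N'example');
An IDENTITY column achieves the same thing with less ceremony; a sequence just makes the generator reusable across tables.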

PostgreSQL + pgpool replication with miss balancing

I have a PostgreSQL master-slave replication setup with pgpool as a load balancer on the master server only. Replication is working fine and there is no delay in the process. The problem is that the master server is receiving more requests than the slave, even when I have configured a balance different from 50% for each server.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(2):
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-ip | 9999 | up | 0.333333 | primary | 56348331 | false | 0
1 | slave-ip | 9999 | up | 0.666667 | standby | 3691734 | true | 0
As you can see, the master server is receiving more than 10x as many requests as the slave.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(5):
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-ip | 9999 | up | 0.166667 | primary | 10542201 | false | 0
1 | slave-ip | 9999 | up | 0.833333 | standby | 849494 | true | 0
The behavior is quite similar when I assign M(1)-S(1).
Now I wonder whether I misunderstood how pgpool works:
Pgpool only balances read queries (write queries are always sent to the master).
The backend weight parameter is used to calculate the distribution only in load-balancing mode. The greater the value, the more likely the node is to be chosen by pgpool, so a server with a greater lb_weight should be selected more often than servers with lower values.
If I'm right, why is this happening?
Is there a way I can actually get a proper balance of select_cnt queries? My intention is to load the slave with read queries and leave the master only a "few" read queries, since it is handling all the writes.
You are right about pgpool load balancing. There could be several reasons why this doesn't seem to work. For a start, notice that you have the same port number for both backends. Try configuring your backend connection settings as shown in the sample pgpool.conf: https://github.com/pgpool/pgpool2/blob/master/src/sample/pgpool.conf.sample (lines 66-87), where you also set the weights to your needs, and assign different port numbers to each backend.
Also check (assuming your running mode is master/slave):
load_balance_mode = on
master_slave_mode = on
-- changes require restart
There is a relevant FAQ entry "It seems my pgpool-II does not do load balancing. Why?" here: https://www.pgpool.net/mediawiki/index.php/FAQ (if the pgpool version is 4.1, also consider statement_level_load_balance). So far, I have assumed that the general conditions for load balancing (https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html) are met.
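For illustration, a hedged sketch of what the per-backend block in pgpool.conf could look like for this setup (hostnames, ports, directories and weights are placeholders; the sample file linked above is the authoritative layout):
# node 0 - master (takes all writes)
backend_hostname0 = 'master-ip'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'

# node 1 - slave (give it the larger read weight)
backend_hostname1 = 'slave-ip'
backend_port1 = 5432
backend_weight1 = 5
backend_data_directory1 = '/var/lib/postgresql/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
Also keep in mind that pgpool normally picks the load-balancing node per client session (per statement only with statement_level_load_balance), so long-lived connections pinned to the master can still skew select_cnt heavily toward it.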
You can try to adjust the following setting in the pgpool.conf file:
1. WAL lag delay size
delay_threshold = 10000000
It is used to let pgpool know whether the slave's WAL replay is too far behind to use. Making it larger lets more queries be passed to the slave; making it smaller sends more queries to the master.
Besides, the pgbench testing parameters also matter: use the -C parameter so that a new connection is made per query; otherwise it is one connection per session.
pgpool's load-balancing decision depends on a matrix of parameter combinations, not on a single parameter.
Here is a reference:
https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html#GUC-LOAD-BALANCE-MODE

Trouble removing state with dual keys

To ask my question I first have to show you my data and my proposed solution to the dual-key problem:
Data has 1 of 2 keys, x and y. Sometimes x is present, sometimes y. One type of event has both.
Type 1: keys x and y
Type 2: key x
Type 3: key y
To have the full session at the end of the pipeline we need all data under one key: x+y.
To achieve this, I duplicate the messages that have both keys and key one copy by x and the other by y. Then, in the following processor, I enrich the x- and y-keyed streams.
Each message looks like this: [Flink key, potentialX, potentialY, rest of msg...]
Pipeline
Here is my scenario: I have a close-session message, which is type 2. It will be propagated to the key X processor, where it will be enriched, and we can shut down the appropriate processors in the rest of the pipeline. However, the key y state is never evicted because it never receives the close-session message.
Close msg flow
Now for the question: how can I close the state in the Y processor?
Initially I thought to duplicate the type 2 msg in the enricher, make a side output for it, grab that side output before the keyBy, and therefore have it go to the correct processor. This is not possible, as the side output can only be consumed after the operator where it was created. Then I found some Jira tickets about side inputs, but that doesn't seem to be an actual feature yet.
Lastly I thought I might make a sink for the side output mentioned above, and a source at the keyBy. This seems a bit hacky though.
I really hope someone can help!
Edit:
Adding a new diagram to try to clarify the original flow. In the original drawings I tried to make the flow of data easier to understand by drawing 2 boxes for the enrichment processor. I've tried to make the flow more correct with this new drawing:
Improved drawing
That's a bit complicated to follow, but I've seen this pattern before when trying to unify logged-out sessions with logged-in sessions from web logs. If I've understood the details well enough, I think you could take a side output from the X processor and feed it into the Y processor, like this:
+------------+ +-------+
| +--------------------------> |
+--------+ +-------+ X | X proc | | |
| | | +-----> | sideout +-----------+ | X + Y |
| | | | | +---------> | | |
| source +-----> split | +------------+ | +----> |
| | | | | Y proc | +-------+
| | | +----------------------------> |
+--------+ +-------+ Y +-----------+
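A rough sketch of that wiring with the DataStream API side-output mechanism (everything here is illustrative: String events stand in for your real message type, and the keying is simplified):
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputSketch {
    // Side output carrying close-session messages from the X processor to the Y side.
    static final OutputTag<String> CLOSE_FOR_Y = new OutputTag<String>("close-for-y") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.fromElements("x:1", "y:1", "x:close");

        // "X processor": enrich as usual, and re-emit close messages on the side output.
        SingleOutputStreamOperator<String> xProcessed = events
            .process(new ProcessFunction<String, String>() {
                @Override
                public void processElement(String e, Context ctx, Collector<String> out) {
                    out.collect(e);                      // normal enriched output
                    if (e.endsWith("close")) {
                        ctx.output(CLOSE_FOR_Y, e);      // forward the close message to Y
                    }
                }
            });

        // Union the side output into the Y-keyed stream so its state can be cleared too.
        DataStream<String> yInput = events.union(xProcessed.getSideOutput(CLOSE_FOR_Y));
        yInput.keyBy(e -> e).print();                    // the real Y processor clears state here

        env.execute("side-output sketch");
    }
}
The key point is that getSideOutput is read from the operator that produced it and then unioned in ahead of the Y keyBy, so the close message reaches the Y-keyed state without the sink-and-source workaround.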
