Below is the schema in YugabyteDB:
ycqlsh:example> CREATE TABLE users(user_id INT PRIMARY KEY, full_name TEXT) WITH default_time_to_live = 0 AND transactions = {'enabled': 'false'};
ycqlsh:example> CREATE TABLE entities(entity_id INT PRIMARY KEY, full_name TEXT) WITH default_time_to_live = 0 AND transactions = {'enabled': 'false'};
Version:
[ycqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Queries are initiated from multiple threads in the app.
Do insert/update queries on the users & entities tables ensure ACID properties?
You are using YugabyteDB's YCQL API, which is based on Cassandra.
The tables are created with transactions = {'enabled': 'false'}, which means you have explicitly turned off transactions, and with them ACID guarantees, on these tables. Single-row operations are still atomic on their own, but writes spanning multiple rows or both tables get no transactional guarantees.
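If you need ACID guarantees for these writes, the tables have to be created with transactions enabled and the writes wrapped in a transaction block; a minimal YCQL sketch (the sample values are placeholders):

CREATE TABLE users(user_id INT PRIMARY KEY, full_name TEXT) WITH transactions = {'enabled': 'true'};
CREATE TABLE entities(entity_id INT PRIMARY KEY, full_name TEXT) WITH transactions = {'enabled': 'true'};

-- both inserts commit atomically or not at all
BEGIN TRANSACTION
  INSERT INTO users(user_id, full_name) VALUES (1, 'John Doe');
  INSERT INTO entities(entity_id, full_name) VALUES (1, 'John Doe');
END TRANSACTION;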
I'm using a simple availability group with a primary server and a secondary replica.
I have situations where I will insert a row into Table A on the primary using a query like this:
INSERT INTO TableA(UniqueId, Column1)
SELECT 123, Column2
FROM #TVP
And after that I immediately query the values from TableA using a read-only connection.
SELECT Column1
FROM TableA
WHERE UniqueId = 123
Sometimes this query returns no rows. I assume this is because the read-only replica hasn't received the data from the primary replica yet, but I thought the insert query would not return until the data had been hardened on the secondary replica.
What is going on here?
With a synchronous AG replica, the commit is hardened on both the primary and secondary nodes when the INSERT transaction commits. However, the changes will not be visible on the secondary until a redo thread on the secondary applies them. The latency is typically short, but it can grow due to blocking or high resource utilization on the secondary.
See the Data Latency topic in the documentation for more information.
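To see whether redo on the secondary is falling behind, you can check the redo queue; a minimal sketch against the AG DMVs (run it on the secondary):

SELECT dcs.database_name,
       drs.synchronization_state_desc,
       drs.redo_queue_size,  -- KB of log received but not yet redone
       drs.redo_rate         -- KB/sec at which redo is progressing
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_databases_cluster AS dcs
  ON dcs.group_database_id = drs.group_database_id;

A persistently large redo_queue_size relative to redo_rate is a sign that the redo thread is the source of the stale reads.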
I am replicating from MSSQL (SQL Server 13) to PostgreSQL (9.5) using SymmetricDS.
The table being replicated has a composite key of 7 different columns. Everything works perfectly, from the initial load through inserting and updating data.
However, I run into a problem whenever I run an update that modifies one of the 7 columns that comprise the primary key. On the MSSQL side, it updates the row, no problem. On the Postgres side, rather than updating the existing row, it inserts an additional row.
If I modify the sym_transform_column entry for the specific column to have pk = 0, then it updates the data correctly, but it no longer uses that column as part of the primary key to determine which row to update.
Example Generated SQL with pk=0 for sym_transform_column:
update table set pk1 = 0, value1 = 'test', value2 = 'test' where pk2 = 0 and pk3 = 0
Example Generated SQL with pk=1 for sym_transform_column:
update table set value1='test', value2='test' where pk1 = 0 and pk2 = 0 and pk3 = 0
I realize it is generally accepted that primary keys should be immutable, but to cover all contingencies: is there a way to replicate updates to primary key data from MSSQL to PostgreSQL using SymmetricDS?
Is it possible to add a column in the source table and treat it as a primary key? It could, for example, be a concatenation of the seven columns that comprise the composite key. Then declare this column as the primary key for the synchronization and add the same column to the table in the target database.
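On the SQL Server side this could be a persisted computed column; a sketch where pk1…pk7 are hypothetical stand-ins for the seven key columns:

-- SQL Server (source): computed column concatenating the composite key parts
ALTER TABLE dbo.MyTable
  ADD sync_key AS CONCAT(pk1, '|', pk2, '|', pk3, '|', pk4, '|', pk5, '|', pk6, '|', pk7) PERSISTED;

-- PostgreSQL (target): a plain text column to receive the same value
ALTER TABLE my_table ADD COLUMN sync_key text;

The synchronization then matches rows on the single sync_key column.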
I am using a Postgres database with replication. I use a temp table inside a Postgres function, and I am unable to update the temp table when updating it through a join.
Below is the Postgres query (tempallergyupdates is the temp table):
drop table if exists tempallergyupdates;
create temp table tempallergyupdates(patientallergyid int,updateid int, newupdateid int);
update tempallergyupdates set patientallergyid = 1;
The above query throws the exception below:
cannot update table "tempallergyupdates" because it does not have a
replica identity and publishes updates
We just encountered this and found a solution. It turns out that PostgreSQL does not like tables, even temp tables, that lack a primary key where replication is involved. So either add one to your temp table or use a statement like this after creating the table:
ALTER TABLE table_name REPLICA IDENTITY FULL;
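The primary-key variant of the fix would look like this (same temp table as in the question):

drop table if exists tempallergyupdates;
create temp table tempallergyupdates(
    patientallergyid int primary key,  -- the primary key doubles as the replica identity
    updateid int,
    newupdateid int
);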
REPLICA IDENTITY FULL works when the columns hold standard data types:
ALTER TABLE table_name REPLICA IDENTITY FULL;
But when a json column is present, you will see a message like:
ERROR: could not identify an equality operator for type json
background worker logical replication worker exited with exit code 1
In this case, you must add a new unique index, for example by adding a serial column, skipping the json field, or adding a new PK. Then tell the replication process to use this index:
/* id is a new serial column */
create unique index concurrently idx_someid on tablename (id);
alter table tablename REPLICA IDENTITY USING INDEX idx_someid;
I am trying to get range locks working with Entity Framework. Let's say I have a table with the following columns:
| Id | int |
| Type | int |
| Value | int |
Where Id is a PRIMARY KEY with a CLUSTERED INDEX and Type has a NON-CLUSTERED, NON-UNIQUE INDEX.
If I want to select a value within serializable transaction using this code
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT Value FROM MyTable WHERE Type = 5
SELECT * FROM sys.dm_tran_locks WHERE request_session_id = @@SPID AND resource_type = 'KEY'
COMMIT
It correctly range-locks a row with Type = 5 and next row.
If I do this query:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT Id, Type, Value FROM MyTable WHERE Type = 5
SELECT * FROM sys.dm_tran_locks WHERE request_session_id = @@SPID AND resource_type = 'KEY'
COMMIT
It locks all rows. Unfortunately, Entity Framework selects all columns:
SELECT [Id], [Type], [Value] FROM ...
I am filtering my real table on a column with a FOREIGN KEY, and this column is not unique. I tried making my NON-CLUSTERED INDEX on the Type column UNIQUE, and then it locks the correct rows even when I select all columns.
How can I get the same behavior with a NON-UNIQUE INDEX?
What is locked depends on the query plan: everything the plan reads is subject to locking. So you need to make the index you want to lock on attractive to SQL Server. Start by creating an optimal index for that query.
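For the query in the question, that would be a covering index on Type; a sketch (the index name is made up):

CREATE NONCLUSTERED INDEX IX_MyTable_Type
ON MyTable (Type)
INCLUDE (Value);  -- Value is covered; the clustered key Id rides along automatically

With the whole SELECT answerable from this index, the plan reads, and therefore range-locks, only the keys of IX_MyTable_Type.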
Why do you want a specific locking pattern to occur? If it's for performance reasons that is totally valid. If it's for behavioral reasons that is quite unreliable.
You can also make EF select fewer columns by selecting DTO objects (e.g. anonymous types) instead of entities.
It's a pity that a SERIALIZABLE transaction can't take a range lock on the clustered index when the WHERE clause filters on columns that have a NON-UNIQUE INDEX or no index at all.
I found a nice workaround for Entity Framework.
If you want to lock rows with specific values, for example all rows with Type = 'FINISHED', create a NON-UNIQUE index if the column can contain duplicates. Then we have to tell SQL Server which index to use:
var tables = context.MyTables.SqlQuery("SELECT * FROM dbo.MyTable WITH(INDEX(MyIndex)) WHERE Type='FINISHED'").ToList();
I used WITH(INDEX(MyIndex)) so that it locks all rows where Type = 'FINISHED', even though the index is NON-UNIQUE.
Perhaps someone will bring a better solution than a raw query.
EDIT: The range lock works with a NON-UNIQUE INDEX without any problem. It was not being used before simply because there was not enough data in the database.
I have two tables on two different databases, DB1.Category and DB2.Category.
I need to merge all values so that DB1.Category and DB2.Category are identical, but I need to maintain the PK ID, CategoryID.
The CategoryID is an identity column with an increment and seed of 1 in DB1, but it is not an identity column in DB2.
Is there a way to sync all data in these tables from DB1 to DB2 while maintaining the PK?
This is what I have so far:
MERGE DB1.dbo.Category AS TARGET
USING DB2.dbo.Category AS SOURCE
ON (TARGET.MarketplaceName = SOURCE.MarketplaceName
AND TARGET.MarketplaceCategoryCode = SOURCE.MarketplaceCategoryCode
AND TARGET.MarketplaceCategoryName = SOURCE.MarketplaceCategoryName)
WHEN NOT MATCHED BY TARGET THEN
    INSERT(--*FIELDS*-
    )
    VALUES(--*FIELDS*-
    );
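Alternatively, since CategoryID is not an identity column in DB2, perhaps the IDs could be preserved with a plain INSERT…SELECT for the missing rows (the column list is assumed from the join conditions above):

INSERT INTO DB2.dbo.Category (CategoryID, MarketplaceName, MarketplaceCategoryCode, MarketplaceCategoryName)
SELECT s.CategoryID, s.MarketplaceName, s.MarketplaceCategoryCode, s.MarketplaceCategoryName
FROM DB1.dbo.Category AS s
WHERE NOT EXISTS (
    SELECT 1 FROM DB2.dbo.Category AS t
    WHERE t.CategoryID = s.CategoryID
);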
I am a little confused by your question. The title says "identical tables", but in the body of your question you point out differences. Can you provide the structure of the two tables?