So the situation is: there are two servers, pub_server (publisher) and sub_server (subscriber).
There are two databases on pub_server: db1 and db2.
There is a table xyz_tbl in db1 which is replicated (transactional) to sub_server (publication name: publisher_old).
My task is to drop the subscription and article from publisher_old and create a new publication publisher_new on db2 with the same article xyz_tbl and the same subscriber sub_server.
Now here is the problem: xyz_tbl has a computed column, so when I executed the script for publisher_new I got an error in Replication Monitor.
Error: The column "column_name" cannot be modified because it is either a computed column or is the result of a UNION operator
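For reference, a minimal sketch of the kind of script I executed for publisher_new (everything except xyz_tbl, publisher_new, and sub_server is a placeholder):

-- run at pub_server, in database db2
EXEC sp_addpublication
    @publication = N'publisher_new',
    @status = N'active';

EXEC sp_addarticle
    @publication = N'publisher_new',
    @article = N'xyz_tbl',
    @source_owner = N'dbo',
    @source_object = N'xyz_tbl';

-- the table already exists at the subscriber, so no snapshot is applied
EXEC sp_addsubscription
    @publication = N'publisher_new',
    @subscriber = N'sub_server',
    @destination_db = N'sub_db',
    @sync_type = N'replication support only';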
I am using @sync_type = 'replication support only' since the table already exists at the subscriber (from publisher_old). So why is the Distribution Agent trying to perform inserts on the subscriber, which generates the above error?
And if the Distribution Agent does perform inserts, then how come replication was working from db1, i.e. publisher_old?
How do you handle computed columns in replication? I couldn't find any answer.
Please help!
Most operations are not available for computed columns; updating is one of them.
I recommend not replicating the computed column. You can compute it again in your replication DB.
The other way is to make the computed column an actual column, and then replicate.
If you want inserts to reach the computed column, you can make its kind property equal to PersistantReadOnly.
But if a persisted computed column is not replicated as part of the definition, replication of other objects can fail; and if you do add it, it should be removed before the bcp (snapshot) runs. In the end you can add such columns on the subscriber only by creating them on the publisher (drop and recreate) and replicating that, but then you can run into problems with FKs and indexes.
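If you go the route of not replicating the computed column, here is a minimal sketch of dropping it from the article with a column filter (the publication and column names are assumptions):

-- run in the publication database; excludes the computed column from the article
EXEC sp_articlecolumn
    @publication = N'publisher_new',
    @article = N'xyz_tbl',
    @column = N'computed_col',
    @operation = N'drop';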
Once a day I have to synchronize a table between two databases.
Source: Microsoft SQL Server
Destination: PostgreSQL
Table contains up to 30 million rows.
The first time I will copy the whole table, but after that, for efficiency, my plan is to insert/update only the changed rows.
Done that way, though, if I delete a row from the source database, it will not be deleted from the destination database.
The problem is that I don't know which rows were deleted from the source database.
My dirty idea right now is a binary search: compare a checksum of the rows on each side and thus catch the deleted rows.
I’m at a dead end - please share your thoughts on this...
In SQL Server you can enable Change Tracking to track which rows are Inserted, Updated, or Deleted since the last time you synchronized the tables.
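A minimal sketch of what that looks like (database, table, and key column names are assumptions; the tracked table needs a primary key):

-- enable change tracking at the database level, then per table
ALTER DATABASE SourceDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.BigTable ENABLE CHANGE_TRACKING;

-- on each sync, fetch everything that changed since the version stored last run
DECLARE @last_sync_version bigint = 0;  -- persist this value between runs

SELECT ct.Id,                     -- primary key of the changed/deleted row
       ct.SYS_CHANGE_OPERATION    -- 'I' = insert, 'U' = update, 'D' = delete
FROM CHANGETABLE(CHANGES dbo.BigTable, @last_sync_version) AS ct;

SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- save as the next @last_sync_version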
With the tds_fdw foreign data wrapper, map the source table to a foreign table in pg, and use a join to find/exclude the rows that you need.
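Roughly like this on the PostgreSQL side (server address, credentials, and the column list are all assumptions):

-- map the SQL Server table through tds_fdw
CREATE EXTENSION IF NOT EXISTS tds_fdw;

CREATE SERVER mssql_src FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'mssql.example.com', port '1433', database 'SourceDb');

CREATE USER MAPPING FOR CURRENT_USER SERVER mssql_src
    OPTIONS (username 'sync_user', password 'secret');

CREATE FOREIGN TABLE src_bigtable (id bigint, payload text)
    SERVER mssql_src
    OPTIONS (table_name 'dbo.BigTable');

-- delete destination rows whose ids no longer exist at the source
DELETE FROM bigtable AS d
WHERE NOT EXISTS (SELECT 1 FROM src_bigtable AS s WHERE s.id = d.id);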
SQL Server Transactional Replication:
I understand that T-Rep supports both row and column filters, and that these can be set up through the GUI while configuring replication.
I am trying to write SQL which lists all the filters (row and column) for all the tables that replicate from the publisher. This should be possible by querying the publication DB.
Any help will be highly appreciated
For row filters you can query sysarticles, which contains a row for each article defined. This table is stored in the publication database, and among its columns are filter and filter_clause, which can help you identify row filters.
For column filters you can query sysarticlecolumns, which contains one row for each published table column and maps each column to its article. This table is also stored in the publication database.
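A sketch of both queries, run in the publication database:

-- row filters: articles with a non-empty WHERE clause
SELECT name AS article, filter_clause
FROM dbo.sysarticles
WHERE filter_clause IS NOT NULL AND filter_clause <> '';

-- column filters: the published columns that belong to each article
SELECT a.name AS article, c.name AS column_name
FROM dbo.sysarticlecolumns AS ac
JOIN dbo.sysarticles AS a ON a.artid = ac.artid
JOIN sys.columns AS c ON c.object_id = a.objid AND c.column_id = ac.colid
ORDER BY a.name, c.column_id;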
I am linking tables to a SQL Server 2008 R2 DB via MS Access linked tables.
I get this warning when I try to change data in an Access linked table where the underlying SQL table has more than one bit field in it:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
I don't have any problems when there is only one bit field in the table. It's really a strange error, IMHO. Has anyone else encountered this before and found a workaround for it, by any chance?
I've seen this sort of issue when working with linked tables against SQL in general. I'm not sure why you're seeing it specifically with bit fields. Try adding a 'ts' column with the datatype timestamp (rowversion) to the table, then relink it in Access.
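Something along these lines on the SQL Server side (the table name is a placeholder), followed by refreshing the table link in Access:

-- a rowversion column gives Access a reliable way to detect concurrent edits
ALTER TABLE dbo.MyLinkedTable ADD ts rowversion;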
I know this is an old question, but maybe my answer will benefit others since I struggled with same and other similar issues.
I had a similar error and was mostly able to get around it. One thing that may help is to run SQL Profiler against the database and watch the SQL commands Access issues while you are trying to add a new row.
A few things to check:
1) Verify that you have an ID column in the table set as the Primary key and AutoNumber
2) If this involves a master/child relationship with another table, then in the Access Database Tools "Relationships" window, specify the relationship and the join type between those tables.
3) If there is a join between tables, play around with which primary column and foreign column are exposed in the query.
Using SQL Profiler, I could see it trying to find the row to update based on columns other than the primary key, e.g.:
update table
set ...
where id = 5 and data1 = 'somevalue' and data2 = 'othervalue'
When doing this, I would sometimes get the same error, since I may have edited other values in the new row and therefore the complex where clause would fail to match. What you want is for the update to rely entirely on the primary key.
I'm pretty good around Oracle but I've been struggling to find a decent solution to a problem I'm having with Sybase.
I have a table with an IDENTITY column whose type is a user-defined datatype (UDD) "id", which is numeric(10,0). I've decided to replace the UDD with the native datatype, but I get an error when I do this.
I've found that the only way to do this is:
Rename the original table (table_a to table_a_backup) using the procedure sp_rename
Recreate the original table (table_a) but use native data types
Copy the contents of the backup table back to the original (i.e. insert into table_a select * from table_a_backup)
This works; however, I have over 10M records, and the copy eventually runs out of log segment and halts (I can't increase the segment any further due to physical constraints).
Does anybody have a solution, preferably one that doesn't involve processing the records as anything other than one large set?
Cheers,
JLove
conceptually, something like this works (in Sybase ASE 12.5.x) ...
do an "alter table drop column" on your current ID column
do "alter table add column" stmt to add new column (w/ native datatype) with IDENTITY attribute
Note that the ID field might not have the same numbers, so be very wary of doing the above if the ID field is used as an explicit or implicit key to other tables.
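A sketch of those two statements (table and column names assumed; the renumbering caveat above applies):

-- Sybase ASE: swap the UDD column for a native identity column
alter table table_a drop id
go
alter table table_a add id numeric(10,0) identity
go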
I am using SQL Server 2000 and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database the records came from.
So I want to add a fixed column specified in the publication to my table so I can tell which database the row originated from.
How do I go about doing this?
I would like to avoid altering the main databases, mostly because there are many tables I would need to do this to. I was hoping for some built-in feature of replication that would do this for me somewhere. Other than that, I would go with the view idea.
You could use a calculated column. Use the following on the two databases:
ALTER TABLE TableName ADD
MyColumn AS 'Server1'
Then just define the single "master" database to use a VARCHAR column (or whatever you want) that you fill using the calculated column's value.
You can create a view, which adds the "constant" column, and use it as a replication source.
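E.g. something like this (view and table names are assumptions):

-- a view that stamps each row with a constant source marker
CREATE VIEW dbo.TableName_WithSource
AS
SELECT t.*, 'Server1' AS SourceDb
FROM dbo.TableName AS t;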
So the solution for me was to set up the replication publications to allow transformations and create a DTS package for each site that appends the site ID to the tables to keep the IDs unique, as I can't use GUIDs.