How to check high read or write usage on a table in Sybase 16 - sybase

I want to find which tables have the most read or write activity in Sybase 16 on Linux.

Assuming you're working with the Sybase ASE product, you might want to start with the MDA table master..monOpenObjectActivity, which maintains several counters including logical/physical (page) reads and the number of rows inserted/updated/deleted.
For more details on setting up, configuring and using MDA tables see the Performance & Tuning Series: Monitoring Tables manual.
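For illustration, a query along these lines should surface the busiest tables. This is only a sketch: it assumes the MDA tables are enabled and you have the mon_role; the column names are as documented for ASE 15.7/16 and 'your_db' is a placeholder, so verify against your installation.

    -- Tables with the heaviest read/write activity since the counters were last cleared
    select DBName,
           ObjectName,
           LogicalReads,                                        -- buffer reads against the object
           PhysicalReads,                                       -- pages read from disk
           PhysicalWrites,                                      -- pages written to disk
           RowsInserted + RowsUpdated + RowsDeleted as RowsModified
    from master..monOpenObjectActivity
    where DBName = 'your_db'                                    -- placeholder database name
    order by LogicalReads desc                                  -- or PhysicalWrites desc for write-heavy tables

Note that the counters are cumulative since the server (or the counters) was last restarted or cleared, so compare snapshots taken some time apart if you want a rate.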

Related

Read Committed Snapshot isolation with LOBs

I have a table in a SQL Server 2017 DB used by a lot of long-running transactions that originate from multiple threads. This causes deadlocks several times a day, so I am considering implementing read committed snapshot isolation. The trick is that this table has 3 VARBINARY(MAX) columns, each containing between 10 MB and 1000 MB of data (with the mean around 20 MB), besides several int and bit columns.
Now the questions:
Q1: Will SQL Server copy the entire row (including the VARBINARY(MAX) columns) into the TEMPDB?
Q2: If so, would the performance benefit from moving the VARBINARY(MAX) columns into a separate table with a 1:1 relationship to the original table?
SQL Server has to present you with a consistent view of your data (e.g. T2 sees your row, including the LOB, as it was before T1's mutating transaction started). Which means that yes, it has no choice but to copy the LOB along with the rest of the row data. Which in turn makes me think that yes, performance may benefit from having a separate table for the LOBs.
As usual, I would recommend running a simple experiment that measures performance with both configurations. Please post your results here.
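For such an experiment, a rough T-SQL sketch of the two moving parts might look like the following; the database, table and column names (MyDb, Item, ItemBlob, etc.) are invented for illustration, not taken from the question.

    -- Enable read committed snapshot isolation for the database
    -- (needs a moment with no other active connections, or careful use of WITH ROLLBACK IMMEDIATE).
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Possible 1:1 split: keep the hot scalar columns in the base table
    -- and move the large VARBINARY(MAX) payloads to a side table.
    CREATE TABLE dbo.Item
    (
        ItemId  int IDENTITY PRIMARY KEY,
        Status  bit NOT NULL,
        Retries int NOT NULL
    );

    CREATE TABLE dbo.ItemBlob
    (
        ItemId int PRIMARY KEY
               REFERENCES dbo.Item (ItemId),
        Blob1  varbinary(max) NULL,
        Blob2  varbinary(max) NULL,
        Blob3  varbinary(max) NULL
    );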

Synchronize data from Oracle to PostgreSQL

We would like to synchronize data (insert, update) from Oracle (11g) to PostgreSQL (10). Our approach was the following:
A trigger on the table in Oracle updates a column with nextval from a sequence before insert and update.
PostgreSQL knows the last sequence number processed and fetches the rows from Oracle with sequence number > lastSequenceNumberFetched.
We now have the following problem:
Session 1 in Oracle inserts a row, sequence number (let's say 45) is written but no COMMIT is done in Oracle.
Session 2 in Oracle inserts a row, its sequence number is written (let's say 49, because sequences in Oracle can have gaps) and a COMMIT is done in Oracle.
Session in PostgreSQL fetches rows from Oracle with sequenceNumber > 44 (because the lastSequenceNumberFetched is 44) and gets the row with sequenceNumber 49. So this is the new lastSequenceNumberFetched.
Session 1 in Oracle makes a commit.
Session in PostgreSQL fetches rows from Oracle with sequenceNumber > 49. Problem is that the row with sequenceNumber 45 is never fetched.
Are there any better approaches for our use case avoiding our problem with missing data?
In case you don't have delete operations in your tables and the tables are not very big, then I suggest using the Oracle System Change Number (SCN) at the row level, which is returned by the pseudo column ORA_ROWSCN (link). This is the commit time represented as a number. By default the SCN is tracked per data block, but you can enable tracking at the row level (keyword ROWDEPENDENCIES), so you have to recreate your table with this keyword. At the start of the sync procedure you get the current SCN with the function call dbms_flashback.get_system_change_number, then scan all tables where ora_rowscn between _last_scn_value_ and _current_scn_value_. The disadvantage is that this pseudo column is not indexed, so you will get full table scans, which is slow for big tables.
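A minimal sketch of that approach, with an invented table (orders) standing in for your real one and bind variables for the two SCN boundaries:

    -- The table must be (re)created with ROWDEPENDENCIES for row-level SCN tracking.
    CREATE TABLE orders
    (
        order_id NUMBER PRIMARY KEY,
        payload  VARCHAR2(4000)
    ) ROWDEPENDENCIES;

    -- At the start of a sync run, capture the current SCN ...
    SELECT dbms_flashback.get_system_change_number FROM dual;

    -- ... then pull everything committed between the previous and the current SCN.
    -- ORA_ROWSCN is not indexed, so this is a full table scan.
    SELECT order_id, payload
    FROM   orders
    WHERE  ora_rowscn BETWEEN :last_scn_value AND :current_scn_value;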
If you use delete statements then you also have to track the records which were deleted. For this purpose you can use one log table with the following columns: table_name, table_id_value, operation (insert/update/delete). The table is filled by triggers on the base tables. So for your case, when session 1 commits data in a base table, you get a record in the log table to process, and you don't see it until the session commits. So there are no issues with the sequence numbers you described. A sketch of this is shown below.
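The log table plus trigger could look roughly like this; again the base table orders and the trigger name are placeholders, only the log-table columns come from the description above.

    CREATE TABLE sync_log
    (
        table_name     VARCHAR2(128),
        table_id_value NUMBER,
        operation      VARCHAR2(10)    -- 'INSERT' / 'UPDATE' / 'DELETE'
    );

    CREATE OR REPLACE TRIGGER orders_sync_trg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
        -- The log row only becomes visible to the sync job once this transaction commits.
        IF INSERTING THEN
            INSERT INTO sync_log VALUES ('ORDERS', :NEW.order_id, 'INSERT');
        ELSIF UPDATING THEN
            INSERT INTO sync_log VALUES ('ORDERS', :NEW.order_id, 'UPDATE');
        ELSE
            INSERT INTO sync_log VALUES ('ORDERS', :OLD.order_id, 'DELETE');
        END IF;
    END;
    /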
Hope that helps.
Is this purely a data project, or do you have some client here? If you do have a middle tier you could use an ORM to abstract some of this and do the writes to both. Do you care whether the sequences are the same? It would be possible to do something like collecting all the data to synchronize since a particular timestamp (every table would have to have a UTC timestamp), then taking a hash of all the data and comparing it with what is in Postgres.
It might be useful to have some more of your requirements for the synchronization of the data and the reasoning behind this, e.g.:
Do the keys need to be the same in both environments? Why?
Who views the data? Is the same consumer looking at both sources?
Why wouldn't you just use an ORM to target only one DB? Why do you need both Oracle and Postgres?
I have seen a similar setup: an application on Postgres, mostly for reporting and other secondary tasks, while the main app was on Oracle.
Some of the main app tables are cached in Postgres for convenience, but this setup brings in the sync problem.
The compromise solution was a mix of incremental sequence-based sync during the daytime and a full table copy overnight.
Regarding other solutions proposed here:
Postgres FDW is slow for complex queries and it puts extra load on the foreign DB, especially when the where clause refers to both local and foreign tables.
The same query will run much faster if the foreign table is cached in Postgres (a rough sketch of this caching approach follows at the end of this answer).
Incremental/differential sync using sequence numbers - tried this and it works acceptably for small tables, but the nightmare starts with child relations; maybe an ORM can help here.
The ideal solution in my opinion would probably be to stream Oracle changes to Postgres, or to an intermediary process that replicates the changes to Postgres.
I have no clue how to do this; as I understand it, it requires the Oracle GoldenGate product (plus a licence).
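As a rough illustration of the "cache the foreign table in Postgres" idea mentioned above: this assumes the oracle_fdw extension with a foreign server already defined (here called oracle_srv), and all schema, table and column names are placeholders.

    CREATE FOREIGN TABLE orders_remote
    (
        order_id numeric,
        payload  text
    )
    SERVER oracle_srv
    OPTIONS (schema 'APP', table 'ORDERS');

    -- Cache the foreign table locally so complex joins run entirely in Postgres.
    CREATE MATERIALIZED VIEW orders_cache AS
        SELECT order_id, payload FROM orders_remote;

    -- Refresh on whatever schedule the freshness requirements allow
    -- (e.g. the overnight full copy mentioned above).
    REFRESH MATERIALIZED VIEW orders_cache;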

Having trouble with interface table structures in SQL Server

I'm currently working on a project that involves a third-party database and application. So far we are able to successfully test and interface data between our databases. However, we are having trouble when we extract a large set of data (e.g. 100,000 rows with 10 columns per row) and the process suddenly stops in the middle of the transaction for whatever reason (e.g. blackouts, forced exits, etc.); in this scenario we end up with missing or duplicated data.
Can you please give us suggestions for handling these types of scenarios? Thank you!
Here's our current interface structure
OurDB -> Interface DB -> 3rdParty DB
OurDB: we extract records from OurDB (where a bit column is false) into the InterfaceDB
InterfaceDB: after inserting the records from OurDB, we update the bit column in OurDB to true
3rdPartyDB: they extract and then delete all records from the InterfaceDB (they assume that all records are for extraction)
Well, you definitely need an ETL tool then, preferably SSIS. First, it will drastically improve your transfer rates while also providing robust error handling. Additionally, you will have to use lookup transforms to ensure duplicates do not enter the system. I would suggest going for the Cache Connection Manager to perform the lookups.
In terms of design, if your source system (OurDB) has a primary key, say recId, then have a column, say source_rec_id, in your InterfaceDB table. Say your first run has transferred 100 rows. In your second run you would then need to pick up from the 101st record and move on to the next rows. This way you have a tracking mechanism and a one-to-one correlation between the source and destination systems, so you can tell how many records have been transferred, how many are left, etc.
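Outside of SSIS, a plain T-SQL illustration of that watermark idea could look like this; the table and column names (SourceTable, InterfaceTable, recId, source_rec_id, col1, col2) are assumptions for the sketch.

    DECLARE @last_id int;

    -- Highest source key already copied into the interface table.
    SELECT @last_id = COALESCE(MAX(source_rec_id), 0)
    FROM   InterfaceDB.dbo.InterfaceTable;

    -- Pick up only the rows that have not been transferred yet.
    INSERT INTO InterfaceDB.dbo.InterfaceTable (source_rec_id, col1, col2)
    SELECT recId, col1, col2
    FROM   OurDB.dbo.SourceTable
    WHERE  recId > @last_id;

Restarting after a failure then simply resumes from the last key that made it across, instead of re-copying or skipping rows.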
For a good introduction to SSIS, see Channel 9 - MSDN - SSIS. A very helpful resource.

How to find the longest running queries in Sybase ASE 15?

How to find the longest running queries in Sybase ASE 15?
Do we need to use the MDA tables, or is there another way?
Without using the MDA tables, you can first look at the system table master..syslogshold.
Otherwise you can use the MDA tables master..monProcessStatement and master..monProcessSQLText.
You can also look at sp_monitor 'statement'.
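For the MDA route, something along these lines is a possible starting point; it assumes the MDA tables are enabled, and the column names are as documented for ASE 15.x (check them against your server, e.g. DBName only exists in later versions).

    -- Statements currently executing, longest-running first.
    -- monProcessSQLText returns the text in chunks, so a long statement
    -- shows up as several rows per SPID/KPID.
    select s.SPID,
           s.DBName,
           s.StartTime,
           datediff(ss, s.StartTime, getdate()) as RunningSeconds,
           t.SQLText
    from master..monProcessStatement s,
         master..monProcessSQLText  t
    where s.SPID = t.SPID
      and s.KPID = t.KPID
    order by datediff(ss, s.StartTime, getdate()) desc

    -- Without MDA: oldest open transactions per database.
    select spid, starttime, name
    from master..syslogshold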

What is the difference between Wide and Nonwide tables in SQL 2008?

I was looking at this page on MSDN:
Maximum Capacity Specifications for SQL Server 2008
And it says the following:
Max Columns per 'nonwide' table: 1,024
Max Columns per 'wide' table: 30,000
However I cannot find any information on the difference between 'wide' and 'nonwide' tables in SQL 2008. If I wanted to define a 'wide' table, how would I do it?
Special Table Types
All the info you need is in this MSDN article.
A wide table is just a table with sparse columns. To make a table wide you just add a column set to its definition.
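A minimal sketch of what that looks like; the table and column names here are made up, the relevant parts are the SPARSE modifiers and the COLUMN_SET declaration that turns the table into a wide table.

    CREATE TABLE dbo.WideExample
    (
        Id    int PRIMARY KEY,
        Attr1 int SPARSE NULL,
        Attr2 varchar(50) SPARSE NULL,
        -- The column set makes this a wide table and exposes
        -- all sparse columns through one untyped XML column.
        AllAttrs xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
    );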
I would say the difference is about 28,976 columns.
It is important to note that your total fixed and variable length data is still limited to 8,019 bytes per row. This crazy extra-large number of columns is only supported in sparse tables where MOST of the data is NULL; otherwise you still end up with rows that exceed the 8,019 bytes and get row data that won't fit, or that overflows into extended row data (which is very expensive to maintain compared to normal data pages).
There is a really good book from Kalen Delaney that covers a ton of internal features and limits for SQL Server, entitled SQL Server 2008 Internals. If you are really into the low-level limits and how things are done in SQL Server, it is a fantastic read. It will deepen your knowledge of how SQL Server does what it does under the hood, in some cases down to the byte level on disk.
Another difference is that wide tables don't work with transactional or merge replication. See the "SQL Server Technologies That Support Sparse Columns" section here:
http://msdn.microsoft.com/en-us/library/cc280604(v=sql.105).aspx
