Sybase monitoring: LastUpdateDate stays NULL in monOpenObjectActivity

I want to monitor a table's changes with the master..monOpenObjectActivity monitoring table, but the last update/insert/delete dates are NULL all the time.
After an update/insert/delete, the corresponding monOpenObjectActivity columns are not set:
LastUpdateDate, LastInsertDate and LastDeleteDate stay NULL,
Updates, Inserts and Deletes stay 0,
but RowsInserted, RowsDeleted and RowsUpdated are counted correctly.
The configuration parameters "enable monitoring", "per object statistics active", and "object lockwait timing" are enabled.
My configs for monitoring tables are the following:
Notes:
I want to mention that I have used the monOpenObjectActivity table with other tables and had no problem; for any other table, the timestamps are registered in monOpenObjectActivity.
Sybase version: ASE 16.0 SP02
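For reference, here is a sketch of how these settings are typically checked or enabled in ASE with sp_configure (parameter names as quoted above; illustrative only):
-- display the current run value of each monitoring parameter
sp_configure "enable monitoring"
go
sp_configure "per object statistics active"
go
sp_configure "object lockwait timing"
go
-- a parameter is enabled by passing 1 as the value, e.g.:
sp_configure "per object statistics active", 1
go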

Related

How to check the data retention time (Time Travel) for a specific database in Snowflake

How do I check the retention period for a specific database in Snowflake? I know how to do it at the account level: SHOW PARAMETERS LIKE '%DATA_RETENTION_TIME_IN_DAYS%' IN ACCOUNT;
but I need it for a database.
One way is SHOW DATABASES:
Use SHOW DATABASES and then RESULT_SCAN to filter for the specific database as needed.
(Output truncated to the relevant columns.)
show databases;

name                   retention_time
SNOWFLAKE              1
SNOWFLAKE_SAMPLE_DATA  1
TEST_DB                1
alter database test_db set DATA_RETENTION_TIME_IN_DAYS=2;
Check for the specific database (after re-running SHOW DATABASES):
select "name","retention_time" from table(result_scan(last_query_id())) where "name" like '%TEST%';
name     retention_time
TEST_DB  2
Along with SHOW PARAMETERS:
show parameters like '%retention%' in database TEST_DB;
key                          value  default  level
DATA_RETENTION_TIME_IN_DAYS  2      1        DATABASE
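The SHOW PARAMETERS output can be filtered with RESULT_SCAN in the same way as above (a sketch, reusing the TEST_DB example):
show parameters like 'DATA_RETENTION_TIME_IN_DAYS' in database TEST_DB;
select "key", "value", "level"
from table(result_scan(last_query_id()))
where "key" = 'DATA_RETENTION_TIME_IN_DAYS';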

Can Streamsets Data Collector CDC read from and write to multiple tables?

I have a MSSQL database whose structure is replicated over a Postgres database.
I've enabled CDC in MSSQL and I've used the SQL Server CDC Client in StreamSets Data Collector to listen for changes in that db's tables.
But I can't find a way to write to the same tables in Postgres.
For example, I have 3 tables in MSSQL: tableA, tableB and tableC. I have the same tables in Postgres.
I insert data into tableA and tableC, and I want those changes to be replicated over to Postgres.
In StreamSets DC, in order to write to Postgres, I'm using the JDBC Producer, and in the Table Name field I've specified: ${record:attribute('jdbc.tables')}.
Doing this, the data is read from tableA_CT, tableB_CT and tableC_CT (the change-tracking tables MSSQL creates when you enable the CDC option), so I end up with those table names in ${record:attribute('jdbc.tables')}.
Is there a way to write to the same tables in Postgres as in MSSQL?
You can cut the _CT suffix off the jdbc.tables attribute by using an Expression Evaluator with a Header Attribute Expression of:
${str:isNullOrEmpty(record:attribute('jdbc.tables')) ? '' :
str:substring(record:attribute('jdbc.tables'), 0,
str:length(record:attribute('jdbc.tables')) - 3)}
Note - the str:isNullOrEmpty test is a workaround for SDC-9269.
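Note also that the Expression Evaluator writes the result of this expression to whichever header attribute you name in its configuration; assuming you write it back to jdbc.tables, the JDBC Producer's Table Name setting of ${record:attribute('jdbc.tables')} then resolves to the original names (tableA, tableB, tableC) and the records land in the matching Postgres tables.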

Check SQL Server Replication Defaults

We have a database, a, that is replicated to a subscriber db, b (used for SSRS reporting), every night at 2:45 AM.
We need to add a column to one of the replicated tables, since its source file on our iSeries is having a column added that we need to use in our SSRS reporting db.
I understand (from Making Schema Changes on Publication Databases and the answer here from Damien_The_Unbeliever) that there is a default setting in SQL Server Replication whereby, if we use a T-SQL ALTER TABLE DDL statement to add the new column to our table BUPF in the PION database, the change will automatically propagate to the subscriber db.
How can I check the replication of schema changes setting to ensure that we will have no issues with the replication following making the change?
Or should I just run ALTER TABLE BUPF ADD BUPCAT char(5) NULL?
To add a new column to a table and include it in an existing publication, you'll need to use ALTER TABLE <Table> ADD <Column> syntax at the publisher. By default the schema change will be propagated to subscribers; this requires the publication property @replicate_ddl to be set to 1 (true).
You can verify whether @replicate_ddl is set to 1 by executing sp_helppublication and inspecting the replicate_ddl value in its result set. Likewise, you can set @replicate_ddl by using sp_changepublication, as sketched below.
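A sketch of both calls, run in the publication database at the publisher (the publication name is hypothetical; substitute your own):
-- check the current setting: look at the replicate_ddl column in the result set
EXEC sp_helppublication @publication = N'MyPublication';
-- turn DDL replication on if it is currently off
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'replicate_ddl',
    @value = 1;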
See Making Schema Changes on Publication Databases for more information.

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries I get the same result.
Script for simple table:
create table dbo.works (
    work_id int not null identity(1,1) constraint PK_WORKS primary key,
    client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
    login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
    start_time datetime not null,
    end_time datetime not null,
    caption varchar(1000) null
)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 sql server (local) - error growing transactions log file.
But in Azure I cannot manage this parameter.
How can I change my structure in populated tables?
Azure SQL Database has a 2 GB transaction size limit which you are running into. For schema changes like yours, you can create a new table with the new schema and copy the data into it in batches (see the sketch below).
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
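A minimal sketch of that batched-copy approach, reusing the table definition from the question (the identity property and foreign keys are omitted for brevity, and the batch size is arbitrary):
-- 1. new table that already contains the computed column
create table dbo.works_new (
    work_id int not null primary key,
    client_id int null,
    login_id int not null,
    start_time datetime not null,
    end_time datetime not null,
    caption varchar(1000) null,
    delta_secs as datediff(second, start_time, end_time) persisted
);
-- 2. copy rows in small batches so each transaction stays well under the log limit
declare @last int = 0, @batch int = 100000;
while 1 = 1
begin
    insert into dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    select top (@batch) work_id, client_id, login_id, start_time, end_time, caption
    from dbo.works
    where work_id > @last
    order by work_id;

    if @@ROWCOUNT = 0 break;

    select @last = max(work_id) from dbo.works_new;
end;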
Look at sys.database_files after connecting to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions, or move to a higher tier if that is not possible because of the amount of data you are modifying in a single transaction.
You can also get the same information by running:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view with delta_secs as a calculated field; since this is a derived value, that's probably a better approach anyway (see the sketch after the link below).
https://msdn.microsoft.com/en-us/library/ms187956.aspx
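A minimal sketch of option 2, using the column names from the create script in the question (the view name is illustrative):
create view dbo.works_with_delta
as
select work_id, client_id, login_id, start_time, end_time, caption,
       datediff(second, start_time, end_time) as delta_secs
from dbo.works;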

How can I view full SQL Job History?

In SQL Server Management Studio, when I "View History" for a SQL Job, I'm only shown the last 50 executions of the Job.
How can I view a full log of every execution of a SQL Job since it was created on the server?
The SQL Server Job system limits the total number of job history entries both per job and over the whole system. This information is stored in the MSDB database.
Obviously you won't be able to go back and see information that has since been discarded, but you can change the SQL Server Agent properties and increase the number of entries that will be recorded from now on.
In the SQL Server Agent Properties:
Select the History page
Modify the 'Maximum job history log size (rows)' and 'Maximum job history rows per job' to suit, or change how historical job data is deleted based on its age.
It won't give you back your history, but it'll help with your future queries!
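The same settings can also be changed from T-SQL via the procedure that SSMS calls under the covers (a sketch; sp_set_sqlagent_properties is not formally documented, so treat the parameter names as an assumption and test on your version first):
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 100000,        -- 'Maximum job history log size (rows)'
    @jobhistory_max_rows_per_job = 1000;  -- 'Maximum job history rows per job'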
Job history is stored in the msdb system database (the msdb.dbo.sysjobhistory table). If you want to find out exactly which tables SQL Server Management Studio reads, you can use SQL Server Profiler to intercept the SQL statements it sends and see the table names.
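A sketch of reading the stored history directly (column names as in msdb.dbo.sysjobhistory; it is of course subject to the same retention limits described above):
select j.name as job_name,
       h.step_id,
       h.step_name,
       h.run_date,    -- int, yyyymmdd
       h.run_time,    -- int, hhmmss
       h.run_status,  -- 1 = succeeded, 0 = failed
       h.message
from msdb.dbo.sysjobhistory as h
join msdb.dbo.sysjobs as j on j.job_id = h.job_id
order by h.instance_id desc;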
Your outcome depends on a couple of things:
what you've set for the "Limit size of job log history" and "Automatically remove agent history" settings [right-click on SQL Server Agent | Properties | History], and
whether or not you are running a "History Clean Up" task in a Maintenance Plan (or doing so manually, for that matter). The Maintenance Plan task runs the msdb.dbo.sp_purge_jobhistory stored procedure with an "oldest date" parameter that equates to the period you have selected.
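For reference, the manual equivalent of that clean-up task (a sketch; the cutoff date is arbitrary):
-- removes history entries older than the given date for all jobs
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = '2020-01-01';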
You could also use a temporal table to change the retention of the data. To persist job history in Azure SQL Managed Instance:
-- add the period columns required for system versioning
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD StartTime DATETIME2 NOT NULL DEFAULT ('19000101 00:00:00.0000000')
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD EndTime DATETIME2 NOT NULL DEFAULT ('99991231 23:59:59.9999999')
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD PERIOD FOR SYSTEM_TIME (StartTime, EndTime)
-- system versioning requires a primary key on the table
ALTER TABLE [msdb].[dbo].[sysjobhistory]
ADD CONSTRAINT PK_sysjobhistory PRIMARY KEY (instance_id, job_id, step_id)
-- turn versioning on with a one-month retention period
ALTER TABLE [msdb].[dbo].[sysjobhistory]
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[sysjobhistoryall],
    DATA_CONSISTENCY_CHECK = ON, HISTORY_RETENTION_PERIOD = 1 MONTH))
select * from msdb.dbo.sysjobhistoryall
This approach lets you define the retention period as a length of time (here 1 MONTH) instead of as a maximum number of rows per job / maximum job history log size (rows).
