PostgreSQL: dropping a bytea[] column takes very long - arrays

I have a problem with dropping a bytea[] column from my PostgreSQL database.
First: the column had a size of 1.4 GB, so I set the column to null => UPDATE XX SET BYTACOLUMN = null.
Result: the column still has the same size of 1.4 GB. This is a little bit confusing. How can I reset the size of the column?
Second: OK, the size is 1.4 GB. Let's try to drop the column -> ALTER TABLE XX DROP COLUMN XX.
Result: the DROP takes very, very long (a full weekend) with no result, and the database blocked all incoming connections.
Questions:
How can I drop the bytea[] column and reclaim the full size of the column?
EDIT:
EDIT 2:
active locks
|schemaname|relname|locktype|page|virtualtransaction|pid|mode|granted|
|----------|-------|--------|----|------------------|---|----|-------|
|GDI|Order|relation||-1/84181397||AccessShareLock|true|
pg_locks
|locktype|database|relation|page|tuple|virtualxid|transactionid|classid|objid|objsubid|virtualtransaction|pid|mode|granted|fastpath|
|--------|--------|--------|----|-----|----------|-------------|-------|-----|--------|------------------|---|----|-------|--------|
|relation|730418|1247||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|1249||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|12248||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2685||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2684||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2679||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2678||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|3455||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2663||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2662||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2615||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|2610||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|1259||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|12186||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|12143||||||||9/87381|12284|AccessShareLock|true|true|
|relation|730418|3200026||||||||9/87381|12284|AccessShareLock|true|true|
|virtualxid|||||9/87381|||||9/87381|12284|ExclusiveLock|true|true|
|relation|730418|882100||||||||11/58020|2332|RowExclusiveLock|true|true|
|relation|730418|882099||||||||11/58020|2332|RowExclusiveLock|true|true|
|relation|730418|882098||||||||11/58020|2332|RowExclusiveLock|true|true|
|relation|730418|882003||||||||11/58020|2332|RowExclusiveLock|true|true|
|virtualxid|||||11/58020|||||11/58020|2332|ExclusiveLock|true|true|
|relation|730418|1462743||||||||8/132576|7956|RowExclusiveLock|true|true|
|virtualxid|||||8/132576|||||8/132576|7956|ExclusiveLock|true|true|
|relation|730418|1284520||||||||10/68630|11140|RowExclusiveLock|true|true|
|virtualxid|||||10/68630|||||10/68630|11140|ExclusiveLock|true|true|
|relation|730418|2656||||||||9/87381|12284|AccessShareLock|true|false|
|relation|75098|151106||||||||-1/91524731||AccessShareLock|true|false|
|relation|730418|882111||||||||-1/84181397||AccessShareLock|true|false|
|relation|730418|2703||||||||9/87381|12284|AccessShareLock|true|false|
|relation|75098|151112||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|150950||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|151103||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|151107||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|151111||||||||-1/91524731||AccessShareLock|true|false|
|relation|0|2676||||||||9/87381|12284|AccessShareLock|true|false|
|relation|730418|881809||||||||-1/84181397||AccessShareLock|true|false|
|relation|75098|151102||||||||-1/91524731||AccessShareLock|true|false|
|relation|730418|2604||||||||9/87381|12284|AccessShareLock|true|false|
|relation|730418|881815||||||||-1/84181397||AccessShareLock|true|false|
|relation|0|1262||||||||9/87381|12284|AccessShareLock|true|false|
|relation|730418|2658||||||||9/87381|12284|AccessShareLock|true|false|
|relation|0|2677||||||||9/87381|12284|AccessShareLock|true|false|
|relation|75098|151108||||||||-1/91524731||AccessShareLock|true|false|
|relation|730418|2704||||||||9/87381|12284|AccessShareLock|true|false|
|relation|75098|150943||||||||-1/91524731||AccessShareLock|true|false|
|transactionid||||||84181397||||-1/84181397||ExclusiveLock|true|false|
|relation|0|2672||||||||9/87381|12284|AccessShareLock|true|false|
|transactionid||||||91524731||||-1/91524731||ExclusiveLock|true|false|
|relation|730418|2659||||||||9/87381|12284|AccessShareLock|true|false|
|relation|75098|151109||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|151104||||||||-1/91524731||AccessShareLock|true|false|
|relation|75098|151110||||||||-1/91524731||AccessShareLock|true|false|
|relation|730418|2299546||||||||-1/84181397||AccessShareLock|true|false|
|relation|730418|1284518||||||||10/68630|11140|ShareUpdateExclusiveLock|true|false|
|relation|0|2671||||||||9/87381|12284|AccessShareLock|true|false|
|relation|730418|2657||||||||9/87381|12284|AccessShareLock|true|false|
|relation|730418|1462741||||||||8/132576|7956|ShareUpdateExclusiveLock|true|false|
|relation|730418|957042||||||||-1/84181397||AccessShareLock|true|false|
|relation|730418|881997||||||||11/58020|2332|ShareUpdateExclusiveLock|true|false|
|relation|75098|151105||||||||-1/91524731||AccessShareLock|true|false|
|relation|730418|957034||||||||-1/84181397||AccessShareLock|true|false|
|relation|0|1260||||||||9/87381|12284|AccessShareLock|true|false|
|page|730418|881815|7|||||||-1/84181397||SIReadLock|true|false|
pg_stat_activity
|datid|datname|pid|usesysid|usename|application_name|client_addr|client_hostname|client_port|backend_start|xact_start|query_start|state_change|wait_event_type|wait_event|state|backend_xid|backend_xmin|query|backend_type|
|-----|-------|---|--------|-------|----------------|-----------|---------------|-----------|-------------|----------|-----------|------------|---------------|----------|-----|-----------|------------|-----|------------|
|||6320|10|postgres|||||2020-11-15 19:23:08||||Activity|LogicalLauncherMain|||||logical replication launcher|
|||5808|||||||2020-11-15 19:23:08||||Activity|AutoVacuumMain|||||autovacuum launcher|
|730418|AlfaGateWay|1868|10|postgres|DBeaver 7.1.3 - Main <AlfaGateWay>|127.0.0.1||49568|2020-11-16 09:58:24||2020-11-16 09:58:24|2020-11-16 09:58:24|Client|ClientRead|idle|||SET application_name = 'DBeaver 7.1.3 - Main <AlfaGateWay>'|client backend|
|730418|AlfaGateWay|7136|10|postgres|DBeaver 7.1.3 - Metadata <AlfaGateWay>|127.0.0.1||49570|2020-11-16 09:58:24||2020-11-16 09:58:24|2020-11-16 09:58:24|Client|ClientRead|idle|||SET application_name = 'DBeaver 7.1.3 - Metadata <AlfaGateWay>'|client backend|
|730418|AlfaGateWay|9572|10|postgres|DBeaver 7.1.3 - SQLEditor <Script-4.sql>|127.0.0.1||49572|2020-11-16 09:58:24||2020-11-16 09:58:24|2020-11-16 09:58:24|Client|ClientRead|idle|||SET application_name = 'DBeaver 7.1.3 - SQLEditor <Script-4.sql>'|client backend|
|730418|AlfaGateWay|9416|10|postgres|DBeaver 7.1.3 - SQLEditor <Script-4.sql>|127.0.0.1||49574|2020-11-16 09:58:24||2020-11-16 09:59:00|2020-11-16 09:59:00|Client|ClientRead|idle|||SHOW search_path|client backend|
|730418|AlfaGateWay|11360|10|postgres|DBeaver 7.1.3 - SQLEditor <Script-3.sql>|127.0.0.1||49576|2020-11-16 09:58:24||2020-11-16 09:58:24|2020-11-16 09:58:24|Client|ClientRead|idle|||SET application_name = 'DBeaver 7.1.3 - SQLEditor <Script-3.sql>'|client backend|
|730418|AlfaGateWay|7956|||||||2020-11-16 09:55:04|2020-11-16 09:58:27|2020-11-16 09:58:27|2020-11-16 09:58:27|||active||84181397|autovacuum: VACUUM pg_toast.pg_toast_881742|autovacuum worker|
|730418|AlfaGateWay|12284|10|postgres|DBeaver 7.1.3 - SQLEditor <Script.sql>|127.0.0.1||49578|2020-11-16 09:58:24|2020-11-16 10:02:24|2020-11-16 10:03:42|2020-11-16 10:03:42|||active||84181397|SELECT * FROM pg_stat_activity|client backend|
|730418|AlfaGateWay|11140|||||||2020-11-16 10:00:09|2020-11-16 10:03:35|2020-11-16 10:03:35|2020-11-16 10:03:35|||active||84181397|autovacuum: VACUUM pg_toast.pg_toast_881809|autovacuum worker|
|730418|AlfaGateWay|4588|||||||2020-11-16 10:01:45|2020-11-16 10:03:40|2020-11-16 10:03:40|2020-11-16 10:03:40|||active||84181397|autovacuum: VACUUM pg_catalog.pg_class|autovacuum worker|
|||5176|||||||2020-11-15 19:23:08||||Activity|BgWriterMain|||||background writer|
|||1624|||||||2020-11-15 19:23:08||||Activity|CheckpointerMain|||||checkpointer|
|||6524|||||||2020-11-15 19:23:08||||Activity|WalWriterMain|||||walwriter|

Dropping a column is very fast. There must be an open transaction that has a lock on the table and blocks you. Close all long running open transactions and retry. Long running transactions are very bad for your database.
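A quick way to spot such long-running transactions, as a rough sketch against the standard pg_stat_activity view (the same view dumped above):
-- sessions ordered by how long their transaction has been open
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;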
Neither dropping the column nor updating it to NULL will shrink your table (the update might even make it grow). To reclaim unused space in the table, run VACUUM (FULL) on it (but be aware that that blocks all access to the table until it is done).
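For example, a minimal sketch ("XX" stands for the table name as in the question; pg_total_relation_size covers the TOAST data where the bytea[] values live):
-- on-disk size of the table including TOAST and indexes
SELECT pg_size_pretty(pg_total_relation_size('XX'));
-- rewrites the table and returns the freed space to the OS;
-- takes an ACCESS EXCLUSIVE lock for the whole duration
VACUUM (FULL) XX;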
Update:
In your special case, the transaction that blocks the ALTER TABLE is a prepared transaction, which is why it has no pid associated with it.
Look into pg_prepared_xacts, and you will find some abandoned prepared transactions. Roll them back using the gid from pg_prepared_xacts:
ROLLBACK PREPARED 'transactionname';
If you don't need prepared transactions, set max_prepared_transactions to 0. If you need them, you also need to provide a transaction manager that cleans up such prepared transactions when a problem occurs, since stale prepared transactions are just as bad for your database as long-running open transactions.
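To find the abandoned prepared transactions and the gid to pass to ROLLBACK PREPARED, a minimal sketch:
-- all prepared transactions still pending on this server
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;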

Related

MariaDB replication is not working when no database is selected

I'm using MariaDB 10.6.8 and have one master DB and two slave DBs. Those DBs are set up for replication.
When I execute an INSERT or UPDATE query without selecting a database, replication doesn't seem to work. In other words, the master DB's data is changed but the slave DBs' data remains intact.
/* no database is selected */
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52259873
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
MariaDB [(none)]> UPDATE some_database.some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52260068
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
/* only the master database's record changed, even though the replication position advanced */
However, after selecting the database, replication works fine.
/* but, after selecting the database */
MariaDB [(none)]> USE some_database;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [some_database]> UPDATE some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
/* now both the master's and the slaves' records change */
Can anyone tell me what could be the cause of this situation?
Regardless of the binary log format (MIXED, STATEMENT, ROW), all DML commands are written to the binary log file as soon as the transaction is committed.
When using the ROW format, a TABLE_MAP event is logged first, which contains a unique ID, the database and the table name. The ROW_EVENT (Delete/Insert/Update) refers to one or more table IDs to identify the tables used.
The STATEMENT format logs a query event, which contains the default database name, a timestamp and the SQL statement. If there is no default database, the statement itself contains the database name.
Binlog dump example for the STATEMENT format (non-relevant parts such as timestamps and user variables are removed from the output):
without default database
#230210 4:42:41 server id 1 end_log_pos 474 CRC32 0x1fa4fa55 Query thread_id=5 exec_time=0 error_code=0 xid=0
insert into test.t1 values (1),(2)
/*!*/;
# at 474
#230210 4:42:41 server id 1 end_log_pos 505 CRC32 0xfecc5d48 Xid = 28
COMMIT/*!*/;
# at 505
with default database:
#230210 4:44:35 server id 1 end_log_pos 639 CRC32 0xfc862172 Query thread_id=5 exec_time=0 error_code=0 xid=0
use `test`/*!*/;
insert into t1 values (1),(2)
/*!*/;
# at 639
#230210 4:44:35 server id 1 end_log_pos 670 CRC32 0xca70b57f Xid = 56
COMMIT/*!*/;
If a session doesn't use a default database on the source server, the statement may not be replicated when a binary log filter such as replicate_do_db is specified on the replica, since the replica doesn't parse the statement but only checks whether the default database name matches the filter.
To avoid inconsistent data on your replicas, I would recommend using the ROW format instead.
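For example, a sketch of the switch (SET GLOBAL only affects sessions that connect afterwards, so also persist it in the server configuration):
-- switch the binary log format to row-based logging
SET GLOBAL binlog_format = 'ROW';
and in the [mysqld] section of the configuration file:
binlog_format = ROW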

How to resolve Application timeout issue due to SQL Query in 'Killed/ROLLBACK' Scenario

I have an application with a database in SQL Server 2012, and the application uses Entity Framework to communicate with the database. The application has a functionality that updates a record in a table based on a WHERE condition, and the primary key field is the one used in the WHERE condition. So the update touches only a single record (there is no loop or anything, just an update to a single record). This is the background.
Here is the issue I'm now facing: I get a timeout error message from the application when I invoke the functionality to update a record in the table (as mentioned above). I checked the query execution in SQL Server using Activity Monitor, and under the 'Processes' tab I could see that the command of this query goes to 'KILLED/ROLLBACK' after some wait types (like LOGBUFFER, PAGEIOLATCH_XX, etc.).
I tried to execute the update query directly in SQL Server and that is also not responding. So it's clear the issue is with SQL Server, and it takes too much time to execute the update query. But the execution plan looks good and the PK field is used in the WHERE condition. Is it something related to disk latency?
Note: this issue is not consistent; sometimes it works.
Here are the file stats. How can I interpret or reach a conclusion from this data?
My DB
|DbId|FileId|TimeStamp|NumberReads|BytesRead|IoStallReadMS|NumberWrites|BytesWritten|IoStallWriteMS|IoStallMS|BytesOnDisk|FileHandle|
|----|------|---------|-----------|---------|-------------|------------|------------|--------------|---------|-----------|----------|
|2|1|-1152466625|21199845|1351872315392|2528572322|21869447|1424883785728|10201419039|12729991361|28266332160|0x0000000000000C64|
|2|2|-1152466625|1063|45187072|87119|1013000|61628433920|178901888|178989007|6945505280|0x0000000000000CC4|
TempDB
|DbId|FileId|TimeStamp|NumberReads|BytesRead|IoStallReadMS|NumberWrites|BytesWritten|IoStallWriteMS|IoStallMS|BytesOnDisk|FileHandle|
|----|------|---------|-----------|---------|-------------|------------|------------|--------------|---------|-----------|----------|
|18|1|-1152466625|390905|27728437248|52640514|196501|6927843328|817347538|869988052|58378551296|0x0000000000001F3C|
|18|2|-1152466625|24840|1596173312|645024|56563|3335298048|2590871|3235895|938344448|0x00000000000012BC|
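The column layout looks like the output of fn_virtualfilestats. As a rough sketch, the SQL Server 2012 DMV equivalent turns the stalls into average latencies per file (its column names differ slightly from the table above):
-- average stall per read and per write, per database file
SELECT database_id,
       file_id,
       io_stall_read_ms  * 1.0 / NULLIF(num_of_reads, 0)  AS avg_read_stall_ms,
       io_stall_write_ms * 1.0 / NULLIF(num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
Applied to the numbers above, the first data file of "My DB" has roughly 10201419039 / 21869447 ≈ 466 ms of stall per write on average, which would indeed point at serious write latency on that volume.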

postgresql deadlock

Sometimes PostgreSQL raises deadlock errors.
A trigger on the table runs SELECT ... FOR UPDATE.
Definition of the comment table:
http://pastebin.com/L1a8dbn4
Log (the INSERT statements are truncated):
2012-01-26 17:21:06 MSK ERROR: deadlock detected
2012-01-26 17:21:06 MSK DETAIL: Process 2754 waits for ExclusiveLock on tuple (40224,15) of relation 735493 of database 734745; blocked by process 2053.
Process 2053 waits for ShareLock on transaction 25162240; blocked by process 2754.
Process 2754: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
Process 2053: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (4071267066, 935967, 11372945, 'text2') RETURNING comment.id;
2012-01-26 17:21:06 MSK HINT: See server log for query details.
2012-01-26 17:21:06 MSK CONTEXT: SQL statement "SELECT comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE"
PL/pgSQL function "increase_comment_counter" line 5 at SQL statement
2012-01-26 17:21:06 MSK STATEMENT: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
And trigger on table comment:
CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
DECLARE
comments_count_var INTEGER;
BEGIN
SELECT INTO comments_count_var comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE;
UPDATE content SET comments_count = comments_count_var + 1, last_comment_dt = now() WHERE content.id = NEW.content_id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER increase_comment_counter_trigger AFTER INSERT ON comment FOR EACH ROW EXECUTE PROCEDURE increase_comment_counter();
Why can this happen?
Thanks!
These are two comments being inserted with the same content_id. Merely inserting the comment will take out a SHARE lock on the content row, in order to stop another transaction deleting that row until the first transaction has completed.
However, the trigger then goes on to upgrade the lock to EXCLUSIVE, and this can be blocked by a concurrent transaction performing the same process. Consider the following sequence of events:
1. Txn 2754: Insert comment
2. Txn 2053: Insert comment
3. Txn 2754: Lock content #935967 SHARE (performed by fkey)
4. Txn 2053: Lock content #935967 SHARE (performed by fkey)
5. Txn 2754: Trigger fires, requests lock on content #935967 EXCLUSIVE (blocks on 2053's share lock)
6. Txn 2053: Trigger fires, requests lock on content #935967 EXCLUSIVE (blocks on 2754's share lock)
So: deadlock.
One solution is to immediately take an exclusive lock on the content row before inserting the comment. i.e.
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE
INSERT INTO comment(.....)
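Spelled out as one transaction (a sketch using the column names and values from the deadlock log above):
BEGIN;
-- lock the parent row first, so the trigger's FOR UPDATE cannot deadlock
-- with the share lock taken by the foreign key check
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE;
INSERT INTO comment (user_id, content_id, reply_id, text)
VALUES (1756235868, 935967, 11378142, 'text1')
RETURNING comment.id;
COMMIT;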
Another solution is simply to avoid this "cached counts" pattern completely, except where you can prove it is necessary for performance. If so, consider keeping the cached count somewhere other than the content table, e.g. in a dedicated table for the counter. That will also cut down on the update traffic to the content table every time a comment gets added. Or just re-select the count and use memcached in the application. There is no getting around the fact that wherever you store this cached count will be a choke point; it has to be updated safely.
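A minimal sketch of the dedicated counter table idea (the table, column names and types here are made up, not taken from the original schema):
-- one counter row per content row, kept outside the content table
CREATE TABLE content_comment_counts (
    content_id     bigint PRIMARY KEY REFERENCES content (id),
    comments_count integer NOT NULL DEFAULT 0
);
-- in the trigger a single UPDATE is enough: there is no SHARE -> EXCLUSIVE
-- lock upgrade on the content row itself, which is what produced the deadlock
UPDATE content_comment_counts
   SET comments_count = comments_count + 1
 WHERE content_id = NEW.content_id;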

Why does an UPDATE take much longer than a SELECT?

I have the following select statement that finishes almost instantly.
declare @weekending varchar(6)
set @weekending = 100103
select InvoicesCharges.orderaccnumber, Accountnumbersorders.accountnumber
from Accountnumbersorders, storeinformation, routeselecttable,InvoicesCharges, invoice
where InvoicesCharges.pubid = Accountnumbersorders.publication
and Accountnumbersorders.actype = 0
and Accountnumbersorders.valuezone = 'none'
and storeinformation.storeroutename = routeselecttable.istoreroutenumber
and storeinformation.storenumber = invoice.store_number
and InvoicesCharges.invoice_number = invoice.invoice_number
and convert(varchar(6),Invoice.bill_to,12) = @weekending
However, the equivalent update statement takes 1m40s
declare @weekending varchar(6)
set @weekending = 100103
update InvoicesCharges
set InvoicesCharges.orderaccnumber = Accountnumbersorders.accountnumber
from Accountnumbersorders, storeinformation, routeselecttable,InvoicesCharges, invoice
where InvoicesCharges.pubid = Accountnumbersorders.publication
and Accountnumbersorders.actype = 0
and dbo.Accountnumbersorders.valuezone = 'none'
and storeinformation.storeroutename = routeselecttable.istoreroutenumber
and storeinformation.storenumber = invoice.store_number
and InvoicesCharges.invoice_number = invoice.invoice_number
and convert(varchar(6),Invoice.bill_to,12) = @weekending
Even if I add:
and InvoicesCharges.orderaccnumber <> Accountnumbersorders.accountnumber
at the end of the update statement reducing the number of writes to zero, it takes the same amount of time.
Am I doing something wrong here? Why is there such a huge difference?
Possible reasons why the UPDATE has so much more work to do than the SELECT:
transaction log file writes
index updates
foreign key lookups
foreign key cascades
indexed views
computed columns
check constraints
locks
latches
lock escalation
snapshot isolation
DB mirroring
file growth
other processes reading/writing
page splits / unsuitable clustered index
forward pointer/row overflow events
poor indexes
statistics out of date
poor disk layout (eg one big RAID for everything)
Check constraints with UDFs that have table access
...
Although, the usual suspect is a trigger...
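If you want to rule that suspect out quickly, a sketch for listing the triggers defined on the updated table (sys.triggers is the standard catalog view from SQL Server 2005 onwards; the dbo schema is assumed):
-- triggers defined on the table being updated
SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.InvoicesCharges');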
Also, your extra condition doesn't really help: how would SQL Server know to ignore it? An update is still generated, with most of the baggage... even the trigger will still fire. Locks must be held while rows are searched for the other conditions, for example.
Edited Sep 2011 and Feb 2012 with more options
The update has to lock and modify the data in the table, and also log the changes to the transaction log. The select does not have to do any of those things.
Because reading does not affect indices, triggers, and what have you?
On slow servers or large databases I usually use UPDATE DELAYED, which waits for a "break" to update the database itself.

SQL Server FTI: How to check table status?

In the SQL Server Full-Text Indexing scheme I want to know if a table is in
start_change_tracking mode
update_index mode
start_change_tracking and start_background_updateindex modes
The problem is that I set my tables to "background update index" and then tell them to "start change tracking", but some months later they don't seem to be tracking changes.
How can I see the status of the "background updateindex" and "change tracking" flags?
example:
sp_fulltext_table @tabname='DiaryEntry', @action='start_background_updateindex'
Server: Msg 15633, Level 16, State 1, Procedure sp_fulltext_table, Line 364
Full-text auto propagation is currently enabled for table 'DiaryEntry'.
sp_fulltext_table @tabname='Ticket', @action='start_background_updateindex'
Server: Msg 15633, Level 16, State 1, Procedure sp_fulltext_table, Line 364
Full-text auto propagation is currently enabled for table 'Ticket'.
Obviously a table has an indexing status; I just want to know it so I can display it to the user (i.e. me).
The other available API:
EXECUTE sp_help_fulltext_tables
only returns the tables that are in the catalog; it doesn't return their status.
|TABLE_OWNER|TABLE_NAME|FULLTEXT_KEY_INDEX_NAME|FULLTEXT_KEY_COLID|FULLTEXT_INDEX_ACTIVE|FULLTEXT_CATALOG_NAME|
|-----------|----------|-----------------------|------------------|---------------------|---------------------|
|dbo|DiaryEntry|PK_DiaryEntry_GUID|1|1|FrontlineFTCatalog|
|dbo|Ticket|PK__TICKET_TicketGUID|1|1|FrontlineFTCatalog|
And I can get the PopulateStatus of an entire catalog:
SELECT FULLTEXTCATALOGPROPERTY('MyCatalog', 'PopulateStatus') AS PopulateStatus
which returns a status for the catalog:
0 = Idle
1 = Full population in progress
2 = Paused
3 = Throttled
4 = Recovering
5 = Shutdown
6 = Incremental population in progress
7 = Building index
8 = Disk is full. Paused.
9 = Change tracking
but not for a table.
SQL Server 2000 SP4
SELECT @@version
Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
Aug 6 2000 00:57:48
Copyright (c) 1988-2000 Microsoft Corporation
Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
Regardless of any bug, I want to create a UI so I can easily see its status.
Christ. I had a whole nicely formatted answer. I was scrolling to hit save when IE crashed.
Short version: query OBJECTPROPERTY with these properties:
TableFullTextPopulateStatus
TableFullTextBackgroundUpdateIndexOn
TableFullTextCatalogId
TableFullTextChangeTrackingOn
TableFullTextKeyColumn
TableHasActiveFulltextIndex
TableFullTextBackgroundUpdateIndexOn values:
1 = TRUE
0 = FALSE
TableFullTextPopulateStatus values:
0 = No population
1 = Full population
2 = Incremental population
Full example:
SELECT
--indicates whether full-text change-tracking is enabled on the table (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextChangeTrackingOn') AS TableFullTextChangeTrackingOn,
--indicate the population status of a full-text table (0=No population, 1=Full Population, 2=Incremental Population)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextPopulateStatus') AS TableFullTextPopulateStatus,
--indicates whether a table has full-text background update indexing (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextBackgroundUpdateIndexOn') AS TableFullTextBackgroundUpdateIndexOn,
-- provides the full-text catalog ID in which the full-text index data for the table resides (0=table is not indexed)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextCatalogId') AS TableFullTextCatalogId,
--provides the column ID of the full-text unique key column (0=table is not indexed)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextKeyColumn') AS TableFullTextKeyColumn,
--indicates whether a table has an active full-text index (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableHasActiveFulltextIndex') AS TableHasActiveFulltextIndex
What version of SQL Server / service pack are you running? This used to be a bug in SQL 2000:
http://support.microsoft.com/kb/290212
Execute sp_fulltext_table with the following actions, in this sequence, to temporarily fix the issue (the low disk space is likely the cause); see the example written out after the list:
stop_change_tracking
start_change_tracking
stop_background_updateindex
start_background_updateindex
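As a sketch, the same sequence written out for the DiaryEntry table from the question:
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'stop_change_tracking'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'start_change_tracking'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'stop_background_updateindex'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'start_background_updateindex'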
OK, to monitor the status you need to look at this very handy resource on SQL Server FTI on MSSQL Tips; I think the script there will give you what you are looking for.
