yugabyte audit logs don't show in tserver logs - pgadmin-4

I've enabled audit logs in yugabyte following instructions here: https://docs.yugabyte.com/preview/secure/audit-logging/audit-logging-ysql/
To test it, I ran a create table command (in pgadmin4) and saw the expected audit log in the query output, for example:
NOTICE: AUDIT: SESSION,2,1,DDL,CREATE TABLE,TABLE,public.employees,
"create table employees ( empno int, ename text, address text, salary int,
account_number text );",<not logged>
CREATE TABLE
However, when I try to find the same log snippets in the tserver log files, I don't see any entries that would confirm my audit logging is working. Is there a way to fix this?

Found it in the postgres log file.
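For anyone else looking: the pgAudit output goes to the PostgreSQL server log rather than the main yb-tserver log. A quick way to confirm the session is being audited and to find where the backend writes its log files is shown below (a minimal sketch using standard PostgreSQL SHOW commands; exact paths depend on your deployment):
SHOW pgaudit.log;      -- should list DDL if CREATE TABLE statements are being audited
SHOW log_directory;    -- directory the PostgreSQL backend writes its log files to
SHOW data_directory;   -- log_directory is relative to this unless it is an absolute path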

Related

Error msg on phpmyadmin when installing database #1050 - Table 'batch' already exists

Step 1. Extract the downloaded file from Themeforest.
Step 2. You will see the folders "DATABASE", "Documentation", "For Existing Drupal Installation", "For New Fresh Drupal Installation".
Step 3. Create a database and a user name for that database. Please set the database permissions for that user name.
Step 4. Open phpMyAdmin and select the database you just created. Then import the DEMO database from the "DATABASE" folder. Make sure there aren't any errors during the import of the database.
After step 4, the message below pops up:
-- Table structure for table batch
CREATE TABLE batch (
  bid int(10) UNSIGNED NOT NULL COMMENT 'Primary Key: Unique batch ID.',
  token varchar(64) CHARACTER SET ascii NOT NULL COMMENT 'A string token generated against the current user''s session id and the batch id, used to ensure that only the user who submitted the batch can effectively access it.',
  timestamp int(11) NOT NULL COMMENT 'A Unix timestamp indicating when this batch was submitted for processing. Stale batches are purged at cron time.',
  batch longblob COMMENT 'A serialized array containing the processing data for the batch.'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Stores details about batches (processes that run in…';

how to remove dirty data in yugabyte ( postgresql )

I tried to add a column to a table with the TablePlus GUI, but there was no response for a long time.
So I turned to the db server, but got these errors:
Maybe some inconsistent data was generated during the operation through TablePlus.
I am new to postgresql, and don't know what to do next.
-----updated------
I did some operations as @Dri372 suggested, and made some progress.
The reason the ALTER failed for tables sys_role and s2 is that the tables are not empty; they have some records.
If I run SQL like this: create table s3 AS SELECT * FROM sys_role; alter table s3 add column project_code varchar(50); it succeeds.
Now how can I keep working on the table sys_role?
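No answer was posted for this one, but since the ALTER TABLE from TablePlus hung rather than failed, it is worth checking whether an abandoned session is still holding a lock on sys_role. A hedged sketch using standard PostgreSQL views, which YSQL also exposes (the pid below is a placeholder for whatever the first query reports):
-- Look for sessions stuck on sys_role or sitting idle in an open transaction.
SELECT pid, state, query
FROM pg_stat_activity
WHERE state <> 'idle';
-- If one of them is blocking the ALTER, terminate it by its pid.
SELECT pg_terminate_backend(12345);  -- 12345 is a placeholder pid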

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries, I get the same result.
Script for simple table:
create table dbo.works (
work_id int not null identity(1,1) constraint PK_WORKS primary key,
client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
start_time datetime not null,
end_time datetime not null,
caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 sql server (local) - error growing transactions log file.
But in Azure I cannot manage this parameter.
How can I change my structure in populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours, you can create a new table with the new schema and copy the data into it in batches.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
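As a rough sketch of that copy-in-batches idea (the works_new name and batch size are assumptions; works_new is created with the same schema plus the persisted delta_secs column):
-- Copy 100,000 rows per run; repeat until no more rows are inserted.
SET IDENTITY_INSERT dbo.works_new ON;
INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
SELECT TOP (100000) w.work_id, w.client_id, w.login_id, w.start_time, w.end_time, w.caption
FROM dbo.works AS w
WHERE NOT EXISTS (SELECT 1 FROM dbo.works_new AS n WHERE n.work_id = w.work_id)
ORDER BY w.work_id;
SET IDENTITY_INSERT dbo.works_new OFF;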
Look at sys.database_files by connecting to the user database. If the log file's current size reaches the max size, then you hit this. At that point you either have to kill the active transactions, or move to a higher tier if killing them is not possible because of the amount of data you are modifying in a single transaction.
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
Couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view with the delta_secs column as a calculated field (see the sketch after the link below). Since this is a derived field, this is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
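A minimal sketch of idea 2, reusing the dbo.works table from the question; the view name is made up, and no base-table change or log growth is involved:
CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(second, start_time, end_time) AS delta_secs
FROM dbo.works;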

Cannot find the object because it does not exist or you do not have permissions. Error in SQL Server

I have a database and an SQL script to add some fields to a table called "Products" in the database.
But when I execute this script, I get the following error:
Cannot find the object "Products" because it does not exist or you do not have permissions
Why is the error occurring and what should I do to resolve it?
I found a reason why this would happen. The user had the appropriate permissions, but the stored procedure included a TRUNCATE statement:
TRUNCATE TableName
Since TRUNCATE deletes rows without logging each deletion, you (apparently) need elevated permissions to execute a stored procedure that contains it. We changed the statement to:
DELETE FROM TableName
...and the error went away!
Are you sure that you are executing the script against the correct database? In SQL Server Management Studio you can change the database you are running the query against in a drop-down box on one of the toolbars, or you can start your query with this:
USE SomeDatabase
It can also happen due to a typo in referencing a table such as [dbo.Product] instead of [dbo].[Product].
Does the user you're executing this script under even see that table??
select top 1 * from products
Do you get any output for this??
If yes: does this user have the permission to modify the table, i.e. execute DDL scripts like ALTER TABLE etc.? Typically, regular users don't have these elevated permissions.
Look for any DDL operation in the script.
Maybe the user does not have access rights to run changes.
In my case it was SET IDENTITY_INSERT tblTableName ON
You can either add the user to db_ddladmin for the whole database, or grant the permissions on just the table, to solve this issue (or change the script):
-- give the non-ddladmin user INSERT/SELECT as well as ALTER:
GRANT ALTER, INSERT, SELECT ON dbo.tblTableName TO user_name;
It could also be possible that you have created "Products" in your login schema and you are trying to execute the script against a different schema (probably dbo).
Steps to resolve this issue:
1) Open Management Studio.
2) Locate the object in the explorer and identify the schema under which your object is (it is the text before your object name). In my case it is "dbo" and my object name is action status.
If you see it like "yourcompanydomain\yourloginid", then you should modify the permissions on that specific schema and not any other schema.
You may refer to "Ownership and User-Schema Separation in SQL Server".
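To check which schema the table actually lives in, a small catalog query (assuming the Products table from the question):
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name = 'Products';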
I've been trying to copy a table from PROD to DEV but got an error:
"Cannot find the object X because it does not exist or you do not have permissions."
However, the table did exist, and I was running as sa, so I did have permissions.
The problem was actually with CONSTRAINTS. I'd renamed the table on DEV to old_XXX months ago, but when I tried to copy the original one over from PROD, the default constraint names clashed.
The error message was misleading.
You can right-click the procedure, choose Properties, and see which permissions are granted to your login ID. You can then manually check the "Execute" and "Alter" permissions for the proc.
Or to script this it would be:
GRANT EXECUTE ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
GRANT ALTER ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
This could be a permission issue. The user needs at least ALTER permission to truncate a table.
Another option is to call DELETE FROM instead of TRUNCATE TABLE, but this operation is slower because it logs every row it deletes, whereas TRUNCATE TABLE is only minimally logged.
The minimum permission required is ALTER on table_name. TRUNCATE TABLE
permissions default to the table owner, members of the sysadmin fixed
server role, and the db_owner and db_ddladmin fixed database roles,
and are not transferable. However, you can incorporate the TRUNCATE
TABLE statement within a module, such as a stored procedure, and grant
appropriate permissions to the module using the EXECUTE AS clause.
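A hedged sketch of that module approach; the procedure, table, and user names below are placeholders:
CREATE PROCEDURE dbo.usp_TruncateTableName
WITH EXECUTE AS OWNER   -- runs with the owner's permissions, which include ALTER on the table
AS
BEGIN
    TRUNCATE TABLE dbo.TableName;
END;
GO
GRANT EXECUTE ON dbo.usp_TruncateTableName TO some_user;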
Sharing my case, hope that will help.
In my situation inside MY_PROJ.Database->MY_PROJ.Database.sqlproj I had to put this:
<Build Include="dbo\Tables\MyTableGeneratingScript.sql" />
In my case I was running under a different user than the one I was expecting.
My code passed 'DRIVER={SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=false;User Id=XXX;Password=YYY' as the connection string to pypyodbc.connect(), but it ended up connecting with the credentials of the Windows user that ran the script instead of the User Id= from the connection string.
(I verified this using SQL Server Profiler and by putting an invalid uid/password combination in the connection string, which did not produce the error I expected.)
I decided not to dig into this further, since switching to this better way of connecting fixed the issue:
conn = pypyodbc.connect(driver='{SQL Server}', server='servername',
database='dbname', uid='userName', pwd='Password')
In my case the SQL Server version on my localhost was higher than the one on the production server, and hence some new options were added to the script generated from localhost. This caused errors in creating the table in the first place.
Since the creation of the table failed, the subsequent query on the "NON EXISTING" table also failed.
Luckily, among the long list of SQL errors, I found "OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF" to be the new option in the script causing my issue. I did a search and replace and the error went away.
Hope it helps someone.
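For context, the option usually shows up in the generated index or primary key definition like this (a hypothetical fragment; deleting the WITH clause, or just the option inside it, makes the script valid on servers older than SQL Server 2019):
CREATE TABLE dbo.Products (
    ProductId int NOT NULL,
    CONSTRAINT PK_Products PRIMARY KEY CLUSTERED (ProductId)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF)  -- not recognized before SQL Server 2019
);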
The TRUNCATE statement was my first problem, glad to find the solution here. But I was using SSIS and trying to load data from another database, and it failed with the same error on any table that used IDENTITY to create an auto-incrementing ID. If I was scripting it myself I'd first need to use the command SET IDENTITY_INSERT tablename ON, and then SET IDENTITY_INSERT tablename OFF when the table update was done. But this requires ALTER permissions on the table, which I do not have. Hence the error message in SSIS on the table load (even though the previous step had just deleted all the data out of the table.)
You can also receive this error when you use an ORM like GORM (https://gorm.io/) in Go, for example, and you try to create a struct and accidentally pass the ID (primary key) even though it is inserted automatically.
Feature-rich IDEs like Visual Studio Code make this mistake easy to make:
if tx := db.Create(&myStruct{
	Ts: time.Now(),
	ID: 42,
}); tx.Error != nil {
	t.Fatal(tx.Error)
}
You can still use Visual Studio Code's auto-fill, but delete the entry for your model's primary key:
if tx := db.Create(&myStruct{
	Ts: time.Now(),
}); tx.Error != nil {
	t.Fatal(tx.Error)
}

The merge process could not update the list of subscriptions

I have replication set up between a SQL Server 2005 instance and multiple SQL Server 2000 instances. The replication runs successfully for a while before I get the following error message:
Violation of UNIQUE KEY constraint 'unique_pubsrvdb'. Cannot insert duplicate key in object 'dbo.sysmergesubscriptions'. (Source: MSSQLSERVER, Error number: 2627)
When I checked sysmergesubscriptions there were extra entries that appear to be coming from the 2000 instances.
My question is: has anyone encountered this issue, and how did you deal with it (without rebuilding the entire thing)?
In my case I was handling multiple subscriptions and just had to adapt the query to delete the subscriptions that had problems:
delete
from sysmergesubscriptions
where pubid not in (select pubid from sysmergepublications)
and subscriber_server = 'SUBSCRIPTIONSERVER'
The problem was that one of the subscribers had old publications and subscriptions in the system tables that were replicated throughout the entire system, which caused the violation of the UNIQUE KEY constraint.
Once we removed these old entries we were able to restart replication.
We were able to identify the valid records in sysmergepublications because we knew the state of this table before the invalid entries were replicated. This forum post shows you how to locate invalid publications if you need to.
We used the following SQL to check for additional subscription entries:
select *
from sysmergepublications
select *
from sysmergesubscriptions
where pubid in ( select pubid from sysmergepublications)
select *
from sysmergesubscriptions
where pubid not in ( select pubid from sysmergepublications)
Here is the sql that we used to delete the invalid subscriptions:
delete from sysmergesubscriptions
where pubid not in ( select pubid from sysmergepublications)
Note: the code sample above assumes that sysmergepublications contains only valid publications.
Alternatively, you can use EXEC sp_removedbreplication @dbname = '<dbname>' to remove replication from the database completely. This command appears to remove all replication triggers from the database.
