Check if my database on SQL Server is encrypted by TDE?

I have a question about SQL Server's Transparent Data Encryption (TDE). I need to dump a database, which another DBA will restore remotely from the dumped data files. I was asked to make sure the dumped data files have no TDE so the DBA can restore them. I searched online and found a query to list the encryption status:
SELECT db_name(database_id), encryption_state
FROM sys.dm_database_encryption_keys;
My database is not in the result at all. I ran another query:
SELECT
db.name,
db.is_encrypted,
dm.encryption_state,
dm.percent_complete,
dm.key_algorithm,
dm.key_length
FROM
sys.databases db
LEFT OUTER JOIN sys.dm_database_encryption_keys dm
ON db.database_id = dm.database_id;
GO
My database has the value 0 for is_encrypted, and NULL for all the other columns.
Does it mean my database instance is not encrypted at all?

If your output looks like this...
name | is_encrypted | encryption_state | percent_complete | key_algorithm | key_length
--------------------------------------------------------------------------------------------
MyDatabase | 0 | NULL | NULL | NULL | NULL
... your database, [MyDatabase], is NOT encrypted. Nor does it have a database encryption key configured.
If, however, any databases have non-NULLs in columns other than [is_encrypted] (e.g. [encryption_state] = 1), those databases are either encrypted, partially encrypted/decrypted or prepped for encryption.
Read up here for detail on encryption states:
https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-database-encryption-keys-transact-sql?view=sql-server-ver15
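For quick reference, the documented encryption_state values can be captured in a small lookup, together with the check the question is really after. A minimal Python sketch (the helper function name is mine, not a SQL Server API; the state descriptions mirror the linked docs):

```python
# Documented values of sys.dm_database_encryption_keys.encryption_state
# (see the Microsoft docs linked above).
ENCRYPTION_STATES = {
    0: "No database encryption key present, no encryption",
    1: "Unencrypted",
    2: "Encryption in progress",
    3: "Encrypted",
    4: "Key change in progress",
    5: "Decryption in progress",
    6: "Protection change in progress",
}

def is_safe_to_hand_off(is_encrypted, encryption_state):
    """A database is TDE-free for hand-off only if is_encrypted = 0 AND it has
    no row in sys.dm_database_encryption_keys (encryption_state is NULL)."""
    return is_encrypted == 0 and encryption_state is None
```

Applied to the question's output (is_encrypted = 0, encryption_state = NULL), the conservative check passes; any non-NULL encryption_state means a database encryption key exists and the receiving DBA would need the certificate.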

Related

How to check if a request located in JDBC_SESSION_INIT_STATEMENT is working? (DataFrameReader)

I am trying to connect to SQL Server with spark-jdbc, using JDBC_SESSION_INIT_STATEMENT to create a temporary table and then download data from the temporary table in the main query.
I have the following code:
//df is org.apache.spark.sql.DataFrameReader
val s = """select * into #tmp_table from ( SELECT op.ID,
| op.Date,
| op.DocumentID,
| op.Amount,
| op.AmountCurr,
| op.CurrencyID,
| operson.ObjectTypeId AS PersonOT,
| op.PersonID,
| ocontract.ObjectTypeId AS ContractOT,
| op.ContractID,
| op.DocNum,
| op.MomentCreate,
| op.ObjectTypeID,
| op.OwnerObjectID
|FROM dbo.Operation op With (Index = IX_Operation_Date) -- Without the hint it sometimes falls back to a full table scan
|LEFT JOIN dbo.Object ocontract ON op.ContractID = ocontract.ID
|LEFT JOIN dbo.Object operson ON op.PersonID = operson.ID
|WHERE op.Date>='2019-01-01' and op.Date<'2020-01-01' AND 1=1
|) wrap_for_single_connect
|OPTION (LOOP JOIN, FORCE ORDER, MAX_GRANT_PERCENT=25)""".stripMargin
df
.option(JDBCOptions.JDBC_SESSION_INIT_STATEMENT, s)
.jdbc(
jdbcUrl,
"(select * from tempdb.#tmp_table) sub",
connectionProps)
I get com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name '#tmp_table'.
And I have a feeling that JDBC_SESSION_INIT_STATEMENT is not working, because I deliberately tried to mess up the request and still got the Invalid object error.
How can I check if the request is working in JDBC_SESSION_INIT_STATEMENT?
One way to know whether your JDBCOptions.JDBC_SESSION_INIT_STATEMENT is executed is to enable INFO logging level for org.apache.spark.sql.execution.datasources.jdbc logger.
That should trigger this line and print out the following message to the logs:
Executing sessionInitStatement: [sql]
Given the comment in the source, I don't think you should use it to create a source table to load records from:
// This executes a generic SQL statement (or PL/SQL block) before reading
// the table/query via JDBC. Use this feature to initialize the database
// session environment, e.g. for optimizations and/or troubleshooting.
You should use the dbtable or query parameter instead.

Why does a CREATE TABLE operation attach owner 'yugabyte' to a new table when the database I am connected to has a different owner?

I have installed YugabyteDB in minikube on my laptop and created a database with owner 'Rodgers'.
Then I ran ysqlsh to execute YSQL commands from the terminal, one of which was 'CREATE DATABASE ...'.
Problem
When I try connecting to the database from an external Go application, providing the user 'Rodgers' and the set password, it fails to connect.
I have found out that the tables created were attached to owner 'yugabyte', not 'Rodgers'.
But the database to which I connected, and from which I am running the CREATE DATABASE command, belongs to Rodgers.
What's going on here?
It's best to rehearse all this using "ysqlsh". When everything works there, connecting from any client program (Python, Go, ...) will work, as long as you have the right driver. The PostgreSQL drivers work with YugabyteDB.
The following is mainly commands for "ysqlsh", both SQL statements and so-called metacommands (the ones starting with a backslash). But occasionally there are commands that you run from the O/S prompt. So you must read the following carefully and do what it says after each comment, mainly in "ysqlsh" but a couple of times at the O/S prompt. In other words, you can't simply run the script "lights out".
Start with a virgin YB single-node cluster (fresh from "yb-create").
$ ysqlsh -h localhost -p 5433 -d yugabyte -U yugabyte
Now follow the script.
-- Shows two "Superuser" users: "postgres" and "yugabyte" (nothing else).
\du
-- Shows two databases: "postgres" and "yugabyte" (nothing else except "system" databases).
-- Both "postgres" and "yugabyte" databases are owned by "postgres".
\l
-- Create a new "ordinary" user and connect as that user.
create user rodgers login password 'p';
alter user rodgers createdb;
-- Now connect to database yugabyte as user rodgers
\c yugabyte rodgers
-- Create a new database and check it's there.
create database rog_db owner rodgers;
\l
-- Name | Owner | Encoding | Collate | Ctype | Access privileges
-- -----------------+----------+----------+---------+-------------+-----------------------
...
-- rog_db | rodgers | UTF8 | C | en_US.UTF-8 |
-- ...
-- Now connect to the new "rog_db" database. Works fine.
\c rog_db rodgers
-- Quit "ysqlsh".
\q
Connect again. Works fine.
$ ysqlsh -h localhost -p 5433 -d rog_db -U rodgers
Now carry on with the script.
-- Works fine.
create table t(k int primary key);
-- Inspect it. First "\d", then "\d t".
\d
-- List of relations
-- Schema | Name | Type | Owner
-- --------+------+-------+---------
-- public | t | table | rodgers
\d t
-- Table "public.t"
-- Column | Type | Collation | Nullable | Default
-- --------+---------+-----------+----------+---------
-- k | integer | | not null |
-- Indexes:
-- "t_pkey" PRIMARY KEY, lsm (k HASH)
-- This is OK for playing. But terrible for real work.
drop table t;
\c rog_db yugabyte
drop schema public;
\c rog_db rodgers
create schema rog_schema authorization rodgers;
-- For future connect commands.
alter user rodgers set search_path = 'rog_schema';
-- for here and now.
set schema 'rog_schema';
create table t(k int primary key);
\d
-- List of relations
-- Schema | Name | Type | Owner
-- ------------+------+-------+---------
-- rog_schema | t | table | rodgers
--------------------------------------------------------------------------------
I just stepped through all of this using "YB-2.2.0.0-b0" on my laptop (macOS Big Sur). It all worked fine.
Please try this in your minikube env and report back.
Regards, Bryn Llewellyn, Technical Product Manager at Yugabyte Inc.

How to delete and recreate databases in PostgreSQL (error: template1 already in use)

I'm using PostgreSQL with Python/Django on Windows 7.
I dropped a database from pgAdmin4. Maybe this isn't the correct way to delete a database. Maybe I need to do something more to sever the connection?
Anyway, now I want to recreate the database, but when I do (from pgAdmin4) I receive the error:
ERROR: database "template1" is being accessed by other users
DETAIL: There are 1 other session(s) using the database.
I restarted the server, even rebooted the PC, but I continue to receive that error. I don't know which session is using template1; maybe it's my old (and deleted) database? Maybe it's this (first one):
Where the SQL (partially visible in the screenshot) is:
SELECT cl.relkind, COALESCE(cin.nspname, cln.nspname) as nspname,
COALESCE(ci.relname, cl.relname) as relname, cl.relname as indname
FROM pg_class cl
JOIN pg_namespace cln ON cl.relnamespace=cln.oid
LEFT OUTER JOIN pg_index ind ON ind.indexrelid=cl.oid
LEFT OUTER JOIN pg_class ci ON ind.indrelid=ci.oid
LEFT OUTER JOIN pg_namespace cin ON ci.relnamespace=cin.oid
WHERE cl.oid IN (SELECT objid FROM pg_shdepend WHERE refobjid=10::oid) AND cl.oid > 13317::oid
UNION ALL SELECT 'n', null, nspname, null
FROM pg_namespace nsp
WHERE nsp.oid IN (SELECT objid FROM pg_shdepend WHERE refobjid=10::oid) AND nsp.oid > 13317::oid
UNION ALL SELECT CASE WHEN typtype='d' THEN 'd' ELSE 'y' END, null, typname, null
FROM pg_type ty
WHERE ty.oid IN (SELECT objid FROM pg_shdepend WHERE refobjid=10::oid) AND ty.oid > 13317::oid
UNION ALL SELECT 'C', null, conname, null
FROM pg_conversion co
WHERE co.oid IN (SELECT objid FROM pg_shdepend WHERE refobjid=10::oid) AND co.oid > 13317::oid
UNION ALL SELECT CASE
I don't yet know how to use databases, so I have no idea what to do. Should I terminate that session?
I tried from the SQL Shell, but this is the first time I've used it, so I'm not sure I did things correctly. In particular, I'm not connected as postgres but as gm (a superuser).
Server [localhost]:
Database [postgres]:
Port [5433]:
Username [postgres]: gm
Enter the password for user gm:
psql (12.1)
I tried the command:
postgres=# SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'elenconomi_db';
where elenconomi_db is the database I deleted, so it doesn't exist anymore and the command gives an error.
So I tried postgres=# \list (sorry for the Italian, I don't know why it's writing in Italian):
List of databases
   Name    |  Owner   | Encoding |      Collate       |       Ctype        |   Access privileges
-----------+----------+----------+--------------------+--------------------+-----------------------
 postgres  | postgres | UTF8     | Italian_Italy.1252 | Italian_Italy.1252 |
 template0 | postgres | UTF8     | Italian_Italy.1252 | Italian_Italy.1252 | =c/postgres          +
           |          |          |                    |                    | postgres=CTc/postgres
 template1 | postgres | UTF8     | Italian_Italy.1252 | Italian_Italy.1252 | =c/postgres          +
           |          |          |                    |                    | postgres=CTc/postgres
(3 rows)
I read that it's better not to drop template1, so I'm asking before running DROP DATABASE template1;. Should I?
And for next time, is there a better way to drop a database and recreate it?
I saw many question like this but I can't find a solution working for me.
Close the database session that is connected to template1. Then CREATE DATABASE will succeed.
To find and close the session, use
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'template1';
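To illustrate what that WHERE clause selects, here is a small Python sketch over a hypothetical pg_stat_activity snapshot (the pids and rows are made up for the example):

```python
# Hypothetical snapshot of pg_stat_activity: terminate only the backends
# connected to template1, leaving every other session alone.
sessions = [
    {"pid": 4711, "datname": "postgres"},
    {"pid": 4712, "datname": "template1"},
    {"pid": 4713, "datname": "template1"},
]

pids_to_terminate = [s["pid"] for s in sessions if s["datname"] == "template1"]
```

Only the sessions connected to template1 are targeted, which is why dropping unrelated databases or restarting the client alone may not clear the error if something (like a pgAdmin4 maintenance window) reconnects to template1.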

SQL Server: query to replace data

I need to take all passwords for a live SQL Server database, and replace them with passwords from a backup.
Basic table setup:
Main:
Account_ID Username Password
----------------------------------
1 Blah gy12uid91
2 Blah2 gy12uid92
Backup:
Account_ID Username Password
----------------------------------
1 Blah xxxxxxxx
2 Blah2 xxxxxxxx
I need to take ONLY the passwords from the backup database and put them into the live database wherever the usernames match, so that both end up the same, but only where the usernames are the same.
I was thinking some kind of query like:
UPDATE livedb
FROM backupdb
WHERE username ='name'
SET password ='xxxxxxxx'
WHERE username ='name'
In other words, I have 3600 rows and need to change all the passwords on a massive scale without just copy-pasting.
UPDATE l
SET password=b.password
FROM Livedb.schema.table l
INNER JOIN backupdb.schema.table b
ON l.Username=b.Username
Assuming you have your database restored to another location on the same server and the databases are called live and restored:
UPDATE liveUsers
SET liveUsers.password = restoredUsers.password
FROM live.users liveUsers
JOIN restored.users restoredUsers
ON liveUsers.Account_ID = restoredUsers.Account_ID
This will replace the password column from restored into live by matching on Account_ID.
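To see what the join-update actually does, here is a minimal Python sketch of the same semantics with made-up rows (note the third live account, which has no backup row and is therefore left untouched):

```python
# Simulate UPDATE ... FROM live INNER JOIN backup ON Account_ID:
# only rows whose Account_ID exists in the backup get a new password.
live = {1: {"Username": "Blah",  "Password": "gy12uid91"},
        2: {"Username": "Blah2", "Password": "gy12uid92"},
        3: {"Username": "Blah3", "Password": "keepme"}}
backup = {1: {"Username": "Blah",  "Password": "xxxxxxxx"},
          2: {"Username": "Blah2", "Password": "yyyyyyyy"}}

for account_id, row in live.items():
    if account_id in backup:  # the INNER JOIN condition
        row["Password"] = backup[account_id]["Password"]
```

This is why the inner join is safe for the "only where they match" requirement: unmatched live rows simply fall out of the join and keep their current password.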

How to get last access/modification date of a PostgreSQL database?

On development server I'd like to remove unused databases. To realize that I need to know if database is still used by someone or not.
Is there a way to get last access or modification date of given database, schema or table?
You can do it by checking the last modification time of the table's file.
In PostgreSQL, every table corresponds to one or more OS files, like this:
select relfilenode from pg_class where relname = 'test';
The relfilenode is the file name of table "test". Then you can find the file in the database's directory.
In my test environment:
cd /data/pgdata/base/18976
ls -l -t | head
The last command lists all files ordered by last modification time.
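The same `ls -l -t` idea can be scripted. A minimal Python sketch (the demo uses a throwaway directory with made-up relfilenode-style names; in real use you would point it at the base/&lt;database oid&gt; directory):

```python
import os
import tempfile
from pathlib import Path

def files_by_mtime(directory):
    """File names in `directory`, newest modification first (like `ls -l -t`)."""
    paths = [p for p in Path(directory).iterdir() if p.is_file()]
    return [p.name for p in
            sorted(paths, key=lambda p: p.stat().st_mtime, reverse=True)]

# Demo on a throwaway directory with fabricated relfilenode-style names.
demo_dir = tempfile.mkdtemp()
for name, mtime in [("16384", 1000), ("16385", 2000)]:
    path = os.path.join(demo_dir, name)
    open(path, "w").close()
    os.utime(path, (mtime, mtime))  # force distinct modification times
newest_first = files_by_mtime(demo_dir)
```

Keep in mind the caveat in the next answer: file mtimes are not a reliable record of logical activity, so treat this as a rough screening tool only.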
There is no built-in way to do this - and all the approaches that check the file mtime described in other answers here are wrong. The only reliable option is to add triggers to every table that record a change to a single change-history table, which is horribly inefficient and can't be done retroactively.
If you only care about "database used" vs "database not used" you can potentially collect this information from the CSV-format database log files. Detecting "modified" vs "not modified" is a lot harder; consider SELECT writes_to_some_table(...).
If you don't need to detect old activity, you can use pg_stat_database, which records activity since the last stats reset. e.g.:
-[ RECORD 6 ]--+------------------------------
datid | 51160
datname | regress
numbackends | 0
xact_commit | 54224
xact_rollback | 157
blks_read | 2591
blks_hit | 1592931
tup_returned | 26658392
tup_fetched | 327541
tup_inserted | 1664
tup_updated | 1371
tup_deleted | 246
conflicts | 0
temp_files | 0
temp_bytes | 0
deadlocks | 0
blk_read_time | 0
blk_write_time | 0
stats_reset | 2013-12-13 18:51:26.650521+08
so I can see that there has been activity on this DB since the last stats reset. However, I don't know anything about what happened before the stats reset, so if I had a DB showing zero activity since a stats reset half an hour ago, I'd know nothing useful.
PostgreSQL 9.5 lets us track the last commit.
Check whether commit-timestamp tracking is on or off using the following query:
show track_commit_timestamp;
If it returns "on", skip ahead to the final query; otherwise modify postgresql.conf:
cd /etc/postgresql/9.5/main/
vi postgresql.conf
Change
track_commit_timestamp = off
to
track_commit_timestamp = on
Restart PostgreSQL (or the system) and check the setting again.
Use the following queries to track the last commit:
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME;
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME where COLUMN_NAME=VALUE;
My way to get the modification date of my tables:
Python Function
CREATE OR REPLACE FUNCTION py_get_file_modification_timestamp(afilename text)
RETURNS timestamp without time zone AS
$BODY$
import os
import datetime
return datetime.datetime.fromtimestamp(os.path.getmtime(afilename))
$BODY$
LANGUAGE plpythonu VOLATILE
COST 100;
SQL Query
SELECT
schemaname,
tablename,
py_get_file_modification_timestamp('*postgresql_data_dir*/*tablespace_folder*/'||relfilenode)
FROM
pg_class
INNER JOIN
pg_catalog.pg_tables ON (tablename = relname)
WHERE
schemaname = 'public'
I'm not sure if things like VACUUM can interfere with this approach, but in my tests it's a pretty accurate way to find tables that are no longer used, at least for INSERT/UPDATE operations.
I guess you should activate some logging options. You can find information about logging in the PostgreSQL documentation.
