I have a requirement I need your help with.
Number of rows in a table : 130
That is the only data I have. Based on this, is it possible to find out the names of the tables in an Oracle database that contain 130 rows?
Thanks
Sam
SELECT TABLE_NAME FROM dba_tables WHERE num_rows = 130
-- num_rows = 130 can be replaced with any row count you need
Note that num_rows is populated from the optimizer statistics, so it is only as accurate as the last statistics gathering.
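If the statistics are stale, you can refresh them first; a minimal sketch, assuming the schema you care about is YOUR_SCHEMA:
BEGIN
  -- Refresh optimizer statistics so dba_tables.num_rows reflects current row counts
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'YOUR_SCHEMA');
END;
/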
You can try with some dynamic SQL:
declare
  n number;
begin
  -- Loop over every table (optionally restricted to one schema) and count its rows
  for t in (
    select owner || '.' || table_name as tab
      from dba_tables
     where owner = 'YOUR_SCHEMA' /* if you know the schema */
  )
  loop
    execute immediate 'select count(1) from ' || t.tab into n;
    if n = 130 then
      dbms_output.put_line('Table ' || t.tab);
    end if;
  end loop;
end;
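To see the DBMS_OUTPUT lines in SQL*Plus or SQL Developer, enable server output before running the block:
SET SERVEROUTPUT ON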
Please consider that, depending on the number of tables/records in your DB, this can take a very long time to run.
I hope this approach may help you (note: it uses MySQL's information_schema and CONCAT syntax):
Query 1 : SELECT CONCAT('SELECT ''', table_name, ''' AS table_name, COUNT(*) AS cnt FROM ', table_name, ' UNION ALL') FROM information_schema.tables WHERE table_schema = 'aes';
Query 2 : select table_name from ( paste the results obtained from the above query and remove the last UNION ALL ) as tmptable where cnt = 130;
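For illustration, if the aes schema contained just two hypothetical tables t1 and t2, the pasted statement in Query 2 would end up looking like this:
SELECT table_name FROM (
  SELECT 't1' AS table_name, COUNT(*) AS cnt FROM t1
  UNION ALL
  SELECT 't2' AS table_name, COUNT(*) AS cnt FROM t2
) AS tmptable WHERE cnt = 130;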
Reference:
Find Table with maximum number of rows in a database in mysql
I am having a hard time grasping why this query is telling me that TaxPayerID is NOT found when, at the start, I am clearly checking for it and only using the databases whose nTrucks table should contain the TaxPayerID column.
sp_MSforeachdb
'
IF EXISTS (SELECT * FROM [?].INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ''nTrucks'' AND COLUMN_NAME = ''TaxPayerID'')
BEGIN
SELECT "?", nTrucks.UnitNumber, ntrucks.Companyid, nCompanyData.CompanyName, nTrucks.Owner, nTrucks.TaxPayerID
FROM nTrucks
INNER JOIN nCompanyData ON nTrucks.CompanyID = nCompanyData.CompanyID
WHERE nTrucks.Owner like ''%Trucker%''
END
'
I am getting multiple 'Invalid column name ''TaxPayerID''.' errors; I assume they come from the databases NOT containing this column.
If anyone here can throw me a bone, a simple "you're a dummy, do it this way!", I would be very appreciative.
JF
You're a dummy! (you asked for it) :)
How to debug this error:
Locate the database that throws an error and try executing an actual SQL query on it directly to see if it will compile:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'nTrucks' AND COLUMN_NAME = 'TaxPayerID')
BEGIN
SELECT nTrucks.UnitNumber, nTrucks.CompanyID, nCompanyData.CompanyName, nTrucks.Owner, nTrucks.TaxPayerID
FROM nTrucks
INNER JOIN nCompanyData ON nTrucks.CompanyID = nCompanyData.CompanyID
WHERE nTrucks.Owner LIKE '%Trucker%'
END
It will fail.
Now you know that SQL Server checks the schema at query parse time rather than at run time.
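A minimal way to see this compile-time behaviour in isolation: even a branch that can never run fails to compile when the table exists in the current database but the column does not.
IF 1 = 0
BEGIN
    -- Never executed, yet still raises 'Invalid column name' at parse/compile time
    -- when nTrucks exists in the current database without a TaxPayerID column.
    SELECT TaxPayerID FROM nTrucks;
END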
Then you follow @GordonLinoff's suggestion and convert the SELECT query into dynamic SQL as follows:
sp_MSforeachdb
'
IF EXISTS (SELECT * FROM [?].INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ''nTrucks'' AND COLUMN_NAME = ''TaxPayerID'')
BEGIN
EXEC(
''SELECT "?", nTrucks.UnitNumber, ntrucks.Companyid, nCompanyData.CompanyName, nTrucks.Owner, nTrucks.TaxPayerID
FROM [?]..nTrucks
INNER JOIN [?]..nCompanyData ON nTrucks.CompanyID = nCompanyData.CompanyID
WHERE nTrucks.Owner like ''''%Trucker%''''
'' )
END
'
(I hope I got my quotes right)
If your query is supposed to reference a central nCompanyData table, then remove [?].. before nCompanyData
I have two tables: one of them has historical data (cdr_hist), the other has data from today (cdr_stage). My script must run every 30 minutes and calculate data from the last 4 hours, but every night at 12 all data is moved to cdr_hist.
The question is how I can switch and take data from the history table when the script runs at 12:00, because cdr_stage is empty...
I tried this:
IF OBJECT_ID ('[CDR_Stage]') IS NOT NULL
BEGIN
    Select.....
    From CDR_Stage
END
ELSE
BEGIN
    Select.....
    From CDR_Hist
END
But it does not work correctly...
Any ideas?
No need for IFs; this can be done with pure SQL using UNION ALL and NOT EXISTS():
SELECT * FROM CDR_Stage
UNION ALL
SELECT * FROM CDR_Hist
WHERE NOT EXISTS(SELECT 1 FROM CDR_Stage) -- the second SELECT returns rows only when CDR_Stage is empty
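If the two tables do not share the exact same column list and order, spell the columns out instead of using SELECT * (the column names below are hypothetical):
SELECT call_id, call_start, duration FROM CDR_Stage
UNION ALL
SELECT call_id, call_start, duration FROM CDR_Hist
WHERE NOT EXISTS (SELECT 1 FROM CDR_Stage);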
You need to check for record existence instead of table existence:
IF EXISTS (SELECT 1
FROM CDR_Stage)
SELECT *
FROM CDR_Stage
ELSE
SELECT *
FROM CDR_Hist
Or dynamic SQL:
DECLARE @sql VARCHAR(4000)
SET @sql = 'select * from '
           + CASE
               WHEN EXISTS (SELECT 1
                            FROM CDR_Stage) THEN 'CDR_Stage'
               ELSE 'CDR_Hist'
             END
EXEC (@sql)
I am executing the query below. It takes 80 seconds for just 17 records.
Can anybody tell me the reason? I have already tried adding indexes.
SELECT DISTINCT t.i_UserID,
u.vch_LoginName,
t.vch_PreviousEmailAddress AS 'vch_EmailAddress',
u.vch_DisplayName,
t.d_TransactionDate AS 'd_DateAdded',
'Old' AS 'vch_RecordStatus'
FROM tblEmailTransaction t
INNER JOIN tblUser u
ON t.i_UserID = u.i_UserID
WHERE t.vch_PreviousEmailAddress LIKE '%kala%'
Change the collation of the vch_PreviousEmailAddress column to Latin1_General_100_BIN2
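A minimal sketch of that change, assuming the column is currently VARCHAR(255) NULL (adjust the length and nullability to your actual definition):
ALTER TABLE dbo.tblEmailTransaction
ALTER COLUMN vch_PreviousEmailAddress VARCHAR(255) COLLATE Latin1_General_100_BIN2 NULL;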
Create covered index:
CREATE NONCLUSTERED INDEX ix
ON dbo.tblEmailTransaction (vch_PreviousEmailAddress)
INCLUDE (i_UserID, d_TransactionDate)
GO
And have fun with this query:
SELECT t.i_UserID,
u.vch_LoginName,
t.vch_PreviousEmailAddress AS vch_EmailAddress,
u.vch_DisplayName,
t.d_TransactionDate AS d_DateAdded,
'Old' AS vch_RecordStatus
FROM (
SELECT DISTINCT i_UserID,
vch_PreviousEmailAddress,
d_TransactionDate
FROM dbo.tblEmailTransaction
WHERE vch_PreviousEmailAddress LIKE '%kala%' COLLATE Latin1_General_100_BIN2
) t
JOIN dbo.tblUser u ON t.i_UserID = u.i_UserID
One other thing which I find useful in solving problems like this:
Try running the following script. It will tell you which indexes you could add to your SQL Server database that would make the most (positive) improvement.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 100
ROUND(s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans),0) AS 'Total Cost',
s.avg_user_impact,
d.statement AS 'Table name',
d.equality_columns,
d.inequality_columns,
d.included_columns,
'CREATE INDEX [IndexName] ON ' + d.statement + ' ( '
+ case when (d.equality_columns IS NULL OR d.inequality_columns IS NULL)
then ISNULL(d.equality_columns, '') + ISNULL(d.inequality_columns, '')
else ISNULL(d.equality_columns, '') + ', ' + ISNULL(d.inequality_columns, '')
end + ' ) '
+ CASE WHEN d.included_columns IS NULL THEN '' ELSE 'INCLUDE ( ' + d.included_columns + ' )' end AS 'CREATE INDEX command'
FROM sys.dm_db_missing_index_groups g,
sys.dm_db_missing_index_group_stats s,
sys.dm_db_missing_index_details d
WHERE d.database_id = DB_ID()
AND s.group_handle = g.index_group_handle
AND d.index_handle = g.index_handle
ORDER BY [Total Cost] DESC
The right-hand column displays the CREATE INDEX command which you'd need to run, to create that index.
This is one of those lifesaver scripts, which I run on our in-house databases every so often.
But yes, in your example, this is just likely to tell you that you need an index on the vch_PreviousEmailAddress field in your tblEmailTransaction table.
The probable bottlenecks are:
Missing index on tblEmailTransaction.i_UserID: check whether the table has this index.
Missing index on tblUser.i_UserID: check whether the table has this index (see the index sketch after this list).
LIKE predicate: LIKE with a leading wildcard is known to perform poorly; as Devart suggested, try specifying the collation this way:
WHERE vch_PreviousEmailAddress LIKE '%kala%' COLLATE Latin1_General_100_BIN2
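A minimal sketch of the two join indexes mentioned above (the index names are hypothetical; create only the ones that are actually missing, and note that i_UserID is often already covered by the primary key on tblUser):
CREATE NONCLUSTERED INDEX IX_tblEmailTransaction_UserID
    ON dbo.tblEmailTransaction (i_UserID);

CREATE NONCLUSTERED INDEX IX_tblUser_UserID
    ON dbo.tblUser (i_UserID);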
To get a better view of what your query is doing, run this command together with it:
SET STATISTICS IO ON
It will report all the I/O the query performs, and then we can see what is happening.
Just a final question:
How many rows do the two tables contain?
Ciao
More than a question, this is an information-sharing post.
Today I came across a situation where I needed to look for a string in the entire database of an application, with no idea which table/column it belonged to.
Below is a PL/SQL block I wrote and used for this purpose. I hope it helps others with a similar requirement.
DECLARE
  i               NUMBER := 0;
  counter_intable NUMBER := 0;
BEGIN
  -- Build one COUNT(*) query per VARCHAR2 column in the current schema
  FOR rec IN (
    select 'select count(*) ' ||
           ' from ' || table_name ||
           ' where ' || column_name || ' like ''%732-851%'' ' as sql_command
      from user_tab_columns
     where data_type = 'VARCHAR2'
  )
  LOOP
    execute immediate rec.sql_command into counter_intable;
    IF counter_intable != 0 THEN
      i := i + 1;
      DBMS_OUTPUT.put_line ('Match found using command ::' || rec.sql_command);
      DBMS_OUTPUT.put_line ('count ::' || counter_intable);
    END IF;
  END LOOP;
  DBMS_OUTPUT.put_line ('total commands matched :: ' || i);
END;
Replace the string 732-851 in the code block with the string you are searching for.
Why PL/SQL? You could do the same in SQL using xmlsequence.
For example, I want to search for the value 'KING' -
SQL> variable val varchar2(10)
SQL> exec :val := 'KING'
PL/SQL procedure successfully completed.
SQL> SELECT DISTINCT SUBSTR (:val, 1, 11) "Searchword",
2 SUBSTR (table_name, 1, 14) "Table",
3 SUBSTR (column_name, 1, 14) "Column"
4 FROM cols,
5 TABLE (xmlsequence (dbms_xmlgen.getxmltype ('select '
6 || column_name
7 || ' from '
8 || table_name
9 || ' where upper('
10 || column_name
11 || ') like upper(''%'
12 || :val
13 || '%'')' ).extract ('ROWSET/ROW/*') ) ) t
14 ORDER BY "Table"
15 /
Searchword Table Column
----------- -------------- --------------
KING EMP ENAME
SQL>
You could search for values of any data type; please read SQL to Search for a VALUE in all COLUMNS of all TABLES in an entire SCHEMA.
Is this possible? I am using ORACLE 10g.
For example: I have 50 tables name A01, A02, A03, A04.........A50.
And all these tables have the "SAME COLUMN NAMES"
For example: name, age, location
(Note: The column names are the same, but not the values in the columns).
In the END... I want to view all data from the columns name, age, location FROM ALL tables starting with the letter A.
(Note 2: The tables starting with the letter A are NOT STATIC, they are dynamic, meaning changes can occur. Example: A01 to A10 could be deleted and A99 could be added).
Sorry for not clarifying.
DECLARE
  TYPE CurTyp IS REF CURSOR;
  v_cursor   CurTyp;
  v_record   A01%ROWTYPE;  -- assumes A01 exists and all A% tables share its structure
  v_stmt_str VARCHAR2(4000);
BEGIN
  -- Build "SELECT * FROM A01 UNION ALL SELECT * FROM A02 ..." dynamically
  for rec in (
    select table_name
      from user_tables
     where table_name like 'A%'
  ) loop
    if v_stmt_str is not null then
      v_stmt_str := v_stmt_str || ' union all ';
    end if;
    v_stmt_str := v_stmt_str || 'SELECT * FROM ' || rec.table_name;
  end loop;

  OPEN v_cursor FOR v_stmt_str;
  LOOP
    FETCH v_cursor INTO v_record;
    EXIT WHEN v_cursor%NOTFOUND;
    -- Read values v_record.name, v_record.age, v_record.location
    -- Do something with them
  END LOOP;
  CLOSE v_cursor;
END;
As per my understanding, if you want to view all column names of the tables starting with A, then try the query below:
select column_name,table_name from user_tab_cols where table_name like 'A%';
If your requirement is something else, then please specify it clearly.
If I understand you correctly and the number of tables is constant, then you can create a VIEW once:
CREATE VIEW vw_all
AS
SELECT name, age, location FROM A01
UNION ALL
SELECT name, age, location FROM A02
UNION ALL
...
SELECT name, age, location FROM A50
And then use it
SELECT *
FROM vw_all
WHERE age < 35
ORDER BY name
This returns you all tables you need:
select table_name
from user_tables
where table_name like 'A__';
From this, you can build a dynamic SQL statement:
select listagg('select * from '||table_name,' union all ') within group(order by table_name)
from user_tables
where table_name like 'A__'
This actually returns an SQL statement which contains all the tables and the unions:
select * from A01 union all select * from A02 union all select * from A03
And finally, execute this via native dynamic SQL. You can only do that in PL/SQL, so you need a function:
create or replace function getA return sys_refcursor
is
  query varchar2(32000);
  res   sys_refcursor;
begin
  select listagg('select * from ' || table_name, ' union all ')
           within group (order by table_name)
    into query
    from user_tables
   where table_name like 'A__';

  open res for query;
  return res;
end;
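A quick way to try the function from SQL*Plus or SQL Developer (a sketch, assuming the function compiled as above):
VARIABLE rc REFCURSOR
EXEC :rc := getA
PRINT rc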
Note that what you're doing manually is basically called partitioning, and Oracle already has excellent support for it out of the box. That is, you can have something which looks like one huge table, but technically it is stored as a set of smaller tables (and smaller indexes), split by a partitioning criterion. For example, if you have millions of payment records, you may partition them by year; this way each physical segment contains only a reasonable amount of data. You can still query it freely, and if you hit data from other partitions, Oracle takes care of pulling those in.
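A minimal sketch of what that looks like (the payments table and partition names here are hypothetical):
-- Hypothetical example: one logical table, stored as one physical segment per year
CREATE TABLE payments (
  payment_id   NUMBER,
  payment_date DATE,
  amount       NUMBER(12,2)
)
PARTITION BY RANGE (payment_date) (
  PARTITION p2012 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')),
  PARTITION p2013 VALUES LESS THAN (TO_DATE('2014-01-01', 'YYYY-MM-DD')),
  PARTITION p_max VALUES LESS THAN (MAXVALUE)
);

-- Queries are written against the single table; Oracle prunes irrelevant partitions
SELECT SUM(amount)
  FROM payments
 WHERE payment_date >= TO_DATE('2013-01-01', 'YYYY-MM-DD');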