Single-row subquery returns multiple rows in Oracle

select distinct a.person.name, b.title, b.director.name
from movie_roles a, movie b
where a.person.name =
      ( select b.director.name
        from movie b, movie_roles a
        where b.director.name = a.person.name )
and b.movieID = a.movie.movieID;
I keep getting an error saying the single-row subquery returns more than one row in Oracle.
Can anyone help me solve this problem?

It's self-explanatory. In the following line
where a.person.name = ( select b.director.name from movie b, movie_roles a where b.director.name = a.person.name )
the subquery returns more than one row, so you cannot use "=". Try
where a.person.name IN ( select b.director.name from movie b, movie_roles a where b.director.name = a.person.name )
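For reference, here is a sketch of the full query with the IN fix applied, assuming the same object-relational schema as in the question (the inner aliases are renamed to a2/b2 only to avoid shadowing the outer ones):
select distinct a.person.name, b.title, b.director.name
from movie_roles a, movie b
where a.person.name IN
      ( select b2.director.name
        from movie b2, movie_roles a2
        where b2.director.name = a2.person.name )
and b.movieID = a.movie.movieID;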

Related

Max Recursion Exhausted before Statement Completion

I know this has been asked and answered a few times here, but I can't seem to find the answer to my specific problem. Here's the recursive query:
WITH CTE AS (
SELECT
ZipCode
,Age
,[Population]
,Deaths
,DeathRate
,Death_Proportion
,DeathProbablity
,SurvivalProbablity
,PersonsAlive
FROM ProbabilityTable
WHERE Age = 0
UNION ALL
SELECT
p.ZipCode
,p.Age
,p.[Population]
,p.Deaths
,p.DeathRate
,p.Death_Proportion
,p.DeathProbablity
,p.SurvivalProbablity
,LAG(c.PersonsAlive,1) OVER(PARTITION BY p.ZipCode ORDER BY p.Age) * p.SurvivalProbablity
FROM ProbabilityTable p
INNER JOIN CTE c
ON p.ZipCode = c.ZipCode
and p.Age = c.Age
WHERE p.Age < 86
)
In the ProbabilityTable, PersonsAlive is set to 100,000 when Age = 0. What I'm looking to do with the recursive CTE is multiply the previous value of PersonsAlive by the current SurvivalProbability to calculate PersonsAlive for that Age. Age goes up to 85, which is why my termination clause is set at 86.
I've tried tweaking the recursive part of the query a number of times (and also setting PersonsAlive to 100,000 in the anchor part), but I can't figure it out. This is my first attempt at a recursive query, and even with some coursework it's not clicking for me.
EDIT
Here is the updated code that actually runs:
WITH CTE AS (
SELECT
ZipCode
,Age
,[Population]
,Deaths
,DeathRate
,Death_Proportion
,DeathProbablity
,SurvivalProbablity
,PersonsAlive
FROM ProbabilityTable
WHERE Age = 0
UNION ALL
SELECT
p.ZipCode
,p.Age
,p.[Population]
,p.Deaths
,p.DeathRate
,p.Death_Proportion
,p.DeathProbablity
,p.SurvivalProbablity
,LAG(c.PersonsAlive,1) OVER(PARTITION BY p.ZipCode ORDER BY p.Age) * p.SurvivalProbablity
FROM ProbabilityTable p
INNER JOIN CTE c
ON p.ZipCode = c.ZipCode
and p.Age = c.Age + 1
WHERE p.Age < 6
)
And here are the results it returns:
What I want the results to be for PersonsAlive is as follows:
So with each iteration of the CTE, it needs to reference the previous row's PersonsAlive and the current row's SurvivalProbability to calculate PersonsAlive.
It's hard to test this without your raw data, but I think your issue is that you're lagging over the previous row, which pushes your frame of reference two rows back.
When you're using a recursive CTE, you already have access to the previous row via CTE c. When you do LAG(c.PersonsAlive,1), you're actually telling it to look at PersonsAlive from two rows back from the current row (lagging one row back from the previous row).
Since on the first recursive pass there is only one row back, the LAG() function returns NULL by default because there is no row two back at that point. This is why every row in your results has NULL for the PersonsAlive column except the first row (the anchor row from the first half of your UNION ALL clause). So if you remove the LAG() function and instead just use c.PersonsAlive * p.SurvivalProbablity, you should get all of the expected PersonsAlive values.
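As a sketch (using the table and column names from your post), the recursive member would then look like this:
SELECT
    p.ZipCode
    ,p.Age
    ,p.[Population]
    ,p.Deaths
    ,p.DeathRate
    ,p.Death_Proportion
    ,p.DeathProbablity
    ,p.SurvivalProbablity
    ,c.PersonsAlive * p.SurvivalProbablity   -- previous row's PersonsAlive times the current SurvivalProbablity
FROM ProbabilityTable p
INNER JOIN CTE c
    ON p.ZipCode = c.ZipCode
    AND p.Age = c.Age + 1
WHERE p.Age < 86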
That being said, a recursive CTE seems like overkill here, and you can probably just use the LAG() window function in a static call on your ProbabilityTable, like so:
SELECT
ZipCode,
Age,
[Population],
Deaths,
DeathRate,
Death_Proportion,
DeathProbablity,
SurvivalProbablity,
ISNULL(LAG(PersonsAlive,1) OVER (PARTITION BY ZipCode ORDER BY Age), PersonsAlive) AS PersonsAlive
FROM ProbabilityTable
As I mentioned, I can't really test this, so please let me know if you run into any issues, and I'll help you accordingly.
Recursive CTEs are good for tree-like problems, e.g. when you need to compare multiple child rows to their parent, or interact with multiple levels of the tree simultaneously. Window functions like LAG() allow you to interact with any single row at a time relative to the current row. Your problem seems to be the latter kind.
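If you also want the cumulative multiplication (100,000 repeatedly scaled by SurvivalProbablity) without recursion, one non-recursive sketch, assuming Age 0 is the starting row in every ZipCode and SurvivalProbablity is always greater than zero, is a running product built from EXP(SUM(LOG(...))):
SELECT
    ZipCode,
    Age,
    SurvivalProbablity,
    -- 100,000 at Age 0, then multiplied by each subsequent row's SurvivalProbablity
    100000 * EXP(SUM(CASE WHEN Age = 0 THEN 0 ELSE LOG(SurvivalProbablity) END)
                 OVER (PARTITION BY ZipCode ORDER BY Age ROWS UNBOUNDED PRECEDING)) AS PersonsAlive
FROM ProbabilityTable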

Snowflake: Trouble getting numbers to return from a PIVOT function

I am moving a query from SQL Server to Snowflake. Part of the query creates a pivot table. The pivot-table part works fine (I have run it in isolation, and it pulls the numbers I expect).
However, the parts of the query that follow rely on the pivot table, and those parts fail. Some of the fields come back as a string type. I believe the problem is that Snowflake is having issues converting string data to numeric data. I have tried CAST and TRY_TO_DOUBLE/TRY_TO_NUMBER, but these just produce 0.
I will put the code down below, and I appreciate any insight as to what I can do!
CREATE OR REPLACE TEMP TABLE ATTR_PIVOT_MONTHLY_RATES AS (
SELECT
Market,
Coverage_Mo,
ZEROIFNULL(TRY_TO_DOUBLE('Starting Membership')) AS Starting_Membership,
ZEROIFNULL(TRY_TO_DOUBLE('Member Adds')) AS Member_Adds,
ZEROIFNULL(TRY_TO_DOUBLE('Member Attrition')) AS Member_Attrition,
((ZEROIFNULL(CAST('Starting Membership' AS FLOAT))
+ ZEROIFNULL(CAST('Member Adds' AS FLOAT))
+ ZEROIFNULL(CAST('Member Attrition' AS FLOAT)))-ZEROIFNULL(CAST('Starting Membership' AS FLOAT)))
/ZEROIFNULL(CAST('Starting Membership' AS FLOAT)) AS "% Change"
FROM
(SELECT * FROM ATTR_PIVOT
WHERE 'Starting Membership' IS NOT NULL) PT)
I realize this is a VERY big question with a lot of moving parts... So my main question is: how can I successfully change the data to a numeric type, so that the formulas in the second half of the query work?
Thank you so much for reading through it all!
EDIT: shortened the query to remove unneeded syntax.
I have tried CAST(), TRY_TO_DOUBLE(), and TRY_TO_NUMBER(). I have also tried putting the field names (Starting Membership, Member Adds) in both single and double quotation marks.
Unless you are quoting your field names in this post just to highlight them, the way you've written this query indicates that you are trying to cast a string literal to a number.
For example:
ZEROIFNULL(TRY_TO_DOUBLE('Starting Membership'))
This is simply trying to cast the string literal value 'Starting Membership' to a double. That will always be NULL, and your ZEROIFNULL() function then turns the NULL into a 0 (zero).
Without seeing the rest of your query that defines the column names, I can't provide an exact correction, but try using column names (double-quoted identifiers), not quoted string values, in your query and see if that gives you what you need.
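Assuming the pivot really does produce a column named Starting Membership (with the space), the reference would use double quotes, which Snowflake treats as an identifier rather than a string literal:
ZEROIFNULL(TRY_TO_DOUBLE("Starting Membership")) AS Starting_Membership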
Your first mistake is that all your single-quoted column names are being treated as strings/text/char.
For example, your inner select:
with ATTR_PIVOT(id, studentname) as (
select * from values
(1, 'student_a'),
(1, 'student_b'),
(1, 'student_c'),
(2, 'student_z'),
(2, 'student_a')
)
SELECT *
FROM ATTR_PIVOT
WHERE 'Starting Membership' IS NOT NULL
there is no "Starting Membership" column being checked (the string literal is never NULL), so we get all the rows:
ID  STUDENTNAME
1   student_a
1   student_b
1   student_c
2   student_z
2   student_a
So you need to change 'Starting Membership' to "Starting Membership", and so on, throughout.
As Mike mentioned, the 0 results come from the fact that TRY_TO_DOUBLE always fails on the literal, and the resulting NULL is then turned into zero.
Now, with real string values in properly named columns:
with ATTR_PIVOT(Market, Coverage_Mo, "Starting Membership", "Member Adds", "Member Attrition") as (
select * from values
(1, 10 ,'student_a', '23', '150' )
)
SELECT
Market,
Coverage_Mo,
ZEROIFNULL(TRY_TO_DOUBLE("Starting Membership")) AS Starting_Membership,
ZEROIFNULL(TRY_TO_DOUBLE("Member Adds")) AS Member_Adds,
ZEROIFNULL(TRY_TO_DOUBLE("Member Attrition")) AS Member_Attrition
FROM ATTR_PIVOT
WHERE "Starting Membership" IS NOT NULL
we get what we would expect:
MARKET  COVERAGE_MO  STARTING_MEMBERSHIP  MEMBER_ADDS  MEMBER_ATTRITION
1       10           0                     23           150
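Putting that back into your original statement, a sketch with double-quoted identifiers would look like the following (the NULLIF in the denominator is my addition, purely to avoid a divide-by-zero error when "Starting Membership" is 0 or non-numeric):
CREATE OR REPLACE TEMP TABLE ATTR_PIVOT_MONTHLY_RATES AS (
    SELECT
        Market,
        Coverage_Mo,
        ZEROIFNULL(TRY_TO_DOUBLE("Starting Membership")) AS Starting_Membership,
        ZEROIFNULL(TRY_TO_DOUBLE("Member Adds")) AS Member_Adds,
        ZEROIFNULL(TRY_TO_DOUBLE("Member Attrition")) AS Member_Attrition,
        ((ZEROIFNULL(TRY_TO_DOUBLE("Starting Membership"))
          + ZEROIFNULL(TRY_TO_DOUBLE("Member Adds"))
          + ZEROIFNULL(TRY_TO_DOUBLE("Member Attrition")))
         - ZEROIFNULL(TRY_TO_DOUBLE("Starting Membership")))
        / NULLIF(TRY_TO_DOUBLE("Starting Membership"), 0) AS "% Change"
    FROM
        (SELECT * FROM ATTR_PIVOT
         WHERE "Starting Membership" IS NOT NULL) PT
);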

postgres comparing varying character with array from string_to_array

I'm having problems comparing Postgres types and would be grateful for some help. I am extracting valid document types from a configuration table that holds a pipe-separated string, as follows:
SELECT string_to_array(value,'|') as document_kinds
FROM company_configs
WHERE option = 'document_kinds'
this gives me an array of values, so
'doc1|doc2|doc3' becomes {doc1,doc2,doc3}
Next I need to select the documents for a given person which match my document types:
SELECT * FROM people
JOIN documents ON ...
WHERE kind IN
(SELECT string_to_array(value,'|') as document_kinds
FROM company_configs
WHERE option = 'document_kinds')
The documents.kind column is 'character varying'.
My understanding is that string_to_array produces an array of text values, 'text[]'.
This query produces the error 'ERROR: operator does not exist: character varying = text[]'.
If I cast 'kind' into text, with
SELECT * FROM people
JOIN documents ON ...
WHERE kind::text IN
(SELECT string_to_array(value,'|') as visa_document_kinds FROM languages_united.company_configs WHERE option = 'visa_document_kinds')
I get the error 'ERROR: operator does not exist: text = text[]'
I'm not sure how to compare the two, and would be grateful for any advice.
Thanks in advance
Dan
Postgres 9.4.1
You can select against any array element by using the ANY operator, if your sub-query returns exactly one row:
SELECT *
FROM people
JOIN documents ON ...
WHERE kind = ANY (
SELECT string_to_array(value,'|') as document_kinds
FROM company_configs
WHERE option = 'document_kinds');
If the sub-query possibly returns multiple rows, you can use the regexp_split_to_table() function:
SELECT *
FROM people
JOIN documents ON ...
JOIN (
SELECT document_kinds
FROM company_configs,
regexp_split_to_table(value, '\|') as document_kinds
WHERE option = 'document_kinds') sub ON sub.document_kinds = kind;
(You will have to tweak this to match the rest of your query.)
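As an aside (a sketch, not part of the original answer): unnest() turns the array itself back into rows, so an IN against it also works no matter how many rows company_configs returns:
SELECT *
FROM people
JOIN documents ON ...
WHERE kind IN (
    SELECT unnest(string_to_array(value, '|'))
    FROM company_configs
    WHERE option = 'document_kinds');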

Table function for each record in a query

I have a table-valued function with two outputs:
dbo.func1(@code) -> TABLE(out1, out2)
This function is costly and takes a long time to calculate these two outputs.
And I have a query like this:
SELECT code, name,
       (SELECT out1 FROM dbo.func1(code)), (SELECT out2 FROM dbo.func1(code))
FROM MyInnerJoinedTablesResult
But this calls my costly function twice. I want to call it only once for each record selected from my table, and get both outputs as two columns in the same row (not duplicated rows), something like:
SELECT code, name,
       (out1 in func1), (out2 in func1)
FROM MyInnerJoinedTablesResult
You need to use CROSS APPLY:
SELECT code, name, func.out1, func.out2
FROM MyInnerJoinedTablesResult
CROSS APPLY dbo.func1(code) AS func
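If dbo.func1 can return no rows for some code values (an assumption on my part, not something stated in the question), OUTER APPLY keeps those rows and returns NULL for the outputs:
SELECT code, name, func.out1, func.out2
FROM MyInnerJoinedTablesResult
OUTER APPLY dbo.func1(code) AS func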

SSRS Count Distinct Fields!B.value Where Fields!A.value =

I have a report with two tables. The first table is a list of clients, detailing each client, which falls into one of three categories, with different items for each client. The SQL joins a client_table with the client_items table (there can be multiple items per client). The SQL results look like so:
Type  ClientID  ItemID
A     1         1
A     3         1
A     3         2
B     2         1
B     4         3
C     5         2
My second table is going to return counts of the distinct ClientIDs from various combinations of the types:
Total (A+B+C) - I've got this one figured out with =CountDistinct(Fields!ClientID.Value,"datasource1")
Type B+C - Distinct ClientIDs where the type is B or C
Type C - Distinct ClientIDs where the type is C
I hope that was clear, if not let me know what I need to add to clear it up.
CountDistinct() counts non-null values, so you can use an expression to null out the values you don't want to count and do something similar to your first expression.
Type B+C:
=CountDistinct
(
IIf
(
Fields!Type.Value = "B" or Fields!Type.Value = "C"
, Fields!ClientId.Value
, Nothing
)
, "datasource1"
)
Type C:
=CountDistinct
(
IIf
(
Fields!Type.Value = "C"
, Fields!ClientId.Value
, Nothing
)
, "datasource1"
)
=Sum(CountDistinct(Fields!UserName.Value,"IssueDate7"))
Here, UserName is the row group and IssueDate7 is the column group.
Regards
Mercy S
