As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Is it fair to say that the outer SELECT adds essentially no time (compared to running the nested SELECT alone) when it reads from a result set like this?
SELECT some_column
FROM
(
SELECT some_column
FROM some_table
)
AS _alias
The SQL optimizer is likely to treat that SELECT statement as if it were written:
SELECT some_column FROM some_table
So there'll be no performance difference whatsoever. The optimizer does its best to minimize the cost of producing the answer and will rework the query you write to speed things up. Only the most naïve optimizer would evaluate the inner SELECT and save the results in a table and then run the outer SELECT on that result.
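Whether a given engine actually flattens the derived table can be checked from its plan output. As a minimal sketch (using SQLite through Python, not any specific commercial optimizer), both forms of the query produce the same single-scan plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_column INTEGER)")
conn.executemany("INSERT INTO some_table VALUES (?)", [(1,), (2,), (3,)])

flat = "SELECT some_column FROM some_table"
nested = "SELECT some_column FROM (SELECT some_column FROM some_table) AS _alias"

plan_flat = conn.execute("EXPLAIN QUERY PLAN " + flat).fetchall()
plan_nested = conn.execute("EXPLAIN QUERY PLAN " + nested).fetchall()

# With flattening, both plans show a single SCAN of some_table;
# the subquery never materializes an intermediate result.
print(plan_flat)
print(plan_nested)
```

Other engines (SQL Server, Oracle, PostgreSQL) apply the same kind of subquery flattening; inspect the actual execution plan to confirm it for your case.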
Recently I was investigating this question: Storing long values in DataGridView C#. After some tests I found out that the DataGridViewCell will not display any text with a length larger than 43679 characters, even if the value is actually stored in the cell and you can access it programmatically.
Actually, this has also been reported as a bug for SQL Server Management Studio: SSMS - Can not paste more than 43679 characters from a column in Grid Mode.
I guess the limit is intended to prevent overloading the UI rendering. But my questions are:
Why is this exact value used?
Is there any documentation for this limit?
43679 in hex is 0xAA9F, so the first length that fails to display, 43680, is 0xAAA0. In inverse form, 0xAAA0 = 0xFFFF - 0x555F.
0x555F is the code point of a Chinese character meaning 'open; begin'. Maybe it is a Chinese message, or some kind of Chinese magic :)
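The arithmetic is easy to verify directly in Python:

```python
n = 43679                 # the display limit observed in DataGridViewCell
print(hex(n))             # 0xaa9f
print(hex(n + 1))         # 0xaaa0 -- the first length that fails to display
print(0xFFFF - 0x555F)    # 43680
print(chr(0x555F))        # the Chinese character at code point U+555F
```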
Why is EXEC or EXECUTE required when calling a stored procedure? Both Oracle and SQL Server have cases when EXEC is required and when it's not required or necessary. What's the point of ever requiring it?
This makes the syntax more distinct and unambiguous. Since statement separators (semicolons) are not mandatory in T-SQL, it makes it easier to see where statements begin.
On a side note, I recommend strictly using a semicolon after each statement, because otherwise there are ambiguity problems. These are especially common with the WITH keyword, which can either start the definition of common table expressions before a DML statement, or appear at the end of a DML statement to define hints; without a semicolon, the parser cannot really know which one to pick.
It's undoubtedly a parser problem/challenge. If a call to a stored proc can be anywhere in a body of text, you'll kill yourself trying to write a universal "thing" that can understand it all. Instead, the designers publish the BNF for the language and you, the user, are responsible for understanding how to speak it.
Or, in the SQL Server world, always use EXEC/EXECUTE and never worry about the finer points of when it is needed. As noted by @alex, that doesn't hold true for Oracle.
In Oracle, EXECUTE (or EXEC) is used in SQL*Plus as a shortcut for an anonymous PL/SQL block. EXECUTE will not work in PL/SQL. You could do either:
SQL> execute my_proc;
Or, as a fully specified anonymous block:
SQL> DECLARE
BEGIN
my_proc;
END;
/
It's entirely client syntax in Oracle.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I am studying a database development course at the moment and I am having trouble getting my head around this!
My course notes describe a tuple as:
A tuple is a row of a relation
From what I have understood while working with MySQL, you search for row(s); when browsing through a database you are looking through the rows in a table.
And from what I understood, a record is the information within a row.
Are there any distinct differences between the three?
I know someone has posted something similar but I couldn't really understand his answer.
Thanks for all help in advance!
Peter
In your context they are different words to mean exactly the same thing.
A tuple, in general, means an ordered list with possibly repeated elements (as contrasted with a set, which has all unique elements and is not ordered).
They are the same.
A row—also called a record or tuple—represents a single, implicitly structured data item in a table.
They mean exactly the same thing: tuples, rows, or records.
Your SELECT query will generate a result that may contain zero or more rows (records, tuples).
A SELECT query can span one or more tables.
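The terminology even shows up in database APIs. For instance, Python's sqlite3 module returns each row of a result set as a literal tuple (a small sketch with a made-up `person` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.execute("INSERT INTO person VALUES ('Peter', 21)")

# One "row" / "record" of the relation comes back as a Python tuple.
row = conn.execute("SELECT name, age FROM person").fetchone()
print(row)          # ('Peter', 21)
print(type(row))    # <class 'tuple'>
```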
I have a few questions which I was asked in an interview:
What is the performance difference between DELETE and TRUNCATE?
How do you delete duplicate data from a table that has no id column, without using a CTE?
Why are we able to delete data using a CTE?
DELETE logs each individual row deletion, whereas TRUNCATE is a minimally logged operation, hence it is faster.
You could SELECT DISTINCT the data into a temp table, TRUNCATE the original table, then reinsert the deduplicated rows.
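A sketch of that approach, run here in SQLite through Python (SQLite has no TRUNCATE, so a plain DELETE stands in for it; in SQL Server you would use TRUNCATE TABLE instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a TEXT, b INTEGER);              -- no id column
    INSERT INTO t VALUES ('x', 1), ('x', 1), ('y', 2);

    CREATE TEMP TABLE dedup AS SELECT DISTINCT a, b FROM t;
    DELETE FROM t;                                   -- TRUNCATE TABLE t in SQL Server
    INSERT INTO t SELECT a, b FROM dedup;
    DROP TABLE dedup;
""")
rows = conn.execute("SELECT a, b FROM t ORDER BY a").fetchall()
print(rows)   # [('x', 1), ('y', 2)] -- duplicates gone
```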
Not a scooby...
Here are some pointers to solve your issues:
Since TRUNCATE doesn't delete data row by row but deallocates the data pages that hold it, it is much faster than DELETE; with DELETE, every removed row is recorded in the transaction log, which makes it much slower.
http://www.codeproject.com/Tips/159881/How-to-remove-duplicate-rows-in-SQL-Server-2008-wh
http://blog.sqlauthority.com/2009/06/23/sql-server-2005-2008-delete-duplicate-rows/
We are having a discussion about SQL Server 2008 and joins. One half says that the more joins, the slower your SQL runs. The other half says that it does not matter, because SQL Server takes care of business so you will not notice any performance loss. What is true?
Instead of asking the question the way you have, consider instead:
Can I get the data I want without the join?
No => You need the join, end of discussion.
It is also a matter of degree. It is impossible for a join not to add processing. Even if the Query Optimizer removes it (e.g. a left join from which nothing is used), it still costs CPU cycles to parse it.
Now if the question is about comparing joins to another technique, such as the special case of LEFT JOIN + IS NULL vs NOT EXISTS for a "records in X but not in Y" scenario, then let's discuss specifics: table sizes (X vs Y), indexes, etc.
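For reference, here are those two equivalent formulations of the "rows in X but not in Y" query, sketched in SQLite via Python with two assumed single-column tables `x` and `y`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (id INTEGER);
    CREATE TABLE y (id INTEGER);
    INSERT INTO x VALUES (1), (2), (3);
    INSERT INTO y VALUES (2);
""")

# Formulation 1: anti-join via LEFT JOIN + IS NULL.
left_join = """
    SELECT x.id FROM x
    LEFT JOIN y ON y.id = x.id
    WHERE y.id IS NULL
"""
# Formulation 2: the same result via NOT EXISTS.
not_exists = """
    SELECT x.id FROM x
    WHERE NOT EXISTS (SELECT 1 FROM y WHERE y.id = x.id)
"""
print(conn.execute(left_join).fetchall())   # [(1,), (3,)]
print(conn.execute(not_exists).fetchall())  # [(1,), (3,)]
```

Which formulation is faster depends on the engine, the table sizes, and the available indexes, which is exactly why the specifics matter.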
It will slow it down: the more complicated a query, the more work the database server has to do to execute it.
But about that "performance loss": loss compared to what? Is there another way to get the same data? If so, you can profile the options against each other to see which is fastest.