What does /=/ mean in Netezza?

I am converting some views from Netezza into another DBMS.
I keep running into this operator /=/, which I imagine is some sort of equality operator.
However, I have searched this site and the official docs, but I cannot find a definition of how this operator works.
What does /=/ mean in Netezza?
EDIT:
I am seeing it in case statements.
Here is an example:
CASE WHEN (A_TABLE.A_COL /=/ 'ONE'::VARCHAR) THEN 'ONE'::VARCHAR
WHEN (A_TABLE.A_COL /=/ 'TWO'::VARCHAR) THEN 'TWO'::VARCHAR
WHEN (A_TABLE.A_COL /=/ 'THREE'::VARCHAR) THEN 'THREE'::VARCHAR
WHEN (A_TABLE.A_COL /=/ 'FOUR'::VARCHAR) THEN 'FOUR'::VARCHAR
ELSE 'OTHER'::VARCHAR END

It is a quite powerful feature, often used in JOIN conditions and, as here, in CASE expressions.
It's an operator that tells the database to treat a NULL in one value as matching a NULL in the other. Normally, functions and comparison operators return NULL if one of the arguments is NULL, and since NULL is not TRUE you will not get a match.
The whole three-valued logic surrounding NULL can be quite confusing at times and was clearly invented in the wrinkled minds of mathematicians, but this special /=/ operator has a behavior that is quite easy to wrap your brain around.
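For porting purposes, the key fact is that a /=/ b matches not only equal non-NULL values but also two NULLs. A hedged sketch of equivalent predicates for a target DBMS (IS NOT DISTINCT FROM is the ANSI spelling; not every product supports it, but the expanded OR form works everywhere):

A_TABLE.A_COL IS NOT DISTINCT FROM 'ONE'

-- portable expansion of the same predicate:
(A_TABLE.A_COL = 'ONE' OR (A_TABLE.A_COL IS NULL AND 'ONE' IS NULL))

Note that in the CASE expression above the right-hand sides are non-NULL literals, so /=/ and a plain = should behave identically there: a NULL column fails both comparisons and falls through to the ELSE branch.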

Related

Not-safe expression in datalog

Why is this goal not considered safe?
MANAGER(Name) :- WORKER(Name, Age, _ ), ¬ SUBORDINATE (_, Name), Age <= 40
Our teacher says that it is because SUBORDINATE is negated, and so it cannot have anonymous (_) arguments, but the expression seems logical to me.
Can anyone help me?
The safety requirements in Datalog are intended to prevent infinite results. If you have a variable that occurs in the head and only negated in the body, then it can be bound to infinitely many values, which would obviously be a problem.
The specific requirements for safety are hard to precisely formulate, so usually you see the requirements simplified to 'every variable has to occur positively'. This is a bit more restrictive than needed.
The most informative answer to the question would be that the rule is technically unsafe, but that it does not have an infinite result. Some Datalog engines would allow this rule and return the finite result.
This rule is perfectly safe and it does not produce an infinite relation. It is an implementation deficiency of the Datalog engine you are using.
In general, an easy way to handle _ is to convert it into a fresh variable. This makes the engine easy to implement, but it is probably the reason this clause throws an error: once _ becomes a variable, that variable occurs only in a negated atom, and there is an infinite number of values it could take that are not SUBORDINATE's first argument.
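The usual way to satisfy such a safety check while keeping the rule's meaning is to move the negated, wildcarded atom into an auxiliary predicate whose head mentions only the variable you actually care about. A sketch in the question's own notation (HAS_BOSS is a name introduced here purely for illustration):

HAS_BOSS(Name) :- SUBORDINATE(_, Name)
MANAGER(Name) :- WORKER(Name, Age, _), ¬ HAS_BOSS(Name), Age <= 40

Now the only negated atom contains no wildcards, and Name is bound positively by WORKER, so even a strict engine should accept the rule.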

Order of operation for AND and OR in SQL Server queries

I have a sql statement (inherited) that has the following WHERE clause:
WHERE
(Users_1.SecurityLevel IN ('Accounts', 'General manager'))
AND (PurchaseOrders.Approval = 1)
AND (PurchaseOrders.QuotedAmount = 0)
AND (Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID))
OR
(Users_1.SecurityLevel IN ('Accounts', 'General manager'))
AND (PurchaseOrders.QuotedAmount = 0)
AND (ServiceRequests.POC = 1)
AND (Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID))
OR
(ISNULL(ISNULL(PurchaseOrders.InvoiceNumber, ServiceRequests.InvoiceNumber), '!#') <> '!#')
AND (Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID))
I'm trying to figure out the order of operations when things are not nicely bracketed.
How are AND and OR statements ordered in the above example?
Is there an easy rule so that I can put brackets around things to make it more readable?
I'm looking for something like "BODMAS" when it comes to the mathematical order of operations, for SQL WHERE clause operators.
Thanks
The page on Operator Precedence tells you:
When a complex expression has multiple operators, operator precedence determines the sequence in which the operations are performed. The order of execution can significantly affect the resulting value.
And that AND has a higher precedence than OR.
However, that only tells you how the expression is logically grouped, not the order in which it is executed. In SQL, you tell the system what you want, not how to do it, and the optimizer is free to re-order operations, provided that the same logical result is produced.
So, while operator precedence tells you how the operators are logically combined, it does not, in fact, control the order in which each piece of logic is actually evaluated. This means that idioms which may be safe in other languages because of guarantees of execution order are not, in fact, safe in SQL. E.g. a check such as:
<String can be parsed as an int> && <convert the string to an int and compare to 20>
can be perfectly safe in languages such as C#. The same logic in SQL is not safe, since the optimizer may choose to perform the string-to-int conversion before it evaluates whether the string can be parsed as an int, and so may throw an error about a failed conversion. (Of course, it can also work as you expected and not produce an error.)
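Applied to the question's clause, AND binding more tightly than OR means it is logically grouped as three OR'd blocks. Writing the implicit brackets out makes that explicit (this is the same predicate, just re-parenthesised):

WHERE
    (
        Users_1.SecurityLevel IN ('Accounts', 'General manager')
        AND PurchaseOrders.Approval = 1
        AND PurchaseOrders.QuotedAmount = 0
        AND Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID)
    )
    OR
    (
        Users_1.SecurityLevel IN ('Accounts', 'General manager')
        AND PurchaseOrders.QuotedAmount = 0
        AND ServiceRequests.POC = 1
        AND Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID)
    )
    OR
    (
        ISNULL(ISNULL(PurchaseOrders.InvoiceNumber, ServiceRequests.InvoiceNumber), '!#') <> '!#'
        AND Users_1.StaffNumber = ISNULL(ServiceRequests.POC_UserID, PurchaseOrders.Approval_UserID)
    )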

Implementation of the switch statement in C [duplicate]

I read somewhere that the switch statement uses binary search or some sorting technique to pick the correct case exactly, and that this increases its performance compared to an else-if ladder.
Also, if we give the cases in order, does the switch work faster? Is that so? Can you add your valuable suggestions on this?
We discussed this among ourselves and decided to post it as a question.
It's actually up to the compiler how a switch statement is realized in code.
However, my understanding is that when it's suitable (that is, relatively dense cases), a jump table is used.
That would mean that something like:
switch(i) {
case 0: doZero(); break;
case 1: doOne();
case 2: doTwo(); break;
default: doDefault();
}
Would end up getting compiled to something like (horrible pseudo-assembler, but it should be clear, I hope).
load i into REG
compare REG to 2
if greater, jmp to DEFAULT
compare REG to 0
if less jmp to DEFAULT
jmp to table[REG]
data table
ZERO
ONE
TWO
end data
ZERO: call doZero
jmp END
ONE: call doOne
TWO: call doTwo
jmp END
DEFAULT: call doDefault
END:
If that's not the case, there are other possible implementations that allow for some extent of "better than a sequence of conditionals".
How switch is implemented depends on the values you have. For values that are close together, the compiler will generally generate a jump table. If the values are far apart, it will generate a chain of branches, using something like a binary search to find the right value.
The order of the case labels as such doesn't matter; the switch will do the same thing whether they are in ascending, descending or random order - do what makes most sense with regard to what you want to do.
If nothing else, a switch is usually a lot easier to read than an if-else sequence.
On some googling I found an interesting link and planned to post it as an answer to my own question:
http://www.codeproject.com/Articles/100473/Something-You-May-Not-Know-About-the-Switch-Statem
Comments are welcome.
Although it can be implemented in several ways, it depends on how the compiler writer wants to implement it.
One possible efficient way is to use a hash map:
map every case value (usually an integer) to the corresponding block of code to be executed, followed by a jump.
Other solutions can also work, since a switch always has a finite set of cases, but an efficient general solution is to use a hash map.
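To make the idea concrete, here is a hedged C sketch of the table-driven dispatch that both a jump table and a hand-rolled map boil down to (the handler names are invented for illustration):

#include <stdio.h>

static void do_zero(void)    { puts("zero"); }
static void do_one(void)     { puts("one"); }
static void do_two(void)     { puts("two"); }
static void do_default(void) { puts("default"); }

typedef void (*handler)(void);

/* dense keys 0..2 index straight into the table; anything else falls back */
static handler table[] = { do_zero, do_one, do_two };

static void dispatch(unsigned i) {
    if (i < sizeof table / sizeof table[0])
        table[i]();              /* analogous to "jmp to table[REG]" above */
    else
        do_default();
}

int main(void) {
    dispatch(1);   /* prints "one" */
    dispatch(7);   /* prints "default" */
    return 0;
}

For sparse or non-integer keys the direct table would be replaced by a search (binary search or a hash lookup), which is essentially what compilers emit when a plain jump table would be too large.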

Why Blank space is accepted and null keyword is not accepted [duplicate]

In SQL server if you have nullParam=NULL in a where clause, it always evaluates to false. This is counterintuitive and has caused me many errors. I do understand the IS NULL and IS NOT NULL keywords are the correct way to do it. But why does SQL server behave this way?
Think of the null as "unknown" in that case (or "does not exist"). In either of those cases, you can't say that they are equal, because you don't know the value of either of them. So, null=null evaluates to not true (false or null, depending on your system), because you don't know the values to say that they ARE equal. This behavior is defined in the ANSI SQL-92 standard.
EDIT:
This depends on your ANSI_NULLS setting. If you have ANSI_NULLS OFF, this WILL evaluate to true. Run the following code for an example...
set ansi_nulls off
if null = null
print 'true'
else
print 'false'
set ansi_nulls ON
if null = null
print 'true'
else
print 'false'
How old is Frank? I don't know (null).
How old is Shirley? I don't know (null).
Are Frank and Shirley the same age?
Correct answer should be "I don't know" (null), not "no", as Frank and Shirley might be the same age, we simply don't know.
Here I will hopefully clarify my position.
That NULL = NULL evaluates to FALSE is wrong. Hacker and Mister correctly answered: NULL.
Here is why. Dewayne Christensen wrote to me, in a comment to Scott Ivey:
Since it's December, let's use a
seasonal example. I have two presents
under the tree. Now, you tell me if I
got two of the same thing or not.
They can be different or they can be equal; you don't know until you open both presents. Who knows? You invited two people who don't know each other and both have given you the same gift - rare, but not impossible §.
So the question: are these two UNKNOWN presents the same (equal, =)? The correct answer is: UNKNOWN (i.e. NULL).
This example was intended to demonstrate that "..(false or null, depending on your system).." is not a correct answer - only NULL is correct in 3VL (or is it OK for you to accept a system which gives wrong answers?).
A correct answer to this question must emphasize these two points:
three-valued logic (3VL) is counterintuitive (see countless other questions on this subject on Stack Overflow and in other forums to make sure);
SQL-based DBMSes often do not even respect 3VL; they sometimes give wrong answers (as, the original poster asserts, SQL Server does in this case).
So I reiterate: SQL does no one any good by forcing them to interpret the reflexive property of equality, which states that:
for any x, x = x §§ (in plain English: whatever the universe of discourse, a "thing" is always equal to itself)
...in a 3VL (TRUE, FALSE, NULL). People's expectations conform to 2VL (TRUE, FALSE - which even in SQL is valid for all other values), i.e. x = x always evaluates to TRUE, for any possible value of x - no exceptions.
Note also that NULLs are treated as valid "non-values" (as their apologists pretend them to be) which one can assign as attribute values(??) in relation variables. So they are acceptable values of every type (domain), not only of the type of logical expressions.
And this was my point: NULL, as a value, is a "strange beast". Without euphemism, I prefer to say: nonsense.
I think this formulation is much clearer and less debatable - sorry for my poor English proficiency.
This is only one of the problems of NULLs. Better to avoid them entirely, when possible.
§ we are concerned with values here, so the fact that the two presents are always two different physical objects is not a valid objection; if you are not convinced, I'm sorry, but this is not the place to explain the difference between value and "object" semantics (Relational Algebra has had value semantics from the start - see Codd's information principle; I think that some SQL DBMS implementors don't even care about a common semantics).
§§ to my knowledge, this is an axiom accepted (in one form or another, but always interpreted in a 2VL) since antiquity, exactly because it is so intuitive. 3VLs (in reality a family of logics) are a much more recent development (but I'm not sure when they were first developed).
Side note: if someone introduces Bottom, Unit and Option types as attempts to justify SQL NULLs, I will be convinced only after a quite detailed examination showing how SQL implementations with NULLs have a sound type system and clarifying, finally, what NULLs (these "values-not-quite-values") really are.
In what follows I will quote some authors. Any error or omission is
probably mine and not the original authors'.
Joe Celko on SQL NULLs
I see Joe Celko often cited on this forum. Apparently he is a much-respected author here. So, I said to myself: "what has he written about SQL NULLs? How does he explain their numerous problems?". One of my friends has an ebook version of Joe Celko's SQL for Smarties: Advanced SQL Programming, 3rd edition. Let's see.
First, the table of contents. The thing that strikes me most is the number of times that NULL is mentioned and in the most varied contexts:
3.4 Arithmetic and NULLs 109
3.5 Converting Values to and from NULL 110
3.5.1 NULLIF() Function 110
6 NULLs: Missing Data in SQL 185
6.4 Comparing NULLs 190
6.5 NULLs and Logic 190
6.5.1 NULLS in Subquery Predicates 191
6.5.2 Standard SQL Solutions 193
6.6 Math and NULLs 193
6.7 Functions and NULLs 193
6.8 NULLs and Host Languages 194
6.9 Design Advice for NULLs 195
6.9.1 Avoiding NULLs from the Host Programs 197
6.10 A Note on Multiple NULL Values 198
10.1 IS NULL Predicate 241
10.1.1 Sources of NULLs 242
...
and so on. It rings of "nasty special case" to me.
I will go into some of these cases with excerpts from this book, trying to limit myself to the essential, for copyright reasons. I think these quotes fall within the "fair use" doctrine and they may even stimulate you to buy the book - so I hope that no one will complain (otherwise I will need to delete most of it, if not all). Furthermore, I shall refrain from reporting code snippets for the same reason. Sorry about that. Buy the book to read the detailed reasoning.
Page numbers are given in parentheses in what follows.
NOT NULL Constraint (11)
The most important column constraint is the NOT NULL, which forbids
the use of NULLs in a column. Use this constraint routinely, and remove
it only when you have good reason. It will help you avoid the
complications of NULL values when you make queries against the data.
It is not a value; it is a marker that holds a place where a value might go.
Again this "value but not quite a value" nonsense. The rest seems quite sensible to me.
(12)
In short, NULLs cause a lot of irregular features in SQL, which we will discuss
later. Your best bet is just to memorize the situations and the rules for NULLs
when you cannot avoid them.
Apropos of SQL, NULLs and infinite:
(104) CHAPTER 3: NUMERIC DATA IN SQL
SQL has not accepted the IEEE model for mathematics for several reasons.
...
If the IEEE rules for math were allowed in
SQL, then we would need type conversion rules for infinite and a way to
represent an infinite exact numeric value after the conversion. People
have enough trouble with NULLs, so let’s not go there.
SQL implementations undecided on what NULL really means in particular contexts:
3.6.2 Exponential Functions (116)
The problem is that logarithms are undefined when (x <= 0). Some SQL
implementations return an error message, some return a NULL, and DB2/400
version 3 release 1 returned *NEGINF (short for “negative infinity”)
as its result.
Joe Celko quoting David McGoveran and C. J. Date:
6 NULLs: Missing Data in SQL (185)
In their book A Guide to Sybase and SQL Server, David McGoveran
and C. J. Date said: “It is this writer’s opinion that NULLs, at least as
currently defined and implemented in SQL, are far more trouble than
they are worth and should be avoided; they display very strange and
inconsistent behavior and can be a rich source of error and confusion.
(Please note that these comments and criticisms apply to any system
that supports SQL-style NULLs, not just to SQL Server specifically.)”
NULLs as a drug addiction:
(186/187)
In the rest of this book, I will be urging you not to use
them, which may seem contradictory, but it is not. Think of a NULL
as a drug; use it properly and it works for you, but abuse it and it can ruin
everything. Your best policy is to avoid NULLs when you can and use
them properly when you have to.
My only objection here is to "use them properly", which interacts badly with
specific implementation behaviors.
6.5.1 NULLS in Subquery Predicates (191/192)
People forget that a subquery often hides a comparison with a NULL.
Consider these two tables:
...
The result will be empty. This is counterintuitive, but correct.
(separator)
6.5.2 Standard SQL Solutions (193)
SQL-92 solved some of the 3VL (three-valued logic) problems by adding
a new predicate of the form:
<search condition> IS [NOT] TRUE | FALSE | UNKNOWN
But UNKNOWN is a source of problems in itself, so that C. J. Date,
in his book cited below, recommends in chapter 4.5, Avoiding Nulls in SQL:
Don't use the keyword UNKNOWN in any context whatsoever.
Read "ASIDE" on UNKNOWN, also linked below.
6.8 NULLs and Host Languages (194)
However, you should know how NULLs are handled when they have
to be passed to a host program. No standard host language for
which an embedding is defined supports NULLs, which is another
good reason to avoid using them in your database schema.
(separator)
6.9 Design Advice for NULLs (195)
It is a good idea to declare all your base tables with NOT NULL
constraints on all columns whenever possible. NULLs confuse people
who do not know SQL, and NULLs are expensive.
Objection: NULLs confuse even people who know SQL well,
see below.
(195)
NULLs should be avoided in FOREIGN KEYs. SQL allows this “benefit
of the doubt” relationship, but it can cause a loss of information in
queries that involve joins. For example, given a part number code in
Inventory that is referenced as a FOREIGN KEY by an Orders table, you
will have problems getting a listing of the parts that have a NULL. This is
a mandatory relationship; you cannot order a part that does not exist.
(separator)
6.9.1 Avoiding NULLs from the Host Programs (197)
You can avoid putting NULLs into the database from the Host Programs
with some programming discipline.
...
Determine impact of missing data on programming and reporting:
Numeric columns with NULLs are a problem, because queries
using aggregate functions can provide misleading results.
(separator)
(227)
The SUM() of an empty set is always NULL. One of the most common
programming errors made when using this trick is to write a query that
could return more than one row. If you did not think about it, you might
have written the last example as: ...
(separator)
10.1.1 Sources of NULLs (242)
It is important to remember where NULLs can occur. They are more than
just a possible value in a column. Aggregate functions on empty sets,
OUTER JOINs, arithmetic expressions with NULLs, and OLAP operators
all return NULLs. These constructs often show up as columns in
VIEWs.
(separator)
(301)
Another problem with NULLs is found when you attempt to convert
IN predicates to EXISTS predicates.
(separator)
16.3 The ALL Predicate and Extrema Functions (313)
It is counterintuitive at first that these two predicates are not the same in SQL:
...
But you have to remember the rules for the extrema functions—they
drop out all the NULLs before returning the greater or least values. The
ALL predicate does not drop NULLs, so you can get them in the results.
(separator)
(315)
However, the definition in the standard is worded in the
negative, so that NULLs get the benefit of the doubt.
...
As you can see, it is a good idea to avoid NULLs in UNIQUE
constraints.
Discussing GROUP BY:
NULLs are treated as if they were all equal to each other, and
form their own group. Each group is then reduced to a single
row in a new result table that replaces the old one.
This means that for the GROUP BY clause, NULL = NULL does not
evaluate to NULL, as in 3VL, but evaluates to TRUE.
SQL standard is confusing:
The ORDER BY and NULLs (329)
Whether a sort key value that is NULL is considered greater or less than a
non-NULL value is implementation-defined, but...
... There are SQL products that do it either way.
In March 1999, Chris Farrar brought up a question from one of his
developers that caused him to examine a part of the SQL Standard that
I thought I understood. Chris found some differences between the
general understanding and the actual wording of the specification.
And so on. I think that is enough from Celko.
C. J. Date on SQL NULLs
C. J. Date is more radical about NULLs: avoid NULLs in SQL, period.
In fact, chapter 4 of his SQL and Relational Theory: How to Write Accurate
SQL Code is titled "NO DUPLICATES, NO NULLS", with subchapters
"4.4 What's Wrong with Nulls?" and "4.5 Avoiding Nulls in SQL" (follow the link:
thanks to Google Books, you can read some pages on-line).
Fabian Pascal on SQL NULLs
From his Practical Issues in Database Management - A Reference
for the Thinking Practitioner (no excerpts on-line, sorry):
10.3 Practical Implications
10.3.1 SQL NULLs
... SQL suffers from the problems inherent in 3VL as well as from many
quirks, complications, counterintuitiveness, and outright errors [10, 11];
among them are the following:
Aggregate functions (e.g., SUM(), AVG()) ignore NULLs (except for COUNT()).
A scalar expression on a table without rows evaluates incorrectly to NULL, instead of 0.
The expression "NULL = NULL" evaluates to NULL, but is actually invalid in SQL; yet ORDER BY treats NULLs as equal (whatever they precede or follow "regular" values is left to DBMS vendor).
The expression "x IS NOT NULL" is not equal to "NOT(x IS NULL)", as is the case in 2VL.
...
All commercially implemented SQL dialects follow this 3VL approach, and thus
not only do they exhibit these problems, but they also have specific implementation
problems, which vary across products.
The answers here all seem to come from a CS perspective so I want to add one from a developer perspective.
For a developer NULL is very useful. The answers here say NULL means unknown, and maybe in CS theory that's true - I don't remember, it's been a while. In actual development though, at least in my experience, that happens about 1% of the time. The other 99% of the time it is used for cases where the value is not UNKNOWN but is KNOWN TO BE ABSENT.
For example:
Client.LastPurchase, for a new client. It is not unknown, it is known that he hasn't made a purchase yet.
When using an ORM with a Table per Class Hierarchy mapping, some values are just not mapped for certain classes.
When mapping a tree structure a root will usually have Parent = NULL
And many more...
I'm sure most developers at some point wrote WHERE value = NULL,
didn't get any results, and that's how they learned about IS NULL syntax. Just look how many votes this question and the linked ones have.
SQL databases are a tool, and they should be designed in the way that is easiest for their users to understand.
Just because you don't know what two things are does not mean they're equal. If when you think of NULL you think of "NULL" (the string), then you probably want a different test of equality, like PostgreSQL's IS DISTINCT FROM and IS NOT DISTINCT FROM.
From the PostgreSQL docs on "Comparison Functions and Operators"
expression IS DISTINCT FROM expression
expression IS NOT DISTINCT FROM expression
For non-null inputs, IS DISTINCT FROM is the same as the <> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM is identical to = for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these constructs effectively act as though null were a normal data value, rather than "unknown".
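A minimal sketch of the difference (PostgreSQL syntax; the casts are only there to give the NULLs a type):

SELECT CAST(NULL AS int) = CAST(NULL AS int);                    -- NULL (unknown)
SELECT CAST(NULL AS int) IS NOT DISTINCT FROM CAST(NULL AS int); -- true
SELECT 1 IS DISTINCT FROM CAST(NULL AS int);                     -- true
SELECT 1 IS NOT DISTINCT FROM 1;                                 -- true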
Maybe it depends, but I thought NULL=NULL evaluates to NULL like most operations with NULL as an operand.
On TechNet there is a good explanation of how null values work.
Null means unknown.
Therefore the Boolean expression
value=null
does not evaluate to false; it evaluates to null. But if that is the final result of a where clause, then nothing is returned - which is the practical thing to do, since "returning null" for a row would be hard to make sense of.
It is interesting and very important to understand the following:
If in a query we have
where (value=@param Or @param is null) And id=@anotherParam
and
value=1
@param is null
id=123
@anotherParam=123
then
"value=@param" evaluates to null
"@param is null" evaluates to true
"id=@anotherParam" evaluates to true
So the expression to be evaluated becomes
(null Or true) And true
We might be tempted to think that here "null Or true" will evaluate to null, and thus the whole expression becomes null and the row will not be returned.
This is not so. Why?
Because "null Or true" evaluates to true, which is very logical, since if one operand is true with the Or-operator, then no matter the value of the other operand, the operation will return true. Thus it does not matter that the other operand is unknown (null).
So we finally have true=true and thus the row will be returned.
Note: with the same crystal clear logic that "null Or true" evaluates to true, "null And true" evaluates to null.
Update:
Ok, just to make it complete I want to add the rest here too which turns out quite fun in relation to the above.
"null Or false" evaluates to null, "null And false" evaluates to false. :)
The logic is of course still as self-evident as before.
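A small T-SQL sketch of the walkthrough above (the variable names and values are made up to mirror the example):

DECLARE @param int = NULL, @anotherParam int = 123;

-- (NULL Or true) And true => true, so the row is returned
SELECT 1 AS row_returned
WHERE (1 = @param OR @param IS NULL) AND 123 = @anotherParam;

-- NULL And true => NULL, so the row is filtered out
SELECT 1 AS row_returned
WHERE 1 = @param AND 123 = @anotherParam;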
MSDN has a nice descriptive article on nulls and the three state logic that they engender.
In short, the SQL-92 spec defines NULL as unknown, and NULL used with the following operators gives unexpected results for the uninitiated:
=     | NULL   true   false
------+----------------------
NULL  | NULL   NULL   NULL
true  | NULL   true   false
false | NULL   false  true

AND   | NULL   true   false
------+----------------------
NULL  | NULL   NULL   false
true  | NULL   true   false
false | false  false  false

OR    | NULL   true   false
------+----------------------
NULL  | NULL   true   NULL
true  | true   true   true
false | NULL   true   false
The concept of NULL is questionable, to say the least. Codd introduced the relational model and, with it, the concept of NULL (and went on to propose more than one kind of NULL!). However, relational theory has evolved since Codd's original writings: some of his proposals have since been dropped (e.g. the primary key) and others never caught on (e.g. theta operators). In modern relational theory (truly relational theory, I should stress) NULL simply does not exist. See The Third Manifesto. http://www.thethirdmanifesto.com/
The SQL language suffers the problem of backwards compatibility. NULL found its way into SQL and we are stuck with it. Arguably, the implementation of NULL in SQL is flawed (SQL Server's implementation makes things even more complicated due to its ANSI_NULLS option).
I recommend avoiding the use of NULLable columns in base tables.
Although perhaps I shouldn't be tempted, I just wanted to assert a correction of my own about how NULL works in SQL:
NULL = NULL evaluates to UNKNOWN.
UNKNOWN is a logical value.
NULL is a data value.
This is easy to prove e.g.
SELECT NULL = NULL
correctly generates an error in SQL Server. If the result was a data value then we would expect to see NULL, as some answers here (wrongly) suggest we would.
The logical value UNKNOWN is treated differently in SQL DML and SQL DDL respectively.
In SQL DML, UNKNOWN causes rows to be removed from the resultset.
For example:
CREATE TABLE MyTable
(
key_col INTEGER NOT NULL UNIQUE,
data_col INTEGER
CHECK (data_col = 55)
);
INSERT INTO MyTable (key_col, data_col)
VALUES (1, NULL);
The INSERT succeeds for this row, even though the CHECK condition resolves to NULL = 55, i.e. UNKNOWN. This is because of what is defined in the SQL-92 ("ANSI") Standard:
11.6 table constraint definition
3)
If the table constraint is a check
constraint definition, then let SC be
the search condition immediately
contained in the check constraint
definition and let T be the table name
included in the corresponding table
constraint descriptor; the table
constraint is not satisfied if and
only if
EXISTS ( SELECT * FROM T WHERE NOT
( SC ) )
is true.
Read that again carefully, following the logic.
In plain English, our new row above is given the 'benefit of the doubt' about being UNKNOWN and allowed to pass.
In SQL DML, the rule for the WHERE clause is much easier to follow:
The search condition is applied to
each row of T. The result of the where
clause is a table of those rows of T
for which the result of the search
condition is true.
In plain English, rows that evaluate to UNKNOWN are removed from the resultset.
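Continuing the MyTable example above, the same comparison that got the benefit of the doubt in the CHECK constraint filters the row out in a query (a sketch, not part of the quoted standard text):

SELECT key_col, data_col
FROM MyTable
WHERE data_col = 55;
-- returns no rows: for the row we inserted, data_col = 55 is NULL = 55, i.e. UNKNOWN,
-- and UNKNOWN rows are removed from the resultset.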
Because NULL means 'unknown value' and two unknown values cannot be equal.
So, if in our logic NULL N°1 is equal to NULL N°2, then we have to say so explicitly:
SELECT 1
WHERE ISNULL(nullParam1, -1) = ISNULL(nullParam2, -1)
where the known value -1 N°1 is equal to -1 N°2.
NULL isn't equal to anything, not even itself. My personal solution to understanding the behavior of NULL is to avoid using it as much as possible :).
The question:
Does one unknown equal another unknown?
(NULL = NULL)
That question is something no one can answer so it defaults to true or false depending on your ansi_nulls setting.
However the question:
Is this unknown variable unknown?
This question is quite different and can be answered with true.
nullVariable = null is comparing the values
nullVariable is null is comparing the state of the variable
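A small T-SQL sketch of the distinction (the variable name is just for illustration):

DECLARE @nullVariable int = NULL;

IF @nullVariable = NULL  PRINT 'value comparison matched';  -- never prints under ANSI_NULLS ON
IF @nullVariable IS NULL PRINT 'the variable is NULL';      -- prints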
The confusion arises from the level of indirection (abstraction) that comes about from using NULL.
Going back to the "what's under the Christmas tree" analogy, "Unknown" describes the state of knowledge about what is in Box A.
So if you don't know what's in Box A, you say it's "Unknown", but that doesn't mean that "Unknown" is inside the box. Something other than unknown is in the box, possibly some kind of object, or possibly nothing is in the box.
Similarly, if you don't know what's in Box B, you can label your state of knowledge about the contents as being "Unknown".
So here's the kicker: Your state of knowledge about Box A is equal to your state of knowledge about Box B. (Your state of knowledge in both cases is "Unknown" or "I don't know what's in the Box".) But the contents of the boxes may or may not be equal.
Going back to SQL, ideally you should only be able to compare values when you know what they are. Unfortunately, the label that describes a lack of knowledge is stored in the cell itself, so we're tempted to use it as a value. But we should not use it as a value, because that would lead to "the content of Box A equals the content of Box B when we don't know what's in Box A and/or we don't know what's in Box B".
(Logically, the implication "if I don't know what's in Box A and if I don't know what's in Box B, then what's in Box A = What's in Box B" is false.)
Yay, Dead Horse.
There are two sensible ways to handle NULL = NULL comparisons in a WHERE clause, and they boil down to "What do you mean by NULL?" One way assumes NULL means "unknown," and the other assumes NULL means "data does not exist." SQL has chosen a third way which is wrong all around.
The "NULL means unknown" solution: Throw an error.
Unknown = unknown should evaluate to 3VL null. But the output of a WHERE clause is 2VL: You either return the row or you don't. It's like being asked to divide by zero and return a number: There is no correct response. So you throw an error instead, and force the programmer to explicitly handle this situation.
The "NULL means no data" solution: Return the row.
No data = no data should evaluate to true. If I'm comparing two people, and they have the same first name, and the same last name, and neither has a middle name, then it is correct to say "These people have the same name."
The SQL solution: Don't return the row.
This is always wrong. If NULL means "unknown," then you don't know if the row should be returned or not, and you should not try to guess. If NULL means "no data," then you should return the row. Either way, silently removing the row is incorrect and will cause problems. It's the worst of both worlds.
Setting aside theory and speaking in practical terms, I'm with AlexDev: I have almost never encountered a case where "return the row" was not the desired result. However, "almost never" is not "never," and SQL databases often serve as the backbones of big important systems, so I can see a fair case for being rigorous and throwing an error.
What I cannot see is a case for silently coercing 3VL null into 2VL false. Like most silent type coercions, it's a rabid weasel waiting to be set loose in your system, and when the weasel finally jumps out and bites someone, you'll have the merry devil of a time tracking it back to its nest.
NULL is unknown in SQL, so we can't expect two unknowns to be the same.
However, you can get that behavior by setting ANSI_NULLS to OFF (it is ON by default).
You will then be able to use the = operator for NULLs:
SET ANSI_NULLS off
if null=null
print 1
else
print 2
set ansi_nulls on
if null=null
print 1
else
print 2
You work for the government, registering information about citizens. This includes the national ID for every person in the country. A child was left at the door of a church some 40 years ago; nobody knows who their parents are. This person's father ID is NULL. Two such people exist. Now count the people who share the same father ID with at least one other person (people who are siblings). Do you count those two as well?
The answer is no, you don’t, because we don’t know if they are siblings or not.
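A sketch of that sibling count (the citizens table and its columns are hypothetical):

SELECT COUNT(DISTINCT a.person_id) AS people_with_a_sibling
FROM citizens a
JOIN citizens b
  ON  a.father_id = b.father_id      -- NULL father_id never matches, so the two foundlings are not counted
  AND a.person_id <> b.person_id;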
Suppose you didn't have a NULL option and instead used some pre-determined value to represent “the unknown” - perhaps an empty string, the number 0, or a * character. Then your queries would find that * = *, 0 = 0, “” = “”, etc. This is not what you want (as per the example above), and since you might easily forget about such cases (the example above is a clear fringe case, outside ordinary everyday thinking), you need the language to remember for you that NULL = NULL is not true.
Necessity is the mother of invention.
Just an addition to other wonderful answers:
AND: the result of true AND unknown is unknown, false AND unknown is false, while unknown AND unknown is unknown.
OR: the result of true OR unknown is true, false OR unknown is unknown, while unknown OR unknown is unknown.
NOT: the result of NOT unknown is unknown.
If you are looking for an expression returning true for two NULLs you can use:
SELECT 1
WHERE EXISTS (
SELECT NULL
INTERSECT
SELECT NULL
)
It is helpful if you want to replicate data from one table to another.
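For example, a NULL-tolerant "copy the rows that are missing or different" check might look like this (src, dst and their columns are hypothetical; the correlated INTERSECT treats two NULLs in data_col as matching):

SELECT s.key_col, s.data_col
FROM src AS s
WHERE NOT EXISTS (
    SELECT s.key_col, s.data_col
    INTERSECT
    SELECT d.key_col, d.data_col
    FROM dst AS d
    WHERE d.key_col = s.key_col
);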
The equality test, for example in the WHEN clause of a CASE expression, can be changed from
XYZ = NULL
to
XYZ IS NULL
If I want to treat blanks and empty strings as equal to NULL, I often also use an equality test like:
(NULLIF(ltrim( XYZ ),'') IS NULL)
To quote the Christmas analogy again:
In SQL, NULL basically means "closed box" (unknown). So, the result of comparing two closed boxes will also be unknown (null).
I understand, for a developer, this is counter-intuitive, because in programming languages, often NULL rather means "empty box" (known). And comparing two empty boxes will naturally yield true / equal.
This is why JavaScript for example distinguishes between null and undefined.
NULL isn't equal to anything, including itself.
In SQL you can exploit that: a self-comparison such as WHERE col = col evaluates to UNKNOWN exactly when col is NULL, so those rows drop out of the result, which gives you a way to detect them.
(Be careful carrying this over to other languages: in JavaScript, for instance, null == null is true; it is NaN, not null, that fails a self-equality test.)
Check this article

Differences between IF and SWITCH/CASE in C

The question is really simple: in a laboratory class I attended this year, the professor presented the switch/case statement alongside the classic if/then/else statement without saying anything about which one is better in different programming situations.
Which one is better when checking a variable that can have 10-15 or more possible values?
Briefly (your question is vague): a switch typically compiles to a jump table in assembler and is therefore often faster than if/then/else. Note that a switch statement in C has a "fall-through" feature (google this) which can be circumvented with break statements.
You can only switch on things that evaluate to integral types. In particular this means that you cannot switch on strings: strings are not part of the natural C language in any case.
An if / then / else checks several conditions in succession. Comparison is not restricted to integral types as all you're testing is true (not zero) or false (zero).
That's probably enough to get you started.
I think
if/then/else is better only when you have just 2 conditions.
Otherwise it's better to use switch/case when there are more than 2 conditions.
When the value to be compared has a type that is amenable to being switch'd, and it makes your code more readable, then go ahead and use a switch. For example,
if (val == 0) {
// do something
} else if (val == 1) {
// do something else
} else if (val == 2) {
// yet another option
} ...
is cluttered and hard to maintain compared to a switch. Imagine that some day, you don't want to switch on val but on validate(val); then you'd need to change all the conditions.
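For contrast, a sketch of the same dispatch as a switch (the comments stand in for the original branches, and validate() is hypothetical):

void handle(int val) {
    switch (val) {                    /* or switch (validate(val)): only one place changes */
    case 0: /* do something */        break;
    case 1: /* do something else */   break;
    case 2: /* yet another option */  break;
    default:                          break;
    }
}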
Also, switch may sometimes be faster than if/else, because a compiler may turn it into either a jump table or a binary search. Then again, a compiler might do the same for a series of if/else statements, although that's a more difficult optimization to make, because the order of the clauses might matter and the compiler must be able to detect that it doesn't.
switch is better performance-wise too, because it can be optimized in various ways by the compiler, depending on whether the values are consecutive. If they are, it can use the value outright as an index into an array of pointers (a jump table). If not, it can sometimes use a binary search instead of a linear one, when that's faster.
switch looks better than lots of ifs. However, it only works on integral expressions (as a char is essentially a number, it can still be used with one; you cannot use it with strings).
If I may point you here, as it has a nice description of the switch statement. Note the opening sentence:
Switch case statements are a substitute for long if statements that
compare a variable to several "integral" values ("integral" values are
simply values that can be expressed as an integer, such as the value
of a char).

Resources