Trying to run this query in LINQPad 4:
SELECT item_group_id as AccountID, IIF(ISNULL(t_item_group.description),'[blank]',t_item_group.description) AS Name
FROM t_item_group
WHERE active = TRUE
I get, "the isnull function requires 2 argument(s)."
I've tried moving the parens around and changing the quoting around '[blank]', but none of it helps...
I have two similar queries (both with IIF(ISNULL(...))) that LINQPad won't run for this reason, yet they run fine in practice (in my Web API app). So LINQPad is perhaps more "picky" than it needs to be, but what is it expecting, SQL-syntax-wise?
ISNULL is already like a 'if' type statement.
You can just replace
IIF(ISNULL(t_item_group.description),'[blank]',t_item_group.description)
with
ISNULL(t_item_group.description, '[blank]')
ISNULL returns the first parameter (the description), unless that value is null, in which case it returns the second parameter.
As an aside, one of the reasons I don't care for ISNULL is that it is poorly named. You'd assume that given its name it will return a bit - true if the parameter is null, false if not null - which you could use in an 'if' statement like you attempted. But that's not how it works.
The alternative is to use COALESCE. It provides much the same functionality, but the naming makes sense.
co·a·lesce ˌkōəˈles verb
1. come together and form one mass or whole.
To COALESCE two parameters is to force them into one non-nullable result. And the function is actually more powerful, as you can provide multiple parameters - COALESCE(i.description, i.name, '[blank]') is perfectly valid.
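Applied to the original query, either form works. A sketch, keeping the rest of the query exactly as posted (t_item_group and its columns come from the question):
SELECT item_group_id AS AccountID,
       ISNULL(t_item_group.description, '[blank]') AS Name
FROM t_item_group
WHERE active = TRUE
-- or, equivalently, with COALESCE:
SELECT item_group_id AS AccountID,
       COALESCE(t_item_group.description, '[blank]') AS Name
FROM t_item_group
WHERE active = TRUE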
My HQL query (Hibernate with MS SQL Server) is as follows:
SELECT...FROM...WHERE...
AND...
AND REVERSE(SUBSTRING(REVERSE(**ALIAS.info**),0, CHARINDEX('#', REVERSE(**ALIAS.info**)))) in (:var1_0,:var1_1)
AND...
The query compiles correctly. However, when I call the query.list() method, it fails. The reason for the failure is the generated SQL:
select...from...where...
and...
and (reverse(substring(reverse(**namedinfos2_.info**), 0, CHARINDEX('#')) in (?))
and...
Note that the second namedinfos2_.info argument after the '#' is missing. Hence the CHARINDEX() function fails, as it expects 2 parameters.
It should have been the below:
and...
and (reverse(substring(reverse(**namedinfos2_.info**), 0, CHARINDEX('#', **namedinfos2_.info**)) in (?))
and...
Any idea why this happens, or what I should do to fix it?
The actual query is quite long.
A simplified example:
HQL:
select LastModifiedByUser.field
from namedinfo LastModifiedByUser
where REVERSE(SUBSTRING(REVERSE(LastModifiedByUser.info),0, CHARINDEX('#', REVERSE(LastModifiedByUser.info)))) in (:var1_0,:var1_1)
SQL generated by Hibernate:
select namedinfos2_.field
from namedinfo namedinfos2_
where (reverse(substring(reverse(namedinfos2_.info), 0, CHARINDEX('#')) in (?))
Note the missing LastModifiedByUser.info conversion in the CHARINDEX call (the 2nd parameter).
Could you remove the CHARINDEX function? I was not able to find it in the expression functions section of the documentation, so maybe the engine is not able to translate it correctly.
Also, you are allowed to use native SQL, so you can try building a simple CHARINDEX statement and running it that way.
Also, you can try using LOCATE as an alternative to what you are doing.
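For example (a sketch only; in HQL, locate(pattern, string) is a standard function, and on the SQL Server dialects it is typically rendered as CHARINDEX, so both arguments should survive translation):
where REVERSE(SUBSTRING(REVERSE(LastModifiedByUser.info), 0, LOCATE('#', REVERSE(LastModifiedByUser.info)))) in (:var1_0, :var1_1)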
I have made a workaround: a custom function, say functionXyz(), that takes the input string and internally just returns something like the below.
return reverse(substring(reverse(namedinfos2_.info), 0, CHARINDEX('#', reverse(namedinfos2_.info))))
That makes it work, just that I need to call this function every time. Not sure if it has an impact on performance.
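For reference, a sketch of what such a function could look like in T-SQL; the name functionXyz comes from the post above, while the parameter name and types are assumptions:
CREATE FUNCTION dbo.functionXyz (@info nvarchar(4000))
RETURNS nvarchar(4000)
AS
BEGIN
    -- reverse, cut at the first '#', reverse back: everything after the last '#' in @info
    RETURN REVERSE(SUBSTRING(REVERSE(@info), 0, CHARINDEX('#', REVERSE(@info))));
END;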
When doing a SELECT in MS SQL, NULL and '' can very often be treated as identical.
There is a good description of how to combine these:
how to use null or empty string
However, I wonder if it makes sense to put this into a user-defined function to simplify such queries. Does anybody use such a function? Are there strong pros and cons to this?
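As an illustration only, such a helper could be as small as a NULLIF wrapper (dbo.BlankToNull is a made-up name; whether the indirection is worth it is exactly the pro/con question above):
CREATE FUNCTION dbo.BlankToNull (@value nvarchar(4000))
RETURNS nvarchar(4000)
AS
BEGIN
    -- fold '' (and whitespace-only strings) into NULL, so callers only test IS NULL
    RETURN NULLIF(LTRIM(RTRIM(@value)), '');
END;
-- usage, e.g.: WHERE dbo.BlankToNull(some_column) IS NULL
-- one con: a scalar UDF in a predicate can prevent index use and hurt performance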
When I do
SELECT SUM(some_field) FROM some_table
the result is a single record/field with a number in it. Additionally, a message is sent to the client along the lines of "Warning: Null value is eliminated by an aggregate or other SET operation." in case some_field has a NULL value somewhere in the table. Only when all of them are NULL (or the table is empty) does it return NULL.
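A minimal repro of that behaviour (the table and column names are just the placeholders from above):
CREATE TABLE some_table (some_field int);
INSERT INTO some_table (some_field) VALUES (1), (NULL), (2);
SELECT SUM(some_field) FROM some_table;
-- returns 3, plus: Warning: Null value is eliminated by an aggregate or other SET operation.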
I'm currently in the process of writing my own SqlUserDefinedAggregate, and although things work as expected, it does NOT show me this message when one of the values passed turns out to be NULL. The outcome of the function is still correct, but there is no warning. First I assumed I might have to pipe this manually in the Terminate() method, but alas, SQLCLR then throws an InvalidOperationException saying "Data access is not allowed in this context."
Any hints?
If your aggregate is discarding NULLs, then the IsInvariantToNulls property should definitely be set to true, else you might sometimes get unexpected results, as stated on the MSDN page for SqlUserDefinedAggregateAttribute.IsInvariantToNulls:
Used by the query processor, this property is true if the aggregate is invariant to nulls. That is, the aggregate of S, {NULL} is the same as aggregate of S. For example, aggregate functions such as MIN and MAX satisfy this property, while COUNT(*) does not.
Incorrectly setting this property can result in incorrect query results. This property is not an optimizer hint; it affects the plan selected and the results returned by the query.
And a UDA is a function so there is no SqlContext.Pipe to use. And even if there was, the Terminate method isn't an appropriate place to handle this since it executes for every group. The warning you are seeing when using SUM, however, is an ANSI warning and is displayed once for the query, not per group.
So, if SQL Server isn't displaying the warning then there likely isn't anything you can do about it. I assume that SQL Server isn't using the IsInvariantToNulls property as a means of knowing if it should display the message or not because it is not guaranteed to be accurately set.
And personally, I find this to be a benefit since, in my opinion, the "Null value is eliminated by an aggregate" warning is entirely unhelpful; yet if you want to get rid of it you need to use ISNULL() to inject a value that won't influence the result (e.g. 0 in the case of SUM), or turn off ALL ANSI warnings, in which case you also disable some warnings that are sometimes helpful.
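For completeness, a sketch of the two suppression options mentioned above, reusing the placeholder names from the question:
-- inject a value that does not influence SUM, so no NULL is eliminated:
SELECT SUM(ISNULL(some_field, 0)) FROM some_table;
-- or silence ALL ANSI warnings for the session (which also hides the occasionally useful ones):
SET ANSI_WARNINGS OFF;
SELECT SUM(some_field) FROM some_table;
SET ANSI_WARNINGS ON;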
We wrote a function get_timestamp() defined as
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
SELECT (FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE SQL;
This was used on INSERT and UPDATE to enter or edit a value in the created and modified fields of the database record. However, we found that when adding or updating records consecutively it was returning the same value.
On inspecting the function in pgAdmin III, we noted that, on running the SQL to build the function, the keyword IMMUTABLE had been injected after the LANGUAGE SQL statement. The documentation states that the default is VOLATILE ("If none of these appear, VOLATILE is the default assumption"), so I am not sure why IMMUTABLE was injected; however, changing this to STABLE has solved the issue of repeated values.
NOTE: As stated in the accepted answer, IMMUTABLE is never added to a function by pgAdmin or Postgres and must have been added during development.
I am guessing what was happening was that this function was being evaluated and the result was being cached for optimization, as it was marked IMMUTABLE, indicating to the Postgres engine that the return value should not change given the same (empty) parameter list. However, when not used within a trigger but directly in the INSERT statement, the function would return a distinct value five times before returning the same value from then on. Is this due to some optimisation algorithm that says something like "If an IMMUTABLE function is used more than 5 times in a session, cache the result for future calls"?
Any clarification on how these keywords should be used in Postgres functions would be appreciated. Is STABLE the correct option for us given that we use this function in triggers, or is there something more to consider, for example the docs say:
(It is inappropriate for AFTER triggers that wish to query rows
modified by the current command.)
But I am not altogether clear on why.
The key word IMMUTABLE is never added automatically by pgAdmin or Postgres. Whoever created or replaced the function did that.
The correct volatility for the given function is VOLATILE (also the default), not STABLE - or it wouldn't make sense to use clock_timestamp() which is VOLATILE in contrast to now() or CURRENT_TIMESTAMP which are STABLE: those return the same timestamp within the same transaction. The manual:
clock_timestamp() returns the actual current time, and therefore its
value changes even within a single SQL command.
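A quick way to see the difference inside a single transaction (a sketch; pg_sleep is only there to make the gap visible):
BEGIN;
SELECT now() AS txn_time, clock_timestamp() AS wall_time;
SELECT pg_sleep(1);
SELECT now() AS txn_time, clock_timestamp() AS wall_time;  -- now() unchanged, clock_timestamp() about 1 s later
COMMIT;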
The manual warns that function volatility STABLE ...
is inappropriate for AFTER triggers that wish to query rows modified
by the current command.
... because repeated evaluation of the trigger function can return different results for the same row. So, not STABLE.
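Applied to the function above, the declaration could simply spell out the default volatility (a sketch; the body is unchanged from the question):
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
SELECT (FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE SQL VOLATILE;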
You ask:
Do you have an idea as to why the function returned correctly five
times before sticking on the fifth value when set as IMMUTABLE?
The Postgres Wiki:
With 9.2, the planner will use specific plans regarding to the
parameters sent (the query will be planned at execution), except if
the query is executed several times and the planner decides that the
generic plan is not too much more expensive than the specific plans.
Bold emphasis mine. This doesn't seem to make sense for an IMMUTABLE function without input parameters. But the false IMMUTABLE label is overridden by the VOLATILE function in the body (which voids function inlining): a different query plan can still make sense.
Related:
PostgreSQL Stored Procedure Performance
Aside
trunc() is slightly faster than floor() and does the same here, since positive numbers are guaranteed:
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int
I was wondering if there is a better way to cope with MS Access' inability to handle NULL for boolean values, other than changing the column data type to integer.
I think you must use a number, and so, it seems, does Allen Browne, Access MVP.
Not that I've found :( I haven't programmed Access in a while, but what I remember involves quite a lot of IsNull checks.
I think it depends on how you want your app/solution to interpret said NULLs in your data. Do you want to simply "ignore" them in a report... i.e. have them print out as blank spaces or newlines? In that case you can use the handy IsNull function along with the "immediate if" IIf() in either the SQL builder or a column in the regular Access query designer as follows:
IIF(IsNull(BooleanColumnName), NewLine/BlankSpace/Whatever, BooleanColumnName)
On the other hand, if you want to consider the NULLs as "False" values, you had better update the column and just change them with something like:
UPDATE table SET BooleanColumnName = FALSE WHERE BooleanColumnName IS NULL