Checking all rows of itab against a condition via REDUCE? - loops

In order to check whether all entries of an internal table lt_itab meet a condition COND, I would like to use the REDUCE statement. The loop of course needs to terminate as soon as a line violating COND occurs. The second code block further down seems to work, but it looks to me like a slight abuse of the iteration index. Are you aware of a better/more transparent solution within the REDUCE syntax? Is it possible to work with a structure (integer, boolean) as the iteration variable? The INDEX INTO option does not seem to work with REDUCE. Compatibility with kernel release 753 (or lower) would be nice.
Here is my Minimal Reproducible Example (MRE), which passes the syntax check only if the condition lvr_flag_allowed = abap_false OR is commented out (viz. -> "lvr_flag_allowed = abap_false OR):
DATA: lt_itab         TYPE TABLE OF i,
      rv_flag_allowed TYPE boole_d.

lt_itab = VALUE #( ( 2 ) ( 1 ) ( -1 ) ( 5 ) ).

IF lt_itab IS NOT INITIAL.
  rv_flag_allowed = REDUCE #( INIT lvr_flag_allowed = abap_true
                              FOR lvf_idx = 1 UNTIL lvr_flag_allowed = abap_false OR
                                                    lvf_idx > lines( lt_itab )
                              NEXT lvr_flag_allowed = xsdbool( lt_itab[ lvf_idx ] < 0 ) ).
ENDIF.
RETURN.
Currently it produces this syntax check message (its ID is MESSAGEG[M):
The variable "LVR_FLAG_ALLOWED" cannot be used here.
Do you know the technical reason why this does not work? The SAP documentation on REDUCE - Reduction Operator only states:
Usually the expression expr (after THEN) and the termination condition log_exp (after UNTIL or WHILE) depend on the iteration variable var.
Hence a workaround MRE came to my mind while writing this down:
DATA: lt_itab       TYPE TABLE OF i,
*     rv_flag_allowed TYPE boole_d,
      rv_last_index TYPE i.

lt_itab = VALUE #( ( 2 ) ( 1 ) ( -22 ) ( 5 ) ( 7 ) ( 4 ) ).

IF lt_itab IS NOT INITIAL.
  rv_last_index = REDUCE #( INIT lvr_last_index = 0
                            FOR lvf_idx = 1 THEN COND #( WHEN lt_itab[ lvf_idx ] < 0
                                                         THEN 0
                                                         ELSE lvf_idx + 1 )
                            UNTIL lvf_idx = 0 OR
                                  lvf_idx > lines( lt_itab )
                            NEXT lvr_last_index = lvr_last_index + 1 ).
ENDIF.
RETURN.
It makes "double" use of the iteration index and returns rv_last_index = 3. I'm now returning an integer rather than a boolean in order to check that the loop aborts at the correct position. Does this seem correct to you?
Many thanks in advance for your feedback and improvement suggestions (beyond the classical while/until loops ;-)) !

I would actually like to express this using line_exists; unfortunately, table expressions only support equality comparisons (and sadly not <):
" invalid syntax v
DATA(some_negative) = xsdbool( line_exists( values[ table_line < 0 ] ) ).
A slightly more verbose but working variant would be a LOOP AT with an immediate EXIT (yes, this is not "modern syntax", but still very readable IMO):
DATA(some_negative) = abap_false.
LOOP AT values TRANSPORTING NO FIELDS WHERE table_line < 0.
  some_negative = abap_true.
  EXIT.
ENDLOOP.
I don't think that REDUCE is the right tool for the job, as it is meant to condense the table into one value. (In other languages there is also no short-circuiting, e.g. .reduce in JS, though they have other methods for this purpose, like .some and .every.) If the number of truthy rows is low, not having short-circuiting for them might be acceptable, and this REDUCE statement would at least not visit the falsy rows, thanks to the additional WHERE ( ... ) clause:
DATA(some_negative) = REDUCE abap_bool(
  INIT result = abap_false
  FOR entry IN values
  WHERE ( table_line < 0 )
  NEXT result = abap_true ).
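Since the original question asks for an "all rows meet the condition" flag, note that it is simply the negation of some_negative above. A minimal sketch (all_non_negative is just a hypothetical name):

" all rows satisfy the condition exactly when no row violates it
DATA(all_non_negative) = xsdbool( some_negative = abap_false ).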

Related

loop with condition ignored by the system in sap abap

I am trying to apply a loop with a condition to sum up the respective rows (the MENGE field). The WHERE condition should be correct, but when the program runs, it ignores the condition and sums up all rows. Any suggestion on how to fix this problem?
SELECT * FROM LIPS INTO CORRESPONDING FIELDS OF TABLE LT_LIPS
  WHERE VGBEL = LT_BCODE_I-VGBEL  " get DN number
    AND VGPOS = LT_BCODE_I-VGPOS. " get vgpos = 01/02/03

LOOP AT LT_BCODE_I INTO LT_BCODE_I WHERE VGBEL = LT_LIPS-VGBEL AND VGPOS = LT_LIPS-VGPOS.
  SUM.
  LT_BCODE_I-MENGE = LT_BCODE_I-MENGE.
ENDLOOP.
Although you are asking about LOOP, I think the issue is more about how you use SUM.
The statement SUM can only be specified within a LOOP, and it is only respected within an AT-ENDAT control structure.
Here is an excerpt from the ABAP documentation, for "Calculation of a sum with SUM at AT LAST. All lines of the internal table are evaluated":
DATA:
  BEGIN OF wa,
    col TYPE i,
  END OF wa,
  itab LIKE TABLE OF wa WITH EMPTY KEY.

itab = VALUE #( FOR i = 1 UNTIL i > 10 ( col = i ) ).

LOOP AT itab INTO wa.
  AT LAST.
    SUM.
    cl_demo_output=>display( wa ).
  ENDAT.
ENDLOOP.
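Incidentally, if the goal is simply to total MENGE for the rows matching the WHERE condition, SUM is not needed at all; plain accumulation inside a filtered loop does it. A minimal sketch, assuming the tables from the question and a dedicated work area (LS_BCODE_I and LS_LIPS are hypothetical names):

DATA: ls_bcode_i     LIKE LINE OF lt_bcode_i,
      lv_menge_total TYPE menge_d.

" accumulate only the rows that match the WHERE condition
LOOP AT lt_bcode_i INTO ls_bcode_i
     WHERE vgbel = ls_lips-vgbel AND vgpos = ls_lips-vgpos.
  lv_menge_total = lv_menge_total + ls_bcode_i-menge.
ENDLOOP.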

LOOP itab vs VALUE FOR filtering, which is more efficient?

I am trying to write a for-loop statement for the following scenario:
I have used a SELECT to get multiple TVARVC entries into T_TVARVC.
LOOP AT t_tvarvc INTO DATA(s_tvarvc).
  CASE s_tvarvc-name.
    WHEN c_augru.
      s_tvarvc_range = CORRESPONDING #( s_tvarvc ).
      APPEND s_tvarvc_range TO t_augru.
    WHEN c_vkorg.
      s_tvarvc_range = CORRESPONDING #( s_tvarvc ).
      APPEND s_tvarvc_range TO t_vkorg.
  ENDCASE.
ENDLOOP.
This is what I have come up with:
DATA(t_augru) = VALUE tt_tvarvc( FOR s_tvarvc IN t_tvarvc
                                  WHERE ( name = c_augru )
                                  ( CORRESPONDING #( s_tvarvc ) ) ).
DATA(t_vkorg) = VALUE tt_tvarvc( FOR s_tvarvc IN t_tvarvc
                                  WHERE ( name = c_vkorg )
                                  ( CORRESPONDING #( s_tvarvc ) ) ).
My observation is that, by using the LOOP AT and CASE combination, the number of iterations will be the same as the number of entries in T_TVARVC.
But when using a FOR loop for each range table, T_TVARVC has to be traversed once per target table, thus causing more iterations than in the first scenario.
Can this be written in a more efficient way?
I agree with your observation about doubling the iterations. To make it faster, I think the only solution is to use a single loop, considering that the internal table is not already sorted, which limits the possible solutions a lot. I came up with this solution:
TYPES: tt_tvarvc TYPE STANDARD TABLE OF tvarvc WITH EMPTY KEY,
       BEGIN OF ty_ranges,
         t_augru TYPE tt_tvarvc,
         t_vkorg TYPE tt_tvarvc,
       END OF ty_ranges.

CONSTANTS: c_augru TYPE tvarvc-name VALUE 'AUGRU',
           c_vkorg TYPE tvarvc-name VALUE 'VKORG'.

DATA(t_tvarvc) = VALUE tt_tvarvc( FOR i = 1 WHILE i <= 100 ( name = c_augru )
                                                           ( name = c_vkorg ) ).

DATA(ranges) = REDUCE ty_ranges(
    INIT ranges2 = VALUE ty_ranges( )
    FOR <tvarv> IN t_tvarvc
    NEXT ranges2-t_augru = COND #( WHEN <tvarv>-name = c_augru
                                   THEN VALUE #( BASE ranges2-t_augru ( <tvarv> ) )
                                   ELSE ranges2-t_augru )
         ranges2-t_vkorg = COND #( WHEN <tvarv>-name = c_vkorg
                                   THEN VALUE #( BASE ranges2-t_vkorg ( <tvarv> ) )
                                   ELSE ranges2-t_vkorg ) ).
(you will use ranges-t_augru and ranges-t_vkorg instead of t_augru and t_vkorg in your code)
You can immediately see that the code is much less legible than either of your two snippets.
Moreover, there is no gain in performance compared to your classic loop.
Back to your snippet with two FOR iterations: we can see that the goal is very clear compared to the classic loop (my opinion). It's of course slower, but you probably don't need to gain a few microseconds, so I think it's the best solution (still my opinion).
Just to add another answer relating to this part:
My observation is that, by using the LOOP AT and CASE combination, the number of iterations will be the same as the number of entries in T_TVARVC.
But when using a FOR loop for each range table, T_TVARVC has to be traversed once per target table, thus causing more iterations than in the first scenario.
This is only true when you have no sorted index for the field in question. Assume that instead of WHERE ( name = c_vkorg ) you use USING KEY sk_name WHERE ( name = c_vkorg ). This finds the index at which the values you are searching for start in O(log n) time. It then only processes the lines matching the key, never looping over anything else.
This can potentially save a huge amount of time.
Index  Val1  Val2  Val3 (sorted index)
1      A     9999  AAA
2      B     1213  AAB
3      C     554   AAC
...    ...   ...   ...
500    X     1     AUGUR   <-- Starting here with loop
The downside is that a sorted secondary key also takes time to build (and some memory). This is less of an issue if you have other code that also requires fast access.
Secondary keys are lazy: they are created the first time they are used.
In your scenario, you have to decide what's worth it. Are there frequent read accesses requiring the key? How many rows are there? Is building the key too expensive because it isn't needed elsewhere? How often will the secondary key be invalidated? And so on.
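For reference, the secondary key used in the example above has to be declared on the table itself. A minimal sketch, assuming sk_name is the hypothetical key name from above and name the component to sort by:

DATA t_tvarvc TYPE STANDARD TABLE OF tvarvc
     WITH EMPTY KEY
     WITH NON-UNIQUE SORTED KEY sk_name COMPONENTS name.

" filtered reads can then go through the sorted key, e.g.
" LOOP AT t_tvarvc ... USING KEY sk_name WHERE name = c_vkorg.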
(Note: If you uncomment the xsdbool, you exclude the time it takes to build the secondary key from the measurement).
REPORT ztest.

START-OF-SELECTION.
  PERFORM standard.
  PERFORM sorted_secondary.
  PERFORM sorted_secondary_val.

FORM standard.
  DATA t_tadir TYPE STANDARD TABLE OF tadir WITH EMPTY KEY.
  DATA t_clas  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.
  DATA t_tran  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.

  SELECT * FROM tadir UP TO 1000000 ROWS INTO TABLE t_tadir ORDER BY PRIMARY KEY.
* DATA(dummy) = xsdbool( line_exists( t_tadir[ KEY primary_key object = 'CLAS' ] ) ).

  GET RUN TIME FIELD DATA(t1).
  LOOP AT t_tadir ASSIGNING FIELD-SYMBOL(<s_tadir>).
    CASE <s_tadir>-object.
      WHEN 'CLAS'.
        APPEND <s_tadir>-obj_name TO t_clas.
      WHEN 'TRAN'.
        APPEND <s_tadir>-obj_name TO t_tran.
    ENDCASE.
  ENDLOOP.
  GET RUN TIME FIELD DATA(t2).

  WRITE: |{ ( t2 - t1 ) / '1000.0' / '1000.0' }, { lines( t_tadir ) }, { lines( t_clas ) }, { lines( t_tran ) }|.
  NEW-LINE.
ENDFORM.

FORM sorted_secondary.
  DATA t_tadir TYPE STANDARD TABLE OF tadir WITH NON-UNIQUE SORTED KEY sk_object COMPONENTS object.
  DATA t_clas  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.
  DATA t_tran  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.

  SELECT * FROM tadir UP TO 1000000 ROWS INTO TABLE t_tadir ORDER BY PRIMARY KEY.
* DATA(dummy) = xsdbool( line_exists( t_tadir[ KEY sk_object object = 'CLAS' ] ) ).

  GET RUN TIME FIELD DATA(t1).
  LOOP AT t_tadir ASSIGNING FIELD-SYMBOL(<s_tadir>) USING KEY sk_object WHERE object = 'CLAS'.
    APPEND <s_tadir>-obj_name TO t_clas.
  ENDLOOP.
  LOOP AT t_tadir ASSIGNING <s_tadir> USING KEY sk_object WHERE object = 'TRAN'.
    APPEND <s_tadir>-obj_name TO t_tran.
  ENDLOOP.
  GET RUN TIME FIELD DATA(t2).

  WRITE: |{ ( t2 - t1 ) / '1000.0' / '1000.0' }, { lines( t_tadir ) }, { lines( t_clas ) }, { lines( t_tran ) }|.
  NEW-LINE.
ENDFORM.

FORM sorted_secondary_val.
  DATA t_tadir TYPE STANDARD TABLE OF tadir WITH NON-UNIQUE SORTED KEY sk_object COMPONENTS object.
  DATA t_clas  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.
  DATA t_tran  TYPE STANDARD TABLE OF tadir-obj_name WITH EMPTY KEY.

  SELECT * FROM tadir UP TO 1000000 ROWS INTO TABLE t_tadir ORDER BY PRIMARY KEY.
* DATA(dummy) = xsdbool( line_exists( t_tadir[ KEY sk_object object = 'CLAS' ] ) ).

  GET RUN TIME FIELD DATA(t1).
  t_clas = VALUE #( FOR <fs> IN t_tadir USING KEY sk_object WHERE ( object = 'CLAS' ) ( <fs>-obj_name ) ).
  t_tran = VALUE #( FOR <fs> IN t_tadir USING KEY sk_object WHERE ( object = 'TRAN' ) ( <fs>-obj_name ) ).
  GET RUN TIME FIELD DATA(t2).

  WRITE: |{ ( t2 - t1 ) / '1000.0' / '1000.0' }, { lines( t_tadir ) }, { lines( t_clas ) }, { lines( t_tran ) }|.
  NEW-LINE.
ENDFORM.
Also: LOOP AT ... ASSIGNING/REFERENCE INTO is likely to be faster than LOOP AT ... INTO. Since you make no write access that shouldn't be reflected in the original data source, there's no reason to copy every line in every loop step.
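A minimal sketch of the difference (t_tadir as in the forms above):

" INTO copies every line into a separate work area
LOOP AT t_tadir INTO DATA(s_tadir_copy).
  " ... read-only work on the copy ...
ENDLOOP.

" ASSIGNING merely points a field symbol at each line; nothing is copied
LOOP AT t_tadir ASSIGNING FIELD-SYMBOL(<s_tadir_line>).
  " ... read-only work directly on the table line ...
ENDLOOP.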
You can also try the specialized FILTER operator to achieve the same:
DATA lt_tadir TYPE SORTED TABLE OF tadir WITH NON-UNIQUE KEY object.
SELECT * FROM tadir UP TO 1000000 ROWS INTO TABLE lt_tadir ORDER BY PRIMARY KEY.
DATA(clas) = FILTER #( lt_tadir USING KEY primary_key WHERE object = 'CLAS' ).
DATA(trans) = FILTER #( lt_tadir USING KEY primary_key WHERE object = 'TRAN' ).
I took the snippet prepared by @peterulb for a 1M-row table and here are my measurements:
0.032947, 1000000, 128776, 0 <- FILTER
0.139579, 1000000, 128776, 0 <- standard
0.239092, 1000000, 128776, 0 <- sorted_secondary
0.242161, 1000000, 128776, 0 <- sorted_secondary_val
Although FILTER uses inline declarations and transfers all source fields into the result table (which is by design), it executes an order of magnitude faster than the other variants.
I do not insist that this will be the rule of thumb on all databases, but at least ABAP 7.54 on HANA DB utilizes some built-in optimizations for FILTER.

SPSS for loop based on a variable

I'm just learning SPSS, and I want to do a simple subgroup analysis based on a variable "status" I created, which can take values from 0 to 8. I would like to print the outputs in one go.
this is the pseudocode for what I want to do:
for (i = 1; i <= 8; i++)
{
    filter by (status = i)
    display analysis
    remove filter
}
That way I can do it all in one go, and I can also add to the analysis code and easily do something for the 8 subgroups.
I don't know if it's relevant but here is the code I want to iterate over currently:
USE ALL.
COMPUTE filter_$=(Workforce EQ 1 AND SurveySample = 1 AND State = 1).
VARIABLE LABELS filter_$ 'Workforce EQ 1 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.

FREQUENCIES VARIABLES = Q86 Q33 Q34 Q88 FSEScore
  /BARCHART FREQ
  /ORDER=ANALYSIS.

CROSSTABS
  /TABLES=FSEScore BY Q86
  /FORMAT=AVALUE TABLES
  /CELLS=ROW
  /COUNT ROUND CELL.

FILTER OFF.
USE ALL.
Thanks guys.
The split file command may solve the problem: it causes your analysis reports to show results for each category of your split variable separately:
*run your transformations.
sort cases by status.
split file by status.
FREQUENCIES .....
CROSSTABS ....
split file off.
If this is not enough, you can use a macro to run through "status" categories:
first define the macro:
define MyMacro ()
!do !ST = 1 !to 8
* filter commands using status = !ST.
* transformations using status = !ST.
FREQUENCIES .....
CROSSTABS ....
!doend
!enddefine.
now call your macro:
MyMacro .
This is probably a very ghetto way of doing it; the suggestion above is probably more sensible. You can initialise Python in SPSS. The following code works:
begin program.
import spss
for i in xrange(1, 9):  # status categories 1 to 8
    spss.Submit("""
USE ALL.
COMPUTE filter_$=(Workforce EQ 1 AND SurveySample = 1 AND Status = %d).
VARIABLE LABELS filter_$ 'Workforce EQ 1 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.
* Analysis as required.
FREQUENCIES VARIABLES = Q86
  /BARCHART FREQ
  /ORDER=ANALYSIS.
""" % i)
end program.
Many thanks to eli-k; I probably should have just used SPLIT FILE.

rewrite T-SQL bitwise logic

How do I rewrite this T-SQL code to produce the same results?
SELECT ACC.Title,
       ACC.AdvertiserHierarchyId,
       1 AS Counter
FROM admanAdvertiserHierarchy_tbl ACC
JOIN dbo.admanAdvertiserObjectType_tbl AOT
  ON AOT.AdvertiserObjectTypeId = ACC.AdvertiserObjectTypeId
WHERE EXISTS
      (SELECT 1
       FROM dbo.admanAdvertiserHierarchy_tbl CAMP
       JOIN dbo.admanAdvertiserAdGroup_tbl AG
         ON CAMP.AdvertiserHierarchyId = AG.AdvertiserHierarchyId
       JOIN dbo.admanAdvertiserCreative_tbl AC
         ON AC.AdvertiserAdGroupId = AG.AdvertiserAdGroupId
        AND CAMP.ParentAdvertiserHierarchyId = ACC.AdvertiserHierarchyId
       WHERE CAMP.ERROR = 0
         AND AC.Dirty & 7 > 0
         AND AC.ERROR = 0
         AND AG.ERROR = 0)
It's preventing the optimizer from using indexes efficiently.
I am trying to achieve the following results:
Title                         AdvertiserHierarchyId  Counter
trcom65#travelrepublic.co.uk  15908                  1
paul570#travelrepublic.co.uk  37887                  1
es88#travelrepublic.co.uk     37383                  1
it004#travelrepublic.co.uk    27006                  1
011                           10526                  1
013                           10528                  1
033                           12013                  1
062                           17380                  1
076                           20505                  1
This is a count of the Dirty tinyint column:
Dirty  Total
0      36340607
1      117569
2      873553
3      59
It links to a static reason table:
DirtyReasonId  Title
0              Nothing
1              Overnight Engine
2              End To End
3              Overnight And End To End
4              Pause Resume
5              Overnight Engine and Paused
6              Overnight Engine E2E and Paused
7              All Three
If you are asking specifically about the use of the BITWISE AND operator, I believe you are correct: it's unlikely that SQL Server sees that as sargable, at least not with an index with Dirty as a leading column.
You are showing only the lowest two bits in use (maximum value of Dirty is 3), yet you are testing the lowest three bits.
So AC.Dirty > 0 would return an equivalent result, given that 3 is the largest value of Dirty. But there is a possibility that other (higher-order) bits are set; for example, Dirty could be set to 8. So if the intent is to check ONLY the lowest three bits, then we need to ensure that we test only the three lowest-order bits. This expression does that, and one of its predicates is sargable:
( AC.Dirty > 0 AND AC.Dirty % 8 > 0 )
This basically tests first whether ANY bits in AC.Dirty are set, and then checks whether any of the last three bits are set. (We're using the modulo operator to return the remainder of AC.Dirty divided by 8, which will of course return an integer value between 0 and 7. If we get a zero, then we know that none of the lower three bits are set; otherwise we know at least one of them is set.)
Just to be clear: the predicate AC.Dirty > 0 is redundant. It's included here in case you want to make sure the database can at least consider using an existing index with Dirty as a leading column.
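If you want to convince yourself that the modulo form matches the original bitwise test, a throwaway check like this (arbitrary sample values) shows they agree, including for higher-order bits such as 8:

-- Dirty & 7 and Dirty % 8 return the same value for any non-negative integer
SELECT Dirty,
       Dirty & 7 AS BitwiseLow3,
       Dirty % 8 AS ModuloLow3
FROM (VALUES (0), (1), (3), (7), (8), (9), (15)) AS t(Dirty);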
I will mention that another option to consider would be adding a persisted COMPUTED COLUMN on the expression, and create an index on it. But that seems a bit overkill for what you need here.
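For illustration only, such a computed column and index might look like this (hypothetical names, untested):

-- persist the low-three-bits test so it can be indexed directly
ALTER TABLE dbo.admanAdvertiserCreative_tbl
    ADD DirtyLow3 AS (CASE WHEN Dirty % 8 > 0 THEN 1 ELSE 0 END) PERSISTED;

CREATE INDEX IX_admanAdvertiserCreative_DirtyLow3
    ON dbo.admanAdvertiserCreative_tbl (AdvertiserAdGroupId, Error, DirtyLow3);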
If you are asking specifically about getting an index used on table admanAdvertiserCreative_tbl (AC), then likely your best candidate would be covering index on (AdvertiserAdGroupId, Error, Dirty).
The SQL rewrite below should return equivalent results, perhaps with better performance (depending on your data distribution, indexes, and so on).
Basically, replace the EXISTS (correlated subquery) with a JOIN to a subquery. The subquery returns distinct values of CAMP.ParentAdvertiserHierarchyId, which is the column you referenced to correlate the subquery.
This may or may not make use of any indexes, depending on what indexes are available. (You likely have clustered unique indexes on the primary keys and non-clustered indexes on the foreign keys, which should help join performance.)
Untested:
SELECT ACC.Title,
       ACC.AdvertiserHierarchyId,
       1 AS Counter
FROM admanAdvertiserHierarchy_tbl ACC
JOIN dbo.admanAdvertiserObjectType_tbl AOT
  ON AOT.AdvertiserObjectTypeId = ACC.AdvertiserObjectTypeId
JOIN (SELECT CAMP.ParentAdvertiserHierarchyId
      FROM dbo.admanAdvertiserHierarchy_tbl CAMP
      JOIN dbo.admanAdvertiserAdGroup_tbl AG
        ON CAMP.AdvertiserHierarchyId = AG.AdvertiserHierarchyId
      JOIN dbo.admanAdvertiserCreative_tbl AC
        ON AC.AdvertiserAdGroupId = AG.AdvertiserAdGroupId
      WHERE CAMP.ERROR = 0
        AND (AC.Dirty > 0 AND AC.Dirty % 8 > 0)
        AND AC.ERROR = 0
        AND AG.ERROR = 0
      GROUP BY CAMP.ParentAdvertiserHierarchyId
     ) c
  ON c.ParentAdvertiserHierarchyId = ACC.AdvertiserHierarchyId

AND/OR based on variable value in stored procedures

I would like to use AND/OR between the conditions in a stored procedure, where the decision depends on a parameter value: 0 means AND, 1 means OR.
Can anyone help me with this, please? I guess this is an easy thing to do, but I can't seem to figure it out. Thanks.
The easiest way (at first glance) would be to concatenate the query string using dynamic SQL, but dynamic SQL has its issues.
See The Curse and Blessings of Dynamic SQL for an in-depth explanation.
So I would try to avoid dynamic SQL, which is no big deal if your queries are not too complex.
The simplest alternative is just to fire two different queries depending on the parameter value:
CREATE PROCEDURE spTest
    @AndOr bit
AS
BEGIN
    if @AndOr = 0 begin
        select * from YourTable where foo = 1 and bar = 2
    end
    else begin
        select * from YourTable where foo = 1 or bar = 2
    end
END
This is of course an example with a very simple query.
If you have lots of queries, or if your queries are very complex, this might not be the best solution because it forces you to duplicate all queries...but as always, it depends :-)
You can implement your logic in a CASE expression. Something like this:
CREATE PROCEDURE dbo.MySP @OrAnd BIT
AS
BEGIN
    SELECT *
    FROM MyTable
    WHERE CASE WHEN Condition1 AND Condition2 AND @OrAnd = 0 THEN 1
               WHEN (Condition1 OR Condition2) AND @OrAnd = 1 THEN 1
               ELSE 0
          END = 1
END
If you convert the simple conditions' boolean results into numeric ones (0 or 1), you will be able to use your parameter in the following way:
(
    (CASE WHEN condition1 THEN 1 ELSE 0 END ^ @AndOr)
    &
    (CASE WHEN condition2 THEN 1 ELSE 0 END ^ @AndOr)
) ^ @AndOr = 1
Here @AndOr is your parameter, ^ is the Transact-SQL bitwise exclusive OR operator, & stands for the bitwise AND in Transact-SQL, and the CASE expressions are used to convert the boolean results into 0 or 1.
If @AndOr = 0 (which means we want AND between the conditions), the above expression effectively boils down to this:
case1 & case2 = 1
because X XOR 0 yields X, so neither the individual values of case1 and case2 nor the entire result of the & operator are affected by the ^ operators. So, when @AndOr is 0, the result of the original expression is equivalent to the result of condition1 AND condition2.
Now, if @AndOr = 1 (i.e. OR), then every ^ operator in the expression returns the inverted value of its left operand; in other words, it negates the left operand, since 1 XOR 1 = 0 and 0 XOR 1 = 1. Therefore, the original expression is essentially equivalent to the following:
¬ (¬ case1 & ¬ case2) = 1
where ¬ means negation. Or, converting it back to the booleans, it would be this:
NOT (NOT condition1 AND NOT condition2)
According to one of De Morgan's laws,
(NOT A) AND (NOT B) = NOT (A OR B)
Applying it to the above condition, we get:
NOT (NOT condition1 AND NOT condition2) = NOT (NOT (condition1 OR condition2)) =
= condition1 OR condition2
So, when @AndOr is 1, the expression given at the beginning of my answer is equivalent to condition1 OR condition2. Thus, it works as expected based on the value of @AndOr.
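To see the trick end to end, here is a throwaway usage sketch, plugging in the simple foo/bar conditions from the earlier answer (hypothetical table and columns):

-- @AndOr = 0 demands both conditions; @AndOr = 1 demands at least one
DECLARE @AndOr bit = 1;

SELECT *
FROM YourTable
WHERE (
        (CASE WHEN foo = 1 THEN 1 ELSE 0 END ^ @AndOr)
        &
        (CASE WHEN bar = 2 THEN 1 ELSE 0 END ^ @AndOr)
      ) ^ @AndOr = 1;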
Having the input parameter, you can use an IF clause to run different SELECTs.
If the input parameter = 0, use the AND conditions; otherwise, use the OR conditions.
I can't see any particularly elegant way to do it, so here's the straightforward approach:
create function myfun (@parm1 int, @parm2 int, @andor int) returns int
begin
    if (@andor = 0 AND @parm1 = 99 AND @parm2 = 99) return 1
    else if (@andor = 1 AND (@parm1 = 99 OR @parm2 = 99)) return 1
    return 0
end
go
select dbo.myfun(99,98,0) -- AND condition should return 0
select dbo.myfun(99,98,1) -- OR condition should return 1
select dbo.myfun(98,98,0) -- AND condition should return 0
select dbo.myfun(98,98,1) -- OR condition should return 0
