DBD::SQLite, how to pass array in query via placeholder?

Let's have a table:
sqlite> create table foo (foo int, bar int);
sqlite> insert into foo (foo, bar) values (1,1);
sqlite> insert into foo (foo, bar) values (1,2);
sqlite> insert into foo (foo, bar) values (1,3);
Then SELECT some data:
sqlite> select * from foo where foo = 1 and bar in (1,2,3);
1|1
1|2
1|3
Works all right. Now I'm trying to use DBD::SQLite 1.29:
my $sth = $dbh->prepare('select * from foo where foo = $1 and bar in ($2)');
$sth->execute(1,[1,2,3]);
And this gives me no results. A DBI trace shows that the 2nd placeholder is bound to the array all right, but no luck. If I join the array values into a string and pass that, no result either. If I flatten the array, I get the predictable error of being called with N placeholders instead of 2.
I'm kinda at a loss. What else is there to try?
Upd: All right, here's one bona fide example taken from the real world application.
First, the setup: I have several tables filled with statistical data; the number of columns varies from 10 to 700+. The queries I'm talking about select a subset of that data for reporting purposes. Different reports consider different aspects and therefore run different queries, one or more per request. There are more than 200 reports, i.e. 200-300 queries. This approach was developed for Postgres, and now I need to scale it down and make it work with SQLite. Considering that all this works well with Postgres, I can't justify going over all the queries and rewriting them. Bad for maintenance. I can and do use in-place query adjustments, like replacing = ANY () with IN (); these are minor aspects.
So, here's my example: two queries run in succession for one report:
SELECT SPLIT, syn(SPLIT),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 40),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 30),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 50),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 220),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 20),
(SELECT COUNT(*) FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND
LOC_ID = ANY ($3) AND LOGID IS NOT NULL AND WORKMODE = 80)
FROM csplit WHERE ACD = $1 AND SPLIT = $2
SELECT syn(LOGID), syn(LOC_ID), LOGID, EXTENSION, syn(ROLE), PERCENT,
syn(AUXREASON), syn(AWORKMODE), syn(DIRECTION), WORKSKILL, syn(WORKSKLEVEL),
AGTIME FROM cagent WHERE ACD = $1 AND SPLIT = $2 AND LOC_ID = ANY ($3) AND
LOGID IS NOT NULL
This is not the most complex example, as there can be any number of input parameters used and reused in different places in a query; replacing them with generic ? placeholders is not a trivial task. The code that runs queries against Postgres looks like this (after input cleansing et al.):
sub run_select {
    my ($class, $dbh, $sql, @bind_values) = @_;
    my $sth;
    eval {
        $sth = $dbh->prepare_cached($sql);
        $sth->execute(@bind_values);
    };
    $@ and die "Error executing query: $@";
    my %types;
    {
        my $dbt = $dbh->type_info_all;
        @types{ map { $_->[1] } @$dbt[1..$#$dbt] } =
            map { $_->[0] } @$dbt[1..$#$dbt];
    };
    my @result;
    while (my $row = $sth->fetchrow_arrayref) {
        my $i = 0;
        push @result, [ map { [ $types{${$sth->{TYPE}}[$i++]}, $_ ] } @$row ];
    };
    return \@result;
};
I can rewrite the queries and inject values directly; SQL injection is not much of a threat because all input is untainted through regex patterns long before it can hit the SQL engine. Still, I don't want to rewrite queries dynamically for two reasons: a) it can potentially lead to problems with value quoting, and b) it kinda kills the whole point of prepare_cached, since the SQL engine can't cache and reuse a prepared statement if it changes every time.
Now, as I said, the code above works well with Postgres. Since the SQLite engine itself obviously can work with data sets, I thought this was a deficiency in the DBD::SQLite implementation. So the real question is: is there any way to pass a data set through a placeholder with DBD::SQLite? Not necessarily an array, though that would be most logical.

Try this:
my $sth = $dbh->prepare("select * from foo where foo = ? and bar in (?,?,?)";
$sth->execute(1,1,2,3);
You can use the x repetition operator to generate the required number of ?s:
my $sql = sprintf "select ... and bar in (%s)", join ",", ('?') x @values;
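Putting those pieces together, here is a minimal sketch (using the same table and variable names as the example above) of building the IN list at run time and then flattening the values into execute:
my @values = (1, 2, 3);
my $placeholders = join ",", ('?') x @values;   # "?,?,?"
my $sql = "select * from foo where foo = ? and bar in ($placeholders)";
my $sth = $dbh->prepare($sql);
$sth->execute(1, @values);   # one bind value per generated placeholder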

Use SQL::Abstract, like this:
use strict;
use warnings;
use SQL::Abstract;

my $sqla = SQL::Abstract->new;
my %where = (
    foo => 1,
    bar => { -in => [1,2,3] },
);
my ($sql, @params) = $sqla->select('foo', '*', \%where);

my $sth = $dbh->prepare($sql);
$sth->execute(@params);
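For what it's worth, with the %where above SQL::Abstract should generate SQL along the lines of select * from foo where ( bar in ( ?, ?, ? ) and foo = ? ), with the matching bind values returned in @params, so the number of placeholders always tracks the number of values in the -in list.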

Related

Stored procedure - get anticipated columns before fully executing statement?

I'm working through a stored procedure and wondering if there's a way to retrieve the anticipated result column list from a SQL statement before fully executing it.
Scenarios:
dynamic SQL
a UDF that might vary the columns outside of our control
EX:
//inbound parameter
SET QUERY_DEFINITION_ID = 12345;
//Initial statement pulls query text from bank of queries
var sqlText = getQueryFromQueryBank(QUERY_DEFINITION_ID);
//now we run our query
var cmd = {sqlText: sqlText };
stmt = snowflake.createStatement(cmd);
What I'd like to be able to do is say "right - before you run this, give me the anticipated column list" so I can compare it to what's expected.
EX:
Expected: [col1, col2, col3, col4]
Got: [col1]
Result: Oops. Don't run.
Rationale here is that I want to short-circuit the execution if something is missing - before it potentially runs for a while. I can validate all of this after the fact, but it would be really helpful to stop early.
Any ideas very much appreciated!
This sample SP code shows how to get a list of columns that a query will project into the result before you run the query. It should only be used for large, long running queries because it will take a few seconds to get the column list.
There are a couple of caveats. 1) It will only return the names of the columns. It won't tell you how they were built, that is, whether they're aliased, direct from a table, calculated, etc. 2) The example query I used is straight from the Snowflake documentation here https://docs.snowflake.com/en/user-guide/sample-data-tpcds.html#functional-query-definition. For convenience, I minimized the query to a single line. The output includes object qualifiers in addition to the column names, e.g. V1.I_CATEGORY, V1.D_YEAR, V1.D_MOY, etc. If you don't want them (say, to make it easier to compare names), you can strip off the qualifiers using the JavaScript split function on the dot and take index 1 of the resulting array.
create or replace procedure EXPLAIN_BEFORE_RUNNING()
returns string
language javascript
execute as caller
as
$$
// Set the context for the session to the TPC-H sample data:
executeNonQuery("use schema snowflake_sample_data.tpcds_sf10tcl;");
// Here's a complex query from the Snowflake docs (minimized to one line for convenience):
var sql = `with v1 as( select i_category, i_brand, cc_name, d_year, d_moy, sum(cs_sales_price) sum_sales, avg(sum(cs_sales_price)) over(partition by i_category, i_brand, cc_name, d_year) avg_monthly_sales, rank() over (partition by i_category, i_brand, cc_name order by d_year, d_moy) rn from item, catalog_sales, date_dim, call_center where cs_item_sk = i_item_sk and cs_sold_date_sk = d_date_sk and cc_call_center_sk= cs_call_center_sk and ( d_year = 1999 or ( d_year = 1999-1 and d_moy =12) or ( d_year = 1999+1 and d_moy =1)) group by i_category, i_brand, cc_name , d_year, d_moy), v2 as( select v1.i_category ,v1.d_year, v1.d_moy ,v1.avg_monthly_sales ,v1.sum_sales, v1_lag.sum_sales psum, v1_lead.sum_sales nsum from v1, v1 v1_lag, v1 v1_lead where v1.i_category = v1_lag.i_category and v1.i_category = v1_lead.i_category and v1.i_brand = v1_lag.i_brand and v1.i_brand = v1_lead.i_brand and v1.cc_name = v1_lag.cc_name and v1.cc_name = v1_lead.cc_name and v1.rn = v1_lag.rn + 1 and v1.rn = v1_lead.rn - 1) select * from v2 where d_year = 1999 and avg_monthly_sales > 0 and case when avg_monthly_sales > 0 then abs(sum_sales - avg_monthly_sales) / avg_monthly_sales else null end > 0.1 order by sum_sales - avg_monthly_sales, 3 limit 100;`;
// Before actually running the query, generate an explain plan.
executeNonQuery("explain " + sql);
// Now read the column list from the explain plan from the result set.
var columnList = executeSingleValueQuery("COLUMN_LIST", `select "expressions" as COLUMN_LIST from table(result_scan(last_query_id())) where "operation" = 'Result';`);
// For now, just exit with the column list as the output...
return columnList;
// Your code here...
// Helper functions:
function executeNonQuery(queryString) {
var out = '';
cmd = {sqlText: queryString};
stmt = snowflake.createStatement(cmd);
var rs;
rs = stmt.execute();
}
function executeSingleValueQuery(columnName, queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(columnName);
}
catch(err) {
if (err.message.substring(0, 18) == "ResultSet is empty"){
throw "ERROR: No rows returned in query.";
} else {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
}
return out;
}
$$;
call Explain_Before_Running();

How to update JSON array with PostgreSQL

I have the following problem: I want to update a key of a JSON array using only PostgreSQL. I have the following JSON:
[
{
"ch":"1",
"id":"12",
"area":"0",
"level":"Superficial",
"width":"",
"length":"",
"othern":"5",
"percent":"100",
"location":" 2nd finger base"
},
{
"ch":"1",
"id":"13",
"area":"0",
"level":"Skin",
"width":"",
"length":"",
"othern":"1",
"percent":"100",
"location":" Abdomen "
}
]
I need to update the "othern" to another number if the "othern" = X
(X is any number that I pass to the query. Example, update othern if othern = 5).
This JSON can be much bigger, so I need something that can iterate over the JSON array, find all the "othern" values that match X, and replace them with the new one. Thank you!
I have tried with these PostgreSQL JSON functions, but I can't get the correct result:
SELECT * FROM jsonb_to_recordset('[{"ch":"1", "id":"12", "area":"0", "level":"Superficial", "width":"", "length":"", "othern":"5", "percent":"100", "location":" 2nd finger base"}, {"ch":"1", "id":"13", "area":"0", "level":"Skin", "width":"", "length":"", "othern":"1", "percent":"100", "location":" Abdomen "}]'::jsonb)
AS t (othern text);
I found this function in SQL that is similar to what I need but honestly SQL is not my strength:
CREATE OR REPLACE FUNCTION "json_array_update_index"(
"json" json,
"index_to_update" INTEGER,
"value_to_update" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT concat('[', string_agg("element"::text, ','), ']')::json
FROM (SELECT CASE row_number() OVER () - 1
WHEN "index_to_update" THEN to_json("value_to_update")
ELSE "element"
END "element"
FROM json_array_elements("json") AS "element") AS "elements"
$function$;
UPDATE plan_base
SET atts = json_array_update_index([{"ch":"1", "id":"12", "area":"0", "level":"Superficial", "width":"", "length":"", "othern":"5", "percent":"100", "location":" 2nd finger base"}, {"ch":"1", "id":"13", "area":"0", "level":"Skin", "width":"", "length":"", "othern":"1", "percent":"100", "location":" Abdomen "}], '{"othern"}', '{"othern":"71"}'::json)
WHERE id = 2;
The function you provided takes a JSON input, returns the changed JSON, and updates a table at the same time.
For a simple update, you don't need a function:
demo:db<>fiddle
UPDATE mytable
SET myjson = s.json_array
FROM (
SELECT
jsonb_agg(
CASE WHEN elems ->> 'othern' = '5' THEN
jsonb_set(elems, '{othern}', '"7"')
ELSE elems END
) as json_array
FROM
mytable,
jsonb_array_elements(myjson) elems
) s
jsonb_array_elements() expands the array into one row per element
jsonb_set() changes the value of each othern field. The relevant JSON objects can be found with a CASE clause
jsonb_agg() reaggregates the elements into an array again.
This array can be used to update your column.
If you really need a function which gets the parameters and returns the changed JSON, then this could be a solution. Of course, this doesn't execute an update. I am not quite sure if you want to achieve this:
demo:db<>fiddle
CREATE OR REPLACE FUNCTION json_array_update_index(_myjson jsonb, _val_to_change int, _dest_val int)
RETURNS jsonb
AS $$
DECLARE
_json_output jsonb;
BEGIN
SELECT
jsonb_agg(
CASE WHEN elems ->> 'othern' = _val_to_change::text THEN
jsonb_set(elems, '{othern}', _dest_val::text::jsonb)
ELSE elems END
) as json_array
FROM
jsonb_array_elements(_myjson) elems
INTO _json_output;
RETURN _json_output;
END;
$$ LANGUAGE 'plpgsql';
If you want to combine both as you did in your question, of course, you can do this:
demo:db<>fiddle
CREATE OR REPLACE FUNCTION json_array_update_index(_myjson jsonb, _val_to_change int, _dest_val int)
RETURNS jsonb
AS $$
DECLARE
_json_output jsonb;
BEGIN
UPDATE mytable
SET myjson = s.json_array
FROM (
SELECT
jsonb_agg(
CASE WHEN elems ->> 'othern' = '5' THEN
jsonb_set(elems, '{othern}', '"7"')
ELSE elems END
) as json_array
FROM
mytable,
jsonb_array_elements(myjson) elems
) s
RETURNING myjson INTO _json_output;
RETURN _json_output;
END;
$$ LANGUAGE 'plpgsql';

Coldfusion - How to parse and segment out data from an email file

I am trying to parse email files that will be coming in periodically for the data contained within. We plan to set up cfmail in CF Admin, scheduled to run every minute, to get the email from the mailbox.
The data within the email consists of name, code name, address, description, etc. and will have consistent labels so we are thinking of performing a loop or find function for each field of data. Would that be a good start?
Here is an example of email data:
INCIDENT # 12345
LONG TERM SYS# C12345
REPORTED: 08:39:34 05/20/19 Nature: FD NEED Address: 12345 N TEST LN
City: Testville
Responding Units: T12
Cross Streets: Intersection of: N Test LN & W TEST LN
Lat= 39.587453 Lon= -86.485021
Comments: This is a test post. Please disregard
Here's a picture of what the data actually looks like:
So we would like to extract the following:
INCIDENT
LONG TERM SYS#
REPORTED
Nature
Address
City
Responding Units
Cross Streets
Comments
Any feedback or suggestions would be greatly appreciated!
Someone posted this but it was apparently deleted. Whoever it was I want to thank you VERY MUCH as it worked perfectly!!!!
Here is the function:
CREATE FUNCTION [tvf-Str-Extract] (@String varchar(max), @Delimiter1 varchar(100), @Delimiter2 varchar(100))
Returns Table
As
Return (
with cte1(N) as (Select 1 From (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N(N)),
cte2(N) as (Select Top (IsNull(DataLength(@String),0)) Row_Number() over (Order By (Select NULL))
            From (Select N=1 From cte1 N1,cte1 N2,cte1 N3,cte1 N4,cte1 N5,cte1 N6) A),
cte3(N) as (Select 1 Union All Select t.N+DataLength(@Delimiter1) From cte2 t
            Where Substring(@String,t.N,DataLength(@Delimiter1)) = @Delimiter1),
cte4(N,L) as (Select S.N,IsNull(NullIf(CharIndex(@Delimiter1,@String,s.N),0)-S.N,8000) From cte3 S)
Select RetSeq = Row_Number() over (Order By N)
      ,RetPos = N
      ,RetVal = left(RetVal,charindex(@Delimiter2,RetVal)-1)
From ( Select *,RetVal = Substring(@String, N, L) From cte4 ) A
Where charindex(@Delimiter2,RetVal)>1
)
And here is the CF code that worked:
<cfquery name="body" datasource="#Application.dsn#">
Declare @S varchar(max) ='
INCIDENT 12345
LONG TERM SYS C12345
REPORTED: 08:39:34 05/20/19 Nature: FD NEED Address: 12345 N TEST LN
City: Testville
Responding Units: T12
Cross Streets: Intersection of: N Test LN & W TEST LN
Lat= 39.587453 Lon= -86.485021
Comments: This is a test post. Please disregard
'
Select Incident = ltrim(rtrim(B.RetVal))
,LongTerm = ltrim(rtrim(C.RetVal))
,Reported = ltrim(rtrim(D.RetVal))
,Nature = ltrim(rtrim(E.RetVal))
,Address = ltrim(rtrim(F.RetVal))
,City = ltrim(rtrim(G.RetVal))
,RespUnit = ltrim(rtrim(H.RetVal))
,CrossStr = ltrim(rtrim(I.RetVal))
,Comments = ltrim(rtrim(J.RetVal))
From (values (replace(replace(@S,char(10),''),char(13),' ')) ) A(S)
Outer Apply [dbo].[tvf-Str-Extract](S,'INCIDENT'         ,'LONG TERM'  ) B
Outer Apply [dbo].[tvf-Str-Extract](S,'LONG TERM SYS'    ,'REPORTED'   ) C
Outer Apply [dbo].[tvf-Str-Extract](S,'REPORTED:'        ,'Nature'     ) D
Outer Apply [dbo].[tvf-Str-Extract](S,'Nature:'          ,'Address'    ) E
Outer Apply [dbo].[tvf-Str-Extract](S,'Address:'         ,'City'       ) F
Outer Apply [dbo].[tvf-Str-Extract](S,'City:'            ,'Responding ') G
Outer Apply [dbo].[tvf-Str-Extract](S,'Responding Units:','Cross'      ) H
Outer Apply [dbo].[tvf-Str-Extract](S,'Cross Streets:'   ,'Lat'        ) I
Outer Apply [dbo].[tvf-Str-Extract](S+'|||','Comments:'  ,'|||'        ) J
</cfquery>
<cfoutput>
B. #body.Incident#<br>
C. #body.LongTerm#<br>
D. #body.Reported#<br>
SQL tends to have limited string functions, so it isn't the best tool for parsing. If the email content is always in that exact format, you could use either plain string functions or regular expressions to parse it. However, the latter is more flexible.
I suspect the content actually does contain new lines, which would make for simpler parsing. However, if you prefer searching for content in between two labels, regular expressions would do the trick.
Build an array of the label names (only). Loop through the array, grabbing a pair of labels: "current" and "next". Use the two values in a regular expression to extract the text in between them:
label &"\s*[##:=](.*?)"& nextLabel
/* Explanation: */
label - First label name (example: "Incident")
\s* - Zero or more spaces
[##:=] - Any of these characters: pound sign, colon or equal sign
(.*?) - Group of zero or more characters (non-greedy)
nextLabel - Next label (example: "Long Term Sys")
Use reFindNoCase() to get details about the position and length of matched text. Then use those values in conjunction with mid() to extract the text.
Note, newer versions like ColdFusion 2016+ automagically extract the text under a key named MATCH
The newer CF2016+ syntax is slicker, but something along these lines works under CF10:
emailBody = "INCIDENT # 12345 ... etc.... ";
labelArray = ["Incident", "Long Term Sys", "Reported", ..., "Comments" ];
for (pos = 1; pos <= arrayLen(labelArray); pos++) {
// get current and next label
hasNext = pos < arrayLen(labelArray);
currLabel = labelArray[ pos ];
nextLabel = (hasNext ? labelArray[ pos+1 ] : "$");
// extract label and value
matches = reFindNoCase( currLabel &"\s*[##:=](.*?)"& nextLabel, emailBody, 1, true);
if (arrayLen(matches.len) >= 2) {
results[ currLabel ] = mid( emailBody, matches.pos[2], matches.len[2]);
}
}
writeDump( results );
Results:

How to store the current value of the array in a variable inside a Perl for loop

My requirement is, I want to map some checklist values to some groups. The following is my code:
@selectbox1 => contains the selected select groups
@selectbox2 => contains the selected checklist
Code:
foreach $select1 (@selectbox1) {
    my $sql_select1 = "select id from group_management where group_name = '$select1'";
    my $box1 = $dbslave->prepare($sql_select1);
    $box1->execute();
    while ($select_box1 = $box1->fetchrow_array())
    {
        push(@box1, $select_box1);
    }
    my $box_1 = @box1; # currently I tried like this to store the current value. NEED CORRECTION HERE
    foreach $select2 (@selectbox2) {
        my $sql_select2 = "select id from checklist where checklist_name = '$select2'";
        my $box2 = $dbslave->prepare($sql_select2);
        $box2->execute();
        while ($select_box2 = $box2->fetchrow_array())
        {
            push(@box2, $select_box2);
        }
        my $box_2 = @box2; # currently I tried like this to store the current value. NEED CORRECTION HERE
        my $sql_insert = "insert into checklist_group_mapping values ('', $box_2, $box_1)";
        my $ins = $dbslave->prepare($sql_insert);
        $ins->execute();
    }
}
How can I assign the current value of the array to a variable so that I can insert it into the mapping table?
You need to read up on 'context', and in particular 'scalar context' and 'array context'.
When you write:
my $box_1 = @box1;
you are providing scalar context, and in scalar context, @box1 returns the number of elements in the array. If you wrote:
my ($box_1) = @box1;
you would be providing array context, and in array context, the first element of @box1 would be assigned to the first element of the array context, $box_1, and the remaining elements of @box1 would be dropped. (This may well be what you're after; it is likely that you are trying to select the single ID value for each of the various names in @selectbox1.)
Judging from how you're trying to use the $box_1 and $box_2 variables in your code, you are looking to obtain a single string containing all the values from @box1 and another single string containing all the values from @box2, and they probably need to be presented to the DBI driver enclosed in single quotes.
You can get space-separated values into a string using:
my $box_1 = "@box1";
If you need comma-separated values, you can use:
my $box_1;
{ local $" = ","; $box_1 = "#box_1"; }
The $" (aka $LIST_SEPARATOR under use English '-no_match_vars';) must be localized to prevent damage, but that means you have to separate the definition of $box_1 from the assignment (because if you don't, $box_1 is destroyed when you leave the {...} block).
Now, to protect that so that the SQL can work, you need to use the quote method:
$box_1 = $dbslave->quote($box_1);
or:
my $box_1 = $dbslave->quote("@box1");
Assembling these changes, we get:
#!/usr/bin/env perl
use strict;
use warnings;
### Improved, but not operational
# use DBI;
my @selectbox1 = ( "group1", "group2", "group3" );
my @selectbox2 = ( "check1", "check2", "check3" );
my $dbslave;
# $dbslave = DBI->connect(...) or die "A horrible death";

foreach my $select1 (@selectbox1)
{
    my $sql_select1 = "select id from group_management where group_name = '$select1'";
    my $box1 = $dbslave->prepare($sql_select1);
    $box1->execute();
    my @box1;
    while (my $select_box1 = $box1->fetchrow_array())
    {
        push @box1, $select_box1;
    }
    my $box_1 = $dbslave->quote("@box1");
    foreach my $select2 (@selectbox2)
    {
        my $sql_select2 = "select id from checklist where checklist_name = '$select2'";
        my $box2 = $dbslave->prepare($sql_select2);
        $box2->execute();
        my @box2;
        while (my $select_box2 = $box2->fetchrow_array())
        {
            push @box2, $select_box2;
        }
        my $box_2 = $dbslave->quote("@box2");
        my $sql_insert = "insert into checklist_group_mapping values ('', $box_2, $box_1)";
        my $ins = $dbslave->prepare($sql_insert);
        $ins->execute();
    }
}
Note that the two SELECT statements assume that the select box strings contain no funny characters (specifically, no single quotes). If you're in charge of the content of @selectbox1 and @selectbox2, that's OK. If they contain user input, you have to sanitize that input, or use $dbslave->quote() again, or use place-holders. I'm going to ignore the issue.
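(That said, for illustration only, here is a minimal sketch of what the placeholder form of the first SELECT could look like; untested, with the handle renamed to avoid clashing with the names above:)
my $sth1 = $dbslave->prepare("select id from group_management where group_name = ?");
$sth1->execute($select1);    # $select1 is passed as a bind value, so no quoting issues
my @box1;
while (my @row = $sth1->fetchrow_array())
{
    push @box1, $row[0];
}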
You are also using scalar context with $box1->fetchrow_array(), which is not going to yield the answer you want (although fetchrow_array() is context sensitive, the manual warns you to be careful). I would use something like:
my @box1;
while (my @row = $box1->fetchrow_array())
{
    push @box1, $row[0];
}
my $box_1 = $dbslave->quote("@box1");
You also need to use functions. There's a glaring repeat in your code that can be encapsulated into a single function used twice:
#!/usr/bin/perl
use strict;
use warnings;
# use DBI;
my @selectbox1 = ( "group1", "group2", "group3" );
my @selectbox2 = ( "check1", "check2", "check3" );
my $dbslave;
# $dbslave = DBI->connect(...) or die "A horrible death";

sub fetch_all
{
    my ($dbh, $sql) = @_;
    my $sth = $dbh->prepare($sql);
    $sth->execute();
    my @results;
    while (my @row = $sth->fetchrow_array())
    {
        push @results, $row[0];
    }
    my $result = $dbslave->quote("@results");
    return $result;
}

foreach my $select1 (@selectbox1)
{
    my $sql_select1 = "select id from group_management where group_name = '$select1'";
    my $box_1 = fetch_all($dbslave, $sql_select1);
    foreach my $select2 (@selectbox2)
    {
        my $sql_select2 = "select id from checklist where checklist_name = '$select2'";
        my $box_2 = fetch_all($dbslave, $sql_select2);
        my $sql_insert = "insert into checklist_group_mapping values ('', $box_2, $box_1)";
        my $ins = $dbslave->prepare($sql_insert);
        $ins->execute();
    }
}
The INSERT statement should be converted to use placeholders so it can be prepared once and used many times:
my $sql_insert = "insert into checklist_group_mapping values ('', ?, ?)";
my $ins = $dbslave->prepare($sql_insert);
foreach my $select1 (#selectbox1)
{
my $sql_select1 = "select id from group_management where group_name = '$select1'";
my $box_1 = fetch_all($dbslave, $sql_select1);
foreach my $select2(#selectbox2)
{
my $sql_select2 = "select id from checklist where checklist_name = '$select2'";
my $box_2 = fetch_all($dbslave, $sql_select2);
$ins->execute($box_1, $box_2);
}
}
Indeed, the two SELECT statements should also be parameterized and prepared once and reused. I've not shown that change because (a) I'm lazy and (b) there's a bigger change that is still more effective.
When we look at what you're really doing, it should all be a single SQL statement:
#!/usr/bin/perl
use strict;
use warnings;
# use DBI;
my @selectbox1 = ( "group1", "group2", "group3" );
my @selectbox2 = ( "check1", "check2", "check3" );
my $dbslave;
# $dbslave = DBI->connect(...) or die "A horrible death";

sub placeholder_list
{
    my ($n) = @_;
    die "$n should be larger than 0" if $n <= 0;
    my $list = "(?" . ",?" x ($n - 1) . ")";
    return $list;
}

my $sql_insert = qq%
    INSERT INTO checklist_group_mapping(col1, col2, col3)
        SELECT '', gm.id, cl.id
          FROM group_management AS gm
         CROSS JOIN checklist AS cl
         WHERE gm.group_name     IN X1
           AND cl.checklist_name IN X2
%;
my $X1 = placeholder_list(scalar(@selectbox1));
my $X2 = placeholder_list(scalar(@selectbox2));
$sql_insert =~ s/X1/$X1/;
$sql_insert =~ s/X2/$X2/;
my $ins = $dbslave->prepare($sql_insert);
$ins->execute(@selectbox1, @selectbox2);
The big advantage of this is that there are far fewer round trips for information flowing between the application and the database, which (almost) invariably improves performance, often dramatically.
The only residual issue is whether your DBMS supports explicit CROSS JOIN like that. If not, you'll need to replace the words CROSS JOIN with a single comma.
There are still things that should be fixed, such as checking that the prepared statements were successfully prepared, and so on. But this may have given you some insight into how to think about using the DBI with Perl.
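As a pointer in that direction, here is a minimal sketch of one common way to get such checking for free (the connection parameters are placeholders): let DBI raise exceptions itself, so any failed prepare or execute dies with a useful message instead of being silently ignored.
# Sketch only: with RaiseError set, every prepare/execute that fails throws,
# so the explicit "did this succeed?" checks don't have to be written by hand.
my $dbslave = DBI->connect($dsn, $user, $pass,
                           { RaiseError => 1, AutoCommit => 1 })
    or die "Cannot connect: $DBI::errstr";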
The trick is to use the $_ variable inside your foreach. Like this:
my $current_value;
foreach $select2 (@selectbox2) {
$current_value = $_;
my $sql_select2 = "select id from checklist where checklist_name = '$select2'";
......
my $box_2 = $current_value;

What is LINQ equivalent of SQL’s "IN" keyword

How can I write the SQL query below in LINQ?
select * from Product where ProductTypePartyID IN
(
select Id from ProductTypeParty where PartyId = 34
)
There is no direct equivalent in LINQ. Instead you can use Contains() or any other trick to implement it. Here's an example that uses Contains:
String[] s = new String[5];
s[0] = "34";
s[1] = "12";
s[2] = "55";
s[3] = "4";
s[4] = "61";
var result = from d in context.TableName
             where s.Contains(d.fieldname)
             select d;
check this link for details: in clause Linq
int[] productList = new int[] { 1, 2, 3, 4 };
var myProducts = from p in db.Products
where productList.Contains(p.ProductID)
select p;
Syntactic variations aside, you can write it in practically the same way.
from p in ctx.Product
where (from ptp in ctx.ProductTypeParty
where ptp.PartyId == 34
select ptp.Id).Contains(p.ProductTypePartyID)
select p
I prefer using the existential quantifier, though:
from p in ctx.Product
where (from ptp in ctx.ProductTypeParty
       where ptp.PartyId == 34
          && ptp.Id == p.ProductTypePartyID
       select ptp).Any()
select p
I expect that this form will resolve to an EXISTS (SELECT * ...) in the generated SQL.
You'll want to profile both, in case there's a big difference in performance.
Something similar to this
var partyProducts = from p in dbo.Product
                    join pt in dbo.ProductTypeParty on p.ProductTypePartyID equals pt.Id
                    where pt.PartyId == 34
                    select p;
You use the Contains in a Where clause.
Something along these lines (untested):
var results = Product.Where(product => ProductTypeParty
    .Where(ptp => ptp.PartyId == 34)
    .Select(ptp => ptp.Id)
    .Contains(product.ProductTypePartyID)
);
