Bourne shell - make a loop for each element in an array?

This is my array:
ListTabs=""
ListTabs=$ListTabs"T_Tab1\n"
ListTabs=$ListTabs"T_Tab2\n"
ListTabs=$ListTabs"T_Tab3"
echo $ListTabs
arrArr=0
OLD_IFS=$IFS;
IFS=\n
for listArr in ${ListTabs[@]};
do
#echo $listArr
MYDIR[${ARR}]=$listArr
(( arrIdx = $ARR+ 1 ))
done
IFS=$OLD_IFS;
Then I sorted the IDs coming from a SELECT like this (FILESELECT_DAT is the output file of the query):
sort -u ${FILESELECT_DAT} > ${SORT_OUT1}
OK, now I have to make a loop that, for each element of the array, runs a SELECT where ID equals the values in ${SORT_OUT1}. So there are two loops: a while over the IDs and a for loop for the SELECT. How can I loop over the IDs inside ${SORT_OUT1}? I think this is the beginning:
id=""
while read $id
do
for ListTabs in ${listArr}
do
-
-
SELECT * FROM $ListTabs (but the result is always the first table in each loop)
WHERE ID = ${id} (but it shows me all IDs)
-
-
done < ${SORT_OUT1}
Any ideas? Thanks

listArr=( T_Tab{1,2,3} )
sort -u "$FILESELECT_DAT" > "$SORT_OUT1"
while read id; do
    for ListTabs in "${listArr[@]}"; do
        ...
    done
done < "$SORT_OUT1"
Take care that nothing in the body of the for-loop reads from standard input, or it will consume part of the input intended for the read command. To be safe, use a separate file descriptor:
while read -u 3 id; do
    ...
done 3< "$SORT_OUT1"
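Putting it all together, here is a minimal sketch of the whole pipeline. The sqlplus -s call and the $DB_CONNECT connect string are assumptions; substitute whatever client you actually use to run the SELECT:
listArr=( T_Tab{1,2,3} )
sort -u "$FILESELECT_DAT" > "$SORT_OUT1"
while read -u 3 id; do
    for tab in "${listArr[@]}"; do
        # Hypothetical client invocation; adjust for your environment
        echo "SELECT * FROM $tab WHERE ID = '$id';" | sqlplus -s "$DB_CONNECT"
    done
done 3< "$SORT_OUT1"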

Related

How to update internal table without using MODIFY?

I have created internal tables and want to update the age of each employee in one internal table by calculating it from another table. I have done the arithmetic to get the age, but how can I update the table some other way, instead of using MODIFY?
WRITE : / 'FirstName','LastName', ' Age'.
LOOP AT gt_items1 INTO gwa_items1.
  READ TABLE gt_header INTO gwa_header WITH KEY empid = gwa_items1-empid.
  gwa_items1-age = gv_date+0(4) - gwa_header-bdate+0(4).
  MODIFY gt_items1 FROM gwa_items1 TRANSPORTING age WHERE empid = gwa_items1-empid.
  WRITE : / gwa_items1-fname , gwa_items1-lname , gwa_items1-age .
ENDLOOP.
Use field symbols (instead of work areas) when LOOPing over internal tables:
WRITE : / 'FirstName','LastName', ' Age'.
LOOP AT gt_items1 ASSIGNING FIELD-SYMBOL(<ls_item1>).
  READ TABLE gt_header ASSIGNING FIELD-SYMBOL(<ls_header>)
    WITH KEY empid = <ls_item1>-empid.
  IF sy-subrc EQ 0.
    <ls_item1>-age = gv_date+0(4) - <ls_header>-bdate+0(4).
    WRITE : / <ls_item1>-fname , <ls_item1>-lname , <ls_item1>-age .
  ENDIF.
ENDLOOP.
Field symbols have two advantages:
They modify the internal table directly, so no separate MODIFY is necessary.
They are somewhat faster than work areas.
Besides József Szikszai's answer you could also use references:
write : / 'FirstName','LastName', ' Age'.
sort gt_header by empid. " <------------- Sort for binary search
loop at gt_items1 reference into data(r_item1).
  read table gt_header reference into data(r_header)
    with key empid = r_item1->empid binary search. " <------------- Faster read
  check sy-subrc eq 0.
  r_item1->age = gv_date+0(4) - r_header->bdate+0(4).
  write : / r_item1->fname , r_item1->lname , r_item1->age .
endloop.
I also added some enhancements to your code.
For more info check this link.

Perl performance is slow, file I/O issue or due to while loop

I have the following code in my while loop and it is significantly slow, any suggestions on how to improve this?
open IN, "<$FileDir/$file" || Err( "Failed to open $file at location: $FileDir" );
my $linenum = 0;
while ( $line = <IN> ) {
    if ( $linenum == 0 ) {
        Log(" This is header line : $line");
        $linenum++;
    } else {
        $linenum++;
        my $csv = Text::CSV_XS->new();
        my $status = $csv->parse($line);
        my @val = $csv->fields();
        $index = 0;
        Log("number of parameters for this file is: $sth->{NUM_OF_PARAMS}");
        for ( $index = 0; $index <= $#val; $index++ ) {
            if ( $index < $sth->{NUM_OF_PARAMS} ) {
                $sth->bind_param( $index + 1, $val[$index] );
            }
        }
        if ( $sth->execute() ) {
            $ifa_dbh->commit();
        } else {
            Log("line $linenum insert failed");
            $ifa_dbh->rollback();
            exit(1);
        }
    }
}
By far the most expensive operation there is accessing the database server; it's a network trip, hundreds of milliseconds or some such, each time.
Are those DB operations inserts, as they appear? If so, instead of inserting row by row construct a string for an insert statement with multiple rows, in principle as many as there are, in that loop. Then run that one transaction.
Test and scale down as needed, if that adds up to too many rows. Can keep adding rows to the string for the insert statement up to a decided maximum number, insert that, then keep going.†
A few more readily seen inefficiencies:
Don't construct an object every time through the loop. Build it once before the loop, then use/repopulate it as needed inside. Then there is no need for parse+fields here, and getline is also a bit faster.
There's no need for that if statement on every read. First read one line of data, and that's your header. Then enter the loop, without ifs.
Altogether, without placeholders which now may not be needed, something like
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

# There's a $table earlier, with its @fields to populate
my $qry = "INSERT into $table (" . join(',', @fields) . ") VALUES ";

open my $IN, '<', "$FileDir/$file"
    or Err( "Failed to open $file at location: $FileDir" );

my $header_arrayref = $csv->getline($IN);
Log( "This is header line : @$header_arrayref" );

my @sql_values;
while ( my $row = $csv->getline($IN) ) {
    # Use as many elements in the row (@$row) as there are @fields
    push @sql_values, '(' .
        join(',', map { $dbh->quote($_) } @$row[0..$#fields]) . ')';
    # May want to do more to sanitize input further
}

$qry .= join ', ', @sql_values;

# Now $qry is ready. It is
#   INSERT into table_name (f1,f2,...) VALUES (v11,v12...), (v21,v22...),...

$dbh->do($qry) or die $DBI::errstr;
I've also corrected the error handling when opening the file, since the || in the question binds too tightly in this case, making it effectively open IN, ("<$FileDir/$file" || Err(...)). We need or instead of || there. Then, the three-argument open is better. See perlopentut.
If you do need the placeholders, perhaps because you can't have a single insert but it must be broken into many or for security reasons, then you need to generate the exact ?-tuples for each row to be inserted, and later supply the right number of values for them.
Can assemble data first and then build the ?-tuples based on it
my $qry = "INSERT into $table (", join(',', #fields), ") VALUES ";
...
my #data;
while ( my $row = $csv->getline($IN) ) {
push #data, [ #$row[0..$#fields] ];
}
# Append the right number of (?,?...),... with the right number of ? in each
$qry .= join ', ', map { '(' . join(',', ('?')x#$_) . ')' } #data;
# Now $qry is ready to bind and execute
# INSERT into table_name (f1,f2,...) VALUES (?,?,...), (?,?,...), ...
$dbh->do($qry, undef, map { #$_ } #data) or die $DBI::errstr;
This may generate a very large string, which may push the limits of your RDBMS or some other resource. In that case break @data into smaller batches. Then prepare the statement with the right number of (?,?,...) row-values for a batch, and execute in the loop over the batches.‡
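A sketch of one way to batch it, assuming an illustrative $batch_size (tune it for your server):
my $batch_size = 1_000;    # assumption: pick a size your RDBMS handles comfortably

# Note: splice consumes @data as the batches are taken
while ( my @batch = splice @data, 0, $batch_size ) {
    my $tuples = join ', ',
        map { '(' . join(',', ('?') x @$_) . ')' } @batch;
    my $sth = $dbh->prepare(
        "INSERT into $table (" . join(',', @fields) . ") VALUES $tuples" );
    $sth->execute( map { @$_ } @batch );
}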
Finally, another way altogether is to directly load data from a file using the database's tool for that particular purpose. This will be far faster than going through DBI, probably even including the need to process your input CSV into another one which will have only the needed data.
Since you don't need all data from your input CSV file, first read and process the file as above and write out a file with only the needed data (@data above). Then there are two possible ways:
Either use an SQL command for this – COPY in PostgreSQL, LOAD DATA [LOCAL] INFILE in MySQL and Oracle (etc); or,
Use a dedicated tool for importing/loading files from your RDBMS – mysqlimport (MySQL), SQL*Loader/sqlldr (Oracle), etc. I'd expect this to be the fastest way
The second of these options can also be done from within a program, by running the appropriate tool as an external command via system (or better yet via the suitable libraries).
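For instance, a sketch of invoking mysqlimport via system; the options and the file name are assumptions, to be adjusted for your server and CSV layout:
# Hypothetical invocation: mysqlimport derives the table name
# from the file's basename, so name the prepared CSV accordingly
system( 'mysqlimport', '--local', '--fields-terminated-by=,',
        $db_name, "$FileDir/$table.csv" ) == 0
    or die "mysqlimport failed: $?";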
† In one application I've put together as many as millions of rows in the initial insert -- the string itself for that statement was in the high tens of MB -- and it keeps running with ~100k rows inserted in a single statement daily, for a few years by now. This is postgresql on good servers, and of course ymmv.
‡ Some RDBMS do not support a multi-row (batch) insert query like the one used here; in particular, Oracle seems not to. (We were informed in the end that that's the database used here.) But there are other ways to do it in Oracle; please see links in comments, and search for more. Then the script will need to construct a different query, but the principle of operation is the same.
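For reference, one Oracle construct for multi-row inserts is INSERT ALL; a sketch with placeholder table and column names:
INSERT ALL
    INTO table_name (f1, f2) VALUES ('v11', 'v12')
    INTO table_name (f1, f2) VALUES ('v21', 'v22')
SELECT * FROM dual;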

mysql2sqlite.sh script is not working as required

I am using the mysql2sqlite.sh script from GitHub to convert my MySQL database to SQLite. The problem I am getting is that in my table the data 'E-001' gets changed to 'E?001'.
I have no idea how to modify the script to get the required result. Please help me.
the script is
#!/bin/sh
# Converts a mysqldump file into a Sqlite 3 compatible file. It also extracts the MySQL `KEY xxxxx` from the
# CREATE block and create them in separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, original awk or even the lightning fast mawk.
# The mysqldump file is traversed only once.
# Usage: $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
# Thanks to @artemyk and @gkuenning for their nice tweaks.
mysqldump --compatible=ansi --skip-extended-insert --compact "$@" | \
awk '
BEGIN {
FS=",$"
print "PRAGMA synchronous = OFF;"
print "PRAGMA journal_mode = MEMORY;"
print "BEGIN TRANSACTION;"
}
# CREATE TRIGGER statements have funny commenting. Remember we are in trigger.
/^\/\*.*CREATE.*TRIGGER/ {
gsub( /^.*TRIGGER/, "CREATE TRIGGER" )
print
inTrigger = 1
next
}
# The end of CREATE TRIGGER has a stray comment terminator
/END \*\/;;/ { gsub( /\*\//, "" ); print; inTrigger = 0; next }
# The rest of triggers just get passed through
inTrigger != 0 { print; next }
# Skip other comments
/^\/\*/ { next }
# Print all `INSERT` lines. The single quotes are protected by another single quote.
/INSERT/ {
gsub( /\\\047/, "\047\047" )
gsub(/\\n/, "\n")
gsub(/\\r/, "\r")
gsub(/\\"/, "\"")
gsub(/\\\\/, "\\")
gsub(/\\\032/, "\032")
print
next
}
# Print the `CREATE` line as is and capture the table name.
/^CREATE/ {
print
if ( match( $0, /\"[^\"]+/ ) ) tableName = substr( $0, RSTART+1, RLENGTH-1 )
}
# Replace `FULLTEXT KEY` or any other `XXXXX KEY` except PRIMARY by `KEY`
/^ [^"]+KEY/ && !/^ PRIMARY KEY/ { gsub( /.+KEY/, " KEY" ) }
# Get rid of field lengths in KEY lines
/ KEY/ { gsub(/\([0-9]+\)/, "") }
# Print all fields definition lines except the `KEY` lines.
/^ / && !/^( KEY|\);)/ {
gsub( /AUTO_INCREMENT|auto_increment/, "" )
gsub( /(CHARACTER SET|character set) [^ ]+ /, "" )
gsub( /DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP|default current_timestamp on update current_timestamp/, "" )
gsub( /(COLLATE|collate) [^ ]+ /, "" )
gsub(/(ENUM|enum)[^)]+\)/, "text ")
gsub(/(SET|set)\([^)]+\)/, "text ")
gsub(/UNSIGNED|unsigned/, "")
if (prev) print prev ","
prev = $1
}
# `KEY` lines are extracted from the `CREATE` block and stored in array for later print
# in a separate `CREATE KEY` command. The index name is prefixed by the table name to
# avoid a sqlite error for duplicate index name.
/^( KEY|\);)/ {
if (prev) print prev
prev=""
if ($0 == ");"){
print
} else {
if ( match( $0, /\"[^"]+/ ) ) indexName = substr( $0, RSTART+1, RLENGTH-1 )
if ( match( $0, /\([^()]+/ ) ) indexKey = substr( $0, RSTART+1, RLENGTH-1 )
key[tableName]=key[tableName] "CREATE INDEX \"" tableName "_" indexName "\" ON \"" tableName "\" (" indexKey ");\n"
}
}
# Print all `KEY` creation lines.
END {
for (table in key) printf key[table]
print "END TRANSACTION;"
}
'
exit 0
I can't give a guaranteed solution, but here's a simple technique I've been using successfully to handle similar issues (See "Notes", below). I've been wrestling with this script the last few days, and figure this is worth sharing in case there are others who need to tweak it but are stymied by the awk learning curve.
The basic idea is to have the script output to a text file, edit the file, then import into sqlite (More detailed instructions below).
You might have to experiment a bit, but at least you won't have to learn awk (though I've been trying and it's pretty fun...).
HOW TO
Run the script, exporting to a file (instead of passing directly
to sqlite3):
./mysql2sqlite -u root -pMySecretPassWord myDbase > sqliteimport.sql
Use your preferred text editing technique to clean up whatever mess
you've run into. For example, search/replace in sublimetext. (See the last note, below, for a tip.)
Import the cleaned up script into sqlite:
sqlite3 database.sqlite < sqliteimport.sql
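If the offending byte can be pinned down, step 2 can even be scripted. A sketch assuming GNU sed, and assuming the garbled character is the UTF-8 replacement character U+FFFD (bytes EF BF BD), which is only a guess; check your actual data first:
# Hypothetical cleanup: replace U+FFFD with a plain hyphen, then import
sed -i 's/\xef\xbf\xbd/-/g' sqliteimport.sql
sqlite3 database.sqlite < sqliteimport.sql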
NOTES:
I suspect what you're dealing with is an encoding problem -- that '-' represents a character that isn't recognized by, or means something different to, either your shell, the script (awk), or your sqlite database. Depending on your situation, you may not be able to finesse the problem (see the next note).
Be forewarned that this is most likely only going to work if the offending characters are embedded in text data (not just as text, but actual text content stored in a text field). If they're in a machine name (foreign key field, entity id, e.g.), binary data stored as text, or text data stored in a binary field (blob, eg), be careful. You could try it, but don't get your hopes up, and even if it seems to work be sure to test the heck out of it.
If in fact that '-' represents some unusual character, you probably won't be able to just type a hyphen into the 'search' field of your search/replace tool. Copy it from the source data (eg., open the file, highlight and copy to clipboard) then paste into the tool.
Hope this helps!
To convert MySQL to SQLite3 you can use Navicat Premium.

Querying multiple times in Oracle using perl returns only the first query

Note: I have corrected the variable differences, and it does print the query from the first set, but it returns nothing from the second set. If I use only the second set, it works.
In the code below I have @some_array, which is an array of arrays; each inner array contains text, like names. So:
@some_array = ([sam, jon, july], [Mike, Han, Tommy], [angie, sita, lanny]);
Now when I query the lists, 'sam jon july' first and then 'mike han tommy', only the first execute returns a result; the others are undef. I don't know why; any help will be appreciated.
my $pointer;
my $db = $db->prepare_cached("
    begin
        :pointer := myFun(:A1);
    end;
") or die "Couldn't prepare stat: " . $db->errstr;
$db->bind_param_inout(":pointer", \$pointer, 0, { ora_type => ORA_RSET });
for (my $i=0; $i < @some_array ; $i++) {
    my @firstarray = @{$some_array[$i]};
    my $sql = lc(join(" ", @firstarray));
    print "<pre>$sql</pre>\n";
    $db->bind_param(":A1", $sql);
    $db->execute();
    print "<pre>".Dumper($db->execute())."</pre>\n";
}
Just like everyone told you on the last question you asked, initialize your array with parentheses, not nested brackets.
@some_array = ([sam, jon, july], [Mike, Han, Tommy], [angie, sita, lanny])
not
@some_array = [[sam, jon, july], [Mike, Han, Tommy], [angie, sita, lanny]]
You would also benefit tremendously from including
use strict;
use warnings;
at the top of all of your programs. That would catch the strange way you are trying to initialize @some_array, and it would catch your inconsistent usage of @sql and @query, and of $sdh, $db, and $dbh.
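Along those lines, a cleaned-up sketch of the loop under strict, with consistent handle names ($dbh for the database handle, $sth for the statement handle). The connect string is illustrative, and re-binding a fresh cursor variable on each iteration is an assumption here, not a confirmed fix for the original symptom:
use strict;
use warnings;
use DBI;
use DBD::Oracle qw(:ora_types);    # exports ORA_RSET

# Assumption: connection details are illustrative
my $dbh = DBI->connect( 'dbi:Oracle:mydb', 'user', 'pass',
                        { RaiseError => 1 } );

my @some_array = ( ['sam',   'jon',  'july'],
                   ['Mike',  'Han',  'Tommy'],
                   ['angie', 'sita', 'lanny'] );

my $sth = $dbh->prepare_cached(q{
    begin
        :pointer := myFun(:A1);
    end;
});

for my $list (@some_array) {
    my $sql = lc( join ' ', @$list );
    my $pointer;    # fresh cursor variable for each execute
    $sth->bind_param_inout( ':pointer', \$pointer, 0, { ora_type => ORA_RSET } );
    $sth->bind_param( ':A1', $sql );
    $sth->execute();
    # $pointer is now a statement handle for this query's result set;
    # fetch it fully before the next execute
    while ( my $row = $pointer->fetchrow_arrayref ) {
        print "@$row\n";
    }
}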

Cakephp looping through database saves, for some reason it only saves on the last instance of loop?

I'm trying to get this loop to save a new record to the database in CakePHP on each iteration, but for some reason it only saves on the last one (so in this case it saves a record called "test9" but no others). This type of save has worked for me so far in CakePHP and I am completely stumped by this; I would appreciate any advice.
The debug output just gives this for each record (including the save that works), so I can't determine anything from it:
26 SELECT COUNT(*) AS count FROM proxylinks AS Proxylink WHERE Proxylink.id = 13 1 1 0
27 SELECT COUNT(*) AS count FROM proxylinks AS Proxylink WHERE Proxylink.id = 13 1 1 0
28 UPDATE proxylinks SET link = 'test9' WHERE proxylinks.id = 13 1 0
$count = 10;
$v = 1;
do {
    ######### save link to database
    $this->Prox->Proxylink->set(array('link' => 'test' . $v));
    $this->Prox->Proxylink->save();
    $v++;
} while ($v < $count);
You have to call ->create(), otherwise it's updating the previously saved record.
Quoting the manual:
When calling save in a loop, don't forget to call create().
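A minimal sketch of the corrected loop, using the model names from the question:
$count = 10;
$v = 1;
do {
    // Reset model state so each save() INSERTs a new row
    // instead of UPDATEing the previously saved one
    $this->Prox->Proxylink->create();
    $this->Prox->Proxylink->set(array('link' => 'test' . $v));
    $this->Prox->Proxylink->save();
    $v++;
} while ($v < $count);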
