How to restart numbering in PHPWord with addListItem()?

I'm creating a document that will contain multiple numbered lists on a page, and I have created two list definitions like this:
${'references_' . $testcase} = array('listType'=>PHPWord_Style_ListItem::TYPE_NUMBER_NESTED);
${'procedure_' . $testcase} = array('listType'=>PHPWord_Style_ListItem::TYPE_NUMBER_NESTED);
They're then being called within loops to add to the lists:
for ($i = 1; $i <= 10; $i++) {
$body->addListItem('List Item ' . $i, 0, null, ${'references_' . $testcase});
}
for ($j = 1; $j <= 10; $j++) {
$body->addListItem('List Item ' . $j, 0, null, ${'procedure_' . $testcase});
}
However, the second list numbering starts at 11 - how can I have this set back to 1?

It's an old thread and all, but AFAIK you need to put your lists in separate text runs to have the numbering start again.

I was trying to solve this myself today and I found a fork of PHPWord called PHPWord2. At the moment the code for numbering restart is only in a work branch. Read more at https://github.com/PHPOffice/PHPWord/issues/10. Examples are included in the work branch.
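For anyone hitting this with the current, namespaced PHPOffice/PhpWord releases (the PHPWord_Style_ListItem class in the question is from the old 0.6.x beta): defining a separate numbering style per list appears to give each list its own numbering instance, so the second list starts back at 1. A rough, untested sketch; the style names here are made up:
<?php
require 'vendor/autoload.php';

use PhpOffice\PhpWord\PhpWord;

$phpWord = new PhpWord();

// One numbering style per list; each style becomes its own numbering
// instance in the generated document, so every list restarts at 1.
foreach (array('referencesList', 'procedureList') as $styleName) {
    $phpWord->addNumberingStyle($styleName, array(
        'type'   => 'multilevel',
        'levels' => array(
            array('format' => 'decimal', 'text' => '%1.', 'left' => 360, 'hanging' => 360, 'tabPos' => 360),
        ),
    ));
}

$section = $phpWord->addSection();
for ($i = 1; $i <= 10; $i++) {
    $section->addListItem('Reference ' . $i, 0, null, 'referencesList');
}
for ($j = 1; $j <= 10; $j++) {
    $section->addListItem('Procedure step ' . $j, 0, null, 'procedureList');
}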


I am not able to get any values to store in the array @AA

The script reads the correct input and prints it inside the foreach loop, but when I try to pass the array to a module function later, or print it outside the loop, it is empty.
What do I need to change?
#!/usr/bin/perl
use lib ".";   # this pragma adds the current working directory to @INC
use Mytools;
$inputfilename = shift @ARGV;
open (INFILE, $inputfilename) or die
    ("Error reading file $inputfilename: $! \n");
# Store every line of the input file in array @file_array
while (<INFILE>) {
    $file_array[ $#file_array + 1 ] = $_;
}
my $protein;
my @AA;
foreach $protein (@file_array)
{
    @AA = Mytools::dnaToAA($protein);
    print "The main AA\n", @AA;
}
print "The main array", @file_array;
my $header1 = "AA";
my $header2 = "DNA";
Mytools::printreport($header1, $header2, \@AA, \@file_array);
You're overwriting @AA in every iteration of the foreach loop.
Instead of
@AA = Mytools::dnaToAA($protein);
use
push @AA, Mytools::dnaToAA($protein);
See push.
Next time, try to post runnable code (see mre), i.e. avoid Mytools, since it's irrelevant to the problem and makes the code impossible to run for anyone but you.
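To illustrate the push approach with something runnable, here is a minimal sketch; fake_translate() is a made-up stand-in for Mytools::dnaToAA(), which is not shown in the question:
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for Mytools::dnaToAA(); it just uppercases the line.
sub fake_translate { return uc($_[0]); }

my @file_array = ("atgc\n", "ggca\n", "ttag\n");

my @AA;
foreach my $protein (@file_array) {
    # push() appends, so @AA keeps one result per input line
    # instead of being overwritten on every pass through the loop.
    push @AA, fake_translate($protein);
}

print "The main AA\n", @AA;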

Repeat current poly reduce function on multiple objects that are selected?

I'm looping through multiple objects, but the loop stops before going on to the next object.
I created a loop with a condition; if the condition is met, it calls a ReduceEdge() function. The problem is that it only iterates once and does not move on to the next object and repeat the procedure.
global proc ReduceEdge()
{
    polySelectEdgesEveryN "edgeRing" 2;
    polySelectEdgesEveryN "edgeLoop" 1;
    polyDelEdge -cv on;
}

string $newSel[] = `ls -sl`;
for ($i = 0; $i < size($newSel); $i++)
{
    select $newSel[$i];
    int $polyEval[] = `polyEvaluate -e $newSel[$i]`;
    int $temp = $polyEval[0];
    for ($k = 0; $k < $temp; $k++)
    {
        string $polyInfo[] = `polyInfo -fn ($newSel[$i] + ".f[" + $k + "]")`;
        $polyInfo = stringToStringArray($polyInfo[$i], " ");
        float $vPosX = $polyInfo[2];
        float $vPosY = $polyInfo[3];
        float $vPosZ = $polyInfo[4];
        if ($vPosX == 0 && $vPosY == 0 && $vPosZ == 1.0)
        {
            select ($newSel[$i] + ".e[" + $k + "]");
            ReduceEdge();
        }
    }
}
Expected results:
If I select 4 cylinders, all their edges will reduce by half the current amount.
Actual results:
When 4 cylinders are selected, only one reduces down to half the edges. The rest stay the same.
Since my comment did help you out, I'll try and give a more thorough explanation.
Your first loop (with $i) iterates over each object in your selection. This is fine.
Your second loop (with $k) iterates over the number of edges of the current object. So far, so good, though I'm wondering if it would be more correct to loop over the number of faces...
Now you ask for an array of all face normals of the face at index $k at object $i, with string $polyInfo[] = `polyInfo -fn ($newSel[$i] + ".f[" + $k + "]")`;.
If you print the size and values of $polyInfo, you'll see you have an array with one element: the face normal of the particular face you just queried. Therefore the index should always be 0, not $i, which increases with every iteration.
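So, assuming the rest of the script stays the same, the likely one-line fix in the MEL above is to take element 0 of the returned array instead of element $i:
// polyInfo -fn returns one string per queried face; only one face was
// queried here, so the face-normal string is always at index 0.
string $polyInfo[] = `polyInfo -fn ($newSel[$i] + ".f[" + $k + "]")`;
$polyInfo = stringToStringArray($polyInfo[0], " ");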
I have made a Python/PyMEL version of the script, which may be nice for you to see.
import pymel.core as pm
import maya.mel as mel

def reduceEdge():
    mel.eval('polySelectEdgesEveryN "edgeRing" 2;')
    mel.eval('polySelectEdgesEveryN "edgeLoop" 1;')
    pm.polyDelEdge(cv=True)

def reducePoly():
    selection = pm.ls(sl=True)
    for obj in selection:
        for i, face in enumerate(obj.f):
            normal = face.getNormal()
            if normal.x == 0.0 and normal.y == 0.0 and normal.z == 1.0:
                pm.select(obj + '.e[' + str(i) + ']')
                reduceEdge()

reducePoly()

ref to a hash -> its member array -> this array's member's value. How to elegantly access and test?

I want to use an expression like
@{ %$hashref{'key_name'} }[1]
or
%$hashref{'key_name'}->[1]
to get, and then test, the second (index = 1) member of an array (reference) held by my hash as its "key_name" value. But I cannot.
This code here is correct (it works), but I would have liked to combine the two lines that I have marked into one single, efficient, perl-elegant line.
foreach my $tag ('doit', 'source', 'dest') {
    my $exists = exists( $$thisSectionConfig{$tag} );
    my @tempA = %$thisSectionConfig{$tag};      # this line
    my $non0len = ( $tempA[1] =~ /\w+/ );       # and this line
    if ( !$exists || !$non0len ) {
        print STDERR "No complete \"$tag\" ... etc ... \n";
        # program exit ...
    }
}
I know you (the general 'you') can elegantly combine these two lines. Could someone tell me how I could do this?
This code is testing a section of a config file that has been read into a $thisSectionConfig reference to a hash by Config::Simple. Each config-file key=value pair is then (I looked with Data::Dumper) held as a two-member array: [0] is the key, [1] is the value. The $tag values are configuration settings that must be present in the config file sections being processed by this code snippet.
Thank you for any help.
You should read about the arrow operator (->). I guess you want something like this:
foreach my $tag ('doit', 'source', 'dest') {
    if (exists $thisSectionConfig->{$tag}) {
        my $non0len = ( $thisSectionConfig->{$tag}[1] =~ /(\w+)/ );
    }
    else {
        print STDERR "No complete \"$tag\" ... etc ... \n";
        # program exit ...
    }
}
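If the aim really is a single combined test per tag, the exists check and the value check from the answer can be folded into one condition. A sketch, assuming the same structure the answer uses (the hash value is an array reference whose element [1] holds the value):
foreach my $tag ('doit', 'source', 'dest') {
    unless ( exists $thisSectionConfig->{$tag}
             && $thisSectionConfig->{$tag}[1] =~ /\w+/ ) {
        print STDERR "No complete \"$tag\" ... etc ... \n";
        # program exit ...
    }
}
The exists check runs first and short-circuits, so a missing key is never dereferenced (and never autovivified).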

Perl: grouping array-of-arrays elements based on one column and a condition

I have an AoA (array of arrays) construct with four columns and many rows. The following is an example of the input data.
DQ556929 103480190 103480214 154943
DQ540839 103325247 103325275 2484
DQ566549 103322763 103322792 99
DQ699634 103322664 103322694 0
DQ544472 103322664 103322692 373
DQ709105 103322291 103322318 46
DQ705937 103322245 103322273 486
DQ699398 103321759 103321788 1211
DQ710151 103320548 103320577 692251
DQ548430 102628297 102628326 1
DQ558403 102628296 102628321 855795
DQ692476 101772501 101772529 481463
DQ544274 101291038 101291068 484047
DQ723982 100806991 100807020 1
DQ709023 100806990 100807020 3
DQ712307 100806987 100807014 0
DQ709654 100806987 100807012 571051
DQ707370 100235936 100235962 1481849
I want to group all the row elements and write them into a file (sequentially).
The conditions: if the column-four value is less than 1000 and at least two such rows are adjacent, group them; if a row's value is less than 1000 but it sits between rows whose values are more than 1000, treat it as a singlet and append it separately to the same file; rows whose values are more than 1000 are likewise written out on their own. The order of the second and third columns must not be affected.
This file is the output of my previous program. I have tried my hand at this but am getting some weird results. Here is my chunk of code, though it is not functional. I just need help checking whether my logic is sound; as a beginner I am open to any comments, so please correct me anywhere.
my @dataf = sort { $a->[1] <=> $b->[1] } @data;
@dataf = reverse @dataf;
for (my $i = 0; $i <= $#Start; $i++)
{
    print "$sortStart[$i]\n";
    my $diff = $sortStart[$i] - $sortStart[$i+1];
    $dataf[$i][3] = $diff;
    # $IDdiff{$ID[$i]} = $diff;
}
#print Dumper(@dataf);
open (CLUST, ">> ./clustTest.txt");
for (my $k = 0; $k <= $#Start; $k++)
{
    for (my $l = 0; $l <= 3; $l++)
    {
        # my $tempdataf = shift $dataf[$k][$l];
        # print $tempdataf;
        if ($dataf[$k][3] <= 1000)
        {
            $flag = 1;
            do
            {
                print CLUST "----- Cluster $clustNo -----\n";
                print CLUST "$dataf[$k][$l]\t";
                if ($dataf[$k][3] <= 1000)
                {
                    $flag1 = 1;
                } else { $flag1 = 0; }
                $clustNo++;
            } until ($flag1 == 0 && $data[$k][3] > 1000);
            if ($flag1 == 0 && $data[$k][3] > 1000)
            {
                print CLUST "Singlet \n";
                print CLUST "$dataf[$k][$l]\t";
                next;
            }
            # print CLUST "$dataf[$k][$l]\t"; ##IDdiff
        }
        print CLUST "\n";
    }
}
Expected output in file:
Singlets
DQ556929 103480190 103480214 154943
DQ540839 103325247 103325275 2484
Cluster1
DQ566549 103322763 103322792 99
DQ699634 103322664 103322694 0
DQ544472 103322664 103322692 373
DQ709105 103322291 103322318 46
DQ705937 103322245 103322273 486
Singlets
DQ699398 103321759 103321788 1211
DQ710151 103320548 103320577 692251
DQ548430 102628297 102628326 1
DQ558403 102628296 102628321 855795
DQ692476 101772501 101772529 481463
DQ544274 101291038 101291068 484047
Cluster2
DQ723982 100806991 100807020 1
DQ709023 100806990 100807020 3
DQ712307 100806987 100807014 0
Singlets
DQ709654 100806987 100807012 571051
DQ707370 100235936 100235962 1481849
This seems to produce the expected output. I'm not sure I understood the specification correctly, so there might be errors and edge cases.
How it works: it remembers what kind of section it's currently outputting ($section, Singlet or Cluster). It accumulates lines in the @cluster array if they belong together; when an incompatible line arrives, the cluster is printed and a new one is started. If the cluster to print has only one member, it's treated as a singlet.
#!/usr/bin/perl
use warnings;
use strict;

my $section = q();
my @cluster;
my $cluster_count = 1;

sub output {
    if (@cluster > 1) {
        print "Cluster$cluster_count\n";
        $cluster_count++;
    } elsif (1 == @cluster) {
        print $section = 'Singlet', "s\n" unless 'Singlet' eq $section;
    }
    print for @cluster;
    @cluster = ();
}

my $last = 'INF';
while (<>) {
    my ($id, $from, $to, $value) = split;
    if ($value > 1000 || 1000 < abs($last - $from)) {
        output();
    } else {
        $section = 'Cluster';
    }
    push @cluster, $_;
    $last = $to;
}
output();
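For completeness (not part of the original answer): the script reads the table from standard input or from filenames given on the command line via the diamond operator <>, and prints to standard output, so a hypothetical invocation writing to the clustTest.txt file used in the question would be:
perl cluster.pl input.txt > clustTest.txt
where cluster.pl and input.txt are placeholder names.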

Drupal7 Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 24 bytes)

I have this code that connects to an external MySQL database and returns column values:
for ($i = 0; $i < sizeof($Columns); $i++) {
    // $Columns contains the column names of the current data table $dt
    $result3 = Database::getConnection()->query("SELECT " . $Columns[$i] . " From " . $dt . " ");
    $field_values = array();
    $index = 0;
    while ($row = $result3->fetchAssoc()) {
        $field_values[$index] = $row[$Columns[$i]];
        //drupal_set_message($field_values[$index]);
        $index++;
    }
The code works fine for every data table except one that contains thousands of columns, where every column contains many values. For this data table Drupal displays a fatal error related to the allowed memory:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 24 bytes) in C:\xampp\htdocs\.....
I wanted to get around this error by fetching only a few columns, so I tried the following:
while ($row = $result3->fetchAssoc() && $index<=3) {
}
But Drupal is still displaying this error. Any idea how to avoid it?
It might be a good idea to use the Batch API for this, but you will have to find a chunk size that can be processed without overloading your server. Since this is highly dependent on the data you have, and on the data you "don't" have yet (future data), you may want to split the work into very small chunks.
Maybe you can:
<?php
for ($i=0; $i < sizeof($Columns); $i++) {
// Send to batch processing.
}
If that doesn't work, then you will have to,
<?php
for ($i=0; $i < sizeof($Columns); $i++) {
// Do a count query get max
// split into smaller batches using paging in SQL ( hint: LIMIT 0, 10).
foreach($pages as $page) {
// Send to batch processing.
}
}
Please refer to the Batch API documentation I linked to; it explains how you can define a batch process and pass your parameters to it.
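To make that concrete, here is a rough Drupal 7 sketch of the first suggestion: one batch operation per column, so a single request never has to hold more than one column in memory. mymodule and all callback names here are hypothetical, and if even a single column is too large you would additionally page through it with LIMIT, as in the second suggestion.
<?php
function mymodule_fetch_columns_batch($Columns, $dt) {
  $operations = array();
  foreach ($Columns as $column) {
    // One batch operation per column.
    $operations[] = array('mymodule_fetch_one_column', array($column, $dt));
  }
  batch_set(array(
    'title' => t('Fetching column values'),
    'operations' => $operations,
    'finished' => 'mymodule_fetch_finished',
  ));
}

function mymodule_fetch_one_column($column, $dt, &$context) {
  // Same query as in the question, but limited to one column per request.
  $result = Database::getConnection()->query("SELECT " . $column . " FROM " . $dt);
  while ($row = $result->fetchAssoc()) {
    $context['results'][$column][] = $row[$column];
  }
}

function mymodule_fetch_finished($success, $results, $operations) {
  if ($success) {
    drupal_set_message(t('Fetched @count columns.', array('@count' => count($results))));
  }
  else {
    drupal_set_message(t('Column fetch finished with errors.'), 'error');
  }
}
Note that everything pushed into $context['results'] still ends up in one array at the end, so for a really huge table you would process or store each column inside its own operation instead of accumulating it all.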
