Case:
I have a Windows batch file, start.bat, which does some operations using an extr_mode parameter passed in from outside:
rem settings
set extr_mode=%1
rem the rest of the script
When I call it from cmd, e.g. start.bat DAILY, it works fine and the parameter is passed.
Now I'm trying to call this batch file from a program in a DBMS_SCHEDULER chain job:
begin
sys.dbms_scheduler.create_program(program_name => 'OUT_BAT',
program_type => 'EXECUTABLE',
program_action => 'C:\Job\start.bat DAILY',
number_of_arguments => 0,
enabled => true,
comments => 'Out batch file');
end;
/
This program runs OK without the parameter (program_action => 'C:\Job\start.bat'), but when I add the parameter the job fails.
More precisely, checking dba_scheduler_job_run_details, this step has STATUS = SUCCEEDED, but ADDITIONAL_INFO contains:
CHAIN_LOG_ID="490364", STEP_NAME="OUT", STANDARD_ERROR="The system cannot find the path specified.
The system cannot find the path specified."
I didn't find any specific answer to my question, so: is it possible to run a batch file with a parameter from a DBMS_SCHEDULER chain job?
Frankly, I've no idea about dbms-scheduler.
Naturally, batch can provide a solution, which may or may not be suitable.
Create a new batch called startDAILY.bat containing simply this:
C:\Job\start.bat DAILY
and change your setting
program_action => 'C:\Job\startDAILY.bat'
I'm suspicious about your code line stating
number_of_arguments => 0,
I suspect you may be able to change this to, say, number_of_arguments => 1,
and then perhaps the dbms-scheduler manual may give a hint about how to supply DAILY as the first argument, so that you can use your original code.
Oh, BTW: using start as a batch name is not a good idea, as START is a built-in cmd command.
Problem solved - thanks for the tip, @Magoo.
I needed to create the program first:
sys.dbms_scheduler.create_program(program_name => 'OUT_BAT',
program_type => 'EXECUTABLE',
program_action => 'C:\OUT_start.bat',
number_of_arguments => 1,
enabled => false,
comments => 'Out batch file');
then define the program argument and enable the program:
sys.dbms_scheduler.define_program_argument(program_name => 'OUT_BAT',
argument_position => 1,
argument_name => 'DAILY',
argument_type => 'varchar2',
default_value => 'DAILY');
sys.dbms_scheduler.enable(name => 'OUT_BAT');
and then, of course, the remaining elements of the dbms_scheduler job.
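For what it's worth, if you later need to run the same program with a value other than the default, a standalone job created from the program can override the argument with set_job_argument_value. A sketch, assuming the program above (the job name OUT_JOB and the value WEEKLY are made up):
begin
  sys.dbms_scheduler.create_job(job_name     => 'OUT_JOB',
                                program_name => 'OUT_BAT',
                                enabled      => false);
  -- override the program argument's default value for this job
  sys.dbms_scheduler.set_job_argument_value(job_name          => 'OUT_JOB',
                                            argument_position => 1,
                                            argument_value    => 'WEEKLY');
  sys.dbms_scheduler.enable(name => 'OUT_JOB');
end;
/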
Related
I want to get the file that I am deleting in FilePond. However, when I use onremovefile={(file) => this.handleRemove(file)}, the file comes back as null. What am I doing wrong?
Found the solution! The first parameter is a possible error response and the second one is the file item. Thus it's actually onremovefile={(errRes, file) => this.handleRemove(errRes, file)}
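For reference, a minimal sketch of the corrected handler (the component wiring and this.state.files are assumptions for illustration, not from the original):
// inside the component's render()
<FilePond
  files={this.state.files}
  onremovefile={(errRes, file) => {
    // errRes is a possible error response; file is the removed file item
    if (!errRes) {
      this.handleRemove(errRes, file);
    }
  }}
/>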
How do I get php-cs-fixer to process other file extensions, for example CakePHP template (.ctp) files?
I have tried this config:
<?php
$finder = PhpCsFixer\Finder::create()
->notPath('bootstrap/cache')
->notPath('storage')
->notPath('vendor')
->in(__DIR__)
->name('*.php') // <===== *.ctp
->notName('*.blade.php')
->ignoreDotFiles(true)
->ignoreVCS(true)
;
return PhpCsFixer\Config::create()
->setRules(array(
'@Symfony' => true,
'binary_operator_spaces' => ['align_double_arrow' => false],
'array_syntax' => ['syntax' => 'short'],
'linebreak_after_opening_tag' => true,
'not_operator_with_successor_space' => true,
'ordered_imports' => true,
'phpdoc_order' => true,
))
->setFinder($finder)
;
The thing is, with the original way of running PHP CS Fixer, you have the finder paths configured once in the config file, and then you also pass a path as a CLI argument.
As a result, you have a fully defined finder in the config which later gets overridden by the CLI argument, and you also get this message:
Paths from configuration file have been overridden by paths provided as command arguments.
The solution is one of the following:
- provide the path only in the config file
- provide the path only as a CLI argument
- provide a path in both places (e.g. plugins in the config and plugins/subdir as the CLI argument, since passing exactly the same value twice doesn't really make sense), but then also pass the --path-mode=intersection parameter, so the CLI path does not override the config path and the common part of the two paths is used instead, as in the example below
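For example (the paths are just illustrative):
php-cs-fixer fix plugins/subdir --path-mode=intersection
In intersection mode, only files that match both the finder in the config and the CLI path are fixed.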
Very important: do not use a path argument in the terminal command.
Example:
php-cs-fixer fix test/global.php
Here cs fixer uses the path from the command argument, NOT the finder.
Just run with
php-cs-fixer fix
It's simple, but easy to miss through inattention.
This is painfully simple, but I cannot determine why it simply will not work as the Cookbook suggests it will. I am getting a blank result when I run the following:
Cache::write('req_quals', $value, 'permacache');
Cache::read('req_quals', 'permacache');
The config looks like:
Cache::config('permacache', array('engine' => 'File', 'path' => CACHE . 'permacache' . DS, 'duration' => '+9999 days'));
The write works. I know this because I'm looking directly into the tmp/cache/permacache folder and I see the file with its contents inside.
I can write/read this value without any problem if I remove the 'permacache' from both lines.
Am I missing something obvious?
When Cake calculates the duration, +9999 days overflows and returns a negative duration, so every read sees the entry as already expired. You should avoid being a cool guy and just use +999 days, as the documentation subtly suggests.
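So, keeping everything else from the question's config, this should make the read work (a sketch; only the duration changes):
Cache::config('permacache', array(
    'engine'   => 'File',
    'path'     => CACHE . 'permacache' . DS,
    'duration' => '+999 days' // '+9999 days' overflows into a negative duration
));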
I have a module and I am using node_load(array('nid' => arg(1)));
The problem is that this function keeps getting its data from the DB cache.
How can I force this function not to use the DB cache?
Example
my link is http://mydomain.com/node/344983
now:
$node=node_load(array('nid'=>arg(1)),null,true);
echo $node->nid . " -- " . arg(1);
output
435632 -- 435632
which is a random node id (one that exists on the system),
and every time I Ctrl+F5 my browser I get a new nid!!
Thanks for your help
Where are you calling this? For example, are you using it as part of your template.php file, as part of a page, or as an external module?
Unless you have this wrapped in a function with its own namespace, try naming the variable differently than $node -- for example, name it $my_node. Depending on the context, the 'node' name is very likely to be accessed and modified by Drupal core and other modules.
If this is happening inside of a function, try the following and let me know what the output is:
$test_node_1 = node_load(344983); // Any hard-coded $nid that actually exists
echo $test_node_1->nid;
$test_node_2 = node_load(arg(1)); // Consider using hook_menu loaders instead of arg() in the future, but that's another discussion
echo $test_node_2->nid;
$test_node_3 = menu_get_object(); // Another method that is better than arg()
echo $test_node_3->nid;
Edit:
Since you're using hook_block, I think I see your problem -- the block itself is being cached, not the node.
Try setting BLOCK_NO_CACHE or BLOCK_CACHE_PER_PAGE in hook_block, per the documentation at http://api.drupal.org/api/drupal/developer--hooks--core.php/function/hook_block/6
You should also try to avoid arg() whenever possible -- it's a little bit of a security risk, and there are better ways to accomplish just about anything arg() would do in a module environment.
Edit:
Some sample code that shows what I'm referring to:
function foo_block ($op = 'list', $delta = 0, $edit = array()) {
switch ($op) {
case 'list':
$blocks[0] = array(
'info' => 'I am a block!',
'status' => 1,
'cache' => BLOCK_NO_CACHE // Add this line
);
return $blocks;
case 'view':
.....
}
}
node_load uses db_query, which uses mysql_query -- so there's no way to easily change the database's cache through that function.
But, node_load does use Drupal's static $nodes cache -- it's possible that this is your problem instead of the database's cache. You can have node_load clear that cache by calling it with $reset = TRUE (node_load($nid, NULL, TRUE)).
Full documentation is on the node_load manual page at http://api.drupal.org/api/drupal/modules--node--node.module/function/node_load/6
I have had luck passing the node id to node_load directly, not in an array.
node_load(1);
According to Drupal's API this is acceptable, and it looks like if you pass an array as the first argument, it's treated as an array of conditions to match against in the database query.
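For example (Drupal 6 node_load, using the nid from the question):
$node = node_load(344983);                  // load by node id
$node = node_load(array('nid' => 344983));  // load by an array of conditions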
The issue is not with arg(), your issue is that you have caching enabled for anonymous users.
You can switch off caching, or you can exclude your module's menu items from the cache with the cache exclude module.
Edit: As you've now explained that this is a block, you can use BLOCK_NO_CACHE in hook_block to exclude your block from the block cache.
I'm very new to Ada, and one thing I find hard to grasp is working with files when it comes to appending values to a file. It seems easier to do in C. Anyway, I haven't found good information and I hope someone can help me here.
I declare the following first:
PACKAGE Seq_Long_Float_IO IS NEW Ada.Sequential_IO (Element_Type => Long_Float);
Flo_File : Seq_Long_Float_IO.File_Type;
and then I create a file "bvalues.dat":
Seq_Long_Float_IO.Create(File => Flo_File, Name => "bvalues.dat");
and then to write say a variable named "Largest", I use:
Seq_Long_Float_IO.Write(File => Flo_File, Item => Largest);
I see that every time I run the code the file "bvalues.dat" gets destroyed and new values are written to it as the program runs. This is ok for me. What I'm doing in my code is to find the largest value of some values and store the largest element in the file "bvalues.dat".
Now say I have to repeat the operation with different sets of values IN THE SAME PROGRAM (say, in an outer LOOP), and I need to store the largest element of each set. Thus I need to be able to append the largest value of every set to the file "bvalues.dat". How do I achieve this?
Do I need to close the file "bvalues.dat" each time after writing a largest value and then open it again:
Seq_Long_Float_IO.Open(File => Flo_File, Mode => Append_File, Name => "bvalues.dat");
say, after an index in the outer loop gets incremented to take in the next set of values whose largest element is to be computed, and then write as I did above:
Seq_Long_Float_IO.Write(File => Flo_File, Item => Largest); ?
NEW INFO:
I get the error:
40. Seq_Long_Float_IO.Open(File => Flo_File, Mode => Append_File, Name => "bvalues.dat");
|
>>> "Append_File" is not visible
>>> non-visible declaration at a-sequio.ads:58, instance at line 8
>>> non-visible declaration at a-textio.ads:56
Thanks a lot...
Test file:
WITH Ada.Text_IO;
WITH Ada.Sequential_IO;
PROCEDURE TestWrite5 IS
PACKAGE Seq_Float_IO IS NEW Ada.Sequential_IO (Element_Type => Float);
Flo_File : Seq_Float_IO.File_Type;
BEGIN
Seq_Float_IO.Open (File => Flo_File, Mode => Seq_Float_IO.Append_File,
Name =>"bvalues.dat");
exception
when Name_Error =>
Create (File => Flo_File, Mode => Out_File, Name => "bvalues.dat");
END TestWrite5;
On compiling I get:
exception
when Name_Error =>
|
"Name_Error" is not visible
non-visible declaration at a-sequio.ads:111, instance at line 5
non-visible declaration at a-textio.ads:298
non-visible declaration at a-ioexce.ads:23
Create (File => Flo_File, Mode => Out_File, Name => "bvalues.dat");
|
"Create" is not visible
non-visible declaration at a-sequio.ads:73, instance at line 5
non-visible declaration at a-textio.ads:90
It doesn't change if I also put Seq_Float_IO.Out_File instead of just Out_File.
Create, like the name implies, will create a brand new file, even if one already exists.
If the file already exists and you want to append to it, you would use Open.
If you want to open it for appending, but create it if it doesn't exist, the normal idiom is to put the Create call in an exception handler around Open. Note that Open, Create, Name_Error, and the file modes are all declared in your Sequential_IO instance, so they need to be qualified (or made visible with a use clause), which is exactly what your "not visible" errors are about. Like so:
begin
   Seq_Float_IO.Open (File => Flo_File,
                      Mode => Seq_Float_IO.Append_File,
                      Name => "bvalues.dat");
exception
   when Seq_Float_IO.Name_Error =>
      Seq_Float_IO.Create (File => Flo_File,
                           Mode => Seq_Float_IO.Out_File,
                           Name => "bvalues.dat");
end;
From the rest of your text, it looks like you are thinking about storing temp values in a file. I wouldn't do that unless you need persistence for some reason (recovering from crashes, etc). Disk IO is way way way slow. Just keep your temp values in a variable and save the result when you have it.
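For what it's worth, here is a minimal self-contained sketch of the whole open-or-create-and-append pattern (the loop and the computation of Largest are stand-ins for the real program):
WITH Ada.Sequential_IO;
PROCEDURE Append_Demo IS
   PACKAGE Seq_Float_IO IS NEW Ada.Sequential_IO (Element_Type => Float);
   Flo_File : Seq_Float_IO.File_Type;
   Largest  : Float;
BEGIN
   FOR Set_Number IN 1 .. 3 LOOP
      Largest := Float (Set_Number) * 10.0;  -- stand-in for finding the real largest value
      BEGIN
         Seq_Float_IO.Open (File => Flo_File,
                            Mode => Seq_Float_IO.Append_File,
                            Name => "bvalues.dat");
      EXCEPTION
         WHEN Seq_Float_IO.Name_Error =>
            Seq_Float_IO.Create (File => Flo_File,
                                 Mode => Seq_Float_IO.Out_File,
                                 Name => "bvalues.dat");
      END;
      Seq_Float_IO.Write (File => Flo_File, Item => Largest);
      Seq_Float_IO.Close (Flo_File);
   END LOOP;
END Append_Demo;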