Running command-line script with generated database configuration - cakephp

I'm looking to run a script from the command line using the following command:
./cake updateSearchText main
"UpdateSearchTextShell.php" is a script that's in the Command folder and seems to work fine in other circumstances (see below). However, when I run this, I get a "The datasource configuration 'default' was not found in database.php" error.
Because the project in question is open source, I keep secrets like the database password in a separate file and load them in PHP. My database.php looks like this:
<?php
class DATABASE_CONFIG {

    var $default = null;

    function __construct() {
        $secrets = Configure::read('secrets');
        $this->default = $secrets['database'];
    }
}
?>
If I replace that with a hard-coded $default array declared directly in the class, the shell runs with no errors. Clearly I can't leave it that way in the repo, though, and I'd really rather not have to type it in manually every time I push new changes to the site.
Any thoughts on how to get the database.php above to work with command-line scripts, or on how to approach the secrets in a way that won't confuse Cake?
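For illustration, one way to keep the secrets out of the repository without depending on Configure being populated when the console builds DATABASE_CONFIG is to include a gitignored plain-PHP file directly. This is only a sketch: the file name database_secrets.php and the connection values are assumptions, not part of the original setup.

<?php
// app/Config/database.php
class DATABASE_CONFIG {

    public $default = null;

    public function __construct() {
        // database_secrets.php is gitignored and simply returns the connection array,
        // so nothing here relies on Configure having been populated by the shell's bootstrap.
        $secrets = include dirname(__FILE__) . '/database_secrets.php';
        $this->default = $secrets['database'];
    }
}

<?php
// app/Config/database_secrets.php (gitignored; values are placeholders)
return array(
    'database' => array(
        'datasource' => 'Database/Mysql',
        'host'       => 'localhost',
        'login'      => 'user',
        'password'   => 'secret',
        'database'   => 'app',
    ),
);

Because the include happens inside the constructor itself, the same code path works for web requests and for the cake console shell.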

Related

Laravel queue stops randomly without exception

I have a Laravel queue set up with a database connection. Note this problem also occurs on Redis, but I am currently using the database connection so the failed_jobs table can help me check any errors that occur during the queue process.
The problem I have is that the queue stops working after a few jobs without any message showing why. But when I restart the command (php artisan queue:work), it picks up the remaining jobs and continues (but stops again later).
The job is configured with these values:
public $tries = 1;
public $timeout = 10;
The job code is as follows (not the original code):
public function handle()
{
    try {
        $file = //function to create file;
        $zip = new ZipArchive();
        $zip->open(//zip_path);
        $zip->addFile(//file_path, //file_name);
        $zip->close();
        #unlink(//remove file);
    } catch (\Exception $e) {
        Log::error($e);
    }
}
And the failed function is set up like this:
public function failed(\Exception $exception)
{
    Log::error($exception);
    $this->fail($exception);
    $this->delete();
}
But there is no failed_jobs row, and my log is empty.
Edit: I added simple info logs after every line of code. Every time I start the queue, it stops after the last line, so the code runs correctly; Laravel just doesn't start the next job after that.
What you need to do here to solve the issue is the following:
Go to bootstrap/cache/ and remove all the .php files.
Go to the project source and run php artisan queue:restart.
Now, after adding the supervisor program snippet (a generic example is sketched after these commands), we need to trigger the following commands respectively:
sudo supervisorctl reread (re-reads the config files and makes sure the snippet is correctly set)
sudo supervisorctl update (applies the config changes under supervisor)
sudo supervisorctl restart all (restarts the queue workers so that the newly created queue program gets initialized and starts picking up jobs)
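For reference, the program snippet in question typically looks something like the sketch below; the paths, user and process count are placeholders, not values taken from the question:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/app/artisan queue:work database --sleep=3 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/your/app/storage/logs/worker.log

Saved as something like /etc/supervisor/conf.d/laravel-worker.conf, the reread/update/restart commands above will pick it up and keep the worker process alive.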
Did you try queue:listen?
php artisan queue:listen
Also, I guess you need Supervisor to keep your worker alive.

Run ddl script from file in Groovy

I am working with Spock and Groovy to test my application. I need to run a DDL script before running every test.
To execute the script from Groovy I am using the following code:
def scriptToExecute = './src/test/groovy/com/sql/createTable.sql'
def sqlScriptToExecuteString = new File(scriptToExecute).text
sql.execute(sqlScriptToExecuteString)
createTable.sql is a complex script that does several drop and create operations (of course, it is multiline). When I try to execute it I get the following exception:
java.sql.SQLSyntaxErrorException: ORA-00911: invalid character
Note that the DDL is correct, since it has been checked by running it on the same DB that I am connecting to with Groovy.
Any Idea how to resolve the problem?
I think JDBC does not support this, but there are tools/libraries that could help, see this answer for Java.
In Groovy, using this JDBC script runner would be something like:
Connection con = ....
def runner = new ScriptRunner(con, [booleanAutoCommit], [booleanStopOnerror])
def scriptFile = new File("createTable.ddl")
scriptFile.withReader { reader ->
    runner.runScript(reader)
}
Or, if your script is "simple enough" (i.e. no comments, no semicolons other than those separating statements...), you can load the text, split it on ; and execute it using sql.withBatch, something like this:
def scriptText = new File("createTable.ddl").text
sql.withBatch { stmt ->
    scriptText.split(';').each { order ->
        stmt.addBatch order.trim()
    }
}
If you can't get it done in JDBC (See Hugues' answer), consider executing sqlplus from your Groovy program.
["sqlplus", CREDENTIALS, "#"+scriptToExecute].execute()

Hadoop Map Whole File in Java

I am trying to use Hadoop in Java with multiple input files. At the moment I have two files, a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it, a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems to be fine. I have used other S3 locations to provide the input and output files.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
    //Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the Distributed Cache and then access it in your mapper. Behind the scenes, the Hadoop framework will ensure that the index file is sent to all the task trackers before any task is executed and will be available for your processing. In this case, the data is transferred only once and will be available to all the tasks related to your job.
However, instead of adding the index file to the Distributed Cache in your job code, make your driver implement the Tool interface (run through ToolRunner) and override the run method. This provides the flexibility of passing the index file to the Distributed Cache from the command line while submitting the job.
If you are using ToolRunner, you can add files to the Distributed Cache directly from the command line when you run the job. There is no need to copy the file to HDFS first. Use the -files option to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
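For illustration, a minimal ToolRunner-style driver could look like the sketch below. The class name matches the command above; the job name, the input/output arguments and the mapper/reducer wiring are assumptions, not code from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class YourDriverClassName extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries whatever -files registered, so pass it to the Job
        Job job = Job.getInstance(getConf(), "whole-file-with-index");
        job.setJarByClass(YourDriverClassName.class);
        // plug in your own mapper/reducer classes here, e.g.:
        // job.setMapperClass(YourMapper.class);
        // job.setReducerClass(YourReducer.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options such as -files before calling run()
        System.exit(ToolRunner.run(new Configuration(), new YourDriverClassName(), args));
    }
}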
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me to solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to add the name the system will use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the code that reads the file from the cache: instead of reading the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader("index.txt")); // the fragment name added above
while ((line = br.readLine()) != null) {
    //Do stuff
}

How to use interopPermission on batch job script execution

I'm trying to run a PowerShell script through a batch job. I used the following code, which works fine in a job:
System.Diagnostics.Process process;
System.Diagnostics.ProcessStartInfo startInfo;
;
process = new System.Diagnostics.Process();
startInfo = new System.Diagnostics.ProcessStartInfo();
startInfo.set_FileName("powershell.exe");
startInfo.set_Arguments("D:\\Documents\\OP3_FTP_Upload.ps1");
startInfo.set_UseShellExecute(false);
startInfo.set_RedirectStandardError(true);
process.set_StartInfo(startInfo);
process.Start();
When I use this code in a RunBaseBatch class, I get the following errors:
Failed to request the permission of type 'InteropPermission'.
Unable to create object 'CLRObject'
So I tried the following to solve my permission problem:
Set permissionSet;
InteropPermission interopPermission;
;
interopPermission = new InteropPermission(InteropKind::ClrInterop);
permissionSet = new Set(Types::Class);
permissionSet.add(interopPermission);
CodeAccessPermission::assertMultiple(permissionSet);
...my first code example
CodeAccessPermission::revertAssert();
When I execute my batch job, I get no error message but nothing happens. The path is correct, and so is the script (the parameters are correct, based on the AOS).
I think the problem is the way I implement the permissionSet and interopPermission classes. I know how to use them for CRUD operations on files, but how do I use them for script execution? Can anyone explain how (if possible) to manage those classes in my use case?
Any other ideas to solve my problem are welcome.
This should be enough (in the method doing the CLR calls):
new InteropPermission(InteropKind::ClrInterop).assert();
Otherwise try to debug.
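In other words, the assert has to happen in the same method that creates the CLR objects. A sketch combining the snippets above (the method name is illustrative):

private void runPowerShellScript()
{
    System.Diagnostics.Process process;
    System.Diagnostics.ProcessStartInfo startInfo;
    ;
    // Assert the CLR interop permission in the very method that makes the CLR calls
    new InteropPermission(InteropKind::ClrInterop).assert();

    process = new System.Diagnostics.Process();
    startInfo = new System.Diagnostics.ProcessStartInfo();
    startInfo.set_FileName("powershell.exe");
    startInfo.set_Arguments("D:\\Documents\\OP3_FTP_Upload.ps1");
    startInfo.set_UseShellExecute(false);
    startInfo.set_RedirectStandardError(true);
    process.set_StartInfo(startInfo);
    process.Start();

    CodeAccessPermission::revertAssert();
}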

Create a directory dynamically inside the "web pages" folder of a java web application

So I'm trying to dynamically create a folder inside the web pages folder.
I'm making a game database. Every time a game is added I do this:
public void addGame(Game game) throws DatabaseException {
    em.getTransaction().begin();
    em.persist(game);
    em.getTransaction().commit();
    File file = new File("C:\\GameDatabaseTestFolder");
    file.mkdir();
}
So everything works here.
The folder gets created.
But I want to create the folder like this:
public void addGame(Game game) throws DatabaseException {
    em.getTransaction().begin();
    em.persist(game);
    em.getTransaction().commit();
    File file = new File(game.getId() + "/screenshots");
    file.mkdir();
}
Or something like that, so it will be created where my JSP files are and it will have the id of the game.
I don't understand where the folder is created by default.
Thank you in advance,
David
By default it is relative to the "current working directory", i.e. the directory that was current at the moment the Java Runtime Environment started the server. That may be, for example, /path/to/tomcat/bin, or /path/to/eclipse/workspace/project, etc., depending on how the server is started.
You should now realize that this condition is not controllable from inside the web application.
You also don't want to store it in the expanded WAR folder (which is where your JSPs are), because any changes will get lost whenever you redeploy the WAR, for the simple reason that those files are not contained in the original WAR.
Use an absolute path instead, e.g.:
String gameWorkFolder = "/path/to/game/work/folder";
new File(gameWorkFolder, game.getId() + "/screenshots").mkdirs();
You can make it configurable by supplying it as a properties file setting or a VM argument.
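For example, a minimal sketch that reads the folder from a VM argument and creates the per-game directory; the property name game.workdir and the default path are made up for illustration:

import java.io.File;
import java.io.IOException;

public class GameStorage {

    // Resolves the work folder from -Dgame.workdir=... and creates the
    // per-game screenshots directory if it does not exist yet.
    public static File screenshotsDir(long gameId) throws IOException {
        String gameWorkFolder = System.getProperty("game.workdir", "/path/to/game/work/folder");
        File dir = new File(gameWorkFolder, gameId + "/screenshots");
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new IOException("Could not create " + dir);
        }
        return dir;
    }
}

From addGame() you would then call something like screenshotsDir(game.getId()) after the commit.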
See also:
Image Upload and Display in JSP
getResourceAsStream() vs FileInputStream
