How do I pass a PHP array to a Nightmare NodeJS script? This is where I'm at:
/*Original array $titles =
Array
(
[0] => title 1 with, a comma after with
[1] => title 2
[2] => title 3
[3] => title 4
)
*/
//PHP
$json_titles = json_encode($titles);
echo $json_titles;
// Output: ["title 1 with, a comma after with","title 2","title 3","title 4"]
// Pass to Nightmare NodeJS script with:
shell_exec('xvfb-run node app.js ' . $json_titles);
//NodeJS app.js:
const getTitles = process.argv[2];
console.log(getTitles);
// Output: [title 1 with, a comma after with,title 2,title 3,title 4]
How do I get the same array in PHP to NodeJS?
As you can see below, Simon came to the rescue with escapeshellarg. Thanks Simon!
I also ended up needing one more step in the Node.js script: getTitles = JSON.parse(getTitles);
Change your shell_exec to escape the json like so:
shell_exec('xvfb-run node app.js ' . escapeshellarg($json_titles));
This escapes the double quotes in your JSON so they are passed correctly to node.
You should do this any time you pass variables to the command line, to reduce bugs and security risks.
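For illustration, here is roughly what escapeshellarg does on Linux (a sketch with made-up values): it wraps the whole string in single quotes, so the shell passes it through as one argument.
//PHP
$json = '["title 1, with a comma","title 2"]';
echo escapeshellarg($json);
// Output: '["title 1, with a comma","title 2"]'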
Edit: as discovered by the OP, Node will also have to parse the JSON, since it receives the argument as a string. This can be done in the Node script as follows:
const parsedTitles = JSON.parse(getTitles);
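Putting both sides together, a minimal app.js might look like this (a sketch, assuming the JSON array is always the single extra argument):
//NodeJS app.js:
// argv[0] is node, argv[1] is the script path, argv[2] is the JSON string from PHP
const titles = JSON.parse(process.argv[2]);
titles.forEach(title => console.log(title));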
Related
I want to create a list from a MATLAB struct as follows:
#Import
import scipy.io as sio
mat2 = sio.loadmat('TestData.mat', squeeze_me=True, struct_as_record=False)
#Solution without a loop
v1 = mat2['pyDMD'].v1
.
.
.
v17 = mat2['pyDMD'].v17
data = [v1, . . ., v17]
#Loop over fieldnames
data = []
fieldnames = []
fieldnames.append(mat2['pyDMD']._fieldnames)
for t in range(len(fieldnames)):
    data.append(mat2['pyDMD'].fieldnames(t))
v1..v17 are 100x100 arrays of float64. How can I iterate over the field names in the last line? My approach is obviously not correct.
The answer is quite simple:
import numpy as np

data = []
for field in mat2['pyDMD']._fieldnames:
    snapshot = getattr(mat2['pyDMD'], field)
    data.append(np.transpose(snapshot))
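_fieldnames holds the struct's field names as plain strings, and getattr fetches the attribute of the same name from the mat_struct object, so the loop visits v1 through v17 without spelling them out. A quick sanity check (a sketch, assuming the 17 fields of shape 100x100 from the question):
assert len(data) == 17
assert data[0].shape == (100, 100)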
I want to copy an array from a text file and make another array equal to it,
so
local mapData = {
grass = {
cam = "hud",
x = 171,
image = "valley/grass",
y = 168,
animated = true
}
}
This is an array that is in Data.lua.
I want to copy this array and make it equal to another array:
local savedMapData = {}
savedMapData = io.open('Data.lua', 'r')
Thank you.
What you can do further depends on your Lua version.
But I like questions about file operations.
Because file handles in Lua are objects with methods.
Their datatype is userdata.
That means it has methods that can be used directly on itself.
Like the methods for the datatype string.
Therefore it's easy to do lazy things like...
-- Example open > reading > loading > converting > defining
-- In one Line - That is possible with methods on datatype
-- Lua 5.4
local savedMapData = load('return {' .. io.open('Data.lua'):read('a'):gsub('^.*%{', ''):gsub('%}.*$', '') .. '}')()
for k, v in pairs(savedMapData) do print(k, '=>', v) end
Output should be...
cam => hud
animated => true
image => valley/grass
y => 168
x => 171
If you need it in the grass table then do...
local savedMapData = load('return {grass = {' .. io.open('Data.lua'):read('a'):gsub('^.*%{', ''):gsub('%}.*$', '') .. '}}')()
The chain of methods above does the following:
io.open('Data.lua') - creates a file handle (userdata) in read-only mode
(userdata):read('a') - reads the whole file into one (string)
(string):gsub('^.*%{', '') - replaces everything from the beginning to the last { with nothing (the .* is greedy, so it runs past the outer braces)
(string):gsub('%}.*$', '') - replaces everything from the first remaining } to the end with nothing
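An alternative sketch (hypothetical, assuming Data.lua contains nothing but the local mapData = { ... } assignment shown above): rewrite the assignment into a return statement and load the whole chunk, which keeps the nested grass table intact.
-- Lua 5.4: turn 'local mapData =' into 'return' and run the chunk
local src = io.open('Data.lua'):read('a')
local savedMapData = load((src:gsub('^%s*local%s+mapData%s*=', 'return', 1)))()
print(savedMapData.grass.image) --> valley/grass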
I'm trying to read an input file in Scala that I know the structure of, however I only need every 9th entry. So far I have managed to read the whole thing using:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
The issue is that this leaves me with an array that is huge (we're talking 20 GB of data). Not only have I been forced to write some very ugly code to convert between RDD[Array[String]] and Array[String], but it has essentially made my code useless.
I've tried different approaches and mixes between using
.map()
.flatMap() and
.reduceByKey()
however nothing actually put my collected "cells" into the format I need them in.
Here's what is supposed to happen: Reading a folder of text files from our server, the code should read each "line" of text in the format:
*---------*
| NASDAQ: |
*---------*
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
and only keep hold of the stock_symbol, as that is the identifier I'm counting. So far my attempts have been to turn the entire thing into an array and collect every 9th index from it into a collected_cells var. The issue is that, based on my calculations and real-life results, that code would take 335 days to run (no joke).
Here's my current code for reference:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object SparkNum {
def main(args: Array[String]) {
// Do some Scala voodoo
val sc = new SparkContext(new SparkConf().setAppName("Spark Numerical"))
// Set input file as per HDFS structure + input args
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
var collected_cells:Array[String] = new Array[String](0)
//println("[MESSAGE] Length of CC: " + collected_cells.length)
val divider:Long = 9
val array_length = fields.count / divider
val casted_length = array_length.toInt
val indexedFields = fields.zipWithIndex
val indexKey = indexedFields.map{case (k,v) => (v,k)}
println("[MESSAGE] Number of lines: " + array_length)
println("[MESSAGE] Casted lenght of: " + casted_length)
for( i <- 1 to casted_length ) {
println("[URGENT DEBUG] Processin line " + i + " of " + casted_length)
var index = 9 * i - 8
println("[URGENT DEBUG] Index defined to be " + index)
collected_cells :+ indexKey.lookup(index)
}
println("[MESSAGE] collected_cells size: " + collected_cells.length)
val single_cells = collected_cells.flatMap(collected_cells => collected_cells);
val counted_cells = single_cells.map(cell => (cell, 1)).reduceByKey { case (x, y) => x + y }
// val result = counted_cells.reduceByKey((a,b) => (a+b))
// val inmem = counted_cells.persist()
//
// // Collect driver into file to be put into user archive
// inmem.saveAsTextFile("path to server location")
// ==> Not necessary to save the result as processing time is recorded, not output
}
}
The bottom part is currently commented out from when I tried to debug it, but it acts as pseudo-code for me to know what I need done. I should point out that I am barely familiar with Scala, so things like the _ notation confuse the life out of me.
Thanks for your time.
There are some concepts that need clarification in the question:
When we execute this code:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
That does not result in a huge array the size of the data. Those expressions represent lazy transformations of the base data; nothing is materialized until an action is invoked. They can be further transformed until we reduce the data to the information set we desire.
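To see this (a small sketch reusing the names above), the two lines return immediately because they only record a recipe; an action is what triggers the actual pass over the data:
val n = fields.count() // only here does Spark actually scan the input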
In this case, we want the stock_symbol field of a record encoded as CSV:
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
I'm also going to assume that the data file contains a banner like this:
*---------*
| NASDAQ: |
*---------*
The first thing we're going to do is remove anything that looks like this banner. In fact, I'm going to assume that the first field is the name of a stock exchange that starts with a letter. We will do this before we do any splitting, resulting in:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields = validLines.map(line => line.split(","))
It helps to write the types of the variables, to have peace of mind that we have the data types that we expect. As we progress in our Scala skills that might become less important. Let's rewrite the expression above with types:
val lines: RDD[String] = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines: RDD[String] = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields: RDD[Array[String]] = validLines.map(line => line.split(","))
We are interested in the stock_symbol field, which positionally is the element #1 in a 0-based array:
val stockSymbols:RDD[String] = fields.map(record => record(1))
If we want to count the symbols, all that's left is to issue a count:
val totalSymbolCount = stockSymbols.count()
That's not very helpful because we have one entry for every record. Slightly more interesting questions would be:
How many different stock symbols do we have?
val uniqueStockSymbols = stockSymbols.distinct.count()
How many records for each symbol do we have?
val countBySymbol = stockSymbols.map(s => (s,1)).reduceByKey(_+_)
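For inspection, one could sort and print the most frequent symbols (a sketch using the RDD API with the names from above):
countBySymbol.sortBy(_._2, ascending = false)
  .take(10)
  .foreach { case (symbol, n) => println(s"$symbol: $n") }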
In Spark 2.0, CSV support for DataFrames and Datasets is available out of the box.
Given that our data does not have a header row with the field names (which is usual in large datasets), we will need to provide the column names:
val stockDF = sparkSession.read.csv("/tmp/quotes_clean.csv").toDF("exchange", "symbol", "date", "open", "high", "low", "close", "volume", "adj_close")
We can answer our questions very easily now (the $-syntax and count require the usual imports):
import sparkSession.implicits._
import org.apache.spark.sql.functions.count
val uniqueSymbols = stockDF.select("symbol").distinct().count()
val recordsPerSymbol = stockDF.groupBy($"symbol").agg(count($"symbol"))
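To take a quick look at the results (a sketch; show() prints a small tabular sample to stdout):
println(s"unique symbols: $uniqueSymbols")
recordsPerSymbol.show(10)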
In my Yii2 project I have an array, for example:
$array = [8, 3, 6];
So when I print out the array it is
[8, 3, 6]
So when I use the same array in a where statement, the order gets jumbled:
$class = ModelClass::find()->where(['array_no' => $array])->all();
When I print out $class, the output comes back sorted in ascending order:
3 in the first place,
6 in the second place,
8 in the third place.
How can I stop this from happening? I want the output returned in the same order as the array.
You should use ORDER BY FIELD(), e.g.:
$models = ModelClass::find()
->where(['array_no' => $array])
->orderBy(new \yii\db\Expression('FIELD (array_no, '.implode(',', $array).')'))
->all();
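If the values can come from user input, they should be sanitized before being interpolated into the raw SQL expression. A sketch for integer IDs (the intval mapping is an added hardening step, not part of the original answer):
$safe = implode(',', array_map('intval', $array)); // force integers
$models = ModelClass::find()
    ->where(['array_no' => $array])
    ->orderBy(new \yii\db\Expression("FIELD (array_no, $safe)"))
    ->all();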
I want to use an expression like
${ %$hashref{'key_name'} }[1]
or
%$hashref{'key_name'}->[1]
to get - and then test - the second (index = 1) member of an array (reference) held by my hash as its "key_name"'s value. But I cannot.
This code here is correct (it works), but I would have liked to combine the two lines that I have marked into one single, efficient, Perl-elegant line.
foreach my $tag ('doit', 'source', 'dest') {
    my $exists = exists( $$thisSectionConfig{$tag} );
    my @tempA = %$thisSectionConfig{$tag};    # this line
    my $non0len = ( $tempA[1] =~ /\w+/ );     # and this line
    if ( !$exists || !$non0len ) {
        print STDERR "No complete \"$tag\" ... etc ... \n";
        # program exit ...
    }
}
I know you (the general 'you') can elegantly combine these two lines. Could someone tell me how I could do this?
This code is testing a section of a config file that has been read into a $thisSectionConfig reference-to-a-hash by Config::Simple. Each config-file key=value pair is then (I looked with Data::Dumper) held as a two-member array: [0] is the key, [1] is the value. The $tag's are configuration settings that must be present in the config file sections being processed by this code snippet.
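For reference, a dump of one such entry might look roughly like this (hypothetical values, matching the description above):
# hypothetical Data::Dumper view of one config entry:
# 'doit' => [ 'doit', 'yes' ]    # [0] is the key, [1] is the value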
Thank you for any help.
You should read about the arrow operator (->). I guess you want something like this:
foreach my $tag ('doit', 'source', 'dest') {
    if ( exists $thisSectionConfig->{$tag} ) {
        my $non0len = ( $thisSectionConfig->{$tag}->[1] =~ /(\w+)/ );
    }
    else {
        print STDERR "No complete \"$tag\" ... etc ... \n";
        # program exit ...
    }
}