I am trying to load a CSV file into a 2-D array, and my code is as follows:
var data: Array[Array[AnyRef]] = _
data = Source.fromFile(filename).getLines.map(_.split(",")).flatten.toArray
But it doesn't work: the declared type is a 2-D array, while flatten collapses the split lines into a single 1-D array of strings.
This question provides a couple of solutions, but none of them works for me for some reason.
Does anyone have any ideas?
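For what it's worth, a minimal sketch that keeps the 2-D structure (assuming a simple comma-separated file with no quoted fields, and reusing filename from the question): dropping the flatten leaves one inner Array[String] per line, which also avoids the AnyRef type mismatch.
import scala.io.Source

// One Array[String] per line; without flatten the rows stay separate,
// so the result is a true 2-D array.
val data: Array[Array[String]] = Source.fromFile(filename).getLines().map(_.split(",")).toArray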
I want to convert a large number of 2D arrays (over 200k arrays) to 1D arrays with Google Apps Script. When the number of 2D data arrays is 120,000, the following code works without any problem, but it fails when the number of arrays is 130,000. It looks like there is some limitation in the .concat method. In my actual application, the error message returned is "RangeError: Maximum call stack size exceeded". Can anyone help? Or is there a better method to get 1D array data directly from one column of data in Google Sheets? Thanks a lot for any recommendation!
function test() {
  var write = [];
  for (var i = 0; i < 130000; ++i) {
    write[i] = ['AAPL'];
  }
  SpreadsheetApp.getActiveSheet().getRange(1, 1, write.length, 1).setValues(write);
  write = [];

  // The code below converts 2D array data to 1D successfully when the number of data arrays is 120000.
  var data = SpreadsheetApp.getActiveSheet().getRange('A1:A120000').getValues();
  data = [].concat(...data);
  console.log(data.length);

  // The code below fails when the number of data arrays is 130000.
  data = SpreadsheetApp.getActiveSheet().getRange('A1:A130000').getValues();
  data = [].concat(...data);
}
Use .flat():
SpreadsheetApp.getActiveSheet().getRange('A1:A130000').getValues().flat();
Here is some reference.
An IDE like Visual Studio Code will recognize .flat() as an array method. The online script editor does not offer it in autocomplete, but it works. Because the online IDE does not have it in its IntelliSense, the editor will not know you are working with an array from that point on.
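The concat failure is most likely because [].concat(...data) spreads all 130,000 rows out as individual function arguments, which overflows the engine's argument/stack limit. If .flat() is not available in your runtime, a plain loop sidesteps the limit entirely; a minimal sketch (flattenColumn is a hypothetical helper name):
function flattenColumn(values) {
  // values is the Nx1 array returned by getValues().
  // Pushing row by row never builds a giant argument list,
  // so it works for any number of rows.
  var out = [];
  for (var i = 0; i < values.length; i++) {
    out.push(values[i][0]);
  }
  return out;
}

var data = flattenColumn(SpreadsheetApp.getActiveSheet().getRange('A1:A130000').getValues());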
I have a pyaudio stream running like so:
self.stream = self.microphone.open(format=pyaudio.paInt16,
                                   channels=1,
                                   rate=self.SAMPLING_RATE,
                                   input=True,
                                   output=True,
                                   frames_per_buffer=self.SAMPLES_PER_CHUNK)
and I am saving each chunk to an array after decoding it through numpy like so:
data = self.stream.read(self.SAMPLES_PER_CHUNK)
data = np.frombuffer(data, dtype='b')
recorded.append(list(data))
And I would later on like to be able to combine these chunks into a single array and save them to a wav file like this:
from scipy.io.wavfile import write
total = []
for i in recorded[start:stop]:
    total += i  # the issue is here
write('output2.wav', 48000, np.array(total).astype('int16'))
But clearly it is not as simple as merging the arrays, as the output file is always just a snippet of static. Could someone tell me how I should be doing this?
I actually realized that it was a question of decoding the data: paInt16 delivers 16-bit samples, but dtype='b' reads the buffer as signed 8-bit values, splitting every sample in half, which is why the output is static. So if you change this:
data = np.frombuffer(data, dtype='b')
to this:
data = np.frombuffer(data, dtype='int16')
the rest of the code works just fine.
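Putting the pieces together, a minimal end-to-end sketch (48000 Hz matches the write call in the question; the chunk size and count are arbitrary example values):
import numpy as np
import pyaudio
from scipy.io.wavfile import write

SAMPLING_RATE = 48000
SAMPLES_PER_CHUNK = 1024

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=SAMPLING_RATE,
                 input=True, frames_per_buffer=SAMPLES_PER_CHUNK)

recorded = []
for _ in range(200):  # roughly 4 seconds of audio
    chunk = stream.read(SAMPLES_PER_CHUNK)
    # paInt16 produces 16-bit samples, so decode as int16, not 'b'
    recorded.append(np.frombuffer(chunk, dtype=np.int16))

stream.stop_stream()
stream.close()
pa.terminate()

# Concatenate the chunks and write a mono 16-bit WAV
write('output2.wav', SAMPLING_RATE, np.concatenate(recorded))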
I'm using this JSON content (I'm open to suggestions on better formatting here):
{"forwardingZones": [{"name": "corp.mycompany.com","dnsServers": ["192.168.0.1","192.168.0.2"]}]}
Note: we may add more items to this list as we scale out, both more IPs and more names, hence the join(',') at the end of the code below.
And I'm trying to loop through it to get this result:
corp.mycompany.com=192.168.0.1;192.168.0.2
Using this code:
forward_zones = node['DNS']['forward_zones'].each do |forwarded_zone|
  forwarded_zone_name = forwarded_zone['name']
  forwarded_zone_dns_servers = forwarded_zone['dns_servers'].join(';')
  "#{forwarded_zone_name}=#{forwarded_zone_dns_servers}"
end.join(',')
This is the result that I get:
{"dnsServers"=>["192.168.0.1", "192.168.0.2"], "name"=>"corp.mycompany.com"}
What am I doing wrong?
x.each returns x. You want x.map.
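Applied to the code in the question, a minimal sketch of the map version:
forward_zones = node['DNS']['forward_zones'].map do |forwarded_zone|
  # map collects the string built in the block; each throws it away
  # and returns the original array, which is why the hash was printed.
  "#{forwarded_zone['name']}=#{forwarded_zone['dns_servers'].join(';')}"
end.join(',')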
I have a lot of data in several hundred .mat files from which I want to extract specific data. All of my .mat file names contain specific numbers that identify the content, in the form Number1_Number2_Number3_Number4.mat:
01_33_06_121.mat
01_24_12_124.mat
02_45_15_118.mat
02_33_11_190.mat
01_33_34_142.mat
Now I want to extract, for example, all the data from files with Number1 = 01 or Number1 = 02 and Number2 = 33.
Before I start to write a program from scratch, I would like to know if there is a simple way to do this with MATLAB. Does anybody know a fast way to solve this?
Thanks a lot!
There are multiple ways you can do this; off the top of my head, the following can work:
Obtain all the file names into a cell array:
allFiles = dir('folder/*.mat');  % 'folder' is the directory that holds the data
allNames = { allFiles.name };
Loop through the file names and test each one against the condition with a regular expression:
% Example pattern: Number1 = 01 or 02 and Number2 = 33
pattern = '^0[12]_33_\d+_\d+\.mat$';
for i = 1:numel(allNames)
    if ~isempty(regexp(allNames{i}, pattern, 'once'))
        disp(allNames{i})
    end
end
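To actually pull the data out of the matching files, call load inside the if; a minimal sketch, reusing pattern and allNames from above (matched is a hypothetical name):
matched = {};
for i = 1:numel(allNames)
    if ~isempty(regexp(allNames{i}, pattern, 'once'))
        % load returns a struct with one field per variable in the .mat file
        matched{end+1} = load(fullfile('folder', allNames{i}));
    end
end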
This is my first posting and I am really new to programming.
I have a folder with some files that I want to process and then load into a numpy array with the values I need. I do:
listing = os.listdir(datapath)
my_array=np.zeros(shape=(0,5))
for infile in listing:
    dataset = open(infile).readlines()[1:]
    data = np.genfromtxt(dataset, usecols=(1,6,7,8,9))
    new_array = np.vstack((my_array, data))
and although I have 2 files in listing (the datapath folder), the array gets overwritten and I end up with only the values of the second file.
Any ideas?
Thanks,
If I understand you correctly, the solution to your problem is simply that you need to vstack onto "my_array", not onto a new one.
Just replace the last line with this one and it should work:
my_array = np.vstack((my_array, data))
However, I do not think this is the most efficient way to do it. Since you know how many files are in that folder, just predefine the size of the array and fill its content.
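A minimal sketch of that preallocation idea (assuming datapath from the question, and that every file contributes the same number of rows; rows_per_file is a hypothetical name):
import os
import numpy as np

listing = os.listdir(datapath)
rows_per_file = 100  # hypothetical: rows contributed by each file
my_array = np.zeros((len(listing) * rows_per_file, 5))
for k, infile in enumerate(listing):
    dataset = open(os.path.join(datapath, infile)).readlines()[1:]
    data = np.genfromtxt(dataset, usecols=(1, 6, 7, 8, 9))
    # Write each file's block into its preallocated slot
    my_array[k * rows_per_file:(k + 1) * rows_per_file, :] = data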
Here is what you need to do to read all the files from a specific folder into a numpy array. I have a folder test containing only .txt files, and the following file.py sits next to that folder. Each .txt file contains a 4x4 matrix/array. After running the script, the obtained matrices will be a numpy array of shape [Nx4x4].
import numpy as np
from glob import glob

def read_all_files():
    # Collect every file in test/ and stack the 4x4 arrays into one Nx4x4 array
    file_names = glob('test/*')
    arrays = [np.loadtxt(f) for f in file_names]
    matrices = np.stack(arrays)  # np.concatenate would give a (4N)x4 array instead
    return matrices
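And a quick usage check of the result's shape:
all_matrices = read_all_files()
print(all_matrices.shape)  # (N, 4, 4): one 4x4 matrix per file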