Here is my code:
require 'CSV'
contents = CSV.read('/Users/namename/Desktop/test.csv')
arr = []
first_row = contents[0]
contents.shift
contents.each do |row|
  if row[12].to_s =~ /PO Box/i or row[12].to_s =~ /^[[:digit:]]/
    #File.open('out.csv','a').puts('"'+row.join('","')+'"')
    arr << row
  else
    row[12], row[13] = row[13], row[12]
    #File.open('out.csv','a').puts('"'+row.join('","')+'"')
    arr << row
  end
end
arr.unshift(first_row)
arr.each do |row|
  File.open('out.csv', 'a').puts('"' + row.join('","') + '"')
end
First I .shift so that my header fields don't catch the pattern (and ultimately swap) in the first conditional of the first .each loop. Then I conditionally swap cell values that match the pattern, and then store the correctly shifted values in an array. After this, I .unshift to attempt to put back my header fields I stored in first_row, but when I view the resulting out.csv file I get all my headers in the middle. Why?
Example data:
https://gist.github.com/anonymous/e1017d3ba81634d9e1227e7fe49536cb
The root of your problem is that you're not using the features provided by the CSV module. (As for why the headers land in the middle: you open out.csv in append mode ('a'), so every run adds its output, header row included, after whatever previous runs already wrote to the file.)
First, CSV.read takes a :headers option that will catch the headers for you so you don't have to worry about them and, as a bonus, lets you access fields by header name instead of numeric index (handy if the CSV fields' order is changed). With the :headers option, CSV.read returns a CSV::Table object, which has another benefit I'll discuss in a moment.
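As a quick illustration of the headers: option (using an inline string with CSV.parse, the string analogue of CSV.read, and made-up column names):

```ruby
require 'csv'

# Inline sample standing in for test.csv; "name" and "address" are
# made-up column names for this demo.
data = "name,address\nAlice,PO Box 12\nBob,5 Main St\n"

table = CSV.parse(data, headers: true)
table.class          # => CSV::Table
table[0]["address"]  # => "PO Box 12"
```

Each row is a CSV::Row, so fields can be read by header name regardless of column position.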
Second, you're generating your own faux-CSV output instead of letting the CSV module do it. This, in particular, is needless and dangerous:
...puts('"' + row.join('","') + '"')
If any of your column values has quotation marks or newlines, which need to be escaped, this will fail, badly. You could use CSV.generate_line(row) instead, but you don't need to if you've used the headers: option above. Like I said, it returns a CSV::Table object, which has a to_csv method, and that method accepts a :force_quotes option. That will quote every field just like you want—and, more importantly, safely.
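To see concretely why the hand-rolled quoting is dangerous, here is a small comparison sketch (the sample values are made up):

```ruby
require 'csv'

row = ['Acme "Widgets"', 'PO Box 7']

# Hand-rolled quoting: the embedded quotes pass through unescaped,
# so the resulting line is not valid CSV.
naive = '"' + row.join('","') + '"'

# CSV.generate_line escapes quotes by doubling them (RFC 4180) and,
# with force_quotes: true, quotes every field.
safe = CSV.generate_line(row, force_quotes: true)
```

Parsing `safe` back with CSV.parse_line round-trips to the original values; parsing `naive` does not.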
Armed with the above knowledge, the code becomes much saner:
require "csv"

contents = CSV.read('/Users/namename/Desktop/test.csv', headers: true)

contents.each do |row|
  next unless row["DetailActiveAddressLine1"] =~ /PO Box|^[[:digit:]]/i
  row["DetailActiveAddressLine1"], row["DetailActiveAddressLine2"] =
    row["DetailActiveAddressLine2"], row["DetailActiveAddressLine1"]
end

# 'w' rather than 'a': appending would leave output from earlier runs in the file
File.open('out.csv', 'w') do |file|
  file.write(contents.to_csv(force_quotes: true))
end
If you'd like, you can see a version of the code in action (without file access, of course) on Ideone: http://ideone.com/IkdCpb
I'm trying to figure out how to check whether an element belongs to an array in Ruby. I'm reading from a file (say demo.txt) that has a comma-separated pair on each line. I concatenate the two halves of each pair and push the result into an array. Later I need to check whether a specific value belongs to the array. I can see the array is populated successfully, but the membership check fails, i.e. it can't find the element in the array. My demo.txt is as follows:
a, b
c, d
The Ruby code is as follows:
array = Array.new
File.readlines('demo.txt').each do |line|
  line.slice! ", "
  array.push line
end
array.each do |d|
  puts d
end
if array.include? 'ab'
  puts "correct" # this is not printed
end
How do I check if the array contains the element 'ab'?
Your array contains "ab\n", not "ab" — File.readlines keeps the trailing newline on each line.
Instead of
array.each do |d|
  puts d
end
use p (which prints each value's inspect form) to check the values:
p array
#=> ["ab\n", "cd"]
To fix the issue, use chomp on line:
File.readlines('demo.txt').each do |line|
  line = line.chomp
  ...
end
One caution: replacing the include? check with
if array.find 'ab'
  puts "correct"
end
does not really test membership — find called without a block just returns an Enumerator, which is always truthy, so "correct" would be printed even when 'ab' is absent. With the chomp fix in place, the original array.include? 'ab' check works as intended.
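Putting the pieces together, a minimal self-contained sketch (using an inline string in place of demo.txt):

```ruby
# Inline stand-in for File.readlines('demo.txt')
lines = "a, b\nc, d\n".lines

array = []
lines.each do |line|
  line = line.chomp  # drop the trailing newline that readlines keeps
  line.slice! ", "   # remove the separator: "a, b" -> "ab"
  array.push line
end

array.include? 'ab'  # => true
```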
I have some data delimited by a set of string markers: a marker ending in 1 gives the start row and a marker ending in 2 gives the end row of the data. These markers come from a fixed list that I search for. If a string is not found, I want to skip it and just give the array a set of 0s as values. The code I use to search and break the big data sheet into variables based on the markers is below:
tasknames = {'task1';'task2';'task3';'task4'};
for n = 1:numel(tasknames)
  first = find(~cellfun(@isempty, strfind(Text(:,9), [tasknames{n}, '_1']))) + 1;
  last  = find(~cellfun(@isempty, strfind(Text(:,9), [tasknames{n}, '_2']))) + 1;
  task_data{n} = Data(first:last, :);
end
Basically, if strfind comes back empty when it goes to find that start and end row in Data, it crashes, because none exists. How do I avoid this crash and just fill task_data{n} for that particular marker with, say, 100 zeros?
The crash occurs when first and last are empty, so you can check for that:
if isempty(first) || isempty(last)  % a missing start OR end marker means "not found"
  task_data{n} = zeros(1, 100);
else
  task_data{n} = Data(first:last, :);
end
I can't seem to get this down right.
I have a big list of words in an array. I want these words to appear in 8 'tables', each 14 rows by 9 columns, with words running down each column of the table.
So I can get as far as columns = words.each_slice(14) and then later tables = columns.each_slice(9), but from there I'm not sure. I feel like I should make a hash and append the nth item of each column to an array, and then maybe join them with a tab delimiter.
My destination is a spreadsheet, so maybe outputting to CSV would make sense? I'm just not sure how to have it grouped into separate 'tables' (instead of just 9 columns with lots of rows and no separation), but maybe all it takes is a CSV line of blanks?
Anyway, any input or insight would be welcome.
This will do what you ask.
You don't say exactly what output format you want, so I've surrounded each word with quotes, joined the words with commas, and put a blank line between tables.
My "words" are just the numbers 1 to 200.
words = (1 .. 200).map { |v| '%03d' % v }

words.each_slice(14).each_slice(9) do |table|
  (0 ... table[0].size).each do |i|
    row = table.map { |column| column[i] }
    row.pop if row[-1].nil?
    puts row.map { |cell| %<"#{cell}"> }.join ','
  end
  puts ''
end
output
"001","015","029","043","057","071","085","099","113"
"002","016","030","044","058","072","086","100","114"
"003","017","031","045","059","073","087","101","115"
"004","018","032","046","060","074","088","102","116"
"005","019","033","047","061","075","089","103","117"
"006","020","034","048","062","076","090","104","118"
"007","021","035","049","063","077","091","105","119"
"008","022","036","050","064","078","092","106","120"
"009","023","037","051","065","079","093","107","121"
"010","024","038","052","066","080","094","108","122"
"011","025","039","053","067","081","095","109","123"
"012","026","040","054","068","082","096","110","124"
"013","027","041","055","069","083","097","111","125"
"014","028","042","056","070","084","098","112","126"
"127","141","155","169","183","197"
"128","142","156","170","184","198"
"129","143","157","171","185","199"
"130","144","158","172","186","200"
"131","145","159","173","187"
"132","146","160","174","188"
"133","147","161","175","189"
"134","148","162","176","190"
"135","149","163","177","191"
"136","150","164","178","192"
"137","151","165","179","193"
"138","152","166","180","194"
"139","153","167","181","195"
"140","154","168","182","196"
You were on the right track. Here's a solution that writes in CSV format for lists whose length is a multiple of 14*9. You could also create a spreadsheet directly with the appropriate gem. A version that handles lists of any length follows below.
Note that Array already includes Enumerable, so each_slice is available out of the box on any modern Ruby; only on very old versions (pre-1.8.7) did it need require 'enumerator'.
(0...14*9*2).each_slice(14).collect.each_slice(9) do |table|
  table.transpose.each { |row| puts row.inspect.delete('[]') }
  puts
end
If you need to pad your input array to a multiple of 14*9 so that the transpose works, you can use the following:
def print_csv(array)
  mod = array.length % (14*9)
  array = array + [nil] * (14*9 - mod) if mod > 0
  array.each_slice(14).collect.each_slice(9) do |table|
    table.transpose.each { |row| puts row.reject(&:nil?) * ',' }
    puts
  end
end
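Since the destination is a spreadsheet, another option is to let the csv library do the quoting. A sketch under the same 14-row-by-9-column layout, padding short columns so the transpose lines up:

```ruby
require 'csv'

words = (1..200).map { |v| '%03d' % v }

csv = CSV.generate(force_quotes: true) do |out|
  words.each_slice(14).each_slice(9) do |table|
    height = table.first.size
    # Pad short columns with nil so transpose lines up, then drop the nils.
    table.each { |col| col.fill(nil, col.size...height) }
    table.transpose.each { |row| out << row.compact }
    out << []  # blank row separates the tables
  end
end

puts csv
```

This reproduces the quoted, comma-joined layout shown earlier, with every field safely escaped by the csv library.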