Goal
I am trying to write data into my spreadsheet from a two-dimensional array that can contain one or many products.
Take a look at my two-dimensional array:
Now take a look at my desired result:
I can't seem to place the data into the Excel cells correctly. I thought there was a one-liner that could do this, but I tried a different approach and just can't seem to wrap my head around these 2D arrays... Friday can't come soon enough!
Take a look at my code:
If wSheet IsNot Nothing Then
    Dim colRange() As String = {"B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U"}
    For i As Integer = products.GetLowerBound(0) To products.GetUpperBound(0)
        'Start at B2 ... C2 ... D2 ...
        Dim r As Microsoft.Office.Interop.Excel.Range = wSheet.Range(colRange(i) & "2").Resize(products.GetLength(1))
        r.Value2 = products
    Next
End If
This really just doesn't work; it only writes the (0,0), (1,0) and (2,0) values to the Excel sheet... Any suggestions?
Bah, I knew it could've been done in a one-liner! Lol. I swapped the rows and columns in my 2D array and set that as the range.
Then it was just a matter of:
wSheet.Range("B2").Resize(UBound(products, 1), UBound(products, 2)).Value2 = products
Related
I am struggling to understand this behavior in Lua. If I execute this in the local console:
tab={}
tab[100] = "E"
the table looks like this:
{
[100] = "E"
}
Now I am populating my table in a for-loop with a few conditions:
cell_types = {}
cell = 1
for y=1, 1000 do
  for x=1, 1000 do
    if some_condition then
      cell_types[cell] = "E"
    elseif some_other_condition then
      cell_types[cell] = "M"
    else
      cell_types[cell] = "C"
    end
    cell = cell + 1
  end
end
Now however the table looks like this:
{ "E", "E", "M", "E", "C", "C", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E" }
If I remove the first assignment (cell_types[cell] = "E"), then I again get key/value pairs:
{
[101] = "M",
[102] = "M",
[103] = "M",
[104] = "M",
[105] = "M",
[106] = "M",
[107] = "M"
}
What could cause this behavior? And how can I make sure to always store key/value pairs in my table?
Thank you.
A Lua table is always a collection of key/value pairs.
Internally, though, it is optimized to store a contiguous sequence as an array part and discontinuous values as a hash map.
By removing some of the assignments, as in the cell_types[cell] = "E" case, you introduce holes into the sequence, so it no longer fits the array implementation and has to be iterated as a hash map with pairs(). Note that ipairs() will only iterate the array part of the table, and will stop as soon as it encounters a hole in the sequence.
So, I have a 2D array called allBusinesses of type BusinessClass. I fill this array in the following way:
allBusinesses[0].append(contentsOf: [B11, B12, B13, B14, B15, B16])
allBusinesses[1].append(contentsOf: [B21, B22, B23, B24, B25, B26])
Where B11, B12 ... B26 are all BusinessClass instances.
I have another 2D BusinessClass array called myOwnedBusinesses. I create it the following way:
var myOwnedBusinesses: [[BusinessClass]] = [[], []]
In my application, I have a tableView which contains all elements of allBusinesses, where each section contains the rows of the second dimension of the array, so that: allBusinesses[section][row]. When I select a random cell in the tableView, the corresponding BusinessClass element is added to the myOwnedBusinesses array, in the following way:
myOwnedBusinesses[selectedSection].append(allBusinesses[selectedSection][selectedRow])
As you can imagine from the code, if I for instance select the cell at section 0 row 3 and then the cell at section 0 row 2, the order of myOwnedBusinesses will be wrong: the opposite of allBusinesses. In short, I want to maintain the same element order between the two arrays, even though the myOwnedBusinesses array is not always completely filled.
Here is my solution
let section0 = ["a", "b", "c", "d", "e", "f"]
let section1 = ["h", "i", "j", "k", "l", "m"]
var all:[[String]] = [[],[]]
all[0].append(contentsOf: section0)
all[1].append(contentsOf: section1)
To keep the original indexes, flatten the original array:
let all_flat = all.flatMap {$0}
Let's say the user selects the cells in this order : "d", "e", "a", "h", "m", and "k".
var myArray = [["d", "e", "a"], ["h", "m", "k"]]
And then sort each array inside myArray
myArray = myArray.map {
    $0.sorted { str1, str2 in
        return all_flat.index(of: str1)! < all_flat.index(of: str2)!
    }
}
For your case (note that index(of:) requires BusinessClass to conform to Equatable):
let allBusinesses_flat = allBusinesses.flatMap {$0}
myOwnedBusinesses = myOwnedBusinesses.map {
    $0.sorted { b1, b2 in
        return allBusinesses_flat.index(of: b1)! < allBusinesses_flat.index(of: b2)!
    }
}
This solution is expensive memory-wise; storing the selected indexes instead would be preferable.
I'm working on a problem around Instagram hashtags. Users often have "bundles" of hashtags that they copy and paste when they are posting images. Different bundles for different topics.
So I might have my "Things from the garden" bundle, which would be ["garden", "beautifullawns", "treesoutside", "greenlondon"] and so on. They're often twenty to thirty items long.
Sometimes they might have several of these to keep things varied.
What I want to do is recommend a bundle of tags to use, based on the images they have posted in the past.
To do that I would have several arrays of tags that they have used previously:
x = ["a", "b", "c", "d", "e"]
y = ["a", "b", "d", "e", "f", "g"]
z = ["a", "c", "d", "e", "f", "h"]
...
I'd like to find largest common subsets of entries for these arrays.
So in this case, the largest subset would be ["a", "d", "e"] within those three. That's simple enough to achieve naively by using something like x & y & z.
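For concreteness, the naive version is just Ruby's Array#& chained:

```ruby
x = ["a", "b", "c", "d", "e"]
y = ["a", "b", "d", "e", "f", "g"]
z = ["a", "c", "d", "e", "f", "h"]

# Array#& keeps the receiver's order and drops anything missing from the argument
p x & y & z  #=> ["a", "d", "e"]
```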
However, I'd like to create a ranking of these subsets based on their size and frequency within all of the arrays under consideration, so that I can display the most commonly used bundles of tags:
[
{bundle: ["a","d","e"], frequency: 3, size: 3},
{bundle: ["e","f"], frequency: 2, size: 2},
{bundle: ["a","b"], frequency: 2, size: 2},
{bundle: ["b","d"], frequency: 2, size: 2},
...
]
Presumably, with a limitation on the minimum size of these bundles, say two items.
I'm using Elasticsearch for indexing, but I've found that attempting to do this with aggregations is challenging, so I'm pulling out the images into Ruby and then working there to create the listing.
As a first pass, I've looped over all of these arrays and found the pairwise intersections with each of the other arrays, using an MD5 hash of each subset as a unique identifier. But this limits the results, and I suspect adding further passes makes this approach quite inefficient.
require 'digest'
x = ["a", "b", "c", "d", "e"]
y = ["a", "b", "d", "e", "f", "g"]
z = ["a", "c", "d", "e", "f", "h"]
def bundle_report(arrays)
  arrays = arrays.collect(&:sort)
  working = {}
  arrays.each do |array|
    arrays.each do |comparison|
      next if array == comparison
      subset = array & comparison
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length
    end
  end
  working
end
puts bundle_report([x, y, z])
=> {"bb4a3fb7097e63a27a649769248433f1"=>{:subset=>["a", "b", "d", "e"], :frequency=>2, :size=>4}, "b6fdd30ed956762a88ef4f7e8dcc1cae"=>{:subset=>["a", "c", "d", "e"], :frequency=>2, :size=>4}, "ddf4a04e121344a6e7ee2acf71145a99"=>{:subset=>["a", "d", "e", "f"], :frequency=>2, :size=>4}}
Adding a second pass gets this to a better result:
def bundle_report(arrays)
  arrays = arrays.collect(&:sort)
  working = {}
  arrays.each do |array|
    arrays.each do |comparison|
      next if array == comparison
      subset = array & comparison
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length
    end
  end
  original_working = working.dup
  original_working.each do |key, item|
    original_working.each do |comparison_key, comparison|
      next if item == comparison
      subset = item[:subset] & comparison[:subset]
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length
    end
  end
  working
end
puts bundle_report([x, y, z])
=> {"bb4a3fb7097e63a27a649769248433f1"=>{:subset=>["a", "b", "d", "e"], :frequency=>2, :size=>4}, "b6fdd30ed956762a88ef4f7e8dcc1cae"=>{:subset=>["a", "c", "d", "e"], :frequency=>2, :size=>4}, "ddf4a04e121344a6e7ee2acf71145a99"=>{:subset=>["a", "d", "e", "f"], :frequency=>2, :size=>4}, "a562cfa07c2b1213b3a5c99b756fc206"=>{:subset=>["a", "d", "e"], :frequency=>6, :size=>3}}
Can you suggest an efficient way to establish this ranking of large subsets?
Rather than doing an intersection of every array with every other array, which could quickly get out of hand, I'd be tempted to keep a persistent index (in Elasticsearch?) of all the combinations seen so far, along with a count of their frequency. Then, for every new set of tags, increment the frequency counts by 1 for all the sub-combinations of that set.
Here's a quick sketch:
require 'digest'
def bundle_report(arrays, min_size = 2, max_size = 10)
  combination_index = {}
  arrays.each do |array|
    (min_size..[max_size, array.length].min).each do |length|
      array.combination(length).each do |combination|
        key = Digest::MD5.hexdigest(combination.join(''))
        combination_index[key] ||= {bundle: combination, frequency: 0, size: length}
        combination_index[key][:frequency] += 1
      end
    end
  end
  combination_index.to_a.sort_by { |x| [x[1][:frequency], x[1][:size]] }.reverse
end
input_arrays = [
  ["a", "b", "c", "d", "e"],
  ["a", "b", "d", "e", "f", "g"],
  ["a", "c", "d", "e", "f", "h"]
]
bundle_report(input_arrays)[0..5].each do |x|
  puts x[1]
end
Which results in:
{:bundle=>["a", "d", "e"], :frequency=>3, :size=>3}
{:bundle=>["d", "e"], :frequency=>3, :size=>2}
{:bundle=>["a", "d"], :frequency=>3, :size=>2}
{:bundle=>["a", "e"], :frequency=>3, :size=>2}
{:bundle=>["a", "d", "e", "f"], :frequency=>2, :size=>4}
{:bundle=>["a", "b", "d", "e"], :frequency=>2, :size=>4}
This might not scale very well either though.
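To put a rough number on the scaling concern: the count of indexed combinations grows combinatorially with bundle length. A quick back-of-the-envelope check (the choose helper below is just for this illustration, not part of the report code):

```ruby
# n choose k, multiplied incrementally so each intermediate value stays an integer
def choose(n, k)
  (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }
end

# combinations generated for a single 30-tag bundle with min_size=2, max_size=10
total = (2..10).sum { |k| choose(30, k) }
puts total  #=> 53009071
```

So a single 30-tag bundle already contributes around 53 million entries under the default limits, which is why capping max_size aggressively matters.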
I'm trying to find a way to remove any elements in an array that fall between two "markers", but there are a few quirks. To give a proper spec, the function I'm making is supposed to do this:
Remove any elements from the earliest start marker S to the earliest end marker E after the S, or the final element if there is none. Repeat this until there are no Ss left. Remove any remaining Es.
This is my current code:
# if the starting token is `1` and the ending one is `2`:
while clearing.include? 1
  from = clearing.index(1)
  # fall back to the final element when there is no end marker after the start
  to = clearing[from..-1].index(2)
  to = to.nil? ? clearing.length - 1 : to + from
  clearing.slice!(from..to)
end
clearing.delete(2)
Which, when run with clearing set to:
["a", "b", "c", "d", "e", "f", 1, "g", "h", "i", "j", 1, "k", 2, "l", 2, 2, "m", "n", "o", "p", "q", "r", "s", 1, "t", "u", "v", "w", "x", 1, "y", "z"]
properly returns
["a", "b", "c", "d", "e", "f", "l", "m", "n", "o", "p", "q", "r", "s"]
It works, but it's ugly, and I'm fairly sure there's a more idiomatic way to do it. I can't find it, though, or think of it, so I'm asking here: Is my code the only (sane) way to do what I'm trying to?
This is, unfortunately, a severe case of the XY problem -- the specific task I'm trying to accomplish is removing comments from a string. I could do this with a fairly simple regex (/#.*?$/m) if I had the input string, but because it's a class assignment, I have to delete from anything starting with # to the :newline token in an array of them. Please don't suggest "This is weird, why not try solving the overall problem a different way" -- I know it's weird. I wish I could.
Overall, I think any solution is going to be about as elegant as yours. The only thing I can think of is that you could make it faster by iterating over the array only once:
i = 0
while i < clearing.length
  if clearing[i] == 1
    # delete from this start marker through the next end marker (or the end)
    loop do
      break if i >= clearing.length
      val = clearing.delete_at(i)
      break if val == 2
    end
  elsif clearing[i] == 2
    clearing.delete_at(i) # stray end marker
  else
    i += 1
  end
end
Which is definitely less elegant.
Ruby's fairly obscure "flip-flop" operator could be used here. I'm not recommending that it be used, just sayin'.
clearing.reject { |e| (e==1..e==2) ? true : false }.reject { |e| e==2 }
#=> ["a", "b", "c", "d", "e", "f", "l", "m", "n", "o", "p", "q", "r", "s"]
The first block returns false until 1 is detected. It then returns true for the 1 and continues to return true until 2 is detected. It returns true for the 2, but then returns false until the next (if any) 1 is detected, and so on. Hence, the name "flip-flop". The example often given for the flip-flop operator is reading sections of a file that are delimited with start/end markers.
It may seem that (e==1..e==2) ? true : false could be simplified to just (e==1..e==2), but that is not the case: outside a conditional the expression is treated as an ordinary range (and raises an ArgumentError at runtime, since booleans cannot form a Range). Flip-flops must appear in a conditional context.
Replace both rejects with reject! if clearing is to be changed in place.
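For what it's worth, the two rejects can also be folded into a single pass, since the ternary's else branch can catch the stray end markers (still flip-flop trickery, so the same caveat applies):

```ruby
clearing = ["a", "b", "c", "d", "e", "f", 1, "g", "h", "i", "j", 1, "k", 2,
            "l", 2, 2, "m", "n", "o", "p", "q", "r", "s", 1, "t", "u", "v",
            "w", "x", 1, "y", "z"]

# the flip-flop rejects marked regions; e == 2 in the else branch catches stray end markers
result = clearing.reject { |e| (e==1..e==2) ? true : e == 2 }
p result  #=> ["a", "b", "c", "d", "e", "f", "l", "m", "n", "o", "p", "q", "r", "s"]
```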
I have arrays basket = ["O", "P", "W", "G"] and sack = ["G", "P", "O", "W"]. How can I compare these arrays to determine whether the elements are arranged in the same order or not?
You can use:
basket == sack #=> false, for given values
If you only want to check that they contain the same elements, regardless of order:
basket.sort == sack.sort #=> true
Also, please check "Comparing two arrays in Ruby" for a discussion on comparing arrays.
If both arrays can contain different number of elements and possibly some extra elements, and you want to find out whether those elements that are common to both arrays appear in exact same order, then, you could do something like below:
basket = ["D", "O", "P", "W", "G", "C"]
sack = ["Z", "O", "W", "P", "G", "X"]
p (basket - (basket - sack)) == (sack - (sack - basket)) #=> false, since "P" and "W" appear in a different relative order
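To unpack why that works: basket - (basket - sack) keeps only the elements common to both arrays, in basket's order, while the mirror expression keeps them in sack's order; the comparison then checks whether the shared elements occur in the same relative order. A sketch with the original four-element arrays:

```ruby
basket = ["O", "P", "W", "G"]
sack   = ["G", "P", "O", "W"]

common_in_basket_order = basket - (basket - sack)  #=> ["O", "P", "W", "G"]
common_in_sack_order   = sack - (sack - basket)    #=> ["G", "P", "O", "W"]

# the shared elements appear in a different relative order
p common_in_basket_order == common_in_sack_order   #=> false
```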
Here is the solution I was able to come up with.
ordered = 0
disordered = 0
index = 0
while index < basket.length
  if basket[index] == sack[index]
    ordered += 1
  elsif basket.include?(sack[index])
    disordered += 1
  end
  index += 1
end
puts "there are #{ordered} ordered common items and #{disordered} disordered common items"
I hope it helps.