R convert data.frame to json - arrays

I'm trying to convert a data.frame into JSON format. My data.frame has the following structure:
a <- rep(c("Mario", "Luigi"), each = 3)
b <- sample(34:57, size = length(a))
df <- data.frame(a,b)
> df
a b
1 Mario 43
2 Mario 34
3 Mario 36
4 Luigi 45
5 Luigi 52
6 Luigi 35
What I want to create is something like this (to finally write it to a .json file):
[
{
"a": "Mario",
"b": [43, 34, 36]
},
{
"a": "Luigi",
"b": [45, 52, 35]
}
]
I've tried several packages that handle JSON, but so far I have failed to produce this kind of output. I usually end up with something like this:
[
{
"a":"Mario",
"b":43
},
{
"a":"Mario",
"b":34
},
{
"a":"Mario",
"b":36
},
{
"a":"Luigi",
"b":45
},
{
"a":"Luigi",
"b":52
},
{
"a":"Luigi",
"b":35
}
]

If you nest b as a list column, it will convert correctly:
library(jsonlite)
# converts b to nested list column
df2 <- aggregate(b ~ a, df, list)
df2
## a b
## 1 Luigi 49, 42, 37
## 2 Mario 46, 50, 45
toJSON(df2, pretty = TRUE)
## [
## {
## "a": "Luigi",
## "b": [49, 42, 37]
## },
## {
## "a": "Mario",
## "b": [46, 50, 45]
## }
## ]
or if you prefer dplyr:
library(dplyr)
df %>%
  group_by(a) %>%
  summarise(b = list(b)) %>%
  toJSON(pretty = TRUE)
or data.table:
library(data.table)
toJSON(setDT(df)[, .(b = list(b)), by = a], pretty = TRUE)
which both return the same thing.

To get the required JSON structure you will want your data in a list, something like:
l <- list(list(a = "Mario",
               b = c(43, 34, 36)),
          list(a = "Luigi",
               b = c(45, 52, 35)))
## then can use the library(jsonlite) to convert to JSON
library(jsonlite)
toJSON(l, pretty = T)
[
{
"a": ["Mario"],
"b": [43, 34, 36]
},
{
"a": ["Luigi"],
"b": [45, 52, 35]
}
]
So to split your data into this format, you can do
l <- lapply(unique(df$a), function(x) list(a = x, b = df[df$a == x, "b"]))
## and then the conversion works
toJSON(l, pretty = T)
[
{
"a": ["Mario"],
"b": [44, 49, 50]
},
{
"a": ["Luigi"],
"b": [39, 57, 35]
}
]
This works for the simple case, but if your data gets more complex it might be better to re-design how you create it: build lists to begin with instead of a data.frame.
Reference
The jsonlite vignette is a very good resource.


Get consistent byte array output from json.Marshal

I'm working on a hashing function for a map[string]interface{}. Most hashing libraries require []byte as input to compute the hash.
I tried json.Marshal: for simple maps it works correctly, but when I add some complexity and shuffle the items, json.Marshal fails to give me a consistent byte-array output.
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    data := map[string]interface{}{
        "id":    "124",
        "name":  "name",
        "count": 123456,
        "sites": []map[string]interface{}{
            {
                "name":  "123445",
                "count": 234324,
                "id":    "wersfs",
            },
            {
                "id":    "sadcacasca",
                "name":  "sdvcscds",
                "count": 22,
            },
        },
        "list": []int{5, 324, 123, 123, 123, 14, 34, 52, 3},
    }
    data1 := map[string]interface{}{
        "name": "name",
        "id":   "124",
        "sites": []map[string]interface{}{
            {
                "id":    "sadcacasca",
                "count": 22,
                "name":  "sdvcscds",
            },
            {
                "count": 234324,
                "name":  "123445",
                "id":    "wersfs",
            },
        },
        "count": 123456,
        "list":  []int{123, 14, 34, 52, 3, 5, 324, 123, 123},
    }
    jsonStr, _ := json.Marshal(data)
    jsonStr1, _ := json.Marshal(data1)
    fmt.Println(jsonStr)
    fmt.Println(jsonStr1)
    for i := 0; i < len(jsonStr); i++ {
        if jsonStr[i] != jsonStr1[i] {
            fmt.Println("Byte arrays not equal")
        }
    }
}
This is what I have tried, and it fails to give me a consistent output.
Moreover, I was thinking of writing a function that would sort the map and its values as well, but then got stuck on how to sort the
"sites": []map[string]interface{}
slice. I tried json.Marshal and also sorting the map, but got stuck.
Your data structures are not equivalent. According to the JSON rules, arrays are ordered, therefore [123, 14, 34, 52, 3, 5, 324, 123, 123] is not the same as [5, 324, 123, 123, 123, 14, 34, 52, 3]. No wonder the hashes are different. If you need different arrays with the same elements to produce the same hash, you need to canonicalize the arrays before hashing, e.g. sort them.
Here is how it could be done: https://go.dev/play/p/OHq7jsX_cNw
Before serializing, it recursively goes down the maps and arrays and sorts every array it finds:
// Prepares data by sorting arrays in place
func prepare(data map[string]any) map[string]any {
    for _, value := range data {
        switch v := value.(type) {
        case []int:
            prepareIntArray(v)
        case []string:
            prepareStringArray(v)
        case []map[string]any:
            prepareMapArrayById(v)
            for _, obj := range v {
                prepare(obj)
            }
        case map[string]any:
            prepare(v)
        }
    }
    return data
}

// Sorts int array in place
func prepareIntArray(a []int) {
    sort.Ints(a)
}

// Sorts string array in place
func prepareStringArray(a []string) {
    sort.Strings(a)
}

// Sorts an array of objects by their "id" fields
func prepareMapArrayById(mapSlice []map[string]any) {
    sort.Slice(mapSlice, func(i, j int) bool {
        return getId(mapSlice[i]) < getId(mapSlice[j])
    })
}

// Extracts the "id" field from a JSON object. Returns an empty string
// if there is no "id" or it is not a string.
func getId(v map[string]any) string {
    idAny, ok := v["id"]
    if !ok {
        return ""
    }
    idStr, ok := idAny.(string)
    if !ok {
        return ""
    }
    return idStr
}
As both marshaled outputs are string representations of the same map, just in different sequences, they contain the same multiset of characters.
Following this logic, if you sort both jsonStr and jsonStr1, the sorted []byte slices will be exactly equal, which you can then use to compute your hash value.
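This character-multiset idea is easy to check; here is a quick sketch (in Ruby, for brevity) showing that two serializations of the same data in a different order have equal sorted byte arrays. Note this only holds because both payloads contain exactly the same keys and values:

```ruby
require "json"

# Two serializations of the same data, with keys and array elements reordered.
a = JSON.generate({ "id" => "124", "list" => [5, 324, 123] })
b = JSON.generate({ "list" => [123, 5, 324], "id" => "124" })

a == b                        # false: key and element order differ
a.bytes.sort == b.bytes.sort  # true: same multiset of characters
```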

MongoDB query all documents contains ids that does not exist anymore in the collection

I ran into an issue that I haven't found a solution to yet.
I have a collection with dozens of documents; each document contains a list (let's use 'list' as the key for it) with the ids of other documents (they are connected in some way).
Some of the documents in the collection were deleted, and I'm trying to find all the documents that still contain ids of documents that no longer exist in the collection.
For example: I want to get the document with the id 5e3266e9bd724a000107a902, because its list contains the id 5e32a7f7bd724a00012c1104, which does not exist anymore.
Here is a solution that works exploiting $lookup on the same collection (think "self-JOIN"):
var r = [
{_id: 0, aa: [ 10, 11, 12 ] }
,{_id: 1, aa: [ 10, 11, 12 ] }
,{_id: 2, aa: [ 20, 21, 22 ] } // 21 is on watch list...
,{_id: 3, aa: [ 21, 20, 12 ] } // this one too and 21 is in different position
,{_id: 4, aa: [ 10, 22, 12 ] }
,{_id: 5, aa: [ 10, 22, 23 ] } // this one too...
,{_id: 6, aa: [ 10, 22, 21, 23 ] } // this one has BOTH 21 and 23
,{_id: 10, X:10}
,{_id: 11, X:11}
,{_id: 12, X:12}
,{_id: 20, X:20}
,{_id: 21, X:21}
,{_id: 22, X:22}
,{_id: 23, X:23}
];
db.foo.insert(r);
// Here is the whole thing:
db.foo.aggregate([ ]);
// Delete _id 21 and 23:
db.foo.remove({_id: 21});
db.foo.remove({_id: 23});
// Double check:
c = db.foo.aggregate([ ]);
// Where does id 21 and/or 23 NOT exist anymore? Note we don't ask for 21 or 23.
// We just know we expect a query to return docs that indicate 21 and/or 23
// are no longer there:
c = db.foo.aggregate([
  // NOTE! By using localField: 'aa', we are asking for EACH element in the
  // array to be used as a value to match to _id (in the same collection):
  {$lookup: {from: 'foo', localField: 'aa', foreignField: '_id', as: 'X'}},
  // Exploit "make a list of scalars from array of objects" notation by taking
  // the input array $X and extracting the _id field:
  {$project: {X: {$setDifference: ["$aa", "$X._id"]}}},
  // Keep those that match -- and protect against empty sets
  // with $ifNull to turn a null into an array of len 0:
  {$match: {$expr: {$gt: [{$size: {$ifNull: ['$X', []]}}, 0]}}}
]);
{ "_id" : 2, "X" : [ 21 ] }
{ "_id" : 3, "X" : [ 21 ] }
{ "_id" : 5, "X" : [ 23 ] }
{ "_id" : 6, "X" : [ 21, 23 ] }

How to iterate through an array of objects in ruby

I have this array right here and I need to get the "id" of each object
[{ id: 1, points: 60 }, { id: 2, points: 20 }, { id: 3, points: 95 }, { id: 4, points: 75 }]
customers = [{ id: 1, points: 90 }, { id: 2, points: 20 }, { id: 3, points: 70 }, { id: 4, points: 40 }, { id: 5, points: 60 }, { id: 6, points: 10}]
I know how to go through the whole array with
#scores.each_with_index{ |score, index| }
However, I haven't found a way to get each object's points.
Perhaps you are looking for the following.
customers = [
  { id: 1, points: 90 }, { id: 2, points: 20 },
  { id: 3, points: 70 }, { id: 4, points: 40 },
  { id: 5, points: 60 }, { id: 6, points: 10 }
]
h = customers.each_with_object({}) do |g, h|
  id, points = g.values_at(:id, :points)
  h[id] = points
end
#=> {1=>90, 2=>20, 3=>70, 4=>40, 5=>60, 6=>10}
This allows you to easily extract information of interest, such as the following.
h.keys
#=> [1, 2, 3, 4, 5, 6]
h.values
#=> [90, 20, 70, 40, 60, 10]
h[2]
#=> 20
h.key?(5)
#=> true
h.key?(7)
#=> false
h.value?(70)
#=> true
h.value?(30)
#=> false
What you called score is actually a hash like { id: 1, points: 60 }, so I'm going to call it item.
So, let's try
#scores.each_with_index do |item, index|
  puts "#{index + 1}: id #{item[:id]}, points #{item[:points]}"
end
So, I have this array right here and I need to get the id of each object
In order to transform each element of a collection, you can use Enumerable#map (or in this case more precisely Array#map):
customers.map { _1[:id] }
#=> [1, 2, 3, 4, 5, 6]
This construct is an array of hashes, so we iterate over each element and print out the values inside it. The following code shows how:
customers.each { |obj| p obj[:id].to_s + " " + obj[:points].to_s }
Here we iterate through each element and print the individual entries of the hash via obj[:id] and obj[:points] (obj being each individual hash here).
What about something like this?
customers.map(&:to_proc).map{ |p| [:id, :points].map(&p) }
=> [[1, 90], [2, 20], [3, 70], [4, 40], [5, 60], [6, 10]]

Select items from arrays of an array with certain indexes

I want to select the items at index positions 0, 2 and 4 from each array in arr.
Input array
arr = [
["name", "address", "contact", "company", "state"],
["n1", "add1", "c1", "cp1", "s1"],
["n2", "add2", "c2", "cp2", "s2"]
]
Output array
arr = [
["name", "contact", "company"],
["n1", "c1", "cp1"],
["n2", "c2", "cp2"]
]
As an alternative to deleting the unneeded items, you can just select the needed ones.
arr.map{|subarray| subarray.values_at(0, 2, 4) }
# => [["name", "contact", "state"], ["n1", "c1", "s1"], ["n2", "c2", "s2"]]
If you want to make this more generic and select only the even-numbered columns, you could do it like this:
arr.map{|a| a.select.with_index { |e, i| i.even? }}
which gives
[["name", "contact", "state"], ["n1", "c1", "s1"], ["n2", "c2", "s2"]]
Original question:
I want to delete items from each array of arr from index position 1 and 5
We can use delete_if to achieve this. Here:
arr.map { |a| a.delete_if.with_index { |el, index| [1,5].include? index } }
# => [["name", "contact", "company", "state"], ["n1", "c1", "cp1", "s1"], ["n2", "c2", "cp2", "s2"]]
PS: the output in the question is incorrect: for the arrays at index 1 and 2, the example is deleting the element at index 4.
Ruby has very nice destructuring syntax, so you can extract all your values in a one-liner:
a = 0.upto(5).to_a # => [0, 1, 2, 3, 4, 5]
x, _, y, _, z = a
x # => 0
y # => 2
z # => 4
The underscores are just placeholder for values you don't need.
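Applied to the arr from the question, the same destructuring works directly in the block parameters (a sketch; this picks positions 0, 2 and 4, matching the stated indexes rather than the question's sample output):

```ruby
arr = [
  ["name", "address", "contact", "company", "state"],
  ["n1", "add1", "c1", "cp1", "s1"],
  ["n2", "add2", "c2", "cp2", "s2"]
]

arr.map { |x, _, y, _, z| [x, y, z] }
# => [["name", "contact", "state"], ["n1", "c1", "s1"], ["n2", "c2", "s2"]]
```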
It can also be done with the each_slice method.
If the values at 0, 2 and 4 can be treated as a list with every second value omitted (_), it can be written like:
arr.map { |a| a.each_slice(2).map { |item, _| item } }

Sum an array of hashes in Ruby

I have a Ruby array of 3 hashes. Each piece has information about report_data (consumption of 2 types of energy) and monthes_data (the same in each). Please see the code below.
arr = [{:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[10, 20, 30, 40]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[20, 30, 40, 50]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}},
       {:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[15, 25, 35, 45]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[25, 35, 45, 55]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}},
       {:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[17, 27, 37, 47]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[27, 37, 47, 57]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}}]
I'm new to Ruby. Please help me sum all the data by energy type. In the end I want one hash with report_data and monthes_data. I need the result to look like:
{:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[42, 72, 102, 132]},
                {:type=>{"id"=>2, "name"=>"water"}, :data=>[72, 102, 132, 162]}],
 :monthes_data=>{:monthes=>["jan", "feb"]}}
arr = [{:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[10, 20, 30, 40]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[20, 30, 40, 50]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}},
       {:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[15, 25, 35, 45]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[25, 35, 45, 55]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}},
       {:report_data=>[{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[17, 27, 37, 47]},
                       {:type=>{"id"=>2, "name"=>"water"}, :data=>[27, 37, 47, 57]}],
        :monthes_data=>{:monthes=>["jan", "feb"]}}]
acc = {}
arr.each do |report|
  report[:report_data].each do |entry|
    type = entry[:type]["id"]
    entry[:data].each_with_index do |value, idx|
      acc[type] ||= []
      acc[type][idx] = (acc[type][idx] || 0) + value
    end
  end
end
p acc
outputs
{1=>[42, 72, 102, 132], 2=>[72, 102, 132, 162]}
You should be able to reformat this into your record
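For example, one way to finish the job (a sketch; the acc values and the id-to-type lookup below are reconstructed by hand rather than taken from your live data):

```ruby
# Summed data keyed by type id, as produced by the accumulator above.
acc = { 1 => [42, 72, 102, 132], 2 => [72, 102, 132, 162] }

# Hypothetical lookup from type id back to the full :type hash.
types = {
  1 => { "id" => 1, "name" => "electricity" },
  2 => { "id" => 2, "name" => "water" }
}

result = {
  :report_data  => acc.map { |id, data| { :type => types[id], :data => data } },
  :monthes_data => { :monthes => ["jan", "feb"] }
}
```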
Code
def convert(arr)
  { :months_data=>arr.first[:months_data],
    :report_data=>arr.map { |h| h[:report_data] }.
                      transpose.
                      map { |d| { :type=>d.first[:type] }.
                            merge(:data=>d.map { |g| g[:data] }.transpose.map { |a| a.reduce(:+) }) } }
end
Example
Half the battle in problems such as this one is visualizing the data. It's much clearer, imo, when written like this:
arr = [
  {:report_data=>[
     {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[10, 20, 30, 40]},
     {:type=>{"id"=>2, "name"=>"water"}, :data=>[20, 30, 40, 50]}],
   :months_data=>{:months=>["jan", "feb"]}},
  {:report_data=>[
     {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[15, 25, 35, 45]},
     {:type=>{"id"=>2, "name"=>"water"}, :data=>[25, 35, 45, 55]}],
   :months_data=>{:months=>["jan", "feb"]}},
  {:report_data=>[
     {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[17, 27, 37, 47]},
     {:type=>{"id"=>2, "name"=>"water"}, :data=>[27, 37, 47, 57]}],
   :months_data=>{:months=>["jan", "feb"]}}
]
Let's try it:
convert(arr)
#=> {:months_data=>{:months=>["jan", "feb"]},
# :report_data=>[
# {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[42, 72, 102, 132]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[72, 102, 132, 162]}
# ]
# }
Explanation
The first thing I did was concentrate on computing the sums, so I extracted the values of :report_data. That key, and the key-value pair of months' data (which is the same for all elements of arr), can be added back in later.
b = arr.map { |h| h[:report_data] }
#=> [
# [{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[10, 20, 30, 40]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[20, 30, 40, 50]}
# ],
# [{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[15, 25, 35, 45]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[25, 35, 45, 55]}
# ],
# [{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[17, 27, 37, 47]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[27, 37, 47, 57]}
# ]
# ]
If you are not certain that the elements of each array will be sorted by "id", you could write:
b = arr.map { |h| h[:report_data].sort_by { |g| g[:type]["id"] } }
c = b.transpose
#=> [
# [{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[10, 20, 30, 40]},
# {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[15, 25, 35, 45]},
# {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[17, 27, 37, 47]}
# ],
# [{:type=>{"id"=>2, "name"=>"water"}, :data=>[20, 30, 40, 50]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[25, 35, 45, 55]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[27, 37, 47, 57]}
# ]
# ]
e = c.map {|d| { :type=>d.first[:type] }.
merge(:data=>d.map { |g| g[:data] }.transpose.map { |a| a.reduce(:+) }) }
#=> [{:type=>{"id"=>1, "name"=>"electricity"}, :data=>[42, 72, 102, 132]},
# {:type=>{"id"=>2, "name"=>"water"} , :data=>[72, 102, 132, 162]}]
Lastly, we need to put the key :report_data back in and add the months' data:
{ :months_data=>arr.first[:months_data], :report_data=>e }
#=> {:months_data=>{:months=>["jan", "feb"]},
# :report_data=>[
# {:type=>{"id"=>1, "name"=>"electricity"}, :data=>[42, 72, 102, 132]},
# {:type=>{"id"=>2, "name"=>"water"}, :data=>[72, 102, 132, 162]}
# ]
# }
For clarity I've reformatted the input array (shown under Explanation below) and removed the :monthes_data key, since that seems to be unrelated to your question.
TL;DR
def zip_sum(arr1, arr2)
  return arr2 if arr1.nil?
  arr1.zip(arr2).map { |a, b| a + b }
end

def sum_report_data(arr)
  arr.flat_map do |item|
    item[:report_data].map { |datum| datum.values_at(:type, :data) }
  end
  .reduce({}) do |sums, (type, data)|
    sums.merge(type => data) do |_, old_data, new_data|
      zip_sum(old_data, new_data)
    end
  end
  .map { |type, data| { type: type, data: data } }
end
p sum_report_data(arr)
# =>
[ { type: { "id" => 1, "name" => "electricity" }, data: [ 42, 72, 102, 132 ] },
{ type: { "id" => 2, "name" => "water" }, data: [ 72, 102, 132, 162 ] }
]
Explanation
arr = [
{ report_data: [
{ type: { "id" => 1, "name" => "electricity" },
data: [ 10, 20, 30, 40 ]
},
{ type: { "id" => 2, "name" => "water" },
data: [ 20, 30, 40, 50 ]
}
]
},
{ report_data: [
{ type: { "id" => 1, "name" => "electricity" },
data: [ 15, 25, 35, 45 ]
},
{ type: { "id" => 2, "name" => "water" },
data: [ 25, 35, 45, 55 ]
}
]
},
{ report_data: [
{ type: { "id" => 1, "name" => "electricity" },
data: [ 17, 27, 37, 47 ]
},
{ type: { "id" => 2, "name" => "water" },
data: [ 27, 37, 47, 57 ]
}
]
}
]
Step 1
First, let's define a helper method to sum the values of two arrays:
def zip_sum(arr1, arr2)
  return arr2 if arr1.nil?
  arr1.zip(arr2).map { |a, b| a + b }
end
zip_sum([ 1, 2, 3 ], [ 10, 20, 30 ])
# => [ 11, 22, 33 ]
zip_sum(nil, [ 5, 6, 7 ])
# => [ 5, 6, 7 ]
The way zip_sum works is by "zipping" the two arrays together using Enumerable#zip (e.g. [1, 2].zip([10, 20]) returns [ [1, 10], [2, 20] ]), then adding each pair together.
Step 2
Next, let's use Enumerable#flat_map to get the parts of the data we care about:
result1 = arr.flat_map do |item|
  item[:report_data].map { |datum| datum.values_at(:type, :data) }
end
# result1 =>
[ [ { "id" => 1, "name" => "electricity" }, [ 10, 20, 30, 40 ] ],
[ { "id" => 2, "name" => "water" }, [ 20, 30, 40, 50 ] ],
[ { "id" => 1, "name" => "electricity" }, [ 15, 25, 35, 45 ] ],
[ { "id" => 2, "name" => "water" }, [ 25, 35, 45, 55 ] ],
[ { "id" => 1, "name" => "electricity" }, [ 17, 27, 37, 47 ] ],
[ { "id" => 2, "name" => "water" }, [ 27, 37, 47, 57 ] ]
]
Above we've just grabbed the :type and :data values out of each hash in the :report_data arrays.
Step 3
Next let's use Enumerable#reduce to iterate over the array of arrays and calculate a running sum of the :data values using the zip_sum method we defined earlier:
result2 = result1.reduce({}) do |sums, (type, data)|
  sums.merge(type => data) do |_, old_data, new_data|
    zip_sum(old_data, new_data)
  end
end
# result2 =>
{ { "id" => 1, "name" => "electricity" } => [ 42, 72, 102, 132 ],
{ "id" => 2, "name" => "water" } => [ 72, 102, 132, 162 ]
}
The result might look a little odd to you because we usually use strings or symbols as hash keys, but in this hash we're using other hashes (the :type values from above) as keys. That's one nice thing about Ruby: You can use any object as a key in a hash.
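A tiny illustration of hashes used as hash keys (with made-up values): lookups compare by content, so an equal-by-value hash finds the entry regardless of key order.

```ruby
# A hash keyed by another hash, like the sums hash in the reduce step.
sums = { { "id" => 1, "name" => "electricity" } => [42, 72] }

# An equal hash, built separately and in a different order, works as a lookup key:
sums[{ "name" => "electricity", "id" => 1 }]
# => [42, 72]
```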
Inside the reduce block, sums is the hash that's ultimately returned. It starts out as an empty hash ({}, the value we passed to reduce as an argument). type is the hash we're using as a key and data is the array of integers. In each iteration the next values from the result1 array are assigned to type and data, and sums is updated with whatever value was returned at the end of the block in the previous iteration.
We're using Hash#merge in kind of a tricky way:
sums.merge(type => data) do |_, old_data, new_data|
  zip_sum(old_data, new_data)
end
This merges the hash { type => data } (remember that type is the :type hash
and data is the array of integers) into the hash sums. If there are any key collisions, the block is invoked. Since we only have one key, type, the block is invoked whenever sums[type] already exists. If it does, we call zip_sum with the previous value of sums[type] and data, effectively keeping a running sum of data.
In effect, it's basically doing this:
sums = {}
type, data = result1[0]
sums[type] = zip_sum(sums[type], data)
type, data = result1[1]
sums[type] = zip_sum(sums[type], data)
type, data = result1[2]
# ...and so on.
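The merge-with-block pattern can also be seen in isolation (a minimal sketch with made-up values):

```ruby
sums = { x: [1, 2] }

# On a key collision (:x), the block decides the merged value;
# non-colliding keys (:y) are copied over unchanged.
sums.merge(x: [10, 20], y: [3]) do |_key, old, new|
  old.zip(new).map { |a, b| a + b }
end
# merged value for :x is [11, 22]; :y stays [3]
```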
Step 4
We now have this hash in result2:
{ { "id" => 1, "name" => "electricity" } => [ 42, 72, 102, 132 ],
{ "id" => 2, "name" => "water" } => [ 72, 102, 132, 162 ]
}
That's the data we want, so now we just have to take it out of this weird format and put it into a regular hash with the keys :type and :data:
result3 = result2.map {|type, data| { type: type, data: data } }
# result3 =>
[ { type: { "id" => 1, "name" => "electricity" },
data: [ 42, 72, 102, 132 ]
},
{ type: { "id" => 2, "name" => "water" },
data: [ 72, 102, 132, 162 ]
}
]
