Efficient way of flattening a recursive data structure in golang - arrays

I have a recursive data structure that can contain a few different types of data:
type Data interface {
    // Some methods
}

type Pair struct { // implements Data
    fst Data
    snd Data
}

type Number float64 // implements Data
Now I want to flatten a chain of Pairs into a []Data. However, the Data in the fst field should not be flattened; only data in snd should be. E.g.:
chain := Pair{Number(1.0), Pair{Number(2.0), Pair{Number(3.0), nil}}}
chain2 := Pair{Pair{Number(1.0), Number(4.0)}, Pair{Number(2.0), Pair{Number(3.0), nil}}}
becomes:
data := []Data{Number(1.0), Number(2.0), Number(3.0)}
data2 := []Data{Pair{Number(1.0), Number(4.0)}, Number(2.0), Number(3.0)}
My naive approach would be:
var data []Data
var chain Data = Pair{Number(1.0), Pair{Number(2.0), Pair{Number(3.0), nil}}}
for chain != nil {
    p := chain.(Pair)
    data = append(data, p.fst)
    chain = p.snd
}
Is there a more efficient approach that can flatten a data structure like the one in the variable chain into a []Data slice?

You can use a recursive function. On the way down, add up the number of pairs, at the bottom, allocate the array, and on the way back up, fill the array from back to front.
If you need to support arbitrary trees, you can add a size method to Data, and then do another tree traversal to actually fill the array.
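A minimal sketch of the count-then-fill idea for the snd-linked chain from the question (flattenChain is a hypothetical name; it assumes the chain is terminated by nil):
// flattenChain recurses down the snd links, counting pairs on the way down;
// at the bottom it allocates the result exactly once, and each return on the
// way back up fills one slot, from back to front.
func flattenChain(d Data, depth int) []Data {
    p, ok := d.(Pair)
    if !ok { // nil (or any non-Pair) terminates the chain
        return make([]Data, depth)
    }
    res := flattenChain(p.snd, depth+1)
    res[depth] = p.fst
    return res
}
Called as flattenChain(chain, 0), it performs a single allocation, unlike the append-based loop.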

Huh, your naive approach doesn't work for Pairs nested inside fst. If you had chain := Pair{Pair{Number(1.0), Number(2.0)}, Number(3.0)}, it would end up as []Data{Pair{Number(1.0), Number(2.0)}, Number(3.0)}. This is an inherently recursive problem, so why not implement it as such?
I suggest adding a flatten() method to your interface. Pairs can just recursively flatten themselves, and Numbers just return their value.
Here's a fully working example with some minimal testing:
package main

import "fmt"

type Data interface {
    flatten() []Data
}

type Pair struct {
    fst Data
    snd Data
}

type Number float64

func (p Pair) flatten() []Data {
    res := []Data{}
    if p.fst != nil {
        res = append(res, p.fst.flatten()...)
    }
    if p.snd != nil {
        res = append(res, p.snd.flatten()...)
    }
    return res
}

func (n Number) flatten() []Data {
    return []Data{n}
}

func main() {
    tests := []Data{
        Pair{Number(1.0), Pair{Number(2.0), Pair{Number(3.0), nil}}},
        Pair{Pair{Number(1.0), Number(2.0)}, Number(3.0)},
        Pair{Pair{Pair{Number(1.0), Number(2.0)}, Pair{Number(3.0), Number(4.0)}}, Pair{Pair{Number(5.0), Number(6.0)}, Number(7.0)}},
        Number(1.0),
    }
    for _, t := range tests {
        fmt.Printf("Original: %v\n", t)
        fmt.Printf("Flattened: %v\n", t.flatten())
    }
}
(This assumes that the top-level input Data is never nil).
The code prints:
Original: {1 {2 {3 <nil>}}}
Flattened: [1 2 3]
Original: {{1 2} 3}
Flattened: [1 2 3]
Original: {{{1 2} {3 4}} {{5 6} 7}}
Flattened: [1 2 3 4 5 6 7]
Original: 1
Flattened: [1]

As suggested, a recursive function fits this problem best. But it's also possible to write a non-recursive version (IMHO the recursive version is clearer):
func flatten(d Data) []Data {
    var res []Data
    stack := []Data{d}
    for len(stack) > 0 {
        switch x := stack[len(stack)-1].(type) {
        case Pair:
            stack[len(stack)-1] = x.snd
            stack = append(stack, x.fst)
        case Number:
            res = append(res, x)
            stack = stack[:len(stack)-1]
        default:
            if x == nil {
                stack = stack[:len(stack)-1]
            } else {
                panic("INVALID TYPE")
            }
        }
    }
    return res
}

Related

Sort while appending to slice?

I have a []byte which I need to sort in ascending order.
I get an object with the items and then iterate over the array to create the returned object:
// unfortunately, for some obscure reason I can't change the data types; the caller's type and the one returned from the function call are different, although both are []byte underneath (...)
type ID []byte

// in another package:
type ByteInterface []byte

func (c *Store) GetAll() []ByteInterface {
    returnObj := make([]ByteInterface, 0)
    obj, err := GetData()
    // err handling
    for _, b := range obj.IDs {
        returnObj = append(returnObj, ByteInterface(b))
    }
    return returnObj
}
So I'm asking myself if it is possible to do the append so that returnObj is sorted right away, or if I need to sort obj.IDs upfront (or sort returnObj afterwards).
Do the following: first, grow the target slice to its final length (possibly reallocating it):
numElems := len(returnObj)
returnObj = append(returnObj, make([]byte, len(obj))...)
Use the standard approach for insertion to keep the destination sorted by finding a place to put each byte from the source slice, one by one:
for _, b := range obj {
    i := sort.Search(numElems, func(i int) bool {
        return returnObj[i] >= b
    })
    if i < numElems {
        copy(returnObj[i+1:], returnObj[i:])
    }
    returnObj[i] = b
    numElems++
}
(The call to copy should be optimized by copying less but this is left as an exercise for the reader.)
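For concreteness, here is a minimal, self-contained sketch of the technique on plain []byte values (the names dst and src are illustrative, not from the question):
package main

import (
    "fmt"
    "sort"
)

func main() {
    dst := []byte{1, 4, 9} // already sorted
    src := []byte{3, 7, 0} // bytes to insert

    numElems := len(dst)
    // Grow dst once to its final length.
    dst = append(dst, make([]byte, len(src))...)

    for _, b := range src {
        // Binary-search the insertion point within the sorted prefix.
        i := sort.Search(numElems, func(i int) bool { return dst[i] >= b })
        if i < numElems {
            copy(dst[i+1:], dst[i:]) // shift the tail right by one
        }
        dst[i] = b
        numElems++
    }

    fmt.Println(dst) // [0 1 3 4 7 9]
}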

Converting a Swift array of Ints into an array of its running subtotals [duplicate]

I'd like a function runningSum on an array of numbers a (or any ordered collection of addable things) that returns an array of the same length, where each element i is the sum of all elements of a up to and including i.
Examples:
runningSum([1,1,1,1,1,1]) -> [1,2,3,4,5,6]
runningSum([2,2,2,2,2,2]) -> [2,4,6,8,10,12]
runningSum([1,0,1,0,1,0]) -> [1,1,2,2,3,3]
runningSum([0,1,0,1,0,1]) -> [0,1,1,2,2,3]
I can do this with a for loop, or whatever. Is there a more functional option? It's a little like a reduce, except that it builds a result array that has all the intermediate values.
Even more general would be to have a function that takes any sequence and provides a sequence that's the running total of the input sequence.
The general combinator you're looking for is often called scan, and can be defined (like all higher-order functions on lists) in terms of reduce:
extension Array {
    func scan<T>(initial: T, _ f: (T, Element) -> T) -> [T] {
        return self.reduce([initial], combine: { (listSoFar: [T], next: Element) -> [T] in
            // because we seeded it with a non-empty
            // list, it's easy to prove inductively
            // that this unwrapping can't fail
            let lastElement = listSoFar.last!
            return listSoFar + [f(lastElement, next)]
        })
    }
}
(But I would suggest that that's not a very good implementation.)
This is a very useful general function, and it's a shame that it's not included in the standard library.
You can then generate your cumulative sum by specializing the starting value and operation:
let cumSum = els.scan(0, +)
And you can drop the leading seed value rather simply:
let cumSumTail = els.scan(0, +).dropFirst()
Swift 4
The general sequence case
Citing the OP:
Even more general would be to have a function that takes any sequence
and provides a sequence that's the running total of the input
sequence.
Consider some arbitrary sequence (conforming to Sequence), say
var seq = 1... // 1, 2, 3, ... (CountablePartialRangeFrom)
To create another sequence which is the (lazy) running sum over seq, you can make use of the global sequence(state:next:) function:
var runningSumSequence =
    sequence(state: (sum: 0, it: seq.makeIterator())) { state -> Int? in
        if let val = state.it.next() {
            defer { state.sum += val }
            return val + state.sum
        }
        else { return nil }
    }

// Consume and print accumulated values less than 100
while let accumulatedSum = runningSumSequence.next(),
      accumulatedSum < 100 { print(accumulatedSum) }
// 1 3 6 10 15 21 28 36 45 55 66 78 91

// Consume and print next
print(runningSumSequence.next() ?? -1) // 120
// ...
If we'd like (for the joy of it), we could condense the closure to sequence(state:next:) above somewhat:
var runningSumSequence =
    sequence(state: (sum: 0, it: seq.makeIterator())) {
        (state: inout (sum: Int, it: AnyIterator<Int>)) -> Int? in
        state.it.next().map { (state.sum + $0, state.sum += $0).0 }
    }
However, type inference tends to break (still some open bugs, perhaps?) for these single-line returns of sequence(state:next:), forcing us to explicitly specify the type of state, hence the gritty ... in in the closure.
Alternatively: custom sequence accumulator
protocol Accumulatable {
    static func +(lhs: Self, rhs: Self) -> Self
}
extension Int: Accumulatable {}

struct AccumulateSequence<T: Sequence>: Sequence, IteratorProtocol
    where T.Element: Accumulatable {

    var iterator: T.Iterator
    var accumulatedValue: T.Element?

    init(_ sequence: T) {
        self.iterator = sequence.makeIterator()
    }

    mutating func next() -> T.Element? {
        if let val = iterator.next() {
            if accumulatedValue == nil {
                accumulatedValue = val
            }
            else { defer { accumulatedValue = accumulatedValue! + val } }
            return accumulatedValue
        }
        return nil
    }
}

var accumulator = AccumulateSequence(1...)

// Consume and print accumulated values less than 100
while let accumulatedSum = accumulator.next(),
      accumulatedSum < 100 { print(accumulatedSum) }
// 1 3 6 10 15 21 28 36 45 55 66 78 91
The specific array case: using reduce(into:_:)
As of Swift 4, we can use reduce(into:_:) to accumulate the running sum into an array.
let runningSum = arr
    .reduce(into: []) { $0.append(($0.last ?? 0) + $1) }
// [2, 4, 6, 8, 10, 12]
By using reduce(into:_:), the [Int] accumulator will not be copied in subsequent reduce iterations; citing the Language reference:
This method is preferred over reduce(_:_:) for efficiency when the
result is a copy-on-write type, for example an Array or a
Dictionary.
See also the implementation of reduce(into:_:), noting that the accumulator is provided as an inout parameter to the supplied closure.
However, each iteration will still result in an append(_:) call on the accumulator array; amortized O(1) averaged over many invocations, but still an arguably unnecessary overhead here as we know the final size of the accumulator.
Because arrays increase their allocated capacity using an exponential
strategy, appending a single element to an array is an O(1) operation
when averaged over many calls to the append(_:) method. When an array
has additional capacity and is not sharing its storage with another
instance, appending an element is O(1). When an array needs to
reallocate storage before appending or its storage is shared with
another copy, appending is O(n), where n is the length of the array.
Thus, knowing the final size of the accumulator, we could explicitly reserve such a capacity for it using reserveCapacity(_:) (as is done e.g. for the native implementation of map(_:))
let runningSum = arr
    .reduce(into: [Int]()) { (sums, element) in
        if let sum = sums.last {
            sums.append(sum + element)
        }
        else {
            sums.reserveCapacity(arr.count)
            sums.append(element)
        }
    } // [2, 4, 6, 8, 10, 12]
For the joy of it, condensed:
let runningSum = arr
    .reduce(into: []) {
        $0.append(($0.last ?? ($0.reserveCapacity(arr.count), 0).1) + $1)
    } // [2, 4, 6, 8, 10, 12]
Swift 3: Using enumerated() for subsequent calls to reduce
Another Swift 3 alternative (with an overhead ...) is using enumerated().map in combination with reduce within each element mapping:
func runningSum(_ arr: [Int]) -> [Int] {
    return arr.enumerated().map { arr.prefix($0).reduce($1, +) }
} /* thanks @Hamish for improvement! */
let arr = [2, 2, 2, 2, 2, 2]
print(runningSum(arr)) // [2, 4, 6, 8, 10, 12]
The upside is you won't have to use an array as the collector in a single reduce (instead, reduce is called repeatedly).
Just for fun: The running sum as a one-liner:
let arr = [1, 2, 3, 4]
let rs = arr.map({ () -> (Int) -> Int in var s = 0; return { (s += $0, s).1 } }())
print(rs) // [1, 3, 6, 10]
It does the same as the (updated) code in JAL's answer, in particular,
no intermediate arrays are generated.
The sum variable is captured in an immediately-evaluated closure returning the transformation.
If you just want it to work for Int, you can use this:
func runningSum(array: [Int]) -> [Int] {
    return array.reduce([], combine: { (sums, element) in
        return sums + [element + (sums.last ?? 0)]
    })
}
If you want it to be generic over the element type, you have to do a lot of extra work declaring the various number types to conform to a custom protocol that provides a zero element, and (if you want it generic over both floating point and integer types) an addition operation, because Swift doesn't do that already. (A future version of Swift may fix this problem.)
Assuming an array of Ints, sounds like you can use map to manipulate the input:
let arr = [0,1,0,1,0,1]
var sum = 0
let val = arr.map { (sum += $0, sum).1 }
print(val) // "[0, 1, 1, 2, 2, 3]\n"
I'll keep working on a solution that doesn't use an external variable.
I thought it'd be cool to extend Sequence with a generic scan function, as suggested in the great first answer.
Given this extension, you can get the running sum of an array like this: [1,2,3].scan(0, +)
But you can also get other interesting things…
Running product: array.scan(1, *)
Running max: array.scan(Int.min, max)
Running min: array.scan(Int.max, min)
Because the implementation is a function on Sequence and returns a Sequence, you can chain it together with other sequence functions. It is efficient, having linear running time.
Here's the extension…
extension Sequence {
    func scan<Result>(_ initialResult: Result, _ nextPartialResult: @escaping (Result, Self.Element) -> Result) -> ScanSequence<Self, Result> {
        return ScanSequence(initialResult: initialResult, underlying: self, combine: nextPartialResult)
    }
}

struct ScanSequence<Underlying: Sequence, Result>: Sequence {
    let initialResult: Result
    let underlying: Underlying
    let combine: (Result, Underlying.Element) -> Result

    typealias Iterator = ScanIterator<Underlying.Iterator, Result>
    func makeIterator() -> Iterator {
        return ScanIterator(previousResult: initialResult, underlying: underlying.makeIterator(), combine: combine)
    }

    var underestimatedCount: Int {
        return underlying.underestimatedCount
    }
}

struct ScanIterator<Underlying: IteratorProtocol, Result>: IteratorProtocol {
    var previousResult: Result
    var underlying: Underlying
    let combine: (Result, Underlying.Element) -> Result

    mutating func next() -> Result? {
        guard let nextUnderlying = underlying.next() else {
            return nil
        }
        previousResult = combine(previousResult, nextUnderlying)
        return previousResult
    }
}
One solution using reduce:
func runningSum(array: [Int]) -> [Int] {
    return array.reduce([], combine: { (result: [Int], item: Int) -> [Int] in
        if result.isEmpty {
            return [item] // first item, just take the value
        }
        // otherwise take the previous value and append the new item
        return result + [result.last! + item]
    })
}
I'm very late to this party. The other answers have good explanations. But none of them provide the initial result in a generic way. This implementation is useful to me.
public extension Sequence {
    /// A sequence of the partial results that `reduce` would employ.
    func scan<Result>(
        _ initialResult: Result,
        _ nextPartialResult: @escaping (Result, Element) -> Result
    ) -> AnySequence<Result> {
        var iterator = makeIterator()
        return .init(
            sequence(first: initialResult) { partialResult in
                iterator.next().map {
                    nextPartialResult(partialResult, $0)
                }
            }
        )
    }
}

extension Sequence where Element: AdditiveArithmetic & ExpressibleByIntegerLiteral {
    var runningSum: AnySequence<Element> { .init(scan(0, +).dropFirst()) }
}

Golang: calculate diff between two array of bytes and patch an array

I'm trying to find the difference between two byte arrays and store the delta.
I've read the documentation at https://golang.org/pkg/bytes/ but I didn't find anything that shows how to compute the diff.
Thanks.
Sounds like you just want a function which takes two byte slices and returns a new slice containing the difference between corresponding elements of the input slices. The example function below asserts that the input slices are both non-nil and have the same length. It also returns a slice of int16s, since the difference of two bytes lies in the range [-255, 255].
package main

import "fmt"

func main() {
    bs1 := []byte{0, 2, 255, 0}
    bs2 := []byte{0, 1, 0, 255}
    delta, err := byteDiff(bs1, bs2)
    if err != nil {
        panic(err)
    }
    fmt.Printf("OK: delta=%v\n", delta)
    // OK: delta=[0 1 255 -255]
}

func byteDiff(bs1, bs2 []byte) ([]int16, error) {
    // Ensure that we have two non-nil slices with the same length.
    if (bs1 == nil) || (bs2 == nil) {
        return nil, fmt.Errorf("expected a byte slice but got nil")
    }
    if len(bs1) != len(bs2) {
        return nil, fmt.Errorf("mismatched lengths, %d != %d", len(bs1), len(bs2))
    }
    // Populate and return the difference between the two.
    diff := make([]int16, len(bs1))
    for i := range bs1 {
        diff[i] = int16(bs1[i]) - int16(bs2[i])
    }
    return diff, nil
}
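The question also asks about patching; here is a minimal counterpart sketch (bytePatch is a hypothetical helper, simply the inverse of byteDiff above):
// bytePatch applies a delta produced by byteDiff to bs2, reconstructing
// bs1, since bs1[i] = bs2[i] + delta[i].
func bytePatch(bs2 []byte, delta []int16) ([]byte, error) {
    if len(bs2) != len(delta) {
        return nil, fmt.Errorf("mismatched lengths, %d != %d", len(bs2), len(delta))
    }
    out := make([]byte, len(bs2))
    for i := range bs2 {
        out[i] = byte(int16(bs2[i]) + delta[i])
    }
    return out, nil
}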

Most idiomatic way to select elements from an array in Golang?

I have an array of strings, and I'd like to exclude values that start with foo_ OR are longer than 7 characters.
I can loop through each element, run the if statement, and add it to a slice along the way. But I was curious if there was an idiomatic or more Go-like way of accomplishing that.
Just for example, the same thing might be done in Ruby as
my_array.select! { |val| val !~ /^foo_/ && val.length <= 7 }
There is no one-liner as you have it in Ruby, but with a helper function you can make it almost as short.
Here's our helper function that loops over a slice, and selects and returns only the elements that meet a criteria captured by a function value:
func filter(ss []string, test func(string) bool) (ret []string) {
    for _, s := range ss {
        if test(s) {
            ret = append(ret, s)
        }
    }
    return
}
Starting with Go 1.18, we can write it generically so it will work with all types, not just string:
func filter[T any](ss []T, test func(T) bool) (ret []T) {
    for _, s := range ss {
        if test(s) {
            ret = append(ret, s)
        }
    }
    return
}
Using this helper function, your task becomes:
ss := []string{"foo_1", "asdf", "loooooooong", "nfoo_1", "foo_2"}
mytest := func(s string) bool { return !strings.HasPrefix(s, "foo_") && len(s) <= 7 }
s2 := filter(ss, mytest)
fmt.Println(s2)
Output (try it on the Go Playground, or the generic version: Go Playground):
[asdf nfoo_1]
Note:
If it is expected that many elements will be selected, it might be profitable to allocate a "big" ret slice beforehand, and use simple assignment instead of the append(). And before returning, slice the ret to have a length equal to the number of selected elements.
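A quick sketch of that variant (filterPrealloc is a hypothetical name; the preallocation size is the input length, the upper bound on the number of matches):
func filterPrealloc(ss []string, test func(string) bool) []string {
    ret := make([]string, len(ss)) // upper bound: every element selected
    n := 0
    for _, s := range ss {
        if test(s) {
            ret[n] = s
            n++
        }
    }
    return ret[:n] // slice down to the number of selected elements
}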
Note #2:
In my example I chose a test() function which tells if an element is to be returned. So I had to invert your "exclusion" condition. Obviously you may write the helper function to expect a tester function which tells what to exclude (and not what to include).
Have a look at robpike's filter library. This would allow you to do:
package main

import (
    "fmt"
    "strings"

    "filter"
)

func isNoFoo7(a string) bool {
    return !strings.HasPrefix(a, "foo_") && len(a) <= 7
}

func main() {
    a := []string{"test", "some_other_test", "foo_etc"}
    result := filter.Choose(a, isNoFoo7)
    fmt.Println(result) // [test]
}
Interestingly enough, the README.md by Rob says:
I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn't hard.
Having written it a couple of years ago, I haven't had occasion to use it once. Instead, I just use "for" loops.
You shouldn't use it either.
So the most idiomatic way according to Rob would be something like:
func main() {
    a := []string{"test", "some_other_test", "foo_etc"}
    nofoos := []string{}
    for i := range a {
        if !strings.HasPrefix(a[i], "foo_") && len(a[i]) <= 7 {
            nofoos = append(nofoos, a[i])
        }
    }
    fmt.Println(nofoos) // [test]
}
This style is very similar, if not identical, to the approach any C-family language takes.
Today, I stumbled on a pretty idiom that surprised me. If you want to filter a slice in place without allocating, use two slices with the same backing array:
s := []T{
    // the input
}
s2 := s
s = s[:0]
for _, v := range s2 {
    if shouldKeep(v) {
        s = append(s, v)
    }
}
Here's a specific example of removing duplicate strings:
s := []string{"a", "a", "b", "c", "c"}
s2 := s
s = s[:0]
var last string
for _, v := range s2 {
    if len(s) == 0 || v != last {
        last = v
        s = append(s, v)
    }
}
If you need to keep both slices, simply replace s = s[:0] with s = nil or s = make([]T, 0, len(s)), depending on whether you want append() to allocate for you.
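For instance, a minimal sketch of the keep-both variant, with a concrete predicate for illustration:
s := []string{"a", "bb", "c"}
s2 := s
s = make([]string, 0, len(s2)) // fresh backing array; s2 stays intact
for _, v := range s2 {
    if len(v) == 1 { // illustrative predicate
        s = append(s, v)
    }
}
// s2 is still [a bb c]; s is [a c]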
There are a couple of nice ways to filter a slice without allocations or new dependencies. Found in the Go wiki on Github:
Filter (in place)
n := 0
for _, x := range a {
    if keep(x) {
        a[n] = x
        n++
    }
}
a = a[:n]
And another, more readable, way:
Filtering without allocating
This trick uses the fact that a slice shares the same backing array
and capacity as the original, so the storage is reused for the
filtered slice. Of course, the original contents are modified.
b := a[:0]
for _, x := range a {
    if f(x) {
        b = append(b, x)
    }
}
For elements which must be garbage collected, the following code can
be included afterwards:
for i := len(b); i < len(a); i++ {
    a[i] = nil // or the zero value of T
}
One thing I'm not sure about is whether the first method needs clearing (setting to nil) the items in slice a after index n, like they do in the second method.
EDIT: the second way is basically what MicahStetson described in his answer. In my code I use a function similar to the following, which is probably as good as it gets in terms of performance and readability:
func filterSlice(slice []*T, keep func(*T) bool) []*T {
    newSlice := slice[:0]
    for _, item := range slice {
        if keep(item) {
            newSlice = append(newSlice, item)
        }
    }
    // make sure discarded items can be garbage collected
    for i := len(newSlice); i < len(slice); i++ {
        slice[i] = nil
    }
    return newSlice
}
Note that if items in your slice are not pointers and don't contain pointers you can skip the second for loop.
There isn't an idiomatic way to achieve the same result in a single line as in Ruby, but with a helper function you can obtain the same expressiveness.
You can call this helper function as:
Filter(strs, func(v string) bool {
    return strings.HasPrefix(v, "foo_") // returns foo_testfor
})
Here is the whole code:
package main

import (
    "fmt"
    "strings"
)

// Returns a new slice containing all strings in the
// slice that satisfy the predicate `f` and are longer
// than 7 characters.
func Filter(vs []string, f func(string) bool) []string {
    vsf := make([]string, 0)
    for _, v := range vs {
        if f(v) && len(v) > 7 {
            vsf = append(vsf, v)
        }
    }
    return vsf
}

func main() {
    var strs = []string{"foo1", "foo2", "foo3", "foo3", "foo_testfor", "_foo"}
    fmt.Println(Filter(strs, func(v string) bool {
        return strings.HasPrefix(v, "foo_") // returns foo_testfor
    }))
}
And the running example: Playground
You can use the loop as you did and wrap it in a utility function for reuse.
For multi-datatype support, copy-paste is one choice. Another is writing a code-generation tool.
And as a final option, if you want to use a library, you can take a look at https://github.com/ledongthuc/goterators#filter, which I created to reuse aggregate & transform functions.
It requires Go 1.18, which supports the generics and dynamic types you want to use with it.
filteredItems, err := Filter(list, func(item int) bool {
    return item%2 == 0
})

filteredItems, err := Filter(list, func(item string) bool {
    return strings.Contains(item, "ValidWord")
})

filteredItems, err := Filter(list, func(item MyStruct) bool {
    return item.Valid()
})
It also supports Reduce in case you want to optimize the way you select.
Hope it's useful to you!
"Select Elements from Array" is also commonly called a filter function. There's no such thing in go. There are also no other "Collection Functions" such as map or reduce. For the most idiomatic way to get the desired result, I find https://gobyexample.com/collection-functions a good reference:
[...] in Go it’s common to provide collection functions if and when they are specifically needed for your program and data types.
They provide an implementation example of the filter function for strings:
func Filter(vs []string, f func(string) bool) []string {
    vsf := make([]string, 0)
    for _, v := range vs {
        if f(v) {
            vsf = append(vsf, v)
        }
    }
    return vsf
}
However, they also say that it's often OK to just inline the function:
Note that in some cases it may be clearest to just inline the
collection-manipulating code directly, instead of creating and calling
a helper function.
In general, golang tries to only introduce orthogonal concepts, meaning that when you can solve a problem one way, there shouldn't be too many more ways to solve it. This adds simplicity to the language by only having a few core concepts, such that not every developer uses a different subset of the language.
Take a look at this library: github.com/thoas/go-funk
It provides implementations of a lot of life-saving idioms in Go (including filtering of elements in an array, for instance).
r := funk.Filter([]int{1, 2, 3, 4}, func(x int) bool {
    return x%2 == 0
})
Here is an elegant example of both Fold and Filter that uses recursion to accomplish filtering. FoldRight is also generally useful. It is not stack-safe, but could be made so with trampolining. Once Go has generics it can be entirely generalized over any two types:
func FoldRightStrings(as, z []string, f func(string, []string) []string) []string {
    if len(as) > 1 { // slice has a head and a tail
        h, t := as[0], as[1:]
        return f(h, FoldRightStrings(t, z, f))
    } else if len(as) == 1 { // slice has a head and an empty tail
        h := as[0]
        return f(h, FoldRightStrings([]string{}, z, f))
    }
    return z
}

func FilterStrings(as []string, p func(string) bool) []string {
    var g = func(h string, accum []string) []string {
        if p(h) {
            return append(accum, h)
        } else {
            return accum
        }
    }
    return FoldRightStrings(as, []string{}, g)
}
Here is an example of using it to select all the strings with length < 8:
var p = func(s string) bool {
    return len(s) < 8
}

FilterStrings([]string{"asd", "asdfas", "asdfasfsa", "asdfasdfsadfsadfad"}, p)
I'm developing this library: https://github.com/jose78/go-collection. Please try this example to filter elements:
package main

import (
    "fmt"

    col "github.com/jose78/go-collection/collections"
)

type user struct {
    name string
    age  int
    id   int
}

func main() {
    newMap := generateMapTest()
    if resultMap, err := newMap.FilterAll(filterEmptyName); err != nil {
        fmt.Printf("error")
    } else {
        fmt.Printf("Result: %v\n", resultMap)
        result := resultMap.ListValues()
        fmt.Printf("Result: %v\n", result)
        fmt.Printf("Result: %v\n", result.Reverse())
        fmt.Printf("Result: %v\n", result.JoinAsString(" <---> "))
        fmt.Printf("Result: %v\n", result.Reverse().JoinAsString(" <---> "))
        result.Foreach(simpleLoop)
        err := result.Foreach(simpleLoopWithError)
        if err != nil {
            fmt.Println(err)
        }
    }
}

func filterEmptyName(key interface{}, value interface{}) bool {
    user := value.(user)
    return user.name != "empty"
}

func generateMapTest() (container col.MapType) {
    container = col.MapType{}
    container[1] = user{"Alvaro", 6, 1}
    container[2] = user{"Sofia", 3, 2}
    container[3] = user{"empty", 0, -1}
    return container
}

var simpleLoop col.FnForeachList = func(mapper interface{}, index int) {
    fmt.Printf("%d.- item:%v\n", index, mapper)
}

var simpleLoopWithError col.FnForeachList = func(mapper interface{}, index int) {
    if index > 0 {
        panic(fmt.Sprintf("Error produced with index == %d\n", index))
    }
    fmt.Printf("%d.- item:%v\n", index, mapper)
}
Result of execution:
Result: map[1:{Alvaro 6 1} 2:{Sofia 3 2}]
Result: [{Sofia 3 2} {Alvaro 6 1}]
Result: [{Alvaro 6 1} {Sofia 3 2}]
Result: {Sofia 3 2} <---> {Alvaro 6 1}
Result: {Alvaro 6 1} <---> {Sofia 3 2}
0.- item:{Sofia 3 2}
1.- item:{Alvaro 6 1}
0.- item:{Sofia 3 2}
Recovered in f Error produced with index == 1
ERROR: Error produced with index == 1
Error produced with index == 1
The docs are currently located in the wiki section of the project. You can try it at this link. I hope you like it.
Regards.

Serialize a mixed type JSON array in Go

I want to return a structure that looks like this:
{
    "results": [
        ["ooid1", 2.0, "Söme text"],
        ["ooid2", 1.3, "Åther text"]
    ]
}
That's an array of arrays, each containing a string, a floating-point number, and Unicode text.
If it were Python, I'd be able to:
import json
json.dumps({'results': [["ooid1", 2.0, u"Söme text"], ...]})
But in Go you can't have an array (or slice) of mixed types.
I thought of using a struct like this:
type Row struct {
    Ooid  string
    Score float64
    Text  rune
}
But I don't want each to become a dictionary, I want it to become an array of 3 elements each.
We can customize how an object is serialized by implementing the json.Marshaler interface. For our particular case, we have a slice of Row elements that we want to encode as an array of heterogeneous values. We can do so by defining a MarshalJSON function on our Row type, using an intermediate slice of interface{} to encode the mixed values.
This example demonstrates:
package main

import (
    "encoding/json"
    "fmt"
)

type Row struct {
    Ooid  string
    Score float64
    Text  string
}

func (r *Row) MarshalJSON() ([]byte, error) {
    arr := []interface{}{r.Ooid, r.Score, r.Text}
    return json.Marshal(arr)
}

func main() {
    rows := []Row{
        {"ooid1", 2.0, "Söme text"},
        {"ooid2", 1.3, "Åther text"},
    }
    marshalled, _ := json.Marshal(rows)
    fmt.Println(string(marshalled))
}
Of course, we also might want to go the other way around, from JSON bytes back to structs. So there's a similar json.Unmarshaler interface that we can use.
func (r *Row) UnmarshalJSON(bs []byte) error {
    arr := []interface{}{}
    json.Unmarshal(bs, &arr)
    // TODO: add error handling here.
    r.Ooid = arr[0].(string)
    r.Score = arr[1].(float64)
    r.Text = arr[2].(string)
    return nil
}
This uses a similar trick of first using an intermediate slice of interface{}, using the unmarshaler to place values into this generic container, and then plop the values back into our structure.
package main

import (
    "encoding/json"
    "fmt"
)

type Row struct {
    Ooid  string
    Score float64
    Text  string
}

func (r *Row) UnmarshalJSON(bs []byte) error {
    arr := []interface{}{}
    json.Unmarshal(bs, &arr)
    // TODO: add error handling here.
    r.Ooid = arr[0].(string)
    r.Score = arr[1].(float64)
    r.Text = arr[2].(string)
    return nil
}

func main() {
    rows := []Row{}
    text := `
[
    ["ooid4", 3.1415, "pi"],
    ["ooid5", 2.7182, "euler"]
]
`
    json.Unmarshal([]byte(text), &rows)
    fmt.Println(rows)
}
Use []interface{}
type Results struct {
    Rows []interface{} `json:"results"`
}
You will then have to use type assertions if you want to access the values stored in the []interface{}:
for _, row := range results.Rows {
    switch r := row.(type) {
    case string:
        fmt.Println("string", r)
    case float64:
        fmt.Println("float64", r)
    case int64:
        fmt.Println("int64", r)
    default:
        fmt.Println("not found")
    }
}
Somewhat clumsy, but you can:
type result [][]interface{}

type results struct {
    Results result
}
Working example https://play.golang.org/p/IXAzZZ3Dg7
