(Golang) handle and format nils from list - loops

Say I am expecting this list from a query to a backend:
Name: Volvo,
EngineType: 2NR-FE,
Warranty: 2 years,
Distance covered: 9000km.
If, for example, EngineType is empty (nil), I want to set it to "None"; if Warranty == nil, I want to set it to "No warranty found".
Here is my loop:
for _, res := range resp.Automobiles {
    if res.Cars != nil {
        for _, car := range res.Cars {
            if car == nil || car.EngineType == nil {
                fmt.Println("None")
            } else if car.Warranty == nil {
                fmt.Println("NO Warranty Found")
            } else {
                details = []string{*car.Name, *car.EngineType, *car.Warranty, *car.DistanceCovered}
                fmt.Println(strings.Join(details, ","))
            }
        }
    }
}
Output I'm expecting if Warranty == nil and EngineType == nil respectively:
Volvo, 2NR-FE, No warranty found, 9000km
Volvo, None, 2 years, 9000km
Output I'm getting:
No warranty Found
None
My code just prints whichever condition matches first and drops the rest of the car's details.
I know my code might be shitty; I'm still getting used to Go.

Output I'm getting
Because you only print the other details of the car if both EngineType and Warranty are non-nil. If either is nil, the other details are omitted.
You can dry out your code with a helper function like this:
func StrOrDefault(s *string, def string) string {
    if s == nil {
        return def
    }
    return *s
}
(Note: default is a reserved word in Go, so it can't be used as a parameter name.)
You'll also have to decide what output you want, if any, if car == nil. In the code below, I output "None" if car == nil.
for _, res := range resp.Automobiles {
    if res.Cars == nil {
        continue
    }
    for _, car := range res.Cars {
        if car == nil {
            fmt.Println("None")
            continue
        }
        details := []string{
            *car.Name,
            StrOrDefault(car.EngineType, "None"),
            StrOrDefault(car.Warranty, "No warranty found"),
            *car.DistanceCovered,
        }
        fmt.Println(strings.Join(details, ","))
    }
}

You might prefer the standard library's templating mechanism over the fmt routines. As https://stackoverflow.com/a/32775203/7050833 shows, you can then use {{if}}...{{else}} actions to capture the nil vs. non-nil cases.
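For example, a minimal text/template sketch (assuming a Car struct with pointer fields, as the question implies; text/template treats a nil pointer as false in {{if}} and dereferences non-nil pointers when printing):

package main

import (
    "os"
    "text/template"
)

type Car struct {
    Name, EngineType, Warranty, DistanceCovered *string
}

var carTmpl = template.Must(template.New("car").Parse(
    "{{.Name}}, {{if .EngineType}}{{.EngineType}}{{else}}None{{end}}, " +
        "{{if .Warranty}}{{.Warranty}}{{else}}No warranty found{{end}}, {{.DistanceCovered}}\n"))

func main() {
    name, engine, dist := "Volvo", "2NR-FE", "9000km"
    car := Car{Name: &name, EngineType: &engine, DistanceCovered: &dist} // Warranty left nil
    carTmpl.Execute(os.Stdout, car) // Volvo, 2NR-FE, No warranty found, 9000km
}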

Related

Consistency among two Databases

I am trying to make both of these functions execute together or not at all.
If one function fails, the other shouldn't execute either.
func SomeFunc() error {
    // Write to DB1
    if err1 != nil {
        return err1
    }
    return nil
}
func OtherFunc() error {
    // Write to DB2
    if err2 != nil {
        return err2
    }
    return nil
}
I am trying to write to two different databases and the writes should either happen in both or neither.
I tried executing OtherFunc() only if err1 == nil, but OtherFunc() itself can still fail.
Then we could roll back the changes: if err2 != nil, update DB1 back to what it was before. The problem is that this update operation may fail too.
I see this as a slippery slope. I want these operations to happen together or not at all.
Thanks for any help.
EDIT:
I found this question, and the "eventual consistency" approach makes sense to me.
This is my implementation now:
func SomeFunc() error {
    // Write to DB1
    if err1 != nil {
        return err1
    }
    return nil
}
func OtherFunc() error {
    // Write to DB2
    if err2 != nil {
        return err2
    }
    return nil
}
func RevertSomeFunc() {
    // Revert the changes made by SomeFunc
    fmt.Println("operation failed.")
    if err3 != nil {
        time.Sleep(time.Second * 5) // retry every 5 sec
        RevertSomeFunc()
    }
}
func main() {
    err1 := SomeFunc()
    if err1 != nil {
        fmt.Println("operation failed.")
        return
    }
    err2 := OtherFunc()
    if err2 != nil {
        go RevertSomeFunc() // launch it in a goroutine so that it doesn't block the code
    }
}
If there is any improvement I can make to this implementation, please let me know.
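One possible refinement, as a sketch with illustrative names: bound the retries instead of recursing forever, so a revert that keeps failing eventually surfaces for manual reconciliation (this assumes the revert is rewritten to return an error):

// revertWithRetry retries a compensating action a bounded number of times.
func revertWithRetry(revert func() error, attempts int, delay time.Duration) {
    for i := 0; i < attempts; i++ {
        if err := revert(); err == nil {
            return
        }
        time.Sleep(delay)
    }
    log.Println("revert failed after retries; manual reconciliation needed")
}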

Golang - How to read many tables on SQL Server PROC Execution

I'm using Go to execute a stored proc that returns 2 tables. I've tried to use rows.NextResultSet() to advance to the next table, but I can't work out how to deal with procs that return many tables.
I'm using the github.com/denisenkom/go-mssqldb driver.
For privacy reasons I can not post the code, but this is an example:
// Connection code above ...
ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
defer cancel()
// Emulate the many-tables result
row, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
    return
}
var mssg, mssg2 string
for row.NextResultSet() {
    for row.Next() {
        var cols []string
        cols, err = row.Columns()
        if err != nil {
            return
        }
        log.Println(row.Columns())
        switch len(cols) {
        case 1:
            if err = row.Scan(&mssg); err != nil {
                return
            }
            log.Println("mssg ", mssg)
        case 2:
            if err = row.Scan(&mssg, &mssg2); err != nil {
                return
            }
            log.Println("mssg ", mssg, "mssg2 ", mssg2)
        default:
            continue
        }
    }
}
If I comment out the for row.NextResultSet() {} loop, row.Next() just iterates over the first SELECT.
If I print log.Println(row.NextResultSet()), it is always false.
How can I read each result set?
After reading and trying different ways to solve this, I found the solution; I think the docs are not clear at all.
The solution was (see the sketch after this list):
1. Iterate over all rows of the first result set (rows.Next()).
2. Evaluate rows.NextResultSet().
3. If it returns true, iterate over all rows of the next result set.
4. Repeat steps 2 and 3 until rows.NextResultSet() returns false.
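A minimal sketch of that order of operations, assuming rows is the *sql.Rows from a QueryContext call like the one above:

for {
    for rows.Next() {
        // ... scan the current result set's rows ...
    }
    if !rows.NextResultSet() {
        break // no more result sets
    }
}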

how to access key and value from json array in go lang

I have a sample as below:
result = [{"Key":"9802", "Record":{"action":"Warning","status":"Created","statusid":"9802","system":"CRM","thresholdtime":"9"}}]
How can I access the thresholdtime value in Go?
I'm trying to access it like this: result[0]["Record"]["thresholdtime"]
error: invalid operation: result[0]["Record"] (type byte does not support indexing)
Thanks
The json.Unmarshal(...) Example should get you started.
Here's one way to do it (Go Playground):
func main() {
    var krs []KeyRecord
    err := json.Unmarshal([]byte(jsonstr), &krs)
    if err != nil {
        panic(err)
    }
    fmt.Println(krs[0].Record.ThresholdTime)
    // 9
}

type KeyRecord struct {
    Key    int    `json:"Key,string"`
    Record Record `json:"Record"`
}

type Record struct {
    Action        string `json:"action"`
    Status        string `json:"status"`
    StatusId      int    `json:"statusid,string"`
    System        string `json:"system"`
    ThresholdTime int    `json:"thresholdtime,string"`
}

var jsonstr = `
[
    {
        "Key": "9802",
        "Record": {
            "action": "Warning",
            "status": "Created",
            "statusid": "9802",
            "system": "CRM",
            "thresholdtime": "9"
        }
    }
]
`
You can unmarshal the JSON document into a generic type; however, it's not recommended for many reasons, ultimately related to loss of type information:
xs := []map[string]interface{}{}
err := json.Unmarshal([]byte(jsonstr), &xs)
if err != nil {
    panic(err)
}
ttstr := xs[0]["Record"].(map[string]interface{})["thresholdtime"].(string)
fmt.Printf("%#v\n", ttstr) // Need to convert to int separately, if desired.
// "9"
Something like this should get you pretty close, I think: https://play.golang.org/p/ytpHTTNMjB-
Use the built-in encoding/json package to decode your data into structs (with attached json tags). Then it's as simple as accessing a struct field.
Use json.Unmarshal to unmarshal the data into a suitable data type. In many instances, you can (and I would recommend you do) use custom-declared struct types with json tags for this purpose.
However, to your comment on another answer: it is possible to unmarshal into an interface{} and have the unmarshaller pick the most suitable data type to represent the JSON structure. For example, a []interface{} will represent a JSON array, a map[string]interface{} a JSON object, primitive types their JSON equivalents, and so on.
I wrote a parser that uses this approach for another Stack Overflow question last week. It does not aim to be high-performance or highly tested code, but it demonstrates the key points:
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "reflect"
    "strconv"
    "strings"
)

// Some arbitrary JSON
const js = `
{
    "key1": [
        {"key2": false, "some_other_key": "abc"},
        {"key3": 3}
    ],
    "key2": {
        "hello": "world"
    },
    "shallow": true,
    "null_value": null
}`

func indentStringLines(s string, n int) string {
    // Build indent whitespace - this has not been optimized!
    var indent string
    for i := 0; i < n; i++ {
        indent += " "
    }
    parts := strings.Split(s, "\n")
    for i := 0; i < len(parts)-1; i++ {
        parts[i] = indent + parts[i]
    }
    return strings.Join(parts, "\n")
}

func recursivelyPrintSlice(m []interface{}, indent int) string {
    var str string
    for i, val := range m {
        str += fmt.Sprintf("%s: %s\n",
            strconv.FormatInt(int64(i), 10),
            recursivelyPrint(val, indent),
        )
    }
    return strings.TrimSpace(str)
}

func recursivelyPrint(val interface{}, indent int) string {
    var str string
    switch v := val.(type) {
    case bool:
        str += strconv.FormatBool(v)
    case float64:
        str += strconv.FormatFloat(v, 'g', -1, 64)
    case string:
        str += v
    case map[string]interface{}:
        str += "{\n"
        for key, childVal := range v {
            str += fmt.Sprintf("%s: %s\n", key, recursivelyPrint(childVal, indent))
        }
        str += "}"
    case []interface{}:
        str += "[\n" + recursivelyPrintSlice(v, indent) + "\n]"
    case nil:
        str += "null"
    default:
        str += fmt.Sprintf(
            "[unimplemented type printer for %s]",
            reflect.ValueOf(v).Kind(),
        )
    }
    return strings.TrimSpace(indentStringLines(str, indent+2))
}

func main() {
    var x interface{}
    err := json.Unmarshal([]byte(js), &x)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(recursivelyPrint(x, 0))
}

How do I handle nil return values from database?

I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as string, I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The approach works in practice: I get the values in plain text, with an empty string in place of the nil values.
However, I have two concerns:
1. This does not look efficient. I need to handle 25+ fields like this, and that would mean I read each of them as bytes and convert to string. Too many function calls and conversions. Two structs to handle the data, and so on...
2. The code looks ugly. It is already convoluted with 2 fields, and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better/cleaner/more efficient/idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
    id    []byte
    state []byte
}

// output format
type udInfo struct {
    id    string
    state string
}

// CToGoString converts a NUL-terminated C string to a Go string.
func CToGoString(c []byte) string {
    n := -1
    for i, b := range c {
        if b == 0 {
            break
        }
        n = i
    }
    return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
    var out udInfo
    out.id = CToGoString(in.id)
    out.state = stateName(in.state)
    return out
}
func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    ret := udInfo{}
    r := udInfoBytes{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    ret = dbBytesToString(r)
    defer db.Close()
    return ret
}
Edit:
I want to have something like the following, where I do not have to worry about handling NULL and the values are automatically read as empty strings.
// output format
type udInfo struct {
    id    string
    state string
}

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    r := udInfo{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    return r
}
There are separate types to handle null values coming from the database such as sql.NullBool, sql.NullFloat64, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
    // use s.String
} else {
    // NULL value
}
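Applied to the question's GetInfo, a sketch might look like this (getFileHandle and the schema are assumed from the question; a parameterized query replaces the Sprintf, and the ? placeholder syntax is driver-specific):

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    defer db.Close()
    var id, state sql.NullString
    err := db.QueryRow("SELECT id, state FROM Mytable WHERE id = ?", ud).Scan(&id, &state)
    if err != nil {
        log.Fatal(err)
    }
    // An invalid (NULL) sql.NullString has .String == "", so NULLs become empty strings.
    return udInfo{id: id.String, state: state.String}
}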
Go's database/sql package also handles pointers to these types: scan a NULLable column into a *string and you get nil for NULL values.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3"
)

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    _, err = db.Exec("create table foo(id integer primary key, value text)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values(null)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values('bar')")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := db.Query("select id, value from foo")
    if err != nil {
        log.Fatal(err)
    }
    for rows.Next() {
        var id int
        var value *string
        err = rows.Scan(&id, &value)
        if err != nil {
            log.Fatal(err)
        }
        if value != nil {
            fmt.Println(id, *value)
        } else {
            fmt.Println(id, value)
        }
    }
}
You should get output like this:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give 'state' a default value of an empty string in the event that it was stored as a NULL in the db.
Two ways to handle those NULLs:
Using sql.NullString:
if value.Valid {
    return value.String
}
Using *string:
if value != nil {
    return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the database querying into simple-to-use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
    column1 = row.Str(0)
    column2 = row.Int(1)
    column3 = row.Bool(2)
    column4 = row.Date(3)
    // etc...
}
Notice the handy row methods for coercing values to a particular type. NULLs are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go

Go: Reading a specific range of lines in a file

I mainly need to read a specific range of lines in a file and return true if a given string (say "Hello World!") is matched within that range, but I'm not sure how to do it. I know how to read individual lines and whole files, but not ranges of lines. Is there a library that can assist, or a simple way to do it? Any help is greatly appreciated!
Something like this?
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
)

func Find(fname string, from, to int, needle []byte) (bool, error) {
    f, err := os.Open(fname)
    if err != nil {
        return false, err
    }
    defer f.Close()
    n := 0
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        n++
        if n < from {
            continue
        }
        if n > to {
            break
        }
        if bytes.Index(scanner.Bytes(), needle) >= 0 {
            return true, nil
        }
    }
    return false, scanner.Err()
}

func main() {
    found, err := Find("test.file", 18, 27, []byte("Hello World"))
    fmt.Println(found, err)
}
If you're using for to iterate through a slice of lines, you could use something along the lines of
for _, line := range file[2:40] {
    // do stuff
}
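For completeness, a sketch of building such a slice of lines first (assuming the whole file fits in memory and has at least 40 lines; the filename is illustrative):

data, err := os.ReadFile("test.file")
if err != nil {
    log.Fatal(err)
}
lines := strings.Split(string(data), "\n")
for _, line := range lines[2:40] { // slice indices are 0-based, so this is lines 3-40
    if strings.Contains(line, "Hello World!") {
        // matched
    }
}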
