I'm using Go to execute a stored procedure that returns two result sets. I've tried rows.NextResultSet() to move to the next result set, but I can't get it to work with procedures that return multiple result sets.
I'm using the github.com/denisenkom/go-mssqldb driver.
For privacy reasons I cannot post the actual code, but this is an example:
// Connection code above ...
ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
defer cancel()
// Emulate a procedure that returns several result sets
rows, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
	return
}
var mssg, mssg2 string
for rows.NextResultSet() {
	for rows.Next() {
		var cols []string
		cols, err = rows.Columns()
		if err != nil {
			return
		}
		log.Println(rows.Columns())
		switch len(cols) {
		case 1:
			if err = rows.Scan(&mssg); err != nil {
				return
			}
			log.Println("mssg ", mssg)
		case 2:
			if err = rows.Scan(&mssg, &mssg2); err != nil {
				return
			}
			log.Println("mssg ", mssg, "mssg2 ", mssg2)
		default:
			continue
		}
	}
}
If I comment out the for rows.NextResultSet() {} loop, rows.Next() just iterates over the first SELECT.
If I print log.Println(rows.NextResultSet()) it is always false.
How can I read each result set?
After reading and trying different ways to solve this I found the solution; I think the docs are not clear at all.
The solution was:
1. Iterate over all rows of the first result set (rows.Next())
2. Check rows.NextResultSet()
3. If true, iterate over all rows of the next result set
4. Repeat steps 2 and 3 until rows.NextResultSet() == false, as sketched below.
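A minimal sketch of that pattern, assuming rows comes from a QueryContext call like the one above:

for {
	for rows.Next() {
		// scan the rows of the current result set here
	}
	if !rows.NextResultSet() {
		break // no further result sets, or an error occurred while advancing
	}
}
if err := rows.Err(); err != nil {
	log.Println(err)
}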
I have a program that fetches users from an MSSQL database, and while reading them with the Rows.Next() function the program panics. I can't handle the error and make the program continue.
How can I achieve that? I don't want my program to stop on an error.
This is the function that executes the query and reads the results:
func connectServer(serverObj ServerObj) {
	var (
		server   string = serverObj.server_ip // for example
		user     string = serverObj.username  // Database user
		password string = serverObj.password  // User Password
		port     string = serverObj.port      // Database port
	)
	connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%s", server, user, password, port)
	conn, err := sql.Open("mssql", connString)
	// Test if the connection is OK or not
	// (note: sql.Open only validates its arguments; it does not actually connect)
	if err != nil {
		recover() // no-op: recover only has an effect inside a deferred function
		fmt.Println(err.Error())
	} else {
		fmt.Println("Connected!")
		userSelectQuery := "SELECT name FROM sys.sql_logins"
		rows, err := conn.Query(userSelectQuery) // was rows, _ := ..., which left err nil and a nil rows to panic in rows.Next()
		if err != nil {
			recover() // no-op, see above
			fmt.Println("could not execute query")
			return
		}
		defer rows.Close()
		usernames := []ServerUsername{}
		for rows.Next() {
			username := ServerUsername{}
			err := rows.Scan(&username.username)
			if err != nil {
				recover() // no-op, see above
				fmt.Println("error")
				continue
			}
			usernames = append(usernames, username)
		}
		fmt.Printf("found %d users: %+v \n", len(usernames), usernames)
	}
	defer conn.Close()
}
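Note that a bare recover() call is a no-op unless it runs inside a deferred function, which is why the recover() calls above have no effect. A minimal sketch of a wrapper that keeps the program alive after a panic (safeConnectServer is a hypothetical name):

func safeConnectServer(serverObj ServerObj) {
	// recover only has an effect inside a deferred function
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered from panic:", r) // swallow the panic so the caller keeps running
		}
	}()
	connectServer(serverObj)
}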
I am trying to make both of these functions execute together or not at all.
If one function fails, the other shouldn't execute either.
func SomeFunc() error {
	//Write to DB1
	if err1 != nil {
		return err1
	}
	return nil
}

func OtherFunc() error {
	//Write to DB2
	if err2 != nil {
		return err2
	}
	return nil
}
I am trying to write to two different databases, and the writes should either happen in both or in neither.
I tried executing OtherFunc() only if err1 == nil, but then OtherFunc() itself can fail.
Then we can roll back the changes, like: if err2 != nil then update DB1 back to what it was before. The problem with this is that the update operation may fail too.
I see this as a slippery slope. I want these operations to happen together or not at all.
Thanks for any help.
EDIT:
I found this question and the "eventual consistency" makes sense to me.
This is my implementation now:
func SomeFunc() error {
	//Write to DB1
	if err1 != nil {
		return err1
	}
	return nil
}

func OtherFunc() error {
	//Write to DB2
	if err2 != nil {
		return err2
	}
	return nil
}

func RevertSomeFunc() {
	//Revert back the changes made by SomeFunc; err3 is the error from the revert attempt
	fmt.Println("operation failed.")
	if err3 != nil {
		time.Sleep(time.Second * 5) //retry every 5 sec
		RevertSomeFunc()
	}
}
func main() {
	err1 := SomeFunc()
	if err1 != nil {
		fmt.Println("operation failed.")
		return
	}
	err2 := OtherFunc()
	if err2 != nil {
		go RevertSomeFunc() // launch it in a goroutine so that it doesn't block the code
	}
}
If there is some improvement I can make to this implementation, please let me know.
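One possible refinement, offered as a sketch rather than a definitive fix: the recursion in RevertSomeFunc grows the stack on every failed attempt and never gives up. A bounded retry loop avoids both issues (revertDB1 is a hypothetical stand-in for the actual revert logic):

func RevertSomeFunc() {
	const maxAttempts = 10
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err3 := revertDB1(); err3 == nil { // revertDB1 (hypothetical) undoes SomeFunc's write
			return // compensation succeeded
		}
		time.Sleep(time.Second * 5) // retry every 5 sec
	}
	fmt.Println("giving up: revert failed after repeated attempts")
}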
I want to create an abstract function that gets data from the DB and fills an array with that data. The element type of the array can differ from call to call. And I want to do it without reflect, due to performance concerns.
I just want to call some function like GetDBItems() everywhere and get an array of data from the DB with the desired type. But all the implementations I have created are awful.
Here is the function implementation:
type AbstractArrayGetter func(size int) []interface{}

func GetItems(arrayGetter AbstractArrayGetter) {
	res := DBResponse{}
	DB.Get(&res)
	arr := arrayGetter(len(res.Rows))
	for i := 0; i < len(res.Rows); i++ {
		json.Unmarshal(res.Rows[i].Value, arr[i]) // arr[i] already holds a pointer to the target element
	}
}
Here I call this function:
var events []Event
GetFullItems("events", "events_list", map[string]interface{}{}, func(size int) []interface{} {
	events = make([]Event, size, size)
	proxyEnt := make([]interface{}, size, size)
	for i := range events {
		proxyEnt[i] = &events[i]
	}
	return proxyEnt
})
It works, but there is too much code at every call site, and there is also a performance issue in copying the events array into the interfaces array.
How can I do this without reflect, and with a short function call? Or is reflect not too slow in this case?
I tested the performance with reflect, and it is similar to the solution mentioned above. So here is the solution with reflect, if someone needs it. This function gets data from the DB and fills an abstract array:
func GetItems(design string, viewName string, opts map[string]interface{}, arrayType interface{}) (interface{}, error) {
	res := couchResponse{}
	opts["limit"] = 100000
	bytes, err := CouchView(design, viewName, opts)
	if err != nil {
		return nil, err
	}
	err = json.Unmarshal(bytes, &res)
	if err != nil {
		return nil, err
	}
	dataType := reflect.TypeOf(arrayType)
	slice := reflect.MakeSlice(reflect.SliceOf(dataType), len(res.Rows), len(res.Rows))
	for i := 0; i < len(res.Rows); i++ {
		if opts["include_docs"] == true {
			err = json.Unmarshal(res.Rows[i].Doc, slice.Index(i).Addr().Interface())
		} else {
			err = json.Unmarshal(res.Rows[i].Value, slice.Index(i).Addr().Interface())
		}
		if err != nil {
			return nil, err
		}
	}
	x := reflect.New(slice.Type())
	x.Elem().Set(slice)
	return x.Interface(), nil
}
and getting data using this function:
var e Event
res, err := GetItems("data", "data_list", map[string]interface{}{}, e)
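Since the function returns reflect.New(slice.Type()).Interface(), res is an interface{} holding a *[]Event here, so the concrete slice comes back with a type assertion, for example:

if err == nil {
	events := *res.(*[]Event)
	fmt.Println(len(events))
}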
I am trying to update a lot of records, which cannot be done within the one-minute maximum request time, so I need to use a datastore.Cursor. But for some reason the returned cursor is always the same, so each redirect is done with the same cursor value, resulting in the same 20 database updates being performed each time.
Any ideas as to why things aren't working like I would expect?
http.HandleFunc("/fix", func(w, http.ResponseWriter, r *http.Request) {
c := appengine.NewContext(r)
fixUser(c, w, r, "/fix", func() error {
// do the fix here
return nil
})
})
func fixUser(ctx context.Context, w http.ResponseWriter, r *http.Request, path string, fn func(user *User) error) {
	q := datastore.NewQuery("users")
	c := r.URL.Query().Get("c")
	if len(c) > 0 {
		cursor, err := datastore.DecodeCursor(c)
		if err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			w.Write([]byte(err.Error()))
			return
		}
		q.Start(cursor)
	}
	iter := q.Run(ctx)
	var cr datastore.Cursor
	for i := 0; i < 20; i++ {
		var u User
		key, err := iter.Next(&u)
		if err == datastore.Done {
			return
		}
		if err != nil {
			panic(err.Error())
		}
		cr, _ = iter.Cursor()
		log.Debugf(ctx, "Cursor: %v", cr) // always the same value
		u.Key = key
		fn(&u)
	}
	pathWithCursor := fmt.Sprintf("%s?c=%s", path, cr.String())
	http.Redirect(w, r, pathWithCursor, 301)
}
I looked at some of my own cursor code and compared it against yours. The main difference I see is that I use q = q.Start(cursor) rather than q.Start(cursor). This should fix your problem, since your query will now be updated to reflect the position specified by the cursor. Without storing the query back into the q variable, the query is not updated.
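Applied to the code in the question, the fix is the reassignment on the Start line, since Start returns a derived query and leaves the receiver untouched:

if len(c) > 0 {
	cursor, err := datastore.DecodeCursor(c)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte(err.Error()))
		return
	}
	q = q.Start(cursor) // reassign: calling q.Start(cursor) alone discards the result
}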
I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as strings I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The approach works fine in practice: I get the values in plain text, with an empty string in place of the nil values.
However, I have two concerns:
1. This does not look efficient. I need to handle 25+ fields like this, and that would mean reading each of them as bytes and converting to string: too many function calls and conversions, two structs to handle the data, and so on.
2. The code looks ugly. It is already convoluted with 2 fields and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better, cleaner, more efficient, idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
	id    []byte
	state []byte
}

// output format
type udInfo struct {
	id    string
	state string
}

// CToGoString converts a NUL-terminated C string in a byte slice to a Go string.
func CToGoString(c []byte) string {
	n := -1
	for i, b := range c {
		if b == 0 {
			break
		}
		n = i
	}
	return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
	var out udInfo
	out.id = CToGoString(in.id)
	out.state = stateName(in.state)
	return out
}

func GetInfo(ud string) udInfo {
	db := getFileHandle()
	defer db.Close()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	ret := udInfo{}
	r := udInfoBytes{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	ret = dbBytesToString(r)
	return ret
}
edit:
I want to have something like the following, where I do not have to worry about handling NULL and the values are automatically read as empty strings.
// output format
type udInfo struct {
	id    string
	state string
}

func GetInfo(ud string) udInfo {
	db := getFileHandle()
	defer db.Close()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	r := udInfo{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	return r
}
There are separate types to handle null values coming from the database, such as sql.NullBool, sql.NullFloat64, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
	// use s.String
} else {
	// NULL value
}
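Applied to the 25-field concern from the question, scanning into sql.NullString fields and converting once keeps things down to one extra struct and one helper. A sketch (udInfoNull and toUdInfo are hypothetical names):

type udInfoNull struct {
	id    sql.NullString
	state sql.NullString
}

func (n udInfoNull) toUdInfo() udInfo {
	// NullString.String is the empty string whenever Valid is false (i.e. the column was NULL)
	return udInfo{
		id:    n.id.String,
		state: n.state.String,
	}
}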
Go's database/sql package can also scan a NULL into a pointer to the type.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	_, err = db.Exec("create table foo(id integer primary key, value text)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values(null)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values('bar')")
	if err != nil {
		log.Fatal(err)
	}
	rows, err := db.Query("select id, value from foo")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int
		var value *string // a nil *string receives SQL NULL
		err = rows.Scan(&id, &value)
		if err != nil {
			log.Fatal(err)
		}
		if value != nil {
			fmt.Println(id, *value)
		} else {
			fmt.Println(id, value)
		}
	}
}
You should get output like this:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give 'state' a default value of an empty string in the event that it was stored as a NULL in the db.
Two ways to handle those nulls:
Using sql.NullString:
if value.Valid {
	return value.String
}
Using *string:
if value != nil {
	return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the querying of the database in simple-to-use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:

rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
	column1 := row.Str(0)
	column2 := row.Int(1)
	column3 := row.Bool(2)
	column4 := row.Date(3)
	// etc...
}
Notice the nice row methods for coercing to a particular value. Nulls are handled by the library and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go