Golang panic error while reading mssql query result - sql-server

I have a program that fetches users from an MSSQL database. While reading them with Rows.Next(), the program panics, and I can't handle the error and keep the program running.
How can I achieve that? I don't want my program to stop on an error.
This is the function that executes the query and reads the results:
func connectServer(serverObj ServerObj) {
    var (
        server   string = serverObj.server_ip // for example
        user     string = serverObj.username  // Database user
        password string = serverObj.password  // User Password
        port     string = serverObj.port      // Database port
    )
    connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%s", server, user, password, port)
    conn, err := sql.Open("mssql", connString)
    // Test if the connection is OK or not
    if err != nil {
        recover()
        fmt.Println(err.Error())
    } else {
        fmt.Println("Connected!")
        userSelectQuery := "SELECT name FROM sys.sql_logins"
        rows, _ := conn.Query(userSelectQuery)
        if err != nil {
            recover()
            fmt.Println("could not execute query")
        }
        usernames := []ServerUsername{}
        for rows.Next() {
            username := ServerUsername{}
            err := rows.Scan(&username.username)
            if err != nil {
                recover()
                fmt.Println("error")
                continue
            }
            usernames = append(usernames, username)
        }
        fmt.Printf("found %d users: %+v \n", len(usernames), usernames)
        defer rows.Close()
    }
    defer conn.Close()
}
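For what it's worth, recover only has an effect when called from a deferred function, and the error returned by conn.Query must be checked rather than discarded with _. A minimal sketch of that pattern (queryLogins is a hypothetical helper; imports of database/sql, fmt, and the mssql driver are assumed):
func queryLogins(connString string) (names []string, err error) {
    // recover only works inside a deferred function; this turns a panic
    // from the driver into an ordinary error the caller can log.
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("query panicked: %v", r)
        }
    }()
    conn, err := sql.Open("mssql", connString)
    if err != nil {
        return nil, err
    }
    defer conn.Close()
    rows, err := conn.Query("SELECT name FROM sys.sql_logins")
    if err != nil {
        return nil, err // check this error instead of discarding it with _
    }
    defer rows.Close()
    for rows.Next() {
        var name string
        if err := rows.Scan(&name); err != nil {
            return nil, err
        }
        names = append(names, name)
    }
    return names, rows.Err()
}
The caller can then log the returned error and move on to the next server instead of crashing.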

Related

Golang - How to read many tables on SQL Server PROC Execution

I'm using Go to execute a stored proc. This proc returns two tables, and I've tried to use rows.NextResultSet() to access the next table, but I can't deal with procs that return many tables.
I'm using the github.com/denisenkom/go-mssqldb driver.
For privacy reasons I cannot post the actual code, but this is an example:
// Connection code above ...
ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
defer cancel()
// Emulate the many tables result
row, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
    return
}
var mssg, mssg2 string
for row.NextResultSet() {
    for row.Next() {
        var cols []string
        cols, err = row.Columns()
        if err != nil {
            return
        }
        log.Println(row.Columns())
        switch len(cols) {
        case 1:
            if err = row.Scan(&mssg); err != nil {
                return
            }
            log.Println("mssg ", mssg)
        case 2:
            if err = row.Scan(&mssg, &mssg2); err != nil {
                return
            }
            log.Println("mssg ", mssg, "mssg2 ", mssg2)
        default:
            continue
        }
    }
}
If I comment out the for row.NextResultSet() {} loop, row.Next() just iterates over the first SELECT.
If I print log.Println(row.NextResultSet()), it is always false.
How can I read each result set?
After reading and trying different approaches, I found the solution; I don't think the docs are clear about this at all.
The solution was:
1. Iterate over all rows of the first result set (rows.Next()).
2. Evaluate rows.NextResultSet().
3. If true, iterate over all rows of the next result set.
4. Repeat steps 2 and 3 until rows.NextResultSet() == false.
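In code, the steps above look roughly like this (a sketch against database/sql, which has Rows.NextResultSet since Go 1.8, reusing the emulated query from the question):
rows, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
    return
}
defer rows.Close()
for {
    // Steps 1 and 3: drain the current result set first.
    for rows.Next() {
        cols, err := rows.Columns()
        if err != nil {
            return
        }
        log.Println("columns:", cols)
    }
    // Steps 2 and 4: advance only after the current set is exhausted;
    // NextResultSet returns false when there are no more result sets.
    if !rows.NextResultSet() {
        break
    }
}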

"tail -f"-like generator

I had this convenient function in Python:
def follow(path):
    with open(path) as lines:
        lines.seek(0, 2)  # seek to EOF
        while True:
            line = lines.readline()
            if not line:
                time.sleep(0.1)
                continue
            yield line
It does something similar to UNIX tail -f: you get new lines of a file as they are appended. It's convenient because you can get the generator without blocking and pass it to another function.
Then I had to do the same thing in Go. I'm new to this language, so I'm not sure whether what I did is idiomatic/correct enough for Go.
Here is the code:
func Follow(fileName string) chan string {
    out_chan := make(chan string)
    file, err := os.Open(fileName)
    if err != nil {
        log.Fatal(err)
    }
    file.Seek(0, os.SEEK_END)
    bf := bufio.NewReader(file)
    go func() {
        for {
            line, _, _ := bf.ReadLine()
            if len(line) == 0 {
                time.Sleep(10 * time.Millisecond)
            } else {
                out_chan <- string(line)
            }
        }
        defer file.Close()
        close(out_chan)
    }()
    return out_chan
}
Is there any cleaner way to do this in Go? I have a feeling that using an asynchronous call for such a thing is overkill, and it really bothers me.
Create a wrapper around a reader that sleeps on EOF:
type tailReader struct {
    io.ReadCloser
}

func (t tailReader) Read(b []byte) (int, error) {
    for {
        n, err := t.ReadCloser.Read(b)
        if n > 0 {
            return n, nil
        } else if err != io.EOF {
            return n, err
        }
        time.Sleep(10 * time.Millisecond)
    }
}

func newTailReader(fileName string) (tailReader, error) {
    f, err := os.Open(fileName)
    if err != nil {
        return tailReader{}, err
    }
    if _, err := f.Seek(0, 2); err != nil {
        return tailReader{}, err
    }
    return tailReader{f}, nil
}
This reader can be used anywhere an io.Reader can be used. Here's how to loop over lines using bufio.Scanner:
t, err := newTailReader("somefile")
if err != nil {
    log.Fatal(err)
}
defer t.Close()
scanner := bufio.NewScanner(t)
for scanner.Scan() {
    fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
    fmt.Fprintln(os.Stderr, "reading:", err)
}
The reader can also be used to loop over JSON values appended to the file:
t, err := newTailReader("somefile")
if err != nil {
    log.Fatal(err)
}
defer t.Close()
dec := json.NewDecoder(t)
for {
    var v SomeType
    if err := dec.Decode(&v); err != nil {
        log.Fatal(err)
    }
    fmt.Println("the value is ", v)
}
This approach has a couple of advantages over the goroutine approach outlined in the question. The first is that shutdown is easy: just close the file; there's no need to signal the goroutine that it should exit. The second is that many packages work with io.Reader.
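For example, a minimal sketch (assuming the newTailReader above): closing the file from a timer or another goroutine makes the next Read return a non-EOF error, which ends the scan loop cleanly.
t, err := newTailReader("somefile")
if err != nil {
    log.Fatal(err)
}
// Stop following after one minute: the retry loop in Read then sees a
// non-EOF error from the closed file and returns it to the scanner.
time.AfterFunc(time.Minute, func() { t.Close() })
scanner := bufio.NewScanner(t)
for scanner.Scan() {
    fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
    fmt.Fprintln(os.Stderr, "stopped:", err)
}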
The sleep time can be adjusted up or down to meet specific needs. Decrease the time for lower latency and increase the time to reduce CPU use. A sleep of 100ms is probably fast enough for data that's displayed to humans.
Check out this Go package for reading from continuously updated files (tail -f): https://github.com/hpcloud/tail
t, err := tail.TailFile("filename", tail.Config{Follow: true})
for line := range t.Lines {
    fmt.Println(line.Text)
}

Unmarshal body response from task queue

I'm using Go and the Google task queue in order to create some async jobs.
I'm passing the data to the worker method successfully, but I can't unmarshal the data in order to use it.
I tried different ways; I'm getting an unmarshal error:
err um &json.SyntaxError{msg:"invalid character 'i' in literal false (expecting 'a')", Offset:2}
This is how I'm sending the data to the queue:
keys := make(map[string][]string)
keys["filenames"] = req.FileNames // []string
t := taskqueue.NewPOSTTask("/deletetask", keys)
_, err = taskqueue.Add(ctx, t, "delete")
And this is how I tried to unmarshal it:
type Files struct {
    fileNames string `json:"filenames"`
}

func worker(w http.ResponseWriter, r *http.Request) {
    c := appengine.NewContext(r)
    var objmap map[string]json.RawMessage
    b, err := ioutil.ReadAll(r.Body)
    if err != nil {
        c.Debugf("err io %#v", err)
    }
    c.Debugf("b %#v", string(b[:])) // Prints: b "filenames=1.txt&filenames=2.txt"
    err = json.Unmarshal(b, &objmap)
    if err != nil {
        c.Debugf("err um %#v", err)
    }
    // this didn't work as well, same err
    f := []Files{}
    err = json.Unmarshal(b, &f)
}
Arguments to tasks are sent as POST values: you assign a slice of strings as the POST value for the key filenames. Then you try to deserialize the full POST request body as JSON, but as your debug output shows, the body is form-encoded, not JSON.
A simple solution is to split the work into one task per file and send each file name as a string value. That would look something like this:
// Add tasks
for i := range req.FileNames {
    postValues := url.Values{}
    postValues.Set("fileName", req.FileNames[i])
    t := taskqueue.NewPOSTTask("/deletetask", postValues)
    if _, err := taskqueue.Add(ctx, t, "delete"); err != nil {
        ctx.Errorf("Failed add task, error: %v, fileName: %v", err, req.FileNames[i])
    }
}

// The actual delete worker
func worker(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    fileName := r.FormValue("fileName")
    ctx.Infof("Deleting: %v", fileName)
}
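Alternatively, if all the file names should stay in a single task, the worker can read the repeated POST values directly with ParseForm instead of treating the body as JSON. A minimal sketch, assuming the same classic App Engine context API as above:
func worker(w http.ResponseWriter, r *http.Request) {
    c := appengine.NewContext(r)
    if err := r.ParseForm(); err != nil {
        c.Errorf("parse form: %v", err)
        return
    }
    // A body like "filenames=1.txt&filenames=2.txt" parses into a []string.
    fileNames := r.Form["filenames"]
    c.Infof("deleting: %v", fileNames)
}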

How do I handle nil return values from database?

I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as strings, I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The method works fine in practice: I get the values in plain text, with an empty string instead of the nil values.
However, I have two concerns:
This does not look efficient. I need to handle 25+ fields like this, and that means reading each of them as bytes and converting to string. Too many function calls and conversions, two structs to handle the data, and so on.
The code looks ugly. It is already convoluted with 2 fields and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better/cleaner/more efficient/idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
    id    []byte
    state []byte
}

// output format
type udInfo struct {
    id    string
    state string
}

func CToGoString(c []byte) string {
    n := -1
    for i, b := range c {
        if b == 0 {
            break
        }
        n = i
    }
    return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
    var out udInfo
    out.id = CToGoString(in.id)
    out.state = stateName(in.state)
    return out
}
func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    ret := udInfo{}
    r := udInfoBytes{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    ret = dbBytesToString(r)
    defer db.Close()
    return ret
}
edit:
I want to have something like the following, where I do not have to worry about handling NULL and can automatically read NULLs as empty strings.
// output format
type udInfo struct {
    id    string
    state string
}

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    r := udInfo{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    return r
}
There are separate types to handle null values coming from the database, such as sql.NullBool, sql.NullFloat64, sql.NullString, and so on.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
    // use s.String
} else {
    // NULL value
}
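To keep this manageable across 25+ columns, one option is to scan every nullable column into a sql.NullString and convert once at the end. A sketch (the helper ns is hypothetical; udInfo is the output struct from the question):
// ns maps a NULL column to "", otherwise returns the string value.
func ns(s sql.NullString) string {
    if s.Valid {
        return s.String
    }
    return ""
}

// Inside the rows.Next() loop:
var id, state sql.NullString
if err := rows.Scan(&id, &state); err != nil {
    log.Println(err)
}
ret := udInfo{id: ns(id), state: ns(state)}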
Go's database/sql package can also scan into a pointer to the type.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3"
)

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    _, err = db.Exec("create table foo(id integer primary key, value text)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values(null)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values('bar')")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := db.Query("select id, value from foo")
    if err != nil {
        log.Fatal(err)
    }
    for rows.Next() {
        var id int
        var value *string
        err = rows.Scan(&id, &value)
        if err != nil {
            log.Fatal(err)
        }
        if value != nil {
            fmt.Println(id, *value)
        } else {
            fmt.Println(id, value)
        }
    }
}
You should get output like this:
1 <nil>
2 bar
An alternative solution is to handle this in the SQL statement itself by using the COALESCE function (though not all databases may support it).
For example, you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which effectively gives state a default value of an empty string when it is stored as a NULL in the db.
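With that query the plain-string version of GetInfo from the edit scans without errors, since the driver never sees a NULL. A short sketch (the placeholder style, ? here, depends on the driver):
q := "SELECT id, COALESCE(state, '') AS state FROM Mytable WHERE id = ?"
r := udInfo{}
if err := db.QueryRow(q, ud).Scan(&r.id, &r.state); err != nil {
    log.Println(err)
}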
Two ways to handle those nulls:
Using sql.NullString:
if value.Valid {
    return value.String
}
Using *string:
if value != nil {
    return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the querying of the database into simple to use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
    column1 = row.Str(0)
    column2 = row.Int(1)
    column3 = row.Bool(2)
    column4 = row.Date(3)
    // etc...
}
Notice the nice row methods for coercing values to a particular type. Nulls are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go

How do I delete all blobs in app engine on Go?

The blobstore API has no function to list all blobs. How can I get this list and then delete all blobs?
The blobstore API on App Engine for Go has no way to do this. Instead, use the datastore to fetch __BlobInfo__ entities as appengine.BlobInfo. Although the API claims to have a BlobKey field, it is not populated; instead, take the string ID of the returned key and cast it to an appengine.BlobKey, which you can then pass to blobstore.Delete.
Here's a handler at "/tasks/delete-blobs" that deletes up to 20k blobs per invocation and re-enqueues itself until all blobs are deleted. Note that cursors are not used here; I suspect that __BlobInfo__ is special and doesn't support them (when I attempted to use cursors, they did nothing).
func DeleteBlobs(w http.ResponseWriter, r *http.Request) {
    c := appengine.NewContext(r)
    c = appengine.Timeout(c, time.Minute)
    q := datastore.NewQuery("__BlobInfo__").KeysOnly()
    it := q.Run(c)
    wg := sync.WaitGroup{}
    something := false
    for i := 0; i < 20; i++ {
        var bk []appengine.BlobKey
        for j := 0; j < 1000; j++ {
            k, err := it.Next(nil)
            if err == datastore.Done {
                break
            } else if err != nil {
                c.Errorf("err: %v", err)
                continue
            }
            bk = append(bk, appengine.BlobKey(k.StringID()))
        }
        if len(bk) == 0 {
            break
        }
        something = true
        wg.Add(1)
        go func(bk []appengine.BlobKey) {
            defer wg.Done()
            c.Errorf("deleting %v blobs", len(bk))
            if err := blobstore.DeleteMulti(c, bk); err != nil {
                c.Errorf("blobstore delete err: %v", err)
            }
        }(bk)
    }
    wg.Wait()
    if something {
        taskqueue.Add(c, taskqueue.NewPOSTTask("/tasks/delete-blobs", nil), "")
    }
}
