Consistency between two databases

I am trying to make both of these functions execute together or not at all:
if one function fails, the other shouldn't execute either.
func SomeFunc() error {
    // Write to DB1.
    if err1 != nil { // err1: error from the DB1 write (sketch)
        return err1
    }
    return nil
}
func OtherFunc() error {
    // Write to DB2.
    if err2 != nil { // err2: error from the DB2 write (sketch)
        return err2
    }
    return nil
}
I am trying to write to two different databases, and the writes should happen in both or in neither.
I tried executing OtherFunc() only when err1 == nil, but OtherFunc() itself can fail.
Then we could roll back: if err2 != nil, update DB1 back to what it was before. The problem is that this compensating update may fail too.
I see this as a slippery slope. I want these operations to happen together or not at all.
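For contrast: if both writes went to the same database, a single transaction would give exactly this all-or-nothing behavior; it's only because the writes span two databases that a plain transaction can't help. A minimal database/sql sketch of the single-database case (db, query1 and query2 are placeholders):
// Both writes commit atomically or roll back together.
tx, err := db.Begin()
if err != nil {
    return err
}
defer tx.Rollback() // safe no-op after a successful Commit
if _, err := tx.Exec(query1); err != nil {
    return err
}
if _, err := tx.Exec(query2); err != nil {
    return err
}
return tx.Commit()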
Thanks for any help.
EDIT:
I found this question, and the "eventual consistency" approach makes sense to me.
This is my implementation now:
func SomeFunc() error {
    // Write to DB1.
    if err1 != nil {
        return err1
    }
    return nil
}
func OtherFunc() error {
    // Write to DB2.
    if err2 != nil {
        return err2
    }
    return nil
}
func RevertSomeFunc() {
    // Revert the changes made by SomeFunc.
    fmt.Println("operation failed.")
    if err3 != nil { // err3: error from the revert attempt (sketch)
        time.Sleep(time.Second * 5) // retry every 5 sec
        RevertSomeFunc()
    }
}
func main() {
    err1 := SomeFunc()
    if err1 != nil {
        fmt.Println("operation failed.")
        return
    }
    err2 := OtherFunc()
    if err2 != nil {
        go RevertSomeFunc() // launch it in a goroutine so that it doesn't block
    }
}
If there is any improvement I can make to this implementation, please let me know.
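One concern with the retry-by-recursion above: each failed attempt adds a stack frame, there is no upper bound on attempts, and main can return before the goroutine finishes (which kills it). A bounded loop is a safer shape for the compensating write. This is a minimal sketch, not a drop-in fix; revertDB1 and maxAttempts are hypothetical placeholders, and it assumes the log and time imports:
// Sketch: bounded retry with a fixed delay for the compensating write.
// revertDB1 is a hypothetical helper that undoes SomeFunc's write.
func RevertSomeFunc() {
    const maxAttempts = 10
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        if err := revertDB1(); err == nil {
            return // compensation succeeded
        }
        time.Sleep(5 * time.Second) // wait before the next attempt
    }
    log.Println("revert failed after retries; manual reconciliation needed")
}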

Related

Golang - How to read many tables on SQL Server PROC Execution

I'm using Go to execute a stored proc. This proc returns 2 result sets. I've tried to use rows.NextResultSet() to advance to the next result set, but I couldn't handle procs that return multiple result sets.
I'm using the github.com/denisenkom/go-mssqldb driver.
For privacy reasons I cannot post the real code, but here is an example:
// Connection code above ...
ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
defer cancel()
// Emulate the many tables result
rows, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
    return
}
var mssg, mssg2 string
for rows.NextResultSet() {
    for rows.Next() {
        var cols []string
        cols, err = rows.Columns()
        if err != nil {
            return
        }
        log.Println(cols)
        switch len(cols) {
        case 1:
            if err = rows.Scan(&mssg); err != nil {
                return
            }
            log.Println("mssg ", mssg)
        case 2:
            if err = rows.Scan(&mssg, &mssg2); err != nil {
                return
            }
            log.Println("mssg ", mssg, "mssg2 ", mssg2)
        default:
            continue
        }
    }
}
If I comment out the for rows.NextResultSet() {} loop, rows.Next() just iterates over the first SELECT.
If I print log.Println(rows.NextResultSet()), it is always false.
How can I read each result set?
After reading and trying different approaches I found the solution; I don't think the docs are clear about this at all.
The solution was:
1. Iterate over all rows of the first result set (rows.Next()).
2. Evaluate rows.NextResultSet().
3. If it returns true, iterate over all rows of the next result set.
4. Repeat steps 2 and 3 until rows.NextResultSet() == false.
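As a minimal sketch of that order of operations (reusing the emulated query from above; scanning elided):
rows, err := db.QueryContext(ctx, "SELECT 'algo' = '1' ; SELECT 'algo2' = '2', 'algo3' = '3'")
if err != nil {
    return
}
defer rows.Close()
for {
    for rows.Next() {
        // Scan the current result set's rows here.
    }
    if !rows.NextResultSet() {
        break // no more result sets
    }
}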

Golang abstract function that gets data from db and fills array

I want to create an abstract function that gets data from the DB and fills an array with that data. The element type of the array can differ from call to call, and I want to do this without reflect, due to performance concerns.
I just want to call something like GetDBItems() everywhere and get an array of DB data with the desired type, but all the implementations I come up with are awful.
Here is this function implementation:
type AbstractArrayGetter func(size int) []interface{}
func GetItems(arrayGetter AbstractArrayGetter) {
    res := DBResponse{}
    DB.Get(&res)
    arr := arrayGetter(len(res.Rows))
    for i := 0; i < len(res.Rows); i++ {
        json.Unmarshal(res.Rows[i].Value, arr[i]) // arr[i] already holds a pointer
    }
}
Here is how I call this function:
var events []Event
GetItems(func(size int) []interface{} {
    events = make([]Event, size)
    proxyEnt := make([]interface{}, size)
    for i := range events {
        proxyEnt[i] = &events[i]
    }
    return proxyEnt
})
It works, but there is too much code at every call site, and there is also a performance cost in copying the events array into the interface array.
How can I do this without reflect and with a short call? Or is reflect not too slow in this case?
I tested the performance with reflect, and it is similar to the solution mentioned above. So here is the solution with reflect, if someone needs it. This function gets data from the DB and fills an abstract array:
func GetItems(design string, viewName string, opts map[string]interface{}, arrayType interface{}) (interface{}, error) {
    res := couchResponse{}
    opts["limit"] = 100000
    bytes, err := CouchView(design, viewName, opts)
    if err != nil {
        return nil, err
    }
    err = json.Unmarshal(bytes, &res)
    if err != nil {
        return nil, err
    }
    dataType := reflect.TypeOf(arrayType)
    slice := reflect.MakeSlice(reflect.SliceOf(dataType), len(res.Rows), len(res.Rows))
    for i := 0; i < len(res.Rows); i++ {
        if opts["include_docs"] == true {
            err = json.Unmarshal(res.Rows[i].Doc, slice.Index(i).Addr().Interface())
        } else {
            err = json.Unmarshal(res.Rows[i].Value, slice.Index(i).Addr().Interface())
        }
        if err != nil {
            return nil, err
        }
    }
    x := reflect.New(slice.Type())
    x.Elem().Set(slice)
    return x.Interface(), nil
}
and getting data using this function:
var e Event
res, err := GetItems("data", "data_list", map[string]interface{}{}, e)
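Since reflect.New(slice.Type()) wraps the slice in a pointer, the returned interface{} holds a *[]Event, so the caller unwraps it with a type assertion (error handling elided):
events := *res.(*[]Event)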

"tail -f"-like generator

I had this convenient function in Python:
def follow(path):
    with open(path) as lines:
        lines.seek(0, 2)  # seek to EOF
        while True:
            line = lines.readline()
            if not line:
                time.sleep(0.1)
                continue
            yield line
It does something similar to UNIX tail -f: you get the last lines of a file as they arrive. It's convenient because you can obtain the generator without blocking and pass it to another function.
Then I had to do the same thing in Go. I'm new to this language, so I'm not sure whether what I did is idiomatic/correct enough for Go.
Here is the code:
func Follow(fileName string) chan string {
    out_chan := make(chan string)
    file, err := os.Open(fileName)
    if err != nil {
        log.Fatal(err)
    }
    file.Seek(0, os.SEEK_END)
    bf := bufio.NewReader(file)
    go func() {
        for {
            line, _, _ := bf.ReadLine()
            if len(line) == 0 {
                time.Sleep(10 * time.Millisecond)
            } else {
                out_chan <- string(line)
            }
        }
        defer file.Close()
        close(out_chan)
    }()
    return out_chan
}
Is there any cleaner way to do this in Go? I have a feeling that using an asynchronous call for such a thing is overkill, and it really bothers me.
Create a wrapper around a reader that sleeps on EOF:
type tailReader struct {
    io.ReadCloser
}
func (t tailReader) Read(b []byte) (int, error) {
    for {
        n, err := t.ReadCloser.Read(b)
        if n > 0 {
            return n, nil
        } else if err != io.EOF {
            return n, err
        }
        time.Sleep(10 * time.Millisecond)
    }
}
func newTailReader(fileName string) (tailReader, error) {
    f, err := os.Open(fileName)
    if err != nil {
        return tailReader{}, err
    }
    if _, err := f.Seek(0, 2); err != nil {
        return tailReader{}, err
    }
    return tailReader{f}, nil
}
This reader can be used anywhere an io.Reader can be used. Here's how to loop over lines using bufio.Scanner:
t, err := newTailReader("somefile")
if err != nil {
    log.Fatal(err)
}
defer t.Close()
scanner := bufio.NewScanner(t)
for scanner.Scan() {
    fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
    fmt.Fprintln(os.Stderr, "reading:", err)
}
The reader can also be used to loop over JSON values appended to the file:
t, err := newTailReader("somefile")
if err != nil {
    log.Fatal(err)
}
defer t.Close()
dec := json.NewDecoder(t)
for {
    var v SomeType
    if err := dec.Decode(&v); err != nil {
        log.Fatal(err)
    }
    fmt.Println("the value is ", v)
}
There are a couple of advantages this approach has over the goroutine approach outlined in the question. The first is that shutdown is easy. Just close the file. There's no need to signal the goroutine that it should exit. The second advantage is that many packages work with io.Reader.
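For example, a shutdown sketch for the scanner loop above (the 10-second timer is just for illustration): closing the file makes the pending Read return a non-EOF error, which ends the loop.
stop := time.AfterFunc(10*time.Second, func() { t.Close() })
defer stop.Stop()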
The sleep time can be adjusted up or down to meet specific needs. Decrease the time for lower latency and increase the time to reduce CPU use. A sleep of 100ms is probably fast enough for data that's displayed to humans.
Check out this Go package for reading from continuously updated files (tail -f): https://github.com/hpcloud/tail
t, err := tail.TailFile("filename", tail.Config{Follow: true})
for line := range t.Lines {
    fmt.Println(line.Text)
}

How do I handle nil return values from database?

I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as strings, I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The method works fine in practice: I get the values in plain text, with an empty string in place of the nil values.
However, I have two concerns:
This does not look efficient. I need to handle 25+ fields like this, which would mean reading each of them as bytes and converting to string. That's too many function calls and conversions, two structs to handle the data, and so on...
The code looks ugly. It already looks convoluted with 2 fields and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better/cleaner/more efficient/idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
    id    []byte
    state []byte
}
// output format
type udInfo struct {
    id    string
    state string
}
func CToGoString(c []byte) string {
    n := -1
    for i, b := range c {
        if b == 0 {
            break
        }
        n = i
    }
    return string(c[:n+1])
}
func dbBytesToString(in udInfoBytes) udInfo {
    var out udInfo
    out.id = CToGoString(in.id)
    out.state = stateName(in.state)
    return out
}
func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    ret := udInfo{}
    r := udInfoBytes{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    ret = dbBytesToString(r)
    defer db.Close()
    return ret
}
EDIT:
I want to have something like the following, where I do not have to worry about handling NULL and the fields are automatically read as empty strings.
// output format
type udInfo struct {
    id    string
    state string
}
func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    r := udInfo{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    return r
}
There are separate types to handle null values coming from the database such as sql.NullBool, sql.NullFloat64, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
    // use s.String
} else {
    // NULL value
}
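If, as in the question, NULL should simply become an empty string, a small hypothetical helper keeps the call sites tidy:
// nullToString maps a NULL column to "" (hypothetical helper).
func nullToString(s sql.NullString) string {
    if s.Valid {
        return s.String
    }
    return ""
}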
Go's database/sql package can also handle a pointer to the type:
package main
import (
    "database/sql"
    "fmt"
    _ "github.com/mattn/go-sqlite3"
    "log"
)
func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    _, err = db.Exec("create table foo(id integer primary key, value text)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values(null)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values('bar')")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := db.Query("select id, value from foo")
    if err != nil {
        log.Fatal(err)
    }
    for rows.Next() {
        var id int
        var value *string
        err = rows.Scan(&id, &value)
        if err != nil {
            log.Fatal(err)
        }
        if value != nil {
            fmt.Println(id, *value)
        } else {
            fmt.Println(id, value)
        }
    }
}
You should get output like this:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give 'state' a default value of an empty string in the event that it was stored as a NULL in the db.
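With that query, Scan can write straight into plain strings, since the driver never sees a NULL. A sketch based on the question's GetInfo (q is the COALESCE query above):
rows, err := db.Query(q)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var id, state string
    if err := rows.Scan(&id, &state); err != nil {
        log.Println(err)
    }
}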
Two ways to handle those nulls:
Using sql.NullString:
if value.Valid {
    return value.String
}
Using *string:
if value != nil {
    return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the database querying into simple-to-use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
    column1 := row.Str(0)
    column2 := row.Int(1)
    column3 := row.Bool(2)
    column4 := row.Date(3)
    // etc...
}
Notice the nice row methods for coercing values to a particular type. Nulls are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go

Simple way to copy a file

Is there any simple/fast way to copy a file in Go?
I couldn't find a fast way in the docs, and searching the internet doesn't help either.
Warning: This answer is mainly about adding a hard link to a file, not about copying the contents.
A robust and efficient copy is conceptually simple, but not simple to implement, due to the need to handle a number of edge cases and system limitations imposed by the target operating system and its configuration.
If you simply want to make a duplicate of the existing file, you can use os.Link(srcName, dstName). This avoids having to move bytes around in the application and saves disk space. For large files, this is a significant time and space saving.
But various operating systems have different restrictions on how hard links work. Depending on your application and your target system configuration, Link() calls may not work in all cases.
If you want a single generic, robust and efficient copy function, update Copy() to:
Perform checks to ensure that at least some form of copy will succeed (access permissions, directories exist, etc.)
Check whether both files already exist and are the same, using os.SameFile; return success if they are
Attempt a Link; return on success
Copy the bytes (all efficient means failed) and return the result
An optimization would be to copy the bytes in a goroutine so the caller doesn't block on the byte copy. Doing so imposes additional complexity on the caller to handle the success/error case properly.
If I wanted both, I would have two different copy functions: CopyFile(src, dst string) (error) for a blocking copy and CopyFileAsync(src, dst string) (chan c, error) which passes a signaling channel back to the caller for the asynchronous case.
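As a minimal sketch of the asynchronous variant described above (the signature is the answer's suggestion; the error-channel shape here is an assumption, and it reuses the CopyFile defined below):
// CopyFileAsync starts the copy in a goroutine and returns a channel
// that receives the final error (or nil) exactly once.
func CopyFileAsync(src, dst string) chan error {
    c := make(chan error, 1)
    go func() {
        c <- CopyFile(src, dst)
    }()
    return c
}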
package main
import (
    "fmt"
    "io"
    "os"
)
// CopyFile copies a file from src to dst. If src and dst files exist, and are
// the same, then return success. Otherwise, attempt to create a hard link
// between the two files. If that fails, copy the file contents from src to dst.
func CopyFile(src, dst string) (err error) {
    sfi, err := os.Stat(src)
    if err != nil {
        return
    }
    if !sfi.Mode().IsRegular() {
        // cannot copy non-regular files (e.g., directories,
        // symlinks, devices, etc.)
        return fmt.Errorf("CopyFile: non-regular source file %s (%q)", sfi.Name(), sfi.Mode().String())
    }
    dfi, err := os.Stat(dst)
    if err != nil {
        if !os.IsNotExist(err) {
            return
        }
    } else {
        if !(dfi.Mode().IsRegular()) {
            return fmt.Errorf("CopyFile: non-regular destination file %s (%q)", dfi.Name(), dfi.Mode().String())
        }
        if os.SameFile(sfi, dfi) {
            return
        }
    }
    if err = os.Link(src, dst); err == nil {
        return
    }
    err = copyFileContents(src, dst)
    return
}
// copyFileContents copies the contents of the file named src to the file named
// by dst. The file will be created if it does not already exist. If the
// destination file exists, all its contents will be replaced by the contents
// of the source file.
func copyFileContents(src, dst string) (err error) {
    in, err := os.Open(src)
    if err != nil {
        return
    }
    defer in.Close()
    out, err := os.Create(dst)
    if err != nil {
        return
    }
    defer func() {
        cerr := out.Close()
        if err == nil {
            err = cerr
        }
    }()
    if _, err = io.Copy(out, in); err != nil {
        return
    }
    err = out.Sync()
    return
}
func main() {
    fmt.Printf("Copying %s to %s\n", os.Args[1], os.Args[2])
    err := CopyFile(os.Args[1], os.Args[2])
    if err != nil {
        fmt.Printf("CopyFile failed %q\n", err)
    } else {
        fmt.Printf("CopyFile succeeded\n")
    }
}
import (
    "io/ioutil"
    "log"
)
func checkErr(err error) {
    if err != nil {
        log.Fatal(err)
    }
}
func copy(src string, dst string) {
    // Read all content of src to data; may cause OOM for a large file.
    data, err := ioutil.ReadFile(src)
    checkErr(err)
    // Write data to dst.
    err = ioutil.WriteFile(dst, data, 0644)
    checkErr(err)
}
If you are running the code on Linux or macOS, you could just execute the system's cp command.
srcFolder := "copy/from/path"
destFolder := "copy/to/path"
cpCmd := exec.Command("cp", "-rf", srcFolder, destFolder)
err := cpCmd.Run()
It's treating Go a bit like a script, but it gets the job done. Also, you need to import "os/exec".
Starting with Go 1.15 (Aug 2020), you can use File.ReadFrom; on Linux this lets the os package use copy_file_range, so the bytes can be copied inside the kernel:
package main
import "os"
func main() {
    r, err := os.Open("in.txt")
    if err != nil {
        panic(err)
    }
    defer r.Close()
    w, err := os.Create("out.txt")
    if err != nil {
        panic(err)
    }
    defer w.Close()
    if _, err := w.ReadFrom(r); err != nil {
        panic(err)
    }
}
The following version:
Performs the copy in a stream, using io.Copy.
Closes all opened file descriptors.
Checks all the errors that need checking, including errors from the deferred (*os.File).Close calls.
Gracefully handles multiple non-nil errors, e.g. non-nil errors from both io.Copy and (*os.File).Close.
Avoids the unnecessary complications present in other answers, such as calling Close twice on the same file but ignoring the error on one of the calls.
Does no unnecessary stat checks for existence or file type. Such checks aren't needed: the subsequent open and read operations return an error anyway if the operation isn't valid for the type of file, and they are prone to races (e.g. the file might be removed between the stat and the open).
Has an accurate doc comment. See: "file", "regular file", and the behavior when dstpath exists. The doc comment also matches the style of other functions in package os.
// Copy copies the contents of the file at srcpath to a regular file at dstpath.
// If dstpath already exists and is not a directory, the function truncates it.
// The function does not copy file modes or file attributes.
func Copy(srcpath, dstpath string) (err error) {
    r, err := os.Open(srcpath)
    if err != nil {
        return err
    }
    defer r.Close() // ok to ignore error: file was opened read-only.
    w, err := os.Create(dstpath)
    if err != nil {
        return err
    }
    defer func() {
        e := w.Close()
        // Report the error from Close, if any,
        // but only if there isn't already an
        // outgoing error.
        if e != nil && err == nil {
            err = e
        }
    }()
    _, err = io.Copy(w, r)
    return err
}
In this case there are a couple of conditions to verify, and I prefer non-nested code:
func Copy(src, dst string) (int64, error) {
    srcFile, err := os.Open(src)
    if err != nil {
        return 0, err
    }
    defer srcFile.Close()
    srcFileStat, err := srcFile.Stat()
    if err != nil {
        return 0, err
    }
    if !srcFileStat.Mode().IsRegular() {
        return 0, fmt.Errorf("%s is not a regular file", src)
    }
    dstFile, err := os.Create(dst)
    if err != nil {
        return 0, err
    }
    defer dstFile.Close()
    return io.Copy(dstFile, srcFile)
}
If you are on Windows, you can wrap CopyFileW like this:
package utils
import (
    "syscall"
    "unsafe"
)
var (
    modkernel32   = syscall.NewLazyDLL("kernel32.dll")
    procCopyFileW = modkernel32.NewProc("CopyFileW")
)
// CopyFile wraps the Windows function CopyFileW.
func CopyFile(src, dst string, failIfExists bool) error {
    lpExistingFileName, err := syscall.UTF16PtrFromString(src)
    if err != nil {
        return err
    }
    lpNewFileName, err := syscall.UTF16PtrFromString(dst)
    if err != nil {
        return err
    }
    var bFailIfExists uint32
    if failIfExists {
        bFailIfExists = 1
    } else {
        bFailIfExists = 0
    }
    r1, _, err := syscall.Syscall(
        procCopyFileW.Addr(),
        3,
        uintptr(unsafe.Pointer(lpExistingFileName)),
        uintptr(unsafe.Pointer(lpNewFileName)),
        uintptr(bFailIfExists))
    if r1 == 0 {
        return err
    }
    return nil
}
Code is inspired by wrappers in C:\Go\src\syscall\zsyscall_windows.go
Here is an obvious way to copy a file:
package main
import (
    "io"
    "log"
    "os"
)
func main() {
    sFile, err := os.Open("test.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer sFile.Close()
    eFile, err := os.Create("test_copy.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer eFile.Close()
    _, err = io.Copy(eFile, sFile) // first return value is the number of bytes copied
    if err != nil {
        log.Fatal(err)
    }
    err = eFile.Sync()
    if err != nil {
        log.Fatal(err)
    }
}
You can use "exec".
exec.Command("cmd","/c","copy","fileToBeCopied destinationDirectory") for windows
I have used this and its working fine. You can refer manual for more details on exec.
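For completeness, a minimal runnable sketch of that approach (the file names are placeholders; this only works on Windows, since copy is a cmd builtin):
package main
import (
    "log"
    "os/exec"
)
func main() {
    // "copy" is a cmd builtin, so it has to run through "cmd /c",
    // and each argument is passed as a separate string.
    cmd := exec.Command("cmd", "/c", "copy", "src.txt", "dst.txt")
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}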
