Do I need to explicitly rollback a transaction?

I would like to know how Go handles a failed DB transaction. My code looks like this:
func assert(e interface{}) {
    if e != nil {
        panic(e)
    }
}

// the caller will handle panics
func SomeDBOperation(db *sql.DB) {
    tx, err := db.Begin()
    assert(err)
    defer func() {
        if e := recover(); e != nil {
            tx.Rollback()
            panic(e)
        }
        assert(tx.Commit())
    }()
    // some code that can possibly panic...
}
Can I simplify the error checking like this:
func SomeDBOperation(db *sql.DB) {
    tx, err := db.Begin()
    assert(err)
    defer func() { assert(tx.Commit()) }()
    // some code that can possibly panic...
}
BTW, I am using SQLite; if any answer is db-specific, I would also like to know the behavior with MySQL.

By default, any database error will automatically cancel and roll back the transaction. That's what transactions are for. So strictly speaking, in the case of a database error (e.g. a foreign key violation), there's no need to roll back the transaction yourself.
However, you should always defer a rollback immediately after creating the transaction, so that if any error unrelated to the database occurs, the transaction is still rolled back and cleaned up. In that case, rolling back a transaction that has already been aborted is a no-op, and therefore harmless.
The way this looks in code is something like this:
func SomeDBOperation(db *sql.DB) error {
    txn, err := db.Begin()
    if err != nil {
        return fmt.Errorf("failed to start transaction: %w", err)
    }
    defer txn.Rollback() // nolint:errcheck
    /* ... whatever other logic and DB operations you care about */
    return nil
}

It is important to roll back the tx if there is an error while executing any query; otherwise the transaction stays open and keeps holding its locks. Check out this post.
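To make that concrete, here is a minimal sketch of the deferred-rollback pattern (my example, not from the original answers); the transferFunds helper and the accounts table are hypothetical:

import "database/sql"

// transferFunds is a hypothetical example: any query error causes an early
// return, and the deferred Rollback releases the transaction's locks promptly.
// After a successful Commit, the deferred Rollback is a harmless no-op.
func transferFunds(db *sql.DB, from, to int64, amount int) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback() // nolint:errcheck

    if _, err := tx.Exec("UPDATE accounts SET balance = balance - ? WHERE id = ?", amount, from); err != nil {
        return err
    }
    if _, err := tx.Exec("UPDATE accounts SET balance = balance + ? WHERE id = ?", amount, to); err != nil {
        return err
    }
    return tx.Commit()
}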

Related

How to make goroutines not run concurrently when calling the same function

Is there any way to make goroutines execute one after another (one by one) when they run the same function?
I didn't originally intend to use goroutines. However, running "os/exec" commands inside the TCP handler caused the TCP connection to stall, so I use goroutines to avoid blocking. But I still want the commands to execute in order, instead of concurrently. Here is my code.
func handleTCP(conn net.Conn) {
    defer conn.Close()
    fmt.Println("handle TCP function")
    for {
        wg := new(sync.WaitGroup)
        wg.Add(1)
        go func() {
            cmdArgs := []string{temp_str, test_press, gh, "sample.csv"}
            cmd := exec.Command("calib.exe", cmdArgs...)
            wg.Done()
        }()
    }
}
You can use locks to limit access to a resource to one thread at a time, as follows:
i := 1
iLock := &sync.Mutex{}
for {
    go func() {
        iLock.Lock()
        fmt.Printf("Value of i: %d\n", i)
        iLock.Unlock()
    }()
}
More examples here
Not sure what you are using the WaitGroup for. If it is to achieve the sequentiality you desire, then it can be removed.
Try putting the lock inside the function; that makes the executions sequential. But remember that wg.Done() must be deferred on the first line of the function.
Something like this:
var mu sync.Mutex

func handleTCP(conn net.Conn) {
    defer conn.Close()
    fmt.Println("handle TCP function")
    for {
        wg := new(sync.WaitGroup)
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            defer mu.Unlock() // note: the method is Unlock, not UnLock
            cmdArgs := []string{temp_str, test_press, gh, "sample.csv"}
            cmd := exec.Command("calib.exe", cmdArgs...)
            if err := cmd.Run(); err != nil { // actually run the command; it was only constructed before
                fmt.Println("command failed:", err)
            }
        }()
        wg.Wait() // wait for this command to finish before launching the next one
    }
}
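An alternative worth noting (my sketch, not part of the original answers): since the goal is strict ordering, a single worker goroutine draining a channel serializes the commands without any locking. The command name and arguments are the placeholders from the question:

import (
    "log"
    "os/exec"
)

// startWorker launches one goroutine that executes queued commands
// strictly in the order they were sent on the channel.
func startWorker() chan<- []string {
    jobs := make(chan []string)
    go func() {
        for args := range jobs {
            cmd := exec.Command("calib.exe", args...)
            if err := cmd.Run(); err != nil {
                log.Println("command failed:", err)
            }
        }
    }()
    return jobs
}

// In the TCP handler, sending on the channel enqueues a command;
// the worker runs them one by one:
// jobs := startWorker()
// jobs <- []string{temp_str, test_press, gh, "sample.csv"}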

Bad connection response to long-running MSSQL transaction in Golang

I have a requester that manages my SQL queries against an Azure SQL database. The function responsible for transaction queries is as follows:
import (
    "context"
    "database/sql"
    "fmt"
    "log"
    "strings"
    "time"

    "github.com/cenkalti/backoff"
    _ "github.com/denisenkom/go-mssqldb" // Need to import the SQL driver so we can tell Golang how to interpret our requests
)

// Helper function that does a single Exec with a transaction with a context on a query and variables.
// This function will return an error if there are any failures
func (requester *Requester) doTransaction(ctx context.Context,
    isolation sql.IsolationLevel, txFunc func(*sql.Tx) error) error {

    // First, get the database connection; if this fails then return an error
    conn, err := requester.getConn(ctx)
    if err != nil {
        return err
    }

    // Before we continue on, ensure that the connection is closed and returned to the connection pool
    defer func() {
        if err := conn.Close(); err != nil {
            log.Printf("Close failed, error: %v", err)
        }
    }()

    // Next, start the transaction with the given context and the default isolation
    tx, err := requester.getTx(ctx, conn, isolation)
    if err != nil {
        return err
    }

    // Now, ensure that the transaction is either rolled back or committed before
    // the function ends
    var tErr error
    defer func() {
        if p := recover(); p != nil {
            tx.Rollback()
            panic(p)
        } else if tErr != nil {
            log.Printf("An error occurred: %v", tErr)
            if err := tx.Rollback(); err != nil {
                log.Printf("Rollback failed, error: %v", err)
            }
        } else {
            if err := tx.Commit(); err != nil {
                log.Printf("Commit failed, error: %v", err)
            }
        }
    }()

    // Finally, run the function and return the result
    tErr = txFunc(tx)
    return tErr
}

// Helper function that gets a connection to the database with a backoff and retry
func (requester *Requester) getConn(ctx context.Context) (*sql.Conn, error) {

    // Create an object that will dictate how and when the retries are done
    // We currently want an exponential backoff that retries a maximum of 5 times
    repeater := backoff.WithContext(backoff.WithMaxRetries(
        backoff.NewExponentialBackOff(), 5), ctx)

    // Do a retry operation with a 500ms wait time and a maximum of 5 retries
    // and return the result of the operation therein
    var conn *sql.Conn
    if err := backoff.Retry(func() error {

        // Attempt to get the connection to the database
        var err error
        if conn, err = requester.conn.Conn(ctx); err != nil {

            // We failed to get the connection; if we have a login error, an EOF or handshake
            // failure then we'll attempt the connection again later so just return it and let
            // the backoff code handle it
            log.Printf("Conn failed, error: %v", err)
            if isLoginError(err, requester.serverName, requester.databaseName) {
                return err
            } else if strings.Contains(err.Error(), "EOF") {
                return err
            } else if strings.Contains(err.Error(), "TLS Handshake failed") {
                return err
            }

            // Otherwise, we can't recover from the error so return it
            return backoff.Permanent(err)
        }
        return nil
    }, repeater); err != nil {
        return nil, err
    }
    return conn, nil
}

// Helper function that starts a transaction against the database
func (requester *Requester) getTx(ctx context.Context, conn *sql.Conn,
    isolation sql.IsolationLevel) (*sql.Tx, error) {

    // Create an object that will dictate how and when the retries are done
    // We currently want an exponential backoff that retries a maximum of 5 times
    repeater := backoff.WithContext(backoff.WithMaxRetries(
        backoff.NewExponentialBackOff(), 5), ctx)

    // Do a retry operation with a 500ms wait time and a maximum of 5 retries
    // and return the result of the operation therein
    var tx *sql.Tx
    if err := backoff.Retry(func() error {

        // Attempt to start the transaction with the given context and the default isolation
        var err error
        if tx, err = conn.BeginTx(ctx, &sql.TxOptions{Isolation: isolation, ReadOnly: false}); err != nil {

            // We failed to create the transaction; if we have a connection error then we'll
            // attempt the connection again later so just return it and let the backoff code handle it
            if strings.Contains(err.Error(), "An existing connection was forcibly closed by the remote host.") ||
                strings.Contains(err.Error(), "bad connection") {
                log.Printf("BeginTx failed, error: %v. Retrying...", err)
                return err
            }

            // Otherwise, we can't recover from the error so return it
            log.Printf("Unknown/uncaught exception when attempting to create a transaction, error: %v", err)
            return backoff.Permanent(err)
        }
        return nil
    }, repeater); err != nil {
        return nil, err
    }
    return tx, nil
}
The requester object wraps an sql.DB and is created like this:
// First, create a connection string from the endpoint, port, user name, password and database name
connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%s;database=%s;connection timeout=30",
    endpoint, dbUser, dbPassword, port, dbName)

// Finally, attempt to connect to the database. If this fails then return an error
db, err := sql.Open("sqlserver", connString)
if err != nil {
    return nil, err
}

// Ensure that our connections are used and reused in such a way
// as to avoid bad connections and I/O timeouts
db.SetMaxOpenConns(20)
db.SetConnMaxLifetime(10 * time.Minute)
db.SetConnMaxIdleTime(10 * time.Minute)
On the whole this works well. The one problem I've noticed is that when a long time has elapsed between individual requests, I get an i/o timeout error on the first retry and then bad connection errors on subsequent retries, ultimately resulting in failure. My thought is that the problem is related to this bug. Essentially, it appears that Microsoft invalidates idle connections after 30 minutes. However, as I have the maximum idle time set to 10 minutes, this shouldn't be the problem.
What's going on here, and how do I fix this issue?
After some investigation, I discovered that the database connection grows stale after a 30-minute window, and that modifying the lifetime or idle time of the connection pool doesn't really do anything to fix that. So, what I did to alleviate this problem was to modify my getConn function to ping the server beforehand, so I could ensure that the connection is "fresh", for lack of a better term.
func (requester *Requester) getConn(ctx context.Context) (*sql.Conn, error) {

    // First, attempt to ping the server to ensure that the connection is good
    // If this fails, then return an error
    if err := requester.conn.PingContext(ctx); err != nil {
        return nil, err
    }

    // Create an object that will dictate how and when the retries are done
    // We currently want an exponential backoff that retries a maximum of 5 times
    repeater := backoff.WithContext(backoff.WithMaxRetries(
        backoff.NewExponentialBackOff(), 5), ctx)

    // Do a retry operation with a 500ms wait time and a maximum of 5 retries
    // and return the result of the operation therein
    var conn *sql.Conn
    if err := backoff.Retry(func() error {

        // Attempt to get the connection to the database
        var err error
        if conn, err = requester.conn.Conn(ctx); err != nil {

            // We failed to get the connection; if we have a login error, an EOF or handshake
            // failure then we'll attempt the connection again later so just return it and let
            // the backoff code handle it
            log.Printf("Conn failed, error: %v", err)
            if isLoginError(err, requester.serverName, requester.databaseName) {
                return err
            } else if strings.Contains(err.Error(), "EOF") {
                return err
            } else if strings.Contains(err.Error(), "TLS Handshake failed") {
                return err
            }

            // Otherwise, we can't recover from the error so return it
            return backoff.Permanent(err)
        }
        return nil
    }, repeater); err != nil {
        return nil, err
    }
    return conn, nil
}
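As a side note (my suggestion, not part of the original answer): rather than matching on error strings, the standard library exposes driver.ErrBadConn as the sentinel for a dead connection, and checking it with errors.Is is more robust, assuming the driver returns or wraps that sentinel; the string checks can remain as a fallback:

import (
    "database/sql/driver"
    "errors"
    "strings"
)

// isRetryable reports whether err looks like a transient connection failure
// worth retrying. driver.ErrBadConn is the standard sentinel; whether
// go-mssqldb surfaces it in every case is an assumption, so the string
// checks from the original code are kept as a fallback.
func isRetryable(err error) bool {
    if errors.Is(err, driver.ErrBadConn) {
        return true
    }
    msg := err.Error()
    return strings.Contains(msg, "bad connection") ||
        strings.Contains(msg, "EOF") ||
        strings.Contains(msg, "An existing connection was forcibly closed by the remote host.")
}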

Sharing a Postgres (or other DBMS) transaction context between processes

A common pattern in monolithic application design is to delegate business logic to a dedicated service, passing in an open transaction as, for example, a javax.persistence.EntityTransaction instance in Java, or an *sql.Tx in Go.
Go example:
// business.go
type BusinessLogicService interface {
    DoSomething(tx *sql.Tx)
}

type businessLogicService struct {
}

func (s *businessLogicService) DoSomething(tx *sql.Tx) {
    tx.ExecContext(.....)
}

func NewBusinessLogicService() BusinessLogicService {
    return &businessLogicService{}
}

// server.go
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
    log.Fatal(err)
}
bls := business.NewBusinessLogicService()
bls.DoSomething(tx)
tx.Commit()
Could the same effect be achieved in an architecture where each of these components are implemented in a different language/runtime? In such an application, Postgres is responsible for doing the 'bookkeeping' in relation to the DB transaction. It seems to me that it should be possible to pass a similar 'handle' for the transaction to another process to read its state and append operations.
For example the equivalent business logic is provided as a gRPC service with the following definition:
message TransactionInfo {
  string transaction_id = 1;
}

message DoSomethingRequest {
  TransactionInfo transaction_info = 1;
}

message DoSomethingResponse {
}

service BusinessLogicService {
  rpc DoSomething(DoSomethingRequest) returns (DoSomethingResponse);
}
The server process BEGINs the transaction and passes a reference to it to this BusinessLogicService.
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
    log.Fatal(err)
}
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
    ...
}
defer conn.Close()
bls := pb.NewBusinessLogicServiceClient(conn)
// SOMEHOW PASS THE TX OBJECT TO THE REMOTE SERVICE
txObj := &pb.TransactionInfo{....???????????.....}
result, err := bls.DoSomething(ctx, &pb.DoSomethingRequest{TransactionInfo: txObj})
tx.Commit()
Is this possible with Postgres or another DBMS?

How to set timeout for long running queries in gorm

Is there a way I can set gorm to time out after a configurable period of time when running a long query? I am using mssql. I have looked through the documentation and haven't discovered a way yet.
This code seems to work for me and is pretty clean. Just use transactions, I guess.
// Gorm query below
query = query.Where(whereClause)

// Set up the timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

var x sql.TxOptions
db := query.BeginTx(ctx, &x)
defer db.Commit()

// Execute the query (Find needs a pointer, not a dereferenced value)
if err := db.Find(results).Error; err != nil {
    return err
}
By using WithContext from *gorm.DB you can pass a timeout context to Gorm:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var products []Product
db.WithContext(ctx).Find(&products)
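To actually observe the timeout, check the query's Error field; this is a minimal sketch assuming gorm v2, where an expired context surfaces as an error on the query:

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

var products []Product
if err := db.WithContext(ctx).Find(&products).Error; err != nil {
    // When the deadline passes mid-query, err is typically
    // context.DeadlineExceeded (possibly wrapped by the driver).
    log.Println("query failed:", err)
}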
gorm does not, at the moment, seem to implement any query method that accepts a context.Context as a parameter, the way e.g. QueryRowContext does.
You can create a workaround and use a context to make your query "expirable".
type QueryResponse struct {
    MyResult *MyResult
    Error    error
}

func queryHelper(ctx context.Context) <-chan *QueryResponse {
    chResult := make(chan *QueryResponse, 1)
    go func() {
        // your query here
        // ...
        // blah blah check stuff do whatever you want
        // err is an error that comes from the query code
        if err != nil {
            chResult <- &QueryResponse{nil, err}
            return
        }
        chResult <- &QueryResponse{queryResponse, nil}
    }()
    return chResult
}

func MyQueryFunction(ctx context.Context) (*MyResult, error) {
    select {
    case <-ctx.Done():
        return nil, fmt.Errorf("context timeout, query out of time")
    case res := <-queryHelper(ctx):
        return res.MyResult, res.Error
    }
}
And then in the calling function, whatever it is, you can create a context and pass it to MyQueryFunction. If the query exceeds the time you have set, an error is raised and you should (must) check it.
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

res, err := MyQueryFunction(ctx)
if err != nil {
    fmt.Printf("err %v", err)
} else {
    fmt.Printf("res %v", res)
}
Of course, this is just an example; it does not consider a lot of use cases, and a proper implementation inside the framework should be preferred.

golang: Monitor (and read) files open for writing

I have some .log files and want to monitor whether any data is appended to any of them, so I can collect that data into a DB.
How do I open a file that is already open for writing, and how do I monitor it for new lines/changes?
Thank you.
I wrote this small piece of code that will monitor a file for changes and, if a change is detected, run the function you passed to it.
func watchFileAndRun(filePath string, fn func()) error {
    defer func() {
        if r := recover(); r != nil {
            logCore("ERROR", "Error: watching file:", r)
        }
    }()
    fn()
    initialStat, err := os.Stat(filePath)
    checkErr(err)
    for {
        stat, err := os.Stat(filePath)
        checkErr(err)
        if stat.Size() != initialStat.Size() || stat.ModTime() != initialStat.ModTime() {
            fn()
            initialStat, err = os.Stat(filePath)
            checkErr(err)
        }
        time.Sleep(10 * time.Second)
    }
}
To call this function where loadEnv is a function:
go watchFileAndRun("config.json", loadEnv)
Some notes:
go watchFileAndRun() will run the routine asynchronously, i.e. in the background.
checkErr is my own function; you can simply put a standard
if err != nil { fmt.Printf("Something went wrong: %v\n", err) } in its place.
The defer on the first line is there to prevent the program from crashing main if something goes wrong. You could nest it deeper if you wanted to make this function more resilient.
Hope it helps someone. It's also smaller than the fsnotify package, which, to be frank, I couldn't get working either.
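For completeness, here is a minimal sketch of the event-based alternative the answer alludes to, assuming github.com/fsnotify/fsnotify; the watched file name is a placeholder:

package main

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()

    // Watch a single file for writes; "config.json" is a placeholder.
    if err := watcher.Add("config.json"); err != nil {
        log.Fatal(err)
    }

    for {
        select {
        case event := <-watcher.Events:
            if event.Op&fsnotify.Write == fsnotify.Write {
                log.Println("file modified:", event.Name)
                // react to the change here, e.g. read the newly appended lines
            }
        case err := <-watcher.Errors:
            log.Println("watcher error:", err)
        }
    }
}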
