How to solve the TIME_WAIT state issue under high concurrency? - database

If I run the example below on Windows, I quickly hit the TCP connection limit (which I set to 64k) and get the error: dial tcp 127.0.0.1:3306: connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.
With netstat -ano|findstr 3306 I can see all these TIME_WAIT states waiting for their lifetime to end.
Why aren't connections closed immediately?
The code:
package main
import (
_ "github.com/go-sql-driver/mysql"
"github.com/jmoiron/sqlx"
"log"
"sync"
)
var (
db_instance *sqlx.DB
wg sync.WaitGroup
)
func main() {
db, err := sqlx.Connect("mysql", "user:pass@/table")
if err != nil {
log.Fatalln(err)
}
defer db.Close()
db_instance = db
for {
for l := 0; l < 50; l++ {
wg.Add(1)
go DB_TEST()
}
wg.Wait()
}
}
func DB_TEST() {
defer wg.Done()
var s string
err := db_instance.QueryRow("SELECT NOW()").Scan(&s)
if err != nil {
log.Println(err)
return
}
log.Println(s)
}

Drafting an answer from my comments discussion with @Glavić.
Utilize the SetMaxOpenConns and SetMaxIdleConns settings to keep the number of connections, and therefore the TIME_WAIT sockets, under control. If needed, use SetConnMaxLifetime too, but generally it isn't needed.

Related

Using goroutines to iterate through file indefinitely

I'm new to Go, so please excuse my ignorance. I'm attempting to iterate through a bunch of wordlists line by line, indefinitely, with goroutines. But when I try, it either doesn't iterate or stops halfway through. How would I go about this properly without breaking the flow?
package main
import (
"bufio"
"fmt"
"os"
)
var file, _ = os.Open("wordlist.txt")
func start() {
scanner := bufio.NewScanner(file)
for scanner.Scan() {
fmt.Println(scanner.Text())
}
}
func main(){
for t := 0; t < 150; t++ {
go start()
fmt.Scanln()
}
}
Thank you!
You declare file as a global variable. Sharing read/write file state amongst multiple goroutines is a data race and will give you undefined results.
Most likely, reads start where the last read from any of the goroutines left off. If that's end-of-file, it likely continues to be end-of-file. But, since the results are undefined, that's not guaranteed. Your erratic results are due to undefined behavior.
Here's a revised version of your program that declares a local file variable and uses a sync.WaitGroup to synchronize the completion of all the go start() goroutines and the main goroutine. The program checks for errors.
package main
import (
"bufio"
"fmt"
"os"
"sync"
)
func start(filename string, wg *sync.WaitGroup, t int) {
defer wg.Done()
file, err := os.Open(filename)
if err != nil {
fmt.Println(err)
return
}
defer file.Close()
lines := 0
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines++
}
if err := scanner.Err(); err != nil {
fmt.Println(err)
return
}
fmt.Println(t, lines)
}
func main() {
wg := &sync.WaitGroup{}
filename := "wordlist.txt"
for t := 0; t < 150; t++ {
wg.Add(1)
go start(filename, wg, t)
}
wg.Wait()
}

Bad connection response to long-running MSSQL transaction in Golang

I have a requester that manages my SQL queries against an Azure SQL database. The function responsible for transaction queries is as follows:
import (
"context"
"database/sql"
"fmt"
"log"
"strings"
"time"
"github.com/cenkalti/backoff"
_ "github.com/denisenkom/go-mssqldb" // Need to import the SQL driver so we can tell Golang how to interpret our requests
)
// Helper function that does a single Exec with a transaction with a context on a query and variables.
// This function will return an error if there are any failures
func (requester *Requester) doTransaction(ctx context.Context,
isolation sql.IsolationLevel, txFunc func(*sql.Tx) error) error {
// First, get the database connection; if this fails then return an error
conn, err := requester.getConn(ctx)
if err != nil {
return err
}
// Before we continue on, ensure that the connection is closed and returned to the connection pool
defer func() {
if err := conn.Close(); err != nil {
log.Printf("Close failed, error: %v", err)
}
}()
// Next, start the transaction with the given context and the default isolation
tx, err := requester.getTx(ctx, conn, isolation)
if err != nil {
return err
}
// Now, ensure that the transaction is either rolled back or committed before
// the function ends
var tErr error
defer func() {
if p := recover(); p != nil {
tx.Rollback()
panic(p)
} else if tErr != nil {
log.Printf("An error occurred: %v", tErr)
if err := tx.Rollback(); err != nil {
log.Printf("Rollback failed, error: %v", err)
}
} else {
if err := tx.Commit(); err != nil {
log.Printf("Commit failed, error: %v", err)
}
}
}()
// Finally, run the function and return the result
tErr = txFunc(tx)
return tErr
}
// Helper function that gets a connection to the database with a backup and retry
func (requester *Requester) getConn(ctx context.Context) (*sql.Conn, error) {
// Create an object that will dictate how and when the retries are done
// We currently want an exponential backoff that retries a maximum of 5 times
repeater := backoff.WithContext(backoff.WithMaxRetries(
backoff.NewExponentialBackOff(), 5), ctx)
// Do a retry operation with a 500ms wait time and a maximum of 5 retries
// and return the result of the operation therein
var conn *sql.Conn
if err := backoff.Retry(func() error {
// Attempt to get the connection to the database
var err error
if conn, err = requester.conn.Conn(ctx); err != nil {
// We failed to get the connection; if we have a login error, an EOF or handshake
// failure then we'll attempt the connection again later so just return it and let
// the backoff code handle it
log.Printf("Conn failed, error: %v", err)
if isLoginError(err, requester.serverName, requester.databaseName) {
return err
} else if strings.Contains(err.Error(), "EOF") {
return err
} else if strings.Contains(err.Error(), "TLS Handshake failed") {
return err
}
// Otherwise, we can't recover from the error so return it
return backoff.Permanent(err)
}
return nil
}, repeater); err != nil {
return nil, err
}
return conn, nil
}
// Helper function that starts a transaction against the database
func (requester *Requester) getTx(ctx context.Context, conn *sql.Conn,
isolation sql.IsolationLevel) (*sql.Tx, error) {
// Create an object that will dictate how and when the retries are done
// We currently want an exponential backoff that retries a maximum of 5 times
repeater := backoff.WithContext(backoff.WithMaxRetries(
backoff.NewExponentialBackOff(), 5), ctx)
// Do a retry operation with a 500ms wait time and a maximum of 5 retries
// and return the result of the operation therein
var tx *sql.Tx
if err := backoff.Retry(func() error {
// Attempt to start the transaction with the given context and the default isolation
var err error
if tx, err = conn.BeginTx(ctx, &sql.TxOptions{Isolation: isolation, ReadOnly: false}); err != nil {
// We failed to create the transaction; if we have a connection error then we'll
// attempt the connection again later so just return it and let the backoff code handle it
if strings.Contains(err.Error(), "An existing connection was forcibly closed by the remote host.") ||
strings.Contains(err.Error(), "bad connection") {
log.Printf("BeginTx failed, error: %v. Retrying...", err)
return err
}
// Otherwise, we can't recover from the error so return it
log.Printf("Unknown/uncaught exception when attempting to create a transaction, error: %v", err)
return backoff.Permanent(err)
}
return nil
}, repeater); err != nil {
return nil, err
}
return tx, nil
}
The requester object wraps an sql.Db and is created like this:
// First, create a connection string from the endpoint, port, user name, password and database name
connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%s;database=%s;connection timeout=30",
endpoint, dbUser, dbPassword, port, dbName)
// Finally, attempt to connect to the database. If this fails then return an error
db, err := sql.Open("sqlserver", connString)
if err != nil {
return nil, err
}
// Ensure that our connections are used and reused in such a way
// as to avoid bad connections and I/O timeouts
db.SetMaxOpenConns(20)
db.SetConnMaxLifetime(10 * time.Minute)
db.SetConnMaxIdleTime(10 * time.Minute)
On the whole this works well. The one problem I've noticed is that when a long time has elapsed between individual requests, I'll get an i/o timeout error on the first retry and then bad connection errors on subsequent retries, ultimately resulting in failure. My thought is that the problem is related to this bug. Essentially, it appears that Microsoft invalidates idle connections after 30 minutes. However, as I have the maximum idle time set to 10 minutes, this shouldn't be the problem.
What's going on here and how do I fix this issue?
After some investigation, I discovered that the database connection grows stale after a 30-minute window, and modifying the lifetime or idle time of the connection pool doesn't really do anything to fix that. So, to alleviate the problem, I modified my getConn function to ping the server beforehand, ensuring that the connection is "fresh", for lack of a better term.
func (requester *Requester) getConn(ctx context.Context) (*sql.Conn, error) {
// First, attempt to ping the server to ensure that the connection is good
// If this fails, then return an error
if err := requester.conn.PingContext(ctx); err != nil {
return nil, err
}
// Create an object that will dictate how and when the retries are done
// We currently want an exponential backoff that retries a maximum of 5 times
repeater := backoff.WithContext(backoff.WithMaxRetries(
backoff.NewExponentialBackOff(), 5), ctx)
// Do a retry operation with a 500ms wait time and a maximum of 5 retries
// and return the result of the operation therein
var conn *sql.Conn
if err := backoff.Retry(func() error {
// Attempt to get the connection to the database
var err error
if conn, err = requester.conn.Conn(ctx); err != nil {
// We failed to get the connection; if we have a login error, an EOF or handshake
// failure then we'll attempt the connection again later so just return it and let
// the backoff code handle it
log.Printf("Conn failed, error: %v", err)
if isLoginError(err, requester.serverName, requester.databaseName) {
return err
} else if strings.Contains(err.Error(), "EOF") {
return err
} else if strings.Contains(err.Error(), "TLS Handshake failed") {
return err
}
// Otherwise, we can't recover from the error so return it
return backoff.Permanent(err)
}
return nil
}, repeater); err != nil {
return nil, err
}
return conn, nil
}

Sharing a Postgres (or other DBMS) transaction context between processes

A common pattern in monolithic application design is to delegate business logic to a dedicated service, passing in an open transaction as, for example, a javax.persistence.EntityTransaction instance in Java, or a *sql.Tx in Go.
Go example:
// business.go
type BusinessLogicService interface {
DoSomething(tx *sql.Tx)
}
type businessLogicService struct {
}
func (s *businessLogicService) DoSomething(tx *sql.Tx) {
tx.ExecContext(.....)
}
func NewBusinessLogicService() BusinessLogicService {
return &businessLogicService{}
}
// server.go
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
log.Fatal(err)
}
bls := business.NewBusinessLogicService()
bls.DoSomething(tx)
tx.Commit()
Could the same effect be achieved in an architecture where each of these components are implemented in a different language/runtime? In such an application, Postgres is responsible for doing the 'bookkeeping' in relation to the DB transaction. It seems to me that it should be possible to pass a similar 'handle' for the transaction to another process to read its state and append operations.
For example the equivalent business logic is provided as a gRPC service with the following definition:
message TransactionInfo {
string transaction_id = 1;
}
message DoSomethingRequest {
TransactionInfo transaction_info = 1;
}
message DoSomethingResponse {
}
service BusinessLogicService {
rpc DoSomething(DoSomethingRequest) returns (DoSomethingResponse);
}
The server process BEGINs the transaction and passes a reference to this BusinessLogicService.
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
log.Fatal(err)
}
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
...
}
defer conn.Close()
bls := pb.NewBusinessLogicServiceClient(conn)
/// SOMEHOW PASS THE TX OBJECT TO THE REMOTE SERVICE
txObj := &pb.TransactionInfo{....???????????.....}
result, err := bls.DoSomething(ctx, &pb.DoSomethingRequest{TransactionInfo: txObj})
tx.Commit()
Is this possible with Postgres or another DBMS?

Publishing to google pub sub asynchronously through goroutine

I'm trying to push messages to Google Pub/Sub asynchronously through a goroutine, but I'm facing the error below:
panic: not an App Engine context
I'm using mux and have an API handler; n = 1 million:
func apihandler(w http.ResponseWriter, r *http.Request) {
go generateandpublish(n)
fmt.Fprintln(w, "request running in background")
}
func generateandpublish(n int) {
c := make(chan string)
go createuniquecodes(c, n)
for val := range c {
publishuq(val)
}
}
func createuniquecodes(c chan string, n int) {
for i := 0; i < n; i++ {
uniquecode := "some random string"
// publish to channel and pubsub
c <- uniquecode
}
close(c)
}
func publishuq(msg string) error {
ctx := context.Background()
client, err := pubsub.NewClient(ctx, projectId)
if err != nil {
log.Fatalf("Could not create pubsub Client: %v", err)
}
t := client.Topic(topicName)
result := t.Publish(ctx, &pubsub.Message{
Data: []byte(msg),
})
id, err := result.Get(ctx)
if err != nil {
return err
}
fmt.Printf("Published a message; msg ID: %v\n", id)
return nil
}
Please note that I need to generate 5 million unique codes.
How do I define a context in a goroutine, given that I'm doing everything asynchronously?
I assume you're using the App Engine standard (not flexible) environment. Please note that a "request handler (apihandler in your case) has a limited amount of time to generate and return a response to a request, typically around 60 seconds. Once the deadline has been reached, the request handler is interrupted".
You're trying to "break out" of the request when calling go createuniquecodes(n) and then ctx := context.Background() down the line is what panics with not an App Engine context. You could technically use NewContext(req *http.Request) to derive a valid context from the original context, but again, you'd only have 60s before your request times out.
Please have a look at TaskQueues, as they "let applications perform work, called tasks, asynchronously outside of a user request."

Unable to connect to MS SQL

I'm playing around with Golang and trying to connect to MS SQL. I'm using github.com/denisenkom/go-mssqldb package and sqlx for this purpose.
But I'm getting the error: Unable to open tcp connection with host 'localhost:1433': dial tcp [::1]:1433: connectex: No connection could be made because the target machine actively refused it.
I'm completely sure that everything is fine with the database itself, since it works perfectly with my .NET project.
Here is the code:
package main
import (
"database/sql"
"fmt"
"log"
_ "github.com/denisenkom/go-mssqldb"
"github.com/jmoiron/sqlx"
)
type Excursion struct {
Id int `db:"id"`
Name sql.NullString `db:"name"`
}
func main() {
db, err := sqlx.Connect("sqlserver", "server=localhost;user id=DESKTOP-H74S9IT\\andrey.shedko;database=Flex;connection timeout=30;")
if err != nil {
log.Fatal(err)
}
if err := db.Ping(); err != nil {
log.Fatal(err)
}
rows, err := db.Queryx("SELECT Id, Name FROM dbo.Excursions")
if err != nil {
log.Fatal(err)
}
for rows.Next() {
var item Excursion
err = rows.StructScan(&item)
if err != nil {
log.Fatal(err)
}
fmt.Printf(
"%d - %s\n===================\n",
item.Id,
item.Name.String,
)
}
defer db.Close()
}
Could you please advise what is wrong with this?
Probably you need to enable the TCP/IP protocol for SQL Server.
More info: Enable TCP/IP Network Protocol for SQL Server