A common pattern in monolithic application design is to delegate business logic to a dedicated service, passing in an open transaction as, for example, a javax.persistence.EntityTransaction instance in Java, or an *sql.Tx in Go.
Go example:
// business.go
type BusinessLogicService interface {
	DoSomething(tx *sql.Tx)
}

type businessLogicService struct {
}

func (s *businessLogicService) DoSomething(tx *sql.Tx) {
	tx.ExecContext(.....)
}

func NewBusinessLogicService() BusinessLogicService {
	return &businessLogicService{}
}
// server.go
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	log.Fatal(err)
}
bls := business.NewBusinessLogicService()
bls.DoSomething(tx)
tx.Commit()
Could the same effect be achieved in an architecture where each of these components is implemented in a different language/runtime? In such an application, Postgres is responsible for doing the 'bookkeeping' of the DB transaction. It seems to me that it should be possible to pass a similar 'handle' for the transaction to another process, which could then read its state and append operations.
For example, the equivalent business logic is provided as a gRPC service with the following definition:
message TransactionInfo {
  string transaction_id = 1;
}
message DoSomethingRequest {
  TransactionInfo transaction_info = 1;
}
message DoSomethingResponse {
}
service BusinessLogicService {
  rpc DoSomething(DoSomethingRequest) returns (DoSomethingResponse);
}
The server process BEGINs the transaction and passes a reference to it to this BusinessLogicService.
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	log.Fatal(err)
}
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
	...
}
defer conn.Close()
bls := pb.NewBusinessLogicServiceClient(conn)
// SOMEHOW PASS THE TX OBJECT TO THE REMOTE SERVICE
txObj := &pb.TransactionInfo{....???????????.....}
result, err := bls.DoSomething(ctx, &pb.DoSomethingRequest{TransactionInfo: txObj})
tx.Commit()
Is this possible with Postgres or another DBMS?
Related
Is there a way I can set gorm to time out after a configurable period of time when running a long query? I am using mssql. I have looked through the documentation and haven't discovered a way yet.
This code seems to work for me and is pretty clean. Just use transactions I guess.
// Gorm query below
query = query.Where(whereClause)
// Set up timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var opts sql.TxOptions
tx := query.BeginTx(ctx, &opts)
defer tx.Commit()
// Execute the query (results should already be a pointer to a slice, so don't dereference it)
if err := tx.Find(results).Error; err != nil {
	return err
}
By using WithContext on *gorm.DB you can pass a timeout context to Gorm:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var products []Product
db.WithContext(ctx).Find(&products)
gorm, at the moment, does not implement any query method that accepts a context.Context as a parameter, as e.g. database/sql's QueryRowContext does.
You can create a workaround and use a context to make your query "expirable".
type QueryResponse struct {
	MyResult *MyResult
	Error    error
}

func queryHelper(ctx context.Context) <-chan *QueryResponse {
	chResult := make(chan *QueryResponse, 1)
	go func() {
		// your query here
		// ...
		// blah blah check stuff do whatever you want
		// err is an error that comes from the query code
		if err != nil {
			chResult <- &QueryResponse{nil, err}
			return
		}
		chResult <- &QueryResponse{queryResponse, nil}
	}()
	return chResult
}
func MyQueryFunction(ctx context.Context) (*MyResult, error) {
	select {
	case <-ctx.Done():
		return nil, fmt.Errorf("context timeout, query out of time")
	case res := <-queryHelper(ctx):
		return res.MyResult, res.Error
	}
}
Then, in your calling function, whatever it is, you can create a context and pass it to MyQueryFunction. If the query exceeds the time you have set, an error is returned, and you should (must) check it.
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
res, err := MyQueryFunction(ctx)
if err != nil {
	fmt.Printf("err %v", err)
} else {
	fmt.Printf("res %v", res)
}
Of course, this is just an example; it does not cover many use cases, and a proper implementation inside the framework would be preferable.
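For comparison, the standard library's database/sql accepts a context on every query method, so if you can drop below gorm for a given query you get cancellation for free. A minimal sketch; the table and parameter are made up for illustration:

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

// QueryRowContext aborts the query when the context expires.
var name string
err := db.QueryRowContext(ctx, "SELECT name FROM products WHERE id = $1", 1).Scan(&name)
if err != nil {
	log.Println(err) // a timeout surfaces as a context deadline error
}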
I want to create a database-driven application using Golang. I am trying to do it the TDD way.
When I try to test methods that make SQL queries, what packages are available?
I don't want to connect to the default database that I use for development. I can write code to switch to another test database while running a test, but is there a Go library that already does this?
Is there a library that does DB tests without connecting to a database at all?
What is the standard way to do database tests with Golang?
I had a similar question not long ago when refactoring some of my own tests, and there's a couple of ways you can do it:
a) Provide an exported type and an Open or Connect function that returns it - e.g.
type DB struct {
	db *sqlx.DB
}

// Using http://jmoiron.github.io/sqlx/ for this example, but
// it has the same interface as database/sql
func Open(opts *Options) (*DB, error) {
	db, err := sqlx.Connect(opts.Driver, fmt.Sprintf("host=%s user=%s dbname=%s sslmode=%s", opts.Host, opts.User, opts.Name, opts.SSL))
	if err != nil {
		return nil, err
	}
	return &DB{db}, nil
}
... and then for each of your tests, write setup & teardown functions that return an instance of *DB that you define your database functions on (as methods - i.e. func (db *DB) GetUser(user *User) (bool, error)):
// Setup the test environment.
func setup() (*DB, error) {
	err := withTestDB()
	if err != nil {
		return nil, err
	}
	// testOptions is a global in this case, but you could easily
	// create one per-test
	db, err := Open(testOptions)
	if err != nil {
		return nil, err
	}
	// Loads our test schema
	db.MustLoad()
	return db, nil
}
// Create our test database.
func withTestDB() error {
	// open() connects to the default/maintenance database, so we can
	// create the test database from there
	db, err := open()
	if err != nil {
		return err
	}
	defer db.Close()
	_, err = db.Exec(fmt.Sprintf("CREATE DATABASE %s;", testOptions.Name))
	if err != nil {
		return err
	}
	return nil
}
Note that this is somewhat "integration" testing, but I strongly prefer to test against a "real" database since mocking the interface won't help you catch issues with your queries/query syntax.
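That said, if you specifically want tests that never touch a database, a library such as github.com/DATA-DOG/go-sqlmock can stand in for the driver. A minimal sketch; the query, columns and arguments are made up for illustration:

func TestGetUser(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	// Serve canned rows instead of hitting a real database.
	rows := sqlmock.NewRows([]string{"id", "name"}).AddRow(1, "alice")
	mock.ExpectQuery("SELECT (.+) FROM users").WithArgs(1).WillReturnRows(rows)

	// ... run the code under test against db ...

	if err := mock.ExpectationsWereMet(); err != nil {
		t.Error(err)
	}
}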
b) The alternative, although less extensible on the application side, is to have a global db *sqlx.DB variable that you initialise in init() within your tests (since tests have no guaranteed order, you'll need to use init()) and then run your tests from there. i.e.
var db *sqlx.DB

func init() {
	var err error
	// Note the = and *not* := - we don't want to shadow our global
	db, err = sqlx.Connect(...)
	if err != nil {
		...
	}
	err = loadTestSchema(db) // your own schema-loading helper
	// etc.
}
func TestGetUser(t *testing.T) {
	user := User{}
	exists, err := db.GetUser(user)
	...
}
You can find some practical examples in drone.io's GitHub repo, and I'd also recommend this article on structuring Go applications (especially the DB stuff).
I use a global variable to store the data source name (or connection string) of the current database and set it to a different value in test functions. Since there is only one database I need to operate on, I chose the easiest way.
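A minimal sketch of that approach; the DSNs are made up, and TestMain swaps the global before any tests run:

var dsn = "host=localhost dbname=app_dev sslmode=disable" // used by the application

func TestMain(m *testing.M) {
	dsn = "host=localhost dbname=app_test sslmode=disable" // hypothetical test database
	os.Exit(m.Run())
}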
I'm trying to push messages to Google Pub/Sub asynchronously through a goroutine, but I'm facing the error below:
panic: not an App Engine context
I'm using mux and have an API handler.
n = 1 million
func apihandler(w http.ResponseWriter, r *http.Request) {
	go createuniquecodes(n)
	fmt.Fprint(w, "request running in background")
}

func createuniquecodes(n int) {
	c := make(chan string)
	go generateuniquecodes(c, n)
	for val := range c {
		publishtopubsub(val)
	}
}

func generateuniquecodes(c chan string, n int) {
	for i := 0; i < n; i++ {
		uniquecode := "some random string"
		// publish to channel and pubsub
		c <- uniquecode
	}
	close(c)
}
func publishtopubsub(msg string) error {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, projectId)
	if err != nil {
		log.Fatalf("Could not create pubsub Client: %v", err)
	}
	t := client.Topic(topicName)
	result := t.Publish(ctx, &pubsub.Message{
		Data: []byte(msg),
	})
	id, err := result.Get(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("Published a message; msg ID: %v\n", id)
	return nil
}
Please note that I need to generate 5 million unique codes.
How will I define a context in a goroutine, since I'm doing everything asynchronously?
I assume you're using the App Engine standard (not flexible) environment. Please note that a "request handler (apihandler in your case) has a limited amount of time to generate and return a response to a request, typically around 60 seconds. Once the deadline has been reached, the request handler is interrupted".
You're trying to "break out" of the request when calling go createuniquecodes(n) and then ctx := context.Background() down the line is what panics with not an App Engine context. You could technically use NewContext(req *http.Request) to derive a valid context from the original context, but again, you'd only have 60s before your request times out.
Please have a look at Task Queues, as they "let applications perform work, called tasks, asynchronously outside of a user request."
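A minimal sketch of handing the work off to a push queue instead of a goroutine, using google.golang.org/appengine/taskqueue; the /generatecodes worker path and its handler are assumptions:

func apihandler(w http.ResponseWriter, r *http.Request) {
	// A valid App Engine context, derived from the request.
	ctx := appengine.NewContext(r)
	t := taskqueue.NewPOSTTask("/generatecodes", url.Values{"n": {"5000000"}})
	// The task runs outside this request, with its own (much longer) deadline.
	if _, err := taskqueue.Add(ctx, t, ""); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, "codes are being generated in the background")
}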
I've implemented a dao.go file with the following implementation:
type DbClient struct {
	db *gorm.DB
}
GetDBClient() initializes the connection with the database and returns (*DbClient, error).
func (db *DbClient) Close() {
	db.db.Close()
}
Different CRUD methods of DbClient
And the main.go file that serves all the handlers consumes it like this:
var dbClient *DbClient

func main() {
	db, err := GetDBClient()
	if err != nil {
		panic(err)
	}
	dbClient = db
	defer dbClient.Close()
	...
}
So all the handlers of the main.go use global dbClient.
Is this architecture thread safe and does it provide atomicity of operations with database?
This design should be good.
sql.DB handles concurrent access and implements pooling. gorm inherits these features from it.
One change I would make, though. Global variables are hard to manage.
You can inject db into your code that uses it.
// to be removed
// var dbClient *DbClient

func main() {
	db, err := GetDBClient()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	CodeThatUsesDB(db)
	...
}
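As an aside, since the answer leans on sql.DB's pooling: the pool can be tuned through the underlying *sql.DB. A minimal sketch, assuming gorm v1 where DB() returns the raw handle (gormDB stands in for your *gorm.DB); the numbers are illustrative only:

// inside dao.go, right after the gorm connection is opened
sqlDB := gormDB.DB()                      // gorm v1 exposes the underlying *sql.DB
sqlDB.SetMaxOpenConns(25)                 // cap concurrent connections to the database
sqlDB.SetMaxIdleConns(5)                  // keep a few idle connections warm
sqlDB.SetConnMaxLifetime(5 * time.Minute) // recycle long-lived connections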
I am working on a multi-tenant application. I need to query a particular user from a kind, and from a particular namespace.
I am able to get the values from the default namespace. The package I am using here is "google.golang.org/appengine/datastore".
q := datastore.NewQuery(ENTITYNAME).Filter("Name =", ed.Expense.Name)
var expenses []ExpenseEntity
return q.GetAll(ed.Ctx, &expenses)
The namespace value is not part of the query (it's not a property of the query). The namespace comes from the context which you pass when executing the query, e.g. to Query.GetAll().
If you have a context (you do as you pass it to q.GetAll()), you can create a derivative context with a given namespace using the appengine.Namespace() function.
For example:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
// check err
And use this new context to pass to Query.GetAll():
return q.GetAll(ctx2, &expenses)
It is rare that you need to create a new context with a different namespace; ed.Ctx should already be a context with the right namespace. So when/where you create ed.Ctx, you should apply the namespace there, so you can avoid "accidental" exposure of data of other tenants (which would be a major security issue).
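A minimal sketch of applying the namespace when the context is created, where tenantFor is a hypothetical helper that maps the request to a tenant name and ExpenseDao stands in for whatever struct carries ed.Ctx:

func handler(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	ctx, err := appengine.Namespace(ctx, tenantFor(r)) // hypothetical tenant lookup
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	ed := &ExpenseDao{Ctx: ctx} // all queries through ed now see only this tenant
	// ...
}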
If you are using the old lib: google.golang.org/appengine/datastore, then you need to create the context with the namespace:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
if err != nil {
	return err
}
But you WANT to be using the latest lib: cloud.google.com/go/datastore. The Namespace can be set directly on the Query object. This is new. You must then run the query using datastoreClient.Run(ctx, query).
func deleteTestNamespace(ctx context.Context, namespace string) error {
	// The official client takes the project ID; projectID is assumed to be defined elsewhere.
	dsClient, err := datastore.NewClient(ctx, projectID)
	if err != nil {
		return err
	}
	defer dsClient.Close()

	for _, kind := range envKinds {
		// Get all keys
		var keys []*datastore.Key
		query := datastore.NewQuery(kind).KeysOnly().Namespace(namespace)
		it := dsClient.Run(ctx, query)
		for {
			key, err := it.Next(nil) // keys-only query: no entity to load
			if err == iterator.Done {
				break
			}
			if err != nil {
				return err
			}
			keys = append(keys, key)
		}

		// Delete all records in chunks of 500 or less
		for i := 0; i < len(keys); i += 500 {
			chunk := min(len(keys)-i, 500)
			if err := dsClient.DeleteMulti(ctx, keys[i:i+chunk]); err != nil {
				return err
			}
		}
	}
	return nil
}
func min(num1 int, num2 int) int {
	if num1 < num2 {
		return num1
	}
	return num2
}