How to mock a ping command - database

I'm using https://github.com/DATA-DOG/go-sqlmock
and trying to mock a connection to a db.
Now, I need to mock a ping command (for load balancing purposes). However, I can't find any information about how to do that.
For example, I would like to write a test like this
db, mock, _ := sqlmock.New()

// ExpectPing does not exist, but is there anything similar?
mock.ExpectPing().WillReturnError(errors.New("mock error"))

err := db.Ping()
if err == nil {
	t.Fatal("there should have been an error")
}
In addition, I need to inject this mocked object into a function, say New(*sql.DB), that returns a new struct, so the mock must be compatible with *sql.DB. For this reason, sqlmock seems a good choice; however, I haven't found a way to mock the ping command.
Is there any way to do this using this library?
If not, is there any other database/sql mock library that could do the trick?

Update: As of Jan 6, 2020, this functionality has been added to go-sqlmock, and is included in the v1.4.0 release.
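With that release, a test along the lines of the question should be possible. A minimal sketch (assuming I recall the v1.4.0+ API correctly: ping expectations have to be enabled explicitly with MonitorPingsOption when creating the mock):

db, mock, err := sqlmock.New(sqlmock.MonitorPingsOption(true))
if err != nil {
	t.Fatal(err)
}
defer db.Close()

mock.ExpectPing().WillReturnError(errors.New("mock error"))

if err := db.Ping(); err == nil {
	t.Fatal("there should have been an error")
}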
Original answer:
No, there is "nothing similar." The Ping and PingContext methods depend on the driver implementing the Pinger interface, so you can't mimic it by, for example, expecting a 'SELECT 1' or something.
There is already an issue requesting to add this. It seems to not have gotten much attention. I suspect creating a PR (which is probably about 3 lines of code) would greatly increase the chance of having that feature added.
If your goal is to make a Ping fail, you should be able to mimic that by connecting to an invalid DB endpoint (either with or without sqlmock).

You certainly can mock db itself:
import (
	"database/sql"
	"errors"
	"fmt"

	"github.com/DATA-DOG/go-sqlmock"
)

type mockedDB struct {
	*sql.DB
}

func (db *mockedDB) Ping() error {
	return errors.New("not implemented")
}

func Example_mockedDB_Ping() {
	db, _, _ := sqlmock.New()
	defer db.Close()

	mdb := mockedDB{db}
	fmt.Println("mdb.Ping():", mdb.Ping())
	// Output: mdb.Ping(): not implemented
}
but I don't understand the purpose of such an experiment.
In the same way, you could wrap the mock in a new type and define ExpectPing on it.

I needed to mock the Ping() functionality as well. This is how I solved it:
type mockDB struct {
	ReturnError error
}

func (db *mockDB) Ping() error {
	return db.ReturnError
}

func (db *mockDB) PingContext(ctx context.Context) error {
	return db.ReturnError
}

// Pinger makes it possible to mock and ask just for a ping.
type Pinger interface {
	PingContext(ctx context.Context) error
	Ping() error
}

// DatabasePingCheck returns a Check that validates connectivity to a database/sql.DB using PingContext().
func DatabasePingCheck(pinger Pinger, timeout time.Duration) Check {
	return func() error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		if pinger == nil {
			return fmt.Errorf("pinger is nil")
		}
		return pinger.PingContext(ctx)
	}
}
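For completeness, a rough sketch of how this could be exercised in a test; the Check type is assumed here to be a plain func() error (in practice it comes from whatever health-check package is in use):

// Check is assumed to be a plain function type; adjust to your health-check package.
type Check func() error

func TestDatabasePingCheck(t *testing.T) {
	mdb := &mockDB{ReturnError: errors.New("ping failed")}
	check := DatabasePingCheck(mdb, time.Second)
	if err := check(); err == nil {
		t.Fatal("expected the mocked ping error")
	}
}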

Related

Redis connection pool with Lambda

I am trying to use Redis with Lambda functions. How can I use connection pooling in the Lambda if it does not maintain state? Is it OK to always have the Lambda make new connections to Redis, or can I use the connection pool example I pasted below, where in my Handle func I would do conn := pool.Get()? I'm not sure which approach I should take. Any help is appreciated.
func Handle(ctx context.Context, req events.APIGatewayWebsocketProxyRequest) (interface{}, error) {
	// make a new redis connection
}
or
func newPool(addr string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:     3,
		IdleTimeout: 240 * time.Second,
		// Dial or DialContext must be set. When both are set, DialContext takes precedence over Dial.
		Dial: func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
	}
}

var (
	pool        *redis.Pool
	redisServer = flag.String("redisServer", ":6379", "")
)

func main() {
	flag.Parse()
	pool = newPool(*redisServer)
	...
}
Although not a perfect solution (unlike, say, AWS RDS Proxy), creating the connection pool outside the Lambda handler seems to be your best bet. The exact technique depends on your language/environment, since you can also lazily initialize the connection pool and save it in a class variable, singleton, etc. In your example, simply use the global pool variable from anywhere in the Lambda handler function.
var (
	pool        *redis.Pool
	redisServer = flag.String("redisServer", ":6379", "")
)

func newPool(addr string) *redis.Pool {
	...
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	conn := pool.Get()
	defer conn.Close()

	err := conn.Send("ZINCRBY", ...)
	if err != nil {
		return "", err
	}
	...
}

func main() {
	flag.Parse()
	pool = newPool(*redisServer)
	lambda.Start(HandleRequest)
}
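If you prefer the lazy-initialization route mentioned above, a minimal sketch using sync.Once might look like this (the address is hard-coded here purely for illustration):

var (
	pool     *redis.Pool
	poolOnce sync.Once
)

// getPool initializes the pool exactly once, on first use, and reuses it
// across warm invocations of the same Lambda instance.
func getPool() *redis.Pool {
	poolOnce.Do(func() {
		pool = newPool(":6379")
	})
	return pool
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	conn := getPool().Get()
	defer conn.Close()
	// ... use conn as before
	return "ok", nil
}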
Check this Redislabs article for some more info.
AWS Lambda's processing model is a rather simple one. As long as requests keep coming in (the Lambda is being called from API Gateway, a Kinesis stream, etc.), Lambda's lifecycle management code keeps the instance alive as it synchronously loops over the batches of requests. When this single-threaded approach can't keep up with the incoming requests (detected by constantly checking latency metrics), the lifecycle manager creates more Lambda instances. So, in a sense, when there's a lot to process, your Lambda(s) stay up for some time, and that's exactly when you want to be conservative with your server's free connections anyway.

Go App Engine get version in init() without Context

Is there a way to get my autoscaled application's VersionID in my init() function without a Context? The only available option seems to be appengine.VersionID(context.Context). Manually scaled instances have /_ah/start called when they start up (giving access to a Context), but there is nothing like this for autoscaled instances.
I am not caring about the generated ID that appengine.VersionID returns with it, just the app.yaml version.
EDIT: A bit of context: I want to deploy versions in the form x-x-x-dev or x-x-x-live and have my database connection depend on the version suffix. That way, when I look in the GCP console, I can be certain which deployed modules/services are using which database. Of course, I set up my DB connection pool in init(), which has no access to a Context.
I searched and searched with no answers online anywhere, so here it is.
Simply parse the app.yaml file in your init() function. My example here uses a yaml parsing package, but it can be done more lightweight if you need.
import "github.com/ghodss/yaml"
type AppVersion struct {
Version string `json:"version"`
}
func VersionID() (string, error) {
dat, err := ioutil.ReadFile("app.yaml")
if err != nil {
return "", err
}
a := &AppVersion{}
err = yaml.Unmarshal(dat, a)
if err != nil {
return "", err
}
return a.Version, nil
}
Note that this DOES NOT return the generated ID in the form X.Y that appengine.VersionID() does. Only the X part of the version.
As an aside, in the appengine repo on Github, the actual call to appengine.VersionID requires a Context, but internally calls the internal package with nil. So they basically force you to call it with a Context, but it isn't actually used. It's incredibly infuriating.
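To tie this back to the question's use case, init() could then pick a database based on the version suffix. A rough sketch (the database helpers are hypothetical):

func init() {
	version, err := VersionID()
	if err != nil {
		log.Printf("could not read version from app.yaml: %v", err)
		return
	}
	if strings.HasSuffix(version, "-live") {
		connectToLiveDB() // hypothetical helper
	} else {
		connectToDevDB() // hypothetical helper
	}
}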
EDIT: It should be noted that the new Go SDK in gcloud no longer supports version in the app.yaml, as it is now a required parameter at deploy. However, the "legacy" SDK is still supported and maintained, which I am continuing to use as of today (12/24/2018).

How to prevent / handle ErrBadConn with Azure SQL Database

I'm using this driver: https://github.com/denisenkom/go-mssqldb and on production, with an Azure SQL Database at the Standard S3 level, we are getting far too many ErrBadConn (driver: bad connection) errors.
How can I prevent, or at least gracefully handle, these errors? Here's some code to show how things are set up.
A typical database function call
package dal

var db *sql.DB

type Database struct{}

func (d Database) Open() {
	newDB, err := sql.Open("mssql", os.Getenv("dbconnestion"))
	if err != nil {
		panic(err)
	}

	err = newDB.Ping()
	if err != nil {
		panic(err)
	}

	db = newDB
}

func (d Database) Close() {
	db.Close()
}
// ... in another file
func (e *Entities) Add(entity Entity) (int64, error) {
	stmt, err := db.Prepare("INSERT INTO Entities VALUES(?, ?)")
	if err != nil {
		return -1, err
	}
	defer stmt.Close()

	result, err := stmt.Exec(entity.Field1, entity.Field2)
	if err != nil {
		return -1, err
	}

	return result.LastInsertId()
}
On a web api
func main() {
	db := dal.Database{}
	db.Open()
	defer db.Close()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		entities := &dal.Entities{}
		_, err := entities.Add(dal.Entity{Field1: "a", Field2: "b"})
		if err != nil {
			// here, all across my web API and the other web packages or CLI commands
			// that use the dal, I'm getting random ErrBadConn
		}
	})
}
So, in short, the dal package is shared across multiple Azure web apps and command-line Go apps.
I cannot see a pattern in these errors, which are frequent and occur at random. We are using Bugsnag to log the errors from all our apps.
For completeness, sometimes our Standard S3 limit of 200 concurrent connections is reached.
I've triple-checked every place in the package that accesses the database, making sure that all sql.Rows are closed and all db.Prepare statements are closed. As an example, here's what a typical query function looks like:
func (e *Entities) GetByID(id int64) ([]Entity, error) {
	rows, err := db.Query("SELECT * FROM Entities WHERE ID = ?", id)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var results []Entity
	for rows.Next() {
		var r Entity
		err := readEntity(rows, &r)
		if err != nil {
			return nil, err
		}
		results = append(results, r)
	}

	if err = rows.Err(); err != nil {
		return nil, err
	}

	return results, nil
}
readEntity basically just calls Scan on the fields.
I don't think it's code related; unit tests run fine locally. It's just that once deployed to Azure, after running for a while, driver: Bad connection starts to show up very frequently.
I've run this query to investigate, as suggested in this question: Azure SQL server max pool size was reached error
select * from sys.dm_exec_requests
But I'm not exactly sure what I should be paying attention to here.
Things I've done / made sure of:
As suggested, database/sql should handle the connection pool, so having a global variable for the database connection should be fine.
Made sure all sql.Rows and db.Prepare statements are closed everywhere.
Increased the Azure SQL tier to S3.
There's an issue for the SQL driver I'm using about Azure SQL putting database connections into a bad state if they idle for more than 2 minutes:
https://github.com/denisenkom/go-mssqldb/issues/81
Is the way database/sql handles connection pooling in any way incompatible with how Azure SQL Database manages connections?
Is there a way to gracefully handle this? I know that C# / Entity Framework has connection resiliency / retry logic for Azure SQL; is that there for similar reasons? How could I implement something similar without touching every piece of my error handling? I mean, I clearly don't want to do something like this everywhere:
if err == driver.ErrBadConn { // driver is database/sql/driver
	// close and re-open the global db object
	// retry
}
Surely this is not my only option here?
Any help would be extremely welcome.
Thank you
I'm not seeing anywhere that you close your database. Best practice (in other languages - not positive about Go) is to close / deallocate / dereference the database object after use, to release the connection back into the pool. If you're running out of connection resources, you're being told that you need to release things. Holding the reference open means nobody else can use that connection, so it'll stay around until it gets recycled because it's timed out. This is why you're getting it intermittently, rather than consistently - it depends on having a certain number of connections taking place w/in a certain period of time.
Pierre,
How often do you run into these connection issues? I recommend you build retry logic to gracefully recover from bad connections. Here is how you would do it with C#: https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-csharp-retry-windows/
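A rough Go equivalent of that retry pattern might look like the sketch below (the attempt count and backoff are arbitrary; adjust to taste):

// withRetry retries an operation whenever the driver reports a bad connection.
func withRetry(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = op()
		if err == nil || err != driver.ErrBadConn { // driver is database/sql/driver
			return err
		}
		// back off a little before trying again
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return err
}

This could then wrap the existing dal calls, e.g. withRetry(3, func() error { _, err := entities.Add(e); return err }), keeping the retry logic out of the individual handlers.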
If you still feel you need assistance, feel free to shoot me an email with your server name and database name at meetb#microsoft.com and we will get our team to look into this issue.
Cheers,
Meet

Google AppEngine DataStore Read & Write (Golang) [duplicate]

I am currently trying to test a piece of my code that runs a query on the datastore before putting in a new entity, to ensure that duplicates are not created. The code I wrote works fine in the context of the app, but the tests I wrote for that method are failing. It seems that I cannot access data put into the datastore through queries in the context of the testing package.
One possibility might lie in the output from goapp test which reads: Applying all pending transactions and saving the datastore. This line prints out after both the get and put methods are called (I verified this with log statements).
I tried closing the context and creating a new one for the different operations, but unfortunately that didn't help either. Below is a simple test case that Puts in an object and then runs a query on it. Any help would be appreciated.
type Entity struct {
	Value string
}

func TestEntityQuery(t *testing.T) {
	c, err := aetest.NewContext(nil)
	if err != nil {
		t.Fatal(err)
	}
	defer c.Close()

	key := datastore.NewIncompleteKey(c, "Entity", nil)
	key, err = datastore.Put(c, key, &Entity{Value: "test"})
	if err != nil {
		t.Fatal(err)
	}

	q := datastore.NewQuery("Entity").Filter("Value =", "test")
	var entities []Entity
	keys, err := q.GetAll(c, &entities)
	if err != nil {
		t.Fatal(err)
	}

	if len(keys) == 0 {
		t.Error("No keys found in query")
	}
	if len(entities) == 0 {
		t.Error("No entities found in query")
	}
}
There is nothing wrong with your test code. The issue lies in the Datastore itself. Most queries in the HR Datastore are not "immediately consistent" but eventually consistent. You can read more about this in the Datastore documentation.
So basically what happens is that you put an entity into the Datastore, and the SDK's Datastore "simulates" the latency that you can observe in production, so if you run a query right after that (which is not an ancestor query), the query result will not include the new entity you just saved.
If you put a few seconds of sleep between datastore.Put() and q.GetAll(), you will see the test pass. Try it. In my test it was enough to sleep just 100ms, and the test always passed. But when writing tests for such cases, use the StronglyConsistentDatastore: true option, as can be seen in JohnGB's answer.
You would also see the test pass without sleep if you'd use Ancestor queries because they are strongly consistent.
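For illustration, a sketch of what the ancestor-query variant of the test above could look like (the parent key name here is made up):

// In TestEntityQuery above, give the entity a parent key and query with
// .Ancestor(), which is strongly consistent.
parent := datastore.NewKey(c, "EntityGroup", "test-group", 0, nil)
key := datastore.NewIncompleteKey(c, "Entity", parent)
if _, err := datastore.Put(c, key, &Entity{Value: "test"}); err != nil {
	t.Fatal(err)
}

q := datastore.NewQuery("Entity").Ancestor(parent).Filter("Value =", "test")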
The way to do this is to force the datastore to be strongly consistent by setting up the context like this:
c, err := aetest.NewContext(&aetest.Options{StronglyConsistentDatastore: true})
if err != nil {
	t.Fatal(err)
}
Now the datastore won't need any sleep to work, which is faster, and better practice in general.
Update: This only works with the old aetest package which was imported via appengine/aetest. It does not work with the newer aetest package which is imported with google.golang.org/appengine/aetest. App Engine has changed from using an appengine.Context to using a context.Context, and consequently the way that the test package now works is quite different.
To complement JohnGB's answer: in the latest version of aetest, there are more steps to get a context with strong consistency. First create an instance, then create a request from that instance, which you can use to produce a context.
inst, err := aetest.NewInstance(
	&aetest.Options{StronglyConsistentDatastore: true})
if err != nil {
	t.Fatal(err)
}
defer inst.Close()

req, err := inst.NewRequest("GET", "/", nil)
if err != nil {
	t.Fatal(err)
}

ctx := appengine.NewContext(req)

How to mock/abstract filesystem in go?

I would like to be able to log every write/read that my Go app issues to the underlying OS, and also (if possible) completely replace the filesystem with one that resides only in memory.
Is it possible? How? Maybe there is a ready-to-Go solution?
This is straight from Andrew Gerrand's 10 things you (probably) don't know about Go:
var fs fileSystem = osFS{}

type fileSystem interface {
	Open(name string) (file, error)
	Stat(name string) (os.FileInfo, error)
}

type file interface {
	io.Closer
	io.Reader
	io.ReaderAt
	io.Seeker
	Stat() (os.FileInfo, error)
}

// osFS implements fileSystem using the local disk.
type osFS struct{}

func (osFS) Open(name string) (file, error)        { return os.Open(name) }
func (osFS) Stat(name string) (os.FileInfo, error) { return os.Stat(name) }
For this to work, you will need to write your code to take a fileSystem argument (maybe embed it in some other type, or let nil denote the default filesystem).
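For example, a function written against that interface (the function name here is just illustrative) can be handed either osFS{} or an in-memory fake in tests:

// fileSize depends only on the fileSystem interface, so tests can swap in a fake.
func fileSize(fs fileSystem, name string) (int64, error) {
	info, err := fs.Stat(name)
	if err != nil {
		return 0, err
	}
	return info.Size(), nil
}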
For those looking to solve the problem of mocking out the filesystem during testing, check out spf13's Afero library, https://github.com/spf13/afero. It does everything that the accepted answer does, but with better documentation and examples.
You can use the testing/fstest package:
package main

import "testing/fstest"

func main() {
	fs := fstest.MapFS{
		"hello.txt": {
			Data: []byte("hello, world"),
		},
	}

	data, err := fs.ReadFile("hello.txt")
	if err != nil {
		panic(err)
	}

	println(string(data) == "hello, world")
}
https://godocs.io/testing/fstest
Just because this question pops up pretty high when googling for this matter:
I don't know about logging reads and writes, but for a filesystem residing only in memory, I've found blang/vfs. I haven't used it in production, and it says it's alpha and interfaces are likely to change. Use it at your own risk.
I suppose you could implement it to log reads and writes.
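As a sketch of the logging part, a wrapper around the fileSystem interface from the first answer could log each call before delegating (purely illustrative):

// loggingFS logs every Open/Stat call, then delegates to the wrapped fileSystem.
type loggingFS struct {
	next fileSystem
}

func (l loggingFS) Open(name string) (file, error) {
	log.Printf("Open %q", name)
	return l.next.Open(name)
}

func (l loggingFS) Stat(name string) (os.FileInfo, error) {
	log.Printf("Stat %q", name)
	return l.next.Stat(name)
}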
