Unmarshal body response from task queue - google-app-engine

I'm using Go and the Google Task Queue in order to create some async jobs.
I'm passing the data to the worker method successfully, but I can't unmarshal the data in order to use it.
I tried different ways, but I keep getting an unmarshal error:
err um &json.SyntaxError{msg:"invalid character 'i' in literal false (expecting 'a')", Offset:2}
This is how I'm sending the data to the queue:
keys := make(map[string][]string)
keys["filenames"] = req.FileNames // []string
t := taskqueue.NewPOSTTask("/deletetask", keys)
_, err = taskqueue.Add(ctx, t, "delete")
And this is how I tried to unmarshal it:
type Files struct {
	fileNames string `json:"filenames"`
}

func worker(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	var objmap map[string]json.RawMessage
	b, err := ioutil.ReadAll(r.Body)
	if err != nil {
		c.Debugf("err io %#v", err)
	}
	c.Debugf("b %#v", string(b[:])) // Prints: b "filenames=1.txt&filenames=2.txt"
	err = json.Unmarshal(b, &objmap)
	if err != nil {
		c.Debugf("err um %#v", err)
	}
	// this didn't work either, same err
	f := []Files{}
	err = json.Unmarshal(b, &f)
}

Arguments to tasks are sent as POST values: you assign a slice of strings as the POST value for the key filenames, then you try to deserialize the full POST request body as JSON. The body is URL-encoded form data ("filenames=1.txt&filenames=2.txt", exactly as your debug output shows), not JSON, so json.Unmarshal sees the leading "f", expects the literal false, and fails on the 'i' at offset 2.
A simple solution would be to split the work into one task per file and send each file name as a plain string value; then it would be something like:
// Add tasks
for i := range req.FileNames {
	postValues := url.Values{}
	postValues.Set("fileName", req.FileNames[i])
	t := taskqueue.NewPOSTTask("/deletetask", postValues)
	if _, err := taskqueue.Add(ctx, t, "delete"); err != nil {
		ctx.Errorf("Failed to add task, error: %v, fileName: %v", err, req.FileNames[i])
	}
}

// The actual delete worker
func worker(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	fileName := r.FormValue("fileName")
	ctx.Infof("Deleting: %v", fileName)
}
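If you'd rather keep all the file names in one task, there is no need for JSON at all: the POST body is ordinary form data, so the worker can read the repeated filenames values directly. A minimal sketch, assuming the original task from the question (deleteWorker is an illustrative name):
func deleteWorker(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	if err := r.ParseForm(); err != nil {
		c.Errorf("parse form: %v", err)
		return
	}
	// r.Form["filenames"] holds every value posted under the "filenames"
	// key, e.g. []string{"1.txt", "2.txt"} for the body in the question.
	for _, name := range r.Form["filenames"] {
		c.Infof("Deleting: %v", name)
	}
}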

Related

Archive/zip read file.zip: bad file descriptor

I have written a function to read a zip archive into a map[string]*zip.File.
func ReadZip(file string) (map[string]*zip.File, error) {
	r, err := zip.OpenReader(file)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	files := make(map[string]*zip.File)
	for _, f := range r.File {
		files[f.Name] = f
	}
	return files, nil
}
But when I try to open a file with infoRC, err := f["info.json"].Open(), it fails with the error
read file.zip: bad file descriptor.
Is there a better way to read a zip archive?
Once ReadCloser.Close is called, all of the *zip.File structs become invalid. From the documentation:
Close closes the Zip file, rendering it unusable for I/O.
You need to either:
keep r open as long as you want to read the ZIP entries (a sketch of this appears after the example below), or
make an in-memory or temporary-file copy of all of the zip file's contents.
An example of the latter option:
func ReadZip(file string) (map[string][]byte, error) {
	r, err := zip.OpenReader(file)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	files := make(map[string][]byte)
	for _, f := range r.File {
		fc, err := f.Open()
		if err != nil {
			return nil, err
		}
		contents, err := ioutil.ReadAll(fc)
		fc.Close()
		if err != nil {
			return nil, err
		}
		files[f.Name] = contents
	}
	return files, nil
}
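For completeness, a minimal sketch of the former option: return the *zip.ReadCloser alongside the map so the caller controls its lifetime. The name ReadZipOpen and the three-value signature are illustrative, not from the original answer:
func ReadZipOpen(file string) (map[string]*zip.File, *zip.ReadCloser, error) {
	r, err := zip.OpenReader(file)
	if err != nil {
		return nil, nil, err
	}
	files := make(map[string]*zip.File)
	for _, f := range r.File {
		files[f.Name] = f
	}
	// The caller must keep r open while reading entries and
	// call r.Close() when done.
	return files, r, nil
}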

Not able to store data in file properly using gob

When I try to save a map of type map[mapKey]string into a file using the gob encoder, the string values are not saved to the file.
Here mapKey is a struct and the map value is a long JSON string.
type mapKey struct {
	Id1 string
	Id2 string
}
And whenever I use a nested map instead of the struct, like:
var m = make(map[string]map[string]string)
it works fine and saves the string properly. I am not sure what I am missing here.
Code to encode, decode and save it in file:
func Save(path string, object interface{}) error {
	file, err := os.Create(path)
	if err == nil {
		encoder := gob.NewEncoder(file)
		encoder.Encode(object)
	}
	file.Close()
	return err
}

// Decode Gob file
func Load(path string, object interface{}) error {
	file, err := os.Open(path)
	if err == nil {
		decoder := gob.NewDecoder(file)
		err = decoder.Decode(object)
	}
	file.Close()
	return err
}

func Check(e error) {
	if e != nil {
		_, file, line, _ := runtime.Caller(1)
		fmt.Println(line, "\t", file, "\n", e)
		os.Exit(1)
	}
}
There is nothing special about encoding a value of type map[mapKey]string.
See this very simple working example, which encodes to and decodes from a given io.Writer / io.Reader:
func save(w io.Writer, i interface{}) error {
	return gob.NewEncoder(w).Encode(i)
}

func load(r io.Reader, i interface{}) error {
	return gob.NewDecoder(r).Decode(i)
}
Testing it with an in-memory buffer (bytes.Buffer):
m := map[mapKey]string{
	{"1", "2"}: "12",
	{"3", "4"}: "34",
}
fmt.Println(m)

buf := &bytes.Buffer{}
if err := save(buf, m); err != nil {
	panic(err)
}

var m2 map[mapKey]string
if err := load(buf, &m2); err != nil {
	panic(err)
}
fmt.Println(m2)
Output as expected (try it on the Go Playground):
map[{1 2}:12 {3 4}:34]
map[{1 2}:12 {3 4}:34]
Your code works, but note that you have to call Load() with a pointer value (otherwise Decoder.Decode() wouldn't be able to modify it).
A few things to improve:
In your example you are swallowing the error returned by Encoder.Encode(). Check it and you'll see what the problem is: a common cause is using a struct mapKey with no exported fields, in which case an error of gob: type main.mapKey has no exported fields is returned.
You should call File.Close() as a deferred function.
And if opening the file fails, you should return early and not attempt to close the file.
This is the corrected version of your code:
func Save(path string, object interface{}) error {
	file, err := os.Create(path)
	if err != nil {
		return err
	}
	defer file.Close()
	return gob.NewEncoder(file).Encode(object)
}

func Load(path string, object interface{}) error {
	file, err := os.Open(path)
	if err != nil {
		return err
	}
	defer file.Close()
	return gob.NewDecoder(file).Decode(object)
}
Testing it:
m := map[mapKey]string{
	{"1", "2"}: "12",
	{"3", "4"}: "34",
}
fmt.Println(m)

if err := Save("testfile", m); err != nil {
	panic(err)
}

var m2 map[mapKey]string
if err := Load("testfile", &m2); err != nil {
	panic(err)
}
fmt.Println(m2)
Output as expected:
map[{1 2}:12 {3 4}:34]
map[{1 2}:12 {3 4}:34]
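To see the failure mode mentioned above for yourself, here is a minimal sketch; badKey is a hypothetical struct (not from the question) whose fields are all unexported, so checking the result of Encode() surfaces the error instead of it being silently swallowed:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// badKey has only unexported fields, which gob cannot encode.
type badKey struct {
	id1 string
	id2 string
}

func main() {
	m := map[badKey]string{{"1", "2"}: "12"}
	err := gob.NewEncoder(&bytes.Buffer{}).Encode(m)
	fmt.Println(err) // gob: type main.badKey has no exported fields
}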

"tail -f"-like generator

I had this convenient function in Python:
def follow(path):
    with open(path) as lines:
        lines.seek(0, 2)  # seek to EOF
        while True:
            line = lines.readline()
            if not line:
                time.sleep(0.1)
                continue
            yield line
It does something similar to UNIX tail -f: you get the last lines of a file as they arrive. It's convenient because you can get the generator without blocking and pass it to another function.
Then I had to do the same thing in Go. I'm new to this language, so I'm not sure whether what I did is idiomatic/correct enough for Go.
Here is the code:
func Follow(fileName string) chan string {
	out_chan := make(chan string)
	file, err := os.Open(fileName)
	if err != nil {
		log.Fatal(err)
	}
	file.Seek(0, os.SEEK_END)
	bf := bufio.NewReader(file)
	go func() {
		for {
			line, _, _ := bf.ReadLine()
			if len(line) == 0 {
				time.Sleep(10 * time.Millisecond)
			} else {
				out_chan <- string(line)
			}
		}
		defer file.Close()
		close(out_chan)
	}()
	return out_chan
}
Is there any cleaner way to do this in Go? I have a feeling that using an asynchronous call for such a thing is overkill, and it really bothers me.
Create a wrapper around a reader that sleeps on EOF:
type tailReader struct {
	io.ReadCloser
}

func (t tailReader) Read(b []byte) (int, error) {
	for {
		n, err := t.ReadCloser.Read(b)
		if n > 0 {
			return n, nil
		} else if err != io.EOF {
			return n, err
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func newTailReader(fileName string) (tailReader, error) {
	f, err := os.Open(fileName)
	if err != nil {
		return tailReader{}, err
	}
	if _, err := f.Seek(0, 2); err != nil {
		return tailReader{}, err
	}
	return tailReader{f}, nil
}
This reader can be used anywhere an io.Reader can be used. Here's how to loop over lines using bufio.Scanner:
t, err := newTailReader("somefile")
if err != nil {
	log.Fatal(err)
}
defer t.Close()
scanner := bufio.NewScanner(t)
for scanner.Scan() {
	fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
	fmt.Fprintln(os.Stderr, "reading:", err)
}
The reader can also be used to loop over JSON values appended to the file:
t, err := newTailReader("somefile")
if err != nil {
	log.Fatal(err)
}
defer t.Close()
dec := json.NewDecoder(t)
for {
	var v SomeType
	if err := dec.Decode(&v); err != nil {
		log.Fatal(err)
	}
	fmt.Println("the value is ", v)
}
There are a couple of advantages this approach has over the goroutine approach outlined in the question. The first is that shutdown is easy: just close the file; there's no need to signal the goroutine that it should exit. The second is that many packages work with io.Reader.
The sleep time can be adjusted up or down to meet specific needs: decrease it for lower latency, increase it to reduce CPU use. A sleep of 100ms is probably fast enough for data that's displayed to humans.
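If you want that knob without editing the constant, a small variation (my addition, not part of the original answer) carries the interval on the struct:
type tailReader struct {
	io.ReadCloser
	poll time.Duration // how long to wait after hitting EOF
}

func (t tailReader) Read(b []byte) (int, error) {
	for {
		n, err := t.ReadCloser.Read(b)
		if n > 0 {
			return n, nil
		} else if err != io.EOF {
			return n, err
		}
		time.Sleep(t.poll)
	}
}
Construct it with, e.g., tailReader{f, 100 * time.Millisecond}.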
Check out this Go package for reading from continuously updated files (tail -f): https://github.com/hpcloud/tail
t, err := tail.TailFile("filename", tail.Config{Follow: true})
if err != nil {
	log.Fatal(err)
}
for line := range t.Lines {
	fmt.Println(line.Text)
}

How do I handle nil return values from database?

I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as string, I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The method works fine in practice: I get the values as plain text, with an empty string in place of the nil values.
However, I have two concerns:
This does not look efficient. I need to handle 25+ fields like this, which means reading each of them as bytes and converting to string: too many function calls and conversions, two structs to handle the data, and so on.
The code looks ugly. It is already convoluted with 2 fields and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better/cleaner/more efficient/idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
	id    []byte
	state []byte
}

// output format
type udInfo struct {
	id    string
	state string
}

// CToGoString converts a NUL-terminated C string in a byte slice
// to a Go string.
func CToGoString(c []byte) string {
	n := -1
	for i, b := range c {
		if b == 0 {
			break
		}
		n = i
	}
	return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
	var out udInfo
	out.id = CToGoString(in.id)
	out.state = stateName(in.state)
	return out
}
func GetInfo(ud string) udInfo {
	db := getFileHandle()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	ret := udInfo{}
	r := udInfoBytes{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	ret = dbBytesToString(r)
	defer db.Close()
	return ret
}
edit:
I want to have something like the following, where I don't have to worry about handling NULL and the values are automatically read as empty strings.
// output format
type udInfo struct {
	id    string
	state string
}

func GetInfo(ud string) udInfo {
	db := getFileHandle()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	r := udInfo{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	return r
}
There are separate types to handle NULL values coming from the database, such as sql.NullBool, sql.NullFloat64, sql.NullString, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
	// use s.String
} else {
	// NULL value
}
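With 25+ optional fields, a small helper keeps the scan code compact; the name nullToString is illustrative:
func nullToString(s sql.NullString) string {
	if s.Valid {
		return s.String
	}
	return "" // NULL collapses to the empty string, as the question wants
}
Scan each optional column into a sql.NullString and convert it on the way out.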
Go's database/sql package can also handle a pointer to the type:
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	_, err = db.Exec("create table foo(id integer primary key, value text)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values(null)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values('bar')")
	if err != nil {
		log.Fatal(err)
	}

	rows, err := db.Query("select id, value from foo")
	if err != nil {
		log.Fatal(err)
	}
	for rows.Next() {
		var id int
		var value *string
		err = rows.Scan(&id, &value)
		if err != nil {
			log.Fatal(err)
		}
		if value != nil {
			fmt.Println(id, *value)
		} else {
			fmt.Println(id, value)
		}
	}
}
You should get output like this:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give state a default value of an empty string in the event that it was stored as NULL in the db.
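As a side note, the same query can take a bound parameter instead of fmt.Sprintf, which also avoids building SQL from raw strings; a sketch (placeholder syntax varies by driver):
var id, state string
err := db.QueryRow(
	"SELECT id, COALESCE(state, '') AS state FROM Mytable WHERE id = ?",
	ud,
).Scan(&id, &state)
if err != nil {
	log.Fatal(err)
}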
Two ways to handle those nulls:
Using sql.NullString:
if value.Valid {
	return value.String
}
Using *string:
if value != nil {
	return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than the standard library's.
https://github.com/ziutek/mymysql
I've then wrapped the querying of the database into simple to use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
	column1 = row.Str(0)
	column2 = row.Int(1)
	column3 = row.Bool(2)
	column4 = row.Date(3)
	// etc...
}
Notice the nice row methods for coercing values to particular types. NULLs are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go

How do I delete all blobs in app engine on Go?

The blobstore API has no function to list all blobs. How can I get this list and then delete all blobs?
The blobstore API on App Engine for Go has no way to do this. Instead, use the datastore to fetch __BlobInfo__ entities as appengine.BlobInfo. Although that type claims to have a BlobKey field, it is not populated; instead, take the string ID of the returned key and cast it to an appengine.BlobKey, which you can then pass to blobstore.Delete.
Here's a handler at "/tasks/delete-blobs" that deletes up to 20k blobs per invocation and re-enqueues itself until they are all deleted. Also note that cursors are not used here; I suspect that __BlobInfo__ is special and doesn't support them (when I attempted to use cursors, they did nothing).
func DeleteBlobs(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	c = appengine.Timeout(c, time.Minute)
	q := datastore.NewQuery("__BlobInfo__").KeysOnly()
	it := q.Run(c)
	wg := sync.WaitGroup{}
	something := false
	for i := 0; i < 20; i++ {
		var bk []appengine.BlobKey
		for j := 0; j < 1000; j++ {
			k, err := it.Next(nil)
			if err == datastore.Done {
				break
			} else if err != nil {
				c.Errorf("err: %v", err)
				continue
			}
			bk = append(bk, appengine.BlobKey(k.StringID()))
		}
		if len(bk) == 0 {
			break
		}
		something = true
		wg.Add(1)
		go func(bk []appengine.BlobKey) {
			defer wg.Done()
			c.Errorf("deleting %v blobs", len(bk))
			if err := blobstore.DeleteMulti(c, bk); err != nil {
				c.Errorf("blobstore delete err: %v", err)
			}
		}(bk)
	}
	wg.Wait()
	if something {
		taskqueue.Add(c, taskqueue.NewPOSTTask("/tasks/delete-blobs", nil), "")
	}
}
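For this to run, the handler must be registered at the same route the task re-enqueues; a minimal wiring sketch (assumed, not shown in the original answer):
func init() {
	// The task queue POSTs to this path; enqueueing one task (or hitting
	// the URL once) kicks off the delete loop.
	http.HandleFunc("/tasks/delete-blobs", DeleteBlobs)
}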
