I'm developing a set of functions (based on KNative Serving + Eventing) and I'm having a lot of problems getting the decoded data on the receiving end of the stream.
So, I'm implementing a couple of functions that will use the Google APIs (via google.golang.org/api/slides/v1) to grab a Google Slides Presentation (a struct from the library), encode it as a []byte, and send it over the network with protobuf/gRPC.
This appears to work correctly; however, when I try to decode it back into a Presentation I get an error: the Decode call returns only EOF.
Here's the proto definition:
syntax = "proto3";

package api;

message ParserRequest {
    bytes Presentation = 1;
}

message ParserResponse {
    int32 Status = 1;
    bytes Document = 2;
}

service ParserService {
    // ParsePresentation parses the Google Slides presentation into the SDR representation
    rpc ParsePresentation(ParserRequest) returns (ParserResponse) {}
}
The sending function is similar to:
presentation, err := svc.Presentations.Get(docID).Do()
if err != nil {
    log.Fatalf("Unable to retrieve data from document: %+v", err)
}

if presentation != nil {
    log.Printf("Calling Parser...")
    address := "IP:PORT"
    conn, err := grpc.DialContext(ctx, address, grpc.WithInsecure())
    if err != nil {
        log.Printf("Dial Error! %+v", err)
        return nil, fmt.Errorf("could not connect to parser service: %+v", err)
    }
    defer conn.Close()

    log.Printf("Marshalling data...")
    // data, err := presentation.MarshalJSON()
    // data, err := json.Marshal(presentation)
    var buf bytes.Buffer
    enc := gob.NewEncoder(&buf)
    err = enc.Encode(&presentation)
    if err != nil {
        log.Printf("Encode Error: %+v", err)
        log.Fatal("encode error:", err)
    }
    data := buf.Bytes()
    log.Println(data) // [255 211 255 129 3 1 1 12 ...]

    cli, err := parser.NewParserServiceClient(conn).ParsePresentation(ctx, &parser.ParserRequest{Presentation: data})
    if err != nil {
        log.Printf("Cli call Error! %+v", err)
        return nil, fmt.Errorf("failed to get parser service: %+v", err)
    }
    log.Printf("Result: %d", cli.Status)
}
On the other end, I should now decode the data array and "translate" it back into a Presentation struct, which I do via:
func (c *parserService) ParsePresentation(ctx context.Context, in *pb.ParserRequest) (*pb.ParserResponse, error) {
    log.Printf("ParserService.ParsePresentation was called!")

    if in.Presentation == nil {
        log.Fatalf("Missing parameter Google Slides document.")
    }

    sdrDocument, err := gslides_parser.ParsePresentationBytes(in.Presentation)
    if err != nil {
        log.Fatalf("Unable to parse the Google Slides presentation: %+v", err)
    }

    presentation, err := json.Marshal(sdrDocument)
    return &pb.ParserResponse{Status: 200, Document: presentation}, nil
}
Then, when it gets to gslides_parser.ParsePresentationBytes(in.Presentation), the data should be decoded:
func ParsePresentationBytes(presentationParam []byte) (sdr.Document, error) {
    var (
        document     sdr.Document
        presentation slides.Presentation
    )

    log.Printf("PresentationBytes gob decoder")
    log.Println(presentationParam) // Output: [255 211 255 129 3 1 1 12 ...]
    // err := json.Unmarshal(presentationParam, &presentation)
    buf := bytes.NewBuffer(presentationParam)
    log.Printf("New decoder...")
    dec := gob.NewDecoder(buf)
    log.Printf("Decode...")
    log.Println(&presentation) // Output: &{[] [] <nil> <nil> [] {0 map[]} [] []}
    err := dec.Decode(&presentation)
    log.Printf("Done decoding...")
    if err != nil {
        // Never gets here
        document = sdr.Document{}
        (...)
    } else {
        log.Printf("PresentationBytes Error!")
    }
    return document, nil
}
So, why can't I decode the information? I don't see anything terribly wrong with this code, but I'm also a golang newb so I may have some error that is eluding me.
Isn't gob the appropriate way of dealing with this? I tried simply marshalling/unmarshalling but that also produces errors.
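For what it's worth, since slides.Presentation is a plain data struct with exported, JSON-tagged fields, one possible workaround is to skip gob entirely and put JSON bytes into ParserRequest.Presentation. The sketch below only demonstrates the JSON round-trip for that type; it is an editor's illustration, not the asker's code, and the stub Title value is made up.

package main

import (
    "encoding/json"
    "log"

    slides "google.golang.org/api/slides/v1"
)

// encodePresentation produces the bytes that would go into ParserRequest.Presentation.
func encodePresentation(p *slides.Presentation) ([]byte, error) {
    return json.Marshal(p)
}

// decodePresentation is the receiving side: it rebuilds the struct from the raw bytes.
func decodePresentation(data []byte) (*slides.Presentation, error) {
    var p slides.Presentation
    if err := json.Unmarshal(data, &p); err != nil {
        return nil, err
    }
    return &p, nil
}

func main() {
    // Round-trip check with a stub presentation.
    data, err := encodePresentation(&slides.Presentation{Title: "demo"})
    if err != nil {
        log.Fatal(err)
    }
    p, err := decodePresentation(data)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(p.Title) // demo
}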
Related
In order to reuse CGO pointers (type C.uintptr_t) between multiple applications, I tried to use Go RPC (net/rpc) to pass the initialized pointer, but the program reported an error: rpc: gob error encoding body: gob: type not registered for interface: main._Ctype_ulong. I think there might be some issue with the pointer types.
1. init func
func initApp(configPath *C.char) C.uintptr_t
2. App1, a daemon process, calls the init func and passes the pointer to another app via Go RPC
var globalSDKPtr C.ulong

type HelloService struct{}

func (p *HelloService) Hello(request string, reply *C.ulong) error {
    *reply = globalSDKPtr
    return nil
}

func startRPS() {
    rpc.RegisterName("HelloService", new(HelloService))
    listener, err := net.Listen("tcp", ":1234")
    if err != nil {
        log.Fatal("ListenTCP error:", err)
    }
    conn, err := listener.Accept()
    if err != nil {
        log.Fatal("Accept error:", err)
    }
    rpc.ServeConn(conn)
}
3. App2 receives the pointer and reuses it.
client, err := rpc.Dial("tcp", "localhost:1234")
if err != nil {
    log.Fatal("dialing:", err)
}

var reply C.ulong
err = client.Call("HelloService.Hello", "hello", &reply)
if err != nil {
    log.Fatal(err)
}
res := C.query(reply)
I guess the root of the problem is that my approach is wrong: the way to reuse cgo pointers may not be Go RPC but shared memory. In any case, passing cgo-related things around is always confusing. Can anyone help me out?
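As an aside from the editor: the gob error itself (as opposed to the larger issue that a pointer only has meaning inside App1's address space, so it cannot simply be "reused" by App2) can be avoided by converting the cgo value to a plain Go integer before it crosses the RPC boundary. A rough sketch that modifies the Hello method from the snippet above; this is an assumption about the setup, not a tested fix.

// Send a plain uint64 over RPC so gob never sees the cgo type main._Ctype_ulong.
// Note: the value is still only meaningful inside App1's address space.
func (p *HelloService) Hello(request string, reply *uint64) error {
    *reply = uint64(globalSDKPtr)
    return nil
}

// On the App2 side, the reply would then be received as a uint64 and converted back:
//   var reply uint64
//   err = client.Call("HelloService.Hello", "hello", &reply)
//   res := C.query(C.ulong(reply))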
I've been trying to wrap my head around unit testing, dependency injection, TDD and all that stuff, and I've been stuck on testing functions that make database calls, for example.
Let's say you have a PostgresStore struct that takes in a Database interface, which has a Query() method.
type PostgresStore struct {
    db Database
}

type Database interface {
    Query(query string, args ...interface{}) (*sql.Rows, error)
}
And your PostgresStore has a GetPatients method, which calls the database's Query method.
func (p *PostgresStore) GetPatients() ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
In the real implementation, you would just pass a *sql.DB as the Database argument, but how would you guys write a unit test with a fake database struct?
Let me try to clarify some of your doubts. First of all, I'm gonna share a working example to better understand what's going on. Then, I'm gonna mention all of the relevant aspects.
repo/db.go
package repo

import "database/sql"

type Patient struct {
    ID               int
    Name             string
    Surname          string
    Age              int
    InsuranceCompany string
}

type PostgresStore struct {
    // rely on the generic DB provided by the "sql" package
    db *sql.DB
}
func (p *PostgresStore) GetPatient(id int) ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
Here, the only relevant change is how you define the PostgresStore struct. As the db field, you should rely on the generic DB provided by the database/sql package of the Go Standard Library. Thanks to this, it's trivial to swap its implementation with a fake one, as we're gonna see later.
Please note that in the GetPatient method you're accepting an id parameter but you're not using it. Your query is more suitable to a method like GetAllPatients or something like that. Be sure to fix it accordingly.
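If you do want a lookup by id, a minimal sketch of what that could look like follows. This is not part of the original code; the method name is hypothetical and it assumes the same table and columns used by the query above.

// GetPatientByID is a hypothetical by-id variant of the query above.
func (p *PostgresStore) GetPatientByID(id int) (Patient, error) {
    var i Patient
    err := p.db.QueryRow(
        "SELECT id, name, surname, age, insurance FROM patients WHERE id = $1", id,
    ).Scan(&i.ID, &i.Name, &i.Surname, &i.Age, &i.InsuranceCompany)
    return i, err
}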
repo/db_test.go
package repo

import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
    "github.com/stretchr/testify/assert"
)

func TestGetPatient(t *testing.T) {
    // 1. set up the fake db and the mock
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("err not expected: %v", err)
    }

    // 2. configure the mock: what do we expect (query or command)? what is the outcome (error vs no error)?
    rows := sqlmock.NewRows([]string{"id", "name", "surname", "age", "insurance"}).AddRow(1, "john", "doe", 23, "insurance-test")
    mock.ExpectQuery("SELECT id, name, surname, age, insurance FROM patients").WillReturnRows(rows)

    // 3. instantiate the PostgresStore with the fake db
    sut := &PostgresStore{
        db: db,
    }

    // 4. invoke the action we've to test
    got, err := sut.GetPatient(1)

    // 5. assert the result
    assert.Nil(t, err)
    assert.Contains(t, got, Patient{1, "john", "doe", 23, "insurance-test"})
}
Here, there is a lot to cover. First, check the comments within the code, which give you a better idea of each step. In the code, we're relying on the package github.com/DATA-DOG/go-sqlmock, which allows us to easily mock a database client.
Obviously, the purpose of this code is to give a general idea of how to implement your needs. It can be written in a better way, but it can be a good starting point for writing tests in this scenario.
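One small addition you might make (not shown in the test above): at the end of the test, go-sqlmock can verify that every expectation you registered was actually hit.

// Verify that the expected query was actually executed.
if err := mock.ExpectationsWereMet(); err != nil {
    t.Errorf("unfulfilled expectations: %v", err)
}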
Let me know if this helps, thanks!
In querying Gerrit, they intentionally put a )]}' at the beginning of their API response (see: https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html). I am trying to remove it so the JSON is valid, but I'm unsure of the best way to do this in Go.
This is my current program to query Gerrit and pull out the change ID and the status from its JSON:
package main

import (
    "encoding/json"
    "flag"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

type gerritData struct {
    ChangeID string `json:"change_id"`
    Status   string `json:"status"`
}

func gerritQuery(gerrit string) (gerritData, error) {
    username := "redacted"
    password := "redacted"

    client := &http.Client{}
    req, err := http.NewRequest("GET", "https://gerrit.company.com/a/changes/?q="+gerrit, nil)
    req.SetBasicAuth(username, password)
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }

    respBody, err := ioutil.ReadAll(resp.Body)
    // Trying to cut it out manually.
    respBody = respBody[:len(respBody)-4]
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var gerritResponse gerritData
    if err := json.NewDecoder(resp.Body).Decode(&gerritResponse); err != nil {
        panic(err.Error())
    }
    return gerritResponse, nil
}

func main() {
    gerritFlag := flag.String("gerrit", "foo", "The Gerrit you want to query")
    flag.Parse()

    gerritResponse, _ := gerritQuery(*gerritFlag)
    fmt.Println(gerritResponse)
}
Go is still complaining with panic: invalid character ')' looking for beginning of value. I'm still new to the language so any advice would be great.
The code in the question trims four bytes from the end of the response.
Trim the bytes from the beginning of the response instead:
respBody = respBody[4:]
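A related point, in case the panic persists: the question's code reads the whole body with ioutil.ReadAll and then decodes from resp.Body again, which has already been drained at that point. A sketch of decoding from the trimmed slice instead (variable names taken from the question):

// Strip Gerrit's XSSI prefix and decode from the bytes we already read,
// not from resp.Body (which ioutil.ReadAll has already consumed).
respBody = respBody[4:]

var gerritResponse gerritData
if err := json.Unmarshal(respBody, &gerritResponse); err != nil {
    log.Fatal(err)
}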
I want to delete the last N bytes from a file in Go.
Actually, this is already implemented in the os.Truncate() function. But this function takes the new size, so to use it you first have to get the current size of the file. For that, you may use os.Stat().
Wrapping it into a function:
func truncateFile(name string, bytesToRemove int64) error {
    fi, err := os.Stat(name)
    if err != nil {
        return err
    }
    return os.Truncate(name, fi.Size()-bytesToRemove)
}
Using it to remove the last 5000 bytes:
if err := truncateFile("C:\\Test.zip", 5000); err != nil {
    fmt.Println("Error:", err)
}
Another alternative is to use the File.Truncate() method for that. If we have an os.File, we may also use File.Stat() to get its size.
This is what it would look like:
func truncateFile(name string, bytesToRemove int64) error {
    f, err := os.OpenFile(name, os.O_RDWR, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}
Using it is the same. This may be preferable if we're already working with the file (we have it opened) and we need to truncate it, but in that case you'd want to pass the *os.File instead of its name to truncateFile() (a sketch of that variant follows the note below).
Note: if you try to remove more bytes than the file currently has, truncateFile() will return an error.
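A minimal sketch of the variant that takes the already-open file; the function name and signature are the editor's assumption, not part of the original answer:

// truncateOpenFile removes the last bytesToRemove bytes from an already-open file.
func truncateOpenFile(f *os.File, bytesToRemove int64) error {
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}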
I'm working on an appengine app using the datastore. I'm attempting to gob
encode an interface and store it into the datastore. But when I try to load from
the datastore, I get the error:
gob: name not registered for interface: "main27155.strand"
The peculiar thing is that the load() method starts working after having called the save() method. It no longer returns an error, and everything saved in the datastore is loaded as expected. But when I restart the instance, the load() method stops working again.
The load and save methods I mention refer to the methods defined by the
datastore.PropertyLoadSaver interface
From the looks of it, it seems like a problem with registering the
type/interfaces with gob, but I have exactly the same gob.Register() calls in
both the load() and save() methods.
I even tried removing the gob.Register() calls from both load and save methods
and adding it to init(). The exact same behavior was observed.
How can I load my datastore on a cold start?
type bio struct {
    Id       string
    Hp       int
    godie    chan bool // should be buffered
    dam      chan int
    Genetics dna
}
type dna interface {
    decode() mRNA
    Get(int) trait
    Set(int, trait)
    Duplicate() dna
    Len() int
}

type trait interface {
    mutate() trait
}

// implements dna{}
type strand []trait

// implements trait{}
type tdecoration string

type color struct {
    None bool // If true, colors are not shown in theme
    Bg   bool // If true, color is a background color
    R    int  // 0-255
    G    int
    B    int
}
func start(w http.ResponseWriter, r *http.Request) error {
    c := appengine.NewContext(r)
    var bs []bio
    if _, err := datastore.NewQuery("bio").GetAll(c, &bs); err != nil {
        log.Println("bs is len: ", len(bs))
        return err
    }
    ...
    return nil
}
func stop(w http.ResponseWriter, r *http.Request) error {
    c := appengine.NewContext(r)

    log.Println("Saving top 20 colors")
    var k []*datastore.Key
    var bs []*bio
    stat := getStats()
    for i, b := range stat.Leaderboard {
        k = append(k, datastore.NewKey(c, "bio", b.Id, 0, nil))
        bv := b
        bs = append(bs, &bv)
        // At most 20 bios survive across reboots
        if i > 178 {
            break
        }
    }

    // Assemble slice of keys for deletion
    dk, err := datastore.NewQuery("bio").KeysOnly().GetAll(c, nil)
    if err != nil {
        return errors.New(fmt.Sprintf("Query error: %s", err.Error()))
    }

    fn := func(c appengine.Context) error {
        // Delete all old entries
        err := datastore.DeleteMulti(c, dk)
        if err != nil {
            return errors.New(fmt.Sprintf("Delete error: %s", err.Error()))
        }
        // save the elite in the datastore
        _, err = datastore.PutMulti(c, k, bs)
        if err != nil {
            return err
        }
        return nil
    }

    return datastore.RunInTransaction(c, fn, &datastore.TransactionOptions{XG: true})
}
// satisfy datastore PropertyLoadSaver interface ===============================
func (b *bio) Load(c <-chan datastore.Property) error {
    gob.Register(&color{})
    gob.Register(new(tdecoration))
    var str strand
    gob.Register(str)

    tmp := struct {
        Id     string
        Hp     int
        Gengob []byte
    }{}
    if err := datastore.LoadStruct(&tmp, c); err != nil {
        return err
    }
    b.Id = tmp.Id
    b.Hp = tmp.Hp
    return gob.NewDecoder(strings.NewReader(string(tmp.Gengob))).Decode(&(b.Genetics))
}
func (b *bio) Save(c chan<- datastore.Property) error {
    defer close(c)
    gob.Register(&color{})
    gob.Register(new(tdecoration))
    var str strand
    gob.Register(str)

    var buf bytes.Buffer
    gen := b.Genetics
    if err := gob.NewEncoder(&buf).Encode(&gen); err != nil {
        log.Println(err)
        return err
    }

    dp := []datastore.Property{
        {Name: "Id", Value: b.Id},
        {Name: "Hp", Value: int64(b.Hp)},
        {Name: "Gengob", Value: buf.Bytes(), NoIndex: true},
    }
    for _, p := range dp {
        c <- p
    }
    return nil
}
Additional info: This behavior was not present before I stuffed the datastore
calls in stop() into datastore.RunInTransaction()
Register all types in an init() function using RegisterName(). Delete all existing data from the store and you should be good to go.
App Engine generates a mangled name for the main package every time the application is built. The name generated by Register() includes this mangled package name. Any gobs encoded with the mangled name will only be readable using the same build of the app. If you cause the application to be rebuilt by modifying the code, then the app will not be able to decode gobs stored previously.
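A minimal sketch of what that could look like for the types in the question; the string names passed to RegisterName() are arbitrary, they just have to stay stable across builds:

func init() {
    // Stable names, independent of App Engine's mangled package name,
    // so gobs written by one build remain readable by the next.
    gob.RegisterName("color", &color{})
    gob.RegisterName("tdecoration", new(tdecoration))
    gob.RegisterName("strand", strand{})
}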