I'm implementing Datastore queries in Go, but the inequality operator is not working as expected. Here's my code:
query2 := datastore.NewQuery(MetaData)
if !hasAccess {
    for _, zone := range filters.Zones {
        query2 = query2.Filter("Zone <", zone)
        query2 = query2.Filter("Zone >", zone)
    }
}
The query always returns nil, even though Datastore contains data that matches the criteria (Zone != zone).
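For context: chained Filter calls on a Datastore query are ANDed together, so "Zone <" and "Zone >" on the same value can never both hold for a single entity, and the combined query matches nothing. A minimal sketch of the usual workaround, assuming the appengine datastore package, a kind named "MetaData", and a hypothetical MetaData struct with a string Zone property: run the two ranges as separate queries and merge the results.

// Emulate Zone != zone with two range queries, since a single
// Datastore query ANDs all of its filters together.
func notInZone(ctx context.Context, zone string) ([]MetaData, error) {
    var out []MetaData
    // Entities strictly below the excluded zone.
    if _, err := datastore.NewQuery("MetaData").
        Filter("Zone <", zone).
        GetAll(ctx, &out); err != nil {
        return nil, err
    }
    // Entities strictly above the excluded zone.
    if _, err := datastore.NewQuery("MetaData").
        Filter("Zone >", zone).
        GetAll(ctx, &out); err != nil {
        return nil, err
    }
    return out, nil
}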
I am trying to run a Postgres IF statement from my Go project, but I get an error I can't figure out. Could you help? The code is:
newDate := "2022-06-22"
query := `
DO $$
DECLARE
    new_date date := $1;
BEGIN
    IF EXISTS (SELECT * FROM systemtable WHERE date = new_date) THEN
        UPDATE systemtable SET is_latest = TRUE WHERE date = new_date;
    ELSE
        INSERT INTO systemtable (date, is_latest) VALUES (new_date, TRUE);
    END IF;
END$$;`
if _, err := txi.Exec(query, newDate); err != nil {
    return err
}
The error returned is:

pq: bind message supplies 1 parameters, but prepared statement "" requires 0
Do the job with two separate statements within a transaction. That way you preserve consistency and don't have to perform any business logic on the database side.
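The error itself arises because a DO block cannot take bind parameters (it is treated as the body of a function with no parameters), so the $1 inside the dollar-quoted body is never bound. A minimal sketch of the two-statement approach, reusing the question's txi transaction and systemtable schema:

// Update first; insert only if nothing was updated.
res, err := txi.Exec(`UPDATE systemtable SET is_latest = TRUE WHERE date = $1`, newDate)
if err != nil {
    return err
}
n, err := res.RowsAffected()
if err != nil {
    return err
}
if n == 0 {
    if _, err := txi.Exec(
        `INSERT INTO systemtable (date, is_latest) VALUES ($1, TRUE)`,
        newDate,
    ); err != nil {
        return err
    }
}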
I have 1,000,000 records in BigQuery. What is the best way to fetch the data and process it in Go? I get a timeout if I fetch all the data without a limit; I already increased the timeout to 5 minutes, but it takes more than 5 minutes.
I want to implement some kind of streaming call or pagination, but I don't know how to do that in Go.
var FetchCustomerRecords = func(req *http.Request) *bigquery.RowIterator {
    ctx := appengine.NewContext(req)
    ctxWithDeadline, cancel := context.WithTimeout(ctx, 5*time.Minute)
    defer cancel()
    log.Infof(ctx, "Fetch Customer records from BigQuery")
    client, err := bigquery.NewClient(ctxWithDeadline, "ddddd-crm")
    if err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    q := client.Query("SELECT * FROM Something")
    q.Location = "US"
    job, err := q.Run(ctx)
    if err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    status, err := job.Wait(ctx)
    if err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    if err := status.Err(); err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    it, err := job.Read(ctx)
    if err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    return it
}
You can read the table contents directly without issuing a query. This doesn't incur query charges and provides the same row iterator you would get from a query.
For small results this is fine. For large tables, I would suggest checking out the new Storage API and the code sample on the samples page.
For a small table, or for reading a small subset of rows, you can do something like this (it reads up to 10k rows from one of the public dataset tables):
func TestTableRead(t *testing.T) {
    ctx := context.Background()
    client, err := bigquery.NewClient(ctx, "my-project-id")
    if err != nil {
        t.Fatal(err)
    }
    table := client.DatasetInProject("bigquery-public-data", "stackoverflow").Table("badges")
    it := table.Read(ctx)

    rowLimit := 10000
    var rowsRead int
    for {
        var row []bigquery.Value
        err := it.Next(&row)
        if err == iterator.Done || rowsRead >= rowLimit {
            break
        }
        if err != nil {
            t.Fatalf("error reading row offset %d: %v", rowsRead, err)
        }
        rowsRead++
        fmt.Println(row)
    }
}
You can split your query into ten chunks of 100,000 records and run them in multiple goroutines, using SQL like

select * from somewhere order by id DESC limit 100000 offset 0

and, in the next goroutine,

select * from somewhere order by id DESC limit 100000 offset 100000
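A rough sketch of that fan-out, assuming the BigQuery client from the question and the standard log package; the table name, page count, and the processRows helper are placeholders:

// Fan out one goroutine per LIMIT/OFFSET page of the result set.
func fetchInPages(ctx context.Context, client *bigquery.Client) {
    const pageSize = 100000
    const pages = 10

    var wg sync.WaitGroup
    for i := 0; i < pages; i++ {
        wg.Add(1)
        go func(offset int) {
            defer wg.Done()
            q := client.Query(fmt.Sprintf(
                "SELECT * FROM somewhere ORDER BY id DESC LIMIT %d OFFSET %d",
                pageSize, offset))
            it, err := q.Read(ctx) // runs the query and returns a RowIterator
            if err != nil {
                log.Printf("page at offset %d: %v", offset, err)
                return
            }
            processRows(it) // placeholder: consume the iterator
        }(i * pageSize)
    }
    wg.Wait()
}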
Does pgx offer any support for WHERE IN clauses? I found in another Stack Overflow thread that one should use string concatenation to build the query manually. IMO this is a bit error prone, though, as you have to take care of escaping/SQL injection and the like on your own.
I also tried to figure it out on my own:
const updatePurgedRecordingsStmt = "update recordings set status = 'DELETED', deleted = now() where status <> 'DELETED' and id in ($1);"

func (r *Repository) DeleteRecordings() error {
    pool, err := r.connPool()
    if err != nil {
        return errors.Wrap(err, "cannot establish connection")
    }
    pgRecIds := &pgtype.Int4Array{}
    if err := pgRecIds.Set([]int32{int32(1), int32(2)}); err != nil {
        return errors.Wrap(err, "id conversion failed")
    }
    if _, err = pool.Exec(updatePurgedRecordingsStmt, pgRecIds); err != nil {
        return errors.Wrap(err, "update stmt failed")
    }
    return nil
}
When I execute this code, however, I get the following error:
ERROR: incorrect binary data format in bind parameter 1 (SQLSTATE 22P03)
The versions I am using:
Postgres:
db=> SELECT version();
version
-----------------------------------------------------------------------------------------------------------
PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit
(1 row)
PGX:
github.com/jackc/fake v0.0.0-20150926172116-812a484cc733 h1:vr3AYkKovP8uR8AvSGGUK1IDqRa5lAAvEkZG1LKaCRc=
github.com/jackc/fake v0.0.0-20150926172116-812a484cc733/go.mod h1:WrMFNQdiFJ80sQsxDoMokWK1W5TQtxBFNpzWTD84ibQ=
github.com/jackc/pgx v3.3.0+incompatible h1:Wa90/+qsITBAPkAZjiByeIGHFcj3Ztu+VzrrIpHjL90=
github.com/jackc/pgx v3.3.0+incompatible/go.mod h1:0ZGrqGqkRlliWnWB4zKnWtjbSWbGkVEFm4TeybAXq+I=
github.com/lib/pq v1.0.0 h1:X5PMW56eZitiTeO7tKzZxFCSpbFZJtkMMooicw2us9A=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
As you already know, IN expects a list of scalar expressions, not an array; pgtype.Int4Array, however, represents an array, not a list of scalar expressions.
"IMO this is a bit error prone though, as you have to take care of escaping/sql injection and the like on your own."
Not necessarily: you can loop over your slice, build a string of parameter references, concatenate that into the query, and then execute it, passing the values in with ....
var paramrefs string
ids := []interface{}{1, 2, 3, 4}
for i := range ids {
    paramrefs += `$` + strconv.Itoa(i+1) + `,`
}
paramrefs = paramrefs[:len(paramrefs)-1] // remove trailing ","
query := `UPDATE ... WHERE id IN (` + paramrefs + `)`
pool.Exec(query, ids...)
Alternatively, you can use ANY instead of IN.

ids := &pgtype.Int4Array{}
ids.Set([]int{1, 2, 3, 4})
query := `UPDATE ... WHERE id = ANY ($1)`
pool.Exec(query, ids)

(Here you may have to cast the parameter reference to the appropriate array type; I'm not sure. Give it a try without the cast; if that doesn't work, try it with the cast.)
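For reference, a sketch of the cast variant, in case pgx needs the explicit array type:

query := `UPDATE ... WHERE id = ANY ($1::int4[])`
pool.Exec(query, ids)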
// prepareWhereINString builds the "$1,$2,...,$count" parameter-reference
// list for an IN clause.
func prepareWhereINString(count int) string {
    var paramrefs string
    for i := 0; i < count; i++ {
        paramrefs += `$` + strconv.Itoa(i+1) + `,`
    }
    paramrefs = paramrefs[:len(paramrefs)-1] // remove trailing ","
    return paramrefs
}
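A hypothetical use of that helper, along the lines of the question's update statement:

ids := []interface{}{int32(1), int32(2)}
query := `update recordings set status = 'DELETED', deleted = now()
          where status <> 'DELETED' and id in (` + prepareWhereINString(len(ids)) + `)`
pool.Exec(query, ids...)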
When making a query with OrderBy, it always returns a wrong result: the first document is repeated multiple times in the returned list.
q := session.Advanced().DocumentQueryAllOld(reflect.TypeOf(&models.List{}), "", "user", false)
//q = q.WhereNotEquals("Doc.hh", "Tarzan")
q = q.OrderBy("Docnn")
q = q.Statistics(&statsRef)
//q = q.Skip(0)
//q = q.Take(6)
q.ToList(&queryResult)
If no index exists beforehand, the query returns the right result, but if an index already exists (auto-created by a different OrderBy value), it returns the wrong result.
How do I filter out rows that are null? I know it's hard to find only-null rows, but hopefully this should be easier.
I'd like to do the following:
q := datastore.NewQuery("MY_KIND").Filter("MY_ID !=", nil)
... but Filter doesn't support the != comparator. FYI, using this GQL syntax in the Datastore Viewer works just fine:
SELECT * FROM MY_KIND WHERE MY_ID != NULL
You can use a greater-than filter with the appropriate value (> 0 for numbers, > "" for strings). Typically an ID cannot be an empty string or zero.
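In code, that suggestion would look something like this, assuming MY_ID is numeric:

q := datastore.NewQuery("MY_KIND").Filter("MY_ID >", 0)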
Here's what works for me.
// Datastore entity with null value
// `DeletedAt` in my case is *time.Time
{
    "DeletedAt": NULL,
    "Version": 1,
    ...
}
Here's my query to get records where DeletedAt is NULL:
datastore.NewQuery("MyKind").Filter("DeletedAt =", (*time.Time)(nil))
Hope this helps someone.