For days I have been trying to merge two JSONs based on a common key. I have two different JSONs as input with a common field, and I would like to merge the data of the two JSONs based on that common key: a sort of SQL join between the two JSONs.
The JSONs come from this code:
func Dati_plus(c *gin.Context) {
	oracle, err := http.Get("http://XXXX/XXX")
	if err != nil {
		panic(err)
	}
	defer oracle.Body.Close()
	mysql, err := http.Get("http://XXXX/XXX")
	if err != nil {
		panic(err)
	}
	defer mysql.Body.Close()
	oracleJSON, err := ioutil.ReadAll(oracle.Body)
	if err != nil {
		panic(err)
	}
	mysqlJSON, err := ioutil.ReadAll(mysql.Body)
	if err != nil {
		panic(err)
	}
	var oracleOUT map[string]interface{}
	var mysqlOUT map[string]interface{}
	json.Unmarshal(oracleJSON, &oracleOUT)
	json.Unmarshal(mysqlJSON, &mysqlOUT)
	a := map[string]interface{}{"result": mysqlOUT["result"]}
	b := map[string]interface{}{"result": oracleOUT["result"]}
The input JSONs have this form:
{"count":2,"result":[{"DESC":"2","NOMEmy":"PIPPO","COGNOMEmy":"PIPPO"},{"DESC":"7","NOMEmy":"PIPPO","COGNOMEmy":"PIPPO"}]
{"count":2,"result":[{"DESC":"2","COS":"PIPPO","ROS":"PIPPO"},{"DESC":"7","COS":"PIPPO","ROS":"PIPPO"},{"DESC":"60","COS":"PIPPO","ROS":"PIPPO"}]
Given two JSONs like these, the result of the function should be:
{"count":2,"result":[{"DESC":"2","COS":"PIPPO","ROS":"PIPPO","NOMEmy":"PIPPO","COGNOMEmy":"PIPPO"},{"DESC":"7","COS":"PIPPO","ROS":"PIPPO","NOMEmy":"PIPPO","COGNOMEmy":"PIPPO"},{"DESC":"60","COS":"PIPPO","ROS":"PIPPO"}]
In case it helps, this is a function I use to merge two flat JSONs, but I could not adapt it the right way:
func merge(dst, src map[string]interface{}, depth int) map[string]interface{} {
	if depth > MaxDepth {
		panic("too deep")
	}
	for key, srcVal := range src {
		if dstVal, ok := dst[key]; ok {
			srcMap, srcMapOk := mapify(srcVal)
			dstMap, dstMapOk := mapify(dstVal)
			if srcMapOk && dstMapOk {
				srcVal = merge(dstMap, srcMap, depth+1)
			}
		}
		dst[key] = srcVal
	}
	return dst
}
func mapify(i interface{}) (map[string]interface{}, bool) {
	value := reflect.ValueOf(i)
	if value.Kind() == reflect.Map {
		m := map[string]interface{}{}
		for _, k := range value.MapKeys() {
			m[k.String()] = value.MapIndex(k).Interface()
		}
		return m, true
	}
	return map[string]interface{}{}, false
}
Please help. Thanks!
One observation here is that you can define a simple type that models both of your data sets and use the Go type system to your advantage (instead of working against it by using so much reflection). For example:
type Data struct {
	Count   int                 `json:"count"`
	Results []map[string]string `json:"result"`
}

// ...

oracleData, mysqlData := Data{}, Data{}
err := json.Unmarshal(oracleJSON, &oracleData)
check(err)
err = json.Unmarshal(mysqlJSON, &mysqlData)
check(err)
Now your merge function can simply return a new Data struct with values populated from the two inputs, without having to worry about type assertions or casting. One key feature of this implementation is that it builds a lookup table of result objects keyed by "DESC", which is later used for correlation:
func merge(d1, d2 Data) Data {
	// Create the lookup table keyed by each result object's "DESC".
	d1ResultsByDesc := map[string]map[string]string{}
	for _, obj1 := range d1.Results {
		d1ResultsByDesc[obj1["DESC"]] = obj1
	}
	newData := Data{}
	for _, obj2 := range d2.Results {
		newObj := map[string]string{}
		// Include all result data objects from d2.
		for k2, v2 := range obj2 {
			newObj[k2] = v2
		}
		// Also include the matching result data from d1.
		obj1 := d1ResultsByDesc[obj2["DESC"]]
		for k1, v1 := range obj1 {
			newObj[k1] = v1
		}
		newData.Results = append(newData.Results, newObj)
	}
	newData.Count = len(newData.Results)
	return newData
}
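To wire this into the handler, a minimal sketch (assuming the oracleData and mysqlData values unmarshalled above, and that c is the handler's *gin.Context from the question):

merged := merge(oracleData, mysqlData)
// gin serializes the struct back to JSON, producing the desired joined output.
c.JSON(http.StatusOK, merged)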
I tried to code a one-time pad in Go, but I can't write to the file:
The files are binary files (compiled Go code).
My code:
package main

import (
	"fmt"
	"io/ioutil"
	"math/rand"
)

func rndByte(l int) []byte {
	token := make([]byte, l)
	rand.Read(token)
	return token
}

func writeByteFile(filename string, inp []byte) {
	err := ioutil.WriteFile(filename, inp, 0644)
	if err != nil {
		fmt.Println(err)
	}
}

func readFile(filename string) []byte {
	data, err := ioutil.ReadFile(filename)
	if err != nil {
		fmt.Println("File reading error", err)
	}
	return data
}

func main() {
	x := readFile("xor")
	// y := len(x)
	z := rndByte(489)
	var res [489]byte
	for i := 0; i != 489; i++ {
		res[i] = x[i] ^ z[i]
	}
	writeByteFile("xorKey", z)
	writeByteFile("xorENC", res)
}
My error:
# command-line-arguments
./xorbyte.go:47:19: cannot use res (type [489]byte) as type []byte in argument to writeByteFile
[489]byte and []byte are different types:
[489]byte is an array
[]byte is a slice
Try converting the array to a slice:
writeByteFile("xorENC", res[:])
Check out https://blog.golang.org/go-slices-usage-and-internals
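For a quick illustration of the distinction (a minimal sketch; the variable names are just for demonstration):

var arr [4]byte // array: the length is part of the type, copied by value
s := arr[:]     // slice: a view over the array's backing storage
s[0] = 1        // writes through to arr[0]
fmt.Println(arr[0], len(s), cap(s)) // prints: 1 4 4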
Here you are doing a wrong conversion from a byte array to a byte slice. To convert an array to a slice you can use the following syntax:
var byteArray [5]byte
byteSlice := byteArray[:]
Ref: Convert array to slice in Go
So you can try it this way:
writeByteFile("xorENC", res[:])
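As an aside (separate from the compile error): math/rand is deterministic, so for a real one-time pad the key bytes should come from crypto/rand instead. A minimal sketch of rndByte on that assumption:

import "crypto/rand"

func rndByte(l int) ([]byte, error) {
	token := make([]byte, l)
	// crypto/rand.Read fills the slice from a cryptographically secure source.
	_, err := rand.Read(token)
	return token, err
}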
I want to create an abstract function that gets data from the DB and fills an array with that data. The element types of the array can differ, and I want to do it without reflect, due to performance concerns.
I just want to call some function like GetDBItems() everywhere and get an array of data from the DB with the desired type, but all the implementations I have come up with are awful.
Here is the function implementation:
type AbstractArrayGetter func(size int) []interface{}

func GetItems(arrayGetter AbstractArrayGetter) {
	res := DBResponse{}
	DB.Get(&res)
	arr := arrayGetter(len(res.Rows))
	for i := 0; i < len(res.Rows); i++ {
		json.Unmarshal(res.Rows[i].Value, arr[i]) // arr[i] is already a pointer
	}
}
Here is how I call this function:
var events []Event
GetFullItems("events", "events_list", map[string]interface{}{}, func(size int) []interface{} {
	events = make([]Event, size, size)
	proxyEnt := make([]interface{}, size, size)
	for i := range events {
		proxyEnt[i] = &events[i]
	}
	return proxyEnt
})
It works, but there is too much code needed just to call this function, and there is also a performance issue in copying the events array into the interfaces array.
How can I do this without reflect, with a shorter call? Or is reflect not too slow in this case?
I tested the performance with reflect, and it is similar to the solution mentioned above. So here is a solution with reflect, in case someone needs it. This function gets data from the DB and fills an abstract array:
func GetItems(design string, viewName string, opts map[string]interface{}, arrayType interface{}) (interface{}, error) {
	res := couchResponse{}
	opts["limit"] = 100000
	bytes, err := CouchView(design, viewName, opts)
	if err != nil {
		return nil, err
	}
	err = json.Unmarshal(bytes, &res)
	if err != nil {
		return nil, err
	}
	// Build a slice of the element type at runtime.
	dataType := reflect.TypeOf(arrayType)
	slice := reflect.MakeSlice(reflect.SliceOf(dataType), len(res.Rows), len(res.Rows))
	for i := 0; i < len(res.Rows); i++ {
		if opts["include_docs"] == true {
			err = json.Unmarshal(res.Rows[i].Doc, slice.Index(i).Addr().Interface())
		} else {
			err = json.Unmarshal(res.Rows[i].Value, slice.Index(i).Addr().Interface())
		}
		if err != nil {
			return nil, err
		}
	}
	// Return a *[]T wrapped in an interface{}.
	x := reflect.New(slice.Type())
	x.Elem().Set(slice)
	return x.Interface(), nil
}
And getting data using this function:
var e Event
res, err := GetItems("data", "data_list", map[string]interface{}{}, e)
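The returned value is an interface{} wrapping a *[]Event here, so it needs one type assertion before use. A minimal sketch (assuming the GetItems call above succeeded):

events := *res.(*[]Event) // assert back to the concrete slice type
for _, ev := range events {
	fmt.Println(ev)
}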
I am trying to update a lot of records, which cannot be done within the one-minute maximum request time, so I need to use a datastore.Cursor. But for some reason the returned cursor is always the same, so each redirect runs with the same cursor value, resulting in the same 20 database updates being performed each time.
Any ideas why things aren't working like I would like?
http.HandleFunc("/fix", func(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	fixUser(c, w, r, "/fix", func(user *User) error {
		// do the fix here
		return nil
	})
})
func fixUser(ctx context.Context, w http.ResponseWriter, r *http.Request, path string, fn func(user *User) error) {
	q := datastore.NewQuery("users")
	c := r.URL.Query().Get("c")
	if len(c) > 0 {
		cursor, err := datastore.DecodeCursor(c)
		if err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			w.Write([]byte(err.Error()))
			return
		}
		q.Start(cursor)
	}
	iter := q.Run(ctx)
	var cr datastore.Cursor
	for i := 0; i < 20; i++ {
		var u User
		key, err := iter.Next(&u)
		if err == datastore.Done {
			return
		}
		if err != nil {
			panic(err.Error())
		}
		cr, _ = iter.Cursor()
		log.Debugf(ctx, "Cursor: %v", cr) // always the same value
		u.Key = key
		fn(&u)
	}
	pathWithCursor := fmt.Sprintf("%s?c=%s", path, cr.String())
	http.Redirect(w, r, pathWithCursor, 301)
}
I looked at some of my own cursor code and compared it against yours. The main difference I see is that I use q = q.Start(cursor) rather than q.Start(cursor). This should fix your problem, since your query will now be updated to reflect the position specified by the cursor. Without storing the result back into the q variable, your query is not updated: Start returns a derived query and does not modify the receiver.
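Applied to the fixUser function from the question, the fix is one line. A minimal sketch of just the affected branch:

cursor, err := datastore.DecodeCursor(c)
if err != nil {
	w.WriteHeader(http.StatusInternalServerError)
	w.Write([]byte(err.Error()))
	return
}
q = q.Start(cursor) // reassign: Start returns a new query value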
I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as strings I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The method works fine in practice: I get the values in plain text, with an empty string in place of the nil values.
However, I have two concerns:
This does not look efficient. I need to handle 25+ fields like this, and that would mean reading each of them as bytes and converting to string: too many function calls and conversions, two structs to handle the data, and so on.
The code looks ugly. It is already convoluted with two fields, and it becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better, cleaner, more efficient, idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database returns gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
	id    []byte
	state []byte
}

// output format
type udInfo struct {
	id    string
	state string
}

func CToGoString(c []byte) string {
	// Trim at the first NUL byte, C-string style.
	n := -1
	for i, b := range c {
		if b == 0 {
			break
		}
		n = i
	}
	return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
	var out udInfo
	out.id = CToGoString(in.id)
	out.state = stateName(in.state)
	return out
}
func GetInfo(ud string) udInfo {
	db := getFileHandle()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	ret := udInfo{}
	r := udInfoBytes{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	ret = dbBytesToString(r)
	defer db.Close()
	return ret
}
Edit:
I want to have something like the following, where I do not have to worry about handling NULL and the values are automatically read as empty strings.
// output format
type udInfo struct {
	id    string
	state string
}

func GetInfo(ud string) udInfo {
	db := getFileHandle()
	q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)
	rows, err := db.Query(q)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	r := udInfo{}
	for rows.Next() {
		err := rows.Scan(&r.id, &r.state)
		if err != nil {
			log.Println(err)
		}
		break
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	return r
}
There are separate types to handle NULL values coming from the database, such as sql.NullBool, sql.NullFloat64, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
	// use s.String
} else {
	// NULL value
}
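Applied to the GetInfo function from the question, a minimal sketch (assuming the same Mytable schema; NullString.String is simply "" when the column is NULL):

var id, state sql.NullString
err := rows.Scan(&id, &state)
if err != nil {
	log.Println(err)
}
r := udInfo{id: id.String, state: state.String}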
Go's database/sql package also handles pointers to these types.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	_, err = db.Exec("create table foo(id integer primary key, value text)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values(null)")
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.Exec("insert into foo(value) values('bar')")
	if err != nil {
		log.Fatal(err)
	}
	rows, err := db.Query("select id, value from foo")
	if err != nil {
		log.Fatal(err)
	}
	for rows.Next() {
		var id int
		var value *string
		err = rows.Scan(&id, &value)
		if err != nil {
			log.Fatal(err)
		}
		if value != nil {
			fmt.Println(id, *value)
		} else {
			fmt.Println(id, value)
		}
	}
}
You should get output like this:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example, you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give state a default value of an empty string in the event that it was stored as NULL in the db.
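As an aside, building the query with fmt.Sprintf as in the question invites SQL injection; passing ud as a bound parameter works just as well with COALESCE. A minimal sketch (assuming a driver that uses ? placeholders):

rows, err := db.Query("SELECT id, COALESCE(state, '') AS state FROM Mytable WHERE id = ?", ud)
if err != nil {
	log.Fatal(err)
}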
Two ways to handle those NULLs:
Using sql.NullString:
if value.Valid {
	return value.String
}
Using *string:
if value != nil {
	return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySQL driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the querying of the database into simple to use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
	column1 = row.Str(0)
	column2 = row.Int(1)
	column3 = row.Bool(2)
	column4 = row.Date(3)
	// etc...
}
Notice the convenient row methods for coercing values to a particular type. NULLs are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go
The blobstore API has no function to list all blobs. How can I get this list and then delete all blobs?
The blobstore API on App Engine for Go has no way to do this. Instead, use the datastore to fetch __BlobInfo__ entities as appengine.BlobInfo. Although the API claims to have a BlobKey field, it is not populated; instead, take the string ID of the returned key and cast it to an appengine.BlobKey, which you can then pass to blobstore.Delete.
Here's a handler at "/tasks/delete-blobs" that deletes 20k blobs at a time, re-queueing itself until they are all deleted. Also note that cursors are not used here: I suspect that __BlobInfo__ is special and doesn't support cursors (when I attempted to use them, they did nothing).
func DeleteBlobs(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	c = appengine.Timeout(c, time.Minute)
	q := datastore.NewQuery("__BlobInfo__").KeysOnly()
	it := q.Run(c)
	wg := sync.WaitGroup{}
	something := false
	for n := 0; n < 20; n++ {
		// Collect up to 1000 blob keys per batch.
		var bk []appengine.BlobKey
		for i := 0; i < 1000; i++ {
			k, err := it.Next(nil)
			if err == datastore.Done {
				break
			} else if err != nil {
				c.Errorf("err: %v", err)
				continue
			}
			bk = append(bk, appengine.BlobKey(k.StringID()))
		}
		if len(bk) == 0 {
			break
		}
		something = true
		wg.Add(1)
		go func(bk []appengine.BlobKey) {
			defer wg.Done()
			c.Errorf("deleting %v blobs", len(bk))
			err := blobstore.DeleteMulti(c, bk)
			if err != nil {
				c.Errorf("blobstore delete err: %v", err)
			}
		}(bk)
	}
	wg.Wait()
	if something {
		taskqueue.Add(c, taskqueue.NewPOSTTask("/tasks/delete-blobs", nil), "")
	}
}
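To complete the loop, the handler needs to be registered at the path the task re-queues to. A minimal sketch (assuming the classic App Engine Go runtime's init-based setup):

func init() {
	http.HandleFunc("/tasks/delete-blobs", DeleteBlobs)
}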