How to make an HTTP request to get data from a ClickHouse database

I'm trying to make an HTTP request to get data from a ClickHouse database using Go. I don't have much experience with it and I'm not sure how to read the value returned by the query.
This is what I have:
reader := strings.NewReader("SELECT COUNT(*) FROM system.tables WHERE database = 'local' AND name = 'persons'")
request, err := http.NewRequest("GET", "http://localhost:8123", reader)
if err != nil {
    fmt.Println(err)
}
client := &http.Client{}
resp, err := client.Do(request)
if err != nil {
    fmt.Println(err)
}
fmt.Println("The answer is: ", resp.Body)
The expected output is a number (1 means the table exists, 0 means it doesn't), but what I'm getting from resp.Body is this:
The answer is: &{0xc4201741c0 {0 0} false <nil> 0x6a9bb0 0x6a9b20}
Any idea how to get just the value of the query?

I had to add an extra line:
message, _ := ioutil.ReadAll(resp.Body)
fmt.Println(string(message))
That got me what I wanted: resp.Body is a stream (io.ReadCloser) that has to be read, not printed directly.

Related

How to query any table of RDS using Golang SDK

I am writing an AWS Lambda to query 10 different tables from RDS (SQL Server) using Go. What I have learned so far is that you have to define a struct matching a table in order to query it. But since I want to query 10 tables, I don't want to create a struct for every table; besides, a table's schema may change someday.
Later, I want to create one CSV file per table as a backup of the queried data and upload it to S3. So is it possible to build the CSV file directly inside the Lambda, so that I can upload it straight to S3?
You can see my current code below
func executeQuery(dbconnection *sql.DB) {
    println("\n\n----------Executing Query ----------")
    query := "select TOP 5 City,State,Country from IMBookingApp.dbo.Address"
    rows, err := dbconnection.Query(query)
    if err != nil {
        fmt.Println("Error:")
        log.Fatal(err)
    }
    println("rows", rows)
    defer rows.Close()
    count := 0
    for rows.Next() {
        var City, State, Country string
        err := rows.Scan(&City, &State, &Country)
        if err != nil {
            fmt.Println("Error reading rows: " + err.Error())
        }
        fmt.Printf("City: %s, State: %s, Country: %s\n", City, State, Country)
        count++
    }
}
This code only works for the Address table, not for the other tables.
I have also tried it with GORM:
package main

import (
    "fmt"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mssql"
)

type Currency struct {
    CurrencyId  int    `gorm:"column:CurrencyId;"`
    Code        string `gorm:"column:Code;"`
    Description string `gorm:"column:Description;"`
}

func main() {
    db, err := gorm.Open("mssql", "sqlserver://***")
    if err != nil {
        fmt.Println("Error", err)
        return
    }
    defer db.Close()
    db.SingularTable(true)
    gorm.DefaultTableNameHandler = func(dbVeiculosGorm *gorm.DB, defaultTableName string) string {
        return "IMBookingApp.dbo.Currency"
    }
    fmt.Println("HasTable-Currency:", db.HasTable("ClientUser"))
    var currency Currency
    db.Debug().Find(&currency)
    fmt.Println("Currency:", currency)
}
With both approaches I couldn't find a way to make the code generic across multiple tables. I would appreciate any suggestions or pointers to resources.
I did not test this code, but it should give you an idea of how to fetch rows into a slice of strings.
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
    panic(err)
}
for rows.Next() {
    receiver := make([]*string, len(columns))
    // Scan needs one pointer argument per column, so spread a
    // []interface{} wrapping the receiver elements.
    dest := make([]interface{}, len(columns))
    for i := range receiver {
        dest[i] = &receiver[i]
    }
    if err := rows.Scan(dest...); err != nil {
        fmt.Println("Error reading rows: " + err.Error())
    }
}
Go's database/sql internally converts many types into strings - https://github.com/golang/go/blob/master/src/database/sql/convert.go#L219
If the data cannot be converted, you have 2 options:
Easy - update your SQL query to return strings or string-compatible data.
Complicated - use a slice of interface{} instead of a slice of *string and fill it with default values of the correct type based on rows.ColumnTypes(). Later you will have to convert the real values into strings to save into the CSV.
The code below worked for me:
conn, _ := getConnection() // Get database connection
rows, err := conn.Query(query)
if err != nil {
    fmt.Println("Error:")
    log.Fatal(err)
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
    panic(err)
}
for rows.Next() {
    receiver := make([]string, len(columns))
    is := make([]interface{}, len(receiver))
    for i := range is {
        is[i] = &receiver[i]
        // each is[i] is an interface{} wrapping the *string that
        // points into receiver - compatible with Scan()
    }
    err := rows.Scan(is...)
    if err != nil {
        fmt.Println("Error reading rows: " + err.Error())
    }
    fmt.Println("receiver", receiver)
}
Reference: sql: expected 3 destination arguments in Scan, not 1 in Golang

BigQuery - fetch 1000000 records and do some process over data using goLang

I have 1,000,000 records in BigQuery. What is the best way to fetch the data from the DB and process it using Go? I'm getting a timeout issue if I fetch all the data without a limit. I already increased the timeout to 5 minutes, but the query takes more than 5 minutes.
I want to do some kind of streaming call or pagination implementation, but I don't know how to do that in Go.
var FetchCustomerRecords = func(req *http.Request) *bigquery.RowIterator {
    ctx := appengine.NewContext(req)
    ctxWithDeadline, _ := context.WithTimeout(ctx, 5*time.Minute)
    log.Infof(ctx, "Fetch Customer records from BigQuery")
    client, err := bigquery.NewClient(ctxWithDeadline, "ddddd-crm")
    if err != nil {
        log.Infof(ctx, "%v", err)
        return nil
    }
    q := client.Query("SELECT * FROM Something")
    q.Location = "US"
    job, err := q.Run(ctxWithDeadline)
    if err != nil {
        log.Infof(ctx, "%v", err)
    }
    status, err := job.Wait(ctxWithDeadline)
    if err != nil {
        log.Infof(ctx, "%v", err)
    }
    if err := status.Err(); err != nil {
        log.Infof(ctx, "%v", err)
    }
    it, err := job.Read(ctxWithDeadline)
    if err != nil {
        log.Infof(ctx, "%v", err)
    }
    return it
}
You can read the table contents directly without issuing a query. This doesn't incur query charges, and it provides the same row iterator you would get from a query.
For small results this is fine. For large tables, I would suggest checking out the new Storage API and the code sample on the samples page.
For a small table, or to simply read a small subset of rows, you can do something like this (it reads up to 10k rows from one of the public dataset tables):
func TestTableRead(t *testing.T) {
    ctx := context.Background()
    client, err := bigquery.NewClient(ctx, "my-project-id")
    if err != nil {
        t.Fatal(err)
    }
    table := client.DatasetInProject("bigquery-public-data", "stackoverflow").Table("badges")
    it := table.Read(ctx)
    rowLimit := 10000
    var rowsRead int
    for {
        var row []bigquery.Value
        err := it.Next(&row)
        if err == iterator.Done || rowsRead >= rowLimit {
            break
        }
        if err != nil {
            t.Fatalf("error reading row offset %d: %v", rowsRead, err)
        }
        rowsRead++
        fmt.Println(row)
    }
}
You can split your query into 10 chunks of 100,000 records and run them in multiple goroutines.
Use a SQL query like
select * from somewhere order by id DESC limit 100000 offset 0
and in the next goroutine
select * from somewhere order by id DESC limit 100000 offset 100000

MSSQL leaking connections

I have a strange issue with Go's database/sql and probably denisenkom/go-mssqldb.
The relevant part of my code:
func Auth(username string, password string, remote_ip string, user_agent string) (string, User, error) {
    var token string
    var user = User{}
    query := `exec dbo.sp_get_user ?,?`
    rows, err := DB.Query(query, username, password)
    if err != nil {
        return token, user, err
    }
    defer rows.Close()
    rows.Next()
    if err = rows.Scan(&user.Id, &user.Username, &user.Description); err != nil {
        log.Printf("SQL SCAN: Failed scan User in Auth. %v \n", err)
        return token, user, err
    }
    hashFunc := md5.New()
    hashFunc.Write([]byte(username + time.Now().String()))
    token = hex.EncodeToString(hashFunc.Sum(nil))
    query = `exec dbo.sp_set_session ?,?,?,?`
    _, err = DB.Exec(query, user.Id, token, remote_ip, user_agent)
    if err != nil {
        return token, user, err
    }
    return token, user, nil
}
Problem: defer rows.Close() does not release the connection.
After this code runs, DB.Connection.Stats().OpenConnections always shows 2 open connections (and it stays at 2 for the whole app lifecycle, even after repeated user logins).
But if I rewrite the func as:
...
query := `exec dbo.sp_get_user ?,?`
rows, err := DB.Query(query, username, password)
if err != nil {
    return token, user, err
}
defer rows.Close()
rows.Next()
if err = rows.Scan(&user.Id, &user.Username, &user.Description); err != nil {
    log.Printf("SQL SCAN: Failed scan User in Auth. %v \n", err)
    return token, user, err
}
rows.Close()
...
Then the rows' underlying stmt is closed, and DB.Connection.Stats().OpenConnections always shows 1 open connection.
DB in my app simply returns the underlying pool from sql.Open.
The problem occurs only in this part, where one function runs two queries, one with Query and one with Exec.
Maybe Query and Exec use different connections, but I couldn't find anything like that in the driver source or in the database/sql source.
Thank you! (sorry for my English)
PS:
exec dbo.sp_get_user ?,? is a simple select from the user table, nothing more.
exec dbo.sp_set_session ?,?,?,? is a simple insert into the user table, nothing more.
In MSSQL, DBCC INPUTBUFFER shows me the query 'cast(##identity as bigint)', which denisenkom/go-mssqldb executes in mssql.go on line 593.

simple appengine go datastore query returns no result [duplicate]

I'm trying to save two records and then fetch the second one. The issue is that the filter doesn't seem to work: although I filter by Name ("Andrew W"), I always get "Joe Citizen" back. The counter also reports 2 records when it should be just 1. This drives me crazy. See the full code below.
The result prints: counter 2 e2 {"Joe Citizen" "Manager" "2015-03-24 09:08:58.363929 +0000 UTC" ""}
package main

import (
    "fmt"
    "net/http"
    "time"

    "google.golang.org/appengine"
    "google.golang.org/appengine/datastore"
)

type Employee struct {
    Name     string
    Role     string
    HireDate time.Time
    Account  string
}

func init() {
    http.HandleFunc("/", handle)
}

func handle(w http.ResponseWriter, r *http.Request) {
    c := appengine.NewContext(r)
    e1 := Employee{
        Name:     "Joe Citizen",
        Role:     "Manager",
        HireDate: time.Now(),
    }
    _, err := datastore.Put(c, datastore.NewKey(c, "employee", "", 0, nil), &e1)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        panic(err)
    }
    e1.Name = "Andrew W"
    _, err = datastore.Put(c, datastore.NewKey(c, "employee", "", 0, nil), &e1)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        panic(err)
    }
    var e2 Employee
    q := datastore.NewQuery("employee")
    q.Filter("Name =", "Andrew W")
    cnt, err := q.Count(c)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        panic(err)
    }
    for t := q.Run(c); ; {
        if _, err := t.Next(&e2); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            panic(err)
        }
        break
    }
    fmt.Fprintf(w, "counter %v e2 %q", cnt, e2)
}
The (first) problem is this:
q := datastore.NewQuery("employee")
q.Filter("Name =", "Andrew W")
Query.Filter() returns a derivative query that includes the filter you specified. You have to store the returned value and use that from then on:
q := datastore.NewQuery("employee")
q = q.Filter("Name =", "Andrew W")
Or just one line:
q := datastore.NewQuery("employee").Filter("Name =", "Andrew W")
Note: Without this the query you execute would have no filters and therefore would return all previously saved entities of the kind "employee", where "Joe Citizen" might be the first one which you see printed.
For the first run you will most likely see 0 results. Note that since you don't use ancestor queries, eventual consistency applies. The development SDK simulates the High Replication Datastore with its eventual consistency, and therefore a query that follows the Put() operations will not see their results.
If you put a small time.Sleep() before proceeding with the query, you will see the results you expect:
time.Sleep(time.Second)
var e2 Employee
q := datastore.NewQuery("employee").Filter("Name=", "Andrew W")
// Rest of your code...
Also note that running your code in the SDK you can simulate strong consistency by creating your context like this:
c, err := aetest.NewContext(&aetest.Options{StronglyConsistentDatastore: true})
But of course this is for testing purposes only, you can't do this in production.
If you want strongly consistent results, specify an ancestor key when creating the key, and use ancestor queries. An ancestor key is only required if you want strongly consistent results; if you're fine with a few seconds' delay before the results show up, you don't need one. Also note that the ancestor key does not have to be the key of an existing entity; it's just semantics. You can create any fictional key. Using the same (fictional) key for multiple entities puts them in the same entity group, and ancestor queries on that group will be strongly consistent.
Often the ancestor key is an existing key, usually derived from the current user or account, because it can be created or computed easily and it holds some additional information, but as noted above, it doesn't have to be.

Unable to get db response for update query for further execution in go

I want to update data in a remote DB table and then continue with further tasks, but the code never gets past the update.
Using the same code with an insert query I can insert values into the same table; the response comes back quickly and execution continues with the further tasks.
But with the update query, the values in the table do get updated, yet execution never continues.
Here is the sample code I have tried:
package main

import (
    "database/sql"
    "fmt"
    "log"
    "net"

    _ "github.com/go-sql-driver/mysql"
)

const (
    DB_NAME = "test_db"
    DB_HOST = "remote db ip address:port"
    DB_USER = "username"
    DB_PASS = "password"
)

const (
    bufferSize int    = 1024
    port       string = ":23456"
)

func main() {
    udpAddr, err := net.ResolveUDPAddr("udp6", port)
    var s Server // Server is defined elsewhere in the app
    s.conn, err = net.ListenUDP("udp4", udpAddr)
    fmt.Println("Trying to connect remote db")
    dsn := DB_USER + ":" + DB_PASS + "@" + DB_HOST + "/" + DB_NAME + "?charset=utf8"
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    fmt.Println("Connected db")
    rows, err := db.Query("UPDATE user_name_table SET user_id = ? WHERE user_name = ? ", "abcd", "admin")
    defer rows.Close()
    if err != nil {
        log.Fatal(err)
        fmt.Println("error in updating values")
    } else {
        fmt.Println("updated successfully in db")
    }
    // not coming here for doing further tasks
}
Also, I have tried updating the DB table from a PHP web service and it works fine, but using Go it neither returns a response after updating the values nor moves on.
Please help me.
Thanks in advance.
This happens because log.Fatal(err) calls os.Exit(1). In other words, it terminates your program whenever err != nil.
Found the solution. I just removed the unwanted rows value returned by the update query and everything works fine again:
_, err := db.Query("UPDATE user_name_table SET user_id = ? WHERE user_name = ? ", "abcd", "admin")
// defer rows.Close()
(For statements that return no rows, db.Exec is the more idiomatic call than db.Query.)
