Cannot write to file - arrays

I tried to code a one-time pad in Go, but I can't write to the file.
The files are binary files (compiled Go code).
My code:
package main

import (
    "fmt"
    "io/ioutil"
    "math/rand"
)

func rndByte(l int) []byte {
    token := make([]byte, l)
    rand.Read(token)
    return token
}

func writeByteFile(filename string, inp []byte) {
    err := ioutil.WriteFile(filename, inp, 0644)
    if err != nil {
        fmt.Println(err)
    }
}

func readFile(filename string) []byte {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        fmt.Println("File reading error", err)
    }
    return data
}

func main() {
    x := readFile("xor")
    // y := len(x)
    z := rndByte(489)

    var res [489]byte
    for i := 0; i != 489; i++ {
        res[i] = x[i] ^ z[i]
    }

    writeByteFile("xorKey", z)
    writeByteFile("xorENC", res)
}
My error:
# command-line-arguments
./xorbyte.go:47:19: cannot use res (type [489]byte) as type []byte in argument to writeByteFile

[489]byte and []byte are different types:
[489]byte is an array;
[]byte is a slice.
Try converting the array to a slice:
writeByteFile("xorENC", res[:])
Check out https://blog.golang.org/go-slices-usage-and-internals

You are passing a byte array where a byte slice is expected. To convert an array to a slice, use the following syntax:
var byteArray [5]byte
byteSlice := byteArray[:]
Ref: Convert array to slice in Go
So the call becomes:
writeByteFile("xorENC", res[:])

How to chunk a file into 4 equal files

I have a file of huge size, for example 100MB, and I need to chunk it into four 25MB files using Go.
The problem is that if I use a goroutine to read the file, the order of the data in the resulting files is not preserved. The code I used is:
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "sync"

    "github.com/google/uuid"
)

func main() {
    file, err := os.Open("sampletest.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    lines := make(chan string)
    // start four workers to do the heavy lifting
    wc1 := startWorker(lines)
    wc2 := startWorker(lines)
    wc3 := startWorker(lines)
    wc4 := startWorker(lines)

    scanner := bufio.NewScanner(file)
    go func() {
        defer close(lines)
        for scanner.Scan() {
            lines <- scanner.Text()
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }()

    writefiles(wc1, wc2, wc3, wc4)
}

func writefile(data string) {
    file, err := os.Create("chunks/" + uuid.New().String() + ".txt")
    if err != nil {
        fmt.Println(err)
    }
    defer file.Close()
    file.WriteString(data)
}

func startWorker(lines <-chan string) <-chan string {
    finished := make(chan string)
    go func() {
        defer close(finished)
        for line := range lines {
            finished <- line
        }
    }()
    return finished
}

func writefiles(cs ...<-chan string) {
    var wg sync.WaitGroup
    output := func(c <-chan string) {
        var d string
        for n := range c {
            d += n
            d += "\n"
        }
        writefile(d)
        wg.Done()
    }
    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }
    go func() {
        wg.Wait()
    }()
}
Using this code my file gets split into four equal files, but the order is not preserved: the four workers all receive lines from a single shared channel, so which worker gets which line is nondeterministic.
I am very new to Go; any suggestions are highly appreciated.
I took this code from some site and tweaked it here and there to meet my requirements.
Based on your statement, you should be able to modify the code to run sequentially instead of concurrently; that is far easier than adding concurrency to existing code.
The work is basically just removing the concurrent parts.
Anyway, below is a simple example of how to achieve what you want. I used your code as the base and removed everything related to concurrent processing.
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"

    "github.com/google/uuid"
)

func main() {
    split := 4

    file, err := os.Open("file.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    // read all lines into memory first, so their order is preserved
    scanner := bufio.NewScanner(file)
    texts := make([]string, 0)
    for scanner.Scan() {
        texts = append(texts, scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }

    // write the chunks out in order; the last chunk also takes
    // any leftover lines when the count is not divisible by split
    lengthPerSplit := len(texts) / split
    for i := 0; i < split; i++ {
        if i+1 == split {
            chunkTexts := texts[i*lengthPerSplit:]
            writefile(strings.Join(chunkTexts, "\n"))
        } else {
            chunkTexts := texts[i*lengthPerSplit : (i+1)*lengthPerSplit]
            writefile(strings.Join(chunkTexts, "\n"))
        }
    }
}

func writefile(data string) {
    file, err := os.Create("chunks-" + uuid.New().String() + ".txt")
    if err != nil {
        fmt.Println(err)
    }
    defer file.Close()
    file.WriteString(data)
}
Here is a simple byte-based file splitter. You can handle the leftovers yourself; here the leftover bytes are written to a fifth file.
package main

import (
    "bufio"
    "errors"
    "fmt"
    "io"
    "os"
)

func main() {
    file, err := os.Open("sample-text-file.txt")
    if err != nil {
        panic(err)
    }
    defer file.Close()

    // to divide the file into four chunks
    info, err := file.Stat()
    if err != nil {
        panic(err)
    }
    chunkSize := int(info.Size() / 4)

    // buffered reader of chunk size
    bufR := bufio.NewReaderSize(file, chunkSize)

    // Notice the range over a slice of len 5: after 4 full chunks,
    // the leftover is written to a 5th file.
    for i := range [5]int{} {
        buf := make([]byte, chunkSize)
        // io.ReadFull keeps reading until the buffer is full or the
        // input ends, so a chunk is never silently short.
        rlen, err := io.ReadFull(bufR, buf)
        fmt.Println("Read: ", rlen)
        if errors.Is(err, io.EOF) {
            break // no leftover: the size was an exact multiple of 4
        }
        if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
            panic(err)
        }
        writeFile(i, buf[:rlen])
    }
}

// A slice header is small, so the slice can be passed by value;
// the underlying bytes are not copied.
func writeFile(i int, buf []byte) {
    fname := fmt.Sprintf("file_%v", i)
    f, err := os.Create(fname) // check the error before deferring Close
    if err != nil {
        panic(err)
    }
    defer f.Close()

    wLen, err := f.Write(buf)
    if err != nil {
        panic(err)
    }
    fmt.Println("Wrote ", wLen, "to", fname)
}

Golang abstract function that gets data from db and fills array

I want to create an abstract function that gets data from the DB and fills an array with it. The element types of the array can differ, and I want to do this without reflect, due to performance concerns.
I just want to call something like GetDBItems() everywhere and get an array of DB data with the desired type, but every implementation I have come up with is awful.
Here is the function implementation:
type AbstractArrayGetter func(size int) []interface{}

func GetItems(arrayGetter AbstractArrayGetter) {
    res := DBResponse{}
    DB.Get(&res)
    arr := arrayGetter(len(res.Rows))
    for i := 0; i < len(res.Rows); i++ {
        json.Unmarshal(res.Rows[i].Value, arr[i])
    }
}
Here I call this function:
var events []Event
GetFullItems("events", "events_list", map[string]interface{}{}, func(size int) []interface{} {
    events = make([]Event, size)
    proxyEnt := make([]interface{}, size)
    for i := range events {
        proxyEnt[i] = &events[i]
    }
    return proxyEnt
})
It works, but there is too much code at every call site, and there is also a performance issue with copying the events array into the interfaces array.
How can I do this without reflect, and with a short call? Or is reflect actually not too slow in this case?
I tested the performance with reflect, and it is similar to that of the solution mentioned above. So here is a solution with reflect, if someone needs it. This function gets data from the DB and fills an abstract array:
func GetItems(design string, viewName string, opts map[string]interface{}, arrayType interface{}) (interface{}, error) {
    res := couchResponse{}
    opts["limit"] = 100000
    bytes, err := CouchView(design, viewName, opts)
    if err != nil {
        return nil, err
    }
    err = json.Unmarshal(bytes, &res)
    if err != nil {
        return nil, err
    }
    dataType := reflect.TypeOf(arrayType)
    slice := reflect.MakeSlice(reflect.SliceOf(dataType), len(res.Rows), len(res.Rows))
    for i := 0; i < len(res.Rows); i++ {
        if opts["include_docs"] == true {
            err = json.Unmarshal(res.Rows[i].Doc, slice.Index(i).Addr().Interface())
        } else {
            err = json.Unmarshal(res.Rows[i].Value, slice.Index(i).Addr().Interface())
        }
        if err != nil {
            return nil, err
        }
    }
    x := reflect.New(slice.Type())
    x.Elem().Set(slice)
    return x.Interface(), nil
}
and getting data using this function:
var e Event
res, err := GetItems("data", "data_list", map[string]interface{}{}, e)
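Note that the returned interface{} wraps a pointer to the slice (reflect.New allocates a *[]Event here), so a type assertion is needed to get the concrete slice back. A minimal sketch of that last step:

var e Event
res, err := GetItems("data", "data_list", map[string]interface{}{}, e)
if err != nil {
    log.Fatal(err)
}
events := *res.(*[]Event) // assert to *[]Event, then dereference
fmt.Println(len(events))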

Loading matrix in from csv file - golang

I'm writing a program that performs math on matrices. I want to load them from a csv file, and I have the following code:
file, err := os.Open("matrix1.csv")
if err != nil {
    log.Fatal(err)
}
defer file.Close()

lines, _ := csv.NewReader(file).ReadAll()
for i, line := range lines {
    for j, val := range line {
        valInt, err := strconv.Atoi(val)
        if err != nil {
            log.Fatal(err)
        }
        matrix1[i][j] = valInt
    }
}
However, the strconv call throws an error:
strconv.ParseInt: parsing "": invalid syntax
Everything else in the code appears correct; does anyone have ideas on how to solve this error?
EDIT: I'm now trying to output my result to a new csv file.
I have the following code:
file2, err := os.Create("result.csv")
if err != nil {
    log.Fatal(err)
}
defer file2.Close()

writer := csv.NewWriter(file2)
for line2 := range blank {
    writer.Write(line2)
}
This gives the following error:
cannot use line2 (type int) as type []string in argument to writer.Write
Updated with the suggestions from the comments; however, the above error now appears.
This means one of the cells of your CSV is blank. I reproduced the error with this code:
package main

import (
    "encoding/csv"
    "log"
    "strconv"
    "strings"
)

func main() {
    matrix1 := [5][5]int{}
    file := strings.NewReader("1,2,3,4,5\n6,7,8,,0")
    lines, _ := csv.NewReader(file).ReadAll()
    for i, line := range lines {
        for j, val := range line {
            valInt, err := strconv.Atoi(val)
            if err != nil {
                log.Fatal(err)
            }
            matrix1[i][j] = valInt
        }
    }
}
If you are OK with treating blank cells as 0, this will get you past the error:
func main() {
    matrix1 := [5][5]int{}
    file := strings.NewReader("1,2,3,4,5\n6,7,8,,0")
    lines, _ := csv.NewReader(file).ReadAll()
    for i, line := range lines {
        for j, val := range line {
            var valInt int
            var err error
            if val == "" {
                valInt = 0
            } else {
                valInt, err = strconv.Atoi(val)
            }
            if err != nil {
                log.Fatal(err)
            }
            matrix1[i][j] = valInt
        }
    }
}
As mentioned in the comments above, the error was due to a missing value in my csv file.
Once the file was amended, the error was gone.
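The follow-up error from the edit is a separate issue: ranging over a slice with a single variable yields integer indexes, while csv.Writer.Write expects a []string record. Below is a minimal sketch of writing an integer matrix out, assuming the result is a [][]int named blank:

file2, err := os.Create("result.csv")
if err != nil {
    log.Fatal(err)
}
defer file2.Close()

writer := csv.NewWriter(file2)
for _, row := range blank { // two-variable form: index discarded, row kept
    record := make([]string, len(row))
    for j, v := range row {
        record[j] = strconv.Itoa(v) // csv records are strings, so convert each int
    }
    if err := writer.Write(record); err != nil {
        log.Fatal(err)
    }
}
writer.Flush() // flush buffered output before checking for a write error
if err := writer.Error(); err != nil {
    log.Fatal(err)
}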

Not able to store data in file properly using gob

When I try to save a map of type map[mapKey]string into a file using a gob encoder, the string values are not saved to the file.
Here mapKey is a struct and the map value is a long JSON string.
type mapKey struct {
    Id1 string
    Id2 string
}
Whenever I use a nested map instead of the struct, like:
var m = make(map[string]map[string]string)
it works fine and saves the string properly. I am not sure what I am missing here.
Code to encode, decode and save it in file:
func Save(path string, object interface{}) error {
    file, err := os.Create(path)
    if err == nil {
        encoder := gob.NewEncoder(file)
        encoder.Encode(object)
    }
    file.Close()
    return err
}

// Decode Gob file
func Load(path string, object interface{}) error {
    file, err := os.Open(path)
    if err == nil {
        decoder := gob.NewDecoder(file)
        err = decoder.Decode(object)
    }
    file.Close()
    return err
}

func Check(e error) {
    if e != nil {
        _, file, line, _ := runtime.Caller(1)
        fmt.Println(line, "\t", file, "\n", e)
        os.Exit(1)
    }
}
There is nothing special about encoding a value of type map[mapKey]string.
See this very simple working example, which uses the specified reader/writer:
func save(w io.Writer, i interface{}) error {
    return gob.NewEncoder(w).Encode(i)
}

func load(r io.Reader, i interface{}) error {
    return gob.NewDecoder(r).Decode(i)
}
Testing it with an in-memory buffer (bytes.Buffer):
m := map[mapKey]string{
    {"1", "2"}: "12",
    {"3", "4"}: "34",
}
fmt.Println(m)

buf := &bytes.Buffer{}
if err := save(buf, m); err != nil {
    panic(err)
}

var m2 map[mapKey]string
if err := load(buf, &m2); err != nil {
    panic(err)
}
fmt.Println(m2)
Output as expected (try it on the Go Playground):
map[{1 2}:12 {3 4}:34]
map[{1 2}:12 {3 4}:34]
You have working code, but note that you have to call Load() with a pointer value (otherwise Decoder.Decode() wouldn't be able to modify it).
A few things to improve:
In your example you are swallowing the error returned by Encoder.Encode(); check it and you'll see what the problem is. (A common cause is a key struct mapKey with no exported fields, in which case an error of gob: type main.mapKey has no exported fields would be returned.)
Also, you should call File.Close() as a deferred function, and if opening the file fails, you should return early without closing the file.
This is the corrected version of your code:
func Save(path string, object interface{}) error {
    file, err := os.Create(path)
    if err != nil {
        return err
    }
    defer file.Close()
    return gob.NewEncoder(file).Encode(object)
}

func Load(path string, object interface{}) error {
    file, err := os.Open(path)
    if err != nil {
        return err
    }
    defer file.Close()
    return gob.NewDecoder(file).Decode(object)
}
Testing it:
m := map[mapKey]string{
    {"1", "2"}: "12",
    {"3", "4"}: "34",
}
fmt.Println(m)

if err := Save("testfile", m); err != nil {
    panic(err)
}

var m2 map[mapKey]string
if err := Load("testfile", &m2); err != nil {
    panic(err)
}
fmt.Println(m2)
Output as expected:
map[{1 2}:12 {3 4}:34]
map[{1 2}:12 {3 4}:34]

How do I handle nil return values from database?

I am writing a basic program to read values from a database table and print them in a table. The table was populated by an ancient program. Some of the fields in a row are optional, and when I try to read them as string I get the following error:
panic: sql: Scan error on column index 2: unsupported driver -> Scan pair: <nil> -> *string
After reading other questions about similar issues, I came up with the following code to handle the nil values. The method works fine in practice: I get the values as plain text, with an empty string in place of the nil values.
However, I have two concerns:
This does not look efficient. I need to handle 25+ fields like this, which would mean reading each of them as bytes and converting to string: too many function calls and conversions, two structs to handle the data, and so on.
The code looks ugly. It is already convoluted with 2 fields and becomes unreadable as I go to 25+.
Am I doing it wrong? Is there a better, cleaner, more efficient, idiomatic Go way to read values from a database?
I find it hard to believe that a modern language like Go would not handle database NULLs gracefully.
Thanks in advance!
Code snippet:
// DB read format
type udInfoBytes struct {
    id    []byte
    state []byte
}

// output format
type udInfo struct {
    id    string
    state string
}

func CToGoString(c []byte) string {
    n := -1
    for i, b := range c {
        if b == 0 {
            break
        }
        n = i
    }
    return string(c[:n+1])
}

func dbBytesToString(in udInfoBytes) udInfo {
    var out udInfo
    out.id = CToGoString(in.id)
    out.state = stateName(in.state)
    return out
}

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)

    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    ret := udInfo{}
    r := udInfoBytes{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    ret = dbBytesToString(r)
    defer db.Close()
    return ret
}
Edit:
I want to have something like the following, where I do not have to worry about handling NULL and the values are automatically read as empty strings:
// output format
type udInfo struct {
    id    string
    state string
}

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    q := fmt.Sprintf("SELECT id,state FROM Mytable WHERE id='%s' ", ud)

    rows, err := db.Query(q)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    r := udInfo{}
    for rows.Next() {
        err := rows.Scan(&r.id, &r.state)
        if err != nil {
            log.Println(err)
        }
        break
    }
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    return r
}
There are separate types to handle null values coming from the database, such as sql.NullBool, sql.NullFloat64, etc.
For example:
var s sql.NullString
err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
...
if s.Valid {
    // use s.String
} else {
    // NULL value
}
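Applied to the GetInfo function from the question, a minimal sketch (assuming the driver supports ? placeholders; getFileHandle and udInfo are taken from the question):

func GetInfo(ud string) udInfo {
    db := getFileHandle()
    defer db.Close()

    var id, state sql.NullString
    err := db.QueryRow("SELECT id, state FROM Mytable WHERE id = ?", ud).Scan(&id, &state)
    if err != nil {
        log.Fatal(err)
    }
    // NullString.String is "" when the column was NULL, which is
    // exactly the empty-string fallback the question asks for.
    return udInfo{id: id.String, state: state.String}
}

Using a placeholder instead of fmt.Sprintf also avoids the SQL injection risk in the original query.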
Go's database/sql package also handles pointer types:
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3"
)

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    _, err = db.Exec("create table foo(id integer primary key, value text)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values(null)")
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec("insert into foo(value) values('bar')")
    if err != nil {
        log.Fatal(err)
    }

    rows, err := db.Query("select id, value from foo")
    if err != nil {
        log.Fatal(err)
    }
    for rows.Next() {
        var id int
        var value *string
        err = rows.Scan(&id, &value)
        if err != nil {
            log.Fatal(err)
        }
        if value != nil {
            fmt.Println(id, *value)
        } else {
            fmt.Println(id, value)
        }
    }
}
You should get output like below:
1 <nil>
2 bar
An alternative solution would be to handle this in the SQL statement itself by using the COALESCE function (though not all DBs may support it).
For example you could instead use:
q := fmt.Sprintf("SELECT id,COALESCE(state, '') as state FROM Mytable WHERE id='%s' ", ud)
which would effectively give 'state' a default value of an empty string in the event that it was stored as a NULL in the db.
Two ways to handle those NULLs:
Using sql.NullString:
if value.Valid {
    return value.String
}
Using *string:
if value != nil {
    return *value
}
https://medium.com/@raymondhartoyo/one-simple-way-to-handle-null-database-value-in-golang-86437ec75089
I've started to use the MyMySql driver, as it has a nicer interface than that of the standard library.
https://github.com/ziutek/mymysql
I've then wrapped the database querying into simple-to-use functions. This is one such function:
import "github.com/ziutek/mymysql/mysql"
import _ "github.com/ziutek/mymysql/native"
// Execute a prepared statement expecting multiple results.
func Query(sql string, params ...interface{}) (rows []mysql.Row, err error) {
statement, err := db.Prepare(sql)
if err != nil {
return
}
result, err := statement.Run(params...)
if err != nil {
return
}
rows, err = result.GetRows()
return
}
Using it is as simple as this snippet:
rows, err := Query("SELECT * FROM table WHERE column = ?", param)
for _, row := range rows {
    column1 = row.Str(0)
    column2 = row.Int(1)
    column3 = row.Bool(2)
    column4 = row.Date(3)
    // etc...
}
Notice the nice row methods for coercing values to a particular type. NULLs are handled by the library, and the rules are documented here:
https://github.com/ziutek/mymysql/blob/master/mysql/row.go
