Go: Writing []byte to file results in zero byte file

I'm trying to serialize structured data to a file. I looked through some examples and came up with this construction:
func (order Order) Serialize(folder string) {
    b := bytes.Buffer{}
    e := gob.NewEncoder(&b)
    err := e.Encode(order)
    if err != nil {
        panic(err)
    }
    os.MkdirAll(folder, 0777)
    file, err := os.Create(folder + order.Id)
    if err != nil {
        panic(err)
    }
    defer file.Close()
    writer := bufio.NewWriter(file)
    n, err := writer.Write(b.Bytes())
    fmt.Println(n)
    if err != nil {
        panic(err)
    }
}
Serialize is a method that serializes its object to a file named after its Id property. I stepped through the debugger: the byte buffer contains data before writing, i.e. the object is fully initialized. Even the n variable, holding the number of bytes written, is more than a thousand, so the file shouldn't be empty at all. The file is created, but it is completely empty. What's wrong?

bufio.Writer (as the package name hints) uses a buffer to cache writes. If you ever use it, you must call Writer.Flush() when you're done writing to it to ensure the buffered data gets written to the underlying io.Writer.
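If you want to keep the buffered writer, the minimal fix is to flush it before returning (a sketch showing only the changed part of the code above):
if _, err := writer.Write(b.Bytes()); err != nil {
    panic(err)
}
// Flush pushes the buffered data down to the underlying *os.File.
if err := writer.Flush(); err != nil {
    panic(err)
}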
Also note that you can directly write to an os.File, no need to create a buffered writer "around" it. (*os.File implements io.Writer).
Also note that you can point the gob.Encoder directly at the os.File, so even the bytes.Buffer is unnecessary.
Also os.MkdirAll() may fail, check its return value.
Also it's better to "concatenate" parts of a file path using filepath.Join() which takes care of extra / missing slashes at the end of folder names.
And last, it would be better to signal the failure of Serialize(), e.g. with an error return value, so the caller has the chance to examine whether the operation succeeded, and act accordingly.
So Order.Serialize() should look like this:
func (order Order) Serialize(folder string) error {
    if err := os.MkdirAll(folder, 0777); err != nil {
        return err
    }
    file, err := os.Create(filepath.Join(folder, order.Id))
    if err != nil {
        return err
    }
    defer file.Close()
    if err := gob.NewEncoder(file).Encode(order); err != nil {
        return err
    }
    return nil
}
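A hypothetical call site, checking the returned error (the "orders" folder name is just an example):
if err := order.Serialize("orders"); err != nil {
    log.Fatal(err)
}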


File scanner loop does not execute

I'm getting the last line number of a text file, and then trying to read the file.
Get the last line:
func getLastLine(file *os.File) (result int) {
    s := bufio.NewScanner(file)
    result = 0
    for s.Scan() {
        result++
    }
    err := s.Err()
    if err != nil {
        log.Fatal(err)
    }
    return
}
Read the file:
func readFileFrom(file *os.File) {
    s := bufio.NewScanner(file)
    for s.Scan() {
        fmt.Println(s.Text())
    }
    err := s.Err()
    if err != nil {
        log.Fatal(err)
    }
}
If I write this in main.go:
getLastLine(file)
readFileFrom(file)
It will not execute the block:
for s.Scan() {
    fmt.Println(s.Text())
}
If I remove the line getLastLine(file), the reading works as expected.
I think it's because 2 Scanners are accessing the same file.
os.File maintains a position: the offset at which the next read or write operation will happen. Reading from or writing to the file advances this position.
If you use a single file, passing it to getLastLine() will read it till its end, so its pointer will point to the end of the file. Now passing it to readFileFrom() will not read and print anything because there is no more data after the end of the file (that's the definition of the "end").
You need to either rewind the pointer using File.Seek(), or you need to close and reopen it. Obviously just rewinding is more efficient. To set the pointer to the file start:
if _, err := file.Seek(0, io.SeekStart); err != nil {
    panic(err)
}
So do this between the 2 function calls:
getLastLine(file)
if _, err := file.Seek(0, io.SeekStart); err != nil {
    panic(err)
}
readFileFrom(file)
Also note that if you would open the file twice, you would not need to rewind it, and you could also run the 2 functions concurrently without interfering with each other, because they only read the file and each os.File has its own pointer.
file1, err := os.Open("a.txt")
// handle err
defer file1.Close()

file2, err := os.Open("a.txt")
// handle err
defer file2.Close()

var wg sync.WaitGroup
wg.Add(1)
go func() {
    defer wg.Done()
    getLastLine(file1)
}()
readFileFrom(file2)
wg.Wait() // Wait for getLastLine() to complete

How to remove the last N bytes from a file

I want to delete the last N bytes from a file in Go.
Actually, this is already implemented in the os.Truncate() function. But this function takes the new size, so to use it, you first have to get the current size of the file. For that, you may use os.Stat().
Wrapping it into a function:
func truncateFile(name string, bytesToRemove int64) error {
    fi, err := os.Stat(name)
    if err != nil {
        return err
    }
    return os.Truncate(name, fi.Size()-bytesToRemove)
}
Using it to remove the last 5000 bytes:
if err := truncateFile("C:\\Test.zip", 5000); err != nil {
    fmt.Println("Error:", err)
}
Another alternative is to use the File.Truncate() method. If we have an os.File, we may also use File.Stat() to get its size.
This is how it would look:
func truncateFile(name string, bytesToRemove int64) error {
    f, err := os.OpenFile(name, os.O_RDWR, 0644)
    if err != nil {
        return err
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}
Using it is the same. This may be preferable if we're already working with the file (we have it open) and we have to truncate it. But in that case you'd want to pass the *os.File instead of its name to truncateFile().
Note: if you try to remove more bytes than the file currently has, truncateFile() will return an error.
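Such a variant might look like this (a sketch; the truncateOpenFile name is hypothetical):
func truncateOpenFile(f *os.File, bytesToRemove int64) error {
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    // Truncate changes the size of the file; it does not move the file pointer.
    return f.Truncate(fi.Size() - bytesToRemove)
}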

Trouble overwriting file content

I'm having trouble overwriting a file's content with zeros. The problem is that the very last byte of the original file remains, even when I exceed its size by 100 bytes. Does anyone have an idea what I'm missing?
func (h PostKey) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    f, err := os.Create("received.dat")
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    defer f.Close()

    _, err = io.Copy(f, r.Body)
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        return
    }

    // Retrieve filesize
    size, _ := f.Seek(0, 1)
    zeroFilled := make([]byte, size+100)
    n, err := f.WriteAt(zeroFilled, 0)
    if err != nil {
        return
    }
    fmt.Printf("Size: %d\n", size)       // prints 13
    fmt.Printf("Bytes written: %d\n", n) // prints 113
}
The problem may be occurring because the data is written to the same file (a shared resource) inside an HTTP handler, and the handler itself may be executed concurrently. You need to lock access to the file during the overwriting process. A quick solution would be:
import (
    "sync"
    // ... other packages
)

var muFile sync.Mutex

func (h PostKey) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    muFile.Lock()
    defer muFile.Unlock()
    f, err := os.Create("received.dat")
    // other statements
    // ...
}
If your server load is low, the above solution will be fine. But if your server needs to handle many requests concurrently, you need a different approach (although the rule is the same: lock access to any shared resource).
I was writing to the file and trying to overwrite it in the same context, so parts of the first write operation were still in memory and not yet written to disk. By calling f.Sync() to flush everything after copying the body's content, I was able to fix the issue.
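In the handler above, that fix might look like this (a sketch of just the relevant part):
_, err = io.Copy(f, r.Body)
if err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
}
// Sync commits the copied data to stable storage before we read
// the size and overwrite the content.
if err = f.Sync(); err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
}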

Golang, a proper way to rewind file pointer

package main

import (
    "bufio"
    "encoding/csv"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    data, err := os.Open("cc.csv")
    defer data.Close()
    if err != nil {
        log.Fatal(err)
    }

    s := bufio.NewScanner(data)
    for s.Scan() {
        fmt.Println(s.Text())
        if err := s.Err(); err != nil {
            panic(err)
        }
    }

    // Is it a proper way?
    data.Seek(0, 0)

    r := csv.NewReader(data)
    for {
        if record, err := r.Read(); err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        } else {
            fmt.Println(record)
        }
    }
}
I use two readers here to read from a CSV file.
To rewind the file I use data.Seek(0, 0). Is that a good way, or is it better to close the file and open it again before the second read?
Is it also correct to use *File as an io.Reader, or is it better to do r := ioutil.NewReader(data)?
Seeking to the beginning of the file is easiest done using File.Seek(0, 0) (or more safely using a constant: File.Seek(0, io.SeekStart)) just as you suggested, but don't forget that:
The behavior of Seek on a file opened with O_APPEND is not specified.
(This does not apply to your example though.)
Setting the pointer to the beginning of the file is always much faster than closing and reopening the file. If you need to read different, "small" parts of the file many times, alternating, then it might be profitable to open the file twice to avoid repeated seeking (worry about this only if you have performance problems).
And again, *os.File implements io.Reader, so you can use it as an io.Reader. I don't know what the ioutil.NewReader(data) you mentioned in your question is (package io/ioutil has no such function; maybe you meant bufio.NewReader()?), but it is certainly not needed to read from a file.
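If you do want buffered reads, wrapping the file is simple (a sketch, reusing the data file from above):
r := csv.NewReader(bufio.NewReader(data))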

How to verify if file has contents to marshal from ioutil.ReadFile in Go

I am trying to use a file instead of a DB to get a prototype up and running. I have a program that (1) reads existing content from the file to a map, (2) takes JSON POSTs that add content to the map, (3) on exit, writes to the file.
First, the file is not being created. Then I created an empty file. It is not being written to.
I am trying to read the file, determine if there is existing content. If there is not existing content, create a blank map. If there is existing content, unmarshal it into a new map.
func writeDB() {
    eventDBJSON, err := json.Marshal(eventDB)
    if err != nil {
        panic(err)
    }
    err2 := ioutil.WriteFile("/Users/sarah/go/dat.txt", eventDBJSON, 0777)
    if err2 != nil {
        panic(err2)
    }
}
func main() {
    dat, err := ioutil.ReadFile("/Users/sarah/go/dat.txt")
    if err != nil {
        panic(err)
    }
    if dat == nil {
        eventDB = DB{
            events: map[string]event{},
        }
    } else {
        if err2 := json.Unmarshal(dat, &eventDB); err2 != nil {
            panic(err2)
        }
    }
    router := httprouter.New()
    router.POST("/join", JoinEvent)
    router.POST("/create", CreateEvent)
    log.Fatal(http.ListenAndServe(":8080", router))
    defer writeDB()
}
There is no way for the server to ever reach defer writeDB().
http.ListenAndServe blocks, and if it did return anything, you log.Fatal that, which exits your app at that point.
You can't intercept all ways an app can exit, getting SIGKILL, machine loss of power, etc.
I'm assuming you really just want to write some code, bounce the server, and repeat.
If that's the case, then Ctrl-C is good enough.
If you want to write your file on Ctrl-C, look at the signal package.
Also, a defer on the last line of a function really has no purpose, as defer basically means "do this last".
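A minimal sketch of the os/signal approach mentioned above (reusing writeDB and router from the question; requires importing os/signal):
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
go func() {
    <-c       // wait for Ctrl-C (SIGINT)
    writeDB() // persist the map before exiting
    os.Exit(0)
}()
log.Fatal(http.ListenAndServe(":8080", router))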
You can use (*os.File).Stat() to get a file's FileInfo, which contains its size:
file, err := os.Open(filepath)
if err != nil {
    // handle error
}
fi, err := file.Stat()
if err != nil {
    // handle error
}
s := fi.Size()
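Tying this back to the question: for an empty file, ioutil.ReadFile returns an empty (non-nil) slice, so the dat == nil check never fires. Checking the length (or the size from Stat) works instead (a sketch, using the DB and event types from the question):
if len(dat) == 0 {
    // file exists but is empty: start with a blank map
    eventDB = DB{events: map[string]event{}}
}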
