I'm getting the last line of a text file, and then trying to read the file.
get last line:
func getLastLine(file *os.File) (result int) {
    s := bufio.NewScanner(file)
    result = 0
    for s.Scan() {
        result++
    }
    err := s.Err()
    if err != nil {
        log.Fatal(err)
    }
    return
}
read file:
func readFileFrom(file *os.File) {
    s := bufio.NewScanner(file)
    for s.Scan() {
        fmt.Println(s.Text())
    }
    err := s.Err()
    if err != nil {
        log.Fatal(err)
    }
}
If I write this in main.go:
getLastLine(file)
readFileFrom(file)
It will not execute the block:
for s.Scan() {
    fmt.Println(s.Text())
}
If I remove the line getLastLine(file), the reading works as expected.
I think it's because 2 Scanners are accessing the same file.
An os.File maintains the position at which the next read or write operation will take place. Reading from or writing to the file advances this position.
If you use a single file, passing it to getLastLine() will read it till its end, so its pointer will point to the end of the file. Now passing it to readFileFrom() will not read and print anything because there is no more data after the end of the file (that's the definition of the "end").
You need to either rewind the pointer using File.Seek(), or you need to close and reopen it. Obviously just rewinding is more efficient. To set the pointer to the file start:
if _, err := file.Seek(0, io.SeekStart); err != nil {
    panic(err)
}
So do this between the 2 function calls:
getLastLine(file)
if _, err := file.Seek(0, io.SeekStart); err != nil {
    panic(err)
}
readFileFrom(file)
Also note that if you opened the file twice, you would not need to rewind it, and you could even run the 2 functions concurrently without them interfering with each other, because they only read the file and each os.File has its own pointer.
file1, err := os.Open("a.txt")
// handle err
defer file1.Close()

file2, err := os.Open("a.txt")
// handle err
defer file2.Close()

var wg sync.WaitGroup
wg.Add(1)
go func() {
    defer wg.Done()
    getLastLine(file1)
}()

readFileFrom(file2)

wg.Wait() // Wait for getLastLine() to complete
I'm coding in Go, and I created a file upload handler and a program that prints the contents of that file.
However, the file that should be created with file.Filename is deleted when I run it.
I don't know what the reason is; debugging hasn't turned up an answer, and googling hasn't either.
(64-bit Windows 10 (WSL2))
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"

    "github.com/labstack/echo"
)

func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}

func readFile(filename string) string {
    data, err := ioutil.ReadFile(filename)
    checkErr(err)
    return string(data)
}

func main() {
    e := echo.New()

    e.POST("/file", func(c echo.Context) error {
        file, err := c.FormFile("file")
        checkErr(err)

        src, err := file.Open()
        checkErr(err)
        defer src.Close()

        dst, err := os.Create(file.Filename)
        checkErr(err)
        defer dst.Close()

        _, err = io.Copy(dst, src)
        checkErr(err)

        data := readFile(file.Filename)
        fmt.Println(data)

        return c.String(200, "sd")
    })

    e.Logger.Fatal(e.Start(":5000"))
}
I'm guessing that your file exists, but the code that you wrote is reading the file before the changes are "flushed to disk".
Right here:
defer dst.Close()
_, err = io.Copy(dst, src)
You should Close() or Sync() your writer as soon as possible, otherwise you may read before the write is finished. And since your readFile() function isn't re-using the open file handle, you might as well just Close() it (rather than Sync()) immediately, not deferred.
Try this:
_, err = io.Copy(dst, src)
dst.Close()
if err != nil {
    // handle the error from io.Copy here
}
There could be an error while copying, but we still want to Close() the file (provided there wasn't an error from the os.Create, os.Open, or os.OpenFile call that produced it).
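If you would rather keep dst open (for example to write more to it later), the Sync() option mentioned above would look roughly like this; a sketch reusing dst, src, checkErr and file.Filename from the question:
_, err = io.Copy(dst, src)
checkErr(err)

// Flush the written data to disk before reading the file back.
checkErr(dst.Sync())

data := readFile(file.Filename)
fmt.Println(data)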
I want to delete the last N bytes from a file in Go.
Actually, this is already implemented in the os.Truncate() function. But this function takes the new size, so to use it you first have to get the current size of the file. For that, you may use os.Stat().
Wrapping it into a function:
func truncateFile(name string, bytesToRemove int64) error {
    fi, err := os.Stat(name)
    if err != nil {
        return err
    }
    return os.Truncate(name, fi.Size()-bytesToRemove)
}
Using it to remove the last 5000 bytes:
if err := truncateFile("C:\\Test.zip", 5000); err != nil {
    fmt.Println("Error:", err)
}
Another alternative is to use the File.Truncate() method for that. If we have an os.File, we may also use File.Stat() to get its size.
This is how it would look:
func truncateFile(name string, bytesToRemove int64) error {
    f, err := os.OpenFile(name, os.O_RDWR, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}
Using it is the same. This may be preferable if we're already working on the file (we have it opened) and we have to truncate it. But in that case you'd want to pass the *os.File instead of its name to truncateFile().
Note: if you try to remove more bytes than the file currently has, truncateFile() will return an error.
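For illustration, a sketch of that variant taking an already opened file could look like this (the name and signature are hypothetical, not from the original answer):
// truncateOpenFile removes the last bytesToRemove bytes from an already
// opened file. It assumes f was opened with write access (e.g. os.O_RDWR).
func truncateOpenFile(f *os.File, bytesToRemove int64) error {
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}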
I'm trying to serialize structured data to a file. I looked through some examples and came up with this construction:
func (order Order) Serialize(folder string) {
    b := bytes.Buffer{}
    e := gob.NewEncoder(&b)
    err := e.Encode(order)
    if err != nil {
        panic(err)
    }

    os.MkdirAll(folder, 0777)
    file, err := os.Create(folder + order.Id)
    if err != nil {
        panic(err)
    }
    defer file.Close()

    writer := bufio.NewWriter(file)
    n, err := writer.Write(b.Bytes())
    fmt.Println(n)
    if err != nil {
        panic(err)
    }
}
Serialize is a method that serializes its object to a file named after its Id property. I looked through the debugger: the byte buffer contains data before writing, i.e. the object is fully initialized. Even the n variable representing the number of written bytes is more than a thousand, so the file shouldn't be empty at all. Yet the file is created but it is totally empty. What's wrong?
bufio.Writer (as the package name hints) uses a buffer to cache writes. If you ever use it, you must call Writer.Flush() when you're done writing to it to ensure the buffered data gets written to the underlying io.Writer.
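So the minimal fix to the code above, keeping the buffered writer, would be to flush it before returning; a sketch:
writer := bufio.NewWriter(file)
n, err := writer.Write(b.Bytes())
fmt.Println(n)
if err != nil {
    panic(err)
}
// Push the buffered data to the underlying file.
if err := writer.Flush(); err != nil {
    panic(err)
}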
Also note that you can directly write to an os.File, no need to create a buffered writer "around" it. (*os.File implements io.Writer).
Also note that you can create the gob.Encoder so that it writes directly to the os.File, so even the bytes.Buffer is unnecessary.
Also os.MkdirAll() may fail, check its return value.
Also it's better to "concatenate" parts of a file path using filepath.Join() which takes care of extra / missing slashes at the end of folder names.
And last, it would be better to signal the failure of Serialize(), e.g. with an error return value, so the caller has a chance to check whether the operation succeeded and act accordingly.
So Order.Serialize() should look like this:
func (order Order) Serialize(folder string) error {
    if err := os.MkdirAll(folder, 0777); err != nil {
        return err
    }

    file, err := os.Create(filepath.Join(folder, order.Id))
    if err != nil {
        return err
    }
    defer file.Close()

    if err := gob.NewEncoder(file).Encode(order); err != nil {
        return err
    }

    return nil
}
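Calling it could then look like this (a hedged example, assuming an Order value named order and an arbitrary folder name):
if err := order.Serialize("orders"); err != nil {
    log.Printf("failed to serialize order %s: %v", order.Id, err)
}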
I am trying to use a file instead of a DB to get a prototype up and running. I have a program that (1) reads existing content from the file to a map, (2) takes JSON POSTs that add content to the map, (3) on exit, writes to the file.
First, the file is not being created. Then I created an empty file. It is not being written to.
I am trying to read the file, determine if there is existing content. If there is not existing content, create a blank map. If there is existing content, unmarshal it into a new map.
func writeDB() {
    eventDBJSON, err := json.Marshal(eventDB)
    if err != nil {
        panic(err)
    }

    err2 := ioutil.WriteFile("/Users/sarah/go/dat.txt", eventDBJSON, 0777)
    if err2 != nil {
        panic(err2)
    }
}
func main() {
    dat, err := ioutil.ReadFile("/Users/sarah/go/dat.txt")
    if err != nil {
        panic(err)
    }

    if dat == nil {
        eventDB = DB{
            events: map[string]event{},
        }
    } else {
        if err2 := json.Unmarshal(dat, &eventDB); err2 != nil {
            panic(err2)
        }
    }

    router := httprouter.New()
    router.POST("/join", JoinEvent)
    router.POST("/create", CreateEvent)

    log.Fatal(http.ListenAndServe(":8080", router))
    defer writeDB()
}
There is no way for the server to ever reach defer writeDB().
http.ListenAndServe blocks, and if it ever does return (with an error), you log.Fatal that, which exits your app at that point.
You can't intercept every way an app can exit: receiving SIGKILL, the machine losing power, etc.
I'm assuming you really just want to write some code, bounce the server, and repeat.
If that's the case, then Ctrl-C is good enough.
If you want to write your file on Ctrl-C, look at the signal package.
Also, a defer on the last line of a function has no real purpose, as defer basically means "run this just before the surrounding function returns".
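A minimal sketch of that signal-based approach, assuming the writeDB() and router from the question (the server runs in a goroutine and the main goroutine waits for Ctrl-C):
// Register for SIGINT (Ctrl-C) notifications.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, os.Interrupt)

// Run the server in the background so main can wait for the signal.
go func() {
    log.Fatal(http.ListenAndServe(":8080", router))
}()

<-sigs    // block until Ctrl-C
writeDB() // persist the map before exiting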
You can use (*os.File).Stat() to get a file's FileInfo, which contains its size:
file, err := os.Open(filepath)
if err != nil {
    // handle error
}

fi, err := file.Stat()
if err != nil {
    // handle error
}

s := fi.Size()
I'm trying to write the output of the statements below into a text file, but I can't seem to find out whether there is a printf-style function that writes directly to a text file. For example, if the code below produces the result [5 1 2 4 0 3], I would want to write that into a text file for storage and persistence. Any ideas, please?
The code whose output I want to go to the text file:
//choose random number for recipe
r := rand.New(rand.NewSource(time.Now().UnixNano()))
i := r.Perm(5)
fmt.Printf("%v\n", i)
fmt.Printf("%d\n", i[0])
fmt.Printf("%d\n", i[1])
You can use fmt.Fprintf together with an io.Writer, which would represent a handle to your file.
Here is a simple example:
func check(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    f, err := os.Create("/tmp/yourfile")
    check(err)
    defer f.Close()

    w := bufio.NewWriter(f)

    // choose random number for recipe
    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    i := r.Perm(5)

    _, err = fmt.Fprintf(w, "%v\n", i)
    check(err)
    _, err = fmt.Fprintf(w, "%d\n", i[0])
    check(err)
    _, err = fmt.Fprintf(w, "%d\n", i[1])
    check(err)

    w.Flush()
}
Note that I have used panic() here just for the sake of brevity; in a real-life scenario you should handle errors appropriately (which in most cases means something other than exiting the program, which is what panic() does).
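For illustration only, a variant that returns errors instead of panicking might look like this (writePerm is a hypothetical helper, not part of the original code):
// writePerm writes a random permutation to the named file and
// propagates any error to the caller instead of panicking.
func writePerm(path string) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()

    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    if _, err := fmt.Fprintf(f, "%v\n", r.Perm(5)); err != nil {
        return err
    }
    return nil
}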
This example will write the values into the output.txt file.
package main

import (
    "bufio"
    "fmt"
    "math/rand"
    "os"
    "time"
)

func main() {
    file, err := os.OpenFile("output.txt", os.O_WRONLY|os.O_CREATE, 0666)
    if err != nil {
        fmt.Println("File does not exist or cannot be created")
        os.Exit(1)
    }
    defer file.Close()

    w := bufio.NewWriter(file)

    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    i := r.Perm(5)

    fmt.Fprintf(w, "%v\n", i)
    w.Flush()
}
Use the os package to create the file and then pass it to fmt.Fprintf:
file, fileErr := os.Create("file")
if fileErr != nil {
    fmt.Println(fileErr)
    return
}

fmt.Fprintf(file, "%v\n", i)
This should write to the file.