Trouble overwriting file content

I've got trouble overwriting a file's content with zeros. The problem is that the very last byte of the original file remains, even when I exceed its size by 100 bytes. Does anyone have an idea what I'm missing?
func (h PostKey) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    f, err := os.Create("received.dat")
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    defer f.Close()

    _, err = io.Copy(f, r.Body)
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        return
    }

    // Retrieve filesize
    size, _ := f.Seek(0, 1)
    zeroFilled := make([]byte, size+100)
    n, err := f.WriteAt(zeroFilled, 0)
    if err != nil {
        return
    }

    fmt.Printf("Size: %d\n", size)       // prints 13
    fmt.Printf("Bytes written: %d\n", n) // prints 113
}

The problem may have occurred because the data is written into the same file (a shared resource) inside an HTTP handler, and the handler may be executed concurrently. You need to lock access to the file during the overwriting process. A quick solution would be:
import (
    "sync"
    // ... other packages
)

var muFile sync.Mutex

func (h PostKey) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    muFile.Lock()
    defer muFile.Unlock()
    f, err := os.Create("received.dat")
    // other statements
    // ...
}
If your server load is low, the above solution will be fine. But if your server needs to handle a lot of requests concurrently, you need a different approach (although the rule is the same: lock access to any shared resource).
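One such approach (a sketch of one possible pattern, not part of the original answer) is to make a single goroutine own the file and have handlers hand their data to it over a channel, so no request ever touches the file directly:

package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

type PostKey struct{} // placeholder for the handler type from the question

// writeQueue funnels all file writes through a single goroutine,
// so no mutex is needed in the handler itself.
var writeQueue = make(chan []byte, 16)

func fileWriter() {
    for data := range writeQueue {
        // Each request's body replaces the file content in turn.
        if err := os.WriteFile("received.dat", data, 0644); err != nil {
            log.Println("write received.dat:", err)
        }
    }
}

func (h PostKey) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    writeQueue <- body // hand the data to the single writer goroutine
}

func main() {
    go fileWriter()
    http.ListenAndServe(":8080", PostKey{})
}

The names PostKey and received.dat are just taken from the question; the channel-based writer itself is only one of several ways to serialize access to the file.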

I was writing to the file and trying to overwrite it in the same context, so parts of the first write operation were still in memory and not yet written to disk. By using f.Sync() to flush everything after copying the body's content, I was able to fix the issue.
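In the handler above, that means adding a Sync call between the copy and the overwrite, roughly like this (a sketch; everything else stays as in the original):

_, err = io.Copy(f, r.Body)
if err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
}

// Flush the copied body to disk before measuring and overwriting it.
if err = f.Sync(); err != nil {
    w.WriteHeader(http.StatusInternalServerError)
    return
}

size, _ := f.Seek(0, io.SeekCurrent) // same as Seek(0, 1) in the original
zeroFilled := make([]byte, size+100)
_, err = f.WriteAt(zeroFilled, 0)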

Related

A file disappears when I run the program, and I can't figure out why

I'm coding in Go, and I created a file handler and a program that prints the contents of that file.
However, the file that should be created with file.Filename is deleted when I run it.
I don't know what the reason is; even when I try to debug, I can't find the answer, and googling hasn't helped either.
(64-bit Windows 10, WSL2)
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"

    "github.com/labstack/echo"
)

func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}

func readFile(filename string) string {
    data, err := ioutil.ReadFile(filename)
    checkErr(err)
    return string(data)
}

func main() {
    e := echo.New()
    e.POST("/file", func(c echo.Context) error {
        file, err := c.FormFile("file")
        checkErr(err)
        src, err := file.Open()
        checkErr(err)
        defer src.Close()
        dst, err := os.Create(file.Filename)
        checkErr(err)
        defer dst.Close()
        _, err = io.Copy(dst, src)
        checkErr(err)
        data := readFile(file.Filename)
        fmt.Println(data)
        return c.String(200, "sd")
    })
    e.Logger.Fatal(e.Start(":5000"))
}
I'm guessing that your file exists, but the code that you wrote is reading the file before the changes are "flushed to disk".
Right here:
defer dst.Close()
_, err = io.Copy(dst, src)
You should Close() or Sync() your writer as soon as possible; otherwise you may read before the write is finished. And since your readFile() function isn't reusing the file handle, you might as well just Close() it (not Sync()) immediately rather than deferring it.
Try this:
_, err = io.Copy(dst, src)
dst.Close() // close (and flush) before reading the file back
if err != nil {
    // handle the copy error
}
There could be an error while copying, but we still want to Close() the file (assuming there wasn't already an error during os.Create, os.Open, or os.OpenFile).
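Applied to the handler from the question, the change would look roughly like this (a sketch; only the placement of Close differs from the original code):

e.POST("/file", func(c echo.Context) error {
    file, err := c.FormFile("file")
    checkErr(err)
    src, err := file.Open()
    checkErr(err)
    defer src.Close()

    dst, err := os.Create(file.Filename)
    checkErr(err)

    _, err = io.Copy(dst, src)
    dst.Close() // flush and release the file before reading it back
    checkErr(err)

    data := readFile(file.Filename)
    fmt.Println(data)
    return c.String(200, "sd")
})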

golang: Monitor (and read) files open for writing

I have some .log files and want to monitor whether any data is appended to any of them, so that I can collect that data into a DB.
How do I open a file that is already open for writing, and how do I monitor it for new lines/changes?
Thank you.
I wrote this small piece of code that will monitor a file for changes and, if a change is detected, run the function you pass to it.
func watchFileAndRun(filePath string, fn func()) error {
    defer func() {
        if r := recover(); r != nil {
            logCore("ERROR", "Error: watching file:", r)
        }
    }()
    fn()
    initialStat, err := os.Stat(filePath)
    checkErr(err)
    for {
        stat, err := os.Stat(filePath)
        checkErr(err)
        if stat.Size() != initialStat.Size() || stat.ModTime() != initialStat.ModTime() {
            fn()
            initialStat, err = os.Stat(filePath)
            checkErr(err)
        }
        time.Sleep(10 * time.Second)
    }
}
To call this function, where loadEnv is a function:
go watchFileAndRun("config.json", loadEnv)
Some notes:
go watchFileAndRun() will run the routine asynchronously, i.e. in the background.
checkErr is my own function; you can simply put a standard
if err != nil { fmt.Printf("Something went wrong: %v", err) }
in its place.
The deferred function on the first line is there to stop a panic from crashing the main program if something goes wrong. You could nest it deeper if you wanted to make this function more resilient.
Hope it helps someone. It's also smaller than the fsnotify package, which, to be frank, I couldn't get working either.
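To also read the appended data (which the original question asks about), one possible extension is to remember the offset of the last read and only read the new bytes on each poll. This is only a sketch, assuming the log file is append-only (no truncation or rotation) and that the io, os, strings and time packages are imported; tailFile, handle and the interval are made-up names/values:

// Poll a log file and hand each newly appended line to handle().
func tailFile(filePath string, interval time.Duration, handle func(line string)) error {
    f, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer f.Close()

    // Start at the current end of the file.
    offset, err := f.Seek(0, io.SeekEnd)
    if err != nil {
        return err
    }

    for {
        time.Sleep(interval)

        stat, err := f.Stat()
        if err != nil {
            return err
        }
        if stat.Size() <= offset {
            continue // nothing new yet
        }

        // Read everything between the old offset and the new size.
        buf := make([]byte, stat.Size()-offset)
        if _, err := f.ReadAt(buf, offset); err != nil && err != io.EOF {
            return err
        }
        offset = stat.Size()

        for _, line := range strings.Split(strings.TrimRight(string(buf), "\n"), "\n") {
            handle(line)
        }
    }
}

It can be started the same way as the watcher above, e.g. go tailFile("app.log", 10*time.Second, saveLine), where saveLine stands in for whatever your DB collector does.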

Go. Writing []byte to file results in zero byte file

I am trying to serialize structured data to a file. I looked through some examples and came up with this construction:
func (order Order) Serialize(folder string) {
    b := bytes.Buffer{}
    e := gob.NewEncoder(&b)
    err := e.Encode(order)
    if err != nil {
        panic(err)
    }
    os.MkdirAll(folder, 0777)
    file, err := os.Create(folder + order.Id)
    if err != nil {
        panic(err)
    }
    defer file.Close()
    writer := bufio.NewWriter(file)
    n, err := writer.Write(b.Bytes())
    fmt.Println(n)
    if err != nil {
        panic(err)
    }
}
Serialize is a method that serializes its object to a file named after its Id property. I looked through the debugger: the byte buffer contains data before writing, i.e. the object is fully initialized. Even the n variable, which holds the number of bytes written, is more than a thousand, so the file shouldn't be empty at all. The file is created but it is totally empty. What's wrong?
bufio.Writer (as the package name hints) uses a buffer to cache writes. If you ever use it, you must call Writer.Flush() when you're done writing to it to ensure the buffered data gets written to the underlying io.Writer.
Also note that you can write directly to an os.File; there is no need to create a buffered writer "around" it (*os.File implements io.Writer).
Also note that you can create the gob.Encoder writing directly to the os.File, so even the bytes.Buffer is unnecessary.
Also os.MkdirAll() may fail, check its return value.
Also it's better to "concatenate" parts of a file path using filepath.Join() which takes care of extra / missing slashes at the end of folder names.
And last, it would be better to signal a failure of Serialize(), e.g. with an error return value, so the calling party has a chance to examine whether the operation succeeded and act accordingly.
So Order.Serialize() should look like this:
func (order Order) Serialize(folder string) error {
    if err := os.MkdirAll(folder, 0777); err != nil {
        return err
    }
    file, err := os.Create(filepath.Join(folder, order.Id))
    if err != nil {
        return err
    }
    defer file.Close()
    if err := gob.NewEncoder(file).Encode(order); err != nil {
        return err
    }
    return nil
}
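A caller can then react to a failed serialization, for example like this (a sketch; the Order fields and folder name are just placeholders):

order := Order{Id: "1234"} // assumes Order has at least an Id field
if err := order.Serialize("orders"); err != nil {
    log.Printf("could not serialize order %s: %v", order.Id, err)
}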

Go connection writer and reader not behaving properly: reading two writes in one read operation

I am trying to read a file from a client and then send it to a server.
It goes like this: you input send <fileName> in the client program, and then <fileName> is sent to the server. The server reads two things from the client via the TCP connection: first the command send <fileName>, and second the content of the file.
However, sometimes my program will randomly include the file content in the <fileName> string. For example, say I have a text file called xyz.txt whose content is "Hellow world". The server sometimes receives send xyz.txtHellow world; sometimes it doesn't, and everything works just fine.
I think it is a problem of synchronization or of not flushing the reader/writer buffer, but I am not quite sure.
Thanks in advance!
Client code:
func sendFileToServer(fileName string, connection net.Conn) {
    fileBuffer := make([]byte, BUFFER_SIZE)
    var err error
    file, err := os.Open(fileName) // For read access.
    lock := make(chan int)
    w := bufio.NewWriter(connection)
    go func() {
        w.Write([]byte("send " + fileName))
        w.Flush()
        lock <- 1
    }()
    <-lock
    // make a read buffer
    r := bufio.NewReader(file)
    // read file until there is an error
    for err == nil || err != io.EOF {
        // read a chunk
        n, err := r.Read(fileBuffer)
        if err != nil && err != io.EOF {
            panic(err)
        }
        if n == 0 {
            break
        }
        // write a chunk
        if _, err := w.Write(fileBuffer[:n]); err != nil {
            panic(err)
        }
    }
    file.Close()
    connection.Close()
    fmt.Println("Finished sending.")
}
Server code (connectionHandler is a goroutine that is invoked for every TCP connection request from a client):
func connectionHandler(connection net.Conn, bufferChan chan []byte, stringChan chan string) {
    buffer := make([]byte, 1024)
    _, error := connection.Read(buffer)
    if error != nil {
        fmt.Println("There is an error reading from connection", error.Error())
        stringChan <- "failed"
        return
    }
    fmt.Println("command recieved: " + string(buffer))
    if "-1" == strings.Trim(string(buffer), "\x00") {
        stringChan <- "failed"
        return
    }
    arrayOfCommands := strings.Split(string(buffer), " ")
    arrayOfCommands[1] = strings.Replace(arrayOfCommands[1], "\n", "", -1)
    fileName := strings.Trim(arrayOfCommands[1], "\x00")
    if arrayOfCommands[0] == "get" {
        fmt.Println("Sending a file " + arrayOfCommands[1])
        sendFileToClient(fileName, connection, bufferChan, stringChan)
    } else if arrayOfCommands[0] == "send" {
        fmt.Println("Getting a file " + arrayOfCommands[1])
        getFileFromClient(fileName, connection, bufferChan, stringChan)
    } else {
        _, error = connection.Write([]byte("bad command"))
    }
    fmt.Println("connectionHandler finished")
}

func getFileFromClient(fileName string, connection net.Conn, bufferChan chan []byte, stringChan chan string) { // put the file in memory
    stringChan <- "send"
    fileBuffer := make([]byte, BUFFER_SIZE)
    var err error
    r := bufio.NewReader(connection)
    for err == nil || err != io.EOF {
        // read a chunk
        n, err := r.Read(fileBuffer)
        if err != nil && err != io.EOF {
            panic(err)
        }
        if n == 0 {
            break
        }
        bufferChan <- fileBuffer[:n]
        stringChan <- fileName
    }
    connection.Close()
    return
}
TCP is a stream protocol. It doesn't have messages. The network is (within some limits we don't need to concern us about) free to send your data one byte at a time or everything at once. And even if you get lucky and the network sends your data in packets like you want them there's nothing that prevents the receive side from concatenating the packets into one buffer.
In other words: there is nothing that will make each Read call return exactly the bytes written by one specific Write call. Sometimes you get lucky; sometimes, as you noticed, you don't. If there are no errors, the only guarantee you have is that the reads from the stream will eventually return all the bytes you wrote.
You need to define a proper protocol.
This is not related to Go. Every programming language will behave this way.
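One common way to define such a protocol (a sketch, not the only option) is length-prefixed framing: prefix every message with its length, so the receiver knows exactly where one message ends and the next begins. Using only the standard encoding/binary and io packages:

// Each frame is a 4-byte big-endian length followed by that many payload bytes.
func writeFrame(w io.Writer, payload []byte) error {
    var length [4]byte
    binary.BigEndian.PutUint32(length[:], uint32(len(payload)))
    if _, err := w.Write(length[:]); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

func readFrame(r io.Reader) ([]byte, error) {
    var length [4]byte
    if _, err := io.ReadFull(r, length[:]); err != nil {
        return nil, err
    }
    payload := make([]byte, binary.BigEndian.Uint32(length[:]))
    if _, err := io.ReadFull(r, payload); err != nil {
        return nil, err
    }
    return payload, nil
}

With this, the client would send the send <fileName> command as one frame and each file chunk as its own frame, and the server would call readFrame instead of raw connection.Read calls, so the command and the file content can never run together.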

How to verify if file has contents to marshal from ioutil.ReadFile in Go

I am trying to use a file instead of a DB to get a prototype up and running. I have a program that (1) reads existing content from the file into a map, (2) takes JSON POSTs that add content to the map, and (3) on exit, writes the map back to the file.
First, the file is not being created. Then I created an empty file myself, and it is not being written to.
I am trying to read the file and determine whether there is existing content. If there is no existing content, create a blank map; if there is, unmarshal it into a new map.
func writeDB() {
    eventDBJSON, err := json.Marshal(eventDB)
    if err != nil {
        panic(err)
    }
    err2 := ioutil.WriteFile("/Users/sarah/go/dat.txt", eventDBJSON, 0777)
    if err2 != nil {
        panic(err2)
    }
}

func main() {
    dat, err := ioutil.ReadFile("/Users/sarah/go/dat.txt")
    if err != nil {
        panic(err)
    }
    if dat == nil {
        eventDB = DB{
            events: map[string]event{},
        }
    } else {
        if err2 := json.Unmarshal(dat, &eventDB); err2 != nil {
            panic(err2)
        }
    }
    router := httprouter.New()
    router.POST("/join", JoinEvent)
    router.POST("/create", CreateEvent)
    log.Fatal(http.ListenAndServe(":8080", router))
    defer writeDB()
}
There is no way for the server to ever reach defer writeDB().
http.ListenAndServe blocks, and if it ever did return, you would log.Fatal the result, which exits your app at that point.
You can't intercept every way an app can exit: receiving SIGKILL, the machine losing power, etc.
I'm assuming you really just want to write some code, bounce the server, and repeat.
If that's the case, then Ctrl-C is good enough.
If you want to write your file on Ctrl-C, look at the signal package.
Also, a defer on the last line of a function really has no purpose, since defer basically means "do this last".
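For completeness, here is a minimal sketch of the Ctrl-C approach using the os/signal package (assuming the rest of main stays as in the question and that os and os/signal are imported):

func main() {
    // ... read dat.txt and build eventDB exactly as before ...

    router := httprouter.New()
    router.POST("/join", JoinEvent)
    router.POST("/create", CreateEvent)

    // Write the map back to the file when the process receives Ctrl-C (SIGINT).
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, os.Interrupt)
    go func() {
        <-sigs
        writeDB()
        os.Exit(0)
    }()

    log.Fatal(http.ListenAndServe(":8080", router))
}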
You can use (*os.File).Stat() to get a file's FileInfo, which contains its size:
file, err := os.Open(filepath)
if err != nil {
    // handle error
}
fi, err := file.Stat()
if err != nil {
    // handle error
}
s := fi.Size()
