Get file size given file descriptor in Go

If given a path, I would use this to get the file size:
file, _ := os.Open(path)
fi, _ := file.Stat()
fsize := fi.Size()
But if I'm only given a file descriptor, how can I get the file size?
Is there any way to do this in Go, like in C:
lseek(fd, 0, SEEK_END)

You can create a new *os.File from a file descriptor using the os.NewFile function.
You can then do it exactly the same way as in C, using Seek:
offset, err := f.Seek(0, io.SeekEnd) // os.SEEK_END is deprecated since Go 1.7
But since you have the *os.File already, you can call Stat even if it was derived directly from the file descriptor.

Try to get the file stat, then read the size from it:
fileInfo, err := file.Stat()
if err != nil {
    // handle the error
}
fmt.Println(fileInfo.Size())

Related

GoLang os.Chdir() permission denied

I am writing a program that creates a directory and then changes the working directory to the newly created directory in order to do some work:
func main() {
    err := os.Mkdir("English", 0777) // I know 777 is not good practice, first I want to get Chdir() working
    if err != nil && !os.IsExist(err) {
        log.Fatal(err)
    }
    err = os.Chdir("English")
    if err != nil {
        log.Fatal(err)
    }
}
Console output:
2023/02/05 18:15:45 chdir English: permission denied
exit status 1
Simple fix: executing the program with sudo resulted in the directory being created with the permissions as specified.

Why does writing to a deleted file not return an error in Go?

This program successfully runs even though it's writing to a deleted file. Why does this work?
package main

import (
    "fmt"
    "os"
)

func main() {
    const path = "test.txt"
    f, err := os.Create(path) // Create file
    if err != nil {
        panic(err)
    }
    err = os.Remove(path) // Delete file
    if err != nil {
        panic(err)
    }
    _, err = f.WriteString("test") // Write to deleted file
    if err != nil {
        panic(err)
    }
    err = f.Close()
    if err != nil {
        panic(err)
    }
    fmt.Printf("No errors occurred") // test.txt doesn't exist anymore
}
On Unix-like systems, when a process opens a file it gets a file descriptor, which points to an entry in the process file table, which in turn refers to an inode structure on disk. The inode keeps the file's metadata, including the location of its data.
The contents of a directory are just pairs of inode numbers and names.
If you delete a file, you merely remove one link to the inode from the directory. The inode itself still exists as long as something references it (including open file descriptors held by processes), and its data can still be read and written.
On Windows this code fails, since Windows does not allow an open file to be deleted:
panic: remove test.txt: The process cannot access the file because it is being used by another process.
goroutine 1 [running]:
main.main()
D:/tmp/main.go:18 +0x1d1
exit status 2

Write stdout stream to file

I am running an external process via exec.Command() and I want the stdout from the command to be printed as well as written to a file, in real time (similar to using tee from the command line).
I can achieve this with a scanner and a writer:
cmd := exec.Command("mycmd")
cmdStdOut, _ := cmd.StdoutPipe()
s := bufio.NewScanner(cmdStdOut)
f, _ := os.Create("stdout.log")
w := bufio.NewWriter(f)
go func() {
    for s.Scan() {
        t := s.Text()
        fmt.Println(t)
        fmt.Fprintln(w, t) // Fprintln keeps the line breaks in the file
        w.Flush()
    }
}()
Is there a more idiomatic way to do this that avoids the explicit Scan and Flush loop?
Assign a multiwriter to the command's stdout that writes to a file and to a pipe. You can then use the pipe's read end to follow the output.
This example behaves similarly to the tee tool:
package main

import (
    "io"
    "os"
    "os/exec"
)

func main() {
    var f *os.File // assumed to be open, e.g. from os.Create or os.Open
    r, w := io.Pipe()
    defer w.Close()
    cmd := exec.Command("mycmd")
    cmd.Stdout = io.MultiWriter(w, f)
    // do something with the output while cmd is running by reading from r
    go io.Copy(os.Stdout, r)
    cmd.Run()
}
Alternative with StdoutPipe:
package main

import (
    "io"
    "os"
    "os/exec"
)

func main() {
    var f *os.File // assumed to be open
    cmd := exec.Command("date")
    stdout, _ := cmd.StdoutPipe()
    go io.Copy(io.MultiWriter(f, os.Stdout), stdout)
    cmd.Run()
}
Errors are ignored for brevity. As stated in the other answers, you can use io.MultiWriter in an io.Copy, but when you are dealing with the stdout of an exec.Cmd, you need to be aware that Wait closes the pipes as soon as the command terminates, as stated by the documentation (https://golang.org/pkg/os/exec/#Cmd.StdoutPipe):
Wait will close the pipe after seeing the command exit, so most callers need not close the pipe themselves. It is thus incorrect to call Wait before all reads from the pipe have completed.
Ignoring this can lead to portions of the output not being read and therefore being lost. Instead of Run, use Start and Wait, e.g.:
package main

import (
    "io"
    "os"
    "os/exec"
)

func main() {
    cmd := exec.Command("date")
    stdout, _ := cmd.StdoutPipe()
    f, _ := os.Create("stdout.log")
    cmd.Start()
    io.Copy(io.MultiWriter(f, os.Stdout), stdout)
    cmd.Wait()
}
This will ensure everything is read from stdout and close all pipes afterwards.

Why is Go's File struct designed like this?

Go's File struct looks like this:
type File struct {
    *file
}
and the File struct's methods are also designed to receive a pointer. Why is it designed like this?
It is explained in the Go os package source code comments. The extra level of indirection means a client of the package cannot overwrite the descriptor that the finalizer will eventually close. For example, even this clobbering assignment leaves the finalizer's view of the descriptor intact:
package main

import "os"

func main() {
    f, err := os.Create("/tmp/atestfile")
    if err != nil {
        panic(err)
    }
    *f = os.File{} // zeroes only the outer wrapper
    // the inner file struct, and the finalizer attached to it,
    // still reference the original descriptor
}
Package os
go/src/os/types.go:
// File represents an open file descriptor.
type File struct {
    *file // os specific
}

go/src/os/file_plan9.go:
// file is the real representation of *File.
// The extra level of indirection ensures that no clients of os
// can overwrite this data, which could cause the finalizer
// to close the wrong file descriptor.
type file struct {
    fd      int
    name    string
    dirinfo *dirInfo // nil unless directory being read
}

go/src/os/file_unix.go:
// +build darwin dragonfly freebsd linux nacl netbsd openbsd solaris
// file is the real representation of *File.
// The extra level of indirection ensures that no clients of os
// can overwrite this data, which could cause the finalizer
// to close the wrong file descriptor.
type file struct {
    pfd      poll.FD
    name     string
    dirinfo  *dirInfo // nil unless directory being read
    nonblock bool     // whether we set nonblocking mode
}

go/src/os/file_windows.go:
// file is the real representation of *File.
// The extra level of indirection ensures that no clients of os
// can overwrite this data, which could cause the finalizer
// to close the wrong file descriptor.
type file struct {
    pfd     poll.FD
    name    string
    dirinfo *dirInfo // nil unless directory being read
}

Reading the first two bytes from a file efficiently - Golang

I'm trying to find a good way of reading the first two bytes from a file using Go.
I have some .zip files in my current directory, mixed in with other files.
I would like to loop through all the files in the directory and check if the first two bytes contain the right .zip identifier, namely 50 4B.
What would be a good way to accomplish this using the standard library without having to read the entire file?
Going through the available functions in the io package I managed to find:
func LimitReader(r Reader, n int64) Reader
which seems to fit my description: it reads from a Reader (how do I get a Reader?) but stops after n bytes. Since I'm rather new to Go, I'm not sure how to go about it.
You get the initial reader by opening the file. For just 2 bytes, though, I wouldn't use a LimitReader; reading 2 bytes with io.ReadFull is easier.
r, err := os.Open(file)
if err != nil {
    return err
}
defer r.Close()

var header [2]byte
if _, err := io.ReadFull(r, header[:]); err != nil {
    return err
}
