How to transfer multiple files using Go

I am trying to write a program in Go that has two parts: a client that uploads multiple pictures, and a server that receives them.
The server side should do the following:
1. Get the number of files that will be sent
2. Loop for every file
3. Get the filename
4. Get the file and save it
5. Go to 3
So far the server side is doing the following:
func getFileFromClient(connection net.Conn) {
    var numberOfPics int
    var err error
    var receivedBytes int64
    var fileName string
    r := bufio.NewReader(connection)
    strNumberOfPics, err := r.ReadString('\n')
    if err != nil {
        fmt.Printf("Error reading: %s\n", err)
        return
    }
    fmt.Printf("Read: %s\n", strNumberOfPics)
    strNumberOfPics = strings.Trim(strNumberOfPics, "\n")
    numberOfPics, err = strconv.Atoi(strNumberOfPics)
    if err != nil {
        fmt.Printf("Error Atoi: %s\n", err)
        panic("Atoi")
    }
    fmt.Printf("Receiving %d pics:\n", numberOfPics)
    for i := 0; i < numberOfPics; i++ {
        // Getting the file name:
        fileName, err = r.ReadString('\n')
        if err != nil {
            fmt.Printf("Error receiving: %s\n", err)
        }
        fmt.Printf("Filename: %s\n", fileName)
        fileName = strings.Trim(fileName, "\n")
        f, err := os.Create(fileName)
        defer f.Close()
        if err != nil {
            fmt.Println("Error creating file")
        }
        receivedBytes, err = io.Copy(f, connection)
        if err != nil {
            panic("Transmission error")
        }
        fmt.Printf("Transmission finished. Received: %d \n", receivedBytes)
    }
}
io.Copy works for just one file and nothing more (because it does not empty the queue, I think). I do not want to reconnect for every single file if I do not have to, but I am not sure what I can actually do about that.
Does anyone have suggestions for an existing package or method that could help? Or example code? Or am I just plain wrong, and is it a bad idea to even try this in Go?
I think it might be enough if the server could flush the connection buffer after every read so that no additional data is read and/or copied.
Really looking forward to your help, thanks in advance.
EDIT: Updated code, still not working. I think it might be the bufio.Reader.
func getFileFromClient(connection net.Conn) {
    var numberOfPics int
    var err error
    var receivedBytes int64
    var fileName string
    r := bufio.NewReader(connection)
    strNumberOfPics, err := r.ReadString('\n')
    if err != nil {
        fmt.Printf("Error reading: %s\n", err)
        return
    }
    strNumberOfPics = strings.Trim(strNumberOfPics, "\n")
    numberOfPics, err = strconv.Atoi(strNumberOfPics)
    if err != nil {
        fmt.Printf("Error Atoi: %s\n", err)
        panic("Atoi")
    }
    fmt.Printf("Receiving %d pics:\n", numberOfPics)
    for i := 0; i < numberOfPics; i++ {
        // Getting the file name:
        fileName, err = r.ReadString('\n')
        if err != nil {
            fmt.Printf("Error receiving: %s\n", err)
        }
        fileName = strings.Trim(fileName, "\n")
        fmt.Printf("Filename: %s\n", fileName)
        f, err := os.Create(fileName)
        defer f.Close()
        if err != nil {
            fmt.Println("Error creating file")
        }
        // Get the file size
        strFileSize, err := r.ReadString('\n')
        if err != nil {
            fmt.Printf("Read size error %s\n", err)
            panic("Read size")
        }
        strFileSize = strings.Trim(strFileSize, "\n")
        fileSize, err := strconv.Atoi(strFileSize)
        if err != nil {
            fmt.Printf("Error size Atoi: %s\n", err)
            panic("size Atoi")
        }
        fmt.Printf("Size of pic: %d\n", fileSize)
        receivedBytes, err = io.CopyN(f, connection, int64(fileSize))
        if err != nil {
            fmt.Printf("Transmission error: %s\n", err)
            panic("Transmission error")
        }
        fmt.Printf("Transmission finished. Received: %d \n", receivedBytes)
    }
}
EDIT 2: I did not get this solution to work; I am pretty sure it is because I used bufio. I did, however, get it to work by transmitting a single zip file with io.Copy. Another solution that worked was to transmit a zip file over HTTP. If you are stuck trying something similar and need help, feel free to send me a message. Thanks to all of you for your help.

Keeping your implementation as it is, the thing you're missing is that io.Copy() reads from the source until it hits EOF, so it will read all the remaining images in one go.
Also, the client must send, for each image, its size in bytes (you could do that after sending the name).
In the server, just read the size and then use io.CopyN() to read that exact number of bytes, as in the sketch below.
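Here is a minimal sketch of that server loop (receiveFiles is a hypothetical name; the wire format it assumes is one count line, then per file a name line, a size line, and the raw bytes). The one subtle point: io.CopyN must read from the same bufio.Reader that ReadString used, not from the raw connection, otherwise any file bytes already sitting in the reader's buffer are skipped:
// Sketch only: wire format is "<count>\n" then, per file, "<name>\n<size>\n<size raw bytes>".
// Uses bufio, io, net, os, strconv and strings.
func receiveFiles(connection net.Conn) error {
    r := bufio.NewReader(connection)
    countLine, err := r.ReadString('\n')
    if err != nil {
        return err
    }
    count, err := strconv.Atoi(strings.TrimSpace(countLine))
    if err != nil {
        return err
    }
    for i := 0; i < count; i++ {
        name, err := r.ReadString('\n')
        if err != nil {
            return err
        }
        sizeLine, err := r.ReadString('\n')
        if err != nil {
            return err
        }
        size, err := strconv.ParseInt(strings.TrimSpace(sizeLine), 10, 64)
        if err != nil {
            return err
        }
        f, err := os.Create(strings.TrimSpace(name))
        if err != nil {
            return err
        }
        // Crucial: copy from r, not from connection, so bytes already
        // buffered by ReadString are not lost.
        _, err = io.CopyN(f, r, size)
        f.Close()
        if err != nil {
            return err
        }
    }
    return nil
}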
EDIT: as a matter of fact, you could also do things the way you were doing them and send the images in parallel instead of serially; that would mean opening a new connection for each file transfer and then reading the whole file, without needing to send the number of images or their sizes.
In case you want an alternative, a good option would be using good ol' HTTP and multipart requests. There's the built-in mime/multipart package that allows you to do file transfers over HTTP. Of course, that would mean you'd have to rewrite your program.
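If you go the HTTP route, the client side could look roughly like this sketch (uploadFiles, the URL and the "pics" field name are made up for illustration):
// Sketch only: posts the given files as one multipart/form-data request.
// Uses bytes, io, mime/multipart, net/http, os and path/filepath.
func uploadFiles(url string, paths []string) error {
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return err
        }
        part, err := w.CreateFormFile("pics", filepath.Base(p))
        if err != nil {
            f.Close()
            return err
        }
        if _, err := io.Copy(part, f); err != nil {
            f.Close()
            return err
        }
        f.Close()
    }
    if err := w.Close(); err != nil { // writes the closing boundary
        return err
    }
    resp, err := http.Post(url, w.FormDataContentType(), &body)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return nil
}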

My suggestion is to zip all the images you want to transfer and then send them as a single multipart POST request. That way you have a standard way of meeting all your acceptance criteria.
You can easily zip multiple files using https://golang.org/pkg/archive/zip/
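For example, a minimal sketch that packs a list of files into one archive (zipFiles and its parameters are just illustrative names):
// Sketch only: writes the given files into a single zip archive at dst.
// Uses archive/zip, io, os and path/filepath.
func zipFiles(dst string, paths []string) error {
    out, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer out.Close()
    zw := zip.NewWriter(out)
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return err
        }
        w, err := zw.Create(filepath.Base(p))
        if err != nil {
            f.Close()
            return err
        }
        if _, err := io.Copy(w, f); err != nil {
            f.Close()
            return err
        }
        f.Close()
    }
    // Close writes the central directory; without it the archive is invalid.
    return zw.Close()
}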

Related

Go WriteString function panicking?

func FileFill(filename string) error {
    f, err := os.Open("file.txt")
    if err != nil {
        panic("File not opened")
    }
    defer f.Close()
    for i := 0; i < 10; i++ {
        // I know this should have some error checking here
        f.WriteString("some text \n")
    }
    return nil
}
Hi, I'm new to learning Go and I've been trying out some small use cases to learn it a bit better. I made this function to fill 10 lines of a file with "some text". When I tried this with error checking, the program panicked at the WriteString line. Am I misunderstanding something fundamental here? I looked at the documentation and I can't figure out why it doesn't like this. Thanks.
You need to open the file with a function that gives write or append permission:
package main

import "os"

func main() {
    f, err := os.Create("file.txt")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for range [10]struct{}{} {
        f.WriteString("some text\n")
    }
}
https://golang.org/pkg/os#Create
// Choose the permissions you want with os.OpenFile flags
file, err := os.OpenFile(path, os.O_RDWR, 0644)
// or create a new file if it does not exist
file, err = os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0755)
// or append new data to the file with the O_APPEND flag (combined with a write flag)
file, err = os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0755)
docs: https://pkg.go.dev/os#OpenFile
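For instance, a minimal sketch of appending a line with the error checked (the file name and permission bits are just examples):
f, err := os.OpenFile("file.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    panic(err)
}
defer f.Close()
if _, err := f.WriteString("some text\n"); err != nil {
    panic(err)
}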

How to remove the last N bytes from a file

I want to delete the last N bytes from a file in Go.
Actually, this is already implemented in the os.Truncate() function. But this function takes the new size, so to use it you first have to get the current size of the file. For that, you may use os.Stat().
Wrapping it into a function:
func truncateFile(name string, bytesToRemove int64) error {
    fi, err := os.Stat(name)
    if err != nil {
        return err
    }
    return os.Truncate(name, fi.Size()-bytesToRemove)
}
Using it to remove the last 5000 bytes:
if err := truncateFile("C:\\Test.zip", 5000); err != nil {
    fmt.Println("Error:", err)
}
Another alternative is to use the File.Truncate() method for that. If we already have an os.File, we may also use File.Stat() to get its size.
This is how it would look:
func truncateFile(name string, bytesToRemove int64) error {
    f, err := os.OpenFile(name, os.O_RDWR, 0644)
    if err != nil {
        return err
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}
Using it is the same. This may be preferable if we're already working on a file (we have it open) and we need to truncate it, but in that case you'd want to pass the os.File instead of its name to truncateFile().
Note: if you try to remove more bytes than the file currently has, truncateFile() will return an error.
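If you do pass the open file, the variant could look like this sketch (truncateOpenFile is just an illustrative name):
// Sketch only: truncates an already-open file by bytesToRemove bytes.
func truncateOpenFile(f *os.File, bytesToRemove int64) error {
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    return f.Truncate(fi.Size() - bytesToRemove)
}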

Go. Writing []byte to file results in zero byte file

I am trying to serialize structured data to a file. I looked through some examples and came up with this construction:
func (order Order) Serialize(folder string) {
    b := bytes.Buffer{}
    e := gob.NewEncoder(&b)
    err := e.Encode(order)
    if err != nil {
        panic(err)
    }
    os.MkdirAll(folder, 0777)
    file, err := os.Create(folder + order.Id)
    if err != nil {
        panic(err)
    }
    defer file.Close()
    writer := bufio.NewWriter(file)
    n, err := writer.Write(b.Bytes())
    fmt.Println(n)
    if err != nil {
        panic(err)
    }
}
Serialize is a method that serializes its object to a file named after its Id property. I stepped through the debugger: the byte buffer contains data before writing, so the object is fully initialized. Even the n variable, which holds the number of bytes written, is more than a thousand, so the file shouldn't be empty at all. The file is created but it is totally empty. What's wrong?
bufio.Writer (as the package name hints) uses a buffer to cache writes. If you ever use it, you must call Writer.Flush() when you're done writing to it to ensure the buffered data gets written to the underlying io.Writer.
Also note that you can directly write to an os.File, no need to create a buffered writer "around" it. (*os.File implements io.Writer).
Also note that you can create the gob.Encoder directly on the os.File, so even the bytes.Buffer is unnecessary.
Also os.MkdirAll() may fail, check its return value.
Also it's better to "concatenate" parts of a file path using filepath.Join() which takes care of extra / missing slashes at the end of folder names.
And last, it would be better to signal the failure of Serialize(), e.g. with an error return value, so the caller has a chance to examine whether the operation succeeded and act accordingly.
So Order.Serialize() should look like this:
func (order Order) Serialize(folder string) error {
    if err := os.MkdirAll(folder, 0777); err != nil {
        return err
    }
    file, err := os.Create(filepath.Join(folder, order.Id))
    if err != nil {
        return err
    }
    defer file.Close()
    if err := gob.NewEncoder(file).Encode(order); err != nil {
        return err
    }
    return nil
}
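Calling it then looks something like this (the folder path and the log call are just examples):
if err := order.Serialize("/tmp/orders"); err != nil {
    log.Println("serializing order failed:", err)
}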

Go connection writer and reader not behaving properly, reading 2 writes in one read operation

I am trying to read a file on the client and then send it to the server.
It goes like this: you input send <fileName> in the client program, and <fileName> is then sent to the server. The server reads two things from the client via the TCP connection: first the command send <fileName>, and second the content of the file.
However, sometimes my program will randomly include the file content in the <fileName> string. For example, say I have a text file called xyz.txt whose content is "Hellow world". The server sometimes receives send xyz.txtHellow world. Sometimes it doesn't, and it works just fine.
I think it is a problem of synchronization or of not flushing the reader/writer buffer, but I am not quite sure.
Thanks in advance!
Client code:
func sendFileToServer(fileName string, connection net.Conn) {
    fileBuffer := make([]byte, BUFFER_SIZE)
    var err error
    file, err := os.Open(fileName) // For read access.
    lock := make(chan int)
    w := bufio.NewWriter(connection)
    go func() {
        w.Write([]byte("send " + fileName))
        w.Flush()
        lock <- 1
    }()
    <-lock
    // make a read buffer
    r := bufio.NewReader(file)
    // read file until there is an error
    for err == nil || err != io.EOF {
        // read a chunk
        n, err := r.Read(fileBuffer)
        if err != nil && err != io.EOF {
            panic(err)
        }
        if n == 0 {
            break
        }
        // write a chunk
        if _, err := w.Write(fileBuffer[:n]); err != nil {
            panic(err)
        }
    }
    file.Close()
    connection.Close()
    fmt.Println("Finished sending.")
}
Server code: (connectionHandler is a goroutine that is invoked for every TCP connection request from client)
func connectionHandler(connection net.Conn, bufferChan chan []byte, stringChan chan string) {
    buffer := make([]byte, 1024)
    _, error := connection.Read(buffer)
    if error != nil {
        fmt.Println("There is an error reading from connection", error.Error())
        stringChan <- "failed"
        return
    }
    fmt.Println("command recieved: " + string(buffer))
    if "-1" == strings.Trim(string(buffer), "\x00") {
        stringChan <- "failed"
        return
    }
    arrayOfCommands := strings.Split(string(buffer), " ")
    arrayOfCommands[1] = strings.Replace(arrayOfCommands[1], "\n", "", -1)
    fileName := strings.Trim(arrayOfCommands[1], "\x00")
    if arrayOfCommands[0] == "get" {
        fmt.Println("Sending a file " + arrayOfCommands[1])
        sendFileToClient(fileName, connection, bufferChan, stringChan)
    } else if arrayOfCommands[0] == "send" {
        fmt.Println("Getting a file " + arrayOfCommands[1])
        getFileFromClient(fileName, connection, bufferChan, stringChan)
    } else {
        _, error = connection.Write([]byte("bad command"))
    }
    fmt.Println("connectionHandler finished")
}

func getFileFromClient(fileName string, connection net.Conn, bufferChan chan []byte, stringChan chan string) { // put the file in memory
    stringChan <- "send"
    fileBuffer := make([]byte, BUFFER_SIZE)
    var err error
    r := bufio.NewReader(connection)
    for err == nil || err != io.EOF {
        // read a chunk
        n, err := r.Read(fileBuffer)
        if err != nil && err != io.EOF {
            panic(err)
        }
        if n == 0 {
            break
        }
        bufferChan <- fileBuffer[:n]
        stringChan <- fileName
    }
    connection.Close()
    return
}
TCP is a stream protocol. It doesn't have messages. The network is (within some limits we don't need to concern us about) free to send your data one byte at a time or everything at once. And even if you get lucky and the network sends your data in packets like you want them there's nothing that prevents the receive side from concatenating the packets into one buffer.
In other words: there is nothing that will make each Read call return as many bytes as you wrote with some specific Write calls. You sometimes get lucky, sometimes, as you noticed, you don't get lucky. If there are no errors, all the reads you do from the stream will return all the bytes you wrote, that's the only guarantee you have.
You need to define a proper protocol.
This is not related to Go. Every programming language will behave this way.
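For example, one common (but by no means the only) way to frame messages on a TCP stream is a fixed-size length prefix; here is a sketch using encoding/binary (writeMsg/readMsg are illustrative names):
// Sketch only: 4-byte big-endian length prefix, then the payload.
// Uses encoding/binary and io.
func writeMsg(w io.Writer, payload []byte) error {
    if err := binary.Write(w, binary.BigEndian, uint32(len(payload))); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

// readMsg reads exactly one message written by writeMsg.
func readMsg(r io.Reader) ([]byte, error) {
    var n uint32
    if err := binary.Read(r, binary.BigEndian, &n); err != nil {
        return nil, err
    }
    buf := make([]byte, n)
    if _, err := io.ReadFull(r, buf); err != nil {
        return nil, err
    }
    return buf, nil
}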

How to verify if file has contents to marshal from ioutil.ReadFile in Go

I am trying to use a file instead of a DB to get a prototype up and running. I have a program that (1) reads existing content from the file to a map, (2) takes JSON POSTs that add content to the map, (3) on exit, writes to the file.
First, the file is not being created. Then I created an empty file. It is not being written to.
I am trying to read the file, determine if there is existing content. If there is not existing content, create a blank map. If there is existing content, unmarshal it into a new map.
func writeDB() {
    eventDBJSON, err := json.Marshal(eventDB)
    if err != nil {
        panic(err)
    }
    err2 := ioutil.WriteFile("/Users/sarah/go/dat.txt", eventDBJSON, 0777)
    if err2 != nil {
        panic(err2)
    }
}
func main() {
    dat, err := ioutil.ReadFile("/Users/sarah/go/dat.txt")
    if err != nil {
        panic(err)
    }
    if dat == nil {
        eventDB = DB{
            events: map[string]event{},
        }
    } else {
        if err2 := json.Unmarshal(dat, &eventDB); err2 != nil {
            panic(err2)
        }
    }
    router := httprouter.New()
    router.POST("/join", JoinEvent)
    router.POST("/create", CreateEvent)
    log.Fatal(http.ListenAndServe(":8080", router))
    defer writeDB()
}
There is no way for the server to ever reach defer writeDB().
http.ListenAndServe blocks, and if it did return anything, you log.Fatal that, which exits your app at that point.
You can't intercept all ways an app can exit, getting SIGKILL, machine loss of power, etc.
I'm assuming you really just want to write some code, bounce the server, and repeat.
If that's the case, then Ctrl-C is good enough.
If you want to write your file on Ctrl-C, look at the signal package.
Also, defer on the last line of a function really has no purpose as defer basically means "do this last".
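A rough sketch of the signal approach (catching Ctrl-C with os/signal and saving before exit; this is just one way to wire it into the existing main):
// Requires the "os" and "os/signal" imports.
go func() {
    c := make(chan os.Signal, 1)
    signal.Notify(c, os.Interrupt) // Ctrl-C
    <-c
    writeDB()
    os.Exit(0)
}()
log.Fatal(http.ListenAndServe(":8080", router))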
You can use (*os.File).Stat() to get a file's FileInfo, which contains its size:
file, err := os.Open(filepath)
if err != nil {
    // handle error
}
fi, err := file.Stat()
if err != nil {
    // handle error
}
s := fi.Size()
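You can then branch on that size; for example, continuing the snippet above and reusing the poster's eventDB/DB types (this part is only a sketch):
if s == 0 {
    // empty (or freshly created) file: start with a fresh map
    eventDB = DB{events: map[string]event{}}
} else {
    dat, err := ioutil.ReadAll(file)
    if err != nil {
        panic(err)
    }
    if err := json.Unmarshal(dat, &eventDB); err != nil {
        panic(err)
    }
}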
