Description

Hi, I'm hitting a roadblock trying to get a file stream from a client sending a file via multipart. I would like to accept a multipart upload and stream it to S3 (which takes an io.Reader to upload) without saving the file to disk (or keeping a full copy in memory). So far, I use multipart.FileHeader and bind the request directly, but FileHeader.Open() may save the file to a temp location if it is too big. Is there a way to do this?

My understanding is that MustBindWith will instantiate the multipart.FileHeader, which reads the whole upload: it keeps the file in memory if it is under the memory limit (10 MB) and saves it to a temp file otherwise. Either way, it is not a direct stream from the client.

How to reproduce

package main

import (
    "github.com/gin-gonic/gin"
    "github.com/gin-gonic/gin/binding"
    "mime/multipart"
)

type FileUploadParam struct {
    File multipart.FileHeader `form:"file" binding:"required"`
}

func Upload(c *gin.Context) {
    var p FileUploadParam
    if err := c.MustBindWith(&p, binding.FormMultipart); err == nil {
        file, err := p.File.Open()
        if err != nil {
            return
        }
        defer file.Close()
        // file may or may not be in memory
        // I can pass file to s3.PutObject since it is an io.Reader, but it is
        // read back from local memory (or a temp file), not streamed from the client
    }
}
func main() {
    g := gin.Default()
    g.POST("/upload", Upload)
    g.Run(":9000")
}

Comment From: kishaningithub

@thinkerou / @appleboy Any thoughts?

Comment From: kamikazechaser

AFAIK, no S3 client implementation supports direct streaming from the client. The S3 library itself will buffer the file internally and then upload it (e.g. see the minio PutObject implementation).

The GC usually frees that memory on the next collection, so it is not a big issue in practice.