GORM Playground Link

https://github.com/go-gorm/playground/pull/450

Description

Error when preloading a large dataset in MySQL. The error is: Error 1390: Prepared statement contains too many placeholders.

It would be best if large WHERE IN queries were chunked.
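
For context, the MySQL client/server protocol stores the number of placeholders in a prepared statement as a 16-bit value, so anything above 65,535 placeholders fails with Error 1390. A minimal sketch of the kind of chunking meant here (the Pet model, user_id column and chunk size are illustrative assumptions, not code from the playground PR):

import "gorm.io/gorm"

type Pet struct {
    ID     uint
    UserID uint
    Name   string
}

// Split the parent IDs so each generated IN (...) list stays well below
// MySQL's 65,535 placeholder limit.
const chunkSize = 60000

func findPetsChunked(db *gorm.DB, userIDs []uint) ([]Pet, error) {
    var all []Pet
    for start := 0; start < len(userIDs); start += chunkSize {
        end := start + chunkSize
        if end > len(userIDs) {
            end = len(userIDs)
        }
        var pets []Pet
        if err := db.Where("user_id IN ?", userIDs[start:end]).Find(&pets).Error; err != nil {
            return nil, err
        }
        all = append(all, pets...)
    }
    return all, nil
}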

Comment From: jinzhu

The value needs to be changed on the DB server.

Comment From: christianWilling

This is not possible in MySQL or MariaDB.

Comment From: imthaghost

Was this resolved for MySQL?

Comment From: christianWilling

No, my current solution is to fetch everything with relations in chunks of up to 63000. Hope it works for you too; there seems to be no interest in fixing this.

Comment From: imthaghost

That's the only workaround I have been able to find online. Mind sharing how you are doing it with the GORM playground example you have? I believe it would look something like:

_ = DB.Preload("Friends").FindInBatches(&res, 63000, func(tx *gorm.DB, batch int) error {
    return nil
})

Or are you doing something else? I kinda have no idea what to do at this point, lmao.

It sucks there is no interest in fixing this.

Comment From: christianWilling

In most cases I ran into RAM limitations with lots of requests, so I do this:

import (
    "net/http"

    jsoniter "github.com/json-iterator/go"
    "github.com/pkg/errors"
    "gorm.io/gorm"
)

// QueryRelationsAndReturn streams the results of q as a single JSON array,
// fetching them in chunks so no single query (including its preloads) hits
// the placeholder limit and the full result set never sits in RAM at once.
func QueryRelationsAndReturn[T any](q *gorm.DB, w http.ResponseWriter, hasLimit bool) error {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)

    var json = jsoniter.ConfigCompatibleWithStandardLibrary
    enc := json.NewEncoder(w)

    _, err := w.Write([]byte("["))
    if err != nil {
        return errors.Wrap(err, "write json open")
    }

    do := true
    r := 0
    chunk := 30000
    for do {
        var sOut []T

        // Page through the result set unless the caller already set a limit.
        if !hasLimit {
            q.Limit(chunk).Offset(chunk * r)
        }

        err := q.Find(&sOut).Error
        if err != nil {
            return errors.Wrap(err, "fetch")
        }

        sOutL := len(sOut)

        // Separate this chunk from the previous one, but only if it actually
        // contains rows; otherwise we would emit a trailing comma.
        if r != 0 && sOutL > 0 {
            _, err = w.Write([]byte(","))
            if err != nil {
                return errors.Wrap(err, "write json separator")
            }
        }

        // A short chunk means the end of the result set was reached.
        if sOutL != chunk {
            do = false
        }
        r++

        // With an explicit limit there is only ever one chunk.
        if hasLimit {
            do = false
        }

        sOutLCheck := sOutL - 1

        for i, e := range sOut {
            err = enc.Encode(e)
            if err != nil {
                return errors.Wrap(err, "enc")
            }
            if i != sOutLCheck {
                _, err = w.Write([]byte(","))
                if err != nil {
                    return errors.Wrap(err, "write json separator")
                }
            }
        }
    }

    _, err = w.Write([]byte("]"))
    if err != nil {
        return errors.Wrap(err, "write json close")
    }

    return nil
}

The smaller chunk size of 30000 came from testing with different sizes; it seems to give a good balance between fetch speed and RAM usage. In some cases I have more than 2 million results.

This chunks the output into one JSON array; in other cases I use FindInBatches just like you did.
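
A minimal sketch of how such a helper could be wired into a handler (the User model, the Pets association and the route setup are illustrative assumptions, not my actual code):

// Stream all users with their pets as one JSON array.
func usersHandler(db *gorm.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Preload("Pets") runs once per 30000-row chunk, so each generated
        // IN (...) list stays below the placeholder limit.
        q := db.Model(&User{}).Preload("Pets")
        // By the time an error can occur the status line has already been
        // written, so there is little left to do besides logging it.
        _ = QueryRelationsAndReturn[User](q, w, false)
    }
}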

Comment From: imthaghost

Big brain solution! Update: gave this a try and it works well for me.

I think in my use case I'm going more for accuracy than performance, so I will probably go with a lower chunk size. But to clarify: 30000 seems to work well with 2 million rows?

Yeah, I've tried using FindInBatches; there are just some other issues I'm hitting with it when using Joins() vs Preload(). I think Joins() is nice because it's a single query vs multiple queries with Preload(). There's probably an open issue about that one somewhere too.
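
Roughly the difference I mean, as a sketch (the Company and Pets associations are placeholders for illustration):

var users []User

// Joins() eager-loads via a single LEFT JOIN per batch query, so no huge
// IN (...) list is generated; it only works for single-row associations
// (belongs to / has one), like an assumed Company here.
_ = DB.Joins("Company").FindInBatches(&users, 1000, func(tx *gorm.DB, batch int) error {
    return nil
})

// Preload() runs an extra query per association with an IN (...) list of the
// batch's primary keys; that list is what triggers Error 1390 once it exceeds
// 65,535 placeholders (e.g. with no batching or a batch size above that).
_ = DB.Preload("Pets").FindInBatches(&users, 1000, func(tx *gorm.DB, batch int) error {
    return nil
})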