This installation uses PyTorch 2.6.0 with CUDA 12.6 for GTX 10XX - RTX 30XX, and PyTorch 2.7.1 with CUDA 12.8 for RTX 40XX - 50XX; these combinations are well-tested and stable. It is not recommended to use either PyTorch ...
When an OOM error occurs, the name of the volume causing it is highlighted in red in the output. The offending volume is also logged to oom_errors.log ...
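The behavior above could be sketched roughly as follows. This is a hedged, hypothetical example, not the project's actual code: the `process_volumes` and `run_inference` names are invented, and it assumes a CUDA OOM surfaces as a `RuntimeError` whose message contains "out of memory" (as PyTorch raises). The volume name is printed in red via an ANSI escape code and appended to `oom_errors.log`.

```python
import logging

# Hypothetical sketch of the OOM handling described above.
logging.basicConfig(filename="oom_errors.log", level=logging.ERROR)
RED, RESET = "\033[91m", "\033[0m"  # ANSI codes for red highlighting

def process_volumes(volumes, run_inference):
    """Run inference on each (name, volume) pair; report OOM failures."""
    failed = []
    for name, volume in volumes:
        try:
            run_inference(volume)
        except RuntimeError as exc:
            if "out of memory" not in str(exc).lower():
                raise  # unrelated error: re-raise
            print(f"{RED}OOM on volume: {name}{RESET}")  # highlight in red
            logging.error("OOM error on volume: %s", name)  # log the volume
            failed.append(name)
    return failed
```

Processing continues with the remaining volumes after an OOM, so one oversized volume does not abort the whole batch.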