#110 nvidia-driver-daemonset kernel header installation issue on OpenShift 4.6 (opened Nov 25, 2020 by geoberle)
#104 Problem running example: `sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi` (opened Nov 23, 2020 by dummys)
#102 nvidia-driver-validation pod tolerations are not configurable [enhancement] (opened Nov 13, 2020 by borremosch, 5 of 5 tasks)
#101 state-device-plugin-validation schedules on wrong node in multi-node cluster [enhancement] (opened Nov 13, 2020 by zzh8829, 5 of 5 tasks)
#100 Deploying GPU-Operator on AWS EC2 with (K8S v.19.1, Ubuntu 20.04.1 LTS) (opened Nov 10, 2020 by danniel1205, 13 of 16 tasks)
#94 Operator should create one daemonset for each Linux distribution found in node list [enhancement] (opened Nov 3, 2020 by MartinForReal, 0 of 11 tasks)
#85 nvidia/driver could not resolve Linux kernel version (opened Sep 28, 2020 by liuchintao, 13 of 16 tasks)
#81 New GPU nodes do not get labeled with nvidia.com/gpu resources without restart of device-plugin pods (opened Sep 10, 2020 by dagrayvid)
#76 Cluster autoscaler scaling to zero for GPU node pools [enhancement] (opened Aug 27, 2020 by ranjb, 5 of 16 tasks)
#75 Errors with running NVIDIA GPU Operator 1.1.7-r2 on OpenShift 4.3.28 (opened Aug 26, 2020 by ibmbwolfe, 0 of 11 tasks)
#74 Standardised mechanism for node selector in hybrid clusters [question] (opened Aug 21, 2020 by nwalker-nvidia)
#71 GPU Operator crashes if no GPU nodes present [bug, good first issue] (opened Aug 11, 2020 by mmgaggle)
#69 Problems running the GPU operator on k3s [enhancement, platform] (opened Jul 22, 2020 by TheMosquito)