Last updated: 2021-05-11 10:41:31
If you need to use your cluster in compute-intensive scenarios such as machine learning and image processing, you can create a GPU-accelerated container cluster. This lets you schedule GPU-accelerated containers without manually installing the NVIDIA driver or the Compute Unified Device Architecture (CUDA) toolkit.
Unlike CPU and memory, GPUs must be explicitly declared in the Pod's YAML file: set the nvidia.com/gpu field under resources.limits of the container. GPUs can only be requested as whole units (fractional values are not allowed), and if you also set resources.requests for the GPU, it must equal the limit.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: hub.kce.ksyun.com/ksyun/cuda-vector-add:0.1
      resources:
        limits:
          nvidia.com/gpu: 1 # Specify the number of NVIDIA GPUs to schedule.
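To deploy and verify the example, you can save the YAML above to a file and use kubectl against the cluster. The file name cuda-vector-add.yaml below is an assumption; the Pod name comes from the manifest above. These commands require a configured kubeconfig for your cluster.

```
# Create the Pod from the manifest (file name is an example).
kubectl apply -f cuda-vector-add.yaml

# Wait for the Pod to be scheduled onto a GPU node and run to completion.
kubectl get pod cuda-vector-add

# Inspect the container output to confirm the CUDA workload succeeded.
kubectl logs cuda-vector-add
```

If the Pod stays in Pending, the cluster has no node with enough free nvidia.com/gpu capacity; `kubectl describe pod cuda-vector-add` shows the scheduling reason.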