Container Engine (KCE)
User Guide
Cluster management
Perform GPU scheduling in a cluster
Last updated:2021-05-11 10:41:31
If you need to use your cluster in compute-intensive scenarios such as machine learning and image processing, you can create a GPU-accelerated container cluster. This way, you can schedule GPU-accelerated containers without manually installing the NVIDIA driver or Compute Unified Device Architecture (CUDA).
Unlike CPU and memory, GPUs must be explicitly declared: in the YAML file, set nvidia.com/gpu under resources.limits of the container that needs them.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: hub.kce.ksyun.com/ksyun/cuda-vector-add:0.1
      resources:
        limits:
          nvidia.com/gpu: 1 # Specify the number of NVIDIA GPUs to schedule.
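Assuming the manifest above is saved as cuda-vector-add.yaml (the filename and the node name below are placeholders, not part of the original example), a typical way to deploy the pod and confirm GPU scheduling is:

```shell
# Create the pod from the manifest (filename is an assumption).
kubectl apply -f cuda-vector-add.yaml

# Watch the pod; with restartPolicy: OnFailure it runs to
# completion once the CUDA vector-add test succeeds.
kubectl get pod cuda-vector-add --watch

# Inspect the output of the CUDA vector-add test.
kubectl logs cuda-vector-add

# Check how many GPUs a node advertises to the scheduler
# (replace <gpu-node-name> with an actual node in your cluster).
kubectl describe node <gpu-node-name> | grep nvidia.com/gpu
```

Note that nvidia.com/gpu can only be requested as a whole number in limits; fractional GPUs cannot be requested this way.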