
Reference: "onnxruntime how to specify a GPU device?" · Issue #37 · cap-ntu/ML-Model-CI
https://github.com/cap-ntu/ML-Model-CI/issues/37

"I'm afraid this is an issue that we cannot specify a GPU device to test. Currently, we limit GPU usage by setting the flag os.environ["CUDA_VISIBLE_DEVICES"] = "0" in the server, but I think that's..."
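For contrast, the CUDA_VISIBLE_DEVICES workaround mentioned in the issue masks GPUs process-wide through the environment rather than per session; a minimal sketch, noting that the variable should be set before CUDA is initialized:

import os

# Hide every GPU except physical device 2 from this process; set this
# before importing onnxruntime so the CUDA runtime never sees the others.
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import onnxruntime as ort

# Within this process the remaining GPU is renumbered as device 0.
session = ort.InferenceSession('<path to model>', providers=['CUDAExecutionProvider'])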

 

To pin an InferenceSession to a specific GPU, pass per-provider options in the providers list:

import onnxruntime as ort

model_path = '<path to model>'

# Request the CUDA execution provider on GPU 0, with the CPU provider
# as a fallback in case CUDA is unavailable.
providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
    }),
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)
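ONNX Runtime falls back to the CPU provider silently if the CUDA provider cannot be loaded, so it is worth verifying what the session actually enabled; a minimal check:

# Providers actually enabled for this session, in priority order; if the
# CUDA provider failed to load, only 'CPUExecutionProvider' is listed.
print(session.get_providers())
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']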

 

Equivalently, the options can go in the separate provider_options argument, which takes one options dict per entry in providers:

session = ort.InferenceSession(onnx_dir, providers=['CUDAExecutionProvider'], provider_options=[{'device_id': 2}])
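For completeness, a minimal end-to-end sketch; the file name model.onnx and the 1x3x224x224 float32 input are hypothetical placeholders for a model with a single image-like input:

import numpy as np
import onnxruntime as ort

# provider_options must supply one dict per entry in providers.
session = ort.InferenceSession(
    'model.onnx',  # hypothetical path
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
    provider_options=[{'device_id': 2}, {}],
)

# Read the input name from the model itself rather than hard-coding it.
inp = session.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW shape

# None selects all model outputs; inference runs on GPU 2 when available.
outputs = session.run(None, {inp.name: x})
print([o.shape for o in outputs])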