ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
URL: https://github.com/microsoft/onnxruntime
Author: Microsoft Corporation <onnxruntime [at] microsoft [dot] com>
Maintainer: René Rebe <rene [at] exactco [dot] de>
License: MIT
Version: 1.25.1
Download: https://github.com/microsoft/onnxruntime v1.25.1 onnxruntime-v1.25.1.tar.gz
T2 source: onnxruntime.cache
T2 source: onnxruntime.desc
Build time (on reference hardware): 1850% (relative to binutils) 2)
Installed size (on reference hardware): 94.07 MB, 386 files
Dependencies (build time detected): bash binutils cmake coreutils diffutils gawk git grep gtest gzip linux-header ninja numpy openssl patch protobuf pybind11 python-gpep517 sed setuptools sympy tar util-linux vcs-versioning zlib
Installed files (on reference hardware): n.a.
1) This page was automatically generated from the T2 package source. Corrections, such as dead links, URL changes or typos need to be performed directly on that source.
2) Compatible with Linux From Scratch's "Standard Build Unit" (SBU).