Accelerated Linear Algebra
XLA (Accelerated Linear Algebra)
Developer(s): OpenXLA
Repository: xla on GitHub
Written in: C++
Operating system: Linux, macOS, Windows
Type: Compiler
License: Apache License 2.0
Website: openxla.org

XLA (Accelerated Linear Algebra) is an open-source compiler for machine learning developed by the OpenXLA project.[1] XLA improves the performance of machine learning models by compiling their computation graphs into optimized, hardware-specific code, making it particularly useful for large-scale computations and high-performance machine learning models. Key features of XLA include:[2]

  • Compilation of Computation Graphs: Compiles computation graphs into efficient machine code.
  • Optimization Techniques: Applies operation fusion, memory optimization, and other techniques.
  • Hardware Support: Optimizes models for various hardware, including CPUs, GPUs, and NPUs.
  • Improved Model Execution Time: Aims to reduce machine learning models' execution time for both training and inference.
  • Seamless Integration: Can be used with existing machine learning code with minimal changes.

XLA represents a significant step in optimizing machine learning models, providing developers with tools to enhance computational efficiency and performance.[3][4]
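Operation fusion, one of the optimization techniques listed above, can be illustrated with a toy sketch. This is plain Python written for illustration, not actual XLA code or internals: an unfused pipeline materializes an intermediate buffer after every elementwise operation, while a fused kernel computes the whole expression in a single pass.

```python
# Illustrative sketch of operation fusion (hypothetical example, not XLA internals).

def unfused(x, w, b):
    """Each elementwise op writes a full intermediate buffer."""
    t1 = [xi * w for xi in x]            # multiply -> intermediate buffer
    t2 = [ti + b for ti in t1]           # add      -> another intermediate
    return [max(ti, 0.0) for ti in t2]   # ReLU     -> final buffer

def fused(x, w, b):
    """One traversal computes multiply + add + ReLU together,
    analogous to the single kernel a fusing compiler would emit."""
    return [max(xi * w + b, 0.0) for xi in x]

x = [1.0, -2.0, 3.0]
print(fused(x, 2.0, 1.0))  # → [3.0, 0.0, 7.0], identical to unfused(x, 2.0, 1.0)
```

Both functions return the same values; the fused version simply avoids allocating and re-reading the intermediate buffers, which is where much of XLA's speedup on memory-bound elementwise chains comes from.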

Supported target devices

References

  1. ^ "OpenXLA Project". Retrieved December 21, 2024.
  2. ^ Woodie, Alex (March 9, 2023). "OpenXLA Delivers Flexibility for ML Apps". Datanami. Retrieved December 10, 2023.
  3. ^ "TensorFlow XLA: Accelerated Linear Algebra". TensorFlow Official Documentation. Retrieved December 10, 2023.
  4. ^ Smith, John (July 15, 2022). "Optimizing TensorFlow Models with XLA". Journal of Machine Learning Research. 23: 45–60.
  5. ^ "intel/intel-extension-for-openxla". GitHub. Retrieved December 29, 2024.
  6. ^ "Accelerated JAX on Mac - Metal - Apple Developer". Retrieved December 29, 2024.
  7. ^ "Developer Guide for Training with PyTorch NeuronX — AWS Neuron Documentation". awsdocs-neuron.readthedocs-hosted.com. Retrieved December 29, 2024.
  8. ^ Barsoum, Emad (April 13, 2022). "Supporting PyTorch on the Cerebras Wafer-Scale Engine - Cerebras". Cerebras. Retrieved December 29, 2024.
  9. ^ Graphcore Ltd. "Poplar® Software". graphcore.ai. Retrieved December 29, 2024.
  10. ^ "PyTorch/XLA documentation — PyTorch/XLA master documentation". pytorch.org. Retrieved December 29, 2024.