#openxla — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #openxla, aggregated by home.social.

  1. #Intel should figure out their #ml strategy because they have:

    1. #OpenVino plugins for GPU and NPU,
    2. #OpenXLA plugin for GPU
    3. #ipex for PyTorch
    4. intel-npu-acceleration-library for PyTorch
    5. oneDNN neural network math kernels

    And for #ONNX they have both OpenVino and oneDNN runtimes.

    Best of all I haven't reliably gotten the NPU to work using any permutation of them lol..

  2. After some investigation I found that #OpenVino is about twice as fast as #OpenXLA in diffusion on my Intel Xe graphics iGPU.

    Having to convert safetensors models is pretty inconvenient.