A machine learning compiler for GPUs, CPUs, and ML accelerators

1 Open Issue Needs Help · Last updated: Nov 16, 2025


AI Summary: The user is investigating XLA's `ConvertRandomBitsToUniformFloatingPoint` function, comparing its f16 and f32 implementations. The f16 version (around L561) appears to generate values in the range [0, 1), but the user cannot locate the explicit scaling and shifting operations that the f32 version (around L546) uses to achieve that range. The core question is how the f16 implementation correctly produces values in [0, 1) without the explicit scaling/shifting seen in the f32 path.
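
For context, here is a minimal plain-C++ sketch (not XLA's actual implementation; `BitsToUniformF32` and its constants are illustrative) of the exponent-fixing trick that the f32 path's scaling and shifting appear to implement: fix the exponent bits to those of 1.0f so the raw pattern decodes into [1, 2), fill the mantissa with random bits, then subtract 1.0.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Illustrative only (not XLA's code): the classic bit trick for mapping
// random bits to a uniform float in [0, 1). Fixing the exponent field to
// that of 1.0f makes the raw pattern decode into [1, 2); the final
// subtraction is the explicit "shifting" the issue refers to.
float BitsToUniformF32(uint32_t random_bits) {
  // f32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits.
  // 0x3F800000 is the bit pattern of 1.0f (biased exponent 127, mantissa 0).
  const uint32_t pattern = 0x3F800000u | (random_bits >> 9);  // top 23 bits
  float result;
  std::memcpy(&result, &pattern, sizeof(result));  // portable bit-cast
  return result - 1.0f;                            // [1, 2) -> [0, 1)
}

// The f16 analogue would OR 0x3C00 (the bit pattern of half-precision 1.0)
// with the top 10 random bits and subtract 1.0; the issue asks how XLA's
// f16 path reaches [0, 1) without spelling out these steps.

int main() {
  // Sample bit patterns; real code would draw these from a PRNG.
  for (uint32_t bits : {0x00000000u, 0x80000000u, 0xFFFFFFFFu}) {
    std::printf("bits=0x%08X -> %f\n", bits, BitsToUniformF32(bits));
  }
  return 0;
}
```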

Complexity: 3/5
Labels: good first issue, question


Language: C++