Commits on master
- 8de1fc2 Einsum space fix (#2927) 2 years ago
- b55b55d use at least int32 and uint32 for sum output (#2926) 2 years ago
- d424bab tensor.py cleanup around Tensor.slice (#2921) 2 years ago
- 089703a cleanup test_dtype_alu (#2919) 2 years ago
- 3ba591c less outdated abstraction.py (#2917) 2 years ago
- 50927de s/lazydata.realized/lazydata.base.realized/g (#2914) 2 years ago
- 2783e1b bugfix Tensor.item when it's unbased (#2913) 2 years ago
- c3133ad Disk shm refactor (#2912) 2 years ago
- 3855432 don't use numpy to create Tensor(None) (#2909) 2 years ago
- 50cfb1f update onnx model links (#2908) 2 years ago
- 1bbeb3f remove the different rtol / atol for openpilot CUDA in benchmark (#2907) 2 years ago
- a543d8b fuzz default dtypes for some test_dtype tests (#2906) 2 years ago
- 5f3d5cf catch cycles in print_tree (#2891) 2 years ago
- 4432cb1 minor cleanups / remove that op (#2905) 2 years ago
- fd0ba33 onnx_ops formatting cleanup (#2904) 2 years ago
- 5cac633 apply the multitensor optimizations in lazy.py (#2901) 2 years ago
- 5bf43c9 reenable one onnx test failed due to dtype (#2902) 2 years ago
- 677ae76 use np.less and torch.lt for CMPLT (#2899) 2 years ago
- d2e9245 render_locals takes a dtype (#2873) 2 years ago
- 6116039 don't match dtype with first input in where (#2898) 2 years ago
- 7dc3352 increase stable diffusion validation threshold 1e-4 -> 3e-4 (#2897) 2 years ago
- 24e79e0 Move the webgpu CMPLT hack to one place (#2895) 2 years ago
- 852ef57 fix readme typo 2 years ago
- 193109a hotfix: compare on ids 2 years ago
- f6c7833 fast compare for lazyop (#2893) 2 years ago
- 1500aca remove output_type in ops_cpu and ops_torch (#2892) 2 years ago
- 2d2c498 assert for elementwise dtypes in lazy (#2888) 2 years ago
- 41b2a25 Fix exponential behavior in lazyops (#2890) 2 years ago
- 8c4a0f8 Fix int child count (#2882) 2 years ago
- 8a04107 move the op casting logic from mlops to tensor try 2 (#2887) 2 years ago
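Commit b55b55d ("use at least int32 and uint32 for sum output") concerns overflow when reducing small integer dtypes. As a minimal sketch of the underlying issue, the snippet below uses NumPy (not tinygrad) to show why a sum over `int8` data must accumulate in a wider integer type; tinygrad's actual rule and implementation live in the PR itself (#2926).

```python
import numpy as np

# Illustrative only: NumPy also widens integer sums, which demonstrates
# the overflow concern behind promoting sum output to at least int32.
small = np.full(1000, 100, dtype=np.int8)  # 1000 * 100 = 100000, far above int8's max of 127
total = small.sum()                        # accumulated in a wider integer dtype

print(total, total.dtype)
assert total == 100_000                    # no wraparound
assert total.dtype.itemsize >= 4           # at least 32-bit accumulator
```

The same reduction performed with an `int8` accumulator (`small.sum(dtype=np.int8)`) would silently wrap around, which is the class of bug the promotion rule avoids.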