Commits on branch: master
- ad3d742 good line shaves in st and faster (#2343) 2 years ago
- 652d2de wow how did i think that was okay (#2339) 2 years ago
- 8e22c0d everything can jit now (#2338) 2 years ago
- a8875bd add types to lazy (#2327) 2 years ago
- 1d55015 force rebuild of ocelot (#2334) 2 years ago
- 0d0c74b Assert for memory allocation failures (#2337) 2 years ago
- aa01a63 cleanup of lines / unused / types (#2336) 2 years ago
- 3971259 fix test_real_world llama (#2335) 2 years ago
- 3b9dd33 add device to beam search cache key (#2333) 2 years ago
- 75676ab Profiling-helper (#2321) 2 years ago
- 8235da1 whisper: support batch inference, add librispeech WER test (#2074) 2 years ago
- 3baaf29 two stage cumsum in tensor.py (#2331) 2 years ago
- 163b2bc wgpu.utils._device -> wgpu.utils.device (#2330) 2 years ago
- 27f4c26 fix getitem slice when end < start (#2329) 2 years ago
- 822d6e6 Simpler mops verify (#2325) 2 years ago
- ef67d7f shapetracker whitespace 2 years ago
- a985115 fuzz_linearizer same api for interpreted and compiled (#2320) 2 years ago
- 294e71d remove lines (unused code) (#2319) 2 years ago
- 628365e JIT cleanups (#2317) 2 years ago
- b64738e Remove AS_STRIDED from shapetracker (#2216) 2 years ago
- b8d460d Add Tensor.multinomial (#2295) 2 years ago
- cb6cfcc add icb support check for metal device (#2313) 2 years ago
- 70a65c2 JIT support in Interpreted (#2314) 2 years ago
- 9a20bc0 Tensor(None) is Tensor([]) (#2316) 2 years ago
- f1f863c allow 0-dim array to broadcast into zero shape tensor (#2315) 2 years ago
- 4da2dde Interpreted cleanups (#2312) 2 years ago
- 123a0b8 support zero in shape (#2303) 2 years ago
- f113a0b dtype promotion priorities (#2311) 2 years ago
- 3c5a51f aaaaaaa finally (#2310) 2 years ago
- cff8375 make self referential AST fast too (#2278) 2 years ago