Commits on master
- 0c49d38 replace with tensor op (#3099) 2 years ago
- f3a50b4 fix broadcasted logic if there's 0 in shapes (#3097) 2 years ago
- 025fbf4 One hot in tensor.py (#3093) 2 years ago
- 7086d77 bugfix do not reset shapetracker of 0 size lazybuffer (#3096) 2 years ago
- 13e872b add multigpu support for llama attention (#3064) 2 years ago
- dcf7eca update jit type annotation post lazy rewrite (#3091) 2 years ago
- 0fe6904 use device from LinearizerOptions in kernel search (#3090) 2 years ago
- 93e3f95 use BEAM=2 instead of BEAM=4 in cuda ci gpt2 (#3089) 2 years ago
- f502c9b minor cleanup of View.reshape (#3088) 2 years ago
- f40299c remove the third merging state in view._merge_dims (#3085) 2 years ago
- 7f9590d hotfix disable flaky mac runner wino cifar (#3087) 2 years ago
- adcc844 cat works (#3086) 2 years ago
- cdeab9a mem_estimate is always int, not symbolic (#3083) 2 years ago
- 162fa61 wmma: clean up device specific tensor core code (#3081) 2 years ago
- d218d13 minor cleanups of lazy.py (#3080) 2 years ago
- 56dda33 Tensor.expand resolves the new_shape before shortcut return (#3078) 2 years ago
- 6842476 better test demonstration (#3077) 2 years ago
- 507e0af fix onehot and jit in examples/transformer (#3073) 2 years ago
- 4342fcc filter_strides -> canonicalize_strides (#3072) 2 years ago
- 023f5df simpler idxs_to_idx (#3071) 2 years ago
- 2495ca9 early gate the graph (#3070) 2 years ago
- ff0d6e4 jit autorealizes output (#3069) 2 years ago
- ae83733 hotfix: examples/transformer.py 2 years ago
- 145718a unbind view or shapetracker also returns var_val (#3067) 2 years ago
- ef3aa6d update gh actions (#3033) 2 years ago
- 3f80c1a speedtweaks3: apply shouldn't use the tensor constructor (#3065) 2 years ago
- 0abe72b hotfix: use is for enum compare, a few more 2 years ago
- b2b5849 hotfix: use is for enum compare 2 years ago
- ac3f246 cached size (#3060) 2 years ago
- 73b72b8 test scaled dot product attention (#3063) 2 years ago