
Releases: flashinfer-ai/flashinfer

v0.1.6


0.1.6 (2024-08-27)

SM75 Support

Starting from 0.1.6, our pre-built wheels include experimental support for sm75 (Turing-architecture GPUs such as the Tesla T4, Quadro RTX 6000, and RTX 2080).

API Changes

plan/run

Starting from 0.1.6, the begin_forward/forward/end_forward APIs are replaced with the new plan/run APIs.

  • forward is renamed to run, which is more precise and consistent with the naming convention of CUTLASS's Python API.
  • begin_forward is renamed to plan, which is consistent with the naming convention of the nvmath API.
  • end_forward is deprecated and has no effect after this PR.

There are some slight differences between the old forward and the new run APIs:

  • All extra arguments such as causal and logits_soft_cap are now provided in the plan (previously begin_forward) API and cached until the next plan call; only the query and KV-Cache tensors need to be provided in the run API.

The old begin_forward/forward/end_forward APIs are still functional, but we will gradually deprecate them in future releases.
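
As a concrete illustration, here is a minimal sketch of the new workflow using BatchPrefillWithPagedKVCacheWrapper; the sizes below are arbitrary and the argument lists follow the documentation as we understand it, so treat this as a sketch and check the API reference and #466 for the authoritative signatures.

```python
import torch
import flashinfer

# Toy sizes, for illustration only: 2 requests, 16 query tokens each,
# 4 full KV pages each.
num_qo_heads, num_kv_heads, head_dim, page_size = 32, 8, 128, 16

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.BatchPrefillWithPagedKVCacheWrapper(workspace, "NHD")

# Page-table metadata for the batch.
qo_indptr = torch.tensor([0, 16, 32], dtype=torch.int32, device="cuda")
kv_indptr = torch.tensor([0, 4, 8], dtype=torch.int32, device="cuda")
kv_indices = torch.arange(8, dtype=torch.int32, device="cuda")
kv_last_page_len = torch.tensor([16, 16], dtype=torch.int32, device="cuda")

# Extra arguments such as causal (and logits_soft_cap, ...) now go into plan()
# and are cached until the next plan() call.
wrapper.plan(
    qo_indptr, kv_indptr, kv_indices, kv_last_page_len,
    num_qo_heads, num_kv_heads, head_dim, page_size,
    causal=True,
)

q = torch.randn(32, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
kv_cache = torch.randn(8, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device="cuda")

# run() only needs the query and the paged KV-Cache.
o = wrapper.run(q, kv_cache)
```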

Check #466 for more details.

MultiLevelCascadeAttentionWrapper

Starting from 0.1.6, we introduce a new MultiLevelCascadeAttentionWrapper API for cascade inference.
It supports multi-level cascade inference where the KV-Cache of all levels is managed in a unified Paged KV-Cache.

See the documentation and tutorial for API usage and an explanation of the layout.

The old BatchDecodeWithSharedPrefixPagedKVCacheWrapper and BatchPrefillWithSharedPrefixPagedKVCacheWrapper will be deprecated in future releases.
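
As a rough sketch of the intended usage, a two-level setup (a shared prefix at level 0 plus per-request suffixes at level 1) might look like the following; the constructor and plan/run argument layout below mirror the other paged-KV wrappers and are our assumption, so consult the documentation above for the exact signatures.

```python
import torch
import flashinfer

# Toy sizes, for illustration only: 2 requests, a shared prefix of 2 full pages
# (level 0) plus 1 partially-filled unique page per request (level 1).
num_qo_heads, num_kv_heads, head_dim, page_size = 32, 8, 128, 16

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.MultiLevelCascadeAttentionWrapper(2, workspace, "NHD")

# One set of page-table arrays per level; level 0 groups both requests under
# the shared prefix, level 1 keeps one entry per request.
qo_indptr_arr = [
    torch.tensor([0, 2], dtype=torch.int32, device="cuda"),
    torch.tensor([0, 1, 2], dtype=torch.int32, device="cuda"),
]
kv_indptr_arr = [
    torch.tensor([0, 2], dtype=torch.int32, device="cuda"),
    torch.tensor([0, 1, 2], dtype=torch.int32, device="cuda"),
]
kv_indices_arr = [
    torch.tensor([0, 1], dtype=torch.int32, device="cuda"),  # shared-prefix pages
    torch.tensor([2, 3], dtype=torch.int32, device="cuda"),  # per-request pages
]
kv_last_page_len_arr = [
    torch.tensor([page_size], dtype=torch.int32, device="cuda"),
    torch.tensor([1, 1], dtype=torch.int32, device="cuda"),
]

wrapper.plan(
    qo_indptr_arr, kv_indptr_arr, kv_indices_arr, kv_last_page_len_arr,
    num_qo_heads, num_kv_heads, head_dim, page_size,
)

q = torch.randn(2, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
# All levels index into a single unified paged KV-Cache (4 pages here).
kv_cache = torch.randn(4, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device="cuda")
o = wrapper.run(q, kv_cache)
```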


Refactor

  • refactor: replace begin_forward/forward/end_forward with plan/run (#466)

Misc

  • misc: improve error handling of sampling kernels (#456) (0dce178)

Performance Improvements

  • slight optimization on f16->f8 fragment layout swizzling (#453) (0d61871)
  • slight optimization on fragment layout swizzle (#458) (7c397cb)
  • use persistent kernel for merging attention states (#459) (be6bf5b)

Acknowledgement

We thank @LiuXiaoxuanPKU for enhancing the speculative sampling operator, @merrymercy for suggestions on the API change, and @zhyncs for integrating the fp8 BMM cuBLAS implementation.

v0.1.5


0.1.5 (2024-08-13)

Bugfix

  • Fix PagedPrefill python api and some typos (#441) (3fff008)
  • fix prefill kernels' lse result for empty kv-cache (#440) (6ac28f4)

Features

  • decouple float and int workspace buffer (#442) (a7ee566)

Performance Improvements

  • faster fp8->fp16 dequantization for pre sm_90 arch (#439) (c93f647)

Acknowledgement

We thank the community for their contributions and feedback: @comaniac, @hnyls2002, @jianfei-wangg, @Yard1.

v0.1.4


0.1.4 (2024-08-09)


Bug Fixes

  • fix dispatch fp16 type when enable fp8 (#430) (daa5566)
  • improve numerical stability of sampling kernels (#429) (898d8ea)

Other improvements

  • break up _kernels into multiple modules (#428) (8e482d9)

Acknowledgement

We thank the community for their contributions and feedback: @comaniac, @esmeetu, @LiuXiaoxuanPKU, @peng1999, @xslingcn, @Yard1, @zhyncs.

v0.1.3


0.1.3 (2024-07-31)

Bugfix

  • bugfix: Fix cudagraph mode of BatchPrefillWithRaggedKVCacheWrapper (#412) (9907bc)
  • fix cu118 cub usage for sampling kernels (#410) (58d359)

Misc

  • enhance allocator error info and add shape check for prefill begin forward functions (#413) (5e36c5)

v0.1.2


0.1.2 (2024-07-29)


v0.1.1


0.1.1 (2024-07-20)

Bugfix

  • fix the invalid kernel configuration for architectures with small shared memory size (#385) (cdac57)

Features

  • expose decoupled kv-cache to pytorch api (#383) (457a0ae)


v0.1.0


0.1.0 (2024-07-17)

Features

  • Add mask to merge_state_in_place (#372) (e14fa81)
  • expose pytorch api for block sparse attention (#375) (4bba6fa)
  • Fused GPU sampling kernel for joint top-k & top-p sampling (#374) (6e028eb)

v0.0.9


0.0.9 (2024-07-12)

Bugfix

  • fix the decode kernel segfault in cudagraph mode (#368) (c69cfa)
  • fix decode kernels output for empty kv cache (#363) (ac72b1)
  • check gpu id in PyTorch APIs and use input tensor's gpu default stream (#361) (1b84fa)


Acknowledgement

We thank @Yard1, @Ying1123 and @zhyncs for their contributions.

v0.0.8


0.0.8 (2024-07-03)

Bugfix

  • fix prefill/append kernel behavior for empty kv-cache (#353) (7adc8c)
  • fix decode attention kernel with logits cap (#350) (f5f7a2)

v0.0.7


0.0.7 (2024-06-28)

Breaking Changes

  • batch_decode_with_padded_kv_cache was removed; we encourage users to use BatchDecodeWithPagedKVCacheWrapper instead. (#343)

Bugfix

  • fix the forward_return_lse function in BatchPrefillWithRaggedKVCache class (#337)
  • fix the scheduler behavior of large page size (#333)
