  1. 15 Sep, 2022 1 commit
  2. 14 May, 2022 1 commit
  3. 17 Apr, 2022 1 commit
  4. 27 Mar, 2022 1 commit
  5. 31 Dec, 2021 1 commit
  6. 20 Dec, 2021 1 commit
  7. 29 Jun, 2021 1 commit
  8. 25 Jun, 2021 1 commit
  9. 22 Jun, 2021 1 commit
  10. 19 Jun, 2021 1 commit
  11. 08 Jun, 2021 1 commit
  12. 31 May, 2021 1 commit
  13. 13 May, 2021 1 commit
  14. 07 May, 2021 1 commit
  15. 01 May, 2021 1 commit
  16. 28 Apr, 2021 1 commit
  17. 21 Apr, 2021 1 commit
  18. 14 Apr, 2021 1 commit
  19. 08 Apr, 2021 1 commit
    • Update with the new frontend API · f431756f
      Summary:
      The new frontend unifies the two execution modes, starting from a
      single tensor class. Besides, it dispatches operator execution
      through a common path that works for both dragon and torch
      (sketched below).
      Ting PAN committed
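      A minimal sketch of that design (all names here, e.g. Tensor,
      execute, run_kernel and record_node, are hypothetical, not the
      framework's actual API): one tensor class carries either concrete
      data or a graph symbol, and both modes funnel through one dispatch
      path.

        class Tensor(object):
            """One tensor class shared by both execution modes."""

            def __init__(self, value=None, symbol=None):
                self.value = value    # concrete data in eager mode
                self.symbol = symbol  # graph node in symbolic mode

        def run_kernel(op_type, inputs, **attrs):
            raise NotImplementedError  # stands in for device kernel dispatch

        def record_node(op_type, inputs, **attrs):
            raise NotImplementedError  # stands in for graph recording

        def execute(op_type, inputs, eager=True, **attrs):
            # Common path: every frontend op funnels through here.
            if eager:
                return Tensor(value=run_kernel(op_type, [t.value for t in inputs], **attrs))
            return Tensor(symbol=record_node(op_type, [t.symbol for t in inputs], **attrs))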
  20. 04 Feb, 2021 1 commit
    • Reimplement the general matrix multiplication · 6bfe3e73
      Summary:
      This commit generalizes the fully-connected operation into GEMM,
      and enhances the matmul operation via batched Dot, GEMV and GEMM
      (see the sketch below). The new representations and attributes
      are now consistent with ONNX.
      Ting PAN committed
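      Roughly, the decomposition reads as follows in a NumPy sketch (an
      illustration, not the framework's code): the fully-connected op
      reduces to a single GEMM, while matmul picks Dot, GEMV, GEMM or the
      batched form from the operand ranks.

        import numpy as np

        def fully_connected(x, w, b=None):
            # The FC op as a plain GEMM: y = x @ w^T (+ b).
            y = x @ w.T
            return y if b is None else y + b

        def matmul(a, b):
            # Dispatch on operand ranks, mirroring the Dot/GEMV/GEMM split.
            if a.ndim == 1 and b.ndim == 1:
                return np.dot(a, b)  # Dot: (n,) x (n,) -> scalar
            if a.ndim == 2 and b.ndim == 1:
                return a @ b         # GEMV: (m, n) x (n,) -> (m,)
            if a.ndim == 2 and b.ndim == 2:
                return a @ b         # GEMM: (m, k) x (k, n) -> (m, n)
            return np.matmul(a, b)   # batched GEMM over leading dims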
  21. 25 Jan, 2021 1 commit
    • Remove support for CUDNN v6 · 73ed1b96
      Summary:
      For consistency in querying CUDNN convolution algorithms,
      support for CUDNN v6 (relied on mainly by CUDA 8.0) is now dropped.
      Ting PAN committed
  22. 20 Jan, 2021 1 commit
    • Add sysconfig module · bbfecf22
      Summary:
      This commit adds the sysconfig module to query build information,
      which is helpful for selecting tests and reporting issues
      (usage sketched below).
      Ting PAN committed
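      A usage sketch, assuming an interface along the lines of
      tensorflow's tf.sysconfig; the accessor name get_build_info and the
      returned keys are assumptions, not confirmed from the source.

        import dragon

        # Query the build information recorded at compile time.
        info = dragon.sysconfig.get_build_info()  # assumed accessor name
        print(info.get('cuda_version'))           # assumed key
        print(info.get('cudnn_version'))          # assumed key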
  23. 16 Jan, 2021 1 commit
  24. 29 Dec, 2020 1 commit
  25. 23 Dec, 2020 1 commit
  26. 15 Dec, 2020 1 commit
  27. 11 Dec, 2020 1 commit
  28. 10 Dec, 2020 1 commit
  29. 09 Dec, 2020 1 commit
  30. 03 Dec, 2020 1 commit
  31. 02 Dec, 2020 1 commit
  32. 29 Nov, 2020 1 commit
  33. 05 Nov, 2020 1 commit
    • Use FP32 accumulator for FP16 ReduceSum · d56e67d1
      Summary:
      This commit adds a fallback with an FP32 accumulator for FP16
      ReduceSum, so that small values are not dropped during accumulation
      (demonstrated below). Besides, FP16 kernels are now almost fully
      available for architectures below compute capability 5.3 (arch < 530).
      Ting PAN committed
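      The numerical issue is easy to reproduce in NumPy (an illustration,
      not the framework's kernel): once the running FP16 sum grows, each
      small addend falls below half an ulp and rounds away, while an FP32
      accumulator keeps it.

        import numpy as np

        x = np.full(100000, 1e-4, dtype=np.float16)  # many small addends

        acc16 = np.float16(0)
        for v in x:            # FP16 accumulator: stalls once the addend
            acc16 = acc16 + v  # drops below half an ulp of the running sum

        acc32 = x.astype(np.float32).sum()  # the FP32-accumulator fallback

        print(acc16)  # stalls far below the true sum (about 10.0)
        print(acc32)  # approximately 10.0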
  34. 24 Oct, 2020 1 commit
  35. 20 Oct, 2020 1 commit
  36. 14 Oct, 2020 1 commit
  37. 13 Oct, 2020 1 commit
    • Add LinSpace Operator · e83c407a
      Summary:
      This commit adds the linspace op for dragon, torch and tensorflow.
      In addition, a workaround for truncated integer intervals is applied
      to range/linspace (handling intervals up to 2**57); the underlying
      truncation is illustrated below.
      Ting PAN committed
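      The truncation this works around comes from routing integer
      endpoints through float64, whose 53-bit significand cannot represent
      every larger integer exactly; a NumPy illustration (not the
      framework's code):

        import numpy as np

        n = 2**53
        # Not every integer above 2**53 survives the float64 round-trip:
        print(int(np.float64(n + 1)) == n + 1)  # False: rounds to 2**53

        # A linspace computed through float64 inherits the truncation;
        # the last element comes out as 2**53, not the requested 2**53 + 1.
        x = np.linspace(0, n + 1, num=3)
        print([int(v) for v in x])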
  38. 08 Oct, 2020 1 commit
  39. 07 Oct, 2020 1 commit
    • Add Sort Operator · b4019faa
      Summary:
      This commit adds the sort op for dragon, torch and tensorflow
      (contract sketched below). Besides, a CUDA implementation of the
      topk op is now available.
      Ting PAN committed
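      A NumPy sketch of the sort/topk contract (an illustration, not the
      framework's kernels): both return a (values, indices) pair, and topk
      here falls back to a full sort where a CUDA kernel would use a
      selection algorithm instead.

        import numpy as np

        def sort(x, axis=-1, descending=False):
            # Return (values, indices), the usual sort-op contract.
            idx = np.argsort(-x if descending else x, axis=axis, kind='stable')
            return np.take_along_axis(x, idx, axis=axis), idx

        def topk(x, k, axis=-1):
            # Top-k via a full descending sort.
            values, indices = sort(x, axis=axis, descending=True)
            return (np.take(values, np.arange(k), axis=axis),
                    np.take(indices, np.arange(k), axis=axis))

        v, i = topk(np.array([3.0, 1.0, 4.0, 1.0, 5.0]), k=2)
        print(v, i)  # [5. 4.] [4 2]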
  40. 27 Sep, 2020 1 commit
    • Use local workspace for Context · fdf26ef2
      Summary:
      This commit uses a local (per-thread or per-stream) workspace for
      Context, which provides a more elegant way to dispatch kernels that
      require scratch memory (see the sketch below). Besides, the TF32
      math type is provided as a cuDNN option for Ampere devices.
      Ting PAN committed
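      A minimal Python sketch of the thread-local idea (the actual
      workspace lives in C++; every name here is hypothetical): each
      thread keys its own scratch map, so kernels obtain temporary
      buffers without contending on a global lock.

        import threading
        import numpy as np

        _local = threading.local()

        def workspace():
            # One workspace per thread; a per-stream variant would key on
            # the stream handle instead.
            if not hasattr(_local, 'ws'):
                _local.ws = {}
            return _local.ws

        def scratch(name, nbytes):
            # Grow-only scratch buffer, reused across kernel dispatches.
            ws = workspace()
            buf = ws.get(name)
            if buf is None or buf.nbytes < nbytes:
                buf = ws[name] = np.empty(nbytes, dtype=np.uint8)
            return buf[:nbytes]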