A first look at triton-cpu
Let's take triton-cpu for a spin and see whether it delivers the results we imagine.
Distributed-memory computers emerged mainly to meet the demands of large-scale workloads for both compute power and memory capacity. Physical limits and cost keep single-processor performance from scaling indefinitely, whereas a distributed-memory machine built from many relatively simple, low-cost processors can deliver higher aggregate performance at lower cost without running into those limits.
Since 2020, Apple's M1/M2/M3 chips have offered at least four different ways to run compute-heavy workloads:
Standard ARMv8 SIMD/NEON vector instructions.
The undocumented AMX (Apple Matrix Co-processor) instructions, which are issued by the CPU but executed on a dedicated accelerator.
The Apple Neural Engine (ANE).
The Metal GPU.
For single-precision matrix multiplication on a single core of the M1 Max, the SIMD/NEON path reaches roughly 102 GFLOPS, while the AMX path can reach up to about 1475 GFLOPS. This article explores the AMX instruction set and shows how to unlock that remaining 14x of compute.
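Before diving into AMX, it helps to make the NEON baseline concrete. Below is a minimal sketch of a 4x4 FP32 micro-kernel written with the standard arm_neon.h intrinsics; the function name, tile size, and packing layout are illustrative choices of mine rather than a tuned kernel. A production sgemm uses a larger tile and careful packing to hide FMA latency and get close to the ~102 GFLOPS peak.

```c
#include <arm_neon.h>

/*
 * Illustrative 4x4 FP32 micro-kernel: C[4][4] += A(:,p) * B(p,:) for
 * p = 0..k-1.  `a` holds one packed column of A per step, `b` one
 * packed row of B per step, and `c` is row-major with leading
 * dimension ldc.
 */
static void sgemm_4x4_neon(const float *a, const float *b,
                           float *c, int k, int ldc) {
    float32x4_t c0 = vld1q_f32(c + 0 * ldc);
    float32x4_t c1 = vld1q_f32(c + 1 * ldc);
    float32x4_t c2 = vld1q_f32(c + 2 * ldc);
    float32x4_t c3 = vld1q_f32(c + 3 * ldc);

    for (int p = 0; p < k; ++p) {
        float32x4_t ap = vld1q_f32(a + 4 * p);  /* column p of A */
        float32x4_t bp = vld1q_f32(b + 4 * p);  /* row p of B    */
        /* Each fused multiply-add is 8 FLOPs on a 128-bit vector. */
        c0 = vfmaq_laneq_f32(c0, bp, ap, 0);
        c1 = vfmaq_laneq_f32(c1, bp, ap, 1);
        c2 = vfmaq_laneq_f32(c2, bp, ap, 2);
        c3 = vfmaq_laneq_f32(c3, bp, ap, 3);
    }

    vst1q_f32(c + 0 * ldc, c0);
    vst1q_f32(c + 1 * ldc, c1);
    vst1q_f32(c + 2 * ldc, c2);
    vst1q_f32(c + 3 * ldc, c3);
}
```

The 102 GFLOPS figure lines up with the commonly reported micro-architecture of the M1's performance cores: four 128-bit FMA pipes at around 3.2 GHz give 4 x 8 FLOPs per cycle, or roughly 100 GFLOPS per core.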
All of the code used in this article is available here and has been verified on an M2 Pro. It builds on the work of Peter Cawley et al., which documents many more AMX instructions and usage patterns.
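Because Apple does not document AMX and no compiler exposes intrinsics for it, the instructions have to be emitted as raw opcodes in inline assembly. The sketch below shows only the enable/disable pair, using the reverse-engineered encoding from Peter Cawley's corsix/amx notes (opcode base 0x00201000, operation number in bits 5-9); since this is undocumented behaviour, it may change or break on future chips and macOS versions.

```c
/*
 * AMX opcodes follow the pattern 0x00201000 | (op << 5) | operand
 * (reverse engineered; see the corsix/amx notes).  Operation 17
 * toggles the unit: operand 0 enables it ("set"), operand 1 disables
 * it ("clr").  The leading NOPs follow the convention used there.
 */
#define AMX_SET() \
    __asm__ volatile("nop\nnop\nnop\n.word 0x00201220" ::: "memory")
#define AMX_CLR() \
    __asm__ volatile("nop\nnop\nnop\n.word 0x00201221" ::: "memory")

int main(void) {
    AMX_SET();  /* enable the AMX unit on this core            */
    /* ... load X/Y registers, issue FMAs, store Z here ...    */
    AMX_CLR();  /* disable it again before the thread finishes */
    return 0;
}
```

The data-movement and compute instructions (ldx, ldy, fma32, stz, and so on) are emitted the same way but take a 64-bit operand in a general-purpose register whose bit fields select offsets and modes; those operand layouts are exactly what the corsix/amx work documents.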