This project aims to enable language model inference on FPGAs, supporting AI applications on edge devices and in resource-constrained environments.