## Local Llama: An Open Comparison of Hardware

The online community Local Llama, active on Reddit, has opened a discussion thread dedicated to the hardware configurations its members use. The goal is to share experiences running large language models (LLMs) locally. The thread invites users to describe their systems, even the most "janky" ones (slang for setups assembled unconventionally or from salvaged components). This approach gathers a wide variety of solutions and helps identify effective strategies for optimizing performance across different budgets and needs.

The discussion is particularly useful for those new to LLMs who want to experiment without necessarily resorting to paid cloud services. Sharing information and practical advice makes these technologies more accessible and encourages innovation.

## The Importance of Hardware in the Age of AI

Hardware plays a crucial role in the development and use of artificial intelligence. Computing power and memory capacity are key factors in training and running machine learning models, especially large ones like LLMs. Optimizing hardware, both in components and architecture, is therefore essential to making AI more accessible and efficient.
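For readers sizing their own setup, a common back-of-the-envelope calculation is estimating how much memory a model needs at a given quantization level. The sketch below is illustrative only: the `overhead` factor for KV cache and runtime buffers is an assumed placeholder, and real requirements vary by runtime and context length.

```python
def estimate_model_memory_gb(params_billions: float,
                             bits_per_weight: int,
                             overhead: float = 0.2) -> float:
    """Rough memory footprint (GB) for model weights at a given quantization.

    `overhead` is an assumed allowance for KV cache and runtime buffers;
    actual usage depends on the inference engine and context length.
    """
    # 1 billion parameters at 8 bits per weight is roughly 1 GB of weights.
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * (1 + overhead)

# Example: a 7B-parameter model quantized to 4 bits per weight.
print(f"~{estimate_model_memory_gb(7, 4):.1f} GB")
```

By this rough estimate, a 4-bit 7B model fits comfortably in 8 GB of VRAM, which is why such quantized models are popular starting points in the community.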