An image shared on Reddit, in the r/LocalLLaMA community, has captured the attention of the open-source community. The image, titled "Distillation when you do it. Training when we do it", depicts a person cooking at home (distillation) contrasted with a luxury restaurant (training).
Accessibility and resources
The image reflects a reality: model distillation, i.e., creating smaller, optimized versions of existing models, can be carried out with relatively limited resources. Training models from scratch, by contrast, requires expensive hardware infrastructure and specialized expertise.
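To make the idea concrete, here is a minimal sketch of a standard knowledge-distillation loss in PyTorch: the student is trained to match the teacher's softened output distribution alongside the usual hard-label objective. The hyperparameters (temperature, alpha) and tensor shapes are illustrative assumptions, not values from any specific model release.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    # Soften both distributions; the KL term is scaled by T^2, following common practice.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kl + (1 - alpha) * ce

# Toy example: a batch of 4 samples over a 10-class output space.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)          # produced by the frozen teacher
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The appeal is exactly what the meme captures: the teacher's forward passes and this loss can run on modest hardware, while producing the teacher in the first place cannot.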
For those evaluating on-premise deployments, the trade-off is between upfront hardware costs (CapEx) and operating costs (OpEx) such as energy consumption and maintenance. AI-RADAR offers analytical frameworks on /llm-onpremise to evaluate these aspects.
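A rough break-even calculation illustrates how that trade-off is usually framed. The figures below are purely hypothetical placeholders, not numbers from AI-RADAR's framework; real values depend on hardware, energy prices, and utilization.

```python
# Hypothetical figures for illustration only.
capex_hardware = 250_000          # upfront server + GPU purchase (EUR)
opex_per_month = 4_000            # energy, cooling, maintenance (EUR/month)
cloud_cost_per_month = 12_000     # equivalent managed/cloud spend (EUR/month)

# Months until the on-premise investment costs less than staying in the cloud.
months_to_break_even = capex_hardware / (cloud_cost_per_month - opex_per_month)
print(f"Break-even after ~{months_to_break_even:.1f} months")  # ~31.3 months
```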