## Introduction

Large language models (LLMs) have recently undergone a significant transformation, marked by a rapid rise in both their popularity and their capabilities. This evolution has been driven by proprietary models such as GPT-4 and OpenAI o1, which have captured the attention of the AI community through their exceptional performance and versatility. At the same time, open-source LLMs such as LLaMA and Mistral have contributed substantially to this surge in popularity, as they are easy to customize and to deploy in a wide range of applications.

Moxin 7B was introduced as a fully open-source model developed in accordance with the Model Openness Framework (MOF), which goes beyond sharing model weights: it promotes transparency in training, data, and implementation details, creating a more inclusive and collaborative research environment that can support a thriving open-source ecosystem.

To equip Moxin with diverse capabilities across tasks, three variant models were developed on top of the base model: Moxin-VLM, Moxin-VLA, and Moxin-Chinese, targeting vision-language, vision-language-action, and Chinese-language tasks, respectively.

Experiments show that our models achieve superior performance across a variety of evaluations. Training relied on an open-source framework and publicly available data, and the models are released together with the data and code used to derive them (see the usage sketch after the References).

## Technical Characteristics

* Moxin-VLM: vision-language
* Moxin-VLA: vision-language-action
* Fully open-source model with complete transparency in training, data, and implementation details
* Developed according to the Model Openness Framework (MOF)
* Capabilities in vision-language, vision-language-action, and Chinese

## Conclusion

These open-source variants significantly expand the capabilities of the Moxin family. Continued collaboration between the research community and open-source developers is crucial to sustaining a thriving open-source ecosystem.

## References

* [Moxin GitHub repository](https://github.com/moxin-models)
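
## Example Usage

Since the weights, data, and code are released publicly, the base model can be loaded with standard open-source tooling. The sketch below assumes a Hugging Face `transformers` workflow; the model identifier `moxin-org/moxin-llm-7b` is an assumption for illustration only and should be confirmed against the repository linked above.

```python
# Minimal sketch of loading a Moxin checkpoint with Hugging Face transformers.
# The model ID below is hypothetical; take the exact identifier from the
# project's repository or model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # assumed identifier, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 7B model fits on one GPU
    device_map="auto",           # let accelerate place the weights
)

prompt = "Explain the Model Openness Framework in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to fine-tuning or swapping in a variant checkpoint, which is the kind of personalization the openness of the training pipeline is meant to enable.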