A user on Reddit shared their positive experience with the Qwen 3.5-35B-A3B language model, stating that they have adopted it as their primary tool for various development tasks.
Usage and Performance
The user employs the model for:
- Aggregation and prioritization of messages, emails, and alerts via an N8N server.
- Dynamic generation of systems based on user requests.
- Execution of scheduled tasks with access to custom tools, such as obtaining daily mortgage rates in the United States.
- Image analysis and interpretation of visual content.
- Analysis of large code bases.
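The "scheduled tasks with access to custom tools" item can be sketched as a small tool-dispatch loop. This is a hypothetical illustration, not the user's actual setup: the tool name `get_mortgage_rates`, its arguments, and the JSON call format are all assumptions standing in for whatever the model actually emits.

```python
import json

def get_mortgage_rates(region: str) -> dict:
    """Placeholder tool: a real implementation would fetch current rates."""
    return {"region": region, "rate_30y_fixed": "see data source"}

# Registry mapping tool names (as the model emits them) to local functions.
TOOLS = {"get_mortgage_rates": get_mortgage_rates}

def dispatch(tool_call_json: str) -> dict:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call.get("arguments", {}))

# Example: what a scheduled run might do with the model's tool-call output.
result = dispatch('{"name": "get_mortgage_rates", "arguments": {"region": "US"}}')
print(result["region"])  # US
```

A scheduler (cron, or an n8n schedule trigger) would invoke this daily, send the prompt to the model, and pass any tool-call output through `dispatch`.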
The user emphasizes that while the model is not the most capable available, it compensates for knowledge gaps by using a browser to look up the information it needs. The hardware setup pairs an RTX 5090 with an RTX 3090, running a 100,000-token context window with Unsloth's Q4_K_XL quantization.
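A setup like this could be served with llama.cpp's `llama-server`. This is a minimal sketch under stated assumptions: the GGUF filename and the `--tensor-split` ratio are illustrative guesses, not values from the post.

```shell
# Hypothetical llama.cpp invocation; the model filename and tensor split are assumptions.
# -c sets the 100,000-token context window mentioned in the post; -ngl 99 offloads
# all layers to GPU; --tensor-split distributes weights across the two cards
# (e.g. 24 GB on the 3090 and 32 GB on the 5090).
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf -c 100000 -ngl 99 --tensor-split 24,32
```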