Introduction
Transformer-based sentiment models are increasingly used to analyze opinions on social media. However, temporal drift can compromise their stability and accuracy.
Methodology
The author analyzes the stability of sentiment models using a zero-training approach applied to authentic social-media streams from large-scale events. The method was evaluated on three transformer architectures and a corpus of 12,279 authentic social posts.
Results
The results show significant model instability, with accuracy drops of up to 23.4% during event-driven periods. The author introduces four new drift metrics that outperform embedding-based baselines and are suitable for production deployment.
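The abstract does not specify the four drift metrics. As a generic illustration of the idea of a zero-training drift signal, the sketch below compares the distribution of a model's predicted sentiment labels between a baseline window and a later window using KL divergence; the function names and class labels are assumptions, not the author's metrics.

```python
import math
from collections import Counter

def label_distribution(labels, classes=("negative", "neutral", "positive")):
    """Smoothed, normalized frequency of each sentiment class in a window."""
    counts = Counter(labels)
    total = len(labels) + len(classes)  # add-one smoothing per class
    return [(counts[c] + 1) / total for c in classes]

def kl_divergence(p, q):
    """KL(p || q): how far distribution p has drifted from reference q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Illustrative windows of model predictions: a calm baseline period
# versus an event-driven period (hypothetical data).
baseline = label_distribution(["positive"] * 60 + ["neutral"] * 25 + ["negative"] * 15)
event    = label_distribution(["positive"] * 20 + ["neutral"] * 20 + ["negative"] * 60)

drift = kl_divergence(event, baseline)
print(f"prediction drift (KL): {drift:.3f}")  # higher = stronger drift
```

Because this only inspects the model's own outputs, it needs no labeled data or retraining, which is the property a "zero-training" monitoring approach relies on.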
Conclusion
Identifying these temporal shifts can help improve the stability of sentiment models and yield more accurate results. The zero-training method offers a practical solution for deploying sentiment models in dynamic environments.