OpenAI discovers malicious activity on ChatGPT

A recent report from OpenAI reveals that an individual with links to Chinese law enforcement attempted to use ChatGPT to plan and track smear campaigns. The campaigns primarily aimed to discredit the Japanese prime minister and other critics of the Chinese Communist Party.

This incident highlights the risks of large language models (LLMs) being put to malicious use, including disinformation and the manipulation of public opinion. Their ability to generate plausible text at scale makes these tools particularly well suited to such activities.

For organizations evaluating on-premise LLM deployments, incidents like this are among the trade-offs to weigh. AI-RADAR offers analytical frameworks on /llm-onpremise to help evaluate these aspects.