South Korea's parliament has passed the "Basic AI Act," which turns government AI standards from recommendations into a binding legal framework. (Japan Times)
The core principle is "innovation first, regulation second": developers get a green light by default, with strict guardrails reserved for the areas where the stakes are highest.
What's allowed (and encouraged):
- **Development without licenses.** The "permitted by default" principle applies. Startups and tech giants don't need prior approval to launch new models unless they pose a direct threat to life or human rights.
- **Government procurement and support.** The law requires the state to purchase AI solutions from local companies and invest in R&D so Samsung and Naver don't fall behind American competitors.
What's under strict control (High-Risk AI):
This category covers systems whose decisions affect a person's life, rights, or safety.
- **Areas:** Medical diagnostics, transportation management (autonomous driving systems), biometrics, and scoring for hiring or credit decisions.
- **Requirements:** Mandatory human-in-the-loop (an operator makes the final decision), full transparency of algorithmic logic, and liability insurance for errors.
What's prohibited and punished:
- **Deepfakes without labeling.** Any AI-generated content must have watermarks. Creating deepfakes for deception or sexual exploitation is a criminal offense.
- **Covert biometric collection.** Using AI for facial recognition in public places without notifying citizens is banned (exception: national security threats).
What experts say:
The industry is pleased: the law turned out to be softer than Europe's AI Act, which many consider an "innovation killer." For Korean big tech, it means clear rules and protection from lawsuits.
Human rights advocates are skeptical. Their main complaints are that fines for violating safety principles are too low (pocket change for large corporations) and that the wording on how users can challenge an AI's decision is too vague.