Russia's Digital Ministry published an AI bill — three model types will shape the market

The bill may take effect September 1, 2027. AI models will be classified as sovereign, national, or trusted.

Author: Michael Kokin

Sovereign model — all development, training, and datasets must be entirely Russian: built by Russian companies and citizens only, with no foreign components. In practice, only Sber is building something like this. Creating one from scratch costs hundreds of billions of rubles, with no guarantee of matching the quality of global models.

National model — a softer tier: foreign open-source solutions and datasets are allowed. Most Russian developers, including Yandex, fall here. A compromise between sovereignty and reality.

Trusted model — a separate category for critical infrastructure. Security must be confirmed by FSTEC and the FSB. Foreign AI systems are prohibited in critical infrastructure.

> What this means for Western and Chinese services. They won't be allowed into government or critical infrastructure. In the commercial sector there's no direct ban yet, but the requirement to disclose training datasets will likely not be met by OpenAI, Google, or Baidu.

> What this means for users. Three things. First, apps with Russian AI models will be pre-installed on all smartphones. Second, services must disclose when you are talking to an AI, and generated content will be labeled. Third, if an AI causes damage, you will have the right to compensation — the model owner is liable.

This is still a draft. But the direction is clear: Russia is building an AI model classification similar in spirit to the European AI Act — only focused on localization level rather than risk levels.

> regulation.gov.ru/projects/166424