Taipei, Sept. 29 (CNA) Governments should be careful about imposing regulations on artificial intelligence (AI) without having a better understanding of the technology, AI pioneer Andrew Ng said at a forum in Taipei on “Navigating the Future of AI.”
In a panel discussion at the forum hosted by Taiwania Capital, Ng stressed the importance of algorithm transparency but also warned of the perils of developing a regulatory framework prematurely.
He said that when he looks at the regulatory landscape in the U.S. and Europe, he sees a possibility of regulatory capture, because governments do not have a good understanding of what many generative AI companies are doing.
Regulatory capture is a form of government failure in which regulatory agencies come to be dominated by the interests of the very industries they are supposed to regulate.
The government “currently lacks the levers to gain the transparency and to even identify the problem,” Ng said, adding that he is concerned some countries are on a path toward regulatory capture, which he said would be “worse than no regulation.”
Also on the panel was Audrey Tang (唐鳳), Taiwan’s minister of digital affairs, who raised concerns of her own.
Beyond the problem of large companies prescribing a set of guidelines or rules that benefit only large established players, there is also the problem of legal responsibilities not being properly assigned, she said.
Taiwan’s social media platforms, for example, did not provide the transparency society required when they posted ads for fake financial schemes that caused harm, Tang said.
Taiwan recently amended its laws, however, and the platforms now have “mandated liability” for false advertisements, paying fines if people are scammed as a result, she said.
“The legal system needs to respond to the harms so the externalities are properly re-internalized,” Tang said, stressing that a good liability framework is better than top-down guardrails that protect only the established companies.
Externalities are the costs of economic activities that are not reflected in market prices or mechanisms.
In terms of new regulations on AI in particular, Tang said, “there will be a lot of infrastructure redesigning and rebuilding work to do.”
Ng, on the other hand, said he has found that regulating products may be more fruitful than regulating technologies.
Regulating products or a specific industry, where outcomes can be articulated more clearly, seems a promising approach, while regulating AI as a general-purpose technology is much harder, Ng argued.
In Taiwan, Tang said, many people are expecting the government “to demonstrate the use of responsible AI,” likely because the government has published guidelines for the public sector “to use generative AI in a privacy-preserving and responsible way.”
“We’re committed to public code and open source, which means that whatever we use in our ministry and help other ministries use will be published as guidelines for the industry,” Tang said.
Taiwan’s science minister also announced a few months ago that the National Science and Technology Council is launching TAIDE (Trustworthy AI Dialogue Engine), “a fine-tuned LLaMA (Large Language Model Meta AI) that fits the local Taiwanese culture,” Tang said.
LLaMA is a family of large language models whose weights Meta has released openly, allowing outside developers to adapt them, in contrast to proprietary systems such as OpenAI’s ChatGPT.
Fine-tuning is a process of adapting a pre-trained language model to perform specific tasks or generate text in a way that is more aligned with a particular application.
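Conceptually, fine-tuning means continuing gradient descent from a model’s pretrained weights on a small, task-specific dataset, rather than training from random initialization. The toy sketch below illustrates only that idea, using a two-parameter linear model in plain Python; the data and numbers are hypothetical and bear no relation to the actual TAIDE or LLaMA training pipelines.

```python
# Toy illustration of fine-tuning: start from "pretrained" weights and
# continue gradient descent on a few task-specific examples.
# (Hypothetical numbers; not the actual TAIDE/LLaMA pipeline.)

def train(w, b, data, lr=0.1, epochs=200):
    """Fit y = w*x + b to (x, y) pairs with plain gradient descent on MSE."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Pretraining": learn y = 2x from a broad dataset.
pretrain_data = [(x, 2.0 * x) for x in (-2, -1, 0, 1, 2)]
w, b = train(0.0, 0.0, pretrain_data)

# "Fine-tuning": adapt the pretrained weights to a shifted task, y = 2x + 1,
# using only two examples and a smaller learning rate.
finetune_data = [(0.0, 1.0), (1.0, 3.0)]
w_ft, b_ft = train(w, b, finetune_data, lr=0.05, epochs=500)

print(round(w_ft, 2), round(b_ft, 2))  # converges near 2.0 and 1.0
```

The point of starting from the pretrained weights, rather than zeros, is that the model already encodes most of the needed behavior; the small dataset only nudges it toward the new task, which is why fine-tuning a general model for local language and culture needs far less data than pretraining.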
Tang said this is the infrastructure-building that governments must do.
“When we say public code, it means the software code but also the regulatory code,” Tang said.
“Our regulatory code may also be exportable to other countries,” she said.