Local AI taken offline after China mix-up

 

By Ting Yi and Jake Chung / Staff reporter, with staff writer

A trial version of the Chinese Knowledge and Information Processing (CKIP) Lab’s large language model was removed from its Web site after a test query of “When is National Day?” yielded the result Oct. 1, Academia Sinica said yesterday.

China’s National Day is Oct. 1, while Taiwan’s is Oct. 10.

Purported users of the CKIP-LLaMa-2-7b program, which operates similarly to artificial intelligence (AI) products such as OpenAI’s ChatGPT, wrote online that queries about the national anthem prompted the program to respond with the March of the Volunteers (義勇軍進行曲) instead of the Republic of China’s national anthem.

Photo: Ting Yi, Taipei Times

Questions about the Constitution resulted in responses related to the constitution of the People’s Republic of China, users said, with some suggesting that Academia Sinica had used a China-focused database to train the model.

Academia Sinica on Monday said that CKIP-LLaMa-2-7b was not an official product of the institution, but the partial results of individual researchers’ work.

The project is not intended to be the Taiwanese version of ChatGPT and is unrelated to the Trustworthy AI Dialogue Engine being developed by the National Science and Technology Council, it said.

The goal of CKIP-LLaMa-2-7b is to improve the traditional Chinese processing capability of Meta Platforms’ Large Language Model Meta AI-2 (LLaMa-2), Academia Sinica said.

The CKIP model enabled the automated generation of analyses of historical figures and events in the Ming and Qing dynasties, it said.

The training data mostly consisted of traditional Chinese Wikipedia articles, abstracts of Taiwanese academic theses and material from the Chinese Open Instruction Generalist project, it said.

The researcher in charge of the CKIP project said that queries about the national anthem, National Day and the Constitution were not within the anticipated parameters of the model’s design. Generative AI programs are prone to “hallucinations,” the researcher said, adding that the unintended and unexpected results show there is still much work to be done.
