AJMEDIA English

Japan’s AI draft guidelines ask for measures to address overreliance

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

Tokyo, 15 October, /AJMEDIA/

Companies and organizations that utilize artificial intelligence will be required to take measures to reduce the risk of overreliance on the technology, according to draft guidelines by a Japanese government panel.

The draft guidelines obtained by Kyodo News also call on AI developers to be careful not to use biased data for machine learning, while urging them to maintain records of their interactions with the technology, to be provided in the event of any issues.

The panel, which is tasked with discussing the country’s AI strategy, is expected to finalize the guidelines by the end of the year. Japan, this year’s chair of the Group of Seven industrialized nations, is also working with other members on drawing up international guidelines for AI developers.

The draft outlines 10 basic rules for AI-related businesses, such as ensuring fairness and transparency, protecting human rights, and preventing personal information from being given to third parties without the individual's permission.

The rules also ask that information be provided about how data is acquired from an individual or entity and how it is then used by related parties.

Companies that develop AI platforms, providers of services that utilize the technology and users will all be required to share some degree of responsibility.

The guidelines set out principles for each business category. Developers are requested to ensure that data used for AI purposes is accurate and up to date, and, where possible, to adopt measures preventing access to information that has not been approved for use.

Meanwhile, providers of AI-based services will be asked to warn users against inputting personal information they do not want accessed by third parties, and to ensure their services are limited to their intended use so that bad actors cannot employ the technology for malign purposes.
