US Regulators Initiate Probe into OpenAI’s ChatGPT Over False Information Risks
OpenAI, an artificial intelligence company backed by Microsoft, is facing an investigation by the Federal Trade Commission (FTC) in the United States. The inquiry focuses on the potential risks posed to consumers by OpenAI's ChatGPT, an AI chatbot known for generating human-like responses.
The FTC has sent a letter to OpenAI requesting information on how the company addresses concerns regarding false information and its impact on individuals’ reputations. This investigation highlights the growing regulatory scrutiny surrounding AI technology.
ChatGPT’s Impact and Industry Competition
ChatGPT, along with similar AI products, has the potential to revolutionize how people access information online. Unlike traditional search engines that provide a list of links, ChatGPT generates instant, convincing responses to user queries.
As this technology gains prominence, it has sparked intense debate over data usage, the accuracy of generated responses, and potential infringement of authors' rights through the material used to train the models. Amid these discussions, technology rivals are fiercely competing to offer their own AI language models.
FTC’s Concerns and Areas of Focus
The FTC’s letter to OpenAI raises questions about the steps the company has taken to address the potential generation of false, misleading, disparaging, or harmful statements about real individuals. Additionally, the FTC is examining OpenAI’s approach to data privacy, data acquisition methods for training the AI, and how user privacy is safeguarded. The commission aims to ensure that OpenAI’s products are safe, compliant with regulations, and do not infringe upon individuals’ rights or privacy.
OpenAI’s Response and Commitment to Safety
OpenAI's CEO, Sam Altman, has assured the public that the company prioritizes safety and user privacy. He stated on Twitter that OpenAI has dedicated years to safety research and spent several months specifically making ChatGPT safer and more aligned with users' interests. Altman emphasized that OpenAI's systems are designed to learn about the world in general rather than about private individuals. The company has expressed confidence that it complies with the law and a willingness to cooperate with the FTC.
Altman’s Congressional Testimony and Call for Regulation
Earlier this year, Sam Altman testified before Congress, acknowledging the potential for errors associated with AI technology. He advocated for industry regulations and the establishment of a dedicated agency to oversee AI safety.
Altman stressed the significance of addressing the impact of AI technology, including its potential effects on employment. OpenAI aims to collaborate with the government to prevent any adverse consequences arising from the misuse or mishandling of AI technology.
FTC Investigation and Lina Khan’s Concerns
The Washington Post first reported the FTC’s investigation into OpenAI. The inquiry is still in its preliminary stage. While FTC Chair Lina Khan did not explicitly mention OpenAI’s investigation during a recent congressional hearing, she expressed concerns about the output of AI language models.
Khan highlighted instances in which sensitive information surfaced in response to queries from unrelated individuals, as well as cases of libelous and defamatory statements. Her tenure has been marked by heightened scrutiny of large tech firms, although some critics argue that she has exceeded the FTC's authority.