Technical Reporter

The BBC has threatened to take legal action against the artificial intelligence (AI) firm Perplexity, saying the chatbot company reproduces BBC content "verbatim" without its permission.
The BBC has written to the US-based company, asking it to immediately stop using BBC content, delete any it holds, and propose financial compensation for the material it has already used.
The BBC is one of the world's largest news organisations, and this is the first time it has taken such action against an AI company.
In a statement, Perplexity said: "The BBC's claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google's illegal monopoly."
It did not explain how it believed Google was relevant to the BBC's position, and did not provide further comment.
The BBC's legal threat was set out in a letter to Perplexity's boss, Aravind Srinivas.
“This constitutes a copyright infringement in the UK and a violation of the BBC’s terms of use,” the letter said.
The BBC also cites research it published earlier this year, which found that four popular AI chatbots, including Perplexity AI, were summarising news stories inaccurately, including some BBC content.
It points to findings of significant issues with the representation of BBC content in some of the Perplexity AI responses analysed, and says such output falls short of the BBC's editorial guidelines on providing impartial and accurate news.
It added: "This is therefore highly damaging to the BBC, injuring the BBC's reputation with audiences, including UK licence fee payers who fund the BBC, and undermining their trust in the BBC."
Web scraping under scrutiny
Chatbots and image generators that can produce content in response to simple text prompts in seconds have surged in popularity since OpenAI launched ChatGPT in late 2022.
But their rapid growth and improving capabilities have prompted questions about their use of existing material without permission.
Much of the material used to develop generative AI models has been pulled from a vast range of web sources using bots and crawlers that automatically extract site data.
The rise of this activity, known as web scraping, recently prompted British media publishers to join creatives in calling on the UK government to uphold protections for copyrighted content.
Responding to the BBC's letter, the Professional Publishers Association (PPA), which represents more than 300 media brands, said it was "deeply concerned that AI platforms are currently failing to uphold UK copyright law".
It said bots were being used to "illegally scrape publishers' content" without permission or payment.
It added: "This practice directly threatens the UK's £4.4 billion publishing industry and the 55,000 people it employs."
Many organisations, including the BBC, use a file called "robots.txt" in their website code to try to block bots and automated tools from extracting data en masse for AI.
It instructs bots and web crawlers not to access certain pages and material, where present.
But compliance with the directive remains voluntary and, according to some reports, bots do not always respect it.
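To illustrate how robots.txt rules work in practice, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The file and the crawler name (`ExampleAIBot`) are hypothetical, not the BBC's actual rules; a well-behaved crawler would check rules like these before fetching a page, but as noted above, nothing technically forces it to.

```python
import urllib.robotparser

# A hypothetical robots.txt, illustrative of how a news site might
# disallow a named AI crawler while leaving other agents unrestricted.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# The named AI crawler is blocked from every path...
print(parser.can_fetch("ExampleAIBot", "https://example.com/news/article"))

# ...while any other user agent is allowed.
print(parser.can_fetch("SomeOtherBot", "https://example.com/news/article"))
```

The key point is that `can_fetch` only reports what the site has asked for; honouring that answer is left entirely to the crawler, which is why compliance is described as voluntary.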
The BBC said in its letter that although it disallowed two of Perplexity's crawlers, the company "is clearly not respecting robots.txt".
Mr Srinivas denied accusations that its crawlers ignored robots.txt instructions in an interview with Fast Company last June.
Perplexity also says that because it does not build foundation models, website content is not used for AI model pre-training.
"Answer engine"
The company's AI chatbot has become a popular destination for people seeking answers to common or complex questions, describing itself as an "answer engine".
It says on its website that it does this by "searching the web, identifying trusted sources and synthesising information into clear, up-to-date responses".
It also advises users to double check responses for accuracy, a common caveat accompanying AI chatbots, which are known to state false information in a plausible, matter-of-fact way.
In January, Apple suspended an AI feature that had generated inaccurate headlines when summarising BBC News app notifications for some iPhone users.
