Anthropic, an artificial intelligence company backed by Alphabet Inc, on Tuesday released a large language model that competes directly with offerings from Microsoft-backed OpenAI, the creator of ChatGPT.
Large language models are algorithms that are taught to generate text by feeding them human-written training text. In recent years, researchers have obtained much more human-like results with such models by drastically increasing the amount of data fed to them and the amount of computing power used to train them.
Claude, as Anthropic's model is known, is built to carry out similar tasks to ChatGPT by responding to prompts with human-like text output, whether that is in the form of editing legal contracts or writing computer code.
But Anthropic, which was co-founded by siblings Dario and Daniela Amodei, both former OpenAI executives, has focused on producing AI systems that are less likely than other systems to generate offensive or dangerous content, such as instructions for computer hacking or making weapons.
Such AI safety concerns gained prominence last month after Microsoft said it would limit queries to its new chat-powered Bing search engine after a New York Times columnist found that the chatbot displayed an alter ego and produced unsettling responses during an extended conversation.
Safety issues have been a thorny problem for tech companies because chatbots do not understand the meaning of the words they generate.
To avoid generating harmful content, the creators of chatbots often program them to avoid certain subject areas altogether. But that leaves chatbots vulnerable to so-called "prompt engineering", where users talk their way around restrictions.
Anthropic has taken a different approach, giving Claude a set of principles at the time the model is "trained" with vast amounts of text data. Rather than trying to avoid potentially dangerous topics, Claude is designed to explain its objections, based on its principles.
"There was nothing scary. That's one of the reasons we liked Anthropic," Richard Robinson, chief executive of Robin AI, a London-based start-up that uses AI to analyse legal contracts and was granted early access to Claude by Anthropic, told Reuters in an interview.
Robinson said his firm had tried applying OpenAI's technology to contracts but found that Claude was both better at understanding dense legal language and less likely to generate strange responses.
"If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses," Robinson said.