In early June, Apple researchers released a study suggesting that simulated reasoning (SR) models, such as OpenAI’s o1 and o3, DeepSeek-R1, and Claude 3.7 Sonnet Thinking, produce outputs consistent ...
Stanford University researchers asked Americans to judge AI responses to political questions. After collecting over 180,000 judgments, the researchers concluded that leading AI models from OpenAI, ...
Challenges in visual and spatial processing and a deficit in training data have revealed a surprising lack of timekeeping ability in AI systems.
A new study appears to lend credence to allegations that OpenAI trained at least some of its AI models on copyrighted content. OpenAI is embroiled in suits brought by authors, programmers, and other ...
A new study by Apple has ignited controversy in the AI field by showing how reasoning models undergo "complete accuracy collapse" when overloaded with complex problems.
Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out the possibility. On Thursday, the AI ...
Researchers at Anthropic have uncovered a disturbing pattern of behavior in artificial intelligence systems: models from every major provider, including OpenAI, Google, Meta, and others, demonstrated ...
Modern AI engines and LLMs are not good at helping people in a mental health crisis, a new study finds. Google’s latest Gemini is the highest-scoring large language model on a recent test of empathy ...