
Apple’s ReALM Joins the Fray: GPT and Gemini, Brace for Impact!

Apple has developed a new artificial intelligence system called ReALM (Reference Resolution as Language Modeling) that aims to make voice assistants better at understanding and acting on user commands. The system helps assistants figure out what users are referring to on their screens, even when the reference is indirect, such as a pronoun like "it" or "that one."

Voice assistants have traditionally struggled to interpret vague language and visual cues. ReALM addresses this by recasting reference resolution as a language-modeling problem, allowing the assistant to resolve on-screen references and carry that context into the conversation.

ReALM works by converting the visual layout of a screen into text that preserves the positions of on-screen elements, so a language model can reason about where things are and what they mean. Apple's research shows that this approach works much better than older methods, even outperforming OpenAI's GPT-4 on this task.
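To illustrate the general idea, here is a minimal, hypothetical sketch in Python of how on-screen elements might be flattened into plain text for a language model. The class and function names, the serialization format, and the sample screen are all assumptions for illustration, not Apple's actual implementation.

```python
# Illustrative sketch only (not Apple's code): flatten on-screen UI
# elements into text so a language model can resolve a reference
# like "call that number" to a specific element.

from dataclasses import dataclass


@dataclass
class ScreenEntity:
    """A UI element parsed from the screen, with its position."""
    text: str
    top: int   # vertical position in pixels (assumed units)
    left: int  # horizontal position in pixels (assumed units)


def screen_to_text(entities: list[ScreenEntity]) -> str:
    """Serialize entities in reading order (top-to-bottom, then
    left-to-right), tagging each with an index the model can cite."""
    ordered = sorted(entities, key=lambda e: (e.top, e.left))
    return "\n".join(f"[{i}] {e.text}" for i, e in enumerate(ordered))


if __name__ == "__main__":
    # Hypothetical screen contents for demonstration.
    screen = [
        ScreenEntity("Rialto Pharmacy", top=10, left=5),
        ScreenEntity("(555) 123-4567", top=40, left=5),
        ScreenEntity("Directions", top=40, left=200),
    ]
    # The serialized screen plus the user's request forms the prompt;
    # the model would answer with the index of the referenced entity.
    prompt = (
        "Screen:\n" + screen_to_text(screen)
        + "\nUser: call that number"
        + "\nAnswer with the entity index."
    )
    print(prompt)
```

Sorting elements into reading order is one simple way to encode position; the key point is that the screen becomes ordinary text the model can attend to alongside the user's request.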

This innovation could make voice assistants more helpful in everyday situations, such as letting drivers operate a car's infotainment system by voice or helping people with disabilities use technology without having to phrase every request with exacting precision.

Apple has been active in AI research and recently introduced a method for training large language models on both text and images. With Apple's Worldwide Developers Conference (WWDC) coming up in June, the company is expected to showcase more AI features, underscoring its ongoing investment in artificial intelligence research.
