A new generation of AI-powered rings is attempting to narrow one of the world’s oldest communication divides by translating sign language into text in real time, turning subtle finger movements into written words through wearable technology small enough to disappear into everyday life.

More than a century after a deaf baseball player helped popularize hand signals on the field, researchers in South Korea have developed a new device that seeks to translate sign language into text in real time using a set of lightweight AI-powered rings.
The system, described by its creators as a step toward “seamless interaction between signers and non-signers,” uses seven wireless rings fitted with motion sensors to interpret hand movements and generate words and short phrases.
The technology is designed to address a persistent barrier faced by millions of deaf people worldwide. Although an estimated 70 million people use one of roughly 300 sign languages globally, only a small fraction of the hearing population understands them fluently. Everyday interactions, from ordering food to attending social gatherings, can become exercises in improvisation and patience.
The new rings aim to make those exchanges smoother by translating gestures directly into text while preserving the natural flow of conversation.
Unlike earlier sign language translation devices, which often relied on bulky gloves, cables, or customized hardware, the new system is wireless and stretchable. Each ring slips just below the second knuckle and adjusts to different finger sizes, allowing users to move their hands naturally while signing.
The rings contain tiny accelerometers similar to those found in smartwatches and fitness trackers. These sensors capture motions such as bending and curling, as well as pauses between movements. The devices then transmit that information through ultra-thin Bluetooth components to a connected system that interprets the gestures.
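The team has not published its signal-processing pipeline in detail, but the basic idea of turning raw accelerometer streams into gesture features can be sketched roughly. In the illustrative Python below, the window shape, sampling rate, feature choices, and pause threshold are all assumptions, not the researchers’ actual design.

```python
import numpy as np

# Hypothetical sketch: the published pipeline is not public, so the
# sampling rate, thresholds, and feature choices here are assumptions.

NUM_RINGS = 7          # seven rings total, per the article
SAMPLE_RATE_HZ = 100   # assumed accelerometer sampling rate

def extract_features(window: np.ndarray) -> np.ndarray:
    """Turn a window of raw accelerometer data into a feature vector.

    window: shape (samples, NUM_RINGS, 3) -- x/y/z acceleration per ring.
    Returns per-ring mean, variance, and total motion energy, flattened.
    """
    mean = window.mean(axis=0)                             # resting orientation
    var = window.var(axis=0)                               # how much each axis moves
    energy = np.abs(np.diff(window, axis=0)).sum(axis=0)   # total motion per axis
    return np.concatenate([mean, var, energy], axis=None)

def is_pause(window: np.ndarray, threshold: float = 0.05) -> bool:
    """Detect a pause: near-zero change in acceleration across all rings."""
    return float(np.abs(np.diff(window, axis=0)).mean()) < threshold

# Example: half a second of simulated sensor data from all seven rings.
window = np.random.default_rng(0).normal(0, 0.1, (SAMPLE_RATE_HZ // 2, NUM_RINGS, 3))
features = extract_features(window)
print(features.shape, is_pause(window))
```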
The challenge is not simply recognizing signs, but doing so quickly enough to keep pace with human conversation. Fluent signers can communicate at speeds comparable to spoken dialogue, often producing between 100 and 150 signs per minute.
To bridge that gap, the researchers added a predictive AI layer that functions much like autocomplete in texting. By analyzing the context of previous signs, the system attempts to anticipate the next word and assemble phrases as the conversation unfolds.
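Since the researchers compare this layer to texting autocomplete, a simple bigram model over sign sequences is one minimal way to picture how it could work. The sketch below is illustrative only: the `SignPredictor` class, the training sentences, and the bigram approach are assumptions, not the team’s published model.

```python
from collections import Counter, defaultdict

# Illustrative autocomplete-style predictor: counts which sign tends to
# follow which, then proposes the most likely next signs during decoding.

class SignPredictor:
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def train(self, sentences):
        """Count sign-to-sign transitions from example sentences."""
        for signs in sentences:
            for prev, nxt in zip(signs, signs[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_sign, k=3):
        """Return the k most likely next signs given the previous one."""
        return [s for s, _ in self.bigrams[prev_sign].most_common(k)]

predictor = SignPredictor()
predictor.train([
    ["I", "WANT", "EAT"],
    ["I", "WANT", "DRINK"],
    ["YOU", "WANT", "EAT"],
])
# After "WANT", the gesture classifier can be biased toward these candidates:
print(predictor.predict("WANT"))   # ['EAT', 'DRINK']
```

In a real system, biasing the classifier toward a short candidate list like this is what lets recognition keep pace with conversational signing speed.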
In tests involving 100 common signs drawn from both American Sign Language and International Sign Language, the system achieved accuracy rates above 88 percent, including among first-time users unfamiliar with the technology.
The rings were trained to recognize both static gestures and signs involving motion. A gesture like “want,” for example, is detected by the movement of open palms closing into fists, while more fluid signs such as “dance” or “fly” are tracked across continuous motion patterns.
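One plausible way to handle that split is to route low-motion sensor windows to a pose matcher and high-motion windows to a trajectory model. The sketch below reuses the window format from the earlier example; the energy threshold and both placeholder recognizers are hypothetical, not the team’s method.

```python
import numpy as np

# Illustrative routing between static and dynamic recognition. The
# threshold and both recognizers are placeholders, not the team's design.

MOTION_THRESHOLD = 1.0  # assumed energy cutoff between held and moving signs

def recognize_pose(pose: np.ndarray) -> str:
    # Placeholder: a real system would match against learned handshapes.
    return "WANT" if float(pose.mean()) > 0 else "UNKNOWN"

def recognize_trajectory(window: np.ndarray) -> str:
    # Placeholder: a real system would feed the sequence to a temporal model.
    return "DANCE"

def classify_gesture(window: np.ndarray) -> str:
    """Route a sensor window (samples x rings x 3 axes) by motion energy."""
    motion_energy = float(np.abs(np.diff(window, axis=0)).sum())
    if motion_energy < MOTION_THRESHOLD:
        # Static sign: the final held pose carries the meaning.
        return recognize_pose(window[-1])
    # Dynamic sign such as "dance" or "fly": the whole trajectory matters.
    return recognize_trajectory(window)

still = np.ones((50, 7, 3))                             # a perfectly held pose
moving = np.cumsum(np.ones((50, 7, 3)) * 0.1, axis=0)   # steady continuous motion
print(classify_gesture(still), classify_gesture(moving))
```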
Researchers say the system may eventually extend beyond direct translation. Because the AI model learns gestures rather than spoken language itself, the team believes the platform could someday function as a translation bridge between different sign languages, something akin to a multilingual interpreter for signing systems.
But even the developers acknowledge the technology’s limitations.
Sign language is not conveyed through finger movement alone. Facial expressions, mouth shapes, posture, rhythm, and body orientation all contribute meaning, tone, and emotional nuance. Without those signals, a translation system risks flattening or misinterpreting intent.
That limitation has led some researchers back toward camera-based systems that attempt to capture the full body and face. Earlier versions of those systems struggled outside laboratory conditions, often confused by poor lighting or cluttered backgrounds, but advances in processing power and computer vision are reviving interest in more comprehensive approaches.
Still, wearable systems have advantages. Unlike camera setups, they are unaffected by lighting or background clutter and can travel with the user into daily life.
The South Korean team believes the rings may eventually find uses beyond sign language translation, including virtual reality environments, touchless computing systems, and rehabilitation programs that monitor hand movement and dexterity.
The project also reflects a broader shift in assistive technology, one increasingly shaped by artificial intelligence. Rather than forcing users to adapt to machines, researchers are trying to build systems that respond more fluidly to human behavior, anticipating motion, filling gaps in communication, and learning from context in real time.
For now, the rings remain experimental. But they offer a glimpse of how AI may begin reshaping one of the oldest and most expressive forms of human communication, not by replacing it, but by trying to make it legible to a wider world.
