Pioneering startup introduces patented virtual keyboard and touch screen for enhanced mixed reality experiences
In the past 18 months, the AI industry has witnessed an explosion of significant breakthroughs. How can AI technologies be effectively harnessed to enhance consumer experiences across all industries? ChiMETA Digital Technology (Shanghai) Ltd. (ChiMETA), a pioneering startup founded by industry veterans and AI professionals, presents their revolutionary answer.
Committed to delivering cutting-edge mixed reality experiences, ChiMETA is proud to unveil their first-generation MR glasses. Remarkably lightweight at only 150 grams, they are the lightest VR/MR glasses in the world. They offer users an eye-level video see-through (VST) experience without stitching and deliver more than five hours of uninterrupted MR use, powered through a USB Type-C cable that connects to an external battery and processing unit.
From traditional convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to large language models (LLMs) and AI-generated content (AIGC) such as graphics and video, the use of AI has predominantly been limited to two-dimensional visual applications.
Recognizing the demand for a more natural and immersive approach, ChiMETA introduces their patented virtual keyboard and touch screen, featuring precise fingertip positioning and millimeter-level interaction accuracy. Powered by their own proprietary AI Vision technology, these innovations transform VR goggles into dynamic MR glasses, enabling users to interact seamlessly with everything in the surrounding environment, from virtual keyboards and cloud desktop computers to documents or even a cup of coffee, without the need for any handheld devices.
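ChiMETA has not disclosed how their patented pipeline works. As a rough illustration of the general idea behind fingertip positioning, the minimal Python sketch below uses the open-source MediaPipe hand tracker and OpenCV (tools not associated with ChiMETA) to locate the index fingertip in each camera frame; a virtual keyboard would then hit-test that point against key regions. All names and thresholds here are illustrative assumptions, not ChiMETA's method.

```python
# Hypothetical sketch: fingertip positioning with an off-the-shelf hand tracker.
# ChiMETA's patented pipeline is proprietary; this only illustrates mapping a
# detected fingertip onto a region where a virtual keyboard could be hit-tested.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # ordinary webcam as a stand-in for a VST camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[
            mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
        h, w, _ = frame.shape
        x, y = int(tip.x * w), int(tip.y * h)  # fingertip in pixel coordinates
        cv2.circle(frame, (x, y), 6, (0, 255, 0), -1)
        # A virtual keyboard would hit-test (x, y) against key rectangles here.
    cv2.imshow("fingertip", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
```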
While most XR glasses primarily focus on gaming applications, ChiMETA is dedicated to addressing challenges in everyday work scenarios. Users can work remotely without laptops, leveraging six expansive virtual displays arranged across their entire surroundings (tracked with 3 degrees of freedom) while interacting with the physical environment, unaffected by natural light conditions.
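As a rough, hypothetical illustration of how six displays could be laid out under 3DoF tracking (where only head orientation is tracked, not position), the sketch below spaces panels evenly on a cylinder around the user. The panel count matches the announcement, but the radius and coordinate convention are assumptions, not ChiMETA's published design.

```python
# Hypothetical layout sketch, not ChiMETA's implementation: six virtual
# displays spaced evenly on a cylinder around the user. Under 3DoF, the
# layout is anchored to orientation only; walking around does not move it.
import math

NUM_PANELS = 6   # "six expansive virtual displays"
RADIUS_M = 1.5   # assumed comfortable viewing distance, meters

def panel_poses():
    """Yield (x, z, yaw_deg) for each panel center, facing the user."""
    for i in range(NUM_PANELS):
        yaw = i * 360.0 / NUM_PANELS     # 60 degrees apart covers a full circle
        rad = math.radians(yaw)
        x = RADIUS_M * math.sin(rad)     # right/left of the user
        z = -RADIUS_M * math.cos(rad)    # -z is "forward" in many 3D conventions
        yield x, z, yaw

for idx, (x, z, yaw) in enumerate(panel_poses()):
    print(f"panel {idx}: position ({x:+.2f}, {z:+.2f}) m, yaw {yaw:.0f} deg")
```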
With ChiMETA’s own large language model (LLM), companies can train their own GPT-style tools using vision, speech, and text inputs. Imagine employees seamlessly interacting with remote colleagues with synchronized vision and sound, or students actively engaging with teachers through live video feeds that augment objects of interest during interactive lab sessions. Meanwhile, doctors will be able to present results to patients as three-dimensional MRI or CT models, and businesses can effortlessly manage inventory and improve production efficiency.