cross pond high tech
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST

Huawei’s new 4K Vision TV claims voice, facial recognition, and tracking among a long list of AI powers


Huawei announced its own 4K television, the Huawei Vision, during the Mate 30 Pro event today. Like the Honor Vision and Vision Pro TVs that were announced back in August, Huawei’s self-branded TV runs the company’s brand-new Harmony OS software as its foundation.

Huawei will offer 65-inch and 75-inch models to start, with 55-inch and 85-inch models coming later. The Huawei TV features quantum dot color, thin metal bezels, and a pop-up camera for video conferencing that lowers into the television when not in use. On TVs, Harmony OS is able to serve as a hub for smart home devices that support the HiLink platform.

Huawei is also touting the TV’s AI capabilities, likening it to a “smart speaker with a big screen.” The TV supports voice commands and includes facial recognition and tracking capabilities. Apparently, there’s some AI mode that helps protect the eyes of young viewers — presumably by filtering blue light. The Vision also allows “one-hop projection” from a Huawei smartphone. The TV’s remote has a touchpad and charges over USB-C.

Philippe J DEWOST's insight:

The TV is now watching you watching TV: is this smart?

Philippe J DEWOST's curator insight, September 25, 2019 12:47 AM

Still think YOU are watching TV?

Scooped by Philippe J DEWOST

Amazon Alexa scientists find ways to improve speech and sound recognition


How do assistants like Alexa discern sound? The answer lies in two Amazon research papers scheduled to be presented at this year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP) in Brighton, UK. Ming Sun, a senior speech scientist in the Alexa Speech group, detailed them this morning in a blog post.

“We develop[ed] a way to better characterize media audio by examining longer-duration audio streams versus merely classifying short audio snippets,” he said, “[and] we used semisupervised learning to train a system developed from an external dataset to do audio event detection.”

 

The first paper addresses the problem of media detection — that is, recognizing when voices picked up by an assistant's microphones originate from a TV or radio rather than from a human speaker in the room. To tackle this, Sun and colleagues devised a machine learning model that identifies characteristics common to media sound, regardless of content, in order to distinguish it from live speech.
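Amazon's actual model is a neural classifier over learned audio embeddings, which the source does not detail. Purely as an illustration of the core idea — pooling frame-level statistics over a longer audio window before deciding, rather than classifying a short snippet in isolation — here is a toy sketch. The energy/pause heuristic and all function names are invented for illustration and are not Amazon's method:

```python
import math

def frame_energies(samples, frame_len=160):
    """Split a signal into fixed-length frames and compute per-frame energy."""
    return [
        sum(x * x for x in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def pooled_features(energies):
    """Pool frame-level energies over a long window into stream-level stats:
    mean energy and the fraction of near-silent frames."""
    mean_e = sum(energies) / len(energies)
    silence_ratio = sum(1 for e in energies if e < 0.1 * mean_e) / len(energies)
    return mean_e, silence_ratio

def looks_like_media(energies, silence_threshold=0.2):
    """Toy decision rule: continuous media audio (soundtrack, ads) has few
    silent frames, while live speech is interspersed with pauses."""
    _, silence_ratio = pooled_features(energies)
    return silence_ratio < silence_threshold

# Synthetic signals: "media" plays continuously; "speech" alternates
# 0.1-second bursts of talk with equal-length pauses.
media = [math.sin(0.3 * n) for n in range(16000)]
speech = [math.sin(0.3 * n) if (n // 1600) % 2 == 0 else 0.0
          for n in range(16000)]

print(looks_like_media(frame_energies(media)))   # True  (steady stream)
print(looks_like_media(frame_energies(speech)))  # False (pause-heavy stream)
```

The point of the sketch is the pooling step: a single 10-millisecond frame of TV dialogue is indistinguishable from live speech, but statistics aggregated over a longer window can separate the two — which is exactly the longer-duration framing the paper argues for.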

 

Philippe J DEWOST's insight:

Alexa, listen to me, not the TV!
