A new research initiative aims to make voice recognition technology more useful for people with a range of diverse speech patterns and disabilities.

The Speech Accessibility Project, which launched Monday, is spearheaded by the University of Illinois at Urbana-Champaign. Amazon, Apple, Google, Meta and Microsoft are all supporting the project, as well as a handful of nonprofit disability organizations.

Speech recognition, which can be found in voice assistants like Siri and Alexa as well as translation tools, has become a part of many people's everyday lives. But these systems don't always recognize certain speech patterns, particularly those associated with disabilities. That includes speech affected by Lou Gehrig's disease or ALS, Parkinson's disease, cerebral palsy and Down syndrome. As a result, many people may not be able to effectively use these speech technologies.

The Speech Accessibility Project will work to change this by creating a dataset of representative speech samples that can be used to train machine learning models, so they can better understand a range of speech patterns.

"One of the groups that would benefit the most are people who have physical disabilities of many different kinds. And too often, those are the people for whom the speech technology doesn't work," said Mark Hasegawa-Johnson, a professor of electrical and computer engineering at UIUC who's leading the project.

"Speech technology relies on training data," he added. "It's an artificial intelligence technology, so it requires us to have enough data to be able to develop technology that will actually work for people with a particular kind of speech pattern. And too often in the past, we just haven't had enough information about the speech patterns of people with different kinds of disabilities or with different kinds of atypical speech patterns."