MIT develops AI that can tell whether you are happy or not
The team is keen to point out that they developed the system with privacy firmly in mind
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say they have come closer to a potential solution: an artificially intelligent, wearable system that can predict whether a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.
The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analysed by their algorithms. The team is keen to point out that they developed the system with privacy firmly in mind: the algorithm runs locally on a user’s device in order to protect personal information.
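To make the idea concrete, the sketch below shows the kind of on-device pipeline the article describes: short conversation segments are summarised as speech and vital-sign features and labelled positive, neutral, or negative. The feature set, synthetic data, and simple classifier are assumptions for illustration only, not the CSAIL/IMES team’s actual model.

# Purely illustrative sketch (Python, assumed libraries: numpy, scikit-learn).
# Not the researchers' method; the features and data are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-segment features: [mean pitch (Hz), speech energy,
# speaking rate (words/s), heart rate (bpm), skin conductance].
def synthetic_segments(n, label):
    base = {  # rough, invented cluster centres per conversational tone
        "negative": [180.0, 0.3, 2.0, 95.0, 0.8],
        "neutral":  [200.0, 0.5, 2.5, 75.0, 0.4],
        "positive": [230.0, 0.7, 3.0, 80.0, 0.5],
    }[label]
    return rng.normal(base, scale=[15, 0.1, 0.3, 8, 0.1], size=(n, 5))

labels = ["negative", "neutral", "positive"]
X = np.vstack([synthetic_segments(200, lab) for lab in labels])
y = np.repeat(labels, 200)

# Train a small classifier; in the scenario described, a model like this
# would run locally on the watch or phone, so raw audio and vitals never
# leave the device.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify one new (synthetic) conversation segment.
segment = synthetic_segments(1, "positive")
print(clf.predict(segment)[0])  # e.g. "positive"

Keeping both training data and inference on the device, as in this sketch, is one straightforward way to honour the privacy constraint the team highlights, since only the coarse positive/neutral/negative label ever needs to be shared.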
“Our next step is to improve the algorithm’s emotional granularity so that it’s more accurate at calling out boring, tense, and excited moments, rather than just labelling interactions as ‘positive’ or ‘negative’,” says graduate student Tuka Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”