5 ways AI systems are now able to mimic the human senses

  • Artificial intelligence has become incredibly advanced.
  • Some systems can detect transparent objects or distinguish thousands of tastes or smells.
  • Here are five ways in which AI projects replicate our five senses.

Since research in robotics began, humans have been trying to create machines in their own image.

From machines that can move like humans to machines that can feel and think as we do, people have long been on the hunt for ways to, for all intents and purposes, recreate themselves in machine form.

While this may have previously seemed like an unattainable goal, systems are beginning to emerge that can distinguish as many tastes, smells, and textures as any human being.

An institution dedicated to developing human-like machines is the Massachusetts Institute of Technology (MIT). One area it focuses on is how to give robots a sense of touch. Meanwhile, the French company Aryballe is working on AI systems that can distinguish between thousands of different odors.

Although voice assistants like Alexa are already household names, many technologies go far beyond voice recognition – here are five examples of AI projects that seem to replicate the human senses.

1. Robots that can ‘see’ transparent objects

Although robots do not have a sense of sight as such, some have infrared sensing systems that allow them to identify an object by its shape.

Robots often struggle when confronted with transparent objects such as glass bottles or plastic cups, mainly because their depth sensors fail to register these surfaces properly and capture only vague shadows.

Robots sometimes struggle to identify transparent objects such as glass bottles or plastic cups.

However, a team of researchers at Carnegie Mellon University in the US recently created a system that lets robots use the vague shadows captured by their sensors to “fill in” the missing information and reconstruct the shapes of transparent objects, according to The Wall Street Journal.

The researchers combined a depth sensor with a standard camera to capture the shades of red, green and blue around the edges of transparent objects.

They then improved the system so that robots could recognize these visual color cues, allowing a robotic arm to automatically adjust its grip and pick up the objects.
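
To make the idea concrete, here is a minimal sketch of the general approach described above – filling holes in a depth map using nearby valid readings, with color edges from the RGB camera flagging where the filled values are least reliable. It is an illustration only, not CMU's actual system; the function name, threshold and confidence values are assumptions.

```python
# Minimal sketch (not the CMU system): fill holes in a depth map where the
# sensor failed on transparent objects, using the nearest valid reading, and
# use RGB edges to mark low-confidence fills. Names/thresholds are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_missing_depth(depth, rgb, edge_thresh=30.0):
    """depth: HxW array with 0 where the sensor returned nothing.
    rgb:   HxWx3 uint8 image aligned with the depth map."""
    missing = depth == 0

    # For every missing pixel, copy the depth of the closest valid pixel.
    _, idx = distance_transform_edt(missing, return_indices=True)
    filled = depth[idx[0], idx[1]].astype(float)

    # Crude color-edge map: strong gradients often mark the rim of a
    # transparent object, where the true depth jumps.
    gray = rgb.astype(float).mean(axis=2)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_thresh

    # Filled pixels sitting on a color edge get lower confidence, so a grasp
    # planner can treat them cautiously.
    confidence = np.where(missing & edges, 0.5, 1.0)
    return filled, confidence
```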

2. Hearing aids that can ‘hear’ voices over background noise

Carnegie Mellon researchers also created a database of digitized sounds and images of all kinds of household items, so that a machine learning system could learn to identify each sound correctly.

According to the researchers, the robot was able to identify objects it could not see, using sound alone, about 75% of the time.
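
As a rough illustration of that setup, the sketch below trains a simple classifier on labelled recordings of household objects so it can name an object from its sound alone. The feature extraction, toy data and object labels are all assumptions, not the CMU researchers' method.

```python
# Minimal sketch: identify a household object from its sound alone by
# matching a crude audio fingerprint against labelled training clips.
# The features, labels and fake audio below are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sound_features(waveform, n_bins=128):
    """Crude fingerprint: magnitude spectrum resampled to n_bins values."""
    spectrum = np.abs(np.fft.rfft(waveform))
    return np.interp(np.linspace(0, len(spectrum) - 1, n_bins),
                     np.arange(len(spectrum)), spectrum)

# Stand-in training data: short clips of objects being struck or dropped.
labels = ["mug", "keys", "book", "mug", "keys", "book"]
clips = [np.random.randn(16000) for _ in labels]  # 1 s at 16 kHz, fake audio

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit([sound_features(c) for c in clips], labels)

# Guess which object produced an unseen sound.
print(clf.predict([sound_features(np.random.randn(16000))])[0])
```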

Hearing aid manufacturers could use the technology to make cochlear implants.

Technology is also being developed that allows AI to isolate and differentiate sounds – for example, separating voices from background noise.

Oticon Inc., a hearing aid manufacturer, is investigating how to build cochlear implants that use neural networks.

These implants run algorithms that have been fed millions of speech samples, with and without background noise, so that they can automatically isolate voices from the noise.

This could allow people with certain types of hearing loss to hear speech more clearly.
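
A minimal sketch of that training idea, assuming a simple spectrogram representation: a small neural network learns a mask that, applied to a noisy frame, pushes the result toward the clean speech. The layer sizes, frame size and toy data are assumptions, not Oticon's model.

```python
# Minimal sketch (not Oticon's model): learn a per-frequency mask that
# suppresses background noise in a noisy spectrogram frame. Sizes and the
# random stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

N_FREQ = 257  # assumed number of spectrogram bins per frame

class MaskNet(nn.Module):
    """Predicts a 0..1 mask over the frequency bins of one noisy frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FREQ, 512), nn.ReLU(),
            nn.Linear(512, N_FREQ), nn.Sigmoid(),
        )

    def forward(self, noisy_frame):
        return self.net(noisy_frame)

model = MaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for paired frames: the same speech with and without added noise
# (in practice, millions of real samples).
noisy = torch.rand(64, N_FREQ)
clean = torch.rand(64, N_FREQ)

for step in range(100):
    denoised = model(noisy) * noisy   # apply the learned mask
    loss = loss_fn(denoised, clean)   # push the result toward the clean voice
    opt.zero_grad()
    loss.backward()
    opt.step()
```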

3. Robots that can ‘smell’ burning or gas leaks

Aryballe is an AI software company whose technology mimics the human olfactory system using biosensors and machine learning.

The sensor picks up odor molecules in the air and encodes them as data.

This technology can help avert emergency situations if it can detect gas leaks.

The AI system then collects this data and combines it with a database containing thousands of different odors.

After cross-referencing the collected data with those in the database, the system can determine what type of odor it is.

This technology could be life-saving if, for example, it were combined with a system that turns off an oven before food burns, or if it could detect a gas leak.
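
A minimal sketch of the cross-referencing step, assuming each reading is encoded as a fixed-length numeric signature: the new signature is compared against a small reference database and the closest known odor is reported. The signature length, database entries and alert threshold are assumptions, not Aryballe's format.

```python
# Minimal sketch: match an encoded odor "signature" against a reference
# database using cosine similarity. Signature length, entries and the
# threshold are illustrative assumptions, not Aryballe's format.
import numpy as np

SIGNATURE_DIM = 64  # assumed length of the encoded sensor response

# Hypothetical reference database: one signature per known odor.
odor_db = {
    "natural_gas": np.random.rand(SIGNATURE_DIM),
    "burning_food": np.random.rand(SIGNATURE_DIM),
    "coffee": np.random.rand(SIGNATURE_DIM),
}

def identify_odor(reading):
    """Return the database odor whose signature is most similar to the reading."""
    best_name, best_score = None, -1.0
    for name, ref in odor_db.items():
        score = reading @ ref / (np.linalg.norm(reading) * np.linalg.norm(ref))
        if score > best_score:
            best_name, best_score = name, float(score)
    return best_name, best_score

# Example: cross-reference a new sensor reading against the database.
name, score = identify_odor(np.random.rand(SIGNATURE_DIM))
if name == "natural_gas" and score > 0.9:
    print("Possible gas leak detected")
```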

4. Systems that can ‘taste’ thousands of foods

Gastrograph AI is a platform created by Analytical Flavor Systems Inc.

It predicts how people will react to new foods, so that developers and marketers can use consumer taste data to gauge which products will perform better or worse in a specific market.

The system uses data from thousands of consumers who have rated thousands of products via a mobile app across various flavor parameters and categories.

Through machine learning, the platform can determine which taste and preference patterns work best in each location.

“We have modeled over 1,000 flavor signatures to date (and counting) that are easy to interpret by formulators,” the company says on its website. “Do you have a rare Portuguese flavor? Find other major consumer groups for it based on palate and perception data. Do you have a flavor that has not been modeled yet? Create a new signature and Gastrograph AI will optimize it for your audience.”
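
The general shape of such a prediction can be sketched very simply: fit a model on flavor-profile features and the ratings one consumer group gave past products, then score a new profile for that group. The features, data and model choice below are illustrative assumptions, not Gastrograph AI's method.

```python
# Minimal sketch: predict how one consumer group would rate a new flavor
# profile, based on its ratings of past products. Features, scales and the
# model choice are illustrative assumptions, not Gastrograph AI's method.
import numpy as np
from sklearn.linear_model import Ridge

# Columns: sweetness, bitterness, saltiness, smokiness (assumed 0-5 scale).
past_products = np.array([
    [3.0, 1.0, 0.5, 0.0],
    [1.0, 3.5, 0.2, 2.0],
    [4.0, 0.5, 1.0, 0.5],
    [2.0, 2.0, 2.5, 1.5],
])
# Average rating each product received from one consumer group (1-7 scale).
group_ratings = np.array([5.8, 3.1, 6.2, 4.4])

model = Ridge(alpha=1.0).fit(past_products, group_ratings)

# Score a new flavor profile for the same group.
new_product = np.array([[3.5, 0.8, 0.7, 0.3]])
print(f"Predicted rating: {model.predict(new_product)[0]:.1f}")
```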

5. AI that can ‘feel’ surfaces

GelSight is a technology developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

The technology enables robots to determine the shape, size and material of a surface using a touch sensor, which means that the sense of touch can be digitized with very high precision.

Yunzhu Li, a researcher at MIT, is working on an AI system capable of relating an image of a surface to how that surface feels to the touch.

“Humans develop skills from experience through our lives; neural networks can learn much faster,” Li told the Wall Street Journal.

By collecting data from more than 200 objects that were touched thousands of times with a GelSight sensor, Li created a dataset the AI uses to learn the correspondence between visual data and touch data.
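
A minimal sketch of that visual–tactile pairing: two small networks embed an image patch and a tactile reading of the same surface into a shared space, and training pulls matching pairs together. The input sizes, network and loss are illustrative assumptions, not the MIT system.

```python
# Minimal sketch (not the MIT system): embed image patches and GelSight-style
# tactile readings into a shared space and train matching pairs to agree.
# Input sizes, network and loss are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a flattened sensor reading to a shared 32-dim unit vector."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 32))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

image_enc = Encoder(in_dim=64 * 64)   # assumed 64x64 grayscale image patch
touch_enc = Encoder(in_dim=32 * 32)   # assumed 32x32 tactile height map

params = list(image_enc.parameters()) + list(touch_enc.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Stand-ins for paired samples: image i and touch reading i come from the
# same surface (in practice, thousands of touches on 200+ objects).
images = torch.rand(16, 64 * 64)
touches = torch.rand(16, 32 * 32)

for step in range(50):
    zi, zt = image_enc(images), touch_enc(touches)
    # Pull each matching image/touch pair together in the embedding space.
    loss = (1.0 - (zi * zt).sum(dim=-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```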
