
Machine Learning in Robotics and Tactile Sensing

Research
Ivo Kosa
Junior ML Engineer

Robots can use RGB cameras and LiDAR to map environments, detect objects and estimate depth. However, vision alone isn’t enough. Cameras struggle in low light, objects can be occluded and small-scale surface details are hard to capture. When it comes to grasping fragile or irregular items, sight cannot replace touch. This is where tactile sensing becomes critical.

Why Robots Need Touch

Traditional sensors each have limitations:

  • 2D cameras lack depth information, so size and distance are ambiguous, and they fail in poor lighting.
  • RGB-D cameras help with object recognition and shape estimation but still suffer from occlusion and resolution issues.
  • LiDAR works well for large-scale 3D mapping but struggles with fine detail.
  • Tactile sensors provide direct, real-time feedback for grasp planning and slip detection but lack large-scale environmental awareness.

Types of Tactile Sensors

Tactile sensing can be implemented in several ways:

1. Optical Sensors

Measure deformation of a flexible membrane using structured light or marker arrays.

Pros: Highly information-dense, relatively inexpensive.

Cons: Bulky due to camera positioning.

2. Magnetic Sensors

Detect changes in local magnetic fields caused by deformation.

Pros: Small form factor, embeddable in thin membranes.

Cons: Complex manufacturing, lower resolution.

3. Piezoelectric / Capacitive Sensors

Measure pressure through changes in electrical properties: generated charge (piezoelectric) or capacitance (capacitive).

Pros: Capture complex surface dynamics.

Cons: Low resolution and tuning challenges.

Each approach trades off resolution, scalability and practicality.

Artificial Cilia: Combining Texture and Stiffness

One particularly interesting design is the Artificial Cilia Sensor.

This sensor uses:

  • Micro magnetic spheres infused into flexible cilia material
  • Mounted on top of a hollow silicone dome

The design enables:

  • Small-scale textural measurement via the cilia
  • Large-scale stiffness measurement via the deformable dome

This hybrid approach captures both surface detail and structural resistance, which is closer to how biological touch operates.

Experimental Setup

To test performance, a structured dataset was created:

  • 6 textures
  • 5 materials
  • 30 combinations
  • Two data collections with five grasp positions each
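The combination count follows directly from the grid of textures and materials. A quick sketch with placeholder labels (the post doesn’t name the actual textures or materials, so these are hypothetical):

```python
from itertools import product

# Placeholder labels standing in for the 6 textures and 5 materials
# described above (the actual names are not given in the post).
textures = [f"texture_{i}" for i in range(6)]
materials = [f"material_{i}" for i in range(5)]

# Every texture paired with every material gives the full grid.
combinations = list(product(textures, materials))
print(len(combinations))  # 30 texture/material pairs
```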

The sensor configuration comprised:

  • 2 sensors
  • 4 taxels per sensor
  • 3 axes per taxel
  • 24 output channels in total (2 × 4 × 3)

This results in time-series tactile signals representing force dynamics during grasping.
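Under these assumptions, a single grasp can be laid out as a channels-by-time grid. The window length of 200 timesteps is illustrative, not from the post, and random values stand in for real force readings:

```python
import random

SENSORS, TAXELS, AXES = 2, 4, 3      # from the setup above
CHANNELS = SENSORS * TAXELS * AXES   # 24 output channels
TIMESTEPS = 200                      # assumed sampling-window length

# One grasp recorded as a CHANNELS x TIMESTEPS grid of force readings;
# random noise stands in for real sensor output here.
grasp = [[random.gauss(0.0, 1.0) for _ in range(TIMESTEPS)]
         for _ in range(CHANNELS)]
print(len(grasp), len(grasp[0]))  # 24 200
```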

Applying Machine Learning

To interpret this high-dimensional signal data, two models were explored:

  • 1D Convolutional Neural Network (CNN) across the time axis
  • Convolutional Autoencoder (CAE)

The goal: classify material and texture based purely on tactile data.
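The core operation of a 1D CNN is sliding a learned kernel along the time axis of each channel. A minimal sketch in plain Python, using a hand-picked kernel rather than learned weights (this illustrates the operation, not the authors’ actual model):

```python
def conv1d(signal, kernel):
    """Valid 1D convolution (cross-correlation, as in CNN layers):
    slide the kernel along the time axis and sum element-wise products."""
    k = len(kernel)
    return [sum(signal[t + i] * kernel[i] for i in range(k))
            for t in range(len(signal) - k + 1)]

# A crude edge-detecting kernel applied to one tactile channel: it
# responds where the force signal changes, e.g. at the moment of contact.
channel = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # toy force trace
edge_kernel = [-1.0, 1.0]
print(conv1d(channel, edge_kernel))  # [0.0, 0.0, 1.0, 0.0, 0.0]
```

In a real network, many such kernels run over all 24 channels at once and their weights are learned from the labelled grasp data.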

Results: In-Distribution vs Out-of-Distribution

Performance was evaluated under two conditions:

In-Distribution

Trained and tested on subsets of combined grasp data

  • ~94% material accuracy
  • ~97% texture accuracy
  • ~91% total accuracy

Out-of-Distribution

Trained on one grasp set, tested on another

  • ~81% material accuracy
  • ~85–89% texture accuracy
  • ~70–72% total accuracy
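The difference between the two conditions comes down to how the data is split. A minimal sketch, with illustrative grasp counts rather than the study’s actual ones:

```python
import random

# Each recording is tagged with the grasp position it came from
# (grasp IDs and counts here are illustrative, not the study's).
recordings = [{"grasp": g, "sample": s} for g in range(10) for s in range(3)]

# In-distribution: shuffle everything and hold out a fraction, so the
# train and test sets share the same grasp positions.
random.seed(0)
random.shuffle(recordings)
split = int(0.8 * len(recordings))
id_train, id_test = recordings[:split], recordings[split:]

# Out-of-distribution: train on one set of grasps, test on unseen ones,
# so the model must generalise to new contact geometries.
ood_train = [r for r in recordings if r["grasp"] < 5]
ood_test = [r for r in recordings if r["grasp"] >= 5]
print(len(ood_train), len(ood_test))  # 15 15
```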

The performance drop highlights a core robotics challenge: Generalising across unseen grasps and force patterns is significantly harder than learning within controlled distributions. This mirrors broader ML realities: robustness under distribution shift remains difficult.

Case Study: Amazon Robotics

In practice, this plays out on the warehouse floor in highly dynamic environments. Amazon’s robotic systems operate in fabric storage pods where items of different shapes, weights and textures are densely packed together. Using 6-axis force-torque sensors in the end-effector, the robot doesn’t just “see” the item; it feels resistance as it makes contact. As the gripper closes, tactile feedback detects micro-slips, pressure distribution and unexpected deformation. If a grasp is unstable or infeasible, the system can adjust grip force in real time or escalate to a human operator in a collaborative (co-bot) setup.

Touch plays a critical role in preventing damage and improving planning through self-supervised updates from previous runs. This is not fully autonomous robotics replacing humans; it is a collaborative, sensor-augmented system.

The Broader Picture

Tactile sensing shifts robotics from rigid automation to adaptive manipulation.

But the research also surfaces deeper challenges:

  • High-dimensional sensor data requires careful representation learning
  • Generalisation across objects and grasp styles remains difficult
  • Out-of-distribution robustness is still a bottleneck
  • Real-world systems need human fallback mechanisms

It’s an exciting sector, and it will be great to see how it continues to develop.
