
Intersection Inc. visited Calit2 (the California Institute for Telecommunications and Information Technology), based at UC San Diego, for discussions with researchers from the Institute for Neural Computation's Machine Perception Laboratory, specifically in the areas of robotics and human-machine interaction (HMI).

The lab is currently developing "emotionally relevant" robots in a joint collaboration with the Early Play and Development Laboratory (University of Miami) and the Movement Control Laboratory (University of Washington).


We had intriguing and inspiring conversations with UCSD scientist and researcher Deborah Forster about Calit2's current projects, and how their research could apply to industries today such as teaching services, consumer products and automobiles. Our focus was to understand how to implement this technology in future products, and specifically how to apply it to the design approach and mindset of Intersection's HUMIIN™.


We were shown Diego-san, the latest of Calit2's emotionally relevant robots, which was preceded by the popular "emotionally responsive" Albert Einstein robot, still kept at UCSD. Modeled on a baby boy, though built larger than life-size to keep costs down, Diego-san stands around 4 feet tall and weighs 66 pounds. The robot's life-like face mask is the platform on which it displays its emotions; this soft, rubber-like face surrounding the complex mechanisms underneath is surprisingly realistic. We were told the original baby face mold was simply too real, and too creepy, so it was replaced with a face featuring slightly exaggerated, cartoon-like features.

Diego-san uses HD cameras for eyes, which enable it to understand, react to, learn and mimic facial expressions such as frustration, happiness or confusion. UCSD scientists completed research into infant behavior and the way a mother interacts with her child, and are using that information to develop Diego-san's software. This software will allow the robot to learn to control its body and to interact and communicate with other people the way a human baby naturally learns to.

The robot contains 44 individual pneumatic joints. Bi-directional air within the joints creates a highly pressure-sensitive system that provides both passive and active resistance when the joints are moved, allowing Diego-san to respond to the sense of touch in a human-like way. The body hardware was designed by Japan's Kokoro Co., and the head, which houses 27 of those pneumatic joints, by Hanson Robotics of Dallas, Texas.
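To make the camera-to-joints pipeline above concrete, here is a minimal sketch of an expression-mimicry loop: classify an expression from camera input, then ease the face's joints toward a matching pose. Everything here is illustrative, not Calit2's actual software; the joint names, target poses, and the easing gain are all invented assumptions, and the gradual easing only loosely stands in for the compliant motion of pneumatic actuators.

```python
# Hypothetical sketch of an expression-mimicry loop: given a detected
# expression, drive facial joints toward a matching pose. Joint names
# and pose values are invented for illustration.

from dataclasses import dataclass

# Illustrative target poses: joint name -> normalized position (0..1).
EXPRESSION_POSES = {
    "happiness":   {"mouth_corners": 0.9, "brow_raise": 0.6, "eyelids": 0.8},
    "frustration": {"mouth_corners": 0.2, "brow_raise": 0.1, "eyelids": 0.5},
    "confusion":   {"mouth_corners": 0.4, "brow_raise": 0.9, "eyelids": 0.7},
}

@dataclass
class FaceController:
    """Tracks current joint positions and eases them toward a target pose."""
    joints: dict

    def step_toward(self, pose: dict, gain: float = 0.5) -> None:
        # Move each joint a fraction of the way to its target, a rough
        # stand-in for compliant, pressure-limited actuator motion.
        for name, target in pose.items():
            current = self.joints.get(name, 0.5)
            self.joints[name] = current + gain * (target - current)

def mimic(controller: FaceController, detected_expression: str, steps: int = 5) -> None:
    """Ease the face toward the pose for a detected expression."""
    pose = EXPRESSION_POSES[detected_expression]
    for _ in range(steps):
        controller.step_toward(pose)

# Start from a neutral face and mimic a detected "happiness" expression.
face = FaceController(joints={"mouth_corners": 0.5, "brow_raise": 0.5, "eyelids": 0.5})
mimic(face, "happiness")
```

In a real system the `detected_expression` string would come from a vision model running on the camera feed; here it is simply passed in by hand.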


The next project we were shown was the "Ruby" series of development models: a robot designed to aid the learning process in the classroom, specifically for younger students. Three iterations, the 4th, 5th and 6th generations of the learning robot, were in the project room. The near-humanoid Ruby is designed to remain stationary in a classroom, with a reactive, positive face on a screen where a human face would naturally be. A larger interactive touch screen on the torso lets students complete learning requirements and games while interacting through voice and touch.

We discussed the challenges of designing and developing a robot for this specific task, including findings from immersive research in which Ruby was placed in an actual classroom environment. To investigate how beneficial Ruby was in the classroom, researchers programmed it with 10 Finnish words, a language no child in the classroom spoke, and analyzed how well the class learnt them. One conclusion from this research was that children who used Ruby in frequent short bursts, as opposed to one long sitting, learnt the vocabulary faster.

UCSD researchers were able to analyze this at the level of individual students because Ruby can differentiate between each student in the class, using 50,000 images taken over 28 days (earlier Ruby versions used 7,000) to memorize and recognize them. Ruby can also recognize when two separate students are in front of it. This individual analysis allows Ruby to provide teachers with valuable feedback about friendships and relationships which may otherwise go unnoticed.

Ruby can also hold and recognize objects given to it by students; this was demonstrated with soft toys placed in its hand, which Ruby analyzed via its HD cameras before announcing each object's name. Younger children enjoy giving their possessions away temporarily for the gratification of having them recognized. They want a companion or a friend, not another teacher, and this is a way for students to connect with Ruby socially while learning is underway at the same time. This companionship is made even more human by features such as Ruby's wake-up function: after nap time, students would poke the robot to wake it up, and Ruby would respond by blinking its digital eyes and displaying typical tired behavior. This eventually led the students to learn that this was not an acceptable way to treat a friend. Though currently expensive, Ruby is still in development, and researchers aim for it to cost less than USD $3,000 in the future.
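The short-bursts finding above rests on per-student session data. As a rough illustration of that kind of analysis, here is a sketch that groups invented session logs by practice style and compares vocabulary retained per minute of practice. The data, field names, and threshold are all made up; the actual analytics behind the Ruby study are not public.

```python
# Hypothetical sketch of per-student learning analytics: compare
# vocabulary retention for students practicing in frequent short bursts
# versus one long sitting. All data and names are invented.

from collections import defaultdict

# Each log entry: (student_id, session_minutes, words_retained_after_session)
session_logs = [
    ("ana", 5, 2), ("ana", 5, 4), ("ana", 5, 6),   # three short bursts
    ("ben", 15, 4),                                 # one long sitting
]

def retention_by_style(logs, burst_threshold_minutes=10):
    """Group students by practice style and report retention per minute."""
    per_student = defaultdict(list)
    for student, minutes, retained in logs:
        per_student[student].append((minutes, retained))

    summary = {}
    for student, sessions in per_student.items():
        # A student counts as "short bursts" if every session is brief.
        style = ("short_bursts"
                 if all(m <= burst_threshold_minutes for m, _ in sessions)
                 else "long_sitting")
        total_minutes = sum(m for m, _ in sessions)
        final_retained = sessions[-1][1]  # retention after the last session
        summary[student] = {
            "style": style,
            "minutes": total_minutes,
            "words_per_minute": final_retained / total_minutes,
        }
    return summary

result = retention_by_style(session_logs)
```

With these invented numbers, the short-burst student ends up with a higher words-per-minute figure for the same total practice time, mirroring the direction of the finding described above.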


Calit2 researchers emphasized that their research at UCSD is not aimed at finding replacements for humans in the working environment, but rather at creating collaborators that help humans maximize productivity.


-Sam McCafferty
