Summary review

A totally outstanding book, bringing an entirely new spin to how we can imagine, and thus invent and build, a symbiotic and meaningful life with robots.

While fantasizing about the vision for jetpack cognition lab on Jan 1st, 2020, I was noting down “a new breed of robots”, with my own idea going more along the lines of the self-learning capacity we want to build into our robots. Fast-forward to Sept 27th, 2021, when I first seem to have encountered The New Breed: What Our History with Animals Reveals about Our Future with Robots, by Kate Darling, a researcher of human-robot interaction, robot ethics, and IP theory & policy at the MIT Media Lab. A few weeks ago I was finally ready to dive in, and I was not disappointed. All this “new breed” talk is about how we get to a new way of thinking about robots, both for the general public and for roboticists. We need to elevate our thinking here in order to get a new generation of robots on the market and really make a positive difference for many people. Darling’s book is of great help in spinning that story.

The New Breed (e-book) by Kate Darling

In summary, Darling wittily and successfully proposes to use our view of animals as a blueprint for robots as they could be. The book is brimming with exciting details about our relationship with animals and the way our psychology gets in the way of establishing firm ethics for dealing with animals and establishing their rights. This relationship is projected onto the situation with robots: how to design them, how to treat them, and whether it is OK to kick them. It seems that with animals we are mostly doing all of the kind stuff for our own psychological and emotional well-being. In other words, whether with animals or with robots, the way we treat them is more about the effect, real or imagined, this has on us than on the other creature itself.

Kids growing up in Alexa households start shouting commands rather than asking for stuff.

I really enjoyed both the general proposal for contemporary robot story-telling put forward in the book and the prose itself, which is simple, readable, and stealthily funny. Totally recommended for roboticists and everyone else alike. In addition, it could be a very interesting read for people interested in animal rights & vegetarianism, as it unrolls the history of animal rights in (mostly) western societies.

Annotated quotes

That was not the only thing that got me excited, so let’s isolate some core points that did so too and look at them more closely in turn, using quotes from the book.

  1. Public imagination about robots is strongly centered on the idea of human-like robots, which look like we do and can do everything we can do, only better.
  2. The state of robotics is nowhere near the science fiction.
  3. Robots allow us to learn something about ourselves.

Public imagination and story-telling

Imagination is largely fueled by the stories that we know. Until recently, robot stories were science fiction, mediated by literature and movies. There are some classic lingering stories like the Golem, Frankenstein, Karel Čapek’s play, and maybe Metropolis. Influential later examples include the Star Wars and Terminator universes, among many more. Already,

As technology critic Sara Watson points out, our stories, too often, compare robots to humans. [p. xiv]

which is nourishing a situation where

Many people are not thrilled by the anticipated robot takeover. Our concerns are particularly centered on the idea of creating something like us, with humanlike agency, that will take our steering wheels and harm us or our children. [p. xii]

But creativity and wild imagination to the rescue. Since

When we assume that robots will inevitably automate human jobs and replace friendships, we’re not thinking creatively about how we design and use the technology, and we don’t see the choices we have in shaping the broader systems around it. [p. xiv]

From an analytical perspective it can be said that

Inexperience with robots and their inner workings may make certain machine behavior seem magically lifelike to people. [4. Robots versus Toasters > p. 92]

but the notion of inexperience is too general to be actionable.

I’ve heard curse words directed at the public intellectuals who extol the dangers of robot takeovers, and complaints that the big-name alarmists are mostly physicists, philosophers, and CEOs who don’t have in-depth knowledge of artificial intelligence or robotics. [1. Workers Trained and Engineered > Page 6]

As roboticists, it is our responsibility to share our experience, a purportedly more differentiated one. In doing this we certainly do not want to go about it by lecturing. Rather, we want to rely on a universal currency and provide alternative stories of the future. Darling picks up on animals for this purpose, which is a great choice. Everyone knows animals. They provide infinitely varied examples of, and inspiration for, ways of being different from the human one while being unquestionably real. In doing this, a lot of the ambiguity comes to the surface that was present while our current relationship with, and general view of, animals was historically being shaped. The close look she takes reveals that a lot of that ambiguity still persists. And all of this makes for some awesome stories. In the meantime,

  • No, the ox did not replace the farmer. But yes, the ox does something farmer-like.
  • No, hunting dogs did not replace the hunter. But yes, the hunting dog does something hunter-like.
  • No, the adoption of pets did not replace friendship. But yes, the pet does something friendship-like.

OK, and yes, you guessed it: while not replacing pets, social robots do pet-like things where a real animal is not an option.

So, if we do think creatively, we can come up with all these robot ideas where the robot meaningfully integrates with our activities, partially supplanting things we do ourselves, and at the same time extending our overall capacity with things that we could not do before. A lot of this is about how we think about organizing our activities and tasks. House cleaning is complex, and parts of it can be done by a robot. Garden work is complex, and parts of it can be done by a robot. Companionship is complex, and parts of it can be done by a robot. Everything we do is complex, and parts of it can be done by a robot. Art, relaxation, being social.

State of robotics

Awesome stories, it was said. Let’s start with some funny ones that the book adds to a growing list of robot, AI & tech fails.

Our advancements in artificial intelligence, as amazing as they are, haven’t gotten anywhere near understanding how to create the adaptable, flexible general intelligence that a human, even a toddler, has. [1. Workers Trained and Engineered > p. 14]

Robot fails

The vacuuming robot Roomba represents the most successful type of consumer robot to date, with iRobot selling 20 million units in 2018. These robots have real utility within a narrow scope. Early versions, and apparently also more recent ones, will fall down stairs. But, hilariously, they are also known to mindlessly distribute dog poop if they come across a pile. The robot cannot sense this and just keeps doing its thing.

The company had invested in a modern art piece: a robotic office copy machine that was designed to wander the halls, randomly creating and spitting out copies of nothing. I only got to see it once, because, sadly, it wasn’t able to recognize stairs and eventually fell down them. [1. Workers Trained and Engineered > p. 9]

The absurdity of this robot earns some entertainment points. And then there is another story of robots and stairs.

Musk had promised to produce five thousand Model 3 electric cars per week in 2018, but Tesla couldn’t even make half of them. What went wrong? According to analysts, the robots, while able to work consistently and precisely, weren’t able to recognize the litany of minor defects that can happen during the manufacturing process—slightly crooked parts, for example—leading to problems down the line. [1. Workers Trained and Engineered > p. 13]

The “resolution” of the robots’ perception and calibration was not sufficient at all levels of scale, so these micro-defects evaded the system.
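To make the point concrete, here is a toy sketch of that failure mode in Python; the names and numbers are made up for illustration, not taken from the book. Any deviation smaller than the inspection system’s effective resolution is reported as fine, even when it lies outside the assembly tolerance.

    # Toy illustration of the resolution problem; numbers are hypothetical.
    SENSOR_RESOLUTION_MM = 1.0   # smallest deviation the inspection system can detect
    TOLERANCE_MM = 0.3           # assembly tolerance for the part

    def inspection_flags(deviation_mm: float) -> bool:
        """True if the inspection system notices the deviation at all."""
        return deviation_mm >= SENSOR_RESOLUTION_MM

    for deviation in (0.0, 0.5, 1.5):
        print(f"{deviation:.1f} mm: flagged={inspection_flags(deviation)}, "
              f"actually out of tolerance={deviation > TOLERANCE_MM}")

    # The 0.5 mm case is out of tolerance but passes inspection: a micro-defect
    # that evades the system and only shows up as a problem down the line.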

Let’s add this one from 2022: at a public chess tournament in Moscow, a chess-playing robot pinched the fingers of a seven-year-old kid and apparently broke them.

We have been calling these grounding fails. By grounding we mean the completeness of the connection of an agent’s actions out through the sensorimotor layer and down to physical reality. As the Tesla story clearly shows, physical reality does have its special surprises, even for our own reasoning at times.
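A minimal sketch of what a grounding fail might look like in code, with hypothetical names: the policy is defined entirely over the percept, so anything in physical reality that the sensor layer does not ground simply cannot influence the robot’s behavior.

    from dataclasses import dataclass

    @dataclass
    class World:
        obstacle_ahead: bool = False
        pile_on_floor: bool = True   # physically real, but outside the sensor model

    def sense(world: World) -> dict:
        # Only part of physical reality makes it through the sensorimotor layer.
        return {"obstacle": world.obstacle_ahead}

    def act(percept: dict) -> str:
        # The policy only ever sees the percept, never the world itself.
        return "turn" if percept["obstacle"] else "drive_and_vacuum"

    print(act(sense(World())))   # -> "drive_and_vacuum", straight through the pile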

Ourselves

What does this tell us about ourselves?

People are more talented than we give them credit for. [1. Workers Trained and Engineered > p. 12]

For us it is easy to do all these things: sweep the floor without falling down the stairs, play chess without physically injuring our opponent, even if our hands touch over the board. It only appears easy to us, though, because we simply cannot perceive the internal activity that makes sure these things are taken care of.

This is the great introspective fallacy that so many, including roboticists of every generation, fall victim to.

That is a grand one, and there are more things robots can teach us about ourselves and about the skills we actually have but do not perceive as such. This only becomes obvious when you try to replicate these skills in a machine. [tbd]

Movement

Then this bit on movement perception really kicked in. Did you know that there is brain circuitry tuned to movement in particular? Thanks to this,

… physical robots trigger another piece of our biological hardwiring: our perception of movement.

Our sense of movement is so deeply engrained that it responds to entirely abstract situations.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives.

This highlights our inclination to anthropomorphize, and more particularly to attribute agency to anything that moves, except maybe ballistics. But our perception is even more nuanced, allowing us to perceive psychological states in the movement of an “agent”.

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background.

All of this makes sense in evolutionary terms and also tells us about how deeply this capacity is rooted in our brain.

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents.

It is basically identical to our inclination to see a face in anything with two round things next to each other, which we take as eyes. The technical term for this is pareidolia.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement.

All quotes from [4. Robots versus Toasters > p. 97-99]

This realization allows us to create extremely simple robots that still have a profound effect if they manage to activate this motion sensitivity in us. The studies discussed above only refer to visual stimuli. With robots this can easily be extended to the tactile realm, creating haptic stimuli that let you perceive life-like motion with your eyes closed. The simplicity of this approach allows us to create many robot designs which are far from any shape we know, and in this way keep expectations at a minimum and retain the ability to surprise.
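As a rough sketch of that idea (function names and parameters are my own assumptions, not from the book): motion tends to read as alive when it is smooth but irregular, so even a low-pass filtered random drive signal, sent to a motor or a vibration actuator, can be enough.

    import random

    def lifelike_drive(steps: int, smoothing: float = 0.95) -> list:
        """Low-pass filtered random walk: smooth but irregular, a bit like fidgeting."""
        level, target = 0.0, 0.0
        samples = []
        for _ in range(steps):
            if random.random() < 0.05:            # occasionally pick a new "intention"
                target = random.uniform(-1.0, 1.0)
            level = smoothing * level + (1 - smoothing) * target
            samples.append(level)
        return samples

    # The same samples could drive a vibration motor instead of a visible limb,
    # so the "movement" can be felt with the eyes closed.
    samples = lifelike_drive(200)
    print(f"drive range: {min(samples):.2f} .. {max(samples):.2f}")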

