An Experiment in Large-Scale Behavior Modification
The electric car company Tesla has implemented an unprecedented program designed to deploy self-driving technology to the public, with an emphasis on safety and control. Over 100,000 owners have signed up to become eligible for the software beta release, with eligibility based on statistics gathered on each driver’s performance over a minimum of 100 miles. When the beta entered wide distribution in November, about 1 out of 10 drivers qualified and received the beta software. Only when the software is deemed solid and foolproof will it be deployed generally. When will this happen, and what will it look like?
FSD at this time is rather like having a teenager with quick reflexes but limited distance vision drive your car for you. As Tesla refines and perfects this software, it wants as much data as can be safely gathered, from as many people as can be deemed safe. In addition, many users have paid full price for the software, so there is a need to deploy it in a timely fashion rather than refining it indefinitely before anyone can use it.
This project is one of the largest systematic attempts to monitor and shape behavior in a public setting, and it has elements of classic behavior modification, albeit in a commercially targeted project. It records the occurrence of four specific driving behaviors, all of which count against a driver’s chances of being allowed to try the new software: collision warnings, sharp turns, sudden braking, and too-close following. One may ask why these specific behaviors were singled out, and then what it means to have them monitored.
Based upon a considerable amount of driving data and accident reports, Tesla identified these four specific behaviors as predictors of future accidents. That is not to say that these behaviors cause accidents, but simply that when they are present, there is an increased likelihood of a future accident, regardless of whether the driver is at fault.
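To make the mechanism concrete, a penalty-based eligibility score of this kind might be sketched as follows. The four event categories come from the description above; the weights, the per-1,000-mile normalization, and the qualifying threshold are invented for illustration and are not Tesla’s actual Safety Score formula.

```python
# Hypothetical sketch of a penalty-based driver score.
# Event categories are from the article; all numeric values are
# assumptions for illustration, NOT Tesla's actual formula.

# Assumed penalty weight per event, per 1,000 miles driven
PENALTY_WEIGHTS = {
    "collision_warning": 8.0,
    "sharp_turn": 3.0,
    "sudden_braking": 5.0,
    "close_following": 4.0,
}

def driver_score(events: dict, miles: float) -> float:
    """Start at 100 and subtract weighted event rates
    (events per 1,000 miles); floor the result at 0."""
    if miles < 100:
        raise ValueError("minimum of 100 monitored miles required")
    per_thousand = 1000.0 / miles
    penalty = sum(PENALTY_WEIGHTS[kind] * events.get(kind, 0) * per_thousand
                  for kind in PENALTY_WEIGHTS)
    return max(0.0, 100.0 - penalty)

def eligible(events: dict, miles: float, threshold: float = 95.0) -> bool:
    """Assumed cutoff: only scores at or above the threshold qualify."""
    return driver_score(events, miles) >= threshold
```

Note how the design mirrors the point made above: the score does not judge fault, it simply counts the presence of the four predictor behaviors, normalized by distance driven.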
Several interesting behaviors emerge. One is a tendency to stop slowly, which can mean drifting into an intersection, or close to another car, when one would otherwise hit the brakes. Another is avoiding other vehicles by choosing routes that are less occupied. I quickly adapted to selecting the quiet but fast country roads that, if a little out of my way, still afforded a fast trip. A major limitation arises in unpredictable situations, such as a service vehicle stopped right in the street, leaving no alternative but to cross the no-crossing lines. As for keeping well behind the vehicle ahead, this tempts others to cut in, actually introducing more risk. The criteria also select for location. It is nearly impossible to get a good score in locations such as Manhattan or Los Angeles, because the dense and unpredictable traffic makes a high score unreachable. So while the behavior of traveling less busy roads is reinforced, the very act of living in an urban area has become a negative behavior.
Are these conditions fair? “I deserve the software; it’s not my fault if…” one might say. However, this is not a case of fair or unfair; it is simply a policy designed scientifically to avoid risk and ensure the best outcome. It is not designed to make people happy, or to deploy the most software. Given this tacit goal, all other considerations take second place. It is a case of regulating allostasis, not homeostasis.
So this is an example of shaping spontaneous behavior by applying a specific set of reinforcements and inhibitions, not unlike any learning paradigm. It changes how the brain works during driving. Rather than having to attend to many small adjustments and changes, the brain takes on a state of vigilance and stillness. It focuses on readiness to move, not movement itself. Research has shown that a brain in optimal rest is also in optimal readiness, as evidenced by EEG as well as behavioral findings. So drivers using the new system will be producing more SMR (sensorimotor rhythm), being more relaxed, and having a different type of symbiotic relationship with their vehicles. We have evolved from the horse, to the car, and now to the car of the future.
According to my sources, 120,000 Tesla owners signed up for the beta trial, and initially about 12,000 achieved a high enough score to get the software. So what can we say about these 1-in-10 drivers? Basically, they avoided sudden stops, did not turn too quickly, and did not follow other vehicles too closely. Do these conditions produce a safer driver? Experience showed that adhering to these requirements sometimes compromised safety. For example, avoiding sudden stops means that if a vehicle appears unexpectedly, the driver will hesitate to “slam on” the brakes, even when this is the best option.
I recall drifting toward an intersection with a yellow light, being careful not to press the brake. That was not the best way to handle the situation. Similarly, avoiding sharp turns means spending extra time at intersections and roundabouts, forcing following vehicles to slow down and, occasionally, issue a warning honk. Indeed, Teslas during this phase have gained a reputation for driving too slowly and for holding things up at intersections. Hardly a positive public image.
As another example, it was reported that a driving student failed the driving test because he did not press the brake pedal in order to stop. A Tesla can be set to slow to a stop whenever the accelerator is released, so using the brake pedal is often unnecessary. However, the student had learned a habit that, while appropriate in a Tesla, did not conform to the world’s expectations. Clearly, learning will be required in both directions, on the part of Tesla drivers as well as on the part of other drivers, before self-driving becomes the norm.
What are the benefits? When the system is working well, I am confident that the driver will be producing more SMR waves, in a state of rest but alert readiness. The driver learns to shift between manual and automatic control in a smooth and effortless fashion, without deliberate thought.
One of the interesting effects of the beta trial, which is also true of neurofeedback, is that one simply becomes more aware of things one may not have noticed in the past. Whether you come to a full stop, how closely you follow, how quickly you turn: these things are usually lost in the layers of automatic behavior that you don’t even think about. It is possible to drive for miles with the mind elsewhere and find oneself at the destination, not recalling the trip itself. With awareness comes the capacity to evaluate, and the capacity to change.
As it turns out, this trial is creating a fleet of drivers who share certain learned behaviors, for better or for worse. These would have little overall impact were it not for the other vehicles on the road. The effects of these changed driving habits become more visible when there are unforeseen vehicles, or when other drivers must accommodate the new behavior of self-driving vehicles. It is a dynamic system similar to social or family systems, in which changes in one component affect the others.
So we can learn a lot from this trial, above and beyond the question of how a technology innovator can interface with the public to improve software and deploy innovative functions. We can learn that when you select specific behaviors to reinforce, you may produce changes you did not anticipate. In neurofeedback, we choose specific EEG and biological signals to feed back to the subject, to train what we hope will be concentration, relaxation, focus, self-regulation, allostatic control, and so on. As we do this, we also need to be cognizant of what other changes we may be producing, and how these serve the overall goals. Neurofeedback, put into an overall context of self-efficacy and neuroplasticity, can teach people to find their way into and out of their mental and emotional situations, much the way a self-driving car can help you find your way to work, school, or wherever you need to go.
What is needed is to study drivers using different driving styles, while monitoring their brain and biological signs. We did exactly this kind of testing with General Motors, in the Corvette Reverse Test Drive.
We did find that the best drivers were calm, relaxed, and automatic in their actions. The way a driver should be, whether using manual, autopilot, or full self-driving.
It’s time to repeat this project with the Tesla team, to find out how advanced driving technology affects the driving experience at all levels.
We’re ready, if you are, to go deeper into what the new world of technology is doing for us, with us, and to us.