Does the world really need another study of shadow AI? That was my first thought going into this project. Reading dozens of previous reports did not change that impression: there's a lot of shadow AI out there, and a lot of reports saying so.
But the more I read, the more apparent it became that something important was missing. The endless supply of reports was not meeting the actual demand. While existing research answered the question "is there shadow AI?", there wasn't much on the more important question: why?
The naive answer is that AI tools help workers accomplish their tasks, so of course they will use them. On the other hand, workers who want to keep their jobs also have good reason to abide by company policies, so it's not quite that simple. If we want an actor theory that can be operationalized to reduce risk from shadow AI, we need a more nuanced articulation of the incentives for and against using unsanctioned AI tools.
To understand why people use shadow AI, we need to be willing to consider more broadly why people do anything at all. Yes, we want to get our work done, and we want to avoid punishment, but we are also social creatures, driven mostly by emotions and the need for belonging, working with imperfect information to optimize for our perceived in-group's benefit. We need to add a little more texture to "people do what is good for them" to explain actual human behavior.
I won't recapitulate the entire report here, but that is a useful frame for reading it, and in particular for digesting its most challenging findings. The people most likely to use shadow AI are those who, on paper, should be the least likely: the AI experts and executives who feel they have the intellectual or institutional authority to exempt themselves from the rules. (If you recognize yourself in that description, um, you aren't alone.) Those findings are counterintuitive if you view humans as meaty computers, and perfectly intelligible if you think about any of the people in your life.
That is my big takeaway from this work: those concerned about the risks of shadow AI should engage with others in their organization as people. Our report discusses worker motives that are measurable in the aggregate, but human diversity is vast and the incentives driving people around you may differ. The risks of unapproved software are real, but so are the benefits that might be driving your coworkers to accept those risks. From here on, let's just assume people are using AI tools, and instead start the conversation by asking why.