AI is Useful for Capitalists but Probably Terrible for Anyone Else

AI is finally useful for business, and everyone is likely underestimating its impact. But unless AI is open-source and truly owned by its end users, the future for everyone but the software providers looks grim.

Picture of a capitalist 19th-century robber-baron robot with human workers at typewriters, created via Stable Diffusion.

21 Feb 2023

London, UK

by Matthew Eric Bassett

The last time your author opined about the state of artificial intelligence[1] I predicted that commercial success required two things: first, that AI researchers focus on solving a specific business problem, and second, that enough data exists for that specific problem. The premise was that researchers needed to develop an intuition for the business process involved so they could encode that intuition into their models; in other words, that a general-purpose solution would not crack every business problem. This might have been true temporarily, but it's doomed to be wrong more permanently. I missed a recurring pattern in the history of AI: eventually, enough computational power wins. Just as chess engines that tried to encode heuristics about the game eventually lost to models with enough raw computation, these AI models for "specific business problems" have all just lost to the 175 billion parameters of GPT-3.

I am not known for being overly bullish on technology, but I struggle to think of everyday business tasks where such a large language model would not do well. It is true that in the above example the model did terribly on questions requiring basic arithmetic (converting rent per square foot per month to rent per square metre per year, for example), but such limitations miss the point. Computers are known to be adequate arithmetic-performing machines (hence the name), and future models will surely correct this and other deficiencies. Artificial intelligence is now generally useful for business, and I am probably not thinking broadly enough about where it will end up.
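(For reference, the conversion that stumped the model is trivial for ordinary software. A minimal sketch, using the standard factor of 10.7639 square feet per square metre; the function name and the $5 figure are mine, for illustration only:)

```python
# Convert rent quoted per square foot per month
# into rent per square metre per year.
SQFT_PER_SQM = 10.7639  # 1 square metre = 10.7639 square feet

def rent_sqft_month_to_sqm_year(rent_per_sqft_per_month: float) -> float:
    """$/sqft/month -> $/sqm/year: scale by area, then by 12 months."""
    return rent_per_sqft_per_month * SQFT_PER_SQM * 12

# e.g. $5.00/sqft/month is about $645.83/sqm/year
print(f"{rent_sqft_month_to_sqm_year(5.00):.2f}")  # 645.83
```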

One decent guess, however, might be augmented intelligence: the idea that AI is best deployed as a tool to increase the power and productivity of human operators rather than to replace them.[2] Large language models like GPT-3 could be used to scale the work of a human or to handle their dull, boring work, much as I might use a programming language to scale my own work or automate away my repetitive tasks. We already have products like GitHub's Copilot, which sits alongside a programmer and makes helpful suggestions of entire functions or algorithms, increasing the programmer's productivity. It's not hard to imagine lawyers, doctors, accountants, marketers, salespeople, political speechwriters, et cetera, having similar AI assistants. In fact, many already do!

This should remind us that technology is a lever. Artificial intelligence algorithms will amplify the work that a single person can do - if that person is connected to the hive mind.

Let's leave AI aside for a second and consider more pedestrian technologies. In two decades smartphones, search engines, and social media went from being ideas in Star Trek to ubiquitous in daily life. Ubiquitous doesn't even cut it: all three are required for life in a modern, 21st-century country. Banking, working, dating, et cetera, range from "extremely difficult" to "impossible" without them. The people who try are already part of a subculture; they're "off grid". In turn, the corporations responsible for those innovations have grown incredibly wealthy and immensely powerful. It should go without saying (but too often doesn't) that it is right and proper for those inventors to reap the rewards of providing such innovative and useful products to the rest of us. But those products have become necessary, and their costs have become like taxes: a phone tax we must pay monthly or annually, a "Google tax" businesses must pay for decent SEO rankings and reviews, and a "social media" tax one must pay in the form of one's privacy and an ad-free thought process.

These taxes might be innocent enough on their own, but together they create barriers to entry to the modern economy; you have no choice but to pay the corporate overlords the price of admission. At the same time, these companies can employ (and need employ) only a tiny fraction of the labour force. The result is that productivity increases across the economy have benefited a smaller and smaller percentage of people, and wages from labour have stagnated. This decoupling of wages from productivity[3] started before the age of smartphones and social media,[4] but those things put it into overdrive, and artificial intelligence is about to pour fuel on top of it.[5] Only a handful of companies have enough data to train artificial intelligence algorithms at this scale. And as the use of those algorithms generates more training data, there is likely to be a compounding effect: those AI companies get better, and the gap between them and everyone else becomes harder to cross. As these algorithms find their way into daily work, everyone else will be beholden to these companies in order to get anything done. Just as the financial system has become a critical part of the world's economy - a sector that's "too big to fail" - tech companies are likely to become equally vital. And as we have seen in finance, they are likely to abuse their position.

There are at least three dangers I can see. The first is an "access to AI" problem, wherein these tech companies decide who can and who cannot participate in the modern economy, without much recourse. Just as Twitter can eject users from its platform, so can MicroOpenSoftAI reject your API keys so that you cannot use the 21st-century version of a spreadsheet. This might not be a problem in a world where you can be just as productive with open-source tools, but if MicroOpenSoftAI's software is necessary to compete in the world, just as WhatsApp is necessary to do almost anything in some places, it becomes a serious barrier to entry. But AI can also be used to create artificial barriers to entry in thought and communication, not just in the economy, simply by acting as a "smart" automatic gatekeeper. Imagine an AI algorithm embedded in the InstaWhatsTell chat app - everyone at work, in government, and in your social circle uses it. The owners have a corporate "ethics" policy forbidding certain political positions. One day you're advocating for such a position to your friends or colleagues, so the AI ends the conversation for you, gatekeeping it from your "toxic" ideas.

The second is a "trustworthiness of AI" problem. If such a tool is controlled by someone else, you can never really trust it to act on your behalf, even if you cannot avoid using it. Consider the AI on your phone in the previous example - the one that would end the conversation if you were advocating for an idea it didn't like. The same AI could change your words so that the other party never hears your original thoughts; instead, it would make sure people hear only the words that the "ideal" you would have said. Today's large language models and voice generation models are already capable of generating text that reads like yours and voices that sound like yours. But the changes could also be more subtle. Remember the conversions from rent per square foot per month to rent per square metre per year? Would you immediately recognize one value from the other? In a world where these algorithms are ubiquitous, your work would scale so much that you wouldn't have a chance to double-check. As with a self-driving car, you cannot easily put the human back in the loop: either the human is paying attention to everything, or she is on her smartphone. There would be little to stop a nefarious company from inserting wrong but innocent-looking text or numbers that benefit it at your expense. And just as a nefarious financial adviser might implore you to buy their "cross-currency swaps" to improve your balance sheet,[6] the tech companies would implore you to trust the algorithm. Because who is smarter than the AI?
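(To make the worry concrete, here is a hypothetical sketch, with figures of my own invention, of how little a tampered number needs to differ from an honest one to pass a quick glance:)

```python
# Hypothetical illustration - the figures are invented for this example.
# An honest rent conversion next to one with two digits quietly transposed.
honest   = 5.00 * 10.7639 * 12   # $/sqft/month -> $/sqm/year = 645.83
tampered = 654.83                # digits transposed; both look plausible
print(f"honest: {honest:.2f}, tampered: {tampered:.2f}")
```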

The third problem we have already mentioned: the gap between wages and productivity will grow even more quickly. This is partly a consequence of the first two problems. On the one hand, AI companies create barriers to entry: you have to buy their tool and sing from their song-sheet to play in the economy. On the other, AI companies will be in an enviable, difficult-to-regulate position wherein the same tools that you need to be productive can undermine you if your business is in danger of competing with theirs. Both of these drive the AI companies' profits up, ensuring that the value generated by the increased productivity is captured by the AI owners and not by the workers or other actors in the broader economy. But it's also a consequence of a creative economy in which AI takes a central role. Humans constantly seek novelty - and a little bit of novelty, say in a Paul Graham essay or a Stephen King novel, is enough for a large language model to learn from and spin into an entire universe. It's not hard to imagine the scripts for most of the Marvel universe films being generated by AI models. The future of creativity might be humans trying, like workers on Amazon's Mechanical Turk, to create a few novel ideas that the AI would then expand, fill out, and send to a mass audience - think hundreds of thousands of low-paid writers pounding on their keyboards to come up with one or two lines of an original Shakespeare play. Such human involvement is already needed to train ChatGPT: Reinforcement Learning from Human Feedback (RLHF) is a labour-intensive way to train AI models, and currently the most effective if you want ChatGPT-like performance. AI will continue to drive wages down, as it replaces higher-paid work with lower-paid work and allows the owners of the software to capture the difference.

None of this is inevitable.
It's a consequence of our system of regulating economic activity just as much as of our technology. But these days we tend to let a small cadre of billionaires imagine the world they want to live in and implement it, while we just accept the consequences or pretend that we made those decisions ourselves.

Of course, these examples are all far-fetched and dystopian. It's not as if a major tech company ever tried to use its technology to hobble the competition (such as Microsoft in the 1990s with its "Embrace, Extend, and Extinguish" strategy).[7] Nor has a major company, much less a Western government, ever tried to use technology to create barriers to economic or political participation (such as the FBI using wiretaps and mass media to try to discredit Martin Luther King Jr in the 1960s).[8] No, AI will offer us boundless convenience and increased productivity; it'll let our corporate overlords reap boundless profits, too, while our wages continue to decline. In exchange, we will have just enough to buy more of the stuff that the AI told us we should want.


Notes