Love in the Time of the AI Apocalypse
A remarkable lack of curiosity about the human
After the company I worked for was bought by a large software company, I was added to a new team with a networking challenge meant to help all of the new people integrate with existing employees. The gamification of the process gave me an excuse to talk to a wide variety of people. One person in particular can represent a whole industry. I told him that I had been an English major. He said, "Oh, maybe you could help me with my kid." His kid was in a literature class and didn't know the purpose of it, and neither did he. His question was less a sign of curiosity than an uncomfortable marker of difference, but I answered with some words about developing empathy and learning about different life experiences. To this my coworker responded with something like, "Oh, I hadn't thought of that," leaving unasked the question of why one would want to develop empathy or to learn about different people in the first place. To many in the tech business, rationality is the beginning and end of life.
Spinning flax into gold
The time has come to monetize human subjectivity!
Much of the excitement about AI is precisely its ability to convert subjectivity and emotion into numbers. I trained in basic concepts of AI, including object recognition, sentiment analysis, Long Short-Term Memory (LSTM) networks, transformers, semantic search, LLMs, and AI agents. At this pivotal moment, I'd like to reflect on a series of AI technologies and evaluate the emotional and subjective remainder: in other words, what's left over after AI has turned something into a number. It's important to remember that certain limitations of AI are by design, and that algorithms won't magically lose these limitations.
Sentiment analysis can take a pile of product reviews and turn it into a spreadsheet ranking feelings about restaurant food quality, service, cleanliness, atmosphere, etc. This enables product marketplaces like Amazon to summarize the subjective feedback of individuals into data-driven recommendations. These recommendations are generally for the average consumer and not specific to how I evaluate products and services. For example, I will often look for high-quality reviews with specific details about criteria, expectations, and results. I try to get at the person who is writing the review. Are they like me, concerned about the things I'm concerned with, or are they not? Are they the same class or a different class? These questions make a huge difference when the statement is about fair pricing. When I was in a social fraternity in college and we would vote on new members, the person doing the recommending mattered as much as or more than what they said. For example, we knew that if one guy said a candidate was a great guy, it meant that he was mellow and smoked weed. When shopping online, however, I see AI summaries that will say some think a service is fairly priced while others found it expensive. AI sentiment analysis may be good for companies selling products because it makes choosing a product easier, but it's not that great for a customer wanting to buy a product for a specific purpose. For example, I'm looking for a 3-tiered plant stand to turn into a lamp. In my case the width and height are the most critical, because they tell me whether it will fit on my bookshelf. I don't need opinions about how it looks or its quality, because I can see the picture and see that the shelves are made of MDF. My impression of sentiment analysis in product reviews is that it's based on an average customer who doesn't exist. I find shopping online more difficult now because websites hide product details and surface bland AI summaries in their place.
Features like this enable corporations to benefit from purchasing behaviors that are less thoughtful and more impulsive.
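To make concrete what "turning feelings into numbers" looks like, here is a deliberately naive, toy lexicon-based sentiment scorer. The word list and weights are my own invention for illustration; real systems like Amazon's use trained models, not hand-built lexicons, but the principle is the same: subjective language in, a single number out, individual perspective gone.

```python
# Toy lexicon-based sentiment scoring: each word carries a hand-assigned
# polarity, and a review's score is the average polarity of the words
# that appear in the lexicon. Everything else about the reviewer --
# their class, their criteria, whether they're "like me" -- is discarded.

LEXICON = {
    "great": 1.0, "friendly": 0.8, "clean": 0.6,
    "slow": -0.5, "rude": -0.9, "overpriced": -0.7,
}

def score(review: str) -> float:
    """Average polarity of recognized words; 0.0 if none are recognized."""
    hits = [LEXICON[w] for w in review.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

for review in ["great food but slow service",
               "rude staff and overpriced drinks"]:
    print(f"{score(review):+.2f}  {review}")
```

Note how "great food but slow service" collapses to a single mild positive, exactly the kind of flattening that produces "some think the price is fair but others thought it was high."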
AI summaries of discussions, like much of text generation currently, can be hit or miss. When holding meetings, the summary can be an important way of reinforcing and consolidating key points or blockers for future reference, so that teams can make progress toward goals and stay accountable. What I've noticed with AI summaries is that the automatic summary ends up being a kind of vague memory of the call in terms of the topics discussed. After a recent meeting, I wanted to confirm some key points from a call but found the summary too vague to do it. An AI summary may be a good starting point for reinforcing and consolidating key points, but in practice it usually gets left as is: a halfhearted record that a call happened and things were discussed. Managers accustomed to seeing summaries of calls will be satisfied, but the summary functions as busywork, not as a means of advancing a project.
Similar to AI summaries of discussions, I've found customer support bots to be inconsistent in quality. As a product implementer, my job is to accomplish a certain task for the business, but the support bot is optimized for generic technical questions. Often there are acronyms specific to the product that the bot won't recognize; instead, it will infer a meaning from its general base. Sometimes the response is great at helping me think of something else to check, but other times I've found it to be confused or wrong. At a certain point, trying to explain the correct context to a bot becomes the actual work needed to solve a problem: a step up from talking to a rubber duck. The benefits: quicker responses, less rudeness, and unblocking actions that need a rationale. Support bots do provide a quicker response than waiting for support or product development (especially as vendors cut that staff). For corporations with knowledgeable but rude support staff, there's great value in replacing people who make customers angry with a friendly chatbot. Few things make me hate a tech company more than dealing with arrogant, insulting tech staff, especially when those tech people don't understand (and have no interest in) the business value of what I'm trying to accomplish. Support bots can also be useful for satisfying managers who won't implement any approach unless it's explicitly recommended by the product team; in those cases, it can be valuable to paste the question and response into a Jira ticket for future reference.
Large Language Models (LLMs) behind chatbots, support bots, and agents generate text predictions based on patterns in their training data. In addition to the generic language base, a support bot will get access to a specialized training set (e.g., product and support documents) and extra training for responding to particular questions. The more your questions look like the LLM's training questions, the better the response will be; the more a question relates to an atypical use case, the lower the quality of the response. Words and phrases are tokenized (turned into numbers). A word can be represented by one or more tokens, as with a compound word like 'blockhead.' Tokenization, together with the vector representations built on top of it, enables things like machine translation and semantic search, and helps LLMs generate appropriate responses to a range of queries. Google search now defaults to semantic search, and its once valuable ability to pull up exact results is hidden away under the Tools menu as Verbatim. Publishing companies would love to cut costs by using machine translation to generate translations of fiction, but the results are similar to copying a painting by paint-by-numbers. LLMs are designed to predict results that are similar to their inputs. This makes them great at generating average texts: texts which look and feel like everything else and which fail to surprise. Even a prompt to generate an expert response is going to produce an average approximation of what an expert text would be expected to look like (Researchers Asked LLMs for Strategic Advice. They Got "Trendslop" in Return.). Using an LLM to remind you of things left out of your analysis will get you a list of things typically included in similar texts, which can be helpful, but it's not what an expert could produce when challenged with a difficult problem.
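To illustrate how a compound word like "blockhead" becomes more than one number, here is a toy greedy longest-match subword tokenizer. The vocabulary and its integer ids are invented for this sketch; real tokenizers (such as byte-pair encoding) learn their vocabularies from data, but the mechanics of splitting a word into known pieces and mapping each piece to an id are the same in spirit.

```python
# Toy greedy longest-match subword tokenizer. The hand-built vocabulary
# below is purely illustrative; real vocabularies have tens of thousands
# of learned entries. Single letters act as a fallback so any input of
# these characters can still be tokenized.

VOCAB = {"block": 0, "head": 1, "er": 2, "s": 3,
         "a": 4, "b": 5, "c": 6, "d": 7, "e": 8,
         "h": 9, "k": 10, "l": 11, "o": 12, "r": 13}

def tokenize(word: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position."""
    ids = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try longest piece first
            piece = word[i:j]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word[i:]!r}")
    return ids

print(tokenize("blockhead"))    # → [0, 1]: two subword tokens
print(tokenize("blockheader"))  # → [0, 1, 2]
```

Once every word is a sequence of ids like this, the model only ever manipulates numbers, which is exactly the conversion of language into quantity that the rest of this essay is circling.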
Similarly, because LLMs generate results similar to their inputs, they are especially suboptimal for tasks like brainstorming new topics. If you want to brainstorm the most expected ideas on a topic, they work fine. They would be great for generating all of the content which found its way online in the 1990s, first used to drive search engine optimization and now the basis for training LLMs. But creative fiction and poetry are really about the encounter between people and their different life experiences.
What remains
I watched a video, "How China Built Tech Power Without Critical Thinking" (https://www.youtube.com/watch?v=laPVR5nKMaE). In it, Yinfi makes the point that:
"Restricting certain types of critical thought does not actually slow people down. In many cases, it makes them faster. After certain boundaries of thought are drawn, you stop wasting energy questioning the walls, and you start optimising the space inside them" (5:38).
This optimization for the current system is not limited to China, however, and is pervasive in Western corporate culture, as my conversation with my coworker demonstrates. Yinfi also talks about innovation, saying that:
"We've been taught that innovation requires creative and critical thinking. But what if that's only true for discovering new ideas, not for scaling them?"
Yinfi also addresses the role that entertainment plays in distracting workers from deeper questions. In his videos, marketer Rory Sutherland talks about two philosophies of economics: the Chicago school and the Austrian school (most notably represented by Peter Drucker). In the Chicago school, everything is about efficiency and scale; for the Austrian school, it is all about value creation. I might suggest that the two approaches are dialectically related: one creates value, and the other extracts it. A gas station chain might build its reputation on offering clean restrooms on the highway, then sell out to a corporate owner who extracts the value from the chain by scaling it and cutting costs, including the cost of cleaning restrooms.
Unlike corporations and efficiency gurus, I embrace the human, and I'm reminded of something that Luigi Giussani wrote: "Reason is agile, goes everywhere, travels many roads." Efficiency is one road; value creation is another. A third road is gratuity: creating and sharing things because it brings joy. The drive toward efficiency did not start with AI. Even companies which were inspired by creativity, like Disney, used templates to churn out films with the same animation and new characters. For years, Hollywood has been selling the same plots to consumers, assured in the knowledge that what succeeded in the past can succeed again. And publishers are notoriously risk averse, publishing what's already proven, or taking chances on new authors only to drop them if their success takes more than a fiscal quarter or two to materialize. This type of product can do well at distracting people and may indeed be generated by AI in the future. The trick will be how to make the same stuff seem novel. But I am placing my bets on human diversity, the beauty of creating and sharing value, the joy of gratuity.
This is a whole-hearted post, which comes from my human struggles and experience. As such, I did not use any AI in composing, "researching," editing, or otherwise producing it.