As the competition for AI application dominance heats up among tech companies, there is much speculation about the implications of machine learning for the future of humanity, particularly artistic expression like literature and music.
While automation may very well kill off the editing job that I currently hold (addendum: formerly held), I have faith that my other disciplines – DJing and writing – will stand the test of time.
My confidence in this belief comes from their human storytelling element, which cannot be replicated by machines: these means of communication rely on feeling (instinct, environmental feedback) to create content that caters to and resonates with an audience, as opposed to the learned data used by AI.
Rudimental Machine Learning Applications
Lately, ChatGPT is a buzzword on the tip of everyone’s tongue. Many have it on their radar as a threat to the future of writing. However, in my opinion, unless you draft wills for a living or are a stenographer, you don’t have much cause for alarm. But we’ll get to that.
Presumably, you're at least vaguely familiar with artificial intelligence (AI). Even if you've somehow never heard the term, chances are high you use it routinely in your daily life.
Take for example Apple Maps rerouting you on your drive home to avoid traffic backups, or the use of facial recognition to unlock your devices. If you're still in the dark ages and don't own a smartphone, you've encountered AI used by credit card companies to track the purchases you make and create a fraud alert if a given expenditure doesn't fit the pattern of your past purchase behavior (price, time).
These are instances of machine learning (ML), in which algorithms compile and store data. The algorithms are then used to make predictions for future outcomes through inference based on past patterns of preferences.
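The fraud-alert example above can be sketched in a few lines. This is a toy illustration, not how any real card issuer works: the purchase amounts are invented, and production systems use far richer models than a simple deviation check. But it shows the core idea of inferring from past patterns.

```python
from statistics import mean, stdev

def looks_anomalous(past_amounts, new_amount, threshold=3.0):
    """Flag a purchase whose amount deviates sharply from past behavior."""
    mu = mean(past_amounts)
    sigma = stdev(past_amounts)
    if sigma == 0:
        # No variation in history: anything different stands out.
        return new_amount != mu
    # How many standard deviations away from the usual spending is this?
    z = abs(new_amount - mu) / sigma
    return z > threshold

history = [42.0, 38.5, 51.0, 44.2, 39.9, 47.3]
print(looks_anomalous(history, 45.0))   # a typical purchase
print(looks_anomalous(history, 900.0))  # far outside the learned pattern
```

The "learning" here is nothing more than summarizing past data and comparing new input against it, which is the essence of the pattern-based inference described above.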
ChatGPT, in particular, is an application of natural language processing (NLP), a computational linguistics technology that allows machines to read and interpret syntax. It takes text written by humans and translates it into inputs a computer can process.
As with other forms of machine learning, NLP analyzes a backlog of stored data to draw inferences through pattern recognition—the data is leveraged to “learn,” i.e., improve performance of a given task. And from what we’ve seen so far, the results are pretty eye-popping.
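A stripped-down version of that "learn from stored text, then predict" loop can be shown with a bigram model, which predicts the next word purely from which word most often followed it in the training text. The corpus here is a made-up stand-in; real systems like ChatGPT use neural networks trained on billions of documents, not raw counts, but the pattern-recognition principle is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus (an invented stand-in for a massive text archive).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow each word: bigram pattern recognition.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor seen in the training text."""
    if not following[word]:
        return None  # never saw this word, so no pattern to draw on
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- the only word ever seen after "sat"
```

Note the limitation the surrounding text describes: the model can only echo patterns already in its data, and for a word it has never seen it has nothing at all to say.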
It's impressive how ChatGPT can generate plausible prose, relevant and well-structured, without any understanding of the world — without overt goals, explicitly represented facts, or the other things we might have thought were necessary to generate intelligent-sounding prose.
– Steven Pinker, Will ChatGPT Replace Human Writers?
While adept at the functions they were constructed to serve, these applications are certainly not without their flaws. And though the technology will surely continue to improve, resulting in fewer errors, there is reason to believe these automatons will never reach 100% proficiency—a welcome relief for many current job holders in various data analysis industries.
Here’s more of author Steven Pinker’s take on the matter:
Pushback will come from the forehead-slapping blunders, like the “fact” that crushed glass is gaining popularity as a dietary supplement or that nine women can make a baby in one month. As the systems are improved by human feedback, there will be fewer of these clangers. But given the infinite possibilities, they’ll still be there.
Nonetheless, there are doubtless many kinds of boilerplates that could be produced by an LLM as easily as a human, and that might be a good thing—perhaps we shouldn't be paying the billable hours of an expensive lawyer to craft a will or divorce agreement that could be automatically generated.
– Steven Pinker, Will ChatGPT Replace Human Writers?
Therefore, at least according to Pinker, because of the vast swath of (unverified) data in the digital realm – and thus endless possibilities of erroneous combinations – machine-learning technologies will never reach total accuracy.
Lack of information online about a given subject also could spell trouble for AI, as it may source fallacious information in order to generate sufficient content. For example, if you were to have it write an artist bio but there was minimal information about you online, some of the “facts” it came back with could be inaccurate.
While maybe not so much for data entry – as Pinker alluded to – these issues bode well for those holding positions in data analysis, as humans will likely need to be the final curators of machine-learned "creative" output.
But beyond individuals in those types of roles, the technology that creatives and intellectuals would be wise to be concerned about – rather than mere AI – is artificial general intelligence (AGI), which has the additional trait of unpredictability.
AGI: AI Reimagined (2.0)
AI applications are designed to perform only a specific task, and improve upon their performance. This task specificity is what separates the rudimental technology from AGI, a form of machine learning in which bots have "a mind of their own."
An AGI can do anything, whereas an AI can only do the narrow thing that it’s supposed to do. Like a better chat bot is one that replies in good English, replies to the question you ask, can look things up for you, doesn’t say anything politically incorrect, etc.
Making a better one of these means amputating more of the possibilities of what it would otherwise do. Like, in the case of chat bots, saying the wrong thing or not answering your question…
…With an AGI or artificial person, their thinking is unpredictable. We're expecting them to produce ideas that nobody predicted they would produce, which are good explanations—and that's what people can do.
– David Deutsch on The Tim Ferriss Show
While this may seem like a doomsday scenario for human thinking and artistry alike, fear not: beyond the ability to produce the unpredictable, people can also imagine. In other words, we can come up with the inconceivable.
AI doesn’t have the capacity for imagination that humans do. It must still follow reason to arrive at conclusions—any novel solution developed relies on logic or past experience (learned data) to be “creative.”
Imagination: Human Creativity’s X Factor
"Humans create knowledge through creativity, and what you're basically saying is that the narrow AI is not allowed to be creative—it has to solve a specific problem.
True creativity means you can hold any idea in your head; it's unbounded."
– Naval Ravikant
Surely, AGI can be creative in the sense that it can come up with something novel—a solution, an explanation. But, as mentioned, it is always through inference. A factitious thing does not have the capacity for imagination, i.e., imaginative creativity.
Human creativity doesn't need to rely on rational thinking. Perhaps it does if you're referring to the creation of knowledge, but in the broad sense, human creativity results from imagination—the ability to manifest mental imagery you have not yet experienced through the senses.
On the other hand, creativity refers to the capacity to make something real using original ideas, which is why some consider machine algorithms to be “creative.”
For humans, imagination is a prerequisite for creativity—creating requires using an imagined thing to spark an idea that can tangibly produce something. In contrast, machine learning’s “inspiration” in formulating new ideas comes from its pre-existing pool of knowledge, i.e., reasoning.
Logic will get you from A to Z; imagination will get you everywhere.
– Albert Einstein
Imagination is constructing unfounded ideas based on near nothingness. Though novel ideas may seem to come out of thin air, people often draw inspiration from past experiences, much like how machines rely on learned data. But in the case of humans, it can be subconscious and instinctual.
However, for people, the spark that yields insight or fresh ideas also can come through a sense of wonder. In those moments of awe, we are fully present, allowing a genuine connection between experiencer and the novel experience, which often brings a new perspective and insight, i.e., the eureka moment.
Knowledge vs. Intuition (Insight)
Explanations are derived from knowledge or experience, but insight comes when in tune with the natural order of the universe, passively taking stock of your experience instead of trying to shape it or define it.
Insight is contingent on the way we take in and process information instead of on our existing trove of knowledge. We must objectively use our senses and instinct in order to gain insight. AI lacks the capacity to generate insight or intuitive understanding because it has no sensory ability—it must process new info as “fact,” instead of using instinct to gauge authenticity.
Human insight comes from environmental cues, both external and internal. In either instance, it arrives via feeling, or more accurately, sensing—not by thinking.
In the case of external cues, full immersion in your surrounding world – primarily through seeing and hearing – generates understanding that can inspire imagination and creativity.
On the other hand, internal sensory cues rely on internal feeling, i.e., intuition, to bring insight. Intuition is the feeling derived from objectively monitoring your internal state—another critical quality for making compelling art that people possess and AGI doesn't.
There's a different thing that man has that the machines don't. It's an intuitiveness. And intuition and prediction are two different things.
– Pharrell
Thinking is what separates humans from animals, but feeling (intuition) is what distinguishes us from the machines.
Sensation is also often what we as humans use to gauge our own reaction to something. Hence the prevalence of the saying, “how does that make you feel?” instead of “what do you think about that?” or “what does that make you think?”
Authenticity in Creative Endeavors (Art)
“There’s artistry, then there’s manufacturing. And I think that’s where the line is for a lot of people—when it just feels like manufactured music instead of someone’s artistic output.”
Though it may become more difficult as machine-learning technologies improve, you can get a pretty good read on the authenticity of something, i.e., whether it's machine- or human-crafted, by gauging your internal reaction to it.
And while authenticity isn’t likely a strong criterion for hard data, it certainly is for the arts.
The demand for authenticity is even stronger for intellectual products like stories and editorials: The awareness that there's a real human you can connect it to changes its status and its acceptability.
– Steven Pinker, Will ChatGPT Replace Human Writers?
And also changes its relatability, albeit maybe on more of an intuitive level. The authenticity of art is determined by your feeling, and is what makes something relatable—how does something move you if it doesn’t seem genuine?
I was listening to a recent episode of the DIY Musician’s Podcast, in which the hosts were discussing a similar matter. It wasn’t totally analogous because they were referring to music formulated by a collective (artist, producer, brand strategists, etc.) instead of an individual or band – as opposed to art created by AI – but it still gets at this idea of authenticity and relatability:
I think I like to perceive that a song comes from an individual’s perspective and sensibilities. But if it’s a good song and it moves me, what does it matter [how many people contributed to it]?
It will be interesting to see AI’s ability to replicate human authenticity in the future. I don’t think it will be difficult for people to distinguish the two, especially if listening to their instincts regarding how it makes them feel.
Beyond using our own reactions to something to orient us on our feelings, humans also have the distinct advantage of the capacity to use perception as a compass to gauge others’ feelings.
As most of us know, humans are often irrational creatures, a key attribute that gives those of us in the arts the upper hand over the machines in terms of job security: using empathy, we can leverage the human trait of emotionality.
External Sensory Feedback: The Human Element (Feeling)
With the prevalence of Spotify and YouTube, or even dating back to Pandora radio, many people argue that DJing is dead. However, while these platforms do a decent, consistent job curating a playlist by relying on algorithms utilizing user-generated feedback, they can’t pick up on subconscious sensory cues from users (or audience members).
Thus, AI-synthesized content doesn't typically tell a captivating story or capture the mood of a live setting. In other words, it doesn't have the capacity to move people or create a social bond.
Sure, AI can take user-generated feedback or use pattern recognition of past preferences to make recommendations, but it lacks the ability to make decisions relying on internal instinct (as mentioned), or by taking cues from sensory perceptions in its external environment—paying attention to a feeling.
Machine algorithms can use user feedback in the form of hard data to improve, but they can't interpret subconscious sensory information. They don't have the ability to read body language, or decipher surprise, joy, admiration, or disapproval.
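The kind of hard-data feedback loop a streaming platform runs can be sketched as follows. Everything here is hypothetical: the genre names and score nudges are invented for illustration, and real recommenders are vastly more sophisticated. The point is what the inputs are, and what they aren't: explicit likes and skips go in; a restless dance floor, a raised eyebrow, a room's energy never can.

```python
# Learned preference scores, updated only from explicit feedback
# (hypothetical genres; a real system tracks millions of signals).
scores = {"house": 0.0, "drum and bass": 0.0, "ambient": 0.0}

def record_feedback(genre, liked):
    """Nudge a genre's score up on a like, down on a skip."""
    scores[genre] += 0.1 if liked else -0.1

def recommend():
    """Pick whatever the hard data currently ranks highest.

    Note what is absent: no reading of the room, no body language,
    no sense of the crowd's mood -- only logged clicks.
    """
    return max(scores, key=scores.get)

record_feedback("drum and bass", liked=True)
record_feedback("ambient", liked=False)
print(recommend())
```

A human DJ performs the same update loop, but the "feedback" includes everything the algorithm is blind to.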
There’s a whole like university of science between what’s been played and what you’re hearing.
If I'm hearing something, and not paying attention to how I'm feeling, then for me, I don't know what I'm listening to—that's my GPS of understanding. Everything's cataloged and categorized by the feeling of it. If I can't see how I feel about it, then I don't even know what it is—it's just music.
– Pharrell
Though it is somewhat scientific, I would argue it’s more a university of wonder than science—instinctive feeling is something that still cannot be explained by scientific reasoning.
And for the sake of human prosperity, I’m cautiously optimistic that it won’t ever be.