The rise of generative AI has sparked sensational headlines predicting the replacement of millions of jobs and painting a picture of almost limitless possibilities in marketing.
However, the need for human expertise has never been more critical, and, as generative AI tools flood the internet with mediocre content, human perspectives will become a rare and valuable commodity.
Ironically, the most enthusiastic proponents of AI are often the ones least knowledgeable about its limitations.
It is therefore crucial for us all to gain a deep understanding of the technology, its boundaries, risks, and responsible usage to harness its commercial impact effectively in the coming years.
From an era of curation to creation
Although AI has existed for several decades, the current wave of massive hype can be attributed to a few key factors.
One significant reason is the emergence of improved generative AI capabilities. Advancements in large language models such as the one behind ChatGPT, alongside deep fakes and generative human representation models, have astounded people worldwide.
We are witnessing a transition from an era of curation to an era of creation.
In the past, AI algorithms primarily focused on curating content to optimise what we see online, tailoring results to our preferences and maximising engagement. We are now entering an era where AI enables the creation of visual, text-based, and audio outputs with nothing more than a simple text-based prompt.
RIP graphic designers?
Over the last few months we’ve seen some laughable examples of generative AI being used without specialist oversight. Bold claims about the demise of specialised professions have circulated, suggesting that professionals like graphic designers are becoming obsolete.
LinkedIn have also recently announced in-platform AI post generation and response features, potentially flooding the platform with thoughtless content lacking human input.
These examples highlight the flaws in claims about what the technology can achieve on its own, and the need for a greater understanding of how it is more likely to augment, rather than replace, human expertise.
99.9% of the time when we talk about AI, we’re really talking about ML
It is essential to clarify that when we discuss AI, we are primarily referring to machine learning (ML).
Generative AI is a form of ML which utilises training data to generate entirely new content based on prompts. Large language models (LLMs), such as the one behind ChatGPT, are just one example of an ML application. And, while ChatGPT is a remarkable tool, it is crucial to avoid using “AI” as a catch-all term for various technologies and applications, as doing so limits our understanding of the broader field.
Thinking more broadly about the full spectrum of AI applications, there are actually six use cases:
- Task automation – handling repetitive tasks through machine learning algorithms
- Content generation – creating materials such as text and images using generative AI
- Insight extraction – mining large datasets for actionable insights using machine learning
- Decision making – utilising AI for autonomous decision-making
- Human augmentation – avatars, cybernetics, robotics
- Human representation – deep fakes, voice, personas
Two of these applications make up pretty much all of the major marketing use cases we’ve been seeing, or are likely to see, in the next few years – task automation and content generation.
If machine learning is the car, data is the fuel
To truly understand machine learning, we must recognise the central role of data. Data acts as the fuel for machine learning, with the models themselves serving as the vehicles.
ChatGPT’s power stems from its training on an extensive dataset drawn from a vast swathe of the public internet.
Machines aren’t trained by magic
Behind the hype and excitement, it is essential to acknowledge the vast apparatus of people and processes that makes the technology work.
Training these machine learning models relies heavily on human input and labelling of items to refine their understanding and response accuracy.
Companies like OpenAI have established operations in countries like Kenya to engage workers in tagging items, improving the accuracy of the algorithms. The development of AI models is not a result of magic, but rather the collective efforts of humans and machines.
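To make that human contribution concrete, here is a deliberately toy Python sketch of supervised learning (our illustration, not OpenAI’s actual pipeline): people tag the examples, and the model learns to generalise from those tags.

```python
# A toy supervised-learning example: humans provide the labels,
# and the model learns to generalise from them. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labelled training data: each snippet has been
# tagged by a person as "safe" or "harmful".
texts = [
    "Have a lovely day!",
    "You are brilliant, well done.",
    "I will hurt you.",
    "Give me your password or else.",
]
labels = ["safe", "safe", "harmful", "harmful"]

# Train a simple classifier on the human-provided labels.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model can now label text it has never seen before.
print(model.predict(["You are lovely"]))  # likely ['safe']
```

Scale this up to millions of human judgements and you have the labelling operations described above.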
Common misconceptions of LLMs
ChatGPT represents a remarkable advancement in the field of large language models, showcasing its prowess as a next-word predictor. This sophisticated machine learning technology has the ability to generate coherent and contextually relevant text, shaving time off various tasks.
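To illustrate what “next-word predictor” means in practice, here is a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library (our stand-in for illustration; ChatGPT’s own model is far larger and not publicly available):

```python
# A minimal next-word prediction sketch with GPT-2.
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to reach new customers is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score to every token in its vocabulary;
# the highest-scoring token is its guess for the next word.
with torch.no_grad():
    logits = model(**inputs).logits
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))  # e.g. " to"
```

Chaining that one-word-at-a-time prediction together, over and over, is essentially all that is happening under the hood.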
However, it is essential to recognise that even though large language models like ChatGPT exhibit impressive capabilities, they do have certain limitations. Acknowledging these limitations allows us to apply human judgement and critical thinking when utilising their outputs.
Large language models lack up-to-date knowledge, since their training data is sourced from a specific period in time. ChatGPT, for example, launched with a knowledge cut-off in 2021.
Additionally, while they excel at predicting the most probable next word based on the context, there is still a high chance of these tools generating incorrect or misleading information. It is crucial to verify and fact-check the outputs generated by these models, especially when dealing with critical or sensitive topics.
They can generate text that appears coherent and fluent, but they may struggle to grasp the nuances of language in the same way humans do.
Understanding context requires a deeper level of comprehension that considers the broader meaning and implications of a conversation or text.
Moving beyond the practical limitations of this technology, it is crucial to understand the significant risks associated with generative AI more broadly.
Privacy risks
Privacy stands as a significant risk associated with generative AI technology. These models learn from the data they are fed, and the companies behind them may use your inputs to train future versions. This reliance on user input raises concerns about the privacy of individuals.
The data provided as input could be stored, analysed, and potentially used for purposes beyond the immediate user interaction. This includes the possibility of personal information or prompts being exposed in search results or other outputs.
“🚨 Reminder not to put any private or confidential information into SGE 🚨 Your input data is likely used to further train SGE and could surface in future results.”
— Britney Muller (@BritneyMuller), May 25, 2023
Users engaging with Google’s Search Generative Experience (SGE) trial may unknowingly contribute their data or prompts to be used for training the underlying generative AI model. Consequently, there is a real possibility that this information could surface in search results!
Algorithmic bias
The second major flaw with this technology is algorithmic bias. Because generative AI models learn from human-created data, they inherit and can amplify existing human biases.
A recent study found that, amongst other examples, while 34% of US judges are women, only 3% of the images generated for the keyword “judge” were perceived as women.
Addressing algorithmic bias requires a twofold approach. First, users and developers must exercise caution when interpreting and relying on the outputs of generative AI models. It is essential to critically evaluate and cross-reference the results to mitigate potential biases.
Secondly, the development of generative AI technology should prioritise supervised learning, a branch of machine learning, to implement proper controls and safeguards. By leveraging supervised learning techniques, models can be trained with explicit human oversight, helping to ensure equal representation and reduce bias in their outputs. This approach makes generative AI models more inclusive and equitable.
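As a simple illustration of the first point – critically evaluating outputs – here is a hypothetical Python sketch of a representation audit, using the judge statistics cited above (the labels are assumed to come from human reviewers):

```python
# A hypothetical audit of generated images for representation bias.
# The perceived_gender labels are assumed to come from human reviewers.
from collections import Counter

# Illustrative labels for 100 images generated from the prompt "judge".
perceived_gender = ["man"] * 97 + ["woman"] * 3

counts = Counter(perceived_gender)
share_women = counts["woman"] / len(perceived_gender)

BENCHMARK = 0.34  # share of US judges who are women, per the study above
print(f"Generated: {share_women:.0%} women vs real world: {BENCHMARK:.0%}")
if share_women < BENCHMARK / 2:
    print("Flag: this prompt substantially under-represents women")
```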
Safety concerns
Perhaps the most shocking example of the safety risks in play with this technology comes via Snapchat. Snapchat’s ‘My AI’ feature now appears in user accounts and has been shown to encourage a 13-year-old girl to meet an older man for a romantic getaway.
Instances like these emphasise the critical need for comprehensive testing and robust safeguards when implementing AI chatbots.
To ensure user safety, it is imperative that AI chatbots undergo meticulous testing, including rigorous scenario-based assessments and ethical reviews. Adequate protocols should be established to identify and address potential risks, such as inappropriate content generation, harmful suggestions, or interactions that could compromise user well-being.
Balancing progress and responsibility
There are widespread concerns that existing regulatory frameworks are already struggling to keep pace with technological advancements.
Today, in the absence of regulation, human oversight and expertise remain essential to ensure that AI is applied appropriately and ethically.
As the field of AI continues to evolve, it’s crucial for regulators, industry leaders, and stakeholders to collaborate and develop frameworks that strike the right balance between fostering innovation and safeguarding ethical practices.
Time to focus on the bigger picture
It should be clear from the examples we’ve raised that AI alone will not take your job. But there are many cases where humans working with AI can outperform humans working alone.
At Hallam, we are actively exploring the potential of generative AI in our creative processes. Tools like Midjourney assist our creative team in developing initial concepts, but it’s important to note that it’s not a one-and-done process. We iterate and refine the generated content multiple times, leveraging human expertise to perfect the final output.
Other platforms like Jasper.ai, Fireflies, and GitHub Copilot are also proving useful in enhancing efficiency in copywriting, note-taking, and code generation, respectively. These tools augment human capabilities rather than replace them.
We’re entering the era of the AI copilot
Looking ahead, we are entering an era where generative AI becomes an AI copilot, amplifying human skills. The democratisation of AI technology means that AI-generated content will flood the internet, with limited human input required.
However, brands relying solely on AI-generated content will struggle to stand out. Human perspectives, creativity, and critical thinking will become invaluable resources in a sea of AI-generated content.
Human perspectives will be a precious resource
Models like ChatGPT rely on the data they have been trained on to generate outputs that mimic human language. However, the ability to bring new and trustworthy information to the web remains a uniquely human capability.
A fascinating contrast emerges when considering Google’s AI-powered Search Generative Experience. While this feature incorporates chatbot functionality and utilises machine learning technology, it is now placing greater emphasis on surfacing and prioritising content generated by humans.
For brands, this shift holds significant implications. Placing human stories at the core of their content will be essential in establishing a distinctive brand identity in the years to come. Whether it involves featuring B2B experts or B2C ambassadors, the incorporation of genuine human narratives will serve as a powerful differentiating factor.
In an era where AI-generated content becomes increasingly prevalent, brands have an opportunity to stand out by authentically connecting with their audience through relatable human experiences. By incorporating human-generated content, brands can offer a distinct perspective that sets them apart from automated and generic content flooding the digital landscape.
Essential skills for the future
In the coming years, we anticipate a significant divergence in the skills demanded due to the rise of AI technology. On one hand, there is a growing need for individuals with deep technical expertise to harness and understand AI. Proficiency in data science is increasingly valuable, as is the ability to assess, implement, and integrate these technologies into business operations effectively.
On the other hand, there remains a strong demand for skills that robots currently lack. Abilities such as strategic thinking, creativity, effective communication, and critical thinking will maintain their importance in the future.
What we are witnessing is the erosion of the middle ground – a phenomenon I’ve referred to as the “valley of death”: the band of skills where the capabilities of AI encroach upon traditional job functions. It is crucial for individuals to make a deliberate choice and align themselves with one side of this spectrum – embrace the technical aspects, or focus on cultivating the uniquely human skills.
For those inclined towards the technical realm, diving deeper into AI-related disciplines and acquiring proficiency in data science will be instrumental in capitalising on the opportunities presented by this technology.
On the other hand, it’s equally important to recognise the enduring value of human-centric skills. Strategy, creativity, effective communication, and critical thinking are inherently human capabilities that remain in high demand. These skills enable us to bring a holistic perspective, solve complex problems, and innovate more widely.
In summary
Generative AI is not the silver bullet it’s often portrayed to be. Recognising its limitations, addressing privacy concerns, mitigating algorithmic bias, and ensuring responsible usage are all crucial steps to navigate the evolving technology landscape.
While machine learning models like ChatGPT can simulate human language, only genuine human stories and specialist expertise will enable brands to differentiate themselves in an era where AI-generated content becomes ubiquitous.