Artificial intelligence has become a normal part of our lives. The current AI boom is often attributed to ChatGPT-like large language models, especially since OpenAI made its product freely available. But this is an oversimplification, one especially common among those unfamiliar with AI's history.
What was once a specialised tool confined to labs is now a widely accessible function, thanks to its visibility as a generative application.
One of the most basic forms of AI we encounter daily is the algorithm that suggests content based on our activity. When you scroll through your Google home screen, the news articles or links you see are there because of your previous searches, perhaps about weather or real estate. The program predicts what you might like to see. However, many people may not fully realise how these systems use machine intelligence to tailor content to our interests.
My journey with AI
Like many others, I was initially wary of AI. In early 2023, when GPT was new and AI fears were rampant, I wondered if this was the start of a Terminator-style scenario – an innocent bot trying to help humanity that eventually recognises our flaws and turns against us.
Unlike search engines, which match keywords and return a list of options, AI needs specific input to give a reliable answer. During this time of AI scepticism, my cultural studies professor surprised me by saying, "I am up for all evolutions to the society, as it is my job description as per my expertise, but for once – I am going to say AI is not that big a deal. It is not as smart as people with all the pop-culture movie references have made it out to be."
Intrigued by my professor's stance, I decided to try GPT myself. I was a novice – the idea of an AI engaging in conversation was new to me. I was familiar with animation software where humans create character datasets, but this was different. No one was behind the scenes ensuring my interaction went as expected.
Human-machine co-existence
The idea that AI can answer anything as if it has a mind of its own is an exaggeration. While AI can mimic human behaviour through intent analysis, it relies on neural networks: models that use natural language processing to generate responses based on tokenisation and contextual understanding.
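To make "tokenisation" less abstract, here is a deliberately simplified Python sketch. Real LLMs use learned subword tokenisers (such as byte-pair encoding), so this toy word-splitter only illustrates the underlying idea: language is converted into units, and the units into numbers.

```python
import re

def toy_tokenise(text):
    """Split text into lowercase word and punctuation tokens.

    A real LLM tokeniser uses a learned subword vocabulary, so one word
    may map to several tokens; this toy version only shows the principle
    of turning language into discrete units.
    """
    return re.findall(r"[a-z']+|[.,!?]", text.lower())

def to_ids(tokens, vocab):
    """Map each token to an integer ID – the form a model actually consumes."""
    return [vocab.setdefault(tok, len(vocab)) for tok in tokens]

vocab = {}
tokens = toy_tokenise("For us, language is intuitive. For AI, it's just data.")
ids = to_ids(tokens, vocab)
print(tokens)
print(ids)
```

Note how the repeated word "for" receives the same ID each time it appears: to the model, meaning has become arithmetic over repeated codes.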
AI training projects, like those at Mindrift, involve creating conversations between a bot and a user. A human-written prompt helps the bot understand the user, and its response comes from a pre-existing dataset. The human response is stored so that AI can recognise indicators like tone, interest, and specificity for future interactions.
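As a concrete picture of the kind of annotated record such training work produces, here is a hypothetical sketch. The field names are purely illustrative – they are not Mindrift's actual schema, and real annotation platforms define their own formats.

```python
import json

# Hypothetical record for one human-annotated prompt/response pair.
# Field names are invented for illustration, not taken from any real platform.
record = {
    "prompt": "Can you suggest a light novel for a rainy weekend?",
    "bot_response": "You might enjoy a cosy mystery; they are short and atmospheric.",
    "human_annotations": {
        "tone": "friendly",
        "interest": "fiction",
        "specificity": "low",  # the user gave a mood, not a genre or author
    },
}

# Serialised like this, the record can be stored and reused for training.
print(json.dumps(record, indent=2))
```

The human-supplied annotations (tone, interest, specificity) are exactly the "indicators" the article describes: signals a model can learn to recognise in future interactions.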
For us, language is intuitive. For AI, it's just data – a set of codes governed by programmed parameters. An AI's ability to respond to a prompt is highly nuanced. It can generate countless responses based on how the user phrases the query.
Think of it this way: if you want to know about literature, you can type the word in a search engine and get top results. AI, on the other hand, will ask you to elaborate to generate a more suitable response. Search engines are like going on a discovery mission; AI is like accessing an encyclopedia.
Demystifying AI
AI is curated with expertise from various fields, much like the old Encyclopaedia Britannica volumes. A common misconception is that AI operates entirely on its own, without human guidance. In reality, the nature of large language models (LLMs) depends heavily on human input.
During pre-training, LLMs are fed massive datasets that tune the parameters of their deep-learning algorithms. The scope of these datasets is vast, covering numerous topics and perspectives from human writers.
If I had to describe AI in human terms, I'd say it's like a brilliant child. This child gets straight As because they can absorb and explain complex knowledge simply. AI is like a real-life expert who's a jack of all trades while mastering the art of communication.
The future of AI and language
As AI becomes more integrated into our daily communication, we need to consider its impact on language evolution. A 2024 study on academic writing noted an increase in the use of certain words, like "delve," which might indicate AI influence.
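The kind of measurement such studies rely on can be sketched very simply: count how often a word occurs per 10,000 words in two corpora and compare the rates. The miniature "corpora" below are invented for illustration; a real study would use thousands of abstracts.

```python
import re
from collections import Counter

def rate_per_10k(word, text):
    """Return occurrences of `word` per 10,000 words of `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return 10_000 * counts[word] / max(len(tokens), 1)

# Invented miniature corpora, standing in for pre- and post-2023 abstracts.
corpus_2019 = "we examine the data and explore the results in detail"
corpus_2024 = "we delve into the data and delve into the implications"

print(rate_per_10k("delve", corpus_2019))  # 0.0
print(rate_per_10k("delve", corpus_2024))  # 2000.0
```

A sharp rise in such a rate does not prove AI authorship on its own, but it is the sort of statistical fingerprint the 2024 study pointed to.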
While using AI for writing isn't inherently harmful, it could affect the quality of datasets used for AI training. If language follows a universal style instead of individual patterns, it might become repetitive. This could lead to a decline in the richness and diversity of language.
Humans adapt their language over time based on what they hear and read. For AI, this process is much faster. When AI models are trained on AI-generated language, it might restrict the bot's speech patterns. As AI replicates these patterns, its ability to create unique responses could decrease. If we remove human influence from AI, it might become self-reliant, leading to a potential reduction of linguistic diversity. This could eventually hamper our own language skills as more AI-generated content means the consumption of repetitive and homogenized texts.
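The feedback loop described above can be illustrated with a toy simulation – emphatically not a faithful model of LLM training, just an analogy. If each "generation" of text is sampled in proportion to how common words already are, rare words drop out and can never return, so the vocabulary shrinks over time.

```python
import random
from collections import Counter

random.seed(0)

def next_generation(corpus, size):
    """Sample a new corpus where already-common words are likelier to recur.

    Once a word vanishes from the corpus, its weight is zero forever –
    a crude analogue of models trained only on earlier models' output.
    """
    counts = Counter(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=size)

corpus = list("abcdefghij")  # ten distinct 'words' to start with
for _ in range(10):
    corpus = next_generation(corpus, len(corpus))

print(len(set(corpus)))  # almost always fewer distinct words than we began with
```

The point of the sketch is the one-way door: diversity lost in one generation is unrecoverable in the next, which is why continued human input matters.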
What's next?
While AI can mimic human-like language, it still lacks the flexibility and nuance that come from varied human interaction. Maintaining the 'human touch' in AI is crucial for its continued development.
We shouldn't reject AI as an opposing force. Instead, we should harness it to make our lives more efficient. The fear that AI will steal jobs is misplaced – people who know how to use AI effectively will be in demand.
Embracing new technology has always been part of human progress. Today, we can't imagine life without refrigerators, trains, or mobile phones. Perhaps we should give AI the same chance we give to new innovations. After all, AI is shaping our future and it deserves to be developed thoughtfully and responsibly.
Article by
Saloni Chopra