Since joining Mindrift as a technical editor, I’ve had the opportunity to work on a wide range of AI projects that push the boundaries of innovation. Each project, whether enhancing large language models (LLMs) or exploring visual reasoning, offers a glimpse into the future of AI. I've been fortunate to witness firsthand how our work drives technological advancements, and it’s been an exciting journey.
AGI tasks across multiple programming languages
One of the most exciting projects I’ve worked on at Mindrift involved building AI systems that can understand and work with various programming languages, such as Python, JavaScript, Java, and SQL. This kind of work is part of what’s known as Artificial General Intelligence (AGI), or strong AI, which aims to create machines that can think and learn like humans. At Mindrift, we're at the forefront of developing AGI, tackling challenges that require models to process information, think critically, and adapt to new situations.
My goal was to improve these AI models so developers could interact with them more naturally, using coding languages in a way that feels intuitive and error-free. For instance, we worked to improve NLP models so they could better grasp the subtleties of human language and produce error-free code blocks, explanations, and context-sensitive responses.
These tasks included identifying errors, improving code syntax across different programming languages, and optimizing code according to each language's best practices. I also had the opportunity to participate in quality assurance work on AGI coding projects. Through this work, we improved the usability of the large language model (LLM) while increasing its capacity to boost developers' productivity and inventiveness.
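To give a flavor of what "improving code syntax based on best practices" can look like in Python, here is a toy example of my own (not taken from an actual project task): a common pitfall a reviewer would flag, alongside the idiomatic fix.

```python
# Pitfall we'd flag in review: a mutable default argument is created once
# and shared across calls, so items leak from one call into the next.
def add_item_buggy(item, items=[]):
    items.append(item)
    return items

# Idiomatic fix: use None as a sentinel and create a fresh list per call.
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(add_item_buggy("a"))  # ['a']
print(add_item_buggy("b"))  # ['a', 'b']  <- state leaked from the first call
print(add_item("a"))        # ['a']
print(add_item("b"))        # ['b']
```

Catching subtle issues like this, which run without error but behave incorrectly, is exactly the kind of judgment these tasks train models to apply.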
Boosting AI with deep learning and visual reasoning tasks
One of the most exciting areas I’ve worked on at Mindrift involves helping AI systems understand and make decisions based on images, much like how a person would. This process, known as visual reasoning, is a key part of our work.
For example, imagine teaching an AI to recognize different objects in a picture, like a cat or a tree, and then understand how these objects relate to each other. We train the AI not only to recognize what’s in the image but to interpret its meaning. This helps the AI make smart decisions based on what it sees, such as identifying potential dangers or understanding the context of a situation.
A typical task in this area involves training AI to differentiate between similar objects or contexts and to reason critically about what it sees. Imagine teaching an AI to recognize two similar tools, like a screwdriver and a chisel. Both look alike, but the AI needs to understand the small differences in shape and use them to make the right decision for a task.
As a result of these improvements, models can navigate complex visual environments, make real-time decisions, and show insight into the subtleties of vision and context.
Improving user interaction through AI training
At Mindrift, our commitment to innovation goes beyond just visual AI. A big part of our work involves improving Natural Language Processing (NLP) tools to handle more complex tasks, especially when it comes to making real-time decisions during conversations. Our team worked on advancing models that could engage in conversations that feel natural and spontaneous, responding to user inputs with contextually appropriate and meaningful outputs.
To achieve this, we worked on refining the AI’s responses through a process called prompt engineering. This involves carefully crafting and editing the prompts that guide the AI, ensuring it can respond in a way that is not only accurate but also empathetic and contextually appropriate. The result is a multi-purpose AI tool that provides effective, empathetic, and accurate communication with users across multiple industries.
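As a rough sketch of what prompt engineering means in practice, here is a small, hypothetical template of my own devising (the wording and function names are illustrative, not a real Mindrift template). The idea is that the instructions and recent context wrapped around the user's message steer the model toward empathetic, context-aware replies.

```python
# Hypothetical prompt template: the system instructions and trimmed
# conversation history shape how the model responds to the new message.
def build_prompt(user_message: str, history: list[str]) -> str:
    context = "\n".join(history[-3:])  # keep only the most recent turns
    return (
        "You are a helpful, empathetic assistant. Acknowledge the user's "
        "feelings before answering, and keep responses concise.\n"
        f"Recent conversation:\n{context}\n"
        f"User: {user_message}\n"
        "Assistant:"
    )

prompt = build_prompt(
    "My export keeps failing and I'm on a deadline.",
    ["User: Hi", "Assistant: Hello! How can I help?"],
)
print(prompt)
```

Editing tasks often amount to iterating on exactly this kind of template: adjusting the instructions, the amount of context, and the tone until the model's responses land where they should.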
Ensuring safety with responsible AI
Working on responsible AI is just one of the many tasks we tackle at Mindrift. Recently, I had the opportunity to work on a project focused on creating ethical AI frameworks, ensuring that our technologies align with safety standards for the benefit of society as a whole.
One of these tasks involved training AI models to detect harmful content across various categories. We used prompt engineering techniques to test the AI on malicious prompts, helping it recognize and understand harmful behavior. This kind of work frequently involves sensitive or explicit material. The ultimate goal of the project is to keep LLMs safe by identifying malicious users and distinguishing them from non-malicious ones based on the intent behind their actions.
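To make the idea of intent-based filtering concrete, here is a vastly simplified sketch of my own (a keyword heuristic, nothing like the actual safety pipeline, which relies on trained models rather than hand-written rules). It only illustrates the shape of the decision: the same topic can be allowed or flagged depending on the apparent intent.

```python
# Toy illustration only: real safety systems use trained classifiers,
# not keyword lists. Markers below are invented for the example.
HARMFUL_MARKERS = {"bypass security", "steal credentials"}

def flag_prompt(prompt: str) -> str:
    """Return 'flagged' if the prompt matches a harmful-intent marker."""
    text = prompt.lower()
    if any(marker in text for marker in HARMFUL_MARKERS):
        return "flagged"
    return "allowed"

print(flag_prompt("How do I bypass security on this account?"))  # flagged
print(flag_prompt("How do I reset my own password?"))            # allowed
```

The hard part of the real task is everything this sketch glosses over: judging intent from context, not surface wording.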
The aim is to safeguard AI systems and prevent their misuse for harmful activities, such as criminal actions or policy violations. By helping AI identify and mitigate these risks, we contribute to creating safer, more trustworthy models, an important step for societal advancement.
Training AI to think and feel like humans
One of the most challenging yet rewarding tasks is training AI models to think and even "feel" like humans. For instance, we’ve worked on AI projects where the system had to recognize not just what the user was asking, but also the user's emotional state, like whether they were frustrated or excited. This made it possible to respond in a way that showed understanding and empathy.
To be more specific, this means training AI to go beyond responding with a good answer: it needs to understand context, emotions, and intent. We’ve been able to achieve this through prompt engineering, refining both writing and editing tasks across various projects to help AI recognize tone and intent behind user inputs.
Enhancing AI in rich text editing and formatting
Another area we’ve focused on is improving the AI’s ability to handle rich text editing and formatting. For example, this included improving the AI’s capability to produce output in several different formats, such as XML, Markdown, JSON, LaTeX, and CSV. By improving these formatting capabilities, we enhance the quality of AI-generated content and ensure it fits seamlessly into workflows like technical documentation, data analysis, and academic writing.
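As a small illustration of what producing the same content in multiple formats involves, here is a toy example of my own (the record data is invented): one record emitted as JSON, a Markdown table, and CSV using only Python's standard library.

```python
import csv
import io
import json

# Invented sample record for the illustration.
record = {"title": "Release notes", "version": "1.2", "status": "draft"}

# JSON: structured, machine-readable.
as_json = json.dumps(record, indent=2)

# Markdown: a one-row table (header, separator, values).
as_markdown = (
    "| " + " | ".join(record) + " |\n"
    + "| " + " | ".join("---" for _ in record) + " |\n"
    + "| " + " | ".join(record.values()) + " |"
)

# CSV: header row plus one data row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record)
writer.writeheader()
writer.writerow(record)
as_csv = buf.getvalue()

print(as_json, as_markdown, as_csv, sep="\n\n")
```

Getting a model to do this reliably, for arbitrary content and with each format's escaping and layout rules respected, is what these editing tasks work toward.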
Reflecting on my AI journey at Mindrift
Working at Mindrift as a technical editor, I have contributed to coding tasks in AGI projects, improved natural language processing tools, helped integrate the latest visual reasoning into our AI models, and trained AI models to detect harmful content to keep AI safe.
I have learned that developing AI involves more than just prompt engineering to improve things; it’s about creating systems that can think, understand, and reason like human beings. My journey with Mindrift has reinforced my excitement for AI and the incredible potential it holds, pushing the boundaries of what technology can achieve with every task I take on.
Article by
Anuththara Jayasundara