I talk to my mother almost every day. This week, during our conversation, she mentioned that there was some snow on the ground.
She asked if we also had snow, and I confirmed that we did.
I had shoveled my driveway several times and estimated we probably had around a foot of snow.
Curious about the exact amount, I quickly searched the internet (my mom enjoys discussing the weather).
It was surprising to see such discrepancies from one source to the next, especially since measuring snowfall should not be that difficult. One result even claimed 40 inches. Wow, just wow: we definitely did not have 40 inches of snow! If the internet cannot agree on something as simple as a snowfall total, that is worth remembering as we hand more and more of our research over to machines.
Artificial Intelligence (AI) has revolutionized how we approach research, offering unprecedented efficiency and access to vast amounts of information.
However, as tempting as it may be to lean entirely on AI for academic or professional research, doing so comes with significant risks.
Remember the saying, "Trust, but Verify." This is especially true for AI-generated content and data.
AI can create a false sense of comprehension by producing polished, authoritative-sounding outputs that may not always be accurate.
According to a Yale study, AI tools risk fostering "illusions of understanding," where researchers believe they are exploring all possibilities when, in reality, they are confined to questions and methods that align with AI's capabilities.
Relying strictly on AI can lead to a "monoculture of knowing," narrowing the diversity of perspectives and approaches in scientific inquiry.
Check out the article from YaleNews: "Doing more, but learning less: The risks of AI in research."
AI-generated insights may prioritize what is popular and frequently cited over what is true. In 2023, more than 10,000 research papers were retracted, but retraction does not mean every link to that research was removed.
All of those lingering links and citations will continue to make the retracted research appear credible to search engines; a simple way to screen for this is sketched below.
More than 10,000 research papers were retracted in 2023 — a new record.
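One practical safeguard: before trusting a citation, whether it came from an AI tool or a search engine, check it against a list of known retractions. The sketch below assumes the Retraction Watch database has been downloaded as a CSV; the file name and the DOI column name are my placeholders, not official names, so adjust them to match the actual export.

```python
import csv

# Assumptions: the Retraction Watch database has been downloaded as a CSV, and
# the column holding the retracted paper's DOI is named "OriginalPaperDOI".
# Both the file name and the column name here are illustrative, not official.
RETRACTIONS_CSV = "retraction_watch.csv"
DOI_COLUMN = "OriginalPaperDOI"

def load_retracted_dois(path: str) -> set[str]:
    """Read the CSV once and build a set of retracted DOIs for fast lookup."""
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row[DOI_COLUMN].strip().lower()
            for row in csv.DictReader(f)
            if row.get(DOI_COLUMN)
        }

def flag_retracted(bibliography: list[str], retracted: set[str]) -> list[str]:
    """Return every DOI in the bibliography that appears on the retraction list."""
    return [doi for doi in bibliography if doi.strip().lower() in retracted]

if __name__ == "__main__":
    retracted = load_retracted_dois(RETRACTIONS_CSV)
    # Replace with the DOIs an AI tool (or anyone else) suggested to you.
    suggested = ["10.1000/example.doi.1", "10.1000/example.doi.2"]
    for doi in flag_retracted(suggested, retracted):
        print(f"WARNING: {doi} has been retracted; verify before citing.")
```

Even a simple check like this catches the cases where a retracted paper still looks authoritative because thousands of older pages keep pointing to it.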
AI systems are only as good as the data they are trained on. If the training data is biased or incomplete, the resulting outputs will reflect those flaws.
For example, algorithms trained on non-representative datasets have been shown to perpetuate racial and gender biases.
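To make that concrete, here is a tiny, fully fabricated example, not real data, showing how a model trained on a skewed sample simply hands the skew back:

```python
# A deliberately tiny, fabricated example: no real data, just an illustration
# of how a model trained on skewed records reproduces the skew.
historical_hires = [
    # (group, was_hired) -- group "A" dominates the training data
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False),
]

def train(records):
    """'Train' by memorizing the observed hire rate per group."""
    rates = {}
    for group, hired in records:
        seen, yes = rates.get(group, (0, 0))
        rates[group] = (seen + 1, yes + hired)
    return {g: yes / seen for g, (seen, yes) in rates.items()}

model = train(historical_hires)
print(model)  # {'A': 0.75, 'B': 0.0}
# Group B never got hired in the unrepresentative sample, so the "model" now
# scores every B candidate at zero: the flaw in the data becomes the output.
```

Real systems are vastly more complicated, but the failure mode is the same: garbage (or bias) in, garbage (or bias) out.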
Ethical concerns abound regarding data privacy and ownership. Many AI models require vast amounts of data to function effectively, raising questions about whether this data was obtained ethically and with proper consent.
Below are the limitations of using AI for research, as identified by USC Libraries.
"In addition to many of the known limitations outlined below, generative AI may be prone to problems yet to be discovered or not fully understood."
Without scrutiny, these errors can propagate misinformation and compromise research integrity.
Lack of Transparency and Accountability
AI operates as a "black box" for many users—its decision-making processes are often opaque and difficult to interpret. This lack of transparency can make it challenging to verify the accuracy or reliability of its outputs.
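As a small illustration (using scikit-learn and the classic iris dataset purely for demonstration), compare a model whose decision rules can be printed and audited with one that only hands back an answer:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Transparent model: every split can be printed and checked by a human.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))

# Opaque model: a prediction comes out, but the "why" is buried in the weights.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(mlp.predict(X[:1]))
```

Most large AI systems are far closer to the second model than the first, which is exactly why their outputs need independent verification.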
AI tools cannot be held accountable for errors; the responsibility ultimately falls on human users.
As noted by the University of Utah's Research Integrity Office, researchers who rely uncritically on AI-generated content risk accusations of academic misconduct if inaccuracies or plagiarism are discovered.
Overlooking Human Creativity and Context
While AI excels at processing large datasets and identifying patterns, it struggles with contextual understanding and creative problem-solving.
For example, AI can quickly identify opportunities for improvement based on data from your website, but a human needs to judge whether those opportunities align with your ideal client. The opportunity could be a search term that is not relevant to your business at all, as the sketch below illustrates.
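Here is a deliberately simple, hypothetical sketch of that human-in-the-loop step: the keyword list stands in for AI-suggested opportunities, and the curated set stands in for a person's knowledge of the business.

```python
# Hypothetical example: AI-suggested keywords from site analytics, screened
# against a human-maintained list of what the business actually offers.
ai_suggested_terms = ["tax attorney", "free tax software", "estate planning", "crypto tips"]
services_we_offer = {"tax attorney", "estate planning"}  # curated by a person

relevant, rejected = [], []
for term in ai_suggested_terms:
    (relevant if term in services_we_offer else rejected).append(term)

print("Pursue:", relevant)   # Pursue: ['tax attorney', 'estate planning']
print("Ignore:", rejected)   # Ignore: ['free tax software', 'crypto tips']
```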
Without this balance, we risk losing the depth and adaptability that human intelligence brings to research.
With all the recent excitement about artificial intelligence, it is more important than ever to remember that technology should aid human inquiry—not be a substitute for it.