Staff Perspective: AI for Research - A Conversation Between a Skeptic and Google Gemini

Maegan M. Paxton Willing, Ph.D., MPH

Growing up, I watched technology advance by leaps and bounds. Like many millennials, I have used a floppy disk and have died of dysentery more than a few times on the Oregon Trail. I remember when using the Internet meant no one could use the telephone, and then when the Internet moved onto our phones. Aside from my dislike of social media, I've never felt that I had an aversion to technology. That is, until AI came around. Maybe it's from watching one too many dystopian sci-fi movies, but AI just makes me uncomfortable.

So I tried to put off using AI as long as possible, often doing things the 'hard way.' I attended several talks on AI for research and academia, but I still wasn't sold that it was right for me and our research projects. But the more USU talked about its benefits, and with some very strong encouragement from my supervisor, I begrudgingly began using Google Gemini. (Note: USU encourages our use of Gemini and has a license that keeps our data and queries private.)

I started simple and focused on non-research tasks: 'help me write this email' or 'help me create competencies for an annual evaluation.' To my surprise, it was genuinely helpful. This week I've started opening up to the idea of using AI to support my research, but the skepticism remains. So I "sat down" with Google Gemini to ask how AI can support my research and what pitfalls I might encounter along the way. What follows is an abbreviated version of this discussion, cut for clarity and length by Gemini. If you'd like to read the full version, you can do so here.


I am writing a blog about using AI to assist with the research process. Can I ask you some questions?

I'd be happy to help! Ask away. I'm ready to answer any questions you have about using AI in the research process.

I'm a bit skeptical about using AI. Why might my concerns be valid?

Your skepticism about using AI in research, especially in a sensitive field like military mental health, is not only valid but also essential. A critical and discerning approach is the best way to leverage AI's benefits while mitigating its significant risks.

  • Bias and Fairness: AI models learn from their training data. If this data is unrepresentative or contains historical biases, AI can perpetuate and amplify these biases, potentially leading to misdiagnosis or inaccurate risk assessments for certain military subgroups based on factors like race or gender.
  • "Hallucinations" and Factual Inaccuracies: AI, especially large language models (LLMs), can confidently generate plausible-sounding but entirely false information or fabricate citations. Relying on these "hallucinations" can lead to flawed conclusions. This risk is particularly present when reviewing qualitative data, as AI might misinterpret nuance, tone, or context, or even fabricate quotes or summaries.
  • Data Privacy and Security: Military mental health data is extremely sensitive. Inputting such information into third-party AI tools raises significant privacy and security risks, potentially violating IRB protocols and regulations like HIPAA.
  • Lack of Transparency ("Black Box" Problem): Many advanced AI models operate as "black boxes," making it difficult to understand how they reached a conclusion. For high-stakes decisions like suicide risk prediction, knowing the rationale is crucial for clinical decision-making and accountability.
  • Over-Reliance and Automation Bias: Researchers might become overly reliant on AI, losing critical thinking skills or assuming AI output is always correct, leading to missed errors or nuances.

How can you help alleviate those concerns?

The key is responsible, critical, and transparent AI integration:

  • Human Oversight is Non-Negotiable: AI is a tool to assist, not replace. You, the human researcher, are essential for critical thinking, ethical decision-making, interpreting nuances, and ensuring validity.
  • Proactive Bias Management: Prioritize AI models trained on diverse and representative datasets. Perform rigorous bias audits and always keep a human-in-the-loop for interpretation and validation.
  • Rigorous Verification to Combat Hallucinations: Treat AI outputs as drafts or hypotheses. Every piece of information, claim, or citation must be independently verified against original, reliable sources. Use "lateral reading" by opening new tabs to cross-verify claims. For qualitative data, always trace AI-generated codes or themes back to the raw data to ensure accuracy and contextual understanding. (A minimal way to spot-check a citation's DOI is sketched just after this list.)
  • Robust Data Privacy and Security: Never input sensitive patient data (PII/PHI) into public AI tools. Use de-identification/anonymization for any data analysis with AI and only use secure, approved platforms that meet stringent security standards and IRB protocols.
  • Demand Transparency: Favor interpretable AI models (Explainable AI - XAI) when possible. Always be transparent about AI use in your methodology sections and publications.
  • Maintain Human Agency: Use AI to offload repetitive tasks, freeing you to focus on higher-order thinking, complex interpretation, and innovative study design.
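
To make the verification point concrete, here is a minimal sketch of the citation spot-check mentioned in the list above. It assumes the reference carries a DOI and uses Crossref's public REST API (api.crossref.org); the Python and the placeholder DOIs are mine for illustration, not Gemini's. A DOI that fails to resolve is a strong hint the reference was hallucinated, while one that does resolve only proves the paper exists, not that it says what the AI claims.

```python
import requests

def doi_resolves(doi: str) -> bool:
    """Ask Crossref's public API whether a DOI is actually registered."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Placeholder DOIs standing in for an AI-drafted reference list.
for doi in ["10.1000/xyz123", "10.9999/definitely.fabricated"]:
    if doi_resolves(doi):
        print(f"{doi}: registered; still read the original source")
    else:
        print(f"{doi}: NOT FOUND; treat the citation as suspect")
```

Even a resolving DOI, of course, is only the first step of the "lateral reading" Gemini describes.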

I've found AI to be helpful in wording items for surveys and checking the reading level of surveys and consent forms. What are ways that AI can best help with research?

You're already hitting on some excellent uses for AI in developing study documents! Beyond wording and readability checks for surveys and consent forms, AI can be a powerful assistant across many other parts of the research process.

Here are some of the ways AI can help:

  • Supercharge Literature Reviews: AI tools can rapidly scan vast databases (e.g., PubMed, PsycINFO), summarize papers, and extract key information like methodologies and participant demographics. This saves immense time compared to manual reading and can even help identify research gaps and emerging trends that might inspire new questions.
  • Enhance Data Analysis: This is a major area of impact. AI, particularly machine learning, excels at predictive modeling for risk assessment. For instance, it can analyze large datasets (electronic health records, service history) to predict suicide risk with promising accuracy, identifying high-risk individuals for targeted interventions. AI can also uncover complex patterns traditional methods might miss, assist with qualitative data analysis by automating initial coding and theme suggestions, and analyze digital phenotyping data (e.g., smartphone usage, wearable data) for early indicators of mental health issues.
  • Streamline Research Workflow and Design: AI can help with hypothesis generation based on literature analysis, assist in designing effective survey and interview questions, and even aid in outlining and refining language for grant proposals and paper drafts.
  • Develop Study Documents: Beyond surveys and consent forms, AI can help in drafting sections of IRB applications and protocols (e.g., background, study design, data management plans), crafting targeted recruitment materials, and developing interview/focus group guides by brainstorming questions and probes.

By strategically integrating AI into your research workflow, you can free up valuable time for the more complex and critical aspects of your military mental health research.
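
A quick aside from me: the reading-level checks Gemini and I both mentioned are easy to verify independently, which is a nice low-stakes way to practice the "treat AI outputs as drafts" rule. Below is a minimal Python sketch using the open-source textstat package (my choice for illustration; Gemini performs this check conversationally). It reports the Flesch-Kincaid grade level; IRBs commonly recommend roughly an 8th-grade reading level or lower for consent materials.

```python
# pip install textstat
import textstat

# Hypothetical consent-form excerpt; in practice, paste in your own draft.
consent_text = (
    "You are being asked to take part in a research study. "
    "Taking part is your choice. You may stop at any time "
    "without penalty or loss of benefits."
)

# Flesch-Kincaid maps sentence and word length onto a U.S. grade level.
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(consent_text))
# Reading ease runs 0-100; higher scores are easier to read.
print("Reading ease:", textstat.flesch_reading_ease(consent_text))
```

If the number the AI reports and the number the formula reports disagree, that's a cue to dig in rather than trust either one.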

Do you have any final takeaways for using AI for research?

  • AI is a Powerful Assistant, Not a Replacement: It streamlines, but your expertise is irreplaceable.
  • Skepticism is Your Superpower: Always verify, and actively seek out potential hallucinations or biases.
  • Prioritize Ethics, Privacy, and Transparency: These are paramount, especially with sensitive military mental health data.
  • Embrace Explainable AI: Understand why AI reaches its conclusions, particularly for impactful findings.
  • Focus on Value-Added Tasks: Let AI handle the grunt work, so you can dedicate time to critical thinking and meaningful insights.
  • Stay Informed and Adapt: AI is rapidly evolving; continuously educate yourself on best practices and ethical guidelines.

By strategically, critically, and ethically integrating AI, you can significantly enhance the efficiency, depth, and impact of your military mental health research, ultimately contributing more effectively to the well-being of service members and veterans.


What surprised me most was how much Gemini encouraged my skepticism. I particularly liked its advice to treat AI as a "smart intern." It was also clear that AI tools can support many parts of the research process and help speed it up. One of my favorite uses is checking the reading level of surveys and consent forms.

This week I've used AI to help review surveys, start a consent form and procedures for an IRB protocol, and draft recruitment language. However, I also ran a new screener through Gemini, and it answered all of the questions correctly in less than a second; if Gemini can ace it that quickly, so can any prospective participant with an AI tool, rendering the screener effectively useless. I'm left conflicted: I'm certainly beginning to see the benefits of AI, but I'm also reminded that it may make our jobs harder in other ways. There's much still to be seen in how research's relationship with AI evolves, but I'm more open to it today than I was yesterday.

The opinions in CDP Staff Perspective blogs are solely those of the author and do not necessarily reflect the opinion of the Uniformed Services University of the Health Sciences or the Department of Defense.

Maegan M. Paxton Willing, Ph.D., MPH, is a Research Psychologist (HJF) with the Center for Deployment Psychology (CDP) at the Uniformed Services University of the Health Sciences in Bethesda, Maryland. Dr. Paxton Willing's research broadly examines mental health in military populations. Her primary research interests include the development, correlates, and treatment of sleep disorders and posttraumatic stress disorder in service members, with a particular emphasis on posttraumatic nightmares.