Most AI acknowledgments I see are insufficient. “This work was created with assistance from ChatGPT.” “AI tools were used in the preparation of this document.” What do these disclaimers actually tell us? Nothing.
I’ve hit this problem repeatedly this summer as my use of AI has fundamentally shifted, making it a core component of everything I do. Take one recent example: developing an AI workshop for summer interns. I used AI extensively throughout the development—not just for grammar checking, but for collaborative brainstorming, structure refinement, even drafting entire sections. When it came time to present the work, the standard options all felt inadequate.
This pattern keeps repeating. Building task management tools with AI assistance. Writing course materials through AI collaboration. Even everyday professional writing where AI becomes a genuine thinking partner. Each time, I face the same attribution dilemma: how do you acknowledge meaningful AI collaboration without either hiding it or making it sound like you did nothing?
In my data science courses, I’ve stopped accepting vague disclaimers from students facing identical challenges. Instead, I teach them to document AI collaboration the same way we document human collaboration: specifically and honestly. When my department developed our Generative AI Academic Policy, one of the more progressive policies in higher education, we moved beyond simple disclosure toward meaningful attribution. But even our advanced policy falls short of the granular transparency that deep AI collaboration requires.
The question isn’t whether to acknowledge AI collaboration—it’s how to do it in ways that serve readers instead of just covering legal bases.
The Research Gap
IBM’s recent AI Attribution Toolkit represents thoughtful research into attribution practices, using sophisticated scenario-based methodology to examine how knowledge workers perceive AI collaboration credit. Their findings reveal nuanced views on attribution across different contribution types, amounts, and initiative levels: valuable insights that challenge simple binary disclosure requirements.
However, their research was conducted entirely within a large technology company, surveying employees already immersed in AI development and usage. This creates a critical limitation: tech workers operate in professional environments where AI usage is normalized and expected. When everyone around you uses AI daily, acknowledging its role carries no social or professional risk. If anything, not using AI daily is the real social misstep.
The gap became clear when I shared attribution concepts with colleagues outside tech. While IBM’s participants readily engaged with granular attribution frameworks, professionals in other fields often expressed reluctance to acknowledge AI assistance at all, fearing it might signal reduced competence or creativity to colleagues unfamiliar with AI collaboration.
This highlights the core implementation challenge: moving from research insights gathered in AI-friendly environments to practical adoption across professional contexts where AI acknowledgment still carries social costs.
Moving from Theory to Practice
After two years of watching students navigate AI assistance without meaningful attribution guidelines, and seeing most professors and secondary school teachers default to AI bans rather than integration, I finally spent this summer developing systematic GenAI integration for my courses and my own work. The challenge remains: how to acknowledge AI contributions without writing lengthy explanations for every piece of work.
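To make this concrete, here is a minimal sketch of what a compact, granular attribution record could look like, expressed as a small Python data structure. The field names (task, tool, role, extent, human_oversight) and the rendering function are illustrative assumptions of mine, not part of IBM’s toolkit, my department’s policy, or any published standard.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record describing one dimension of AI involvement in a piece of
# work. Field names are illustrative, not drawn from any existing framework.
@dataclass
class AIContribution:
    task: str             # which part of the work the AI touched
    tool: str             # which system was used
    role: str             # e.g. "brainstorming", "drafting", "editing"
    extent: str           # rough share of that task, e.g. "substantial", "light"
    human_oversight: str  # how the human reviewed or reworked the output

def render_attribution(contributions: List[AIContribution]) -> str:
    """Collapse a list of contribution records into a one-line attribution note."""
    parts = [
        f"{c.task}: {c.role} via {c.tool} ({c.extent}; {c.human_oversight})"
        for c in contributions
    ]
    return "AI assistance: " + "; ".join(parts)

# Example: the kind of note a student or author could attach to a submission.
workshop_note = render_attribution([
    AIContribution("outline", "ChatGPT", "brainstorming", "substantial",
                   "restructured by author"),
    AIContribution("section drafts", "ChatGPT", "drafting", "moderate",
                   "rewritten and fact-checked by author"),
])
print(workshop_note)
```

The point of a structure like this isn’t the code itself; it’s that a few agreed-upon fields can capture what, how, and how much, in a single line rather than a paragraph of hedging.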
For the first time this fall, I’m requiring granular AI attribution in both my freshman introduction to data science and my 500-level data engineering courses. I suspect student reactions will mirror the broader professional resistance I’ve observed: initial pushback followed by growing appreciation for transparency frameworks that actually work. My experience with summer interns showed promise, but the sample size was small and the “doing things the way we always have” mindset proved difficult to overcome.
A student recently told me she avoids using AI entirely because she doesn’t want to be tempted to use it when she’s not supposed to. This captures exactly what IBM’s research missed: surveying tech workers in AI-friendly environments reveals theoretical preferences, but real-world implementation faces social costs. How do attribution practices actually affect learning when students fear judgment for AI use? What drives adoption versus resistance outside tech bubbles?
These implementation questions require the kind of longitudinal, systematic study that only academic research can provide.
The Research We Need
The disconnect between tech-friendly survey responses and real-world adoption reveals fundamental gaps in our understanding of AI attribution. We need empirical research that examines how attribution frameworks perform across diverse professional contexts, not just within organizations where AI use is already normalized.
Key research questions include: How do different attribution approaches affect adoption rates among professionals who face social costs for AI acknowledgment? What factors drive resistance versus acceptance of granular attribution frameworks? How do we design standards that work across industries with varying levels of AI comfort?
This research requires systematic study across multiple professional contexts, longitudinal tracking of attribution practices, and careful analysis of implementation barriers that laboratory studies cannot capture. The stakes are too high for ad-hoc solutions developed in AI-friendly bubbles.
Industry needs attribution standards that work beyond tech companies—frameworks that account for the social dynamics, professional risks, and cultural resistance that shape real-world adoption. Getting this right requires academic research that can study these questions systematically, without the commercial pressures that limit industry research scope.
The field is ready for this work. The question is whether we’ll do it systematically or continue letting organizations stumble toward solutions independently.
Attribution
This piece emerged from collaborative exploration between human insight and AI assistance across multiple writing sessions. The human contributor provided the initial concept based on personal AI integration challenges, conducted research to identify relevant prior work (including IBM’s attribution research), established strategic positioning, contributed all personal anecdotes and classroom observations, and maintains full responsibility for final content. AI assistance contributed to structural development, drafting support, and editorial refinement through extensive back-and-forth revision cycles. The collaboration involved significant iterative refinement, with AI helping articulate and organize concepts emerging from human domain expertise.
References
- IBM AI Attribution Toolkit: https://aiattribution.github.io/
- He, J., Houde, S., & Weisz, J. D. (2025). Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation. arXiv preprint arXiv:2502.18357v1. https://arxiv.org/html/2502.18357v1
- IBM Research Blog: AI Attribution Toolkit. https://research.ibm.com/blog/AI-attribution-toolkit