Expert Eyes, Machine Hands investigates how generative artificial intelligence (GenAI) is reshaping creative practice, aesthetic judgment, and professional expertise within the visual communication and advertising fields. While GenAI tools such as DALL·E and ChatGPT are increasingly integrated into creative industries, little empirical research examines how expert practitioners actually experience and evaluate these systems in real-world tasks. This project therefore explores not only what GenAI can produce, but how creative professionals negotiate its affordances, limitations, and cultural implications.
School of Communication staff involved: A/P Angelique Nairn, Justin Matthews, Daniel Fastnedge, Angela Asuncion, AD Narayan & Matt Guinibert
GenAI has rapidly become a defining cultural technology of the 2020s, transforming assumptions about art, authorship, and originality. Scholars highlight its dual potential: on one hand, GenAI can accelerate idea generation, reduce labour, and expand imaginative possibilities; on the other, it risks producing superficial “pseudo-creativity,” diluting aesthetic standards, and redistributing cultural capital away from human expertise. These debates are especially significant in creative industries—such as advertising—where originality, brand distinction, and aesthetic nuance are core professional values.
This project situates itself within these tensions, using Csikszentmihalyi’s systems model of creativity and Bourdieu’s account of cultural capital to analyse how GenAI interacts with domain knowledge, expert judgment, and aesthetic norms. Rather than treating GenAI as a tool that simply supplements or replaces human creativity, the study interrogates how it reconfigures the conditions under which creative work is conceived, evaluated, and legitimated.


The project employed a collaborative autoethnography, enabling a group of experienced creative practitioners—who are also educators—to reflect systematically on their own interactions with GenAI. Five practitioner–researchers worked independently on an identical, industry-style creative brief: designing a full brand identity for a new Amazon-sourced energy drink. GenAI (DALL·E and ChatGPT-5) served as the primary production tool for naming, logo development, packaging, mock advertising, and optional video concepts.
During the process, participants kept individual logs documenting their prompts, iterations, and reflections on working with GenAI.
A subsequent artefact-led focus group allowed participants to critique their own and others’ outputs against the brief. The team then conducted a comprehensive reflexive thematic analysis across transcripts, logs, and artefacts, grounding insights in lived experience and theoretical frameworks.
Participants consistently experienced GenAI as a powerful tool for overcoming inertia, sparking unexpected directions, and accelerating early-stage exploration. GenAI’s rapid generation of names, slogans, visual motifs, and mock designs helped practitioners narrow options efficiently. Many described GenAI as a collaborator that stimulated creativity rather than replacing it, echoing research on co-creative human–AI partnerships.
A major theme concerned the system’s apparent “memory” of users’ prior interactions. Participants noted that GenAI often defaulted to familiar domains, styles, or problem-solving patterns based on their historical use of the platform. While this created efficient, personalised workflows, it also restricted novelty—an algorithmic reinforcement of existing preferences. This behaviour mirrors Bourdieu’s notion of habitus: the technology appeared to learn and reproduce each practitioner’s tendencies, narrowing rather than expanding creative possibility.
Although GenAI could produce polished images quickly, practitioners repeatedly observed issues with fidelity, consistency, and typography. Small details degraded during refinement, colour palettes shifted unpredictably, and complex elements such as hands entered uncanny territory. More importantly, many outputs felt generic or derivative—reflecting dominant design tropes embedded in the training data. Logos resembled major brands, typography appeared stock-like, and overall aesthetics often evoked “Canva-like” templates. Experts felt that while GenAI could achieve visual plausibility, it struggled to deliver originality or exercise subtle aesthetic judgment.
Across the project, expertise remained essential. Practitioners identified where GenAI excelled (speed, variation, exploration) and where it faltered (nuance, consistency, innovation). They consistently intervened to critique, refine, or correct outputs, underscoring that human judgment still determines whether AI-generated work meets professional standards. The study highlights expertise not as resistance to technological change, but as the ability to strategically curate and constrain GenAI’s affordances while mitigating its biases and limitations.
This project advances scholarly and industry debates by offering a fine-grained, experiential account of human–AI co-creativity. It demonstrates that GenAI can accelerate ideation and early-stage exploration, that its outputs tend to reproduce familiar patterns and generic aesthetics, and that expert judgment remains decisive in curating, refining, and legitimating AI-assisted work.
In sum, the study shows that GenAI is neither a threat to creativity nor its saviour, but a transformative force that reshapes the ecosystem in which creative work occurs. Understanding this transformation requires attention to the lived experiences, aesthetic decisions, and accountability practices of the experts who work with these tools.
