Programming Through Sketching - Non-Linguistic Interaction with Generative AI
Open access · English · OCLC 1542825897
Generative AI technologies are rapidly being adopted in programming workflows, particularly for front-end and user interface development. However, many of these tools, including well-known platforms such as ChatGPT and Copilot, rely mainly on text-based prompts, which frequently fail to convey spatial relationships, layout hierarchies, and designer intent. This mismatch reduces usability, expressiveness, and creative control, particularly for visually oriented tasks. To address this gap, this study investigates how sketch-based, non-linguistic interaction can improve developer experience and human-AI collaboration in code generation.

The study addresses three core questions: (1) What limitations do programmers face when using traditional text-based AI tools for layout and interface generation? (2) How does sketch-based interaction affect expressiveness, control, and prototyping speed in AI-assisted programming? (3) What are the UX implications of multimodal (sketch + text) input when interacting with generative AI tools for code generation?

A Design Science Research (DSR) approach was used to create and test a conceptual framework for sketch-based, multimodal engagement with generative AI. Eight experienced programmers took part in a hands-on UI development task, creating and submitting sketches (hand-drawn or produced in Figma or Photoshop) to AI tools including Lovable.dev, Cursor.com, and ChatlyAI. The artefact was evaluated through semi-structured interviews, with responses analysed thematically to uncover trends in usability, learning, and creative involvement.

The findings show that text-only inputs frequently resulted in misaligned or incomplete UI outputs, necessitating time-consuming revisions. In contrast, sketch-based inputs allowed for clearer communication of layout intent, faster prototyping, and reduced cognitive burden. Multimodal engagement, which combines visual sketching with minimal textual prompts, produced the most accurate and user-friendly results.
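To make the multimodal input pattern concrete, the following minimal Python sketch shows one way a sketch image plus a short text prompt could be submitted together to a vision-capable generative model. The OpenAI client, the gpt-4o model name, and the ui_sketch.png file name are illustrative assumptions for demonstration only; the study's participants worked with Lovable.dev, Cursor.com, and ChatlyAI rather than this API.

```python
import base64

from openai import OpenAI

# Hypothetical multimodal (sketch + text) prompt: the image carries the
# spatial and hierarchical layout intent, so the text prompt stays minimal.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the hand-drawn UI sketch as a base64 data URL.
with open("ui_sketch.png", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model would do here
    messages=[
        {
            "role": "user",
            "content": [
                # Minimal textual prompt accompanying the sketch.
                {
                    "type": "text",
                    "text": "Generate HTML/CSS for the UI layout in this sketch.",
                },
                # The sketch itself, passed inline as a data URL.
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{sketch_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The design choice this sketch illustrates mirrors the study's central finding: when the image conveys spatial relationships and layout hierarchy, the accompanying text can be reduced to a brief instruction instead of a lengthy layout description.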