Text-to-image AI as a tool for the designer’s ideation process
School of Arts, Design and Architecture | Bachelor's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Authors
Date
2023
Department
Major/Subject
Design
Mcode
ARTS3101
Degree programme
Bachelor's Programme in Design
Language
en
Pages
76
Abstract
Ideation is an essential part of any designer's work. To support it, a variety of tools and methods have been developed over the past few decades, and researchers have recently highlighted computational tools as a promising area for studying design ideation. To adapt to the future of design, designers need to understand how to use new, emerging tools to enhance their design processes. Although artificial intelligence (AI) has become a thriving field with many meaningful applications, its potential use in ideation remains fairly unexplored in both academic and non-academic circles. The inherent qualities of generative text-to-image (T2I) models, namely their speed and their ability to take natural-language or textual input and translate it into visual output, highlight their potential as ideational aids: they could act as strong alternatives to traditional ideation tools such as brainstorming or mood boards, allowing designers to quickly turn the ideas in their heads into visual form. This thesis investigates the possible connection between AI and design ideation through semi-structured interviews, in which the researcher questions ten design professionals of different backgrounds about their ideation processes and then asks them to complete an exercise solving a design brief with T2I models such as DALL-E 2 or Stable Diffusion. As other researchers in this domain have argued, digital tools inevitably sway designers towards a particular way of thinking. The results of this thesis show that T2I AI tools are more applicable in diverging scenarios, where a designer needs to gather a set of diverse ideas and expand the space of possible solutions. In addition, in the context of this thesis, T2I models create a specific environment in which designers need to work if they want to use them for ideation, which limits the tool to specific scenarios.
For example, in scenarios where the designer wants to iterate continuously with a tool, or to generate non-visual inspirational stimuli, T2I AI may be less effective. In conclusion, in the right context, T2I AI provides a compelling alternative to search engines such as Google, offering stimuli beyond the designer's own ideation. T2I models could indeed serve as an excellent tool for the designer's ideation process. However, this depends, among other things, on designers' intentions for the models, the outcomes they expect from the tool, their prior experience with T2I AI, and their willingness to spend time learning the software. This thesis gathers those insights, offering designers a better understanding of when to use T2I AI tools for ideation.
Supervisor
Person, Oscar
Thesis advisor
Jeong, Rebecca
Keywords
generative AI, design ideation, text-to-image, design tools, convergence and divergence, DALL-E 2