Exploring the role of text-to-image AI in concept generation

DS 122: Proceedings of the Design Society: 24th International Conference on Engineering Design (ICED23)

Year: 2023
Editor: Kevin Otto, Boris Eisenbart, Claudia Eckert, Benoit Eynard, Dieter Krause, Josef Oehmen, Nad
Author: Brisco, Ross; Hay, Laura; Dhami, Sam
Series: ICED
Institution: University of Strathclyde
Section: Design Methods
Page(s): 1835-1844
DOI number: https://doi.org/10.1017/pds.2023.184


Artificial intelligence (AI) systems capable of generating images from a text prompt are becoming increasingly prevalent in society and design. Members of the general public can use their computers and mobile devices to ask a complex text-to-image AI to create an image which is, in some cases, indistinguishable from one a human could create using a computer graphics package. These images are shared on social media and have been used in the creation of art projects, documents and publications. This exploratory study aimed to identify whether modern text-to-image AI (Midjourney, DALL-E 2, and Disco Diffusion) could replace the designer in the concept generation stage of the design process. Teams of design students were asked to evaluate AI-generated concepts, narrowing 15 concepts down to a final concept. The outcomes of this research are a first of their kind for the field of engineering design: the identification of barriers to the use of current text-to-image AI for engineering design. The discussion suggests how these barriers can be overcome in the short term, and what knowledge the research community needs to build to overcome them in the long term.

Keywords: Artificial intelligence, Conceptual design, Text-to-image, Design process, Concept generation
