//////////////////

Automating Product Design with AI

20230516

This post was initially going to be a deep dive into the Google Sheets Sync Figma plugin by Dave Williames and how to use ChatGPT to complete the process (e.g. "Generate 50 user comments around X topic with this data structure", then feed them through the Sync plugin via Google Sheets). But then I started thinking it would actually be much more interesting to have the generation happen in the plugin itself, within Figma... and voilà! The same developer published Contentinator, which does exactly that.
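For the Sheets route, the prompt does most of the work. Something along these lines (the topic and columns are just an illustration, not a fixed schema) gives you rows that paste cleanly into a sheet:

  Generate 50 user comments about [X topic].
  Output them as CSV rows with the columns: Name, Comment, Rating (1-5).
  One row per line, no header.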

I feel these are the first steps toward an actual Copilot for designers. Some are hinting at this, but there's no product yet, so I'll test this plugin, see if it can solve a few challenges, and add some thoughts on what I would expect a full copilot to help us with.

In my case, working for a company, the content of wireframes & designs normally revolves around the same topics and is fairly repetitive. The idea is to avoid entering that content manually and only create the frames where it needs to go. The most common tasks I can think of are:

  • Generate bulk sample content for a set of components

  • Content translation & generation in-canvas

  • Generate a set of images to fill components

Data Tables & Cards

I want to generate text & numeric content, ordered in a certain way for a number of rows, and apply it across a range of components:

  • Separating by "New lines" and "Semicolons" seems to give the best results for content insertion

  • The AI is not always consistent when generating more than 5 instances; it gets creative with the format and thus messes things up when the content is applied to the layers.

Ideally, you could select the full table and specify a complex configuration. In my case, tables are built column by column instead of row by row (meaning cells in the same column are grouped together to allow batch resizing), which breaks the content insertion.
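To make that mapping concrete, here is a minimal sketch in plugin-API TypeScript (my own, not Contentinator's code) of splitting AI output on newlines or semicolons and pouring it into the text layers of the current selection:

  // Split AI output on newlines/semicolons and map it onto text layers.
  async function applyGeneratedContent(aiOutput: string) {
    const values = aiOutput
      .split(/[;\n]+/)
      .map(v => v.trim())
      .filter(Boolean);

    // Collect text layers from the selection, including nested ones.
    const textNodes: TextNode[] = figma.currentPage.selection.flatMap(node =>
      "findAll" in node
        ? (node.findAll(n => n.type === "TEXT") as TextNode[])
        : node.type === "TEXT" ? [node] : []
    );

    for (let i = 0; i < textNodes.length && i < values.length; i++) {
      // Fonts must be loaded before editing characters; this assumes each
      // layer uses a single font (fontName is figma.mixed otherwise).
      await figma.loadFontAsync(textNodes[i].fontName as FontName);
      textNodes[i].characters = values[i];
    }
  }

Note that the column-by-column grouping issue above is exactly what breaks here: the order of textNodes follows layer grouping, not visual rows.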

Cards are pretty easy, as their layers are grouped together. In this case it's a complex insights-card use case, but the plugin nails the content anyway:

Content Translation In-Canvas

Another great use case would be translating a full frame into another language to check how the components adapt. This plugin doesn't allow that kind of input, but it does allow some really interesting text manipulation on the same layer:
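The plugin doesn't expose this, but as a rough sketch of what in-canvas translation could look like: assuming the plugin's manifest declares network access and the user supplies an OpenAI API key (the key and prompt below are illustrative), a text layer can be rewritten in place with one chat-completion call:

  const OPENAI_KEY = "sk-..."; // user-supplied API key, illustrative only

  // Hypothetical in-canvas translation, not a feature of the plugin:
  // send a layer's text to the OpenAI chat API and write the result back.
  async function translateLayer(node: TextNode, language: string) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${OPENAI_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{
          role: "user",
          content: `Translate into ${language}. Reply with the translation only:\n\n${node.characters}`,
        }],
      }),
    });
    const data = await res.json();
    await figma.loadFontAsync(node.fontName as FontName);
    node.characters = data.choices[0].message.content.trim();
  }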

Generating images

There are 2 use cases for me:

  1. Needing a quick image for a specific block

  2. Needing to fill a set of components with a type of image (users, etc.)

The first one works just fine; the plugin is basically a Dall-E / Midjourney with an insert option. For the second one, you'd still be better off using Sync with a database of pre-established images, as the plugin can't batch-generate images. Take into account:

  1. You need to set up a shareable Google Sheet so the plugin can fetch the data

  2. Images need to be public URLs to a .jpg (I'm storing them on Drive as well)

  3. Layers have to be named as #column (e.g. my image layer is named #Avatar)

This works great, but you'd have to pre-build your image use cases and then fetch them as needed.
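For reference, the fill step itself is simple in the plugin API. A minimal sketch (mine, not from either plugin) of filling a layer from a public image URL, which is essentially what the #Avatar mapping resolves to:

  // Fill a selected layer with an image fetched from a public URL.
  // figma.createImageAsync is part of the Figma plugin API.
  async function fillWithImage(
    node: RectangleNode | EllipseNode | FrameNode,
    url: string
  ) {
    const image = await figma.createImageAsync(url); // must be a public image URL
    node.fills = [{ type: "IMAGE", scaleMode: "FILL", imageHash: image.hash }];
  }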


Final thoughts

I have been reviewing several design projects I did in the past, and to be fair, at first sight there are not many instances where I wasted a lot of time creating mock content. I might be missing better or more creative use cases of the existing tools.

I feel that the main limitation is that we are not feeding the design to the AI for feedback; we are just using its outputs to fill content, missing potentially industry-groundbreaking use cases more akin to what Copilot is doing for code.

Along those lines, here are a few use cases that could be interesting if we were able to feed our work to the AI:

  • We could have a "Design Review" trigger that feeds frames as PNGs into the AI for analysis, outputting ChatGPT's responses as Figma comments directly on the canvas (a rough sketch of the export step follows this list). These comments could touch on a variety of topics that, similarly to Copilot, aggregate common UX knowledge, for example around accessibility, best practices, or misalignments with the existing design system.

  • Content seems like a really simple but powerful use case. Ideally, instead of prompting each text layer, we could have a full AI review of copy that actually modifies the content in a duplicated version of a frame.

  • For translation work, having the ability to trigger duplicated "ghost" frames next to the one you are working on with the content translated into other languages could be a great way to see what the structure looks like across audiences.

  • I'm more doubtful about generating UI directly, although if we are able to train models on an existing design system, they could be quite consistent when generating the more generic or repetitive areas of a product.
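Of these, the "Design Review" idea is the only one where half the plumbing already exists: plugins can export any frame as a PNG today. A purely speculative sketch of that half, with the vision model and the comment posting left as the missing pieces (plugins can't create comments; that would need the Figma REST API):

  // Export a frame as a PNG, ready to be sent to a vision-capable model.
  // exportAsync is real plugin API; everything downstream is hypothetical.
  async function exportFrameForReview(frame: FrameNode): Promise<Uint8Array> {
    // 2x scale keeps small text legible for the model.
    return frame.exportAsync({
      format: "PNG",
      constraint: { type: "SCALE", value: 2 },
    });
  }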

As of this post, the web version of GPT-4 in ChatGPT doesn't ingest images, but I can only imagine how this industry is going to change as soon as ingesting, understanding, and outputting image content is as easy as it is with language right now.