My step-by-step guide to leveraging no code AI tools to create a landing page. Use AI tools to generate code, suggest copy, fonts and colour palettes, and create imagery, audio and video. Let me show you how you can do this too.
Imagine a future where you can bring a conceptual idea for a software product to life by simply writing a basic prompt into an interface. You provide some high-level context relating to your conceptual product or service, then sit back and let an entire end-to-end AI development tool generate each element of your mobile app, web app or website. It would suggest all the copy based on the target market, branding, custom imagery and illustrations, optimal user journey, structure of the database, logic, APIs, you name it. Once it's complete, you can play around with it and edit any elements you wish with an intuitive graphical interface, or, if you prefer, jump in and edit any of the code directly to your preference.
Sadly, a full end-to-end AI-driven software development tool like this doesn't exist...yet.
However, we are living in an incredible time where software development and AI tools are innovating at breakneck speed, potentially moving us closer to this future.
I’m sure you’ve heard a lot recently about AI. Perhaps you're sick of hearing about it, perhaps you're curious, or perhaps you don’t particularly care. Personally, I was pretty slow to react to this new wave of hype, which is pretty unusual for me given my shiny object syndrome with new technologies. It's the curious early adopter in me. The Crypto, NFT and Web3 bubbles over the past few years (which I admittedly got caught up in) made me slightly hesitant to jump into the next hype cycle. However, this feels different; the advancements in AI feel different. Unlike previously hyped technologies that promised more utility and disruption in the future, AI doesn't seem like a nascent technology. AI brings broader utility which is already more accessible to the masses, and there is certainly more to come.
There is no doubt that AI technologies like GPT-3 and MidJourney (to name a few) are still in their early stages, but they are quickly becoming ubiquitous. To stay ahead of the curve in the tech industry, and especially when building products, it is essential to understand how these technologies work and, in particular, how they can improve our products, businesses and personal lives. The best way to prepare for the future is to get started and gain practical experience.
In this post I’m going to walk you through each of the AI and no code tools I used to create this landing page experiment. I’m going to share every prompt used and explain some of the tips and tricks I learned along the way. The best thing is, each of these tools is free and accessible to anyone with a computer and an internet connection.
Check out the end result here before reading.
As part of this experiment I set a few constraints. Firstly, the tools needed to be accessible to everyone: free, zero coding involved and with little learning curve required. This ensured I didn’t get carried away with more advanced tools before I understood the basics. To give you an idea of how much I used AI to generate the website: I used Chat GPT-3 to suggest all the website copy, including the titles, subtitles, testimonials and features, as well as the fonts, HEX colour codes, video and audio scripts, generated code and much more. I also used generative image tools to create all the images you see, along with AI background removal tools. Lastly, I used audio and video AI tools.
First of all, let me set something straight. I'm not a designer, I don't know how to use specialist design tools like Photoshop or Illustrator, I'm not a developer, I barely know how to code and I am certainly not a copywriter. However, I spent the last 3 weeks getting familiar with a number of AI tools to better understand the utility of these products and the value they can provide to a maker and product manager. Not to mention the implications of these tools and the impending impact they will have on various professions.
It may seem pretty intimidating to get started, especially if you don’t have technical experience, but I can assure you it’s not as daunting as you might perceive it to be.
Before we dive into this section, it’s worth clarifying some terms in advance as I will reference these phrases throughout this post. The set of words you provide to Chat GPT-3 is called a prompt, and the answer you get back is called a completion.
Let’s start with the copy for the landing page.
I initially came up with a basic outline for a hypothetical product: “a new VR headset inspired by 80s design”. It was that simple. Taking this basic outline prompt, I wanted to understand what Chat GPT-3 could produce to help inspire me and flesh out the product features. My first prompt was to create a press release for the product, as a starting point I could build out from.
As you can see in the screenshot below, my prompt requested the tone of the press release to be 'witty' and I also added some details such as ‘VR headset’ which was ‘inspired by 80s design’. Not terribly detailed, as you can see; however, the completion was impressive given how little context or direction was provided.
One incredible thing to note about Chat GPT-3 is that it’s aware of the previous context you provided it, as well as the completions it has already given. This means you can ask it to elaborate on or refine completions without having to repeat the prior prompt. Take for example the completion produced above. I could use the following prompts to further refine the completion:
Next up I wanted to define the headline for the website. As you can see, I refined the original completion to make the headline 'shorter' and 'punchier'.
Now that I had provided some context and got some great output for the headline and subtitle, I wanted to explore some of the hypothetical features of the product. I wanted the copy on the landing page to be entertaining, therefore I requested the completion include 'silly features'.
Notice how it has taken all the previous context without me needing to remind it of the product's specifics.
Once I had selected a few of my favourite features, I prompted it to elaborate further on some of them. In the example below, I could include the completion within quotation marks, or alternatively reference the numerical value in the numbered list from the completion, e.g. "elaborate further on point 1".
Moving onto the testimonial section, I wanted to get some funny testimonials from early users of the product.
Coming up with a selection of colours for the theme of your website can be tricky. So I prompted GPT-3 to provide some suggestions using the following prompt: "provide the hex colors for the design of the 80s retro inspired website". Personally, I was really impressed by the suggestions. Not only did it provide the HEX colour codes, but also further detail on where the inspiration for each colour came from, which tied into the general "80s retro inspired" aesthetic I was looking to achieve. It also provided some creative direction on where each colour would be best placed. For example, it provided an accent colour specifically for the buttons.
I then took these colours and entered them into Webflow next to each other to see how they contrasted against one another, and made some minor tweaks to some of the suggestions.
If you are looking to leverage AI tools to generate colour combinations check out Colormind, Huemint or Khroma.
After quickly scanning Webflow's limited catalogue of fonts for something retro looking, I turned to Chat GPT-3 for some suggestions. The suggestions below were spot on. I checked each of them in the Google Fonts directory, selected Orbitron and Press Start 2P, and imported them into Webflow. Using Chat GPT-3 to suggest fonts was certainly much faster than my traditional methods of sifting through various Google search results or going through the entirety of the Google Fonts library. Additionally, the short description provided against each suggestion is helpful for understanding the aesthetic quality each suggested font provides.
Right, this is where it gets pretty wild.
I wanted to experiment with some of the new AI speech generators after hearing some incredible things about ElevenLabs' new Prime Voice AI tool.
So I created a free account and started playing about with some of the configuration options, adding some dummy copy to test the output. Once I found the optimum voice type and configuration settings, I needed to come up with a script.
As you might have guessed by now, I used Chat GPT-3 to provide a script introducing the new product, and specifically constrained it to 50 seconds to keep it short and punchy.
I then pasted this script into ElevenLabs' Speech Synthesis model and generated the audio file, which I could then download in MP3 format.
I initially wanted to embed a video explainer on the landing page, but the wannabe designer in me didn't like the look, as most of the free video hosts don't allow you to remove the watermark, controls and other elements which clutter the UI. However, it didn't stop me playing with a few video AI tools. I tried one in particular that allows you to use the output of a generative image tool, such as MidJourney, and turn it into a video. D-ID's Creative Reality tool can generate a photorealistic presenter video by combining images with text.
To create my explainer, I generated an image of Marty McFly from Back to the Future and imported it into the Creative Reality tool. Similar to the AI audio tool, I used Chat GPT-3 to provide a 50-second script, which I pasted into the Creative Reality tool.
I was certainly impressed by the output generated, given it was built from a single imported image; however, there is some room for improvement to make it hyper-realistic.
Now the tricky part. I had no idea how to add an audio player to the landing page. So I decided to give Chat GPT-3 a shot and see if it could write the JavaScript required to embed into my website.
Guess what …. IT DID!!
To add this to my website, I added an embed element in Webflow, pasted in the code, and added the URL for the audio file, which I hosted on Dropbox. I then styled a div block, assigned it a class, and updated the button ID in the JavaScript. It was as simple as that.
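For illustration, here's a minimal sketch of the kind of snippet that came back, not the exact code GPT-3 gave me. The audio URL and the button ID below are placeholders, so swap in your own hosted MP3 link and the ID of the element you styled in Webflow.

```html
<!-- Minimal sketch of an audio player embed (placeholder URL and ID, not the exact GPT-3 output) -->
<button id="play-audio">Play the intro</button>

<script>
  // Point the audio object at your hosted MP3 file (placeholder URL).
  var audio = new Audio("https://example.com/your-hosted-audio.mp3");

  // Toggle play/pause each time the styled button is clicked.
  document.getElementById("play-audio").addEventListener("click", function () {
    if (audio.paused) {
      audio.play();
    } else {
      audio.pause();
    }
  });
</script>
```

One thing worth checking if you host the file on Dropbox: the standard share link opens a preview page rather than the file itself, so you may need to tweak it (for example swapping ?dl=0 for ?raw=1) so the URL points at the raw MP3.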
I can see this being an incredible tool for non-technical folks, though some moderate technical understanding is still required. However, I believe that Chat GPT-3 (a low code solution) paired with various other no code tools could provide the means to create more complex software products, with far fewer technical skills required, in a fraction of the time it currently takes.
I've considered creating a library of JavaScript snippets for various purposes which could simply be embedded into most no code web app and website products. Watch this space.
I would be lying to you if I didn’t admit I found this part the most fun (and frustrating at times).
I am continuously blown away by what is possible with generative AI image tools.
I used MidJourney as I personally found the output much better than Stable Diffusion and DALL·E 2. That may also be due to the limited time I spent getting familiar with those tools, so I wouldn't rule them out.
One element which causes a lot of folks confusion when first using MidJourney is the interface you use to interact with the model. In short, you need a Discord account, as it’s set up as a server on Discord. You can create a free Discord account in minutes. Once you have an account, simply visit the MidJourney site, sign up, and complete the setup wizard.
Now, this is where the fun begins.
Admittedly, I was horrific at coming up with prompts at first and got low quality output (shit in = shit out). I quickly learned there is an art to writing prompts in MidJourney.
One of the best accounts to follow for MidJourney prompt tips is Linus Ekenstam on Twitter. He's been prolific with a bunch of AI tools, sharing his learnings as he tests each one. He's also got a fantastic newsletter full of tips which helped me craft the images for my website.
To note, my process for creating the images for the landing page was made up of a lot of trial and error, going back and forth to refine my prompts.
Let’s start with the hero image of the VR headset.
After trying various prompts to get a retro but slick looking VR headset image, I was struggling to get the output I envisaged. That’s when I found out that you could add an image URL to your prompt. I Googled a bunch of VR headsets on Google Images and found one I liked (thanks Meta). I simply copied the URL, pasted it into the prompt and added some additional prompt text to sculpt the desired output.
As you can see below, I took the image URL and added some prompt text to inspire the model, asking it to take the Apple and Nintendo aesthetic with an 80s flair.
Four images were quickly generated. I liked the look of version 2 (V2), so I upscaled it and also requested four alternative versions of this image to see if there were any improvements I should consider.
Tool tip: Upscaling is essentially selecting the image you want out of the four options provided. It creates a higher resolution image which you can download. If you want further variations based on one of the images, you can simply select V1-V4: V1 being top left, V2 top right, V3 bottom left and V4 bottom right. Lastly, you can also select the refresh icon if you want to generate entirely new images.
I really liked the look of version 2, so I upscaled it and saved it locally.
Next, I wanted to add this image to my site but didn’t want the background. So I used an incredible, free AI background removal tool called ‘Photo room’, which automatically removed the background of the images in seconds.
Let’s now move onto perhaps some of the trickier images I generated for the list of product features, which sits below the hero section of the landing page.
The first challenge was to get the image of the VR headset, which I created in the previous step, into these next images. First, I had to host the image somewhere so I could add the URL to the prompt. There are a bunch of free image hosting sites which can be found with a quick Google. I personally hosted the image on a hidden page on my website and copied the URL. I then wanted to get a series of images depicting a person wearing the headset with an 80s retro feel, which was realistic looking and would perhaps align with the feature I was depicting.
For the most part, the initial sentence of each prompt was pretty straightforward. I then got a little more specific with technical terms like: 50mm lens, 3:2, --q 5. These might not make sense to you, and the good news is, for the most part I don’t fully understand the technicality behind these terms either. I got these specific prompts from similar images I found in Linus’s Twitter ALT tags and also this super handy guide. Remember, it’s all trial and error, so don’t be afraid to experiment until you find something that works.
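To make that structure a little more concrete, here's an illustrative example of how a prompt like this is typically assembled in MidJourney. The image URL is a placeholder and the description and parameter values are my own shorthand rather than an exact prompt I used: the hosted image comes first, the plain-language description follows, and the technical parameters sit at the end.

```
/imagine prompt: https://example.com/retro-vr-headset.png a person wearing a retro VR headset, 1980s living room, photorealistic, shot on a 50mm lens --ar 3:2 --q 5
```

Here --ar sets the aspect ratio and --q the quality setting; which parameters are accepted, and their ranges, depend on the MidJourney version you're on, so it's worth checking the current documentation.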
I was really pleased with the output; however, as you might notice, the headset in the images isn’t exactly consistent. I haven’t found a workaround for this yet, but I can imagine it would be a game changer if future improvements allow precise blending and placement of an object from one image to another.
It's been said by a lot of others before me, but it's worth repeating. At this moment in time, these tools are aids. They improve productivity and creativity. It's not a simple copy and paste. Prompt engineering is required, and it takes time to learn how to craft the input that yields the desired output. For me, the biggest value unlock came from the generative image AI. Being able to imagine, reimagine and experiment with various prompts and see the output generated in front of me was, and is, astonishing. Additionally, when looking for inspiration and direction with copywriting, Chat GPT-3 is an invaluable tool. However, my biggest jaw dropping moment was Chat GPT-3 producing the JavaScript required to create an audio player to embed into my site.
For illustrators, graphic designers and user interface designers, this is either a game changer or a potential threat. Assets which once took hours or days can now be created in seconds, with a fraction of the skill or experience once required. This isn't to say by any stretch that designers are out of a job in the near future. A similar narrative emerged when no code tools first appeared, with many saying software developers would shortly be threatened; if anything, those tools removed time consuming repetitive tasks, brought efficiency gains and allowed developers to focus on harder problems. As I said earlier, at this moment in time these tools are aids, and for the most part they remove unnecessary friction; if adopted, they can 10x creativity and output in my opinion. Lastly, on the topic of AI design tools, two new products in early beta which I'm paying close attention to are Galileo AI and Genius. Both have a similar value proposition and leverage AI models to assist with wireframing and user interface design via a Figma plugin. Galileo seems to be more of a prompt based UI generator, while Genius is comparable to Copilot, acting more as an assistant that takes over when prompted based on your current design patterns.
Moving onto generative language models like Chat GPT-3. It's hard to fathom the impact tools like these will have, as I'm still wrapping my head around its capabilities and the models are continually improving. It should be said that although the output is incredible, I wouldn't at this moment in time simply copy and paste it. There is still creativity and skill required to craft the copy, especially website copy written for SEO and marketing purposes. It's an incredible tool to assist with your writing and research, whether summarising paragraphs of text into something more digestible and straight to the point, or helping you expand on something you have written. For copywriters, I can see this being an incredibly valuable tool paired with their expertise.
For Product Managers (like myself) there is huge value in tools like GPT-3. Just check out Martin Slaney's Product Managers Prompt Book as an example. He provides an entire library of prompts to assist with conducting competitor analysis, creating user stories, acceptance criteria, release notes, product visions, you name it. After trying some of the prompts he suggested myself, using acceptance criteria creation as an example, I was blown away by how thorough the output was; admittedly, I wouldn't have come up with such a robust set of acceptance criteria myself. The threat of AI tools to Product Managers seems a little further away in my opinion, given the multi-faceted nature of the role, with creative, coordination and communication aspects that I'm not aware can be replaced at this moment in time.
Lastly, there is the impact on developers. For the most part I'm not in a position to provide much input on this subject; however, from what I've seen myself and heard from others, products like GitHub's Copilot and GPT-3 can already auto-generate code, functions, tests and even documentation. It's certainly not at the point where non-technical users can use these products, and for the most part they are suggestive tools rather than tools that compile entire functional applications from a few prompts. The general sentiment I've heard from developer friends is that tools such as Copilot certainly improve their productivity, allow them to code faster, and free up time to solve harder problems and learn alternative syntaxes they might not have previously considered.
This paradigm shift suggests that in the near future our creative potential will no longer be limited by our technical abilities, dramatically lowering the barrier to entry.
This is hopefully just one of many experiments with AI tools I intend to conduct and document. I've only scratched the surface of the tools available, and if the last year of innovation is anything like what's to come, there will certainly be more exciting experiments around the corner to explore.
To give you an idea of my next experiment, I'll be looking to leverage AI to create a product validation grading tool. I actually conceptualised this years ago, well before any consideration of leveraging AI. The original plan was to build a decision tree to provide the scoring and suggested improvements, but from some early testing with AI models I think the output and value created using an AI model might be far superior.