Get the stats on auto’s AI-powered search readiness and explore four areas where auto brands can find quick wins to level up their game.
It’s hard to say exactly what AI means for the future of digital experiences, but AI-powered search is one area where auto brands can and should be optimising already.
Even if people aren’t going out of their way to use ChatGPT, Claude or Perplexity, they’ll be interacting with AI-powered search in Google’s AI Overviews as part of search results. In fact, a recent survey from Bain found that about 80% of consumers now rely on AI overviews in at least 40% of their searches, reducing organic web traffic by an estimated 15% to 25%.1
This new type of search means auto brands need to think a bit differently about their websites and content. Before, it was all about content that ranks well and gets that #1 spot in results. Now, brands also need to think about how to create content that AI wants to use to synthesise answers.
So, how can auto brands make sure they’re a go-to source for AI, and how ready are they to do it? There is a broad range of best practice in this area, but based on our recent analysis of the auto category, we’ve identified four areas where it’s clear there’s room to improve.
Turns out, AI crawlers are fussy. They like fast, clean and accessible websites. Auto is on the back foot here. In our Experience Fundamentals criterion (which measures the types of site performance basics AI crawlers care about), the category scored an average of just under 9 of a possible 15 points (58%).
Within the four sub-criteria for Experience Fundamentals, Cross-device Experience was the one bright spot, averaging 76%. Performance, Structure and Accessibility were all well below that (averaging 50%, 60% and 44% respectively).
The low scores in Performance and Accessibility are particularly concerning. Our Performance sub-criterion uses key measures from Google PageSpeed that relate to AI-powered search readiness, and accessibility enhancements are frequently cited by experts as an important way to improve AI-crawler access.
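For brands wanting a quick self-check, the sketch below queries Google’s public PageSpeed Insights API for a page’s overall performance score and a couple of Core Web Vitals. It’s a minimal Python illustration, not the tooling behind our analysis, and the exact response fields shown are assumptions worth verifying against the API documentation.

```python
# pip install requests
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def pagespeed_summary(url: str, strategy: str = "mobile") -> dict:
    """Fetch a PageSpeed Insights report and pull out a few headline numbers."""
    resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": strategy}, timeout=60)
    resp.raise_for_status()
    lighthouse = resp.json()["lighthouseResult"]
    audits = lighthouse["audits"]

    return {
        # Overall Lighthouse performance score, 0.0-1.0
        "performance_score": lighthouse["categories"]["performance"]["score"],
        # Lab values for two Core Web Vitals, as display strings (e.g. "2.1 s")
        "largest_contentful_paint": audits["largest-contentful-paint"]["displayValue"],
        "cumulative_layout_shift": audits["cumulative-layout-shift"]["displayValue"],
    }

if __name__ == "__main__":
    # example.com is a placeholder; swap in any page you want to test
    print(pagespeed_summary("https://example.com"))
```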
AI likes content that’s written for humans, covers topics thoroughly and is optimised for user search intent, not just specific keywords. Increasingly, it looks for this not just in text, but also in multimedia.
Unfortunately, auto has some way to go to satisfy AI in this area. For our Content Experience criterion, the category scored an average of 48%. While all of the sub-criteria within it matter when optimising for AI-powered search, three in particular are worth calling out.
The Flow sub-criterion (avg. score 56%) looks at overall content flow, scannability and narrative continuity. The lower score here suggests that many brands lack content that’s well structured and written in the clear, conversational manner AI favours.
The Quality sub-criterion scored slightly better, averaging 63%, but one key data point dragging it down was the Flesch readability score. This again points to content that isn’t as simple, natural and conversational as it could be.
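For reference, Flesch Reading Ease is straightforward to measure yourself. The sketch below is a minimal example using the open-source textstat package (our choice of library here is an assumption; any readability tool will do). Higher scores mean simpler, more conversational copy.

```python
# pip install textstat
import textstat

# Hypothetical sample copy for illustration only
sample = (
    "The new model has adaptive cruise control. "
    "It keeps a safe distance from the car in front and works in stop-start traffic."
)

# Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# Roughly, 60-70 reads as plain English; below 30 is very difficult to read.
print(textstat.flesch_reading_ease(sample))
```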
With an average score of 30%, the Customer Support sub-criterion was the lowest performing. A primary data point dragging it down was the breadth of support content. Only 11% of brands excelled when it came to having broad, rich customer support content. A big missed opportunity given AI’s penchant for deep, thorough content.
AI loves structured data and metadata. Both make it far easier for AI to understand what a piece of content is, along with its context and semantics.
Right now, auto is being quite stingy with structured data and metadata. In our analysis, we found that auto brands had an average of 409 pages with missing or invalid meta descriptions on their websites. The extent of the issue varied wildly from brand to brand, with some having only a handful of such pages and others having thousands.
On structured data, we plan to expand our analysis, but we already have some insight through Google’s Rich Results Test, which we ran on multiple pages of every website we studied. The test detects whether a page contains the structured data needed to generate rich results in Google Search, markup that should be common because it benefits traditional search too. Of the auto pages we sampled, 57% had no structured data for rich results detected on either smartphone or desktop.
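To make the check concrete, here’s a rough Python sketch of the idea: fetch a page, flag a missing or empty meta description, and list any JSON-LD structured data it declares (the format rich results typically rely on). It’s a simplified illustration using the requests and BeautifulSoup libraries, not the methodology behind our analysis.

```python
# pip install requests beautifulsoup4
import json
import requests
from bs4 import BeautifulSoup

def check_page(url: str) -> dict:
    """Flag a missing/empty meta description and list JSON-LD structured data types."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    meta = soup.find("meta", attrs={"name": "description"})
    description = (meta.get("content") or "").strip() if meta else ""

    # Rich results are usually driven by JSON-LD in <script type="application/ld+json"> tags
    jsonld_types = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except (TypeError, ValueError):
            continue  # malformed or empty JSON-LD block
        items = data if isinstance(data, list) else [data]
        jsonld_types += [item.get("@type") for item in items if isinstance(item, dict)]

    return {
        "meta_description_missing": len(description) == 0,
        "structured_data_types": jsonld_types,  # e.g. ["FAQPage", "Product"]
    }

if __name__ == "__main__":
    # example.com is a placeholder URL
    print(check_page("https://example.com"))
```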
Just like humans, AI values credibility. It wants fresh, well-referenced content from authoritative sources.
Building credibility is a broad topic worthy of its own study, but in our recent auto analysis we looked at some hygiene factors that are impactful yet often overlooked: claim substantiation, basic security errors and site errors.
We found that 52% of brands only sometimes or rarely had robust substantiation for claims and statistics that warrant it. A lack of citations might not bother many human visitors, but AI will notice.
Basic security and site error issues that can undermine credibility were also common. Of the 47 brands we looked at, 66% had HTTP URLs that should have been HTTPS (all URLs should be HTTPS today), and 85% had pages returning internal client errors (4xx, e.g. 404s and 403s). Just over half of the brands had 10 or more pages with internal client errors. These may seem minor, but they’re just a couple of the more basic security and error issues we found, and they all add up to undermine credibility.
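As a simple illustration of how easy these issues are to catch, the sketch below takes a list of page URLs (a placeholder here; in practice you’d pull them from a sitemap or a crawl) and flags any that are still served over HTTP or that return a 4xx client error.

```python
# pip install requests
import requests

# Placeholder list for illustration; replace with URLs from your sitemap or crawler
URLS = [
    "http://www.example.com/old-page",
    "https://www.example.com/models/suv",
]

for url in URLS:
    issues = []
    if url.startswith("http://"):
        issues.append("served over HTTP, not HTTPS")
    try:
        resp = requests.get(url, allow_redirects=True, timeout=30)
        if 400 <= resp.status_code < 500:
            issues.append(f"client error {resp.status_code}")
    except requests.RequestException as exc:
        issues.append(f"request failed: {exc}")
    print(url, "->", issues or "OK")
```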
So, is auto in Australia ready for AI-powered search? Based on our analysis, we think the category isn’t in the best spot to make the most of it. The good news is that all the areas for improvement we’ve covered are easy to address, will remain important no matter how quickly AI advances, and will improve the experience for human users as well.
External sources