Summary in 30 seconds:
- As Google increasingly promotes sites with content that demonstrates expertise, authoritativeness, and trustworthiness (E-A-T), it is imperative that SEOs and marketers produce content that is not only well written, but also demonstrates genuine expertise.
- How do you understand the topics and concerns that are most important to your customers?
- Can you use Q&A to inform content strategies?
- XPath notations can be your treasure trove.
- Catalyst Organic Research Manager Brad McCourt shares a detailed guide on using XPath notation and your favorite crawler to quickly gather questions and answers in a simple, digestible format.
As Google increasingly favors sites with content that demonstrates expertise, authoritativeness, and trustworthiness (E-A-T), it is imperative that SEOs and marketers produce content that is not only well written, but also demonstrates expertise. One way to demonstrate your expertise on a topic or product is to answer common customer questions directly in your content.
But how do you identify these questions? How do you understand the topics and concerns that matter most to your customers?
The good news is that they are hiding in plain sight. Chances are, your customers are shouting at the top of their keyboards in the Q&A sections of sites like Amazon.
These sections are a treasure trove of questions that (mostly) real customers are asking about the products you sell.
How do you use these questions and answers to inform content strategies? XPath notation is your answer.
You can use XPath notation and your preferred crawler to quickly gather questions and answers in a simple, digestible format. XPath saves you from clicking through endless screens of questions by automating the collection of important information for your content strategy.
What is XPath?
XML Path Language (XPath) is a query language developed by the W3C for navigating XML documents and selecting specified data nodes.
XPath queries are written as "expressions". Using these expressions, you can efficiently extract all the data you need from a website, as long as there is a consistent structure across its pages.
This means you can use this language to extract any publicly available data in the source code, including the questions from a selection of Amazon Q&A pages.
This article is not intended as an XPath tutorial; for that, there are many W3 resources. However, XPath is fairly easy to pick up if you know the basic structure of XML and HTML documents. This is what makes it such a powerful tool for SEOs, regardless of coding prowess.
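To make the idea of an expression concrete, here is a minimal sketch using Python's standard library, which supports a useful subset of XPath. The HTML snippet and the "question" class name are illustrative stand-ins, not taken from any real page:

```python
# A minimal XPath-style selection using only Python's standard library.
# The snippet and the "question" class are illustrative, not from a real site.
import xml.etree.ElementTree as ET

html = """
<html>
  <body>
    <span class="question">Do these fit over winter boots?</span>
    <span class="other">Unrelated text</span>
    <span class="question">Are they machine washable?</span>
  </body>
</html>
"""

root = ET.fromstring(html)

# ".//span[@class='question']" mirrors the //tag[@attribute='value'] pattern:
# find every <span> anywhere in the document whose class is "question".
questions = [el.text for el in root.findall(".//span[@class='question']")]
print(questions)
```

Note that `ElementTree` requires well-formed markup; for messy real-world HTML you would typically reach for a forgiving parser (or let your crawler handle parsing, as this article does).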
Let's take an example to show you how...
Use XPath to extract questions from Amazon customers
Prerequisite: Choose your web crawler
While most of the big names in web crawling - Botify, DeepCrawl, OnCrawl - all offer the option to extract data from the source code, I will be using ScreamingFrog in the example below.
ScreamingFrog is by far the most economical option, allowing you to crawl up to 500 URLs without purchasing a license. For larger projects, you can purchase a license, which lets you crawl as many URLs as your RAM can handle.
Step 1: Collect URLs to crawl
For our example, let's say we're researching the topics we should include on our microspike product pages and listings. For those who don't know, microspikes are an accessory for your boots or shoes. They give you extra grip in winter conditions, so they're especially popular among cold-weather hikers and runners.
Source: https://www.amazon.com/s?k=microspikes
Here we have a list of 13 pages of questions and answers for the top microspike pages on Amazon.com. Unfortunately, some manual work is required to create the list.
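If you have many products, a short script can take some of the tedium out of building that list. The URL pattern below is an assumption about how Amazon Q&A pages have historically been structured, and the ASINs are hypothetical; verify the real pattern in your browser before crawling:

```python
# Generate a paginated list of Q&A URLs to feed the crawler.
# NOTE: the "/ask/questions/asin/{asin}/{page}" pattern is an ASSUMPTION about
# Amazon's Q&A URL structure and may differ or change; check it manually first.

def qa_urls(asins, pages_per_asin):
    """Build one URL per (ASIN, page) pair, in order."""
    return [
        f"https://www.amazon.com/ask/questions/asin/{asin}/{page}"
        for asin in asins
        for page in range(1, pages_per_asin + 1)
    ]

# Hypothetical ASINs for illustration only.
urls = qa_urls(["B01ABCDEF1", "B09XYZXYZ9"], pages_per_asin=3)
print(len(urls))  # 2 ASINs x 3 pages = 6 URLs
```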
Step 2: Determine your XPath expression
There are two ways to find the expression you need:
- View the source code
- View the rendered source code and copy the XPath directly from Chrome's Inspect Element tool
You will find that the expression needed to locate all questions on an Amazon Q&A page is:
//span[@class="a-declarative"]
Here is the XPath notation broken down:
- // is used to locate all instances of the expression that follows.
- span is the tag we're trying to locate. There are over 300 <span> tags in the source code, so we'll have to be more specific.
- @class specifies that only <span> tags with an assigned class attribute will be located.
- @class="a-declarative" indicates that only <span> tags where the class attribute is set to "a-declarative" will be located - that is, the tags containing the questions.
There is an extra step to return the inner text of each tag that is found, but ScreamingFrog does the heavy lifting for us.
It is important to note that this will only work for Amazon question and answer pages. If you want to extract questions from, for example, Quora, TripAdvisor, or any other site, the expression will need to be adjusted to locate the specific entity you want to collect while exploring.
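For the curious, the "inner text" step can be sketched in a few lines of Python. The markup below is a toy stand-in for Amazon's page source, and flattening each match's nested text is roughly what a crawler's "Extract Text" option does for you:

```python
# Locate tags with the article's expression, then flatten each match's inner
# text (including text nested in child tags). The snippet is an illustrative
# stand-in for Amazon's markup, not real page source.
import xml.etree.ElementTree as ET

page = """
<div>
  <span class="a-declarative">Will these fit a <b>size 12</b> boot?</span>
  <span class="a-plain">ignore me</span>
  <span class="a-declarative">Do they come with a carrying case?</span>
</div>
"""

root = ET.fromstring(page)
texts = []
for span in root.findall('.//span[@class="a-declarative"]'):
    # itertext() walks nested tags too, so "size 12" inside <b> is kept.
    texts.append("".join(span.itertext()))
print(texts)
```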
Step 3: Configure your crawler
Once you've got everything set, you can head over to ScreamingFrog:
Configuration -> Custom -> Extraction
This will then take you to the custom extraction screen.
This is where you can:
- Give the extraction a name so that it is easier to find after the crawl, especially if you are extracting multiple entities. ScreamingFrog allows you to extract multiple entities during a single crawl.
- You can then choose the extraction method. This article revolves around XPath, but you also have the option to extract data via CSSPath or regex notation.
- Place the desired XPath expression in the "Enter XPath" field. ScreamingFrog will even check your syntax for you, providing a green check mark if all is well.
- You then have the option to select what you want to extract, be it the full HTML element or just the HTML found within the located tag. For our example, we want the text between all tags with a class attribute set to "a-declarative", so we select "Extract Text".
We can then click OK.
Step 4: Crawl the desired URLs
It is now time to crawl our list of Amazon Q&A pages for microspikes.
First, we will need to change the mode in ScreamingFrog from "Spider" to "List".
Then we can either add our set of URLs manually, or upload them from an Excel file or another supported format.
After confirming the list, ScreamingFrog will crawl each URL we provided, extracting the text within all <span> tags whose class attribute is set to "a-declarative".
In order to see the collected data, you just need to select "Custom Extract" in ScreamingFrog.
At first glance, the output may not seem very exciting.
However, this is only because a large amount of unnecessary whitespace is included in the data, so some columns may appear empty if they are not expanded enough to display their content.
Once you have copied and pasted the data into Excel or the spreadsheet of your choice, you can finally see what has been extracted. After a little cleanup, you get the final result:
The result is 118 questions real customers have asked about microspikes in an easily accessible format. With this data at your fingertips, you are now ready to incorporate this research into your content strategy.
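The cleanup itself can also be scripted instead of done by hand in a spreadsheet. A minimal sketch, using made-up sample rows, that strips stray whitespace, drops empty cells, and removes duplicates while preserving order:

```python
# Clean raw extraction output: collapse whitespace, drop blanks, de-duplicate.
# The sample rows are illustrative, not real crawl data.
raw = [
    "  What size should I order for a men's 10 boot?  ",
    "",
    "Do they damage leather shoes?",
    "   ",
    "Do they damage leather shoes?",  # duplicate
]

seen = set()
questions = []
for cell in raw:
    text = " ".join(cell.split())  # collapse runs of whitespace
    if text and text not in seen:
        seen.add(text)
        questions.append(text)

print(questions)
```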
Before diving into content strategies, a quick note: you can't just crawl, scrape, and publish content from another site as your own, even if it is publicly accessible.
First of all, that would be plagiarism, and you should expect to be hit with a DMCA notice. Second, you are not fooling Google. Google knows the original source of the content, and it is extremely unlikely that your content will rank well, which defeats the goal of this whole strategy.
Instead, this data can be used to inform your strategy and help you produce the high-quality, unique content users are searching for.
Now, how do you start your analysis?
I recommend that you classify the questions first. For our example, many questions were asked about:
- Size: what size of microspikes is needed for a specific shoe or boot size?
- Appropriate use: whether or not microspikes can be used in shops, on slippery roofs, while fishing or mowing, or for walking on plaster.
- Features: are they adjustable, what material are they made of, do they come with a carrying case?
- Concerns: are they comfortable, do they damage your shoes, do they damage the type of flooring you're walking on, and how durable are they?
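A rough first pass at this bucketing can be automated with simple keyword matching before manual review. The keywords, categories, and sample questions below are illustrative choices, not a definitive taxonomy:

```python
# Keyword-based first pass at bucketing questions into categories.
# Keywords and sample questions are illustrative; real data needs manual review.
CATEGORIES = {
    "size": ["size", "fit"],
    "use": ["use", "walk", "hik", "run"],
    "features": ["adjustable", "material", "case"],
    "concerns": ["damage", "comfort", "durab"],
}

def categorize(question):
    q = question.lower()
    for category, keywords in CATEGORIES.items():
        # First category with a keyword hit wins; unmatched questions fall
        # through to "other" for manual triage.
        if any(k in q for k in keywords):
            return category
    return "other"

sample = [
    "What size fits a women's 8 shoe?",
    "Do they come with a carrying case?",
    "Will they damage hardwood floors?",
]
print([categorize(q) for q in sample])
```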
This is an amazing preview of the potential concerns customers might have before purchasing microspikes.
From here you can use this information to:
1. Improve existing content on your product and category pages
Incorporate the topics into product or category descriptions, answering buyers' questions preemptively.
For our example, we might want to clarify how sizing works, including a size chart and specifically mentioning the types of footwear with which the product may or may not be compatible.
2. Create a short FAQ section on the page featuring original content, answering frequently asked questions
Make sure to implement FAQPage Schema.org markup for a better chance of appearing in features such as the People Also Ask sections, which are taking up an increasing amount of real estate in the search results.
For our example, we can answer frequently asked questions about comfort, shoe damage, durability and adaptability. We could also determine whether the product comes with a carrying case and how best to store the product for travel.
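The FAQPage markup mentioned above is JSON-LD embedded in the page. A minimal sketch of generating it, with placeholder questions and answers rather than real copy:

```python
# Build FAQPage JSON-LD (schema.org) for an FAQ section.
# The questions and answers are illustrative placeholders, not real copy.
import json

faqs = [
    ("Do microspikes damage shoes?",
     "No - they strap over the sole and leave no marks when properly fitted."),
    ("Do they come with a carrying case?",
     "Yes, a small storage pouch is included."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

# Embed the output on the page inside a <script type="application/ld+json"> tag.
print(json.dumps(schema, indent=2))
```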
3. Produce a product guide, incorporating answers to common questions about a product or category
Another strategy is to produce a complete, one-stop product guide presenting specific use cases, sizes, limitations, and features. For our example, we could create dedicated content for each use case, like hiking, running in freezing conditions, and so on.
Better yet, embed videos, images, graphics, and featured products with a clear purchase path.
Using this approach, your end product will be content that shows expertise, authority over a topic and, most importantly, answers customer concerns and questions before they even think to ask. This will prevent your customers from having to do additional research or contact customer service. With your informative and useful content, they will be more ready to make a purchase.
In addition, this approach also has the potential to reduce product return rates. Savvy customers are less likely to buy the wrong product due to assumed or incomplete knowledge.
Amazon is just the tip of the iceberg here. You can realistically apply this strategy to any site that has publicly available data to pull, whether it's Quora questions about a product category, Trip Advisor reviews of hotels, venues, and attractions, or even discussions on Reddit.
The more informed you are of what your customers expect when they visit your site, the better you can respond to those expectations, motivate purchases, reduce bounces and improve organic search performance.
Brad McCourt is the head of organic research at the Catalyst office in Boston.