Posted by MiriamEllis

Image credit: Danny Sternfeld

In creating a Google My Business listing for your local business, making a data-based decision about categories is one of the most important steps you'll be taking. Just how influential are the categories you select? Our recent State of the Local SEO Industry 2020 survey found that, of all factors, GMB elements (which include categories) have the greatest impact on local pack rankings. Choose wisely, and these elements help ensure Google views you as a candidate for possible inclusion as a result for a set of search phrases. Choose wrongly, and you can exclude yourself from this vital visibility. Google categories also play a role in determining which features will be available to you in your Google Business Profile/Google listing. For example, if you're categorized as a "hotel", you won't be able to use Google Posts. If you're categorized as an educational institution, you won't be able to receive reviews. Meanwhile, if you're categorizing your business in the auto dealership space, you'll be allowed to have multiple listings for your departments and the car makes you vend. Categories impact the attributes that will be associated with your business, the menus you can use, whether booking buttons are available to you, and whether you have primary or secondary hours of operation displayed. In short, your choice of primary and secondary categories contributes a lot to Google's understanding and handling of your business. With so much riding on proper categorization, let's empower you to research your options like a pro today!

When and where to choose Google categories

In creating a brand new Google My Business listing, one of the first things Google asks you to do is choose a category: And, as Google says, you can change and add more categories later.
Once you have access to your GMB dashboard, you'll find your categories by clicking on the "Info" tab in the left menu and looking right below your business name, where the pencil icon will let you edit your categories: You are allowed to select up to 10 categories. Your primary category is believed to have the greatest influence on your local rankings, and must be chosen with extra care: You can edit your categories in the GMB dashboard any time you want to, with the understanding that doing so can substantially alter the rankings you're experiencing for various search phrases.

How to choose Google categories

Here's your step-by-step workflow for picking the Google categories that are best for your business, with the help of some great tools.

1) Determine your most important search phrases

First, create a list that includes:
Next, take your list of keywords and enter them into your choice of free or paid keyword research tools to discover which terms have the highest potential search volume. For example, Moz's Keyword Suggestions tool within Moz Keyword Explorer can help you determine the difference in search volume between two terms like "Mexican restaurant" vs. "taco shop": Note down the search volume for each term on your list. Finally, refine your list down to a smaller set of terms that combine the highest search volume with being most relevant and important for your company. In most cases, this is the list you'll move ahead with, although there are some cases in which you would choose to target lower-volume search phrases because they are either a) less competitive, or b) a more exact description of what your business is.

2) Determine which categories your market competitors are using for your most important search phrases

Now, take your refined list of search phrases over to Google and begin searching for them in your local market. Your local market is made up of your customers' locations in relationship to your business location. It could be as small as your neighborhood, or it could include a whole city or several adjacent cities, depending on:
For example, a coffee shop might have quite a small local market if most of its customers arrive looking for a quick, convenient cup of coffee. Meanwhile, an amusement park might have a much larger local market because people are willing to travel a greater distance to visit it. Google's local results increasingly reflect their understanding that intent differs for different business models. Here's a screenshot of the market an Internet searcher in the North Beach district of San Francisco might see if they are looking for "pizza near me":
Now, make a list of all the competitors you discovered in your market while searching from the location of your business. Next, be sure you're using the Chrome browser and head over to the Chrome Web Store to download the awesome, free, new extension called GMBspy. Developed by George Nenni of Generations Digital, turning this extension on enables you to go to Google Maps, search for your market competitors, and see their categories, like this: You can look up competitors one by one, or just mouse around on the map to see the GMBspy extension data pop up. Google doesn't automatically reveal all the categories a business is using, so this little tool saves a great deal of time and a lot of fiddling around with HTML to access that data. What a great development! Note down all of the categories your market competitors are using. Pay special attention to the categories being used by the business ranking #1 for each of your refined search phrases.

3) Get category suggestions and leave no stone unturned

Your market might be full of highly active competitors who have wisely chosen the best categories, or it could be a less sophisticated scenario in which other companies are overlooking opportunities you might be able to discover. Hop on over to PlePer's GMB Category Helper and type in your business name and up to three comma-separated search phrases. If you've not yet opened for business, you can enter the street address of your proposed location instead of a business name. Then, go get a cup of tea or do a little exercise for five minutes and come back for this amazing data: Based on your lat-long coordinates, PlePer shows you your current categories, the categories being used in your area, a list of category suggestions, and other useful information. Quite cool! The free version of this tool lets you do three such searches per day. Jot down any notable findings that GMBspy didn't surface.
And, finally, just to be sure you haven't missed any potential opportunities, move over to PlePer's full GMB category list: It's updated at least every 3 days, which is great because Google continuously adds and subtracts categories. Just select your language and country and hit the "fetch" button. This tool can be especially useful if you offer an unusual good or service and aren't sure whether a category exists for it. Note down anything you feel might be relevant. Finally, within the GMB dashboard, Google will also sometimes make suggestions about additional categories you might want to consider adding, like this: In the above screenshot, you can see that our categorizing Moz as a software company is causing Google to suggest that we might also want to select "accounting software company". In this case, the suggestion is irrelevant for Moz's business model, but it's a good idea to see if Google is making any valuable suggestions for your company. You've now got all the data you need to make a selection, based on the categories that are applicable to your popular search phrases and that are being used (or overlooked) by your top market competitors. Well done!

A little extra GMB category savvy

Image credit: Thom Wong

Let's boost your confidence about Google categories with a few more tips before you fill in your choices in the GMB dashboard. Answers to these FAQs could help you out with common predicaments:

1) How many GMB categories should I choose?

My best answer is: as many as are truly relevant to your business. Never add categories that don't relate to your business. For example, if you're marketing a pizza place, you obviously shouldn't add hair salon as a category; doing so can confuse Google and your customers, and even harm your rankings. So long as each category is applicable, you should be fine.
In the past, there has been much discussion about whether category dilution (choosing too many categories) could hurt your rankings. Local SEO Colan Nielsen's recent study demonstrated the opposite: adding more relevant categories can positively impact your visibility rather than undermine it. This is a good time to note that the section on categories in the Guidelines for representing your business on Google can be a bit confusing. It contains outdated information pertaining to a bygone era (pre-2013) in which businesses were allowed to custom-create categories. I don't know why Google has never updated this section to remove the text about writing categories that describe what your business "is" rather than what your business "has", since you're automatically confined to choosing only Google's own pre-approved categories. The odd state of this area of the guidelines has personally made me take its other recommendations with a grain of salt. For example, Google's insistence that you should use as few categories as possible is somewhat dubious, though their recommendation that you only pick relevant categories makes perfect sense. My advice is to experiment with any relevant category and see where it gets you in terms of visibility.

2) What should I do if Google doesn't have a category I need?

Google has well over 3,000 categories for the US alone, and while this large index covers many business models, it's not uncommon to find that something you offer isn't represented. Sterling Sky founder Joy Hawkins recently highlighted a case in which a business owner went about requesting a new category from Google the right way, with abundant evidence of why a new option should be added. If a missing category is holding your business back, I recommend studying that GMB help forum thread and then creating one of your own, making the most convincing argument you can about why Google needs to include your category wish.
If, however, you can't get Google to act on your request, your next best bet is to choose the category that most closely represents what your business is, and then use the business description field, images, and Google Posts to add more nuanced information about your goods and services.

3) How can I know if I've chosen the right categories?

This question most commonly arises in troubleshooting ranking failures. You think you've done all you can to rank for a particular search phrase in Google local packs/finders/maps, but you're just not there. While there can be scores of factors contributing to that, it's always smart to re-check that you haven't excluded yourself by selecting the wrong category. Go back to the map and fire up GMBspy again to see which categories the top-ranking businesses are using. Do your categories match, or are you missing something? Also, pay attention to your GMB Insights, Google Analytics, and any other analysis software you're using whenever you add or subtract a category from your GMB listing. If you see a sudden drop in any metric dating to the change, you may have made a poor category alteration that you'll need to correct. Finally, be aware that you're not the only one controlling your categories. If you experience a drop in rankings and notice that your categories have been mysteriously altered, it could stem from a third-party edit or bad data out there on the local web. Local SEO Nikki Brown tells a scary story about a client whose rankings went from 1st to 31st due to an unexpected edit of their primary category, emphasizing the importance of making a category audit part of any rankings-related troubleshooting you engage in.

4) How should I use categories for a multi-entity business model?

Google's guidelines allow some business models to have more than one listing for the same physical location of a business. These special scenarios include:
The guidelines recommend that each forward-facing department of a multi-department model should have distinct categories, and it's considered a local SEO best practice to do the same for multi-practitioner scenarios, too. Diversifying your categories for multi-entity listings can reduce the chances of Google filtering some of your listings out of its results, because you no longer have more than one entity competing for the same category terms. A good way to think about category diversification for multi-entity models is that Google's permission to have more than one listing gives you the opportunity to increase the number of categories your overall brand can select. Instead of just having 10 categories, your total company could theoretically target 20, 30, 40, etc., substantially improving your potential visibility across a far wider array of search phrases.

5) When and why might I choose a less popular category?

There are scenarios in which you might encounter a set of local rankings you're having extra trouble breaking into. For example, your physical location might put you just outside the map radius Google appears to be drawing for that search phrase, or your competitors may be discouragingly strong or dense on the ground. In cases like this, you might want to experiment with going after a category that could be described as low-hanging fruit: something your keyword research and competitive audit showed you fewer people are searching for and fewer brands are employing. The foundational goal of managing Google My Business listings is to drive conversions/transactions for your company. If geography or competition is making it hard for you to win maximum revenue from a most popular category, you might find you can make up some of the difference by choosing a number of less popular categories that enable you to rank more easily or over a larger area of the map.
6) What about choosing categories beyond Google?

There's a whole world of business listings beyond Google, and each directory or platform has its own system of categorization. Moz Local customers enjoy the tremendous convenience of selecting categories in the dashboard that automatically map to relevant categories across our partner network, but if you're managing your listings manually, you will need to see what's available on each site as you go.

To sum up

Your business will be best served by allocating time for the research and implementation phase of filling in the categories on your GMB listings. Don't rush, be methodical, and you'll have the satisfaction of knowing you put in the work to make the best category choices. And check back periodically to see if new categories have become available that could win you new local SERP visibility and increased transactions.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
Posted by izzismith

Google Search Console is by far the most-used tool in the SEO's toolkit. Not only does it provide us with the closest understanding we can have of Googlebot's behavior and perception of our domain properties (in terms of indexability, site usability, and more), but it also allows us to assess the search KPIs that we work so rigorously to improve. GSC is free, secure, easy to implement, and it's home to the purest form of your search performance KPI data. Sounds perfect, right? However, the lack of capability for analyzing those KPIs at larger scales means we can often miss crucial points that indicate our pages' true performance. Being limited to 1,000 rows of data per request and restricted filtering makes data refinement and growth discovery tedious (or close to impossible). SEOs love Google Search Console — it has the perfect data — but sadly, it's not the perfect tool for interpreting that data.

FYI: there's an API

In order to start getting as much out of GSC as possible, one option is to use the API, which increases the request amount to 25,000 rows per pull. The wonderful Aleyda Solis built an actionable Google Data Studio report using the API that's very easy to set up and configure to your needs. You can also use something out of the box. In this post, the examples use Ryte Search Success because it makes it much easier, faster, and more efficient to work with that kind of data at scale. We use Search Success for multiple projects on a daily basis, whether we're assisting a client with a specific topic or carrying out optimizations for our own domains. So, naturally, we come across many patterns that give a higher indication of what's taking place on the SERPs. However you use GSC search performance data, you can turn it into a masterpiece that ensures you get the most out of your search performance metrics!
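If you'd rather pull the data yourself, here's a minimal sketch of paginating the Search Analytics API past its per-request ceiling. The date range is a placeholder, and the authenticated `service` object (built with googleapiclient after OAuth) is assumed rather than shown; the pagination logic itself is kept as plain functions so you can adapt it:

```python
# Sketch: paginate the GSC Search Analytics API past the
# 25,000-rows-per-request limit using the startRow parameter.

ROW_LIMIT = 25_000  # maximum rows the API returns per request

def build_request(start_date, end_date, start_row=0, dimensions=("query",)):
    """Build the request body for searchanalytics().query()."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": list(dimensions),
        "rowLimit": ROW_LIMIT,
        "startRow": start_row,
    }

def fetch_all_rows(execute_query, start_date, end_date):
    """Keep requesting pages until one comes back short or empty.

    `execute_query` is any callable that takes a request body and
    returns the API's response dict, e.g. (assuming an authenticated
    `service` from googleapiclient):
        lambda body: service.searchanalytics().query(
            siteUrl="https://www.example.com/", body=body).execute()
    """
    rows, start_row = [], 0
    while True:
        page = execute_query(
            build_request(start_date, end_date, start_row)
        ).get("rows", [])
        rows.extend(page)
        if len(page) < ROW_LIMIT:
            return rows  # short page means we've reached the end
        start_row += ROW_LIMIT
```

Passing the executor in as a callable keeps the pagination testable without network access, and makes it easy to swap in retry or rate-limit handling later.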
To help you get started with that, I'll demonstrate some advanced and, frankly, exciting patterns that I've come across often while analyzing search performance data. So, without further ado, let's get to it.

Core Updates got you down?

When we analyze core updates, it always looks the same. Below you can see one of the clearest examples of a core update. On May 6, 2020, there is a dramatic fall in impressions and clicks, but what is really important to focus on is the steep drop in the number of ranking keywords. The number of ranking keywords is an important KPI, because it helps you determine if a site is steadily increasing its reach and content relevancy. Additionally, you can relate it to search volumes and trends over time. Within this project, we found hundreds of cases that look exactly like the examples below: lucrative terms were climbing up pages two and three (as Google built confidence in their relevance) before finally making it up to the top 10 to be tested. There is a corresponding uplift in impressions, yet the click-through rate for this important keyword remained at a measly 0.2%. Out of 125K searches, the page only received 273 clicks. That's clearly not enough for this domain to stay in the top 10, so during the Core Update rollout, Google demoted these significant underperformers. The next example is very similar, yet we see a longer stint on page one due to the fact that there's a lower amount of impressions. Google will likely aim to get statistically relevant results, so the fewer impressions a keyword has, the longer the tests need to run. As you can see, 41 clicks out of 69K impressions shows that almost no searcher was clicking through to the site via this commercial keyword, and thus it fell back to pages two and three. This is a typical Core Update pattern that we've witnessed hundreds of times.
It shows us that Google is clearly looking for these patterns, too, in order to find what might be irrelevant for their users, and what can kiss goodbye to page one after an update.

Aim to pass those “Top 10 Tests” with flying colors

We can never know for sure when Google will roll out a Core Update, nor can we ever be fully confident of what results in a demotion. However, we should always try to rapidly detect these telltale signs and react before a Core Update has even been thought of. Make sure you have a process in place that deals with discovering subpar CTRs, and leverage tactics like snippet copy testing and Rich Results or Featured Snippet generation, which aim to exceed Google's CTR expectations and secure your top 10 positions. Of course, we also witness these classic “Top 10 Tests” outside of Google's Core Updates! This next example is from our own beloved en.ryte.com subdomain, which aims to drive leads to our services and is home to our vast online marketing wiki and magazine, so it naturally earns traffic for many informational-intent queries. Here is the ranking performance for the keyword “bing”, which is a typical navigational query with tons of impressions (that's quite a few Google users searching for Bing!). We can see the top 10 tests clearly where the light blue spikes show a corresponding uplift in impressions. While that looks like a juicy amount of impressions to lure over to our site, in reality nobody is clicking through to us, because searchers want to navigate to bing.com and not to our informational wiki article. This is a clear case of split searcher intent, where Google may surface documents with varying intents to try and cater to searchers outside of their assumptions. Of course, the CTR of 0% proves that this page has no value for these searchers, and we were demoted. Interestingly enough, this position loss cost us a heck of a lot of impressions.
This caused a huge drop in “visibility” and therefore made it look like we had been dramatically hit by the January Core Update. Upon closer inspection, we found that we had just lost this and similar navigational queries like “gmail”, which made the overall KPI drop seem worse than it was. Due to the lack of impact this has on our engaged clicks, these are dropped rankings that we certainly won't lose sleep over. Aiming to rank high for these high-search-volume terms with an intent you're unable to cater to is only useful for optimizing for “visibility indexes”. Ask yourself if it's worth your precious time to focus on these, because of course you're not going to bring valuable clicks to your pages with them.

Don't waste time chasing high-volume queries that won't benefit your business goals

In my SEO career, I've sometimes gone down the wrong path of spending time optimizing for juicy-looking keywords with oodles of search volume. More often than not, these rankings yielded little value in terms of traffic quality, simply because I wasn't assessing the searcher intent properly. These days, before investing my time, I try to better interpret which of those terms will bring my business value. Will the keyword bring me any clicks? Will those clickers remain on my website to achieve something significant (i.e. is there a relevant goal in mind?), or am I chasing these rankings for the sake of a vanity metric? Always evaluate what impact a high ranking will bring your business, and adjust your strategies accordingly. The next example is for the term “SERP”, which is highly informational and likely only searched to learn what the acronym stands for. For such a query, we wouldn't expect an overwhelming number of clicks, yet we attempted to use better snippet copy to turn answer intent into research intent, and therefore drive more visits. However, it didn't exactly work out.
We got pre-qualified on page two, then tested on page one (you can see the corresponding uplift in impressions below), but we failed to meet expectations with a poor CTR of 0.1%, and were dropped back down. Again, we weren't sobbing into our fine Bavarian beers about the loss. There are plenty more worthwhile, traffic-driving topics out there that deserve our attention.

Always be on the lookout for those CTR underperformers

Something that we were glad to act on was the “meta keywords” wiki article. Before we have a moment of silence for the fact that “meta keywords” is still heavily searched for, notice how we dramatically jumped up from page four to page one at the very left side of the chart. We were unaware of this keyword's movement, and therefore its plain snippet was seldom clicked and we fell back down. After some months, the page one ranking resurfaced, and this time we took action after coming across it in our CTR Underperformer Report. The snippet was rewritten to target the searcher's intent, and the page was enhanced in parallel to give a better direct answer to the main focus questions. Not only did this have a positive impact on our CTR, but we even gained the Featured Snippet. It's super important to identify these top 10 tests in time, so that you can still act to remain prominent in the top 10. We identified this and many other undernourished queries using the CTR Underperformer Report. It maps out the CTRs of all queries, and reports on where we would have expected a higher number of clicks given each keyword's intent, impressions, and position (much like Google's own models likely aim to do). We use this report extensively to identify cases where we deserve more traffic, in order to ensure we stay in the top 10 or get pushed up even higher.
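If you're working from a raw GSC export instead of a tool, the core idea behind such a report can be sketched in a few lines. Note that the expected-CTR benchmarks below are illustrative placeholders, not Ryte's or Google's actual models; real curves vary by query intent and SERP layout:

```python
# Sketch of a CTR-underperformer check over GSC export rows.
# EXPECTED_CTR values are made-up placeholders for illustration.

EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.02}

def ctr_underperformers(rows, min_impressions=1000, ratio=0.5):
    """Flag queries earning less than `ratio` of the CTR expected
    for their rounded average position.

    `rows` are dicts with query, clicks, impressions, and position,
    the shape a GSC search performance export gives you.
    """
    flagged = []
    for row in rows:
        if row["impressions"] < min_impressions:
            continue  # too little data to judge fairly
        pos = round(row["position"])
        expected = EXPECTED_CTR.get(pos)
        if expected is None:
            continue  # only judge page-one rankings here
        ctr = row["clicks"] / row["impressions"]
        if ctr < expected * ratio:
            flagged.append((row["query"], pos, round(ctr, 4), expected))
    return flagged
```

Run against the post's own “bing” numbers (41 clicks on 69K impressions around position 8), this flags the query immediately, while a healthy performer like the “Web Architecture” example passes untouched.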
Quantify the importance of Featured Snippets

Speaking of Featured Snippets, the diagram below demonstrates what it can look like when you're lucky enough to hold the placement vs. when you don't have it. The keyword “reset iphone” from a client's tech blog had a CTR of 20% with the Featured Snippet, while without the Featured Snippet it was at a sad 3%. It can be game-changing to win a relevant Featured Snippet due to the major impact it can have on your incoming traffic. Featured Snippets can sometimes have a bad reputation, due to the risk that they could drive a lower CTR than a standard result, especially when triggered for queries with higher informational intent. Try to remember that Featured Snippets can display your brand more prominently, and can be a great sign of trust to the average searcher. Even if users were satisfied on the SERP, the Featured Snippet can therefore provide worthwhile secondary benefits such as better brand awareness and potentially higher conversions via that trust factor. Want to find some quick Featured Snippet opportunities for which you need only repurpose existing content? Filter your GSC queries using question and comparison modifiers to find those Featured-Snippet-worthy keywords you can go out and steal quickly.

You're top 10 material — now what?

Another one of our keywords, “Web Architecture”, is a great example of why it's so crucial to keep discovering new topics as well as underperforming content. We found this specific term was struggling a while ago during ongoing topic research, and set out to apply enhancements to push its ranking up to the top 10. You can see the telltale signs of Google figuring out the purpose, quality, and relevance of this freshly renewed document while it climbs up to page one. We fared well in each of our tests. For example, at positions 10-8, we managed a 5.7% CTR, which is good for such a spot.
After passing that test, we got moved up to positions 4-7, where we struck a successful 13% CTR. A couple of weeks later we reached an average position of 3.2 with a tasty CTR of 18.7%, and after some time we even bagged the Featured Snippet. This took just three months from identifying the opportunity to climbing the ranks and getting the Featured Snippet. Of course, it's not just about CTR, it's about the long click: Google's main metric that indicates a site is providing the best possible result for their search users. How many long clicks are there in comparison to medium clicks and short clicks, and how often are you the last click, demonstrating that search intent was successfully fulfilled? We checked in Google Analytics, and out of 30K impressions, people spent an average of five minutes on this page, so it's a great example of a positive long click.

Optimize answers, not just pages

It's not about pages, it's about individual pieces of information and their corresponding answers that set out to satisfy queries. In the next diagram, you can actually see Google adjusting the keywords that specific pages are ranking for. This URL ranks for a whopping 1,548 keywords, but pulling a couple of the significant ones for a detailed individual analysis helps us track Google's decision-making a lot better. When comparing these two keywords, you can see that Google promoted the stronger performer on page one, and then pushed the weaker one down. The strong difference in CTR was caused by the fact that the snippet was only really geared towards a portion of its ranking keywords, which led to Google adjusting the rankings. It's not always about a snippet being bad, but about other snippets being better, and whether the query might deserve a better piece of information in place of the snippet.
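The earlier tip of filtering GSC queries for question and comparison modifiers to surface Featured-Snippet-worthy keywords can be sketched like this. The modifier list is just a starting point to extend for your own market and language, not an exhaustive set:

```python
import re

# Sketch: surface Featured-Snippet-candidate queries by matching
# question and comparison modifiers as whole words.

MODIFIERS = re.compile(
    r"\b(what|why|how|when|where|who|which|can|does|is|are|vs|versus|best|better)\b",
    re.IGNORECASE,
)

def snippet_candidates(queries):
    """Return the queries containing a question or comparison modifier."""
    return [q for q in queries if MODIFIERS.search(q)]
```

For example, from a pull of `["what is a serp", "mexican restaurant", "iphone vs android", "reset iphone"]`, only the first and third come back as candidates; the word-boundary anchors keep substrings like the "can" in "mexican" from matching.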
Remember, website quality and technical SEO are still critical

One thing we always like to stress is that you shouldn't judge your data too quickly, because there could be underlying technical errors that are getting you down (such as botched migrations, mixed ranking signals, blocked assets, and so on). The case below illustrates perfectly why it's so much better to analyze this data with a tool like Ryte, because with GSC you will see only a small portion of what's taking place, and from a very top-level view. You want to be able to compare the individual pages that are ranking for your keyword to reveal what's actually at the root of the problem. You're probably quite shocked by this dramatic drop, because before the dip this was a high-performing keyword with a great CTR and a long reign in position one. This keyword was in position one with a CTR of 90%, but then the domain added a noindex directive to the page (facepalm). So, Google replaced that number one ranking URL with their subdomain homepage, which was already ranking number two. However, the subdomain homepage wasn't the ideal location for the query, as searchers couldn't find the correct information right away. It got even worse, because then they decided to 301 redirect that subdomain homepage to the top-level domain homepage, so now Google was forced to rank a generic page that clearly didn't have the correct information to satisfy that specific query. As you can see, they then fell completely from that top position, as the page was irrelevant and Google couldn't retrieve the correct page for the job. Something similar happened in this next example. The result in position one for a very juicy term with a fantastic CTR suddenly returned a 404, so Google started to rank a different page from the same domain instead, which was associated with a slightly similar but inexact topic. This again wasn't the correct fit for the query, so the overall performance declined.
This is why it's so important to look not just at the overall data, but to dig deeper — especially if there are multiple pages ranking for a keyword — so that you can see exactly what's happening.

Got spam?

The final point is not exactly a pattern to consider, but more a wise lesson to wrap up everything I've explored in this post. At scale, Google is testing pages in the top 10 results in order to find the best placement based on that performance. With this in mind, why can't we ask people to go to the SERPs, click on our results, and reap the tasty benefits of that improved position? Or better yet, why don't we automate this continually for all of our top-10-tested queries? Of course, this approach is heavily spammy, against guidelines, and something Google can easily safeguard against. You don't have to test this either, because Marcus (being the inquisitive SEO he is!) already did. One of his own domains on job advertisements ranks for the focus keyword of “job adverts”, and as you can imagine, this is a highly competitive term that requires a lot of effort to score. It was ranking at position 6.6 and had a decent CTR, but he wanted to optimize it even further and climb those SERPs to position one. He artificially cranked up his CTR using clever methods that ended up earning a “very credible” 36% CTR in position nine. Soon, in position 10, he had a CTR of 56.6%, at which point Google caught wind of the spammy manipulation and punted him down the SERPs. Lesson learned. Of course, this was an experiment to understand at which point Google would detect spammy behavior. I wouldn't encourage carrying out such tactics for personal gain, because it's in the best interests of your website's health and status to focus on the quality of your clicks. Even if such a test seemed to work well and rankings improved, over time your visitors may not resonate with your content, and Google might recall that the lower position was initially in place for a reason.
It’s an ongoing cycle. I encourage you to reach your results organically. Leverage the power of snippet optimization in parallel with ongoing domain and content improvements to increase not only the quantity and quality of your clicks, but the very experiences on your website that make an impact on your long-term SEO and business growth.

Conclusion

To summarize, don’t forget that GSC search performance data gives you the best insight into your website’s true performance. Rank trackers are ideal for competitor research and SERP snapshots, but their position data is only one absolute ranking from one set of variables, like location and device. Use your own GSC data for intrinsic pattern analyses, diagnostics, and growth discovery. But with great data comes great responsibility. Make sure you’re finding and understanding the patterns you need to be aware of, such as struggling top 10 tests, underperforming snippets, technical faults, and anything else that deprives you of the success you work so hard to achieve.

Ready for more?

You'll uncover even more SEO goodness from Izzi and our other MozCon speakers in the MozCon 2020 video bundle. At this year's special low price of $129, this is invaluable content you can access again and again throughout the year to inspire and ignite your SEO strategy:
Get my MozCon 2020 video bundle

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

from The Moz Blog https://ift.tt/3ianfo0

Posted by randfish

Have you ever tried to create 10x content? It's not easy, is it? Knowing how and where to start can often be the biggest obstacle you'll face. In this oldie-but-goodie episode of Whiteboard Friday, Rand Fishkin talks about how you can develop your own 10x content to help your brand stand out. Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're chatting about how to create 10x content. Now, for those of you who might need a refresher or who haven't seen previous Whiteboard Fridays where we've talked about 10x content, this is the idea that, because of content saturation and content overload (there's just so much in our streams, and standing out is so hard), we can't just say, "Hey, I want to be as good as the top 10 people in the search results for this particular keyword term or phrase." We have to say, "How can I create something 10 times better than what any of these folks are currently doing?" That's how we stand out.

What is 10x content?

10x content is content that is 10 times better than the highest-ranking result for a given keyword(s). Here are 119 Examples of 10x Content. Criteria for 10x content:
If you hit all of these things, you probably have yourself a piece of 10x content. It's just very hard to do. That's what we're talking about today. What's a process by which we can get to checking off all these boxes?

Step 1 - Gain deep insight.

So let's start here. First off, let's say you've got a piece of content that you know you want to create, a topic you know you're going to address. We can talk about how to get to that topic in a future Whiteboard Friday, and we've certainly had some in the past around keyword research and choosing topics and that sort of thing. But if I know the topic, I need to first gain a deep, deep insight into the core of why people are interested in this subject. So for example, let's do something simple, something we're all familiar with: "I wonder what the most highly-rated new movies are out there." Essentially this is, "Well, okay, how do we get into this person's brain and try to answer the core of their question?" They're essentially asking, "Okay, how do I figure out . . . help me decide what to watch." That could have a bunch of angles to it. It could be about user ratings, or it could be about awards. Maybe it's about popularity. What are the most popular movies out there? It could be meta ratings. Maybe this person wants to see an aggregated list of all the data out there. It could be editorial or critic ratings. There's a bunch of angles there.

Step 2 - We have to get unique.

We know that uniqueness (being exceptional, not the same as everyone else but different from everyone else out there) is really important. So as we brainstorm different ways we might address the core of this user's problem, we might say, "All right, movie ratings, could we do a round-up?" Well, that already exists at places like Metacritic. They aggregate everything, put it all together, and tell us what critics versus audiences think across many, many different websites.
So that's already been done. Awards versus popularity? Again, it's already been done in a number of places that compare the films with the highest box office against those that won certain types of awards. Okay, so that's not particularly unique. What about critics versus audiences? Again, this is done on basically every different website. Everyone shows me user ratings versus critic ratings. What about by availability? Well, there are actually a bunch of sites that do this now, where they show you this is on Netflix, this is on Hulu, this is on Amazon, this you can watch on Comcast or on demand, this you can see on YouTube. All right, so that's not unique either. What about which ratings can I trust? Hang on a tick. That might not exist yet. That's a great, unique insight into this problem, because one of the challenges I have when I ask, "What should I decide to watch?" is who should I trust and who should I believe. Can I go to Fandango or Amazon or Metacritic or Netflix? Whose ratings are actually trustworthy? Well, now we've got something unique, and now we've got that core insight, that unique angle on it.

Step 3 - Uncover powerful methods to provide an answer.

Now we want to uncover a powerful, hard-to-replicate, high-quality method to provide an answer to that question. In this case, that could be, "Well, you know what? We can do a statistical analysis." We get a sample set big enough, maybe 150 movies or so from the last year. We take a look at the ratings that each service provides, and we see if we can find patterns, patterns like: Who's high and low? Do some have different genre preferences? Which one is trustworthy? Does one correlate with awards and critics? Which ones are outliers? All of these are actually trying to get to the "which one can I trust" question. I think we can answer that if we do this statistical analysis. It's a pain in the butt. We have to go to all these sites.
We have to collect all the data. We have to put it into a statistical model. We then have to run our model. We have to make sure that we have a big enough sample set. We've got to see what our correlations are. We have to check for outliers and distributions and all this kind of stuff. But once we do that and once we show our methodology, now all we have to do is...

Step 4 - Find a unique, powerful, exceptional way to present this content.

In fact, FiveThirtyEight.com did exactly this. They took this statistical analysis. They looked at all of these different sites: Fandango, IMDB users versus critics, Metacritic, Rotten Tomatoes, and a number of others. Then they had this one graph that shows essentially the star rating averages across (I think it was) 146 different films, which was the sample set they determined was accurate enough. Now they've created this piece of 10x content, and they've answered this unique take on the question, "Which rating service can I trust?" The answer is, "Don't trust Fandango," basically. But you can see more in there. Metacritic is pretty good. A couple of the other ones are decent.

Step 5 - Expect that you're going to do this 5 to 10 times before you have one hit.

The only way to get good at this is experimentation and practice. You do this over and over again, and you start to develop a sixth sense for how you can uncover that unique element, how you can present it in a unique fashion, and how you can make it sing on the Web. All right, everyone, I look forward to hearing your thoughts on 10x content. If you have any examples you'd like to share with us, please feel free to do so in the comments. No problem linking out. That's just fine. We will see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com
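The statistical analysis described in Step 3 can be sketched in a few lines. The star ratings and service names below are made up purely for illustration; this shows the general shape of the approach, not FiveThirtyEight's actual methodology:

```python
import pandas as pd

# Invented star ratings (0-5 scale) for the same five films across three
# hypothetical services; a real analysis would use ~150 films.
ratings = pd.DataFrame({
    "service_a": [4.5, 3.0, 2.0, 4.0, 1.5],
    "service_b": [4.3, 2.8, 2.1, 3.9, 1.4],
    "inflated":  [4.8, 4.5, 4.2, 4.9, 4.0],  # rates everything highly
})

# Pairwise correlations show which services move together on film quality.
corr = ratings.corr()

# Mean and spread expose the outlier service that inflates every score.
summary = ratings.agg(["mean", "std"]).T
print(corr.round(2))
print(summary.round(2))
```

Even on toy data, the pattern is visible: the two honest services correlate strongly with each other, while the inflated one has a suspiciously high mean and narrow spread, which is exactly the "whose ratings can I trust" signal.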
Check out the free Content Strategy course!

from The Moz Blog https://ift.tt/2F5Tcis

Posted by amandamilligan

As you might imagine, it’s not easy to get your brand name mentioned in top media outlets. But if you put in the work to engage in content marketing + digital PR, the benefits are massive:
I’ll explain how you can earn this type of coverage and its corresponding benefits for your brand.

Step 1: Create newsworthy content

You probably have an instinctual sense of what qualifies as news, but some of the key newsworthy elements are timeliness, proximity, and significance. Timeliness is tough, as hard news is usually covered by media outlets automatically anyway. However, there’s a way to create news, and it’s through data journalism. By doing your own research, conducting your own studies, running your own surveys, and performing your own analyses, you’re effectively creating news by offering brand-new stories. For example, for our client Porch, we used data from the U.S. Census Bureau’s American FactFinder, Yelp, and Zillow to determine which cities are the best for young families. This project is inherently location-based, which adds the proximity element as well. But even if your content isn’t location-based, explore whether you can take your data and localize it so that it covers multiple geographic areas. (Then you can pitch local news in addition to national news!) Significance is also an excellent element to keep in mind, especially during the ideation stage. It basically means: How many people are impacted by this news, and to what degree? This is especially important if you’re aiming for national news publications, as they tend to have a wide audience. In this case, there are plenty of young families across the country, and CNBC saw that it could connect with this demographic. When you combine all of these newsworthy elements, you increase your chances of getting respectable news publications interested.

Step 2: Design and package the content for clarity

You need to present your data in a clear and compelling way. Easier said than done, though, right? Here are common design pitfalls to watch out for:
Finally, don’t be afraid to add the most interesting insights or context as callouts on the images. That way, people can identify the most pertinent information immediately while still having more to explore if they want the full story. Take, for example, one of the graphics we created for BestVPN for a project that got coverage on The Motley Fool, USA Today, Nasdaq, and more. We don’t assume people will read the text in an article to get the relevant information, so we put it right on the image. Here’s another example of a project image we created for Influence.co. We included the callout at the bottom of the image and featured it in our pitch emails (more on that later) because we knew it was a compelling data point. Lo and behold, it became the headline for the Bustle coverage we secured. Note: It’s entirely possible a news publication won’t run your images. That’s totally fine! Creating the images is still worth it, because they help everyone grasp your project more quickly (including writers), and when well done, they convey a sense of authority. When you have all of your data visualized, we recommend creating a write-up that goes along with it. One objective of the article is to explain why you executed the project in the first place. What were you trying to discover? How is this information useful to your audience? The other objective is to provide more color to the data. What are the implications of your findings? What could they mean to readers, and how can they apply the new knowledge to their lives, if applicable? Include quotes from experts when appropriate, as this will be useful to publication writers as well.

Step 3: Write personalized pitches

I could write an entirely separate article about how to properly pitch top-tier publishers, but for our purposes, I want to address two of the most important elements:

Treat writers like people

“You did something PR people never do — but should. Looked at my Twitter feed and made it personal.
Nicely done!” — CNBC writer

Building real connections with people takes time and effort. If you’re going to pitch a writer, you need to do the following:
Some still swear by the templated approach. While it might work sometimes, we’ve found that because writers’ inboxes continue to be inundated with pitches, reaching out in a more personalized manner can increase our chances not only of getting emails opened, but also of getting a genuinely appreciative response. So, start your email with a personal connection. Reach out about something you have in common or something about them you admire. It will go a long way!

Include a list of the most relevant insights

“Wow these findings are super interesting and surprising. I will for sure include if I go ahead with this piece.” — The Wall Street Journal writer

Never assume a writer is going to click through to your project and read the entire thing before deciding whether they want to cover it. In the pitch email, you need to spell out exactly what you think is the most interesting part of the project for their readers. The key words being their readers. Sure, you probably have a few main takeaways in mind that are compelling overall, but there’s often nuance in which specific takeaways will be most relevant to particular publishers. We’ve seen this many times, and it’s reflected in the resulting headlines. For example, for a project we created called Generational Knowledge Gaps, we surveyed nearly 1,000 people about their proficiency in hands-on tasks. Look at the news headlines on REALTOR Magazine and ZDNet, respectively: While REALTOR Magazine went with a headline that captures the general spirit of the project, ZDNet’s is homed in on what matters for their readers: the tech side of things. If we’d pitched them the same way we’d pitched REALTOR, they might not have covered the project at all. So, after a personalized opening, include bullet points that call out the key data points for their particular audience, wrap up the email with a question asking whether they’re interested, and send it off.
Conclusion

It’s not an easy process to get the attention of top writers. You have to take time to develop high-quality content (it takes us at least a month), and then strategically promote it, which can take at least another month, to get as much coverage as you can. However, this investment can have a major payoff, as you’ll be earning unparalleled brand awareness and high-value backlinks. To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you'd like to see more of on the Moz Blog.

from The Moz Blog https://ift.tt/32MJvhl

Posted by rjonesx.

Hey folks, I'm Russ Jones, Adjunct Search Scientist with Moz, and I'm proud to announce that this month we’ll be releasing a terrific update to our metric, Page Authority (PA). Although Page Authority hasn't attracted the same attention as its sibling metric, Domain Authority, PA has always correlated with SERPs much better than DA, serving as a strong predictor of ranking. While PA has always fluctuated with changes in the link graph, we’re introducing a whole new method of deriving the score.

What's changing

Long gone are the days of just counting backlinks a couple of ways and hoping they correlate well with SERPs. As Moz tends to do, we’re pioneering a new manner of calculating Page Authority to produce superior results. Here are some of the ways we’re changing things up:

The training set

In the past, we used SERPs alone to train the Page Authority model. While this method was simple and direct, it left much to be desired. Our first step in addressing the new Page Authority is redefining the training set altogether.
Instead of modeling Page Authority on one page's ability to outrank another, we now train on the cumulative value of a page, derived from a number of metrics including search traffic and CPC. While this is a bit of an oversimplification of what’s going on, this methodology allows us to better compare pages that don't appear in the SERPs together. For example, imagine Page A is on one topic and Page B is on another. Historically, our model wouldn't get to compare these two pages because they never appear on the same SERP. The new methodology assigns an abstract value to each page, such that it can be compared with any other page by the machine-learned model.

The re-training set

One of the biggest problems in building metrics is not what the models see, but what they don't see. Think about this for a minute: what types of URLs don't show up in the SERPs that the model will use to produce Page Authority? Well, for starters, there won't be many images or other binary files. There also won't be penalized pages. To address this problem, we now use a common solution: run the model, identify outliers (high-PA URLs which do not in fact have any search value), and then feed those URLs back into the training set. We can then re-run the model so that it learns from its own mistakes. This can be repeated as many times as necessary to reduce the number of outliers.

Ripping off the Band-Aid

Moz is always cognizant of the impact that changes to our metrics might have on our customers. There is a trade-off between continuity and accuracy. With Page Authority, we’re focusing on accuracy. This may cause larger-than-normal shifts in your Page Authority, so it’s more important than ever to think about Page Authority relative to your competitors, not as a standalone number.
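The re-training loop described above can be sketched schematically. Nothing here is Moz's actual model; the one-weight "model", the page dictionaries, and all numbers are stand-ins to show the shape of the fit / find outliers / feed back / refit cycle:

```python
def fit(training):
    # Toy stand-in for the real machine-learned model: a single weight
    # mapping a page's link count to an authority-style score.
    w = sum(p["value"] for p in training) / sum(p["links"] for p in training)
    return lambda page: w * page["links"]

def retrain_with_outliers(training, universe, rounds=5, cutoff=50):
    for _ in range(rounds):
        score = fit(training)
        # Outliers: pages the model scores highly that in fact have no
        # search value at all (e.g. penalized pages the SERP-only
        # training data never showed it).
        outliers = [p for p in universe
                    if score(p) > cutoff and p["value"] == 0
                    and p not in training]
        if not outliers:
            break
        training = training + outliers  # feed the mistakes back in
    return score

# A penalized page with many links scores absurdly high at first, and
# drops once it is fed back into the training set.
seed = [{"links": 10, "value": 100}, {"links": 20, "value": 200}]
penalized = {"links": 100, "value": 0}
final_score = retrain_with_outliers(seed, seed + [penalized])
print(fit(seed)(penalized), final_score(penalized))
```

The point of the loop is exactly the one made above: the model corrects the mistakes it could never have seen in SERP-only training data, and each pass shrinks the set of remaining outliers.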
What actions should we take?

Communicate with stakeholders, team members, and clients about the update

Just like with our upgrade to Domain Authority, some users will likely be surprised by changes in their PA. Make sure they understand that the new PA will be more accurate (and more useful!) and that the most important measurement is relative to their competitors. We won't release a Page Authority which isn't better than the previous version, so even if the results are disappointing, understand that you now have better insight than ever before into the performance of your pages in the SERPs.

Use PA as a relative metric, like DA

Page Authority is intrinsically comparative. A PA of 70 means nothing unless you know the PA of your competitors. It could be high enough to allow you to rank for every keyword you like, or it could be terribly low because your competitors are Wikipedia and Quora. The first thing you should do when analyzing the Page Authority of any URL is set it in the proper context of its competitors' URLs.

Expect PA to keep pace with Google

Just as we announced with Domain Authority, we’re not going to launch the new PA and just let it go. Our intent is to continue to improve the model as we discover new and better features and models. This volatility will mostly affect pages with unnatural link profiles, but we would rather stay up to date with Google's algorithms even if it means a bit of a bumpy ride.

When is it launching?

We’ll be rolling out the new Page Authority on September 30, 2020. Between now and then, we encourage you to explore our resources to help you prepare and facilitate conversations with clients and team members. Following the launch of the new PA, I’ll also be hosting a webinar on October 15 to discuss how to leverage the metric. We’re so excited about the new and improved PA and hope you’re looking forward to this update too.
If you have any questions, please comment below, reach out to me on Twitter @rjonesx, or email me at [email protected]. To get prepared and learn more about the upcoming change to Page Authority, be sure to dig into our helpful resources:

from The Moz Blog https://ift.tt/3jKEkp5