Post by amirmukaddas on Mar 12, 2024 5:35:09 GMT
They are the pages that pull your web project away from the topics Google recognizes and classifies it for. To put it plainly: if your SEO blog contains 100 articles and 70 of them talk about cutting and sewing, something is wrong.

Find and cut useless content

The situation is very common. I mentioned it the day before yesterday when talking about the SEO editorial plan: the articles that don't follow the editorial plan, the "off-the-cuff" ones, can weigh down the crawl structure, and as they grow in number the rankings get worse. But how do you identify them after years of mixed publishing? Simple: ask Google. You need to check for any discrepancy between the pages that are indexed and the pages that actually receive traffic. The first step is to check the indexing status in Search Console and compare that number with the number of pages returned by the site: search operator. In my case Google reports 565 indexed pages in the first and 569 in the second, so the values are substantially aligned; but that does not mean Google considers all of my articles worthy of traffic.
To check, open Google Analytics and follow these steps: Behavior –> Content –> Landing Pages, then add the secondary dimension Acquisition –> Source / Medium. Pulling up the first 1,000 results gives you every page of your website together with its traffic sources. At this point, filter them by the google / organic source and exclude the others. You are now left with the pages of your website that "actually" receive traffic from search engines: in my case only 391 articles receive traffic from Google, the others do not.
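If you prefer to work on exported data rather than clicking through the interface, the same filter can be sketched in a few lines of Python. This is only a sketch: it assumes you exported the Landing Pages report (with Source / Medium as the secondary dimension) to a CSV, and the file and column names below are assumptions that may need adjusting to match your own export.

import pandas as pd

# Hypothetical export of the GA Landing Pages report with the
# "Source / Medium" secondary dimension; adjust names to your file.
ga = pd.read_csv("ga_landing_pages.csv")

# Keep only rows whose traffic source is Google organic search.
organic = ga[ga["Source / Medium"].str.strip().str.lower() == "google / organic"]

# Unique landing-page paths that actually receive search traffic.
organic_pages = set(organic["Landing Page"].str.strip())
print(f"{len(organic_pages)} pages receive organic traffic from Google")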
How to precisely locate pages

At this point you just have to proceed by difference. I use Screaming Frog to export all the URIs of my blog pages (the URI is the part of the web address that comes after the domain, starting with the first slash) into Excel. Next to this column I paste the one downloaded from Google Analytics, containing all the URIs that receive traffic. Once the duplicates between the two columns are removed (with an Excel filter), I am left with an exact list of the pages Google evidently doesn't like. In my case most of the pages in question are editorials and diary pages, since I also like to tell a little about my life through the blog. Now I can decide what to do with them: merge, cut, modify, or leave everything as it is. On larger websites this work can take months, but I assure you that once it's done, it really makes things better.
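For those who would rather script the difference step than filter it in Excel, here is a minimal sketch of the same idea. It assumes a Screaming Frog "Internal: HTML" export with an "Address" column and reuses the organic_pages set from the previous snippet; file and column names are assumptions, not exact.

from urllib.parse import urlparse

import pandas as pd

# Hypothetical Screaming Frog crawl export; adjust names to your file.
crawl = pd.read_csv("screaming_frog_internal_html.csv")

# Reduce full URLs to the path (the part after the domain) so they can be
# compared with the GA landing-page paths.
crawled_paths = {urlparse(u).path for u in crawl["Address"].dropna()}

# organic_pages comes from the previous snippet: pages with organic traffic.
no_traffic = sorted(crawled_paths - organic_pages)

print(f"{len(no_traffic)} crawled pages receive no organic traffic from Google")
for path in no_traffic[:20]:
    print(path)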