Introduction

Anyone who wants to optimize a website needs to understand technical SEO, since it remains essential to increasing a site's search visibility.

The first step in technical SEO is an audit of the website that shows its current performance and identifies all 'technical' issues. These issues relate to page speed, site structure, crawlability, and much more.

In this post, we will go through the 10 most important technical issues that affect SEO.

Here are the ten issues we will cover:

  1. Missing alt tags

  2. No XML sitemaps

  3. Use of structured data

  4. Missing meta descriptions

  5. Broken links

  6. Title and description

  7. The site isn't indexed correctly

  8. Social media meta tags

  9. robots.txt file

  10. Rel canonical

What is SEO?

SEO stands for "search engine optimization": the process of improving a website to increase its visibility for relevant searches. The better visibility a website has, the more likely it is to attract customers to the business.

If you want to be at the top of the results of the search engine, you need to:

  • Ensure these search engines understand who you are and what you offer;

  • Convince them that you are the most credible option for users;

  • Make your content easy for them to crawl and deliver.

1. Missing alt tags

An alt tag, also known as an "alt description", is an HTML attribute that provides alternative text for images. The primary purpose of alt tags is to make images accessible to visually impaired users who rely on screen readers. Search engines and other robots cannot interpret images directly, so an alt tag solves this by providing text that clearly describes what the image shows. Alt tags help Google and other search engines better understand an image's content and help it rank.

<img src="mac-book.png" alt="MacBook Pro 13">

There are three rules for writing alt tags:

  • Be descriptive and specific

  • Be relevant

  • Be unique

2. No XML sitemaps

A sitemap is a file where you provide information about the pages, images, and other files on your website and the relations between them. This file tells Google and other search engines which pages and files on your website are essential and provides valuable information about them.

Some of the important information a sitemap provides is how often a page changes, when it was last updated, and any alternate language versions of the page.

If all pages of a website are properly linked, search engines can usually discover most of it on their own. Still, a sitemap improves the crawling of larger and more complex websites.

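As an illustration, here is a minimal sitemap in the standard XML format defined by the sitemaps.org protocol; the URLs and dates are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawlers to know about -->
  <url>
    <loc>http://www.example.com/</loc>
    <!-- When the page was last updated -->
    <lastmod>2021-04-10</lastmod>
    <!-- How often the page is likely to change -->
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.example.com/about</loc>
    <lastmod>2021-03-01</lastmod>
    <changefreq>monthly</changefreq>
  </url>
</urlset>
```

Once the file is in place (usually at the root, e.g. /sitemap.xml), you can submit it to Google via Search Console or reference it from your robots.txt file.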

3. Use of structured data

We can help Google understand the page's content on our website by providing explicit clues about the meaning by including structured data on the page. Structured data is a standardized format for providing information about a page and classifying the page content.

Here is a JSON-LD structured data snippet that could appear on a recipe page, describing details about its content:

<html>
  <head>
    <title>Party Cake</title>
    <script type="application/ld+json">
    {
      "@context": "https://schema.org/",
      "@type": "Recipe",
      "name": "Party Cake",
      "author": {
        "@type": "Person",
        "name": "Jamie Oliver"
      },
      "datePublished": "2021-04-10",
      "description": "This cake is awesome and perfect for parties.",
      "prepTime": "PT30M"
    }
    </script>
  </head>
  <body>
    <h2>Party cake recipe</h2>
    <p>
      <em>by Jamie Oliver, 2021-04-10</em>
    </p>
    <p>
      This cake is awesome and perfect for parties.
    </p>
    <p>
      Preparation time: 30 minutes
    </p>
  </body>
</html>

Users can search for the recipe by ingredient, cook time, and more details because the structured data labels each element of the recipe.

Structured data is added as in-page markup on the page that the information applies to. You shouldn't create blank pages just to hold structured data.

Google supports three types of structured data formats: JSON-LD, Microdata, and RDFa.

You can read more about structured data in Google's documentation. After you add structured data to a page, you can validate it with Google's Rich Results Test tool.

4. Missing meta descriptions

A meta description is an HTML tag used to describe the content of a web page. This description is shown below the title and URL of the page as it appears in the search engine results. The meta description should be between 140 and 160 characters.

Below you can see how you should write a meta description in code:
<head>
  <meta
    name="description"
    content="When writing a meta description you should keep it
             between 140 and 160 characters so Google can display
             your entire message.">
</head>

Your meta description can help or hurt CTR (click-through rates), and you should take full advantage of the opportunity to present your website to users (searchers).

Since the click-through rate on the SERPs is seen as a potential ranking factor, you should make your descriptions SEO-friendly and write them to get more clicks.

5. Broken links

Broken links send visitors to web pages that no longer exist, so the links don't work. There can be many reasons why links break, but these are the main ones:

  • The website is no longer available;

  • The web page was moved without redirecting;

  • The URL structure of the website was changed.

Broken links are not only bad for the user experience but can also harm your site's SEO. A 404 error occurs when content is moved or deleted without a proper redirect.

You should regularly audit your website for broken links and remove or update them.

6. Title and description

Two of the most important on-page elements for SEO are titles and descriptions. The title tag is important for ranking, as it is the first place Google looks to establish a page's relevance to a search term.

The description tag is also important for SEO: it's effectively your first pitch to users, displayed alongside the title in the search results.

The title tag (<title>) can be found at the top of the page code, in the head section. Making changes to this tag can help raise a website's rankings quickly, since search engines use it as a summary of the page content.

The description tag can get you more clicks from search results and increase CTR (click-through rate). The important elements to remember for a description are a good summary, an appropriate length, and a call to action.
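Putting the two together, a page's title and description might be written like this in the head section (the wording here is only an illustrative example):

```html
<head>
  <!-- Shown as the headline of the search result; keep it concise -->
  <title>European Travel Destinations | Tours and Packages</title>
  <!-- Shown below the title in the result; summary plus call to action -->
  <meta
    name="description"
    content="Offering tour packages for individuals or groups.
             Browse our destinations and book your next trip today.">
</head>
```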

7. The site isn’t indexed correctly

You need to ensure that your website is indexable so that it can show up in Google's search results. You can't drive organic traffic to a website that isn't indexed.

Google's index is a list of all web pages that the search engine knows about. If Google doesn't index your website, your site won't appear in search results. Indexing means that the site is stored in Google's databases.

It can take Google a few days to a few weeks to index a site. Here is how to check if your website is indexed:

  1. Go to Google;

  2. In the search bar, type 'site:example.com';

  3. Look at the results and see how many of your pages Google has indexed;

  4. If zero results show up, the page isn't indexed.

8. Social media meta tags

Almost every website has share buttons so that users can easily share content on social media. In this section, we will go through Facebook and Twitter meta tags.

Facebook offers various options for how shared content appears in the timeline. Unless otherwise specified, every page defaults to the type called 'website'.

When someone shares the homepage of, for example, a travel website, Facebook displays a preview card with an image, a title, a description, and the domain.

Twitter, like Facebook, has multiple ways to format shared content in its feed; the one most similar to Facebook's preview card is what Twitter calls the 'Summary Card with Large Image'.

Each card features some attributes of the shared content: image, title, description, and domain. We can specify these attributes with <meta> tags: Facebook and Twitter scrape shared content and read its <meta> tags to display the appropriate information. Facebook uses <meta> tags based on the Open Graph protocol, and Twitter defines its own set of <meta> tags, similar to Open Graph.

<meta property="og:title" content="European Travel Destinations">
<meta property="og:description"
      content="Offering tour packages for individuals or groups.">
<meta property="og:image" content="http://euro-travel-example.com/thumbnail.jpg">
<meta property="og:url" content="http://euro-travel-example.com/index.html">

<meta name="twitter:title" content="European Travel Destinations">
<meta name="twitter:description"
      content="Offering tour packages for individuals or groups.">
<meta name="twitter:image" content="http://euro-travel-example.com/thumbnail.jpg">
<meta name="twitter:card" content="summary_large_image">


9. robots.txt file

A robots.txt file tells search engine crawlers which pages or files the crawler can or cannot request from your website. This file is mainly used to avoid overloading your website with requests; you shouldn't use it as a means to hide your web pages from Google search results.

A robots.txt file lives at the root of your website. For the site http://www.example.com, the robots.txt file lives at www.example.com/robots.txt. It is a plain text file that consists of one or more rules, each of which blocks or allows access for a given crawler to a specified file path on that website.

Here is an example of a robots.txt file with two rules:

# Group 1
User-agent: Googlebot
Disallow: /nogooglebot/

# Group 2
User-agent: *
Allow: /

Sitemap: http://www.example.com/sitemap.xml

In the first group, the user agent named 'Googlebot' is not allowed to crawl the http://www.example.com/nogooglebot/ directory or any of its subdirectories. In the second group, all other user agents are allowed to crawl the entire site. At the bottom is the path to the sitemap, http://www.example.com/sitemap.xml.

Your website can have only one robots.txt file, and it can be created in any text editor.

10. Rel canonical

If you have a single page accessible via multiple URLs, or different pages with similar content, Google sees these as duplicate versions of the same page. Google will choose one URL as the canonical version and crawl that one; all other URLs will be crawled less often.

A canonical URL is the URL of the page that Google thinks is most representative of a set of duplicate pages on your site.

Google chooses the canonical page based on several factors, such as whether the page is served via HTTP or HTTPS, page quality, presence of the URL in a sitemap, and any rel=canonical labeling.

There are many reasons why you should explicitly choose a canonical page:

  • To specify the URL that you want people to see in search results;

  • To simplify tracking metrics for a single topic;

  • To manage syndicated content;

  • To avoid spending crawling time on duplicate pages.
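To declare a canonical explicitly, you can add a rel=canonical link in the head section of each duplicate page, pointing at the preferred URL (the URLs below are hypothetical):

```html
<!-- Placed in the <head> of every duplicate or parameterized version
     of the page, e.g. http://www.example.com/shoes?sort=price -->
<link rel="canonical" href="http://www.example.com/shoes">
```

Google treats this tag as a strong hint rather than a directive and combines it with the other signals listed above when choosing the canonical URL.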

Conclusion

Those were the top 10 technical issues that affect SEO. In my experience, fixing them can increase your website's reach and make the site much friendlier for crawlers. There are many SEO audit tools that show you what you should fix on your website, and I recommend using them. To begin, you can test your website with Google's Lighthouse tool and see your SEO score; from there, you will see the issues you can fix and recommendations for a better score.