How to speed up Divi

Stone-age tools versus new technologies

Stop following HTTP/1-era recommendations – we already have HTTP/3.

How to take care of speed on a normal website?

What I saved and what I gained by switching to the concepts behind HTTP/2 and HTTP/3.

I manage 168 domains.

In the Google index I have 290,640 indexed subpages – the total across all domains.

I used the solutions from the article and:

• I have reduced the server’s RAM requirement by 13 GB.

• I saved about 150 GB per month on transfer.

I show it from the perspective of an agency that maintains customer pages, but in a way that can also be implemented on shared hosting.

So let’s stop listening to test recommendations based on HTTP/1 guidelines and really speed up our pages.

We demand more and more from websites – nicer, faster, richer

Websites are no longer static business cards. E-shops, booking calendars, management applications – everything should be attractive, fast and polished. Huge functionality in elegant, aesthetic packaging, easy to use. You have to make yourself and your product stand out from the rest. The website is supposed to tell a story, inspire, draw the visitor in and absorb him.

Marketing tells us that feelings sell, so the offer is meant to appeal to emotions. Video, high-resolution photos and animations are displacing walls of text. Websites are not what they were 10 years ago; they are mini applications with a polished front end and complex business logic.

We have more and more flexible tools

The trick is to combine complicated business logic with a pleasant, intuitive website.

You need the right tools to create beautiful and functional pages.

Tools flexible enough to fit different projects, quick and efficient to implement, providing options that let you present complex things in a simple, elegant way for the user. As we know, WordPress gives us flexibility; DIVI adds beauty and aesthetics.

This duo allows you to quickly create something functional and tasteful.

Thanks to beautiful high-quality photos, typography, the play of colors and tones, animations and interactivity, you will draw the user into your product and your company’s story.

Flexible tools that increase “page weight”.

Excel spreadsheets have over 750 functions. You probably use only about 20 of them. Why so many possibilities? Because each of us uses a different 20. The same is true of WordPress and DIVI.

We get many features, but you probably use only some of them: photos, fonts, CSS, animations, JavaScript and much more. The price for this flexibility and enormous range of functions is page weight. Fortunately, technology moves forward and solves these problems.

Why can’t the developers improve this?

Because there is nothing to fix here; nothing is broken or badly made. Treat WordPress and DIVI as a buffet from which you choose what you need. A toolbox built this way is a set of tools that helps you work – full of flexibility and the ability to create, tune and personalize tools for yourself. It is the end result that gets optimized, not the tools themselves.

How to take care of speed on a normal website?

• The starting point is to choose the measure by which we will judge whether the problem concerns us.

• Without a measure we cannot tell whether we are making progress.

• Without a common measure, each of us may draw different conclusions from the same observations.

• I have imposed a constraint: the measures and solutions must be usable by a freelancer or agency employee without any special server requirements.

HTTP/1 – retro tests and retro recommendations.

Old tests based on YSlow or curl are limited to measuring time, number of connections and file size. Apart from the fact that YSlow last received an update 6 years ago, it does not take into account anything created in the last decade, nor the changes in how websites are built. Pages have moved from full server-side rendering to shifting a large amount of computation onto the user’s device.

Relying on old HTTP/1 guidelines is wrong because they assume that:

– a 1 KB download is better than 1 MB. But are you sure a 1 KB script with an infinite loop that kills the CPU is better than a 1 MB photo?

– the number of simultaneous downloads per domain is limited to 8. We get recommendations about subdomains and CDNs to bypass the HTTP/1 limit, when we have HTTP/2 and can deliver many files over one connection.

– request time measurement matters most. Here, this factor tells only half the story.

Faster delivery of page elements is important. But ignoring how elements are delivered, what they are and where they sit on the page falsifies the result. If the user can already read the article, the slow loading of a photo in the footer, which he cannot even see, should not drag down the score.

These types of tests are convenient because they are easy to cheat.

By ignoring the new HTML5 standards, asynchronous attributes, HTTP/3 and file types such as WebP, localhost will always beat a live site, and a static file will always beat PHP.

You can score 100%, even though on a device with little RAM, a weak CPU or no GPU support the site may be unusable.

In a moment we will expand on HTTP/2 and HTTP/3 – for now remember:

Applying HTTP/1 tests in the era of HTTP/3 is fooling yourself and the client.

A new way to measure optimization

Referring to the requirements customers set for us when creating websites, and to the retro-test examples above, I want to show that we should not work in the realm of raw numbers and laboratory measurements.

Act in the sphere of the user’s feelings

There is a reason we talk about the impression of page speed and the feeling that a page stutters or lags.

So which “feelings” are worth measuring?

Time to First Byte (TTFB). The time counted from the moment the client sends a request until the first byte of the response is received gives us a real picture of how efficiently user requests are handled. This time includes not only the round trip of request and response but, more importantly, how quickly the response was prepared. The measurement says nothing about the size of the response. You can send a 0.1 KB message on a messenger or 1 GB of video; what matters is that you responded quickly to the user’s request and something is already happening.

DOM Content Loaded (DCL) is the time to load and process the HTML response. The browser has parsed the HTML and built the DOM model of our site, without style sheets, images, iframes or JavaScript. It is also the first assessment of what remains to be done. Note that the measurement covers HTML parsing: what matters is how easily the code converts into a DOM, not its size in KB.

First Meaningful Paint. The first paint – one of the most important elements of Nielsen’s heuristics: a visible response to user action. It doesn’t matter what and how much you deliver; what matters is how quickly the user sees that something is happening – visual confirmation that he typed the address correctly.

Time to Interactive – the first interaction – a measurement showing when the page can actually be used. So what if the user sees the site if nothing can be done on it?

First CPU Idle – the time after which the device can attend to the user rather than to running your website. Interactivity and smoothness suffer badly when the CPU and GPU are kept 100% busy.
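Several of these “feelings” can be read programmatically in the browser through the Navigation Timing API. A minimal sketch (the helper name is mine, not a standard API; First Meaningful Paint and Time to Interactive are lab metrics reported by Lighthouse rather than raw browser timestamps, so they are not derived here):

```javascript
// Summarize key user-centric timings from a PerformanceNavigationTiming-
// style entry. In a browser you would pass:
//   performance.getEntriesByType('navigation')[0]
function timingSummary(nav) {
  return {
    // Time to First Byte: request sent -> first byte of the response.
    ttfb: nav.responseStart - nav.requestStart,
    // DOM Content Loaded: navigation start -> DOM fully parsed.
    domContentLoaded: nav.domContentLoadedEventEnd - nav.startTime,
  };
}
```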

WARNING! Note that these tests differ from HTTP/1 tests in that measurements of element processing times have been added. What counts in the new tests is the effect your website has on the user’s device. The analysis and processing of HTML, CSS and JavaScript is as important as the speed of their delivery.

Below we will discuss how to easily improve each of these measures. At the end I will show how HTTP/2 and HTTP/3 improve all of them.

How to use Lighthouse?

Lighthouse is built into, or available as a plugin for, all Chromium-based browsers.

It ships as standard in Google Chrome and Microsoft Edge. I recommend using it locally while building the page. To start Lighthouse, press F12 and select the Audits tab in DevTools. The settings are simple enough that I will not discuss them; hovering over an option shows an additional explanation.

Google PageSpeed Insights is an online tool based on Lighthouse which, in addition to the test, also shows us results collected from real website users.

A new way of optimization

I will repeat myself because it is important: what counts in the new tests is the effect your website has on the user’s device. The analysis and processing of HTML, CSS and JavaScript is as important as the speed of their delivery. Let’s now translate feelings and impressions – the sense of page speed, of a page stuttering or lagging – into specific measures and solutions based on the Lighthouse results. We have two areas to take care of.

Delivery Speed

Delivery speed was the domain of HTTP/1, which is why it is the oldest and most refined area in the WordPress community. I will focus here only on the differences and the news in HTTP/2 and HTTP/3.

User cache

First of all, we should start using Service Workers. There is no faster or better way to deliver content to the user than a proxy server installed in his browser. A Service Worker is not a mere cache but a full-fledged proxy server that can modify requests and responses. It allows us to display content from the cache immediately and, in the background, update it and replace it on the screen. The same scheme lets our application keep working even when the user has no Internet access: we serve from the cache, and when he is back online we download new content and update what he sees. We strive for the highest possible hit rate in the Service Worker cache, i.e. we want as many things as possible prepared and preloaded into it. Service Workers are a new kind of cache that gives us full control, replacing AppCache and sitting in front of the browser cache.
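The “show from cache, refresh in the background” scheme described above is the stale-while-revalidate strategy. A minimal sketch, written as a plain function so the logic is visible and testable; in a real sw.js it would be wired to the fetch event as shown in the trailing comment (the cache name 'site-v1' is illustrative):

```javascript
// Serve the cached copy immediately if we have one; in the background,
// fetch a fresh copy and put it into the cache for the next visit.
async function staleWhileRevalidate(cache, request, fetchFn) {
  const cached = await cache.match(request);
  const refresh = fetchFn(request)
    .then((response) => {
      cache.put(request, response.clone());
      return response;
    })
    .catch(() => cached); // offline: keep whatever we already had
  // Cache hit -> instant answer; cache miss -> wait for the network.
  return cached || refresh;
}

// In a real sw.js (browser-only, hence commented out):
// self.addEventListener('fetch', (event) => {
//   event.respondWith(
//     caches.open('site-v1').then((cache) =>
//       staleWhileRevalidate(cache, event.request, fetch))
//   );
// });
```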

Browser cache. This is an old and well-described caching system; every plugin and guide covers it. The rule is simple: the highest possible rate of cache hits and the fewest possible requests to the server. It is the fallback for Service Workers. In short: the more good old browser cache, the better.
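The “maximum cache hits, minimum requests” rule boils down to sending the right Cache-Control headers. A sketch of the usual policy (the extension list and values are illustrative, not a WordPress default):

```javascript
// Pick a Cache-Control header by resource type: static assets that are
// versioned by filename can be cached "forever", HTML must revalidate.
function cacheControlFor(path) {
  if (/\.(css|js|woff2|webp|png|jpg|svg)$/.test(path)) {
    return 'public, max-age=31536000, immutable'; // one year
  }
  return 'no-cache'; // serve from cache only after revalidating
}
```

On Apache or nginx the same policy is expressed with `mod_expires` or `expires` blocks; the function above just makes the decision explicit.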

Let’s sum up the profits and get to the interesting part.

• A user-side cache brings Time to First Byte down to almost zero.

• DOM Content Loaded drops dramatically.

• HTML analysis starts without waiting for the network.

• If the other page elements are also in the user’s cache, we likewise reduce First Meaningful Paint, Time to Interactive and network transfer times.


If we do not have the needed element in the user’s cache, we ask the server for it.

This happens when neither the Service Worker nor the browser cache has a response.

The sooner we answer, the better. Ideally the CDN or server is close to the user and holds a ready answer, and the whole thing runs on the magic of HTTP/2 or HTTP/3.

HTTP/2 has been an official standard for 5 years, and its SPDY prototype was already supported in Internet Explorer 11, Firefox 11 and Opera 12, so you can confidently use it in your projects.

Let’s get to the news.

HTTP/2 push is the new incarnation of Edge Side Includes.

A server or CDN, when receiving a page request, can dynamically attach additional elements to the basic response on the fly.

Push is most often used to attach CSS to HTML – but ONLY if the user does not yet have our CSS.

And here is a silent revolution!

No more merging files into mega-packages.

No more concatenation, no more inlining CSS into the head, etc.

We expand our site gradually. We send the necessary minimum over the network so the user can start working, and in the background we quietly prepare the next steps. Less to transfer, less for the server to compress, less for the user to decompress and analyze, less for the server and the user to cache.

Wptypek Performance

This is the DRY (Don’t Repeat Yourself) approach, which I will demonstrate once we cover the next elements.

How do we know that the user already has the CSS?

The first, more difficult option: requests passing through a Service Worker can have additional headers added, e.g. X-ihave-CSS: true. The easier way is to check the Referer header and “technical cookies” in Apache2/nginx/CDN. If the Referer is your own domain, or a technical cookie such as “i-have-css” is set, I think you know what to do.
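The referer-or-cookie check reduces to a small predicate. A sketch using Node’s built-in http2 module as the server (the cookie name i-have-css, the domain example.com and both function names are illustrative, not from the article):

```javascript
// Push CSS only to visitors who do not appear to have it cached yet.
function shouldPushCss(headers) {
  const cookies = headers['cookie'] || '';
  const referer = headers['referer'] || '';
  const returning =
    cookies.includes('i-have-css=1') ||        // technical cookie is set
    referer.startsWith('https://example.com'); // navigating within our site
  return !returning;
}

// Wiring it into an HTTP/2 server (Node's core http2 module):
function handleStream(stream, headers) {
  if (headers[':path'] === '/' && shouldPushCss(headers)) {
    stream.pushStream({ ':path': '/style.css' }, (err, push) => {
      if (err) return; // the client may have disabled push
      push.respond({ ':status': 200, 'content-type': 'text/css' });
      push.end('/* site styles */');
    });
  }
  stream.respond({
    ':status': 200,
    'content-type': 'text/html',
    'set-cookie': 'i-have-css=1; Path=/', // mark the user as "has CSS"
  });
  stream.end('<link rel="stylesheet" href="/style.css">');
}
```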

Can you make a mistake here?

If you incorrectly identify someone as a returning user – his cache is empty and you do not add the push – everything will still work: the browser will fetch the missing elements itself. Nothing will break.

The real mistake is to push everything, everywhere, every time. Then each response carries a huge package of unnecessary items. Always pushing fonts, CSS and JavaScript attaches them to every request, even when the user is asking for a logo, an image, robots.txt, the API, …

Push is about progressively attaching the smallest necessary package of elements.

HTTP/2 push gives us the ability to tailor the response to who is asking

On a request for HTML we add HTTP/2 push and send, along with the HTML, the CSS, JavaScript, Service Worker script and manifest. To returning users we send only the HTML. If we have a server or CDN with HTTP/2 or HTTP/3, letting Autoptimize-type plugins inline everything into the head is a mistake. Why send, compress, decompress and analyze everything every time?

Remember that HTTP/2 push is not just for HTML requests. Nothing prevents you from attaching, say, an additional CSS file with font animations and the woff2 font itself to a CSS request.

HTTP/3, because HTTP/2 was already too slow

HTTP/1.1 and HTTP/2 use TCP as the transport layer. UDP remains – and that is what HTTP/3 (QUIC) builds on.

Communication over TCP means that data must arrive in order: if one packet is lost, the TCP receiver holds back all subsequent packets until the lost one is retransmitted – even if the application could already handle them. To put it vividly: with HTTP/1.1 and HTTP/2 it is a conversation in which you will not send the next message until the other party confirms the previous one arrived intact.

Currently, if an error occurs during data transfer, HTTP/1 and HTTP/2 stall the entire page load.

In HTTP/3 the page loading process continues, and only the damaged item is downloaded again.

With HTTP/3 we get multiplexed transfer of many files without head-of-line blocking, less congestion and retransmission, and shorter connection setup times.

Test results from HTTP/1 will become completely inadequate in such realities.

When will it work?

Mozilla and Chrome are already testing HTTP/3. Microsoft added support in Edge on October 4, 2019.

Most popular browsers plan to enable HTTP/3 in 2020.

The new protocol will also work better on mobile devices, where we switch between GSM and WiFi transmitters. At the moment, changing IP address breaks the connection and requires it to be re-established. In HTTP/3 the protocol takes care of connection continuity, and we keep browsing as if nothing had changed.


• We strive to have ready answers to user requests.

• Keep pre-prepared answers as close to the user as possible.

• HTTP/2 push is a building-block approach. We store and send the necessary minimum.

• HTTP/2 push is the ability to send multiple files in response to one query.

2 – TTFB – Response Cache priority

Analysis and processing speed

We have already discussed the speed of delivering content to the user. Here the old HTTP/1 tests end – but not Lighthouse. We know the Google tool measures the effect our code has on the user’s device.

We have no influence on the quality of end devices. Everyone has a different model, and thus different WiFi/GSM modems, displays, CPUs and GPUs.

In Lighthouse these differences are minimized by imposing throttling on both the connection and the CPU.

What connects these 3 measures?

• DOM Content Loaded

• First Meaningful Paint

• Time to Interactive

DOM Content Loaded reflects the ease of converting HTML into the Document Object Model.

In First Meaningful Paint, what matters is how quickly the user sees the first elements on the page, and this includes processing the CSS code into the CSS Object Model and connecting it to the Document Object Model.

Time to Interactive depends on the combination of the previous two measures and on the JavaScript itself:

  • how extensively it uses the CPU for computation, and
  • how deep the changes it makes to the DOM and CSSOM go.

All 3 measures share the speed at which the browser engine converts code into an object model. They are also related: each one depends on the previous.

DOM, CSSOM

3 – Dependence of TTFB, DOM Content Loaded, First Meaningful Paint and Time to Interactive

The main advantage and disadvantage of page builders is that they contain many modules and rules describing how to place any element on the page. Such flexibility results in substantial HTML and CSS files. As I mentioned before, this is the cost of flexibility. In most cases we use about 20% of these rules on a page; the remaining 80% is unnecessary code the user must download and parse.

How to optimize DOM and CSSOM in DIVI?

4 – First Meaningful Paint dependence on HTML and CSS

We can get faster conversion of HTML to DOM and CSS to CSSOM in two ways:

1. maximally simplifying the CSS and HTML code

2. not recalculating all the CSS and HTML every time

Maximum simplification of HTML and CSS code

When writing individual themes, always use tools like uncss or purgecss; they strip out unused code.

In other cases it is worth using WordPress plugins, for example, to clean up the resulting HTML and CSS code. Leaving only the ~20% of rules actually used drastically reduces the time needed for HTML and CSS analysis.

Delta. Minimal file to be converted

In DIVI we turn off everything we don’t use – just a few changes on the tool side and everything is OK.

WordPress itself adds many unnecessary class attributes to the HTML code.

All page builders, DIVI included, carry their own universal code.

Speed up DIVI

5 – Breaking down CSS into modules.

Do not follow decade-old recommendations!

Don’t build mega-packages. Use progressive delivery of elements.

Put it all together (summary)

1. Service Workers or browser cache

2. CDN

3. Proxy server, Varnish, nginx microcache

4. WordPress response-cache plugins, e.g. WPTypek Performance

5. WordPress plugins optimizing DOM and CSSOM, e.g. WPTypek Performance

Article prepared by: Dawid Rzepczyński @nebuso

