Responsive Web Design: Best Practices for Improving Performance


Users are impatient. The majority will abandon a site if it fails to deliver the core user experience within 1-3 seconds. Today, when mobile overtakes fixed Internet access, you need to think about how the app loads on low bandwidth, which images should be requested on which devices, how stylesheets and JavaScript are cached, and so on. Fortunately there are a number of great tools, such as PageSpeed Insights, WebPageTest and Pingdom Website Speed Test, that let you find out what’s wrong with your app in terms of web performance. The Chrome DevTools emulator gives an idea of how your app renders at different connection speeds. As you see, there are many ways to discover whether you have performance issues and what they are. But that brings up the question: how do you solve them? Here I’d like to share my experience.

Cutting the mustard

First of all let’s face our challenge: we have to reduce the user response time as much as possible. Users get irritated when they stare too long at a blank page or even at a loading splash screen. But how? We are not going to throw away all those fancy UI features that make our app unique. We don’t need to. It’s fine if the user doesn’t get the fully-featured UI immediately, but we do have to provide the core look and feel very quickly. So we ought to “cut the mustard”: determine which components belong to the core UX, which are secondary and which are, perhaps, tertiary. For example, the project I’ve been working on lately we split into the following parts (a loading sketch follows the list):

  • Core content - Essential HTML and CSS, usable non-JavaScript-enhanced experience
  • Enhancement - JavaScript, geolocation, touch support, enhanced CSS, web fonts, images, widgets
  • Leftovers - Analytics, advertising, third-party content
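
Here is a minimal sketch of how such a split can be wired up: the core content is plain HTML and CSS, while a simple capability check (in the spirit of the BBC’s “cuts the mustard” test) decides whether the enhancement and leftover bundles get loaded at all. The file names and the exact checks are illustrative, not the project’s actual ones:

<script>
  // Core content is plain HTML/CSS and must work without any of the code below.
  // A basic capability test in the spirit of the "cuts the mustard" check:
  if ( "querySelector" in document && "addEventListener" in window ) {
    // The browser is capable enough: load the enhancement and leftover bundles
    [ "/js/enhancement.js", "/js/leftovers.js" ].forEach(function( src ){
      var script = document.createElement( "script" );
      script.src = src;
      script.async = true;
      document.head.appendChild( script );
    });
  }
</script>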

Non-blocking CSS and critical path

In order to deliver the core content faster we can leverage the technique known as critical path CSS (http://css-tricks.com/authoring-critical-fold-css/). We extract the core-UX CSS, minify it and inline it into the HTML, while the main CSS is loaded asynchronously by JavaScript. Under usual desktop conditions this happens fast enough that you can hardly see the difference. On mobile devices (especially on a slow connection) the user still gets the core experience almost immediately, and it is enhanced as soon as the main CSS arrives.

I’m using the asynccss library for loading the main CSS. On the first run the library reads the content of the specified files via XHR on the DOM-ready event, injects it into the HTML and saves it in localStorage. Next time the library takes the previously stored CSS content from localStorage and therefore supplies it to the HTML in no time.
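
The idea behind it can be sketched roughly like this (a simplified illustration, not the library’s actual source):

// On DOM ready: take the CSS from localStorage if we already have it,
// otherwise fetch it via XHR, inject it and cache it for the next visit.
document.addEventListener( "DOMContentLoaded", function(){
  [ "css/foo.css", "css/bar.css" ].forEach(function( file ){
    var style = document.createElement( "style" ),
        cached = window.localStorage.getItem( "css:" + file ),
        xhr;
    document.head.appendChild( style );
    if ( cached ) {
      style.textContent = cached;
      return;
    }
    xhr = new XMLHttpRequest();
    xhr.open( "GET", file, true );
    xhr.onload = function(){
      style.textContent = xhr.responseText;
      window.localStorage.setItem( "css:" + file, xhr.responseText );
    };
    xhr.send();
  });
});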

You can download it manually or install it with a package manager:

$ npm install asynccss --save

Usage example:

<head>
...
<script type="text/javascript">
  (function(){
    <?php include "./asyncCss.min.js"; ?>
    try {
      asyncCss( [ "css/foo.css", "css/bar.css" ] );
    } catch( e ) {
      console.log( "asyncCss: exception " + e );
    }
  }());
</script>
<noscript>
<link href="css/foo.css" rel="stylesheet">
<link href="css/bar.css" rel="stylesheet">
</noscript>
...
</head>

Progression of page load on a slow connection

Progressive enhancement

Progressive enhancement implies that the user receives a decent, functional UI even before (or without) JavaScript being loaded; the UI then gets enhanced as the JavaScript modules load. By using CSS tricks such as the checkbox hack and the :target and :hover pseudo-selectors we can achieve generic UX patterns such as tree menus, expandable and tabbed areas, dropdown menus, push toggles, slideshows, modal windows and others without JavaScript at all. So when we implement interactive components with CSS we speed up the user response (the user isn’t blocked by JavaScript loading) and, besides, we reduce our JavaScript codebase.

Modal window by :target

Dropdown menu by checkbox-hack
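
To give an idea of the approach, the modal window from the figure above can be driven purely by :target, roughly like this (markup and class names are illustrative, not the project’s actual code):

<style>
  .modal { display: none; }
  /* The element becomes :target when the URL fragment matches its id */
  .modal:target { display: block; position: fixed; top: 20%; left: 0; right: 0; }
</style>

<a href="#terms">Terms of Service</a>

<div class="modal" id="terms">
  <a href="#">Close</a>
  <p>Terms of Service content…</p>
</div>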

Stateless design

When the pages of core content are stateless we can fetch them from a high-performance cache storage, e.g. Memcached, in almost no time. Let’s say a news details page consists of navigation and the news content; that’s the core. The news entry may also contain a comments block with dynamic content (the actual comments), but that’s an enhancement, so we can load that block separately, when the user scrolls down to the bottom of the news item. The core content thus comes from the cache, while enhancement modules are lazy-loaded. Another example is “flying” pages: the footer comprises a link to the Terms of Service page, which shows up in a modal window. We retrieve the page content via XHR after the user clicks the link, show it and store it in memory for reuse (a sketch follows the list below).

What we gain:

  • The user gets the page faster (no backend processing for that sort of content)
  • A number of page requests come from bots and accidental landings; in those cases no scrolling is performed, so we don’t load the backend with extra processing for content nobody sees
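
Both patterns can be sketched roughly as follows (the element ids, URLs and the showModal() helper are made up for illustration):

(function(){
  var commentsEl = document.getElementById( "comments" ),
      commentsLoaded = false,
      pageCache = {};

  function load( url, cb ) {
    var xhr = new XMLHttpRequest();
    xhr.open( "GET", url, true );
    xhr.onload = function(){ cb( xhr.responseText ); };
    xhr.send();
  }

  // Enhancement: comments are requested only when the user scrolls down to them
  window.addEventListener( "scroll", function onScroll(){
    if ( commentsLoaded ||
         commentsEl.getBoundingClientRect().top > window.innerHeight ) {
      return;
    }
    commentsLoaded = true;
    window.removeEventListener( "scroll", onScroll );
    load( "/news/latest/comments", function( html ){
      commentsEl.innerHTML = html;
    });
  });

  // "Flying" page: fetched once via XHR, then served from memory
  document.getElementById( "tos-link" ).addEventListener( "click", function( e ){
    e.preventDefault();
    if ( pageCache.tos ) {
      return showModal( pageCache.tos );
    }
    load( "/pages/terms-of-service", function( html ){
      pageCache.tos = html;
      showModal( html ); // showModal() is assumed to exist in the app
    });
  });
}());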

Conditional image loading

When working on a responsive design we have to take care of which images are loaded for which device. For instance, when a page is requested on an iPhone 3G in portrait orientation the viewport width is 320px and we should not load any pictures wider than that. An image 1920px wide targeted at desktop would slow down the user response time drastically while being loaded over a 3G connection. But if we have a visitor on an iPhone 4/5 with a Retina display we load a 2x image (640px wide) to improve the UX. Fortunately we can fully control image loading in CSS by using media queries. To make it even easier I use a SASS mixin, which makes the styling conditions look like this:

.section {
  background-image: url("/assets/section/foo.png");
  @include media( "screen", ">w320", "<w768" ) {
    background-image: url("/assets/section/foo-320p.png");
  }
  @include media( "screen", ">w320", "<w768", "retina" ) {
    background-image: url("/assets/section/foo-640p.png");
  }
}
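
The first condition above, for example, is expected to compile to roughly the following CSS (the breakpoint semantics are assumed from the mixin arguments):

.section {
  background-image: url("/assets/section/foo.png");
}
@media screen and (min-width: 320px) and (max-width: 768px) {
  .section {
    background-image: url("/assets/section/foo-320p.png");
  }
}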

But what about control over image loading in HTML? The HTML5 picture element specification aims to provide us with that tool. Can we use it already? At the moment WebKit-based browsers implement it to some degree, but with the Picturefill polyfill we are fine on any widely used browser.

<picture>
 <!--[if IE 9]><video class="is-hidden"><![endif]-->
 <source srcset="/files/news/_944p/foo.jpg, /files/news/_1888p/foo.jpg 2x" media="(min-width: 1024px)">
 <!-- iPad landscape -->
 <source srcset="/files/news/_944p/foo.jpg, /files/news/_1888p/foo.jpg 2x" media="(min-device-width: 769px) and (max-device-width: 1024px)">
 <!-- iPad portrait -->
 <source srcset="/files/news/_768p/foo.jpg, /files/news/_1536p/foo.jpg 2x" media="(min-device-width: 481px) and (max-device-width: 768px)">
 <!-- iPhone 3-4 landscape -->
 <source srcset="/files/news/_480p/foo.jpg, /files/news/_944p/foo.jpg 2x" media="(min-device-width: 321px) and (max-device-width: 480px)">
 <!-- iPhone 3-4 portrait -->
 <source srcset="/files/news/_320p/foo.jpg, /files/news/_640p/foo.jpg 2x" media="(max-device-width: 320px)">
 <!--[if IE 9]></video><![endif]-->
 <img src="/files/news/_944p/foo.jpg" srcset="/files/news/_1888p/foo.jpg 2x" alt="My Image">
</picture>

Conditional image loading requires a lot of extra image files (one version per breakpoint). It would be hell to maintain without an image resize service. In my case it’s a simple script that the HTTPD falls back to when a requested image is not found. The script checks whether the original image is available and validates the requested width against a white-list. If both conditions are met, it resizes the image via ImageMagick (see the sequence diagram and the sketch below):

Image resize service sequence diagram
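
The actual script isn’t listed here, but its logic can be sketched like this (a hypothetical Node.js version; the path conventions, the white-list values and the _original directory are assumptions, and ImageMagick’s convert binary is expected to be installed):

// resize.js: hypothetical fallback handler invoked when a sized image is missing
var execFile = require( "child_process" ).execFile,
    fs = require( "fs" ),
    path = require( "path" ),
    // white-list of widths matching the break-points used in CSS/HTML
    ALLOWED_WIDTHS = [ 320, 480, 640, 768, 944, 1536, 1888 ];

function resizeIfAllowed( requestedPath, done ) {
  // e.g. requestedPath = "/files/news/_640p/foo.jpg"
  var match = requestedPath.match( /^(.+)\/_(\d+)p\/([^\/]+)$/ ),
      base, width, file, original, target;
  if ( !match ) {
    return done( new Error( "Unrecognized path" ) );
  }
  base = match[ 1 ];
  width = parseInt( match[ 2 ], 10 );
  file = match[ 3 ];
  if ( ALLOWED_WIDTHS.indexOf( width ) === -1 ) {
    return done( new Error( "Requested width is not white-listed" ) );
  }
  original = path.join( base, "_original", file );
  target = path.join( base, "_" + width + "p", file );
  if ( !fs.existsSync( original ) ) {
    return done( new Error( "Original image not found" ) );
  }
  fs.mkdirSync( path.dirname( target ), { recursive: true } );
  // Delegate the actual scaling to ImageMagick
  execFile( "convert", [ original, "-resize", String( width ), target ], function( err ){
    done( err, target );
  });
}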

Compressed and minified assets

For both CSS and JavaScript I use pre-processors, so during the build the SASS sources are compiled into a minified CSS file and all the CommonJS modules into a minified JavaScript bundle. In fact, even the HTML is minified when it comes from Memcached.

Any images used in the app are optimized without quality loss during upload/deploy; I use ImageMin for that. Further optimization can be achieved by conditional use of WebP, JPEG XR and progressive JPEG encodings.
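
As a reference, a lossless optimization step with imagemin’s Node API might look like the following (assuming imagemin v7 with the jpegtran and optipng plugins; the paths are made up):

// optimize-images.js: hypothetical build step that recompresses images losslessly
const imagemin = require( "imagemin" );
const imageminJpegtran = require( "imagemin-jpegtran" );
const imageminOptipng = require( "imagemin-optipng" );

(async () => {
  const files = await imagemin( [ "assets/src/*.{jpg,png}" ], {
    destination: "assets/dist",
    plugins: [ imageminJpegtran({ progressive: true }), imageminOptipng() ]
  });
  console.log( files.length + " images optimized" );
})();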

Avoid redundancies

Weighty JavaScript frameworks, UI libraries and plugins come at a cost. Developer communities aim to provide the most universal solution possible, and that often means overhead for your concrete task. If you need a slideshow, don’t pull in a jQuery plugin for it; you can probably do without any JavaScript at all. Frameworks do improve code maintainability, which is utterly important. That’s why I rely on the optimized BackboneJS port ExoskeletonJS. It’s only ~8KB of compressed code that brings clean separation of concerns and consistent abstractions.

Another part that should be checked for redundancies is CSS. You may have seen Addy Osmani advocating the UNCSS utility, which analyzes your CSS in a smart way and removes all the unused rules. It’s a sort of remedy, but it raises the question: why do you have redundant rules in your CSS in the first place? Usually because of poor maintainability. Keep to component-based CSS with short selectors and you will get maintainable, clean and lightweight code.

Further reading