Manual change detection in Angular for performance

There are special use cases where you might need to customize change detection in Angular applications for performance reasons. The majority of the time, default change detection is the correct way to go. However, I’ve worked with several customers recently who were using very large datasets within Angular 8 components, and this caused some interesting UI slowdowns as a side effect.

Fortunately, and unsurprisingly, Angular does allow you to modify change detection, for example via ChangeDetectionStrategy.OnPush and ChangeDetectorRef. In both cases that I worked on, my recommendation was to lean towards ChangeDetectorRef in order to provide granular, explicit control over when a component updates and to reduce the performance impact on the UI.
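
For reference, here’s a minimal sketch of what the OnPush alternative looks like; the selector and template names are placeholders:

import { Component, ChangeDetectionStrategy } from '@angular/core';

@Component({
  selector: 'app-example', // placeholder selector
  templateUrl: './example.component.html', // placeholder template
  // With OnPush, Angular only re-renders this component when an @Input
  // reference changes, an event fires in its template, or change detection
  // is triggered explicitly
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class ExampleComponent {}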

In one case, the change detection needed to be slowed down, and it was suitable to run it on a timer loop. Here’s the pseudo-code, and there are plenty of similar examples on the internet, including in the Angular documentation:

import { Component, ChangeDetectorRef } from '@angular/core';

@Component({ /* . . . */ })
export class TimerComponent {
  constructor(private changeDetector: ChangeDetectorRef) {
    // Detach this component from automatic change detection
    this.changeDetector.detach();
    // Manually run change detection on a 5 second timer loop
    setInterval(() => {
      this.changeDetector.detectChanges();
    }, 5000);
  }
}

In the other use case, the change detection only needed to happen when an Observable from a service was updated. That implementation used a pattern similar to this pseudo-code:

import { Component, ChangeDetectorRef, OnInit } from '@angular/core';
import { Observable, Subscription } from 'rxjs';
import { StateMgmtService } from './state-mgmt.service'; // illustrative path

@Component({ /* . . . */ })
export class DataListComponent implements OnInit {

  public messages$: Observable<MySpecialArray[]>;
  public list: Subscription;
  public data: any[] = [];

  constructor(private stateMgmtService: StateMgmtService, private changeDetector: ChangeDetectorRef) {}

  ngOnInit() {
    // Detach this component from automatic change detection
    this.changeDetector.detach();
    this.messages$ = this.stateMgmtService.getSomeData();
    this.list = this.messages$.subscribe({
      next: x => {
        // . . . do some calculations against x
        this.data = x;
        // Only detect changes on next
        this.changeDetector.detectChanges();
      }
    });
  }
}

And here’s the component.html:

<!-- https://material.angular.io/cdk/scrolling/overview -->
<cdk-virtual-scroll-viewport [itemSize]="8" class="point-viewport">
  <div *cdkVirtualFor="let location of data">
    {{location}}
  </div>
</cdk-virtual-scroll-viewport>

Caveat Emptor. There are technical debt issues when you take manual control of Angular change detection. Instead of a nice, loosely coupled approach to handling UI updates, you risk creating inconsistencies between how different components handle updates, which adds complexity to your application. It can also affect how you write unit tests and can introduce unforeseen bugs. With all that said, sometimes you have to make decisions based on your unique requirements, and you have to take the best approach for your circumstances.

Performance comparison between readAsDataURL and createObjectURL

If you work with applications that handle uploading images as blobs, then you’ve most likely wondered whether it’s faster to convert the image using FileReader.readAsDataURL() or URL.createObjectURL(). For our implementations in the geographic mapping industry, we typically request dozens, hundreds, or sometimes thousands of relatively tiny map tile images, such as .png and .jpeg, in a single user session. There’s always a question of loading and rendering performance.

I was working on a related customer question and was curious which one is faster in our use cases, so I did some simple testing. A common online web map contains tiles that are 256×256 pixels and vary in size from around 2 KB to 15 KB. I assumed the results would differ for the file types and sizes we use, because I’d read that createObjectURL() is typically faster.

TL;DR

The results were surprising to me. For our use cases with relatively small .png images, I saw the following:

  • Chrome: readAsDataURL() was consistently faster uncached
  • Firefox: createObjectURL() was consistently faster uncached
  • Safari: results were inconsistent between the two

Test app

Here is the link to the test app.

Note that Safari’s performance was unpredictable: sometimes readAsDataURL() was faster than createObjectURL(). I saw the same behavior for cached and uncached tests and didn’t have time to investigate further.

YMMV!

Just a caveat: since we use lots of small images, your mileage may vary if you use larger images. I hope someone reads this, devises a test for larger images, and then shares the results.

The Tests

I tested the basic performance of pulling a map tile image from a CDN, using performance.now() to determine the time to create the image and then append it to an HTML list element. I built the code so that each loop of the test used a different image, to avoid any unintentional optimizations such as sharing an image in memory or in-browser caching. I also ran the comparative tests recursively to try to normalize for readAsDataURL() being asynchronous. I didn’t have time to investigate memory usage between the two patterns.
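
For illustration, here’s a minimal sketch of the two patterns being timed. This is not the actual test app; the function names and the 'tile-list' element id are placeholders:

// Synchronous: createObjectURL() returns a blob: URL immediately
function timeCreateObjectURL(blob: Blob): number {
  const start = performance.now();
  const img = document.createElement('img');
  img.src = URL.createObjectURL(blob);
  document.getElementById('tile-list')?.appendChild(img);
  return performance.now() - start;
}

// Asynchronous: readAsDataURL() fires onload with a data: URL
function timeReadAsDataURL(blob: Blob): Promise<number> {
  return new Promise(resolve => {
    const start = performance.now();
    const reader = new FileReader();
    reader.onload = () => {
      const img = document.createElement('img');
      img.src = reader.result as string;
      document.getElementById('tile-list')?.appendChild(img);
      resolve(performance.now() - start); // stop the clock after the append
    };
    reader.readAsDataURL(blob);
  });
}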

Note that testing with the browser console closed will be significantly faster than testing with the console open. I used a 2018 MacBook Pro with 16 GB of DDR4 RAM. I cleared the browser cache before each loop and used the test app linked above. In the code, each pattern goes through 25 loops and the results are averaged.

Test results averaged (ms):

Chrome 80           Test 1    Test 2    Test 3    Test 4
readAsDataURL       0.789     0.827     0.813     0.839
createObjectURL     1.684     1.638     1.641     1.544

Firefox 73          Test 1    Test 2    Test 3    Test 4
readAsDataURL       1.48      1.28      1.28      1.2
createObjectURL     1.12      0.84      1.04      1.0

Safari 13           Test 1    Test 2    Test 3    Test 4
readAsDataURL       0.36      0.28      0.72      0.68
createObjectURL     1.96      0.6       0.56      1.6

Conclusions

If you are only uploading a few smaller images, then wondering which approach is faster probably isn’t a good use of your time; either one is fine. If you handle hundreds or thousands of smaller images per user session, then it might be worth some testing. Based on these quick tests (more testing is needed to be truly definitive), it really depends on which browsers your users prefer. For example, if you are building hybrid apps then you have control over which browser is used. In a pure web application you typically don’t have control over what users use in the wild.

I didn’t test larger images or images of a different type, such as .jpeg. I’m curious what kind of results those might produce.

Web Worker Performance Tips 101

There are many potential benefits to using web workers. They can provide a significant web application performance boost by moving heavy-duty work off the main browser thread. It’s also true that in certain instances you may be slowing down your application in ways you didn’t expect.

Tip #1 – The cost of using a web worker is not free. JavaScript must serialize your data to pass it to a background thread, and it must also serialize data when sending it from a background thread back to the main thread. That’s two serialization processes for each round trip, and each process takes CPU cycles and time. Depending on what type of data you have and how large it is, you may be surprised how long it can take.
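
One rough way to gauge the serialization cost by itself is to time structuredClone() (available in modern browsers), which runs the same structured clone algorithm that postMessage() applies to its argument; actual postMessage() timing also includes messaging overhead. The payload below is a made-up example:

// Hypothetical payload: one million small objects
const payload = Array.from({ length: 1_000_000 }, (_, i) => ({ id: i, value: Math.random() }));

console.time("cloneCost"); // Start the timer
const copy = structuredClone(payload); // one full clone, comparable to the serialization postMessage() performs
console.timeEnd("cloneCost"); // End the timer and log the elapsed time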

Tip #2 – Not all browsers treat web workers equally. A web worker performance gain in one browser may not represent a similar gain in a different browser. If you are building a cross-browser application, make sure you specifically test and measure each of your web workers in the various browsers you will be supporting. This often takes developers by surprise. The issue is mainly due to differences in how each browser vendor implements its data serialization; if you want more information on this, it’s officially referred to as the structured clone algorithm.

Tip #3 – Measure the total time it takes to use a web worker. Call console.time() before initializing the worker and console.timeEnd() where you get a message back. You’ll want to compare these results against running the same code directly on the main thread.

Example:

    console.time("parseTestTimer"); // Start the timer

    // Initialize the worker
    var worker = new Worker("ParserWorker.js");

    // Send the data to the worker
    worker.postMessage([first.value, second.value]);

    // Get the data back from the worker
    worker.onmessage = function(event){
        console.timeEnd("parseTestTimer"); // End the timer
        // Do something with event.data
    }

Tip #4 – Even using binary transferable objects can have a cost. The Transferable pattern for web workers is designed for high performance; however, depending on what you are transferring, the browser, the browser version, and the device type (mobile vs desktop), your mileage may vary. In more technical terms, this pattern, at least in theory, uses a zero-copy, pass-by-reference approach that is intended to have very low overhead. You should definitely test the transferable objects pattern and compare timing benchmarks against the standard web worker postMessage() pattern. You might as well be thorough, especially since there are no guarantees about how each browser vendor implemented this functionality under the hood.

Example:

    // Transferable object pattern using binary data
    // The second argument lists buffers to transfer rather than copy;
    // after this call, uInt8Array.buffer is detached on the sending side
    worker.postMessage(uInt8Array.buffer, [uInt8Array.buffer]);
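
For completeness, here’s a minimal sketch of what the worker side of this exchange might look like; the parsing step is a placeholder:

    // Inside the worker script (e.g. the ParserWorker.js from the Tip #3 example)
    self.onmessage = function(event) {
        // event.data is the transferred ArrayBuffer; the sender's copy is now detached
        var view = new Uint8Array(event.data);

        // . . . do some parsing against view

        // Transfer the result buffer back to the main thread the same way
        self.postMessage(view.buffer, [view.buffer]);
    }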

Additional References:

Advanced Web Worker Performance – this post provides several important details for determining whether your web worker is providing a positive or negative performance gain.

Sample apps demoing no workers, one worker, and two workers

MDN – Structured Clone Algorithm

HTML Living Standard – Transferable Objects