Stuart Larsen
The other day I found out my site was taking more than 5 seconds to load for some clients. I was a little shocked and frustrated.
Csper's slow page speed score on Google's PageSpeed Insights
I hadn't looked at my page speed in a while. I knew it was probably slipping, but I didn't realize it had gotten that bad! Thinking back, part of the reason I didn't notice was that I had optimized the website for myself. I live in NYC, and my web server runs in Google Cloud Platform us-central1-a. I have fiber internet and a fast desktop. Unfortunately, most people do not live near my data center, with fiber, on a fast desktop.
This blog post documents the improvements I've made to speed up my website (csper.io), specifically looking at initial loading time.
Also, it's probably important to note that this work was done through the lens of a solo founder. Most of the improvements fall into the category of "what can I do now that's good enough?", to be revisited again in three months.
Csper's backend is written in Go. The frontend is in Angular. I deploy both the backend and frontend in the same Docker container onto Google Kubernetes Engine (GKE).
Google's PageSpeed Insights said images were my biggest problem, so I decided to tackle that first.
The first step was converting all my images to WebP. WebP images have better compression and are smaller at equivalent quality. Google has some nice tools for converting images. (I had to build them on my Mac. Here's a gist. I like to keep scripts like that in my makefile in case I have to move to a new system.)
Thankfully all my images are in one folder (technically two), so it was easy to convert them all to webp:
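The conversion loop is roughly this (a sketch; cwebp is the converter from Google's libwebp tools, and the folder path is a placeholder for my setup):

```shell
# Convert every png/jpg in the assets folder to WebP at quality 80.
for f in src/assets/images/*.{png,jpg,jpeg}; do
  [ -e "$f" ] || continue          # skip unmatched globs
  cwebp -q 80 "$f" -o "${f%.*}.webp"
done
```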
I also needed to resize images. Previously I displayed the same image no matter the window size. (Technically I did some ad hoc resizing if an image was egregiously big.)
But now I just create a couple of versions of each image (cheap storage ftw) and use the correct size depending on what I think the window size will be.
Sadly, gif2webp doesn't support the -resize parameter. So I found another tool (gifsicle) to first resize the GIFs and then convert them to WebP.
In a perfect world the script should check if the image already exists, but sometimes I modify the base image and want all the versions to change.
So now when I want to add a new image, I just place it in the folder, run the script, and go back to whatever I was doing. The script takes 30 seconds for my 200 images.
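Putting it together, the script looks something like this (a sketch under my assumptions: 200/600/900 are the widths I settled on, and the paths are placeholders for my actual layout):

```shell
# Generate 200/600/900px-wide WebP versions of every still image.
for f in src/assets/images/*.{png,jpg,jpeg}; do
  [ -e "$f" ] || continue
  name="${f%.*}"
  for w in 200 600 900; do
    # -resize W 0 scales to width W, preserving aspect ratio
    cwebp -q 80 -resize "$w" 0 "$f" -o "${name}${w}.webp"
  done
done

# gif2webp has no -resize, so resize with gifsicle first, then convert.
for f in src/assets/images/*.gif; do
  [ -e "$f" ] || continue
  name="${f%.*}"
  for w in 200 600 900; do
    gifsicle --resize-width "$w" "$f" -o "${name}${w}.gif"
    gif2webp "${name}${w}.gif" -o "${name}${w}.webp"
  done
done
```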
All static assets were originally being served from my Go servers. This is not ideal for a couple of reasons, but the big one for me is scaling characteristics. So instead I created a CDN on Google Cloud. I watched this Cloud Next '19 CDN video and just followed along.
I can't remember the exact steps, but it took maybe 10 minutes to set up. The steps went something like this:
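Reconstructing from memory, it was roughly the following (the bucket and backend names here are placeholders, not my real config):

```shell
# 1. Create a bucket for static assets and make it publicly readable
gsutil mb -l us-central1 gs://csper-assets
gsutil iam ch allUsers:objectViewer gs://csper-assets

# 2. Create a backend bucket with Cloud CDN enabled
gcloud compute backend-buckets create assets-backend \
    --gcs-bucket-name=csper-assets --enable-cdn

# 3. Wire it into an HTTPS load balancer (url map + target proxy +
#    forwarding rule), then point a DNS record for assets.csper.io
#    at the load balancer's IP
```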
Wait a few minutes, and then the CDN should be live. Then I moved all the images to the bucket:
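Something like this (the local path is a guess at my layout):

```shell
# Upload all the WebP images with a long cache lifetime
gsutil -m -h "Cache-Control:public, max-age=86400" \
    cp src/assets/images/*.webp gs://csper-assets/
```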
Now all assets are cached around the world. Neat. https://assets.csper.io/csper.webp.
Then I used the advanced programming tool known as 'Ctrl-F' to find all the places I used images and replaced them with the new URLs. For the smaller images I used the "200" and "600" sizes. https://assets.csper.io/csper200.webp.
I didn't take a screenshot of the page speed improvement, but I think correctly handling images cut off like 1.5 seconds from page load.
For a while I've been looking at Angular Universal. It could help with both my initial page speed and SEO (search engine optimization). The SEO is important because most of my paying customers come from Google Search, so being SEO friendly is something I've learned to be very considerate of.
But SEO can be a little tricky with tools like Angular.
(A quick introduction for those who aren't as familiar with Single Page App (SPA) frameworks like Angular.) When you visit an Angular webpage like Csper, you get an almost empty index.html file (check it out at view-source:https://csper.io/; you'll need to copy and paste that into Chrome, since for browser security reasons you can't click on view-source links).
You actually get the exact same index.html at any URL on Csper. So https://csper.io/blog/csper-page-speed (this page) loads an almost empty index.html file. That almost empty index.html includes a script tag for my Angular JavaScript app. Then Angular/my app loads, inspects the current URL, and bootstraps the content for that page.
Why bother with that? Angular/React are very quick to develop on, they make code reuse very easy, and they help organize complexity within larger dynamic applications. But one downside is that web crawlers don't like running JavaScript. These crawlers include search engines and social media cards. (Some search engines supposedly will render the JavaScript, but my SEO got way better when I started using a prerender service.)
As a "good enough for now" solution, I've been using a pre-render service (prerender.io). Google calls this Dynamic Rendering. So when I see that a crawler is accessing one of my pages (based on the user agent), I call out to prerender.io, which grabs the page and converts it to plain HTML, and then I send that instead of my normal index.html file.
It works, but it's not something I'm a fan of. I have to refresh the prerender cache whenever I modify a page, and it's a point of failure. Although I am thankful it exists for now.
(It frustrates me that crawlers don't run JavaScript to experience the true user experience. It seems like that should be their job. But instead website owners have to bow down to them and implement this crap. There are too many other things to do, though, so you pay the tax, implement it, and move on.)
From https://developers.google.com/web/updates/2019/02/rendering-on-the-web
Angular supports something called Server Side Rendering through Angular Universal. With it I can set up a little Node.js server that renders the requested web page server-side for each request and sends the result to the client, so that the initial page response is already populated with the proper content without needing JavaScript.
This is nice for SEO, and also for low-compute devices (mobile) or high-latency connections (the page is pre-computed, and there are fewer round trips).
I spent like two hours trying to get Angular Universal to work, but ultimately I git reset out of there. It was tricky because you can't use browser APIs within Angular Universal (the rendering happens in a Node environment, not a browser, so it's missing certain APIs like window). Both my code and my dependencies made fair use of window. I converted a couple of the usages (or verified the platform ID) but then gave up. I was also rushing through it, and it looks like a project that needs more than some skimming of the docs and praying.
Prerender is already getting the core job done, in the future I'll maybe try again. We got bigger fish to fry.
A few days ago I bumped from Angular 8 to 9. It was refreshingly easy. I sadly didn't record how much this sped up my website (if at all). (I wasn't planning on writing a blog post.)
To update from 8 to 9 was as easy as:
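If memory serves, it was just the standard update command (check the official Angular update guide for your own app's migration steps):

```shell
ng update @angular/cli @angular/core
```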
Angular 9 uses a new engine called 'Ivy', supposedly it's faster. I have no data either way.
Csper is both a website and a webapp, meaning it has a bunch of static pages (like the home/blog/docs), along with the full paid web application (used for aggregating/analyzing/viewing millions of content security policy reports).
Originally when you visited csper.io, even if it was just the home/docs/blog, you got the JavaScript bundle for everything. Just one giant bundle. This also included third-party libraries for charting, icons, time parsing, components, and other stuff. That's not ideal. Most people don't need all that JavaScript.
Angular supports something called lazy loading. This splits the code into little chunks that are loaded when they are needed.
Now when you visit Csper's public facing pages (such as this blog post), you get a smaller public facing bundle. Then when you navigate to the more complex part of the website, you get the full web app bundle. (You can verify this by logging into Csper, opening the network tab, then navigating to a route that starts with /p/ or /org/ to see the additional bundles.)
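In Angular 9 syntax, a lazy route looks roughly like this (the module names here are hypothetical, not my actual ones):

```typescript
// app-routing.module.ts (sketch) -- the 'p' routes only download
// their bundle when someone actually navigates there.
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  // Public pages ship in the small initial bundle
  { path: '', loadChildren: () =>
      import('./public/public.module').then(m => m.PublicModule) },
  // The heavy web-app bundle loads lazily
  { path: 'p', loadChildren: () =>
      import('./portal/portal.module').then(m => m.PortalModule) },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```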
To split the code into bundles I had to move things into Angular modules. It was a little confusing knowing what's supposed to go in providers vs. declarations vs. imports, etc. But after reading the helpful error messages I got it all sorted out. It took maybe 2 hours to sort all the components/pipes/services into the correct modules (about 75 different components). (I had to create a SharedModule too for common utilities.)
I think moving to lazy-loaded modules cut about 0.5 seconds of page load time on mobile clients.
Angular/Webpack has a cool feature where you can visually inspect your modules.
To see the module sizes you run:
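The usual recipe looks like this (the dist path depends on your project name):

```shell
# Build with per-bundle stats, then open the visualizer
ng build --prod --stats-json
npx webpack-bundle-analyzer dist/csper/stats.json
```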
Webpack Bundle Analyzer for Csper
From there I was able to see what was taking up space in the bundles. There were two non-essential big packages: lodash and momentjs.
It turns out lodash is a beefy boy! And I was only using it in one place! (orderBy).
Instead of using the entire lodash bundle you can choose to import only one lodash function at a time.
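For example, the per-function import path looks like this (the data here is illustrative, not from my app):

```typescript
// Before: drags in all of lodash
// import * as _ from 'lodash';
// const sorted = _.orderBy(reports, ['count'], ['desc']);

// After: only the orderBy module (and its internal deps) get bundled
import orderBy from 'lodash/orderBy';

const reports = [{ count: 2 }, { count: 5 }];
// Sort descending by count
const sorted = orderBy(reports, ['count'], ['desc']);
```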
I forget how much this took off the bundle size, but I think it was like 30kb after gzip?
moment.js is also a beefy boy! It supports a bunch of locales, which is great, but heavy.
It would have been a lot of work to completely remove moment.js from my app. Thankfully, the only place I used moment on the public pages was the blog, for displaying "posted 5 minutes ago". So instead I found an Angular pipe that is much smaller and replaced those usages in the public bundle.
So now moment.js is only loaded once you go into the full web app portion of the website.
Google PageSpeed Insights was telling me that I had unused CSS. I was hoping this was something Angular would do automatically, but it does not. After running into the problem below, though, I realized it's good that Angular doesn't try to do it automatically.
Anyway, to remove unused CSS I used a tool called purifyCSS. I found a gulpfile somewhere (I can't remember where, but thank you to whoever posted it):
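It looked roughly like this (the paths are my guesses at the originals; gulp-purifycss is the plugin wrapping purifyCSS):

```javascript
// gulpfile.js -- strip CSS selectors that no built .js/.html file uses
const gulp = require('gulp');
const purify = require('gulp-purifycss');

gulp.task('purifyCSS', () =>
  gulp.src('./dist/*.css')
    .pipe(purify(['./dist/*.js', './dist/index.html'], {
      minify: true,
      rejected: true, // log the selectors being removed
    }))
    .pipe(gulp.dest('./dist/'))
);
```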
Now I run gulp purifyCSS after I do an ng build --prod (as a Makefile command). purifyCSS says it's taking out about 11% of the CSS. Neat. You can use the rejected option above to see what it is removing.
I use prismjs for highlighting code in the blog/docs. But the raw HTML content for the blog/docs comes from my golang server (I built a tiny little CMS). This means the purifyCSS gulpfile doesn't know about the classes my CMS emits, specifically the prism code highlighting ones, so purifyCSS was stripping out the prismjs CSS. Thankfully purifyCSS has a whitelist; I added those classes to it and it's working again.
My .js/.css bundles were also served from my golang server. It's not ideal, but it was working mostly fine. If there was ever a spike in traffic, the container could just scale thanks to the k8s auto-scaling group, though as far as I know it's never had to. (Off topic, but when I first started the startup, I was super worried about handling A LARGE AMOUNT OF TRAFFIC. Because obviously my startup would be a huge success. That doesn't happen. It's a slow ramp.)
But a large number of my customers are not based in the U.S., so serving content from my Docker containers in the USA is not ideal for everyone. It's time to move to a CDN and cache at an edge closer to everyone.
At first I wanted to put my Angular assets (.js/.css) into a bucket and serve them from the same csper.io domain. This is normally possible with the Google Cloud load balancer by specifying path matching: anything starting with /api would go to my Go server, and everything else would go to the bucket.
But I configure my Google Cloud load balancer using Kubernetes, and after I made the changes, Kubernetes overwrote them.
There didn't seem to be an easy way to specify buckets in GKE Kubernetes YAML files, so I gave up on that avenue.
Instead I decided to just use the assets CDN I created for images. I would load all the .js and .css files from there, and continue loading my index.html from the golang server.
Not perfect, but good enough for now.
(Also, now that I'm writing this blog post, I realize that I currently need to serve my index.html file from my golang server so that I can detect crawler user agents and pre-render for SEO. So it's somewhat good that it didn't work.)
After some googling I found out I can instead specify a deploy-url when building with Angular. This will use absolute URLs for the script and style tags within the index.html file.
And now when I build my Angular app, I just copy the files to the bucket.
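Roughly like this (the bucket name and dist path are placeholders for my setup):

```shell
# Build with absolute asset URLs, then upload the bundles.
# -z gzip-compresses .js/.css on upload and sets Content-Encoding.
ng build --prod --deploy-url=https://assets.csper.io/
gsutil -m cp -z js,css dist/csper/* gs://csper-assets/
```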
That -z option is important for gzipping the files before upload. At first I deployed to staging without the compression and my page speed jumped up by 3 seconds. "Wow, CDNs are slowwww," I thought, and then I realized it was because I forgot to compress the files. Whoops. Those two characters are the difference of three seconds in initial page load.
There were some other minor improvements:
I should be using responsive images. I currently show the same image no matter the window size. I kind of fudge this by using Bootstrap grids and mostly displaying images in half-screen segments on desktop and full screen on mobile, so they're roughly the same size. But it looks like you can specify a srcset on images and the browser will pick out the correct one.
I kind of arbitrarily chose my image breakpoints as 200px, 600px, 900px. I want to do some research on the proper sizes first and then add srcsets.
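For reference, a srcset sketch using my existing sizes (the sizes attribute values are guesses, since I haven't researched the breakpoints yet):

```html
<img src="https://assets.csper.io/csper600.webp"
     srcset="https://assets.csper.io/csper200.webp 200w,
             https://assets.csper.io/csper600.webp 600w,
             https://assets.csper.io/csper900.webp 900w"
     sizes="(max-width: 768px) 100vw, 50vw"
     alt="Csper screenshot">
```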
Instead of Angular Universal I want to look into static SSR for my public facing pages. I don't have that many public pages, and they don't change more than maybe once/twice a week (new blog posts, etc). So I could build a pre-rendered copy of all of them whenever I make a change, and put those on the CDN. Then it would be lightning fast for first page load, and SEO friendly.
This is probably the big step I need for a score of 99 on mobile.
Google is still complaining about the initial CSS being too big. I couldn't find a great solution for this in Angular that I could do quickly; eventually I'd like to revisit it. This would also be fixed by moving to static SSR or CSR, so I'm delaying it.
I'll probably tackle this in a few weeks, but I want to learn more before I make any big decisions. The API endpoints and data should live closer to my customers.
I think it's possible to have my k8s containers sit in data centers all over the world (instead of just us-central), and I can also shard my MongoDB cluster based on location, so that Csper's API is fast no matter where in the world you are.
Right now it's using Google Cloud's backbone, so it's decent, but moving closer to individual customers would be nice.
In the end, the page speed should be a little better for everyone.
Final Score for Desktop and Mobile
Well, that's the changes for now. If you have any suggestions I'd love to hear them! stuart at csper.io.
Cheers,
Stuart