Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
Setting Up End-to-End Testing with Cypress
After musing on the design benefits of testing, I'm rolling up my sleeves! I'm diving into setting up Cypress with my blog.
This was also inspired by a message I received recently about my RSS feed being down! After hastily writing a fix, I wanted to start setting up measures to keep a major bug from flying under the radar.
Cypress
A post for another day is the different types of tests in software. The quick rundown, from atomic to all-encompassing, goes like this: Static Tests, Unit Tests, Integration Tests, and End-to-End Tests.
Cypress is on the farther end of that spectrum, encompassing End-to-End testing. Cypress will spin up a browser, walk through your application, and use the site as a user would.
There's a suite of tools and assertion methods included from the popular Mocha and Chai testing frameworks.
For my blog, I don't have many complex features. It's largely a static site. If we were to define the happy path, it would mostly entail clicking links and reading articles. So asserting that will be as simple as gathering links and checking their status codes on request.
Cypress setup is laid out nicely in their docs. For a few more handy utility methods, you can also install the Cypress Testing Library:
$ npm i -D cypress @testing-library/cypress
Note the -D flag. For both Cypress and any testing packages, you'll want to save these as dev dependencies. We're not planning on shipping any of these modules with the app, so it's an important distinction.
Writing Tests
After initializing the app with the following command:
npx cypress open
Cypress will lay out boilerplate files and directories for the app. I'm going to add a cypress/e2e/verifyLinks.js file for my own tests. Here's what it will look like:
import { BASE_URL, BASE_URL_DEV } from '../../lib/constants';
const baseUrl = process.env.NODE_ENV === 'production' ? BASE_URL : BASE_URL_DEV;
describe('RSS', () => {
it('Loads the feed', () => {
cy.request(`${baseUrl}/api/feed`).then((res) => {
expect(res.status).to.eq(200);
});
});
});
To break down a few of the testing-specific methods:
- describe is a wrapper for related tests.
- it is the keyword for a single test. Essentially saying "Test this procedure."
- request is doing just that: sending an HTTP request.
- expect is our assertion. We are expecting our status to equal 200 for this link.
A simple example, but once this is up and running, it will verify that the request is indeed returning an "OK" status, and we can rest all the more easily tonight.
And that's enough to get started! Next steps would be to write a few more critical tests. Perhaps crawling the page for links and verifying that they all return 200. And with more tests, integrating into a CI workflow would make the app all the more secure without the manual checks in our local environment. More to come!
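As a rough sketch of what that link crawl could look like (the file name and selector here are hypothetical, and this assumes baseUrl is set in the Cypress config):
// cypress/e2e/crawlLinks.cy.js
describe('Internal links', () => {
  it('responds with 200 for every link on the home page', () => {
    cy.visit('/');
    cy.get('a[href^="/"]').each(($a) => {
      // Request each internal link and assert an OK response
      cy.request($a.prop('href')).its('status').should('eq', 200);
    });
  });
});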
Font-Display and CLS
Overview
I've written before on how tricky it can be dealing with external fonts and maintaining a healthy CLS score. This week, I had the chance to revisit it with the suggestion of exploring the particular load settings.
The Fastest Google Fonts
Below I'm snipping Chris Coyier's sample from Harry Roberts' amazingly thorough analysis on font performance. The idea is that you can render text immediately and load fonts quickly by adding preload and preconnect link attributes to your sources.
<link rel="preconnect"
href="https://fonts.gstatic.com"
crossorigin />
<link rel="preload"
as="style"
href="$CSS&display=swap" />
<link rel="stylesheet"
href="$CSS&display=swap"
media="print" onload="this.media='all'" />
In my use case, this is exactly what was already in place.
Variables - Preload and Font-Display
Breaking down the different parts, we have a few variables to play with:
- Links with and without "Preloading"
- Font-Display settings "Block" and "Swap"
Preloading does not block, but marks the asset as a high priority and schedules it early on in the page lifecycle. More details available on MDN
Preconnect is similar. It lets the browser know that resources will be needed from this source early on.
As per Google's Definition:
Block, the browser default, leaves a 3 second window for the font to be fetched and loaded before "inking" the text. (Text is rendered, but in "invisible ink")
Swap renders text immediately and then swaps the font style on load.
Experimenting
Even with preloading and preconnecting in place, there was still enough delay that the CLS score was catching the "swap" on the page.
So I toyed around with the above variables, mixing and matching preloading with block and swap.
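Testing the block behavior, for instance, only meant changing the display parameter in the stylesheet URL (a sketch reusing the $CSS placeholder from the snippet above):
<link rel="stylesheet"
href="$CSS&display=block"
media="print" onload="this.media='all'" />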
Here's what I found: there was a noticeable drop in both of the options that use the font-display: block; setting! From a 0.003 range to a 0.002 range on a page with mostly hero and navigation text.
Of course, there's a tradeoff: the CLS score here is improved as a result of delaying text rendering. It's only a 3 second window, but it's common practice now to aim for 2 second load times for webpages. At the very least, in this case, a backup font is loaded after that time elapses.
I'm waiting to hear back on what the field data reports with this change, but I'm hoping this is a move in the right direction!
Testing Software for the Same Reason You Write Notes
My work as of right now isn't largely Test Driven. The scale I work at isn't as dependent on tests; the codebases are small enough that I'm not as concerned about glossing over any unwanted side effects.
However, I can't deny the benefits of testing.
Of course, having tests in place helps solidify the sturdiness of your application. There's a certain reassurance that comes with shipping code that is backed by passing tests. And while developing, there's a much tighter feedback loop for the covered areas of your code when refactoring.
Dave Thomas and Andy Hunt in The Pragmatic Programmer make a much more interesting case: Writing tests leads to writing better code. The gist being that testable code is well-organized and thought-out code.
I think, largely, there's a parallel here with writing blogs/articles/notes as a mode of thinking.
I mentioned in my 2022 reflection that "Articles help solidify knowledge." Put another way, they bring an organization to the otherwise sloshy thought process that our human brain operates under.
A quote of a quote: Chris Coyier links Marc Brooker on the Magic of Writing with this gist:
I find, more often than not, that I understand something much less well when I sit down to write about it than when I’m thinking about it in the shower. In fact, I find that I change my own mind on things a lot when I try write them down. It really is a powerful tool for finding clarity in your own mind.
So here's my understanding:
Even if you aren't dependent on the tool of running tests locally and in a CI/CD integration, there are still design benefits to writing tests. Code that doesn't have tests may not be testable, which lends itself to a more interdependent design rather than a modular one.
2022
I'm usually highly preoccupied with today and tomorrow — I easily forget yesterday. So, I'm going to counter that by being intentional here about taking a pause, reflecting back on what's been a really exciting year.
Software
I released a video game with my sister! It was a big stretch, it was fun, it was a real challenge at times, and it was ridiculously rewarding! I'm proud of us for making something so expansive. And I'm ready to work on a much smaller and more divergent scale.
I'm continuing the cycle of learning, building, and serving at AptAmigo! Nothing but good there still, I'm very excited to continue contributing!
My Personal Site
This one! I'm happy to have launched my site this year. I've really enjoyed dipping into the codebase with new feature ideas. This site has been a fun sandbox! There's also something to be said about having a home on the internet. Maybe that's a bit too romantic, but I enjoy having a sort of stage for all kinds of creative projects. A paradoxically intimate and wildly public one.
Writing
This year I wrote 51 articles on my blog. Most of them technical, a few on creativity, some on books, and a handful just for fun.
If you've had the pleasure of writing or creating on the internet, you know the intrinsic value of it. I'm so happy to find that the reward of a digital garden is in the tending. Articles help solidify knowledge, they give platforms for exploring, and it's a handy way to help others with their problems by sharing what I learn. I know many people say that the benefit of writing on the internet is the connections between ideas that come over time, and I'm excited to start seeing those insights come together!
Music
I wrote about how it was surprisingly hard to get started writing music. Even though I've performed for most of my life, writing music had been a dream for just as long. This year I broke through the atmosphere! I have been writing and making music all year. I've learned guitar, piano, and basic DAW skills. With 12 albums out and a few more on the way already, I'm really excited to continue exploring this new vehicle of expression.
Books
I tried not to read so much this year and failed wonderfully. I dipped into books on coding, creativity, personal psychology, and fiction. You can see what all I read over on My Reading Year.
Dallas
Miranda and I moved to Dallas this year! We both love it here so far! We're not entirely sure where we'll be in a few years, but it's safe to say we're happy to still be in Texas and in a place where we have friends both old and new.
I Turned 30
Now halfway through my first year of a new decade, I can distinctly feel the difference between phases of life. IYKYK. If you don't, I wrote a few lessons from my time in my 20s.
2023
These days, I'm wary of specific, long term goals. But I do have horizons I'm stepping towards. And honestly, they look a lot like the paragraphs above! More music, software, connection, and writing.
I'll see you there!
Hosting an Express Rendered React App on Elastic Beanstalk
I'm picking up the story from this article on moving my old projects from Heroku to AWS. Previously, I covered how to set up an Express app in Node for deployment to Elastic Beanstalk. This time, we're going to look at wrangling a React app served from an Express app all in one application for Elastic Beanstalk.
Config Files
There are a few config files we need to set up for production on Elastic Beanstalk:
proxy.config
When setting up our Express app to serve React, our server port may have been 4000 or 5000. But when setting up an Express app for Elastic Beanstalk, we used 8081, the port Elastic Beanstalk's Nginx proxy expects our app to listen on, in our CRA proxy. We did this to avoid the Nginx 502 status code. So here, my React proxy will utilize that same port:
// client/package.json
{
...
"proxy": "http://localhost:8081/",
}
There's a bit of murky documentation and understanding in this realm, but there's another way to adjust this for production through a .ebextensions/proxy.config
file. If you want to run port 4000 for the server in local development and configure your application to route to the same port in Elastic Beanstalk, you can do just that.
My solution required both setting my port to 8081 and including the proxy.config file, though my understanding is you should be OK with just one or the other. In my case, I was able to test locally with the port set to 8081, so I had no qualms using it, though I know it's meant to explicitly stay open for other purposes. This portion will be a choose your own adventure!
You can follow this AWS doc on setting up the proxy.config with this added line:
// .ebextensions/proxy.config
files:
/etc/nginx/conf.d/proxy.conf:
...
content: |
upstream nodejs {
server 127.0.0.1:8081;
keepalive 256;
}
One other portion is important for serving React static files: creating an alias for our build folder.
By default, AWS will search for static files at the root. The path will look like this:
/var/app/current/static;
Ours will be nested within our application, though. So we'll want to add this adjustment as well:
location /static {
alias /var/app/current/client/build/static;
}
Procfile
Supposedly, Elastic Beanstalk knows to use npm start by default when working with a Node environment.
Continuing with the murky documentation from earlier, this wasn't the case initially when I first uploaded the project. Adding a Procfile seemed to move the needle for me. It's a simple addition to the root of the project:
// Procfile
web: npm start
YMMV here; most dialogue around this actually pointed to it as an outdated solution. It may work for your case, however, as it did for me!
Deploy!
Just as before, you should be set to deploy either through the EB CLI or from the AWS Console. This guide mentioned previously will get you through the console.
And that's it! We're set to work with React and Express side-by-side within one host in the cloud!
Rendering a React App from an Express Server
I'm revisiting my older portfolio pieces from a few years ago with a new understanding. One of my favorites is a MERN stack app where React is served from the Express server. Here, I'll share the birds-eye view on setting up these applications to be served from the same host.
Why One App?
The common way to serve a stack with React and Express is to host them separately. Within the MERN stack (MongoDB, Express, React, and Node), your database is hosted from one source, Express and Node from an application platform, and your React app from another, JAMstack-friendly source such as Netlify or Vercel.
The benefits of that are having systems with high orthogonality, as Dave Thomas and Andy Hunt would put it in The Pragmatic Programmer. With one application purely concerned about rendering and another handling backend logic, you leave yourself with the freedom and flexibility to swap either out without much fuss.
Segmenting the application in this way also allows you to serve your API to multiple platforms. You can support a web client as well as a mobile app with the same API this way.
However, it gets tricky if you begin to incorporate template-engine rendered pages from your Express app in addition to a React application. This may be the case in a site that you want to optimize with templated landing pages, and that also hosts a dynamic web application following user login.
Following the curiosity of exploring the latter option, I'm going to dive into deploying my Express application with a React app from the same application server.
File Structure
When starting from scratch, you'll want to initialize your Express app at the root, and then the React app one level lower, within a client
folder. It will look something like this:
.
├── .ebextensions/
├── modules/
│ └── post.html
├── client/
│ ├── build/
│ ├── public
│ ├── src
│ ├── package.json
│ └── package-lock.json
├── models/
├── routes/
├── modules/
├── package.json
├── Procfile
├── README.md
└── server.js
You'll notice two package.json
files. Be mindful of what directory you're in when installing packages from the CLI.
You'll also see two items specific to our AWS setup: Procfile and .ebextensions/. More on those later!
Proxy Express
In the React app, I have requests that use same-origin URLs like so:
axios
.get(`/api/${params}`)
.then((movies) => {
const moviesObj = {};
movies.data.forEach((movie) => (moviesObj[movie._id] = movie));
this.setState({
movies: moviesObj,
});
})
.catch((err) => console.log(err));
Note the /api/${params} URL. Since we're serving from the same source, there's no need to specify another origin in the URL.
That's the case in production, but we also have an issue locally. When we run Express and React in local development, it's typically with these npm scripts:
// package.json
{
...
"scripts": {
"server": "nodemon server.js",
"client": "npm start --prefix client",
"dev": "concurrently \"npm run server\" \"npm run client\"",
}
...
}
A couple of less familiar pieces to explain:
The --prefix client flag simply tells the terminal to run this script from the client/ directory, since that's where our React app is located.
Concurrently is a dependency that does just that: it allows us to run both servers simultaneously from the same terminal. You could just as easily run them from separate terminals.
Either way, we have an issue of React listening on port 3000 and our server on a separate port like 4000.
We'll navigate both the local development and production issue with a proxy.
All we need to do is add a line to our React package.json.
// client/package.json
{
...
"proxy": "http://localhost:5000/",
}
This tells our React scripts to redirect any API calls to our server running locally on port 5000.
Serving React Static Files
We're in the home stretch! Now all we need to do is adjust our Express server to render React files when the appropriate routes are requested.
First, build the React application.
$ cd client && npm run build
You can set up a build method in your npm script to run this when you deploy the application. That will likely be preferable since you'll be making updates to the React app. For simplicity here, this should get us up and running, though.
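One possible shape for that script (a sketch; the script name and wiring will depend on how you deploy):
// package.json
{
...
"scripts": {
...
"build": "npm install --prefix client && npm run build --prefix client"
}
}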
Doing this will populate your client/build
folder with the assets for your React app. Great!
Now to wire our Express app to our static files.
There are two approaches to do this between production and development, but ultimately the core approach is the same. We need to add these key lines near the end of our application:
// server.js
...
// Serve Static Assets if in Production
app.use(express.static(path.resolve(__dirname, 'client', 'build')));
app.get('*', (req, res) => {
res.sendFile(path.resolve(__dirname, 'client', 'build', 'index.html'));
});
// Ports
...
We're serving the static files from our build folder, and then any request that doesn't match our API routes higher up in the file will serve the index of our React app instead.
This will work fine in production when we configure our host to the same source. This, however, will override our concurrently running React app on port 3000 in local development.
To mitigate this, we just need to add a conditional statement:
// server.js
if (process.env.NODE_ENV === 'production') {
// Set static folder
app.use(express.static('/var/app/current/client/build'));
app.get('*', (req, res) => {
res.sendFile(path.resolve(__dirname, 'client', 'build', 'index.html'));
});
}
Easy! If in production, reroute requests to the index of our React app.
Wrapping Up
And we're all set! So we've covered how to allow Express and React to communicate from the same source. Next we'll look at how to configure this application for deployment to Elastic Beanstalk on AWS.
Hosting a Node Express App on AWS Elastic Beanstalk
Heroku has discontinued their free hosting tier for web applications. A major disappointment for many a side-projector! Several of my first web apps were still being hosted on Heroku, so it was time to re-evaluate.
There are a few other options. Render and Digital Ocean have low cost options. As you can tell by the title of the article, though, I felt it was time to explore hosting on AWS.
Elastic Beanstalk
There are a few options for hosting:
- Running the server as a Lambda function
- Hosting the server on an EC2 instance
- Managing load balancing and scaling with Elastic Beanstalk
For those unfamiliar:
- Lambda functions are AWS's solution for Serverless Functions
- EC2 (Elastic Compute Cloud) is a hosting platform for cloud computing
- Elastic Beanstalk is an orchestration service that wrangles EC2, S3, CloudWatch, Elastic Load Balancing, and many other good-to-haves in hosting an application
So, maybe it's unfair to say these are different options: technically, Elastic Beanstalk will make use of an EC2 instance with several other goodies baked in to handle scaling my apps up and down as needed.
I'm throwing in running the server as a lambda function as a fun idea. I'm not caching on the server directly, so it's potentially an option. However, I wanted to start with a more direct and traditional approach so that I have the experience for larger applications that require a regularly running server.
For quick implementation and a nice learning opportunity, I opted for Elastic Beanstalk.
Code Pipeline
My CI/CD needs are pretty minimal for my old portfolio projects, but nonetheless, I like being able to push to GitHub and then let the deploy happen automatically. So I'm setting up my EB applications with CodePipeline connected to my repositories as well.
Set Up for AWS
There are a few things we'll want to do to prepare for deploying to AWS:
- Match the port number to the internet port
- Ensure the version of Node is within AWS's accepted range
- Generate static files
- Alias the route to our static files
- Include a Procfile for defining the start script
In another article, I'll go into the details of generating and routing to our static files. For now, let's look at what getting an Express server that renders templates would look like with the first two steps.
Match Port Number
If we use a typical local-development fallback port (3000, 5000, or 7000), we'll run into an Nginx error with status code 502: Bad Gateway. To prevent this, we have to set our default port number to 8081, the port Elastic Beanstalk's Nginx proxy forwards traffic to.
Depending on how your Express app is structured, this can be updated in the bin/www
file:
// bin/www
/**
* Get port from environment and store in Express.
*/
var port = normalizePort(process.env.PORT || '8081');
app.set('port', port);
Or in your server file directly:
// server.js
const port = process.env.PORT || 8081;
app.listen(port, () => console.log(`Server started on port ${port}`));
Match Node Version
The apps I worked with were from several years ago. And things have changed! I had to bump up the Node version in multiple apps to comply with the AWS environment. This is done easily in the package.json file. It's worth verifying that your app still runs after making these changes and switching your local Node version with Node Version Manager:
// package.json
{
...
"engines": {
"node": "^16.0.0",
"npm": "6.13.4"
}
}
Deploying
You have a couple of options for deploying: Downloading the EB CLI, or using the web console. The web console is fairly straightforward and allows for easily bouncing between code pipeline, your application, and the environment generated from there. This guide will get you there.
More To Come
So that's getting an Express app up on Elastic Beanstalk! Next time I'll talk about bringing in React within the same project and the pitfalls to watch out for.
Amazon Virtual Private Clouds
I'm continuing research on cloud architecture this week. Here are some notes on Virtual Private Clouds (VPCs). In these notes, I'll cover what they are, why to use them, and the parts that make up a VPC.
VPC Overview
A VPC is a private sub-section of AWS that you control, where you can place your resources (EC2 instances, S3 buckets, databases). You have full control over who has access to these resources, and you define the subnets and IP address ranges within it.
Similar to a Facebook profile - a VPC allows you to control who can view your photos, posts, and videos.
The advantage of VPC within a public cloud provider is mainly enhanced security. You can be explicit about what resources are made publicly available, and what resources have strict access. An example would be making a web server publicly available through HTTP and HTTPS protocols, while limiting access to the connected database.
VPCs also allow you to specify a unique IP range for your application. Without a VPC, your IP range may be shared with other services on a public cloud provider. Should one of the other applications be flagged as malicious, a DNS blocklist may lump your application in with any access restrictions.
Home Network Analogy
VPCs can be likened to a home network. In your home network, you have:
- Wires that connect to the internet.
- A modem that is the gateway to the internet.
- Wires connecting the modem to the router.
- A router that connects to other devices on the network, and connects to the modem for internet.
- Computers / cell phones.
The home private network is STILL private, even though it's connected to the internet.
The differences between removing the router or the modem from the system are:
- The router can still connect to other devices if the modem goes down.
- If the router goes down instead, no connections are possible, even if the internet connection is still coming in.
The external data flow is as follows:
Internet => Modem => Router / Switch => Firewall => Devices
For VPC's, the data flow is:
Internet => Internet Gateway => Route Table => Network Access Control List (NACL) => EC2 instances (Public) => Private Subnets.
With an analogy set, let's look at the different parts.
Internet Gateways (IGW)
These are a combination of hardware and software that provides your private network with a route to the world outside of the VPC. (Horizontally scaled so you have no bandwidth strain)
These get attached to VPCs. Without one, your VPC can communicate internally, but not with the internet.
Worth noting:
- Only one IGW can be attached to a VPC.
- While there are active AWS resources attached to the VPC (such as an EC2 instance or RDS database), you cannot detach the IGW.
Route Tables (RT)
These are rules that determine where network traffic is directed.
You'll have a Main route table, and possibly supplemental route tables.
- You can have multiple active route tables in a VPC.
- You can't delete a route table with active "dependencies" (associated subnets).
You can detach the IGW from the VPC, and then the route will lead to a "black hole" as AWS puts it.
Network Access Control Lists (NACL)
NACLs are an optional layer of security that act as a firewall controlling traffic in and out of subnets. They have both inbound and outbound rules. All traffic is allowed by default.
Rules are evaluated by rule number, from lowest to highest. The first rule that matches the traffic type is applied immediately, regardless of the rules that come after.
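As a hypothetical illustration, consider a NACL with these inbound rules:
- Rule 100: ALLOW HTTP (port 80) from 0.0.0.0/0
- Rule 200: DENY HTTP (port 80) from 0.0.0.0/0
- Rule *: DENY all traffic
An inbound HTTP request matches Rule 100 first and is allowed; Rule 200 is never evaluated.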
The wildcard symbol * is a catch-all. If we don't explicitly allow traffic, it's denied by default.
Creating a new Network ACL will deny all traffic by default. You add rules from there.
Different subnets can have different NACLs.
You can control allowed protocols. If hosting a web server, you may only want to have HTTP and HTTPS.
A subnet can only be associated with one NACL at a time.
Once resources are inside, AWS resources may have their own security measures (called Security Groups). EC2 instances, for example, can set their own limits on what protocols they allow in.
Subnets
Definition: A sub-section of a network. Includes all the computers in a specific location.
A loose analogy - If your ISP is a network, your home is a subnetwork.
Subnets may be named like the following group:
- us-east-1a
- us-east-1b
- us-east-1c
- us-east-1d
Each is within a separate availability zone. This helps create redundancy, availability, and fault tolerance.
Public v Private Subnets
Public subnets have a route to the internet. Private subnets do not.
Both will have separate route tables: one routes to the internet, one does not.
In relation to your VPC and Availability Zones: A VPC spans multiple availability zones. A subnet is designated to only one.
Availability Zones
VPCs come with multiple availability zones. They are physically separated within a region, whereas subnets are logically separated. This allows our applications to have High Availability and Fault Tolerance, two important paradigms in cloud architecture.
Availability Zone definition: distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your app from the failure of a single location.
These are a core benefit of using cloud services in AWS. We want duplicate resources spanning Availability Zones.
You'll have a primary web server and a backup, as well as a primary Redis DB and a failover.
Cloud infrastructure helps in the event of a local disaster: if your home server dies, you need one off-site.
A little more on High Availability and Fault Tolerance:
High Availability means as little downtime as possible. It's what results in someone saying, "My website is always available. I can always access my data in the cloud."
Fault Tolerance is resistance to error. It results in someone saying, "One of my web servers failed, but my backup immediately took over," or, "If something fails, it can repair itself."
Nat Gateway
AWS has a shared responsibility model: there are portions of security that you are responsible for, and portions that AWS is responsible for.
We are responsible for maintaining the OS of our systems. We need to update systems regularly with patches from the internet.
So, the question: how do we download updates to private networks?
NAT Gateways solve this. A NAT Gateway sits within the public subnet and has an EIP (Elastic IP Address).
It has a route to the internet gateway. Once it's set up, we can update the Route Table to include the Nat Gateway.
Destination: 0.0.0.0/0
Target: nat-id
A NAT gateway does not accept inbound traffic initiated from the internet. It only takes outbound requests from the subnet and receives the responses to those requests.
You don't have to manage the config for this. You will, however, need to add a route to the NAT gateway in each of your private subnets' route tables.
My Reading Year, 2022
I made a conscious effort to actually read less this year. Kind of a weird way to start my first "Books I've read this year" sort of blog post, but it's the truth!
I can personally get to a point where reading is a vice. I end up reading more than making things. So I tried to take a break this year.
But it didn't work!! I still read a few good books this year. Many of them are more in line with actual questions I had, specific areas I wanted to grow in, and the like. So this list is a pretty good representation of where my head has been this year.
Note: I could do the blog-thing where I provide Amazon affiliate links. But I'm not all that interested in getting paid cents if you purchase the book. I'd just rather you let me know if you've read a book. We can have a meeting of the minds on it! 🧠
Software / Career

The Pragmatic Programmer by Andy Hunt and Dave Thomas
Timeless principles for developing software. Such a wide range of topics relating to the job are covered, it feels like a must read for anyone new to the field! (👋) How to prototype, how to maintain software, how to manage projects, how to communicate with non-technical collaborators. It's all here! I even kept thorough notes throughout.

Pragmatic Thinking and Learning by Andy Hunt
Could have easily been titled: How to Learn Anything. A very thorough guide on utilizing the whole brain to gain mastery in a new thought-driven domain. Excellent read, plenty of great exercises for really connecting ideas.

The Passionate Programmer
Career advice for software engineering from a former full time sax player gone programmer. I am very squarely the target demographic for this book. It was very reassuring to hear that everything that applied in music similarly applies in this field.

The Personal MBA by Josh Kaufman
When I was a teacher, I tore through tons of business books. This one might be my favorite. Really, it's part nuts-and-bolts of business, and part addressing the mindset and personal psychology in taking on such a full bodied endeavor. It's also a great springboard into his reading list of 100 other great books for deeper diving.

Ask Iwata by Satoru Iwata
I've written about my favorite nuggets from this book already: serving those in front of you and Iwata's insight on working with creative people. It's not a full blown biography, but pieces of interviews Iwata has given that are strung together to tell his story in broad strokes. It turned out to be a surprisingly insightful read on leadership, creativity, and management. And Iwata's story is simply legendary.
Non-Fiction

Show Your Work by Austin Kleon
"How would Brian Eno write a Content Marketing book", as the author puts it. I'm a big fan of Kleon's books and blog. I don't think everyone doing creative work needs to go to the extreme of "Sharing something everyday" and becoming internet famous. But he writes about reframing "marketing" as "community building", being part of a scene over screaming into the void, and that alone is worth the cost of entry on this quick read.

4,000 Weeks by Oliver Burkeman
A playful antidote to self-help that somehow still fits in the genre. A pretty humbling read about being satisfied with doing less and taming infinite ambitions. More on the philosophy of deciding what's worth doing when you know you have limited time, instead of trying to cram everything in. I may not have mastered the material (I still dabble), but the message helped quell the inner voice that's admittedly frequently on The Search For Glory.

The Principle of 18 by Eyal Danon
A life changer, honestly. The gist is that there are 5 phases of life, each spanning 18 years: Dreamer, Explorer, Builder, Mentor, and Giver. Each builds on the previous, and each has different major motivations. Most relevant for me was reading about Eyal's proposed difference between the ages of 18-36 and 36-54. Before 36, it's crucial to be fully exploring and experimenting professionally so that you can execute without doubt and distraction in the following phase of life. A great balance, embracing the current trend towards minimalism, essentialism, hyper focus, etc., while also allowing time and space to actually breathe and discover what's uniquely interesting.

The Time Paradox by Philip Zimbardo
Interesting lens on how the way we perceive time shapes us. Future-focused folks are the sort that develop lists, set goals, and achieve them. Present-focused people, alternatively, are "in the moment", enjoy richness, and are generally more playful. Past-positive people are strongly tradition focused, warm, and maintain strong relationships. A big generalization; there are many more interesting insights through the book. The authors conclude by recommending a healthy mix of the different perspectives for a full and rich life back then, now, and in the future.

The 12 Stages of Healing by Donny Epstein
If you know, you know. Nothing compares to Network Spinal. Donny's book is a tremendous introduction to the philosophy as well as a field guide for navigating the different rhythms of life.
Music & Art

The Listening Book by W.A. Mathieu
Everyone should read this! Even non musicians. The book takes the pure meditative quality of listening to and reveling in sound from the start and further combs towards practicing music. Absolutely beautiful. So many wonderful insights on our relation to sound and being a creative musician in the world.

Big Magic by Elizabeth Gilbert
A re-read for me, one of my favorite books on living creatively. The secret is bouncing between serious, regular dedication to what you care about doing, and also not taking it that seriously, making the work playful as you do it. After spending so much time with creative work being purely a topic of career, this really helped with opening it up as a calling.

Gesture Drawing for Animation by Walt Stanchfield
I've picked up drawing this year, and this was my first book on it. Walt Stanchfield was a Disney animator and teacher to other Disney artists. Plenty of the techniques are still beyond me, but it's fun all the same. Walt writes with such a fire and emphasis on expression over accuracy. Not to mention his life story - an animator, tennis player, piano player, musician, poet, a real renaissance man! I also wrote a bit about his perspective on performing without an audience, a very new and real sensation for me.

The Jazz Piano Book by Mark Levine
The first Jazz book I've picked up that actually takes you from zero to improvising. Too many other books I've read assume some sort of prior knowledge or experience. Needless to say, I haven't finished it yet, but what I've gone through has already gotten me on the path more than any other method.

Hal Leonard Guitar Method
I've been learning guitar for a couple of years. I've largely done it the self taught way, hacking through chords from Radiohead and Coldplay songs, and trying to pick things up by ear. It's helped, but man, nothing beats a good ol' fashioned method book! This one focuses pretty heavily on lead guitar. Lots of spirituals and traditional tunes. I may never get called to play "Simple Gifts" for a gig, but playing these tuneful lines has helped my melodic playing and helped me really learn the notes on the guitar.

Remixing the Classroom by Randall Everett Allsup
An argument for how classroom music favors teacher-led instruction and skill development (good things) over nurturing creativity and really fostering a lifelong interest in engaging with music (not so good). I love band and how it's taught today, and at the same time I agree with the author that there's room for more genuine play in those spaces. Ends with a bit of pessimism, but it was interesting all the same.
Fiction

Laserwriter II by Tamara Shopsin
Quirky characters, old computer hardware, and moments of surrealism. I liked this book so much that I wrote an album inspired by it!

The Light Fantastic by Terry Pratchett
I'm starting a campaign to read every Discworld book. I've hopped around, and I've finally settled on reading them in order.
I read up to the 6th book this year, Wyrd Sisters. But The Light Fantastic was my favorite. Still wildly funny, but there's a more serious tone at the start that quickly reshapes even in the next book onward. If these books were illustrations, later books are fully colored in more of a cartoony style, and this one was done in a darker, more energetic ink style.
The Pragmatic Programmer by Andy Hunt and Dave Thomas
I kept thorough notes while reading The Pragmatic Programmer. This isn't a review so much as a public sharing of those notes! To serve as a reference for present you and future me.
A Pragmatic Philosophy
Software Entropy
Entropy = level of disorder in a system. The universe works towards maximum entropy.
Broken Windows are the first sign of entropy. When one thing is out of place and not fixed, the rest of the neighborhood goes.
When adding code, do no harm.
Technical debt = rot. Same topic.
Stone Soup and Boiled Frogs
Ask for forgiveness, not permission. Be a catalyst for change.
Show success before asking for help.
Remember the Big Picture.
Maintain awareness around you. A la Navy SEALS.
Good-Enough Software
The scope and quality of your software should be a part of the discussion when planning for it. With clients, talk about tradeoffs. Don't aim for perfection every time. Know when to ship good-enough software. Again, discuss this with the client. It's not all up to you.
Example: SSR and React Portal aren't playing nice. Do the research to discuss solutions. Leave the decision to client for whether or not this should stop us from shipping the code.
Your Knowledge Portfolio
Investing in your knowledge and experience is your most valuable asset. Stagnating will mean the industry will pass you by.
Serious investors:
- Invest regularly
- Diversify for long-term success
- Balance conservative and high-risk/high-reward investments
- Aim to buy low and sell high (emerging tech)
- Review and re-evaluate portfolios regularly
Suggested Goals:
- Learn one new language every year (this year — Python)
- Read a technical book each month
- Participate in user groups
- Experiment with different environments (atm - shell and markdown)
- Stay current (Syntax)
It doesn't matter if you use this tech on a project or not - the engagement with new ideas and ways of doing things will change how you program.
Think critically. Be mindful of whether or not something is valuable to place in the knowledge portfolio. Consider:
- The 5 why's
- Who benefits?
- What's the context?
- When or where would this work?
- Why is this a problem?
Go far: If you are in implementation, find a book on design.
A Pragmatic Approach
The Essence of Good Design
ETC — Make everything Easy To Change. We can't predict the needs of the future, so maintain flexibility in design now. That means modularity, decoupling, and single sources of truth.
DRY — The Evils of Duplication
DRY: Don't Repeat Yourself. This is more nuanced than "Don't Copy/Paste."
Maintenance is not done after a project is completed, it is a continual part of the process. You are a gardener, continue to garden and maintain.
DRY is maintaining the code so that every piece of knowledge has a single, unambiguous, authoritative representation within the system.
Example: Regions stored in the DB.
GraphQL is a brilliant implementation of DRY - It's self documenting and APIs are automatically generated.
def validate_age(val):
    validate_type(val)
    validate_min_integer(val)

def validate_quantity(val):
    validate_type(val)
    validate_min_integer(val)
This does not violate the DRY principle because these are separate pieces of knowledge. They use the same code (think of CSS copying), but they don't need to share the same function. One validates age, one validates quantity. We keep it ETC by keeping these procedures separate, even if they use the same code.
Documentation is often duplication. Write readable code, and you won't have to worry about documenting.
DRY in Data can often be mitigated through calculation.
You don't need to store the averageRent, just the rent prices. You can break this rule, so long as you keep the calculation close to the module. Make it so that when a value changes, calculations are run to update it.
A general rule for classes and modular coding is to make any outside endpoint an accessor or setter function, as opposed to exposing access to the metal. By doing this, you make it easier to add adjustments to those methods: setting a value can later trigger other internal methods, and getter methods let you obfuscate whether the value is calculated or directly accessed; it shouldn't matter either way.
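A quick JavaScript sketch of the idea (the Building class here is hypothetical):
// Expose a calculated value through a getter rather than storing averageRent alongside the data.
class Building {
  constructor(rents = []) {
    this.rents = rents;
  }

  addRent(amount) {
    // A setter-style method can later trigger other internal updates
    this.rents.push(amount);
  }

  get averageRent() {
    // Calculated on access, so it can never fall out of sync with the rents
    if (this.rents.length === 0) return 0;
    return this.rents.reduce((sum, rent) => sum + rent, 0) / this.rents.length;
  }
}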
Inter-developer Duplication
Keeping clear communication among teams will help keep from code duplication.
Orthogonality
y
^
|
|
+----------> x
Two lines are orthogonal if they can move in their direction without going into the other axis. So an X/Y axis is orthogonal because no movement in their direction requires a change in another axis.
This is an ideal in our code. It's not necessarily achievable to perfection, but getting 80% there is a goal. The authors note that in reality, most real-world requirements will require multiple function changes in the system. In an orthogonal system, though, it's only one module within those functions that changes. That's the scope of it.
A helicopter is a non orthogonal system, requiring regular balancing.
Benefits include a boost in productivity, flexibility, and simplicity.
You also reduce the risk of one change ruining another part of the code.
You know this as component-based design.
Even in design, consider orthogonality. Is your system for user IDs orthogonal if the user ID is their phone number? No!
Be mindful of third party libraries in orthogonal systems. If you need to access objects in a special way with other libraries, it's likely not orthogonal. At the very least, wrap the handler in something that can isolate that logic.
Coding
What to do while coding:
- Keep code decoupled. More later.
- Avoid global data. You can mitigate this by passing context into modules or as parameters in React. So Redux stores app-level data, but you mitigate this by only requesting what you need.
- Avoid similar functions.
Reversibility
There are no final decisions
We can't rely on the same vendors over time. To mitigate this, hide third-party APIs behind your own abstraction layers. Break your code into components, even if you deploy to a single server. This mirrors Wes Bos' advice to, when working with server code, write the function itself, then write a handler that imports that code and runs it.
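A minimal sketch of that kind of abstraction layer (the vendor, package, and function names here are all hypothetical):
// payments.js: hide a third-party SDK behind our own interface,
// so swapping vendors only touches this one file.
const vendorSdk = require('some-payment-vendor'); // hypothetical package

const client = vendorSdk.createClient(process.env.PAYMENT_API_KEY);

module.exports.charge = async function charge({ amountInCents, customerId }) {
  // The rest of the app imports charge(); it never touches the vendor directly.
  return client.charge(amountInCents, { customer: customerId });
};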
Forgo Following Fads
Tracer Bullets
An approach that is not the same as prototyping. The idea of tracer bullets is to find the target while laying down the skeleton for your project.
An example: Getting a "hello, world" app up that utilizes many different systems together.
Tracer bullets don't always hit their target; get accustomed up front to the fact that they most likely won't. Using lightweight code makes it easier to adapt.
Prototyping and Post It Notes
Prototyping, by contrast, is a throwaway. It can include high-level code, or not. It can be post-it notes and still images, or even just drawing on a whiteboard!
You can prototype:
- Architecture
- New functionality
- Structure or contents of external data
- Third party tools or components
- Performance issues
- User interface design
Again, many of these solutions are fine on a whiteboard, or you can code something up that's more involved for testing.
You can forget about:
- Correctness
- Completeness (limited functions)
- Robustness (minimal error checking)
- Style (code style and documentation)
Communicate that this code is meant to be thrown away. You may be better off with tracer bullets if your management is likely to want to deploy this.
Domain Languages
Internal Language
This is using the programming language itself as the primary means of communication. React and Jest are good examples of this.
The strength here is that you have a lot of flexibility with the language. You can use the language to create several tests automatically, for example.
External Language
This is using a meta-language, requiring a parser to implement. JSON, YAML, and CSV are good examples: they contain information and data, but need parsing to turn into action. The most extreme example is an application that uses its own custom language (GROQ is an example of this). If there is a client using your product, reach for off-the-shelf external language solutions (JSON, YAML, CSV for client products).
Mix of both
Using methods and functions is a good in-between. Jest uses functions (test, expect, describe) that have their own language and "syntax", but are, at the end of the day, functions. This is ideal in most cases if programmers are using your solution.
test('two plus two', () => {
const value = 2 + 2;
expect(value).toBeGreaterThan(3);
expect(value).toBeGreaterThanOrEqual(3.5);
expect(value).toBeLessThan(5);
expect(value).toBeLessThanOrEqual(4.5);
// toBe and toEqual are equivalent for numbers
expect(value).toBe(4);
expect(value).toEqual(4);
});
Chris' Notes!
An example of this is ACNM. You're using React to write code for yourself. You're using Sanity to generate JSON objects that are then parsed and controlled by your application.
Estimating
You can't truly estimate a specific project until you are iterating on it, if it's large enough.
Consider the time range of the project, and use appropriate units to quote the estimate in (330 days is specific; 6 months is vague).
Breaking down a project can help you give a ballpark answer to how long something will take. It will also help you say "If you want to do Y instead, we could cut time in half"
Keeping track of your estimates is good — it will help teach your gut and intuition how to give better estimates as a lead.
PERT (Program Evaluation Review Technique) is a system using optimistic, most likely, and pessimistic estimates. A good way to start, allowing for a range with specific scenarios, versus just a large ballpark guess with padding.
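As a hypothetical illustration using the classic PERT weighting of (optimistic + 4 × most likely + pessimistic) / 6: with estimates of 2, 4, and 12 weeks, the expected duration comes out to (2 + 16 + 12) / 6 = 5 weeks.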
The only way to refine an estimate is to iterate. How long will this take? How long is a string? There are so many factors at play that are not the same - team productivity, features, unforeseen issues....
The schedule will iterate with the project. You won't get a clear answer until you are getting closer. Avoid hard dates off into the future.
Always say "I'll get back to you." Let things take how long they take.
This is for you too! Allow things to take as long as they take, don't feel rushed or pressured to produce. They take as long as they take.
The Basic Tools
At this point, the tools become conduits from the maker's brain to the finished product.
Start with a basic set of generally applicable tools. Let need drive your acquisitions.
Many new programmers make the mistake of adopting a single power tool, such as... an IDE.
The Power of Plain Text
The benefits:
- Insurance against obsolescence
- Leverage existing tools
- Easier testing
[There's a] difference between human readable and human understandable.
Easier Testing: If you use plain text to create synthetic data to drive system tests, then it is a simple matter to add, update, or modify the test data without having to create any special tools to do so. (Chris here – AKA, no mocking!)
Version Control
Invaluable tool. Serves as a time machine, collaborative tool, safe test space for concurrent development, and a back up of the project. (and your most important files!!)
Text Manipulation
(This book was done in plain text, and manipulation is done in a number of ways:)
- Building the book
- Code inclusion and highlighting
- Website updates
- Including equations
- Index generator
Engineering Daybooks
We use them to take notes in meetings, to jot down what we're working on.... leave reminders where we put things, etc...
It acts as a kind of rubber duck... when you stop to write something down, your brain may switch gears, almost as if talking to someone...you may realize that what you'd just done is just plain wrong.
Pragmatic Paranoia
You can't trust the data out there or even your own application. You have to continually write safeguards for your code. Consider Python: when writing a crawler, you have to assume you'll get bad information, or that changes will occur. Assume the data you are trying to grab is very brittle.
True in React as well. Assume errors.
Design by Contract
In the human world, contracts help add predictability to our interactions. In the computer world, this is true too.
A contract has a precondition and a postcondition... and then there are Class Invariants.
Precondition: handled by the caller, ensuring that good data and conditions are being passed to the routine.
The alternative? Bugs and errors. By setting up preconditions, you allow a safe postcondition.
Example:
if availability_regex:
    unit_dict['date_available'] = standardize_date(availability_regex[0], output='str', default=True)
Here we're only calling standardize_date if we have an availability_regex. Another Python example:
if chunk.getAttribute('name'):
    name = chunk['name']

# Condensed into
name = chunk.getAttribute('name')
if not name:
    raise AptError("No Name found")
The authors in Dead Programs Tell No Lies actually say to crash when necessary. Get this straight: some of this advice is conflicting and situational. Sometimes you'll want to guard code from running, as above. Sometimes you'll want to raise exceptions.
This is actually why people like TypeScript. There's an initial headache of getting everything set up. BUT once things are up and running, you can rest assured that your code will work solidly. Communication will be clear; it incorporates documentation in that way.
Who's responsible?
Who is responsible for checking the precondition - the caller or the routine being called?
Here's an example in React. The routine is:
renderGraph = () => {
  const { data, color, options, responsiveOptions, animationStyle, showPoints } = this.props;
  let update = false;
  if (this.graphElement.current && Array.isArray(data?.series)) {
    // Render the graph
  }
}
and here is the caller
componentDidMount() {
this.renderGraph();
}
Here the routine is responsible for validating the inputs. The issue here is that it will be called, but then there's no guarantee that it's doing what it set out to do. The contract is broken silently.
Perhaps this is just more acceptable in asynchronous code? We are accepting that "We may not have all the information we need on first call. So let's wait until the next call."
The issue is in clarity. I see it as I code: "Oh, it's called on mount, but it's called on updates too, so there's no telling if it's actually doing what it needs to do."
But again - we are dealing with heavily event driven programming, so the rules may not apply. For now, file this under "Good to know for Python."
Assertions: You can partially emulate these checks with an assertive language such as TypeScript. However, it won't cover all of your bases. Consider DBC more of a design philosophy than a need for tooling.
DBC and Crashing Early
Crashing early, although painful, is a good thing. When you crash early, you can get to the root of the problem quicker.
The authors answered the thought I had: it's actually not desirable in this philosophy for sqrt to return NaN, because it may only be ages later, several functions on, that you realize the issue was with what you provided to sqrt.
In conclusion - DBC is a proactive way of writing code so that you can find problems earlier. This can be implemented with test and documentation, or consider it a personal design philosophy.
The authors even make a case that DBC is different from, and preferable to, TDD as it's more efficient.
Possible examples
Some libraries exist to use this in JS. Here's a babel plugin with pre and post conditions:
function withdraw (fromAccount, amount) {
pre: {
typeof amount === 'number';
amount > 0;
fromAccount.balance - amount > -fromAccount.overdraftLimit;
}
post: {
fromAccount.balance - amount > -fromAccount.overdraftLimit;
}
fromAccount.balance -= amount;
}
and with Invariants:
function withdraw (fromAccount, amount) {
pre: {
typeof amount === 'number';
amount > 0;
}
invariant: {
fromAccount.balance - amount > -fromAccount.overdraftLimit;
}
fromAccount.balance -= amount;
}
The current approach in your JS writing is to handle assertions manually:
function withdraw (fromAccount, amount) {
if(!fromAccount || !amount) return null;
. . .
}
but this is only the precondition. Not to mention that this is part of the routine handling the issue.
Semantic invariants
These are a philosophical contract. A more broad principle that guides development. Example: Credit card transactions: "Err in favor of the consumer."
Dynamic contracts and agents
"I can't provide this, but if you give me this, then I might provide something else." High level stuff. Contracts negotiated by our programs. If you have xyz, I can return abc. Very interesting. Think of how GraphQL dynamically creates types. When it can dynamically look for what it needs out of given inputs, then it can solve negotiation issues.
Dead Programs Tell No Lies
Here we go!!
In some environments, it may be inappropriate simply to exit a running program. You may have claimed resources that need to be released, error logs to handle, open transactions to clean up, or other processes still to interact with.
AND YET the basic principle stays the same: terminate the routine when an error occurs to prevent it from carrying on in an invalid state.
Example in Python:
def collect_and_update(region, address, update=True):
    db = Db().db
    building = db.buildings.find_one(
        {'region': region, 'address': address},
        projection={'region': 1, 'name': 1, 'address': 1, 'state': 1, 'city': 1, 'collector': 1},
    )
    if not building:
        raise AptError('Building not found: {}, {}'.format(address, region))
    if not building.get('collector', {}).get('url'):
        raise AptError('{} does not have Collector url'.format(address))
    if not building.get('collector', {}).get('collectorType'):
        raise AptError('{} does not have Collector type'.format(address))
Here, the raise keyword stops the program.
Example in React:
const data = useMemo(() => {
  if (averagePriceAggregate) {
    const dataRes = { series: [], labels: [] };
    ...
  }
}, [averagePriceAggregate]);
No error is raised, but the code is encapsulated by an if statement to ensure it has the data it needs and will not run the script if it doesn't.
Who's Responsible for the precondition? Well, it actually depends on your environment.
Assertive Programming
Assert against the impossible. If you think it is impossible... It's probably possible. Validate often.
This is not to replace real error handling. If there is an issue, log and handle the error. Use assertions to pass on to the error logger. Terminate if necessary.
When asserting, do not create side effects. No (array.pop() == null) checks
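A quick sketch of the idea in JavaScript (the assert helper is hypothetical):
function assert(condition, message) {
  if (!condition) throw new Error(`Assertion failed: ${message}`);
}

const queue = [];
// Bad: the check itself mutates the data, e.g. assert(queue.pop() == null, ...)
// Good: inspect without side effects
assert(queue.length === 0, 'queue should start empty');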
How to Balance Resources
Finish what you start - close files. Careful of coupling.
Act Locally: Keep scope close. Encapsulate. Smaller scope = better. Less coupling.
When deallocating resources, do so in the opposite order of allocation.
When allocating the same set of resources in different places, always allocate in the same order
Be mindful of balancing long term. Log files are an often ignored memory hog over time.
Object oriented languages mirror this - there's a constructor and then a destructor (though in garbage-collected languages you don't normally worry about the destructor).
In your case, event listeners - you want to add, then remove.
With exceptions, you can balance this neatly with a try...catch...finally block, or with context managers.
In Python, the with...as statement allows you to open a file, and it then gets closed automatically after leaving the scope.
In JS, you have try, catch, finally. Though, be sure to allocate the resource before the try/catch statement:
try {
  allocateResource(); // Goes wrong - the resource is never opened
} catch {
  // handle error
} finally {
  closeResource(); // oops, it never got fully opened!
}
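A safer shape - allocateResource, useResource, and closeResource are stand-in names here, not real APIs - is to claim the resource before entering the try:
const resource = allocateResource(); // If this throws, there's nothing to clean up yet
try {
  useResource(resource);
} catch {
  // handle error
} finally {
  closeResource(resource); // By this point the resource is guaranteed to exist
}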
Wrapper functions are helpful for managing and logging your resources. More advanced topic, but this can be a way to go about it in other languages.
Don't Outrun Your Headlights
In small and big ways, don't outrun your headlights. Avoid "Fortune Telling." Keep the feedback loop tight. Hit save after a few lines. Pass a test when you add code. Plan work a few hours or days ahead at most.
Notice that headlights also only point in one direction. You may be thinking about the UI while you code, and then need to take a moment to see how it balances out against the API or another resource.
Black Swans are unpredictable, and yet are guaranteed. No one talks about Motif or OpenLook anymore, because the browser-centric web quickly dominated the landscape.
Not to mention the Federal Reserve's recent interest rate hikes.
Oh hey! You are a REAL DEAL programmer as you create REAL UIs with the web!
Bend or Break
Decoupling
Train Wrecks
Be careful about how much knowledge one part of the code is expected to have about the other part of the code. Ideally, it's only a few levels deep.
For example, this...
customer
  .orders()
  .find(order_id)
  .getTotals()
  .applyDiscount()
should more ideally be
customer
  .findOrder(order_id)
  .applyDiscount()
It doesn't necessarily need to collapse all the way down to
customer.applyDiscountToOrder(order_id)
because some shared understanding is OK. It's assumed that orders can be adjusted directly after being accessed from the customer.
The Law (rule of thumb) of Demeter simplified: Don't chain method calls.
Again, this is not a law, but a rule of thumb, as the above example demonstrates. Not chaining helps with decoupling.
Language-level APIs are the exception. It's perfectly fine to chain:
orders
  .filter(filterFunc)
  .map(mapFunc)
  .slice(0, 5)
because you won't expect that to change anytime soon. It's about mitigating change.
Configuration
Use external configuration for your app (.env files). It's secure and keeps your app flexible. You can have different configs for different environments and deploys.
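A minimal sketch of the .env approach in Node, assuming the dotenv package (the variable names here are made up for illustration):
// .env
// PORT=3000
// API_KEY=abc123

// config.js
require('dotenv').config(); // Copies the values from .env onto process.env

module.exports = {
  port: process.env.PORT || 3000,
  apiKey: process.env.API_KEY,
};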
You can store configuration behind an API and a DB for the most flexible use. The DB solution is best if it will be changed by the customer.
Configuration-as-a-service: keeping it behind an API, again, keeps it flexible. An app shouldn't need to stop and re-run if something here changes (a different API key, a different port, changed credentials). API-ify this aspect for maximum flexibility.
While You are Coding
Refactoring
It is natural for software to change. Software is not a building. It is akin to gardening, meant to be flexible and organic and needing regular nurturing.
Martin Fowler - An early writer on Refactoring
Definition: Refactoring is intentional and is a process that does not change the external behavior. No new features while refactoring!
When to Refactor
Often and in small doses. Best done when you see a pain point.
Also, right upon getting a feature to work. How can this be made more clear?
You shouldn't need a week to refactor.
Good tests are integral to refactoring. You are alerted immediately when you make an unintentional change thanks to tests.
Before the Project
The Requirements Pit
No one knows exactly what they want
In the early days, people only automated when they knew exactly what they wanted. This is not the case today. Software needs greater flexibility.
When given a requirement, your gut instinct should be to ask more clarifying questions. If you don't have any, build and ask "is this what you mean?"
Deliver facts on the situation and let the client make the decision.
Requirements are learned in a feedback loop
Consulting - ask why 5 times, and you'll get to the root. Yes, be annoying, it's ok.
Requirements vs policy: Requirements are a hard and fast thing (must run under 500ms). Policy, however, is often configurable. For example: color scheme, text, fonts, authorizations. These are configurable, and are therefore policy.
Requirements may shift when the user gets their hands on it. They may prefer different workflows. This is why short iterations work best.
A Better Way
Use index cards to gather requirements. Use a kanban board to show progress. Share the board with clients so they can see the effect of a "wafer thin mint" and they can help decide what to move along. Get them involved in the process - it's all feedback loops.
Maintain a glossary to align communication.
Excluding Internal Traffic in Analytics
It's not as clean as UA, sadly.
With Universal Analytics, Google's own Opt-Out plugin worked nicely. Unfortunately, it doesn't seem to be configured to work well with GA4.
Julius Fedorovicius has a fantastic article on what other options are available.
Google recommends filtering by IP address, but that's really not feasible with a company larger than 5 people!
The article walks through a great workaround, exposing Google's traffic_type=internal parameter that it sets on events when there is an IP match.
The two options from there are to set this with either cookies or JavaScript. Both are imperfect in their own way, but all of these methods together end up being a useable solution.
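As a rough sketch of the JavaScript route - assuming gtag is already loaded on the page, and with isInternalUser standing in for however you detect your own team (see the article for the full details):
// Flag all subsequent GA4 events on this page as internal traffic
if (isInternalUser()) {
  gtag('set', { traffic_type: 'internal' });
}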
Update: An alternate approach is to set the internal traffic from a custom event. If tag manager is already being used, it's likely there are custom events already set up for when an admin logs in. So you can trigger on admin login to set the internal traffic.
I can't recommend Julius Fedorovicius' article and site enough for help with all the different growing pains from UA to GA4.
Here's hoping the ol' opt-out plugin gets an update sometime!
Debouncing in React (& JS Functions as Objects)
Debouncing takes a bit of extra consideration in React. I had a few twists and turns this week working with debounced functions, so let's unpack how to handle them properly!
Debouncing Function in Vanilla JS
Lodash has a handy debounce method. Though, we could also just as simply write our own:
const debounce = (func, timeout) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => { func(...args); }, timeout);
  };
};
In essence, we want to call a function only after a given cooldown period determined by timeout.
Lodash comes with some nice methods for canceling and flushing your calls. They also handle edge cases very nicely, so I would recommend their method over writing your own.
const wave = () => console.log('👋');
const waveButChill = debounce(wave, 1000);
window.addEventListener('click', waveButChill);
// CLICK 50 TIMES IN ONE SECOND
👋
With the above code, if I TURBO CLICKED 50 times per second, only one click event would fire after the 1 second cooldown period.
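For reference, those canceling and flushing extras mentioned above look like this - cancel and flush are methods Lodash attaches to the debounced function:
import debounce from 'lodash.debounce';

const waveButChill = debounce(() => console.log('👋'), 1000);

waveButChill();
waveButChill.cancel(); // Drops the pending call entirely

waveButChill();
waveButChill.flush(); // Invokes the pending call immediately instead of waiting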
React
Let's set the stage. Say we have an input with internal state and we want to send an API call after we stop typing. Here's what we'll start with:
import React, { useEffect, useState } from 'react';
import debounce from 'lodash.debounce';

const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    expensiveDataQuery(value);
  }, [value]);

  const expensiveDataQuery = (query) => {
    // get data
  };

  const handleChange = (e) => {
    setValue(e.currentTarget.value);
  };

  return (
    <input value={value} onChange={handleChange} />
  );
};

export default Input;
Instead of fetching on submit, we're set to listen to each keystroke and send a new query each time. Even with a quick API call, that's not very efficient!
Naive Approach
The naive approach would be to create our debounce as we did above, within the component, like so:
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
  }, [value]);

  const fetchButChill = debounce(expensiveDataQuery, 1000);

  . . .
}
What you'll notice, though, is that a query is still sent for each keystroke.
The reason is that a new debounced function is created on each component re-render. The previous timer is never cleared; instead, a new timeout is set up with each state update.
useCallback
You have a couple of options to mitigate this: useCallback, useRef, and useMemo. All of these are ways of keeping a reference between component re-renders.
I'm partial to useMemo, though the React docs state that useCallback(fn, deps) is essentially the same as writing useMemo(() => fn, deps), so we'll go for the slightly cleaner approach!
Let's swap out our fetchButChill with useCallback:
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
  }, [value]);

  const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);

  . . .
};
Just like useMemo, we're passing in an empty array to useCallback to let it know that this should only be memoized on component mount.
Clearing after Unmount
An important edge case to consider is what happens if our debounce interval continues after the component has unmounted. To keep our app clean, we'll want a way to cancel the call!
This is why lodash is handy here. Our debounced function comes with methods attached to it!
WHAAAAAAT
A fun fact about JavaScript is that functions are objects under the hood, so you can store methods on functions. That's exactly what Lodash has done, and it's why we can do this:
fetchButChill(value);
fetchButChill.cancel();

Calling fetchButChill.cancel() will do just that: it cancels the pending debounced call before it fires.
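To see the function-as-object trick in isolation - plain JavaScript, nothing Lodash-specific:
// Any function is also an object, so properties can be attached to it
const greet = () => console.log('hello');
greet.cancel = () => console.log('never mind!');

greet(); // hello
greet.cancel(); // never mind!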
Let's finish this up by adding this within a useEffect!
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
    return () => fetchButChill.cancel();
  }, [value]);

  const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);

  . . .
};
Migrating Tag Manager to Google Analytics 4
Code Set Up
If you're using Google Tag Manager, you are already set up in the code to be funneling data to GA4. Alternatively, you can walk through the GA4 Setup Assistant and get a Google Site Tag. It may look something like this:
<script async src="https://www.googletagmanager.com/gtag/js?id=G-24HREK6MCT"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  ...
  gtag('config', 'G-24HREK6MCT');
</script>
Two things are happening - we're instantiating the google tag manager script, and we're creating a dataLayer to access any analytics information.
The dataLayer is good to note because we actually have access to it at any time in our own code. We could push custom analytics events simply by adding an event object to the dataLayer array, such as window.dataLayer.push({ event: 'generate_lead' }).
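For example, with an extra made-up parameter attached that tags can read as a Data Layer Variable:
// Fire a custom event that Tag Manager can use as a trigger
window.dataLayer.push({
  event: 'generate_lead',
  budget: '10k-50k', // hypothetical extra data for illustration
});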
Tag Manager
If you're already using Tag Manager, you'll want to 1. Add a new config for GA4 and 2. update any custom events, converting them to GA4 configured events.
It's advised to keep both GA4 and UA tags running simultaneously for at least a year to allow enough time for a smooth migration. Fortunately for us, it's easy to copy custom event tags and move them to a separate folder within Tag Manager.
Custom Event Considerations
Dimensions & Metrics
GA4 has two means of measuring custom events: as Dimensions or as Metrics. The difference is essentially that a dimension is a string value, while a metric is numeric.
More is available in Google's Docs.
Variables in Custom Events
Just as you had a way of piping variables into Category, Action, Label, and Value fields in UA, you can add them to your custom events in GA4.
GA4 has a bit more flexibility by allowing you to set event parameters. You can have an array of parameters with a name-value pair. So on form submit, you could have a "budget" name and a "{{budget}}" value on an event. As we alluded to above, you can provide this by manually pushing an event through your own site's JavaScript.
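A sketch of what that manual push can look like with gtag (the event name and budget parameter here are hypothetical):
// A GA4 event with a name-value parameter pair
gtag('event', 'form_submit', {
  budget: budgetValue, // hypothetical variable holding the form's budget field
});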
Resources
Analytics Mania has a couple of very thorough articles on migrating to GA4 and testing your custom events in Tag Manager.
Sustaining Creativity
I've been thinking about this a lot. I went from making music in a clearly defined community to a much more amorphous one. When walking a more individualist road after being solely communally based for so long, what's the guiding purpose?
So the question on my mind has really been this: what's the motive behind continuing to work in a creative discipline?
Nothing here is really a prescription. It's mostly me figuring it out as I go. I write a lot of "You"s in this, but really I mean "me, Chris Padilla." If any of this is helpful to you, dear reader, by all means take what works! If you have perspectives on this, drop me a line.
So here we go! Three different categories and motives for making stuff:
Personal Creativity
I like making stuff! Just doing it lights me up. The most fun is when it's a blank canvas and I'm just following my own interest. It's just for me because I'm only taking in what sounds resonate with me, what themes come to mind, and what tools I have to make a thing.
I still share because it's fun to do so! It contributes to the pride of having made something that didn't exist before. A shared memento from the engagement with the spirit of creativity. But, any benefit other people get from it is merely a side effect of the process. It's not the purpose.
An interesting nuance that is starting to settle in as I do this more and more — there is no arrival point here. Creativity is an infinite game with no winners and losers, just by playing you are getting the reward and benefits then and there. This alone is a really juicy benefit to staying creative. But maybe it's not quite enough —
Gifts
Creativity for other people. Coming from a considerate place, a genuine interest in serving the person on the other side of it. Often this feels like a little quest or challenge, because I'm tasked to use the tools and skills I have to help, entertain, or bring beauty to the audience on the other end.
I'm pretty lucky in that I've pretty much always done creative work for others that has also led to getting paid for it. Even my current work in software engineering I consider gifts. Money is part of it, but the empathetic nature of building for a specific group of people makes it feel like a gift.
$$$
Sometimes, ya gotta do what ya gotta do. In some ways, this is what separates professionals from amateurs. Teaching the student that's a bit of extra work, learning a new technology because it's popular in the market, or drawing commissions.
(Again, on a motivation level, I don't have much in my life that falls into this category. I'm very, VERY lucky to be working in a field that is interesting, and I have a pretty direct feeling of that work being of service — that work being a gift. BUT I've been in positions before where some of my work was more for those dollars.)
Actually, Game Director Masahiro Sakurai of Nintendo fame talks about this. A professional does what's tasked in front of them, even if it's not what they'd initially find interesting or fun. Even video game dev has its chores!
This type of work is not inherently sell-out-y. You can still find the joy in the work, and you can still find the purpose behind it. Shifting to a gift mindset here helps. Be wary of doing anything purely for this chunk of the Venn diagram with no overlap.
A classic musician's rule of thumb for taking on a gig: "It has to have at least two of these three things: 1. Pay well 2. Have great music 3. Work with great people."
The Gist: Watch your mindset.
There's a balance between gift giving and creating just for you, I've been finding.
Things we make for our own pure expression and curiosity do not need to be weighed down by the expectation of other people loving it or of it selling wildly well. The gift is in following your own creative curiosity. And that's great!
If you're ONLY making things for yourself, and you're not finding ways to serve other people, then you'll be isolated and not fully fulfilled by what you're doing. Finding ways to give creatively is the natural balance for that.
A side note: Go for things that involve a few people, IRL. Nothing quite beats joining someone's group to make music in person, teaching someone how to do what you do, or making a physical gift for someone special!
Creating a Newsletter Form in React
Twitter is in a spot, so it's time to turn to good ol' RSS feeds and email for keeping up with your favorite artists, developers, and friends!
We built one for our game. This is another case in which building a form is more interesting than you'd expect.
Component Set Up
To get things started, I've already built an API similar to the one outlined here in my Analytics and CORS post.
There are ultimately three states for this simple form: Pre-submitting, success, and failure.
Here's the state that accounts for all of that:
// Newsletter.js
import React, { useState } from 'react';
import styled from 'styled-components';
import { signUpForNewsletter } from '../lib/util';

const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';

const Newsletter = () => {
  const [emailValue, setEmailValue] = useState('');
  const [message, setMessage] = useState(defaultMessage);
  const [emailSuccess, setEmailSuccess] = useState(false);

  . . .
};
We're holding the form value in our emailValue state. message is what is displayed above our input to either prompt the reader to fill out the form or inform them they succeeded. emailSuccess is simply state that will adjust styling for our success message later.
Rendering Our Component
Here is that state in action in our render method:
// Newsletter.js
return (
  <StyledNewsletter onSubmit={handleSubmit}>
    <label
      htmlFor="email"
      style={{ color: emailSuccess ? 'green' : 'inherit' }}
    >
      {message}
    </label>
    <input
      type="email"
      name="email"
      id="email"
      value={emailValue}
      onChange={(e) => setEmailValue(e.currentTarget.value)}
    />
    <button type="submit">Sign Up</button>
  </StyledNewsletter>
);
Setting our input to email will give us some nice validation out of the box. I'm going against the current common practice by using inline styles here for simplicity.
Handling Submit
Let's take a look at what happens on submit:
// Newsletter.js
const handleSubmit = async (e) => {
  e.preventDefault();
  if (emailValue && isValidEmail(emailValue)) {
    const newsletterRes = await signUpForNewsletter(emailValue);
    if (newsletterRes) {
      setEmailValue('');
      setEmailSuccess(true);
      setMessage(successMessage);
    } else {
      window.alert('Oops! Something went wrong!');
    }
  } else {
    window.alert('Please provide a valid email');
  }
};
The HTML form, even when we prevent the default submit action, still checks the email input against its built-in validation method. A great plus! I have a very simple isValidEmail method in place just to double check.
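That helper is just a loose regex test (the same one that appears in the full component at the end of this post):
function isValidEmail(email) {
  // Loosely matches text@text.text - a sanity check, not full RFC validation
  return /\S+@\S+\.\S+/.test(email);
}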
Once we've verified everything looks good with our inputs, on we go to sending our fetch request.
// util.js
export const signUpForNewsletter = (email) => {
  const data = { email };
  if (!email) console.error('No email provided', email);
  return fetch('https://coolsite.app/api/email', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(data),
  })
    .then((response) => response.json())
    .then((data) => {
      console.log('Success:', data);
      return true;
    })
    .catch((error) => {
      console.error('Error:', error);
      return false;
    });
};
I'm including return statements, and a handler based on those return values later with if (newsletterRes) ... in our component. If it's unsuccessful, returning false leads into our very simple window.alert error message. Otherwise, we continue on to updating the state to render a success message!
Wrap Up
That covers all three states! Inputting, error, and success. This, in my mind, is the bare bones of getting an email form set up! Yet, there's already a lot of interesting wiring that goes into it.
From a design standpoint, a lot of next steps can be taken to build on top of this. From here, you can take a look at the API and handle an automated confirmation message, you can include an unsubscribe flow, and you can include a "name" field to personalize the email.
Even on the front end, a much more robust styling for the form can be put in place.
Maybe more follow up in the future. But for now, a nice sketch to get things started!
Here's the full component in action:
// Newsletter.js
import React, { useState } from 'react';
import styled from 'styled-components';
import { signUpForNewsletter } from '../lib/util';

const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';

const Newsletter = () => {
  const [emailValue, setEmailValue] = useState('');
  const [message, setMessage] = useState(defaultMessage);
  const [emailSuccess, setEmailSuccess] = useState(false);

  function isValidEmail(email) {
    return /\S+@\S+\.\S+/.test(email);
  }

  const handleSubmit = async (e) => {
    e.preventDefault();
    if (emailValue && isValidEmail(emailValue)) {
      const newsletterRes = await signUpForNewsletter(emailValue);
      if (newsletterRes) {
        setEmailValue('');
        setEmailSuccess(true);
        setMessage(successMessage);
      } else {
        window.alert('Oops! Something went wrong!');
      }
    } else {
      window.alert('Please provide a valid email');
    }
  };

  return (
    <StyledNewsletter onSubmit={handleSubmit}>
      <label
        htmlFor="email"
        style={{ color: emailSuccess ? 'green' : 'inherit' }}
      >
        {message}
      </label>
      <input
        type="email"
        name="email"
        id="email"
        value={emailValue}
        onChange={(e) => setEmailValue(e.currentTarget.value)}
      />
      <button type="submit">Sign Up</button>
    </StyledNewsletter>
  );
};

export default Newsletter;

const StyledNewsletter = styled.form`
  display: flex;
  flex-direction: column;
  max-width: 400px;
  font-family: inherit;
  font-size: inherit;
  padding: 1rem;
  text-align: center;
  align-items: center;
  margin: 0 auto;

  label {
    margin: 1rem 0;
  }

  #email {
    width: 80%;
    padding: 0.5rem;
    /* border: 1px solid #75ddc6;
    outline: 3px solid #75ddc6; */
    font-family: inherit;
    font-size: inherit;
  }

  button[type='submit'] {
    position: relative;
    border-radius: 15px;
    height: 60px;
    display: flex;
    -webkit-box-align: center;
    align-items: center;
    -webkit-box-pack: center;
    justify-content: center;
    padding: 2rem;
    font-weight: bold;
    font-size: 1.3em;
    margin-top: 1rem;
    background-color: var(--cream);
    color: var(--brown-black);
    border: 3px solid var(--brown-black);
    transition: transform 0.2s ease;
    text-transform: uppercase;
  }

  button:hover {
    color: #34b3a5;
    background-color: var(--cream);
    border: 3px solid #34b3a5;
    cursor: pointer;
  }
`;